Monitoring Elementary and Secondary School Emergency Relief (ESSER) Fund Plans

By Emi Fujita-Conrads | October 1, 2021


The transition to remote learning during the COVID-19 pandemic has impacted students’ academic, social, and emotional outcomes, with students of color disproportionately affected. The U.S. Department of Education provided additional funding through the Elementary and Secondary School Emergency Relief (ESSER) Fund to address the urgent needs of teachers and students.

The Region 17 Comprehensive Center collaborated with the Idaho Department of Education to support the development and implementation of its American Rescue Plan (ARP) ESSER plan. It was clear that monitoring and evaluating ESSER programs would be critical to determining whether they effectively address student needs, so that adjustments can be made in time to ensure a positive impact. A well-designed and well-implemented evaluation can provide relevant, trustworthy data to inform program improvement. The Centers for Disease Control and Prevention’s (CDC) evaluation framework specifies six steps for implementing a successful evaluation plan; each is outlined below.

Step 1: Engage stakeholders

Key to developing a relevant and useful evaluation is the meaningful inclusion of program stakeholders. Program stakeholders are individuals or groups who have a vested interest in the program. These could include the participating students, teachers implementing the intervention, or families and community members. At the outset of the evaluation, brainstorm who might be a relevant program stakeholder and how you might engage them in the evaluation process.

Step 2: Describe the program

A program description is used to understand a program’s underlying rationale and to determine whether its logic makes sense. A logic model is an effective method for summarizing the link between what a program does and what it seeks to accomplish. Specifically, this model should include your program inputs (i.e., financial, human, and material resources), activities, outputs, and outcomes (i.e., changes in participant awareness, knowledge, or skills). The logic-modeling process can shed light on why a particular activity matters for a specific outcome, or reveal inconsistencies in the program’s design.
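To make this structure concrete, here is a minimal sketch of a logic model represented as a simple data structure in Python. The program name, entries, and field values are hypothetical, chosen only to illustrate how inputs, activities, outputs, and outcomes link together; a table or diagram serves the same purpose.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal logic model: inputs flow into activities,
    which produce outputs that drive outcomes."""
    program: str
    inputs: list[str] = field(default_factory=list)      # financial, human, material resources
    activities: list[str] = field(default_factory=list)  # what the program does
    outputs: list[str] = field(default_factory=list)     # direct products of activities
    outcomes: list[str] = field(default_factory=list)    # changes in awareness, knowledge, skills

# Hypothetical ESSER-funded tutoring program, for illustration only
tutoring = LogicModel(
    program="High-dosage math tutoring",
    inputs=["ESSER funds", "trained tutors", "curriculum materials"],
    activities=["Three small-group tutoring sessions per student per week"],
    outputs=["Sessions delivered", "Students served"],
    outcomes=["Improved math proficiency", "Increased student confidence"],
)

# Reviewing the model column by column can surface gaps in program logic:
# an outcome with no activity that plausibly produces it is an inconsistency.
for column, entries in vars(tutoring).items():
    print(f"{column}: {entries}")
```

Whatever format you choose, the goal is the same: make each link in the program’s theory explicit enough to question.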

Step 3: Focus the evaluation design

The logic model can be used to derive evaluation questions. Implementation evaluation questions concern how well programmatic activities are implemented and the extent to which the program is being implemented as designed. During this stage, one may ask:

  • How well is the program being implemented?
  • Is it being implemented with fidelity?
  • How satisfied are participants (and which participants)?

Impact evaluation questions relate to whether the program is producing the intended outcomes. Questions one might ask during this phase are:

  • How well did the program work?
  • To what extent can changes be attributed to the program?
  • Did the program produce intended outcomes in the short-, medium-, and long-term?

Step 4: Gather credible evidence

Data collection should focus on answering the evaluation questions developed in Step 3. Continuous improvement models emphasize collecting data for program refinement, specifying that the evaluator implement short, iterative cycles of data collection so that adjustments to program design and implementation can be made based on ongoing findings.
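The sketch below shows what such a cycle might look like, expressed in Python. The collect_data and analyze functions are hypothetical placeholders standing in for a district’s own instruments (surveys, attendance records, interim assessments) and analysis process.

```python
def collect_data(question: str) -> dict:
    """Hypothetical placeholder: gather the evidence needed for one question."""
    return {"question": question, "evidence": "survey and assessment data"}

def analyze(data: dict) -> str:
    """Hypothetical placeholder: summarize what the evidence says."""
    return f"Finding for '{data['question']}': implementation is on track."

evaluation_questions = [
    "Is the program being implemented with fidelity?",
    "How satisfied are participants?",
]

# Three short cycles: collect, analyze, share, then adjust before the
# next cycle, rather than waiting for one end-of-year evaluation.
for cycle in range(1, 4):
    for question in evaluation_questions:
        finding = analyze(collect_data(question))
        print(f"Cycle {cycle}: {finding}")
    # Adjust program design/implementation here based on the findings,
    # in conversation with stakeholders (see Steps 5 and 6).
```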

Step 5: Justify conclusions

Once data are gathered, you can form judgments about how well a program is being implemented and whether it is achieving its intended outcomes. Make determinations of how well a program is functioning in conversation with your program stakeholders, giving them an opportunity to engage with the findings and discuss what they mean for their program.

Step 6: Ensure use

Evaluation use should be fostered throughout the process. Regularly involve program stakeholders in the evaluation, such as by developing evaluation questions, determining data collection methods, and discussing findings. Beyond the stakeholders involved in the evaluation process, communicate findings to other relevant audiences so they can use them as well. A formal evaluation report is not always necessary: presentations, community dialogues, or short memos can reach a diverse group of stakeholders interested in the evaluation results.

The focus of program evaluation should not be accountability but continuous improvement. As we move to provide the most efficient and effective support possible for students in the wake of the COVID-19 pandemic, knowing whether our programs are achieving their intended purposes is critical to our students’ success.
