Extra Resources for MSAP Rigorous Evaluation
Is rigorous evaluation right for you?
Watch this video to learn more about what’s involved in an MSAP rigorous evaluation and how to decide whether it may work for your district.
View Video (6:33)
Conducting an MSAP rigorous evaluation provides a unique opportunity to gauge the impact of your program on student achievement and minority group isolation. The evaluation team should weigh several extra considerations when setting up a rigorous evaluation. The additional resources provided here and for other practices will help you determine whether this type of evaluation is appropriate for your district and, if so, how to design and carry out the most rigorous evaluation possible.
Design the most rigorous evaluation possible
Even if it isn’t possible to use random assignment processes to assign students to the magnet school you are evaluating, you can still do a rigorous evaluation. Talk to your evaluator about various quasi-experimental designs that your district may be able to use. When well executed, these types of evaluations are also considered to be rigorous.
The matching required for a strong quasi-experimental study demands significant resources to gather and analyze the information needed to identify appropriate comparison groups. Make sure your evaluation budget accounts for this additional requirement. Also be mindful of the time and effort annual data collection demands of individuals in the comparison group, who may know little, if anything, about the program you are evaluating.
To best choose a comparison group, use a two-stage matching process. First, select schools that are comparable to the magnet schools. Then, using propensity scoring, select students who most closely match the magnet school students.
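The second stage of this process can be illustrated with a small sketch. The code below is a simplified, hypothetical illustration of propensity-score matching, not a substitute for your evaluator's own procedure: it fits a basic logistic regression to predict magnet enrollment from student covariates, then greedily pairs each magnet student with the unmatched comparison student whose score falls closest, within a caliper. The covariates, caliper value, and synthetic data are all assumptions for demonstration; a real analysis would use your district's student records and a vetted statistical package.

```python
import math
import random

def propensity_scores(X, treated, lr=0.1, epochs=2000):
    """Fit logistic regression P(magnet enrollment | covariates) by gradient
    descent and return each student's predicted probability (propensity score)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, ti in zip(X, treated):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - ti
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for xi in X]

def nearest_neighbor_match(scores, treated, caliper=0.1):
    """Greedily pair each magnet (treated) student with the closest unmatched
    comparison student, discarding pairs whose score gap exceeds the caliper."""
    pool = [i for i, t in enumerate(treated) if t == 0]
    pairs = []
    for i, t in enumerate(treated):
        if t == 1 and pool:
            j = min(pool, key=lambda k: abs(scores[k] - scores[i]))
            if abs(scores[j] - scores[i]) <= caliper:
                pairs.append((i, j))
                pool.remove(j)
    return pairs

# Synthetic example: covariates are a prior test score (z-scored) and an
# attendance rate; magnet enrollment is made to correlate with prior score.
random.seed(1)
X = [[random.gauss(0, 1), random.uniform(0.8, 1.0)] for _ in range(60)]
treated = [1 if x[0] + random.gauss(0, 1) > 0 else 0 for x in X]
scores = propensity_scores(X, treated)
pairs = nearest_neighbor_match(scores, treated)
```

Each resulting pair links one magnet student to one comparison student with a similar estimated probability of enrolling, which is what makes the subsequent outcome comparison credible.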
Use student-level data to increase the statistical power of the evaluation. Make sure the treatment and comparison groups include enough students to detect meaningful differences in outcomes.
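As a rough way to check whether group sizes are adequate, evaluators often compute a minimum detectable effect (MDE). The sketch below uses the standard two-sample normal approximation, MDE = (z_{1-α/2} + z_power) · sqrt(1/n_t + 1/n_c); it is a simplification that ignores clustering of students within schools, which would inflate the required sample sizes, so treat it as a first-pass estimate, not a power analysis.

```python
import math
from statistics import NormalDist

def minimum_detectable_effect(n_treatment, n_comparison, alpha=0.05, power=0.80):
    """Smallest standardized effect (in student-level SD units) that a
    two-group comparison can reliably detect, via a normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_power = z.inv_cdf(power)           # desired statistical power
    return (z_alpha + z_power) * math.sqrt(1 / n_treatment + 1 / n_comparison)

mde = minimum_detectable_effect(100, 100)  # ~0.40 SD with 100 students per group
```

With 100 students per group, only fairly large effects (about 0.4 SD) are detectable; quadrupling both groups roughly halves the MDE, which is why student-level data matter.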
Define a unique treatment
In choosing comparison groups for a rigorous evaluation, avoid schools that have programs similar to your magnets. Non-magnet schools may receive funding or reform mandates that result in similar curricular programs, especially in areas like arts or science.
For evaluations using random assignment, the assumption is that the unique experience of the magnet school is the only major difference between the treatment and control groups. However, it is still important to know what is happening in the control group's schools: if control students have similar experiences, that overlap may explain a lack of observed differences in outcomes.
TOOL Selecting Comparison Schools (.doc 86.5 KB)
Assess whether the program elements of a potential comparison school are different enough from your magnet program treatment to be effective for a quasi-experimental evaluation.
VIGNETTE When a Comparison School Engages in Similar Treatment (.pdf 143.3 KB)
Reflect on common challenges and remedies related to comparison school selection.
Identify outcome measures and instruments
Make sure you are familiar with the additional requirements for ensuring valid data when conducting an experimental or quasi-experimental evaluation.
TOOL Getting Key Items Right When You Measure Program Outcomes (.doc 88.5 KB)
Check if your evaluation uses key data collection approaches that produce valid evidence of program impact for a rigorous evaluation.