When evaluators meet for coffee...
To determine whether the magnet program actually makes a difference, your evaluation design should be as rigorous as possible. You can better isolate the effects of the program by using a control or comparison group to compare outcomes between magnet and non-magnet students. But even if such designs aren't appropriate for your particular context, your evaluation should still assess how well the program is meeting its desired outcomes.
SAMPLE MATERIAL Tools, Tips, and Common Issues in Evaluation Design Choices (.pdf 355.9 KB)
Review the advantages and disadvantages of various designs to narrow the design choices for your project.
TOOL Learning About Experimental and Quasi-Experimental Design (.doc 82 KB)
Watch an interview to understand key issues related to these two types of evaluation designs.
TOOL Decision Tree: Determining Feasibility for Rigorous Evaluation Design (.doc 88 KB)
Use this flowchart to figure out an appropriate evaluation design for your magnet program.
SAMPLE MATERIAL Recruiting Comparison Groups (.pdf 290.6 KB)
Determine the approaches for recruiting and retaining comparison schools and students that are most likely to work in your community.
SAMPLE MATERIAL Process for Selecting Comparison Schools (.pdf 845.8 KB)
Review one district’s process and determine what best applies to your own program.
Extra Resources for MSAP Rigorous Evaluation
Even if it isn’t possible to use random assignment to place students in the magnet school you are evaluating, you can still conduct a rigorous evaluation. Talk to your evaluator about the various quasi-experimental designs your district may be able to use. When well executed, these types of evaluations are also considered rigorous.
The matching required for a strong quasi-experimental study demands significant resources to gather and analyze the data needed to identify appropriate comparison groups. Make sure your evaluation budget accounts for this additional requirement. Also be mindful of the demands on individuals in the comparison group, who may know little, if anything, about the program you are evaluating or about the time and effort involved in annual data collection.
To choose the strongest comparison group, use a two-stage matching process. First, select schools that are comparable to the magnet schools. Then, using propensity scoring, select the students within those schools who most closely match the magnet school students; a sketch of that student-level stage appears below.
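The sketch below shows what the second, student-level stage of that matching might look like in Python. It is only illustrative: the column names (magnet, prior_score, frl, ell, grade) are hypothetical placeholders for whatever indicators and pre-program covariates your district actually has, and your evaluator may prefer different matching rules (calipers, matching without replacement, and so on).

```python
# Minimal sketch of stage-two (student-level) propensity-score matching.
# Assumes a pandas DataFrame `students` with hypothetical columns:
#   'magnet' (1 = magnet student, 0 = comparison-school student)
#   plus pre-program covariates such as 'prior_score', 'frl', 'ell', 'grade'.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

COVARIATES = ["prior_score", "frl", "ell", "grade"]  # example covariate names

def match_students(students: pd.DataFrame) -> pd.DataFrame:
    """Return comparison students matched 1:1 to magnet students on propensity score."""
    X = students[COVARIATES]
    y = students["magnet"]

    # Estimate each student's probability of being in the magnet group.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    students = students.assign(pscore=model.predict_proba(X)[:, 1])

    treated = students[students["magnet"] == 1]
    control = students[students["magnet"] == 0]

    # Nearest-neighbor match on the propensity score (with replacement).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    return control.iloc[idx.ravel()]
```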
Use student-level data to increase the statistical power of the evaluation, and make sure the treatment and comparison groups include enough students to detect meaningful differences.
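One quick way to check this is to compute the smallest effect your sample could reliably detect. The sketch below assumes a simple two-group comparison of student-level means with illustrative numbers (300 students per group, 80% power, alpha = 0.05); it ignores the clustering of students within schools, which reduces effective sample size, so treat it only as a starting point for a conversation with your evaluator.

```python
# Rough power check for a two-group comparison of mean outcomes.
# Group sizes, alpha, and power below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardized effect size (Cohen's d) detectable with
# 80% power and alpha = 0.05, given 300 students per group.
mde = analysis.solve_power(nobs1=300, alpha=0.05, power=0.80, ratio=1.0)
print(f"Minimum detectable effect size: {mde:.2f} standard deviations")
```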