Q: What is an outcome evaluation?
A: An evaluation that assesses the extent to which an intervention affects a) its participants (i.e., the degree to which changes occur in their knowledge, skills, attitudes, or behaviors) and b) the environments of the school, the community, or both. Several important design issues must be considered, including how best to measure the results and how best to contrast what happens as a result of the intervention with what happens without the program.
Outcomes are the post-treatment effects of the program. An outcome evaluation assesses the extent to which such effects exist and whether they can be ascribed to the program. The outcome evaluation answers the question of whether the program produced valuable changes.
Q: What are the benefits of an outcome evaluation? Why should we do one?
A: Outcome evaluations provide all stakeholders with concrete information about the extent to which the program has made a difference in student learning and minority group isolation. As a result, stakeholders can decide whether to continue, expand, or limit the program.
Q: What is an MSAP rigorous evaluation?
A: Beginning in the 2004 funding cycle, the federal Magnet Schools Assistance Program encouraged applicants to conduct impact studies of magnet programs. Districts that include a rigorous (experimental or quasi-experimental) evaluation design in their MSAP grant proposals receive an invitational priority.
Q: What if we cannot do a randomized controlled trial because we do not have oversubscription to our program? Does that rule out a “rigorous evaluation”?
A: Even if you cannot do random assignment, you can still do a rigorous evaluation. Quasi-experimental designs, when well executed, are considered rigorous.
Q: Is there a difference between a “comparison group” and a “control group”?
A: A control group exists only in a true experiment. In such an experiment, participants are randomly assigned to the treatment condition, with those who are not assigned serving as “controls.” In a quasi-experimental design, the evaluator identifies a group that is as much like the treatment group as possible (“matches” the group) to serve as a comparison group. Comparison group members are selected because they share important characteristics with the treatment group but experience different programs.
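The matching idea above can be sketched in code. This is a minimal, hypothetical illustration of greedy nearest-neighbor matching on a single observable characteristic (a prior test score); the function name, field names, and data are assumptions for illustration, not part of the MSAP guidance, and real evaluations typically match on several characteristics at once.

```python
# Hypothetical sketch: building a matched comparison group for a
# quasi-experimental design by pairing each treated student with the
# non-participant whose prior test score is closest (greedy matching).
# All names and data below are illustrative.

def match_comparison_group(treatment, candidates, key=lambda s: s["prior_score"]):
    """For each treated student, pick the unused candidate whose
    prior score is closest; each candidate is used at most once."""
    pool = list(candidates)
    matches = []
    for t in treatment:
        best = min(pool, key=lambda c: abs(key(c) - key(t)))
        pool.remove(best)
        matches.append(best)
    return matches

treated = [{"id": "T1", "prior_score": 78}, {"id": "T2", "prior_score": 85}]
others = [{"id": "C1", "prior_score": 60}, {"id": "C2", "prior_score": 79},
          {"id": "C3", "prior_score": 86}]

comparison = match_comparison_group(treated, others)
print([s["id"] for s in comparison])  # → ['C2', 'C3']
```

The design choice to remove each matched candidate from the pool (matching without replacement) keeps the comparison group the same size as the treatment group, at the cost of slightly worse matches for students processed later.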