Impact Evaluation

… a core concept used in Evaluation and Performance Measurement and Atlas108

Concept description

Leslie Pal (reference below) defines impact evaluation as “analysis of the actual effect or impact of a program on its intended target, along with unintended consequences” (p. 303).

Pal writes (p. 279):

“A central evaluative question is whether a policy or program has an impact. Evaluating outcomes is critical to determining whether a program is successful or not in terms of its intended effects. Impact evaluation takes the program as the independent or causal variable and tries to isolate its effect from other influences in the environment. This approach assumes that goals are clear, but sometimes they are not. … in addition to trying to understand the goals, the evaluator should also develop a map of the causal theory that underpins the program, along with mechanisms of change. Logic models help do this, since they explicitly connect resources and activities to outputs and outcomes. The arrows or connections between the elements of a logic model are the causal mechanisms. The connections not only help in determining what data to gather and which indicators to highlight, but also provides alternative causal path explanations that might be examined if the evaluation turns up ambiguous results.”
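Pal’s point about logic models can be made concrete as a small data structure. The Python sketch below represents a logic model as a directed graph whose arrows stand for the causal mechanisms connecting resources and activities to outputs and outcomes; the program elements named here (reading coaches, tutoring sessions, reading scores) are purely illustrative assumptions, not examples drawn from Pal.

    # A minimal sketch of a logic model as a directed graph.
    # Node labels are illustrative assumptions, not taken from the source.
    # Each arrow (edge) represents a hypothesized causal mechanism.
    logic_model = {
        "resources: reading coaches": ["activities: weekly tutoring sessions"],
        "activities: weekly tutoring sessions": ["outputs: 500 students tutored"],
        "outputs: 500 students tutored": ["outcomes: improved reading scores"],
    }

    def causal_paths(model, start, end, path=None):
        """Enumerate causal paths from a resource to an outcome.

        Alternative paths suggest alternative explanations to examine
        if the evaluation turns up ambiguous results.
        """
        path = (path or []) + [start]
        if start == end:
            return [path]
        paths = []
        for nxt in model.get(start, []):
            paths.extend(causal_paths(model, nxt, end, path))
        return paths

    for p in causal_paths(logic_model,
                          "resources: reading coaches",
                          "outcomes: improved reading scores"):
        print(" -> ".join(p))

Tracing the paths in this way mirrors Pal’s observation that the connections in a logic model indicate what data to gather and which indicators to highlight.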

Experimental design

“The ideal method to try to empirically isolate cause and effect is the classic experimental design. In this design, people are randomly assigned to one of two groups; measures are taken of target variables before the program is introduced and again afterward. The program is applied to only one group, the experimental group. The second group is the control group. If there is a sufficiently large difference in post-program scores, then the program or intervention is deemed to have caused it. The random assignment of individuals to the two groups controls for alternative explanations or causes, since the odds of being in either group are the same. In aggregate, the groups are identical in every respect except for the policy intervention (Weiss, 1998, p. 215). Experimental designs are frequently used in the educational policy field, where, for example, a new reading program might be tested on two groups of students. Preprogram reading scores for the experimental group and the control group would be gathered, the program administered, and post-program scores compared to see if they are statistically different.

“Despite their statistical superiority as a measure of impact, experimental designs are rarely used in policy evaluation in Canada, though more widely in the United States, especially in the education policy field (Torgerson, Torgerson, & Taylor, 2010). That is because they are costly and time consuming; decisionmakers frequently want quick answers. There are also political and ethical problems with separating people into experimental and control groups. Many public programs deliver benefits to the populace, and from a political perspective, it might be imprudent to deliberately withhold a benefit from some group simply to meet testing requirements. As Rossi, Lipsey, and Freeman (2004) point out, randomized experiments run into major ethical dilemmas, as in the controversy over withholding potentially beneficial AIDS drugs. Finally, some important policy variables cannot be disaggregated to observe differential effects on separate groups. Interest rates, the value of the dollar, and budget deficits are examples of policy variables that apply nationally or not at all. Classic experimental designs are useless in trying to determine their impact.”
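The classic experimental design described in the passage above can be illustrated with a short simulation. The Python sketch below randomly assigns students to experimental and control groups, generates pre- and post-program reading scores with an assumed program effect, and compares post-program scores with a two-sample t-test; the group size, score distributions, and effect size are illustrative assumptions, not data from the sources cited.

    # A minimal simulation of the classic experimental design described by Pal.
    # All parameters (group size, score distributions, treatment effect) are
    # illustrative assumptions, not empirical values from the sources.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    n = 200                                          # students per group
    pre = rng.normal(loc=60, scale=10, size=2 * n)   # pre-program reading scores

    # Random assignment: the odds of being in either group are the same,
    # so in aggregate the groups are comparable before the intervention.
    assignment = rng.permutation(2 * n)
    experimental, control = assignment[:n], assignment[n:]

    # Post-program scores: both groups improve somewhat; the experimental
    # group also receives an assumed program effect of +4 points.
    post = pre + rng.normal(loc=2, scale=5, size=2 * n)
    post[experimental] += 4

    # Compare post-program scores between the groups, as in the classic design.
    t_stat, p_value = stats.ttest_ind(post[experimental], post[control])
    print(f"experimental mean: {post[experimental].mean():.1f}")
    print(f"control mean:      {post[control].mean():.1f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Because assignment is random, a statistically significant difference in post-program scores can be attributed to the program rather than to pre-existing differences between the groups.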

See also: Categories of Program Evaluation; Process Evaluation.

Atlas topic, subject, and course

The Study of Evaluation and Performance Measurement in the Public Sector (core topic) in Evaluation and Performance Measurement and Atlas108 Analytic Methods and Evaluation.

Sources

Leslie Pal (2014), Beyond Policy Analysis – Public Issue Management in Turbulent Times, Fifth Edition, Nelson Education, Toronto. See Beyond Policy Analysis – Book Highlights.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

Torgerson, C. J., Torgerson, D. J., & Taylor, C. A. (2010). Randomized controlled trials and nonrandomized designs. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 144–162). San Francisco, CA: Jossey-Bass.

Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Page created by: Ian Clark, last modified 14 April 2017.

Image: Pinterest, at https://www.pinterest.com/pin/80150068348866898/, accessed 10 April 2017.