Process Evaluation
… a core concept used in Evaluation and Performance Measurement and Atlas108
Concept description
Leslie Pal (reference below) states that process evaluation “monitors an existing program to assess the effort and organizational resources put into it” (p. 303).
Pal writes (pp. 279–286):
“Process or implementation evaluation monitors an existing program to assess the effort put into it (Posavac & Carey, 1980, p. 12). This is not the same as measuring success. As Patton (2002) put it, process evaluation involves looking at ‘how something happens rather than or in addition to examining outputs and outcomes’ (p. 159). Typically, reporting systems are developed to provide agencies with information about target populations, extent of coverage, and delivery mechanisms. Process evaluation can review program guidelines, the organization of field offices, staff training, communications systems, and even staff morale to improve organizational performance. It takes the program for granted and aims at improving the process whereby goals are met. This thrust may sound routine, but is conceptually important and, in practical terms, accounts for a great deal of what passes for program evaluation.
“Process evaluation is clearly linked to implementation and can be thought of as the evaluation of implementation procedures. This perspective helps clarify the importance of process evaluation to program evaluation as a whole, as well as its link to impact analysis. … Process evaluation is a natural complement to impact evaluation, since we need to know whether observed outcomes are the result of the program intervention as it was planned or are due to quirks in the delivery. In other words, program theory and design … may be fine, but the execution is flawed. If we can assure ourselves that execution is as planned, then any failures will be due to program design. This idea, of course, makes it sound as though implementation and impact can be neatly severed, but they cannot. They represent different orientations in evaluative work.
“The full description of program components is the foundation of process analysis, though the concept of program logic is sometimes used to sketch out the causal links for impact evaluation (Framst, 1995). Program components are all the various bits and pieces, technologies and resources, as well as intended targets and delivery modes. Mapping this out is tougher than it sounds and requires extended interviews with both program administrators and clients to see what the components are and how well they are being implemented. The next trick is to determine what good or effective implementation entails. It is not directly linked to desired outcomes, which is the realm of impact evaluation. Rather, it tries to determine what the desired outcome is and then asks what steps or mechanisms the program envisions in delivering the intervention to achieve that outcome. …”
“[T]he general field of process and implementation evaluation has grown dramatically in the last 20 years. Evaluation as a whole has become more important as governments are under pressure to be more results oriented and accountable. Impact evaluation, the analysis of results, is not easy or cheap, and there is a natural inclination to assume that the design is fine and that disappointing outcomes must be due to inadequate effort. At the same time, governments are looking more to consumer or client satisfaction as a key program outcome, and this often has more to do with delivery parameters than with the causal modelling underlying program design.
“Delivery parameters are more commonly referred to as ‘benchmarks’ and typically focus on performance rather than process, though the two are related. Performance measures or benchmarks for concrete services are usually based on measures of workload, efficiency, effectiveness, or productivity. Finally, as more and more programs are delivered by nongovernmental organizations, and as more of their financial and organizational support comes from groups of partners (firms, government, foundations), these partners want to monitor how well things are proceeding. Monitoring tends to be more casual and less systematic than process evaluation, but they are ‘similar kinds of inquiry’ (Weiss, 1998, p. 181). These, and other forces, have conspired in recent years to raise the profile of process and implementation analysis quite significantly.”
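Pal's four benchmark families can be made concrete with a small worked example. The sketch below is illustrative only and is not drawn from Pal: the program, the figures, and the formulas (workload as cases received, efficiency as cost per case processed, effectiveness as the share of targeted outcomes achieved, and productivity as output per staff-hour) are assumptions based on common textbook definitions of these measures.

```python
# Illustrative benchmark calculations for a hypothetical service-delivery
# program. All figures are invented for demonstration; the formulas are
# common textbook definitions of each benchmark family, not Pal's own.

cases_received = 1200      # applications submitted this quarter
cases_processed = 1080     # applications fully processed
total_cost = 540_000.00    # program delivery cost for the quarter ($)
staff_hours = 4500         # total front-line staff hours
target_outcomes = 1080     # processed cases expected to meet the standard
outcomes_achieved = 918    # processed cases that actually met the standard

workload = cases_received                             # demand on the program
efficiency = total_cost / cases_processed             # cost per unit of output
effectiveness = outcomes_achieved / target_outcomes   # share of intended results
productivity = cases_processed / staff_hours          # output per staff-hour

print(f"Workload:      {workload} cases received")
print(f"Efficiency:    ${efficiency:,.2f} per case processed")
print(f"Effectiveness: {effectiveness:.0%} of targeted outcomes achieved")
print(f"Productivity:  {productivity:.2f} cases per staff-hour")
```

With these invented figures, the program handles 1,200 cases at $500.00 per processed case, achieves 85 percent of its targeted outcomes, and processes 0.24 cases per staff-hour; a process evaluation would then ask whether such delivery parameters match what the program design envisioned.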
See also: Categories of Program Evaluation; Efficiency Evaluation; Impact Evaluation.
Atlas topic, subject, and course
The Study of Evaluation and Performance Measurement (core topic) in Evaluation and Performance Measurement and Atlas108 Analytic Methods and Evaluation.
Sources
Leslie Pal (2014), Beyond Policy Analysis – Public Issue Management in Turbulent Times, Fifth Edition, Nelson Education, Toronto. See Beyond Policy Analysis – Book Highlights.
Framst, G. (1995, October/November). Application of program logic model to agricultural technology transfer programs. Canadian Journal of Program Evaluation, 123–132.
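Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.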
Posavac, E. J., & Carey, R. G. (1980). Program evaluation: Methods and case studies. Englewood Cliffs, NJ: Prentice-Hall.
Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Page created by: Ian Clark, last modified 28 October 2017.