Leslie Pal (reference below) describes performance reporting, or managing for results, as a type of process evaluation (p. 285).
Pal writes (p. 286):
“Performance reporting and management depends on the distinction between inputs, outputs, outcomes, and indicators (Schacter, 1999). Inputs are the resources allocated to programs and organizations. Outputs are the activities government agencies undertake, such as the provision of services. Outcomes are the eventual results of those activities in terms of the public good. Indicators are the empirical measures of inputs, outputs, and outcomes. The thrust of performance measurement is to train attention on outcomes – what ultimately matters the most – and link them to a logical model that connects inputs (resources) with activities, outputs, and outcomes. Looked at in this way, performance measurement is about much more than simply measuring things – it entails a management regime that requires a public organization to have a clear idea of what its objectives are and a regular means of reporting on its success in achieving those objectives. Performance reporting is thus different from policy or program evaluation, which typically takes place near the end of a program’s life and is more of a one-time analysis of program impacts. Performance measurement should be viewed as part of a larger management regime, which should try to link results with strategic planning and budgeting and resource allocation.
“It is important to get several key factors right in order to do performance measurement properly and successfully (Holden & Zimmerman, 2009; Performance-Based Management Special Interest Group, 2001).
- Clarity about the program. Since performance measurement is about measuring the success of a program, it is vital to know what that program is about and what its intended objectives are. Determining this is more difficult than it seems, since different people in an organization may have different ideas about what their program is about. “Profile – a concise description of the policy, program or initiative, including a discussion of the background, need, target population, delivery approach, resources, governance structure and planned results” (Treasury Board of Canada, 2010).
- Logic model. … at the heart of any process of performance reporting is a “logic model” that ties inputs to activities, to short-term, intermediate, and final or ultimate outcomes. Part of the challenge of performance measurement is coming up with indicators for these different levels of outcomes, and coming to judgments about the specific contribution of an agency and its activities to eventual outcomes. A logic model is “an illustration of the results chain or how the activities of a policy, program, or initiative are expected to lead to the achievement of the final outcomes” (Treasury Board of Canada, 2010).
- Judgement. The paradox of performance measurement is that while it is driven by a desire for precision and a clear assessment of the contribution of government programs to specific outcomes, the literature acknowledges that there are huge technical problems associated with disentangling the specific effect of those programs from all of the other factors that might contribute to those outcomes. This challenge means that successful performance measurement has to acknowledge that there is always an element of judgment. That judgment can be disciplined and careful, but it still is judgment. It is important to acknowledge the limits of both the indicators one chooses and the evidence for those indicators. This acknowledgment, in turn, has consequences for the presentation of the performance report. Rather than try to come up with hard, conclusive links between inputs, activities, and outcomes, evaluators are encouraged to tell a performance story that provides a credible portrait in narrative form of results and expectations, mentioning both anticipated as well as unanticipated outcomes.
- Attribution. A key challenge in performance measurement is attribution, or determining what a program’s contribution has been to a specific outcome. The more difficult question is usually determining just what contribution the specific program in question made to the outcome. How much of the success (or failure) can we attribute to the program? What has been the contribution made by the program? Despite the measurement difficulty, attribution is a problem that cannot be ignored when trying to assess the performance of government programs. Without an answer to this question, little can be said about the worth of the program, nor can advice be provided about future directions (Mayne, 2001).
- Credible indicators. Performance can be measured only if there are indicators of both outputs and outcomes. Selecting indicators is not automatic, even if a program is explicit about what its intended outcomes are supposed to be. Successful performance measurement depends, in part, on finding credible indicators that tell you something important about a program and that can be successfully measured.
- Linking resources to results. Performance measurement is not an end in itself. It should contribute to the wider process of governmental resource allocation. In principle, if programs are found to be underperforming, resources should be moved out of them to other programs that are achieving deeper public benefits. Moreover, linking resources to results is a mechanism for supporting transparency in government decisions as well as stronger accountability to citizens.
- Sustainability – part of a strategy. Performance measurement needs to be part of a broader, ongoing strategy of performance assessment. It cannot be episodic or occasional. This feature touches quite closely on the issue of organizational culture, since it highlights the fact that proponents of performance measurement are not simply looking for a new tool of governance, but at changing the way in which governance operates. The ultimate goal is government that continually tries to do better, to be more responsive, and to assess its activities against standards and benchmarks. This focus is strategic, not simply technical.”
See also Performance Measurement.
Leslie Pal (2014), Beyond Policy Analysis – Public Issue Management in Turbulent Times, Fifth Edition, Nelson Education, Toronto. See Beyond Policy Analysis – Book Highlights.
Holden, D. J., & Zimmerman, M. A. (Eds.) (2009). A practical guide to program evaluation planning. Los Angeles, CA: Sage.
Mayne, J. (2001, Spring). Addressing attribution through contribution analysis: Using performance measures sensibly. The Canadian Journal of Program Evaluation, 16, 1–24.
Performance-Based Management Special Interest Group. (2001). The performance-based management handbook: A six-volume compilation of techniques and tools for implementing the Government Performance and Results Act of 1993. Washington, DC: Training Resources and Data Exchange, Performance-Based Management Special Interest Group for the Office of Strategic Planning and Program Evaluation. Retrieved from http://www.orau.gov/pbm/pbmhandbook/pbmhandbook.html
Schacter, M. (1999). Means … ends … indicators: Performance measurement in the public service. Ottawa, ON: Institute on Governance.
Treasury Board of Canada. (2010). Guide for the development of results-based management and accountability frameworks. Retrieved from http://www.tbs-sct.gc.ca/cee/tools-outils/rmaf-cgrr/guide02-eng.asp#note
Page created by: Ian Clark, last modified 11 April 2017.
Image: Andrea Little Limbago, Endgame, at https://www.endgame.com/blog/bestiary-cyber-intelligence, accessed 11 April 2017.