Contrasting Purposes of Evaluation

… a core concept used in Implementation and Delivery and Atlas108

Concept description

Leslie Pal (reference below) describes the different, and sometimes contrasting, purposes of evaluation.

Pal writes (p. 272):

“The improvement of programs is the core function or purpose of evaluation as it is conducted in governments, but there are other purposes as well. Rossi, Lipsey, and Freeman (2004) highlight four (pp. 34–38):

  1. The first is … program improvement. Typically this means working with program managers as the program is being implemented, or what is called formative evaluation.
  2. Another purpose is for accountability – in this instance, the objective is oversight once the program is close to completion, or what is called summative evaluation.
  3. Evaluations can also be done to generate more general knowledge that may or may not be directly relevant to the program but that might cast light on a social issue or causal questions.
  4. Finally, evaluations can be done as a political ruse or for public relations – either to produce data to support a program or justify a decision that has already been made.”

Pal provides the following definitions (p. 303):

  • formative evaluation – evaluation designed to support development and improvement of a program as it is being implemented
  • program evaluation – an essential part of any reasonable approach to policymaking that assesses, in some sense, how well programs are doing in terms of their stated objectives
  • summative evaluation – an evaluation undertaken at the end of a program to gauge its success

Improvement vs. accountability

Clark and Swain (reference below) have emphasized the importance of the work of Doug Hartle and Rod Dobell in distinguishing between the first two purposes above:

“Before leaving Ottawa, Dobell had thought through and spoken about most of the conundrums and crucial distinctions needed to perform program evaluation: the purpose of the evaluation (forward-looking for purposes of improving designs and allocating resources or backward-looking for purposes of reinforcing accountability relationships); the audience for the evaluation (principal or agent); and the ethical dilemmas associated with risk and decision-making by public managers.

“The matters of purpose and audience are basic … asking the public service manager to subject his operations to recurrent comprehensive evaluation is like asking a dog to carry the stick with which she or he is to be beaten (Dobell and Zussman 1981), and Dobell and Zussman note that “the process of policy analysis (including policy and program appraisal, or evaluation) is subject to both procedural impediments, arising out of the fact that the work takes place in an organizational and political context, and to analytical limits arising out of the lack of analytical criteria or relevant information to guide the key choices to be faced” (404) and that:

[There is an] extensive literature on the importance of bureaucratic games, formal and informal pay-off rules or incentive systems, procedural constraints leading to distortions in collective decision processes, and so on. The point is simply that evaluation takes place within a political and organizational context which drives analysis and analysts to an essentially adversarial role … Within such a framework of advocacy, the bureaucratic incentives do not press in the direction of continuing searching evaluation. (Dobell and Zussman 1981, 413)

“The essential distinctions between various purposes and audiences for evaluation are simply not acknowledged in the current federal evaluation and performance measurement policies, which seem to assume that the same set of measures and techniques can serve the needs of program management, resource allocation and accountability. Indeed, the April 1, 2009 Policy on Evaluation holds that one flavour of evaluation is to serve all three purposes and audiences:

3.1 In the Government of Canada, evaluation is the systematic collection and analysis of evidence on the outcomes of programs to make judgments about their relevance, performance and alternative ways to deliver them or to achieve the same results.

3.2 Evaluation provides Canadians, Parliamentarians, Ministers, central agencies and deputy heads an evidence-based, neutral assessment of the value for money, i.e. relevance and performance, of federal government programs. Evaluation:

a. supports accountability to Parliament and Canadians by helping the government to credibly report on the results achieved with resources invested in programs;

b. informs government decisions on resource allocation and reallocation by:

i. supporting strategic reviews of existing program spending, to help Ministers understand the ongoing relevance and performance of existing programs;

ii. providing objective information to help Ministers understand how new spending proposals fit with existing programs, identify synergies and avoid wasteful duplication;

c. supports deputy heads in managing for results by informing them about whether their programs are producing the outcomes that they were designed to produce, at an affordable cost; and,

d. supports policy and program improvements by helping to identify lessons learned and best practices (Treasury Board Secretariat 2009).”

Clark and Swain make four recommendations rooted in the distinction between evaluation for improvement and evaluation for accountability:

  1. The government should make explicit the distinction between “evaluation for improvement” and “evaluation for accountability” and devote the vast majority of evaluation resources to the former.
  2. Senior officials should apply more judgement in selecting topics for evaluation. Rather than applying across-the-board rules such as “every program, every five years” or “before seeking renewal of any program of specific duration,” the topics should be selected on the basis of the extent to which evaluation might lead the government to make material changes in program design or funding.
  3. The central agencies that advise ministers on policy priorities and funding should become more involved in the selection and design of evaluation projects.
  4. There should be fewer, but more thorough, evaluations and they should draw on the techniques of analysis employed in academic social sciences research and the techniques for engaging interested parties employed by successful public enquiries.

See also: Policy Analysis and Policy Evaluation; Categories of Program Evaluation.

Atlas topic, subject, and course

The Study of Evaluation and Performance Measurement in the Public Sector (core topic) in Evaluation and Performance Measurement and Atlas108 Analytic Methods and Evaluation.

Sources

Leslie Pal (2014), Beyond Policy Analysis – Public Issue Management in Turbulent Times, Fifth Edition, Nelson Education, Toronto. See Beyond Policy Analysis – Book Highlights.

Ian Clark and Harry Swain (2015), Program Evaluation and Aboriginal Affairs: A History and a Thought Experiment, in Edward Parson, ed., A Subtle Balance: Expertise, Evidence, and Democracy in Policy and Governance, 1960-2010, McGill-Queen’s University Press. Penultimate draft available, with the publisher’s permission, at http://atlas101.ca/ic/wp-content/uploads/2016/01/Program_Evaluation_and_Aboriginal_Affairs_draft_chapter_by_Ian_D._Clark_and_Harry_Swain-July_2014.pdf, accessed 10 April 2017.

Rodney A. Dobell and David Zussman (1981), An evaluation system for government: If politics is theatre, then evaluation is (mostly) art, Canadian Public Administration 24(3): 404-427.

Treasury Board of Canada Secretariat (2009), Policy on Evaluation, archived at https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=15024, accessed 10 April 2017.

Page created by: Ian Clark, last modified 14 April 2017.

Image: Digitech, Apples and Oranges, at http://digitechcomputer.com/2016/09/16/apples-and-oranges-the-problem-with-comparing-collection-percentages/, accessed 11 April 2017.