Atlas114M Evaluation and Performance Management

Description

This module-length (six-week) synthetic course outline covers the methodologies and tools commonly used to evaluate public programs and familiarizes students with the concepts, methods, and applications of evaluation research. The module also equips students with a critical lens for reading evaluation research, the ability to anticipate and improve a given program based on evaluation results, and an understanding of performance management systems (including their implementation and use) in managing an organization.

Learning outcomes

On successful completion of this course, students will have the skills and knowledge to analyze public management problems by appropriately applying the theories and principles in the normed topics and concepts noted below.

Normed topics

The topics are normed to contain a volume of content that can be taught in one course-week of instruction – nominally 3 hours of in-class work and 7 hours of outside-class reading.

  1. Evaluation Purposes, Types and Questions
  2. Fundamental Identification Problem: Causality, Counterfactual Responses, Heterogeneity, Selection
  3. Assessing the Confounding Effects of Unobserved Factors
  4. Sensitivity Analysis
  5. Data Collection Strategies
  6. Performance Measurement and Performance Management

Like other normed topics on the Atlas, each of these has a topic description, links to core concepts relevant to the topic, learning outcomes, a reading list drawn from available course syllabi, and a series of assessment questions.

Concepts to be learned

Sensitivity Analysis; Intangible Benefits of Programs; Result; Results Chain; Beneficiaries; Best Practices; Departmental Evaluation Plan; Departmental Performance Report; Economy (in evaluation); Evaluation Criteria; Evaluation Products; Formative Evaluation; Gaming (in evaluation); Gender-based Analysis (GBA); Impact Evaluation; Open Systems Approach; Program Evaluation; Project/Program Objective; Terms of Reference (in evaluation); Attribution; Baseline Information; Efficiency; Epistemology; Evaluation Assessment; Needs Assessment; Policy Outputs vs. Outcomes; Program Logic; With-versus-Without; Causal Chain; Causal Images; Causal Relationship; Evaluability; Single Difference (in impact evaluation); Diagnostic Procedures; Logic Model; Case Studies; Elite Interviews; Literature Review; Performance; Performance Expectations; Performance Monitoring; Productivity in the Public Sector; Results Based Management; Results-Based Reporting; Performance Reporting; Performance Story; Benchmark; Expected Result; Intermediate Outcome; Lessons Learned; Neutrality; Objectivity; Outcome; Outputs; Performance Audit; Performance Criteria; Performance Indicator; Performance Measure; Performance Measurement; Performance Measurement Strategy.

Course syllabi sources

University of Toronto: PPG-1008 & PPG-2021; Carleton PADM-5420; Harvard Kennedy School: API-208 & MLD-101B; NYU Wagner School: GP-2170 & GP-2171; American University: PUAD-604; Rutgers: 34:833:632; Maryland: PUAF-689Xl; University of Southern California: PPD-560; North Carolina State University: PA-511

Recommended readings

Week 1:  Evaluation Purposes, Types and Questions

Carol H. Weiss (1998) Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 1-3.

Mertens, Donna M., and Wilson, Amy T., Program Evaluation Theory and Practice: A Comprehensive Guide. New York: The Guilford Press, 2012. Chapter 8.

W.K. Kellogg Foundation. “Logic Model Development Guide.” Battle Creek, Michigan: W.K. Kellogg Foundation, 2004. Chapters 3 and 4.

Week 2: Fundamental Identification Problem: Causality, Counterfactual Responses, Heterogeneity, Selection

Carol H. Weiss (1998) Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 8-9.

Bornstein D. (2012). “The Dawn of the Evidence-Based Budget.” NY Times, May 30, 2012. Available at: http://opinionator.blogs.nytimes.com/2012/05/30/worthy-of-government-funding-prove-it/

Angrist, J.D. and A.B. Krueger (2000), “Empirical Strategies in Labor Economics,” in O. Ashenfelter and D. Card, eds., Handbook of Labor Economics, vol. 3. New York: Elsevier Science. Sections 1 and 2.

Imbens, G.W. and J.M. Wooldridge (2009) “Recent Developments in the Econometrics of Program Evaluation,” Journal of Economic Literature, vol. 47(1), 5-86.

Khandker, S.R., Koolwal, G.B., & Samad, H.A. (2010). Basic issues of evaluation (pp. 18-29). Excerpt of chapter 2 in Handbook on impact evaluation: Quantitative methods and practices. Washington, DC: The World Bank.
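The fundamental identification problem in these readings – that we never observe a unit's treated and untreated outcomes at the same time, so naive comparisons of participants and non-participants can be biased by selection – can be illustrated with a small simulation. This sketch is not drawn from the assigned readings; the data-generating process and all numbers are hypothetical, chosen only to show how selection on an unobserved factor distorts a simple difference in means.

```python
import random

random.seed(0)

# Hypothetical illustration of the fundamental identification problem.
# Each unit has two potential outcomes, y0 (untreated) and y1 (treated),
# but only one is ever observed.
n = 100_000
units = []
for _ in range(n):
    ability = random.gauss(0, 1)             # unobserved confounder
    y0 = 10 + 2 * ability + random.gauss(0, 1)
    y1 = y0 + 3                              # true treatment effect = 3
    # Selection: higher-ability units are more likely to enrol, so the
    # treated and untreated groups are not comparable on 'ability'.
    treated = ability + random.gauss(0, 1) > 0
    units.append((treated, y1 if treated else y0))

treated_ys = [y for t, y in units if t]
control_ys = [y for t, y in units if not t]
naive = sum(treated_ys) / len(treated_ys) - sum(control_ys) / len(control_ys)

print(f"true effect: 3.0, naive difference in means: {naive:.2f}")
# The naive estimate is well above 3 because the comparison confounds
# the treatment effect with selection on unobserved 'ability'.
```

The overstatement in the naive estimate is exactly the "with-versus-without" problem: the untreated group is not a valid counterfactual for the treated group when enrolment depends on unobserved factors that also drive outcomes.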

Week 3: Assessing the Confounding Effects of Unobserved Factors

Rosenbaum, P.R. (2005), “Sensitivity Analysis in Observational Studies,” Encyclopedia of Statistics in Behavioral Science, vol. 4, 1809-1814.

Imbens, G.W. (2003), “Sensitivity to Exogeneity Assumptions in Program Evaluation,” American Economic Review (Papers & Proceedings), vol. 93(2), 126-132.

Rosenbaum, P.R. (2002), Observational Studies. New York: Springer-Verlag. Chapter 4.

Rosenbaum, P.R. and D.B. Rubin (1983), “Assessing Sensitivity to an Unobserved Binary Covariate in an Observational Study with Binary Outcome,” Journal of the Royal Statistical Society. Series B, vol. 45(2), 212-218.

Week 4: Sensitivity Analysis

Anderson, David, et al. Quantitative methods for business. Cengage Learning, 2012. Chapter 4.

McKenzie, Richard B., and Gordon Tullock. “Anything Worth Doing Is Not Necessarily Worth Doing Well.” The New World of Economics. Springer Berlin Heidelberg, 2012. 25-42.

Merrifield, J. (1997). Sensitivity analysis in benefit-cost analysis: A key to increased use and acceptance. Contemporary Economic Policy, 15, pp. 82-92.

John Graham, “Risk and Precaution,” transcript of remarks delivered at AEI-Brookings Joint Center conference, “Risk, Science, and Public Policy: Setting Social and Environmental Priorities,” October 12, 2004. http://georgewbush-whitehouse.archives.gov/omb/inforeg/speeches/101204_risk.html
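The kind of one-way sensitivity analysis discussed in these readings – re-running a benefit-cost calculation while varying one uncertain parameter – can be sketched in a few lines. The program, its costs, and its benefit stream below are entirely hypothetical; the point is only to show how varying the discount rate can flip the sign of net present value and hence the benefit-cost verdict.

```python
# Hypothetical one-way sensitivity analysis of a benefit-cost result:
# vary the discount rate and check whether the NPV verdict changes.

def npv(annual_net_benefit: float, years: int, upfront_cost: float,
        discount_rate: float) -> float:
    """Net present value of a constant annual net-benefit stream,
    discounted over 'years' and net of the upfront cost."""
    pv = sum(annual_net_benefit / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - upfront_cost

# Hypothetical program: $1.0M upfront, $120k/year net benefits for 15 years.
for rate in (0.03, 0.05, 0.07, 0.10):
    value = npv(120_000, 15, 1_000_000, rate)
    verdict = "passes" if value > 0 else "fails"
    print(f"discount rate {rate:.0%}: NPV = ${value:,.0f} ({verdict})")
# The conclusion is sensitive to the discount rate: this project passes
# a benefit-cost test at 3-7% but fails at 10%.
```

Because the verdict flips within a plausible range of discount rates, an evaluator would report the result as sensitive to that assumption rather than as a single pass/fail conclusion.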

Week 5: Data Collection Strategies

Carol H. Weiss (1998) Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 6 and 8.

Mertens, Donna M., and Wilson, Amy T., Program Evaluation Theory and Practice: A Comprehensive Guide. New York: The Guilford Press, 2012. Chapter 10.

Wholey, Joseph S., Harry Hatry, Kathryn Newcomer. Handbook of Practical Program Evaluation, 2nd Edition. San Francisco: Jossey-Bass, 2010. Chapter 14 (11-13 and 15-18 optional).

Week 6: Performance Measurement and Performance Management

Dave Ulrich, Delivering Results: A New Mandate for Human Resource Professionals, Boston: Harvard Business School Press, 1998. (Introduction and Chapter 1).

Ebrahim, A., & Rangan, V. K. (2010). The limits of nonprofit impact: A contingency framework for measuring social performance. Boston, MA: Harvard Business School Working Paper. http://www.hbs.edu/research/pdf/10-099.pdf

Kotter, J. P. (1990). What leaders really do. Harvard Business Review, 68(3), 103-111.

Donald Moynihan et al., Performance Regimes Amidst Governance Complexity, Journal of Public Administration Research and Theory (JPART), Jan. 2011, vol. 21, pp. 141-155.

Laurence J. O’Toole, Jr., Treating Networks Seriously: Practical and Research-Based Agendas in Public Administration, PAR, Jan/Feb 1997, vol. 57, no. 1, pp. 45-52.

Lester M. Salamon, The New Governance and the Tools of Public Action: An Introduction, Chapter 1 (pp. 1-47) in The Tools of Government: A Guide to the New Governance, edited by Lester M. Salamon, Oxford University Press, 2002.

Sample assessment questions

1a) Define the following terms: Sensitivity Analysis; Intangible Benefits of Programs; Result; Results Chain; Beneficiaries; Best Practices; Departmental Evaluation Plan; Departmental Performance Report; Economy (in evaluation); Evaluation Criteria; Evaluation Products; Formative Evaluation; Gaming (in evaluation); Gender-based Analysis (GBA); Impact Evaluation; Open Systems Approach; Program Evaluation; Project/Program Objective; Terms of Reference (in evaluation).
1b) What is a summative evaluation? How does a summative evaluation differ from a formative evaluation?
1c) What are policy outcomes? Why is it preferable for evaluation strategies to measure policy outcomes as opposed to inputs or outputs?
1d) What is gaming? Please provide a (real or hypothetical) example.
1e) What is the role of the “terms of reference” in the evaluation process?
1f) What is meant by the term “intangible benefits of programs”? Why is this an important concept for program evaluators to understand?

2a) Define the following terms: Attribution; Baseline Information; Efficiency; Epistemology; Evaluation Assessment; Needs Assessment; Policy Outputs vs. Outcomes; Program Logic; With-versus-Without; Causal Chain; Causal Images; Causal Relationship.
2b) What is a confounding variable? Why is this an important concept for policy/program evaluators to understand?
2c) What does the term “counterfactual” mean? Why is this an important concept for policy/program evaluators to understand?

3a) Define the following terms: Evaluability; Single Difference (in impact evaluation).
3b) Identify one program, policy or government activity that is particularly difficult to evaluate in terms of efficiency and effectiveness. In a 3-5 page short paper, describe the evaluation challenges involved and identify some possible strategies to overcome those challenges and evaluate the program/policy/activity in question.

4a) Define the following terms: Diagnostic Procedures; Logic Model.
4b) What is sensitivity analysis? Why is this an important topic for students of public administration to study?
4c) What is a logic model? Draw a mock logic model for any public policy/program of your choice.

5a) Define the following terms: Case Studies; Elite Interviews; Literature Review.
5b) What are case studies? What are some of the advantages and disadvantages of case studies as a tool for gathering information about the effectiveness of specific policy choices? Discuss in a short 2-3 page paper.

6a) Define the following terms: Performance; Performance Expectations; Performance Monitoring; Productivity in the Public Sector; Results Based Management; Results-Based Reporting; Performance Reporting; Performance Story; Benchmark; Expected Result; Intermediate Outcome; Lessons Learned; Neutrality; Objectivity; Outcome; Outputs; Performance Audit; Performance Criteria; Performance Indicator; Performance Measure; Performance Measurement; Performance Measurement Strategy.
6b) What is the difference between performance measurement and evaluation?
6c) What is the role of performance measurement in the policy cycle?
6d) What are three characteristics of useful performance indicators? For the policy or program of your choice, identify two potential performance indicators that would be useful for performance measurement purposes and describe in one paragraph why they are potentially valuable measures of program effectiveness or efficiency.

Page created by: James Ban on 8 July 2015 and edited for new Atlas by Ian Clark on 9 December 2015.