Atlas108 Analytic Methods and Evaluation

… one of the core Atlas Courses

Atlas108 course syllabus

This course covers the fundamentals of analytic methods and evaluation.

Learning outcomes

On successful completion of this course, students will have the skills and knowledge to analyze public management problems by appropriately applying the theories and principles in the normed topics and concepts noted below.

Normed topics

The topics are normed in having a volume of content capable of being taught in one course-week of instruction (nominally 3 hours of in-class work and 7 hours of outside-class reading).

  1. The Study of Analytic Methods for Public Management
  2. Working with Spreadsheets
  3. Decision Analysis
  4. Cost-Benefit Analysis
  5. Project Management
  6. TQM, Six Sigma, and Lean
  7. Impact of Bias on Decision-Making and Insights from Behavioural Economics
  8. The Study of Evaluation and Performance Measurement
  9. Fundamental Identification Problem: Causality, Counterfactual Responses, Heterogeneity, Selection
  10. Assessing the Confounding Effects of Unobserved Factors
  11. Sensitivity Analysis
  12. Data Collection Strategies
  13. Performance Measurement and Performance Management

Like other normed topics on the Atlas, each of these has a topic description, links to core concepts relevant to the topic, learning outcomes, a reading list drawn from available course syllabi, and a series of assessment questions.

Concepts to be learned
Concepts listed alphabetically under topics
(Old Atlas – first draft list)
Working with Spreadsheets

Excel Basics – The Interface

Excel Data Formatting, Grouping, Showing & Hiding

Excel Data Inputting

Excel Data Referencing – Absolute & Relative

Excel Data Functions – Mathematical, Statistical

Excel Data Generation – Time, Date, Random

Excel Data Comparison – Logical & Conditional Formatting

Excel Data Seeking – Reference, Search

Excel Text Functions

Excel Data Determination – Goal Seek, What-If (see the sketch following this list)

Excel Working with Complex Formulas – Tracing

Excel Data Checking & Validation

Excel Presenting Data – Formatting

Excel Presenting Data – Tables

Excel Presenting Data Visually
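
A concept like Goal Seek is easy to demystify outside Excel. The Python sketch below is not part of the Atlas materials and all figures are invented; it simply mirrors what Goal Seek does: adjust one input until a formula's output hits a target, here by bisection.

  # A Goal Seek analogue in Python: set the loan-payment "cell" to a target
  # value by changing the interest-rate "cell". All figures are invented.
  def loan_payment(rate, years=10, principal=100_000):
      """Formula cell: annual payment on a fixed-rate loan."""
      return principal * rate / (1 - (1 + rate) ** -years)

  def goal_seek(f, target, lo, hi, tol=1e-9):
      """Bisection: find x in [lo, hi] where f(x) == target (f increasing)."""
      while hi - lo > tol:
          mid = (lo + hi) / 2
          if f(mid) < target:
              lo = mid
          else:
              hi = mid
      return (lo + hi) / 2

  # "Set cell" loan_payment "to value" 13,000 "by changing" the rate:
  rate = goal_seek(loan_payment, 13_000.0, 0.001, 0.30)
  print(f"required annual rate: {rate:.2%}")  # payment at this rate is about $13,000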

Decision Analysis

Decision Chain

Decision Point

Cost-Benefit Analysis

Cost-benefit Analysis

Cost-effectiveness Analysis

Project Management

Capital Project

Project/Program Objective

Knowledge Project

Risk Management

Catastrophic Harms

Residual Risk

Risk

Risk Appetite

Risk Tolerance

Risk Identification

Risk Management

Risk Mitigation

Risk Profile

Risk Strategy

Impact of Bias on Decision-Making and Insights from Behavioural Economics

Anchoring Effect

Confirmation Bias

Groupthink

Negativity Bias

Evaluation Purposes, Types and Questions

Sensitivity Analysis

Intangible Benefits of Programs

Result

Results Chain

Beneficiaries

Best Practices

Departmental Evaluation Plan

Departmental Performance Report

Economy (in evaluation)

Evaluation Criteria

Evaluation Products

Formative Evaluation

Gaming (in evaluation)

Gender-based Analysis (GBA)

Impact Evaluation

Open Systems Approach

Program Evaluation

Project/Program Objective

Terms of Reference (in evaluation)

Fundamental Identification Problem: Causality, Counterfactual Responses, Heterogeneity, Selection

Attribution

Baseline Information

Causal Chain

Causal Images

Causal Relationship

Efficiency

Epistemology

Evaluation Assessment

Needs Assessment

Policy Outputs vs. Outcomes

Program Logic

With-versus-Without

Assessing the Confounding Effects of Unobserved Factors

Evaluability

Single Difference (in impact evaluation)

Sensitivity Analysis

Diagnostic Procedures

Logic Model

Data Collection Strategies

Case Studies

Elite Interviews

Literature Review

Performance Measurement and Performance Management

Benchmark

Expected Result

Intermediate Outcome

Lessons Learned

Neutrality

Objectivity

Outcome

Outputs

Performance

Performance Audit

Performance Criteria

Performance Expectations

Performance Indicator

Performance Measure

Performance Measurement

Performance Measurement Strategy

Performance Monitoring

Performance Reporting

Performance Story

Productivity in the Public Sector

Results-Based Management

Results-Based Reporting

Course syllabi sources

University of Toronto: PPG-1001, PPG-1007, PPG-1008 & PPG-2021; Carleton: PADM-5814, PADM-5272 & PADM-5420; Harvard Kennedy School: API-201, API-139M, API-208, MLD-101B & MLD-110B; University of Chicago: PPHA-31920; National University of Singapore: PP-6703; University of Illinois-Chicago: PA-526; University of Saskatchewan-Regina: JSGS-828; Rutgers (Bloustein): 34:833:543 & 34:833:632; NYU Wagner School: GP-2170 & GP-2171; American University: PUAD-604; Maryland: PUAF-689Xl; University of Southern California: PPD-560; North Carolina State University: PA-511.

Recommended readings

Week 1: Working with Spreadsheets

[to come]

Week 2: Agency Theory

Bendor, Glazer and Hammond (2001) “Theories of Delegation”, Annual Review of Political Science 4:235–269

Gibbons, R. “Lecture Note 1: Agency Theory.” http://web.mit.edu/rgibbons/www/LN_1_Agency_Theory.pdf

Holmstrom and Milgrom (1991). “Multitask Principal Agent Analyses: Incentive Contracts, Asset Ownership, and Job Design”, Journal of Law, Economics, and Organization 7: 24-52.

Eisenhardt, K. M. (1989). “Agency theory: An assessment and review.” Academy of Management Review, 14(1), pp. 57-74.

Ferris, J. A. (1992). School-based decision making: A principal-agent perspective. Educational Evaluation and Policy Analysis, 14(4), 333-346.

Week 3: Cost-Benefit Analysis

De Rus, Ginés. Introduction to Cost–Benefit Analysis. Edward Elgar, 2010. Chapters 1, 2 (2.1-2.3), 3 (3.1-3.2).

Arrow, Kenneth, et al. 1996. “Benefit-Cost Analysis in Environmental, Health, and Safety Regulation: A Statement of Principles” AEI-Brookings Joint Center.

Boardman, A.E., D.H. Greenberg, A.R. Vining, and D.L. Weimer. 2011. Cost-Benefit Analysis: Concepts and Practice (Fourth Edition). Upper Saddle River, N.J.: Pearson. Chapters 2-5.

Week 4: Project Management

Brinkerhoff, Derick W., ‘Looking out, looking in, looking ahead: guidelines for managing development programs,’ International Review of Administrative Sciences, Vol. 58, 1992, pp. 483-503.

Pellegrinelli, Sergio, ‘What’s in a name: Project or programme?’ International Journal of Project Management, Vol. 29, 2011, pp. 232-240.

White, Louise G., Creating Opportunities for Change: Approaches to Managing Development Programs, Lynne Rienner Publishers, 1987, Chapter 1.

Wysocki, R. K. 2009. Effective Project Management: Traditional, Agile, Extreme (5th ed.). Indianapolis: Wiley. Chapters 1-9 and 11.

Week 5: Risk Management

Eggers, William and John O’Leary. If We Can Put a Man on the Moon: Getting Big Things Done in Government. (Harvard Business Press, Boston, 2009) Chapter 4, The Overconfidence Trap, 107-134.

Hopkin, Paul. Fundamentals of Risk Management: Understanding, Evaluating and Implementing Effective Risk Management. Second Edition. Institute of Risk Management, 2012. Chapters 1-6

Sparrow, Malcolm. The Character of Harms. (Cambridge University Press, 2008) Introduction, 1-18 and Chapter 6, 101-107.

Week 6: Impact of Bias on Decision-Making and Insights from Behavioural Economics

Forester, John. “Bounded Rationality and the Politics of Muddling Through.” Public Administration Review 44, 1 (January 1984), 23-31.

Henrich, Joseph, et al. 2001. “In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies.” The American Economic Review 91, 2: 73-78.

Gladwell, Malcolm. 2005. Chapter 3, “The Warren Harding Error: Why We Fall for Tall, Dark, and Handsome,” in Blink: The Power of Thinking Without Thinking. Pages 72-98.

Jones, Bryan D. “Bounded Rationality.” Annual Review of Political Science 2 (1999), 297-321.

March, James G. and Johan P. Olsen. 1996. “Institutional Perspectives on Political Institutions.” Governance 9, 3: 247-264.

Wilson, Rick. 2011. “The Contribution of Behavioral Economics to Political Science.” Annual Review of Political Science 14: 201-223.

Monroe, Kristen Renwick and Kristen Hill Maher. 1995. “Psychology and Rational Actor Theory.” Political Psychology 16, 1: 1-21.

Tversky, Amos, and Daniel Kahneman. 1974. “Judgment under Uncertainty: Heuristics and Biases.” Science 185, 4157: 1124-1131.

Tversky, Amos and Daniel Kahneman. 1981. “The Framing of Decisions and the Psychology of Choice.” Science 211, 4481: 453-458.

Week 7: Evaluation Purposes, Types and Questions

Carol H. Weiss (1998) Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 1-3.

Mertens, Donna M., and Wilson, Amy T., Program Evaluation Theory and Practice: A Comprehensive Guide. New York: The Guilford Press, 2012. Chapter 8.

W.K. Kellogg Foundation. “Logic Model Development Guide.” Battle Creek, Michigan: W.K. Kellogg Foundation, 2004. Chapters 3 and 4.

Week 8: Fundamental Identification Problem: Causality, Counterfactual Responses, Heterogeneity, Selection

Carol H. Weiss (1998) Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 8-9.

Bornstein D. (2012). “The Dawn of the Evidence-Based Budget.” NY Times, May 30, 2012. Available at: http://opinionator.blogs.nytimes.com/2012/05/30/worthy-of-government-funding-prove-it/

Angrist, J.D. and A.B. Krueger (2000), “Empirical Strategies in Labor Economics,” in O. Ashenfelter and D. Card, eds., Handbook of Labor Economics, vol. 3. New York: Elsevier Science. Sections 1 and 2.

Imbens, G.W. and J.M. Wooldridge (2009) “Recent Developments in the Econometrics of Program Evaluation,” Journal of Economic Literature, vol. 47(1), 5-86.

Khandker, S.R., Koolwal, G.B., & Samad, H.A. (2010). Basic issues of evaluation (pp. 18-29). Excerpt of chapter 2 in Handbook on impact evaluation: Quantitative methods and practices. Washington, DC: The World Bank.

Week 9: Assessing the Confounding Effects of Unobserved Factors

Rosenbaum, P.R. (2005), “Sensitivity Analysis in Observational Studies,” Encyclopedia of Statistics in Behavioral Science, vol. 4, 1809-1814.

Imbens, G.W. (2003), “Sensitivity to Exogeneity Assumptions in Program Evaluation,” American Economic Review (Papers & Proceedings), vol. 93(2), 126-132.

Rosenbaum, P.R. (2002), Observational Studies. New York: Springer-Verlag. Chapter 4.

Rosenbaum, P.R. and D.B. Rubin (1983), “Assessing Sensitivity to an Unobserved Binary Covariate in an Observational Study with Binary Outcome,” Journal of the Royal Statistical Society. Series B, vol. 45(2), 212-218.

Week 10: Sensitivity Analysis

Anderson, David, et al. Quantitative Methods for Business. Cengage Learning, 2012. Chapter 4.

McKenzie, Richard B., and Gordon Tullock. “Anything Worth Doing Is Not Necessarily Worth Doing Well.” The New World of Economics. Springer Berlin Heidelberg, 2012. 25-42.

Merrifield, J. (1997). Sensitivity analysis in benefit-cost analysis: A key to increased use and acceptance. Contemporary Economic Policy, 15, pp. 82-92.

John Graham, “Risk and Precaution,” transcript of remarks delivered at AEI-Brookings Joint Center conference, “Risk, Science, and Public Policy: Setting Social and Environmental Priorities,” October 12, 2004. http://georgewbush-whitehouse.archives.gov/omb/inforeg/speeches/101204_risk.html

Week 11: Data Collection Strategies

Carol H. Weiss (1998) Evaluation: Methods for Studying Programs & Policies, 2nd edition. Prentice Hall. Chapters 6 & 8.

Mertens, Donna M., and Wilson, Amy T., Program Evaluation Theory and Practice: A Comprehensive Guide. New York: The Guilford Press, 2012. Chapter 10.

Wholey, Joseph S., Harry Hatry, Kathryn Newcomer. Handbook of Practical Program Evaluation, 2nd Edition. San Francisco: Jossey-Bass, 2010. Chapter 14 (11-13 and 15-18 optional).

Week 12: Performance Measurement and Performance Management

Dave Ulrich, Delivering Results: A New Mandate for Human Resource Professionals, Boston: Harvard Business School Press, 1998. (Introduction and Chapter 1).

Ebrahim, A., & Rangan, V. K. (2010). The limits of nonprofit impact: A contingency framework for measuring social performance. Boston, MA: Harvard Business School Working Paper. http://www.hbs.edu/research/pdf/10-099.pdf

Kotter, J. P. (1990). What leaders really do. Harvard Business Review, 68(3), 103-111.

Donald Moynihan et al., “Performance Regimes Amidst Governance Complexity,” Journal of Public Administration Research and Theory (JPART), Jan. 2011, vol. 21, pp. 141-155.

Laurence J. O’Toole, Jr., Treating Networks Seriously: Practical and Research-Based Agendas in Public Administration, PAR, Jan/Feb 1997, vol. 57, no. 1, pp. 45-52.

Lester M. Salamon, The New Governance and the Tools of Public Action: An Introduction, Chapter 1 (pp. 1-47) in The Tools of Government: A Guide to the New Governance, edited by Lester M. Salamon, Oxford University Press, 2002.

Sample assessment questions

1a) Define the following terms: Decision Chain; Decision Point. 1b) We often have to make decisions under uncertainty. In a short one-page paper, describe one technique for approaching a decision about which you have imperfect information (use an example, real or hypothetical). 1c) What is a formal decision model? How can these be helpful in public policy development?
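
For question 1b, one widely taught technique is an expected-value comparison over a simple decision tree. The Python sketch below is a minimal illustration, not part of the Atlas materials; the two options, their probabilities, and their payoffs are entirely hypothetical.

  # Expected-value comparison of two hypothetical policy options under uncertainty.
  # All probabilities and payoffs are invented for illustration.
  options = {
      "pilot program": [(0.6, 10.0), (0.4, -2.0)],  # (probability, net payoff in $M)
      "full rollout": [(0.3, 40.0), (0.7, -8.0)],
  }

  for name, outcomes in options.items():
      assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1
      ev = sum(p * payoff for p, payoff in outcomes)        # expected value
      print(f"{name}: expected value = {ev:+.1f} $M")

A risk-neutral decision maker would choose the option with the higher expected value; a decision maker with a low risk appetite (see Week 5) might still prefer the safer option.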

2a) Define the following terms: Agency Theory; Thompson’s Three Models of Public Sector Accountability. 2b) What is a principal-agent problem? Why is this an important concept for public sector managers to understand? 2c) Explain how the principal-agent problem can arise in the specific instance of collaboration between unelected public servants and the elected officials whom they serve.

3a) Define the following terms: Cost-benefit Analysis; Cost-effectiveness Analysis. 3b) “Cost-benefit analysis for decisions is somewhat different in the public sector than the private sector because while private sector firms’ objective is to make money and maximize profit, public sector organizations have a much more diverse and sometimes difficult to measure set of objectives.” Please write a 2-page paper either agreeing or disagreeing with this statement, using real world examples. 3c) What are unintended consequences? How can policymakers account for potential unintended consequences in their cost-benefit analyses, given that such consequences are often very difficult to predict and to weigh?
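
The arithmetic at the heart of question 3 can be shown in a few lines: discount each year's costs and benefits to present value and sum them. The figures and the 5% discount rate in this Python sketch are hypothetical, invented purely for illustration.

  # Net present value of a hypothetical public project (all figures invented):
  # a $100M up-front cost followed by $18M in annual benefits for 8 years.
  def npv(cash_flows, rate):
      """Discount year-indexed cash flows (year 0 = today) at the given rate."""
      return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

  flows = [-100.0] + [18.0] * 8
  print(f"NPV at 5%: {npv(flows, 0.05):+.1f} $M")  # positive: benefits exceed costs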

4a) Define the following terms: Capital Project; Project/Program Objective; Knowledge Project. 4b) What is a project life cycle? Describe the main phases of a project life cycle. 4c) What are the differences between agile and traditional project management?

5a) Define the following terms: Risk; Residual Risk; Risk Appetite; Risk Tolerance; Catastrophic Harms; Risk Identification; Risk Management; Risk Mitigation; Risk Profile; Risk Strategy. 5b) What is risk management? Why is it an important topic for public policy students to study? 5c) “In making policy decisions, governments should always aim to minimize risk.” Discuss this statement in a 1-page response. You may agree, disagree, or simply provide a response to the statement that is neither an endorsement nor a rejection.

6a) Define the following terms: Negativity Bias; Anchoring Effect; Confirmation Bias; Groupthink. 6b) What is confirmation bias? Why is this concept important for people working in public management to understand? 6c) “People systematically behave in irrational, self-harming ways because of cognitive bias, and government should therefore intervene to protect people from their own biased and flawed decision making.” Discuss this statement in a short 2-3 page response. You may, but need not, offer an endorsement or rejection of the statement. Please support your argument with real-world evidence.

7a) Define the following terms: Sensitivity Analysis; Intangible Benefits of Programs; Result; Results Chain; Beneficiaries; Best Practices; Departmental Evaluation Plan; Departmental Performance Report; Economy (in evaluation); Evaluation Criteria; Evaluation Products; Formative Evaluation; Gaming (in evaluation); Gender-based Analysis (GBA); Impact Evaluation; Open Systems Approach; Program Evaluation; Project/Program Objective; Terms of Reference (in evaluation). 7b) What is a summative evaluation? How does a summative evaluation differ from a formative evaluation? 7c) What are policy outcomes? Why is it preferable for evaluation strategies to measure policy outcomes as opposed to inputs or outputs? 7d) What is gaming? Please provide a (real or hypothetical) example. 7e) What is the role of the “terms of reference” in the evaluation process? 7f) What is meant by the term “intangible benefits of programs”? Why is this an important concept for program evaluators to understand?

8a) Define the following terms: Attribution; Baseline Information; Efficiency; Epistemology; Evaluation Assessment; Needs Assessment; Policy Outputs vs. Outcomes; Program Logic; With-versus-Without; Causal Chain; Causal Images; Causal Relationship. 8b) What is a confounding variable? Why is this an important concept for policy/program evaluators to understand? 8c) What does the term “counterfactual” mean? Why is this an important concept for policy/program evaluators to understand?
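
Questions 8b and 8c can be made concrete with a short simulation. In the Python sketch below (all numbers invented), people self-select into a program on an unobserved factor that also drives the outcome, so the naive with-versus-without comparison badly overstates the true counterfactual effect.

  # Simulated selection bias: an unobserved confounder ("motivation") drives both
  # program participation and the outcome, so the naive comparison is biased.
  import random

  random.seed(0)
  TRUE_EFFECT = 2.0  # the effect an ideal evaluation would recover

  treated, untreated = [], []
  for _ in range(100_000):
      motivation = random.gauss(0, 1)              # unobserved confounder
      joins = motivation + random.gauss(0, 1) > 0  # selection into the program
      outcome = 3.0 * motivation + random.gauss(0, 1) + (TRUE_EFFECT if joins else 0.0)
      (treated if joins else untreated).append(outcome)

  naive = sum(treated) / len(treated) - sum(untreated) / len(untreated)
  print(f"true effect: {TRUE_EFFECT:.2f}")
  print(f"naive with-versus-without difference: {naive:.2f}")  # far above 2.0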

9a) Define the following terms: Evaluability; Single Difference (in impact evaluation). 9b) Identify one program, policy or government activity that is particularly difficult to evaluate in terms of efficiency and effectiveness. In a 3-5 page short paper, describe the evaluation challenges involved and identify some possible strategies to overcome those challenges and evaluate the program/policy/activity in question.

10a) Define the following terms: Diagnostic Procedures; Logic Model. 10b) What is sensitivity analysis? Why is this an important topic for students of public administration to study? 10c) What is a logic model? Draw a mock logic model for any public policy/program of your choice.
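
For question 10b, sensitivity analysis can be illustrated by reusing the hypothetical cost-benefit figures from the sketch under question 3 and sweeping the one assumption a reviewer is most likely to challenge, the discount rate, to see where the project's NPV verdict flips.

  # One-way sensitivity analysis on the discount rate (illustrative figures).
  def npv(cash_flows, rate):
      return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

  flows = [-100.0] + [18.0] * 8  # same invented project as the earlier CBA sketch

  for pct in range(1, 13):
      value = npv(flows, pct / 100)
      print(f"rate {pct:2d}%: NPV {value:+6.1f} $M -> {'go' if value > 0 else 'no-go'}")

Reporting the break-even rate (here a little under 9%) is more informative than a single point estimate, because it tells the reader exactly how robust the recommendation is to the discounting assumption.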

11a) Define the following terms: Case Studies; Elite Interviews; Literature Review. 11b) What are case studies? What are some of the advantages and disadvantages of case studies as a tool for gathering information about the effectiveness of specific policy choices? Discuss in a short 2-3 page paper.

12a) Define the following terms: Performance; Performance Expectations; Performance Monitoring; Productivity in the Public Sector; Results-Based Management; Results-Based Reporting; Performance Reporting; Performance Story; Benchmark; Expected Result; Intermediate Outcome; Lessons Learned; Neutrality; Objectivity; Outcome; Outputs; Performance Audit; Performance Criteria; Performance Indicator; Performance Measure; Performance Measurement; Performance Measurement Strategy. 12b) What is the difference between performance measurement and evaluation? 12c) What is the role of performance measurement in the policy cycle? 12d) What are three characteristics of useful performance indicators? For the policy or program of your choice, identify two potential performance indicators that would be useful for performance measurement purposes and describe in one paragraph why they are potentially valuable measures of program effectiveness or efficiency.

Page created by: James Ban on 3 July 2015 and last modified by Ian Clark on 23 April 2017.

Image: Eidiko, at http://www.eidiko.com/tech_ba.php, accessed 10 March 2015.