Judicial Evaluation Model
Sewall and Santaga (1986, reference below) describe the Judicial Evaluation Model as being “conceptualized in the early 1970s, by Robert Wolf, as a method which would be useful for administrators who need to reach some decision regarding an educational program and want to go beyond the gathering of objective data.”
The judicial evaluation model (JEM) is an example of an adversary-oriented approach to evaluation (see the slide, above right, from the 2015 Introduction to Program Evaluation course presentation by Anthony Artino, William Lightfoot, and Robin Smith, referenced below).
Sewall and Santaga describe the judicial evaluation model in more detail as follows:
“The model allows for the gathering of both objective and subjective information and is patterned after the legal model which places a premium on human testimony and judgement. It establishes a systematic procedure for inquiry including criteria for classifying, evaluating and presenting evidence in a clear, concise and reasonable manner.
“Two investigative teams are formed to evaluate the program in question. One team is charged with building a case for a reduction or elimination of the program while the other is asked to gather evidence in support of its continued existence. Each team includes a case analyst who serves as the team supervisor and a case presenter who has the responsibility of actually presenting the case in “court.” A forum moderator enforces the rules established for the proceeding and rules on objections. Finally, a clarification panel is selected to consider the evidence and present a written statement of their recommendations.
“The judicial evaluation model is implemented in four stages. Stage one is an exploratory stage designed to identify as broad a range of issues as possible. The naturalistic inquiry paradigm is used heavily in this stage of the process. During the second stage, the issues identified are placed in order of priority and pooled to reduce them to a manageable size. Again, decisions are made through the extensive use of the strategies of naturalistic inquiry. The third stage involves building cases and preparing arguments for case presentation. As in judicial court proceedings, both teams share their information and respective plans of action. In the final stage there is a public presentation of the data and other information collected. Case presenters make their cases and call witnesses. Direct and cross examination of witnesses are engaged in and opening and closing arguments are presented. Based on the evidence presented, the panel makes its decision and recommendations regarding the program being evaluated.
“The Judicial Evaluation model can be particularly useful for dealing with policy-level problems. However, implementing this model in its entirety is a complex and complicated task. This model would probably be most useful for adult educators who want to be sure to incorporate both the positive and negative aspects of a particular program into their evaluation plan along with a heavy emphasis on naturalistic inquiry methods and human judgement.” (pages 10-12)
More recently, Robert Picciotto (2018, reference below) writes:
“The adversary evaluation model emerged in a context that favored democratic debate. It was successfully used in a variety of sectors following its inception, but it was abandoned once neo-liberal thinking and goal achievement approaches became dominant. It is time to give it a second chance. The judicial evaluation model (JEM) relies on human testimony, rules of evidence, cross examination and principled deliberation. These features contribute to evaluation independence, a characteristic that is sorely needed in today’s fractured social environment. JEM promotes civil interaction among groups committed to different ideologies. It encourages tolerance and respects pluralism by combining professional authority with direct citizen participation and neutral facilitation. Competently designed and managed, it resists capture by vested interests and it holds promise as an instrument of progressive evaluation focused on the public interest. Since it is demanding and time consuming, it is especially relevant for large, controversial and complex interventions.”
Timothy Sewall and Marsha Santaga (1986), A Reference Guide to Program Evaluation in Adult Education, Wisconsin University, Green Bay Assessment Center, at https://files.eric.ed.gov/fulltext/ED267246.pdf, accessed 3 December 2018.
Robert Picciotto (2018), Is Adversary Evaluation Worth a Second Look?, American Journal of Evaluation, at https://doi.org/10.1177/1098214018783068, accessed 3 December 2018.
Page created by: Alec Wreford and Ian Clark, last modified 3 December 2018.
Image: Anthony Artino, William Lightfoot, and Robin Smith (2015), Adversary-Oriented Approaches to Evaluation, EDF 5461, Introduction to Program Evaluation, Florida State University, at https://www.powershow.com/viewfl/62b8a0-ZDA1O/Adversary-Oriented_Approaches_to_Evaluation_powerpoint_ppt_presentation, accessed 3 December 2018.