Use of Evaluation Rubrics

Pragmatica is well known for using rubrics in evaluation, having used them for over a decade. Judy regularly mentors others in their use. This page includes many of the key publications and presentations developed over time, in collaboration with others.

Evaluation building blocks

This short guide was developed in collaboration with Kinnect Group colleagues and draws on more than ten years' reflection on using rubrics.

“We find that using the building blocks explained in the guide helps us to do credible and useful evaluation. Our clients tell us it gives them meaningful and insightful findings that they can use to take action”.

DOWNLOAD PDF HERE

Adaptive Evaluation: A synergy between complexity theory and evaluation practice

Some traditional evaluation designs assume high levels of predictability and control. The problem is that complex programs or contexts challenge these basic assumptions. Programs often deal with emergent outcomes and objectives, adaptive program processes, nonlinear theories of change and evolving stakeholder expectations. In these instances, we need a more adaptive approach to evaluation: one that fits the environment without compromising rigor.

In this paper, we provide an overview of the challenge and previous efforts to address it, an introduction to basic theory and practice of human systems dynamics (HSD) and theoretical foundations for a new approach to evaluation in complex environments, Adaptive Evaluation. We demonstrate applications of this new evaluation practice in a case study. Finally, we articulate lessons learned and emerging questions.

Read the article Glenda Eoyang and Judy Oakden wrote about using generic rubrics in complex evaluation settings here.

DOWNLOAD ARTICLE HERE

What is on the rubrics horizon?

Evaluators in Aotearoa New Zealand are increasingly using rubrics in their evaluative practice. We now have a working knowledge of using rubrics and some sense of what makes them more or less effective. While rubrics have shifted our evaluation practice, the shift has not been without challenges. Judy observes: “I’d put my money on rubrics being here to stay, but I think we need to understand the challenges of using them and mitigate against the risks.”

In this e-book Judy Oakden and Melissa Weenink explore some of the challenges they have encountered using rubrics in their practice. They also include feedback from a discussion during a practice-based session at the ANZEA Conference in Auckland, New Zealand, where they explored difficulties they and others face with rubrics.

Download PDF

Providing mentoring support to develop rubrics

To develop and use rubrics on a large complex evaluation of the Food Systems Innovation (FSI) Initiative, team members sought mentoring from Pragmatica. This practice note, by Samantha Stone-Jovicich from CSIRO, describes the steps she went through and the challenges she overcame to develop rubrics for the FSI Initiative.

Samantha reflected afterwards:

[This was] a very large, complex project where we were trying to design a monitoring, evaluation and learning (MEL) system to capture the dynamics of innovation and its impacts. Judy helped us adapt and tailor a rubrics approach. Her abundance of knowledge and experience, coupled with her collegiality, collaborative nature, and flexibility and creativity, were instrumental to supporting us incorporating a useful and fit-for-purpose rubrics approach into our MEL system.

Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence

This practice example is part of Judy Oakden’s early writing on how evaluators can use rubrics to make evaluative judgements. It details how rubrics support robust data collection and frame the analysis and reporting. Judy wrote this detailed account of the process as part of the first Better Evaluation writeshop, led by Irene Guijt.

Download PDF

Rubrics: A Method for Surfacing Values and Improving the Credibility of Evaluation; Journal of MultiDisciplinary Evaluation

This practice-based article by the Kinnect Group members (Julian King, Kate McKegg, Judy Oakden, Nan Wehipeihana) shares the Group’s learning on using evaluative rubrics to meet the challenge of surfacing values and improving the credibility of evaluation. Evaluative rubrics enable values to be dealt with more transparently. In the Group’s experience, when evaluators and evaluation stakeholders get clearer about values, evaluative judgments become more credible and warrantable.

Download PDF