Evaluation Projects
Here is a selection of published evaluation projects completed by Pragmatica, in collaboration with others.
Creatives in Schools
The Creatives in Schools programme funds schools and kura to partner with professional artists and creative practitioners to share specialist artistic knowledge and creative practice with ākonga and students. The evaluations have assessed Creatives in Schools’
first year of operation (“Round 1”), a pilot round with 34 schools
second year of operation in 2021 (“Round 2”), which featured a considerable up-scaling to more schools and included COVID-19 recovery support for creative practitioners
third year of operation in 2022 (“Round 3”), which featured even further up-scaling to additional schools.
For the past three years Pragmatica has evaluated the Creatives in Schools programme. Each year the evaluation sought to:
uncover outcomes in that year of operation
assess the extent to which the programme implementation was effective
support any fine-tuning or adaptations for the next Round.
To see the evaluations and case studies, click here.
Reading Together® Te Pānui Ngātahi
Reading Together® Te Pānui Ngātahi (Reading Together®) is a research-based, four-session workshop programme that supports parents and whānau to effectively raise their children’s reading achievement. This proven programme has a high impact on parents, children, school leadership, teachers, and the wider community (Robinson et al., 2009). Participation in Reading Together® produces multiple valued outcomes for children, parents and whānau, as well as for school and library communities.
Te Tāhuhu o te Mātauranga – Ministry of Education wished to capture the valuable learnings from 38 years of previous research and evaluation to support revitalised implementation of the programme. We were commissioned to develop a Reading Together® Te Pānui Ngātahi summary of evaluations and implementation as part of the Iterative Best Evidence Synthesis (BES) Hei Kete Raukura reporting. The analysis of the research reports used an evidence-based policy approach coupled with a Māori potential approach and systems-thinking framing.
The summary report and accompanying infographics can be found here.
Talanoa Ako (PowerUP)
Talanoa Ako (previously known as Pacific PowerUP) is an education programme for Pacific parents. Since 2013, it has helped Pacific parents and communities build their educational knowledge to support their children’s learning. The PowerUP model of engagement is strengths-based. It encourages parents, families and communities to take ownership of the programme.
From 2016 to 2019, PowerUP families shared their stories through the Guided Talanoa Series.
The evaluations and case studies (2016-2019) can be found here.
Envirolink Evaluation
The Ministry of Business, Innovation and Employment commissioned us to review the Envirolink scheme and inform them of:
- how well the scheme operates, and
- whether it achieves its intended outcomes and provides value for money.
This was the first review of the scheme since it commenced in 2005.
The results of the review were positive, and we rated the scheme very good overall. The review also provided evidence that the scheme makes a worthwhile and valuable contribution by supporting eligible councils to engage with and use environmental science, and by supporting uptake by users. The report’s recommendations will inform future funding decisions relating to Envirolink.
Regional Growth Programme Evaluation
The Regional Growth Programme (RGP) Evaluation was commissioned by MBIE and MPI. It assessed whether the RGP worked as intended. The evaluation reviewed some aspects of the RGP implementation, systems and processes. It assessed the value of the results so far.
This evaluation focused on ways government agencies worked with each other. It also considered ways central agencies liaised with the regional stakeholders and with Māori in the regions. It was not an evaluation of regional economic progress. Lessons learned at central, regional and project levels may inform future regional initiatives.
The evaluation team included: Judy Oakden, Kataraina Pipi, Kellie Spee, Michelle Moss, Roxanne Smith, and Julian King.
There are four documents linked to this evaluation:
The Regional Growth Programme Evaluation Report (103 pages) – full evaluation findings and methodology.
Three infographics which provide:
an overview of the findings of the evaluation (1 A3 page) – for a general audience
key findings from the Kawerau Case Study (1 page) – for a general audience
key findings from the Extension 350 Case Study (1 page) – for a general audience.
Sustainable Farming Fund Evaluation
The Evaluation of the Sustainable Farming Fund (SFF) provided an independent, formal assessment confirming the SFF’s value. It also provided information to help ensure the SFF is well-positioned for the future by aligning more closely with Government objectives.
Judy Oakden led the SFF evaluation with team members Julian King and Dr Will Allen.
SFF Evaluation Summary Report (18 pages) – for a general audience
SFF Main Evaluation Report (69 pages) – full evaluation findings and methodology
Three Case Studies – a companion document to the SFF Evaluation (46 pages):
Protecting the sustainability of New Zealand vineyards
Top of the South: Setting an example for sustainable water quality
Sustainable development and podocarp restoration on Tuawhenua lands.
Evaluation of stakeholder perceptions of the implementation of the Waste Minimisation Act
The evaluation of stakeholder perceptions of the implementation of the Waste Minimisation Act looked into waste stakeholders’ perceptions of the early implementation phase of the Act and its short-term outcomes (2009‒2010). This baseline evaluation was undertaken by the Ministry for the Environment with consultants Judy Oakden and Kate McKegg of the Kinnect Group in late 2010. The evaluation used a mix of focus groups, key informant interviews and an online survey of 325 stakeholders.
Use of evaluation rubrics
Understanding the components of evaluative rubrics – new thoughts
In this e-book, Judy Oakden explores the different ways evaluative rubrics can be constructed from three basic components:
- key aspects of performance
- levels of performance
- importance of each aspect of performance
Here she shows some alternative ways she has combined the components in her own practice. She discusses the benefits and challenges of each approach.
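As a rough illustration only (not drawn from the e-book itself – the criteria names, levels and weighted-average synthesis below are invented for the sketch), the three components can be thought of as a simple data structure: each key aspect of performance carries level descriptors and a relative weight, and ratings against the levels can be synthesised into an overall judgement.

```python
# Hypothetical sketch of a rubric built from the three components above.
# All names and the weighting scheme are illustrative assumptions, not
# the approach recommended in the e-book.
from dataclasses import dataclass, field

# Ordered performance levels mapped to numeric scores (worst to best).
LEVELS = {"poor": 1, "adequate": 2, "good": 3, "excellent": 4}

@dataclass
class Criterion:
    """One key aspect of performance, with its relative importance."""
    name: str
    weight: float                                    # importance of this aspect
    descriptors: dict = field(default_factory=dict)  # level -> what it looks like

def overall_rating(ratings: dict, criteria: list) -> float:
    """Synthesise per-criterion level ratings into a weighted overall score."""
    total_weight = sum(c.weight for c in criteria)
    return sum(LEVELS[ratings[c.name]] * c.weight for c in criteria) / total_weight

criteria = [
    Criterion("reach", weight=1.0,
              descriptors={"good": "most intended participants took part"}),
    Criterion("quality of delivery", weight=2.0,
              descriptors={"good": "delivery matched the agreed design"}),
]

# 'excellent' on the more heavily weighted aspect lifts the overall score.
print(overall_rating({"reach": "good", "quality of delivery": "excellent"},
                     criteria))  # ≈ 3.67
```

A numeric weighted average is only one possible way to combine the components; as the e-book notes, each way of combining them has its own benefits and challenges.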
Evaluation building blocks: a guide
With colleagues from the Kinnect Group, we published an e-book on the use of rubrics in evaluation. This free downloadable guide provides a simple nine-step process for planning and undertaking an evaluation. The ideas in this book were developed over ten years of collaboration.
Release Announcement: Evaluation Building Blocks – A Guide
This e-book was recommended reading by Better Evaluation. Better Evaluation is a website which collects best evaluation practice from around the globe. It aims to build “evaluation practice, evaluation capacity strengthening and research and development in evaluation”.
Better Evaluation reviewer Patricia Rogers commented:
[The guide] is particularly strong [for developing] a framework to assess performance. [It] … has detailed examples of using global scales (rubrics) to synthesise both quantitative and qualitative data … [It helps] to avoid the common problems caused by relying only on Key Performance Indicators and targets. … I’d especially recommend it in terms of developing a framework for evaluating performance.
Chapter on evaluation in "Social science research in New Zealand: An Introduction"
“Chapter 13: Evaluation”, contributed to the textbook “Social science research in New Zealand: An introduction”, edited by Martin Tolich and Carl Davidson and published by Auckland University Press.
Social science research in New Zealand: An introduction (2018) is a textbook that is positioned as “the definitive introduction for students and practitioners undertaking social research in New Zealand”. Judy Oakden and Julian King contributed a chapter on evaluation which described the evaluation approach used for the Evaluation of the Sustainable Farming Fund. An important part of this chapter was a description of the use of rubrics throughout the project.
The editors observed at the start of this chapter:
“This is an important chapter because evaluation is a discipline that is often misunderstood both among other researchers and the users of research outputs. Evaluation differs from other research approaches because it focusses on understanding the value of something. … Much of what researchers think of as ‘leading-edge’ practices have come from evaluation, and the authors capture much of that excitement and innovation here”.
For more information, please click here.
Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence
This is a practice example which is part of Judy Oakden’s early writing on how evaluators can use rubrics to make evaluative judgements. It details how the use of rubrics supports robust data collection and frames the analysis and reporting. Judy Oakden wrote this detailed account of the process as part of the first Better Evaluation writeshop process, led by Irene Guijt.
Rubrics: A Method for Surfacing Values and Improving the Credibility of Evaluation, Journal of MultiDisciplinary Evaluation
This is a practice-based article by the Kinnect Group members (Julian King, Kate McKegg, Judy Oakden, Nan Wehipeihana). It shares the Group’s learnings on the use of evaluative rubrics to deal with the challenge of surfacing values and improving the credibility of evaluation. Evaluative rubrics enable values to be dealt with in a more transparent way. In the Group’s experience, when evaluators and evaluation stakeholders get clearer about values, evaluative judgments become more credible and warrantable.
Providing mentoring support to develop rubrics
To develop and use rubrics on a large, complex evaluation of the Food Systems Innovation (FSI) Initiative, team members sought mentoring from Pragmatica. This practice note, by Samantha Stone-Jovicich from CSIRO, describes the steps she went through and the challenges she overcame to develop rubrics for the FSI Initiative.
Samantha reflected afterwards:
[This was] a very large, complex project where we were trying to design a monitoring, evaluation and learning (MEL) system to capture the dynamics of innovation and its impacts. Judy helped us adapt and tailor a rubrics approach. Her abundance of knowledge and experience, coupled with her collegiality, collaborative nature, and flexibility and creativity, were instrumental in supporting us to incorporate a useful and fit-for-purpose rubrics approach into our MEL system.