Why waste precious resources on evaluation?

  • Authorship
    Dr. Jyotsna Puri (Jo)
    Former Head of the IEU
  • Article type Blog
  • Publication date 08 Jun 2018

In too many discussions, evaluations are portrayed either as a sledgehammer or as a millstone around the neck of programme managers. All too often, they are seen as a waste of money, resources and precious time.

In truth, during 23 years of being involved with evaluations, I have only ever seen two or three cases of a sledgehammer evaluation being wielded, Thor-like, to bring down the world of a project or programme.

Overwhelmingly, I have found evaluations to be extremely useful, particularly in two special ways.

First, when evaluations are independent, they bring credibility to findings, assuring users that results have been produced without interference or bias and can therefore be relied upon. This stands in contrast to the often unproven, 'rumoured' anecdotal successes of programmes, or the snickeringly touted 'failed programmes', which usually stay mired in he-said, she-said spirals. Second, evaluations help us learn. Here are a few examples.

Weakest links

Good theories of change help planning. But we need to go further: we need to identify a programme's possible 'weakest links'. These are best identified when an evaluator, working both independently and collaboratively, looks at the programme design without bias and can tell the programme designer exactly where its weakest links are.

My colleagues and I have seen this often. In one case, an evaluation set up a theory of change for a cookstoves programme in Ghana. The formative work showed that people initially used the improved, more efficient cookstoves, but that use dropped off dramatically after six months. We didn't need a full-fledged programme to know it wasn't working. In another programme, which was setting up SMS text reminders to parliamentarians as an accountability tool, the formative work for the evaluation showed that the number of parliamentarians using the service was so small that the programme was very unlikely to work. The programme was cancelled before it started. In both cases, formative evaluations helped learning.

Are you hi-fidelity?

Monitoring can do a good job of telling us where a programme is being implemented and how much of it has been implemented.

However, by including an impact evaluation from the beginning, evaluators can inform programme managers about the fidelity of implementation: Is the programme really being implemented as intended? Are the dosage and frequency of the programme exactly as planned? In this sense, building ex-ante impact evaluations into programme designs (rather than planning them ex-post) allows programmes to learn and adapt through real-time assessment.

Again, my colleagues and I helped do this with a very interesting project implemented by an NGO called Breakthrough in India. Building ex-ante impact evaluations into the programme design helped project staff learn whether community training, videos and other forms of awareness building were helping to change social behaviour. They were, but not in the way we would have thought (final results are forthcoming). Another example is a large, well-known assessment in Indonesia, which found that targeting the population not through a proxy means test alone (the method the government was using), but through a combination of community-based targeting and proxy means testing, increased people's satisfaction for the same levels of cash transfers. This method has since been adopted by the Government of Indonesia for its poverty alleviation programmes.

To scale or not to scale?

In many cases, measured, evaluative evidence helps teams decide whether or not a programme should be scaled up.

An evaluation of an anemia-alleviation programme in China started with the assumption that merely distributing iron tablets would relieve anemia. It was only after the project experimented with different incentives that the team realized that incentives provided to teachers, along with subsidies provided to students' families, led to success. Distributing information by itself had no effect. Similarly, in a study of agricultural insurance in developing countries, my co-authors and I found that, despite high climate variability, farmers do not buy weather insurance, even when it is actuarially fair. Behavioural 'nudges' are much more likely to induce farmers to change their behaviour. This underscored the importance of building learning from evaluations into projects.

Hmmm… what should become policy?

Cash transfer programmes have repeatedly provided robust evidence that they improve livelihoods, at least over short periods of time. So has a programme called 'graduating from ultra-poverty', a combination of training, coaching, subsistence-related transfers and business-related loans that has been implemented in six countries.

Similarly, in public health, corticosteroids became the standard of care only after evidence from seven different studies consistently showed that corticosteroids given to pregnant women likely to give birth prematurely reduced deaths among their babies.

Eureka moment!

If we build evaluations into programmes from the beginning, they can help us learn. Learning after a project is done and dusted is often too late.


Disclaimer: The views expressed in blogs are the author's own and do not necessarily reflect the views of the Independent Evaluation Unit of the Green Climate Fund.