
Mind the gap: the fragile state of the impact evidence base

Written by Richard Mallett on 3 December 2012, 15:58

“What do we do if we need results to justify a development or humanitarian programme, but don’t have the evidence to demonstrate what works?”


As Rachel Slater and Samuel Carpenter recently argued, getting aid programming right in fragile and conflict-affected situations is challenging but hugely important. From a purely monetary perspective, there’s $46.7 billion at stake – which is a lot of money to demonstrate good value for.

A sizable chunk of this is spent on programmes aimed at supporting livelihoods and stimulating economic recovery, and many donors, DFID included, are increasingly looking to justify the specifics of this spend on the basis of results. Many aspects of this approach are hard to disagree with: the argument that we should know what works before throwing around funds is a no-brainer, and programme effectiveness is to many a more sensible mechanism for allocating aid than, say, the logics of self-interest and soft power.

But do we actually know what works?

Partly as a result of the sharpened focus on results and value-for-money, DFID country offices are now required to assemble business cases for new spending that cite evidence to justify their decisions. However, a new review of the evidence on growth and livelihoods in conflict-affected situations suggests that there is surprisingly little out there for them to draw on. Despite the range of programmes on offer to aid agencies and governments wanting to protect livelihoods and promote economic recovery – from public works programmes to the distribution of seeds and tools – in many cases the impact data just aren't there. Much of the time, it seems, we simply don’t know whether programmes are working for beneficiaries, having no effect at all or, worst case scenario, making things worse. (It should be noted that although we are talking primarily about the micro-level impacts of programmes here rather than the meso- and macro-level impacts of reforms, it is also understood that many developing countries similarly suffer from a lack of data on macroeconomic performance – see page 8 of this newsletter from the Centre for the Study of African Economies).

This may come as some surprise to those who have spent any time with the burgeoning literature on livelihood and economic programming in conflict-affected situations. There is no shortage of claims and recommendations to be found within the abundance of donor reports and policy briefs, suggesting that the impact evidence base is pretty strong and that our level of knowledge is pretty good. But as soon as we start asking serious questions about the sources for claims and the basis for recommendations, their mask of certainty and assuredness starts to slip. Study methodologies are rarely discussed in any detail; sometimes, they are barely mentioned at all. For something so straightforward – and so fundamental – this is baffling.

Studies that are clear on methodology and that examine impact are massively in the minority. One illustration of this emerges from our review. As part of our review methodology – and in an attempt to inject some additional rigour into the process – we undertook two systematic reviews in addition to more orthodox review practices. We wanted to know about the impacts of two separate interventions – seeds-and-tools programmes and ‘markets for the poor’ (M4P) interventions – in countries defined as fragile and / or conflict-affected. Even without specifying which outcomes we were interested in, our two systematic reviews yielded a depressingly low number of relevant studies – nine on seeds-and-tools and just three on M4P – and, of these, the quality was generally low.

What might explain this sizeable gap in the evidence base?

It’s difficult to be sure, but there may be a number of reasons why there is so little evidence of impact. In no particular order:

  • Doing impact evaluation well is not easy or cheap. Studies that take impact, causality and attribution seriously take a long time to do and attract substantial costs – even more so in difficult contexts.
  • Fund programmes, not studies. In conflict-affected situations, donors are faced with a huge number of urgent humanitarian and recovery needs. Funding research may not be at the top of their list of priorities when there are other, more pressing things to invest in.
  • Assumptions of effectiveness can prove remarkably resilient. To many, it may seem obvious that giving people jobs in war zones is a good thing to do – why spend money on research that will simply tell us what we already know? Deductive logic such as this is certainly compelling, and often convincing, but research can turn conventional wisdom on its head.
  • The truth might hurt. If a donor has been funding programme x for several years, it may not be in their interest to then fund research that tells them they’ve been doing it wrong.
  • We are measuring impact! Many studies we came across in our review used ‘impact’ to refer to how well a programme functions in terms of its own design – i.e. was it completed on time? Was the right amount of, say, seed distributed? This may be one way of measuring success, but it doesn’t tell us anything about what the programme did for beneficiaries.

Managing the gap

So, what do we do if we need results to justify a programme, but don’t have the evidence to demonstrate what works? In such circumstances, it may be tempting to argue for donors to reduce the burden of proof required to justify decisions, to ‘lower the bar’ for evidence-based policy making in conflict-affected countries.

But we’d argue that this would be the wrong approach to take. It is possible to do high-quality, methodologically rigorous research in difficult places, and there are plenty of cases where this has been done. Take, for example, the Households in Conflict Network and the MICROCON research programme, which together have generated a valuable body of robust, fascinating and methodologically clear evidence on the micro-level causes and consequences of war. Or how about the multi-year study of rural change in eight Afghan villages conducted by researchers at the Afghanistan Research and Evaluation Unit? Such examples provide clear demonstrations that doing high-quality research in conflict-affected environments is not beyond the limits of possibility.

Thus, rather than throw out the results-based agenda altogether, we would instead suggest a number of recommendations that might help move us forward.

First, there should be an obligation for people doing both research and monitoring and evaluation in conflicts to be much more systematic and rigorous in presenting their methodologies.

Second, given the current lack of impact evidence, donors and aid agencies need to be more cautious in their policy recommendations.

Third, there’s also a need to guard against conflict exceptionalism – not everything about conflict-affected places is qualitatively different. Aid actors working in conflicts could do better at drawing on evidence from other development contexts and considering how approaches could be adapted to deal with the challenges of conflict (as has been argued in relation to delivering social protection).

Finally, we simply need more and better research – which is what researchers always say, but this time there really is a clear-cut case for it!

Read the full Working Paper: Growth and Livelihoods in Fragile and Conflict-Affected Situations and the accompanying 4-page Briefing Paper: Growth and livelihoods in conflict-affected situations: what do we know?.


Have you come across the work of the Poverty Action Lab, based out of MIT? It is run by two economists and its focus is on conducting randomised controlled trials to find evidence of policies and interventions that work. Check them out. http://www.povertyactionlab.org/
Roger Hawcroft
I found this a stimulating and valuable paper, particularly as I am a relative tyro when it comes to this area, despite having supported 'causes' and 'action' for several decades. However, as someone who has had a strong interest in measuring effectiveness and in benchmarking programs in my area of work (information and library management), I do think that there is a misconception promoted in the way that "impact" is used in the paper: "We are measuring impact! Many studies we came across in our review used ‘impact’ to refer to how well a programme functions in terms of its own design – i.e. was it completed on time? Was the right amount of, say, seed distributed? This may be one way of measuring success, but it doesn’t tell us anything about what the programme did for beneficiaries." I have no problem with the substance of the point being made, but I suggest that the indicators described do not constitute a study of "impact" but rather a confusion of "outputs" and "outcomes". I think that in any measurement it is important to use an appropriate scale and to understand what value that scale can actually indicate. In terms of programs, my view would be that the following measures are important:

  • inputs – $ / time / effort / assets etc. put into the program
  • processes – organisation / communication / management / administration
  • outputs – what is actually delivered, e.g. food / education / wells / nets
  • outcomes – the change that occurs from before output to after output
  • impact – what results from the change, e.g. social / educational / health

"Impact" is clearly what we want to achieve with aid programs; however, it is the analysis of each of those components, their relationship to one another and the final impact which needs to be measured and analysed. If the impact does not match the goal, then the reason will generally be found within those other components – for instance, not enough capital, wasteful processes, unnecessary bureaucracy, too little delivery to make change, or an unanticipated response or reaction to the change. In a nutshell, I believe that all too often outputs, outcomes and impact are used interchangeably, and that this is unhelpful. Having said all that, I remind you that I acknowledge my relative ignorance about the field to which this report relates, and therefore if my analysis is ignorant or unsuitable, I apologise. My intention is not to denigrate but to offer constructive criticism.
Andrew Mack
Thanks for the heads up. This SLRC project is REALLY important and the analysis is terrific. One thing: much of the non-econometric cross-national analysis of the impact of war on education outcomes may well suffer from selection bias. Excellent individual studies (by Patricia Justino and other members of HICN) have tended, naturally enough, to focus on the worst cases, where needs are greatest. But drawing conclusions from these studies means generalising from the worst cases. This is essentially what UNESCO does in its 2011 Education For All report, which focuses on the impact of war on education. But, as we point out in Ch. 4 of our most recent Human Security Report (see http://hsrgroup.org/docs/Publications/HSR2012/HSRP2012_Chapter%204.pdf), comparative small-N quantitative studies by two education research institutes (one of them UNESCO's own Institute for Statistics) find that in a surprising number of cases educational outcomes improve during wartime – even in the areas worst affected by conflict. Finally, a large-N cross-national econometric study by the Peace Research Institute Oslo, done for WDR 2011, found that on average there was no statistically significant effect of conflict on educational outcomes. Our conclusion: conflict is not necessarily "development in reverse". Andrew Mack, Human Security Report Project, www.hsrgroup.org
Rich Mallett
Thanks for the comments and apologies for the late response. Roger: glad you enjoyed the post, and thanks for highlighting the misconception held by some that output = outcome = impact. I tried to make the point in the blog that these are often conflated – which I think is your main argument – but maybe this didn't come across as clearly as I would have liked. So thanks for bringing it up. Andrew: an excellent point about the dangers of generalising from studies whose results are limited to particular times and places. And your conclusion – that conflict is not necessarily development in reverse (contrary to what Paul Collier and others may have reasoned in the past) – resonates strongly with some of the key messages emerging from our evidence review, which draws on many of the latest micro-level quant studies (http://securelivelihoods.org/publications_details.aspx?resourceid=153&). Rich



Welcome to SLRC's blog.

This blog will feature reflections from our team of researchers on the practicalities of actually conducting research in conflict-affected situations. We will also be posting guest blogs written by key researchers and practitioners working on livelihoods, basic services and social protection in conflict-affected situations.