Mind the gap: the fragile state of the impact evidence base

“What do we do if we need results to justify a development or humanitarian programme, but don’t have the evidence to demonstrate what works?”

As Rachel Slater and Samuel Carpenter recently argued [^], getting aid programming right in fragile and conflict-affected situations is challenging but hugely important. From a purely monetary perspective, there’s $46.7 billion at stake – which is a lot of money to demonstrate good value [^] for.

A sizable chunk of this is spent on programmes aimed at supporting livelihoods and stimulating economic recovery, and many donors, DFID included, are increasingly looking to justify the specifics of this spend on the basis of results [^]. Many aspects of this approach are hard to disagree with: the argument that we should know what works before throwing around funds is a no-brainer, and programme effectiveness is to many a more sensible mechanism for allocating aid than, say, the logics of self-interest and soft power.

But do we actually know what works?

Partly as a result of the sharpened focus on results and value-for-money, DFID country offices are now required to assemble business cases for new spending that cite evidence to justify their decisions. However, a new review of the evidence on growth and livelihoods in conflict-affected situations [^] suggests that there is surprisingly little out there for them to draw on. Despite the range of programmes on offer to aid agencies and governments wanting to protect livelihoods and promote economic recovery – from public works programmes [^] to the distribution of seeds and tools [^] – in many cases the impact data just aren’t there. Much of the time, it seems, we simply don’t know whether programmes are working for beneficiaries, having no effect at all or, worst case scenario, making things worse. (It should be noted that although we are talking primarily about the micro-level impacts of programmes here, rather than the meso- and macro-level impacts of reforms, many developing countries also suffer from a lack of data on macroeconomic performance – see page 8 of this newsletter from the Centre for the Study of African Economies [^]).

This may come as some surprise to those who have spent any time with the burgeoning literature on livelihood and economic programming in conflict-affected situations. There is no shortage of claims and recommendations to be found within the abundance of donor reports and policy briefs, suggesting that the impact evidence base is pretty strong and that our level of knowledge is pretty good. But as soon as we start asking serious questions about the sources for claims and the basis for recommendations, their mask of certainty and assuredness starts to slip. Study methodologies are rarely discussed in any detail; sometimes, they are barely mentioned at all. For something so straightforward – and so fundamental – this is baffling.

Studies that are clear on methodology and that examine impact are massively in the minority. One illustration of this emerges from our review. As part of our review methodology – and in an attempt to inject some additional rigour into the process – we undertook two systematic reviews [^] in addition to more orthodox review practices. We wanted to know about the impacts of two separate interventions – seeds-and-tools programmes and ‘markets for the poor’ (M4P [^]) interventions – in countries defined as fragile and/or conflict-affected. Even without specifying which outcomes we were interested in, our two systematic reviews yielded a depressingly low number of relevant studies – nine on seeds-and-tools and just three on M4P – and, of these, the quality was generally low.

What might explain this sizeable gap in the evidence base?

It’s difficult to be sure, but there may be a number of reasons why there is so little evidence of impact. In no particular order:

  • Doing impact evaluation well is not easy or cheap. Studies that take impact, causality and attribution seriously take a long time to do and attract substantial costs – even more so in difficult contexts.
  • Fund programmes, not studies. In conflict-affected situations, donors are faced with a huge number of urgent humanitarian and recovery needs. Funding research may not be at the top of their list of priorities when there are other, more pressing things to invest in.
  • Assumptions of effectiveness can prove remarkably resilient. To many, it may seem obvious that giving people jobs in war zones is a good thing to do – why spend money on research that will simply tell us what we already know? Deductive logic such as this is certainly compelling, and often convincing, but research can turn conventional wisdom on its head.
  • The truth might hurt. If a donor has been funding programme x for several years, it may not be in their interest to then fund research that tells them they’ve been doing it wrong.
  • We are measuring impact! Many studies we came across in our review used ‘impact’ to refer to how well a programme functions in terms of its own design – i.e. was it completed on time? Was the right amount of, say, seed distributed? This may be one way of measuring success, but it doesn’t tell us anything about what the programme did for beneficiaries.


Managing the gap

So, what do we do if we need results to justify a programme, but don’t have the evidence to demonstrate what works? In such circumstances, it may be tempting to argue for donors to reduce the burden of proof required to justify decisions, to ‘lower the bar’ for evidence-based policy making in conflict-affected countries.

But we’d argue that this would be the wrong approach to take. It is possible to do high quality, methodologically rigorous research in difficult places, and there are plenty of cases where this has been done. Take, for example, the Households in Conflict Network [^] and the MICROCON [^] research programme, which, together, have generated a valuable body of robust, fascinating and methodologically clear evidence on the micro-level causes and consequences of war. Or how about the multi-year study of rural change [^] in eight Afghan villages conducted by researchers at the Afghanistan Research and Evaluation Unit [^]? Such examples provide clear demonstrations that doing high quality research in conflict-affected environments is not beyond the limits of possibility.

Thus, rather than throw out the results-based agenda altogether, we would instead offer a number of recommendations that might help move us forward.

First, there should be an obligation on those conducting research, monitoring and evaluation in conflicts to be much more systematic and rigorous in presenting their methodologies.

Second, given the current lack of impact evidence, donors and aid agencies need to be more cautious in their policy recommendations.

Third, there’s also a need to guard against conflict exceptionalism – not everything about conflict-affected places is qualitatively different. Aid actors working in conflicts could do better at drawing on evidence from other development contexts and considering how approaches could be adapted to deal with the challenges of conflict (as has been argued in relation to delivering social protection [^]).

Finally, we simply need more and better research… which is what researchers always think, but this time there really is a clear-cut case for it!

Read the full Working Paper: Growth and Livelihoods in Fragile and Conflict-Affected Situations [^] and the accompanying 4-page Briefing Paper: Growth and livelihoods in conflict-affected situations: what do we know?