This was the theme of an event at IDS organised by the PPSC team recently. The participants were concerned about the increasing emphasis on measurable results, which they felt was sometimes at the cost of focusing on ‘social transformation’. According to them, the problem lies in the current results-driven funding environment.
Eyben argues that there are many reasons for the new funding environment, which, she says, fails to recognise the complexity of development and risks losing the voices and knowledge of local actors.
These include supporters’ and taxpayers’ lack of appetite for complex messages; increased pressure for quick ‘wins’ to demonstrate that aid works; and a belief that challenges such as high levels of maternal mortality in many developing countries are solely technical problems for which straightforward technical solutions can be found.
The group identified the following as the way forward:
· Building ‘counter-narratives’ that emphasise accountability to those for whom international aid exists.
· Developing innovative communication channels to better communicate the complex nature of development to the public.
· Developing different methods of reporting, so that the requirement for aggregated numbers at Northern policy level captures the character of programming in complex development contexts.
· Collaborating with people working for change inside donor agencies.
· Re-claiming the term ‘value for money’.
· Enhancing organisational learning and reflective practice to nurture out-of-the-box thinking and approaches.
· Scrutinising the role of big business in development aid and its impact on discourse, quality and accountability.
Working in the measurement industry, I can see how this can go horribly wrong. Is the PPSC advocating a return to the time when development interventions were primarily ideology-driven, with results being secondary? I do not think so. What they are pushing back against is the tendency of aid agencies to aim for what is easily measurable and, in turn, to ignore programmes that cannot be measured as easily. Worse still, they may be ignoring sectors that do not lend themselves to easy evaluation, such as justice systems or taxation.
I wonder whether the challenge is not for us to come up with a more versatile tool-kit of evaluation methods. We cannot be method-driven, neither in research nor in implementation – and that itself is not a new idea. But with events such as this, which push back on measurement, one needs to be absolutely sure the message does not get muddled in the rhetoric.