‘Rigorous’ impact assessment was increasingly demanded. The so-called gold standard for this became randomised controlled trials (RCTs). These can make sense for medical research, where there are many highly standardised units (people and their bodies) and inputs (immunisations, medicines, treatments), but misfit the complex realities of social and much other change, with their uncontrolled conditions, multiple treatments, multiple and indeterminate causation, and unpredictable emergence.
In such contexts, RCTs are liable to postpone and limit learning, and to be costly, slow and inconclusive. Another contested manifestation of this control orientation has been the logframe. Thought by many in the late 1990s to fit realities and programme and project needs so badly and to have so many defects that it would die a natural death, the logframe has to the contrary flourished and spread to become a methodological monoculture in donor requirements.
So in the name of rigour and accountability, what fits and works better in the controllable, predictable, standardised and measurable conditions of the things and procedures paradigm has been increasingly applied to the uncontrollable, unpredictable, diverse and less measurable paradigm of people and processes.
Robert blogged at Aid on the edge.
I hold Robert in great respect and cherish every moment I have spent around him during my year at IDS. I fully agree with his critique of the ‘things’ paradigm and share his loathing for logframes.
But this does not convincingly explain to me why quantitative evaluation, as represented by an RCT, is inferior to an alternative research methodology. Yes, sole reliance on RCTs is undesirable – but the same holds for sole reliance on any of its alternatives. In fact, RCTs can be used to study the efficacy of qualitative participatory methodologies – as was the case in the Bandhan TUP study – and may actually validate the claim that a participatory selection mechanism is more equitable than a top-down, follow-the-government-list eligibility criterion.
Next, Robert has always told us that we should “ask them”. A well-designed and well-implemented questionnaire does exactly that. In large-scale field surveys, all we do is “ask them”. We don’t record our own opinions or impressions; in fact, field surveyors are usually explicitly instructed to refrain from interpretation and record exactly what respondents say. Survey data does boil people’s responses down to numbers – but that doesn’t mean anyone should ignore what goes into a number. As I see it, RCTs are not the ultimate truth – but they need not limit learning either. Quite the opposite: an RCT, as one tool in the toolkit of evaluation methodologies, helps to push the limits of our knowledge and insights – and that cannot be a bad thing.
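To make the logic concrete, the core of an RCT analysis is simply a comparison of mean outcomes between randomly assigned groups. The sketch below is a toy simulation – every number (sample size, treatment effect, outcome distribution) is hypothetical, and a real evaluation would of course add standard errors, pre-registration and much besides – but it shows why randomisation lets the difference in means recover the effect:

```python
import random
import statistics

def simulate_rct(n=20000, treatment_effect=0.5, seed=1):
    """Toy RCT: randomly assign units to treatment or control,
    then compare mean outcomes. All parameters are hypothetical."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        baseline = rng.gauss(0, 1)      # unobserved individual variation
        if rng.random() < 0.5:          # coin-flip assignment
            treated.append(baseline + treatment_effect)
        else:
            control.append(baseline)
    # Randomisation balances the baselines, so the difference in
    # means estimates the treatment effect.
    return statistics.mean(treated) - statistics.mean(control)

print(round(simulate_rct(), 2))
```

Because assignment is random, the unobserved `baseline` variation washes out in expectation, and the estimate lands close to the true effect of 0.5 for large samples.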
This was Part 1/2 from Robert. Can’t wait for the second half!