‘Mechanism mapping’ for policy design

I just finished reading a recent working paper by Martin Williams, titled “External validity and policy adaptation: From impact evaluation to policy design”.

In this paper, Martin tackles the question – how will a policymaker apply evidence available to her to design a policy/programme that will fix a particular problem at hand? He first takes us through the ways in which we think of this currently – primarily by attempts to strengthen the external validity of evaluations – and points out the limitations of these approaches. The central critique is that most of this thinking puts the evaluators/researchers at the centre and tries to devise ways in which the evidence generated by their research can be generalised beyond their specific study samples. This is at odds with what a policymaker (in this paper, a public official in a given country) needs in order to make decisions about how to use evidence from elsewhere to design a policy/programme for her specific context.

The answer, Martin suggests, is ‘mechanism mapping’ – a five-step process where the public official lays out:

Step 1: The theory of change (ToC) of the programme that generated the evidence at hand;

Step 2: The context within which the programme ToC worked;

Step 3: The context at the destination – local factors that will affect a potential replication;

Step 4: Using the above, interrogate the evidence against the new context and suggest design modifications; and

Step 5: Iterate, until the gaps are plugged.
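The five steps above can be sketched as a small data structure; this is a minimal sketch of my own, with hypothetical names (none of these identifiers come from the paper), and with the rich qualitative "contexts" crudely reduced to key-value pairs:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MechanismMap:
    # Step 1: causal steps in the original programme's theory of change
    theory_of_change: List[str]
    # Step 2: the contextual factor that enabled each step at the source
    source_context: Dict[str, str]
    # Step 3: the corresponding local factor at the destination
    destination_context: Dict[str, str]
    # Step 4: design modifications proposed so far
    modifications: List[str] = field(default_factory=list)

    def gaps(self) -> List[str]:
        """Steps whose enabling context differs at the destination."""
        return [step for step in self.theory_of_change
                if self.source_context.get(step) != self.destination_context.get(step)]

    def iterate(self) -> None:
        """Step 5: propose a modification for every remaining gap."""
        for step in self.gaps():
            self.modifications.append(f"adapt '{step}' to the destination context")
```

The sketch only makes the loop in Step 5 concrete – in practice each "gap" is a judgement call, not a string comparison.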

Sounds reasonably straightforward, and it would seem to work well as a way of breaking the problem down for a public official who needs inputs into programme design. At the same time, introducing a structured way to do this lends a degree of rigour that will benefit the process.

A point made repeatedly in this paper is that a lot of the thinking around research, evaluation and evidence is focused on the researchers/evaluators. As a result, many of these conversations focus on whether (and how) a certain body of evidence that has been generated would apply ‘elsewhere’, without necessarily specifying a destination for this policy design. Thinking about this from the perspective of the public official ensures that the problem is framed as “what tools do public officials need to apply evidence to their specific context” instead of “how can our evidence be applied in the design of public policy elsewhere”. This switch in perspective allows us to do more than just empathise with the constraints that public officials function within. It enables us to think of solutions that focus on the end-user, even if that might mean trade-offs in statistical rigour.

This brings me to an important factor in how this would work: the availability of information, and the institutional set-up within which the mechanism mapping might take place. How well equipped is a public official in the capital city of a developing country to do this? Who should he consult? Will he? How decentralised is policymaking in that country, and how does that factor in? My first frame of reference is India, so please pardon my concern for size and complexity. It will ultimately boil down to setting reasonable limits on such consultation, and on the amount of information that is fed into the exercise.

Another factor to consider is the capacity of public officials. At a time when even researchers rarely engage fully with (or bother about) complex theories of change, assuming that public officials will do this on their own requires a certain faith in their capacity. Martin’s core experience of bureaucracy and public management comes from Ghana, where he spent two years working within the Ministry of Trade and Industry. This is reflected in what I think is a key characteristic of this paper – Martin’s ability to think like a public official. Working in a stable state with a reasonably functional public sector, as opposed to a nascent or failing state, shapes one’s perspective of state and bureaucratic capacity in significant ways. If your canvas is the former, you are more likely to trust that public officials can competently design and implement policies, and that their work takes place within a stable policy context. But even so, public officials will need training and support. And researchers will need to be trained to resist the temptation to turn up their noses at these inevitably ‘messy’ exercises.

Below is an illustration (this is not from the paper) of how a public official might approach this process, beyond going through the ‘mechanism mapping exercise’ – by considering the combinations thrown up by the interaction between “complexity of the ToC” and “complexity of Context”. For simplicity, I present four scenarios, and in each, the responses a public official might have in order to move ahead with policy design. The top-right quadrant represents a complex programme ToC that needs to be applied to a complex context – a big challenge, where the response has to be a willingness to experiment and iterate. At the other extreme is the bottom-left quadrant, which is likely to be satisfied by technical fixes. Note that this figure does not directly mention ‘information gaps’, although a high level of complexity indicates a high probability of information asymmetries on both counts – in the ToC as well as in the Context.
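The 2x2 can be written down as a simple lookup. In this sketch of mine, only the top-right and bottom-left cells follow the text above; the labels for the two off-diagonal cells are my own illustrative assumptions:

```python
def suggested_response(toc_complexity: str, context_complexity: str) -> str:
    """Map (ToC complexity, Context complexity) to a plausible response
    for a public official. Only the ("high", "high") and ("low", "low")
    cells come from the discussion above; the other two are assumptions."""
    responses = {
        ("low", "low"): "technical fixes are likely to suffice",
        ("low", "high"): "adapt delivery to local conditions before scaling",
        ("high", "low"): "invest first in understanding the causal chain",
        ("high", "high"): "be willing to experiment and iterate",
    }
    return responses[(toc_complexity, context_complexity)]
```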


Finally, I did not forget to ask myself this – “did we not already know all this?” As with a lot of the work on ‘adaptive management’ and PDIA, I think the honest answer is that we do, but certainly not enough of it, and not in a systematic manner. Good programmes travel, and policy adaptation is a continuous process – public officials (and other practitioners) learn and adapt continuously. ‘Mechanism mapping’ is a powerful tool to systematise this iterative learning process and, potentially, shorten the learning curve. In a context where the application of evidence to policy design cannot be taken for granted, this paper and its illustration of ‘mechanism mapping’ are an important contribution.

Here is the policy memo. What we need next is a training module for public officials – I hear it’s coming soon, to a website near you…


Defending Liberia’s right to experiment, and a few questions

Liberia continues to attract criticism for its Partnership Schools for Liberia (PSL) pilot. Here is a recent news report already pronouncing the verdict on the programme:

Coalition for Transparency and Accountability in Education (COTAE) said in its report released last week Wednesday that the PPP is gradually but emphatically proving to be a failure and the education sector further weakening, presenting a vague future for a nation of impoverished and mostly illiterate citizens.

I have earlier written about how we must support Liberia in experimenting with this model of partnership schools. To recap, Liberia’s Ministry of Education acknowledged that “42 percent of primary age children remain out of school. And most of those who are enrolled are simply not receiving the quality of education they deserve and need” – commendably referring to the problems of both access to and quality of education in the country. Conventional education systems have failed to deliver, and research from across the globe supports the view that simply having higher-paid or better-qualified permanent civil service teachers does not yield results. In this context, PSL seeks to generate evidence and provide decision-makers in Liberia with the tools to iterate reforms to their largely dysfunctional schooling system. Liberia’s education system is not working, and it needs to test out bold new ideas. I therefore fully defend the government’s right to experiment.

It is sometimes hard to disentangle the criticism of the concept of a Public-Private Partnership (PPP) from that of specific providers. Much of the criticism of the PSL seems to be directly targeting the for-profit education company, Bridge International Academies (BIA). But BIA are only one of the eight service providers, running 24 out of the 93 schools (fewer than the originally intended 120) under the PSL.

It is no secret that BIA’s classroom cap (maximum 55 students) is denying students access to education by denying them access to the Bridge schools. These students unfortunately end up not enrolling in school at all—a situation that is counterproductive to government’s compulsory primary education policy. Some of those that are rejected end up in an overcrowded class in another nearby school that tries to accommodate them…

…Schools accommodating students who were denied access to BIA schools are overcrowded and face serious logistical challenges. In some instances, parents have hurriedly erected makeshift structures to accommodate students rejected by Bridge, but lack of teachers and other logistical challenges are still affecting the quality of education in these schools…

…COTAE also accused BIA of breaching the MOU with the government. “Some schools close before the stipulated time due to lack of or inconsistency of the feeding program for students. This breach has serious implications for the curriculum as all materials may not be covered. Students, mostly children, are expected to be in school from 7:30 a.m. to 3:45 p.m., but without food,” the report noted.

In recent months, critics, led prominently by ActionAid International, have sharpened their attacks. The Economist writes about the main concerns being voiced: one, that PSL operators have limited class sizes and are pushing out poor-performing students, and more broadly, will look to game the system to suit their methods; two, that operators are raising and spending philanthropic funds in these schools in addition to the government’s capitation grant (computed annually, per child) of $50; and three, that the business model of operators like BIA ends up channelling a significant proportion of the philanthropic funds raised for such programmes to people and systems located outside the recipient country.

There are clearly two separate sets of issues here. One, the question of the legality of practices followed in PSL schools. It is important to remember that, as in any Public-Private Partnership, the government needs to play an active oversight and regulatory role. It will not be up to the researchers (however independent they might be) to bring to light cases of operational deficiencies, or even malfeasance. If BIA and other providers are indulging in practices that violate the commitments made by the Government of Liberia to its citizens (and indeed, the commitments made by the PSL providers to the GoL), those have to be addressed through the education system and law enforcement. Admittedly, it is easy to sit outside and demand that a government already suffering from capacity constraints play an active role and stand up to powerful donors and donor-funded multinational corporates/NGOs when there are instances of wrongdoing. But that is where critics and activists should focus their efforts – in supporting the government to monitor better and enforce standards.

The second set of issues that The Economist raised relates to the success or failure of the pilot, and its replicability. These are weighty criticisms, and they are being addressed to varying degrees by the independent evaluation led by Innovations for Poverty Action (IPA). The researchers have set up a randomised controlled trial, in which intervention schools were assigned randomly to the operators from a set of schools chosen for the evaluation. Critics, however, argue that the independent evaluation will not provide clear evidence on the PSL. See here and here for this debate, which will be fought out in the months and years to come.
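The assignment design can be sketched in a few lines. This is a deliberately simplified sketch of mine – the actual IPA evaluation used a more elaborate matched design, and all names here are hypothetical:

```python
import random

def assign_psl_schools(school_ids, operators, seed=42):
    """Randomly split a pre-selected pool of schools into treatment and
    control, then spread treatment schools across operators round-robin.
    A simplified illustration, not the evaluation's actual procedure."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    pool = list(school_ids)
    rng.shuffle(pool)
    half = len(pool) // 2
    treatment, control = pool[:half], pool[half:]
    assignment = {school: operators[i % len(operators)]
                  for i, school in enumerate(treatment)}
    return assignment, control
```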

The question of additional philanthropic funds being pumped into PSL schools by the operators is a tricky one for the evaluation. Different operators, according to their ability and intent, will bring in varying amounts and types of additional investment. These investments help them operate within the terms of their agreement with the government, which stipulate that they cannot charge any school fees. This could make the programme entirely unviable even if it comes out successful in the evaluation. Providers like BIA might argue that unit costs would fall with scale, but there are obviously no assurances that this will happen. Much will also depend on the extent to which the government eventually wants to regulate private providers in the education sector. This is of course a secondary question – first, PSL has to deliver improvements in learning – but one that the government and donors should already be thinking about.

A view on Hope; and a pertinent question to randomistas

…What concerns me is that the direct interventions that are targeted towards addressing such psycho-social constraints are not highlighted or even mentioned as they are not neat enough for RCT measurement. Instead the reliance is on the outcome variables and a black box of intervention package which is not very helpful for intervention design. Short of component randomization which is impractical, evaluation experts should come up with credible ways to speak to this need. Analytical narration of interventions that directly address such constraints could be a starting point…

A comment by IMatin, who I think is Imran Matin, on this Economist article on the J-PAL study of Bandhan’s ultra-poor intervention in West Bengal.

The I-Told-You-So test for research questions

A few weeks back, Innovations for Poverty Action (IPA) (my former employer) posed this question to readers with reference to two small and micro-enterprise (SME) studies in Ghana and Mexico (RCTs, of course) –

In the summer issue of SSIR, we will discuss the results of these two studies in more detail. But here, we’d like YOU to predict the results. We are doing this because people often have preconceptions about solving poverty issues, and rigorous evaluations often challenge conventional wisdom. It’s always easy to say, “I told you so” when there is no clear record of what the predictions were; ideally, people could register their predictions in advance

Why this teaser, you ask? Here is the answer…

First, it would allow stakeholders to stake their claim (pun intended) on their predictions and be held to acclaim when they are right or to have their opinions challenged when they are wrong. Second, such a market could help donors, practitioners, and policymakers make decisions about poverty programs, by engaging the market’s collective wisdom

…or what can also be called the ‘I told you so’ test.

Useful, for sure! There have been multiple occasions when I have tried to explain why one needs to go through three years of arduous research to answer a research question whose answer seems like “common sense”. Simply put, when it comes to assessing the impact of projects for the poor, guesses are not good enough. Who should be taking the test? Probably some researchers, but practitioners certainly should – governments, NGOs, donors. All of them have much to gain from learning whether their predictions turn out to be right. Will it increase the value of research in their eyes, though? I am not so sure…it could turn out either way, I guess.
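A registered prediction is only useful if it can be scored once results come in. One simple way to do that – a sketch of mine, not part of IPA’s exercise – is a Brier score:

```python
def brier_score(predictions, outcomes):
    """Mean squared distance between predicted probabilities and binary
    outcomes (1 = the predicted effect materialised); lower is better."""
    if len(predictions) != len(outcomes):
        raise ValueError("need one outcome per prediction")
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)
```

A confident and correct predictor scores near 0, a confident but wrong one near 1, and always hedging at 0.5 scores 0.25 – so the score rewards both accuracy and honest confidence.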

By the way, can we think of asking project participants what their prediction for a particular project is? I bet that would throw up some exciting results…

Insights into female voting behaviour in rural Pakistan

Chris Blattman flags a new World Bank paper by Ghazala Mansuri and Xavier Giné. The authors find that information dissemination (in the form of pre-election voter awareness campaigns) increased voting among women by about 12% on average. Alongside this enhanced political participation, women also displayed greater independence in deciding which candidates to vote for. In addition, the study finds significant information spill-overs, making such interventions scalable. The authors report these findings after –

conducting a field experiment to assess the impact of information on female turnout and independence of candidate choice. The setting for the experiment is rural Pakistan where women still face significant barriers to effective political participation, despite legislative reforms aimed at enhancing female participation in public life (Zia and Bari, 1999).

Kudos to the researchers for choosing rural Pakistan, and not some part of, say, rural India (far easier from a logistics and security point of view). The intervention and the research methods make for great reading. An interesting follow-up would be to go back to these communities and present these results; it would be great to get their thoughts on the findings. Also, a couple of questions come to mind –
  1. In the light of these findings, would political parties be inclined to step up their voter outreach campaigns? In this study, the vote-share of the losing political party seems to have gone up as a result of the information campaign.
  2. Do voters (men and women) truly understand that ‘every vote counts’? Or do they go out to vote only to reward or punish, or under other patron-client relationships? This is linked to the point above – if voters did not think that their vote counted, why would they have gone out and voted for the party that was almost sure to lose anyway?

Treated women also voted in larger numbers for PML-F which was seen as less likely to win, thereby changing the vote share of the losing party in sample polling stations. This is perhaps even more remarkable given that the field teams were mostly PPPP supporters. This suggests that the intervention empowered women and thus may have modified the rational calculus of voting (Downs, 1957) by including a utility gain from the mere act of voting (Riker and Ordeshook, 1968)

More praise for Esther Duflo

Chris Udry, writing about Esther, says –

There are few precedents for Esther in our profession; right from the start of her career as a new assistant professor, she has taken on a rare combination of professional roles as a cutting-edge researcher, a catalyst of research for a new generation of scholars, a policy activist, and a public intellectual. Instead of diffusing her impact, this coupling of her intellectual agenda with her passionate social activism has begun to reshape scholarship, policy, public debate, and the everyday lives of many of the world’s poor.

Definitely worth reading in full. As RCTs gain in importance, their flag-bearers need to combine academic brilliance with a willingness to subject themselves to higher levels of public scrutiny. The role of the academic-policy activist goes a long way in dispelling the notion that academics are confined to their ivory towers.

PS: I have enormous respect for Chris and have always been impressed by his ability to synthesise lucidly. He does the same with Esther’s body of work in this paper. Another example – this one from his own work on agriculture in Africa – is here.

Economists as anthropologists

…In fact, the most powerful moments in the book are almost touchingly old-fashioned. In the chapter on education, there is a poignant moment that tells you more about the ways in which our education system fails the poor than any randomised trial would. This is the moment where one of their interlocutors uses the phrase “children from homes like ours…”, highlighting a persistent problem of treating the poor as another species. Banerjee and Duflo indict the system for its low expectations of what poor students can accomplish; these low expectations constitute the poverty trap the poor are trying to escape. Non-economists may have an interest in exaggerating this aspect. But the qualities of research that stand out most vividly in this book are not the randomised trials, but the richness with which Duflo and Banerjee bring the poor into the conversation. We are grateful to randomised trials because they have turned economists into first-rate anthropologists.

Pratap Bhanu Mehta on Poor Economics. For more, see Ed Carr.