Defending Liberia’s right to experiment, and a few questions

Liberia continues to attract criticism for its Partnership Schools for Liberia (PSL) pilot. Here is a recent news report already pronouncing the verdict on the programme:

Coalition for Transparency and Accountability in Education (COTAE) said in its report released last week Wednesday that the PPP is gradually but emphatically proving to be a failure and the education sector further weakening, presenting a vague future for a nation of impoverished and mostly illiterate citizens.

I have earlier written about how we must support Liberia in experimenting with this model of partnership schools. To recap, Liberia’s Ministry of Education acknowledged that “42 percent of primary age children remain out of school. And most of those who are enrolled are simply not receiving the quality of education they deserve and need” – commendably referring to the problems of both access and quality of education in the country. Conventional education systems have failed to deliver, and research from across the globe supports the view that simply having higher-paid or qualified permanent civil-service teachers does not yield results. In this context, PSL seeks to generate evidence, and to provide decision-makers in Liberia with the tools to iterate reforms to their largely dysfunctional schooling system. Liberia’s education system is not working, and it needs to test out bold new ideas. I therefore fully defend the government’s right to experiment.

It is sometimes hard to disentangle criticism of the concept of a Public-Private Partnership (PPP) from criticism of specific providers. Much of the criticism of the PSL seems to directly target the for-profit education company, Bridge International Academies (BIA). But BIA is only one of eight service providers, running 24 of the 93 schools (fewer than the originally intended 120) under the PSL.

It is no secret that BIA’s classroom cap (maximum 55 students) is denying students access to education by denying them access to the Bridge schools. These students unfortunately end up not enrolling in school at all—a situation that is counterproductive to government’s compulsory primary education policy. Some of those that are rejected end up in an overcrowded class in another nearby school that tries to accommodate them…

…Schools accommodating students who were denied access to BIA schools are overcrowded and face serious logistical challenges. In some instances, parents have hurriedly erected makeshift structures to accommodate students rejected by Bridge, but lack of teachers and other logistical challenges are still affecting the quality of education in these schools…

…COTAE also accused BIA of breaching the MOU with the government. “Some schools close before the stipulated time due to lack of or inconsistency of the feeding program for students. This breach has serious implications for the curriculum as all materials may not be covered. Students, mostly children, are expected to be in school from 7:30 a.m. to 3:45 p.m., but without food,” the report noted.

In recent months, critics, led prominently by ActionAid International, have sharpened their attacks. The Economist writes about the main concerns being voiced: one, that PSL operators have limited class sizes and are pushing out poor-performing students, and more broadly, will look to game the system to suit their methods; two, that operators are raising and spending philanthropic funds in these schools in addition to the government’s capitation (computed annually, per child) grant of $50; and three, that the business model of operators like BIA ends up channelling a significant proportion of the philanthropic funds raised for such programmes to people and systems located outside the recipient country.

There are clearly two separate sets of issues here. One, the question of the legality of practices followed in PSL schools. It is important to remember that, as in any Public-Private Partnership, the government needs to play an active oversight and regulatory role. It will not be up to the researchers (however independent they might be) to bring to light cases of operational deficiencies, or even malfeasance. If BIA and other providers are indulging in practices that violate the commitments made by the Government of Liberia to its citizens (and indeed, commitments made by the PSL providers to the GoL), those have to be addressed through the education system and law enforcement. Admittedly, it is easy to sit outside and demand that a government already suffering from capacity constraints play an active role and stand up to powerful donors and donor-funded multinational corporates/NGOs when there are instances of wrongdoing. But that is where critics and activists should focus their efforts – in supporting the government to monitor better and enforce standards.

The second set of issues that The Economist raised relates to the success or failure of the pilot, and its replicability. These are weighty criticisms, and are being addressed to varying degrees by the independent evaluation led by Innovations for Poverty Action (IPA). The researchers have set up a randomised controlled trial, where intervention schools were assigned randomly to the operators from a set of schools chosen for the evaluation. Critics, however, argue that the independent evaluation will not provide clear evidence on the PSL. See here and here for this debate, which will be fought out in the months and years to come.
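The basic logic of such an assignment can be sketched in a few lines of Python. This is purely illustrative – the school names, operator names, and the simple 50/50 treatment/control split are hypothetical, not the actual IPA design (which involved a more elaborate allocation of schools to operators):

```python
import random

def assign_schools(schools, operators, seed=42):
    """Randomly split a list of eligible schools into treatment and
    control, then randomly allocate each treatment school to an operator.

    Illustrative sketch only: names and the 50/50 split are hypothetical.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = schools[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]
    # each treatment school is matched to one (hypothetical) operator
    allocation = {school: rng.choice(operators) for school in treatment}
    return allocation, control

allocation, control = assign_schools(
    [f"school_{i}" for i in range(10)],
    ["operator_A", "operator_B"],
)
```

The point of randomisation is that, on average, treatment and control schools are comparable before the programme starts, so later differences in learning outcomes can be attributed to the intervention.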

The question of additional philanthropic funds being pumped into PSL schools by the operators is a tricky one for the evaluation. Different operators, depending on their ability and intent, will bring in varying amounts and types of additional investment. These investments help them operate within the terms of their agreement with the government, which stipulate that they cannot charge any school fees. This could make the programme entirely unviable even if the evaluation finds it successful. Providers like BIA might argue that unit costs would fall with scale, but there are obviously no assurances that this will happen. This will also be partially determined by the extent to which the government eventually wants to regulate private providers in the education sector. This is of course a secondary question – first, PSL has to deliver improvements in learning – but one that the government and donors should already be thinking about.

A view on Hope; and a pertinent question to randomistas


…What concerns me is that the direct interventions that are targeted towards addressing such psycho-social constraints are not highlighted or even mentioned as they are not neat enough for RCT measurement. Instead the reliance is on the outcome variables and a black box of intervention package which is not very helpful for intervention design. Short of component randomization which is impractical, evaluation experts should come up with credible ways to speak to this need. Analytical narration of interventions that directly address such constraints could be a starting point…

A comment by IMatin (who I think is Imran Matin) on this Economist article on the JPAL study on Bandhan’s ultra-poor intervention in West Bengal

The I-Told-You-So test for research questions

A few weeks back, Innovations for Poverty Action (IPA) (my former employer) posed this question to readers, with reference to two studies (RCTs, of course) of small and micro enterprises (SMEs) in Ghana and Mexico –

In the summer issue of SSIR, we will discuss the results of these two studies in more detail. But here, we’d like YOU to predict the results. We are doing this because people often have preconceptions about solving poverty issues, and rigorous evaluations often challenge conventional wisdom. It’s always easy to say, “I told you so” when there is no clear record of what the predictions were; ideally, people could register their predictions in advance

Why this teaser, you ask? Here is the answer…

First, it would allow stakeholders to stake their claim (pun intended) on their predictions and be held to acclaim when they are right or to have their opinions challenged when they are wrong. Second, such a market could help donors, practitioners, and policymakers make decisions about poverty programs, by engaging the market’s collective wisdom

…or what can also be called the ‘I told you so’ test.

Useful, for sure! There have been multiple occasions when I have tried to explain why one needs to go through three years of arduous research to answer a research question whose answer seems “common sense”. Simply put, when it comes to assessing the impact of projects for the poor, guesses are not good enough. Who should take the test? Probably some researchers, but practitioners definitely should – governments, NGOs, donors. All of them have much to gain from learning whether their predictions turn out to be right. Will it increase the value of research in their eyes, though? I am not so sure… it could turn out both ways, I guess.
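To make the ‘I told you so’ test concrete, here is a minimal sketch of how registered predictions could be scored once a study’s results are in, using the standard Brier score. The predictors, probabilities, and outcome below are entirely hypothetical:

```python
def brier_score(prediction, outcome):
    """Squared error between a probability forecast and a 0/1 outcome.
    Lower is better; always guessing 50% scores 0.25."""
    return (prediction - outcome) ** 2

# Hypothetical registry of advance predictions:
# P(the SME intervention raises business profits)
registry = {
    "donor":        0.8,
    "practitioner": 0.6,
    "sceptic":      0.3,
}

outcome = 0  # suppose the evaluation finds no effect

# Score each registered prediction against the realised outcome
scores = {who: brier_score(p, outcome) for who, p in registry.items()}
```

With the outcome known, the record is unambiguous: in this made-up example the sceptic scores best, and no one can retroactively claim to have “told you so”.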

By the way, can we think of asking project participants what their prediction on a particular project is? Bet that would throw up some exciting results…

Insights into female voting behaviour in rural Pakistan

Chris Blattman flags a new World Bank paper by Ghazala Mansuri and Xavier Gine. The authors find that information dissemination (in the form of pre-election voter awareness campaigns) increased voting among women by about 12% on average. Alongside this enhanced political participation, women also displayed greater independence in deciding which candidates to vote for. In addition, the study finds significant information spill-overs, making such interventions scalable. The authors report these findings by –

conducting a field experiment to assess the impact of information on female turnout and independence of candidate choice. The setting for the experiment is rural Pakistan where women still face significant barriers to effective political participation, despite legislative reforms aimed at enhancing female participation in public life (Zia and Bari, 1999).

Kudos to the researchers for choosing rural Pakistan, and not some part of, say, rural India (far easier from a logistics and security point of view). The intervention and the research methods make for great reading. An interesting follow-up would be to go back to these communities and present these results. It would be great to get their thoughts on these findings. Also, a couple of questions come to mind –
  1. In the light of these findings, would political parties be inclined to step up their voter outreach campaigns? In this study, the vote share of the losing political party seems to have gone up as a result of the information campaign.
  2. Do voters (men and women) truly understand that ‘every vote counts’? Or do they go out to vote only to reward or punish, or under other patron-client relationships? This is linked to the point above – if voters didn’t think their vote counted, why would they have gone out and voted for the party that was almost sure to lose anyway?

Treated women also voted in larger numbers for PML-F which was seen as less likely to win, thereby changing the vote share of the losing party in sample polling stations. This is perhaps even more remarkable given that the field teams were mostly PPPP supporters. This suggests that the intervention empowered women and thus may have modified the rational calculus of voting (Downs, 1957) by including a utility gain from the mere act of voting (Riker and Ordeshook, 1968)
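For reference, the ‘rational calculus of voting’ invoked in this passage is conventionally written as:

```latex
R = pB - C + D
```

where $R$ is the net reward from voting, $p$ the probability that one’s vote is decisive, $B$ the benefit if the preferred candidate wins, $C$ the cost of voting, and $D$ the expressive utility from the act of voting itself – the term Riker and Ordeshook (1968) added to Downs’s (1957) formulation. The finding quoted above suggests the intervention raised $D$ for treated women, making it rational to vote even for a candidate unlikely to win.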

More praise for Esther Duflo

Chris Udry, writing about Esther, says –

There are few precedents for Esther in our profession; right from the start of her career as a new assistant professor, she has taken on a rare combination of professional roles as a cutting-edge researcher, a catalyst of research for a new generation of scholars, a policy activist, and a public intellectual. Instead of diffusing her impact, this coupling of her intellectual agenda with her passionate social activism has begun to reshape scholarship, policy, public debate, and the everyday lives of many of the world’s poor.

Definitely worth reading in full. As RCTs gain in importance, their flag bearers need to combine academic brilliance with a willingness to subject themselves to higher levels of public scrutiny. The role of the academic-policy activist goes a long way in dispelling notions of academics being confined to their ivory towers.

PS: I have enormous respect for Chris and have always been impressed by his ability to synthesise lucidly. He does the same with Esther’s body of work in this paper. Another example – this one from his own work on agriculture in Africa – is here.

Economists as anthropologists

…In fact, the most powerful moments in the book are almost touchingly old-fashioned. In the chapter on education, there is a poignant moment that tells you more about the ways in which our education system fails the poor than any randomised trial would. This is the moment where one of their interlocutors uses the phrase “children from homes like ours…”, highlighting a persistent problem of treating the poor as another species. Banerjee and Duflo indict the system for its low expectations of what poor students can accomplish; these low expectations constitute the poverty trap the poor are trying to escape. Non-economists may have an interest in exaggerating this aspect. But the qualities of research that stand out most vividly in this book are not the randomised trials, but the richness with which Duflo and Banerjee bring the poor into the conversation. We are grateful to randomised trials because they have turned economists into first-rate anthropologists.

Pratap Bhanu Mehta on Poor Economics. For more, see Ed Carr

Thoughts on (and from) yet-unread randomistas’ books

Why I am looking forward to reading these books:
More Than Good Intentions and Poor Economics were published last month. By a happy coincidence, I have worked in Ghana and India – two countries that feature prominently in many of the accounts in both the books. Also, these books have stories from field experiments conducted by IPA and CMF (an org I have worked with).
On RCTs, ‘big ideas’ and potential alliances:
From the book websites and the numerous (mostly positive) reviews, the books seem to contain many examples of interventions that have worked and of those that haven’t. Put together, they seem to debunk the notion of ‘big ideas’ that can wipe out poverty and deprivation. Serious implementers know this – when partnering with researchers, they were not always looking to build ‘big ideas’ about what worked in development.

RCTs can identify interventions that work and those that don’t, and serve as a starting point for developing theories about real-world development problems that are broader than the findings of separate studies. The natural follow-up would be to combine RCTs with rigorous qualitative work to delve into the processes through which the identified impact was achieved – thereby positioning the two realms of research not as competitors, but as fruitful collaborators. And no, I am not just being idealistic – such collaborations are beginning to happen – see the ‘graduation pilot’ evaluations, for instance.

A tribute to implementers, those that are willing to learn:
Like any publication of RCT results, these books are, in part, a tribute to the partner organisations that signed up to implement these studies as part of their operations – which I know from experience to be a costly and (often) tedious undertaking. As much as RCTs mean researchers spending time on the ground engaging with real-world programme implementation from the very start, they also usually involve partner organisations that are brave and willing to experiment with their implementation models, while subjecting themselves to an external evaluation.
The implementing partners I have worked closely with were mostly looking for solutions that could work for them. Sure, everyone benefits from learning about other experiments that work, but serious implementers are also aware that there are no blueprints – that to make a good idea work, they have to tweak it to suit their context and capacities. It’s only natural, therefore, that the process evaluations and qualitative studies that accompany RCTs would help implementers learn more about the pathways to the kind of impact their interventions achieve.