I’m just beginning a year of much-awaited research time in Tokyo. I was planning to take a few weeks to settle in and lie low, but my eye was caught by an ambitious, bursting-at-the-seams new study, supported by Britain’s DFID and completed by independent researchers (Duvendack, Palmer-Jones, Copestake, Hooper, Loke, and Rao). The topic is one that I’ve written about often: “What is the evidence of the impact of microfinance on the well-being of poor people?”
Here are some thoughts, written during an early-morning round of jet lag.
The DFID study is a sprawl (17 appendices), obviously a major effort, and filled with technical observations. But I fear that it will also add confusion to a conversation that’s already muddled.
The biggest confusion centers on the essential difference between
• Proving that something doesn’t work and
• Not being able to prove that it works.
In the first case, you’re able to rigorously show that the intervention makes no difference. You can line up like with like, and you can show that people who received the intervention are no better off than people who did not. If the result is generalizable, the intervention is a waste of money and time. Forget about it. Move on.
In the second case, you don’t have a definitive study. Instead, you have theory and anecdotes. You have studies with one limitation or another and much suggestive evidence. The intervention is also complicated, with pros and cons. But you lack anything that can prove the case. So, you stick to your guns and redouble efforts to do better evaluations.
The state of the literature on microfinance impacts is better characterized as “not being able to prove that it works” than as “proving that it doesn’t work.” I’ve long argued that, because of the lack of clarity on big impacts, we need to cut the hype—and, based on the available evidence, proceed with cautious optimism. We also need to draw on a variety of kinds of evidence (which means more serious qualitative studies, not more weak statistical studies).
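The distinction is easy to see in a toy simulation (the numbers here are hypothetical, chosen for illustration, and have nothing to do with actual microfinance data): suppose an intervention has a small but real benefit. A small study will usually fail to reject “no effect,” even though the effect is there; a much larger study will usually detect it. Failing to find an effect is not the same as finding no effect.

```python
# Toy illustration with made-up numbers: a small true effect that an
# underpowered study usually misses. Failing to reject the null of
# "no effect" here does NOT prove the effect is absent.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2  # assumed small positive effect, in standard-deviation units

def study_rejects_null(n_per_arm):
    """Simulate one two-arm study; True if it rejects 'no effect' at ~5%."""
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n_per_arm)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / n_per_arm
          + statistics.variance(control) / n_per_arm) ** 0.5
    return abs(diff / se) > 1.96  # two-sided large-sample z-test

def rejection_rate(n_per_arm, trials=2000):
    """Fraction of simulated studies that detect the (real) effect."""
    return sum(study_rejects_null(n_per_arm) for _ in range(trials)) / trials

small = rejection_rate(50)    # underpowered study: usually misses the effect
large = rejection_rate(1000)  # well-powered study: usually finds it
print(f"small study detects the effect about {small:.0%} of the time")
print(f"large study detects the effect about {large:.0%} of the time")
```

With these made-up parameters, the small study detects the effect less than a quarter of the time, while the large one detects it almost always. A pile of small, inconclusive studies is exactly the “not being able to prove that it works” situation, and it is evidence for better studies, not evidence of no impact.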
The DFID study appears to suggest that enough is enough. More precisely, given that the microfinance agenda may actually be harmful to some people some of the time, taking the principled position of cautious optimism may in fact endanger families at risk. The bottom line, the study concludes, is that the wait-and-see view could do damage.
This is a big distinction, and it’s the source of confusion in the DFID study.
The study’s conclusion in the executive summary (p. 4), especially the final line, seems to embrace the wait-and-see side (though the reference to a 1992 study seems beside the point):
Thus, our report shows that almost all impact evaluations of microfinance suffer from weak methodologies and inadequate data (as already argued by Adams and von Pischke 1992), thus the reliability of impact estimates are adversely affected. This can lead to misconceptions about the actual effects of a microfinance programme, thereby diverting attention from the search for perhaps more pro-poor interventions. Therefore, it is of interest to the development community to engage with evaluation techniques and to understand their limitations, so that more reliable evidence of impact can be provided in order to lead to better outcomes for the poor.
Yet, the conclusion of the overall study resonates differently, barely disguised by technical jargon (pp. 72-73):
Failing to contradict the alternate hypothesis encourages one to believe there is a positive effect and therefore to tend to (continue to) reject the null (no effect) hypothesis even though it (no effect) may be true. This of course depends on the decision procedure (see Neyman and Pearson 1933, for a detailed discussion on decision rules) and weighing the costs and benefits of an intervention. Even for critics of these evaluations the absence of robust evidence rejecting the null hypothesis of no impact has not led to a rejection of belief in the beneficent impacts of microfinance (Armendáriz de Aghion and Morduch 2010, p310; Roodman and Morduch 2009, p39-40), since it allows the possibility that more robust evidence (from better designed, executed and analysed studies) could allow rejection of this null. However, given the possibility that much of the enthusiasm for microfinance could be constructed around other powerful but not necessarily benign, from the point of view of poor people, policy agendas (Bateman 2010, Roy 2010), this failure to seriously consider the limitations of microfinance as a poverty reduction approach, amounts in our view to a failure to take seriously the results of appropriate critical evaluation of evaluations.
I had to read this twice before I appreciated the political argument. Their big concern—here at least—is not so much with the idea that microfinance may be doing damage through over-indebting customers. The big concern appears to be that microfinance is dominating the policy agenda (perhaps even shaping that agenda) in ways that disadvantage poor families, squeezing out attention that could go to other interventions that might do even better. It’s an argument about an opportunity cost at a high level—about missed opportunities for government ministers to put resources into doing things that would be relatively better. It is not an argument about the possible harm directly created by microfinance itself.
What’s striking is that the argument can be true even if microfinance shows clear, demonstrated positive benefits. The argument can be true even if microfinance does absolute good, just not as much absolute good as other interventions.
In this sense, it’s an odd argument with which to close a methodological review of microfinance impacts. It’s really an argument about the dynamics of high-level policy, but that seems best separated from this kind of technical review of on-the-ground realities. The big point may or may not be right; it’s certainly contentious. Most oddly, it’s tossed off in a way that doesn’t do justice to the political and policy argument, or at least doesn’t really begin to spell it out. Because of that, the frame undermines the care with which the larger technical messages of the bulk of the study are crafted, allowing them to be made subservient to the political point.
So it is that Bangladesh’s bdnews24 seized on the study to argue that “Microcredit is a mirage, says UK study,” describing the “damning report” as skewering the microfinance “myth.”
If you read the study (mainly recommended for researchers and technical types), you’ll see it doesn’t skewer myths. Instead it patiently discusses topics like propensity score matching methods and instrumental variables estimation. Moreover, there’s plenty there to support the view that we ought to keep doing better studies while maintaining a critical eye, and without forsaking a decent dose of optimism.