A few weeks ago I attended the first day of the New England Universities Development Consortium’s annual conference. It’s a good place to see the latest economics research on a pretty wide variety of development topics, including microfinance. During one session that included presentations of four papers, I noticed that three were about “savings” but each, on closer inspection, had a very different definition of “savings.”
One paper was examining the demand for credit versus savings, but the savings in question was money set aside for less than two weeks. Another was evaluating a program to encourage savings among 8 and 9 year olds and measured account balances at the end of a school semester. The third discussed savings accounts held in formal banks in Nigeria, with balances far larger than those in the other papers.
So what are we talking about when we talk about savings? Read More
There’s a new weapon in the fight to expand financial access.
The Entrepreneurial Finance Lab, founded by faculty and students from the Harvard Kennedy School and Harvard Business School, is pioneering new personality-assessment-based tools to expand credit access. Survey-based measures of personality characteristics – such as ethics, character, intelligence, attitudes and beliefs – combined with measures of business skills turn out to be powerful predictors of loan repayment in real-world settings. The Entrepreneurial Finance Lab creates alternative credit scores based on these characteristics to expand credit access in partnership with banks and microfinance institutions from around the world.
The approach originates from research in both psychology and business administration . . . Read More
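EFL's actual scoring model isn't public, but the basic idea of turning survey-based trait measures into a repayment probability can be sketched with a logistic regression on synthetic data. Everything below (the feature names, effect sizes, and sample size) is illustrative, not EFL's methodology:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical standardized survey scores: integrity,
# fluid intelligence, and a business-skills assessment.
X = rng.normal(size=(n, 3))

# Synthetic "ground truth": repayment odds improve with each trait.
logits = 0.5 + 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.4 * X[:, 2]
repaid = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Fit a simple scoring model and rescale predicted repayment
# probabilities into a 0-100 alternative credit score.
model = LogisticRegression().fit(X, repaid)
scores = 100 * model.predict_proba(X)[:, 1]

print(scores.min(), scores.max())
```

In practice a lender would set a cutoff on the score (or feed it into an existing underwriting process) rather than use raw probabilities directly.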
I write this from Dhaka, where I am visiting for the second time to help get our mobile banking impact evaluation in motion. I am not here alone, however, and I wanted to devote this post to introducing the truly outstanding Bangladeshi economists, research staff and organizations who are our partners in this research study.
First, we are uniquely privileged to be working with Dr. Hassan Zaman as a co-principal investigator on this study. Dr. Zaman is the chief economist of Bangladesh’s central bank, although he will soon be returning to Washington, DC to take a director-level advisory position on South Asia at the World Bank. He spent much of his career prior to Bangladesh Bank at the World Bank and earlier worked for BRAC. He has generated a body of policy and academic work that reflects a diverse mix of interests in development, including on development and finance, and will lead the World Bank’s work on poverty reduction and human development in his new role . . . Read More
In mid-June the Stanford Social Innovation Review blogged the results of a survey they conducted. The survey’s purpose: to understand the role of academic research in the work of practitioners in a broad range of social, environmental and economic issue areas. Many of the 1,800 respondents described academic research as difficult to access, expensive, too narrow, and not relevant. Seventy percent cited the “difficulty of translating research findings into concrete action” as one of the reasons for a substantial gap between the two worlds.
The results of the survey brought to my mind strong words from former Freedom from Hunger CEO Chris Dunford about the usefulness and applicability of one specific type of academic research, randomized control trials . . . Read More
A theme on the social science blogs these days is “everything we know is wrong.”
The frequent citation of drug trials as the basis for sound social science experiments disguises an unsettling fact about medical research in general: it’s often statistically and causally naïve. Political scientist/economist Chris Blattman recently pointed to a piece documenting that a widely influential fish oil/heart disease study that had been used to sell millions of dollars of fish oil never directly measured heart disease in the population of interest. Emily Oster, an economist at the University of Chicago, is now writing regularly for data journalism site FiveThirtyEight on the spurious correlations in a lot of medical research. But it’s not just a problem of medical research. “As I teach my students,” Blattman wrote, “the first thing you should say to yourself as you open every book or research paper is, ‘This is almost certainly wrong’…Welcome to science.” . . . Read More
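One mechanism behind spurious correlations in medical research is simple multiple testing: screen enough candidate variables against an outcome and some will clear p < 0.05 purely by chance. A minimal simulation, in which every "biomarker" is random noise by construction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_patients, n_markers = 100, 200

# Purely random "biomarkers" and a random health outcome:
# by construction, no marker has any real effect.
markers = rng.normal(size=(n_patients, n_markers))
outcome = rng.normal(size=n_patients)

# Screen every marker against the outcome at p < 0.05.
pvals = np.array(
    [stats.pearsonr(markers[:, j], outcome)[1] for j in range(n_markers)]
)
false_hits = int((pvals < 0.05).sum())
print(false_hits)  # roughly 5% of 200 markers look "significant" by chance
```

Without a correction for multiple comparisons (Bonferroni, false discovery rate control), each of those chance hits is a publishable-looking "finding."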
This summer the Journal of Development Studies accepted a manuscript by Jonathan Morduch and myself laying out our critique of an influential microcredit study from the 1990s by Mark Pitt of Brown University and Shahidur Khandker of the World Bank. Our article should appear in the journal this year or next. The acceptance is a milestone for Jonathan and me, for it represents a ratification of our work, and is very long in coming.
It was 15 years ago that Jonathan first laid out his doubts about Pitt and Khandker (P&K). Pitt retorted the next year. And there the dispute rested, never adjudicated by journals, until I entered the picture 6 years ago by writing a program that, for the first time, allowed an exact replication of P&K’s math.
Jonathan and I have played a sort of doubles match with Mark and Shahid . . . Read More
When it comes to costs and benefits, we at FAI tend to focus on benefits. The recent release of the Compartamos microfinance impact evaluation was thus a big event in our office. With our heads in the academic literature, we tend to write a lot about RCTs and other ways to measure benefits of interventions.
We’re contributing to a problem, though. There’s a big danger in conflating impact and value. We can’t say much about the value of microfinance (or any other intervention) based on benefits alone. The most realistic proposition in favor of microfinance is that relatively small benefits are paired with relatively small costs, leading to a favorable cost-benefit ratio. That’s a hypothesis, of course, and it hinges on a careful reckoning of the cost data. Read More
In last week’s blog post, I suggested that self-reported data should be supplemented with objective sources of information from independent third-party entities. Sometimes, however, independent data sources simply aren’t available and researchers have no choice but to base their analysis on self-reported data. Under these circumstances, some data collection methodologies might be more useful than others in ensuring that self-reported data are reliable. In this post, I discuss several studies of the potential of the diaries methodology and alternative strategies to capture accurate self-reported data.
Klaus Deininger, Calogero Carletto, Sara Savastano and James Muwonge examine the effect of personal diaries on the quality of self-reported agricultural data in their study, “Can Diaries Help in Improving Agricultural Production Statistics? Evidence from Uganda.” In Uganda, a large part of crop output consists of continually harvested crops such as cassava and banana. Since these crops are harvested over long periods of time, farmers who are asked to report harvest data may have trouble recalling events that happened several months earlier . . . Read More
Program evaluations and policy proposals are only as good as the data upon which they are based. Although we all know this to be true, discussions about the reliability of data, especially self-reported data, have only recently emerged in the field of development economics. The other week, I highlighted two papers from the Journal of Development Economics’ Symposium on Measurement and Survey Design which discussed how recall bias might undermine the reliability of self-reported data. Even when recall bias is not at play, though, self-reported data might be threatened by respondents’ desire to misreport their activities so as to portray their behaviors in a more positive light.
Sarah Baird and Berk Özler explore this phenomenon as it relates to education in their study, “Examining the Reliability of Self-Reported Data on School Participation.” Many Conditional Cash Transfer (CCT) programs are evaluated based on self-reported data about school enrollment and attendance rates. However, the desire to give socially desirable answers or the belief that program funding is linked to evaluation results might lead survey participants to over-report their level of school participation. Baird and Özler test the extent to which self-reported data of school enrollment rates can be considered reliable in CCT evaluations of this nature . . . Read More
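One common way to quantify this kind of misreporting is to compare self-reports against administrative records for the same children. The sketch below uses entirely synthetic numbers (a 70% true enrollment rate and 30% over-reporting among the non-enrolled), not Baird and Özler's data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Synthetic "administrative records": whether each child is truly enrolled.
true_enrolled = rng.binomial(1, 0.70, size=n)

# Assumed reporting behavior: enrolled children report accurately,
# while non-enrolled children claim enrollment 30% of the time.
reported = np.where(
    true_enrolled == 1, 1, rng.binomial(1, 0.30, size=n)
)

true_rate = true_enrolled.mean()
reported_rate = reported.mean()
over_report = reported_rate - true_rate
print(f"admin: {true_rate:.2f}, self-report: {reported_rate:.2f}")
```

The gap between the two rates is exactly the bias a CCT evaluation would inherit if it relied on self-reports alone.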
A regular theme in our writing is the need for the microfinance industry to learn from and adapt to the needs of poor households. A few weeks ago, a new paper appeared based on an interesting attempt to test whether MFIs are interested in generating and using rigorous evidence. The researchers sent emails to 1,419 MFIs inquiring about their interest in "a partnership to randomly evaluate their programs." There were three versions of the email, however: 1) a neutral email, 2) an email that emphasized positive findings from other studies of microfinance, and 3) an email that emphasized "null" findings from other studies of microfinance.
Unsurprisingly, the positive emails had double the response rate of the negative emails. The authors interpret this finding as evidence of confirmation bias among MFIs--they are only interested in good news that backs up their existing beliefs, and less interested in learning how to improve . . . Read More
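Whether "double the response rate" could plausibly be chance is a question for a standard two-proportion z-test. The counts below are hypothetical, since the excerpt doesn't give the paper's actual per-arm figures:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in proportions (pooled SE)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-approximation two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 473 MFIs per arm, with the "positive" email
# getting twice as many responses as the "null" email.
z, p = two_proportion_z(80, 473, 40, 473)
print(round(z, 2), round(p, 4))
```

With arms of this size, a doubled response rate sits several standard errors from zero, so the gap is very unlikely to be sampling noise.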
In a recent post, Tim Ogden and I discussed the importance of having solid, reliable data on which to base program evaluations and policy decisions. The Journal of Development Economics explored this theme in last year’s Symposium on Measurement and Survey Design which featured more than a dozen papers on improving data quality in development research (Hat tip to Berk Özler of the World Bank’s Development Impact blog for pointing us to it).
An important discussion at the symposium was the extent to which self-reported data can be considered accurate and reliable. Because study participants are usually asked to report information after significant time has elapsed, self-reported data are often subject to recall bias and can be inaccurate or misleading. This post is the first in a three-part series that will explore the reliability of self-reported data through a discussion of papers featured at the symposium . . . Read More
What’s next? Jonathan Morduch says: Making RCTs more useful.
When you’re thirsty, that first gulp of water is really satisfying. But after months of just drinking water, you’ll likely start hoping for more from your beverages.
I think that’s where we are with RCTs of microfinance.
The first microfinance RCTs were refreshing. They quenched a thirst for any credible, rigorous evidence on microcredit impacts. No one was particularly hankering for data specifically on microfinance in Manila, Hyderabad, Morocco, or Bosnia. But that’s what we got. It didn’t particularly matter where the studies were from, or what the particular financial methodology was, or who exactly the customers were. Especially since the results were not only credible but surprising and provocative. Researchers were opportunistic in choosing sites and partners, and who can blame them? Read More
Last November, the Consumer Financial Protection Bureau’s Office of Financial Empowerment hosted a conference on “Empowering Low-Income and Economically Vulnerable Consumers: Making the Case through Access, Data and Scale.” A key highlight of the conference was a breakout session about the incentives and obstacles to collecting data in the field. Leading the session were representatives from LISC, NeighborWorks, CGAP and the University of North Carolina’s Center for Community Capital. Everyone agreed that we need more rigorous data. What was less clear was exactly how to get there. Two key questions emerged throughout the day: What outcomes are we measuring? And how do we collect data?
What outcomes are we measuring? Read More
We do our best (not always successfully) to keep up with new research relevant to finance, poverty and development. Today, I’ll be sharing highlights from some new papers by FAI affiliate Sendhil Mullainathan.
In “Behavioral Design: A New Approach to Development Policy,” Mullainathan and Saugato Datta advocate for employing a behaviorally-informed economic perspective to design development policies and programs. Since behavioral economics helps us understand why people behave as they do, analyzing development policies through a behavioral lens allows us to make better policy diagnoses, which in turn lead to better-designed policies.
Mullainathan and Datta outline three ways in which behavioral economics can improve program design. First, it can change how we diagnose problems . . . Read More
On October 3rd, FAI will host a conversation with Jonathan Morduch and David Roodman, a senior fellow at the Center for Global Development (CGD). The conversation will focus on Roodman’s new book, Due Diligence, which has been widely praised (but you should also check out some of the critiques) for its detailed, evidence-based look at the state of microfinance today.
Those familiar with Roodman from his work in microfinance may be unaware of his influential work in other areas of development. We thought we’d provide a quick overview of the other sides of David Roodman (though all of the sides feature an exceedingly careful attention to detail and data).
In addition to his work on microfinance for CGD, David also manages the Commitment to Development Index . . . Read More
The Curiosity rover’s Mars landing is only the most recent instance of the awe-inspiring advances made by the physical sciences. Our wonder at such achievements has even become codified in our language. “It’s not rocket science!” is the standard invocation to suggest a problem just requires common sense instead of the complex physics of, say, landing rovers on far-away planets. The phrase has been directed at everything from Social Security to healthcare, and yes, to poverty alleviation programs.
But, as I heard recently from researcher Duncan Watts, social science “is not rocket science—we’re actually pretty good at rocket science.” He proceeded to list a bunch of “hard” science things that humans have figured out quite well—vaccines for diseases, satellites in orbit, and any number of biological, chemical, and technological advances. The issues explored by “soft” science—how to get people vaccinated, prevent civil wars, and bring about gender equality—now that’s the hard stuff . . . Read More
This is the third (and I hope final for a while) post in a series on the standard critiques of randomized control trials (RCTs). The first post examined the External Validity Critique; the second took on the Transcendental Significance Critique. In both, I suggested that while the critiques aren’t invalid, they are typically overblown and rarely acknowledge that other evaluation approaches carry the same or similar challenges.
In this post, I want to lay out what I think the advocates of RCTs, including myself, could be doing better to maximize the short and long-term impact of RCT-based studies and of the movement itself.
First is to dial up the humility. I’ve argued elsewhere that the greatest threat to aid and charity is overpromising and inflated expectations . . . Read More
This is the second of three posts addressing the standard critiques of RCTs. In the last post I addressed the External Validity Critique. In this post I’ll take up the Transcendental Significance Critique—or put a different way, the “It doesn’t matter anyway” critique. In the final post in the series, I’ll discuss some of the problems of interpretation and implementation of findings from RCTs.
The Transcendental Significance Critique takes several different forms. One is evident in my interaction with Eric Meade on the Stanford Social Innovation Review Blog. This version suggests that RCTs don’t effectively shed light on a grand epistemological view of poverty and social change (this is distinct from the critique of theory-less RCTs, a separate topic entirely). Another version suggests that RCTs are irrelevant because they cannot be used to measure what really matters—which isn’t foreign aid or charitable programs. Angus Deaton’s version of the critique focuses on national policy and broad development initiatives, which can’t be field tested. Philip Auerswald’s version keys on entrepreneurship and economic dynamism, which he believes are the real drivers of development and change. Finally, there is a version of the critique that focuses on the static nature of any field experiment. The results of an RCT only tell you about a particular moment in time, and usually well after that moment has passed. This critique argues that the world is so dynamic that moment-in-time snapshots are not useful . . . Read More
Within development and philanthropy circles, there seems to be a recurring cycle of critique of randomized control trials. Every few months a variety of posts and articles pop up discussing the limitations of RCTs, attempting to make the point that RCTs are overhyped or at least substantially less useful than proponents assert.
For instance, Philip Auerswald, an economist at George Mason University who focuses on entrepreneurship, rehashed—though in slightly different form—some of the standard critiques this past week. After engaging in discussion in the comments on Phil’s site I thought it might be useful to address some of these common critiques in a more public and visible space.
The most important point to make up front is that RCTs do have limitations. They are by no means a perfect instrument even theoretically; there are also serious practical limitations in the way RCTs are deployed, reported and interpreted. The second most important point is that most of these limitations are shared by the alternatives to RCTs. I am most frustrated by critiques of RCTs that do not acknowledge this . . . Read More
I’m just beginning a year of much-awaited research time in Tokyo. I was planning to take a few weeks to settle in and lie low, but my eye was caught by an ambitious, bursting-at-the-seams new study, supported by Britain’s DFID and completed by independent researchers (Duvendack, Palmer-Jones, Copestake, Hooper, Loke, and Rao). The topic is one that I’ve written about often: “What is the evidence of the impact of microfinance on the well-being of poor people?”
Here are some thoughts, written during an early-morning round of jet lag.
The DFID study is a sprawl (17 appendices), obviously a major effort, and filled with technical observations. But I fear that it also will add confusion to a conversation that’s already muddled.
The biggest confusion focuses on the essential difference between
• Proving that something doesn’t work and
• Not being able to prove that it works.
In the first case, you’re able to rigorously show that the intervention makes no difference . . . Read More
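The distinction matters because of statistical power. A quick normal-approximation power calculation shows why a small study that "fails to prove it works" says very little: with a modest true effect, an underpowered trial will usually miss it, while a large one will usually find it. This is a generic illustration, not a calculation from the DFID study:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(effect_size, n_per_arm, z_crit=1.96):
    """Approximate power of a two-arm test with unit-variance outcomes,
    testing at the two-sided 5% level (normal approximation)."""
    se = math.sqrt(2 / n_per_arm)  # SE of the difference in means
    z = effect_size / se
    return 1 - normal_cdf(z_crit - z) + normal_cdf(-z_crit - z)

# A modest true effect (0.2 standard deviations):
small = power(0.2, 50)     # 50 people per arm
large = power(0.2, 1000)   # 1000 people per arm
print(round(small, 2), round(large, 2))
```

With 50 people per arm, the small study detects the real effect less than a fifth of the time; its null result is "not being able to prove it works," nothing more. Proving that something *doesn't* work requires showing the confidence interval excludes any meaningful effect, which demands far more data.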