A theme on the social science blogs these days is “everything we know is wrong.”
The frequent citation of drug trials as the basis for sound social science experiments disguises an unsettling fact about medical research in general: it's often statistically and causally naïve. Political scientist/economist Chris Blattman recently pointed to a piece documenting that a widely influential fish oil/heart disease study, which had been used to sell millions of dollars' worth of fish oil, never directly measured heart disease in the population of interest. Emily Oster, an economist at the University of Chicago, is now writing regularly for the data journalism site FiveThirtyEight on the spurious correlations that pervade medical research. But the problem is not confined to medicine. "As I teach my students," Blattman wrote, "the first thing you should say to yourself as you open every book or research paper is, 'This is almost certainly wrong'…Welcome to science."
Yes, skepticism is probably the correct posture to assume when first approaching a new research finding. But before we succumb completely to the dismal inevitability of "wrong" science, it's worth noting that economist Ted Miguel of UC Berkeley has a compelling presentation and series of papers on why we need more transparency in social science research. He is championing an agenda of "three core practices" to address the problem: disclosing details about how data is collected and analyzed; publicly registering plans for the analysis and tests to be conducted before they are executed; and making data and materials available so that other researchers can test and replicate results.
In our subfield of financial access, the yawning chasm of what we don't know can seem especially deep, and a new paper published in the Journal of Development Effectiveness helps to illustrate why. The paper, by researchers from the University of East Anglia and Maastricht University, asks what the existing body of evidence has to say about the impact of microcredit on women's empowerment. The authors attempt a meta-analysis of the hundreds of papers written on the subject and find that "effect sizes are small." But that's not really the main finding of the paper. The main finding is that it is impossible to do a credible meta-analysis of the literature, in part because the studies are methodologically weak, and in part because of the heterogeneous nature of the variables and outcomes studied.
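For readers unfamiliar with the mechanics, a meta-analysis typically pools per-study effect sizes by inverse-variance weighting and then asks how much the studies disagree beyond chance. The sketch below uses entirely made-up numbers (not the paper's actual data or method) to show why heterogeneity matters: Cochran's Q and the I² statistic quantify how much of the variation across studies reflects genuine differences rather than sampling noise.

```python
# Illustrative fixed-effect meta-analysis with hypothetical data:
# pool study effect sizes by inverse-variance weighting, then
# check heterogeneity with Cochran's Q and the I^2 statistic.
import math

# (effect size, standard error) for five hypothetical studies --
# invented numbers for illustration, not real microcredit results
studies = [(0.10, 0.05), (0.02, 0.08), (0.25, 0.10),
           (-0.05, 0.06), (0.12, 0.07)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q: weighted squared deviations from the pooled estimate
q = sum(w * (es - pooled) ** 2 for (es, _), w in zip(studies, weights))
df = len(studies) - 1
# I^2: the share of total variation attributable to heterogeneity
i_squared = max(0.0, (q - df) / q)

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0%}")
```

When I² is high, as in this toy example, a single pooled number is misleading; and when the studies don't even measure the same outcome in the same way, as the paper argues is the case for "empowerment," the pooling step can't legitimately be done at all.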
If you imagine a spectrum that has on one end relatively straightforward, yes/no, easy-to-measure outcomes like survival (alive vs. not alive) or HIV status (positive vs. negative), then an outcome like female empowerment is clearly on the other, fuzzier end of that spectrum. Part of the problem has to do with choosing which observable behaviors or characteristics to measure. Empowerment can be defined (and has been, for the purposes of the studies considered in the meta-analysis) as: participation in household decision-making; control over assets; physical mobility; participation in political campaigns and public protests; and knowledge of accounting practices (see the paper for specific citations).
All this underscores the need for scholars to continue attempting syntheses and meta-analyses to fight the tyranny of the single study, and to write papers that are methodologically strong enough, and that measure enough similar outcomes in similar ways, to be usefully included in those meta-analyses. It also underscores the need for all of us, as consumers of research, to be good Bayesians and to fight the availability heuristic.
In what must be one of the most humble concluding remarks in the history of economics papers, the authors note:
…There appears to be a gap between the often optimistic (societal) belief in the capacity of microcredit to ameliorate the position of women in decision-making processes within the household on the one hand, and the empirical evidence base on the other hand, a gap which our meta-analysis should not be thought to have bridged.
Thanks to David McKenzie for the pointer.