
Making RCTs Better

This is the third (and, I hope, final for a while) post in a series on the standard critiques of randomized controlled trials (RCTs). The first post examined the External Validity Critique; the second took on the Transcendental Significance Critique. In both, I suggested that while the critiques aren’t invalid, they are typically overblown and rarely acknowledge that other evaluation approaches carry the same or similar challenges.

In this post, I want to lay out what I think the advocates of RCTs, including myself, could be doing better to maximize the short- and long-term impact of RCT-based studies and of the movement itself.

First is to dial up the humility. I’ve argued elsewhere that the greatest threat to aid and charity is overpromising and inflated expectations. That’s equally true of asking RCTs to carry too much weight in the quest for truth and effective programs. Part of the defense of RCTs in development rests on the fact that the method is required in the field where the stakes are highest: medical care. But we are finally getting to the point of acknowledging that somewhere around 80% of papers published in medical journals cannot be replicated, and that retractions of published papers are on the rise. If you want more detail on this issue, see here, here, or here. RCTs are the best we’ve got when it comes to establishing cause and effect, but all human endeavors, whether programs or tests of those programs, have flaws and particularities that may take decades to uncover. In the ongoing battle to convince the general aid and development world that RCTs are a critical tool for making the world a better place, it is all too easy, for me at least, to overstate the case for a particular study or for the approach as a whole.

Second is to fight the academic (and journal) power. One of the reasons that such an astounding number of medical studies cannot be replicated (and that we’re only finding this out now) is publication bias: journals are biased toward publishing only papers that produce large and positive results, or that dramatically overturn conventional wisdom. Journals are especially biased against printing papers that replicate, or fail to replicate, earlier papers. This is as true of economics journals as it is of medical journals. There is also an important feedback loop between academia and funding that makes this bias even worse: those seeking tenure or research funding are often judged primarily by their publication record. So there is tremendous pressure on researchers to “find” results that are publishable.

There are a number of ways to fight this power nexus. Funders and non-profits should commit to an open-access, publish-everything model. That means that rather than judging research or funding opportunities by the publication record of a researcher, or by the prospects for publication of new research, opportunities should be evaluated on their own merits, with a requirement that all data and findings be freely available via the Internet if nowhere else.

Another approach is one explained by GiveWell (where I serve on the board): researchers should publish their expectations and specifications in advance of beginning research. This will make it clearer how much effort is being expended in any particular study to “find” results.

Third is to remember the audience. The benefit of RCTs is not just expanding knowledge; it’s creating useful knowledge that can be applied by funders, agencies and NGOs. Unfortunately, the academic power nexus comes into play here as well. RCTs today are primarily driven by academics, who have an obvious bias toward questions of academic rather than practical interest. It’s time for NGOs and funders to take back the knowledge agenda and provide both the funding and the opportunities to research the questions that are directly useful to them. Chris Blattman has some useful thoughts on this. Academic researchers, on the other hand, can certainly spend more time, no matter what their interests, collaborating with NGOs to ensure that questions of implementation and program management are usefully addressed. If you have a chance to hear Chris Dunford (Freedom from Hunger) speak on this topic, make sure you take the opportunity.

I remain firmly convinced that RCTs of development interventions are desperately needed and that we need to accelerate the pace of new studies and replications of existing studies. I’m encouraged by what I see already happening in this regard, but I eagerly await the day when all claims of impact are backed by robust evidence, publicly scrutinized, and still taken with a measure of well-intentioned skepticism.

Timothy Ogden is an executive partner at Sona Partners, the editor in chief of Philanthropy Action, and co-author of Toyota Under Fire. He also blogs at HBR and SSIR.

Note: CGAP has recently published a summary of what we can learn from the growing collection of randomized studies of microfinance. The summary is compiled from papers and presentations from the Microfinance Impact and Innovations Conference in 2010 and features contributions from CGAP, the Financial Access Initiative, IPA, and J-PAL.

