BRAC, the world’s largest development NGO, examines the limitations of cost-benefit analyses and how to improve research methodologies for more informed policy and investment decisions.
By Emily Coppel and Matt Kertman
Over the last decade, investors, foundations, governments and concerned citizens have pushed for better impact evaluations to prove which development programs work consistently across contexts, with sustained outcomes over time.
As a result, the sophistication of these evaluations has improved dramatically, with more organizations outside of academia piloting and testing their programs. The good news: we have some extremely robust research that illustrates what works in different contexts. But now, faced with multiple options to choose from, investors want to compare costs to identify which program is the best investment.
A recent piece in The Economist, “How to spend it,” discusses the application of cost-benefit analyses, a tool designed to calculate returns on investment, to evaluate development programs in Bangladesh. The article mentions research commissioned by the Copenhagen Consensus (CC), a think-tank, which curated studies across multiple sectors to identify programs with high benefit-cost ratios (BRAC’s Research and Evaluation Unit also participated in some of the studies). The article highlights some of the advantages of cost-benefit analyses (CBAs), but also identifies issues that need to be addressed before they can be comprehensive tools for making policy or program decisions.
A simple CBA can be useful for determining whether a program is worth the cost for investors: in other words, whether the program’s impact exceeds the cost of delivering it. If it costs $100 to deliver a package of services that provides only $50 of benefits, the program may not be a good investment. The promise of CBAs is that they allow for more than just simple comparisons of program cost to impact. In theory, they can be used as a unifying measurement to compare programs broadly, which was the aim of the Consensus.
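The arithmetic behind a simple CBA can be sketched in a few lines of Python. This is only an illustration of the ratio described above; the $100 cost and $50 benefit figures are the hypothetical ones from the example, not real program data.

```python
def benefit_cost_ratio(total_benefits, total_costs):
    """Return the benefit-cost ratio: dollars of benefit per dollar spent."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return total_benefits / total_costs

# Hypothetical package from the example: $100 to deliver, $50 in benefits.
ratio = benefit_cost_ratio(total_benefits=50, total_costs=100)
print(f"{ratio:.1f}")  # prints 0.5 -- below 1.0, so likely a poor investment
```

A ratio above one means the program returns more than it costs; the hard part, as the rest of the article argues, is deciding what counts as a benefit and over what timeframe.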
This analysis was one of the first of its kind to look at programs across sectors, which is a step in the right direction. But more robust research is needed before CBAs can be used to accurately make cross-sector comparisons.
For example, the Consensus looked at BRAC’s Targeting the Ultra Poor program. According to a study commissioned by the Copenhagen Consensus and authored by BRAC researchers, two years after the program ends, it has a benefit-cost ratio of two to one. Comparing this two-to-one ratio to one calculated for another program with a shorter timespan wouldn’t necessarily reflect the sustainability of the benefits beyond the life of the program. Unfortunately, research that’s comparable across programs and takes sustainability into account is only now being developed. Although donors are asking for these comparisons, few have invested in the research needed to deliver meaningful results.
There have been a few external critiques of the Copenhagen Consensus, including from development economists Justin Sandefur and Rachel Glennerster, who criticized the analysis on Twitter for putting rigorous studies and far less rigorous ones on par with each other. Some of the research used detailed cost accounting of programs that have been implemented (e.g. Graduation) alongside estimates for programs that have yet to be tried. The latter are speculative, and implementation costs could be much higher than projected (not to mention lacking real world estimates of impact). Comparing programs that exist to programs that don’t is apples and oranges. Worse yet, investors could expect an orange and get a rotten apple: unexpectedly high implementation costs or lower impact could result in a one-to-three benefit-cost return instead of a three-to-one.
“The purpose of this benefit-cost comparison is not to come up with a conclusion that one approach is better than the other, especially given the lack of comparable robust evidence across the interventions,” writes co-author Munshi Sulaiman, Research Director for BRAC International, in his report for the CC that was used to calculate the original ranking of the Graduation program, Comparative Cost-Benefit Analysis of Programs for the Ultra-Poor in Bangladesh.
Dr. Sulaiman argues that the point is to assess whether this program is a sensible investment among similar alternatives. In essence, is it the best apple in the bushel?
“Looking ahead, we should figure out the pain points in the analysis to see how it can be done better,” said Dr. Sulaiman.
“If we repeat a program ten times, what is the likelihood that the same CBA would be returned? These sorts of calculations are necessary if we want to achieve scale.”
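Dr. Sulaiman’s question about repeatability can be illustrated with a toy Monte Carlo sketch: even if a program’s “true” effect is fixed, measured benefits vary from site to site, so the computed CBA varies too. The cost and benefit distributions below are invented purely for illustration.

```python
import random

def simulated_ratio(rng, mean_benefit=200.0, benefit_sd=60.0, cost=100.0):
    """One hypothetical implementation: benefits vary around a true mean."""
    benefits = max(0.0, rng.gauss(mean_benefit, benefit_sd))
    return benefits / cost

rng = random.Random(42)
# "If we repeat a program ten times..." -- ten simulated implementations.
ratios = [simulated_ratio(rng) for _ in range(10)]
print([round(r, 2) for r in ratios])
# Each run yields a different ratio, even though the underlying program
# is identical; a single point estimate hides this spread.
```

Reporting a range of plausible ratios, rather than a single number, is one way to convey this uncertainty to investors.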
The Copenhagen Consensus also explained that the methodologies used, especially when making cross-sectoral comparisons, need to be improved.
“Quantifying social and other benefits was necessarily informed and in cases limited by the evidence available. Not all evidence from Bangladesh and elsewhere in the world on costs and benefits is equally robust. The panel makes a general point that further and deeper research is needed to better capture the social and other benefits of interventions,” reads the Bangladesh Priorities Eminent Panel Findings, a paper co-published by BRAC and the Copenhagen Consensus on May 12th illustrating the new rankings that were announced at the CC’s culminating event.
Timeframes also have the potential to change a program’s impact. For example, the benefit-cost ratio of the Graduation approach in this study was two to one after two years. The two-year timeframe was calculated as a median of twelve estimates. However, research published last December by the London School of Economics, University College London, and Bocconi University with BRAC found that, based on trends calculated after seven years, every dollar invested has a potential return of $5.40 (based on consumption, accumulation of productive assets, and savings). That means that after two years there was a return of two to one, but over a longer period of time, it could be as much as a five-to-one benefit-cost ratio.
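The sensitivity to time horizon is easy to see with the per-dollar figures reported above. Treat this as illustrative arithmetic on the published ratios, not a model of the underlying studies; the $1,000 investment is a made-up round number.

```python
# Per-dollar returns reported at two horizons for the Graduation approach.
ratio_two_years = 2.0    # $2 of benefits per $1 invested, two years out
ratio_seven_years = 5.4  # $5.40 per $1, based on seven-year trends

invested = 1_000  # hypothetical investment, in dollars
print(invested * ratio_two_years)    # benefits counted at the two-year mark
print(invested * ratio_seven_years)  # benefits counted at the seven-year mark
# The same program looks 2.7x more attractive under the longer window,
# so the evaluation horizon can flip a cross-program ranking.
```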
Timeline matters, especially for programs that address the needs of the poorest people. Returns could be slow at the onset, as a participant is finding her footing, but have the potential to increase exponentially over time as she continues to improve her economic wellbeing.
For example, an ultra-poor woman might develop a small business selling cow dung while participating in the Graduation program. After the program ends, she could have saved enough money to invest in another, more profitable business, which increases her revenue. In doing so, she could continue to increase her income, as she saves and invests more. There is also potential for intergenerational impact as her income level changes, which should not be overlooked – for some investors, it’s their main goal.
This may be the main challenge with using CBAs to inform policy decisions: they don’t account for the diverse goals of investors. Some organizations, BRAC included, have started using indexed approaches to measure multiple indicators. For example, we’re designing research to measure the impact of Graduation-style programs on a variety of outcomes that can only be turned into dollar figures using very subjective judgments: the psychological wellbeing of participants, intergenerational impact, and the ability of women to make their own decisions. All of these measures will be weighted and valued differently depending on the donor, investor, government body or multilateral institution.
Cost-benefit analyses will be most useful when they are able to measure varying types of impact to help investors interpret true value when faced with multiple indicators. This will require additional research funding to ensure that we’re measuring apples to apples when deciding on a program to implement. It will also decrease the likelihood that, two years from now, we end up with a bushel full of rotten apples.