
Uncovering the Secret to Successful Aid?

This seems to be our week to blog about Bill Easterly's blog, although in this one we aren't going to agree as much.  I was really excited to see his report about Women's Trust until I got to the final paragraph:

"This kind of aid project is based on a lot of personal, face to face interaction, developing trust and a shared vision, so it is small scale, it has to let things proceed at their own pace, it can’t meet rigid pre-set output targets, it could never be judged by a rigorous “randomized controlled trial” methodology. In short, it involves the kind of tacit knowledge and individual adaptation that could never be converted into a routinized project implemented by the official aid bureaucracies. It breaks all the rules, and it works."

We would be happy to take on an evaluation of just this type of project.  The key here is to ask the right question.  Easterly is assuming that the "question" in an RCT has to be rigid and non-reactive.  But that is not the case.  We can evaluate a "process" just as well as we can evaluate a fixed, inflexible intervention.  In fact, we are doing just that.  In Ghana (Returns to consulting) and Mexico (mentoring MSMEs) we are engaged in projects that study the impact of a fairly fluid process of "business advice."

But even if it is possible to evaluate a process, that doesn't mean that Women's Trust is ready for an RCT.  If, in fact, the project is only working with 50 women or so, then it might simply be too small.  In that case, we would agree that this particular project should not be evaluated for impact (though it would still be good to have some monitoring to track, for example, the expenses per person reached).  The reality is that not all projects can or should get evaluated rigorously.  But we should still use rigorous methods to learn whether the "idea" is good, and to learn where and when it should be implemented.  For that, this exact idea is perfectly well suited to be evaluated with an RCT, ideally using both qualitative and quantitative measurement tools.
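
To make the "too small" concern concrete, here is a rough power-calculation sketch.  The numbers are illustrative assumptions on our part (a 0.3 standard-deviation effect, 80% power, a 5% significance level), not figures from Women's Trust:

```python
# Back-of-envelope power calculation for a two-arm RCT,
# using illustrative assumptions (not project figures).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per arm needed to detect a 0.3 SD effect
# with 80% power at a 5% significance level.
n_per_arm = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
print(f"Needed per arm: {n_per_arm:.0f} (~{2 * n_per_arm:.0f} total)")  # roughly 175 per arm

# Minimum detectable effect if only ~50 women are split evenly across two arms.
mde = analysis.solve_power(nobs1=25, power=0.8, alpha=0.05)
print(f"Minimum detectable effect with 25 per arm: {mde:.2f} SD")  # roughly 0.8 SD
```

Under these assumptions, a 50-person project could only detect implausibly large effects (around 0.8 standard deviations), which is why the individual project is too small to evaluate on its own, even though the underlying idea could be tested at a larger scale.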

Ironically, it seems that the feature Easterly is most concerned about is the small-scale, highly adaptive nature of this project.  But there's another obvious issue with a program that is so small and specialized that it can't be ramped up to a larger scale.  Aside from a nice story, what impact can we possibly expect from it?  Can we really expect thousands of NGOs to pop up and do just this, on a tiny scale?

Or is that the exact key to success: to somehow stimulate thousands (no, millions) of nano-level NGOs, doing proactive, highly flexible, tailored work to help individuals one by one?  In other words, how excited should we be by this story?  Personally, I'm torn until I see data on which approach can make a big impact.  Thoughts?
