RCTs – what type of learning?
March 26, 2014
Could somebody do a rigorous impact evaluation of whether all the rigorous impact evaluations out there are actually having an impact on improving outcomes for people in developing countries?
Probably not, but this is essentially the question that Lant Pritchett is asking in his latest Center for Global Development blog post. In short, he argues that the value of writing more economics papers about the impact of projects is pretty much taken on faith, without showing any evidence that those papers are changing anything. His basic argument goes: “While it might be the case that RCTs could accelerate poverty reduction this was (and is) a faith-based, not evidence-based, claim.”
Pritchett’s irony is sharp, though it doesn’t necessarily prove anything either. Of course, he’s right that believing in the value of RCTs involves a degree of faith. All policy involves a degree of faith. Of course, the randomistas are also right that people can learn more from evidence.
But what type of learning? Perhaps this is the real question we should be asking – not the yes-or-no “is anyone learning anything?” question, but rather, “how are we learning?”
Pritchett’s implication, in organizational learning language, is that the RCT fad justifies itself based on Single-Loop Learning – that we are getting better at doing evaluations and RCTs. But, he implies, there is no evidence that Double-Loop Learning is happening – that is, learning about whether RCTs are actually making anything better.
To explore this question through learning language, again we must distinguish between single and double loop learning. The single loop question asks whether an RCT helped somebody do better at something that was already being done. A good example is deworming. A lot of studies have shown that deworming in schools improves the health and educational achievement of kids. And a lot of development programs, and even governments, have taken up this model. J-PAL, when it helped the Indian state of Andhra Pradesh scale up a policy of deworming in schools, made its argument based on evidence from these RCTs.
That’s an easy one – deworming works pretty much the same everywhere. What about issues that are far more contextualized?
An example here could be gender and microfinance. Initial evidence praised the effects of microfinance on women’s empowerment, but a big study in 2009 from MIT questioned the gospel, and soon many other critical studies emerged as well. They suggested that access to small loans can actually have a detrimental effect on women’s empowerment, for example by increasing women’s workload in the household, pulling girls out of school to help run family businesses, or allowing men to spend their wives’ loans while still leaving the wives to pay them back. Of course, each of these studies happened in a specific context, and the results didn’t necessarily apply to every microfinance program. Still, the findings sent shockwaves through the whole development community.
The point is, people started asking new questions. Deeper, more critical questions. And these questions made their way, quickly, into the mainstream as practitioners and organizations realized they needed to take a more critical look at their assumptions and their data. And when assumptions are questioned, double-loop learning can occur.
Did we need RCTs to figure this out? Well, here I’ll take it on faith that RCTs have helped, at least to spread the news faster. No doubt the disadvantages of microfinance would have leaked into mainstream practice eventually, but the splash of a high-profile study can provide the shock it takes to spur critical reflection and help tougher questions spread more quickly.
Again from Lant, writing last November: “RCTs are one hammer in the development toolkit and previously protruding nails were ignored for lack of a hammer, but not every development problem is a nail.”
True words. Some are nails, others aren’t. The best answers to difficult questions often only lead to more questions – but when they do, at least we know we are starting to ask the right questions, and engaging in deeper learning.