Rigor in studies – and in the use of them.

Rigorous methods have led to excellent, and very useful, social science. Especially in development, RCTs and other carefully designed evaluations have produced evidence that has led to better policies and more effective programs. I’ve just finished two years of grad school, where I learned how to use those methods and learned the power (yes yes, nerdy pun) that good studies can have. As I head back into the development world as a practitioner, I aim to keep using good studies to inform programming and policy.

But there should always be a caveat: just because a study found it doesn’t make it so. Chris Blattman points out that even peer review is not infallible – a team that resubmitted already-published articles under false names found that 89% of the ones sent out for peer review were rejected. A shocking number! But of course the stats can be manipulated even here – that 89% is 8 rejections out of only 9 articles, far too small a sample for the percentage to mean much at face value. Plus, the paper is from 1982, and econometric methods have advanced quite a bit since then – at least for development evaluations.
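To see just how little 8-out-of-9 actually pins down, here’s a quick sketch of the uncertainty around that 89% – a Clopper-Pearson exact binomial confidence interval, computed in Python with scipy. (The code and library choice are mine, not from Blattman or the original study; only the 8-of-9 figure comes from the post.)

```python
# How uncertain is "89%" when it comes from 8 rejections out of
# 9 resubmitted articles? Clopper-Pearson exact binomial interval.
from scipy.stats import beta

k, n = 8, 9        # rejections, articles that went out for review
alpha = 0.05       # for a 95% confidence interval

point = k / n                                  # ~0.889, the headline number
lower = beta.ppf(alpha / 2, k, n - k + 1)      # exact lower bound
upper = beta.ppf(1 - alpha / 2, k + 1, n - k)  # exact upper bound

print(f"point estimate: {point:.0%}")          # 89%
print(f"95% CI: [{lower:.0%}, {upper:.0%}]")   # roughly [52%, 100%]
```

The interval runs from roughly half to nearly all – the true rejection rate could plausibly sit anywhere in that range, which is exactly the too-small-a-sample problem.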

Still, it is a good reminder to be cautious.

Star power in academia may not be helpful either. The most recent high-profile example is the paper by Harvard economists Reinhart & Rogoff that linked high public debt to sharply slower growth, whose data and methods appear to have been less than rigorous. And here is that idea of power again – the reputation of these individuals and their institution seems to have made the paper highly, and as we now know unduly, influential in government policy. To continue the pun: a study’s statistical power may not always be correlated with its political power.

The rise of evidence-based policymaking is a good thing. But the use of evidence demands rigor too, just as much as the production of it.