1. Causality: In this great book, Jonathan Morduch describes an obsession with causality as "the marker of the tribe" of economists. Most people outside the field, then, might be surprised to find out how unsettled the science of causality is and how much, after all these years, the practice of academic economics is 80% arguing about causal inference. Well, at least in the circles of applied micro that I run in. Recently Emi Nakamura, an "empirical macroeconomist," won the Clark Medal (awarded to the "American economist under the age of 40 who is judged to have made the most significant contribution to economic thought and knowledge") for her work mapping macro theory to macro reality. One of her better-known papers is a discussion of the gap between theory and evidence in macro; it has a jaw-dropping section on the best existing "evidence" on the effects of monetary policy. So much for an obsession with causal identification. Now, before getting too holier-than-thou over what counts as evidence in macroeconomics, it's worth pointing out that the experimental micro crowd is only just getting around to measuring general equilibrium effects, the defining feature of macro debates. I've linked multiple times to recent work on GE effects of microcredit (and related programs) on labor markets (see here for links and lots of discussion on that). While I was writing about that the other day, it occurred to me to wonder, given what we know about peer effects in education, whether anyone had looked at whether spillovers/GE effects were responsible for the rapid fade-out of early childhood education interventions. Less than 24 hours later, this new paper from List, Momeni and Zenou showed up in my Twitter feed, finding large spillover effects from an early childhood intervention (1.2 SD! on non-cognitive skills, which are increasingly found to be the more important outcome of such programs) that lead to substantial underestimation of program impact.
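The underestimation mechanism is easy to see in a toy simulation (all numbers here are hypothetical for illustration, not from the List, Momeni and Zenou paper): if untreated children benefit from treated peers, control-group outcomes rise, and a naive difference-in-means shrinks toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_effect = 1.0   # hypothetical direct effect of treatment (in SD units)
spillover = 0.4     # hypothetical benefit leaking to untreated peers

treated = rng.random(n) < 0.5
noise = rng.normal(0, 1, n)

# Untreated children still gain `spillover` from treated peers,
# so the control group is "contaminated".
outcome = noise + true_effect * treated + spillover * (~treated)

naive_estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"true effect: {true_effect:.2f}, naive estimate: {naive_estimate:.2f}")
# The naive difference-in-means recovers roughly true_effect - spillover.
```

With these made-up numbers, a 1.0 SD program looks like a 0.6 SD program, which is exactly the flavor of bias the paper is worried about.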
On a related note, here's a short video of Paul Niehaus talking about the value of experiments at scale, including better measurement of GE effects. Still, there are lots of appealing things about using experiments to establish causality, even if it is somewhat akin to looking for your keys under the streetlight. For instance, street lights cause a 36% reduction in nighttime outdoor crime in New York City housing developments. Unfortunately, people really don't like the idea of being experimented on, or even the idea of other people being part of an experiment, even when the treatment arms are "unobjectionable" (MR summary here). I'm not really sure how to think about that. If you want to dig deep into causality discussions, Cyrus Samii's syllabus for his Quant II class this spring is here. Lots (and lots) of interesting and useful links there. If you're more of the video type, Nick Huntington-Klein has a new series of videos on causal inference, including one on causal diagrams and using DAGitty to draw them. If you are among the obsessed and want to be even more so, Macartan Humphreys is looking for a post-doc to work with him on causal inference at WZB Berlin.
2. Academic Publishing: To understand the RCT movement you have to know something about one of the world's least efficient markets: economics journals (yes, I'm sure someone has a paper/post explaining how the market is actually efficient after all). Seema Jayachandran tweeted this week about stats from her first year as co-editor at AEJ: Applied: "4% were R&R, 36% were reject w/ reports, 60% were desk rejects." All of her R&Rs were eventually accepted, and average and median time to decision was less than 2 months. Data on acceptance rates at all the AEA journals shows that Seema is doing an exceptional job. AEJ: Micro received 415 papers over a 12-month period and made decisions on only 55% of them, all of which were rejections. Yes, zero of those 415 papers were accepted. The overall data led to this thread from Jake Vigdor with the provocative question: "If a journal...never accepts a manuscript, does it exist?" Or how about this paper from Clemens, Montenegro and Pritchett that was finally published in REStat after a decade in R&R? For the record, I have a paper with Michael that we got back for R&R after 4 years that I'm supposed to be revising, but I'm writing the faiV instead. While I'm grinding an axe, let me also boost this question from Justin Sandefur on why citations still exist and haven't been replaced by hyperlinks. I wonder if an estimate of the deadweight loss from searching for, formatting and copyediting citation details could get published in an economics journal? One of the reasons for the dismal acceptance rates at journals is the same as the dismal acceptance rates at top-ranked universities: reputation matters a lot. Tatyana Deryugina has a (revised) proposal for a different way of ranking journals that could lead to a more efficient publishing market. It's a start. And to close with some positive news: JDE is now prospectively accepting papers based on pre-analysis plans, without requiring the authors to commit to publishing there.
It's almost as if the editors aren't maximizing their oligopolistic power. I hope they don't have their economist credentials revoked.

3. Digital Finance/Bangladesh: When the subject turns to mobile money, the country under discussion is still almost always Kenya, even 12 years after the founding of M-Pesa. I have a particular axe to grind about counting use of mobile money without including payment cards, but there is now another reason to look beyond Kenya: there are now more people in Bangladesh with mobile money accounts than in Kenya. Of course, that's a function of population; penetration in Kenya is 73% (axe grinding: 70% of Americans have a credit card; this discussion does not include China), while it's just over 20% in Bangladesh. But we should expect adoption to accelerate in Bangladesh, and Kenya to be left well in the dust in terms of total accounts.