Week of March 12, 2018

1. Microfinance and Digital Finance: Apparently the "farmer suicide over indebtedness" hype train is kicking up again in India. That's not to imply that farmer suicides are not a serious issue. But Shamika Ravi delves into the data and points out that indebtedness doesn't seem to be the driver of suicides, so attacking lenders or forgiving debts isn't going to fix the problem. Certainly poverty and indebtedness impose huge cognitive burdens that affect people's perceptions and decisions in negative ways, including despair. Here's a new video about poverty's mental tax--nothing new here, but a useful and simple explanation of the concepts.
Last year (or the year before) I noted Google's decision to play a role in safeguarding people in desperate straits from negative financial decisions: the company banned ads from online payday lenders, in effect becoming a de facto financial regulator. This week, Google announced another regulatory action. Beginning in June it will ban ads for initial coin offerings (if you don't know what those are, congratulate yourself). While I'm all for the decision, it's strange for Google to conclude that these ads are so dangerous to the public that they should be banned, but not for three more months. Cryptocurrency fraudsters, get a move on! Meanwhile, the need for Google and Apple (and presumably Facebook, Amazon, Alibaba and every other tech platform) to step up their financial regulation game is becoming clearer. In an obviously self-promotional but still concerning survey, web security firm Avast found that 58% of users thought a real banking app was fraudulent, while 36% thought a fraudulent app was real. I don't really buy the numbers, but my takeaway is: people have no idea how to identify digital financial fraud. I wish that seemed more concerning to people in the digital finance world.

2. Our Algorithmic Overlords: I've had a couple of conversations with folks after my review of Automating Inequality, and had the chance to chat quickly with Virginia Eubanks after seeing her speak at the Aspen Summit on Inequality and Opportunity. My views have shifted a bit: in her talk Eubanks emphasized the importance of keeping the focus on who is making decisions, and the danger that automation can make it much harder to see who (as opposed to how) has discretion and authority. A big part of my concern about the book was that it put too much emphasis on the technology and not the people behind it. Perhaps I was reading my own concerns into the text. I also had a Twitter chat about it with Lucy Bernholz, who should be on your list of people to follow. She made a point that has stuck with me: automation, at least as it's being implemented, prioritizes efficiency over rights and care, and that's particularly wrong when it comes to public services.
I closed the review by saying that "the problem is the people"; elsewhere I've joked that "AI is people!" Well, at least I thought I was joking. But then I saw this new paper about computational evolution--an application of AI that seeks to have the machine experiment with different solutions to a problem and evolve. And it turns out that while AI may not be people, it behaves just like people do. The paper is full of anecdotes of machines learning to win by gaming the system (and being lazy): for instance, by overloading opponents' memory and making them crash, or deleting the answer key to a test in order to get a perfect score. I think the latter was the plot of 17 teen movie comedies in the '80s. Reading the paper is rewarding, but if you just want some anecdotes to impress your friends at the bar tonight, here's a Twitter thread summary. It's funny, but honestly I found it far scarier than that video of the robot opening a door from last month. Apparently our hope against the robots is not the rules that we can write, because they will be really good at gaming them, but that the machines are just as lazy as we are.
To round out today's scare links, here's a news item about a cyberattack against a chemical plant apparently attempting to cause an explosion; and here's a useful essay on our privacy dystopia.

Read More

Week of March 4, 2018

1. Crappy Financial Products: The results are no surprise, but it remains troubling to see the numbers. “Color and Credit” is a 2018 revision of a 2017 paper by Taylor Begley and Amiyatosh Purnanandam. The subtitle is “Race, Regulation, and the Quality of Financial Services.” Most studies of consumer financial problems look at quantity: the lack of access to financial products. But here the focus is on quality: You can get products, but they’re lousy. Too often, they’re mis-sold, fraudulent, and accompanied by bad customer service. These problems had been hard to see, but they’ve been uncovered via the Consumer Financial Protection Bureau Complaints database, a terrifically valuable, publicly accessible—and freely downloadable—database. (Side note: this makes me very nervous about the CFPB’s current commitment to maintaining the data.)

Thousands of complaints are received each week, and the authors look at 170,000 complaints from 2012-16, restricted to mortgage problems. The complaints come from 16,309 unique zipcodes – and the question is: which zipcodes have the most complaints and why? The first result is that low income and low educational attainment in a zipcode are strongly associated with low quality products. Okay, you already predicted that. On top of those effects, the share of the local population identified as being part of a minority group also predicts low quality. No surprise again, but you might not have predicted the magnitude: The minority-share impact is 2-3 times stronger than the income or education impact (even when controlling for income and education). The authors suspect that active discrimination is at work, citing court cases and mystery shopper exercises which show that black and Hispanic borrowers are pushed toward riskier loans despite having credit scores that should merit better options. So, why? Part of the problem could be that efforts to help the most disadvantaged areas are backfiring. Begley and Purnanandam give evidence that regulation meant to help disadvantaged communities actually reduces the quality of financial products. The culprit is the Community Reinvestment Act: the authors argue that by focusing the regs on increasing the quantity of services delivered in certain zipcodes, the quality of those services has been compromised – and much more so in heavily minority areas. Unintended consequences that ought to be taken seriously.
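If you want to poke at the underlying data yourself, the CFPB publishes the full complaint database as a downloadable CSV. Here's a minimal sketch in Python of the paper's first filtering step (mortgage complaints, 2012-16, counted by zipcode). The tiny DataFrame below is a hypothetical stand-in for the real export; the column names follow the database's public schema as I understand it, but check them against the file you actually download.

```python
import pandas as pd

# Hypothetical stand-in for the CFPB complaint export (the real file is a
# large CSV; these column names are an assumption for this sketch).
complaints = pd.DataFrame({
    "Product": ["Mortgage", "Mortgage", "Credit card", "Mortgage"],
    "ZIP code": ["19104", "19104", "60601", "60601"],
    "Date received": ["2015-03-01", "2015-06-12", "2014-01-05", "2016-08-30"],
})

# Keep mortgage complaints only, restrict to 2012-2016...
mortgage = complaints[complaints["Product"] == "Mortgage"].copy()
mortgage["year"] = pd.to_datetime(mortgage["Date received"]).dt.year
mortgage = mortgage[mortgage["year"].between(2012, 2016)]

# ...then count complaints per zipcode, the raw input to the paper's analysis.
per_zip = mortgage.groupby("ZIP code").size().sort_values(ascending=False)
print(per_zip)
```

The interesting work in the paper, of course, comes after this step: merging the counts with Census data on income, education and minority share at the zipcode level.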

2. TrumpTown: Another great database. ProPublica is a national resource – a nonprofit newsroom. They’ve been doing a lot of data gathering and number-crunching lately. Four items today are from ProPublica. The first is the geekiest: a just-released, searchable database of 2,475 Trump administration appointees. The team spent a year making requests under the Freedom of Information Act, allowing you to now spend the afternoon getting to know the mid-tier officials who are busily deregulating the US economy. The biggest headline is that, of the 2,475 appointees, 187 had been lobbyists, 125 had worked at (conservative) think tanks, and 254 came out of the Trump campaign. Okay, that’s not too juicy. Still, the database is a resource that could have surprising value, even if it’s not yet clear how. Grad students: have a go at it. (Oh, and I’d like to think that ProPublica would have done something similar if Hillary Clinton were president.)

3. Household Finance (and Inequality): This ProPublica story is much more juicy, and much more troubling. Writing in the Washington Post, ProPublica’s Paul Kiel starts: “A ritual of spring in America is about to begin. Tens of thousands of people will soon get their tax refunds, and when they do, they will finally be able to afford the thing they’ve thought about for months, if not years: bankruptcy.” Kiel continues, “It happens every tax season. With many more people suddenly able to pay a lawyer, the number of bankruptcy filings jumps way up in March, stays high in April, then declines.” Bankruptcy is a last resort, but for many people it’s the only way to get on a better path. Even when saddled with untenable debt, it turns out to be costly to get a fresh start.

The problem will be familiar to anyone who has read financial diaries: the need for big, lumpy outlays can be a huge barrier to necessary action. Bankruptcy lawyers usually insist on being paid upfront (especially for so-called “chapter 7” bankruptcies). Their fear is that if they agreed to be paid later, their fees would also be wiped away by the bankruptcy decision. So, the lawyers put themselves first. The trouble is that the money involved is sizeable: The lawyers’ costs plus court fees get close to $1,500. The irony abounds. Many people tell Kiel that if they could easily come up with that kind of money, then they probably wouldn’t be in the position to go bankrupt. Bankruptcy judges see the problem and are trying to jury-rig solutions, but nonprofits haven’t yet made this a priority. So, for over-indebted households, waiting to receive tax refunds turns out to be a key strategy.

Read More

First Week of March, 2018

1. Global Development: One of the more encouraging trends in development economics as far as I'm concerned is the growth of long-term studies that report results not just once but on an ongoing basis. Obviously long-term tracking like the Young Lives Project or smaller scale work like Robert Townsend's tracking of a Thai village (which continues to yield valuable insights) falls in this category, but it's now also happening with long-term follow-up from experimental studies. Sometimes that takes the form of tracking down people affected by earlier studies, as Owen Ozier did with deworming in Kenya. But more often, it seems, studies are maintaining contact over longer time frames. A few weeks ago I mentioned a new paper following up on Bloom et al.'s experiment with Indian textile firms. The first paper found significant effects of management consulting in improving operations and boosting profits. The new paper sees many, but not all, of those gains persist eight years later. Another important example is the ongoing follow-up of the original GiveDirectly experiment on unconditional cash transfers. Haushofer and Shapiro have new results from a three-year follow-up, finding that, as above, many gains persist but not all, and the comparisons unsurprisingly get a bit messier.
Although it's not quite the same, I do feel like I should include some new work following up on the Targeting the Ultra Poor studies--in this case not of long-term effects but on varying the packages and comparing different approaches as directly as possible. Here's Sedlmayr, Shah and Sulaiman on a variety of cash-plus interventions in Uganda--the full package of transfers and training, only the transfers, transfers with only a light-touch training and just attempting to boost savings. They find that cash isn't always king: the full package outperforms the alternatives.

2. Our Algorithmic Overlords: If you missed it, yesterday's special edition faiV was a review of Virginia Eubanks's Automating Inequality. But there's always a slew of interesting reads on these issues, contra recent editorials that no one is paying attention. Here's NYU's AI Now Institute on Algorithmic Impact Assessments as a tool for providing more accountability around the use of algorithms in public agencies. While I tend to focus this section on unintended negative consequences of AI, there is another important consideration: intended negative consequences of AI. I'm not talking about SkyNet but the use of AI to conduct cyberattacks, create fraudulent voice/video, or other criminal activities. Here's a report from a group of AI think tanks including EFF and OpenAI on the malicious use of artificial intelligence.

3. Interesting Tales from Economic History: I may make this a regular item as I tend to find these things quite interesting, and based on the link clicks a number of you do too. Here's some history to revise your beliefs about the Dutch Tulip craze, a story that, it turns out, was too good to fact-check, at least until Anne Goldgar of King's College did so. And here's work from Judy Stephenson of Oxford doing detailed work on working hours and pay for London construction workers during the 1700s. Why is this interesting? Because it's important to understand the interaction of productivity gains, the industrial revolution, wages and welfare--something we don't know enough about but that has implications as we think about the future of work, how it pays and the economic implications for different levels of skills. And in a different vein, but interesting nonetheless, here is an epic thread from Pseudoerasmus on Steven Pinker's new book nominally about the Enlightenment.

Read More

Book Review Special Edition: Automating Inequality

1. Algorithmic Overlords (+ Banking + Digital Finance + Global Development) book review: I'd like to call myself prescient for bringing Amar Bhide into last week's faiV headlined by questions about the value of banks. Little did I know that he would have a piece in National Affairs on the value of banks, Why We Need Traditional Banking. The reason to read the (long) piece is his perspective on the important role that efforts to reduce discrimination through standardization and anonymity played in the move to securitization. Bhide names securitization as the culprit for a number of deleterious effects on the banking system and economy overall (with specific negative consequences for small business lending). 
The other reason to read the piece is that it is a surprisingly great complement to reading Automating Inequality, the new book from Virginia Eubanks. To cut to the chase, it's an important book that you should read if you care at all about the delivery of social services, domestically or internationally. But I think the book plays up the technology angle well beyond its relevance, to the detriment of very important points.
The subtitle of the book is "how high-tech tools profile, police and punish the poor," but almost all of the examples Eubanks gives are, at root, a) simply a continuation of policies in place for the delivery of social services dating back to, well, the advent of civilization(?), and b) driven by the behaviors of the humans in the systems, not the machines. In a chapter about Indiana's attempt to automate much of its human services system, there is a particularly striking moment where a woman who has been denied services because of a technical problem with an automated document system receives a phone call from a staffer who tries very hard to convince her to drop her appeal. She doesn't, and wins her appeal in part because technology allowed her to have irrefutable proof that she had provided the documents she needed to. It's apparent throughout the story that the real problem isn't the (broken) automation, but the attitudes and political goals of human beings.
The reason I know point a) above, though, is that Eubanks does such an excellent job of placing the current state in historical context. The crucial issue is how our service delivery systems "profile, police and punish" the poor. It's not clear at all how much the "high-tech tools" are really making things worse. This is where Bhide's discussion is useful: a major driver toward such "automated" behaviors as using credit scores in lending was to do an end-run around the discrimination that was rampant among loan officers (and continues to this day, and not just in the US). While Eubanks does raise the question of the source of discrimination, in a chapter about Allegheny County, PA, she doesn't make a compelling case that algorithms will be worse than humans. In the discussion on this point she even subtly undermines her argument by judging the algorithm by extrapolating false report rates from a study conducted in Toronto. This is the beauty and disaster of human brains: we extrapolate all the time, and are by nature very poor judges of whether those extrapolations are valid. In Allegheny County, in Eubanks's telling, concern that caseworkers were biased in the removal of African-American kids from their homes was part of the motivation for adopting automation. They are not, it turns out. But there is discrimination. The source is again human beings, in this case the ones reporting incidents to social services. The high tech is again largely irrelevant.
I am particularly sensitive to these issues because I wrote a book in part about the Toyota "sudden acceleration" scare a few years ago. The basics are that the events described by people who claim "sudden acceleration" are mechanically impossible. But because there was a computer chip involved, many, many people were simply unwilling to consider that the problem was the human being, not the computer. There's more than a whiff of this unjustified preference for human decision-making over computers in both Bhide's piece and Eubanks's book. For instance, one of the reasons Eubanks gives for concern about automation algorithms is that they are "hard to understand." But algorithms are nothing new in the delivery of social services. Eubanks uses a paper-based algorithm in Allegheny County to try to judge risk herself--it's a very complicated and imprecise algorithm that relies on a completely unknowable human process, one that necessarily varies between caseworkers and even day-to-day or hour-to-hour, to weight various factors. Every year I have to deal with social services agencies in Pennsylvania to qualify for benefits for my visually impaired son. I suspect that everyone who has done so here or anywhere else will attest that there clearly is some arcane process happening in the background. When that process is not documented, for instance in software code, it will necessarily be harder to understand.
To draw in other examples from recent faiV coverage, consider two papers I've linked about microfinance loan officer behavior. Here, Marup Hossain finds loan officers incorporating information into their lending decisions that they are not supposed to. Here, Roy Mersland and colleagues find loan officers adjusting their internal algorithm over time. In both cases, the loan officers are, according to some criteria, making better decisions. But they are also excluding the poorest, even profiling, policing and punishing them, in ways that are very difficult to see. While I have expressed concern recently about LenddoEFL's "automated" approach to determining creditworthiness, at least if you crack open their data and code you can see how they are making decisions.
None of which is to say that I don't have deep concerns about automation and our algorithmic overlords. And those concerns are in many ways reinforced and amplified by Eubanks's book. While she is focused on the potential costs to the poor of automation, I see two areas that are not getting enough scrutiny.
First, last week I had the chance to see one of Lant Pritchett's famous rants about the RCT movement. During the talk he characterized RCTs as "weapons against the weak." The weak aren't the ultimate recipients of services but the service delivery agencies who are not politically powerful enough to avoid the scrutiny of an impact evaluation. There's a lot I don't agree with Lant on, but one area where I do heartily agree is his emphasis on building the capability of service delivery. The use of algorithms, whether paper-based or automated, can also be a weapon against the weak. Here, I look to a book by Barry Schwartz, a psychologist at Swarthmore perhaps best known for The Paradox of Choice. But he has another excellent book, Practical Wisdom, about the erosion of opportunities for human beings to exercise judgment and develop wisdom. His book makes it clear that it is not only the poor who are increasingly policed and punished. Mandatory sentencing guidelines and mandated reporter statutes are efforts to police and punish judges and social service personnel. The big question we have to keep in view is whether automation is making outcomes better or worse. The reasoning behind much of the removal of judgment that Schwartz notes is benign: people make bad judgments; people wrongfully discriminate. When that happens there is real harm, and it is not obviously bad to try to put systems in place to reduce unwitting errors and active malice. It is possible to use automation to build capability (see the history of civilization), but it is far from automatic. As I read through Eubanks's book, it was clear that the automated systems were being deployed in ways that seemed likely to diminish, not build, the capability of social service agencies. Rather than pushing back against automation, the focus has to stay on how to use automation to improve outcomes and build capability.
Second, Eubanks makes the excellent point that while poor families and wealthier families often need to access similar services, say addiction treatment, the poor access them through public systems that gather and increasingly use data about them in myriad ways. One's addiction treatment records can become part of criminal justice, social service eligibility, and child custody proceedings. Middle class families who access services through private providers don't have to hand over their data to the government. This is all true. But it neglects that people of all income levels are handing over huge amounts of data to private providers who increasingly stitch all of that data together with far less scrutiny than public agencies are potentially subject to. Is that really better? Would the poor be better off if their data were in the hands of private companies? It's an open question whether the average poor person or the average wealthy person in America has surrendered more personal data--I lean toward the latter simply because the wealthier you are the more likely you are to be using digital tools and services that gather (and aggregate and sell) a data trail. The key determinant of what happens next isn't, in my mind, whether the data is held by government or a private company, but who has the power to fight nefarious uses of that data. Yes, the poor are often going to have worse outcomes in these situations, but it's not because of the digital poorhouse, it's because of the lack of power to fight back. But they are not powerless--Eubanks's stories tend to have examples of political power reining in the systems. As private digital surveillance expands, though, the percentage of the population who can't fight back is going to grow.
So back to the bottom line. You should read Automating Inequality. You will almost certainly learn a lot about the history of poverty policy in the US and what is currently happening in service delivery in the US. You will also see lots to be concerned about in the integration of technology and social services. But hopefully you'll also see that the problem is the people.

Week of February 12, 2018

1. Banking: In case you missed it, here's that link from last week finding that banks would be better off if they did a lot less. Well, a lot less of the complicated financial stuff that most (large) banks spend a lot of time doing. Matt Levine sees a generalized trend in a positive direction--that is, the financial engineering that financial services companies are engaged in is focused much less on engineering complex financial instruments and a lot more on software and technology engineering. Even the cool project names are being reserved for technology projects rather than hard-to-understand derivatives-of-futures-of-insurance-of-bonds-of-weather-derivatives.
That does raise some questions about the evolution of fintech--if the banks themselves are more focused on the technology of service delivery, what does that mean for the technology firms? I do feel a bit of unease that these are the same banks that don't seem to be able to add value to themselves in their core area of expertise (and it's not just the banks, remember that Morningstar's ratings are negative information). How much should we expect from their wading into technology and advice? More on that below, in item 2.
There's another concern with banks moving in this direction. While it's not always the case, the kind of engineering that banks are doing now tends to increase consolidation: returns to scale tend to be bigger and matter more in software, data and high-volume/low-margin activities. And when consolidation happens it tends to be bad for lower-income customers. Here's a recent paper examining the impact of bank consolidation in the US (particularly large banks acquiring small banks): higher minimum account balances and higher fees, particularly in low-income neighborhoods. Those neighborhoods see deposits flow out of bank accounts (justifying closing branches) and later see increases in check-cashing outlets and decreased resilience to financial shocks. But wait, there's more: the current version of the Community Reinvestment Act regulations tends to focus on places where banks have a physical presence. So closing branches and delivering more services through technology means, well, that those banks have fewer worries about complying with CRA. Hey, did you know that the Treasury Department is considering making changes to the CRA regulations? I'm guessing the first priority isn't going to be expanding the CRA mandates.
And just to throw in a little non-US spice, here's a story about massive bank fraud at the Punjab National Bank in India.

2. Our Algorithmic Overlords: I've made jabs in the faiV pretty regularly about fintech algorithms' ability to make good recommendations, particularly for lower-income households. It turns out I'm not alone in distrusting machine-generated recommendations. Human beings tend to believe pretty strongly that humans make better recommendations than machines, particularly when it comes to matters of taste. But we're all wrong. Here's a new paper from Kleinberg, Mullainathan, Shah and Yeomans testing human versus machine recommendations of jokes(!). The machines do much better. Perhaps I should shift my concern away from machine-learning-driven recommendations and spend more time on a different preoccupation: why humans are so bad at making recommendations. There is perhaps another way: making humans and machines both part of the decision-making loop. A great deal of work in machine learning right now is organized around humans "teaching" a machine to make decisions, and then turning the machine loose. An alternative approach is having the "machine-in-the-loop" without ever turning it loose. That is the approach generally being used in such things as bail decisions. The big outstanding question is when we should allow humans (and which humans) to overrule machine recommendations and when we should allow the machines (and which machines) to overrule the humans.
Key to making such decisions is whether the human is able to understand what the machine is doing, and whether humans should trust the machine. Both depend on the replicability of the AI. You might think sharing data and code in AI research would be standard. But you'd be as wrong as I was about recommendations. There's a budding replication crisis in AI studies because it is so rare for papers to be accompanied by the training data (about 30%) used in machine-learning efforts, much less the source code for their algorithms (only 6%!). Of note: if you click on the paper above about recommendations, on page two there is a note that all of the authors' data and code are available for download.

Read More

Week of February 5, 2018

1. Digital Finance: When I name an item "digital finance" you know I'm going to be talking about mobile money and fintech--but should you? Is there something that's particularly more digital about mobile money than about payment cards or plain-old ATMs (both of which are, of course, fintech). Arguably paying a vendor with a credit card requires fewer real world actions than using mobile money--there are certainly fewer keys to be pressed. That's the overriding thought I had when looking at this new research from CGAP and FSD Kenya on digital credit in Tanzania: digital credit looks like credit cards. It's being used to fill gaps in spending, not for investment; is mostly being used by people with other alternatives; it's mostly expanding the use of credit (on the intensive margin); and it's really unclear whether it's helping or hurting.
Perhaps the most striking thing is that digital credit is not being used for "emergencies." Part of the interest, I think, in mobile money and digital credit was that it might enable users to better bridge short-term liquidity gaps given the well-documented volatility of earnings. But that's not what seems to be happening. Again it seems to be mirroring other forms of digital finance that we don't really call "digital finance," namely payday loans (which after all typically involve an automated digital transfer out of the borrower's checking account). Borrowers are very likely to miss payments (1/2 of borrowers) or default (1/3 of borrowers, based on self-reports, not administrative data). Given that, these papers (one, two, three, four) on whether access to payday loans helps or hurts seem like they should be required reading for digital credit observers (and don't forget the links from Sean Higgins a few weeks ago). The gist--they do help when there really are emergencies like natural disasters, but hurt a lot when there aren't.

This week in the US is providing an unusual window into emergencies and digital finance. The sharp declines in the US stock market caused a lot of folks to go look at their portfolios, which brought down a new generation of digital finance websites like Wealthfront and Betterment. Even Fidelity and Vanguard had problems. There's an echo there of a concern about mobile money systems in developing countries: we really don't know what a "run" on a mobile money platform would look like, or how systems and people would handle outages, whatever their cause. But the more important story is that the problems encountered were probably pretty good for consumers. Preventing people from accessing their accounts in the perceived emergency of stock prices dropping kept them from panic selling, which is a thing humans do a lot. In fact, even the customers who could log in found lots of artificial barriers to taking action. Digital finance's key contribution in this case wasn't expanding access, it was limiting it.

2. Household Finance: Which brings us back to the ever recurring theme of household finance: it's complicated and we really don't understand it very well. What we do understand is that it's very hard for people to make sound decisions (causal inference is hard!) when it comes to money. Here, at long last, is the write-up of work by Karlan, Mullainathan and Roth on debt traps for fruit vendors. You may remember this being referenced in the book Scarcity--but if not, the basics are that people in chronic debt who have their loans paid off fall quickly back into chronic debt. That also seems like something digital credit observers should be thinking about.

Here's another understudied puzzle: consumers do seem to react to stock market gyrations even though only a small portion of Americans have meaningful investments in stocks. Really, the figure is a lot lower than you likely think. But if it's not sold out yet, you can start investing in stocks at a big discount today--not because of the decline of the stock markets, but this curious offer to buy a "gift card" for $20 worth of stock in major companies for $10. I stared at this for a long time wondering, "Should I use this as a teaching tool for my kids? And if so, should the lesson be arbitrage or why not to invest in individual stocks?"
   
3. Our Algorithmic Overlords: I promised a review of Virginia Eubanks' new book Automating Inequality this week, but I'm not ready yet. In the meantime, I'll point you to Matt Levine's discussion of how little of what we do matters and how big data is starting to illustrate that. It's a riff that starts from a new paper showing that what banks do doesn't seem to matter much, which I suppose lends support to the point above about how hard household finance is--even highly paid professionals can't seem to do anything that makes a difference.

And John Perry Barlow, a co-founder of the Electronic Frontier Foundation, died this week. I found this reflection thought-provoking in a number of directions: "I knew it’s also true that a good way to invent the future is to predict it. So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls 'turn-key totalitarianism.'”

Read More

The First Week of February 2018: The Morduch Edition

1. Development Economics Superstars: You know by now that NYU economist Paul Romer is heading home to downtown NY, leaving his post as the World Bank Chief Economist. It’s good news for the NYU development economics community. Don’t worry about the World Bank, though – if this list of amazing seminar speakers is any indication, the World Bank continues to be a place to find exciting ideas and research. The first speaker was this week: MIT’s Tavneet Suri talking about digital products and economic lives in Africa (video).

2. Dueling Deatons: It would be embarrassing to let on just how much I’ve learned from reading Angus Deaton over the years. But there are different versions of Deaton. One of them is a careful analyst of income and consumption data with a no-BS attitude toward poverty numbers. Another wrote an op-ed in the New York Times last week.
Deaton’s op-ed argued (1) that there’s quite a lot of extreme poverty in the US, not just in poorer countries, and (2) that perhaps we should move budget from anti-poverty efforts abroad to those at home. Development economists & allied cosmopolitans rose up. Princeton ethicist Peter Singer argues that argument #2 clearly fails a cost-benefit test: it’s simply much cheaper to address needs abroad. Charles Kenny and Justin Sandefur of the Center for Global Development reject the idea that spending more in Mississippi should mean spending less in India, and they take a good whack at the US poverty data. But if you’re going to read just one rebuttal, make it Ryan Briggs’s essay in Vox. It’s the rebuttal to “provocative Deaton” that “no-BS Deaton” would have written. The main argument is: no, actually, there isn’t much “extreme poverty” in the US once you look at the data more carefully. Deaton’s basic premise thus falls away.
On a somewhat more personal note: in recent years, I’ve spent time down the back roads of Mississippi with people as poor as you’ll find in the state. I’ve come to know the kinds of Mississippi towns that Kathryn Edin and Luke Shaefer describe in their powerful book on US poverty, $2 a Day (one of Deaton’s sources). At the same time, I’ve worked in villages in India and Bangladesh where many households are targeted as “ultrapoor”. So I think I have a sense of what Deaton’s getting at in a more visceral way. He’s right about the essential point: It’s hard not to be angry about our complacency about poverty – both at home and abroad. We should be more aware (and more angry). But Deaton picked the wrong fight (and made it the wrong way) this time.

3. Risk and Return (Revisited): A big paper was published this week. It’s nominally about farmers in Thailand, but it challenges common ways of understanding finance and inequality in general. The study holds important lessons but is fairly technical and not so accessible. The paper is “Risk and Return in Village Economies” by Krislert Samphantharak and Robert Townsend in the American Economic Journal: Microeconomics (ungated).
Why do poverty and slow economic growth persist? A starting point is that banks and other financial institutions often don’t work well in low-income communities. One implication is that small-scale farmers and micro-enterprises can have very high returns to capital -- but (or because) they can’t get hold of enough capital to invest optimally. The entire microfinance sector was founded on that premise, and there’s plenty of (RCT) evidence to back it.
Samphantharak and Townsend use 13 years’ worth of Townsend’s Thai monthly data to dig deeper. The paper yields many insights, but here are two striking findings: The Thai households indeed have high average returns to capital, but they also face a great deal of risk. Making things harder, much of that risk affects the entire village or broader economy and cannot be diversified away. As a result, much of the high return to capital is in fact a risk premium, and risk-adjusted returns are far, far lower. That means that poorer households may have high returns to capital but they are not necessarily more productive than richer households (counter to the usual microfinance narrative). The action comes from the risk premium.
What is happening (at least in parts of these Thai data) is that poorer farmers are engaged in more risky production modes than richer farmers. Once risk premia are netted out, the picture changes and richer farmers are in fact shown to have higher (risk-adjusted) returns.
A few implications (at least in these data): (1) better-off farmers are both more productive and have more predictable incomes. So inequality in wealth is reinforced by inequality in basic economic security, the kind of argument also at the heart of the US Financial Diaries findings. (2) Poorer farmers face financial constraints, but not of the usual kind addressed by microfinance. The problems largely have to do with coping with risk. That might explain evidence that microfinance isn’t effective in the expected ways. (3) The evidence starkly contrasts with arguments made by people (like me) that rural poverty is bound up with the inability to take on riskier projects.
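For readers who want the mechanics, here's a rough sketch of the risk-adjusted-return logic in CAPM-style notation (the notation is mine, a simplification, not the paper's own):

```latex
\[
  \underbrace{E[r_i]}_{\text{average return}}
  \;=\; r_f
  \;+\; \underbrace{\beta_i \bigl( E[r_m] - r_f \bigr)}_{\text{premium for aggregate risk}}
  \;+\; \underbrace{\alpha_i}_{\text{risk-adjusted return}}
\]
```

Here \(r_f\) is a risk-free benchmark, \(r_m\) is the aggregate (village-level) return, and \(\beta_i\) measures how much household \(i\)'s returns move with that aggregate. The point above is that a poor farmer with a high average return \(E[r_i]\) but a large \(\beta_i\) can end up with a lower \(\alpha_i\) than a richer farmer with a modest average return and a small \(\beta_i\).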

Read More

CEGA Special Edition: A bit more from AEA

1. Financial Inclusion: I [Sean] organized a session on savings and financial inclusion that looked at the impact of various savings interventions such as commitment devices, opt-out savings plans, and mobile money. Continuing last week’s theme on similarities between developed and developing countries, a savings intervention that has greatly increased savings in the US is opt-out savings plans or “default assignment,” such as being automatically enrolled in a 401(k) plan. In an experiment in Afghanistan, Joshua Blumenstock, Michael Callen, and Tarek Ghani explore why defaults affect behavior: some employees are defaulted into a savings program where 5% of their salaries are automatically deposited in a mobile money savings account, but they can opt out at any time. Those who were defaulted in were 40 percentage points more likely to contribute to the savings account, which is comparable to the effect of the employer matching 50% of employees’ savings contributions.

Commitment savings accounts have also been tested in the US and in many other countries. In a study by Emily Breza, Martin Kanz, and Leora Klapper, employees in Bangladesh were offered a commitment savings account, with a twist: depending on the treatment arm, employers sometimes endorsed the product, and employees were sometimes told that their decision would be disclosed to the employer. Only the treatment arm that had both employer endorsement and disclosure of the employee’s choice led to higher take-up, suggesting that workplace signaling motivated employees to save. Another study by Simone Schaner et al. (covered in last week’s faiV) offered employees in Ghana a commitment savings product with the goal of building up enough savings to stop incurring overdraft fees, which are common. Take-up was high, but baseline overdrafters were more likely to draw down their savings before the commitment period ended -- meaning they benefited less from the intervention.
Two important barriers to financial inclusion in the US and around the world are transaction costs and low trust in banks. In a paper I coauthored with Pierre Bachas, Paul Gertler, and Enrique Seira, we study the impact of providing debit cards to government cash transfer recipients who were already receiving their benefits directly deposited into a bank account. Debit cards lower the indirect transaction costs -- such as time and travel costs -- of both accessing money in a bank account and monitoring the bank to build trust. Once they receive debit cards, beneficiaries check their balances frequently, and the number of checks decreases over time as their reported trust in the bank and their savings increase.

2. Household Finance: Digital credit is a financial service that is rapidly spreading around the world; it uses non-traditional data (such as mobile phone data) to evaluate creditworthiness and provide instant and remote small loans, often through mobile money accounts. One of the concerns about digital credit is that customers’ credit scores can be negatively impacted, even for the failure to repay a few dollars. In turn, this can leave them financially excluded in the future. Andres Liberman, Daniel Paravisini, and Vikram Pathania find a similar result for “high-cost loans” in the UK (which we would call payday loans in the US). They use a natural experiment and compare applicants who receive loans with similar applicants who do not receive loans to study the impact of the loans on financial outcomes. For the average applicant, taking up a high-cost loan causes an immediate decrease in the credit score, and as a result the applicant has less access to credit in the future. 

Read More

Week of January 8, 2018

1. The Economics Production Function: Over the last few years, papers on microenterprises generally shared a couple of remarkable--given the general narrative--findings: microenterprises (on average) didn't grow no matter what you did to try to boost them, and women-owned microenterprises performed worse than male-owned ones. Those findings led to plenty of yowls from practitioners whose work, livelihoods and in some cases core beliefs were based on the opposite. In many conversations I had, I got the impression that people outside the profession believed that economists would publish these findings and then move on. But that perception really misunderstands the motivations of economists and the way the field works. Economists don't leave puzzles alone once they find them--the field pursues them relentlessly.
The best session I attended this weekend was based on the particular puzzle of why female-owned microenterprises are less profitable. Natalia Rigol presented work following up on earlier studies that documented the profitability gender gap, finding that the gap is mostly driven by lower returns in female-owned enterprises where there was another (male-owned) enterprise in the household. Those male-owned enterprises were in more profitable industries (something documented in the original studies), so the households were making quite rational decisions to allocate additional funds to the more profitable business (making it look as if the female-owned business had zero or negative returns). In households with only a female-owned business, there is no gap in returns to capital. Leonardo Iacovone and David McKenzie presented on efforts in Mexico and Togo, respectively, to provide training to help women entrepreneurs improve their businesses, with positive results--in both cases seemingly driven by personal initiative training rather than business skills. And Gisella Nagy presented results (unfortunately there's nothing yet to point to on this one) showing that women tailors in Ghana are less profitable than male tailors because there are more women tailors, which drives down the prices they can get in the market. This last finding is particularly important because it suggests that part of the way forward for microcredit aimed at building women's businesses is to do a much better job targeting, or as I've called it elsewhere, abandoning the vaccine (everyone gets one!) model of microcredit for an antibiotic (only people who really need it get one!) model.
And all of that is just a very small sample of work being done on the puzzle of heterogeneity of returns to microenterprises and what can be done about it. I'm now sorely tempted to write an overview on all these studies, but dammit I really want to get to "subsistence retail."

2. Causal Inference is Hard: Those two topics aren't orthogonal to each other of course. One way they are joined together is my common theme about how hard causal inference is for the average person, and in particular for the subsistence (or just above) operator of a microenterprise (whether farming or retail). That's what I kept thinking about when reading this new post from David McKenzie on "Statistical Power and the Funnel of Attribution". David is writing for economists trying to write convincing papers, but this point, "Failure to see impacts on your ultimate outcome need not mean the program has no effect, just that the funnel of attribution is long and narrows," is equally important for the people being treated. If the funnel of attribution is long and narrows, then it's nearly impossible for the individual (not gifted with a large sample size or a deep understanding of statistics) to figure out which of their actions actually matter.
There is a connection to AEA here. As I was perusing the poster displays (also known as "the saddest place on earth") I kept hearing people arguing with Jacob Cosman, the creator of a poster about how the opening of new restaurants in a neighborhood affects the behavior of existing restaurants. The answer: a very precisely estimated no effect at all. (Here's a link to an old version of the paper with somewhat different results.) Economists walking by simply couldn't believe this and were constantly suggesting to the author things he must have done wrong. I was amused. My strong prior is that a person would not open one of these restaurants unless they believed that their restaurant was unique (otherwise, you would believe that your restaurant would quickly fail like the 90+% of other new restaurants and you wouldn't open in the first place). So when another new restaurant arrives, you don't actually see it as a threat that needs a response. You are, after all, different! But even if you did think you needed to respond, how would you possibly know what the right response was? Do prices matter? Menus? Advertising? Item descriptions? Coupons? The funnel of attribution on all of these is so long and imprecise that we should assume individual entrepreneurs have no idea what to do even if they wanted to do something. Which ultimately brings us back to why it's so hard to get microentrepreneurs to change their behavior in a lasting way, and why personal initiative training may work much better than business skills. Personal initiative training teaches you that what you do matters, even if you can't tell, while business skills training teaches you to do something even though you can't tell that it matters.
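A back-of-the-envelope simulation makes the point concrete. All the numbers here are invented for illustration (baseline sales of 100, noise of 30, a true effect of 5): even a meaningful change is statistically invisible to an owner watching a week of sales, while a researcher pooling thousands of observations can detect it easily.

```python
import math
import random
import statistics

random.seed(1)

SD = 30          # assumed day-to-day noise in a small shop's sales
TRUE_EFFECT = 5  # assumed true gain from some change (e.g., a new menu)

def estimate_effect(n_days):
    """Owner's naive estimate: mean sales after a change minus mean before."""
    before = [random.gauss(100, SD) for _ in range(n_days)]
    after = [random.gauss(100 + TRUE_EFFECT, SD) for _ in range(n_days)]
    return statistics.mean(after) - statistics.mean(before)

def std_error(n_days):
    """Standard error of that difference in means."""
    return SD * math.sqrt(2 / n_days)

# A lone owner watching one week before and after: noise dwarfs the signal.
print(round(std_error(7), 1))     # roughly 16, vs. a true effect of only 5
# A researcher pooling thousands of shop-days can pin the effect down.
print(round(std_error(2000), 2))  # under 1, so the effect of 5 stands out
one_week = estimate_effect(7)     # a single draw; could easily come out negative
```

The owner's one-week standard error is more than three times the true effect, which is why "did my price change work?" is genuinely unanswerable at the individual level.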

Read More

Week of December 4, 2017

1. Social Investment: Last week I was at European Microfinance Week. Video of the closing plenary I participated in is here. My contribution was mainly to repeat what seems to me a fairly obvious point but which apparently keeps slipping from view: there are always trade-offs and if social investors don't subsidize quality financial services for poor households, there will be very few quality financial services for poor households.
Paul DiLeo of Grassroots Capital (who moderated the session at eMFP) pointed me to this egregious example of the ongoing attempt to fight basic logic and mathematics from the "no trade-offs" crowd. This sort of thing is particularly baffling to me because of the close connection that impact investing has to investing--a world where everything is about trade-offs: risk vs. return; sector vs. sector; company vs. company; hedge fund manager vs. hedge fund manager. The logic in this particular case, no pun intended, is that a fund to invest in tech start-ups in the US Midwest is an impact investment, even though the founder explicitly says it isn't, because it is "seeking potential return in parts of the economy neglected by biases of mainstream investors." If that's your definition of impact investing you're going to have a tough time keeping the Koch Brothers, Sam Walton and Ray Dalio out of your impact investment Hall of Fame. Sure, part of the argument is that these are investments that could create jobs in areas that haven't had a lot of quality job growth. But by that logic, mining BitCoin is a tremendous impact investment. You see, mining BitCoin and processing transactions is enormously energy intensive. And someone's got to produce that energy, and keep the grid running. Those electrical grid jobs are one of the few high paying, secure mid-skill jobs. Never mind that BitCoin mining is currently increasing its energy use every day by 450 gigawatt-hours, or Haiti's annual electricity consumption. And, y'know, reversing the trend toward more clean energy. Hey anyone remember the good old days of "BitCoin for Africa"?

2. Philanthropy: There are plenty of trade-offs and questions about impact in philanthropy, not just in impact investing, and not just in programs. Here's a piece I wrote with Laura Starita about making the trade-offs of foundations investing in weapons, tobacco and the like more transparent.
I could have put David Roodman's new reassessment of the impact of de(hook)worming in the American South in early 20th century under a lot of headings (for instance, Roodman once again raises the bar on research clarity, transparency and data visualizations; Worm Wars is back!; etc.). The tack I'm going to take, in keeping with the prior item, is the impact of philanthropy. The deworming program was driven by the Rockefeller Sanitary Commission and is frequently cited, not only as evidence for current deworming efforts, but as evidence for the value and impact of large scale philanthropy. Roodman, using much more data than was available when Hoyt Bleakley wrote a paper about it more than 10 years ago, finds that there isn't compelling evidence that the Rockefeller program got the impact it was looking for. Existing (and continuing) trends in schooling and earnings appear unaltered. 
Ben Soskis has a good overview of the seminal role hookworm eradication had in the creation of American institutional philanthropy. His post was spurred by an article I linked back in the fall about the return of hookworm in many of the places it was (supposedly?) eradicated from by Rockefeller's philanthropy. We may need to rewrite a lot of philanthropic history to reflect that the widely cited case study in philanthropic impact didn't eradicate hookworm and may not have had much effect. And while we're in the revision process, it may be useful to reassess views on the impact of the Ford Foundation-sponsored Green Revolution: a new paper argues that there was no measurable impact on national income and that the primary effect was keeping people in rural farming communities (as opposed to migrating to urban areas). Given what we now generally know about the value of rural-to-urban migration, that implies likely significant negative long-term effects.
If you care about high quality thinking about philanthropy, democracy and charitable giving in general, which I of course think you should, you should also be paying attention to some of Ben Soskis' other current writing. Here he is moderating a written discussion of Americans' giving capacity. And here's a piece about how the Soros conspiracy theories are damaging real debate about the role of large scale philanthropy in democratic societies.
In the spirit of the holidays, I feel like I should wrap up an item on philanthropy with some good news. In the last full edition of the faiV I mentioned the MacArthur Foundation's 100&Change initiative, which is picking one idea to get $100 million to "solve" a problem. For all the problems I have with that, the program is doing something really interesting, thanks to Brad Smith and the Foundation Center. All of the proposals, not just the finalists, are now publicly available for other foundations to review.

Read More