The Writing on the Wall Edition
1. Our Algorithmic Overlords: I've long argued that teaching kids to code is as much of a waste of time as financial literacy. The simplified version of the argument is that most people are terrible programmers and computers are already better at coding than the average human. As a consequence, I emphasize to my own kids, and to others who are blinkered enough to ask my advice, that learning how to communicate and write is a much more important skill for the future (yes, yes, cognitive dissonance).
While I still think I'm right about the first part, it turns out I'm wrong about the second part. Yesterday OpenAI "released" work on an AI system that writes shockingly good text. I use scare quotes because, in another sign of things to come, OpenAI has only published a small subset of their work because they believe that the potential malicious use of the technology is great enough to restrict access. There are a bunch of news stories about this. Here's Wired, for instance. But the most interesting one I've come across is The Guardian because they had the algorithm write an article based on their lede.
Let's stick to the disturbing for a bit, because it's that kind of day. The World Food Program has formed a partnership with Palantir to analyse its data on food distributions, apparently with the main motivation being to look for "anomalies" that indicate that aid is being diverted or wasted. The idea of handing over data about some of the world's most vulnerable people to a private company that specializes in surveillance and tracking of people hasn't gone over well with a wide variety of people. As background, here's an article about what Palantir does for their biggest client, the NSA. Sometimes it seems like some people at the UN look at the one world government kooks and think, "What could we do to make their conspiracy theories more plausible?"
On a more theoretical level, Kleinberg, Ludwig, Mullainathan and Sunstein have a new paper on "Discrimination in the Age of Algorithms," arguing that despite fears of algorithmic discrimination, proving discrimination by algorithms is a lot easier than proving discrimination by humans. Of course, that requires putting regulations in place that allow algorithms to be examined. I'm going to flatter myself by pointing out it's similar to an argument I made in my review of Automating Inequality. So I feel validated.
Speaking of transparency, regulation and algorithmic surveillance, here's David Siegel and Rob Reich arguing that it's not too late for social media to regulate itself, by setting up something like FINRA (the Financial Industry Regulatory Authority, which polices securities firms). It's an argument that I would have given short shrift to, but the FINRA example is credible.
Finally, I'll be dating myself in the Graphic of the Week below, but here's another way to figure out how old I am: when I was an undergrad, most of the "power imbalance" between developing countries and private firms literature was about GM. Here's a new piece from Michael Pisa at CGD on the new power imbalance and its implications: the relationship between developing countries and tech giants.
2. Digital Finance: That feels like as reasonable a transition as I'm going to get to new data from Pew on the global spread of smartphones. Given limited consumer protections, regulatory and enforcement capability, and "digital literacy" in many developing countries, I will confess this worries me a lot, cf Chris Blattman's thread on "creating a 20th Century...system in an 18th Century state."
Here's a particular instance of that concern, tying together the last few items: the rapidly growing use of "alternative credit scores" based on things like digital footprints and psychometrics. You can make an argument that such scores are a huge boon to financial inclusion by tackling the thorny problem of asymmetric information. But there are big questions about what such alternative metrics are actually measuring. For instance, as the article above illustrates, the argument is that in lending, character matters and that psychometrics can effectively evaluate character. But it doesn't ask whether character is inborn or shaped by circumstance. No matter which way you answer that question, you're going to have a tough time arguing that discriminating based on character is fair. And that's all before we get to all the other possible dimensions of opaque discrimination.
The growing use of alternative data is starting to get attention from developed world regulatory agencies, but the first frontier of regulation is likely to be from securities regulators. I don't think they are going to be particularly interested in protecting developing world consumers. I guess that idea about self-regulation is starting to look more appealing, particularly if it's trans-national.
Meanwhile, the frontier of digital finance is advancing rapidly, even without alternative data. Safaricom introduced what is here called an "overdraft facility" in January, but I think of it more as a digital credit card. In the first month it was available, $620 million was borrowed. The pricing seems particularly difficult to parse, but that may be just the reporting. One of the very first things I wrote for FAI was arguing for development of a micro-line-of-credit. Now that it's here, I confess it makes me very nervous.
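One reason pricing like this is hard to parse: flat per-day "facility fees" on small, short-term balances look cheap in absolute terms but can imply very high annualized rates. A back-of-the-envelope sketch, with entirely hypothetical numbers (these are not Safaricom's actual fees):

```python
# Hypothetical illustration of why overdraft-style "facility fee" pricing is
# hard to parse: a small flat daily fee on a small balance compounds into a
# very high annualized rate. All numbers below are made up for illustration.

def effective_annual_rate(principal, daily_fee, days):
    """Total fees over the loan term, expressed as a simple annualized rate."""
    total_fees = daily_fee * days
    periodic_rate = total_fees / principal  # cost as a share of the balance
    return periodic_rate * (365 / days)     # simple annualization (no compounding)

# A $10 overdraft carrying a hypothetical $0.10/day fee, repaid after 30 days:
rate = effective_annual_rate(principal=10.0, daily_fee=0.10, days=30)
print(f"{rate:.0%}")  # -> 365%
```

The fee "feels" like ten cents a day, which is part of why this kind of pricing is so hard for consumers (and reporters) to evaluate.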
3. Financial Inclusion: That's not to say that digital tools don't hold lots of promise for financial inclusion, just check the Findex. This week CGAP hosted a webinar with MIX on "What Makes a Fintech Inclusive?" There are some sophisticated answers to that question with some good examples, but I often return to the simplest answer: it cares about poor and marginalized people. And so I especially worry when I see answers to that question that lead with tech.
The financial inclusion field as a whole has been in something of a slow-moving existential crisis for the last few years. The best evidence of that is the number of efforts to define or map the impact of financial services and financial inclusion, several of which I'm a part of. Last week I linked to an IPA-led evidence review on financial inclusion and resilience. The week before that to a Cochrane Collaboration review of reviews of evidence on financial inclusion. This week, the UNCDF and BFA published their take on pathways for financial inclusion to impact the SDGs (full report here). I could say I expect there will be more, but I know there will be more in this vein, if I can finish revisions, etc.
4. US Inequality: It's tax return/refund time in the US. So there's a lot of discussion of the size of tax refunds and how people should withhold less and save more of their refund, etc. It's particularly an issue this year because refunds seem to be smaller because of last year's tax law changes, and perhaps pressure on the Treasury to reduce withholding so more people would see a quick boost in their paycheck. Justin Fox takes a look, using the US Financial Diaries and some related work to show what a dumb policy that was, saving me from reposting my annual tax time lament.
There are a few things here I've been meaning to include for a few weeks but haven't gotten to. Here's a look at how tech is "splitting the workforce in two," which has some big implications for inequality. Here's a look at how stacked against the young the US system has become, which has implications for the persistence of the current very high levels of inequality. And here's one of those very depressing looks at how well-intentioned policies to do something about inequality end up being churned up in the meatgrinder and making things worse, in this case having to do with pushing colleges to admit poorer kids. The latter two are why I have a problem with the proposed incremental approach to Medicare-for-All that would allow people between 50 and 62 to buy in to the system. I'm usually a great fan of incremental, but that specific proposal seems likely to accelerate the transfers from young to old in ways worse than we can imagine.
5. Evidence-Based Policy: Yes, it's a dark day. So I'm going to revel in it and continue that theme of the well-intentioned not working out so well, in this case from the old scale problem. One of the staples of "evidence-based" interventions in the last decade or so has been home visitation for new mothers/infants. An evaluation of a scaled-up version of the program found "no statistically significant effect on the evaluation's focal outcomes" and no significant heterogeneity of effects (e.g. no larger or smaller effects for ex-ante determined high-risk or low-risk families). Chile scaled up cognitive behavioral therapy in schools to deal with disruptive kids. It made things mostly worse. Pittsburgh scaled up a "restorative justice" program in an attempt to deal with discriminatory discipline practices for disruptive students (African-American kids get suspended from school much more often than white kids). Some people are saying it made things worse, but I look at the results table and see "no effect" given the number of outcomes.
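That last point, about reading results tables in light of the number of outcomes, is just the multiple-comparisons problem: test enough outcomes at the conventional 5% level and a few "significant" results are expected even when every true effect is zero. A minimal sketch of the arithmetic, with illustrative numbers (20 outcomes is my assumption, not a count from the Pittsburgh study):

```python
# Why "a few significant outcomes out of many" can still mean "no effect":
# with m independent tests at level alpha, the chance of at least one false
# positive grows quickly. The 20-outcome figure below is illustrative only.

def familywise_error(alpha, m):
    """P(at least one false positive) across m independent tests at level alpha."""
    return 1 - (1 - alpha) ** m

def bonferroni(alpha, m):
    """Per-test threshold that holds the familywise error rate at alpha."""
    return alpha / m

m = 20  # hypothetical: an evaluation reporting 20 outcomes
print(f"chance of >=1 spurious 'effect': {familywise_error(0.05, m):.0%}")
print(f"Bonferroni per-test threshold:   {bonferroni(0.05, m):.4f}")
```

With 20 outcomes there's roughly a two-in-three chance of at least one spurious "significant" result, which is why a stray significant coefficient in a long results table shouldn't move anyone's priors much.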
Andrew Gelman features an old Michael Crichton piece on why media depictions of research are so wrong, with what seems to me some genuinely good advice on what to do about it. If anyone ends up creating the proposed organization to do rapid response to spurious reporting of research, hire me. I want to do that. I suppose in some small way, that is what the faiV does. So, I guess, sponsor the faiV?
And here's a report from the William T. Grant Foundation on "Reframing Evidence-Based Policy to Align with the Evidence," which seems a useful thing to do if you've clicked on the three links above.