1. Banking: In case you missed it, here's that link from last week finding that banks would be better off if they did a lot less. Well, a lot less of the complicated financial stuff that most (large) banks spend a lot of time doing. Matt Levine sees a general trend in a positive direction: the engineering that financial services companies are engaged in is focused much less on engineering complex financial instruments and much more on software and technology. Even the cool project names are being reserved for technology projects rather than hard-to-understand derivatives-of-futures-of-insurance-of-bonds-of-weather-derivatives. That does raise some questions about the evolution of fintech--if the banks themselves are more focused on the technology of service delivery, what does that mean for the technology firms? I do feel a bit of unease that these are the same banks that don't seem to be able to add value to themselves in their core area of expertise (and it's not just the banks; remember that Morningstar's ratings are negative information). How much should we expect from their wading into technology and advice? More on that below, in item 2.

There's another concern with banks moving in this direction. While it's not always the case, the kind of engineering that banks are doing now tends to increase consolidation: returns to scale tend to be bigger and to matter more in software, data, and high-volume/low-margin activities. And when consolidation happens, it tends to be bad for lower-income customers. Here's a recent paper examining the impact of bank consolidation in the US (particularly large banks acquiring small banks): higher minimum account balances and higher fees, particularly in low-income neighborhoods. Those neighborhoods see deposits flow out of bank accounts (justifying closing branches) and later see increases in check-cashing outlets and decreased resilience to financial shocks.
But wait, there's more: the current version of the Community Reinvestment Act regulations tends to focus on places where banks have a physical presence. So closing branches and delivering more services through technology means, well, that those banks have fewer worries about complying with the CRA. Hey, did you know that the Treasury Department is considering making changes to the CRA regulations? I'm guessing the first priority isn't going to be expanding the CRA mandates. And just to throw in a little non-US spice, here's a story about massive bank fraud at the Punjab National Bank in India.
2. Our Algorithmic Overlords: I've made jabs in the faiV pretty regularly about fintech algorithms' ability to make good recommendations, particularly for lower-income households. It turns out I'm not alone in distrusting machine-generated recommendations. Human beings tend to believe pretty strongly that humans make better recommendations than machines, particularly when it comes to matters of taste. But we're all wrong. Here's a new paper from Kleinberg, Mullainathan, Shah and Yeomans testing human versus machine recommendations of jokes(!). The machines do much better. Perhaps I should shift my concern away from machine-learning-driven recommendations and spend more time on a different preoccupation: why humans are so bad at making recommendations.

There is perhaps another way: making humans and machines both part of the decision-making loop. A great deal of work in machine learning right now is organized around humans "teaching" a machine to make decisions, and then turning the machine loose. An alternative approach is having the "machine-in-the-loop" without ever turning it loose. That is the approach generally being used in such things as bail decisions. The big outstanding question is when we should allow humans (and which humans) to overrule machine recommendations and when we should allow machines (and which machines) to overrule humans. Key to making such decisions is whether the human is able to understand what the machine is doing, and whether humans should trust the machine. Both depend on the replicability of the AI. You might think sharing data and code in AI research would be standard. But you'd be as wrong as I was about recommendations. There's a budding replication crisis in AI studies because papers are so rarely accompanied by the training data used in machine-learning efforts (only about 30% are), much less the source code for their algorithms (only 6%!).
Of note: if you click on the paper above about recommendations, on page two there is a note that all of the authors' data and code are available for download.
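For readers who like to see an idea in code: the "machine-in-the-loop" arrangement above amounts to a simple pattern where the machine proposes, the human disposes, and every override gets recorded so the two can be compared later. Here's a minimal sketch in Python; all the names, the 0-1 score, and the 0.5 threshold are hypothetical illustrations, not anyone's actual bail system.

```python
# Minimal sketch of a "machine-in-the-loop" decision: the model's
# recommendation is advisory, the human's decision is final, and
# overrides are logged for later auditing. All names are hypothetical.

def machine_in_the_loop(case_id, model_score, human_decision, threshold=0.5):
    """Combine a machine recommendation with a human decision.

    model_score: the machine's estimated risk for this case (0 to 1).
    human_decision: what the human reviewer chose ("release" or "detain").
    Returns a record showing both decisions and whether the human overrode.
    """
    machine_decision = "detain" if model_score >= threshold else "release"
    return {
        "case": case_id,
        "machine": machine_decision,
        "human": human_decision,
        "final": human_decision,  # the human always has the last word
        "overridden": human_decision != machine_decision,
    }

# Example: the model recommends release; the human detains anyway.
record = machine_in_the_loop("case-001", model_score=0.3, human_decision="detain")
```

The point of logging both decisions is exactly the "which humans, which machines" question above: with a record of overrides, you can later check whether the humans or the machine were making the better calls.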