Blog

Goodbye to Financial Literacy Month

This post is co-authored by Timothy Ogden and Julia Brand.

April has been Financial Literacy Month. You probably didn’t notice; most people have more pressing issues on their minds. Some advocates still think it’s important, arguing that the economic effects of the global pandemic wouldn’t have been as severe if people were more financially literate. Others hope that 2021 is the last Financial Literacy Month.

If you’ve read anything from Tim on financial literacy, you know which of those two perspectives we agree with. Tim’s Washington Post op-ed decrying the expansion of mandatory high school financial literacy courses lays out the issues and the research that drive our view that financial literacy isn’t a useful policy intervention.

That op-ed was written back in 2018, though. Last year, there was a new entry in the financial literacy effectiveness meta-analysis archives. Kaiser, Lusardi, Menkhoff and Urban reviewed 76 randomized experiments and found that “financial education programs have, on average, positive causal effects on financial knowledge and downstream financial behaviors.” But it hasn’t changed minds around here. In this post, we’ll explain why.

First, though, some background. Julia Brand, whose name you haven’t seen before, was an intern with FAI in 2020. We tasked her specifically with looking at the new meta-analysis—and the papers included in it—so we could better understand what was going on. Keep in mind that prior meta-analyses had found little to no effect of financial literacy programs on financial behaviors. That’s another important point: our critique of financial literacy programs isn’t that they don’t influence knowledge. It’s pretty clear that it is possible to teach people about financial matters in a way that they retain the knowledge long enough to pass a test on the subject. But the intent of financial literacy isn’t for people to be more knowledgeable; it’s to improve people’s financial situations by changing their behavior and decisions. So the key to answering the question about the value of financial literacy is whether (1) it changes actual behavior, and (2) whether those behaviors lead to better outcomes.

The new meta-analysis tries to answer these critiques specifically—does financial literacy lead to changed financial behaviors, and is it cost-effective? The authors conclude that “financial education programs have...positive causal treatment effects on...financial behaviors [which are] meaningful in size…” This meta-analysis is more complete and more thorough than other attempts to systematize the literature. But it didn’t change our minds. We still think that, in general, most financial literacy programs are not useful and certainly not better than alternatives.

There are a number of reasons for that. 

The limits of self-reported data

The most important is a problem with the vast majority of the financial literacy literature: it is very hard to collect objective administrative data on financial behaviors and outcomes, so most studies rely on self-reported data. Why is that such a problem? Well, financial literacy programs are designed to teach things like the importance of budgeting and saving. As we noted, there is little question that financial literacy can improve knowledge. But when you effectively teach people that they should budget and save, and then later ask them whether they budget and save, it should be obvious that you can’t really rely on their answers. Studies that do this are essentially asking the same question twice but reporting the answers as two different outcomes.

This isn’t just a theoretical worry. Attanasio et al. (2018), a paper included in the meta-analysis, examined the impact of a tablet-based financial literacy program, collecting both self-reported survey data and administrative bank account savings and transaction data for a subsample of participants. They did this under the assumption that the administrative data would be more reliable. The RCT’s self-reported data indeed indicated a larger effect than the administrative data. Moreover, while the self-reported data captured a positive effect in both the short and medium term, the administrative data only captured an impact in the short run.

To be clear, this isn’t a problem that the authors of the meta-analysis can do anything about (and they do acknowledge it).

Is “financial literacy” a meaningful term?

The second reason we are not convinced is how much variation there is in the programs and populations included. While we didn’t look at every paper included in the meta-analysis, a random subsample of 34 papers showed the vast diversity of what falls under the umbrella of “financial literacy intervention.” Those papers included evaluations of high school classroom financial literacy courses and of providing Ivy League students and faculty with information about S&P 500 index funds. In some studies, education was provided through tablet applications, movies, and other web resources; in others, through information fairs or interactive classroom activities. Studies evaluated samples with average ages as young as 9 and as old as 51. The meta-analysis authors perform robustness checks on sub-samples of their data and generally find no statistically significant differences between sub-samples, such as higher- or lower-income populations or participant age. But it’s a substantial leap from there to the conclusion that combining all these studies tells you something meaningful. It certainly doesn’t tell you anything about what kind of financial literacy intervention to deploy. There is reasonable evidence that interventions delivered at the point of decision are impactful (though delivering them well is really hard), but a conclusion that “financial literacy works” drawn from such a disparate set of interventions can easily be misleading.

So many ways to measure “effectiveness” 

The third reason is an inherent problem of meta-analyses (as opposed to a problem of the papers included). The purpose of a meta-analysis is to make sense of a number of studies that are looking at the same issue. But any meta-analysis necessarily has to strip away a lot of important detail: both the details of the interventions and how “effectiveness” is measured. Whether a meta-analysis is convincing crucially depends on how much similarity there is in the underlying papers. When it comes to financial literacy studies, the amount of similarity is—well, let’s just say it’s not large.

What does that mean in practice for this paper? Here’s an example from two of the included studies. The financial literacy curricula in Calderone et al. (2018), a study that took place in India, and Abebe et al. (2018), in Ethiopia, both focused on financial planning, savings, and budgeting, and both provided a subsample of their participants with follow-up reminders after the main education modules. So far, so good. But the two studies reported opposite outcomes in terms of guidance for other programs. Calderone et al. reported that their financial literacy intervention was generally effective but found that the reminders produced no additional increase in overall savings. In contrast, Abebe et al. found that their financial literacy intervention was generally ineffective, except in cases where participants received reminders. But both of these outcomes, despite contradicting each other, count as “financial literacy works.”

And the differences go much further than that. The measures of financial behaviors in the underlying papers are all over the map, but all count as “effective.” Alan and Ertac (2018) measured their results through behavioral evaluations of elementary-school-age participants; Angel et al. (2018) measured results in terms of self-reported change in interest and attitudes toward financial matters such as savings; Jamison et al. (2014) measured change in savings; Song et al. (2016) measured participants’ pension contributions. Since these studies examined a variety of financial literacy programs teaching different concepts to different groups of people in different places, it’s reasonable that they used different metrics. But that should also make you very skeptical that aggregating all of these different metrics tells you anything meaningful about designing a program or a policy. If the question were “Does classroom financial literacy education increase emergency savings balances?” a meta-analysis could produce a useful answer. This type of meta-analysis can’t.

Where do we go from here?

So what does it all add up to? Annamaria Lusardi suggested that the pandemic illustrates why financial literacy is important. We would argue the exact opposite. We don’t expect people to be able to calculate the relative effectiveness of different vaccines, or even whether a vaccine is safe. We don’t expect people to be able to perform cost-benefit analyses of various behaviors and the risk of transmission. We expect highly trained experts to determine whether a vaccine is effective or “right” for a given population. We expect simple, easy-to-follow guidance on which behaviors are acceptable and which are risky. We get frustrated when that advice is muddled or confusing.

In other words, we do rely on trusted experts—like epidemiologists, immunologists, and public health experts—in other domains, and we don’t put all of the responsibility on vulnerable individuals to make life-affecting choices alone: understanding a complex array of products and services, and deciphering highly technical subject matter that even experts themselves don’t always understand. Yet the ethos of financial literacy programs is that, with a bit of training, people should be able to navigate their way to success. There is too little recognition of the other hurdles that get in the way. We shouldn’t expect individuals to budget and save their way to resilience in a global pandemic or in a financial crisis and deep recession, and it’s dangerous to blame them for being “illiterate” if they don’t.

If we think that sitting through a financial literacy course is enough to head off financial instability and precarity, then we’re turning a blind eye to all the ways the system excludes poor, vulnerable, and marginalized people, as well as to how the pandemic has deepened those existing inequalities. The only way to deal with systemic problems is through systemic reforms: finding durable ways to make the system work better for people who haven’t had a fair chance. While perhaps a harder sell, structural reform is also a much better investment than tinkering at the margins with financial literacy programs.

If this were the last Financial Literacy Month, what should replace it? Some would propose “financial health,” but we remain concerned that framing still puts too much of the onus on the individual. Inclusive Financial Systems Month, anyone?