Professor Campbell Harvey is a Professor of Finance at Duke University. With 70 thousand citations, he is one of the most recognised researchers in finance. Professor Harvey has spent over 33 years in academia, published over 125 papers and advised some of the largest and most influential financial institutions. In our latest episode, he shares stories and lessons he learned while studying under several Nobel laureates. He also explains why he believes that over 50% of claimed research findings in financial economics are likely false, and what drives him as a scholar. Furthermore, he expands on the differences in incentives between industry and academia, and on the likely path to recovery from the COVID-19 crisis.
Why Most Claimed Research Findings in Financial Economics Are Likely False
If you try enough things, even if they're all random, even if they are just simulated noise, something is going to work by chance. If you try 20 variables to explain one variable, one will appear to work purely by chance. You need to take that into account. And it's true for many applications in finance; maybe the most prominent application is fund manager performance.
So you look at somebody that has beaten the market 10 years in a row, and you think that person is skilled, but that could easily happen purely by chance. And indeed, if there are about 10,000 managers in the world, about 9 of them will beat the market 10 years in a row even if all 10,000 are just flipping coins, so that nobody is skilled. So even in a situation where nobody is skilled, where you hardwire it that way, you will still see performance like that. An idea that I thought was never going to be published became a paper that has already attracted a lot of citations. It's called "… and the Cross-Section of Expected Returns", published in the Review of Financial Studies in 2016. And that's exactly the point.
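The coin-flipping arithmetic here is easy to check with a short simulation (my own sketch, not anything from the episode): 10,000 managers with zero skill, each with a 50/50 chance of beating the market in any given year, still produce roughly ten 10-year winning streaks.

```python
import random

random.seed(42)

n_managers = 10_000
n_years = 10

# Each "manager" flips a fair coin each year: heads = beat the market.
# Nobody has any skill, yet some 10-year streaks appear purely by chance.
lucky = sum(
    all(random.random() < 0.5 for _ in range(n_years))
    for _ in range(n_managers)
)

expected = n_managers * 0.5 ** n_years  # 10,000 / 1,024 ≈ 9.8
print(f"simulated 10-year streaks: {lucky} (expected ≈ {expected:.1f})")
```

The analytic expectation, 10,000 × (1/2)^10 ≈ 9.8, matches the "about 9" in the story: a handful of perfect records with no skill anywhere.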
We make the claim in this paper, written with Yan Liu, that over half of the empirical work in finance is likely false. So that's a pretty strong claim, but I'm fairly confident that that's the case. And this is not just about the factor zoo and all of these variables that have been proposed to beat the market.
There are many studies in corporate finance that try to explain, for example, a firm's cash holdings or its capital structure. All of these try dozens and dozens of variables, and when you find something that looks significant, you need to adjust for the possibility that it's just purely by chance.
So what we haven't done a good job of in finance is controlling for this multiplicity. We look at some variable, it is two standard deviations away from zero, therefore it's significant, we've got 95% confidence… no, that's totally false! You need to allow for the possibility of this just being a purely random effect. And when you try many things, it's essential to make corrections.
My research stream over the last four years has been focused on this idea that you need to make adjustments to the way we usually do finance in order to have more confidence in the results. If you don't do that, then you get the problem of the type I error. A type I error is a false positive: you declare something significant when it isn't. And I think that over half of the research in my field is likely a type I error.
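As a simple illustration of why multiplicity corrections matter (the numbers are textbook arithmetic, not from Harvey's papers, which use more sophisticated adjustments): testing 20 useless variables at the usual 5% level gives roughly a 64% chance that at least one looks significant, while even the crude Bonferroni correction brings that back near 5%.

```python
# If you test 20 worthless variables at the 5% level, the chance that at
# least one looks "significant" purely by luck is large (assuming the
# tests are independent, a simplification).
m = 20
alpha = 0.05

p_any_false_positive = 1 - (1 - alpha) ** m           # ≈ 0.64
bonferroni_alpha = alpha / m                          # 0.0025 per test
p_any_fp_bonferroni = 1 - (1 - bonferroni_alpha) ** m # ≈ 0.049

print(f"naive: {p_any_false_positive:.2f}, "
      f"Bonferroni-corrected: {p_any_fp_bonferroni:.3f}")
```

Bonferroni is only the bluntest fix; the point is that the two-standard-deviation rule applies to one test, not to the best of twenty.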
Different Incentives in Industry vs Academia
My presidential address to the American Finance Association, published in the Journal of Finance in 2017, addresses the issue of the different incentives between academia and business. So let me talk about the complex agency problem that we face in academia, and then how it plays out in business.
1.) There's competition amongst the journals. You want to be the best journal, and the way "best" is often defined is by something known as an impact factor, which is basically the number of scientific papers citing papers published in your journal, let's say the Journal of Finance. So if the Journal of Finance gets a lot of citations from other journals, then it must be really good.
2.) Papers that have positive results, meaning they support the hypothesis being tested, tend to have more citations. Think of it this way: you write a paper and provide some empirical work showing an effect doesn't work. Well, people are not going to cite that.
And this difference between the positive result and negative result leads to a selection issue.
Over 90% of the papers published in finance and economics have results that support the hypothesis, and you would think it should be a lot less: it's really hard to find stuff that actually works.
In other fields like astrophysics, that ratio is more like 50%. And finance and economics are not the worst. The field that is the worst in terms of this percentage is psychology, where almost everything supports the hypothesis being tested. Given that the vast proportion of published results is positive, the researchers, the faculty members, have figured this out. They realize that to get a publication, they need to show a result that's significant. And this causes a lot of problems, because people will make strategic selections in order to get that significant result.
So I call this p-hacking.
And essentially it involves many different things. It could be sample selection; it might be the exclusion of certain periods; it might be certain rules on outliers; it might be the estimation method being tried; it might be a series of variables that are looked at, where the best one is cherry-picked. There are just so many levers that you can pull to get that significant result. And when you do this, you're just maximizing the chance that the result will not stand the test of time, that it will be a false result out of sample, a so-called type I error.
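A minimal sketch of the cherry-picking lever (illustrative only, Python standard library): generate 20 pure-noise "strategies", pick the one with the best in-sample t-statistic, then evaluate that same cherry-picked strategy on fresh data and watch the apparent edge vanish.

```python
import random
import statistics

random.seed(0)

def t_stat(returns):
    """t-statistic of the mean return against zero."""
    return statistics.mean(returns) / (statistics.stdev(returns) / len(returns) ** 0.5)

# 20 pure-noise "strategies": 120 months of returns, true mean zero.
strategies = [[random.gauss(0.0, 1.0) for _ in range(120)] for _ in range(20)]

# The p-hacker reports only the best-looking one.
best = max(strategies, key=t_stat)
in_sample_t = t_stat(best)

# Fresh data for the cherry-picked "winner": since it was noise all along,
# its out-of-sample performance reverts to nothing.
out_of_sample_t = t_stat([random.gauss(0.0, 1.0) for _ in range(120)])

print(f"cherry-picked in-sample t: {in_sample_t:.2f}")
print(f"same strategy out of sample: {out_of_sample_t:.2f}")
```

The best of 20 noise series tends to have an inflated t-statistic in sample, which is exactly why the two-standard-deviation rule fails when the selection is hidden.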
So that is an issue. It's a big issue, and it's really, really difficult to deal with. It's a lot easier to deal with in other sciences, where you post, before you do your experiment, exactly what you're going to do: the tests, the data, all of the rules are set down beforehand.
In finance and economics, the data is available, people look at the data before they actually do their research.
Often the empirical work is done and then the theory is developed after, which is the reverse of what you should do.
So essentially, you find a result and then you concoct a story that you call a theory to explain that result. And this plays out negatively in a number of different ways.
One way is that practitioners will see the academic research, assume that it's true, wrap a product around it, and say, well, this paper published in the Journal of Finance had this idea and we've implemented it. But they don't take into account that many of these ideas are false: there was some data mining involved, the out-of-sample analysis wasn't done properly, choices were made that make the result look better than it actually is. So there are incentives like that in academia that work to increase the number of type I errors in the published papers.
On the practitioner side, you're correct that the incentives are different. You actually do not want to go to market with a product that has been overfit or p-hacked, because if you sell it to your customers, the performance will disappoint, and that hurts your reputation. It actually hurts your bottom line: you're just not going to get the funds coming in. So practitioners need to be really, really careful about what they're doing, because they want to make sure they deliver a product whose back-tested performance will repeat in the future. And that's really difficult to do. So it's really important to have the right incentives at an investment management firm.
Let me just give you an example.
Suppose that you've got two researchers of equal quality at a company. They both have master's degrees from good schools, they started at the same time, and they each pitch a different idea, and management thinks those ideas are worth pursuing. They're good ideas. So researcher 1 pursues the idea, does the tests, and it fails once the out-of-sample analysis is done, but the work was excellent in terms of the testing and the care that went into it. Researcher 2 is also very careful in the research, does the testing, and it actually works; it goes into production and the company uses it. It is a huge mistake to reward researcher 2 and punish researcher 1. Both of them had equally good ideas, and they were very careful in what they did. It just turned out that one worked and the other didn't. If you punish researcher 1, then within the firm, people figure out what to do: I need to get a result that's significant or my bonus is cut, or I'm going to be let go. And that encourages the same sort of behavior, the same sort of p-hacking, within a company.
So companies need to be very aware of this also. I've seen results from some firms that are obviously p-hacked. The incentives are different. But I want to emphasize there are two issues.
One, I've mentioned already: I think practitioners are often too quick to embrace the academic research, and they don't realize the incentives in academia, so there could be some overfitting and the out-of-sample performance is not going to be as good. That needs to be taken into account.
Also, researchers that are really thinking of long-term impact, and I put myself in that category, are really careful: we want to make sure that the paper we publish is not going to just create a buzz for a year or two; we want it to actually have legs. Again, it's difficult in academia: many schools just count publications, and at many schools just getting one publication in a top journal is good enough for tenure, and it doesn't really matter what the paper says. So you've got this incentive to find something that appears to work and get it published. Others are much more careful. They're playing for the long term; they're playing for the big idea. And if it does work, then you will be recognized and you'll have impact for many years. So yes, there are different incentives.
I think that academics can learn from practitioners and I certainly also think that practitioners need to be much more careful in understanding the sort of incentives that academics have, so that they can do a better job.
And the last thing I'll mention is that some practitioners engage in effectively the same thing that academics do. If you're paid just based upon assets under management, rather than having any performance fee, it might be no big deal to wrap an ETF around 300 different academic ideas, and 300 ETFs are launched, even though you fully know that over half of them are false. That's not a great equilibrium. I think that companies need to be much more careful. They've got a fiduciary duty to do the due diligence on these products that are based upon academic research. You need to go back, recreate the result, do robustness checks, in order to maximize the chance that it is a real phenomenon, and that it has the maximum chance of not disappointing the person that invests in it.
Personal Experience of Being an Academic and Extensively Working with the Industry
I've been very fortunate in my career. I've been involved in many different kinds of practical initiatives. As a relatively junior faculty member, I was an advisor to many firms, and now, as you said, I'm an advisor to two: Research Affiliates and Man Group. I've been very fortunate because I've learned so much. There is a really high-integrity atmosphere at both of these companies; they absolutely want to do the right thing for their customers. They understand the reputation issue and are playing for the long term. And it just fits so well with what I want to do. The research is obviously more applied when you look at practitioner research, but nevertheless, I've learned a huge amount from both of these companies. There are a couple of dimensions to what I've learned.
I've learned that academics often make simplifications that are not really challenged by other academics, the peer review process, or the editors of the journals, and that it is much more difficult to actually implement these ideas. Think of the factors that have been proposed, such as the value factor or the growth factor: they are based upon long-short portfolios, and the published versions don't even have any transaction costs in them. That's completely unrealistic.
And then managers are benchmarked against these academic factors unfairly, because the academic factors don't take into account the transaction costs.
And some of these transaction costs are so large as to make the factor infeasible. Imagine the cost of shorting a small micro-cap stock. It's enormous, yet it is just assumed to be zero in the construction of these factors. I've also learned, it's been drilled into me, the importance of cross-validation and out-of-sample analysis. I was pleasantly surprised that at both of these firms the idea is proposed first, before you look at the data, and that's really important. It's not like these firms have researchers just going through and analyzing data with no structure. No, you actually come up with an idea first, the idea is vetted, and then it's decided whether the idea is pursued. There's discipline imposed all the way along the research process. So I've learned a lot from that. I've been very fortunate.
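To make the transaction-cost point concrete, here is a back-of-envelope calculation with made-up numbers (the premium, turnover, and cost figures are purely illustrative, not estimates from the episode): even modest trading and shorting costs can eat most of a long-short factor's paper premium.

```python
# Hypothetical long-short factor: a gross annual premium, monthly
# turnover, and one-way trading costs, with shorting the small-cap leg
# assumed far more expensive. All numbers are illustrative.
gross_annual_premium = 0.04   # 4% per year before costs
monthly_turnover = 0.30       # 30% of each leg traded per month
one_way_cost_long = 0.0010    # 10 bps to trade the long leg
one_way_cost_short = 0.0060   # 60 bps to trade/borrow the short leg

annual_cost = 12 * monthly_turnover * (one_way_cost_long + one_way_cost_short)
net_annual_premium = gross_annual_premium - annual_cost

print(f"annual trading cost: {annual_cost:.3%}")
print(f"net premium: {net_annual_premium:.3%}")
```

Under these assumptions, costs of about 2.5% per year cut a 4% paper premium to roughly 1.5%, which is why benchmarking managers against zero-cost academic factors is unfair.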
I think that people in my profession would greatly benefit from spending a day at a company learning what they do and the challenges that they face.
Projection For The Recovery From COVID-19
Indeed, seven weeks ago, I published one of the first forecasting models for COVID-19 reported cases and deaths. It uses data from all over the world, it's updated every day, and it includes not just country-level data but province- and state-level data. I was frustrated that our policymakers were not showing their models. There was a model from Imperial College, there was a University of Washington model, but who knows what the CDC model is? And these models are not really that complex. So I fit a basic model; the idea was to figure out where the inflection point was going to be, the point where the number of new cases starts to decrease and the point where the number of deaths starts to decrease. I thought those were really important for the psychology of this particular recession.
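For intuition only, here is the kind of S-shaped curve such epidemic forecasts typically rely on (a generic logistic curve, not Harvey's actual model; the parameters K, r, and t0 are made up): cumulative cases follow a logistic, and daily new cases peak at the inflection point t0.

```python
import math

# Logistic model of cumulative cases: C(t) = K / (1 + exp(-r * (t - t0))).
# K is the eventual total, r the growth rate, t0 the inflection point.
def cumulative(t, K=100_000, r=0.15, t0=40):
    return K / (1 + math.exp(-r * (t - t0)))

# Daily new cases: the day-over-day change in the cumulative curve.
def new_cases(t):
    return cumulative(t + 0.5) - cumulative(t - 0.5)

# The peak in new cases falls at the inflection point of the cumulative
# curve, t = t0: after this day, each day brings fewer new cases.
peak_day = max(range(100), key=new_cases)
print(f"new cases peak on day {peak_day}")
```

That peak day is the quantity the passage describes: the turning point after which the daily numbers start to decline.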
So this recession is different from other recessions in that its cause is biological. And it's also different from other recessions in that the solution is also biological.
The global financial crisis was a slow-moving train wreck: unemployment peaked after the Great Recession was over. Most people don't know that it kept on going up. Indeed, we didn't get back to the level of unemployment we had before the Great Recession for nine years. So that was a long, uncertain period; people really didn't know what was happening. It was almost like a lost decade. This one is different. It happened very quickly. The peak of the business cycle was February 2020, and then we crashed, and we crashed really hard: we went to unemployment of 25%, I estimate, within about a month. And 25% is the level of the Great Depression, but I think the comparison is highly misleading. In the Great Depression, if you lost your job, you might not be able to get a job for 10 years; there was just no opportunity. In the Great Recession, the global financial crisis, if you lost your job at Lehman Brothers, you were not going back, because they were out of business, and it was hard to get a job at a similar firm. Again, you could be unemployed for an extended period of time.
In this recession, there is a different word: furlough. So many people have been furloughed. They've been told, well, hopefully you can come back in two months, maybe before, maybe a little later. So there's the expectation that you're going to go back to work. In the global financial crisis, the firms that essentially caused the crisis were banks. They were doing a very poor job at risk management; they were extremely levered, operating like hedge funds. A financial event occurred that wasn't really that consequential, but it put them over the edge and jeopardized the whole economy. And then we had to bail them out.
This time around is completely different. The firms that are affected were not offside. They weren't doing anything bad. They were high-quality firms, and they got hit with a natural disaster; you can think of it that way. So there is the possibility that they can essentially come out of stasis and hire back the people they've furloughed. And this allows for the possibility of a fairly quick recovery.
Indeed, if you think about it, it's pretty clear what the biological solutions are. Number one, a pharmacological solution that reduces the fatality rate. It doesn't prevent COVID-19, but it reduces the fatality rate, and a number of things are in trials to help mitigate the symptoms. And then, of course, number two is the vaccine. I just don't believe this talk that, oh, it's going to take 18 months for a vaccine, it's really hard to do. Or I hear, well, empirically, only 19% of vaccines that go into stage one trials succeed. I actually believe the 19%; that's just a fact based upon the past. But there are over 100 initiatives on the vaccine front, so there are going to be multiple vaccines available. And we're doing things differently: instead of waiting for the stage one clinical trial to finish, then beginning stage two, then beginning stage three, those stages are being compressed. So I fully expect that we'll have a viable vaccine available in the fourth quarter and widely deployed in the first quarter, and what that means is that the uncertainty is effectively greatly diminished. Again, in the global financial crisis, the Great Recession, companies didn't want to make capital investments, they didn't want to hire new employees, and consumers didn't want to spend, because they just didn't know how long it was going to last.
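The implicit arithmetic behind "over 100 initiatives means multiple vaccines" can be sketched as follows (my back-of-envelope version, not a calculation from the episode; it assumes the programs succeed independently, which is a strong simplification since programs share platforms and targets):

```python
# If each vaccine program succeeds with probability 0.19 and there are
# ~100 independent programs, the chance that at least one succeeds is
# essentially 1. Independence is assumed, so treat this as an upper
# bound rather than a real forecast.
p_success = 0.19
n_programs = 100

p_at_least_one = 1 - (1 - p_success) ** n_programs
expected_successes = p_success * n_programs  # 19 on average

print(f"P(at least one success) = {p_at_least_one:.10f}")
print(f"expected number of successful programs: {expected_successes:.0f}")
```

The low historical base rate per program is compatible with near-certainty of at least one success across many parallel programs, which is the shape of the argument being made.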
Whereas this one, there is light at the end of the tunnel: the biological solution is widely expected.
When we have a vaccine, it's essentially all clear, back to business as usual.
What I basically forecast is kind of a skinny U shape: the second quarter is very bad, the third quarter is a little better, the fourth quarter there'll be substantial growth, and there'll be more substantial growth in the first quarter of 2021. We won't get exactly back to where we were, because there is some structural damage. But the key is to minimize that structural damage, and by structural damage I mean high-quality firms going bankrupt.
So we're seeing a lot of bankruptcies, but for these firms, we all knew it was just a matter of time. Hertz is the latest one, but we knew they had been struggling for years. This crisis has effectively accelerated their problems. I do think that this is a different type of recession.
I actually call it a great compression: you get all the bad news very quickly, but you also get the good news very quickly. It's interesting, in terms of financial economics, that a number of people have stepped up to develop models that could inform policymakers on strategically reducing the lockdown measures and opening up the economy. I think this is important, obviously, for investment finance. Usually these recessions have a financial or some other economic cause; this one is biological. So people need to step up and understand the science behind this. And the conversation has changed dramatically. Instead of ICU beds and respirators, people are talking about vaccines and pharmacological solutions, and about when to open up rather than lock down. So I do see that things will improve in fairly short order.
Advice To Younger Self
I want to link this to something that I talked about earlier, and that is searching for the big idea. There was a point in my career where I realized: oh, well, this publishing stuff is not that difficult. I was able to publish some papers that, when I look back on them, probably shouldn't have been pursued. The ideas were small; they were good enough to get into a top journal, but not really the big ideas.
And this is advice that I give to my students: one of the most important things is not just generating ideas (you'll have ideas); it's kind of an asset allocation problem.
You need to allocate to the ideas that have the highest expected return.
And that takes discipline to actually do, because you've got an idea: oh, well, I can write this up and maybe publish it in one of the top journals. And you need to basically say no.
I remember I had a paper that I thought was pretty good, and I had a positive review at one of the top journals. And then I got access to a new database of emerging market stock returns. It was the first such database ever put together, assembled by the International Finance Corporation of the World Bank, and I was brought in early, before it was released. I realized that this was a dream data set, that I could write some very important papers with it. There were outstanding, important economic questions that could be addressed with these data. And I just dropped everything. I dropped that paper that had a good review and could have been published in a top journal, and I focused on the insights that I thought were important, not just for publishing but for emerging markets themselves. I thought this was a good thing for emerging markets in general: it would help them reduce their cost of capital and their risk, and attract new investment. So you need to have this discipline in allocating your time to the highest-payoff projects. I think that I've done an okay job at that, but if I knew then what I know today, I would have been more focused. You mentioned 125 publications; well, maybe that should have been 75 publications rather than 125, so I could be much more focused.
Another piece of advice that's really important, and I didn't realize it at the time: you become very specialized. This is true in almost all of science. You start working on something, and then you read the papers within that sub-area of your field, and you don't really understand what's going on in other areas of your field. That leads to a type of myopia. I was really fortunate to be co-editor of the Review of Financial Studies for six years and then editor of the Journal of Finance, where I had to read work outside of my area. So I think it's really important to not just focus on your narrow area, but to be informed about the problems outside of your area.
You need to be broad within your actual field.
And the third thing is, I think it's really important for young people to have a broad education. That doesn't mean just doing one thing like psychology; it means getting a broad slice of what's going on. That could be mathematics, physics, English, history, political science, some economics: get this broad foundation that could be useful for you later. And the same thing holds in academia. In preparing for my presidential address to the American Finance Association, I took basically two years where I spent a substantial amount of time looking at fields outside of finance and economics and how they did research. The topic of my address, called "The Scientific Outlook in Financial Economics", was essentially to look at the outlook in other fields, how they did research, and what we could learn from them that could be applied in financial economics. So my third piece of advice is to be broad.
Be curious outside of your area, because when you're narrow, you won't be able to generate the same type of big ideas.
We can learn a lot from other fields. And yes, it's true, it will take time away. And maybe it's going to cost you a research paper here or there. But it's likely going to cost you a research paper that's a small idea. And what I'm talking about is hunting for that big idea. And often you can be inspired by research outside of your field.