Not even the machines are rational
For 50 years, behavioural economics has thrown down the gauntlet to the rational expectations hypothesis and the concept of homo economicus. But now, rationality could fight back. AI and machine learning algorithms have become so powerful that their forecasts can compete with analyst forecasts (at least before transaction costs), and these algorithms certainly aren’t biased like humans are. Or are they?
A team from the University of Minnesota used a series of algorithms, from simple linear regression to more complex machine learning models, to forecast corporate earnings. Equity analysts exhibit all kinds of behavioural biases when it comes to earnings forecasts, one of which is overreaction. We humans tend to overreact to the most recent news we get about a company and extrapolate it into the future. Hence, after a string of positive earnings surprises, analysts become more and more optimistic about future earnings growth, while a string of earnings misses leads to excessive pessimism. Of course, trends don’t last forever, and analysts are eventually forced to revise their forecasts in the opposite direction.
But algorithms shouldn’t have that kind of recency bias, because for them one number is the same as any other, no matter how recently it was published. Yet, when the researchers let the algorithms forecast earnings, they found that these algorithms were also prone to overreaction. Not as much as humans, but they still overreacted to recent news.
What seems to be going on is that the machine learning algorithm needs to “learn” the relationship between past and future earnings. To do this, the algorithm is trained on past earnings, and with every new data point added to the time series, it learns something new. One key parameter in this training is the “learning rate”, i.e. the weight given to new data points relative to older ones. And this is what creates the overreaction bias: to improve the algorithm’s forecasts, the learning rate has to be relatively high, i.e. the machine has to put a lot of emphasis on the most recent data points. But that is just what humans do, and why they overreact to recent news, so the machine starts to do the same.
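To make the mechanism concrete, here is a minimal sketch in Python. It is not the study’s model; it assumes the simplest possible online learner, exponential smoothing, where each forecast is nudged towards the latest surprise. Written this way, the learning rate translates directly into an exponentially decaying weight on past observations:

```python
# Minimal sketch (not the study's model): exponential smoothing as an
# online learner. Each step nudges the forecast towards the latest
# surprise, scaled by the learning rate.

def online_forecasts(earnings, lr):
    """One-step-ahead forecasts of a series; lr is the learning rate."""
    forecast = earnings[0]                    # initialise with the first observation
    out = []
    for actual in earnings:
        out.append(forecast)
        forecast += lr * (actual - forecast)  # move towards the surprise
    return out

# Unrolling the update shows the weight on the data point k periods ago
# is lr * (1 - lr)**k: a high learning rate piles weight onto recent data.
for lr in (0.1, 0.5, 0.9):
    weights = [round(lr * (1 - lr) ** k, 3) for k in range(5)]
    print(f"lr={lr}: weights on the last 5 data points = {weights}")
```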
Unlike a human, an algorithm can be told to give recent data a lower weight, i.e. one can reduce the learning rate, but that means forecast accuracy drops. Thus, algorithms come with a trade-off: increase accuracy and you get more biased forecasts, or reduce the bias and you get less accurate forecasts.
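One can see this trade-off in a toy simulation (again my own illustration, not the paper’s test). Assume earnings fluctuate around a level that occasionally jumps to a new regime. A high learning rate tracks the jumps quickly, which lowers the forecast error, but it also chases the transitory noise, so its errors become predictable in just the way overreaction implies: a big upward revision tends to be followed by a negative surprise.

```python
# Toy simulation of the accuracy/bias trade-off (my illustration, not the
# paper's test). Earnings = a level that occasionally jumps + transitory noise.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
level = np.zeros(n)
for t in range(1, n):
    jump = rng.normal(0, 2.0) if rng.random() < 0.02 else 0.0
    level[t] = level[t - 1] + jump            # occasional regime shifts
earnings = level + rng.normal(0, 1.0, n)      # plus transitory noise

for lr in (0.05, 0.5):
    forecast, forecasts = earnings[0], np.empty(n)
    for t in range(n):
        forecasts[t] = forecast
        forecast += lr * (earnings[t] - forecast)
    errors = earnings - forecasts             # realised surprises
    revisions = np.diff(forecasts)            # forecast revisions
    rmse = np.sqrt(np.mean(errors ** 2))
    # Overreaction shows up as a NEGATIVE correlation between a revision
    # and the next error: the forecast moved too far and gets walked back.
    corr = np.corrcoef(revisions, errors[1:])[0, 1]
    print(f"lr={lr:.2f}: RMSE={rmse:.2f}, corr(revision, next error)={corr:+.2f}")
```

In this toy setup, the high learning rate produces noticeably more accurate forecasts, but its revisions negatively predict the next surprise, the statistical signature of overreaction; with the low learning rate the sign flips, i.e. the sluggish forecaster underreacts instead.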
By the way, this is true even for the most basic of forecasting algorithms, the linear regression. If one wants decent forecasts from a linear regression model, one typically needs to restrict the regression to the most recent data points in order to match the model to the current market environment. But that means that newer data points get a larger weight in the forecast.
Alternatively, one can run the regression on a longer history reaching far into the past. That will reduce the bias of the linear regression model. But it typically also reduces the accuracy of the model, because now the regression assumes that the relationships of, say, the 1970s still apply today, when we have a very different economy.
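A stylised example of this window-length dilemma (my own, with hypothetical data): suppose the true relationship between a predictor and earnings flipped halfway through the sample. A regression on the full history averages the two eras and misses the current regime, while a short rolling window tracks the present, at the price of a noisier estimate:

```python
# Stylised example (hypothetical data): the slope linking a predictor x to
# the forecast target y flips mid-sample. Compare a full-sample regression
# with one restricted to the most recent data points.
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
slope = np.where(np.arange(n) < n // 2, 2.0, -1.0)  # relationship flips mid-sample
y = slope * x + rng.normal(0, 0.5, n)

def ols_slope(x, y):
    return np.polyfit(x, y, 1)[0]                   # slope of a simple OLS fit

print("true slope today:        -1.0")
print(f"full-sample estimate:    {ols_slope(x, y):+.2f}")              # blends both eras
print(f"last-40-points estimate: {ols_slope(x[-40:], y[-40:]):+.2f}")  # tracks the present, but noisier
```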
And to come full circle, you can observe similar things with human analysts. Analysts and market pundits often compare the current situation to the 1970s or some other distant past. Such comparisons can be enlightening and very useful for getting a feel for what could happen this time. I am very much in favour of learning from the past so we don’t have to repeat it. But with these historic analogies comes the risk of overlooking or downplaying the differences between today and the past. The world has changed, and sometimes these changes make all the difference between the outcomes today and the outcomes in the past. Learning which episodes from the past matter today and which don’t is key to good forecasting.