The obstacles to scientific progress or: The Dead Parrot Sketch

The physicist Max Planck once quipped: 

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

Indeed, even in natural sciences like physics, where experiments can confirm or reject a given hypothesis, it often takes the death of a leading scientist in a field before new ideas can take over and advance our understanding of the world. Albert Einstein was famously skeptical of quantum physics, the revolutionary theory developed by Max Planck and others. And while he eventually came to accept quantum physics as correct, he certainly did not help to spread it.

In social sciences like economics and finance, the challenges for new scientific ideas are even greater because, unlike in the natural sciences, it is often impossible to create laboratory experiments that conclusively prove or disprove a given theory. As a result, even theories that have been shown to fail in many real-life instances can survive for decades.

James Montier famously compared the Capital Asset Pricing Model (CAPM) – and by extension modern portfolio theory overall – with Monty Python’s Dead Parrot Sketch:

The CAPM is the financial theory equivalent of the ‘Monty Python Dead Parrot Sketch’. As many readers will know, an exceedingly annoyed customer who recently bought a parrot from a pet shop returns to the owner and berates him:

‘He’s passed on. This parrot is no more! He has ceased to be! He’s expired and gone to meet his maker. He’s a stiff! Bereft of life, he rests in peace! If you hadn’t nailed him to the perch he’d be pushing up daisies! His metabolic processes are now history! He’s off the twig! He’s kicked the bucket. He’s shuffled off his mortal coil, run down the curtain and joined the bleedin’ choir invisible! This is an ex-parrot!’

Yet despite the limitations and violations of modern portfolio theory documented by behavioral economists, modern portfolio theory and the CAPM remain in wide use among both academics and practitioners. And papers are still published today that try to show that some concepts of modern portfolio theory are superior to other approaches – approaches that have been shown to produce much better results for investors in the real world.

In fact, there are so many “zombie ideas” in use in economics and finance today that John Quiggin was able to write an entire book about them. Just think of trickle-down economics and the Laffer Curve, which formed the foundation of the 2018 income tax cuts in the US. The idea was that these tax cuts would boost the economy so much that the government would not suffer declining tax revenues and rising budget deficits. How is that working out?

I am not saying that these theories and zombie ideas are worthless. Modern portfolio theory and the CAPM have their value as academic concepts, and as such they deserve the many prizes their inventors have won. But investors shouldn’t apply these theories in real life and expect to get the best possible outcome for their investments. As behavioral finance and the emerging science of complex dynamic systems show, financial markets are far more complex than these old theories assume, and as a result, the theories don’t work well in practice.

Yet many of these newer insights are still not implemented by practitioners, asset managers and banks. The reasons range from career risk to monetary incentives. After all, it is hard to go to your customers and tell them that the way you invested their money over the last several decades was suboptimal and that they probably earned much lower returns than they could have.

What is needed to mainstream these new ideas is typically a generational change. In academia this often means that a star scientist in a field has to die, while in business it means that challenger companies (e.g. Vanguard) have to become so successful and big that they become an existential threat to an existing business model. 

Pierre Azoulay and his colleagues recently examined how new ideas spread in the life sciences. They found that as long as a scientific field is dominated by a star scientist and his collaborators, new ideas have little chance of gaining acceptance. These ideas are not suppressed consciously but unconsciously: new research requires funding, and funding applications go through a peer review process. As a result, star scientists and their collaborators control the allocation of resources and have an unconscious bias toward recommending research projects that are in line with their own theories and academic thinking.

Once a star scientist in a field dies, the field becomes more open to outsiders who bring new ideas and concepts. As a result, the academic output and influence of the old guard declines while new ideas become more prominent (see chart). In this sense, science truly does progress one funeral at a time.

Unfortunately, Azoulay’s study was done in the life sciences, where laboratory experiments provide hard data. In economics and finance, there are, as I said, moneyed interests at play that prevent the field from adopting new ideas. The result is that old ideas with limited real-life application will continue to be used well past their use-by date. The losers in this game are the investors who are stuck with suboptimal portfolios and low returns.

Change in publication and funding flow in a subfield before and after a star scientist dies

Source: Azoulay et al. (2019).

Another reason why interest rates have to stay low for a decade

Yesterday, the US Congressional Budget Office (CBO) published its latest set of projections for the coming decade. This time, the projections include the two-year budget deal struck at the beginning of August. As always, the report is full of interesting tidbits, but here is the gist of the story: the US budget deficit is expected to rise much faster than previously forecast and reach $960bn (4.5% of GDP) in 2019. Over the next ten years, the deficits are expected to be $809bn larger than previously thought. The chart below shows the CBO’s estimate for the Federal deficit until 2024, including a 66% confidence interval.

Projected US deficit

Source: CBO.

But what I want to focus on here is the large uncertainty around interest rates and their impact on the deficit. Unlike GDP, inflation and interest rates are notoriously hard to forecast and come with large uncertainty. This can be seen by comparing the CBO’s projections of the average interest rate the US has to pay on its debt. In January, the CBO expected this average interest rate to rise from 2.6% in 2019 to 3.5% in 2029. Since then, interest rates have fallen, and the CBO simply assumes they are going to stay lower forever: in the August projections, the average interest rate on Federal debt starts at 2.5% in 2019 and rises to 3.0% in 2029.

Compare this to my own long-term projections, based on a model that includes projected demographic changes in the US, changes in consumption patterns, productivity, investments, etc. According to this model, the average interest rate on US debt should rise to 4.9% in 2029. For my younger readers, I should probably explain that a 5% interest rate on public debt is nothing spectacular and does not imply a 1970s-like inflation scenario. It would simply be a return to the interest rate levels we lived with between 2000 and 2008. What would be the impact on Federal net interest costs if interest rates were to climb to these levels?

Of course, there is always the possibility that interest rates keep falling as they have done over the last decade and we enter a Japan-like scenario, where the average interest cost on US debt is somewhere around 0.5% in ten years. The chart below shows these two scenarios for interest rates together with the CBO projections from January and August.

Interest rate scenarios

Source: CBO, Fidante Partners.

None of these interest rate scenarios is extreme. Instead, they represent a reasonable range of possible outcomes. But the impact on Federal interest expenses and the deficit is large. The chart below shows the projected net interest expense for these four scenarios, holding everything else equal. That is, the projected tax revenues and GDP growth are all assumed to be correct – arguably a heroic assumption.
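To make the scenario arithmetic concrete, here is a minimal sketch of such a projection in Python. The starting debt level, its growth rate and the linear rate paths are simplifying assumptions of mine, not the CBO’s actual model:

```python
# Minimal sketch of the scenario arithmetic: cumulative net interest
# expense if the average rate on Federal debt follows different paths.
# Starting level, debt growth and linear rate paths are illustrative
# assumptions, not the CBO's model.
DEBT_2019 = 16.8    # debt held by the public, roughly, in $ trillion
DEBT_GROWTH = 0.05  # assumed annual growth of the debt stock

scenarios = {  # average interest rate in 2019 -> 2029
    "CBO August (2.5% -> 3.0%)":     (0.025, 0.030),
    "CBO January (2.6% -> 3.5%)":    (0.026, 0.035),
    "Normalization (2.5% -> 4.9%)":  (0.025, 0.049),
    "Japanification (2.5% -> 0.5%)": (0.025, 0.005),
}

for name, (r_start, r_end) in scenarios.items():
    total = 0.0
    for year in range(11):  # 2019 through 2029
        rate = r_start + (r_end - r_start) * year / 10  # linear rate path
        debt = DEBT_2019 * (1 + DEBT_GROWTH) ** year
        total += rate * debt  # net interest paid that year
    print(f"{name}: cumulative net interest ~ ${total:.1f}tn")
```

Even with these crude assumptions, the gap between the highest and lowest rate path amounts to trillions of dollars of interest expense over a decade.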

It turns out that just by assuming a lower interest level on Treasuries, the CBO managed to reduce the expected interest expense by $1.3 trillion over the next ten years! That is 0.4% of US GDP per year. If interest rates were to normalize towards 5%, as I expect, interest expenses would rise by about $4 trillion over the next decade, and the deficit would increase by 1.3% of GDP per year on average and reach 6.6% of GDP in 2029.

Currently, the CBO expects the US debt/GDP ratio to climb from 70.6% in 2019 to 88.0% in 2029. With higher interest rates, the debt/GDP ratio could quickly approach 100%. Remember that these calculations assume that GDP growth, tax revenues, etc. all remain unchanged – not necessarily a given if interest rates rise significantly.

Projected net interest expense

Source: CBO.

Of course, none of this needs to happen. Instead, the US could face a Japan-like scenario where interest rates keep falling and interest expenses come in much lower than expected. This would be the “get out of jail” card for the US. All that needs to happen for this scenario to materialize is for the Fed to keep long-term interest rates very low for a long time. In other words, the reality of these numbers is that the Fed will face enormous pressure to enter into a QE-infinity scenario, where it manipulates the entire yield curve for years to come, just like the Bank of Japan does.

This pressure to keep interest rates low for long will not subside once Donald Trump leaves office. While Donald Trump may publicly bully the Fed into lowering rates, the Democrats are flirting with Modern Monetary Theory and large deficits to finance large-scale investment and welfare projects. Furthermore, fiscal stimulus might be the only weapon left to fight the next recession. And the bigger the deficit gets, the higher the pressure on the Fed to keep rates lower for longer.

The Virtuous Investor: Rule 2

Have confidence in your knowledge – even if it brings temporary losses

This post is part of a series on The Virtuous Investor. For an overview of the series and links to the other parts, click here.

“The next that thou go unto the way of life, not slothfully, not fearfully: but with sure purpose, with all thy heart, with a confident mind, and (if I may so say) with such mind as he hath that would rather fight than drink.”

Erasmus of Rotterdam

Once the virtuous investor has acquired the required knowledge of investment techniques, fees, market behaviour, etc., it is time to put this knowledge into action. In fact, the virtuous investor will typically build this knowledge in a trial-and-error fashion, learning as he goes along. This is what we all do.

However, when putting our investment knowledge into practice, we often abandon it at the first sign of trouble. A good investment strategy needs time to come to fruition and show its merits. Every asset allocator will tell you that you can only start to measure the performance of a strategy after five years or more, and the higher the downside risk of a strategy (e.g. a pure equity portfolio), the longer the investment horizon needs to be. Yet even institutional investors typically assess their investment strategies after three years or less.
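A back-of-the-envelope calculation shows why such long horizons are needed. If annual returns are roughly independent, the t-statistic of a strategy’s average excess return grows with the square root of the number of years observed, so the years needed for statistical significance are approximately (t / Sharpe ratio)². A minimal sketch of this standard result, with illustrative Sharpe ratios:

```python
# With i.i.d. annual returns, t-stat = Sharpe ratio * sqrt(years),
# so the years needed to reach a target t-stat are (t / Sharpe)^2.
TARGET_T = 2.0  # roughly 95% confidence

for sharpe in (0.3, 0.5, 0.9):
    years = (TARGET_T / sharpe) ** 2
    print(f"Sharpe ratio {sharpe:.1f}: ~{years:.0f} years of data needed")
# Sharpe 0.3 -> ~44 years, 0.5 -> ~16 years, 0.9 -> ~5 years
```

Even a strategy with a very respectable Sharpe ratio of 0.9 needs about five years of data before skill can be separated from luck, which is exactly why a three-year review is too short.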

Individual investors are an even more impatient lot. In its Global Investor Survey 2019, Schroders asked 25,000 people in 32 countries how long they expect to hold an investment. Our chart shows that even in Japan, where investors were the most patient, the intended holding period is only 4.5 years. In Europe, it is typically around 3 years, and in emerging markets it is typically 2 years or less. But these are self-reported expectations. In a previous job at a global wealth management firm, we checked how long investors stuck to a specific asset allocation in their discretionary portfolio mandates. It turned out that investors changed strategy every 18 months on average! There is absolutely no way that an investment strategy can be evaluated after just one and a half years.

Expected holding period for investments

Source: Schroders Global Investor Survey 2019.

According to the 2016 survey from Schroders, institutional investors are a little more patient and expect to hold their investments for about 4.7 years on average, but if you look at their deeds, not their words, they, too, seem very impatient. Anne Tucker investigated the turnover of fund managers using three different metrics. Our chart below shows the average length of time a stock stays in the portfolio of an actively managed fund. Note that this measure is based on quarterly reported portfolio holdings of US mutual funds and thus excludes stocks bought and sold within a quarter. Even so, the average holding period for stocks was a mere 14 months, though there is a visible trend towards longer holding periods in the chart.
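The basic idea of inferring holding periods from quarterly filings can be sketched as follows. Tucker’s actual metrics are more elaborate, and the snapshots and tickers here are hypothetical:

```python
# Sketch: infer average holding periods from quarterly holdings snapshots.
# A stock's holding spell is the run of consecutive quarters in which it
# appears in the fund's reported portfolio. Within-quarter round trips
# are invisible to this measure, just as in the reported data.
snapshots = [
    {"AAA", "BBB", "CCC"},  # Q1 (hypothetical tickers)
    {"AAA", "BBB", "DDD"},  # Q2
    {"AAA", "DDD", "EEE"},  # Q3
    {"AAA", "EEE", "FFF"},  # Q4
]

completed = []    # finished holding spells, in quarters
open_spells = {}  # ticker -> quarters held so far
for holdings in snapshots:
    for ticker in holdings:
        open_spells[ticker] = open_spells.get(ticker, 0) + 1
    for ticker in list(open_spells):
        if ticker not in holdings:  # position was closed this quarter
            completed.append(open_spells.pop(ticker))
completed.extend(open_spells.values())  # spells still open at the end

avg_quarters = sum(completed) / len(completed)
print(f"Average holding period: ~{avg_quarters * 3:.0f} months")  # ~6 months
```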

Average holding period of stocks in active mutual funds

Source: Tucker (2018).

All of this buying and selling not only increases trading costs in a portfolio, it also introduces significant timing risk. If investments are bought and sold at the wrong time, the investor may miss rallies and end up with a much lower return than a true long-term buy-and-hold investor. Of course, if the timing is right, the investor could also outperform the buy-and-hold investor by avoiding drawdowns.

Morningstar has for many years calculated the performance of the average investor and compared it to the performance of a buy-and-hold investor in the same fund. They call the difference between the two the “behaviour gap”. The size of this behaviour gap over the ten years ending in March 2018 is shown in the chart below; negative numbers mean the average investor performed worse than the buy-and-hold investor. Typically, the behaviour gap widens as investment strategies become more volatile. Investors in a defensive allocation fund with 15% to 30% allocated to equities lost about 0.4% per year relative to a buy-and-hold investor. Investors in global equity funds, however, lost 1.8% per year through ill-timed transactions.

Interestingly, in the realm of pure bond portfolios, Morningstar shows rising behaviour gaps again, with government bond funds, corporate bond funds and high yield bond funds all showing behaviour gaps in excess of 1% per year. All of these numbers are large and lead to significant underperformance of the average investor versus a long-term buy-and-hold investor.
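Morningstar’s investor-return methodology is more involved than this, but at its core the behaviour gap is the difference between a fund’s time-weighted (buy-and-hold) return and the dollar-weighted return implied by investors’ cash flows. A minimal sketch with hypothetical NAVs and flows:

```python
# Behaviour gap sketch: buy-and-hold (time-weighted) return vs the
# dollar-weighted return an investor earns through ill-timed flows.
# NAVs and cash flows are hypothetical.

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Periodic internal rate of return, found by bisection on NPV."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

nav = [100, 120, 90, 110]      # fund NAV at the end of each year
# The investor buys one unit at launch, chases the rally with a second
# unit at 120, and sells both at the end: classic bad timing.
flows = [-100, -120, 0, 2 * 110]

buy_and_hold = (nav[-1] / nav[0]) ** (1 / (len(nav) - 1)) - 1  # ~3.2% p.a.
investor = irr(flows)                                          # 0.0% p.a.
print(f"Behaviour gap: {(investor - buy_and_hold) * 100:.1f}% per year")
```

In this example, the fund itself returned about 3.2% per year, but the investor’s ill-timed second purchase reduced his dollar-weighted return to zero – a behaviour gap of roughly -3.2% per year.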

Behaviour gap 2008 – 2018

Source: Morningstar.

No wonder that 51% of investors surveyed by Schroders were disappointed with their investment performance, the second most common complaint being that they should have remained invested for longer (the most common being a rather unspecific complaint about insufficient performance of the product). The survey also looked at investors’ behaviour during the short period of market volatility in the fourth quarter of 2018. Only 30% of investors left their portfolios unchanged, while two out of three made changes in response to the volatility. Of these, one in five moved some of their investments into cash and 37% moved some into lower-risk investments – thus missing out on the strong recovery in the first half of 2019.

Buy and hold is hard to do, but there are techniques investors can use to “keep the faith” in their investment knowledge and their investment strategy. One of them is to put short-term market volatility into a long-term context, as I have done here. I will write about another tool, one that links your portfolio to your investment goals, in two weeks’ time when we talk about rule 4.

Machine learning is getting better, but has much to learn

Last week, I discussed the tremendous risk of overfitting algorithms to noisy data and the potential to create seemingly profitable investment strategies through data mining. Because machine learning and artificial intelligence (AI) applications tend to work with extremely large amounts of data, this risk is particularly prevalent in those fields.

The promise of machine learning and AI is that they can work with unstructured data and discover nonlinear relationships in markets that are ignored by traditional regression-based statistical methods. I am convinced that markets are full of nonlinearities, and if we can develop reliable methods to identify and “predict” them, the investment world will make a giant leap forward.

But if your machine learning application is too simple, it will essentially act like a technical analyst who tries to predict the future development of asset prices by identifying patterns in past prices that may or may not be there to begin with. And when this “pattern recognition” approach to investing is left unchecked, it can lead to some dangerous outcomes. Take the following anecdote from Jonathan Zittrain’s recent article in the New Yorker:

In 2011, a biologist named Michael Eisen found out, from one of his students, that the least-expensive copy of an otherwise unremarkable used book – “The Making of a Fly: The Genetics of Animal Design” – was available on Amazon for $1.7 million, plus $3.99 shipping. The second-cheapest copy cost $2.1 million. The respective sellers were well established, with thousands of positive reviews between them. When Eisen visited the book’s Amazon page several days in a row, he discovered that the prices were increasing continually, in a regular pattern. Seller A’s price was consistently 99.83 per cent that of Seller B; Seller B’s price was reset, every day, to 127.059 per cent of Seller A’s. Eisen surmised that Seller A had a copy of the book, and was seeking to undercut the next-cheapest price. Seller B, meanwhile, didn’t have a copy, and so priced the book higher; if someone purchased it, B could order it, on that customer’s behalf, from A.
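To see how quickly such an unchecked feedback loop runs away, here is a minimal sketch of the two repricing rules described in the quote. The starting prices are hypothetical:

```python
# Two pricing algorithms reacting only to each other, as in the anecdote:
# Seller A undercuts B at 99.83% of B's price; Seller B, holding no stock,
# reprices at 127.059% of A's. Each full cycle multiplies prices by
# 0.9983 * 1.27059 ~ 1.268, i.e. roughly +27% per day.
price_a, price_b = 20.0, 25.0  # hypothetical starting prices in dollars

for day in range(60):
    price_a = 0.9983 * price_b
    price_b = 1.27059 * price_a

print(f"After 60 days: A = ${price_a:,.0f}, B = ${price_b:,.0f}")
# A $20 book ends up costing tens of millions of dollars.
```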

Now imagine something like that happening in the stock market. Obviously, this is an extreme example, but market episodes like flash crashes may be due, at least in part, to algorithms trading with each other unchecked by fundamental investors and other market participants.

To make useful predictions about stocks and markets, machine learning programmes cannot just be trained in pattern recognition; they also need some fundamental input about the underlying drivers of the time series. This is one of the key recommendations of the excellent article by Keywan Rasekhschaffe and Robert Jones in the latest edition of the Financial Analysts Journal.

They stress that machine learning can only hope to become better if users engage in “feature engineering”, i.e. use their knowledge of the fundamentals underlying a time series to provide a framework within which the machine learning algorithm searches for the best combination of factors to predict that series. For example, when forecasting overall stock market prices, it is important to introduce general macroeconomic relationships (e.g. the influence of interest rates on stock prices) to the algorithm. When trying to predict individual stock returns, on the other hand, these macroeconomic relationships might be less useful, and company fundamentals (e.g. corporate leverage ratios) might be used to train the algorithm instead. Otherwise, the machine learning algorithm can fall into the trap of mistaking correlation for causation. We have all heard the stories that the butter price in Thailand “predicts” the S&P 500, and so on. Because the data used by machine learning applications is so large and so opaque, feature engineering will become an absolute necessity to reduce the likelihood of overfitting the application to noisy data.
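As an illustration of what feature engineering might look like in practice, here is a minimal sketch. The data is simulated, and the chosen features (an equity-risk-premium proxy built from yields, plus a business-cycle indicator) are my own illustrative assumptions, not the authors’ recipe:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 240  # 20 years of simulated monthly data

df = pd.DataFrame({
    "ten_year_yield": rng.normal(3.0, 1.0, n),  # macro inputs (simulated)
    "earnings_yield": rng.normal(5.0, 1.5, n),
    "ism_index":      rng.normal(52.0, 4.0, n),
})
# Feature engineering: encode a known fundamental relationship (yields
# vs stock prices) instead of letting the algorithm search blindly.
df["erp_proxy"] = df["earnings_yield"] - df["ten_year_yield"]
# Simulated target: next-month market return, partly driven by the feature.
df["fwd_return"] = 0.002 * df["erp_proxy"] + rng.normal(0, 0.04, n)

X, y = df[["erp_proxy", "ism_index"]], df["fwd_return"]
model = GradientBoostingRegressor(n_estimators=100, max_depth=2)
model.fit(X.iloc[:200], y.iloc[:200])           # train on first 200 months
print(model.score(X.iloc[200:], y.iloc[200:]))  # out-of-sample R^2
```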

Even so, machine learning programmes have a hard time beating traditional statistical approaches like linear regression. In 2015, Alaa Sheta and his colleagues put machine learning algorithms as well as linear regression to the test, trying to predict the S&P 500 index. While the machine learning algorithms performed better for one-day forecasts, they quickly went off the rails and produced larger prediction errors than linear regression for forecast horizons of several days or more.

In a more comprehensive exercise, Spyros Makridakis and his collaborators looked at eight machine learning algorithms, two neural network algorithms and eight classical statistical methods for forecasting the S&P 500. The chart below is taken from their paper and shows the symmetric Mean Absolute Percentage Error (sMAPE) of all the different algorithms and statistical methods; note that smaller values indicate better forecasts. All the machine learning methods were dominated by the traditional statistical methods when it came to forecasting the S&P 500. The lesson is that, at least for univariate time series, it is probably best to start with a simple statistical method. Most likely, it will beat the machine learning methods anyway.
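For reference, sMAPE averages the absolute forecast errors scaled by the average magnitude of the actual and forecast values. A small implementation, matching the definition used in the M-competition literature as far as I know:

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric Mean Absolute Percentage Error in percent (lower is better)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100 * np.mean(2 * np.abs(forecast - actual)
                         / (np.abs(actual) + np.abs(forecast)))

print(smape([100, 102, 105], [101, 100, 107]))  # ~1.6
```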

Forecast error of different algorithms trying to forecast the S&P 500

Source: Makridakis et al. (2018)

However, machine learning methods may have an edge when it comes to forecasting large multivariate time series. In their FAJ paper, Rasekhschaffe and Jones used machine learning algorithms to predict the returns of thousands of US and international stocks. Again, they found that individual machine learning and neural network algorithms often performed no better than an ordinary linear regression. And if the number of variables in the regression approach was reduced with the help of principal component analysis and the like, standard statistical methods could perform almost as well as the most sophisticated neural networks. In practice, the difference in performance between statistical methods augmented with principal component analysis and machine learning approaches was small.
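For illustration, a PCA-augmented regression baseline of this kind might look as follows; the data is simulated, and the component count is an arbitrary choice of mine:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 50))  # 50 simulated predictors per observation
y = 0.02 * X[:, :3].sum(axis=1) + rng.normal(0, 0.1, 1000)  # simulated returns

# Compress the predictor set to a few principal components, then run an
# ordinary linear regression on them - the "augmented" classical baseline.
model = make_pipeline(PCA(n_components=5), LinearRegression())
model.fit(X[:800], y[:800])
print(model.score(X[800:], y[800:]))  # out-of-sample R^2
```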

However, when different machine learning algorithms are combined to enhance the forecast signal, the advantage over statistical methods increases. And this is the second recommendation of the article: don’t just stick with one method to forecast markets; combine many different methodologies to improve your forecasts. This is no different from what quant investors have always done, but it is worth repeating, since so many investors seem to think that machine learning provides the ultimate black box that cannot be improved upon. Instead, machine learning and AI should be seen as just another approach to analysing data. As with every advancement in quantitative finance, it will help us move forward and improve our ability to understand and forecast markets. But these improvements will likely be gradual rather than the revolution that is so often promised. And like every other advancement in quantitative finance, it will likely lead to disappointing results for investors who believe the hype and optimism of marketers and early adopters.
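A minimal sketch of such a forecast combination, using simulated data and an equal-weighted average of three model families (the specific models and weights are my illustrative choices, not the authors’):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))  # simulated features
y = 0.05 * X[:, 0] + 0.03 * np.sin(X[:, 1]) + rng.normal(0, 0.1, 500)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

models = [
    LinearRegression(),
    RandomForestRegressor(n_estimators=200, random_state=0),
    GradientBoostingRegressor(random_state=0),
]
# Equal-weighted combination: the models' errors are imperfectly
# correlated, so the averaged signal is more stable than any single one.
combined = np.mean([m.fit(X_tr, y_tr).predict(X_te) for m in models], axis=0)
print(np.corrcoef(combined, y_te)[0, 1])  # out-of-sample signal correlation
```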

Financial markets have a way of reinventing the wheel but every time with new bells and whistles. Machine learning and AI is just another shiny new wheel.

The plot thickens…

Last week, stock markets finally freaked out about the possibility of a recession in the US. Bond markets had witnessed declining yields and a flattening yield curve for weeks, but stock markets kept their cool – until the US Treasury yield curve inverted for the first time since 2007. Of course, investors have been debating yield curve inversions all year, since different parts of the yield curve have inverted before, but last week we saw the spread between 10-year and 2-year Treasury yields finally turn negative. As I have said before, the 10Y-2Y yield spread is my preferred measure of yield curve inversion because it tends to be the most reliable one, with few false alarms.

However, the eternally bullish society of fund managers keeps arguing that it won’t be as bad as everyone thinks. According to the latest Bank of America Merrill Lynch fund manager survey, the share of fund managers who expect a recession in the US in the next 12 months may be at its highest level since 2011, but 64% of fund managers still don’t expect a recession. That is, two out of three fund managers are busy explaining away the inverted yield curve as a recession indicator. I guess if you run a fund, you had better be eternally bullish. It’s called talking your book.

In any case, there are by now several indicators that have historically “predicted” recessions more or less accurately. Of course, for every single indicator one might find good reasons why it might not be reliable today, but if several different indicators signal a recession at once, explaining them away gets a lot harder.

Thus, I have looked at nine different indicators that are often quoted in the media as recession predictors and checked whether they currently signal a recession:

  • The yield spread between 10-year and 2-year Treasuries. If it is negative, I count it as an indicator that signals a recession in the next 12 months.

  • The Federal Reserve is cutting interest rates. If this is the case, I count it as an indicator that signals a recession in the next 12 months.

  • The ISM Manufacturing Index. If it drops below 50 points, I count it as an indicator that signals a recession in the next 12 months.

  • Quarterly GDP growth. If we get one quarter of negative growth, I count it as an indicator that signals a recession in the next 12 months.

  • Nonfarm payrolls. If the average number of new jobs over the last three months is below the average number of the previous three months, I consider it a recession signal. The three-month averages are used to smooth out any outliers that may appear in any given month.

  • Existing and new home sales. If the three-month average is below the average of the previous three months, I consider it a recession signal. 

  • The recession probabilities for the next twelve months calculated by the New York Fed and the Cleveland Fed, based on two different methodologies. If a recession probability rises above 30%, I consider it a recession signal. The 30% level is used because, historically, almost every time these indicators climbed above 30% they continued towards 50% and a recession followed, whereas in some cases (e.g. the 2001 recession) they never even reached 50%.

In total, this provides me with nine different indicators that may signal a recession. You can argue with every single one of them, but if many trigger a recession warning at the same time, you can be reasonably sure that something is going on (the sketch below shows the counting logic in code form).
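As a minimal sketch, assume a hypothetical snapshot of the nine indicators; the values below are illustrative, and in practice the underlying series can be pulled from FRED (e.g. T10Y2Y for the yield spread, PAYEMS for payrolls):

```python
# Count how many of the nine indicators currently signal a recession.
# The snapshot values are illustrative, not live data.
snapshot = {
    "yield_spread_10y2y": -0.04,           # percentage points
    "fed_cutting_rates": True,
    "ism_manufacturing": 51.2,             # index points
    "gdp_growth_qoq": 0.5,                 # percent
    "payrolls_3m_vs_prior_3m": 12.0,       # change in thousands
    "existing_home_sales_3m_delta": -0.8,
    "new_home_sales_3m_delta": 0.3,
    "ny_fed_recession_prob": 0.38,
    "cleveland_fed_recession_prob": 0.41,
}

signals = [
    snapshot["yield_spread_10y2y"] < 0,
    snapshot["fed_cutting_rates"],
    snapshot["ism_manufacturing"] < 50,
    snapshot["gdp_growth_qoq"] < 0,
    snapshot["payrolls_3m_vs_prior_3m"] < 0,
    snapshot["existing_home_sales_3m_delta"] < 0,
    snapshot["new_home_sales_3m_delta"] < 0,
    snapshot["ny_fed_recession_prob"] > 0.30,
    snapshot["cleveland_fed_recession_prob"] > 0.30,
]
print(f"{sum(signals)} of 9 indicators signal a recession")  # 5 of 9 here
```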

The chart below shows the number of signals that triggered a recession warning (blue line) together with the official US recessions (grey bars) over the last 40 years. Even though I use nine different indicators, at no point did all of them signal a recession at the same time. The highest count ever reached was six out of nine.

Within the last three weeks, the number of indicators signaling a recession has jumped from three to five, thanks to the Fed starting to cut interest rates and the yield curve inverting. With one exception, whenever the signal count reached five or higher in the last 40 years, the US was either already in a recession or dropped into one within the next 12 months. The only exception was April 2003, when the signal count briefly jumped to five before immediately dropping back to four.

And while that proves nothing, it strongly suggests that the likelihood of a recession in the US in the next 12 months is very high. The plot thickens…

Number of indicators signaling a recession in the US

Source: FRED, St. Louis Fed.
