Yesterday, I wrote about a brain teaser in probability and statistics that shows how ill-equipped we humans are to grasp probabilities intuitively in a realistic setting. The example I used was two experts making two different point forecasts for inflation, and the Minister of Finance combining these two forecasts into a forecast that was higher than either of the individual forecasts.
The seemingly surprising result may have been less surprising if the experts and the Ministry of Finance had made forecasts of ranges rather than point forecasts. For example, instead of forecasting 2% inflation next year, an expert could say that with a 90% probability, inflation will fall between 1.5% and 2.5%.
I think forecast ranges are far better than point forecasts because they let the forecaster express a degree of confidence (or lack thereof). In the hard sciences, using confidence intervals around your forecasts is not just common, it is a licence to operate. I sometimes joke that back in my days studying physics, I could only publish a paper if I showed confidence intervals for every prediction. Once I switched to finance and economics, I could publish if I did NOT show confidence intervals.
Many people have a hard time dealing with uncertainty and randomness because they lack clarity and guidance. If you receive a forecast range instead of a point forecast, you have to start thinking for yourself to assess what to do with your investments. Much easier to simply take a number and run with it. In fact, in my job I am moving away from forecast ranges and towards directional forecasts (e.g. inflation will drop or increase).
To show you why, let’s assume that all economists and analysts forecast ranges instead of target numbers.
Suppose the Minister of Finance needs to know inflation in order to formulate ministry policy. He consults an expert, who tells him that inflation is going to be 1%. The expert cannot be entirely sure about this number, but she is confident that inflation lies between 0.8% and 1.2%. The minister then proceeds with policy based on this information. After some time, he thinks it wise to consult a second expert. The second expert tells him that inflation will be 3%. This expert is not certain either, but he is confident that inflation lies between 2.6% and 3.4%. The minister believes that the first expert is slightly more reliable than the second expert, but only slightly. Based on this new information, the minister decides to change the inflation forecast from 1% (the old information) to 2% (the average of the old and the new information). But how much confidence should the minister have in this new number?
Inflation lies between 1.9% and 2.1%
Inflation lies between 1.5% and 2.5%
Inflation lies between 1.1% and 2.9%
Inflation lies between 0.8% and 3.4%
In a survey of 340 students of science, medicine, economics, and humanities, more than half answered that inflation would lie between 0.8% and 3.4%. This is technically not wrong, but one can do much better. Only 4% realised that, based on the information given, inflation would lie between 1.9% and 2.1%.
When I give this answer, people are not surprised that the new estimate of 2% inflation lies between the two individual estimates of 1% and 3%. That seems intuitive (though as we saw yesterday, it is not necessarily true). What trips people up is that the confidence interval of the first expert was plus or minus 0.2 percentage points and that of the second expert was plus or minus 0.4 percentage points, yet the confidence interval of the combined forecast is plus or minus 0.1 percentage points. How can two uncertain forecasts be combined into a forecast that is more certain than either individual forecast?
This is the beauty of forecast errors. They can be positive or negative, and we don’t know in advance which they will be. If we knew the direction of the forecast error, we could make a better forecast by shifting the midpoint of the forecast range. But because forecast errors can go in either direction, they cannot simply be added or averaged across forecasts. Instead, they have to be combined via their squares: for independent forecasts, the reciprocal of the squared combined error is the sum of the reciprocals of the squared individual errors. If you do that little bit of maths, you will find that the forecast error of one forecast is partially cancelled out by the forecast error of the other, and the resulting combined forecast error is indeed smaller than any individual error.
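As a minimal sketch of that maths in Python, assuming the two experts’ errors are independent and treating each stated range as a symmetric error band (assumptions the story leaves implicit):

```python
import math

# Half-widths of the two experts' error bands, in percentage points.
expert_errors = [0.2, 0.4]

# For independent forecasts, precisions (1 / error^2) add up:
#   1 / e_combined^2 = 1 / e_1^2 + 1 / e_2^2
total_precision = sum(1.0 / e**2 for e in expert_errors)
combined_error = math.sqrt(1.0 / total_precision)

print(f"combined error: +/- {combined_error:.2f}pp")
# The combined band is narrower than either individual band.
assert all(combined_error < e for e in expert_errors)
```

How much narrower the combined band comes out depends entirely on the independence assumption: if the two experts make correlated mistakes, much of the cancellation disappears.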
This example is similar to Bayesian statistics, which provides probabilities of events happening under given conditions. As investors, we constantly have to make forecasts of markets, but the outcome depends heavily on certain conditions being met first. In the case of inflation, we have to forecast inflation under the condition that the Federal Reserve or the Bank of England hikes interest rates to 2% and compare this to inflation under the condition that they hike interest rates to 3%. The results will probably be quite different, but as the little maths example above has demonstrated, we can’t intuitively grasp the correct answer.
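To make the conditional-forecast idea concrete, here is a hypothetical sketch; the scenario probabilities and conditional forecasts below are invented for illustration, not taken from any actual policy path:

```python
# Hypothetical scenario table: P(scenario) and E[inflation | scenario].
# All numbers are invented for illustration.
scenarios = {
    "central bank hikes to 2%": (0.6, 4.0),
    "central bank hikes to 3%": (0.4, 2.5),
}

# Law of total expectation: the unconditional forecast is the
# probability-weighted average of the conditional forecasts.
forecast = sum(p * infl for p, infl in scenarios.values())
print(f"unconditional inflation forecast: {forecast:.2f}%")  # 3.40%
```

The headline number is just the probability-weighted average of the scenario forecasts, which is why two forecasters who agree on every scenario but disagree on the scenario probabilities can still publish very different numbers.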
Dealing with uncertainty is not intuitive and most investors simply don’t have the time or the patience to do it properly. To be frank, I have a postgrad degree in mathematics and I still can’t stand Bayesian statistics. And I typically lack the patience to do all the maths needed for a correct Bayesian analysis of uncertainties. Plus, I lack the ability to explain to other people in plain English why the results are what they are. If you thought this post was hard to understand, it is not your fault. It is my inability to communicate statistical concepts clearly.
Much easier to think in directional bets. I forecast inflation to rise, but if the Federal Reserve hikes interest rates above 2%, we won’t be able to avoid a recession, in which case inflation will drop rather quickly. This conveys the key information of my forecast without bothering people with forecast intervals or with how different scenarios have been combined into one number with an uncertainty interval around it.
I agree that assigning probabilities to particular scenarios is an improvement, but it is far from a panacea. It can give an impression of faux certainty: the sense that all possible outcomes are knowable in advance and have been captured by the forecast. The danger with faux certainty is that it might lead market participants to believe something is more likely than it actually is.
Probability by itself is hard and not intuitive. But probability, statistics, economics and politics mixed together is a very complex cocktail. Many of the institutions (in the USA, Europe, etc.) that help produce economic forecasts also influence policy and thus always have a political angle (do not let the truth get in the way).
In 1990, a (simple Bayesian) problem was presented in the Sunday supplement Parade magazine. It was framed as the Monty Hall "Let's Make a Deal" TV contest, wherein the contestant must guess which of three doors has a car behind it. If the contestant first selects door A, and the host opens door C to reveal a goat, should the contestant switch to door B when given the chance?
The column was written by a beautiful, stylish woman, Marilyn vos Savant (at the time the Guinness world's smartest woman per intelligence tests, but with no formal three-letter hangups after her name). She wrote that the contestant should always switch: the first pick is right only one time in three, so switching wins two times out of three.
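For anyone who wants to check her answer without the algebra, a quick simulation sketch (assuming the standard rules: the host knows where the car is and always opens a goat door):

```python
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the contestant wins the car."""
    doors = ["A", "B", "C"]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the pick nor the car.
    host = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != host)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# Expected: ~0.333 without switching, ~0.667 with switching.
```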
Guess how many donkeys (from Paul Erdős down to other math mansplainers, including some from respected universities in the USA) wrote in to disparage her? Per Parade records, around 10,000 letters came in nationwide, with about 1,000 from PhDs in math and statistics!
The lizard part of the brain cannot intuitively grasp probabilities, even in people who were supposed to know the math. Even if we account for sexism, pride and prejudice, that is far too many mathematicians rejecting the higher-probability choice.
Now extend that to forecasting economic direction to the masses amidst the stressful pile of Covid-19, monkeypox, Ukraine and inflation.
What really happens is that if a charismatic politician (e.g. a believable Minister of Finance) says something, at least half of the audience will not think independently and will just take it for what it is, and sometimes it becomes a self-fulfilling prophecy. So it does not matter much that the forecast is not so accurate, as long as it comes across as believable, like Hari Seldon's "psychohistory" in Asimov's novels.
Experiments have shown that the functional MRI of the frontal cortex of a congregation listening to a charismatic preacher was basically inactive or at rest. Perhaps that is partly how, in the 1940s, the "emperor of the sun" from Asia and a psycho with a funny mustache in Europe convinced thousands to follow them, without even needing Twitter.