One of the things that fascinate me is our inability to deal with probabilities and randomness. As humans, we have evolved to see patterns of causality even if there are none (see also: chart analysis, investment punditry, etc.).

But in real life, so many things are random or quasi-random and decisions have to be made based on some assessment of how likely certain outcomes are. If you deal with *risk*, you can assess the likelihood of an event and then update your baseline estimate with the information at hand to end up with a probability that can be expressed in a number. But in many cases, you are not dealing with risk but *uncertainty* in the Knightian sense. This means that there is no number that can be attached to the likelihood of some event. This is essentially the situation every entrepreneur faces when she starts a business or launches a new product. There is no way to calculate the probability of success with traditional stochastic methods because the range of possible outcomes is not well defined.

And then there are the situations in between, where experts are able to calculate probabilities or confidence intervals of an event happening but have to communicate with laypeople and politicians (which is the same thing, really) in a way that helps them understand their findings and creates a call for action. Traditionally, experts have used whatever words they wanted to convey a certain probability, but since the work of Sherman Kent, a direct translation of probabilities into specific words has taken hold. For example, the Intergovernmental Panel on Climate Change (IPCC) uses a defined vocabulary to convey the likelihood of different climate change scenarios. In these vocabularies, the words are typically matched to the likelihood that most people perceive. Below is a chart from a recent survey by Michael and Andrew Mauboussin, who asked 1,700 people to put a numeric probability to a given word. For example, if you tell someone an event is likely to happen, they think the probability is somewhere around 75%.

But what happens if you read that something is likely to happen not from one but from two sources? Do people change their assessment of the likelihood of an event?

Robert Mislavsky and Celia Gaertig have conducted a series of experiments to understand how people update probabilities if they are communicated via numbers or words. They find – rather worryingly – that people who deal with probabilities in the form of numbers update them differently than people who deal with probabilities expressed as words.

Specifically, if people are confronted with two assessments of an event expressed as numerical probabilities (e.g. 50% and 60%), then they aggregate this information by averaging the probabilities (i.e. they conclude that the probability of the event is 55%). This works for probabilities above and below 50%, and independent of how the probabilities are conveyed (simultaneously or one after the other). In short, people deal with numerical probabilities as if they were generated independently and average them out.

But if the same probabilities are presented as words instead of numbers, something different happens. If people meet two experts (e.g. financial advisers) who both tell them that an event is “likely”, they afterward conclude that the event must be “very likely”. Instead of averaging between the two expert opinions, they add them up and get more confident in a certain outcome. This effect leads people to make more extreme forecasts and become overconfident in those forecasts.

In the investment world, this simply means that people become more likely to trade based on these assessments. For example, if an investor sees an expert discussing a stock on TV who says the probability of this stock earning 10% or more over the next year is 60%, while another expert says it is 50%, the investor will conclude that the probability of a 10%-plus return is somewhere around 55%, or roughly even odds.

But if the same investor sees two experts independently claim that the stock is likely to earn more than 10% next year, the investor may conclude that it is very likely that the stock will have such a high return. And the term "very likely" corresponds to a probability of, say, 80% or more. Of course, in the second case the investor may buy the stock, while in the first case she may not.
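The two aggregation patterns above can be sketched in a few lines of code. This is a toy illustration of the finding, not the authors' model; the word-to-probability mapping is an assumption loosely inspired by the Mauboussin survey numbers.

```python
# Toy sketch of the two aggregation patterns Mislavsky and Gaertig describe.
# The exact perceived probabilities below are illustrative assumptions.

def aggregate_numeric(p1: float, p2: float) -> float:
    """Two numeric estimates tend to be averaged: 50% and 60% -> 55%."""
    return (p1 + p2) / 2

# Hypothetical mapping from verbal labels to the probability most people
# perceive (e.g. "likely" ~ 75% in the Mauboussin survey).
PERCEIVED = {"likely": 0.75, "very likely": 0.90}

def aggregate_verbal(word1: str, word2: str) -> float:
    """Two matching verbal estimates tend to escalate rather than average:
    hearing "likely" twice is read as "very likely"."""
    if word1 == word2 == "likely":
        return PERCEIVED["very likely"]
    return PERCEIVED[word1]

numeric = aggregate_numeric(0.50, 0.60)        # averaging: ~0.55
verbal = aggregate_verbal("likely", "likely")  # escalation: 0.90
```

The point of the sketch is the asymmetry: the numeric path converges toward the middle of the two estimates, while the verbal path jumps past both of them.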

How much more likely is an investor to make extreme forecasts? The chart below shows one of the results of Mislavsky and Gaertig. In this experiment, participants were presented with probability assessments from either one or two advisers, expressed either in numbers or in words. If the probabilities were expressed in numbers, only about 10% of the participants made extreme forecasts about an event, but when two advisers communicated probabilities verbally, the share of people who made extreme forecasts nearly doubled.

To me, what seems to be at play here is a form of group pressure that makes people conform to a consensus opinion, something I have explained here.

Share of extreme forecasts of events communicated numerically or verbally

Source: Mislavsky and Gaertig (2019).
