Tottenham Report: “There is no such thing as the future until it is the present” — George Bernard Shaw

October 18, 2022 10:00 PM
  • Andrew Tottenham — Managing Director, Tottenham & Co

For as long as people have been able to imagine a tomorrow, they have wanted to peek into the future. Until the 15th century, most believed that everything that happened was according to the “will of the gods.” The future could not be influenced, but with the right tools and knowledge, the will of the gods could be known.

The various techniques used included the drawing of lots, looking at the entrails of birds, tarot-card readings, consulting the I Ching, and more, all in order to know more about the future. 

Today, we have faith in experts, people supposedly in the know, who will predict what is likely to happen with the weather, the economy, politics, and the like. Experts use knowledge of their particular subject, of what is happening now, and of what happened before in similar circumstances, to try to project forward.

Not a day goes by without some political pundit or economist appearing on a news show or podcast being asked about what is going to happen. Understanding the future is critical to decision-making. 

Obviously, predictions for the short term are much easier to make than those for the long term. What is the weather likely to be in the next four hours? Will the bear market continue tomorrow? Will the current prime minister be in post next week? These are simpler questions than, for example, will the UK be experiencing drought conditions in September 2023?

Companies and governments spend large sums of money on forecasting the medium- and long-term future. But the question that needs to be asked is: how good are the experts at forecasting the future?

In the mid-1980s, a political scientist, Philip Tetlock, wanted to find out. He recruited academics and pundits and put them through a series of “forecasting tournaments”. He also wanted to compare them to members of the public who did not have any particular experience with forecasting. In all, he recruited more than 3,000 people to take part in his tournaments. 

The results were somewhat surprising. Most of the experts did no better than if they had answered the questions completely at random. A few did better than a random guess, but only marginally so. Meanwhile, a group of lay people consistently outperformed everyone else. These people were termed “superforecasters”.

When Tetlock analysed their approach, he noticed that the superforecasters were not influenced by the biases that impacted the forecasts of the experts.  

Daniel Kahneman, the behavioural psychologist who won a Nobel prize for economics, posited a bias he called “scope insensitivity”: experts tend to hold an opinion they are unwilling to change, even when the available information changes.

The superforecasters were not subject to this bias. As they researched a particular question and gathered information, they were quite happy to change their opinions. The experts, on the other hand, held on to their original positions no matter what new information became available.

A more recent study, carried out by a group that included BBC Future, Nesta (the UK’s innovation foundation), and Good Judgment, Inc., asked volunteers to predict the outcomes of various world events.

The participants were asked whether they had any previous experience in forecasting, and the study showed similar results: those with no experience performed better than those who said they had prior experience.

People with previous experience may be prone to biases, including the Dunning-Kruger effect, the tendency of people of low ability to overestimate their skills. The novices were more open-minded and willing to learn from others, whereas the experienced suffered from “earned dogmatism”, the belief that they knew better.

Age did not make people wiser; the 25- to 35-year-old age group performed best of any. Although the study was not designed to explain the difference in forecasting ability between age groups, it could be down to “information literacy”, the ability to think critically and make balanced judgements about the information we find and use.

According to another study from researchers at Princeton, the over-65s are seven times more likely to post and share fake news than 18- to 29-year-olds. In a separate study, Pew Research found that older people find it harder to differentiate between facts and opinion. 

What does all this mean? Ignore experts older than 35? Hardly. It tells us that how you approach the data, understanding what it means and how to use it, matters more to forecasting than experience does.

Interviews with some of the superforecasters in the BBC Future/Nesta study showed that they took a collaborative approach. They thought through the options, tried to understand what impact new information would have on the outcome, discussed their thinking openly amongst themselves, and listened to opposing views. They gently challenged reasoning they thought was wrong rather than aggressively calling it out, and they were prepared to change their forecasts. It was this method that got the best results.

The other thing to remember is that in forecasting there are no hard-and-fast answers, only probabilities. And as the forecast event gets closer, the probabilities usually firm up: the position of a hurricane in the next few hours is much more certain than where it will be in 24 hours.
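The way probabilities firm up as new information arrives can be sketched with a simple Bayesian update. This is an illustrative toy (the rain scenario, the prior, and the likelihood values below are all invented for the example, not drawn from any of the studies mentioned): each observation that is more consistent with the event than with its absence nudges the forecast probability upward.

```python
def bayes_update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Return the posterior P(event | observation) via Bayes' rule."""
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

# Toy forecast: "it will rain today", starting from a 30% prior.
p_rain = 0.30

# Suppose each new observation (say, a darkening sky) is twice as likely
# if rain really is coming (0.8) than if it is not (0.4).
for _ in range(3):
    p_rain = bayes_update(p_rain, p_obs_if_true=0.8, p_obs_if_false=0.4)

# After three consistent observations the forecast has firmed up
# from 0.30 to roughly 0.77.
print(round(p_rain, 2))
```

The same mechanism runs in reverse: observations that point away from the event pull the probability back down, which is why forecasts only become near-certain close to the event itself.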

Finally, just because the probability of something happening is small does not mean it will not happen. Anything with a probability of greater than zero has a chance of happening. Black swans are rare, but not as rare as you might think.
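The point about small probabilities can be made with one line of arithmetic (an illustration of the general principle, not a figure from the article): an event with only a 1% chance on any single occasion becomes more likely than not to happen at least once across 100 independent occasions.

```python
p_single = 0.01   # a "rare" event: 1% chance on any one occasion
trials = 100      # number of independent occasions

# Probability the event happens at least once = 1 - P(it never happens)
p_at_least_once = 1 - (1 - p_single) ** trials

print(round(p_at_least_once, 3))  # about 0.634, i.e. better than even money
```

Give a rare event enough chances and it stops being rare, which is exactly why black swans turn up more often than intuition suggests.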