Are you a Superforecaster? You can be!

Author: Thomas Peterson
Published On: 18 April 2018

Superforecasting: The Art and Science of Prediction 

Back in 2014, Daniel Kahneman (winner of the Nobel Prize in economics and author of Thinking, Fast and Slow) asked University of Pennsylvania professor of psychology Philip Tetlock the very question posed by his book's subtitle: is prediction an art or a science? Dr. Tetlock replied, “… a little of both.”

We all think of forecasting in terms of mathematics and statistics, but forecasting also requires human judgment. Given the nature of today’s forecasting problems, it is hard enough to select a proper method, let alone perfect a model. So how do we optimize human judgment? It turns out it isn’t so difficult. As Tetlock’s reply suggests, superforecasters are a different kind of person, but they also do different kinds of things.


Superforecasting: The Art and Science of Prediction, by Philip Tetlock and Dan Gardner, details the findings of the ongoing Good Judgment Project (GJP), a team of researchers who competed in an IARPA-funded forecasting tournament. They emerged as the undisputed victors, largely through crowd-sourcing techniques and heavy application of the scientific method. Along the way, they collected extensive demographic information on the people whose predictions they crowd-sourced, and they found a subset of forecasters who performed significantly better than the rest of the crowd. They called these people superforecasters. What the GJP set out to discover: what makes these forecasters so ‘super’? What they learned: you don’t need to be a supergenius to be a superforecaster.
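
The crowd-aggregation idea can be sketched in a few lines. This is an illustrative assumption, not the GJP's actual algorithm: average many independent probability forecasts, then "extremize" the average toward 0 or 1, on the reasoning that each forecaster holds only part of the available information. The exponent value is arbitrary here.

```python
# Illustrative sketch (NOT the GJP's published algorithm): average a
# crowd's probability forecasts, then push the average toward 0 or 1.

def aggregate(forecasts, a=2.5):
    """forecasts: probabilities in (0, 1); a: extremizing exponent (assumed)."""
    p = sum(forecasts) / len(forecasts)   # simple crowd average
    return p**a / (p**a + (1 - p)**a)     # extremized probability

crowd = [0.55, 0.70, 0.60, 0.65, 0.58]
print(aggregate(crowd))  # extremized: more confident than the raw mean of 0.616
```

With `a = 1` the function reduces to the plain average; larger exponents sharpen the crowd's collective confidence.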

It is a commonly held belief in popular culture that one cannot be a high-performing student, researcher, or data analyst without a high IQ. However, research from psychology and neuroscience shows that a high IQ is not a good predictor of performance. Carol Dweck’s research argues that people who believe in hard work, learning, and training tend to have a “growth mindset” and perform much better than people who rely on intelligence alone. But for forecasting, performing well might be even easier than that …

Forecasts and predictions can be improved by following a certain set of rules.

When it comes to prediction, expert forecasters typically don’t run into problems with intelligence or training aptitude. Tetlock and the GJP believe that poor judgments come instead from an inability to handle cognitive biases and heuristics. For example, scope insensitivity occurs when the value we assign to a problem does not scale with the problem’s size. Tetlock recounts one of Kahneman’s experiments built around the question: “How much would you be willing to pay to help stop migratory birds from drowning in oil ponds?” One group was told that 2,000 birds died this way each year, a second group was told 20,000 birds, and a third, 200,000 birds. One might imagine that the average amount people were willing to pay to help stop this injustice to nature would scale with the size of the problem, but due to scope insensitivity, the average for all three groups was $80. Willingness to pay did not scale with the severity of the problem! The explanation:

“The prototype automatically evokes an affective response, and the intensity of that emotion is then mapped onto the dollar scale. It’s a classic bait and switch. Instead of answering the question asked – a difficult one that requires putting a money value on things we never monetize – people answered ‘How bad does this make me feel?’”


Does this experiment sound plausible? If so, it should be a deep revelation for forecasting: if you cannot scale your judgment of a problem’s size, you cannot scale the solution that follows.

Still not convinced? Examine the Müller-Lyer illusion: two parallel lines with some additional lines extending out from their ends.

The lower line appears to be longer than the upper line, but if you pull out your ID card and measure the two, you’ll be surprised to find that they are actually the same length! Worse yet, even once you know the lines are the same length, it is impossible to see past the appearance that the lower line is longer. If Müller and Lyer can fool you with two lines, imagine the world of perceptual illusions a complex business problem can present to you!

By discovering our own ignorance, we can indeed inoculate ourselves against these cognitive biases by following a few simple rules, as prescribed by Tetlock in his book.

Here are 11 ways you can improve your natural forecasting judgment for your next project:

  1. Triage: Focus on questions where your hard work is likely to pay off

If, back in 2012, you could have predicted that Donald Trump would today be President of the United States, or, back in 2008, that Narendra Modi would demonetize ₹500 and ₹1000 notes, you might have one of the most serious cases of hindsight bias ever recorded. Spend more time working on problems where you know your work will have the highest impact. Somewhere in the middle of the River of Reasonable Return™, you’ll find problems that might be out of reach, but certainly not out of sight.

  2. Break seemingly intractable problems into tractable sub-problems

“Decompose the problem into knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them in vague verbiage.” The idea is to break questions down until the answers become obvious, which is exactly what the obvious-gap representation of a muPDNA does. Mu Sigma platforms are weapons we use to tackle problems ‘as they are’ and not ‘as they appear to be’. The surprise in these methods is how often good probability estimates arise from the aggregate of assumptions.
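
Decomposition is the classic Fermi-estimation move, and the book’s own running example is a famous one: how many piano tuners work in Chicago? Every number below is a rough guess, not data; the point is that the aggregate of guessable parts lands surprisingly close to reality.

```python
# Fermi decomposition: estimate Chicago's piano tuners from parts you
# can roughly guess. All inputs are assumptions for illustration.

population        = 2_700_000   # people in Chicago (rough)
people_per_piano  = 100         # assume ~1 piano per 100 people
tunings_per_year  = 1           # assume each piano is tuned about once a year
tunings_per_tuner = 2 * 5 * 50  # 2 per day, 5 days a week, 50 weeks a year

pianos = population / people_per_piano   # ~27,000 pianos
demand = pianos * tunings_per_year       # ~27,000 tunings per year
tuners = demand / tunings_per_tuner      # tunings demanded / tuner capacity
print(round(tuners))  # -> 54
```

Each sub-estimate can be off by a factor of two or three, yet the errors tend to partially cancel in the product, which is why the method works.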

  3. Strike the right balance between inside and outside views

We collect outside views to understand ‘How often does a thing of this sort happen in situations of this sort?’ For example, how often do people buy ice cream on a hot summer day? Do more people go shopping on Christmas Eve than on Christmas Day? The outside view says that ice cream is more delicious in the summer, and that most people are scrambling for gifts on Christmas Eve but drinking eggnog on Christmas Day. As convincing as both of those stories sound, you still have to balance them with the ‘inside view’: examine the uniqueness of the problem in front of you and any complicating factors that might reduce ice cream consumption on a hot day or thin out shoppers on Christmas Eve. Understanding your problem requires as much focus on interactions with your client and the subtleties of your client’s context as it does on internet research. Don’t underestimate the power of asking your onsite!

  4. Strike the right balance between under and overreacting to evidence

A commonly forgotten element of forecasting is the evaluation of the forecast itself. According to Tetlock, most forecasts go unevaluated simply because their purpose is not to be correct; often it is to entertain, comfort, or push a political agenda. But for forecasters who care about accuracy, highly surprising evidence deserves as much consideration as non-obvious leading indicators. Skillful forecast evaluation and belief updating both require careful consideration (and a cool head) when new evidence comes along, so that you pick up on non-obvious indicators without being suckered by misleading clues.
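
Bayes’ rule is the standard yardstick for “reacting exactly as much as the evidence warrants,” and a minimal sketch makes the balance concrete. The probabilities below are invented for illustration.

```python
# Belief updating with Bayes' rule: shift a prior by exactly as much as
# the evidence warrants -- no more (overreaction), no less (underreaction).

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability after observing one piece of evidence."""
    numer = p_evidence_if_true * prior
    denom = numer + p_evidence_if_false * (1 - prior)
    return numer / denom

belief = 0.30                      # prior: 30% chance the event happens
belief = update(belief, 0.8, 0.4)  # evidence twice as likely if the event is real
print(round(belief, 3))  # -> 0.462
```

Evidence twice as likely under the hypothesis moves a 30% belief to about 46%, not to near-certainty: a useful sanity check against overreaction.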

  5. Look for the clashing causal forces at work in each problem

“For every good policy argument, there is typically a counterargument that is at least worth acknowledging … be open to the possibility that you might be wrong.” For every hypothesis we apply to our problems, there exists a null hypothesis we intend to disprove. When we reject either the null or the alternative hypothesis, we are making a decision based on statistical confidence. What we are not doing is removing the forces that drive these hypotheses! Often these forces clash, and that demands asking more complex questions. Remember: a result contrary to your hypothesis is a discovery! Insights from many clashing forces are like chisels applied to your model: even though they might hurt, they perfect it.

  6. Strive to distinguish as many degrees of doubt as the problem permits, but no more

“Nuance matters. The more degrees of uncertainty you can distinguish, the better a forecaster you are likely to be … Translating vague-verbiage hunches into numeric probabilities feels unnatural at first but it can be done.” Ambiguity can be scary, because it turns your simple problem into a more complex one. Even worse, a documented cognitive bias called neglect of probability describes our tendency to completely disregard probability when making a decision under uncertainty. Identify and document the sources of uncertainty in your problem. Where could the contrary be plausible? If nothing else is learned from this article, learn to think about uncertainty in granular ways so that you don’t accidentally oversimplify the problem, as often happens with neglect of probability or scope insensitivity.
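
Translating verbiage into numbers can start from something as simple as a lookup table, in the spirit of intelligence analyst Sherman Kent’s “words of estimative probability.” The ranges below are illustrative assumptions, not Kent’s exact published values.

```python
# Map numeric probabilities to estimative phrases (and back in your head).
# Ranges are illustrative, loosely inspired by Sherman Kent's scheme.

ESTIMATIVE_WORDS = [
    (0.00, 0.07, "almost certainly not"),
    (0.07, 0.30, "probably not"),
    (0.30, 0.70, "chances about even"),
    (0.70, 0.93, "probable"),
    (0.93, 1.01, "almost certain"),   # upper bound >1 so p=1.0 is included
]

def to_words(p):
    """Return the estimative phrase for a probability p in [0, 1]."""
    for lo, hi, phrase in ESTIMATIVE_WORDS:
        if lo <= p < hi:
            return phrase

print(to_words(0.75))  # -> probable
```

The real discipline runs the other way: whenever you catch yourself saying “probable,” force yourself to commit to a number like 0.75 so the forecast can later be scored.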

  7. Strike the right balance between under and overconfidence, between prudence and decisiveness

It is important to understand the risk both of jumping to conclusions and of waiting around until all the information comes in. Forecasters have to strike a balance between contemplation and action by finding creative ways to tamp down on both misses and false alarms. One effective strategy is failing fast: make a decision, then evaluate it quickly and often. This creates a stream of continuous feedback for solving the problem, so that each revision of the solution stays small.

  8. Look for the errors behind your mistakes, but beware of rearview-mirror hindsight biases

Hindsight bias is the tendency to evaluate a decision based on its outcome rather than on the quality of the decision at the time it was made, and it is a convenient refuge for people who do not own up to their errors. Own them! You cannot control unpredictable elements, but that need not prevent you from evaluating your model’s performance. “Belief updating is to good forecasting as brushing and flossing are to good dental hygiene.”
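
Evaluating performance is exactly what the tournament did with Brier scores, the accuracy metric the GJP used: the mean squared difference between your stated probabilities and what actually happened. The forecasts below are made up for illustration.

```python
# Brier score: mean squared error between predicted probabilities and
# outcomes (1 if the event occurred, 0 if not). Lower is better;
# 0.0 is perfect, and always saying 0.5 scores 0.25.

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.2, 0.7]   # your stated probabilities
outcomes  = [1,   0,   0]     # what actually happened
print(round(brier(forecasts, outcomes), 3))  # -> 0.18
```

Scoring every resolved forecast this way turns “was I right?” into a number you can track, which is what makes belief updating possible in the first place.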

  9. Bring out the best in others and let others bring out the best in you

Proper team management plays a key role in producing accurate and timely forecasts. Tetlock recommends mastering ‘perspective taking’ (understanding arguments from the other side so you can reproduce them to their satisfaction), ‘precision questioning’ (helping others clarify their arguments so they are not misunderstood), and ‘constructive confrontation’ (learning to disagree without being disagreeable). These concepts relate to Mu Sigma’s work on team dysfunctions: constructive confrontation is only possible if there is trust within the team. Create an empathy map for each person on your team; each of these principles arises out of compassion for your teammates and is achieved by maintaining a mutual understanding of where each person stands.

  10. Master the error-balancing bicycle

The old Confucian proverb goes: “I hear and I forget. I see and I remember. I do and I understand.” Just as you cannot learn to ride a bicycle by reading books about riding bicycles, making good forecasts requires deep, deliberate practice.

  11. Don’t treat commandments as commandments

“Guidelines are the best we can do in a world where nothing is certain or exactly repeatable.” Remember: frameworks are guard rails for thinking, rather than a checklist.

For the time being, we would be wise to follow Daniel Kahneman’s advice: “If I am right, organizations will have more to gain from recruiting and training talented young people to resist biases.”


“Superforecasting: The Art and Science of Prediction” by Philip Tetlock and Dan Gardner
