by Wojciech Gryc
Given the time of year, everyone is publishing their opinions on what the future holds. We’re inundated with outlooks for the coming year and ideas about emerging trends. Yet we rarely look back on how effective these predictions are – are they ever accurate? And do they actually help us make any decisions?
I’m interested in constructing forecasts that help us, as business leaders, investors, activists, and so on, make decisions. Forecasts that support decision-making need to be well-defined, and one hopes that over time we get better at making them. As such, I’m interested in how to construct forecasts or predictions that are clear enough to (a) let us know, at some point, whether we were right, and (b) teach us about the world regardless of the outcome. The first part is useful for obvious reasons: by knowing whether a forecast was correct, we can decide whom to trust, track how often we’re right, and so on. The second part follows from the first: a clearly defined hypothesis lets us learn either way, because we’ll know whether the logic underlying our forecast held up or not.
Please note that I use the terms “forecast” and “prediction” interchangeably here.
A forecast has to meet a few criteria to be well-defined and useful[1]. First, it needs to have a concrete outcome. In other words, the outcome needs to be something that, were it to take place, would not be up for debate depending on whom you ask or how you define the situation. This is much easier said than done – in fact, most forecasts and predictions in the popular press are so vague that it’s unclear when they will be achieved, or whether they’ll be achieved at all.
A good illustration of the above is this past year’s debates around the North American Free Trade Agreement (NAFTA). As Canada, Mexico, and the USA negotiated and renegotiated the agreement, how would you define a forecast that aims to predict whether NAFTA will be revoked, cancelled, or changed so much that it’s effectively a new agreement? We could set the forecast to “By 31 December 2018, NAFTA will not be around anymore.” However, what does this mean in practice? What if the agreement is renamed? What if a similar agreement is reached (i.e., the USMCA)? Or what if there is a replacement agreement, but it has not yet been formally approved or transitioned to?
You see how quickly we can begin debating whether or not a prediction came true. As such, one needs to define specific conditions for a forecast to be achieved. In the case of NAFTA, this could be invoking Article 2205 of the agreement, which sets a six-month timeline for the invoking country to leave the free trade area. Note that in this case, NAFTA still remains a binding agreement for the other countries.
With that in mind, a well-defined forecast on NAFTA would have to specifically require that a state invoke Article 2205. There is no room for debate if such an invocation were to take place. Of course, it’s not the only way NAFTA could come apart.
Concrete outcomes alone aren’t enough. As they say about recessions, there’s always one coming, and eventually every doomsayer is proven right about a downturn. A well-defined forecast also needs to be time-bound. By attaching a deadline to the forecast, you ensure that past a certain date, you can judge whether everyone participating in the forecast was correct. Let’s return to the NAFTA example. A forecast of “The USA will invoke Article 2205 to withdraw from NAFTA” makes the conditions for being correct or incorrect clear. However, this could happen two weeks from now, five years from now, or never. Someone who says NAFTA will eventually fall apart in this way is never proven wrong as long as the event simply hasn’t happened yet. We could wait forever without disproving the forecast.
Without a time-bound element, forecasts also lose their utility. Forecasts are useful because they enable us to predict an event, prepare for it, or work to avoid it. Without a deadline, when do you start making preparations or decisions based on the forecast? If NAFTA will be revoked eventually, does that mean we should make preparations now? Not necessarily.
A well-defined, time-bound forecast is something along the lines of “The USA will invoke Article 2205 to withdraw from NAFTA by 30 June 2019.” We have a condition that leaves little room for debate, and a time-bound component that tells us when everyone will know who was right or wrong. The art here is picking a time frame that is actually useful and realistic.
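To make this concrete, here is a minimal sketch of how such a forecast could be captured as a structured record. This is purely illustrative – the Python class and field names are my own, not an established schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Forecast:
    """A well-defined, time-bound forecast question."""
    statement: str   # the concrete outcome, stated unambiguously
    resolution: str  # exactly what counts as the event occurring
    deadline: date   # past this date, the forecast resolves as "no"

nafta = Forecast(
    statement="The USA will invoke Article 2205 to withdraw from NAFTA by 30 June 2019.",
    resolution="The US government formally invokes Article 2205 of NAFTA.",
    deadline=date(2019, 6, 30),
)
```

Having to fill in the resolution criterion explicitly is where the vagueness gets squeezed out.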
Technically, the forecast examples so far are just statements or themes. We need to express an opinion about those statements. Stating your opinion requires two things: a metric for how confident you are in the forecast, and a timestamp so it’s clear when the forecast was actually made.
Opinions need not be binary. To continue with the NAFTA example, you can say the statement will take place with 100% certainty or 0% certainty, but it’s also possible to say there’s a 33% chance, or a 47% chance, or a 79% chance of the forecast taking place. While this might seem like adding uncertainty to the problem, planning for a situation that is 51% likely to happen is very different from planning for one that is 93% likely to happen, even though both round up to a “yes” in a forecast.
Note that you can also apply the above to other metrics. Rather than predicting a binary event (such as a member state leaving NAFTA), you might predict a ranking (e.g., Poland’s ranking on the Human Development Index in 2019[2]) or a metric (e.g., Tesla’s stock price at close of market on 29 March 2019). In all these cases, however, you have a numerical, quantified prediction.
The second piece is timestamping your own prediction. Your predictions and certainty estimates might change over time, so knowing when a prediction was made is important. It’s also likely that the closer you get to a forecast’s deadline, the more often you’ll be correct. Conversely, the further out you’re correct, the more useful your predictions are. Making accurate stock market or geopolitical forecasts eight months in advance is very different from making them one day in advance.
Finally, you’ll also want to justify your prediction. This lets you come back to predictions later and understand why you felt a certain way. Reviewing the opinions, thoughts, and ideas that went into a prediction makes you better over time, as with any skill[3]. Putting these pieces together, a complete prediction looks like this:
The USA will invoke Article 2205 to withdraw from NAFTA by 30 June 2019.
Prediction on 27 December 2018: 0.93
Why? The USMCA is likely to replace NAFTA and has the support of all member states. I expect the replacement to take place without much issue at this point. The only uncertainties are political ones (politicians changing their minds, or new politicians taking office), and whether the article will actually be invoked.
Note that this is not my actual opinion on NAFTA or the USMCA itself, but rather an illustrative example.
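As a sketch of how such a record might be logged in code (the classes and field names below are illustrative, not a prescribed format):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Prediction:
    """One timestamped opinion about a forecast statement."""
    made_on: date       # when the opinion was stated
    probability: float  # confidence the statement comes true, 0.0 to 1.0
    rationale: str      # the reasoning, saved for later review

@dataclass
class ForecastLog:
    """A forecast statement plus every opinion recorded against it."""
    statement: str
    deadline: date
    predictions: list[Prediction] = field(default_factory=list)

log = ForecastLog(
    statement="The USA will invoke Article 2205 to withdraw from NAFTA by 30 June 2019.",
    deadline=date(2019, 6, 30),
)
log.predictions.append(Prediction(
    made_on=date(2018, 12, 27),
    probability=0.93,
    rationale="USMCA likely to replace NAFTA; supported by all member states.",
))

# Lead time is easy to recover later: 185 days here.
lead_days = (log.deadline - log.predictions[-1].made_on).days
```

Because every opinion is timestamped, you can append a new entry whenever your confidence changes without losing the history.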
You’ll notice that well-defined forecasts are few in number. Most writers, journalists, commentators, and pretty much everyone under the sun use vague language that is almost always correct, or that can be “proven” correct with carefully chosen anecdotes. This isn’t a nefarious ploy to appear perpetually right; making good, accurate, well-defined forecasts is simply extremely difficult.
Examples of great forecasts come from the Good Judgment Open[4] and its partnership with The Economist. Some examples of their forecasts (with scores submitted by individual people) include:
Another fantastic set of examples comes from Deloitte’s Technology, Media, and Telecommunications Predictions for 2019[5]. While not all of their predictions are well-defined, there are a few gems in the report, such as:
Unfortunately, many consulting firms and commentators do not make well-defined forecasts, and those that do typically don’t return to their predictions to determine how accurate they were.
You might be thinking this all sounds a bit pedantic, but there are two reasons to approach your forecasts, and your thinking about future events, in this way.
First, this gives you a feedback loop[6] that shows you where your logic and thinking are correct, and where they are not. You can only improve what you measure, and measuring the strength, accuracy, and validity of your predictions means that over time, you can actually start improving.
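As one concrete way to run that measurement – my choice of metric, not one named in this article – the Brier score compares your stated probabilities against what actually happened; lower is better:

```python
def brier_score(resolved: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and outcomes.

    Each item is (probability_assigned, event_occurred).
    0.0 is a perfect score; always guessing 50% earns 0.25.
    """
    return sum((p - float(happened)) ** 2 for p, happened in resolved) / len(resolved)

# Two resolved predictions: a confident miss and a cautious miss.
history = [(0.93, False), (0.33, False)]
print(brier_score(history))  # (0.93**2 + 0.33**2) / 2 ≈ 0.4869
```

Note how a confident miss is penalized far more heavily than a hedged one – exactly the incentive you want when reviewing your own track record.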
The second reason is a bit more technical. Suppose you log your reasoning, timestamp your predictions, and, over time, learn which predictions were correct and which were not. You’re building an actual data set of predictions and the logic behind them. Some are wrong and some are right, but all of them represent a system of thinking – a system that, combined with other people’s predictions, forms a forecasting data set we can learn from systematically, and perhaps turn into an algorithm of some sort.
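To hint at what such a data set enables – again a sketch under my own assumptions, not a method from this article – one simple analysis is a calibration check: group resolved predictions by stated confidence and see whether, say, your 80% calls came true roughly 80% of the time:

```python
from collections import defaultdict

def calibration(resolved: list[tuple[float, bool]], buckets: int = 10) -> dict:
    """Compare stated confidence with observed frequency, bucket by bucket.

    Each item is (probability_assigned, event_occurred). A well-calibrated
    forecaster's 80% predictions come true about 80% of the time.
    """
    groups = defaultdict(list)
    for prob, happened in resolved:
        groups[min(int(prob * buckets), buckets - 1)].append((prob, happened))

    report = {}
    for b, items in sorted(groups.items()):
        label = f"{b / buckets:.0%}-{(b + 1) / buckets:.0%}"
        avg_stated = sum(p for p, _ in items) / len(items)
        observed = sum(1 for _, h in items if h) / len(items)
        report[label] = (avg_stated, observed, len(items))  # (stated, actual, n)
    return report
```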
This future vision for forecasting is what excites me. Being a leader and working on the future requires you to be comfortable with forecasts (and with being both right and wrong). Working with a system like this, and using the data to build an even better one, could mean we actually begin forecasting hard-to-predict human events with reasonable accuracy!
Notes