

21 March 2019

How to build an AI-powered platform to predict fuzzy world events

by Wojciech Gryc

Predicting political, economic, and military events is difficult, but also important. We’ve already seen AI being used to play complex strategy games like DOTA 2 or Starcraft, and quantitative hedge funds actively use advanced machine learning to try to predict metrics like consumer sentiment, products shipped, and other investment-specific metrics. However, most of these predictions are limited to quantitative metrics – you know when you’ve won a Starcraft match, and you can’t argue with the price of a commodity when it’s logged in a trading system like the Chicago Mercantile Exchange (CME).

Predictive systems depend on a steady stream of data, and while they might make multiple predictions over time, the type of problem does not change. Each Starcraft game is different, but the inputs/outputs of the game are consistent. Similarly, while predicting oil prices for different time frames might mean something different for a human, an algorithm will likely use similar input data for the questions (e.g., historical oil prices and other commodity data).

Predicting fuzzy world events – things like the outcomes of intergovernmental negotiations or the results of a special election – is fundamentally different because the result can be up for debate, and the pipeline of data can be unique to that specific event. Worse still, the event itself might be so unique that it’s not possible to find analogues to learn from or train on. This is why the problem is so tough: there are no training sets, most outcomes are fuzzy, and the data associated with the event is likely unstructured – it comes in the form of newspaper articles and audio discussions, rather than a spreadsheet or database table.

This blog post provides a view of how an AI-powered event prediction system could work and what such a platform would require.

But first, why build this?

There is a lot of information out there when it comes to fuzzy world events, and analyzing this information will be helpful to government, business, and other leaders. A recent article on AI within the military aptly references use cases and different approaches across countries [1]:

China is … focusing on developing advanced AI that could contribute to strategic decision-making. The U.S. approach is more conservative, with the goal of producing computers that can assist human decision-making but not contribute on their own. Finally, Russia’s projects are directed at creating military hardware that relies on AI but leaves decisions about deployment entirely in the hands of generals.

It’s not clear how far we can drive strategic decision-making with AI, but if such a system is possible, then the outcomes will be strategically valuable. At the very least, such a system can help with decision support – non-computational experts (i.e., people) could use such a platform to improve their own decision-making, which can then feed into the strategic decisions around the event in question. As such, it’s interesting to see how far we can push prediction of world events with a forecasting platform. In the best-case scenario, one might actually be able to build an intelligent system that predicts human behavior.

Note that even if forecasting itself is not that effective, this is an important exploration because making “fuzzy” events concrete will be critical if we ever want to enable smart contracts or other systems that support us in our daily lives.

Why is this hard?

There are challenges with building such a system. First and foremost, there’s no clear framework or standard for defining and making forecasts. It’s easy to make vague forecasts that are both right and wrong, or that can be framed as “correct” regardless of the outcome.

To use an example, suppose we were forecasting whether the UK would meet the 29 March 2019 Brexit deadline for exiting the European Union. What exactly does a concise and well-defined forecast look like? If I were to vehemently argue for a “failure” of the deadline, would a delay in implementing the withdrawal be considered such a failure? If the UK decides to hold a second referendum, would that count as a success? It’s unclear.

So first and foremost, we have a problem of clearly and concretely defining the actual problem statement.

Secondly, there is no clear data set we can depend on to determine whether a forecast is correct or not. Searching for news sources about events, or tying them back to data sets, is difficult. The Brexit example above is apt – “success” and “failure” mean different things to people on different sides of the political spectrum. You need to be significantly more structured in defining a forecast, and then you need to find a source that confirms whether the forecasted event was observed.

Finally, with such a vague problem, there’s also very little in the way of a training set. Algorithms need a history of predictions and data that inform those predictions to actually learn and improve over time. Without such a data set, you can’t actually train an algorithm, let alone have it get more accurate over time.

A vision for a forecasting platform

Let’s suppose we’re building a platform to deal with the problems above. For easy reference, let’s call it the Stochastic Futures Platform (SFP). The SFP would need to do the following:

  1. It would need a clear and concise ontology around predictions, defining both what is being predicted and how we know whether the predicted event took place.
  2. It would need a clear way to confirm or deny forecasts; ideally via data, but possibly with an impartial moderator.
  3. Data on forecasts and events would need to be logged over time, to be used (eventually) as training input for predictive algorithms.

With such an approach, we can do a few extremely valuable things. First, it gives us a clear set of rules that we can forecast with – “we” being both humans and bots. Second, by having this standardized approach, we can build a data set of forecasts, which we can then use to learn how to improve forecasting and to train more advanced models. In theory, having such feedback loops should even make people using the platform more effective over time.

An ontology for forecasting events

Let’s move on to actual forecasts. How do we make these concrete and well-defined? We’ll do so by building the system around statements, actors, and the forecasts those actors make on the statements.

Statements. A statement is an event that might take place in the future and will eventually be verifiable. At this time, we’re focusing on binary events – ones that will happen or will not happen, rather than measuring any sort of numerical values or picking from a set. This facilitates a lot of the automation around bots and AI.

Specifically, statements need a few things (sketched in code after the list):

  1. A description of what is being predicted.
  2. A deadline or timestamp for when the statement will or will not be true (i.e., verifiability).
  3. A data set, place, or source where we can confirm whether the statement has become true or not.
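
To make this concrete, here is a minimal sketch of how a statement could be represented. The field names and the Python dataclass itself are illustrative assumptions, not a finished schema:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Statement:
        """A binary, eventually verifiable claim about a future event."""
        description: str                # what is being predicted
        deadline: datetime              # when the statement becomes verifiable
        resolution_source: str          # where we confirm whether it came true
        outcome: Optional[bool] = None  # None until the statement resolves

    # Hypothetical example: the Brexit-deadline statement discussed above.
    brexit = Statement(
        description="The UK formally exits the EU by the 29 March 2019 deadline.",
        deadline=datetime(2019, 3, 29),
        resolution_source="Official UK government and EU announcements",
    )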

Forecasts. A forecast is tied to a statement, and a statement can have multiple forecasts. A forecast has the following (again, sketched in code after the list):

  1. A time stamp when it was made.
  2. An estimate of the probability of the event being true.
  3. A discussion on why the forecast has the estimate it does.
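
A forecast could be captured in a similarly small structure. The names below are hypothetical, and the actor_id field anticipates the actors described next:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Forecast:
        """A single probabilistic forecast made against a statement."""
        statement_id: str    # which statement this forecast refers to
        actor_id: str        # the person or bot making the forecast
        made_at: datetime    # timestamp of when the forecast was made
        probability: float   # estimated probability (0.0 to 1.0) that the statement comes true
        rationale: str       # discussion of why the estimate is what it is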

In the long run, multiple statements and forecasts might be tied to each other or interdependent, like a decision tree or probabilistic graphical model. However, this is a bit more advanced and we’ll leave it for a later discussion.

Actors. Actors make forecasts. People are actors, but bots can also be actors. We want to track actors, who create forecasts based on logic, rules, patterns, algorithms, etc., so that we can learn to improve forecasts over time. Both types of actors use algorithms, be it implicitly in the case of people or explicitly in the case of bots.

With the system above, we have statements that say something eventually verifiable about the world. Actors make forecasts, and their approach to making them (i.e., their reasoning) is logged within the forecasts themselves. If actors are human and using a process for their forecasts, then providing feedback on their accuracy should, in principle, improve their forecasting. If they are bots, then we now have a way to validate algorithms and eventually work on improving them.
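
The post leaves open how that accuracy feedback would be computed. For binary statements, one standard option (my assumption, not something specified here) is the Brier score. A rough sketch, reusing the hypothetical Forecast structure above:

    def brier_score(probability: float, outcome: bool) -> float:
        """Squared error between a probabilistic forecast and the 0/1 outcome.
        Lower is better; 0.0 is a perfect forecast, 1.0 the worst possible one."""
        return (probability - (1.0 if outcome else 0.0)) ** 2

    def average_brier(forecasts, resolved_outcomes) -> float:
        """Average Brier score over an actor's resolved forecasts,
        where resolved_outcomes maps statement_id to True/False."""
        scores = [brier_score(f.probability, resolved_outcomes[f.statement_id])
                  for f in forecasts]
        return sum(scores) / len(scores)

Tracking a score like this per actor over time is one way the feedback loop described above could be made measurable, for humans and bots alike.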

Next steps

This is still very early in my thinking, but building the Stochastic Futures Platform (SFP) could be interesting. In terms of vision: the Stochastic Futures Platform is a foresight platform that enables people to follow news and data, make forecasts, document how and why they approach their forecasts the way they do, and improve their ability to do so over time. We measure success via improvements in the accuracy and productivity of the forecasters themselves.

More to come (maybe)…


Notes:

  1. See “Whoever Predicts the Future Will Win the AI Arms Race” in Foreign Policy magazine.