
Our Oscars Algorithm Predicted the Best Picture Winner. Tell Us Your Guesses, Too


We’re still not over it.

Last year, we used high-octane statistics to predict the winner of the Academy Award for Best Picture, based on 69 years of data from four different awards ceremonies. Our model, which required only three variables, gave Alfonso Cuarón’s Roma a 45.5% chance of winning, higher than any of the other seven candidates.

To be fair, a lot of humans also predicted that Roma would win. But the honors went to Green Book, causing immediate controversy. In the haze of second-guessing that followed the surprise victory, several critics surmised that, because Netflix produced Roma, the stuffy Academy wasn’t ready to anoint a streaming service that has upended the traditional Hollywood model. (Forbes’ Jim Amos, to his credit, saw it coming, writing before the ceremony, “Whether they want to admit it or not, there are still many Never-Netflix’ers in the Academy so Roma may face an uphill battle.”)

This argument offers a curious insight into the limits of statistical modeling, sometimes also referred to as “machine learning.” Models can be trained to predict outcomes with impressive accuracy when the rules of the game stay the same. For one-time outcomes that are the product of shifting forces, like the political attitudes of the Academy, model performance comes with few guarantees. In the case of the Oscars, we have seen that important contemporary forces such as the emergence of Netflix can neither be ignored nor easily incorporated into historical data analysis.

So this year, we’re trying something a little different. We’re going to ask you to mix your own predictions with the results of our new-and-improved model. The following interactive will first gauge your predictions, then let you see how they compare to the statistical probabilities. You can then decide who should get more say. (Your predictions, if you so consent, will be anonymously collected for an upcoming story comparing the algorithm’s predictions to those of our readers.)

Here’s how it works: First, we updated our model, using the same 47 potential predictors as last year — a mix of other Academy Awards for which a Best Picture nominee was also nominated, and the results of three awards ceremonies that come before the Oscars: the Golden Globes, the Directors Guild of America, and the British Academy of Film and Television Arts.

For each of those variables, we have 70 years of data from past awards ceremonies, making it possible to identify the 10 that had the highest predictive value in determining the Best Picture winner. For example, if a movie was also nominated for Best Film Editing or won the Golden Globe for Best Director, its odds of winning Best Picture go up.

Those 10 variables gave us 1,023 potential models, which we exhaustively searched and weighted using a technique called “Bayesian model averaging.”
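The arithmetic behind that number is worth spelling out: every non-empty subset of the 10 predictors defines a candidate model, and there are 2¹⁰ − 1 = 1,023 such subsets. The sketch below enumerates them and applies one standard form of Bayesian model averaging, in which each model is weighted by an approximation of its posterior probability derived from its BIC score. This is a minimal illustration, not the authors’ actual implementation; the predictor names and BIC values are placeholders.

```python
from itertools import combinations
import math

# The 10 top predictors (names here are hypothetical placeholders).
predictors = [f"x{i}" for i in range(1, 11)]

# Every non-empty subset of the 10 predictors is a candidate model:
# 2**10 - 1 = 1,023 models in total.
models = [subset for r in range(1, 11) for subset in combinations(predictors, r)]
assert len(models) == 1023

# Bayesian model averaging: weight each model by its (approximate) posterior
# probability. With equal priors, a common approximation uses each model's BIC:
#   w_m ∝ exp(-BIC_m / 2)
# The BIC values below are made-up placeholders, not real model scores.
bics = {m: 100.0 + 0.01 * i for i, m in enumerate(models)}
best = min(bics.values())
raw = {m: math.exp(-(b - best) / 2) for m, b in bics.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# The weights form a probability distribution over the 1,023 models;
# a final prediction averages each model's output under these weights.
assert abs(sum(weights.values()) - 1.0) < 1e-9
```

In practice, each of the 1,023 models would be fit to the historical data, and the averaged prediction is the weight-weighted sum of the individual models’ predicted probabilities.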

Under the hood, the mathematical machinery that undergirds the model now ensures that the probabilities of winning sum to 100% in any given year. Anyone interested in the specifics of this new modeling approach can find all the details in a report we published on arXiv.org.
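The sum-to-100% constraint amounts to renormalizing the model’s raw scores across each year’s slate of nominees. A minimal sketch, assuming hypothetical raw scores for a three-nominee year (the film names and numbers are invented for illustration):

```python
# Raw model scores for one year's Best Picture nominees (hypothetical values).
raw_scores = {"Film A": 2.4, "Film B": 1.0, "Film C": 0.6}

# Renormalize within the year so the winning probabilities sum to 100%.
total = sum(raw_scores.values())
probs = {film: 100 * score / total for film, score in raw_scores.items()}

# Exactly one nominee wins each year, so the probabilities must exhaust 100%.
assert abs(sum(probs.values()) - 100.0) < 1e-9
```

Because the nominees in a given year compete for a single award, treating each film’s win probability independently would let the totals drift above or below 100%; normalizing within the year removes that inconsistency.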

While we’re proud of this model, we’re aware, in light of the Roma debacle, that no algorithm can be correct all the time, particularly given the caprice of the Academy. That’s why we’re conscripting TIME readers like you to contribute your own wisdom directly to the analysis. Whether or not the model is correct in predicting that 1917 will win Best Picture, we’re eager to see the data from those kind enough to let us anonymously collect their predictions, as an experiment in what one might call “Bionic Learning,” which is to say, Machine-Human Collaborative Learning.

In the meantime, here’s what our model predicts for the Best Picture winner:

  • 1917: 39.6%
  • Parasite: 16.9%
  • The Irishman: 16.9%
  • Joker: 15.0%
  • Jojo Rabbit: 4.3%
  • Ford v Ferrari: 3.8%
  • Once Upon a Time… in Hollywood: 2.5%
  • Little Women: 0.5%
  • Marriage Story: 0.5%

As you can see, 1917 commands the highest probability, thanks among other things to its victories with the Directors Guild of America and the British academy, but it holds only a plurality of the total, which must add up to 100%. Still, we’re rooting for it, if only because we generally root for math, even when the title of a film isn’t all numbers.

    Write to Chris Wilson at chris.wilson@time.com