December 1st, 2024
With election polling in my rearview, I am pivoting to: awards season! I am running back the Best Picture model that I began working on last year. I will update this page with a new entry every Sunday through the morning of the Oscars to display and discuss how the race has developed.
Although I added a few more variables, the details of the model are more or less the same (see the original post for details). The only additional technical details to note this year are:
- When I discuss whether an aspect of a movie is helping or hurting its chances, I am relying on SHAP values.
- The model is trained only on Best Picture nominees, which we do not yet know for this year. To seed the model, I examined six reputable publications to get a pool of 20 potential nominees. Until the nominations are announced on January 17th, the model assumes all 20 have been nominated, so expect some big shifts once the field narrows.
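Since SHAP values drive the "helping or hurting" discussion throughout this post, here is a minimal sketch of what one is: the exact Shapley value of each feature under a toy "win probability" model. The feature names and contribution numbers below are invented for illustration; the real model's features and its SHAP computation are not shown in the post.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley value of each feature for a set function value_fn,
    which maps a frozenset of feature names to a model output."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight of this coalition in the Shapley average.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f to this coalition.
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive model: a base rate plus per-feature bumps (made-up numbers).
bumps = {"long_runtime": 0.10, "romance": 0.05, "major_studio": 0.03}

def win_prob(coalition):
    return 0.05 + sum(bumps[f] for f in coalition)

phi = shapley_values(list(bumps), win_prob)
# For an additive model, each feature's Shapley value equals its bump:
# a positive phi means the feature pushed the predicted chance upward.
```

In practice one would use the `shap` library against the trained model rather than enumerating coalitions, but the interpretation is the same: a positive SHAP value means that feature pushed the film's predicted chances up, a negative one pushed them down.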
The only award information we have so far is the slate of Gotham Awards nominees, which cover independent films only. The rest of the inputs amount to what we might call, in political terms, a "fundamentals-only" model: the predictions right now are based on festivals, aggregated review scores, runtime, genre, studio, MPAA rating, information about the director, and so on.
Why is Wicked in the lead right now? It fits a few characteristics that help a film:
- It falls in the runtime sweet spot at 160 minutes. Broadly speaking: a runtime under 100 minutes hurts a film's chances; between 100 and 150 minutes has little effect; and 150 minutes or longer helps.
- Its director has never directed a film nominated for Best Picture before.
- It is categorized under the genres of musical and romance.
- It is distributed by a major studio.
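The runtime pattern in the first bullet can be written as a small banding helper. This is a hypothetical sketch of how such a feature might be encoded, using the cutoffs quoted above; it is not the model's actual feature engineering, and the real SHAP curve is continuous rather than a step function.

```python
def runtime_effect(minutes: int) -> str:
    """Hypothetical banding of the runtime pattern described above:
    under 100 minutes hurts, 100-149 is roughly neutral, 150+ helps."""
    if minutes < 100:
        return "hurts"
    if minutes < 150:
        return "neutral"
    return "helps"

# Wicked, at 160 minutes, lands in the helpful band.
```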
The Brutalist has many of the same things going for it, except it is hurt by not being listed under the genre of romance. It benefits from a major studio distributing it with a wide release after the New Year—Oscar bait time.
My personal favorite, Anora, is unique among these top three in that it was in competition at the Toronto International Film Festival (TIFF), which helps the film. Notably, winning at Cannes does not help it at all, which makes sense: only The Lost Weekend (1945), Marty (1955), and Parasite (2019) have won the top prizes at both Cannes and the Academy Awards.
Funnily enough, Anora's Metacritic score of 91 hurts its chances of winning. Why? The aforementioned SHAP values show that a score from about 60 to 73 helps a movie a little, a score in the 74-92 range actually hurts slightly, and only as the score approaches 100 do a film's chances get a boost. This would appear to give movie snobs ammunition to say that the Academy usually makes the wrong choice, unless a film is receiving all-time-great reviews (e.g., Moonlight, 12 Years a Slave, Parasite, The Hurt Locker).
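The score bands just described can be sketched the same way. The cutoffs (60, 73, 92) are the approximate ones quoted above, and the band labels are my own shorthand; the actual SHAP relationship is a smooth curve, not a step function, and the post does not describe scores below 60.

```python
def metascore_effect(score: int) -> str:
    """Hypothetical banding of the Metacritic pattern described above."""
    if score > 92:
        return "helps strongly"   # approaching 100 boosts a film's chances
    if score >= 74:
        return "hurts slightly"   # the counterintuitive 74-92 dip
    if score >= 60:
        return "helps slightly"
    return "unspecified"          # below 60 is not described in the post

# Anora's 91 falls in the band that slightly hurts its chances.
```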
One more film to mention is September 5. As of the November 14 update of The Hollywood Reporter's Feinberg Forecast, Scott Feinberg believes it is the favorite. Why the massive difference between my model and his forecast (which is based on "screening films, consulting with voters, analyzing campaigns and studying the results of past seasons")? Runtime. The database I'm getting runtimes from lists it at only 91 minutes. The shortest film to have won Best Picture is Marty (1955) at 90 minutes; the only other winners under 100 minutes are Annie Hall (1977, 93 minutes) and Driving Miss Daisy (1989, 99 minutes). Two of those three are romances, which September 5 is not, and that hurts its chances. September 5 is categorized under history, which, coupled with the short runtime, also hurts it. I haven't seen it (I am just a data scientist in the Midwest, so I haven't been able to), but it would appear to be a longshot, despite Feinberg's rating. Nonetheless, it looks good (Past Lives and First Cow have made me a fan of John Magaro, who is second-billed in the film).
In the next few weeks, we’ll have critics associations releasing their awards, some important nominations, and more information about A Complete Unknown when its review embargo lifts. I’ll see you back here next week for an update to the race.
Long-term Implications and Future Developments in Predictive Models for the Best Picture Award
The exploration and development of a predictive model for awards season provide fascinating insights and potential areas for further refinement. The system uses variables such as film characteristics, review scores, and festival results to forecast probable Best Picture nominees and winners. Over time, the success and accuracy of this model could influence how filmmakers and studios approach film production, marketing strategy, and release timing.
The Impact of Film Characteristics
The reliance on SHAP values in the model reveals how specific characteristics of a movie, such as runtime, genre, and the director's history, significantly influence its chances of winning. In the long term, if these variables retain reliable predictive value, filmmakers might tailor these aspects to maximize the potential for awards success. It may also indicate an unconscious bias in awards voting toward certain kinds of films, which awarding bodies may need to address.
The Role of Reviews and Ratings
The model also examines review score influence, finding that scores within a specific range can either marginally help or harm a film’s chances, while near-perfect scores boost a movie’s potential significantly. This could prompt changes in industry perspectives towards critics’ reviews, potentially valuing them even more as a barometer for awards viability.
Looking at Film Festivals and Awards
Interestingly, the model also shows that participation in certain film festivals aids a film's prospects, whereas winning at certain other festivals does not necessarily correlate with Academy Award success. This suggests that not all recognitions hold equal significance in an awards projection framework, shifting the focus and goals for festival distribution strategy.
Actionable Advice
Moving forward, filmmakers and studios could use these insights to their advantage. Some key recommendations include:
- Incorporate Key Film Characteristics: Making films that fall within preferred runtimes, carry favored genres, and are helmed by directors with no prior Best Picture nominations could boost a film's chances.
- Quality Above All: Aim to create films that garner near-perfect critical scores to significantly elevate a film’s award potential. This reinforces that while tweaking variables may help, ultimately, quality cannot be compromised.
- Choose the Right Festivals: Consider strategic participation in specific festivals as an important aspect of the film's release cycle. Choose festivals that align best with your film's profile and maximize its awards potential.
- Continue to Monitor Changing Models: Predictive modeling is not a static process. Variables that matter now may not hold the same weight in the future. It’s essential to regularly monitor, revise, and update these models as industry tendencies, voter biases, and societal trends fluctuate.
Overall, predictive modeling for the Best Picture award poses exciting implications for the film industry, prompting a potential shift in traditional filmmaking approaches. However, it's crucial to balance these strategic approaches with the inherent uncertainty and charm of cinema as an art form.