OpenAI’s Sora isn’t even here yet, and it’s already rattling Hollywood. Maybe it shouldn’t be

While AI tools may disrupt visual effects and empower armchair filmmakers, they still lack the human spark that makes narrative art connect with mass audiences.


After a year of rapid evolution following the public debut of ChatGPT, it began to seem as if AI had lost its ability to shock. Then came the Sora demo. In a February 16 unveiling, OpenAI’s text-to-video model produced vivid, photorealistic scenes from complicated prompts, stunning viewers around the world—especially in Hollywood. Tyler Perry, for instance, was reportedly so shaken by the demo that less than a week later he put a planned $800 million expansion of his Atlanta studio on hold. Why bother building sets ever again when a computer program can hallucinate them with a keystroke?

Not everyone in show business, though, is throwing up their hands at the encroaching AI-ification of their industry. On February 27, a group of A-list screenwriters—including M3GAN scribe Akela Cooper and Iron Man 3’s Shane Black—announced their jointly developed tech platform, the Gauntlet. The goal of the Gauntlet is to keep humans at the center of the development process, in a moment when major decision-makers increasingly rely on AI tools for script evaluation. It’s the latest sign that many in Hollywood consider the human element of creativity indispensable and un-duplicable—and worth preserving at all costs.

“AI will entrap us in a matrix where none of us know what’s real,” SAG-AFTRA president Fran Drescher said at the SAG Awards, on February 24. “If an inventor lacks empathy and spirituality, perhaps that’s not the invention we need.”

Drescher spent much of 2023 negotiating a new contract for the Screen Actors Guild, with AI emerging as an enormous point of contention—much as it had with the concurrent writers strike. Actors didn’t want their likenesses used in future projects without consent, and writers didn’t want to unknowingly flesh out stories generated by algorithm, among other concerns for both guilds. In the end, each successfully secured regulations for how AI will be incorporated into their fields, though some say they don’t go far enough.

More recently, talent agency WME partnered with AI outfit Vermillio to offer its clients further protection against unauthorized use of their faces and IP. Perhaps more such guardrails will follow, and Hollywood will maintain more of its humanity. Or at least that was the hope before everyone got a glimpse of Sora.

OpenAI’s latest offering is more advanced in many ways than anything similar that’s come before it, like Runway’s Gen-2. The company claims that Sora, named after the Japanese word for sky, can populate any imagined scene with multiple distinct characters, a dynamic range of movement, and eerily precise, high-definition details. Its wind blows like real wind, its water flows and burbles, and everything these elements touch reacts accordingly. As OpenAI’s website describes, “The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.” As a bonus, its 60-second-max length vastly exceeds Runway’s 4-to-18 seconds. Although some aspiring filmmakers have already been experimenting with platforms like Midjourney to create small snippets with some success, OpenAI’s model promises to unleash a world of possibilities.

Sora’s most urgent implications for the film and TV industry, though, go beyond its ability to instantly generate “a stylish woman in Tokyo.” It ostensibly has video-to-video editing, which means users can drop into its simulated worlds anything they filmed in the real one. The model’s “sampling flexibility” also allows them to view the same prompt from other perspectives, like a digital panopticon. With advances like these, Tyler Perry is correct in sensing the budget-slashing opportunities Sora represents. It has the power to cut production costs, not just in special effects and previsualization, but in second-unit tasks like capturing establishing and master shots.

Of course, the trade-off for all this cost-cutting is that a lot of people will lose jobs.

Beyond the concerns that actors and writers negotiated over last year, Hollywood insiders fear the AI revolution will displace VFX artists, sound engineers, and many others. A recent study surveying 300 leaders across the entertainment industry estimates a loss of nearly 204,000 jobs over the next three years. In the days since Sora’s unveiling, workers and artists in the industry seem even more spooked—and the tech has yet to even go public.

Nobody knows how Sora will change between now and its eventual release. OpenAI claims its engineers are currently “engaging policymakers, educators, and artists around the world to understand their concerns and to identify positive use cases for this new technology.” Perhaps some of its capabilities will be curtailed by the time it becomes widely available. Or maybe, only once millions of users have access will its glitches and imprecisions come to the fore. Even if it does end up being as amazing as the demos seem to suggest, though, and usurps a lot of technical jobs, don’t hold your breath for Sora to democratize entertainment.

As this tech continues to evolve, it may soon be possible to make Sora hallucinate a feature-length horror movie in the style of Jordan Peele. But that doesn’t mean such prompts will produce something worth watching beyond the novelty of its existence. Given the tools to make anything they want to see, people will inevitably find out how hard it is to make a movie or show that anyone else wants to see—let alone something everyone wants to see.

The ability to fulfill prompts doesn’t necessarily translate to the ability to fulfill expectations of quality. If a user asks Sora to make a weird, funny movie about the rise and fall of BlackBerry—assuming it can eventually make videos longer than 60 seconds—what are the odds the model would make the same choices about dialogue, pacing, and character development as last year’s quirky critical hit? How could it capture a world as specific as 2023’s Theater Camp, without the writing and directing team’s shared lived experience?

Much of the cinematic magic that connects most with viewers is the result of a collaborative process with a visionary at the helm. It only emerges from the troubleshooting of myriad script drafts, day-of spontaneity, and fresh inspiration in the editing booth. Just because a text-to-video app can generate a stylish woman in Tokyo doesn’t mean it can generate a stylish movie about a woman in Tokyo. It may be able to depict a Pixar-level animated monster’s expression of wonder and curiosity, but will it dream up stories that stir those feelings in viewers?

Even if some fears about the quality of AI-generated films are valid, the demand for them so far seems rather tepid. According to a recent Morning Consult study of 2,201 U.S. adults, consumers are twice as interested in movies and TV shows created entirely by humans as opposed to those made entirely by AI. And that’s before audiences have even had the chance to engage much with the latter, outside of those discomfiting AI-generated South Park episodes from last year.

If any top Hollywood brass started imagining a slate of AI-generated projects after watching the Sora demo, well, as writer Joel Kim Booster suggested recently, perhaps it’s the executives who should be replaced by AI.


Joe Berkowitz is an opinion columnist at Fast Company. His latest book, American Cheese: An Indulgent Odyssey Through the Artisan Cheese World, is available from Harper Perennial.