What ’80s movies got right and wrong about living with AI
The decade that brought us ‘Blade Runner’ and ‘Terminator’ had a lot of ideas about how AI would factor into our lives. Some of them proved more accurate than others.
Many of us seemed to have already fallen in love with our iPhones well before Spike Jonze’s techno-love story Her took that concept all the way. On its tenth anniversary last fall, Her attracted plenty of fresh analysis about its prophetic vision—a world where everyday interactions with AI go far beyond asking Siri for directions. (More recent discussions about the film tend to revolve around whether OpenAI execs “borrowed” Scarlett Johansson’s voice for their GPT-4o.) But our visions of the AI future didn’t go mainstream with Her. It happened back in the era of spandex, neon, and eight-bit computer graphics: Yes, I’m talking about the ‘80s.
The notion of advanced robots with human-like intelligence dates back at least to Samuel Butler’s 1872 novel, Erewhon, and persisted for decades after with the robo-workers in Metropolis and the killbots of Westworld and 2001: A Space Odyssey. Only in the 1980s, though, did AI characters and motifs explode in popularity. The dying days of the Cold War inspired some filmmakers to imagine nuclear apocalypses either hastened or fully driven by AI, while the rise of personal computing nudged others in comparatively mundane directions. Whatever their provenance, the breadth of AI use cases in ‘80s entertainment reveals that decade as the moment when our paranoia and fascination around living with this technology came into full bloom. We never seem to stop talking about AI these days, but the conversation first heated up back in the era of Max Headroom.
Some of these depictions, however, proved more accurate than others.
WHAT THE 1980S GOT RIGHT ABOUT LIVING WITH AI
1. We do, indeed, interact with AI all the time
Throw a rock in the 2015 world depicted in 1989’s Back to the Future Part II and you’re bound to hit a piece of smart tech. Even when ‘80s flicks missed the mark on specific future technologies, they still carried a prescient whiff of what was coming—mainly, that AI would be everywhere. No hydratable pizzas or hoverboards necessarily, but plenty of talking to responsive screens. AI ubiquity in these films extends far beyond consumer electronics, though, and into the professional sphere—where the police precincts are getting populated by RoboCops. While AI has not yet displaced too many jobs in 2024, it’s already become all too prevalent in surveillance and cybersecurity, and has crept into many other fields as well.
2. They are starting to get more conversational
Setting aside its legally dubious vocal resemblance to the star of Her, GPT-4o is just the latest talking AI to flap its virtual gums with flair. The shift to conversational AI has only just begun, but we’re already closer to the natural-sounding replicants of Blade Runner, and the incorrigible sassafras of KITT from Knight Rider, than we are to the monotonous bleep-blorps of yore.
3. We outsource even creative tasks to AI
A viral post on X from earlier this year succinctly summed up how a lot of creatives are feeling in 2024: “I want AI to do my laundry and dishes so that I can do art and writing,” wrote author Joanna Maciejewska, “not for AI to do my art and writing so that I can do my laundry and dishes.”
Not everyone feels that way, however, as the advent of dream-weaving generative AI like Sora finds champions in Hollywood, even as it rattles many others. Goofy ‘80s comedy Electric Dreams predicted the day when humans with a creativity deficit would outsource art-making to AI. In it, lovestruck loser Miles gives his sentient PC Edgar directions for writing a song he can pass off as his own to impress a violinist love interest. “Use words like ‘hug,’ ‘hold,’ ‘kiss on my lips,’ ‘tears on her pillow’—it doesn’t matter, they just need to rhyme,” Miles says, pioneering how people would later use apps like AIVA and Riffusion. Those apps probably won’t deliver the goods like Edgar, though, who ends up with a slinky slow jam that sounds suspiciously like a Culture Club song.
4. AI would be able to beat us at video games
Video games were still in their infancy when movies began postulating that AI would be way better at them than humans. In 1985, the same year the Nintendo Entertainment System burst onto the scene, the film D.A.R.Y.L. depicted its titular Data-Analyzing Robot Youth Lifeform effortlessly wiping the floor with his 10-year-old foster brother in a game of Pole Position. By now, designing AI that can beat humans at any given video game has become easy enough that scientists at Google’s DeepMind have moved on to creating AI that can be of collaborative assistance with gameplay. The idea that a human-operated phone service, the Nintendo Power Line, was ever a thing becomes quainter with each passing day.
5. We would need to create tests to detect it
Some of the more captivating scenes in Blade Runner involve the Voight-Kampff test, which determines whether or not the taker is a “replicant.” It’s kind of like a verbal Rorschach, except it does have wrong answers and they do carry a death penalty.
Our world may not (yet) be rife with replicants, but the AI material we can now generate is already impressive enough to merit tools for authentication. We now have tests to determine whether AI was involved in creating text, film scripts, art, and people’s voices—and thankfully, these seem more appropriately complex than the “which of these images has a sign in it” robot test.
WHAT THE 1980S GOT WRONG ABOUT LIVING WITH AI (SO FAR)
1. They would mostly be humanoid hardware
The techno-thrillers of the 1980s leaned hard on AI in the form of robots passing for human. Although the internet is teeming with bots designed to separate us from our money or sway elections, they seldom show up in the literal (synthetic) flesh. Humans appear to be far more comfortable dealing with virtual assistants and, increasingly, friends who live in our phones.
2. They would experience human emotions
When he’s not eviscerating his foster brother at Pole Position, D.A.R.Y.L. is busy experiencing the full spectrum of human emotions. He seems genuinely afraid when running for his little robot life from government agents, and joyful after he makes a clean getaway. More importantly, he doesn’t appear to be expressing those emotions for an audience. While AI can certainly mimic emotions for humans in need of some empathy, they tend not to spiral into existential crisis at the prospect of disassembly, like Short Circuit’s Number Five, let alone wax poetic about it, like Roy Batty does in his iconic monologue at the end of Blade Runner.
3. They would be capable of independent thought and action
“Sorry, pal—forgive me,” says KITT, the perceptive Pontiac Trans Am from Knight Rider, just before ejecting a helpless David Hasselhoff from the driver’s seat. It’s not the result of a decision-tree algorithm in KITT’s programming, i.e., “in the case of x threat level, do y.” It’s just what occurred to the supercomputer-on-wheels in the heat of the moment, an independent decision. Much like the War Operation Plan Response from WarGames, KITT was blessed with the ability to think and act with autonomy. We should all be crossing our fingers that no AI in the real world ever gets that much freedom.
4. They would tend to be manipulative and malevolent
There wasn’t exactly a ton of optimism in the ‘80s around how self-aware AI would conduct themselves. They never seemed to act in the interest of humanity exactly. Instead, they sized us up as the yassified apes we are, and began wiping us out accordingly—whether it’s Skynet inciting the nuclear apocalypse in The Terminator, or the sentient computer from Superman III turning a lady into a cyborg. (It’s an incredibly disturbing, nightmare-fuel scene, completely incongruous with the tone of what is otherwise a playful children’s movie.) Real-life AI haven’t displayed much in the way of malevolent behavior just yet, although the “deeply unsettling” experience a New York Times reporter had with Bing’s chatbot came a little too close.
5. They would surpass human control
The WOPR in WarGames—yes, pronounced like “whopper”—is specifically designed without a failsafe, so it can make the hard call of launching a nuke, which a feeble human might fumble. Most other ‘80s AI, like KITT, have a kill switch or some other means of control, which they almost invariably find ways to circumvent. Although it doesn’t appear to have happened yet in real life, this possibility is the source of much concern. A 2022 survey of AI researchers found that the majority believed there is a 10% or greater chance that our inability to control AI will cause an existential catastrophe. Perhaps gaming out so many civilization-destroying scenarios onscreen in the ‘80s, though, sounded enough alarms to ensure they don’t play out in real life. Though if all AI ends up doing is eliminating more jobs and homogenizing art into obsolescence, maybe an apocalypse would be preferable.