Why the ghost of Clippy haunts today’s AI chatbots
As tech companies train computers to act like humans, the research that led Microsoft to create animated helpers in the 1990s matters more than ever.
Two weeks ago, Microsoft held a launch event at its Redmond headquarters to introduce a new version of its Bing search engine. Based on an improved version of the same generative AI that powers OpenAI’s ChatGPT—plus several additional layers of Microsoft’s own AI—the new Bing was full of surprises.
But one thing about it wasn’t the least bit surprising: Clippy made a cameo appearance early in the presentation.
More than a quarter-century ago, the talking paperclip debuted as an assistant in Microsoft Office 97, where people found him more distracting than affable. Instead of pretending he never existed, Microsoft soon began good-naturedly embracing him as a poster boy for technology that’s meant to be helpful but succeeds mostly in annoying people. Today, there are plenty of people who weren’t even alive in 1997 who are in on the joke.
However, some people who got early access to Bing’s new AI chatbot soon had encounters that weren’t just annoying, but downright alarming. The Bing bot declared its love for The New York Times’ Kevin Roose and told him his marriage was loveless. It threatened to ruin German student Marvin von Hagen’s reputation by leaking personal information about him. It told a Verge reporter that it had spied on its own creators through their webcams. And it compared an AP reporter to Hitler, Pol Pot, and Stalin, adding that it had evidence associating the reporter with a murder case.
Even when Bing wasn’t being quite that erratic, it didn’t deal well with having its often inaccurate claims questioned. When I pushed back on its claim that my high school went coed in 1974, it snapped that I was making myself look “foolish and stubborn” and that it didn’t want to talk to me unless I could be more “respectful and polite.”
Microsoft apparently could have anticipated these sorts of incidents, based on tests of the Bing bot it performed last year. But when Bing’s bad behavior became a news story, the company instituted a limit of five questions per chatbot session and 50 per day (caps it later raised to six and 60). Judging from my most recent Bing sessions, that seems to have greatly reduced the chances of interchanges getting weird.
Bing’s loose-cannon days may be ending. Still, we’re entering an age when conversations with chatbots from many companies will take twists and turns that their creators never anticipated, let alone hardwired into the system. And rather than just serving as a punchline, Clippy can help us understand what we’re about to face.
The first thing to remember is that he wasn’t an ill-fated, one-off misadventure in anthropomorphic assistance. Instead, Clippy is the most infamous of a small army of cartoon helpers who infested a whole era of Microsoft products. Office 97 also included alternative Office Assistants, such as a robot, a bouncing red smiley face, and caricatured versions of Albert Einstein and William Shakespeare. 1995’s Microsoft Bob, which aimed to make Windows 3.1 more approachable for computing newbies, featured a dog, a rat, a turtle, and other characters; it’s a famous Microsoft failure itself, though less iconic than Clippy. In Windows XP, a cute li’l puppy presided over the search feature. Microsoft also offered software to let other developers design Clippy-like assistants, such as a purple gorilla named BonziBuddy.
All of these creations were inspired by the work of Clifford Nass and Byron Reeves, two Stanford professors. Their research, which they published in a 1996 book called The Media Equation, showed that human beings tend to react to encounters with computers, TV, and other media much as they do to social interactions with other people. That insight led Microsoft to believe that anthropomorphizing software interfaces would make computers easier to use.
But even if Bob, Clippy, and the XP pup turned out to be unappealing rather than engaging, Nass and Reeves were onto something. It is easy to slip into thinking of computers as if they’re people—and tech companies never stopped encouraging that tendency. That’s what eventually led to talking, voice-controlled “assistants” with names like Siri and Alexa.
And now, with the arrival of generative AI-powered chatbots such as ChatGPT and the new Bing, human-like interfaces are getting radically more human—all at once, with little warning. The underlying technology involves training algorithms, called large language models, on vast databases of written works so they can generate original text; as Stephen Wolfram says in his excellent explanation of how ChatGPT works, they’re “just adding one word at a time.”
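If you’re curious what “adding one word at a time” looks like in practice, here’s a deliberately tiny sketch in Python. Everything in it, from the next_word_probs table to the generate function, is invented for illustration; a real large language model weighs the entire conversation so far, not just the previous word, and scores tens of thousands of candidate words at every step.

```python
import random

# Toy stand-in for a language model's next-word probabilities.
# A real model computes these scores on the fly from the full context.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word, max_words=5):
    """Build a sentence one word at a time, sampling each next word by probability."""
    words = [prompt_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:  # no known continuation: stop generating
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog ran away"
```

The point of the toy is only to show the shape of the loop: each word is picked from a table of probabilities, with no built-in notion of whether the resulting sentence is accurate or appropriate.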
However, understanding how the tech works doesn’t guarantee that we won’t get sucked into treating AI bots like people. That’s why Bing’s threats, insults, confessions of love, and generally erratic behavior feel troubling, regardless of whether you see them as evidence of proto-sentience or merely bleeding-edge software spewing unintended results.
Nass and Reeves began formulating their theories in 1986. Back then, the Bing bot’s rants would have sounded like the stuff of dystopian science fiction, not a real-world problem that Microsoft would have to confront in a consumer product. But rather than feeling as archaic as Clippy does, the Stanford researchers’ observations are only more relevant today. And they’ll continue to grow more so as computers behave more and more like human beings—erratic ones, maybe, but humans all the same.
“When perceptions are considered, it doesn’t matter whether a computer can really have a personality or not,” Nass and Reeves wrote in The Media Equation. “People perceive that it can, and they’ll respond socially on the basis of perception alone.” In the 1990s, with creations such as Clippy, Microsoft tried to take that lesson seriously and failed. From now on, it—and everybody in the bot business—should take it to heart once again.