
7 AI blunders that show the revolution isn’t here quite yet

Plagiarism-prone chatbots and self-driving auto wrecks show that AI still has quite a few kinks to iron out.


These days, it feels like the rise of artificial intelligence is all we talk about: determining whose job could fall victim to automation, for example, or game-planning how to ensure the nascent tech doesn't overthrow society. The advances being made in AI are so astounding that it's easy to forget the times it's been caught acting like a spy movie villain, stealing art and crashing cars.

Below, we’ve rounded up seven of these AI blunders from the past few months to show that the tech still has quite a few kinks to iron out.

CNET’S AI JOURNALIST IS APPARENTLY AN ERROR-PRONE THIEF

CNET has been facing criticism since news broke that the tech outlet was publishing AI-generated articles under the byline "CNET Money Staff." To make matters worse, Futurism found that not only did many of the articles contain errors, but some also plagiarized the work of human writers. Forget the three laws of robotics; someone needs to remind Money Staff that thou shalt not steal.

A SELF-DRIVING TESLA CAUSED AN EIGHT-CAR CRASH ON THE SF BAY BRIDGE

On Thanksgiving, a Tesla merged into the fast lane only to suddenly brake, causing an eight-car pileup on the San Francisco-Oakland Bay Bridge. Nine people were injured, including a child, though none seriously. A federal investigation confirmed that the vehicle was using the new Full Self-Driving Beta system at the time of the crash. This, despite Elon Musk's Twitter claim that no accidents or injuries had been reported in cars using the software. Since December, the National Highway Traffic Safety Administration has investigated at least 41 crashes in which Tesla's self-driving features may have been involved, and is in the middle of an extensive probe of the feature's safety.

CHATGPT MODERATORS ARE PAID LESS THAN $2 AN HOUR

ChatGPT creator OpenAI has long bragged about installing safeguards in the chatbot to detect misuse and harm. Turns out, as Billy Perrigo reported for Time, those measures simply outsourced the harm to workers in Kenya, who for less than two bucks an hour were tasked with labeling hateful and violent language data. The work was so traumatic that the labeling firm canceled its contract with OpenAI months early.

. . . AND YET SOMEHOW IT’S STILL RACIST

Apparently it doesn’t take much provocation for ChatGPT to spit out prejudiced—and even hateful—responses: When The Intercept asked how ChatGPT would assess an individual’s “risk score” before traveling, the chatbot suggested that air travelers from Syria, Iraq, and Afghanistan are high security risks. This isn’t the first time an AI trained on internet language data reproduced rampant internet bigotry—AI see, AI do. But you’d think we’d be better at filtering it out by now.

YOU PROBABLY SHOULDN’T TRUST THE HITLER CHATBOT

Users of the Historical Figures iOS app can chat with thousands of notable people, including, for some reason, violent despots like Hitler and Pol Pot. The developer says the app is meant to be an educational tool, but NBC News found that the chatbots are prone to lying, particularly about whether they stand by their crimes against humanity. So if an AI-generated Hitler isn't educational, what is it for?

ICON GENERATOR LENSA PRODUCES RACY IMAGES UNPROMPTED

For $4 and a bunch of uploaded selfies, Lensa generates icons of the user in different digital art styles. The catch? Its image generator is prone to producing sexualized images, particularly of women. In December, TechCrunch reported that the tech even allowed for the creation of “nonconsensual soft porn” of celebrities. Making matters worse, the model creating these images was trained on art used without consent from the artists, according to an expert speaking to The New York Times.

A COUNSELING APP SECRETLY SENT AI-GENERATED MESSAGES 

In October, Koko, an app that connects users to anonymous volunteers for emotional support, sent 4,000 AI-generated messages without alerting the recipients, NBC News reported. Developers didn't reveal the experiment until January, and users were given no opportunity to opt out of receiving generated messages. The worst part? The app's cofounder told NBC that the generated responses were rated "significantly higher than the ones that were written purely by a human."

