Are we being led into yet another AI chatbot bubble?

Some experts believe that today’s chatbots lack the “soft skills” needed to be truly useful in business settings.

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

ARE WE BEING LED INTO YET ANOTHER AI CHATBOT BUBBLE?

To AI researcher Michelle Zhou, the chatbot fever that’s gripped Silicon Valley for the past year has felt all too familiar. Zhou, who helped invent Watson Personality Insights while at IBM, still remembers the bot waves of 2012 and 2016. Obviously, Microsoft’s Tay and Zo didn’t catch on, nor did Facebook’s Messenger bots. And Zhou has her doubts about ChatGPT and Bard, too. “Those chatbots cannot chat,” Zhou says. “Those are information retrieval assistants.”

Her opinion is based on the idea that our expectations for chatbots are either too low or just ill-defined. Today, we mainly use chatbots as internet search helpers or productivity tools, but what enterprises (schools, hospitals, businesses, etc.) really need is something that can actually stand in for a human being. (Zhou’s company, Juji, for instance, is working with healthcare providers to develop assistants that can counsel patients through recovery from knee or heart surgery, with the goal of reducing the chances of an expensive ER visit.) Zhou says Juji is also working with universities seeking to prevent students from dropping out of school. One-on-one time with professors or counselors is often very limited, and yet, Zhou says, a study shows that such meetings increase the chances that the student will stay in school by 13.5%. And, of course, businesses want to save money by using bots as stand-ins for human representatives.

The ChatGPTs of the world simply aren’t equipped to perform complex tasks—not very well anyway. Zhou points out that large language model chatbots are still mainly stochastic parrots: that is, they are very complicated probability systems that, trained on massive amounts of text from the web, can generate the most likely next word in a sentence. Now, to be fair, during the training process, LLMs do gain a certain amount of basic understanding of how the world works. And they can be “fine-tuned” on specific knowledge bases, such as a user’s writing style or proprietary product or service information.
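To make the “stochastic parrot” idea concrete, here is a deliberately simplified sketch, in Python, of what next-word prediction looks like when stripped to its essence. The word table and probabilities below are invented for illustration; a real large language model learns something like them from web-scale text rather than from a handcrafted lookup table.

```python
import random

# A toy "stochastic parrot": a lookup table of made-up next-word probabilities.
# A real LLM estimates probabilities like these for any context using billions
# of learned parameters, but the basic move is the same: pick a likely next
# word given the words so far.
next_word_probs = {
    ("the", "patient"): {"recovered": 0.4, "was": 0.35, "reported": 0.25},
    ("patient", "recovered"): {"quickly": 0.5, "fully": 0.3, "slowly": 0.2},
}

def generate(context, steps=2):
    words = list(context)
    for _ in range(steps):
        key = tuple(words[-2:])              # condition on the last two words
        dist = next_word_probs.get(key)
        if dist is None:                     # no known continuation; stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "patient"]))  # e.g. "the patient recovered quickly"
```

The sketch has no understanding of patients or recovery; it only reproduces statistical patterns, which is Zhou’s point about why such systems need more than fluent text generation to act as true assistants.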

But even then, Zhou explains, chatbots still lack some crucial elements that would make them truly useful. Zhou says an effective AI assistant needs to be able to proactively engage a user in conversation, and to do that it needs a number of soft skills, including active listening, negotiation, and conflict resolution. It also needs “personal intelligence,” or the ability to understand the user’s motivations and psychological needs. “You want the AI assistant to be proactive, to have that personal intelligence to encourage me, to explain what I don’t understand in a way I can understand,” Zhou says. This requires that the AI develop an understanding of the user—a model of their personality traits, knowledge, attitudes, abilities, and hierarchy of needs. With those skills and that knowledge, the AI might be able to proactively spot and address pitfalls facing the user: a patient heading for a relapse, say, or a student on the verge of dropping out of school.

The promise of AI assistants may be mostly about the extensibility of human knowledge. Businesses with a limited number of employees need to service a far larger number of customers. Advanced AI assistants offer a chance for all users to get high-touch one-on-one attention from businesses or institutions. But the soft skills Zhou mentions may be part of a future wave of chatbots—one that we’re still years away from experiencing.

JOY BUOLAMWINI’S SOLUTION FOR UNETHICAL MODEL TRAINING

Joy Buolamwini has conducted groundbreaking research on bias in AI systems. Founder of the influential Algorithmic Justice League, Buolamwini has been called the “conscience of the AI revolution.” In her new book, Unmasking AI (out October 31), she describes how she came to specialize in AI bias, her research, and its impact.

Bias remains a big problem with the data being used to train our most advanced AI systems. Not only are the well-monied companies at the frontiers of AI research less than transparent about how they build and train their models, but they don’t talk much about the fact that they trained their systems mainly on data scraped from the internet, without getting permission from (or offering compensation to) the millions of artists, authors, journalists, and other users who created that content in the first place.

“When you’re taking all of that kind of information and you’re giving people nothing . . . you’re literally going into somebody’s house, taking the book off their shelf and putting it into your system and then being paid for that,” Buolamwini tells me. Her solution to the problem is something she calls Deep Deletion. Not only should those training data sets be deleted, she says, but the models built using the data should be deleted. And she doesn’t stop there. “The products built on top of those models would need to be reconfigured so they could be built on a foundation of fair data and ethical AI processes,” she says. “And I think now is the time to do this necessary reset.”

While AI companies have now begun striking deals with large content creators, they’re not likely to voluntarily delete the training data they’ve already gathered. However, it’s not out of the realm of possibility that the companies could be compelled to do so by the courts or the FTC, Buolamwini says. The FTC has said that companies engaged in permissionless data scraping will have to delete the data. In fact, the FTC opened an inquiry into OpenAI’s data-gathering practices this summer.

AI LOBBYISTS ARE KEEPING BUSY IN D.C.

D.C. is buzzing with lobbyists trying to influence lawmakers’ thinking on how to regulate AI. While AI regulation cuts across virtually all industries, lobbyists representing tech companies involved with AI have been increasingly aggressive throughout 2023. One lobbyist told Politico this week that many lawmakers are looking to lobbyists for specialized knowledge and guidance on how to think about the sprawling issue. Filings show that Meta was the fifth-biggest spender on outside lobbying ($5.1 million) in the third quarter, while Amazon came in sixth ($4.1 million). The technology “cuts all across industry sectors, health care, energy, transportation, you name it,” Nadeam Elshami of the lobbying firm Brownstein Hyatt Farber Schreck told Politico. “And that has gotten a lot of our clients and new clients interested in this space.”

Also this week, another Politico reporter, Brendan Bordelon, dug up an interesting story about OpenAI’s influence in Washington. Bordelon’s piece concerns a September open letter, signed by a wide swath of advocacy groups, arguing that Congress should not force AI companies to get permission from content rights holders in order to use their content to train AI models. Bordelon discovered that the letter was circulated by an attorney named Sy Damle, who is also listed as one of the attorneys defending OpenAI against a copyright infringement lawsuit filed by comedian Sarah Silverman.

“The letter’s covert origin offers a window into the deep and often invisible reach of Big Tech influence in the Washington debate over AI,” writes Bordelon.

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
