A case for making our AI chatbots more confrontational

A new study puts forward the idea that bots like ChatGPT should be less agreeable and more confrontational.

Spend any time interacting with AI chatbots and their tone can start to grate. No question is too taxing or intrusive for the noncorporeal assistants, and if you probe too far under the hood, the bot responds with platitudes designed to dull the interaction.

Nearly a year and a half into the generative-AI revolution, researchers are starting to wonder whether that deathly dull format is the best approach.

“There was something off with the tone and values that were being embedded in large language models,” says Alice Cai, a researcher at Harvard University. “It felt very paternalistic.” Beyond that, Cai says, it felt overly Americanized, imposing norms of agreeable, often saccharine consensus—norms that aren’t shared by the entire world.

In Cai’s household growing up, criticism was commonplace—and healthy, she says. “It was used as a way to incite growth, and honesty was a really important currency of my family unit.” That prompted her and colleagues at Harvard and the University of Montreal to explore whether a more antagonistic AI design would better serve users.

In their study, published on the open-access preprint server arXiv, the academics ran a workshop asking participants to imagine what a human personification of the current crop of generative-AI chatbots would look like if brought to life. The answer: a white, middle-class customer service representative with a rictus smile and an unflappable attitude—clearly not always the best approach. “We humans don’t just value politeness,” says Ian Arawjo, assistant professor in human-computer interaction at the University of Montreal and one of the study’s coauthors.

Indeed, says Arawjo, “in many different domains, antagonism, broadly construed, is good.” The researchers suggest that an AI coded to be antagonistic, rather than supplicating and sickeningly agreeable, could help users confront their assumptions, build resilience, and develop healthier relational boundaries.

One potential deployment the researchers came up with for a confrontational AI was intervention: shaking a user out of a bad habit. “We had a team come up with an interventional system that could recognize when you were doing something that you might consider a bad habit,” says Cai. “And it does use a confrontational coaching approach that you often see used in sports, or sometimes in self-help.”

However, Arawjo points out that confrontational AIs would require careful oversight and regulation, especially if deployed in areas like habit intervention.

But the researchers have been surprised by the positive response to their suggestion of retooling AIs to be a little less polite. “I think the time has come for this kind of idea and exploring these systems,” says Arawjo. “And I would really like to see more empirical investigations so we can start to tease out how you actually do this in practice, and where it could be beneficial and where it might not be—or what the trade-offs are.”
