
Brands are growing more concerned about how they are perceived by ChatGPT

As chatbots and AI-native search increasingly become the arbiters of the web’s information, companies may need a new SEO for the AI age.


Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. Sign up to receive this newsletter every week via email here.

BRANDS ARE GETTING CURIOUS ABOUT HOW THEY’RE SEEN BY AI MODELS

People are increasingly turning to large language models (LLMs) to search for product information (see: chatbots like ChatGPT or AI search tools like Perplexity). Little surprise, then, that companies are growing more concerned about how these LLMs perceive their brand.

Jack Smyth, chief solutions officer at digital marketing firm Jellyfish, has been conducting research to figure out why certain user prompts cause LLMs to mention brands. LLMs organize words and phrases, including brand names, within a huge vector space according to their meaning and the contexts in which they’re often used. So, for example, an LLM might associate the name of a lotion product with the term “gentle.”
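To make the vector-space idea concrete, here is a minimal, hypothetical sketch (not Jellyfish’s actual tooling) that uses an open-source embedding model to score how closely a made-up brand name sits to candidate attribute terms; the model choice, brand name, and attributes are all placeholders:

```python
# Illustrative only: probe which attribute terms an embedding model places
# closest to a (hypothetical) brand name, using cosine similarity as a
# rough proxy for "association."
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

brand = "AcmeSoft lotion"                        # hypothetical product name
attributes = ["gentle", "harsh", "moisturizing", "irritating"]

brand_vec = model.encode(brand, convert_to_tensor=True)
attr_vecs = model.encode(attributes, convert_to_tensor=True)

# Higher cosine similarity = the terms sit closer together in the model's
# vector space.
scores = util.cos_sim(brand_vec, attr_vecs)[0].tolist()
for attr, score in zip(attributes, scores):
    print(f"{attr:>12}: {score:.3f}")
```

An analyst could run this kind of check before and after publishing new content to see whether the associations around a brand term appear to shift, though production LLMs are far larger and less transparent than this toy example.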

But what if the model fails to associate a certain term with the product? Or worse yet, what if the LLM associates the product name with some negative term? Smyth says he’s talked to financial firms that are trying to make sure LLMs aren’t associating their brand names with anything related to “ESG” or “woke” during an election year.

While companies can’t change what the models already “know” about the brand, they can put new content into the information space in hopes that it’ll reach, and sway, AI models. Right now, that exposure happens mainly when a model is trained using a huge compressed version of all the content on the internet, but that might change. “As these models become connected to the open web by default, which is the best way to make sure they’re as useful as possible, they’re going to be ingesting more and more topical or recent content,” Smyth tells me.

“It’s almost like model surgery or adversarial optimization, and that’s where it gets really fun because we have to figure out what type of content is going to have the biggest impact on that model,” he says. Smyth believes that LLMs will increasingly be trained by watching web videos. “Our working hypothesis is that video—just because it’s a richer format or it might get more eyeballs on it—is likely going to be pretty significant.” He says he’s also advised brands to look at the entire body of media they’ve published over the years, and to eliminate anything that may have sent the wrong messages.

NEW APPLE LLM RESEARCH REVEALS A STRATEGY FOR MAKING SIRI GREAT

Apple wants to enable a compact language model on your iPhone to “see” content from applications and websites you have open on your screen or running in the background. In a new paper, the company’s AI researchers propose a method of creating a completely textual representation of such content (and its place on the screen) so that the language model can understand it and use it in conversations with the user. “To the best of our knowledge, this is the first work using a large language model that aims to encode context from a screen,” the researchers write in the paper, which is posted on the open-access repository arXiv.

For example, if a user is looking at a list of nearby businesses within the Maps app, they could simply ask, “What are the hours for the last one on the list?” without having to name the business. Or, if a user is looking at a list of phone numbers on the Contact Us page of a website, they might tell Siri to “call the business number.”
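For illustration only, here is a rough sketch of what serializing screen content into text might look like; the data structures, coordinates, and business names are invented, and Apple’s paper describes its own encoding rather than this one:

```python
# Illustrative sketch (not Apple's implementation): serialize on-screen UI
# elements into plain text, ordered by position, so a language model can
# resolve references like "the last one on the list."
from dataclasses import dataclass

@dataclass
class ScreenElement:
    text: str   # visible label, e.g. a business name or phone number
    top: int    # y-coordinate on screen, used to preserve reading order
    left: int   # x-coordinate on screen

def screen_to_text(elements: list[ScreenElement]) -> str:
    """Render screen content as a numbered, top-to-bottom text listing."""
    ordered = sorted(elements, key=lambda e: (e.top, e.left))
    return "\n".join(f"{i + 1}. {e.text}" for i, e in enumerate(ordered))

# Hypothetical Maps-style results shown on screen
elements = [
    ScreenElement("Blue Bottle Coffee - 0.3 mi", top=120, left=16),
    ScreenElement("Ritual Roasters - 0.6 mi", top=180, left=16),
    ScreenElement("Sightglass Coffee - 1.1 mi", top=240, left=16),
]

prompt = (
    "On-screen content:\n"
    + screen_to_text(elements)
    + "\n\nUser: What are the hours for the last one on the list?"
)
print(prompt)  # this textual context would be passed to the on-device model
```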

One of the coolest storylines in generative AI is the expanding set of data types that models can access, reference, and learn from. The first LLMs could only process text from the internet, but new multimodal models can understand audio and video. A future iPhone may be able to “see” app content on your phone, but Apple’s great opportunity is to inform Siri with more kinds of data the iPhone already collects, such as the audio environment captured by its microphones, the world seen through its camera, and the motion detected by its sensors.

WHY OPENAI WAIVED USER SIGN-IN FOR CHATGPT

OpenAI is no longer requiring people to set up an account before using ChatGPT. Users can simply open the app or website and start prompting. The company says it’s rolling out sign-up-free ChatGPT gradually, with the aim to “make AI accessible to anyone curious about its capabilities.”

The move makes sense if you think about why OpenAI opened its chatbot to the public in the first place. Many of its researchers came to the company because they wanted to expose AI to real users instead of just writing research papers about it. The idea is that they can learn a lot about how the chatbot can be used, and misused, from people in everyday situations.

The company also wants to use the dialogues users have with ChatGPT to train its AI models. User-generated content may be as valuable to AI companies as it is to social networks. By nixing the sign-up, OpenAI removes a barrier to collecting training data. (The company points out, however, that users can opt out of having their conversations used for training.)

Now that anybody can use ChatGPT, the risk of someone using the tool for harmful purposes may increase. OpenAI says it’s putting additional content safeguards in place for users without accounts and will block a wider range of prompts.

OpenAI’s move may also be an attempt to win new users from an increasingly capable field of alternatives, such as Google’s Gemini. SimilarWeb data shows that February visits to ChatGPT on mobile and desktop were down 2.7% from January, and down 11% from the peak of its popularity in May 2023.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

