Could Meta use Threads conversations to train its AI chatbots?

Mark Zuckerberg may not have to dethrone Twitter to profit handsomely from his new app.

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m looking at Meta’s new Twitter challenger, Threads, whose users may not realize their conversations and opinions could one day be used to train AI models. Also, Bill Gates weighs in on the near-term societal risks of AI.

WHY META’S THREADS MAY NOT HAVE TO BEAT TWITTER TO WIN

Meta CEO Mark Zuckerberg has been crowing about the 100 million people who quickly signed up to use his company’s Twitter competitor, Threads. It’s understandable that people would want to move away from Twitter, which has been stuck in a tailspin since Elon Musk bought the company last year.

But dethroning Twitter as the world’s “town square” won’t be easy. Network effects are at work, and not in Meta’s favor: Twitter stars won’t want to forsake their big followings for a fresh start on Threads, nor will their followers simply walk away from their favorite accounts’ platform of choice. And social networks need network effects to function properly—they run on brand advertising revenue, and brands want to follow the users.

Is Zuckerberg taking on this risky gambit just to spite Musk? Doubtful. There may be something else in the Threads equation for Meta, even if it doesn’t overtake Twitter: AI training data. Meta is developing large language models just like everyone else, and LLMs must be trained on sizable chunks of organic text on a wide range of topics. That’s why OpenAI trained the models underpinning ChatGPT on Twitter conversations—back when gathering training data was as easy as scraping the public web. Now, tech companies are beginning to lock down their data (as Stack Overflow and Twitter have already done), which prevents other companies from accessing that source material for training language models. There’s a real possibility, then, that future LLMs are characterized by the proprietary training data available to them. Google’s models may gain an understanding of the world by consuming mountains of YouTube videos. Elon Musk may use Twitter content to train the ChatGPT rival he’s hinted at building. And Meta may one day find a rich fund of training data in Threads.

IF CHATGPT’S POPULARITY IS FADING, IT DOESN’T MATTER MUCH

Reuters and the Washington Post both ran stories last week suggesting that the popularity of ChatGPT may be trending down. The stories cite Similarweb data showing a 9.7% decrease in traffic to OpenAI’s ChatGPT website. Never mind that it’s summertime (students are probably taking a break from the chatbot) and that OpenAI recently launched a ChatGPT app, which naturally would shift users away from the website. Even if interest in ChatGPT has in fact softened, it doesn’t matter much. The chatbot, which quickly rocketed past 100 million users after its November 2022 launch, has already done what it was intended to do.

The main purpose of ChatGPT was to dazzle the public with the large language models OpenAI had been developing. It was not a moneymaker; it was free and, in fact, for OpenAI it was (and is) very expensive to operate. Profitability was never the point. The point was capturing the public imagination, and ChatGPT did so as few tech products ever have.

The immediate groundswell of interest in ChatGPT very likely gave OpenAI additional bargaining power in its talks to partner with Microsoft, which invested $10 billion in OpenAI less than two months after the launch of ChatGPT. More importantly, the chatbot’s popularity quickly kicked off an arms race among big tech companies to develop evermore powerful language models and apps. That race is still picking up steam.

BILL GATES’ VIEWS ON AI RISKS

Microsoft cofounder Bill Gates has said that he’s been floored by exactly two technologies in his lifetime: the graphical user interface, which he saw for the first time in 1980, and a demo of OpenAI’s GPT language model he was given in 2021. Gates believes artificial intelligence will soon reinvent many parts of personal and business life; he’s also concerned about the costs. While much of the discussion of AI safety has focused on long-term existential threats (i.e., a Skynet scenario), Gates is more concerned about a set of near- to medium-term risks, which he discusses in a new blog post.

On retraining workers for the AI age, Gates writes that it’s the responsibility of “governments and businesses, and they’ll need to manage it well so that workers aren’t left behind—to avoid the kind of disruption in people’s lives that has happened during the decline of manufacturing jobs in the United States.” Gates also sees deepfakes, AI-generated misinformation, AI-assisted cyberattacks, and bias in AI models as pressing issues.

On the sunnier side, Gates seems to dismiss concerns that ChatGPT will erode education. “It reminds me of the time when electronic calculators became widespread in the 1970s and 1980s,” he writes. “Some math teachers worried that students would stop learning how to do basic arithmetic.”


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
