
Why Meta is offering users a peek under the hood of its AI-powered algorithm

The social media giant may be getting its users ready for Facebook and Instagram content that’s far more curated—and even created—by AI.

[Source photo: Porapak Apichodilok/Pexels; Markus Spiske/Pexels; Rawpixel]

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m looking at Meta’s attempt to show users (and regulators) how its AI-assisted algorithm picks content for their social feeds. Also, web publishers are increasingly unhappy that their content is being used, without charge, by well-moneyed companies like Google and OpenAI to train their respective AI models.

WHY META REALLY WANTS TO SHOW HOW ITS ALGORITHM WORKS

Whenever Meta leadership appears before Congress, there’s always a line of questioning about the company’s “algorithm,” the complex system of computations that decides what content goes into users’ feeds. The company’s willingness to serve up hateful, misleading, and divisive content in exchange for greater user “engagement” and more ad dollars has been encoded within the algorithm; the internal documents leaked in 2021 by whistleblower Frances Haugen certainly suggested as much.

But now Meta says it wants to come clean about how its algorithm uses signals from the user to pick and rank content. “This is part of a wider ethos of openness, transparency and accountability,” Nick Clegg, Meta’s president of global affairs, wrote in a blog post late last week.

The algorithm, Clegg explained, is actually a group of 22 AI models, each one trained to place different categories of content within different sections of the Facebook and Instagram apps. In the Facebook homepage feed, for example, three AI models might interact to choose the content: A “Feed” model suggesting content from friends and family, a “Feed Recommendations” model suggesting posts from people the user isn’t connected to, and a “Reels” model recommending short-form videos.
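To make the idea concrete, here is a minimal sketch of how several specialized ranking models might jointly assemble one feed. Everything in it, including the scoring heuristics, model names, and post fields, is invented for illustration; Meta’s actual systems are vastly more complex and are not public.

```python
# Hypothetical sketch: three toy "models" each score a candidate post,
# and the feed is ranked by each post's best score across models.
# All heuristics and field names here are assumptions for illustration.

def feed_model(post):
    # Toy rule: favor posts from friends and family.
    return 1.0 if post["source"] == "friend" else 0.0

def feed_recommendations_model(post):
    # Toy rule: favor posts from unconnected accounts with high predicted interest.
    return post["predicted_interest"] if post["source"] == "unconnected" else 0.0

def reels_model(post):
    # Toy rule: favor short-form video.
    return post["predicted_interest"] if post["format"] == "reel" else 0.0

def rank_feed(candidates):
    """Score every candidate with every model, rank by the best score."""
    def best_score(post):
        return max(feed_model(post),
                   feed_recommendations_model(post),
                   reels_model(post))
    return sorted(candidates, key=best_score, reverse=True)

candidates = [
    {"id": "a", "source": "friend",      "format": "photo", "predicted_interest": 0.2},
    {"id": "b", "source": "unconnected", "format": "photo", "predicted_interest": 0.9},
    {"id": "c", "source": "unconnected", "format": "reel",  "predicted_interest": 0.7},
]
print([p["id"] for p in rank_feed(candidates)])  # prints ['a', 'b', 'c']
```

The point of the sketch is the architecture, not the scoring: each model specializes in one content category, and a final ranking step merges their outputs into a single feed.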

Such AI suggestion engines are nothing new; they came into use long before the arrival of generative AI systems like ChatGPT. So why is Meta releasing this information now?

The disclosure sends a signal to Congress, which has grown far more interested in the negative societal effects of Facebook and Instagram, especially in the wake of the 2021 leaks. It could also be part of a broader effort to prepare users for feeds increasingly populated with content from people outside their networks: celebrities, politicians, and “creators.” As Tom Alison, head of Facebook, told Fast Company back in March, Meta will increasingly rely on complex AI models to select the videos that keep users watching and scrolling.

THE BATTLE OVER DATA RIGHTS HEATS UP

Large language models, like the one that powers ChatGPT, are trained on massive amounts of data scraped from the public internet. The people and companies that published that content never agreed to that use, and will make no money when it’s used to train AI models. OpenAI already faces a number of lawsuits from creators, authors, and publishers over its training practices, and will very likely face more legal action in the future.

The conflict is taking other forms. Elon Musk limited non-signed-in users’ access to Twitter posts over the Independence Day break, suggesting the move was triggered by a concern that AI model developers were “pillaging” the site for data to train models on the things people say in the internet’s “public square.” And Reddit, which is getting ready for an IPO, has been in a high-profile joust with its volunteer content moderators over the same issue. On July 1, Reddit began charging outside developers for access to the site’s content. “The Reddit corpus of data is really valuable,” the company’s CEO Steve Huffman told The New York Times. “But we don’t need to give all of that value to some of the largest companies in the world for free.”

The conflict is just getting started. Expect it to play out in courtrooms and boardrooms around the U.S. and the world.

U.S. MAY LIMIT CHINESE ACCESS TO AI COMPUTE POWER SERVED BY U.S. CLOUD COMPANIES

The current AI renaissance can be credited to an increase in available computing power as much as to new AI models and training data. You don’t even have to buy the computers; you can access them through the cloud via services like Amazon’s AWS and Microsoft’s Azure. But in a move that could have major geopolitical implications, the Biden administration is preparing to restrict Chinese companies’ cloud access to AI computing, according to The Wall Street Journal. If the rule takes effect, Amazon and Microsoft would have to get permission from the U.S. government before selling AI-related cloud computing services to Chinese customers.

Such a move would add new tension to U.S.-China relations, which are already increasingly strained. The news comes on the heels of China’s announcement that it will restrict the export of metals that U.S. companies use in the manufacture of advanced AI chips. Currently, the majority of the world’s densest and most advanced chips are produced by TSMC in Taiwan. But a Chinese restriction on those metals could put a crimp in the production of the graphics processing chips needed for machine learning, many of which are now supplied by U.S.-based Nvidia.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
