
How calls for AI safety could wind up helping heavyweights like OpenAI

Abstract notions of super-dangerous AI may only feed the hype that current models are more impactful than they really are.

[Source photo: Tara Winstead/Pexels]

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

MORE AI RESEARCHERS SPEAK OUT AGAINST OPENAI’S SAFETY PRACTICES

For months, I’ve been writing about concerns that OpenAI, under Sam Altman’s leadership, is far more excited about pushing out new AI products than about doing the hard work of making them safe. Well, those safety practices are now a national news story. Last week, Jan Leike, who led the superalignment team at OpenAI, left the company for that reason. This week, a group of five ex-OpenAI researchers signed an open letter saying OpenAI and other AI companies aren’t serious enough about safeguarding large AI models. As AI models have become a red-hot investment opportunity, research labs have stopped openly sharing their safety research with the broader community. “AI companies . . . currently have only weak obligations to share some of this information with governments, and none with civil society,” the letter reads.

The letter demands that AI companies act with more transparency about the potential harms of their AI models and about the safety work meant to mitigate those risks. It also calls on the companies to stop using broad confidentiality agreements to keep would-be whistleblowers from speaking out. (Several current OpenAI employees signed the letter anonymously, fearing reprisals, as did one current and one former Google DeepMind researcher.)

“We’re proud of our track record providing the most capable and safest AI systems,” OpenAI spokesperson Lindsey Held said in a statement. Easy to say now, when AI tools are still far from having the intuition, reasoning ability, and agency to be truly dangerous. We’ve not yet seen an AI system act autonomously to, say, shut down the power grid or generate a recipe for a deadly bioweapon that can be made in somebody’s kitchen.

Ultimately, companies such as OpenAI aren’t harmed by any of this hand-wringing over safety. In fact, they’re helped by it. The news cycle feeds the hype that AI models are on the cusp of achieving “artificial general intelligence,” the point at which models would be generally better than human beings at thinking tasks (still aspirational today). And if governments are moved to put tight regulations on AI development, those rules will only entrench the well-moneyed tech companies that have already built large models.

ELON MUSK’S AI AMBITIONS REMAIN MYSTERIOUS, BUT URGENT

CNBC reported this week that Elon Musk diverted a Tesla order of thousands of Nvidia H100 AI chips to his X social media company (formerly Twitter). Musk is CEO of both companies, a time-splitting arrangement that has rankled some Tesla investors. Musk fired back at the report, saying that Tesla was simply not ready to deploy the expensive and highly sought-after Nvidia chips, whereas X was.

The whole ordeal points to the interconnected nature of Musk’s tech empire. Musk has been telling Tesla investors that he’s bulking up Tesla’s AI power with many more Nvidia GPU chips this year. Musk said in April that Tesla would increase its number of Nvidia’s flagship H100 chips from 35,000 to 85,000 by year-end. Tesla uses the Nvidia chips to develop and support the navigation systems in its cars, and for robotics research.

And Musk has ordered at least 12,000 H100s for X, which uses the Nvidia chips to serve content and ads. Musk’s AI startup, xAI, currently uses some of X’s data center capacity for its research, so it’s likely to get access to at least some of the new H100s, too. Grok, xAI’s current AI product, is powered by a text-only model, but the second generation of the model will likely be able to process images and sound as well.

Musk’s ambitions may go much further. He’s said he wants xAI, which has managed to attract some top AI talent, to focus on some very weighty science problems, such as modeling dark matter, black holes, or complex ecological systems now suffering from climate change. That requires lots of capital and computing power. Musk recently raised a new $6 billion funding round for xAI, and reports say he has plans to build a massive supercomputer, or “Gigafactory of compute,” possibly in partnership with Oracle.

2024 ROIS COULD MEAN BOOM OR BUST FOR GENERATIVE AI

If 2023 was the year that generative AI left the lab and went to work in the real world, then 2024 is when many C-suite types will be called on by their boards to show that those AI tools can actually deliver real productivity gains and cost savings. The answer will most likely be a mixed bag.

“Interestingly, the rise of GenAI seems to have shaken many executives’ assessments of their company’s overall AI achievements,” said Boston Consulting Group CEO Christoph Schweizer in the summary of a new survey of client companies. “Between 2022 and 2024, the proportion of executives reporting their companies had implemented AI with impact declined from 37% to 10%.”

Part of the problem is the technology itself. While large language models (LLMs), and now multimodal models, can do some impressive things, they still fail in basic ways. Hallucinations continue. Many companies are still coming to grips with the infrastructure work needed to ground AI models in reliable corporate data. And AI models fail at something humans are good at: continually learning new things and applying that knowledge when needed. A generative AI model’s training data ends at a cutoff date, past which it knows nothing about the world.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
