
Should we be afraid of Q*, OpenAI’s mysterious AI system?

Some speculate the new AI system may be a major step toward artificial general intelligence. Safety concerns around the system may have led to the firing of Sam Altman before Thanksgiving.


Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.


AI agents like ChatGPT that rely only on a large language model are just an early chapter in the story of AI assistants. ChatGPT and other LLM chatbots have an uncanny sense of language patterns, but they also hallucinate facts. And they lack the reasoning and planning skills needed to solve high-school-level math problems, for example, or to work through multistep tasks on a user's behalf.

But the next generation of AI agents is starting to take shape. Some of the outlines emerged from the recent leadership shake-up at OpenAI. At the time, the board gave only a vague reason for firing CEO Sam Altman, citing a lack of transparency with board members. (Altman was soon reinstated after employees revolted.) Some (me) thought there must be another issue in the background causing such dramatic action by the board—like a scary research breakthrough. Turns out that's exactly what it was. OpenAI has reportedly been working on a new kind of agent, known internally as "Q*" ("Q-star"), which marks a major step toward OpenAI's goal of making systems that are generally better than humans at doing a wide variety of tasks (aka artificial general intelligence, or AGI). The board reportedly feared that Altman might charge ahead with productizing Q* without allowing enough time for adequate safety guardrails to be put around it.

“My gut feeling is a reduced team of OpenAI engineers—led by OpenAI’s engineering team, including [president] Greg [Brockman] and [chief scientist] Ilya [Sutskever]—carried out experiments in a completely new direction with models capable of planning and complex-math solving, and found some good early results,” Bay Area AI developer/entrepreneur Diego Asua tells me. “This might have led to a rush to release an early version of this model to the public, causing conflict . . . to the point of triggering all the events we saw last week.”

Speculation over the technical makeup of Q* has gone wild on X over the past few days. But the best theory I've seen comes from Nvidia senior AI scientist Jim Fan, who predicted on X (née Twitter) that Q* likely uses a number of AI models working together to learn, plan, and carry out tasks. DeepMind's AlphaGo, the AI system that defeated the world champion Go player in 2016, similarly utilized several convolutional neural networks, and learned by playing millions of games of Go against an older version of itself. Q*, Fan says, may rely on a similar architecture: employing a neural network to devise the steps in a complex task, an additional network to score those steps and give feedback, and yet another to search the possible outcomes of any step chosen by the system.
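To make Fan's speculated architecture concrete, here is a minimal toy sketch of the propose/score/search loop he describes: one model proposes candidate next steps, a second scores each candidate, and a search procedure explores the best-scoring branches. Every function and scoring rule below is an illustrative stand-in of my own invention, not anything confirmed about Q*.

```python
# Toy sketch of a propose/score/search agent loop. The "models" here are
# placeholder functions standing in for the neural networks Fan speculates
# Q* might use; the real system, if it exists, works nothing this simply.

def propose_steps(state):
    """Stand-in 'planner' model: return candidate next steps for a task state."""
    return [state + (move,) for move in ("a", "b", "c")]

def score_state(state):
    """Stand-in 'scorer' model: a toy heuristic that rewards 'b' moves."""
    return sum(1.0 if move == "b" else 0.2 for move in state)

def search(state, depth):
    """Stand-in 'search' component: look ahead and keep the best-scoring branch."""
    if depth == 0:
        return state, score_state(state)
    best_state, best_score = state, score_state(state)
    for candidate in propose_steps(state):
        leaf, leaf_score = search(candidate, depth - 1)
        if leaf_score > best_score:
            best_state, best_score = leaf, leaf_score
    return best_state, best_score

plan, confidence = search(state=(), depth=3)
print(plan, confidence)  # the all-'b' plan wins under this toy scorer
```

The division of labor is the point: no single network has to both generate and evaluate plans, which is roughly how AlphaGo separated its policy, value, and tree-search components.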

OpenAI isn’t alone in taking this team approach to AI agents. DeepMind itself is working on a new AI agent called Gemini, which CEO Demis Hassabis has suggested might use a similar approach to that used by AlphaGo, but with a large language model thrown into the mix. This might result in a system that reacts to context or situational data, like AlphaGo, and also converses and takes instructions in plain language like ChatGPT.

Let’s be smart: LLMs were never going to be the whole answer to the chatbot question. Gemini and Q* may represent the path toward a next generation of chatbots.


At its big cloud computing shindig in Las Vegas, Amazon’s AWS division finally announced its entry in the AI chatbot wars. Oddly enough, the new LLM-based chatbot is called “Amazon Q.” Unlike Google’s Bard and OpenAI’s ChatGPT, Amazon Q is not intended for the general public. Rather, the bot is designed for workers within large enterprises who need AI assistance to access and synthesize their company’s corporate data. For many companies, all that data is stored in the AWS cloud, with AWS guaranteeing that the data is secure. Security is the reason many companies have been hesitant to use chatbots that weren’t designed with businesses in mind (like the consumer version of ChatGPT); they fear that a third-party chatbot might leak the data or put it in the wrong hands internally. AWS customers will likely trust it to keep data safe, and the assistant can use the permissioning system that the customer company already has set up to govern which employees get access to various types of data.
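The permissioning point is worth pausing on: an enterprise assistant can filter what it retrieves by the asking employee's existing entitlements before it ever composes an answer. Here's a minimal sketch of that idea; the document store, role names, and function are all hypothetical examples, not AWS's actual API.

```python
# Hypothetical illustration of permission-gated retrieval for an enterprise
# assistant: only documents the asking employee is entitled to see are
# eligible to inform the answer. All names and data here are made up.

DOCUMENTS = {
    "q3-earnings.xlsx": {"required_role": "finance"},
    "eng-roadmap.md":   {"required_role": "engineering"},
    "handbook.pdf":     {"required_role": "all"},
}

EMPLOYEE_ROLES = {
    "dana": {"finance", "all"},
    "lee":  {"engineering", "all"},
}

def retrievable_docs(employee):
    """Return only the documents this employee's roles permit them to see."""
    roles = EMPLOYEE_ROLES.get(employee, set())
    return sorted(
        name for name, meta in DOCUMENTS.items()
        if meta["required_role"] in roles
    )

print(retrievable_docs("dana"))  # earnings and handbook, but not the roadmap
```

The design choice matters for the leak fears described above: filtering happens before retrieval, so restricted content never reaches the model that drafts the response.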


Nineteen Fast Company writers and editors have been working on a major awards package that honored companies across numerous categories. I’m biased, but the AI companies spotlighted in the feature are particularly interesting. That’s partly because, even though 2023 marked the dawn of a new era for the generative AI story, a number of companies pushed forward far enough and fast enough to radically change the tech landscape at large. Our picks for the AI winners in this year’s Next Big Things in Tech awards include the image-generation phenom Runway, the customer service platform Ada, GitHub’s Copilot assistant, Nvidia’s Picasso tool, and more.



Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
