Casual followers of the AI boom may not have heard the name Mustafa Suleyman. Yet. He’s been a pioneer in the AI space since cofounding DeepMind with his friend Demis Hassabis back in 2010. The truth is, Suleyman has lived in the shadow of Hassabis, first at DeepMind and later after Google bought the startup. Now he’s stepping out.
He’s got a new company, Inflection AI. He’s got wide name recognition and pedigree within AI circles, and he has the faith and friendship of some people with very deep pockets. He cofounded Inflection with LinkedIn billionaire Reid Hoffman. Inflection’s product—an emotionally intelligent personal assistant named Pi—represents a different take on AI than most of its peers, such as OpenAI.
And, as if to mark his coming out, Suleyman has a new book, The Coming Wave, a plainspoken and compelling warning about how AI is about to change life as we know it. When I arrived at Inflection in early September, the first copies of the new book sat in a stack against the window in Suleyman’s small office. He let his gaze rest there for a moment, visibly proud.
Upon Pi’s debut earlier this year, it was immediately apparent that the assistant offered a different focus from other one-size-fits-all chatbots such as ChatGPT. Pi seemed focused on the needs of a single human user, not on doing everything from computer coding to discovering new drugs. Pi talks to the user in a personal and empathetic way, and gradually builds up knowledge about the user based on its conversations with them.
“You’re going to store lots of your creative moments, your brainstorming moments, a lot of your personal information, and you’re going to want that information with you, just like, back in the day, we might’ve had a thumb drive with us,” Suleyman said in a May interview with Fast Company. “This is the arrival of a new era in computing; this is going to be like bringing your digital life with you wherever you are,” he said.
Pi, then, may eventually become everything we hoped Siri or Google Assistant would be—that is, a super-assistant that’s an expert on me. “Pi is going to evolve to become a chief of staff for your life,” Suleyman says. “It’s going to help you organize, prioritize, think, create, and plan your day.”
Above all, Pi will know more. It’ll develop a more general knowledge of the world around the user. It’ll get better at learning facts that are unique to the user, and at making sense of them. It’ll give longer, more detailed answers. It will become a sort of tutor that can develop a detailed curriculum over days and weeks on a wide range of topics. Next year, Suleyman says, Pi will know how to take actions on the user’s behalf, and will be able to reason through longer, multistep tasks.
Pi’s emotional intelligence and user focus reflect Suleyman’s general approach to AI, which is flavored by a strong humanistic streak that goes way back. “His mother was a nurse, so he was unusually sensitive to healthcare in Britain and the health struggles that people were having,” Eric Schmidt says. Suleyman’s father was a Syrian taxi driver, and on weekends he was usually busy planning community events such as fetes and picnics, Suleyman remembers. That his parents passed their humanistic instinct on to Suleyman became apparent early on. He went to Oxford to read philosophy, but dropped out to help operate a telephone support line for Muslim teens in the wake of 9/11. Still, it was at Oxford that his humanistic approach to technology began to take shape.
In his book Suleyman argues that AI can’t truly be understood in purely technological or philosophical terms, but rather must be seen as a uniquely human creation that could profoundly affect humans and human systems in numerous ways—some of them obvious and some of them totally novel.
In 2010 Suleyman started DeepMind with his childhood friend Demis Hassabis (and AI researcher Shane Legg). The new company found success developing algorithms that taught deep learning systems to play, and win, popular video games. DeepMind’s early approach to this was unique: rather than pretraining its convolutional neural networks, the researchers let the systems learn on their own from experience, through a kind of trial-and-error learning loosely modeled on how the human brain learns. The small London-based research lab published its research on AI game play in 2013; Google took notice, and bought DeepMind for $650 million the following year.
Schmidt says he worked mainly with Hassabis during the acquisition, but Suleyman’s human-centered approach soon caught his attention. “I got to know Mustafa because he was interested in applications and in particular addressing sepsis, which is a big problem in the UK, using various AI technologies,” says Schmidt, who was Google’s executive chairman at the time of the acquisition. In 2016 Suleyman launched DeepMind Health at the Royal Society of Medicine, a group that builds clinician-led technology for the National Health Service (NHS) and other providers to improve frontline healthcare.
Suleyman became head of applied AI at DeepMind, a role responsible for applying machine learning technologies to a wide array of Google products and processes. “Sundar basically said to me, ‘identify new AI-first opportunities for the company,’” Suleyman says. “That was my remit and I could work with anybody or do anything.”
During its first few years under Google (later Alphabet), the group began infusing AI into Google products and processes. “Over the course of five years, my group, which was called DeepMind for Google, did deployments of our technologies—classifiers, various kinds of deep learning, and re-ranking,” Suleyman says. (Re-ranking was used to adjust the order of app listings in the Play Store, for example, or of suggested videos on YouTube, to be more helpful to users.) Such applications came at a breakneck pace: in 2019 alone, he says, his team did around 50 launches of AI projects across six major product groups at Google. His group also applied AI models to Google’s data center operations, reducing the substantial cost of cooling them by 30%.
Hassabis has always been the public face of DeepMind, but Suleyman’s own profile rose steadily in his years with Google. Schmidt says he spent most of his time with Hassabis during the acquisition and got to know Suleyman only later on. “I didn’t understand at the time how good a technologist he was because Demis sort of overwhelmed him in that sense—he was sort of in Demis’s shadow,” Schmidt says. “But I think in the past few years, he’s emerged from that shadow.”
By 2019, something new was brewing at Google. In 2017, the company’s natural language processing researchers had introduced a new, simpler kind of neural network architecture called the Transformer, which could gain an understanding of language with far less training time than previous language models. The following year, Google researchers developed BERT (Bidirectional Encoder Representations from Transformers), a new method of teaching a large language model (LLM) to build a complex map of language by processing large amounts of unlabeled training data (data scraped from the internet, for example). These two advances opened the door to the far larger models that produce ChatGPT-like mastery of language. Suleyman was transfixed by the research, seeing in it the beginnings of a totally new “conversational” interface between humans and computers. In 2020, he joined Google’s natural language research team, where he would remain for almost two years. He began spending lots of time with early versions of what would become the Bard chatbot, then powered by early versions of Google’s LaMDA LLM. His major contribution was developing a method of “grounding” the output of the model in facts, and avoiding “hallucinations.” “Before it produces an answer, you get it to cross check against a knowledge base; in the case of Google, that was the search results,” Suleyman says.
In 2020 the LaMDA research bubbled up to the attention of top management, with Suleyman its main champion within Google. Excitement about the technology built during 2020 and 2021, Suleyman says, culminating in a demonstration of the technology by Alphabet CEO Sundar Pichai during Google’s biggest event of the year, its I/O developer conference, in May 2021. In the demo, Pichai showed a human having perfectly natural conversations with LaMDA, which played the character of the dwarf planet Pluto, then a paper airplane.
“I thought that [demo] was going to be the rallying cry to get the company to make this a number one priority,” Suleyman says. “I made a huge slide deck and wrote a memo basically saying the future of search is conversational.” It became clear to Suleyman that a chatbot interface could be very useful in Google’s core Search product: Simply asking questions of an intelligent assistant would surely be preferable to typing in keywords and then swimming through a list of blue links. But Google executives remained hesitant to push AI interfaces into Google products, especially Search. Google’s core search advertising business is based on auctioning off ad space to advertisers whose ads target certain keyword searches, the argument went, and a single assistant interface would obliterate that system. “That was mind blowing to me . . . and that was my frustration throughout 2021,” Suleyman says.
The lawyers had another worry: antitrust. “Google has spent the last 15 years in Europe, on the witness stand, swearing and resolving legal cases on the basis that its job is to provide access to the open web,” Suleyman explains. The company argued that its search service didn’t just drive web users to Google’s own content and services, but rather to the web pages of the original content creators. But an AI search assistant would answer questions directly, drawing on its own training data or web checks. “So if Google then goes and creates an experience that disintermediates the third party content creator, then that undermines everything that Google has been saying for the past 15 years about the role of the home page,” Suleyman says.
Google’s hesitancy ultimately led to Suleyman’s departure in January of 2022, a few months before the next Google I/O event. And it got him thinking about starting a company of his own. “I was just too frustrated—I was like, well, I’m gonna do it, because I believe that the future of all interfaces is conversational.”
Inflection isn’t just another AI startup. It’s now recognized as one of a small group of companies with the money, people, and compute power to build toward AI systems that rival humans in general intelligence (“artificial general intelligence,” or AGI). Suleyman now gets invited to Washington, D.C., including to the White House, to talk to lawmakers about the promise and the risks of advanced AI.
Inflection has raised a lot of money, largely on the power of Suleyman’s pedigree, name recognition, and large professional network. In 2022, it raised $225 million in seed money from a long list of investors that included Reid Hoffman, Microsoft cofounder Bill Gates, and Eric Schmidt. This year, it raised another $1.3 billion from the same group of investors, this time joined by NVIDIA, which makes the coveted A100 and H100 GPUs used to train almost all large AI models. In a world of GPU haves and have-nots, Inflection is in that first group, and probably near the top of it. Suleyman tells me his company was the first to get NVIDIA’s new H100 GPUs, and now has the largest cluster of H100s in the world (22,000 of them), at a cost of $1.2 billion. “That basically means we have four times more compute than the total amount of compute that was used to train GPT-4,” he says. “Our model will be 10 times larger than GPT-4 in the spring.” In terms of talent, Inflection employs the development co-leads of DeepMind’s Gopher and Chinchilla models, the project leads and co-creators of Google’s LaMDA and PaLM LLMs, and the co-leads and co-creators of OpenAI’s GPT-2 and GPT-3 models.
With that much power, Inflection, like its peers, has a responsibility to make sure its systems are safe and remain safe as they evolve. Indeed, The Coming Wave deals mainly with the potential harms of AI, which Suleyman acknowledges are dangerous beyond precedent—even to the survival of humankind. (Like Demis Hassabis, Suleyman signed a letter from the Center for AI Safety stating that AI is an existential threat to humankind.)
However, Suleyman’s book focuses on dealing with the near- to mid-term changes the widespread use of AI is likely to bring. And he believes the potential dangers of advanced AI can only be understood and contained by seeing them within the context of complex human social, political, and economic structures. “That’s fundamentally what my book is—a meditation on how the nature of existing power is likely to change with the arrival of this new power, how centralized power is going to be amplified, how decentralized power is also going to be amplified, how those things will collide,” he says. “It’s a much more realpolitik analysis of how these new technologies end up practically changing the way that we live . . .”
Much of the discussion around the risks of AI has been dominated by doomers who talk in abstract, philosophical terms about the inevitability that hyperintelligent AGI systems will one day naturally become misaligned with the goals of humans, and use their superintelligence to deceive us and finally to do away with us. “The story and narrative that there is going to emerge some superintelligence is in itself a kind of fiction that we’ve created to distract ourselves from the practical reality that AI is actually transforming us today—it’s in production,” Suleyman says. The AI revolution isn’t something that will happen all at once in the future, he argues, but rather something that happens gradually. And people, corporations, and governments have time to manage AI’s serious risks and course-correct at every messy step along the way. It’s within those messy steps that Suleyman’s real interest lies.
In The Coming Wave, Suleyman sketches out two likely futures that could grow from the advance of AI, and neither is pretty. The first, which he calls “catastrophe,” involves the free and open proliferation of powerful AI models, leading to extremists or crazies weaponizing the technology to harm people, institutions, or states. The second, “dystopia,” involves a small number of corporations or states controlling the technology, and using it to exert China-style control over populations.
And yet, Suleyman describes himself as a technology optimist, a statement that rang true with me after talking to him for an hour.
“If you only read the book and it was the only thing you knew about Mustafa, you would presume a more pessimistic [attitude], but I think he’s more optimistic in general,” says Reid Hoffman. Notably, Suleyman isn’t one of the people who signed a letter calling for a “pause” in development of large AI models. “I don’t think Mustafa’s theory is to slow down; I think it’s to be super attentive and careful . . . to move with all the speed you can, but in the right direction.”
In fact, Suleyman says he believes history tells a story of technology bringing far more good to the world than harm. And he believes that AI will bring “radical abundance” within the next 40 or 50 years (how that abundance is distributed may be another question).
The fear is that the future will be a story of technology doing more harm than good, he says. “The question is, can we manage the downsides that come with that inevitable unfolding of the quest towards radical abundance?”