
What is AGI in AI, and why are people so worried about it?

Artificial general intelligence is a breakthrough innovation that OpenAI and its rivals are either trying to achieve—or prevent.


We used to worry about AI becoming “sentient,” or that something called the “singularity” would occur and AIs would begin creating other AIs on their own. The new goal post is something called artificial general intelligence, or AGI—a term that’s being subsumed into the realm of AI marketing and influence-pushing.

Here’s what you need to know.

HOW DO WE DEFINE AGI?

AGI usually describes systems that can learn to accomplish any intellectual task that human beings can perform, and perform it better. An alternative definition from Stanford’s Institute for Human-Centered Artificial Intelligence defines AGI as “broadly intelligent, context-aware machines . . . needed for effective social chatbots or human-robot interaction.” The consulting company Gartner defines artificial general intelligence as “a form of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks and domains. It can be applied to a much broader set of use cases and incorporates cognitive flexibility, adaptability, and general problem-solving skills.”

Gartner’s definition is particularly interesting because it nods at the aspect of AGI that makes us nervous: autonomy. Superintelligent systems of the future might be smart enough (and unsafe enough) to work outside of a human operator’s awareness, or work together toward goals they set for themselves.

WHAT’S THE DIFFERENCE BETWEEN AGI AND AI?

AGI is an advanced form of AI. Many of today’s AI systems are “narrow AI” systems that do just one specific thing, like recognizing objects within videos, at a cognitive level below that of humans. AGI refers to systems that are generalists; that is, they can learn to do a wide variety of tasks at a cognitive level equal to or greater than a human’s. Such a system might be used to help a human plan a complex trip one day and to find novel combinations of cancer drug compounds the next.

SHOULD WE FEAR AGI?

Is it time to become concerned about AGI? Probably not. Current AI systems have not risen to the level of AGI. Not yet. But many people inside and outside the AI industry believe that the advent of large language models like GPT-4 has shortened the timeline for reaching that goal.

There’s currently much debate within AI circles about whether AGI systems are inherently dangerous. Some researchers believe they are, because their generalized knowledge and cognitive skills would permit them to invent their own plans and objectives. Others believe that getting to AGI will be a gradual, iterative process, with time to build in thoughtful safety guardrails at every step.

HOW FAR AWAY IS AGI?

There’s a lot of disagreement over how soon the artificial general intelligence moment will arrive. Microsoft researchers say they’ve already seen “sparks” of AGI in GPT-4 (Microsoft is a major investor in OpenAI). Anthropic CEO Dario Amodei says AGI will arrive in just two to three years. DeepMind cofounder Shane Legg predicts that there is a 50% chance AGI will arrive by 2028.

Google Brain cofounder and current Landing AI CEO Andrew Ng says the tech industry is still “very far” from achieving systems with that kind of general capability. And he’s concerned about the misuse of the term itself. “The term AGI is so misunderstood,” he says.

“I think that it’s very muddy definitions of AGI that make people jump on the ‘are we getting close to AGI?’ question,” Ng says. “And the answer is no, unless you change the definition of AGI, in which case you could totally be there in three years or maybe even 30 years ago.”

WHY AGI IS STILL SO DIVISIVE IN THE BROADER AI FIELD

People may be stretching the definition of AGI to suit their own ends, Ng believes. “The problem with redefining things is people are so emotional, positive and negative; they have hopes and fears attached to the term AGI. And when you have companies that say they reached AGI because they changed the definition, it just generates a lot of hype.”

OpenAI’s definition of the term, in fact, has been somewhat flexible. The company, whose stated goal is to create AGI, defines artificial general intelligence in its charter (published in 2018) as “highly autonomous systems that outperform humans at most economically valuable work.” But OpenAI CEO Sam Altman has more recently defined AGI as “AI systems that are generally smarter than humans,” a seemingly lower bar to hit.

Hype can fuel interest and investment in a technology, but it can also create a bubble of expectations that, when unmet, eventually bursts. That’s perhaps the biggest risk to the current AI boom. Some very good things might result from advances in generative AI, but it will take time.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
