
What is the real point of all these letters warning about AI?

A new statement warning about the risks of AI was signed by the likes of OpenAI’s Sam Altman and Turing Award-winner Geoffrey Hinton. But critics question whether such public figures are really operating in good faith.


Hundreds of AI researchers, computer scientists, and executives signed a short statement on Tuesday arguing that artificial intelligence could be a threat to the very existence of the world as we know it.

The statement, which was created by the Center for AI Safety and clocks in at all of 22 words, reads: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

The list of signatories includes Sam Altman, CEO of Microsoft-backed OpenAI; Google DeepMind CEO Demis Hassabis; Anthropic CEO Dario Amodei; Turing Award-winners Geoffrey Hinton (who recently left Google to speak more freely about AI’s risks) and Yoshua Bengio; and the philosopher and researcher Eliezer Yudkowsky, who has been one of AI’s most outspoken critics. (The computer scientist Yann LeCun did not sign the warning.)

There are, in simple terms, two tiers of risk when it comes to AI: short-term harms, like the possibility that AI might unfairly disqualify job applicants because of bias in its training data; and longer-term (often more existential) dangers—namely, that super-intelligent artificial general intelligence (AGI) systems might one day decide that humans are getting in the way of progress and must thus be eliminated.

“The statement I signed is one that sounds true whether you think the problem exists but is sorta under control but not certainly so, or whether you think AGI is going to kill everyone by default and it’ll require a vast desperate effort to have anything else happen instead,” Yudkowsky writes in a message to Fast Company. “That’s a tent large enough to include me.”

Whether letters like the Center for AI Safety’s will cause any material change in the pace at which researchers at companies like OpenAI and Google are developing AI systems remains to be seen.

Some are even doubtful of the signatories’ intentions—specifically, whether they’re acting in good faith.

“We’ll know it isn’t just theater when they quit their jobs, unplug the data centers, and actually act to stop ‘AI’ development,” says Meredith Whittaker, a prominent AI researcher who was pushed out of Google in 2019 and is now the president of the Signal Foundation.

“Let’s be real: These letters calling on ‘someone to act’ are signed by some of the few people in the world who have the agency and power to actually act to stop or redirect these efforts,” Whittaker says. “Imagine a U.S. president putting out a statement to the effect of ‘would someone please issue an executive order?’”

So why, then, would leaders like Altman and Hassabis sign such a statement?

Whittaker believes that such statements focus the debate on long-term threats that future AI systems might pose while distracting from the discussion of the very real harms that current AI systems can cause—worker displacement, copyright infringement, and privacy violations, to name just a few. Whittaker points out that a significant body of research already shows the harms of AI in the near term—and the need for regulation.

Focusing the debate on the long-term “existential” harms of advanced AI systems might also give companies like OpenAI and Google the cover they need to continue racing forward in their development of ever-smarter AI systems, according to University of Washington law professor Ryan Calo.

“If AI threatens humanity, it’s by accelerating existing trends of wealth and income inequality, lack of integrity in information, and exploiting natural resources,” he wrote on Twitter.

In his recent Senate testimony on how the government might play a role in mitigating the risks of AI, OpenAI’s Altman went so far as to suggest that the government should issue licenses to tech companies to develop AI, but only if they can prove that they have the resources to do so safely.

On the other hand, Altman threatened to pull OpenAI’s services out of the European Union if AI was “overregulated” there. The EU is moving forward with its AI Act, which could become the first comprehensive regulatory framework for AI systems.

Perhaps, as Whittaker suggested, Altman is in favor of regulation that addresses the more theoretical, long-term risks of AI (i.e., regulation that Congress isn’t equipped to deliver anytime soon), but far less comfortable with regulation that addresses near-term risks and could be enacted on a far shorter timeline.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
