"Risk of Extinction" From AI Is as Big a Threat as Nuclear War, Tech Experts Warn

More artificial intelligence doomers have joined the chat.

Artificial intelligence may pose a societal threat as dire as pandemics and nuclear war, according to a one-sentence statement released today by a nonprofit organization called the Center for AI Safety. The statement gathered more than 350 signatures from AI researchers and executives at companies including Google and Microsoft.

The signers also included researchers Geoffrey Hinton and Yoshua Bengio, who are often referred to as the “godfathers of AI” and received the Turing Award — essentially the Nobel Prize of computing — for their work on deep learning, a type of AI inspired by the human brain.

The brief yet piercing statement arrives just as fears mount that AI tools like ChatGPT could replace human jobs in industries ranging from journalism to food service, and could promote dangerous misinformation online. Deepfake videos apparently crafted to sway the 2024 presidential election have already surfaced online.

Experts fear that artificial general intelligence could outsmart humans and imperil society.

Many industry titans have shared their AI concerns in recent years. This past March, thousands of tech experts signed a letter demanding a pause on the “out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

But this new statement represents a “coming out” for executives who have stayed quiet, Center for AI Safety Executive Director Dan Hendrycks told The New York Times.

“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

The ultimate fear is that engineers will create artificial general intelligence, AI that is "generally smarter than humans," in the words of Sam Altman, CEO of OpenAI, the company behind the influential tools GPT-4 and ChatGPT. In that scenario, computers could theoretically take over as our overlords. But some experts argue such a feat is far off, or even impossible.

Given today’s computing power, the grim picture painted by the statement likely wouldn’t arrive for several decades. But some researchers think breakthroughs like quantum computing could take AI to new heights, which is why they argue that ethical guidelines for these technologies need to be in place before we reach that point.
