Runaway AI Is an Extinction Risk, Experts Warn


Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the creation of nuclear weapons. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Hendrycks said in remarks accompanying his organization’s statement.

The current tone of alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models consist of a specific kind of artificial neural network that is trained on enormous quantities of human-written text to predict the words that should follow a given string. When fed enough data, and with additional training in the form of feedback from humans on good and bad answers, these language models are able to generate text and answer questions with remarkable eloquence and apparent knowledge—even if their answers are often riddled with mistakes. 
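The training objective described above—predicting the words that follow a given string—can be illustrated with a toy sketch. The example below is a bare-bones bigram count model, a drastic simplification for illustration only: real large language models use deep neural networks with billions of parameters, not word counts, but the underlying task of guessing the next word from context is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction using bigram counts.
# (Hypothetical example, not how GPT-4 or other LLMs are built.)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scaled up to trillions of words and a neural network instead of a lookup table, this same predict-the-next-word objective is what produces the fluent text described above.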

These language models have proven increasingly coherent and capable as they have been fed more data and computing power. The most powerful model created so far, OpenAI’s GPT-4, is able to solve complex problems, including ones that appear to require some forms of abstraction and commonsense reasoning.

Language models had been getting more capable in recent years, but the release of ChatGPT last November drew public attention to the power—and potential problems—of the latest AI programs. ChatGPT and other advanced chatbots can hold coherent conversations and answer all manner of questions with the appearance of real understanding. But these programs also exhibit biases, fabricate facts, and can be goaded into behaving in strange and unpleasant ways.

Geoffrey Hinton, who is widely considered one of the most important and influential figures in AI, left his job at Google in April to speak freely about his newfound concern over the prospect of increasingly capable AI running amok.

National governments are becoming increasingly focused on the potential risks posed by AI and how the technology might be regulated. Although regulators are mostly worried about issues such as AI-generated disinformation and job displacement, there has been some discussion of existential concerns.

“We understand that people are anxious about how it can change the way we live. We are, too,” Sam Altman, OpenAI’s CEO, told the US Congress earlier this month. “If this technology goes wrong, it can go quite wrong.”

Not everyone is on board with the AI doomsday scenario, though. Yann LeCun, who won the Turing Award with Hinton and Bengio for the development of deep learning, has been critical of apocalyptic claims about advances in AI and has not signed the statement as of today.

And some AI researchers who have been studying more immediate issues, including bias and disinformation, believe that the sudden alarm over theoretical long-term risk distracts from the problems at hand.

Meredith Whittaker, president of the Signal Foundation and cofounder and chief advisor of the AI Now Institute, a nonprofit focused on AI and the concentration of power in the tech industry, says many of those who signed the statement likely believe that the risks are real, but that the alarm “doesn’t capture the real issues.”