In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT


Stone, a signatory of the letter, says he does not agree with everything in it, and is not personally concerned about existential dangers. But he says advances are happening so quickly that the AI community and the general public barely had time to explore the benefits and possible misuses of ChatGPT before it was upgraded with GPT-4. “I think it is worth getting a little bit of experience with how they can be used and misused before racing to build the next one,” he says. “This shouldn’t be a race to build the next model and get it out before others.”

To date, the race has been rapid. OpenAI announced its first large language model, GPT-2, in February 2019. Its successor, GPT-3, was unveiled in June 2020. ChatGPT, which introduced enhancements on top of GPT-3, was released in November 2022.

Some letter signatories are part of the current AI boom—reflecting concerns within the industry itself that the technology is moving at a potentially dangerous pace. “Those making these have themselves said they could be an existential threat to society and even humanity, with no plan to totally mitigate these risks,” says Emad Mostaque, founder and CEO of Stability AI, a company building generative AI tools, and a signatory of the letter. “It is time to put commercial priorities to the side and take a pause for the good of everyone to assess rather than race to an uncertain future,” he adds.

Recent leaps in AI’s capabilities coincide with a sense that more guardrails may be needed around its use. The EU is currently considering legislation that would limit the use of AI depending on the risks involved. The White House has proposed an AI Bill of Rights that spells out the protections citizens should expect from algorithmic discrimination, data privacy breaches, and other AI-related problems. But these regulations began taking shape before the recent boom in generative AI even began.

“We need to hit the pause button and consider the risks of rapid deployment of generative AI models,” says Marc Rotenberg, founder and director of the Center for AI and Digital Policy, who was also a signatory of the letter. His organization plans to file a complaint this week with the US Federal Trade Commission calling for it to investigate OpenAI and ChatGPT and ban upgrades to the technology until “appropriate safeguards” are in place, according to its website. Rotenberg says the open letter is “timely and important” and that he hopes it receives “widespread support.”

When ChatGPT was released late last year, its abilities quickly sparked discussion around the implications for education and employment. The markedly improved abilities of GPT-4 have triggered more consternation. Musk, who provided early funding for OpenAI, has recently taken to Twitter to warn about the risk of large tech companies driving advances in AI.

An engineer at one large tech company who signed the letter, and who asked not to be named because he was not authorized to speak to media, says he has been using GPT-4 since its release. The engineer considers the technology a major shift but also a major worry. “I don’t know if six months is enough by any stretch but we need that time to think about what policies we need to have in place,” he says.

Others working in tech also expressed misgivings about the letter’s focus on long-term risks, since systems available today, including ChatGPT, already pose threats. “I find recent developments very exciting,” says Ken Holstein, an assistant professor of human-computer interaction at Carnegie Mellon University, who asked that his name be removed from the letter a day after signing it, as debate emerged among scientists about the best demands to make at this moment.

“I worry that we are very much in a ‘move fast and break things’ phase,” says Holstein, adding that the pace might be too quick for regulators to meaningfully keep up. “I like to think that we, in 2023, collectively, know better than this.”

Updated 03/29/2023, 10:40 pm EST: This story has been updated to reflect the final version of the open letter, and that Ken Holstein asked to be removed as a signatory. An earlier draft of the letter contained an error. A comment from OpenAI has also been added.