Models that are released under standard open source licenses, like GPT-Neo from the nonprofit EleutherAI, are more fully open, the researchers say. But it is difficult for such projects to compete on an equal footing with those of large corporations.
First, the data required to train advanced models is often kept secret. Second, the software frameworks required to build such models are often controlled by large corporations; the two most popular, TensorFlow and PyTorch, are maintained by Google and Meta, respectively. Third, the computing power required to train a large model is beyond the reach of any ordinary developer or company, typically costing tens or hundreds of millions of dollars for a single training run. And finally, the human labor required to refine and improve these models is a resource that is largely available only to big companies with deep pockets.
The way things are headed, one of the most important technologies in decades could end up enriching and empowering just a handful of companies, including OpenAI, Microsoft, Meta, and Google. If AI really is such a world-changing technology, then the greatest benefits might be felt if it were made more widely available and accessible.
“What our analysis points to is that openness not only doesn’t serve to ‘democratize’ AI,” Meredith Whittaker, president of Signal and one of the researchers behind the paper, tells me. “Indeed, we show that companies and institutions can and have leveraged ‘open’ technologies to entrench and expand centralized power.”
Whittaker adds that the myth of openness should be a factor in much-needed AI regulations. “We do badly need meaningful alternatives to technology defined and dominated by large, monopolistic corporations—especially as AI systems are integrated into many highly sensitive domains with particular public impact: in health care, finance, education, and the workplace,” she says. “Creating the conditions to make such alternatives possible is a project that can coexist with, and even be supported by, regulatory movements such as antitrust reforms.”
Beyond checking the power of big companies, making AI more open could be crucial to unlocking the technology’s full potential and avoiding its worst tendencies.
If we want to understand how capable the most advanced AI models are, and mitigate risks that could come with deployment and further progress, it might be better to make them open to the world’s scientists.
Just as security through obscurity never really guarantees that code is secure, hiding the inner workings of powerful AI models may not be the smartest way to proceed.