At a Twitter company meeting Monday about the decision to accept Elon Musk’s $44 billion acquisition offer, CEO Parag Agrawal avoided speculating on how its new owner might change the platform. For one employee watching, Musk’s public comments had already made the likely consequences clear: Twitter appeared poised to loosen content moderation, making it a less welcoming place for both users and advertisers.
“There’s an insane amount of abuse against people of color and gay people, and literal Nazis on the platform, we have to block out,” says the employee, who works on Twitter’s business development team. “If you take away restrictions and let Twitter fill with hate speech, people aren’t going to want to come there anymore.”
Concerns that Twitter is on the cusp of a new era marked by a significant uptick in abuse and harassment are shared by other company insiders, investors, and advisers to Twitter’s moderation policy teams. They say it would follow naturally from Musk’s stated aim of permitting a broader range of speech on the platform—not to mention the pugnacious example he sets with his own tweets.
“Trolls are empowered,” another Twitter employee told WIRED, predicting that while the company will survive, the service may soon serve a much narrower community as it becomes hostile to many of the people who currently use it. Research indicates that online abuse already falls disproportionately on marginalized groups, suggesting that any increase in toxicity on Twitter likely would too. “I’m concerned about the social media version of the splinternet,” the employee says. “The world will be worse off without Twitter as a dysfunctional but well-meaning place.”
On Tuesday, Musk used his Twitter account to single out one of the company’s top lawyers, Vijaya Gadde, by joining a thread criticizing her work policing the platform. She immediately received a stream of abuse from Musk fans on Twitter, prompting public complaints from some Twitter employees and the company’s former CEO Dick Costolo. Researchers who monitor right-wing extremists reported this week that individuals and groups previously banned from Twitter have already been attempting to return to the platform. Twitter and Musk did not respond to requests for comment.
In 2011, then-CEO Costolo described Twitter as “the free speech wing of the free speech party” after UK officials suggested tweets may have contributed to riots in London. Since then Twitter has, like other social networks, strengthened its moderation as recognition of the potential harms of online harassment and falsehoods has grown. During the 2020 US presidential election campaign, Twitter blocked users from sharing a New York Post article about the son of then-presidential candidate Joe Biden, saying it was based on emails that may have been obtained inappropriately. After the attack on the US Capitol last year, the company shut down President Trump’s account, citing “the risk of further incitement of violence.”
Musk’s ownership of Twitter may start a unique experiment in broadly loosening controls on users’ conduct. “The story of content moderation has generally been a one-way ratchet so far, with platforms very rarely going back the other way,” says Evelyn Douek, a researcher at the Knight First Amendment Institute at Columbia University. Musk tweeted Tuesday that he considers removing speech that isn’t illegal to be “contrary to the will of the people,” suggesting that people in the US could be allowed to tweet more or less anything.
Jon Bell, a designer who previously worked on Twitter’s anti-abuse team, says weakening Twitter’s moderation would be a mistake. Outsiders—including Bell before he joined the company—often don’t realize how much work it takes to prevent the site from being overrun by toxic content, he says. Although Twitter has been criticized for not doing enough to contain abuse and harassment, it has developed tools and processes that significantly reduce the volume, Bell says. “Everything Musk is talking about would undo that.”
In 2016, Twitter established a council of independent organizations to provide advice on online safety. Alex Holmes, deputy CEO of UK nonprofit the Diana Award, which is a member of the council, says he is now unsure how that work can continue. “Understandably, there are concerns about how this would be possible if freedom of speech is prioritized to a detrimental extent,” he says. Holmes says he has heard similar concerns from Twitter employees.