Twitter became the target of a coordinated trolling campaign shortly after Elon Musk took over the company last week. Yoel Roth, the company’s head of safety and integrity, said the organized effort was meant to make people think Twitter had weakened its policies. Roth also said the company was working to stop the campaign, which had led to a surge in hate speech and hateful conduct on the website. Now, the executive has tweeted an update on Twitter’s cleanup efforts, saying the company has made “measurable progress” since Saturday and has removed over 1,500 accounts involved in the trolling.
Roth explained that those 1,500 accounts didn’t correspond to 1,500 people: “Many were repeat bad actors,” he tweeted. He also said that Twitter’s primary success measure for content moderation is impressions, the number of times a piece of content is seen by users, and that the company has been able to reduce impressions on the hateful content that flooded the website to nearly zero.
Our primary success measure for content moderation is impressions: how many times harmful content is seen by our users. The changes we’ve made have almost entirely eliminated impressions on this content in search and elsewhere across Twitter. pic.twitter.com/AnJuIu2CT6
— Yoel Roth (@yoyoel) October 31, 2022
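Though Roth doesn’t detail the math, an impressions-based success measure is straightforward to illustrate. Below is a minimal, hypothetical sketch in Python; the `Post` structure, field names, and numbers are invented for illustration and don’t reflect Twitter’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    impressions: int    # times the post was shown to users (hypothetical field)
    violating: bool     # whether moderation flagged the post as hateful

def harmful_impression_share(posts: list[Post]) -> float:
    """Fraction of all impressions that landed on violating content.

    Under an impressions-based success measure, the goal is to drive
    this number toward zero: harmful posts may briefly exist, but
    almost nobody sees them.
    """
    total = sum(p.impressions for p in posts)
    harmful = sum(p.impressions for p in posts if p.violating)
    return harmful / total if total else 0.0

# Invented example: 1,000,000 total impressions, only 40 on hateful content.
posts = [
    Post("a", 999_960, violating=False),
    Post("b", 25, violating=True),
    Post("c", 15, violating=True),
]
print(f"{harmful_impression_share(posts):.6%}")  # 0.004000%
```

The appeal of such a metric, as Roth’s tweet suggests, is that it measures reach rather than raw post counts: removing a handful of widely seen posts moves the number more than removing thousands that nobody viewed.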
In addition to providing an update on the recent trolling campaign, Roth also talked about how Twitter is changing the way it enforces its policies regarding harmful tweets. He explained that the company treats first-person and bystander reports differently: “Because bystanders don’t always have full context, we have a higher bar for bystander reports in order to find a violation.” That’s why reports by uninvolved third parties about hateful conduct on the platform often get marked as non-violations even if the reported content does violate its policies.
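To make that distinction concrete, here is a purely illustrative sketch of a two-tier report triage in Python; the threshold values and the `evidence_score` input are assumptions for illustration, not Twitter’s actual review logic.

```python
from enum import Enum

class ReporterType(Enum):
    FIRST_PERSON = "first_person"   # the person targeted by the content
    BYSTANDER = "bystander"         # an uninvolved third party

# Hypothetical confidence thresholds: bystander reports need stronger
# evidence because the reporter may lack full context.
THRESHOLDS = {
    ReporterType.FIRST_PERSON: 0.5,
    ReporterType.BYSTANDER: 0.8,
}

def is_violation(evidence_score: float, reporter: ReporterType) -> bool:
    """Return True if a report clears the bar for its reporter type.

    evidence_score is an invented 0-1 confidence that the content
    violates policy; the same score can pass a first-person review
    yet fall short of the higher bystander bar.
    """
    return evidence_score >= THRESHOLDS[reporter]

# The same borderline report (score 0.6) counts as a violation when
# filed by the target, but is marked as a non-violation when filed
# by a bystander.
assert is_violation(0.6, ReporterType.FIRST_PERSON)
assert not is_violation(0.6, ReporterType.BYSTANDER)
```

Under a scheme like this, a borderline report that would succeed coming from the targeted person can legitimately be closed as a non-violation when it comes from a third party, which matches the behavior Roth describes.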
Roth ended his series of tweets with a promise to reveal more about how Twitter is changing its enforcement approach. However, a new Bloomberg report calls into question how the company’s staff can enforce its policies in the coming days: according to the news organization, Twitter has frozen most employees’ access to the internal tools used for content moderation.
Apparently, most members of Twitter’s Trust and Safety organization have lost the ability to penalize accounts that break rules regarding hateful conduct and misinformation. This has understandably raised concerns among employees about how Twitter will be able to keep the spread of misinformation in check, with the November 8th US midterm elections just days away.
Bloomberg said the restriction placed on employees’ access to moderation tools is part of a broader plan to freeze Twitter’s software code, which will prevent staff members from pushing changes to the website as it changes ownership. The news organization also said that Musk asked the Twitter team to review some of its policies, including the misinformation rule that penalizes posts containing falsehoods about politics and COVID-19. Another rule Musk reportedly asked the team to review is the section of Twitter’s hateful conduct policy that penalizes posts containing “targeted misgendering or deadnaming of transgender individuals.”