Why It’s So Hard to Count Twitter Bots


Is the Twitter account @ElonMusk a bot? One of the best algorithms for detecting fake accounts thinks it might be, which shows how challenging it is to quantify the proportion of fake accounts across the social network.

Counting Twitter bots has become a point of contention in Elon Musk’s ongoing $44 billion acquisition of Twitter. Last Friday, the billionaire tweeted that he was putting his purchase “temporarily on hold” until the company provided details to back up its claim (as stated in its latest SEC filing) that fewer than 5 percent of “monetizable daily active users” on Twitter are spam or fake. Musk also outlined a plan to count bots himself that involved sampling 100 @Twitter followers to see how many were bots and said the approach suggests over 20 percent of accounts are fake.

But accurately quantifying the percentage of bots on Twitter is far more difficult than that approach suggests, according to experts.

Finding them isn’t hard if you know where to look. Certain accounts, including Musk’s, seem to attract plenty of them. “If you simply mention Elon Musk on Twitter, you immediately get engaged with a ton of crypto bots,” says Chris Bail, a professor of sociology at Duke University who studies social media.

Twitter is not the only social network to struggle with fake accounts. Facebook removes billions of bogus accounts every year. But it is hard to know for certain that an account on Twitter is a bot, since legitimate users may have few followers, rarely tweet, or have strange usernames. It is even more difficult to gauge the number of bots that operate across the platform as a whole.

To test Musk’s proposed methodology, IV.ai, an AI company, looked at 100 accounts that follow Musk’s car manufacturing company Tesla on Twitter.

An algorithmic examination of those accounts on Tuesday found that more than 20 of the 100 had a high likelihood of being bots; a manual review of the same sample concluded that more than half might be. An analysis of the topics discussed by those accounts did not find evidence that any of the suspected accounts were promotional. Many of them also disappeared shortly afterward, suggesting that Twitter catches bots fairly quickly. Vince Lynch, CEO of IV.ai, says identifying dubious accounts is inherently subjective and involves a degree of uncertainty.

“It’s a very hard problem,” says Filippo Menczer, a professor at Indiana University who led the development of the Botometer algorithm, which gave Musk’s account a relatively high bot score. Menczer says that looking at 100 accounts will not be representative of Twitter’s daily active users, and different samples will produce wildly different results. “I want to hope that that was a joke,” Menczer says of the methodology.
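Menczer's objection about sample size can be made concrete with a little statistics. The sketch below is illustrative only: the 20 percent estimate and the 5 percent figure come from the article, but the assumption of random sampling is mine, and it is generous, since a sample of one account's followers is not a random draw from all daily active users.

```python
import math
import random

def moe_95(p_hat: float, n: int) -> float:
    """Normal-approximation 95% margin of error for a sample proportion."""
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

# A 100-account sample that finds ~20 bots carries a margin of error
# of roughly +/- 8 percentage points, even under ideal random sampling.
print(f"+/-{moe_95(0.20, 100):.1%}")

# Simulate drawing many independent samples of 100 accounts from a
# population whose true bot rate is 5 percent: individual samples
# still scatter widely around that figure.
random.seed(1)
estimates = [sum(random.random() < 0.05 for _ in range(100))
             for _ in range(1000)]
print(f"sample estimates ranged from {min(estimates)}% to {max(estimates)}%")
```

Even this understates the problem: a confidence interval only accounts for random sampling error, not the selection bias of choosing one account's followers, and not the classification error of deciding which accounts are bots in the first place.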

Automated accounts have become more sophisticated in recent years. Many fake accounts are operated partly by humans and partly by machines, or simply amplify messages written by real people (what Menczer calls “cyborg accounts”). Other accounts use tricks designed to evade human and algorithmic detection, such as rapidly liking and unliking tweets or posting and deleting tweets. And of course there are plenty of automated or semi-automated accounts, such as those run by many companies, that aren’t actually harmful.