Musk’s Plan to Reveal the Twitter Algorithm Won’t Solve Anything

When Elon Musk victory-tweeted his $44 billion acquisition of Twitter on Monday evening, he committed to improving the social network by, among other things, “making the algorithms open source to increase trust.”

In a TED talk earlier this month, the entrepreneur suggested that the algorithm that determines how tweets are promoted and demoted could be uploaded to the software hosting platform GitHub, making it available to people outside of the company. “People can look through it and say, ‘Oh, I see a problem here, I don’t agree with this,’” Musk said. “They can highlight issues and suggest changes, in the same way that you update Linux or Signal.”

In reality, cracking Twitter open to see how it truly works would involve a lot more than just uploading some code to GitHub. And proving the existence (or absence) of biases that may be subtle and depend on a multitude of ever-changing factors could be far more difficult than Musk suggests.

On the face of it, greater transparency makes a lot of sense. Social platforms like Twitter, Facebook, and TikTok wield enormous influence and power but are largely opaque to their users and regulators. And just as the source code for a computer program provides a way to inspect it for bugs or backdoors, revealing the code that makes Twitter tick might, in theory, show that the platform promotes certain types of content over others.

“I’m very excited about seeing what happens,” says Derek Ruths, an associate professor at McGill University in Canada who studies large social platforms. Ruths says he has refrained from teaching his students about social recommendation systems thus far because they are so opaque.

While Ruths admits to misgivings about what less moderation (another of Musk’s promised “improvements”) might mean for the platform, he believes more transparency will be useful and hopes that other social networks will feel pressured to reveal more about how they operate. “It has the potential to be a really interesting experiment that is long overdue,” Ruths says.

The idea has stirred up some debate around political bias baked into the platform. Some on the right of the political divide are rubbing their hands at the prospect of finally proving that conservative perspectives are routinely “shadow banned,” or prevented from receiving the prominence they believe they deserve. But they may be disappointed by the complexity of untangling how the platform really operates.

The first problem is that, contrary to what Musk has implied in the past, there is no single algorithm that guides the way Twitter decides to elevate or bury content. Rather, according to sources within Twitter’s technical team who spoke on condition of anonymity, decisions are the result of many different algorithms that perform a complex dance atop mountains of data and a multitude of human actions. Results are also tailored to each user based on their personal information and behavior. “There is no ‘master algorithm’ for Twitter,” one company source says.
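To see why no single algorithm tells the whole story, consider a minimal sketch of how a feed might blend several independent signals into one personalized ranking. Everything here (the signal names, the weights, the Tweet fields) is invented for illustration and does not reflect Twitter’s actual internals:

```python
# Hypothetical sketch: a feed ranking built from several separate scoring
# signals. The signals and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Tweet:
    id: int
    author_followed: bool  # does this user follow the author?
    likes: int
    age_hours: float

def recency_score(tweet: Tweet) -> float:
    # Newer tweets score higher; the signal decays as the tweet ages.
    return 1.0 / (1.0 + tweet.age_hours)

def engagement_score(tweet: Tweet) -> float:
    # Crude popularity signal based on like counts, capped at 1.0.
    return min(tweet.likes / 1000.0, 1.0)

def network_score(tweet: Tweet) -> float:
    # Content from accounts the user follows gets a boost.
    return 1.0 if tweet.author_followed else 0.2

def rank_feed(tweets: list[Tweet]) -> list[Tweet]:
    # The final ordering is a weighted blend of separate signals, so no
    # single function fully explains why one tweet outranks another.
    def combined(t: Tweet) -> float:
        return (0.5 * recency_score(t)
                + 0.3 * engagement_score(t)
                + 0.2 * network_score(t))
    return sorted(tweets, key=combined, reverse=True)

feed = rank_feed([
    Tweet(1, author_followed=True, likes=40, age_hours=2.0),
    Tweet(2, author_followed=False, likes=900, age_hours=0.5),
])
print([t.id for t in feed])  # -> [2, 1]
```

Even in this toy version, reading any one scoring function in isolation tells you little about why a particular tweet rose or fell, and a real system layers many more such signals, tuned per user.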

Another issue is that Twitter uses machine learning to guide many decisions. For instance, Twitter trains numerous machine learning models to help decide which posts to prioritize on users’ feeds based on a dizzying number of factors. These models cannot be inspected like regular code; they need to be tested in an environment that replicates the real world as closely as possible. The models also change rapidly in the live system, in response to a constant flow of new data, user behavior, and input from moderators, so any published snapshot would quickly become an unreliable guide to how the platform actually behaves.
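A toy example illustrates the point. The sketch below trains a tiny scoring model on a simulated stream of interactions; the features, labels, and update rule are all hypothetical and have nothing to do with Twitter’s real models. Its behavior lives entirely in a pair of learned numbers that keep shifting as data arrives, which is why publishing the weights at one moment reveals little about the system at the next:

```python
# Minimal sketch of why a learned ranking model resists code review: its
# behavior lives in numeric weights that drift as new data streams in.
# The features and update rule here are illustrative assumptions.

import random

random.seed(0)

# Two invented features per tweet: [engagement_rate, report_rate].
weights = [0.0, 0.0]

def predict(features: list[float]) -> float:
    # The "relevance score" is just a dot product with learned weights.
    return sum(w * x for w, x in zip(weights, features))

def update(features: list[float], label: float, lr: float = 0.1) -> None:
    # One online gradient step; every logged interaction nudges the weights.
    error = predict(features) - label
    for i, x in enumerate(features):
        weights[i] -= lr * error * x

# Simulate a stream of user interactions: the model is retrained
# continuously, so a snapshot of the weights captured today describes
# a system that no longer exists tomorrow.
for _ in range(1000):
    engagement, reports = random.random(), random.random()
    clicked = 1.0 if engagement > 0.6 and reports < 0.3 else 0.0
    update([engagement, reports], clicked)

# Reading these two floats tells you almost nothing about *why* any
# given tweet was promoted or demoted.
print(weights)
```

There is no line of source code here that encodes a bias one could point to; whatever the model has learned is buried in the numbers, and only behavioral testing against realistic data can surface it.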