Spam accounts are a problem on all social media platforms, including Twitter, and they can make it hard to separate what’s fake from what’s real. But Twitter has a working system for tackling the issue, which has kept spam at less than 5 percent of its accounts each quarter for the last four years. But how exactly does Twitter fight spam accounts? Fortunately, Twitter’s CEO broke this down to help us understand the process better.

Twitter CEO Outlines How the Platform Fights Spam Accounts

Twitter CEO Parag Agrawal has shed light on how Twitter fights spam accounts. He broke down the process in a lengthy thread to more than 600,000 followers on his Twitter account on May 16, 2022.

In the thread, Agrawal reveals who reviews Twitter accounts for spam, what the process entails, and even gives examples of some actions they take against suspicious accounts.

If anything, the thread helps Twitter users understand what the app is doing to reduce the prevalence of spam accounts on the platform that boasts 436 million monthly active users, according to Statista.

What Are Spam Accounts?


Spam accounts are fake accounts. They can either be bots or humans, but Agrawal says that some spam campaigns use both. Although there are good bots on Twitter, unfortunately, there are bad ones, too.

And not all spam accounts have a weird handle combining a name with a bunch of random numbers, plus no profile picture or a random, generic photo as an avatar.

Some look real and are therefore able to blend in on the app, making it difficult to spot them. Agrawal says those spam accounts often have real people behind them.

Spam accounts can behave like normal accounts, which makes them dangerous, as they can spread hate, misinformation, and propaganda. That’s why it’s important for social media apps like Twitter to weed them out, and for online platforms at large to prevent bad bot attacks.

But Twitter is less concerned with whether spam accounts are automated than with the harm they could cause. That’s why Twitter refers to spam activity as platform manipulation.

How Twitter Fights Spam Accounts


Twitter uses its own people to review spam accounts rather than outsourcing the task to an independent company.

The process involves human reviewers randomly sampling thousands of Twitter’s monetizable daily active users (mDAUs) every quarter. The reviewers vet the accounts against Twitter’s rules on spam and platform manipulation, which it updates often.

They use an account’s public data, like account activity, as well as private data, like its IP address and phone number, to verify that it is real and legitimate. The use of private information is also why Twitter keeps this process in-house. (Try Twitter's online game if you want to understand its privacy policy better.)
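To see why a random sample of a few thousand accounts is enough to estimate spam prevalence across hundreds of millions of users, here is a minimal Python sketch. Everything in it is hypothetical: the population, the 5 percent spam rate, and the sample size are simulated stand-ins, not real Twitter data or Twitter's actual methodology.

```python
import math
import random

# Hypothetical illustration: estimating spam prevalence by randomly
# sampling and reviewing a few thousand accounts, as Twitter's human
# reviewers do each quarter. All numbers here are simulated assumptions.

random.seed(42)

# Simulate a large account population where 5% are spam (True = spam).
POPULATION_SIZE = 1_000_000
population = [random.random() < 0.05 for _ in range(POPULATION_SIZE)]

# Reviewers vet a random sample of a few thousand accounts.
SAMPLE_SIZE = 9_000
sample = random.sample(population, SAMPLE_SIZE)

# Estimate the spam share from the sample.
spam_count = sum(sample)
estimate = spam_count / SAMPLE_SIZE

# Approximate 95% margin of error for a proportion estimate.
margin = 1.96 * math.sqrt(estimate * (1 - estimate) / SAMPLE_SIZE)

print(f"Estimated spam share: {estimate:.2%} (±{margin:.2%})")
```

With a sample this size, the margin of error on the estimate is well under one percentage point, which is why reviewing thousands of accounts, rather than all of them, is enough to back a claim like "less than 5 percent."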

As a result, Twitter suspends 500,000 spam accounts every day, often before you get a chance to see them. It also locks millions of suspected spam accounts every week if it isn’t able to verify that they’re real.

You know how Twitter randomly asks you to verify your phone number or successfully complete a CAPTCHA? It locks accounts that aren’t able to do that, as these are methods that Twitter uses to prove that an account isn’t a bot.

But Twitter admits that it doesn’t catch all spam accounts on its platform. Agrawal says it’s the accounts that slip through the cracks that make up the less-than-5-percent spam figure.

Is Twitter Doing Enough to Fight Spam Accounts?

Twitter is doing its best to fight spam on its platform. However, we wonder if its spam review process is more effective against bots than against human-run accounts.

That’s because it’s easier for spam accounts run by human beings to bypass the system, as the people behind those accounts can confirm their phone number and successfully complete a CAPTCHA.

And if those accounts’ platform manipulation tactics are subtle, it could take time for Twitter to catch them. And the damage could already be done by then. Based on that, there is always room for improvement, especially when it comes to weeding out human-run spam accounts.