How sure are you that the person you're passionately "debating" with online is a real, breathing person? How do you know whether they're just another impassioned supporter of whatever topic and not someone with government (or other) backing?

Spotting Russian bots or paid-for shills is no easy task. It is, however, becoming increasingly important as accusations of nation-states meddling in each other's affairs continue to swirl. Can you spot them? Here's what you need to know.

Bots vs. Shills

Let's start by differentiating bots and shills.

Bot: A bot is a fake social media account under the control of an organization or government seeking to influence the online community. For instance, a Twitter bot might be set to retweet certain hashtags and phrases in such volumes that it amplifies a specific topic. Another example is a Reddit bot downvoting views that disagree with its controller's opinion (while upvoting those that agree). On some platforms bots need sheer volume to succeed, while on others only a handful can begin to shape the direction of a conversation. And anyone can create a social media bot using Python.
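
To illustrate how low the barrier to entry is, here is a minimal sketch of a hashtag-retweeting bot built with the popular tweepy library (the 3.x API current at the time of writing). The credentials and hashtag are placeholders, not a working influence operation; a real account would need keys from a registered Twitter developer app:

```python
# A minimal retweet-bot sketch using tweepy 3.x. All credentials below
# are placeholders for keys issued by a Twitter developer app.
import tweepy

CONSUMER_KEY = "your-consumer-key"        # placeholder
CONSUMER_SECRET = "your-consumer-secret"  # placeholder
ACCESS_TOKEN = "your-access-token"        # placeholder
ACCESS_SECRET = "your-access-secret"      # placeholder

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)

# Retweet the most recent tweets matching a target hashtag.
for tweet in tweepy.Cursor(api.search, q="#example", lang="en").items(10):
    try:
        api.retweet(tweet.id)
    except tweepy.TweepError:
        pass  # already retweeted, deleted, or rate limited
```

Point a few hundred such accounts at the same hashtag and you have the volume-driven amplification described above.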

Shill: A shill is different. Shills are real people paid to actively shape online discussion and opinion. Shills promote companies, governments, public figures, and much more for personal profit, essentially engaging in paid propaganda.

Depending on the organization or government, shills can work in conjunction with large bot networks to create intense vocal online movements. And while the combined efforts of shills and bots shape online opinion, these efforts are increasingly affecting more than just social media users.

The practice is also known as astroturfing, whereby organizations and governments steer the conversation while passing it off as the grassroots opinion of "regular" members of the public.

Russian Bots and Shills

Russian bots and shills dominated talk in the run-up to the 2016 US presidential election. Commentators and critics dedicated a huge amount of airtime and column inches to discussing the role of Russian-backed bots and shills in steering the discussion around certain topics.

In fact, Robert Mueller, the special counsel investigating interference in the presidential election, recently indicted 13 Russian nationals tied to the suspected Russian-backed propaganda machine, the Internet Research Agency (IRA).

The allegations of influence are far-reaching. They range from simply creating American-sounding identities online, to stealing the identities of US citizens, to baiting minority activists and so-called "social justice warriors," to creating Instagram accounts such as "Woke Blacks" to influence minority voting efforts. And there are numerous other examples, too.

Social media networks are one of the primary tools of influence. The platforms know there is a problem, too. In January 2018, Twitter said it was emailing 677,775 people in the US who had interacted with IRA content. At the same time, Twitter began purging bot accounts, prompting the #TwitterLockOut hashtag to trend among predominantly conservative-leaning users.

And for all the cries of foul play and unfair targeting, there is evidence that "conservatives retweeted Russian trolls about 31 times more often than liberals and produced 36x more Tweets."

Furthermore, Twitter maintains its bot purge is "apolitical" and that it enforces sitewide rules "without political bias."

That's not to say bots, shills, and astroturfing are the sole preserve of conservative figures.

As far back as 2007, campaign staffers for Hillary Clinton were anonymously boosting pro-Hillary sites, while during the 2016 presidential debates, the Clinton campaign was the subject of hundreds of thousands of automated bot tweets (though significantly fewer than those supporting Donald Trump).

Not All Bots

Twitter and other social media platforms aren't as plagued by bots as some publications would have you believe. We can break down how Twitter bots interact with hashtags to understand how their backers seek to influence conversation.

The Computational Propaganda Project (CPP), sponsored by Oxford University, closely examines these interactions. The table below illustrates [PDF] the difference in automation between interactions with pro-Trump and pro-Clinton hashtags, as well as the overall percentage of non-automated tweets, between November 1 and November 9, 2016:

[Table: levels of Twitter automation on pro-Trump and pro-Clinton hashtags, November 1-9, 2016]

The CPP defines high automation as "accounts that post at least 50 times a day" using at least one of the election-specific hashtags. The study considers anything below that threshold low automation; in other words, most likely a real person. The table shows a much higher percentage of low-level automation, indicating that far more regular users are interacting.
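
As a toy illustration of that frequency rule, here is a short Python sketch; the account handles and post counts are invented, and the nine-day divisor matches the study's November 1-9 window:

```python
# The CPP's simple frequency rule: 50+ hashtag posts per day counts as
# "high automation". All data below is made up for illustration.
HIGH_AUTOMATION_THRESHOLD = 50  # hashtag posts per day

def classify(posts_per_day: float) -> str:
    """Label an account using the study's threshold."""
    if posts_per_day >= HIGH_AUTOMATION_THRESHOLD:
        return "high automation"
    return "low automation"

# Hypothetical accounts: (handle, hashtag posts over the 9-day window)
accounts = [("@retweet_machine", 1200), ("@ordinary_voter", 35)]
for handle, total_posts in accounts:
    print(handle, classify(total_posts / 9))
```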

The study does note that some human users are inevitably swept into the high automation bracket. It also notes that accounts demonstrating high automation very rarely use terms from the Mixed Hashtag Cluster bracket (bar Trump-Clinton combinations, due to sheer retweet volume).

We will never truly know the full picture of how many bots are working on any given social media platform. Recent research estimates [PDF] that automated bots make up as much as 15 percent of all Twitter users; with roughly 320 million monthly active users at the time, that puts the total well over 40 million individual bot accounts.

Da'wah Center Protest

A prime example of direct Russian influence is the 2016 Houston Da'wah Center protest.

The Facebook group "Heart of Texas" posted an advert looking for sympathizers to attend a protest "to stop the Islamification of Texas." The protest was set for midday on May 21, meeting at the Da'wah Center. Meanwhile, another group, the so-called "United Muslims of America," was organizing a counter-protest for the same time and place.

The two groups met at the center and, predictably, "interactions between the two groups eventually escalated into confrontation and verbal attacks."

At the time, neither set of protesters realized their respective group wasn't real. That is to say, both groups were constructs of a Russian-backed "troll farm" that exists solely to inflame political, racial, and religious tension in the US.

How to Spot a Bot on Social Media

Recognizing bots and shills on social media isn't always easy. Why? Because if it were, more people would realize what was going on.

Don't get me wrong; we all interact with bots and shills. It is the very nature of social media in 2018. Operatives receive thousands of dollars a month to subtly (and sometimes more brazenly) influence conversation.

There are, however, some bot-spotting tips to bear in mind (a rough scoring sketch follows the list):

  1. The account only reposts/retweets, never making original posts of its own, or sends the same response to many different people.
  2. Accounts that only repost/retweet comments made by multiple other similar accounts (some of which are also likely bots).
  3. Some accounts post in response to "trigger" topics faster than humanly possible, suggesting automatic replies.
  4. Human cycles. Real people tend to post in bursts, covering different topics, and have recognizable downtime matching day/night cycles.
  5. Default profile pictures. For instance, a Facebook profile with the generic silhouette of a man or woman, or a Twitter profile with the default egg picture.
  6. Profiles that are prolific around major events---elections, scandals, terrorist attacks---but remain dormant at other times. The upcoming 2018 mid-term elections will see swathes of bot accounts reactivating.
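
Several of these tips reduce to simple checks over an account's posting history. The following Python sketch scores an account against tips 1, 3, and 4; the Post structure and thresholds are invented for illustration, and real detection tools use far richer signals than this:

```python
# A crude bot-suspicion heuristic over one account's posting history.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    timestamp: datetime
    is_repost: bool  # a retweet/repost rather than original content

def bot_suspicion_score(posts: list) -> int:
    """Crude 0-3 score: higher means more bot-like behavior."""
    score = 0
    # Tip 1: the account only ever reposts, never posting original content.
    if posts and all(p.is_repost for p in posts):
        score += 1
    # Tip 3: consecutive posts arrive faster than a human could type.
    times = sorted(p.timestamp for p in posts)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    if gaps and min(gaps) < 2:
        score += 1
    # Tip 4: no day/night cycle, i.e. activity in nearly every hour of the day.
    if len({p.timestamp.hour for p in posts}) >= 20:
        score += 1
    return score
```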

Other things to look out for are automatic, systematic downvotes on sites like Reddit. Bots pick up on the title of a submission and immediately begin downvoting comments that disagree with their programming. (Downvoting collapses comments, as well as their responses, hiding them from other users; it is an easy way of burying dissenting views.)

How to Spot a Shill on Social Media

Spotting paid shills is more difficult because the entire job is to maintain the appearance of a regular social media user. Posts promoting a certain topic or shaping the online conversation tend to arrive among regular, mundane discussion points so as not to arouse suspicion.

Some common tactics include:

  • Changing the narrative of a hot topic toward something that promotes the agenda of whoever paid for the shill
  • Consistently attacking something that wasn't part of the initial conversation (sometimes called "whataboutism," where a shill deflects with arguments such as "but what about when X did Y?")

Another tell is the human cycle. Regular people have to sleep, eat, drink, and so on. If a single account posts on a single agenda continuously around the clock, something is likely afoot; a quick way to check is sketched below.
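
As a small, hand-rolled example of that check, assuming you have collected one account's post timestamps, you can measure its longest silence. A human account normally shows a multi-hour sleep gap every day; its absence over several days is suspicious:

```python
# Find the longest gap between consecutive posts from one account.
from datetime import datetime, timedelta

def longest_silence(timestamps: list) -> timedelta:
    """Return the longest pause between consecutive posts."""
    times = sorted(timestamps)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return max(gaps, default=timedelta(0))

# Invented data: an account posting every hour around the clock has a
# longest silence of one hour, so no plausible sleep cycle.
posts = [datetime(2018, 3, 1, h, 15) for h in range(24)]
print(longest_silence(posts))  # 1:00:00
```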

But "really good" shills work hard. Instead of merely attacking and contradicting opinion and attempting to shape the discussion, they'll slowly befriend and infiltrate a group before setting to work.

Can You Stop the Russian Bots?

Unfortunately, other than reporting suspicious accounts, there is little direct action to take against shill or bot accounts. As they say, don't feed the trolls.

The 2018 mid-term elections are firmly on the horizon now (check the political bias of any site in the run-up). While the impact of shill and bot accounts is perhaps larger than ever, you now know more about how to spot certain types of behavior.

Twitter isn't all bots and trolls, though. Social media can make a positive impact on the world.

Image Credit: raptorcaptor/Depositphotos