Far too often, we see news that some egregious offense was committed on one social media site or another, whether illegal posts or the live streaming of violent content. It can be easy to wonder how, in this technologically advanced age, such things can slip through the cracks.

But, the fact is, it’s virtually impossible to moderate social media sites effectively. Why? There are a few reasons.

1. Social Media Is Huge

Some social media sites are still fledgling or serve niche audiences. However, the major social media sites have vast user bases; the largest count their users in the hundreds of millions or even billions. Naturally, any platform that large is hard to monitor.

Most major social networks manage this through a combination of community reporting, algorithms (imperfectly) trained to spot harmful content, and a relatively small number of human moderators. While a community member or an algorithm might spot and report harmful content, it is a human (or group of humans) that has the final say. Many sites also offer an appeals process.
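To make that pipeline a bit more concrete, here is a minimal sketch in Python, assuming a generic platform rather than any real site's system. The `ReviewQueue`, the toy classifier, and the threshold are all hypothetical stand-ins: user reports and high classifier scores both feed one queue, a human decision function gets the final say, and appeals re-enter the same queue.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

class Decision(Enum):
    KEEP = "keep"
    REMOVE = "remove"

@dataclass
class Post:
    post_id: int
    text: str
    flags: list[str] = field(default_factory=list)  # why the post was queued

class ReviewQueue:
    """Collects posts flagged by users or by an automated classifier."""

    def __init__(self, classifier: Callable[[str], float], threshold: float = 0.8):
        self.classifier = classifier      # hypothetical model: text -> harm score in [0, 1]
        self.threshold = threshold
        self.pending: list[Post] = []

    def user_report(self, post: Post, reason: str) -> None:
        # Community reporting: any report sends the post to human review.
        post.flags.append(f"user report: {reason}")
        self.pending.append(post)

    def auto_scan(self, post: Post) -> None:
        # Imperfect algorithmic screening: only high-scoring posts are queued.
        score = self.classifier(post.text)
        if score >= self.threshold:
            post.flags.append(f"classifier score {score:.2f}")
            self.pending.append(post)

    def human_review(self, decide: Callable[[Post], Decision]) -> dict[int, Decision]:
        # A human (or group of humans) has the final say on everything queued.
        decisions = {post.post_id: decide(post) for post in self.pending}
        self.pending.clear()
        return decisions

    def appeal(self, post: Post) -> None:
        # Appeals simply re-enter the same human review queue.
        post.flags.append("appeal")
        self.pending.append(post)

# Toy usage with a stand-in classifier (a real one would be a trained model).
queue = ReviewQueue(classifier=lambda text: 0.9 if "scam" in text.lower() else 0.1)
post = Post(post_id=1, text="Totally legit crypto scam, DM me")
queue.auto_scan(post)
print(queue.human_review(lambda p: Decision.REMOVE if p.flags else Decision.KEEP))
```

In practice the classifier would be a trained model and the review step an entire workflow, but the shape is the same: machines flag, humans decide.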

2. Social Media Is Diverse

Social media sites aren’t just huge in terms of their user base; that user base is also spread across many jurisdictions. Different states and different countries have different laws about what kind of content needs to be reported, and how.

So, social media companies active in different geographical areas often have to maintain different policies for different users, including different policies on misinformation and privacy. In fact, Twitter had to agree to specific terms to get unbanned in Nigeria.

Even more than that, social media is a global phenomenon, but it doesn't always happen in the "global" languages. This can leave social media companies scrambling to find content moderators who speak regional languages, so they can provide services globally without leaving those users vulnerable to bad actors.

A good example is Facebook owner Meta. The company was recently sued by a former content moderator in Nairobi who claims that content moderators in the region are misled into taking the role.

3. Moderators Are Walking a Tightrope

Most social media users want two things: they want to feel safe and they want to feel free. Striking a balance between these two desires is the tricky task of the content moderator.

Stray too far to one side, and users will feel policed. Stray too far to the other, and users will feel abandoned. One way that some social media sites get around this is through community moderators.

Community moderators help signal that moderation is done out of a passion for promoting constructive discourse and keeping harmful content out, rather than out of any desire to control or oppress users. This model is challenging at scale, though different platforms make it work in different ways.

The menu for interacting with Parler posts includes blocking, muting, and reporting users.

Parler came back online after adjusting its moderation policy to satisfy third-party policies. It has something of a community jury system that, more or less, only comes into play after a potentially problematic post has already been reported. Prioritizing user reports is another way that Parler lightens the load on its moderators.
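As a rough illustration of that report-first approach (a generic sketch, not Parler's actual implementation), the queue below only ever contains posts that users have reported, and it serves the most-reported posts to moderators first. The class name and batch size are invented for the example.

```python
import heapq
from collections import defaultdict

class ReportFirstQueue:
    """Review only reported posts, most-reported first (illustrative only)."""

    def __init__(self):
        self.report_counts: dict[str, int] = defaultdict(int)

    def report(self, post_id: str) -> None:
        # Nothing enters review until at least one user reports it.
        self.report_counts[post_id] += 1

    def next_for_review(self, batch_size: int = 3) -> list[str]:
        # Prioritize the most-reported posts to lighten the moderators' load.
        heap = [(-count, post_id) for post_id, count in self.report_counts.items()]
        heapq.heapify(heap)
        batch = []
        while heap and len(batch) < batch_size:
            _, post_id = heapq.heappop(heap)
            batch.append(post_id)
            del self.report_counts[post_id]
        return batch

queue = ReportFirstQueue()
for post_id in ["a", "b", "b", "c", "b", "a"]:
    queue.report(post_id)
print(queue.next_for_review(batch_size=2))  # ['b', 'a'], the most-reported first
```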

4. Moderators Have to Toe a Dynamic Line

Content moderators also operate in what can feel like a moral gray area. Even platforms with a more free-speech view of moderation have rules against potentially harmful or violent content. Some posts are obviously harmful or violent, while others might seem harmful or dangerous to some users (or moderators) but not others.

Many platforms also have rules against other activities like selling products or services. This can be an even trickier area to navigate, particularly given the size and scope of some sites.

Consider what Reddit community moderators do. Whenever users join a new community, they are immediately sent the rules for that community. Those rules are also posted on the side of each community page. This allows each community to have a set of rules tailored to it, layered on top of site-wide policies, rather than relying on blanket rules alone.

Community Rules for the Cryptocurrency subreddit

The subreddit model also helps Reddit use community moderators at scale. Moderators are tasked with moderating their own subreddit rather than trying to patrol the whole site.
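Here is a minimal sketch of that layered-rules idea, with invented community names and rule text rather than Reddit's real configuration: each community's rules sit on top of a small site-wide set, and a community moderator only works within their own community's rule set.

```python
SITE_WIDE_RULES = [
    "No illegal content",
    "No harassment",
]

# Hypothetical per-community rules, shown to users when they join
# and enforced only by that community's own moderators.
COMMUNITY_RULES = {
    "cryptocurrency": ["No price predictions", "Label posts as news or discussion"],
    "gardening": ["Photos must include the plant's name"],
}

def rules_for(community: str) -> list[str]:
    """Rules a new member is shown: site-wide rules plus the community's own."""
    return SITE_WIDE_RULES + COMMUNITY_RULES.get(community, [])

def moderation_scope(moderator_community: str) -> list[str]:
    # A community moderator patrols only their own community's rule set,
    # not the whole site.
    return rules_for(moderator_community)

print(rules_for("cryptocurrency"))
print(moderation_scope("gardening"))
```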

5. Social Media Moves Fast

Finally, social media moves really fast. By the time you hear about a problematic post, the original poster may well have already taken it down. If the post wasn’t reported while it was live, responding to it after the fact can be tricky.

In one case in 2015, a post on the social media site Yik Yak was reported to police using an altered version of the original post, which had already been removed. What might have been resolved internally by online content moderators became a prolonged legal drama that upset an entire community.

Many social media sites, including Facebook, have policies for preserving suspicious posts in case they need to be referenced later in a developing situation, including by law enforcement.
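One way to picture such a preservation policy, as a generic sketch rather than Facebook's actual system, is a soft delete: removed posts leave public view, but copies of flagged ones are retained for a fixed window (90 days here, an arbitrary choice) in case moderators or law enforcement need to reference them later.

```python
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=90)  # hypothetical retention period

class PostArchive:
    """Soft-delete store: flagged posts stay retrievable after removal."""

    def __init__(self):
        self.live: dict[str, str] = {}
        self.preserved: dict[str, tuple[str, datetime]] = {}

    def publish(self, post_id: str, text: str) -> None:
        self.live[post_id] = text

    def remove(self, post_id: str, flagged: bool) -> None:
        text = self.live.pop(post_id)
        if flagged:
            # Keep a copy out of public view in case it is needed later,
            # e.g. by moderators or law enforcement.
            self.preserved[post_id] = (text, datetime.now(timezone.utc))

    def purge_expired(self) -> None:
        # Drop preserved copies once the retention window has passed.
        now = datetime.now(timezone.utc)
        self.preserved = {
            pid: (text, removed_at)
            for pid, (text, removed_at) in self.preserved.items()
            if now - removed_at < RETENTION_WINDOW
        }

archive = PostArchive()
archive.publish("p1", "a suspicious post")
archive.remove("p1", flagged=True)
print(archive.preserved)  # still available for later reference
```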

But features like livestreaming mean that content goes up as it happens, and algorithms can miss harmful material until it's too late.

Toss a Coin to Your Moderator

If you see something online that worries you, go through the provided channels to report it. Try to avoid getting upset or suspicious; as we've seen, moderating social media is hard. The same goes if you or one of your posts is reported. It’s probably a well-intentioned move to keep social media users safe.