Since the 2016 US presidential election, mainstream media has looked closely at the role of social media in politics. Fake news stories were rampant on Facebook around election time. Members of minority groups faced serious harassment on Twitter throughout the year, and the problem is still getting worse. And while we generally consider the internet to be a bastion of free speech, the people running those social network services decided that it was time to take action.

In the wake of the election, both platforms have taken some controversial steps in an effort to better monitor and curate content. What have they done? Will they stick by their new standards? And will it help make the internet a better place?

I decided to ask some people who have done a lot of thinking about these sorts of issues what they make of it all.

What Are Facebook and Twitter Doing?

During the election cycle, users circulated a lot of fake news stories on Facebook. Post-election, commentators have been discussing whether those stories may have influenced the results. Of course, there are people vehemently arguing on both sides. But the result of this battle is that Facebook has committed to identifying fake news posts and taking action on them.

We haven't yet heard the details of what Facebook is planning to do. It's probable that a new algorithm will take a number of factors into account, including user reports and information on fact-checking sites. We don't yet know how it will deal with degrees of truth, bias, sensationalism, and other difficult questions. Some conservative news sites are nervous that Facebook will censor their views, at least in part because Facebook has a left-leaning history. It's a complicated undertaking, and minimizing bias is extremely difficult.
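To make the challenge concrete, here's a purely hypothetical sketch (in Python) of how a signal-combining approach might look, blending the rate of user reports with fact-checker verdicts into a single score. Every name, weight, and threshold below is my own invention for illustration; Facebook hasn't published anything about how its system will actually work.

# Hypothetical sketch only -- none of these weights, thresholds, or field names
# come from Facebook; they just illustrate combining signals into one score.
from dataclasses import dataclass

@dataclass
class StorySignals:
    user_reports: int        # times users flagged the story as false
    total_shares: int        # how widely the story has spread
    fact_check_verdict: str  # e.g. "false", "mixed", "unknown", or "true"

def fake_news_score(s: StorySignals) -> float:
    """Return a 0-to-1 score; higher means more likely to be fake."""
    report_rate = s.user_reports / max(s.total_shares, 1)
    verdict_weight = {"false": 1.0, "mixed": 0.5, "unknown": 0.2, "true": 0.0}
    # Blend the two signals; the 0.6/0.4 split is arbitrary.
    return (0.6 * min(report_rate * 10, 1.0)
            + 0.4 * verdict_weight.get(s.fact_check_verdict, 0.2))

story = StorySignals(user_reports=900, total_shares=9000, fact_check_verdict="false")
if fake_news_score(story) > 0.7:
    print("Flag for review or demote in the feed")

Even in a toy model like this, you can see where the hard questions live: who sets the weights, whose fact-checks count, and what happens to stories that are merely biased or sensationalized rather than outright false.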

Twitter, on the other hand, has recently been banning accounts that it says violate its terms of use. The platform's user agreement prohibits harassment and hateful conduct, but Twitter doesn't have a great reputation when it comes to actually preventing harassment. It has now gone on a banning spree, removing accounts belonging to prominent members of the "alt-right" movement, including Richard Spencer and the organizations he runs, Pax Dickinson, Ricky Vaughn, and John Rivers.

After announcing that it would provide better anti-abuse tools and take complaints more seriously, Twitter seems to be stepping up its game. But the platform has focused its ire on accounts tied to a specific political movement, which makes the bans look to many people like a political move rather than an anti-abuse one. It's tough to disentangle the motivations here.

Is This New?

When I asked Aaron Smith, Associate Director of Research at the Pew Research Center, about these developments, he emphasized that these problems have a long history: "[T]he need to police 'negative' content of various kinds (whether that's fake news, abuse, trolling, spam, or what have you) is something that online platforms and their owners/moderators have struggled with since the dawn of the internet."


While the media has been giving this issue a lot of attention lately, he told me, it's been around in various forms forever. He also pointed me to a great article from 2011 called "If Your Website's Full of Assholes, It's Your Fault" that sums up a lot of the issues prevalent in the discussion.

Just because this has been going on forever doesn't mean we haven't gotten better at dealing with it. Twitter, for example, was very hands-off for a long time. But it recently banned over 100,000 accounts related to ISIS. And the number of accounts banned for harassment had already been rising before this recent spree. Some people, though, argue that the platform has used its powers to disproportionately target right-leaning tweeters.

Facebook's previous crackdowns have targeted copyright violations, fake-name accounts, marijuana dispensary pages, and "overly promotional" posts. Hate-speech groups have previously faced consequences, but from law enforcement, not from Facebook. Zuckerberg's platform seems content to maintain a more open atmosphere.

One of Twitter's most notable bans, for example, was of alt-right commentator Milo Yiannopoulos. Twitter determined that his views on feminism, Islam, and other issues incited "the targeted abuse or harassment of others." He still has a page on Facebook, as does Richard Spencer's National Policy Institute. Despite many people viewing these two personalities as hateful, Facebook is happy to continue hosting them.

Why Now?

It seems likely that both platforms chose to take these actions in response to the recent election. Why they didn't feel obligated to act before, however, is less clear. To be fair, Facebook has taken some action in the past. For example, after damning revelations emerged in 2016 that human editors had artificially suppressed right-wing viewpoints in its trending news, Facebook switched to an algorithmically curated trending section. But that change allowed fake news to propagate even further.

Buzzfeed's Craig Silverman found that engagement with fake news stories actually surpassed that of real stories in the run-up to the US presidential election. Could those stories have influenced the results? It certainly seems possible. In the wake of an election that has much of the mainstream media contemplating its behavior and future, Facebook appears to be thinking carefully about its responsibilities. (And, if you're cynical, probably its political motivations.)

Twitter has always had a tumultuous relationship with its own championing of free speech. But I think it finally got fed up with people saying that it's a toxic environment. The platform sees a huge amount of harassment, and people have been talking about it for a long time.

Many users have left Twitter as a result of the harassment they've received. And reports of harassment have only increased since the election. That, combined with reported revenue problems, likely has Twitter worried. Taking action to clean up the network could help increase its user base. And, consequently, revenue.

Of course, there's always the possibility that they just decided one day that they want to be upstanding citizens of the internet.

Not Just Fake News

As soon as I heard about these crackdowns, I started wondering. Will they work? Will these new policies make the internet a better place? Could they help people get better information online and reduce harassment? I thought I'd ask some people who have given the topic a lot of thought.

First, I got in touch with Rick Webb, the author of the fantastic article "I'm Sorry Mr. Zuckerberg, But You Are Wrong". In this piece, Webb disagrees with Zuckerberg's assertion that fake news spread on Facebook didn't influence the outcome of the election. I asked Webb if he thought the crackdown on fake news would have an effect, and he did:

Those two platforms especially are very well suited to the propagation of fake news in a way other platforms aren't, and it would be much harder for fake news to spread if they enthusiastically cracked down. As to the larger effect that had on our society, it would be hard to feel, but over time I think it would have a bit of an effect, yes. -- Rick Webb

He was quick to point out, however, that it's not just fake news that's causing problems. It's also that Facebook "exacerbates the trends in news to writing hyped articles to attract traffic." Publishers are rewarded for getting clicks and shares -- two things that clickbait is great at generating. "People shared fake news before Facebook and they'll continue to after," he says, "but Facebook spreads it far beyond its past quarters: to users who aren't already in a conspiracy mindset."

Facebook also legitimizes fake, skewed, and over-hyped news with its brand, Webb says. Associating the Facebook brand with a news story gives it added credibility. This applies to sensationalized, hyper-partisan, and false news as much as it does to quality journalism. Facebook would do well to keep these issues in mind in its efforts to stamp out misinformation.

A Narrow Focus

I also got in touch with Sophie Bjork-James, a post-doctoral fellow in the Department of Anthropology at Vanderbilt University. She's studied white nationalist movements, race relations, and conservative evangelical political life. When I asked her about the recent policy changes at Facebook and Twitter, she raised an interesting point: for content policing to work, it has to have the right goals in mind.

Despite the fact that the racist right represents a larger presence on social media than ISIS, far more attention has been given to limiting social media use by ISIS. This is a problem given that the racist right is linked to more fatal attacks within the US than Islamic terrorism. -- Sophie Bjork-James

If social platforms are going to make a positive difference by policing their content, they're going to have to do it in a principled way. Addressing the villain du jour might not be enough. But then there are questions of interpretation. What constitutes hate speech? What should be protected by free speech rights? These are very difficult questions, and anyone's answers may depend on their political leanings.

Still, Bjork-James does think that these efforts could have positive effects. She offered a hypothetical example: if Twitter banned anti-Semitic accounts, Jewish journalists might face less of the harassment that has become increasingly prevalent. Even if the people who get banned head to other social networks, as many alt-right commentators have done by moving to Gab, Twitter would become a better social space for everyone.

The people I talked to all seemed to agree that addressing hate speech is a good thing and would likely have a positive effect. But they also thought that these policies aren't enough to solve the problems that underlie hate speech. Racism, sexism, xenophobia, and other discriminatory mindsets are deep-seated, and addressing them requires a great deal of cultural force.

Still, if these three experts are hopeful that Facebook's and Twitter's changes could make a difference, that's cause for optimism.

Looking Forward

Facebook and Twitter haven't started policing their content in earnest yet. Facebook is working on algorithms to identify fake news stories. Twitter has started banning some accounts and has deployed better tools for reporting abuse. We'll see whether their plans prove effective; it will likely come down to just how zealously they pursue these goals.

I agree with Webb when he says that he's "skeptical that their efforts will be [super] enthusiastic." With the reputation that both sites have of being rather hands-off, it's hard to imagine them suddenly changing gears and doing everything they can to get abuse and misleading information off of their sites. On the other hand, much of Silicon Valley is upset about the recent presidential election, and political dissatisfaction is great for galvanizing change.


But the particulars of any sort of campaign like this are always going to be difficult. Who decides which accounts to ban? How does the service control for bias? Who's checking for abuse of the system? What constitutes misinformation? (This is a particularly difficult question.) How many people are Facebook and Twitter willing to employ and pay to police content? Running an effective anti-misinformation or anti-harassment program is time-consuming and expensive.

Personally, I'm cautiously optimistic. Cleaning up the internet is never an easy or especially transparent process, but it's an endeavor worth undertaking.

What do you think? Should social networks make an effort to police the content being shared on their platforms? Or does free speech trump all? I'm conflicted myself, and I'd love to hear from you. Let's hash it out in the comments.

Image Credits: Ollyy/Shutterstock