Should Facebook do more to protect our freedom of speech?

Jillian York asks: with such a large number of users, should Facebook be obliged to protect our freedom of expression?

Image: screenshot from Facebook

A few weeks ago, a number of women (and men) banded together in a successful campaign to challenge Facebook’s policies around content moderation, specifically the way in which the Internet giant has dealt with misogynistic content. In response, Facebook agreed to take steps to bring its handling of such content in line with its policies toward other types of hate speech and violence.

The campaign brought about seemingly equal parts praise and disdain, with those in the latter camp—myself included—objecting primarily on the grounds that Facebook should not control speech.

But as Soraya Chemaly recently argued, the question of whether Facebook should moderate speech is different from whether the company should treat all types of hateful speech equally under its own policies. I agree with Chemaly when she says that “corporations … have a responsibility to treat hate based on gender in the same manner that they do other forms of hate speech.” With that question settled, the debate once again distills to whether Facebook should be controlling speech at all.

Let’s break it down: Facebook has more than 1 billion users, 655 million of whom are active daily. The platform’s content moderation relies on what is known as “community policing”: in order for content that violates the terms of service to be flagged to moderators, a user must first report it under one of a variety of categories, including “sexually explicit content,” “hate speech,” and “violent or harmful content.”

After the content is flagged, Facebook’s team of moderators reviews it. Just how many moderators the company employs, and what procedures they use to moderate the content, remains a mystery: Facebook is notoriously opaque about its practices. What we can assume, however, is that, given the size of the platform’s user base, they’re spending less than a second on each piece of content.

A split second may certainly be enough to determine that an image like those Chemaly’s campaign decries is abusive. But the myriad false positives over the years, such as a ban on the word ‘Palestinian,’ the removal of ‘Arab Spring’ activist pages, and the recent takedown of a page protesting genetically modified foods, expose a failed process.

Furthermore, the actual policies can be confusing. While few would object to a ban on images depicting abuse of women, Facebook also bans nudity and certain profanities, as well as pseudonyms (unless, that is, you’re Salman Rushdie).

Ultimately, while Facebook remains a private company, it has become the largest shared platform the world has ever seen, one that half the world’s Internet users employ in some way. At some point, we must consider whether that gives it additional responsibilities when it comes to protecting free expression.


By Jillian York on Jun 25, 2013
