
Here's How Facebook Determines What Hate Speech Looks Like

Facebook's internal policies protect certain groups of people from hate speech, but not others. And subsets of protected groups are fair game.
Posted at 8:37 PM, Jun 28, 2017

Facebook's content rules are under scrutiny again. A recent ProPublica report gives greater insight into how Facebook decides whom to protect from hate speech on the platform.

The outlet obtained an internal presentation outlining Facebook's content rules. In short, the rules shield some groups of people from harassment while leaving posts that target other groups untouched.

According to its policy, Facebook can remove posts that attack groups based on race, sex, gender identity, sexual orientation, religion, national origin, ethnicity and serious disability or disease.

Non-protected subsets of protected groups are fair game for targeting. One slide illustrated the distinction: a post attacking all white men should be taken down, while attacks on women drivers or black children can stay up.
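As ProPublica described it, the logic amounts to a conjunction test: a post is removable only if every trait it targets falls into a protected category, so adding a single non-protected trait (a job, an age group) exempts the attack. Here is a minimal, hypothetical sketch of that rule in Python; the names and structure are illustrative assumptions, not Facebook's actual implementation.

```python
# Hypothetical model of the rule in ProPublica's leaked slides.
# The category names and function are illustrative, not Facebook's code.

PROTECTED_CATEGORIES = {
    "race", "sex", "gender identity", "sexual orientation",
    "religion", "national origin", "ethnicity",
    "serious disability or disease",
}

def should_remove(target_traits: set[str]) -> bool:
    """An attack is removable only if *every* trait describing its
    target is a protected category; one non-protected trait makes
    the subset fair game under the described rule."""
    return all(trait in PROTECTED_CATEGORIES for trait in target_traits)

# "White men" = race + sex: both protected, so the post comes down.
print(should_remove({"race", "sex"}))        # True
# "Women drivers" = sex + occupation: occupation isn't protected, so it stays up.
print(should_remove({"sex", "occupation"}))  # False
# "Black children" = race + age group: age isn't protected, so it stays up.
print(should_remove({"race", "age"}))        # False
```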

This is just the latest publication of Facebook's internal content moderation documents, which have also revealed how the company handles tricky legal situations like Holocaust denial and online extremism.

Offensive content is increasingly putting Facebook and other social media giants at odds with governments around the world. A bill recently proposed in Germany would fine companies up to $53 million if they don't remove offending content quickly enough.

The problem is only going to get more complex as Facebook's user base grows — the site recently passed 2 billion monthly users.