In the wake of a terrorist attack in London earlier this month, a US congressman wrote a Facebook post in which he called for the slaughter of "radicalized" Muslims. "Hunt them, identify them, and kill them," declared US Rep. Higgins. "For the sake of all that is good and righteous."

Higgins' plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech. But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response. "Start from this reference point, or you've already failed," Delgado wrote. The post was removed, and her Facebook account was disabled for seven days.

A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook's censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins' incitement to violence passed muster because it targeted a specific sub-group of Muslims, those who are "radicalized," while Delgado's post was deleted for attacking whites in general.

Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn't be allowed in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of "fake news," fabricated stories that circulated on Facebook like "Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement," and growing concern that terrorists are using social media for recruitment.

While Facebook was credited during the 2010-2011 "Arab Spring" with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company's hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens. The company's workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea, and Western Sahara. One Facebook rule, which is cited in the documents but which the company said is no longer in effect, banned posts that praise the use of "violence to resist occupation of an internationally recognized state."

One document trains content reviewers on how to apply the company's global hate speech algorithm. One slide identifies three groups: female drivers, black children, and white men. It asks: which group is protected from hate speech? The correct answer: white men. The reason is that Facebook deletes curses, slurs, calls for violence, and several other types of attacks only when they are directed at "protected categories" based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation, and serious disability/disease. It gives users broader latitude when they write about "subsets" of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.

Facebook has used these rules to train its "content reviewers" to decide whether to delete or allow posts. Facebook says the exact wording of its rules may have changed slightly in more recent versions. (The exact rules are in the slide show below.)
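The trait-combination rule described in the training slides can be sketched as a simple check: a target group is fully protected only if every trait describing it is itself a protected category, and any unprotected trait demotes the group to a "subset." The category set, function name, and trait labels below are illustrative assumptions, not Facebook's actual code or internal terminology.

```python
# Illustrative sketch of the trait-combination rule described in the article.
# PROTECTED lists the categories named in the documents; everything else
# (occupation, age, ideology, etc.) is treated as unprotected for this sketch.
PROTECTED = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability/disease",
}

def is_protected_group(traits):
    """A group is protected only if *every* describing trait is protected."""
    return all(trait in PROTECTED for trait in traits)

# "White men" = race + sex: both traits protected, so the group is protected.
print(is_protected_group({"race", "sex"}))                        # True
# "Female drivers" = sex + occupation: occupation is unprotected,
# so the group is a "subset" and attacks on it get broader latitude.
print(is_protected_group({"sex", "occupation"}))                  # False
# "Black children" = race + age: age is unprotected, also a subset.
print(is_protected_group({"race", "age"}))                        # False
# "Radicalized Muslims" = religious affiliation + ideology: a subset too.
print(is_protected_group({"religious affiliation", "ideology"}))  # False
```

This mechanical all-or-nothing combination is what produces the quiz's counterintuitive answer: adding any unprotected qualifier to a protected group removes the protection entirely.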