"It's fundamentally not rights-oriented."

[Content Note: Racism; anti-Semitism; Islamophobia; misogyny; abuse.]

Julia Angwin and Hannes Grassegger have written a terrific piece for ProPublica, bluntly titled: "Facebook's Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children." It's a long read, but well worth your time and attention, so settle in.

I will just quickly highlight this passage, whence comes the title for my post (emphasis mine):

By 2008, the company had begun expanding internationally but its censorship rulebook was still just a single page with a list of material to be excised, such as images of nudity and Hitler. "At the bottom of the page it said, 'Take down anything else that makes you feel uncomfortable,'" said Dave Willner, who joined Facebook's content team that year.

Willner, who reviewed about 15,000 photos a day, soon found the rules were not rigorous enough. He and some colleagues worked to develop a coherent philosophy underpinning the rules, while refining the rules themselves. Soon he was promoted to head the content policy team.

By the time he left Facebook in 2013, Willner had shepherded a 15,000-word rulebook that remains the basis for many of Facebook's content standards today.

"There is no path that makes people happy," Willner said. "All the rules are mildly upsetting." Because of the volume of decisions — many millions per day — the approach is "more utilitarian than we are used to in our justice system," he said. "It's fundamentally not rights-oriented."
Well, that's refreshingly frank and ALSO TERRIBLE.
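For a sense of what that volume means in practice, here's a quick back-of-envelope calculation using the figures from the quoted passage. (The eight-hour shift and the ten-million-decision day are my own illustrative assumptions; the article gives only "15,000 photos a day" per reviewer and "many millions" of decisions per day.)

```python
# Back-of-envelope on review throughput.
photos_per_day = 15_000         # per reviewer, per Willner
shift_seconds = 8 * 60 * 60     # assumed eight-hour working day

seconds_per_photo = shift_seconds / photos_per_day
print(f"{seconds_per_photo:.1f} seconds per photo")       # ~1.9 seconds

decisions_per_day = 10_000_000  # stand-in for "many millions per day"
reviewers_needed = decisions_per_day / photos_per_day
print(f"~{reviewers_needed:.0f} reviewers at that pace")  # ~667
```

Under two seconds per item leaves no room to deliberate about anyone's rights. At that pace, a blunt, utilitarian rulebook is practically guaranteed.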

The question, of course, is this: If the approach to moderation is "fundamentally not rights-oriented," to what is it oriented? The simple answer is profits. But because Facebook's primary profit-making enterprise is collecting data on its users, I think the true answer is slightly more complex and sinister: The company tries to balance the appearance of safety for its users against the ruthless exploitation of those users, and the tolerance of abuse directed at them, on behalf of its advertisers.

One additional observation: There's nothing in the article about the flagging of content by users. And I suspect that plays a huge role in how moderation decisions get made.

I know from experience that conservatives (and "far-leftists" who imagine they're not conservatives) spend an inordinate amount of time tracking and policing and reporting people they don't like.

I suspect that progressives generally spend far less time focused on the people we don't like, and feel a much weaker impulse to track and report them.

What does that mean on Facebook? Very likely, it skews which content gets flagged in the first place, which in turn shapes how the people who receive those reports respond as moderators.
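To make that worry concrete, here's a minimal sketch, entirely my own construction and not Facebook's actual system, of what volume-ranked triage does: if one group reports several times more often than another, its targets dominate the review queue regardless of how severe the content actually is.

```python
from collections import Counter

# Hypothetical flagged posts and their report counts. Facebook's real
# pipeline is not public; this only shows how ranking by raw report
# volume behaves when reporting rates differ between groups.
reports = Counter({
    "post_by_disliked_author": 40,  # heavily flagged by organized reporters
    "actually_abusive_post": 6,     # abusive, but few people bothered to flag
})

# A naive queue: review whatever has the most flags first.
queue = [post for post, _ in reports.most_common()]
print(queue)  # ['post_by_disliked_author', 'actually_abusive_post']
```

The abusive post waits behind the merely unpopular one. Any system that treats report volume as a proxy for severity inherits the biases of whoever reports most aggressively.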

Similarly, the options that Facebook provides for reporting inappropriate content shape those reports in a very particular way:

Option 1: It's annoying or not interesting
Option 2: I think it shouldn't be on Facebook
Option 3: It's a false news story
Option 4: It's spam

That's it. There's not even an option for reporting something as harmful, abusive, etc.
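Those options read like a fixed set of reason codes attached to each report. Here's a sketch of what that might look like; the four strings are Facebook's own options, but the names and structure are my illustrative guesswork, not its actual API:

```python
from enum import Enum

class ReportReason(Enum):
    # The four strings come from Facebook's reporting dialog;
    # everything else here is hypothetical.
    ANNOYING = "It's annoying or not interesting"
    SHOULDNT_BE_HERE = "I think it shouldn't be on Facebook"
    FALSE_NEWS = "It's a false news story"
    SPAM = "It's spam"
    # Conspicuously absent: anything like ABUSIVE or HARMFUL.
```

Whatever a user is actually trying to report, it has to be squeezed into one of those four codes before a moderator ever sees it.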

If I were going to report abusive content (racist content, let's say, for this example), I wouldn't choose "annoying or not interesting," because I find racist content rather more problematic than "annoying."

I would choose "I think it shouldn't be on Facebook," which is the only one of the four options framed subjectively. My report thus gets submitted already prefaced with "I think": I'm not able to definitively say the content doesn't belong, though I would be able to definitively say it's annoying, even though annoyance is arguably more subjective than whether abusive material "shouldn't be on Facebook."

That doesn't seem incidental. And I strongly suspect that who reports content, and how it gets reported, has a major impact on the deeply problematic aspects of Facebook's secret censorship strategy.

