Social Media and Disinformation Watch, #2

[Content Note: Eliminationist Islamophobia; gun violence.]

In light of the role that disinformation, particularly on social media, played in the 2016 U.S. presidential election, this post is a semi-regular roundup of news items related to disinformation and social media as we look toward 2020.

1) Amazon's "White Man Problem"

At Medium, Maya Kosoff has written about the lack of diversity at Amazon. From her piece:
Amazon's diversity problems extend well beyond Bezos' closest confidantes: A 2018 Bloomberg piece found that according to numbers the company submitted to the government, 73 percent of Amazon's corporate employees — those who do not work as contractors or in the company's warehouses — are men, as are 78 percent of senior executives and managers.
Kosoff then delves into the business case for increasing diversity, and how research shows that diversity boosts productivity. I suppose that research would matter if hiring managers made hiring decisions rationally. To me, the more compelling argument, at least morally (which Kosoff also mentions), is that when white men are disproportionately creating platforms and technology, those platforms' features will likewise reflect the flawed, limited perspectives, biases, and safety concerns (or lack thereof) of white men.

2) Apropos of Nothing

3) Facebook's Free Speech Philosophy

Vanity Fair ran a profile, written by Simon Van Zuylen-Wood, of the high-level employees who contemplate free speech issues for Facebook. The focus is largely on Monika Bickert, a former federal prosecutor who, as Head of Global Policy Management, leads the team that sets Facebook's speech norms. From the piece:
In general, Bickert would rather not censor content that's part of the national discourse, even when it's blatantly hateful. A prime example: in 2017, just before Facebook began re-writing its hate-speech policies, U.S. representative Clay Higgins of Louisiana posted a message on his Facebook wall demanding that all "radicalized Islamic suspects" be executed. "Hunt them, identify them, and kill them," he wrote. "Kill them all." People were obviously outraged. But Facebook left the message up, as it didn't violate the company's hate-speech rules. ("Radicalized Islamic suspects" wasn't a protected category.) Post-revamp, that outburst would run afoul of the Community Standards. Yet Facebook has not taken the message down, citing an exemption for "newsworthy" content. "Wouldn't you want to be able to discuss that?" Bickert asks, when I point to Higgins's post. "We really do want to give people room to share their political views, even when they are distasteful."

Bickert is articulating not just the classic Facebookian position that sharing is good, but also what used to be a fairly uncontroversial idea, protected by the First Amendment, that a democratic society functions best when people have the right to say what they want, no matter how offensive. This Brandeisian principle, she fears, is eroding. "It's scary," she says. "When they talk to people on U.S. college campuses and ask how important freedom of speech is to you, something like 60 percent say it's not important at all. The outrage about Facebook tends to push in the direction of taking down more speech. Fewer groups are willing to stand up for unpopular speech."
"Offensive" and "unpopular" are interesting word choices that, in my opinion, downplay what's going on here. Generally, I think it's more apt to say that people might be offended by, say, a bad movie review, whereas they are terrorized by speech that promotes genocide.

My take-away is that Bickert (and thus Facebook) seems to err on the side of "tolerating" hateful, eliminationist speech, under the general liberal principle that blanket tolerance is a social good. Van Zuylen-Wood sums this up pretty well:
If the left wing of the Internet generally wants a safer and more sanitized Facebook, and the right wing wants a free-speech free-for-all, Bickert is clinging to an increasingly outmoded Obamian incrementalism. Lead from behind. Don't do stupid shit. Anything more ambitious would be utopian.
4) Also Apropos of Nothing

At the New York Times, Kevin Roose discussed last week's murderous, Islamophobic rampage in Christchurch in the context of Internet culture:
[W]e do know that the design of internet platforms can create and reinforce extremist beliefs. Their recommendation algorithms often steer users toward edgier content, a loop that results in more time spent on the app, and more advertising revenue for the company. Their hate speech policies are weakly enforced. And their practices for removing graphic videos — like the ones that circulated on social media for hours after the Christchurch shooting, despite the companies' attempts to remove them — are inconsistent at best.
Here, it's also important to note that, in addition to radicalizing people, social media can also foster the dehumanization of targeted human beings, quickly and at algorithmic scale.

Feminists are a case in point: on Twitter alone, I regularly see users engage in the most toxic, aggressive pile-ons of women. The pattern is that once a feminist target has been identified as having said something deemed ridiculous, someone with a relatively high follower count will retweet it with their own mocking commentary added. Other users then quickly compete to make the cruelest, edgiest, and/or wittiest attacks, reaction gifs, and memes, hoping to earn likes and retweets.

People stop caring that there's a human being on the receiving end of the commentary. And, when "everyone else" is engaging in the anti-social behavior, I think it's easy for individual abusers to believe that their single addition to the discourse doesn't matter much, from a moral standpoint. Twitter has no pervasive stigma against being cruel. Rather, cruelty is a predominant, widely accepted norm.

5) Twitterpocalypse

Last week, a rumor circulated on Twitter that the platform was going to start hiding like and retweet metrics. Users reacted with outrage and panic; Twitter then tried to clarify its plans, and now a bunch of people are mostly just confused.

So, just another day on Twitter, all in all.

I actually don't think hiding like and retweet metrics would be the worst thing. This article by Will Oremus is an interesting exploration of the Twitter "Demetricator" app and how users change their behavior based on the perceived "shareability" and "likeability" of content. I think hiding these metrics could help slow the spread of outrage, abuse, and misinformation.

6) Facebook Files Patent for Political Debate Forum

Oh.
Facebook has applied to patent a system where people could comment on laws that might affect them, then have that feedback worked into a formal political proposal, creating a way for people to "meaningfully engage in civil discourse" online. The concept would build on Facebook's earlier attempts at promoting civic engagement, and it sounds similar to other, existing crowdsourced democracy tools. But Facebook's vast scale could put tremendous weight behind any kind of private political forum.
Internet debates mediated on Facebook. Woo.
