Social Media and Disinformation Watch, #1

In light of the role that disinformation, particularly on social media, played in the 2016 U.S. presidential election, I thought it would be prudent to start a semi-regular roundup of news items related to disinformation and social media as we look toward 2020.

1. Social media attacks ongoing

Natasha Korecki at Politico has written an important piece on the disinformation campaign already underway against 2020 Democratic candidates. From the article:
An analysis conducted for POLITICO by Guardians.ai found evidence that a relatively small cluster of accounts — and a broader group of accounts that amplify them — drove a disproportionate amount of the Twitter conversation about the four candidates [Kamala Harris, Elizabeth Warren, Bernie Sanders, and Beto O'Rourke] over a recent 30-day period.

Using proprietary tools that measured the discussion surrounding the candidates in the Democratic field, Guardians.ai identified a cohort of roughly 200 accounts — including both unwitting real accounts and other 'suspicious' and automated accounts that coordinate to spread their messages — that pumped out negative or extreme themes designed to damage the candidates.

This is the same core group of accounts the company first identified last year in a study as anchoring a wide-scale influence campaign in the 2018 elections.

The broader goal, per the article, is to "sow discord and chaos within the Democratic presidential primary." The group seems to operate by manufacturing what looks like a viral negative response to Democrats, using synthetic accounts operated by real people alongside accounts that amplify authentic users who are already tweeting a desired message.

The article also notes that the researchers analyzing this activity cannot conclude whether these accounts are coordinated by domestic or foreign actors, but that the activity bears the hallmarks of previous foreign attacks.

Given that social media companies, the U.S. government, and the political candidates themselves have so far been unable (and/or unwilling) to adequately address these issues, users of platforms such as Facebook and Twitter are left to navigate this activity with essentially the same tools we have for dealing with trolls and harassers: block, mute, unfollow, and report.

Because Facebook and Twitter were built from the start around a model that privileges "free speech" over user safety, what is actually a structural problem threatening our democratic processes is largely treated as though it's a personal problem for individual users to figure out.

State Democratic Party Chairs in Iowa, New Hampshire, South Carolina, and Nevada have written a letter to state party chairs across the country urging collaboration in battling disinformation on social media:
The goal is to have 2020 campaigns agree to forego illicit online campaign tactics like those used against Democrats in the 2016 presidential campaign, including the use of fake social media accounts, the spread of disinformation, hacking, and the use of hacked materials.

There's also discussion about candidates calling out supporters for taking part in that activity.

That second part is especially key. Certain high-profile campaign surrogates, for instance, need to be held accountable by their preferred candidates for spreading disinformation that blurs the difference between an outcome that is actually rigged and a legitimate outcome with which they simply aren't happy.

2. The human toll of content moderation

[Content Note: Trauma; violence; bigotry] Others have written about this topic before, including me, but Casey Newton at The Verge has written an in-depth piece on the trauma experienced by Facebook content moderators.

In the piece, Newton interviews employees of a company called Cognizant who work as content moderators for Facebook. What is notable from their accounts is not just the trauma these contractors experience, complete with PTSD symptoms, from viewing the content Facebook users try to post, but that their own worldviews sometimes start to shift, with some moderators adopting fringe, non-reality-based viewpoints, such as 9/11 trutherism and Holocaust revisionism.

Also interesting is that Facebook seems to place an extreme focus on moderating content "accurately" while giving its moderators an ever-changing rulebook that makes consistency and accuracy extremely challenging. The article cites multiple sources of authoritative policy that content moderators are supposed to consult when making decisions (which they're also supposed to make quickly): Facebook's community guidelines, longer internal guidelines, a 15,000-word Known Questions document, discussions among the moderators themselves, and ongoing incremental guidance.

I understand that content moderation can be challenging, particularly when a platform is hosting massive amounts of content. That, it seems, is an argument for building such processes into a platform at the time of its initial rollout. That ship has sailed in many cases, but these challenges, as well as the traumas moderators endure, also strike me as an argument for keeping content moderation in-house, rather than outsourcing it to relatively low-paid contractors doing what is treated as low-tier work for the company.

Another thing I wonder is whether moderators who regularly view extremely traumatic content, such as depictions of murder, gradually become inured to less extreme content, and whether that factors into some of their on-the-fly moderating decisions, such that, over time, all users of the platform become more "used to" aggression, incivility, and violence.

I suspect that we will be grappling with these failures and tech's normalization of traumatic content and disinformation, individually and socially, for a very long time.

3. Kara Swisher interviews Twitter's Jack Dorsey

If you've never listened to Kara Swisher interview someone in the tech industry, I highly recommend it if you're able.

A couple of weeks ago, she interviewed Jack Dorsey on his own platform. In it, she really presses him on what, specifically, Twitter has done to address user safety, and Dorsey acknowledges that the company has put most of the burden on victims of abuse.

Which, yes. I'm not sure there's a prominent woman on Twitter who would disagree with that.

4. Parliament releases report on disinformation

Last week in the UK, a parliamentary committee released a 108-page report on disinformation and social media. Finding democracy to be at risk due to the spread of disinformation and tech company failures, the report calls for:
  • Compulsory Code of Ethics for tech companies overseen by independent regulator
  • Regulator given powers to launch legal action against companies breaching code
  • Government to reform current electoral communications laws and rules on overseas involvement in UK elections
  • Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation

You know, something I see a lot on Twitter is snarking about how the spread of disinformation, particularly by Russia, isn't real. That gaslighting, itself, is abusive disinformation.
