
Why Won’t Twitter Treat White Supremacy Like ISIS?

Discussion in 'Article Discussion' started by Melody Bot, Apr 25, 2019.

  1. Melody Bot

    Your friendly little forum bot. Staff Member

    This article has been imported from chorus.fm for discussion. All of the forum rules still apply.

    Joseph Cox and Jason Koebler, writing at Motherboard:


    In separate discussions verified by Motherboard, that employee said Twitter hasn’t taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.

    The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued.

     
  2. Martina Apr 25, 2019
    (Last edited: Apr 25, 2019)

    Regular

    Actually, it looks more like Twitter, and to varying extents all major social media platforms, are treating white supremacist/nationalist users and content the way they treated ISIS. For many years Twitter didn't ban ISIS/ISIL accounts at all, which is remarkable given that Twitter and most social media platforms were developed after 9/11. At first Twitter and similar platforms blocked posts of ISIS execution videos and overt attempts to recruit foreign volunteers, often after delays during which the accounts were surely being monitored, and only gradually, as ISIS declined (somewhat) as a global threat, have social media companies done more to crack down on those accounts entirely.

    It was well known (sorry, I can't find links right now) that ISIS accounts themselves were tolerated because they gave US and other intelligence agencies insight into ISIS operations that they couldn't get by any other means (and other means included informants, who were relatively expensive, risky, and not always reliable). With the decline of ISIS influence globally there was less need (again, at least globally) to tolerate ISIS accounts, making it easier to crack down on their content and eventually on the accounts as well. So social media companies really are cracking down on white nationalist/supremacist content and accounts the way they cracked down on ISIS, at least if you look at how they did that over time.

    What makes dealing with this more important is an upsurge in domestic hate group activity, especially since Trump was elected. It's arguably possible to take greater action against white nationalist/supremacist content, but doing so algorithmically risks suppressing dissent and political discourse that I'd hope folks here on this site would agree should be at least tolerated, if not welcomed. For example:

    Facebook while black: Users call it getting 'Zucked,' say talking about racism is censored as hate speech
    Jessica Guynn, USA TODAY, published 7:26 a.m. ET April 24, 2019
    https://www.usatoday.com/story/news...hey-get-blocked-racism-discussion/2859593002/

    I strongly believe all social media platforms should have policies that are less tolerant and facilitating of white nationalist/supremacist content, and I really believe they could do that (and usually do) without "censoring Republicans." There are few Republican politicians, especially in national offices at the Representative level and above (and their staffs and campaign officials as well), whose political content would be so clearly racist as to trip algorithms meant to block groups like the National Alliance, for example.

    The same goes for Fox News: very little of it would likely trip those algorithms, or at least it wouldn't (if you have ever seriously tried to watch Fox News, difficult as I know that sounds) if the network knew those algorithms were "watching" and wanted to avoid tripping them. There are certainly examples of racist Republican politicians like Steve King who would likely be affected, and there are times when more Republicans than usual are expressing overtly racist rhetoric (like after the killing of Michael Brown in Ferguson a few years ago), when more would be affected.

    I really don't think the problem is that Twitter/Facebook/etc. don't want to offend Republicans specifically; they don't want political criticism from major political forces at all, and right-wing groups are generally perceived as more of a threat to social media companies' policies, at least in the US. I also don't find much value in arguments that Republicans are more racist than Democrats. I'm not saying that's not true, only that most Republicans (and at times, Democrats) have a kind of fashion sense, like a dress code, that they follow and only use racist "dog whistles" when they think it is absolutely necessary (which it never is), dog whistles designed not to sound overtly racist or get caught by those algorithms.

    I think the problem is less about pissing off Republicans than:

    (1) Social media companies want to do their content policing with algorithms, which are cheaper than human moderators, whom they have to pay and who risk blabbing to the media, whether the leaks go to The Intercept or Chorus.fm. Social media companies can claim those algorithms are trade secrets or intellectual property, so they are harder to hold accountable by the press and by free speech advocates and partisans alike.

    I think the trend is toward social media companies doing that, though, including affecting people like those described in the USA Today article above. I think that as they do, most people won't object, especially people in the entertainment industry who just want to use social media to move more product without consumers being distracted by politics outside the mainstream and by disturbing content. Social media companies may not like the idea of alienating some politicians, and some may think they are more worried about alienating Republicans than Democrats, but they surely like the idea of alienating advertisers even less.

    (2) Social media companies want to avoid having to modify their policies to accommodate regional differences in how political forces would want to regulate content. Relatively liberal states and even communities might want to regulate more extreme right-wing/white racist/nationalist/Republican content, while conservative states might have far more powerful forces wanting to regulate left-wing/anti-white-racist/Democratic content.

    I think the further we go down the road of social media companies trying to algorithmically regulate content like this, the more likely it becomes that major political forces will be able to regulate and manipulate content, and even the existence of social media itself, as we've seen in China, as we're seeing now in Sri Lanka, and as we saw in the 2016 presidential election in the US.

    Again, I'm all for human moderators making judgment calls when needed to restrict racist/extremist content (especially white nationalist/supremacist/etc. content as we've seen it in the US in recent years), but not for more opaque algorithms that can (and likely will) backfire, as the USA Today article above discusses.

    ***
    tldr:

    "Alternative Press sucks."

    (that should get me some likes, huh?)
     
  3. BradBradley

    Regular

    “Why won’t you remove racist content from your platform?” “Because the algorithm we specifically developed to identify racism found that a lot of politicians fall under the category of ‘super racist.’” “Oh, uh, okay.”
     
  4. Yellowcard2006

    Trusted

    If the algorithm can't tell the difference, I don't see why this is a problem.
     