Can Big Tech Handle Hate Speech?

After the events of January 2021, when Twitter reportedly banned 70,000 accounts associated with the QAnon conspiracy theory, tech companies have made a concerted effort to distance themselves from extremists and to curb hate speech on their platforms. As one would expect, this effort has spurred further investment in algorithmic approaches to content moderation. Last March, Intel presented an AI-powered tool called “Bleep” meant to help video game enthusiasts filter offensive language out of in-game chat. The presentation drew some flak, however, when social media commentators wondered why the program’s interface features a “Racism and Xenophobia” slider with levels labeled “None,” “Some,” “Most,” and “All.” Twitter user @beesmygod was one of many joking at Intel’s expense with quips like “Computer, today I feel like being a little bit misogynistic.” Intel may tweak its user interface in response to this feedback, but public relations flubs like these raise the question: are tech companies equipped to handle hate speech?

Skeptics have contested the sincerity of tech companies’ claims that they want to mitigate extremist and hateful content. Last March, the NYU-based research group Cybersecurity for Democracy published an article on Medium claiming that “content from sources rated as far-right by independent news rating services consistently received the highest engagement per follower of any partisan group” on Facebook. While the available research cannot confirm whether Facebook’s algorithms are responsible for this level of engagement, studies like this reveal a perverse incentive for social media sites to boost dangerous or misleading content in order to court a profitable demographic.

Assuming that tech companies are sincere in their efforts to prevent hate speech, questions remain about how effective those efforts are. In June 2019, YouTube began banning videos featuring the word “Nazi,” presumably to stop the spread of alt-right hate speech. Ironically, the site ended up banning historical videos about World War II while barely touching genuine white supremacist content; real neo-Nazis know to describe themselves with neutral-sounding terms like “race-realist” for the sake of optics. Just this past June, YouTube banned a channel belonging to Right Wing Watch, a project dedicated to documenting far-right extremist media, and then reinstated the channel after realizing its mistake. Right Wing Watch’s only “violation” stemmed from YouTube’s inability to distinguish between extremism and the exposure of that extremism. According to an article from The Daily Beast, YouTube never removed the original videos that Right Wing Watch had documented, raising even more questions about how the ban happened in the first place.

In addition to tech companies’ questionable competence at controlling hate speech, computer technology has not proven capable of handling the nuances of language. A Queer person referring to themself with a homophobic or transphobic slur might be acceptable by modern social norms, but an algorithm would not be able to understand that nuance. The algorithm would merely see a certain word being used and would ban whoever used it, which might explain why YouTube has been accused of demonetizing LGBTQ videos on multiple occasions.
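To make the problem concrete, here is a minimal sketch of the kind of context-blind keyword matching described above. It is purely illustrative: the BLOCKED_TERMS list and the flag_comment() helper are hypothetical stand-ins, not any platform’s actual moderation code.

```python
# Hypothetical sketch of a context-blind keyword filter.
# BLOCKED_TERMS and flag_comment() are illustrative assumptions,
# not the moderation logic of any real platform.

BLOCKED_TERMS = {"slur_a", "slur_b"}  # placeholder stand-ins for real slurs

def flag_comment(comment: str) -> bool:
    """Return True if the comment contains any blocked term.

    The check is purely lexical: it cannot tell whether a term is used
    as an attack, quoted in order to document abuse, or reclaimed by a
    member of the targeted group, so all three cases are flagged alike.
    """
    words = comment.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

# A reclaimed or documentary use is treated identically to a hostile one:
print(flag_comment("as a queer person i sometimes call myself slur_a"))  # True
print(flag_comment("this channel documents creators who say slur_a"))    # True
```

Under this sketch, both comments are flagged even though neither is an attack, which mirrors the way purely lexical moderation can sweep up reclaimed speech and documentation alongside genuine abuse.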

