From Hate to Agonism:

Fostering Democratic Exchange Online

Toxic and abusive language threatens the integrity of public dialogue and democracy. Abusive language has been linked to political polarisation and citizen apathy; the rise of terrorism and radicalisation; and cyberbullying. In response, governments worldwide have enacted strong laws against abusive language that incites hatred, violence and criminal offences against a particular group. These include legal obligations to moderate (i.e., detect, evaluate, and potentially remove) online material containing hateful or illegal language in a timely manner, and social media companies have adopted even more stringent regulations in their terms of use. The last few years, however, have seen a significant surge in such abusive online behaviour, leaving governments, social media platforms, and individuals struggling to deal with the consequences.

Responsible Artificial Intelligence

The responsible (i.e. effective, fair and unbiased) moderation of abusive language carries significant practical, cultural, and legal challenges. While current legislation and public outrage demand a swift response, we do not yet have effective human or technical processes that can address this need. The widespread deployment of human content moderators is costly and inadequate on many levels: the nature of the work is psychologically challenging, and even significant efforts lag behind the deluge of data posted every second. At the same time, Artificial Intelligence (AI) solutions implemented to address abusive language have raised concerns about automated processes that affect fundamental human rights, such as freedom of expression and privacy, as well as about a lack of corporate transparency. Tellingly, the first moves to censor Internet content focused on terms used by the LGBTQ community and AIDS activism. It is no surprise, then, that content moderation has been dubbed by industry and media a “billion dollar problem”. Thus, this project addresses the overarching question: how can AI be better deployed to foster democracy by integrating freedom of expression, commitments to human rights, and multicultural participation into the protection against abuse?

People Involved

At SFU

A professor of Communication, whose work examines the intersections between political extremism, misinformation, and social media

Canada 150 Research Chair in New Media

hannah holtzclaw’s research probes the intersection of critical data studies and design, decolonial pedagogy, and imaginative methods. 

A postdoctoral researcher studying the organization of labor in the media industry.

A professor of Communication, whose work examines early Internet censorship of LGBTQ+ activists

SSHRC postdoctoral fellow

A PhD student whose research is focused on aiding in the development of more diverse and egalitarian video game communities and esports industry.

An MA student whose research examines fashion in the metaverse.

A professor of Linguistics, whose work tracks the impact of toxicity in online discussion sites

A postdoctoral researcher studying the role of NLP across disciplines to examine what constitutes abuse, how it can be mitigated, and addressing the disproportionate harms that computational tools for content moderation have on marginalised communities.

Around the World

Runs the Ahmanson Lab and brings highly qualified personnel (HQP) training expertise from USC’s School of Cinematic Arts’ game design and esports program

Lecturer at UC Irvine and games studies specialist who studies gender, culture, media and online interaction.

An expert on race, gender, and toxicity in game spaces

Participated in Intel’s initiative on bias in gaming

From Hate to Agonism

The international, collaborative Hate to Agonism research stream has an ambitious goal: to create responsible AI for inclusive, democratic societies. Within the research being carried out by the Institute at SFU, the stream has already evolved into two teams with similar goals in two distinct fields, counterspeech and gaming, which are discussed in more detail below. We also work closely with a team at the University of Sheffield as part of a joint UK–Canada project focused on hate speech directed at UK politicians. The team at Sheffield is developing ground-breaking machine learning tools, which will also be employed in the research projects situated in Canada. The three arms of the project share the same overall aim, as stated on our website: to “combat abusive language and foster democracy through counterspeech”. But what is counterspeech? Why is democracy important? And how does gaming fit in? This article aims to answer these questions and describe the research in more detail.

Debate Not Hate

Abusive language has been linked to political polarization and citizen apathy, the rise of terrorism and radicalization, and cyberbullying. A key driver of this research is therefore to combat the proliferation of toxic and abusive language that threatens the integrity of public dialogue and democracy. As noted above, governments worldwide have responded with strong laws against abusive language that incites hatred, violence and criminal offences against a particular group, including legal obligations to moderate (i.e., detect, evaluate, and potentially remove) hateful or illegal online material in a timely manner, and social media companies have adopted even more stringent regulations in their terms of use. Despite these measures, the last few years have seen a significant surge in abusive online behaviour, leaving governments, social media platforms, and individuals struggling to deal with the consequences.
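The detect–evaluate–remove pipeline that these legal obligations describe can be sketched in a few lines. The sketch below is purely illustrative: the keyword lexicon and the one-term threshold are hypothetical placeholders, whereas the production systems this project studies rely on trained machine-learning classifiers rather than keyword matching.

```python
# Toy illustration of the detect -> evaluate -> remove moderation pipeline.
# ABUSIVE_TERMS and the threshold are hypothetical placeholders, not a real lexicon.

ABUSIVE_TERMS = {"idiot", "scum", "vermin"}

def detect(post: str) -> list[str]:
    """Detection step: flag which abusive terms appear in a post."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return sorted(words & ABUSIVE_TERMS)

def evaluate(post: str, threshold: int = 1) -> bool:
    """Evaluation step: decide whether the post crosses the removal threshold."""
    return len(detect(post)) >= threshold

def moderate(posts: list[str]) -> list[str]:
    """Removal step: keep only the posts that pass evaluation."""
    return [p for p in posts if not evaluate(p)]

posts = ["Great point, thanks!", "You absolute idiot."]
print(moderate(posts))  # only the non-flagged post survives
```

Even this toy version makes the core tension visible: every design choice (which terms, what threshold) encodes a judgement about what counts as abuse, which is exactly where concerns about fairness and transparency arise.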

Counterspeech describes the response and resistance to hate speech. It is perhaps most easily identified on digital platforms like Twitter, but it can take any form in which reactions are manifested: art, mass media, even records of court proceedings could include elements of counterspeech. Part of our research project involves working on an anatomy of counterspeech: trying to determine what qualifies as counterspeech and how it functions, including identifying moments of counterspeech across digital platforms. A number of significant public figures demonstrate a ‘best practice’ form of counterspeech, Alexandria Ocasio-Cortez being among the most prominent. Her command of the elegant response has been exemplary in the recent debates over COVID relief in the US:

[Image: AOC tweets calling out Ted Cruz]

This sort of response is crucial because it is in these arenas that counterspeech and debate help to foster constructive democracy. This crucial element of societal dialogue shouldn’t be about eliminating dissensus, but about creating equality of access and about engaging with and respecting difference. We aim to identify how moments of counterspeech build coalitions between communities by following tweets that demonstrate counterspeech, tracking where they are shared across networks, and, crucially, how those networks are connected. We are also identifying historic acts of counterspeech outside of digital spaces to further inform our understanding and research. These instances are often responses to existing hegemonic practices like colonialism, imperialism and neoliberalism, which contribute to the ongoing suppression of key voices. By identifying how grassroots resistance to hate is manifested, we aim to find out how those methods could be amplified.
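The idea of tracing how sharing connects otherwise separate communities can be sketched as a simple graph analysis. In the toy example below, all account names and share pairs are hypothetical; the point is only to show how a single “bridging” account merges two communities into one connected component, which is the kind of coalition-building signal the network analysis looks for.

```python
# Toy sketch of share-network analysis: accounts that share the same
# counterspeech tweet are linked, and connected components approximate
# coalitions across communities. All names and pairs are hypothetical.
from collections import defaultdict

shares = [  # (sharer, shared_from) pairs -- hypothetical data
    ("alice", "blm_org"), ("bob", "blm_org"),
    ("carol", "water_protectors"), ("dan", "water_protectors"),
    ("alice", "water_protectors"),  # 'alice' bridges the two communities
]

# Build an undirected graph from the share pairs.
graph = defaultdict(set)
for a, b in shares:
    graph[a].add(b)
    graph[b].add(a)

def components(graph):
    """Find connected communities with a simple graph traversal."""
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        comp, queue = set(), [node]
        while queue:
            n = queue.pop()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(graph[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(components(graph))  # one merged component: 'alice' links both movements
```

Without the bridging share, the graph would split into two components; with it, the two movements form a single connected community, which is what makes cross-network sharing analytically interesting.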

Key social movements that we are focusing on include:

  • Black Lives Matter
  • Standing Rock Water Protector Camp
  • AIDS and Act Up Activism
  • Zapatista Movement
  • BC First Nations Actions against TransMountain Pipeline
  • DREAMers Activism

These different movements often speak to and with each other, hence our analysis of sharing across networks. Their language and tactics are often shared and taken up across subcultural intersections to gain support and manifest solidarity. One fascinating mode of resistance has come from fan culture, where fans have organized disruptive protests, as was seen at the Trump rally in Tulsa in June 2020: the event had ‘sold out’, but many of the tickets were bought by K-pop fans who didn’t show up.

Gaming

Online gaming has long fostered an environment where abuse, trash talking and criticism are normal modes of communicating, making it an excellent arena for observing how such speech is encountered, how it is regulated by the platform on which it is performed, and how the players themselves react to it. Gaming was identified as one of the primary locations where hate speech can manifest, and it offers a different area of focus from politicians: participants also regularly face abusive language, but with different styles and forms of communicating, mainly via in-game chat.

The work that Bo Ruberg and their lab at UC Irvine have been doing, which we wrote about previously in our post ‘Queering the Norms in Gaming’, has been incredibly informative for this project. We are also fortunate to have a member of Bo’s lab, Christine Tomlinson, working with us on this stream; her expertise and insights have been vital to the project’s progress.

There are a number of innovative approaches being taken with this research, and because gaming offers such a rich environment to analyze, it was challenging even to choose where to start. Our team narrowed the initial research to focus on Among Us, an interactive multiplayer game that saw a huge uptake of players during the COVID-19 pandemic. We will extend the research to other genres of games in due course, including first-person shooters, action-adventure games, role-playing games, and others. Our methodology includes the use of cultural probes, in which ethnographic research is carried out while the subject plays the game, describing encounters and opportunities for connection with other players. The team will also look outside the games themselves for discussion about gameplay on sites like Reddit and YouTube, and will monitor Discord and Twitch forums for material. There are some interesting crossovers between the projects, such as the occasions when Alexandria Ocasio-Cortez and the ‘Squad’ of Democratic representatives played Among Us and used Discord to connect with members of the public.

The teams are working towards finding how to facilitate productive moments of disagreement, where counterspeech and moments of difference rather than hate can create opportunities for deliberative dialogue. While the different arms of the project have varying areas of focus, each informs the others: using the same tools, finding intersections in networks, and learning together to find spaces for counterspeech that could feed into strategies for responsible AI. We look forward to sharing the results of our research in due course. Watch this space!