From Hate to Agonism: Fostering Democratic Exchange Online

Toxic and abusive language threatens the integrity of public dialogue and democracy. Abusive language has been linked to political polarisation and citizen apathy; the rise of terrorism and radicalisation; and cyberbullying. In response, governments worldwide have enacted strong laws against abusive language that leads to hatred, violence and criminal offences against a particular group. These laws include legal obligations to moderate (i.e., detect, evaluate, and potentially remove or delete) online material containing hateful or illegal language in a timely manner; social media companies, in turn, have adopted even more stringent rules in their terms of use. The last few years, however, have seen a significant surge in such abusive online behaviour, leaving governments, social media platforms, and individuals struggling to deal with the consequences.

Responsible Artificial Intelligence

The responsible (i.e., effective, fair and unbiased) moderation of abusive language carries significant practical, cultural, and legal challenges. While current legislation and public outrage demand a swift response, we do not yet have effective human or technical processes that can address this need. The widespread deployment of human content moderators is costly and inadequate on many levels: the work is psychologically taxing, and even large-scale efforts lag behind the deluge of data posted every second. At the same time, Artificial Intelligence (AI) solutions implemented to address abusive language have raised concerns about automated processes that affect fundamental human rights such as freedom of expression and privacy, and about the lack of corporate transparency surrounding these systems. Tellingly, the first moves to censor Internet content focused on terms used by the LGBTQ community and AIDS activists. It is no surprise, then, that industry and media have dubbed content moderation a “billion-dollar problem”. This project therefore addresses the overarching question: how can AI be better deployed to foster democracy by integrating freedom of expression, commitments to human rights, and multicultural participation into the protection against abuse?

People Involved

At SFU

A professor of Communication whose work examines the intersections of political extremism, misinformation, and social media.

Canada 150 Research Chair in New Media

hannah holtzclaw’s research probes the intersection of critical data studies and design, decolonial pedagogy, and imaginative methods. 

A postdoctoral researcher studying the organisation of labour in the media industry.

A professor of Communication whose work examines early Internet censorship of LGBTQ+ activists.

SSHRC postdoctoral fellow

A PhD student whose research focuses on fostering more diverse and egalitarian video game communities and esports industries.

An MA student whose research examines fashion in the metaverse.

A professor of Linguistics whose work tracks the impact of toxicity in online discussion sites.

A postdoctoral researcher studying the role of NLP across disciplines, examining what constitutes abuse, how it can be mitigated, and how to address the disproportionate harms that computational tools for content moderation cause to marginalised communities.

Around the World

Runs the Ahmanson Lab and brings highly qualified personnel (HQP) training expertise from USC’s School of Cinematic Arts’ game design and esports program.

Lecturer at UC Irvine and game studies specialist who studies gender, culture, media, and online interaction.

An expert on race, gender, and toxicity in game spaces.

Participated in Intel’s initiative on bias in gaming.