Beyond Verification:

Authenticity and the Spread of Mis/Disinformation

Fake news threatens democracy in Canada and globally. The viral spread of misinformation influences election results, undermines trust in media sources and in politicians, who accuse each other of spreading “fake news”, and fosters conspiracy theories and general cynicism about institutions. Social media has been targeted as the main source of fake news and its discourses. Within a decade, Twitter, Facebook, and other platforms have moved from being lauded as inherently democratic technologies to being condemned as irresponsible media publishers. The structure of the Internet itself undermines the efficacy of fact-checking: fact-checking sites always lag behind the deluge of rumors produced by misinformation sources.

Verification alone is not enough to combat misinformation, as fake news often reaches a different audience than its corrections. We need to go beyond verification, and this research stream seeks to identify and investigate new ways of countering these problems.

About the Project

Beyond Verification: Authenticity and the Spread of Mis/Disinformation leverages emerging technologies to combat fake news and benefit Canadians by developing new strategies for displacing fake news, particularly by targeting the structures and actions that foster the production and circulation of junk content.

Fake news is not simply a question of content, but also of global data circulation, digital cultures, web-economies and industries, and user- and group-identity formation, context, and trust. It works by weaving itself into the larger media environment and by provoking strong emotions. Its force is tied to the everyday actions and sensibilities of users—to how they craft themselves as “brands” via social media platforms that also restrict their actions, and how they come to trust others online and offline. Our research analyzes misinformation and authenticity within this broader environment.

Modelling Authenticity

To answer questions about what constitutes authenticity, what makes a news item appear authentic, and how authenticity can be evaluated, we are collaborating with Goodly Labs to create a coding schema for analyzing the news.

To operationalize authenticity, we have identified six characteristics: self-authentication, emotional intensity, transgression of conventions, culturally authenticating rhetoric, social validation, and branding. We identified these features with plain text in mind (no images, no layouts). These categories act as operational patterns of revelation and relation that ground authenticity and its recognition. Although framed as “unscripted,” these strategies of identification are often carefully constructed in order to establish relationships of trust or intimacy between the writer and the audience.
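Purely as an illustration of how these six features might be represented for annotation, here is a minimal sketch in Python; the field names and the 0–2 ordinal scale are our own assumptions, not the project's actual codebook.

```python
from dataclasses import dataclass, field

# The six authenticity features named above; the ordinal 0-2 scale used
# below is a hypothetical choice for illustration only.
FEATURES = (
    "self_authentication",
    "emotional_intensity",
    "transgression_of_conventions",
    "culturally_authenticating_rhetoric",
    "social_validation",
    "branding",
)

@dataclass
class AuthenticityCoding:
    """One coder's judgement of a single plain-text news item."""
    item_id: str
    coder_id: str
    # Each feature coded 0 (absent), 1 (present), or 2 (strongly present).
    codes: dict = field(default_factory=lambda: {f: 0 for f in FEATURES})

# Example: one coder marks a sample item as strongly emotionally intense
# and mildly branded.
coding = AuthenticityCoding(item_id="sample-001", coder_id="coder-A")
coding.codes["emotional_intensity"] = 2
coding.codes["branding"] = 1
```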

The project is currently applying the codebook to a training sample to test its reliability. The sample texts relate to a few intersecting themes: COVID-19, Canadian politics, and anti-Asian hate. As research progresses, we will report on our findings.
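The project has not specified which reliability statistic it uses; as one common option for this kind of test, inter-coder agreement on a single feature can be measured with Cohen's kappa. The sketch below is a minimal Python illustration with hypothetical coder labels.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items."""
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labelled the same.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's own label distribution.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two coders marking "emotional intensity"
# (1 = present, 0 = absent) in ten texts from the training sample.
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"Cohen's kappa: {cohen_kappa(coder_a, coder_b):.2f}")  # 0.80
```

A kappa near 1 indicates agreement well above chance; low values would suggest the codebook definitions need refinement before full-scale coding.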

Research Guide to Authenticity

The spread of false, misleading, and inaccurate news threatens democracy globally, and it is one of the driving concerns of this research stream. In response, researchers, non-profit organizations, and media companies have sought to develop techniques to detect mis- and disinformation, but fact-checking, while important, is not enough. Fact-checking sites lag behind the deluge of rumors produced by global disinformation networks and spread via private interactions. Likewise, definitions of mis- and disinformation and other forms of information disorder often center on the intention of the producer or sharer of information to deceive or cause harm. While this approach can prove useful in differentiating forms of information disorder, focusing on intentions to deceive potentially limits a broader and more diverse understanding of information-sharing behaviour. How, then, might we comprehend and combat the impact of the global spread of mis- and disinformation?

This research guide attempts to answer this question, and in turn contributes to research on mis- and disinformation, by moving beyond questions of facticity to those of authenticity. Grouping the relevant research on authenticity under four common themes, it asks, and responds to, the following questions:

  1. Why and how—under what circumstances (social, cultural, technical and political)—do people find information to be true or authentic, regardless of facticity?
  2. What information do people share and create in order to appear authentic?

Once completed, the Research Guide will be available to all and will be linked here. It will be published by meson press in late 2022.

Previous Projects

This project culminated in virtual performances of the theatrical production Left and Right, or Being Where You Are in February and March 2021, led by Ioana Juhasz. The bots were used during the performance in an interactive activity for the audience. An article on the project was published in September 2021.

News websites have financial incentives to spread disinformation in order to increase their online traffic and, ultimately, their advertising revenue. Meanwhile, the dissemination of disinformation has disruptive and far-reaching consequences. The COVID-19 pandemic offers a recent example: by disrupting society’s shared sense of accepted facts, disinformation narratives undermine public health, safety, and government responses.

To combat ad-funded disinformation, the Global Disinformation Index (GDI) deploys its assessment framework to rate news domains’ risk of disinforming their readers. These independent, trusted, and neutral ratings are used by advertisers, ad tech companies, and platforms to redirect their online ad spending in line with their brand safety and disinformation risk mitigation strategies.

GDI defines disinformation as ‘adversarial narratives that create real world harm’, and the GDI risk rating provides information about a range of indicators related to the risk that a given news website will disinform its readers by spreading these adversarial narratives. These indicators are grouped under the index’s Content and Operations pillars, which respectively measure the quality and reliability of a site’s content and its operational and editorial integrity.
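GDI's actual scoring methodology is not reproduced here; purely as an illustration of how indicator scores under two pillars might roll up into an overall risk rating, the sketch below uses hypothetical indicator names, weights, and a 0-100 risk scale.

```python
# Hypothetical indicators and weights for the two pillars; these are
# assumptions for illustration, not GDI's actual indicators or weights.
CONTENT_INDICATORS = {
    "article_bias": 0.4,
    "sensational_headlines": 0.3,
    "sourcing_quality": 0.3,
}
OPERATIONS_INDICATORS = {
    "ownership_transparency": 0.5,
    "editorial_policies": 0.5,
}

def pillar_score(scores, weights):
    """Weighted average of indicator scores (0 = low risk, 100 = high risk)."""
    return sum(scores[name] * w for name, w in weights.items())

def site_risk(content_scores, operations_scores, content_weight=0.5):
    """Combine the Content and Operations pillars into one risk score."""
    content = pillar_score(content_scores, CONTENT_INDICATORS)
    operations = pillar_score(operations_scores, OPERATIONS_INDICATORS)
    return content_weight * content + (1 - content_weight) * operations

risk = site_risk(
    {"article_bias": 70, "sensational_headlines": 80, "sourcing_quality": 60},
    {"ownership_transparency": 40, "editorial_policies": 50},
)
print(f"Overall risk score: {risk:.1f}")  # higher = greater disinformation risk
```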

Our team worked with colleagues at McGill and Laval Universities to produce a report on the position of Canadian organizations, which can be reviewed here alongside reports on other countries assessed using the same framework. It is available in both English and French.

Mis- and disinformation are a growing threat to the integrity of free and fair elections and to the strength of democracies the world over. Members of our team worked with McGill University and the University of Toronto in the Canadian Election Misinformation Project to monitor and respond to mis- and disinformation incidents and threats during the 44th Canadian federal election. The initiative was housed under the Media Ecosystem Observatory at McGill’s Max Bell School of Public Policy.

People Involved

At SFU

Ph.D. student and SSHRC Joseph-Armand Bombardier Fellow in the School of Communication at Simon Fraser University.

Canada 150 Research Chair in New Media

A postdoctoral researcher studying the organization of labor in the media industry.

MA Student and research assistant at the DDI

Around the World

A leader in digital journalism

Algorithmic theatre and documentary producer.

Algorithmic theatre and documentary producer.

An expert in political economy and social media walkthroughs

An authority in the history of media outlets and policy

Digital media artist and software developer

Digital methods expert