Beyond Verification: Authenticity and the Spread of Mis/Disinformation

Fake news threatens democracy in Canada and globally. The viral spread of misinformation influences election results, undermines trust in the media and in politicians, who accuse one another of spreading “fake news,” and fosters conspiracy theories and general cynicism about institutions. Social media has been singled out as the main source of fake news and its discourses. Within a decade, Twitter, Facebook, and other platforms have moved from being lauded as inherently democratic technologies to being condemned as irresponsible media publishers. The structure of the Internet itself undermines the efficacy of fact-checking: fact-checking sites always lag behind the deluge of rumors produced by misinformation sources.

Verification is not enough to combat misinformation, as fake news often reaches a different audience than its corrections. We need to go beyond verification, and this research stream investigates new solutions for countering these problems.

About the Project

Beyond Verification: Authenticity and the Spread of Mis/Disinformation leverages emerging technologies to combat fake news and benefit Canadians by developing new strategies for displacing it, targeting in particular the structures and actions that foster the production and circulation of junk content.

Fake news is not simply a question of content, but also of global data circulation, digital cultures, web economies and industries, and user- and group-identity formation, context, and trust. It works by weaving itself into the larger media environment and by provoking strong emotions. Its force is tied to the everyday actions and sensibilities of users—to how they craft themselves as “brands” via social media platforms that also restrict their actions, and how they come to trust others online and offline. Our research analyzes misinformation and authenticity within this broader environment.

Modelling Authenticity

To answer questions about what constitutes authenticity, what makes a news item appear authentic, and how authenticity can be evaluated, we are collaborating with Goodly Labs to create a coding schema for analyzing the news.

To operationalize authenticity, we have identified six characteristics/features: self-authentication, emotional intensity, transgression of conventions, culturally authenticating rhetoric, social validation, and branding. We identified these features with plain text in mind (no pictures, no images, no layouts). These categories act as operational patterns of revelation and relation that ground authenticity and its recognition. Although framed as “unscripted,” these strategies for identification are often carefully constructed to establish relationships of trust or intimacy between the writer and the audience.
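
As a rough illustration of what such a schema looks like in practice, the sketch below encodes the six features as a per-text annotation record. This is a hypothetical structure, not the project's actual codebook; the feature names simply follow the list above.

```python
# Hypothetical sketch of an authenticity coding schema. The six feature
# names follow the list above; the data structure itself is illustrative.
from dataclasses import dataclass, field

FEATURES = [
    "self_authentication",
    "emotional_intensity",
    "transgression_of_conventions",
    "culturally_authenticating_rhetoric",
    "social_validation",
    "branding",
]

@dataclass
class Annotation:
    """One coder's judgments about a single plain-text news item."""
    item_id: str
    coder_id: str
    # Each feature is coded as present (True) or absent (False).
    codes: dict = field(default_factory=lambda: {f: False for f in FEATURES})

# Example: a coder flags emotional intensity and branding in item "n001".
a = Annotation(item_id="n001", coder_id="coder_a")
a.codes["emotional_intensity"] = True
a.codes["branding"] = True
```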

The project is currently applying the codebook to a training sample to test its reliability. The sample texts relate to a few intersecting themes: COVID-19, Canadian politics, and anti-Asian hate. As research progresses, we will report on our findings.

Research Guide to Authenticity

The spread of false, misleading, and inaccurate news, one of the driving concerns of this stream, threatens democracy globally. In response, researchers, non-profit organizations, and media companies have sought to develop techniques to detect mis- and disinformation, but fact-checking, while important, is not enough. Fact-checking sites lag behind the deluge of rumors produced by global disinformation networks and spread via private interactions. Likewise, definitions of mis- and disinformation and other forms of information disorder often center on the intention of the producer or sharer of information to deceive or cause harm. While this approach can prove useful in differentiating forms of information disorder, focusing on intentions to deceive potentially limits a broader and more diverse understanding of information-sharing behaviour. How, then, might we comprehend and combat the impact of the global spread of mis- and disinformation?

This research guide attempts to answer this question, and in turn contributes to research on mis- and disinformation, by moving beyond questions of facticity to those of authenticity. Grouping the relevant research on authenticity under four common themes, it asks, and responds to, the following questions:

  1. Why and how—under what circumstances (social, cultural, technical and political)—do people find information to be true or authentic, regardless of facticity?
  2. What information do people share and create in order to appear authentic?

Once completed, the Research Guide will be available to all and will be linked here. It will be published by meson press in late 2022.

Previous Projects

This project culminated in virtual performances of the theatrical production Left and Right, or Being Where You Are in February and March 2021, led by Ioana Jucan. The bots were used during the performance in an interactive activity for the audience. An article on the project was published in September 2021.

News websites have financial incentives to spread disinformation in order to increase their online traffic and, ultimately, their advertising revenue. The dissemination of disinformation, meanwhile, has disruptive consequences; the COVID-19 pandemic offers a recent example. By disrupting society’s shared sense of accepted facts, disinformation narratives undermine public health, safety, and government responses.

To combat ad-funded disinformation, the Global Disinformation Index (GDI) deploys its assessment framework to rate news domains’ risk of disinforming their readers. These independent, trusted, and neutral ratings are used by advertisers, ad tech companies, and platforms to redirect their online ad spending in line with their brand-safety and disinformation-risk-mitigation strategies.

GDI defines disinformation as ‘adversarial narratives that create real world harm’, and the GDI risk rating provides information about a range of indicators related to the risk that a given news website will disinform its readers by spreading these adversarial narratives. These indicators are grouped under the index’s Content and Operations pillars, which respectively measure the quality and reliability of a site’s content and its operational and editorial integrity.
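
GDI publishes its own detailed methodology; purely as a toy illustration of the pillar structure described above, one can imagine each pillar averaging its indicator scores and the two pillars combining into a single site rating. The indicators, scores, and weighting below are invented for the example and are not GDI's actual metrics.

```python
# Toy illustration only: GDI's actual indicators and weighting are not
# reproduced here. Each pillar averages its indicator scores, and the
# Content and Operations pillars combine into one site-level rating.

def pillar_score(indicators):
    """Average a pillar's indicator scores (0-100, higher = lower risk)."""
    return sum(indicators.values()) / len(indicators)

def site_rating(content, operations, content_weight=0.5):
    """Combine the Content and Operations pillars into one rating."""
    return (content_weight * pillar_score(content)
            + (1 - content_weight) * pillar_score(operations))

# Hypothetical indicator scores for a single news domain.
content = {"headline_accuracy": 70, "byline_information": 55, "sensationalism": 40}
operations = {"ownership_transparency": 80, "editorial_policies": 60}
print(f"site rating: {site_rating(content, operations):.1f}")  # 62.5
```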

Our team worked with colleagues at McGill and Laval Universities to produce a report on Canadian news organizations, which can be reviewed here alongside reports on other countries assessed using the same framework. It is available in both English and French.

Mis- and disinformation are a growing threat to the integrity of free and fair elections and to the strength of democracies the world over. Members of our team worked with McGill University and the University of Toronto in the Canadian Election Misinformation Project to monitor and respond to mis- and disinformation incidents and threats during the 44th Canadian federal election. The initiative was housed under the Media Ecosystem Observatory at McGill’s Max Bell School of Public Policy.

People Involved

At SFU

  • Ph.D. student and SSHRC Joseph-Armand Bombardier Fellow in the Department of Communications at Simon Fraser University
  • Canada 150 Research Chair in New Media
  • A postdoctoral researcher studying the organization of labor in the media industry
  • MA student and research assistant at the DDI

Around the World

  • A leader in digital journalism
  • Algorithmic theatre and documentary producer
  • Algorithmic theatre and documentary producer
  • An expert in political economy and social media walkthroughs
  • An authority in the history of media outlets and policy
  • Digital media artist and software developer
  • Digital methods expert

01. Beyond Verification

In the first article exploring the research of the Digital Democracies Institute, we focus on the Beyond Verification stream, which takes on the viral spread of mis/disinformation by focusing on questions of “authenticity.” Why? Because fact-checking is important but not enough. It alone does not dispel misinformation (inadvertent) and disinformation (intentional) as:

  • fact-checking sites lag behind the deluge of rumors produced by disinformation sources and spread via private interactions;
  • corrections and ‘fake news’ stories often reach very different audiences;
  • corrections can create new interest in debunked stories;
  • users spread stories they find compelling or funny, regardless of their accuracy.

Tellingly, the 2016 U.S. presidential election was described both as “the authenticity election” and as normalizing ‘fake news.’ So how can we understand and best counter the power of mis/disinformation?

To answer this question, we start with the authenticity of misinformation, but broaden it to investigate how and under what circumstances (social, cultural, historical, and technical) information is deemed ‘truthful.’ Fact and truth are related but not interchangeable: fact is linked to feat, or acts done; truth and trust share the same root, as do authenticity, authority, and authorship.

Throughout the projects discussed below, we study the impact of authenticity on: 1) the habitual actions of users, and how they craft their identities via social media platforms; 2) behind-the-screen data capture, used by algorithms to profile and cluster users; 3) infrastructures and interactions that foster group-identity formation and trust; and 4) modes of engagement that best displace mis/disinformation. We are pursuing: 1) historical and theoretical analyses of authenticity; 2) qualitative and quantitative investigations into how platforms and third-party aggregators authenticate and profile users, and into which interactions users find most authentic; and 3) the creation and deployment of research personae to reveal how platforms use obfuscated mechanisms to restrict and influence on- and offline user actions and perceptions of trust.

So how is this work achieved? Within the Beyond Verification remit, there are three projects, all involving international or national collaborations. The first, with Goodly Labs, creates a model of authenticity based on ten characteristics/features: spontaneity, affective intensity, self-disclosure, transgression of conventions, branding and endorsement, community building, personal accountability as evidence, rhetorical style, rebel/alternative media, and audience engagement. Some of these concepts have several meanings and definitions, and there is some overlap between them. We identified these features with plain text in mind (no pictures, no images, no layouts). These categories act as operational patterns of revelation and relation that ground authenticity and its recognition. Although framed as “unscripted,” these strategies for identification are often carefully constructed to establish relationships of trust or intimacy between the writer and the audience.

At this stage, the project is applying the codebook to a training sample to test its reliability, and coders from SFU, York, Ryerson, and Emerson are working to identify a cohesive coding schema. The sample texts relate to four intersecting themes: environment, Canadian politics, COVID-19, and Indigenous rights.
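
A standard way to quantify this kind of inter-coder reliability is a chance-corrected agreement statistic such as Cohen's kappa. The project's exact reliability procedure is not specified here, so the sketch below is only a minimal illustration for two coders' binary judgments on one feature; the sample codes are made up.

```python
# Minimal Cohen's kappa for two coders' binary codes on a single feature.
# Illustrative only; the project's actual reliability tests may differ.

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    p_a = sum(coder_a) / n  # each coder's rate of coding "present"
    p_b = sum(coder_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement by chance
    return (observed - expected) / (1 - expected)

# Hypothetical codes (1 = feature present) across ten sample texts.
coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # kappa = 0.58
```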

The second project involves the creation of a persona called ‘Charlie.’ Charlie has a presence on a number of social media platforms, including dating apps, Facebook, and Instagram, but is actually a fiction, created and maintained by the team. Charlie’s purpose is to help us understand how algorithms work to produce mis/disinformation. If he likes a number of football-related news articles on Facebook, and footballers’ profiles on Instagram, what advertisements become prevalent? What news articles are generated? This project is a collaboration with the University of Amsterdam: working in groups, the researchers make the persona interact with content on social media platforms to identify the different tactics used by dis- and misinformation actors. Having established a rich background for the fictional persona, the researchers brainstorm and identify what kind of content the persona would most likely respond to and why. Using digital methods, the researchers map out the kinds of algorithmic personalization processes that push the persona towards different homophilic communities.
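
The feedback loop this method probes can be sketched abstractly: each interaction with a topic increases that topic's share of future recommendations, gradually pulling the persona into a homophilic cluster. The toy simulation below is not any platform's real algorithm; the topics and the update rule are invented for illustration.

```python
import random

# Toy simulation of a homophily-driven feed: every "like" on a topic
# increases that topic's weight in future recommendations. Real platform
# algorithms are far more complex and not publicly documented.

TOPICS = ["football", "politics", "music"]

def recommend(weights):
    """Sample the next feed item in proportion to the persona's weights."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

weights = {t: 1.0 for t in TOPICS}  # the feed starts with no preference
persona_likes = "football"          # e.g. Charlie engages with football

for _ in range(200):
    item = recommend(weights)
    if item == persona_likes:       # each like reinforces the topic
        weights[item] += 0.5

total = sum(weights.values())
for t in TOPICS:
    print(f"{t}: {weights[t] / total:.0%} of future recommendations")
```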

This is known as the ‘research persona method’ and takes place over time. Its purpose is not only to track personalized information disorder, but to understand how it could be combatted through interventions at the levels of policy, platform algorithms, and user experience. For instance, policy regulations on using user data for political manipulation could be accompanied not only by greater transparency about the algorithmic processes in social media platforms, but also by the development of new algorithmic processes that depart from the homophilic model. Likewise, creating user experiences that bring to light the affective dynamics of disinformation campaigns would enable users to experience new modes of being and relating to each other and to information online.

Thirdly, the ‘Serial Bots’ project takes a fascinating interdisciplinary approach to this research, combining computer science and the performing arts in a unique experiment. We are designing a series of bots that use a machine-learning algorithm to learn and produce news. Each bot differs only by its input. The main goal of the study is to explore how bots grow biased, by simulating the algorithms found on social media platforms, which have been criticized for producing online ‘echo chambers.’ Two bots are programmed to receive only news with a left-wing or a right-wing angle, one for each; each then generates text based on the input it has received, and the output from both bias bots is in turn fed to a third bot. Which side will it lean towards? What will the text it produces read like?
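
The project's bots use a machine-learning algorithm; as a deliberately simplified stand-in, the sketch below uses a word-level Markov chain to make the pipeline concrete: two "bias bots" each trained on a single corpus, and a third bot trained only on their combined output. The corpora here are placeholders, not the project's training data.

```python
import random
from collections import defaultdict

# Simplified stand-in for the Serial Bots pipeline. The real project uses a
# machine-learning algorithm; a word-level Markov chain shows the structure.

def train(corpus):
    """Build a word -> possible-next-words table from training text."""
    model = defaultdict(list)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, length=12):
    """Produce text by walking the transition table from a random start."""
    word = random.choice(list(model))
    out = [word]
    for _ in range(length - 1):
        if word not in model:
            break
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

# Placeholder corpora standing in for left- and right-angled news feeds.
left_bot = train("placeholder text standing in for left leaning news input")
right_bot = train("placeholder text standing in for right leaning news input")

# The third bot is trained only on what the two bias bots produce.
third_bot = train(generate(left_bot) + " " + generate(right_bot))
print(generate(third_bot))
```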

The project then turns to examine performativity, drawing on the expertise of Ioana Jucan and Melody Devries, and argues that performance – that is, an embodied “doing” of a certain script or style-of-being in the world – is key to the creation of an authenticity that sustains the spread of dis- and misinformation. Rather than conceptualizing authenticity as necessarily attached to concepts of truth or “real identity,” Ioana and Melody’s work argues that what undergirds the manufacture of an authentic self and an impression of authenticity are processes of identification that run on emotional experiences with one’s environment, other humans, and media content.

To develop its argument, the project uses (and makes a claim for) performance both as an object and as a method of investigation, taking as a case study the online theatre performance Left and Right, or Being where/who one is, which Ioana is currently developing. The performance draws on the concept of the homophilic avatar developed by Devries (Devries 2020) to showcase the interactional processes of identification that verify political realities and interactions as trustworthy. These are processes of performativity (Butler 1988), marked by Devries as the embodiment of an avatar that scripts not only a way-of-being in the world, but an authentic [politicized] world itself. Subsequently, this project defines “authenticity” as that which is verified through experiential, emotional interactions with the world, regardless of what we might call “objective” truth.

We began this article by stating that verification alone does not dispel misinformation and disinformation. Corrections and ‘fake news’ stories often reach very different audiences; corrections can create new interest in debunked stories; and users spread stories they find compelling or funny, regardless of their accuracy. Our work addresses the vital question of how we can understand and best counter the power of mis/disinformation, drawing on our interdisciplinary approach and essential collaborators to keep this dangerous feature of modern life from causing further harm. You can find out more about the various Beyond Verification research projects on our website here.