01. Beyond Verification

In this first article exploring the research of the Digital Democracies Institute, we focus on the Beyond Verification stream, which takes on the viral spread of mis- and disinformation by focusing on questions of “authenticity.” Why? Because fact-checking is important but not enough. It alone does not dispel misinformation (inadvertent) or disinformation (intentional) because:

  • fact-checking sites lag behind the deluge of rumors produced by disinformation sources and spread via private interactions;
  • corrections and ‘fake news’ stories often reach very different audiences;
  • corrections can create new interest in debunked stories;
  • users spread stories they find compelling or funny, regardless of their accuracy.

Tellingly, the 2016 U.S. presidential election was described both as “the authenticity election” and as normalizing ‘fake news.’ So how can we understand and best counter the power of mis/disinformation?

To answer this question, we start with the relationship between authenticity and misinformation, but broaden it to investigate how and under what circumstances (social, cultural, historical, and technical) information is deemed ‘truthful.’ Fact and truth are related but not interchangeable. Fact is linked etymologically to feat, or acts done; truth and trust share the same root, as do authenticity, authority, and authorship.

Throughout the projects discussed below, we study the impact of authenticity on: 1) the habitual actions of users, and how they craft their identities via social media platforms; 2) behind-the-screen data capture, used by algorithms to profile and cluster users; 3) infrastructures and interactions that foster group-identity formation and trust; and 4) modes of engagement that best displace mis/disinformation. Our methods include: 1) historical and theoretical analyses of authenticity; 2) qualitative and quantitative investigations into how platforms and third-party aggregators authenticate and profile users, and into which interactions users find most authentic; and 3) the creation and deployment of research personae to reveal how platforms use obfuscated mechanisms to restrict and influence on- and offline user actions and perceptions of trust.

So how is this work achieved? Within the Beyond Verification remit, there are three projects, all involving international or national collaborations. The first, with Goodly Labs, creates a model of authenticity based on ten characteristics: spontaneity, affective intensity, self-disclosure, transgression of conventions, branding and endorsement, community building, personal accountability as evidence, rhetorical style, rebel/alternative media, and audience engagement. Some of these concepts have several meanings and definitions, and there is some overlap between them. We identified these features with plain text in mind (no pictures, no images, no layouts). These categories act as operational patterns of revelation and relation that ground authenticity and its recognition. Although framed as “unscripted,” these strategies of identification are often carefully constructed to establish relationships of trust or intimacy between writer and audience.

The project is currently applying the codebook to a training sample to test its reliability, with coders from SFU, York, Ryerson, and Emerson working to identify a cohesive coding schema. The sample texts relate to four intersecting themes: the environment, Canadian politics, Covid-19, and Indigenous rights.
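To give a concrete sense of what such a reliability test involves, here is a minimal sketch in Python. It is illustrative only: the coders’ judgements are invented, binary labelling of a single feature is a simplification, and the article does not specify the project’s actual coding tools. Cohen’s kappa, a standard chance-corrected agreement score, stands in for whatever reliability statistic the team uses.

    from collections import Counter

    # The ten authenticity features from the codebook.
    FEATURES = [
        "spontaneity", "affective intensity", "self-disclosure",
        "transgression of conventions", "branding and endorsement",
        "community building", "personal accountability as evidence",
        "rhetorical style", "rebel/alternative media", "audience engagement",
    ]

    def cohens_kappa(coder_a, coder_b):
        """Chance-corrected agreement between two coders' label lists."""
        n = len(coder_a)
        observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
        # Expected agreement if each coder labelled at random with their
        # own observed label frequencies.
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        expected = sum(freq_a[x] * freq_b[x] for x in freq_a | freq_b) / n ** 2
        return (observed - expected) / (1 - expected)

    # Invented judgements: did each of ten sample texts display
    # "self-disclosure" (1) or not (0), according to two coders?
    coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    coder_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
    print(f"kappa: {cohens_kappa(coder_a, coder_b):.2f}")  # ~0.58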

The second project involves the creation of a persona called ‘Charlie’. Charlie has a presence on a number of social media platforms, including dating apps, Facebook, and Instagram, but is in fact a fiction, created and maintained by the team. Charlie’s purpose is to help us understand how algorithms work to produce mis- and disinformation. If he likes a number of football-related news articles on Facebook, and footballers’ profiles on Instagram, which advertisements become prevalent? Which news articles are generated? This project is a collaboration with the University of Amsterdam: working in groups, the researchers make the persona interact with content on social media platforms to identify the different tactics used by dis- and misinformation actors. Having established a rich background for the fictional persona, the researchers brainstorm and identify what kind of content the persona would most likely respond to and why. Using digital methods, they then map out the kinds of algorithmic personalization processes that push the persona towards different homophilic communities, as sketched below.
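As a rough illustration of how such a mapping might be recorded, here is a minimal Python sketch. Everything in it is hypothetical: the log entries, topic labels, and per-session tally are invented for illustration, and the article does not detail the team’s actual digital-methods tooling.

    from dataclasses import dataclass
    from datetime import date
    from collections import Counter

    @dataclass
    class Exposure:
        """One item the persona was shown during a research session."""
        day: date
        platform: str   # e.g. "Facebook", "Instagram"
        kind: str       # "ad" or "news"
        topic: str      # coded by the researchers after the session

    # Invented log: Charlie likes football content early on, and the
    # feeds are re-checked in later sessions.
    log = [
        Exposure(date(2021, 3, 1), "Facebook", "news", "football"),
        Exposure(date(2021, 3, 1), "Instagram", "ad", "sportswear"),
        Exposure(date(2021, 3, 8), "Facebook", "news", "football"),
        Exposure(date(2021, 3, 8), "Facebook", "ad", "betting"),
        Exposure(date(2021, 3, 15), "Facebook", "news", "fan politics"),
    ]

    # Tally topics per session: a crude map of how personalization
    # narrows (or shifts) what the persona is shown over time.
    for day in sorted({e.day for e in log}):
        topics = Counter(e.topic for e in log if e.day == day)
        print(day, dict(topics))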

This is known as the ‘research persona method’ and takes place over time. Its purpose is not only to track personalized information disorder, but also to understand how it could be combatted through interventions at three levels: policy, algorithmic design, and user experience. For instance, policy regulations on the use of user data for political manipulation could be accompanied not only by greater transparency about the algorithmic processes at work in social media platforms, but also by the development of new algorithmic processes that depart from the homophilic model. Likewise, creating user experiences that bring to light the affective dynamics of disinformation campaigns would enable users to experience new modes of being and relating to each other and to information online.

Thirdly, the ‘Serial Bots’ project takes a fascinating interdisciplinary approach to this research, combining computer science and the performing arts in a unique experiment. We are designing a series of bots that use a machine-learning algorithm to learn and produce news, with each bot differing only in its input. The main goal of the study is to explore how bots become biased, by simulating the kinds of algorithms found on social media platforms that have been criticized for producing online ‘echo chambers’. Two bots are fed only news with either a left-wing or a right-wing angle, one for each. Each bot then generates text based on the input it has received, and the output from both biased bots is in turn fed to a single third bot. Which way will it lean? What will the text it produces read like?
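The article does not name the learning algorithm, so the sketch below substitutes a deliberately simple one, a word-level Markov chain, purely to show the shape of the experiment: two generators trained on separate (here caricatured) inputs, and a third trained only on their output.

    import random
    from collections import defaultdict

    class MarkovBot:
        """Toy word-level Markov text generator; a stand-in for the
        project's (unspecified) machine-learning model."""

        def __init__(self, seed=0):
            self.chain = defaultdict(list)
            self.rng = random.Random(seed)

        def learn(self, text):
            words = text.split()
            for prev, nxt in zip(words, words[1:]):
                self.chain[prev].append(nxt)

        def produce(self, start, length=12):
            out = [start]
            for _ in range(length):
                followers = self.chain.get(out[-1])
                if not followers:
                    break
                out.append(self.rng.choice(followers))
            return " ".join(out)

    # Caricatured inputs standing in for partisan news feeds.
    left_bot, right_bot = MarkovBot(seed=1), MarkovBot(seed=2)
    left_bot.learn("the government must fund public services for everyone")
    right_bot.learn("the government must cut taxes and fund private enterprise")

    # The third bot learns only from what the two biased bots produce.
    third_bot = MarkovBot(seed=3)
    third_bot.learn(left_bot.produce("the"))
    third_bot.learn(right_bot.produce("the"))
    print(third_bot.produce("the"))  # which way does it lean?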

The project then turns to examine performativity, drawing on the expertise of Ioana Jucan and Melody Devries, and argues that performance, that is, an embodied “doing” of a certain script or style-of-being in the world, is key to the creation of the authenticity that sustains the spread of dis- and misinformation. Rather than conceptualizing authenticity as necessarily attached to concepts of truth or “real identity”, Ioana and Melody’s work argues that what undergirds the manufacture of an authentic self and an impression of authenticity are processes of identification that run on emotional experiences with one’s environment, other humans, and media content.

To develop this argument, their project uses (and makes a claim for) performance both as an object and as a method of investigation, taking as a case study the online theatre performance Left and Right, or Being where/who one is, which Ioana is currently developing. The performance draws on the concept of the homophilic avatar developed by Devries (Devries 2020) to showcase the interactional processes of identification that verify political realities and interactions as trustworthy. These are processes of performativity (Butler 1988), marked by Devries as the embodiment of an avatar which scripts not only a way-of-being in the world, but an authentic [politicized] world itself. This project thus defines “authenticity” as that which is verified through experiential, emotional interactions with the world, regardless of what we might call “objective” truth.

We began this article by noting that verification alone does not dispel misinformation and disinformation: corrections and ‘fake news’ stories often reach very different audiences; corrections can create new interest in debunked stories; and users spread stories they find compelling or funny, regardless of their accuracy. Our work addresses the vital question of how we can understand and best counter the power of mis/disinformation, drawing on our interdisciplinary approach and essential collaborators to stop this dangerous feature of modern life from causing further harm. You can find out more about the various Beyond Verification research projects on our website.