The symposium Querying AI: Social Science and Humanities Perspectives on AI in Research and Society, held at UBC on October 27, brought together faculty, students, and members of the public for a full day of discussions. The symposium culminated in the keynote panel “Can Democracy Survive AI?” The panel featured Fenwick McKelvey (Concordia University) and Elizabeth Dubois (University of Ottawa), and was moderated by Chris Tenove, interim director of UBC’s Centre for the Study of Democratic Institutions.
Dr. McKelvey opened the discussion of AI's risks by drawing on the history of technology and politics. He traced the promise of AI to create "new magic" by linking people and information back to the long history of technologies used for microtargeting. Targeted messaging, and the promise of "knowing" people and their choices, has long been valuable to political candidates. A key task of this online targeting was clustering American voters into apt demographic groups; these "targeted clusters," matching audience demographics to zip codes, were built by aggregating publicly available data.
These data collections, as Dr. McKelvey elaborated, have bias baked into them. Indigenous people and people of color without home addresses, for example, may have no representation in public census data. When such biased data are used to create the proxies that train AI, its attempts at "new magic" turn out to be not so new after all.
Dr. Dubois also questioned the "newness" of AI and its implications. Drawing on her upcoming report, Political Uses of AI in Canada, she described three existing roles for AI in politics. First, simple automation tasks, such as chatbots that answer questions about voting locations, dates, and procedures. Second, predicting "what citizens want" using machine learning and sentiment analysis. These predictions draw on information available on the web, including but not limited to social media, rather than on existing databases alone; Polly, an AI tool used in campaign strategizing, for instance, typically feeds on online information in which not all citizens get a say. Third, using generative AI to produce political campaign ad videos. Dr. Dubois gave the example of the Alberta Party's use of an AI-generated spokesperson. A similar lack of transparency and trust marked the AI-generated robocalls the mayor of New York City sent out in Spanish.
Because disinformation spreads faster than factual knowledge on social networks, dis- and misinformation hold enormous power to undermine public trust in democracy and its institutions. On building trust and authenticity, Dr. McKelvey underlined the need for transparency and increased regulation. Pointing to watermarking as a way to flag generative AI content, he argued that such solutions are short-term at best, and that long-term, equitable legal, social, and technological systems are needed. Technological solutions to technological problems will not suffice; what is required is corporate accountability and sound social policy. He argued that Canada's proposed Consumer Privacy Protection Act (CPPA) should be more robust and should also regulate political parties' use of data. He further suggested that the Artificial Intelligence and Data Act (AIDA), which like the CPPA is part of Bill C-27, requires further discussion, including attention to the systemic harms AI systems could pose to democratic institutions.
In answer to the panel's central question on democracy and AI, Dr. Dubois and Dr. McKelvey agreed that human beings, not AI systems, will have the greatest impact on the quality of democracy. How we choose to regulate, use, and create technology shapes, and will continue to shape, our social and political fabric.