This post was written by Ben Scholl, a PhD researcher at the Digital Democracies Institute (DDI) working on the From Hate to Agonism: Fostering Democratic Exchange Online project.
Yuan Stevens, an LL.M. candidate at the University of Ottawa Faculty of Law and a collaborator at McGill University’s Centre for Media, Technology and Democracy, visited the Digital Democracies Institute on March 2nd, 2022, as the latest thinker in our Spring Speaker Series. Her inspiring presentation on the regulation of automated facial recognition (AFR) technologies provocatively frames them as automated biometric surveillance and analytics tools or systems. Her approach raises awareness of the disembodied, biological nature of our data and of the chilling effects of its real-time automatic processing. Yuan demands that Canadian regulators take stock of the wider picture, identify the gaps in federal and provincial privacy law, and confront the inadequacy of soft-power legal reprimands.
Illustrating the state of affairs, we are reminded that the Royal Canadian Mounted Police, among other Canadian law enforcement agencies, was recently caught in a lie regarding the deployment of AFR software against the public. The product, provided by the U.S.-based company Clearview AI, is designed to indiscriminately scrape people’s photographs from sites like Facebook and Instagram, without the consent of individuals or of the host websites, to assemble a massive, automated face-matching police line-up. In essence, police forces need no longer burden themselves with the drudgery of lawfully detaining citizens, lining them up in a room of ‘suspects’, and staring inquisitively through a one-way mirror in search of their offender. Today, companies like Clearview AI will happily take the Queen’s shilling and automate these duties. Similar applications of AFR include deployment through CCTV cameras, which monitor public and quasi-public spaces in real time and likely dissuade citizens from exercising their right to assemble and protest while maintaining anonymity. Through these examples Yuan reminds us that AFR is riskiest when applied one-to-many (meaning an individual’s features are compared to those of many), in real time, and in public.
In response to the context provided by the RCMP’s use of Clearview AI software, Yuan asserts that Canada’s existing privacy law is too limited in scope and enforceability. While privacy commissioners have instructed Clearview to cease service provision in Canada and to delete Canadian citizens’ data, the company has responded with legal challenges. It claims protection under its right to free expression and holds that it would be impossible to know which images to delete, despite having done exactly that to comply with Illinois state law. Yuan notes that there is indeed no law which directly regulates automated biometric surveillance in Canada. Thus, her presentation proceeds to tackle the questions: Do we need to regulate AFR and, if so, why? What does the law understand as AFR? And how is this similar to or different from other technological innovations? These questions guide her current research trajectory, as she has been examining the sociological implications of artificial intelligence since 2017, with a more recent focus on AFR over the past couple of years.
Structuring her exploration of Canada’s AFR regulation, Yuan applies Lawrence Lessig’s pathetic dot theory. She explains that Lessig’s theory is a common approach in North American legal examinations of technology regulation, especially of the internet, because it considers more than just the law’s role in regulation. While far from perfect, the pathetic dot theory is useful when making the case that Canada needs to regulate AFR with the scrutiny and urgency owed to automated biometric surveillance and analytics systems. Lessig’s theory boils down to the notion that the pathetic dot (a thing subjected to regulation) can be governed by architecture, markets, norms, and laws. Yuan considers these four modes of regulation in the context of AFR, as well as historical examples of the regulation of emerging disruptive technologies.
Figure: Lessig’s four modes of regulation acting on the ‘pathetic dot’. Adapted from Lessig, L. (2006). Code: Version 2.0. Basic Books.
Regulation through Architecture
AFR may be regulated, first and foremost, architecturally: during its development stage. A software developer’s choice of coding languages carries fundamental consequences in terms of coding norms, parameters, and logics, which shape the resulting tool. This level of regulation, as Yuan explains, constrains our behaviour and shapes the affordances of a technology in its very conception. Furthermore, the use of datasets which have been shown to lack diversity for training and developing different forms of artificial intelligence (i.e., machine learning algorithms, including AFR) exacerbates issues of racism, sexism, ageism, classism, and other biases. This is often encapsulated in the saying ‘garbage in, garbage out’. An existing effort to regulate software through architecture is the use of technical audits. This form of audit is deployed to generate architectural transparency, identifying the coding languages, mathematical processes, algorithms, and datasets used in a software product. However, Yuan asserts that this form of regulation has major weaknesses. The primary issue is that failing a technical audit does not necessarily stop the use of the product under analysis. Additionally, even a ‘perfect’ surveillance technology that passes a technical audit is still, at the end of the day, a surveillance technology.
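To make the audit idea concrete, below is a minimal, hypothetical sketch of one check an auditor might run: comparing a face-matching system’s false match rate across demographic groups on a labelled evaluation set. The names and data are invented for illustration and are not drawn from Yuan’s talk; real technical audits also examine training data provenance, model design, and source code.

```python
# A hypothetical fragment of a technical audit: measure whether a
# face-matching system falsely "matches" some demographic groups
# more often than others. All labels and results below are invented.

from collections import defaultdict

# Each record: (group_label, is_true_match, system_said_match)
evaluation_results = [
    ("group_a", False, True),   # false match
    ("group_a", False, False),
    ("group_a", True, True),
    ("group_b", False, True),   # false match
    ("group_b", False, True),   # false match
    ("group_b", True, True),
]

def false_match_rate_by_group(results):
    """False match rate = false matches / all true non-match pairs, per group."""
    false_matches = defaultdict(int)
    non_match_pairs = defaultdict(int)
    for group, is_true_match, said_match in results:
        if not is_true_match:
            non_match_pairs[group] += 1
            if said_match:
                false_matches[group] += 1
    return {
        group: false_matches[group] / non_match_pairs[group]
        for group in non_match_pairs
    }

rates = false_match_rate_by_group(evaluation_results)
for group, rate in sorted(rates.items()):
    print(f"{group}: false match rate = {rate:.0%}")

# A large disparity between groups is audit evidence of bias; but, as
# Yuan notes, failing such an audit does not by itself stop deployment,
# and a system that passes is still a surveillance technology.
```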
Regulation through Markets
In terms of Lessig’s theory, market-based corporate decisions (i.e., the decision to serve a market, supply a niche, develop a new product, etc.) operate outside the purview of law, but may be seen as regulatory when it comes to technological innovation. Yuan exemplifies the regulatory power of these decisions, or marketing strategies, through the case of Clearview AI. The company initially marketed its products to law enforcement and is seeking to extend its reach far beyond police under the guise of democratizing access to its tech, in a manner akin to Google’s search capabilities. This illuminates the market’s power to regulate and shape technology, as profit motives entice Clearview AI to repackage the ‘free’ capability of reverse image search as a for-cost service. This is perhaps even clearer when corporations decide to withhold their products or services from the market in the name of corporate social responsibility. Yuan exemplifies this through Microsoft, Amazon, and IBM’s decisions to bar police from using their AFR technology for one year in the wake of the tragic murder of George Floyd in 2020. This is an example of industry self-regulation, which is a risky approach to AFR protection since it requires placing significant trust in companies to prevent human rights harms, something which many in the human rights community think is impossible.
Regulation through Norms
Cultural and social norms, whether the unwritten codes of industry standards or the guidance of non-binding professional institutions, may also be seen as a source of regulation, Yuan argues. These norms find form and function in organizational funding practices and community-led advocacy efforts. Prominent examples of community-based advocacy addressing the harms of AFR include:
- The international Ban the Scan movement, co-led by organizations such as Amnesty International and the Surveillance Technology Oversight Project;
- The European Reclaim Your Face movement, spearheaded by numerous civil society organizations, including Access Now, Article 19, AlgorithmWatch, EDRi, and Privacy International; and
- Campaigns and efforts in the U.S. regarding AFR led by organizations like the Algorithmic Justice League, Fight for the Future, and the Electronic Frontier Foundation.
Another instance of norms as regulation is the use of organizational policies that are ambiguous and/or lack enforceability. Yuan’s example is the Toronto Police Services Board’s policy governing the service’s use of AI-powered technology. She notes that it does not clearly indicate consequences for violations and therefore functions as norms-based regulation.
Regulation through Laws
Lastly, there is the role that the law plays in regulating technological innovation, which generally reflects the state’s attitude towards such innovation. Yuan notes that the U.S. has historically been rather permissive in its regulation of innovation, although more recent state-level interventions have pushed for enforceable privacy and data protection. In contrast, the E.U. has been notably more interventionist, extending human rights to data protection in 1995 and strengthening that protection in 2016 through the GDPR. Recently, in Europe, there have been efforts to regulate artificial intelligence technology as a commodity, thus instituting safety rules and compliance requirements. Although the efficacy of this approach remains to be seen, the ability of the law to regulate AFR, in Yuan’s estimation, generally hinges on its enforceability.
AFR Regulation: Back to the Future
In search of guidance, Yuan turns to historical examples of regulating revolutionary technological innovations, focusing on the regulation of the printing press, books, and automobiles. She explains that these examples are comparable to AFR because they all shape and reproduce our understanding of the world while posing significant hazards or risks. Their main differentiator is materiality: software is far more easily disseminated, and it is harder to imagine what limiting or prohibiting it would look like. A successful regulatory device common to each of these cases is the implementation of sui generis (that is, entirely new, purpose-built) laws that balance public and private interests.
On this note, Yuan concludes that, just as Canada implemented sui generis laws to account for the undesired outcomes and safety risks of past technological innovations, it must also explore addressing AFR through completely new laws, because markets, norms, and architecture have proved ineffective. The proposed EU AI Act may serve as a good example to learn from, as it tackles AI-driven products as commodities, a strategy she believes may be valuable in addition to extending human rights to data protection. Yuan asserts that automated biometric surveillance and analytics systems are neither inevitable nor harmless. They pose significant risk to the public: they collect sensitive personal data en masse, can result in arbitrary institutional decisions, and are known to have significant discriminatory impacts. Governments can impose, and have imposed, swift bans on technology when it serves them, which raises the question: what will it take for lawmakers to implement enforceable laws in Canada for intrusive surveillance technologies such as AFR?