Fenwick McKelvey & Reza Rajabiun – AI Governance in Canada? Uneven Opportunities

Fenwick McKelvey and Reza Rajabiun joined us virtually at the DDI to discuss Artificial Intelligence (AI) governance in Canada. Their presentation explored a case study in consultation transparency and policy-making at the Canadian Radio-television and Telecommunications Commission (CRTC), focusing on the Commission's ongoing review of an application of AI by the telecom company Bell. As part of a larger SSHRC-funded project, “Media Governance after AI”, they used the case to ask: “how can scholars meaningfully engage in public policy consultation processes?”

Fenwick and Reza explained that in July 2019, Bell submitted a request to the CRTC for permission to block “fraudulent and scam voice calls” on a temporary basis – a program now under review for permanent approval. As far as they know, the technology involves machine learning to block such calls at the transit level. The system applies to any call transiting Bell’s networks, including calls originating on other networks, likely making it one of the largest applications of AI in Canada. They identified several key areas of governance that the case touches on: AI ethics, cybersecurity, copyright, and more.

A central issue throughout the hearings has been due diligence in the implementation of Bell’s AI system. Fenwick and Reza asked Bell to conduct an Algorithmic Impact Assessment (AIA) of the system and make it public, but Bell has not done so. They then asked the CRTC to conduct an AIA, as well as a Privacy Impact Assessment (PIA), so far unsuccessfully. Given the lack of information on the public record of the CRTC proceeding, they remain concerned about the system’s potential to block legitimate calls (i.e. false positive errors). For example, could the system disproportionately impact diasporic communities expecting calls from abroad? What are the long-term implications of network-level traffic blocking?

[Source: McKelvey, F. & Rajabiun, R., 15 September 2021]

For Fenwick and Reza, this case is significant because, if left unchecked, it further normalizes zero-transparency automated decision-making. Already, when documentation is released to the public, it is redacted beyond comprehension, and further access requires non-disclosure agreements that last in perpetuity.

More than anything, they showed us that, given this landscape, it is crucial for scholars to be involved in public policy processes that implicate applications of AI to decision-making by governments and businesses. Without the proper impact assessments, there can be no public oversight of how the system works, nor accountability for the false positive and false negative errors inherent in automated decision-making.

They noted that the CRTC is a legacy of the public regulatory agencies that evolved in the post-WWII era, and it is relatively unique in providing some measure of open public consultation and engagement with multiple stakeholders. However, it is changing in problematic ways. We now live in a “paradigm of regulation that is quite market oriented”, and non-disclosure agreements are increasingly “undermining capacity for public scholarship. Without enough transparency, we cannot appeal CRTC decisions”. The institution has limited resources and lacks the expertise to ask the right questions, so “if we don’t do it, no one else will. There are gaps in the consultation process”.

Fenwick and Reza explained that we are situated in a unique regulatory jurisdiction. First, several institutions regulating AI in Canada engage in “consultation theatre”, including Heritage Canada, the Treasury Board, Industry Canada, and the Competition Bureau. They discussed opportunities for scholars to intervene in each of them. AI governance across multiple jurisdictional levels is complex: “international negotiations do not necessarily translate into rule-making, and scholars have potential to have bearing on this rule-making process”.

Fenwick and Reza situate their work within the long tradition of Canadian communication policy scholars who are oriented towards the public interest through activism and engaged scholarship. As this case goes forward, and their project develops, they will continue to ask: “what are the limits and standards of public transparency and disclosure, and how do we approach and intervene in the process of AI governance?” They invite us to stay in touch for opportunities to conduct comparative research and discuss other examples of AI governance in media regulation. We can follow their work at: https://amo-oma.ca.