Regulating AI Chatbots: A Misguided Approach
The Canadian government, led by AI Minister Evan Solomon, has opened discussions with OpenAI executives over the company's decision not to alert law enforcement about a mass shooter's flagged account. The episode has sparked a broader conversation about how artificial intelligence (AI), and AI chatbots in particular, should be regulated.
The Context of the Discussion
The meeting with OpenAI executives followed a tragic incident in which a shooter killed eight people. Solomon expressed disappointment, saying OpenAI had failed to present substantial new safety protocols. Justice Minister Sean Fraser warned that if OpenAI does not make the necessary changes, the government will step in to regulate AI companies.
Attention has since shifted to the Online Harms Act (Bill C-63) as a potential regulatory vehicle. Although the bill lapsed last year, it is expected to resurface in some form, and the government has convened an expert advisory panel on online harms to assist in that effort.
Concerns About the Online Harms Act
However, there are significant concerns about whether the Online Harms Act can sensibly apply to AI chatbots. The act was designed to regulate social media platforms, and it deliberately excluded private communications and proactive monitoring from its scope. Those exclusions were meant to avoid the surveillance concerns that drew criticism of the government's earlier proposals.
Applying the Online Harms Act to AI chatbots would require dismantling these core privacy safeguards. Chatbot interactions are fundamentally different from social media activity: they are typically one-on-one exchanges rather than public communication. The distinction matters because the act was built to target platforms where harmful content spreads rapidly through sharing and recommendation systems.
The Privacy Safeguards at Risk
Section 6 of the act makes clear that its duties do not apply to private messaging features, a boundary drawn deliberately to protect user privacy. Bringing chatbot prompts under the act would require narrowing or bypassing those protections, which are essential to maintaining user confidence in AI technologies.
Moreover, Section 7(1) provides that operators are not required to proactively search for harmful content. The push to apply the Online Harms Act to AI chatbots runs directly against this provision, since identifying potentially dangerous behavior would require monitoring private exchanges, precisely what the act was structured to avoid.
The Implications of Over-Regulation
Previous attempts to regulate online harms faced widespread backlash over fears of over-reporting and expanded surveillance of lawful expression. Critics warned that requiring platforms to monitor user communications would effectively deputize them as law enforcement agents, blurring the line between addressing harmful content and infringing on individual rights.
Applying the Online Harms Act to AI chatbots could reintroduce these very problems. If emails and texts are shielded from proactive monitoring, AI chatbot interactions should be treated the same way. The challenge lies in striking a balance that protects user privacy while still ensuring safety.
A Call for Specific Legislation
The Online Harms Act failed to gain traction in part because it attempted to cover too much, and stretching it to include AI chatbots carries the same risk. Rather than broadening existing legislation, a more effective approach would be specific, transparency-focused regulation that addresses AI companies' user safety policies and how they are implemented.
The path forward requires care: any regulatory framework for AI chatbots must address legitimate safety concerns without compromising user privacy.