AI’s High Risk in the Election Landscape

Elections Watchdog Warned of High AI Risks in Current Campaign

An internal briefing note prepared for Canada’s election watchdog highlights the use of artificial intelligence (AI) as a high risk for the ongoing election campaign. This document was created for the Commissioner of Canada Elections, Caroline Simard, roughly a month before the campaign began.

AI’s Potential Impact on Elections

The briefing note suggests that the election will likely generate complaints about the use of AI tools in ways that could violate the Canada Elections Act. While AI can serve legitimate purposes, the document emphasizes that it poses significant risks of contravening election rules.

Concerns Over Disinformation

According to the note, the Elections Act does not explicitly prohibit the use of AI, bots, or deepfakes; however, certain provisions could apply if AI tools are misused. Violations may encompass the spreading of disinformation, publishing false information about the electoral process, or impersonating election officials.

Michael Litchfield, director of the AI risk and regulation lab at the University of Victoria, pointed out the challenges of identifying individuals who misuse AI to violate election rules. The inability to trace these actions complicates enforcement efforts.

Deepfakes and Their Threats

The briefing note raises specific concerns about deepfakes, hyperrealistic fabricated video or audio. While no deepfake incidents have been reported in Canadian federal elections, there have been multiple examples abroad, including a notable deepfake involving Kamala Harris during the 2024 U.S. presidential election. The document warns that similar incidents could occur in Canada.

Generative AI can create convincing fakes that may significantly impact public perception, even if they are quickly debunked. The note states that an increase in advertising for customized deepfake services on the dark web has been observed, highlighting the potential for misuse during the election.

Existing Regulatory Framework

The document indicates that Canada has generally taken a self-regulation approach to AI, leaving much of the oversight to the tech industry. The effectiveness of this approach is questionable: although some AI image generators have policies against election disinformation, those policies have failed to prevent the creation of misleading content.

Bill C-27, which would have partially regulated the use of AI in elections, was introduced but did not pass before the legislative session ended. Experts believe that even if new regulations are adopted, they may take time to enforce, leaving Canada in a regulatory vacuum in the meantime.

Future Concerns

As the current election campaign progresses, there are already signs of AI being used to disseminate misinformation. AI-generated articles have surfaced, spreading dubious information about party leaders’ personal finances. Furthermore, fake election news ads have attempted to lure Canadians into fraudulent investment schemes.

Fenwick McKelvey, an assistant professor at Concordia University, noted that the misuse of AI contributes to a decline in public trust in online content, complicating the credibility of legitimate information sources.

The briefing note warns that the use of AI is likely to trigger numerous complaints during the election campaign, even in cases where no specific rules have been broken. These complaints could require complex assessments and place a significant burden on the enforcement process.

Conclusion

The ongoing developments in AI technology present both opportunities and challenges in the context of elections. While AI can enhance campaign strategies, its potential for misuse raises significant concerns about the integrity of the electoral process. As the election landscape evolves, it is crucial to navigate these challenges carefully to preserve the democratic process.
