AI’s High Risk in the Election Landscape

Elections Watchdog Warned of High AI Risks in Current Campaign

An internal briefing note prepared for Canada's elections watchdog identifies the use of artificial intelligence (AI) as a high risk to the current election campaign. The document was created for the Commissioner of Canada Elections, Caroline Simard, roughly a month before the campaign began.

AI’s Potential Impact on Elections

The briefing note anticipates that the election will likely generate complaints about the use of AI tools in ways that could violate the Canada Elections Act. While AI can serve legitimate purposes, the document emphasizes that it also poses significant risks that could lead to contraventions of election rules.

Concerns Over Disinformation

According to the note, the Elections Act does not explicitly prohibit the use of AI, bots, or deepfakes; however, certain provisions could apply if AI tools are misused. Potential violations include spreading disinformation, publishing false information about the electoral process, and impersonating election officials.

Michael Litchfield, director of the AI risk and regulation lab at the University of Victoria, pointed out the challenge of identifying individuals who misuse AI to break election rules. The difficulty of tracing such activity complicates enforcement efforts.

Deepfakes and Their Threats

The briefing note raises specific concerns about deepfakes: hyper-realistic fake video or audio. While no deepfake incidents have been reported in Canadian federal elections, there have been multiple examples abroad, including a notable deepfake involving Kamala Harris during the 2024 U.S. presidential election. The document warns that similar incidents could occur in Canada.

Generative AI can create convincing fakes that may significantly influence public perception, even if they are quickly debunked. The note also observes an increase in advertising for customized deepfake services on the dark web, underscoring the potential for misuse during the election.

Existing Regulatory Framework

The document indicates that Canada has generally relied on a self-regulatory approach to AI, leaving much of the oversight to the tech industry. However, the effectiveness of this self-regulation has been questioned: although some AI image generators have policies against election disinformation, those policies have not prevented the creation of misleading content.

Bill C-27, which would have partially regulated the use of AI, was introduced but was not passed into law. Experts note that even if new regulations are adopted, they may take time to enforce, leaving Canada in a regulatory vacuum.

Future Concerns

As the current election campaign progresses, there are already signs of AI being used to disseminate misinformation. AI-generated articles have surfaced, spreading dubious information about party leaders’ personal finances. Furthermore, fake election news ads have attempted to lure Canadians into fraudulent investment schemes.

Fenwick McKelvey, an assistant professor at Concordia University, noted that the misuse of AI contributes to a decline in public trust in online content, complicating the credibility of legitimate information sources.

The briefing note warns that the use of AI is likely to trigger numerous complaints during the election campaign, even in cases where no specific rules have been broken. These complaints could require complex assessments and have a broad impact on the election process.

Conclusion

Developments in AI technology present both opportunities and challenges for elections. While AI can enhance campaign strategies, its potential for misuse raises significant concerns about the integrity of the electoral process. As the election landscape evolves, these challenges must be navigated carefully to preserve the democratic process.
