AI Regulations: Balancing Safety and Free Expression

The AI Election Panic: How Fear-Driven Policies Could Limit Free Expression

As the US and EU shape their AI frameworks, it is crucial to consider lessons from recent experiences. The fear-driven narrative surrounding AI and the latest elections, where AI-created content had limited impact, should caution policymakers against hastily implementing laws that may unintentionally undermine democratic values. Policymakers crafting the forthcoming US Action Plan, state legislatures, and authorities enforcing the EU AI Act should avoid outright bans on political deepfakes and refrain from imposing mandates that could force AI models to conform to specific and arbitrary values. Instead, they should focus on promoting AI literacy and transparency, including ensuring researchers have access to data.

The AI Disinformation Narrative

Throughout 2023 and 2024, prominent media outlets voiced concerns about AI’s potential influence on elections. In April 2024, a major news outlet warned its readers: “AI deepfakes threaten to upend global elections. No one can stop them.” Similar concerns were echoed by other reputable organizations, highlighting a growing public anxiety linked to AI’s impact on elections. A Pew survey in the US found that 57% of adults were very concerned about AI-driven misinformation regarding elections, while 40% of European voters feared AI misuse during elections. This widespread apprehension was exacerbated by high-profile figures describing AI deepfakes as an “atomic bomb” that could change the course of voter preferences.

Despite these fears, the evidence from 2024 does not support the alarmist narrative. The Alan Turing Institute found no significant evidence that AI altered election results in the UK, France, or the US. Many instances of AI-generated misinformation did not measurably change voting behavior, though they may have reinforced existing divides. Traditional methods of spreading misinformation, such as Photoshop and conventional video editing software, remain effective and widely accessible.

Overreaching Laws in the US and Europe

By September 2024, nineteen US states had enacted laws specifically targeting the use of AI in political campaigns. Some states banned the creation or distribution of election-related deepfakes under certain circumstances. These laws raised freedom of expression concerns because they criminalized the dissemination of deepfakes intended to "injure" a candidate or "influence" the outcome, terms that are both subjective and central to protected political speech. Many of these laws also lacked exceptions for satire and parody, which are vital tools for critiquing power.

In Europe, the EU finalized the AI Act, which mandates watermarking and labeling of AI-generated content. However, its broad obligation for powerful AI models to mitigate systemic risks raised concerns about stifling lawful speech. Interpreted expansively, the Act could restrict content that criticizes governments or expresses disfavored viewpoints, potentially leading to censorship.

A Smarter Way Forward

The forthcoming US AI Action Plan should be guided by evidence and refrain from promoting bans on political deepfakes. Similarly, state-level legislation with comparable provisions should be revised. Less restrictive measures, such as labeling and watermarking, may offer alternatives but could still raise First Amendment concerns. Moreover, their effectiveness is questionable as malicious actors can circumvent these safeguards.

In the EU, the European Commission must ensure that enforcement of the AI Act robustly protects freedom of expression. The obligation to mitigate systemic risks should not be interpreted as requiring models to align with specific viewpoints, and must allow space for controversial or dissenting content.

Policymakers and companies must ensure that researchers have access to high-quality, reliable data so they can conduct more comprehensive studies of the impact of AI-generated content. Promoting AI and media literacy is equally essential: educational campaigns should equip the public with the skills to critically engage with content, and non-restrictive measures to counter disinformation can help users make informed judgments.

Lastly, existing legal tools, such as defamation and fraud laws, remain available and can be used where appropriate. Ultimately, effective regulation must be evidence-based and clearly articulated to avoid undermining freedom of expression, creativity, and satire—vital components of a healthy democratic discourse.
