AI Regulations: Balancing Safety and Free Expression

The AI Election Panic: How Fear-Driven Policies Could Limit Free Expression

As the US and EU shape their AI frameworks, it is crucial to draw lessons from the most recent election cycle. The fear-driven narrative surrounding AI and the latest elections, in which AI-created content had limited impact, should caution policymakers against hastily enacting laws that unintentionally undermine democratic values. Policymakers crafting the forthcoming US AI Action Plan, state legislatures, and authorities enforcing the EU AI Act should avoid outright bans on political deepfakes and refrain from imposing mandates that would force AI models to conform to specific and arbitrary values. Instead, they should focus on promoting AI literacy and transparency, including ensuring that researchers have access to data.

The AI Disinformation Narrative

Throughout 2023 and 2024, prominent media outlets voiced concerns about AI’s potential influence on elections. In April 2024, a major news outlet warned its readers: “AI deepfakes threaten to upend global elections. No one can stop them.” Similar concerns were echoed by other reputable organizations, reflecting growing public anxiety about AI’s impact on elections. A Pew survey in the US found that 57% of adults were very concerned about AI-driven election misinformation, while 40% of European voters feared AI misuse during elections. This widespread apprehension was amplified by high-profile figures describing AI deepfakes as an “atomic bomb” capable of changing the course of voter preferences.

Despite these fears, research indicates that the 2024 narrative was not substantiated by evidence. The Alan Turing Institute found no significant evidence that AI altered election results in the UK, France, or the US. Many instances of AI-generated misinformation did not measurably change voting behavior, though they may have reinforced existing divides. Meanwhile, traditional methods of spreading misinformation, such as Photoshop and conventional video editing software, remain effective and widely accessible.

Overreaching Laws in the US and Europe

By September 2024, nineteen US states had enacted laws specifically targeting the use of AI in political campaigns. Some states banned the creation or distribution of election-related deepfakes under certain circumstances. These laws raised freedom-of-expression concerns, however, because they criminalized the dissemination of deepfakes intended to “injure” a candidate or “influence” the outcome of an election—terms that are both subjective and central to protected political speech. Many of the laws also lacked exceptions for satire and parody, which are vital tools for critiquing power.

In Europe, the EU finalized the AI Act, which mandates watermarking and labeling for AI-generated content. Its broad obligation for powerful AI models to mitigate systemic risks, however, raised concerns about stifling lawful speech. Interpreted expansively, the Act could be used to restrict content that criticizes governments or supports particular viewpoints, potentially leading to censorship.

A Smarter Way Forward

The forthcoming US AI Action Plan should be guided by evidence and refrain from promoting bans on political deepfakes. Similarly, state-level legislation with comparable provisions should be revised. Less restrictive measures, such as labeling and watermarking, may offer alternatives but could still raise First Amendment concerns. Moreover, their effectiveness is questionable as malicious actors can circumvent these safeguards.

In the EU, the European Commission must ensure that enforcement of the AI Act robustly protects freedom of expression. The obligation to mitigate systemic risks should not be interpreted as requiring models to align with specific viewpoints, and must allow space for controversial or dissenting content.

Policymakers and companies must ensure that researchers have access to high-quality, reliable data so they can conduct more comprehensive studies on the impact of AI-generated content. Promoting AI and media literacy is equally essential: educational campaigns should equip the public with the skills to critically engage with content, and non-restrictive counter-disinformation measures can help users make informed judgments.

Lastly, existing legal tools, such as defamation and fraud laws, remain available and can be applied where appropriate. Ultimately, effective regulation must be evidence-based and clearly articulated so that it does not undermine freedom of expression, creativity, and satire—vital components of a healthy democratic discourse.
