AI Regulations: Balancing Safety and Free Expression

The AI Election Panic: How Fear-Driven Policies Could Limit Free Expression

As the US and EU shape their AI frameworks, it is crucial to consider lessons from recent elections. The fear-driven narrative surrounding AI and the 2024 election cycle, in which AI-created content had limited impact, should caution policymakers against hastily implementing laws that may unintentionally undermine democratic values. Policymakers crafting the forthcoming US Action Plan, state legislatures, and authorities enforcing the EU AI Act should avoid outright bans on political deepfakes and refrain from imposing mandates that could force AI models to conform to specific and arbitrary values. Instead, they should focus on promoting AI literacy and transparency, including ensuring that researchers have access to data.

The AI Disinformation Narrative

Throughout 2023 and 2024, prominent media outlets voiced concerns about AI’s potential influence on elections. In April 2024, a major news outlet warned its readers: “AI deepfakes threaten to upend global elections. No one can stop them.” Similar concerns were echoed by other reputable organizations, reflecting growing public anxiety about AI’s impact on elections. A Pew survey in the US found that 57% of adults were very concerned about AI-driven election misinformation, while 40% of European voters feared AI misuse during elections. This widespread apprehension was exacerbated by high-profile figures describing AI deepfakes as an “atomic bomb” that could sway voters and change election outcomes.

Despite these fears, research indicates that the 2024 narrative surrounding AI was not substantiated by evidence. The Alan Turing Institute found no significant evidence that AI altered election results in the UK, France, or the US. In fact, many instances of AI-generated misinformation did not notably change voting behavior, though they may have reinforced existing divides. Traditional tools for spreading misinformation, such as Photoshop and conventional video-editing software, remain effective and widely accessible.

Overreaching Laws in the US and Europe

By September 2024, nineteen US states had enacted laws specifically targeting the use of AI in political campaigns. Some states banned the creation or distribution of deepfakes in relation to elections under certain circumstances. However, these laws raised freedom of expression concerns, as they criminalized the dissemination of deepfakes intended to “injure” a candidate or “influence” the outcome—terms that are both subjective and central to protected political speech. The laws also often lacked exceptions for satire and parody, which are vital tools for critiquing power.

In Europe, the EU finalized the AI Act, which mandates watermarking and labeling for AI-generated content. However, its broad obligation for powerful AI models to mitigate systemic risks raised concerns about stifling lawful speech. The Act could be read to restrict content that criticizes the government or supports certain viewpoints, potentially leading to censorship.

A Smarter Way Forward

The forthcoming US AI Action Plan should be guided by evidence and refrain from promoting bans on political deepfakes. Similarly, state-level legislation with comparable provisions should be revised. Less restrictive measures, such as labeling and watermarking, may offer alternatives but could still raise First Amendment concerns. Moreover, their effectiveness is questionable as malicious actors can circumvent these safeguards.

In the EU, the European Commission must ensure that enforcement of the AI Act robustly protects freedom of expression. The obligation to mitigate systemic risks should not be interpreted as requiring models to align with specific viewpoints, and must allow space for controversial or dissenting content.

Policymakers and companies must ensure that researchers have access to high-quality, reliable data to conduct more comprehensive studies on the impact of AI-generated content. Promoting AI and media literacy is equally essential: educational campaigns should equip the public with the skills to critically engage with content they encounter online. Non-restrictive measures to counter disinformation can also help users make informed judgments.

Lastly, existing legal tools, such as defamation and fraud laws, remain available and can be used where appropriate. Ultimately, effective regulation must be evidence-based and clearly articulated to avoid undermining freedom of expression, creativity, and satire—vital components of a healthy democratic discourse.
