The AI Election Panic: How Fear-Driven Policies Could Limit Free Expression

As the US and EU shape their AI frameworks, it is crucial to consider lessons from recent experiences. The fear-driven narrative surrounding AI and the latest elections, where AI-created content had limited impact, should caution policymakers against hastily implementing laws that may unintentionally undermine democratic values. Policymakers crafting the forthcoming US Action Plan, state legislatures, and authorities enforcing the EU AI Act should avoid outright bans on political deepfakes and refrain from imposing mandates that could force AI models to conform to specific and arbitrary values. Instead, they should focus on promoting AI literacy and transparency, including ensuring researchers have access to data.

The AI Disinformation Narrative

Throughout 2023 and 2024, prominent media outlets voiced concerns about AI’s potential influence on elections. In April 2024, a major news outlet warned its readers: “AI deepfakes threaten to upend global elections. No one can stop them.” Similar concerns were echoed by other reputable organizations, reflecting growing public anxiety about AI’s impact on elections. A Pew survey in the US found that 57% of adults were very concerned about AI-driven election misinformation, while 40% of European voters feared AI misuse during elections. This apprehension was amplified by high-profile figures describing AI deepfakes as an “atomic bomb” capable of swaying voter preferences.

Despite these fears, research indicates that the narrative surrounding AI in 2024 was not substantiated by evidence. The Alan Turing Institute found no significant evidence that AI altered election results in the UK, France, or the US. In fact, many instances of AI-generated misinformation did not notably change voting behaviors but may have reinforced existing divides. Traditional methods of spreading misinformation, such as Photoshop and conventional video editing software, remain effective and widely accessible.

Overreaching Laws in the US and Europe

By September 2024, nineteen US states had enacted laws specifically targeting the use of AI in political campaigns. Some states banned the creation or distribution of election-related deepfakes under certain circumstances. These laws raised freedom of expression concerns because they criminalized the dissemination of deepfakes intended to “injure” a candidate or “influence” the outcome of an election, terms that are both subjective and central to protected political speech. Many of these laws also lacked exceptions for satire and parody, which are vital tools for critiquing power.

In Europe, the EU finalized the AI Act, which mandates watermarking and labeling for AI-generated content. However, its broad obligation for powerful AI models to mitigate systemic risks raised concerns about stifling lawful speech: interpreted expansively, the obligation could push providers to suppress content that criticizes governments or expresses disfavored viewpoints, effectively leading to censorship.

A Smarter Way Forward

The forthcoming US AI Action Plan should be guided by evidence and refrain from promoting bans on political deepfakes. Similarly, state-level legislation with comparable provisions should be revised. Less restrictive measures, such as labeling and watermarking, may offer alternatives but could still raise First Amendment concerns. Moreover, their effectiveness is questionable as malicious actors can circumvent these safeguards.
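The circumvention concern above can be illustrated with a deliberately simplified sketch: a provenance scheme that fingerprints known AI-generated files by exact hash (a toy stand-in for real watermarking and labeling systems) fails as soon as a file is re-encoded, because any byte-level change produces a new fingerprint. The registry and image bytes here are hypothetical, chosen only to make the failure mode concrete.

```python
import hashlib

# Hypothetical registry holding SHA-256 fingerprints of known AI-generated files.
ai_registry = {hashlib.sha256(b"ai-generated-image-bytes").hexdigest()}

def is_flagged(file_bytes: bytes) -> bool:
    """Return True if this exact byte stream matches a registered AI-generated file."""
    return hashlib.sha256(file_bytes).hexdigest() in ai_registry

original = b"ai-generated-image-bytes"
# Re-encoding, recompressing, or cropping changes the byte stream;
# a single altered byte is enough to defeat an exact-match check.
modified = original + b"\x00"

print(is_flagged(original))   # True: an exact copy is caught
print(is_flagged(modified))   # False: a trivially altered copy slips through
```

More robust watermarks embed signals intended to survive such transformations, but determined actors can still strip or evade them, which is why labeling is best understood as a partial safeguard rather than a guarantee.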

In the EU, the European Commission must ensure that enforcement of the AI Act robustly protects freedom of expression. The obligation to mitigate systemic risks should not be interpreted as requiring models to align with specific viewpoints, and must allow space for controversial or dissenting content.

Policymakers and companies must ensure that researchers have access to high-quality, reliable data to conduct more comprehensive studies on the impact of AI-generated content. Promoting AI and media literacy is also essential: educational campaigns should equip the public with the knowledge and skills to critically evaluate the content they encounter. Non-restrictive counter-disinformation measures can likewise help users make informed judgments.

Lastly, existing legal tools, such as defamation and fraud laws, remain available and can be used where appropriate. Ultimately, effective regulation must be evidence-based and clearly articulated to avoid undermining freedom of expression, creativity, and satire—vital components of a healthy democratic discourse.
