The AI Election Panic: How Fear-Driven Policies Could Limit Free Expression
As the US and EU shape their AI frameworks, it is crucial to draw lessons from recent elections. The fear-driven narrative around AI in the 2024 election cycle, in which AI-generated content ultimately had limited impact, should caution policymakers against hastily enacting laws that may unintentionally undermine the very democratic values they aim to protect. Policymakers crafting the forthcoming US AI Action Plan, state legislatures, and authorities enforcing the EU AI Act should avoid outright bans on political deepfakes and refrain from imposing mandates that could force AI models to conform to specific, arbitrary values. Instead, they should focus on promoting AI literacy and transparency, including ensuring that researchers have access to data.
The AI Disinformation Narrative
Throughout 2023 and 2024, prominent media outlets voiced concerns about AI’s potential influence on elections. In April 2024, a major news outlet warned its readers: “AI deepfakes threaten to upend global elections. No one can stop them.” Similar warnings from other reputable organizations reflected growing public anxiety about AI’s impact on elections. A Pew survey found that 57% of US adults were very concerned about AI-driven election misinformation, while 40% of European voters feared AI misuse during elections. This apprehension was amplified by high-profile figures describing AI deepfakes as an “atomic bomb” capable of swaying voter preferences.
Despite these fears, the evidence from 2024 does not support the alarmist narrative. The Alan Turing Institute found no significant evidence that AI altered election results in the UK, France, or the US. Most instances of AI-generated misinformation did not measurably change voting behavior, although they may have reinforced existing divides. Moreover, traditional tools for spreading misinformation, such as Photoshop and conventional video editing software, remain effective and widely accessible, suggesting AI is not a uniquely dangerous vector.
Overreaching Laws in the US and Europe
By September 2024, nineteen US states had enacted laws specifically targeting the use of AI in political campaigns. Some banned the creation or distribution of election-related deepfakes under certain circumstances. However, these laws raised freedom of expression concerns, as they criminalized the dissemination of deepfakes intended to “injure” a candidate or “influence” the outcome of an election, terms that are both subjective and central to protected political speech. Exceptions for satire and parody, vital tools for critiquing those in power, were often lacking.
In Europe, the EU finalized the AI Act, which mandates watermarking and labeling of AI-generated content. More troubling, its broad obligation for providers of powerful AI models to mitigate “systemic risks” raised concerns about stifling lawful speech: interpreted expansively, the act could pressure providers to suppress content that criticizes governments or supports certain viewpoints, opening the door to censorship.
A Smarter Way Forward
The forthcoming US AI Action Plan should be guided by evidence and refrain from promoting bans on political deepfakes, and state-level legislation with comparable provisions should be revised. Less restrictive measures, such as labeling and watermarking, may offer alternatives, but they too could raise First Amendment concerns. Moreover, their effectiveness is questionable, because malicious actors can strip or evade such safeguards, as the sketch below illustrates.
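To see why provenance labels are easy to circumvent, consider a label attached as image metadata, which is how many content-credential schemes work in practice. The following is a minimal, hypothetical sketch using Python’s Pillow library; the filenames are placeholders, and removal techniques vary by scheme, but the basic point holds: re-encoding the pixels discards any metadata-based label.

```python
# Hypothetical sketch: a metadata-based "AI-generated" label survives only as
# long as the file's metadata does. Copying the raw pixels into a fresh image
# and re-saving discards the label while leaving the picture itself untouched.
from PIL import Image  # Pillow imaging library

img = Image.open("labeled_output.png").convert("RGB")  # placeholder: file carrying a provenance label
clean = Image.new(img.mode, img.size)                  # fresh image with no metadata attached
clean.putdata(list(img.getdata()))                     # copy pixel data only
clean.save("stripped_output.png")                      # visually identical, label removed
```

Watermarks embedded in the pixels themselves are harder to remove, but research has repeatedly shown they can be degraded by cropping, rescaling, or re-generation, so mandates built on these techniques offer weaker guarantees than they may appear to.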
In the EU, the European Commission must ensure that enforcement of the AI Act robustly protects freedom of expression. The obligation to mitigate systemic risks should not be read as requiring models to align with particular viewpoints and must leave space for controversial or dissenting content.
Policymakers and companies must also ensure that researchers have access to high-quality, reliable data so they can study the impact of AI-generated content more comprehensively. Promoting AI and media literacy is equally important: educational campaigns should equip the public with the skills to critically evaluate the content they encounter. Non-restrictive counter-disinformation measures, such as fact-checking and contextual labeling, can further help users make informed judgments.
Lastly, existing legal tools, such as defamation and fraud laws, remain available to address genuine harms where appropriate. Ultimately, effective regulation must be evidence-based and clearly drafted so that it does not undermine freedom of expression, creativity, and satire, all vital components of a healthy democratic discourse.