The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
In recent years, the rise of AI-generated deepfakes has raised serious concerns about election misinformation. These highly realistic synthetic media are produced with generative AI models that can manipulate images, audio, and video to mislead the public. As elections grow more vulnerable to such technologies, understanding the legal landscape surrounding them is imperative.
How Deepfakes Are Created
Deepfakes are predominantly created by training deep neural networks on real images, videos, or audio of a target individual. The two leading architectures used in this process are Generative Adversarial Networks (GANs) and autoencoders. A GAN pairs a generator network that creates synthetic images with a discriminator network that tries to tell real images from fakes. Through iterative training, the generator progressively improves until its outputs reliably deceive the discriminator.
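To make the adversarial training loop concrete, the sketch below shows a minimal GAN in PyTorch. The network sizes, learning rates, and the assumption of flattened 64x64 grayscale images are illustrative placeholders, not a real deepfake pipeline; production systems use far larger convolutional models and curated datasets of the target person.

    import torch
    import torch.nn as nn

    # Generator: maps a random latent vector to a flattened 64x64 image.
    G = nn.Sequential(
        nn.Linear(100, 256), nn.ReLU(),
        nn.Linear(256, 64 * 64), nn.Tanh(),
    )

    # Discriminator: scores how "real" a flattened image looks (one logit).
    D = nn.Sequential(
        nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(real_batch):
        n = real_batch.size(0)
        # 1) Train the discriminator to separate real images from fakes.
        fake = G(torch.randn(n, 100)).detach()  # detach: freeze G here
        d_loss = (loss_fn(D(real_batch), torch.ones(n, 1)) +
                  loss_fn(D(fake), torch.zeros(n, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2) Train the generator to make D score its fakes as real.
        g_loss = loss_fn(D(G(torch.randn(n, 100))), torch.ones(n, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

Each call to train_step alternates the two objectives: the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones, the dynamic that makes deepfakes so realistic.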
In practice, creators often rely on accessible open-source software such as DeepFaceLab and FaceSwap, the dominant tools for video face-swapping. Voice-cloning tools can replicate a person's speech from only a few minutes of audio. Commercial platforms like Synthesia enable text-to-video avatars, which have already been misused in disinformation campaigns, and mobile apps like FaceApp and Zao let users perform basic face swaps in minutes, making deepfake technology more accessible than ever.
Deepfakes in Recent Elections: Examples
Deepfakes have already made headlines in several election cycles globally. During the 2024 U.S. primary season, an AI-generated robocall imitating President Biden's voice urged Democrats not to vote in the New Hampshire primary; the perpetrator was ultimately fined $6 million. Similarly, former President Trump posted AI-generated images on social media suggesting that pop star Taylor Swift had endorsed his campaign, igniting media controversy.
Internationally, deepfake content has appeared in a range of elections. In Indonesia's 2024 presidential race, a deepfake video depicted the late President Suharto appearing to endorse a candidate, who ultimately won the presidency. In Bangladesh, a viral deepfake sought to discredit an opposition politician by superimposing her face onto another person's body in a compromising video. These cases illustrate how varied and damaging deepfakes can be in electoral contexts.
U.S. Legal Framework and Accountability
In the United States, the legal framework for deepfake-related election misinformation is fragmented. No comprehensive federal law specifically targets deepfakes, but existing statutes can reach some cases: laws against impersonating government officials, campaign-finance rules under the Bipartisan Campaign Reform Act, and targeted statutes on election communications can sometimes be stretched to cover deepfake activity.
The Federal Election Commission (FEC) has begun addressing the issue by preparing rules that would limit non-candidate electioneering communications that use falsified media. If finalized, the regulations would effectively require political ads to depict candidates authentically. Meanwhile, the Federal Trade Commission (FTC) has signaled that commercial deepfakes could violate consumer protection laws.
Proposed Legislation and Policy Recommendations
To counter these threats, federal lawmakers have proposed new statutes, such as the DEEPFAKES Accountability Act, which would require disclosure for political ads featuring manipulated media and set a uniform standard across federal and state campaigns. At the state level, more than 20 states have enacted election-deepfake laws, with provisions ranging from prohibiting the distribution of falsified media to allowing candidates to sue violators.
Experts recommend a multi-faceted approach. Treating transparency and disclosure as core principles, they advocate clear labeling of AI-generated media in political communications. Outright bans on all deepfakes would likely infringe on free speech, but narrowly targeted bans addressing specific harms could be feasible. Technical measures, such as watermarking AI-generated content and improving detection capabilities, can further mitigate the risks, as the sketch below illustrates.
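As a toy illustration of the watermarking idea, the sketch below embeds and recovers a short provenance tag in the least significant bits of an image's pixels. This is a hypothetical, minimal scheme for exposition only: it is not C2PA or any standard in use, and a real provenance system would rely on cryptographic signing and embedding robust to cropping and recompression (this one would not survive a JPEG re-save).

    import numpy as np
    from PIL import Image

    TAG = b"AI-GENERATED"  # hypothetical provenance tag

    def embed_tag(path_in, path_out, tag=TAG):
        # Hide `tag` in the least significant bit of the first pixel channels.
        pixels = np.array(Image.open(path_in).convert("RGB"))
        bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
        flat = pixels.flatten()
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
        Image.fromarray(flat.reshape(pixels.shape)).save(path_out, "PNG")

    def read_tag(path, length=len(TAG)):
        # Recover `length` bytes from the image's least significant bits.
        flat = np.array(Image.open(path).convert("RGB")).flatten()
        bits = flat[: length * 8] & 1
        return np.packbits(bits).tobytes()

    # Usage (file names are placeholders):
    # embed_tag("ad.png", "ad_tagged.png")
    # assert read_tag("ad_tagged.png") == TAG

The design point the toy makes is that a watermark only helps if platforms and regulators check for it; the policy proposals above pair such technical marks with mandatory disclosure so that stripped or absent marks are themselves a red flag.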
Ultimately, strengthening public awareness and resilience is crucial. Education campaigns that promote media literacy and critical thinking can empower voters to recognize and question sensational media, thereby reducing the influence of deepfakes in electoral processes.