Building Trust in Synthetic Media Through Responsible AI Governance
Trust in public information ecosystems is crucial for democratic debate and socio-economic growth. While digital media has expanded access to information, it has also enabled the spread of mis- and disinformation, a problem now compounded by generative AI. Synthetic media produced with generative AI, including deepfakes, can serve constructive purposes in areas such as education and entertainment. However, its misuse, for example to create non-consensual intimate content or to spread misinformation, raises significant concerns.
Unlike traditional misinformation, synthetic media often appears convincingly real and is harder to identify. Studies show that many people perceive AI-generated false media as genuine. The World Economic Forum warns that AI-driven falsehoods can erode democracy and deepen social polarization, posing immediate risks to the global economy. Countries such as India are particularly vulnerable, given low digital literacy and the waning legitimacy of legacy media.
Regulatory Landscape
Regulation intended to address synthetic media harms is evolving globally. The European Union’s AI Act classifies deepfakes as a “limited risk” category, requiring transparency disclosures. In the United States, targeted legislation has been advanced, such as the DEFIANCE Act on non-consensual explicit deepfakes and the No AI FRAUD Act to protect personal likenesses, while the Take It Down Act aims to ensure the removal of non-consensual intimate synthetic media. The UK’s Online Safety Act criminalizes the sharing of intimate deepfakes and imposes obligations on social media platforms.
Trust, Privacy, Accountability
Emerging regulations aimed at addressing synthetic media harms are largely reactive, focusing on measures such as removal of content from social media platforms and identification of synthetic content. While these measures are steps in the right direction, they do not adequately address the creation of malicious synthetic media or the harms that flow from it. For instance, a woman depicted in non-consensual, AI-generated pornographic content may experience shame and distress even if the media carries a disclaimer stating that it is synthetic.
Relying solely on labeling tools also presents multiple operational challenges. First, detection and labeling tools are often inaccurate, creating a paradox in which harmful media that escapes labeling can appear legitimate precisely because it carries no label. Second, users may not view basic AI edits, such as color correction, as manipulation, leading to inconsistencies in how platforms apply labels. Finally, content that mixes AI-generated and human-created material blurs these categories further, making harmful media harder to identify and moderate.
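To make these gaps concrete, the sketch below models a simplified, hypothetical platform labeling policy. The thresholds, edit categories, and field names are illustrative assumptions, not any platform's actual rules, but they show how an imperfect detector, exemptions for "basic" edits, and mixed human/AI content each leave harmful media unlabeled.

```python
# Hypothetical labeling policy sketch: the names and thresholds below are
# assumptions for illustration, not any platform's real moderation logic.

from dataclasses import dataclass


@dataclass
class ContentSignal:
    detector_confidence: float    # 0.0-1.0 score from an (imperfect) AI detector
    edit_type: str                # e.g. "none", "color_correction", "face_swap"
    ai_generated_fraction: float  # estimated share of AI-generated material


def labeling_decision(signal: ContentSignal) -> str:
    """Return a label decision under an illustrative policy."""
    # Minor technical edits are often treated as "not manipulation",
    # so disclosure rules end up applied inconsistently across platforms.
    if signal.edit_type in {"none", "color_correction"}:
        return "no_label"

    # A hard confidence threshold means false negatives pass unlabeled,
    # which can lend them unearned legitimacy ("no label, so it must be real").
    if signal.detector_confidence < 0.8:
        return "no_label"

    # Mixed human/AI content sits awkwardly between categories.
    if 0.0 < signal.ai_generated_fraction < 0.5:
        return "partial_ai_label"

    return "ai_generated_label"


if __name__ == "__main__":
    examples = [
        ContentSignal(0.95, "face_swap", 1.0),         # labeled
        ContentSignal(0.70, "face_swap", 1.0),         # missed: below threshold
        ContentSignal(0.99, "color_correction", 0.1),  # exempted as a "basic edit"
    ]
    for example in examples:
        print(example, "->", labeling_decision(example))
```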
Embedding tools such as watermarks and metadata can also compromise users’ privacy and anonymity. Online anonymity is crucial for survivors of domestic violence seeking support and for LGBTQ+ individuals in hostile environments. Broad provenance-tracking measures risk treating every user as a potential offender.
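The privacy tension can be illustrated with a minimal sketch, assuming a hypothetical provenance record rather than the C2PA specification or any vendor's actual format: the more identifying fields a record carries, the more useful it is for tracing abuse, and the more it erodes the anonymity described above.

```python
# Illustrative provenance-metadata sketch; field names are assumptions chosen
# to show the trade-off between traceability and anonymity.

import hashlib
import json


def provenance_record(file_bytes: bytes, tool: str, account_id: str | None) -> dict:
    """Build a provenance record for a generated file (hypothetical schema)."""
    return {
        "content_hash": hashlib.sha256(file_bytes).hexdigest(),
        "generator_tool": tool,            # useful for accountability
        "created_by_account": account_id,  # useful for tracing, but identifying
    }


def anonymized_record(record: dict) -> dict:
    """Drop identifying fields while still flagging the content as synthetic."""
    return {k: v for k, v in record.items() if k != "created_by_account"}


if __name__ == "__main__":
    rec = provenance_record(b"...image bytes...", "example-genai-tool", "user-12345")
    print(json.dumps(rec, indent=2))
    print(json.dumps(anonymized_record(rec), indent=2))
```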
Reimagining Liability
Synthetic media governance must be context-specific, as the legality of AI-generated content often depends on how and where it is used. A teacher using generative AI to create synthetic media for educational purposes is unlikely to be acting unlawfully, whereas the same technology used to disseminate violence-inciting speech carries serious legal and social consequences. A context-sensitive regulatory framework would calibrate obligations to potential impact, which requires collaborative processes to develop evidence-based risk classifications and harm principles.
Based on these collaboratively developed risk classifications, safety codes and security standards should be built into every stage of the AI system lifecycle as non-negotiable requirements. Developers should implement progressively stronger protective measures as potential harm increases, so that high-risk applications carry correspondingly robust guardrails.
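As a minimal sketch, assuming a risk-tier taxonomy and safeguard names that stakeholders would still need to agree on, proportionate obligations could be expressed as machine-checkable requirements, with the gap between required and implemented safeguards feeding into the compliance and liability checks discussed next.

```python
# Illustrative mapping of risk tiers to required safeguards. The tiers and
# safeguard names are assumptions, not drawn from any existing standard.

REQUIRED_SAFEGUARDS = {
    "minimal": {"transparency_disclosure"},
    "limited": {"transparency_disclosure", "provenance_metadata"},
    "high": {
        "transparency_disclosure",
        "provenance_metadata",
        "pre_release_red_teaming",
        "abuse_reporting_channel",
    },
}


def compliance_gaps(risk_tier: str, implemented: set[str]) -> set[str]:
    """Return the safeguards still missing for the declared risk tier."""
    return REQUIRED_SAFEGUARDS[risk_tier] - implemented


if __name__ == "__main__":
    # Example: a high-risk application that has only shipped disclosure labels.
    gaps = compliance_gaps("high", {"transparency_disclosure"})
    print("Missing safeguards:", sorted(gaps))
```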
Liability should be triggered by non-compliance with these foundational safeguards. Given that one-third of generative AI tools enable the creation of intimate media, compliance and monitoring should be overseen by an independent oversight body comprising government officials, civil society representatives, academics, and subject matter experts. Such oversight would enhance transparency and build user trust.
Developing a Collaborative Framework
Relying solely on voluntary commitments is insufficient, as such commitments create no binding, enforceable obligations. The proposed Indian AI Safety Institute (AISI) should facilitate an iterative, collaborative process for generative AI governance involving civil society, academics, and industry experts. The AISI should also conduct empirical evaluations of AI models to develop safety standards focused on explainability, interpretability, and accountability.
Moreover, given the diverse applications of synthetic media, permissible and impermissible uses must be defined in collaboration with relevant stakeholders. Without clear boundaries, regulation risks stifling legitimate creative expression while failing to curb harmful uses, resulting in regulatory inconsistencies.