Building Trust in Synthetic Media Through Responsible AI Governance

Building Trust in Synthetic Media Through Responsible AI Governance

Trust in public information ecosystems is crucial for democratic debate and socio-economic growth. While digital media has expanded access to information, it has also enabled the spread of mis- and disinformation, which is being compounded by generative AI. Synthetic media produced with generative AI, including deepfakes, can be used for constructive purposes in areas such as education and entertainment. However, their misuse, such as creating non-consensual intimate content or spreading misinformation, raises significant concerns.

Unlike traditional misinformation, synthetic media often appears convincingly real and is harder to identify. Studies show that many people perceive AI-generated false media as genuine. The World Economic Forum warns that AI-driven falsehoods can erode democracy and deepen social polarization, posing immediate risks to the global economy. In particular, countries like India are more vulnerable due to low digital literacy and the waning legitimacy of legacy media.

Regulatory Landscape

Regulation intended to address synthetic media harms is evolving globally. The European Union’s AI Act classifies deepfakes as a “limited risk” category, requiring transparency disclosures. The United States has proposed legislation targeting specific issues, such as the DEFIANCE Act for non-consensual explicit deepfakes and the No AI FRAUD Act to protect personal likenesses. Additionally, the Take It Down Act aims to ensure the removal of non-consensual intimate synthetic media, while the UK’s Online Safety Act criminalizes the creation of intimate deepfakes and imposes obligations on social media platforms.

Trust, Privacy, Accountability

Emerging regulations aimed at addressing synthetic media harms are largely reactive, focusing on measures such as removal from social media platforms and identification of synthetic content. While these measures are steps in the right direction, they do not adequately address the creation of malicious synthetic media and its associated harms. For instance, a woman depicted in non-consensual, AI-generated pornographic content may experience shame and distress, even if the media includes a disclaimer stating it is synthetic.

Relying solely on labeling tools faces multiple operational challenges. First, labeling tools often lack accuracy, creating a paradox where inaccurate labels may legitimize harmful media. Additionally, users may not view basic AI edits, such as color correction, as manipulation, leading to inconsistencies in how platforms apply labeling. The complexity of moderation strategies increases when mixed AI- and human-generated content is involved, complicating the ability to identify harmful media.

Embedding tools like watermarks and metadata can compromise users’ privacy and anonymity. Online anonymity is crucial for individuals seeking support from domestic violence or LGBTQ+ individuals in hostile environments. Broad tracking measures risk treating all users as potential offenders, which is problematic.

Reimagining Liability

Synthetic media governance must be context-specific, as the legality of AI-generated content often depends on how and where it is used. A teacher using generative AI to create synthetic media for educational purposes may not be acting unlawfully, while the same technology used to disseminate violence-inciting speech carries serious implications. A context-sensitive regulatory framework would be proportional to potential impact, necessitating collaborative processes to develop evidence-based risk classifications and harm principles.

Based on collaboratively developed risk classifications, codes and standards should be integrated across the AI system lifecycle. AI systems must incorporate safety codes and security standards as non-negotiable requirements. Developers should implement stronger protective measures as potential harm increases, ensuring that high-risk applications have robust guardrails.

Liability should trigger upon non-compliance with foundational safeguards. Given that one-third of generative AI tools enable intimate media creation, compliance and monitoring should be overseen by an independent oversight body comprising government officials, civil society representatives, academics, and subject matter experts. Such oversight mechanisms would enhance transparency and build user trust.

Developing a Collaborative Framework

Relying solely on voluntary commitments leads to significant problems, as it fails to establish binding obligations. The proposed Indian AI Safety Institute (AISI) should facilitate an iterative, collaborative process for generative AI governance by involving civil society, academics, and industry experts. The AISI should conduct empirical evaluations of AI models to develop safety standards focused on explainability, interpretability, and accountability.

Moreover, given the diverse applications of synthetic media, it is essential to define permissible and impermissible uses in collaboration with relevant stakeholders. Without clear boundaries, there is a risk of stifling legitimate creative expression while failing to address harmful uses, which can result in regulatory inconsistencies.

More Insights

G7 Summit Fails to Address Urgent AI Governance Needs

At the recent G7 summit in Canada, discussions primarily focused on economic opportunities related to AI, while governance issues for AI systems were notably overlooked. This shift towards...

Africa’s Bold Move Towards Sovereign AI Governance

At the Internet Governance Forum (IGF) 2025 in Oslo, African leaders called for urgent action to develop sovereign and ethical AI systems tailored to local needs, emphasizing the necessity for...

Top 10 Compliance Challenges in AI Regulations

As AI technology advances, the challenge of establishing effective regulations becomes increasingly complex, with different countries adopting varying approaches. This regulatory divergence poses...

China’s Unique Approach to Embodied AI

China's approach to artificial intelligence emphasizes the development of "embodied AI," which interacts with the physical environment, leveraging the country's strengths in manufacturing and...

Workday Sets New Standards in Responsible AI Governance

Workday has recently received dual third-party accreditations for its AI Governance Program, highlighting its commitment to responsible and transparent AI. Dr. Kelly Trindle, Chief Responsible AI...

AI Adoption in UK Finance: Balancing Innovation and Compliance

A recent survey by Smarsh reveals that while UK finance workers are increasingly adopting AI tools, there are significant concerns regarding compliance and oversight. Many employees express a desire...

AI Ethics Amid US-China Tensions: A Call for Global Standards

As the US-China tech rivalry intensifies, a UN agency is advocating for global AI ethics standards, highlighted during UNESCO's Global Forum on the Ethics of Artificial Intelligence in Bangkok...

Mastering Compliance with the EU AI Act Through Advanced DSPM Solutions

The EU AI Act emphasizes the importance of compliance for organizations deploying AI technologies, with Zscaler’s Data Security Posture Management (DSPM) playing a crucial role in ensuring data...

US Lawmakers Push to Ban Adversarial AI Amid National Security Concerns

A bipartisan group of U.S. lawmakers has introduced the "No Adversarial AI Act," aiming to ban the use of artificial intelligence tools from countries like China, Russia, Iran, and North Korea in...