Building Trust in Synthetic Media Through Responsible AI Governance

Trust in public information ecosystems is crucial for democratic debate and socio-economic growth. While digital media has expanded access to information, it has also enabled the spread of mis- and disinformation, which is being compounded by generative AI. Synthetic media produced with generative AI, including deepfakes, can be used for constructive purposes in areas such as education and entertainment. However, their misuse, such as creating non-consensual intimate content or spreading misinformation, raises significant concerns.

Unlike traditional misinformation, synthetic media often appears convincingly real and is harder to identify. Studies show that many people perceive AI-generated false media as genuine. The World Economic Forum warns that AI-driven falsehoods can erode democratic processes and deepen social polarization, posing immediate risks to the global economy. Countries such as India are particularly vulnerable, given low levels of digital literacy and the waning legitimacy of legacy media.

Regulatory Landscape

Regulation intended to address synthetic media harms is evolving globally. The European Union’s AI Act places deepfakes in a “limited risk” category, requiring transparency disclosures. The United States has proposed legislation targeting specific issues, such as the DEFIANCE Act for non-consensual explicit deepfakes and the No AI FRAUD Act to protect personal likenesses, while the Take It Down Act aims to ensure the removal of non-consensual intimate synthetic media. The UK’s Online Safety Act criminalizes the sharing of intimate deepfakes and imposes obligations on social media platforms.

Trust, Privacy, Accountability

Emerging regulations aimed at addressing synthetic media harms are largely reactive, focusing on measures such as the removal of content from social media platforms and the identification of synthetic content. While these measures are steps in the right direction, they do not adequately address the creation of malicious synthetic media and its associated harms. For instance, a woman depicted in non-consensual, AI-generated pornographic content may experience shame and distress even if the media includes a disclaimer stating it is synthetic.

Relying solely on labeling tools presents multiple operational challenges. Labeling tools often lack accuracy, creating a paradox in which inaccurate labeling may end up legitimizing harmful media. Users may also not view basic AI edits, such as color correction, as manipulation, leading to inconsistencies in how platforms apply labels. Moderation becomes even more complex when content mixes AI-generated and human-created material, making harmful media harder to identify.

Embedding tools such as watermarks and metadata can also compromise users’ privacy and anonymity. Online anonymity is crucial for survivors of domestic violence seeking support and for LGBTQ+ individuals in hostile environments, and broad tracking measures risk treating all users as potential offenders.

Reimagining Liability

Synthetic media governance must be context-specific, as the legality of AI-generated content often depends on how and where it is used. A teacher using generative AI to create synthetic media for educational purposes is unlikely to be acting unlawfully, whereas the same technology used to disseminate violence-inciting speech carries serious implications. A context-sensitive regulatory framework would calibrate obligations to potential impact, which requires collaborative processes to develop evidence-based risk classifications and harm principles.

Based on these collaboratively developed risk classifications, safety codes and security standards should be integrated across the AI system lifecycle as non-negotiable requirements. Developers should implement stronger protective measures as potential harm increases, ensuring that high-risk applications have robust guardrails.

Liability should be triggered by non-compliance with these foundational safeguards. Given that one-third of generative AI tools enable the creation of intimate media, compliance monitoring should be overseen by an independent body comprising government officials, civil society representatives, academics, and subject matter experts. Such oversight would enhance transparency and build user trust.

Developing a Collaborative Framework

Relying solely on voluntary commitments is insufficient, as it fails to establish binding obligations. The proposed Indian AI Safety Institute (AISI) should facilitate an iterative, collaborative process for generative AI governance involving civil society, academics, and industry experts. The AISI should also conduct empirical evaluations of AI models to develop safety standards focused on explainability, interpretability, and accountability.

Moreover, given the diverse applications of synthetic media, it is essential to define permissible and impermissible uses in collaboration with relevant stakeholders. Without clear boundaries, regulation risks stifling legitimate creative expression while failing to address harmful uses, resulting in regulatory inconsistency.
