Building Trust in Synthetic Media Through Responsible AI Governance

Trust in public information ecosystems is crucial for democratic debate and socio-economic growth. While digital media has expanded access to information, it has also enabled the spread of mis- and disinformation, a problem now compounded by generative AI. Synthetic media produced with generative AI, including deepfakes, can serve constructive purposes in areas such as education and entertainment. However, its misuse, whether to create non-consensual intimate content or to spread misinformation, raises significant concerns.

Unlike traditional misinformation, synthetic media often appears convincingly real and is harder to identify. Studies show that many people perceive AI-generated false media as genuine. The World Economic Forum warns that AI-driven falsehoods can erode democracy and deepen social polarization, posing immediate risks to the global economy. Countries such as India are particularly vulnerable because of low digital literacy and the waning legitimacy of legacy media.

Regulatory Landscape

Regulation intended to address synthetic media harms is evolving globally. The European Union’s AI Act classifies deepfakes as a “limited risk” category, requiring transparency disclosures. The United States has proposed legislation targeting specific issues, such as the DEFIANCE Act for non-consensual explicit deepfakes and the No AI FRAUD Act to protect personal likenesses. Additionally, the Take It Down Act aims to ensure the removal of non-consensual intimate synthetic media, while the UK’s Online Safety Act criminalizes the creation of intimate deepfakes and imposes obligations on social media platforms.

Trust, Privacy, Accountability

Emerging regulations aimed at addressing synthetic media harms are largely reactive, focusing on measures such as removal from social media platforms and identification of synthetic content. While these measures are steps in the right direction, they do not adequately address the creation of malicious synthetic media and its associated harms. For instance, a woman depicted in non-consensual, AI-generated pornographic content may experience shame and distress, even if the media includes a disclaimer stating it is synthetic.

Relying solely on labeling tools presents multiple operational challenges. First, detection and labeling tools are often inaccurate, creating a paradox in which harmful media that escapes labeling, or is labeled incorrectly, gains unwarranted legitimacy. Second, users may not regard basic AI edits, such as color correction, as manipulation, which leads to inconsistency in how platforms apply labels. Finally, moderation becomes harder still when AI-generated and human-generated content are mixed, complicating the identification of harmful media.
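
To make these failure modes concrete, the sketch below walks through a toy labeling decision under assumed inputs: a hypothetical detector confidence score, a list of edit types, and a flag for blended AI and human footage. The thresholds, field names, and label categories are illustrative assumptions, not drawn from any real platform's policy.

```python
from dataclasses import dataclass

# Hypothetical inputs: a detector score plus simple edit metadata.
# Thresholds and categories below are illustrative only.
@dataclass
class ContentSignal:
    ai_score: float          # detector confidence that content is AI-generated (0.0-1.0)
    edit_types: list[str]    # e.g. ["color_correction"] or ["face_swap"]
    has_human_footage: bool  # True when AI output is blended with real recordings

MINOR_EDITS = {"color_correction", "crop", "noise_reduction"}

def label_decision(signal: ContentSignal) -> str:
    """Return a coarse labeling decision, showing where ambiguity creeps in."""
    # 1. Detector uncertainty: mid-range scores are neither clearly real nor
    #    clearly synthetic, so a binary label would over- or under-flag.
    if 0.4 <= signal.ai_score <= 0.7:
        return "needs_human_review"

    # 2. Minor edits: most users would not call color correction "manipulation",
    #    yet a naive rule would still trigger a label.
    if set(signal.edit_types) <= MINOR_EDITS and signal.ai_score < 0.4:
        return "no_label"

    # 3. Mixed provenance: blended AI and human footage resists a single label.
    if signal.has_human_footage and signal.ai_score > 0.7:
        return "partially_synthetic_label"

    return "synthetic_label" if signal.ai_score > 0.7 else "no_label"

print(label_decision(ContentSignal(0.55, ["face_swap"], False)))  # needs_human_review
```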

Embedding tools such as watermarks and metadata can also compromise users' privacy and anonymity. Online anonymity is crucial for survivors of domestic violence seeking support and for LGBTQ+ individuals living in hostile environments. Broad tracking measures risk treating every user as a potential offender.
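
As a rough illustration of why embedded provenance can conflict with anonymity, the sketch below assembles a hypothetical provenance record loosely inspired by C2PA-style manifests. The field names are assumptions; the point is the pattern of binding a content hash to creator and device identifiers, which is exactly what creates the privacy tension described above.

```python
import hashlib
import json

# Hypothetical provenance record; field names are illustrative.
def build_provenance_record(image_bytes: bytes, creator_id: str, device_id: str) -> dict:
    return {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),  # binds the record to the file
        "generator": "example-genai-tool-v1",                     # tool that produced the media
        "creator_id": creator_id,                                 # account identifier (privacy risk)
        "device_id": device_id,                                   # hardware identifier (privacy risk)
    }

record = build_provenance_record(b"\x89PNG...", creator_id="user-1234", device_id="dev-abcd")
print(json.dumps(record, indent=2))
```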

Reimagining Liability

Synthetic media governance must be context-specific, as the legality of AI-generated content often depends on how and where it is used. A teacher using generative AI to create synthetic media for educational purposes may not be acting unlawfully, while the same technology used to disseminate violence-inciting speech carries serious implications. A context-sensitive regulatory framework would scale obligations in proportion to potential impact, which requires collaborative processes to develop evidence-based risk classifications and harm principles.

Based on these collaboratively developed risk classifications, codes and standards should be integrated across the AI system lifecycle. Safety and security standards must be treated as non-negotiable requirements, and developers should implement progressively stronger protective measures as potential harm increases, ensuring that high-risk applications carry robust guardrails.
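
A minimal sketch of what such a tiered mapping could look like appears below. The tier names, example uses, and guardrails are placeholder assumptions; an evidence-based, collaborative process would define the real classification.

```python
# Illustrative risk tiers mapping use context to required guardrails.
RISK_TIERS = {
    "minimal": {                      # e.g. classroom illustrations, clearly disclosed satire
        "required_guardrails": ["provenance_disclosure"],
    },
    "elevated": {                     # e.g. realistic depictions of identifiable people
        "required_guardrails": ["provenance_disclosure", "consent_verification"],
    },
    "high": {                         # e.g. intimate imagery, election-related impersonation
        "required_guardrails": [
            "provenance_disclosure",
            "consent_verification",
            "pre_release_safety_review",
            "independent_audit_trail",
        ],
    },
}

def guardrails_for(tier: str) -> list[str]:
    """Look up the obligations attached to a risk tier; unknown tiers default to 'high'."""
    return RISK_TIERS.get(tier, RISK_TIERS["high"])["required_guardrails"]

print(guardrails_for("elevated"))
```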

Liability should trigger upon non-compliance with foundational safeguards. Given that one-third of generative AI tools enable intimate media creation, compliance and monitoring should be overseen by an independent oversight body comprising government officials, civil society representatives, academics, and subject matter experts. Such oversight mechanisms would enhance transparency and build user trust.
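
The sketch below illustrates this liability model under the assumption that an oversight body has defined a baseline set of safeguards: liability exposure arises only from the gap between what is required and what a deployment actually implements. The safeguard names are hypothetical.

```python
# A minimal compliance check: liability attaches only when baseline safeguards are missing.
def missing_safeguards(required: set[str], implemented: set[str]) -> set[str]:
    """Return the required safeguards that the deployment has not implemented."""
    return required - implemented

required = {"provenance_disclosure", "consent_verification", "pre_release_safety_review"}
implemented = {"provenance_disclosure"}

gaps = missing_safeguards(required, implemented)
if gaps:
    # Non-compliance with the baseline is what triggers liability in this model.
    print(f"Liability exposure: missing safeguards {sorted(gaps)}")
else:
    print("Baseline safeguards met; no liability trigger.")
```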

Developing a Collaborative Framework

Relying solely on voluntary commitments is inadequate because it establishes no binding obligations. The proposed Indian AI Safety Institute (AISI) should facilitate an iterative, collaborative process for generative AI governance, involving civil society, academics, and industry experts. The AISI should conduct empirical evaluations of AI models to develop safety standards focused on explainability, interpretability, and accountability.

Moreover, given the diverse applications of synthetic media, it is essential to define permissible and impermissible uses in collaboration with relevant stakeholders. Without clear boundaries, there is a risk of stifling legitimate creative expression while failing to address harmful uses, which can result in regulatory inconsistencies.
