India’s New AI Regulations: Mandatory Labelling and Compliance for Synthetic Content

Govt Tightens Digital Rules on AI

The Centre has significantly tightened India’s digital governance framework by formally bringing synthetically generated information (SGI) — including AI-generated audio, video, and visual content — under the ambit of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, through amendments notified on February 10, 2026.

Key Definitions and Compliance Obligations

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, which will come into force from February 20, 2026, introduce detailed definitions, disclosure requirements, and compliance obligations for digital intermediaries and social media platforms. This comes amid growing concerns over deepfakes, misinformation, and synthetic media misuse.

For the first time, the rules define “synthetically generated information” as audio, visual, or audio-visual content that is artificially or algorithmically created, modified, or altered using computer resources in a manner that appears real or authentic. This content may depict individuals or events in a way indistinguishable from real persons or real-world events.

However, the government has carved out explicit exemptions. Routine or good-faith editing, formatting, color correction, noise reduction, transcription, compression, translation, or accessibility-related enhancements will not be treated as SGI, provided such changes do not materially distort the original meaning or context.

Mandatory Labelling and Metadata Requirements

A key compliance requirement under the amended rules is the mandatory labelling of synthetically generated content. Intermediaries that enable or facilitate the creation or dissemination of SGI must ensure that such content is clearly, prominently, and unambiguously labelled so that users can immediately identify it as synthetic.

Additionally, platforms are required to embed persistent metadata or other technical provenance mechanisms, including unique identifiers, so that SGI can be traced back to the intermediary’s computer resource, to the extent technically feasible. Importantly, intermediaries are prohibited from enabling the removal of, or tampering with, such labels or metadata.
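In practice, a platform might satisfy the traceability requirement with a provenance record stored in, or alongside, the asset’s metadata. The sketch below is purely illustrative: the rules do not prescribe a schema, so every field name here (and the `intermediary_id` value) is an assumption, not a mandated format. Existing standards such as C2PA content credentials address the same problem in a more complete way.

```python
import hashlib
import json
import uuid


def attach_provenance(content_bytes: bytes, intermediary_id: str) -> dict:
    """Build an illustrative provenance record for a piece of SGI.

    All field names are hypothetical; the amended rules require a
    label, a unique identifier, and traceability to the intermediary,
    but do not specify how these are encoded.
    """
    return {
        "sgi_label": "synthetically_generated",            # user-facing label flag
        "unique_identifier": str(uuid.uuid4()),            # per-item identifier
        "intermediary_id": intermediary_id,                # traceability anchor
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),  # ties record to the asset
    }


# Example: a record for a synthetic clip, serialized as sidecar metadata.
record = attach_provenance(b"example synthetic video bytes", "platform-xyz")
sidecar = json.dumps(record)
```

Because the record includes a hash of the content, any alteration of the asset breaks the link to its provenance record, which supports the rule’s prohibition on tampering with labels or metadata.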

Enhanced Obligations for Significant Social Media Platforms

Significant social media intermediaries face enhanced obligations. Before allowing upload or publication, such platforms must obtain user declarations stating whether the content is synthetically generated. They must also deploy reasonable and proportionate technical measures, including automated tools, to verify the accuracy of such declarations.

Where content is identified as SGI, platforms must ensure it is displayed along with an appropriate disclosure or notice prominently indicating its synthetic nature. Failure to exercise due diligence in this regard could expose platforms to liability under the amended framework.

Stricter Timelines and Takedown Procedures

The amendments also compress several compliance timelines, signaling a tougher stance on harmful online content. Depending on the nature of the violation, the time limit for intermediaries to act on lawful orders or complaints has been reduced from 36 hours to 3 hours in specific circumstances, while other response timelines have been cut from 15 days to 7 days, and from 24 hours to 12 hours.

Synthetic Content and Unlawful Acts

The rules clarify that any reference to “information” used to commit an unlawful act—including under user due diligence obligations—explicitly includes synthetically generated information. This brings AI-generated content squarely within enforcement mechanisms related to offences under laws such as the Bharatiya Nyaya Sanhita, the Bharatiya Nagarik Suraksha Sanhita, and the Protection of Children from Sexual Offences Act.

Platforms are required to prevent the use of their services for creating or disseminating SGI that involves child sexual abuse material, indecent or obscene content, false electronic records, impersonation, or content related to explosives, weapons, or ammunition.

Clarification on Safe Harbour Protections

At the same time, the government has sought to reassure intermediaries regarding safe harbour. The notification clarifies that removal or disabling of access to SGI, including through automated tools and technical measures, will not amount to a violation of safe harbour conditions under Section 79 of the IT Act, provided such actions are taken in compliance with the rules.

Policy Signal on AI Governance

The amendments mark one of India’s most detailed regulatory interventions in the rapidly evolving AI and synthetic media ecosystem. By combining disclosure mandates, traceability requirements, and sharper enforcement timelines, the government appears to be aiming for a balance between innovation and harm prevention while placing clear accountability on platforms hosting or enabling synthetic content.
