New Regulations on AI Content Labelling and Takedown Timelines

What Has the Government Laid Down on AI Labelling?

The Ministry of Electronics and Information Technology (MeitY) has recently notified an amendment to the IT Rules, 2021. The amendment introduces measures aimed at ensuring transparency around AI-generated content on social media platforms. The new rules come into effect on February 20, 2026.

Mandatory Labelling of AI-Generated Content

Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, social media platforms are now required to “prominently” label any content that is synthetically generated, including AI-generated images and videos. Platforms with more than five million users must obtain a user declaration for AI-generated content and conduct technical verification before publication.
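The compliance flow this implies — collect a user declaration, run the platform's own verification, then apply a prominent label — might look roughly like the sketch below. This is an illustrative outline only: the rules mandate the outcome, not any particular implementation, and every name here (`UploadRequest`, `detect_synthetic`, `process_upload`) is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UploadRequest:
    """Hypothetical upload payload on a large platform (>5M users)."""
    media: bytes
    user_declared_synthetic: bool  # the user declaration the rules require

def detect_synthetic(media: bytes) -> bool:
    """Stub for the platform's own technical verification.

    The amendment asks for 'reasonable and appropriate technical
    measures' without mandating a specific detector; a real platform
    would call its classifier or provenance checks here. This sketch
    always returns False.
    """
    return False

def process_upload(req: UploadRequest) -> dict:
    # Verification runs regardless of the declaration, since a user
    # may fail to declare synthetic content.
    looks_synthetic = detect_synthetic(req.media)
    must_label = req.user_declared_synthetic or looks_synthetic
    return {
        "publish": True,
        # 'Prominent' labelling of synthetically generated content
        "label": "synthetically generated" if must_label else None,
    }
```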

MeitY has stated that the initiative seeks to combat deepfakes, misinformation, and other forms of unlawful content that could mislead users or threaten national integrity, and that awareness of a piece of content's authenticity is essential for user protection.

Defining Synthetically Generated Information (SGI)

The definition of Synthetically Generated Information (SGI) was broad in an earlier draft but has been narrowed in the notified rules. For instance, automatic retouching of smartphone photos does not qualify as SGI, nor do special effects in films. Certain types of SGI remain strictly prohibited, however, including child sexual exploitation material, forged documents, and deepfakes that misrepresent individuals.

Detection of AI-Generated Content

Large platforms are required to deploy reasonable and appropriate technical measures to identify unlawful SGI and to comply with the labelling requirements. A senior IT Ministry official has emphasized that many platforms already possess sophisticated detection tools, so the requirement largely formalizes existing capabilities. The rules also acknowledge collaborative efforts such as the Coalition for Content Provenance and Authenticity (C2PA), which develops open technical standards for attaching tamper-evident provenance metadata to digital content.
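As a concrete illustration of the C2PA side: in JPEG files, C2PA manifests are carried as JUMBF data inside APP11 marker segments, so a very rough presence check can scan for an APP11 segment containing the "c2pa" label. The sketch below does only that; it is not a validator, and real verification (manifest parsing, signature checks) requires dedicated C2PA tooling.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Rough presence check for a C2PA manifest in a JPEG.

    C2PA manifests are embedded as JUMBF boxes in APP11 (0xFFEB)
    marker segments. This detects presence only; it does not parse
    the manifest or validate its signatures.
    """
    i = 2  # skip the SOI marker (0xFFD8)
    n = len(jpeg_bytes)
    while i + 4 <= n and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: APP segments precede these
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if length < 2:
            break  # malformed segment length
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 with C2PA JUMBF
            return True
        i += 2 + length
    return False
```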

Changes to Takedown Timelines

The new IT Rules significantly shorten the takedown timelines for illegal content. Platforms must now act on takedown notices from government authorities and the police within two to three hours, respond to reports of sensitive content within 36 hours, and address user complaints in categories such as misinformation and nudity within one week.
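Operationally, these tiers behave like category-based SLA clocks on each report. A minimal sketch, assuming the tiers as summarized above (the category names and exact cut-offs here are illustrative, not the rules' own wording):

```python
from datetime import datetime, timedelta

# Illustrative SLA tiers drawn from the timelines described above;
# the notified rules use their own categories and wording.
TAKEDOWN_SLA = {
    "government_or_police_notice": timedelta(hours=3),
    "sensitive_content_report": timedelta(hours=36),
    "user_complaint_general": timedelta(weeks=1),  # e.g. misinformation, nudity
}

def action_deadline(category: str, received_at: datetime) -> datetime:
    """Return the time by which the platform must act on a report."""
    return received_at + TAKEDOWN_SLA[category]
```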

Increased User Notifications

Moreover, users will be reminded of platform terms and conditions more frequently. The amendments require such notifications at least once every three months rather than annually, with expanded content clarifying the potential consequences of non-compliance.

Platforms are also required to explicitly warn users that harmful deepfakes and illegal AI-generated content could expose them to legal repercussions, including identity disclosure to law enforcement and potential account suspension or termination.

In summary, these amendments mark a significant shift in how AI-generated content will be managed and regulated, promoting transparency and user safety in the digital landscape.
