What Has the Government Laid Down on AI Labelling?
The Ministry of Electronics and Information Technology (MeitY) has recently notified an amendment to the IT Rules, 2021. The amendment introduces measures aimed at ensuring transparency around AI-generated content on social media platforms. The new rules come into effect on February 20.
Mandatory Labelling of AI-Generated Content
Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, social media platforms are now required to “prominently” label any content that is synthetically generated, including AI-generated images and videos. Platforms with more than five million users must obtain a user declaration for AI-generated content and conduct technical verification before publication.
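For large platforms, the rule combines two inputs: a declaration from the uploading user and the platform's own technical verification. A minimal sketch of how such an upload flow might combine the two is below; all names here (`Upload`, `detector_score`, the 0.8 threshold) are illustrative assumptions, not terms from the Rules.

```python
from dataclasses import dataclass

# Hypothetical sketch of a large platform's upload flow: a user declaration
# plus an automated check decide whether the SGI label is applied.
# Detector, threshold, and field names are assumptions for illustration.

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool
    detector_score: float  # 0.0-1.0 from an assumed in-house classifier

DETECTOR_THRESHOLD = 0.8  # assumed tuning parameter

def needs_sgi_label(upload: Upload) -> bool:
    """Label if the user declared it synthetic OR the detector flags it."""
    return upload.user_declared_synthetic or upload.detector_score >= DETECTOR_THRESHOLD

def publish(upload: Upload) -> dict:
    """Attach a prominent label before publication when required."""
    return {
        "content_id": upload.content_id,
        "label": "synthetically generated" if needs_sgi_label(upload) else None,
    }
```

Note the asymmetry: a user declaration alone is sufficient to trigger labelling, while the detector acts as a backstop for undeclared synthetic content.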
According to MeitY, the initiative seeks to combat deepfakes, misinformation, and other forms of unlawful content that could mislead users or threaten national integrity. User awareness of whether content is authentic is deemed essential for user protection.
Defining Synthetically Generated Information (SGI)
The definition of Synthetically Generated Information (SGI) was broadened in an earlier draft but is now more focused. For instance, automatic retouching of smartphone photos does not qualify as SGI, nor do special effects in films. However, certain types of SGI, such as child sexual exploitation materials, forged documents, and deepfakes that misrepresent individuals, are strictly prohibited.
Detection of AI-Generated Content
Large platforms must deploy reasonable and appropriate technical measures to identify unlawful SGI and to ensure that labelling requirements are met. A senior IT Ministry official has emphasized that many platforms already possess sophisticated detection tools, and that the requirement simply formalizes existing capabilities. The rules also acknowledge collaborative efforts such as the Coalition for Content Provenance and Authenticity (C2PA), which aims to provide technical standards for invisibly labelling AI-generated content.
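Provenance standards like C2PA work by binding signed metadata (a "manifest") to the asset, recording how it was produced. Real C2PA manifests are cryptographically signed and embedded in the file; in the toy check below a plain dictionary stands in for that metadata, which is an assumption made purely for illustration. (`trainedAlgorithmicMedia` is the IPTC digital-source-type value used to mark AI-generated media.)

```python
# Toy illustration of provenance-based detection in the spirit of C2PA.
# A plain dict stands in for a signed, embedded manifest (an assumption);
# a real implementation would also verify the manifest's signature chain.

def is_ai_generated(manifest: dict) -> bool:
    """Check whether any recorded action marks the asset as AI-generated."""
    actions = manifest.get("assertions", {}).get("actions", [])
    return any(a.get("digitalSourceType") == "trainedAlgorithmicMedia" for a in actions)
```

A platform could run a check like this at upload time and apply the SGI label automatically whenever the provenance metadata reports algorithmic generation, independent of what the user declared.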
Changes to Takedown Timelines
The new IT Rules significantly shorten the takedown timelines for illegal content. Takedown notices from government authorities and police must now be acted on within 2-3 hours, while user complaints in categories such as misinformation and nudity must be addressed within one week. Reports of sensitive content must be responded to within 36 hours.
Increased User Notifications
Moreover, users will now receive reminders of platform terms and conditions more frequently. The amendments require that these notifications be sent at least once every three months rather than annually, and that they include expanded content clarifying the potential consequences of non-compliance.
Platforms are also required to explicitly warn users that harmful deepfakes and illegal AI-generated content could expose them to legal repercussions, including identity disclosure to law enforcement and potential account suspension or termination.
In summary, these amendments mark a significant shift in how AI-generated content is managed and regulated, promoting transparency and user safety in the digital landscape.