AI, Anonymity and Accountability: What the New IT Rules Change
India’s amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, notified on 10 February 2026, introduce significant changes for social media platforms and other intermediaries hosting audio-visual and AI-generated content.
Definition of Synthetic Media
The amendments formally define synthetically generated information (SGI), which includes audio-visual content that is “artificially or algorithmically created, generated, modified or altered” to appear real or authentic. This definition encompasses deepfakes, AI-generated videos, cloned voices, and hyper-realistic manipulated images.
However, routine editing such as colour correction or translation does not qualify as synthetic content if it does not “materially alter, distort, or misrepresent” the original material.
Broadening Definitions
The rules expand the definition of “audio, visual or audio-visual information” to include any image, video, recording, or graphic created or modified through computer resources, effectively bringing most forms of digital content under regulatory scrutiny.
Notably, any reference in the rules to “information” in connection with unlawful acts now includes synthetically generated information, meaning AI-generated content is treated no differently from traditional forms of content in legal contexts.
New Takedown Timelines
One of the most consequential changes is the reduction in takedown timelines. Previously, intermediaries had 36 hours to remove unlawful content upon receiving a court order. Now, platforms must act within three hours of receiving “actual knowledge” of unlawful content, either through a court order or a written government communication.
This three-hour requirement applies across all categories of unlawful content, not just AI-generated material. Prateek Waghre of the Tech Global Institute criticized the standard as unrealistic, suggesting that platforms may increasingly rely on automated systems to meet the tight deadlines.
Context of Regulation
Independent internet researcher Srinivas Kodali emphasized that the shorter timeline is aimed at limiting the rapid spread of harmful material online, suggesting that officials’ chief concern is the potential for content to go viral.
Shorter Timelines for Specific Complaints
The rules also maintain a separate, even shorter two-hour timeline for certain categories of user complaints, such as content that exposes private body parts or involves impersonation. This provision is particularly relevant given the rise of deepfake pornography and impersonation content.
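Taken together, the three-hour and two-hour windows are simple to state but tight to operationalise, which is why Waghre expects platforms to lean on automation. The sketch below is a hypothetical illustration of how a platform’s compliance tooling might track the two deadlines; the TakedownNotice structure, category names and timestamps are assumptions made for illustration, and only the three-hour and two-hour figures come from the amendments described here.

    # Hypothetical sketch: tracking the amended Rules' takedown deadlines.
    # The notice structure and category names are illustrative assumptions,
    # not anything defined in the Rules themselves.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Deadlines described in the amendments: three hours for court orders or
    # written government communications, two hours for certain user complaints.
    DEADLINES = {
        "court_or_government_order": timedelta(hours=3),
        "sensitive_user_complaint": timedelta(hours=2),
    }

    @dataclass
    class TakedownNotice:
        content_id: str
        category: str          # one of the DEADLINES keys
        received_at: datetime  # when "actual knowledge" is deemed to arise

        def due_at(self) -> datetime:
            return self.received_at + DEADLINES[self.category]

        def is_overdue(self, now: datetime) -> bool:
            return now > self.due_at()

    # A court-ordered notice received at 09:00 UTC must be acted on by 12:00 UTC;
    # the same content flagged via a sensitive user complaint would fall due at 11:00 UTC.
    notice = TakedownNotice("vid-123", "court_or_government_order",
                            datetime(2026, 2, 11, 9, 0, tzinfo=timezone.utc))
    print(notice.due_at(), notice.is_overdue(datetime.now(timezone.utc)))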
Labelling AI-Generated Content
Earlier drafts proposed a 10% visibility standard for labelling AI-generated content, but the final version requires only that such content be “prominently” labelled. While Waghre noted that a percentage-based requirement may have been impractical, he cautioned that the vagueness of “prominent” labelling could lead to inconsistent interpretations across platforms.
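For a sense of what the dropped standard would have implied in practice, the snippet below works through the arithmetic under the assumption that the 10% figure refers to display-area coverage; the frame size and helper function are illustrative only, and nothing here is prescribed by the final rules.

    # Hypothetical arithmetic only: treats the earlier draft's 10% figure as
    # display-area coverage, which is an assumption, not the final rule.
    def min_label_area(frame_width: int, frame_height: int, coverage: float = 0.10) -> int:
        """Smallest label area (in pixels) that meets a coverage-based standard."""
        return int(frame_width * frame_height * coverage)

    # For a 1920x1080 frame, a 10% standard implies a label of at least
    # 207,360 px, e.g. a full-width banner roughly 108 px tall. A bare
    # "prominently labelled" requirement sets no comparable floor.
    print(min_label_area(1920, 1080))  # 207360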
Expanded Obligations for Intermediaries
The amendments also expand intermediary obligations beyond content removal. Where a user violates the rules, platforms may be required to identify that user, meaning that, subject to legal safeguards, anonymity may not shield individuals who misuse AI tools or publish harmful content.
Criticism and Concerns
The digital rights group Internet Freedom Foundation (IFF) has criticized the amendments, calling them a “troubling addition” and warning that they could create risks without appropriate safeguards. The IFF expressed concerns about increased compliance burdens on intermediaries and the potential impact on safe harbour protections under Section 79 of the IT Act.
Waghre also questioned the practical implications of disclosing user identities in response to complaints, arguing that the rules do not clearly anchor such disclosures to court orders and raising concerns about potential misuse in political contexts.