AI, Anonymity and Accountability: What the New IT Rules Change

India’s amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, notified on 10 February 2026, introduce significant changes for social media platforms and other intermediaries hosting audio-visual and AI-generated content.

Definition of Synthetic Media

The amendments formally define synthetically generated information (SGI), which includes audio-visual content that is “artificially or algorithmically created, generated, modified or altered” to appear real or authentic. This definition encompasses deepfakes, AI-generated videos, cloned voices, and hyper-realistic manipulated images.

However, routine editing such as colour correction or translation does not qualify as synthetic content if it does not “materially alter, distort, or misrepresent” the original material.

Broadening Definitions

The rules expand the definition of “audio, visual or audio-visual information” to include any image, video, recording, or graphic created or modified through computer resources, effectively bringing most forms of digital content under regulatory scrutiny.

Notably, wherever the rules refer to “information” in the context of unlawful acts, the term now includes synthetically generated information, meaning AI-generated content is treated no differently from traditional content for legal purposes.

New Takedown Timelines

One of the most consequential changes is the reduction in takedown timelines. Previously, intermediaries had 36 hours to remove unlawful content upon receiving a court order. Now, platforms must act within three hours of receiving “actual knowledge” of unlawful content, either through a court order or a written government communication.

This three-hour requirement applies across all categories of unlawful content, not just AI-generated material. Prateek Waghre of the Tech Global Institute criticized the standard as unrealistic, suggesting that platforms may increasingly rely on automated systems to meet the tight deadlines.

Context of Regulation

Independent internet researcher Srinivas Kodali said the rule aims to limit the rapid spread of harmful material online, noting that government officials appear chiefly concerned with the potential for such content to go viral.

Shorter Timelines for Specific Complaints

The rules also maintain a separate, even shorter two-hour timeline for certain categories of user complaints, such as content that exposes private body parts or involves impersonation. This provision is particularly relevant given the rise of deepfake pornography and impersonation content.

Labeling AI-Generated Content

Earlier drafts proposed a 10% visibility standard for labeling AI-generated content, but the final version merely requires such content to be “prominently” labeled. While Waghre noted that a percentage-based requirement may have been impractical, he cautioned that the vagueness of “prominent” labeling could lead to inconsistent interpretations by platforms.

Expanded Obligations for Intermediaries

The amendments also expand intermediary obligations beyond content removal. In cases where a user violates the rules, platforms may be required to identify such users. This means that, subject to legal safeguards, anonymity may not protect individuals who misuse AI tools or publish harmful content.

Criticism and Concerns

The digital rights group Internet Freedom Foundation (IFF) has criticized the amendments, calling them a “troubling addition” and warning that they could create risks without appropriate safeguards. The IFF expressed concerns about increased compliance burdens on intermediaries and the potential impact on safe harbour protections under Section 79 of the IT Act.

Waghre questioned the practical implications of identity disclosure for complaints, arguing that the rules do not clearly anchor such disclosures to court orders, raising concerns about potential misuse in political contexts.
