Regulating AI-Generated Content: India’s 2026 Amendment Rules


The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 represent a significant regulatory shift within India’s digital governance framework. Officially notified on 10 February 2026 and set to take effect from 20 February 2026, these amendments primarily focus on the regulation of synthetically generated information (SGI), commonly known as AI-generated or deepfake content.

Objectives of the 2026 Amendments

The amendments aim to:

  • Regulate AI-generated and deepfake content.
  • Prevent the misuse of synthetic media for purposes such as fraud, impersonation, obscenity, misinformation, and criminal activity.
  • Mandate labelling and traceability of synthetic content.
  • Tighten compliance timelines for intermediaries.
  • Align references from the Indian Penal Code (IPC) to the Bharatiya Nyaya Sanhita, 2023.

Definition of Synthetically Generated Information

A major addition is the formal definition of:

  • Audio, Visual or Audio-Visual Information: Expanded to include any content created, generated, modified, or altered using computer resources.
  • Synthetically Generated Information (SGI): Defined as AI-created or algorithmically altered content that:
    • Appears real or authentic.
    • Depicts individuals or events.
    • Is likely to be perceived as indistinguishable from real-world events.

Routine editing and formatting are excluded, provided they do not materially distort the underlying content. This draws a clear line between legitimate enhancements and deepfake manipulation.

Implications of the Amendment

The amendment clarifies that any reference to “information” under unlawful activity provisions includes synthetically generated information. This ensures that deepfakes are treated on par with real content under the IT Act liability provisions, eliminating potential regulatory gaps for intermediaries.

Mandatory User Awareness Requirements

Intermediaries are now required to:

  • Inform users every three months about the legal consequences of misuse.
  • Warn about penalties under various acts, including:
    • Bharatiya Nyaya Sanhita, 2023
    • Protection of Children from Sexual Offences (POCSO) Act, 2012
    • Representation of the People Act, 1951
    • Indecent Representation of Women (Prohibition) Act, 1986
    • Immoral Traffic (Prevention) Act, 1956
  • Make users aware that violations may result in:
    • Immediate content removal.
    • Account suspension.
    • Identity disclosure to victims.
    • Mandatory reporting to authorities.

Due Diligence Obligations for Synthetic Content Platforms

Platforms enabling AI content creation must:

  • Prevent Illegal SGI: Utilize automated tools and reasonable technical measures to prevent the creation of synthetic content that:
    • Contains child sexual abuse material.
    • Is obscene, pornographic, or invasive of privacy.
    • Creates false documents or electronic records.
    • Aids in explosives or arms procurement.
    • Falsely depicts individuals or events to deceive.
  • Mandatory Labelling: All lawful SGI must be prominently labelled, containing visible disclosures and embedded metadata.
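The labelling obligation pairs a visible disclosure with machine-readable metadata. The Rules do not prescribe a file format, so the sketch below is purely illustrative: the field names (`sgi`, `generator`, `sha256`, and so on) are assumptions, not mandated identifiers. It shows how a platform might bundle a visible disclosure with embedded metadata that flags the content as synthetic and supports traceability.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_sgi(content: bytes, generator: str) -> dict:
    """Attach an illustrative SGI label: a visible disclosure string
    plus embedded metadata marking the content as synthetic.
    Field names are hypothetical, not prescribed by the Rules."""
    return {
        "visible_disclosure": "AI-generated content",
        "metadata": {
            "sgi": True,                      # machine-readable synthetic flag
            "generator": generator,           # tool that produced the content
            "created_at": datetime.now(timezone.utc).isoformat(),
            # Content hash to support traceability of the labelled item
            "sha256": hashlib.sha256(content).hexdigest(),
        },
    }

record = label_sgi(b"example synthetic media bytes", generator="example-model")
print(json.dumps(record, indent=2))
```

In practice, the metadata portion would be embedded in the media container itself (for example, image or video metadata fields) rather than carried as a separate JSON record.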

Obligations of Significant Social Media Intermediaries (SSMIs)

Before publishing user content, SSMIs must:

  • Require users to declare whether content is synthetic.
  • Deploy verification tools to validate declarations.
  • Ensure labelling if the content is confirmed as synthetic.
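The three SSMI obligations form a simple pre-publication pipeline: collect the user's declaration, run an automated check, and label when either signals synthetic content. The following minimal sketch illustrates that flow; `detect_synthetic` is a stand-in stub, not a real classifier, and the return fields are assumptions for illustration only.

```python
def process_upload(content: bytes, user_declared_synthetic: bool) -> dict:
    """Illustrative SSMI pre-publication flow: user declaration,
    automated verification, and labelling if either indicates SGI."""

    def detect_synthetic(data: bytes) -> bool:
        # Stub: a real platform would run a deepfake/provenance
        # detection model here.
        return b"synthetic" in data

    detected = detect_synthetic(content)
    is_synthetic = user_declared_synthetic or detected
    return {
        "publish": True,
        # Label applied whenever the content is confirmed as synthetic
        "label": "AI-generated content" if is_synthetic else None,
        # Flag uploads where the automated check contradicts the declaration
        "declaration_mismatch": detected and not user_declared_synthetic,
    }

print(process_upload(b"ordinary photo", user_declared_synthetic=False))
```

A mismatch flag like the one above matters because, as noted below, knowingly permitting unlabelled synthetic content is itself a due-diligence failure.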

A platform that knowingly permits unlabelled synthetic content is deemed to have failed its due-diligence obligations, significantly increasing its liability exposure.

Tightened Compliance Timelines

The amendment reduces key timelines:

  • Content removal after government order: from 36 hours to 3 hours.
  • Grievance resolution: from 15 days to 7 days.
  • Certain urgent removals: from 24 hours to 2 hours.
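These shortened windows translate directly into operational deadlines for compliance teams. A minimal sketch of that arithmetic (the category names are assumptions; the durations simply restate the revised timelines above):

```python
from datetime import datetime, timedelta, timezone

# Revised windows restated from the amendment's timelines (illustrative mapping).
COMPLIANCE_WINDOWS = {
    "government_removal_order": timedelta(hours=3),
    "urgent_removal": timedelta(hours=2),
    "grievance_resolution": timedelta(days=7),
}

def deadline(order_type: str, received_at: datetime) -> datetime:
    """Compute the compliance deadline from the moment an order is received."""
    return received_at + COMPLIANCE_WINDOWS[order_type]

received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(deadline("government_removal_order", received))  # 2026-02-20 12:00:00+00:00
```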

Safe Harbour Clarification

The amendment clarifies that actions such as removing or disabling access using automated tools will not breach safe harbour protections under Section 79 of the IT Act, legally protecting proactive moderation efforts.

Legal and Regulatory Impact

The amendments will have significant implications for:

  • Social Media Platforms: They will face a high compliance burden and increased liability risks.
  • AI Tools and Generative Platforms: Must incorporate watermarking or metadata and prevent deepfake misuse.
  • Users: There will be criminal exposure for malicious deepfakes and reduced anonymity if violations occur.

Conclusion

The 2026 Amendment Rules represent India’s most stringent regulation on deepfake content, introducing technical compliance standards for AI while shifting from reactive moderation to a preventive architecture. By formally regulating harmful content and the mechanisms of its creation, India sets a precedent in digital governance.
