Regulation of AI-Generated/Deepfake Content and Synthetically Generated Information (SGI) in India – New Rules
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 represent a significant regulatory shift within India’s digital governance framework. Officially notified on 10 February 2026 and set to take effect from 20 February 2026, these amendments primarily focus on the regulation of synthetically generated information (SGI), commonly known as AI-generated or deepfake content.
Objectives of the 2026 Amendments
The amendments aim to:
- Regulate AI-generated and deepfake content.
- Prevent the misuse of synthetic media for purposes such as fraud, impersonation, obscenity, misinformation, and criminal activity.
- Mandate labelling and traceability of synthetic content.
- Tighten compliance timelines for intermediaries.
- Update statutory references from the Indian Penal Code (IPC) to the Bharatiya Nyaya Sanhita, 2023.
Definition of Synthetically Generated Information
A major addition is the formal definition of:
- Audio, Visual or Audio-Visual Information: Expanded to include any content created, generated, modified, or altered using computer resources.
- Synthetically Generated Information (SGI): Defined as AI-created or algorithmically altered content that:
  - Appears real or authentic.
  - Depicts individuals or events.
  - Is likely to be perceived as indistinguishable from real-world events.
Routine editing and formatting are excluded, so long as they do not materially distort the underlying content. This draws a clear line between legitimate enhancements and deepfake manipulation.
Implications of the Amendment
The amendment clarifies that any reference to “information” under unlawful activity provisions includes synthetically generated information. This ensures that deepfakes are treated on par with real content under the IT Act liability provisions, eliminating potential regulatory gaps for intermediaries.
Mandatory User Awareness Requirements
Intermediaries are now required to:
- Inform users every three months about the legal consequences of misuse.
- Warn about penalties under various statutes, including:
  - Bharatiya Nyaya Sanhita, 2023
  - Protection of Children from Sexual Offences (POCSO) Act, 2012
  - Representation of the People Act, 1951
  - Indecent Representation of Women (Prohibition) Act, 1986
  - Immoral Traffic (Prevention) Act, 1956
- Make users aware that violations may result in:
  - Immediate content removal.
  - Account suspension.
  - Identity disclosure to victims.
  - Mandatory reporting to authorities.
Due Diligence Obligations for Synthetic Content Platforms
Platforms enabling AI content creation must:
- Prevent Illegal SGI: Utilize automated tools and reasonable technical measures to prevent the creation of synthetic content that:
  - Contains child sexual abuse material.
  - Is obscene, pornographic, or invasive of privacy.
  - Creates false documents or electronic records.
  - Aids in explosives or arms procurement.
  - Falsely depicts individuals or events to deceive.
- Mandatory Labelling: All lawful SGI must be prominently labelled, containing visible disclosures and embedded metadata.
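The Rules mandate visible disclosures and embedded metadata but do not prescribe a specific metadata schema. As a purely illustrative sketch (every field name below is hypothetical, not drawn from the Rules), a platform might pair a visible disclosure string with a machine-readable provenance record like this:

```python
import json
from datetime import datetime, timezone

def make_sgi_label(generator: str, content_id: str) -> dict:
    """Build a hypothetical SGI label: a human-visible disclosure plus
    machine-readable provenance metadata to embed alongside the content.
    The schema is illustrative only; the Rules do not fix one."""
    return {
        "visible_disclosure": "This content is synthetically generated (AI).",
        "metadata": {
            "sgi": True,                      # flags the content as synthetic
            "generator": generator,           # tool that produced the content
            "content_id": content_id,         # platform-internal identifier
            "labelled_at": datetime.now(timezone.utc).isoformat(),
        },
    }

label = make_sgi_label("example-image-model", "abc123")
print(json.dumps(label["metadata"], indent=2))
```

In practice, platforms are likely to converge on existing provenance standards (such as C2PA-style manifests) rather than ad-hoc schemas like this one.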
Obligations of Significant Social Media Intermediaries (SSMIs)
Before publishing user content, SSMIs must:
- Require users to declare whether content is synthetic.
- Deploy verification tools to validate declarations.
- Ensure labelling if the content is confirmed as synthetic.
If a platform knowingly permits unlabelled synthetic content, it is deemed to have failed its due diligence obligations, significantly increasing its liability exposure.
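The declare → verify → label sequence above can be sketched as a simple moderation flow. This is a hypothetical illustration only: the stub classifier stands in for whatever "reasonable and appropriate technical measures" an SSMI actually deploys, and the return strings are illustrative policy outcomes, not statutory terms.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool  # the user's mandatory declaration

def classifier_says_synthetic(upload: Upload) -> bool:
    # Placeholder for the platform's verification tooling; a real SSMI
    # would run a synthetic-media detection model here.
    return upload.user_declared_synthetic

def moderate(upload: Upload) -> str:
    """Sketch of the SSMI flow: honour the user's declaration, and use
    verification to catch false negatives before publication."""
    if upload.user_declared_synthetic:
        return "publish_with_sgi_label"       # declared synthetic: label it
    if classifier_says_synthetic(upload):
        return "publish_with_sgi_label"       # verification overrides declaration
    return "publish_unlabelled"               # treated as non-synthetic content

print(moderate(Upload("vid-1", user_declared_synthetic=True)))
```

The key design point the Rules imply is that the declaration alone is not sufficient: the platform must also verify it, which is why the flow checks the classifier even when the user declares the content non-synthetic.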
Tightened Compliance Timelines
The amendment reduces key timelines:
- Content removal after government order: from 36 hours to 3 hours.
- Grievance resolution: from 15 days to 7 days.
- Certain urgent removals: from 24 hours to 2 hours.
Safe Harbour Clarification
The amendment clarifies that actions such as removing or disabling access using automated tools will not breach safe harbour protections under Section 79 of the IT Act, legally protecting proactive moderation efforts.
Legal and Regulatory Impact
The amendments will have significant implications for:
- Social Media Platforms: They will face a high compliance burden and increased liability risks.
- AI Tools and Generative Platforms: Must incorporate watermarking or metadata and prevent deepfake misuse.
- Users: There will be criminal exposure for malicious deepfakes and reduced anonymity if violations occur.
Conclusion
The 2026 Amendment Rules represent India’s most stringent regulation on deepfake content, introducing technical compliance standards for AI while shifting from reactive moderation to a preventive architecture. By formally regulating harmful content and the mechanisms of its creation, India sets a precedent in digital governance.