MeitY’s New AI Content Rules: 8 Key Changes Social Media Users Must Know
The Ministry of Electronics and Information Technology (MeitY) has rolled out a stricter compliance framework for artificial intelligence-generated content on social media platforms, significantly tightening timelines and accountability norms for both users and tech companies. This move comes amid growing concerns over deepfakes, impersonation, and the misuse of synthetic media to mislead the public.
1. Why the Government Acted
The Centre has flagged a rise in the misuse of AI-generated content, particularly synthetic videos, images, and audio designed to misinform or defame individuals. Officials argue that while AI tools are advancing rapidly, guardrails around their use must evolve to prevent harm, misinformation, and reputational damage.
2. What Counts as Synthetic Media
Under the new framework, synthetic media refers to AI-generated or AI-manipulated content that can convincingly mimic real people, voices, events, or situations in a way that could deceive viewers into believing it is authentic. This includes:
- Realistic deepfake videos
- Fabricated audio clips
- Hyper-realistic fabricated images
3. Routine Edits Don’t Require AI Labelling
MeitY has clarified that everyday digital edits made with AI tools will not require an AI-generated label. The following activities are considered routine assistance, not synthetic media:
- Applying photo filters
- Compressing videos for upload
- Transcribing or translating content
- Cleaning up background noise
- Generating presentation slides
- Formatting documents
- Creating charts and diagrams
4. Mandatory Declaration of AI-Generated Content
Social media platforms must now ensure that users clearly declare when their content is AI-generated. Once content is identified as synthetic, the platform is required to attach a visible label informing viewers that it is not authentic.
5. Quarterly User Warnings
Platforms are required to notify users every three months about their responsibilities under the law. These notifications must clarify that illegal content can lead to post removal, account suspension, and potential reporting to law enforcement authorities.
6. Strong Stance on Deepfakes and Impersonation
Companies must explicitly caution users against creating deepfakes, impersonation content, or non-consensual intimate imagery. Violations could lead to swift takedown, account action, and legal proceedings under applicable laws.
7. Shorter Compliance Timelines for Takedowns
One of the most significant changes is the drastic reduction in compliance windows. If the government issues a takedown directive at 11 am, platforms must comply by 2 pm, cutting the earlier 36-hour window to just three hours. Such orders can be issued by police officers of the rank of Deputy Inspector General (DIG) or above.
In cases involving nude or sexually explicit images, platforms must remove content within two hours, compared to the earlier 24-hour deadline. For user-reported deceptive content or impersonation, platforms now have 36 hours to act, down from 72 hours.
8. Content AI Companies Are Prohibited from Generating
AI firms are barred from producing:
- Sexually explicit deepfakes without consent
- Content that violates bodily privacy
- Forged government identification documents
- Fake appointment letters
- Manipulated financial records
- Instructional content related to explosives
They are also restricted from creating political or public figure deepfakes, including:
- Fabricated speeches by election candidates
- Fake celebrity endorsements
- Staged interviews
- False directives by senior officials or CEOs
- Synthetic news visuals depicting riots, attacks, or accidents that never occurred