India’s New 3-Hour Deadline for Social Media to Remove Deepfakes

In a significant move to regulate AI-generated content, the Indian government has issued an official order requiring social media platforms such as Facebook, Instagram, and YouTube to clearly label all synthetic content, with the aim of curbing the misuse of deepfake technology.

Strict Enforcement Measures

The government has set a three-hour deadline for these platforms to take down flagged AI-generated or deepfake content, whether the takedown notice comes from the government itself or through a court order. The tight window is intended to limit the harm from misleading or deceptive AI outputs before they spread.

Regulations on Content Labeling

Platforms are now prohibited from removing or suppressing AI labels, or the metadata associated with them, once they have been applied. The rule is meant to preserve transparency and accountability in how AI-generated content circulates.

Automated Detection Tools

To prevent the circulation of illegal or harmful content, social media companies are required to deploy automated tools that can detect and filter out sexually exploitative or deceptive AI-generated content. This proactive approach is part of a broader strategy to safeguard users from the dangers of manipulated media.

User Warnings and Compliance

Social media platforms must also periodically warn users about the consequences of violating the rules on AI misuse. These warnings must be issued at least once every three months, keeping users continually aware of their obligations.

Context and Background

This latest directive follows growing concerns over the proliferation of AI-based deepfakes online. It builds on draft amendments proposed by the Ministry of Electronics and Information Technology (MeitY) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The draft rules emphasize the necessity for users to disclose when posting AI-generated or modified content and require platforms to adopt technology to verify such disclosures.

Current Platform Compliance

The initial enforcement focus is on platforms with more than five million registered users in India. Leading platforms have already introduced features that let users label content as generated or modified using artificial intelligence. YouTube, for instance, requires creators to disclose content that is “meaningfully altered” or synthetically generated, particularly where it misrepresents real persons or events.

Meta has similarly directed users on Facebook and Instagram to label content featuring digitally generated or altered audio and visuals, including AI-generated conversations, songs created with AI, and reels narrated with AI voiceovers.

Conclusion

The Indian government’s decisive actions reflect a growing recognition of the need to regulate AI-generated content responsibly. By establishing clear guidelines and enforcement measures, it aims to protect users and maintain the integrity of information shared on social media platforms.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...