India’s New AI Deepfake Regulations: Rapid Takedown Mandates for Social Media

The Government of India has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, establishing a formal regulatory framework for AI-generated deepfakes. The amendments impose specific obligations on social media platforms to improve transparency and accountability around synthetic audio-visual content.

Key Features of the New Regulations

The updated rules require platforms to:

  • Label synthetic audio-visual content to inform users about the nature of the media they are consuming.
  • Ensure traceability of such content to maintain accountability.
  • Deploy verification tools to help users discern real from synthetic media.
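The rules do not prescribe a technical format for these obligations, but the first two (labeling and traceability) could in principle be met with a provenance record attached to each piece of media. The sketch below is purely illustrative; the field names and schema are assumptions, not anything mandated by the regulation:

```python
import hashlib
import json


def make_synthetic_label(media_bytes: bytes, generator: str, uploader_id: str) -> dict:
    """Build a hypothetical provenance label for AI-generated media.

    The schema is illustrative only: the amended rules require labeling
    and traceability but do not mandate any particular format.
    """
    return {
        "synthetic": True,            # labeling: flag the content as AI-generated
        "generator": generator,       # tool claimed to have produced the media
        "uploader_id": uploader_id,   # traceability: who uploaded the content
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # content fingerprint
    }


label = make_synthetic_label(b"<video bytes>", "example-gen-model", "user-123")
print(json.dumps(label, indent=2))
```

In practice, platforms are more likely to rely on emerging provenance standards such as C2PA content credentials than on an ad-hoc record like this, but the idea is the same: a machine-readable label bound to the media and to an accountable uploader.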

Compliance Timelines

One of the most notable aspects of the new rules is the introduction of strict compliance timelines:

  • A three-hour deadline to comply with official takedown orders issued by authorities.
  • A two-hour window to act on urgent user complaints about problematic content.

These compressed timelines are expected to significantly increase the compliance burdens on platforms, particularly in a market as vast as India’s.
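Operationally, these deadlines amount to a simple service-level clock that starts when a report is received. The three-hour and two-hour windows below come from the rules as described above; the category names and function are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Deadlines as described in the amended rules: 3 hours for official
# takedown orders, 2 hours for urgent user complaints.
TAKEDOWN_WINDOWS = {
    "official_order": timedelta(hours=3),
    "urgent_user_complaint": timedelta(hours=2),
}


def takedown_deadline(received_at: datetime, category: str) -> datetime:
    """Return the time by which the platform must act on a report."""
    return received_at + TAKEDOWN_WINDOWS[category]


received = datetime(2025, 1, 15, 10, 0)
print(takedown_deadline(received, "official_order"))         # 2025-01-15 13:00:00
print(takedown_deadline(received, "urgent_user_complaint"))  # 2025-01-15 12:00:00
```

Even this toy version makes the compliance challenge visible: a two-hour window leaves almost no room for human review at the scale of India's user base.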

Implications for Social Media Platforms

According to policy expert Rohit Kumar of The Quantum Hub, the new regulations heighten legal risks for non-compliant platforms. With India among the world's largest internet markets, adherence to these rules is essential for platforms that want to continue operating there at scale.

Aprajita Rana of AZB & Partners emphasizes that the rules target AI-generated media specifically, and that the content-moderation obligations are tied directly to platforms' safe-harbour liability protections. Platforms must therefore take proactive measures to manage and mitigate risks associated with deepfake content, or risk losing those protections.

Conclusion

India’s new AI deepfake regulations present both challenges and responsibilities for social media platforms. By requiring labeling, traceability, and rapid response to user complaints, the government aims to foster a safer digital environment, while also placing greater accountability on the platforms themselves.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...