India Sets Three-Hour Deadline for Social Media Platforms to Remove AI-Generated, Deepfake Content
In a significant move to regulate AI-generated content, the Indian government has mandated that social media platforms such as Facebook, Instagram, and YouTube clearly label all synthetic material. The directive, issued as an official order, is aimed at curbing the misuse of deepfake technology.
Strict Enforcement Measures
The government has set a three-hour deadline for these platforms to take down flagged AI-generated or deepfake content, whether the takedown notice comes from the government or a court order. This rapid-response requirement is intended to limit the harm caused by misleading or deceptive AI outputs.
Regulations on Content Labeling
Platforms are now prohibited from removing or suppressing AI labels or associated metadata once they have been applied. This regulation aims to enhance transparency and accountability in the dissemination of AI-generated content.
Automated Detection Tools
To prevent the circulation of illegal or harmful content, social media companies are required to deploy automated tools that can detect and filter out sexually exploitative or deceptive AI-generated content. This proactive approach is part of a broader strategy to safeguard users from the dangers of manipulated media.
User Warnings and Compliance
Additionally, social media platforms must regularly warn users about the consequences of violating rules related to AI misuse. These warnings must be issued at least once every three months to ensure ongoing awareness among users.
Context and Background
This latest directive follows growing concerns over the proliferation of AI-based deepfakes online. It builds on draft amendments proposed by the Ministry of Electronics and Information Technology (MeitY) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The draft rules emphasize the necessity for users to disclose when posting AI-generated or modified content and require platforms to adopt technology to verify such disclosures.
Current Platform Compliance
Leading social media platforms have already introduced features that allow users to label content as generated or modified using artificial intelligence. For instance, YouTube requires creators to disclose content that is “meaningfully altered” or synthetically generated, particularly in cases where it misrepresents real persons or events. The initial focus of enforcement is on platforms with over five million registered users in India.
Meta has similarly directed users on Facebook and Instagram to label content featuring digitally generated or altered audio and visuals. This includes examples such as AI-generated conversations, songs created through AI, and reels narrated with AI voiceovers.
Conclusion
The Indian government’s decisive actions reflect a growing recognition of the need to regulate AI-generated content responsibly. By establishing clear guidelines and enforcement measures, it aims to protect users and maintain the integrity of information shared on social media platforms.