Govt Amends IT Rules, Focusing on AI Content
The Centre has recently amended the Information Technology (IT) Rules, bringing artificial intelligence (AI)-generated content under the legal framework. The change introduces labeling requirements for AI-generated content and imposes new obligations on both users and platforms.
Key Changes in Compliance Timeline
One of the most striking amendments is the reduction of the timeline for social media platforms to take down flagged unlawful content, from the current 36 hours to just three hours. The change is aimed at the rising incidence of unlawful content spreading widely within hours of being posted.
Additionally, the time allowed for platforms to resolve user-reported grievances has been cut to seven days, down from 15 days. Non-consensual intimate imagery (NCII) must now be removed within two hours, a notable decrease from the previous 24-hour requirement. These compressed timelines reflect growing concern over the viral nature of harmful content.
Addressing Rising Incidents of Harmful Content
The amendments are geared towards tackling the increasing occurrences of child sexual abuse material (CSAM), deepfakes targeting individuals, and NCII. The rules define unlawful content broadly, covering material prohibited under laws concerning national sovereignty, public order, decency, and morality, among other grounds.
Industry Reactions and Feasibility Concerns
Industry experts, including Ashish Aggarwal, vice-president of policy at Nasscom, have questioned whether complying with the new timelines is technically feasible. They hope the government has adequately assessed these obligations before implementation, since even unintentional non-compliance could expose platforms to liability.
The earlier draft amendments had required labeling of all content with any AI modification, a stipulation widely deemed impractical. The final rules narrow the focus to synthetically generated content intended to mislead or falsify information, which industry sees as a positive change.
AI Labeling Requirements
The updated rules require social media users to declare when posting AI-generated or modified content. Platforms are also obligated to implement technical measures to verify these declarations and prominently label both AI-generated images and audio.
Furthermore, technology intermediaries must inform their users about the AI regulations every three months. These measures are aimed at curbing the rapid spread of AI-based deepfakes.
Broader Implications for Technology Intermediaries
Most provisions in the new rules have been welcomed by industry bodies, although the 10-day implementation window poses challenges. The rules primarily target significant social media intermediaries (SSMIs)—those with over 5 million registered users in India—but all technology intermediaries share some of the new obligations.
Examples of AI-based software and services affected by these rules include OpenAI’s ChatGPT, Dall-E, Google’s Gemini, and Microsoft’s Copilot, among others. The definition of AI-generated content hinges on whether the material could reasonably be perceived as real or indistinguishable from actual events or individuals, a subjective standard that may prove difficult to enforce.
Conclusion
As the digital landscape continues to evolve, these amendments aim to enhance accountability and safety in the realm of AI-generated content. The government’s initiative reflects an urgent response to the challenges posed by rapidly advancing technologies and their potential misuse, highlighting the need for collaboration between industry stakeholders and regulatory bodies.