Amendments to IT Rules Address AI Content and User Safety

The Centre has recently amended the Information Technology (IT) Rules, bringing artificial intelligence (AI)-generated content within their legal framework. The amendment mandates labeling requirements for AI content and imposes new obligations on both users and platforms.

Key Changes in Compliance Timeline

One of the most striking amendments is the reduction of the timeline for social media platforms to take down flagged unlawful content from 36 hours to just three hours. This change aims to address the rising instances of unlawful content being shared widely shortly after posting.

Additionally, the time allowed for platforms to resolve user-reported grievances has been cut to seven days, down from 15 days. Non-consensual intimate imagery (NCII) must now be removed within two hours, a notable decrease from the previous 24-hour requirement. These urgent timelines reflect growing concern over the viral nature of harmful content.
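For a platform's compliance tooling, these figures translate into per-category removal deadlines. The sketch below uses only the hours cited above; the category keys, mapping structure, and function name are illustrative assumptions, not terms from the rules:

```python
from datetime import datetime, timedelta

# Removal deadlines (in hours) cited in the amended rules; the category
# keys and this mapping are illustrative assumptions, not the rules' text.
TAKEDOWN_HOURS = {
    "flagged_unlawful": 3,  # down from 36 hours
    "ncii": 2,              # down from 24 hours
}

def takedown_deadline(flagged_at: datetime, category: str) -> datetime:
    """Latest time by which the flagged content must be removed."""
    return flagged_at + timedelta(hours=TAKEDOWN_HOURS[category])

flagged = datetime(2025, 1, 10, 12, 0)
print(takedown_deadline(flagged, "ncii"))  # 2025-01-10 14:00:00
```

The grievance-resolution window (seven days, down from 15) could be tracked the same way with `timedelta(days=7)`.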

Addressing Rising Incidents of Harmful Content

The amendments are geared towards tackling the increasing occurrences of child sexual abuse material (CSAM), deepfakes targeting individuals, and NCII. The rules define a broad spectrum of information as unlawful content, which includes material prohibited under laws concerning national sovereignty, public order, decency, and morality, among others.

Industry Reactions and Feasibility Concerns

Industry experts, such as Ashish Aggarwal, vice-president of policy at Nasscom, have expressed concerns over the technical feasibility of complying with these new timelines. There is a shared hope that the government has adequately assessed these obligations before implementation, as even unintentional non-compliance could expose platforms to legal risk.

Previously, the draft amendments required all content with any AI modification to be labeled, a stipulation deemed impractical. The revised guidelines now focus specifically on synthetically generated content intended to mislead or falsify information, which is seen as a positive change.

AI Labeling Requirements

The updated rules require social media users to declare when posting AI-generated or modified content. Platforms are also obligated to implement technical measures to verify these declarations and prominently label both AI-generated images and audio.
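In practice, that declaration-plus-verification flow might look like the following minimal sketch. The field names and the either-signal heuristic are assumptions for illustration, not requirements spelled out in the rules:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    media_type: str          # e.g. "image" or "audio"
    user_declared_ai: bool   # the user's declaration at posting time
    detector_flags_ai: bool  # outcome of the platform's own technical check

def label_for(upload: Upload) -> Optional[str]:
    """Return the prominent label to attach, or None.

    A label is applied if either the user's declaration or the
    platform's verification indicates synthetic content.
    """
    if upload.user_declared_ai or upload.detector_flags_ai:
        return f"AI-generated {upload.media_type}"
    return None

print(label_for(Upload("image", True, False)))   # AI-generated image
print(label_for(Upload("audio", False, False)))  # None
```

Treating the two signals as an OR reflects the rules' intent that a platform cannot rely on user declarations alone: its own verification must also be able to trigger a label.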

Furthermore, technology intermediaries must inform their users about the AI regulations every three months. These measures are crucial in curbing the rapid escalation of AI-based deepfakes.

Broader Implications for Technology Intermediaries

Most provisions in the new rules have been welcomed by industry bodies, although the 10-day implementation window poses challenges. The rules primarily target significant social media intermediaries (SSMIs), those with over 5 million registered users in India, yet all technology intermediaries must comply with the new obligations.
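The SSMI classification itself is a simple numeric cut-off. A toy check, where the 5-million figure comes from the text but the function name and the strict reading of "over" as a greater-than comparison are assumptions:

```python
# The 5-million registered-user threshold comes from the rules' text;
# the function name and strict ">" reading of "over" are assumptions.
SSMI_THRESHOLD = 5_000_000

def is_ssmi(registered_users_in_india: int) -> bool:
    """True if a platform qualifies as a significant social media intermediary."""
    return registered_users_in_india > SSMI_THRESHOLD

print(is_ssmi(6_000_000))  # True
print(is_ssmi(4_000_000))  # False
```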

Examples of AI-based software and services affected by these rules include OpenAI's ChatGPT and DALL-E, Google's Gemini, and Microsoft's Copilot, among others. The definition of AI-generated content hinges on whether the material appears real or could be perceived as indistinguishable from actual events or individuals, a subjective standard that may be difficult to enforce.

Conclusion

As the digital landscape continues to evolve, these amendments aim to enhance accountability and safety in the realm of AI-generated content. The government’s initiative reflects an urgent response to the challenges posed by rapidly advancing technologies and their potential misuse, highlighting the need for collaboration between industry stakeholders and regulatory bodies.
