India’s AI Regulations Spark Censorship Concerns

The Modi government in India is poised to implement tougher regulations on the use of artificial intelligence in social media, aiming to combat the rising tide of disinformation. The proposed measures also include a potential ban on children’s access to social media platforms.

Combatting Disinformation

The initial move seeks to address the increasing prevalence of fake videos and other fraudulent content, yet it raises alarms about potential censorship and the erosion of digital freedoms.

These regulations are set to take effect on February 20, coinciding with the conclusion of an international AI summit in New Delhi featuring prominent global tech figures. Social media platforms will now have three hours (down from 36) to comply with government takedown orders, an effort to prevent harmful content from spreading rapidly.

Challenges for Social Media Giants

With over a billion internet users, India is struggling with the overwhelming presence of AI-generated disinformation on social media. Companies such as Instagram, Facebook, and X face heightened scrutiny as they navigate growing public anxiety regarding the misuse of AI, including the dissemination of misinformation and sexualized imagery of children.

However, rights groups caution that stringent oversight of AI could jeopardize freedom of speech. Under Prime Minister Narendra Modi, the government has faced accusations of curtailing freedom of expression, particularly that of activists and opposition figures, charges it denies.

Compressed Timeframes for Compliance

The Internet Freedom Foundation (IFF), a digital rights organization, warns that the shortened window for responding to takedown notices may compel platforms to act as “rapid-fire censors.”

Labelling Requirements

New regulations mandate that platforms must label any content that is “created, generated, modified, or altered” using computer resources, except for material modified during standard editing processes. This means that synthetic media must be clearly and permanently marked, raising questions about the effectiveness of such labels.

Apar Gupta, chief of the IFF, indicated that the stringent timelines make meaningful human review impractical, shifting control significantly away from users.

Automated Censorship Concerns

Critics argue that the rules represent a form of automated censorship. Many internet users are unaware of governmental orders to delete their content, raising alarms about transparency and user rights.

Responsibility Shift to Platforms

The new rules place the responsibility for content monitoring on the platforms themselves. Users must declare whether their content is synthetic, and platforms must verify and label such material prior to publication. However, the broad parameters for takedown requests leave much open to interpretation, potentially sweeping in satire, parody, and political commentary.

This shift to upstream responsibility has sparked concerns about collateral censorship as platforms may err on the side of caution in their monitoring efforts.

Age Restrictions Under Discussion

In addition to the AI regulations, India is exploring age-based restrictions for social media users. IT Minister Ashwini Vaishnaw has indicated that discussions are ongoing regarding limitations similar to those implemented in countries like Australia and France, where young teens are banned from popular platforms.

Vaishnaw emphasized the need for more robust regulations on deepfakes and the protection of children in the digital landscape, acknowledging the growing challenges posed by such technologies.

The government aims to address these issues, asserting that stronger regulation is essential to safeguard society.
