AI’s Role in Escalating Hate and Discrimination in India

Rights Groups Warn India’s AI Governance Fuels Hate

NEW DELHI, India (MNTV) — As Indian government officials convened in the capital for the AI Impact Summit 2026, two civil society organizations released a report warning that India’s artificial intelligence governance model risks entrenching discrimination and accelerating democratic erosion.

Overview of the Report

The report, titled AI Governance at the Edge of Democratic Backsliding, was published by the Center for the Study of Organized Hate (CSOH) and the Internet Freedom Foundation. It argues that New Delhi’s regulatory approach relies heavily on voluntary compliance and industry self-regulation while avoiding binding accountability mechanisms.

India’s Ministry of Electronics and Information Technology issued AI Governance Guidelines in November 2025, stating that a separate AI law is not required at present. Instead, potential harms are expected to be addressed under existing legislation such as the Information Technology Act, 2000 and India’s new criminal code, the Bharatiya Nyaya Sanhita, 2023.

The report highlights that this framework lacks mandatory transparency standards, independent audits, and clear liability rules.

Vulnerable Communities and Algorithmic Opacity

While official summit messaging emphasizes “democratizing AI,” the authors argue that vulnerable communities — including Muslims, Dalits, Adivasis, and sexual minorities — remain largely unaddressed in policy design. They warn that algorithmic opacity makes it difficult for marginalized groups to challenge automated decisions in court.

The report documents instances in which generative AI tools were used to spread communal propaganda. It cites AI-generated videos and images targeting Muslim communities and opposition leaders, including material circulated by state units of the Bharatiya Janata Party.

In one widely criticized case, the BJP’s Assam unit shared an AI-generated video depicting Chief Minister Himanta Biswa Sarma shooting at two visibly Muslim men alongside the caption “No Mercy,” before later deleting it.

Researchers link such synthetic content to broader Hindutva narratives such as “love jihad” and “land jihad,” conspiracy theories frequently invoked to portray Muslims as demographic or cultural threats. The report argues that AI lowers the cost of producing inflammatory propaganda, increasing the risk of offline violence.

State Use of AI in Policing and Surveillance

Beyond online harms, the authors highlight the expanding state use of AI in policing and surveillance. In Maharashtra, authorities have proposed AI tools to identify alleged Bangladeshi immigrants and Rohingya refugees through linguistic profiling, a move experts say could enable ethnic or religious targeting.

Facial recognition systems have also been deployed in cities including Delhi, Hyderabad, Bengaluru, and Lucknow without a comprehensive legal framework.

Concerns Over AI in Welfare Delivery

AI integration into welfare delivery is another significant concern. Mandatory facial recognition authentication in government nutrition programs has raised fears of exclusion due to technical failures, echoing earlier controversies over Aadhaar-based authentication in food distribution.

Recommendations for AI Governance

As global delegates discuss innovation and growth, the report urges binding oversight, minority safeguards, and clear limits on biometric surveillance. It warns that without enforceable protections, India’s AI expansion could reinforce existing hierarchies of caste, religion, and political power rather than bridge technological divides.
