Rights Groups Warn India's AI Governance Risks Fueling Hate
NEW DELHI, India (MNTV) — As Indian government officials convened in the capital for the AI Impact Summit 2026, two civil society organizations released a report warning that India’s artificial intelligence governance model risks entrenching discrimination and accelerating democratic erosion.
Overview of the Report
The report, titled AI Governance at the Edge of Democratic Backsliding, was published by the Center for the Study of Organized Hate (CSOH) and the Internet Freedom Foundation. It argues that New Delhi’s regulatory approach relies heavily on voluntary compliance and industry self-regulation while avoiding binding accountability mechanisms.
India’s Ministry of Electronics and Information Technology issued AI Governance Guidelines in November 2025, stating that a separate AI law is not required at present. Instead, potential harms are expected to be addressed under existing legislation such as the Information Technology Act, 2000 and India’s new criminal code, the Bharatiya Nyaya Sanhita, 2023.
The report highlights that this framework lacks mandatory transparency standards, independent audits, and clear liability rules.
Vulnerable Communities and Algorithmic Opacity
While official summit messaging emphasizes “democratizing AI,” the authors argue that the needs of vulnerable communities — including Muslims, Dalits, Adivasis, and sexual minorities — are largely overlooked in policy design. They warn that algorithmic opacity makes it difficult for marginalized groups to challenge automated decisions in court.
The report documents instances in which generative AI tools were used to spread communal propaganda. It cites AI-generated videos and images targeting Muslim communities and opposition leaders, including material circulated by state units of the Bharatiya Janata Party.
In one widely criticized case, the BJP’s Assam unit shared an AI-generated video depicting Chief Minister Himanta Biswa Sarma shooting at two visibly Muslim men alongside the caption “No Mercy,” before later deleting it.
Researchers link such synthetic content to broader Hindutva narratives such as “love jihad” and “land jihad,” conspiracy theories frequently invoked to portray Muslims as demographic or cultural threats. The report argues that AI lowers the cost of producing inflammatory propaganda, increasing the risk of offline violence.
State Use of AI in Policing and Surveillance
Beyond online harms, the authors highlight the expanding state use of AI in policing and surveillance. In Maharashtra, authorities have proposed AI tools to identify alleged Bangladeshi immigrants and Rohingya refugees through linguistic profiling, a move experts say could enable ethnic or religious targeting.
Facial recognition systems have also been deployed in cities including Delhi, Hyderabad, Bengaluru, and Lucknow without a comprehensive legal framework.
Concerns Over AI in Welfare Delivery
The integration of AI into welfare delivery is another significant concern. The mandatory use of facial recognition in government nutrition programs has raised fears that technical failures could exclude eligible beneficiaries, echoing earlier controversies over Aadhaar-based authentication in food distribution.
Recommendations for AI Governance
As global delegates discuss innovation and growth, the report urges binding oversight, minority safeguards, and clear limits on biometric surveillance. It warns that without enforceable protections, India’s AI expansion could reinforce existing hierarchies of caste, religion, and political power rather than bridge technological divides.