Joint Report: India AI Impact Summit 2026
AI Governance at the Edge of Democratic Backsliding
As India prepares to host the AI Impact Summit 2026, the first global AI summit to be held in the Global South, the Center for the Study of Organized Hate (CSOH) and the Internet Freedom Foundation (IFF) have released a critical policy report. The report examines the stark disconnect between India’s official AI rhetoric and the troubling reality of AI-enabled hate, discrimination, surveillance, repression, and violence against minority and marginalized communities, all occurring within a broader context of democratic backsliding.
Weaponization of AI
The report, titled “India AI Impact Summit 2026: AI Governance at the Edge of Democratic Backsliding,” documents the alarming ways in which generative AI tools are being weaponized. Notably, the ruling party, Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP), uses these tools to demonize and dehumanize religious minorities. Furthermore, opaque AI systems deployed by the state facilitate mass surveillance, exclusion from essential services, and the deletion of voters from electoral rolls.
Summit Context
As global leaders, technology executives, and civil society representatives converge in India for a summit centered on “Democratizing AI and Bridging the AI Divide” through the pillars of “People, Planet, and Progress,” the report uncovers a troubling pattern of AI deployment that undermines democratic rights and specifically targets vulnerable communities.
AI-Generated Hate
The report reveals that the ruling government systematically employs AI-generated content on official social media accounts to disseminate divisive and dehumanizing anti-minority messages. For instance, just one week prior to the Summit, the BJP’s Assam unit shared an AI-generated video depicting the state’s Chief Minister shooting at two visibly Muslim men, titled “No Mercy.” This example illustrates the extent to which AI is being leveraged to incite violence and fear.
Surveillance and Predictive Policing
In addition to online hate, law enforcement agencies across multiple states are using facial recognition technology, predictive policing algorithms, and AI-powered surveillance systems without independent oversight, judicial authorization, or transparency. One of the most alarming developments is the announcement by Devendra Fadnavis, Chief Minister of Maharashtra, of an AI tool being developed in collaboration with the Indian Institute of Technology Bombay to detect alleged Bangladeshi immigrants and Rohingya refugees through language-based verification, such as analyzing speech patterns and tone. Experts warn that such tools could become instruments of discrimination against already persecuted Bengali-speaking Muslim communities and low-income migrant workers.
Governance Approach Issues
At the core of these concerns lies India’s governance approach to AI. The AI Governance Guidelines released in November 2025 favor voluntary self-regulation over binding accountability mechanisms, explicitly prioritizing “responsible innovation” over necessary caution. While these guidelines acknowledge the need to protect vulnerable groups, they fail to address the specific harms faced by religious minorities, Dalit, Bahujan, and Adivasi communities, and sexual and gender minorities.
This governance framework places an unrealistic burden of evidence-gathering and challenging powerful AI systems on the very communities most affected, without mandating the transparency necessary for such challenges to be viable.
Call to Action
The CSOH and IFF urge global leaders at the Summit to demand and commit to rights-respecting regulation with clear obligations across the AI value chain. They advocate prohibiting the use of AI for mass surveillance and predictive policing, requiring transparency and independent oversight for all public-sector AI deployments, and including the voices of affected communities in AI governance discussions. The organizations stress that voluntary commitments from tech companies are not enough, and urge robust regulation to address the harms arising from the design, development, and deployment of AI systems.
Press Inquiries
For press inquiries, contact:
Eviane Leidig, Director of Research, Center for the Study of Organized Hate (press@csohate.org)
Apar Gupta, Advocate and Founder Director, Internet Freedom Foundation (media@internetfreedom.in)
About the Organizations
The Center for the Study of Organized Hate (CSOH) conducts research and informs policy to combat organized hate, extremism, violence, and online harms. The Internet Freedom Foundation (IFF) is a digital rights advocacy organization based in New Delhi, working to protect the fundamental rights of Indians in the face of digital technologies.