AI Governance Amid Democratic Erosion in India

Joint Report: India AI Impact Summit 2026

AI Governance at the Edge of Democratic Backsliding

As India prepares to host the AI Impact Summit 2026, the first global AI summit in the Global South, a critical policy report has emerged from the Center for the Study of Organized Hate (CSOH) and the Internet Freedom Foundation (IFF). This report examines the stark disconnect between India’s official AI rhetoric and the troubling reality of AI-enabled hate, discrimination, surveillance, repression, and violence against minority and marginalized communities, all occurring within a broader context of democratic backsliding.

Weaponization of AI

The report, titled “India AI Impact Summit 2026: AI Governance at the Edge of Democratic Backsliding,” documents the alarming ways in which generative AI tools are being weaponized. Notably, the ruling party, Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP), uses these tools to demonize and dehumanize religious minorities. Furthermore, opaque AI systems deployed by the state facilitate mass surveillance, exclusion from essential services, and the deletion of voters from electoral rolls.

Summit Context

As global leaders, technology executives, and civil society representatives converge in India for a summit centered on “Democratizing AI and Bridging the AI Divide” through the pillars of “People, Planet, and Progress,” the report uncovers a troubling pattern of AI deployment that undermines democratic rights and specifically targets vulnerable communities.

AI-Generated Hate

The report reveals that the ruling party and government officials systematically employ AI-generated content on official social media accounts to disseminate divisive and dehumanizing anti-minority messages. For instance, just one week prior to the Summit, the BJP’s Assam unit shared an AI-generated video, titled “No Mercy,” depicting the state’s Chief Minister shooting at two visibly Muslim men. This example illustrates the extent to which AI is being leveraged to incite violence and fear.

Surveillance and Predictive Policing

In addition to online hate, law enforcement agencies across multiple states are deploying facial recognition technology, predictive policing algorithms, and AI-powered surveillance systems without independent oversight, judicial authorization, or transparency. One of the most alarming developments is Maharashtra Chief Minister Devendra Fadnavis’s announcement of an AI tool, developed in collaboration with the Indian Institute of Technology Bombay, to detect alleged Bangladeshi immigrants and Rohingya refugees through language-based verification, such as analysis of speech patterns and tone. Experts warn that such tools could become instruments of discrimination against already persecuted Bengali-speaking Muslim communities and low-income migrant workers.

Governance Approach Issues

At the core of these concerns lies India’s governance approach to AI. The AI Governance Guidelines released in November 2025 favor voluntary self-regulation over binding accountability mechanisms, explicitly prioritizing “responsible innovation” over necessary caution. While these guidelines acknowledge the need to protect vulnerable groups, they fail to address the specific harms faced by religious minorities, Dalit, Bahujan, and Adivasi communities, and sexual and gender minorities.

This governance framework places an unrealistic burden of evidence-gathering and challenging powerful AI systems on the very communities most affected, without mandating the transparency necessary for such challenges to be viable.

Call to Action

The CSOH and IFF urge global leaders at the Summit to demand and commit to rights-respecting regulation with clear obligations across the AI value chain. They advocate for a prohibition on the use of AI for mass surveillance and predictive policing, mandatory transparency and independent oversight for all public-sector AI deployments, and the inclusion of affected communities’ voices in AI governance discussions. The organizations stress that governments must move beyond voluntary commitments from tech companies and urgently adopt robust regulation to address the harms arising from the design, development, and deployment of AI systems.

Press Inquiries

For press inquiries, contact:

Eviane Leidig, Director of Research, Center for the Study of Organized Hate (press@csohate.org)

Apar Gupta, Advocate and Founder Director, Internet Freedom Foundation (media@internetfreedom.in)

About the Organizations

The Center for the Study of Organized Hate (CSOH) conducts research and informs policy to combat organized hate, extremism, violence, and online harms. The Internet Freedom Foundation (IFF) is a digital rights advocacy organization based in New Delhi, working to protect the fundamental rights of Indians in the face of digital technologies.
