Watchdog Report Alleges Use of AI to Target Minorities, Expand Surveillance
The study raises concerns over the political misuse of generative AI and weak safeguards.
Introduction
NEW DELHI – A joint report, released days before the India AI Impact Summit 2026 (Feb 16-20), has raised serious concerns over the political and social use of artificial intelligence in India, with particular reference to its impact on Muslim communities.
Key Findings
The report, titled “India AI Impact Summit 2026: AI Governance at the Age of Democratic Backsliding,” was published by the Internet Freedom Foundation and the Centre for the Study of Organised Hate. It claims that generative AI tools are being used to spread anti-minority narratives, strengthen surveillance systems, and influence the electoral process, while transparency and regulation remain weak.
Targeting Minorities
The report alleges that generative AI is being used by political actors to deepen social divisions and target minorities, especially Muslims. An example from Assam highlights this issue: the state unit of the Bharatiya Janata Party shared an AI-generated video showing Assam Chief Minister Himanta Biswa Sarma shooting two Muslim men, captioned “No Mercy.” The authors of the report described this video as “inflammatory content that can pose a serious threat to social harmony.”
Concerns Among Communities
A senior member of the Internet Freedom Foundation stated, “When political actors use AI to depict violence against a specific religious community, it sends a dangerous message. It normalizes hate and creates fear among citizens.”
For many Indian Muslims, such developments are worrying. A community activist in Delhi expressed, “We already face suspicion in many spaces. When technology is used to show violence against us, even if it is fake, it increases anxiety and makes people feel unsafe.”
Weak Safeguards
The report points to gaps in safeguards within popular generative AI systems. Widely used text-to-image tools such as Meta AI, Microsoft Copilot, OpenAI ChatGPT, and Adobe Firefly reportedly lack effective safeguards for Indian languages and local social contexts. According to the study, these tools sometimes reinforce stereotypes against certain communities.
A researcher associated with the report stated, “Content moderation systems are often designed with Western contexts in mind. They do not fully understand Indian political signals, dog whistles, or coded language. This gap can allow harmful content to circulate.”
Surveillance Measures
The report raises concerns over surveillance measures, referring to a statement by Maharashtra Chief Minister Devendra Fadnavis about the development of an AI tool in collaboration with the Indian Institute of Technology Bombay. This tool is reportedly intended to help identify alleged illegal Bangladeshi immigrants and Rohingya refugees through initial screening based on language and accent.
Linguistic experts have questioned the reliability of such a system, noting, “Bengali dialects across borders share deep similarities. It is extremely difficult, if not impossible, to determine nationality accurately through accent alone.”
A lawyer working on citizenship cases added, “When technology is used to flag people based on how they speak, the burden falls on poor and marginalized citizens to prove they belong.”
Facial Recognition and Policing
Another key concern is the use of facial recognition technology (FRT) by police forces across several states. The study states there is little public information about how these systems are procured, how accurate they are, and how errors are handled. Mistaken identification can have serious consequences, particularly in criminal investigations.
A digital rights advocate remarked, “If a facial recognition system wrongly matches a person, that error can follow them for years. For minorities who already face profiling, the risks are higher.”
Welfare Schemes and Algorithmic Exclusion
The report highlights problems in welfare delivery, claiming that flaws in AI systems have excluded eligible beneficiaries from government schemes in several states. Opaque algorithms and automated decision-making systems are deployed without public consultation, leaving citizens flagged as ineligible to prove their own eligibility.
A social worker in Uttar Pradesh mentioned, “Many families do not understand why their ration or pension stops. They are told the system has rejected them. There is no clear explanation and no simple way to appeal.”
Concerns Over the Electoral Process
The study raises questions about the lack of transparency in software used to mark “suspicious” voters. Limited clarity exists on how voters are flagged, how data is verified, and what safeguards prevent errors. A constitutional expert stated, “The right to vote is fundamental. If automated systems are used without transparency, citizens may have to go through long legal processes just to protect their voting rights.”
Community leaders have expressed concern that Muslims, who often face scrutiny in citizenship-related matters, could be affected if flawed systems are used in voter verification.
Recommendations
The report closes with several recommendations for governments, industry, and civil society. These include:
- Transparent policy-making
- Independent review of algorithms
- Strong human oversight
- Clear complaint systems
- Alignment with international human rights standards
A representative of the Centre for the Study of Organised Hate emphasized, “Artificial intelligence should serve people, not target them. Governance must be rooted in constitutional values and equal rights.”
Conclusion
As the India AI Impact Summit 2026 approaches, the report adds urgency to the debate on how AI is being used in India. For many Indian Muslims, the core concern is not technology itself, but how it is used. A young student in Mumbai summarized the mood: “We are not against technology. We just want fairness. We want to know that new tools will not be used to single us out.”
The report concludes that aligning AI governance with democratic values and fundamental rights is essential if trust is to be maintained in a diverse country like India.