AI Misuse and Minority Targeting in India: A Growing Concern

Watchdog Report Alleges Use of AI to Target Minorities, Expand Surveillance

The study raises concerns over the political misuse of generative AI and weak safeguards.

Introduction

NEW DELHI – A joint report released days before the India AI Impact Summit 2026 (Feb 16-20) has raised serious concerns over the political and social misuse of artificial intelligence in India, with particular reference to its impact on Muslim communities.

Key Findings

The report, titled “India AI Impact Summit 2026: AI Governance in the Age of Democratic Backsliding,” was published by the Internet Freedom Foundation and the Centre for the Study of Organised Hate. It claims that generative AI tools are being used to spread anti-minority narratives, strengthen surveillance systems, and influence the electoral process, while transparency and regulation remain weak.

Targeting Minorities

The report alleges that political actors are using generative AI to deepen social divisions and target minorities, especially Muslims. It cites an example from Assam: the state unit of the Bharatiya Janata Party shared an AI-generated video showing Assam Chief Minister Himanta Biswa Sarma shooting two Muslim men, captioned “No Mercy.” The report’s authors described the video as “inflammatory content that can pose a serious threat to social harmony.”

Concerns Among Communities

A senior member of the Internet Freedom Foundation stated, “When political actors use AI to depict violence against a specific religious community, it sends a dangerous message. It normalizes hate and creates fear among citizens.”

For many Indian Muslims, such developments are worrying. A community activist in Delhi said, “We already face suspicion in many spaces. When technology is used to show violence against us, even if it is fake, it increases anxiety and makes people feel unsafe.”

Weak Safeguards

The report points to gaps in safeguards within popular generative AI systems. Widely used image-generation tools such as Meta AI, Microsoft Copilot, OpenAI’s ChatGPT, and Adobe Firefly reportedly lack effective controls for Indian languages and local social contexts. According to the study, these tools sometimes reinforce stereotypes against certain communities.

A researcher associated with the report stated, “Content moderation systems are often designed with Western contexts in mind. They do not fully understand Indian political signals, dog whistles, or coded language. This gap can allow harmful content to circulate.”

Surveillance Measures

The report also raises concerns over surveillance measures, citing a statement by Maharashtra Chief Minister Devendra Fadnavis about an AI tool being developed in collaboration with the Indian Institute of Technology Bombay. The tool is reportedly intended to screen for alleged illegal Bangladeshi immigrants and Rohingya refugees based on language and accent.

Linguistic experts have questioned the reliability of such a system. As one noted, “Bengali dialects across borders share deep similarities. It is extremely difficult, if not impossible, to determine nationality accurately through accent alone.”

A lawyer working on citizenship cases added, “When technology is used to flag people based on how they speak, the burden falls on poor and marginalized citizens to prove they belong.”

Facial Recognition and Policing

Another key concern is the use of facial recognition technology (FRT) by police forces across several states. The study notes that there is little public information about how these systems are procured, how accurate they are, and how errors are handled. Mistaken identity can have serious consequences, particularly in criminal investigations.

A digital rights advocate remarked, “If a facial recognition system wrongly matches a person, that error can follow them for years. For minorities who already face profiling, the risks are higher.”
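
Why do small error rates matter so much here? A simple back-of-the-envelope calculation makes the advocate’s point concrete. The figures below are illustrative assumptions, not numbers from the report: even a matcher that wrongly flags only 0.1% of faces will, when scanning hundreds of thousands of people against a small watchlist, produce far more false alerts than true ones.

```python
# Illustrative base-rate arithmetic for face matching at scale.
# Every figure below is an assumption chosen for the example, not from the report.

daily_scans = 500_000        # faces scanned per day in a large city (assumed)
watchlisted_in_crowd = 10    # watchlisted individuals actually among them (assumed)
true_match_rate = 0.95       # chance a watchlisted person is correctly flagged
false_match_rate = 0.001     # chance an innocent person is wrongly flagged (0.1%)

true_alerts = watchlisted_in_crowd * true_match_rate
false_alerts = (daily_scans - watchlisted_in_crowd) * false_match_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:.0f}")    # ~10 correct flags
print(f"False alerts: {false_alerts:.0f}")   # ~500 innocent people flagged
print(f"Precision:    {precision:.1%}")      # ~1.9%: most alerts are wrong
```

Under these assumptions, roughly 98 of every 100 alerts point at the wrong person, which is why the report’s call for published accuracy figures and clear error-handling procedures matters.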

Welfare Schemes and Algorithmic Exclusion

The report highlights problems in welfare delivery, claiming that flaws in AI systems have excluded eligible beneficiaries from government schemes in several states. It says opaque algorithms and automated decision-making systems are deployed without public consultation, leaving citizens to prove their eligibility once the system flags them as ineligible.

A social worker in Uttar Pradesh said, “Many families do not understand why their ration or pension stops. They are told the system has rejected them. There is no clear explanation and no simple way to appeal.”

Concerns Over the Electoral Process

The study also questions the lack of transparency in software used to mark “suspicious” voters: there is little clarity on how voters are flagged, how the underlying data is verified, or what safeguards prevent errors. A constitutional expert stated, “The right to vote is fundamental. If automated systems are used without transparency, citizens may have to go through long legal processes just to protect their voting rights.”

Community leaders have expressed concern that Muslims, who often face scrutiny in citizenship-related matters, could be affected if flawed systems are used in voter verification.

Recommendations

The report closes with several recommendations for governments, industry, and civil society. These include:

  • Transparent policy-making
  • Independent review of algorithms
  • Strong human oversight
  • Clear complaint systems
  • Alignment with international human rights standards

A representative of the Centre for the Study of Organised Hate emphasized, “Artificial intelligence should serve people, not target them. Governance must be rooted in constitutional values and equal rights.”

Conclusion

As the India AI Impact Summit 2026 approaches, the report adds urgency to the debate on how AI is being used in India. For many Indian Muslims, the core concern is not technology itself, but how it is used. A young student in Mumbai summarized the mood: “We are not against technology. We just want fairness. We want to know that new tools will not be used to single us out.”

The report concludes that aligning AI governance with democratic values and fundamental rights is essential if trust is to be maintained in a diverse country like India.
