Proposed AI Framework May Entrench Bias Against Minorities, Groups Caution
As the AI Impact Summit 2026 prepares to open in New Delhi, a new report by the Center for the Study of Organized Hate and the Internet Freedom Foundation has raised significant concerns about India’s proposed artificial intelligence governance framework. While the framework is presented as promoting inclusivity and innovation, the report warns that, absent binding safeguards, it may disproportionately harm religious minorities, Dalit and Adivasi communities, and sexual and gender minorities.
Concerns Over Structural Inequalities
Titled AI Governance at the Edge of Democratic Backsliding, the report argues that the Union government’s preference for voluntary compliance and self-regulation over enforceable statutory protections risks entrenching structural inequalities. Marginalized communities, it notes, often lack the financial and institutional capacity to challenge opaque algorithmic systems in court, which exacerbates existing disparities.
Inadequate Existing Statutes
The Ministry of Electronics and Information Technology’s AI Governance Guidelines, released in November 2025, reject what they describe as “compliance-heavy regulation” in favor of reliance on existing law. The report contends, however, that statutes such as the Information Technology Act, 2000 and the Bharatiya Nyaya Sanhita, 2023 may prove inadequate in addressing emerging harms rooted in automated decision-making systems.
Vulnerable Groups and Targeted Protections
While the government’s summit vision of “Democratizing AI and Bridging the AI Divide” emphasizes inclusive technological growth, the report notes that the guidelines refer only broadly to “vulnerable groups”. They do not acknowledge the distinct risks faced by Muslims, Dalits, Adivasis, and LGBTQIA+ persons, leaving critical gaps in targeted protections.
Amplification of Polarization
The report documents instances where generative AI tools have amplified communal polarization, citing the circulation of AI-generated videos and images targeting Muslim communities and opposition leaders. Notably, some of these materials were shared by state units of the Bharatiya Janata Party before being deleted following public criticism.
Concerns in Policing and Surveillance
Beyond online harms, the report raises alarm over the expanding deployment of AI in policing and surveillance. Predictive policing pilots in states such as Andhra Pradesh, Odisha, and Maharashtra, it warns, may replicate entrenched biases embedded in historical crime data, and proposals to deploy linguistic AI tools to identify alleged undocumented migrants could invite discriminatory profiling.
Facial Recognition Technology
The growing use of facial recognition technology in cities like Delhi, Hyderabad, Bengaluru, and Lucknow is flagged as troubling, especially given that India lacks a dedicated legal framework comparable to the European Union’s risk-based AI regulation, which restricts real-time biometric identification in public spaces.
Risks in Welfare Delivery
Additionally, the report highlights exclusion risks in welfare delivery. The mandatory use of facial recognition authentication in schemes such as the Integrated Child Development Services may disproportionately affect poor and marginalized beneficiaries, particularly when technical failures occur.
Conclusion
While acknowledging India’s ambition to build sovereign AI capacity, the authors conclude that without binding transparency obligations, independent oversight, and explicit minority protections, the current governance model may deepen existing inequalities rather than democratize technological progress.