AI Governance Framework Risks Perpetuating Inequality

Proposed AI Framework May Entrench Bias Against Minorities, Groups Caution

As the AI Impact Summit 2026 is set to commence in New Delhi, a new report by the Center for the Study of Organized Hate and the Internet Freedom Foundation has raised significant concerns regarding India’s proposed artificial intelligence governance framework. While the framework is projected to promote inclusivity and innovation, the report warns that it may disproportionately harm religious minorities, Dalit, and Adivasi communities, as well as sexual and gender minorities, in the absence of binding safeguards.

Concerns Over Structural Inequalities

Titled AI Governance at the Edge of Democratic Backsliding, the report argues that the Union government’s preference for voluntary compliance and self-regulation over enforceable statutory protections risks entrenching structural inequalities. Marginalized communities often lack the financial and institutional capacity to challenge opaque algorithmic systems in court, a gap the authors say will exacerbate existing disparities.

Inadequate Existing Statutes

The Ministry of Electronics and Information Technology’s AI Governance Guidelines, released in November 2025, reject what they describe as “compliance-heavy regulation.” The report contends, however, that reliance on existing statutes such as the Information Technology Act, 2000 and the Bharatiya Nyaya Sanhita, 2023 may prove inadequate in addressing emerging harms rooted in automated decision-making systems.

Vulnerable Groups and Targeted Protections

While the government’s summit vision of “Democratizing AI and Bridging the AI Divide” emphasizes inclusive technological growth, the report highlights that the guidelines refer only broadly to “vulnerable groups,” failing to specifically acknowledge the distinct risks faced by Muslims, Dalits, Adivasis, and LGBTQIA+ persons and thereby leaving critical gaps in targeted protections.

Amplification of Polarization

The report documents instances where generative AI tools have amplified communal polarization, citing the circulation of AI-generated videos and images targeting Muslim communities and opposition leaders. Notably, some of these materials were shared by state units of the Bharatiya Janata Party before being deleted following public criticism.

Concerns in Policing and Surveillance

Beyond online harms, the report raises alarms about the expanding deployment of AI in policing and surveillance. For instance, predictive policing pilots in states such as Andhra Pradesh, Odisha, and Maharashtra may replicate entrenched biases embedded in historical crime data. Proposals to deploy linguistic AI tools to identify alleged undocumented migrants could also invite discriminatory profiling.

Facial Recognition Technology

The growing use of facial recognition technology in cities like Delhi, Hyderabad, Bengaluru, and Lucknow is flagged as troubling, especially given that India lacks a dedicated legal framework comparable to the European Union’s risk-based AI regulation, which restricts real-time biometric identification in public spaces.

Risks in Welfare Delivery

Additionally, the report highlights exclusion risks in welfare delivery. The mandatory use of facial recognition authentication in schemes such as the Integrated Child Development Services may disproportionately affect poor and marginalized beneficiaries, particularly when technical failures occur.

Conclusion

While acknowledging India’s ambition to build sovereign AI capacity, the authors conclude that without binding transparency obligations, independent oversight, and explicit minority protections, the current governance model may deepen existing inequalities rather than democratize technological progress.
