AI for Health Equity: Bridging the Gap in India’s Healthcare Innovation

Mumbai: The conversation on artificial intelligence in healthcare is often dominated by grand promises—faster diagnoses, scalable access, and precision medicine at population scale. However, the inaugural Winter Dialogue on RAISE (Responsible AI for Synergistic Excellence in Healthcare) at Ashoka University recently focused on more pressing questions: who does AI really work for, who does it leave out, and how do we govern what we do not yet fully understand.

Hosted by the Koita Centre for Digital Health at Ashoka University (KCDH-A) in partnership with NIMS Jaipur, with WHO SEARO as a technical host alongside ICMR-NIRDHS and the Gates Foundation, the two-day dialogue served as an official Pre-Summit Event for the AI Impact Summit 2026. It was the first in a series of four national RAISE dialogues scheduled across India, focused on the theme of Health AI: Policy and Governance.

The Gap Between Technical Capability and Institutional Readiness

A unifying thread across sessions was the gap between technical capability and institutional readiness. Dr. Karthik Adapa, Regional Adviser for Digital Health at WHO, highlighted the persistent problem of “pilotitis”—the tendency for digital health solutions to remain trapped in experimental pilots without scaling into public systems. He emphasized that frameworks like SALIENT are essential as they encourage practitioners to think beyond models and metrics towards integration, evaluation, and long-term use.

Optimisation vs. Equity

The tension between optimisation and equity surfaced repeatedly throughout discussions. Dr. Anurag Agrawal posed a critical question: “Would you choose a model with higher average accuracy but poor performance for women, or one with lower accuracy that shows equity in outcomes?” This sentiment was encapsulated in the phrase: “AI for Health, not Healthcare for AI.”
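To make that trade-off concrete, here is a minimal, hypothetical sketch in Python. The model names, subgroup labels, and accuracy figures are invented for illustration and were not presented at the dialogue; the point is only that ranking candidate models by average accuracy and ranking them by the gap between subgroups can pick different winners.

```python
# Hypothetical illustration of the trade-off posed in the session: compare two
# candidate models on average accuracy versus the gap between subgroups.
# All model names, group labels, and numbers below are invented for this sketch.

from dataclasses import dataclass


@dataclass
class ModelReport:
    name: str
    accuracy_by_group: dict  # e.g. {"women": 0.81, "men": 0.95}

    @property
    def average_accuracy(self) -> float:
        return sum(self.accuracy_by_group.values()) / len(self.accuracy_by_group)

    @property
    def equity_gap(self) -> float:
        # Difference between the best- and worst-served subgroup.
        return max(self.accuracy_by_group.values()) - min(self.accuracy_by_group.values())


model_a = ModelReport("higher average, uneven", {"women": 0.81, "men": 0.95})
model_b = ModelReport("lower average, equitable", {"women": 0.86, "men": 0.87})

for model in (model_a, model_b):
    print(f"{model.name}: average={model.average_accuracy:.2f}, gap={model.equity_gap:.2f}")

# Ranking by average accuracy favours model_a; ranking by the equity gap
# (or by worst-group accuracy) favours model_b, the choice the panel posed.
```

Making such a criterion explicit, for instance a minimum acceptable worst-group accuracy, is one way validation or procurement checklists could encode the equity-first framing rather than leaving it to averages.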

Panels reflected on the complexities of translating principles into practice. Case studies on tuberculosis screening, cancer detection, and maternal health monitoring across Indian states illustrated both promise and fragility: brittle data pipelines, uneven infrastructure, regulatory uncertainty, and deeply embedded social biases that algorithms can replicate.

Challenges in Mental Health

Discussions on mental health were particularly cautious. Dr. Prabha Chand noted that large language models are “optimised for engagement, not clinical outcomes.” Dr. Smruti Joshi reminded attendees that “mental health judgment cannot be fully automated.” The challenge lies not in whether AI has a role, but in how narrowly and carefully that role is defined, especially when working with vulnerable populations.

Validation and Accountability

Validation and accountability emerged as central themes. Dr. Mary-Anne Hartley stressed that imperfect data leads to imperfect models, especially in a diverse context like India. Continuous monitoring, bias mitigation, and human-in-the-loop systems must be standard, not optional.
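As one hypothetical sketch of what such routine monitoring with a human-in-the-loop escalation step might look like, the example below tracks recent outcomes per subgroup and flags any group whose rolling performance drops below an agreed floor. The threshold, the subgroup labels, and the outcome data are assumptions made up for illustration, not a system described at the dialogue.

```python
# Minimal, hypothetical monitoring sketch: flag subgroups whose recent
# performance falls below an agreed floor and route them to human review.
# The floor, group labels, and outcomes below are invented for illustration.

from statistics import mean

PERFORMANCE_FLOOR = 0.80  # assumed minimum acceptable rate of correct detections per subgroup


def groups_needing_review(window: dict) -> list:
    """Return subgroups whose recent detection rate is below the floor.

    `window` maps a subgroup label to its recent per-case outcomes,
    where 1 means a correct detection and 0 means a missed case.
    """
    return [group for group, outcomes in window.items()
            if outcomes and mean(outcomes) < PERFORMANCE_FLOOR]


recent_outcomes = {
    "urban screening sites": [1, 1, 1, 0, 1, 1, 1, 1],   # rate 0.88, above the floor
    "rural screening sites": [1, 0, 0, 1, 0, 1, 1, 0],   # rate 0.50, below the floor
}

flagged = groups_needing_review(recent_outcomes)
if flagged:
    # Human-in-the-loop step: escalate flagged groups to clinician review
    # instead of letting the model keep operating unexamined.
    print("Escalate to clinician review:", ", ".join(flagged))
```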

The Ethical Core

Dr. Anurag Agrawal returned to the ethical core of the conversation: “The real test of health AI is not peak accuracy in controlled settings, but equitable performance in the real world. If AI systems work well on average but fail women or marginalized populations, we have failed our purpose. We must design AI for health—not bend healthcare to fit AI.”

This sentiment was echoed by Vice-Chancellor Somak Raychaudhury, who noted, “Responsible AI in health cannot be built in silos… Universities play a crucial role—not only in advancing research but in creating the intellectual and institutional infrastructure needed to ensure that AI serves public good, equity, and trust at scale.”

A Platform for Sustained Dialogue

RAISE, as described by Aradhita Baral, is intended as “a platform for sustained dialogue rather than isolated conversations.” Its expansion to IIT Delhi, Bengaluru, and Hyderabad over the coming weeks indicates that India’s AI-in-health debate is shifting from hype to homework—from what is possible to what is responsible.
