AI in Healthcare: Governance, Equity, and Responsible Innovation in India
Mumbai: The conversation on artificial intelligence in healthcare is often dominated by grand promises: faster diagnoses, scalable access, and precision medicine at population scale. However, the inaugural Winter Dialogue on RAISE (Responsible AI for Synergistic Excellence in Healthcare), held recently at Ashoka University, focused on more pressing questions: who does AI really work for, who does it leave out, and how do we govern what we do not yet fully understand?
Hosted by the Koita Centre for Digital Health at Ashoka University (KCDH-A) in partnership with NIMS Jaipur, with WHO SEARO as a technical host alongside ICMR-NIRDHS and the Gates Foundation, the two-day dialogue served as an official Pre-Summit Event for the AI Impact Summit 2026. It was the first in a series of four national RAISE dialogues scheduled across India, focusing on the theme of ‘Health AI: Policy and Governance’.
The Gap Between Technical Capability and Institutional Readiness
A unifying thread across sessions was the gap between technical capability and institutional readiness. Dr. Karthik Adapa, Regional Adviser for Digital Health at WHO, highlighted the persistent problem of ‘pilotitis’: the tendency for digital health solutions to remain trapped in experimental pilots without ever scaling into public systems. He emphasized that frameworks such as SALIENT are essential because they push practitioners to think beyond models and metrics towards integration, evaluation, and long-term use.
Optimisation vs. Equity
The tension between optimisation and equity surfaced repeatedly throughout the discussions. Dr. Anurag Agrawal posed a critical question: ‘Would you choose a model with higher average accuracy but poor performance for women, or one with lower accuracy that shows equity in outcomes?’ The underlying principle was captured in the phrase ‘AI for Health, not Healthcare for AI’.
Panels reflected on the complexities of translating principles into practice. Case studies on tuberculosis screening, cancer detection, and maternal health monitoring across Indian states illustrated both promise and fragility: brittle data pipelines, uneven infrastructure, regulatory uncertainty, and deeply embedded social biases that algorithms can replicate.
Challenges in Mental Health
Discussions on mental health were particularly cautious. Dr. Prabha Chand noted that large language models are ‘optimised for engagement, not clinical outcomes.’ Dr. Smruti Joshi reminded attendees that ‘mental health judgment cannot be fully automated.’ The challenge lies not in whether AI has a role, but in how narrowly and carefully that role is defined, especially when working with vulnerable populations.
Validation and Accountability
Validation and accountability emerged as central themes. Dr. Mary-Anne Hartley stressed that imperfect data leads to imperfect models, especially in a context as diverse as India’s. Continuous monitoring, bias mitigation, and human-in-the-loop systems must be standard, not optional.
The Ethical Core
Dr. Anurag Agrawal returned to the ethical core of the conversation: ‘The real test of health AI is not peak accuracy in controlled settings, but equitable performance in the real world. If AI systems work well on average but fail women or marginalized populations, we have failed our purpose. We must design AI for health—not bend healthcare to fit AI.’
This sentiment was echoed by Vice-Chancellor Somak Raychaudhury, who noted, ‘Responsible AI in health cannot be built in silos… Universities play a crucial role—not only in advancing research but in creating the intellectual and institutional infrastructure needed to ensure that AI serves public good, equity, and trust at scale.’
A Platform for Sustained Dialogue
RAISE, as described by Aradhita Baral, is intended as ‘a platform for sustained dialogue rather than isolated conversations.’ Its expansion to IIT Delhi, Bengaluru, and Hyderabad over the coming weeks indicates that India’s AI-in-health debate is shifting from hype to homework, from what is possible to what is responsible.