Towards Responsible AI for Mental Health and Well-Being
On January 29, 2026, over 30 international experts in artificial intelligence, mental health, ethics, and public policy gathered for an online workshop organized by the Delft Digital Ethics Centre (DDEC) at Delft University of Technology (TU Delft). The event highlighted DDEC's role as the first WHO Collaborating Centre on AI for health governance and underscored the importance of ethics in this rapidly evolving field.
Held as an official pre-summit event of the India AI Impact Summit 2026, with support from the World Health Organization, the workshop convened researchers, policymakers, clinicians, and advocates. Dr. Alain Labrique, Director of WHO’s Department of Data, Digital Health, Analytics, and AI, noted: “As AI increasingly interacts with people in moments of emotional vulnerability, we as WHO and its stakeholders must ensure these systems are designed and governed with safety, accountability, and human well-being at their core.”
Challenges of Generative AI in Mental Health
Central among the challenges discussed was the growing use of generative AI tools—which were neither designed nor tested for mental health purposes—for emotional support, particularly among young people. The potential risks are significant. Sameer Pujari, WHO’s AI Lead, remarked, “We are at a critical juncture. The pace of AI adoption in people’s daily lives has far outstripped investment in understanding its impact on mental health. Closing that gap requires coordinated action and dedicated resources from both the public and private sectors.”
Dr. Kenneth Carswell of WHO’s Department of Noncommunicable Diseases and Mental Health emphasized the need for cross-disciplinary collaboration: “Minimizing risks from generative AI for mental health while maximizing benefits requires bringing together the voices of those most affected, clinical and research expertise, governance and regulatory frameworks, and data to inform understanding. WHO is committed to ensuring that users’ well-being stays at the center as these tools evolve.”
Key Recommendations from the Workshop
The workshop distilled these discussions into three principal recommendations:
- Generative AI use should be recognized as a public mental health concern, with responses across government, health systems, and industry that address all generative AI solutions, not only those intended for mental health.
- Mental health should be integrated into impact assessments and monitoring of AI solutions to better understand their effects on determinants of health, short-term clinical measures, and long-term outcomes, such as emotional dependence. One workshop participant stressed, “We need independent investments to test these effects.”
- AI tools used for mental health support should be co-designed with mental health experts and individuals with lived experience, including youth. Tools must be grounded in the best available evidence and tailored to cultural, linguistic, and contextual factors. Participants emphasized the importance of consumer empowerment, while TU Delft’s Dr. Caroline Figueroa highlighted the urgent need for consensus on crisis referral frameworks and accountability systems.
The Role of WHO Collaborating Centres
More broadly, the workshop illustrated how the WHO Collaborating Centre mechanism has become a critical pillar in implementing the WHO’s vision for responsible AI in health. Through this mechanism, WHO mobilizes world-class academic expertise and convenes diverse international stakeholders to generate evidence-based recommendations in support of its standard-setting role. As Dr. Stefan Buijsman, managing director of the DDEC, noted: “As a WHO Collaborating Centre, we can increase impact by collaborating with experts around the world, domain experts, and governments.”
Looking Ahead: Building a Global Consortium
WHO is establishing a Consortium of Collaborating Centres on AI for Health, a network of leading institutions across all six WHO regions, to support Member States in the responsible adoption of AI. A pre-convening of candidate consortium members took place on March 17–19, 2026, at TU Delft, where institutions aligned on shared priorities and agreed on initial collaboration mechanisms. These steps lay the groundwork for the collaborative infrastructure needed to ensure that AI governance in health is grounded in evidence, ethics, and the needs of diverse populations worldwide.