Responsible AI in Mental Health: Key Insights from Global Experts

Towards Responsible AI for Mental Health and Well-Being

On January 29, 2026, over 30 international experts in artificial intelligence, mental health, ethics, and public policy gathered for an online workshop organized by the Delft Digital Ethics Centre (DDEC) at Delft University of Technology (TU Delft). The workshop was convened by DDEC in its role as the first WHO Collaborating Centre on AI for health governance, underscoring the importance of ethics in this rapidly evolving field.

Held as an official pre-summit event of the India AI Impact Summit 2026, with support from the World Health Organization, the workshop convened researchers, policymakers, clinicians, and advocates. Dr. Alain Labrique, Director of WHO’s Department of Data, Digital Health, Analytics, and AI, noted: “As AI increasingly interacts with people in moments of emotional vulnerability, we as WHO and its stakeholders must ensure these systems are designed and governed with safety, accountability, and human well-being at their core.”

Challenges of Generative AI in Mental Health

A central challenge discussed was the growing use of generative AI tools for emotional support, particularly among young people, even though these tools were neither designed nor tested for mental health purposes. The risks this poses are significant. Sameer Pujari, WHO’s AI Lead, remarked, “We are at a critical juncture. The pace of AI adoption in people’s daily lives has far outstripped investment in understanding its impact on mental health. Closing that gap requires coordinated action and dedicated resources from both the public and private sectors.”

Dr. Kenneth Carswell of WHO’s Department of Noncommunicable Diseases and Mental Health emphasized the need for cross-disciplinary collaboration: “Minimizing risks from generative AI for mental health while maximizing benefits requires bringing together the voices of those most affected, clinical and research expertise, governance and regulatory frameworks, and data to inform understanding. WHO is committed to ensuring that users’ well-being stays at the center as these tools evolve.”

Key Recommendations from the Workshop

The workshop distilled these discussions into three principal recommendations:

  1. Generative AI use should be recognized as a public mental health concern, with responses across government, health systems, and industry that address all generative AI solutions, not only those intended for mental health.
  2. Mental health should be integrated into impact assessments and monitoring of AI solutions to better understand their effects on determinants of health, short-term clinical measures, and long-term outcomes, such as emotional dependence. One workshop participant stressed, “We need independent investments to test these effects.”
  3. AI tools used for mental health support should be co-designed with mental health experts and individuals with lived experience, including youth. Tools must be grounded in the best available evidence and tailored to cultural, linguistic, and contextual factors. Participants emphasized the importance of consumer empowerment, while TU Delft’s Dr. Caroline Figueroa highlighted the urgent need for consensus on crisis referral frameworks and accountability systems.

The Role of WHO Collaborating Centres

More broadly, the workshop illustrated how the WHO Collaborating Centre mechanism has become a critical pillar in implementing the WHO’s vision for responsible AI in health. Through this mechanism, WHO mobilizes world-class academic expertise and convenes diverse international stakeholders to generate evidence-based recommendations in support of its standard-setting role. As Dr. Stefan Buijsman, managing director of the DDEC, noted: “As a WHO Collaborating Centre, we can increase impact by collaborating with experts around the world, domain experts, and governments.”

Looking Ahead: Building a Global Consortium

WHO is establishing a Consortium of Collaborating Centres on AI for Health, a network of leading institutions across all six WHO regions, to support Member States in the responsible adoption of AI. A pre-convening of candidate consortium members took place on March 17–19, 2026, at TU Delft, where institutions aligned on shared priorities and agreed on initial collaboration mechanisms. Together, these steps lay the groundwork for ensuring that AI governance in health is grounded in evidence, ethics, and the needs of diverse populations worldwide.
