Ethical AI Framework for Mental Health Care

Huntsman Mental Health Institute’s New Framework for Ethical AI in Healthcare

Salt Lake City, UT — The Huntsman Mental Health Institute has unveiled a framework aimed at ensuring the ethical, transparent, and equitable deployment of artificial intelligence (AI) systems in healthcare. The framework, known as the Scalable Agile Framework for Execution in AI (SAFE AI), has been published in the Journal of Medical Internet Research (JMIR), a leading peer-reviewed academic journal for digital health research.

Overview of the SAFE AI Framework

Developed in collaboration with various healthcare AI partners, the SAFE AI framework provides practical guidance for small and medium-sized enterprises engaged in building medical AI technologies. One of its key features is the integration of ethical checkpoints into standard development workflows, enabling organizations to proactively identify and mitigate potential biases that could adversely impact patient care.

Dr. Warren Pettine, a researcher at the institute and senior author of the publication, emphasizes the significance of this framework: “AI is increasingly shaping how clinicians make decisions in mental health care, from crisis triage to treatment recommendations. With SAFE AI, we provide a roadmap that ensures these systems are not only effective but also fair, transparent, and continuously monitored. Every patient deserves equitable care—especially those in vulnerable mental health settings.”

Importance of Ethical AI in Mental Health

As AI tools become prevalent in psychiatric and behavioral health care, concerns regarding fairness and bias are rising. Without intentional oversight, AI systems can inadvertently reflect or amplify disparities present in training data, potentially compromising the quality of care for already underserved populations.

The SAFE AI framework addresses this critical challenge by establishing rigorous processes to ensure equity across patient groups. Key components include:

  • Ongoing monitoring for “bias drift”
  • Subgroup performance evaluations
  • Clear communication strategies for conveying AI limitations to clinicians
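To make the first two checkpoints concrete, the sketch below shows one way subgroup performance evaluation and bias-drift monitoring could be implemented. This is a hypothetical illustration, not code from the SAFE AI publication; the function names, the accuracy metric, and the drift tolerance are all assumptions for demonstration purposes.

```python
# Illustrative sketch of two SAFE AI-style checks: per-subgroup performance
# evaluation and "bias drift" monitoring. All names and thresholds here are
# hypothetical, not taken from the published framework.

def subgroup_accuracy(records):
    """Compute accuracy separately for each patient subgroup.

    records: list of (subgroup, predicted_label, true_label) tuples.
    Returns a dict mapping subgroup -> accuracy.
    """
    totals, correct = {}, {}
    for group, pred, truth in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

def detect_bias_drift(baseline, current, tolerance=0.05):
    """Flag subgroups whose accuracy has fallen more than `tolerance`
    below the baseline recorded at deployment time."""
    return [g for g, acc in current.items()
            if g in baseline and baseline[g] - acc > tolerance]

# Example: group "B" has drifted well below its deployment baseline.
current = subgroup_accuracy([("A", 1, 1), ("A", 0, 0),   # group A: 1.0
                             ("B", 1, 0), ("B", 0, 0)])  # group B: 0.5
drifted = detect_bias_drift({"A": 0.95, "B": 0.90}, current)
print(drifted)  # ["B"] — group B is flagged for review
```

In a production setting, a check like this would run on fresh labeled data at a regular cadence, with flagged subgroups triggering human review before the model continues to inform care decisions.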

“Responsible AI supports our mission to advance mental health knowledge, hope, and healing for all,” said Pettine. “This framework gives healthcare organizations the tools to ensure AI strengthens—not undermines—that mission.”

Collaborative Partnerships Driving Innovation

The SAFE AI framework was developed by a research team within the institute, supported in part by the Huntsman Mental Health Foundation and created in collaboration with key partners, including:

  • MTN (AI company)
  • Data Science Alliance (San Diego nonprofit)
  • Nemsee LLC

A Model for Translational Research

The SAFE AI project exemplifies the institute’s commitment to translational research, effectively bridging academic rigor with real-world healthcare innovation. This initiative aligns with the institute’s new Translational Research Building, currently under construction at the University of Utah, which aims to accelerate collaboration among researchers, clinicians, and industry partners.

“This is the kind of research that has immediate, meaningful impact,” stated Pettine. “We’re not just studying how AI is used in mental health; we’re helping define how it should be built.”

Conclusion: Advancing Responsible AI in Behavioral Health

The publication positions the Huntsman Mental Health Institute as a national leader in guiding the ethical development of AI systems for healthcare, particularly in behavioral health, where patient vulnerabilities and complex biases necessitate heightened oversight.

“When AI assists in mental health decisions, fairness and transparency are not optional,” remarked Pettine. “SAFE AI catches problems before they cause harm and keeps patient equity at the center.”

This study was partially supported by various organizations, including the National Institute on Aging and the US Army xTech AI Grand Challenge. The views and conclusions expressed in this report are those of the authors and do not necessarily represent the official policies of the National Institutes of Health, the US Army, or the US Government.
