Huntsman Mental Health Institute’s New Framework for Ethical AI in Healthcare
Salt Lake City, UT — The Huntsman Mental Health Institute has unveiled a framework aimed at ensuring the ethical, transparent, and equitable deployment of artificial intelligence (AI) systems in healthcare. The framework, known as the Scalable Agile Framework for Execution in AI (SAFE AI), has been published in the Journal of Medical Internet Research (JMIR), a leading peer-reviewed academic journal for digital health research.
Overview of the SAFE AI Framework
Developed in collaboration with various healthcare AI partners, the SAFE AI framework provides practical guidance for small and medium-sized enterprises engaged in building medical AI technologies. One of its key features is the integration of ethical checkpoints into standard development workflows, enabling organizations to proactively identify and mitigate potential biases that could adversely impact patient care.
Dr. Warren Pettine, a researcher at the institute and senior author of the publication, emphasizes the significance of this framework: “AI is increasingly shaping how clinicians make decisions in mental health care, from crisis triage to treatment recommendations. With SAFE AI, we provide a roadmap that ensures these systems are not only effective but also fair, transparent, and continuously monitored. Every patient deserves equitable care—especially those in vulnerable mental health settings.”
Importance of Ethical AI in Mental Health
As AI tools become prevalent in psychiatric and behavioral health care, concerns regarding fairness and bias are rising. Without intentional oversight, AI systems can inadvertently reflect or amplify disparities present in training data, potentially compromising the quality of care for already underserved populations.
The SAFE AI framework addresses this critical challenge by establishing rigorous processes to ensure equity across patient groups. Key components include:
- Ongoing monitoring for “bias drift”
- Subgroup performance evaluations
- Clear communication strategies for conveying AI limitations to clinicians
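To make the first two components above concrete, here is a minimal sketch of subgroup performance evaluation with a bias-drift check. All names, data, and the drift tolerance are hypothetical illustrations, not the SAFE AI framework's actual implementation:

```python
# Hypothetical sketch: per-subgroup accuracy plus a "bias drift" flag.
# A subgroup is flagged when its current accuracy falls more than a
# chosen tolerance below its baseline accuracy.

def subgroup_accuracy(records):
    """Accuracy per subgroup from (subgroup, prediction, label) records."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

def flag_bias_drift(baseline, current, tolerance=0.05):
    """Subgroups whose accuracy dropped more than `tolerance` below
    baseline -- candidates for review before continued deployment."""
    return [g for g, acc in current.items()
            if g in baseline and baseline[g] - acc > tolerance]

# Hypothetical monitoring data: (subgroup, prediction, label)
baseline = subgroup_accuracy([
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1),
])
current = subgroup_accuracy([
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
])
print(flag_bias_drift(baseline, current))  # subgroup "B" dropped from 1.0 to 0.5
```

In practice such checks would run on real clinical validation data at each monitoring interval, with subgroup definitions and tolerances set by the deploying organization.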
“Responsible AI supports our mission to advance mental health knowledge, hope, and healing for all,” said Pettine. “This framework gives healthcare organizations the tools to ensure AI strengthens—not undermines—that mission.”
Collaborative Partnerships Driving Innovation
The SAFE AI framework was developed by a research team within the institute, supported in part by the Huntsman Mental Health Foundation, in collaboration with key partners, including:
- MTN (AI company)
- Data Science Alliance (San Diego nonprofit)
- Nemsee LLC
A Model for Translational Research
The SAFE AI project exemplifies the institute’s commitment to translational research, effectively bridging academic rigor with real-world healthcare innovation. This initiative aligns with the institute’s new Translational Research Building, currently under construction at the University of Utah, which aims to accelerate collaboration among researchers, clinicians, and industry partners.
“This is the kind of research that has immediate, meaningful impact,” stated Pettine. “We’re not just studying how AI is used in mental health; we’re helping define how it should be built.”
Conclusion: Advancing Responsible AI in Behavioral Health
The publication positions the Huntsman Mental Health Institute as a national leader in guiding the ethical development of AI systems for healthcare, particularly in behavioral health, where patient vulnerabilities and complex biases necessitate heightened oversight.
“When AI assists in mental health decisions, fairness and transparency are not optional,” remarked Pettine. “SAFE AI catches problems before they cause harm and keeps patient equity at the center.”
This study was partially supported by various organizations, including the National Institute on Aging and the US Army xTech AI Grand Challenge. The views and conclusions expressed in this report are those of the authors and do not necessarily represent the official policies of the National Institutes of Health, the US Army, or the US Government.