New Utah AI Laws Change Disclosure Requirements and Identity Protections, Target Mental Health Chatbots
On May 7, 2025, five new artificial intelligence (AI) bills took effect in Utah, significantly altering the state's existing Artificial Intelligence Policy Act (AIPA). These changes aim to enhance consumer protections, particularly around the use of generative AI and mental health chatbots.
Key Takeaways
- Utah has introduced five new bills that further shape its existing AI policies and add new requirements.
- Disclosure requirements surrounding the use of AI technology have been narrowed, along with the definition of generative AI.
- Mental health chatbots are now subject to specific disclosure obligations and restrictions, with penalties for non-compliance.
- One bill expands previous laws regarding the abuse of personal identity to address threats from deepfakes and similar technologies.
Amendments to the Utah Artificial Intelligence Policy Act (SB 226 and SB 332)
The AIPA mandates that consumers be informed when they are interacting with generative AI rather than a human, specifically when a consumer asks or when the provider operates in a regulated occupation. The recent amendments in SB 226 and SB 332 introduce several significant changes:
- Clear and Unambiguous Requests: Consumers must now make clear and unambiguous requests for disclosure.
- High-Risk Interactions: Disclosure requirements for regulated occupations are limited to “high-risk AI interactions,” which include the collection of sensitive personal information, personalized recommendations, and providing advice in fields like finance, law, and mental health.
- Narrower Definition of Generative AI: The definition now solely encompasses AI designed to simulate human conversation.
- New Safe Harbor: Providers are not subject to enforcement actions if generative AI clearly discloses its non-human status throughout the interaction.
Furthermore, SB 332 extends the AIPA, which was initially set to sunset on May 7, 2025, through July 1, 2027.
Consumer Protections for Mental Health Chatbots (HB 452)
HB 452 introduces specific protections for users of mental health chatbots that engage in conversations resembling those a user would have with a licensed therapist. Key provisions include:
- Disclosure Obligations: Providers must clearly disclose that users are interacting with an AI chatbot at multiple points, including before access and after significant periods of inactivity.
- Advertising Restrictions: Chatbots must disclose any sponsorships, and user input cannot be used for targeted advertisements.
- Restrictions on Health Information Sales: Providers are prohibited from selling or sharing identifiable health information without user consent.
Moreover, mental health chatbot providers may qualify for an affirmative defense against liability if they comply with specific policy requirements.
Addressing the Abuse of Personal Identity (SB 271)
SB 271 expands Utah’s existing laws on the nonconsensual use of personal identity, broadening the scope to cover new risks posed by technologies like deepfakes. The amendments include:
- Expanded Scope: The law now covers fundraising, solicitations, and the commercial use of an individual’s identity.
- Prohibitions on Tool Providers: It restricts the sale and distribution of technologies intended for unauthorized identity creation.
- First Amendment Protections: The law includes exceptions to protect rights related to newsworthiness and artistic expression.
SB 271 maintains a private right of action against those who misuse personal identity, granting plaintiffs the ability to seek injunctive relief and damages.
Conclusion
Utah’s recent legislative changes mark a significant shift in how AI technologies are regulated, particularly in sensitive areas like mental health. These laws aim to balance AI innovation with necessary consumer protections, ensuring transparency and accountability in how these technologies are deployed.