Utah’s New AI Laws: Enhancing Privacy and Mental Health Protections

New Utah AI Laws Change Disclosure Requirements and Identity Protections, Target Mental Health Chatbots

On May 7, 2025, Utah enacted five new artificial intelligence (AI) bills that significantly alter its existing Artificial Intelligence Policy Act (AIPA). These changes aim to enhance consumer protections, particularly around the use of generative AI and mental health chatbots.

Key Takeaways

  • Utah has introduced five new bills that further shape its existing AI policies and add new requirements.
  • Disclosure requirements surrounding the use of AI technology have been narrowed, along with the definition of generative AI.
  • Mental health chatbots are now subject to specific disclosure obligations and restrictions, with penalties for non-compliance.
  • One bill expands previous laws regarding the abuse of personal identity to address threats from deepfakes and similar technologies.

Amendments to the Utah Artificial Intelligence Policy Act (SB 226 and SB 332)

The AIPA mandates that consumers be informed when they are interacting with generative AI rather than a human, either when the consumer asks or proactively when the provider works in a regulated occupation. The recent amendments in SB 226 and SB 332 introduce several significant changes:

  • Clear and Unambiguous Requests: A consumer’s request for disclosure must now be clear and unambiguous to trigger the disclosure obligation.
  • High-Risk Interactions: Disclosure requirements for regulated occupations are limited to “high-risk AI interactions,” which include the collection of sensitive personal information, personalized recommendations, and providing advice in fields like finance, law, and mental health.
  • Narrower Definition of Generative AI: The definition now solely encompasses AI designed to simulate human conversation.
  • New Safe Harbor: Providers are not subject to enforcement actions if generative AI clearly discloses its non-human status throughout the interaction.

Furthermore, SB 332 extends the AIPA’s duration, which was initially set to sunset on May 7, 2025, until July 1, 2027.

Consumer Protections for Mental Health Chatbots (HB 452)

HB 452 introduces specific protections for users of mental health chatbots that engage in conversations resembling those with a licensed therapist. Key provisions include:

  • Disclosure Obligations: Providers must clearly disclose that users are interacting with an AI chatbot at multiple points, including before access and after significant periods of inactivity.
  • Advertising Restrictions: Chatbots must disclose any sponsorships, and user input cannot be used for targeted advertisements.
  • Restrictions on Health Information Sales: Providers are prohibited from selling or sharing identifiable health information without user consent.

Moreover, mental health chatbot providers may qualify for an affirmative defense against liability if they comply with specific policy requirements.

Addressing the Abuse of Personal Identity (SB 271)

SB 271 expands Utah’s existing laws on the nonconsensual use of personal identity, broadening the scope to cover new risks posed by technologies like deepfakes. The amendments include:

  • Expanded Scope: The law now covers fundraising, solicitations, and the commercial use of an individual’s identity.
  • Prohibitions on Tool Providers: It restricts the sale and distribution of technologies intended for unauthorized identity creation.
  • First Amendment Protections: The law includes exceptions to protect rights related to newsworthiness and artistic expression.

SB 271 maintains a private right of action against those who misuse personal identity, granting plaintiffs the ability to seek injunctive relief and damages.

Conclusion

Utah’s recent legislative changes mark a significant shift in how AI technologies, particularly in sensitive areas like mental health, are regulated. These laws aim to balance innovation in AI with the necessary protections for consumers, ensuring transparency and accountability in the deployment of these technologies.
