Utah’s New AI Safety Regulations Target Child Protection

Utah Moves to Extend Child Safety Tech Rules to AI

As of January 27, 2026, Utah lawmakers are extending the state's tech-regulation push to a new frontier: artificial intelligence (AI) systems and public safety.

New Legislation: The Artificial Intelligence Transparency Act

Representative Doug Fiefia has introduced H.B. 286, the Artificial Intelligence Transparency Act, during the 2026 General Legislative Session. The bill mandates that the largest AI companies operating in Utah publicly disclose how they assess and mitigate serious risks associated with their systems, particularly those impacting children.

This proposal expands on Utah’s recent initiatives to regulate social media platforms, emphasizing the state’s commitment to shaping governance of powerful digital technologies, even as Congress struggles to establish national AI regulations.

Key Requirements of H.B. 286

Under H.B. 286, covered AI companies must:

  • Draft and publish public safety and child protection plans explaining how they evaluate and mitigate severe AI-related risks.
  • Adhere to these plans in practice rather than treating them as voluntary guidelines.
  • Report significant AI safety incidents.
  • Protect employees from retaliation for raising internal concerns or disclosing failures.

Supporters of the bill argue it closes a regulatory gap. Although many AI companies have voluntarily adopted internal safety frameworks, Utah currently lacks requirements for them to document or disclose these efforts.

Focus on Transparency

The bill aims to avoid creating a new regulatory agency or imposing strict technical standards. Instead, it emphasizes transparency — compelling companies to publicly explain their processes for handling safety risks as AI systems grow more advanced and widely used.

Utah’s Leadership in Child-Focused Tech Regulation

Utah has established itself as a national leader in child-focused tech regulation, particularly regarding social media. In March 2023, it became the first U.S. state to implement laws restricting children’s social media use, including parental consent requirements and age verification mandates. These measures have been framed as a model for other states to emulate.

H.B. 286 extends this philosophy to AI, a sector characterized by rapid advancement and limited existing guardrails.

Support from Advocacy Groups

Advocacy organizations supporting the bill believe transparency alone can prompt changes in corporate behavior. Andrew Doris, a senior policy analyst at the Secure AI Project, emphasized the importance of acting now to mitigate major risks associated with AI.

Adam Billen, vice president of public policy at Encode AI, reflected on lessons learned from social media, stating that families cannot rely on tech companies to voluntarily protect them from AI-driven tragedies.

Public Support for AI Oversight

The announcement of H.B. 286 coincided with a statewide survey revealing significant voter concern regarding AI oversight. The survey indicated that 90% of Utah voters support requiring AI developers to implement safety protocols to protect children, while 71% expressed worry that the state may not regulate AI adequately.

Scope and Enforcement of H.B. 286

H.B. 286 clearly defines its scope and enforcement mechanisms. It applies solely to large frontier developers, defined as companies that have trained advanced AI models using at least 10²⁶ computational operations and that report annual revenues exceeding $500 million. These criteria limit coverage to a select group of leading AI developers.
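As a rough illustration (not legal text), the two-part threshold described above can be sketched as a simple check; the function name and inputs here are hypothetical, chosen only to make the bill's compound criterion concrete:

```python
# Illustrative sketch of H.B. 286's "large frontier developer" definition
# as summarized above. Both conditions must hold for a company to be covered.

TRAINING_OPS_THRESHOLD = 10**26      # at least 10^26 computational operations
REVENUE_THRESHOLD = 500_000_000      # annual revenue exceeding $500 million

def is_large_frontier_developer(training_ops: float, annual_revenue: float) -> bool:
    """Return True only if both statutory thresholds are met."""
    return training_ops >= TRAINING_OPS_THRESHOLD and annual_revenue > REVENUE_THRESHOLD

# A lab with 3e26 training operations and $750M revenue would be covered;
# the same lab with only $100M in revenue would not.
print(is_large_frontier_developer(3e26, 750_000_000))  # True
print(is_large_frontier_developer(3e26, 100_000_000))  # False
```

Because both conditions must be satisfied, a well-funded company training smaller models, or a research lab with large compute but modest revenue, would fall outside the bill's scope.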

This approach gives companies flexibility to adapt as the technology changes rapidly, but it also raises questions about whether public disclosure alone can keep pace with advanced AI systems, especially when failures may become visible only after harm has occurred.

Enforcement will be managed by the Utah attorney general, who can take civil action against companies violating the law, with penalties up to $1 million for first violations and $3 million for subsequent violations. Reported AI safety incidents will be evaluated by the state’s Office of AI Policy, rather than through a new regulatory agency.

Next Steps for H.B. 286

H.B. 286 is scheduled for initial review on January 27, when it will be considered by the House Economic Development and Workforce Services Standing Committee. This bill is part of several measures on the committee’s agenda, and the meeting will be livestreamed for public viewing.

If the bill advances out of committee, it will move toward a full House vote, initiating a wider discussion on how states should hold AI developers accountable for safety risks.
