Utah Moves to Extend Child Safety Tech Rules to AI
On January 27, 2026, Utah lawmakers advanced their tech regulation agenda, turning their focus to artificial intelligence (AI) systems and public safety.
New Legislation: The Artificial Intelligence Transparency Act
Representative Doug Fiefia has introduced H.B. 286, the Artificial Intelligence Transparency Act, during the 2026 General Legislative Session. The bill mandates that the largest AI companies operating in Utah publicly disclose how they assess and mitigate serious risks associated with their systems, particularly those impacting children.
This proposal expands on Utah’s recent initiatives to regulate social media platforms, emphasizing the state’s commitment to shaping governance of powerful digital technologies, even as Congress struggles to establish national AI regulations.
Key Requirements of H.B. 286
Under H.B. 286, covered AI companies must:
- Draft and publish public safety and child protection plans explaining how they evaluate and mitigate severe AI-related risks.
- Adhere to these plans in practice rather than treating them as voluntary guidelines.
- Report significant AI safety incidents.
- Protect employees from retaliation for raising internal concerns or disclosing failures.
Supporters of the bill argue it closes a regulatory gap. Although many AI companies have voluntarily adopted internal safety frameworks, Utah currently lacks requirements for them to document or disclose these efforts.
Focus on Transparency
The bill aims to avoid creating a new regulatory agency or imposing strict technical standards. Instead, it emphasizes transparency — compelling companies to publicly explain their processes for handling safety risks as AI systems grow more advanced and widely used.
Utah’s Leadership in Child-Focused Tech Regulation
Utah has established itself as a national leader in child-focused tech regulation, particularly regarding social media. In March 2023, it became the first U.S. state to implement laws restricting children’s social media use, including parental consent requirements and age verification mandates. These measures have been framed as a model for other states to emulate.
H.B. 286 extends this philosophy to AI, a sector characterized by rapid advancement and limited existing guardrails.
Support from Advocacy Groups
Advocacy organizations supporting the bill believe transparency alone can prompt changes in corporate behavior. Andrew Doris, a senior policy analyst at the Secure AI Project, emphasized the importance of acting now to mitigate major risks associated with AI.
Adam Billen, vice president of public policy at Encode AI, reflected on lessons learned from social media, stating that families cannot rely on tech companies to voluntarily protect them from AI-driven tragedies.
Public Support for AI Oversight
The announcement of H.B. 286 coincided with a statewide survey revealing significant voter concern regarding AI oversight. The survey indicated that 90% of Utah voters support requiring AI developers to implement safety protocols to protect children, while 71% expressed worry that the state may not regulate AI adequately.
Scope and Enforcement of H.B. 286
H.B. 286 clearly defines its scope and enforcement mechanisms. It applies solely to large frontier developers, defined as companies that have trained advanced AI models using at least 10²⁶ computational operations and reported annual revenues exceeding $500 million. This criterion limits coverage to a select group of leading AI developers.
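The two-part coverage test described above can be sketched as a simple check. This is a hypothetical illustration of the thresholds as reported, not statutory language; the function and variable names are invented for clarity.

```python
# Illustrative sketch of H.B. 286's "large frontier developer" test,
# combining a training-compute threshold and an annual-revenue threshold.
# Names and structure are hypothetical, not drawn from the bill text.

COMPUTE_THRESHOLD_OPS = 10**26        # "at least 10^26 computational operations"
REVENUE_THRESHOLD_USD = 500_000_000   # "annual revenues exceeding $500 million"

def is_large_frontier_developer(training_ops: float, annual_revenue_usd: float) -> bool:
    """Return True only if a company meets both coverage criteria."""
    return (training_ops >= COMPUTE_THRESHOLD_OPS
            and annual_revenue_usd > REVENUE_THRESHOLD_USD)
```

Because both conditions must hold, a well-funded company training smaller models, or a frontier lab with modest revenue, would fall outside the bill's scope.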
The bill grants companies flexibility to adjust to rapidly changing technology while raising questions about whether public disclosure alone can keep pace with advanced AI systems, especially when failures may only be visible after harm has occurred.
Enforcement will be managed by the Utah attorney general, who can take civil action against companies violating the law, with penalties up to $1 million for first violations and $3 million for subsequent violations. Reported AI safety incidents will be evaluated by the state’s Office of AI Policy, rather than through a new regulatory agency.
Next Steps for H.B. 286
H.B. 286 is scheduled for initial review on January 27, when it will be considered by the House Economic Development and Workforce Services Standing Committee. The bill is one of several measures on the committee's agenda, and the meeting will be livestreamed for public viewing.
If the bill advances out of committee, it will move toward a full House vote, initiating a wider discussion on how states should hold AI developers accountable for safety risks.