AI Regulation Bill Fails Over Legal Concerns

Committee Rejects Bill Regulating AI Chat Bots Over Legal Concerns

A recent legislative attempt to regulate conversational artificial intelligence services, such as ChatGPT, has been halted in committee over legal concerns. The proposed bill, known as Senate Bill 168, primarily aimed to protect minors from potential harm associated with AI interactions.

Definition and Scope of the Bill

Senate Bill 168 sought to impose regulations on what it defined as “conversational AI services”. This designation covers AI systems that are publicly accessible and primarily designed to simulate human conversation through text, visual, and audio media.

Key Provisions of the Bill

Among the notable requirements of SB 168 was a mandate for AI chat bots to disclose their non-human status to all users, with particular emphasis on minors. The bill also included safeguards specifically for young users:

  • Prohibition of Explicit Content: AI systems would be barred from producing any visual or audio content that is sexually explicit.
  • Response Protocols for Crisis Situations: Chat bots would be required to refer users exhibiting suicidal ideation or self-harm to appropriate resources.

Concerns Raised by Lawmakers

Senator Liz Larson, the sponsor of SB 168, highlighted extreme cases where minors have tragically taken their own lives after interacting with AI systems. She articulated the risks posed by conversational AI, which is often optimized for user engagement, stating:

“These systems tend to agree with users, mirror emotions, and avoid disagreement. For minors, whose judgment, impulse control, and emotional regulation are still developing, this raises serious concerns. It could be described as accidentally predatory.”

Larson argued for state intervention, citing the federal government’s slow action in establishing necessary safety measures around artificial intelligence.

Opposition and Legal Risks

The bill’s sole opponent, lobbyist TJ Nelson from Sanford Health, contended that such legislation should be handled at the federal level. He pointed to existing federal oversight, including the Federal Trade Commission and the Children’s Online Privacy Protection Act, as better suited to address these issues. Nelson stated:

“This is a risk—a lawsuit risk. This committee has previously rejected bills to avoid potential legal battles that we are likely to lose.”

Nelson further expressed concerns that the language within SB 168 could conflict with federal regulations, particularly regarding age references, behavioral profiling, and data collection.

Outcome of the Committee Meeting

The committee ultimately voted six to zero to reject the bill, signaling that the measure would need further refinement before it could win support. The outcome reflects the legal complexities surrounding the regulation of artificial intelligence technologies and their implications for user safety.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...