Lawmakers Intensify Efforts to Regulate AI Chatbots Amid Safety Concerns

State lawmakers have mounted a significant legislative response to rising concerns about artificial intelligence chatbots. More than a dozen bills have been filed or pre-filed to regulate the technology amid fears that it could deceive consumers, harm children, or impersonate mental health professionals.

Context and Rising Concerns

The spike in legislative activity reflects a marked increase in public and governmental concern compared with a year ago, when chatbot legislation was just beginning to emerge. Recent lawsuits alleging that AI companions have contributed to cases of teen suicide and self-harm have further fueled these concerns.

Companion chatbots are designed to simulate human-like conversations and may engage users on an intimate level. However, their potential to manipulate users, especially vulnerable children, has drawn sharp criticism.

Key Legislative Developments

New York Governor Kathy Hochul has been at the forefront of these efforts, having previously approved the nation’s first regulations for AI companions. She has proposed new legislation to disable AI chatbot features on social media for minors. Additionally, Assemblymember Linda Rosenthal has introduced a bill requiring warnings that interactions with such chatbots “can foster dependency and carry a psychological risk.”

Similar bills have surfaced in California, Florida, Michigan, Ohio, and other states, signaling a nationwide trend toward stricter regulation.

The Parents & Kids Safe AI Act

OpenAI and Common Sense Media have announced that they are consolidating competing chatbot ballot measures in California under the Parents & Kids Safe AI Act. The measure would prohibit minors from engaging with chatbots that could foster emotional dependence or create the illusion of conversing with a human.

Impact and Future Directions

Despite federal efforts to limit state regulation of AI, state lawmakers are pushing ahead with their initiatives. Notably, Michigan state Sen. Dayna Polehanki is sponsoring a bill to bar minors from accessing chatbots that could encourage self-harm or engage in inappropriate conversations.

The increase in legislative measures is not without its critics. Adam Thierer from the R Street Institute warns that this could lead to a “crazy-quilt of conflicting and confusing chatbot policies” across the nation, potentially stifling competition and innovation.

Industry Response

Facing a growing regulatory landscape, companies such as OpenAI and Character.AI have begun implementing measures to make their chatbots safer for children. OpenAI has introduced parental controls for its chatbot and associated applications, while Character.AI has restricted minors from engaging in open-ended chats.

The ongoing legal battles and regulatory developments highlight the complex intersection of technology, consumer protection, and mental health, indicating that the conversation around AI chatbots is far from over.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...