Shapiro Wants Pennsylvania to Regulate AI Chatbots: How Would That Work?

Pennsylvania Governor Josh Shapiro is directing state agencies to develop stricter regulations for artificial intelligence (AI) chatbots, citing concerns about their potential to mislead and harm children. The initiative would put Pennsylvania alongside a growing list of states pursuing protective measures as use of AI chatbots such as ChatGPT, Meta AI, and Gemini grows among young people.

“This space is evolving rapidly,” Shapiro stated during his recent budget address. “We need to act quickly to protect our kids.” A survey by the nonprofit Common Sense Media found that roughly one in three U.S. teens use chatbots for social interaction, including conversation practice, emotional support, and even romantic relationships.

The Risks of AI Chatbots

Shapiro warned that without regulations, children could be vulnerable to emotional harm. He referenced recent settlements involving Google and Character.AI, which faced lawsuits claiming the Character.AI chatbot contributed to mental health crises, including suicides, among young users.

To address these concerns, the governor proposed measures such as:

  • Age verification for users
  • Requiring parental consent
  • A ban on chatbots generating sexually explicit or violent content involving children

Additionally, he supports directing users who mention self-harm or violence to appropriate authorities and reminding them that they are interacting with an AI, not a human.
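
To make that proposal concrete, here is a minimal sketch, in Python, of the kind of safeguard described above: flag high-risk language, surface crisis resources, and remind the user they are talking to an AI. The keyword list, messages, and function name are illustrative assumptions, not any vendor's actual implementation; production systems typically rely on trained classifiers rather than keyword matching.

    # Illustrative safeguard sketch: flag high-risk language, point the user to
    # crisis resources, and disclose that they are talking to an AI.
    # Keywords and messages are placeholders, not a real product's policy.
    CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "hurt someone"}
    CRISIS_NOTICE = (
        "If you are in crisis, call or text 988 (Suicide & Crisis Lifeline). "
        "Reminder: you are talking to an AI program, not a person."
    )

    def screen_message(user_message: str) -> str | None:
        """Return a crisis notice if the message contains high-risk language."""
        lowered = user_message.lower()
        if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
            return CRISIS_NOTICE
        return None  # no high-risk language detected

    print(screen_message("I've been thinking about self-harm lately"))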

Challenges to Implementation

However, questions arise regarding the feasibility of enforcing these requirements. Hoda Heidari, a professor of ethics and computational technologies at Carnegie Mellon University, noted that while the broader goals may be agreeable, the practical details of implementation are complex.

Despite growing interest in age verification as a protective measure, experts have raised concerns about its effectiveness. For example, many websites employ age gates requiring users to input their birthdate, but these can easily be bypassed. Heidari emphasized that ensuring chatbots do not produce harmful content is equally challenging, as AI systems can be prompted to generate inappropriate responses.
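
To see why experts view simple age gates as weak, consider a minimal sketch, in Python, of the self-reported birthdate check many sites use (the 13-year threshold is an illustrative assumption, not a figure from the proposals). The arithmetic is sound; the weakness is the input, since nothing stops a child from entering an earlier year.

    from datetime import date

    MINIMUM_AGE = 13  # illustrative threshold; actual requirements vary by proposal

    def passes_age_gate(claimed_birthdate: date, today: date | None = None) -> bool:
        """Naive age gate: trusts whatever birthdate the user enters."""
        today = today or date.today()
        age = today.year - claimed_birthdate.year - (
            (today.month, today.day) < (claimed_birthdate.month, claimed_birthdate.day)
        )
        return age >= MINIMUM_AGE

    # A child who enters a false year sails straight through the check.
    print(passes_age_gate(date(1990, 1, 1)))  # True, regardless of the user's real age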

Legislative Proposals

Shapiro has urged lawmakers to create legislation that protects children and vulnerable users from the risks associated with chatbot use. A bipartisan bill in the state Senate aims to establish “age-appropriate standards” and implement safeguards against content encouraging self-harm, suicide, or violence. This legislation would also direct users to crisis resources when high-risk language is detected.

A Patchwork of Regulations

The rapid, largely unregulated growth of AI and generative tools has been compared to the Wild West, and the future of federal regulation remains uncertain, particularly under the Trump administration, which has discouraged state-level regulation. While states such as California and New York have enacted legislation aimed at improving transparency and safety in AI, the resulting patchwork of rules complicates compliance for AI companies.

Heidari noted that without a unified regulatory framework, companies will likely adhere to the standards set by larger states, as it is impractical to customize chatbot platforms for different regulations across states.

Conclusion

As Pennsylvania embarks on this regulatory journey, the Shapiro administration’s proactive stance and collaborative approach with stakeholders and experts could pave the way for effective AI regulations. However, the success of these measures depends on carefully addressing the complexities involved in enforcing them.
