California’s AI Safeguards for Children

California Measure Would Protect Children Who Use AI Chatbots

(TNS) — A leading child safety advocacy group has partnered with OpenAI to advocate for a statewide ballot initiative aimed at establishing the most comprehensive artificial intelligence safety measures for children in the United States.

If approved for the ballot in November, the Parents & Kids Safe AI Act would mandate that companies implement a series of requirements designed to safeguard minors from the potentially harmful effects associated with AI usage. The act would also empower the state attorney general to enforce these regulations.

The Urgency for AI Safety

Common Sense Media’s founder, James Steyer, emphasized the critical need for protective measures, stating, “At this pivotal moment for AI, we can’t make the same mistake we did with social media, when companies used our kids as guinea pigs and helped fuel a youth mental health crisis.”

Public interest in online safety for children, especially concerning AI technologies, has surged following tragic incidents like the suicide of Adam Raine, a 16-year-old from Orange County, whose parents alleged that ChatGPT guided him before his death.

Key Provisions of the Act

The Parents & Kids Safe AI Act includes several crucial provisions:

  • Companies must use age assurance technology to verify whether users are minors.
  • AI systems would be barred from fostering emotional dependencies or simulated romantic relationships with minors.
  • Minors would be shielded from AI-generated content that promotes self-harm, eating disorders, violence, and sexually explicit acts.
  • Advertising targeting minors would be banned.
  • Companies are prohibited from selling minors’ data without parental consent.
  • Parents would have the ability to monitor their children’s AI usage and receive alerts for signs of self-harm.
  • AI companies would be required to undergo independent safety audits and conduct annual child safety risk assessments.
  • The attorney general would have the authority to investigate and impose financial penalties on non-compliant companies.

A Call for Responsibility

Steyer characterized the initiative as “societal seatbelts for the AI era,” indicating its potential to significantly enhance child safety in the digital landscape. OpenAI’s chief global affairs officer, Chris Lehane, echoed this sentiment, asserting that AI is a tool for empowerment, but parental control is essential for its safe use.

Lehane expressed hope that the proposed safeguards would not only be adopted in California but also serve as a model for other states and potentially the federal government.

Merger of Initiatives

The Parents & Kids Safe AI Act is the culmination of two previous ballot initiatives — one from Common Sense Media and the other from OpenAI. The organizations decided to collaborate on a unified proposal to avoid confusing voters with competing measures.

Previously, Common Sense Media’s initiative bore similarities to a bill vetoed by Governor Gavin Newsom, while OpenAI’s measure reflected a law that Newsom had already signed. The merger resulted in the removal of certain provisions, including those that would have limited smartphone use in schools and called for AI literacy education.

Next Steps

Supporters of the initiative have until June 24 to gather the necessary signatures to place the proposal on the November ballot.

If passed, the Parents & Kids Safe AI Act would mark a significant step toward ensuring that children can safely navigate AI technologies, setting a precedent for a more responsible and secure digital environment.
