The Risks and Rewards of AI-Powered Toys for Children

We Don’t Know if AI-Powered Toys Are Safe, But They’re Here Anyway

As artificial intelligence (AI) continues to evolve, the toy industry has witnessed a surge in the development of AI-powered toys that can engage and chat with children. Despite their growing popularity, questions regarding their safety and potential risks remain largely unanswered.

The Risks of AI Toys

Even the most advanced AI models are not without flaws. They often present fabrications as fact, disseminate dangerous information, and struggle to interpret social cues. Some scientists are sounding alarms, suggesting that these devices could pose significant risks and may require stringent regulation.

In a recent study, researchers observed a five-year-old child telling an AI toy, “I love you,” to which it replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.” This interaction raises concerns about the ability of AI toys to provide meaningful emotional responses.

Balancing Risks and Benefits

Jenny Gibson from the University of Cambridge emphasizes that while there are inherent risks in children’s play, such as those found in adventure playgrounds, banning these toys outright may not be the best solution. “We want to understand: is the risk of being told something slightly odd greater than the benefits of learning more about AI or having a toy that supports parent-child interactions?” she questions.

Observations of Child Interaction

In their study, Gibson and her colleague Emily Goodacre monitored interactions between 14 children under the age of six and an AI toy named Gabbo, developed by Curio Interactive. Gabbo, a small fluffy robot marketed for young children, exhibited concerning behaviors, such as misunderstanding emotional cues and failing to engage in types of play that are important for development. For example, when a child expressed sadness, Gabbo simply changed the subject, leaving the child frustrated.

The Market Landscape

AI toys are becoming increasingly available from various retailers. Companies like Little Learners offer plush animals and robots that converse using ChatGPT. FoloToy provides toys that utilize multiple large language models, including those from OpenAI and Google. Miko, another key player, claims to offer “age-appropriate, moderated AI conversations,” although they do not disclose which AI models they use. Luka promotes an owl toy featuring “Human-Like AI with Emotional Interaction.”

Industry Perspectives

Hugo Wu from FoloToy acknowledges the risks, stating that AI can enhance play but should never replace human interactions. “Our approach is to ensure that interactions remain safe, age-appropriate, and constructive,” he explains. The company implements mechanisms such as anti-addiction design features and parental supervision tools to promote healthy usage.

The Ethical Concerns

Carissa Véliz from the University of Oxford warns that exposing vulnerable populations, particularly children, to large language models poses significant risks. “There are no safety standards or supervising authorities in place for these toys,” she states. However, she notes that safe AI is possible, citing a collaboration between Project Gutenberg and Empathy AI that allows children to chat only within the confines of classic literature.

The Call for Regulation

Gibson and Goodacre argue that tighter regulations are necessary for generative AI-powered toys. They advocate for programming that fosters social play and provides appropriate emotional responses. “AI-makers should revoke access for toy-makers that don’t act responsibly,” suggests Gibson, urging regulators to establish rules to ensure children’s psychological safety. Until such regulations are in place, they recommend that parents supervise their children’s use of these toys.

Government Responses

The UK government is currently reviewing legislation aimed at keeping children safe online, including the Online Safety Act, which became effective in July 2025. This law aims to block children from accessing harmful content, although tech-savvy children can circumvent these measures. Proposed amendments to the Children’s Wellbeing and Schools Bill sought to restrict children’s use of social media and VPNs, but these amendments were ultimately rejected.

The conversation surrounding AI-powered toys is crucial as society navigates the intersection of technology and childhood development. As the industry grows, it will be vital for stakeholders to prioritize children’s safety while embracing the potential benefits that innovative technology can offer.
