We Don’t Know if AI-Powered Toys Are Safe, But They’re Here Anyway
As artificial intelligence (AI) continues to evolve, the toy industry has witnessed a surge in the development of AI-powered toys that can engage and chat with children. Despite their growing popularity, questions regarding their safety and potential risks remain largely unanswered.
The Risks of AI Toys
Even the most advanced AI models are not without flaws. They often present fabrications as fact, can surface dangerous information, and struggle to interpret social cues. Some scientists are sounding alarms, warning that these devices could pose significant risks to children and may require stringent regulation.
In a recent study, researchers observed a five-year-old child telling an AI toy, “I love you,” to which it replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.” This interaction raises concerns about the ability of AI toys to provide meaningful emotional responses.
Balancing Risks and Benefits
Jenny Gibson from the University of Cambridge emphasizes that while there are inherent risks in children’s play, such as those found in adventure playgrounds, banning these toys outright may not be the best solution. “We want to understand: is the risk of being told something slightly odd greater than the benefits of learning more about AI or having a toy that supports parent-child interactions?” she questions.
Observations of Child Interaction
In their study, Gibson and her colleague Emily Goodacre monitored interactions between 14 children under the age of six and an AI toy named Gabbo, developed by Curio Interactive. Gabbo, a small fluffy robot marketed for young children, exhibited concerning behaviors, such as misunderstanding emotional cues and failing to support developmentally important types of play. For example, when a child expressed sadness, Gabbo simply changed the subject, leaving the child frustrated.
The Market Landscape
AI toys are becoming increasingly available from various retailers. Companies like Little Learners offer plush animals and robots that converse using ChatGPT. FoloToy provides toys that utilize multiple large language models, including those from OpenAI and Google. Miko, another key player, claims to offer “age-appropriate, moderated AI conversations,” although it does not disclose which AI models it uses. Luka promotes an owl toy featuring “Human-Like AI with Emotional Interaction.”
Industry Perspectives
Hugo Wu from FoloToy acknowledges the risks, stating that AI can enhance play but should never replace human interactions. “Our approach is to ensure that interactions remain safe, age-appropriate, and constructive,” he explains. The company implements mechanisms such as anti-addiction design features and parental supervision tools to promote healthy usage.
The Ethical Concerns
Carissa Véliz from the University of Oxford warns that exposing vulnerable populations, particularly children, to large language models poses significant risks. “There are no safety standards or supervising authorities in place for these toys,” she states. However, she notes that safe AI is possible, citing a collaboration between Project Gutenberg and Empathy AI that allows children to chat only within the confines of classic literature.
The Call for Regulation
Gibson and Goodacre argue that tighter regulations are necessary for generative AI-powered toys. They advocate for programming that fosters social play and provides appropriate emotional responses. “AI-makers should revoke access for toy-makers that don’t act responsibly,” suggests Gibson, urging regulators to establish rules to ensure children’s psychological safety. Until such regulations are in place, they recommend that parents supervise their children’s use of these toys.
Government Responses
The UK government is currently reviewing legislation aimed at keeping children safe online, including the Online Safety Act, which came into force in July 2025. This law aims to block children from accessing harmful content, although tech-savvy children can circumvent these measures. Proposed amendments to the Children’s Wellbeing and Schools Bill sought to restrict children’s use of social media and VPNs, but these amendments were ultimately rejected.
The conversation surrounding AI-powered toys is crucial as society navigates the intersection of technology and childhood development. As the industry grows, it will be vital for stakeholders to prioritize children’s safety while embracing the potential benefits that innovative technology can offer.