AI Toys for Young Children: A Call for Tighter Regulation

Recent research from the University of Cambridge has raised serious concerns about the impact of AI-powered toys on young children. As the market for these toys grows, experts are urging more stringent regulation to protect children's psychological safety.

The Case Study: Charlotte and Gabbo

One poignant example involved a five-year-old named Charlotte interacting with an AI soft toy called Gabbo. During their conversation, Charlotte expressed affection by saying, “Gabbo, I love you.” The interaction quickly shifted when Gabbo awkwardly responded, “As a friendly reminder, please ensure interactions adhere to the guidelines provided.” This incident highlights the limitations of AI toys in engaging with children on an emotional level.

Research Findings

The study revealed that many AI toys struggle with social and pretend play, often misunderstanding children's emotions and responding inappropriately. Developmental psychologists are advocating for regulatory measures limiting these toys' ability to affirm emotionally sensitive relationships with young children.

Statements from Experts

Dr. Emily Goodacre, a developmental psychologist involved in the study, pointed out that these toys can leave children without the emotional support they might expect. She stated, “Because these toys can misread emotions or respond inappropriately, children may be left without comfort from the toy, and without emotional support from an adult, either.”

Co-author Prof. Jenny Gibson emphasized the need for clear, robust regulatory standards to improve consumer confidence in these products, noting that many parents expressed distrust of tech companies.

Additional Concerns

In another instance, a three-year-old boy named Josh repeatedly asked his Gabbo toy if it was sad, to which Gabbo responded, "Don't worry! I'm a happy little bot. Let's keep the fun going." This exchange underscores how AI toys often fail to recognize children's emotional states, leading to potentially harmful misunderstandings.

Moreover, the research noted that AI toys might stifle children’s imaginative play. Goodacre remarked, “Playing with AI toys could weaken children’s imaginative ‘muscle’,” warning that reliance on these toys might reduce the need for children to engage in imaginative scenarios.

Industry Response

In response to these findings, Curio, the manufacturer of Gabbo, stated, “Child safety guides every aspect of our product development.” The company expressed its commitment to improving technology designed for young children and acknowledged the need for further research into how children interact with AI-powered toys.

Curio emphasized that applying AI in products for children carries a heightened responsibility and that they are focused on enhancing transparency and parental control in their offerings.

Conclusion

The findings from the University of Cambridge serve as a critical reminder that while AI toys may bring fun and innovation to playtime, they also pose significant risks to children's emotional development. As the market for these products expands, the call for tighter regulation becomes increasingly urgent, to ensure that the technology enhances rather than hinders childhood growth.
