Integrating Children’s Rights in AI Governance

How should children’s rights be integrated into AI governance?

AI is increasingly present within educational contexts, where it personalizes learning experiences, monitors attainment, and assists teachers in lesson planning. As children spend more time online and connect via social media, AI shapes how they access information and maintain friendships. Furthermore, AI systems in the public sector significantly affect children and their families by informing decisions about key services such as healthcare, education, housing, and criminal justice.

Despite the pervasive influence of AI on children’s lives, they are often excluded from the decision-making processes concerning the design, development, and governance of AI technologies. This lack of inclusion means that AI systems are frequently developed without considering the unique needs and vulnerabilities of younger users. Moreover, policy responses aimed at protecting children often lack insights into their actual experiences and interests.

Neglecting these considerations can significantly harm children's rights. Although nearly every country has ratified the United Nations Convention on the Rights of the Child (UNCRC), a 2020 UNICEF study found that most national AI strategies made minimal or no mention of children's rights. Governance plays a pivotal role in ensuring that children's rights are upheld and that AI systems are designed and implemented to serve the best interests of the child. To achieve this, further research is needed to understand children's evolving relationships with AI.

Current Work in Children’s Rights and AI

Since 2020, a dedicated team has been working on various projects exploring the intersection of children’s rights and AI. Collaborating with UNICEF, they piloted policy guidance on child-centred AI. In a series of semi-structured interviews with public sector stakeholders in the UK, many expressed a desire to engage children in discussions about AI but reported uncertainty on how to proceed.

To further this dialogue, a research article titled Navigating Children’s Rights and AI in the UK examines current strategies addressing children’s rights in relation to AI systems. More broadly, the team conducted in-depth analyses mapping frameworks that align data-intensive technologies with children’s rights and well-being. Their findings were published in the report AI, Children’s Rights, & Wellbeing: Transnational Frameworks, which includes heatmaps assessing 13 international frameworks against key themes such as children’s rights and well-being.

The analysis highlighted that most frameworks focus primarily on the responsibilities of governments and policymakers to uphold children’s rights through regulations and guidelines. However, they place little emphasis on governments’ own obligation to avoid infringing upon children’s rights. Additionally, the distinction between children’s rights and well-being is frequently blurred, and fewer than half of the reviewed frameworks recommended a Child Rights Impact Assessment (CRIA).

Engaging Children in AI Development

Research is underway to establish best practices for involving children in AI development, with the aim of fostering child-centred AI. An ongoing project, begun in 2021 in collaboration with the Children’s Parliament and the Scottish AI Alliance, engages primary school children (ages 8–12) in Scotland. Workshops with around 100 children across four schools have gathered their views on AI and on how they wish to be involved in its development and governance.

In the current phase, children are directly engaging with developers and policymakers to influence decision-making regarding AI systems that affect their lives. This initiative highlights the importance of incorporating children’s voices in AI governance.

Legal Frameworks and Future Directions

Collaboration with the Council of Europe has led to a mapping study assessing the need for legally binding frameworks for AI systems used by or affecting children up to the age of 18. This study addresses three significant challenges identified during a conference introducing the new strategy for children’s rights:

  • The absence of legal frameworks addressing children’s rights in the context of AI.
  • The design of AI systems often overlooks children’s rights.
  • The scientific evidence regarding the impact of AI on children’s development is fragmented.

The findings of this mapping study are anticipated to be published within the first half of the year.

Key Findings and Recommendations

The report AI, Children’s Rights, & Wellbeing: Transnational Frameworks underscores the necessity for a multistakeholder approach to facilitate international collaboration and knowledge sharing. This is essential to ensure the effective implementation of these frameworks. The work serves as a foundation for future research and a resource for policymakers and practitioners aiming to develop child-centred approaches to AI.

As many countries prepare for the EU AI Act and the Council of Europe Framework Convention on AI and human rights, it is critical to keep children and young people at the forefront of these discussions. The inclusion of children’s rights in the draft framework convention is a positive step forward.

While progress is being made at the legal and policy levels regarding children’s rights, more efforts are necessary. As AI becomes increasingly integrated into children’s daily lives, it is paramount that their rights and voices are central to decision-making processes concerning the design, development, deployment, and governance of AI systems. Actively involving children in these processes is crucial to maximizing the benefits of AI while minimizing associated risks, ultimately creating a digital world where children can thrive.
