Integrating Children’s Rights in AI Governance

How should children’s rights be integrated into AI governance?

AI is increasingly present within educational contexts where it personalizes learning experiences, monitors attainment, and assists teachers in lesson planning. As children spend more time online and connect via social media, AI shapes how they access information and maintain friendships. Furthermore, AI systems in the public sector significantly impact children’s lives and their families by informing decisions related to key services such as healthcare, education, housing, and criminal justice.

Despite the pervasive influence of AI on children’s lives, they are often excluded from the decision-making processes concerning the design, development, and governance of AI technologies. This lack of inclusion means that AI systems are frequently developed without considering the unique needs and vulnerabilities of younger users. Moreover, policy responses aimed at protecting children often lack insights into their actual experiences and interests.

Neglecting these crucial considerations can lead to significant detrimental impacts on children’s rights. Although nearly every country has ratified the United Nations Convention on the Rights of the Child (UNCRC), a study by UNICEF in 2020 found that most national AI strategies made minimal or no mention of children’s rights. Governance plays a pivotal role in ensuring that children’s rights are upheld and that AI systems are designed and implemented to serve the best interests of the child. To achieve this, further research is necessary to understand the evolving relationships children have with AI.

Current Work in Children’s Rights and AI

Since 2020, a dedicated team has been working on various projects exploring the intersection of children’s rights and AI. Collaborating with UNICEF, they piloted policy guidance on child-centred AI. In a series of semi-structured interviews with public sector stakeholders in the UK, many expressed a desire to engage children in discussions about AI but reported uncertainty on how to proceed.

To further this dialogue, a research article titled Navigating Children’s Rights and AI in the UK examines current strategies addressing children’s rights in relation to AI systems. In a broader context, the team conducted in-depth analyses mapping frameworks that align data-intensive technologies with children’s rights and well-being. Their findings were published in the report AI, Children’s Rights, & Wellbeing: Transnational Frameworks, which uses heatmaps to show how 13 international frameworks address key themes such as children’s rights and well-being.

The analysis highlighted that most frameworks primarily focus on the responsibilities of governments and policymakers to uphold children’s rights through regulations and guidelines. However, there is often little emphasis on the government’s own obligation to avoid infringing upon children’s rights. Additionally, the distinction between children’s rights and well-being is frequently blurred, and fewer than half of the reviewed frameworks recommended a Child Rights Impact Assessment (CRIA).

Engaging Children in AI Development

Research is underway to establish best practices for involving children in AI development, with the aim of fostering child-centred AI. An ongoing project, launched in 2021, brings together the Children’s Parliament and the Scottish AI Alliance to engage primary school children (ages 8–12) in Scotland. Workshops with around 100 children across four schools have gathered their views on AI and on how they wish to be involved in its development and governance.

In the current phase, children are directly engaging with developers and policymakers to influence decision-making regarding AI systems that affect their lives. This initiative highlights the importance of incorporating children’s voices in AI governance.

Legal Frameworks and Future Directions

Collaboration with the Council of Europe has led to a mapping study assessing the need for legally binding frameworks for AI systems used by or affecting children up to the age of 18. This study addresses three significant challenges identified during a conference introducing the new strategy for children’s rights:

  • The absence of legal frameworks addressing children’s rights in the context of AI.
  • The tendency of AI system design to overlook children’s rights.
  • The fragmented state of scientific evidence on the impact of AI on children’s development.

The findings of this mapping study are anticipated to be published within the first half of the year.

Key Findings and Recommendations

The report AI, Children’s Rights, & Wellbeing: Transnational Frameworks underscores the necessity for a multistakeholder approach to facilitate international collaboration and knowledge sharing. This is essential to ensure the effective implementation of these frameworks. The work serves as a foundation for future research and a resource for policymakers and practitioners aiming to develop child-centred approaches to AI.

As many countries prepare for the EU AI Act and the Council of Europe Framework Convention on AI and human rights, it is critical to keep children and young people at the forefront of these discussions. The inclusion of children’s rights in the draft framework convention is a positive step forward.

While progress is being made at the legal and policy levels on children’s rights, further effort is needed. As AI becomes increasingly integrated into children’s daily lives, it is paramount that their rights and voices are central to decision-making processes concerning the design, development, deployment, and governance of AI systems. Actively involving children in these processes is crucial to maximizing the benefits of AI while minimizing associated risks, ultimately creating a digital world where children can thrive.
