Putting Humans at the Centre: UN AI Panel Begins Work on Global Impact Study

Tasked with navigating the volatile intersection of innovation and ethics, a group of world-leading experts is launching a landmark study into the forces transforming modern life.

The Mission of the UN AI Panel

“We are not just focusing on AI as a mathematical or algorithmic field: we are also looking at ensuring that humans are central to decision-making,” says a founding member of the UN’s Independent International Scientific Panel on AI.

The panel, formally appointed by the General Assembly in February, comprises 40 experts from diverse backgrounds, including academia, the private sector, civil society, and government. Their expertise spans core technical AI knowledge, applied AI, safety, infrastructure, and AI policy.

Human-Centric Decision Making

“A human in the machine” is a phrase often used in relation to AI, emphasizing the importance of human involvement in AI-driven decisions. The panel aims to discern when human expertise is necessary and when tasks can be automated.

Understanding the link between AI systems and human mental models is also crucial. This concept, known as the co-adaptation loop, describes the mutual evolution that occurs as humans adjust to new information and AI systems learn in turn. The panel is exploring how AI can be used to enhance human capabilities rather than replace them, fostering cooperation between AI and humans across various fields.

Advocating for Public Digital Infrastructure

The panel member advocates for a public digital infrastructure to ensure that everyone has access to the resources needed to develop AI technologies. This includes incorporating diverse cultures and languages into AI models to prevent bias towards a limited number of countries.

Addressing Ethical Concerns

The launch of the panel reflects growing concerns about the risks of unregulated AI. UN Secretary-General António Guterres has warned that “humanity’s fate cannot be left to an algorithm,” while the UN High Commissioner for Human Rights cautioned against AI developers who ignore fundamental social and ethical principles.

Ethics and trust are vital in the AI sector, as is an understanding of the limitations of AI models. One potential safeguard is AI watermarking, which would make it possible to verify whether a given piece of content is human-originated or AI-generated.

Future Directions

These topics and others are expected to feature in the Scientific Panel’s first report, scheduled for release at the Global Dialogue on AI Governance in Geneva on July 6-7. The panel is mandated to produce an annual report with evidence-based assessments related to the opportunities, risks, and impacts of artificial intelligence.

Importantly, the panel is not a regulatory body; it will not set rules, enforce standards, or prescribe policy. Instead, it will provide rigorous, evidence-based analyses to inform decision-making.
