AI Governance: Prioritizing Human Rights in an Automated World

AI Governance and Human Rights

As AI continues to advance rapidly, global leaders are increasingly confronted with pressing questions about power, accountability, and the protection of fundamental freedoms in an automated world.

Key Discussions at the Internet Governance Forum 2025

During the Internet Governance Forum 2025 held in Lillestrøm, Norway, a pivotal session highlighted the launch of the Freedom Online Coalition’s (FOC) updated Joint Statement on Artificial Intelligence and Human Rights. This statement, which is supported by 21 countries and counting, delineates a vision for human-centric AI governance grounded in international human rights law.

Participants from government, civil society, and the tech industry stressed the urgent need for a multistakeholder approach to the real and present risks AI poses to rights such as privacy, freedom of expression, and democratic participation.

Interconnectedness of Human Rights and Security

Ambassador Ernst Noorman of the Netherlands emphasized that human rights and security must be viewed as interconnected. He cautioned that unregulated AI use could destabilize societies instead of providing protection, referencing the Netherlands’ own experiences with biased welfare algorithms.

Moreover, panellists including Germany’s Cyber Ambassador Maria Adebahr highlighted the alarming trend of AI being weaponized for transnational repression. Adebahr reaffirmed Germany’s commitment to the FOC, noting that the country is doubling its funding for the coalition.

The Role of Citizens and the Private Sector

Ghana’s cybersecurity chief, Divine Salese Agbeti, pointed out that the misuse of AI is not restricted to governments; citizens have also exploited this technology for manipulation and deception.

From the private sector, Microsoft’s Dr. Erika Moret presented the company’s comprehensive approach to embedding human rights in AI. This includes ethical design, impact assessments, and a refusal to engage in high-risk applications such as facial recognition in authoritarian contexts. Moret underscored the company’s adherence to the UN Guiding Principles and the need for transparency, fairness, and inclusivity.

Global Frameworks and Calls to Action

The session also drew attention to key global frameworks such as the EU AI Act and the Council of Europe’s Framework Convention, advocating for their widespread adoption as critical tools for managing AI’s global impact. The discussion culminated in a shared call to action urging governments to leverage regulatory tools and procurement power to uphold human rights standards in AI, while the private sector and civil society were encouraged to advocate for accountability and inclusion.

The FOC’s statement remains open for new endorsements, serving as a foundational text in the ongoing endeavor to align the future of AI with the fundamental rights of all people.

More Insights

G7 Summit Fails to Address Urgent AI Governance Needs

At the recent G7 summit in Canada, discussions primarily focused on economic opportunities related to AI, while governance issues for AI systems were notably overlooked. This shift towards...

Africa’s Bold Move Towards Sovereign AI Governance

At the Internet Governance Forum (IGF) 2025 in Oslo, African leaders called for urgent action to develop sovereign and ethical AI systems tailored to local needs, emphasizing the necessity for...

Top 10 Compliance Challenges in AI Regulations

As AI technology advances, the challenge of establishing effective regulations becomes increasingly complex, with different countries adopting varying approaches. This regulatory divergence poses...

China’s Unique Approach to Embodied AI

China's approach to artificial intelligence emphasizes the development of "embodied AI," which interacts with the physical environment, leveraging the country's strengths in manufacturing and...

Workday Sets New Standards in Responsible AI Governance

Workday has recently received dual third-party accreditations for its AI Governance Program, highlighting its commitment to responsible and transparent AI. Dr. Kelly Trindle, Chief Responsible AI...

AI Adoption in UK Finance: Balancing Innovation and Compliance

A recent survey by Smarsh reveals that while UK finance workers are increasingly adopting AI tools, there are significant concerns regarding compliance and oversight. Many employees express a desire...

AI Ethics Amid US-China Tensions: A Call for Global Standards

As the US-China tech rivalry intensifies, a UN agency is advocating for global AI ethics standards, highlighted during UNESCO's Global Forum on the Ethics of Artificial Intelligence in Bangkok...

Mastering Compliance with the EU AI Act Through Advanced DSPM Solutions

The EU AI Act emphasizes the importance of compliance for organizations deploying AI technologies, with Zscaler’s Data Security Posture Management (DSPM) playing a crucial role in ensuring data...

US Lawmakers Push to Ban Adversarial AI Amid National Security Concerns

A bipartisan group of U.S. lawmakers has introduced the "No Adversarial AI Act," aiming to ban the use of artificial intelligence tools from countries like China, Russia, Iran, and North Korea in...