Embedding Human Rights in AI Governance: A Practical Approach

Human rights in the age of artificial intelligence are no longer an abstract concern; they are fast becoming an operational constraint on how data is collected, analyzed, and turned into evidence. For cybersecurity, information governance, and eDiscovery teams, the United Nations’ emerging AI rights framework is shifting from background noise in New York conference rooms to a live factor in tooling decisions, regulatory interactions, and cross‑border contentious matters.

This article outlines how global rights‑based frameworks are evolving—and how cybersecurity, governance, and legal professionals can prepare systems, policies, and documentation accordingly.

The UN’s Digital Compact: A Rights‑Based Vision for AI

The Global Digital Compact, annexed to the Pact for the Future adopted at the Summit of the Future in September 2024, sets out a shared vision for an “inclusive, open, sustainable, fair, safe and secure digital future for all.” It commits states to foster digital cooperation that “respects, protects and promotes human rights,” including through enhanced governance of artificial intelligence, closing digital divides, and developing interoperable data governance approaches. The Compact is a non‑binding political declaration under General Assembly resolution A/RES/79/1, but it consolidates a set of expectations that states have endorsed at the head‑of‑government level.

From Resolutions to Infrastructure: UN Bodies and Their Mandates

One practical expression of those expectations is the decision to establish both an Independent International Scientific Panel on Artificial Intelligence and a Global Dialogue on AI Governance. The General Assembly operationalized that move on 26 August 2025 by adopting resolution A/RES/79/325, which sets out the terms of reference and modalities for the Panel and the Dialogue. The resolution mandates the Panel to provide evidence‑based assessments on AI’s opportunities, risks, and societal impacts, and tasks the Dialogue with convening governments and stakeholders annually to share best practices, discuss international cooperation, and support responsible AI governance aligned with the Sustainable Development Goals.

For practitioners, the key point is that these processes are explicitly anchored in the Global Digital Compact’s human‑rights‑centric framing of digital governance, rather than in a purely technical or industrial-policy agenda. A sensible immediate step for any organization using AI in security operations, compliance analytics, or review workflows is to inventory where those systems intersect with rights‑sensitive contexts—such as surveillance, employment, or content moderation—and to ensure that governance documentation addresses explainability, oversight, and redress in language that can be understood by regulators and courts.
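
A minimal sketch of what such an inventory could look like, assuming a team tracks systems in code rather than a spreadsheet, appears below; the field names and context categories are illustrative inventions, not terminology prescribed by the Compact or the resolution.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and context categories are assumptions,
# not terminology from the Global Digital Compact or A/RES/79/325.

RIGHTS_SENSITIVE_CONTEXTS = {"surveillance", "employment", "content_moderation"}

@dataclass
class AISystemRecord:
    name: str
    owner: str                # accountable security, compliance, or review function
    contexts: set[str]        # where the system is actually used
    explainability_doc: bool  # a plain-language description exists
    human_oversight: bool     # a person can review or override outputs
    redress_channel: bool     # affected individuals can contest decisions

    def rights_sensitive(self) -> bool:
        """Flag systems that intersect rights-sensitive contexts."""
        return bool(self.contexts & RIGHTS_SENSITIVE_CONTEXTS)

    def governance_gaps(self) -> list[str]:
        """List the documentation a regulator or court might ask about."""
        gaps = []
        if not self.explainability_doc:
            gaps.append("no explainability documentation")
        if not self.human_oversight:
            gaps.append("no defined human oversight")
        if not self.redress_channel:
            gaps.append("no redress or contestation channel")
        return gaps

# Example: an insider-risk tool that touches both surveillance and employment
record = AISystemRecord(
    name="insider-risk-scoring",
    owner="Security Operations",
    contexts={"surveillance", "employment"},
    explainability_doc=True,
    human_oversight=True,
    redress_channel=False,
)
if record.rights_sensitive():
    print(record.name, "->", ", ".join(record.governance_gaps()) or "no gaps")
```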

The First Binding Treaty on AI: Europe Sets the Legal Bar

That bar is set by the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature in September 2024 and widely described as the first legally binding international treaty on AI. The Convention requires parties to ensure that activities across the lifecycle of AI systems are fully consistent with human rights, democracy, and the rule of law. Its core obligations echo familiar data‑protection principles—legality, proportionality, transparency, accountability, and privacy by design—but now apply explicitly across AI system design, operation, and oversight.

As of early 2026, several states, including the United States, have signed the Convention. The Convention will enter into force after a three‑month period following ratification by at least five signatories, including at least three Council of Europe member states—a threshold not yet met at this writing. Whether the United States will proceed to ratification—which would require Senate consent—remains uncertain, as no ratification process has yet been completed. Regardless, the Convention’s principles are already shaping expectations among trading partners, regulators, and counterparties in cross-border matters.

For cross‑border investigations and eDiscovery, this means treaty‑level language on explainability, oversight, and access to effective remedies may increasingly be invoked in arguments about AI‑enhanced review, document prioritization, or risk scoring.

Human Rights Language, Real‑World Impacts

Civil society and UN human‑rights bodies are working to ensure that AI governance is explicitly framed in terms of impacts on internationally protected rights, not only in terms of “safety” or “ethics.” The free‑expression advocacy group Article 19, for example, has stressed that the UN’s Global Digital Compact and AI‑related resolutions should prioritize human rights and sustainable development, and has pressed states to refrain from using AI systems that cannot be operated in compliance with human rights law or that pose undue risks, particularly to vulnerable people. The Freedom Online Coalition’s 2025 joint statement on AI and human rights similarly highlights risks such as arbitrary surveillance, discriminatory profiling, and online harms, and calls for risk assessments, human oversight, and effective redress mechanisms.

The UN Human Rights Office’s B‑Tech Project has gone further by publishing a taxonomy of human‑rights risks connected to generative AI. That paper maps harms from generative AI products and services onto specific rights, including freedom from physical and psychological harm, equality before the law and protection against discrimination, privacy, freedom of expression and access to information, participation in public affairs, and rights of the child. It emphasizes that the most serious harms being observed are impacts on rights already guaranteed under international human rights law, and that existing state duties to protect rights and corporate responsibilities to respect them should be applied directly to generative AI.

For cybersecurity and information‑governance teams, this rights‑based turn has concrete implications. AI‑driven monitoring and anomaly‑detection tools may need to be documented not only in terms of technical performance, but also in terms of their impact on privacy, non‑discrimination, and freedom of expression, especially when used in workplace surveillance or content moderation contexts. A practical way to integrate this is to extend existing data‑protection impact assessment processes into AI impact assessments that explicitly consider which rights could be affected, how, and what mitigations—such as human‑in‑the‑loop review or clear appeal mechanisms—are in place.
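
A minimal sketch of that extension, assuming a team records assessments as structured data rather than free-text memos, might look like the following; the schema is an assumption for illustration, and the rights list only loosely echoes the categories in the B‑Tech taxonomy.

```python
from dataclasses import dataclass

# Hypothetical schema for extending a DPIA into an AI impact assessment.
# The structure is an illustrative assumption, not a prescribed methodology.

RIGHTS_CATALOG = {
    "privacy",
    "non_discrimination",
    "freedom_of_expression",
    "freedom_from_harm",
    "participation_in_public_affairs",
    "rights_of_the_child",
}

@dataclass
class RightsImpact:
    right: str           # which right could be affected
    mechanism: str       # how the system could affect it
    mitigation: str      # e.g. human-in-the-loop review, appeal channel
    residual_risk: str   # "low", "medium", or "high" after mitigation

def open_findings(impacts: list[RightsImpact]) -> list[str]:
    """Return issues a reviewer should resolve before sign-off."""
    findings = []
    for impact in impacts:
        if impact.right not in RIGHTS_CATALOG:
            findings.append(f"unrecognized right: {impact.right}")
        if impact.residual_risk == "high" and not impact.mitigation:
            findings.append(f"high residual risk on {impact.right} with no mitigation")
    return findings

assessment = [
    RightsImpact(
        right="non_discrimination",
        mechanism="anomaly thresholds tuned on historical data may over-flag some groups",
        mitigation="human review before any adverse action; periodic bias testing",
        residual_risk="medium",
    ),
]
print(open_findings(assessment))  # an empty list means no open findings
```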

Practical Steps for Cybersecurity, IG, and eDiscovery Teams

Beyond the text of these instruments, practitioners and commentators are already drawing operational implications. Analysts of UN resolution A/RES/79/325 see the Panel and Dialogue as platforms that will promote transparency and accountability, capacity‑building for states with fewer resources, and openness and interoperability in AI models and data. This mix aligns closely with what many enterprises are already pursuing internally: more explainable models, better documentation, and clearer lines of responsibility when AI informs high‑stakes decisions.

For US-based teams, these international developments do not displace domestic frameworks—such as the NIST AI Risk Management Framework, emerging state laws in Colorado and California, or sector-specific guidance from the FTC, EEOC, and FDA—but they increasingly inform them. Organizations with multinational operations, global clients, or exposure to cross-border litigation should expect international human-rights norms to surface alongside domestic compliance obligations.

For cybersecurity operations, this suggests that AI‑assisted tools used for intrusion detection, fraud prevention, or behavioral analytics should be implemented with explicit human‑rights safeguards in mind. That can mean logging AI‑generated alerts and decisions in ways that allow later audit, ensuring that algorithmic outputs are not the sole basis for punitive measures without human review, and testing for discriminatory impacts in datasets and thresholds. It also means being prepared to explain, in clear and non-technical terms, how a given system works, what data it uses, and what measures exist to prevent unjustified intrusion into private life.
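
A minimal sketch of that audit‑logging discipline, assuming alerts are written to an append‑only JSON‑lines file, could look like this; the field names and the guardrail rule are illustrative assumptions, not the logging schema of any particular security product.

```python
import json
import time
from pathlib import Path

# Illustrative sketch: field names and the guardrail rule are assumptions.

AUDIT_LOG = Path("ai_alert_audit.jsonl")

def log_alert(alert_id: str, model_version: str, score: float,
              inputs_summary: str, human_reviewer: str | None,
              action_taken: str) -> dict:
    """Append an auditable record of an AI-generated alert and its disposition."""
    # Guardrail: punitive actions must not be recorded without a human reviewer.
    if action_taken not in ("none", "escalated") and human_reviewer is None:
        raise ValueError("punitive action recorded without human review")
    record = {
        "alert_id": alert_id,
        "timestamp": time.time(),
        "model_version": model_version,    # supports later reconstruction
        "score": score,
        "inputs_summary": inputs_summary,  # what data the model saw, in brief
        "human_reviewer": human_reviewer,  # None means no human review yet
        "action_taken": action_taken,
    }
    with AUDIT_LOG.open("a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

# Example: an alert escalated by the model and confirmed by an analyst
log_alert(
    alert_id="ALRT-2026-0142",
    model_version="ueba-3.2.1",
    score=0.91,
    inputs_summary="login velocity plus unusual data egress volume",
    human_reviewer="analyst.k.chen",
    action_taken="account_suspended",
)
```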

Information‑governance leaders are likely to face growing expectations that AI‑relevant policies—on retention, classification, and access control—reflect not only national privacy laws but also international commitments to cultural and linguistic diversity, inclusion, and non‑discrimination. In practice, that may involve asking vendors precise questions about training data sources, bias mitigation practices, and whether systems can be configured to provide user‑facing explanations and contestation channels in languages that match the affected populations.
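
One hypothetical way to make those questions repeatable is a standing questionnaire that procurement and IG teams apply to every AI vendor; the wording below is a starting‑point sketch, not an authoritative checklist drawn from any framework.

```python
# Illustrative vendor questionnaire: the questions are a starting-point assumption.
VENDOR_AI_QUESTIONNAIRE = {
    "training_data": [
        "What are the sources of the training data, and how is provenance documented?",
        "Do the data include rights-sensitive categories such as biometric or health data?",
    ],
    "bias_mitigation": [
        "What bias testing is performed, on which dimensions, and how often?",
        "Can evaluation results be shared for our own impact assessments?",
    ],
    "explanations_and_redress": [
        "Can the system produce user-facing explanations for individual outputs?",
        "Can explanation and contestation channels be configured in the languages of affected populations?",
    ],
}

def unanswered(responses: dict[str, dict[str, str]]) -> list[str]:
    """Return questionnaire items the vendor has not yet answered."""
    missing = []
    for topic, questions in VENDOR_AI_QUESTIONNAIRE.items():
        for question in questions:
            if not responses.get(topic, {}).get(question):
                missing.append(f"{topic}: {question}")
    return missing

print(len(unanswered({})))  # 6 items remain open when nothing has been answered
```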

For eDiscovery and investigations, the Framework Convention’s insistence that AI use remain consistent with human rights and the rule of law provides a shared reference point when parties discuss the defensibility of analytics tools. Legal teams may increasingly be expected to show that AI‑assisted review platforms allow for meaningful human oversight, provide sufficient logs to reconstruct how key decisions were reached, and include mechanisms to correct or challenge outputs that appear biased or erroneous. One practical tactic is to align internal AI‑governance playbooks with the themes reflected in A/RES/79/325 and in rights‑based guidance: transparency about AI’s role in workflows, investment in staff capacity to interrogate AI outputs, and a preference for systems that support audit and interoperability over opaque black boxes.
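
A minimal sketch of a decision trail that supports that kind of showing, assuming the platform exposes per‑document records, might look like the following; the class and field names are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

# Sketch of a reviewable decision trail for an AI-assisted review platform.
# Class and field names are illustrative assumptions, not any vendor's API.

@dataclass
class ReviewDecision:
    doc_id: str
    model_score: float             # e.g. predicted responsiveness likelihood
    model_version: str
    suggested_call: str            # "responsive", "not_responsive", "privileged"
    reviewer: str | None = None
    final_call: str | None = None
    challenge_notes: list[str] = field(default_factory=list)

    def confirm(self, reviewer: str, final_call: str) -> None:
        """Record the human call; disagreement with the model is preserved, not overwritten."""
        self.reviewer = reviewer
        self.final_call = final_call
        if final_call != self.suggested_call:
            self.challenge_notes.append(
                f"{reviewer} overrode model call {self.suggested_call!r} -> {final_call!r}"
            )

# Example: a reviewer disagrees with the model and the override is logged
decision = ReviewDecision(
    doc_id="DOC-00981",
    model_score=0.34,
    model_version="tar-2.4",
    suggested_call="not_responsive",
)
decision.confirm(reviewer="reviewer.a.ortiz", final_call="responsive")
print(decision.challenge_notes)
```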

Toward a Rights‑Based Discipline for AI Governance

This article reflects developments and public documentation available as of January 2026, including the Pact for the Future and its annexes, resolution A/RES/79/325, and early commentary on the Council of Europe Framework Convention. The picture that emerges is not yet a fully harmonized “global” AI regime, but it is already a recognizable architecture: a UN‑anchored political compact that centers human rights in digital governance, a pair of new UN mechanisms to translate those commitments into ongoing expert advice and dialogue, a binding regional treaty setting high‑level obligations for AI systems, and a growing body of human‑rights analysis targeted specifically at generative AI.

The practical relevance of these frameworks will vary: for organizations operating solely within the United States, the impact is indirect but growing; for those with international footprints, the obligations may become directly material. Either way, the question for cybersecurity, information‑governance, and eDiscovery professionals is less whether these instruments are perfect, and more whether internal AI‑enabled processes can be defended under their logic.

That means being able to show—not just assert—that AI tools are deployed in ways that respect dignity, minimize discriminatory impact, preserve the integrity of evidence, and provide realistic paths for redress when things go wrong. As enforcement approaches and litigation strategies evolve, the real test will be whether the AI systems embedded in your security stack, data‑governance workflows, and review platforms can be clearly explained, legally defended, and ethically justified—before regulators or courts ask.
