Unifying AI Risk Management: Bridging the Gaps in Governance

As artificial intelligence becomes increasingly woven into the fabric of our lives, the need to manage its potential risks has spurred the development of numerous frameworks and standards. While these initiatives share common goals, their fragmentation threatens to hinder the responsible and trustworthy deployment of AI. Understanding the approaches being taken to bridge these divides and promote greater consistency and interoperability in AI risk management is therefore crucial. This exploration delves into the key strategies aiming to unify this complex landscape, examining how collaboration, harmonization, and practical tools can pave the way for more effective and aligned AI governance.

What approaches are used to promote greater consistency and interoperability in AI risk management?

As AI governance matures, numerous risk management frameworks and standards are emerging. To prevent fragmentation and ensure effective implementation of trustworthy AI, a push for greater consistency and interoperability is underway. This involves cooperation between state and non-state actors, both domestically and internationally, focusing on AI risk management, design (e.g., “trustworthiness by design”), and impact assessments.

Key Approaches to Interoperability:

  • Framework Mapping: Comparing and mapping different AI risk management frameworks is a foundational step. The goal is to identify areas of functional equivalence and divergence across these frameworks; a minimal sketch of such a mapping appears after this list.
  • Common Risk Management Steps: Most frameworks align with four high-level steps: ‘DEFINE’ (scope, context, and criteria), ‘ASSESS’ (risks at individual, aggregate, and societal levels), ‘TREAT’ (risks to mitigate adverse impacts), and ‘GOVERN’ (risk management processes). These steps provide a common structure for interoperability.
  • Addressing Governance Differences: Frameworks often vary in how they handle the ‘GOVERN’ function. Some explicitly include governance activities, while others distribute them throughout the risk management process or omit them altogether. Harmonizing governance approaches is crucial.
  • Conceptual and Terminological Alignment: Analyzing key concepts and terminology in different initiatives is essential. Identifying areas of consensus and incompatible components can help clarify debates around concepts like transparency, explainability, and interpretability.
  • Due Diligence Frameworks: Leveraging existing due diligence frameworks, like the OECD Due Diligence Guidance for Responsible Business Conduct (OECD DDG), to develop good practices for responsible AI is a promising avenue.
  • Certification Scheme Alignment: Researching and analyzing the alignment of AI certification schemes with OECD Responsible Business Conduct (RBC) and AI standards can improve the quality, comparability, and interoperability of these schemes.
  • Interactive Tools: Developing online tools that allow organizations and stakeholders to compare frameworks and navigate existing methods, tools, and good practices for AI risk management can facilitate interoperability.
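
To make the idea of functional-equivalence mapping concrete, here is a minimal sketch in Python. It assumes only the four high-level steps named above; the per-framework coverage flags are illustrative simplifications drawn from the comparisons in this article, not an authoritative reading of the instruments themselves.

```python
# A minimal sketch of framework mapping against the four high-level steps.
# Coverage flags are illustrative simplifications of this article's
# comparisons; real mappings require clause-by-clause analysis.

STEPS = ("DEFINE", "ASSESS", "TREAT", "GOVERN")

COVERAGE = {
    "ISO 31000":   {"DEFINE", "ASSESS", "TREAT", "GOVERN"},
    "NIST AI RMF": {"DEFINE", "ASSESS", "TREAT", "GOVERN"},
    "EU AIA":      {"DEFINE", "ASSESS", "TREAT"},
    "HUDERIA":     {"DEFINE", "ASSESS", "TREAT"},
}

def divergences(a: str, b: str) -> set:
    """Steps covered by one framework but not the other."""
    return COVERAGE[a] ^ COVERAGE[b]  # symmetric set difference

for fw, steps in COVERAGE.items():
    print(f"{fw:12s}", " ".join("x" if s in steps else "-" for s in STEPS))
print("ISO 31000 vs EU AIA diverge on:", divergences("ISO 31000", "EU AIA"))
```

Even this toy version shows why mapping matters: functional equivalence (shared steps) and divergence (the symmetric difference) fall out mechanically once frameworks are expressed against a common structure.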

While the general approaches are aligned, high-level differences exist, primarily around the ‘GOVERN’ function. Differences in the scope of individual frameworks also create inconsistencies. For example, the OECD DDG considers risks associated with business relationships more broadly, while ISO 31000, NIST AI RMF, HUDERIA, EU AIA, AIDA, and IEEE 7000-21 take more product-centered or value-driven approaches to managing AI risks. Addressing these differences will be key to promoting consistent and interoperable AI risk management.

What are the key differences observed across various AI risk management frameworks?

AI risk management frameworks are converging on a core set of principles, but significant differences persist in their approach. These discrepancies primarily revolve around the “GOVERN” function, encompassing elements like monitoring, communication, documentation, consultation, and embedding risk management practices.

While most frameworks seek to “DEFINE,” “ASSESS,” and “TREAT” AI risks, the methods for governing these processes diverge substantially.

Governance Approaches: Varying Levels of Emphasis

Some frameworks explicitly incorporate these governance activities under a distinct “GOVERN” function, while others distribute them across the entire risk management lifecycle or omit them altogether.

For example:

  • The EU AI Act (EU AIA) and the Canada AI and Data Act (AIDA) require providers of high-risk AI systems to identify, analyze, and mitigate risks. However, requirements for consulting stakeholders and for embedding risk management into organizational culture are absent.
  • The Council of Europe’s draft Human Rights, Democracy and the Rule of Law Risk and Impact Assessment (HUDERIA) is partly aligned, but elements relating to the “GOVERN” function are not present.
  • ISO/IEC Guide 51 is aimed at informing the development of product safety standards and does not cover embedding risk management policies or consulting stakeholders.

Scope and Focus: A Matter of Perspective

Frameworks also differ in their scope, target audience, and risk landscape, leading to varying approaches to governance.

  • OECD DDG: Broader scope that includes risks associated with business relationships. It recommends risk mitigation measures covering the sale and distribution of goods.
  • ISO 31000: Narrower scope considers risks and impacts to the organization.
  • NIST AI RMF: Focuses on harm to people, organizations, and ecosystems.
  • HUDERIA: Addresses risks to human rights, democracy, and the rule of law.
  • EU AIA & AIDA: Takes a product-safety approach
  • IEEE 7000-21:Integrates value-based considerations and stakeholder views into product or service design.
  • Target audience: The OECD DDG and ISO standards are aimed at board level organization changes. The other offer board level recommendations but implementation is prima at the technical level.

The EU AIA and AIDA also incorporate a unique regulatory feature wherein regulators define what constitutes a “high-risk” system, effectively prioritizing risk management efforts for companies.

What are the planned future actions for enhancing AI risk management practices?

Several strategic initiatives are in the pipeline to bolster AI risk management, with a focus on promoting interoperability and practical implementation. Here’s a rundown of the key areas:

Harmonizing AI Terminology and Concepts

The immediate next step involves a deep dive into the commonalities and differences in the language and concepts used across various AI impact assessment and risk management frameworks. This will include:

  • Identifying definitions and concepts that have a broad consensus.
  • Pinpointing potentially incompatible or unclear areas that could impede practical implementation, for example, ongoing debates over the meanings of transparency, explainability, and interpretability.
  • Developing a common understanding of the AI value chain, including the different actors involved and the various risks present at each stage.

Developing Good Practices for Responsible Business Conduct in AI

A promising approach for implementing AI risk management is to leverage the existing frameworks for responsible business conduct. This would involve aligning AI-specific terminology and frameworks with principles from the OECD Guidelines for Multinational Enterprises (MNE) and Due Diligence Guidance (DDG). Outcomes could include workshops and actionable guidelines, clarifying how Due Diligence Guidance principles for Responsible Business Conduct could be specifically applied to AI.

Aligning Certification Schemes with RBC and AI Standards

To improve the quality, comparability, and interoperability of certification standards and initiatives, the OECD is developing an assessment process to evaluate how well such initiatives align with the recommendations of the OECD DDG. This sets the stage for concrete recommendations to translate and align AI practices with Responsible Business Conduct (RBC) practices, and vice versa.

Developing an Interactive Online Tool

An interactive online tool is planned to help organizations and stakeholders compare frameworks. It would combine a comparison framework derived from the four steps described above with navigation of existing methods, tools, and good practices for identifying, assessing, treating, and governing AI risks, and it would link to the Catalogue of Tools and Metrics for Trustworthy AI. A minimal sketch of the data model such a tool might use follows.
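
The sketch below illustrates one plausible shape for that data model, assuming entries tagged by the four high-level steps. The entry names and the tagging scheme are hypothetical placeholders, not items from the actual Catalogue of Tools and Metrics for Trustworthy AI.

```python
# Hypothetical data model for a framework-navigation tool. Entries and
# tags are placeholders, not items from the real OECD catalogue.
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogueEntry:
    name: str
    kind: str          # e.g. "method", "tool", "good practice"
    steps: frozenset   # which of DEFINE/ASSESS/TREAT/GOVERN it supports

CATALOGUE = [
    CatalogueEntry("Example impact-assessment template", "method",
                   frozenset({"DEFINE", "ASSESS"})),
    CatalogueEntry("Example bias-testing toolkit", "tool",
                   frozenset({"ASSESS", "TREAT"})),
    CatalogueEntry("Example board-reporting checklist", "good practice",
                   frozenset({"GOVERN"})),
]

def navigate(step: str) -> list:
    """Return catalogue entries relevant to one risk management step."""
    return [e for e in CATALOGUE if step in e.steps]

print([e.name for e in navigate("ASSESS")])
```

Tagging every catalogue entry against the same four steps is what would let users move from "which frameworks cover this step" to "which concrete methods help me implement it."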

When it comes to governing AI risk, a key takeaway from a recent OECD report is that while various AI risk management frameworks generally align on high-level steps—DEFINE, ASSESS, TREAT, and GOVERN—significant differences emerge in how they approach the “GOVERN” function. This impacts the interoperability of these frameworks.

Key Differences in Governance

Here’s a breakdown of the core areas where governance approaches diverge:

  • Explicit vs. Distributed Governance: Some frameworks explicitly include governance activities under a designated “GOVERN” function, while others distribute them throughout the risk management process or omit them altogether.
  • Stakeholder Engagement: Certain regulations, like the proposed EU AI Act (EU AIA) and Canada AI and Data Act (AIDA), may lack requirements to consult internal and external stakeholders, a key aspect of the “GOVERN” function per OECD guidance on interoperability.
  • Embedding Risk Management: Similarly, embedding risk management into organizational culture—another “GOVERN” element—is not always explicitly addressed in proposed legislation.

Regulatory Considerations

Several significant regulatory nuances impact the “GOVERN” function (a small sketch after this list encodes the gaps as data):

  • EU AI Act and AIDA: Though requiring risk identification, analysis, and mitigation for high-risk AI systems, these proposed acts appear to lack some “GOVERN” risk management measures from the Interoperability Framework, like stakeholder consultation. However, the EU AI Act’s Article 17 requires a “quality management system” to ensure compliance, potentially incorporating risk management and accountability.
  • HUDERIA: The Council of Europe’s draft Human Rights, Democracy and the Rule of Law Risk and Impact Assessment (HUDERIA) is partly aligned but seems to lack elements from the Interoperability Framework related to GOVERN, like public communication on conformity to standards and leadership involvement in embedding risk management across the organization.
  • NIST AI RMF: The framework includes the sub-elements of GOVERN within its steps, but integrates them throughout rather than concentrating them in a single function.
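
As a worked illustration, the sketch below encodes the gaps just listed as data so they can be compared programmatically. The element names and missing/present flags are a simplified reading of this article's comparisons, not a legal analysis of the instruments themselves.

```python
# Hypothetical encoding of the GOVERN gaps described above. Flags reflect
# this article's reading of the draft instruments, not a legal analysis.
MISSING_GOVERN = {
    "EU AIA":  {"stakeholder consultation",
                "embedding risk management in organizational culture"},
    "AIDA":    {"stakeholder consultation",
                "embedding risk management in organizational culture"},
    "HUDERIA": {"public communication on conformity to standards",
                "leadership involvement in embedding risk management"},
}

def shared_gaps(a: str, b: str) -> set:
    """GOVERN sub-elements both frameworks are flagged as lacking."""
    return MISSING_GOVERN[a] & MISSING_GOVERN[b]

for fw, gaps in MISSING_GOVERN.items():
    print(fw, "->", sorted(gaps))
print("Shared by EU AIA and AIDA:", shared_gaps("EU AIA", "AIDA"))
```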

Practical Implications

For AI governance and compliance professionals, these discrepancies in the “GOVERN” function have significant implications:

  • Complexity and Cost: A lack of interoperability between frameworks can complicate and increase the costs associated with implementing trustworthy AI.
  • Effectiveness and Enforceability: Non-interoperable frameworks may reduce the effectiveness and enforceability of AI risk management efforts.
  • Customization is key: ISO 31000 recommends tailoring the standard to each organization and its specific context.

Call to Action

To ensure effective AI governance, legal-tech professionals, compliance officers, and policy analysts should advocate for:

  • Cooperation and Coordination: Encouraging collaboration between developers of standards and frameworks, both domestically and internationally.
  • Clear Metrics: Prioritizing clear metrics and definitions to ensure consistent risk management implementation across different use-cases.
  • Alignment with Broader Business Practices: Linking AI governance to responsible business conduct frameworks like the OECD Due Diligence Guidance.

Moving forward, the focus should be on harmonizing AI governance approaches and ensuring interoperability for practical and enforceable AI risk management.

Ultimately, fostering trustworthy AI demands not only consistent risk assessment but, critically, harmonized governance. While broad alignment exists in defining, assessing, and treating risks, fundamental differences in governing these processes present a significant barrier to effective implementation. Bridging these gaps, particularly regarding stakeholder engagement, embedding risk management, and integrating value-based business conduct, is crucial. By prioritizing cooperation, clear metrics, and alignment with established due diligence frameworks, legal, compliance, and policy professionals can pave the way for practically enforceable and truly responsible AI systems.
