Transforming AI Risk Governance: A Sociotechnical Approach

A Conceptual Model to Guide AI Risk Governance Strategies

Introduction

In recent years, risk mitigation has grown increasingly salient in the AI governance landscape. Across the world, both countries and multilateral organizations have progressed from high-level statements about the risks posed by AI to adopting frameworks, laws, and policies that clarify the rights and values that AI developers and users ought to respect.

Key examples include:

  • European Union Artificial Intelligence Act (Regulation (EU) 2024/1689)
  • Rules from the Biden-Harris Administration governing federal agencies’ use of AI
  • A unanimous United Nations General Assembly resolution on trustworthy AI (UN G.A. Res. 78/265)

These policies articulate the public interests that must be protected against AI risks, and they have spurred the establishment of new AI safety institutes charged with advancing the responsible design, evaluation, and use of AI.

Concerns About AI Capabilities

Policymakers and stakeholders are increasingly worried about the growing capabilities of AI models. Consequently, many emerging AI risk management actions focus on:

  • Improved testing and evaluation of AI models
  • Safeguards on model inputs and outputs
  • Limiting access to AI model weights

Supporters of a model-centric governance approach argue that interventions during the model training and release stages can help reduce downstream risks, particularly the misuse of generative AI models. However, some critics contend that this approach is infeasible and can hinder innovation and economic competition.

The Need for a Conceptual Framework

The absence of a shared conceptual framework hampers AI risk management by constraining the methods, tools, and expertise brought to bear on the problem. This paper aims to structure the ongoing debate on AI risk management by assessing intervention points across the sociotechnical system.

AI risk management must focus on protecting public rights and safety. The paper argues for recentering the prevention of harms at the sociotechnical level, on the premise that risks can be mitigated effectively only with a comprehensive understanding of how a system's components interact.

Key Analytic Shifts

Part I discusses various AI risk management frameworks, contrasting their focus on technical systems and highlighting their limitations. The frameworks include:

  • The EU AI Act
  • U.S. guidance on responsible AI use
  • The UK’s AI Security Institute research agenda
  • The NIST AI Risk Management Framework

Part II introduces the proposed conceptual framework, advocating for:

  • A sociotechnical approach to risk management
  • A preference for interventions aimed at preventing harms rather than merely reducing future hazards

Example Case Study

Part III examines a specific instance of AI risk, image-based sexual abuse, illustrating how the proposed framework can guide effective risk mitigation.

Recommendations to Policymakers

Part IV outlines four key recommendations:

  1. Develop a sociotechnical system map to identify relevant components related to the harm being investigated.
  2. Task deployers of AI systems with assessing and mitigating specific use case risks.
  3. Reduce reliance on developers for independent risk mitigation activities.
  4. Invest in the infrastructure necessary for sociotechnical evaluations and the range of risk mitigation techniques.

Limitations of Current AI Governance Frameworks

While new governance efforts aim to manage AI risks, several weaknesses persist:

  • Insufficient attention to the relationality of risk
  • Over-reliance on developers for risk management
  • Emphasis on technocratic tools
  • Model-centric mitigations that fail to address real-world harms

The article emphasizes that harms are often a product of complex interactions within sociotechnical systems rather than merely the capabilities of individual AI models. A comprehensive understanding of these interactions is crucial for effective risk management.
