Mitigating AI-related Risks: Approaches to Regulation
The development of artificial intelligence (AI) has sparked a global debate on how to manage the associated risks, particularly as existing legal frameworks struggle to keep pace with rapid technological advances. Regulatory responses vary significantly across countries and regions, producing a spectrum of approaches exemplified by three main jurisdictions: the United States, the European Union, and the United Kingdom.
Regulatory Landscape Overview
As AI technologies push ethical boundaries, the lack of a cohesive regulatory framework has prompted discussion of how to manage these risks effectively. The AI Action Summit in Paris underscored this variability in regulatory approaches: it emphasized inclusiveness and openness in AI development while addressing safety and specific risks only vaguely.
The US Approach: Innovate First, Regulate Later
The United States currently lacks comprehensive federal legislation specifically targeting AI. Instead, it relies on voluntary guidelines and market-driven solutions. Key legislative efforts include the National AI Initiative Act, aimed at coordinating federal AI research, and the National Institute of Standards and Technology’s (NIST) voluntary risk management framework.
In October 2023, an Executive Order issued by President Biden aimed to enhance standards for critical infrastructure and regulate federally funded AI projects. However, the regulatory landscape is fluid, as evidenced by President Trump’s revocation of this order in January 2025, indicating a potential shift towards prioritizing innovation over regulation.
Critics argue that this fragmented approach produces a patchwork of rules without enforceable standards, especially regarding privacy protection. Yet states are increasingly introducing AI legislation of their own, suggesting a growing recognition of the need for regulation that does not stifle innovation.
The EU Approach: Damage-Prevention Regulation
In contrast, the European Union has taken a more proactive stance on AI regulation. The Artificial Intelligence Act, which entered into force in August 2024, is widely regarded as the most comprehensive AI regulation to date. The Act takes a risk-based approach, imposing stringent rules on high-risk AI systems, such as those used in healthcare, while allowing low-risk applications to operate with minimal oversight.
Similar to the General Data Protection Regulation (GDPR), the AI Act mandates compliance not only within EU borders but also for any AI provider operating in the EU market. This could pose challenges for non-EU providers of integrated products. Critics have pointed out weaknesses in the EU’s approach, including its complexity and the lack of clarity in technical requirements.
The UK’s Middle Ground Approach
The United Kingdom has adopted a “lightweight” regulatory framework that sits between the EU’s stringent regulation and the more relaxed US approach. The framework is grounded in principles of safety, fairness, and transparency, with existing regulators empowered to enforce these principles.
The establishment of the AI Safety Institute (AISI) in November 2023 marked a significant step toward evaluating the safety of advanced AI models and promoting international standards. Criticisms nonetheless remain regarding the Institute’s limited enforcement capabilities and the lack of a centralized regulatory authority.
Global Cooperation on AI Regulation
As AI technology evolves, the disparities in regulatory approaches are likely to become increasingly pronounced. There is an urgent need for a coherent global consensus on key AI-related risks. International cooperation is crucial to establish baseline standards that address risks while fostering innovation.
Global organizations such as the Organisation for Economic Co-operation and Development (OECD) and the United Nations are actively working to develop international standards and ethical guidelines for AI. The industry must find common ground swiftly, as the pace of innovation continues to accelerate.