Global AI Regulation: Establishing Standards and Managing Risks

The EU AI Act: What It Does

The AI Act establishes a regulatory framework for artificial intelligence systems, setting standards based on the nature of their application and the level of risk they pose.

Prohibited Practices

At its strictest, the Act prohibits AI uses that infringe on fundamental rights, particularly privacy. These include:

  • Biometric surveillance in public places, such as facial recognition or gait analysis used to identify and track individuals without their consent—practices that raise serious concerns about privacy and civil liberties.
  • Social scoring systems that evaluate individuals based on their behavior, actions, or personal traits to determine access to services and benefits or to impose penalties, a practice widely criticized for fostering discrimination and violating human rights.

Transparency

The Act also emphasizes transparency in the development and deployment of AI technologies by:

  • Promoting systems that are explainable, accountable, and accessible to human understanding—moving away from opaque, “black box” models.
  • Requiring clarity around an AI system’s design, the data it processes, and the logic behind its decisions.

The Act aims to build trust in AI, ensure fairness, support regulatory compliance, and allow for the identification and mitigation of potential biases.
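
As a rough sketch of how these disclosure duties might be operationalized internally (the Act prescribes no particular format, and every field name below is an assumption for illustration), an organization could keep a structured documentation record alongside each deployed system:

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyRecord:
    """Hypothetical documentation record supporting the disclosure duties above."""
    system_name: str
    intended_purpose: str
    training_data_sources: list[str]   # what data the system processes
    decision_logic_summary: str        # plain-language account of its decision logic
    known_limitations: list[str] = field(default_factory=list)

# Example entry for a hypothetical hiring tool.
record = TransparencyRecord(
    system_name="resume-screening-model",
    intended_purpose="Rank job applicants for recruiter review.",
    training_data_sources=["historical hiring outcomes (anonymized)"],
    decision_logic_summary="Gradient-boosted classifier over structured resume features.",
    known_limitations=["may underweight non-traditional career paths"],
)
print(record.system_name, "->", record.intended_purpose)
```

A record like this gives auditors and regulators a single artifact describing a system's design, data, and decision logic.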

Risk Classification of AI Systems

The AI Act adopts a tiered, risk-based framework to regulate AI, categorizing systems into four distinct classes based on their potential impact:

  • Minimal risk: These systems pose little to no threat to users or society. An example is an AI algorithm that recommends videos based on viewing history.
  • Limited risk: This category includes foundation models—large-scale AI systems trained on extensive and diverse datasets that serve as the backbone for many generative AI applications, such as chatbots. While generally safe, these systems are subject to transparency and disclosure requirements to ensure responsible use.
  • High risk: AI systems in this class carry significant potential to affect health, safety, or fundamental rights. Examples include algorithms used in hiring or employee performance evaluations, where biased outcomes could lead to discrimination or unfair treatment.
  • Unacceptable risk: These uses are deemed incompatible with EU values and are strictly prohibited, including AI systems designed to manipulate behavior or infringe on rights.
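
The tiering above lends itself to a simple illustrative encoding. The sketch below is not drawn from the Act's text; the tier names, example use cases, and obligation summaries are assumptions condensed from the list above:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative encoding of the AI Act's four risk tiers."""
    MINIMAL = 1       # e.g., video-recommendation algorithms
    LIMITED = 2       # e.g., chatbots built on foundation models
    HIGH = 3          # e.g., hiring or performance-evaluation tools
    UNACCEPTABLE = 4  # e.g., behavioral manipulation; prohibited outright

# Coarse summary of what each tier implies (simplified from the list above).
OBLIGATIONS = {
    RiskTier.MINIMAL: "no specific obligations",
    RiskTier.LIMITED: "transparency and disclosure requirements",
    RiskTier.HIGH: "strict conformity, oversight, and documentation duties",
    RiskTier.UNACCEPTABLE: "prohibited outright",
}

for tier in RiskTier:
    print(f"{tier.name}: {OBLIGATIONS[tier]}")
```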

Penalties and Global Reach

The AI Act imposes strict financial penalties for serious violations, with fines reaching up to 7% of a company's total global annual revenue. Its provisions apply not only to EU-based organizations but also to companies outside the EU that offer AI products or services within its borders. And because AI technologies have been in use for over half a century, the regulation may also affect existing systems already deployed across various industries.
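
To make the penalty ceiling concrete, here is a minimal arithmetic sketch. The 7% figure comes from the paragraph above; the EUR 35 million fixed floor reflects the Act's stated maximum for the most serious violations (whichever amount is higher applies), and the revenue figure is purely illustrative:

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Ceiling on fines for the most serious violations: the greater of a
    fixed EUR 35 million or 7% of total worldwide annual revenue.
    Illustrative only; not legal advice."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# A company with EUR 2 billion in global annual revenue: 7% (EUR 140 million)
# exceeds the fixed floor, so the 7% figure sets the ceiling.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```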

Insurance Implications of AI-driven Risks

Triggering Events

AI technologies introduce a range of potential exposures that can trigger insurance claims, including:

  • Manipulated and falsified information: Deepfake videos and voice cloning can enable security breaches and sophisticated social engineering attacks.
  • Hallucinations and misinformation: AI-generated outputs that are false or misleading may lead to liability for directors, officers, and professionals who rely on them in decision-making.
  • Privacy violations: Sharing sensitive data with AI systems may breach contracts, privacy laws, or regulatory obligations.
  • Intellectual property infringement: AI-generated content can unlawfully replicate or misuse protected assets such as images, code, music, trademarks, or personal identifiers used in training datasets.
  • Model bias: Systematic errors in AI models can produce discriminatory or unfair outcomes, exposing organizations to reputational and legal risk.
  • False advertising (AI washing): Misrepresenting AI capabilities or minimizing associated risks may result in regulatory scrutiny or consumer claims.

These risks can lead to financial loss, legal liability, property damage, or even bodily injury—implicating a broad spectrum of insurance coverage. Relevant policies may include Cyber Liability, Directors & Officers (D&O), Errors & Omissions (E&O), Media Liability, Employment Practices Liability (EPL), Products Liability, and General Liability.
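
As an illustration only, the exposures and policy lines above can be paired in a simple lookup table. The pairings below are assumptions, not coverage advice; whether a given policy responds depends on its wording and the facts of the claim:

```python
# Hypothetical pairing of AI-driven exposures with the policy lines named
# above. Purely illustrative; actual coverage turns on policy wording.
EXPOSURE_TO_COVERAGE = {
    "deepfakes and social engineering": ["Cyber Liability"],
    "hallucinations relied on in decisions": ["D&O", "E&O"],
    "privacy violations": ["Cyber Liability", "E&O"],
    "IP infringement in generated content": ["Media Liability"],
    "model bias in hiring": ["EPL"],
    "AI washing": ["D&O"],
    "AI-caused damage or injury": ["Products Liability", "General Liability"],
}

def candidate_policies(exposure: str) -> list[str]:
    """Return the illustrative policy lines that might respond to an exposure."""
    return EXPOSURE_TO_COVERAGE.get(exposure, [])

print(candidate_policies("model bias in hiring"))  # ['EPL']
```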

To address these emerging exposures, AI-specific insurance products have been developed. Brokers can assist organizations in identifying and securing coverage tailored to their unique AI risk profile.

The Evolving Landscape of AI Regulation

AI governance is shifting rapidly across jurisdictions. The Biden administration introduced federal compliance rules scheduled to take effect on May 15, 2025, but those regulations were rescinded by the current administration. An Executive Order outlining guiding principles for AI development remains in force, and the current administration has emphasized AI competitiveness as a strategic priority for maintaining U.S. leadership in technology, making comprehensive federal legislation unlikely in the near term.

In the absence of federal mandates, individual states have begun to act. Colorado has passed legislation modeled after the EU’s AI Act, and other states are expected to follow suit.

Internationally, regulatory momentum is building. UK authorities are pursuing sector-specific AI rules, while the European Union is advancing a unified legal framework that applies across all industries, regulated or not. The EU is also reforming liability standards for AI systems and AI-enhanced products, aiming to simplify the process for victims to seek compensation.

Globally, experts have identified over 70 jurisdictions with draft AI legislation under review. As the pace of AI innovation accelerates, regulatory frameworks will continue to expand, shaping how organizations develop and deploy AI technologies. Risk professionals must remain vigilant, ensuring that their risk transfer strategies and management programs evolve in step with this dynamic regulatory environment.
