Understanding Compliance for Risky AI Systems in the Workplace

Risky AI Systems: An Overview

The emergence of Artificial Intelligence (AI) has brought significant advancements, but it also poses substantial risks. Understanding these risks is crucial for businesses and employers that deploy AI systems. This article examines the implications of the EU AI Act, a groundbreaking piece of legislation that regulates AI according to its risk level.

The EU AI Act: A World First

The EU AI Act entered into force on August 1, 2024, and its first obligations, including the bans on unacceptable-risk practices, began to apply on February 2, 2025. It is the first comprehensive legislative effort globally to regulate AI systems. The Act categorizes AI uses into risk levels ranging from minimal to unacceptable. This classification aims to enhance safety, transparency, and accountability while preventing discriminatory practices in AI applications.

Organizations need to be proactive in ensuring compliance to avoid potential fines and reputational damage.

Scope of the Act: Who Needs to Comply?

The Act applies not only to AI providers within the EU but also to providers and users located outside the EU whose AI outputs are used within the EU. For instance, a UK-based employer using an AI recruitment tool to hire for roles within the EU falls under the Act’s jurisdiction.

Risk Categories Defined

AI uses are classified into different risk categories:

  • Minimal Risk: Most AI systems currently available in the EU market; no additional obligations apply.
  • Limited Risk: Subject to light-touch transparency obligations, such as disclosing that a user is interacting with AI.
  • High Risk: Permitted, but subject to strict requirements around risk management, data quality, and human oversight.
  • Unacceptable Risk: Banned outright due to significant threats to users and society.

The maximum penalty for non-compliance can reach €35 million or 7% of a firm’s total worldwide annual turnover, whichever is higher.
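The "€35 million or 7% of turnover, whichever is higher" cap can be made concrete with a short arithmetic sketch. This is an illustration of the fine ceiling for the most serious infringements only, not legal advice; the function name is made up for this example.

```python
# Illustrative sketch: the ceiling on administrative fines for the most
# serious infringements under the EU AI Act is EUR 35 million or 7% of
# total worldwide annual turnover, whichever is higher.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a given annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A firm with EUR 1 billion turnover: 7% is EUR 70 million, which
# exceeds the EUR 35 million floor, so the higher figure applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0

# A smaller firm with EUR 100 million turnover: 7% is only EUR 7 million,
# so the EUR 35 million floor applies instead.
print(max_fine_eur(100_000_000))  # 35000000
```

The point of `max()` here is that the cap is whichever figure is larger, so large firms face turnover-based fines while the fixed floor still bites for smaller ones.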

High Risk AI Systems

High-risk AI systems include applications that affect fundamental rights, such as:

  • Biometric data categorization (e.g., AI in CCTV).
  • Education and training tools (e.g., detecting erratic student behavior).
  • Employment-related AI (e.g., HR decision-making and recruitment).
  • Justice administration (e.g., AI in alternative dispute resolution).

Systems in these categories qualify as high risk only where they pose a significant risk of harm to health, safety, or fundamental rights.

Unacceptable Risk AI Systems

As of February 2, 2025, certain AI systems have been categorized as unacceptable risk and are thus prohibited. Examples include:

  • Systems that socially score individuals.
  • Emotion recognition technologies in workplaces and schools.
  • Biometric categorization systems that infer sensitive attributes.

Action Steps for Employers

Employers must take immediate steps if their business falls within the Act’s scope:

  • Audits: Evaluate current AI systems for compliance with risk categories.
  • Policies: Establish governance policies to guide responsible AI usage.
  • Training: Educate employees about AI risks and responsibilities.
  • Supplier Compliance: Ensure third-party AI providers adhere to the Act.
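The audit step above, building an inventory of AI systems and triaging each against the Act's risk tiers, can be sketched as a first-pass screening. The use-case labels, the `triage` function, and the keyword sets below are illustrative assumptions for this example; they are not the Act's actual legal test, and every result would need confirmation by legal review.

```python
# Hypothetical first-pass audit sketch: triage an inventory of AI use
# cases into rough risk tiers. The categories mirror the examples in
# this article; the mapping itself is an assumption, not legal advice.

PROHIBITED_USES = {"social scoring", "workplace emotion recognition"}
HIGH_RISK_USES = {"recruitment", "biometric categorisation", "exam proctoring"}

def triage(use_case: str) -> str:
    """Return a rough risk tier for a use case, pending legal review."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    return "minimal/limited (confirm with legal review)"

# Example inventory an employer might compile during an audit.
inventory = ["recruitment", "spam filtering", "workplace emotion recognition"]
for use in inventory:
    print(f"{use}: {triage(use)}")
```

A real audit would record far more per system (supplier, data sources, affected individuals, human-oversight arrangements), but even a simple inventory like this surfaces prohibited uses that must stop immediately.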

Proactive compliance with the Act will help maintain a culture focused on people, mitigate substantial fines, and protect the organization’s reputation.
