What the EU AI Act Means for High-Risk Systems

Experts Across the Tech Sector Share Their Views as EU AI Act Changes Come Into Force

Preparatory obligations for high-risk AI systems take effect today as the EU AI Act moves closer to full application. The rules apply to organizations inside and outside the EU that place AI systems on the EU market or use them within the bloc.

The European Commission states that the law addresses safety and rights risks tied to certain AI uses, declaring, “The AI Act is the world’s first comprehensive law for AI. It aims to address risks to health, safety, and fundamental rights.”

High-Risk AI Systems

High-risk systems cover AI used in areas such as recruitment screening, credit scoring, access to healthcare, education assessment, and law enforcement. These applications can influence decisions about individuals in direct and lasting ways.

The Commission links the law to trust, stating that uneven national rules and legal uncertainty have slowed the uptake of AI across the EU, creating the need for a single framework.

What Do Providers Need To Do?

Providers must complete a conformity assessment before a high-risk AI system is placed on the market or put into service. This assessment checks risk management, data governance, technical documentation, transparency, human oversight, accuracy, and cybersecurity.

A quality management system must be in place across the system’s lifecycle. According to the Commission, “Providers of high risk AI systems remain responsible for the safety and compliance of the system throughout its lifecycle.” Each high-risk system must be entered into a public EU database, allowing authorities to review this information as part of market surveillance.

If the system or its intended use changes meaningfully, the assessment must be repeated. For AI used as safety components in regulated products, Article 6 links these duties directly to third-party product conformity checks.

New Duties for Deployers and Public Authorities

Deployers must follow instructions for use and monitor how systems operate in practice. Human oversight must be assigned to staff with the authority to intervene when risks appear. Public authorities and organizations delivering public services must complete a fundamental rights impact assessment before first use, evaluating effects on rights protected under EU law alongside data protection duties.

Individuals affected by AI-supported decisions must be informed, and where a decision has legal effects, they can request an explanation. The Act mandates that deployers provide “a clear and meaningful explanation.”

Workplace use brings added notice duties, requiring that employees and workers’ representatives be informed before high-risk systems are deployed.

Classification and Guidance

The Act classifies high-risk AI by intended purpose, with Annex III listing sensitive uses in employment, education, migration, justice, and biometric identification. Providers can argue that an Annex III system is not high risk if it performs a narrow or preparatory task and does not influence outcomes; this assessment must be documented and shared with authorities on request.

The Commission plans to issue guidance with practical examples to support classification, aiming to provide businesses with clarity while maintaining protections for health, safety, and fundamental rights.

Penalties reinforce the rules, with fines for prohibited practices of up to €35 million or 7% of global annual turnover, whichever is higher, and lower thresholds for other breaches.
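The penalty cap described above amounts to a simple calculation: the higher of the fixed floor and the turnover-based figure. A minimal illustrative sketch (the function name `max_fine_banned_practice` is our own, not from the Act):

```python
def max_fine_banned_practice(global_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited AI practices:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 1 billion in global turnover faces a cap of EUR 70 million,
# since 7% of turnover exceeds the EUR 35 million floor.
print(max_fine_banned_practice(1_000_000_000))  # 70000000.0
```

For smaller firms the fixed €35 million floor dominates: at €100 million turnover, 7% is only €7 million, so the cap stays at €35 million.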

Expert Reactions to the Updates

Industry leaders provide insights on the implications of the EU AI Act:

  • Ian Jeffs, UK&I Country General Manager at Lenovo Infrastructure Solutions Group, emphasizes the importance of clarity for businesses as AI transitions from experimentation to large-scale deployment. He notes that the upcoming guidance on high-risk classification and post-market monitoring will be critical for operationalizing compliance.
  • Adam Spearing, VP of AI GTM, ServiceNow EMEA, describes February 2026 as a pivotal moment for responsible innovation in Europe. He believes proactive AI governance will accelerate business value rather than hinder it.
  • Christian Kleinerman, EVP Product at Snowflake, highlights the importance of trust and safety as foundational elements in AI discourse. He urges businesses to prioritize transparency and governance to unlock a competitive advantage.

These expert opinions illustrate the cautious optimism with which the tech sector is approaching the new regulatory landscape.
