Experts Across Tech Sector Share Their Views On EU AI Act Changes Coming Into Force
Preparatory obligations related to high-risk AI systems take effect today as the EU AI Act moves closer to full application. These rules apply to organizations inside and outside the EU that place AI systems on the EU market or use them within the bloc.
The European Commission states that the law addresses safety and rights risks tied to certain AI uses, declaring, “The AI Act is the world’s first comprehensive law for AI. It aims to address risks to health, safety, and fundamental rights.”
High-Risk AI Systems
High-risk systems cover AI used in areas such as recruitment screening, credit scoring, access to healthcare, educational assessment, and law enforcement. These applications can influence decisions about individuals in direct and lasting ways.
The Commission links the law to trust, stating that uneven national rules and legal uncertainty have slowed the uptake of AI across the EU, creating the need for a single framework.
What Do Providers Need To Do?
Providers must complete a conformity assessment before a high-risk AI system is placed on the market or put into service. This assessment checks risk management, data governance, technical documentation, transparency, human oversight, accuracy, and cybersecurity.
A quality management system must be in place across the system’s lifecycle. According to the Commission, “Providers of high risk AI systems remain responsible for the safety and compliance of the system throughout its lifecycle.” Each high-risk system must be entered into a public EU database, allowing authorities to review this information as part of market surveillance.
If the system or its intended use changes meaningfully, the assessment must be repeated. For AI used as safety components in regulated products, Article 6 links these duties directly to third-party product conformity checks.
New Duties for Deployers and Public Authorities
Deployers must follow instructions for use and monitor how systems operate in practice. Human oversight must be assigned to staff with the authority to intervene when risks appear. Public authorities and organizations delivering public services must complete a fundamental rights impact assessment before first use, evaluating effects on rights protected under EU law alongside data protection duties.
Individuals affected by AI-supported decisions must be informed, and where a decision has legal effects, they can request an explanation. The Act mandates that deployers provide “a clear and meaningful explanation.”
Workplace use brings added notice duties, requiring that employees and workers’ representatives be informed before high-risk systems are deployed.
Classification and Guidance
The Act classifies high-risk AI by intended purpose, with Annex III listing sensitive uses in employment, education, migration, justice, and biometric identification. Providers can argue that an Annex III system is not high risk if it performs a narrow or preparatory task and does not influence outcomes; this assessment must be documented and shared with authorities on request.
The Commission plans to issue guidance with practical examples to support classification, aiming to provide businesses with clarity while maintaining protections for health, safety, and fundamental rights.
Penalties reinforce the rules, with fines for banned practices reaching €35 million or 7% of global annual turnover, whichever is higher, and lower thresholds for other breaches.
Expert Reactions to the Updates
Industry leaders provide insights on the implications of the EU AI Act:
- Ian Jeffs, UK&I Country General Manager at Lenovo Infrastructure Solutions Group, emphasizes the importance of clarity for businesses as AI transitions from experimentation to large-scale deployment. He notes that the upcoming guidance on high-risk classification and post-market monitoring will be critical for operationalizing compliance.
- Adam Spearing, VP of AI GTM at ServiceNow EMEA, describes February 2026 as a pivotal moment for responsible innovation in Europe. He believes proactive AI governance will accelerate business value rather than hinder it.
- Christian Kleinerman, EVP Product at Snowflake, highlights the importance of trust and safety as foundational elements in AI discourse. He urges businesses to prioritize transparency and governance to unlock a competitive advantage.
These expert opinions reflect the cautious optimism with which the tech sector is approaching the new regulatory landscape.