Kickstarting Compliance with the EU AI Act: Four Essential Steps

The European Union’s Artificial Intelligence Act (AI Act) represents the world’s first comprehensive AI regulation, poised to affect companies far beyond Europe’s borders. This complex law governs the development, deployment, and use of AI systems and general-purpose AI models within the EU while also having extraterritorial scope that imposes obligations on many U.S.-based organizations that develop, sell, or use AI technologies.

How U.S. Companies Can Prepare for Compliance

The AI Act takes a staggered approach to application. The first set of obligations, focusing on prohibited AI practices and AI literacy, took effect on February 2, 2025. Requirements for providers of general-purpose AI models (GPAI) will take effect on August 2, 2025, with many remaining rules scheduled for August 2, 2026. Now is the time for U.S. companies to kickstart their compliance journeys. Below are four key steps to prepare for this new law.

1. Determine whether your organization is in scope

The first step is determining whether your organization falls within the scope of the EU AI Act. The Act applies broadly to providers, deployers, importers, and distributors of AI systems operating in the EU. It extends to organizations outside the EU that either (1) place AI systems on the EU market or put them into service within the EU, or (2) enable AI outputs to be used by individuals located in the EU (subject to certain exceptions). Given this expansive scope, compliance obligations may apply to various U.S.-based entities across different industries, including cloud service providers, security vendors, and companies offering services like identity verification, HR tools, customer service chatbots, threat detection, and AI-driven decision-making systems.

2. Identify where your AI systems land on the risk spectrum and whether they are GPAI

If your organization is covered, the next step is to understand how your product fits within the AI Act’s risk spectrum, which categorizes AI systems based on whether they pose unacceptable, high, limited, or minimal risk. This classification is essential because each tier has distinct obligations. Organizations should inventory the AI systems currently in use or under development and assess which systems fall into which tier to identify the applicable requirements. Special attention should be given to products that could be considered high-risk or potentially prohibited under the Act.

A brief description of the tiers and their respective obligations follows:

  • The Act prohibits the use of AI systems in connection with specific practices that entail unacceptable risk, such as social scoring by governments or certain real-time biometric surveillance.
  • High-risk systems, such as those used in critical infrastructure, recruitment, employment, education, life insurance, law enforcement, or identity verification, are subject to strict regulatory requirements under the AI Act. Obligations include mandatory risk assessments, detailed technical documentation, human oversight mechanisms, and cybersecurity and data governance controls.
  • Limited-risk AI systems must comply with certain transparency and/or marking requirements, primarily affecting systems designed to interact directly with natural persons.
  • Minimal-risk AI systems may be covered by voluntary codes of conduct to be established.

The Act also has specific rules for GPAI, effective August 2, 2025, which apply even to models placed on the market or put into service before that date. These models, trained with a large amount of data using self-supervision at scale, exhibit significant generality and can competently perform a wide range of distinct tasks. The AI Act imposes stricter rules (e.g., model evaluations, risk mitigation plans, incident reporting, and enhanced cybersecurity measures) for models that pose systemic risks.
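For organizations that want to track this inventory in a structured way, the sketch below shows one way to record each system's risk tier and GPAI status so the most heavily regulated systems can be surfaced first. It is a minimal illustration, not a legal determination: the tier names mirror the Act's four categories, but the `AISystem` fields, the example entries, and their classifications are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk categories, from most to least restrictive."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency duties)"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    """One entry in an AI system inventory (illustrative fields only)."""
    name: str
    use_case: str                # e.g., "recruitment screening"
    tier: RiskTier
    is_gpai: bool = False        # GPAI rules apply from August 2, 2025
    systemic_risk: bool = False  # stricter GPAI obligations if True

# Hypothetical inventory; these classifications are placeholders, not advice.
inventory = [
    AISystem("CVScreener", "recruitment screening", RiskTier.HIGH),
    AISystem("HelpBot", "customer service chatbot", RiskTier.LIMITED),
    AISystem("FoundationLM", "general-purpose model", RiskTier.MINIMAL,
             is_gpai=True),
]

# Review the most heavily regulated systems first.
for s in sorted(inventory, key=lambda s: list(RiskTier).index(s.tier)):
    flags = " [GPAI]" if s.is_gpai else ""
    print(f"{s.name}: {s.tier.value}{flags}")
```

Keeping GPAI status as a separate flag rather than a fifth tier reflects the Act's structure: the GPAI rules sit alongside the risk tiers rather than inside them.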

3. Design a governance program and take steps toward compliance

With the applicability analysis and AI system inventory complete, your organization should perform a gap assessment against its existing compliance measures. From there, you can identify the steps that still need to be taken. Creating cross-functional playbooks for classification, transparency, and oversight will pay dividends as the Act takes effect and enforcement begins. Examples of steps organizations should undertake for compliance readiness include:

  • Internal Governance: Establish internal AI governance committees to track use cases, update risk registers (a minimal register sketch follows this list), and engage legal, compliance, and security teams.
  • Risk Documentation and Technical Controls: Maintain detailed documentation for AI systems, particularly those categorized as high-risk. Implement technical controls and perform regular risk assessments in line with the AI Act’s requirements.
  • Human Oversight Mechanisms: Ensure qualified personnel can understand, monitor, and, where necessary, override automated decision-making processes to fulfill the AI Act’s human oversight requirements.
  • Third-Party Oversight and AI Literacy: Engage vendors, partners, and contractors to confirm that they maintain appropriate levels of AI governance and literacy, especially where their tools or services fall under the AI Act or are integrated into your own AI systems or GPAI.
  • Training and Awareness Programs: Implement an organization-wide AI training program with enhanced modules tailored to employees directly involved in AI development, deployment, or oversight.
  • Cyber Readiness: Although the Act does not specify data protection measures, this is an opportune moment to review and update your organization’s data and cybersecurity practices. Organizations may have existing obligations regarding EU data protection principles such as data minimization, purpose limitation, and lawful data sourcing, particularly when handling data from EU residents. Adding AI to the mix of products and services may add complexity, requiring additional security measures to prevent adversarial attacks, model manipulation, and unauthorized access, especially for high-risk systems.
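One way to connect the inventory to the gap assessment is a lightweight risk register that maps each high-risk system to its obligations and a completion status, so the remaining gaps are queryable. The sketch below is a rough illustration under stated assumptions: the obligation strings paraphrase the high-risk duties discussed above, and the `RegisterEntry` fields, status values, and `gap_report` helper are invented for this example.

```python
from dataclasses import dataclass

# Obligations paraphrased from the high-risk duties above; not exhaustive.
HIGH_RISK_OBLIGATIONS = [
    "risk assessment",
    "technical documentation",
    "human oversight mechanism",
    "cybersecurity and data governance controls",
]

@dataclass
class RegisterEntry:
    system_name: str
    obligation: str
    status: str  # e.g., "complete", "in progress", "not started"
    owner: str   # accountable team, e.g., "Legal", "Security"

def gap_report(register: list[RegisterEntry]) -> list[RegisterEntry]:
    """Return the entries still needing work: the gaps in the gap assessment."""
    return [e for e in register if e.status != "complete"]

# Seed the register for one hypothetical high-risk system.
register = [
    RegisterEntry("CVScreener", ob, "not started", "Compliance")
    for ob in HIGH_RISK_OBLIGATIONS
]
register[0].status = "complete"  # suppose the risk assessment is already done

for gap in gap_report(register):
    print(f"{gap.system_name}: {gap.obligation} ({gap.status}, owner: {gap.owner})")
```

A register like this also gives the governance committee a single artifact to review on a recurring basis as obligations phase in.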

4. Keep an eye on the U.S. (and other jurisdictions)

The U.S. does not yet have a national AI regulation analogous to the AI Act; however, companies cannot ignore domestic developments. Both the Biden Administration and the Trump Administration have issued executive orders on AI, but federal policymaking remains in its early stages. Significant activity is occurring at the state level. A robust compliance program should track emerging federal and state laws and consider how they interact with the AI Act.

One noteworthy example is the Colorado Artificial Intelligence Act, passed in 2024 and set to take effect in 2026. Like the AI Act, it employs a risk-based approach and imposes obligations on developers and deployers of high-risk AI systems. However, there are key differences, such as the Colorado law being more limited in scope and defining high risk more generally, rather than codifying specific uses as high risk.

Organizations should also monitor other markets, as additional jurisdictions may follow the EU’s lead in regulating AI.

Conclusion

Preparation should begin now: the first obligations already apply, and the August 2, 2025, GPAI deadline is fast approaching. The four steps above will help in-house professionals operationalize the Act's requirements and maintain compliance amid a rapidly evolving legal landscape.
