Critical Evaluations of AI Compliance Under the EU Act

Thorough Reviews of Targets’ AI Capabilities Under the EU Act

The EU Artificial Intelligence Act's (AI Act) new obligations for general-purpose AI (GPAI) models take effect in August, with further requirements phasing in over the coming years. As artificial intelligence permeates more and more business sectors, dealmakers need to factor these rules into their decision-making.

The Impact of the AI Act on M&A

The AI Act imposes significant requirements on organizations that develop and deploy certain AI systems and GPAI models in the EU. In the mergers and acquisitions (M&A) context, this calls for enhanced due diligence. Dealmakers should confirm that targets maintain a clear inventory of their GPAI models and AI systems, understand how those systems are used, and are taking proactive steps toward compliance. That understanding will dictate the contractual protections needed in purchase agreements and will form the foundation of the target's ongoing compliance post-closing.
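As a purely illustrative aid, the sketch below shows one way such an inventory could be captured in structured form. The field names, roles, and example entry are assumptions made for the example, not terms taken from the AI Act or from any particular diligence checklist.

```python
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    """Role the target plays for a given asset (simplified, illustrative labels)."""
    PROVIDER = "provider"
    DEPLOYER = "deployer"


@dataclass
class AIAsset:
    """One entry in a target's AI inventory (hypothetical fields for illustration)."""
    name: str
    description: str
    is_gpai_model: bool          # a general-purpose AI model rather than an AI system
    role: Role                   # how the target supplies or uses it
    intended_purpose: str        # what the system is actually used for
    available_in_eu: bool        # relevant to the Act's territorial scope
    compliance_notes: str = ""   # e.g. documentation gathered, conformity steps taken


# Example of the kind of inventory a buyer might ask a target to produce
inventory: list[AIAsset] = [
    AIAsset(
        name="Support chatbot",
        description="Customer-facing assistant built on a third-party GPAI model",
        is_gpai_model=False,
        role=Role.DEPLOYER,
        intended_purpose="First-line customer support",
        available_in_eu=True,
    ),
]
```

Even a simple register of this kind gives the buyer a starting point for mapping assets to the Act's obligations and for drafting the corresponding representations and warranties.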

Regulatory Approach of the AI Act

Lawmakers adopted a risk-based approach in the AI Act: the level of risk associated with an AI system determines the degree of regulatory obligation. High-risk AI systems, such as those used in medical devices, are subject to stringent requirements, while minimal-risk AI systems face few or no specific obligations. The Act also imposes obligations on GPAI models, which, although not themselves classified as AI systems, can be used as components within them.
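The tiering logic can be pictured roughly as follows. The tier names, example use cases, and their mapping are simplified assumptions for illustration; they do not reproduce the Act's legal definitions or annexes.

```python
from enum import Enum


class RiskTier(Enum):
    """Simplified tiers loosely mirroring the Act's risk-based approach."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk: stringent requirements"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: few or no specific obligations"


# Hypothetical mapping from use cases to tiers, for illustration only;
# actual classification turns on the Act's definitions and annexes.
EXAMPLE_TIERS = {
    "safety component of a medical device": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def tier_for(use_case: str) -> RiskTier:
    """Look up an illustrative tier, defaulting to MINIMAL if the use case is unlisted."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)


print(tier_for("safety component of a medical device"))  # RiskTier.HIGH
```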

Current Trends in AI Usage in Sales Decks

At present, AI features rarely take center stage in sales materials; AI-related issues tend to surface through buyer-led due diligence rather than being showcased by sellers in their decks. Conducting a thorough review of a target's AI capabilities early in the due diligence process is therefore critical, as these technologies significantly affect compliance with applicable legislation.

Global Implications of the AI Act

The AI Act applies not only to entities within the EU but also to those outside its borders. For example, if an EU-based company engages a firm outside the EU that uses a high-risk AI system and the system's output is used in the EU, the Act's provisions will still apply. Likewise, providers outside the EU that place AI systems or GPAI models on the EU market must comply.

Enforcement and Penalties

Non-compliance with the AI Act can lead to hefty fines of up to €35 million or 7% of a company's global annual turnover, whichever is greater. As with the General Data Protection Regulation (GDPR), the AI Act sets out factors to be considered when determining administrative fines, which makes it difficult to predict the penalty for any specific violation.
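The "whichever is greater" ceiling is simple to work through numerically; the turnover figure in the sketch below is invented for illustration, and actual fines may of course be far lower than the cap.

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_rate: float = 0.07) -> float:
    """Ceiling on the administrative fine: the greater of the fixed cap and
    the percentage of worldwide annual turnover (actual fines may be far lower)."""
    return max(fixed_cap_eur, turnover_rate * global_annual_turnover_eur)


# A company with EUR 2 billion in global annual turnover faces a ceiling of
# EUR 140 million, since 7% of turnover exceeds the EUR 35 million fixed cap.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```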

Potential Adjustments to the AI Act

There have been discussions regarding the possibility of adjusting the AI Act, particularly concerning exemptions for small and medium-sized enterprises (SMEs). Some reports suggest that EU lawmakers might consider pausing enforcement of the Act to enhance competitiveness.

Responsibilities of PE Sponsors

Private equity sponsors might bear responsibility for any breaches of the AI Act committed by their portfolio companies. Regulators are expected to analyze whether the investors exert ‘decisive influence’ over the portfolio companies, similar to the scrutiny applied in competition law and GDPR cases.

Internal Uses of AI by GPs

Sponsors should evaluate AI systems available in the market to determine their potential benefits, such as improving decision-making processes and data analytics. They must also assess whether these systems fall under the scope of the AI Act and what obligations they must fulfill as either providers or deployers.
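One way to picture that assessment is as a short set of screening questions per tool, as in the sketch below. The questions, field names, and outputs are illustrative assumptions rather than a legal test, and any scoping conclusion would need to be confirmed with counsel.

```python
from dataclasses import dataclass


@dataclass
class ScopeScreen:
    """Illustrative screening of one internal AI tool against the Act's scope."""
    tool_name: str
    used_in_eu: bool             # is the system, or its output, used in the EU?
    sponsor_develops_it: bool    # developing or placing on the market points to a provider role
    sponsor_uses_it: bool        # use under the sponsor's own authority points to a deployer role

    def likely_obligation_sets(self) -> list[str]:
        """Rough indication of which obligation sets to examine further; not a legal test."""
        if not self.used_in_eu:
            return ["likely outside territorial scope (verify with counsel)"]
        roles = []
        if self.sponsor_develops_it:
            roles.append("provider obligations")
        if self.sponsor_uses_it:
            roles.append("deployer obligations")
        return roles or ["unclear; requires legal analysis"]


print(ScopeScreen("deal-screening analytics", True, False, True).likely_obligation_sets())
# ['deployer obligations']
```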

Conclusion: Navigating the Complex Regulatory Landscape

The obligations imposed by the AI Act are considerable and should be addressed by companies well before the effective date. Understanding the nuances of AI systems and their implications on compliance is essential for organizations operating in this evolving landscape. As the regulatory environment matures, companies must remain vigilant and proactive in adapting to these changes, ensuring that they meet their obligations while leveraging AI technologies effectively.
