AI Compliance: A New Imperative in M&A Due Diligence

On 1 August 2024, the European Union Regulation on Artificial Intelligence (Regulation (EU) 2024/1689, known as the AI Act) entered into force, with phased application between 2025 and 2027.

As of 2 February 2025, the rules on prohibited AI practices and AI literacy obligations apply. From 2 August 2025, the rules on General-Purpose AI (GPAI) models took effect, and the high-risk AI regime becomes applicable on 2 August 2026 (with an extension to 2027 for certain regulated products).

Transforming AI into a Regulated Entity

This new risk-based regulatory framework transforms AI from a mere innovative technology into an object of precise regulation. It introduces requirements for traceability, human oversight, and accountability, backed by fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
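
The penalty cap works as the higher of the two figures. A minimal sketch of that arithmetic (the turnover figures are hypothetical; the cap shown is the one for the most serious violations):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Return the AI Act's top penalty cap for the most serious violations:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# For a group with EUR 1bn turnover, 7% (EUR 70m) exceeds the flat cap.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# For a smaller group, the EUR 35m floor applies.
print(max_fine_eur(100_000_000))    # 35000000.0
```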

In parallel, Italy has enacted Law No. 132/2025 in alignment with the AI Act, reinforcing principles of human-centricity, transparency, and security, particularly in critical sectors. The law mandates traceability of algorithmic decisions, human oversight, stronger protection of minors, and specific information obligations, and introduces criminal penalties for unlawful practices such as malicious deepfakes.

AI in Extraordinary Transactions

Over the past year, artificial intelligence has emerged as a significant factor in extraordinary transactions and is now a critical subject of evaluation during due diligence. The shift from traditional technological due diligence to AI risk due diligence means it is no longer sufficient to ask whether a target uses AI systems. Instead, it is essential to:

  • Map the types and purposes of AI systems
  • Understand the role of the company (supplier, integrator, or user)
  • Analyze the technological supply chain
  • Examine the models and datasets used
  • Identify rights of use and dependencies on third parties

This comprehensive mapping allows for the classification of systems according to the taxonomy of the AI Act (unacceptable, high, limited, minimal risk), enabling an estimation of compliance obligations, costs, and timelines that directly impact the post-acquisition business plan.
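
As a purely illustrative sketch, the mapping exercise can be thought of as building an inventory and assigning each system a risk tier. The purpose keywords and categories below are hypothetical simplifications; a real assessment follows the AI Act's Article 5 prohibitions and Annex III high-risk use cases, not keyword matching:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical purpose lists for illustration only.
PROHIBITED_PURPOSES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_PURPOSES = {"credit scoring", "recruitment screening", "medical triage"}
TRANSPARENCY_PURPOSES = {"chatbot", "content generation"}

@dataclass
class AISystem:
    name: str
    purpose: str             # what the system is used for
    role: str                # e.g. "provider", "deployer", "importer"
    third_party_model: bool  # dependency on an external supplier

def classify(system: AISystem) -> RiskTier:
    """Assign a (simplified) AI Act risk tier based on declared purpose."""
    if system.purpose in PROHIBITED_PURPOSES:
        return RiskTier.UNACCEPTABLE
    if system.purpose in HIGH_RISK_PURPOSES:
        return RiskTier.HIGH
    if system.purpose in TRANSPARENCY_PURPOSES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

inventory = [
    AISystem("hr-screener", "recruitment screening", "deployer", True),
    AISystem("support-bot", "chatbot", "provider", False),
]
report = {s.name: classify(s).value for s in inventory}
print(report)  # {'hr-screener': 'high', 'support-bot': 'limited'}
```

The point of such an inventory is that each tier carries a different cost and timeline profile, which feeds directly into the post-acquisition business plan.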

Emerging Red Flags

In light of this new regulatory environment, several red flags have emerged:

  • The use of AI systems in critical decisions without adequate governance and human oversight
  • Poorly documented historical datasets regarding provenance, licenses, and quality
  • Critical dependence on third-party suppliers without sufficient contractual safeguards
  • Absence of procedures for monitoring and managing AI-related incidents
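
The red flags above can be folded into a simple screening checklist. The following sketch is a hypothetical illustration (the profile keys are invented, not a standard schema) of how a due diligence team might flag a target for deeper review:

```python
def screen_target(profile: dict) -> list[str]:
    """Return the AI red flags raised by a target's due diligence profile.

    `profile` is a hypothetical summary of findings; keys are illustrative.
    """
    flags = []
    if profile.get("critical_decisions_without_oversight"):
        flags.append("AI used in critical decisions without human oversight")
    if not profile.get("dataset_provenance_documented", False):
        flags.append("Datasets poorly documented (provenance, licenses, quality)")
    if profile.get("third_party_dependency") and not profile.get("contractual_safeguards"):
        flags.append("Critical third-party dependence without contractual safeguards")
    if not profile.get("incident_procedures", False):
        flags.append("No AI incident monitoring/management procedures")
    return flags

target = {
    "critical_decisions_without_oversight": True,
    "dataset_provenance_documented": False,
    "third_party_dependency": True,
    "contractual_safeguards": False,
    "incident_procedures": True,
}
for flag in screen_target(target):
    print("RED FLAG:", flag)
```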

These issues pose legal, reputational, and operational risks, which can affect asset valuation, potentially leading to price discounts or even the abandonment of a transaction.

Necessity of AI Due Diligence

Conducting AI due diligence, which includes audits of training data, verification of consents, licenses, and analysis of code and documentation, is now indispensable. This process cannot be postponed until the post-closing phase.

Adapting Contractual Terms

Contractual agreements are also evolving. In addition to traditional Representations and Warranties, new clauses are being introduced to ensure proper classification and compliance of systems under the AI Act. These clauses address:

  • The absence of practices prohibited under Article 5 of the AI Act
  • Ownership of rights to the data and technologies used
  • Any hidden dependencies in the supply chain
  • The absence of regulatory violations

In cases of compliance gaps, measures such as remediation covenants, conditions precedent, and price adjustments are employed to allocate regulatory risk. Pre-closing governance constraints are also implemented to preserve compliance and transaction value.

AI Readiness in Venture Capital

The focus on AI readiness is growing in the venture capital sector. Enhanced disclosure rights, governance clauses, and obligations to allocate resources to compliance are becoming standard in term sheets. AI compliance is increasingly viewed as an indicator of management maturity and a protective measure at exit.

Conclusion: AI Compliance as a Value Lever

In conclusion, AI compliance serves as a significant value lever. It reduces discounts and penalties, expedites negotiations, increases attractiveness to investors (including cross-border investors), and enhances market confidence. The value of a tech company is now determined not just by its algorithms but by its ability to develop them sustainably and compliantly, transforming compliance from a regulatory obligation into a competitive advantage.
