AI-Specific Diligence in Corporate Transactions
In an era of expanding AI use and oversight, AI-specific due diligence is a critical component of any corporate transaction where the target company leverages AI in a meaningful way.
The first instinct in many transactions is to route everything that looks technical into familiar lanes, such as intellectual property (IP), privacy, cybersecurity, and commercial contracting, on the theory that these disciplines, taken together, capture most technology-related risks in modern businesses. While traditional diligence may uncover some of these risks, it will not reliably surface the unique issues presented by AI, such as:
- Opaque model provenance
- Tainted training data
- Inadequate evaluation and monitoring practices
- Silent third-party model dependencies
- Shadow deployments by business teams
- Contract terms that shift liability from vendors to buyers
Additionally, AI presents more dynamic risks. AI systems learn from data, adapt to new contexts, and sometimes behave differently in production than they did in testing. The surrounding legal landscape is also evolving. Risk derives not only from what the law currently prohibits but also from what regulators and counterparties will expect a responsible organization to have done. These dynamic risks do not fit neatly within older diligence patterns, and understanding them is crucial for determining deal valuation, transaction structure, risk allocation, integration plans, and post-closing remediation budgets.
Because traditional diligence approaches may not fully capture AI-related concerns, it is imperative for buyers and their counsel to conduct specialized AI diligence to identify and mitigate these risks.
Key Considerations in Establishing an AI Diligence Framework
This article outlines key considerations for establishing an AI diligence framework, including:
- Preliminary AI diligence scoping decisions
- Substantive areas of inquiry into the target’s AI assets and uses
- Relevant legal and regulatory risks
- The target’s AI governance practices
Acquirers and investors who incorporate thorough AI diligence into their transaction playbook will be better positioned to close transactions with confidence and integrate targets smoothly and safely.
Key Factors Defining the Scope of AI Diligence
The depth and focus of AI diligence should be tailored to:
- Transaction Structure: The structure of the transaction influences the depth of diligence. For example, in a minority investment, the investor might perform a narrower review focused on specific concerns. In a full acquisition, a comprehensive review of all AI assets, systems, and practices is necessary.
- Target’s Commercialization of AI: Whether the target sells or licenses AI solutions to customers or uses AI only internally shapes the inquiry. Commercialized AI raises product, customer contract, and IP ownership questions that internal-use AI typically does not.
- Extent and Materiality of AI Use: The diligence team should assess how central AI is to the target’s revenue and operations. Incidental use of off-the-shelf tools warrants a far lighter review than AI embedded in core products or decision-making.
- Role as AI Developer Versus AI Deployer: Counsel should consider whether the target is primarily an AI developer or an AI deployer. Developers typically carry greater IP and training data risk, while deployers face vendor dependency and liability allocation risk.
- Relevant Industries and Jurisdictions: Both the target’s industry sector (for example, healthcare or financial services) and the jurisdictions in which it operates can greatly affect AI risk exposure and the applicable regulatory regimes.
Sources of Information
Effective AI diligence relies on gathering comprehensive information from several critical sources, including:
- Model cards
- Internal risk or impact assessments
- Technical whitepapers
- Testing and validation results
- Audit results
- Training data summaries
- Policies or procedures governing AI development and use
Direct interviews with the target’s technical teams can be invaluable, illuminating how AI tools are used in practice and uncovering known limitations or incidents.
Substantive AI Diligence
After determining the scope and assembling the right team, the buyer should conduct a deep dive into substantive areas of AI risk and compliance, including:
- Proprietary development of AI technology
- Use of third-party AI technology
- Deployment of AI technology in the target business
- Training data
- Generative AI (GenAI) use
Evaluating proprietary AI development requires assessing the ownership, quality, and risk profile of the target’s AI assets. Identifying third-party dependencies is equally critical, as many AI systems incorporate open-source libraries or pre-trained models whose license terms may restrict commercial use or impose downstream obligations.
Training Data and Data Governance
The buyer should investigate the sources, legality, and quality of the data used to train the target’s AI models. Key considerations include:
- Collection Methods and Data Sources: Understanding how data was obtained is essential for assessing legal exposure.
- Data Preparation and Processing: How data was cleaned, labeled, and filtered affects both model performance and compliance obligations.
- Compliance with Licenses, Consents, and Permissions: Confirming the target had the right to use each dataset is crucial to mitigate risks.
- Bias and Fairness Assessments: Buyers should ascertain whether bias audits were performed on AI models.
Legal and Regulatory Landscape
AI-related laws and regulations are rapidly evolving. Buyers should identify the legal and regulatory risks that the target might face, including:
- Developing AI Laws: Counsel should assess whether the target tracks legislative developments relevant to its operations, such as the EU AI Act and comparable emerging frameworks.
- State AI Laws in the US: Diligence should identify whether the target operates in jurisdictions with AI-specific laws, such as the Colorado AI Act, and whether it has assessed its obligations under them.
- Consumer Protection Laws: Buyers should evaluate the target’s exposure to consumer protection claims arising from AI outputs, including allegations of unfair or deceptive practices.
AI Governance and Organizational Framework
Evaluating the target’s internal AI governance framework is increasingly important. The buyer should consider whether the target has implemented:
- Formal policies
- Employee training programs
- Oversight mechanisms
- Incident response processes
Contractual Protections and Risk Allocation
After identifying AI-related issues and risks, the buyer and counsel should address these findings in the transaction documents. Examples of contractual protections include:
- Representations and warranties
- Indemnification provisions
- Pre- and post-closing covenants
- Closing conditions
Incorporating thorough AI diligence into transaction playbooks positions acquirers and investors to navigate the complex and evolving landscape of AI-related risks with confidence.