When AI Is Forced on Compliance: The ECCP as Your Guide
The email arrives with no warning. The business has selected an AI platform, IT is already integrating it, and a pilot is underway. The Board of Directors is enthusiastic. The Chief Compliance Officer has been asked to “provide governance” within one week.
Where Do You Begin?
The answer is straightforward: begin with the U.S. Department of Justice’s (DOJ) 2024 update to its Evaluation of Corporate Compliance Programs (ECCP).
The ECCP makes it explicit: prosecutors will assess how companies identify, manage, and control risks arising from new and emerging technologies, including artificial intelligence, both in business operations and within compliance programs themselves. This prosecutorial mandate supplies the roadmap for responding to management’s request.
Reframe AI as a DOJ Risk Assessment Issue
Start by treating AI not merely as a technical deployment but as a risk assessment obligation. The ECCP clearly states that risk assessments must evolve as internal and external risks change, specifically highlighting AI as a technology requiring affirmative analysis. Prosecutors will inquire whether the company assessed how AI could impact compliance with criminal laws, whether AI risks were integrated into enterprise risk management, and whether controls exist to ensure AI is used only for its intended purposes.
For the Chief Compliance Officer (CCO), this necessitates formally incorporating AI use cases into the compliance risk assessment. If AI influences investigations, monitoring, training, third-party diligence, or reporting, it falls under DOJ scrutiny.
Inventory Before You Draft Policy
The ECCP does not reward aspirational policies unsupported by facts. Prosecutors want to understand why a company structured its compliance program the way it did. Before drafting AI governance frameworks, compliance must demand a full inventory of AI use:
- What tools are deployed or piloted;
- Which business functions use them;
- What data they ingest;
- Whether outputs are advisory or decision-shaping.
This inventory should explicitly include employee use of generative AI tools. The ECCP emphasizes managing insider misuse and the unintended consequences of new technology. Unmanaged “shadow AI” use is now a compliance failure, not merely an IT inconvenience.
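A simple structured record makes the inventory concrete. The Python sketch below is illustrative only; the field names (tool_name, output_role, sanctioned, and so on) are assumptions for the example, not a prescribed taxonomy, and a real inventory should map to the company’s own risk-assessment categories.

```python
from dataclasses import dataclass, field
from enum import Enum

class OutputRole(Enum):
    ADVISORY = "advisory"           # output informs a human decision
    DECISION_SHAPING = "decision"   # output materially drives the decision

@dataclass
class AIUseCase:
    """One row in the AI inventory a CCO should demand before drafting policy."""
    tool_name: str                  # vendor platform or internal model
    status: str                     # "deployed", "pilot", or "shadow" (unsanctioned)
    business_function: str          # e.g., "third-party diligence", "monitoring"
    data_ingested: list[str] = field(default_factory=list)  # categories of data the tool ingests
    output_role: OutputRole = OutputRole.ADVISORY
    owner: str = ""                 # accountable human owner
    sanctioned: bool = True         # False flags shadow-AI use for remediation

def shadow_ai(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Surface unsanctioned tools so they can be governed rather than ignored."""
    return [u for u in inventory if not u.sanctioned or u.status == "shadow"]
```

Even a spreadsheet with these columns serves the same purpose: compliance cannot govern what it has not counted.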
Focus on Decision Integrity, Not Model Design
One of the ECCP’s most overlooked insights is that the DOJ evaluates outcomes and accountability, not technical elegance. Where AI is used, prosecutors will ask:
- What decisions did the AI influence;
- What baseline of human judgment existed;
- How accountability was assigned and enforced.
Compliance officers should therefore center governance on decisions, not algorithms. If no one can explain how an AI output was evaluated, overridden, or escalated, the company cannot demonstrate that its compliance program works in practice. The ECCP explicitly asks what “baseline of human decision-making” is used to assess AI outputs and how accountability over AI use is monitored and enforced. This points to human-in-the-loop review, which should be treated as an internal control in a best-practices compliance program. Human-in-the-loop controls must be real, documented, and empowered.
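One way to make a human-in-the-loop control “real, documented, and empowered” is to log every AI-influenced decision with its human disposition. The sketch below is hypothetical; DecisionRecord, Disposition, and the required-rationale rule are assumptions about what such a control could capture, not an ECCP-mandated schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Disposition(Enum):
    ACCEPTED = "accepted"       # human concurred with the AI output
    OVERRIDDEN = "overridden"   # human reached a different conclusion
    ESCALATED = "escalated"     # routed to compliance for review

@dataclass(frozen=True)
class DecisionRecord:
    """Audit-ready evidence that a human evaluated an AI output."""
    use_case: str               # ties back to the inventory entry
    ai_output_summary: str      # what the model recommended or flagged
    reviewer: str               # the empowered human in the loop
    disposition: Disposition
    rationale: str              # why accepted, overridden, or escalated
    timestamp: datetime

def log_decision(record: DecisionRecord, ledger: list[DecisionRecord]) -> None:
    """Append-only log: the documentation prosecutors will ask to see."""
    if not record.rationale.strip():
        raise ValueError("A human rationale is required for every AI-influenced decision.")
    ledger.append(record)

# Example: an analyst escalates rather than rubber-stamps an AI flag.
ledger: list[DecisionRecord] = []
log_decision(DecisionRecord(
    use_case="third-party diligence screen",
    ai_output_summary="Flagged vendor for sanctions-list proximity",
    reviewer="diligence analyst",
    disposition=Disposition.ESCALATED,
    rationale="Potential match requires investigations review",
    timestamp=datetime.now(timezone.utc),
), ledger)
```

The design choice that matters is the required rationale: an override or acceptance without a recorded reason is indistinguishable from no human review at all.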
Demand Explainability for Boards and Regulators
The DOJ does not expect boards to understand machine learning architectures. However, it does expect boards to exercise informed oversight. The ECCP repeatedly inquires whether compliance can explain risks, controls, and failures to senior management and the board. If a compliance officer cannot explain, in plain language, how AI affects compliance decisions, the program is not defensible. Every material AI use case should have a board-ready narrative:
- Why AI is used;
- What risks it creates;
- Where human judgment intervenes;
- How errors are detected and corrected.
This is not optional. Prosecutors will evaluate what information the board reviewed and how it exercised oversight.
Integrate AI Governance Into Existing Controls
The ECCP warns against “paper programs.” This means that AI governance cannot exist in a separate policy silo. AI-related controls must integrate with existing compliance structures such as investigations protocols, reporting mechanisms, training, internal audit, and data governance. If AI identifies misconduct, how is that information escalated? If AI supports investigations, how are outputs preserved and documented? If AI supports training, how is effectiveness measured? The DOJ will look for consistency in approach, documentation, and monitoring, not novelty.
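As an illustration of integration rather than a silo, an AI-generated finding can be routed through the same escalation channels as any other report, with the routing itself preserved. The channel names and severity tiers below are invented for the sketch; the point is that the output, the routing decision, and the timestamp are all documented end to end.

```python
from datetime import datetime, timezone

AUDIT_TRAIL: list[dict] = []  # stand-in for the existing case-management system

def escalate_ai_finding(finding: dict, severity: str) -> str:
    """Route an AI-generated finding through the same channels as any other report."""
    channels = {
        "high": "investigations_protocol",    # preserved for privileged review
        "medium": "compliance_triage_queue",  # human assessment within an SLA
        "low": "monitoring_log",              # trend analysis and audit trail
    }
    channel = channels.get(severity, "compliance_triage_queue")
    # Preserve the output and the routing decision itself, so the escalation
    # is documented from model flag to human disposition.
    AUDIT_TRAIL.append({
        "finding": finding,
        "severity": severity,
        "routed_to": channel,
        "routed_at": datetime.now(timezone.utc).isoformat(),
    })
    return channel
```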
Insist on Resources and Authority
The ECCP devotes significant attention to whether compliance functions are adequately resourced, empowered, and autonomous. If AI governance responsibility is assigned to compliance, then compliance must have access to data, technical explanations, and escalation authority.