The Perils of ‘Good Enough’ AI in Compliance

When ‘Good Enough’ AI Gets You Fined (or Fired!)

This article explores the implications of relying on ‘good enough’ AI in legal and risk advisory contexts, emphasizing the potential for significant consequences when accuracy is sacrificed for speed.

The Temptation of Speed

In a world increasingly obsessed with faster and cheaper outputs, AI has made the notion of ‘good enough’ particularly appealing. For instance, the availability of tools that can generate obligation maps or summarize extensive regulatory clauses with mere prompts has revolutionized workflows. However, this convenience comes with a caveat: compliance is not merely a transactional process; it is a contract with regulators, stakeholders, and the public.

When shortcuts fail to meet regulatory standards, the defense of “We used AI” may not absolve organizations from liability. Instead, it could raise expectations regarding accountability and diligence.

Speed ≠ Safety: The Case of the Collapsing Proposal

A recent real-life example illustrates this point. A multinational firm seeking compliance with niche regulations invited proposals from various vendors. One vendor highlighted their use of expertly curated obligation libraries and legal oversight, while another claimed to offer a comprehensive platform that could automatically manage all obligations and controls.

During due diligence, the latter vendor admitted they could deliver speed but not assurance of accuracy. They could not guarantee that their tool’s recommendations would satisfy regulatory scrutiny. When pressed, they refused to underwrite the output, leading to the collapse of their value proposition. This scenario underscores the necessity for expert oversight in AI applications, particularly in complex regulatory environments.

Context ≠ Comprehension: When Automation Misses Real-World Controls

Another cautionary tale involves a high-risk venue operator that initially relied on AI-generated risk controls to enforce compliance rules, such as prohibiting underage patrons. While the AI proposed various complex measures based on industry practices, it overlooked a fundamental requirement: the presence of two full-time security staff checking patrons at entry. This highlights a critical limitation of AI: it cannot recognize controls that exist in practice but are not documented.

When AI Belongs in Your Compliance Stack

Despite these warnings, this is not a blanket condemnation of AI usage. When implemented correctly, AI can add significant value to risk and compliance processes, including:

  • Scanning policy libraries for inconsistent language
  • Flagging emerging risks from complaints or case data in real time
  • Improving data quality at the point of capture
  • Drafting baseline documentation for expert review
  • Identifying change impacts across jurisdictions and business units

These use cases illustrate a pattern where AI handles volume and repetition, while humans manage nuance and insight. The most effective implementations treat AI as an accelerant rather than a replacement, recognizing the need for a clear distinction between support and substitution.
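The first use case above, scanning a policy library for inconsistent language, lends itself to a simple automated pass that surfaces findings for human review rather than acting on them. The sketch below is illustrative only: the term variants, file names, and policy text are hypothetical, and a real implementation would draw on an organization's own controlled vocabulary.

```python
import re

# Hypothetical controlled vocabulary: preferred terms mapped to the
# variant spellings that should be flagged for human review.
TERM_VARIANTS = {
    "third party": ["3rd party", "third-party vendor", "external party"],
    "personal data": ["personal information", "PII"],
}

# Illustrative stand-in for a policy library; real use would load documents.
policies = {
    "vendor_policy.txt": "All 3rd party engagements require review. "
                         "Third party risk is assessed annually.",
    "privacy_policy.txt": "Personal information must be encrypted. "
                          "PII access is logged.",
}

def flag_inconsistent_terms(policies, term_variants):
    """Return (document, variant_found, preferred_term) tuples for review."""
    findings = []
    for doc, text in policies.items():
        for preferred, variants in term_variants.items():
            for variant in variants:
                if re.search(re.escape(variant), text, re.IGNORECASE):
                    findings.append((doc, variant, preferred))
    return findings

for doc, variant, preferred in flag_inconsistent_terms(policies, TERM_VARIANTS):
    print(f"{doc}: found '{variant}', preferred term is '{preferred}'")
```

Note that the tool only flags; the decision to harmonize terminology stays with a human reviewer, matching the pattern of AI handling volume while humans manage nuance.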

Key Questions Before Implementing AI Tools

As regulatory frameworks shift from rule-based assessments to ‘reasonable steps’ accountability, the pivotal question evolves from “Did we comply?” to “Can we demonstrate that we understood the risk and chose appropriate tools to manage it?” If your AI-assisted compliance efforts cannot explain their logic, show exclusions, or withstand scrutiny, you may be facing a liability rather than a time-saving solution.

Before integrating an ‘all-in-one automation’ solution, consider:

  • Will this tool produce explainable and auditable outcomes?
  • Is there clear human oversight at every high-risk stress point?
  • Can we justify the decision to use this tool, particularly if something goes awry?

If the answer to any of these questions is no, you risk undermining your compliance strategy instead of enhancing it.
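The three questions above can be made operational by recording every AI recommendation in an auditable form and blocking high-risk recommendations that lack a named human reviewer. The sketch below is a minimal illustration under assumed conventions: the record fields, the two-level risk classification, and the sample obligations are hypothetical, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ToolDecisionRecord:
    """Hypothetical audit record for one AI-generated recommendation."""
    obligation: str
    ai_recommendation: str
    risk_level: str                      # assumed levels: "low" or "high"
    reviewed_by: Optional[str] = None    # named human reviewer, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_auditable(self) -> bool:
        """High-risk recommendations need a human reviewer on file."""
        return self.risk_level == "low" or self.reviewed_by is not None

# Illustrative records only; obligations and names are invented.
records = [
    ToolDecisionRecord("age verification at entry", "install ID scanners",
                       "high", reviewed_by="compliance.lead"),
    ToolDecisionRecord("signage update", "reword public notice", "low"),
    ToolDecisionRecord("AML screening rule", "auto-approve small transfers",
                       "high"),  # no reviewer: fails the oversight check
]

for r in records:
    status = "OK" if r.is_auditable() else "BLOCKED: needs human sign-off"
    print(f"{r.obligation}: {status}")
```

A record like this gives a concrete answer when something goes awry: it shows what the tool recommended, who signed off, and when, which is exactly the explainability the checklist demands.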

The Final Takeaway

In an era that values speed, it is crucial to remember that speed without precision is merely a rounding error waiting to escalate into a major issue. Compliance leaders have a responsibility to ensure that expedited processes do not compromise accuracy, and that accountability is clearly assigned when mistakes occur.

Ultimately, in the age of ‘good enough’ AI, being merely good is no longer sufficient—being right is essential.
