The Perils of ‘Good Enough’ AI in Compliance

When ‘Good Enough’ AI Gets You Fined (or Fired!)

This article examines the implications of relying on ‘good enough’ AI in legal and risk advisory contexts, and the significant consequences that can follow when accuracy is sacrificed for speed.

The Temptation of Speed

In a world increasingly obsessed with faster, cheaper outputs, AI has made the notion of ‘good enough’ especially appealing. Tools that can generate obligation maps or summarize extensive regulatory clauses from a few prompts have transformed workflows. This convenience comes with a caveat, however: compliance is not merely a transactional process; it is a contract with regulators, stakeholders, and the public.

When shortcuts fail to meet regulatory standards, the defense of “We used AI” may not absolve organizations of liability. If anything, it raises the bar for accountability and diligence.

Speed ≠ Safety: The Case of the Collapsing Proposal

A recent real-life example illustrates this point. A multinational firm seeking compliance with niche regulations invited proposals from various vendors. One vendor highlighted their use of expertly curated obligation libraries and legal oversight, while another claimed to offer a comprehensive platform that could automatically manage all obligations and controls.

During due diligence, the latter vendor admitted they could deliver speed but not assured accuracy: they could not guarantee that their tool’s recommendations would satisfy regulatory scrutiny. When pressed, they refused to underwrite the output, and their value proposition collapsed. The episode underscores the necessity of expert oversight in AI applications, particularly in complex regulatory environments.

Context ≠ Comprehension: Automation Missing Real-World Controls

Another cautionary tale involves a high-risk venue operator that relied on AI-generated risk controls to enforce compliance rules such as prohibiting underage patrons. The AI proposed various elaborate measures drawn from industry practice, yet it missed a fundamental, long-standing control: two full-time security staff checking patrons at the door. This exposes a critical blind spot: AI cannot recognize controls that exist only in practice and were never documented.

When AI Belongs in Your Compliance Stack

Despite these warnings, this is not a blanket condemnation of AI usage. When implemented correctly, AI can add significant value to risk and compliance processes, including:

  • Scanning policy libraries for inconsistent language
  • Flagging emerging risks from complaints or case data in real time
  • Improving data quality at the point of capture
  • Drafting baseline documentation for expert review
  • Identifying change impacts across jurisdictions and business units

These use cases illustrate a pattern where AI handles volume and repetition, while humans manage nuance and insight. The most effective implementations treat AI as an accelerant rather than a replacement, recognizing the need for a clear distinction between support and substitution.
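
To make the accelerant-not-replacement pattern concrete, here is a minimal sketch of the first use case above: scanning a policy library for inconsistent language. Everything in it is an illustrative assumption (the plain-text file layout, the hardcoded synonym map, the function name scan_policies); a real deployment would draw its controlled vocabulary from your style guide or GRC platform. Note the deliberate limit: the script flags, it never rewrites.

```python
import re
from collections import defaultdict
from pathlib import Path

# Illustrative synonym map: variant phrasings that house style should
# standardize. In practice this comes from a controlled vocabulary,
# not a hardcoded dictionary.
TERM_VARIANTS = {
    "employee": ["staff member", "team member", "worker"],
    "must": ["shall", "is required to"],
}

def scan_policies(policy_dir: str) -> dict[str, list[tuple[str, str]]]:
    """Return {policy_file: [(preferred_term, variant_found), ...]}.

    The output is a review queue for a human editor; the script never
    edits a policy itself, so the judgment call stays with a person.
    """
    findings = defaultdict(list)
    for path in Path(policy_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        for preferred, variants in TERM_VARIANTS.items():
            for variant in variants:
                if re.search(rf"\b{re.escape(variant)}\b", text):
                    findings[path.name].append((preferred, variant))
    return dict(findings)

if __name__ == "__main__":
    for doc, hits in scan_policies("policies").items():
        for preferred, variant in hits:
            print(f"{doc}: uses '{variant}' where house style prefers '{preferred}'")
```

This is the volume-and-repetition half of the division of labor. Deciding whether ‘shall’ in a given clause is a stylistic slip or deliberate legal drafting is the nuance half, and it stays human.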

Key Questions Before Implementing AI Tools

As regulatory frameworks shift from rule-based assessments to ‘reasonable steps’ accountability, the pivotal question evolves from “Did we comply?” to “Can we demonstrate that we understood the risk and chose appropriate tools to manage it?” If your AI-assisted compliance efforts cannot explain their logic, show exclusions, or withstand scrutiny, you may be facing a liability rather than a time-saving solution.

Before integrating an ‘all-in-one automation’ solution, consider:

  • Will this tool produce explainable and auditable outcomes?
  • Is there clear human oversight at every high-risk stress point?
  • Can we justify the decision to use this tool, particularly if something goes awry?

If the answer to any of these questions is no, you risk undermining your compliance strategy instead of enhancing it.
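
The first two questions have a concrete shape in code: an auditable record for every AI suggestion, and a hard gate that a named human must pass it through before anything becomes a live control. The sketch below is a minimal illustration under assumed names (Recommendation, log_recommendation, approve); they stand in for whatever your GRC tooling actually provides.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """One AI-generated suggestion, frozen into an auditable record."""
    source_model: str               # which tool produced the suggestion
    input_sha256: str               # hash of the exact input it reviewed
    suggestion: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    approved_by: str | None = None  # stays None until a human signs off
    rationale: str | None = None    # reviewer's reasoning, for the audit trail

def log_recommendation(model: str, input_text: str, suggestion: str) -> Recommendation:
    """Record a suggestion the moment it is made, before anyone acts on it."""
    digest = hashlib.sha256(input_text.encode("utf-8")).hexdigest()
    return Recommendation(source_model=model, input_sha256=digest, suggestion=suggestion)

def approve(rec: Recommendation, reviewer: str, rationale: str) -> Recommendation:
    """The gate: nothing is actionable until a named person owns the decision."""
    rec.approved_by = reviewer
    rec.rationale = rationale
    return rec

if __name__ == "__main__":
    rec = log_recommendation(
        model="vendor-model-x",
        input_text="Licence condition 7: age verification at entry...",
        suggestion="Add a documented age-check control at the door",
    )
    rec = approve(rec, reviewer="j.doe", rationale="Matches licence condition 7")
    print(json.dumps(asdict(rec), indent=2))
```

If something does go awry, this record is what lets you answer the third question: which tool suggested what, from which input, and who decided to act on it.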

The Final Takeaway

In an era that prizes speed, remember that speed without precision is a rounding error waiting to escalate into a major incident. Compliance leaders have a responsibility to ensure that faster processes do not come at the cost of accuracy, and to hold those responsible to account when mistakes occur.

Ultimately, in the age of ‘good enough’ AI, being merely good is no longer sufficient—being right is essential.
