The Perils of ‘Good Enough’ AI in Compliance

When ‘Good Enough’ AI Gets You Fined (or Fired!)

This article examines the implications of relying on ‘good enough’ AI in legal and risk advisory contexts, where trading accuracy for speed can carry serious consequences.

The Temptation of Speed

In a world increasingly obsessed with faster and cheaper outputs, AI has made the notion of ‘good enough’ particularly appealing. Tools that can generate obligation maps or summarize lengthy regulatory clauses from a handful of prompts have reshaped compliance workflows. But this convenience comes with a caveat: compliance is not merely a transactional process; it is a standing commitment to regulators, stakeholders, and the public.

When shortcuts fail to meet regulatory standards, the defense of “We used AI” may not absolve organizations from liability. Instead, it could raise expectations regarding accountability and diligence.

Speed ≠ Safety: The Case of the Collapsing Proposal

A recent real-life example illustrates this point. A multinational firm seeking compliance with niche regulations invited proposals from various vendors. One vendor highlighted their use of expertly curated obligation libraries and legal oversight, while another claimed to offer a comprehensive platform that could automatically manage all obligations and controls.

During due diligence, the latter vendor admitted they could deliver speed but not assurance of accuracy. They could not guarantee that their tool’s recommendations would satisfy regulatory scrutiny. When pressed, they refused to underwrite the output, leading to the collapse of their value proposition. This scenario underscores the necessity for expert oversight in AI applications, particularly in complex regulatory environments.

Context ≠ Comprehension: Automation Missing Real-World Control

Another cautionary tale involves a high-risk venue operator that initially relied on AI-generated risk controls to enforce compliance rules, such as prohibiting underage patrons. While the AI proposed various complex measures drawn from industry practice, it overlooked the operator’s most fundamental control: two full-time security staff checking patrons at entry. This highlights a critical flaw in AI: it cannot recognize controls that exist in practice but are never written down.

When AI Belongs in Your Compliance Stack

Despite these warnings, this is not a blanket condemnation of AI usage. When implemented correctly, AI can add significant value to risk and compliance processes, including:

  • Scanning policy libraries for inconsistent language
  • Flagging emerging risks from complaints or case data in real-time
  • Improving data quality at the point of capture
  • Drafting baseline documentation for expert review
  • Identifying change impacts across jurisdictions and business units

These use cases illustrate a pattern where AI handles volume and repetition, while humans manage nuance and insight. The most effective implementations treat AI as an accelerant rather than a replacement, recognizing the need for a clear distinction between support and substitution.
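The first use case above, scanning a policy library for inconsistent language, can be sketched in a few lines. The term groups, document names, and text below are invented for illustration; a real implementation would draw its terminology map from an expert-curated glossary, with the results routed to human review rather than acted on automatically.

```python
# Illustrative sketch: flag inconsistent terminology across policy documents.
# All term groups and documents here are hypothetical examples.

# Variants that a glossary says should resolve to one preferred term.
TERM_GROUPS = {
    "responsible person": {
        "responsible person", "accountable officer", "designated owner",
    },
}

def find_inconsistencies(documents, term_groups):
    """Return term groups where more than one variant is in use,
    mapped to each variant found and the documents containing it."""
    findings = {}
    for preferred, variants in term_groups.items():
        hits = {}
        for name, text in documents.items():
            lowered = text.lower()
            for variant in variants:
                if variant in lowered:
                    hits.setdefault(variant, []).append(name)
        if len(hits) > 1:  # multiple variants in use -> inconsistent
            findings[preferred] = hits
    return findings

documents = {
    "aml_policy.txt": "The responsible person must approve all escalations.",
    "kyc_procedure.txt": "Escalations go to the accountable officer for sign-off.",
}

report = find_inconsistencies(documents, TERM_GROUPS)
for group, hits in report.items():
    print(f"Inconsistent usage for '{group}': {sorted(hits)}")
```

Note that the tool only surfaces candidates; deciding whether two terms genuinely refer to the same obligation remains the nuance-and-insight work that stays with a human reviewer.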

Key Questions Before Implementing AI Tools

As regulatory frameworks shift from rule-based assessments to ‘reasonable steps’ accountability, the pivotal question evolves from “Did we comply?” to “Can we demonstrate that we understood the risk and chose appropriate tools to manage it?” If your AI-assisted compliance efforts cannot explain their logic, show exclusions, or withstand scrutiny, you may be facing a liability rather than a time-saving solution.

Before integrating an ‘all-in-one automation’ solution, consider:

  • Will this tool produce explainable and auditable outcomes?
  • Is there clear human oversight at every high-risk stress point?
  • Can we justify the decision to use this tool, particularly if something goes awry?

If the answer to any of these questions is no, you risk undermining your compliance strategy instead of enhancing it.

The Final Takeaway

In an era that values speed, it is crucial to remember that speed without precision is merely a rounding error waiting to escalate into a major issue. Compliance leaders have a responsibility to ensure that expedited processes do not compromise accuracy, holding accountable those responsible when mistakes occur.

Ultimately, in the age of ‘good enough’ AI, being merely good is no longer sufficient—being right is essential.
