The Perils of ‘Good Enough’ AI in Compliance

When ‘Good Enough’ AI Gets You Fined (or Fired!)

This article examines the implications of relying on ‘good enough’ AI in legal and risk advisory contexts, and the significant consequences that can follow when accuracy is sacrificed for speed.

The Temptation of Speed

In a world increasingly obsessed with faster and cheaper outputs, AI has made the notion of ‘good enough’ particularly appealing. For instance, the availability of tools that can generate obligation maps or summarize extensive regulatory clauses with mere prompts has revolutionized workflows. However, this convenience comes with a caveat: compliance is not merely a transactional process; it is a contract with regulators, stakeholders, and the public.

When shortcuts fail to meet regulatory standards, the defense of “We used AI” may not absolve organizations from liability. Instead, it could raise expectations regarding accountability and diligence.

Speed ≠ Safety: The Case of the Collapsing Proposal

A recent real-life example illustrates this point. A multinational firm seeking compliance with niche regulations invited proposals from various vendors. One vendor highlighted their use of expertly curated obligation libraries and legal oversight, while another claimed to offer a comprehensive platform that could automatically manage all obligations and controls.

During due diligence, the latter vendor admitted they could deliver speed but not assurance of accuracy. They could not guarantee that their tool’s recommendations would satisfy regulatory scrutiny. When pressed, they refused to underwrite the output, leading to the collapse of their value proposition. This scenario underscores the necessity for expert oversight in AI applications, particularly in complex regulatory environments.

Context ≠ Comprehension: Automation Missing Real-World Control

Another cautionary tale involves a high-risk venue operator that initially relied on AI-generated risk controls to enforce compliance rules, such as prohibiting underage patrons. While the AI proposed various complex measures based on industry practices, it overlooked a fundamental requirement: the presence of two full-time security staff checking patrons at entry. This highlights a critical blind spot: AI cannot recognize controls that exist only in practice and are not documented anywhere it can read.

When AI Belongs in Your Compliance Stack

Despite these warnings, this is not a blanket condemnation of AI usage. When implemented correctly, AI can add significant value to risk and compliance processes, including:

  • Scanning policy libraries for inconsistent language
  • Flagging emerging risks from complaints or case data in real time
  • Improving data quality at the point of capture
  • Drafting baseline documentation for expert review
  • Identifying change impacts across jurisdictions and business units

These use cases illustrate a pattern where AI handles volume and repetition, while humans manage nuance and insight. The most effective implementations treat AI as an accelerant rather than a replacement, recognizing the need for a clear distinction between support and substitution.
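The first use case above — scanning a policy library for inconsistent language — is the kind of volume-and-repetition task this pattern describes. A minimal sketch follows; the glossary, policy names, and policy texts are all illustrative stand-ins, and a real deployment would load a curated style glossary maintained by legal or compliance staff.

```python
import re

# Hypothetical style glossary: deprecated term -> preferred house term.
# In practice this would come from a curated, expert-maintained source.
PREFERRED_TERMS = {
    "client": "customer",
    "staff member": "employee",
}

def find_inconsistencies(policies: dict[str, str]) -> list[tuple[str, str, str]]:
    """Return (policy_name, deprecated_term, preferred_term) for each match."""
    hits = []
    for name, text in policies.items():
        lowered = text.lower()
        for deprecated, preferred in PREFERRED_TERMS.items():
            # Whole-word match so 'client' does not fire inside 'clientele'.
            if re.search(r"\b" + re.escape(deprecated) + r"\b", lowered):
                hits.append((name, deprecated, preferred))
    return hits

# Illustrative policy snippets.
policies = {
    "AML Policy": "Each client must be verified before onboarding.",
    "HR Policy": "Every staff member completes annual training.",
}
for name, deprecated, preferred in find_inconsistencies(policies):
    print(f"{name}: uses '{deprecated}', house style prefers '{preferred}'")
```

The flagged terms still go to a human editor for the final call — the tool surfaces candidates; it does not rewrite policy on its own.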

Key Questions Before Implementing AI Tools

As regulatory frameworks shift from rule-based assessments to ‘reasonable steps’ accountability, the pivotal question evolves from “Did we comply?” to “Can we demonstrate that we understood the risk and chose appropriate tools to manage it?” If your AI-assisted compliance efforts cannot explain their logic, show exclusions, or withstand scrutiny, you may be facing a liability rather than a time-saving solution.

Before integrating an ‘all-in-one automation’ solution, consider:

  • Will this tool produce explainable and auditable outcomes?
  • Is there clear human oversight at every high-risk stress point?
  • Can we justify the decision to use this tool, particularly if something goes awry?

If the answer to any of these questions is no, you risk undermining your compliance strategy instead of enhancing it.
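The first two questions — auditable outcomes and human oversight — translate into a concrete engineering requirement: every AI recommendation should be recorded alongside its stated rationale and a named reviewer's decision. A minimal sketch of such an append-only decision log, with all field names and identifiers illustrative rather than drawn from any particular tool:

```python
import json
import datetime
from dataclasses import dataclass, asdict

@dataclass
class ReviewedRecommendation:
    """One auditable record: what the tool proposed, why, and who signed off."""
    obligation_id: str
    ai_recommendation: str
    ai_rationale: str   # the tool's stated reasoning, stored verbatim
    reviewer: str       # a named human, never "system"
    approved: bool
    timestamp: str

def record_decision(obligation_id: str, recommendation: str,
                    rationale: str, reviewer: str,
                    approved: bool) -> ReviewedRecommendation:
    entry = ReviewedRecommendation(
        obligation_id=obligation_id,
        ai_recommendation=recommendation,
        ai_rationale=rationale,
        reviewer=reviewer,
        approved=approved,
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    # Append-only JSON lines: trivially handed to an auditor later.
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
    return entry

entry = record_decision(
    "OBL-014", "Add age-verification control at venue entry",
    "Jurisdictional rule cites a minimum-age requirement", "j.smith", True,
)
```

If the tool cannot supply a rationale to store in that log, that is itself an answer to question one.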

The Final Takeaway

In an era that values speed, it is crucial to remember that speed without precision is merely a rounding error waiting to escalate into a major issue. Compliance leaders have a responsibility to ensure that expedited processes do not compromise accuracy, holding accountable those responsible when mistakes occur.

Ultimately, in the age of ‘good enough’ AI, being merely good is no longer sufficient—being right is essential.
