Compliance-Driven Changes in Enterprise GenAI Purchases

Will Compliance Rules Change How Enterprises Buy GenAI?

The landscape of enterprise AI purchasing is undergoing a significant transformation, shifting from a tech-first approach to a compliance-first mindset. This change is largely driven by the stringent requirements of the EU AI Act, which allows fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher, for the most serious violations.

Regulatory Landscape

The AI Act entered into force on August 1, 2024, with most of its provisions applying from August 2, 2026. Companies that fail to adhere to these requirements may be forced to withdraw AI systems from the market, rendering non-compliant solutions unviable. Because implementation details vary across EU member states, global enterprises face added complexity in navigating this regulatory environment.

European regulators have already demonstrated their willingness to impose substantial penalties: France's CNIL recently fined Google €325 million and Shein €150 million for cookie consent violations, a signal of the enforcement appetite that awaits AI Act breaches.

Impact of Copyright Cases

High-profile copyright disputes are further reshaping vendor selection for enterprise buyers. Anthropic, for instance, recently agreed to a proposed settlement of $1.5 billion covering approximately 500,000 works, committing to destroy unlawfully obtained files. Similarly, Thomson Reuters secured a partial summary judgment against Ross Intelligence, which used 2,243 Westlaw headnotes without authorization, harming the market for Thomson Reuters' product.

Some legal precedents are favorable for AI companies, such as the ruling in Bartz v. Anthropic, where the court deemed the use of purchased print books for AI training as “highly transformative” and a case of fair use, though this was separated from Anthropic’s use of pirated copies.

Shifting Buyer Priorities

The changing legal landscape is reshaping enterprise buying priorities, with security and cost now weighing as heavily as accuracy and reliability. A prominent industry leader noted that "for most tasks, all the models perform well enough now—so pricing has become a much more important factor."

Organizations employing AI systems are advised to negotiate contracts that ensure the developer conducts thorough reviews of training inputs and eliminates reliance on questionable datasets. Furthermore, companies are increasingly seeking indemnities from AI providers to guard against potential IP infringements, data privacy breaches, and confidentiality violations.

Emergence of Compliance Startups

The demand for compliance solutions has led to significant investments in AI compliance companies. For example, Delve raised $32 million at a $300 million valuation, a substantial increase from its previous funding round, serving over 500 companies across various compliance frameworks. Meanwhile, Zango secured $4.8 million for its AI-driven governance, risk, and compliance platform.

The Market Shift

Despite the clarity of regulations, practical compliance remains challenging. AI models can inadvertently reproduce sensitive information from training data, leading to outputs that may contain confidential data. As a result, enterprises are moving from a “build” to a “buy” strategy, increasingly opting for third-party applications over internally developed tools, which are proving difficult to maintain in this dynamic environment.

This shift towards risk-first buying is creating new market categories in which a startup's legal safety can command a valuation premium over raw technical capability.
