AI Regulation Risks: What Companies Need to Know About the EU AI Act

From Meta to Airbnb: Companies Flag Risks Dealing With EU AI Act

On March 4, 2025, it was reported that more than 70 major U.S. companies, including Meta Platforms Inc. and Adobe Inc., had flagged concerns about the implications of the European Union’s Artificial Intelligence Act. The legislation imposes strict obligations on providers, distributors, and manufacturers of AI systems, potentially leading to significant compliance costs and changes to product offerings.

Risks Highlighted in 10-K Filings

Companies like Airbnb Inc., Lyft Inc., and Mastercard Inc. have explicitly cited the EU AI Act as a risk in their recent 10-K filings with the U.S. Securities and Exchange Commission, raising concerns about civil claims and hefty fines for non-compliance. For many of these companies, it is the first time such risks have appeared in their annual reports.

Minesh Tanna, the global AI lead at Simmons & Simmons LLP, noted, “It probably reflects the fact that there’s going to be potentially aggressive enforcement of the EU AI Act.” The law’s introduction has led to fears regarding the potential for litigation and financial instability.

Understanding the EU AI Act

The EU AI Act, which entered into force in August 2024, adds to a growing list of technology and data privacy laws that companies must navigate. Its initial provisions, which ban AI practices deemed to pose unacceptable risk, began applying in February. However, ambiguity surrounding the law’s requirements has heightened corporate anxiety.

Elisa Botero, a partner at Curtis, Mallet-Prevost, Colt & Mosle LLP, stated, “It’s when you start looking at how the regulations are enforced that you get a real feel of how the adjudicators will decide on these issues.” This uncertainty complicates compliance for many businesses.

Diverse Concerns and Compliance Costs

The law’s risk-based framework aims to ensure that AI systems operating within the EU are safe and uphold fundamental rights, and to prohibit practices such as AI-based deception. Compliance could prove costly: companies may need to hire additional personnel, engage external advisors, and absorb a range of operational expenses.

The research and advisory firm Gartner Inc. highlighted in its 10-K filing that adhering to the EU AI Act “may impose significant costs on our business.” These expenses could stem from the need for detailed documentation, human oversight, and compliance with governance measures associated with higher-risk AI applications.

Fragmentation Risks and Market Impact

Companies like Airbnb have indicated that regulations such as the EU AI Act could hinder their ability to use, procure, and commercialize AI and machine learning tools moving forward. Joe Jones, director of research and insights for the International Association of Privacy Professionals, noted, “We’re seeing—and we’re going to see more of this—the risk of fragmentation with what products and services are offered in different markets.”

Roblox Corp. also acknowledged that the law might require adjustments in how AI is utilized in their products, depending on the associated risk levels. The EU has categorized higher-risk AI applications—such as biometric identification—as subject to more stringent compliance requirements.

Enforcement Challenges

The enforcement of the EU AI Act presents its own set of challenges due to the involvement of multiple stakeholders. While the European AI Office has regulatory authority over general-purpose AI systems, the enforcement of rules regarding high-risk uses will fall to the officials of the 27 EU member states.

Tanna remarked, “You could be facing action in multiple member states in respect to the same alleged issue.” The repercussions for breaching the Act could be severe, with fines reaching up to 35 million euros (around $36 million) or 7% of a company’s annual global turnover from the previous year, whichever is greater.
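The “whichever is greater” structure of that penalty cap means the effective exposure scales with company size. A minimal sketch of the arithmetic (the function name and the example turnover figure are illustrative, not from the Act):

```python
def eu_ai_act_max_fine(annual_global_turnover_eur: float) -> float:
    """Illustrative cap on fines for the most serious EU AI Act breaches:
    the greater of EUR 35 million or 7% of the prior year's global turnover."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# For a company with EUR 1 billion in turnover, the 7% prong dominates:
print(eu_ai_act_max_fine(1_000_000_000))  # 70000000.0
# For a company with EUR 100 million in turnover, the flat floor applies:
print(eu_ai_act_max_fine(100_000_000))    # 35000000.0
```

In other words, the €35 million figure acts as a floor: for any company whose annual global turnover exceeds €500 million, the 7% prong is the larger of the two.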

Future Disclosure and Investor Awareness

The current disclosures surrounding AI risks are expected to set off a domino effect as companies learn more about the EU law. Organizations routinely scrutinize one another’s 10-K filings, which tends to spread caution about AI regulatory risk.

Don Pagach, director of research for the Enterprise Risk Management Initiative at North Carolina State University, emphasized the importance of establishing robust risk management systems and ensuring that employees understand the full lifecycle of AI development.

As companies navigate the evolving landscape of AI governance, there may be increased questioning from investors regarding how they plan to respond to the EU AI Act. This scrutiny could drive a broader public concern about corporate accountability in AI development.

Ultimately, the EU AI Act stands as a critical piece of legislation that not only affects businesses seeking to enter the EU market but also those with existing EU clients. The law’s impact is substantial given the EU’s prominent market position, compelling businesses to prioritize compliance and strategic adaptation in their AI practices.
