AI Regulation Risks: What Companies Need to Know About the EU AI Act

From Meta to Airbnb: Companies Flag Risks Dealing With EU AI Act

On March 4, 2025, it was reported that major U.S. companies, including Meta Platforms Inc., Adobe Inc., and more than 70 others, had expressed concerns about the implications of the European Union’s Artificial Intelligence Act. The legislation establishes strict obligations for providers, distributors, and manufacturers of AI systems, potentially leading to significant compliance costs and changes to product offerings.

Risks Highlighted in 10-K Filings

Companies including Airbnb Inc., Lyft Inc., and Mastercard Inc. have explicitly cited the EU AI Act as a risk factor in their recent 10-K filings with the U.S. Securities and Exchange Commission, raising concerns about civil claims and hefty fines for non-compliance. Notably, for many of these companies this is the first time such risks have appeared in their annual reports.

Minesh Tanna, the global AI lead at Simmons & Simmons LLP, noted, “It probably reflects the fact that there’s going to be potentially aggressive enforcement of the EU AI Act.” The law’s introduction has led to fears regarding the potential for litigation and financial instability.

Understanding the EU AI Act

The EU AI Act, which entered into force in August 2024, adds to a growing list of technology and data privacy laws that companies must navigate. Its initial provisions, which prohibit certain AI practices deemed to pose unacceptable risk, became enforceable in February 2025. However, ambiguity surrounding the law’s requirements has heightened corporate anxiety.

Elisa Botero, a partner at Curtis, Mallet-Prevost, Colt & Mosle LLP, stated, “It’s when you start looking at how the regulations are enforced that you get a real feel of how the adjudicators will decide on these issues.” This uncertainty complicates compliance for many businesses.

Diverse Concerns and Compliance Costs

The law’s risk-based framework aims to ensure that AI systems operating within the EU are safe and uphold fundamental rights, and it bans outright practices such as manipulative or deceptive uses of AI. Compliance could prove costly: companies may need to hire additional personnel, engage external advisors, and absorb a range of operational expenses.

Research and advisory firm Gartner Inc. warned in its 10-K filing that adhering to the EU AI Act “may impose significant costs on our business.” These expenses could stem from requirements for detailed documentation, human oversight, and governance measures that attach to higher-risk AI applications.

Fragmentation Risks and Market Impact

Companies like Airbnb have indicated that regulations such as the EU AI Act could hinder their ability to use, procure, and commercialize AI and machine learning tools moving forward. Joe Jones, director of research and insights for the International Association of Privacy Professionals, noted, “We’re seeing—and we’re going to see more of this—the risk of fragmentation with what products and services are offered in different markets.”

Roblox Corp. also acknowledged that the law might require adjustments to how AI is used in its products, depending on the associated risk levels. The EU subjects higher-risk AI applications, such as biometric identification, to more stringent compliance requirements.
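
To make the tiering concrete, here is a minimal illustrative sketch in Python. The tier names follow the Act’s published risk categories (unacceptable, high, limited, and minimal), but the example applications and one-line obligation summaries are simplified illustrations for orientation only, not legal text.

    # Simplified sketch of the EU AI Act's risk-based framework.
    # Tier names follow the Act's categories; the example uses and
    # obligation summaries are illustrative, not legal text.
    RISK_TIERS = {
        "unacceptable": {
            "examples": ["social scoring", "manipulative or deceptive AI"],
            "obligations": "prohibited outright",
        },
        "high": {
            "examples": ["biometric identification", "hiring decisions"],
            "obligations": "documentation, human oversight, conformity assessment",
        },
        "limited": {
            "examples": ["chatbots", "AI-generated media"],
            "obligations": "transparency disclosures",
        },
        "minimal": {
            "examples": ["spam filters", "recommendation widgets"],
            "obligations": "no new obligations",
        },
    }

    def obligations_for(tier: str) -> str:
        # Look up the simplified obligation summary for a risk tier.
        return RISK_TIERS[tier]["obligations"]

    print(obligations_for("high"))
    # documentation, human oversight, conformity assessment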

Enforcement Challenges

Enforcement of the EU AI Act presents its own challenges because of the number of bodies involved. While the European AI Office has regulatory authority over general-purpose AI systems, enforcement of the rules on high-risk uses will fall to authorities in the 27 EU member states.

Tanna remarked, “You could be facing action in multiple member states in respect to the same alleged issue.” The repercussions for breaching the Act could be severe, with fines reaching up to 35 million euros (around $36 million) or 7% of a company’s annual global turnover from the previous year, whichever is greater.
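
The “whichever is greater” structure means the effective cap scales with company size. As a minimal sketch of that arithmetic (figures in euros; the function name is our own):

    def max_fine_eur(prior_year_global_turnover_eur: float) -> float:
        # Cap for the most serious breaches under the EU AI Act:
        # EUR 35 million or 7% of prior-year global turnover,
        # whichever is greater.
        return max(35_000_000.0, 0.07 * prior_year_global_turnover_eur)

    # A company with EUR 10 billion in prior-year turnover faces a cap
    # of EUR 700 million, since 7% of turnover exceeds the flat amount.
    print(f"{max_fine_eur(10_000_000_000):,.0f}")  # 700,000,000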

Future Disclosure and Investor Awareness

The current disclosures surrounding AI risks are expected to trigger a domino effect as companies gain further insight into the EU law. Organizations routinely scrutinize one another’s 10-K filings, and these disclosures are likely to prompt heightened caution about AI regulatory risk.

Don Pagach, director of research for the Enterprise Risk Management Initiative at North Carolina State University, emphasized the importance of establishing robust risk management systems and ensuring that employees understand the full lifecycle of AI development.

As companies navigate the evolving landscape of AI governance, investors may increasingly press them on how they plan to respond to the EU AI Act. That scrutiny could, in turn, fuel broader public concern about corporate accountability in AI development.

Ultimately, the EU AI Act is a critical piece of legislation that affects not only businesses seeking to enter the EU market but also those with existing EU clients. Given the EU’s prominent market position, the law’s impact is substantial, compelling businesses to prioritize compliance and strategic adaptation in their AI practices.
