From Meta to Airbnb: Companies Flag Risks Dealing With EU AI Act
Major U.S. companies including Meta Platforms Inc. and Adobe Inc., along with more than 70 others, have raised concerns about the implications of the European Union’s Artificial Intelligence Act, according to a March 4, 2025 report. The legislation imposes strict obligations on providers, distributors, and manufacturers of AI systems, potentially driving significant compliance costs and changes to product offerings.
Risks Highlighted in 10-K Filings
Companies including Airbnb Inc., Lyft Inc., and Mastercard Inc. have explicitly cited the EU AI Act as a risk in their recent 10-K filings with the U.S. Securities and Exchange Commission, warning that non-compliance could expose them to civil claims and hefty fines. For many of them, it is the first time such risks have appeared in their annual reports.
Minesh Tanna, the global AI lead at Simmons & Simmons LLP, noted, “It probably reflects the fact that there’s going to be potentially aggressive enforcement of the EU AI Act.” The law’s arrival has stoked fears of litigation and financial exposure.
Understanding the EU AI Act
The EU AI Act, which entered into force in August 2024, adds to a growing list of technology and data privacy laws that companies must navigate. Its first provisions, banning AI uses the law deems to pose unacceptable risk, became enforceable in February 2025. However, ambiguity surrounding the law’s requirements has heightened corporate anxiety.
Elisa Botero, a partner at Curtis, Mallet-Prevost, Colt & Mosle LLP, stated, “It’s when you start looking at how the regulations are enforced that you get a real feel of how the adjudicators will decide on these issues.” This uncertainty complicates compliance for many businesses.
Diverse Concerns and Compliance Costs
The law’s risk-based framework is meant to ensure that AI systems operating within the EU are safe and uphold fundamental rights, and to stamp out practices such as AI-based deception. Compliance could prove costly: companies may need to hire additional personnel, engage external advisors, and absorb a range of other operational expenses.
The consulting firm Gartner Inc. highlighted in its 10-K filing that adhering to the EU AI Act “may impose significant costs on our business.” These expenses could stem from the need for detailed documentation, human oversight, and compliance with governance measures associated with higher-risk AI applications.
Fragmentation Risks and Market Impact
Companies like Airbnb have indicated that regulations such as the EU AI Act could hinder their ability to use, procure, and commercialize AI and machine learning tools. Joe Jones, director of research and insights for the International Association of Privacy Professionals, noted, “We’re seeing—and we’re going to see more of this—the risk of fragmentation with what products and services are offered in different markets.”
Roblox Corp. also acknowledged that the law might require adjustments to how AI is used in its products, depending on the associated risk levels. The EU subjects higher-risk AI applications, such as biometric identification, to more stringent compliance requirements.
Enforcement Challenges
Enforcement of the EU AI Act presents its own challenges because multiple stakeholders are involved. While the European AI Office has regulatory authority over general-purpose AI systems, enforcement of the rules on high-risk uses will fall to national authorities in the 27 EU member states.
Tanna remarked, “You could be facing action in multiple member states in respect to the same alleged issue.” The repercussions for breaching the Act could be severe, with fines reaching up to 35 million euros (around $36 million) or 7% of a company’s annual global turnover from the previous year, whichever is greater.
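To illustrate the “whichever is greater” mechanics with a hypothetical example: a company with 10 billion euros in annual global turnover would face a cap of 700 million euros (7% of turnover), since that exceeds the 35 million euro floor, while a smaller firm with 100 million euros in turnover would face the 35 million euro figure instead, because 7% of its turnover comes to only 7 million euros.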
Future Disclosure and Investor Awareness
The current wave of disclosures around AI risks is expected to set off a domino effect as companies gain further insight into the EU law. Organizations often scrutinize each other’s 10-K filings, which tends to heighten caution about AI regulatory compliance.
Don Pagach, director of research for the Enterprise Risk Management Initiative at North Carolina State University, emphasized the importance of establishing robust risk management systems and ensuring that employees understand the full lifecycle of AI development.
As companies navigate the evolving landscape of AI governance, investors may increasingly press them on how they plan to respond to the EU AI Act. That scrutiny could fuel broader public concern about corporate accountability in AI development.
Ultimately, the EU AI Act is a critical piece of legislation affecting not only businesses seeking to enter the EU market but also those with existing EU clients. Given the EU’s prominent market position, the law’s impact is substantial, compelling businesses to prioritize compliance and strategic adaptation in their AI practices.