AI Regulation Risks: What Companies Need to Know About the EU AI Act

From Meta to Airbnb: Companies Flag Risks Dealing With EU AI Act

On March 4, 2025, it was reported that more than 70 major U.S. companies, including Meta Platforms Inc. and Adobe Inc., had flagged concerns about the implications of the European Union’s Artificial Intelligence Act. The legislation imposes strict obligations on providers, distributors, and manufacturers of AI systems, potentially leading to significant compliance costs and changes to product offerings.

Risks Highlighted in 10-K Filings

Companies including Airbnb Inc., Lyft Inc., and Mastercard Inc. have explicitly cited the EU AI Act as a risk factor in their recent 10-K filings with the U.S. Securities and Exchange Commission, warning that non-compliance could expose them to civil claims and hefty fines. It is a pivotal moment: for many companies, this is the first time such risks have appeared in their annual reports.

Minesh Tanna, the global AI lead at Simmons & Simmons LLP, noted, “It probably reflects the fact that there’s going to be potentially aggressive enforcement of the EU AI Act.” The law’s introduction has led to fears regarding the potential for litigation and financial instability.

Understanding the EU AI Act

The EU AI Act, which entered into force in August 2024, adds to a growing list of technology and data-privacy laws that companies must navigate. Its initial provisions, which ban AI practices deemed to pose unacceptable risk, became enforceable in February 2025. Ambiguity about the law’s requirements, however, has heightened corporate anxiety.

Elisa Botero, a partner at Curtis, Mallet-Prevost, Colt & Mosle LLP, stated, “It’s when you start looking at how the regulations are enforced that you get a real feel of how the adjudicators will decide on these issues.” This uncertainty complicates compliance for many businesses.

Diverse Concerns and Compliance Costs

The law’s risk-based framework is meant to ensure that AI systems operating in the EU are safe and respect fundamental rights, and it bans practices such as AI-based deception. Compliance could prove costly: companies may need to hire additional personnel, engage outside advisors, and absorb a range of new operational expenses.

The research and advisory firm Gartner Inc. noted in its 10-K filing that adhering to the EU AI Act “may impose significant costs on our business.” Those expenses could stem from requirements for detailed documentation, human oversight, and governance measures applied to higher-risk AI applications.

Fragmentation Risks and Market Impact

Companies like Airbnb have indicated that regulations such as the EU AI Act could hinder their ability to use, procure, and commercialize AI and machine learning tools moving forward. Joe Jones, director of research and insights for the International Association of Privacy Professionals, noted, “We’re seeing—and we’re going to see more of this—the risk of fragmentation with what products and services are offered in different markets.”

Roblox Corp. likewise acknowledged that the law may require it to adjust how AI is used in its products, depending on the risk level involved. The EU subjects higher-risk AI applications, such as biometric identification, to more stringent compliance requirements.
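To make the Act’s tiered structure concrete, the sketch below models its four commonly described risk categories (unacceptable, high, limited, minimal) as a simple lookup. The example use cases and obligation summaries are illustrative simplifications, not legal classifications; real tier assignments depend on the Act’s annexes and legal review.

```python
# Illustrative sketch of the EU AI Act's risk-based framework.
# Tier names reflect the Act's commonly described categories; the example
# use cases and obligation summaries are simplified for illustration only.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "manipulative AI"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["biometric identification", "hiring tools"],
        "obligation": "documentation, human oversight, conformity assessment",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency disclosures",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligation": "none beyond existing law",
    },
}

def obligations_for(use_case: str) -> str:
    """Return the tier and obligation summary for an example use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unclassified: assess against the Act's annexes"

print(obligations_for("biometric identification"))
```

Under this toy mapping, biometric identification resolves to the high-risk tier, matching the article’s point that such applications face the heaviest compliance burden.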

Enforcement Challenges

Enforcement of the EU AI Act presents its own challenges because multiple bodies are involved. The European AI Office holds regulatory authority over general-purpose AI systems, but enforcement of the rules on high-risk uses will fall to authorities in the 27 EU member states.

Tanna remarked, “You could be facing action in multiple member states in respect to the same alleged issue.” The repercussions for breaching the Act could be severe, with fines reaching up to 35 million euros (around $36 million) or 7% of a company’s annual global turnover from the previous year, whichever is greater.
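The penalty ceiling described above is a simple greater-of calculation. This hypothetical helper (the function name is my own) shows how the two thresholds interact:

```python
# Hypothetical helper illustrating the Act's top penalty tier as described
# above: up to EUR 35 million or 7% of prior-year global turnover,
# whichever is greater.
def max_ai_act_fine(annual_global_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a given turnover."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# A company with EUR 1 billion in turnover faces a EUR 70 million ceiling,
# because 7% of turnover exceeds the EUR 35 million floor.
print(max_ai_act_fine(1_000_000_000))  # → 70000000.0
```

The turnover-based prong means the exposure scales with company size: for the largest firms in the article, 7% of global revenue dwarfs the fixed 35 million euro figure.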

Future Disclosure and Investor Awareness

Current disclosures of AI-related risks are expected to have a domino effect as companies learn more about the EU law. Companies routinely scrutinize one another’s 10-K filings, which tends to amplify caution about AI regulatory compliance.

Don Pagach, director of research for the Enterprise Risk Management Initiative at North Carolina State University, emphasized the importance of establishing robust risk management systems and ensuring that employees understand the full lifecycle of AI development.

As companies navigate the evolving landscape of AI governance, investors may increasingly question how they plan to respond to the EU AI Act. That scrutiny could in turn fuel broader public concern about corporate accountability in AI development.

Ultimately, the EU AI Act is a critical piece of legislation that affects not only businesses seeking to enter the EU market but also those with existing EU clients. Given the EU’s prominent market position, the law’s impact is substantial, compelling businesses to prioritize compliance and strategic adaptation in their AI practices.
