AI Regulation Risks: What Companies Need to Know About the EU AI Act

From Meta to Airbnb: Companies Flag Risks Dealing With EU AI Act

On March 4, 2025, it was reported that major U.S. companies, including Meta Platforms Inc., Adobe Inc., and more than 70 others, had expressed concerns about the implications of the European Union’s Artificial Intelligence Act. The legislation establishes strict obligations for providers, distributors, and manufacturers of AI systems, potentially leading to significant compliance costs and changes to product offerings.

Risks Highlighted in 10-K Filings

Companies including Airbnb Inc., Lyft Inc., and Mastercard Inc. have explicitly cited the EU AI Act as a risk in their recent 10-K filings with the U.S. Securities and Exchange Commission, warning that non-compliance could expose them to civil claims and hefty fines. The disclosures mark a notable shift: for many of these companies, it is the first time such risks have appeared in their annual reports.

Minesh Tanna, the global AI lead at Simmons & Simmons LLP, noted, “It probably reflects the fact that there’s going to be potentially aggressive enforcement of the EU AI Act.” The law’s introduction has led to fears regarding the potential for litigation and financial instability.

Understanding the EU AI Act

The EU AI Act, which entered into force in August, adds to a growing list of technology and data privacy laws that companies must navigate. Its initial provisions, which ban prohibited, unacceptable-risk uses of AI, became enforceable in February. However, ambiguity about the law’s requirements has heightened corporate anxiety.

Elisa Botero, a partner at Curtis, Mallet-Prevost, Colt & Mosle LLP, stated, “It’s when you start looking at how the regulations are enforced that you get a real feel of how the adjudicators will decide on these issues.” This uncertainty complicates compliance for many businesses.

Diverse Concerns and Compliance Costs

The law’s risk-based framework aims to ensure that AI systems operating within the EU are safe and uphold fundamental rights, and it prohibits practices such as manipulative or deceptive uses of AI. Compliance could prove costly: companies may need to hire additional personnel, engage external advisors, and absorb a range of new operational expenses.

The consulting firm Gartner Inc. highlighted in its 10-K filing that adhering to the EU AI Act “may impose significant costs on our business.” These expenses could stem from the need for detailed documentation, human oversight, and compliance with governance measures associated with higher-risk AI applications.

Fragmentation Risks and Market Impact

Companies like Airbnb have indicated that regulations such as the EU AI Act could hinder their ability to use, procure, and commercialize AI and machine learning tools moving forward. Joe Jones, director of research and insights for the International Association of Privacy Professionals, noted, “We’re seeing—and we’re going to see more of this—the risk of fragmentation with what products and services are offered in different markets.”

Roblox Corp. also acknowledged that the law might require adjustments in how AI is utilized in their products, depending on the associated risk levels. The EU has categorized higher-risk AI applications—such as biometric identification—as subject to more stringent compliance requirements.

Enforcement Challenges

The enforcement of the EU AI Act presents its own set of challenges due to the involvement of multiple stakeholders. While the European AI Office has regulatory authority over general-purpose AI systems, the enforcement of rules regarding high-risk uses will fall to the officials of the 27 EU member states.

Tanna remarked, “You could be facing action in multiple member states in respect to the same alleged issue.” The repercussions for breaching the Act could be severe, with fines reaching up to 35 million euros (around $36 million) or 7% of a company’s annual global turnover from the previous year, whichever is greater.
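The penalty ceiling described above — the greater of a fixed €35 million or 7% of the prior year’s worldwide annual turnover — can be sketched as a simple calculation (the function name and figures for illustrative firms below are hypothetical):

```python
def max_eu_ai_act_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious breaches under the
    EU AI Act: the greater of EUR 35 million or 7% of the previous
    year's worldwide annual turnover."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_global_turnover_eur)

# A firm with EUR 200M in turnover is bounded by the fixed EUR 35M cap,
# since 7% of its turnover (EUR 14M) is smaller.
print(max_eu_ai_act_fine(200_000_000))    # 35000000

# A firm with EUR 1B in turnover faces up to 7% of turnover: EUR 70M.
print(max_eu_ai_act_fine(1_000_000_000))  # 70000000.0
```

The "whichever is greater" structure means the fixed cap acts as a floor on maximum exposure for smaller firms, while the turnover-based percentage dominates for large multinationals.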

Future Disclosure and Investor Awareness

The current AI risk disclosures are expected to set off a domino effect as companies learn more about the EU law. Because organizations routinely scrutinize each other’s 10-K filings, heightened caution around AI regulatory compliance is likely to spread.

Don Pagach, director of research for the Enterprise Risk Management Initiative at North Carolina State University, emphasized the importance of establishing robust risk management systems and ensuring that employees understand the full lifecycle of AI development.

As companies navigate the evolving landscape of AI governance, they may face increased questioning from investors about how they plan to respond to the EU AI Act. That scrutiny could, in turn, fuel broader public concern about corporate accountability in AI development.

Ultimately, the EU AI Act stands as a critical piece of legislation that not only affects businesses seeking to enter the EU market but also those with existing EU clients. The law’s impact is substantial given the EU’s prominent market position, compelling businesses to prioritize compliance and strategic adaptation in their AI practices.
