EU AI Act: The Hidden Risks for SMEs and How to Prepare

The EU AI Act and the Blind Spot for SMEs: Understanding the Implications

The AI hype of recent years is giving way to a harsh legal reality: with the EU AI Act, the European Union is setting globally unique and binding limits on the use of artificial intelligence. From August 2026, the stakes will be high for the vast majority of companies, yet alarmingly few are prepared.

Companies that fail to comply risk drastic fines of up to €35 million or seven percent of their global annual revenue. A dangerous misconception persists that the law only affects tech companies or those developing their own AI models. In reality, the strict requirements also apply to companies that merely purchase AI functions or unknowingly use them as part of everyday standard software.

Countdown to Compliance

The countdown is on. By August 2, 2026, the full requirements for high-risk AI systems become mandatory, governance structures must be demonstrable, and transparency obligations for generative AI take effect. The transitional periods that began when the regulation entered into force in August 2024 are nearing expiration.

For those hoping for a delay, the European Commission is discussing a Digital Omnibus Package aimed at making obligations clearer and more manageable, particularly for small and medium-sized enterprises (SMEs). However, the majority of obligations will still take effect on the aforementioned deadline.

Risk-Based Approach

The EU AI Act categorizes AI systems into four groups based on risk level. AI practices with unacceptable risk, such as social scoring systems that rate individuals, are prohibited outright and can trigger fines of up to €35 million. High-risk AI systems in areas like lending, human resources management, and law enforcement must comply with comprehensive documentation requirements.

Classifying AI systems correctly is often complex. Companies are obligated to provide written justification for their classification decisions, even if the result is that a system is deemed low-risk. This requirement applies to virtually every company using AI functions in its software.
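The written-justification duty described above can be supported by a simple internal record. The following is a minimal illustrative sketch, not a legal template prescribed by the Act; the field names, the example system, and the vendor are invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    """The four risk groups of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class ClassificationRecord:
    """One documented classification decision for an AI function in use."""
    system_name: str
    vendor: str
    risk_tier: RiskTier
    justification: str  # written reasoning is expected even for a low-risk outcome
    assessed_on: date = field(default_factory=date.today)

# Hypothetical example: a purchased HR module that influences hiring decisions.
record = ClassificationRecord(
    system_name="CV screening module",
    vendor="ExampleHR GmbH",
    risk_tier=RiskTier.HIGH,
    justification="Influences recruitment decisions, a high-risk use case.",
)
print(record.risk_tier.value)  # high
```

Keeping such records per system makes the classification auditable later, including for systems ultimately deemed low-risk.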

High-Risk Requirements

Organizations with high-risk AI systems must undergo a full conformity assessment by August 2026, maintain technical documentation, and register in the EU’s public database for high-risk AI. This includes implementing a thorough risk management system across the AI system’s lifecycle, ensuring high-quality training data, and reporting serious incidents.

Challenges for SMEs

For many German SMEs, the EU AI Act has not yet received the attention it deserves. The complexities of the regulations and the technical terminology can be daunting, especially since the law applies not only to in-house developed AI but also to third-party AI functions.

Unlike the GDPR, which required procedural adjustments, the AI Act necessitates a deep understanding of the technologies in use. Companies must assess whether AI modules in their software influence critical decisions, such as credit approvals or recruitment processes.

Governance Structure Requirements

At the heart of the EU AI Act is the requirement for a genuine AI governance structure that makes AI decisions accountable and transparent. This includes appointing an AI compliance officer, creating an internal AI governance body, and establishing regular risk reports and audits.

While these requirements may seem like bureaucratic hurdles, they represent the necessary infrastructure for responsible AI use. Companies unaware of their AI systems’ operations expose themselves to significant regulatory risks.

Understanding the Penalty System

The EU AI Act employs a three-tiered penalty system based on the severity of violations. The most severe penalties apply to prohibited AI practices, where fines can reach up to €35 million. Violations of high-risk requirements can incur fines of up to €15 million. This makes it clear that the costs of compliance are far lower than the potential penalties for non-compliance.
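For companies, each fine tier is commonly described as the higher of a fixed cap and a share of global annual revenue, which is why large firms face far more than the headline figures. A small arithmetic sketch of that rule (the revenue figures are invented examples):

```python
def fine_ceiling(fixed_cap_eur: float, revenue_share: float, global_revenue_eur: float) -> float:
    """Upper bound of a fine tier: the higher of a fixed cap
    and a percentage share of global annual revenue."""
    return max(fixed_cap_eur, revenue_share * global_revenue_eur)

# Prohibited-practice tier: up to €35 million or 7% of global revenue.
# For a hypothetical company with €1 billion in revenue, the 7% share dominates:
print(fine_ceiling(35_000_000, 0.07, 1_000_000_000))  # 70000000.0

# For a smaller firm with €100 million in revenue, the fixed cap is the ceiling:
print(fine_ceiling(35_000_000, 0.07, 100_000_000))  # 35000000
```

Even the lower high-risk tier of €15 million dwarfs the typical cost of building compliance structures in time.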

Strategic Opportunities

It would be shortsighted to view the EU AI Act solely as a cost burden. Companies investing early in compliance infrastructure will gain competitive advantages. Clients, particularly in the public sector, increasingly value suppliers who can demonstrate responsible AI use.

For companies yet to begin preparations, time is running out. The recommended roadmap starts with an immediate inventory of all AI systems. Following this, companies should clarify their roles—whether as providers, operators, or distributors—and establish governance structures and documentation processes.

By spring 2026, basic governance structures should be in place, contracts with AI suppliers reviewed, and complaint procedures defined. Compliance is not merely a regulatory obligation but a pathway to better corporate governance in the AI age.
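The first roadmap step, an inventory with role assignment, can be as plain as a structured list. A minimal sketch with invented example systems; the role labels follow the article's provider/operator/distributor distinction:

```python
from collections import defaultdict

# Illustrative inventory: every AI function in use, including those embedded
# in purchased standard software, with the company's role for each.
inventory = [
    {"system": "website chatbot", "source": "third-party SaaS", "role": "operator"},
    {"system": "demand forecasting model", "source": "in-house", "role": "provider"},
    {"system": "CV ranking in HR suite", "source": "embedded in standard software", "role": "operator"},
]

# Group systems by role, since provider obligations differ sharply from operator obligations.
by_role = defaultdict(list)
for entry in inventory:
    by_role[entry["role"]].append(entry["system"])

print(sorted(by_role))  # ['operator', 'provider']
```

Even this coarse grouping surfaces the key question early: for which systems does the company carry the heavier provider obligations, and for which does it "only" operate third-party AI.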

For consulting firms in the field of digital transformation, the EU AI Act presents a strategic opportunity to assist clients in navigating compliance challenges. As the regulatory landscape becomes increasingly complex, expertise in this area will be invaluable.

The EU AI Act marks the beginning of a mature and responsible AI economy in Europe, balancing technological progress with the protection of fundamental rights.
