The EU AI Act and the Blind Spot for SMEs: Understanding the Implications
The AI hype of recent years is giving way to a harsh legal reality: with the EU AI Act, the European Union is setting globally unique and binding limits on the use of artificial intelligence. From August 2026, the stakes will be high for the vast majority of companies, yet alarmingly few are prepared.
Companies that fail to comply risk drastic fines of up to €35 million or seven percent of global annual revenue, whichever is higher. A dangerous misconception persists that the law affects only tech companies or firms that develop their own AI models. In reality, the strict requirements also apply to companies that merely purchase AI functions or unknowingly use them in everyday standard software.
Countdown to Compliance
The countdown is on. By August 2, 2026, the full requirements for high-risk AI systems become mandatory, governance structures must be demonstrable, and transparency obligations for generative AI take effect. The transitional periods granted since the regulation entered into force in August 2024 are nearing expiration.
For those hoping for a delay, the European Commission is discussing a Digital Omnibus Package aimed at making obligations clearer and more manageable, particularly for small and medium-sized enterprises (SMEs). However, the majority of obligations will still take effect on the aforementioned deadline.
Risk-Based Approach
The EU AI Act categorizes AI systems into four groups based on risk level. AI practices posing unacceptable risk, such as social scoring of individuals, are banned outright and can trigger fines of up to €35 million. High-risk AI systems in areas such as lending, human resources management, and law enforcement must comply with comprehensive documentation requirements.
Classifying AI systems correctly is often complex. Companies are obligated to provide written justification for their classification decisions, even if the result is that a system is deemed low-risk. This requirement applies to virtually every company using AI functions in its software.
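A classification decision of this kind can be captured as a simple record. The following is an illustrative sketch only; the tier names follow the Act's four-level taxonomy, but the record fields and example system are assumptions, not structures the regulation prescribes.

```python
from dataclasses import dataclass
from enum import Enum

# The four risk tiers of the EU AI Act's risk-based approach.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

# Hypothetical record for a documented classification decision.
@dataclass
class ClassificationRecord:
    system_name: str
    tier: RiskTier
    justification: str  # written rationale is required even for low-risk findings

record = ClassificationRecord(
    system_name="CV screening module",
    tier=RiskTier.HIGH,
    justification="Influences recruitment decisions, a high-risk use case.",
)
```

Keeping such records per system gives a company the written justification the Act expects, including for systems ultimately deemed low-risk.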
High-Risk Requirements
Organizations with high-risk AI systems must undergo a full conformity assessment by August 2026, maintain technical documentation, and register in the EU's public database for high-risk AI. This includes implementing a thorough risk management system across the AI system's lifecycle, ensuring the quality of training data, and reporting serious incidents.
Challenges for SMEs
For many German SMEs, the EU AI Act has not yet received the attention it deserves. The complexities of the regulations and the technical terminology can be daunting, especially since the law applies not only to in-house developed AI but also to third-party AI functions.
Unlike the GDPR, which required procedural adjustments, the AI Act necessitates a deep understanding of the technologies in use. Companies must assess whether AI modules in their software influence critical decisions, such as credit approvals or recruitment processes.
Governance Structure Requirements
At the heart of the EU AI Act is the requirement for a genuine AI governance structure that makes AI decisions accountable and transparent. This includes appointing an AI compliance officer, creating an internal AI governance body, and establishing regular risk reports and audits.
While these requirements may seem like bureaucratic hurdles, they represent the necessary infrastructure for responsible AI use. Companies unaware of their AI systems’ operations expose themselves to significant regulatory risks.
Understanding the Penalty System
The EU AI Act employs a three-tiered penalty system based on the severity of the violation. The most severe penalties apply to prohibited AI practices, with fines of up to €35 million or seven percent of global annual revenue, whichever is higher. Violations of high-risk requirements can incur fines of up to €15 million or three percent of revenue, and supplying misleading information to authorities up to €7.5 million or one percent. This makes clear that the cost of compliance is far lower than the potential cost of non-compliance.
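The cap logic behind these figures is a simple maximum of a fixed amount and a turnover percentage. The sketch below illustrates that arithmetic for the general rule; note that for SMEs the Act instead caps fines at the lower of the two amounts, which this simplified example does not model.

```python
def fine_cap(tier: str, global_turnover_eur: float) -> float:
    """Maximum fine for a violation tier: the fixed cap or the
    turnover-based cap, whichever is higher (general rule)."""
    caps = {
        "prohibited": (35_000_000, 0.07),    # prohibited AI practices
        "high_risk": (15_000_000, 0.03),     # high-risk requirement violations
        "misleading_info": (7_500_000, 0.01) # misleading information to authorities
    }
    fixed, pct = caps[tier]
    return max(fixed, pct * global_turnover_eur)

# A company with €2 billion global turnover: 7% of turnover (€140M)
# exceeds the €35M fixed cap, so the higher figure applies.
fine_cap("prohibited", 2_000_000_000)
```

For a small company with, say, €10 million turnover, the fixed caps dominate under the general rule, which is precisely why the SME carve-out toward the lower amount matters.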
Strategic Opportunities
It would be shortsighted to view the EU AI Act solely as a cost burden. Companies investing early in compliance infrastructure will gain competitive advantages. Clients, particularly in the public sector, are increasingly valuing suppliers who can demonstrate responsible AI use.
For companies yet to begin preparations, time is running out. The recommended roadmap starts with an immediate inventory of all AI systems. Following this, companies should clarify their roles, whether as providers, deployers, or distributors, and establish governance structures and documentation processes.
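The inventory step can start very simply: one entry per AI function in use, noting the vendor, the company's role, and whether the function touches a decision area the Act treats as high-risk. The field names and example entries below are purely illustrative.

```python
from dataclasses import dataclass

# Hypothetical inventory entry; fields are illustrative, not prescribed by the Act.
@dataclass
class AISystemEntry:
    name: str
    vendor: str
    role: str             # e.g. "provider", "deployer", "distributor"
    decision_impact: str  # e.g. "credit approval", "recruitment", "none"

inventory = [
    AISystemEntry("CRM lead scoring", "SaaS vendor", "deployer", "none"),
    AISystemEntry("CV screening", "in-house", "provider", "recruitment"),
]

# Flag systems that influence decisions in potentially high-risk areas
# and therefore need a documented classification first.
needs_review = [e.name for e in inventory if e.decision_impact != "none"]
```

Even a spreadsheet with these four columns surfaces the systems that need a documented risk classification before the August 2026 deadline.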
By spring 2026, basic governance structures should be in place, contracts with AI suppliers reviewed, and complaint procedures defined. Compliance is not merely a regulatory obligation but a pathway to better corporate governance in the AI age.
For consulting firms in the field of digital transformation, the EU AI Act presents a strategic opportunity to assist clients in navigating compliance challenges. As the regulatory landscape becomes increasingly complex, expertise in this area will be invaluable.
The EU AI Act marks the beginning of a mature and responsible AI economy in Europe, balancing technological progress with the protection of fundamental rights.