How SMEs Can Prepare for the EU’s AI Regulations

Imagine you are the HR manager of a medium-sized manufacturing company with 250 employees spread across Europe and North America. You receive hundreds of resumes for each job opening, often more than 500 for a single position. Your small HR team cannot possibly review them all thoroughly, so you have implemented an in-house CV screening system powered by AI. This tool, based on a publicly available, open-weight foundation model and further trained on the resumes of past successful hires, helps identify promising candidates by assessing their skills, experience, and fit.
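To make this concrete, here is a minimal sketch of how such a screener might rank candidates, using the open-source sentence-transformers library. The model choice, job posting, and resumes are illustrative assumptions, not a description of any production hiring system:

```python
# Illustrative sketch only: a toy version of the kind of CV screener
# described above, ranking resumes by semantic similarity to a job posting.
from sentence_transformers import SentenceTransformer, util

# A small open-weight embedding model (hypothetical choice for this sketch).
model = SentenceTransformer("all-MiniLM-L6-v2")

job_description = (
    "Maintenance engineer with PLC programming and industrial robotics experience."
)
resumes = {
    "candidate_a": "Ten years of PLC programming and robot cell maintenance in automotive plants.",
    "candidate_b": "Marketing specialist experienced in social media campaigns and branding.",
}

# Embed the posting once, then score each resume by cosine similarity.
job_vec = model.encode(job_description, convert_to_tensor=True)
for name, text in resumes.items():
    score = util.cos_sim(job_vec, model.encode(text, convert_to_tensor=True)).item()
    print(f"{name}: similarity = {score:.2f}")
```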

This seemingly innocuous tool now places your company at the center of a regulatory conundrum.

Under the EU AI Act, which formally went into effect on August 1, 2024, but features staggered compliance deadlines, this fictional CV screener is considered a “high-risk” application of AI. Other high-risk applications include AI systems evaluating creditworthiness for loans, AI managing critical railway or road infrastructure, and AI systems used to determine access to education. All of these will be subject to strict compliance requirements starting on August 2, 2026. Companies that violate these rules could face potentially crippling financial penalties of up to 7% of global annual turnover. Simple efficiency tools have become potential liabilities.

The AI Act applies to companies of all sizes that develop, sell, or use any type of AI system in the European Union, or whose AI system outputs are used in the EU. Additionally, the General-Purpose AI Code of Practice, released in final form in July 2025, gives concrete shape to compliance obligations that apply based on computational thresholds rather than company size.

Given the EU’s regulatory influence, often referred to as the “Brussels effect,” these rules are likely to shape AI governance globally. Tellingly, OpenAI recently sent a letter to California’s Governor Gavin Newsom recommending that the state treat developers of frontier models as compliant with state requirements “when they’ve signed onto a parallel framework such as the EU’s AI Code of Practice.”

Challenges for SMEs

For small and medium-sized enterprises (SMEs), the challenges presented by the AI Act are particularly acute. SMEs are less likely than larger companies to have access to the resources needed to adapt and comply quickly. These new rules could prompt SMEs to outsource AI compliance and innovation to expensive intermediaries. Even worse, if the regulatory burden is too high, they might be incentivized to delay the implementation of AI tools, losing out on productivity gains.

However, there is a path forward for SMEs. They must execute now on specific strategies to overcome these challenges. The most savvy may even have an opportunity to use compliance with the AI Act to set themselves apart from competitors who are less prepared.

Understanding the Regulatory Environment

Most of the provisions for high-risk AI systems, including those used in HR, will go into effect on August 2, 2026, although the regulatory apparatus is already taking shape. The Act’s enforcement follows a phased timeline: prohibitions on certain AI practices took effect in February 2025, and obligations for General-Purpose AI (GPAI) models followed in August 2025. After several delays, a lengthy Code of Practice for GPAI models has now been published to help providers demonstrate compliance. In practical terms, every provider of a GPAI model, whether it powers a niche chatbot or a frontier system, must draw up clear technical documentation and publish a summary of the model’s training data. Firms that substantially modify such a model, for instance through extensive fine-tuning, may themselves take on these provider obligations.

The exact compute thresholds are still the subject of intense debate. Today, the Commission treats training compute above 10²³ FLOPs as an “indicative criterion” for classifying a model as general-purpose AI, and the Act presumes “systemic risk” for models trained with more than 10²⁵ FLOPs, though these standards are likely to change repeatedly in the future. Adding to the confusion, the Commission has acknowledged that it might postpone some AI Act obligations because the promised European harmonized standards are running late. In early July 2025, dozens of leading European companies publicly called on the Commission to “stop the clock” on the most stringent AI requirements.
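To see what these thresholds mean in practice, consider a rough back-of-envelope calculation using the common approximation that training compute is about six times the parameter count times the number of training tokens. The model sizes below are hypothetical, and the threshold values reflect the guidance as it stands today:

```python
# Illustrative back-of-envelope check against the GPAI compute thresholds.
# Uses the common "compute ~= 6 * parameters * tokens" rule of thumb for
# dense transformers; all model sizes below are hypothetical.
GPAI_INDICATIVE = 1e23   # Commission's indicative criterion for GPAI
SYSTEMIC_RISK = 1e25     # presumption of systemic risk (Art. 51(2))

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

examples = [
    ("7B model, 2T tokens", 7e9, 2e12),
    ("70B model, 15T tokens", 70e9, 15e12),
    ("1T model, 15T tokens", 1e12, 15e12),
]
for name, n, d in examples:
    c = training_flops(n, d)
    status = ("presumed systemic risk" if c >= SYSTEMIC_RISK
              else "GPAI" if c >= GPAI_INDICATIVE
              else "below the indicative GPAI criterion")
    print(f"{name}: ~{c:.1e} FLOPs -> {status}")
```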

Given this volatile situation, SME leaders should plan as if the 2026 deadline is still in place while maintaining budget flexibility in case Brussels grants more time. They can be sure that their obligations under the law will be quite extensive. Before an AI system can be marketed or used in the EU, its provider must undertake a rigorous conformity assessment to ensure that it meets a range of requirements, including those relating to data quality, transparency, human oversight, and cybersecurity.

Companies must also establish ongoing risk-management systems to mitigate potential harm to users and data subjects. For example, the AI Act requires that our fictional CV screening tool be trained on representative datasets and include bias-detection mechanisms. Previously, companies may have used readily available datasets without rigorous checks, especially if they designed their AI systems in-house. Now, they must actively curate diverse datasets and mitigate potential biases, a process that is both technically challenging and resource-intensive.
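What might a bias-detection mechanism look like in practice? Below is a deliberately simple sketch that compares selection rates across groups and flags large gaps. The data, group labels, and the 0.8 threshold (borrowed from the US “four-fifths rule”) are illustrative assumptions; the AI Act itself does not mandate any particular metric:

```python
# Illustrative sketch: a simple selection-rate audit for a CV screener.
# Groups, outcomes, and the 0.8 threshold are assumptions for demonstration.
from collections import defaultdict

# (protected_group, passed_screening) pairs from one screening run.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, passed = defaultdict(int), defaultdict(int)
for group, selected in outcomes:
    totals[group] += 1
    passed[group] += selected  # bool counts as 0/1

rates = {g: passed[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())  # disparate impact ratio

print(f"selection rates: {rates}")
flag = "  <- flag for review" if ratio < 0.8 else ""
print(f"disparate impact ratio: {ratio:.2f}{flag}")
```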

Overall, leaders of SMEs should closely monitor official guidance and harmonized standards to stay abreast of the shifting regulatory environment. They could also look to ISO/IEC 42001, an international standard that specifies requirements for establishing an AI management system within organizations. Still, much will depend on how the AI Act is implemented in practice. The Act’s reliance on the ambiguous concept of “intended purpose” illustrates this problem, as general-purpose AI systems like ChatGPT often lack a single, predefined use.

Assessing Exposure and Compliance Costs

As the enforcement date approaches, affected companies must begin to clear several hurdles. They should conduct a comprehensive inventory of all AI systems in use, cataloguing each system’s functionality, the underlying models and data sources used, the deployment context, and the company’s role as provider, deployer, importer, or distributor.
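One lightweight way to structure such an inventory is one record per system, with fields mirroring the catalogue items just listed. Below is a minimal sketch, with hypothetical values based on the fictional CV screener:

```python
# Illustrative sketch: one record in an internal AI-system inventory.
# Field names follow the catalogue items described above; values are made up.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    functionality: str
    underlying_models: list[str]
    data_sources: list[str]
    deployment_context: str
    company_role: str  # "provider", "deployer", "importer", or "distributor"

cv_screener = AISystemRecord(
    name="cv-screener",
    functionality="Ranks incoming resumes against open job postings",
    underlying_models=["open-weight foundation model, fine-tuned in-house"],
    data_sources=["resumes of past successful hires"],
    deployment_context="HR recruitment, EU and North America",
    company_role="provider",  # developed and used in-house
)
print(cv_screener)
```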

Most importantly, they must then assess each system rigorously against the EU AI Act’s risk classification criteria to determine whether it is high-risk. Under Art. 6(1), an AI system is high-risk if it is intended to be used as a safety component of a product covered by the Union harmonization legislation listed in Annex I and that product must undergo a third-party conformity assessment. In addition, under Art. 6(2), systems deployed in the use cases listed in Annex III, which include employment and worker management, creditworthiness, and access to education, are classified as high-risk unless a narrow exemption applies; this is the prong that captures our fictional CV screener. (Notably, however, the Commission’s guidelines on classifying high-risk AI systems, and the related requirements and obligations, are not available as of this writing.)
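For internal triage, the two prongs of this test can be reduced to a simple checklist. The sketch below is a simplification for illustration only, not legal advice; exemptions under Art. 6(3) and forthcoming Commission guidance must still be checked case by case:

```python
# Illustrative sketch of the two-pronged high-risk test described above.
# A simplification for internal triage only; not a substitute for legal review.
ANNEX_III_AREAS = {  # abbreviated, illustrative subset of Annex III
    "employment", "education", "creditworthiness", "critical_infrastructure",
}

def looks_high_risk(is_annex_i_safety_component: bool,
                    needs_third_party_assessment: bool,
                    use_case_area: str | None) -> bool:
    # Prong 1 (Art. 6(1)): safety component of a product under Annex I
    # legislation that requires third-party conformity assessment.
    if is_annex_i_safety_component and needs_third_party_assessment:
        return True
    # Prong 2 (Art. 6(2)): a use case listed in Annex III (e.g., employment).
    return use_case_area in ANNEX_III_AREAS

# The fictional CV screener: an employment use case under Annex III.
print(looks_high_risk(False, False, "employment"))  # True -> treat as high-risk
```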

For AI systems classified as high-risk, providers must implement and maintain an iterative risk-management system. This system must span the entire AI lifecycle to identify, evaluate, mitigate, and monitor foreseeable harms, ranging from algorithmic bias and safety vulnerabilities to data protection and cybersecurity threats. Providers must also establish a documented quality management system that includes written policies, development and testing procedures, and change-management controls to ensure compliance and traceability. They must further guarantee transparency and human oversight, disclosing when and how AI is used and embedding mechanisms that allow trained personnel to halt AI-driven decisions when necessary. Documentation detailing the system’s purpose, architecture, data governance practices, performance metrics, and the results of conformity assessments must be drawn up before the system is placed on the market and kept up to date thereafter. Finally, before marketing any high-risk AI system in the EU, providers must register it in an EU database.

Researchers have estimated that setting up the quality management system described above would cost €193,000–330,000 initially, plus approximately €71,400 annually to maintain. According to a Deloitte survey, 52% of respondents are concerned that the AI Act will limit their opportunities for AI innovation, and only 36% say their organizations are well prepared to implement the law.

With the August 2026 deadline approaching, SME leaders face a strategic decision: either delay the adoption of AI in the face of this complexity and uncertainty, or transform compliance into a competitive advantage.

Preparing for AI Compliance

While ambiguous regulation and heavy compliance costs typically entrench incumbents, SMEs are not powerless. A recent interview-based study found that SMEs demonstrate greater agility and flexibility in AI adoption than large corporations. Leveraging this strength, a comprehensive AI Act action plan for SMEs involves building strategic partnerships, implementing compliance-by-design, and turning compliance into a competitive and marketing advantage.

1. Strategic partnerships

A strategic AI adoption framework for SMEs emphasizes internal and external collaboration. This might include organizing workshops and sharing success stories and case studies, allowing for knowledge sharing and ensuring that SMEs stay informed. Implemented correctly, such exchanges do not violate antitrust rules against collusion. The AI Act also explicitly requires Member States to provide SMEs with dedicated communication channels for guidance and queries, proportionate conformity-assessment fees, and support for participation in standardization.

SMEs could expand on these strategies. For example, an SME consortium could conduct joint bias and robustness testing using common tools and then produce its own technical file to support conformity assessment. This would reduce the cost and time per firm while likely also increasing regulators’ confidence in the chosen tools.

SMEs can pursue AI partnerships through horizontal collaboration with peers and vertical partnerships with specialized service providers. One example is Saidot, an AI governance startup based in Helsinki that secured €1.75 million in seed funding after developing compliance solutions that help organizations align with the requirements of the EU AI Act, attracting clients including the Scottish Government and Deloitte. Similarly, Silo AI, Europe’s largest private AI lab, founded in Helsinki in 2017, leveraged its expertise in responsible AI development and open-source multilingual models to attract major enterprise clients, ultimately leading to its €615 million acquisition by AMD. Another notable case is AIcendence, a German healthcare AI startup that successfully navigated complex rules by collaborating with the European Digital Innovation Hub (EDIH) Schleswig-Holstein, enabling the rapid development of an AI-powered diagnostic tool and helping it secure government funding.

2. Compliance-by-design

If SMEs are developing AI systems in-house, they should incorporate compliance features from the outset rather than adding them later. AI compliance experts argue that proactive compliance-by-design can yield savings of $3.05 million per data breach. Accordingly, SMEs should start by mapping their data-driven use cases to the AI Act’s risk tiers, rapidly establishing a quality management system, and recording accuracy, robustness, and bias metrics using publicly available benchmarks, as sketched below. Over the medium term, this will also help them improve their AI-based products and services by highlighting problems or biases in the training data before they compound, and it will reveal which automated steps require human oversight.
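As a concrete example of such record-keeping, the sketch below appends versioned evaluation results to a simple log file so that the quality management system accumulates a traceable history. The metrics, file format, and values are assumptions for illustration:

```python
# Illustrative sketch: appending versioned evaluation results to a JSON-lines
# log so the quality management system has a traceable record over time.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("qms_eval_log.jsonl")

def record_evaluation(model_version: str, accuracy: float,
                      robustness: float, disparate_impact: float) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "accuracy": accuracy,                  # e.g., from a public benchmark
        "robustness": robustness,              # e.g., accuracy under perturbation
        "disparate_impact": disparate_impact,  # see the bias check above
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical results for one release of the fictional CV screener.
record_evaluation("cv-screener-v1.3", accuracy=0.87,
                  robustness=0.81, disparate_impact=0.92)
```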

Crucially, the AI Act stipulates that SMEs should have free and prioritized access to “regulatory sandboxes,” enabling and formalizing such compliance-by-design testing. Sandboxes are designed to involve standards organizations, EDIHs, testing facilities, and other actors, and SMEs should engage these partners early to co-develop test plans and align with emerging standards. Each EU Member State must establish at least one AI regulatory sandbox by August 2, 2026, and documentation from sandbox participation can be reused to demonstrate compliance. Firms are even protected from administrative fines when acting in good faith under sandbox guidance.

Evidence from the Financial Conduct Authority’s regulatory sandbox in the UK suggests significant benefits, including a 15% increase in capital raised by participating firms and a 50% higher probability of securing funding. The program has achieved lasting effects, with 90% of firms from the first cohort progressing to market launch. Regulatory sandboxes launched in response to the AI Act may look different, but they could have similar benefits.

3. Compliance-to-advantage

Finally, there is evidence that ethical AI adoption can help small businesses build urgently needed trust among customers and partners. Over the past five years, global consumer trust in AI has fallen from 61% to 53%. A systematic literature review shows that algorithmic discrimination and bias hinder trust in AI, whereas perceived fairness and justice enhance it. When AI systems exhibit errors, “hallucinations,” or biases, the foundation of trust between organizations and their stakeholders is undermined. Even worse, Deloitte researchers note that “the faulty decisions that result most often impact multiple stakeholder groups.” This matters especially for SMEs, which often depend on close-knit relationships with long-standing suppliers and customers and should therefore prioritize ethical AI adoption.

If every company must comply, how can an SME stand out? Early adoption, transparency, and targeted marketing can create differentiation. Once the current “hype phase” of generative AI inevitably fades, SMEs with working, rigorously documented products and services can transform their compliance outputs into customer-facing assurances, using that transparency to speed up procurement.

For example, publishing “model cards”—standardized documentation frameworks popularized on platforms such as GitHub and Hugging Face—provides structured transparency regarding the capabilities and limitations of AI systems, and makes this information easily accessible to a range of stakeholders. Research shows that adding detailed model cards to previously undocumented models correlates with increased download rates. In general, the first-mover advantage in AI compliance is particularly significant, given that the EU was the first jurisdiction to establish comprehensive AI regulation. Firms with relevant expertise could contribute to the development of future harmonized standards, which can then be used to demonstrate compliance with the AI Act.
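A model card does not require special tooling. The sketch below assembles one as plain Markdown for the fictional CV screener; the section headings loosely follow the convention popularized by Hugging Face, and every value is a placeholder:

```python
# Illustrative sketch: writing a minimal model card as Markdown.
# Section headings loosely follow the model-card convention; all values
# below are placeholders for the fictional CV screener.
from pathlib import Path

card = """\
# Model Card: cv-screener-v1.3

## Intended Use
Ranking resumes against open job postings; high-risk under the EU AI Act
(Annex III, employment). Not for fully automated hiring decisions.

## Training Data
Resumes of past successful hires (2018-2024), reviewed for representativeness.

## Evaluation
- Accuracy: 0.87 (held-out test set)
- Disparate impact ratio: 0.92 (four-fifths rule threshold: 0.8)

## Limitations and Oversight
Scores are advisory; a trained HR reviewer makes all final decisions.
"""
Path("MODEL_CARD.md").write_text(card, encoding="utf-8")
```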

Conclusion

One of the biggest unintended risks of the EU AI Act is that it could entrench current market leaders and hinder innovation by imposing real or perceived compliance burdens that disproportionately affect smaller companies. However, the idea that only Big Tech can survive is exaggerated. Unlike large incumbents, which are weighed down by legacy systems and layers of approval, SMEs can adapt their workflows far more easily, and thus gain a competitive advantage. The steps outlined above lay out a number of ways to do so.

To unlock this potential, Europe must do its part, too. Regulators should keep SME templates up to date and streamline overlapping data-protection rules, for example by reconciling the AI Act’s Fundamental Rights Impact Assessment with the GDPR’s Data Protection Impact Assessment. They should also underwrite open-source compliance tooling and support SMEs that lack the resources to attend standard-setting meetings so they can still take part in the standardization process, for instance through Small Business Standards, a European nonprofit association established with the support of the Commission. If both sides deliver, Europe’s high-stakes approach to AI need not pose a high risk to its innovators.
