EU Member States Struggle to Fund AI Act Enforcement

As the European Union (EU) begins the phased implementation of the EU AI Act, significant challenges loom. A recent warning from EU policy adviser Kai Zenner points to critical financial strain in many member states, compounded by a shortage of the expert personnel needed for effective enforcement.

Financial Constraints and Expertise Shortages

Zenner emphasized that most EU member states are “almost broke,” raising concerns about their ability to adequately fund data protection agencies. This financial precariousness is exacerbated by an ongoing loss of artificial intelligence (AI) talent to better-funded private companies that can offer substantially higher salaries, further undermining enforcement capacity.

“This combination of lack of capital finance and also lack of talent will be really one of the main challenges of enforcing the AI Act,” Zenner stated, indicating the urgent need for skilled experts to interpret and apply the complex regulations effectively.

Penalties and Implementation Timeline

In light of these challenges, EU countries are under pressure to finalize their rules on penalties and fines under the AI Act by August 2, 2025. The legislation applies not only to companies based in the EU but also to foreign firms doing business within the EU’s jurisdiction.

Understanding the EU AI Act

Passed in July 2024, the EU AI Act is the most comprehensive framework for AI regulation in the world, and its implementation began this year. The rules aim to protect individuals’ safety and rights, prevent discrimination and harm caused by AI, and foster trust in the technology.

The Brussels Effect

The EU AI Act is poised to serve as a potential template for AI regulations in other countries, reminiscent of how the EU influenced global privacy laws with the General Data Protection Regulation (GDPR). This phenomenon, known as the “Brussels effect,” underscores the EU’s role in shaping international regulatory standards.

Risk-Based Regulation Framework

The EU AI Act takes a risk-based approach, categorizing AI systems according to the level of risk they pose:

Unacceptable Risk Systems

These systems are outright banned and include:

  • Social scoring systems that rank citizens
  • AI that manipulates individuals through subliminal techniques
  • Real-time facial recognition in public spaces, with limited exceptions for law enforcement

High-Risk Systems

AI applications in sensitive areas such as hiring, education, healthcare, or law enforcement fall into the “high-risk” category. These systems must adhere to stringent requirements, including:

  • Transparency in operations
  • Accuracy in outcomes
  • Record-keeping for decision-making processes
  • Regular testing and monitoring

For instance, if a hospital employs AI for patient diagnosis, the system must meet these standards and be subject to inspection to demonstrate compliance with the AI Act.
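
To make the record-keeping duty concrete, here is a minimal, hypothetical Python sketch of how a deployer might log AI-assisted decisions for later audit. The Act prescribes obligations, not code, so every name here (DecisionRecord, log_decision, the log format) is an illustrative assumption rather than any official compliance interface.

```python
# Hypothetical sketch of audit logging for a high-risk AI system.
# The EU AI Act requires record-keeping, not this specific design;
# all names below are illustrative, not an official compliance API.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system_id: str        # identifier of the AI system that produced the output
    model_version: str    # version of the deployed model
    input_summary: str    # what the system was asked to assess
    output: str           # the system's recommendation
    confidence: float     # model confidence, if available
    human_reviewer: str   # person accountable for the final decision
    timestamp: str        # when the decision was made (UTC, ISO 8601)

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one decision record as a JSON line for later inspection."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a hospital logging an AI-assisted diagnostic recommendation.
log_decision(DecisionRecord(
    system_id="triage-assist",
    model_version="2.1.0",
    input_summary="chest X-ray, patient 4711 (pseudonymized)",
    output="flagged for radiologist review: possible pneumonia",
    confidence=0.87,
    human_reviewer="dr.m.keller",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

An append-only log of this kind would give an inspector a verifiable trail of what the system recommended, when, and who signed off, which is the practical point of the record-keeping requirement.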

Limited-Risk Systems

Lower-risk systems, such as chatbots like ChatGPT, face lighter obligations centered on transparency rather than heavy regulation. These systems must disclose that their content is AI-generated, so users know the technology was involved in creating it.
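
As a simple illustration of this transparency duty, the hypothetical snippet below appends an explicit AI-generated label to a chatbot reply. The disclosure wording and function names are assumptions made for illustration; the Act mandates that users be informed, not any particular format.

```python
# Hypothetical sketch of the limited-risk transparency duty:
# label chatbot output as AI-generated before showing it to users.
# The wording and names are illustrative; the Act mandates the
# disclosure itself, not any particular format.
AI_DISCLOSURE = "Note: this response was generated by an AI system."

def present_reply(model_reply: str) -> str:
    """Attach an AI-generated disclosure to a chatbot reply."""
    return f"{model_reply}\n\n{AI_DISCLOSURE}"

print(present_reply("Here are three tips for improving your CV..."))
```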

As the EU progresses with the AI Act, the financial constraints and expertise shortages pose significant risks to its successful implementation. The interplay of these factors will be crucial in determining how effectively the EU can regulate the rapidly evolving landscape of artificial intelligence.
