Impact of the EU’s AI Law on Innovation in Czechia

The EU’s New AI Law and Its Implications for Czechia

The EU’s newly approved Artificial Intelligence (AI) Act, which entered into force on 1 August 2024, aims to protect fundamental rights, including privacy, and to create legal certainty for businesses across all member states. Experts in Czechia, however, warn that the regulation could stifle innovation and deter investment in the region.

What Exactly Does the Act Do?

The regulation classifies AI systems into four categories according to the level of risk they pose, with the strictest obligations reserved for high-risk and prohibited systems. Practices deemed to carry unacceptable risk, such as real-time remote facial recognition in public spaces, are banned outright (with narrow exceptions), while applications such as automated CV screening for job applicants are treated as high-risk and subject to strict requirements.
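
To make the tiered structure concrete, here is a minimal illustrative sketch (in Python) of the four tiers with a few commonly cited example use cases; the example assignments are indicative only, since the binding classification depends on the Act’s annexes and the specific deployment context.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited"   # banned outright (e.g. social scoring)
    HIGH = "high-risk"            # allowed only under strict requirements
    LIMITED = "limited-risk"      # transparency obligations only
    MINIMAL = "minimal-risk"      # no additional obligations

# Illustrative mapping of example use cases to tiers; the legally binding
# classification depends on the Act's annexes and case-by-case assessment.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "automated CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value}")
```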

The law prohibits AI systems that manipulate or deceive individuals, exploit vulnerabilities, or discriminate unfairly. It also bans systems that:

  • Predict criminal risk solely on the basis of profiling or personality traits
  • Build facial recognition databases through untargeted scraping of images from the internet or CCTV footage
  • Infer people’s emotions in workplaces and educational institutions
  • Use biometric data to deduce sensitive attributes such as political opinions or sexual orientation

Moreover, it restricts real-time remote biometric identification in publicly accessible spaces for law enforcement, allowing it only in narrowly defined situations such as searching for victims of serious crimes or preventing terrorist attacks. Companies that violate these prohibitions face fines of up to EUR 35 million (approximately CZK 874 million) or up to 7 percent of their total worldwide annual turnover, whichever is higher.
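
Because the two ceilings apply as “whichever is higher,” the potential exposure scales with company size. A minimal sketch of that arithmetic follows; the function name and sample turnover figure are illustrative.

```python
def max_fine_eur(annual_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07) -> float:
    """Upper bound of the fine for prohibited-practice violations:
    the higher of the fixed cap and 7% of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# A company with EUR 2 billion in annual turnover:
print(max_fine_eur(2_000_000_000))  # 140000000.0 -> the 7% ceiling applies
```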

Industry Perspectives

Supporters of the law argue it will safeguard citizens’ rights. Petra Stupková of the Czech Association of Artificial Intelligence said, “We do not want a social scoring system like in China, and the AI Act should help prevent that.” She acknowledges, however, that some restrictions may need adjusting to leave room for research and innovation.

Conversely, critics argue that the AI Act adds another layer of bureaucracy, making Europe less attractive for AI investment. Last year, U.S. companies poured around USD 100 billion (about CZK 2.3 trillion) into AI development, significantly outpacing European investments. Countries in the Middle East and China are also advancing faster in AI investment and adoption.

Jan Romportl, an AI expert, noted, “The EU lacks its own foundational AI models, social networks, and distribution channels. We are already falling behind, and another regulation will only worsen the situation.”

A Slippery Slope?

A critical concern is the potential for exemptions that could allow restricted AI applications, such as facial recognition in specific scenarios. For example, Prague’s Václav Havel Airport already utilizes AI-powered security cameras, and lawmakers are debating their implementation in football stadiums for enhanced safety. Critics warn this could lead to misuse.

Czech MP Patrik Nacher supports the idea but cautions against broader applications, stating, “I wouldn’t want this to become a slippery slope, where facial recognition expands from stadiums to nightclubs and eventually to restaurants.”

The Risk of Stifled Innovation

Beyond startups, larger technology firms may reconsider their presence in the European and Czech markets because of the high compliance costs the regulation imposes. The AI Act’s penalties are calculated from a company’s total worldwide annual turnover, so the largest players face the largest potential fines.

Zdeněk Valut, director of YDEAL Group, warned, “One day, even tech giants like Meta might decide enough is enough.” Patrik Tovaryš of Meta echoed these sentiments, suggesting that regulation has often hindered rather than encouraged innovation in Europe.

With Western firms potentially scaling back, there are fears that Chinese companies could step in, increasing their influence over European AI infrastructure. Valut explained, “China often offers investment in digital governance in exchange for using its AI tools,” posing a significant risk of European data falling into foreign hands.
