Impact of the EU AI Act on Digital Innovation Costs

This year, the world’s first comprehensive regulation of artificial intelligence (AI) – the European Union’s Artificial Intelligence Act – came into effect, and it is binding for Latvia as well. Its requirements will be phased in gradually through the summer of 2027, with the aim of ensuring that AI systems are developed and used responsibly.

As the Act’s requirements are clarified at the national level, it is crucial to avoid the mistakes made with the General Data Protection Regulation (GDPR), which resulted in an excessive bureaucratic burden.

Purpose of the AI Act

The primary objective of the AI Act is to promote ethical, safe, transparent, and trustworthy AI usage. While it is essential to mitigate the risks associated with AI, it is equally important to prevent placing all responsibility solely on developers, which could adversely affect competition and the pace of digitalization.

Four Risk Levels Defined by the AI Act

The AI Act classifies AI systems into four distinct risk levels:

  • Unacceptable Risk: AI systems that pose a direct threat to human safety, livelihoods, and rights are completely prohibited. This category includes government-led social scoring systems, similar to those in China, and toys that use voice assistance to promote dangerous behavior.
  • High Risk: Systems utilized in critical infrastructures, surgical applications, exam assessments, hiring procedures, and migration and border control management are classified as high-risk and are subject to stringent requirements.
  • Limited Risk: Systems such as recommendation engines based on user behavior, virtual assistants, and translation systems fall under this category.
  • Minimal Risk: Systems that pose little or no risk, such as email spam filters, fall into this category.
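The four-tier taxonomy above can be pictured as a simple lookup from use case to tier. The example use cases are taken from the list above; the enum values, dictionary, and function names below are illustrative, not part of the Act.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements apply"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no additional obligations"

# Illustrative mapping of the example systems named above to the Act's tiers.
RISK_BY_USE_CASE = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "critical infrastructure": RiskLevel.HIGH,
    "hiring procedures": RiskLevel.HIGH,
    "virtual assistant": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Return the risk tier for a known example use case (sketch only)."""
    return RISK_BY_USE_CASE[use_case]
```

In practice, classification depends on the system's intended purpose and context of use, not a keyword lookup; the sketch only captures the tier structure.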

With the imposition of penalties for non-compliance, including significant fines, the AI Act will substantially impact IT system development and implementation processes. Understanding the rules regarding permitted and prohibited actions will be crucial for both developers and system clients.

Maintaining Digitalization Advantages

Analysis of AI usage and digitalization in the EU indicates that Latvia is ahead of some Western European countries, like Germany, in implementing and maintaining critical infrastructure systems and making state services accessible. It is imperative not to hinder this progress or repeat the mistakes seen during the GDPR implementation, where misunderstandings complicated various IT sector processes.

Small and medium-sized businesses face particular challenges. Therefore, it is vital to focus on solutions that reduce financial and administrative burdens for these entities.

Companies utilizing AI must balance innovation with legal compliance. The system development process will require initial risk assessments, data traceability procedures, quality control mechanisms, and comprehensive technical documentation throughout the development lifecycle.
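The lifecycle obligations just listed could be tracked, in the simplest form, as a running checklist per project. The item names below echo the paragraph above; the class and method names are hypothetical, not terminology from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceChecklist:
    """Illustrative tracker for the lifecycle obligations named above."""
    items: dict = field(default_factory=lambda: {
        "initial risk assessment": False,
        "data traceability procedures": False,
        "quality control mechanisms": False,
        "technical documentation": False,
    })

    def complete(self, item: str) -> None:
        # Mark one obligation as satisfied for this project.
        self.items[item] = True

    def outstanding(self) -> list:
        # Obligations still open before the system can be deployed.
        return [name for name, done in self.items.items() if not done]
```

A real compliance process would of course attach evidence and versioned documents to each item rather than a boolean flag.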

Shared Responsibility

Responsibility for compliance with the AI Act’s requirements must be shared between system owners (which may include government and municipal institutions) and developers. If the burden and risk are disproportionately placed on developers, responsible parties may hesitate to participate in various IT projects, potentially diminishing competition and affecting quality.

Moreover, penalties for violations of the Act’s requirements are calculated as the higher of a fixed sum or a percentage of the company’s total worldwide annual turnover for the previous financial year, meaning that a fine could exceed the total cost of the respective development project.
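To make the scale concrete: the Act’s headline tiers allow fines of up to EUR 35 million or 7% of worldwide annual turnover for prohibited practices (and EUR 15 million or 3% for most other violations), whichever is higher. A minimal sketch with hypothetical company figures:

```python
def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    """Higher of a fixed cap and a share of worldwide annual turnover."""
    return max(fixed_cap, pct * turnover_eur)

# Hypothetical mid-sized developer with EUR 50M annual turnover.
turnover = 50_000_000
prohibited_tier = max_fine(turnover, 35_000_000, 0.07)  # prohibited practices
other_tier = max_fine(turnover, 15_000_000, 0.03)       # most other violations
# At this turnover both tiers floor at the fixed cap, which already dwarfs
# the budget of a typical development project.
```

For larger companies the percentage dominates instead: at EUR 1 billion turnover, the 7% tier reaches EUR 70 million.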

Conclusion

In response to these challenges, the founding of the Latvian Artificial Intelligence Association (MILA) aims to promote responsible AI use, facilitate cross-border cooperation, and enhance knowledge exchange with government institutions. The Association is positioned to be a strong partner for the government, advocating for proportionality in the AI Act’s implementation process to achieve responsible AI use without stalling digitalization.
