How the EU’s AI Act Will Shape Digitalization and Innovation Costs

This year, the world’s first comprehensive regulation on artificial intelligence (AI) – the European Union’s Artificial Intelligence Act – came into effect, and it is binding for Latvia as well. Its requirements will be phased in gradually through the summer of 2027, with the aim of ensuring that AI systems are developed and used responsibly.

As the Act’s requirements are clarified at the national level, it is crucial to avoid the mistakes made with the General Data Protection Regulation (GDPR), which resulted in an excessive bureaucratic burden.

Purpose of the AI Act

The primary objective of the AI Act is to promote ethical, safe, transparent, and trustworthy AI usage. While it is essential to mitigate the risks associated with AI, it is equally important to prevent placing all responsibility solely on developers, which could adversely affect competition and the pace of digitalization.

Four Risk Levels Defined by the AI Act

The AI Act classifies AI systems into four distinct risk levels, illustrated in the sketch after this list:

  • Unacceptable Risk: AI systems that pose a direct threat to human safety, livelihoods, and rights are completely prohibited. This category includes government-led social scoring systems, similar to those in China, and toys that use voice assistance to promote dangerous behavior.
  • High Risk: Systems utilized in critical infrastructures, surgical applications, exam assessments, hiring procedures, and migration and border control management are classified as high-risk and are subject to stringent requirements.
  • Limited Risk: Systems such as recommendation engines based on user behavior, virtual assistants, and translation systems fall under this category.
  • Minimal Risk: Systems that pose little or no risk, such as email spam filters, fall into this category.
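
To make the taxonomy concrete, the sketch below encodes the four tiers and the examples listed above as a simple lookup. It is purely illustrative – the enum, table, and function names are hypothetical, and real classification follows the Act’s definitions and annexes rather than a lookup table.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # stringent requirements apply
    LIMITED = "limited"             # mainly transparency obligations
    MINIMAL = "minimal"             # no additional obligations

# Hypothetical lookup table pairing the examples above with their tiers.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskLevel.UNACCEPTABLE,
    "voice-assisted toy encouraging dangerous behaviour": RiskLevel.UNACCEPTABLE,
    "critical infrastructure control": RiskLevel.HIGH,
    "exam assessment": RiskLevel.HIGH,
    "CV screening for hiring": RiskLevel.HIGH,
    "behaviour-based recommendation engine": RiskLevel.LIMITED,
    "virtual assistant": RiskLevel.LIMITED,
    "translation system": RiskLevel.LIMITED,
    "email spam filter": RiskLevel.MINIMAL,
}

def risk_level(use_case: str) -> RiskLevel:
    """Return the illustrative tier for a named example use case.

    In practice, classification depends on the system's intended purpose
    and the Act's annexes, not on a simple lookup.
    """
    return EXAMPLE_USE_CASES[use_case]

print(risk_level("exam assessment"))   # RiskLevel.HIGH
```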

With the imposition of penalties for non-compliance, including significant fines, the AI Act will substantially impact IT system development and implementation processes. Understanding the rules regarding permitted and prohibited actions will be crucial for both developers and system clients.

Maintaining Digitalization Advantages

Analysis of AI usage and digitalization in the EU indicates that Latvia is ahead of some Western European countries, like Germany, in implementing and maintaining critical infrastructure systems and making state services accessible. It is imperative not to hinder this progress or repeat the mistakes seen during the GDPR implementation, where misunderstandings complicated various IT sector processes.

Small and medium-sized businesses face particular challenges. Therefore, it is vital to focus on solutions that reduce financial and administrative burdens for these entities.

Companies utilizing AI must balance innovation with legal compliance. The system development process will require initial risk assessments, data traceability procedures, quality control mechanisms, and comprehensive technical documentation throughout the development lifecycle.
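
As a rough illustration of how a development team might track these obligations, the sketch below defines a hypothetical per-system compliance record and a check for missing artefacts. The class, its field names, and the CV-screening example are invented for illustration; the AI Act prescribes the obligations themselves, not any particular data format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComplianceRecord:
    """Hypothetical per-system record of the artefacts mentioned above."""
    system_name: str
    risk_level: str                                # e.g. "high", "limited", "minimal"
    initial_risk_assessment: date | None = None    # date the assessment was completed
    data_traceability_procedure: bool = False
    quality_controls_in_place: bool = False
    technical_documentation: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Return the obligations that still lack supporting evidence."""
        missing = []
        if self.initial_risk_assessment is None:
            missing.append("initial risk assessment")
        if not self.data_traceability_procedure:
            missing.append("data traceability procedure")
        if not self.quality_controls_in_place:
            missing.append("quality control mechanism")
        if not self.technical_documentation:
            missing.append("technical documentation")
        return missing

record = ComplianceRecord(system_name="CV screening service", risk_level="high")
print(record.gaps())   # all four obligations still open
```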

Shared Responsibility

Responsibility for compliance with the AI Act’s requirements must be shared between system owners (which may include government and municipal institutions) and developers. If the burden and risk are disproportionately placed on developers, responsible parties may hesitate to participate in various IT projects, potentially diminishing competition and affecting quality.

Moreover, fines for violations of the Act are set as the higher of a fixed amount or a percentage of the company’s total worldwide annual turnover for the preceding financial year, which means a fine could exceed the total cost of the respective development project.
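
A short, hypothetical calculation illustrates the scale. The figures below are invented, and the 3% rate used is only one of the Act’s caps (prohibited practices carry a higher one), but the point stands: a turnover-based fine can exceed the budget of the project it relates to.

```python
# Hypothetical figures, for illustration only.
annual_turnover_eur = 40_000_000   # worldwide turnover in the preceding financial year
fine_rate = 0.03                   # illustrative 3% cap; prohibited practices carry a higher cap
project_cost_eur = 600_000         # total cost of the development project in question

max_fine_eur = annual_turnover_eur * fine_rate
print(f"Maximum fine: EUR {max_fine_eur:,.0f}")                          # EUR 1,200,000
print(f"Fine vs. project cost: {max_fine_eur / project_cost_eur:.1f}x")  # 2.0x
```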

Conclusion

In response to these challenges, the Latvian Artificial Intelligence Association (MILA) has been founded to promote responsible AI use, facilitate cross-border cooperation, and enhance knowledge exchange with government institutions. The Association is positioned to be a strong partner for the government, advocating for proportionality in the AI Act’s implementation so that responsible AI use can be achieved without stalling digitalization.
