The Future of AI Regulation in the EU: Key Developments and Challenges

The Current Status of the AI Act: Navigating the Future of AI Regulation in the EU

As the European Union (EU) continues to take significant strides in regulating emerging technologies, the Artificial Intelligence Act (AI Act) stands out as a landmark legislative effort to regulate AI systems. Supervisory authorities are already advising that preparation for complying with the AI Act should start now, yet as the EU moves forward with implementation, several key areas of contention have emerged. These debates highlight the challenge of balancing innovation with ethical and legal considerations, particularly in light of recent political developments.

Reportedly, the European Commission (EC) is already considering making more of the AI Act’s requirements voluntary, a proposal that is facing significant pushback from the European Parliament (EP).

Timeline for Implementation

The AI Act is set to become fully applicable on August 2, 2026, following a two-year implementation period. However, there are exceptions, with some rules coming into force earlier. Since February 2, 2025, AI systems categorised as ‘unacceptable risk’ (such as AI systems that enable social scoring or untargeted scraping of the internet to create facial recognition databases) have been banned, marking a significant step towards safeguarding fundamental rights. Since that same date, organisations developing or using AI systems must also ensure that their employees are AI-literate, fostering sufficient knowledge of AI among their workforce.

By May 2, 2025, codes of practice (primarily aimed at providers of general-purpose AI models) must be ready to demonstrate compliance with the Act’s requirements. Moreover, certain high-risk systems will have additional time to comply, with the deadline extended to August 2, 2027, allowing stakeholders to adapt to the new regulatory landscape.

Current Status and Developments

First proposed in 2021, the AI Act underwent years of scrutiny and debate within the EU legislative process, with the EP and the Council of the EU actively engaged in refining its provisions and addressing concerns raised by various stakeholders. The AI Act was formally adopted on March 13, 2024. The EC now aims to provide guidance on its application: in February 2025, it published draft guidelines on prohibited AI practices and on the definition of AI systems, with critics stating that the documents create more confusion than clarity.

Currently, key areas of discussion include the definition of high-risk AI systems, the scope of transparency requirements, and the balance between innovation and regulation.

Definition of High-Risk AI Systems

One of the most significant areas of debate revolves around the definition and categorisation of high-risk AI systems. The AI Act seeks to impose stringent requirements on systems deemed high-risk, such as those used in law enforcement, critical infrastructure, and employment. However, stakeholders have raised concerns about the criteria used to determine what constitutes high risk. For instance, some argue that the current definitions may be too broad, potentially stifling innovation by imposing excessive regulatory burdens on technologies that do not pose significant risks. Others advocate for more precise criteria to ensure that high-risk applications are adequately regulated while allowing less risky technologies to flourish.

Transparency and Accountability

Transparency and accountability are central tenets of the AI Act, yet they remain contentious issues. The AI Act mandates that AI systems, particularly those classified as high risk, must be transparent in their operations and subject to human oversight. However, the specifics of these requirements are under debate. Industry representatives express concerns that overly prescriptive transparency obligations could hinder the development of proprietary technologies and compromise competitive advantage. Conversely, consumer advocacy groups emphasise the need for robust transparency measures to protect users and ensure ethical AI deployment.

Copyright Legal Gap

In February 2025, in a letter to the EC, 15 cultural organisations highlighted the need for new legislation to protect writers, musicians, and creatives who are vulnerable due to an alleged “legal gap” in the AI Act. The AI Act does not adequately address copyright challenges posed by generative AI models, according to a copyright expert. The text and data mining exemption in the AI Act, originally intended for limited private use, allegedly has been misinterpreted in a way that could allow large tech companies to process vast amounts of intellectual property. This has sparked alarm and lawsuits from authors and musicians. The EC has acknowledged these challenges and is considering additional measures to balance innovation with the protection of human creativity.

Hungary’s Use of Facial Recognition Technology

A concrete example of the challenges faced in implementing the AI Act is Hungary’s use of facial recognition technology: Hungary is proposing to use AI-based facial recognition to identify and fine participants in the Budapest Pride march. Reports indicate that this deployment violates the provisions of the AI Act, although an EC spokesperson has stated that the assessment of its legality would depend on whether the facial recognition is administered in real time or after the fact. Members of the EP are urging the EC to look into the issue. This case underscores the difficulties of enforcing the AI Act’s requirements across member states – the liability portion of the AI Act remains unclear, especially since the withdrawal of the AI Liability Directive – and highlights the need for clear guidelines and enforcement mechanisms.

Protection of Minors

The AI Act also addresses the protection of minors, yet this area remains fraught with challenges. Ensuring that AI systems do not exploit or harm minors is a priority, but the guidelines for effectively achieving this are still being refined. The complexity of regulating AI in contexts involving minors, such as educational technologies and social media platforms, requires careful consideration to balance protection with access to beneficial technologies.

Implications for Stakeholders

Although the AI Act is still quite theoretical and its liability provisions remain unclear, it is important that organisations making use of AI systems are aware of the rules and start preparing for compliance with the AI Act. This includes conducting thorough risk assessments, implementing transparency measures, and enhancing AI literacy.

Conclusion

All in all, the AI Act is poised to have far-reaching, and often unforeseen, implications for businesses, developers, and users of AI technologies across the EU. The AI Act represents a pivotal moment in the regulation of AI within the EU. It is becoming clear, however, that it is no ‘silver bullet’ resolving every uncertainty for organisations using and developing AI systems, and the EC’s guidelines, intended to provide clarity, partly fail to do so. Moreover, the AI Act is relevant across all sectors and practice areas, and AI systems will also be regulated by other legislation, such as the GDPR and anti-discrimination laws. These complexities highlight the need for comprehensive legal guidance.

Upcoming episodes of this AI series will delve into specific topics and aspects of the AI Act, navigating between these sectors and practice areas to create awareness and provide hands-on, cross-disciplinary guidance.
