EU AI Act Implementation Resources Unveiled

The EU AI Act Newsletter #88: Resources to Support Implementation

Welcome to the EU AI Act Newsletter, a brief biweekly newsletter covering the latest developments in, and analyses of, the EU artificial intelligence law.

Legislative Process

Resources to Support Implementation

The European Commission has launched two key resources to facilitate AI Act implementation: the AI Act Service Desk and the Single Information Platform. These initiatives aim to support trustworthy AI development while providing necessary legal clarity across Europe.

The Single Information Platform will serve as a central hub for AI Act information, offering stakeholders comprehensive guidance on implementation, including materials from Member States and FAQs. Three digital tools are featured on the platform:

  • A Compliance Checker that helps stakeholders identify their legal obligations and compliance requirements;
  • An AI Act Explorer for intuitive navigation through the Act’s chapters, annexes, and recitals;
  • An online form connecting users to the AI Act Service Desk, staffed by experts working alongside the AI Office.

Italy’s AI Law

Italy has adopted its national AI law, with implementation beginning October 10, 2025. This legislation complements the EU AI Act and includes both general principles and sector-specific rules for areas not covered by EU legislation. The law designates two competent authorities: the Agency for Digital Italy (AgID) as the notifying authority and the National Cybersecurity Agency (ACN) as the market surveillance authority.

The government has twelve months to adopt additional measures, including aligning the national framework with the AI Act, assigning administrative powers to competent authorities, establishing rules for training AI systems, regulating the use of AI in investigative and policing activities, and updating the framework for civil and criminal penalties. Notably, the final version omits previously proposed requirements for labeling AI-generated news content, as general transparency requirements under the AI Act apply.

Dutch Want to Clarify AI Rules Instead of Delaying Them

The Netherlands has issued a position paper supporting clarification of AI rules over delays, while advocating for reduced regulatory burdens in the digital rulebook. The paper outlines three key principles:

  • Maintaining the original goals of digital legislation while focusing on clarification and coherence;
  • Reducing compliance costs through practical tools and assistance, especially for governments and SMEs;
  • Streamlining governance through enhanced coordination of European regulatory boards.

Specific AI Act recommendations include prioritizing implementation simplification over deadline extensions, creating a common list of critical infrastructure under Annex III, developing compliance templates while maintaining flexibility for providers, and extending the derogation for Quality Management Systems to SMEs.

Analyses

Dutch Chips Company Slams EU for Overregulating AI

ASML’s Chief Financial Officer Roger Dassen has criticized the EU’s approach to AI regulation, arguing it drives talent and companies toward Silicon Valley. Speaking at an event in Eindhoven, he suggested that Europe’s regulatory-first approach is hampering AI development. ASML, Europe’s leading tech company by market value, has advocated for pausing parts of the AI Act’s implementation, joining 46 companies in requesting a two-year delay.

The company recently became the largest shareholder in French AI firm Mistral with a €1.3 billion investment, strengthening its influence in the EU AI space. Additionally, Dassen urged completion of the EU’s capital markets union to improve startup funding, noting that while Europe excels at creating startups, it struggles with scaling them up.

California is Getting Its ‘AI Act’ Together

California has taken significant steps in AI regulation while federal policy remains stalled, with Governor Newsom signing legislation on AI transparency and child protection. Key measures include the AI Transparency Act of 2025, addressing fake online content, and Senator Wiener’s SB 53, which establishes safety standards for powerful AI systems and protects whistleblowers.

However, these reforms fall short of the EU AI Act’s scope and advocates’ desired protections. Notable gaps remain in location privacy and algorithmic fairness, with a proposed automated systems assessment requirement postponed. While progress has been made in children’s online safety, the legislation stops short of establishing strong financial accountability for platforms.

Timeline on Guidelines on AI Act Interplay

The European Commission intends to release guidelines explaining how the AI Act interacts with other digital laws from the third quarter of 2026, potentially coinciding with or following the implementation of key provisions. This timing is particularly relevant for high-risk AI systems, whose core requirements take effect on August 2, 2026.

The guidelines will address the AI Act’s relationship with other legislation, including the Medical Devices Regulation, the General Data Protection Regulation, the Digital Markets Act, the Digital Services Act, copyright rules, and the broader product safety regime. Additionally, guidance on high-risk obligations and their application along the AI value chain is expected in Q2 or Q3 2026, while clarification on how incident reporting interacts with sectoral and horizontal legislation will follow later.

Commission Not Considering Common Specifications Despite AI Standards Delays

The European Commission is not planning to develop mandatory technical requirements (common specifications) under the AI Act, despite delays in preparing technical standards needed for legal compliance. Common specifications were intended as a fallback solution when technical standards prove inadequate or delayed.

The standards for high-risk AI systems are now expected to arrive around August 2026, coinciding with the implementation deadline. At a recent closed-door meeting with the European Parliament’s AI Act implementation working group, Commission officials cited insufficient time and resources for developing common specifications. This lack of a fallback option might incentivize industry players to further delay standard-setting, particularly as such delays have already prompted calls to postpone high-risk requirements.

Leading parliamentarian Brando Benifei has argued that any postponement should be contingent on the Commission’s commitment to implement common specifications if standards remain incomplete after an extension.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...

AI in Australian Government: Balancing Innovation and Security Risks

The Australian government is considering using AI to draft sensitive cabinet submissions as part of a broader strategy to implement AI across the public service. While some public servants report...