Italy’s Pioneering AI Framework Law: A New Era Beyond the EU AI Act

Italy Moves on AI: The New AI Framework Law

Italy has become the first EU Member State to adopt a comprehensive national AI Framework Law. Approved on September 17, 2025, the statute is designed to operate alongside Regulation (EU) 2024/1689 (the EU AI Act), establishing a national framework for the adoption, development, and deployment of artificial intelligence.

Core Principles and Scope

The AI Framework Law opens with a set of general principles intended to guide the development and use of AI in Italy. Importantly, these principles are high-level and do not impose new compliance duties on AI providers or deployers. Instead, they set a legal and political tone emphasizing an anthropocentric approach: AI must support human decision-making, respect fundamental rights, and never displace human responsibility.

Moreover, the use of AI must comply with constitutional rights and EU law, with particular emphasis on transparency, non-discrimination, gender equality, and cybersecurity. Notably, the law explicitly prohibits AI systems that could undermine democratic processes, addressing concerns over algorithmic amplification and disinformation.

The law also clarifies its own limits, stating that it does not create new obligations beyond those outlined in the EU AI Act. Businesses should view the AI Act as the primary compliance source while treating the Italian law as a framework that sets national guardrails.

Governance and Institutional Roles

To effectively govern AI, the Framework Law outlines a supervisory structure that aligns national responsibilities with the EU AI Act. A new Coordination Committee has been established within the Office of the Prime Minister, tasked with designing and updating Italy’s national AI strategy.

Two specialized agencies serve as national AI authorities:

  • AgID (the Agency for Digital Italy): Acts as the notifying authority, overseeing conformity assessment bodies and promoting AI adoption and innovation.
  • ACN (the National Cybersecurity Agency): Acts as the market-surveillance authority, with investigative and sanctioning powers focused on the security and resilience of AI systems.

These authorities must collaborate with existing regulators, including AGCOM for digital services coordination, the Garante for data protection, and various financial sector regulators.

Sectoral Guardrails

The AI Framework Law introduces targeted national rules in sectors of high social sensitivity, supplementing the EU AI Act:

Healthcare and Disability

AI is acknowledged as a vital tool for healthcare, but AI systems cannot restrict access to healthcare on discriminatory grounds. Patients have the right to be informed when AI is involved in their care. The law also permits certain uses of health data in AI research deemed of “relevant public interest,” easing data use for research and development.

Employment

Employers must ensure that workplace AI is safe, reliable, and non-intrusive. Workers must be informed when AI tools are deployed, reflecting existing labor law obligations. A new Labour AI Observatory will monitor AI’s impact on the workforce.

Public Administration and Justice

While AI can be used to enhance public administration, accountability for decisions remains with human officials. In the justice system, AI must not replace judicial reasoning or the interpretation of the law, ensuring that judges retain exclusive decision-making powers.

Minors

The law sets specific rules on AI-related consent for individuals under 18. Children under 14 require parental consent to access AI technologies, while those aged 14 to 17 may consent independently, provided the information given to them is clear and easy to understand.
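
For providers of consumer-facing AI services, this rule translates in practice into an age-gate at sign-up. The sketch below is purely illustrative: the age thresholds mirror the rule described above, while the function and field names are hypothetical and would need to be adapted to an actual product and to legal advice.

    from dataclasses import dataclass

    PARENTAL_CONSENT_AGE = 14   # under 14: parental consent required
    ADULT_AGE = 18              # 18 and over: ordinary consent rules apply

    @dataclass
    class SignupRequest:
        age: int
        has_parental_consent: bool = False
        information_is_clear: bool = False  # e.g. a plain-language notice was shown and acknowledged

    def may_consent_to_ai_service(req: SignupRequest) -> bool:
        """Illustrative check of the age thresholds described in the article."""
        if req.age >= ADULT_AGE:
            return True
        if req.age >= PARENTAL_CONSENT_AGE:
            # 14-17: may consent independently if the information provided is clear
            return req.information_is_clear
        # Under 14: parental consent is required
        return req.has_parental_consent

Any such check would, of course, operate alongside the transparency and data-protection obligations that already apply to the service.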

Intellectual Property, Content, and Criminal Law

The law amends Italy’s intellectual property and criminal codes to address challenges posed by synthetic content. It clarifies that AI-assisted creations can still qualify as “works of human intellect,” and it makes unauthorized text-and-data mining (TDM) a criminal offense, raising the stakes for developers.

A new standalone offense of unlawfully disseminating AI-generated or manipulated content has been introduced, carrying penalties of one to five years’ imprisonment. The law also increases penalties for crimes committed by means of AI, targeting areas such as market manipulation.

Economic Development and National Strategy

The AI Framework Law aims to shape national strategy, procurement choices, and the flow of public investment:

  • National AI Strategy: A strategy must be prepared and updated every two years, aligning incentives and identifying priority use cases.
  • Public Procurement: E-procurement platforms should prioritize AI solutions that maintain strategic data in Italian data centers.
  • Investment in AI: Up to €1 billion is authorized for investing in Italian AI and cybersecurity companies through a state venture vehicle.

Delegated Legislation

The law sets a demanding agenda for secondary legislation, requiring the government to adopt several legislative decrees within 12 months regarding training data, illicit use of AI, and alignment with the EU AI Act.

Conclusion

The Italian AI Framework Law is significant not as a new compliance code but as a framework that establishes the institutional architecture and identifies the legal points of friction for companies operating in Italy. With its procurement priorities and sector-specific responsibilities, the law is designed to strike a balanced approach to AI governance, while its interaction with the EU AI Act in practice remains to be seen.
