Italy’s Pioneering AI Framework Law: A New Era Beyond the EU AI Act

Italy Moves on AI: The New AI Framework Law

Italy has become the first EU Member State to adopt a comprehensive AI Framework Law, a landmark step. Approved on September 17, 2025, the statute is designed to work alongside Regulation (EU) 2024/1689 (the EU AI Act), establishing a national framework for the adoption, development, and deployment of artificial intelligence.

Core Principles and Scope

The AI Framework Law begins with a set of general principles intended to guide AI operations in Italy. Importantly, these principles are high-level and do not impose new compliance duties on AI providers or deployers. Instead, they set a legal and political tone emphasizing an anthropocentric approach. This means that AI must support human decision-making, respect fundamental rights, and never displace human responsibility.

Moreover, the use of AI must comply with constitutional rights and EU laws, stressing principles like transparency, non-discrimination, gender equality, and cybersecurity. Notably, the law explicitly prohibits AI systems that could undermine democratic processes, addressing concerns over algorithmic amplification and disinformation.

The law also clarifies its own limits, stating that it does not create new obligations beyond those outlined in the EU AI Act. Businesses should view the AI Act as the primary compliance source while treating the Italian law as a framework that sets national guardrails.

Governance and Institutional Roles

To effectively govern AI, the Framework Law outlines a supervisory structure that aligns national responsibilities with the EU AI Act. A new Coordination Committee has been established within the Office of the Prime Minister, tasked with designing and updating Italy’s national AI strategy.

Two specialized agencies serve as national AI authorities:

  • AgID (Italy’s Digital Transformation Agency): Acts as the notifying authority, managing conformity assessments and promoting AI adoption.
  • ACN (National Cybersecurity Agency): Functions as the market-surveillance authority, with investigative and sanctioning powers over the security and resilience of AI systems.

These authorities must collaborate with existing regulators, including AGCOM for digital services coordination, the Garante for data protection, and various financial sector regulators.

Sectoral Guard-Rails

The AI Framework Law introduces targeted national rules in sectors of high social sensitivity, supplementing the EU AI Act:

Healthcare and Disability

AI is acknowledged as a vital tool for healthcare, but AI systems cannot restrict access to care on discriminatory grounds. Patients have the right to be informed when AI is involved in their care. The law also permits certain uses of health data in AI research deemed of “relevant public interest,” easing data access for research and development.

Employment

Employers must ensure that workplace AI is safe, reliable, and non-intrusive. Workers must be informed when AI tools are deployed, reflecting existing labor law obligations. A new Labour AI Observatory will monitor AI’s impact on the workforce.

Public Administration and Justice

While AI can be used to enhance public administration, accountability remains critical. AI must not replace judicial reasoning or interpretation, ensuring that judges maintain exclusive decision-making powers.

Minors

The law sets specific rules regarding AI-related consent for individuals under 18. Children under 14 require parental consent to access AI technologies, while those aged 14-17 may consent independently if the information provided is clear.
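
To illustrate how a provider might operationalize this consent rule in an onboarding flow, the sketch below encodes the two age thresholds as a simple check. It is a minimal Python illustration, not anything prescribed by the statute: the class, field names, and the clear_information_provided flag are assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative age-gate sketch. The 14 and 18 thresholds come from the article;
# everything else (names, fields, structure) is an assumption for illustration.
PARENTAL_CONSENT_AGE = 14  # below this age, parental consent is required
AGE_OF_MAJORITY = 18       # 18 and over falls outside the minors provision

@dataclass
class ConsentRequest:
    age: int
    parental_consent: bool = False            # has a parent or guardian consented?
    clear_information_provided: bool = False  # was clear information given to the minor?

def may_access_ai_service(request: ConsentRequest) -> bool:
    """Return True if the minors-related consent conditions appear to be met."""
    if request.age >= AGE_OF_MAJORITY:
        return True  # adults: the minors provision does not apply
    if request.age >= PARENTAL_CONSENT_AGE:
        # 14-17: may consent independently, provided the information given was clear
        return request.clear_information_provided
    # under 14: parental consent is required
    return request.parental_consent
```

In practice such a check would sit alongside the data-protection requirements overseen by the Garante, which the law does not displace.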

Intellectual Property, Content, and Criminal Law

The law amends Italy’s intellectual property and criminal codes to address the challenges posed by synthetic content. It clarifies that AI-assisted creations are still considered “works of human intellect,” and it makes unauthorized text-and-data mining (TDM) a criminal offense, raising the stakes for developers.

A new standalone offense of unlawfully disseminating AI-generated or manipulated content has been introduced, carrying penalties of one to five years’ imprisonment. The law also increases penalties for crimes committed using AI, targeting areas such as market manipulation.

Economic Development and National Strategy

The AI Framework Law aims to shape national strategy, procurement choices, and the flow of public investment:

  • National AI Strategy: A strategy must be prepared and updated every two years, aligning incentives and identifying priority use cases.
  • Public Procurement: E-procurement platforms should prioritize AI solutions that maintain strategic data in Italian data centers.
  • Investment in AI: Up to €1 billion is authorized for investing in Italian AI and cybersecurity companies through a state venture vehicle.

Delegated Legislation

The law sets a demanding agenda for secondary legislation, requiring the government to adopt several legislative decrees within 12 months regarding training data, illicit use of AI, and alignment with the EU AI Act.

Conclusion

The Italian AI Framework Law is significant not as a new compliance code but as a framework that establishes the institutional architecture and the likely points of legal friction for companies operating in Italy. With its procurement priorities and sector-specific responsibilities, the law is designed to strike a balanced approach to AI governance, while its interaction with the EU AI Act in practice remains to be seen.
