Italy’s Groundbreaking AI Regulation: A New Era of Compliance

Italy’s Leadership in National AI Regulation

With the approval of Law No. 132 of September 23, 2025, Italy has positioned itself as the first European Union member state to adopt a comprehensive national law aligned with EU Regulation No. 2024/1689 (the “AI Act”). This legislation anticipates the full entry into force of the AI Act and establishes a national framework for governance, supervision, and support for innovation in the field of artificial intelligence (AI).

Background

The draft law, known as Draft Law 1146/24, underwent scrutiny from several sector authorities, including the Italian Data Protection Authority (the “Garante”), as well as the European Commission. In its opinion dated September 12, 2024, the Commission emphasized the need for closer alignment with the AI Act and advocated a more open approach to the use of AI. After amendments based on these recommendations, the Italian Parliament approved the law on September 17, 2025, and it was published in Official Gazette No. 223 on September 25, 2025.

The Sectors Concerned

The law’s first section introduces a national strategy on AI, which the Interministerial Committee for Digital Transition will update biennially, in collaboration with the Department for Digital Transformation. This strategy will guide policy and regulatory decisions regarding AI.

The second section delineates specific rules for various sectors, particularly focusing on healthcare and scientific research. AI is permitted as a supportive tool but is restricted from making decisions related to treatment access, ensuring that human oversight remains central. Additionally, personal data may be processed without consent in public and private non-profit research deemed of significant public interest, provided ethics committee approval is obtained and the Garante is notified.

The law also addresses the labor and justice sectors, emphasizing the accountability of AI “deployers.” It extends organizational controls beyond high-risk uses, linking new obligations with those established in privacy law, including Data Protection Impact Assessments (DPIA) and privacy by design.

Employment Sector Regulations

In the realm of employment, the law introduces specific safeguards for managing processes related to worker selection, evaluation, and monitoring through AI systems. It mandates transparency obligations for employers, ensures employees’ right to information, and requires impact assessments to mitigate algorithmic discrimination.

Moreover, the law establishes an observatory to monitor the impact of AI on work, aiming to maximize benefits while minimizing risks associated with AI systems in the workplace. It also promotes training initiatives for both workers and employers in AI literacy.
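
The law does not prescribe a particular methodology for these impact assessments. As a minimal, purely illustrative sketch, an employer could begin by comparing the selection rates an AI screening tool produces across groups of applicants; the group labels and data below are hypothetical, and the ratio test is a common fairness-auditing heuristic rather than a requirement of the Italian law or the AI Act.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions):
    """Ratio of each group's selection rate to the highest-rate group.

    Ratios well below 1.0 (e.g. under the common 0.8 heuristic) flag
    outcomes that deserve a closer, human-led review; they do not by
    themselves prove or disprove discrimination.
    """
    rates = selection_rates(decisions)
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical screening outcomes: (group label, shortlisted by the AI tool?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact_ratios(sample))   # {'A': 1.0, 'B': 0.5}
```

Any production assessment would need to be documented alongside the DPIA and feed into the transparency information owed to workers, with a human deciding what follows from the flagged disparities.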

Justice Sector Regulations

The legislation imposes strict criteria for deploying AI systems within the justice sector, applicable to both case management and decision support. It reinforces human oversight and prohibits AI’s use in interpretative tasks, restricting automated tools to organizational, simplification, and administrative use, thus ensuring robust protection of the rights to defense and confidentiality.

Intellectual Property Considerations

The law recognizes copyright protection for works created with the assistance of AI, provided they reflect the author’s intellectual effort. Conversely, materials produced solely by AI do not receive protection. The reproduction and extraction of text and data via AI are permitted if the sources are legitimately accessible.

Governance Structure

In terms of governance, the national AI strategy is overseen by the Presidency of the Council of Ministers, involving key authorities such as the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN), alongside sectoral supervisory bodies like the Bank of Italy, CONSOB, and IVASS. Special emphasis is placed on cybersecurity, recognized as a fundamental prerequisite throughout the AI systems’ lifecycle.

Institutional coordination among national authorities, including the Data Protection Authority, will be crucial in aligning AI risk assessments with GDPR and ethical impact evaluations.

Interconnections with the AI Act, NIS2, and GDPR

This law effectively addresses areas not covered by the AI Act, identifying supervisory authorities, regulating inspections, supporting SMEs and public administrations, and defining penalties for non-compliance (e.g., concerning deepfakes). Despite the constraints posed by European harmonization, the national legislation enhances organizational safeguards and procedural requirements, extending these to low-risk scenarios and sensitive sectors like labor, health, and justice.

When it comes to generative AI and deepfakes, Italy adopts a more prescriptive stance, introducing criminal offenses and mechanisms for content traceability and authenticity, whereas the AI Act primarily relies on information obligations and codes of conduct, with heightened requirements for systems posing systemic risks. The outcome is a model integrated with the GDPR, NIS2, and sector-specific regulations, translating general European provisions into actionable compliance controls.
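
The law leaves the concrete traceability mechanism to implementers. As a minimal sketch, assuming a hand-rolled approach rather than any standard mandated by the law or the AI Act, an operator could attach a signed provenance record to AI-generated content so that its origin and integrity can be verified later; the field names, key handling, and HMAC scheme below are illustrative assumptions only.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-securely-managed-key"  # illustrative only

def attach_provenance(content: bytes, generator_id: str) -> dict:
    """Wrap AI-generated content with a signed provenance record."""
    record = {
        "generator": generator_id,          # which model/system produced it
        "ai_generated": True,               # explicit disclosure flag
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the signature is intact."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"...synthetic image bytes..."
meta = attach_provenance(image_bytes, generator_id="internal-genai-service")
assert verify_provenance(image_bytes, meta)
```

A keyed signature of this kind only helps if the signing key is managed securely and the record travels with the content; established content-provenance and watermarking approaches pursue the same goal in more robust, interoperable ways.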

Conclusion

In practice, compliance will hinge on the ability to coordinate governance, risk assessments, and controls. Companies must prioritize:

  • Mapping AI systems and classifying their risks (see the sketch after this list);
  • Integrating DPIAs with AI risk assessments;
  • Defining roles and responsibilities for developers and users;
  • Including “AI Act-ready” contractual clauses in supply chains;
  • Implementing technical measures, including content marking and incident reporting management.
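
None of these controls comes with a mandated format. As a minimal sketch, assuming an internally defined register rather than any official template, the first two items could start from a structured inventory of AI systems that records each system’s risk class, accountable owners, and DPIA reference; the risk tiers and field names below are illustrative, loosely echoing the AI Act’s categories, and should not be read as an official taxonomy.

```python
from dataclasses import dataclass, field

# Illustrative risk classes loosely modelled on the AI Act's tiers;
# the actual classification must come from a legal assessment, not this sketch.
RISK_CLASSES = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_class: str                          # one of RISK_CLASSES
    deployer_owner: str                      # accountable business owner
    provider: str                            # internal team or external supplier
    dpia_reference: str | None = None        # link to the DPIA, if one exists
    contractual_clauses: list[str] = field(default_factory=list)

    def __post_init__(self):
        if self.risk_class not in RISK_CLASSES:
            raise ValueError(f"unknown risk class: {self.risk_class}")

def needs_dpia(record: AISystemRecord) -> bool:
    """Flag entries that, under this sketch's assumptions, still lack a DPIA."""
    return record.risk_class in ("high", "limited") and record.dpia_reference is None

inventory = [
    AISystemRecord("cv-screening", "rank job applications", "high",
                   deployer_owner="HR", provider="external-vendor",
                   contractual_clauses=["AI-Act-ready warranty"]),
    AISystemRecord("faq-chatbot", "answer customer FAQs", "minimal",
                   deployer_owner="Support", provider="internal"),
]
print([r.name for r in inventory if needs_dpia(r)])   # ['cv-screening']
```

Keeping such a register in a structured form makes it straightforward to flag gaps automatically, for example high-risk entries without a DPIA, and to reuse the same data for contractual reviews and incident-reporting workflows.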

Furthermore, continuous monitoring of European implementing acts and national guidelines, especially those from the Italian Data Protection Authority, will be essential. These standards will shape evaluation criteria, technical parameters, and inspection priorities. Proactive engagement will not only mitigate the risk of penalties but also provide a competitive edge by transforming compliance into a hallmark of quality, security, and reliability in AI systems.
