Italy’s Groundbreaking AI Regulation: A New Era of Compliance

Italy’s Leadership in National AI Regulation

With Law No. 132 of September 23, 2025, Italy has become the first European Union member state to adopt a comprehensive national law aligned with EU Regulation No. 2024/1689 (the “AI Act”). The legislation anticipates the AI Act’s full entry into application and establishes a national framework for governance, supervision, and support for innovation in artificial intelligence (AI).

Background

The draft law, known as Draft Law 1146/24, was scrutinized by several sectoral authorities, including the Italian Data Protection Authority (the “Garante”) and the European Commission. In its opinion of September 12, 2024, the Commission called for closer alignment with the AI Act and a more open approach to the use of AI. After amendments incorporating these recommendations, the Italian Parliament gave final approval to the law on September 17, 2025, and it was published in Official Gazette No. 223 on September 25, 2025.

The Sectors Concerned

The law’s first section introduces a national strategy on AI, which the Interministerial Committee for Digital Transition will update biennially, in collaboration with the Department for Digital Transformation. This strategy will guide policy and regulatory decisions regarding AI.

The second section delineates specific rules for various sectors, particularly focusing on healthcare and scientific research. AI is permitted as a supportive tool but is restricted from making decisions related to treatment access, ensuring that human oversight remains central. Additionally, personal data may be processed without consent in public and private non-profit research deemed of significant public interest, provided ethics committee approval is obtained and the Garante is notified.

The law also addresses the labor and justice sectors, emphasizing the accountability of AI “deployers.” It extends organizational controls beyond high-risk uses, linking new obligations with those established in privacy law, including Data Protection Impact Assessments (DPIA) and privacy by design.

Employment Sector Regulations

In the realm of employment, the law introduces specific safeguards for managing processes related to worker selection, evaluation, and monitoring through AI systems. It mandates transparency obligations for employers, ensures employees’ right to information, and requires impact assessments to mitigate algorithmic discrimination.

Moreover, the law establishes an observatory to monitor the impact of AI on work, aiming to maximize benefits while minimizing risks associated with AI systems in the workplace. It also promotes training initiatives for both workers and employers in AI literacy.

Justice Sector Regulations

The legislation imposes strict criteria for deploying AI systems within the justice sector, applicable to both case management and decision support. It reinforces human oversight and prohibits AI’s use in interpretative tasks, restricting automated tools to organizational, simplification, and administrative use, thus ensuring robust protection of the rights to defense and confidentiality.

Intellectual Property Considerations

The law recognizes copyright protection for works created with the assistance of AI, provided they reflect the author’s intellectual effort. Conversely, materials produced solely by AI do not receive protection. The reproduction and extraction of text and data via AI are permitted if the sources are legitimately accessible.

Governance Structure

In terms of governance, the national AI strategy is overseen by the Presidency of the Council of Ministers, involving key authorities such as the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN), alongside sectoral supervisory bodies like the Bank of Italy, CONSOB, and IVASS. Special emphasis is placed on cybersecurity, recognized as a fundamental prerequisite throughout the AI systems’ lifecycle.

Institutional coordination among national authorities, including the Data Protection Authority, will be crucial in aligning AI risk assessments with GDPR and ethical impact evaluations.

Interconnections with the AI Act, NIS2, and GDPR

The law fills the spaces the AI Act leaves to member states: it identifies supervisory authorities, regulates inspections, supports SMEs and public administrations, and defines penalties for non-compliance (for example, concerning deepfakes). Within the constraints of European harmonization, the national legislation strengthens organizational safeguards and procedural requirements, extending them to low-risk scenarios and to sensitive sectors such as labor, health, and justice.

When it comes to generative AI and deepfakes, Italy adopts a more prescriptive stance, introducing criminal offenses and mechanisms for content traceability and authenticity, whereas the AI Act primarily emphasizes information obligations and codes of conduct with heightened requirements for systems posing systemic risks. The outcome is a model integrated with the GDPR, NIS2, and sector-specific regulations, translating general European provisions into actionable compliance controls.

Conclusion

In practice, compliance will hinge on the ability to coordinate governance, risk assessments, and controls. Companies should prioritize:

  • Mapping systems and classifying risks;
  • Integrating DPIAs;
  • Defining roles and responsibilities for developers and users;
  • Including “AI Act-ready” contractual clauses in supply chains;
  • Implementing technical measures, including content marking for traceability and incident-reporting management.
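The first steps of this checklist amount to keeping a living inventory of AI systems with their risk classification and outstanding obligations. As a minimal sketch, the record below uses illustrative risk tiers loosely mirroring the AI Act’s classification and hypothetical field names; the actual legal categories and required actions must come from counsel, not from this example.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk tiers loosely mirroring the AI Act's classification.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    sector: str                           # e.g. "employment", "health", "justice"
    risk_tier: RiskTier
    dpia_completed: bool = False          # GDPR Art. 35 impact assessment
    human_oversight: bool = False         # documented human-in-the-loop controls
    contract_clauses_ready: bool = False  # "AI Act-ready" supply-chain clauses

    def open_actions(self) -> list[str]:
        """Return the outstanding compliance steps for this system."""
        actions = []
        if self.risk_tier is RiskTier.HIGH and not self.dpia_completed:
            actions.append("complete DPIA")
        if self.sector in {"employment", "health", "justice"} and not self.human_oversight:
            actions.append("document human oversight")
        if not self.contract_clauses_ready:
            actions.append("add AI Act-ready contract clauses")
        return actions

# Usage: a CV-screening tool in the employment sector.
screener = AISystemRecord(
    name="cv-screener",
    purpose="candidate shortlisting",
    sector="employment",
    risk_tier=RiskTier.HIGH,
)
print(screener.open_actions())
# → ['complete DPIA', 'document human oversight', 'add AI Act-ready contract clauses']
```

The point of the structure is that each system carries its own gap analysis: as assessments are completed, the open-actions list empties, giving a simple audit trail for inspections.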

Furthermore, continuous monitoring of European executive actions and national guidelines, especially those from the Italian Data Protection Authority, will be essential. These standards will shape evaluation criteria, technical parameters, and inspection priorities. Proactive engagement will not only mitigate the risk of penalties but also provide a competitive edge by transforming compliance into a hallmark of quality, security, and reliability in AI systems.
