AI, Liability and Governance: What companies must know about Italy’s Law No. 132/2025

On 10 October 2025, Italy’s new Law No. 132/2025 on Artificial Intelligence entered into force. With it, Italy positions itself as the first EU Member State to adopt comprehensive national legislation complementing the overarching EU AI Act (Regulation (EU) 2024/1689), marking a milestone in Europe’s approach to AI governance.

While the Law aims for coordination with the EU framework, it introduces distinctly Italian legal and constitutional principles, such as the explicit protection of democratic integrity.

The foundation of the new law is a human-centered (anthropocentric) vision for AI. This stance directly shapes the greatest challenge companies face today: the IP barrier to generative AI content.

The IP barrier: when is AI-generated content truly a ‘work’?

The most critical intervention for businesses leveraging new technologies is the Law’s amendment to the Italian Copyright Act (L. 633/1941). This provision definitively addresses the ambiguous status of generative AI output.

The Law confirms that a work created “with the aid of artificial intelligence tools” is only protected if it is “the result of human creativity and reflects the intellectual work of the author”.

This legal clarification imposes a crucial burden of proof on all deployers of AI systems. Content generated by an AI model acting autonomously will not be eligible for copyright protection under Italian law. In this context, ‘autonomously’ should be understood as referring to situations in which human involvement is limited to a simple prompt. The human contribution must be substantial and creative.

The risk of creating commercially valuable content that is legally unprotected, and thus unlicensable or indefensible, is now immediate and profound.

The new due diligence: a mandate for corporate governance

The human-centered approach translates directly into increased obligations for corporate governance and commercial risk management.

The corporate risk

The principle of ‘human oversight’ is central. The Law clearly states that AI cannot substitute for the ultimate decision-making power of the individual. This is a critical factor in managing administrative liability under Legislative Decree No. 231/2001 (Modello 231).

Companies must establish and document a Human Authorship Protocol (HAP). This protocol must demonstrably trace the creative role of a human editor or prompter in the final AI-assisted output. Failure to prove this process means that the company lacks protected IP, potentially exposing it to liability and commercial failure.
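To make the traceability requirement concrete, the sketch below shows one possible way such a protocol could be recorded in practice. It is an illustration only: the field names, structure and example values are hypothetical and are not prescribed by the Law.

# Minimal, hypothetical sketch of an authorship log (not prescribed by Law No. 132/2025)
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class HumanContribution:
    """One documented human intervention on an AI-assisted draft."""
    editor: str        # person exercising creative judgement
    description: str   # what was changed and why it is creative
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class AuthorshipRecord:
    """Provenance record for a single AI-assisted deliverable."""
    asset_id: str          # internal reference to the final output
    ai_system: str         # AI tool used (vendor, model, version)
    initial_prompt: str    # the instruction given to the AI tool
    contributions: list = field(default_factory=list)  # HumanContribution items

    def to_json(self) -> str:
        """Serialise the record so it can be archived alongside the asset."""
        return json.dumps(asdict(self), indent=2)

# Usage: log the substantial human reworking of a generated draft
record = AuthorshipRecord(
    asset_id="campaign-2025-copy-014",
    ai_system="general-purpose LLM (vendor and version as applicable)",
    initial_prompt="Draft a slogan for the autumn product launch.",
)
record.contributions.append(HumanContribution(
    editor="J. Rossi",
    description=("Rewrote tone and structure, selected and merged variants, "
                 "and added brand-specific wording absent from the AI draft."),
))
print(record.to_json())

A record of this kind, kept for each deliverable, gives the company contemporaneous evidence of the substantial and creative human contribution that the amended Copyright Act requires.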

The contractual overhaul

The change in IP status requires an overhaul of all related commercial documentation.

  • IP assignment and licensing: existing commercial contracts must be immediately updated. They must now explicitly define and require the documentation of the ‘human intellectual work’ that transforms a GenAI output into a protected asset.

  • Professional services: for intellectual professions, the use of AI is restricted to instrumental and support activities. Professionals must also inform their clients, in a clear and comprehensible manner, which AI systems are used.

Sectoral specificity and future implementation

Law No. 132/2025 also introduces specific sectoral rules that signal future areas of regulatory focus, even as some key provisions remain delegated to future executive decrees.

  • Financial & insurance: the law empowers the government to issue decrees that will define the legal framework for the use of data and algorithms in the financial and insurance sector.  This will require institutions to establish robust governance, auditing and testing of AI systems.

  • Employment: employers must adhere to strict principles of safety, reliability and transparency when deploying AI systems in the workplace, and are obliged to inform employees of their use.

Next steps

Italy’s new law confirms that AI adoption must be strategic, auditable and compliant. The era of passively using GenAI tools is over; the focus is now on governance, transparency and traceability.