#AIlaw

AI, Liability and Governance: What companies must know about Italy’s Law No. 132/2025

On 10 October 2025, Italy’s new Law No. 132/2025 on Artificial Intelligence entered into force. Italy positions itself as the first EU Member State to adopt comprehensive national legislation that complements the overarching EU AI Act (Regulation (EU) 2024/1689). This marks a milestone in Europe’s approach to AI governance.

While the Law aims for coordination with the EU framework, it introduces distinctly Italian legal and constitutional principles, such as the explicit protection of democratic integrity.

The new law is founded on a human-centered, anthropocentric vision of AI. This stance directly shapes the greatest challenge companies face today: the IP barrier to generative AI content.

The IP barrier: when is AI-generated content truly a ‘work’?

The most critical intervention for businesses leveraging new technologies is the Law’s amendment to the Italian Copyright Act (L. 633/1941). This provision definitively addresses the ambiguous status of generative AI output.

The Law confirms that a work created “with the aid of artificial intelligence tools” is only protected if it is “the result of human creativity and reflects the intellectual work of the author”.

This legal clarification imposes a crucial burden of proof on all deployers of AI systems. Content generated by an AI model acting autonomously will not be eligible for copyright protection under Italian law. In this context, ‘autonomously’ should be understood as referring to situations in which human involvement is limited to a simple prompt. The human contribution must be substantial and creative.

The risk of creating commercially valuable content that is legally unprotected, and thus unlicensable or indefensible, is now immediate and profound.

The new due diligence: a mandate for corporate governance

The human-centered approach translates directly into increased obligations for corporate governance and commercial risk management.

The corporate risk

The principle of ‘human oversight’ is central. The Law clearly states that AI cannot replace the ultimate decision-making power of the individual. This is a critical factor in managing administrative liability under Legislative Decree No. 231/2001 (Modello 231).

Companies must establish and document a Human Authorship Protocol (HAP). This protocol must demonstrably trace the creative role of a human editor or prompter in the final AI-assisted output. Failure to prove this process means that the company lacks protected IP, potentially exposing it to liability and commercial failure.

The contractual overhaul

The change in IP status requires an overhaul of all related commercial documentation.

  • IP assignment and licensing: existing commercial contracts must be immediately updated. They must now explicitly define and require the documentation of the ‘Human intellectual work’ that transforms a GenAI output into a protected asset.

  • Professional services: for intellectual professions, the use of AI is restricted to instrumental and support activities. Professionals must also inform their clients which AI systems are used, and must do so in a clear and comprehensible manner.

Sectoral specificity and future implementation

Law No. 132/2025 also introduces specific sectoral rules that signal future areas of regulatory focus, even as some key provisions remain delegated to future executive decrees.

  • Financial & insurance: the Law empowers the government to issue decrees that will define the legal framework for the use of data and algorithms in the financial and insurance sector. This will require institutions to establish robust governance, auditing and testing of AI systems.

  • Employment: employers must adhere to strict principles of safety, reliability and transparency when deploying AI systems in the workplace, and are obligated to inform employees of their use.

Next steps

Italy’s new law confirms that AI adoption must be strategic, auditable and compliant. The era of passively using GenAI tools is over; the focus is now on governance, transparency and traceability.

Denmark’s groundbreaking “deepfake copyright” proposal

Gianpaolo Todisco

In the ever-evolving landscape of law and technology, Denmark has taken a bold step into uncharted territory. A new legislative proposal seeks to give individuals copyright-like control over their own face, voice, and likeness—a direct response to the rise of AI-generated deepfakes.

This could become a landmark development not only for Europe, but for global digital rights. If the proposal is passed this autumn, Denmark plans to champion similar legal reforms across the EU during its upcoming presidency.

With the proliferation of generative AI tools, creating realistic videos or audio clips of someone saying or doing things they never did is now accessible to anyone with a smartphone. While some deepfakes are used for satire or entertainment, others cross the line into invasion of privacy, fraud, harassment, and reputational harm.

Existing laws around defamation or privacy often fall short when addressing AI-manipulated content. Most notably, there’s a legal void around who owns a person’s likeness in a digital format.

The Danish Ministry of Culture has proposed a legal framework that would treat a person's image, voice, and facial expressions as intellectual property. Under the proposal:

  • Individuals can demand the removal of deepfake content created or shared without consent.

  • Violators can be sued for damages, much as in cases of copyright infringement.

  • Parody and satire remain protected, preserving freedom of expression.

In a digital world where data is currency and identity can be faked in seconds, this proposal places control back in the hands of individuals.

Denmark has already signaled that, if the law passes, it intends to champion similar reforms at the EU level during its 2025 presidency. This aligns with ongoing discussions in Brussels around AI regulation, digital rights, and ethical frameworks for emerging technologies.

This development could redefine fundamental legal concepts:

  • Personal identity as IP: A new category of rights where human traits are treated as digital assets.

  • Stronger guardrails on AI: A legal model that balances innovation with accountability.

  • Precedent-setting potential: If adopted across the EU or other regions, it could shape future global standards.

As AI capabilities continue to blur the lines between real and artificial, legal systems must adapt. Denmark’s proposal reflects a proactive approach: not just reacting to harm, but reshaping the very framework of digital identity.

Whether this model becomes a European standard or remains a national experiment, one thing is certain: the age of treating your likeness like intellectual property has begun.