
Is AI an Author?

Gianpaolo Todisco - Partner

Current intellectual property laws were conceived in an era in which creativity was the exclusive prerogative of human beings. However, the increasing use of generative AI, such as ChatGPT for texts or MidJourney for images, has made it necessary to rethink the regulations.

At the moment, most international legislation states that only a human being can be considered an author or inventor.

For example, the United States Patent and Trademark Office (USPTO) has repeatedly rejected patent applications for inventions made by AI, arguing that only a human being can be recognized as an inventor.

The United Kingdom Intellectual Property Office (UKIPO) follows the same line, excluding AI as a possible copyright holder.

In the European Union, the EUIPO and the Court of Justice of the European Union likewise agree that only a natural person can claim copyright in a work.

Italian case law has developed a notion of creativity which, as highlighted by the Court of Cassation in judgment no. 25173/2011, “does not coincide with that of creation, originality and absolute novelty”, but refers to the “personal and individual expression of an objectivity”. This principle was further developed by the Court of Cassation in judgment no. 10300/2020, which specified that the work must “reflect the personality of its author, manifesting the latter's free and creative choices”.

The Italian system, as confirmed by the Civil Code in art. 2575, recognizes copyright on “works of the intellect of a creative nature”, assuming a direct connection between the work and the personality of its creator. As highlighted by the Court of Florence in judgment no. 1372/2022, “copyright does not protect ideas but only the expressive form that the author gives to the work, since it is in the expressive form that the author manifests his creativity and expresses his personality”.

One point of debate is whether AI can be recognized as an author in all respects. Currently, regulatory bodies consider AI to be a tool, and the rights to the generated works belong to the programmer or the user who gave the input for the creation. However, in some cases human intervention is minimal or even non-existent, making it difficult to establish authorship of the work.

The absence of clear regulations poses several problems. First of all, legal protection: works generated by AI may not be protected by copyright, making them freely usable by anyone.

From the point of view of plagiarism and copyright infringement, many AIs are trained on existing data and could create content that resembles copyrighted works, raising legal issues.

Finally, if AIs generate works of art, music and texts on a large scale, artists and creators risk seeing their work devalued or replaced.

Possible Solutions and Future Prospects

  • New categories of copyright: create a new form of protection for works generated by AI with a percentage of human involvement.

  • Attribution of rights: assign ownership of AI works to the creator of the algorithm or to whoever provided the creative input.

  • Regulation of AI use: define clear rules on how to train AI without violating pre-existing copyrights.

In conclusion, artificial intelligence is transforming the intellectual property landscape and poses challenges that legislators must address quickly. Until regulations are updated, companies and creators will have to be careful in their use of AI-generated content to avoid legal problems. The debate is still open, and the decisions made today will define the future of creativity in the digital age.

AI Act: New scenarios in the regulation of artificial intelligence

The AI Act, the European Regulation on Artificial Intelligence, was approved by the European Parliament on June 14 and will now be submitted for consideration by EU countries in the Council, with the aim of becoming law by the end of 2023. The proposed AI Act takes a risk-based approach and provides for penalties of up to €30,000,000 or up to 6 percent of the previous year's total annual worldwide turnover in the event of infringement.
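The penalty ceiling above reduces to simple arithmetic. As a minimal illustration, assuming the higher of the two amounts applies (as in the Commission's proposal text); the function name is ours, not from the Regulation:

```python
def max_fine_eur(prev_year_worldwide_turnover_eur: float) -> float:
    """Return the maximum fine under the proposed AI Act:
    €30,000,000 or 6% of the previous year's total annual worldwide
    turnover, whichever is higher (assumption based on the proposal)."""
    fixed_cap = 30_000_000
    turnover_cap = 0.06 * prev_year_worldwide_turnover_eur
    return max(fixed_cap, turnover_cap)

# Example: for €1 billion in turnover, the turnover-based cap governs.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```

For smaller companies whose 6-percent figure falls below €30,000,000, the fixed amount is the operative ceiling.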

The proposed EU Regulation on Artificial Intelligence aims to create a reliable legal framework for AI, based on the EU’s fundamental values and rights, with the goal of ensuring the safe use of AI and preventing risks and negative consequences for people and society.

The proposal establishes harmonized rules for the development, marketing, and use of AI systems in the EU through a risk-based approach with different compliance obligations depending on the level of risk (low, medium, or high) that software and applications may pose to people's fundamental rights: the higher the risk, the greater the compliance requirements and responsibilities of developers.

In particular, the AI Act proposes a fundamental distinction between:

-          "Prohibited Artificial Intelligence Practices", which create an unacceptable risk, for example by violating EU fundamental rights. This category includes systems that:

o   Use subliminal techniques that act without a person's knowledge, or that exploit physical or mental vulnerabilities, in a way that causes physical or psychological harm;

o   Are used by public authorities for purposes such as social scoring, real-time remote biometric identification in public spaces, predictive policing based on indiscriminate data collection, and facial recognition, unless there is a specific need or judicial authorization.

-          "High-Risk AI Systems", which pose a high risk to the health, safety or fundamental rights of individuals, such as systems that enable biometric identification and categorization of individuals, determine access to educational and vocational training institutions, score admission tests or conduct personnel selection, or are used in political elections. The placing on the market and use of these systems is therefore not prohibited, but requires compliance with specific requirements and the performance of prior conformity assessments.

In particular, these systems must comply with a number of specific rules, including:

-          Establishment and maintenance of a risk management system: it is mandatory to establish and maintain an active risk management system for AI systems.

-          Quality criteria for data and models: AI systems must be developed according to specific qualitative criteria for the data used and the models implemented to ensure the reliability and accuracy of the results produced.

-          Documentation of development and operation: adequate documentation of the development of a given AI system and its operation is required, including the system's compliance with applicable regulations.

-          Transparency to users: it is mandatory to provide users with clear and understandable information on how AI systems work, making them aware of how data are used and how results are generated.

-          Human oversight: AI systems must be designed so that they can be supervised by human beings.

-          Accuracy, robustness and cybersecurity: it is imperative to ensure that AI systems are reliable, accurate and secure. This includes taking steps to prevent errors or malfunctions that could cause harm or undesirable outcomes.

In some cases, conformity assessment can be carried out independently by the manufacturer of AI systems, while in other cases it may be necessary to involve an external conformity assessment body.

-          "Limited-Risk AI Systems", which do not pose significant risks and are subject only to general requirements of information and transparency toward the user. For example, systems that interact with humans (e.g., virtual assistants), that are used to detect emotions, or that generate or manipulate content (e.g., ChatGPT) must adequately disclose the use of automated systems, including for the purpose of enabling informed choices or opting out of certain solutions.

The Regulation is structured in a flexible way so that it can be applied or adapted to different cases that may arise as a result of technological developments. The Regulation also takes into account and ensures the application of complementary rules, such as those on data protection, consumer protection and the Internet of Things (IoT).


As mentioned above, the text approved by the European Parliament will be submitted to the Council for consideration, with the aim of being adopted by the end of 2023. If so, it will be the first legislation in the world to address in such a comprehensive and detailed manner the potential issues arising from placing AI systems on the market.

We will provide updates on future regulatory developments.

For details and information, please contact David Ottolenghi of Clovers.