AI Act: Entry into force of the first provisions

Gianpaolo Todisco - Partner

February 2, 2025 marks a milestone for the AI Act, with the entry into force of the first provisions of the regulation at European Union level. The regulation introduces a harmonized legal framework for all Member States, imposing common rules on those who develop, market or use artificial intelligence systems in the EU. Its main objective is to reduce regulatory fragmentation, protecting the fundamental rights referred to in Article 1 and promoting an internal market consistent with the Charter of Fundamental Rights of the European Union (the Nice Charter).

Parties involved

The new provisions mainly concern two categories:

· Artificial intelligence developers and providers, who must ensure that their systems comply with the criteria established by the AI Act before they are placed on the market.

· End users and organizations operating in regulated sectors, who are obliged to comply with the established rules, especially for applications classified as high risk.

Key obligations for companies and organizations

From February 2, 2025, companies and organizations that use artificial intelligence must comply with two main obligations:

1. Prohibition of artificial intelligence practices with unacceptable risk

The AI Act adopts an approach based on risk classification, dividing AI systems into four categories:

· Minimal risk: systems such as anti-spam filters, without regulatory restrictions.

· Limited risk: applications such as chatbots, subject to transparency obligations.

· High risk: systems used in critical sectors (healthcare, justice, personnel evaluation), subject to rigorous compliance and monitoring measures.

· Unacceptable risk: practices prohibited as of February 2, 2025.

Prohibited practices include:

· Subliminal or deceptive manipulation, i.e. systems that influence behavior without the person's awareness or consent.

· Exploitation of the vulnerabilities of specific groups, such as minors or people with disabilities.

· Social scoring, i.e. reputation evaluation systems based on personal data that produce discriminatory effects.

· Real-time biometric identification in publicly accessible spaces, subject to narrow exceptions.

· Emotion recognition in sensitive contexts, such as the workplace and education.

· Creation of biometric databases through scraping, i.e. the unauthorized collection of biometric data online.

Companies must ensure that they do not adopt these practices in their products and services. Violations can result in penalties of up to 35 million euros or 7% of global annual turnover, whichever is greater.
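To make the penalty ceiling concrete, the short sketch below (a minimal illustration with hypothetical turnover figures; the function name is ours, not the regulation's) shows how the "whichever is greater" rule determines the maximum applicable fine.

```python
# Minimal illustration: the ceiling for fines on prohibited AI practices is
# the greater of EUR 35 million and 7% of global annual turnover.

def fine_ceiling_eur(global_annual_turnover_eur: float) -> float:
    """Return the maximum applicable fine in euros for the given turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical turnover figures:
print(fine_ceiling_eur(200_000_000))    # 35000000.0 -> flat amount applies (7% would be EUR 14M)
print(fine_ceiling_eur(1_000_000_000))  # 70000000.0 -> 7% of turnover applies
```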

2. Obligation of AI literacy

Article 4 of the AI Act requires companies and public administrations to provide adequate training on the functioning and risks of artificial intelligence. This obligation also extends to those who, while not operating directly in the technology sector, use AI in their processes.

The required measures include:

· Training programs for employees on the opportunities and risks of AI.

· Internal guidelines for the responsible use of artificial intelligence.

· Raising awareness of the ethical and legal implications of AI.

The AI Act applies not only to providers established in the EU, but also to those outside Europe whose systems are used within the territory of the Union.

Subsequent stages and new obligations

The AI Act will be implemented gradually, with key deadlines in the coming years.

1. August 2025

Specific provisions on AI governance and obligations for general-purpose artificial intelligence models will come into force. Companies will be required to:

· Maintain detailed documentation on system testing and development.

· Adopt standardized procedures to ensure safety throughout the system's life cycle.

· Conduct periodic compliance assessments.

Failure to comply with these provisions will result in significant penalties.

2. August 2026

The AI Act will be fully operational and will apply to all artificial intelligence systems, including those classified as high risk. Organizations will need to take additional measures, including:

· Impact assessments to identify and mitigate risks.

· Continuous monitoring to detect anomalies in AI systems.

Conclusion

The AI Act represents a crucial step in the regulation of artificial intelligence in Europe, guaranteeing safe technological development that respects fundamental rights. Companies and organizations must progressively adapt to the new obligations to avoid sanctions and ensure an ethical and responsible use of artificial intelligence.