Facial recognition and European law: what really changes for companies and creatives.

Gianpaolo Todisco - Partner

Facial recognition has entered our daily lives almost without us noticing. We unlock our smartphones with our faces, enter some airports through biometric gates, and participate in events where smart cameras analyze flows and attendance. In retail and experiential communication, facial analysis systems promise to “read” emotions, personalize content, and improve interaction with the brand.

But when technology recognizes a face, it is not simply “looking”: it is processing biometric data. And in Europe, this radically changes the legal landscape.

With the adoption of the AI Act, the European Union has made a clear choice: biometrics is a very high-risk area and must be strictly regulated. This is complemented by the already established General Data Protection Regulation (GDPR), which classifies biometric data as a “special category” deserving of enhanced protection.

For businesses, brands, creative agencies, and cultural operators, the issue is not theoretical. It is operational. And strategic.

Many companies view facial recognition as an innovative tool: automated VIP access, exclusive events, immersive retail, advanced profiling. In some cases, these are solutions integrated into security systems; in others, they are advanced marketing tools.

The point is that, legally speaking, we are not talking about simple software, but about a system that processes information capable of uniquely identifying a natural person.

The GDPR, in fact, treats biometric data used for identification as “sensitive” data. This means that processing is prohibited by default, except in narrowly defined circumstances. Consent, for example, must be freely given, specific, informed, and unambiguous. And in a public or commercial context, the freedom of that consent is often questionable.

The AI Act adds another layer: it classifies remote biometric identification systems as prohibited (in some cases) or “high risk” (in most applications). And a high-risk system entails structural obligations: documented risk management, human oversight, strict data governance, traceability, technical controls, and CE marking.

This is no longer an IT issue. It is a matter of corporate governance.

In the creative world, the issue takes on an additional dimension.

Consider audiovisual production, photography, and global advertising campaigns. Today, there are systems that can automatically recognize faces in content, cross-reference them with databases, and analyze emotional reactions during the viewing of a commercial.

Or consider the issue of datasets: images published online, artistic photographs, editorial content that is “scraped” and used to train facial recognition systems or artificial intelligence models.

At least four levels of protection are intertwined here:

  • personal data protection,

  • image rights,

  • copyright,

  • contractual liability.

A photographer could find themselves facing unauthorized use of their work for biometric purposes. A brand could be involved in a dispute for using a facial analysis system during an event without adequate disclosure. A platform could be held accountable for the use of opaque biometric databases.

Technology is accelerating. Legal risk is multiplying.

The penalties are significant: the GDPR allows fines of up to €20 million or 4% of global annual turnover, whichever is higher, while the AI Act can reach €35 million or 7% for the most serious violations. For multinational luxury or entertainment groups, the economic impact can be substantial.

But in the creative sector, the damage to reputation can be even more severe. Consumers are increasingly sensitive to issues of privacy, digital ethics, and the responsible use of AI. A brand perceived as invasive or lacking in transparency risks compromising its value narrative.

And today, brand value is, first and foremost, trust. This does not mean that facial recognition should be excluded altogether. It means that it must be rigorously evaluated.

What is needed is:

  • a preventive audit of the systems adopted;

  • an in-depth data protection impact assessment (DPIA);

  • a clear allocation of roles and responsibilities between the data controller, data processor, and AI provider;

  • specific contractual clauses;

  • a real analysis of the proportionality between the purpose pursued and the invasiveness of the tool.

Above all, an informed choice is needed: is the technology consistent with the brand's positioning? Is it necessary, or merely eye-catching?

Europe has chosen a regulatory model that prioritizes the protection of fundamental rights over technological deregulation. It is a choice that profoundly affects the way companies can innovate.

For companies and creatives, facial recognition is not just a technological opportunity: it is a test of their legal maturity and cultural responsibility.

In the new digital ecosystem, innovation cannot be separated from compliance. And compliance, if well managed, can become a competitive advantage.

Because in today's market, true innovation is that which respects the rules without sacrificing vision.