Recently, Spain’s data protection authority, the Agencia Española de Protección de Datos (AEPD), imposed a €500k fine on FC Barcelona for failing to conduct an adequate Data Protection Impact Assessment (DPIA) before processing biometric data from more than 140,000 members.
At first glance, this might appear surprising because the club had produced a risk assessment. But regulators concluded that what Barcelona submitted was not a DPIA in substance, even if it resembled one in form.
The case demonstrates that for organisations subject to GDPR, a DPIA is not a box-ticking exercise. Regulators expect a structured, rigorous analysis that demonstrates real consideration of privacy risks and alternatives.
For organisations deploying biometric technology, AI tools, analytics platforms, or other high-risk systems, the decision provides a clear blueprint for what a good DPIA should look like and how it demonstrates GDPR compliance.
The biometric census that triggered the investigation
In March 2023, FC Barcelona launched a digital census update for its 143,000 members. The process was required under club statutes and allowed members to update their information online.
The digital verification process included:
- logging into the system with a personal code
- scanning an identity document
- taking a selfie with liveness detection
- facial comparison between the selfie and the ID photo
- optional voice biometric recording
- completing personal data fields
Biometric verification technology was provided by Veridas Digital Authentication Solutions, with encrypted biometric vectors stored temporarily by a processor within the EEA. Over 112,000 members used facial biometric verification and over 72,000 created voice profiles.
However, complaints quickly followed. Members alleged that the biometric process appeared mandatory to complete the digital census, that the facial verification required movements typical of biometric capture, and that the non-biometric, in-person alternative was not clearly communicated.
The complaints prompted an investigation by the AEPD.
Why was Barcelona fined?
The main issue was not the use of biometric data itself. Instead, the regulator concluded that Barcelona failed to conduct a legally compliant DPIA before launching the system, breaching Article 35 of the GDPR.
Under Article 35, organisations are required to perform a DPIA whenever processing is “likely to result in a high risk to the rights and freedoms of natural persons.”
The AEPD highlighted several factors that clearly triggered this obligation:
- biometric authentication technology
- large-scale processing involving around 143,000 people
- inclusion of minors (over 14,500 members were under 18)
- automated validation of identity
- potential legal consequences, such as loss of membership for non-completion
Despite these risk indicators, the regulator determined that Barcelona’s assessment did not meet the substantive requirements of a DPIA.
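To make the screening logic concrete: under the EDPB’s guidance (WP248), processing that meets two or more such high-risk indicators will generally require a DPIA. The sketch below is purely illustrative, written in Python, and the indicator names are our own paraphrased labels rather than anything taken from the decision.

```python
# Illustrative DPIA screening sketch based on the EDPB criteria-counting
# approach (WP248): meeting two or more high-risk indicators generally
# means a DPIA is required. Indicator names are paraphrased assumptions.

HIGH_RISK_INDICATORS = {
    "sensitive_or_biometric_data",    # special-category data, incl. biometrics
    "large_scale",                    # e.g. six-figure numbers of data subjects
    "vulnerable_subjects",            # e.g. minors
    "automated_decision_making",      # e.g. automated identity validation
    "legal_or_similar_effects",       # e.g. loss of membership
    "systematic_monitoring",
    "matching_or_combining_datasets",
    "innovative_technology",
    "prevents_exercise_of_rights",
}

def dpia_required(indicators_met: set[str]) -> bool:
    """True if the processing meets two or more EDPB high-risk indicators."""
    unknown = indicators_met - HIGH_RISK_INDICATORS
    if unknown:
        raise ValueError(f"Unknown indicators: {unknown}")
    return len(indicators_met) >= 2

# Barcelona's census, as described above, trips at least five indicators:
census = {
    "sensitive_or_biometric_data",
    "large_scale",
    "vulnerable_subjects",
    "automated_decision_making",
    "legal_or_similar_effects",
}
assert dpia_required(census)
```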
What was wrong with Barcelona’s DPIA?
Barcelona argued that it had performed a three-stage risk assessment before deploying the system. However, the regulator found that the document failed to meet the minimum content requirements for a DPIA set out in Article 35(7) GDPR.
Key deficiencies included:
1. Incomplete description of the processing
A valid DPIA must contain a systematic description of the processing activity.
Barcelona’s document listed categories such as name, member number, ID number and photograph. But it didn’t explicitly identify the biometric facial data generated during facial comparison. For a biometric system, the regulator considered this omission fundamental: a DPIA cannot assess risks properly if the processing itself is not accurately described.
2. No meaningful proportionality assessment
A central component of a DPIA is assessing whether the proposed processing is necessary and proportionate to achieve its purpose. Instead of asking whether biometric verification was necessary, Barcelona’s analysis effectively assumed that the biometric system was already justified, an approach the regulator considered circular.
The regulator emphasised that a compliant DPIA must consider less intrusive alternatives, such as manual identity verification, document checks, in-person validation and multi-factor authentication methods. Because the DPIA did not genuinely analyse alternatives, it failed to demonstrate proportionality.
3. Underestimating risk
Biometric identifiers, such as facial templates or voiceprints, are inherently sensitive because they are unique and persistent identifiers, cannot be changed if compromised and could enable tracking or profiling.
Yet Barcelona’s assessment classified most risks as low or negligible. The AEPD concluded that this underestimated the intrinsic sensitivity of biometric data, undermining the credibility of the analysis.
4. Lack of structured risk mitigation
A proper DPIA should clearly show identified risks, the mitigation measures applied to each and the residual risk levels that remain after controls. The regulator found that Barcelona’s documentation lacked this risk-to-mitigation mapping and didn’t clearly show how risks would be reduced.
5. Limited involvement of the Data Protection Officer
Another criticism was the lack of early involvement of the DPO. Article 35(2) GDPR requires controllers to seek the advice of their DPO when carrying out a DPIA, and best practice is to involve the DPO from the design phase, ensuring privacy risks are considered before implementation.
What is a good DPIA under GDPR?
The Barcelona case shows that a DPIA must demonstrate thoughtful analysis, not just documentation. Under GDPR, a strong DPIA should include the following components.
Clear description of the processing
The DPIA should explain:
- what data is collected
- how it is collected
- how it flows through systems
- who accesses it
- where it is stored
- how long it is retained
For biometric systems, this means clearly identifying biometric templates, facial vectors, voiceprints and algorithmic matching processes. Without this detail, risk analysis cannot be meaningful.
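As a sketch of what that systematic description might look like in practice, the record below models one data category per entry. The field names and values are illustrative assumptions, not a format prescribed by the GDPR or the AEPD.

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    """One data category within a DPIA's systematic description (illustrative)."""
    data_category: str      # what is collected
    collection_method: str  # how it is collected
    data_flow: str          # how it flows through systems
    access: str             # who accesses it
    storage_location: str   # where it is stored
    retention: str          # how long it is retained

# The entry Barcelona's assessment was missing: the biometric template itself.
facial_vector = ProcessingRecord(
    data_category="Encrypted facial biometric vector",
    collection_method="Derived from the selfie-to-ID facial comparison",
    data_flow="Capture app -> processor's biometric matching service",
    access="Processor's verification service only",
    storage_location="Processor infrastructure within the EEA",
    retention="Temporary, for the duration of verification",
)
```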
Purpose and legal basis
The DPIA must clearly explain why the processing is necessary, what legitimate objective it serves and the legal basis under GDPR.
Necessity and proportionality analysis
A robust DPIA must ask questions like:
- Is this technology necessary?
- Could the same objective be achieved with less intrusive methods?
- Is biometric processing justified relative to the privacy impact?
This step is often where organisations fail because the DPIA is written after a system has already been selected.
Detailed risk analysis
Risks should be evaluated specifically from the individual’s perspective: identity theft, misuse of biometric data, discrimination or profiling, surveillance concerns and security breaches. The assessment must reflect the true sensitivity of the data involved.
Mitigation measures
A compliant DPIA must describe safeguards such as:
- encryption and pseudonymisation
- limited retention periods
- strict access controls
- transparency mechanisms
- user choice and alternatives
Residual risks should also be documented.
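One common way to document this is a risk register that scores each risk before and after controls. The sketch below uses a simple likelihood × severity scale; the scoring scheme and the example values are illustrative assumptions, not figures from the Barcelona decision.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """A DPIA risk-register row: risk, controls and residual score (illustrative)."""
    risk: str
    likelihood: int               # 1 (rare) .. 4 (likely), before controls
    severity: int                 # 1 (minimal) .. 4 (maximum), before controls
    mitigations: list[str] = field(default_factory=list)
    residual_likelihood: int = 0  # re-scored after controls
    residual_severity: int = 0

    @property
    def inherent_score(self) -> int:
        return self.likelihood * self.severity

    @property
    def residual_score(self) -> int:
        return self.residual_likelihood * self.residual_severity

template_theft = RiskEntry(
    risk="Compromise of stored biometric templates",
    likelihood=2,
    severity=4,  # biometric identifiers cannot be changed if leaked
    mitigations=["Encryption of vectors", "Short retention", "Strict access controls"],
    residual_likelihood=1,
    residual_severity=4,  # severity stays high: the data is irrevocable if exposed
)
print(template_theft.inherent_score, "->", template_theft.residual_score)  # 8 -> 4
```

Note how the residual severity stays at the maximum: controls can reduce the likelihood of a biometric breach, but not the harm if one occurs, which echoes the AEPD’s point that classifying such risks as low underestimates the data’s intrinsic sensitivity.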
Consultation with the DPO and stakeholders
The DPO should be involved in reviewing the DPIA. In high-risk scenarios, organisations may also need to consult regulators, consult affected individuals or groups, or conduct pilot testing.
Why DPIAs are becoming a major enforcement focus
The Barcelona case is part of a wider enforcement trend. Regulators increasingly examine the quality of DPIAs, not merely whether one exists. Authorities are recognising that many organisations treat DPIAs as compliance paperwork rather than governance tools. But a well-executed DPIA demonstrates several GDPR principles: accountability, privacy by design and by default, risk-based compliance, and transparency and fairness. Essentially, a good DPIA is evidence that privacy has been integrated into decision-making.
The €500k fine imposed by the AEPD indicates that a superficial risk assessment will not satisfy GDPR requirements. Organisations deploying biometric systems, AI tools, behavioural analytics, or large-scale monitoring technologies should treat DPIAs as a strategic governance process, an early design exercise and a demonstration of accountability. A DPIA can reduce regulatory risk and help organisations build systems that are safer, more transparent and ultimately more trusted by users.
And as the Barcelona case shows, failing to do so can be expensive.
Our 10-step guide to data protection outlines the essential actions organisations should take to build and maintain a robust data protection framework. It turns complex legal requirements into a clear, practical roadmap you can use to assess your current approach and strengthen your policies, controls and practices. Get it here.
