The interplay between the GDPR and the EU AI Act


{ dr. Homoki Péter / 2025.09.22 }

Webinar presentation for the European Lawyers Foundation and the Council of Bars and Law Societies of Europe (CCBE), conference “Latest Developments in Data Protection”

Slides are available here

[2] I would like to give a presentation on the interplay between the GDPR and the AI Act.

First, I would like to give an overview of why privacy is at the forefront of regulating the risks of AI.

Then, I will show some high-level similarities and differences between the two regimes, and how differently they regulate the issue of automated decision-making and profiling.

Then, I will try to discuss two related cases.

One is Glukhin v. Russia, decided in 2023 by the European Court of Human Rights.

And the other is a practical event where the legal consequences are still unfolding – what happened in relation to automated facial recognition at the Budapest Pride 2025, and why the regulatory approach of the EU is not really in line with the approach of the ECtHR in Glukhin.

[3] In the EU, the regulation of artificial intelligence became one of the most important tools in mitigating the risks caused by AI systems („AIS”) in general. And the risk to privacy was one of the foremost risks mentioned in the preparatory materials for the AI Act.

The risk to privacy is not a distant, remote risk like the existential fear of AI conquering humanity or the fear of our jobs being displaced.

Also, among the many risks of society using AI, risk related to data privacy is an area where the EU already has shared competence with its Member States, under Article 16 of the Treaty on the Functioning of the European Union.

Privacy is also a fundamental right, and as such, it is protected both by the Charter of Fundamental Rights of the EU and by Article 8 of the European Convention on Human Rights.

This also means that the protection of the Convention on Human Rights extends beyond the scope of EU law, for example in relation to national security and the military.

Unlike the EU AI Act!

[4] In theory, the GDPR was already capable of addressing many risks related to the use of AI. But the EU bodies wanted yet another horizontal regulation to be put in place.

Why?

It is said that AISs might have some specific features, such as the opacity of the tools or the complexity of the technology, that could make the enforcement of the existing legislation (the GDPR) more difficult.

Also, some AI practices are deemed so dangerous from a privacy point of view that lawmakers decided clear, outright bans were needed for private entities under a separate regulation, not in the GDPR.

And, at the same time, to allow the same practices for law enforcement purposes, within a set of very flexible frameworks.

The third reason was that the best way to preempt further fragmentation of the AI market was to adopt a new horizontal regulation for AI products, with all technical details of compliance outsourced to standardization bodies, which will work out these details by the end of 2026.

[5] Similarities:

  1. The first similarity is that both regulations serve the protection of fundamental rights.

The GDPR is focused on the right to privacy and data protection, and on how this right is to be protected in practice.

For the AI Act, the focus is a lot wider.

It tries to address the possible negative effects of AISs on many other fundamental rights, like freedom of assembly, workers’ rights, IP rights, and children’s rights.

But the EU does not have a general competence to harmonise fundamental rights, so the legal basis for the AI Act is the functioning of the internal market, Article 114 of the Treaty on the Functioning of the EU.

  2. Another similarity is that both regulations are very wide in scope, and they affect our everyday digital life. Hardly a day in the life of an EU citizen can pass without being subject to these regulations.

  3. Both regulations say they rely on a kind of risk-based approach.

At least the lawmakers said that they strived to achieve a proportionate balance between reducing the risk posed by the activity and the costs of regulation.

For the GDPR, this aspect is most visible when granting supervisory authorities flexibility in applying certain vague terms of the regulation, like “personal data” or “processing”.

For the AI Act, this flexibility is currently present at a very high level of the regulation: the risk-based approach is the reason why the lawmaker had to differentiate between four different categories of AI.

  4. They both rely on a regulatory principle called technological neutrality, and therefore both try to solve important questions by postponing the regulation of certain developments we expect to happen, or by delegating the actual regulation to third parties.

4.1. In relation to the first area, the lawmaker is hopeful that some of the details can be left to those being regulated, which is a form of self-regulation.

4.1.1. Like the principles of “something by design”, such as “data protection by design and by default” in Article 25 of the GDPR or the transparency, accuracy, and human oversight requirements in the design phase of the AI Act – where it’s not expressly said that this is “accuracy by design” or “human oversight by design”, but in practice the provisions arise from the same regulatory principle.

4.2. Subsidiarity is similar to delegation: important questions will have to be solved by the Member States, by their courts, by national practices and national authorities. As we have seen with the GDPR, this results in considerable divergence, and we can expect the same to happen with the AI Act as well.

In order to address some of these divergences, both regulations have some EU-level coordination added on top, like the European Data Protection Board and the European AI Board.

  5. Both regulations have extraterritorial reach: the processing of personal data of data subjects in the EU triggers the GDPR regardless of where the data controller or processor is based. Similarly, the AI Act rules apply to deployers and providers established outside the EU, as long as they make the artificial intelligence system available for use in the EU.

  6. And of course, both regulations rely on heavy fines to ensure compliance, like the fine of up to €20 million or 4% of global turnover, whichever is higher, for the GDPR, and an even higher one for the AI Act in relation to the prohibited practices (€35 million or 7%).

[6] What are the most prominent differences between the two regulations?

  1. The focus of the GDPR is, of course, the processing of personal data.

While the focus of the AI Act is more diverse: it has at least four different regulatory purposes, depending on what category the AIS belongs to:

Is an AIS placed on the market or put into service?

Is that a high-risk AIS? Is it a general-purpose AIS? Is it even a general-purpose AIS with systemic risk?

  2. The GDPR is about unpacking the protection of a single fundamental right into more detailed principles, with a considerable number of practically exercisable rights to ensure compliance.

Whereas the EU AI Act is more about product regulation differentiated in accordance with the four risk categories listed above.

  3. The provisions in the GDPR mostly follow what is called an “ex post” approach.

It defines the rules and entrusts supervisory authorities with enforcing compliance with these rules.

Whereas the AI Act is more about “ex ante” regulation: how should the system be designed so that it complies with the requirements?

What should be done by the deployers and the providers before putting the AISs into circulation?

  4. Furthermore, the GDPR itself relies on the use of “codes of conduct” in a very minimalistic way in Article 40:

A code of conduct is a precautionary step that certain bodies can decide to take to enhance their compliance, to increase their legal certainty about compliance.

On the other hand, for the AI Act, while the codes of conduct in Article 95 are voluntary, in practice they are expected to fill in important missing parts of the delegated regulation.

These codes of conduct are more similar to the “codes of practice” of the AI Act (Article 56): a tool for self-regulation, with the threat of the Commission stepping in to provide common rules.

  5. The GDPR contains lots of detailed rules about how the data subjects can enforce their rights, whereas the EU AI Act does not really focus on the rights of the affected person, beyond the limited rights under Articles 85–86.

Like the right to lodge a complaint or to receive an explanation of individual decision-making from the deployer of an AIS.

[7] 1. Let’s take a look at the regulation of automated decision-making and profiling in the GDPR and the Law Enforcement Directive.

The most important rules on automated decision-making are about decisions based solely on automated processing which produce legal effects concerning the data subject.

In terms of the Law Enforcement Directive, this use is possible only if it is authorized by law, with appropriate safeguards (such as the possibility of human intervention).

It is prohibited if it’s based on special categories of personal data or results in discrimination against natural persons.

  2. The GDPR also covers some further exceptions: solely automated processing is also allowed if it is necessary for the performance of a contract or if the data subject has given his or her explicit consent.

In terms of the safeguards, the GDPR mentions the right of the data subject to express their views and their right to contest the decision.

  3. In addition to these provisions, the GDPR also mentions the right of the data subject to receive information on the existence of any automated decision-making.

So, not just those based solely on automated processing: this includes cases where a human makes the decision, but the decision relies on the results of some automated processing.

  4. In contrast to the GDPR and the Law Enforcement Directive rules, the AI Act has more piecemeal rules. But the AI Act relies on the definition of “profiling” as used in the GDPR.

[8] The AI Act itself has three different provisions which relate to automated decision-making.

  1. One is about the prohibition of risk assessment systems to evaluate the chances of natural persons committing criminal offences.

But there is no such prohibition in place as long as that AIS is just used to support a human assessment of the same.

  2. The second: the generic exemption from being considered a high-risk AIS will not apply if the AIS performs some kind of “profiling”.

  3. Most importantly, there is a „right to an explanation” which applies to any kind of „automated decision” based on the output from a high-risk AIS.

This applies even if the AIS was used only to support human assessment.

We can see that most of the prohibited practices of the AI Act, in Article 5, have some kind of privacy-related aspect, like social scoring or criminal risk assessment, untargeted scraping of facial images for facial recognition systems, or inferring the emotions of natural persons in the workplace.

We will discuss one of these issues in more detail: real-time biometric identification.

[10] Let’s see the first example, from Russia and the ECtHR.

Nikolaj Glukhin travelled on the subway with a cardboard figure of another Russian activist and a peaceful protest sign, and posted this on his Telegram channel.

He was fined for not notifying the authorities of his demonstration; notification is required only for demonstrations involving “quickly assembled objects”.

Glukhin was identified using footage from closed-circuit cameras and was fined based on the evidence recorded on camera.

The ECtHR decided that there was a breach of Article 8 – as well as Article 10, but the latter is not within the scope of this presentation.

The Court said that though the interference had a legal basis in domestic law, the domestic law was not adequate.

It did not contain any limitations on

  1. the nature of situations which might give rise to the use of facial recognition or on

  2. the intended purposes of facial recognition allowed, or on

  3. the categories of people who might be targeted, or

  4. the processing of sensitive personal data.

  5. It also lacked procedural safeguards accompanying the use of facial recognition, like authorization procedures.

The Court said that it was essential in the context of implementing facial recognition technology to have detailed rules governing the scope and application of measures, as well as strong safeguards against the risk of abuse and arbitrariness.

Now, the Court also found that the measures taken against the applicant were intrusive because live facial recognition technology had been used.

Also, the personal data processed were sensitive because they contained information about the applicant’s political opinion.

The Court also highlighted that the applicant had been prosecuted for a minor administrative offence on the basis of facial recognition technology. This activity did not otherwise represent any danger to public order.

The Court also highlighted that this use of FR „could have a chilling effect” in relation to the rights to freedom of expression and assembly.

[11] So, a short summary:

There were four items based on which the Court found that Article 8 was breached in the Glukhin case. We will return to this test a bit later.

[12] In relation to the Budapest Pride, let’s start with what happened on 15 April.

The Hungarian Parliament adopted the 15th amendment of the Fundamental Law (originally adopted in 2011), and also a new Act which

  1. partly extended the Act on the right of assembly with some new prohibitions, as well as

  2. amended the Act on petty offences with a specific new offence, and

  3. amended the Act on Facial Analysis Register.

The amendment of the Act on petty offences now allowed the use of a facial recognition system to establish the identity of a person suspected of committing an offence, as regulated in detail in the Act on Facial Analysis Register.

[13] The Act on Facial Analysis Register already included a legal basis for using the registry for law enforcement purposes.

Before 2024, this was only allowed to prevent, detect and interrupt crimes.

After 2024, the same FR could also be used for a stricter category of petty offences, those punishable by detention.

But with this latest amendment in 2025, the scope was made even wider, so that the commission of any petty offence could serve as a legal basis for using facial recognition.

A new provision was also included in the Act that enables the use of facial recognition for mass identification purposes as well.

We also have to highlight that the Facial Recognition Act already included procedural safeguards: only a central body was allowed to carry out this kind of recognition, and it had to delete all the data within 30 days of the identification.

[14] This part is not about data privacy, but it is important for understanding the privacy aspects.

Based on the changes in the Fundamental Law and the Act on Assembly, the police issued prohibitions in response to several notifications of planned Pride assemblies in June.

Thanks to the April changes, the Act on Assembly now included an express prohibition on holding assemblies that present the essential elements of a) promoting deviation from the gender identity corresponding to birth sex, or b) promoting homosexuality.

Each such prohibition by the police was appealed, and each appeal was decided directly by the Supreme Court.

In several very quick decisions, the Supreme Court guided the hand of the police on how to formulate the prohibition in a way that the Supreme Court would approve next time.

So, after several rounds of restarting the administrative procedure, the Supreme Court finally gave its approval to the prohibition issued by the police.

This way, everything fell into place: a) there was a planned Pride assembly, b) a prohibition in place, and c) a provision that anybody attending a prohibited assembly can be identified by the use of facial recognition technology.

[15] Despite the prohibition, the Budapest Pride 2025 took place with many attendees.

There is no news of anyone having been fined for attending the banned assembly.

But there was a question submitted by one of the MPs to the ministry, asking whether the police were using automated facial recognition to issue fines.

To this exact question, the answer was “no”, and the reasoning given was that they do not consider it legally possible to fully automate decision-making in an offence-based administrative procedure – because human supervision is a necessary part of such a procedure.

But beyond this reassuring answer, this does not mean that no offence proceedings are taking place or will take place.

Even if such proceedings are under way, due to the procedural safeguards of the Facial Recognition Act, this will be a long and difficult process, and it would take many years to identify everyone who attended.

[16] Let’s compare the differences between this case and the Glukhin case.

  1. Compared to Glukhin, the first condition, the lack of any limitations on the use of facial recognition, is not met, at least not fully. We see there are stronger safeguards present in the Facial Recognition Act: it requires the retention of the records used, and the affected persons also have a right of access.

  2. Secondly, this is not “live facial recognition”. Even if there is any use of facial recognition, it will take place long, long after the banned assembly, even compared to Glukhin.

  3. The third condition is fulfilled in the Budapest Pride case, because the closed-circuit images will reveal participants’ political opinion.

So, the data is definitely sensitive data even if used for identification only.

  4. The fourth condition is fulfilled because facial recognition is used for a minor administrative offence which otherwise presents no danger to public order.

More importantly, the possibility of using facial recognition to identify anyone committing even a petty offence was included on purpose, to have a chilling effect and to discourage people from attending the prohibited assembly.

[17] As we can see, the prohibition in Article 5(1) of the EU AI Act did not apply to this activity, because the use of facial recognition cameras (if any) was not a real-time use.

The wider problem is whether this prohibition will apply only in very limited cases, like when the police are in hot pursuit of dangerous criminals, or where immediate action is needed to find a victim or prevent a terrorist attack.

Just a delay of 5-10 minutes could be used as an excuse for biometric identification not being real-time.

Even though the EU AI Act preamble (recital 17) equates „real-time systems” with “live” or „near-live” recognition, it seems the use of FR similar to the Glukhin case would not be considered a „real-time use” of biometric identification under the AI Act.

Technically, locating a protester in the metro 6 days after the protest could hardly be called „live FR”.

But still, the Court came to the conclusion that the Glukhin case involved a „live” use of FR, even without having any direct evidence showing this („the absence of any other explanation for the rapid identification of the applicant, and the implicit acknowledgment by the Government of the use of live facial recognition”).

[18] So, most uses of facial recognition by the police will not be a prohibited activity, but the police will need to follow stricter rules in the design and operation of the AIS – as these will be high-risk AISs.

And even these provisions on high-risk AISs will only apply from August 2026.

These high-risk use cases are listed in Annex III, point 6 of the AI Act:

Law enforcement, in so far as their use is permitted under relevant Union or national law:

(a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences;

(b) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools;

(c) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;

(d) AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;

(e) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences.
