CCBE Guidelines on the use of GenAI


{ dr. Homoki Péter / 2026.03.17 }

Slides are available here

Artificial Intelligence and the Judiciary — CCBE Guidelines on the Use of AI by Lawyers

Introduction

Thank you for the invitation, and also for the opportunity to speak here today about CCBE’s guidelines on the use of AI by lawyers.

I’m now in a very comfortable position, because I don’t have to emphasize how important this topic is for us. This is very different from 2019, when I last gave a presentation in Athens. Nobody asks any more whether we have to use AI tools at all; the focus is on how to use them best. What is the most efficient way? What is still risky?

We all understand that if we want to stay relevant in the profession we have studied so much for, we have to prepare for the changes and we have to strive to remain in control. We have to convince our clients that we can still provide added value, and that it is much better for them to entrust us with their legal problems.

What has changed in 2023?

The last time I spoke about this matter in Athens was in January 2019.

At that time, I thought that the relatively small number of speakers of smaller languages would shield us a bit better from automation, compared to lawyers working with English law and in the English language. I was wrong. Nobody expected back then that natural language processing and computers would practically solve a problem as ancient as the curse of Babel. And if linguistic diversity is unable to shield us, a lack of reliable access to local legal corpora will not be able to protect us either.

Let’s talk about the latest guidance published by the Council of Bars and Law Societies of Europe in October 2025, the Guide on the use of generative AI by lawyers.

I will focus on what the keys are to ensuring effective human oversight over the plethora of AI tools we can hardly understand, let alone fully control.

CCBE’s work in the field

Even before 2025, CCBE had published extensively on this subject, such as the March 2022 “Guide on the use of Artificial Intelligence-based tools by lawyers and law firms in the EU”. That was not an official CCBE guidance to bars and lawyers, but a technical document, published for information.

Needless to say, the technology accessible to lawyers changed a lot between 2022 and 2025. Even though we already had foundational models like BERT or GPT-3 back then, only very few people understood what seismic changes the then-latest GPT variant, InstructGPT, would bring (it was made available as GPT-3.5, powering ChatGPT, in November that year).

It turned the market for natural language processing AI tools upside down, and from 2022 on, everybody started talking about “generative AI”, referring both to the picture generation capabilities of DALL-E and Stable Diffusion, and to the text generation capabilities of large language models, such as GPT-3.5.

So, CCBE felt it necessary to issue a proper guidance, addressed to bars and lawyers. The IT Law Committee started working on the guidance in January 2024 and finalised it in August 2025, and CCBE adopted it in October 2025.

Beyond giving a normative definition of generative AI, the Guide discusses the benefits and risks of AI tools relying on generative AI capabilities, as well as how the use of these tools affects the core principles of our profession, as envisaged in CCBE’s model Code of Conduct and the Charter of Core Principles of the European Legal Profession.

Core Principles

Let me highlight some important parts of the Guide.

Competence

Most importantly, effective human oversight of the tools by lawyers requires “competence” on the part of the users.

We have to be competent in using the tools and in understanding what the tool use entails in the background. We have to look under the hood, even if we are not that comfortable with technology; most of us would rather see ourselves as humanists trying to understand people than as engineers.

Without this technical knowledge, we cannot be aware of the actual risks involved, we cannot remain in any kind of control, nor can we exert any kind of human oversight over the output.

Competence is also key to ensuring that we use the tools properly and that we remain capable of finding out how to do so. We have to be able to evaluate our tool use: keep the desired outcome in mind, spot convincing-looking but incorrect answers, find out when we have to involve specialised legal databases and other tools, and also when we should refrain from using a particular kind of tool.

Loyalty to clients

We also have to be loyal to the client: understand what they actually want (even beyond what they are actually saying) and protect their interests. We have to protect their secrets and keep the information confidential, even if the technical tools and the providers act like a black box. At the same time, we have to be transparent towards the client, and keep the client reasonably informed of the tools we use and of the major risks involved in their use. If the client objects to the use of certain tools, or has some reservations, we have to be prepared to accommodate that, and explain any complexities or difficulties of doing so.

Independence

But we have to remain independent as well. Not just of the client, but also of the few worldwide providers of AI models and services. That leads us back to competence and our ability to exercise human oversight and to evaluate the output. We have to be able to understand our tools, and also to be able to replace a specific provider with the next best thing, if we have to.

Core Principles in Action

Let’s take a more detailed look at some more specific examples of how to ensure all this.

1. Confidentiality: Using GenAI Models via Web UIs and Large Commercial Providers

The most immediate confidentiality problem for our profession is the same one we have had with cloud computing in general since 2010. To be able to use IT tools as cheaply and conveniently as possible, we have to trust external IT providers with our most valuable data: client data. AI models require enormous amounts of power and memory, and the most efficient way to make this raw power accessible to the masses is remotely, from as few data centres as possible, often from a different jurisdiction eight to ten thousand kilometres away. Many solo lawyers are forced to use consumer products offered by large IT companies, with very limited information available about the way the service is actually provided.

The data we enter into a UI will be processed remotely and the results will be sent back. The AI provider will possess our full history of interactions: the questions raised and the answers provided, the documents uploaded for search, the text we have asked the system to draft.

We have to be able to exclude certain uses that clients would certainly disapprove of.

Our interactions with the AI provider may be used for training newer versions of their models, and these newly trained models will be available to everyone. This could result in accidental disclosure of sensitive information, sometimes close to verbatim, or simply in a higher probability of other parties guessing the right tactics in a certain case. In consumer-grade AI products, you can find transparent information about this: maybe it’s easy to turn off this functionality, maybe you have to choose a different product to do that. But similar use may also arise from more obscure data processing purposes, like “product improvement”, or your provider may be a US company led by technical people ignorant of the fine print of what “anonymisation” in the EU would actually require. They could retain summaries of your prompts, not realising that even a summary stripped of personally identifiable information remains client data, subject to statutory protections in your jurisdiction that they are not even aware of.

AI providers can be compelled by a court to hand over this data to another party, and some non-obviously sensitive information may even be made public under local law.

[Illustration: screenshot of court document]

Or the AI provider may be subject to orders from law enforcement or security authorities accompanied by a gagging order, prohibiting them from even notifying you of the request, even if the contract gives you an explicit right to be notified of this.

And AI providers can suffer a data breach, or be bought up by a different company with interests contrary to the client’s.

Obviously, this is not very reassuring for clients with a touch of paranoia, whether justified or not, and such disclosures can be very damaging for certain clients or cases.

In many criminal and family law cases, you can also accidentally breach acceptable use policies of AI providers by storing or processing sensitive information on these AI systems.

Pseudonymisation

This also means that for many use cases, law firms have to work out a reliable workflow to pseudonymise materials before uploading them to a remote AI system, in a way that can be reversed once the remote processing is done.
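As a minimal illustration, here is a sketch of such a reversible workflow in Python. The function names, the term list and the placeholder format are all illustrative assumptions; a real workflow would need far more robust entity detection and firm-wide conventions. The essential point is that the mapping never leaves your own systems.

    import re

    def pseudonymise(text, sensitive_terms):
        """Replace each sensitive term with a neutral placeholder.
        Returns the masked text plus the mapping needed to reverse it;
        the mapping must never leave your own systems."""
        mapping = {}
        for i, term in enumerate(sensitive_terms):
            placeholder = f"[PARTY_{i + 1}]"
            mapping[placeholder] = term
            text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
        return text, mapping

    def depseudonymise(text, mapping):
        """Reverse the masking on the output received from the remote system."""
        for placeholder, term in mapping.items():
            text = text.replace(placeholder, term)
        return text

    masked, mapping = pseudonymise(
        "Acme Kft. owes EUR 50,000 to John Smith under the 2024 loan agreement.",
        ["Acme Kft.", "John Smith"],
    )
    # ...send `masked` to the remote AI system and receive its answer...
    answer = masked  # stand-in for the remote system's reply
    print(depseudonymise(answer, mapping))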

2. When to Use Local AI Models?

But when pseudonymisation is not an option, you should be aware of the possibility of using AI systems over which you have far more control than over those accessible via a web UI or other Software-as-a-Service (SaaS). This could still be a cloud service, but Infrastructure-as-a-Service instead, where you have complete control over which AI models you are using and over what happens with the input and output. Or, if that’s not enough, you could run your own powerful computer in a remote location, running the necessary AI models. Or, if you only trust computers and data you can see, you can now run very powerful models even at home, for a price, even if not the most powerful, so-called frontier models.
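To show how low the entry barrier has become, here is a minimal sketch of querying a locally run model. It assumes you have installed Ollama (a popular open-source model runner) and pulled a model such as “llama3”; the endpoint and JSON shape follow Ollama’s generate API. Nothing in this exchange leaves your own machine.

    import json
    import urllib.request

    def ask_local_model(prompt, model="llama3"):
        # Send the prompt to the locally running Ollama server and
        # return the model's full (non-streamed) answer.
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        request = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["response"]

    print(ask_local_model("Summarise the main duties of a bailee in two sentences."))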

[Illustration: 4 modes of AI]

Of course, the more control you want over your AI model and the computers running it, the more technical expertise you will need to make it work. CCBE is now finalising a new, quite technical guidance on this very topic. We believe it is important to get lawyers’ feet wet in this non-legal subject. Local models enable people to gain first-hand experience of how AI models actually work, and that is very important for their independence. The people who bought personal computers as a hobby in the 1980s, perhaps just to play, were essential in diffusing computer technology; the same is true of the world of AI. It is not enough if there is just a general utility company in a different country charging us for the service. This step is crucial for the independence of the profession and the wider judicial system, and even for the sovereignty of countries.

3. Improper Use and Lack of Oversight

This is the most difficult subject, and a great part of our future depends on getting it right. Solving many programming or mathematical problems can be seen as a relatively “easy” task, because you can test or verify the results in some way: we can decide whether the answer is correct or not. You can define a “test set”, a benchmark, and evaluate the performance of your AI model against it. That will not necessarily mean the model can solve all programming or maths problems, but you can rely on this measured performance to a considerable degree, and most mistakes will come to light within a short time.
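To make the contrast concrete, here is a minimal sketch of such a benchmark; ask_model is a hypothetical stand-in for whatever AI interface you use, and the test questions are illustrative. The point is precisely that legal work rarely offers answers this mechanically checkable.

    TEST_SET = [
        ("What is 17 * 23? Answer with the number only.", "391"),
        ("How many days did February have in 2024? Answer with the number only.", "29"),
    ]

    def ask_model(prompt):
        raise NotImplementedError("plug in your own AI interface here")

    def evaluate():
        # Score the model on questions that each have one verifiable answer.
        correct = sum(
            1 for question, expected in TEST_SET
            if expected in ask_model(question)
        )
        return correct / len(TEST_SET)

    # print(f"accuracy: {evaluate():.0%}")  # meaningful only with a real model plugged in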

But an AI model passing the qualification exam for law in a particular jurisdiction is not that useful as a benchmark.

And our task as lawyers is not to measure the reliability of AI models, but to serve clients via reliable tools whose output we can trust. Being a lawyer is not really a well-documented profession, even with published case law. Yet, in the coming years, we have to build out the necessary scaffolding, a harness for the AI models that turns them from generic chatbots into reliable tools well adapted to lawyers’ workflows, relying only on the latest applicable law.

Probably these tools will rely on a combination of properly configured agentic skills and reliable legal databases. But that will need a lot of effort from the profession, in different countries, because this is not something AI companies can do for us. They don’t know how heterogeneous our profession is.

[Illustration: Claude Cowork and skill.md]

Principles 1/2

Until that’s done, I suggest that everyone be principled in their tool use:

  • Don’t ask questions whose answers you know nobody will be able to verify. Verification by statistical sampling is still better than no verification at all.

  • Break down your complex task into smaller units, until you feel confident that it is possible to verify the answers somehow. For example, build a checklist of compliance requirements from a law, one that you can agree with, and apply it to the contract under review only after you have this checklist. Or first build a term sheet from the client’s email and a template contract from previous contracts, and have the actual contract drafted only after you have verified these first milestones as correct (see the sketch after this list). You will later need these internal steps, so document them well.

  • Separate search and reasoning: focus first on searching for information that you can easily verify (is it actually there?). Accept inferences only if they are based on information you have checked.
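Here is a minimal sketch of the two-stage, checklist-first workflow suggested above. ask_model is again a hypothetical stand-in for your AI interface, and the prompts are illustrative only; the essential part is the human checkpoint between the two stages.

    def ask_model(prompt):
        raise NotImplementedError("plug in your own AI interface here")

    def build_checklist(statute_text):
        # Stage 1: derive a compliance checklist from the statute.
        answer = ask_model(
            "List, one per line, every compliance requirement a service "
            "contract must satisfy under this law:\n\n" + statute_text
        )
        return [line.strip() for line in answer.splitlines() if line.strip()]

    def review_contract(contract_text, checklist):
        # Stage 2: run only after a lawyer has read and approved the checklist.
        findings = {}
        for item in checklist:
            findings[item] = ask_model(
                "Does the following contract satisfy this requirement: " + item
                + "?\nQuote the relevant clause or answer 'not addressed'.\n\n"
                + contract_text
            )
        return findings

    # checklist = build_checklist(statute_text)              # 1. generate
    # ...the lawyer reviews and edits the checklist by hand...  # 2. verify
    # findings = review_contract(contract_text, checklist)   # 3. apply, item by item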

Principles 2/2

Save, evaluate and reuse your prompts in a library. Create prompt templates; you don’t need special tools for that (see the sketch after this list). Then build workflows from those prompt templates, even if you run them manually. Be wary of tools and models that are not transparent and that change without notice.

  • Hallucination is still a real problem, and its extent depends on the AI model used, the AI system (orchestration, workflow) the model is built into, the size of the input, attachments, output and ongoing conversation, the amount of relevant information included in the model’s training corpus, and the availability of reliable sources for the model to search against.

  • Use generative AI models only where more reliable tools are not available, e.g. a calculator, Excel or a legal database. Try asking the AI model to build a specific tool for separate parts of your workflow, e.g. for cleaning, extracting, searching or converting text (AI models can help you with VBA in Word or Excel, or with Python, etc.).
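As a minimal sketch of such a prompt library, using nothing beyond the Python standard library; the task name and template fields are illustrative assumptions, and in practice you would keep the templates in versioned text files, one per recurring task.

    from string import Template

    PROMPTS = {
        "summarise_judgment": Template(
            "Summarise the following judgment in $length sentences, "
            "for an audience of $audience. Quote the operative part verbatim.\n\n"
            "$judgment_text"
        ),
    }

    def render(name, **fields):
        # substitute() raises KeyError if a field is missing; a loud error
        # is better than a silently incomplete prompt.
        return PROMPTS[name].substitute(**fields)

    print(render(
        "summarise_judgment",
        length="three",
        audience="a corporate client with no legal training",
        judgment_text="(paste the judgment here)",
    ))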

4. Transparency

Finally, I would like to mention that, at first, it will appear that you can save a lot of time using these tools. But if the client pays you by the hour, don’t be tempted to charge “time equivalent” fees; charge for the actual time spent on the work. The main reason is that you have to remain ethical, honest and transparent. But people also often underestimate how difficult it is to hide the fact that one has relied on an AI tool. People in general are becoming very sensitive to output generated by generative AI, because there are lots of non-obvious stylistic giveaways. At the same time, detectors of “AI generated content” are not yet reliable (especially for non-native speakers of English).

Conclusion

So, it’s best to anticipate that clients will expect us to use more AI tools. This is a long-term trend we cannot stop. We have to prepare for an increasing volume of work and increasing expectations from clients, with quicker turnarounds and longer documents. Thank you.
