Presentation for the CEE Bar Presidents' Conference, organised by the Hungarian Bar Association
17 October 2025, Budapest
The president of the Hungarian Bar Association has asked me to open the debate on this topic with a presentation.
How should bar associations use and regulate AI in the legal domain?
Taking into account the reports submitted, I think we can assume that there are some typical topics within this area. I’m not saying we should discuss all these in detail, but this is the menu:
a) AI tools in providing bar services and in integration with e-government services (AI tools improving the bars’ own capabilities)
b) regulating lawyers’ use of AI (deontology, ethics)
c) improving the knowledge and training of lawyers: technical assistance in creating relevant and interactive training materials with AI – reforming training
d) helping lawyers to protect their independence (representation in general)
The first one is important but not particularly problematic. And this topic differs from bar to bar: how much independence from the state we have, what infrastructure we own, how much service we can provide to lawyers, and in which fields we must rely on services provided by the state.
We have also already had in-depth discussions in the second field. The CCBE Considerations on the Legal Aspects of AI, adopted in 2020, is still very much relevant; the Guide on the Use of AI-based Tools by Lawyers and Law Firms in the EU from March 2022 discussed these issues as well, as do the recently adopted CCBE Guidelines on Generative AI. They all discuss, at a higher level, how certain core principles of our profession, such as professional competence or confidentiality, are affected by different technologies, and how these technologies might affect certain provisions of codes of conduct.
Most of our core principles seem to remain quite constant, at least in Europe, but the ethical guidelines will need to be updated from time to time. However, the bars' approach in this area is tied to actual practice: to upholding and enforcing the rules, to discussing and deciding actual cases as they arise, and only then updating the rules. So, in relation to AI tools, where the market is changing at breakneck speed, it is probably too early to try to predict and change the rules ex ante. But it is useful to exchange ideas: cases decided, negative experiences, the evergreen examples of lawyers submitting sloppy pleadings with hallucinated cases, and horror stories of prosecutors or judges doing the same.
In contrast, the third and fourth subjects, training lawyers and, more generally, protecting lawyers' independence, are two sides of the same coin: how can bars actually help lawyers, other than by issuing ethical guidelines?
Let's start with a seemingly more practical question: what should we train lawyers on? How does AI change the way we can train lawyers, how exactly should we train them, and what should be the objective of our training?
Obviously, there is a change in what lawyers expect from training: it should be more focused, personalised and pertinent; it should be interactive, but not too much hassle. We have all grown to have shorter attention spans.
How can tools help bars achieve this? It is easier to create more interactive assessments (quizzes, tests), and it is also easier to create more illustrations for materials.
LLMs can help identify difficult-to-understand concepts, or possible contradictions in regulation.
But LLMs do not help in processing or learning materials, and they do not decrease the amount of time actually spent on learning. They will not learn in our stead.
Even if some believe we won't need to learn as much information in the future, because LLMs will know those "less important things" instead of us, there is not much evidence for that so far.
So, while administrative tasks (including those related to training) can be reduced, and LLMs can help us filter out bad training material, the "elbow grease" requirement remains: we will still have to spend our neurons to understand things.
How can these tools make bars' training tasks more difficult? Let me show you this short video as an example: {a one-minute video showing the use of ChatGPT Plus / Agents for passing a CLE training with minimal human intervention}
The question is: can we keep using electronic training methods in the same way? Will remote assessment be enough in the future if everyone can cheat? Impersonation on the internet is not just a problem of fraud; it also changes our perception of the importance of things wherever we are not present in person. AI agents can impersonate us even in training, remote hearings, client conferences, webinars, and so on.
This leads to the next big question:
What should we use for training lawyers? And why should we train lawyers, what is the objective? Where do we see lawyers in 5–10 years’ time and what should they learn to remain relevant?
This is part of the AI literacy required by Article 4 of the AI Act, lawyers being at least users of AI systems.
Should we try to teach backpropagation so that lawyers get a good understanding of why and how neural networks work? Or why attention mechanisms and generative pretraining were important? I believe this leads us to very strategic questions:
Does it matter how bars see the future of lawyers as professionals? Can we position our profession better? Should we fight? With whom? What do we fight for? Who is the enemy and who is our ally? Can we just sit back, rely on our reserved activities, and wait for the special solutions offered to lawyers?
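As an aside on the teaching question: if we did want to show lawyers what backpropagation actually is, a toy example fits on a single slide. This is a minimal sketch in plain Python, not any real training system; a single "neuron" y = w*x + b is fitted to a line by gradient descent, and all numbers are illustrative.

```python
# A toy illustration of backpropagation: one "neuron" (y = w*x + b)
# fitted to a line by repeated forward and backward passes.

def train(points, steps=200, lr=0.05):
    """Fit y = w*x + b to (x, y) pairs by minimising squared error."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in points:
            # Forward pass: compute the prediction and its error.
            err = (w * x + b) - y
            # Backward pass: the chain rule gives the gradient of the
            # squared error with respect to each parameter.
            grad_w += 2 * err * x
            grad_b += 2 * err
        n = len(points)
        # Update step: nudge each parameter against its gradient.
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# These points lie exactly on y = 2x + 1, so training should recover
# values close to w = 2 and b = 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(round(w, 2), round(b, 2))
```

The point for a training audience is not the arithmetic but the shape of the process: the model only ever adjusts numbers to reduce an error signal; it has no notion of what the data means.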
Let me show you some recent charts measuring the performance of some frontier models in tasks that the evaluators believed are representative of what lawyers do.
As you can see, the clear message seems to be that frontier models are either already better than professionals at these tasks, or will be able to replace them within a short time.
And lawyers are just one profession, next in line after IT developers, about whom you can already hear very serious talk of job losses due to frontier model capabilities. Even if that is not true, it is a very visible and much-discussed problem. The only difference is that IT developers don't have bar associations; they are not a regulated profession.
I’m not sure we can efficiently fight these messages, but I believe we at least have to have answers that fit into this narrative.
This table (Table 2) is from the same GDPval evaluation.
Here, the "naive" column means the frontier model's output without human review: astonishing speed and cost improvements (even if we don't know the real economics behind them), but of course the results are not directly usable without a human. The "Try" columns mean the output was incorporated into a human review process.
This human review strategy in the same paper shows that even the frontier model providers do not expect their models to make sense of the noisy world in a practical, end-to-end manner for all professions.
They do not see expertise as a thing of the past, but their tools are so good at helping with very important parts of professionals' work that it would be an astonishing waste of human capabilities and resources to keep doing things the same way we have for a thousand years.
Market or politics will not let this waste happen without punishment coming in one form or another.
The improvements in the actual world are pretty small so far, but the potential for improvement is huge. Just imagine what happens when the naive rate becomes good enough for end-to-end use.
So, lawyers will definitely need to learn how to make use of these frontier models and the latest AI tools. But can all lawyers do this? This extract is from the UN rapporteur’s report on human rights from July.
On the market, legal domain-specific models from the cloud cost €100–€750 per month; some require at least 100 users per year, etc. The threat is real.
For comparison, the French example mentioned by the rapporteur is a €30/month subscription relying on a French model, a very capable Mistral, running in the EU.
The CCBE has just finished its generative AI guidance for lawyers, but the IT Law Committee is now working on a more technical guideline.
Because sometimes, for some use cases, lawyers have to retain full control over their data, for example in high-profile criminal cases.
For that purpose, we need to explain the difference between a cheap tool that you can run in your own office and an unattainably expensive supercomputer-based tool.
The so-called open-weight models (which anyone can download and use) are getting very good.
But the best-performing models will always be too expensive for lawyers to run on their own hardware.
Even in the cheap tools' segment, there is a lot of room for improvement, and this is one of the gaps the CCBE IT Law Committee's more technical guidelines aim to address.
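The cheap-versus-supercomputer distinction can be made concrete with a back-of-the-envelope calculation: a model needs roughly its parameter count times the bytes per parameter just to hold its weights in memory. The sketch below uses illustrative parameter counts and quantisation levels, not figures for any specific product.

```python
# Back-of-the-envelope memory needed just to hold a model's weights.
# Parameter counts and precisions below are illustrative assumptions.

def weight_memory_gb(params_billions, bits_per_param):
    """Approximate gigabytes needed to store the weights alone
    (inference also needs extra memory for activations and context)."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB

# A small open-weight model, 4-bit quantised: fits on an office laptop.
small = weight_memory_gb(7, 4)        # 3.5 GB

# The same model at 16-bit precision: a workstation-class task.
small_fp16 = weight_memory_gb(7, 16)  # 14 GB

# A frontier-scale model (hundreds of billions of parameters) at 16 bits:
# hundreds of gigabytes of fast memory, i.e. data-centre hardware.
large = weight_memory_gb(400, 16)     # 800 GB

print(f"{small:.1f} GB, {small_fp16:.1f} GB, {large:.0f} GB")
```

The practical message for lawyers is that the first category can run behind the office door, under the lawyer's own control, while the last category can only ever be rented from someone else's data centre.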
There is a way we can remain useful, even in an era where most of what we as lawyers currently do will be done better by machines.
Confidentiality, accountability, trust and the power of persuasion are all human-only qualities. These qualities will always lead clients to want to interact with lawyers.
And in these cases, lawyers will always be the source of ground truth for the machines; they will guide even a potentially superintelligent machine.
Finally, this is why I believe bars should put more effort into building the standards used in lawyers' AI workflows, including:
a) evaluating the performance of AI tools in typical human lawyer tasks, and
b) creating training materials for both humans and AI systems.
Just like last year, we again hear predictions that the AI bubble is about to burst.
But history shows that even when bubbles burst and cause serious economic harm, they leave behind durable gains that remain highly useful.
Probably, bars and lawyers will not use the data centres that will be left behind, but we will use the standards and processes put together during these volatile times. With or without machines.
Let me summarise the points I believe should be priorities. One relates to improving training (accelerate, improve, record and illustrate). The other is the standardisation work to which bars should pay more attention: what do lawyers actually do, how can we help outsiders better understand these tasks, and how do we evaluate and provide training materials suitable for both human and machine consumption.

