A possible transcript of the presentation on the Guide on the use of AI tools by lawyers and law firms in the EU


{ dr. Homoki Péter / 2023.04.27 }

The presentation is available here

Good morning everyone. My name is Peter Homoki, I am a Hungarian lawyer focusing on IT, and I will present to you a guide for lawyers on the use of AI tools, published last year by the Council of Bars and Law Societies of Europe (CCBE) and the European Lawyers Foundation (ELF).

I have twenty minutes to cover this subject, and since I have worked on this project for a very long time, I will have to be very concise.

The project started in April 2020 and finished two years later; it was a real child of the pandemic. We produced three deliverables: the first assessed the current IT capabilities of lawyers, and the second covered the barriers and opportunities in the use of NLP tools by small law firms (by Pál Vadász et al.). But this presentation is about the third deliverable, which was written in English and has very recently been translated into Hungarian as well. You can find all the deliverables at https://ai4lawyers.eu/.

Why small firms?

The project and the guide set out to focus on small law firms. The EU legal market is clearly dominated by small law firms: 71% of the employees of legal service providers in the EU work in firms with fewer than 10 employees.

This share is very different in the UK, with the US falling between the UK and the EU. (Measuring by turnover gives similar results.)

Small firms have simple, flat organisations, tight IT budgets, little IT expertise and no resources for IT development, whereas large firms have professional IT support.

The barriers of small languages and fragmented jurisdictions

There is also a second problem of size in the EU: the size of languages and jurisdictions. LLMs have to be trained on very large amounts of data, and many smaller languages simply do not have enough. Machine translation will not solve this problem, especially not in the legal domain.

In the legal domain, the problem of language is aggravated by the problem of jurisdiction: countries with very different legal systems use the same language. Here, special effort will be needed to train legal-specific AI tools. (This is also true for some US jurisdictions, e.g. Louisiana and Texas.)

The objective of the guide: to standardise demand in the long term

So why is this Guide important for lawyers?

First, because we have fragmented IT markets for lawyers due to cross-border differences, and most legaltech solutions have difficulty scaling. This is especially true for software that serves as a foundation for other software, such as practice management software. We hope this guide may help standardise the demand side.

As a consequence of these fragmented markets, simply waiting for tools to appear will not work in many countries: if lawyers have no IT budget, there will be no providers offering them tools. There is strong competition between lawyers, including between those who want to use extra tools and are open to experimenting with them and those who do not. But even this competition will not solve the lack of IT tools, not even in the mid term.

Furthermore, governments and courts do not want to declare that, from now on, lawyers who do not use this or that technology can no longer practise. If a technology is not tried and tested in the legal domain, it will not be introduced until society is forced to adopt it, as happened during the pandemic. Lawyers will need to try and test these tools themselves first; it is much better for them to do so on their own initiative than to be forced into it.

Explaining the conceptual framework of NLP in the Guide

Back to presenting the guide. One of its major aims was to explain the conceptual framework of AI tools in a way that is easy for lawyers to understand, yet still useful for learning a bit more about how and why such tools work, and that makes lawyers more comfortable using these tools.

Besides the foundations, there are a couple of pages on Natural Language Processing and on the major categories of tools, such as text generation, language understanding and text retrieval.

Six scenarios from the future

My favourite chapter is a list of six different scenarios illustrating how a small law firm might use realistic AI tools in ten years’ time. This is not a fantasy novel, so we only incorporated tools that had already appeared in research papers before 2022.

Our only assumption was that, in ten years’ time, these tools would be reliable enough for everyday use by a lawyer. The scenarios include high-level descriptions of a contract negotiation, a client intake process, document generation and a very streamlined court case preparation.

Deontology problems

Deontology problems concern the risks that AI tools pose to how lawyers work with clients, courts and each other. Some of the problems of AI tools are the same as those raised by the use of cloud computing, or online platforms in general.

While it is very convenient for lawyers not to have to deal with IT infrastructure on their own premises, and to be able to use some tools right away with minimal customisation, there are considerable drawbacks to this approach as well.

Using any such service comes with some decrease in the independence of lawyers and results in some degree of vendor lock-in; we have highlighted some possible approaches to mitigating this in the Guide.

Lawyers will also have to find some way to ensure that such vendor lock-in has a minimal effect on how they can access their own data, especially client-related data.

This kind of IT outsourcing also raises the problem of foreign governments gaining access to the lawyer’s data by forcing IT providers to grant such access.

The second major deontology problem is related to the number one problem of current LLMs: the reliability of the models.

It is a very human thing to rely on opinions based on the status of the entity giving them. But people will have to understand that the most advanced AI tools, such as neural networks, work very differently from human intelligence. They have access to more knowledge than a single human could ever amass, yet the information they provide is sometimes surprisingly unreliable, even compared to what you would expect from a human. We don’t know whether this problem can be solved, but until it is, we have to adapt to such unreliability.

So lawyers will need to experience the degree of this unreliability first-hand, and they should not use any tool unless they have a “reliable working model” of what kinds of problems to expect and how to counteract them.

This is also why, for many current uses, it is not possible to make such tools accessible directly to clients without a lawyer’s involvement. This is called “human in the loop”: only human review can ensure the reliability of the advice or other output.
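To make this workflow concrete, here is a minimal sketch of such a human-in-the-loop gate, written in Python. The generate_draft and lawyer_review functions are hypothetical placeholders standing in for an AI tool and for the reviewing lawyer; they are not part of any real product discussed in the Guide.

    from dataclasses import dataclass

    @dataclass
    class Draft:
        text: str
        approved: bool = False  # set only after explicit human review

    def generate_draft(question: str) -> str:
        # Hypothetical AI call: in practice this would query an LLM,
        # whose answer must be treated as an unreliable draft.
        return f"AI-generated draft answer to: {question!r}"

    def lawyer_review(draft: Draft) -> Draft:
        # The mandatory human step: the lawyer reads, corrects and
        # explicitly approves the draft before it may reach the client.
        corrected = input(f"Edit or accept the draft:\n{draft.text}\n> ")
        return Draft(text=corrected or draft.text, approved=True)

    def answer_client(question: str) -> str:
        draft = Draft(text=generate_draft(question))
        reviewed = lawyer_review(draft)  # the human-in-the-loop gate
        if not reviewed.approved:
            raise RuntimeError("Unreviewed AI output must never be released")
        return reviewed.text

The point of the sketch is simply that there is no code path from the AI draft to the client that bypasses the lawyer’s review.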

However, this “human in the loop” also prevents many solutions from scaling. But this is not a major problem for law firms, because law firms rely on selling human expertise, not on selling AI models or legaltech tools; that is a different business.

Also, AI tools will primarily aim to

  • enhance the quality that can be achieved within the same time, and

  • decrease the time spent on administration and on research,

not to increase the amount of output.

The third deontology issue is that of client confidentiality and privacy. Lawyers have to put the safety of clients’ data ahead of any opportunity for cost savings, and these technologies also affect this aspect and the related data protection issues.

Before using any such tool on client data, lawyers will have to find out whether the data they submit will be reused for any purpose other than providing the service to the lawyer. This is crucial. Even if the provider promises full anonymisation before reuse, anonymisation can never be complete, and trained models may be reverse engineered to reveal confidential information.

The last deontology issue is that of competence. A lawyer may fail to meet the expectation of competence both by not using a technical tool that is already widely used by other lawyers and required by clients, and by adopting such tools too early: for example, when a tool is too risky for client data, or when a lawyer is overconfident and does not understand the risks of a technology before using it.

Changes and updates since 2022

The guide came out a bit more than a year ago, and to say that many things have happened in this field since last April is an understatement. LLMs, or at least one type of LLM, became an instant hit with the wider public, and even more advanced versions have become available since then.

Thanks to these clear demonstrations, many more lawyers now understand why the field of AI is crucial for our future, and not just another esoteric hype that will burst in five years’ time.

In fact, many of the future scenarios imagined in the Guide have suddenly become possible in the fields of natural language understanding and generation. They are not necessarily reliable yet, but you can already experiment with them. Some priorities have suddenly changed: we now have to focus on retraining ourselves for the longer term, just like every other intellectual profession.

What we are trying to do is provide more demos for the community of lawyers: given the flood of information (now mostly AI generated), it is much easier to show what is possible than to talk about it in yet another paper.

Still, research is more important than ever in this field, and many more people will have to be involved at every level, regardless of their skill level, especially trainers like those attending this conference.
