This blog is based on a paper, *Judges, lawyers, and litigation: Do they, should they, use AI?*, presented by Robin Allen KC and Dee Masters to the UK's Employment Law Bar Association on 7 November 2024.
Almost every day there are news reports about a fresh use for AI in solving a problem or speeding up a solution. The business of law is not immune to this, but there is a significant gap between these ideas and the knowledge that lawyers need about AI systems, their uses, and their implications.
People may be less aware that judges are being encouraged to use, and are using, AI systems in the administration and delivery of justice. Where this is happening, how, and even why, is largely unknown.
Moreover, many lawyers will have been encouraged to use some form of AI, not just in relation to discovery, but in their everyday work, including in the process of litigation. Yet they may not know to what extent this is lawful and within regulatory rules, or the extent to which they must declare their use. They may not even have thought about such issues when, for instance, just “trying out” Microsoft Copilot.
Five basic issues with AI in the justice system
There are five reasons why we think this is important and why people should know more about the implications of AI.
First, how an AI system produces a result is often not fully explainable, may be the product of systems which are biased, is rarely fully observable or transparent, and may use the work product of others who have not consented to its use in an AI tool. So whenever AI systems are in use, there are forensic questions about all these points that lawyers and judges need to address.
Secondly, the AI tools currently on offer are only the beginning; what the future may hold is startling. For example, we know through our contacts in the industry that businesses are looking to produce tools that can create possible versions of an opponent’s skeleton argument (before exchange) and produce potential versions of the judgment which a judge might hand down, so that lawyers can craft their submissions accordingly. Only a moment’s reflection is needed to conclude that there are profound ethical and regulatory issues in such a brave new world.
Thirdly, within the UK there is neither legislation, nor regulation, nor case-law that specifically controls the usage of AI systems by lawyers, litigants or judges. Meanwhile, the rest of the world is thinking through and developing context-specific smart regulation that can be a boost for business and government whilst protecting people and enabling beneficial innovation. Action in the UK is some considerable distance behind.
That is not to say that nothing has happened here, only that the UK has so far taken the softest of actions, relying on existing rules and regulators, and leaving litigants to use existing data, equality and human rights law to challenge improper practices themselves.
Fourthly, it has been realised for some time that there is a particular issue about judicial use of AI systems. While there are signs of the beginning of a general discussion about AI and the judicial system, there is not yet a serious public debate about the general use of AI systems by judges.
Lastly, as AI systems are changing the way justice is being done, there is the potential for an immediate power imbalance between those who use AI systems and those who are then affected by that usage. Little has been said about this in the UK so far, but a judicial system which allows for such an imbalance is likely, in the long run, to undermine confidence in the rule of law.
AI use by law firms and litigators in the UK
AI is being used by non-contentious lawyers and by litigators in the preparation of their cases. Our assessment of the pattern of adoption of AI systems, based on discussions with colleagues and our knowledge of the practice of employment law, is that it is happening in waves, as follows:
| Wave 1 (happening now) |
| --- |
| Producing chronologies. |
| Producing basic opening skeleton arguments. |
| Drafting basic court orders. |
| Ordering information (e.g. producing a list of all documents referred to across statements). |
| Creating bundles where email chains appear once and in chronological order (rather than endlessly). |
| Identifying missing information. |
| Legal research. |
| Producing schedules of loss. |
| Disclosure exercises. |
| Wave 2 (maybe happening now/likely future activity) |
| --- |
| Identifying all evidence that links to a particular factual dispute and identifying whether it is helpful/unhelpful to a party’s case. |
| “Marking up” trial bundles e.g. identifying which witnesses refer to what documents and in relation to what matters, identifying key documents/evidence. |
| Ordering evidence e.g. repetitive medical records to produce a master chronology. |
| Assessing merits of a claim/defence. |
| Legal research (highly personalised). |
| Producing schedules of loss (detailed and highly personalised with little human guidance). |
| Drafting witness statements. |
| Cross-examination plans (from scratch or identifying missed points/disputes). |
| Producing possible versions of your opponent’s skeleton argument (i.e. before you have exchanged) using a database of previous skeleton arguments/closing submissions (query: will there soon be a “market” in our skeleton arguments which firms/barristers will harvest to use in AI?). |
| Producing possible versions of the judgment which the judge might hand down (using their previous judgments). |
| On retirement, well-known silks offering up all materials (such as opinions) drafted over a career to monetise their work via AI tools trained on it. |
| Witness training (PowerPoint and Teams can already be used to critique and provide real-time feedback on presentation skills and speaking style). |
The tasks set out in the second table may already be happening, and it seems certain to us that sooner or later – subject to developing controls – they will be.
By judges
We are sure that some tech-savvy, time-pressed judges and tribunal members will already be wondering whether Copilot or ChatGPT can make their lives easier. Any judge using an up-to-date version of Word to take notes or write judgments or orders will have Copilot on the top ribbon, and when “cutting and pasting” a user will be prompted to use it. At present, we are not aware of any UK survey of the extent to which this is happening; however, from conversations that we have had, we estimate that judicial use of AI systems in the UK is not negligible.
However, we can foresee many ways in which AI could play a role during the litigation process itself and as a means of dispute resolution:
| Wave 1 (right now) |
| --- |
| Summarising evidence heard in the Tribunal at the end of each day/the trial. |
| Summarising the case for a “write up” within a case management preliminary hearing. |
| Helping to find common dates of availability by estimating trial hearing lists ahead of a preliminary hearing, so the parties attend well prepared both on witness availability and on proposed directions. |
| Constructing complex case management timetables around parties’ availability (maybe provisionally ahead of a preliminary hearing). |
| Producing chronologies. |
| Identifying what areas need to be addressed in evidence/during cross-examination at trial (could be useful where a party is unrepresented to ensure that the trial is fair). |
| Drafting basic orders. |
| Creating summaries of the relevant areas of law. |
| Next wave (possible future when technology improves) |
| --- |
| Identifying all evidence that goes to particular areas of factual dispute. |
| Identifying if any issues have not been addressed during cross-examination (maybe very useful with litigants in person to ensure that the trial is fair). |
| Reminding the judge of recent decisions in an area of law relevant to a case before them. |
| Drafting correspondence to the parties. |
| Predicting which cases are most likely to settle (useful for managing resources). |
| Predicting which ADR track is likely to be most effective. |
| Predicting how long a trial is really going to take, including how much deliberation time will be needed. |
| Providing judges with “real time” prompts as to relevant case law or guidance as submissions/applications are being made. |
| Writing up (hopefully) uncontroversial aspects of a judgment, for example which individuals gave evidence and when during a trial. |
| AI agents acting on behalf of the court service as a whole to answer basic questions, e.g. when is the recent letter sent on behalf of the claimant likely to be read? Has it been read yet? Has there been an update on whether a judge has been allocated to a floating case? They might even be used to make administrative decisions like varying case management directions. |
| Data sharing across the breadth of government in so far as relevant to the particular legal dispute, e.g. a judge ordering compensation at the end of a case could have real-time access to the claimant’s benefits record. |
| “Real time” translation of evidence from non-English-speaking witnesses and/or documents. |
There are even AI tools which promise to facilitate mediations by allowing parties to move beyond apparent impasse, albeit alongside a human mediator.
Basic justice risks
These possible uses for AI in litigation, whether by judges or lawyers, or the parties more generally, give rise to significant issues.
Equality before the law: Some AI tools have been shown to discriminate because the training set used has been insufficiently representative or the machine learning process has led to the system learning a discriminatory correlation.
Duty to give reasons: There is a very real question mark as to whether judges can comply adequately with the duty to give reasons if elements of their decision-making have been supported by AI. This is because, at present, AI can rarely adequately explain itself.
Accuracy: Generative AI may look impressive, but it is not always accurate. LLMs in particular are prone to “hallucinations”, where information is fabricated. This was an issue in a recent case in the US where it became apparent during the litigation that one of the experts had used Copilot, leading to errors. The full judgment is worth reading and reveals a damning judicial assessment of the unthinking use of generative AI in litigation.
Power imbalance/equality of arms: AI has the potential to put well-resourced parties at a massive advantage. It is not too fanciful to imagine a future – if no controls are introduced – in which large law firms could harvest judgments and / or skeleton arguments in order to allow their clients to generate the likely judgment of a particular judge or an opponent’s likely approach towards settlement, or to have “real time” analysis of a trial.
Fairness: Procedural fairness is obviously the cornerstone of the Employment Tribunal process. If AI is to be used to undertake tasks ordinarily performed by a judge such as summarising evidence, how is that to be conducted in a way which has sufficient checks and balances so as to maximise accuracy? Even something as simple as summarising evidence could be skewed in favour of one particular party if the AI tool was not trained so as to be even-handed.
Data protection / confidentiality: The judiciary benefits from wide powers and exceptions in the Data Protection Act 2018. However, the position is less “generous” for lawyers. There are also various data protection provisions which allow the processing of data in the context of legal proceedings (including prospective legal proceedings) but lawyers do not have carte blanche and there are likely to be interesting legal arguments in the future about whether lawyers are permitted to process the personal data of clients (or former clients or other people’s clients) when using AI tools.
For example, is it lawful to use a chronology in Case X to generate a chronology in unconnected Case Y? What legal basis would be relied upon within the DPA 2018? How would rules like the principles of data minimisation, transparency, fairness and accuracy translate to the use of AI by lawyers? The answer to these questions is likely to be use case specific.
There is also a broader issue about the confidentiality of client data and the uses permitted when lawyers store it on systems and use it within AI tools.
Transparency: Since AI can be unfair, inaccurate and unlawful if not used properly, transparency is important. Unless the judge discloses such use, how is a party, or their lawyers, or an appeal court or indeed fellow members of a tribunal to know that such a use of AI has occurred?
Truth: Jurists since at least the time of Sir Francis Bacon, the first Queen’s Counsel, have asked the question “what is truth?”, but now with AI systems at work in law, this will need to be revisited yet again. The new question is “To what extent is a witness giving truthful evidence if it has been generated in part or in whole by an AI system?”
Guard rails: AI, the judiciary and lawyers in the UK
Now we move to sketch out what we consider to be the key “guard rails” that need to be discussed and debated when it comes to AI, the judiciary and lawyers in the UK.
Redlines: We think that the first point which requires urgent discussion in the UK is whether there are specific uses of AI in the litigation process which should simply be banned. There are some “obvious” use cases which should be considered for a complete ban such as using AI to predict judgments or assess the emotional temperament of a judge on the basis that it undermines the judicial process itself and creates an unacceptable power imbalance between the parties.
Transparency within the litigation process: There is already an increased focus on transparency in the litigation process in relation to the generation of important documents. For example, under the CPR there are now rules which require witness statements to be in the witness’s own “language”, and there must be transparency about the drafting process itself. This type of approach should be extended to the use of AI by lawyers, parties and judges in litigation. That is, there should be clear rules and processes around when, how, and what information should be disclosed when a judge, lawyers or parties use AI. The information will need to be sufficiently detailed that the parties can satisfy themselves that the litigation process has been fair (for example, keeping records of all inputs and outputs). There will also need to be associated mechanisms and rules to allow parties to challenge the use of that AI insofar as they consider that it has impacted on the fairness or validity of the judicial process.
Data tainting / leakage: There will need to be mechanisms in place to ensure that data tainting and leakage does not occur when judges, lawyers and parties use AI. By way of example, if a judge were to use AI to create a chronology of the key events in a case based on the trial bundle and / or witness statements, they would need adequate training (including “prompt training”, i.e. what instructions to give an AI tool like Copilot to ensure the best result) to ensure that the chronology was based only on the information in that particular matter. There would also have to be careful auditing of the AI tool used, its privacy settings and the broader IT settings, to ensure that data does not “leak” into the public domain (for example, by being used by an AI company to train its own AI tools) or “leak” between case files.
Data protection/privacy obligations and lawyers: How lawyers use data obtained from their clients, or as part of their caseloads, within AI tools is regulated by the DPA 2018 and the UK GDPR. It is easy to forget that data protection laws mean that nothing can be done with personal data unless it is permitted. Each lawyer must satisfy themselves that they are using personal data lawfully.
Human oversight measures: There will also need to be human oversight at every stage where AI is used. It is easy to see that too much human oversight could undermine the utility of the AI tool in the first place. However, provided that the use of the tool is lawful, we consider that thoughtful human oversight measures should be feasible without undermining the tool’s utility.
Auditing: The cornerstone of judicial use of AI, and of its use by lawyers and parties, must be auditing. The importance of auditing is confirmed time and time again by actors in this space, including in the EU AI Act. For example, in the context of judicial AI, we envisage that a system would need to be put in place – both before a tool was rolled out and thereafter – to assess matters such as accuracy and the extent to which there is any bias in the system.
Ultimately, we think that there needs to be an urgent discussion and debate on the appropriate use of AI in the legal system.
