FutureProof: How do you sue a robot?

Our recent articles in our FutureProof series have focused on what impact AI, and particularly the rise of Generative AI, will have on the way in which professionals work. Whatever the size of their business, it seems almost inevitable that all professional firms will begin to use some form of Generative AI, and that use will only increase as time goes on. Whilst we are not yet at the stage of robots acting autonomously, Generative AI certainly works differently (and has far more widespread possibilities) than any technological advancement we have adopted in our professional lives to date.

The potential benefits of Generative AI are well publicised and, over time, it will start to transform the way in which professionals work. But it also raises questions about the risks that arise from its use and, particularly, how liability will attach if it all goes wrong. 

Q1: Who will be the relevant party to claims?

Quite who the relevant party to a claim would be will depend, in part, on where the issue arises: is it one of input (relating to the information used to develop a large language model), output (a lack of oversight of the eventual AI ‘answer’ produced) or process (a more typical systems failure)?

Even if it is possible to pinpoint exactly what the issue was (on which see Question 3 below), will a contractual or tortious relationship arise? Is it, for example, sufficiently foreseeable to impose a duty of care on the provider of an AI tool where the party that suffers damage may not be the ‘purchaser’ of the software at all, but instead may be the purchaser’s client (for example, the solicitor’s lay client, rather than the firm which purchased and uses the AI tool)? And what will be the impact of the numerous contractual indemnities, exclusions and warranties that will no doubt have formed part of the initial negotiations?

In addition, what about the role of the AI system itself? There is an ongoing academic debate over whether AI can, or at some point should, be determined to have its own distinct legal personality. For now, momentum appears to have shifted away from the idea, but it is certainly not one that should be ignored.

Q2: Has there been a breach of duty?

The law of professional negligence boils down to one fundamental question: “Has the professional used reasonable skill and care?”

But the context in which that question is assessed is constantly evolving and the time will come when established practice will inevitably include the use of Generative AI.  

For now, the risks to professionals feel most likely to relate to their negligent use of AI or their failure to recognise its limitations. But there are already tasks which Generative AI can perform more quickly and efficiently than a human, and that will only increase as the technology develops, its outputs become more reliable and its use becomes more commonplace. Consequently, there may well come a point (perhaps in the not too distant future) when professionals are found to have acted negligently for their failure to use it.

Only a couple of years ago, that proposition might have been considered controversial, but it is now becoming more accepted. Indeed, the Master of the Rolls, Sir Geoffrey Vos, made precisely this point when giving a speech to the Professional Negligence Bar Association in April 2024.    

Q3: Has AI caused the loss?

As we have touched upon above, identifying what exactly has gone wrong is likely to be a costly and time-intensive process, with opportunities arising for a new subset of AI experts. 

There will be complex legal issues for the Courts to decide when the fault lies not necessarily with the professional but with the AI technology itself. Will it be a defence for the professional to say that they were entitled to rely on the accuracy of the AI? Contractual exclusions, indemnities and warranties will all come into play, there may be numerous parties involved, and the risk of satellite litigation feels high. 

With that in mind, will the judiciary/legislature look to adopt a strict liability approach similar to that which has been applied in respect of autonomous vehicles, or impose some form of assumed joint and several liability regime, to ensure certainty for the eventual end user and victim of any error/omission/injury?

It feels unlikely that this would go so far as to prevent the paying party then seeking to pursue recovery actions against the other commercial parties to the chain, but that process in itself will serve to increase the time, cost and complexity of litigation.

Q4: How will loss be assessed?

Typically, professional negligence cases are assessed by reference to financial loss - for example, the missed opportunity to settle at a higher level, additional costs incurred as a result of poor decision-making, or rebuild costs following a negligent design. This is likely to remain the case where AI has been used to speed up or facilitate a process traditionally undertaken by a human being and an error has arisen. But what about circumstances where the error leads to issues of defamation or discrimination, or the unintentional release of the client’s sensitive or confidential data? How will the Courts approach the assessment of those damages where AI is involved and where the damage to the claimant is less easy to assess or quantify?

So, who will be to blame if it all goes wrong?

The truth of the matter is that the development of Generative AI is currently outpacing judicial and legislative change. Quite how any potential claims will be presented and dealt with feels like the great unknown of this space. Rather than try to predict the answers, all that can be done with any degree of certainty at this stage is to identify the issues that will form the focus of future claims in the professional indemnity space.  

What is clear is that professionals looking to embrace the opportunities arising have a responsibility to check that the technology is working properly, to maintain oversight of its use, and to put sufficient risk management procedures in place so that all end products are properly checked. Most importantly, professionals need to satisfy themselves that the benefits of adopting Generative AI outweigh the risks, and that proper risk management processes are in place, before they adopt it.
