With so much AI hype, where are all the AI disputes?


The advent of generative AI, underpinned by large language models, has caused a proliferation of AI software. We are seeing foundation models such as OpenAI’s GPT-4 and Google’s Gemini continually expand their capabilities, as well as new (sometimes disruptive) entries to the market such as Mistral and DeepSeek. There has been an explosion of new AI products, built on these foundation models, and extensive adoption across most sectors of the economy as businesses hunt for efficiencies and a competitive edge.

As the next industrial revolution gathers steam (or perhaps ‘gathers processing power’ would be more apt), there has been noticeably little litigation on AI. This article will outline the few areas where AI disputes are arising, consider why there aren’t more cases, and then posit where the next wave of disputes might arise.

For this article, an AI dispute is one which concerns a wrong resulting from, or caused by, the use (or misuse) of AI. This could include an IP infringement arising from training AI or from its output, a breach of contract caused by an ineffective AI product, harm caused to a customer due to a hallucination by an AI chatbot, or a discrimination claim resulting from bias occurring in the output of AI software.

Data is one of the key pieces of the AI puzzle. Along with improvements in computing power, it is the vast amount of data available in electronic form which has propelled the latest advancements in AI technology. Training a large language model requires extraordinary amounts of data, much of which has been scraped from the internet. This is the space in which the vast majority of existing AI litigation has arisen.

Considerable amounts of data on the internet, including some of the data scraped for AI training, are protected by intellectual property (IP) rights such as copyright. It did not take long for the first infringement cases to arise. Most AI litigation to date has been brought in the US, home to the most popular foundation models, and most of it concerns IP infringement. Cases have been brought by artists, music publishers, authors and news organisations (to name a few). One of the most high-profile is the New York Times case against Microsoft and OpenAI, alleging that the newspaper’s articles were used to train AI software without consent.

Notwithstanding the various IP infringement cases which have arisen, there is a noticeable lack of AI litigation more generally. There are a few reasons why this might be the case. Firstly, it is still early days. Generative AI is a relatively new tool and businesses are at the early stages of integrating it into their workflows. It is also possible that disputes are settling at the pre-action stage, an outcome attractive to AI providers looking to protect their reputation in a new and increasingly crowded market.

Of perhaps greater consequence are the untested legal issues which accompany AI disputes, quite apart from their difficult fact patterns. For example, there are complex questions of liability when it comes to harms caused by AI. Where harm is caused by generative AI software, should liability lie with the provider of the foundation model, the provider of the AI software, the business using the software, or some combination of the three? Further, a claimant may struggle to obtain sufficient evidence to ground a cause of action where the AI model is opaque and the product is complex and/or acting autonomously.

For contractual claims, liability generally requires the existence of a contractual relationship, so the end-customer of a product or service is unlikely to have a contractual claim against anyone other than the immediate provider of the goods or services. To avoid costly disputes, businesses should consider risk allocation in their contracts carefully. For tortious claims, where liability falls will depend on the particular tort or statute. Negligence, for example, generally requires the defendant to owe the claimant a duty of care, and we may see some interesting cases considering how far up the AI supply chain that duty extends. Rather than a snail in the bottle, it may be a bug in the system which makes new law.

While the answers to these questions of liability will be fact-specific, there are general principles at play for AI development and use which would benefit from judicial guidance. Claimants are unlikely to finance such complex litigation until test cases have made their way through the courts. There are also areas of uncertainty where we may see legislative changes. For example, the Government recently consulted on changes to copyright law in relation to AI. As the use of AI products becomes more embedded and widespread, we expect more cases to be issued.


The technology underpinning generative AI is progressing at speed. There are likely to be more fundamental questions for the courts and legislators as the technology advances. Agentic AI is the latest goal for AI providers, and the autonomy of these ‘agents’ will result in novel questions of liability and accountability. As AI models become more powerful, legislators might even need to consider whether they should gain separate legal personality – a status acquired by companies over a century ago. If the last few years have taught us anything, it is that the distant horizon might not be so distant.


Key contacts and authors

Joseph Moore

Partner, Dispute Resolution