The advent of generative AI, underpinned by large language models, has caused a proliferation of AI software. Foundation models such as OpenAI’s GPT-4 and Google’s Gemini continue to expand their capabilities, while new (sometimes disruptive) entrants such as Mistral and DeepSeek join the market. There has been an explosion of new AI products built on these foundation models, and extensive adoption across most sectors of the economy as businesses hunt for efficiencies and a competitive edge.
As the next industrial revolution gathers steam (or perhaps ‘gathers processing power’ would be more apt), there has so far been noticeably little litigation over AI. This article will outline the few areas where AI disputes are arising, consider why there aren’t more cases, and then posit where the next wave of disputes might emerge.

What is an AI dispute?
For this article, an AI dispute is one which concerns a wrong resulting from, or caused by, the use (or misuse) of AI. This could include an IP infringement arising from training AI or from its output, a breach of contract caused by an ineffective AI product, harm caused to a customer due to a hallucination by an AI chatbot, or a discrimination claim resulting from bias occurring in the output of AI software.
What are people actually fighting about?
Data is one of the key pieces of the AI puzzle. Along with improvements in computing power, it is the vast amount of data available in electronic form which has propelled the latest advancements in AI technology. Training a large language model requires extraordinary amounts of data, much of which has been scraped from the internet. This is the space in which the vast majority of existing AI litigation has arisen.
A considerable amount of the data on the internet, including some which has been scraped for training AI, is protected by intellectual property (IP) rights such as copyright. It did not take long for the first cases of IP infringement to arise. Most AI litigation to date has been brought in the US, home to the more popular foundation models, and most of it concerns IP infringement. Cases have been brought by artists, music publishers, authors and news organisations (to name a few). One of the most high-profile is the New York Times’ claim against Microsoft and OpenAI, alleging that the newspaper’s articles were used to train AI software without consent.
Refresher on IP rights
IP rights generally apply to original creations and give the creator the exclusive right to use that creation. Some of the better-known IP rights in the UK (which are generally protected in some form in most countries around the world) are:
Copyright
Which protects expressions of ideas such as art, music, software and other literary works.
Trade marks
Which protect signs or symbols (e.g., logos) used to distinguish products or services.
Patents
Which protect inventions.
The most high-profile piece of AI litigation in the courts of England and Wales is Getty Images v Stability AI Ltd. In broad terms, Getty Images (a well-known supplier of images and other visual media) alleges that Stability has infringed its intellectual property rights 1) by scraping millions of images from Getty Images, without consent, and using those images to train AI which is made available to users in the UK, and 2) by producing output, in the form of synthetic images, which reproduces a substantial part of works protected by copyright and/or bears Getty Images’ trade marks. The case is due to be heard in June/July 2025.
As most of the well-known foundation models are based in the US, we expect many of the IP infringement cases to arise there. But as the Getty Images case shows, even in those circumstances, there may be a case to answer in the UK.
Why is there so little litigation?
Notwithstanding the various IP infringement cases which have arisen, there is a noticeable lack of AI litigation more generally. There are a few possible reasons for this. First, it is still early days: generative AI is a relatively new tool and businesses are at the early stages of integrating it into their workflows. It is also possible that disputes are settling at the pre-action stage, an outcome attractive to AI providers looking to protect their reputation in a new and increasingly crowded market.
Of perhaps greater consequence are the issues particular to AI disputes which, quite apart from difficult fact patterns, remain untested. For example, there are complex questions of liability to be considered when it comes to harm caused by AI. Where harm is caused by generative AI software, should liability lie with the provider of the foundation model, the provider of the AI software, the business using the software, or some combination of the three? Further, it may be difficult for a claimant to obtain sufficient evidence to ground a cause of action where the AI model is opaque and the product is complex and/or acting autonomously.
For contractual claims, liability generally requires the existence of a contractual relationship, so the end-customer of a product or service is unlikely to have a contractual claim against anyone other than the immediate provider of the goods or services. To avoid costly disputes, it will be very important for businesses to consider risk allocation fully in their contracts. For tortious claims, where liability falls will depend on the particular tort or statute. For example, negligence generally requires a duty of care to be owed by the defendant, and we may see some interesting cases considering how far up the AI supply chain that duty extends. Rather than the snail in the bottle, it may be a bug in the system which makes new law.
While the answers to these questions of liability will be fact-specific, there are general principles at play for AI development and use which would benefit from judicial guidance. Claimants are unlikely to finance such complex litigation until test cases have made their way through the courts. There are also areas of uncertainty where we may see legislative changes. For example, the Government recently consulted on changes to copyright law in relation to AI. As the use of AI products becomes more embedded and widespread, we expect more cases to be issued.
What is around the corner?
1.
We consider that, alongside continuing IP infringement cases, we will begin to see professional negligence claims (i.e., claims that a professional has fallen below the standard required of them) where AI has been misused by professionals in carrying out their work. This could even extend to the non-use of AI, if the use of an AI tool has become standard practice in a particular industry and the individual has failed to use it. The courts will soon have to grapple with the impact the use of AI has on the reasonable standard expected of professionals.
2.
The well-rehearsed concerns about bias exhibited in the output of some generative AI models are likely to cause a steady increase in the number of discrimination claims brought under the Equality Act 2010, along with related employment disputes.
3.
In light of how AI models are trained, and the extensive scraping of data from the internet, there is a risk that personal data has been used in that training. If an individual has suffered damage or distress as a result of a breach of data protection law, they may have a claim under UK GDPR. Due to the cost of litigation and the likely quantum of relief awarded, we would expect most claims to be brought by way of a mass claim (you can read more about these in last year’s Yearbook). However, in light of the Supreme Court’s decision in Lloyd v Google[1] and the Court of Appeal’s judgment in Prismall v Google[2], there are significant hurdles to mass data privacy claims succeeding, broadly due to the need for individualised analysis for each claimant.
One way around these hurdles is to bring claims under the competition regime by way of a Collective Proceedings Order, although that route has its own difficulties. Another potential lifeline for mass claims of this nature is the Digital Markets, Competition and Consumers Act 2024, which has given the Competition and Markets Authority (CMA) considerable new powers to investigate infringements of consumer law and impose fines (see our briefing on the Act here). There may be scope for follow-on group claims if the CMA makes a factual finding of infringement against a business.
4.
Connected to the use (or misuse) of personal data, we may also see more challenges to automated decision-making based on Article 22 of UK GDPR. There have already been cases in Europe based on Article 22 of the EU GDPR, for example in relation to AI being used for credit scoring or fraud probability scoring. The Data (Use and Access) Bill progressing through Parliament, in its current form, softens the rules on people being subject to automated decision-making, largely to encourage AI adoption.[3] However, the bill retains the general prohibition (subject to limited exceptions) where the automated decision is based on the processing of special category data (e.g., racial or ethnic origin) and requires ‘safeguards’ for automated decisions based on other types of personal data. If the law is relaxed in the UK and there is an increase in automated decision-making, disputes may arise out of allegations of ineffective safeguards, such as a failure to give the data subject information about decisions and/or to provide meaningful human review.
And the distant horizon?
The technology underpinning generative AI is progressing at speed, and there are likely to be more fundamental questions for the courts and legislators as it advances. Agentic AI is the latest goal for AI providers, and the autonomy of these ‘agents’ will raise novel questions of liability and accountability. As AI models become more powerful, legislators might even need to consider whether such models should gain separate legal personality – a status companies acquired over a century ago. If the last few years have taught us anything, it is that the distant horizon might not be so distant.
Footnotes
[1] Lloyd v Google LLC [2021] UKSC 50.
[2] Prismall v Google UK Ltd and DeepMind Technologies Ltd [2024] EWCA Civ 1516.
[3] You can read more about the changes introduced by the bill in our briefing.