3.1 Intellectual property ownership and infringement
Generative AI raises a host of complex intellectual property (IP) issues under existing legislation, providing fertile ground for disputes. Firstly, there may be questions around ownership of the AI’s output – does it belong to the supplier of the AI system or to your organisation? Secondly, generative AI is trained on data. If the AI’s training data is protected by IP rights, then the training itself, and the use of the AI’s output, may give rise to liability.1 Thirdly, generative AI may also learn from, and reuse, data inputted as prompts. So, if employees are inputting confidential information or information protected by IP rights, that may also give rise to litigation risk.
Where liability for IP infringement ultimately lies – for example, as between the provider of AI software and the user – is still quite uncertain and will depend significantly on the facts of each case.
3.2 Data protection
If personal data is processed in the training or use of an AI model, including when employees input prompts, in breach of privacy legislation (for example, without a lawful basis or in ways that individuals would not reasonably expect and/or that they have not been made aware of) then there is a risk of enforcement action by regulators as well as of compensation claims from data subjects. The ICO has taken action against Clearview AI Inc2 in respect of facial recognition technology, and issued a preliminary enforcement notice against Snap over an alleged potential failure properly to assess the privacy risks posed by its generative AI chatbot.3
3.3 Automated decision making
It is contrary to UK GDPR, subject to exceptions, for an organisation to allow AI to make decisions, without any human involvement, that have a legal or similarly significant effect on individuals. The ICO gives, as examples, automatic refusal of an online credit application and e-recruiting practices.4 Given advances in generative AI, businesses across a broad variety of sectors should consider whether such decision-making may be occurring within their operations.
3.4 Bias
There are well-documented issues with bias occurring in the output of some generative AI models. This can occur due to pre-existing bias in the data upon which the AI is trained. A recent UNESCO report has identified instances of generative AI making links between gendered names and careers, generating negative responses to prompts related to LGBTQ+ individuals and women, and assigning stereotyped professions to different genders and ethnic groups.5 It is possible that any bias exhibited by AI systems used by businesses may result in discrimination claims under the Equality Act 2010. This could also give rise to employment disputes.
3.5 Errors
Another well-documented issue for generative AI models is the existence of hallucinations – or, less euphemistically, the generation of wholly fictitious output. Alongside the obvious liability that can arise from errors (for example, contractual liability), there may also be liability in professional negligence, misrepresentation or defamation. Very small errors may go unnoticed by human reviewers but may have a large impact. Interesting questions of contract law may also arise where a business’s chatbot makes an erroneous promise to a customer – for example, to sell a product or service vastly below market price. Conversely, one can also foresee in the future examples of non-use of AI potentially giving rise to a professional negligence claim, for example, if something goes wrong and AI has not been used in circumstances where its use has become standard practice.
3.6 AI-washing
Numerous businesses claim to be using AI when advertising their services. If they are not using AI, or not using it to the extent they imply, then they could be at risk of liability for ‘AI-washing’. In March 2024, the Securities and Exchange Commission (SEC) in the USA released information about two settled enforcement actions for AI-washing against investment advisers.6 In England and Wales, such liability could potentially fall under misrepresentation, consumer protection laws or securities regulations.