The dilemma of the deepfake: intellectual property and synthetic AI-generated content
There's an ongoing global debate about the regulation of artificial intelligence (AI). Broadly speaking, policymakers are trying to address the potential safety concerns associated with the rise of AI without stifling the technology's innovation and growth.
The use of AI to generate so-called “deepfakes” – artificial images or videos of events that never happened, usually featuring the likeness of real people (most often celebrities) – has grown considerably in recent years. Their use by the President of France, Emmanuel Macron, at the start of the recent AI Action Summit in Paris has renewed press and public attention on the associated problems.
The safety of AI systems was also recently the subject of the independent International AI Safety Report (co-ordinated by the UK’s AI Safety Institute, now the AI Security Institute, and published by the UK Government’s Department for Science, Innovation and Technology in January 2025), which discussed the state of advanced AI capabilities and the risks associated with them. Some of the risks applicable to deepfakes, for example the risk of malicious use, are already in the process of being addressed by the UK Government, which is seeking to make creating sexually explicit deepfake images a criminal offence.
The targets of deepfakes
Firstly, some context. According to Google’s DeepMind division, AI is more commonly used to create realistic but fake celebrity images than to assist in cyber-attacks.
Ofcom’s Discussion Paper published in July 2024 provides a comprehensive overview and explanation of deepfake technology. The paper lists many high-profile individuals, including the likes of Taylor Swift, Sadiq Khan and Martin Lewis, who have been the victims of deepfakes. More recently, in January 2025, Pope Francis (a popular deepfake target) cautioned at the World Economic Forum in Davos that AI is feeding the “growing crisis of truth in the public forum”. But what can celebrities and other high-profile individuals do to counteract deepfakes?
Our series of articles
Over the coming weeks, we'll publish a series of articles that look at deepfakes through the lens of English law, including the law's potential limitations in this area.
Our focus will be civil (not criminal) law. This first article focuses on deepfakes from an intellectual property law perspective. The second will focus on the data protection regime and our third article will consider other potentially relevant legal options beyond the intellectual property and data protection spheres. It will also look ahead to legislative proposals and what’s to come in the rapidly evolving landscape of AI regulation.
Intellectual property
Under English law, there is no “image right” as such, i.e. individuals do not have a proprietary right that would enable them to prevent unauthorised use of their name, likeness or other personal characteristics. Instead, the law provides a patchwork of rights that can be asserted in some circumstances. The main potential intellectual property options for individuals targeted by deepfakes are claims for passing off and/or copyright infringement.
Passing off
The tort of passing off is based on the premise that a person should not misrepresent that their goods or services originate from, or are connected with, another. This type of action is most likely to be available to high-profile figures targeted by deepfakes where they can argue that the deepfake creator has falsely represented that they have endorsed a particular business, product or service. As with all passing off cases, the individual will need to show the three elements of the tort – goodwill, misrepresentation and damage.
The bottom line is that fame, celebrity status and reputation aren’t enough on their own to establish the requisite goodwill. For this purpose, such personalities must show (amongst other things) that they actively exploit their image in the UK, for example by engaging in licensing activity such as brand endorsements and partnerships.
It would then be necessary for the individual to show that there has been a misrepresentation caused by the deepfake. One of the big question marks may be whether the deepfake is convincing enough to deceive members of the public into believing that the individual was endorsing a particular product. This may be hard to prove and will depend on how the individual is portrayed in the deepfake, along with the impression left on the viewer.
The misrepresentation must also cause damage. In some cases there may be provable financial loss (such as diversion of sales or loss of endorsement revenue), but less tangible forms of damage are more likely. For example, the deepfake may harm the individual’s reputation because their image or voice is being used to promote a brand, goods and/or services that are not high quality or legitimate, or that the celebrity would not choose to endorse, for example due to a conflict with the celebrity’s existing partnerships or inconsistency with their brand values (see Fenty and others v Arcadia Group Brands Ltd (trading as Topshop) and another [2015] EWCA Civ 3).
Copyright infringement
Copyright usually subsists automatically in certain types of original work that may feature the relevant individual (for example, photographs, footage and sound recordings). Copyright may be infringed if a substantial part of the relevant work (and/or the author’s “own intellectual creation” within it) is used to train a particular AI model or in the deepfake itself. There are, however, various practical and legal complications:
- The claimant must be the owner of the relevant copyright work, which is rarely the case for the subject of a photograph or film (for example, copyright in a photograph will usually, but not always, be owned by the photographer rather than the subject).
- A deepfake may be composed of many different images, videos and other source materials. Leaving aside the practical challenges of accessing and interrogating AI models and datasets, this is likely to make it difficult to show that a substantial part of the copyright work complained of was actually used in the deepfake or in the training of the model.
- Fair dealing with a copyright work is permitted for the purpose of caricature, parody or pastiche without the permission of the copyright owner. These are terms of art, but undoubtedly some deepfakes will involve the requisite humour, imitation and/or satire.
Concluding comments
The above analysis demonstrates that intellectual property rights do not provide instant protection or an easy solution to deepfakes. In our next article, we focus on how the data protection regime can potentially be used to combat deepfakes.