The dilemma of the deepfake: DeepFake Inc. and data protection
In this article, we consider deepfakes through the lens of the data protection regime, which is central to the regulation of AI and AI-generated content.
Deepfake technology essentially works by analysing relevant content (or “data samples”, usually obtained by “scraping” data available on the internet) and creating new content (such as faces, expressions or speech) resembling the samples. “Data” is therefore at the centre of this technology, bringing with it a wide-ranging and complex data protection regime.
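Purely by way of illustration (and not a description of any particular company’s system), the core mechanics can be sketched in a few lines of Python. The sketch below assumes PyTorch and uses a toy autoencoder; every name, dimension and parameter is hypothetical, and real deepfake systems use far more sophisticated architectures:

```python
# A minimal, purely illustrative sketch of "analyse samples, then generate
# resembling content". Assumes PyTorch; all names and sizes are hypothetical.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compresses a 64x64 RGB image into a 128-dimensional code.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, 128),
        )
        # Decoder: maps a code back to an image resembling the training samples.
        self.decoder = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 3, 64, 64)

model = FaceAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for scraped face images (the "data samples"): in practice this is
# where personal data enters the pipeline, engaging data protection law.
samples = torch.rand(16, 3, 64, 64)

for _ in range(100):  # "analysis": learn to reproduce the samples
    loss = loss_fn(model(samples), samples)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# "Generation": decode a new code into content resembling the samples.
new_face = model.decoder(torch.randn(1, 128)).view(3, 64, 64)
```

The legal analysis in this article attaches to the “samples” step: that is where scraped personal data enters the pipeline.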
An organisation creating the deepfake (let’s call it DeepFake Inc.) will need to analyse (and therefore “process”) data samples – some or all of which could include personal data (that is, data relating to an identifiable, living individual). For that data processing to be lawful, it must not breach any other laws (such as contract and intellectual property law, as discussed in the first article in our series) and must fall within one of the valid lawful bases for data processing set out in Article 6 of the UK GDPR.
The most relevant lawful basis for the purposes of AI-generated content, as indicated by the Information Commissioner’s Office (or ICO, the UK regulator for data protection matters), is the “legitimate interests” basis. Whether it applies can be ascertained by asking the following three questions:
1. Is DeepFake Inc. pursuing a “legitimate interest” in processing the data samples?
The answer to this question will depend on the specific AI content being generated and its intended purposes. What is the data processing trying to achieve? What are its benefits to the public (and how important are they)? Could the use of the data be unethical or unlawful? We've seen a wide range of uses and motivations behind deepfakes, ranging from commercial gain, entertainment and advertising to political ends (sometimes including voter confusion and fake news), fraud and more “obscure” motives. Depending on the use of the content, other regulatory regimes can also come into play, such as electoral law, advertising and competition law, commercial law, criminal law and anti-fraud rules.
2. Is processing the data in this way a reasonable means of achieving DeepFake Inc.’s purpose? Is there a less intrusive way of achieving the same result?
Clearly, the answer here will depend upon precisely what DeepFake Inc. is trying to achieve. It's notable that one of the intentions underpinning the UK GDPR (and the EU GDPR) was to rebalance the way data is handled by companies – to ensure that the rights, freedoms and views of individuals are not unnecessarily overridden or ignored. If a purpose can be achieved in a less intrusive manner, the ICO takes the view that the less intrusive approach is the correct option.
3. Do the interests of the individuals concerned override DeepFake Inc.’s legitimate interest?
Answering this question will involve balancing the interests of the individuals whose data is being analysed or generated against the interests of DeepFake Inc. discussed above. Relevant factors in this balancing exercise include the sensitivity of the data, whether the individual(s) would expect the data to be used in this way, whether the processing will have an impact on the individual(s) (for example, reputational harm, distress, or potential illegal use where the data generated about the individual is incorrect), how significant any impact would be, and whether the individuals are vulnerable in any way. The Human Rights Act 1998 is also relevant here, as it's necessary to balance the right to respect for private and family life against the right to freedom of expression. Often, the processing of data behind AI-generated content will be “invisible” (the individuals concerned are not aware that their data is being processed in this way), which the ICO considers a high-risk type of processing, as individuals may have lost control over their data.
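Taken together, the three questions operate as a conjunctive test: the legitimate interests basis is only available if the first two are answered “yes” and the third “no”. Purely as an illustration of that structure (this is not ICO guidance, and the real assessment is an evaluative legal judgment rather than a boolean checklist), the test can be sketched in Python, with all names hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-question legitimate interests test.
@dataclass
class LegitimateInterestsAssessment:
    pursues_legitimate_interest: bool        # Q1: the purpose test
    reasonable_and_least_intrusive: bool     # Q2: the necessity test
    individuals_interests_override: bool     # Q3: the balancing test

    def basis_available(self) -> bool:
        # All three limbs must resolve in DeepFake Inc.'s favour.
        return (self.pursues_legitimate_interest
                and self.reasonable_and_least_intrusive
                and not self.individuals_interests_override)

# Example: invisible processing that individuals would not expect is likely
# to fail the balancing test, so the basis is not available.
assessment = LegitimateInterestsAssessment(True, True, True)
assert assessment.basis_available() is False
```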
Some of the data used may fall within the category of “special category data”, to which stricter processing requirements apply. This includes “biometric data for the purpose of uniquely identifying a natural person”. It remains unclear whether the analysis of a sample of images of a person would satisfy the definition of special category data in Article 9(1) of the UK GDPR. Article 4(14) defines biometric data as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data”. Recital 51 states that “the processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person”.
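To make the distinction in Recital 51 concrete, the following purely hypothetical sketch (all function names, values and thresholds are invented for illustration, and real systems use trained neural networks) contrasts holding a photograph with deriving an identifying template from it through “specific technical processing”:

```python
import numpy as np

def extract_face_template(image_pixels: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for "specific technical processing": reducing a
    photograph to a numeric template usable for unique identification."""
    # Toy placeholder: a fixed random projection of the flattened image.
    rng = np.random.default_rng(seed=0)  # fixed seed so the "model" is stable
    projection = rng.standard_normal((128, image_pixels.size))
    return projection @ image_pixels.ravel()

# A raw photograph (here, random pixels) is not, of itself, special category data...
photo_a = np.random.rand(64, 64, 3)
photo_b = np.random.rand(64, 64, 3)

# ...but templates extracted for unique identification are biometric data.
template_a = extract_face_template(photo_a)
template_b = extract_face_template(photo_b)

def same_person(t1: np.ndarray, t2: np.ndarray, threshold: float = 0.9) -> bool:
    # Cosine similarity between templates is used to decide identity.
    cosine = t1 @ t2 / (np.linalg.norm(t1) * np.linalg.norm(t2))
    return cosine > threshold

print(same_person(template_a, template_b))
```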
If the data processed by DeepFake Inc. satisfies the definition of special category data, then in addition to identifying a valid lawful basis for processing (as above), DeepFake Inc. would need to satisfy one of the conditions in Article 9, such as explicit consent (which seems unlikely to apply in this context) or that the data was “manifestly made public by the data subject”. Guidance from the courts and the ICO suggests that for data to be “manifestly made public”, there must be a deliberate act by the individual; it will not generally be sufficient to rely on the simple fact that the data was available on the internet. Guidance from the European Data Protection Board (the body of EU data protection regulators) makes it clear that the mere fact that a photograph has been made public does not mean that the biometric data retrieved technologically from that photograph is itself “made public”.
Our specialist data protection team can provide comprehensive advice on this rapidly evolving area of law.