
Harmonising AI regulation with data protection legislation – what can the UK learn from the EU?

This article explores the relationship between the EU’s data protection and AI regulations through a comparison of the principles underpinning both regimes, and explains the potential implications of this relationship for UK businesses.

The UK approach - innovation first, legislation later

Artificial intelligence (AI) technologies are rapidly evolving and becoming central to businesses around the world. Their adoption has (for the most part) outpaced lawmakers. So far, the UK’s regulatory philosophy has been to innovate first and introduce comprehensive legislation later. Recent governments have opted for a decentralised approach, underscoring a strategic decision to let regulators take the lead with the aim of encouraging technological innovation.

The 2024 King’s Speech did not include an AI Bill as many had expected; however, the King said that the government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.

Comparing the UK’s approach to the EU

In stark contrast to the UK government’s relatively hands-off approach is the European Union’s Artificial Intelligence Act (EU AI Act), a legislative framework that presents a more structured and tiered model of risk-based regulation.

The EU AI Act classifies AI applications into categories based on the risk they pose to users and society, ranging from unacceptable risk to minimal risk, with each category facing corresponding regulatory requirements. This methodology resonates with a broader societal demand for transparent and accountable AI systems, capable of being scrutinised and governed effectively. 

For UK businesses operating in, or targeting markets within, the EU, this means grappling with a dual regulatory reality. On the one hand, UK businesses must continue to engage with the UK’s piecemeal regulatory framework; on the other, they must prepare to adhere to the more prescriptive and potentially restrictive requirements of the EU AI Act. This cross-jurisdictional challenge requires a dynamic and informed compliance strategy.

Despite the current divergence in regulatory philosophy, there is a consensus between the UK’s approach and the EU AI Act on several fronts, particularly the recognition of the transformative potential of AI and the need to foster public trust by safeguarding fundamental rights including data protection. 

Notably, the UK Government has not dismissed the possibility of enacting a more comprehensive framework in the future. Given the economic benefits of transnational legislative harmonisation in this area, it is very possible that any future UK AI legislation will mirror or closely mimic EU legislation. Therefore, even for UK businesses whose AI systems may not affect EU citizens, it is worth understanding how the EU AI Act interacts with current EU data protection legislation (EU GDPR), especially given that EU GDPR is retained in UK law as UK GDPR. Given the similarities between EU GDPR and UK GDPR we use the collective term GDPR below.

The EU AI Act’s complementary role alongside GDPR

The interplay between the EU AI Act and GDPR required careful consideration by legislators so that the two regimes could co-exist. On one hand, data protection requires that personal information is processed only within predetermined boundaries, promoting specificity and restraint. On the other, AI thrives on the exploration of vast and varied datasets. The EU, through the EU AI Act, has sought to deal with this tension head on.

Where GDPR provides the basis for data protection, setting the benchmark for personal data usage, the EU AI Act seeks to navigate the nuanced and unique challenges posed by AI. This includes fleshing out the standards for high-risk AI, which incorporates not only data protection concerns but also broader ethical considerations such as bias, discrimination and the impact on society.

GDPR is founded on principles that include lawfulness, fairness, transparency and the protection of individual rights. The EU AI Act complements these by promoting human oversight, data governance and technical robustness, principles essential to fostering trust in AI systems. When examined through this lens, the EU AI Act can be seen as extending the ambitions of the GDPR into the digital future, ensuring that AI does not operate in an ethical vacuum. 

Exploring the harmony

It's not just the principles and ambitions of GDPR and the EU AI Act that align; there are many areas of similarity, including:

  1. Data protection and privacy: Both regulations prioritise the protection of personal data and individual privacy, and both are cornerstones in the EU's framework for a digital future that is safe, ethical and respects fundamental rights.
  2. Risk-based approach: The EU AI Act proposes a risk-based framework for regulating AI systems, which is similar to the risk-based approach the GDPR takes toward data processing activities.
  3. Transparency: Transparency in data processing is a core principle of GDPR, which is mirrored in the EU AI Act, which requires transparency in how AI systems operate, especially for high-risk AI. This complements individuals’ abilities to advocate for their rights under GDPR.
  4. Accountability and governance: Both laws emphasise the importance of accountability in their respective domains. Companies must have clear governance structures to comply with both GDPR and the EU AI Act.
  5. Extraterritorial scope: Both GDPR and the EU AI Act have provisions that apply to entities outside the EU if they process data of EU individuals or offer goods and services to individuals within the EU.
  6. Rights of individuals and automated decision making: Both regulations tackle the issue of automated decision-making, with the GDPR granting individuals the right to challenge decisions made without human intervention and the EU AI Act demanding robust and transparent systems for scrutinising automated decision pathways.
  7. Impact assessments: GDPR requires Data Protection Impact Assessments for certain high-risk data processing activities. Similarly, the EU AI Act requires mandatory fundamental rights impact assessments for certain high-risk AI systems.
  8. Record-keeping requirements: The EU AI Act’s requirement for detailed records about AI data sets mirrors the GDPR’s emphasis on meticulous documentation, both aiming to enhance oversight and enforceability. 
  9. Security of processing: Both regulations underscore the necessity of securing processing activities to protect data and to ensure the safety and reliability of systems.
  10. Notification of breaches: The GDPR requires data breach notifications to supervisory authorities and, in certain cases, to affected individuals. The EU AI Act allows individuals to make complaints about non-compliance to market surveillance authorities.
  11. Penalties: Penalties under both frameworks underscore the severity with which non-compliance is viewed. While the GDPR imposes fines up to €20 million or 4% of annual global turnover, the EU AI Act sets its sights even higher with penalties reaching up to €35 million or 7% of annual global turnover, reflecting the high stakes of enforcing regulation in AI’s evolving landscape.

While there are many similarities, a pivotal area of divergence is the treatment of special category data, which is sensitive personal information that is given extra protection under the GDPR. The EU AI Act proposes a nuanced approach, adopting a debiasing exception that allows for the use of such data under specific circumstances to correct and prevent discriminatory patterns within AI systems. This marks a deliberate tilt towards the pragmatic application of GDPR principles within the context of AI, addressing the operational realities where AI needs to process sensitive data to ensure fairness and non-discrimination in its outcomes.

What organisations can do now to prepare for the EU AI Act and any additional UK AI regulation 

The EU AI Act entered into force on 1 August 2024 and will (for the most part) apply from 2 August 2026. Businesses looking to operate in EU markets must therefore prepare for compliance with the Act’s impending obligations.

Businesses operating in the UK should look to: 

  1. Continually monitor developments: stay informed about AI regulation.
  2. Implement risk management: look at your AI governance practices and consider implementing internal and external policies on your business’ use of AI.
  3. Internally audit: review existing AI contracts, processes and AI systems.
  4. Conduct a legal review: seek expert legal advice on interpreting the legislative requirements and what they mean for your organisation.
  5. Educate staff: educate employees about the impact of the legislation on their role and give them training on AI use to ensure compliance.

Organisations should adopt a forward-looking approach, implementing AI systems that not only meet existing data protection requirements but are also adaptable to accommodate more stringent regulatory environments. 

Proactive engagement with the dual UK and EU regulatory landscapes will be crucial in maintaining innovation momentum while upholding the trust and privacy of consumers. Adaptable, informed and ethical AI use will be the cornerstone of future success.

Contact

Robert Beveridge

+441612348804

Justin Humphries

+441612348742
