
FutureProof: AI and the regulatory landscape - what does this mean for professionals?

In the latest in our FutureProof series, we explore the regulatory landscape regarding AI and consider what this means for professionals who use it.  

The regulation of AI, in the UK and globally, is at an early stage and subject to change as we all come to understand AI’s capabilities and the risks of using it. Different jurisdictions have adopted varying approaches, with a view to balancing the risks and benefits while allowing for innovation and investment in this area.

AI Regulation in the UK

In the UK, regulation of AI remains an ongoing source of debate and evolution.  

The previous government’s approach to AI regulation

In March 2023, the previous Conservative government published a White Paper which examined the risks and benefits of AI to society and the existing regulatory landscape potentially applicable to AI, and set out a roadmap for how it proposed to develop AI-specific regulation.

Existing regulation and laws may partially address the development and use of AI. For instance:

  • Where AI produces discriminatory outcomes, this may be in breach of the Equality Act 2010
  • Data protection laws and regulations will govern the use of personal data
  • Consumer rights law may protect consumers using AI products, and
  • Inappropriate use of intellectual property may be in breach of intellectual property laws

However, none of these laws or regulations were specifically drafted with AI in mind.   

In November 2023, building on its White Paper, the government held the world’s first AI Safety Summit.

That led to further consultation, and in February 2024 the previous government published a response paper outlining its proposed approach to AI regulation. Essentially, the government at the time adopted a “pro-innovation approach” to AI regulation, which boiled down to three “core pillars”:

  • Leveraging existing regulatory authorities and frameworks
  • Establishing a “central function” to facilitate risk monitoring and regulatory coordination, and
  • Piloting a multi-agency advisory service called the “AI and Digital Hub”

That approach did not involve the introduction of any specific AI legislation or regulations. However, in February 2024, the previous government asked various “key sectoral and cross-economy” regulators to develop and publish their own strategic approaches to the use of AI; the resulting strategies produced by the Legal Services Board, the FCA and the ICO illustrate this approach. The previous government also requested that further details and guidance be produced by each “key” regulator by April 2025.

Differing views on AI Regulation

The debate on effective regulation continues in the UK. Some organisations, such as the Ada Lovelace Institute, support more proactive regulation to ensure that AI is trustworthy and that developers are accountable when things go wrong (see report). Other organisations, such as the CBI, have suggested that the current approach promotes innovation and investment.

The new Labour government’s intentions

For now, there is no AI-specific regulation in the UK. That said, in the King’s Speech in July 2024, the new Labour government announced its intention to introduce new rules to govern AI, alongside cybersecurity and digital information bills. This follows a statement in Labour’s manifesto that “Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”.

This suggests that the new government may take a different approach to the regulation of AI, with the intention of introducing AI-specific legislation. It is not yet clear what this will look like, although the indications are that it will focus on “the handful of companies developing the most powerful AI models”. That narrower focus suggests the UK is unlikely to end up with rules such as those in the EU (see further below). Exactly when any draft Bill will be published is not yet known. The government has so far produced a short AI Opportunities “Action Plan: terms of reference”, briefly setting out its roadmap towards an AI Bill.

Separately, the new Labour government has also committed (in the King’s Speech) to a Digital Information and Smart Data Bill. Its purpose is to “harness the power of data for economic growth, to support a modern digital government, and to improve people’s lives”. Further information is available in the Government’s background briefing notes for the King’s Speech. It is likely that parts of these new laws, once introduced, will apply to AI. For instance, the bill envisages a bigger role for the ICO, and one of the issues it addresses relates to automated decision-making.

AI Regulation in Europe

In contrast, the EU has adopted specific, proactive AI regulation, which has been described as the first comprehensive AI legal framework globally. The EU AI Act is drafted to have extra-territorial reach, meaning it will capture those who deploy and distribute AI systems as well as manufacturers and providers. For example, if a provider or deployer of an AI system is in a country outside the EU, the AI Act will still apply if the output produced by the AI system is used in the EU (subject to various exceptions). Accordingly, UK professionals operating in the EU may well be affected by the AI Act.

Broadly speaking, the EU AI Act adopts a risk-based approach which includes the following:

  • Fines of up to EUR 35 million or 7% of worldwide annual turnover for breaches
  • A series of prohibited AI activities
  • A series of high-risk AI activities, with specific requirements for use, and
  • An approach which is not industry or sector-specific, in contrast to the UK’s current sectoral approach

The risk-based approach means that awareness of what the EU AI Act deems ‘high risk’ will be vital, as the categorisation may affect part of a business’s operations. High-risk activities include AI that controls access to private and public services, such as AI systems “used to evaluate the credit score or creditworthiness of natural persons”. Another key area will be the use of AI to recruit, monitor or evaluate people in employment.

Where the use or deployment of AI is categorised as high risk, the majority of the Act’s operative provisions will come into effect on 2 August 2026. This relatively short compliance window should be noted by those who do business in the EU.

Prohibited AI, typically AI that uses subliminal techniques to manipulate or deceive, will be banned in the EU from 2 February 2025. Crucially, prohibited AI includes systems seen to exploit specific persons due to their age, disability or social or economic situation. A number of service providers, including in financial services, use algorithms which may be biased against those with protected characteristics, such as race. Companies would be well advised to undertake an AI audit if there is a real risk that some of their AI use could be categorised as prohibited.

The penalties under the EU AI Act will be a strong incentive to comply. Even large technology companies could be stung by fines of up to 7% of their turnover. However, the penalty provisions in Article 99 are also expressly intended to cover SMEs, for whom a lower level of fine may apply under the Act. Depending on the type of wrongdoing, fines are set on a sliding scale of 7%, 3% or 1% of worldwide annual turnover, or EUR 35 million, EUR 15 million or EUR 7.5 million respectively.
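To illustrate how that sliding scale operates in practice, the short sketch below models the caps described above. It is a simplification for illustration only, not legal advice: it assumes that for most undertakings the applicable cap is the higher of the fixed amount and the turnover percentage, with the lower of the two applying to SMEs, and the tier labels are our own shorthand rather than terms used in the Act.

```python
# Illustrative sketch only: the EU AI Act fine tiers as summarised above.
# Assumption: for most undertakings the cap is whichever is HIGHER of the
# fixed amount and the turnover percentage; for SMEs, whichever is LOWER.
# Tier labels are our own shorthand, not terms used in the Act.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # EUR 35m or 7% of worldwide annual turnover
    "other_obligations": (15_000_000, 0.03),      # EUR 15m or 3%
    "incorrect_information": (7_500_000, 0.01),   # EUR 7.5m or 1%
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum possible fine in EUR for a given breach tier."""
    fixed_cap, pct = TIERS[tier]
    pct_cap = worldwide_turnover_eur * pct
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

# A large company with EUR 1bn worldwide turnover breaching a prohibition:
print(max_fine("prohibited_practices", 1_000_000_000))            # 70000000.0
# The same breach by an SME with EUR 20m worldwide turnover:
print(max_fine("prohibited_practices", 20_000_000, is_sme=True))  # 1400000.0
```

On those assumptions, a company with EUR 1 billion of worldwide turnover that breaches a prohibition faces a cap of EUR 70 million, since 7% of its turnover exceeds the EUR 35 million fixed amount.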

Enforcement of the Act will be under the remit of the new AI Office, established in February 2024. The foundational principles of enforcement will be to ensure AI can be trusted and is non-discriminatory. The tenor of the European approach means that a focus on AI governance will pay dividends.

The EU AI Act will be complemented by an EU AI liability directive. Crucially, the Directive seeks to make it easier to pursue claims for harm caused by AI by introducing a rebuttable presumption that the AI system caused the loss. Whilst the Directive was proposed in September 2022, it is unclear when it will be implemented. However, it is illustrative of the EU’s thinking and its desire to ensure that any AI framework can be meaningfully enforced.

AI Regulation elsewhere

US

Much attention is focused on the US as an AI hub, with major tech players such as Alphabet (Google’s parent company), Amazon, Meta, Microsoft and Apple all headquartered there.

At US federal level, the focus has been on ensuring AI is safe to use and that bias and discrimination are appropriately checked. The White House’s Blueprint for an AI Bill of Rights sets out five guiding principles, including protection from discriminatory algorithms and provision for a human alternative to automated decisions; the full briefing is available on the White House website. Although there is no federal legislation focused on AI, in 2023 US federal agencies including the Federal Trade Commission and the Consumer Financial Protection Bureau issued a joint statement that “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.” This is broadly similar to the UK approach, where regulators have expressed the intention to use existing tools to govern AI use.

The position is more variable at state level. In the key state of California, the Governor vetoed a proposed AI bill in September 2024 on the grounds that it could stifle innovation. In contrast, Colorado passed its Artificial Intelligence Act (CAIA), which adopts a risk-based approach to AI and comes into effect in February 2026. The CAIA is closely aligned with the EU AI Act in that it focuses on decisions related to education, employment and financial services.

Elsewhere?

It is worth noting India, a key tech hub, which has adopted a principles-based approach in its Principles for Responsible AI (2021) framework. The principles include a need for safety, non-discrimination, transparency and accountability.

China has also introduced its own set of AI Measures, which came into effect in 2023. These focus on generative AI services provided to the public and apply regardless of where the providers are incorporated.

Until a universal set of principles is agreed, a patchwork of differing regulations and frameworks is likely across jurisdictions.

How does this impact professionals?

The mismatch between regulations in different countries presents a risk to professionals working with clients across jurisdictions when it comes to adopting and/or providing AI solutions. Professionals will need to ensure that they understand the regulatory environment of each jurisdiction in which they work, particularly if that includes the EU, where non-compliance could attract a hefty fine.

And it is not just the mismatch, but the changing regulatory landscape that professionals will need to keep up with. It appears likely that changes are coming in the UK in particular. When making choices about adoption of AI, professionals need to keep in mind how this might be affected by further regulatory changes coming down the line, either in the UK or elsewhere.

This presents as great a risk to insured professionals as it does to their insurers, particularly bearing in mind that fines and penalties are not usually covered by a professional indemnity policy (although the associated costs of investigations may be). In addition, there is the separate reputational risk of being sanctioned by a regulatory authority, which professionals will wish to avoid.

Take-aways

Professionals will need to continue to be aware of the differing and changing regulatory environment when deciding to use or adopt AI. In particular, they should bear in mind the following:

  • Know what your regulators expect and what you are required to do. For instance, for lawyers, the SRA and the Law Society have issued numerous guidance notes, as have other regulators and organisations for other professions and industries.
  • If you are operating in the EU and your use of AI is likely to be part of that operation, familiarise yourself with the new EU AI Act.
  • Be alert when involved in client work where the client is using AI – do you have a duty to advise the client on its regulatory obligations?

This article has been co-authored by Sara Ibrahim, Gatehouse Chambers.

