We have spoken to dozens of UK business owners in the last few months about AI. Almost all of them are using it in some form. ChatGPT for drafting emails. Copilot for summarising documents. AI-powered recruitment screening. Automated customer service chatbots.

Almost none of them have heard of the EU AI Act.

That is a problem, because the core obligations become enforceable on 2 August 2026 and the fines make GDPR look modest.

Wait, We Left the EU. This Doesn’t Apply to Us, Right?

Wrong. And this is the single biggest misconception we hear from UK businesses right now.

Common assumption: "We are a UK company. EU regulations do not apply to us post-Brexit."

Reality: The EU AI Act has extraterritorial scope. Under Article 2, if your AI system's output is used within the EU, or if you place AI systems on the EU market, the Act applies to you. Full stop. This is the same principle that brought GDPR to every UK business handling the data of people in the EU.

Common assumption: "We only use off-the-shelf tools. We are not developing AI."

Reality: The Act covers deployers as well as developers. If you use AI tools for hiring decisions, customer profiling, credit assessments, or any process that affects people in the EU, you have obligations under this law.

Common assumption: "This is only for big tech companies. It will not affect SMEs."

Reality: The Act applies based on what you do with AI, not how large your company is. A 20-person recruitment agency using AI screening tools has obligations. A 50-person law firm using AI for contract review has obligations. Size is not a shield.

Source: Farrer & Co – “The EU AI Act: What does it mean for UK organisations?”

Think of it like GDPR, but for AI

When GDPR came in, many UK businesses initially assumed it was "an EU thing". Then the fines started landing: the ICO fined Capita £14 million in October 2025 for data protection failures. The EU AI Act follows the same extraterritorial model. If your AI outputs touch the EU, you are in scope.


What the EU AI Act Actually Is

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI law. It classifies AI systems by risk and applies proportionate obligations. Some provisions, covering prohibited practices and AI literacy, already took effect in February 2025. The big deadline is 2 August 2026, when the core requirements for governance, high-risk AI, and transparency become fully enforceable.

The Act categorises AI systems into four tiers:

  • Prohibited: social scoring, manipulative AI, real-time remote biometric identification in public spaces
  • High risk: HR screening, credit scoring, legal analytics, critical infrastructure
  • Limited risk: chatbots, deepfakes, emotion detection (transparency obligations apply)
  • Minimal risk: spam filters, AI in video games, basic automation

If you use AI for recruitment screening, employee monitoring, credit decisions, legal case analysis, or customer risk profiling, you are almost certainly operating in the high-risk category. That means conformity assessments, mandatory documentation, human oversight processes, and incident reporting.


The Fines Are Serious

Under Article 99 of the EU AI Act, penalties are structured in three tiers, each capped at the higher of a fixed amount or a percentage of global annual turnover:

  • €35M or 7% of global turnover: prohibited AI practices (social scoring, manipulation, unauthorised surveillance)
  • €15M or 3% of global turnover: non-compliance with high-risk obligations, governance, or transparency requirements
  • €7.5M or 1% of global turnover: supplying incorrect or misleading information to authorities

For SMEs, the lower of the fixed amount or percentage applies. But even 1% of turnover is a material sum for most businesses we work with. And that is before you factor in reputational damage, contract losses, and the operational disruption of an enforcement action.

For context, GDPR’s maximum fine is 4% of global turnover. The EU AI Act goes to 7%.
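The SME cap is easy to misread, so here is a back-of-the-envelope sketch of how it works. The function name and the example turnover figure are illustrative, not from the Act:

```python
def max_fine_sme(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Article 99 caps SME fines at the LOWER of the fixed amount or the
    percentage-of-turnover figure (for larger firms it is the higher)."""
    return min(fixed_cap_eur, turnover_eur * pct)

# Hypothetical SME with EUR 5m global turnover facing the
# EUR 15M / 3% tier for high-risk non-compliance:
print(max_fine_sme(5_000_000, 15_000_000, 0.03))  # → 150000.0
```

Even at the SME-friendly lower bound, a €5m-turnover firm is looking at €150,000 in this tier, which is the point made above: 1–3% of turnover is a material sum.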


What This Means for UK Businesses Right Now

You are probably already using AI in ways that carry obligations under this Act. Across the UK SMEs we work with, the same test comes up again and again:

If you serve clients in Europe, employ EU nationals, or your AI tools process data that ends up being used in EU member states, you should assume you are in scope.


The UK’s Own Regulatory Direction

The UK has taken a sector-specific approach to AI regulation so far, relying on existing regulators (ICO, FCA, Ofcom) rather than creating a single AI law. But that does not mean UK businesses are unregulated.

The ICO has already opened investigations into AI systems (including Grok) and has indicated that AI governance is a priority enforcement area through 2026. A statutory code of practice on AI and automated decision-making is expected later this year.

The UK government is also due to publish its AI and copyright reports by 18 March 2026 under the Data (Use and Access) Act 2025. These reports will further shape the regulatory landscape.

Whether through the EU AI Act’s extraterritorial reach or the UK’s own evolving regulatory framework, the direction of travel is clear. Unregulated AI use in business is ending.


Five Steps to Get Your Business Ready

The good news is that preparation does not require a legal department or a six-figure budget. It requires clarity, documentation, and someone who understands both the technology and the regulation. Here is a practical starting point:

1. Audit every AI system in use (1–2 weeks)

List every tool, platform, and process that uses AI. Include the obvious ones (ChatGPT, Copilot) and the less obvious (automated email filtering, CRM lead scoring, CV screening tools). You cannot assess risk if you do not know what you are using.

2. Classify each system against the Act's risk categories (1–2 weeks)

For each AI system, determine which tier it falls into: prohibited, high risk, limited risk, or minimal risk. Pay particular attention to anything that affects employment decisions or financial assessments, or that interacts with EU-based individuals.

3. Appoint a governance lead (immediate)

Someone in your organisation needs to own AI compliance. This does not need to be a full-time role. A fractional CISO or DPO with AI expertise can fulfil this function effectively for most SMEs.

4. Start documentation and impact assessments (2–4 months)

For any high-risk systems, begin producing technical documentation, risk assessments, and human oversight plans. This is the most time-consuming step and can take months. Start now.

5. Establish human oversight processes (2–4 weeks)

For any AI system making or influencing significant decisions, ensure a qualified human can review, override, and be accountable for those decisions. Document who, how, and when.
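Steps 1 and 2 do not need specialist software; a simple register is enough to start. The sketch below is illustrative only: the tool names and the keyword-to-tier mapping are hypothetical examples, not a legal classification under the Act, and anything unrecognised is deliberately routed to human review rather than assumed low-risk.

```python
# Hypothetical keyword-to-tier mapping for a first-pass AI system register.
# A real classification must follow the Act's Annex III categories and
# should be reviewed by whoever owns AI governance (step 3).
RISK_TIERS = {
    "recruitment screening": "high",
    "credit scoring": "high",
    "customer chatbot": "limited",
    "spam filtering": "minimal",
}

def classify(use_case: str) -> str:
    # Default to "review" so unknown systems get human attention
    # instead of silently landing in a low-risk bucket.
    return RISK_TIERS.get(use_case, "review")

# Step 1: list every AI tool in use (examples are invented).
register = [
    {"tool": "CV screening platform", "use_case": "recruitment screening"},
    {"tool": "Support chatbot", "use_case": "customer chatbot"},
    {"tool": "Email filter", "use_case": "spam filtering"},
    {"tool": "Contract-review assistant", "use_case": "legal analytics"},
]

# Step 2: attach a provisional risk tier to each entry.
for system in register:
    system["risk_tier"] = classify(system["use_case"])
```

The design choice worth copying is the default: any system your mapping does not recognise (here, the contract-review assistant) comes back as "review", forcing a human decision.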

If you start now, you have approximately five months. That is tight but realistic for most SMEs.


Final Thought

AI regulation is not coming. It is here. Some provisions are already in force. The core obligations land in August.

The businesses that treat this as a box-ticking exercise will struggle. The ones that build proper governance now will find it becomes a competitive advantage: clients trust you more, insurers treat you more favourably, and you avoid the scramble when enforcement begins in earnest.

The worst time to start preparing is after you receive a compliance notice. The best time was six months ago. The second best time is this week.

Where AssurePath Fits In

We help UK businesses navigate AI compliance without overcomplication or enterprise cost. Our fractional CISO and DPO services are built for exactly this kind of challenge: emerging regulation that requires expertise without a permanent headcount.

  • AI system audit and risk classification
  • Compliance documentation and governance frameworks
  • Fractional CISO and DPO support
  • Ongoing regulatory monitoring and updates

Five months is not long. We can help you use it well.