The biggest change is speed and quality. Criminals can produce convincing messages, fake voices and believable narratives faster than ever, targeting more people with less effort. That makes process and verification controls even more important.

This article covers what is genuinely changing, what is not, and a practical set of controls you can implement this quarter.

What's Actually Changed

1. Phishing has become cheaper and more convincing

Phishing used to be easy to spot because of poor grammar, generic content and clumsy formatting. AI has reduced those tells. Attackers can produce a tailored email in seconds, in your tone, referencing your suppliers, clients or projects.

The NCSC continues to treat phishing as a major driver of compromise and the guidance remains consistent: reduce exposure, report suspicious messages and design processes so a single email cannot trigger a high-impact outcome.

2. Impersonation has moved beyond email

Voice cloning and synthetic media have moved from novelty to practical tools for fraud. In practice, this shows up as urgent calls, voicemails or Teams messages pushing someone to "just get the payment done" or "share the file quickly".

The NCSC has highlighted how generative AI makes it easier to create or modify text, images, voice and video and how that affects integrity and trust.

3. Business payment fraud is being industrialised

Payment diversion is not new: it is the scam where someone changes a supplier's bank details, diverts a payment and vanishes. What AI changes is the social engineering, which is now faster and more personalised.

The NCSC covers business payment fraud, also known as Business Email Compromise (BEC), with practical advice and examples. UK fraud reporting guidance also addresses mandate fraud and invoice scams, which often present as "change of bank details" requests.

4. Your AI assistant is now part of your attack surface

This is the newest piece that many firms have not internalised yet.

A real example: Varonis Threat Labs described an attack on Microsoft Copilot (dubbed "Reprompt") where a single click on a crafted link could trigger prompt injection behaviour and lead to data exposure. Microsoft patched the issue in January 2026.

The takeaway is not "don't use Copilot". The takeaway is: treat AI tooling like any other business system. Control data access, control what can be shared, monitor usage and assume attackers will try to manipulate it.

What Hasn't Changed

The fundamentals still win

  • Most incidents still start with identity compromise, not Hollywood hacking
  • Most fraud still works because a process allows money to move on the back of a message
  • Most ransomware impact is defined by backup quality and recovery readiness, not the initial infection

AI increases volume and believability, but it does not remove the attacker's need to bypass your controls. If your controls are weak, AI will help criminals find and exploit that weakness faster.

The NCSC has also warned that misunderstanding AI risks can be dangerous, especially where organisations treat AI systems as "magic" rather than software with failure modes and security requirements.

The Practical Part: 12 Controls That Still Stop Most Incidents

These are deliberately written in plain English. You do not need a large security team to implement them, but you do need consistency.

Money Movement Controls

Stops payment diversion and impersonation fraud

1. No bank detail changes by email alone

Any change request must be verified out of band.

2. Call-back on a known number

Use a number already on file (CRM, contract, supplier master data), not a number provided in the email thread.

3. Two-person approval above a threshold

Set an amount that fits your firm. Make it policy, not optional.

4. New supplier bank details are held for 24 hours before first payment

This kills "urgent Friday 4pm" pressure tactics.
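
Where your finance system supports custom checks, the hold can be enforced in code rather than relying on memory. A minimal sketch in Python, assuming you record a timestamp when details change (the function and field names are illustrative):

```python
# Sketch: enforce a 24-hour hold before paying against new bank details.
# The timestamp is illustrative; use whatever your finance system records.
from datetime import datetime, timedelta, timezone

HOLD_PERIOD = timedelta(hours=24)

def payment_allowed(details_changed_at: datetime) -> bool:
    """Allow payment only once the new details have aged past the hold."""
    return datetime.now(timezone.utc) - details_changed_at >= HOLD_PERIOD
```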

5. Log every bank detail change attempt

Who requested it, who verified it, what number was used and when it was changed.
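
The log itself does not need to be sophisticated. A minimal sketch of an append-only record in Python (the field names are illustrative, not a prescribed schema):

```python
# Sketch: append-only log of bank detail changes.
# Field names are illustrative; adapt to your finance process.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class BankDetailChange:
    supplier: str
    requested_by: str       # who asked for the change, and how
    verified_by: str        # who performed the call-back
    callback_number: str    # the number on file that was used
    changed_at: str         # ISO 8601 timestamp

def log_change(entry: BankDetailChange, path: str = "bank_changes.log") -> None:
    with open(path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

log_change(BankDetailChange(
    supplier="Acme Supplies Ltd",
    requested_by="invoice email received Friday",
    verified_by="J. Smith (Finance)",
    callback_number="+44 20 7946 0000",
    changed_at=datetime.now(timezone.utc).isoformat(),
))
```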

Identity and Access Controls

Reduces successful compromise

6. MFA everywhere, especially email and admin accounts

Email compromise is still the gateway to most fraud narratives.
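
For Microsoft 365 tenants, you can measure MFA coverage rather than assume it. A sketch using Microsoft Graph's registration-details report; it assumes you already hold a suitably permissioned access token, and `requests` is a third-party library:

```python
# Sketch: list Microsoft 365 users not yet registered for MFA,
# via the Graph authentication methods registration report.
import requests

ACCESS_TOKEN = "<paste a Microsoft Graph access token here>"
url = ("https://graph.microsoft.com/v1.0/reports/"
       "authenticationMethods/userRegistrationDetails")

while url:
    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    data = resp.json()
    for user in data.get("value", []):
        if not user.get("isMfaRegistered"):
            print(user.get("userPrincipalName"))
    url = data.get("@odata.nextLink")  # follow pagination until exhausted
```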

7. Conditional access enforced, legacy authentication disabled

Legacy protocols cannot enforce MFA, so disabling them closes the easiest paths into accounts.
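
On Microsoft 365, one common route is a conditional access policy that blocks legacy protocols. A sketch of the Microsoft Graph call follows; the payload shape matches Graph's published conditionalAccessPolicy resource, but token handling and rollout are out of scope, so treat it as a starting point and test in report-only mode first:

```python
# Sketch: create a conditional access policy blocking legacy authentication.
# Assumes ACCESS_TOKEN has Policy.ReadWrite.ConditionalAccess;
# "requests" is a third-party library.
import requests

ACCESS_TOKEN = "<paste a Microsoft Graph access token here>"

policy = {
    "displayName": "Block legacy authentication",
    "state": "enabledForReportingButNotEnforced",  # report-only to start
    "conditions": {
        "clientAppTypes": ["exchangeActiveSync", "other"],  # legacy protocols
        "applications": {"includeApplications": ["All"]},
        "users": {"includeUsers": ["All"]},  # consider a break-glass exclusion
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```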

8. Least privilege for everyone, including AI tools and connectors

If an assistant can see everything, it can leak everything. The Copilot example is a reminder to scope permissions tightly.

Resilience Controls

Reduces downtime and ransom pressure

9. Backups you can restore, tested quarterly

A backup that cannot be restored is not a backup. Run a restore test and record the result.
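
One way to make the test concrete is to check that a restored file is byte-identical to the original. A minimal sketch (the paths are illustrative):

```python
# Sketch: prove a restored file matches the original by comparing hashes.
import hashlib

def sha256(path: str) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

original = sha256("invoices/2025-04.pdf")      # live copy
restored = sha256("restore-test/2025-04.pdf")  # copy pulled from backup
print("Restore OK" if original == restored else "MISMATCH - investigate")
```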

10. A one-page incident "who does what" sheet

Name the incident lead and deputy, plus who calls the insurer, who calls the bank and who communicates with clients.

Messaging Controls

Reduces spoofing and improves detection

11. DMARC, SPF and DKIM on your domains

This reduces domain spoofing and improves email trust signals. It is not a silver bullet but it is table stakes.
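
You can verify the records are actually published rather than assume it. A sketch using the third-party dnspython package; DKIM is omitted because it is checked per selector, which varies by email provider:

```python
# Sketch: check that SPF and DMARC records exist for a domain.
# Requires the third-party "dnspython" package (pip install dnspython).
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"  # replace with your own domain
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
print("SPF:", spf or "MISSING")
print("DMARC:", dmarc or "MISSING")
```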

12. A simple reporting loop for suspicious messages

Make it easy for staff to report quickly and without fear. The NCSC's Suspicious Email Reporting Service (forward suspicious emails to report@phishing.gov.uk) is a strong model for reporting and disruption.

A Simple Test You Can Run Next Week

If you want a fast way to validate your exposure, run these three mini-tests:

Supplier Bank Change Drill (15 minutes)

Ask Finance: "If we received a bank change request right now, what exactly happens?"

If the answer is "we would reply to the email", you have a priority gap.

AI Tool Access Check (15 minutes)

Pick one business user of Copilot or ChatGPT. Confirm what data the tool can access, where prompts are logged and what your policy says staff may paste into it.

Restore Test (30-60 minutes)

Restore one file, one mailbox item or one small system to prove you can.

What to Do About AI Specifically (Without Banning It)

A sensible approach is guardrails, not fear:

  • Define approved AI tools for business use
  • Require business accounts, not personal accounts
  • Ban copying client personal data into consumer AI tools
  • Tighten permissions for AI assistants and integrations
  • Treat prompt injection as a real risk, like phishing
  • Make AI usage auditable

The UK government has also discussed how generative AI increases capability for less sophisticated threat actors, especially in scams and fraud, which aligns with the practical view above.

Where AssurePath Fits

If you want help applying this in a practical way, this is what we normally do with clients:

  • A short workshop to implement money movement controls and verification scripts
  • Microsoft 365 hardening focused on identity and email compromise
  • Backup and restore testing with documented recovery steps
  • A lightweight tabletop exercise to validate decisions, comms and responsibilities

The goal is not perfect security. The goal is fewer successful incidents, less downtime and fewer expensive mistakes under pressure.

Free IT Health Check

Get a quick assessment of your current security posture and identify gaps before attackers do.
