The biggest change is speed and quality. Criminals can produce convincing messages, fake voices and believable narratives faster than ever, targeting more people with less effort. That makes process and verification controls even more important.
This article covers what is genuinely changing, what is not, and a practical set of controls you can implement this quarter.
What's Actually Changed
Phishing has become cheaper and more convincing
Phishing used to be easy to spot because of poor grammar, generic content and clumsy formatting. AI has reduced those tells. Attackers can produce a tailored email in seconds, in your tone, referencing your suppliers, clients or projects.
The NCSC continues to treat phishing as a major driver of compromise and the guidance remains consistent: reduce exposure, report suspicious messages and design processes so a single email cannot trigger a high impact outcome.
Impersonation has moved beyond email
Voice cloning and synthetic media have moved from novelty to usable for fraud. In practice, this shows up as urgent calls, voicemails or Teams messages pushing someone to "just get the payment done" or "share the file quickly".
The NCSC has highlighted how generative AI makes it easier to create or modify text, images, voice and video and how that affects integrity and trust.
Business payment fraud is being industrialised
Payment diversion is the scam where someone changes a supplier's bank details, diverts a payment and vanishes. It is not new, but AI makes the social engineering side faster and more personalised.
The NCSC covers business payment fraud, also known as Business Email Compromise (BEC), with practical advice and examples. UK fraud reporting guidance also addresses mandate fraud and invoice scams, which often present as "change of bank details" requests.
Your AI assistant is now part of your attack surface
This is the newest piece that many firms have not internalised yet.
A real example: Varonis Threat Labs described an attack on Microsoft Copilot (dubbed "Reprompt") where a single click on a crafted link could trigger prompt injection behaviour and lead to data exposure. Microsoft patched the issue in January 2026.
The takeaway is not "don't use Copilot". The takeaway is: treat AI tooling like any other business system. Control data access, control what can be shared, monitor usage and assume attackers will try to manipulate it.
What Hasn't Changed
The fundamentals still win
- Most incidents still start with identity compromise, not Hollywood hacking
- Most fraud still works because a process allows money to move on the back of a message
- Most ransomware impact is defined by backup quality and recovery readiness, not the initial infection
AI increases volume and believability, but it does not remove the attacker's need to bypass your controls. If those controls are weak, AI helps criminals find and exploit the weakness faster.
The NCSC has also warned that misunderstanding AI risks can be dangerous, especially where organisations treat AI systems as "magic" rather than software with failure modes and security requirements.
The Practical Part: 12 Controls That Still Stop Most Incidents
These are deliberately written in plain English. You do not need a large security team to implement them but you do need consistency.
Money Movement Controls
Stops payment diversion and impersonation fraud
No bank detail changes by email alone
Any change request must be verified out of band.
Call-back on a known number
Use a number already on file (CRM, contract, supplier master data), not a number provided in the email thread.
Two-person approval above a threshold
Set an amount that fits your firm. Make it policy, not optional.
New supplier bank details are held for 24 hours before first payment
This kills "urgent Friday 4pm" pressure tactics.
Log every bank detail change attempt
Who requested it, who verified it, what number was used and when it was changed.
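Taken together, the five controls above amount to a simple state machine: requested, verified out of band, held, then payable. A minimal sketch in Python makes the sequencing explicit (the class, threshold and field names are all hypothetical, not a real system):

```python
from datetime import datetime, timedelta

HOLD_PERIOD = timedelta(hours=24)   # hold new details before first payment
APPROVAL_THRESHOLD = 10_000         # example two-person approval threshold

class BankDetailChange:
    """Tracks one requested change to a supplier's bank details."""

    def __init__(self, supplier, new_details, requested_by):
        self.supplier = supplier
        self.new_details = new_details
        self.verified = False
        self.effective_from = None
        # Log every attempt: who, what, when.
        self.audit_log = [("requested", requested_by, datetime.now())]

    def verify_by_callback(self, verifier, number_on_file):
        # Out-of-band check: call a number already held in supplier
        # master data, never one supplied in the email thread.
        self.verified = True
        self.effective_from = datetime.now() + HOLD_PERIOD
        self.audit_log.append(
            ("verified", f"{verifier} via {number_on_file}", datetime.now())
        )

    def payable(self, amount, approvers):
        # Pay only after verification, after the hold period has passed,
        # and with two distinct approvers above the threshold.
        if not self.verified or datetime.now() < self.effective_from:
            return False
        if amount > APPROVAL_THRESHOLD and len(set(approvers)) < 2:
            return False
        return True
```

The point of the sketch is that "urgent Friday 4pm" pressure simply cannot override the hold period or the approver count; the process says no on the attacker's behalf.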
Identity and Access Controls
Reduces successful compromise
MFA everywhere, especially email and admin accounts
Email compromise is still the gateway to most fraud narratives.
Conditional access enabled, legacy authentication disabled
Legacy protocols bypass MFA and give attackers an easy path into accounts; block them.
Least privilege for everyone, including AI tools and connectors
If an assistant can see everything, it can leak everything. The Copilot example is a reminder to scope permissions tightly.
Resilience Controls
Reduces downtime and ransom pressure
Backups you can restore, tested quarterly
A backup that cannot be restored is not a backup. Run a restore test and record the result.
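A restore test is more convincing when the result is objective rather than "it looked fine". One simple check is to compare a checksum of the original against the restored copy. A sketch in Python (the function names are illustrative):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large backups never load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_matches(original_path: str, restored_path: str) -> bool:
    """A restore test passes only if the restored file is byte-identical."""
    return sha256_of(original_path) == sha256_of(restored_path)
```

Record the pass/fail result and the date alongside the backup log; that record is what you will want during an incident and when renewing cyber insurance.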
A one-page incident "who does what" sheet
Name the incident lead and deputy, who calls the insurer, who calls the bank, and who communicates with clients.
Messaging Controls
Reduces spoofing and improves detection
DMARC, SPF and DKIM on your domains
This reduces domain spoofing and improves email trust signals. It is not a silver bullet but it is table stakes.
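As a rough illustration, all three are published as DNS TXT records. The domain, selector and key below are placeholders, not values to copy:

```text
; SPF: which servers may send mail for the domain
example.com.             TXT  "v=spf1 include:spf.protection.outlook.com -all"

; DKIM: public key receivers use to verify message signatures (selector is illustrative)
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC: what receivers should do when SPF/DKIM checks fail, and where to send reports
_dmarc.example.com.      TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Many organisations start with `p=none` to gather reports, then move to `quarantine` or `reject` once legitimate senders are accounted for.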
A simple reporting loop for suspicious messages
Make it easy for staff to report quickly and without fear. The NCSC's Suspicious Email Reporting Service is a strong model for reporting and disruption.
A Simple Test You Can Run Next Week
If you want a fast way to validate your exposure, run these three mini-tests:
Supplier Bank Change Drill
15 minutes. Ask Finance: "If we received a bank change request right now, what exactly happens?"
AI Tool Access Check
15 minutes. Pick one Copilot or ChatGPT business user. Confirm what data it can access, where prompts are logged and what your policy says staff may paste into it.
Restore Test
30-60 minutes. Restore one file, one mailbox item or one small system to prove you can.
What to Do About AI Specifically (Without Banning It)
A sensible approach is guardrails, not fear: control what the tools can access, set clear usage policies and monitor how they are used.
The UK government has also discussed how generative AI increases capability for less sophisticated threat actors, especially in scams and fraud, which aligns with the practical view above.
Where AssurePath Fits
If you want help applying this in a practical way, this is what we normally do with clients:
- A short workshop to implement money movement controls and verification scripts
- Microsoft 365 hardening focused on identity and email compromise
- Backup and restore testing with documented recovery steps
- A lightweight tabletop exercise to validate decisions, comms and responsibilities
The goal is not perfect security. The goal is fewer successful incidents, less downtime and fewer expensive mistakes under pressure.
Sources and Further Reading
- NCSC: Business payment fraud (BEC)
- NCSC: What is business email compromise? (infographic PDF)
- NCSC blog: Preserving integrity in the age of generative AI
- NCSC: Report a scam email (SERS)
- NCSC: Configuring Microsoft 365 Outlook 'Report Phishing' add-in for SERS
- GOV.UK: Avoid and report internet scams and phishing
- Report Fraud (Action Fraud): Mandate fraud
- Take Five (UK Finance): Invoice and mandate scams
- Met Police: Mandate and cheque fraud guidance
- Varonis Threat Labs: Reprompt (single click Copilot attack)
- TechRadar: Microsoft Copilot AI attack took just a single click
- Tom's Guide: Copilot vulnerability only requires a single click
- Windows Central: Copilot "Reprompt" exploit detailed
- Malwarebytes: "Reprompt" attack lets attackers steal data from Microsoft Copilot
- NCSC report: The near-term impact of AI on the cyber threat (Jan 2024)
- NCSC report: Impact of AI on cyber threat from now to 2027 (May 2025)
- GOV.UK (DSIT): Safety and security risks of generative AI to 2025
- NCSC Annual Review 2024 (PDF)