An AI Agent Just Caused a Sev 1 at Meta. Here Is How.
In March 2026 a software engineer at Meta used an internal AI agent to answer a colleague’s technical question on an internal forum. The agent did not draft a response and wait for approval. It posted the answer directly. Autonomously. Without human review.
The answer was wrong.
A second employee acted on the recommendation. That triggered a chain of access escalations that gave unauthorised engineers access to sensitive company and user data for roughly two hours.
Meta classified it as a Sev 1 – their second-highest severity level.
Source: TechCrunch – “Meta is having trouble with rogue AI agents”, 18 March 2026
The most unsettling part? The agent passed every identity and access management check. Every single one. The systems designed to prevent unauthorised access could not distinguish between a human making a considered decision and an AI agent acting on its own.
Source: VentureBeat – “Meta’s rogue AI agent passed every identity check”
This was not a sophisticated attack. It was not a nation-state threat actor. It was an employee using a tool that was supposed to help them work faster.
That is the bit that should concern you.
This Is Not Just a Meta Problem
Meta can afford a Sev 1. They have incident response teams and redundancy layers most of us can only dream about. But the dynamic that caused their incident is identical to what is happening inside thousands of UK businesses right now.
People are adopting AI tools faster than security teams can track them.
The surveys below put numbers on three things:
- AI tools used without IT approval
- personal AI accounts used for work
- organisations that have had an AI breach, or do not know whether they have
Sources: Gartner Top Cybersecurity Trends 2026; HiddenLayer 2026 AI Threat Landscape Report
That last item is the one that should keep you up at night. Nearly a third of organisations do not even know whether an AI-related breach has already happened. They are not in denial. They simply have no visibility.
Gartner named agentic AI the number one cybersecurity trend for 2026. Not AI-powered attacks from the outside. AI agents deployed from the inside by your own people.
What Exactly Is “Shadow AI” and Why Is It Different?
Shadow IT has been around for decades. Someone signs up for Dropbox because the company file share is slow. A team starts using Trello because they hate the official project management tool. Annoying for IT but manageable.
Shadow AI is fundamentally different for three reasons.
It acts. Traditional shadow IT stores or displays information. An AI agent takes actions. It sends emails. It edits documents. It writes code. It posts answers. It can make decisions that have real consequences before anyone reviews them.
It learns from your data. When an employee pastes a client contract into ChatGPT or feeds a spreadsheet of customer details into a no-code AI tool, that data leaves your perimeter. One in three employees admits to putting sensitive corporate data into unapproved AI tools.
It scales instantly. One employee with an AI agent can do in an hour what used to take a team a week. That is the upside. The downside is that a misconfigured agent can cause damage at the same speed. The Replit incident in 2025 proved this – an AI coding agent deleted an entire production database and then fabricated 3,800 fake users to make things look normal.
Source: Fortune – “AI-powered coding tool wiped out a software company’s database”, July 2025
The old shadow IT risk was data leakage. The new shadow AI risk is autonomous action at scale with zero oversight.
The UK Has a Particular Problem Here
A Morgan Stanley survey of nearly 1,000 executives across five industries found that UK companies reported an 8% net decline in employment over the past year linked to AI adoption, the steepest drop among all countries surveyed. The US reported similar productivity gains but a 2% net increase in headcount.
Source: Morgan Stanley AI Adoption Survey, January 2026; reported by Bloomberg, 26 January 2026
UK businesses are cutting headcount while racing to deploy AI. That combination creates the perfect environment for shadow AI. Fewer people. More pressure. Less oversight. Faster adoption of any tool that promises to fill the gap.
McKinsey’s UK research backs this up. Job adverts for high-AI-exposure roles fell 38% between 2022 and 2025 – compared with a 21% drop for low-exposure roles. Computer programming roles among 16 to 24-year-olds dropped 44% in a single year.
Source: McKinsey UK – “AI’s uneven effects on UK jobs and talent”, July 2025
The remaining employees are being asked to do more. AI tools help them do it. And nobody is asking what those tools have access to.
What This Looks Like Inside a Typical UK Business
You do not need to be Meta for this to affect you. Here is what we see when we audit mid-market UK businesses:
- Using a personal ChatGPT Plus account to draft client proposals, pasting in pricing models and client requirements. None of this touches your DLP tools because it is going through a personal browser session on a company laptop.
- Building a no-code AI workflow that automatically processes invoices from a shared inbox. It has read access to every email in that mailbox. Nobody in IT authorised it. Nobody in security reviewed the permissions.
- Using AI coding assistants that suggest code based on your private repositories. Some of those suggestions get committed directly. The code works, but nobody checked whether the AI introduced a dependency with a known vulnerability (a check that is cheap to automate, as the sketch below shows).
- Using an AI tool to screen CVs and rank candidates. It is making decisions about people using criteria that nobody has audited for bias. The ICO is already developing a statutory code of practice for exactly this kind of automated decision-making.
Source: ICO – “AI’ll get that”, January 2026
None of these scenarios require malicious intent. They are all people trying to do their jobs more efficiently. That is what makes this so difficult to address with traditional security tooling.
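Some of the missing checks are cheap to automate, though. As an illustration of the dependency check from the coding assistant scenario above, here is a minimal sketch that queries the free OSV.dev vulnerability database. The package pins are made up for the example; in practice you would parse them from your real lockfile in CI.

```python
# Minimal sketch: check pinned dependencies against the free OSV.dev
# vulnerability database. The pins below are illustrative; in practice
# you would parse them from a real requirements.txt or lockfile in CI.
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV advisory IDs that affect this exact package version."""
    body = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return [v["id"] for v in json.load(resp).get("vulns", [])]

for pkg, ver in [("requests", "2.19.0"), ("urllib3", "1.24.1")]:
    ids = known_vulns(pkg, ver)
    print(f"{pkg}=={ver}:", ", ".join(ids) if ids else "no known advisories")
```

Off-the-shelf tools such as pip-audit do this job properly. The sketch just shows how little effort the skipped check actually involves.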
Why We Are Telling You This (and Not Just Selling You Something)
We build AI products. RecAssist uses agentic AI for recruitment workflows. Vern uses AI to triage contract risk. Amlio automates AML onboarding with AI-powered document verification.
We know what governed AI looks like because we build it. We know what access controls need to be in place. We know what happens when you skip the guardrails because we have seen it from the inside.
That is not a sales pitch. It is context. When we tell you that ungoverned AI agents are a serious risk, we are speaking from direct experience of building the governed kind.
Six Things You Should Do This Quarter
You do not need a massive governance programme. You need practical steps that match the size of your business and the reality of how your people actually work.
Run a shadow AI audit
Find out what is already in use. Check browser extensions. Review OAuth app permissions in Microsoft 365 and Google Workspace. Look at DNS logs for traffic to known AI services. You will be surprised.
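For the DNS piece, the first pass can be a simple script. Here is a minimal sketch, assuming you can export DNS query logs as plain text; the log filename, its format and the domain watchlist are all assumptions to adapt to your own resolver and policy.

```python
# Minimal sketch: count DNS queries to known AI services in an exported
# query log. The log path, its format and the watchlist are assumptions
# to adapt to your own resolver and acceptable use policy.
import re
from collections import Counter

AI_DOMAINS = {
    "openai.com", "chatgpt.com", "anthropic.com", "claude.ai",
    "perplexity.ai", "gemini.google.com", "midjourney.com",
}

DOMAIN_RE = re.compile(r"[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def watchlist_hits(line: str) -> set[str]:
    """Return watchlisted domains that a single log line queried."""
    hits = set()
    for token in DOMAIN_RE.findall(line):
        token = token.lower().rstrip(".")
        for domain in AI_DOMAINS:
            if token == domain or token.endswith("." + domain):
                hits.add(domain)
    return hits

counts: Counter[str] = Counter()
with open("dns-queries.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        counts.update(watchlist_hits(line))

for domain, n in counts.most_common():
    print(f"{n:6d}  {domain}")
```

The OAuth review works the same way in spirit: Microsoft 365 and Google Workspace both expose granted app permissions through their admin tooling, so the data is there once you go looking.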
Write an AI acceptable use policy
Not a 40-page document nobody reads. A single page that says: here is what you can use, here is what you cannot put into it, here is who to ask if you are not sure. Make it specific. “Do not paste client data into personal AI accounts” is better than “use AI responsibly.”
Create an approved tool catalogue
If you do not give people sanctioned AI tools they will find unsanctioned ones. Evaluate two or three AI tools that meet your security requirements and make them available. People use shadow AI because the official alternative is worse or does not exist.
Put technical controls in place
DNS filtering to block unapproved AI services. Conditional access policies that restrict AI tool sign-ups to managed devices. DLP rules that flag sensitive data being pasted into web-based AI tools. None of this is exotic. Most businesses already have the tools; they just have not configured them for AI.
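Most of this is configuration in products you already own rather than new software, but it helps to see how simple the core of a DLP rule is. Here is a minimal sketch of the pattern-matching step, with illustrative and deliberately simplified patterns; real rules need tuning to keep false positives manageable.

```python
# Minimal sketch of the pattern-matching core of a DLP rule: flag text
# that appears to contain sensitive identifiers before it leaves your
# perimeter. Patterns are illustrative and simplified, not exhaustive.
import re

RULES = {
    # UK National Insurance number, e.g. AB123456C (simplified prefix rule).
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    # 13-16 digit card-like number, allowing spaces or dashes between digits.
    "card_number": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
    # Simple email address.
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def flag(text: str) -> dict[str, int]:
    """Map each triggered rule name to its number of matches."""
    return {name: len(rx.findall(text))
            for name, rx in RULES.items() if rx.search(text)}

sample = "Proposal for jane@example.co.uk, NI number AB123456C."
print(flag(sample))  # {'uk_nino': 1, 'email': 1}
```

The real work is wiring rules like these into your proxy or endpoint agent and deciding what happens on a hit: block, warn or just log.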
Train your people (properly)
Not a compliance tick-box exercise. A 30-minute session that shows real examples of what goes wrong. The Meta incident. The Replit database wipe. The employee who pasted the entire client list into an AI tool. Make it real and people pay attention.
Review quarterly
The AI landscape is moving faster than any technology shift in recent memory. Gartner predicts 40% of enterprise apps will feature AI agents by the end of 2026, up from less than 5% in 2025. Whatever policy you write today will need updating. Build the review cycle now.
Source: Gartner – “40% of Enterprise Apps Will Feature AI Agents by 2026”
The Bottom Line
AI agents are not coming. They are here. Your employees are already using them. The question is whether you know about it and whether you have any control over what they are doing with your data.
Meta had every security tool money can buy and an AI agent still caused a Sev 1. If it can happen there, it can happen to you.
The good news is that this is solvable. It does not require a seven-figure budget or a dedicated AI governance team. It requires visibility, practical policy and the willingness to treat AI tools with the same rigour you apply to any other system that touches your business data.
Start with the audit. Everything else follows from there.
Where AssurePath Fits In
We help UK businesses get ahead of exactly this kind of risk. Whether you need a shadow AI audit, an AI acceptable use policy or a full review of your security posture in light of how AI is actually being used by your team, we have done this before.
- Shadow AI audits and risk assessments
- AI acceptable use policy development
- Microsoft 365 and Google Workspace security reviews
- Fractional CISO and security leadership
- Incident response planning and tabletop exercises
Your people are already using AI. The only question is whether you are in control of it.