You’ve run your AML processes. No flags on sanctions lists. Politically exposed person (PEP) checks clear. The ID documents match. Everything looks above board. So, you move on.
But in 2025, appearances can be deceiving: artificial intelligence has transformed how financial crime is committed.

The days of crude forgeries and dodgy scans are fading. Criminals are now deploying AI to create eerily convincing fake documents, cloned voices, and synthetic video calls that mimic real people, including your clients. And that changes everything.
For UK accountants, understanding how these deepfake threats play into AML compliance is no longer optional. It's vital.
Science fiction or a real financial risk?
If you’re sceptical about the real-world impact of deepfakes, consider this: in early 2024, the global engineering giant Arup fell victim to a deepfake scam to the tune of £25 million. The fraudsters didn’t need to hack any systems. They simply used deepfake video and audio to impersonate senior leadership on a video call, instructing staff to make multiple payments.
This wasn’t an isolated case. In 2019, fraudsters used AI-generated voice technology to impersonate a German CEO and persuade the UK subsidiary of a major energy company to transfer €220,000 to a Hungarian bank account. Similar attempts have targeted British firms using deepfaked WhatsApp videos and spoofed identities.
If global enterprises with robust internal controls can be caught off guard, the accountancy profession is far from immune.
Why accountants are a target
Accountants sit at the heart of financial transactions, often acting as gatekeepers for property deals, company formations, and capital movement. That makes the sector a prime target for criminals seeking to launder money under the radar.
And while many firms rely on automated AML tools – like ID verification software, credit file checks and biometrics – these systems are often built to catch outdated types of fraud. Deepfakes are engineered to beat them.
Let’s walk through a typical scenario.
A prospective client sends you ID and proof of address. You run an online ID check, perhaps even a biometric facial match. All systems say “pass.” You arrange a video call for final KYC. The client looks just like their passport photo. Their voice is calm, confident, and matches the details.
But what if none of it is real?
Synthetic media can now generate a real-time, AI-generated person on video who speaks with cloned audio. What looks like a living, breathing individual may be no more than data.
Why smaller accountancy firms have an advantage
Here’s the good news: smaller practices are better placed to spot anomalies than larger institutions, if they lean into their strengths.
Big firms rely on standardised processes. But smaller accountancy firms often know their clients personally. You're more likely to recognise when something doesn't add up. You can be agile, ask unexpected questions, and apply professional scepticism in real time.
That human instinct, when supported by smart processes, can be the most effective line of defence.
Strengthening your AML approach against deepfakes
Here’s how accountancy firms can adapt their AML policies, controls and procedures to deal with synthetic fraud.
1. Introduce a dynamic human check
Online checks are essential, but passive systems can be fooled. Add a live, unpredictable step to your verification. For example:
- Ask the person on a live video call to say a randomly generated phrase.
- Request they hold up an object like today’s newspaper or a branded mug.
- Ask them to turn on a light or move to a different part of the room.
These actions disrupt pre-programmed video deepfakes, which are typically scripted and lack real-world responsiveness.
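To make the first prompt genuinely unpredictable, the firm (not the caller) should generate the phrase at the moment of the call. A minimal Python sketch of that idea, where the word pool and function name are purely illustrative:

```python
import secrets

# Hypothetical word pool; any list of common, easy-to-say words would do.
WORDS = ["amber", "harbour", "ledger", "willow", "granite", "sparrow",
         "copper", "meadow", "anchor", "violet", "thistle", "beacon"]

def challenge_phrase(n_words: int = 4) -> str:
    """Build an unpredictable phrase for the client to repeat live on the call.

    Using the secrets module (rather than random) makes the choice
    cryptographically unpredictable, so a scripted or pre-recorded
    deepfake cannot anticipate it.
    """
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(challenge_phrase())  # e.g. "willow copper amber beacon"
```

The point is not the tooling but the timing: because the phrase is created during the call, a pre-rendered video cannot contain it.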
2. Layer your verification methods
Avoid relying on a single point of validation. Combine:
- Official document verification (e.g. passport scan),
- Biometric facial matching (with liveness detection),
- Credit reference and address history checks, and/or
- A direct call or video meeting, using dynamic prompts as noted above
Layering creates friction for fraudsters. One manipulated element may pass, but it's far harder to align every layer seamlessly.
Some criminals use “repeater” tactics, uploading dozens of slightly tweaked fake IDs to see which formats are accepted. Over time, they learn how to fool automated checks. Layering forces them to match multiple, unrelated data sources, dramatically raising the difficulty.
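The "every layer must agree" rule can be expressed very simply. A hedged sketch, where the class and field names are hypothetical and stand in for whichever checks a firm actually runs:

```python
from dataclasses import dataclass

# Hypothetical result container; the four fields mirror the layers above.
@dataclass
class VerificationResult:
    document_check: bool    # official document verification (e.g. passport scan)
    biometric_match: bool   # facial match with liveness detection
    credit_reference: bool  # credit reference and address history confirmed
    live_call_passed: bool  # dynamic prompts answered convincingly on a call

    def all_layers_pass(self) -> bool:
        # Every independent layer must agree before onboarding proceeds;
        # fooling one automated check is no longer enough.
        return all((self.document_check, self.biometric_match,
                    self.credit_reference, self.live_call_passed))
```

The design choice worth noting is the conjunction: a "repeater" who has learned to beat the document scanner still fails unless the unrelated credit, biometric, and live-call layers are defeated at the same time.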
3. Monitor behavioural red flags
Many AML red flags don’t come from documents. They arise from behaviour. Be alert for:
- unusual urgency, such as clients pushing for same-day transactions,
- sudden switches in communication method (e.g. from email to WhatsApp),
- last-minute requests to change bank details, or
- inconsistent geographic data (e.g. claiming to be in the UK but logging in from abroad).
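A firm could keep a simple tally of these behavioural flags per client and escalate when they accumulate. A minimal sketch, assuming illustrative flag names and a hypothetical two-flag escalation threshold:

```python
# Hypothetical flag names mirroring the red flags listed above.
RED_FLAGS = {
    "same_day_urgency": "client pushing for same-day transactions",
    "channel_switch": "sudden switch in communication method",
    "late_bank_change": "last-minute request to change bank details",
    "geo_mismatch": "claims to be in the UK but logs in from abroad",
}

def assess(observed: set[str]) -> str:
    """Map observed behavioural flags to a review outcome.

    Thresholds are illustrative: one flag triggers extra scrutiny,
    two or more contradictions pause onboarding for escalation.
    """
    unknown = observed - RED_FLAGS.keys()
    if unknown:
        raise ValueError(f"unrecognised flags: {unknown}")
    if len(observed) >= 2:
        return "escalate"  # multiple contradictions: pause and verify
    if observed:
        return "review"    # one flag: apply enhanced due diligence
    return "normal"
```

In practice the thresholds belong in the firm's risk assessment, not in code; the sketch simply shows that behavioural signals can be recorded and acted on as systematically as document checks.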
Accountants are trained to assess risk. When a client’s behaviour contradicts their story, trust your professional judgement.
4. Train your team for realistic scenarios
It’s no longer enough to train staff on static AML risks. Deepfake detection needs to be included in CPD, onboarding, and internal compliance reviews. Build training exercises around questions like:
- “What would you do if a familiar client video-called you, but something felt ‘off’?”
- “How would you test whether a caller is a real person?”
Role-playing these situations normalises the idea of challenging identity, without offending genuine clients. It empowers staff to treat instinct as evidence.
AI-driven fraud is not a distant possibility
It's a growing trend, and firms that treat AML as a living, breathing part of client interaction are better placed to spot the signs.
For accountants, especially in boutique and mid-tier practices, the opportunity is clear. You’re already close to your clients. You understand their businesses. With a few targeted changes to your AML controls, like live prompts, layered checks, and staff awareness, you can spot anomalies that machines miss.
Richard Simms, Managing Director of AMLCC, is rare in having become a leading authority on anti-money laundering compliance, risk management and education while continuing to work as a hands-on regulated professional himself.
Since 2007, when AML regulation for accountants was introduced in the UK, Richard has seen first-hand, as both a chartered accountant and an insolvency practitioner, the challenges of implementing effective AML processes.
Working with regulatory supervisors, Richard used his unique professional insights to create AMLCC (Anti-Money Laundering Compliance Company Limited) in 2008 to make AML easier for regulated businesses worldwide.










