Small and medium-sized enterprises (SMEs) in the UK lost an average of £11,000 to online fraud in 2024, according to a new study by Mollie.
Some 58% of SMEs were hit by phishing scams, while 42% experienced refund fraud and 30% faced attempted account takeovers.
Such attacks have a significant impact on productivity, with SMEs spending an average of 15 days per year dealing with them.
AI is likely to make online fraud both more common and more sophisticated.
In the first three months of 2024, AI-driven scams cost the British public £1 billion. And in the year following November 2022, when ChatGPT launched, there was a 1,265% rise in malicious phishing email globally.
Here, three cybersecurity experts explain four ways in which AI is making cybersecurity more challenging – and how SMEs and their accountants can protect themselves.
Challenge #1: Deepfakes can impersonate staff, including senior executives
“AI-powered deepfake technology has enabled highly convincing impersonation in images, videos and even live video calls,” says Truman Kain, security researcher, Huntress.

“Attackers can [appear and behave like] executives in virtual meetings, leading to fraud and deception in real time.”
Deepfakes often target finance teams, with urgent requests to transfer money, says Camden Woollven, Group Head, AI Product Marketing, GRCI Group.
“And [they’re] hitting organisations where it hurts – their decision-making chain.
“The risk has exploded because the technology is now cheap and accessible. Anyone can create these fakes with basic tools.”
In May 2024, British engineering company Arup lost millions of pounds to a deepfake scam.
Cybercriminals staged an AI-generated video call in which they impersonated senior executives and persuaded an employee to transfer £20 million.
“Organisations that fail to address this risk face financial loss, reputational damage and potential regulatory repercussions,” says Kain.
Solution #1: Strong verification protocols and anomaly monitoring
To defend against deepfakes, SMEs should put strong verification protocols in place.
Accountants can help by ensuring clients use multi-factor authentication for unusual or high-value transactions, says Kain.
“This means having a documented process [via which] multiple people must confirm big money transfers, ideally through different communication channels,” adds Woollven.
“So, if someone gets a request from the ‘CEO’ to wire money, they need to verify it through a separate channel – maybe a face-to-face meeting or a call to a pre-registered number.”
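In code, such a policy might look something like the sketch below: a minimal Python check (the £10,000 threshold and the channel names are invented for illustration) that refuses to release a high-value transfer unless it has been confirmed by at least two different people through at least two different channels.

```python
from dataclasses import dataclass

# Hypothetical values for illustration only.
HIGH_VALUE_THRESHOLD = 10_000  # £; payments above this need extra approval
MIN_APPROVERS = 2              # distinct people who must confirm
MIN_CHANNELS = 2               # distinct channels (e.g. call-back, in person)

@dataclass(frozen=True)
class Approval:
    approver: str   # who confirmed the request
    channel: str    # how they confirmed it, e.g. "callback", "in_person"

def payment_allowed(amount: float, approvals: list[Approval]) -> bool:
    """Allow a high-value payment only with enough independent confirmations."""
    if amount <= HIGH_VALUE_THRESHOLD:
        return True  # routine payments follow the normal process
    approvers = {a.approver for a in approvals}
    channels = {a.channel for a in approvals}
    return len(approvers) >= MIN_APPROVERS and len(channels) >= MIN_CHANNELS

# A request "from the CEO" over email alone is rejected...
print(payment_allowed(20_000, [Approval("ceo", "email")]))                  # False
# ...but passes once a second person confirms over a separate channel.
print(payment_allowed(20_000, [Approval("ceo", "email"),
                               Approval("finance_director", "callback")]))  # True
```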
Accountants can also keep a close eye on clients’ accounts by using anomaly-monitoring software to flag transactions that break from an account’s normal pattern.
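Commercial tools use far richer models, but the minimal Python sketch below illustrates the core idea, under the simplifying assumption that a suspicious transaction is one sitting more than three standard deviations from an account’s recent history.

```python
from statistics import mean, stdev

def flag_anomalies(history: list[float], new_transactions: list[float],
                   threshold: float = 3.0) -> list[float]:
    """Flag transactions that deviate sharply from an account's recent history.

    A transaction is flagged if it lies more than `threshold` standard
    deviations from the historical mean -- a deliberately simple stand-in
    for what commercial anomaly-monitoring tools do with richer models.
    """
    mu, sigma = mean(history), stdev(history)
    return [t for t in new_transactions if abs(t - mu) > threshold * sigma]

# Typical supplier payments for this account...
history = [1_200.0, 950.0, 1_100.0, 1_050.0, 980.0, 1_150.0]
# ...against which a sudden £20,000 transfer stands out immediately.
print(flag_anomalies(history, [1_020.0, 20_000.0]))  # [20000.0]
```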
Challenge #2: AI can compose perfectly written phishing email
A year or two ago, phishing email was still reasonably easy to spot, thanks to its poor grammar, awkward sentence structures and fuzzy logos.
However, AI is a far better writer than most cybercriminals.
“Now, AI can craft grammatically perfect, highly personalised phishing emails in any language,” says Kain.
“[In addition], AI can automatically gather information from social media and company websites to create targeted spear-phishing campaigns that appear entirely legitimate.”
Such information might include your recent work projects, your colleagues’ names and even your LinkedIn updates, says Woollven.
As a result, AI-generated phishing email is much more realistic, which makes it harder to detect.
“Companies [that] ignore this threat risk not just data breaches, but serious reputational damage,” says Woollven.
“When customers find out their data was stolen because someone fell for a scam, they tend to take their business elsewhere.”
Solution #2: Regular employee training and channels for reporting suspicious email
Even though AI-written phishing email is convincing, it’s not impossible to spot. Training can go a long way towards helping employees stay vigilant.
“[This training] should educate on the latest AI-driven scams, how to identify suspicious activity, and best practices for verifying communications,” says Kain.
Further, training should be practical, says Woollven.
“Don’t just run generic workshops. Show your team what AI-generated phishing emails actually look like. Make them aware of the specific details scammers might use from their social media profiles. The more hands-on the training, the better they’ll be at spotting the threats.”
Employees should be trained to recognise red flags, even where they’re subtle, says John Clark, product manager, takepayments.
These include unexpected claims of an issue with an account, requests for personal details, inconsistent branding and formatting, and incorrect email addresses.
“Always remember to check the sender’s email address against previous communications,” says Clark.
“Cybercriminals often use subtle variations, such as replacing [the letter] ‘O’ with [the number] ‘0’.
“The same goes for any links in the email – look out for misspelled domain names.”
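These checks can also be automated. The Python sketch below (the allow-list of known domains is invented for the example) normalises common character swaps such as the number ‘0’ for the letter ‘O’, then flags sender domains that are close to, but not exactly, a domain the firm normally deals with.

```python
from difflib import SequenceMatcher

# Domains this firm legitimately deals with (illustrative examples).
KNOWN_DOMAINS = {"example-bank.co.uk", "supplier.com"}

# Common character swaps scammers use to fake a familiar domain.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def check_sender(address: str) -> str:
    """Classify a sender's domain as known, a likely lookalike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in KNOWN_DOMAINS:
        return "known"
    normalised = domain.translate(HOMOGLYPHS)
    for known in KNOWN_DOMAINS:
        similarity = SequenceMatcher(None, normalised, known).ratio()
        if normalised == known or similarity > 0.9:
            return f"suspicious lookalike of {known}"
    return "unknown"

print(check_sender("accounts@example-bank.co.uk"))  # known
print(check_sender("accounts@example-bank.c0.uk"))  # suspicious lookalike
print(check_sender("offers@random-site.net"))       # unknown
```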
Further, a reporting system should be in place. This enables employees to report suspicious email so that cybersecurity staff can investigate it and, if appropriate, block it before the cybercriminal targets other employees.
Accountants can play a central role by advocating for cybersecurity budgeting, to ensure their clients allocate funds for protective measures, says Kain.
Challenge #3: AI-powered malware can learn and adapt
Malware is software that cybercriminals use to disrupt, damage or gain unauthorised access to a computer system.
It’s been around for a while, but AI has made it cleverer than ever before.
“Unlike traditional malware that follows a set pattern, this stuff watches how your security systems work and changes its behaviour to avoid detection,” says Woollven.
“It’s a nightmare for IT security teams because it’s constantly evolving, [and is] getting smarter at evading usual security measures.”
Businesses that fail to defend against malware are at risk of system-wide failures.
“We’re talking about operations grinding to a halt, customer data being compromised, and recovery costs that could run into the millions,” says Woollven.
Solution #3: Adopting a zero-trust security approach
The only way to protect against AI-powered malware is “adopting a zero-trust security approach”, says Woollven.
“This means treating every request as potentially suspicious, regardless of where it seems to come from.”
Accountants should encourage their clients to restrict access rights to the minimum necessary for each role, regularly update security protocols, and monitor for unusual activity.
“[This] might slow things down a bit, but it’s much better than dealing with a system-wide breach,” says Woollven.
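As a toy illustration of that principle, the Python sketch below (the roles and permissions are invented for the example) checks every request against an explicit least-privilege role map, with no “trusted network” shortcut that would let a request through unverified.

```python
# Least-privilege role map: each role gets only the permissions it needs.
# Roles and permissions here are invented for the example.
ROLE_PERMISSIONS = {
    "accounts_clerk": {"view_invoices", "create_invoice"},
    "finance_manager": {"view_invoices", "create_invoice", "approve_payment"},
}

def authorise(role: str, action: str) -> bool:
    """Zero trust in miniature: every request is checked, none is assumed safe.

    There is no 'trusted network' branch -- a request from inside the office
    is verified exactly the same way as one from outside.
    """
    return action in ROLE_PERMISSIONS.get(role, set())

# A clerk can raise an invoice but cannot approve a payment,
# no matter how legitimate the request appears.
print(authorise("accounts_clerk", "create_invoice"))   # True
print(authorise("accounts_clerk", "approve_payment"))  # False
print(authorise("unknown_role", "view_invoices"))      # False
```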
Challenge #4: AI can clone voices that sound scarily like the real thing
AI-powered voice cloning occurs when scammers take audio clips, often from a person’s social media accounts, and use them to replicate that person’s voice, says Clark.
“Previously, scammers had to manually call individuals to extract sensitive data,” adds Kain.
“Now, AI-generated voices can convincingly impersonate trusted figures, such as IT managers, using regional accents and emotional cues to build credibility.”
Further, automation can enable cybercriminals to make thousands of calls at once, boosting the likelihood of success.
Solution #4: Learn the signs and make independent verifications
To protect against voice cloning, accountants and their clients should learn the signs.
These include unexpected calls demanding urgent financial actions, requests for cryptocurrency, background noise that sounds artificial, and inconsistencies in the conversation, says Clark.
“Voice-cloning technology often struggles to create coherent and contextually accurate conversations.
“If the ‘person’ on the other end contradicts themselves, gives information that doesn’t align with what you know, or seems to dodge direct questions, it’s a reason for concern.”
Should an accountant or client feel uncertain, they should verify the request independently by calling the apparent caller back on a known, trusted number.
Keeping up with AI’s rapid evolution
“None of these strategies are one-time fixes,” says Woollven.
“They need to be regularly reviewed and updated because these AI threats are constantly evolving.
“The goal isn’t to make your business immune; that’s impossible. It’s about making it difficult enough that scammers move on to easier targets.”
The IFA’s AI and emerging technologies conference, held online on 6 March, will provide updates, real-life examples and practical solutions from industry specialists.