AI is driving new, sophisticated fraud techniques: how can businesses stay safe?

Updated as of: 20 June 2025

AI-driven fraud is scaling fast – deepfakes, cloned voices, and phishing scams are smarter, faster, and harder to detect. Businesses must act now to strengthen defences or risk serious financial and reputational fallout.

Shutterstock.com/Tapati Rinchumrus

More and more businesses are taking advantage of increasingly sophisticated AI technologies – but so too are fraudsters. The fraud risk landscape is evolving fast, with AI enabling malicious actors to operate with new tactics at greater scale and speed. This presents major new challenges for businesses.  

Traditional fraud prevention and cybersecurity regimes may not protect against AI-powered attacks. Companies must proactively confront emerging risks and continuously adapt to keep pace with fraudsters. 

AI automation makes it easier for criminals to carry out large-scale attacks across multiple channels at once. Individuals with limited technical knowledge can also use these tools to perpetrate cyber fraud. 

The potential costs for businesses that fall victim to AI-powered fraud are high. Generative AI could enable fraud losses to reach US$40 billion in the US alone by 2027, Deloitte’s Centre for Financial Services predicts. 

Half of UK and US finance professionals report being targeted by deepfake scams, according to a survey by software provider Medius. 85% of respondents surveyed said deepfake technology poses an existential crisis to business’ financial security.

Lexology PRO explores the evolving risk of AI-powered fraud and the mitigating measures companies should implement to keep themselves safe.    

How are fraudsters using AI? 

Voice cloning and deepfakes 

Fraudsters use generative AI tools – capable of producing highly realistic images, video and sound clips – to craft sophisticated and believable scams.

AI-generated voice recordings and video calls are used to execute social-engineering attacks, where fraudsters persuade employees to hand over company data or assets by posing as a trusted individual or organisation. 

28% of UK adults say they have been targeted by AI voice cloning scams, according to Starling Bank. 

This type of attack can incur heavy financial losses. Engineering company Arup confirmed in 2024 it had fallen victim to “deepfake fraud” after a Hong Kong-based employee was duped into transferring HK$200 million (US$25 million) to criminals by an AI-generated video call. The fraudsters led him to believe he was speaking with a company executive. 

Enhanced phishing scams 

“Phishing” is a technique used by cybercriminals where individuals are tricked into revealing sensitive or personal information. Fraudsters encourage the victim to click a malicious link within an email or text message, which appears to be from a genuine source. 

More than half of spam emails are AI-generated in 2025, according to research by Barracuda and Harvard University. AI can help criminals craft deceptive messages that lack the traditional “tells” which may point to a phishing scam, like poor spelling and grammar. This makes it easier for hackers to attack targets who speak different languages. 

Improved accuracy also increases the likelihood of malicious emails bypassing companies’ security and detections systems. What’s more, the proliferation of AI-generated content makes it increasingly difficult to distinguish genuine content from fraud. Employees may suffer from “vigilance fatigue,” and let their guard down.  

Manipulating AI chatbots 

Fraudsters can compromise companies own AI systems to extract sensitive or personal information, sometimes to perpetrate ransomware attacks. AI chatbots may be particularly vulnerable to hacking using “prompt injection attacks.”

Some malicious actors are circumventing the safety controls on AI chatbots to learn how to carry out criminal activities such as money laundering and fraud. Most AI chatbots can be easily tricked into giving dangerous or illegal responses, according to research from Cornell University. This raises questions about whether companies could be seen as complicit if their chatbots are being used to aid criminals. 

A study carried out by Immersive Labs found that 88% of participants – including people of all technical skills levels – could successfully use prompt injection to extract sensitive information from chatbots. 17% were able to extract information from even the most technically advanced chatbot in the study, underscoring the serious security risks associated with these commonly used AI tools. 

Fraudsters may also deploy fake AI chatbots impersonating trusted entities, such as a bank or government agency, to steal financial information or other data. 

APP fraud 

Authorised push payment (APP) fraud is when an authorised account holder is tricked into transferring money to a person they believe is the intended recipient but is in fact a fraudster. This type of scam carries huge financial costs if large sums of money are being transferred.

This is a particular concern for payment providers, which may be required to refund customers who fall victim to APP fraud if adequate measures were not in place to protect them. 

AI is making it easier for fraudsters to create convincing synthetic identities to pose as legitimate payment recipients, dramatically increasing the risk of APP fraud. 

How can businesses protect themselves?  

As AI becomes more sophisticated, its likely fraudsters will find even more innovative and harmful ways of deploying the technology. This should incentivise companies to urgently review their practices and implement robust measures to prevent AI-enabled fraud.  

Invest in advanced fraud-detection tools 

As fraudsters are weaponising AI, so too are security experts. AI-powered fraud-prevention tools are highly sophisticated, capable of real-time threat monitoring and behavioural analysis on massive scale. 

Companies should consider adopting the most advanced tools to decrease their risk of AI-powered fraud. In particular, AI-powered email filters that go beyond traditional keyword detection, focusing on behavioural signals and language patterns, can be highly effective at preventing AI-enabled phishing attacks. 

Train employees to spot, stop, and report 

Staff should be trained to spot, stop, and report AI-assisted fraud. Combatting AI fraud requires a combination of sophisticated tooling and human intuition. 

Fraud-prevention training should be updated to make employees aware of the unique characteristics of AI-powered threats, so they are able to detect and report it. 

Employees who process payments or control access to company systems should report any unusual activity, such as a high volume of access requests as unusual times, which should be investigated immediately. 

AI phishing simulations and other practical training exercises based on real-world scenarios can help to consolidate how effective fraud prevention looks in practice. 

Adopt a zero-trust approach 

Companies should adopt a “zero trust” approach to security. This means that no user can be implicitly trusted, even within internal communication channels. Continuous verification and identity checks should be required for all payment and access requests.

Verification methods must be robust enough to detect AI-enabled identity theft – speaking to an employee over the phone is not sufficient to confirm their identity, for example. Multi-factor authentication, when an access code is sent to a device only the genuine recipient has access to is an effective measure.  

Monitor for vulnerabilities in AI systems  

Companies should continuously monitor to ensure their own AI systems have not been compromised by hackers. Rigorous testing should be conducted to check that the AI is sufficiently resilient to prevent data breaches and prompt injection attacks

 See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.   

Track the latest data protection updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool. 

Stay up to date with key developments and in-depth articles by following Lexology’s AI hub.