How is AI streamlining AML processes? 

Updated as of: 28 July 2025

AI is transforming AML compliance, but implementation requires careful risk management, regulatory alignment, and human oversight to avoid bias and explainability issues. 

Key takeaways

  • AI adoption in AML compliance is rapidly accelerating as financial institutions move from pilot programmes to full-scale deployments.
  • Major banks are achieving efficiency gains and accuracy improvements by using AI to analyse massive transaction volumes and reduce false positives.
  • Human oversight remains critical despite AI's capabilities, as regulators emphasise the need for transparency, bias management, and explainable decision-making processes.


Anti-money laundering (AML) teams are increasingly turning to AI to transform their compliance operations. 

A recent survey by the Association of Certified Anti-Money Laundering Specialists indicates that adoption of and interest in AI for AML processes are growing: 18% of respondents already have AI and machine learning solutions in production, another 18% are piloting these technologies, and 25% plan implementation within the next 12 to 18 months.

AI usage for AML purposes is no longer fringe, but a vital tool in the fight against illicit finance. In 2024, HSBC reported that it uses AI to analyse 900 million transactions monthly for signs of financial crime, while African lender I&M Bank has partnered with ThetaRay, a provider of AI-powered financial crime solutions, to enhance its AML systems. 

These programmes are full-scale operational deployments that are saving companies time and money, and increasing accuracy.

Lexology PRO examines how companies are using AI to tackle money laundering, what regulators have to say about the practice, and the key risk mitigation strategies to protect businesses. 

Key applications in AML workflows

AI is transforming multiple aspects of AML compliance, including transaction monitoring, customer due diligence, drafting suspicious activity reports (SARs), and sanctions screening. 

A recent survey by the Bank of England and the UK Financial Conduct Authority (FCA) found that fraud detection, customer due diligence and transaction monitoring are among the key use cases for AI in financial services.

Transaction monitoring

AI can identify subtle patterns and anomalies that human analysts might miss, and large financial institutions (FIs) have identified this as an opportunity to gain efficiency. Standard Chartered Bank, for example, uses AI for real-time transaction monitoring, enabling instant alerts on fraudulent activity. Danske Bank similarly employs machine learning to spot red flags in complex transaction patterns that would otherwise go unnoticed.
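
To make the approach concrete, the sketch below shows how an unsupervised anomaly detector might score transactions and route outliers to analysts. The features, thresholds and use of scikit-learn's IsolationForest are illustrative assumptions, not a description of any bank's production system.

```python
# Illustrative sketch of anomaly-based transaction monitoring.
# Assumptions: scikit-learn is available and transactions are summarised
# into simple numeric features; real systems use far richer features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic "normal" activity: amount (USD), hour of day, transfers in last 24h.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.6, size=5000),   # typical amounts
    rng.integers(8, 20, size=5000),                   # business hours
    rng.poisson(2, size=5000),                        # low velocity
])

# A handful of unusual transactions: large, late-night, high-velocity.
unusual = np.array([
    [25000.0, 3, 14],
    [18000.0, 2, 11],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# score_samples: lower scores = more anomalous. The -0.55 cut-off is purely
# illustrative; flagged transactions go to human analysts, never auto-reported.
candidates = np.vstack([normal[:3], unusual])
for features, score in zip(candidates, model.score_samples(candidates)):
    flag = "ALERT - route to analyst" if score < -0.55 else "no action"
    print(f"amount={features[0]:>9.2f} hour={int(features[1]):>2} "
          f"velocity={int(features[2]):>2} score={score:+.3f} -> {flag}")
```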

Client screening 

Client screening leverages machine learning to analyse background information for potential new clients. The system checks whether clients appear on sanctions lists, adverse media reports, politically exposed person databases, or regulatory enforcement records. Based on this assessment, AI creates comprehensive risk profiles that determine appropriate monitoring scenarios and thresholds for ongoing surveillance. 

The technology processes vast amounts of data far more efficiently than manual reviews, enabling faster onboarding while maintaining rigorous compliance standards.
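
As a simplified illustration of the screening step, the sketch below fuzzily matches a prospective client's name against a small, hypothetical watch list using only the Python standard library; real screening platforms rely on vendor list data, transliteration handling and far more sophisticated matching.

```python
# Illustrative sketch of name screening against a watch list.
# The list entries, threshold and scoring are hypothetical; production
# screening uses vendor data and machine-learned match ranking.
from difflib import SequenceMatcher

WATCH_LIST = [
    ("Ivan Petrov", "sanctions"),
    ("Acme Trading FZE", "adverse media"),
    ("Maria Gonzales", "PEP"),
]

def name_similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]; real systems use fuzzier logic."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_client(name: str, threshold: float = 0.85) -> list[dict]:
    """Return potential watch-list matches for a prospective client."""
    hits = []
    for listed_name, list_type in WATCH_LIST:
        score = name_similarity(name, listed_name)
        if score >= threshold:
            hits.append({"match": listed_name, "list": list_type,
                         "score": round(score, 2)})
    return hits

# A near-match (transliteration variant) should still surface for review.
print(screen_client("Iván Petrov"))   # likely hit against the sanctions entry
print(screen_client("Jane Smith"))    # no hits -> lower-risk onboarding path
```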

AI in customer due diligence processes can help to reduce false positives by learning from historical data, identifying patterns in customer behaviour, and prioritising truly suspicious activity over benign anomalies. South African bank Absa has worked with SymphonyAI to tackle financial crime, reducing false positive alerts by 77%.
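
At its core, this false-positive reduction is a supervised learning problem: historical alerts labelled with analyst outcomes are used to rank new alerts. The sketch below, with synthetic data and a simple logistic regression, is an assumption-laden illustration of that idea and does not describe Absa's or SymphonyAI's actual models.

```python
# Illustrative sketch of alert triage trained on historical dispositions.
# Feature names, data and model choice are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic historical alerts: [amount_zscore, new_counterparty, cross_border]
X = np.column_stack([
    rng.normal(0, 1, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
]).astype(float)

# Label 1 = analyst confirmed suspicious; most historical alerts were benign.
y = ((X[:, 0] + rng.normal(0, 0.5, n) > 1.5) & (X[:, 1] == 1)).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# New alerts are ranked by predicted risk; low-probability alerts can be
# deprioritised (not auto-closed) to cut false positives reaching analysts.
new_alerts = np.array([[2.4, 1, 1], [0.1, 0, 1], [-0.3, 0, 0]], dtype=float)
for alert, p in zip(new_alerts, clf.predict_proba(new_alerts)[:, 1]):
    priority = "high" if p > 0.5 else "low"
    print(f"features={alert.tolist()} risk={p:.2f} priority={priority}")
```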

Drafting SARs

Under most AML laws globally, particularly in jurisdictions such as the UK, US and EU, FIs and other entities in regulated sectors are required to file SARs when they detect transactions that may involve money laundering, terrorist financing or other financial crimes.

Drafting a SAR could take an in-house lawyer or compliance professional anywhere from 30 minutes to an entire day, depending on the complexity of the case, availability of documentation, and whether there are templates available to use.

The UK Financial Intelligence Unit revealed in March 2025 that it received 872,048 SARs between April 2023 and March 2024. 

Major FIs likely file tens of thousands of SARs each year, depending on their size, customer base and risk exposure. Some AML teams have adopted AI to draft SARs, saving hours of drafting time and freeing up staff for higher-value work.

Henry Yu, Hong Kong-based chief AML officer at insurance company Manulife, tells Lexology PRO that his team have used ChatGPT and Copilot to draft SARs, increasing efficiency and reducing human error in the process. 

Large language models (LLMs) can generate coherent narratives for SARs, reducing manual drafting effort. 
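
A minimal sketch of how structured case facts might be assembled into a drafting prompt is shown below. The `call_llm` function is a hypothetical placeholder for whatever approved, enterprise-grade model an institution uses; the prompt wording is illustrative, and any draft must be reviewed by a human before filing.

```python
# Illustrative sketch of assembling case facts into a SAR narrative prompt.
# `call_llm` is a hypothetical stand-in -- substitute your approved model
# endpoint; never send real customer data to consumer-grade tools.
from textwrap import dedent

def build_sar_prompt(case: dict) -> str:
    """Turn structured case facts into a drafting prompt for an LLM."""
    return dedent(f"""
        You are assisting an AML analyst. Draft a suspicious activity report
        narrative covering who, what, when, where, why and how, in plain,
        factual language. Do not speculate beyond the facts provided.

        Subject: {case['subject']}
        Account(s): {case['accounts']}
        Period: {case['period']}
        Activity observed: {case['activity']}
        Red flags: {', '.join(case['red_flags'])}
    """).strip()

def call_llm(prompt: str) -> str:  # hypothetical placeholder
    return "[draft narrative returned by the model would appear here]"

case = {
    "subject": "Example Subject Ltd",
    "accounts": "GB00XXXX (redacted)",
    "period": "Jan-Mar 2025",
    "activity": "Rapid pass-through of round-sum payments to new beneficiaries",
    "red_flags": ["structuring below reporting threshold",
                  "no apparent economic rationale"],
}

draft = call_llm(build_sar_prompt(case))
print(draft)  # analyst reviews, edits and approves before the SAR is filed
```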

Managing the risks 

Regulators worldwide are encouraging responsible AI adoption in AML processes. Singapore's Monetary Authority is strengthening AI capabilities through its Pathfinder programme, which provides proven solutions and best practices to financial institutions. Meanwhile, Ireland's Central Bank launched an innovation sandbox to support safe AI integration in compliance teams.

Yet even with regulatory backing and sandbox environments to test new approaches, AI implementation presents significant hurdles that institutions must navigate carefully. The “black box” nature of complex machine learning models creates explainability issues. Companies must demonstrate that their models are effective, reliable, and unbiased – both to clients and regulators.

Bias and discrimination pose significant risks. Models trained on incomplete or skewed datasets may produce unfair outcomes, potentially flagging individuals or businesses that pose little to no money laundering risk. Data quality remains fundamental: machine learning is only as good as the information it processes.

The Dutch Central Bank’s (DNB) scrutiny of challenger bank Bunq’s AI usage illustrates how regulators may step in to restrict usage. In 2019, the DNB prohibited Bunq's use of AI and machine learning to conduct AML monitoring, a move that led Bunq to sue the central bank. Although Bunq won the legal battle in an appeal court in The Hague in 2022, the case raised questions around companies’ overreliance on AI tools in AML processes.

Best practices for implementation 

Successful AI adoption in any business function requires careful planning and ongoing oversight. To stay out of the regulatory firing line and ensure the ethical use of AI for AML, companies should consider the following:

Audit AI systems

Regular, independent audits ensure AI maintains transparency, accuracy, and fair treatment across all cases. Monitoring for bias is essential to building confidence in the system’s outputs and increasing explainability around the technology.
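
One simple check such an audit might include is comparing false-positive rates across customer segments, as sketched below with made-up figures; a real audit covers many more metrics and should be independently validated.

```python
# Illustrative sketch of one bias check: comparing alert false-positive
# rates across customer segments. Segment labels and outcomes are invented.
from collections import defaultdict

# (segment, alerted_by_model, confirmed_suspicious) per historical case
outcomes = [
    ("domestic retail", True, False), ("domestic retail", False, False),
    ("domestic retail", True, True),  ("cross-border SME", True, False),
    ("cross-border SME", True, False), ("cross-border SME", True, True),
    ("cross-border SME", True, False), ("domestic retail", False, False),
]

stats = defaultdict(lambda: {"false_alerts": 0, "benign_cases": 0})
for segment, alerted, confirmed in outcomes:
    if not confirmed:                       # case turned out to be benign...
        stats[segment]["benign_cases"] += 1
        if alerted:                         # ...but the model still alerted
            stats[segment]["false_alerts"] += 1

for segment, s in stats.items():
    fpr = s["false_alerts"] / s["benign_cases"]
    print(f"{segment:<17} false-positive rate = {fpr:.0%}")
# Large gaps between segments prompt investigation, retraining or threshold
# changes -- and documentation of the finding for regulators.
```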

"Closely with internal stakeholders to ensure model governance is documented and regularly validated. This aligns with regulator expectations that the tools utilised are understood and that the compliance officer can explain how the systems are being used," Brandi Reynolds, chief compliance officer at Trump-backed crypto company World Liberty Financial, tells Lexology PRO.

Morgan Heavener, partner at corporate consultancy group Accuracy, highlights a similar point: any AI tool must be both understood by the organisation’s key stakeholders and be explainable to regulators. “If an organisation’s stakeholders do not understand the technology, it will be difficult to confirm that the model is appropriately addressing risks and ensuring the company is complying with its regulatory requirements,” he explains. 

Keep humans in the loop 

As Karen Bisset, head of financial crime compliance at Bank of Ireland, recently suggested, companies should ensure any AI tools or processes are overseen by a human. 

Those human experts should supervise AI algorithms, review outputs, and provide corrections when necessary. 

Reynolds concurs, stating that humans are kept in the loop within her teams “for all critical compliance decisions.” 
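
In practice, "human in the loop" often means the model only recommends, and anything consequential or low-confidence is routed to an analyst queue. The sketch below illustrates that routing logic; the thresholds and categories are assumptions.

```python
# Illustrative sketch of human-in-the-loop routing: the model recommends,
# and adverse or low-confidence outcomes always go to an analyst.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    customer_id: str
    recommendation: str   # e.g. "clear", "escalate"
    confidence: float     # 0..1

def route(output: ModelOutput) -> str:
    if output.recommendation == "escalate":
        return "analyst review (mandatory for adverse outcomes)"
    if output.confidence < 0.90:
        return "analyst review (low model confidence)"
    return "auto-clear, logged and sampled for quality assurance"

for out in [
    ModelOutput("C-001", "clear", 0.97),
    ModelOutput("C-002", "clear", 0.72),
    ModelOutput("C-003", "escalate", 0.99),
]:
    print(out.customer_id, "->", route(out))
```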

Prioritise data compliance

Confidentiality and security of personal data should be priorities for any team using AI as part of its AML process. Data protection principles and requirements – such as those laid out in the EU GDPR – apply in the context of AI, meaning that data subjects retain the right to have their data deleted, including from AI training models. Teams should also avoid putting personal or confidential data into free-to-use LLMs that scrape information from prompts.
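
One practical control, sketched below, is to strip obvious identifiers from free text before it is sent to any external model. The regex patterns are deliberately simplistic placeholders; real redaction or pseudonymisation is far broader and must sit alongside contractual and technical safeguards.

```python
# Illustrative sketch of redacting obvious personal identifiers before a
# prompt leaves the organisation. Patterns are simplistic placeholders
# (they do not catch names, addresses or many other identifiers).
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = ("Customer contact john.doe@example.com, +44 7700 900123, moved "
        "funds from GB29NWBK60161331926819 to a newly opened account.")
print(redact(note))
```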

Invest in staff training

Companies should train employees in all areas of the AML process where AI technology is employed, teaching staff how to use the technology safely and effectively. Proper education will also help to overcome resistance to new tools.

Build emergency protocols

Consider having a “pull the plug” system in place, which can stop or pause AI systems should an issue arise. An emergency response plan will also help in-house legal and compliance teams respond to critical situations caused by AI systems.
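
Conceptually, the control can be as simple as a single flag that downgrades the AI from automated decisioning to manual processing and records who pulled it and why, as in the hypothetical sketch below.

```python
# Illustrative sketch of a "pull the plug" control with an audit trail.
# Names and structure are assumptions for demonstration only.
import datetime as dt

class KillSwitch:
    def __init__(self):
        self.enabled = True
        self.audit_log: list[str] = []

    def trip(self, reason: str, actor: str) -> None:
        """Disable automated AI decisioning; all work reverts to humans."""
        self.enabled = False
        stamp = dt.datetime.now(dt.timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} tripped by {actor}: {reason}")

def handle_alert(alert_id: str, switch: KillSwitch) -> str:
    if not switch.enabled:
        return f"{alert_id}: queued for manual review (AI decisioning paused)"
    return f"{alert_id}: scored by model"

switch = KillSwitch()
print(handle_alert("A-1001", switch))
switch.trip(reason="spike in false negatives detected", actor="Head of FCC")
print(handle_alert("A-1002", switch))
print(switch.audit_log)
```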

Engage with regulators early

Companies should engage with regulators to shape AI governance frameworks specific to AML compliance. By working together, companies and regulators can establish guidelines that balance innovation and risk management, and share best practices with others in the field. Some regulators can assist FIs further in their AI adoption journey. The UK FCA’s Digital Sandbox gives certain FIs the opportunity to trial a potential AI solution in a testing environment alongside regulatory guidance and support. 

See our interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.

Stay up to date with key developments and in-depth articles by following Lexology’s artificial intelligence and anti-money laundering hub.