AI’s Role in the New Era of Money Laundering

Global Governance Gap: How AI is Powering the Next Wave of Money Laundering

The global financial architecture faces a ‘dual-use’ paradox. While artificial intelligence (AI) is hailed as a transformative force for governance and productivity, it simultaneously hands fraudsters, money launderers, and terrorist financiers powerful new tools. To stay ahead of these threats and protect the integrity of the global financial system, regulators and other stakeholders must embrace the responsible use of AI.

Risks from AI Technologies

A recent horizontal scan on AI and deepfakes published by the Financial Action Task Force (FATF) highlights that different forms of AI technology pose distinct new risks of money laundering and terrorist financing. Criminals can exploit ‘predictive AI’ models, known for detecting patterns and making predictions, to bypass the traditional systems banks use to detect suspicious transactions.
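Why pattern-detecting models threaten such systems is easy to illustrate: a rule that flags only individual transactions above a fixed threshold misses the same sum split into smaller transfers, which is exactly the kind of gap a model probing for patterns can learn to exploit. A minimal sketch, with a hypothetical threshold and data rather than any bank's actual logic, contrasts a per-transaction rule with one that aggregates per sender:

```python
from collections import defaultdict

REPORT_THRESHOLD = 10_000  # hypothetical fixed reporting threshold

def flag_naive(transfers):
    """Flag only individual transfers at or above the threshold."""
    return [t for t in transfers if t["amount"] >= REPORT_THRESHOLD]

def flag_aggregated(transfers):
    """Flag senders whose *total* volume crosses the threshold,
    even when every single transfer stays just below it."""
    totals = defaultdict(float)
    for t in transfers:
        totals[t["sender"]] += t["amount"]
    return [s for s, total in totals.items() if total >= REPORT_THRESHOLD]

# Structured ("smurfed") transfers: each one sits just under the threshold.
transfers = [{"sender": "A", "amount": 9_500} for _ in range(3)]

print(flag_naive(transfers))       # [] -- the naive rule sees nothing
print(flag_aggregated(transfers))  # ['A'] -- aggregation catches the total
```

The point of the sketch is the asymmetry: the naive rule is trivially learnable and therefore trivially evadable, while aggregation forces an adversary to spread activity across more accounts or more time.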

‘Generative AI’, on the other hand, creates deepfakes such as realistic video, audio, invoices, and identity documents, which can be used to circumvent anti-money-laundering due diligence, especially through fake Know Your Customer (KYC) documentation.

‘Agentic AI’ can provide launderers with autonomous systems for layering and integrating illicit gains. For instance, agentic AI can operate millions of mule accounts to perform high-frequency, low-value transfers, layering funds without creating detectable patterns. To obfuscate the origin of funds, multiple AI agents could gamble online with illegally obtained money, with the winning agents cashing out the proceeds as apparently legitimate gambling winnings. Speculators can also deploy agentic AI in stock markets to execute rapid ‘pump-and-dump’ schemes and manipulate market conditions.

‘General AI’ (artificial general intelligence), still under development, is expected to reason like humans, potentially laundering funds in ways complex enough to challenge Law Enforcement Authorities (LEAs) trying to follow the money and collect the evidence needed for prosecution. It would also make it easier to generate accommodation entries in accounting records, complicating tax inspections and the tracing of fund sources.

Risk of Forum Shopping

Despite the requirement for all countries to implement the FATF’s anti-money laundering standards, effectiveness varies significantly. Generative AI can be trained on laws, regulatory texts, and the contextual nuances of different jurisdictions to identify weaknesses. AI trained in this manner can then devise layering strategies that exploit these weaknesses, leading to forum shopping that frustrates a country’s efforts to investigate cross-border transactions.

Such AI can also facilitate complex corporate structures with multi-jurisdictional splits: a company incorporated in one jurisdiction, tax-resident in a second, and holding bank accounts in a third, with beneficial ownership obscured from all three. Moreover, AI can create an elaborate but fake paper trail to legitimize black money, moving funds through a chain of bank accounts backed by realistic deepfake invoices and shipping documents. Agentic AI can spin up fake businesses, complete with operational websites and dummy email correspondence, within minutes.

Need for Governance

If agentic AI can be trained to launder money, it can also be trained to combat financial crime. It is increasingly recognized that banks, financial institutions, supervisory bodies, financial intelligence units, tax authorities, and LEAs must adopt AI to stay ahead of criminals. However, these stakeholders face limitations due to the absence of consistent and enforceable global standards for AI regulation. Only a few jurisdictions have developed standards, leaving most parts of the world vulnerable.
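A minimal illustration of the defensive direction this paragraph opens with (hypothetical data and threshold; production systems use far richer features and models): score each new transaction against the account's own history and flag statistical outliers.

```python
import statistics

def anomaly_score(history, amount):
    """Z-score of a new amount against an account's transaction history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return 0.0
    return abs(amount - mean) / stdev

# Hypothetical account history of routine payments.
history = [120, 95, 130, 110, 105, 125, 90, 115]
print(round(anomaly_score(history, 118), 1))    # routine amount: low score
print(round(anomaly_score(history, 9_000), 1))  # extreme outlier: high score
```

Even this toy detector shows why shared standards matter: the thresholds, features, and audit trail behind such scores have to be consistent across institutions and jurisdictions before their alerts can support cross-border investigations.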

This fragmentation creates ‘regulatory grey zones’ in which criminals can route activity through the jurisdictions with the lowest enforcement risk. Given the cross-border nature of financial crime, only standardized AI governance protocols across jurisdictions can support effective AI-driven controls against AI-driven money laundering.

Furthermore, the ‘black box’ nature of AI systems presents challenges for prosecution. When AI agents execute complex layering strategies across multiple jurisdictions in seconds, traditional laws struggle to assign human liability or collect evidence of the mens rea (‘guilty mind’) behind laundering activities.

Global governance standards are essential to standardize the auditing of AI systems and establish evidentiary standards for AI-driven money laundering. Operational standards must be developed for digital KYC processes, and countries need to commit to providing universal ‘common goods’ so that low-capacity countries have access to the same deepfake detection tools as international financial centers.

The dialogue surrounding the global governance gap in AI use is set to take center stage at the upcoming India AI Impact Summit 2026, being held in New Delhi this week. Notably, one of the summit’s seven chakras is ‘Safe and Trusted AI’, which aims to create interoperable safety and governance frameworks and provide countries of the Global South with equitable access to AI safety testing, evaluation tools, and transparency mechanisms.
