Global Governance Gap: How AI is Powering the Next Wave of Money Laundering
The global financial architecture is currently facing a ‘dual-use’ paradox. While artificial intelligence (AI) is hailed as a transformative force for governance and productivity, it simultaneously equips fraudsters, money launderers, and terrorist financiers with powerful new tools. To stay ahead of these threats and protect the integrity of the global financial system, regulators and other stakeholders must embrace the responsible use of AI.
Risks from AI Technologies
A recent horizontal scan on AI and deepfakes published by the Financial Action Task Force (FATF) highlights that different forms of AI technology pose distinct new money laundering and terrorist financing risks. Criminals can exploit ‘predictive AI’ models, known for detecting patterns and making predictions, to bypass the traditional systems banks use to detect suspicious transactions.
‘Generative AI’, on the other hand, creates deepfakes such as realistic videos, audio recordings, invoices, and IDs, which can be used to circumvent anti-money laundering due diligence, especially through fake Know Your Customer (KYC) documentation.
‘Agentic AI’ can provide launderers with autonomous systems for layering and integrating illicit gains. For instance, agentic AI can operate millions of mule accounts to perform high-frequency, low-value transfers, effectively layering funds without creating detectable patterns. Additionally, to obfuscate the origin of funds, multiple AI agents could engage in online gambling using illegally obtained money, with the winning agents cashing out the proceeds as legitimate gambling winnings. Speculators can also employ agentic AI in stock markets to execute rapid ‘pump-and-dump’ schemes and manipulate market conditions.
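The structuring pattern described above, many transfers each small enough to escape single-transaction reporting, is itself a signal that monitoring systems look for. The following is a minimal detection-side sketch; the thresholds, window, and account names are all hypothetical, and real transaction-monitoring systems tune such limits per jurisdiction and use many more features:

```python
from collections import defaultdict

# Hypothetical thresholds -- real systems tune these per jurisdiction.
SINGLE_TXN_REPORT_LIMIT = 10_000   # transfers at/above this are reported anyway
AGGREGATE_LIMIT = 25_000           # aggregate of sub-threshold transfers in a window
MIN_TXN_COUNT = 20                 # a burst of many small transfers is itself a signal

def flag_structuring(transactions):
    """Flag accounts whose sub-threshold transfers aggregate past a limit.

    `transactions` is an iterable of (account_id, amount) pairs observed
    within one monitoring window (e.g. 24 hours).
    """
    totals = defaultdict(lambda: [0.0, 0])  # account -> [running sum, count]
    for account, amount in transactions:
        if amount < SINGLE_TXN_REPORT_LIMIT:  # only sub-threshold transfers matter here
            totals[account][0] += amount
            totals[account][1] += 1
    return {acct for acct, (total, count) in totals.items()
            if total >= AGGREGATE_LIMIT and count >= MIN_TXN_COUNT}

# Usage: 30 transfers of 900 each slip under the single-transaction limit
# but aggregate to 27,000 across the window, so the account is flagged.
txns = [("mule-01", 900.0)] * 30 + [("retail-07", 120.0)] * 5
print(flag_structuring(txns))  # {'mule-01'}
```

The point of the sketch is the asymmetry the article describes: an agentic system controlling millions of mule accounts can keep each account's aggregate below any fixed limit, which is why rule-based thresholds alone are no longer sufficient.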
‘General AI’, still under development, is expected to reason like humans, potentially laundering funds in ways complex enough to make it difficult for Law Enforcement Authorities (LEAs) to follow the money and gather the evidence needed for prosecution. General AI will also make it easier to generate accommodation entries in accounting records, complicating tax inspections and the determination of fund sources.
Risk of Forum Shopping
Despite the requirement for all countries to implement the FATF’s anti-money laundering standards, effectiveness varies significantly. Generative AI can be trained on laws, regulatory texts, and the contextual nuances of different jurisdictions to identify weaknesses. AI trained in this manner can then devise layering strategies that exploit those weaknesses, enabling forum shopping that frustrates any single country’s efforts to investigate cross-border transactions.
Such AI can also facilitate the development of complex corporate structures with multi-jurisdictional splits—companies incorporated in one jurisdiction but residing and holding bank accounts in others, with beneficial ownership obscured in each. Moreover, AI can create an elaborate but fake paper trail to legitimize black money, moving funds through various bank accounts backed by realistic deepfake invoices and shipping documents. Agentic AI can generate fake businesses complete with operational websites and dummy email correspondence within minutes.
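The multi-jurisdictional split described above is, fortunately, also a screenable risk indicator. A minimal sketch of such a screen follows; the field names and jurisdiction codes are hypothetical, and since many legitimate multinationals match the same pattern, this flags elevated risk rather than proving laundering:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    incorporated_in: str   # jurisdiction of incorporation
    banks_in: str          # jurisdiction holding the bank accounts
    owner_resides_in: str  # jurisdiction of the beneficial owner

def flag_split(entity: Entity) -> bool:
    """True when incorporation, banking, and ownership jurisdictions all differ.

    Illustrative only: a classic layering indicator, not proof of wrongdoing.
    """
    jurisdictions = {entity.incorporated_in, entity.banks_in, entity.owner_resides_in}
    return len(jurisdictions) == 3

shell = Entity("Acme Holdings", "BVI", "CH", "RU")  # three-way split -> flagged
local = Entity("Corner Shop", "IN", "IN", "IN")     # everything in one place
print(flag_split(shell), flag_split(local))  # True False
```

In practice such checks run over company registries and beneficial-ownership databases, which is precisely why the article's later call for cross-border data and governance standards matters: the screen only works if each jurisdiction's registry is accurate and accessible.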
Need for Governance
If agentic AI can be trained to launder money, it can also be trained to combat financial crime. It is increasingly recognized that banks, financial institutions, supervisory bodies, financial intelligence units, tax authorities, and LEAs must adopt AI to stay ahead of criminals. However, these stakeholders face limitations due to the absence of consistent and enforceable global standards for AI regulation. Only a few jurisdictions have developed standards, leaving most parts of the world vulnerable.
This fragmentation creates ‘regulatory grey zones’ where criminals can navigate to the jurisdictions with the lowest enforcement risks. Given the cross-border nature of financial crimes, only standardized AI governance protocols across jurisdictions can enable effective AI-driven controls against AI-driven money laundering.
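A basic building block of the AI-driven controls mentioned above is anomaly scoring of a transaction against an account's own history. The sketch below uses a simple z-score as a stand-in; it is purely illustrative, since production monitoring models combine many features (counterparties, velocity, geography), not amount alone:

```python
import statistics

def anomaly_score(history, new_amount):
    """Z-score of a new transaction amount against an account's history.

    Illustrative stand-in for the richer models real systems use.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # A perfectly flat history: any deviation at all is maximally unusual.
        return 0.0 if new_amount == mean else float("inf")
    return abs(new_amount - mean) / stdev

# An account that usually moves around 100 suddenly moves 5,000.
baseline = [95.0, 110.0, 100.0, 105.0, 90.0]
score = anomaly_score(baseline, 5000.0)
print(score > 3.0)  # True: far beyond three standard deviations
```

The governance problem the article raises shows up even at this toy scale: an adversarial agent that learns the detector's baseline can pace its transfers to stay inside it, which is why detection models, like the criminals' models, must keep adapting, and why consistent cross-border standards for auditing such models matter.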
Furthermore, the ‘black box’ nature of AI systems presents challenges for prosecution. When AI agents execute complex layering strategies across multiple jurisdictions in seconds, traditional laws struggle to assign human liability or collect evidence of the mens rea (‘guilty mind’) behind laundering activities.
Global governance standards are essential to standardize the auditing of AI systems and establish evidentiary standards for AI-driven money laundering. Operational standards must be developed for digital KYC processes, and countries need to commit to universal ‘common goods’ to ensure that low-capacity countries have access to the same deepfake detection tools as international financial centers.
The dialogue surrounding the global governance gap in AI use is set to take center stage at the upcoming India AI Impact Summit 2026, being held in New Delhi this week. Notably, one of the summit’s seven chakras is ‘Safe and Trusted AI’, which aims to create interoperable safety and governance frameworks and provide countries of the Global South with equitable access to AI safety testing, evaluation tools, and transparency mechanisms.