Agentic AI in Payments: Regulatory Insights and Challenges


In recent years, the rapid emergence of large language models, fueled by the mass adoption of generative AI chatbots, has renewed the financial services sector's focus on AI. Financial institutions have moved quickly to capture the technology's potential benefits, integrating it into operations ranging from algorithmic trading to robo-advisory systems.

Understanding Agentic AI

Agentic AI marks a shift beyond traditional AI systems. Unlike standard chatbots, which rely on user prompts to generate each response, agentic AI systems (or AI agents) exhibit a higher degree of autonomy: once the user defines a task, they can operate independently, executing the actions needed to achieve the intended goal without further human intervention.

AI Payment Agents: Transforming Transactions

The use of agentic AI presents significant opportunities, particularly within the payment services industry. Consumers can delegate online purchases to AI agents, which search for products matching specified criteria and automatically initiate payment. This functionality extends beyond mere assistance: because the agent completes transactions on the user's behalf, the activity may fall within the scope of applicable payment services regulation.
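As an illustration of the delegation described above, the following Python sketch models an agent that selects an offer and initiates a payment only within user-defined limits. `PaymentAgent`, `Mandate`, and the offer format are all hypothetical; a real deployment would call a licensed PSP's payment-initiation API rather than return a record.

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """User-defined constraints under which the agent may transact (hypothetical)."""
    max_amount_eur: float
    allowed_merchants: set

class PaymentAgent:
    """Toy agent: picks the cheapest eligible offer and initiates payment."""

    def __init__(self, mandate: Mandate):
        self.mandate = mandate

    def select_offer(self, offers):
        # Keep only offers from allowed merchants within the spending cap.
        eligible = [
            o for o in offers
            if o["merchant"] in self.mandate.allowed_merchants
            and o["price_eur"] <= self.mandate.max_amount_eur
        ]
        return min(eligible, key=lambda o: o["price_eur"]) if eligible else None

    def initiate_payment(self, offer):
        # Stand-in for a call to a PSP's payment-initiation endpoint.
        return {
            "merchant": offer["merchant"],
            "amount_eur": offer["price_eur"],
            "status": "initiated",
        }

mandate = Mandate(max_amount_eur=50.0, allowed_merchants={"shop-a", "shop-b"})
agent = PaymentAgent(mandate)
offers = [
    {"merchant": "shop-a", "price_eur": 42.0},
    {"merchant": "shop-b", "price_eur": 39.0},
    {"merchant": "shop-c", "price_eur": 10.0},  # cheapest, but not in the mandate
]
chosen = agent.select_offer(offers)
payment = agent.initiate_payment(chosen)
```

Constraining the agent to an explicit mandate of this kind is one way a deployer might evidence that a transaction was authorized within the scope of the customer's instructions.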

Regulatory Landscape

As AI agents become integral to payment transactions, several regulatory considerations emerge:

Payment Services Regulation

The Second Payment Services Directive (PSD2) serves as the cornerstone for payment services regulation in the EU. While it does not specifically address AI, its provisions apply to AI agents performing transactions on behalf of customers. Entities utilizing AI agents must be aware of the licensing requirements set forth by PSD2, typically necessitating authorization as a Payment Service Provider (PSP).

Consent and Authentication

Under PSD2, the payer's consent is a precondition for transaction authorization. Consent may be given before or, where agreed between payer and provider, after execution, and may cover a series of transactions. Compliance with Strong Customer Authentication (SCA) is harder to reconcile with autonomous agents, however: SCA requires verification through at least two independent elements from the categories of knowledge, possession, and inherence, each tied to the individual payer rather than to a machine acting on the payer's behalf.
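The SCA requirement can be pictured as a simple check. The sketch below, using a hypothetical factor format, tests whether at least two of PSD2's three element categories (knowledge, possession, inherence) have been verified; an agent holding, say, only an API key would supply at most one such element.

```python
SCA_CATEGORIES = {"knowledge", "possession", "inherence"}

def sca_satisfied(factors):
    """Return True if elements from at least two independent
    PSD2 categories have been verified for this payer."""
    verified = {
        f["category"] for f in factors
        if f["category"] in SCA_CATEGORIES and f["verified"]
    }
    return len(verified) >= 2

# A password (knowledge) plus a device-bound token (possession) passes:
ok = sca_satisfied([
    {"category": "knowledge", "verified": True},
    {"category": "possession", "verified": True},
])

# Two elements from the same category do not:
not_ok = sca_satisfied([
    {"category": "knowledge", "verified": True},
    {"category": "knowledge", "verified": True},
])
```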

Disputes and Chargebacks

AI payment agents raise liability questions where unauthorized transactions result from errors or misinterpretation. The payment service provider may be held accountable for losses caused by agent decisions that deviate from the customer's instructions, particularly where AI "hallucinations" lead to incorrect purchases.

Third-Party Risk Management

The infrastructure supporting AI agents often involves multiple stakeholders, necessitating rigorous third-party risk management under the EU Digital Operational Resilience Act (DORA). PSPs must ensure that their contracts with technology providers meet the necessary operational resilience standards.

Regulatory Treatment Under the EU AI Act

As entities deploy AI agents in payment contexts, they must also navigate the EU AI Act, which introduces a comprehensive regulatory framework for AI systems. The act classifies AI systems by risk, from prohibited practices through high-risk systems to limited- and minimal-risk applications, with obligations scaled to the classification.

Future Outlook

With increasing interest from technology firms and payment service providers in agentic AI, the deployment of these systems within the payments sector is expected to rise. However, the regulatory landscape remains complex and evolving, requiring PSPs to carefully navigate partnerships with technology companies and ensure compliance with a multitude of regulatory frameworks.

As the industry continues to explore the potential of agentic AI, maintaining awareness of regulatory requirements and supervisory guidance at the EU Member State level will be crucial for successful implementation.
