Agentic AI in Payments: Key Regulatory Considerations
In recent years, the rapid emergence of large language models and the mass adoption of generative AI chatbots have renewed the financial services sector's focus on AI. Financial institutions have moved quickly to capture the technology's potential benefits, integrating it into operations ranging from algorithmic trading to robo-advisory systems.
Understanding Agentic AI
Agentic AI marks a shift beyond traditional AI systems. Unlike standard chatbots, which rely on successive user prompts to generate responses, agentic AI systems (or AI agents) exhibit a higher degree of autonomy. Once the user has defined a task, these systems can operate independently, executing the actions needed to achieve the intended goal without further human intervention.
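As a rough illustration of that difference, the sketch below shows the kind of plan-act-observe loop an AI agent might run after receiving a single user goal. It is a minimal sketch only: the function and tool names are hypothetical and do not refer to any particular product or framework.

```python
# Minimal sketch of an agentic loop (hypothetical names; illustration only).
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str                      # the single task defined by the user
    observations: list = field(default_factory=list)
    done: bool = False

def plan_next_action(state: AgentState) -> str:
    """Stand-in for an LLM call that decides the next step toward the goal."""
    if not state.observations:
        return "search_products"
    return "finish"

def execute(action: str, state: AgentState) -> None:
    """Stand-in for tool execution; a real agent would call external APIs here."""
    if action == "search_products":
        state.observations.append({"product": "example-item", "price_eur": 19.99})
    elif action == "finish":
        state.done = True

def run_agent(goal: str) -> AgentState:
    state = AgentState(goal=goal)
    # The defining feature: once the goal is set, the loop proceeds
    # without further prompts from the user.
    while not state.done:
        action = plan_next_action(state)
        execute(action, state)
    return state

if __name__ == "__main__":
    final_state = run_agent("Buy running shoes under EUR 100")
    print(final_state.observations)
```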
AI Payment Agents: Transforming Transactions
The use of agentic AI presents significant opportunities, particularly in the payment services industry. Consumers can use AI agents to make online purchases more conveniently: the agent searches for products matching specified criteria and automatically initiates the purchase. This goes beyond mere assistance; where an AI agent completes the transaction on the user's behalf, the activity may fall within the scope of applicable payment services regulation, as illustrated in the sketch below.
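To make that trigger concrete, the following sketch (continuing the illustrative Python style above, with hypothetical interfaces rather than any real payment API) separates the point where the agent merely recommends a product from the point where it initiates a payment on the user's behalf; it is the latter step that may bring the activity within scope.

```python
# Illustration only: hypothetical interfaces, not a real payment API.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price_eur: float
    merchant: str

def recommend(products: list[Product], max_price: float) -> Product | None:
    """Pure assistance: filtering and ranking does not move funds."""
    candidates = [p for p in products if p.price_eur <= max_price]
    return min(candidates, key=lambda p: p.price_eur) if candidates else None

def initiate_payment(product: Product, payer_account: str) -> dict:
    """The step that matters for regulation: the agent triggers a payment
    on the payer's behalf instead of handing control back to the user."""
    # In practice this call would go to a PSP or payment-initiation interface,
    # subject to consent and strong customer authentication (see below).
    return {
        "payee": product.merchant,
        "amount_eur": product.price_eur,
        "debtor_account": payer_account,
        "status": "initiated",
    }

catalogue = [
    Product("Trail runner", 89.00, "shoe-shop.example"),
    Product("Road runner", 119.00, "shoe-shop.example"),
]
choice = recommend(catalogue, max_price=100.00)
if choice is not None:
    # Dummy account identifier for illustration only.
    receipt = initiate_payment(choice, payer_account="DE00 0000 0000 0000 0000 00")
    print(receipt)
```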
Regulatory Landscape
As AI agents become integral to payment transactions, several regulatory considerations emerge:
Payment Services Regulation
The Second Payment Services Directive (PSD2) is the cornerstone of payment services regulation in the EU. While it does not specifically address AI, its provisions apply where AI agents execute transactions on behalf of customers. Entities deploying AI agents in this way must therefore consider PSD2's licensing requirements, which typically mean obtaining authorization as a Payment Service Provider (PSP).
Consent and Authentication
Under PSD2, the payer's consent is a precondition for a transaction to be treated as authorized. Consent may be given before or, where agreed between the payer and the provider, after execution, and may cover a series of transactions. Complying with Strong Customer Authentication (SCA) is harder to square with agent-driven payments, however: SCA requires authentication based on at least two independent elements (knowledge, possession, inherence) that are personal to the payer, which an AI agent acting on the payer's behalf cannot supply itself.
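A simplified sketch of where that friction arises, again using hypothetical names: before an AI-initiated payment executes, the flow has to pause and route an authentication challenge to the payer, with the resulting code tied (dynamically linked) to the specific amount and payee the agent proposed. This is an illustrative assumption about how such a handoff could be modelled, not a description of any PSP's actual interface.

```python
# Illustration of the SCA pause in an agent-driven payment (hypothetical names).
import hashlib
import secrets

def issue_challenge(payer_id: str, amount_eur: float, payee: str) -> dict:
    """Simulate a PSP sending an SCA challenge to the payer's own device.
    The agent cannot complete this step itself: the authentication elements
    (e.g. a device-bound key plus a PIN or biometric) belong to the payer."""
    nonce = secrets.token_hex(8)
    # Dynamic linking: the code is bound to this specific amount and payee,
    # so the payer approves exactly the transaction the agent proposed.
    linked = f"{payer_id}|{amount_eur:.2f}|{payee}|{nonce}"
    return {"nonce": nonce, "expected_code": hashlib.sha256(linked.encode()).hexdigest()}

def verify_payer_approval(challenge: dict, payer_response: str) -> bool:
    """The payment proceeds only if the payer's response matches the linked code."""
    return secrets.compare_digest(challenge["expected_code"], payer_response)

challenge = issue_challenge("payer-123", amount_eur=89.00, payee="shoe-shop.example")
# In a real flow the payer's device would produce this response after the payer
# authenticates; here we reuse the expected code to show a successful handoff.
approved = verify_payer_approval(challenge, payer_response=challenge["expected_code"])
print("Payment authorised" if approved else "SCA failed")
```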
Disputes and Chargebacks
AI payment agents also raise liability questions where errors or misinterpretations result in unauthorized or incorrectly executed transactions. The payment service provider may bear the losses caused by an AI agent that deviates from the customer's instructions, for example where an AI "hallucination" leads to a purchase the customer never intended.
Third-Party Risk Management
The infrastructure supporting AI agents typically involves multiple stakeholders, which calls for rigorous third-party risk management under the EU Digital Operational Resilience Act (DORA). PSPs must ensure that their contracts with technology providers contain the provisions DORA requires for ICT third-party arrangements and meet the necessary operational resilience standards.
Regulatory Treatment Under the EU AI Act
Entities deploying AI agents in payment contexts must also navigate the EU AI Act, which introduces a comprehensive, risk-based regulatory framework for AI systems. The Act classifies AI systems by risk level, from prohibited practices through high-risk systems to those subject only to transparency or minimal requirements, with obligations varying according to the classification of the system involved.
Future Outlook
With growing interest from technology firms and payment service providers in agentic AI, deployment of these systems in the payments sector is expected to increase. The regulatory landscape, however, remains complex and evolving, requiring PSPs to navigate partnerships with technology companies carefully and to ensure compliance across multiple regulatory frameworks.
As the industry continues to explore the potential of agentic AI, maintaining awareness of regulatory requirements and supervisory guidance at the EU Member State level will be crucial for successful implementation.