Harnessing AI in Payments: Navigating Opportunities and Regulatory Challenges

The Role of AI in the Payment Market: Opportunities, Innovation, and Regulatory Requirements

AI-driven technologies open up entirely new possibilities in the payment market. They enable companies to offer innovative products and services that were previously unimaginable. Big data can be used to offer better and more suitable products to customers, while machine learning can develop personalized financial services tailored to each customer’s behavior and needs. Aimed at enhancing cost efficiency and improving customer experiences, AI offers numerous opportunities for innovation and faster processes in the payment market. However, any advances must be balanced with EU regulatory frameworks like the AI Act, DORA, and existing risk management and supervisory requirements.

Use Cases

Two prominent use cases in the payment market are:

  • AI-based chatbots: These can handle various tasks, from answering simple customer inquiries to conducting complex transactions, thus improving customer service while relieving staff workload.
  • AI tools for fraud prevention: Advanced algorithms allow AI to identify suspicious activities in real time and respond promptly, significantly reducing financial losses due to fraud.
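To make the fraud-prevention use case concrete, here is a minimal sketch of a real-time anomaly check. It is an illustration only, not a production fraud engine: real systems combine many signals and trained models, whereas this example flags a transaction whose amount deviates strongly from the customer's own spending history using a simple z-score rule. All names (`is_suspicious`, the threshold of 3.0) are assumptions for the example.

```python
from statistics import mean, stdev

def is_suspicious(amount: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates strongly from the
    customer's historical spending (simple z-score rule)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # flat history: any different amount stands out
    return abs(amount - mu) / sigma > threshold

history = [20.0, 35.0, 25.0, 30.0, 28.0]
print(is_suspicious(5000.0, history))  # large outlier -> True
print(is_suspicious(27.0, history))    # typical amount -> False
```

A rule this simple already shows the trade-off the article describes: a lower threshold catches more fraud but generates more false alarms that staff must review.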

EU AI Act

Although AI can ultimately save costs, introducing it requires personnel and financial resources to navigate and comply with the surrounding regulatory framework. The newest addition is the EU's AI Act, which aims to regulate AI technologies comprehensively within the EU. Under the AI Act, chatbots and certain fraud prevention tools fall under the transparency obligations of Article 50. This means that strict transparency rules will apply to providers of AI-based chatbots: in particular, customers must be informed that they are interacting with an AI. Providers and deployers of AI in the payment market should keep the AI Act in mind as its obligations begin to take effect.
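The Article 50 transparency duty for chatbots can be sketched in a few lines: disclose the AI nature of the conversation before the customer relies on it. This is a hypothetical illustration; `generate_answer` is a stand-in for whatever model the institution actually calls, and the disclosure wording is an assumption, not legal text.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def generate_answer(message: str) -> str:
    # Placeholder for the chatbot's actual model call.
    return f"Echo: {message}"

def reply(user_message: str, *, first_turn: bool) -> str:
    """Prepend the transparency notice on the first turn of a conversation,
    so the customer knows from the outset that they are talking to an AI."""
    answer = generate_answer(user_message)
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer
```

Keeping the disclosure in the application layer, rather than in the model prompt, makes the notice deterministic and auditable.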

DORA and Risk Management

The regulatory landscape for using AI in the payment market is broader than just the AI Act. A payment institution planning to incorporate AI needs to address all requirements set out in the Digital Operational Resilience Act (DORA), which will be applicable from January 17, 2025. DORA aims to ensure that all entities within the financial sector are resilient against digital threats, particularly cyber attacks and technical failures that could disrupt operations. Key requirements of DORA include:

  • Sound risk management for all IT systems: This includes AI and requires institutions to develop and implement a comprehensive risk management system tailored to information and communication technologies (ICT).
  • Continuous risk assessments: These must cover not only existing threats and vulnerabilities but also new and emerging risks.
  • Regular reviews of security measures: Adjustments to current threat scenarios are essential.
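The "continuous assessments and regular reviews" requirements above imply, in practice, keeping a risk register and knowing when each system's review falls due. The sketch below illustrates that idea only; the 90-day cycle is an assumed internal policy, not a number DORA prescribes, and all names are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class IctRisk:
    system: str                   # e.g. "fraud-scoring model"
    last_review: date
    review_cycle_days: int = 90   # assumed internal policy, not a DORA-mandated figure

def reviews_overdue(risks: list[IctRisk], today: date) -> list[str]:
    """Return the systems whose periodic risk review is overdue."""
    return [r.system for r in risks
            if today - r.last_review > timedelta(days=r.review_cycle_days)]

register = [
    IctRisk("payment-gateway", date(2025, 1, 10)),
    IctRisk("ai-chatbot", date(2024, 9, 1)),
]
print(reviews_overdue(register, today=date(2025, 2, 1)))  # ['ai-chatbot']
```

Even a register this simple supports the audit trail supervisors expect: what was reviewed, when, and on what cycle.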

When AI tools are sourced from third-party providers, outsourcing agreements must be in place to ensure compliance with DORA. Payment institutions must verify that all DORA requirements are also observed by providers located in third countries, which can pose particular challenges.

Fairness Principle

The German financial regulator has published supervisory guidance on the ethical standards of AI. Utilizing AI can accelerate processes and enable the rapid analysis of vast amounts of data. However, problems arise when bias creeps into machine-made decisions: highly automated decision-making with minimal human oversight can amplify existing risks of discrimination. Payment institutions are required to prevent unjustified discrimination against customers. For instance:

  • Direct discrimination occurs when older people are disadvantaged in the provision of financial services due to their age.
  • Indirect discrimination could arise when an apparently neutral criterion, such as income level, disproportionately disadvantages a protected group.

Organizations must ensure that their policies comply with anti-discrimination laws and promote fairness and equality. This includes implementing measures to review and adjust criteria used in decision-making processes to prevent bias or unequal treatment.
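One concrete way to "review and adjust criteria used in decision-making" is to compare approval rates across customer groups. The sketch below computes a disparate impact ratio; note that the well-known 80% benchmark is a US heuristic and not an EU legal standard, so here a low ratio is treated only as a trigger for manual review. The data and group labels are invented for illustration.

```python
def approval_rates(decisions: dict[str, tuple[int, int]]) -> dict[str, float]:
    """decisions maps group -> (approved, total applications)."""
    return {g: approved / total for g, (approved, total) in decisions.items()}

def disparate_impact_ratio(decisions: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest to the highest group approval rate.
    Values well below 1.0 suggest the criteria disadvantage a group
    and should trigger a manual review of the decision logic."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative outcomes: (approved, total) per age group
outcomes = {"under_60": (450, 500), "60_plus": (270, 500)}
ratio = disparate_impact_ratio(outcomes)
print(round(ratio, 2))  # 0.6 -> flag for review
```

A metric like this does not prove discrimination on its own, but it turns the fairness principle into something measurable and monitorable over time.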

Looking at the Bigger Picture

In conclusion, the integration of AI into the payment market requires more than mere compliance with the AI Act. The implementation of AI technologies must be carefully embedded within existing processes and the comprehensive regulatory framework governing payment service providers. This includes adhering to DORA’s requirements, which mandate a robust risk management strategy for all IT systems, including AI, as well as compliance with data and consumer protection laws.

Regular risk assessments and continuous adjustments to security measures are crucial to adapting to new threat scenarios. Institutions must ensure that both internal and external AI service providers meet DORA’s stringent requirements and are thoroughly monitored.

Overall, deploying AI in the payment market should be seen as an integral part of a holistic business and risk management strategy, aiming to foster technological innovation while meeting regulatory obligations.
