Impact of the EU AI Act on Financial Sector Compliance

The EU Artificial Intelligence Act has been published in the Official Journal and entered into force on 1 August 2024; its first obligations, the prohibitions on certain AI practices, apply from February 2025. The legislation regulates AI systems placed on the market or put into service in the European Union (EU) and applies to providers of such systems regardless of where they are established. It is crucial for both providers and users of AI systems within the EU to understand the phased-in obligations and their implications.

Primary Concerns of the Act

The Act is primarily concerned with ensuring that AI systems do not jeopardize users’ safety, security, or fundamental rights. Articles 7 and 27 require providers and deployers to analyze the purpose of their AI systems and assess their impact on fundamental rights, thereby establishing a governance standard.

Impact on the Financial Market

For companies in the financial sector, awareness of two key points is essential:

  1. Whether the AI system they engage with falls within the scope of the Act, wherever its provider is located.
  2. How that provider will comply with the Act.

Failure to consider these factors may lead to the oversight of significant risks that financial companies must monitor.

Hypothetical Risks

Consider a hypothetical scenario: a financial services company relies on an AI system that returns inaccurate or misleading answers to queries, and those inaccuracies feed into poor financial decisions. Another common pitfall is generating the policies and contracts the back office needs from outdated templates that no longer reflect current market conditions or regulatory requirements.

Another critical risk involves users entering personal data into the system that is later leaked to third parties, in violation of the General Data Protection Regulation (GDPR).

Data Protection Measures

As part of data protection, providers must monitor the AI system under a post-market monitoring plan, which ensures continued compliance with regulatory requirements once the system is made available to the public. The plan forms part of the technical documentation, and adherence to its standards is the principal safeguard against adverse events.

Articles 10, 72, and 98 make this plan central to compliance. Article 19 further requires financial companies under EU regulation to retain the logs generated automatically by high-risk AI systems.
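As an illustration, the Article 19 logging duty could be met with an append-only record store. The sketch below is a minimal Python example with hypothetical field names, not a prescribed format; it builds one structured entry per interaction and appends it to a JSON Lines file so records stay immutable and auditable:

```python
import json
import time
import uuid

def make_log_record(system_id: str, request: str, response: str) -> dict:
    """Build one automatically generated log entry for a high-risk AI system.

    Field names are illustrative; the Act requires logs to be kept,
    not this particular schema.
    """
    return {
        "record_id": str(uuid.uuid4()),
        "system_id": system_id,
        "timestamp": time.time(),
        "request": request,
        "response": response,
    }

def append_log(path: str, record: dict) -> None:
    # Append-only JSON Lines: one record per line, never rewritten.
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
```

In practice such records would also be shipped to write-once storage so the retention period can be enforced independently of the application.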

Risk-Based Approach

The Act adopts a risk-based approach for AI systems, categorizing risks as unacceptable, high, limited, or minimal. Supervisory measures must correspond proportionately to the assessed risks. Although high-risk AI systems are permissible, they are subject to stringent obligations and standards.
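The four tiers can be mirrored in an internal model inventory so that supervisory effort scales with assessed risk. The following Python sketch is purely illustrative: the use-case names and their mapping are hypothetical, and a real classification must follow Article 6 and Annex III of the Act:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # permitted, but under strict obligations
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical mapping of internal use cases to tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "creditworthiness_assessment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Return the risk tier for a registered use case, defaulting to HIGH
    so that unclassified systems get the strictest review, not none."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to the high-risk tier is a conservative design choice, not a requirement of the Act.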

High-Risk Providers

High-risk AI systems are those whose complexity, capabilities, and performance place heightened demands on data quality, robustness, and transparency for both providers and users. Article 14 requires human oversight measures so that the system supports, rather than replaces, human evaluation.

Providers must identify all necessary measures to ensure consistent performance for the intended purpose, adhering to the requirements in Articles 9 and 60.

Document Retention

Providers of high-risk systems must keep the documents listed in Article 18 at the disposal of the competent national authorities for ten years after the system is placed on the market. In addition, under Article 53 of the Act, providers of general-purpose AI models must put in place a policy to comply with EU copyright law, including the text-and-data-mining reservation in Article 4 of Directive (EU) 2019/790.
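The ten-year clock of Article 18 is straightforward to compute mechanically. A minimal sketch, assuming the retention period runs from the date the system is placed on the market (the function name is illustrative):

```python
from datetime import date

RETENTION_YEARS = 10  # Article 18: documents kept for 10 years

def retention_deadline(placed_on_market: date) -> date:
    """Earliest date the Article 18 documentation may be discarded."""
    target_year = placed_on_market.year + RETENTION_YEARS
    try:
        return placed_on_market.replace(year=target_year)
    except ValueError:
        # 29 February mapped into a non-leap year: fall back to 28 February.
        return placed_on_market.replace(year=target_year, day=28)
```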

Penalties for Non-Compliance

Penalties for non-compliance vary significantly, from restricted market access to fines of up to 35 million euros or 7% of worldwide annual turnover, whichever is higher, for the most serious infringements, such as the prohibited practices in Article 5. Lesser breaches attract fines of up to 7.5 million euros or 1.5% of turnover. Article 99 details these consequences, and enforcement takes into account the company’s size and the nature of the infringement.
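The two fine ceilings reduce to a simple formula: the higher of a fixed amount and a percentage of worldwide annual turnover. A minimal sketch of that arithmetic (the function name and the severe/lesser split are illustrative, and special regimes such as SME caps are ignored):

```python
def max_fine(turnover_eur: float, severe: bool) -> float:
    """Upper bound of the administrative fine.

    Severe infringements (e.g. the prohibited practices in Article 5):
    up to EUR 35 million or 7% of worldwide annual turnover, whichever
    is higher. Lesser breaches: up to EUR 7.5 million or 1.5%.
    """
    if severe:
        return max(35_000_000, 0.07 * turnover_eur)
    return max(7_500_000, 0.015 * turnover_eur)
```

For a firm with EUR 1 billion in turnover, the severe ceiling is therefore 7% of turnover (EUR 70 million), since that exceeds the fixed EUR 35 million floor.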

Ensuring Compliance

It is imperative that AI systems comply with the Act, with proper management of service quality to guarantee adherence to all conformity assessment procedures and change management protocols. As part of the quality management system in Article 17, the provider must establish processes for the examination, testing, and validation of the system throughout its development.

Moreover, Article 17 mandates that providers govern all data involved in the system. A policy must establish the frequency of examinations and cover data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention, and any other data operations.
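Such a policy could be encoded as a review schedule per data operation. The intervals below are illustrative only; the Act requires that a frequency be set, not what it is:

```python
# Hypothetical review schedule (in days) for the data operations
# a provider must govern; the intervals are illustrative, not prescribed.
REVIEW_INTERVAL_DAYS = {
    "data_acquisition": 30,
    "data_collection": 30,
    "data_labelling": 90,
    "data_storage": 180,
    "data_retention": 365,
}

def overdue(operation: str, days_since_last_review: int) -> bool:
    """Flag a data operation whose periodic examination is past due.

    Unknown operations fall back to the tightest 30-day interval,
    a conservative choice rather than a legal requirement.
    """
    return days_since_last_review > REVIEW_INTERVAL_DAYS.get(operation, 30)
```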

Conclusion

Financial companies must implement trustworthy AI systems that comply with all governance standards. It is advisable to follow reports and news related to AI providers to mitigate risks. Additionally, establishing a comprehensive policy and training for all employees on the safe use of these systems is highly recommended.
