Impact of the EU AI Act on Financial Sector Compliance

The EU Artificial Intelligence Act was published in the Official Journal in July 2024 and entered into force on 1 August 2024, with its first obligations, including the prohibitions on certain AI practices, applying from 2 February 2025. The legislation regulates AI systems placed on the market or put into service within the European Union (EU) and applies to providers of such systems regardless of where they are established. It is crucial for both providers and deployers of AI systems within the EU to understand these regulations and their implications.

Primary Concerns of the Act

The Act is primarily concerned with ensuring that AI systems do not jeopardize users’ safety, security, or fundamental rights. Articles 7 and 27 underline the need to analyse an AI system’s purpose and to evaluate its impact on fundamental rights, with Article 27 requiring certain deployers of high-risk systems to carry out a fundamental rights impact assessment, thereby establishing a governance standard.

Impact on the Financial Market

For companies in the financial sector, two questions are essential:

  1. Whether the AI system they rely on is placed on the market or used within the EU, bringing it within the Act’s scope.
  2. How the system’s provider will comply with the Act.

Overlooking either question can leave significant risks unmonitored; a simple due-diligence record, sketched below, helps make these checks systematic.
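
As an illustration only, the following Python sketch models such a due-diligence record; the field names and risk tiers are simplifications for the example, not categories defined by the Act.

```python
from dataclasses import dataclass, field

# Hypothetical due-diligence record for an AI vendor; the fields are
# illustrative simplifications, not terms defined by the AI Act.
@dataclass
class AIVendorAssessment:
    provider: str
    in_scope_of_eu_market: bool       # placed on the EU market, or output used in the EU
    risk_tier: str                    # "unacceptable" | "high" | "limited" | "minimal"
    conformity_assessment_done: bool  # provider evidence for high-risk obligations
    notes: list[str] = field(default_factory=list)

    def open_issues(self) -> list[str]:
        """Return the compliance questions that still need an answer."""
        issues = []
        if self.risk_tier == "unacceptable":
            issues.append("Prohibited practice: the system cannot be used in the EU.")
        if (self.in_scope_of_eu_market and self.risk_tier == "high"
                and not self.conformity_assessment_done):
            issues.append("Request the provider's conformity assessment documentation.")
        return issues

vendor = AIVendorAssessment("ExampleAI", True, "high", False)
print(vendor.open_issues())
```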

Hypothetical Risks

In a hypothetical scenario, a financial services company faces risk if it relies on an AI system that returns inaccurate or misleading answers to queries, since those inaccuracies can feed directly into financial decisions. Another common pitfall is generating the policies and contracts the back office needs from outdated templates that no longer reflect current market conditions or regulatory requirements.

Another critical risk involves users inputting personal data into the system, which could subsequently be leaked to third parties in violation of the General Data Protection Regulation (GDPR). A basic mitigation, sketched below, is to screen prompts for personal data before they leave the company.
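
As a minimal sketch of that mitigation, the following code redacts a few common personal-data patterns from a prompt before it is sent to an external AI system. The patterns are illustrative assumptions; a production system would rely on a dedicated PII-detection service with far broader coverage.

```python
import re

# Illustrative patterns only; real PII detection needs broader coverage
# (names, addresses, account numbers) and ideally a dedicated service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,14}\d"),
}

def redact_pii(prompt: str) -> str:
    """Replace matches of known personal-data patterns with placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact_pii("Contact jane.doe@example.com about IBAN DE89370400440532013000"))
```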

Data Protection Measures

As part of its obligations, the provider of a high-risk AI system must implement monitoring based on a post-market plan, which forms part of the technical documentation. This plan ensures continued compliance with regulatory requirements once the system is made available to the public, and it is the principal mechanism for detecting and preventing adverse occurrences.

According to Articles 10, 72, and 98, this plan is crucial. Article 19 further mandates that financial companies under EU regulation retain the logs generated automatically by high-risk AI systems, as part of the documentation kept under financial services law. A minimal example of such automatically generated logging is sketched below.
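
By way of illustration, a minimal structured audit log for a high-risk system’s outputs might look like the sketch below; the field names are assumptions for the example, not values prescribed by the Act.

```python
import hashlib
import json
import time
import uuid

def log_inference(model_id: str, prompt: str, output: str,
                  logfile: str = "ai_audit.log") -> None:
    """Append one timestamped, automatically generated inference record."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        # Store a fingerprint rather than raw text that may contain personal data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_inference("credit-scoring-v2", "Assess applicant 123", "score: 0.81")
```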

Risk-Based Approach

The Act adopts a risk-based approach for AI systems, categorizing risks as unacceptable, high, limited, or minimal. Supervisory measures must correspond proportionately to the assessed risks. Although high-risk AI systems are permissible, they are subject to stringent obligations and standards.
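
To make the tiering concrete, the sketch below maps the four risk categories to the broad regulatory consequence each carries. The mapping is a simplified illustration of the Act’s structure, not an authoritative classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g., social scoring)
    HIGH = "high"                  # permitted, but subject to strict obligations
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # no specific obligations under the Act

# Simplified, illustrative mapping from tier to regulatory consequence.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Banned from the EU market.",
    RiskTier.HIGH: "Conformity assessment, risk management, logging, human oversight.",
    RiskTier.LIMITED: "Transparency obligations (e.g., disclose AI interaction).",
    RiskTier.MINIMAL: "Voluntary codes of conduct.",
}

print(OBLIGATIONS[RiskTier.HIGH])
```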

Providers of High-Risk AI Systems

High-risk AI systems are those whose complexity, capabilities, and performance materially affect data quality, robustness, and transparency for providers and users alike. Article 14 requires that such systems be designed for effective human oversight, with testing to ensure the system supports rather than replaces human evaluation.

Providers must identify all measures necessary to ensure the system performs consistently for its intended purpose, in line with the risk management requirements of Article 9 and the real-world testing provisions of Article 60. One common oversight mechanism is sketched below.
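
As one illustration of human oversight in practice, the sketch below routes low-confidence model outputs to a human reviewer instead of acting on them automatically. The threshold value and function names are hypothetical.

```python
# Hypothetical human-in-the-loop gate: automated output is used only when the
# model is confident; everything else is escalated to a human reviewer.
CONFIDENCE_THRESHOLD = 0.90  # illustrative value, to be calibrated per use case

def decide(model_output: str, confidence: float) -> str:
    """Gate an AI recommendation behind a confidence check."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-approved: {model_output}"
    return f"escalated to human review: {model_output}"

print(decide("loan application within policy", 0.97))
print(decide("loan application borderline", 0.62))
```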

Document Retention

Providers of high-risk systems are required to keep the documents listed in Article 18 available to competent national authorities for a period of 10 years after the system is placed on the market. Providers of general-purpose AI models must also put in place a policy to comply with EU copyright law, in particular the reservation of rights under Article 4 of Directive (EU) 2019/790, as required by Article 53 of the Act.

Penalties for Non-Compliance

The penalties for non-compliance with the Act vary significantly, ranging from restricted market access to fines of up to 35 million euros or 7% of global annual turnover, whichever is higher, for the prohibited practices listed in Article 5. Lesser infringements attract lower fines, down to 7.5 million euros or 1% of turnover for supplying incorrect information to authorities. Article 99 details these consequences, with the company’s size and the nature of the infringement taken into account. A worked example of how the cap scales with turnover follows.
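
As a worked example, for a hypothetical firm the applicable ceiling is the higher of the fixed amount and the turnover percentage:

```python
def max_fine(turnover_eur: float, fixed_cap: float = 35_000_000,
             pct: float = 0.07) -> float:
    """Upper bound of the fine: the higher of the fixed cap and pct of turnover."""
    return max(fixed_cap, pct * turnover_eur)

# Hypothetical firm with 600 million euros global annual turnover:
# 7% of 600M = 42M, which exceeds the 35M fixed cap.
print(f"{max_fine(600_000_000):,.0f} EUR")  # 42,000,000 EUR
```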

Ensuring Compliance

It is imperative that AI systems comply with the Act through proper management of service quality, guaranteeing adherence to all conformity assessment procedures and change management protocols. Under Article 17, the provider must establish processes for the examination, testing, and validation to be carried out before, during, and after the system’s development, complemented by the post-market monitoring required by Article 72.

Moreover, Article 17 mandates that providers manage all data involved in the system. A policy must be in place to establish the frequency of examinations, addressing data acquisition, collection, analysis, labeling, storage, filtration, mining, aggregation, retention, and any other data operation, as sketched below.
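
A data-management policy of this kind could be captured in machine-readable form, as in the sketch below; the operations mirror the list above, while the review intervals are invented for the example.

```python
# Illustrative data-management policy: the operations mirror Article 17's list;
# the review intervals (in days) are hypothetical, not legal requirements.
DATA_POLICY = {
    "acquisition": 30,
    "collection": 30,
    "analysis": 90,
    "labeling": 90,
    "storage": 180,
    "filtration": 90,
    "mining": 90,
    "aggregation": 90,
    "retention": 365,
}

def reviews_due(days_since_last: dict[str, int]) -> list[str]:
    """Return the data operations whose policy review is overdue."""
    return [op for op, interval in DATA_POLICY.items()
            if days_since_last.get(op, 0) >= interval]

print(reviews_due({"storage": 200, "labeling": 10}))  # ['storage']
```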

Conclusion

Financial companies must implement trustworthy AI systems that comply with all governance standards. It is advisable to follow reports and news related to AI providers to mitigate risks. Additionally, establishing a comprehensive policy and training for all employees on the safe use of these systems is highly recommended.
