Impact of the EU AI Act on Financial Sector Compliance

The EU Artificial Intelligence Act has been published in the Official Journal and entered into force on 1 August 2024, with its first obligations applying from 2 February 2025 and most provisions from August 2026. The legislation regulates AI systems placed on the market or put into service in the European Union (EU) and applies to providers of such systems regardless of where they are established. It is crucial for both providers and users of AI systems within the EU to understand the incoming obligations and their implications.

Primary Concerns of the Act

The Act is primarily concerned with ensuring that AI systems do not jeopardize users’ safety, security, or fundamental rights. Articles 7 and 27 reflect this focus: the list of high-risk use cases can be extended as new risks to health, safety, or fundamental rights emerge, and deployers of certain high-risk systems must assess the impact on fundamental rights before putting them into use, establishing a governance baseline.

Impact on the Financial Market

For companies in the financial sector, awareness of two key points is essential:

  1. Whether the AI system they rely on falls within the scope of the Act, which applies regardless of where the provider is located.
  2. How that provider will comply with the Act.

Overlooking these factors may mean overlooking significant risks that financial companies are required to monitor.

Hypothetical Risks

In a hypothetical scenario, a financial services company could face risk if it relies on an AI system that returns inaccurate or misleading answers to queries, since such inaccuracies could feed into poor financial decisions. A related pitfall is relying on outdated templates that no longer reflect current market conditions or regulatory requirements, particularly for the policies and contracts the back office depends on.

Another critical risk arises when users input personal data into the system and that data is subsequently leaked to third parties, in violation of the General Data Protection Regulation (GDPR).
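
One practical mitigation is to strip recognizable personal data from prompts before they leave the organization. The snippet below is a minimal sketch of that idea, assuming hypothetical patterns and a hypothetical `redact_pii` helper; it is not a complete GDPR control, and a production system would rely on a dedicated PII-detection library.

```python
import re

# Illustrative patterns only; real deployments should also cover names,
# addresses, national IDs, and other identifiers.
PII_PATTERNS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable personal data with placeholders before the
    prompt is sent to an external AI system."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact_pii("Contact Jane at jane.doe@example.com or +49 30 1234567."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```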

Data Protection Measures

As part of these safeguards, the provider of a high-risk AI system must operate monitoring based on a post-market monitoring plan. The plan, which forms part of the technical documentation, is how the provider verifies continued regulatory compliance once the AI system is available to the public and detects adverse occurrences early enough to correct them.

Articles 10, 72, and 98 are the reference points for this plan, with Article 72 establishing the post-market monitoring obligation itself. Article 19 further mandates that the logs generated automatically by high-risk AI systems be kept; for financial companies under EU financial services regulation, these logs form part of the documentation retained under that regime.
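
To illustrate what keeping automatically generated logs can look like in practice, here is a minimal sketch of an append-only audit log. The `record_event` helper, the file layout, and the field names are assumptions made for this example; the Act specifies what must be retained, not the storage format.

```python
import json
import time
from pathlib import Path

LOG_DIR = Path("ai_audit_logs")  # hypothetical location; adapt to your infrastructure

def record_event(system_id: str, event: dict) -> None:
    """Append one automatically generated event to a daily, append-only
    JSONL file so records can be produced to a competent authority."""
    LOG_DIR.mkdir(exist_ok=True)
    day = time.strftime("%Y-%m-%d", time.gmtime())
    entry = {"ts": time.time(), "system_id": system_id, **event}
    with open(LOG_DIR / f"{day}.jsonl", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example: log a credit-scoring referral (hypothetical system and fields).
record_event("credit-scoring-v2", {"input_hash": "ab12cd34", "decision": "refer_to_human"})
```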

Risk-Based Approach

The Act adopts a risk-based approach for AI systems, categorizing risks as unacceptable, high, limited, or minimal. Supervisory measures must correspond proportionately to the assessed risks. Although high-risk AI systems are permissible, they are subject to stringent obligations and standards.
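
As an illustration of how these tiers translate into duties, the sketch below maps each category to a representative set of obligations. The mapping is a simplified assumption made for this article; the binding classification rules sit in Article 6 and Annex III of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # permitted under strict obligations
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # no mandatory obligations under the Act

# Simplified, illustrative mapping; not a substitute for legal classification.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: do not place on the market"],
    RiskTier.HIGH: ["risk management (Art. 9)", "logging (Art. 19)",
                    "human oversight (Art. 14)", "conformity assessment",
                    "post-market monitoring (Art. 72)"],
    RiskTier.LIMITED: ["transparency notices to users"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

for tier in RiskTier:
    print(f"{tier.value}: {', '.join(OBLIGATIONS[tier])}")
```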

High-Risk Providers

Providers of high-risk AI systems must meet requirements on data quality, robustness, and transparency toward both deployers and users, requirements that grow with the system’s complexity, capabilities, and performance. Article 14 adds that such systems must be designed and tested so that they support, rather than replace, human evaluation.

Providers must also identify all measures necessary to ensure the system performs consistently for its intended purpose, adhering to the risk management requirements of Article 9 and, where applicable, the real-world testing provisions of Article 60.

Document Retention

Providers of high-risk systems are required to keep the documents listed in Article 18 at the disposal of the competent national authorities for a period of 10 years after the system is placed on the market. Providers of general-purpose AI models must also put in place a policy to comply with Union copyright law, including the text-and-data-mining reservation under Article 4 of Directive (EU) 2019/790, as required by Article 53 of the Act.
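
A compliance calendar can derive that availability window directly from the date the system is placed on the market. The helper below is a small illustrative sketch, with an invented example date; it is not guidance on how the period is legally counted.

```python
from datetime import date

RETENTION_YEARS = 10  # Article 18: documentation kept at the disposal of
                      # national competent authorities for 10 years

def retention_deadline(placed_on_market: date) -> date:
    """Date until which the Article 18 documentation must stay available."""
    target_year = placed_on_market.year + RETENTION_YEARS
    try:
        return placed_on_market.replace(year=target_year)
    except ValueError:
        # a 29 February market date with a non-leap target year
        return date(target_year, 2, 28)

print(retention_deadline(date(2026, 8, 2)))  # hypothetical date -> 2036-08-02
```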

Penalties for Non-Compliance

The penalties for non-compliance with the Act vary significantly, ranging from restrictions on market access to fines of up to 35 million euros or 7% of worldwide annual turnover, whichever is higher, for prohibited practices. Lower tiers apply to other breaches, down to 7.5 million euros or 1% of turnover for supplying incorrect information. Article 5 lists the prohibited practices and Article 99 details the fines, whose assessment takes into account the company’s size and the nature of the infringement.
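
Because each cap is the higher of a fixed amount and a share of worldwide annual turnover (for SMEs the Act takes the lower of the two), exposure scales with company size. A quick sketch of the general rule, using an invented turnover figure:

```python
def max_fine(turnover_eur: float, cap_eur: float, cap_pct: float) -> float:
    """Article 99 caps: the higher of the fixed amount and the
    percentage of total worldwide annual turnover."""
    return max(cap_eur, turnover_eur * cap_pct)

turnover = 2_000_000_000  # hypothetical 2 bn EUR group turnover

print(max_fine(turnover, 35_000_000, 0.07))  # prohibited practices -> 140000000.0
print(max_fine(turnover, 7_500_000, 0.01))   # incorrect information -> 20000000.0
```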

Ensuring Compliance

It is imperative that AI systems comply with the Act, with particular attention to quality management so that all conformity assessment procedures and change management protocols are followed. The provider must establish processes for the examination, testing, and validation of the system before, during, and after development, as part of the quality management system described in Article 17.

Article 17 also requires providers to manage all data involved in the system. A policy must establish the frequency of examinations and cover data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention, and any other data operation.
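
One way to operationalize such a policy is a machine-readable review schedule per data operation. The operation names below mirror the list above, while the review intervals and the `operations_due` helper are invented for illustration.

```python
# The operation names mirror Article 17's data-management list; the review
# intervals (in days) are invented placeholders, not regulatory requirements.
DATA_MANAGEMENT_POLICY = {
    "data_acquisition": 90,
    "data_collection": 90,
    "data_analysis": 30,
    "data_labelling": 30,
    "data_storage": 180,
    "data_filtration": 90,
    "data_mining": 90,
    "data_aggregation": 90,
    "data_retention": 365,
}

def operations_due(days_since_last_review: dict[str, int]) -> list[str]:
    """Return the data operations whose scheduled examination is overdue."""
    return [
        op for op, interval in DATA_MANAGEMENT_POLICY.items()
        if days_since_last_review.get(op, interval) >= interval
    ]

print(operations_due({"data_storage": 200, "data_analysis": 10}))
# every operation except data_analysis is due (unreviewed ones default to due)
```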

Conclusion

Financial companies must implement trustworthy AI systems that comply with all governance standards. It is advisable to follow reports and news about the AI providers a company relies on, in order to spot emerging risks early. Establishing a comprehensive policy and training all employees on the safe use of these systems is also highly recommended.
