Impact of the EU AI Act on Financial Sector Compliance

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) has been published and entered into force on 1 August 2024, with its first obligations, including the prohibitions, applying from February 2025. The legislation regulates AI systems placed on the market or put into service in the European Union (EU) and applies to providers of such systems regardless of where they are established. It is crucial for both providers and deployers of AI systems within the EU to understand these regulations and their implications.

Primary Concerns of the Act

The Act is primarily concerned with ensuring that AI systems do not jeopardize users’ safety, security, or fundamental rights. Articles 7 and 27 highlight the need to analyze the purpose of an AI system and to assess its impact on fundamental rights, thereby establishing a governance standard.

Impact on the Financial Market

For companies in the financial sector, awareness of two key points is essential:

  1. Whether the AI system they engage with falls within the Act’s scope, for example because it is placed on the EU market or its output is used in the EU.
  2. How the system’s provider will comply with the Act.

Failure to consider these factors may lead to the oversight of significant risks that financial companies must monitor.

Hypothetical Risks

In a hypothetical scenario, a financial services company could face risk by relying on an AI system that returns inaccurate or misleading responses to queries; such inaccuracies could distort financial decisions. Another common pitfall is the use of outdated templates that do not reflect current market conditions or regulatory requirements, particularly for the policies and contracts needed by the back office.

Another critical risk arises when users input personal data into the system and that data is later leaked to third parties, in violation of the General Data Protection Regulation (GDPR).

Data Protection Measures

As part of its compliance obligations, the provider of an AI system must implement monitoring based on a post-market monitoring plan. This plan, which forms part of the technical documentation, ensures that the system continues to meet regulatory requirements once it is made available to the public and helps prevent adverse occurrences.

According to Articles 10, 72, and 98, this plan is crucial. Article 19 further mandates that providers retain the logs generated automatically by high-risk AI systems; financial institutions subject to EU financial services law keep these logs as part of the documentation required under that law.
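To illustrate what retaining automatically generated logs can look like in practice, here is a minimal sketch of structured, append-only audit logging. The `AuditLogger` class, its field names, and the file format are assumptions for illustration, not anything the Act prescribes.

```python
import json
import time
import uuid

class AuditLogger:
    """Append-only JSON-lines audit log for AI system events (illustrative sketch)."""

    def __init__(self, path):
        self.path = path

    def record(self, event_type, payload):
        # Each entry carries a unique id and a timestamp so the log can be
        # reconciled with other records during a supervisory review.
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "event": event_type,
            "payload": payload,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

logger = AuditLogger("ai_audit.jsonl")
entry = logger.record("inference", {"model": "credit-scoring-v2", "input_hash": "abc123"})
```

An append-only format keeps the history tamper-evident in spirit; a production system would add integrity protection and a retention schedule aligned with the applicable financial services rules.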

Risk-Based Approach

The Act adopts a risk-based approach for AI systems, categorizing risks as unacceptable, high, limited, or minimal. Supervisory measures must correspond proportionately to the assessed risks. Although high-risk AI systems are permissible, they are subject to stringent obligations and standards.
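The tiered structure above can be pictured as a simple mapping from risk category to compliance actions. The action labels in this sketch are illustrative summaries, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # permitted, but subject to strict obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations under the Act

# Illustrative mapping of tiers to compliance actions (assumed labels).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["risk management", "logging", "human oversight", "conformity assessment"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: [],
}

def required_actions(tier: RiskTier) -> list[str]:
    """Return the compliance actions proportionate to the assessed tier."""
    return OBLIGATIONS[tier]

print(required_actions(RiskTier.HIGH))
```

The point of the mapping is proportionality: the heavier the tier, the longer the list of obligations attached to it.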

High-Risk Providers

High-risk classification turns not on a system’s technical sophistication but on its intended purpose: Annex III lists the high-risk use cases, which include evaluating the creditworthiness of natural persons and credit scoring. Providers of such systems must meet requirements on data quality, robustness, and transparency for both providers and users, and Article 14 requires that high-risk systems be designed for effective human oversight so that they do not replace human evaluation.

Providers must identify all measures necessary to ensure that the system performs consistently for its intended purpose, adhering to the risk management requirements of Article 9 and the testing provisions of Article 60.

Document Retention

Providers of high-risk systems are required to keep the documents listed in Article 18 available to competent national authorities for a period of 10 years after the system is placed on the market. Providers of general-purpose AI models must also put in place a policy to comply with Union copyright law, including the text-and-data-mining reservation in Article 4 of Directive (EU) 2019/790, as required by Article 53 of the Act.

Penalties for Non-Compliance

The penalties for non-compliance with the Act vary significantly, ranging from restricting market access to fines of up to 35 million euros or 7% of worldwide annual turnover for the most serious infringements, such as the prohibited practices in Article 5. Lesser fines also apply, down to 7.5 million euros or 1% of turnover for supplying incorrect information. Article 99 details these consequences, with the amount weighed against the company’s size and the nature of the infringement.
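The Article 99 caps lend themselves to a short worked example. The sketch below computes the upper bound of a fine from worldwide turnover, applying the "whichever is higher" rule that governs the standard caps (a simplification: for SMEs the lower amount applies, which this function does not model).

```python
def fine_cap_eur(worldwide_turnover_eur: float, severe: bool = True) -> float:
    """Upper bound of the administrative fine under Article 99 (simplified sketch).

    For the most serious infringements (prohibited practices, Article 5) the cap
    is EUR 35 million or 7% of worldwide annual turnover, whichever is higher;
    for supplying incorrect information it is EUR 7.5 million or 1%.
    """
    if severe:
        return max(35_000_000, 0.07 * worldwide_turnover_eur)
    return max(7_500_000, 0.01 * worldwide_turnover_eur)

# For a firm with EUR 1 billion turnover, 7% is EUR 70 million, which
# exceeds the EUR 35 million floor, so the higher figure is the cap.
print(fine_cap_eur(1_000_000_000))
```

For smaller firms the fixed floor dominates: at EUR 100 million turnover, 7% is only EUR 7 million, so the cap stays at EUR 35 million.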

Ensuring Compliance

It is imperative that AI systems comply with the Act, with proper management of service quality to guarantee adherence to all conformity assessment procedures and change management protocols. Under the quality management system required by Article 17, the provider must establish processes for the examination, testing, and validation of all necessary procedures throughout the system’s development.

Moreover, Article 17 mandates that providers manage all data involved in the system: a policy must establish the frequency of examinations and cover data acquisition, collection, analysis, labeling, storage, filtration, mining, aggregation, retention, and any other data-handling operations.
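Such a data-management policy can be sketched as a review schedule covering every listed operation. The `DataPolicy` structure and the 90-day default frequency are assumptions for illustration; the Act requires a policy but does not prescribe this shape or cadence.

```python
from dataclasses import dataclass

# Data operations named in the Act's quality management requirements; the
# review frequencies assigned below are illustrative assumptions.
DATA_OPERATIONS = [
    "acquisition", "collection", "analysis", "labeling",
    "storage", "filtration", "mining", "aggregation", "retention",
]

@dataclass
class DataPolicy:
    operation: str
    review_frequency_days: int

def default_policy(frequency_days: int = 90) -> list[DataPolicy]:
    """Build a review schedule that covers every listed data operation."""
    return [DataPolicy(op, frequency_days) for op in DATA_OPERATIONS]

policy = default_policy()
print(len(policy))
```

Making the schedule exhaustive by construction, one entry per operation, guards against a review category silently dropping out of the policy.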

Conclusion

Financial companies must implement trustworthy AI systems that comply with all governance standards. It is advisable to follow reports and news related to AI providers to mitigate risks. Additionally, establishing a comprehensive policy and training for all employees on the safe use of these systems is highly recommended.
