How the EU Artificial Intelligence Act Impacts the Financial Market
The new EU Artificial Intelligence Act (Regulation (EU) 2024/1689) has been published: it entered into force on 1 August 2024, and its first obligations apply from February 2025. The legislation regulates AI systems operating within the European Union (EU) and applies to providers that place such systems on the EU market, regardless of where those providers are established. It is crucial for both providers and users of AI systems within the EU to understand the incoming requirements and their implications.
Primary Concerns of the Act
The Act is primarily concerned with ensuring that AI systems do not jeopardize users’ safety, security, or fundamental rights. Articles 7 and 27 highlight the need to analyze an AI system’s intended purpose and to assess its longer-term impact on fundamental rights, thereby establishing a governance standard.
Impact on the Financial Market
For companies in the financial sector, awareness of two key points is essential:
- Whether the AI system they rely on is placed on the EU market or has its output used within the EU.
- How the system’s provider will comply with the Act.
Overlooking either point can leave significant risks unmonitored, risks that financial companies are obliged to track.
Hypothetical Risks
In a hypothetical scenario, a financial services company could be exposed if it relies on an AI system that returns inaccurate or misleading answers to queries, since such inaccuracies can feed directly into financial decisions. A related pitfall is generating the policies and contracts needed by the back office from outdated templates that no longer reflect current market conditions or regulatory requirements.
Another critical risk involves users inputting personal data into the system, which could be subsequently leaked to third parties, violating the General Data Protection Regulation (GDPR).
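One practical mitigation is to scrub obvious personal data from prompts before they leave the company’s systems. The sketch below is a minimal illustration, assuming a simple regex-based filter; the patterns, placeholder format, and `scrub_prompt` helper are hypothetical, and a production deployment would need far more robust detection (for example, named-entity recognition and audit logging).

```python
import re

# Hypothetical illustration: scrub obvious personal data from a prompt
# before it is sent to an external AI provider. The patterns below are
# examples only, not a complete detection strategy.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,14}\d"),
}

def scrub_prompt(text: str) -> str:
    """Replace detected personal data with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Client jane.doe@example.com (IBAN DE89370400440532013000) asked about fees."
    print(scrub_prompt(prompt))
    # -> Client [EMAIL REDACTED] (IBAN [IBAN REDACTED]) asked about fees.
```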
Data Protection Measures
As part of data protection, the provider must monitor the AI system under a post-market monitoring plan, which tracks the system’s compliance with regulatory requirements once it is made available to the public. This plan forms part of the technical documentation, and adherence to its standards is the principal safeguard against adverse occurrences.
Articles 10, 72, and 98 underline the importance of this plan. Article 19 further requires providers to retain the logs generated automatically by high-risk AI systems; financial companies regulated under EU financial services law keep these logs as part of their existing documentation obligations.
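To make the logging obligation concrete, here is a minimal sketch of what Article 19-style record keeping could look like, assuming an append-only JSON-lines file as the store; the field names, `log_event` helper, and file path are illustrative choices, not anything the Act prescribes.

```python
import json
import uuid
from datetime import datetime, timezone

# Assumed storage backend: an append-only JSON-lines file. Every interaction
# with the high-risk system is written as a timestamped, uniquely identified
# record so it can later be produced for a competent authority.
LOG_PATH = "ai_system_events.jsonl"

def log_event(system_id: str, user_query: str, system_output: str) -> None:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input": user_query,
        "output": system_output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

log_event("credit-scoring-v2", "Assess applicant 4711", "score=0.62, band=B")
```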
Risk-Based Approach
The Act adopts a risk-based approach, classifying AI systems as posing unacceptable, high, limited, or minimal risk, and requiring supervisory measures proportionate to the assessed risk. High-risk AI systems remain permissible but are subject to stringent obligations and standards.
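The four tiers can be pictured as a simple lookup from use case to obligation level. The mapping below is illustrative only; the authoritative classification rules are in Articles 5 and 6 and Annex III, and the example use cases are assumptions chosen for demonstration.

```python
from enum import Enum

# Illustrative only: the Act's four risk tiers as a simple lookup.
# The use-case-to-tier mapping is an assumption for demonstration.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "strict obligations (Chapter III)"
    LIMITED = "transparency duties"
    MINIMAL = "no additional obligations"

USE_CASE_TIER = {
    "social_scoring_by_authorities": RiskTier.UNACCEPTABLE,
    "creditworthiness_assessment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

print(USE_CASE_TIER["creditworthiness_assessment"].value)
```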
High-Risk Providers
High-risk status is determined by an AI system’s intended purpose rather than its technical sophistication: Annex III lists the relevant use cases, which in the financial sector include evaluating creditworthiness and establishing credit scores. Providers of such systems must meet requirements on data quality, robustness, and transparency, and Article 14 requires human oversight measures so that the system supports rather than replaces human evaluation.
Providers must identify all measures necessary to ensure the system performs consistently for its intended purpose, adhering to the requirements of Articles 9 and 60.
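Article 14’s human-oversight requirement can be approximated in code as a gate that routes low-confidence outputs to a human reviewer rather than acting on them automatically. This is a hedged sketch: the confidence threshold, the in-memory review queue, and the `decide` function are all hypothetical design choices.

```python
# Minimal sketch of an Article 14-style human-oversight gate: outputs below
# a confidence threshold are routed to a human reviewer instead of being
# acted on automatically. Threshold and queue are assumptions.
REVIEW_THRESHOLD = 0.85
review_queue: list[dict] = []

def decide(applicant_id: str, model_score: float, model_confidence: float) -> str:
    if model_confidence < REVIEW_THRESHOLD:
        review_queue.append({"applicant": applicant_id, "score": model_score})
        return "escalated to human review"
    return "auto-processed"

print(decide("A-1001", 0.62, 0.91))  # auto-processed
print(decide("A-1002", 0.48, 0.55))  # escalated to human review
```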
Document Retention
Providers of high-risk systems must keep the documents listed in Article 18 at the disposal of competent national authorities for 10 years after the system is placed on the market. Providers of general-purpose AI models must also put in place a policy to comply with EU copyright law, including the text-and-data-mining reservation under Article 4 of Directive (EU) 2019/790, as required by Article 53 of the Act.
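The 10-year retention window is straightforward to operationalize. Below is a minimal sketch, assuming retention is measured from the date the system is placed on the market; the dates are examples.

```python
from datetime import date

# Sketch of the Article 18 retention rule: documentation must remain
# available for 10 years after the system is placed on the market.
RETENTION_YEARS = 10

def retention_deadline(placed_on_market: date) -> date:
    return placed_on_market.replace(year=placed_on_market.year + RETENTION_YEARS)

placed = date(2026, 3, 1)  # hypothetical market-placement date
print(f"Keep Article 18 documents until at least {retention_deadline(placed)}")
```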
Penalties for Non-Compliance
Penalties for non-compliance vary significantly, ranging from restricted market access to fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. Lesser infringements carry lower caps, such as 7.5 million euros or 1.5% of turnover. Articles 5 and 99 detail these consequences, and the assessment takes account of the company’s size and the nature of the infringement.
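Since the caps are expressed as the higher of a fixed amount and a percentage of turnover, the arithmetic is easy to sanity-check. The sketch below uses a hypothetical turnover figure and does not model the SME-specific adjustments in Article 99.

```python
# Illustrative arithmetic for the fine ceilings described above: the cap is
# the higher of the fixed amount and the turnover percentage.
def fine_cap(global_turnover_eur: float, fixed_eur: float, pct: float) -> float:
    return max(fixed_eur, global_turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical 2 billion EUR annual turnover
print(f"Prohibited-practice cap: EUR {fine_cap(turnover, 35_000_000, 0.07):,.0f}")
print(f"Incorrect-information cap: EUR {fine_cap(turnover, 7_500_000, 0.015):,.0f}")
```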
Ensuring Compliance
It is imperative that AI systems comply with the Act, with a focus on proper quality management: providers must follow all applicable conformity assessment procedures and change-management protocols, and must establish processes for the examination, testing, and validation of the system throughout its development, in accordance with Article 17.
The same article also requires providers to manage all data handled by the system: a policy must set the frequency of examinations and cover data acquisition, collection, analysis, labeling, storage, filtration, mining, aggregation, retention, and any other data operations.
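One way to make such a policy auditable is to record, per data operation, how often it is reviewed. The structure below is an assumed shape, not anything Article 17 prescribes; the 90-day default and the `DataManagementPolicy` name are illustrative.

```python
from dataclasses import dataclass, field

# Assumed shape for an Article 17-style data-management policy record:
# which operations the system performs and how often each is reviewed.
OPERATIONS = [
    "acquisition", "collection", "analysis", "labeling", "storage",
    "filtration", "mining", "aggregation", "retention",
]

@dataclass
class DataManagementPolicy:
    system_id: str
    review_frequency_days: dict[str, int] = field(
        default_factory=lambda: {op: 90 for op in OPERATIONS}
    )

policy = DataManagementPolicy("credit-scoring-v2")
policy.review_frequency_days["labeling"] = 30  # label quality reviewed monthly
print(policy.review_frequency_days["labeling"])
```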
Conclusion
Financial companies must adopt trustworthy AI systems that comply with the applicable governance standards. Following reports and news about AI providers helps mitigate risk, and a comprehensive policy, combined with training for all employees on the safe use of these systems, is highly recommended.