Financial Sector Unprepared for AI Compliance, CCO Must Lead Controls
As artificial intelligence (AI) spreads through the financial sector, it brings significant opportunities alongside new risks. To manage those risks, industry leaders and regulators advocate a structured framework for governance and compliance.
The Necessity of AI Decision-Making Bodies
Lee Jong-oh, Deputy Governor for Digital and IT at the Financial Supervisory Service (FSS), emphasizes the need for an AI decision-making body led by an executive-level chairman. This body should encompass all relevant departments, including IT and risk management, with the Chief Consumer Officer (CCO) playing a critical role.
During a keynote address at the 2nd Seoul Economic Daily Internal Control Policy Forum, Lee stated, “The FSS recommends separating the AI risk management organization from AI planning and development units to resolve conflicts of interest.” This separation is essential for developing effective risk management regulations that cover AI system development, consumer protection, and ethical considerations.
Addressing Security Breaches
Recent security breaches within the financial sector highlight the urgency of these measures. Lee pointed out that many incidents stem from a historical tendency to view security investments as mere costs. Potential threats include:
- Inaccurate information provided to customers through AI consultations.
- Leakage of personal data from chatbot training sets into chatbot responses.
The Potential of AI in Finance
The financial industry—including banking, insurance, and securities—is recognized as having immense potential for AI integration. According to the World Economic Forum, efficiency within this sector could improve by 69-73% through automation and enhanced workflows. A recent report from the Institute of International Finance reveals that 84% of financial institutions globally have adopted generative AI, compared to about 56% in Korea.
However, the biggest barrier to AI adoption remains the lack of governance and risk management frameworks. Lee noted that domestic financial companies are particularly weak in meeting compliance obligations for high-impact AI—systems that may significantly affect life, physical safety, or fundamental rights.
AI Risk Management Framework
To address these challenges, the FSS has developed the AI Risk Management Framework (AI RMF), which focuses on three pillars: governance, risk assessment, and risk control. This framework aims to provide financial companies with the structural foundation necessary for independent risk management.
For risk assessment, the area where companies struggle most, the framework subdivides four principles—legality, reliability, good faith, and security—into three to five evaluation items each and grades risk levels against those items.
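As a rough illustration, such a tiered assessment can be organized as a mapping from each principle to its evaluation items, with an aggregation rule that turns item scores into an overall risk level. The item names, scoring scale, and thresholds below are illustrative assumptions, not the FSS's published criteria:

```python
# Hypothetical sketch of a tiered AI risk assessment, loosely following the
# structure described above: four principles, each subdivided into a few
# evaluation items. Item names, the 1-5 scale, and the tier thresholds are
# assumptions for illustration only.

PRINCIPLES = {
    "legality":    ["regulatory_compliance", "data_protection", "licensing"],
    "reliability": ["accuracy", "robustness", "explainability"],
    "good_faith":  ["fairness", "transparency", "consumer_protection"],
    "security":    ["access_control", "data_leakage", "model_integrity"],
}

def assess(scores: dict[str, dict[str, int]]) -> str:
    """Average per-item scores (1 = low risk, 5 = high risk) across all
    principles and map the mean to a coarse risk tier."""
    all_scores = [
        scores[principle][item]
        for principle, items in PRINCIPLES.items()
        for item in items
    ]
    mean = sum(all_scores) / len(all_scores)
    if mean >= 4:
        return "high"
    if mean >= 2.5:
        return "medium"
    return "low"
```

A company would score each item during review and let the aggregation flag systems that need stricter controls; a real framework would likely weight principles differently rather than taking a flat average.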
Future Considerations for High-Impact AI
Currently, under the AI Basic Act implemented in January, high-impact AI services in the financial sector are limited to “loans through screening without human intervention” in banking. However, as technology advances, this scope is likely to expand. Lee urges financial institutions to communicate actively with the Ministry of Science and ICT and the FSS to clarify which services qualify as high-impact AI.
The Importance of Autonomous Internal Controls
Lee concluded by stressing the importance of building autonomous internal control systems. He observed that regulations often lag behind technological advancements. “The recent cryptocurrency exchange incident could have been prevented if internal self-regulations had been properly followed,” he noted, emphasizing that sustainable growth in the financial sector is contingent upon developing responsible risk management systems.