AI Compliance Challenges in the Financial Sector: The CCO’s Essential Role

Financial Sector Unprepared for AI Compliance, CCO Must Lead Controls

As the financial sector rapidly integrates artificial intelligence (AI), the technology brings significant opportunities alongside serious risks. To mitigate those risks, industry leaders advocate a structured framework for strengthening governance and compliance.

The Necessity of AI Decision-Making Bodies

Lee Jong-oh, Deputy Governor for Digital and IT at the Financial Supervisory Service (FSS), emphasizes the need for an AI decision-making body led by an executive-level chairman. This body should encompass all relevant departments, including IT and risk management, with the Chief Consumer Officer (CCO) playing a critical role.

During a keynote address at the 2nd Seoul Economic Daily Internal Control Policy Forum, Lee stated, “The FSS recommends separating the AI risk management organization from AI planning and development units to resolve conflicts of interest.” This separation is essential for developing effective risk management regulations that cover AI system development, consumer protection, and ethical considerations.

Addressing Security Breaches

Recent security breaches within the financial sector highlight the urgency of these measures. Lee pointed out that many incidents stem from a historical tendency to view security investments as mere costs. Potential threats include:

  • Inaccurate information provided to customers through AI consultations.
  • Exposure of personal data in chatbot training sets during responses.
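The second threat above, personal data that entered a chatbot's training set resurfacing in its responses, can be partially contained with an output filter. The sketch below is a minimal, hypothetical Python guardrail; the `redact_pii` helper and the patterns it checks are illustrative assumptions, not an FSS-prescribed control, and a production system would need far broader coverage (names, addresses, account numbers) plus locale-aware validation.

```python
import re

# Illustrative patterns only: Korean resident registration numbers,
# mobile phone numbers, and email addresses.
PII_PATTERNS = {
    "rrn": re.compile(r"\b\d{6}-\d{7}\b"),                 # resident registration number
    "phone": re.compile(r"\b01[016789]-\d{3,4}-\d{4}\b"),  # Korean mobile number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(text: str) -> str:
    """Mask any matching PII pattern before a model response reaches a customer."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact_pii("Contact: 010-1234-5678, email kim@example.com"))
```

A filter like this sits between the model and the user, so it catches leaks regardless of how the data entered the training set.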

The Potential of AI in Finance

The financial industry—including banking, insurance, and securities—is recognized as having immense potential for AI integration. According to the World Economic Forum, efficiency within this sector could improve by 69-73% through automation and enhanced workflows. A recent report from the Institute of International Finance reveals that 84% of financial institutions globally have adopted generative AI, compared to about 56% in Korea.

However, the biggest barrier to AI adoption remains the lack of governance and risk management frameworks. Lee noted that domestic financial companies are particularly lacking in compliance obligations related to high-impact AI—systems that may significantly affect life, physical safety, or fundamental rights.

AI Risk Management Framework

To address these challenges, the FSS has developed the AI Risk Management Framework (AI RMF), which focuses on three pillars: governance, risk assessment, and risk control. This framework aims to provide financial companies with the structural foundation necessary for independent risk management.

In risk assessment, where companies often struggle, the framework evaluates risk levels by subdividing four principles—legality, reliability, good faith, and security—into three to five evaluation items each.
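A scorecard along those lines could be modeled as follows. This is a hypothetical sketch: only the four principle names come from the FSS framework, while the evaluation items, scoring scale, and thresholds are illustrative assumptions.

```python
# Hypothetical model of the AI RMF risk-assessment step: each of the four
# principles named by the FSS is subdivided into evaluation items scored
# 1 (low risk) to 5 (high risk). Item names and thresholds are assumptions,
# not the actual FSS evaluation sheet.
PRINCIPLES = {
    "legality":    ["regulatory_basis", "high_impact_classification", "audit_trail"],
    "reliability": ["model_accuracy", "robustness_testing", "fallback_procedures"],
    "good_faith":  ["consumer_disclosure", "fairness_review", "complaint_handling"],
    "security":    ["data_protection", "access_control", "incident_response"],
}

def assess(scores: dict[str, dict[str, int]]) -> dict[str, str]:
    """Average the item scores per principle and map each to a coarse risk level."""
    levels = {}
    for principle, items in PRINCIPLES.items():
        avg = sum(scores[principle][item] for item in items) / len(items)
        levels[principle] = "high" if avg >= 4 else "medium" if avg >= 2.5 else "low"
    return levels

# Example: a company scoring 3 on every item lands at "medium" across the board.
sample = {p: {i: 3 for i in items} for p, items in PRINCIPLES.items()}
print(assess(sample))
```

Structuring the assessment as explicit items per principle is what lets a compliance team evidence each score during an audit, rather than defending a single aggregate number.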

Future Considerations for High-Impact AI

Currently, under the AI Basic Act implemented in January, high-impact AI services in the financial sector are limited to “loans through screening without human intervention” in banking. However, as technology advances, this scope is likely to expand. Lee urges financial institutions to communicate actively with the Ministry of Science and ICT and the FSS to clarify which services qualify as high-impact AI.

The Importance of Autonomous Internal Controls

Lee concluded by stressing the importance of building autonomous internal control systems. He observed that regulations often lag behind technological advancements. “The recent cryptocurrency exchange incident could have been prevented if internal self-regulations had been properly followed,” he noted, emphasizing that sustainable growth in the financial sector is contingent upon developing responsible risk management systems.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...