Korea’s New AI Basic Act: Characteristics and Significance
President Lee Jae-myung has clearly defined AI as a “game changer” that will shift the global economic paradigm, presenting it as a core engine for South Korea’s technology-led growth during his term in office.
In line with this national strategy, the new AI Basic Act came into effect in January 2026. Similar to the EU AI Act, it regulates AI systems that pose significant risks to human life, physical safety, and fundamental rights as “high-impact AI”, while also reflecting government commitments both to develop AI technology and related industries and to ensure public safety.
Although the enforcement of penalty provisions has been postponed for one year, it is attracting global attention as the world’s first enforceable law to implement mandatory regulations for high-impact AI business operators.
High-Impact AI Regulations
The definition of “high-impact AI” in the AI Basic Act is essentially identical to that of a “high-risk AI system” in the EU AI Act, suggesting a very similar operator regulation mechanism. However, the AI Basic Act is characterized by a system in which business operators first take voluntary measures to ensure AI safety and reliability, supplemented by ex-post supervision by the Minister of Science and ICT (MSIT). While violations carry penalty sanctions, the maximum administrative fine is KRW 30 million (approximately USD 20,300).
Specifically, the AI Basic Act lacks the same level of compulsory enforcement through sanctions as the EU AI Act. It requires AI operators to self-review in advance whether they fall under high-impact AI and recommends that they establish risk management plans through business operator obligations and related notices.
Pre-Market Requirements
Unlike the EU AI Act’s regulatory system, which requires high-risk AI model developers to undergo verification and attach a CE marking before market distribution, the AI Basic Act imposes no mandatory pre-market control over high-impact AI systems unless the operator voluntarily requests that the minister of the MSIT confirm high-impact AI status. The minister is granted legal authority to determine whether a product or service can be classified as high-impact AI.
While the AI Basic Act establishes a management and supervision system for high-impact AI to protect users through regulations, it also stipulates government support and promotion for AI utilization. Specifically, it defines the government’s role as supporting AI technology development, safe use, and technology standardization.
Support for SMEs and Startups
The Act mandates that SMEs be prioritized when implementing AI industry support measures and includes provisions for promoting startups and attracting foreign investment. Furthermore, it enables the designation of AI clusters for functional, physical, and regional clustering of related companies and organizations, emphasizing policies related to AI data centres.
Safety Assurance Obligations
The AI Basic Act lists safety assurance obligations for high-impact AI operators such as risk identification, assessment, and mitigation throughout the model’s lifecycle, similar to the EU AI Act. However, the cumulative computation threshold for training is set 10 times higher than the EU’s, effectively excluding domestic operators from these regulations.
In other words, the AI Basic Act encourages the development of industrially specialised (vertical) GPAI models through an intentional regulatory gap for super-scale GPAI models.
Scope of Regulation
The AI Basic Act lists 10 high-impact regulated areas, but its scope is narrower in places than that of the EU AI Act. For example, in the financial industry, it designates “judgement or evaluation in loan screening” as a high-impact AI area, making its practical application scope much narrower than the “creditworthiness” and “credit scoring” gateways regulated by the EU AI Act.
Likewise, the AI Basic Act classifies AI use only in recruitment as high-impact, leaving worker management unregulated, unlike the EU AI Act.
Transparency Obligations
The Act protects final users by imposing a transparency obligation: operators must notify or display the fact that AI is being used when providing products or services based on high-impact or generative AI. While the EU AI Act imposes transparency obligations on model providers, the AI Basic Act limits transparency obligations to informing the final user that AI is in use.
Copyright Considerations
Since the AI Basic Act lacks special provisions for GPAI, there is no mention of compliance with copyright law similar to the EU AI Act. Furthermore, because South Korea’s current Copyright Act lacks exception clauses like TDM (text and data mining), copyright infringement disputes during AI model training and development could become a significant legal issue. To address this, the AI Basic Act stipulates that the Minister of MSIT shall promote policies for production, collection, management, distribution, and utilization of training data.
Targeting Global Operators
Due to the technical threshold for operators’ obligations being set 10 times higher than the EU AI Act, these obligations are realistically targeted at global big tech GPAI operators doing business in the South Korean market. Consequently, such foreign operators will be indirectly regulated by designating a domestic agent under the AI Basic Act.
Separately, the Act contains a voluntary recommendation that high-impact AI model deployers systematically identify and analyse negative impacts on fundamental rights before market launch and take corrective actions on their own initiative.
Government Structure and Future Policies
While the MSIT is the primary ministry for the AI Basic Act, the Ministry of the Interior and Safety (MOIS), which oversees general public administration, recently secured passage of the Public AI Act. This creates a separate regulatory structure in which AI is utilized within the e-government administrative system, centered on public data to support public administration as a whole.
The AI Basic Act serves as a general law that materializes basic principles and guidelines for AI development and use through business operators. Symbolically, it declares as a fundamental principle, in article 3, that AI should improve the quality of people’s lives through safety and reliability, stipulating the state’s responsibility to devise measures so all citizens can adapt stably to the changes brought by AI.
The AI Basic Act requires that comprehensive action plans under the existing Framework Act on Intelligent Informatisation be considered when establishing government-wide AI promotion plans. At the same time, it establishes the National AI Strategy Committee, chaired by the president, to deliberate and decide on major policies for AI development and to build a foundation of trust.
This confirms the administration’s strong will to prioritize the national goal of becoming a top-three AI global power, making AI-related government policies and decision-making a top priority.
As it is the basic law, even if individual laws in various fields, including the MOIS’s Public AI Act, may be enacted or amended in the near future, the governance of AI-related organizations within the government and the obligations of AI-related business operators are expected to be maintained within the framework of the AI Basic Act.
In this process, the National AI Strategy Committee is expected to act as the final national control tower, with the minister of the MSIT, whose status has been recently elevated to deputy prime minister, playing a coordinating role within the executive branch.
Particularly, as the minister of the MSIT is also supported by the Basic Act on the Promotion of Data Industry and Use of Data to implement various policies for the data industry, the role and weight of the MSIT have become greater than ever.