AI Basic Law: Industry Calls for Delay Amid Regulatory Ambiguities

AI Basic Law Faces Industry Pushback

Concerns have been raised within the AI industry that the ambiguous regulatory standards in the AI Basic Law could hinder the sector's growth. Recently, Huang Jeong-a, a member of the Democratic Party of Korea, proposed a bill to postpone the regulations established by the AI Basic Law by three years, citing the challenges expected ahead of the law's implementation next year.

Overview of the AI Basic Law

The AI Basic Law, officially known as the Basic Act on the Promotion of Artificial Intelligence and the Establishment of Trust, was passed during the National Assembly plenary session in December of last year and takes full effect on January 22 next year. That makes South Korea the first individual country to enact such comprehensive AI legislation, after the European Union, whose AI Act is expected to take full effect in August next year.

The law focuses on industrial development and on establishing trust through risk management. It divides AI into two main categories: ‘high-impact AI’ and ‘generative AI’. High-impact AI, defined as AI with a significant effect on life, safety, and basic rights, subjects operators to prior notification, verification, and certification obligations.

Proposed Amendments and Industry Concerns

The amendment proposed by Huang would postpone these regulations until January 2029. Huang noted, “Amid the intensifying global competition for AI supremacy, there are concerns that an immature regulatory policy may cause us to miss the golden time to become an AI powerhouse.”

Industry stakeholders have been vocal about the need for revisions to the ambiguous provisions in the AI Basic Law that do not align with the realities of the domestic AI ecosystem. The Ministry of Science and ICT has initiated the formation of a subsidiary law reform team and is currently drafting an enforcement decree. However, there are growing apprehensions about the feasibility of developing a thorough and effective enforcement decree given the limited time before the law’s implementation.

Key Issues Under Review

Some critical issues surrounding the enforcement decree of the AI Basic Law include:

  1. Definition of high-impact AI
  2. Mandatory watermarking
  3. Government’s investigative authority

One major critique is that the criteria defining high-impact AI are excessively abstract and broad. The law covers AI systems that significantly affect physical safety and basic rights or pose comparable risks, primarily in sectors such as energy, healthcare, transportation, and lending. Which specific AI systems fall into this category, however, remains unclear.

The Business Software Alliance (BSA), which includes prominent IT corporations such as Microsoft, OpenAI, and Amazon Web Services (AWS), argued that high-impact AI classification should depend on usage rather than system or industry specifics. For instance, it is challenging to classify AI as ‘high-impact’ if its sole application is generating credit scores, even within the lending sector.

Controversy Over Watermarking Regulations

The regulation mandating the labeling of AI-generated content is also contentious. AI is now frequently used as a supplementary tool in producing movies, webtoons, and animation, for example to create background images. The industry argues that attaching watermarks in such cases could compromise content quality and hinder creative work.

Concerns About Privacy and Cybersecurity

Furthermore, there are significant concerns that personal and sensitive information could leak, and cybersecurity threats could arise, during the government’s prior verification and certification of operators’ high-impact AI businesses and products. This underscores the urgency of establishing standards that prevent excessive fact-finding investigations and verifications.

A representative from the startup organization Startup Alliance worried that the insights of industry practitioners may not be adequately reflected, since neither companies operating AI models or services nor technical experts from the industrial sector are included in the subsidiary law reform team for the AI Basic Law.

The Need for Ongoing Legislative Adaptation

Choi Byeong-ho, a professor at the Korea University AI Research Institute, emphasized the rapid technological expansion within the AI industry and the significant risk that the law may not keep pace with technological advancements. He advocates for the preparation of subsidiary laws that allow for timely updates to the AI Basic Law in alignment with its objective of fostering industry growth.

The Ministry of Science and ICT remains committed to implementing the AI Basic Law in January as scheduled, after gathering feedback from the industry. The plan is to prepare the final draft of the enforcement decree by June at the earliest and to announce the decree, containing the detailed regulations, between July and August. A ministry official stated, “We are preparing an enforcement decree that minimizes regulations while focusing on promotion, gathering industry opinions as broadly as possible.”
