Unclear Guidance and Vague Terms in AI Basic Act Leave Businesses in Limbo
Korea’s revised Basic Act on the Development of Artificial Intelligence (AI), effective January 22, represents the world’s first comprehensive legal framework for AI. However, with broad new obligations imposed on companies developing or deploying AI technologies, industry players are struggling to interpret the rules amid growing concerns that the act’s standards are too vague.
The Objectives of the AI Basic Act
The regulations aim to balance AI innovation with safety and public trust, establishing a national governance framework led by a committee chaired by the president. Key mandates include:
- An AI master plan every three years
- Strengthened powers for the presidential committee
- Government support for research and development, training data infrastructure, and special measures for small and medium-sized enterprises (SMEs) and startups
Industry Obligations
The act requires that AI-generated content be disclosed, employing measures like watermarks to ensure transparency. Additionally, systems classified as “high-impact” face risk controls.
Despite these obligations, companies are left navigating murky definitions and unclear standards. This uncertainty raises fears that compliance could hinder innovation. Professor Lee Seong-yeob from Korea University warns that engineers might hesitate to proceed with projects, fearing potential breaches of the law.
Mandatory Transparency Yet Unclear Guidance
Entities using AI for commercial purposes must notify users, through measures such as visible watermarks, when content is AI-generated. Nonetheless, practical details are missing, especially regarding:
- When a watermark is required
- Who must apply it
This ambiguity could create loopholes: firms that merely use generative tools may not be classified as AI service providers, and may thus be exempt from labeling duties. The lack of clarity extends to platforms hosting AI-assisted works, which face fewer obligations unless they operate the underlying models.
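To illustrate what a machine-readable disclosure might look like in practice, here is a minimal sketch of a provenance tag attached to generated content. This is purely hypothetical: the act does not prescribe any specific format, and every field name below is an assumption chosen for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(content: bytes, model: str) -> dict:
    """Build a minimal, hypothetical provenance tag for AI-generated content.

    The field names are illustrative assumptions, not a format mandated
    by the AI Basic Act or any standard.
    """
    return {
        "ai_generated": True,               # explicit disclosure flag
        "model": model,                     # which system produced the content
        "sha256": hashlib.sha256(content).hexdigest(),  # binds tag to content
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    record = provenance_record(b"example generated image bytes", "demo-model")
    print(json.dumps(record, indent=2))
```

A record like this could accompany a visible watermark, letting downstream platforms detect and preserve the disclosure even when the content is re-hosted, which is exactly the gap the current rules leave open.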
High-Impact AI: A Controversial Definition
Another contentious aspect of the act is the classification of high-impact AI systems, which are defined as those potentially affecting human life, safety, or fundamental rights. However, the act fails to establish quantitative thresholds such as specific error rates or incident probabilities that would automatically categorize a system as high-impact.
Vague terms like “significant impact” and “risk of harm” may leave too much room for regulatory judgment, complicating investment planning for large-scale AI deployments. If businesses cannot predict whether their models will be treated as high-impact, they may delay launches or shift projects abroad.
Call for Revision During the Grace Period
During the current one-year grace period for implementation, while penalties are not yet enforced, there is a pressing need to refine the legislation. Professor Lee emphasizes that the law should be adjusted in this preparation phase to prevent it from becoming a barrier to development.
Industry Response
As the grace period unfolds, tech companies are reorganizing internal governance to comply with the new rules. Major telecommunications firms are reviewing their compliance frameworks and establishing risk management protocols. Tech giants like Naver and Kakao are also aligning their products with the transparency obligations, having previously introduced internal AI governance frameworks voluntarily.
As the AI landscape continues to evolve, the success of the AI Basic Act will depend on clear guidance and the adaptability of the legislation to meet the needs of a rapidly changing technology environment.