AI Basic Act Leaves Businesses Uncertain Amid Vague Regulations

Korea’s revised Basic Act on the Development of Artificial Intelligence (AI), effective January 22, represents the world’s first comprehensive legal framework for AI. However, the act imposes broad new obligations on companies developing or deploying AI, and industry players are struggling to interpret these regulations amid growing concerns over their vague standards.

The Objectives of the AI Basic Act

The regulations aim to balance AI innovation with safety and public trust, establishing a national governance framework led by a committee chaired by the president. Key mandates include:

  • An AI master plan every three years
  • Strengthened powers for the presidential committee
  • Government support for research and development, training data infrastructure, and special measures for small and medium-sized enterprises (SMEs) and startups

Industry Obligations

The act requires that AI-generated content be disclosed, employing measures like watermarks to ensure transparency. Additionally, systems classified as “high-impact” face additional risk-management obligations.

Despite these obligations, companies are left navigating murky definitions and unclear standards. This uncertainty raises fears that compliance could hinder innovation. Professor Lee Seong-yeob from Korea University warns that engineers might hesitate to proceed with projects, fearing potential breaches of the law.

Mandatory Transparency Yet Unclear Guidance

Entities using AI for commercial purposes must use visible watermarks to notify users when content is AI-generated. Nonetheless, practical details are missing, especially regarding:

  • When a watermark is required
  • Who must apply it

This ambiguity could create loopholes, as firms using generative tools may not be classified as AI service providers and thus are exempt from labeling duties. This lack of clarity extends to platforms hosting AI-assisted works, which face fewer obligations unless they operate the underlying models.
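The act does not prescribe a technical format for the disclosure, so in practice a provider must choose its own. The sketch below shows one plausible approach, pairing a visible notice with machine-readable provenance metadata; the `label_ai_content` helper and its field names are illustrative assumptions, not anything mandated by the law or its enforcement decree.

```python
import json

def label_ai_content(text: str, generator: str) -> dict:
    """Attach a visible notice and machine-readable provenance to generated text.

    The field names here are illustrative assumptions, not a format
    prescribed by the AI Basic Act.
    """
    return {
        "content": text,
        "visible_notice": "This content was generated by AI.",
        "provenance": {
            "generator": generator,
            "ai_generated": True,
        },
    }

# Example: wrap a piece of generated copy before publishing it.
record = label_ai_content("Draft product description.", "demo-model-v1")
print(json.dumps(record, ensure_ascii=False, indent=2))
```

A real deployment would also need to decide where the visible notice is rendered and whether downstream platforms preserve the metadata, which is exactly the kind of detail the current text leaves open.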

High-Impact AI: A Controversial Definition

Another contentious aspect of the act is the classification of high-impact AI systems, which are defined as those potentially affecting human life, safety, or fundamental rights. However, the act fails to establish quantitative thresholds such as specific error rates or incident probabilities that would automatically categorize a system as high-impact.

Vague terms like “significant impact” and “risk of harm” may leave too much room for regulatory judgment, complicating investment planning for large-scale AI deployments. If businesses cannot predict whether their models will be treated as high-impact, they may delay launches or shift projects abroad.
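Absent statutory thresholds, compliance teams are left to approximate the classification themselves. A minimal sketch of such an internal triage rule is below; the domain list, the `needs_high_impact_review` helper, and its criteria are assumptions for illustration, not a legal test.

```python
# Domains commonly treated as sensitive in AI risk frameworks.
# This list is an assumption, not drawn from the AI Basic Act.
SENSITIVE_DOMAINS = {"healthcare", "hiring", "credit", "criminal_justice", "energy"}

def needs_high_impact_review(domains: set, affects_individuals: bool) -> bool:
    """Flag a system for legal review before launch.

    Illustrative internal screen only: a system touching a sensitive
    domain and directly affecting individuals gets escalated to counsel.
    """
    return affects_individuals and bool(domains & SENSITIVE_DOMAINS)

# A hiring screener affecting applicants would be escalated;
# a game recommender would not.
print(needs_high_impact_review({"hiring"}, True))
print(needs_high_impact_review({"gaming"}, True))
```

The point of such a rule is conservatism: when the statute's own terms are vague, firms tend to over-flag rather than risk a post-hoc finding that a deployed system was high-impact.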

Call for Revision During the Grace Period

During the current one-year grace period, while penalties are not yet enforced, there is a pressing need to refine the legislation. Professor Lee emphasizes that the law should be adjusted in this preparation phase so that it does not become a barrier to development.

Industry Response

As the grace period unfolds, tech companies are reorganizing internal governance to comply with the new rules. Major telecommunications firms are reviewing their compliance frameworks and establishing risk management protocols. Tech giants like Naver and Kakao are also aligning their products with the transparency obligations, having previously introduced internal AI governance frameworks voluntarily.

As the AI landscape continues to evolve, the success of the AI Basic Act will depend on clear guidance and the adaptability of the legislation to meet the needs of a rapidly changing technology environment.
