South Korea’s ‘World-First’ AI Laws Face Pushback Amid Bid to Become Leading Tech Power

South Korea has launched what has been described as the world’s most comprehensive set of AI laws, a framework that could serve as a model for other countries. The new regulations, however, have already met considerable pushback.

Overview of the Legislation

The new laws require companies to label AI-generated content, and the requirement has drawn criticism from two directions: local tech startups argue the rules are overly restrictive, while civil society groups contend they do not go far enough to protect consumers.

The AI Basic Act, which took effect last week, has been introduced amid increasing global concerns regarding artificially created media and automated decision-making processes. Governments worldwide are struggling to keep pace with the rapid advancements in technology.

Key Provisions of the AI Basic Act

Under the new legislation, companies providing AI services must:

  • Add invisible digital watermarks for clearly artificial outputs, such as cartoons or artwork.
  • Apply visible labels for realistic deepfakes.
  • Conduct risk assessments and document decision-making processes for high-impact AI systems, such as those used in medical diagnosis, hiring, and loan approvals.

Furthermore, extremely powerful AI models will require safety reports, though the threshold is set so high that government officials acknowledge no current model meets it.

Companies that fail to comply with the regulations may face fines of up to 30 million won (approximately £15,000). Nevertheless, the government has promised a grace period of at least a year before penalties are enforced.

Ambition to Become a Leading AI Power

This legislation is being touted as the “world’s first” to be fully enforced by a single country, aligning with South Korea’s ambition to rank among the top three AI powers globally, alongside the United States and China. Government officials assert that the law is primarily focused on promoting industry rather than imposing restrictions.

Alice Oh, a computer science professor at the Korea Advanced Institute of Science and Technology (KAIST), noted that while the law is not perfect, it aims to evolve without hindering innovation. Despite this, a survey from the Startup Alliance revealed that 98% of AI startups are unprepared for compliance, leading to widespread frustration within the industry.

Concerns Over Compliance and Competitive Imbalance

Companies must determine for themselves whether their systems qualify as high-impact AI, a process critics say is lengthy and creates uncertainty. There are also concerns about competitive imbalance: all Korean companies are subject to the regulations regardless of size, whereas foreign firms such as Google and OpenAI must comply only if they meet specific thresholds.

Civil Society Concerns

The push for regulation has unfolded against a backdrop of rising civil society concerns that the legislation does not go far enough. A 2023 report indicated that South Korea accounts for 53% of all global deepfake pornography victims. An investigation in August 2024 revealed extensive networks creating and distributing AI-generated sexual imagery.

Although the law predates this crisis (the first AI-related bill was introduced in July 2020), it stalled repeatedly, partly over provisions accused of prioritising industry interests over citizen protection.

Civil society organizations argue that the new legislation offers limited protection to individuals harmed by AI systems. A joint statement from four organizations, including Minbyun, a collective of human rights lawyers, criticized the law for containing minimal provisions to protect citizens from AI risks.

Expert Opinions on Regulatory Framework

Experts have highlighted that South Korea is pursuing a different regulatory path compared to other jurisdictions. Unlike the EU’s strict risk-based model, or the US and UK’s sector-specific approaches, South Korea has opted for a more flexible, principles-based framework, described as trust-based promotion and regulation.

Melissa Hyesun Yoon, a law professor at Hanyang University specializing in AI governance, stated that Korea’s framework could serve as a valuable reference point in global AI governance discussions.
