South Korea’s ‘World-First’ AI Laws Face Pushback Amid Bid to Become Leading Tech Power
South Korea has embarked on a major effort to regulate AI, enacting what has been described as the world’s most comprehensive set of AI laws, one that could serve as a model for other countries. The new regulations, however, have already encountered considerable pushback.
Overview of the Legislation
The newly enacted laws require companies to label AI-generated content. The requirement has drawn criticism from local tech startups, which argue the rules are overly restrictive, while civil society groups contend they do not go far enough to protect consumers.
The AI Basic Act, which took effect last week, has been introduced amid increasing global concerns regarding artificially created media and automated decision-making processes. Governments worldwide are struggling to keep pace with the rapid advancements in technology.
Key Provisions of the AI Basic Act
Under the new legislation, companies providing AI services must:
- Add invisible digital watermarks to clearly artificial outputs, such as cartoons or artwork.
- Apply visible labels for realistic deepfakes.
- Conduct risk assessments and document decision-making processes for high-impact AI systems, such as those used in medical diagnosis, hiring, and loan approvals.
Furthermore, extremely powerful AI models will require safety reports, though the threshold is set so high that government officials acknowledge no current models meet it.
Companies that fail to comply may face fines of up to 30 million won (approximately £15,000), although the government has promised a grace period of at least a year before penalties are enforced.
Ambition to Become a Leading AI Power
This legislation is being touted as the “world’s first” to be fully enforced by a single country, aligning with South Korea’s ambition to rank among the top three AI powers globally, alongside the United States and China. Government officials assert that the law is primarily focused on promoting industry rather than imposing restrictions.
Alice Oh, a computer science professor at the Korea Advanced Institute of Science and Technology (KAIST), noted that while the law is not perfect, it aims to evolve without hindering innovation. Despite this, a survey from the Startup Alliance revealed that 98% of AI startups are unprepared for compliance, leading to widespread frustration within the industry.
Concerns Over Compliance and Competitive Imbalance
Companies must determine for themselves whether their systems qualify as high-impact AI, a process criticized as lengthy and a source of uncertainty. Critics also point to a competitive imbalance: all Korean companies are subject to the regulation regardless of size, whereas only foreign firms that meet specific thresholds, such as Google and OpenAI, must comply.
Civil Society Concerns
The push for regulation has unfolded against a backdrop of rising civil society concerns that the legislation does not go far enough. A 2023 report indicated that South Korea accounts for 53% of all global deepfake pornography victims. An investigation in August 2024 revealed extensive networks creating and distributing AI-generated sexual imagery.
Although the law’s origins predate this crisis, with the first AI-related bill introduced in July 2020, it has faced repeated stalls, partly due to provisions accused of prioritizing industry interests over citizen protection.
Civil society organizations argue that the new legislation offers limited protection to individuals harmed by AI systems. A joint statement from four organizations, including Minbyun, a collective of human rights lawyers, criticized the law for containing minimal provisions to protect citizens from AI risks.
Expert Opinions on Regulatory Framework
Experts have highlighted that South Korea is pursuing a different regulatory path from other jurisdictions. Unlike the EU’s strict risk-based model or the US and UK’s sector-specific approaches, South Korea has opted for a more flexible, principles-based framework, described as trust-based promotion and regulation.
Melissa Hyesun Yoon, a law professor at Hanyang University specializing in AI governance, stated that Korea’s framework could serve as a valuable reference point in global AI governance discussions.