EU Relaxes AI Regulations While South Korea Pursues Minimal Oversight
The European Union (EU) has released the third draft of the guidelines meant to finalize the detailed rules of the AI Act, the world's first comprehensive artificial intelligence (AI) regulation. The draft marks a retreat from the EU's previously stringent regulatory stance, with noticeably softened language on AI and copyright. As global AI competition intensifies, calls are growing for South Korea's AI Basic Law, passed at the end of last year, to focus on fostering the industry rather than regulating it.
Overview of the EU’s AI Act
On March 11, 2025, the European Commission announced the third draft of the practical guidelines intended to finalize the provisions for general-purpose AI (GPAI) providers. It follows two earlier drafts and incorporates feedback from stakeholders and companies. The provisions targeting GPAI providers are expected to take effect in August 2025.
Under the AI Act, GPAI providers are obligated to:
- Provide technical documents and user manuals
- Comply with copyright guidelines
- Disclose summaries of the data used for training
The AI Act treats GPAI as AI services built on large language models (LLMs) such as OpenAI's 'ChatGPT' and Meta's 'LLaMA'. Violations can draw fines of up to 3% of global annual turnover.
Criticism of the Draft’s Relaxation
The release of this draft has prompted criticism that the previously regulation-heavy text has been watered down. The EU says the latest revision offers a 'streamlined structure' with more refined commitments than earlier drafts. Notably, the section on copyright in AI training data now relies on vague phrases such as 'best efforts' and 'reasonable measures' to mitigate copyright infringement, leaving their practical application uncertain.
In addition, the requirement to designate a single point of contact for copyright issues has been removed; the draft now says only that a contact point must be designated for affected rights holders. A provision allowing copyright complaints to be dismissed if they are 'clearly unfounded or excessive' has also raised concerns that repeated complaints from rights holders could simply be ignored.
Global Context and South Korea’s Response
The draft is widely viewed as a step back for the EU, which had set out to enforce the world's strongest AI regulations, and reflects growing concern that holding to its original plans could undermine the bloc's competitiveness in the global AI race. Recent developments such as the 'Stargate Project', the Trump administration's plan for large-scale investment in AI data centers, underscore how intense the national rivalry over AI model development has become.
In this context, there are calls for South Korea's 'AI Basic Law' to emphasize fostering over regulation. Passed on December 26, 2024, the law, the world's second comprehensive AI law after the EU's AI Act, defines high-impact AI as technology that significantly affects users' lives and safety and imposes corresponding obligations on AI operators. During its passage through the National Assembly, however, it drew controversy over granting excessive regulatory authority.
South Korea's Ministry of Science and ICT has signaled that it will keep regulation to a minimum as it drafts the subordinate regulations for the AI Basic Law, which takes effect in January 2026. Minister Yoo Sang-im said the ministry would include only the minimum necessary regulatory requirements, aiming to dispel industry concerns about excessive regulation.
Balancing Fostering and Regulation
Concerns have also been raised that South Korea's AI Basic Law lacks the specificity of the EU's AI Act. Experts stress the need to balance fostering the AI industry with appropriate regulation, so that the sector stays competitive while users' rights and interests are protected.