South Korean Law to Regulate AI Takes Effect in World First
SEOUL, South Korea — South Korea on Thursday became the first country to bring into force a comprehensive law regulating artificial intelligence (AI), legislation that includes specific provisions targeting deepfakes.
The AI Basic Act
Announcing the milestone, President Lee Jae Myung said, “The AI Basic Act comes into full effect today.” The law mandates that companies notify users in advance when their services or products use generative AI. It also requires clear labeling of AI-generated content, such as deepfakes, that may be difficult to distinguish from reality.
Passed in December 2024, the act aims to “establish a safety- and trust-based foundation to support AI innovation,” according to the Ministry of Science and ICT. Violators of this law could face fines up to 30 million won (approximately $20,400).
Global Context
South Korean media outlets report that this law is the first of its kind to be fully enacted worldwide. While the European Parliament adopted what it called the “world’s first rules on AI” in June 2024, those regulations are being implemented gradually and are not expected to be fully applicable until 2027.
Under its Artificial Intelligence Act, the European Union has over the past year allowed regulators to ban AI systems deemed to pose “unacceptable risks” to society. These include systems that identify individuals in real time using public cameras or assess criminal risk based solely on biometric data.
Investment in AI
In line with its ambitions, South Korea plans to triple its spending on AI this year. The new legislation designates 10 sensitive fields, including nuclear power, criminal investigations, loan screening, education, and medical care, that will be subject to heightened transparency and safety requirements for AI.
Despite the optimism, some skepticism surrounds the law’s regulatory implications. Lim Mun-yeong, vice chairman of the presidential council on national AI strategy, expressed concerns, stating, “The nation’s transition toward AI, however, remains in its infancy with insufficient infrastructure and systems.” He emphasized the necessity to accelerate AI innovation to navigate this “unknown era.”
Lim also added that if the situation demands, “the government will accordingly suspend regulation, monitor the situation, and respond appropriately.”
Deepfakes and Safety Measures
The issue of deepfakes has gained renewed attention recently, particularly following controversies surrounding Elon Musk’s Grok AI chatbot, which faced backlash for enabling users to generate inappropriate images of real people, including minors.
The South Korean science ministry advocates for applying digital watermarks or similar identifiers to AI-generated content as a “minimum safety measure” to prevent misuse, particularly concerning manipulated videos or deepfakes. “It is already a global trend adopted by major international companies,” the ministry declared.
Comparative Legislation
In a notable development in California, a landmark law regulating AI chatbots was signed, requiring operators to implement “critical” safeguards for user interactions. The law also allows individuals to sue operators when safety failures lead to tragic outcomes.
As South Korea steps into this new regulatory landscape, the world watches closely to see how these pioneering measures will shape the future of AI technology.