Korea’s Groundbreaking AI Law: Can It Combat Deepfake Crimes?

Elon Musk’s AI company xAI is embroiled in controversy over the proliferation of deepfake images. As the company faces backlash, Korea’s Framework Act on the Development of Artificial Intelligence is set to take effect next week, raising questions about whether the legislation can effectively curb deepfake-related abuses.

Legislative Overview

This act is heralded as the world’s first comprehensive AI regulation, aiming to strengthen the responsibilities of AI operators and establish a foundation for trust in an AI-driven society. One of its primary goals is to prevent deepfake crimes.

However, skepticism persists regarding the law’s effectiveness in blocking xAI’s deepfake services or restricting access to them. Industry officials have reported that Grok, xAI’s AI model, continues to facilitate the creation of sexually explicit deepfake images on Musk’s social media platform, X.

International Reactions

In response to these developments, countries like Malaysia and Indonesia have restricted access to the platform, while other nations have initiated legal investigations. As a consequence, xAI has limited deepfake features to paid subscribers. Despite these measures, concerns remain about the ongoing availability of such services.

Key Provisions of the Act

The act, effective January 22, 2026, applies to overseas operators and mandates that they designate domestic agents to fulfill their legal obligations. However, the transparency obligation to label deepfakes remains unclear in its details, which may impede timely prevention and response when authorities must deal with foreign operators.

Importantly, the act requires that AI-generated content be labeled, including visible watermarks on deepfake content that is difficult to distinguish from reality. Violators of this mandate may face correction orders and fines of up to 30 million won (approximately $20,300).

Implementation Challenges

Under the new rules, Grok’s deepfake images will need to carry watermarks. However, the act includes a grace period of at least one year, delaying immediate enforcement. Analysts argue that even after this period, effectively blocking or restricting foreign AI services could prove complicated because of potential trade tensions.

Legal experts note that current laws may not offer robust solutions if overseas companies like xAI choose not to cooperate voluntarily. For instance, Jung Chang-woo, a lawyer at Lee & Ko, stated, “Under current laws, it is hard to do more than impose fines.”

Existing Legal Framework

Until the new AI act is fully operational, experts recommend addressing deepfake cases through existing laws such as the Information and Communications Network Act or the Personal Information Protection Act. Yeo Hyun-dong, a lawyer at Yoon & Yang LLC, emphasized that “the binding force of the act alone is weak,” suggesting that regulations must be enhanced by sanctions for specific violations in conjunction with existing laws.

Government Stance

The government currently maintains a policy of minimal regulation while monitoring the situation. A representative from the Ministry of Science and ICT stated, “As AI technology is still developing through trial and error, we will watch the situation for now to allow for self-correction.”

This evolving landscape calls for ongoing scrutiny as the effectiveness of Korea’s AI legislation will be tested in the coming months.
