Can Korea’s AI Law Stop Deepfake Crimes as Musk’s Grok Comes Under Fire?
Elon Musk’s AI company xAI is embroiled in controversy over the proliferation of deepfake images generated by its models. As the company faces backlash, Korea’s Framework Act on the Development of Artificial Intelligence is set to take effect next week, raising questions about whether the legislation can effectively curb deepfake crimes.
Legislative Overview
This act is heralded as the world’s first comprehensive AI regulation, aiming to strengthen the responsibilities of AI operators and establish a foundation for trust in an AI-driven society. One of its primary goals is to prevent deepfake crimes.
However, skepticism persists regarding the law’s effectiveness in blocking xAI’s deepfake services or restricting access to them. Industry officials have reported that Grok, xAI’s AI model, continues to facilitate the creation of sexually explicit deepfake images on Musk’s social media platform, X.
International Reactions
In response to these developments, countries such as Malaysia and Indonesia have restricted access to the platform, while other nations have opened legal investigations. xAI has since limited its deepfake features to paid subscribers, but concerns remain about the continued availability of such services.
Key Provisions of the Act
The act, which takes effect on January 22, 2026, applies to overseas operators and requires them to designate domestic agents to fulfill its legal obligations. However, ambiguity in the transparency obligation to label deepfakes may impede timely prevention and response when dealing with foreign operators.
Importantly, the act requires AI-generated content to be labeled: deepfake content that is difficult to distinguish from reality must carry a visible watermark. Violators of this mandate may face correction orders and fines of up to 30 million won (approximately $20,300).
Implementation Challenges
Under these rules, Grok’s deepfake images would need to carry watermarks. However, the act includes a grace period of at least one year, delaying immediate enforcement. Analysts argue that even after this period, effectively blocking or restricting foreign AI services could prove difficult because of potential trade tensions.
Legal experts note that current laws may not offer robust solutions if overseas companies like xAI choose not to cooperate voluntarily. For instance, Jung Chang-woo, a lawyer at Lee & Ko, stated, “Under current laws, it is hard to do more than impose fines.”
Existing Legal Framework
Until the new AI act is fully operational, experts recommend handling deepfake cases through existing laws such as the Information and Communications Network Act or the Personal Information Protection Act. Yeo Hyun-dong, a lawyer at Yoon & Yang LLC, emphasized that “the binding force of the act alone is weak,” suggesting that it should be reinforced with sanctions for specific violations applied in conjunction with existing laws.
Government Stance
The government currently maintains a policy of minimal regulation while monitoring the situation. A representative from the Ministry of Science and ICT stated, “As AI technology is still developing through trial and error, we will watch the situation for now to allow for self-correction.”
The effectiveness of Korea’s AI legislation will be tested in the coming months, and the evolving landscape warrants continued scrutiny.