Japan’s Bold Move Towards AI Legislation

Japan’s Initiative for AI Legislation

On February 4, 2025, the Japanese Government announced its ambition to establish Japan as “the most AI-friendly country in the world,” signaling a lighter regulatory approach than that of the EU and other jurisdictions. The announcement was followed by two significant developments: the submission of an AI bill to Japan’s Parliament and proposals by Japan’s Personal Information Protection Commission (PPC) to amend the Act on the Protection of Personal Information (APPI) to facilitate the use of personal data for AI development.

The AI Bill: A New Regulatory Framework

On February 28, 2025, the Japanese Government submitted its “Bill on the Promotion of Research, Development and Utilization of Artificial Intelligence-Related Technologies” (AI Bill) to Parliament. If enacted, this bill would represent Japan’s first comprehensive legislation on AI.

The AI Bill imposes a single obligation on private-sector entities that use AI technology: they must “cooperate” with government-led initiatives regarding AI. This obligation is expected to extend to entities that develop, provide, or use AI-based products or services, although what such cooperation entails remains unclear.

In contrast, the AI Bill requires the Japanese Government to:

  • Develop AI guidelines aligned with international standards
  • Collect information and conduct research on AI-related technologies

Consequently, private-sector entities may be required to comply with these government-issued guidelines and participate in government-led data collection and research initiatives.

As for territorial scope, the AI Bill does not explicitly state whether it applies to companies outside Japan. Nevertheless, the Japanese Government has emphasized the importance of applying AI regulations to foreign companies, suggesting that these obligations could extend beyond Japan’s borders.

Notably, the AI Bill does not prescribe penalties for non-compliance. However, it does require the Japanese Government to assess instances where improper or inappropriate use of AI technologies has harmed individual rights or interests and to take necessary actions based on the findings. In severe cases of rights violations, the Government may issue guidance for compliance or publicly disclose the names of offending entities.

Proposed Amendments to the Data Protection Law

On February 5, 2025, the PPC proposed introducing exemptions to the APPI’s requirement for obtaining a data subject’s consent in specific scenarios:

  • When collecting sensitive personal data (e.g., medical history or criminal records)
  • When transferring personal data to third parties

The PPC argues that if personal data is utilized for generating statistical information or developing AI models, where results cannot be traced back to individuals, the risk of infringing on individual rights is minimal.

The PPC suggests that AI developers should be able to collect publicly available sensitive personal data without obtaining consent from data subjects, provided that the data will solely be used for AI model development. Furthermore, data controllers should be able to share such data with AI developers without requiring consent from the data subject.

While a specific draft amendment to the APPI has yet to be released, the PPC’s proposals signify a notable move towards promoting AI development through the relaxation of data protection restrictions.

Conclusion

Japan’s efforts to adopt AI-friendly legislation reflect a strategic move to enhance its position in the global AI landscape. By aiming to create a less restrictive regulatory environment, Japan is poised to attract innovation and investment in AI technologies, while also navigating the complexities of personal data protection.

The ongoing developments in AI regulation will be closely monitored as Japan seeks to balance technological advancement with the safeguarding of individual rights and interests.
