Japan AI Bill Promotes Research & Coordination Over Penalties
Japan is taking a different route in the global push to govern artificial intelligence (AI). Instead of imposing bans or defining risk categories, the country has passed a new bill to support AI research, development, and use. The focus is on planning, infrastructure, and involving all stakeholders, from the government to businesses. This contrasts with the European Union’s model, which leans heavily on regulation and enforcement.
What Japan’s AI Promotion Bill Covers
Japan is taking a structured, government-led approach, outlining how it expects AI to be developed and governed. To steer this effort, the law mandates the creation of an AI Strategy Headquarters, established within the country’s Cabinet and chaired by the Prime Minister. The bill describes AI as technology that imitates human cognitive skills such as reasoning, decision-making, and learning. The headquarters brings together stakeholders from across government, academia, business, and the public to push adoption in a coordinated way.
The law prioritizes:
- Supporting research and development from basic research to real-world application
- Promoting AI across sectors, including public services and industry
- Preventing misuse by ensuring transparency and aligning with international norms
- Actively participating in the international formulation of norms
- Building AI skills through education and training
Responsibilities are distributed across stakeholders:
- The Central Government will lead policy and provide financial and legislative support for AI development
- Local governments are expected to promote region-specific AI use
- Universities and research institutes will carry out comprehensive interdisciplinary AI-related research
- Companies are encouraged to build responsible AI systems that align with international norms and prevent harmful or inappropriate use
- Citizens are expected to learn about and engage with AI responsibly
Unlike the EU’s regulatory approach, Japan’s law does not introduce new penalties for AI misuse. It instead relies on existing legal frameworks such as the Penal Code and Copyright Act to address risks.
As stated in Article 3(4), AI, “if used for improper purposes or in an inappropriate manner, could result in criminal use, personal data leaks, or copyright infringement—ultimately affecting the peace of people’s lives and their rights and interests.” However, the law does not define what constitutes misuse or specify enforcement mechanisms, raising questions about how these harms will be addressed in practice.
The law allows the government to publicly name businesses involved in harmful or inappropriate uses of AI, such as those leading to criminal use, data leaks, or rights violations. While this adds public accountability, it is not backed by specific penalties or a dedicated enforcement mechanism under the law.
At the same time, Japan’s Copyright Act allows AI developers to use copyrighted material without prior approval under Article 30-4, provided the use is non-expressive, such as training algorithms or analyzing data. While this reduces compliance friction for industry, it also limits copyright protections without adding new safeguards in return.
Instead of strict enforcement, Japan’s approach relies on soft measures: investing in infrastructure like computing power, data storage, and large datasets, and expanding the talent pipeline. The government also plans to issue non-binding guidelines to address risks such as data misuse or violations of individual rights. But without clear accountability structures or legal obligations for AI developers, this strategy may fall short of actively preventing harm or ensuring redress when things go wrong.
What the EU AI Act Covers
In March 2024, the EU passed its Artificial Intelligence Act, a wide-ranging law that governs AI use across its member states. The law uses a tiered system to regulate AI based on the level of risk it poses to public safety, fundamental rights, or health.
It classifies AI systems into four categories:
- Prohibited: Systems that manipulate behavior, perform real-time remote biometric identification in public spaces, or enable social scoring are banned.
- High-risk: AI used in critical areas like healthcare, law enforcement, and education is allowed but subject to strict oversight and regulation.
- General Purpose AI (GPAI): Tools such as ChatGPT must disclose training data summaries and comply with EU copyright laws. If a general-purpose AI model poses systemic risks, it must meet additional regulatory requirements.
- Minimal risk: All other AI systems that do not fall under the above categories, such as spam filters.
The EU also allows for regulatory sandboxes to support innovation and exempts AI systems used for national security from the AI Act’s scope. Each member state must appoint its own regulator to oversee enforcement.
Comparing Approaches
Japan and the EU embody divergent strategies for regulating AI. Japan is pursuing innovation through coordination rather than control: its legislation sidesteps risk classifications and enforcement mechanisms, seeking instead to steer stakeholders toward shared objectives. The EU takes a more stringent approach. It sorts AI systems into risk classes and backs that tiering with binding obligations and sanctions. The bloc’s policy focuses on mitigating harm to individuals, public safety, and democratic processes while holding developers accountable throughout.
Japan centralizes responsibility in a Cabinet-level institution headed by the Prime Minister, which will draft the national AI Plan. Within the EU, member states ensure compliance through designated national authorities, including market surveillance bodies and fundamental rights regulators. While decentralized, these authorities operate under a harmonized EU framework set by the AI Act and coordinate via the European AI Office and AI Board to ensure consistent enforcement.
The contrast is equally apparent in how the two handle general-purpose AI (GPAI). The EU requires detailed disclosures from GPAI developers, including summaries of training datasets and safeguards against systemic risks, and it ties GPAI to EU copyright compliance. Japan’s law does not yet contain GPAI-specific provisions, though the country does promote voluntary alignment with international norms.
In essence, Japan is using soft guidance and collaboration to shape AI development, while the EU is setting firm legal limits to control risks and embed safeguards from the outset.
Why This Matters
India’s Current Position
India has yet to chart out a well-defined strategy for the governance of AI. While the government approved the IndiaAI Mission in March 2024 with a budget outlay of Rs. 10,371.92 crore, the initiative is primarily focused on promoting AI research and development rather than regulation. Its seven components comprise building GPU-based compute infrastructure, developing indigenous Large Multimodal Models (LMMs), providing access to non-personal datasets, promoting AI application development, expanding AI education, supporting startups, and implementing “Safe & Trusted AI” projects.
Unlike Japan and the EU, which are pushing ahead with structured regulatory frameworks, India’s governance efforts remain fragmented, lacking a centralized legislative or enforcement mechanism.
IT Minister Ashwini Vaishnaw has previously said that India doesn’t need heavy-handed regulation similar to the EU’s. He pointed to India’s telecom and data privacy laws as examples of how innovation can grow alongside safeguards. The country’s recent policy efforts, such as moves to curb deepfakes and algorithmic discrimination, signal a preference for addressing harmful outcomes rather than regulating AI systems as a whole. But as concerns around profiling, misinformation, deepfakes, and copyright infringement grow, this light-touch approach may come under pressure.
NITI Aayog had suggested a liability framework and safe harbor protections back in 2018, but it is still unclear how or when these ideas might be formalized. In the meantime, the absence of a central AI authority could become a serious hurdle if something goes awry.
What Japan and the EU Can Teach India
Japan’s model might be worth exploring. It prioritizes coordination, infrastructure, and partnership, elements that could help India scale up computing resources, engage academia, and streamline policy across agencies. The challenge is achieving this without sacrificing regulatory safeguards.
The EU takes a more stringent route. Its AI Act shows what strong, proactive regulation looks like. By legally addressing AI risks, the EU hopes to prevent misuse from the start. Without something similar, India may struggle to stop harmful AI applications from slipping through the cracks.
India now faces a decision. Should it pursue a trust-based model like Japan’s? Or build a legal and institutional framework similar to the EU’s? A balanced approach supporting innovation while protecting rights may be its best way forward.