U.S. and China Reject Military AI Governance Framework

The Third Summit on the Responsible Use of Artificial Intelligence in the Military Domain took place from February 4 to 5, 2026, in A Coruña, Spain. The summit focused on how military AI can be used to strengthen international peace and security, while also addressing the risks of irresponsible use and system failures.

China’s Position on Military AI

Led by Li Chijiang, Deputy Director-General of the Department of Arms Control of China’s Ministry of Foreign Affairs, the Chinese delegation emphasized a human-centered approach to military artificial intelligence. During the summit, Li advocated for:

  • Maintaining strategic balance and stability while abandoning the pursuit of absolute military advantage.
  • Adhering to international humanitarian law and ensuring that weapon systems remain under human control.
  • Implementing the principle of “AI for good” to promote military applications of AI that contribute to peace and security.
  • Establishing agile governance that balances security controls with technological development.
  • Supporting multilateralism and the role of the United Nations in governance frameworks.

Li noted that the responsible use of AI in military contexts is a shared challenge that concerns the future of humanity. China aims to promote a governance philosophy characterized by extensive consultation, joint contribution, and shared benefits.

Why the U.S. and China Did Not Sign

Although 85 countries participated in the summit, only 35 signed the joint declaration on regulating AI technologies in warfare. Notably, both the United States and China declined to sign.

U.S. Concerns

The U.S. refusal to endorse binding rules on military AI is driven by strategic considerations. Key concerns include:

  • Fears that international regulations could limit the flexibility required for rapid technological advancements.
  • A preference for establishing exclusive governance frameworks within its alliance system to maintain technological dominance.
  • A need to preserve strategic ambiguity regarding autonomous weapons and battlefield AI decision-making.

China’s Perspective

China’s decision not to sign stems from concerns about vague principles such as “responsible use” and potential biases in the declaration that could entrench Western technological hegemony. Chinese officials argue that:

  • The declaration lacks mechanisms to offset the advantages of early-mover states.
  • Existing frameworks might undermine the technological autonomy and security of developing nations.

Shared Structural Obstacles

Both nations face shared challenges that complicate the signing of the declaration:

  • The sensitive nature of military AI makes it difficult to verify and enforce any international regulations.
  • The rapid pace of AI technological advancement outstrips the slower cycles of rule-making, rendering many provisions ineffective against real-world risks.

Ultimately, both China and the U.S. view the declaration as “incomplete” and lacking practical binding force, which diminishes its value as an instrument for governance in the military AI domain.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...