AI Arms Race: The Gap in Global Cooperation

Military AI Adoption Is Outpacing Global Cooperation

Militaries worldwide are rapidly adopting artificial intelligence (AI) while international cooperation flounders. With the United States and China less engaged at a recent AI military summit, middle powers appear to face a choice: lead the conversation or proceed into a future devoid of guardrails.

This was apparent last week in A Coruña, Spain, where state delegations and representatives from the AI industry, academia, and civil society convened for the third multistakeholder summit on Responsible Artificial Intelligence in the Military Domain (REAIM), which aims to direct the future of international cooperation in the field. The previous two summits produced “outcome documents” that were largely backed by the delegations in attendance. Both the 2023 “Call to Action” and the 2024 “Blueprint for Action” were endorsed by about sixty countries. This year, only thirty-five nations—neither the United States nor China among them—endorsed the outcome document, “Pathways to Action.”

Though not enforceable, the REAIM outcome documents, which typically involve common-sense commitments such as militaries using AI in ways that comply with international humanitarian law, highlight what countries view as the critical concerns in the coming year. The diminished support for this year’s document illustrates the broader geopolitical splintering currently underway, especially between the United States and Europe. The question now facing REAIM is whether middle powers will drive forward AI rules of the road and confidence-building measures if the great powers become increasingly aloof.

The Donald Trump administration has scrambled U.S. relationships, especially with its NATO partners. If countries are uncertain about their standing and relationships with others—namely with the United States and, to a lesser extent, China—it is difficult for them to commit to international cooperation or sign statements of principle that could be opposed by the great powers. Indeed, the United States and China had substantially smaller delegations at REAIM in Spain than at the 2024 summit in South Korea.

The growing gap between international dialogue on military AI, which tends to emphasize risks and potential constraints on its use, and the accelerating efforts of militaries worldwide to integrate AI should concern all nations. Many traditional multilateral avenues for discussing global governance of AI military applications (including the UN Group of Governmental Experts addressing lethal autonomous weapon systems) continue at the glacial pace of international bureaucracy, much as they have since the 2010s. Yet states are already developing and experimenting with—if not outright adopting, scaling, and deploying—AI capabilities. In ongoing conflicts such as Israel-Gaza and Russia-Ukraine, new AI tools, techniques, and AI-enabled systems are already being used to generate efficiencies and power on the battlefield. As UN efforts to create binding regulations on military AI intensify, particularly for autonomous weapon systems, multilateral negotiations run the risk of becoming increasingly disconnected from on-the-ground realities.

What militaries want right now is to figure out how to use AI safely and effectively, as they have done with other technologies in the past. If this divergence continues unchecked, the risks are twofold. In the long term, policy efforts could become divorced from the technical realities of the systems they seek to govern. In the near term, states are deploying these technologies with a patchwork of haphazard policies—if any—and no opportunity to gain valuable insights on best practices from others.

Given that the United States is stepping back from leadership in these spaces, middle powers must now grapple with whether and how to steer confidence-building measures and cooperation on military AI. This moment could also be seen as an opportunity, however, because the REAIM process has been led by middle powers from the outset. The Netherlands initiated the effort in 2023; South Korea and Singapore hosted the second summit in 2024; and Spain hosted the third event last week. Two of those countries are NATO partners whose relationships with the United States have fundamentally changed over the last year. As a result, they may feel uncertain about whether to continue their partnership with a currently unpredictable United States or to pursue their security goals more independently.

One path forward is for those middle powers focused on AI adoption to advance the REAIM process, since they created it in the first place. They can use the summit’s momentum and convening power as a locus for international cooperation and capacity building on military AI among the non-great-power players. While the United States and China will always be invited, middle powers should not worry about the extent to which they participate. Though this could make broad international consensus less likely, REAIM could offer middle powers and the Global South crucial capacity building and rules of the road, particularly if the summit’s process absorbs some of the capacity-building work previously done under the U.S.-led Political Declaration on Responsible Military Use of AI and Autonomy. The alternative would be to scale back efforts such as REAIM and wait for the dust to settle on Washington’s changing approach to the world. That would be a mistake.

The REAIM process has been an important bridge between UN efforts led by diplomats, which often focus on regulation and restrictions, and the reality of accelerating military investments and the fielding of AI across various use cases. The changing international landscape is making this bridging role more challenging—as evidenced last week in Spain—but it remains essential. Decisions made now could ripple through confidence-building measures and other opportunities to reduce the military risk of AI use without constraining states from using this important technology. If middle powers choose to take the more difficult path, they could be the ones who define those outcomes.
