AI Warfare: Rethinking Security and Governance

Security in the Age of AI Warfare Requires Capability and Governance

“Humanity must not entrust its fate to the black box of algorithms.” This warning from the United Nations Secretary-General underscores the urgent need for oversight as artificial intelligence (AI) becomes integral to modern warfare.

The Shift Towards AI in Warfare

The international order is rapidly shifting towards a reality where AI is a decisive factor in conflicts. Recent conflicts, including the Russia-Ukraine war, the Israel-Hamas war, and the U.S.-Israel confrontations with Iran, illustrate the growing integration of AI into key stages of the kill chain.

Unlike the precision warfare witnessed during the Gulf War, where technology played a supportive role, AI now plays a coordinating role on the battlefield, reshaping both the speed and structure of warfare. In the Iran conflict, for instance, AI integrates data from satellites, signals intelligence, drones, and radar in real time, compressing the kill chain to mere minutes.

Evolution of Military Operations

This transformation enables the simultaneous execution of multidomain operations under frameworks such as Combined Joint All-Domain Command and Control. The capacity to strike thousands of targets in a short timeframe reflects a significant structural change in military operations, a shift rooted in Project Maven, initiated during the Trump administration and expanded under President Biden.

AI’s Role in Intelligence and Decision-Making

Post-September 11, a bipartisan commission criticized intelligence agencies for their inability to “connect the dots.” Today, AI is increasingly filling that gap by integrating information from multiple channels into a common operational picture. The kill chain is evolving into a “kill web” that includes cyber operations to neutralize adversaries.

During the Responsible AI in the Military Domain (REAIM) summit in September 2024, the United States outlined six key domains for military AI application: intelligence, surveillance, and reconnaissance; command and control; logistics; human-machine teaming; and operations in the space, cyber, and electromagnetic domains.

The Rise of Algorithmic Warfare

The U.S. Department of Defense now speaks of “algorithmic warfare” and “AI-enabled battlefield operations,” while China promotes a doctrine of “intelligentized warfare.” As discussions in Washington increasingly focus on entering an AI war era, the implications are profound.

Recent conflicts showcase this shift: drone swarms and loitering munitions in Ukraine, along with targeting systems such as Lavender and Gospel in Gaza, exemplify how AI accelerates targeting and operational tempo. Even at this early stage, military AI is reshaping the pace of warfare beyond conventional weapon systems, while the role of human decision-making appears increasingly limited.

The Risks of Autonomous Weapons

The temptation to transition toward fully autonomous weapons systems is growing. This evolution elevates AI capabilities to the core of military rivalry, where victory in unseen domains can determine battlefield outcomes. Some analysts argue that AI capabilities now hold strategic significance akin to tactical nuclear weapons.

Former U.S. Secretary of State Henry Kissinger predicted that AI would define the 21st-century global order, a forecast now nearing reality. Yet, the rapid pace of military AI development far exceeds global governance efforts, raising concerns about maintaining human control over autonomous systems.

The Need for Governance

Intergovernmental discussions on lethal autonomous weapons systems and United Nations resolutions are ongoing, but the gap between norms and reality continues to widen. A major concern is how to preserve human oversight and define acceptable levels of autonomy in military applications.

Recent tensions involving the U.S. Department of Defense and AI firm Anthropic highlight the differing perspectives on these issues. As AI-driven decision-making accelerates, strategies such as “left of launch” become feasible, but this also increases the risks of miscalculations and civilian casualties.

A study led by Professor Kenneth Payne found that AI models chose to use nuclear options in 20 out of 21 simulated conflicts, underscoring the dangers of delegating critical decisions to machines. The REAIM Global Commission’s report emphasized the need for human control over decisions involving nuclear weapons.

Implications for the Korean Peninsula

The implications of AI warfare are particularly significant on the Korean Peninsula. Shorter response times necessitate a fundamental redesign of crisis management systems. As Korea seeks the transfer of wartime operational control, it must ensure capabilities not only in next-generation weapons but also in AI-enabled C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) and in cyber operations.

President Lee Jae Myung emphasized the dual nature of AI, stating it could either be a dangerous force or a beneficial tool. He urged the international community to establish principles for responsible AI use.

As Korea aims to become a global leader in AI, it must enhance military capabilities while embedding responsibility and safety from the design stage. The National Intelligence Service’s recent initiative with seven countries on “security by design” for AI supply chains reflects this strategic direction.

Conclusion

Ultimately, Korea must pursue a comprehensive strategy that integrates government, military, and private-sector expertise. Strengthening national security and prosperity requires not only technological advancement but also a commitment to ensuring that AI serves humanity responsibly.
