AI Warfare: The Ethical Dilemma Ahead

History Reloaded as AI Puts Ethics in the Firing Line

In every era, the arrival of new military technologies has forced states to revisit a recurring set of moral and strategic questions. Commanders and legislators have always asked whether a tool is lawful and whether its use is wise. The introduction of the stirrup, for example, altered mounted warfare during the early Middle Ages because it allowed riders to stabilize themselves during impact and transformed cavalry into a decisive force. The spread of gunpowder changed the conduct of sieges, the formation of states, and the hierarchy of military power across Europe and Asia. Mechanized armor and motorized infantry forced governments during the 1930s to revise the role of cavalry, logistics, and industrial mobilization. Radar, early computers, and nuclear command-and-control systems created new decision-making environments during the Second World War and the Cold War.

At each historical juncture, the ethical debate entered later, after demonstrations of battlefield potential and after the strategic consequences became clear. Societies that abstained from new capabilities frequently found that other powers pressed ahead, gained initiative, and reshaped the terms on which conflict took place. Historians record many examples where abstention placed one side at a political or military disadvantage.

The Interwar Period: A Cautionary Tale

The interwar period demonstrates this pattern with particular clarity. The United States adopted an isolationist foreign policy during the 1920s and 1930s and withdrew from active European security involvement. Britain and France reduced military spending and participated in efforts to constrain naval and air power through treaty frameworks. Public opinion encouraged a view that restraint and diplomatic instruments would avert another crisis like that of 1914. Meanwhile, authoritarian regimes in Germany, Italy, and Japan expanded armaments production, revised operational doctrine, and integrated new technologies into their armed forces. The result was a pronounced imbalance between intentions in democratic societies and capabilities in revisionist states. When war returned in 1939, democracies possessed legal and ethical arguments in favor of peace, while their adversaries had spent a decade preparing for industrialized conflict. Abstention from armaments did not deliver stability; it lowered the cost of aggression for those prepared to use force and raised the eventual price of resistance.

The Modern Dilemma: Artificial Intelligence in Warfare

Democracies now face a modern version of that dilemma in relation to artificial intelligence. The pace of development in AI-enabled military systems has accelerated over the past two decades. Since the Wing Loong-1 combat UAV entered service in 2009, China has steadily expanded its autonomous and semi-autonomous aerial and maritime platforms. Beijing has also signaled its intent to distribute AI throughout the People’s Liberation Army, including command automation, ISR (intelligence, surveillance, and reconnaissance) fusion, targeting, electronic warfare, and logistics. Russia, meanwhile, has articulated the stakes in characteristically direct terms. In September 2017, President Vladimir Putin stated, “Artificial intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

This statement reveals a geopolitical perspective in which AI superiority supports global leadership. Many other governments have drawn similar conclusions and have invested accordingly. An initial attempt at establishing shared norms took place in 2024, when some 90 governments attended the Responsible AI in the Military Domain (REAIM) summit in Seoul. About 60 states endorsed a blueprint intended to guide responsible use of AI on the battlefield; the remaining 30 or so attendees, including China, declined to adopt the document despite having sent official representation.

Ethical Considerations and Just War Principles

The presence of non-state groups with access to improvised weapons systems and digital tools introduces further complexity because such groups do not participate in conventional arms control arrangements. Ethical questions therefore sit inside a larger strategic framework in which abstention carries its own costs and risks.

The Just War tradition offers a structured way to examine the ethical dimension of this transition. From Cicero in the late Roman Republic through Augustine in late antiquity and Aquinas in the medieval period, the tradition developed methods for reconciling moral duties with the practical reality of armed conflict. Three principles continue to guide contemporary analysis:

  • Jus ad Bellum concerns the justice of going to war;
  • Jus in Bello concerns the justice of conduct during war;
  • Jus post Bellum concerns justice after the cessation of hostilities.

Under Jus ad Bellum, AI-enabled sensor fusion and pattern recognition can improve intelligence and target discrimination at the decision-making stage. These capabilities can help commanders assess proportionality and necessity before resorting to force. Improved intelligence can make less destructive options available, reduce recourse to escalation, and support efforts to contain conflict geographically.

Under Jus in Bello, the International Committee of the Red Cross (ICRC) has observed that machine learning decision-support systems can help human decision-makers comply with international humanitarian law by accelerating the analysis of information relevant to distinction and proportionality. Distinguishing civilians from combatants, identifying protected objects, and correlating information across multiple sources all become more tractable as data volumes grow, provided analytical tools can process them rapidly.
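To make the mechanics concrete, one standard way to correlate evidence across sources is naive-Bayes log-odds fusion, in which each sensor contributes a likelihood ratio and the combined score flags cases for human review. The sketch below is purely illustrative, with hypothetical sensor readings; it describes neither a fielded system nor anything the ICRC has endorsed.

    import math

    def fuse_log_odds(likelihood_ratios, prior=0.5):
        """Naive-Bayes evidence fusion: start from the prior odds that an
        object is protected (civilian), multiply in each sensor's
        likelihood ratio P(reading | protected) / P(reading | combatant),
        and return the posterior probability."""
        log_odds = math.log(prior / (1 - prior))
        for lr in likelihood_ratios:
            log_odds += math.log(lr)
        return 1 / (1 + math.exp(-log_odds))

    # Hypothetical likelihood ratios from three independent sources,
    # e.g. an EO camera, a radar signature, and a SIGINT correlation.
    readings = [2.3, 5.7, 1.5]
    posterior = fuse_log_odds(readings)

    # The tool accelerates analysis; distinction itself remains a human
    # judgment under international humanitarian law.
    print(f"flag for operator review: P(protected) = {posterior:.2f}")

The structural point is that the machine compresses multi-source correlation into one reviewable number, which is precisely the acceleration described above.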

The Role of Human Judgment

A persistent concern relates to the role of the human. Dwight D. Eisenhower once said that weapons are tools and that their significance derives from the purpose for which they are used. Moral agency therefore resides with the user rather than the object. Critics of military AI frequently argue that machines cannot replicate the judgment and intuition displayed by individuals such as Stanislav Petrov, the Soviet officer who in 1983 recognized that an early-warning alert indicating an incoming nuclear attack was false. His choice prevented a destructive reaction during a moment of extreme tension.

The incident is presented as evidence that human intuition forms an essential safety mechanism. Yet it also shows that early nuclear command-and-control systems already delegated elements of inference to machines. Human decision-making has its own gaps and weaknesses, and AI can reduce some categories of human error in the same way radar once reduced misidentification.

Models of Human Involvement in AI

The central question concerns the placement of the human within the system. Military doctrine now distinguishes three models, illustrated in the sketch after this list:

  • Human in the loop model: Humans approve or deny lethal or targeting actions, applying the highest degree of ethical caution and legal certainty.
  • Human on the loop model: Autonomous systems perform tasks, but humans supervise and can intervene, applicable in roles benefiting from rapid machine processing.
  • Human out of the loop model: AI handles routine logistics and data manipulation, freeing personnel for tasks requiring creativity and judgment.
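The practical difference among the three models is where the engagement pipeline blocks on a human decision. The minimal sketch below illustrates only that framing; the names and the approve and veto callbacks are hypothetical stand-ins for whatever interfaces a real system would expose.

    from enum import Enum, auto

    class Oversight(Enum):
        IN_THE_LOOP = auto()      # a human must approve each action
        ON_THE_LOOP = auto()      # system acts; a human supervises, may veto
        OUT_OF_THE_LOOP = auto()  # no human gate (routine, non-lethal tasks)

    def run_action(action, model, approve, veto):
        """Illustrative gating logic; approve and veto are hypothetical
        callbacks into a human operator's console."""
        if model is Oversight.IN_THE_LOOP:
            # The pipeline halts here until a human explicitly authorizes.
            return "engaged" if approve(action) else "aborted: denied"
        if model is Oversight.ON_THE_LOOP:
            # The system proceeds unless the supervisor intervenes in time.
            return "aborted: vetoed" if veto(action) else "engaged"
        # OUT_OF_THE_LOOP: suited only to logistics and data handling.
        return "executed autonomously"

Read this way, the ethical ordering in the list above is a statement about which return paths can be reached without a human decision.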

The controversy centers on fully autonomous battlefield action. The Lieber Institute recently recommended developing “command re-entry pathways” to ensure that commanders can reassert authority over lethal autonomous systems, preserving hierarchical responsibility and legal accountability.
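The Lieber Institute does not prescribe an implementation, but the concept can be sketched as a persistent, commander-held override channel that the autonomy stack must consult before any irreversible act and that fails safe when silent. Everything below, including the RecallChannel name and the heartbeat scheme, is a hypothetical illustration rather than a description of the Institute’s proposal.

    import threading
    import time

    class RecallChannel:
        """Hypothetical 'command re-entry pathway': a commander-held
        recall signal that the autonomy loop must check before every
        irreversible action, and that fails safe on silence."""

        def __init__(self, heartbeat_timeout=10.0):
            self._recalled = threading.Event()
            self._last_heartbeat = time.monotonic()
            self._timeout = heartbeat_timeout

        def heartbeat(self):
            # The commander's C2 link periodically confirms the pathway is live.
            self._last_heartbeat = time.monotonic()

        def recall(self):
            # The commander reasserts authority; autonomy must stand down.
            self._recalled.set()

        def may_act(self):
            # Fail safe: no action if recalled, or if the command link has
            # been silent past the timeout (pathway presumed severed).
            link_alive = (time.monotonic() - self._last_heartbeat) < self._timeout
            return link_alive and not self._recalled.is_set()

The fail-safe default, standing down when the command link goes quiet, is what preserves the hierarchical responsibility and legal accountability the recommendation emphasizes.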

Conclusion

The broader historical lesson is a recurring pattern: democratic societies cannot rely on moral restraint alone when other actors seek advantage through technological innovation. The most ethically responsible response for democratic states is to ensure that AI development conforms to international law, preserves human accountability, and upholds humanitarian standards. Technology should reflect humanity, and humanity should set the rules. The alternative is a geopolitical environment shaped by coercion rather than law, with consequences extending far beyond the battlefield.
