Quantifying Ethics in Military AI Decision-Making

A Combat Scenario-Based Model for Quantifying Ethical Decision-Making in Military AI

In the 21st century, artificial intelligence (AI) technology has been driving a paradigm shift in defense, particularly by significantly enhancing operational capabilities in the air force domain. AI-powered combat systems support a wide range of functions such as target detection, tactical decision-making, situational awareness, and autonomous flight, thereby reducing the cognitive burden on human pilots and improving real-time responsiveness and survivability.

However, the increasing autonomy of AI systems introduces complex challenges, including potential violations of international law, decision-making errors, and ambiguous attribution of ethical responsibility. This has led to a growing need to redefine the role of human fighter pilots within AI-integrated operations. To address these issues, this study proposes a quantitative ethical decision-making model that mathematically integrates national military ethics principles and international legal norms, while incorporating dynamic battlefield variables. The proposed model aims to contribute to defense policy development and combat training systems by offering a structured and operationally applicable ethical evaluation framework.
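The article does not publish the model's equations, but the description above — weighted ethical principles modulated by dynamic battlefield variables — can be sketched as a simple scoring function. This is a minimal illustration under assumed criteria: the principle names (distinction, proportionality, necessity), the weights, and the battlefield modifiers are all hypothetical placeholders, not values from the study.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights -- illustrative only, not taken from the study.
@dataclass
class EthicalAssessment:
    distinction: float      # compliance with the principle of distinction, in [0, 1]
    proportionality: float  # proportionality of the expected use of force, in [0, 1]
    necessity: float        # military necessity of the engagement, in [0, 1]

def ethical_score(a: EthicalAssessment,
                  threat_level: float,
                  civilian_proximity: float) -> float:
    """Weighted aggregate of principle scores, modulated by dynamic
    battlefield variables (both inputs assumed normalized to [0, 1])."""
    weights = {"distinction": 0.4, "proportionality": 0.35, "necessity": 0.25}
    base = (weights["distinction"] * a.distinction
            + weights["proportionality"] * a.proportionality
            + weights["necessity"] * a.necessity)
    # Penalize engagement as civilians are nearer; relax slightly under high threat.
    modifier = (1.0 - 0.5 * civilian_proximity) * (0.8 + 0.2 * threat_level)
    return max(0.0, min(1.0, base * modifier))

# In an operational framework, a score below a policy threshold would route
# the decision to a human supervisor rather than authorize autonomous action.
score = ethical_score(EthicalAssessment(0.9, 0.8, 0.85),
                      threat_level=0.7, civilian_proximity=0.3)
```

The point of such a sketch is the structure, not the numbers: static normative weights capture national and international standards, while the multiplicative modifier lets real-time battlefield conditions scale the result, which is what makes the evaluation both quantitative and operationally responsive.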

Related Work

This article proposes a foundational framework for mathematically modeling ethical decision-making in AI-enabled combat systems. To support this framework, the paper reviews prior research across three key domains:

  • The development of AI technologies integrated into fighter aircraft;
  • National military ethical standards;
  • Approaches to the quantification of ethical judgment.

Based on this analysis, the article positions its distinct contribution within the current literature on military AI ethics.

Evolution of AI-Based Fighter Aircraft Systems

AI fighter jet technology is advancing in diverse ways depending on national strategies. The United States is pursuing unmanned–manned teaming and next-generation combat platforms through initiatives such as Air Combat Evolution (ACE), Skyborg, and Next Generation Air Dominance (NGAD) programs. China is integrating AI pilot systems into the J-20 and enhancing the autonomous combat capabilities of AI-powered drones. Europe is focusing on AI-assisted human operations and cloud-based battlefield analysis technologies under programs like the Future Combat Air System (FCAS) and Tempest, with a strong emphasis on ethical compliance and operational safety.

These developments are shifting the role of human pilots from operators to strategic decision-makers or supervisors, thereby highlighting the need for quantitative models that support ethical design and accountability frameworks.

National Standards for Military AI Ethics

As the military application of AI technologies expands, countries are establishing military AI ethics standards based on their strategic objectives and philosophical principles. The United States emphasizes responsibility grounded in practical utility, the European Union promotes legislated human-centric principles, China prioritizes state-centered strategic ethics, and South Korea remains in the early stages of institutional development.

  • United States: AI ethics standards align with existing military doctrine and legal frameworks.
  • China: Ethics are guided by socialist core values and national security interests.
  • European Union: Focuses on human-centric principles, transparency, and accountability.
  • South Korea: Developing comprehensive guidelines for ethical AI in defense applications.

In conclusion, as military AI technology continues to evolve, the integration of ethical decision-making frameworks will be crucial to ensure compliance with both national and international standards. This study lays the groundwork for future research and implementation in this vital area.
