A Combat Scenario-Based Model for Quantifying Ethical Decision-Making in Military AI
In the 21st century, artificial intelligence (AI) technology has been driving a paradigm shift in defense, particularly by significantly enhancing operational capabilities in the air force domain. AI-powered combat systems support a wide range of functions such as target detection, tactical decision-making, situational awareness, and autonomous flight, thereby reducing the cognitive burden on human pilots and improving real-time responsiveness and survivability.
However, the increasing autonomy of AI systems introduces complex challenges, including potential violations of international law, decision-making errors, and ambiguous attribution of ethical responsibility. This has led to a growing need to redefine the role of human fighter pilots within AI-integrated operations. To address these issues, this study proposes a quantitative ethical decision-making model that mathematically integrates national military ethics principles and international legal norms, while incorporating dynamic battlefield variables. The proposed model aims to contribute to defense policy development and combat training systems by offering a structured and operationally applicable ethical evaluation framework.
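To make the idea of a "quantitative ethical evaluation framework" concrete, such an integration could, for illustration, take the form of a weighted multi-criteria score over ethical and legal criteria, updated with battlefield inputs. The sketch below is purely illustrative: the criterion names, weights, and decision threshold are all assumptions for exposition, not the model actually proposed in this study.

```python
from dataclasses import dataclass

@dataclass
class EthicalCriterion:
    """One ethical or legal criterion, scored on [0, 1] (1 = fully satisfied)."""
    name: str
    weight: float  # relative importance (hypothetical value)
    score: float   # assessed compliance for the current battlefield situation

def ethical_decision_score(criteria, threshold=0.7):
    """Weighted average of criterion scores, normalized by total weight.

    Returns (score, permitted), where an engagement is deemed ethically
    permissible only if the aggregate score meets the threshold.
    Illustrative only: real models would need far richer structure.
    """
    total_weight = sum(c.weight for c in criteria)
    score = sum(c.weight * c.score for c in criteria) / total_weight
    return score, score >= threshold

# Hypothetical engagement assessment combining international legal norms
# (IHL) with a national ethics code, under dynamic situational scores.
criteria = [
    EthicalCriterion("distinction (IHL)", weight=0.4, score=0.9),
    EthicalCriterion("proportionality (IHL)", weight=0.3, score=0.8),
    EthicalCriterion("military necessity", weight=0.2, score=0.7),
    EthicalCriterion("national ethics code", weight=0.1, score=1.0),
]
score, permitted = ethical_decision_score(criteria)
```

With these hypothetical numbers the aggregate score is 0.84, above the assumed 0.7 threshold. A real framework would also need to encode hard constraints (e.g., absolute legal prohibitions that no weighted trade-off can override), which a simple weighted sum cannot express.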
Related Work
This study proposes a foundational framework for mathematically modeling ethical decision-making in AI-enabled combat systems. To situate this framework, this section reviews prior research across three key domains:
- The development of AI technologies integrated into fighter aircraft;
- National military ethical standards;
- Approaches to the quantification of ethical judgment.
Based on this analysis, the study highlights its distinct contribution to the current literature on military AI ethics.
Evolution of AI-Based Fighter Aircraft Systems
AI-enabled fighter aircraft technology is advancing along paths shaped by each nation's strategy. The United States is pursuing manned–unmanned teaming and next-generation combat platforms through programs such as Air Combat Evolution (ACE), Skyborg, and Next Generation Air Dominance (NGAD). China is integrating AI pilot systems into the J-20 and enhancing the autonomous combat capabilities of AI-powered drones. Europe is focusing on AI-assisted human operations and cloud-based battlefield analysis under programs such as the Future Combat Air System (FCAS) and Tempest, with a strong emphasis on ethical compliance and operational safety.
These developments are shifting the role of human pilots from operators to strategic decision-makers or supervisors, thereby highlighting the need for quantitative models that support ethical design and accountability frameworks.
National Standards for Military AI Ethics
As the military application of AI technologies expands, countries are establishing military AI ethics standards based on their strategic objectives and philosophical principles. The United States emphasizes responsibility grounded in practical utility, the European Union promotes legislated human-centric principles, China prioritizes state-centered strategic ethics, and South Korea remains in the early stages of institutional development.
- United States: AI ethics standards align with existing military doctrine and legal frameworks.
- China: Ethics are guided by socialist core values and national security interests.
- European Union: Focuses on human-centric principles, transparency, and accountability.
- South Korea: In the early stages of developing comprehensive guidelines for ethical AI in defense applications.
In conclusion, as military AI technology continues to evolve, the integration of ethical decision-making frameworks will be crucial to ensure compliance with both national and international standards. This study lays the groundwork for future research and implementation in this vital area.