Grok Is In, Ethics Are Out in Pentagon’s New AI-Acceleration Strategy
The Pentagon has unveiled its third AI-acceleration strategy in just four years, outlining seven "pace-setting projects" intended to unlock critical foundational enablers for U.S. military operations. Announced Monday, the document marks a significant shift in how the military approaches artificial intelligence.
Overview of the New Strategy
The six-page document sets a four-year objective of centralizing data for AI training and analysis across the military branches. Notably, it makes no commitment to the ethical use of AI, a departure from the responsible-AI emphasis of its predecessors, and it explicitly bans models that have undergone Diversity, Equity, and Inclusion (DEI) related "ideological tuning."
Access to Controversial AI Tools
On the same day, Defense Secretary Pete Hegseth announced that Pentagon networks, including classified systems, would be opened to Grok, the AI chatbot made by Elon Musk's xAI, which has drawn criticism for its partisan bias and controversial content.
Comparison with Previous Strategies
The new strategy shares much with its 2023 predecessor under the Biden administration; both emphasize rapid adoption of commercially available AI. The latest iteration, however, lays out more defined pathways for implementing AI across military functions, with projects like "Swarms Forge" intended to discover and test innovative AI applications for combat.
Specific Projects and Goals
Among the projects outlined, one focuses on integrating agentic AI into battle management and decision support. Another aims to transform intelligence gathering, compressing the time needed to weaponize intelligence from years to hours. A third would make commercial AI tools, including Grok and Google's Gemini, available to personnel at higher classification levels.
Data Sharing and Innovation
One of the strategy's most consequential mandates is the elimination of "blockers" that hinder data sharing within the Department of Defense. The goal is to establish open-architecture systems, an approach widely viewed as friendly to innovation, particularly for startups.
Concerns Over Ethical Considerations
The strategy dismisses responsible AI and ethical considerations outright in a section titled "Clarifying 'Responsible AI' at the Department of War – Out with Utopian Idealism, In with Hard-Nosed Realism," which declares that social ideologies have no place within the Department of War and emphasizes objective truthfulness in AI responses instead.
Legal Standards and Human Control
In a notable provision, the defense undersecretary for research and engineering is directed to insert standard "any lawful use" language into contracts for AI services within a defined timeframe. Under that standard, AI applications need only clear the department's general legal bar, which could undercut the long-standing preference for meaningful human control in military operations.
Public Trust and Global Context
The strategy arrives as adversaries such as Russia and China accelerate their own adoption of military AI, while public trust in AI continues to erode across the U.S. political spectrum. Meanwhile, many European allies are distancing themselves from U.S. technology companies in response to the current administration's aggressive posture toward fellow democracies.
Taken together, the strategy lays bare the Pentagon's evolving approach to AI: rapid deployment and innovative applications come first, while the ethical concerns that could shape the future of military engagement are pushed aside.