Critical Infrastructure at Risk: The Impacts of Agentic AI

Agentic AI Expands Critical Infrastructure Attack Surface Beyond Governance

In late winter, two significant developments arrived within weeks of each other. Amazon expanded its Health AI agent, integrated with One Medical, to over 200 million Prime members. This expansion granted the system autonomous authority to interpret lab results, manage appointments, and support prescriptions around the clock, without requiring a clinician for every routine interaction.

Simultaneously, a Booz Allen Hamilton threat report revealed that the average time for an attacker to gain initial access and begin lateral movement across a network had dropped to under 30 minutes, with the fastest cases measured in mere seconds.

The Intersection of Autonomous AI and Cybersecurity Threats

These developments represent two intersecting curves, creating a largely ungoverned space between them. Agentic AI, which takes autonomous actions and makes decisions on behalf of organizations, has evolved rapidly from pilot programs to operational infrastructure in just over a year.

In sectors like healthcare, AI agents are integrated into clinical decision support, patient routing, lab results interpretation, and medication management. In financial services, they facilitate fraud detection, loan origination approvals, and real-time trading decisions. Notably, Google’s AlphaEvolve agent exemplifies this trend, having optimized a critical infrastructure component by recovering 0.7 percent of Google’s worldwide computing resources and improving its speed by 23 percent.

The Security Model Shift

As AI agents embed themselves within the operational layers of critical infrastructure sectors—designated under the DHS framework—the security model undergoes radical changes. A compromised AI agent in a hospital’s clinical workflow cannot be treated like a traditional malware incident; it becomes a corrupted decision-maker operating within critical infrastructure.

Attackers may not need to breach the underlying network to cause harm; they merely need access to the agent. In some scenarios, they can induce harmful actions through a technique called prompt injection, injecting malicious instructions into the content the agent processes.

Emerging Threats and Recent Incidents

In September 2025, Anthropic documented the first large-scale cyberattack where an AI system executed the majority of the operation autonomously. A state-sponsored group from China exploited Claude Code, Anthropic’s AI coding agent, to infiltrate around 30 global targets, including financial institutions and government agencies. The attackers cleverly disguised their operation as legitimate cybersecurity testing, enabling the agent to conduct reconnaissance and lateral movement without sustained human involvement.

According to Booz Allen Hamilton’s March 2026 threat report, the average attacker breakout time has fallen to under 30 minutes, showcasing how adversaries are using AI to automate reconnaissance and accelerate vulnerability identification.

Challenges in Defensive Operations

Current defensive operations are still reliant on human timelines, often taking days to detect and address breaches. When traditional IT systems are the target, this gap is severe but manageable. However, when AI agents with access to sensitive data are compromised, the implications can extend beyond mere data loss to significant physical outcomes.

HiddenLayer’s 2026 AI Threat Landscape Report indicates that one in eight reported AI breaches is now linked to agentic systems. The vulnerabilities specific to these agents, such as injected malicious inputs and privilege escalation, do not align with existing intrusion detection frameworks, complicating the defense landscape.

Government Response and Recommendations

The federal government has begun to address these challenges. In November 2024, the DHS published a Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, marking the first framework of its kind. This framework addresses all 16 critical infrastructure sectors, focusing on the risks posed by AI agents.

Recommendations from a GAO assessment in December 2024 urged DHS to enhance its risk assessment guidance to comprehensively address the potential harm of AI-related attacks. The urgency for actionable guidance is paramount, as the pace of AI deployment in critical sectors continues to accelerate.

Proposed Changes to Mitigate Risks

To substantially reduce risks without awaiting comprehensive legislation, three key changes are recommended:

  1. DHS and CISA should transition from voluntary guidance to mandatory minimum security requirements for AI agents deployed in critical infrastructure. This includes implementing prompt injection protections, documented human-override mechanisms, and audit logging for autonomous actions.
  2. Critical infrastructure operators should conduct AI-specific risk assessments that evaluate both the likelihood and impact of agent compromise, aligning with GAO’s recommendations.
  3. Sector Risk Management Agencies must be granted explicit authority and resources to assess AI agent deployments and establish sector-specific security standards.

The urgency for effective governance is underscored by the private sector’s response, as demonstrated by Booz Allen’s launch of a specialized agentic cyber defense product suite. This market behavior indicates that the risks associated with AI in critical infrastructure are real and pressing.

Conclusion

The intersection of AI deployment and cybersecurity threats presents a complex challenge. The rapid advancement of agentic AI necessitates swift and decisive governance measures to protect critical infrastructure from potential adversarial actions. The question remains whether the necessary frameworks will be established before adversaries exploit these vulnerabilities on a large scale in high-consequence sectors.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...