New Tools and Guidance: Announcing Zero Trust for AI
In recent discussions with security leaders across various disciplines, the undeniable energy surrounding AI has come to the forefront. Organizations are rapidly adopting AI technologies, prompting security teams to adapt swiftly. A recurring question arises: “We’re adopting AI fast, how do we make sure our security keeps pace?”
Introducing Microsoft’s Approach to Zero Trust for AI (ZT4AI)
Microsoft is addressing this critical question by updating the tools and guidance essential for organizations. The approach to Zero Trust for AI extends proven Zero Trust principles across the entire AI lifecycle, including:
- Data ingestion
- Model training
- Deployment
- Agent behavior
Today marks the release of a new suite of tools and guidance designed to help organizations move forward confidently:
- A new AI pillar in the Zero Trust Workshop
- Updated Data and Networking pillars in the Zero Trust Assessment tool
- A new Zero Trust reference architecture for AI
- Practical patterns and practices for securing AI at scale
Why Zero Trust Principles Must Extend to AI
AI systems often do not conform to traditional security models, introducing new trust boundaries between:
- Users and agents
- Models and data
- Humans and automated decision-making
As organizations begin to adopt autonomous and semi-autonomous AI agents, a new class of risk emerges. Agents that are overprivileged, manipulated, or misaligned can act as “double agents,” undermining the very outcomes they were designed to support.
To mitigate these risks, three foundational principles of Zero Trust are applied to AI:
- Verify explicitly: Continuously evaluate the identity and behavior of AI agents, workloads, and users.
- Apply least privilege: Restrict access to models, prompts, plugins, and data sources to only what is necessary.
- Assume breach: Design AI systems to be resilient against prompt injection, data poisoning, and lateral movement.
A Unified Journey: Strategy → Assessment → Implementation
Security leaders frequently express a need for a clear, structured path from understanding what to do to actual execution. Microsoft’s Zero Trust for AI is designed to bridge this gap, facilitating quick progression to actionable steps.
Zero Trust Workshop – Now with an AI Pillar
The updated Zero Trust Workshop now includes a dedicated AI pillar, covering:
- 700 security controls
- 116 logical groups
- 33 functional swim lanes
This scenario-based and prescriptive workshop aids organizations in:
- Aligning security, IT, and business stakeholders on shared outcomes
- Applying Zero Trust principles across all pillars, including AI
- Exploring real-world AI scenarios and associated risks
- Identifying cross-product integrations to drive measurable progress
Zero Trust Assessment – Expanded to Data and Networking
As AI agents become increasingly capable, the stakes surrounding data and network security have never been higher. Insufficiently governed agents can expose sensitive data or act on malicious prompts, making data classification, labeling, governance, and loss prevention essential.
The Zero Trust Assessment automates the evaluation of security configurations across identity, endpoints, data, and network controls, now expanding to include:
- Data
- Network
Tests are derived from trusted industry sources, including:
- National Institute of Standards and Technology (NIST)
- Cybersecurity and Infrastructure Security Agency (CISA)
- Center for Internet Security (CIS)
- Insights from real-world customer implementations
Zero Trust for AI Reference Architecture
The new Zero Trust for AI reference architecture illustrates how policy-driven access controls, continuous verification, monitoring, and governance collaborate to secure AI systems while enhancing resilience during incidents. This architecture provides a shared mental model for security, IT, and engineering teams, clarifying how trust boundaries shift with AI.
Practical Patterns and Practices for AI Security
Operationalizing AI security at scale is crucial. The provided patterns and practices offer repeatable solutions to complex AI security challenges. Key patterns include:
- Threat modeling for AI: Redesigning traditional threat modeling to address real-world AI risks.
- AI observability: Implementing end-to-end logging, traceability, and monitoring.
- Securing agentic systems: Guidance on lifecycle management, identity and access controls.
- Principles of robust safety engineering: Applying core safety engineering principles in AI systems.
- Defense-in-depth for Indirect Prompt Injection (XPIA): A comprehensive approach to mitigate risks.
Get Started with Zero Trust for AI
Zero Trust for AI integrates proven security principles with modern AI realities. Organizations can:
- Explore Microsoft’s unique approach to Zero Trust.
- Implement their Zero Trust architecture for AI.
- Execute the Zero Trust Workshop for scenario-based guidance.
- Assess their Zero Trust posture using the new Data and Network pillars.
Join the Microsoft Security Community to continue the conversation, where practitioners and experts share insights on Zero Trust and AI security.