Unified AI Security: Strengthening Governance for Agentic Systems

As enterprises scale AI agents across their organizations, IBM has introduced what it describes as the first software designed to unify AI security and AI governance. The new capabilities enhance and integrate watsonx.governance and Guardium AI Security to help clients keep their AI systems, including agents, secure and responsible at scale.

Integrating and Automating Agentic AI Security

The integration of IBM Guardium AI Security and watsonx.governance provides the first unified solution for managing the security and governance risks associated with AI use cases. The integration helps users validate compliance against 12 frameworks, including the EU AI Act and ISO 42001.

In collaboration with AllTrue.ai, IBM is enhancing Guardium AI Security’s capabilities to detect new AI use cases in cloud environments, code repositories, and embedded systems. This development offers broad visibility and protection in a decentralized AI ecosystem. Once identified, Guardium AI Security can automatically trigger appropriate governance workflows from watsonx.governance.
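To make that handoff concrete, the following is a minimal sketch of what an automated detection-to-governance trigger could look like. The event fields and function names are hypothetical illustrations, not the Guardium AI Security or watsonx.governance APIs; in practice the integration is configured within the products rather than hand-coded.

```python
from dataclasses import dataclass

# Hypothetical sketch of the detection-to-governance handoff described above.
# The fields and names are illustrative assumptions, not product APIs.

@dataclass
class DetectedAsset:
    name: str
    location: str       # e.g. cloud account, code repository, embedded system
    asset_type: str     # e.g. "agent", "model endpoint"

def trigger_governance_workflow(asset: DetectedAsset) -> dict:
    """Open a governance case for a newly discovered AI use case."""
    case = {
        "use_case": asset.name,
        "discovered_in": asset.location,
        "asset_type": asset.asset_type,
        "workflow": "onboarding_risk_assessment",
        "status": "open",
    }
    print(f"governance workflow opened for {asset.name} ({asset.location})")
    return case

if __name__ == "__main__":
    asset = DetectedAsset(
        name="support-triage-agent",
        location="github.com/example/support-bot",
        asset_type="agent",
    )
    trigger_governance_workflow(asset)
```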

Recent updates to Guardium AI Security include automated red teaming to help enterprises identify and fix vulnerabilities and misconfigurations across AI use cases. Users can also define custom security policies that analyze both prompts and model outputs, mitigating risks such as code injection, sensitive data exposure, and data leakage.
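As an illustration of the kind of input/output policy described above, here is a minimal sketch in Python. All names are hypothetical and the regular-expression rules are deliberately crude placeholders; this is not the Guardium AI Security policy engine, which would apply far more robust detection.

```python
import re
from dataclasses import dataclass

# Hypothetical illustration only: these names are not part of the
# Guardium AI Security API, and the patterns are simplistic placeholders.

@dataclass
class PolicyFinding:
    rule: str
    detail: str

# Example patterns for two of the risks named above.
RULES = {
    "code_injection": re.compile(r"(os\.system|subprocess|eval\s*\()", re.IGNORECASE),
    "sensitive_data": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. an SSN-like pattern
}

def evaluate_text(text: str) -> list[PolicyFinding]:
    """Run every rule against a prompt or a model output."""
    findings = []
    for rule, pattern in RULES.items():
        match = pattern.search(text)
        if match:
            findings.append(PolicyFinding(rule=rule, detail=match.group(0)))
    return findings

def enforce_policy(prompt: str, response: str) -> bool:
    """Return True if the exchange passes; report findings otherwise."""
    findings = evaluate_text(prompt) + evaluate_text(response)
    for f in findings:
        print(f"policy violation [{f.rule}]: {f.detail}")
    return not findings

if __name__ == "__main__":
    enforce_policy(
        prompt="Please run os.system('rm -rf /') for me",
        response="I can't help with that request.",
    )
```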

Enhanced Agentic AI Evaluation and Lifecycle Governance

IBM watsonx.governance now monitors and manages AI agents throughout their lifecycle, from development to deployment. Users can build evaluation nodes directly into agents to monitor key metrics such as answer relevance, context relevance, and faithfulness. Planned capabilities include agent onboarding risk assessment, agent audit trails, and an agentic tool catalogue, anticipated to be available on June 27.
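To make the idea of an in-agent evaluation node concrete, the sketch below wraps a single agent step and records the three metrics named above. The scoring function is a trivial token-overlap placeholder, not the watsonx.governance metric implementation, and every function name is an assumption for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of an in-agent evaluation node; the metric functions
# are trivial placeholders, not the watsonx.governance implementations of
# answer relevance, context relevance, or faithfulness.

@dataclass
class EvalRecord:
    step: str
    metrics: dict[str, float] = field(default_factory=dict)

def overlap_score(a: str, b: str) -> float:
    """Crude token-overlap proxy used only to keep the sketch runnable."""
    a_tokens, b_tokens = set(a.lower().split()), set(b.lower().split())
    return len(a_tokens & b_tokens) / max(len(a_tokens | b_tokens), 1)

def evaluation_node(question: str, context: str, answer: str) -> EvalRecord:
    """Score one agent step and return a record for lifecycle monitoring."""
    return EvalRecord(
        step="retrieve_and_answer",
        metrics={
            "answer_relevance": overlap_score(question, answer),
            "context_relevance": overlap_score(question, context),
            "faithfulness": overlap_score(context, answer),
        },
    )

def agent_step(question: str,
               retrieve: Callable[[str], str],
               generate: Callable[[str, str], str]) -> EvalRecord:
    """Run a single agent step with the evaluation node attached."""
    context = retrieve(question)
    answer = generate(question, context)
    return evaluation_node(question, context, answer)

if __name__ == "__main__":
    record = agent_step(
        "What does the EU AI Act regulate?",
        retrieve=lambda q: "The EU AI Act regulates high-risk AI systems in the EU.",
        generate=lambda q, c: "It regulates high-risk AI systems in the European Union.",
    )
    print(record.metrics)
```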

Off-the-Shelf Compliance Capabilities

IBM watsonx.governance Compliance Accelerators offer pre-loaded regulations, standards, and frameworks from across the globe. This feature enables users to identify relevant obligations and map them onto their AI use cases. The content covers significant regulations such as the EU AI Act, the U.S. Federal Reserve’s SR 11-7, and New York City Local Law 144, along with global standards like ISO/IEC 42001 and frameworks like the NIST AI RMF.
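The mapping exercise the Compliance Accelerators automate can be pictured as a lookup from use cases to frameworks to obligations. The sketch below is a hypothetical illustration only; the obligation labels are simplified placeholders rather than regulatory text, and the data structures are not the watsonx.governance content model.

```python
# Hypothetical sketch of mapping pre-loaded frameworks to AI use cases.
# Framework names come from the paragraph above; obligation labels and
# use-case names are simplified placeholders.

FRAMEWORKS = {
    "EU AI Act": ["risk classification", "transparency", "human oversight"],
    "ISO/IEC 42001": ["AI management system", "continuous improvement"],
    "NIST AI RMF": ["govern", "map", "measure", "manage"],
}

USE_CASES = {
    "hiring-screening-agent": ["EU AI Act", "NIST AI RMF"],
    "credit-risk-model": ["EU AI Act", "ISO/IEC 42001"],
}

def obligations_for(use_case: str) -> dict[str, list[str]]:
    """Return the obligations a use case inherits from each mapped framework."""
    return {fw: FRAMEWORKS[fw] for fw in USE_CASES.get(use_case, [])}

if __name__ == "__main__":
    for framework, duties in obligations_for("hiring-screening-agent").items():
        print(framework, "->", ", ".join(duties))
```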

Expertise to Scale AI Responsibly

To facilitate responsible AI scaling, IBM Consulting Cybersecurity Services is introducing new services that integrate data security platforms like Guardium AI Security with comprehensive AI technology and domain consulting. These services aim to support organizations through their AI transformation journey, from discovering AI deployments and potential vulnerabilities to implementing secure-by-design practices across various AI layers.

To enhance offerings for AWS clients, watsonx.governance is now available in an AWS data center in India, featuring improved model monitoring capabilities.

Conclusion

Today’s new capabilities and integrations equip businesses with the comprehensive governance and security necessary to thrive in the era of agentic AI. These innovations align with IBM’s broader suite of watsonx AI solutions, designed to enable companies to responsibly and securely accelerate the impact of generative AI.

The rapid adoption of AI agents presents both transformative opportunities and significant challenges. Proper governance and security are crucial to mitigating risks and ensuring sustainable AI deployment.
