Unified AI Security: Strengthening Governance for Agentic Systems

Unified AI Security and Governance for Agentic Systems

As enterprises scale AI agents across their organizations, the industry has witnessed the introduction of the first software designed to unify AI security and AI governance. The new capabilities enhance and integrate watsonx.governance and Guardium AI Security to help clients maintain security and responsibility in their AI systems, including agents, at scale.

Integrating and Automating Agentic AI Security

The integration of IBM Guardium AI Security and watsonx.governance provides the first unified solution to manage security and governance risks associated with various AI use cases. This integration supports user processes to validate compliance standards against 12 different frameworks, including the EU AI Act and ISO 42001.

In collaboration with AllTrue.ai, IBM is enhancing Guardium AI Security’s capabilities to detect new AI use cases in cloud environments, code repositories, and embedded systems. This development offers broad visibility and protection in a decentralized AI ecosystem. Once identified, Guardium AI Security can automatically trigger appropriate governance workflows from watsonx.governance.

Recent updates to Guardium AI Security include automated red teaming to help enterprises identify and fix vulnerabilities and misconfigurations across AI use cases. Additionally, it allows users to define custom security policies that analyze both input and output prompts, mitigating risks such as code injection, sensitive data exposure, and data leakage.

Enhanced Agentic AI Evaluation and Lifecycle Governance

IBM watsonx.governance now monitors and manages AI agents throughout their lifecycle from development to deployment. Users can build evaluation nodes directly into agents, enabling them to monitor key metrics like answer relevance, context relevance, and faithfulness. Planned future capabilities include agent onboarding risk assessment, agent audit trails, and an agentic tool catalogue, anticipated to be available on June 27.

Off-the-Shelf Compliance Capabilities

IBM watsonx.governance Compliance Accelerators offer pre-loaded regulations, standards, and frameworks from across the globe. This feature enables users to identify relevant obligations and map them onto their AI use cases. The content covers significant regulations such as the EU AI Act, the U.S. Federal Reserve’s SR 11-7, and New York City Local Law 144, along with global standards like ISO/IEC 42001 and frameworks like the NIST AI RMF.

Expertise to Scale AI Responsibly

To facilitate responsible AI scaling, IBM Consulting Cybersecurity Services is introducing new services that integrate data security platforms like Guardium AI Security with comprehensive AI technology and domain consulting. These services aim to support organizations through their AI transformation journey, from discovering AI deployments and potential vulnerabilities to implementing secure-by-design practices across various AI layers.

To enhance offerings for AWS clients, watsonx.governance is now available in an AWS data center in India, featuring improved model monitoring capabilities.

Conclusion

Today’s new capabilities and integrations equip businesses with the comprehensive governance and security necessary to thrive in the era of agentic AI. These innovations align with IBM’s broader suite of watsonx AI solutions, designed to enable companies to responsibly and securely accelerate the impact of generative AI.

The rapid adoption of AI agents presents both transformative opportunities and significant challenges. Proper governance and security are crucial to mitigating risks and ensuring sustainable AI deployment.

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent requirements based on risk levels, while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly with the EU's AI Act, which mandates that all staff must be AI literate. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...