Kiteworks Unveils Innovative AI Governance Platform for Enhanced Data Security

Kiteworks today introduced a new data-layer governance platform to address growing enterprise concerns about AI agent security and compliance, positioning the offering as a first-of-its-kind solution for regulated environments.

Kiteworks Targets AI Governance Gap with Data-Layer Approach

The new platform, Kiteworks Compliant AI, is designed to enforce governance controls directly at the data access layer rather than at the model or application level, so that every AI agent interaction with sensitive data is authenticated, policy-governed, encrypted, and logged.

According to the company, the platform applies attribute-based access control (ABAC), FIPS 140-3 validated encryption, and tamper-evident audit logging regardless of the AI model, prompt, or agent framework in use.
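The pattern the company describes can be illustrated with a short sketch: an attribute-based access check plus a hash-chained (tamper-evident) audit entry, both of which run before any agent request reaches regulated data. All class, function, and attribute names below are hypothetical and for illustration only; they are not the Kiteworks API.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Request:
    agent_id: str
    clearance: str       # attribute of the subject (the AI agent)
    purpose: str         # attribute of the action
    resource_label: str  # attribute of the object (the data)

def abac_allows(req: Request) -> bool:
    # Toy ABAC rule: PHI requires HIPAA clearance and a treatment
    # purpose; anything else is permitted in this illustration.
    if req.resource_label == "PHI":
        return req.clearance == "hipaa" and req.purpose == "treatment"
    return True

class AuditLog:
    """Append-only log where each entry hashes the previous entry's
    digest together with its own payload, so tampering with any
    earlier entry breaks the chain."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64

    def record(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": digest})
        self._prev = digest

log = AuditLog()

def governed_access(req: Request) -> bool:
    # Decision and audit entry happen before any data is returned.
    allowed = abac_allows(req)
    log.record({"agent": req.agent_id, "resource": req.resource_label,
                "purpose": req.purpose, "allowed": allowed})
    return allowed
```

The key property of the data-layer placement is that the check and the log entry run regardless of which model or agent framework issued the request, which is what the company claims distinguishes it from model-level guardrails.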

David Byrnes, Kiteworks’ VP of global channels, stated, “The Kiteworks unified platform has always given channel partners a strong story around secure data exchange—unified governance across email, file sharing, SFTP, managed file transfer, APIs, and data forms. Kiteworks Compliant AI extends that same governance to AI agents as a first-class channel.”

Agentic AI Becomes a New Employee and Introduces New Risks

This architecture is intended to close what Kiteworks describes as a widening governance gap as enterprises rapidly deploy agentic AI systems. “AI agents are the new digital employees—and like all employees, they access, handle, share, and act on regulated data,” said Kiteworks Chief Product Officer Yaron Galant.

Galant emphasized, “The difference is that AI agents exercise zero independent ethical judgment. They will access any data they are not explicitly prevented from touching. HIPAA does not care whether a human or an AI agent accessed that patient record. Kiteworks Compliant AI governs the data layer—not the model—so every agent interaction is authenticated, ABAC policy-governed, FIPS 140-3 encrypted, and logged in a tamper-evident audit trail before any regulated data is touched.”

Enterprise Adoption Outpaces Control Mechanisms

The launch comes as organizations accelerate AI adoption without corresponding governance maturity. Kiteworks cited its 2026 Data Security and Compliance Risk Forecast Report, which found that all surveyed organizations have agentic AI on their roadmap, with more than half already running agents in production.

However, significant control gaps remain. The report found that 63% of organizations cannot enforce purpose limitations on AI agents, while 60% lack the ability to terminate misbehaving agents. Broader industry data points to similar concerns, indicating that AI-related vulnerabilities are now considered the fastest-growing cyber risk by a majority of organizations.

Byrnes noted, “This is where channel partners step in. Most mid-market and enterprise organizations don’t have the internal expertise to govern AI data access across regulatory frameworks. They know they need AI. They know they need guardrails. They need a trusted partner to bridge the gap.”

Governed Agent Assists Aim to Operationalize Compliance

To operationalize governance, Kiteworks introduced three “Governed Agent Assists”: compliance-ready workflows built on its Data Policy Engine. These include capabilities for folder operations, file lifecycle management, and automated form creation in regulated environments.

Each assist enforces policy controls across actions such as creating, moving, or deleting data, while maintaining compliance with frameworks such as HIPAA, SOX, PCI, and FISMA.
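As a rough sketch of the enforcement model described above, a policy gate can sit between an agent and each lifecycle action, denying any action the data classification does not permit. The policy table, decorator, and function names here are invented for illustration; the actual assists and their frameworks are not public API details.

```python
from typing import Callable

# Toy policy: which lifecycle actions each data classification permits.
# For example, agent-initiated deletes of HIPAA-regulated data are blocked.
POLICY = {
    "hipaa":  {"create", "move"},
    "public": {"create", "move", "delete"},
}

def governed(action: str, classification: str):
    """Decorator that blocks a file operation unless policy allows it."""
    def wrap(op: Callable) -> Callable:
        def inner(*args, **kwargs):
            if action not in POLICY.get(classification, set()):
                raise PermissionError(
                    f"{action!r} on {classification!r} data denied by policy")
            return op(*args, **kwargs)
        return inner
    return wrap

@governed("delete", "hipaa")
def delete_record(path: str) -> str:
    # Never reached for HIPAA-classified data: the gate raises first.
    return f"deleted {path}"
```

The point of the decorator placement is that the policy decision is made before the operation executes, mirroring the claim that every agent action is policy-governed "before any regulated data is touched."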

How Channel Partners Can Build New Service Offerings Around AI Governance

Byrnes believes that channel partners will benefit significantly from Kiteworks’ product evolution. “For resellers, MSPs, and MSSPs, this isn’t just a product announcement. It’s the foundation for an entirely new service category,” he remarked.

The platform emphasizes auditability as a core differentiator, allowing organizations to generate complete compliance evidence packages—including access policies, encryption validation, and audit logs—within hours rather than weeks.

Why Data-Layer Governance Matters

Kiteworks argues that data-layer enforcement is the only reliable control point for AI agents, as they can bypass model-level guardrails through techniques such as prompt injection. By embedding governance directly into data access workflows, the company aims to provide a consistent enforcement mechanism across disparate AI ecosystems.

The company plans to showcase Compliant AI at the RSA Conference 2026, where it intends to demonstrate real-time governance of AI agent interactions for enterprise and public sector use cases. As regulatory scrutiny around AI intensifies, Kiteworks is betting that compliance, and the ability to prove it, will become a critical differentiator for organizations deploying AI at scale.

“Channel partners who move early on governed AI data access will own a category that didn’t exist 18 months ago,” Byrnes concluded. “The combination of AI agent deployment, regulatory urgency, and platform consolidation pressure creates a service opportunity that spans assessment, implementation, and ongoing management.”
