How AI Sees More Clearly with Policy as Code
The possibility of erroneous findings has always overshadowed the potential of consumer AI. Ask the wrong question of the free download on your phone, and it just might make things up. But there’s no room for those kinds of errors in the enterprise.
The difference between toys and tools is that the latter help organizations get things done better. Designed and deployed correctly, enterprise AI — including agentic AI that can autonomously execute a series of tasks under human oversight — can be engineered to operate without hallucinations. When people embed operational code that aligns with specific policies and regulations directly into agentic AI, they create the guardrails that keep its data analytics on track. That’s what we mean by “policy as code.”
Understanding Policy as Code
Policy as code is the practice of converting an organization's rules, policies, and compliance requirements into machine-readable code so AI systems can follow them automatically. This innovation directly addresses a top enterprise concern about AI, especially in highly regulated industries: whether the organization can execute workflows that require regulatory compliance while maintaining trust.
By preventing unauthorized actions in code and establishing guardrails within which AI can operate, policy as code helps organizations apply policies consistently and provides traceable, explainable reasoning, with people overseeing every step. This makes policy as code particularly valuable in heavily regulated industries such as financial services, healthcare, and government.
Challenges in Regulated Industries
All industries require experts to collaborate on the design, implementation, and maintenance of their AI-infused systems. But regulated industries face additional challenges related to compliance, governance, and trust. According to the Kyndryl Readiness Report, 31% of organizations cite regulatory or compliance concerns as a primary barrier limiting their ability to scale recent technology investments — the second highest ranking of all IT modernization barriers.
Policy as code can help public- and private-sector entities overcome some of the biggest obstacles to allocating resources effectively: compliance, governance, auditability, and observability. By enforcing rules programmatically at scale, policy as code helps eliminate the human errors of granting inappropriate permissions to AI, interpreting rules and regulations inconsistently, and failing to document exceptions to standard operations.
How Policy as Code Works
Organizations typically implement policy as code through a combination of declarative policy languages and enforcement engines. In other words, they encode the relevant regulations and operational rules in a form that AI agents can read and must obey. If a rule is in the code, the agent must follow it; if an action is not authorized in the code, the agent cannot see it or act on it.
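To make the idea concrete, here is a minimal, illustrative sketch of a declarative policy expressed as data and evaluated by a simple enforcement engine. The action names, rules, and reasons are hypothetical examples, not any vendor's actual policy language or API.

```python
# Illustrative only: a declarative policy expressed as data,
# evaluated by a small enforcement engine. Actions absent from
# the policy are denied by default, so the agent cannot act on
# anything the code does not explicitly permit.

POLICY = {
    "share_customer_pii": {"allowed": False, "reason": "restricted personal data"},
    "read_public_report": {"allowed": True, "reason": "public data, no restriction"},
}

def evaluate(action: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a requested action."""
    rule = POLICY.get(action)
    if rule is None:
        # Deny-by-default: unknown actions are never permitted.
        return False, f"action '{action}' is not defined in policy"
    return rule["allowed"], rule["reason"]

print(evaluate("read_public_report"))  # allowed, with a traceable reason
print(evaluate("delete_audit_log"))    # denied: not defined in policy
```

Because every decision carries a reason, this style of policy yields the traceable, explainable outcomes described above.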
The people who architect the code rely on Policy Decision Points (PDPs), which evaluate whether a requested action complies with policy, and Policy Enforcement Points (PEPs), which intercept actions and enforce those decisions. The bottom line is that an AI agent, by design, is unable to act outside the parameters of its allowed operations. This capability also enables system observability and accurate record keeping.
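The PDP/PEP split can be sketched as follows. This is a simplified, in-process illustration with hypothetical class and action names, not a description of any specific product: the PDP decides, the PEP intercepts every action, enforces the decision, and records an audit entry for observability.

```python
from datetime import datetime, timezone

class PolicyDecisionPoint:
    """Decides whether a requested action complies with policy."""
    def __init__(self, allowed_actions: set[str]):
        self.allowed_actions = allowed_actions

    def decide(self, action: str) -> bool:
        return action in self.allowed_actions

class PolicyEnforcementPoint:
    """Intercepts every agent action, asks the PDP for a decision,
    and records the outcome for auditability and observability."""
    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp
        self.audit_log: list[dict] = []

    def execute(self, action: str, handler):
        allowed = self.pdp.decide(action)
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "allowed": allowed,
        })
        if not allowed:
            # The agent cannot act outside its allowed operations.
            raise PermissionError(f"policy violation: '{action}' denied")
        return handler()

pdp = PolicyDecisionPoint({"fetch_report"})
pep = PolicyEnforcementPoint(pdp)
print(pep.execute("fetch_report", lambda: "report data"))  # allowed and logged
try:
    pep.execute("transfer_funds", lambda: "moved money")
except PermissionError as e:
    print(e)  # denied and logged
```

Routing every action through the PEP is what produces the complete, inspectable record that auditors and operators need.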
Kyndryl’s Unique Approach
The Kyndryl differentiator is that it embeds its policy-as-code capability directly into the Kyndryl Agentic AI Framework. In the same way that all Kyndryl solutions are fit-for-purpose rather than off-the-shelf, its approach to policy as code governs every aspect of the digital workflow, from initial data retrieval to final approval. By design, people supervise the system; they don't just observe and report.
As a result, Kyndryl’s approach to policy as code eliminates the impact of AI hallucinations, provides end-to-end oversight and auditing, and can enable faster deployment of agentic AI without jeopardizing safety, transparency, or human control.