Agentic AI Compliance and Regulation: What to Know

The widespread adoption of artificial intelligence by organizations has brought countless benefits, but it has also come with downsides.

In fact, 95% of executives said their organizations experienced negative consequences over the past two years as a result of their enterprise AI use, according to an August 2025 report from Infosys, “Responsible Enterprise AI in the Agentic Era.” A direct financial loss was the most common consequence, cited in 77% of those cases.

As dire as those figures might seem, they could get even worse as organizations begin to implement agentic AI. Infosys found that 86% of executives who were aware of agentic AI believed that the technology poses additional risks and compliance challenges to their business.

“Agentic AI, because of its autonomous decision-making and autonomous action without a human in the loop, introduces additional risks,” said Valence Howden, an advisory fellow with Info-Tech Research Group.

The term agentic AI, or AI agents, refers to AI systems that can make independent decisions and adapt their behavior autonomously to achieve a specific goal. Unlike traditional automation tools, which follow a rigid, fixed set of instructions, agentic AI systems use learned patterns and relationships to reason and adjust their actions in real time. That capacity for independent action is what sets AI agents apart from basic automation.
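To make that distinction concrete, consider the minimal Python sketch below. Everything in it is hypothetical and purely illustrative: the fixed script always follows the same hard-coded path, while the agent chooses its next action from the state it observes.

    # Illustrative contrast between fixed automation and an agentic loop.
    # All functions, fields, and thresholds are hypothetical placeholders.

    def fixed_automation(invoice):
        """Rigid pipeline: the same steps run in the same order, every time."""
        valid = invoice.get("amount", 0) > 0
        return "approve" if valid and invoice["amount"] < 1000 else "reject"

    def agent_step(state):
        """Agentic loop: choose the next action from observed state, not a script."""
        if state.get("missing_fields"):
            return "request_data"       # adapt: gather missing inputs first
        if state.get("amount", 0) >= 1000:
            return "escalate_to_human"  # adapt: route risky cases to a person
        return "approve"

    if __name__ == "__main__":
        print(fixed_automation({"amount": 250}))           # approve, same path always
        print(agent_step({"amount": 5000}))                # escalate_to_human
        print(agent_step({"missing_fields": ["tax_id"]}))  # request_data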

Why Agentic AI Needs New Compliance Strategies

Agentic AI’s ability to make decisions and execute actions on its own introduces heightened risk into the organization. As a result, AI experts and compliance officers advise executives to embed the needed controls into these systems from the start.

Howden explained, “An agentic AI agent is parsing data through lots of layers, and there are compliance and governance and risk across all those layers.” The more complex and consequential the activities agents perform, the greater the risk companies take on.

At the same time, compliance is difficult under any circumstances because it’s a moving target. “It’s moving all the time, and yet you have to build a compliance structure for something that doesn’t stay the same,” he said.

Asha Palmer, senior vice president of compliance at Skillsoft, has witnessed how the additional security risk agentic AI poses can manifest. She cited a case at another company where an AI agent broke through a firewall to access confidential data during its testing phase.

Indeed, accessing and exposing sensitive data is one of the main risks that agentic AI presents. If programmed to gather insights, for example, an AI agent might access sensitive areas of the system without proper safeguards, leading to unintended exposure. If the agent is compromised, it could also be manipulated to expose those weak spots.
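One common mitigation, sketched below in Python, is to scope an agent’s data access to an explicit allowlist so that anything outside the approved set is denied by default. The resource names and the guard function are hypothetical, not any particular product’s API.

    # Hypothetical least-privilege guard for agent data access.
    # Every read goes through check_access(); non-allowlisted resources are denied.

    ALLOWED_RESOURCES = {
        "sales_reports",    # explicitly granted for this use case
        "product_catalog",
    }

    class AccessDenied(Exception):
        pass

    def check_access(agent_id: str, resource: str) -> None:
        if resource not in ALLOWED_RESOURCES:
            # Deny by default and leave a trail for compliance review.
            print(f"DENIED: agent={agent_id} resource={resource}")
            raise AccessDenied(resource)
        print(f"GRANTED: agent={agent_id} resource={resource}")

    if __name__ == "__main__":
        check_access("insights-agent", "sales_reports")   # allowed
        try:
            check_access("insights-agent", "hr_records")  # sensitive: blocked
        except AccessDenied:
            pass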

Other risks of agentic AI include AI hallucinations, infringement on copyrighted or otherwise protected material, the use of biased or bad information to make decisions, and unauthorized actions.

These risks are not unique to agentic AI; they apply to artificial intelligence in general. However, as Palmer and others noted, agentic AI heightens them: the sequence of actions within the workflow, the layers in which those actions occur, the speed at which they happen, and their autonomous nature all make it more difficult to root out where, what, and why something goes wrong.
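Because those failures can occur at any layer and at machine speed, one practical countermeasure is to emit a structured trace record for every agent action so that the where, what, and why can be reconstructed after the fact. Here is a minimal sketch; the field names are illustrative assumptions:

    # Minimal structured trace log for agent actions (field names are illustrative).
    import json
    import time
    import uuid

    def log_action(workflow_id, layer, action, reason):
        record = {
            "trace_id": str(uuid.uuid4()),
            "workflow_id": workflow_id,
            "timestamp": time.time(),
            "layer": layer,    # e.g., "retrieval", "planning", "execution"
            "action": action,
            "reason": reason,  # the agent's stated justification
        }
        print(json.dumps(record))  # in practice, ship to an append-only audit store
        return record

    if __name__ == "__main__":
        log_action("wf-42", "planning", "select_tool:crm_lookup", "need customer history")
        log_action("wf-42", "execution", "crm_lookup(cust_id=7)", "fetch open tickets")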

Addressing New Risks and Implementing Controls for Agentic AI

How can enterprises address the risks inherent in using agentic AI? Palmer said her approach to ensuring agentic AI complies with any relevant regulations and standards is the same approach she takes to ensure compliance and reduce risk with other types of AI:

  • Understand and assess the use case. Work with a cross-functional team to understand and assess the use case where AI will be deployed, and list the specific risks associated with it.
  • Identify key stakeholders. To ensure accountability, identify both the technology developer responsible for the AI system and the business owner in charge of the use case.
  • Consider the purpose of the use case. Clarify what the objective of the use case is. Understand how AI is being used to achieve that objective.
  • Identify the data involved. Pinpoint the data the AI system will access during its operation, and assess its sensitivity and the safeguards it requires to mitigate security risks.

Palmer said the information gleaned from these steps determines which controls are put in place to ensure the AI tool operates in compliance with all relevant regulations, standards, and best practices. Those include technical controls as well as ongoing testing, human oversight, and revisions.
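One way to operationalize those steps is to capture the assessment answers in a structured record and derive the controls from it. The sketch below is a hypothetical illustration of that mapping, not a prescribed schema; the field names and control list are assumptions:

    # Hypothetical use-case assessment record and the controls it implies.
    from dataclasses import dataclass, field

    @dataclass
    class UseCaseAssessment:
        use_case: str
        tech_owner: str                # developer accountable for the AI system
        business_owner: str            # business owner accountable for the use case
        purpose: str
        data_accessed: list = field(default_factory=list)
        data_sensitivity: str = "low"  # "low" | "medium" | "high"

    def required_controls(a: UseCaseAssessment) -> list:
        controls = ["ongoing_testing", "periodic_review"]  # baseline for any use case
        if a.data_sensitivity in ("medium", "high"):
            controls.append("access_allowlist")
        if a.data_sensitivity == "high":
            controls.append("human_approval_required")
        return controls

    if __name__ == "__main__":
        assessment = UseCaseAssessment(
            use_case="customer-insights agent",
            tech_owner="ml-platform team",
            business_owner="head of sales operations",
            purpose="summarize account health for sales reps",
            data_accessed=["crm_notes", "support_tickets"],
            data_sensitivity="high",
        )
        print(required_controls(assessment))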

Grosso stressed the need for significant human oversight during the agentic AI training period. “Eventually, after many ‘on-the-job’ training exercises, the system will become sufficiently adept at the job it was designed to perform, and that human oversight can be rolled back or possibly eliminated,” he said. However, he noted that “a real problem is that professionals may become too comfortable with their machine counterparts too early and let up on oversight too readily.”
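Grosso’s phased-oversight point can be expressed as a simple gate: early on, every agent action requires human sign-off, and the gate loosens only after enough reviewed actions pass. The sketch below uses made-up thresholds and resets trust on any failure, reflecting his caution about letting up too readily:

    # Hypothetical human-in-the-loop gate that relaxes as the agent proves out.
    class OversightGate:
        def __init__(self, required_reviews=100):
            self.required_reviews = required_reviews  # made-up threshold
            self.approved_reviews = 0

        def needs_human(self) -> bool:
            # Every action is gated until enough reviewed actions have passed.
            return self.approved_reviews < self.required_reviews

        def record_review(self, passed: bool):
            if passed:
                self.approved_reviews += 1
            else:
                self.approved_reviews = 0  # any failure resets the trust clock

    if __name__ == "__main__":
        gate = OversightGate(required_reviews=3)
        for step in range(4):
            print(f"action {step}: human review required = {gate.needs_human()}")
            gate.record_review(passed=True)  # prints True, True, True, False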

Emerging AI Compliance Frameworks for Enterprises

Ensuring AI agents are compliant with any applicable rules, regulations, standards, and best practices falls under the idea of responsible AI. Responsible AI is an approach to developing and deploying AI to ensure it is accountable, ethical, fair, safe, transparent, and trustworthy.

There are several frameworks that organizations can use to help ensure they’re developing responsible AI and, as part of that, compliant AI agents:

  • European Union’s AI Act. This act promotes safe, transparent AI by categorizing risk levels, guiding responsible development, and ensuring compliance through clear rules, accountability, and enforcement mechanisms.
  • G7 Code of Conduct for AI. This set of voluntary guidelines promotes the safe, secure, and trustworthy development and deployment of advanced AI systems and advises organizations to identify, evaluate, and mitigate risks throughout the AI lifecycle.
  • ISO/IEC 42001. This set of voluntary guidelines covers the development and use of responsible AI by ensuring accountability, transparency, and risk management; it helps align AI systems with ethical principles and regulatory requirements.
  • NIST AI Risk Management Framework. This framework helps organizations design, develop, and deploy responsible AI by addressing risks across those efforts and promoting trustworthy AI through core functions — govern, map, measure, and manage.

Regulatory Trends in Agentic AI

The Infosys report found that 78% of surveyed executives viewed “Responsible AI practices as having a positive impact on their business growth” and noted that most of the surveyed executives also said they “welcome new AI regulations, mainly because such regulations will provide clarity, confidence, and trust in enterprise AI both internally and for their customers.”

However, regulations are still evolving, with experts saying none specifically addresses agentic AI. “The trend right now is to use the EU’s AI Act framework as a foundation,” the report stated, noting that most countries are using the framework with only slight variations to ensure their rules align with the EU’s.

Lawmakers in the U.S., at both the federal and state levels, are considering regulations but have yet to offer organizations firm direction. In 2023, the White House issued an executive order on safe, secure, and trustworthy AI, but subsequent administrations have altered those directives, reflecting the ongoing complexity of AI regulation.

How Enterprises Can Prepare for AI Agent Compliance Today: 7 Steps

Even in an evolving regulatory environment, compliance experts said organizations can take the following seven steps to ensure their development and deployment of AI agents comply with laws and standards:

  1. Ensure compliance programs are aligned with the business strategy and operations to clarify objectives and necessary compliance measures.
  2. Identify actions happening at all layers and points along the workflow to address compliance needs and ensure accountability and transparency.
  3. Audit AI agents to check the responses they’re giving and ensure compliance with regulations; a minimal audit sketch follows this list.
  4. Train employees on responsible AI, ensuring that each unit of an agentic AI system undergoes a training, review, and certification process.
  5. Resist becoming too reliant on agent-based AI systems too early, maintaining a clear sense of control and responsibility.
  6. Make no assumptions about the system’s behavior; consult with designers and experts to address potential quirks and errors.
  7. Develop adequate ongoing resources to ensure compliance and governance in AI development and deployment, adapting as AI systems evolve.
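As a concrete illustration of step 3, one lightweight pattern is to sample agent responses and run them through automated checks before human spot-review. The check below is a hypothetical stand-in for whatever rules actually apply, and the sampling is probabilistic by design:

    # Hypothetical response audit: sample agent outputs and flag rule violations.
    import random

    def contains_pii(text: str) -> bool:
        # Stand-in check; a real system would use proper PII detection.
        return "ssn" in text.lower()

    def audit_responses(responses, sample_rate=0.2):
        sample_size = max(1, int(len(responses) * sample_rate))
        flagged = []
        for response in random.sample(responses, sample_size):
            if contains_pii(response):
                flagged.append(response)  # queue for human compliance review
        return flagged

    if __name__ == "__main__":
        outputs = [
            "Your order ships Tuesday.",
            "Customer SSN is 123-45-6789.",  # should be flagged when sampled
            "Refund approved per policy 4.2.",
        ]
        print(audit_responses(outputs, sample_rate=1.0))  # audit everything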
