Harnessing Agentic AI: Current Rules and Future Implications

AI Companies Claim Existing Rules Can Govern Agentic AI

AI companies are increasingly embracing agentic AI, touted as the next evolution of generative AI. With major players like Meta and OpenAI offering platforms for businesses to build their own AI agents, the question arises: can existing safety processes and regulations effectively govern this new technology?

The Role of Agentic AI

Agentic AI refers to AI systems composed of multiple AI agents that can operate autonomously to complete tasks. As the technology matures, developers argue that existing consumer protection laws, contracts, and sector-specific regulations can serve as guardrails for these new capabilities. Erica Finkle, AI policy director at Meta, emphasized the importance of understanding how existing regulations can be applied effectively to agentic AI.

According to Finkle, “Seeing how all of that comes to play with respect to agents and AI more generally is a really important part of understanding where to go from here, and applying what exists in the best ways possible for the new and developing technology.” In enterprise use cases, agentic AI essentially becomes an extension of an organization’s IT system.
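
To make that concrete, the sketch below shows one minimal shape an agentic system can take: a planner decomposes a goal into subtasks and hands each one to a specialized worker agent. The agent names, the fixed decomposition, and the task routing are illustrative assumptions, not any vendor's actual design.

```python
# Minimal illustration of an agentic workflow: a planner decomposes a goal
# into subtasks and delegates each to a specialized worker agent.
# All names and behaviors here are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # takes a subtask, returns a result

def research(task: str) -> str:
    return f"[research notes for: {task}]"

def draft(task: str) -> str:
    return f"[draft text for: {task}]"

# A registry of worker agents keyed by capability.
WORKERS = {
    "research": Agent("researcher", research),
    "draft": Agent("writer", draft),
}

def planner(goal: str) -> list[str]:
    # A real planner would typically be an LLM call; here it is a
    # fixed two-step decomposition to keep the sketch self-contained.
    subtasks = [("research", goal), ("draft", goal)]
    return [WORKERS[kind].handle(task) for kind, task in subtasks]

if __name__ == "__main__":
    for result in planner("summarize new AI regulations"):
        print(result)
```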

Assessing Current Laws

Panelist A.J. Bhadelia, AI public policy leader at Cohere, echoed Finkle’s sentiments, stating it is crucial to evaluate where current laws already apply to AI agents and to identify any gaps that may necessitate new regulations. Meanwhile, the Trump administration is reportedly developing an action plan to guide U.S. policy on AI.

Bhadelia pointed out that different agentic AI applications carry varying levels of risk. For instance, an AI agent designed for enterprise use presents a different risk profile than one intended for consumer interaction. “A consumer agent might operate in an uncontrolled environment, handle unpredictable content, and have limited assurances of security,” he stated. An enterprise agent, by contrast, is subject to the same security and audit requirements as the rest of the organization’s IT systems.
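
As a rough illustration of what inheriting an IT system’s security and audit requirements could look like, the sketch below routes every proposed agent action through an audit log and a policy allowlist before execution. The action names, log format, and allowlist are hypothetical stand-ins, not a description of any real enterprise stack.

```python
# Illustrative sketch: every agent action is logged and checked against a
# policy allowlist before it runs, as an enterprise IT system might require.
# The action schema and policy check are hypothetical placeholders.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

ALLOWED_ACTIONS = {"read_document", "send_summary"}  # assumed allowlist

def execute_with_audit(agent_id: str, action: str, params: dict) -> bool:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "params": params,
    }
    audit_log.info(json.dumps(record))    # log the attempt before acting
    if action not in ALLOWED_ACTIONS:     # enforce the policy allowlist
        audit_log.warning("blocked action: %s", action)
        return False
    # ... perform the real action here ...
    return True

execute_with_audit("agent-7", "read_document", {"doc_id": "Q3-report"})
execute_with_audit("agent-7", "delete_records", {})  # blocked by policy
```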

The Need for Standards in AI Communication

As AI agents become more prevalent, establishing a standard vocabulary for communication between agents is essential. Finkle noted that various vendors are developing specialized AI agents tailored for specific sectors, such as healthcare and energy. Maintaining an open and interoperable AI infrastructure will be critical for facilitating multi-agent interactions.
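
One way to picture such a standard vocabulary is a shared message envelope that any vendor’s agent can serialize and parse. The field names below are invented for illustration and do not reflect any published interoperability standard.

```python
# Hypothetical shared message envelope for agent-to-agent communication.
# Field names are illustrative; no real interoperability standard is implied.

import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    sender: str     # identifier of the sending agent
    recipient: str  # identifier of the receiving agent
    intent: str     # standardized verb, e.g. "request" or "inform"
    payload: dict   # task-specific content

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "AgentMessage":
        return cls(**json.loads(raw))

# Round-trip check: serialize and parse one illustrative message.
msg = AgentMessage("scheduler-01", "records-02", "request",
                   {"task": "fetch availability"})
assert AgentMessage.from_json(msg.to_json()) == msg
```

Because the envelope is sector-agnostic, a healthcare agent and an energy-sector agent could exchange structurally identical messages while keeping their payloads domain-specific, which is the kind of interoperability a shared vocabulary is meant to enable.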

In environments designed for AI agents to communicate, human control remains paramount for safety and security. Avijit Ghosh, a researcher at the AI platform Hugging Face, argued against fully autonomous agentic AI, citing risks such as the overriding of human control, malicious use, and data privacy violations. Ghosh emphasized the necessity of maintaining human oversight at every level of the agentic workflow.
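
Oversight at every level can be pictured as an approval gate between an agent’s proposed action and its execution, as in the minimal sketch below. The console prompt stands in for whatever review interface a real deployment would use, and the proposed action is a made-up example.

```python
# Illustrative human-in-the-loop gate: the agent proposes, a person approves
# or rejects before anything executes. The console prompt is a stand-in for
# a real review interface.

def propose_action() -> str:
    # In a real system this would come from the agent's planning step.
    return "send email to all customers"

def human_approves(action: str) -> bool:
    answer = input(f"Agent wants to: {action!r}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def run_step() -> None:
    action = propose_action()
    if human_approves(action):
        print(f"Executing: {action}")
    else:
        print("Action rejected; agent must replan.")

if __name__ == "__main__":
    run_step()
```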

Delegating Responsibility and Liability

As companies advance their agentic AI capabilities, questions arise about how responsibility and liability should be delegated to the humans behind these systems. This issue will be a crucial consideration as the technology evolves. Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, noted that transparency and disclosure about how AI agents are trained will help inform policymakers as they weigh new rules.

Toner remarked, “It doesn’t directly solve any particular problem, but it puts policymakers, the public, and civil society in a much better position to understand and respond as the space changes, as it tends to do very quickly.”

Conclusion

The emergence of agentic AI presents both opportunities and challenges. As AI companies continue to innovate, the dialogue surrounding the application of existing regulations and the establishment of new standards will be critical in ensuring that this technology is developed responsibly and safely.
