Harnessing Agentic AI: Current Rules and Future Implications

AI Companies Claim Existing Rules Can Govern Agentic AI

AI companies are increasingly embracing agentic AI, touted as the next evolution of generative AI. With major players like Meta and OpenAI offering platforms for businesses to build their own AI agents, the question arises: can existing safety processes and regulations effectively govern this new technology?

The Role of Agentic AI

Agentic AI refers to AI systems composed of multiple AI agents that can operate autonomously to complete tasks. As innovation progresses, developers argue that current consumer protection laws, contracts, and sector-specific regulations can serve as guardrails for these new capabilities. Erica Finkle, AI policy director at Meta, emphasized the importance of understanding how existing regulations can be applied effectively to agentic AI.

According to Finkle, “Seeing how all of that comes to play with respect to agents and AI more generally is a really important part of understanding where to go from here, and applying what exists in the best ways possible for the new and developing technology.” In enterprise use cases, agentic AI essentially becomes an extension of an organization’s IT system.

Assessing Current Laws

Panelist A.J. Bhadelia, AI public policy leader at Cohere, echoed Finkle's sentiments, stating that it is crucial to evaluate where current laws can be applied to AI agents and to identify any gaps that may necessitate new regulations. The Trump administration is reportedly developing an action plan to guide U.S. policy on AI.

Bhadelia pointed out that different agentic AI applications carry varying levels of risk. For instance, an AI agent designed for enterprise use presents a different risk profile than one intended for consumer interaction. “A consumer agent might operate in an uncontrolled environment, handle unpredictable content, and have limited assurances of security,” he stated. An enterprise agent, by contrast, is subject to the same security and audit requirements as the rest of the organization’s IT systems.

The Need for Standards in AI Communication

As AI agents become more prevalent, establishing a standard vocabulary for communication between agents is essential. Finkle noted that various vendors are developing specialized AI agents tailored for specific sectors, such as healthcare and energy. Maintaining an open and interoperable AI infrastructure will be critical for facilitating multi-agent interactions.

In environments designed for AI agents to communicate, human control remains paramount for safety and security. Avijit Ghosh, a researcher at the AI platform Hugging Face, argued against fully autonomous agentic AI due to risks including the potential for overriding human control, malicious use, and data privacy concerns. Ghosh emphasized the necessity of maintaining human oversight at every level of the agentic workflow.

Delegating Responsibility and Liability

As companies advance their agentic AI capabilities, questions regarding the delegation of responsibility and liability to humans arise. This issue will be a crucial consideration as the technology evolves. Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, noted that transparency and disclosure regarding AI agent training will help inform policymakers as they consider the establishment of new rules.

Toner remarked, “It doesn’t directly solve any particular problem, but it puts policymakers, the public, and civil society in a much better position to understand and respond as the space changes, as it tends to do very quickly.”

Conclusion

The emergence of agentic AI presents both opportunities and challenges. As AI companies continue to innovate, the dialogue surrounding the application of existing regulations and the establishment of new standards will be critical in ensuring that this technology is developed responsibly and safely.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...