AI Companies Claim Existing Rules Can Govern Agentic AI
AI companies are increasingly embracing agentic AI, touted as the next evolution of generative AI. With major players like Meta and OpenAI offering platforms for businesses to build their own AI agents, the question arises: can existing safety processes and regulations effectively govern this new technology?
The Role of Agentic AI
Agentic AI refers to AI systems composed of multiple AI agents that can operate autonomously to complete tasks. As innovation progresses, developers argue that existing consumer protection laws, contracts, and sector-specific regulations can serve as guardrails for these new capabilities. Erica Finkle, AI policy director at Meta, emphasized the importance of understanding how existing regulations can be applied effectively to agentic AI.
According to Finkle, “Seeing how all of that comes to play with respect to agents and AI more generally is a really important part of understanding where to go from here, and applying what exists in the best ways possible for the new and developing technology.” In enterprise use cases, agentic AI essentially becomes an extension of an organization’s IT system.
Assessing Current Laws
Panelist A.J. Bhadelia, AI public policy leader at Cohere, echoed Finkle’s sentiments, stating that it is crucial to evaluate where current laws can be applied to AI agents and to identify any gaps that may necessitate new regulations. The Trump administration is reportedly developing an action plan to guide U.S. policy on AI.
Bhadelia pointed out that different agentic AI applications carry varying levels of risk. For instance, an AI agent designed for enterprise use presents a different risk profile than one intended for consumer interaction. “A consumer agent might operate in an uncontrolled environment, handle unpredictable content, and have limited assurances of security,” he stated. Conversely, the enterprise agent is subject to the same security and audit requirements as the organization’s IT system.
The Need for Standards in AI Communication
As AI agents become more prevalent, establishing a standard vocabulary for communication between agents is essential. Finkle noted that various vendors are developing specialized AI agents tailored for specific sectors, such as healthcare and energy. Maintaining an open and interoperable AI infrastructure will be critical for facilitating multi-agent interactions.
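To make the idea of a shared vocabulary concrete, the sketch below shows one way an agent-to-agent message could be structured. It is purely illustrative: the `AgentMessage` class and its field names are assumptions for this example, not a standard referenced by the panelists.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json
import uuid


@dataclass
class AgentMessage:
    """Hypothetical, vendor-neutral envelope for agent-to-agent requests."""
    sender: str     # identifier of the requesting agent
    recipient: str  # identifier of the agent being asked to act
    intent: str     # what the sender wants done, expressed in a shared vocabulary
    payload: dict   # task-specific parameters
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    sent_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        """Serialize to JSON so agents from different vendors can parse it."""
        return json.dumps(self.__dict__)


# Example: a scheduling agent asking a healthcare-sector agent for open slots.
message = AgentMessage(
    sender="scheduler-agent",
    recipient="clinic-agent",
    intent="query_availability",
    payload={"specialty": "cardiology", "week_of": "2025-06-02"},
)
print(message.to_json())
```

The specific fields matter less than the principle: interoperability means every vendor’s agent can produce and consume the same structure, which is what keeping the infrastructure “open and interoperable” implies in practice.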
In environments designed for AI agents to communicate, human control remains paramount for safety and security. Avijit Ghosh, a researcher at the AI platform Hugging Face, argued against fully autonomous agentic AI due to risks including the potential for overriding human control, malicious use, and data privacy concerns. Ghosh emphasized the necessity of maintaining human oversight at every level of the agentic workflow.
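As a rough illustration of what oversight “at every level” could mean in practice, the sketch below inserts a human approval step before an agent executes any action. The `propose_action` and `execute` functions are hypothetical stand-ins, not part of any framework mentioned by the panelists.

```python
def propose_action(task: str) -> dict:
    """Hypothetical stand-in for an agent planning its next step."""
    return {"task": task, "action": "send_email", "target": "all-staff@example.com"}


def execute(action: dict) -> None:
    """Hypothetical stand-in for the agent carrying out the approved step."""
    print(f"Executing {action['action']} for task '{action['task']}'")


def run_with_human_gate(task: str) -> None:
    """Require explicit human approval before any agent action runs."""
    action = propose_action(task)
    print(f"Agent proposes: {action}")
    decision = input("Approve this action? [y/N] ").strip().lower()
    if decision == "y":
        execute(action)
    else:
        print("Action rejected; agent stops and escalates to a human operator.")


if __name__ == "__main__":
    run_with_human_gate("notify staff about the maintenance window")
```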
Delegating Responsibility and Liability
As companies advance their agentic AI capabilities, questions arise about how responsibility and liability are assigned to the humans who deploy and oversee these agents. This issue will be a crucial consideration as the technology evolves. Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, noted that transparency and disclosure regarding AI agent training will help inform policymakers as they consider the establishment of new rules.
Toner remarked, “It doesn’t directly solve any particular problem, but it puts policymakers, the public, and civil society in a much better position to understand and respond as the space changes, as it tends to do very quickly.”
Conclusion
The emergence of agentic AI presents both opportunities and challenges. As AI companies continue to innovate, the dialogue surrounding the application of existing regulations and the establishment of new standards will be critical in ensuring that this technology is developed responsibly and safely.