Harnessing Agentic AI: Current Rules and Future Implications

AI Companies Claim Existing Rules Can Govern Agentic AI

AI companies are increasingly embracing agentic AI, touted as the next evolution of generative AI. With major players like Meta and OpenAI offering platforms for businesses to build their own AI agents, the question arises: can existing safety processes and regulations effectively govern this new technology?

The Role of Agentic AI

Agentic AI refers to AI systems composed of multiple AI agents that can operate autonomously to complete tasks. As innovation progresses, developers argue that existing consumer protection laws, contracts, and sector-specific regulations can serve as guardrails for these new capabilities. Erica Finkle, AI policy director at Meta, emphasized the importance of understanding how existing regulations can be applied effectively to agentic AI.

According to Finkle, “Seeing how all of that comes to play with respect to agents and AI more generally is a really important part of understanding where to go from here, and applying what exists in the best ways possible for the new and developing technology.” In enterprise use cases, agentic AI essentially becomes an extension of an organization’s IT system.

Assessing Current Laws

Speaking on the same panel, A.J. Bhadelia, AI public policy leader at Cohere, echoed Finkle's sentiments, saying it is crucial to evaluate where current laws already apply to AI agents and to identify any gaps that may require new regulations. The Trump administration is reportedly developing an action plan to guide U.S. policy on AI.

Bhadelia pointed out that different agentic AI applications carry varying levels of risk. For instance, an AI agent designed for enterprise use presents a different risk profile than one intended for consumer interaction. “A consumer agent might operate in an uncontrolled environment, handle unpredictable content, and have limited assurances of security,” he stated. Conversely, the enterprise agent is subject to the same security and audit requirements as the organization’s IT system.
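Neither speaker described a concrete mechanism, but the contrast can be pictured as different default policies for different deployment contexts. The sketch below is purely illustrative: the AgentPolicy structure and every field in it are hypothetical, not anything Meta or Cohere has published.

```python
from dataclasses import dataclass

# Hypothetical policy structure for illustration only. Autonomy scale is
# invented here: 0 = suggest only, 1 = act with approval, 2 = act freely.
@dataclass
class AgentPolicy:
    environment: str              # "enterprise" or "consumer"
    allow_unvetted_content: bool  # can the agent ingest arbitrary user content?
    require_audit_log: bool       # must every action be logged for review?
    max_autonomy_level: int

ENTERPRISE_POLICY = AgentPolicy(
    environment="enterprise",
    allow_unvetted_content=False,  # inputs come from controlled internal systems
    require_audit_log=True,        # same audit requirements as the rest of the IT estate
    max_autonomy_level=1,
)

CONSUMER_POLICY = AgentPolicy(
    environment="consumer",
    allow_unvetted_content=True,   # uncontrolled environment, unpredictable content
    require_audit_log=True,
    max_autonomy_level=0,          # tighter limits where security assurances are weaker
)
```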

The Need for Standards in AI Communication

As AI agents become more prevalent, establishing a standard vocabulary for communication between agents is essential. Finkle noted that various vendors are developing specialized AI agents tailored for specific sectors, such as healthcare and energy. Maintaining an open and interoperable AI infrastructure will be critical for facilitating multi-agent interactions.
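No specific protocol was named in the discussion; as a rough illustration of what a shared vocabulary between agents might cover, the sketch below defines a hypothetical message envelope. All field names (sender, recipient, intent, payload) are assumptions made for this example, not a published standard.

```python
from dataclasses import dataclass, field, asdict
import json
import uuid

# Hypothetical envelope for messages exchanged between agents from different
# vendors. The fields simply illustrate what a shared vocabulary could cover.
@dataclass
class AgentMessage:
    sender: str       # identifier of the sending agent
    recipient: str    # identifier of the receiving agent
    intent: str       # e.g. "request", "response", "handoff"
    payload: dict     # task-specific content
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_json(self) -> str:
        """Serialize to a vendor-neutral wire format (JSON here)."""
        return json.dumps(asdict(self))

# Example: a healthcare scheduling agent handing a task to a billing agent.
msg = AgentMessage(
    sender="scheduling-agent",
    recipient="billing-agent",
    intent="handoff",
    payload={"task": "generate_invoice", "appointment_id": "A-123"},
)
print(msg.to_json())
```

The point of such a standard, whatever form it ultimately takes, is that agents built by different vendors could parse the same envelope without bespoke integrations.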

In environments designed for AI agents to communicate, human control remains paramount for safety and security. Avijit Ghosh, a researcher at the AI platform Hugging Face, argued against fully autonomous agentic AI due to risks including the potential for overriding human control, malicious use, and data privacy concerns. Ghosh emphasized the necessity of maintaining human oversight at every level of the agentic workflow.
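Ghosh did not prescribe a particular implementation, but the principle of a human check at each step of an agentic workflow can be sketched minimally as follows; the propose_actions and human_approves functions are placeholders invented for this example.

```python
# Minimal sketch of human oversight in an agentic workflow. The agent's
# proposed actions are stand-ins; the point is that nothing executes
# without an explicit human decision.

def propose_actions(task: str) -> list[str]:
    """Stand-in for an AI agent's planning step."""
    return [f"draft reply for: {task}", f"send reply for: {task}"]

def human_approves(action: str) -> bool:
    """Blocking human review gate; here a simple console prompt."""
    answer = input(f"Approve action '{action}'? [y/N] ")
    return answer.strip().lower() == "y"

def run_workflow(task: str) -> None:
    for action in propose_actions(task):
        if human_approves(action):
            print(f"Executing: {action}")
        else:
            print(f"Blocked by human reviewer: {action}")

if __name__ == "__main__":
    run_workflow("customer refund request")
```

The blocking human_approves call is the design point: the agent's autonomy stops wherever the reviewer says no.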

Delegating Responsibility and Liability

As companies expand their agentic AI capabilities, questions arise about how responsibility and liability are delegated to humans, and that issue will only grow more pressing as the technology evolves. Helen Toner, director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology, noted that transparency and disclosure about how AI agents are trained will help inform policymakers as they consider new rules.

Toner remarked, “It doesn’t directly solve any particular problem, but it puts policymakers, the public, and civil society in a much better position to understand and respond as the space changes, as it tends to do very quickly.”

Conclusion

The emergence of agentic AI presents both opportunities and challenges. As AI companies continue to innovate, the dialogue surrounding the application of existing regulations and the establishment of new standards will be critical in ensuring that this technology is developed responsibly and safely.
