Harnessing Agentic AI: Current Rules and Future Implications

AI Companies Claim Existing Rules Can Govern Agentic AI

AI companies are increasingly embracing agentic AI, touted as the next evolution of generative AI. With major players like Meta and OpenAI offering platforms for businesses to build their own AI agents, the question arises: can existing safety processes and regulations effectively govern this new technology?

The Role of Agentic AI

Agentic AI refers to AI systems composed of multiple AI agents that can operate autonomously to complete tasks. As the technology matures, developers argue that existing consumer protection laws, contracts, and sector-specific regulations can serve as guardrails for these new capabilities. Erica Finkle, AI policy director at Meta, emphasized the importance of understanding how existing regulations can be applied effectively to agentic AI.

According to Finkle, “Seeing how all of that comes to play with respect to agents and AI more generally is a really important part of understanding where to go from here, and applying what exists in the best ways possible for the new and developing technology.” In enterprise use cases, agentic AI essentially becomes an extension of an organization’s IT system.

Assessing Current Laws

Panelist A.J. Bhadelia, AI public policy leader at Cohere, echoed Finkle's sentiments, stating it is crucial to evaluate where current laws can be applied to AI agents and to identify any gaps that may necessitate new regulations. Separately, the Trump administration is reportedly developing an action plan to guide U.S. policy on AI.

Bhadelia pointed out that different agentic AI applications carry varying levels of risk. For instance, an AI agent designed for enterprise use presents a different risk profile than one intended for consumer interaction. “A consumer agent might operate in an uncontrolled environment, handle unpredictable content, and have limited assurances of security,” he stated. Conversely, the enterprise agent is subject to the same security and audit requirements as the organization’s IT system.

The Need for Standards in AI Communication

As AI agents become more prevalent, establishing a standard vocabulary for communication between agents is essential. Finkle noted that various vendors are developing specialized AI agents tailored for specific sectors, such as healthcare and energy. Maintaining an open and interoperable AI infrastructure will be critical for facilitating multi-agent interactions.
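To make the idea of a shared vocabulary concrete, here is a minimal sketch of what a standardized inter-agent message envelope could look like. This is purely illustrative: the field names (`sender`, `recipient`, `intent`, `payload`) and the example agents are assumptions for this sketch, not any published interoperability standard.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgentMessage:
    """Hypothetical shared message envelope for multi-agent interactions."""
    sender: str      # identifier of the originating agent
    recipient: str   # identifier of the target agent
    intent: str      # standardized verb, e.g. "request" or "inform"
    payload: dict = field(default_factory=dict)  # task-specific content

    def to_json(self) -> str:
        # Serialize to JSON so agents from different vendors can parse it.
        return json.dumps(asdict(self))

    @staticmethod
    def from_json(raw: str) -> "AgentMessage":
        return AgentMessage(**json.loads(raw))

# Round trip between a (hypothetical) healthcare scheduling agent
# and a billing agent built by a different vendor.
msg = AgentMessage("scheduler", "billing", "request", {"patient_id": "p-42"})
decoded = AgentMessage.from_json(msg.to_json())
```

The point of such an envelope is that agents only need to agree on the outer vocabulary; the `payload` can stay domain-specific, which is what keeps the infrastructure open and interoperable.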

In environments designed for AI agents to communicate, human control remains paramount for safety and security. Avijit Ghosh, a researcher at the AI platform Hugging Face, argued against fully autonomous agentic AI due to risks including the potential for overriding human control, malicious use, and data privacy concerns. Ghosh emphasized the necessity of maintaining human oversight at every level of the agentic workflow.
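The kind of oversight Ghosh describes can be sketched as a human-in-the-loop checkpoint: the agent proposes an action, and a reviewer must approve it before it executes. This is a minimal illustration, not a production pattern; the function names and the reviewer policy are assumptions made for this sketch.

```python
from typing import Callable

def run_with_oversight(action: Callable[[], str],
                       description: str,
                       approve: Callable[[str], bool]) -> str:
    """Execute `action` only if the human-facing `approve` check says yes."""
    if approve(description):
        return action()
    return "blocked: human reviewer declined"

# In a real workflow `approve` would prompt a person; this stub stands in
# for a reviewer policy that declines anything touching payments.
def reviewer(description: str) -> bool:
    return "payment" not in description

allowed = run_with_oversight(lambda: "email sent", "send status email", reviewer)
blocked = run_with_oversight(lambda: "paid", "issue payment", reviewer)
```

Placing a checkpoint like this at every step of an agentic workflow, rather than only at the start, is the design choice Ghosh argues for: no single delegated task runs to completion without a point where a human can intervene.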

Delegating Responsibility and Liability

As companies advance their agentic AI capabilities, questions arise about how responsibility and liability should be allocated between AI agents and the humans who deploy them. This issue will be a crucial consideration as the technology evolves. Helen Toner, director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology, noted that transparency and disclosure about how AI agents are trained will help inform policymakers as they weigh new rules.

Toner remarked, “It doesn’t directly solve any particular problem, but it puts policymakers, the public, and civil society in a much better position to understand and respond as the space changes, as it tends to do very quickly.”

Conclusion

The emergence of agentic AI presents both opportunities and challenges. As AI companies continue to innovate, the dialogue surrounding the application of existing regulations and the establishment of new standards will be critical in ensuring that this technology is developed responsibly and safely.
