AI Regulations and Governance: What You Need to Know

Bloomberg Law to Host Virtual Forum on Navigating AI Regulations & Governance

In an era where artificial intelligence (AI) is rapidly evolving, the need for comprehensive understanding and governance of AI technologies has never been more crucial. Bloomberg Law is set to host a virtual In-House Forum titled “Navigating AI Regulations & Governance” on Thursday, May 1, 2025, from 1 to 3 p.m. ET. This event is designed to equip in-house legal and compliance professionals with essential insights and tools needed to navigate the intricate landscape of AI regulation.

Event Overview

The forum comes at a pivotal time, particularly following the issuance of three executive orders by the Trump administration that emphasize AI development and national competitiveness while mandating a U.S. federal AI Action Plan. These developments, coupled with the comprehensive EU AI Act and a rise in state-level AI laws, present unique challenges and opportunities for corporate legal teams.

Attendees will gain detailed insights into federal and state policies, international standards, and emerging enforcement trends. The focus will be on the implications of these regulations for AI deployment, risk management, and the governance frameworks organizations must establish to remain compliant.

Keynote Speakers and Discussions

The event will feature an opening keynote discussion on AI regulation and innovation with Tom Lue, VP of Frontier AI Global Affairs at Google DeepMind. Following this conversation, a panel of state legislators will discuss how their states are taking the lead in regulating AI. This includes representatives from Colorado, Virginia, Utah, California, Texas, New York, and Connecticut.

A fireside chat with Kilian Gross, Head of Unit for Artificial Intelligence – Regulation and Compliance at the European Commission, will delve into key aspects of the EU AI Act, including its implementation and enforcement. The closing panel will focus on the EU AI Act's impact on U.S. companies, highlighting who is affected, what enforcement will look like, and the proactive steps organizations must take to mitigate compliance risks and avoid significant financial and reputational consequences.

The Importance of AI Literacy

As AI and machine learning reshape the innovation landscape, the critical question remains: are regulations helping or hindering this progress? The Bloomberg Law In-House Forum aims to explore this dilemma. It will delve into the details of existing AI regulations at both state and EU levels, and assess whether these regulations foster or stifle innovation.

Moreover, a key focus will be on the importance of AI literacy, discussing strategies to ensure that organizations and their legal teams are well-equipped to navigate this complex terrain.

Additional Speakers

In addition to the keynote speakers, the forum will feature a range of experts, including:

  • Giovanni Capriglione, State Representative (R), Texas
  • Kristen Gonzalez, State Senator (D), New York
  • James Maroney, State Senator (D), Connecticut
  • Monique Priestley, State Representative (D), Vermont
  • Robert Rodriguez, State Senator (D), Colorado
  • Paula Goldman, Chief Ethical and Humane Use Officer & EVP, Product, Salesforce
  • Moya Novella, Global Privacy and AI Counsel, IBM
  • Kelly Trindel, Chief Responsible AI Officer, Workday
  • Dr. Anandhi Vivek Dhukaram, Chief Responsible AI Officer, Esdha

Conclusion

The Bloomberg Law virtual forum promises to be an enlightening event for professionals navigating the complexities of AI regulations. With a focus on real-world implications, practical insights, and expert discussions, participants will be better prepared to address the challenges and opportunities presented by the evolving AI landscape.
