AI Regulation: Trends and Enforcement Strategies

AI and the Law: Emerging Trends in Enforcement

Artificial Intelligence (AI) has ushered in the Fourth Industrial Revolution, marking a significant technological evolution in the 21st century. Its rapid development and deployment have caught society and governments off guard, raising questions about regulatory frameworks and enforcement strategies.

Overview of Regulatory Landscape

The U.S. government has taken various aspirational steps toward AI regulation, but comprehensive federal legislation remains absent. Recent actions by federal and state regulators nonetheless reveal evolving priorities around AI compliance. While those priorities may shift with the incoming administration, existing enforcement actions show the tools already available for AI governance.

Executive Actions and Legislative Efforts

Following the rise of ChatGPT, the Biden administration took several executive actions addressing AI, focusing on responsible use. In October 2023, the administration issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order directed the U.S. Department of Commerce to develop guidance for content authentication and watermarking of AI-generated content, emphasizing transparency.

President Biden also established the AI Safety Institute at the National Institute of Standards and Technology (NIST), aiming to foster responsible AI development. However, the incoming Trump administration has indicated its intention to repeal these executive measures, potentially reshaping the regulatory landscape.

In Congress, Senator Chuck Schumer’s advocacy for a comprehensive legislative framework for AI has not yet resulted in significant federal legislation. However, states like Colorado and California have proactively enacted laws addressing AI usage, highlighting the growing state-level regulatory response.

State-Level Legislation

In June 2024, Colorado enacted the Colorado Artificial Intelligence Act, establishing a framework for developers and deployers of high-risk AI systems and requiring protections for consumers against algorithmic discrimination. California followed suit in September 2024, passing numerous AI-related bills, including disclosure requirements for datasets used in AI training and regulations on healthcare providers’ use of generative AI.

States such as Indiana, Illinois, and Texas have also established committees and task forces focused on AI, signaling an intent to regulate AI technologies, often by applying existing consumer protection laws.

Enforcement Actions by Regulatory Bodies

In the absence of a comprehensive federal approach to AI regulation, agencies like the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC) have begun to enforce existing laws against misleading practices involving AI.

For instance, in September 2024, the FTC launched Operation AI Comply, an enforcement sweep targeting companies that use AI tools to deceive consumers or make unsubstantiated claims about AI-powered products and services.
