AI and the Law: Emerging Trends in Enforcement
Artificial Intelligence (AI) has ushered in the Fourth Industrial Revolution, marking a significant technological evolution in the 21st century. Its rapid development and deployment have caught society and governments off guard, raising questions about regulatory frameworks and enforcement strategies.
Overview of Regulatory Landscape
The U.S. government has taken a number of aspirational steps toward AI regulation, but comprehensive federal legislation remains notably absent. Recent enforcement actions by federal and state regulators highlight evolving priorities around AI compliance and enforcement. While those priorities may shift with the incoming administration, existing enforcement actions reveal the tools regulators already have available for AI governance.
Executive Actions and Legislative Efforts
Following the rise of ChatGPT, the Biden administration took several executive actions addressing AI, with a focus on responsible use. In October 2023, it issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order directed the U.S. Department of Commerce to develop guidance for content authentication and for watermarking AI-generated content, emphasizing transparency.
President Biden also established the AI Safety Institute at the National Institute of Standards and Technology to foster responsible AI development. The incoming Trump administration, however, has signaled its intention to repeal these executive measures, which could reshape the regulatory landscape.
In Congress, Senator Chuck Schumer’s push for a comprehensive legislative framework for AI has yet to produce significant federal legislation. States such as Colorado and California, however, have proactively enacted laws addressing AI use, underscoring the growing state-level regulatory response.
State-Level Legislation
In June 2024, Colorado enacted its Artificial Intelligence Act, establishing a framework for high-risk AI systems and mandating protections for consumers against algorithmic discrimination. California followed in September 2024, passing numerous AI-related bills, including disclosure requirements for datasets used to train AI systems and rules governing healthcare providers’ use of generative AI.
States such as Indiana, Illinois, and Texas have also created AI-focused committees and task forces, demonstrating a commitment to regulating AI technologies, in many cases through existing consumer protection laws.
Enforcement Actions by Regulatory Bodies
In the absence of a comprehensive federal approach to AI regulation, agencies such as the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC) have begun enforcing existing laws against misleading practices involving AI.
For instance, in September 2024 the FTC launched Operation AI Comply, targeting companies that use AI tools to deceive consumers. Notable enforcement actions included: