Regulating AI: The Ongoing Battle for Control

The debate over who should regulate artificial intelligence (AI) is far from settled. Recent legislative action in Washington has underscored the complexity of governing this rapidly evolving technology.

The AI Regulation Freeze: A Legislative Overview

Senate passage of the Republicans’ One Big Beautiful Bill Act has pushed AI regulation into the spotlight. Before the bill was approved, senators voted 99-1 to strip a controversial amendment that would have imposed a five-year freeze on state-level regulation of AI models and applications. That decision has significant implications for how states can address AI-related concerns.

The bill contains substantial funding for new federal AI initiatives across several departments, including Defense and Homeland Security. Critics argue, however, that removing the amendment could produce a chaotic patchwork of state rules. Michael Kleinman of the Future of Life Institute, for his part, noted that the bill’s rushed passage (more than 900 pages reviewed in just 72 hours) could hinder effective legislation.

State-Level Efforts and Momentum

Federal maneuvering aside, many states are already deep into regulating AI. California, Colorado, Illinois, New York, and Utah have been especially proactive, and all 50 states introduced AI legislation in 2025. Twenty-eight states have now enacted AI-related laws, momentum that is unlikely to fade, especially as job displacement from AI-driven automation becomes more evident.

Public Support for AI Regulation

Public sentiment appears to favor AI regulation, with many voters backing measures that mitigate risks while still fostering innovation. The proposed freeze, by contrast, would have tied federal broadband funding to compliance, effectively penalizing states that enacted protective legislation and raising concerns about the balance between innovation and public safety.

Copyright Issues in AI Training Data

In a separate but related development, recent court rulings have begun to set precedent on the use of copyrighted materials in training AI models. In Bartz v. Anthropic, the court ruled that training AI on lawfully purchased books qualifies as fair use. The ruling was complicated, however, by the pirated materials in Anthropic’s training data, a question left for a separate trial.

Similarly, in Kadrey v. Meta Platforms, the court ruled against authors who claimed their works were used without permission. The judge stressed that the plaintiffs had failed to show how AI-generated output harms the market for human-written works, a signal that market impact may become the decisive question as copyright law evolves alongside AI.

Apple’s Strategic Moves in AI

Amid these developments, Apple is reshaping its AI strategy by putting Mike Rockwell in charge of the Siri team. The move aims to revitalize the assistant, which has struggled to meet expectations since its overhaul was announced in 2024. Reports suggest Apple is weighing whether to rely on its own AI models or partner with established companies such as OpenAI or Anthropic to power Siri.

Apple’s deliberations underscore how even the largest companies must keep adapting and innovating as the regulatory and technological landscape shifts.

Conclusion

The fight over AI regulation is dynamic and multifaceted. As states push ahead with their own rules and federal legislation attempts to provide a framework, the future of AI governance remains uncertain. Stakeholders must weigh innovation, public safety, and legal risk to strike a balanced approach to AI development.

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies must navigate differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent risk-based requirements while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly under the EU's AI Act, which mandates AI literacy for staff. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...