Regulating AI: The Ongoing Battle for Control

The debate surrounding who should regulate artificial intelligence (AI) is far from settled. Recently, significant legislative actions have taken place, highlighting the complexities and challenges of governing this rapidly evolving technology.

The AI Regulation Freeze: A Legislative Overview

The passage of the Republicans’ One Big Beautiful Bill Act through the Senate has brought AI regulation into the spotlight. Prior to its approval, a controversial amendment that proposed a five-year freeze on state-level regulation of AI models and applications was removed. This decision has significant implications for how states can address AI-related concerns.

The bill contains substantial funding for new federal AI initiatives across various departments, including Defense and Homeland Security. However, critics argue that removing the amendment could lead to a chaotic regulatory environment. Michael Kleinman from the Future of Life Institute noted that the rushed nature of the bill—over 900 pages reviewed in just 72 hours—could hinder effective legislation.

State-Level Efforts and Momentum

Even as federal legislation advances, many states are already in the thick of regulating AI. California, Colorado, Illinois, New York, and Utah have been particularly proactive, and all 50 states introduced new AI legislation in 2025. To date, 28 states have enacted AI-related laws, momentum that is unlikely to fade, especially as job displacement from AI-driven automation becomes more visible.

Public Support for AI Regulation

Public sentiment appears to favor AI regulation, with many voters supporting measures that seek to mitigate risks while still fostering innovation. The proposed freeze amendment, by contrast, would have imposed financial penalties on states attempting to enact protective legislation, raising concerns about how to balance innovation against public safety.

Copyright Issues in AI Training Data

In a separate but related development, recent court rulings have begun to shape how copyright law applies to AI training data. In Bartz v. Anthropic, the court ruled that training AI on lawfully purchased books qualifies as fair use. The ruling was complicated, however, by the presence of pirated materials in Anthropic's training data, an issue left to be resolved at a future trial.

Similarly, in Kadrey v. Meta Platforms, the court ruled in Meta's favor against authors who claimed their works were used without permission. The judge noted, however, that better-supported arguments demonstrating how AI-generated works harm the market for human-written texts might well succeed, a signal of how copyright law may evolve alongside AI technology.

Apple’s Strategic Moves in AI

Amid these developments, Apple is reshaping its AI strategy by appointing Mike Rockwell to lead the Siri team. The move aims to revitalize the AI assistant, whose overhauled version, announced in 2024, has struggled to meet expectations. Reports suggest Apple is weighing whether to rely on its own AI models or to partner with established companies such as OpenAI or Anthropic to enhance Siri's capabilities.

As Apple navigates this competitive landscape, it underscores the pressing need for companies to adapt and innovate in response to changing regulatory and technological contexts.

Conclusion

The fight over AI regulation is a dynamic and multifaceted issue. As states push forward with their own regulations and federal legislation attempts to provide a framework, the future of AI governance remains uncertain. Stakeholders must navigate the challenges of innovation, public safety, and legal implications to ensure a balanced approach to AI development.
