States Take Charge: The Future of AI Regulation

Preparing for State-Level AI Regulation

In 2025, the landscape of AI and data privacy regulations is marked by uncertainty and inconsistency across the United States. With the change in administration, the regulatory focus has shifted dramatically, leading to a complex environment for businesses navigating compliance.

Policy Whiplash: A Shift in Direction

When President Trump took office, one of his initial actions was to revoke the previous administration’s Executive Order on AI regulation, replacing it with a directive aimed at deregulating AI development. This shift has resulted in a pendulum swing between regulatory guardrails and an emphasis on acceleration, leaving organizations grappling with the implications.

The growing disconnect between federal ambitions and practical compliance presents a daunting challenge for CISOs and governance, risk, and compliance (GRC) leaders. With federal regulatory bodies like the FTC and CFPB facing budget cuts and diminished authority, comprehensive federal data privacy legislation appears unlikely in 2025.

The Patchwork of State Regulations

Since 2018, attempts to unify the country’s fragmented state regulations have stalled, despite increasing pressure from the private sector for a cohesive standard. On the AI front, Congress’s failure to impose a moratorium on state-level enforcement of AI regulations has shifted the regulatory spotlight to state legislatures, where a wave of privacy and AI bills is being introduced.

This state-driven regulatory environment is anything but uniform. Notable legislative efforts include:

  • California AB 2930: This proposed bill would require developers of automated decision systems to conduct impact assessments and to notify users when those systems play a substantial role in consequential decisions.
  • Colorado Artificial Intelligence Act SB 24-205: Enacted in 2024 and taking effect in 2026, this law imposes strict obligations on developers and deployers of high-risk AI systems, requiring transparency disclosures and measures to prevent algorithmic discrimination.
  • New York AB 3265, the “AI Bill of Rights”: This broad proposal would give consumers the right to opt out of automated systems and would mandate human oversight in decision-making processes.

Additionally, while not a U.S. state law, the EU AI Act introduces tiered risk classifications for AI systems, imposing varying compliance obligations that international organizations must also navigate.
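
For teams mapping their own systems against the Act, that tiering amounts to a lookup from risk class to obligations. The sketch below is illustrative only: the four tier names follow the Act’s published risk categories, while the obligation lists, system names, and the triage helper are simplified assumptions rather than legal text.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"   # prohibited practices
        HIGH = "high"                   # strict conformity requirements
        LIMITED = "limited"             # transparency duties
        MINIMAL = "minimal"             # largely unregulated

    # Abbreviated, illustrative obligation lists keyed by tier.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["prohibited practice: do not deploy"],
        RiskTier.HIGH: ["conformity assessment", "risk management system",
                        "human oversight", "logging and traceability"],
        RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
        RiskTier.MINIMAL: ["no specific obligations"],
    }

    def triage(system_name, tier):
        # Print the simplified checklist a system in this tier would need to meet.
        print(f"{system_name} ({tier.value} risk):")
        for duty in OBLIGATIONS[tier]:
            print(f"  - {duty}")

    triage("resume-screening model", RiskTier.HIGH)
    triage("customer-support chatbot", RiskTier.LIMITED)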

Navigating a Complex Compliance Landscape

For enterprise organizations investing heavily in AI, the lack of a clear regulatory map complicates compliance efforts. Companies must contend with overlapping and conflicting requirements from different states.

Best Practices for Compliance

To successfully navigate this uncertain regulatory climate, organizations can adopt the following best practices:

  • Connect the Dots in the AI Stack: Achieving visibility into how data flows through AI systems is crucial. Knowing where sensitive data enters AI pipelines allows organizations to maintain compliance and enforce policy effectively.
  • Bring Order to Unstructured Chaos: Unstructured data, which is often poorly governed, poses significant risk. Next-generation data security posture management tools can classify and safeguard sensitive information before it is ingested by AI models; a minimal sketch of that screening step follows this list.
  • Contextualize the Risk: Compliance is not just a checklist; it hinges on who is accessing data, where it resides, and why it is being used. Context-aware tools that build real-time knowledge graphs make it possible to enforce dynamic controls and adapt as regulatory requirements evolve, as the second sketch below illustrates.
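
To make the second practice concrete, here is a minimal sketch of a pre-ingestion screening step, assuming simple regex-based detection. The pattern set, label names, and helper functions are illustrative assumptions; commercial data security posture management tools classify far more than these two identifier types.

    import re

    # Hypothetical patterns for two common identifier types.
    PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    }

    def classify(text):
        # Return the set of sensitive-data labels detected in a document.
        return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

    def redact(text):
        # Mask detected identifiers before the text reaches an AI pipeline.
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
        return text

    doc = "Contact jane.doe@example.com; the SSN on file is 123-45-6789."
    print(classify(doc))   # {'email', 'ssn'} (set order may vary)
    print(redact(doc))     # identifiers masked before ingestion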

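The third practice can be sketched in the same spirit. The snippet below hand-builds a tiny stand-in for a knowledge graph of dataset context and applies one hypothetical rule; the dataset names, attributes, and rule are assumptions for illustration, whereas real context-aware tools populate such graphs continuously from metadata and access logs.

    # Toy stand-in for a knowledge graph: each dataset node carries residency,
    # sensitivity, and declared purpose.
    GRAPH = {
        "customer_claims": {"residency": "EU", "sensitivity": "personal",
                            "purpose": "claims-processing"},
        "telemetry_logs":  {"residency": "US", "sensitivity": "internal",
                            "purpose": "analytics"},
    }

    def may_ingest(dataset, model_purpose, model_region):
        # Hypothetical rule: personal data must stay in its region of residency
        # and may only feed models whose declared purpose matches the dataset's.
        ctx = GRAPH[dataset]
        if ctx["sensitivity"] == "personal":
            if ctx["residency"] != model_region:
                return False
            return model_purpose == ctx["purpose"]
        return True

    print(may_ingest("customer_claims", "claims-processing", "EU"))  # True
    print(may_ingest("customer_claims", "marketing", "EU"))          # False
    print(may_ingest("telemetry_logs", "analytics", "US"))           # True
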
Conclusion: The Road Ahead

The remainder of 2025 is likely to remain a regulatory gray zone for AI. Organizations must recognize the reputational and legal risks associated with noncompliance. Those that perceive AI governance as a mere checkbox will fall behind; conversely, those that integrate security, privacy, and compliance into a cohesive strategy will be better positioned to innovate responsibly.

As the regulatory landscape evolves, proactive engagement in AI and data governance is not just a best practice; it is essential for building trust and ensuring sustainable business operations amidst changing regulations.
