State-Led AI Governance in the U.S.

Technology Federalism: U.S. States at the Vanguard of AI Governance

U.S. states have emerged as regulators of first resort for a range of emerging technologies, particularly in areas where Congress has been slow to legislate. This phenomenon, termed emerging technology federalism, is especially evident in the state-level response to artificial intelligence (AI).

The Rise of AI and Regulatory Landscape

AI surged into public awareness with the 2022 launch of ChatGPT, but earlier advances in machine learning had already begun reshaping how society viewed the tech sector. AI's rise in prominence coincided with a broader shift in public sentiment toward skepticism about technology and its societal implications.

Polling data shows that positive public sentiment toward the internet industry peaked in 2015 and began to decline thereafter, culminating in what became known as the techlash. By 2020, a significant majority of Americans expressed serious concerns about misinformation and personal data privacy online. This backdrop prompted a global policy response, with Europe leading the way by enacting strict regulations such as the General Data Protection Regulation (GDPR).

State-Level Responses to AI

While the federal government has begun to respond to the implications of AI, a less coordinated but equally vigorous response has unfolded at the state level. States have traditionally served as laboratories of democracy, taking the initiative on numerous technology regulations. Since 2018, following California’s enactment of the California Consumer Privacy Act, numerous states have adopted comprehensive consumer privacy laws that regulate automated decision-making systems, thereby establishing a preliminary framework for AI governance.

The Surge in State-Level Policymaking on AI

States have quickly transitioned from incidental regulation of AI to direct policymaking, aiming to foster AI's economic and social benefits while safeguarding against potential harms. This dual imperative echoes the federal posture: President Biden's 2023 executive order likewise emphasizes promoting responsible innovation while mitigating the risks associated with AI.

In Maryland, Governor Wes Moore established an AI Subcabinet tasked with implementing an AI Action Plan to embed values such as equity and safety into AI workflows. Other states like Massachusetts and Rhode Island have formed public-private AI task forces to assess risks and opportunities related to AI.

Transparency and Misinformation

The rapid advancement of generative AI models raises concerns about misinformation and the erosion of trust in democratic processes. States such as Utah have begun enacting legislation requiring AI systems to disclose their nature to users. For example, Utah's Artificial Intelligence Policy Act requires generative AI systems to disclose that users are interacting with AI rather than a human, particularly in regulated services such as healthcare.

AI Safety Initiatives

As the AI landscape evolves, the safety implications of AI models have become a global concern. National governments and international organizations have convened AI safety summits, and legislative measures are being developed to address safety risks. In California, Senator Scott Wiener's SB 1047 sparked a debate over imposing a duty of care on AI developers. Although Governor Newsom vetoed the bill, he expressed his intention to collaborate on a new AI safety proposal in the future.

This ongoing conversation highlights the critical role that states play in shaping AI governance, as they navigate the complexities of innovation while ensuring public safety and ethical use of technology.

Conclusion

The evolving landscape of AI governance in the United States showcases a unique interplay between federal and state-level initiatives. As states continue to act as regulators of first resort, their proactive approaches to AI policy will likely set important precedents for the future of technology governance.
