Technology Federalism: U.S. States at the Vanguard of AI Governance
U.S. states have emerged as regulators of first resort for a range of emerging technologies, particularly where Congress has been slow to legislate. This phenomenon, termed emerging technology federalism, is especially evident in state-level responses to artificial intelligence (AI).
The Rise of AI and Regulatory Landscape
AI surged into public awareness with the 2022 launch of ChatGPT, though earlier advances in machine learning had already begun reshaping how society views the tech sector. AI's rise coincided with a broader shift in public sentiment toward skepticism about technology and its societal implications.
Polling data show that positive public sentiment toward the internet industry peaked around 2015 and declined thereafter, culminating in what became known as the "techlash." By 2020, a significant majority of Americans expressed serious concerns about online misinformation and personal data privacy. This backdrop prompted a global policy response, with Europe leading the way through strict regulations such as the General Data Protection Regulation (GDPR).
State-Level Responses to AI
While the federal government has begun to respond to the implications of AI, a less coordinated but equally vigorous response has unfolded at the state level. States have long served as laboratories of democracy, taking the initiative on numerous technology regulations. Since 2018, when California enacted the California Consumer Privacy Act, numerous states have adopted comprehensive consumer privacy laws that regulate automated decision-making systems, establishing a preliminary framework for AI governance.
The Surge in State-Level Policymaking on AI
States have moved quickly from incidental regulation of AI to direct policymaking, aiming to foster AI's economic and social benefits while guarding against its potential harms. The federal government has articulated a similar dual imperative: President Biden's 2023 executive order on AI emphasizes promoting responsible innovation while mitigating the technology's risks.
In Maryland, Governor Wes Moore established an AI Subcabinet tasked with implementing an AI Action Plan to embed values such as equity and safety into the state's AI workflows. Other states, including Massachusetts and Rhode Island, have formed public-private task forces to assess AI's risks and opportunities.
Transparency and Misinformation
The rapid advancement of generative AI models raises concerns about misinformation and the erosion of trust in democratic processes. States such as Utah have begun enacting legislation requiring AI systems to disclose their nature to users. Utah's Artificial Intelligence Policy Act, for example, mandates that generative AI systems disclose that they are not human when interacting with users, particularly in regulated services such as healthcare.
AI Safety Initiatives
As the AI landscape evolves, the safety implications of AI models have become a global concern. National governments and international organizations have convened AI safety summits, and legislative measures addressing safety risks are under development. In California, Senator Scott Wiener's SB 1047 sparked a heated debate over imposing a duty of care on AI developers. Governor Newsom ultimately vetoed the bill, but he expressed an intention to collaborate on a new AI safety proposal in the future.
This ongoing conversation highlights the critical role states play in shaping AI governance as they navigate the complexities of innovation while ensuring public safety and the ethical use of technology.
Conclusion
The evolving landscape of AI governance in the United States showcases a unique interplay between federal and state-level initiatives. As states continue to act as regulators of first resort, their proactive approaches to AI policy will likely set important precedents for the future of technology governance.