As artificial intelligence increasingly shapes our lives, state governments across the United States are grappling with how to ensure its responsible development and use. This examination delves into current legislative efforts focused on governing AI, specifically in the context of decisions that profoundly impact individuals. We will explore the core principles and practical frameworks emerging as states strive to balance innovation with fairness, transparency, and accountability in this rapidly evolving technological landscape. By understanding the key components of these emerging regulatory approaches, we can better anticipate their potential impact on businesses, consumers, and the future of AI itself. The following analysis highlights the key definitional challenges, obligations, and rights contained in these emerging AI laws.
What are the key considerations in the predominant state-level approach to governing AI in consequential decision making?
Across the US, state lawmakers are increasingly focused on regulating AI used in “consequential decisions”—those that significantly impact individuals’ livelihoods and life opportunities. This approach is designed to foster fairness, transparency, oversight, and accountability, particularly in areas traditionally covered by civil rights laws such as education, housing, financial services, healthcare, and employment. The regulatory landscape is complex, but key considerations define this predominant approach.
Defining the Scope
Lawmakers often follow a five-part framework to define the scope of “high-risk AI systems” or “automated decision-making tools”:
- Defining AI: Most states align with the OECD definition: an AI system infers from inputs to generate outputs like predictions, content, or decisions influencing physical or virtual environments.
- Relevant Context: Regulation primarily focuses on sectors protected by civil rights laws, where AI impacts education, employment, housing, finance, essential government services, healthcare, insurance, and legal services. Some proposals, like California’s AB 2930, are more prescriptive, including essential utilities, criminal justice, adoption services, reproductive services, and voting.
- Impact & Role of AI: This is the most debated area. Lawmakers consider terms like “facilitating decision making” (the lowest threshold), “substantial factor” (intermediate), and “controlling factor” (the highest). The core issue is balancing regulatory breadth with operational clarity for innovators.
- Regulated Entities: The approach typically distinguishes between developers (those who build AI systems) and deployers (those who use them), assigning role-specific obligations for transparency, risk assessments, and governance.
- Common Exceptions: Exceptions often exist for specific technologies (calculators, databases, etc.), sectors already governed by existing laws, small businesses, and public interest activities.
Addressing Algorithmic Discrimination
Mitigating algorithmic discrimination against protected classes is a primary goal. Most proposals define it as unjustified differential treatment based on membership in a protected class. Some legislative frameworks set forth a blanket prohibition on algorithmic discrimination, while others, such as Colorado’s, create a duty of care to prevent it. How these new laws will interact with existing civil rights laws remains unclear, creating confusion and uncertainty. A disparity also exists between the views of consumers and industry representatives: consumers seek greater protection, while industry presses for fewer constraints.
Obligations for Developers and Deployers
Common obligations for both developers and deployers revolve around:
- Transparency: This includes notices to individuals about AI use, public transparency measures, and documentation shared between developers and deployers.
- Assessments: Proposals distinguish between testing, which evaluates an AI system against technical metrics such as accuracy and reliability and probes for bias based on protected characteristics, and impact assessments, which assess and document whether, and to what extent, an AI system poses a risk of discrimination to individuals.
- AI Governance Programs: Structured frameworks are required to oversee and manage AI development and deployment responsibly.
Consumer Rights
Frameworks establish rights for individuals impacted by AI systems, including:
- Right to Notice and Explanation: Individuals should be informed about AI usage and its impact.
- Right of Correction: Opportunity to correct inaccurate information used in decision-making.
- Right to Opt-Out or Appeal: Ability to opt-out of automated decisions or appeal for human review.
Enforcement
Enforcement is typically managed by the state Attorney General’s office. Regulatory tools include affirmative reporting and document production, alongside enforcement mechanisms such as a right to cure and rebuttable presumptions of compliance. To date, most state lawmakers have been hesitant to include a private right of action in AI and data privacy bills, citing concerns about litigation burden.
How do technology-specific approaches to regulating AI address unique challenges, considering examples of generative AI and frontier models?
While many U.S. states are taking a risk-based approach to AI regulation, some lawmakers are pursuing technology-specific rules. These focus on unique risks associated with certain types of AI, notably generative AI and frontier models.
Generative AI: Transparency and Disclosure
Regulations for generative AI primarily aim to boost transparency. This involves:
- Consumer Notices: Informing users when they interact with generative AI systems. A good example of this is Utah’s SB 149, which requires entities to disclose when an individual is interacting with generative AI.
- Content Labeling: Clearly marking content as synthetic or AI-generated.
- Watermarking: Implementing watermarks to identify AI-created content.
- AI Detection Tools: Providing tools that let users check whether content was generated or modified by AI, as required by California’s SB 942.
- Documentation Disclosure: California AB 2013 mandates that generative AI developers publicly post documentation about the data used to train their systems.
Frontier Models: Safety and Oversight
Regulations for frontier AI or foundation models (large AI models that can be used across a wide variety of use cases and applications, sometimes referred to as “general-purpose AI”) address risks stemming from their scale and power. Key areas being considered include:
- Safety Protocols: Requiring developers to have documented safety and security protocols. An example comes from proposed California legislation in 2024, SB 1047.
- Shutdown Capability: Ensuring the ability to promptly shut down models if needed. SB 1047, which passed the legislature but was ultimately vetoed, included such a requirement.
Challenges in Regulating Frontier/Foundation Models
Regulating such models poses unique challenges:
- Complexity and Scale: These models’ intricacies make it tough to establish effective standards.
- Computing Power as a Threshold: Some proposals use computing power (e.g., FLOPs) as the trigger for coverage. Critics argue this measurement isn’t always a reliable risk indicator and contend it places too much weight on speculative harms rather than evidenced risks, such as algorithmic bias (a simplified sketch of such a threshold check follows this list).
- Impact on Open Source: Requirements placed on developers could limit the availability and modification of open-source models. Responding to concerns about the impact on open source, California’s SB 1047 was amended to exclude models created by fine-tuning a covered model with less than ten million dollars in compute cost.
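To make the compute-threshold debate concrete, below is a minimal, hypothetical sketch (in Python) of how a FLOP-based coverage test with a fine-tuning carve-out might be expressed. The numeric thresholds, the Model fields, and the is_covered function are illustrative assumptions loosely inspired by proposals such as SB 1047, not the language of any bill; the point is simply that such a test turns on raw training compute rather than demonstrated capability or evidenced risk.

```python
# Illustrative sketch only: a hypothetical compute-based coverage check,
# loosely modeled on threshold-style proposals such as California SB 1047.
# The specific numbers are placeholders, not the text of any enacted law.
from dataclasses import dataclass

COVERED_TRAINING_FLOPS = 1e26              # assumed training-compute threshold for coverage
FINE_TUNE_COST_CARVE_OUT_USD = 10_000_000  # assumed carve-out cap for fine-tuned derivatives

@dataclass
class Model:
    name: str
    training_flops: float        # total compute used to train the model
    is_fine_tune: bool = False   # derived by fine-tuning an existing covered model?
    fine_tune_cost_usd: float = 0.0

def is_covered(model: Model) -> bool:
    """Return True if the model would fall under this hypothetical threshold rule."""
    # Carve-out: fine-tunes of a covered model below the cost cap are excluded.
    if model.is_fine_tune and model.fine_tune_cost_usd < FINE_TUNE_COST_CARVE_OUT_USD:
        return False
    # Coverage turns solely on raw training compute; nothing here measures
    # actual capability or evidenced harm, which is the critics' objection.
    return model.training_flops >= COVERED_TRAINING_FLOPS

if __name__ == "__main__":
    frontier = Model("frontier-model", training_flops=3e26)
    small_fine_tune = Model("domain-fine-tune", training_flops=3e26,
                            is_fine_tune=True, fine_tune_cost_usd=2_000_000)
    print(is_covered(frontier))         # True: exceeds the compute threshold
    print(is_covered(small_fine_tune))  # False: excluded by the fine-tuning carve-out
```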
Which specific obligations and rights related to AI systems are typically established for developers, deployers, and consumers, and how are they enforced?
State AI legislation is increasingly focused on defining specific rights and obligations for developers, deployers, and consumers to ensure fairness, transparency, and accountability. Let’s break down what each role typically entails.
Obligations for Developers
Developers, who build AI systems, face obligations related to:
- Transparency and Documentation: Providing comprehensive documentation about the AI system’s functionality, intended purpose, and potential risks. This often includes disclosing information about the data used to train the model.
- Risk Assessment: Testing systems for bias and discrimination vulnerabilities and providing this information to deployers.
- Reasonable Care: In states that adopt a duty-of-care regulatory model, developers must exercise “reasonable care” to protect consumers from algorithmic discrimination.
Obligations for Deployers
Deployers, who use AI systems, are generally responsible for:
- Transparency and Notice: Informing individuals when and how an AI system is being used to make consequential decisions that impact them.
- AI Governance Programs: Implementing structured programs and risk management policies to oversee AI usage. These programs often must specify risk-mitigation measures and be updated iteratively.
- Post-Deployment Monitoring: Continuously monitoring AI systems for bias, accuracy, and risks of discrimination.
- Providing Individual Rights: Honoring consumer rights (detailed below), which often includes correcting inaccurate information used in decision-making.
Consumer Rights
Consumers are being granted new rights under proposed AI legislation:
- Right to Notice and Explanation: Receiving clear and accessible information about the use of AI in decision-making processes. This includes understanding the system’s purpose and how it works.
- Right of Correction: Correcting erroneous personal data used by the AI system, particularly if an adverse decision was made.
- Right to Appeal or Opt-Out: Some legislation provides the right to appeal an AI-driven decision for human review or to opt-out of automated decision-making altogether.
Enforcement Mechanisms
Enforcement is typically handled by the state Attorney General. Common mechanisms include:
- Affirmative Reporting: Developers disclosing potential risks associated with AI systems.
- Document Production: Requiring developers and deployers to maintain and produce documentation.
- Right to Cure: Giving organizations an opportunity to correct violations within a specific timeframe.
- Rebuttable Presumption: Granting a “rebuttable presumption” of compliance to businesses that follow specified requirements, such as a recognized risk management framework.
- Affirmative Defense: Providing a defense against enforcement actions if developers cure violations within 30 days and are complying with a recognized risk management framework.
It’s worth noting that state lawmakers are hesitant to include a private right of action in AI legislation because of concerns that it could lead to excessive litigation.
The emerging state-level AI governance landscape reveals a multifaceted effort to balance innovation with robust protections. These frameworks, while varying in scope and enforcement, underscore a commitment to fairness, transparency, and accountability in AI’s deployment. As states grapple with defining high-risk systems, addressing algorithmic discrimination, and establishing clear obligations, the focus remains on empowering individuals and ensuring AI benefits society as a whole. The trajectory of responsible AI development hinges on continuous dialogue to refine regulations, collaboration among developers, deployers, and policymakers, and ethical considerations built into the algorithms that shape our future.