States Can Lead on AI Implementation: Here’s How
Imagine you are a state-level technology leader. Recent advancements in artificial intelligence promise to make approving small business licenses faster, improve K-12 student learning, and standardize compliance across agencies. All of these innovations aim to enhance the experience of your state’s constituents. Eager to deploy this new technology responsibly, you seek guidance from peers in other states. However, their responses vary widely, and in the absence of federal guidance, it becomes clear that there is no standardized playbook. You must chart the path forward on your own, with far fewer resources than your federal counterparts.
This scenario is increasingly common as AI systems rapidly enter consumer-facing services. Without federal action on AI, state government leaders are shouldering the dual responsibility of protecting consumers from potential algorithmic harms while supporting responsible innovation to improve service delivery. States possess structural advantages that enable them to experiment with regulatory approaches: shorter legislative cycles allow for quicker adjustments, states have the authority to pilot programs, and sunset provisions make it easier to revise or retire early-stage governance models. This positions states as agile regulators capable of setting up guardrails for rapidly evolving AI technologies that affect residents.
However, this regulatory agility must be matched by sufficient government capacity to succeed. The current lack of federal action is forcing states to pass new AI laws and tackle significant implementation challenges without the AI expertise typically found in federal agencies or major private employers. Building this capacity within state governments demands resources and technical expertise that most states are only beginning to establish. Without deliberate investment in transparency and talent, even the most well-crafted legislation may fall short of its intended goals.
Increased Transparency to Build Public Trust
One immediate way state legislatures can move forward is through the passage and successful implementation of use-case inventories. A use-case inventory is a public catalog of the algorithmic tools a government uses and the specific purposes they serve. It discloses when and where state governments use algorithmic tools in consumer-facing transactions, such as applications for social programs and public assistance benefits. Published by governments as a transparency mechanism, these inventories also enable third parties to audit outcomes.
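To make the idea concrete, here is a minimal sketch of what a single machine-readable inventory entry might look like. The record format and field names (agency, domain, use_case, risk_level, vendor, public_url) are illustrative assumptions, not drawn from any statute or published state schema:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class InventoryEntry:
    """One record in a hypothetical public AI use-case inventory."""
    agency: str                 # government entity deploying the tool
    domain: str                 # policy area, e.g. "Government Benefits"
    use_case: str               # plain-language description of what the tool does
    risk_level: str             # e.g. "high" for consequential eligibility decisions
    vendor: Optional[str]       # third-party supplier, if any
    public_url: Optional[str]   # link to documentation or audit results, if published

# Example entry, using a use case described later in this article.
entry = InventoryEntry(
    agency="CA Employment Development Department",
    domain="Government Benefits",
    use_case="Algorithm rating the likelihood of a fraudulent application",
    risk_level="high",
    vendor=None,
    public_url=None,
)

# Publishing entries as JSON keeps the inventory easy for auditors,
# journalists, and other agencies to parse programmatically.
print(json.dumps(asdict(entry), indent=2))
```

Plain-language fields plus a structured format serve both audiences at once: residents can read the descriptions, while auditors can process the records in bulk.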
The benefits of public-facing AI use-case inventories are extensive. They enhance government transparency around automated decision-making outcomes, provide valuable insights to private-sector vendors, facilitate third-party auditing and bias testing, and promote interagency sharing of best practices when AI tools are used effectively. This is particularly crucial in high-risk decisions related to government benefits and services. Conversely, a lack of transparency in expenditures on private and third-party vendor tools can leave an agency unaware of which tools it has acquired and whether they are safe to deploy in consumer-facing settings.
As skepticism among Americans regarding the practical uses of AI tools grows, it is vital to design public systems that promote transparency in the deployment of algorithmic tools in both public and private sectors.
Case Study: Implementation Challenges in California
While federal experience demonstrates that AI use-case inventories can be effective, it also reveals their limitations: transparency mechanisms depend on technical talent and focused implementation. California serves as a cautionary example. In 2023, the state legislature passed Assembly Bill 302, directing the California Department of Technology to “conduct a comprehensive inventory of all high-risk automated decision systems [ADS] used by state agencies” and submit a report to the legislature. The bill aimed to shed light on AI deployment in consumer-facing interactions, responding to public reports of biased technology affecting applicants for public services.
However, the initial implementation deadline came and went, and the only report released stated that there were “no high-risk ADS [tools] being used by State agencies,” a claim that is easily disputed. For instance, the state healthcare exchange uses automated document processing tools to assess eligibility for health insurance, while the unemployment insurance program employs an algorithmic tool to evaluate the likelihood of application fraud. These significant decisions have real repercussions for California residents.
Instead of producing a transparent use-case inventory, the report provided misleading information. The following list highlights additional examples of publicly disclosed automated decision-making system use cases in California’s state government (a sketch of how such entries might be published in machine-readable form follows the list):
- Domain: Government Benefits
  Agency: Covered California
  Use Case: Automated document processing for health insurance eligibility
- Domain: Governance
  Agency: CA Department of Finance
  Use Case: Using generative AI to assess the fiscal impact of legislative proposals
- Domain: Taxation
  Agency: California Department of Tax and Fee Administration
  Use Case: Using GenAI tools to assist in responses to taxpayers
- Domain: Government Benefits
  Agency: CA Employment Development Department
  Use Case: Algorithm rating the likelihood of a fraudulent application
- Domain: Government Benefits
  Agency: California Student Aid Commission
  Use Case: Chatbot engagement platform for financial aid applications
- Domain: Government Benefits
  Agency: CalHHS
  Use Case: Algorithms for data matching across healthcare systems
- Domain: Transportation
  Agency: California Department of Transportation (CalTrans)
  Use Case: Pilot programs in traffic safety and congestion
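Even a simple structured version of the list above would support the third-party auditing this article describes. The sketch below renders the publicly disclosed California use cases as records and filters them for benefits-related systems, where automated errors carry the highest stakes. The values come from the list above; the record format itself is an assumption, not an official state schema:

```python
# The publicly disclosed California use cases listed above, as records.
california_inventory = [
    {"domain": "Government Benefits", "agency": "Covered California",
     "use_case": "Automated document processing for health insurance eligibility"},
    {"domain": "Governance", "agency": "CA Department of Finance",
     "use_case": "Using generative AI to assess the fiscal impact of legislative proposals"},
    {"domain": "Taxation", "agency": "California Department of Tax and Fee Administration",
     "use_case": "Using GenAI tools to assist in responses to taxpayers"},
    {"domain": "Government Benefits", "agency": "CA Employment Development Department",
     "use_case": "Algorithm rating the likelihood of a fraudulent application"},
    {"domain": "Government Benefits", "agency": "California Student Aid Commission",
     "use_case": "Chatbot engagement platform for financial aid applications"},
    {"domain": "Government Benefits", "agency": "CalHHS",
     "use_case": "Algorithms for data matching across healthcare systems"},
    {"domain": "Transportation", "agency": "California Department of Transportation (CalTrans)",
     "use_case": "Pilot programs in traffic safety and congestion"},
]

# An auditor might start by isolating systems that touch benefits
# eligibility, the category AB 302 was most concerned with.
benefits_systems = [e for e in california_inventory
                    if e["domain"] == "Government Benefits"]

for e in benefits_systems:
    print(f'{e["agency"]}: {e["use_case"]}')
```

Four of the seven disclosed use cases fall in the benefits domain, which illustrates why a report finding "no high-risk ADS" in use warrants scrutiny.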
These findings underscore the urgent need to embed technical talent within state governments to ensure laws are implemented effectively. The federal government provided guidance during the collection of its own use-case inventories and publicly released a final inventory for most agencies. Even with that substantial support, federal agencies encountered notable challenges in creating their inventories. The importance of transparency in deploying AI technologies cannot be overstated, particularly for building public trust and accountability.