Toward Minimum Administrative Law Standards for Agency Usage of AI

Much of the emerging thinking about the relationship between administrative law and generative artificial intelligence is premised, expressly or implicitly, on the assumption that AI systems might come to play a leading role in shaping and explaining administrative action. It is easy to see why. One of the principal features of the Administrative Procedure Act (APA) is that it forces policymakers to offer cogent, written accounts of their reasoning, a task that can often be complicated and time-consuming. But even at their current stage of development, large language models (LLMs) are capable of quickly generating high-verisimilitude prose, even on technical matters.

If the prospect of agencies setting machines loose to generate and justify regulatory proposals once seemed far-fetched, it no longer does. Over the summer, the Washington Post obtained a proposal by the U.S. DOGE Service to use AI to facilitate the rescission of half of all federal regulations by January 2026. DOGE touted that AI would revolutionize the rulemaking process, saving “93% of Man Hours” and “automat[ing]” research, writing, and analysis of public comments.

The fate of the proposal, and of DOGE itself, is unknown. The White House equivocated at the time, and the avalanche of regulatory actions the presentation anticipated would be submitted to the Office of Information and Regulatory Affairs for review this fall has yet to materialize. And yet the administration continues to signal that it hopes to use artificial intelligence to accelerate the rulemaking process. More recent reporting indicates that senior officials at the Department of Transportation are planning to use Google Gemini to draft proposed rules “in a matter of minutes or even seconds.”

The Role of AI in the Rulemaking Process

These proposals bring within the realm of possibility a maximalist vision of AI’s role in the rulemaking process: an LLM identifying areas for regulatory action, deciding what action to take, offering justifications for that action, and rebutting public comments—all subject to only the most cursory human review and rubber-stamping.

As noted, agencies acting in this way would likely face significant legal obstacles. At the very least, rules produced by LLMs, even if judged on their own terms, might be particularly vulnerable to APA challenge. More fundamentally, the APA and the cases interpreting it are plausibly read to require certain forms of substantive human involvement in the rulemaking process, which would preclude agencies from entirely outsourcing their work to AI.

Reasoned Decisionmaking & Human Involvement

As readers of this forum know, the APA directs courts to “set aside” agency action that is “arbitrary” and “capricious.” LLMs are, at present, subject to limitations and prone to systemic errors. For instance, LLMs have been found to “hallucinate” false information, a problem that has persisted even as technology has advanced in other respects. They might act sycophantically, validating or agreeing with even objectively incorrect user prompts. LLMs also have limited “context windows,” which refer to the amount of text or information they can consider at one time. They may thus struggle to accurately process long documents, which is a particular concern in rulemaking that often requires analyzing complicated and extensive agency records.
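To make the context-window concern concrete, the sketch below estimates whether a rulemaking record would even fit within a model's context window before any text is submitted to it. This is a minimal illustration, not a description of any agency tool: the 128,000-token limit, the tokens-per-word ratio, and the docket folder name are assumptions chosen for demonstration only.

```python
# Illustrative sketch: estimate whether an administrative record fits in an
# LLM's context window before relying on the model to analyze it.
# The context-window size and tokens-per-word ratio are assumptions for
# demonstration only; real values vary by model and tokenizer.

from pathlib import Path

ASSUMED_CONTEXT_WINDOW_TOKENS = 128_000   # hypothetical model limit
ASSUMED_TOKENS_PER_WORD = 1.3             # rough rule-of-thumb estimate


def estimated_tokens(text: str) -> int:
    """Approximate token count from a simple word count."""
    return int(len(text.split()) * ASSUMED_TOKENS_PER_WORD)


def record_fits_in_context(record_dir: str) -> bool:
    """Check whether the docket documents, taken together, fit in one prompt."""
    total = 0
    for doc in Path(record_dir).glob("*.txt"):
        total += estimated_tokens(doc.read_text(encoding="utf-8"))
    print(f"Estimated record size: {total:,} tokens "
          f"(assumed limit: {ASSUMED_CONTEXT_WINDOW_TOKENS:,})")
    return total <= ASSUMED_CONTEXT_WINDOW_TOKENS


if __name__ == "__main__":
    # "docket_documents" is a hypothetical folder of record materials.
    record_fits_in_context("docket_documents")
```

A record that exceeds the assumed limit would have to be chunked or summarized before the model could see it at all, which is exactly the kind of methodological choice that, as discussed below, an agency would have to explain.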

These issues mirror the classic categories of errors identified by the Supreme Court as arbitrary and capricious in State Farm, such as when an agency “relie[s] on factors which Congress has not intended it to consider” or “fail[s] to consider an important aspect of the problem.” An LLM incapable of faithfully reviewing an administrative record might produce a result that misstates or disregards evidence, increasing exposure to arbitrary and capricious claims.

Substantive Human Involvement Required

Reliability concerns highlight one way in which the APA requires substantive human involvement in the rulemaking process: an agency that wishes to rely on AI must explain and justify that methodological choice. This requirement is not new. Agencies have long been required to explain their methods when using mathematical models to inform rules. In such contexts, agencies bear an “affirmative burden” to explain the assumptions and methodology used in preparing any model and provide a full analytical defense of any challenged aspects.

This requirement extends to agencies using LLMs in rulemaking, which must reasonably explain how they chose and developed their model, how they prompted the model and validated its outputs, and why they view those results as reliable. Rubber-stamping the output of a tool known to be prone to error without this explanation would be arbitrary and capricious.
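One hedged illustration of what validating a model's outputs might involve: the sketch below cross-checks the citations in a model-drafted preamble against an index of documents actually in the docket, flagging any citation the record does not contain. The citation format, file names, and regular expression are hypothetical; the point is only that a verification step of this kind is the sort of thing an agency could describe when it explains why it views the model's results as reliable.

```python
# Illustrative sketch: flag citations in an LLM-drafted preamble that do not
# appear in the rulemaking docket, a simple guard against hallucinated sources.
# The citation pattern "[Doc. XYZ-123]" and the file names are hypothetical.

import re

CITATION_PATTERN = re.compile(r"\[Doc\.\s*([A-Z0-9-]+)\]")


def load_docket_index(path: str) -> set[str]:
    """Read one docket document ID per line from a plain-text index file."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}


def unverified_citations(draft_text: str, docket_ids: set[str]) -> list[str]:
    """Return citation IDs in the draft that are absent from the docket."""
    cited = CITATION_PATTERN.findall(draft_text)
    return sorted({doc_id for doc_id in cited if doc_id not in docket_ids})


if __name__ == "__main__":
    docket = load_docket_index("docket_index.txt")            # hypothetical file
    with open("draft_preamble.txt", encoding="utf-8") as f:   # hypothetical file
        draft = f.read()
    missing = unverified_citations(draft, docket)
    if missing:
        print("Citations with no matching docket document:", missing)
    else:
        print("Every citation in the draft matches a docket document.")
```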

Moreover, the APA’s notice-and-comment procedures obligate agencies to consider and address significant public comments. Agencies have broad discretion in how they fulfill this task, but that discretion is not unlimited. For instance, the D.C. Circuit has stated that “dependence on severely skewed staff summaries may breach the decisionmaker’s statutory duty to accord ‘consideration’ to relevant comments.” An agency relying on an LLM to respond to public comments without independent human effort risks violating this obligation.

Conclusion

This analysis is provisional. While the rubber-stamping model of AI-driven rulemaking may prove unrealistic, it serves as a device to tease out the basic principles that could govern agency use of AI. The minimum standards outlined should apply to less aggressive deployments of AI in rulemaking, with the line-drawing question likely becoming a focal point of litigation, regulation, and perhaps legislation.
