Determining the Reasonableness of Regulating With AI
This article explores the intersection of artificial intelligence (AI) and administrative law, focusing on how recent AI advancements strain the traditional requirement that agencies engage in reasoned decision-making.
Introduction
In administrative law, agencies must examine the relevant data and provide a satisfactory explanation for their actions, a duty commonly referred to as the reasoned decision-making requirement. Agencies must therefore build a record that permits judicial review under the Administrative Procedure Act (APA) and shows they have not acted arbitrarily or capriciously. The advent of AI raises the question of whether these requirements are compatible with how AI systems actually operate.
Guiding Factors for Judicial Review
To address these concerns, three preliminary factors are proposed for courts to consider when evaluating the use of AI in the regulatory process:
- Statutory Authority: The degree to which Congress grants agencies the latitude to engage in value-laden decision-making.
- Deployment of AI: Where and how AI is utilized in formulating agency actions.
- Impact on Rights and Safety: Whether the agency action affects domains that involve rights or safety considerations.
The article posits that courts should scrutinize AI use more closely when agencies act under broad statutory mandates or in areas affecting rights or safety, and should be less skeptical in more narrowly defined statutory contexts.
Characteristics of AI Systems
AI systems, particularly those built on machine learning, can produce impressive outputs while offering little explanation of how those outputs were reached. Because these systems infer patterns from vast training data rather than apply explicit rules, they can also "hallucinate," producing outputs that are internally contradictory or factually incorrect. This raises concerns about whether AI can satisfy the APA's reasoning requirements.
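To make the opacity concern concrete, the following is a minimal, hypothetical sketch in Python using scikit-learn; it is not drawn from the article, and the training data, feature meanings, and choice of a simple logistic regression are invented for illustration. The point is only that a statistical model returns a prediction and a probability, not the kind of satisfactory explanation an agency could place in an administrative record; larger machine-learning systems are generally far less inspectable than this toy example.

```python
# Illustrative sketch only: a model trained on made-up data about past
# submissions, labeled 1 (approve) or 0 (deny). The features and labels
# are assumptions for demonstration, not real agency data.
from sklearn.linear_model import LogisticRegression

X_train = [[0.2, 1.5], [0.9, 0.3], [0.4, 1.1], [0.8, 0.2]]
y_train = [1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# The model supplies an output and class probabilities, but nothing
# resembling the reasoned explanation the APA standard contemplates.
new_case = [[0.5, 0.9]]
print(model.predict(new_case))        # predicted label, e.g. [1]
print(model.predict_proba(new_case))  # probabilities, not reasons
```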
Evaluating Statutory Authority
Courts should assess the breadth of an agency's statutory authority when reviewing AI's role in a regulatory action. An agency with broad discretionary authority, such as the Food and Drug Administration (FDA), may enjoy more latitude in deciding whether and how to deploy AI, but it must exercise that discretion with corresponding caution.
Role of AI in the Regulatory Process
AI raises fewer concerns when it performs administrative tasks such as summarizing documents or retrieving factual information. If AI is instead asked to draft substantive regulatory text, however, the agency effectively delegates its reasoned decision-making to a system that may be unable to adequately justify the choices embedded in that text.
Rights and Safety Considerations
Particular caution is warranted when AI is used in contexts that affect rights or safety. Both the Biden-Harris and Trump-Vance administrations have emphasized the need for greater transparency and documentation due to the potential risks associated with AI outputs in these sensitive areas.
Conclusion
As AI technology evolves, its implications for administrative law will turn on the facts of specific cases. The factors outlined above serve as guideposts for courts, lawmakers, and the public in navigating AI's role in the regulatory process. Whatever that role, ensuring that agencies can articulate satisfactory explanations for their actions remains paramount.