Assessing AI’s Role in Regulatory Decision-Making

Determining the Reasonableness of Regulating With AI

This article explores the intersection of artificial intelligence (AI) and administrative law, particularly focusing on how current AI advancements challenge traditional regulatory standards.

Introduction

In the realm of administrative law, agencies are required to examine relevant data and provide satisfactory explanations for their actions, commonly referred to as the reasoned decision-making requirement. This necessitates that agencies present a record allowing for judicial review under the Administrative Procedure Act (APA) to ensure they do not act arbitrarily. The advent of AI, however, raises questions about whether agency actions informed by AI systems can satisfy these requirements.

Guiding Factors for Judicial Review

To address these concerns, three preliminary factors are proposed for courts to consider when evaluating the use of AI in the regulatory process:

  1. Statutory Authority: The degree to which Congress grants agencies the latitude to engage in value-laden decision-making.
  2. Deployment of AI: Where and how AI is utilized in formulating agency actions.
  3. Impact on Rights and Safety: Whether the agency action affects domains that involve rights or safety considerations.

The article posits that courts should scrutinize AI use more closely in broader statutory mandates or in areas impacting rights and safety, while being less skeptical in more narrowly defined contexts.
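The three factors and the scrutiny heuristic above can be sketched, purely as an illustration, as a toy decision function. All names, flags, and thresholds here are hypothetical aids to understanding, not anything proposed in the article itself:

```python
from dataclasses import dataclass

@dataclass
class AgencyAction:
    """Illustrative model of the three proposed factors (names are hypothetical)."""
    broad_statutory_mandate: bool   # Factor 1: latitude for value-laden decision-making
    ai_drafts_substance: bool       # Factor 2: AI drafts regulatory text vs. admin tasks
    affects_rights_or_safety: bool  # Factor 3: rights- or safety-impacting domain

def scrutiny_level(action: AgencyAction) -> str:
    """Map the three factors to a rough level of judicial scrutiny."""
    flags = sum([
        action.broad_statutory_mandate,
        action.ai_drafts_substance,
        action.affects_rights_or_safety,
    ])
    if flags >= 2:
        return "close scrutiny"
    if flags == 1:
        return "heightened review"
    return "ordinary review"

# A narrowly authorized, purely administrative use of AI draws less skepticism:
print(scrutiny_level(AgencyAction(False, False, False)))  # ordinary review
```

The point of the sketch is only that the factors compound: the more of them an action implicates, the closer the review the article suggests courts should apply.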

Characteristics of AI Systems

AI systems, particularly those utilizing machine learning, can produce impressive outputs while offering little in the way of comprehensive explanation. Because these systems infer patterns from vast training data, they are prone to "hallucinations": outputs that are contradictory or factually incorrect. This raises concerns about AI's reliability in fulfilling the APA's reasoning requirements.

Evaluating Statutory Authority

Courts should assess the breadth of statutory authority when considering AI’s role in regulatory actions. For instance, agencies with broad discretionary authority, like the Food and Drug Administration (FDA), may be granted more leeway when implementing AI, yet must also exercise this discretion with caution.

Role of AI in the Regulatory Process

AI may be less concerning when used for administrative tasks such as summarizing documents or retrieving factual information. However, if AI is asked to draft substantive regulatory text, the reasoning behind the action effectively originates with the AI rather than the agency, and the AI may be unable to justify its outputs in the way reasoned decision-making demands.

Rights and Safety Considerations

Particular caution is warranted when AI is used in contexts that affect rights or safety. Both the Biden-Harris and Trump-Vance administrations have emphasized the need for greater transparency and documentation due to the potential risks associated with AI outputs in these sensitive areas.

Conclusion

As AI technology evolves, the implications for administrative law will depend on specific case facts. The factors outlined serve as guideposts for courts, lawmakers, and the public in navigating the complexities of AI in regulatory contexts. Ensuring that agencies can articulate satisfactory explanations for their actions remains paramount.
