Trump’s AI Executive Order: Reducing Regulatory Fragmentation, Not Employer Responsibility
Introduction
The White House’s December 11, 2025 executive order (EO) titled “Ensuring a National Policy Framework for Artificial Intelligence” sets forth a federal policy aimed at establishing a cohesive national approach to artificial intelligence (AI) and reducing regulatory fragmentation. Critically, however, the EO does not change the existing antidiscrimination laws that govern employment decisions.
Employer Liability for AI-Assisted Decisions
Employer liability for AI-assisted employment decisions remains anchored in longstanding civil rights laws, which apply regardless of whether decisions are made by humans or algorithms. Employers should evaluate AI-influenced decisions under traditional discrimination frameworks, maintain documentation supporting job-relatedness, and use governance processes that can adapt as the regulatory landscape shifts.
Understanding the Legal Landscape
Two distinct bodies of law govern the use of AI in employment:
- AI-specific statutes: These laws regulate how automated tools are built and deployed.
- Civil rights laws: These laws govern the legality of the employment decisions themselves and remain unchanged by the introduction of AI technologies.
What the EO Addresses
The EO promotes a unified national approach to AI and instructs federal agencies to identify state AI laws that may conflict with federal objectives. It establishes a task force within the Department of Justice to address these inconsistencies and directs the Secretary of Commerce to catalog state requirements deemed burdensome. It does not, however, alter the core discrimination laws that apply to employment practices.
Core Discrimination Laws Remain Unchanged
Existing federal antidiscrimination statutes, including Title VII of the Civil Rights Act and the Americans with Disabilities Act, continue to govern employment practices. These laws apply regardless of whether the decision-making tool is algorithmic or human-driven. Employers must ensure that their practices do not result in unlawful discrimination.
Evaluating AI Tools in the Legal Context
Courts are increasingly scrutinizing automated hiring and screening tools under familiar civil rights principles. They are examining whether these tools produce discriminatory outcomes through established theories such as disparate impact and disparate treatment. The EO does not change this trajectory; courts will continue to apply traditional legal frameworks to AI-assisted decision-making.
Practical Guidance for Employers
To navigate this complex legal landscape, employers should consider the following:
- Evaluate AI-influenced decisions under traditional discrimination frameworks.
- Maintain documentation that supports job-relatedness and validates the use of automated tools.
- Track preemption efforts without relying on them: even if federal action preempts state AI statutes, exposure under federal civil rights laws remains unchanged.
- Develop adaptable governance processes that can evolve alongside changing regulations and business needs.
The Bottom Line
While the EO may reshape certain aspects of AI governance, it does not alter the foundational laws that directly impact employers. The framework established by existing antidiscrimination laws will continue to guide compliance and legal responsibilities in the realm of AI-assisted employment decisions.