Trump’s AI Strategy: A Shift Towards Deregulation and Global Leadership

Recent actions from the Trump administration indicate a significant shift in US artificial intelligence (AI) policy. This new direction moves away from the previous administration’s emphasis on oversight, risk mitigation, and equity, instead favoring a framework focused on deregulation and the promotion of AI innovation to maintain US global dominance.

The administration believes that this shift will better position US tech companies to continue leading in AI development. However, challenges persist for companies operating in foreign jurisdictions with stricter AI regulations and in specific US states that have enacted their own AI regulatory rules.

Divergence in Regulatory Strategies

The contrast between the federal government’s pro-innovation strategy and the precautionary approach taken by jurisdictions such as the EU, South Korea, and individual US states underscores the need for companies to adopt flexible compliance strategies that accommodate differing regulatory standards.

Deregulation and National Dominance

Vice President JD Vance articulated the administration’s commitment to US AI dominance during a recent policy speech. He asserted that the future of AI would not be won through safety concerns but through innovation. Vance criticized foreign governments for imposing stringent regulations on US tech firms and urged European countries to adopt a more optimistic view of AI.

This declaration reinforces the policy outlined in President Trump’s January 25 executive order, which replaced the previous administration’s directives on AI. The Trump order explicitly frames AI development as a matter of national competitiveness and economic strength, prioritizing policies that remove perceived regulatory obstacles to innovation. It also criticizes the influence of “engineered social agendas” in AI systems, seeking to ensure that AI technologies remain free from ideological bias.

In contrast, the previous administration focused significantly on responsible AI development, emphasizing the need to address risks such as bias, disinformation, and national security vulnerabilities.

Immediate Review of Existing Policies

Trump’s directive mandates an immediate review and potential rescission of all policies established under the previous administration that could hinder AI innovation. This shift is expected to result in the removal or substantial overhaul of the structured oversight framework that had been previously established.

While the former order also promoted innovation and competitiveness, it was paired with risk mitigation measures, enhanced cybersecurity protocols, and monitoring requirements for AI used in critical infrastructure.

Global AI Governance and Fragmentation

The current administration’s deregulatory approach occurs in a global context where stricter regulatory frameworks for AI are being advanced in other jurisdictions. The EU’s 2024 AI Act introduces comprehensive rules for AI technologies, focusing on safety, transparency, accountability, and ethics.

Countries like Japan, the UK, South Korea, and Australia are also developing AI laws, many of which emphasize accountability and ethics, contrasting sharply with the US’s current pro-innovation stance.

This divergence could lead to friction between the regulatory environments of the US and the EU, especially for global companies that must navigate both systems. While the EU may soften its stance somewhat to encourage innovation, broad alignment between US and EU regulatory approaches remains unlikely.

State-Level AI Regulatory Challenges

The administration’s approach is likely to widen the gap between federal and state AI regulatory systems. States like California, Colorado, and Utah have already enacted AI laws with varying degrees of oversight. Increased state-level activity in AI might lead to regulatory fragmentation, as states implement their own rules to address concerns related to high-risk AI applications.

If Congress enacts an AI law prioritizing innovation over risk mitigation, stricter state regulations could face federal preemption, meaning that federal law could override conflicting state laws. Organizations must closely monitor developments across international, national, and state levels to effectively navigate the fragmented AI regulatory landscape.

This ongoing debate over AI governance reflects broader tensions between innovation and regulation, shaping the future of artificial intelligence in the United States.
