Federal AI Initiatives and Florida’s Bill of Rights: A Regulatory Showdown

NIST Expands Federal AI Capacity; Florida Proposes AI Bill of Rights

Main Points

On December 22, 2025, the US Department of Commerce’s National Institute of Standards and Technology (NIST) announced the establishment of two AI centers aimed at “strengthening U.S. manufacturing and cybersecurity for critical infrastructure.” Through an expanded partnership with nonprofit MITRE Corporation, NIST is investing $20 million to establish:

  • The AI Economic Security Center for US Manufacturing Productivity
  • The AI Economic Security Center to Secure US Critical Infrastructure from Cyberthreats

The launch of these AI centers furthers Pillars I and II of the White House AI Action Plan, which emphasize accelerating AI innovation and building American AI infrastructure, particularly through investments in data centers and manufacturing ecosystems.

Meanwhile, on December 5, 2025, Florida Gov. Ron DeSantis (R) announced a sweeping AI “Bill of Rights” for consumers and a companion AI data-center proposal, together intended to establish comprehensive state-level AI regulations and consumer protections. Both measures have been filed in the state Senate and House for consideration during the legislative session beginning on January 13.

The Florida proposals aim to protect consumers and local communities from perceived harms associated with AI and the physical infrastructure that underpins it. They arrive amid a growing debate over state authority versus federal preemption, particularly following President Trump’s Executive Order, “Ensuring a National Policy Framework for Artificial Intelligence,” signed on December 11, 2025, which seeks to limit state-level AI regulation in favor of a uniform national policy.

NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

On December 22, 2025, NIST announced the establishment of two AI centers aimed at strengthening U.S. manufacturing and cybersecurity for critical infrastructure. The centers are designed to accelerate deployment of AI-driven tools and solutions in their respective domains by advancing applied research, standards, and public-private collaboration. MITRE will operate the centers in coordination with NIST experts, industry partners, and academic stakeholders.

This initiative is a key component of NIST’s Strategy for American Technology Leadership in the 21st Century, which aims to speed critical and emerging technologies from development to broad adoption. In its press release, NIST framed the new centers as a direct investment in those priorities.

The new centers expand NIST’s broader AI ecosystem, which includes the Center for AI Standards and Innovation (CAISI), tasked with defining best practices and measurement standards for US AI systems and evaluating AI systems developed by foreign adversaries. In addition, NIST plans to award funding for an AI for Resilient Manufacturing Institute under the Manufacturing USA program, expected to receive up to $70 million in federal funding over five years.

Positioning Within the US AI Regulatory Landscape

The launch of these AI centers comes amid broader federal efforts to shape a coordinated national AI policy framework. In late 2025, an Executive Order establishing the Genesis Mission underscored the administration’s ambition to harness AI for scientific discovery and economic growth across sectors.

NIST’s centers function as non-regulatory capacity-building institutions, complementing federal standards designed to promote secure AI systems. By providing technical foundations and practical tools, these centers advance the administration’s federal AI policy goals while sidestepping the fragmentation the administration attributes to divergent state AI laws.

Florida’s AI Bill of Rights

While the federal government rolls out a national AI strategy, states are increasingly asserting their own regulatory visions. In Florida, Gov. Ron DeSantis announced a comprehensive AI “Bill of Rights” for consumers and an AI data centers proposal aimed at establishing state-level regulations.

The Florida proposals include:

  • AI Bill of Rights: 15 major provisions addressing consumer protection, child safety, and industry accountability.
  • Data Centers Regulation: 12 provisions focusing on preventing cost-shifting and protecting local control.

Key measures include:

  • Prohibiting AI from using a person’s name, image, or likeness without consent.
  • Requiring explicit notices when users interact with AI chatbots.
  • Enabling parental oversight of child-chatbot conversations.
  • Banning AI “therapy” without human oversight.
  • Reinforcing bans on deepfakes involving minors.
  • Blocking the use of Chinese-origin AI tools by state or local agencies.
  • Restricting insurers from relying solely on AI for claims adjudication.

The companion data-center proposal restricts the development of hyperscale AI data centers, reflecting concerns over costs shifted onto ratepayers and the local impacts of AI infrastructure.

Intersection with Federal AI Policy

Following President Trump’s Executive Order aimed at preempting state authority over AI governance, Gov. DeSantis asserted Florida’s right to regulate AI in areas like privacy and infrastructure impacts. He stated that Florida would not allow the federal government to strip away its ability to protect Floridians.

However, the Executive Order exempts child safety measures from preemption, which may help Florida defend parts of its AI Bill of Rights if challenged. While the Florida proposal and the Executive Order share an emphasis on child safety, they diverge on which level of government should hold regulatory authority.

The interplay between these approaches may shape where regulatory authority ultimately resides. Florida’s AI Bill of Rights represents a counterpoint to the federal push for a national AI regime, advocating for state-level consumer and infrastructure protections.

As developments unfold, continued monitoring and analysis will be essential in understanding the evolving landscape of AI regulation in the United States.
