Blackburn Rolls Out 300-Page AI Plan to Create One Federal Rulebook

The Senate’s draft proposal for national artificial intelligence (AI) legislation, unveiled on Wednesday, aims to preempt state laws by focusing on regulations that protect the “4 Cs” – children, creators, conservatives, and communities.

Led by Sen. Marsha Blackburn of Tennessee, the measure, dubbed the TRUMP AMERICA AI Act, spans nearly 300 pages. The proposal follows a December executive order from President Donald Trump aimed at creating a unified AI framework and avoiding a confusing patchwork of state regulations that could stifle innovation.

Blackburn’s proposal marks the first legislative effort since Trump’s order, formalizing two key federal resources related to AI:

  • The Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology (NIST).
  • A governance and funding structure for the National Science Foundation’s (NSF) National Artificial Intelligence Research Resource (NAIRR) pilot.

In a statement, Blackburn emphasized the need for Congress to establish a single federal rulebook for AI to ensure that America remains competitive in the global race for AI dominance.

Child Safety Regulations

The proposal includes measures to protect users under the age of 17 on online and social media platforms. Key requirements include:

  • Implementing tools and guardrails designed to enhance user safety.
  • Adopting safer design practices and adding privacy and parental control tools.
  • Restricting research involving children and teens and improving transparency over algorithm-driven content.

Additionally, platforms would be required to verify chatbot users’ ages through government-issued IDs and to disclose that bots are not human or licensed professionals. Non-compliance could lead to penalties of up to $100,000.

A Kids Online Safety Council would also be established to advise Congress on emerging online risks to minors.

Protections for Creators

Under Blackburn’s draft proposal, copyright holders would gain a legal tool requiring transparency about how AI models are trained. Creators suspecting their work has been used to train generative AI systems could request a court-issued subpoena to disclose the training data.

The proposal mandates federal agencies, particularly NIST, to create standards for identifying and labeling AI-generated content, ensuring that certain works, such as journalism, are marked as authentic or AI-generated.

Addressing AI Bias

To combat perceived bias against conservative figures in AI systems, Blackburn proposes that developers of high-risk AI conduct annual independent audits to detect political discrimination. The proposal also specifies that federal agencies may only procure models adhering to unbiased AI principles.

AI Safety Measures

The TRUMP AMERICA AI Act introduces a baseline “duty of care” for AI chatbot developers, requiring them to take reasonable steps to mitigate foreseeable user harm. A risk-based regulatory framework for AI systems would also require developers to participate in evaluation programs.

Developers and deployers of AI would be held legally accountable for harm caused by their systems. Companies and federal agencies would be required to report quarterly on AI-related job impacts, including layoffs and job displacement.

To address rising energy costs linked to AI data centers, Blackburn proposes safeguards ensuring that ratepayers are not unduly burdened by AI infrastructure expenses. This initiative aligns with Trump’s recent pledge to protect ratepayers from the costs associated with powering AI and data centers.
