Proposed Senate Bill Could Bring Sweeping Changes to AI Liability, Section 230, and State Regulation and Introduce Protections for AI Use
The Trump America AI Act is a discussion draft aiming to establish a robust regulatory framework around artificial intelligence (AI). This nearly 300-page bill introduces significant changes to liability for AI developers and deployers, moving toward a products liability framework.
Key Highlights of the Bill
- The Bill imposes a duty of care on AI developers and deployers to prevent foreseeable harm.
- It includes provisions that preempt certain state AI regulations but allows states to enact stronger protections.
- The Bill aims to repeal Section 230 of the Communications Act two years post-enactment, fundamentally reshaping platform and AI litigation exposure.
Liabilities Created
The proposed legislation establishes new liability frameworks for AI systems that cause harm, including:
- Negligence, strict liability, and warranty-based claims for developers.
- Deployers who modify or misuse AI products are treated as developers for liability purposes.
- Joint and several liability applies if both parties contribute to harm.
Furthermore, the Bill introduces a chatbot duty of care, with violations classified as unfair or deceptive acts. Notably, it would create a federal criminal offense for using AI chatbots to solicit minors.
Potential Preemption
The Bill is structured to allow concurrent state regulation but supersedes state law where it conflicts with federal provisions. Notably, it incorporates the NO FAKES Act to preempt state digital-replica causes of action while permitting more protective state laws.
Protections for Minors and Creators
Title IV of the Bill imposes a duty of care to prevent harm to minors, which includes:
- Communication limits and data-exposure protections.
- Age verification for chatbot users.
Additionally, it introduces new rights for creators, establishing federal property rights in an individual’s voice and likeness. The Bill denies copyright protection to unauthorized AI-generated works.
Audit Requirements
Providers of high-risk AI systems are required to undergo annual independent audits for bias and viewpoint discrimination. Audit reports must be submitted to the FTC within 180 days, emphasizing accountability and transparency.
Innovation Initiatives
The Bill mandates the establishment of a Center for AI Standards and Innovation and promotes public-private collaboration for AI evaluations. It aims to provide access to computational resources for researchers and small businesses.
Takeaways
This discussion draft signals a federal approach that pairs innovation policy with stringent product-liability, child-safety, copyright, and platform-accountability provisions. Companies involved in AI development or deployment should assess their exposure to the new liability, audit, and transparency duties, as the proposed changes could significantly alter the litigation landscape.