AI Regulation: Balancing Innovation and Free Speech

Another Year, Another Session of AI Overregulation

As lawmakers kick off the 2026 legislative session, a new and consequential phase in the conversation about free speech and artificial intelligence is taking shape in statehouses across the country. Yet another crop of AI bills seeks to dictate how people use machines to speak and communicate, raising fundamental constitutional questions about freedom of expression.

The First Amendment and AI

The First Amendment applies to artificial intelligence in much the same way it applies to earlier expressive technologies. Like the printing press, the camera, the internet, and social media, AI is a tool people use to communicate ideas, access information, and generate knowledge. Regardless of the medium involved, our Constitution protects these forms of expression.

Existing Laws and New Regulations

As lawmakers revisit AI policy in 2026, it bears repeating that existing law already addresses many of the harms they seek to target — fraud, forgery, defamation, discrimination, and election interference — whether or not AI is involved. Fraud is still fraud, whether committed with a pen or a keyboard, because liability properly attaches to the person who commits an unlawful act, not to the instrument used to commit it.

Many of the AI bills introduced or expected this year rely on regulatory approaches that raise serious First Amendment concerns. Some would require developers or users to attach disclaimers, labels, or other statements to lawful AI-generated expression, forcing them to serve as government mouthpieces for views they may not hold.

Political Speech and Deepfake Legislation

Election-related deepfake legislation remains a central focus in 2026. Over the past year, multiple states have introduced bills aimed at controlling AI-generated political content. However, these laws often restrict core political speech, and courts have applied well-settled First Amendment jurisprudence to find them unconstitutional. For example, in Kohls v. Bonta, a federal district court struck down California’s election-related deepfake statute, holding its restrictions on AI-generated political content and accompanying disclosure requirements violated the First Amendment.

The court emphasized that constitutional protections for political speech, including satire, parody, and criticism of public officials, apply even when new technologies are used to create that expression.

Regulations on Chatbots

Another growing category of legislation seeks to restrict chatbots, or conversational AI, using frameworks borrowed from social media laws. These include blanket requirements to warn users that they are interacting with AI, which sweep in many ordinary, low-risk interactions where no warning is needed.

Some proposals would categorically prohibit chatbots from being trained to provide emotional support to users, effectively imposing a direct and amorphous regulation on the tone and content of AI-generated responses. Other proposals require age or identity verification, either explicitly or as a practical matter, before a user may access the chatbot.

The Burden of Regulation

These kinds of constraints place the government between the people and information they have a constitutionally protected right to access. They censor lawful expression and burden the right to speak and listen anonymously. For that reason, courts have repeatedly blocked similar restrictions when applied to social media users and platforms. The result is likely to be similar for AI.

Broad AI Regulatory Bills

Broad, overarching AI regulatory bills have also returned this year, with at least one state introducing such a proposal so far this cycle. These bills, which were introduced in several states in 2025, go well beyond narrow use cases, seeking to impose sprawling regulatory frameworks on AI developers, deployers, and users through expansive government oversight and sweeping liability for third-party uses of AI tools.

When applied to expressive AI systems, these approaches raise serious First Amendment concerns, particularly when they involve compelled disclosures and interfere with editorial judgment in AI design.

Addressing Real Harm

Addressing real harms, including fraud, discrimination, and election interference, is a legitimate legislative goal. However, through extensive experience defending free expression, we have seen how expansive, vague, and preemptive regulation of expressive tools often chills lawful speech without effectively targeting misconduct. That risk is especially acute when laws incentivize AI developers to suppress lawful outputs, restrict model capabilities, or deny access to information in order to avoid regulatory exposure.

Rather than targeting political speech, imposing age gates on expressive tools, or mandating government-scripted disclosures, government officials should begin with the legal tools already available to them. Existing laws provide remedies for unlawful conduct and allow enforcement against bad actors without burdening protected expression or innovation. Where gaps truly exist, any legislative response should be narrow, precise, and focused on actionable conduct.
