AI’s Legal Landscape: Congress and Courts Take Action

Artificial intelligence (AI) is rapidly becoming an integral part of everyday life. As the technology evolves, the debate intensifies over how to harness it properly and ethically. This article examines how the legislative and judicial branches are addressing AI abuses that target vulnerable individuals and intellectual property.

As incidents of explicit deepfake imagery increase and copyright disputes reach the courts, the legislative and judicial branches are racing to establish boundaries around generative AI, including the large language models (LLMs) that power many of these tools. The technology is redefining societal norms faster than government can respond.

The pressing question of AI regulation is this: When do laws mitigate abuse, and when do they infringe on First Amendment rights?

Legislative Efforts in Addressing AI Abuse

In Washington, lawmakers are advancing bills aimed at curbing the worst abuses of AI, particularly sexually exploitative content and unauthorized digital impersonation. A notable piece of legislation is the Take It Down Act, which seeks to criminalize the nonconsensual distribution of explicit images, whether real or AI-generated. The bill mandates that websites and platforms remove such content within 48 hours of a valid request.

The Take It Down Act, sponsored by Senators Ted Cruz (R-TX) and Amy Klobuchar (D-MN), passed the Senate by unanimous consent and was recently approved by the House Energy and Commerce Committee in a 49–1 vote. Its momentum is bolstered by high-profile backing from first lady Melania Trump, who has made this a central focus of her “Be Best” initiative.

Supporters of the bill argue that it is long overdue and necessary for ending the spread of exploitative material online. Critics, however, express concerns regarding potential overreach and misuse of the legislation, particularly regarding free speech.

Free Speech Concerns and Enforcement Gaps

The discourse surrounding the Take It Down Act highlights fears from civil liberties groups that the bill’s notice-and-takedown system could be exploited to suppress criticism. The Electronic Frontier Foundation cautions that the absence of safeguards could allow individuals with substantial resources to misuse the law for censorship.

Additionally, concerns about enforcement arise, particularly with a “shorthanded FTC” potentially making oversight nearly impossible. Amendments proposed to create safeguards against fraudulent takedown requests have been rejected, raising alarms about the law’s practical implementation.

NO FAKES Act and Protection Against AI Impersonation

Alongside the Take It Down Act is the NO FAKES Act, aimed at combating unauthorized digital impersonations using AI, particularly of artists and public figures. The proposed legislation would establish a federal right of publicity to sue over unauthorized use of likeness and voice, and impose penalties on platforms failing to comply with takedown requests.

This bill is supported by industry giants such as Google and OpenAI, and seeks to unify the current patchwork of state-level regulations that vary widely in scope and enforcement.

Copyright Battles and Their Implications

In parallel with legislative efforts, significant court battles are shaping the landscape of AI regulation. A pivotal case is New York Times v. OpenAI, in which the newspaper accuses OpenAI of copyright infringement for using its articles without permission to train ChatGPT. A federal district judge has allowed the core claims of the lawsuit to proceed, a ruling with potential ramifications for U.S. competitiveness in AI.

The outcome could force major changes in how AI models are trained and how their outputs are generated, underscoring the tension between protecting intellectual property and fostering innovation in the AI field.

The Future of AI Regulation

The current legal landscape regarding AI is fraught with challenges that reflect broader societal concerns about privacy, free speech, and intellectual property. As Congress and the courts grapple with these issues, the stakes are high for both the technological industry and the rights of individuals.

Ultimately, the evolving nature of AI necessitates a careful balance between regulation and innovation, underscoring the importance of adaptive legislative frameworks that can respond to the rapid advancements in technology.
