AI’s Legal Landscape: Congress and Courts Take Action

Artificial intelligence (AI) is rapidly becoming an integral part of everyday life, and as the technology evolves, the debate over how to harness it properly and ethically intensifies. This article examines how the legislative and judicial branches are responding to AI-driven abuses, from sexually exploitative deepfakes and unauthorized digital impersonation to disputes over intellectual property.

As incidents of explicit deepfake imagery increase and copyright disputes reach the courtroom, the legislative and judicial branches are racing to establish boundaries around generative AI, including the large language models (LLMs) and image generators that are redefining societal standards faster than government can respond.

This raises the pressing question at the heart of AI regulation: When do laws mitigate abuse, and when do they infringe upon First Amendment rights?

Legislative Efforts in Addressing AI Abuse

In Washington, lawmakers are advancing bills aimed at curbing the worst abuses of AI, particularly concerning sexually exploitative content and unauthorized digital impersonation. A notable piece of legislation is the Take It Down Act, which seeks to criminalize the nonconsensual distribution of explicit images, whether real or AI-generated. This bill mandates that websites and platforms must remove such content within 48 hours of a valid request.

The Take It Down Act, sponsored by Senators Ted Cruz (R-TX) and Amy Klobuchar (D-MN), passed the Senate by unanimous consent and was recently approved by the House Energy and Commerce Committee in a 49–1 vote. Its momentum is bolstered by high-profile backing from first lady Melania Trump, who has made this a central focus of her “Be Best” initiative.

Supporters of the bill argue that it is long overdue and necessary for ending the spread of exploitative material online. Critics, however, express concerns regarding potential overreach and misuse of the legislation, particularly regarding free speech.

Free Speech Concerns and Enforcement Gaps

The discourse surrounding the Take It Down Act highlights fears from civil liberties groups that the bill’s notice-and-takedown system could be exploited to suppress criticism. The Electronic Frontier Foundation cautions that the absence of safeguards could allow individuals with substantial resources to misuse the law for censorship.

Enforcement is a further concern: critics warn that a "shorthanded FTC" could make meaningful oversight nearly impossible. Proposed amendments that would have created safeguards against fraudulent takedown requests were rejected, raising alarms about the law's practical implementation.

NO FAKES Act and Protection Against AI Impersonation

Alongside the Take It Down Act is the NO FAKES Act, aimed at combating unauthorized digital impersonations using AI, particularly of artists and public figures. The proposed legislation would establish a federal right of publicity to sue over unauthorized use of likeness and voice, and impose penalties on platforms failing to comply with takedown requests.

This bill is supported by industry giants such as Google and OpenAI, and seeks to unify the current patchwork of state-level regulations that vary widely in scope and enforcement.

Copyright Battles and Their Implications

In parallel with these legislative efforts, significant court battles are shaping the landscape of AI regulation. A pivotal case is New York Times v. OpenAI, in which the newspaper accuses OpenAI of copyright infringement for using its articles without permission to train ChatGPT. A federal district judge has allowed the core claims of the lawsuit to proceed, a ruling with significant ramifications for AI competitiveness in the U.S.

The outcome could force major changes in how AI models are trained and how their outputs are generated, underscoring the tension between protecting intellectual property and fostering innovation in the AI field.

The Future of AI Regulation

The current legal landscape regarding AI is fraught with challenges that reflect broader societal concerns about privacy, free speech, and intellectual property. As Congress and the courts grapple with these issues, the stakes are high for both the technological industry and the rights of individuals.

Ultimately, the evolving nature of AI necessitates a careful balance between regulation and innovation, underscoring the importance of adaptive legislative frameworks that can respond to the rapid advancements in technology.
