Court Ruling Highlights AI Access Risks to User Accounts

Court Finds AI Agent May Violate State and Federal Law by Accessing Amazon Accounts Without Authorization

A court in the Northern District of California has ruled at the preliminary injunction stage that when a website prohibits artificial intelligence (AI) agents from accessing user accounts, continued access by these agents may violate both state and federal law. This finding holds true even if the user has granted permission for the agent’s access. The case is currently on appeal to the US Court of Appeals for the Ninth Circuit, raising significant questions for both platform operators and AI developers.

The Case Overview

In the case of Amazon.com Services LLC v. Perplexity AI, Inc., Amazon accused Perplexity of configuring its AI agent to access users’ password-protected Amazon accounts at the users’ direction. The agent, known as Comet, allowed users to browse products and even make purchases.

Amazon’s terms of service require AI agents to identify themselves via a user-agent string and restrict their access to public sections of the website. Amazon alleged that Comet breached these terms by accessing the Amazon e-commerce platform while logged in, without identifying itself as an AI agent. Because Comet did not identify itself, Amazon could not distinguish the agent’s actions from those of a human user. Amazon therefore sought a preliminary injunction to halt Comet’s access.

The Court’s Decision

On March 9, 2026, Judge Maxine M. Chesney granted Amazon’s motion for preliminary injunctive relief. The court found that Amazon was likely to succeed on its claims under the federal Computer Fraud and Abuse Act (CFAA) and the California Comprehensive Computer Data Access and Fraud Act (CDAFA). A pivotal question was whether user consent for the AI agent’s access constituted sufficient authorization, or if the website operator’s terms of service prevailed.

The court sided with Amazon, determining that Comet’s access was unauthorized despite any permission granted by the user. Amazon had previously sent cease-and-desist correspondence to Perplexity, emphasizing its stance that Perplexity’s AI agent’s ongoing access was unauthorized. The court prohibited Perplexity from using AI agents to access Amazon’s protected computer systems and required the deletion of any customer data collected through unauthorized access.

Implications for Websites

Websites aiming to prevent AI agents from accessing account data or performing actions like purchasing on behalf of users should consider drafting explicit terms that prohibit such behaviors. Additionally, requiring AI agents to identify themselves as such during interactions with the website could allow for differentiated treatment of agent traffic compared to human visitors. Should AI agents violate these terms, sending cease-and-desist correspondence may bolster the argument that the access is unauthorized, supporting efforts to obtain injunctions against such conduct.
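To illustrate the kind of differentiated treatment described above, the sketch below shows a minimal server-side check that classifies requests by their User-Agent header. The token list and helper names here are illustrative assumptions, not drawn from the case or from any real platform’s policy.

```python
# Hedged sketch: classify incoming traffic by User-Agent so that
# self-identified AI agents can be limited to public pages.
# AGENT_TOKENS is a hypothetical list of identifying substrings.

AGENT_TOKENS = ("ai-agent", "bot", "assistant")

def is_declared_agent(user_agent: str) -> bool:
    """Return True if the User-Agent string self-identifies as an AI agent."""
    ua = user_agent.lower()
    return any(token in ua for token in AGENT_TOKENS)

def route_request(user_agent: str) -> str:
    """Apply differentiated treatment: declared agents get public pages only."""
    if is_declared_agent(user_agent):
        return "public-only"   # e.g., no account pages or purchase flows
    return "full-access"       # treated as an ordinary human visitor
```

An agent that does not declare itself would fall through to `"full-access"` here, which is precisely the gap terms of service and self-identification requirements aim to close: technical filtering only works when the agent cooperates.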

Implications for AI Agents

Developers of AI agents that access password-protected accounts must heed the potential implications of this ruling. The court’s decision suggests that violating a website’s terms of service could lead to claims under both the CFAA and CDAFA, and that user consent alone may not provide sufficient authorization when a website operator has explicitly revoked it. Nonetheless, this was a preliminary ruling, and there are significant counterarguments, including whether a website’s terms should override a user’s decision to authorize an agent to operate on their behalf, as well as the enforceability of such terms in this context.

The Ninth Circuit’s review on appeal may offer further clarity on these issues. In the interim, developers of AI agents should bear in mind that both statutes not only provide private rights of action but also carry potential criminal liability.
