FTC Cracks Down on AI-Washing: Key Takeaways from Growth Cave Case

FTC Resolves Another Case Involving “AI-washing”

On January 27, 2026, the Federal Trade Commission (FTC) announced the resolution of its case against Growth Cave, highlighting the agency’s ongoing scrutiny of artificial intelligence (AI)-related marketing practices. Despite significant changes within the FTC, its enforcement priorities remain consistent, particularly regarding the application of the FTC Act against companies making deceptive claims about AI products.

FTC’s Position on AI-Related Marketing Claims

In December 2025, FTC Chairman Andrew N. Ferguson stated that the agency discovered that representations made by AI companies are often “wildly inaccurate.” At an event in September 2025, Chris Mufarrige, Director of the FTC’s Bureau of Consumer Protection, reiterated the importance of trust in the marketplace for the broad adoption of AI:

“Just like everyday products and services will have difficulty being adopted in the presence of fraud or other unfair methods of competition, AI cannot be broadly adopted in the market without trust in the marketplace.” He emphasized that the FTC will enforce the law when companies deceive consumers about their AI products or the expected sales they will generate.

Since April 2025, the FTC has initiated three cases alleging deceptive marketing claims related to AI, including one case against Workado, which was resolved earlier in 2026, and another against Air AI, which remains pending.

Resolution of Growth Cave

In the Growth Cave case, the FTC alleged that the company misrepresented its “AI software,” GrowthBox, claiming it would “automate nearly 100% of the process” of setting up and running an online education course. According to the FTC, however, the technology actually required users to manually upload advertisements, set appointments, and input messages for potential customers via text and email.

The proposed orders from the FTC include a provision that prohibits defendants from misrepresenting that a product or service will utilize AI to maximize revenues or enhance profitability, effectiveness, or efficiency. This language is significant as it aims to prevent:

  • Misrepresenting that a product or service uses AI when it does not;
  • Making misleading claims that AI will improve a product’s profitability or effectiveness.

The FTC has previously used similar language in orders filed to resolve AI-related business opportunity cases involving Ascend Eco, Empire Holdings Group, and FBA Machine. The agency is likely to continue utilizing this framework in future orders addressing similar marketing claims.

Looking Ahead

The FTC has consistently warned marketers of AI products to ensure the accuracy of their claims, backing these warnings with enforcement actions. Sellers are encouraged to substantiate claims made for AI products and avoid misleading consumers through AI references in their marketing materials.

As the marketplace evolves, the FTC’s stance on AI-related marketing practices serves as a crucial reminder for businesses to maintain transparency and integrity, fostering trust with consumers.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...