AI Washing: The New Greenwashing
The U.S. Securities and Exchange Commission’s creation of the Cyber and Emerging Technologies Unit (CETU) marks a decisive shift in how regulators approach artificial intelligence. AI claims that began as marketing optimism now sit in a regulated space with real enforcement teeth. CETU’s mission includes investigating AI-related fraud, AI-themed scams, cybersecurity deception, and false or misleading statements about emerging technologies.
Understanding AI Washing
Central to this new landscape is AI washing: the practice of overstating, or outright inventing, the capabilities of artificial intelligence systems. Much like greenwashing, which misrepresents environmental sustainability, AI washing misrepresents technological sophistication.
AI Washing vs Greenwashing
For more than a decade, regulators have pursued companies for greenwashing—false environmental claims used to attract investors or customers. Now, AI washing is emerging as the next major frontier.
Both AI washing and greenwashing involve:
- Overstated claims in marketing or disclosures;
- Lack of supporting evidence;
- Appealing to investor sentiment (sustainability, innovation, ESG, “AI-powered” solutions);
- Failure to disclose limitations or risks;
- Pressure from competitive markets fueling exaggeration;
- Whistleblower-driven investigations;
- Material misrepresentation under securities law;
- Misleading advertising under competition law.
AI washing can be even more problematic because it may involve:
- Claims about algorithms, automation, or risk modeling that investors rely on;
- Statements about cyber or operational resilience tied directly to AI performance;
- Undisclosed manual processes masked as AI;
- AI-themed fraud schemes targeting retail investors;
- AI systems that, if misrepresented, create cybersecurity or privacy liabilities.
Put simply, if greenwashing misleads about sustainability, AI washing misleads about capability—and capability goes directly to value, risk, and governance.
CETU: The SEC’s Enforcement Response to AI Washing
CETU is charged with investigating:
- Misleading AI disclosures;
- AI-driven deception and online scams;
- Cyber intrusions and stolen credentials;
- Crypto and blockchain fraud;
- False or incomplete disclosures about cybersecurity incidents;
- Promotional overstatements of “AI-driven” financial strategies.
Building on the SEC’s first AI-washing enforcement actions in 2024, CETU represents a shift in the U.S. approach: if you claim AI capability, you must be able to prove it.
Global Perspective: UK, EU, and Canada
In the United Kingdom, AI washing is addressed indirectly through:
- The FCA’s rule that communications must be fair, clear, and not misleading;
- ASA/CMA enforcement against misleading technology claims;
- Crackdowns on “finfluencers” promoting “AI trading bots”.
In the European Union, the EU AI Act introduces:
- Transparency and documentation obligations;
- Risk-based categorization for AI systems;
- Administrative fines for misleading statements or incomplete information.
Canada’s regulatory bodies have warned against AI washing, and a revived Artificial Intelligence and Data Act (AIDA) would impose governance obligations similar to those in the EU.
Essential Steps for Companies to Combat AI Washing
To address both greenwashing and AI washing, companies must implement structural, ongoing, and verifiable controls. The necessary steps include:
1. Evidence-Based Claims
Keep technical documentation that substantiates every AI or sustainability claim, and ensure each claim can be backed up by internal records, testing data, audits, and model validation (a minimal sketch of such a record appears after this list).
2. Cross-Functional Sign-Off
AI claims require review from engineering, forensics, legal, marketing, and compliance. If any group cannot verify a claim, it should not be made or published.
3. Avoid Superlatives Without Proof
Words such as proprietary, cutting-edge, predictive, automated, and sustainable are red flags when not supported by measurable evidence.
4. Maintain Documentation of Limitations
Disclosures must include known risks, data constraints, accuracy limitations, and required human oversight.
5. Ensure Governance Is in Place Before Any Whistleblower Complaint
Companies should undertake AI risk assessments and establish disclosure controls, audit trails, and crisis-response plans before a whistleblower complaint or regulatory inquiry arrives.
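To make step 1 concrete, here is a minimal, purely illustrative Python sketch of a claim-substantiation record with a publication gate. The field names, required sign-off functions, and example data are assumptions for illustration, not a prescribed standard or any regulator’s template.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIClaimRecord:
    """One public AI claim, paired with the evidence and sign-offs behind it."""
    claim_text: str                       # exact wording used in marketing or disclosures
    system_name: str                      # the AI system the claim refers to
    evidence: list[str] = field(default_factory=list)          # tests, audits, validation reports
    known_limitations: list[str] = field(default_factory=list) # data constraints, accuracy caveats
    human_in_the_loop: bool = True        # disclose manual steps rather than masking them as AI
    last_validated: date | None = None
    sign_offs: dict[str, str] = field(default_factory=dict)    # function -> reviewer

    def is_publishable(self) -> bool:
        """Publish only if the claim has evidence, a validation date,
        and sign-off from every required function."""
        required = {"engineering", "legal", "compliance", "marketing"}
        return (
            bool(self.evidence)
            and self.last_validated is not None
            and required.issubset(self.sign_offs)
        )


# Example: this claim fails the gate because compliance has not yet signed off.
record = AIClaimRecord(
    claim_text="Our platform uses proprietary AI to predict portfolio risk.",
    system_name="risk-model-v2",
    evidence=["Q1 model validation report", "backtest results vs. benchmark"],
    known_limitations=["trained on equities only", "analyst review required for final scores"],
    last_validated=date(2025, 3, 31),
    sign_offs={"engineering": "J. Doe", "legal": "A. Roe", "marketing": "K. Lee"},
)
print(record.is_publishable())  # False until compliance reviews and signs off
```

In practice such records would live in a document-management or GRC system rather than in application code; the point is simply that every public claim maps to named evidence, documented limitations, and cross-functional sign-off before it is published.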
Navigating the Compliance Landscape
Regulators expect companies to demonstrate not just compliance but due diligence. Firms specializing in forensic and legal reviews assist companies by:
1. Evaluating Whether AI Claims Are Accurate
A forensic consultant can review code, data flows, and model behavior, distinguishing between human-driven steps and automated functions.
2. Advising on a Path Forward When Claims Are Overstated
Legal counsel and forensic consultants develop strategies to manage the exposure and correct the record when claims have gone too far.
3. Building a Framework That Withstands Whistleblower Scrutiny
Pre-emptive actions include disclosure policies, model validation protocols, audit trails, and board oversight frameworks.
Conclusion: The Path to Compliance
AI washing is not a temporary trend; it is the next major enforcement theme in global regulation. Companies must prioritize accuracy, governance, and provability to build trust, enhance credibility, and position themselves as responsible leaders in a rapidly evolving AI landscape.
As the SEC and regulators in the UK, EU, and Canada tighten their grip on AI claims, organizations must take proactive steps to avoid misrepresentation and build a robust compliance framework.