AI Industry Players Vow Compliance After China’s Annual 315 Gala Uncovers AI ‘Data Poisoning’
Following China Central Television (CCTV)'s annual 3.15 Consumer Rights Gala, significant concerns have emerged regarding AI data poisoning linked to a commercial technique known as generative engine optimization (GEO). The Gala highlighted alarming practices within the GEO industry, prompting several start-up companies to issue compliance statements.
Spotlight on GEO Industry
According to reports from CCTV, certain GEO service providers assured clients that, for a fee, their products could be prominently featured in mainstream AI-generated responses. This alarming trend includes the presentation of misleading advertising as “standard answers” in AI outputs.
The Gala brought to light various unethical practices such as false information generation, AI data poisoning, ranking manipulation, and malicious competition—all perceived as growing threats to the integrity of large AI models.
Industry Responses
In response to these revelations, Henan Henghui Hehuan issued a statement via its official WeChat account. The company asserted that its product, Zhaixing GEO, adheres to the principle of “technology for good, compliance first.” It emphasized a commitment to distancing itself from illegal operators and pledged to strictly control the content it feeds into AI systems. The company further disavowed the use of digital marketing tricks or fake traffic to artificially boost rankings, urging industry peers to maintain integrity and operate within the law.
Similarly, Shanghai-based ABKE released an emergency statement denouncing all forms of false promotion, data falsification, and malicious manipulation of AI outputs. The company clarified that it is not involved in the operations described in industry reports as “brainwashing AI” or “manipulating standard answers.”
The Threat of AI Poisoning
The CCTV report outlined how GEO tools, initially developed to enhance information dissemination and promotional effectiveness, can be misused. Industry insiders indicated that large volumes of systematically targeted false information could infiltrate AI training systems, subsequently being presented as high-priority responses to user queries.
Experts define AI poisoning as the intentional introduction of fabricated information into the ecosystem—such as fake expert identities and bogus research reports—causing AI systems to learn and replicate these inaccuracies. Those engaging in this practice, labeled “poisoners,” pose a significant risk to the integrity of training corpora.
Consequences for Consumers and the Market
The GEO services pose a unique challenge, as they can mask commercial promotions as seemingly objective AI-generated knowledge, making it difficult for consumers to discern the commercial intent behind such content. Pan Helin, a veteran economist, noted that this behavior may infringe on consumers’ rights to informed choices and could breach advertising regulations regarding ad identifiability. Moreover, the fabrication of facts risks distorting market competition and eroding public trust in AI as a reliable source of information.
Call for Action
According to experts, the training data for Chinese large models primarily derive from the Chinese internet, making it crucial for multiple stakeholders to address this issue collaboratively. If not managed properly, speculators employing GEO technology could rapidly degrade the quality of large models.
Pan emphasized the need for authorities to actively improve the online information ecosystem, raise the quality of internet content, and increase the availability of objective, scientifically grounded information.
In January, the State Administration for Market Regulation identified AI-generated advertising as a significant challenge for internet advertising regulation in 2026, urging enhanced supervision and management of such practices.