The Dangers of AI-Washing in Nutrition

How Real Is the Threat of AI-Washing?

AI-washing is a deceptive marketing tactic in which companies exaggerate the role AI plays in their products or services. Food risk scientists and food regulation experts warn that the practice poses real hazards.

The Growing Use of AI in the Nutrition Space

AI is increasingly being leveraged in the nutrition sector to improve health outcomes. Various AI-powered apps and platforms are now capable of providing individualized nutrition advice based on factors such as genetic makeup, metabolic health, digestive health, and activity levels. For instance, brands like Zoe and Lumen utilize AI to offer tailored dietary recommendations.

Moreover, machine learning algorithms analyze massive datasets from clinical studies, food diaries, and health metrics to identify patterns in how specific foods affect particular populations. This analytical capability allows companies like InsideTracker to suggest effective and science-backed dietary interventions.
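The kind of pattern-finding described above can be illustrated with a toy example: averaging a health metric (post-meal glucose rise) per food across a small synthetic dataset. The data and numbers here are invented for illustration; real platforms use far richer models and much larger datasets.

```python
from collections import defaultdict
from statistics import mean

# Synthetic food-diary entries: (food, post-meal glucose rise in mg/dL).
entries = [
    ("white rice", 48), ("white rice", 52), ("lentils", 20),
    ("lentils", 24), ("oats", 30), ("oats", 34),
]

# Group responses by food.
by_food = defaultdict(list)
for food, rise in entries:
    by_food[food].append(rise)

# Average response per food reveals which foods drive the largest effect.
averages = {food: mean(rises) for food, rises in by_food.items()}
print(averages)  # white rice shows the largest average response here
```

Even this crude aggregation hints at how population-level patterns emerge from many individual records; production systems would additionally control for portion size, meal context, and individual baselines.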

AI image recognition technology has also advanced significantly, enabling applications like Yuka to identify food items from photos. Users can scan barcodes, and the AI estimates the nutritional breakdown of products.
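A barcode-driven nutrition lookup of the sort described above can be sketched as follows. The product database, barcodes, and scoring formula here are hypothetical stand-ins; an app like Yuka queries far larger databases and uses its own proprietary scoring.

```python
# Hypothetical product database keyed by EAN-13 barcode (per 100 g).
PRODUCTS = {
    "3017620422003": {"name": "Hazelnut spread", "sugar_g": 56.3, "fat_g": 30.9},
    "5000159484695": {"name": "Milk chocolate bar", "sugar_g": 59.0, "fat_g": 20.0},
}

def nutrition_breakdown(barcode: str) -> dict:
    """Return nutrition facts for a scanned barcode, or raise KeyError."""
    product = PRODUCTS[barcode]
    # Toy quality score: penalize sugar and fat content (illustrative only).
    score = max(0.0, 100.0 - product["sugar_g"] - product["fat_g"])
    return {**product, "quality_score": round(score, 1)}

print(nutrition_breakdown("3017620422003"))
```

In a real app the barcode would come from the device camera, and image recognition would handle unpackaged foods that have no barcode at all.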

Furthermore, AI technology is being used to predict the nutritional needs of individuals based on their health status, such as identifying nutrient deficiencies in infants through the app Alba Health. Brands like Withings leverage AI in their Health Mate app to perform comprehensive health assessments.

Challenges with AI in Nutrition

While established brands in the nutrition industry recognize the transformative potential of AI and actively invest in its development, many face significant challenges in implementing it. If brands claim their products are AI-powered without meaningfully using AI, they mislead consumers into believing their offerings are more advanced or scientifically grounded than they really are.

According to experts, claims of being “AI-powered” are often exaggerated. Additionally, if AI generates complex content that humans cannot fully review, it may fail to comply with regulations, leading to misinformation and decreased trust in AI-generated content.

However, excessive caution can also make AI less useful: over-restriction could prevent AI from generating valid health and dietary advice, so companies must weigh these risks carefully.

The EU AI Act

The EU AI Act is a regulation adopted by the European Union to govern AI technologies, ensuring they are used ethically, safely, and transparently. The Act seeks to maximize the benefits of AI while minimizing risks, establishing Europe as a leader in AI regulation across all industries, including food and nutrition.

Most AI tools used in the nutrition space are classified as low-risk and face no specific obligations. However, the Act establishes rules on transparency, risk classification, and certain prohibited practices. For instance, it mandates disclosure when virtual influencers resembling real people are used, a tactic becoming more common in supplement promotions.

AI-powered medical devices used by consumers to guide food or supplement choices are also subject to regulation under this Act. Compliance with the AI Act’s guidelines is essential for food and nutrition businesses.
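The Act's tiered approach can be sketched in code. The four risk tiers below (prohibited practices, high-risk, limited-risk transparency obligations, minimal risk) reflect the Act's actual structure, but the triage logic is a simplified assumption for illustration, not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk (conformity assessment required)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no specific obligations)"

def classify(system: dict) -> RiskTier:
    """Very rough triage of a nutrition AI system under the EU AI Act.

    The attribute names and ordering here are illustrative assumptions;
    real classification requires legal analysis of the Act's annexes.
    """
    if system.get("manipulative_techniques"):
        return RiskTier.PROHIBITED
    if system.get("medical_device"):          # e.g. AI guiding clinical nutrition
        return RiskTier.HIGH
    if system.get("interacts_with_humans"):   # e.g. chatbots, virtual influencers
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify({"interacts_with_humans": True}))
```

Note how a nutrition chatbot lands in the limited-risk tier (triggering disclosure duties), while the same advice delivered through a regulated medical device would move the system into the high-risk tier.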

Advice for Brands

Regulators are in the early stages of assessing AI systems in the nutrition space for compliance with the Act. Although the Act is not overly complicated and guidance is available, some issues still require clarification.

Brands intending to deploy AI should closely review the Act, providing training for staff or engaging professionals for assistance. Both AI developers and deployers must understand their obligations, particularly when using AI developed outside the EU, which can introduce additional challenges.

Companies utilizing AI-generated health or nutrition advice without proper oversight face legal risks, as the AI Act includes enforcement mechanisms and penalties.

In summary, a degree of skepticism toward AI claims in nutrition is prudent during these early stages of development and regulation.
