AI Advertising: Ethical Concerns and Consumer Manipulation

From Large Language Models to Long-Lasting Manipulations: The AI Act and Generative AI Advertising

General-purpose Large Language Models (LLMs) have emerged as one of the most significant innovations of the century, garnering widespread discussion and recognition, including being named persons of the year by Time magazine. Among these models, ChatGPT, developed by OpenAI, stands out as a leading example. Despite its popularity, OpenAI operates at substantial financial losses: its annual revenues, projected at $13 billion, cover only a small fraction of its estimated $1.4 trillion in computing costs over the next eight years. This financial strain has led OpenAI to consider introducing advertisements within ChatGPT, raising critical ethical and legal concerns.

Personalization and Privacy Concerns

ChatGPT can remember useful details across chats, enhancing the user experience by tailoring responses based on previous interactions. However, this feature raises significant privacy concerns, especially as advertisements become integrated into the user experience. While some users may appreciate the personalization, others may find it intrusive, particularly if advertisements exploit sensitive personal information.

Manipulation Risks

One potential manipulation scenario involves an individual seeking advice from ChatGPT during a personal crisis, such as a breakup. The model might suggest self-care products with links to shopping sites, exploiting the user’s emotional vulnerability for commercial gain. This raises ethical questions about the manipulative techniques employed by LLMs, characterized by covert influences on users’ decision-making.

Manipulation can manifest in various forms, including non-rational influence and trickery. Susser et al. define manipulation as the imposition of hidden influences on a person's decision-making, often by targeting emotional vulnerabilities. The design choices made by developers contribute to a landscape in which users may unknowingly place trust in LLMs, increasing the risk of exploitation.

Legal Remedies Under the AI Act

The AI Act, particularly Article 5, prohibits manipulative AI, yet defining manipulation remains complex. The Commission’s Guidelines state that manipulative techniques exploit cognitive biases and psychological vulnerabilities. This raises questions about whether advertisements in ChatGPT could be deemed manipulative, especially if they exploit users’ vulnerabilities.

Consumer protection laws, such as the Unfair Commercial Practices Directive, assess advertising against the benchmark of an average consumer who is reasonably well informed, observant, and circumspect. The challenge lies in determining what a "reasonably informed" AI user looks like, particularly given the diverse motivations people have for using LLMs.

Future Considerations

While the introduction of advertisements in LLMs like ChatGPT has been paused, the financial motivations driving AI development persist. As regulatory frameworks evolve, particularly in the European Union, the need for robust protections against manipulation becomes increasingly urgent. The balance between innovation and ethical responsibility will define the future landscape of AI advertising.

By taking a strong regulatory stance, institutions can establish critical boundaries that safeguard user autonomy against manipulative practices, ensuring that the benefits of technological advancements do not come at the cost of consumer rights.
