From Large Language Models to Long-Lasting Manipulations: The AI Act and Generative AI Advertising
General-purpose Large Language Models (LLMs) have emerged as one of the most significant innovations of the century, attracting widespread discussion and recognition, including prominent coverage in Time magazine. Among these models, ChatGPT, developed by OpenAI, stands out as a leading example. Despite its popularity, OpenAI operates at a substantial loss: its annual revenues, projected at $13 billion, cover only a fraction of its estimated $1.4 trillion in computing commitments over the next eight years. This financial strain has led OpenAI to consider introducing advertisements within ChatGPT, raising critical ethical and legal concerns.
Personalization and Privacy Concerns
ChatGPT can remember useful details across chats, tailoring its responses to previous interactions. This memory feature, however, raises significant privacy concerns, especially once advertisements are integrated into the user experience. While some users may appreciate the personalization, others may find it intrusive, particularly if advertisements exploit sensitive personal information.
Manipulation Risks
Consider one potential manipulation scenario: a user seeks advice from ChatGPT during a personal crisis, such as a breakup, and the model suggests self-care products with links to shopping sites, exploiting the user's emotional vulnerability for commercial gain. This raises ethical questions about manipulative techniques in LLMs, which operate through covert influence on users' decision-making.
Manipulation can take many forms, including non-rational influence and trickery. Susser et al. define manipulation as hidden influence: the covert subversion of another person's decision-making, often by exploiting vulnerabilities such as emotional distress. Developers' design choices contribute to a landscape in which users may place unwarranted trust in LLMs, increasing the risk of exploitation.
Legal Remedies Under the AI Act
The AI Act, in Article 5(1)(a), prohibits AI systems that deploy subliminal, purposefully manipulative, or deceptive techniques which materially distort a person's behaviour, yet defining manipulation in practice remains complex. The Commission's Guidelines on prohibited AI practices state that manipulative techniques exploit cognitive biases and psychological vulnerabilities. This raises the question of whether advertisements in ChatGPT could be deemed manipulative, especially if they exploit users' vulnerabilities.
Consumer protection law, notably the Unfair Commercial Practices Directive, assesses advertising against the benchmark of the average consumer, who is reasonably well informed and reasonably observant and circumspect. The challenge lies in determining what a reasonably informed AI user looks like, particularly given the diverse motivations for using LLMs.
Future Considerations
While the introduction of advertisements in LLMs like ChatGPT has been paused, the financial motivations driving AI development persist. As regulatory frameworks evolve, particularly in the European Union, the need for robust protections against manipulation becomes increasingly urgent. The balance between innovation and ethical responsibility will define the future landscape of AI advertising.
By taking a strong regulatory stance, institutions can establish critical boundaries that safeguard user autonomy against manipulative practices, ensuring that the benefits of technological advancements do not come at the cost of consumer rights.