EU Classifies ChatGPT as VLOSE: New Challenges for AI Regulation

Introduction

The European Union (EU) has recently designated OpenAI’s ChatGPT as a “very large search engine” (VLOSE) under its Digital Services Act (DSA). This classification marks a significant regulatory development, particularly as ChatGPT has reportedly attracted over 120 million monthly active users within the EU, surpassing the 45 million user threshold required for VLOSE status.

Implications of the VLOSE Label

As a VLOSE, OpenAI is now subject to a more rigorous compliance framework. Key requirements include:

  • Conducting assessments of systemic risks related to illegal content, fundamental rights, public safety, and well-being.
  • Implementing risk-mitigation measures and undergoing annual independent audits.
  • Providing data access to vetted researchers.
  • Ensuring transparency in recommender systems, including offering users at least one option that does not rely on profiling.

These requirements are similar to those imposed on major search platforms such as Google Search and Microsoft Bing.

OpenAI at a Strategic Crossroads

This regulatory change comes at a pivotal moment for OpenAI, which recently closed a major funding round that lifted its valuation. The EU's emphasis on risk management and fundamental rights contrasts with the innovation-first approach typically seen in the United States. By categorizing ChatGPT as a search engine, the EU is extending its digital governance model to advanced AI systems, aiming to balance technological progress with societal safeguards.

Challenges Ahead

Adapting to these heightened transparency and accountability rules presents a considerable challenge for OpenAI's European operations. The implications include:

  • Potential delays in development timelines and feature rollouts specific to the EU.
  • Increased operational costs and complexity due to strict compliance demands.
  • Possible restrictions on data use strategies for model training in Europe, which may disadvantage OpenAI compared to regions with less stringent regulations.

Concerns Over Innovation

Some analysts express concern that the VLOSE classification could slow OpenAI’s rapid innovation cycle within the EU. The comprehensive compliance requirements, while aimed at user protection, may inadvertently hinder the very innovation they seek to govern.

Global Implications of EU Regulation

The EU’s push for AI regulation through the DSA and the AI Act is intended to shape global AI governance. However, this could lead to market divisions and create compliance challenges for international technology companies. Firms may face a dilemma between maintaining global consistency and adapting to EU-specific regulations, potentially impacting the availability and full functionality of advanced AI tools for European users.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...