Gutting AI Transparency in the Name of Deregulation Will Not Help Europe

There has been a vibe shift in Brussels. Gone are the days when the EU postured as the global digital rulemaker reining in Big Tech.

In an attempt to shake its reputation as a tech laggard and adjust to new geopolitical realities, the EU is now pivoting toward regulatory simplification. But in its rush to simplify rules for homegrown businesses, the EU risks compromising long-held principles.

Simplification should not be a guise for deregulation. By mistaking transparency and openness for an obstacle to innovation rather than a driver of it, the EU would shoot itself in the foot. The new transparency rules for AI and data under the EU’s AI Act may become one of the first casualties of this impetus to roll back recently adopted requirements for providers of so-called general-purpose AI (GPAI) models.

Under the EU’s AI Act, developers of GPAI models — that is, very large AI models such as OpenAI’s GPT or Google’s Gemini — will soon have to present a “sufficiently detailed” public summary of the data they used to train their models. This summary could be a light-touch way to dramatically advance transparency around the use of one of AI’s most precious inputs, data, at little additional cost to developers.

However, if the EU’s AI Office gives in to industry pressure to water down the level of detail, this summary will turn into a performative checkbox exercise that ultimately offers little value to anyone. This would be misguided and short-sighted.

Who Is Scared of Transparency?

From a fundamental rights perspective, transparency cannot be treated as an optional ‘add-on’ or a ‘nice to have’. Transparency enables the exercise of rights, helps to hold tech companies accountable, informs public debate, and allows for oversight of this emerging technology without interfering with its development.

In short, we cannot hope to govern AI without better transparency. Economically, too, the fear of transparency is ill-conceived. Robust transparency standards are not barriers to socially beneficial tech innovation.

On the contrary, they promote the diffusion of innovation and drive competition in a more trustworthy and sustainable way. The success of open-source software — which today serves as the bedrock of technology everywhere — proves that openness, not secrecy, fosters technological advancement and leadership.

A secrecy-based approach, on the other hand, favors market incumbents, keeps knowledge and scientific advancements walled in, and slows technological progress.

In AI, as in other digital industries, long-term competitive advantage comes from innovation, not secrecy. The EU should not abandon its commitment to transparency on the mistaken assumption that competitiveness and transparency are mutually exclusive.

When it comes to transparency around the data used to train cutting-edge AI models, AI developers and their industry associations are vociferously opposed — making blanket claims of trade secrecy, voicing unspecified reservations regarding supposed security vulnerabilities, and citing overbearing compliance burdens.

However, it is far more likely that the real motivation here is to avoid public scrutiny and potential legal liability (for copyright or data protection breaches, for example), and to use their data to build a competitive moat. The irony is that personal data and copyright protections are among the very interests that led EU lawmakers to include the transparency obligations in the AI Act in the first place.

The Case Against Transparency Is Deeply Flawed

AI developers have provided little clarity on what specific information should be protected as a trade secret, despite the well-defined criteria outlined in EU trade secret law. There are reasonable arguments for why much, if not most, of the information that we have argued should be included in the Commission’s summary template does not meet these criteria.

What’s more, basing a strategy to bolster the competitiveness of EU AI companies on trade secrecy around training data is misdirected. Relying on trade secrets to wall off information about training data may also discourage AI developers from using diverse, high-quality datasets, resulting in biased, less reliable or even harmful AI applications. This is precisely the opposite of what the EU’s economy needs.

Security arguments against transparency in this context have likewise been devoid of specifics. There may well be special circumstances in which disclosing certain information introduces risks; such concerns should be taken seriously. But without any explanation of which information, if disclosed publicly, would create which vulnerability, it is hard to do so.

As for the prospective compliance costs of providing clear and meaningful training data documentation: almost all of the required information is either easily produced or already available to developers that follow standard internal documentation practices.

Ultimately, the EU’s current push for simplified regulation must not become a pretext for undermining the spirit of what co-legislators agreed upon in a democratic process.

The desire to simplify rules for business cannot come at the expense of the EU’s core values, which include both transparency and socially beneficial and environmentally sustainable technological innovation.

Instead of allowing select industry interests — especially leading tech companies — to dilute transparency, the EU should treat it as a tool to foster an open and competitive digital market that works for private businesses and the public interest. There’s no need to trade transparency for competitiveness. If anything, the real risk is that in its rush to simplify, the EU will end up trimming rules for those who are already ahead of the game.
