<h2>Trade Secrets in the AI Era: Navigating Transparency Under the EU AI Act</h2>
<p>In the rapidly evolving landscape of artificial intelligence (AI), traditional methods of protecting intellectual property (IP) are being challenged. As AI systems increasingly depend on cumulative learning and optimization, the competitive advantage often lies not in the source code itself but in the training of these systems. This shift has led AI developers to rely more heavily on <b>trade secrets</b> alongside, or instead of, <b>patents</b>.</p>
<h3>The Value of Trade Secrets</h3>
<p>Trade secrets offer immediate and flexible protection, lasting indefinitely as long as secrecy is maintained. Key assets ideally suited for trade secret protection in AI ecosystems include:</p>
<ul>
<li><b>Proprietary data pipelines</b> and preprocessing techniques</li>
<li><b>Reinforcement learning strategies</b> and training protocols</li>
<li><b>Model weights</b>, architectures, and internal algorithms</li>
<li><b>Prompts</b> and instructions</li>
<li><b>Safety guardrails</b> and evaluation frameworks</li>
</ul>
<p>However, the <b>EU AI Act</b> introduces rigorous transparency requirements that may necessitate the documentation and partial disclosure of these very elements, creating a legal paradox: how to comply with these obligations without compromising the confidentiality that supports competitive advantages.</p>
<h3>The EU Transparency Landscape</h3>
<p>The EU AI Act, which entered into force in 2024, implements a tiered transparency framework that requires different levels of disclosure based on the AI system’s risk level:</p>
<ol>
<li><b>User-facing transparency</b> (Article 50): AI systems that interact directly with users must inform them when they are engaging with AI or encountering AI-generated content.</li>
<li><b>High-risk system documentation</b> (Article 13): Providers of high-risk AI systems must supply “clear, complete and correct” instructions, including details about the system’s purpose and the training data used.</li>
<li><b>General-purpose AI (GPAI) obligations</b> (Article 53): All GPAI model providers must maintain comprehensive documentation and share information with regulatory authorities when requested.</li>
</ol>
<p>This framework intensifies the conflict between the need for transparency and the protection of trade secrets, particularly for high-risk systems and GPAI models, where disclosures may expose proprietary methods and datasets.</p>
<h3>Practical Relevance: Transparency Meets Confidentiality</h3>
<p>Recent legal cases exemplify this tension. In the United States, for instance, the case of <b>The New York Times v. OpenAI and Microsoft</b> involves allegations that OpenAI’s models were trained using copyrighted material from the Times. The plaintiffs are pressing for disclosure of the training data, which OpenAI argues constitutes core trade secrets. Similarly, in the case of <b>CK v. Magistrat der Stadt Wien</b>, the Court of Justice of the European Union ruled that operators relying on automated decision-making must provide accessible information about the criteria used, even if those criteria are trade secrets.</p>
<h3>Operationalising Transparency: The Commission’s Implementation Tools</h3>
<p>Two key instruments adopted in 2025 have translated the EU AI Act’s transparency obligations into concrete compliance practice:</p>
<ul>
<li><b>The GPAI Code of Practice</b>: Finalized in July 2025, this code offers guidance for general-purpose AI model providers on meeting their transparency obligations. It introduces a Model Documentation Form to demonstrate compliance, requiring details on architecture, data sources, and safety-testing methodologies.</li>
<li><b>Commission guidelines on GPAI transparency</b>: Published in September 2025, these guidelines clarify how transparency obligations should be applied, emphasizing that confidentiality claims must be justified and limited.</li>
</ul>
<p>These developments create a new legal and strategic tension: how can companies comply with transparency obligations while protecting against reverse engineering and information leakage?</p>
<h3>Navigating the Tension: From Reactive Disclosure to Structured Transparency</h3>
<p>To navigate this landscape effectively, companies should shift from reactive disclosure to structured transparency management by:</p>
<ul>
<li><b>Mapping sensitive information</b> early, classifying AI assets that are relevant for trade secret protection.</li>
<li><b>Preparing “defensible transparency”</b>, ensuring that disclosures meet legal requirements without revealing sensitive information.</li>
<li><b>Embedding contractual protection</b> to govern downstream sharing, such as restrictions on model reconstruction.</li>
<li><b>Maintaining audit evidence</b> to justify confidentiality claims and demonstrate compliance.</li>
</ul>
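<p>The asset-mapping step above can be made concrete with a simple inventory. The following Python sketch is illustrative only: the asset names, sensitivity flags, and article references are hypothetical placeholders, and any real classification would need legal review against the Act’s actual obligations.</p>

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class AIAsset:
    """One entry in a (hypothetical) trade-secret / disclosure inventory."""
    name: str
    trade_secret: bool             # is the asset protected as a trade secret?
    disclosure_basis: Optional[str]  # e.g. "Art. 13", "Art. 53", or None

def disclosure_conflicts(assets: List[AIAsset]) -> List[str]:
    """Flag assets that are both trade secrets and subject to a disclosure duty."""
    return [a.name for a in assets if a.trade_secret and a.disclosure_basis]

# Illustrative inventory; classifications are invented for the example.
inventory = [
    AIAsset("training-data pipeline", trade_secret=True, disclosure_basis="Art. 13"),
    AIAsset("model weights", trade_secret=True, disclosure_basis=None),
    AIAsset("user-facing AI notice", trade_secret=False, disclosure_basis="Art. 50"),
]

print(disclosure_conflicts(inventory))  # → ['training-data pipeline']
```

<p>Entries returned by <code>disclosure_conflicts</code> are the ones where “defensible transparency” planning, contractual safeguards, and audit evidence matter most.</p>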
<h3>Conclusion</h3>
<p>The EU AI Act signifies a shift from voluntary to enforced transparency in AI development. For AI developers and rights-holders, this means that protecting trade secrets can no longer rely solely on silence. The future of compliant AI innovation will depend on creating governance processes that preserve confidentiality while strategically managing transparency within a broader IP and compliance framework.</p>