Tech Giants Clash with EU Over AI Transparency: Creatives Demand Fair Compensation

OpenAI and Tech Companies Challenge Transparency in Europe’s AI Act

The European Union’s recent passage of the AI Act has sparked significant debate among technology companies and creative professionals alike. The legislation is hailed as a landmark, establishing the world’s first comprehensive regulatory framework for artificial intelligence. Among its key requirements, the AI Act mandates that AI companies inform the public when content is generated by AI.

Transparency and Rights Holders

One of the most contentious aspects of the AI Act is its transparency obligation covering the training phase of AI models. Companies are required to notify rightsholders when their works are used to train generative AI systems. This obligation is pivotal for creators seeking compensation and new revenue streams. However, major companies, including OpenAI, Meta, and Mistral AI, have criticized the law as a barrier to innovation.

OpenAI’s CEO, Sam Altman, has publicly argued against the AI Act, stressing the need for European regulators to consider the long-term implications of their decisions on technological advancement. He referenced comments from Mario Draghi, the former President of the European Central Bank, whose 2024 report on European competitiveness described an “innovation gap” between Europe and other regions, particularly the U.S. and China.

Historical Context of Tech Regulation

This isn’t the first instance of tension between tech giants and EU regulators. In 2018, the EU faced backlash from U.S. companies, notably Meta, for enforcing the stringent GDPR (General Data Protection Regulation), which has since influenced privacy laws worldwide.

Currently, OpenAI is involved in legal disputes regarding copyright issues, with a group of news outlets, led by The New York Times, taking the company to federal court over alleged copyright infringements. In France, where OpenAI has a licensing agreement with Le Monde, further legal threats loom as local press groups seek to protect their intellectual property.

Concerns Over Content Scraping

Concerns have been raised about AI companies utilizing vast amounts of content without proper compensation. According to Jane C. Ginsburg, a prominent professor of literary and artistic property law, AI companies have accessed millions of works through methods sometimes described as “scraping” the internet, often without paying rightsholders. She pointed out that many companies justify this practice under exceptions for “text and data mining” in the EU and “fair use” in the U.S.

The U.S. “fair use” doctrine permits certain uses of copyrighted material without permission, weighing factors such as whether the use is transformative and whether it harms the market for the original work. The EU’s “text and data mining” exception, by contrast, lets rightsholders opt out of the commercial use of their works, and opt-outs from publishers and other organizations have been increasing.

The Future of AI and Content Creation

Despite the potential benefits of these regulations, many AI companies remain reluctant to enter licensing agreements with content creators, relying instead on lower-quality data sources rather than investing in quality content that could improve their models. The ongoing debate reflects a struggle between innovation and the protection of intellectual property.

Louette, a representative from a major French press group, expressed concerns over the exploitation of journalistic content and called for fair compensation for past and future use of their works. He emphasized that while companies like OpenAI sell subscriptions, they are essentially profiting from “harvesting” the work of others without proper remuneration.

Regulatory Framework and Innovation

As the EU gears up to enforce the AI Act, there is a strong push for transparency among AI companies regarding their training data. Activists and creators alike are advocating for a regulatory framework that supports both innovation and the rights of content creators.

Ayouch, a French-Moroccan filmmaker, highlighted the critical need for regulation in the tech industry, arguing that history has shown that technological innovations thrive under protective frameworks. He posited that without regulation, innovation is at risk of collapsing.

As the AI landscape evolves, the relationship between tech companies and content creators will be pivotal in shaping the future of artificial intelligence and its impact on society. The ongoing dialogue will likely determine how both parties can coexist and benefit from each other’s contributions.
