Tech Giants Clash with EU Over AI Transparency: Creatives Demand Fair Compensation

OpenAI and Tech Companies Challenge Transparency in Europe’s AI Act

The European Union’s recent passage of the AI Act has sparked significant debate among technology companies and creative professionals alike. The legislation is hailed as a landmark, establishing the world’s first comprehensive regulatory framework for artificial intelligence. Among its key requirements, AI companies must inform the public when content is generated by AI.

Transparency and Rights Holders

One of the most contentious aspects of the AI Act is its transparency obligation covering the training phase of AI models. Companies must inform rightsholders when their works are used to train generative AI systems, an obligation that is pivotal for creators seeking compensation and new revenue streams. Major companies, however, including OpenAI, Meta, and Mistral AI, have criticized the law as a barrier to innovation.

OpenAI’s CEO, Sam Altman, has publicly argued against the AI Act, stressing the need for European regulators to consider the long-term implications of their decisions for technological advancement. He referenced comments from Mario Draghi, the former President of the European Central Bank, who has noted an “innovation gap” between Europe and other regions, particularly the U.S. and China.

Historical Context of Tech Regulation

This isn’t the first instance of tension between tech giants and EU regulators. When the stringent General Data Protection Regulation (GDPR) took effect in 2018, the EU faced backlash from U.S. companies, notably Facebook (now Meta); the regulation has since influenced privacy laws worldwide.

OpenAI is currently embroiled in copyright litigation: a group of news outlets, led by The New York Times, has taken the company to federal court over alleged copyright infringement. In France, where OpenAI has a licensing agreement with Le Monde, further legal threats loom as local press groups seek to protect their intellectual property.

Concerns Over Content Scraping

Concerns have been raised about AI companies using vast amounts of content without proper compensation. According to Jane C. Ginsburg, professor of literary and artistic property law at Columbia Law School, AI companies have accessed millions of works through methods often described as “scraping” the internet, frequently without paying rightsholders. She pointed out that many companies justify the practice under the “text and data mining” exception in the EU and the “fair use” doctrine in the U.S.

The U.S. “fair use” doctrine permits certain uses of copyrighted material, weighing in particular whether the use is transformative and whether it competes with the market for the original work. The EU’s “text and data mining” exception, by contrast, lets rightsholders reserve their works against commercial mining, and a growing number of organizations have exercised that opt-out.
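In practice, one machine-readable way publishers signal such a reservation is through crawler directives in robots.txt, which OpenAI’s documented GPTBot crawler and several other AI crawlers say they honor. The snippet below is a minimal sketch, using Python’s standard urllib.robotparser and a placeholder domain, of how one might check whether a site disallows a given AI crawler; it illustrates the opt-out signaling idea only and is not a statement of what the AI Act itself requires.

```python
# Minimal sketch: check whether a site's robots.txt disallows a given
# AI crawler user-agent (one common, machine-readable opt-out signal).
from urllib.robotparser import RobotFileParser


def crawler_allowed(site: str, user_agent: str, path: str = "/") -> bool:
    """Return True if robots.txt at `site` permits `user_agent` to fetch `path`."""
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches and parses the robots.txt file
    return parser.can_fetch(user_agent, f"{site.rstrip('/')}{path}")


if __name__ == "__main__":
    # "GPTBot" is the user-agent OpenAI documents for its web crawler;
    # publishers that opt out commonly disallow it in robots.txt.
    site = "https://example.com"  # placeholder domain, not a real rightsholder's site
    for agent in ("GPTBot", "*"):
        print(agent, "allowed:", crawler_allowed(site, agent))
```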

The Future of AI and Content Creation

Despite the potential benefits of these regulations, many AI companies remain reluctant to enter licensing agreements with content creators, preferring lower-quality data sources over investing in the quality content that could improve their models. The ongoing debate reflects a struggle between innovation and the protection of intellectual property.

Louette, a representative from a major French press group, expressed concerns over the exploitation of journalistic content and called for fair compensation for past and future use of their works. He emphasized that while companies like OpenAI sell subscriptions, they are essentially profiting from “harvesting” the work of others without proper remuneration.

Regulatory Framework and Innovation

As the EU gears up to enforce the AI Act, there is a strong push for AI companies to be transparent about their training data. Activists and creators alike are advocating for a regulatory framework that supports both innovation and creators’ rights.

Ayouch, a French-Moroccan filmmaker, highlighted the critical need for regulation in the tech industry, arguing that history has shown that technological innovations thrive under protective frameworks. He posited that without regulation, innovation is at risk of collapsing.

As the AI landscape evolves, the relationship between tech companies and content creators will be pivotal in shaping the future of artificial intelligence and its impact on society. The ongoing dialogue will likely determine how both parties can coexist and benefit from each other’s contributions.
