Regulating the Unseen: The AI Act’s Blind Spot Regarding Large Language Models’ Influence on Literary Creativity
The Artificial Intelligence Act 2024 (henceforth AIA) has positioned the EU as a frontrunner in the regulation of artificial intelligence. The framework is celebrated for its comprehensive approach to critical issues such as algorithmic bias, transparency, and accountability in AI systems. It nevertheless overlooks a crucial area: the implications of rapidly evolving large language models (LLMs) for literary creativity and intellectual property rights.
Although lawmakers revised the Act to account for the capabilities of generative AI models as they existed at the time, the explosive growth of LLMs has since outpaced the regulation. These models are redefining the creative process, producing text that closely mimics human writing. This evolution disrupts long-standing frameworks for authorship, ownership, and fair compensation within Europe’s literary ecosystem.
LLMs pose challenges to cultural values, artistic expression, and societal consciousness. They threaten authors’ identities, undermine their rights in original works, and create barriers to the equitable distribution of creative benefits. The ability of LLMs to replicate an author’s style risks stripping away that author’s creative essence, eroding what has traditionally been a deeply human endeavor.
Litigation and Industry Response
The tensions between LLMs and literary creativity have already produced numerous lawsuits worldwide. In 2023, Sarah Silverman and other authors sued OpenAI and Meta for using their copyrighted works to train AI models without permission. Similarly, The New York Times sued OpenAI and Microsoft for unauthorized use of its content.
These high-profile cases highlight a broader issue: while well-known authors may have the resources to combat AI-led appropriation, many other writers face significant challenges in protecting their work from generative AI.
AI and Literary Creativity: A Contested Landscape
The impact of AI extends beyond copyright infringement; it threatens the very survival of human authors in an increasingly saturated market. As AI-generated content proliferates, human authors struggle to compete for readers’ attention and publishers’ support. AI-driven tools are already shifting the focus of book marketing and reader analytics, prioritizing profitability over literary quality.
For instance, Penguin Random House has acknowledged using AI for sales forecasting, signaling a shift towards data-driven decision-making in publishing. This raises concerns that algorithmic gatekeeping could sideline authors whose work does not conform to AI-identified trends.
The Global Implications of the AI Act
The AIA is European legislation with significant global ramifications. The erosion of creative integrity by LLMs threatens not only cultural loss but also economic harm, given the literary sector’s substantial contribution to Europe’s creative economy. In 2025, revenue in the European books market is projected to reach 26.29 billion USD, underscoring the industry’s role in the region’s cultural and economic fabric.
A Narrow Risk-Based Framework vs. Unquantifiable Cultural Consequences
The AIA adopts a risk-based framework that concentrates on high-risk applications in sectors such as healthcare and finance. This framework tends to overlook the cultural and societal harm posed by LLMs, because such risks are less tangible and harder to quantify than those in other sectors. By emphasizing technical standards, the AIA fails to address how LLMs are reshaping cultural norms and creative practices.
Intellectual Property Rights and AI Authorship
LLMs present a dual challenge to intellectual property rights. The first concerns the legality of the vast corpora of copyrighted material used to train these models, which raises ethical questions about how generative AI companies acquire and use such data.
The second concerns ownership of AI-generated or AI-assisted work. It remains unclear whether rights belong to the model’s developer, the AI platform, or the individual who prompted the AI. This ambiguity complicates the landscape for authors navigating AI in creative writing.
Conclusion: The Ripple Effects of AI on Publishing
The implications of AI for publishing extend beyond authors’ rights; they shape the industry’s practices overall. Publishers may increasingly rely on algorithmic models to evaluate manuscripts, potentially favoring formulaic plots over unique voices. This trend risks marginalizing emerging authors and diminishing literary diversity.
For EU policymakers, these developments present both a challenge and an opportunity. The current regulatory framework has profound implications for a region rich in literary tradition. To protect and nurture creativity, it is essential to build legal structures that recognize the complexities of AI-enabled creativity, define derivative works, and safeguard authors’ rights. European legislation can set a global precedent in addressing the multifaceted challenges posed by LLMs.