Regulating the Unseen: The AI Act’s Blind Spot Regarding Large Language Models’ Influence on Literary Creativity

The Artificial Intelligence Act 2024 (henceforth AIA) has positioned the EU as a frontrunner in the regulation of digital spaces. This pioneering framework is celebrated for its comprehensive approach to critical issues such as algorithmic bias, transparency, and accountability in AI systems. However, it overlooks a crucial aspect: the implications of rapidly evolving large language models (LLMs) on literary creativity and intellectual property rights.

Although lawmakers revised the Act to account for the capabilities of generative AI models as they existed at the time, the explosive growth of LLMs has since outpaced the regulation. These models are redefining the creative process, producing text that closely mimics human writing. This evolution disrupts long-standing frameworks for authorship, ownership, and fair compensation within Europe’s literary ecosystem.

LLMs pose challenges to cultural values, artistic expression, and societal consciousness. They threaten the identity of authors, undermining their rights to original works and creating barriers to equitable distribution of creative benefits. The ability of LLMs to replicate an author’s style risks stripping away their creative essence, eroding what has traditionally been a deeply human endeavor.

Litigation and Industry Response

The tensions between LLMs and literary creativity have already spurred litigation worldwide. In 2023, Sarah Silverman and other authors sued OpenAI and Meta for using their copyrighted works to train AI models without permission. Similarly, The New York Times sued OpenAI and Microsoft for unauthorized use of its content.

These high-profile cases highlight a broader issue: while well-known authors may have the resources to combat AI-led appropriation, many other writers face significant challenges in protecting their work from generative AI.

AI and Literary Creativity: A Contested Landscape

The impact of AI extends beyond copyright infringement; it threatens the very survival of human authors in an increasingly saturated marketplace. As AI-generated content floods the market, human authors struggle to compete for readers’ attention and publishers’ support. AI-driven tools are already shifting the focus of book marketing and reader analytics, prioritizing profitability over literary quality.

For instance, Penguin Random House has acknowledged the use of AI for sales forecasting, which indicates a shift towards data-driven decision-making in publishing. However, this raises concerns about the potential for algorithmic gatekeeping to sideline authors whose work does not conform to AI-generated trends.

The Global Implications of the AI Act

The AIA is European legislation with significant global ramifications. The erosion of creative integrity due to LLMs threatens not only cultural loss but also economic detriment, given the substantial contributions of the literary sector to Europe’s creative economy. In 2025, revenue in the European books market is projected to reach 26.29 billion USD, highlighting the industry’s critical role in the region’s cultural and economic fabric.

A Narrow Risk-Based Framework vs. Unquantifiable Cultural Consequences

The AIA adopts a risk-based framework focusing on high-risk applications in sectors like healthcare and finance. However, this framework tends to overlook the cultural and societal harm posed by LLMs, as these risks are less tangible than those found in other sectors. By primarily emphasizing technical standards, the AIA fails to address the reshaping of cultural norms and creative practices.

Intellectual Property Rights and AI Authorship

LLMs present a dual challenge to intellectual property rights. The first concerns the legality of the vast amounts of training data used by these models: companies in the generative AI space have been acquiring copyrighted material for training, raising legal and ethical questions about how that data is sourced and used.

The second challenge involves the ownership of AI-generated or AI-assisted work. It remains unclear whether rights belong to the model’s trainer, the AI platform, or the individual who prompted the AI. This ambiguity leaves authors with little clear guidance as they navigate AI in creative writing.

Conclusion: The Ripple Effects of AI on Publishing

The implications of AI on publishing extend beyond authors’ rights; they influence the industry’s practices overall. Publishers may increasingly rely on algorithmic models to evaluate manuscripts, potentially prioritizing formulaic plots over unique voices. This trend risks marginalizing emerging authors and diminishing literary diversity.

For EU policymakers, these developments present both a challenge and an opportunity. The current regulatory framework has profound implications for a region rich in literary tradition. To protect and nurture creativity, it is essential to build legal structures that recognize the complexities of AI-enabled creativity, define derivative works, and safeguard authors’ rights. In doing so, European legislation can set a global precedent for addressing the multifaceted challenges posed by LLMs.
