AI’s Impact on Literary Creativity: Unseen Challenges for Authors

Regulating the Unseen: The AI Act’s Blind Spot Regarding Large Language Models’ Influence on Literary Creativity

The Artificial Intelligence Act of 2024 (henceforth AIA) has positioned the EU as a frontrunner in the regulation of digital spaces. This pioneering framework is celebrated for its comprehensive approach to critical issues such as algorithmic bias, transparency, and accountability in AI systems. However, it overlooks a crucial area: the implications of rapidly evolving large language models (LLMs) for literary creativity and intellectual property rights.

Although lawmakers revised the Act to account for the capabilities of generative AI models at the time of drafting, the explosive growth of LLMs has since outpaced the regulation. These models are redefining the creative process, producing text that closely mimics human writing. This evolution disrupts long-standing frameworks for authorship, ownership, and fair compensation within Europe’s literary ecosystem.

LLMs pose challenges to cultural values, artistic expression, and societal consciousness. They threaten authors’ identities, undermining their rights in original works and creating barriers to the equitable distribution of creative benefits. The ability of LLMs to replicate an author’s style risks stripping away that author’s creative essence, eroding what has long been a deeply human endeavor.

Litigation and Industry Response

The tensions between LLMs and literary creativity have prompted litigation worldwide. In 2023, Sarah Silverman and other authors sued OpenAI and Meta for using their copyrighted works to train AI models without permission. Similarly, The New York Times sued OpenAI and Microsoft for the unauthorized use of its content.

These high-profile cases highlight a broader issue: while well-known authors may have the resources to combat AI-led appropriation, many other writers face significant challenges in protecting their work from generative AI.

AI and Literary Creativity: A Contested Landscape

The impact of AI extends beyond copyright infringement; it threatens the very survival of human authors in an increasingly saturated market. As AI-generated content floods in, human authors struggle to compete for readers’ attention and publishers’ support. AI-driven tools are already shifting the focus of book marketing and reader analytics, prioritizing profitability over literary quality.

Penguin Random House, for instance, has acknowledged using AI for sales forecasting, signaling a shift towards data-driven decision-making in publishing. This raises concerns that algorithmic gatekeeping could sideline authors whose work does not conform to AI-identified trends.

The Global Implications of the AI Act

The AIA is European legislation with significant global ramifications. The erosion of creative integrity due to LLMs threatens not only cultural loss but also economic detriment, given the substantial contributions of the literary sector to Europe’s creative economy. In 2025, revenue in the European books market is projected to reach 26.29 billion USD, highlighting the industry’s critical role in the region’s cultural and economic fabric.

A Narrow Risk-Based Framework vs. Unquantifiable Cultural Consequences

The AIA adopts a risk-based framework focused on high-risk applications in sectors such as healthcare and finance. This framework tends to overlook the cultural and societal harm posed by LLMs, however, because those harms are less tangible than the risks found in other sectors. By emphasizing technical standards above all, the AIA fails to address how LLMs reshape cultural norms and creative practices.

Intellectual Property Rights and AI Authorship

LLMs present a dual challenge to intellectual property rights. The first concerns the legality of the vast amounts of training data these models consume. Companies in the generative AI space have begun acquiring copyrighted material for training, raising ethical questions about data usage.

The second challenge involves the ownership of AI-generated or AI-assisted work. It remains unclear whether rights belong to the model’s trainer, the AI platform, or the individual who prompted the AI. This ambiguity complicates the landscape for authors navigating the complexities of AI in creative writing.

Conclusion: The Ripple Effects of AI on Publishing

The implications of AI on publishing extend beyond authors’ rights; they influence the industry’s practices overall. Publishers may increasingly rely on algorithmic models to evaluate manuscripts, potentially prioritizing formulaic plots over unique voices. This trend risks marginalizing emerging authors and diminishing literary diversity.

For EU policymakers, these developments represent both a challenge and an opportunity. The current regulatory framework has profound implications for a region rich in literary tradition. To protect and nurture creativity, it is essential to build legal structures that recognize the complexities of AI-enabled creativity, define derivative works, and safeguard authors’ rights. European legislation can set a global precedent in addressing the multifaceted challenges posed by LLMs.
