The AI Band That Sparked Spotify’s Content Regulations Row

An AI-generated band called The Velvet Sundown gained millions of Spotify streams before being exposed as artificial, triggering debate over copyright and regulation. The incident highlights the challenges music platforms face in handling synthetic content.

Introduction

The Velvet Sundown appeared on streaming services in June, presenting itself as a regular folk rock band with polished photos and a carefully crafted sound. Within weeks, the group had notched up millions of listens on Spotify. However, music fans quickly started noticing something odd about the whole setup.

Discovery of the AI Nature

The band’s promotional photos had the slightly unsettling quality that has become the hallmark of AI-generated images. The situation escalated when someone claiming to be connected to the project revealed that the band’s tracks had been generated with Suno, an AI music-creation platform. The team behind The Velvet Sundown initially denied these claims on social media, but eventually acknowledged the project’s artificial nature.

Industry Backlash

This admission triggered a backlash across the music industry and raised awkward questions about how streaming platforms police their content. Ed Newton-Rex, who runs Fairly Trained, a non-profit focused on AI ethics, stated, “This is exactly what artists have been worried about; it’s theft dressed up as competition.”

Concerns Over Transparency

The controversy has exposed how little oversight there is of AI-generated content in music streaming. Current rules don’t require platforms to flag synthetic music, leaving both artists and listeners in the dark about what they’re hearing. Roberto Neri, CEO of the Ivors Academy, argues that AI-generated bands raise serious concerns around transparency, authorship, and consent.

Sophie Jones, Chief Strategy Officer at the British Phonographic Industry (BPI), emphasizes, “We believe that AI should be used to serve human creativity, not supplant it.” The BPI has been pushing for more transparency from AI companies regarding how they train their systems and generate content.

Risks of AI-Generated Music

Writer Liz Pelly warns that AI-generated music could homogenize sound. In her book “Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist,” she notes that musical trends in the streaming era are shaped by the competition for listeners’ attention, which risks making everything sound the same.

Streaming Platforms’ Responses

Streaming services have taken varied approaches to handling AI-generated music, highlighting the lack of industry-wide standards. Deezer, for instance, has rolled out detection software that spots AI-generated tracks and tags them for users. Aurélien Hérault, Deezer’s Chief Innovation Officer, frames this as a transparency issue, arguing that platforms need to tell users when a track is AI-generated.

In contrast, Spotify has adopted a more hands-off approach, insisting it does not favor AI-generated music over human-made tracks. However, critics have alleged that the platform fills some of its own playlists with cheaply produced, AI-generated tracks to reduce royalty payouts, raising concerns about how it promotes synthetic content.

Conclusion

The Velvet Sundown’s rise and subsequent exposure have ignited a vital conversation about the future of music in the age of AI. As the industry grapples with these challenges, the need for clear regulations and transparency becomes increasingly urgent. The current lack of oversight poses significant risks for both artists and listeners, demanding a reevaluation of how platforms manage synthetic content.
