AI and Copyright Clash over Silent Music Releases and Pirated Literature
This month, two major stories have surfaced that shed light on the escalating tension between AI development and creative rights, a conflict that could significantly reshape how content is created, owned, and monetized.
The Sound of Silence
Yesterday saw a significant development in the creative industry as more than 1,000 musicians, including notable figures like Kate Bush, Annie Lennox, Yusuf/Cat Stevens, and Damon Albarn, released a silent album titled "Is This What We Want?" Streamable on Spotify, this project features recordings of empty studios and performance spaces, reflecting the artists' concerns about the potential future of their work if proposed UK copyright law changes are pushed through. All profits from the silent album will be donated to Help Musicians, a charity that provides support to music professionals.
The launch of the silent album coincides with the end of a UK government consultation on proposals that seek to create a copyright exemption for training AI models. If approved, tech companies would be allowed to use copyrighted materials without licenses, while creators would need to actively opt out to protect their work. This move has sparked a wave of protests and concerns among artists.
The Hunger for Creative AI Data
These events highlight the fundamental tension in AI advancement: cutting-edge models require vast amounts of training data, and creative works are highly sought-after inputs. With tech companies striving to develop ever more sophisticated AI systems, gaining access to this content has become a crucial strategy, leading to a series of legal battles and policy debates.
Recent court documents reveal that Meta allegedly torrented over 80 terabytes of pirated books from "shadow libraries" such as LibGen and Z-Library to train its Llama large language models. Internal company communications show that CEO Mark Zuckerberg personally approved using material from LibGen, despite warnings from within the company's AI team about the legality of using pirated material.

Elsewhere, in October 2024, the New York Times sent a cease-and-desist notice to Perplexity, accusing it of using the publication's content without permission. The notice follows the newspaper's earlier lawsuit against OpenAI and Microsoft, which it accuses of using its articles without authorization to train ChatGPT.
These clashes illustrate what Marc McCollum, Chief Innovation Officer at Raptive, calls "a defining struggle for the soul of the internet."
The Economic Stakes
The economic implications of these conflicts are substantial. Music alone contributed £7.6 billion to the UK economy in 2023, with exports reaching £4.6 billion. Globally, the creative industries generate trillions of dollars in economic activity and support millions of jobs.
Simultaneously, AI development has become central to technology strategy, with major companies investing billions in advanced models. The perceived necessity of these investments creates unprecedented pressure to secure training data by whatever means are available.
This conundrum forms the crux of the silent album protest and the legal challenges faced by tech companies like Meta. As Ed Newton-Rex, the British composer and former AI executive behind the silent album, explains, "The government's proposal would hand the life's work of the country's musicians to AI companies, for free, letting those companies exploit musicians' work to outcompete them."

Defending Practices, Defining Boundaries
Companies such as Meta are crafting defenses for their data acquisition strategies. Meta's legal argument characterizes torrenting as "a widely-used protocol to download large files" from "a well-known online repository that was publicly available." Creators remain unconvinced, arguing that such methods harm their economic interests and diminish the value of human creativity.
The Path Forward
The resolution of these conflicts will likely involve a combination of legal precedents, policy frameworks, technical solutions, and new business models. Artists and creators, for their part, hope the industry will adopt a more structured, fair, and equitable approach to using content for AI training. While unrestricted access to creative works may become less common, the shape of these new frameworks remains uncertain.
As the tech industry and creative professionals navigate this complex landscape, decisions made over the coming years could reshape how we understand ownership and value creation in an era when machines can increasingly emulate human creativity.
As these issues continue to shape the future of AI and the creative industries, experts caution that careful consideration of the ethical, legal, and technical aspects will be essential to prevent unintended consequences and ensure a vibrant future for both AI development and human creators.