Byline: Sophia Winters, Senior Entertainment Reporter
Sony Music has requested the removal of more than 135,000 AI-generated deepfake songs that used the voices and likenesses of its artists without authorization. The takedown campaign, one of the largest in the music industry's history, targeted fake tracks featuring artists including Beyoncé, Queen, Harry Styles, Bad Bunny, Miley Cyrus, and Mark Ronson. It is a decisive move that underscores just how aggressively the major labels are now fighting back against a problem that, until recently, many in the industry treated as a nuisance rather than an existential threat.
The scale of the problem is staggering. According to Sony's global digital president Dennis Kooker, the company identified 60,000 deepfake tracks in the last year alone, nearly doubling the pace of detection from the previous twelve months. "Deepfakes are a demand-driven event," Kooker told reporters, noting that the proliferation of AI music tools has made it trivially easy for anyone with a laptop to generate convincing imitations of major artists.
The Deepfake Explosion in Music
The rise of AI-generated music deepfakes has been one of the most disruptive developments in the entertainment industry over the past two years. Tools that can clone an artist's voice from a few minutes of audio have become widely available, some of them free and open-source. The result is a flood of content that sounds convincingly like established artists but was created without their knowledge, consent, or compensation.
The most high-profile early example was the viral AI-generated Drake and The Weeknd track "Heart on My Sleeve," which circulated widely in 2023 before being removed from streaming platforms. That incident served as a wake-up call for the industry, but the problem has only accelerated since then. For every high-profile deepfake that goes viral, thousands of others are uploaded quietly to streaming platforms, social media, and content-sharing sites where they accumulate plays, generate ad revenue, and dilute the value of legitimate releases.
Sony's 135,000-track takedown represents the most comprehensive response to date, but it also highlights the whack-a-mole nature of the problem. New deepfakes are generated faster than they can be identified and removed. The tools for creating them are improving constantly, making detection harder with each passing month. And the platforms where these tracks appear have limited incentive to police the content aggressively, since every play, real or fake, generates revenue.
"Deepfakes are a demand-driven event. As long as listeners seek out AI-generated content featuring our artists, there will be people creating it. Our job is to make sure they cannot profit from it."
Dennis Kooker, President of Global Digital Business, Sony Music
The Global Music Market: Growth Despite Disruption
The deepfake crisis arrives at a moment when the global recorded music market is otherwise thriving. This year's IFPI Global Music Report shows that recorded music revenues grew 6.4% in 2025 to reach $31.7 billion, the 11th consecutive year of growth for an industry that, not so long ago, was widely considered to be in terminal decline.
Several factors are driving that growth. Streaming remains the dominant revenue source, with paid subscription services continuing to add users globally. Emerging markets, particularly in Latin America, Africa, and Asia, are contributing an increasing share of total revenue as smartphone penetration and affordable data plans bring hundreds of millions of new listeners onto streaming platforms.
Taylor Swift was the biggest artist of 2025 according to IFPI's global rankings, a position she has occupied with remarkable consistency. Her ability to drive both streaming numbers and physical sales (her vinyl and CD releases continue to sell in enormous quantities) makes her a singular force in the industry. The UK retained its position as the world's third-largest music market, while China overtook Germany to become the fourth-largest, a shift that reflects the explosive growth of music streaming in the Chinese market.
The revenue numbers are encouraging, but the deepfake problem threatens to undermine the economic foundations that support them. If AI-generated content can replicate the sound of major artists at scale, the premium that those artists command on streaming platforms is at risk. Why would a listener choose the official Beyoncé track over an AI-generated one that sounds nearly identical, especially if the AI version appears in their recommendations?
Deezer's Alarming Data
The streaming platform Deezer has provided some of the most alarming data about the scale of AI-generated music. According to the company's internal analysis, 34% of all songs submitted to the platform are now identified as AI-generated. That number, if representative of the broader industry, suggests that AI content is not a fringe phenomenon but a significant and growing share of the total music being created and distributed globally.
Deezer has been more transparent than most streaming platforms about the AI content issue, developing detection tools and sharing its findings publicly. The company's data suggests that much of the AI-generated content is not deepfakes of specific artists but rather generic, low-quality tracks designed to game the streaming system. These "functional" AI tracks, often ambient music, lo-fi beats, or royalty-free background music, are uploaded in bulk by entities seeking to collect streaming royalties at scale.
The combination of artist deepfakes and functional AI music creates a two-pronged threat to the legitimate music industry. Deepfakes attack the value of specific artists' catalogs, while functional AI music dilutes the overall royalty pool, reducing payments to human creators across the board. Both problems demand different solutions, and the industry is still in the early stages of developing effective responses to either.
The Regulatory Landscape
The legal and regulatory response to AI-generated music has been uneven at best. In the United States, the question of whether AI-generated content infringes on existing copyrights remains unsettled, with multiple cases working their way through the courts. The fundamental question, whether an AI model that was trained on copyrighted music produces infringing output when it generates new content, has not been definitively answered by any court.
In the UK, the situation is, if anything, more uncertain. IFPI CEO Victoria Oakley has expressed frustration with the UK government's decision to pause its planned AI copyright legislation, leaving the country's creative industries in a state of regulatory limbo. "The UK was poised to lead on AI copyright protections," Oakley said in a recent statement. "The pause sends a troubling signal to creators about the government's priorities."
The European Union has moved more aggressively, with the AI Act establishing transparency requirements for AI-generated content and the Copyright Directive providing a framework for licensing negotiations between rights holders and AI companies. But enforcement remains a challenge, and the cross-border nature of the internet means that content removed in one jurisdiction can reappear in another within hours.
The music industry is lobbying hard for stronger protections, arguing that without clear legal frameworks, the economic incentives to create AI deepfakes will only grow. Labels like Sony are not waiting for legislation, however. The 135,000-track takedown demonstrates a willingness to use existing legal tools, including copyright infringement claims and platform terms-of-service violations, to combat the problem in the near term.
The Artist Perspective
For the artists whose voices are being cloned, the deepfake issue is deeply personal. It is one thing to have your music pirated. It is another to have someone create new content that sounds like you, may express views you disagree with, and circulates with your apparent endorsement, all without your consent.
Several high-profile artists have spoken out about the issue. The concern is not limited to lost revenue, though that is significant. It is about identity, artistic integrity, and the fundamental right to control how your voice and likeness are used. An AI-generated track that sounds like Beyoncé but contains lyrics she would never sing is not just a copyright violation. It is a violation of her artistic identity.
Smaller and mid-tier artists face an even more acute version of the problem. While Sony can deploy significant legal and technological resources to protect its roster, independent artists lack the infrastructure to monitor the internet for deepfakes of their work, let alone pursue takedowns. The democratization of AI music tools, which was supposed to empower independent creators, has in many cases created new threats that disproportionately affect those least equipped to respond.
- 135,000+ AI deepfake songs targeted for removal by Sony
- 60,000 deepfakes identified in the past year alone
- 34% of songs submitted to Deezer flagged as AI-generated
- $31.7 billion in global recorded music revenue (2025)
- 6.4% revenue growth in 2025, the 11th consecutive year of industry growth
What Happens Next
Sony's takedown campaign is a significant escalation in the music industry's war against AI deepfakes, but it is unlikely to be the last word. The technology will continue to improve, making detection harder and creation easier. The legal frameworks are still catching up. And the economic incentives driving deepfake creation show no signs of diminishing.
The most likely near-term developments include expanded use of audio watermarking and fingerprinting technologies, which can identify copyrighted content even when it has been modified by AI. Several major labels are investing in these technologies, and platform partnerships are being developed to integrate detection tools directly into the upload process.
Longer-term, the industry will need to grapple with more fundamental questions about the relationship between human creativity and artificial intelligence. AI tools that assist human musicians, helping them compose, arrange, or produce music more efficiently, are widely seen as beneficial. AI tools that replace human musicians, generating content that competes directly with human-created work on streaming platforms, are viewed very differently.
The line between assistance and replacement is not always clear, and drawing it in a way that protects creators without stifling innovation is one of the central challenges facing the music industry in 2026. Sony's 135,000-track takedown is a necessary defensive action, but the industry will eventually need to develop a more comprehensive strategy that addresses the root causes of the deepfake problem rather than just its symptoms.




