Chart Success Without Artists: AI Music and the Fragmentation of Value

In November 2025, a mysterious country music act named Breaking Rust achieved something unprecedented: the AI-generated song “Walk My Walk” topped Billboard's Country Digital Song Sales chart, marking the first time an artificial intelligence creation had claimed the number one position on any Billboard chart. The track, produced entirely without human performers using generative AI tools for vocals, instrumentation, and lyrics, reached its peak with approximately 3,000 digital downloads. That same month, Xania Monet, an AI R&B artist created using the Suno platform, became the first known AI artist to earn enough radio airplay to debut on a Billboard radio chart, entering the Adult R&B Airplay ranking at number 30.
These milestones arrived not with fanfare but with an uncomfortable silence from an industry still grappling with what they mean. The charts that have long served as the music industry's primary measure of success had been successfully penetrated by entities that possess neither lived experience nor artistic intention in any conventional sense. The question that follows is not merely whether AI can achieve commercial validation through existing distribution and ranking systems. It clearly can. The more unsettling question is what this reveals about those systems themselves, and whether the metrics the industry has constructed to measure success have become so disconnected from traditional notions of artistic value that they can no longer distinguish between human creativity and algorithmic output.
From Smoky Clubs to Algorithmic Playlists
The music industry has always operated through gatekeeping structures. For most of the twentieth century, these gates were controlled by human intermediaries: A&R executives who discovered talent in smoky clubs, radio programmers who decided which songs reached mass audiences, music journalists who shaped critical discourse, and record label executives who determined which artists received investment and promotion. These gatekeepers were imperfect, often biased, and frequently wrong, but they operated according to evaluative frameworks that at least attempted to assess artistic merit alongside commercial potential.
The transformation began with digital distribution and accelerated with streaming. By the early 2020s, the typical song on the Billboard Hot 100 derived approximately 73 per cent of its chart position from streaming, 25 per cent from radio airplay, and a mere 2 per cent from digital sales. This represented a dramatic inversion from the late 1990s, when radio airplay accounted for 75 per cent of a song's chart fortunes. Billboard's methodology has continued to evolve: in late 2025 the company announced that, effective January 2026, paid subscription and ad-supported on-demand streams would be weighted at a 1:2.5 ratio, so that 2.5 ad-supported streams count as much as a single paid stream. The change further cemented streaming's dominance whilst prompting YouTube to withdraw its data from Billboard charts in protest over what it characterised as an unfair undervaluation of ad-supported listening. The metrics that now crown hits are fundamentally different in character: stream counts, skip rates, playlist additions, save rates, and downstream consumption patterns. These are measures of engagement behaviour, not assessments of artistic quality.
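To see how such a weighting works in principle, consider the minimal sketch below. The function, its weights, and the input figures are invented for illustration; only the 1:2.5 paid-to-ad-supported ratio reflects Billboard's announced change, and the real chart methodology is proprietary and considerably more involved.

```python
# Minimal illustration of a weighted chart-points model. Every weight and input
# figure below is an invented assumption; only the 1:2.5 paid-to-ad-supported
# ratio comes from Billboard's announcement, and the actual methodology is far
# more complex and not public in this form.

def chart_points(paid_streams: int, ad_streams: int, radio_audience: int,
                 downloads: int, paid_to_ad_ratio: float = 2.5) -> float:
    """Collapse several consumption signals into a single illustrative total."""
    # Under a 1:2.5 ratio, 2.5 ad-supported streams count as one paid stream.
    stream_equivalents = paid_streams + ad_streams / paid_to_ad_ratio
    # Hypothetical per-unit weights; nothing here reproduces Billboard's
    # actual point values.
    return stream_equivalents + 0.3 * radio_audience + 150 * downloads

# A track with modest streaming numbers and a few thousand downloads.
print(chart_points(paid_streams=2_000_000, ad_streams=5_000_000,
                   radio_audience=1_000_000, downloads=3_000))
```

The point of the exercise is that nothing on the right-hand side of the calculation refers to the music itself, only to how it was consumed.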
Streaming platforms have become what scholars describe as the “new gatekeepers” of the music industry. Unlike their predecessors, these platforms wield what researchers Tiziano Bonini and Alessandro Gandini term “algo-torial power,” a fusion of algorithmic and curatorial capabilities that far exceeds the influence of traditional intermediaries. Spotify alone, commanding approximately 35 per cent of the global streaming market in 2025, manages over 3,000 official editorial playlists, with flagship lists like Today's Top Hits counting more than 34 million followers. A single placement on such a playlist can translate into millions of streams overnight, with artists reporting that high positions on editorial playlists generate cascading effects across their entire catalogues.
Yet the balance has shifted even further toward automation. Since 2017, Spotify has developed what it calls “Algotorial” technology, combining human editorial expertise with algorithmic personalisation. The company reports that over 81 per cent of users cite personalisation as what they value most about the platform. The influence of human-curated playlists has declined correspondingly. Major music labels have reported significant drops in streams from flagship playlists like RapCaviar and Dance Hits, signalling a fundamental change in how listeners engage with curated content. Editorial playlists, whilst still powerful, often feature songs for only about a week, limiting their long-term impact compared to algorithmic recommendation systems that continuously surface content based on listening patterns.
This shift has consequences for what can succeed commercially. Algorithmic recommendation systems favour predictable structures and familiar sonic elements. Data analysis suggests songs that maintain listener engagement within the first 30 seconds receive preferential treatment, incentivising shorter introductions and immediate hooks, often at the expense of nuanced musical development.
Artists and their teams are encouraged to optimise for “asset rank,” a function of user feedback reflecting how well a song performs in particular consumption contexts. The most successful strategies involve understanding algorithmic nuances, social media marketing, and digital engagement techniques.
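A toy version of that kind of scoring might look like the sketch below. The signal names and weights are hypothetical, not Spotify's, and they exist only to show that a ranking score can be assembled entirely from behavioural data, with no input describing artistic merit.

```python
# Hypothetical engagement score of the kind a recommendation system might use.
# Every input is a behavioural signal; nothing here measures artistic quality.

from dataclasses import dataclass

@dataclass
class TrackStats:
    completion_rate: float   # share of plays that reach the end of the track
    skip_rate_30s: float     # share of plays skipped within the first 30 seconds
    save_rate: float         # saves per play
    playlist_adds: float     # playlist additions per play

def engagement_score(stats: TrackStats) -> float:
    """Combine behavioural signals into one ranking score (weights are invented)."""
    return (0.4 * stats.completion_rate
            - 0.3 * stats.skip_rate_30s
            + 0.2 * stats.save_rate
            + 0.1 * stats.playlist_adds)

# An AI-generated track engineered for an immediate hook is scored exactly the
# same way a human recording would be: the function cannot see who made it.
print(engagement_score(TrackStats(0.82, 0.05, 0.11, 0.04)))
```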
Into this optimisation landscape, AI-generated music arrives perfectly suited. Systems like Suno, the platform behind both Xania Monet and numerous other AI artists, can produce content calibrated to the precise engagement patterns that algorithms reward. The music need not express lived experience or demonstrate artistic growth. It need only trigger the behavioural signals that platforms interpret as success.
When 97 Per Cent of Ears Cannot Distinguish
In November 2025, French streaming service Deezer commissioned what it described as the world's first survey focused on perceptions and attitudes toward AI-generated music. Conducted by Ipsos across 9,000 participants in eight countries, the study produced a startling headline finding: when asked to listen to three tracks and identify which was fully AI-generated, 97 per cent of respondents failed.
A majority of participants (71 per cent) expressed surprise at this result, whilst more than half (52 per cent) reported feeling uncomfortable at their inability to distinguish machine-made music from human creativity. The findings carried particular weight given the survey's scale and geographic breadth, spanning markets with different musical traditions and consumption patterns.
The implications extend beyond parlour game failures. If listeners cannot reliably identify AI-generated music, then the primary quality filter that has historically separated commercially successful music from unsuccessful music has been compromised. Human audiences, consciously or not, have traditionally evaluated music according to criteria that include emotional authenticity, creative originality, and the sense that a human being is communicating something meaningful.
If AI can convincingly simulate these qualities to most listeners, then the market mechanism that was supposed to reward genuine artistic achievement has become unreliable.
Research from MIT Media Lab exposed participants to both AI and human music under various labelling conditions, finding that participants were significantly more likely to rate human-composed music as more effective at eliciting target emotional states, regardless of whether they knew the composer's identity. A 2024 study published in PLOS One compared emotional reactions to AI-generated and human-composed music among 88 participants monitored through heart rate, skin conductance, and self-reported emotion.
Both types of music elicited emotional responses, but human compositions scored consistently higher for expressiveness, authenticity, and memorability. Many respondents described AI music as “technically correct” but “emotionally flat.” The distinction between technical competence and emotional resonance emerged as a recurring theme across multiple research efforts, suggesting that whilst AI can successfully mimic surface-level musical characteristics, deeper qualities associated with human expression remain more elusive.
These findings suggest that humans can perceive meaningful differences when prompted to evaluate carefully. But streaming consumption is rarely careful evaluation. It is background listening during commutes, ambient accompaniment to work tasks, algorithmic playlists shuffling through social gatherings. In these passive consumption contexts, the distinctions that laboratory studies reveal may not register at all.
The SyncVault 2025 Trends Report found that 74 per cent of content creators now prefer to license music from identifiable human composers, citing creative trust and legal clarity. A survey of 100 music industry insiders found that 98 per cent consider it “very important” to know if music is human-made, and 96 per cent would consider paying a premium for a human-verified music service. Industry professionals, at least, believe the distinction matters. Whether consumers will pay for that distinction in practice remains uncertain.
Four Stakeholders, Four Incompatible Scorecards
The chart success of AI-generated music exposes a deeper fragmentation: different stakeholder groups in the music industry operate according to fundamentally different definitions of what “success” means, and these definitions are becoming increasingly incompatible.
For streaming platforms and their algorithms, success is engagement. A successful track is one that generates streams, maintains listener attention, triggers saves and playlist additions, and encourages downstream consumption. These metrics are agnostic about the source of the music. An AI-generated track that triggers the right engagement patterns is, from the platform's perspective, indistinguishable from a human creation that does the same. The platform's business model depends on maximising time spent listening, regardless of whether that listening involves human artistry or algorithmic simulation.
For record labels and investors, success is revenue. The global music market reached $40.5 billion in 2024, with streaming accounting for 69 per cent of global recorded music revenues, surpassing $20 billion for the first time. Goldman Sachs projects the market will reach $110.8 billion by 2030.
In this financial framework, AI music represents an opportunity to generate content with dramatically reduced labour costs. An AI artist requires no advances, no touring support, no management of creative disagreements or personal crises. As Victoria Monet observed when commenting on AI artist Xania Monet, “our time is more finite. We have to rest at night. So, the eight hours, nine hours that we're resting, an AI artist could potentially still be running, studying, and creating songs like a machine.”
Hallwood Media, the company that signed Xania Monet to a reported $3 million deal, is led by Neil Jacobson, formerly president of Geffen Records. The company has positioned itself at the forefront of AI music commercialisation, also signing imoliver, described as the top-streaming “music designer” on Suno, in what was characterised as the first traditional label signing of an AI music creator. Jacobson framed these moves as embracing innovation, stating that imoliver “represents the future of our medium.”
For traditional gatekeeping institutions like the Grammy Awards, success involves human authorship as a precondition. The Recording Academy clarified in its 66th Rules and Guidelines that “A work that contains no human authorship is not eligible in any Categories.” CEO Harvey Mason Jr. elaborated: “Here's the super easy, headline statement: AI, or music that contains AI-created elements is absolutely eligible for entry and for consideration for Grammy nomination. Period. What's not going to happen is we are not going to give a Grammy or Grammy nomination to the AI portion.”
This creates a category distinction: AI-assisted human creativity can receive institutional recognition, but pure AI generation cannot. The Grammy position attempts to preserve human authorship as a prerequisite for the highest forms of cultural validation.
But this distinction may prove difficult to maintain. If AI tools become sufficiently sophisticated, determining where “meaningful human contribution” begins and ends may become arbitrary. And if AI creations achieve commercial success that rivals or exceeds Grammy-winning human artists, the cultural authority of the Grammy distinction may erode.
For human artists, success often encompasses dimensions that neither algorithms nor financial metrics capture: creative fulfilment, authentic emotional expression, the sense of communicating something true about human experience, and recognition from peers and critics who understand the craft involved.
When Kehlani criticised the Xania Monet deal in a social media post, she articulated this perspective: “There is an AI R&B artist who just signed a multimillion-dollar deal... and the person is doing none of the work.” The objection is not merely economic but existential. Success that bypasses creative labour does not register as success in the traditional artistic sense.
SZA connected her critique to broader concerns, noting that AI technology causes “harm” to marginalised neighbourhoods through the energy demands of data centres. She asked fans not to create AI images or songs using her likeness.
Muni Long questioned why AI artists appeared to be gaining acceptance in R&B specifically, suggesting a genre-specific vulnerability: “It wouldn't be allowed to happen in country or pop.” This observation points to power dynamics within the industry, where some artistic communities may be more exposed to AI disruption than others.
What the Charts Reveal About Themselves
If AI systems can achieve commercial validation through existing distribution and ranking systems without the cultural legitimacy or institutional endorsement traditionally required of human artists, what does this reveal about those gatekeeping institutions?
The first revelation is that commercial gatekeeping has largely decoupled from quality assessment. Billboard charts measure commercial performance. They count downloads, streams, and airplay. They do not and cannot assess whether the music being counted represents artistic achievement.
For most of chart history, this limitation mattered less because commercial success and artistic recognition, whilst never perfectly aligned, operated in the same general neighbourhood. The processes that led to commercial success included human gatekeepers making evaluative judgements about which artists to invest in, which songs to programme, and which acts to promote. AI success bypasses these evaluative filters entirely.
The second revelation concerns the vulnerability of metrics-based systems to manipulation. Billboard's digital sales charts have been targets for manipulation for years. The Country Digital Song Sales chart that Breaking Rust topped requires only approximately 2,500 downloads to claim the number one position.
This is a vestige of an era when iTunes ruled the music industry, before streaming subscription models made downloads a relic. In 2024, downloads accounted for just $329 million according to the RIAA, approximately 2 per cent of US recorded music revenue.
Critics have argued that the situation represents “a Milli Vanilli-level fraud being perpetrated on music consumers, facilitated by Billboard's permissive approach to their charts.” The Saving Country Music publication declared that “Billboard must address AI on the charts NOW,” suggesting the chart organisation is avoiding “gatekeeping” accusations by remaining content with AI encroaching on its rankings without directly addressing the issue.
If the industry's most prestigious measurement system can be topped by AI-generated content with minimal organic engagement, the system's legitimacy as a measure of popular success comes into question.
The third revelation is that cultural legitimacy and commercial success have become separable in ways they previously were not. Throughout the twentieth century, chart success generally brought cultural legitimacy. Artists who topped charts received media attention, critical engagement, and the presumption that their success reflected some form of popular validation.
AI chart success does not translate into cultural legitimacy in the same way. No one regards Breaking Rust as a significant country artist regardless of its chart position. The chart placement functions as a technical achievement rather than a cultural coronation.
This separability creates an unstable situation. If commercial metrics can be achieved without cultural legitimacy, and cultural legitimacy cannot be achieved through commercial metrics alone, then the unified system that connected commercial success to cultural status has fractured. Different stakeholders now operate in different legitimacy frameworks that may be incompatible.
Royalty Dilution and the Economics of Content Flooding
Beyond questions of legitimacy, AI-generated music creates concrete economic pressures on human artists through royalty pool dilution. Streaming platforms operate on pro-rata payment models: subscription revenue enters a shared pool that is divided according to each track's share of total streams. When AI tracks capture additional streams, the total stream count rises against the same fixed pool, and the per-stream value for all creators falls.
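The arithmetic is simple enough to sketch with invented figures: because the pool is fixed by subscription revenue, every stream captured by AI content lowers both the per-stream rate and the human share of the payout.

```python
# Pro-rata dilution with invented numbers: a fixed royalty pool is split by
# total streams, so streams captured by AI tracks shrink everyone's per-stream rate.

def per_stream_rate(royalty_pool: float, total_streams: int) -> float:
    return royalty_pool / total_streams

pool = 100_000_000            # monthly royalty pool in dollars (hypothetical)
human_streams = 20_000_000_000

# Without AI content competing for the pool
print(per_stream_rate(pool, human_streams))                 # 0.005 per stream

# With AI tracks capturing an extra 2 billion streams from the same pool
ai_streams = 2_000_000_000
print(per_stream_rate(pool, human_streams + ai_streams))    # ~0.00455 per stream

# Human artists' share of the fixed pool falls accordingly
print(pool * human_streams / (human_streams + ai_streams))  # ~90.9 million
```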
Deezer has been the most transparent about the scale of this phenomenon. The platform reported receiving approximately 10,000 fully AI-generated tracks daily in January 2025. By April, this had risen to 20,000. By September, 28 per cent of all content delivered to Deezer was fully AI-generated. By November, the figure had reached 34 per cent, representing over 50,000 AI-generated tracks uploaded daily.
These tracks represent not merely competition for listener attention but direct extraction from the royalty pool. Deezer has found that up to 70 per cent of streams generated by fully AI-generated tracks are fraudulent.
Morgan Hayduk, co-CEO of streaming fraud detection firm Beatdapp, noted: “Every point of market share is worth a couple hundred million US dollars today. So we're talking about a billion dollars minimum, that's a billion dollars being taken out of a finite pool of royalties.”
The connection between AI music generation and streaming fraud became explicit in September 2024, when a North Carolina musician named Michael Smith was indicted by federal prosecutors over allegations that he used an AI music company to help create “hundreds of thousands” of songs, then relied on those AI tracks to steal more than $10 million in fraudulent streaming royalty payments since 2017. Manhattan federal prosecutors charged Smith with three counts: wire fraud, wire fraud conspiracy, and money laundering conspiracy, making it the first federal case targeting streaming fraud.
Universal Music Group addressed this threat pre-emptively, placing provisions in agreements with digital service providers that prevent AI-generated content from being counted in the same royalty pools as human artists. UMG chief Lucian Grainge criticised the “exponential growth of AI slop” on streaming services. But artists not represented by major labels may lack similar protections.
A study conducted by CISAC (the International Confederation of Societies of Authors and Composers, representing over 5 million creators worldwide) and PMP Strategy projected that nearly 24 per cent of music creators' revenues are at risk by 2028, representing cumulative losses of 10 billion euros over five years, with annual losses reaching 4 billion euros in 2028 alone. The study further predicted that generative AI music would account for approximately 20 per cent of music streaming platforms' revenues and 60 per cent of music library revenues by 2028. Notably, CISAC reported that not a single AI developer has signed a licensing agreement with any of the 225 collective management organisations that are members of CISAC worldwide, despite societies approaching hundreds of AI companies with requests to negotiate licences. The model that has sustained recorded music revenues for the streaming era may be fundamentally threatened if AI content continues its current growth trajectory.
Human Artists as Raw Material
The relationship between AI music systems and human artists extends beyond competition. The AI platforms achieving chart success were trained on human creativity. Suno CEO Mikey Shulman acknowledged that the company trains on copyrighted music, stating: “We train our models on medium- and high-quality music we can find on the open internet. Much of the open internet indeed contains copyrighted materials.”
Major record labels responded with landmark lawsuits in June 2024 against Suno and Udio, the two leading AI music generation platforms, seeking damages of up to $150,000 per infringed recording. The legal battle represents one of the most significant intellectual property disputes of the streaming era, with outcomes that could fundamentally reshape how AI companies source training data and how human creators are compensated when their work is used to train commercial AI systems.
This creates a paradox: AI systems that threaten human artists' livelihoods were made possible by consuming those artists' creative output without compensation. The US Copyright Office's May 2025 report provided significant guidance on this matter, finding that training and deploying generative AI systems using copyright-protected material involves multiple acts that could establish prima facie infringement. The report specifically noted that “the use of more creative or expressive works (such as novels, movies, art, or music) is less likely to be fair use than use of factual or functional works” and warned that “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets... goes beyond established fair use boundaries.” Yet legal resolution remains distant, and in the interim, AI platforms continue generating content that competes with the human artists whose work trained them.
When Victoria Monet confronted the existence of Xania Monet, an AI persona whose name, appearance, and vocal style bore resemblance to her own, she described an experiment: a friend typed the prompt “Victoria Monet making tacos” into an AI image generator, and the system produced visuals that looked uncannily similar to Xania Monet's promotional imagery.
Whether this reflects direct training on Victoria Monet's work or emergent patterns from broader R&B training data, the practical effect remains the same. An artist's distinctive identity becomes raw material for generating commercial competitors. The boundaries between inspiration, derivation, and extraction blur when machine learning systems can absorb and recombine stylistic elements at industrial scale.
Possible Reckonings and Plausible Futures
The situation the music industry faces is not one problem but many interconnected problems that compound each other. Commercial metrics have been detached from quality assessment. Gatekeeping institutions have lost their filtering function. Listener perception has become unreliable as a quality signal. Royalty economics are being undermined by content flooding. Training data extraction has turned human creativity against its creators. And different stakeholder groups operate according to incompatible success frameworks.
Could widespread AI chart performance actually force a reckoning with how the music industry measures and defines value itself? There are reasons for cautious optimism.
Deezer has positioned itself as the first streaming service to automatically label 100 per cent AI-generated tracks, removing them from algorithmic recommendations and editorial playlists. This represents an attempt to preserve human music's privileged position in the discovery ecosystem. If other platforms adopt similar approaches, AI content might be effectively segregated into a separate category that does not compete directly with human artists.
The EU's AI Act, which entered into force on 1 August 2024, mandates unprecedented transparency about training data. Article 53 requires providers of general-purpose AI models to publish sufficiently detailed summaries of their training data, including content protected by copyright, according to a template published by the European Commission's AI Office in July 2025. Compliance became applicable from 2 August 2025, with the AI Office empowered to verify compliance and issue corrective measures from August 2026, with potential fines reaching 15 million euros or 3 per cent of global annual revenue. The GPAI Code of Practice operationalises these requirements by mandating that providers maintain copyright policies, rely only on lawful data sources, respect machine-readable rights reservations, and implement safeguards against infringing outputs. This transparency requirement could make it harder for AI music platforms to operate without addressing rights holder concerns.
Human premium pricing may emerge as a market response. The survey finding that 96 per cent of music industry insiders would consider paying a premium for human-verified music services suggests latent demand for authenticated human creativity. If platforms can credibly certify human authorship, a tiered market could develop where human music commands higher licensing fees.
Institutional reform remains possible. Billboard could establish separate charts for AI-generated music, preserving the significance of its traditional rankings whilst acknowledging the new category of content. The Recording Academy's human authorship requirement for Grammy eligibility demonstrates that cultural institutions can draw principled distinctions. These distinctions may become more robust if validated by legal and regulatory frameworks.
But there are also reasons for pessimism. Market forces favour efficiency, and AI music production is dramatically more efficient than human creation. If listeners genuinely cannot distinguish AI from human music in typical consumption contexts, there may be insufficient consumer pressure to preserve human-created content.
Fully AI-generated music currently accounts for only about 0.5 per cent of streams on Deezer despite comprising 34 per cent of daily uploads, which suggests the content is not yet finding significant audiences. But this could change as AI capabilities improve.
The fragmentation of success definitions may prove permanent. If streaming platforms, financial investors, cultural institutions, and human artists cannot agree on what success means, each group may simply operate according to its own framework, acknowledging the others' legitimacy selectively or not at all.
A track could simultaneously be a chart success, a financial investment, an ineligible Grammy submission, and an object of contempt from human artists. The unified status hierarchy that once organised the music industry could dissolve into parallel status systems that rarely intersect.
What Commercial Metrics Cannot Capture
Perhaps what the AI chart success reveals most clearly is that commercial metrics have always been inadequate measures of what music means. They were useful proxies when the systems generating commercially successful music also contained human judgement, human creativity, and human emotional expression. When those systems can be bypassed by algorithmic optimisation, the metrics are exposed as measuring only engagement behaviours, not the qualities those behaviours were supposed to indicate.
The traditional understanding of musical success included dimensions that are difficult to quantify: the sense that an artist had something to say and found a compelling way to say it, the recognition that creative skill and emotional honesty had produced something of value, the feeling of connection between artist and audience based on shared human experience.
These dimensions were always in tension with commercial metrics, but they were present in the evaluative frameworks that shaped which music received investment and promotion.
AI-generated music can trigger engagement behaviours. It can accumulate streams, achieve chart positions, and generate revenue. What it cannot do is mean something in the way human creative expression means something. It cannot represent the authentic voice of an artist working through lived experience. It cannot reward careful listening with the sense of encountering another human consciousness.
Whether listeners actually care about these distinctions is an empirical question that the market will answer. The preliminary evidence is mixed. That 97 per cent of listeners could not identify AI-generated music in a blind test suggests that, in passive consumption contexts, meaning may not be the operative criterion.
But the 80 per cent of survey respondents who agree that AI-generated music should be clearly labelled signal discomfort with being fooled. And the premium that industry professionals say they would pay for human-verified music suggests that at least some market segments value authenticity.
The reckoning, if it comes, will force the industry to articulate what it believes music is for. If music is primarily engagement content designed to fill attention and generate revenue, then AI-generated music is simply more efficient production of that content. If music is a form of human communication that derives meaning from its human origins, then AI-generated music is a category error masquerading as the real thing.
These are not technical questions that data can resolve. They are value questions that different stakeholders will answer differently.
What seems certain is that the status quo cannot hold. The same metrics that crown hits cannot simultaneously serve as quality filters when algorithmic output can game those metrics. The same gatekeeping institutions cannot simultaneously validate commercial success and preserve human authorship requirements when commercial success becomes achievable without human authorship. The same royalty pools cannot sustain human artists if flooded with AI content competing for the same finite attention and revenue.
The chart success of AI-generated music is not the end of human music. It is the beginning of a sorting process that will determine what human music is worth in a world where its commercial position can no longer be assumed. That process will reshape not just the music industry but our understanding of what distinguishes human creativity from its algorithmic simulation.
The answer we arrive at will say as much about what we value as listeners and as a culture as it does about the capabilities of the machines.
References and Sources
Billboard. “How Many AI Artists Have Debuted on Billboard's Charts?” https://www.billboard.com/lists/ai-artists-on-billboard-charts/
Billboard. “AI Artist Xania Monet Debuts on Adult R&B Airplay – a Radio Chart Breakthrough.” https://www.billboard.com/music/chart-beat/ai-artist-xania-monet-debut-adult-rb-airplay-chart-1236102665/
Billboard. “AI Music Artist Xania Monet Signs Multimillion-Dollar Record Deal.” https://www.billboard.com/pro/ai-music-artist-xania-monet-multimillion-dollar-record-deal/
Billboard. “The 10 Biggest AI Music Stories of 2025: Suno & Udio Settlements, AI on the Charts & More.” https://www.billboard.com/lists/biggest-ai-music-stories-2025-suno-udio-charts-more/
Billboard. “AI Music Artists Are on the Charts, But They Aren't That Popular – Yet.” https://www.billboard.com/pro/ai-music-artists-charts-popular/
Billboard. “Kehlani Slams AI Artist Xania Monet Over $3 Million Record Deal Offer.” https://www.billboard.com/music/music-news/kehlani-slams-ai-artist-xania-monet-million-record-deal-1236071158/
Bensound. “Human vs AI Music: Data, Emotion & Authenticity in 2025.” https://www.bensound.com/blog/human-generated-music-vs-ai-generated-music/
CBS News. “People can't tell AI-generated music from real thing anymore, survey shows.” https://www.cbsnews.com/news/ai-generated-music-real-thing-cant-tell/
CBS News. “New Grammy rule addresses artificial intelligence.” https://www.cbsnews.com/news/grammy-rule-artificial-intelligence-only-human-creators-eligible-awards/
CISAC. “Global economic study shows human creators' future at risk from generative AI.” https://www.cisac.org/Newsroom/news-releases/global-economic-study-shows-human-creators-future-risk-generative-ai
Deezer Newsroom. “Deezer and Ipsos study: AI fools 97% of listeners.” https://newsroom-deezer.com/2025/11/deezer-ipsos-survey-ai-music/
Deezer Newsroom. “Deezer: 28% of all delivered music is now fully AI-generated.” https://newsroom-deezer.com/2025/09/28-fully-ai-generated-music/
GOV.UK. “The impact of algorithmically driven recommendation systems on music consumption and production.” https://www.gov.uk/government/publications/research-into-the-impact-of-streaming-services-algorithms-on-music-consumption/
Hollywood Reporter. “Hallwood Media Signs Record Deal With an 'AI Music Designer.'” https://www.hollywoodreporter.com/music/music-industry-news/hallwood-inks-record-deal-ai-music-designer-imoliver-1236328964/
IFPI. “Global Music Report 2025.” https://globalmusicreport.ifpi.org/
Medium (Anoxia Lau). “The Human Premium: What 100 Music Insiders Reveal About the Real Value of Art in the AI Era.” https://anoxia2.medium.com/the-human-premium-what-100-music-insiders-reveal-about-the-real-value-of-art-in-the-ai-era-c4e12a498c4a
MIT Media Lab. “Exploring listeners' perceptions of AI-generated and human-composed music.” https://www.media.mit.edu/publications/exploring-listeners-perceptions-of-ai-generated-and-human-composed-music-for-functional-emotional-applications/
Music Ally. “UMG boss slams 'exponential growth of AI slop' on streaming services.” https://musically.com/2026/01/09/umg-boss-slams-exponential-growth-of-ai-slop-on-streaming-services/
Music Business Worldwide. “50,000 AI tracks flood Deezer daily.” https://www.musicbusinessworldwide.com/50000-ai-tracks-flood-deezer-daily-as-study-shows-97-of-listeners-cant-tell-the-difference-between-human-made-vs-fully-ai-generated-music/
Rap-Up. “Baby Tate & Muni Long Push Back Against AI Artist Xania Monet.” https://www.rap-up.com/article/baby-tate-muni-long-xania-monet-ai-artist-backlash
SAGE Journals (Bonini & Gandini). “First Week Is Editorial, Second Week Is Algorithmic: Platform Gatekeepers and the Platformization of Music Curation.” https://journals.sagepub.com/doi/full/10.1177/2056305119880006
Saving Country Music. “Billboard Must Address AI on the Charts NOW.” https://savingcountrymusic.com/billboard-must-address-ai-on-the-charts-now/
Spotify Engineering. “Humans + Machines: A Look Behind the Playlists Powered by Spotify's Algotorial Technology.” https://engineering.atspotify.com/2023/04/humans-machines-a-look-behind-spotifys-algotorial-playlists
TIME. “No, AI Artist Breaking Rust's 'Walk My Walk' Is Not a No. 1 Hit.” https://time.com/7333738/ai-country-song-breaking-rust-walk-my/
US Copyright Office. “Copyright and Artificial Intelligence Part 3: Generative AI Training.” https://www.copyright.gov/ai/
WIPO Magazine. “How AI-generated songs are fueling the rise of streaming farms.” https://www.wipo.int/en/web/wipo-magazine/articles/how-ai-generated-songs-are-fueling-the-rise-of-streaming-farms-74310
Yahoo Entertainment. “Kehlani, SZA Slam AI Artist Xania Monet's Multimillion-Dollar Record Deal.” https://www.yahoo.com/entertainment/music/articles/kehlani-sza-slam-ai-artist-203344886.html

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk