The Identity Loophole: How AI Builds Stars From Stolen Styles

In November 2025, Grammy-winning artist Victoria Monet sat for an interview with Vanity Fair and confronted something unprecedented in her fifteen-year career. Not a rival artist. Not a legal dispute over songwriting credits. Instead, she faced an algorithmic apparition: an AI-generated persona called Xania Monet, whose name, appearance, and vocal style bore an uncanny resemblance to her own. “It's hard to comprehend that, within a prompt, my name was not used for this artist to capitalise on,” Monet told the magazine. “I don't support that. I don't think that's fair.”

The emergence of Xania Monet, who secured a $3 million deal with Hallwood Media and became the first AI artist to debut on a Billboard radio chart, represents far more than a curiosity of technological progress. It exposes fundamental inadequacies in how intellectual property law conceives of artistic identity, and it reveals the emergence of business models specifically designed to exploit zones of legal ambiguity around voice, style, and likeness. The question is no longer whether AI can approximate human creativity. The question is what happens when that approximation becomes indistinguishable enough to extract commercial value from an artist's foundational assets while maintaining plausible deniability about having done so.

The controversy arrives at a moment when the music industry is already grappling with existential questions about AI. Major record labels have filed landmark lawsuits against AI music platforms. European courts have issued rulings that challenge the foundations of how AI companies operate. Congress is debating legislation that would create the first federal right of publicity in American history. And streaming platforms face mounting evidence that AI-generated content is flooding their catalogues, diluting the royalty pool that sustains human artists. Xania Monet sits at the intersection of all these forces, a test case for whether our existing frameworks can protect artistic identity in an age of sophisticated machine learning.

The Anatomy of Approximation

Victoria Monet's concern centres on something that existing copyright law struggles to address: the space between direct copying and inspired derivation. Copyright protects specific expressions of ideas, not the ideas themselves. It cannot protect a vocal timbre, a stylistic approach to melody, or the ineffable quality that makes an artist recognisable across their catalogue. You can copyright a particular song, but you cannot copyright the essence of how Victoria Monet sounds.

This legal gap has always existed, but it mattered less when imitation required human effort and inevitably produced human variation. A singer influenced by Monet would naturally develop their own interpretations, their own quirks, their own identity over time. But generative AI systems can analyse thousands of hours of an artist's work and produce outputs that capture stylistic fingerprints with unprecedented fidelity. The approximation can be close enough to trigger audience recognition without being close enough to constitute legal infringement.

The technical process behind this approximation involves training neural networks on vast corpora of existing music. These systems learn to recognise patterns across multiple dimensions simultaneously: harmonic progressions, rhythmic structures, timbral characteristics, production techniques, and vocal stylings. The resulting model does not store copies of the training data in any conventional sense. Instead, it encodes statistical relationships that allow it to generate new outputs exhibiting similar characteristics. This architecture creates a genuine conceptual challenge for intellectual property frameworks designed around the notion of copying specific works.
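
To make the distinction concrete, consider a deliberately simplified sketch. Real generators such as Suno rely on large neural networks trained on audio, but even a toy model illustrates the conceptual point: what survives training is a set of statistics, not the training material itself. The chord progressions below are invented for illustration, not drawn from any catalogue.

```python
from collections import defaultdict
import random

# Toy illustration of "learning statistical relationships": a first-order
# Markov model over chord symbols. Real music generators use large neural
# networks trained on audio, but the conceptual point is the same: the
# model stores statistics derived from the training data, not the data.

training_progressions = [            # invented example "corpus"
    ["Cmaj7", "Am7", "Dm7", "G7"],
    ["Cmaj7", "Em7", "Am7", "Dm7", "G7"],
    ["Fmaj7", "G7", "Cmaj7", "Am7"],
]

# "Training": count how often each chord follows another.
transitions = defaultdict(lambda: defaultdict(int))
for progression in training_progressions:
    for current, following in zip(progression, progression[1:]):
        transitions[current][following] += 1

# "Generation": sample new progressions from the learned statistics.
def generate(start, length, rng=random.Random(0)):
    chord, output = start, [start]
    for _ in range(length - 1):
        options = transitions.get(chord)
        if not options:
            break
        chords, counts = zip(*options.items())
        chord = rng.choices(chords, weights=counts, k=1)[0]
        output.append(chord)
    return output

print(generate("Cmaj7", 6))
# The "model" (the transition counts) contains no copy of any training
# progression, yet its outputs exhibit the corpus's style.
```

Scale that idea up by many orders of magnitude, replace counted chord transitions with billions of learned parameters over raw audio, and the legal puzzle comes into focus: the system retains no recording, yet its outputs can carry the statistical fingerprint of everything it consumed.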

Xania Monet exemplifies this phenomenon. The vocals and instrumental music released under her name are created using Suno, the AI music generation platform. The lyrics come from Mississippi poet and designer Telisha Jones, who serves as the creative force behind the virtual persona. But the sonic character, the R&B vocal stylings, the melodic sensibilities that drew comparisons to Victoria Monet, emerge from an AI system trained on vast quantities of existing music. In an interview with Gayle King, Jones defended her creative role, describing Xania Monet as “an extension of myself” and framing AI as simply “a tool, an instrument” to be utilised.

Victoria Monet described a telling experiment: a friend typed the prompt “Victoria Monet making tacos” into ChatGPT's image generator, and the system produced visuals that looked uncannily similar to Xania Monet's promotional imagery. Whether this reflects direct training on Victoria Monet's work or the emergence of stylistic patterns from broader R&B training data, the practical effect remains the same. An artist's distinctive identity becomes raw material for generating commercial competitors.

The precedent for this kind of AI-mediated imitation emerged dramatically in April 2023, when a song called “Heart on My Sleeve” appeared on streaming platforms. Created by an anonymous producer using the pseudonym Ghostwriter977, the track featured AI-generated vocals designed to sound like Drake and the Weeknd. Neither artist had any involvement in its creation. Universal Music Group quickly filed takedown notices citing copyright violation, but the song had already gone viral, demonstrating how convincingly AI could approximate celebrity vocal identities. Ghostwriter later revealed that the actual composition was entirely human-created, with only the vocal filters being AI-generated. The Recording Academy initially considered the track for Grammy eligibility before determining that the AI voice modelling made it ineligible.

The Training Data Black Box

At the heart of these concerns lies a fundamental opacity: the companies building generative AI systems have largely refused to disclose what training data their models consumed. This deliberate obscurity creates a structural advantage. When provenance cannot be verified, liability becomes nearly impossible to establish. When the creative lineage of an AI output remains hidden, artists cannot prove that their work contributed to the system producing outputs that compete with them.

The major record labels, Universal Music Group, Sony Music Entertainment, and Warner Music Group, recognised this threat early. In June 2024, they filed landmark lawsuits against Suno and Udio, the two leading AI music generation platforms, accusing them of “willful copyright infringement at an almost unimaginable scale.” The Recording Industry Association of America alleged that Udio's system had produced outputs with striking similarities to specific protected recordings, including songs by Michael Jackson, the Beach Boys, ABBA, and Mariah Carey. The lawsuits sought damages of up to $150,000 per infringed recording, potentially amounting to hundreds of millions of dollars.

Suno's defence hinged on a revealing argument. CEO Mikey Shulman acknowledged that the company trains on copyrighted music, stating, “We train our models on medium- and high-quality music we can find on the open internet. Much of the open internet indeed contains copyrighted materials.” But he argued this constitutes fair use, comparing it to “a kid writing their own rock songs after listening to the genre.” In subsequent legal filings, Suno claimed that none of the millions of tracks generated on its platform “contain anything like a sample” of existing recordings.

This argument attempts to draw a bright line between the training process and the outputs it produces. Even if the model learned from copyrighted works, Suno contends, the music it generates represents entirely new creations. The analogy to human learning, however, obscures a crucial difference: when humans learn from existing music, they cannot perfectly replicate the statistical patterns of that music's acoustic characteristics. AI systems can. And the scale differs by orders of magnitude. A human musician might absorb influences from hundreds or thousands of songs over a lifetime. An AI system can process millions of tracks and encode their patterns with mathematical precision.

The United States Copyright Office weighed in on this debate with a 108-page report published in May 2025, concluding that using copyrighted materials to train AI models may constitute prima facie infringement and cautioning that claims of transformative use do not automatically succeed. Where AI-generated outputs demonstrate substantial similarity to training data inputs, the report suggested, the model weights themselves may infringe reproduction and derivative work rights. The report also noted that the transformative use doctrine was never intended to permit wholesale appropriation of creative works for commercial AI development.

Separately, the Copyright Office had addressed the question of AI authorship. In a January 2025 decision, the office stated that AI-generated work can receive copyright protection “when and if it embodies meaningful human authorship.” This creates an interesting dynamic: the outputs of AI music generation may be copyrightable by the humans who shaped them, even as the training process that made those outputs possible may itself constitute infringement of others' copyrights.

The Personality Protection Gap

The Xania Monet controversy illuminates why copyright law alone cannot protect artists in the age of generative AI. Even if the major label lawsuits succeed in establishing that AI companies must license training data, this would not necessarily protect individual artists from having their identities approximated.

Consider what Victoria Monet actually lost in this situation. The AI persona did not copy any specific song she recorded. It did not sample her vocals. What it captured, or appeared to capture, was something more fundamental: the quality of her artistic presence, the characteristics that make audiences recognise her work. This touches on what legal scholars call the right of publicity, the right to control commercial use of one's name, image, and likeness.

But here the legal landscape becomes fragmented and inadequate. In the United States, there is no federal right of publicity law. Protection varies dramatically by state, with around 30 states providing statutory rights and others relying on common law protections. All 50 states recognise some form of common law rights against unauthorised use of a person's name, image, or likeness, but the scope and enforceability of these protections differ substantially across jurisdictions.

Tennessee's ELVIS Act, which took effect on 1 July 2024, became the first state legislation specifically designed to protect musicians from unauthorised AI replication of their voices. Named in tribute to Elvis Presley, whose estate had litigated to control his posthumous image rights, the law explicitly includes voice as protected property, defining it to encompass both actual voice and AI-generated simulations. The legislation passed unanimously in both chambers of the Tennessee legislature, with 93 ayes in the House and 30 in the Senate, reflecting bipartisan recognition of the threat AI poses to the state's music industry.

Notably, the ELVIS Act contains provisions targeting not just those who create deepfakes without authorisation but also the providers of the systems used to create them. The law allows lawsuits against any person who “makes available an algorithm, software, tool, or other technology, service, or device” whose “primary purpose or function” is creating unauthorised voice recordings. This represents a significant expansion of liability that could potentially reach AI platform developers themselves.

California followed with its own protective measures. In September 2024, Governor Gavin Newsom signed AB 2602, which requires that contracts authorising the use of AI-generated digital replicas of a performer's voice or likeness include specific consent provisions, and that the performer have professional representation during negotiations. The law defines a “digital replica” as a “computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual.” AB 1836 prohibits creating or distributing digital replicas of deceased personalities without permission from their estates, extending these protections beyond the performer's lifetime.

Yet these state-level protections remain geographically limited and inconsistently applied. An AI artist created using platforms based outside these jurisdictions, distributed through global streaming services, and promoted through international digital channels exists in a regulatory grey zone. The Copyright Office's July 2024 report on digital replicas concluded there was an urgent need for federal right of publicity legislation protecting all people from unauthorised use of their likeness and voice, noting that the current patchwork of state laws contains “gaps and inconsistencies” too great to remedy the commercial appropriation enabled by generative AI.

The NO FAKES Act, first introduced in Congress in July 2024 by a bipartisan group of senators including Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis, represents the most comprehensive attempt to address this gap at the federal level. The legislation would establish the first federal right of publicity in the United States, providing a national standard to protect creators' likenesses from unauthorised use while allowing control over digital personas for up to 70 years after death. The reintroduction in April 2025 gained support from an unusual coalition including major record labels, SAG-AFTRA, Google, and OpenAI. Country music artist Randy Travis, whose voice was digitally recreated using AI after a stroke left him unable to sing, appeared at the legislation's relaunch.

But even comprehensive right of publicity protection faces a fundamental challenge: proving that a particular AI persona was specifically created to exploit another artist's identity. Xania Monet's creators have not acknowledged any intention to capitalise on Victoria Monet's identity. The similarity in names could be coincidental. The stylistic resemblances could emerge organically from training on R&B music generally. Without transparency about training data composition, artists face the impossible task of proving a negative.

The Business Logic of Ambiguity

What makes the Xania Monet case particularly significant is what it reveals about emerging business models in AI music. This is not an accidental byproduct of technological progress. It represents a deliberate commercial strategy that exploits the gap between what AI can approximate and what law can protect.

Hallwood Media, the company that signed Xania Monet to her $3 million deal, is led by Neil Jacobson, formerly president of Geffen Records. Hallwood operates as a multi-faceted music company servicing talent through recording, management, publishing, distribution, and merchandising divisions. The company had already invested in Suno and, in July 2025, signed imoliver, described as the top-streaming “music designer” on Suno, in what was billed as the first traditional label signing of an AI music creator. Jacobson positioned these moves as embracing innovation, stating that imoliver “represents the future of our medium. He's a music designer who stands at the intersection of craftwork and taste.”

The distinction between imoliver and Xania Monet is worth noting. Hallwood describes imoliver as a real human creator who uses AI tools, whereas Xania Monet is presented as a virtual artist persona. But in both cases, the commercial model extracts value from AI's ability to generate music at scale with reduced human labour costs.

The economics are straightforward. An AI artist requires no rest, no touring support, no advance payments against future royalties, no management of interpersonal conflicts or creative disagreements. Victoria Monet herself articulated this asymmetry: “It definitely puts creators in a dangerous spot because our time is more finite. We have to rest at night. So, the eight hours, nine hours that we're resting, an AI artist could potentially still be running, studying, and creating songs like a machine.”

Xania Monet's commercial success demonstrates the model's viability. Her song “How Was I Supposed to Know” reached number one on R&B Digital Song Sales and number three on R&B/Hip-Hop Digital Song Sales. Her catalogue accumulated 9.8 million on-demand streams in the United States, with 5.4 million coming in a single tracking week. She became the first AI artist to debut on a Billboard radio chart, entering the Adult R&B Airplay chart at number 30. Her song “Let Go, Let God” debuted at number 21 on Hot Gospel Songs.

For investors and labels, this represents an opportunity to capture streaming revenue without many of the costs associated with human artists. For human artists, it represents an existential threat: the possibility that their own stylistic innovations could be extracted, aggregated, and turned against them in the form of competitors who never tire, never renegotiate contracts, and never demand creative control. The music industry has long relied on finding and developing talent, but AI offers a shortcut that could fundamentally alter how value is created and distributed.

The Industry Response and Its Limits

Human artists have pushed back against AI music with remarkable consistency across genres and career levels. Kehlani took to TikTok to express her frustration about Xania Monet's deal, stating, “There is an AI R&B artist who just signed a multi-million-dollar deal, and has a Top 5 R&B album, and the person is doing none of the work.” She declared that “nothing and no one on Earth will ever be able to justify AI to me.”

SZA expressed environmental and ethical concerns, posting on Instagram that AI technology causes “harm” to marginalised neighbourhoods and asking fans not to create AI images or songs using her likeness. Baby Tate criticised Xania Monet's creator for lacking creativity and authenticity in her music process. Muni Long questioned why AI artists appeared to be gaining acceptance in R&B specifically, remarking, “It wouldn't be allowed to happen in country or pop.” She also noted that Xania Monet's Apple Music biography listed her, Keyshia Cole, and K. Michelle as references, adding, “I'm not happy about it at all. Zero percent.”

Beyonce reportedly expressed fear after hearing an AI version of her own voice, highlighting how even artists at the highest commercial tier feel vulnerable to this technology.

This criticism highlights an uncomfortable pattern: the AI music entities gaining commercial traction have disproportionately drawn comparisons to Black R&B artists. Whether this reflects biases in training data composition, market targeting decisions, or coincidental emergence, the effect raises questions about which artistic communities bear the greatest risks from AI appropriation. The history of American popular music includes numerous examples of Black musical innovations being appropriated by white artists and industry figures. AI potentially automates and accelerates this dynamic.

The creator behind Xania Monet has not remained silent. In December 2025, the AI artist released a track titled “Say My Name With Respect,” which directly addressed critics including Kehlani. While the song does not mention Kehlani by name, the accompanying video displayed screenshots of her previous statements about AI alongside comments from other detractors.

The major labels' lawsuits against Suno and Udio remain ongoing, though Universal Music Group announced in 2025 that it had settled with Udio and struck a licensing deal, and Warner Music Group has reached similar agreements. These settlements suggest that large rights holders may secure compensation and control over how their catalogues are used in AI training. But individual artists, particularly those not signed to major labels, may find themselves excluded from whatever protections these arrangements provide.

The European Precedent

While American litigation proceeds through discovery and motions, Europe has produced the first major judicial ruling holding an AI developer liable for copyright infringement related to training. On 11 November 2025, the Munich Regional Court ruled largely in favour of GEMA, the German collecting society representing songwriters, in its lawsuit against OpenAI.

The case centred on nine songs whose lyrics ChatGPT could reproduce almost verbatim in response to simple user prompts. The songs at issue included well-known German tracks such as “Atemlos” and “Wie schön, dass du geboren bist.” The court accepted GEMA's argument that training data becomes embedded in model weights and remains retrievable, a phenomenon researchers call “memorisation.” Even a 15-word passage was sufficient to establish infringement, the court found, because such specific text would not realistically be generated from scratch.
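
The court's reasoning suggests a practical test that is easy to sketch in outline: compare a model's output against a protected text and measure the longest run of words the two share verbatim. The snippet below is a minimal illustration of that idea, not the methodology GEMA or the court actually used; the strings are placeholders rather than the lyrics at issue, and the 15-word threshold simply echoes the figure from the ruling.

```python
def longest_shared_run(candidate: str, reference: str) -> int:
    """Length, in words, of the longest word-for-word sequence shared by
    the two texts. A long shared run in a model's output is one practical
    signal of training-data memorisation."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    best = 0
    for i in range(len(cand)):
        for j in range(len(ref)):
            k = 0
            while (i + k < len(cand) and j + k < len(ref)
                   and cand[i + k] == ref[j + k]):
                k += 1
            best = max(best, k)
    return best

# Placeholder strings stand in for a model's output and a protected lyric.
model_output = "placeholder words one two three four five six seven eight"
protected_lyric = "unrelated opening then one two three four five six seven eight"

run = longest_shared_run(model_output, protected_lyric)
print(f"Longest verbatim overlap: {run} words")
if run >= 15:  # echoes the 15-word passage the court considered sufficient
    print("An overlap this long is unlikely to arise by chance.")
```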

Crucially, the court rejected OpenAI's attempt to benefit from text and data mining exceptions applicable to non-profit research. OpenAI argued that while some of its legal entities pursue commercial objectives, the parent company was founded as a non-profit. Presiding Judge Dr Elke Schwager dismissed this argument, stating that to qualify for research exemptions, OpenAI would need to prove it reinvests 100 percent of profits in research and development or operates with a governmentally recognised public interest mandate.

The ruling ordered OpenAI to cease storing unlicensed German lyrics on infrastructure in Germany, provide information about the scope of use and related revenues, and pay damages. The court also ordered that the judgment be published in a local newspaper. Finding that OpenAI had acted with at minimum negligence, the court denied the company a grace period for making the necessary changes. OpenAI announced plans to appeal, and the judgment may ultimately reach the Court of Justice of the European Union. But as the first major European decision holding an AI developer liable for training on protected works, it establishes a significant precedent.

GEMA is pursuing parallel action against Suno in another lawsuit, with a hearing expected before the Munich Regional Court in January 2026. If European courts continue to reject fair use-style arguments for AI training, companies may face a choice between licensing music rights or blocking access from EU jurisdictions entirely.

The Royalty Dilution Problem

Beyond the question of training data rights lies another structural threat to human artists: the dilution of streaming royalties by AI-generated content flooding platforms. Streaming services operate on pro-rata payment models where subscription revenue enters a shared pool divided according to total streams. When more content enters the system, the per-stream value for all creators decreases.
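
The arithmetic of dilution is straightforward to demonstrate. The figures below are invented round numbers rather than any platform's actual economics; they simply show how a fixed pro-rata pool responds when AI-generated uploads add streams without adding subscription revenue.

```python
# Illustrative arithmetic only: hypothetical round numbers, not real
# platform data, showing how pro-rata payouts dilute as volume grows.

royalty_pool = 1_000_000_000        # hypothetical monthly pool, in dollars
human_streams = 100_000_000_000     # hypothetical streams of human-made tracks

per_stream_before = royalty_pool / human_streams

# Suppose AI-generated uploads add 20% more streams to the same fixed pool.
ai_streams = 0.20 * human_streams
per_stream_after = royalty_pool / (human_streams + ai_streams)

human_share_before = per_stream_before * human_streams
human_share_after = per_stream_after * human_streams

print(f"Per-stream rate before: ${per_stream_before:.5f}")
print(f"Per-stream rate after:  ${per_stream_after:.5f}")
print(f"Human artists' share falls from ${human_share_before:,.0f} "
      f"to ${human_share_after:,.0f} "
      f"({1 - human_share_after / human_share_before:.0%} decline)")
```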

In April 2025, streaming platform Deezer estimated that 18 percent of content uploaded daily, approximately 20,000 tracks, is AI-generated. This influx of low-cost content competes for the same finite pool of listener attention and royalty payments that sustains human artists. In 2024, Spotify alone paid out $10 billion to the music industry, with independent artists and labels collectively generating more than $5 billion from the platform. But this revenue gets divided among an ever-expanding universe of content, much of it now machine-generated.

The problem extends beyond legitimate AI music releases to outright fraud. In a notable case, musician Michael Smith allegedly extracted more than $10 million in royalty payments by uploading hundreds of thousands of AI-generated songs and using bots to artificially inflate play counts. According to fraud detection firm Beatdapp, streaming fraud removes approximately $1 billion annually from the royalty pool.

A global study commissioned by CISAC, the international confederation representing over 5 million creators, projected that while generative AI providers will experience dramatic revenue growth, music creators will see approximately 24 percent of their revenues at risk of loss by 2028. Audiovisual creators face a similar 21 percent risk. This represents a fundamental redistribution of value from human creators to technology platforms, enabled by the same legal ambiguities that allow AI personas to approximate existing artists without liability.

The market for AI in music is expanding rapidly. Global AI in music was valued at $2.9 billion in 2024, with projections suggesting growth to $38.7 billion by 2033 at a compound annual growth rate of 25.8 percent. Musicians are increasingly adopting the technology, with approximately 60 percent utilising AI tools in their projects and 36.8 percent of producers integrating AI into their workflows. But this adoption occurs in the context of profound uncertainty about how AI integration will affect long-term career viability.

The Question of Disclosure

Victoria Monet proposed a simple reform that might partially address these concerns: requiring clear labelling of AI-generated music, similar to how food products must disclose their ingredients. “I think AI music, as it is released, needs to be disclosed more,” she told Vanity Fair. “Like on food, we have labels for organic and artificial so that we can make an informed decision about what we consume.”

This transparency principle has gained traction among legislators. In April 2024, California Representative Adam Schiff introduced the Generative AI Copyright Disclosure Act, which would require AI firms to notify the Copyright Office of copyrighted works used in training at least 30 days before publicly releasing a model. Though the bill did not become law, it reflected growing consensus that the opacity of training data represents a policy problem requiring regulatory intervention.

The music industry's lobbying priorities have coalesced around three demands: permission, payment, and transparency. Rights holders want AI companies to seek permission before training on copyrighted music. They want to be paid for such use through licensing deals. And they want transparency about what data sets models actually use, without which the first two demands cannot be verified or enforced.

But disclosure requirements face practical challenges. How does one audit training data composition at scale? How does one verify that an AI system was not trained on particular artists when the systems themselves may not retain explicit records of their training data? The technical architecture of neural networks does not readily reveal which inputs influenced which outputs. Proving that Victoria Monet's recordings contributed to Xania Monet's stylistic character may be technically impossible even with full disclosure of training sets.
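
One mechanism that is sometimes discussed, sketched below purely as a hypothetical, is a published manifest of content hashes: a developer would disclose a fingerprint of every training item, and a rights holder could check whether an exact copy of their master was ingested. Nothing here reflects an existing standard or any company's actual practice, and even a complete manifest would answer only the first question (what went in), not the second (which inputs shaped a given output).

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """Stable identifier for a training item (here, a SHA-256 digest)."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(training_items: dict) -> dict:
    """Map an internal item ID to its content hash for publication."""
    return {item_id: fingerprint(blob) for item_id, blob in training_items.items()}

# Invented placeholder "recordings": byte strings stand in for audio files.
training_items = {
    "item-0001": b"placeholder audio bytes A",
    "item-0002": b"placeholder audio bytes B",
}
manifest = build_manifest(training_items)
print(json.dumps(manifest, indent=2))

# A rights holder with their own master file could hash it and test membership.
claimed_master = b"placeholder audio bytes A"
print(fingerprint(claimed_master) in manifest.values())  # True only for an exact copy

# The limitation described above remains: a manifest lists what was ingested,
# not which inputs influenced any particular output, and it fails entirely if
# the ingested file differs from the master by even one byte.
```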

Redefining Artistic Value

Perhaps the most profound question raised by AI music personas is not legal but cultural: what do we value about human artistic creation, and can those values survive technological displacement?

Human music carries meanings that transcend sonic characteristics. When Victoria Monet won three Grammy Awards in 2024, including Best New Artist after fifteen years of working primarily as a songwriter for other performers, that recognition reflected not just the quality of her album Jaguar II but her personal journey, her persistence through years when labels declined to spotlight her, her evolution from writing hits for Ariana Grande to commanding her own audience. “This award was a 15-year pursuit,” she said during her acceptance speech. Her work with Ariana Grande had already earned her three Grammy nominations in 2019, including for Album of the Year for Thank U, Next, but her own artistic identity had taken longer to establish. These biographical dimensions inform how listeners relate to her work.

An AI persona has no such biography. Xania Monet cannot discuss the personal experiences that shaped her lyrics because those lyrics emerge from prompts written by Telisha Jones and processed through algorithmic systems. The emotional resonance of human music often derives from audiences knowing that another human experienced something and chose to express it musically. Can AI-generated music provide equivalent emotional value, or does it offer only a simulation of feeling, convincing enough to capture streams but hollow at its core?

The market appears agnostic on this question, at least in the aggregate. Xania Monet's streaming numbers suggest that significant audiences either do not know or do not care that her music is AI-generated. This consumer indifference may represent the greatest long-term threat to human artists: not that AI music will be legally prohibited, but that it will become commercially indistinguishable from human music in ways that erode the premium audiences currently place on human creativity.

The emergence of AI personas that approximate existing artists reveals that our legal and cultural frameworks for artistic identity were built for a world that no longer exists. Copyright law assumed that copying required access to specific works and that derivation would be obvious. Right of publicity law assumed that commercial exploitation of identity would involve clearly identifiable appropriation. The economics of music assumed that creating quality content would always require human labour that commands payment.

Each of these assumptions has been destabilised by generative AI systems that can extract stylistic essences without copying specific works, create virtual identities that approximate real artists without explicit acknowledgment, and produce unlimited content at marginal costs approaching zero.

The solutions being proposed represent necessary but insufficient responses. Federal right of publicity legislation, mandatory training data disclosure, international copyright treaty updates, and licensing frameworks for AI training may constrain the most egregious forms of exploitation while leaving the fundamental dynamic intact: AI systems can transform human creativity into training data, extract commercially valuable patterns, and generate outputs that compete with human artists in ways that existing law struggles to address.

Victoria Monet's experience with Xania Monet may become the template for a new category of artistic grievance: the sense of being approximated, of having one's creative identity absorbed into a system and reconstituted as competition. Whether law and culture can evolve quickly enough to protect against this form of extraction remains uncertain. What is certain is that the question can no longer be avoided. The ghost has emerged from the machine, and it wears a familiar face.


References and Sources

  1. Face2Face Africa. “Victoria Monet criticizes AI artist Xania Monet, suggests it may have been created using her likeness.” https://face2faceafrica.com/article/victoria-monet-criticizes-ai-artist-xania-monet-suggests-it-may-have-been-created-using-her-likeness

  2. TheGrio. “Victoria Monet sounds the alarm on Xania Monet: 'I don't support that. I don't think that's fair.'” https://thegrio.com/2025/11/18/victoria-monet-reacts-to-xania-monet/

  3. Billboard. “AI Music Artist Xania Monet Signs Multimillion-Dollar Record Deal.” https://www.billboard.com/pro/ai-music-artist-xania-monet-multimillion-dollar-record-deal/

  4. Boardroom. “Xania Monet's $3 Million Record Deal Sparks AI Music Debate.” https://boardroom.tv/xania-monet-ai-music-play-by-play/

  5. Music Ally. “Hallwood Media sees chart success with AI artist Xania Monet.” https://musically.com/2025/09/18/hallwood-media-sees-chart-success-with-ai-artist-xania-monet/

  6. RIAA. “Record Companies Bring Landmark Cases for Responsible AI Against Suno and Udio.” https://www.riaa.com/record-companies-bring-landmark-cases-for-responsible-ai-againstsuno-and-udio-in-boston-and-new-york-federal-courts-respectively/

  7. Rolling Stone. “RIAA Sues AI Music Generators For Copyright Infringement.” https://www.rollingstone.com/music/music-news/record-labels-sue-music-generators-suno-and-udio-1235042056/

  8. TechCrunch. “AI music startup Suno claims training model on copyrighted music is 'fair use.'” https://techcrunch.com/2024/08/01/ai-music-startup-suno-response-riaa-lawsuit/

  9. Skadden. “Copyright Office Weighs In on AI Training and Fair Use.” https://www.skadden.com/insights/publications/2025/05/copyright-office-report

  10. U.S. Copyright Office. “Copyright and Artificial Intelligence.” https://www.copyright.gov/ai/

  11. Wikipedia. “ELVIS Act.” https://en.wikipedia.org/wiki/ELVIS_Act

  12. Tennessee Governor's Office. “Tennessee First in the Nation to Address AI Impact on Music Industry.” https://www.tn.gov/governor/news/2024/1/10/tennessee-first-in-the-nation-to-address-ai-impact-on-music-industry.html

  13. ASCAP. “ELVIS Act Signed Into Law in Tennessee To Protect Music Creators from AI Impersonation.” https://www.ascap.com/news-events/articles/2024/03/elvis-act-tn

  14. California Governor's Office. “Governor Newsom signs bills to protect digital likeness of performers.” https://www.gov.ca.gov/2024/09/17/governor-newsom-signs-bills-to-protect-digital-likeness-of-performers/

  15. Manatt, Phelps & Phillips. “California Enacts a Suite of New AI and Digital Replica Laws.” https://www.manatt.com/insights/newsletters/client-alert/california-enacts-a-host-of-new-ai-and-digital-rep

  16. Congress.gov. “NO FAKES Act of 2025.” https://www.congress.gov/bill/119th-congress/house-bill/2794/text

  17. Billboard. “NO FAKES Act Returns to Congress With Support From YouTube, OpenAI for AI Deepfake Bill.” https://www.billboard.com/pro/no-fakes-act-reintroduced-congress-support-ai-deepfake-bill/

  18. Hollywood Reporter. “Hallwood Media Signs Record Deal With an 'AI Music Designer.'” https://www.hollywoodreporter.com/music/music-industry-news/hallwood-inks-record-deal-ai-music-designer-imoliver-1236328964/

  19. Billboard. “Hallwood Signs 'AI Music Designer' imoliver to Record Deal, a First for the Music Business.” https://www.billboard.com/pro/ai-music-creator-imoliver-record-deal-hallwood/

  20. Complex. “Kehlani Blasts AI Musician's $3 Million Record Deal.” https://www.complex.com/music/a/jadegomez510/kehlani-xenia-monet-ai

  21. Billboard. “Kehlani Slams AI Artist Xania Monet Over $3 Million Record Deal Offer.” https://www.billboard.com/music/music-news/kehlani-slams-ai-artist-xania-monet-million-record-deal-1236071158/

  22. Rap-Up. “Baby Tate & Muni Long Push Back Against AI Artist Xania Monet.” https://www.rap-up.com/article/baby-tate-muni-long-xania-monet-ai-artist-backlash

  23. Bird & Bird. “Landmark ruling of the Munich Regional Court (GEMA v OpenAI) on copyright and AI training.” https://www.twobirds.com/en/insights/2025/landmark-ruling-of-the-munich-regional-court-(gema-v-openai)-on-copyright-and-ai-training

  24. Billboard. “German Court Rules OpenAI Infringed Song Lyrics in Europe's First Major AI Music Ruling.” https://www.billboard.com/pro/gema-ai-music-copyright-case-open-ai-chatgpt-song-lyrics/

  25. Norton Rose Fulbright. “Germany delivers landmark copyright ruling against OpenAI: What it means for AI and IP.” https://www.nortonrosefulbright.com/en/knowledge/publications/656613b2/germany-delivers-landmark-copyright-ruling-against-openai-what-it-means-for-ai-and-ip

  26. CISAC. “Global economic study shows human creators' future at risk from generative AI.” https://www.cisac.org/Newsroom/news-releases/global-economic-study-shows-human-creators-future-risk-generative-ai

  27. WIPO Magazine. “How AI-generated songs are fueling the rise of streaming farms.” https://www.wipo.int/en/web/wipo-magazine/articles/how-ai-generated-songs-are-fueling-the-rise-of-streaming-farms-74310

  28. Grammy.com. “2024 GRAMMYs: Victoria Monet Wins The GRAMMY For Best New Artist.” https://www.grammy.com/news/2024-grammys-victoria-monet-best-new-artist-win

  29. Billboard. “Victoria Monet Wins Best New Artist at 2024 Grammys: 'This Award Was a 15-Year Pursuit.'” https://www.billboard.com/music/awards/victoria-monet-grammy-2024-best-new-artist-1235598716/

  30. Harvard Law School. “AI created a song mimicking the work of Drake and The Weeknd. What does that mean for copyright law?” https://hls.harvard.edu/today/ai-created-a-song-mimicking-the-work-of-drake-and-the-weeknd-what-does-that-mean-for-copyright-law/

  31. Variety. “AI-Generated Fake 'Drake'/'Weeknd' Collaboration, 'Heart on My Sleeve,' Delights Fans and Sets Off Industry Alarm Bells.” https://variety.com/2023/music/news/fake-ai-generated-drake-weeknd-collaboration-heart-on-my-sleeve-1235585451/

  32. ArtSmart. “AI in Music Industry Statistics 2025: Market Growth & Trends.” https://artsmart.ai/blog/ai-in-music-industry-statistics/

  33. Rimon Law. “U.S. Copyright Office Will Accept AI-Generated Work for Registration When and if It Embodies Meaningful Human Authorship.” https://www.rimonlaw.com/u-s-copyright-office-will-accept-ai-generated-work-for-registration-when-and-if-it-embodies-meaningful-human-authorship/

  34. Billboard. “AI Artist Xania Monet Fires Back at Kehlani & AI Critics on Prickly 'Say My Name With Respect' Single.” https://www.billboard.com/music/rb-hip-hop/xania-monet-kehlani-ai-artist-say-my-name-with-respect-1236142321/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
