Dead Performers and Machine Creations: The Battle for Authentic Entertainment

In September 2025, Hollywood's unions found themselves confronting an adversary unlike any they had faced before. Tilly Norwood had attracted the attention of multiple talent agencies eager to represent her. She possessed the polish of a seasoned performer, the algorithmic perfection of someone who had never experienced a bad hair day, and one notable characteristic that set her apart from every other aspiring actor in Los Angeles: she did not exist.
Tilly Norwood is not human. She is a fully synthetic creation, generated by the London-based production studio Particle6, whose founder Eline van der Velden announced at the Zurich Film Festival that several agencies were clamouring to sign the AI 'actress'. Van der Velden's ambition was unambiguous: 'We want Tilly to be the next Scarlett Johansson or Natalie Portman'. The entertainment industry's response was swift and polarised. SAG-AFTRA, the union representing screen performers, issued a blistering statement declaring that Tilly Norwood 'is not an actor, it's a character generated by a computer program that was trained on the work of countless professional performers' without permission or compensation. The union accused the creation of 'using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry'.
Yet Van der Velden remained sanguine, comparing AI actors to animation, puppetry, and CGI, describing them as simply 'another way to imagine and build stories'. At a conference in Los Angeles, she reported that in her discussions with studios, the conversation had shifted dramatically. Companies that dismissed AI performers as 'nonsense' in February were, by May, eager to explore partnerships with Particle6. The message was clear: whether the entertainment industry likes it or not, synthetic performers have arrived, and they are not waiting for permission.
This moment represents more than a technological novelty or a legal skirmish between unions and production companies. It marks a fundamental inflection point in the history of human creativity and performance. As AI generates synthetic performers who never draw breath and resurrects deceased celebrities who can tour indefinitely without complaint, we face urgent questions about what happens to human artistry, authentic expression, and the very definition of entertainment in an age when anything can be simulated and anyone can be digitally reborn.
The Synthetic Celebrity Industrial Complex
The emergence of AI-generated performers is not an isolated phenomenon but the culmination of decades of technological development and cultural preparation. Japan's Hatsune Miku, a holographic pop idol created in 2007, pioneered the concept of the virtual celebrity. With her turquoise pigtails and synthesised voice, Miku built a devoted global fanbase, held sold-out concerts, and demonstrated that audiences would form emotional connections with explicitly artificial performers. What began as a cultural curiosity has metastasised into a vast ecosystem.
By 2025, AI-generated influencers have established a significant presence on social media platforms, a virtual K-pop group launched in South Korea has attracted a substantial following, and synthetic models appear in advertising campaigns for major brands. The economic logic is compelling. AI performers require no salaries, benefits, or accommodation. They never age, never complain, never experience scandal, and never demand creative control. They can be endlessly replicated, localised for different markets, and modified to match shifting consumer preferences. For entertainment companies operating on increasingly thin margins, the appeal is undeniable.
The technology behind these synthetic celebrities has reached startling sophistication. Companies like Particle6 employ advanced generative AI systems trained on vast databases of human performances. These systems analyse facial expressions, body language, vocal patterns, and emotional nuance from thousands of hours of footage, learning to synthesise new performances that mimic human behaviour with uncanny accuracy. The process involves selecting actors who physically resemble the desired celebrity, capturing their movements, and then digitally overlaying AI-generated faces and voices that achieve near-perfect verisimilitude.
Yet beneath the technological marvel lies a troubling reality. The AI systems creating these performers are trained on copyrighted material, often without permission or compensation to the original artists whose work forms the training data. This creates what critics describe as a form of algorithmic plagiarism, where the accumulated labour of thousands of performers is distilled, homogenised, and repackaged as a product that directly competes with those same artists for employment opportunities.
SAG-AFTRA president Sean Astin has been unequivocal about the threat. During the 2023 strikes, actors secured provisions requiring consent and compensation for digital replicas, but the emergence of wholly synthetic performers trained on unauthorised data represents a more insidious challenge. These entities exist in a legal grey zone, neither exact replicas of specific individuals nor entirely original creations. They are amalgamations, chimeras built from fragments of human artistry without attribution or remuneration.
The displacement concerns extend beyond leading actors. Background performers, voice actors, and character actors face particular vulnerability. Whilst audiences might detect the artificiality of a synthetic Scarlett Johansson in a leading role, they are far less likely to notice when background characters or minor speaking parts are filled by AI-generated performers. This creates a tiered erosion of employment, where the invisible infrastructure of the entertainment industry gradually hollows out whilst marquee names remain, at least temporarily, protected by their star power and the difficulty of replicating them.
Resurrection as a Service
Parallel to the emergence of synthetic performers is the burgeoning industry of digital resurrection. In recent years, audiences have witnessed holographic performances by Maria Callas, Whitney Houston, Tupac Shakur, Michael Jackson, and Roy Orbison, all deceased artists returned to the stage through a combination of archival footage, motion capture, and AI enhancement. Companies like Base Hologram specialise in these spectral resurrections, creating tours and residencies that allow fans to experience performances by artists who died years or decades ago.
The technology relies primarily on an optical illusion known as Pepper's Ghost, a theatrical technique dating to the 19th century. Modern implementations use the Musion EyeLiner system, which projects high-definition video onto a thin metallised film angled towards the audience, creating the illusion of a three-dimensional figure on stage. When combined with live orchestras or backing bands, the effect can be remarkably convincing, though limitations remain evident. The vocals emanate from speakers rather than the holographic figure, and the performances lack the spontaneity and present-moment responsiveness that define live entertainment.
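The optics behind Pepper's Ghost reduce to plane-mirror reflection: the audience perceives a virtual image of the projected source, mirrored across the plane of the angled film. A minimal sketch of that geometry, under the simplifying assumption of an ideal 45-degree reflector passing through the origin (the coordinates are hypothetical, chosen only for illustration):

```python
import math

def reflect_across_plane(p, q, n):
    """Reflect point p across the plane through point q with unit normal n.
    Points and vectors are (x, y, z) tuples."""
    # Signed distance from p to the plane, along the normal.
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n))
    # Move p twice that distance back through the plane.
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, n))

# A 45-degree film through the origin, tilted between the floor (y axis)
# and stage depth (z axis); its unit normal:
s = 1 / math.sqrt(2)
normal = (0.0, s, -s)

# A projector image 2 m below the film reflects to a virtual figure
# 2 m behind the film, at stage level.
virtual = reflect_across_plane((0.0, -2.0, 0.0), (0.0, 0.0, 0.0), normal)
```

Reflecting a source below the film places the virtual figure upright behind it, which is why the projection itself can lie flat and out of the audience's sightline while the 'performer' appears to stand on stage.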
Recent advances in AI have dramatically enhanced these resurrections. Ten hours of audio can be fed into machine learning models to synthesise new vocal performances in a deceased artist's voice. Motion capture data from actors can be algorithmically modified to mimic the distinctive performance styles of departed celebrities. The result is not merely a replay of archived material but the creation of new performances that the original artist never gave, singing songs they never recorded, appearing in productions they never conceived.
The ethical implications are profound. When the estate of George Carlin sued a media company in 2024 for using AI to create an unauthorised comedy special featuring a synthetic version of the late comedian, the case highlighted the absence of clear legal frameworks governing posthumous digital exploitation. The lawsuit alleged deprivation of the right of publicity, violation of common law publicity rights, and copyright infringement. It settled with a permanent injunction, but the broader questions remained unresolved.
What would Maria Callas, who famously controlled every aspect of her artistic presentation, think about being digitally manipulated to perform in productions she never authorised? Would Prince, who notoriously guarded his artistic output and died without a will, consent to the posthumous hologram performances and album releases that have followed his death? The artists themselves cannot answer, leaving executors, heirs, and corporate entities to make decisions that profoundly shape legacy and memory.
Iain MacKinnon, a Toronto-based media lawyer, articulated the dilemma succinctly: 'It's a tough one, because if the artist never addressed the issue whilst he or she was alive, anybody who's granting these rights, which is typically an executor of an estate, is really just guessing what the artist would have wanted'.
The commercial motivations are transparent. Copyright holders and estates can generate substantial revenue from holographic tours and digital resurrections with minimal ongoing costs. A hologram can perform simultaneously in multiple venues, requires no security detail or travel arrangements, and never cancels due to illness or exhaustion. It represents the ultimate scalability of celebrity, transforming the deceased into endlessly reproducible intellectual property.
Yet fans remain conflicted. A study of Japanese audiences who witnessed AI Hibari, a hologram of singer Misora Hibari who died in 1989, revealed sharply divided responses. Some were moved to tears by the opportunity to experience an artist they had mourned for decades. Others described the performance as 'profaning the dead', a manipulation of memory that felt exploitative and fundamentally disrespectful. Research on audiences attending ABBA Voyage, in which the band appear as digital avatars of their younger selves, found generally positive responses, with fans expressing gratitude for the chance to see the band 'perform' once more.
The uncanny valley looms large in these resurrections. When holograms fail to achieve sufficient realism, they provoke discomfort and revulsion. Audiences are acutely sensitive to discrepancies between the spectral figure and their memories of the living artist. Poor quality recreations feel not merely disappointing but actively disturbing, a violation of the dignity owed to the dead.
The Legal Scramble
The entertainment industry's regulatory frameworks, designed for an era of analogue reproduction and clearly defined authorship, have struggled to accommodate the challenges posed by AI-generated and digitally resurrected performers. Recognising this inadequacy, legislators have begun constructing new legal architectures to protect performers' likenesses and voices.
The most significant legislative response has been the NO FAKES Act, a bipartisan bill reintroduced in both the US House and Senate in 2025. The Nurture Originals, Foster Art, and Keep Entertainment Safe Act seeks to establish a federal intellectual property right protecting individuals' voice and visual likeness from unauthorised digital replicas. If enacted, it would represent the first nationwide harmonised right of publicity, superseding the current patchwork of inconsistent state laws.
The NO FAKES Act defines a digital replica as 'a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual' in which the actual individual did not perform or in which the fundamental character of their performance has been materially altered. Crucially, the rights extend beyond living individuals to include post-mortem protections, granting heirs the authority to control deceased relatives' digital likenesses.
The legislation establishes that every individual possesses a federal intellectual property right to their own voice and likeness, including an extension of that right for families after death. It empowers individuals to take action against those who knowingly create, post, or profit from unauthorised digital copies. Platform providers receive safe harbour protections if they promptly respond to valid takedown notices and maintain policies against repeat offenders, mirroring structures familiar from copyright law.
The bill includes exceptions designed to balance protection with free speech. Bona fide news reporting, public affairs programming, sports broadcasts, documentaries, biographical works, and historical content receive exemptions. Parody and satire are explicitly protected. The legislation attempts to navigate the tension between protecting individuals from exploitation whilst preserving legitimate creative and journalistic uses of digital likeness technology.
Significantly, the NO FAKES Act makes the rights non-assignable during an individual's lifetime, though they can be licensed. This provision aims to prevent studios and labels from leveraging their bargaining power to compel artists to transfer their rights permanently, a concern that emerged prominently during the 2023 SAG-AFTRA strikes. The restriction reflects a recognition that performers often occupy positions of relative powerlessness in negotiations with corporate entities that control access to employment and distribution.
Damages for violations range from $5,000 to $750,000 per work, depending on the violator's role and intent, with provisions for injunctive relief and punitive damages in cases of wilful misconduct. The bill grants rights holders the power to compel online services, via court-issued subpoenas, to disclose identifying information of alleged infringers, potentially streamlining enforcement efforts.
California has pursued parallel protections at the state level. Assembly Bill 1836, signed into law in 2024, extends the right of publicity for deceased celebrities' heirs, making it tortious to use a celebrity's name, voice, signature, photograph, or likeness for unauthorised commercial purposes within 70 years of death. The law excludes 'expressive works' such as plays, books, magazines, musical compositions, and audiovisual works, attempting to preserve creative freedom whilst limiting commercial exploitation.
The legislative push has garnered broad support from industry stakeholders. SAG-AFTRA, the Recording Industry Association of America, the Motion Picture Association, and the Television Academy have all endorsed the NO FAKES Act. Even major technology companies including Google and OpenAI have expressed support, recognising that clear legal frameworks ultimately benefit platform providers by reducing liability uncertainty and establishing consistent standards.
Yet critics argue that the legislation remains insufficiently protective. The Regulatory Review, a publication of the Penn Program on Regulation at the University of Pennsylvania, warned that the revised NO FAKES Act has been expanded to satisfy the demands of large technology companies whilst leaving individuals vulnerable. The publication expressed concern that the bill could legitimise deceptive uses of digital replicas rather than appropriately regulating them, and that the preemption provisions create significant confusion about the interaction between federal and state laws.
The preemption language, which supersedes state laws regarding digital replicas whilst exempting statutes in existence before January 2025, has been particularly contentious. The phrase 'regarding a digital replica' lacks clear definition, creating ambiguity about which existing state laws remain effective. Many state intimate image laws and longstanding publicity statutes cover digital replicas without explicitly using that terminology, raising questions about their survival under federal preemption.
The challenge extends beyond legislative drafting to fundamental questions about the nature of identity and personhood in a digital age. Current legal frameworks assume that individuals possess clear boundaries of self, that identity is singular and embodied, and that likeness can be neatly demarcated and protected. AI-generated performers complicate these assumptions. When a synthetic entity is trained on thousands of performances by different actors, whose likeness does it represent? When a deceased celebrity's digital replica performs material they never created, who is the author? These questions resist simple answers and may require conceptual innovations beyond what existing legal categories can accommodate.
The Creativity Crisis
The proliferation of AI-generated content and synthetic performers has ignited fierce debate about the nature and value of human creativity. At stake is not merely the economic livelihood of artists but fundamental questions about what art is, where it comes from, and why it matters.
Proponents of AI art argue that the technology represents simply another tool, comparable to the camera, the synthesiser, or digital editing software. They emphasise AI's capacity to democratise creative production, making sophisticated tools accessible to individuals who lack formal training or expensive equipment. Artists increasingly use AI as a collaborative partner, training models on their own work to explore variations, generate inspiration, and expand their creative vocabulary. From this perspective, AI does not replace human creativity but augments and extends it.
Yet critics contend that this framing fundamentally misunderstands what distinguishes human artistic expression from algorithmic pattern recognition. Human creativity, they argue, emerges from lived experience, emotional depth, cultural context, and intentionality. Artists draw upon personal histories, grapple with mortality, navigate social complexities, and imbue their work with meanings that reflect their unique perspectives. This subjective dimension, grounded in consciousness and embodied existence, cannot be replicated by machines that lack experience, emotions, or genuine understanding.
Recent psychological research has revealed complex patterns in how audiences respond to AI-generated art. A study published in Frontiers in Psychology in 2025 presented participants with pairs of artworks, one human-created and one AI-generated, in both preference and discrimination tasks. The results were striking: when presented without attribution labels, participants systematically preferred AI-generated artworks over stylistically similar pieces created by humans. Simultaneously, a separate group of participants performed above chance at detecting which artworks were AI-generated, indicating a perceptible distinction between human and artificial creative works.
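An 'above chance' claim in discrimination tasks of this kind is typically established with a one-sided binomial test against 50 per cent guessing accuracy. A minimal sketch of that calculation, using hypothetical trial counts rather than the study's actual figures:

```python
from math import comb

def binomial_p_above_chance(correct, trials, p_chance=0.5):
    """One-sided binomial test: the probability of observing at least
    `correct` successes in `trials` attempts if the true accuracy
    were `p_chance` (i.e. pure guessing)."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical example: 120 correct discriminations out of 200 trials.
p = binomial_p_above_chance(120, 200)
```

With 120 correct out of 200 trials, the resulting p-value falls well below 0.01, so guessing alone would very rarely produce such a score; the published study's own trial counts and statistical procedure may of course differ.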
These findings suggest a troubling possibility: in the absence of contextual information about authorship, AI-generated art may be aesthetically preferred by audiences, even whilst they remain capable of detecting its artificial origin when prompted to do so. This preference may reflect AI's optimisation for visual appeal, its training on vast datasets of successful artworks, and its capacity to synthesise elements that empirical research has identified as aesthetically pleasing.
However, other research reveals a persistent bias against AI art once its origins are known. Studies consistently show that when participants are informed that a work was created by AI, they evaluate it less favourably than identical works attributed to human artists. This suggests that knowledge about creative process and authorship significantly influences aesthetic judgement. The value audiences assign to art depends not solely on its intrinsic visual properties but on the narrative of its creation, the perception of effort and intention, and the sense of connection to a creative consciousness behind the work.
The devaluation concern extends beyond aesthetic preference to economic and professional domains. As AI tools become more sophisticated and accessible, there is genuine fear that they may displace human artists in commercial markets. Already, companies are using AI to generate stock photography, book illustrations, album artwork, and marketing materials, reducing demand for human illustrators and photographers. Background actors and voice performers face particular vulnerability to replacement by synthetic alternatives that offer comparable quality at dramatically lower cost.
Yet the most profound threat may not be displacement but dilution. If the internet becomes saturated with AI-generated content, finding and valuing genuinely human creative work becomes increasingly difficult. The signal-to-noise ratio deteriorates as algorithmic production scales beyond what human labour can match. This creates a tragedy of the commons in the attention economy, where the proliferation of low-cost synthetic content makes it harder for human artists to reach audiences and sustain creative careers.
Defenders of human creativity emphasise characteristics that AI fundamentally cannot replicate. Human artists bring imperfection, idiosyncrasy, and the marks of struggle that enhance a work's character and emotional resonance. The rough edges, the unexpected juxtapositions, the evidence of revision and reconsideration all signal the presence of a conscious agent grappling with creative challenges. These qualities, often called the 'human touch', create opportunities for connection and recognition that algorithmic perfection precludes.
Cultural authenticity represents another domain where AI struggles. Art emerges from specific cultural contexts, drawing upon traditions, references, and lived experiences that give works depth and specificity. An AI trained on global datasets may mimic surface characteristics of various cultural styles but lacks the embedded knowledge, the tacit understanding, and the personal stake that artists bring from their own backgrounds. This can result in art that feels derivative, appropriative, or culturally shallow despite its technical proficiency.
The intentionality question remains central. Human artists make choices that reflect particular ideas, emotions, and communicative purposes. They select colours to evoke specific moods, arrange compositions to direct attention, and employ techniques to express concepts. This intentionality invites viewers into dialogue, encouraging interpretation and engagement with the work's meanings. AI lacks genuine intention. It optimises outputs based on training data and prompt parameters but does not possess ideas it seeks to communicate or emotions it aims to express. The resulting works may be visually impressive yet ultimately hollow, offering surface without depth.
Defining Authenticity When Everything Can Be Faked
The proliferation of synthetic performers and AI-generated content creates an authenticity crisis that extends beyond entertainment to epistemology itself. When seeing and hearing can no longer be trusted as evidence of reality, what remains as grounds for belief and connection?
Celebrity deepfakes have emerged as a particularly pernicious manifestation of this crisis. In 2025, Steve Harvey reported that scams using his AI-generated likeness were at 'an all-time high', with fraudsters deploying synthetic videos of the television host promoting fake government funding schemes and gambling platforms. A woman in France lost $850,000 after scammers used AI-generated images of Brad Pitt to convince her she was helping the actor. Taylor Swift, Scarlett Johansson, and Selena Gomez have all been targeted by deepfake scandals featuring explicit or misleading content created without their consent.
The scale of the problem has prompted celebrities themselves to advocate for legislative solutions. At congressional hearings, performers have testified about the personal and professional harm caused by unauthorised digital replicas, emphasising the inadequacy of existing legal frameworks to address synthetic impersonation. The challenge extends beyond individual harm to collective trust. When public figures can be convincingly impersonated, when videos and audio recordings can be fabricated, the evidentiary foundations of journalism, law, and democratic discourse erode.
Technology companies have responded with forensic tools designed to detect AI-generated content. Vermillio AI, which partners with major talent agencies and studios, employs a system called TraceID that uses 'fingerprinting' techniques to distinguish authentic content from AI-generated material. The platform crawls the internet for images that have been manipulated using large language models, analysing millions of data points within each image to identify synthetic artefacts. Celebrities like Steve Harvey use these services to track unauthorised uses of their likenesses and automate takedown requests.
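Vermillio's actual pipeline is proprietary, but the 'fingerprinting' family of techniques it belongs to can be illustrated with a toy perceptual hash: reduce an image to a coarse bit pattern, then compare patterns by Hamming distance, so that near-duplicates and light manipulations of a reference image score as close matches. A purely illustrative sketch, with plain nested lists standing in for real image data:

```python
def average_hash(pixels, size=8):
    """Fingerprint a grayscale image (2D list of 0-255 values) by
    block-averaging it down to size x size cells and recording which
    cells are brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(size):
        for j in range(size):
            # Average the pixels falling into this cell of the grid.
            rows = range(i * h // size, (i + 1) * h // size)
            cols = range(j * w // size, (j + 1) * w // size)
            vals = [pixels[r][c] for r in rows for c in cols]
            cells.append(sum(vals) / len(vals))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(a, b):
    """Count differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))
```

A small Hamming distance between a suspect image and a registered reference flags a likely derivative; production systems extract far richer features than brightness averages, but the match-by-distance structure is the same.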
Yet detection remains a cat-and-mouse game. As forensic tools improve, so too do generative models. Adversarial training allows AI systems to learn to evade detection methods, creating an escalating technological arms race. Moreover, relying on technical detection shifts the burden from preventive regulation to reactive enforcement, placing victims in the position of constantly monitoring for misuse rather than enjoying proactive protection.
The authenticity crisis manifests differently across generations. Research suggests that younger audiences, particularly Generation Z, demonstrate greater acceptance of digital beings and synthetic celebrities. Having grown up with virtual influencers, animated characters, and heavily edited social media personas, they possess different intuitions about the boundaries between real and artificial. For these audiences, authenticity may reside less in biological origins than in consistency, coherence, and the quality of parasocial connection.
Parasocial relationships, the one-sided emotional bonds that audiences form with media personalities, have always involved elements of illusion. Fans construct imagined connections with celebrities based on curated public personas that may diverge significantly from private selves. AI-generated performers simply make this dynamic explicit. The synthetic celebrity openly acknowledges its artificiality yet still invites emotional investment. For some audiences, this transparency removes the deception inherent in traditional celebrity performance, creating a more honest foundation for fan engagement.
Consumer protection advocates warn of exploitation risks. Synthetic performers can be algorithmically optimised to maximise engagement, deploying psychological techniques designed to sustain attention and encourage parasocial bonding. Without the constraints imposed by human psychology, exhaustion, or ethical consideration, AI-driven celebrities can be engineered for addictiveness in ways that raise serious concerns about emotional manipulation and the commodification of intimacy.
The question of what constitutes 'authentic' entertainment in this landscape resists definitive answers. If audiences derive genuine pleasure from holographic concerts, if they form meaningful emotional connections with synthetic performers, if they find value in AI-generated art, can we dismiss these experiences as inauthentic? Authenticity, in this view, resides not in the ontological status of the creator but in the quality of the audience's experience.
Yet this subjective definition leaves unaddressed the questions of exploitation, displacement, and cultural value. Even if audiences enjoy synthetic performances, the concentration of profits in corporate hands whilst human performers lose employment remains problematic. Even if AI-generated art provides aesthetic pleasure, the training on copyrighted material without compensation constitutes a form of theft. The experience of the audience cannot be the sole criterion for judging the ethics and social value of entertainment technologies.
Some scholars propose that authenticity in entertainment should be understood as transparency. The problem is not synthetic performers per se but their presentation as human. If audiences are clearly informed that they are engaging with AI-generated content, they can make informed choices about consumption and emotional investment. This approach preserves creative freedom and technological innovation whilst protecting against deception.
Others argue for a revival of embodied performance as a response to the synthetic tide. Live theatre, intimate concerts, and interactive art offer experiences that fundamentally cannot be replicated by AI. The presence of human bodies in space, the risk of error, the responsiveness to audience energy, the unrepeatable present-moment quality of live performance all provide value that synthesised entertainment lacks. Rather than competing with AI on its terms, human artists might emphasise precisely those characteristics that machines cannot capture.
Navigating the Future of Human Expression
The questions raised by synthetic performers and AI-generated content will only intensify as technology continues to advance. Generative models are improving rapidly, making detection increasingly difficult and synthesis increasingly convincing. The economic incentives favouring AI deployment remain powerful, as companies seek cost reductions and scalability advantages. Yet the trajectory is not predetermined.
Legal frameworks like the NO FAKES Act, whilst imperfect, represent meaningful attempts to establish boundaries and protections. Union negotiations have secured important provisions requiring consent and compensation for digital replicas. Crucially, artists themselves are organising, speaking out, and demanding recognition that their craft cannot be reduced to training data. When Whoopi Goldberg confronted the Tilly Norwood phenomenon on The View, declaring 'bring it on' and noting that human bodies and faces 'move differently', she articulated a defiant confidence: the peculiarities of human movement, the imperfections of lived bodies, the spontaneity of genuine consciousness remain irreplicable.
The future likely involves hybrid forms that blend human and AI creativity in ways that challenge simple categorisation. Human directors may work with AI-generated actors for specific purposes whilst maintaining human performers for roles requiring emotional depth. Musicians may use algorithmic tools to explore sonic possibilities whilst retaining creative control. Visual artists may harness AI for ideation whilst executing final works through traditional methods. The boundary between human and machine creativity may become increasingly porous, requiring new vocabulary to describe these collaborative processes.
What remains non-negotiable is the need to centre human flourishing in these developments. Technology should serve human needs, not supplant human participation. Entertainment exists ultimately for human audiences, created by human sensibilities, reflecting human concerns. When synthetic performers threaten to displace human artists, when digital resurrections exploit deceased celebrities without clear consent, when AI-generated content saturates culture to the exclusion of human voices, we have lost sight of fundamental purposes.
The challenge facing the entertainment industry, policymakers, and society more broadly is to harness the creative potential of AI whilst preserving space for human artistry. This requires robust legal protections for performers' likenesses, fair compensation for training data, transparency about AI involvement in creative works, and cultural institutions that actively cultivate and value human creativity.
It also requires audiences to exercise discernment and intentionality about consumption choices. Supporting human artists, attending live performances, seeking out authentic human voices amid the synthetic noise, these actions constitute forms of cultural resistance against the homogenising tendencies of algorithmic production. Every ticket purchased for a live concert rather than a holographic resurrection, every commission given to a human illustrator rather than defaulting to AI generation, every choice to value the imperfect authenticity of human creation over algorithmic perfection, these are votes for the kind of culture we wish to inhabit.
In the end, the synthetic performers are here, and more are coming. Tilly Norwood will not be the last AI entity to seek representation by Hollywood agencies. Digital resurrections of deceased celebrities will proliferate as the technology becomes cheaper and more convincing. The deluge of AI-generated content will continue to rise. But whether these developments represent an expansion of creative possibility or a diminishment of human artistry depends entirely on the choices we make now.
SAG-AFTRA's declaration that 'nothing will ever replace a human being' must become more than rhetoric. It must manifest in legislation that protects performers, in industry practices that prioritise human employment, in cultural institutions that champion human creativity, and in audience choices that affirm the irreducible value of work made by conscious beings who have lived, suffered, loved, and transformed experience into expression.
The woman who lost $850,000 to a deepfake Brad Pitt, the background actors worried about displacement by synthetic characters, the families of deceased celebrities watching their loved ones' likenesses commercialised without consent: these are not abstract policy questions. They are human stories about dignity, livelihood, memory, and the right to control one's own image and voice. The technology that makes synthetic performers possible is impressive. But it cannot match the lived reality of human artists whose creativity emerges from depths that algorithms cannot fathom, and whose work carries meanings that transcend what any machine, however sophisticated, can generate from pattern recognition alone.
We stand at a juncture. The path we choose will determine whether the 21st century is remembered as an era that amplified human creativity through technological tools, or as one that allowed efficiency and scalability to eclipse the irreplaceable value of human artistry. The machines are here. The question is whether we remain.
Sources and References
Institute of Internet Economics. (2025). The Rise of Synthetic Celebrities: AI Actors, Supermodels, and Digital Stars. Retrieved from https://instituteofinterneteconomics.org/
NBC News. (2025). Tilly Norwood, fully AI 'actor,' blasted by actors union SAG-AFTRA for 'devaluing human artistry'. Retrieved from https://www.nbcnews.com/
Screen Actors Guild-American Federation of Television and Radio Artists. (2025). Official statements on synthetic performers.
US Congress. (2025). Text – H.R.2794 – 119th Congress (2025-2026): NO FAKES Act of 2025. Retrieved from https://www.congress.gov/
US Congress. (2025). Text – S.1367 – 119th Congress (2025-2026): NO FAKES Act of 2025. Retrieved from https://www.congress.gov/
CNN Business. (2025). Celebrity AI deepfakes are flooding the internet. Hollywood is pushing Congress to fight back.
Benesch, Friedlander, Coplan & Aronoff LLP. From Scarlett Johansson to Tupac: AI is Sparking a Performer Rights Revolution.
Canadian Broadcasting Corporation. (2021). Dead celebrities are being digitally resurrected — and the ethics are murky.
The Conversation. (2025). Holograms and AI can bring performers back from the dead – but will the fans keep buying it? Retrieved from https://theconversation.com/
NPR. (2025). Could 'the next Scarlett Johansson or Natalie Portman' be an AI avatar? Retrieved from https://www.npr.org/
Reed Smith LLP. (2024). AI and publicity rights: The No Fakes Act strikes a chord. Retrieved from https://www.reedsmith.com/
The Regulatory Review. (2025). Reintroduced No FAKES Act Still Needs Revision. University of Pennsylvania Law School.
Frontiers in Psychology. (2025). Human creativity versus artificial intelligence: source attribution, observer attitudes, and eye movements while viewing visual art. Volume 16.
Frontiers in Psychology. (2024). Human perception of art in the age of artificial intelligence. Volume 15.
Interaction Design Foundation. (2025). What Is AI-Generated Art? Retrieved from https://www.interaction-design.org/
Association for Computing Machinery. (2025). Art, Identity, and AI: Navigating Authenticity in Creative Practice. Proceedings of the 2025 Conference on Creativity and Cognition.
Scientific Research Publishing. (2025). The Value of Creativity: Human Produced Art vs. AI-Generated Art.
Recording Academy. (2025). NO FAKES Act Introduced In The Senate: Protecting Artists' Rights In The Age Of AI.
Sheppard Mullin. (2025). Congress Reintroduces the NO FAKES Act with Broader Industry Support.
Representative Maria Salazar. (2024, 2025). Press releases on the NO FAKES Act introduction and reintroduction.
Congresswoman Madeleine Dean. (2024). Dean, Salazar Introduce Bill to Protect Americans from AI Deepfakes.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk