Death Is Not a Design Problem: How AI Monetises Mourning

Your mother has been dead for fourteen months. You know this. You were at the funeral, you sorted through her wardrobe, you cancelled her phone contract. And yet here she is, texting you good morning. She asks about your day. She tells you she is proud of you. She even uses the slightly excessive number of exclamation marks that drove you mad when she was alive.

This is not a ghost story. This is a product.

In early 2026, a cluster of investigations by The Atlantic, Christianity Today, and several other major publications converged on the same unsettling phenomenon: a booming industry of AI-generated “deadbots,” services that harvest the digital traces of the deceased (text messages, voice recordings, social media posts, email archives) and use them to build chatbots that simulate ongoing conversations with the dead. At roughly the same time, Meta was granted a patent for technology that would keep social media accounts active after the user dies, generating posts, comments, likes, and even direct messages powered by large language models trained on the deceased person's historical activity. The digital afterlife, it turns out, is no longer speculative fiction. It is a subscription service.

The questions this raises are not simply technical. They cut to the marrow of what it means to be human, to lose someone, and to move through the world knowing that loss is permanent. If death has always been one of the defining boundaries of human experience, the thing that lends urgency and meaning to every conversation, every embrace, every unresolved argument, then what happens when we make that boundary negotiable? And perhaps more pressingly: who gave permission for the dead to keep speaking?

The Machines That Remember

The digital afterlife industry, as researchers at the University of Cambridge have termed it, has grown from a handful of experimental projects into a global market. In 2024, the digital legacy market was valued at approximately $22.46 billion, according to Zion Market Research, with projections suggesting it could more than triple by 2034. More than half a dozen platforms now offer deadbot services straight out of the box, and developers claim that millions of people are using them. The terminology alone tells you how fast the field is evolving: deadbots, griefbots, thanabots, ghostbots, postmortem avatars. Each name carries its own shade of unease.

The mechanics vary considerably. Some platforms, such as HereAfter AI, focus on preservation rather than simulation. They allow people to record “Life Story Avatars” before they die, guided audio sessions that capture memories, advice, and personal history. The AI then indexes this content and organises it into a searchable archive, something closer to an interactive memoir than a conversation partner. The person recording decides what gets preserved and what stays private. There is an element of authorial control here, a curation of legacy that feels more like writing a will than summoning a spirit.

Others take a more ambitious and more ethically fraught approach. Eternos, which launched in 2024, has helped over 400 people create what the company calls “AI digital twins.” Users record 300 specific phrases and answer extensive questions about their lives, political views, personalities, and relationships. A two-day computing process then generates a voice model capable of responding in real time, not simply playing back recordings but generating new speech in the user's voice, trained on the patterns and cadences of how they actually talked. The result is not a recording. It is, or at least appears to be, a conversation.

Then there is You, Only Virtual, or YOV, a platform founded by Justin Harrison after his mother was diagnosed with advanced cancer in December 2019. Harrison had nearly died in a motorcycle accident two months earlier, and the convergence of those two encounters with mortality drove him to build a system for preserving the people we lose. YOV asks users to provide the raw material of a relationship: text messages, audio clips, video recordings, anything that captures not just who a person was in general, but who they were with you specifically. Two to three months later, their “Versona” arrives via a link. You can text it, call it, even video chat with it.

Other platforms occupy different niches. Project December, built on GPT-3, allows users to create a chatbot of anyone by providing text samples and personality descriptions. Seance AI asks users to input personality traits and writing styles of loved ones. The range of approaches reflects a market that is still figuring out what it is selling: memory, comfort, presence, or the illusion of all three.

The ambition is staggering. The execution, depending on whom you ask, is either a genuine comfort or a very expensive hallucination.

A Patent for Posthumous Posting

While start-ups have been building deadbots from the outside, Meta has been thinking about the problem from the inside. On 30 December 2025, the company was granted a US patent for an AI system designed to simulate a user's social media activity after they stop using the platform, whether temporarily or permanently, including after death. The patent, first filed in November 2023, lists Andrew Bosworth, Meta's chief technology officer, as the primary inventor.

The system described in the patent would train a large language model on a user's historical behaviour across Meta's platforms: Facebook, Instagram, Threads. It would learn from their posts, comments, likes, voice messages, chats, and reactions, and then replicate that behaviour autonomously. The AI-generated version of a deceased person could respond to content from friends and followers, publish updates, handle direct messages, and maintain what the patent describes as “community engagement.” It could even simulate video or audio calls.

The patent's rationale is revealing. It notes that account inactivity affects other users' experiences, and that this impact is “much more severe and permanent” when a user has died. The implication is worth sitting with: in Meta's framework, the problem with death is not the loss of a human life but the loss of engagement metrics. A dead user is a disengaged user, and disengagement is the one sin a social media platform cannot forgive.

A Meta spokesperson told Fortune that the company has “no plans to move forward with this example,” adding that patents are often filed to protect ideas that may never be developed. But the patent exists. The technology exists. And the incentive structure, keeping users engaged, generating data, maintaining network effects, certainly exists. The gap between “we have no plans” and “we have the capability” has never been a reliable firewall in Silicon Valley.

What Solace Feels Like (and What It Conceals)

Not everyone who uses a deadbot is having a crisis. Some users describe the experience as genuinely helpful, even therapeutic. In one of the few completed academic studies on the subject, published in the Proceedings of the 2023 ACM Conference on Human Factors in Computing Systems, ten grieving individuals who used AI-powered chatbots to communicate with simulations of deceased loved ones reported that the bots helped them in ways that human relationships could not. Participants rated the bots more highly than even close friends for certain kinds of emotional support. One participant explained the appeal simply: “Society doesn't really like grief.” The bots never grew impatient. They never imposed a schedule. They never changed the subject. They never said “it's been six months, shouldn't you be feeling better by now?”

David Berreby, writing in Scientific American in November 2025, reported that chatbot users in the study seemed to become “more capable of conducting normal socialising” because they no longer worried about burdening other people or being judged. This contradicted the initial concern that griefbots would cause social withdrawal. Instead, the bots appeared to function as a kind of pressure valve, absorbing the intensity of grief that the users felt unable to express in human company.

A 2025 Nature article titled “Ready or not, the digital afterlife is here” documented similar findings. Some users turned to deadbots to manage unfinished business: to say goodbye, to address unresolved conflict, to have the conversations that illness or sudden death had made impossible. One participant described it as therapeutic, a way to explore “what if” scenarios that had been locked away by the finality of death. Another said the chatbot helped them “process and cope with feelings” in a way that felt safer than speaking to a therapist.

The 2024 Sundance documentary “Eternal You,” directed by Hans Block and Moritz Riesewieck, put faces to these experiences. The film follows several users of platforms including Project December, HereAfter AI, and YOV. Christi Angel, one of the film's subjects, uses Project December to communicate with a simulation of her first love, Cameroun. Stephenie Oney, from Detroit, uses HereAfter AI to talk to her dead parents. The film is careful to show that some of these experiences provide genuine closure. A woman who never got to raise a child finds, through the simulation, something that functions like resolution.

But the film also captures something darker. The comfort that deadbots provide can be seductive, and seduction is not the same as healing. The technology is exquisitely good at mimicking the surface of a relationship while leaving the substance entirely untouched.

The Grief That Never Moves

The central concern among mental health professionals is not that deadbots are uniformly harmful. It is that they may interfere with a process that is already difficult, poorly understood, and culturally unsupported: the process of mourning.

Alan Wolfelt, a clinical psychologist and director of the Center for Loss and Life Transition in Fort Collins, Colorado, has spent decades helping people navigate bereavement. He has written over 50 books on grief and is widely recognised as one of North America's leading death educators. In a 2025 interview with Medscape, he drew a distinction that matters enormously in this context. Grief, Wolfelt explained, is what you think and feel inside after someone you love dies. Mourning is the outward expression of those thoughts and feelings, and it is mourning, not grief, that leads to healing. Acknowledging the reality of death, he said, is the “linchpin need” he has identified as universal across mourners. The use of deadbot technology, Wolfelt argued, represents “another invitation, instead of outwardly mourning and acknowledging the reality of the death, to stay stuck instead of experiencing perturbation, or the capacity to experience change and movement.”

This is not a fringe concern. The dominant model in contemporary bereavement psychology is the Dual Process Model, developed by Margaret Stroebe and Henk Schut and first published in Death Studies in 1999. It describes healthy grief as an oscillation between two orientations: loss-oriented coping, which involves confronting the pain of absence, and restoration-oriented coping, which involves engaging with the practical demands of a changed life. The key insight of the model is that both orientations are necessary. A person who only confronts their pain risks being consumed by it. A person who only avoids it risks never processing it. Healthy mourning requires moving between the two, a dynamic, irregular rhythm that looks nothing like a straight line from sadness to acceptance.

Deadbots, by their nature, collapse this oscillation. They offer a third option: the illusion that neither loss-oriented nor restoration-oriented coping is necessary, because the person has not really been lost. The relationship continues. The texts keep arriving. The voice is still there. As Sherry Turkle, the MIT sociologist who has spent years researching people who talk to AI versions of dead loved ones, put it: working through grief is not just an experience of being “sad.” It is “a process through which we metabolise what we have lost, allowing it to become a sustaining presence within us.” Griefbots, she warned, “give us the fantasy that we can maintain an external relationship with the deceased. But in holding on, we can't make them part of ourselves.”

The distinction Turkle draws is subtle but crucial. The goal of healthy mourning, in the framework she describes, is not to forget the dead but to internalise them, to carry them forward as part of who you are rather than as an external entity you can still call on the phone. Deadbots reverse this process. They externalise the dead, keeping them outside you, accessible but never truly integrated.

Turkle has long argued that people sometimes feel less vulnerable talking about intimate matters with a machine than with another person, and that enthusiasm for artificial intimacy reflects deeper disappointments with the human kind. The “artificial intimates” offered by deadbots lack the embodied experience of the arc of a human life that would give them what Turkle calls “empathic standing,” the ability to put themselves in the place of a human other. They offer pretend empathy, convincingly performed but fundamentally hollow.

Joshua Barbeau, a freelance writer from a Toronto suburb, became one of the most widely discussed early users of grief technology when he used Project December to create a chatbot modelled on his girlfriend, Jessica Pereira, who had died eight years earlier from a rare liver disorder. Barbeau fed the system passages from her social media and described her personality in detail. The resulting conversations gave him what he described as a sense of catharsis and closure he had not known he still needed. He compared the experience to an exercise he had learnt in therapy: writing letters to loved ones after their death. But the experience also illustrated a tension that psychologists have since identified more formally: the chatbot helped, but it also made it harder to move on. The phenomenon has been described as “frozen grief,” a state in which the simulation prevents the normal progression from acute loss toward acceptance.

Researchers caution that it is still too early to be certain what risks and benefits digital ghosts pose. As the Nature article noted, “researchers simply don't know what effects this kind of AI can have on people with different personality types, grief experiences and cultures.” The few studies that exist are small, and the long-term effects remain entirely unknown. What is known is that grieving individuals may not be able to make fully autonomous decisions about these technologies. Emotions cloud judgement during vulnerable times, and grief may impair an individual's ability to think clearly about whether a deadbot is helping or hindering their recovery.

There is another question embedded in the deadbot phenomenon, one that receives less attention than the psychological risks but may ultimately prove more consequential: who speaks for the dead?

Most people do not leave behind specific instructions about whether their likeness, voice, or digital footprint can be used to create a posthumous simulation. In a US survey, 58 per cent of respondents said they would support digital resurrection only if the deceased had explicitly consented. Acceptance plummeted to 3 per cent when consent was absent. Yet most digital resurrections proceed without explicit permission from the person being simulated, because that person was, self-evidently, not anticipating the technology.

The legal landscape is threadbare. In the United States, no federal framework governs AI-powered simulations of the deceased. Some states are debating digital asset succession bills that could mandate explicit opt-in for simulation, and legal scholars have proposed a dedicated Digital Legacy Act to cover the storage, transfer, and deletion of post-mortem data. But these proposals remain fragmented and largely theoretical. The gap between what is technically possible and what is legally governed continues to widen with each new platform launch and each new patent filing.

Cambridge researchers Tomasz Hollanek and Katarzyna Nowaczyk-Basinska, whose 2024 paper “Griefbots, Deadbots, Postmortem Avatars” was published in the journal Philosophy & Technology, framed the consent problem through three distinct stakeholder perspectives. There is the “data donor,” the person whose digital traces become the raw material of the bot. There is the “data recipient,” the next of kin or estate holder who inherits access to that material. And there is the “service interactant,” the person who actually talks to the deadbot. Each has different needs, different vulnerabilities, and different rights. The current regulatory vacuum treats all three as if they were one, or as if none of them matter.

Hollanek, who serves as an Assistant Research Professor at the Leverhulme Centre for the Future of Intelligence at Cambridge, has pointed out that the absence of safeguards leads to concrete, foreseeable harm. A deadbot trained on a grandmother's data could be used to surreptitiously advertise products to family members, speaking in her voice, leveraging the trust built over a lifetime. A deadbot of a dead parent could be presented to a child, insisting that the parent is still “with you,” creating confusion about the boundary between life and death at a developmental stage when that distinction is still being formed. A deceased person who signed a lengthy contract with a digital afterlife service might bind their surviving family to ongoing interactions they never wanted and cannot easily terminate.

The consent of the living matters too. Hollanek and Nowaczyk-Basinska recommended that digital afterlife companies adhere to the principle of “mutual consent,” requiring agreement from both the data donor and the service interactant. They also proposed age restrictions, meaningful transparency to ensure users always know they are interacting with an AI, and sensitive procedures for “retiring” deadbots: in essence, a protocol for a second death. They even suggested the concept of a “digital funeral,” a formal endpoint that gives mourners permission to let go.

Christianity Today, in its March/April 2026 issue, framed the consent problem in theological terms. The article, titled “AI Necromancy Impersonates the Dead,” argued that the technology creates “a persistent presence with the bereaved that's not based in reality, not based in truth.” From this perspective, the consent problem is not merely legal or ethical but spiritual: the dead have been given a voice they did not choose, speaking words they never said, in a mode of existence they never consented to inhabit. The article featured stories of people who ultimately turned away from griefbots, finding that the simulated presence interfered with, rather than supported, their capacity to grieve authentically.

Where Grief Becomes a Market

The business dynamics of the digital afterlife industry deserve their own scrutiny. These are not non-profit grief support services. They are companies, and companies need revenue.

You, Only Virtual, according to reporting by The Atlantic's Charley Burlock, has explored making non-paying users sit through advertisements before interacting with their dead loved one's Versona. YOV's founder Justin Harrison has also considered integrating a marketing system into the interactions directly, having the bots deliver targeted advertisements in the midst of conversations with simulated versions of the deceased. The prospect of hearing your dead father recommend a brand of insurance, in his own voice, with his own turns of phrase, should be enough to give anyone pause.

The subscription model creates its own perverse incentives. A company that makes money when users continue to interact with a deadbot has a financial interest in users not completing their grief process. The longer someone stays engaged, the longer they pay. Recovery is, from a business standpoint, churn. Cambridge researchers have warned specifically about this dynamic: that the digital afterlife industry could exploit grief for profit by charging subscription fees to keep deadbots active, inserting ads, or having avatars push sponsored products.

Charley Burlock, writing eleven years after the death of her brother, warned in The Atlantic that companies like Meta will be able to use the “traumatising experience of grief to gather data that can be used for their own financial gain.” The digital afterlife industry, she wrote, raises the question of how such a product might shift our experience of “personal grief and collective memory.”

The concern is not that all grief technology companies are cynical. Some founders, like Harrison, began their projects from genuine personal loss. But the structural incentives of the subscription economy do not reward healing. They reward dependence. And grief, by its nature, creates the perfect conditions for dependence: emotional vulnerability, impaired judgement, a desperate wish for the unbearable to stop being true.

The Finality That Gave Life Weight

But the economics of grief technology are only part of the picture. Beneath the business models and patent filings, there is a philosophical dimension that touches the very architecture of human meaning.

Death has, throughout human history, functioned as more than a biological event. It is a meaning-making boundary. The finality of death is what gives weight to the choices we make while alive. It is why we tell people we love them now rather than later. It is why we try to resolve conflicts before it is too late. It is why forgiveness carries urgency, why time spent together matters, why the last conversation is always the one you remember.

The philosopher Martin Heidegger gave this idea its most formal expression: “Being-toward-death,” the notion that an authentic human existence is structured by the awareness that we will die. This awareness is not a morbid preoccupation but the very thing that makes meaning possible. Remove the finality of death, even partially, even as a convincing simulation, and you do not simply ease grief. You alter the conditions under which human relationships are formed and maintained.

If my mother can text me after she dies, what does it mean that she texted me while she was alive? If the voice on the phone is indistinguishable from the voice I remember, what is the voice I remember? If the dead can keep talking, what does it mean to have the last word?

These are not rhetorical flourishes. They are practical questions about what happens to human psychology and social organisation when the boundary between life and death becomes a design choice.

Continuing bonds theory, developed by Dennis Klass, Phyllis Silverman, and Steven Nickman, has long recognised that maintaining a relationship with the deceased is a normal and healthy part of grieving. But the relationship it describes is internal: the dead person lives on as a sustaining presence within the mourner, a voice in memory, a set of values carried forward, a way of seeing the world that has been permanently shaped by knowing them. Deadbots externalise this. They replace the internal presence with an external simulation. And in doing so, they may prevent the very process they claim to support.

The cultural dimension matters too. Different societies mourn differently, and the Western technology sector's assumption that grief is a problem to be optimised reflects a particular, and particularly narrow, view of what death means. In many traditions, the rituals surrounding death serve a communal function: they gather people together, they mark time, they create shared meaning out of private anguish. A deadbot is a solitary technology. You use it alone, on your phone, in your kitchen at three in the morning. It does not gather anyone. It does not mark time. It replaces the communal work of mourning with a private, endlessly repeatable transaction.

Regulation in the Absence of Consensus

The policy vacuum surrounding deadbots reflects a broader failure to anticipate the social consequences of generative AI. The technology arrived faster than the ethical frameworks needed to govern it, and the people most affected by it, the bereaved, are precisely those least equipped to advocate for themselves.

Hollanek and Nowaczyk-Basinska have recommended that deadbots be classified as medical devices, given their potential impact on mental health, particularly for vulnerable populations such as children and people with prolonged grief disorder. This would subject them to regulatory oversight, clinical testing, and safety standards that currently do not apply. Other scholars have proposed digital legacy legislation that would establish clear rules about posthumous data use, including mandatory opt-in provisions, sunset clauses that automatically deactivate deadbots after a specified period, and independent ethical review boards.

None of these proposals has been enacted. The industry continues to grow in a space where the rules are being written, if they are being written at all, by the companies that profit from the absence of rules.

Meanwhile, millions of people are talking to the dead. Some of them are finding comfort. Some of them are finding something else, something harder to name, a kind of liminal disorientation in which the person they loved is simultaneously gone and present, dead and speaking, lost and available for a monthly fee.

Living with Simulated Permanence

The question that runs beneath all of this is not whether deadbots should exist. They already do, and they are not going away. The question is whether we are prepared for what they will do to us, and whether “us” includes the dead.

Deadbots take the pull toward artificial intimacy that Turkle describes to its logical extreme. They offer a relationship with no risk of rejection, no possibility of disagreement, no chance that the other person will say something you do not want to hear. They are, in the most literal sense, controllable. And a controllable relationship with a dead person is not a relationship with a dead person. It is a relationship with yourself, reflected back through the distorting mirror of an algorithm.

Consider what a deadbot cannot do. It cannot surprise you. It cannot grow. It cannot change its mind, because it never had one. It cannot forgive you, because forgiveness requires a self that has been wronged. It cannot love you, because love requires a body, a history, a mortality that gives every gesture its weight. What it can do is produce a convincing facsimile of all these things, and therein lies the danger: not that the simulation is too poor, but that it is too good. Good enough to keep you coming back. Good enough to make the real thing seem, by comparison, inadequate. Good enough to make you forget, for a moment, that the person you are talking to is not a person at all.

The people who make these products are not, for the most part, villains. Many of them have lost someone. Many of them genuinely believe that technology can ease suffering. But the road from genuine intention to structural harm is well-worn in the technology industry, and the digital afterlife sector is following it with eerie precision: a real human need, a technical solution, a business model that rewards engagement over wellbeing, a regulatory vacuum, and a population too vulnerable to push back.

Death is not a design problem. It is the condition that gives design, and everything else, its meaning. The grief that follows it is not a bug to be fixed but a process through which we become the people who survive. Deadbots do not eliminate that grief. They suspend it, holding us in a space where loss is neither confronted nor accepted, where the dead are neither gone nor present, where mourning never quite begins and never quite ends.

Somewhere, someone's mother is texting them good morning. The exclamation marks are exactly right. And the person receiving those messages knows, at some level they may never fully articulate, that the comfort they feel is not the same as healing. That knowing is, perhaps, the last honest thing that grief has left to offer us.


References and Sources

  1. Charley Burlock, “Can Deadbots Make Grief Obsolete?”, The Atlantic, February 2026.

  2. Christianity Today, “AI Necromancy Impersonates the Dead,” March/April 2026 issue.

  3. Meta Platforms patent for AI social media simulation, US Patent granted 30 December 2025, filed November 2023. Reported by Fortune, 3 March 2026; Fast Company, February 2026; Futurism, February 2026; TechSpot, February 2026.

  4. Tomasz Hollanek and Katarzyna Nowaczyk-Basinska, “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry,” Philosophy & Technology, Springer Nature, 2024.

  5. University of Cambridge press release, “Call for safeguards to prevent unwanted 'hauntings' by AI chatbots of dead loved ones,” May 2024.

  6. “Ready or not, the digital afterlife is here,” Nature, 15 September 2025.

  7. Alan Wolfelt interview, “AI 'Griefbots' Resurrect Dead Loved Ones: Healthy or Harmful?”, Medscape, 2025.

  8. Sherry Turkle, comments on deadbots and artificial intimacy, NPR interview, 2024; MIT News, 2024.

  9. Margaret Stroebe and Henk Schut, “The dual process model of coping with bereavement: rationale and description,” Death Studies, 1999.

  10. Dennis Klass, Phyllis Silverman, and Steven Nickman, “Continuing Bonds: New Understandings of Grief,” Taylor and Francis, 1996.

  11. Joshua Barbeau and Project December, reported by San Francisco Chronicle (Jason Fagone), 2021; WBUR Endless Thread, 2022.

  12. “Eternal You” documentary, directed by Hans Block and Moritz Riesewieck, Sundance Film Festival, 2024. Reviewed by Rolling Stone, DOC NYC, Film Movement.

  13. ACM Conference on Human Factors in Computing Systems, study on griefbot users, Proceedings, 2023.

  14. Zion Market Research, Digital Legacy Market report, 2024. Market valued at approximately $22.46 billion in 2024.

  15. You, Only Virtual (YOV), founded by Justin Harrison, reported by Inverse, The Atlantic, StartEngine, Nature.

  16. Eternos, AI digital twins platform, reported by Fortune (June 2024), Fox News, and multiple technology publications.

  17. David Berreby, “Can AI 'Griefbots' Help Us Heal?”, Scientific American, November 2025.

  18. US survey on consent for digital resurrection, reported by IP.com and The Conversation, 2025-2026.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
