Can AI truly bridge the gap between the living and the dead? We set out to test 'deathbots', AI systems designed to simulate conversation with the deceased. Our research, published in Memory, Mind & Media, examines the ethical and emotional implications of these technologies. We became our own test subjects, interacting with digital versions of ourselves to see what these 'deathbots' can and cannot do.
The concept of 'deathbots' is not entirely new. Media theorist Simone Natale highlights their deep roots in spiritualist traditions. However, AI brings a new level of sophistication and commercial viability to these illusions. Our study, part of the Synthetic Pasts project, focused on AI services that claim to preserve or recreate a person's voice, memories, or digital presence. We uploaded our own videos, messages, and voice notes, creating 'digital doubles' to understand their inner workings.
Some 'deathbots' aim to preserve memories, helping users record and store personal stories organized by themes like childhood or family. AI then indexes the content, acting as a searchable archive. Others use generative AI to create ongoing conversations. You provide data about a deceased person, and the system builds a chatbot that can respond in their tone and style, evolving over time through machine learning.
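The archival pattern described above can be sketched in a few lines of Python. This is a minimal illustration of theme-based storage with keyword search; the `MemoryArchive` class and its `record`/`search` methods are our own hypothetical names, not any platform's actual interface, and real services would use far more sophisticated indexing.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryArchive:
    """Illustrative sketch: stories stored under themes, searchable by keyword."""

    # Maps a theme (e.g. "childhood", "family") to a list of recorded stories.
    entries: dict[str, list[str]] = field(default_factory=dict)

    def record(self, theme: str, story: str) -> None:
        """Store a personal story under a theme."""
        self.entries.setdefault(theme, []).append(story)

    def search(self, keyword: str) -> list[str]:
        """Return every stored story containing the keyword, across all themes."""
        keyword = keyword.lower()
        return [
            story
            for stories in self.entries.values()
            for story in stories
            if keyword in story.lower()
        ]


# Example use: record two stories, then query the archive.
archive = MemoryArchive()
archive.record("childhood", "Summers at my grandmother's house by the sea.")
archive.record("family", "Dad taught me to ride a bike on Elm Street.")
print(archive.search("grandmother"))
```

Even this toy version shows the design trade-off the article notes: the themes are fixed categories chosen in advance, and anything that does not match a keyword simply disappears from view.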
While some 'deathbots' present themselves as playful, the experience can feel eerily intimate. The more personalization we attempted, the more artificial it seemed. The bots often repeated our exact phrasing in stiff, scripted replies, and their tone sometimes felt incongruous, especially when discussing death. This highlighted the limitations of algorithms in handling the emotional weight of loss.
The more archive-like tools we tested offered a calmer experience, but they imposed rigid categories and left little room for nuance. Memory, in the age of AI, becomes 'conversational', shaped by interactions between humans and machines. Yet our experiments often felt flat, exposing the limits of synthetic intimacy.
Behind these experiences lies a business model. These are not memorial charities but tech startups. Subscription fees, 'freemium' tiers, and partnerships with insurers or care providers reveal how remembrance is being turned into a product. Philosophers Carl Öhman and Luciano Floridi argue that the digital afterlife industry operates within a 'political economy of death', where data continues to generate value long after a person's life ends.
The promise of these systems is a kind of resurrection: the reanimation of the dead through data. They offer to return voices, gestures, and personalities, not as memories recalled but as presences simulated in real time. This 'algorithmic empathy' can be persuasive and moving, but it exists within the limits of code, altering the experience of remembering and smoothing away ambiguity and contradiction.
These platforms demonstrate a tension between archival and generative forms of memory. They normalize certain ways of remembering, prioritizing continuity, coherence, and emotional responsiveness while also producing new, data-driven forms of personhood. As media theorist Wendy Chun observes, digital technologies often conflate 'storage' with 'memory', promising perfect recall while erasing the role of forgetting.
In this sense, digital resurrection risks misunderstanding death itself, replacing the finality of loss with the endless availability of simulation, where the dead are always present, interactive, and updated. While AI can help preserve stories and voices, it cannot replicate the living complexity of a person or a relationship. The 'synthetic afterlives' we encountered are compelling precisely because they fail, reminding us that memory is relational, contextual, and not programmable.