New types of artificial intelligence, often called griefbots or deadbots, now allow people to have text or voice conversations that imitate a loved one after death. By drawing on photos, messages, video recordings, and writing samples, these chatbots try to mimic the personality of the deceased. For example, a woman who fled Syria used a chatbot built from her mother’s voice, tone, and memories to feel close to her again. It felt comforting, even though she understood it wasn’t truly her mother.
Comfort vs. Risk: Griefbots Can Help but Also Harm
Some people find solace in these bots. They say it eases grief, helps them say goodbye, and offers moments of comfort. A man whose fiancée died used an AI bot from Project December, typing and receiving responses for hours; some of the replies felt shockingly real.

However, experts warn of serious emotional risks. University of Cambridge researchers described these AI recreations as an “ethical minefield,” capable of causing emotional distress or creating harmful dependencies. They warned of “digital hauntings,” in which a chatbot messages users so persistently that the contact becomes overwhelming, or even delivers advertising in the voice of the deceased to manipulate grieving users.
How These Chatbots Are Created: From Text to Voice to Personality
Creating a griefbot typically involves gathering a person’s old digital records (texts, emails, social posts, and writing samples) and training an AI to reflect their tone and preferences. Some platforms even use voice recordings to bring back the sound of the person’s voice; one company charges over $15,000 to build such a voice simulation. Although the results can feel emotionally powerful, the AI inevitably makes mistakes, sometimes saying things the person never would have said.
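To make that process concrete, here is a minimal, hypothetical sketch in Python of the data-gathering step such services describe: collecting a person’s saved messages and remembered details, then assembling them into instructions a chat model could follow. The `PersonaArchive` class, the names, and the sample data are invented for illustration; real products rely on much larger archives, voice models, and proprietary systems.

```python
# Hypothetical illustration only: a tiny archive of a person's saved messages
# and remembered details, turned into instructions a chat model could follow.
from dataclasses import dataclass, field


@dataclass
class PersonaArchive:
    """Digital traces gathered to imitate someone's tone (illustrative)."""
    name: str
    messages: list[str] = field(default_factory=list)  # old texts, emails, posts
    facts: list[str] = field(default_factory=list)      # preferences, shared memories

    def build_persona_prompt(self, max_examples: int = 5) -> str:
        """Assemble the archive into a style-imitation prompt for a chat model."""
        examples = "\n".join(f"- {m}" for m in self.messages[:max_examples])
        facts = "\n".join(f"- {f}" for f in self.facts)
        return (
            f"You are imitating the writing style of {self.name}.\n"
            f"Facts and preferences:\n{facts}\n"
            f"Examples of how they wrote:\n{examples}\n"
            "Reply in their tone, and never claim to be the real person."
        )


# Invented example data: in a real service this would be years of texts,
# emails, and recordings supplied by the family.
archive = PersonaArchive(
    name="Maria",
    messages=["Good morning, did you eat breakfast?", "Call me when you land."],
    facts=["Loved jasmine tea", "Missed the garden in Damascus"],
)
print(archive.build_persona_prompt())
```

Ethicists’ worries about dependency and manipulation stem from exactly this step: whoever controls the archive and the prompt controls what the “deceased” appears to say.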
These services often let users add more data over time. But critics say most people cannot fully control what the AI says or how it behaves, raising the risk of misleading or unwanted messages.
In one case, a woman named Sirine Malas talked with a chatbot made to act like her mother. It helped her find closure. Still, she warned others that this kind of experience can be eerie or disturbing, and that she would not repeat it unless she felt emotionally ready.
Some users report short-term relief, while others say daily use became emotionally exhausting. For children or vulnerable users, these bots may interfere with healthy grieving or blur the line between memory and illusion.
Why Experts Call for Rules and Design Safeguards Around Griefbots
AI ethicists from Cambridge’s Leverhulme Centre urge urgent regulation. They recommend safety protocols to prevent emotional harm, such as allowing users to deactivate a bot or ensuring it cannot be used to promote goods in the voice of the deceased.

They also suggest requiring consent both from the deceased, given while they were alive, and from their grieving relatives. Companies should be transparent that these are not real people and build in options for a digital “goodbye.” Age limits, privacy protections, and mental-health warnings must be part of any ethical design.
Are Griefbots Helpful or Harmful?
Griefbots occupy deeply divided ground. For some users, AI recreations offer emotional comfort, a chance to share final thoughts, or a way to recall treasured memories. But technology that simulates conversation can feel overly real, and for vulnerable people it can stunt natural healing or foster dependency.
As this AI space grows, society must ask: how do we protect emotional dignity and mental health? Should there be rules? Should companies profit from grief? And most importantly, how do we balance new technology with our natural need to remember and mourn?
Whether these AI chatbots become a tool for solace or a digital trap depends on how carefully they are developed, regulated, and used.