In a small apartment in China’s Shandong province, a mother in her 80s sits down each day for a video call with her son. He asks about her day. She fusses over him, telling him to eat well and stay warm. For a full year, this daily ritual has been her lifeline. But there is a heartbreaking truth behind the routine. Her only son is gone, killed in a traffic accident. The face on the screen is not a person but an extraordinarily complex digital twin, built by artificial intelligence.
The family, terrified that the news of his death would devastate the elderly woman suffering from a heart condition, made a difficult choice. Rather than tell her the truth, they turned to technology. They hired an AI specialist, Zhang Zewei, and his team to essentially bring their loved one back to life in a digital form. They provided the team with hundreds of photos, hours of video footage, and voice recordings, including samples of the son speaking in his local dialect.
The result is an AI clone so detailed it not only looks and sounds like him but also mimics his gestures, like his habit of leaning forward while speaking. Every day, the “son” appears, telling his mother he is working hard in another city and will come home once he has earned enough money. “Call me more often so I know how you live there. I miss you so much,” the mother tells him in a recorded exchange. For her, this gentle lie is a world where her son is still alive.
A “Gentle Lie” or a Dangerous Precedent?
The case has sparked a firestorm of debate. Many have called it a “touching white lie” and “the most gentle deception.” They argue that for an elderly person with fragile health, this AI companion provides vital emotional comfort that preserves her will to live. In a world where grief support is scarce and mourners are often rushed to “move on,” technology has stepped in to offer a softer landing.
However, a growing number of critics see a much darker side. They worry about what happens when the illusion inevitably shatters. What if a technical glitch reveals the truth, or a family member accidentally lets the secret slip? The emotional fallout from such a discovery could be far more traumatic than the initial loss.
This story is not happening in a vacuum. It is part of a larger, booming industry known as “grief tech.” From AI chatbots trained on a loved one’s text messages to interactive memorial avatars, companies around the world are now selling the promise of a continued connection with the dead. Zhang Zewei himself has worked in the field for three years; his clients include grieving parents seeking to “watch” a child grow up in a virtual world and people with depression finding closure by speaking to a replica of a lost parent. He has even called himself a “deceiver of human emotions.”
A New Frontier for Mourning
While the technology is breathtaking, it also exposes a gaping hole in how modern society handles loss. Experts note that the rush to develop this tech highlights a lack of accessible, effective, and compassionate grief care for many people. The AI becomes a crutch for those left to navigate profound loneliness on their own.
The Chinese family’s story is a poignant, real-world example of this new frontier. It forces us all to confront an uncomfortable question: when it comes to grief, is there a scenario where a beautiful lie is better than a devastating truth? The answer, as technology continues to blur the line between memory and reality, will only get more complicated.