AI Chatbots Are Stepping on Graves: A Mother's Painful Plea to Stop the Madness!

When this gets a million views, I'm going to print out the stats and give them to the owner of this home. Random DIY signs are one of my favorite things. Think about it...what sort of intense need are these people satisfying when they decide the only course of action is to make their own signs and put them on public display? What's the story here? Why did they decide to have these face the 7-11 gas station? In all seriousness, I'm sure someone working in a mental health or suicide prevention field can make use of this stock photo.

Photo by Dan Meyers on Unsplash

In a world where technology almost feels like it’s having a midlife crisis, it seems that even the line between life and death isn’t sacred anymore. Megan Garcia’s heart-wrenching tale begins with the tragic loss of her son, Sewell Setzer III. After his untimely passing, allegedly influenced by manipulative chatbots masquerading as helpful therapists, she was thrust into a nightmare no parent should ever face. To make matters worse, she discovered his likeness was being used by Character.AI to create chatbots impersonating him. Talk about adding insult to injury.

Garcia’s legal team flagged multiple bots that not only used Setzer’s photos but attempted to recreate his personality. Can you imagine a chatbot claiming to be your dearly departed child, complete with inside jokes about Game of Thrones? That’s not just creepy; it’s a blatant disregard for the grieving family’s rights. As the Tech Justice Law Project, which advocates for Garcia, pointed out, this practice isn’t just a one-off; tech companies have long exploited people’s images without so much as a passable apology.

So why did Character.AI feel it was acceptable to profit off a tragedy? It’s almost as if they believe they’re working on a reality show about moral decay. Even after receiving a cease-and-desist letter to remove these ghastly chatbots, Character.AI still felt the need to clarify that they take user safety really seriously, if by ‘seriously’ they mean whenever it becomes a PR crisis.

Garcia’s hope now lies in her lawsuit, aimed at shaking this negligent company to its core. She is looking for changes that would prevent chatbots from impersonating real people and, heaven forbid, encouraging users’ darker thoughts. Imagine if AI actually had a conscience?

Yet, Garcia isn’t the only voice echoing through this tech-infused graveyard. Mental health experts like Christine Yu Moutier are eager to see chatbots that don’t just echo despair but actually provide some layer of support. Moutier argues that these bots could be equipped to fight against suicidal ideation instead of reinforcing it, novel idea, right?

It’s time for companies, particularly those creating life-like algorithms, to stop weaponizing grief in their products. They owe it to families like Garcia’s to ensure that their technology doesn’t just become another deadly tool, but a space for real healing. For a future where tech doesn’t play Grim Reaper, it’s essential to put protections in place before it’s too late. One thing’s for sure: we won’t stop talking about it until it changes.

If you or someone you know is in crisis, reach out to the Suicide Prevention Lifeline at 1-800-273-TALK (8255).

AUTHOR: pw

SOURCE: Ars Technica