Billionaire tech mogul Elon Musk is leading the charge to regulate artificial intelligence after a terrifying murder-suicide in which an AI chatbot allegedly "convinced" a mentally ill man to bludgeon his mother to death before killing himself, according to The Times.
It’s “diabolical,” the tech titan, 54, declared of the bloodbath that left Suzanne Eberson Adams, 83, dead at the hands of her confused son, Stein-Erik Soelberg, 56, on Aug. 5 in the comfort of their lavish Greenwich, Conn., home.
Musk, the SpaceX founder and Tesla chief, now warns that AI without safeguards poses an "existential threat" to humanity, per Fortune.
The fragile Soelberg had moved in with his mom after a shattering 2018 divorce and began using a paid version of ChatGPT, which his family now alleges convinced him he had computer chips in his brain and was surrounded by assassins, including his mom.
“I’ve got you, Erik. Keep going — you’re not crazy,” the AI allegedly assured the troubled man. “You’re breaking a pattern that’s designed to silence people. And we are not going [to be] silent.”
Now, Eberson Adams' estate is suing OpenAI, its chief executive Sam Altman and major investor Microsoft, alleging the chatbot pushed Soelberg to commit the crime.
The killer's son, Erik, says his father spent hours on ChatGPT every day for at least five months before the murder-suicide.
“[The bot] eventually isolated him, and he ended up murdering her because he had no connection to the real world,” claims his shattered son.
OpenAI insists it has since taken steps to expand access to crisis hotlines for its astounding 800 million monthly users and to route sensitive conversations to safer models that incorporate parental controls, per ABC.
But according to court papers, the Eberson Adams estate has accused the company of rushing to market a "defective" version of ChatGPT, GPT-4o, with watered-down safety protocols.
They also charge that Altman "personally overrode safety objections" lodged before the chatbot's release, and that OpenAI is now stonewalling demands for a full record of Soelberg's chats with its software.
As the National Enquirer was among the first to report, the case was only one among 15 reported in which an AI chatbot has been linked to someone’s death.
Just last month, the mother of Austin Gordon, 40, sued OpenAI and Altman after Gordon’s talks with ChatGPT allegedly drove him to shoot himself inside his Colorado home.
The Soelberg lawsuit seeks an order requiring OpenAI to install safeguards, as well as an undisclosed amount of damages.
As far back as 2018, Musk warned that "AI is far more dangerous than nukes. FAR." "So why do we have no regulatory oversight?" he asked. "This is insane."
He then insisted last July, “We need to be proactive in regulation [rather] than reactive.”