ChatGPT Now Accused of Aiding a Murder

The OpenAI logo is displayed on a mobile phone in front of a computer screen with output from ChatGPT, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

Several lawsuits have alleged that AI-powered chatbots played a role in users’ suicides. Now, a new wrongful-death lawsuit claims that ChatGPT contributed to a woman’s murder.

The complaint, filed Thursday in San Francisco Superior Court, alleges that 56-year-old Stein-Erik Soelberg — a former tech executive from Connecticut with a history of mental-health struggles — spiraled into deep paranoia through his interactions with ChatGPT. According to the lawsuit, the chatbot validated and amplified his delusions, eventually contributing to a violent incident in August in which Soelberg fatally beat and strangled his 83-year-old mother, Suzanne Adams, and then took his own life.

The filing alleges that in recorded exchanges, Soelberg told the AI he believed a printer in his mother's home was spying on him. The chatbot reportedly responded by affirming his fears and encouraging his belief that ordinary objects and people were part of a conspiracy against him. In other exchanges, the suit claims, ChatGPT reinforced Soelberg's suspicion that his mother and others — including police and delivery drivers — were enemies, and even encouraged his emotional dependence on the AI.

This lawsuit is the first known case to directly link ChatGPT to a murder; it also names Microsoft, OpenAI's partner, as a defendant. The estate's complaint asserts that both companies failed to ensure adequate safety protections before releasing the version of ChatGPT involved in Soelberg's interactions.

OpenAI CEO Sam Altman has previously acknowledged that the version at issue, GPT-4o, could be “overly agreeable” and potentially harmful for people in fragile mental states. The company initially announced plans to retire that version but later reversed course after user backlash, keeping it available for paying customers. An OpenAI spokesperson called the situation “incredibly heartbreaking” and said the company is working to improve the system’s ability to recognize signs of psychological distress and guide users toward real-world help, with input from mental-health professionals.

The case adds to growing legal and ethical questions about how AI systems interact with vulnerable individuals and how much responsibility developers bear for real-world harm linked to their technology.

