Article Excerpt: In August, two parents in California filed a lawsuit against OpenAI, claiming that the company was responsible for their teenage son's suicide. The previous fall, according to Maria and Matthew Raine, their 16-year-old, Adam, had started using the company's popular AI chatbot ChatGPT as a homework helper. Over the course of several months, the Raines alleged, the chatbot shifted into a digital companion and then a "suicide coach," advising the teen how to quietly steal vodka from his parents' liquor cabinet, urging him to keep his suicidal ideation a secret, and guiding him on the feasibility and load-bearing capacity of a noose. By the time of Adam's death in April, according to the Raines' complaint, the chatbot had used the word "suicide" 1,275 times, six times more often than Adam himself.
OpenAI later published a note stating that "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us."
The case of Adam Raine was not an isolated incident, though publicly available data remains limited. And experts worry that more mental health crises, including suicides, the second leading cause of death among people ages 10 to 24, could arise as users increasingly turn to generative AI chatbots for emotional support. Although it is difficult to pinpoint how many people rely on chatbots in this way, a recent Harvard Business Review survey, based primarily on data collected from Reddit forum posts, found the practice to be common for therapy, companionship, and finding purpose.
Researchers have scrambled to understand the trend, weighing both the potential risks and the benefits of chatbots, most of which were not designed for mental health support.
Full Article: https://tinyurl.com/4nas68zd
Article Source: Undark Magazine