
Everything has sped up in recent months. Young (and not-so-young) people from all over the world have started turning to ChatGPT and other chatbot programs to share their distress. Courts on several continents have had to tackle cases in which chatbots allegedly induced states of delirium or, worse, assisted in suicide. The most recent announcements: On Wednesday, January 7, OpenAI announced “ChatGPT Health.” According to the company, starting at the end of January, users who connect their health records to the program will receive “more relevant and personalized responses” from the chatbot, although these will not be “intended for diagnosis or treatment.” On Monday, January 12, the artificial intelligence company Anthropic announced its own program, “Claude for Healthcare.”

The medical profession has been shaken up by the rise of these chatbots, whether the programs’ users are psychiatric patients or not. The Big Tech companies have made clear their ambition to become major players in the digital mental health space, harnessing the power of their advanced generative AI models. What are they seeking to gain through this? An unprecedented trove of so-called “multimodal” personal data: voice, language, a user’s typing speed on a keyboard, even how quickly someone moves, all collected through mobile phones or smartwatches.
