The maker of ChatGPT has said the suicide of a 16-year-old was down to his “misuse” of its system and was “not caused” by the chatbot.

The comments came in OpenAI’s response to a lawsuit filed against the San Francisco company and its chief executive, Sam Altman, by the family of California teenager Adam Raine.

Raine killed himself in April after extensive conversations and “months of encouragement from ChatGPT”, the family’s lawyer has said.

The lawsuit alleges that the teenager discussed a method of suicide with ChatGPT on several occasions, that the chatbot guided him on whether a suggested method would work and offered to help him write a suicide note to his parents, and that the version of the technology he used was “rushed to market … despite clear safety issues”.

According to filings at the superior court of the state of California on Tuesday, OpenAI said that “to the extent that any ‘cause’ can be attributed to this tragic event” Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.

It said that its terms of use prohibited asking ChatGPT for advice about self-harm and highlighted a limitation of liability provision that states “you will not rely on output as a sole source of truth or factual information”.

OpenAI, which is valued at $500bn (£380bn), said in a blogpost that its goal was to “handle mental health-related court cases with care, transparency, and respect” and that, “independent of any litigation, we’ll remain focused on improving our technology in line with our mission”.

The blogpost added: “Our deepest sympathies are with the Raine family for their unimaginable loss. Our response to these allegations includes difficult facts about Adam’s mental health and life circumstances.

“The original complaint included selective portions of his chats that require more context, which we have provided in our response. We have limited the amount of sensitive evidence that we’ve publicly cited in this filing, and submitted the chat transcripts themselves to the court under seal.”

The family’s lawyer, Jay Edelson, called OpenAI’s response “disturbing” and said the company “tries to find fault in everyone else, including, amazingly, by arguing that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act”.

Earlier this month, OpenAI was hit by seven further lawsuits in California courts relating to ChatGPT, including an allegation it acted as a “suicide coach”.

A spokesperson for the company said at the time: “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details. We train ChatGPT to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”

In August, OpenAI said it was strengthening the safeguards in ChatGPT when people engage in long conversations because experience had shown that parts of the model’s safety training might degrade in these situations.

“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” it said. “This is exactly the kind of breakdown we are working to prevent.”

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org