The rapid rise of artificial intelligence (AI), particularly generative tools such as conversational chatbots, is transforming industries worldwide, including the maritime industry.
While these technologies offer efficiency, accessibility, and even emotional support, mental health professionals are increasingly warning of unintended psychological consequences. From dependency and emotional attachment to more severe psychiatric outcomes, AI is prompting urgent new questions about mental health, safety, and regulation.
Growing concern across the maritime sector
Recent warnings from Mental Health Support Solutions (MHSS) highlight a troubling trend: “Mental health experts raise alarm over maritime workforce’s AI dependence.”
According to MHSS, ChatGPT addiction is already rising among shore-based personnel, with experts cautioning that similar patterns are likely to emerge onboard vessels. The concern is not only the frequency of use but also the nature of human-AI interaction, particularly when individuals begin to rely on AI tools for emotional support.
The organization stresses that the lack of regulation in AI systems is deeply worrying, especially given the variability in safeguards across platforms. While mainstream tools may attempt to provide safe responses, other lesser-known applications allow largely unrestricted interactions, including exposure to harmful or inappropriate content.
A tragic example underscores these risks: on August 25, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, alleging that ChatGPT reinforced their son’s suicidal thoughts, ultimately contributing to his death. Since then, experts have intensified calls for caution.
The illusion of companionship
One of the most significant psychological risks identified by MHSS is the illusion of companionship created by AI chatbots. Unlike human professionals, AI systems cannot accurately assess crisis situations, initiate emergency escalation protocols, or transfer users to appropriate human support. Yet their advanced conversational abilities can convincingly simulate empathy.
As Charles Watkins, Founder and Director of Clinical Operations at MHSS, explains: “With maritime health professionals with years of experience, we understand our escalation protocols, who to contact right away and which vessels, and the bot has no absolute capability of doing that. We should try to be more careful, considerate, assessing the risks just as much as the benefits.”
This gap between perceived support and actual capability can lead users, particularly isolated seafarers, to form dangerous emotional dependencies.
Emerging psychological patterns: dependency and delusion
Research increasingly suggests that AI-human interaction is not psychologically neutral and may actively influence cognition, behavior, and emotional regulation. Key patterns identified include:
Psychological dependency and attachment – Users anthropomorphize AI systems, forming parasocial relationships. In maritime contexts, isolation may amplify this risk.
Crisis incidents and harmful outcomes – Prolonged AI interaction can reinforce negative thinking patterns, contribute to emotional dysregulation, and, in extreme cases, correlate with self-harm or suicide. High-profile incidents, such as the death of 14-year-old Sewell Setzer III following intensive chatbot use, illustrate potential severity.
Vulnerability of specific populations – Adolescents, individuals with pre-existing mental health conditions, and socially isolated workers (such as seafarers) are at higher risk. Cognitive dissonance – knowing the AI is not human while experiencing it as such – may fuel delusional thinking in susceptible individuals.
Clinicians have begun describing emerging patterns sometimes referred to as “ChatGPT-induced psychosis,” characterized by obsessive use, delusional beliefs about AI agency, emotional overreliance, and social withdrawal.
Although this is not yet an officially recognized diagnosis, preliminary neuroscientific findings suggest prolonged AI use may be linked to addictive behaviors, impaired cognitive processing, and reduced real-world social engagement.
Leveraging AI responsibly: Training and support
In parallel with concerns about psychological risks, organizations such as the McKinsey Health Institute (MHI), Grand Challenges Canada (GCC), and Google are exploring AI’s positive role in closing global mental health gaps. With more than half of people expected to experience a mental health challenge in their lifetime, and severe shortages of trained professionals, particularly in low- and middle-income countries, AI is being leveraged as a training partner and coaching tool.
Their open-access field guide demonstrates how AI can support task-sharing programs by simulating real-world mental health scenarios, providing tailored feedback, and assisting frontline workers in delivering evidence-based care.
For example, an AI-powered training bot developed in collaboration with The Trevor Project has already helped train over 1,000 crisis counselors, simulating emotionally charged conversations in a safe environment. Dr. Nicole Bardikoff, Associate Director of Global Mental Health at Grand Challenges Canada, notes: “We know the use of AI in mental health care is still emerging, and we’re approaching it with humility and curiosity.”
Balancing risk with opportunity
Despite these concerns, AI also holds significant potential benefits for mental health care:
Improved access to support in underserved regions
Early detection of mental health conditions through data analysis
Personalized treatment recommendations
24/7 availability of conversational support
In a world where mental health resources are scarce – particularly in remote maritime environments – AI can play a valuable complementary role. However, experts stress that AI must augment, not replace, human care.
The urgent need for regulation and education
MHSS emphasizes that the solution lies not in rejecting AI, but in using it responsibly. As Güven Kale, Clinical Psychologist and Emotionally Focused Therapist at MHSS, states: “We need to educate people on how to use AI safely, and to know when to seek human support.”
Key priorities moving forward include:
Clear regulatory frameworks for AI tools
Built-in escalation mechanisms to human professionals
Ethical standards for AI-human interaction
Training for both users and mental health practitioners
Implications for the maritime industry
For the maritime sector, the stakes are particularly high. Seafarers often operate in isolated environments and high-stress conditions with limited access to mental health services. In such contexts, AI may quickly become a primary support tool, making safe implementation critical. Without proactive measures, the industry risks facing a new category of mental health challenges driven by human-AI relationships.
Conclusion
AI is reshaping not only how we work, but how we think, feel, and connect. While its benefits are undeniable, the psychological risks – especially in high-risk sectors like maritime – cannot be ignored.
Mental health experts are clear: the question is no longer whether AI affects mental health, but how deeply, and how safely, we manage its impact.
As adoption accelerates, the maritime industry must act decisively, balancing innovation with responsibility to protect the well-being of its workforce.