{"id":354833,"date":"2025-11-04T11:05:15","date_gmt":"2025-11-04T11:05:15","guid":{"rendered":"https:\/\/www.europesays.com\/us\/354833\/"},"modified":"2025-11-04T11:05:15","modified_gmt":"2025-11-04T11:05:15","slug":"chatgpt-restricted-from-giving-medical-legal-or-financial-advice-over-liability-fears-report-technology-news","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/354833\/","title":{"rendered":"ChatGPT &#8216;restricted&#8217; from giving medical, legal, or financial advice over liability fears: Report &#8211; Technology News"},"content":{"rendered":"<p><a href=\"https:\/\/www.financialexpress.com\/life\/technology-after-openai-ceo-sam-altman-questions-tesla-roadster-delivery-elon-musk-now-promises-a-flying-car-4030160\/lite\/\" target=\"_blank\" rel=\"noopener\">OpenAI <\/a>is set to change the way <a href=\"https:\/\/www.financialexpress.com\/jobs-career\/openai-hiring-in-bengaluru-role-to-focus-on-working-closely-with-startups-check-job-details-4028792\/lite\/\" target=\"_blank\" rel=\"noopener\">ChatGPT <\/a>can be used. The system will no longer provide specific medical, legal, or financial advice. As reported by Nexta, ChatGPT is now officially an \u201ceducational tool,\u201d not a \u201cconsultant.\u201d The report attributes the change to growing regulation and liability fears.<\/p>\n<p>The updated policies prohibit users from relying on ChatGPT for consultations that require professional certification. This includes medical and legal advice, financial decision-making, and other high-stakes areas such as housing, education, migration, or employment without human oversight.<\/p>\n<p>This does not mean the model is entirely silent on medical, legal, or financial topics. 
It can still explain general concepts: what a will is, how tax brackets work, or, in broad strokes, what typical treatments for a condition might be. It simply should not provide personalised, professional-level advice.<\/p>\n<p>As reported by Nexta, the policy also restricts AI-assisted personal or facial recognition without consent and forbids actions that could lead to academic misconduct.<\/p>\n<p>OpenAI says the changes are intended to \u201cenhance user safety and prevent potential harm\u201d from relying on the system beyond its intended capabilities.<\/p>\n<p>AI to be an educational tool<\/p>\n<p>Under the new rules, ChatGPT will only explain principles, outline general mechanisms, and direct users to qualified professionals. It will no longer provide specific medication names or dosages, generate lawsuit templates, or offer investment tips or buy\/sell suggestions.<\/p>\n<p>Users report that attempts to bypass the restrictions by framing requests as hypotheticals are now blocked by the system\u2019s safety filters. The update comes amid public debate about the growing number of people turning to AI chatbots for expert advice, especially in the medical field.<\/p>\n<p>Unlike consultations with licensed professionals, conversations with ChatGPT are not protected by doctor\u2013patient or attorney\u2013client privilege, meaning chats could potentially be subpoenaed for use in court.<\/p>\n<p>Recently, OpenAI also introduced new safety features to better support users in distress, focusing on mental health issues such as psychosis, mania, self-harm, and suicide, as well as emotional reliance on AI.<\/p>\n<p>Nexta summed up the new limits as \u2018no more naming medications or giving dosages\u2026 no lawsuit templates\u2026 no investment tips or buy\/sell suggestions.\u2019<\/p>\n<p>ChatGPT can be helpful for explaining concepts, summarising information, or brainstorming ideas, but it has serious limitations when it comes to real-life decisions. 
Unlike a licensed therapist, it cannot read body language, feel empathy, or ensure your safety.<\/p>\n<p>If you or someone you care about is in crisis, always reach out to trained professionals (for example, by dialing 988 in the US) rather than relying on an AI. The same caution applies to finances and legal matters. ChatGPT can define terms like ETFs or explain basic tax rules, but it cannot account for your personal circumstances, risk tolerance, or the specific regulations that apply to you.<\/p>\n<p>Using it to draft legal documents or financial plans carries real-world risks, including errors that could prove costly or leave a document legally invalid.<\/p>\n<p>High-stakes emergencies are also beyond its abilities. It cannot detect gas leaks, alert authorities, or provide real-time updates. While ChatGPT can access web data, it does not monitor events continuously, and its outputs can contain mistakes, including misreported statistics or outdated information.<\/p>\n<p>Users should never share confidential or sensitive data, such as financial records, medical charts, or private contracts, since secure storage and restricted access are not guaranteed.<\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI is set to change the way ChatGPT can be used. 
The system will no longer provide specific&hellip;\n","protected":false},"author":3,"featured_media":354834,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[147892,691,173261,20446,66083,173259,173268,738,173269,173260,173257,173263,173270,22504,75564,173265,173262,173267,173264,173266,173258,923,13781,158,44689,67,132,68,46778,173271],"class_list":{"0":"post-354833","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-academic-misconduct","9":"tag-ai","10":"tag-ai-consultant","11":"tag-ai-ethics","12":"tag-ai-governance","13":"tag-ai-liability-fears","14":"tag-ai-safety-filters","15":"tag-artificial-intelligence","16":"tag-chatbot-limitations","17":"tag-chatgpt-educational-tool","18":"tag-chatgpt-new-rules","19":"tag-chatgpt-regulations","20":"tag-chatgpt-update","21":"tag-data-privacy","22":"tag-expert-advice","23":"tag-financial-advice-ai","24":"tag-future-of-ai","25":"tag-high-stakes-decisions","26":"tag-legal-advice-ai","27":"tag-medical-advice-ai","28":"tag-openai-policy-change","29":"tag-sam-altman","30":"tag-tech-news","31":"tag-technology","32":"tag-technology-trends","33":"tag-united-states","34":"tag-unitedstates","35":"tag-us","36":"tag-user-safety","37":"tag-viral-tech-news"},"share_on_mastodon":{"url":"","error":"Validation failed: Text character limit of 500 
exceeded"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/354833","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=354833"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/354833\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/354834"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=354833"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=354833"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=354833"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}