{"id":254449,"date":"2025-07-10T21:06:11","date_gmt":"2025-07-10T21:06:11","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/254449\/"},"modified":"2025-07-10T21:06:11","modified_gmt":"2025-07-10T21:06:11","slug":"dr-chatgpt-will-see-you-now","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/254449\/","title":{"rendered":"Dr. ChatGPT Will See You Now"},"content":{"rendered":"<p class=\"paywall\">And even if it is right, an AI agent can\u2019t complement the information it provides with the knowledge physicians gain through experience, says fertility doctor Jaime Knopman. When patients at her clinic in midtown Manhattan bring her information from AI chatbots, it isn\u2019t necessarily incorrect, but what the LLM suggests may not be the best approach for a patient\u2019s specific case.<\/p>\n<p class=\"paywall\">For instance, when considering IVF, couples receive viability grades for their embryos. But asking ChatGPT for recommendations on next steps based on those scores alone doesn\u2019t take other important factors into consideration, Knopman says. \u201cIt\u2019s not just about the grade: There\u2019s other things that go into it\u201d\u2014such as when the embryo was biopsied, the state of the patient\u2019s uterine lining, and whether they have had success with fertility treatment in the past. In addition to her years of training and medical education, Knopman says she has \u201ctaken care of thousands and thousands of women.\u201d This, she says, gives her real-world insight into which next steps to pursue that an LLM lacks.<\/p>\n<p class=\"paywall\">Other patients will come in certain of how they want an embryo transfer done, based on a response they received from AI, Knopman says. However, while the suggested method may be common, other courses of action may be more appropriate for the specific patient\u2019s circumstances, she says. 
\u201cThere\u2019s the science, which we study, and we learn how to do, but then there\u2019s the art of why one treatment modality or protocol is better for a patient than another,\u201d she says.<\/p>\n<p class=\"paywall\">Some of the companies behind these AI chatbots have been building tools to address concerns about the medical information dispensed. OpenAI, the maker of ChatGPT, <a data-offer-url=\"https:\/\/openai.com\/index\/healthbench\/\" class=\"external-link\" data-event-click=\"{&quot;element&quot;:&quot;ExternalLink&quot;,&quot;outgoingURL&quot;:&quot;https:\/\/openai.com\/index\/healthbench\/&quot;}\" href=\"https:\/\/openai.com\/index\/healthbench\/\" rel=\"nofollow noopener\" target=\"_blank\">announced<\/a> on May 12 it was launching HealthBench, a system designed to measure AI\u2019s capabilities in responding to health questions. OpenAI says the program was built with the help of more than 260 physicians in 60 countries, and includes 5,000 simulated health conversations between users and AI models, with a scoring guide designed by doctors to evaluate the responses. The company says doctors could improve upon the responses generated by earlier versions of its AI models, but claims the latest models, such as GPT-4.1, available as of April 2025, performed as well as or better than the human doctors.<\/p>\n<p class=\"paywall\">\u201cOur findings show that large language models have improved significantly over time and already outperform experts in writing responses to examples tested in our benchmark,\u201d OpenAI says on its website. \u201cYet even the most advanced systems still have substantial room for improvement, particularly in seeking necessary context for underspecified queries and worst-case reliability.\u201d<\/p>\n<p class=\"paywall\">Other companies are building health-specific tools designed specifically for medical professionals. 
Microsoft says it has <a href=\"https:\/\/www.wired.com\/story\/microsoft-medical-superintelligence-diagnosis\/\" target=\"_blank\" rel=\"noopener\">created a new AI system<\/a>\u2014called MAI Diagnostic Orchestrator (MAI-DxO)\u2014that in testing diagnosed patients four times as accurately as human doctors. The system works by querying several leading large language models\u2014including OpenAI\u2019s GPT, Google\u2019s Gemini, Anthropic\u2019s Claude, Meta\u2019s Llama, and xAI\u2019s Grok\u2014in a way that loosely mimics multiple human experts working together.<\/p>\n<p class=\"paywall\">New doctors will need to learn both how to use these AI tools and how to counsel patients who use them, says Bernard S. Chang, dean of medical education at Harvard Medical School. That\u2019s why his university was one of the first to offer students <a href=\"https:\/\/cmecatalog.hms.harvard.edu\/ai-clinical-medicine\" target=\"_blank\" rel=\"noopener\">classes<\/a> on how to use the technology in their practices. \u201cIt\u2019s one of the most exciting things that\u2019s happening right now in medical education,\u201d Chang says.<\/p>\n<p class=\"paywall\">The situation reminds Chang of when people started turning to the internet for medical information 20 years ago. Patients would come to him and say, \u201cI hope you\u2019re not one of those doctors that uses Google.\u201d But as the search engine became ubiquitous, he wanted to reply to these patients: \u201cYou wouldn\u2019t want to go to a doctor who didn\u2019t.\u201d He sees the same thing now happening with AI. 
\u201cWhat kind of doctor is practicing at the forefront of medicine and doesn\u2019t use this powerful tool?\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"And even if it is right, an AI agent can\u2019t complement the information it provides with the knowledge&hellip;\n","protected":false},"author":2,"featured_media":254450,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4316],"tags":[1942,1315,105,3941,4348,3912,1318,70,16,15],"class_list":{"0":"post-254449","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-artificial-intelligence","9":"tag-chatgpt","10":"tag-health","11":"tag-health-care","12":"tag-healthcare","13":"tag-medicine","14":"tag-openai","15":"tag-science","16":"tag-uk","17":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114830978841835169","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/254449","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=254449"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/254449\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/254450"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=254449"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=254449"},{"taxonomy":"post_tag","embeddable":true,"href":"h
ttps:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=254449"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}