{"id":24683,"date":"2025-08-26T16:47:07","date_gmt":"2025-08-26T16:47:07","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/24683\/"},"modified":"2025-08-26T16:47:07","modified_gmt":"2025-08-26T16:47:07","slug":"openai-makes-a-play-for-healthcare","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/24683\/","title":{"rendered":"OpenAI Makes a Play for Healthcare"},"content":{"rendered":"<p>OpenAI is going all in on healthcare AI.<\/p>\n<p>The company added two new leaders to its burgeoning healthcare AI team, <a href=\"https:\/\/www.businessinsider.com\/openai-taps-doximity-cofounder-to-lead-its-next-healthcare-push-2025-8\" rel=\"nofollow noopener\" target=\"_blank\">Business Insider<\/a> found, and is hiring more researchers and engineers.<\/p>\n<p>Nate Gross, co-founder and former chief strategy officer of healthcare business networking tool Doximity, joined OpenAI in June, and according to Business Insider will lead the company\u2019s go-to-market strategy in healthcare. One of the early goals of the team will reportedly be to co-create new healthcare tech with clinicians and researchers.<\/p>\n<p>OpenAI also hired Ashley Alexander, former co-head of product at Instagram, BI reported, who joined the company on Tuesday as vice president of product in the health business. 
The aim of Alexander\u2019s team, a spokesperson told BI, is to build tech for individual consumers and clinicians.<\/p>\n<p>The new hires come as OpenAI increases its bet on the healthcare industry.\u00a0<\/p>\n<p>\u201cImproving human health will be one of the defining impacts of AGI [artificial general intelligence],\u201d the company said in a press release from May announcing <a href=\"https:\/\/openai.com\/index\/healthbench\/\" rel=\"nofollow noopener\" target=\"_blank\">HealthBench<\/a>, the company\u2019s new benchmark to evaluate AI systems\u2019 capabilities for health.<\/p>\n<p>Meanwhile, AI models specialized to help healthcare professionals are burrowing deeper into the healthcare industry, and people are increasingly turning to ChatGPT to make sense of their symptoms.\u00a0<\/p>\n<p>But, like pretty much everything else with AI, the technology\u2019s increased adoption in healthcare does not come without concerns.<\/p>\n<p> <b>OpenAI\u2019s bet<\/b> <\/p>\n<p>OpenAI is far from the first company to bet on healthcare AI; it even lags behind Palantir, <a href=\"https:\/\/www.cnbc.com\/2023\/10\/09\/google-announces-new-generative-ai-search-capabilities-for-doctors-.html\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a>, and Microsoft, which have been making strides in this area for several years now. 
And the company\u2019s push into healthcare AI isn\u2019t necessarily new, but it has noticeably accelerated in the past few months.<\/p>\n<p>OpenAI announced a partnership last month with Kenya-based primary care provider <a href=\"https:\/\/openai.com\/index\/ai-clinical-copilot-penda-health\/\" rel=\"nofollow noopener\" target=\"_blank\">Penda Health<\/a> for a study looking into the company\u2019s AI Consult, an LLM-powered clinician copilot that writes recommendations during patient visits.<\/p>\n<p>Also last month, OpenAI CEO Sam Altman attended the White House\u2019s <a href=\"https:\/\/time.com\/7306647\/trump-health-data-medical-records\/\" rel=\"nofollow noopener\" target=\"_blank\">\u201cMake Health Tech Great Again\u201d<\/a> event, where President Trump announced a private sector initiative that will have Americans share their medical records across apps and programs via \u201csecured commitments\u201d from 60 companies, including OpenAI. The program will use conversational AI assistants for patient care.<\/p>\n<p>Roughly a week later, while announcing <a href=\"https:\/\/gizmodo.com\/tag\/gpt-5\" rel=\"nofollow noopener\" target=\"_blank\">GPT-5<\/a>, OpenAI drew particular attention to the model\u2019s healthcare-related capabilities.<\/p>\n<p>\u201cGPT-5 is our best model yet for health-related questions,\u201d the company wrote in a <a href=\"https:\/\/openai.com\/index\/introducing-gpt-5\/\" rel=\"nofollow noopener\" target=\"_blank\">press release<\/a>. 
\u201cImportantly, ChatGPT does not replace a medical professional\u2014think of it as a partner to help you understand results, ask the right questions in the time you have with providers, and weigh options as you make decisions.\u201d<\/p>\n<p>The company said the new model can \u201cproactively flag\u201d potential health concerns and adapt its answers to the user\u2019s \u201ccontext, knowledge level, and geography.\u201d In an example in the press release, GPT-5 created a six-week rehab plan for a high school pitcher with a mild UCL strain.<\/p>\n<p>Meanwhile, OpenAI\u2019s new CEO of applications <a href=\"https:\/\/gizmodo.com\/openais-new-exec-has-a-grand-plan-to-make-ai-for-everyone-2000633350\" rel=\"nofollow noopener\" target=\"_blank\">Fidji Simo<\/a> said she is \u201cmost excited for the breakthroughs that AI will generate in healthcare,\u201d in a <a href=\"https:\/\/openai.com\/index\/ai-as-the-greatest-source-of-empowerment-for-all\/?utm_source=newsletter.theaireport.ai&amp;utm_medium=newsletter&amp;utm_campaign=openai-and-google-fight-for-gold&amp;_bhlid=d4f654806b257e6caa28d716983cd6a77f8e57ce\" rel=\"nofollow noopener\" target=\"_blank\">press release<\/a> announcing her new role on July 21.<\/p>\n<p>Simo said her belief in AI\u2019s potential in this field comes from her own experiences with the healthcare system after facing \u201ca complex and poorly understood chronic illness.\u201d\u00a0<\/p>\n<p>Healthcare, especially in the United States, can indeed be a confusing field for patients to navigate, and OpenAI is betting that AI can help fix that.<\/p>\n<p>\u201cAI can explain lab results, decode medical jargon, offer second opinions, and help patients understand their options in plain language. 
It won\u2019t replace doctors, but it can finally level the playing field for patients, putting them in the driver\u2019s seat of their own care,\u201d Simo wrote in the release.<\/p>\n<p> <b>Healthcare AI: the future or a problem?<\/b> <\/p>\n<p>Can AI actually revolutionize healthcare? There is good news and bad news.\u00a0<\/p>\n<p>A <a href=\"https:\/\/hai.stanford.edu\/news\/can-ai-improve-medical-diagnostic-accuracy\" rel=\"nofollow noopener\" target=\"_blank\">Stanford study<\/a> from last year showed that ChatGPT on its own performed very well in medical diagnosis, even better than physicians did. Based on these preliminary results, healthcare-specific AI could prove to be a powerful diagnostic aid for healthcare providers.\u00a0<\/p>\n<p>Some healthcare providers have already started deploying specialized AI in patient care and diagnosis. OpenEvidence, a healthcare AI startup that offers a popular AI copilot trained on medical research, <a href=\"https:\/\/www.cnbc.com\/2025\/02\/19\/ai-startup-openevidence-secures-sequoia-funding-1-billion-valuation.html\" rel=\"nofollow noopener\" target=\"_blank\">claimed<\/a> earlier this year that its chatbot is already used by a quarter of doctors in the U.S.<\/p>\n<p>But as adoption mounts, so do the concerns.<\/p>\n<p>Some experts think the early tests of AI in healthcare are <a href=\"https:\/\/gizmodo.com\/doctors-say-ai-is-introducing-slop-into-patient-care-2000543805\" rel=\"nofollow noopener\" target=\"_blank\">actually not reassuring<\/a>, with some medical professionals disagreeing completely with ChatGPT\u2019s medical suggestions.<\/p>\n<p>Although AI\u2019s failure rate can be overlooked in some fields, mistakes in healthcare can be fatal.\u00a0<\/p>\n<p>\u201cTwenty percent problematic responses is not, to me, good enough for actual daily use in the health care system,\u201d Stanford medical and data science professor Roxana Daneshjou told the <a 
href=\"https:\/\/www.washingtonpost.com\/technology\/2024\/12\/25\/ai-health-care-medical-doctors\/\" rel=\"nofollow noopener\" target=\"_blank\">Washington Post<\/a> last year when asked about ChatGPT.\u00a0<\/p>\n<p><a href=\"https:\/\/gizmodo.com\/ai-psychosis-mental-health-2000645293\" rel=\"nofollow noopener\" target=\"_blank\">Case in point<\/a>: A man with no past medical history ended up at the ER with <a href=\"https:\/\/gizmodo.com\/man-follows-diet-advice-from-chatgpt-ends-up-with-psychosis-2000640705\" rel=\"nofollow noopener\" target=\"_blank\">bromide poisoning-induced psychosis<\/a> after ChatGPT falsely advised him to take bromide supplements to reduce his table salt intake.<\/p>\n<p>One of the things that makes false reasoning in healthcare decisions so dangerous is our own automation bias: no matter how well informed we may be about a topic, we tend to value a model\u2019s recommendations over our own beliefs.\u00a0<\/p>\n<p>This bias is made even more dangerous by the fact that AI is inherently a black box: we have no idea why or how it reaches the conclusions it does, making it harder to see where the reasoning went wrong and whether the model should be trusted.<\/p>\n<p>So while AI does hold the potential to help, or maybe even revolutionize, the healthcare system, there is still much to address before that can happen safely.<\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI is going all in on healthcare AI. 
The company added two new leaders to its burgeoning healthcare&hellip;\n","protected":false},"author":2,"featured_media":24684,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[275],"tags":[297,18,11139,135,475,474,19,17,307,308],"class_list":{"0":"post-24683","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-chatgpt","9":"tag-eire","10":"tag-emerging-technologies","11":"tag-health","12":"tag-health-care","13":"tag-healthcare","14":"tag-ie","15":"tag-ireland","16":"tag-openai","17":"tag-sam-altman"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/24683","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=24683"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/24683\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/24684"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=24683"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=24683"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=24683"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}