{"id":174617,"date":"2025-11-11T08:24:35","date_gmt":"2025-11-11T08:24:35","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/174617\/"},"modified":"2025-11-11T08:24:35","modified_gmt":"2025-11-11T08:24:35","slug":"a-vision-language-pretrained-transformer-for-versatile-clinical-respiratory-disease-applications","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/174617\/","title":{"rendered":"A vision\u2013language pretrained transformer for versatile clinical respiratory disease applications"},"content":{"rendered":"<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"1.\">\n<p class=\"c-article-references__text\" id=\"ref-CR1\">Agusti, A., Vogelmeier, C. F. &amp; Halpin, D. M. G. Tackling the global burden of lung disease through prevention and early diagnosis. Lancet Respir. Med. <b>10<\/b>, 1013\u20131015 (2022).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1016\/S2213-2600(22)00302-2\" data-track-item_id=\"10.1016\/S2213-2600(22)00302-2\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1016%2FS2213-2600%2822%2900302-2\" aria-label=\"Article reference 1\" data-doi=\"10.1016\/S2213-2600(22)00302-2\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=36162412\" aria-label=\"PubMed reference 1\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow 
noopener\" aria-label=\"Google Scholar reference 1\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Tackling%20the%20global%20burden%20of%20lung%20disease%20through%20prevention%20and%20early%20diagnosis&amp;journal=Lancet%20Respir.%20Med.&amp;doi=10.1016%2FS2213-2600%2822%2900302-2&amp;volume=10&amp;pages=1013-1015&amp;publication_year=2022&amp;author=Agusti%2CA&amp;author=Vogelmeier%2CCF&amp;author=Halpin%2CDMG\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"2.\">\n<p class=\"c-article-references__text\" id=\"ref-CR2\">Ardila, D. et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat. Med. <b>25<\/b>, 954\u2013961 (2019).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1038\/s41591-019-0447-x\" data-track-item_id=\"10.1038\/s41591-019-0447-x\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1038%2Fs41591-019-0447-x\" aria-label=\"Article reference 2\" data-doi=\"10.1038\/s41591-019-0447-x\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"cas reference\" data-track-action=\"cas reference\" href=\"https:\/\/www.nature.com\/articles\/cas-redirect\/1:CAS:528:DC%2BC1MXhtVWqurfO\" aria-label=\"CAS reference 2\" target=\"_blank\">CAS<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" 
href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=31110349\" aria-label=\"PubMed reference 2\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 2\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=End-to-end%20lung%20cancer%20screening%20with%20three-dimensional%20deep%20learning%20on%20low-dose%20chest%20computed%20tomography&amp;journal=Nat.%20Med.&amp;doi=10.1038%2Fs41591-019-0447-x&amp;volume=25&amp;pages=954-961&amp;publication_year=2019&amp;author=Ardila%2CD\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"3.\">\n<p class=\"c-article-references__text\" id=\"ref-CR3\">Chen, Z., Song, Y., Chang, T.-H. &amp; Wan, X. Generating radiology reports via memory-driven transformer. In Proc. Conference on Empirical Methods in Natural Language Processing (eds Webber, B. et al.) 1439\u20131449 (ACL, 2020); <a href=\"https:\/\/doi.org\/10.18653\/v1\/2020.emnlp-main.112\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"10.18653\/v1\/2020.emnlp-main.112\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/doi.org\/10.18653\/v1\/2020.emnlp-main.112<\/a><\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"4.\">\n<p class=\"c-article-references__text\" id=\"ref-CR4\">OpenAI. GPT-4 technical report. 
Preprint at <a href=\"https:\/\/arxiv.org\/abs\/2303.08774\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"https:\/\/arxiv.org\/abs\/2303.08774\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2303.08774<\/a> (2023).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"5.\">\n<p class=\"c-article-references__text\" id=\"ref-CR5\">Kirillov, A. et al. Segment anything. In Proc. IEEE\/CVF International Conference on Computer Vision 4015\u20134026 (IEEE, 2023).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"6.\">\n<p class=\"c-article-references__text\" id=\"ref-CR6\">Rajpurkar, P., Chen, E., Banerjee, O. &amp; Topol, E. J. AI in health and medicine. Nat. Med. <b>28<\/b>, 31\u201338 (2022).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1038\/s41591-021-01614-0\" data-track-item_id=\"10.1038\/s41591-021-01614-0\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1038%2Fs41591-021-01614-0\" aria-label=\"Article reference 6\" data-doi=\"10.1038\/s41591-021-01614-0\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"cas reference\" data-track-action=\"cas reference\" href=\"https:\/\/www.nature.com\/articles\/cas-redirect\/1:CAS:528:DC%2BB38XhslCntr4%3D\" aria-label=\"CAS reference 6\" target=\"_blank\">CAS<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" 
href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=35058619\" aria-label=\"PubMed reference 6\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 6\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=AI%20in%20health%20and%20medicine&amp;journal=Nat.%20Med.&amp;doi=10.1038%2Fs41591-021-01614-0&amp;volume=28&amp;pages=31-38&amp;publication_year=2022&amp;author=Rajpurkar%2CP&amp;author=Chen%2CE&amp;author=Banerjee%2CO&amp;author=Topol%2CEJ\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"7.\">\n<p class=\"c-article-references__text\" id=\"ref-CR7\">Azizi, S. et al. Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging. Nat. Biomed. Eng. 
<b>7<\/b>, 756\u2013779 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1038\/s41551-023-01049-7\" data-track-item_id=\"10.1038\/s41551-023-01049-7\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1038%2Fs41551-023-01049-7\" aria-label=\"Article reference 7\" data-doi=\"10.1038\/s41551-023-01049-7\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=37291435\" aria-label=\"PubMed reference 7\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 7\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Robust%20and%20data-efficient%20generalization%20of%20self-supervised%20machine%20learning%20for%20diagnostic%20imaging&amp;journal=Nat.%20Biomed.%20Eng.&amp;doi=10.1038%2Fs41551-023-01049-7&amp;volume=7&amp;pages=756-779&amp;publication_year=2023&amp;author=Azizi%2CS\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"8.\">\n<p class=\"c-article-references__text\" id=\"ref-CR8\">Singhal, K. et al. Large language models encode clinical knowledge. 
Nature <b>620<\/b>, 172\u2013180 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1038\/s41586-023-06291-2\" data-track-item_id=\"10.1038\/s41586-023-06291-2\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1038%2Fs41586-023-06291-2\" aria-label=\"Article reference 8\" data-doi=\"10.1038\/s41586-023-06291-2\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"cas reference\" data-track-action=\"cas reference\" href=\"https:\/\/www.nature.com\/articles\/cas-redirect\/1:CAS:528:DC%2BB3sXhsVKju7zP\" aria-label=\"CAS reference 8\" target=\"_blank\">CAS<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=37438534\" aria-label=\"PubMed reference 8\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed central reference\" data-track-action=\"pubmed central reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC10396962\" aria-label=\"PubMed Central reference 8\" target=\"_blank\">PubMed Central<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 8\" 
href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Large%20language%20models%20encode%20clinical%20knowledge&amp;journal=Nature&amp;doi=10.1038%2Fs41586-023-06291-2&amp;volume=620&amp;pages=172-180&amp;publication_year=2023&amp;author=Singhal%2CK\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"9.\">\n<p class=\"c-article-references__text\" id=\"ref-CR9\">Ma, J. et al. Segment anything in medical images. Nat. Commun. <b>15<\/b>, 654 (2024).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"10.\">\n<p class=\"c-article-references__text\" id=\"ref-CR10\">Lei, W., Wei, X., Zhang, X., Li, K. &amp; Zhang, S. MedLSAM: localize and segment anything model for 3D medical images. Med. Image Anal. <b>99<\/b>, 103370 (2025).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1016\/j.media.2024.103370\" data-track-item_id=\"10.1016\/j.media.2024.103370\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1016%2Fj.media.2024.103370\" aria-label=\"Article reference 10\" data-doi=\"10.1016\/j.media.2024.103370\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=39447436\" aria-label=\"PubMed reference 10\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" 
data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 10\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=MedLSAM%3A%20localize%20and%20segment%20anything%20model%20for%203D%20medical%20images&amp;journal=Med.%20Image%20Anal.&amp;doi=10.1016%2Fj.media.2024.103370&amp;volume=99&amp;publication_year=2025&amp;author=Lei%2CW&amp;author=Wei%2CX&amp;author=Zhang%2CX&amp;author=Li%2CK&amp;author=Zhang%2CS\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"11.\">\n<p class=\"c-article-references__text\" id=\"ref-CR11\">Zhang, X., Wu, C., Zhang, Y., Xie, W. &amp; Wang, Y. Knowledge-enhanced visual-language pre-training on chest radiology images. Nat. Commun. <b>14<\/b>, 4542 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1038\/s41467-023-40260-7\" data-track-item_id=\"10.1038\/s41467-023-40260-7\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1038%2Fs41467-023-40260-7\" aria-label=\"Article reference 11\" data-doi=\"10.1038\/s41467-023-40260-7\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"cas reference\" data-track-action=\"cas reference\" href=\"https:\/\/www.nature.com\/articles\/cas-redirect\/1:CAS:528:DC%2BB3sXhsF2htL3I\" aria-label=\"CAS reference 11\" target=\"_blank\">CAS<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" 
href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=37507376\" aria-label=\"PubMed reference 11\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed central reference\" data-track-action=\"pubmed central reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC10382552\" aria-label=\"PubMed Central reference 11\" target=\"_blank\">PubMed Central<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 11\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Knowledge-enhanced%20visual-language%20pre-training%20on%20chest%20radiology%20images&amp;journal=Nat.%20Commun.&amp;doi=10.1038%2Fs41467-023-40260-7&amp;volume=14&amp;publication_year=2023&amp;author=Zhang%2CX&amp;author=Wu%2CC&amp;author=Zhang%2CY&amp;author=Xie%2CW&amp;author=Wang%2CY\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"12.\">\n<p class=\"c-article-references__text\" id=\"ref-CR12\">Zhang, S. et al. A multimodal biomedical foundation model trained from fifteen million image\u2013text pairs. 
NEJM AI <b>2<\/b>, AIoa2400640 (2025).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1056\/AIoa2400640\" data-track-item_id=\"10.1056\/AIoa2400640\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1056%2FAIoa2400640\" aria-label=\"Article reference 12\" data-doi=\"10.1056\/AIoa2400640\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 12\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=A%20multimodal%20biomedical%20foundation%20model%20trained%20from%20fifteen%20million%20image%E2%80%93text%20pairs&amp;journal=NEJM%20AI&amp;doi=10.1056%2FAIoa2400640&amp;volume=2&amp;publication_year=2025&amp;author=Zhang%2CS\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"13.\">\n<p class=\"c-article-references__text\" id=\"ref-CR13\">Zhou, Y. et al. A foundation model for generalizable disease detection from retinal images. 
Nature <b>622<\/b>, 156\u2013163 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1038\/s41586-023-06555-x\" data-track-item_id=\"10.1038\/s41586-023-06555-x\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1038%2Fs41586-023-06555-x\" aria-label=\"Article reference 13\" data-doi=\"10.1038\/s41586-023-06555-x\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"cas reference\" data-track-action=\"cas reference\" href=\"https:\/\/www.nature.com\/articles\/cas-redirect\/1:CAS:528:DC%2BB3sXhvFSmtbvO\" aria-label=\"CAS reference 13\" target=\"_blank\">CAS<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=37704728\" aria-label=\"PubMed reference 13\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed central reference\" data-track-action=\"pubmed central reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC10550819\" aria-label=\"PubMed Central reference 13\" target=\"_blank\">PubMed Central<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 13\" 
href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=A%20foundation%20model%20for%20generalizable%20disease%20detection%20from%20retinal%20images&amp;journal=Nature&amp;doi=10.1038%2Fs41586-023-06555-x&amp;volume=622&amp;pages=156-163&amp;publication_year=2023&amp;author=Zhou%2CY\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"14.\">\n<p class=\"c-article-references__text\" id=\"ref-CR14\">Jiang, L. Y. et al. Health system-scale language models are all-purpose prediction engines. Nature <b>619<\/b>, 357\u2013362 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1038\/s41586-023-06160-y\" data-track-item_id=\"10.1038\/s41586-023-06160-y\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1038%2Fs41586-023-06160-y\" aria-label=\"Article reference 14\" data-doi=\"10.1038\/s41586-023-06160-y\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"cas reference\" data-track-action=\"cas reference\" href=\"https:\/\/www.nature.com\/articles\/cas-redirect\/1:CAS:528:DC%2BB3sXhtFOksr%2FP\" aria-label=\"CAS reference 14\" target=\"_blank\">CAS<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=37286606\" aria-label=\"PubMed reference 14\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" 
data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed central reference\" data-track-action=\"pubmed central reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC10338337\" aria-label=\"PubMed Central reference 14\" target=\"_blank\">PubMed Central<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 14\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Health%20system-scale%20language%20models%20are%20all-purpose%20prediction%20engines&amp;journal=Nature&amp;doi=10.1038%2Fs41586-023-06160-y&amp;volume=619&amp;pages=357-362&amp;publication_year=2023&amp;author=Jiang%2CLY\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"15.\">\n<p class=\"c-article-references__text\" id=\"ref-CR15\">Topol, E. J. As artificial intelligence goes multimodal, medical applications multiply. 
Science <b>381<\/b>, adk6139 (2023).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1126\/science.adk6139\" data-track-item_id=\"10.1126\/science.adk6139\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1126%2Fscience.adk6139\" aria-label=\"Article reference 15\" data-doi=\"10.1126\/science.adk6139\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed reference\" data-track-action=\"pubmed reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/entrez\/query.fcgi?cmd=Retrieve&amp;db=PubMed&amp;dopt=Abstract&amp;list_uids=37708283\" aria-label=\"PubMed reference 15\" target=\"_blank\">PubMed<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 15\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=As%20artificial%20intelligence%20goes%20multimodal%2C%20medical%20applications%20multiply&amp;journal=Science&amp;doi=10.1126%2Fscience.adk6139&amp;volume=381&amp;publication_year=2023&amp;author=Topol%2CEJ\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"16.\">\n<p class=\"c-article-references__text\" id=\"ref-CR16\">National Academies of Sciences, Engineering, and Medicine. 
Improving Diagnosis in Health Care (The National Academies Press, 2015).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"17.\">\n<p class=\"c-article-references__text\" id=\"ref-CR17\">Azizi, S. et al. Big self-supervised models advance medical image classification. In Proc. IEEE\/CVF International Conference on Computer Vision 3458\u20133468 (IEEE, 2021); <a href=\"https:\/\/doi.org\/10.1109\/ICCV48922.2021.00346\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"10.1109\/ICCV48922.2021.00346\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/doi.org\/10.1109\/ICCV48922.2021.00346<\/a><\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"18.\">\n<p class=\"c-article-references__text\" id=\"ref-CR18\">Hosseinzadeh Taher, M. R., Haghighi, F., Gotway, M. B. &amp; Liang, J. CAiD: context-aware instance discrimination for self-supervised learning in medical imaging. Proc. Mach. Learn. Res. <b>172<\/b>, 535\u2013551 (2022).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"19.\">\n<p class=\"c-article-references__text\" id=\"ref-CR19\">Vaswani, A. et al. Attention is all you need. In Proc. 31st International Conference on Neural Information Processing Systems (eds Guyon, I. et al.) 6000\u20136010 (Curran Associates, 2017).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"20.\">\n<p class=\"c-article-references__text\" id=\"ref-CR20\">The National Lung Screening Trial Research Team Reduced lung-cancer mortality with low-dose computed tomographic screening. N. Engl. J. Med. 
<b>365<\/b>, 395\u2013409 (2011).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"10.1056\/NEJMoa1102873\" data-track-item_id=\"10.1056\/NEJMoa1102873\" data-track-value=\"article reference\" data-track-action=\"article reference\" href=\"https:\/\/doi.org\/10.1056%2FNEJMoa1102873\" aria-label=\"Article reference 20\" data-doi=\"10.1056\/NEJMoa1102873\" target=\"_blank\">Article<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" rel=\"nofollow noopener\" data-track-label=\"link\" data-track-item_id=\"link\" data-track-value=\"pubmed central reference\" data-track-action=\"pubmed central reference\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC4356534\" aria-label=\"PubMed Central reference 20\" target=\"_blank\">PubMed Central<\/a>\u00a0<br \/>\n    <a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 20\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Reduced%20lung-cancer%20mortality%20with%20low-dose%20computed%20tomographic%20screening&amp;journal=N.%20Engl.%20J.%20Med.&amp;doi=10.1056%2FNEJMoa1102873&amp;volume=365&amp;pages=395-409&amp;publication_year=2011\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"21.\">\n<p class=\"c-article-references__text\" id=\"ref-CR21\">Morozov, S. P. et al. Mosmeddata: chest CT scans with COVID-19 related findings dataset. 
Preprint at https://arxiv.org/abs/2005.06465 (2020).

22. Radford, A. et al. Learning transferable visual models from natural language supervision. In Proc. International Conference on Machine Learning (eds Meila, M. & Zhang, T.) 8748–8763 (PMLR, 2021).

23. Eslami, S., Meinel, C. & De Melo, G. PubMedCLIP: how much does CLIP benefit visual question answering in the medical domain? In Proc. Findings of the Association for Computational Linguistics (eds Vlachos, A. & Augenstein, I.) 1151–1163 (ACL, 2023).

24. Moor, M. et al. Med-Flamingo: a multimodal medical few-shot learner. In Proc. 3rd Machine Learning for Health Symposium (eds Hegselmann, S. et al.) 353–367 (PMLR, 2023).

25. Li, C. et al. LLaVA-Med: training a large language-and-vision assistant for biomedicine in one day. In Proc. 37th Conference on Neural Information Processing Systems 28541–28564 (Curran Associates, 2023).

26. Dosovitskiy, A. et al. An image is worth 16×16 words: transformers for image recognition at scale. In Proc. International Conference on Learning Representations (OpenReview.net, 2021).

27. Deng, J. et al. ImageNet: a large-scale hierarchical image database. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 248–255 (IEEE, 2009).

28. Papineni, K., Roukos, S., Ward, T. & Zhu, W.-J. BLEU: a method for automatic evaluation of machine translation. In Proc. 40th Annual Meeting of the Association for Computational Linguistics (eds Isabelle, P. et al.) 311–318 (ACL, 2002); https://doi.org/10.3115/1073083.1073135

29. Lin, C.-Y. ROUGE: a package for automatic evaluation of summaries. In Proc. Text Summarization Branches Out 74–81 (ACL, 2004).

30. Banerjee, S. & Lavie, A. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In Proc. ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (eds Goldstein, J. et al.) 65–72 (ACL, 2005).

31. Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (eds Burstein, J. et al.) 4171–4186 (ACL, 2019); https://doi.org/10.18653/v1/N19-1423

32. Fedus, W., Zoph, B. & Shazeer, N. Switch transformers: scaling to trillion parameter models with simple and efficient sparsity. J. Mach. Learn. Res. 23, 1–39 (2022).

33. Houlsby, N. et al. Parameter-efficient transfer learning for NLP. In Proc. 36th International Conference on Machine Learning (eds Chaudhuri, K. & Salakhutdinov, R.) 2790–2799 (PMLR, 2019).

34. Moor, M. et al. Foundation models for generalist medical artificial intelligence. Nature 616, 259–265 (2023); https://doi.org/10.1038/s41586-023-05881-4

35. Luo, R. et al. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Brief. Bioinform. 23, bbac409 (2022); https://doi.org/10.1093/bib/bbac409

36. Yang, X. et al. A large language model for electronic health records. npj Digit. Med. 5, 194 (2022); https://doi.org/10.1038/s41746-022-00742-2

37. Rasmy, L., Xiang, Y., Xie, Z., Tao, C. & Zhi, D. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. npj Digit. Med. 4, 86 (2021); https://doi.org/10.1038/s41746-021-00455-y

38. Madani, A. et al. Large language models generate functional protein sequences across diverse families. Nat. Biotechnol. 41, 1099–1106 (2023); https://doi.org/10.1038/s41587-022-01618-2

39. Thirunavukarasu, A. J. et al. Large language models in medicine. Nat. Med. 29, 1930–1940 (2023); https://doi.org/10.1038/s41591-023-02448-8

40. Liu, S. et al. Multimodal data matters: language model pre-training over structured and unstructured electronic health records. IEEE J. Biomed. Health Inform. 27, 504–514 (2023); https://doi.org/10.1109/JBHI.2022.3217810

41. Peiris, H., Hayat, M., Chen, Z., Egan, G. & Harandi, M. Uncertainty-guided dual-views for semi-supervised volumetric medical image segmentation. Nat. Mach. Intell. 5, 724–738 (2023); https://doi.org/10.1038/s42256-023-00682-w

42. Zhou, L. et al. Self pre-training with masked autoencoders for medical image classification and segmentation. In Proc. IEEE 20th International Symposium on Biomedical Imaging 1–6 (IEEE, 2023).

43. Hu, X., Xu, X. & Shi, Y. How to efficiently adapt large segmentation model (SAM) to medical images. Preprint at https://arxiv.org/abs/2306.13731 (2023).

44. Qiu, Z., Hu, Y., Li, H. & Liu, J. Learnable ophthalmology SAM. Preprint at https://arxiv.org/abs/2304.13425 (2023).

45. Cao, H. et al. Swin-Unet: Unet-like pure transformer for medical image segmentation. In Proc. Computer Vision–ECCV 2022 Workshops (eds Karlinsky, L. et al.) 205–218 (Springer, 2023).

46. Schäfer, R. et al. Overcoming data scarcity in biomedical imaging with a foundational multi-task model. Nat. Comput. Sci. 4, 495–509 (2024); https://doi.org/10.1038/s43588-024-00662-z

47. Pai, S. et al. Foundation model for cancer imaging biomarkers. Nat. Mach. Intell. 6, 354–367 (2024); https://doi.org/10.1038/s42256-024-00807-9

48. Tu, T. et al. Towards generalist biomedical AI. NEJM AI 1, AIoa2300138 (2024); https://doi.org/10.1056/AIoa2300138

49. Zhou, H.-Y. et al. Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports. Nat. Mach. Intell. 4, 32–40 (2022); https://doi.org/10.1038/s42256-021-00425-9

50. Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T. J. & Zou, J. A visual–language foundation model for pathology image analysis using medical Twitter. Nat. Med. 29, 2307–2316 (2023); https://doi.org/10.1038/s41591-023-02504-3

51. Zhang, K. et al. A generalist vision–language foundation model for diverse biomedical tasks. Nat. Med. https://doi.org/10.1038/s41591-024-03185-2 (2024).

52. Zhou, H.-Y., Adithan, S., Acosta, J. N., Topol, E. J. & Rajpurkar, P. MedVersa: a generalist foundation model for medical image interpretation. Preprint at https://arxiv.org/abs/2405.07988 (2025).

53. Yang, J. et al. Poisoning medical knowledge using large language models. Nat. Mach. Intell. 6, 1156–1168 (2024); https://doi.org/10.1038/s42256-024-00899-3

54. Jin, C. et al. Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat. Commun. 11, 5088 (2020); https://doi.org/10.1038/s41467-020-18685-1
href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Development%20and%20evaluation%20of%20an%20artificial%20intelligence%20system%20for%20COVID-19%20diagnosis&amp;journal=Nat.%20Commun.&amp;doi=10.1038%2Fs41467-020-18685-1&amp;volume=11&amp;publication_year=2020&amp;author=Jin%2CC\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"55.\">\n<p class=\"c-article-references__text\" id=\"ref-CR55\">Chen, X., Fan, H., Girshick, R. &amp; He, K. Improved baselines with momentum contrastive learning. Preprint at <a href=\"http:\/\/arxiv.org\/abs\/2003.04297\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"http:\/\/arxiv.org\/abs\/2003.04297\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2003.04297<\/a> (2020).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"56.\">\n<p class=\"c-article-references__text\" id=\"ref-CR56\">Chen, T., Kornblith, S., Norouzi, M. &amp; Hinton, G. A simple framework for contrastive learning of visual representations. In Proc. 37th International Conference on Machine Learning (eds Daum\u00e9 III, H. &amp; Singh, A.) 1597\u20131607 (PMLR, 2020).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"57.\">\n<p class=\"c-article-references__text\" id=\"ref-CR57\">He, K., Fan, H., Wu, Y., Xie, S. &amp; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proc. 
IEEE\/CVF Conference on Computer Vision and Pattern Recognition 9726\u20139735 (IEEE, 2020).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"58.\">\n<p class=\"c-article-references__text\" id=\"ref-CR58\">van den Oord, A., Li, Y. &amp; Vinyals, O. Representation learning with contrastive predictive coding. Preprint at <a href=\"http:\/\/arxiv.org\/abs\/1807.03748\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"http:\/\/arxiv.org\/abs\/1807.03748\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/arxiv.org\/abs\/1807.03748<\/a> (2019).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"59.\">\n<p class=\"c-article-references__text\" id=\"ref-CR59\">He, K. et al. Masked autoencoders are scalable vision learners. In Proc. IEEE\/CVF Conference on Computer Vision and Pattern Recognition 15979\u201315988 <a href=\"https:\/\/doi.org\/10.1109\/CVPR52688.2022.01553\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"10.1109\/CVPR52688.2022.01553\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/doi.org\/10.1109\/CVPR52688.2022.01553<\/a> (IEEE, 2022).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"60.\">\n<p class=\"c-article-references__text\" id=\"ref-CR60\">Brody, S., Alon, U. &amp; Yahav, E. How attentive are graph attention networks? In Proc. International Conference on Learning Representations (OpenReview.net, 2022).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"61.\">\n<p class=\"c-article-references__text\" id=\"ref-CR61\">Pelka, O., Koitka, S., R\u00fcckert, J., Nensa, F. &amp; Friedrich, C. M. 
Radiology objects in context (ROCO): a multimodal image dataset. In Proc. Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis (eds Stoyanov, D. et al.) 180\u2013189 (Springer, 2018).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"62.\">\n<p class=\"c-article-references__text\" id=\"ref-CR62\">Liu, H., Li, C., Wu, Q. &amp; Lee, Y. J. Visual instruction tuning. In Proc. 37th Conference on Neural Information Processing Systems 34892\u201334916 (Curran Associates, 2023).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"63.\">\n<p class=\"c-article-references__text\" id=\"ref-CR63\">Abnar, S. &amp; Zuidema, W. Quantifying attention flow in transformers. In Proc. 58th Annual Meeting of the Association for Computational Linguistics (eds Jurafsky, D. et al.) 4190\u20134197 (ACL, 2020); <a href=\"https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.385\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"10.18653\/v1\/2020.acl-main.385\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/doi.org\/10.18653\/v1\/2020.acl-main.385<\/a><\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"64.\">\n<p class=\"c-article-references__text\" id=\"ref-CR64\">Chefer, H., Gur, S. &amp; Wolf, L. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In Proc. 
IEEE\/CVF International Conference on Computer Vision 387\u2013396 (IEEE, 2021); <a href=\"https:\/\/doi.org\/10.1109\/ICCV48922.2021.00045\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"10.1109\/ICCV48922.2021.00045\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/doi.org\/10.1109\/ICCV48922.2021.00045<\/a><\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"65.\">\n<p class=\"c-article-references__text\" id=\"ref-CR65\">Loshchilov, I. &amp; Hutter, F. Decoupled weight decay regularization. In Proc. International Conference on Learning Representations (OpenReview.net, 2019).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"66.\">\n<p class=\"c-article-references__text\" id=\"ref-CR66\">Paszke, A. et al. Pytorch: an imperative style, high-performance deep learning library. In Proc. Advances in Neural Information Processing Systems 32 (eds Wallach, H. et al.) 8024\u20138035 (Curran Associates, 2019).<\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"67.\">\n<p class=\"c-article-references__text\" id=\"ref-CR67\">Pedregosa, F. et al. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 
<b>12<\/b>, 2825\u20132830 (2011).<\/p>\n<p class=\"c-article-references__links u-hide-print\"><a data-track=\"click_references\" data-track-action=\"google scholar reference\" data-track-value=\"google scholar reference\" data-track-label=\"link\" data-track-item_id=\"link\" rel=\"nofollow noopener\" aria-label=\"Google Scholar reference 67\" href=\"http:\/\/scholar.google.com\/scholar_lookup?&amp;title=Scikit-learn%3A%20machine%20learning%20in%20Python&amp;journal=J.%20Mach.%20Learn.%20Res.&amp;volume=12&amp;pages=2825-2830&amp;publication_year=2011&amp;author=Pedregosa%2CF\" target=\"_blank\"><br \/>\n                    Google Scholar<\/a>\u00a0\n                <\/p>\n<\/li>\n<li class=\"c-article-references__item js-c-reading-companion-references-item\" data-counter=\"68.\">\n<p class=\"c-article-references__text\" id=\"ref-CR68\">Ma, L. D. et al. MedMPT. GitHub <a href=\"https:\/\/github.com\/maliangdi\/MedMPT\" data-track=\"click_references\" data-track-action=\"external reference\" data-track-value=\"external reference\" data-track-label=\"https:\/\/github.com\/maliangdi\/MedMPT\" rel=\"nofollow noopener\" target=\"_blank\">https:\/\/github.com\/maliangdi\/MedMPT<\/a> (2025).<\/p>\n<\/li>\n","protected":false},"excerpt":{"rendered":"Agusti, A., Vogelmeier, C. F. &amp; Halpin, D. M. G. 
Tackling the global burden of lung disease through&hellip;\n","protected":false},"author":2,"featured_media":174618,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[275],"tags":[34749,3547,2564,18,910,135,475,474,19,17,610,99452],"class_list":{"0":"post-174617","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-biomedical-engineering","9":"tag-biomedical-engineering-biotechnology","10":"tag-biomedicine","11":"tag-eire","12":"tag-general","13":"tag-health","14":"tag-health-care","15":"tag-healthcare","16":"tag-ie","17":"tag-ireland","18":"tag-machine-learning","19":"tag-respiratory-tract-diseases"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@ie\/115530108978089477","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/174617","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=174617"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/174617\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/174618"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=174617"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=174617"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=174617"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel
}","templated":true}]}}