{"id":250657,"date":"2025-07-09T11:47:14","date_gmt":"2025-07-09T11:47:14","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/250657\/"},"modified":"2025-07-09T11:47:14","modified_gmt":"2025-07-09T11:47:14","slug":"implementing-artificial-intelligence-in-critical-care-medicine-a-consensus-of-22-critical-care","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/250657\/","title":{"rendered":"Implementing Artificial Intelligence in Critical Care Medicine: a consensus of 22 | Critical Care"},"content":{"rendered":"<p>Artificial intelligence (AI) is rapidly entering critical care, where it holds the potential to improve diagnostic accuracy and prognostication, streamline intensive care unit (ICU) workflows, and enable personalized care. [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 1\" title=\"Deo RC. Machine learning in medicine. Circulation. 2015;132:1920\u201330.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR1\" id=\"ref-link-section-d2079306e1000\" target=\"_blank\" rel=\"noopener\">1<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 2\" title=\"Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25:44\u201356.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR2\" id=\"ref-link-section-d2079306e1003\" target=\"_blank\" rel=\"noopener\">2<\/a>] Without a structured approach to implementation, evaluation, and control, this transformation may be hindered or possibly lead to patient harm and unintended consequences.<\/p>\n<p>Despite the need to support overwhelmed ICUs facing staff shortages, increasing case complexity, and rising costs, most AI tools remain poorly validated and untested in real settings. 
[<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 3\" title=\"Shillan D, Sterne JAC, Champneys A, Gibbison B. Use of machine learning to analyse routinely collected intensive care unit data: A systematic review. Crit Care 2019; 23 &#010;                  https:\/\/doi.org\/10.1186\/S13054-019-2564-9&#010;                  &#010;                .\u00a0\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR3\" id=\"ref-link-section-d2079306e1009\" target=\"_blank\" rel=\"noopener\">3<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 45\" title=\"Groen AM, Kraan R, Amirkhan SF, Daams JG, Maas M. A systematic review on the use of explainability in deep learning systems for computer aided diagnosis in radiology: Limited use of explainable AI? Eur J Radiol 2022; 157. &#010;                  https:\/\/doi.org\/10.1016\/j.ejrad.2022.110592&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR45\" id=\"ref-link-section-d2079306e1012\" target=\"_blank\" rel=\"noopener\">45<\/a>]<\/p>\n<p>To address this gap, we issue a call to action for the critical care community: the integration of AI into the ICU must follow a pragmatic, clinically informed, and risk-aware framework. [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Choudhury A, Asan O. Role of Artificial Intelligence in Patient Safety Outcomes: Systematic Literature Review. JMIR Med Inform. 2020;8: e18599.\" href=\"#ref-CR6\" id=\"ref-link-section-d2079306e1018\">6<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Zhang J, Zhang Z-M. 
Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak. 2023;23:7.\" href=\"#ref-CR7\" id=\"ref-link-section-d2079306e1018_1\">7<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 8\" title=\"Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. 2021;4:140.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR8\" id=\"ref-link-section-d2079306e1021\" target=\"_blank\" rel=\"noopener\">8<\/a>] As a result of a multidisciplinary consensus process with a panel of intensivists, AI researchers, data scientists and experts, this paper offers concrete recommendations to guide the safe, effective, and meaningful adoption of AI into critical care.<\/p>\n<p>Methods<\/p>\n<p>The consensus presented in this manuscript emerged through expert discussions, rather than formal grading or voting on evidence, in recognition that AI in critical care is a rapidly evolving field where many critical questions remain unanswered. Participants were selected by the consensus chairs (MC, AB, FT, and JLV) based on their recognized contributions to AI in critical care to ensure representation from both clinical end-users and AI developers. Discussions were iterative with deliberate engagement across domains, refining recommendations through critical examination of real-world challenges, current research, and regulatory landscapes.<\/p>\n<p>While not purely based on traditional evidence grading, this manuscript reflects a rigorous, expert-driven synthesis of key barriers and opportunities for AI in critical care, aiming to bridge existing knowledge gaps and provide actionable guidance in a rapidly evolving field. 
To guide physicians in this complex and rapidly evolving arena [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 9\" title=\"Kwong JCC, Nguyen D-D, Khondker A, et al. When the Model Trains You: Induced Belief Revision and Its Implications on Artificial Intelligence Research and Patient Care \u2014 A Case Study on Predicting Obstructive Hydronephrosis in Children. NEJM AI 2024; 1. &#010;                  https:\/\/doi.org\/10.1056\/AICS2300004&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR9\" id=\"ref-link-section-d2079306e1034\" target=\"_blank\" rel=\"noopener\">9<\/a>], some of the current taxonomy and classifications are reported in Fig.\u00a0<a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#Fig1\" target=\"_blank\" rel=\"noopener\">1<\/a>.<\/p>\n<p><b id=\"Fig1\" class=\"c-article-section__figure-caption\" data-test=\"figure-caption-text\">Fig.\u00a01<\/b><a class=\"c-article-section__figure-link\" data-test=\"img-link\" data-track=\"click\" data-track-label=\"image\" data-track-action=\"view figure\" href=\"https:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2\/figures\/1\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" aria-describedby=\"Fig1\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/13054_2025_5532_Fig1_HTML.png\" alt=\"figure 1\" loading=\"lazy\" width=\"685\" height=\"321\"\/><\/a><\/p>\n<p>Taxonomy of AI in critical care<\/p>\n<p>Main barriers and challenges for AI integration in critical care<\/p>\n<p>The main barriers to AI implementation in critical care determined by the expert consensus are presented in this section. 
These unresolved and evolving challenges have prompted us to develop a series of recommendations to physicians and other healthcare workers, patients, and societal stakeholders, emphasizing the principles we believe should guide the advancement of AI in healthcare. Challenges and principles are divided into four main areas: 1) human-centric AI; 2) recommendations for clinician training on AI use; 3) standardization of data models and networks; and 4) AI governance. These are summarized in Fig.\u00a0<a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#Fig2\" target=\"_blank\" rel=\"noopener\">2<\/a> and discussed in more detail in the next paragraphs.<\/p>\n<p><b id=\"Fig2\" class=\"c-article-section__figure-caption\" data-test=\"figure-caption-text\">Fig.\u00a02<\/b><a class=\"c-article-section__figure-link\" data-test=\"img-link\" data-track=\"click\" data-track-label=\"image\" data-track-action=\"view figure\" href=\"https:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2\/figures\/2\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" aria-describedby=\"Fig2\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/13054_2025_5532_Fig2_HTML.png\" alt=\"figure 2\" loading=\"lazy\" width=\"685\" height=\"517\"\/><\/a><\/p>\n<p>Recommendations, according to development of standards for networking, data sharing and research, ethical challenges, regulations and societal challenges, and clinical practice<\/p>\n<p>The development and maintenance of AI applications in medicine require enormous computational power, infrastructure, funding and technical expertise. 
Consequently, AI development is led by major technology companies whose goals may not always align with those of patients or healthcare systems [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 10\" title=\"Yu K-H, Healey E, Leong T-Y, Kohane IS, Manrai AK. Medical Artificial Intelligence and Human Values. N Engl J Med. 2024;390:1895\u2013904.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR10\" id=\"ref-link-section-d2079306e1089\" target=\"_blank\" rel=\"noopener\">10<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 11\" title=\"Hogg HDJ, Al-Zubaidy M, Talks J, et al. Stakeholder Perspectives of Clinical Artificial Intelligence Implementation: Systematic Review of Qualitative Evidence. J Med Internet Res 2023;25:e39742 &#010;                  https:\/\/www.jmir.org\/2023\/1\/e39742&#010;                  &#010;                 2023; 25: e39742.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR11\" id=\"ref-link-section-d2079306e1092\" target=\"_blank\" rel=\"noopener\">11<\/a>]. The rapid diffusion of new AI models contrasts sharply with the evidence-based culture of medicine. This raises concerns about the deployment of insufficiently validated clinical models. [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 12\" title=\"Kerasidou A, Kerasidou X (Charalampia). AI in Medicine. 2021.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR12\" id=\"ref-link-section-d2079306e1095\" target=\"_blank\" rel=\"noopener\">12<\/a>]<\/p>\n<p>Moreover, many models are developed using datasets that underrepresent vulnerable populations, leading to algorithmic bias. 
[<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 13\" title=\"Omiye JA, Lester JC, Spichak S, Rotemberg V, Daneshjou R. Large language models propagate race-based medicine. npj Digital Medicine 2023 6:1 2023; 6: 1\u20134.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR13\" id=\"ref-link-section-d2079306e1101\" target=\"_blank\" rel=\"noopener\">13<\/a>] AI models may lack both temporal validity (when applied to new data at a different time) and geographic validity (when applied across different institutions or regions). Variability in temporal or geographical disease patterns, including demographics, healthcare infrastructure, and the design of Electronic Health Records (EHR), further complicates generalizability.<\/p>\n<p>Finally, the use of AI raises ethical concerns, including trust in algorithmic recommendations and the risk of weakening the human connection at the core of medical practice: the millennia-old relationship between physicians and patients. [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 14\" title=\"Rony MKK, Parvin MR, Wahiduzzaman M, Debnath M, Bala S Das, Kayesh I. \u201cI Wonder if my Years of Training and Expertise Will be Devalued by Machines\u201d: Concerns About the Replacement of Medical Professionals by Artificial Intelligence. SAGE Open Nurs 2024; 10. &#010;                  https:\/\/doi.org\/10.1177\/23779608241245220\/ASSET\/IMAGES\/LARGE\/10.1177_23779608241245220-FIG1.JPEG&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR14\" id=\"ref-link-section-d2079306e1107\" target=\"_blank\" rel=\"noopener\">14<\/a>]<\/p>\n<p>Recommendations<\/p>\n<p>Here we report recommendations, divided into four domains. 
Figure\u00a0<a data-track=\"click\" data-track-label=\"link\" data-track-action=\"figure anchor\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#Fig3\" target=\"_blank\" rel=\"noopener\">3<\/a> reports a summary of five representative AI use cases in critical care\u2014ranging from waveform analysis to personalized clinician training\u2014mapped across these four domains.<\/p>\n<p><b id=\"Fig3\" class=\"c-article-section__figure-caption\" data-test=\"figure-caption-text\">Fig.\u00a03<\/b><a class=\"c-article-section__figure-link\" data-test=\"img-link\" data-track=\"click\" data-track-label=\"image\" data-track-action=\"view figure\" href=\"https:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2\/figures\/3\" rel=\"nofollow noopener\" target=\"_blank\"><img decoding=\"async\" aria-describedby=\"Fig3\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/07\/13054_2025_5532_Fig3_HTML.png\" alt=\"figure 3\" loading=\"lazy\" width=\"685\" height=\"356\"\/><\/a><\/p>\n<p>Summary of five representative AI use cases in critical care\u2014ranging from waveform analysis to personalized clinician training\u2014mapped across these four domains<\/p>\n<p>Strive for human-centric and ethical AI utilization in healthcare<\/p>\n<p>Alongside its significant potential benefits, the risk of AI misuse must not be underestimated. AI algorithms may be harmful when prematurely deployed without adequate control [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 9\" title=\"Kwong JCC, Nguyen D-D, Khondker A, et al. When the Model Trains You: Induced Belief Revision and Its Implications on Artificial Intelligence Research and Patient Care \u2014 A Case Study on Predicting Obstructive Hydronephrosis in Children. NEJM AI 2024; 1. 
&#010;                  https:\/\/doi.org\/10.1056\/AICS2300004&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR9\" id=\"ref-link-section-d2079306e1145\" target=\"_blank\" rel=\"noopener\">9<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Perry Wilson F, Martin M, Yamamoto Y, et al. Electronic health record alerts for acute kidney injury: multicenter, randomized clinical trial. BMJ 2021; 372. &#10;                  https:\/\/doi.org\/10.1136\/BMJ.M4786&#10;                  &#10;                .\" href=\"#ref-CR15\" id=\"ref-link-section-d2079306e1148\">15<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Ji Z, Lee N, Frieske R, et al. Survey of Hallucination in Natural Language Generation. ACM Comput Surv 2023; 55. &#10;                  https:\/\/doi.org\/10.1145\/3571730&#10;                  &#10;                .\" href=\"#ref-CR16\" id=\"ref-link-section-d2079306e1148_1\">16<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 17\" title=\"Oniani D, Hilsman J, Peng Y, et al. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. npj Digital Medicine 2023 6:1 2023; 6: 1\u201310.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR17\" id=\"ref-link-section-d2079306e1151\" target=\"_blank\" rel=\"noopener\">17<\/a>]. 
In addition to the regulatory frameworks that have been established to maintain control (presented in Sect. &#8220;<a data-track=\"click\" data-track-label=\"link\" data-track-action=\"section anchor\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#Sec19\" target=\"_blank\" rel=\"noopener\">Governance and regulation for AI in Critical Care<\/a>&#8221;) [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"EU Artificial Intelligence Act| Up-to-date developments and analyses of the EU AI Act. &#010;                  https:\/\/artificialintelligenceact.eu\/&#010;                  &#010;                 (accessed July 6, 2024).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR18\" id=\"ref-link-section-d2079306e1157\" target=\"_blank\" rel=\"noopener\">18<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 19\" title=\"Artificial Intelligence and Machine Learning in Software as a Medical Device| FDA. &#010;                  https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-and-machine-learning-software-medical-device&#010;                  &#010;                 (accessed July 6, 2024).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR19\" id=\"ref-link-section-d2079306e1161\" target=\"_blank\" rel=\"noopener\">19<\/a>], we advocate for clinicians to be involved in this process and to provide guidance.<\/p>\n<p>Develop human-centric AI in healthcare<\/p>\n<p>AI development in medicine and healthcare should maintain a human-centric perspective, promote empathetic care, and increase the time allocated to patient-physician communication and interaction. 
For example, the use of AI to replace humans in time-consuming or bureaucratic tasks such as documentation and transfers of care [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Tierney AA, Gayre G, Hoberman B, et al. Ambient Artificial Intelligence Scribes to Alleviate the Burden of Clinical Documentation. NEJM Catal 2024; 5. &#10;                  https:\/\/doi.org\/10.1056\/CAT.23.0404\/ASSET\/306B0943-50F3-4D5F-917B-797FC28F3BF6\/ASSETS\/GRAPHIC\/CAT.23.0404-F3.PNG&#10;                  &#10;                .\" href=\"#ref-CR20\" id=\"ref-link-section-d2079306e1171\">20<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"HIMSS24: How Epic is building out AI, ambient tech in EHRs. &#10;                  https:\/\/www.fiercehealthcare.com\/ai-and-machine-learning\/himss24-how-epic-building-out-ai-ambient-technology-clinicians&#10;                  &#10;                 (accessed July 5, 2024).\" href=\"#ref-CR21\" id=\"ref-link-section-d2079306e1171_1\">21<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 22\" title=\"Chen S, Guevara M, Moningi S, et al. The effect of using a large language model to respond to patient messages. Lancet Digit Health. 2024;6:e379\u201381.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR22\" id=\"ref-link-section-d2079306e1174\" target=\"_blank\" rel=\"noopener\">22<\/a>]. It could craft clinical notes, ensuring critical information is accurately captured in health records while reducing administrative burdens [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 23\" title=\"Lin S. 
A Clinician\u2019s Guide to Artificial Intelligence (AI): Why and How Primary Care Should Lead the Health Care AI Revolution. J Am Board Fam Med. 2022;35:175.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR23\" id=\"ref-link-section-d2079306e1177\" target=\"_blank\" rel=\"noopener\">23<\/a>].<\/p>\n<p>Establish a social contract for AI use in healthcare<\/p>\n<p>There is a significant concern that AI may exacerbate societal healthcare disparities [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 24\" title=\"Pyrros A, Rodr\u00edguez-Fern\u00e1ndez JM, Borstelmann SM, et al. Detecting Racial\/Ethnic Health Disparities Using Deep Learning From Frontal Chest Radiography. J Am Coll Radiol. 2022;19:184\u201391.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR24\" id=\"ref-link-section-d2079306e1188\" target=\"_blank\" rel=\"noopener\">24<\/a>]. When considering AI\u2019s potential influence on physicians&#8217; choices and behaviour, the possibility of introducing or reinforcing biases should be examined rigorously to avoid perpetuating existing health inequities and unfair data-driven associations [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 24\" title=\"Pyrros A, Rodr\u00edguez-Fern\u00e1ndez JM, Borstelmann SM, et al. Detecting Racial\/Ethnic Health Disparities Using Deep Learning From Frontal Chest Radiography. J Am Coll Radiol. 2022;19:184\u201391.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR24\" id=\"ref-link-section-d2079306e1191\" target=\"_blank\" rel=\"noopener\">24<\/a>]. 
It is thus vital to involve patients and societal representatives in discussions regarding the vision of the next healthcare era, its operations, goals, and limits of action [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 25\" title=\"Zack T, Lehman E, Suzgun M, et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. Lancet Digit Health. 2024;6:e12-22.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR25\" id=\"ref-link-section-d2079306e1194\" target=\"_blank\" rel=\"noopener\">25<\/a>]. The desirable aim would be to establish a social contract for AI in healthcare, ensuring its accountability and transparency. A social contract for AI in healthcare should define clear roles and responsibilities for all stakeholders\u2014clinicians, patients, developers, regulators, and administrators. This includes clinicians being equipped to critically evaluate AI tools, developers ensuring transparency, safety, and clinical relevance, and regulators enforcing performance, equity, and post-deployment monitoring standards. We advocate for hospitals to establish formal oversight mechanisms, such as dedicated AI committees, to ensure the safe implementation of AI systems. Such structures would help formalize shared accountability and ensure that AI deployment remains aligned with the core values of fairness, safety, and human-centred care.<\/p>\n<p>Prioritize human oversight and ethical governance in clinical AI<\/p>\n<p>Since the Hippocratic Oath, patient care has been grounded in the doctor-patient relationship, in which clinicians bear the ethical responsibility to maximize patient benefit while minimizing harm. As AI technologies are increasingly integrated into healthcare, clinicians\u2019 responsibility must also extend to overseeing the development and application of these tools. 
In the ICU, where treatment decisions balance between individual patient preferences and societal consideration, healthcare professionals must lead this transition [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 26\" title=\"van de Sande D, van Genderen ME, Huiskens J, Gommers D, van Bommel J. Moving from bytes to bedside: a systematic review on the use of artificial intelligence in the intensive care unit. Intensive Care Med. 2021;47:750\u201360.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR26\" id=\"ref-link-section-d2079306e1205\" target=\"_blank\" rel=\"noopener\">26<\/a>]. As intensivists, we should maintain governance of this process, ensuring ethical principles and scientific rigor guide the development of frameworks to measure fairness, assess bias, and establish acceptable thresholds for AI uncertainty [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Choudhury A, Asan O. Role of Artificial Intelligence in Patient Safety Outcomes: Systematic Literature Review. JMIR Med Inform. 2020;8: e18599.\" href=\"#ref-CR6\" id=\"ref-link-section-d2079306e1208\">6<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Zhang J, Zhang Z-M. Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak. 2023;23:7.\" href=\"#ref-CR7\" id=\"ref-link-section-d2079306e1208_1\">7<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 8\" title=\"Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. 
2021;4:140.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR8\" id=\"ref-link-section-d2079306e1211\" target=\"_blank\" rel=\"noopener\">8<\/a>].<\/p>\n<p>While AI models are rapidly emerging, most are being developed outside the medical community. To better align AI development with clinical ethics, we propose the incorporation of multidisciplinary boards comprising clinicians, patients, ethicists, and technological experts, who should be responsible for systematically reviewing algorithmic behaviour in critical care, assessing the risks of bias, and promoting transparency in decision-making processes. In this context, AI development offers an opportunity to rethink and advance ethical principles in patient care.<\/p>\n<p>Recommendations for clinician training on AI useDevelop and assess the Human-AI interface<\/p>\n<p>Despite some promising results [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 27\" title=\"Shamout F, Zhu T, Clifton DA. Machine Learning for Clinical Outcome Prediction. IEEE Rev Biomed Eng. 2021;14:116\u201326.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR27\" id=\"ref-link-section-d2079306e1230\" target=\"_blank\" rel=\"noopener\">27<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 28\" title=\"Levi R, Carli F, Ar\u00e9valo AR, et al. Artificial intelligence-based prediction of transfusion in the intensive care unit in patients with gastrointestinal bleeding. BMJ Health Care Inform 2021; 28. 
&#010;                  https:\/\/doi.org\/10.1136\/BMJHCI-2020-100245&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR28\" id=\"ref-link-section-d2079306e1233\" target=\"_blank\" rel=\"noopener\">28<\/a>], the clinical application of AI remains limited [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Hah H, Goldin D. Moving toward AI-assisted decision-making: Observation on clinicians\u2019 management of multimedia patient information in synchronous and asynchronous telehealth contexts. Health Informatics J 2022; 28. &#10;                  https:\/\/doi.org\/10.1177\/14604582221077049\/ASSET\/IMAGES\/LARGE\/10.1177_14604582221077049-FIG4.JPEG&#10;                  &#10;                .\" href=\"#ref-CR29\" id=\"ref-link-section-d2079306e1236\">29<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Shen J, Zhang CJP, Jiang B, et al. Artificial intelligence versus clinicians in disease diagnosis: Systematic review. JMIR Med Inform 2019; 7. &#10;                  https:\/\/doi.org\/10.2196\/10010&#10;                  &#10;                .\" href=\"#ref-CR30\" id=\"ref-link-section-d2079306e1236_1\">30<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 31\" title=\"Zhang A, Xing L, Zou J, Wu JC. Shifting machine learning for healthcare from development to deployment and from models to data. Nature Biomedical Engineering 2022 6:12 2022; 6: 1330\u201345.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR31\" id=\"ref-link-section-d2079306e1239\" target=\"_blank\" rel=\"noopener\">31<\/a>]. 
The first step toward integration is to understand how clinicians interact with AI and to design systems that complement, rather than disrupt, clinical reasoning [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 32\" title=\"Shickel B, Loftus TJ, Adhikari L, Ozrazgat-Baslanti T, Bihorac A, Rashidi P. DeepSOFA: A Continuous Acuity Score for Critically Ill Patients using Clinically Interpretable Deep Learning. Sci Rep 2019; 9. &#010;                  https:\/\/doi.org\/10.1038\/S41598-019-38491-0&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR32\" id=\"ref-link-section-d2079306e1242\" target=\"_blank\" rel=\"noopener\">32<\/a>]. This translates into the need for specific research on the human-AI interface, where a key area of focus is identifying the most effective cognitive interface between clinicians and AI systems. On one hand, physicians may place excessive trust in AI model results, possibly overlooking crucial information. For example, in sepsis detection an AI algorithm might miss an atypical presentation or a tropical infectious disease due to limitations in its training data; if clinicians overly trust the algorithm\u2019s negative output, they may delay initiating a necessary antibiotic. On the other hand, the behaviour of clinicians can influence AI responses in unintended ways. To better reflect this interaction, the concept of synergy between human and AI has been proposed in recent years, emphasizing that AI supports rather than replaces human clinicians [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 33\" title=\"Crigger E, Reinbold K, Hanson C, Kao A, Blake K, Irons M. Trustworthy Augmented Intelligence in Health Care. J Med Syst. 
2022;46:12.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR33\" id=\"ref-link-section-d2079306e1246\" target=\"_blank\" rel=\"noopener\">33<\/a>]. This collaboration has been described in two forms: human-AI augmentation (when human\u2013AI interface enhances clinical performance compared to human alone) and human-AI synergy (where the combined performance exceeds that of both the human and the AI individually) [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 34\" title=\"Vaccaro M, Almaatouq A, Malone T. When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour 2024 8:12 2024; 8: 2293\u2013303.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR34\" id=\"ref-link-section-d2079306e1249\" target=\"_blank\" rel=\"noopener\">34<\/a>]. To support the introduction of AI in clinical practice in intensive care, we propose starting with the concept of human-AI augmentation, which is more inclusive and better established according to medical literature [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 34\" title=\"Vaccaro M, Almaatouq A, Malone T. When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour 2024 8:12 2024; 8: 2293\u2013303.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR34\" id=\"ref-link-section-d2079306e1252\" target=\"_blank\" rel=\"noopener\">34<\/a>]. 
A straightforward example of such augmentation is the development of interpretable, real-time dashboards that synthesize complex multidimensional data into visual formats, thereby enhancing clinicians\u2019 situational awareness without overwhelming them.<\/p>\n<p>Improve disease characterization with AI<\/p>\n<p>Traditional procedures for classifying patients and labelling diseases and syndromes based on a few simple criteria are the basis of medical education, but they may fail to grasp the complexity of underlying pathology and lead to suboptimal care. In critical care, where patient conditions are complex and rapidly evolving, AI-driven phenotyping plays a crucial role by leveraging vast amounts of genetic, radiological, biomarker, and physiological data. AI-based phenotyping methods can be broadly categorized into two approaches.<\/p>\n<p>One approach involves unsupervised clustering, in which patients are grouped based on shared features or patterns without prior labelling. Seymour et al. demonstrated how machine learning can stratify septic patients into clinically meaningful subgroups using high-dimensional data, which can subsequently inform risk assessment and prognosis [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 35\" title=\"Seymour CW, Kennedy JN, Wang S, et al. Derivation, Validation, and Potential Treatment Implications of Novel Clinical Phenotypes for Sepsis. JAMA. 2019;321:2003.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR35\" id=\"ref-link-section-d2079306e1266\" target=\"_blank\" rel=\"noopener\">35<\/a>]. 
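As a minimal, hedged sketch of this first approach (the features, cluster count, and data below are synthetic and purely illustrative, not those of the cited study), unsupervised clustering of standardized physiological features can be as simple as:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each patient to the nearest cluster centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned patients
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# Synthetic cohort of 300 patients: z-scored heart rate, lactate, creatinine
X = np.vstack([
    rng.normal([0, 0, 0], 0.5, (100, 3)),  # "stable" phenotype
    rng.normal([2, 2, 0], 0.5, (100, 3)),  # "hyperdynamic, high-lactate" phenotype
    rng.normal([0, 1, 3], 0.5, (100, 3)),  # "renal-dysfunction" phenotype
])
labels = kmeans(X, k=3)
print(np.bincount(labels, minlength=3))  # cohort size per candidate phenotype
```

Real phenotyping studies use far richer feature sets, careful preprocessing, and stability analyses to choose the number of clusters; the point here is only that the algorithm groups patients without any outcome labels.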
Another promising possibility is the use of supervised or semi-supervised clustering techniques, which incorporate known outcomes or partial labelling to enhance the phenotyping of patient subgroups [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 36\" title=\"Peikari M, Salama S, Nofech-Mozes S, Martel AL. A Cluster-then-label Semi-supervised Learning Approach for Pathology Image Classification. Scientific Reports 2018; 8: 1\u201313.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR36\" id=\"ref-link-section-d2079306e1269\" target=\"_blank\" rel=\"noopener\">36<\/a>].<\/p>\n<p>The second approach falls under the causal inference framework, where phenotyping is conducted with the specific objective of identifying subgroups that benefit from a particular intervention due to a causal association. This method aims to enhance personalized treatment by identifying how treatment effects vary among groups, ensuring that therapies are targeted toward patients most likely to benefit. For example, machine learning has been used to stratify critically ill patients based on their response to specific therapeutic interventions, potentially improving clinical outcomes [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 37\" title=\"Zhang Z, Chen L, Sun B, et al. Identifying septic shock subgroups to tailor fluid strategies through multi-omics integration. Nat Commun. 2024;15:9028.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR37\" id=\"ref-link-section-d2079306e1275\" target=\"_blank\" rel=\"noopener\">37<\/a>]. In a large ICU cohort of patients with traumatic brain injury (TBI), unsupervised clustering identified six distinct subgroups, based on combined neurological and metabolic profiles. 
[<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 38\" title=\"\u00c5kerlund CAI, Holst A, Stocchetti N, et al. Clustering identifies endotypes of traumatic brain injury in an intensive care cohort: a CENTER-TBI study. Crit Care. 2022;26:1\u201315.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR38\" id=\"ref-link-section-d2079306e1278\" target=\"_blank\" rel=\"noopener\">38<\/a>]<\/p>\n<p>These approaches hold significant potential for advancing acute and critical care by ensuring that AI-driven phenotyping is not only descriptive, but also actionable. Before integrating these methodologies into clinical workflows, we need to make sure clinicians can accept the paradigm shift from broad syndromes to specific sub-phenotypes, ultimately supporting the transition toward personalized medicine [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 35\" title=\"Seymour CW, Kennedy JN, Wang S, et al. Derivation, Validation, and Potential Treatment Implications of Novel Clinical Phenotypes for Sepsis. JAMA. 2019;321:2003.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR35\" id=\"ref-link-section-d2079306e1284\" target=\"_blank\" rel=\"noopener\">35<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Mathew D, Giles JR, Baxter AE, et al. Deep immune profiling of COVID-19 patients reveals distinct immunotypes with therapeutic implications. Science. 1979;2020:369. 
&#10;                  https:\/\/doi.org\/10.1126\/science.abc8511&#10;                  &#10;                .\" href=\"#ref-CR39\" id=\"ref-link-section-d2079306e1287\">39<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Sinha P, Furfaro D, Cummings MJ, et al. Latent class analysis reveals COVID-19-related acute respiratory distress syndrome subgroups with differential responses to corticosteroids. Am J Respir Crit Care Med. 2021;204:1274\u201385.\" href=\"#ref-CR40\" id=\"ref-link-section-d2079306e1287_1\">40<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 41\" title=\"Maslove DM, Tang B, Shankar-Hari M, et al. Redefining critical illness. Nature Medicine 2022 28:6 2022; 28: 1141\u20138.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR41\" id=\"ref-link-section-d2079306e1290\" target=\"_blank\" rel=\"noopener\">41<\/a>].<\/p>\n<p>Ensure AI training for responsible use of AI in healthcare<\/p>\n<p>In addition to clinical practice, undergraduate medical education is also directly influenced by AI transformation [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 42\" title=\"Wood EA, Ange BL, Miller DD. Are We Ready to Integrate Artificial Intelligence Literacy into Medical School Curriculum: Students and Faculty Survey. J Med Educ Curric Dev. 2021;8:238212052110240.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR42\" id=\"ref-link-section-d2079306e1301\" target=\"_blank\" rel=\"noopener\">42<\/a>] as future workers need to be equipped to understand and use these technologies. 
Providing training and knowledge from the start of their education requires that all clinicians understand data science and AI&#8217;s fundamental concepts, methods, and limitations, which should be included in the core medical degree curriculum. This will allow clinicians to use and assess AI critically, identify biases and limitations, and make well-informed decisions, which may ultimately help address the medical profession&#8217;s identity crisis and open new careers in data analysis and AI research [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 42\" title=\"Wood EA, Ange BL, Miller DD. Are We Ready to Integrate Artificial Intelligence Literacy into Medical School Curriculum: Students and Faculty Survey. J Med Educ Curric Dev. 2021;8:238212052110240.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR42\" id=\"ref-link-section-d2079306e1304\" target=\"_blank\" rel=\"noopener\">42<\/a>].<\/p>\n<p>In addition to undergraduate education, it is essential to train experienced physicians, nurses, and other allied health professionals [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 43\" title=\"Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health. 2023;2: e0000198.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR43\" id=\"ref-link-section-d2079306e1310\" target=\"_blank\" rel=\"noopener\">43<\/a>]. 
The effects of AI on academic education are profound and beyond the scope of the current manuscript.\u00a0One promising example is the use of AI to support personalized, AI-driven training for clinicians\u2014both in clinical education and in understanding AI-related concepts [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 44\" title=\"Narayanan S, Ramakrishnan R, Durairaj E, Das A. Artificial Intelligence Revolutionizing the Field of Medical Education. Cureus. 2023;15: e49604.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR44\" id=\"ref-link-section-d2079306e1313\" target=\"_blank\" rel=\"noopener\">44<\/a>]. Tools such as chatbots, adaptive simulation platforms, and intelligent tutoring systems can adapt content to students\u2019 learning needs in real time, offering a tailored education. This may be applied to both clinical training and training in AI domains.<\/p>\n<p>Accepting uncertainty in medical decision-making<\/p>\n<p>Uncertainty is an intrinsic part of clinical decision-making, one that clinicians are familiar with and are trained to navigate through experience and intuition. However, AI models introduce a new type of uncertainty, which can undermine clinicians&#8217; trust, especially when models function as opaque \u201cblack boxes\u201d [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Groen AM, Kraan R, Amirkhan SF, Daams JG, Maas M. A systematic review on the use of explainability in deep learning systems for computer aided diagnosis in radiology: Limited use of explainable AI? Eur J Radiol 2022; 157. 
&#10;                  https:\/\/doi.org\/10.1016\/j.ejrad.2022.110592&#10;                  &#10;                .\" href=\"#ref-CR45\" id=\"ref-link-section-d2079306e1324\">45<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Kempt H, Heilinger JC, Nagel SK. \u201cI\u2019m afraid I can\u2019t let you do that, Doctor\u201d: meaningful disagreements with AI in medical contexts. AI Soc. 2023;38:1407\u201314.\" href=\"#ref-CR46\" id=\"ref-link-section-d2079306e1324_1\">46<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 47\" title=\"Chen JH, Dhaliwal G, Yang D. Decoding Artificial Intelligence to Achieve Diagnostic Excellence: Learning From Experts, Examples, and Experience. JAMA. 2022;328:709\u201310.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR47\" id=\"ref-link-section-d2079306e1327\" target=\"_blank\" rel=\"noopener\">47<\/a>]. This increases the cognitive distance between model output and clinical judgment, as clinicians do not know how to interpret this new form of uncertainty. To bridge this gap, explainable AI (XAI) has emerged, providing tools to make model predictions more interpretable and, ideally, more trustworthy, thereby reducing perceived uncertainty [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 48\" title=\"Loftus TJ, Shickel B, Ruppert MM, et al. Uncertainty-aware deep learning in healthcare: A scoping review. PLOS Digital Health. 
2022;1: e0000085.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR48\" id=\"ref-link-section-d2079306e1330\" target=\"_blank\" rel=\"noopener\">48<\/a>].<\/p>\n<p>Yet, we argue that interpretability alone is not enough [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 48\" title=\"Loftus TJ, Shickel B, Ruppert MM, et al. Uncertainty-aware deep learning in healthcare: A scoping review. PLOS Digital Health. 2022;1: e0000085.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR48\" id=\"ref-link-section-d2079306e1336\" target=\"_blank\" rel=\"noopener\">48<\/a>]. To accelerate AI adoption and trust, we advocate that physicians must be trained to interpret outputs under uncertainty\u2014using frameworks like plausibility, consistency with known biology, and alignment with consolidated clinical reasoning\u2014rather than expecting full explainability [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 49\" title=\"Jin W, Li X, Hamarneh G. Why is plausibility surprisingly problematic as an XAI criterion? 2023; published online March 30. &#010;                  https:\/\/arxiv.org\/abs\/2303.17707v3&#010;                  &#010;                 (accessed Feb 10, 2025).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR49\" id=\"ref-link-section-d2079306e1339\" target=\"_blank\" rel=\"noopener\">49<\/a>].<\/p>\n<p>Standardize and share data while maintaining patient privacy<\/p>\n<p>In this section, we present key infrastructures for AI deployment in critical care [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 50\" title=\"Wolff J, Pauling J, Keck A, Baumbach J. 
The economic impact of artificial intelligence in health care: Systematic review. J Med Internet Res 2020; 22 &#010;                  https:\/\/doi.org\/10.2196\/16866&#010;                  &#010;                .\u00a0\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR50\" id=\"ref-link-section-d2079306e1351\" target=\"_blank\" rel=\"noopener\">50<\/a>]. Their costs should be seen as an investment in patient outcomes, process efficiency, and reduced operational costs. Retaining data ownership within healthcare institutions, and recognizing patients and providers as stakeholders, allows them to benefit from the value their data creates. By contrast, without safeguards, clinical data risk becoming proprietary products of private companies\u2014which are resold to their source institutions rather than serving as a resource for their own development\u2014for instance, through the development and licensing of synthetic datasets\u00a0[<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 51\" title=\"Chen J, Chun D, Patel M, Chiang E, James J. The validity of synthetic clinical data: A validation study of a leading synthetic data generator (Synthea) using clinical quality measures. BMC Med Inform Decis Mak 2019; 19. &#010;                  https:\/\/doi.org\/10.1186\/S12911-019-0793-0&#010;                  &#010;                ,.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR51\" id=\"ref-link-section-d2079306e1354\" target=\"_blank\" rel=\"noopener\">51<\/a>].<\/p>\n<p>Standardize data to promote reproducible AI models<\/p>\n<p>Standardized data collection is essential for creating generalizable and reproducible AI models and fostering interoperability between different centres and systems. 
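A toy sketch of what such standardization involves in practice (the site identifiers, local codes, and mapping table below are invented for illustration; real pipelines map to standard vocabularies): two centres reporting the same analyte under different codes and units are harmonized into one shared representation.

```python
# Hypothetical local-code to shared-concept mapping (illustrative only)
CODE_MAP = {
    ("site_a", "LACT"): "lactate",
    ("site_b", "LAB-0042"): "lactate",
}
# Unit conversions into a single agreed unit for lactate (mmol/L);
# 1 mmol/L of lactate is approx. 9.01 mg/dL
TO_MMOL_L = {"mmol/L": 1.0, "mg/dL": 1 / 9.01}

def harmonize(site, code, value, unit):
    """Return a common-model-style record, or None if the code is unmapped."""
    concept = CODE_MAP.get((site, code))
    if concept is None or unit not in TO_MMOL_L:
        return None
    return {"concept": concept, "value_mmol_l": round(value * TO_MMOL_L[unit], 2)}

print(harmonize("site_a", "LACT", 2.1, "mmol/L"))
print(harmonize("site_b", "LAB-0042", 27.0, "mg/dL"))  # same concept, unit converted
```

Records that cannot be mapped return None rather than passing through silently, which is the behaviour a downstream AI pipeline needs in order to quantify, rather than hide, incomplete harmonization.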
A key challenge in acute and critical care is the variability in data sources, including EHRs, multi-omics data (genomics, transcriptomics, proteomics, and metabolomics), medical imaging (radiology, pathology, and ultrasound), and unstructured free-text data from clinical notes and reports. These diverse data modalities are crucial for developing AI-driven decision-support tools, yet their integration is complex due to differences in structure, format, and quality across healthcare institutions.<\/p>\n<p>For instance, the detection of organ dysfunction in the ICU, hemodynamic monitoring collected by different devices, respiratory parameters from ventilators by different manufacturers, and variations in local policies and regulations all impact EHR data quality, structure, and consistency across different centres and clinical trials.<\/p>\n<p>The Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM), which embeds standard vocabularies such as LOINC and SNOMED CT, continues to gain popularity as a framework for structuring healthcare data, enabling cross-centre data exchange and model interoperability [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Stang PE, Ryan PB, Racoosin JA, et al. Advancing the science for active surveillance: Rationale and design for the observational medical outcomes partnership. Ann Intern Med. 2010;153:600\u20136.\" href=\"#ref-CR52\" id=\"ref-link-section-d2079306e1370\">52<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"McDonald CJ, Huff SM, Suico JG, et al. LOINC, a universal standard for identifying laboratory observations: A 5-year update. Clin Chem. 
2003;49:624\u201333.\" href=\"#ref-CR53\" id=\"ref-link-section-d2079306e1370_1\">53<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 54\" title=\"SNOMED CT. &#010;                  https:\/\/www.nlm.nih.gov\/healthit\/snomedct\/index.html&#010;                  &#010;                 (accessed June 14, 2025).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR54\" id=\"ref-link-section-d2079306e1373\" target=\"_blank\" rel=\"noopener\">54<\/a>]. Similarly, Fast Healthcare Interoperability Resources (FHIR) offers a flexible, standardized information exchange solution, facilitating real-time accessibility of structured data [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 55\" title=\"Resourcelist - FHIR v5.0.0. &#010;                  https:\/\/www.hl7.org\/fhir\/resourcelist.html&#010;                  &#010;                 (accessed June 14, 2025).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR55\" id=\"ref-link-section-d2079306e1376\" target=\"_blank\" rel=\"noopener\">55<\/a>].<\/p>\n<p>Hospitals, device and EHR companies must contribute to the adoption of recognized standards to make sure interoperability is not a barrier to AI implementation.<\/p>\n<p>Beyond structured data, AI has the potential to enhance data standardization by automatically tagging and labelling data sources, tracking provenance, and harmonizing data formats across institutions. Leveraging AI for these tasks can help mitigate data inconsistencies, thereby improving the reliability and scalability of AI-driven clinical applications.<\/p>\n<p>Prioritize data safety, security, and patient privacy<\/p>\n<p>Data safety, security and privacy are all needed for the application of AI in critical care. 
Data safety refers to the protection of data from accidental loss or system failure, while data security concerns defensive strategies against malicious attacks, including hacking, ransomware, and unauthorized data access [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 56\" title=\"Ewoh P, Vartiainen T. Vulnerability to Cyberattacks and Sociotechnical Solutions for Health Care Systems: Systematic Review. J Med Internet Res. 2024;26: e46904.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR56\" id=\"ref-link-section-d2079306e1394\" target=\"_blank\" rel=\"noopener\">56<\/a>]. In modern hospitals, data safety and security will soon become as essential as wall oxygen in operating rooms [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 57\" title=\"McCoy TH, Perlis RH. Temporal Trends and Characteristics of Reportable Health Data Breaches, 2010\u20132017. JAMA. 2018;320:1282\u20134.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR57\" id=\"ref-link-section-d2079306e1397\" target=\"_blank\" rel=\"noopener\">57<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 58\" title=\"Jiang JX, Ross JS, Bai G. Ransomware Attacks and Data Breaches in US Health Care Systems. JAMA Netw Open 2025; 8. &#010;                  https:\/\/doi.org\/10.1001\/JAMANETWORKOPEN.2025.10180&#010;                  &#010;                ,.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR58\" id=\"ref-link-section-d2079306e1400\" target=\"_blank\" rel=\"noopener\">58<\/a>]. A corrupted or hacked clinical dataset during hospital care could be as catastrophic as losing electricity, medications, or oxygen. 
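One concrete building block of the data-safety pillar is routine integrity checking. The sketch below (record layout invented for illustration) fingerprints a dataset with a SHA-256 digest so that silent corruption can be detected before the data feed a model:

```python
import hashlib

def fingerprint(records):
    """SHA-256 digest over a canonical serialization of the records."""
    h = hashlib.sha256()
    for rec in records:
        # Sort keys so the digest does not depend on dict ordering
        h.update(repr(sorted(rec.items())).encode("utf-8"))
    return h.hexdigest()

cohort = [{"id": 1, "hr": 92}, {"id": 2, "hr": 118}]
baseline = fingerprint(cohort)

cohort[1]["hr"] = 18  # simulate silent corruption of a single value
assert fingerprint(cohort) != baseline  # the change is detected
print("baseline digest:", baseline[:16])
```

Production systems layer such checks with backups, access controls, and audit logging; a digest alone only tells you that something changed, not what or why.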
Finally, data privacy focuses on safeguarding personal information, ensuring that patient data is stored and accessed in compliance with legal standards [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 56\" title=\"Ewoh P, Vartiainen T. Vulnerability to Cyberattacks and Sociotechnical Solutions for Health Care Systems: Systematic Review. J Med Internet Res. 2024;26: e46904.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR56\" id=\"ref-link-section-d2079306e1403\" target=\"_blank\" rel=\"noopener\">56<\/a>].<\/p>\n<p>Implementing AI that prioritizes these three pillars will be critical for resilient digital infrastructure in healthcare. A possible option for the medical community is to support open-source models to increase transparency and reduce dependence on proprietary algorithms, and possibly enable better control of safety and privacy issues within distributed systems [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 59\" title=\"Hahn E, Blazes D, Lewis S. Understanding How the \u2018Open\u2019 of Open Source Software (OSS) Will Improve Global Health Security. Health Secur. 2016;14:13\u20138.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR59\" id=\"ref-link-section-d2079306e1409\" target=\"_blank\" rel=\"noopener\">59<\/a>]. However, sustaining open-source innovation requires appropriate incentives, such as public or dedicated research funding, academic recognition, and regulatory support to ensure high-quality development and long-term viability [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 60\" title=\"Kobayashi S, Kane TB, Paton C. The Privacy and Security Implications of Open Data in Healthcare. 
Yearb Med Inform. 2018;27:41\u20137.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR60\" id=\"ref-link-section-d2079306e1412\" target=\"_blank\" rel=\"noopener\">60<\/a>]. Without such strategies, the role of open-source models will be reduced, with the risk of ceding a larger share of control over clinical decision-making to commercial algorithms.<\/p>\n<p>Develop rigorous AI research methodology<\/p>\n<p>We believe AI research should be held to the same methodological standards as other areas of medical research. Achieving this will require greater accountability from peer reviewers and scientific journals to ensure rigor, transparency, and clinical relevance.<\/p>\n<p>Furthermore, advancing AI in ICU research requires a transformation in the necessary underlying infrastructure, particularly when considering high-frequency data collection and the integration of complex, multimodal patient information, detailed in the sections below. In this context, the gap in data resolution between highly monitored environments such as ICUs and standard wards becomes apparent. The ICU provides a high level of data granularity due to high-resolution monitoring systems, capable of capturing the rapid changes in a patient&#8217;s physiological status [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 61\" title=\"Ren Y, Loftus TJ, Li Y, et al. Physiologic signatures within six hours of hospitalization identify acute illness phenotypes. PLOS Digital Health. 2022;1: e0000110.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR61\" id=\"ref-link-section-d2079306e1426\" target=\"_blank\" rel=\"noopener\">61<\/a>]. 
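To make the granularity point concrete, the sketch below reduces one hour of a synthetic 1 Hz heart-rate stream to minute-level summary features; the signal, window length, and features are illustrative assumptions, not a published pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
# One hour of synthetic 1 Hz heart-rate samples with a slow upward drift
t = np.arange(3600)
hr = 80 + 0.004 * t + rng.normal(0, 2, 3600)

# Collapse to per-minute features: mean and variability in each 60 s window
windows = hr.reshape(60, 60)  # 60 one-minute windows of 60 samples each
features = np.column_stack([windows.mean(axis=1), windows.std(axis=1)])
print(features.shape)  # (60, 2): one feature row per minute
# The physiological trend survives summarization
assert features[-1, 0] > features[0, 0]
```

On a standard ward the same hour might yield a single spot check, i.e. one row instead of sixty, which is exactly the resolution gap the text describes.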
Consequently, the integration of this new source of high-volume, rapidly changing physiological data into medical research and clinical practice could give rise to \u201cphysiolomics\u201d, a proposed term for this domain, which could become as crucial as genomics, proteomics and other \u201c-omics\u201d fields in advancing personalized medicine.<\/p>\n<p>AI will change how clinical research is performed, improving how evidence is generated and how randomized clinical trials (RCTs) are conducted [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 62\" title=\"Angus DC. Randomized Clinical Trials of Artificial Intelligence. JAMA. 2020;323:1043\u20135.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR62\" id=\"ref-link-section-d2079306e1438\" target=\"_blank\" rel=\"noopener\">62<\/a>]. Instead of using large, heterogeneous trial populations, AI might help researchers design and enrol tailored patient subgroups for precise RCTs [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 63\" title=\"Komorowski M, Lemyze M. Informing future intensive care trials with machine learning. Br J Anaesth. 2019;123:14\u20136.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR63\" id=\"ref-link-section-d2079306e1441\" target=\"_blank\" rel=\"noopener\">63<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 64\" title=\"Seitz KP, Spicer AB, Casey JD, et al. Individualized Treatment Effects of Bougie versus Stylet for Tracheal Intubation in Critical Illness. Am J Respir Crit Care Med. 
2023;207:1602\u201311.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR64\" id=\"ref-link-section-d2079306e1444\" target=\"_blank\" rel=\"noopener\">64<\/a>]. These precision methods could solve the problem of negative critical care trials related to inhomogeneities in the population and significant confounding effects. AI could thus improve RCTs by allowing the enrolment of very subtle subgroups of patients with hundreds of specific inclusion criteria over dozens of centres, a task impossible to perform by humans in real-time practice, improving trial efficiency in enrolling enriched populations [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"PANTHER\u2013 Precision medicine Adaptive Network platform Trial in Hypoxemic acutE respiratory failuRe - ERS - European Respiratory Society. &#10;                  https:\/\/www.ersnet.org\/science-and-research\/clinical-research-collaboration-application-programme\/panther-precision-medicine-adaptive-network-platform-trial-in-hypoxemic-acute-respiratory-failure\/&#10;                  &#10;                 (accessed June 14, 2025).\" href=\"#ref-CR65\" id=\"ref-link-section-d2079306e1447\">65<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Bhavani SV, Holder A, Miltz D, et al. The Precision Resuscitation With Crystalloids in Sepsis (PRECISE) Trial: A Trial Protocol. JAMA Netw Open. 2024;7: e2434197.\" href=\"#ref-CR66\" id=\"ref-link-section-d2079306e1447_1\">66<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 67\" title=\"Wijnberge M, Geerts BF, Hol L, et al. 
Effect of a Machine Learning-Derived Early Warning System for Intraoperative Hypotension vs Standard Care on Depth and Duration of Intraoperative Hypotension during Elective Noncardiac Surgery: The HYPE Randomized Clinical Trial. JAMA - Journal of the American Medical Association. 2020;323:1052\u201360.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR67\" id=\"ref-link-section-d2079306e1450\" target=\"_blank\" rel=\"noopener\">67<\/a>]. In the TBI example cited, conducting an RCT on the six AI-identified endotypes\u2014such as patients with moderate GCS but severe metabolic derangement\u2014would be unfeasible without AI stratification [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 38\" title=\"\u00c5kerlund CAI, Holst A, Stocchetti N, et al. Clustering identifies endotypes of traumatic brain injury in an intensive care cohort: a CENTER-TBI study. Crit Care. 2022;26:1\u201315.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR38\" id=\"ref-link-section-d2079306e1454\" target=\"_blank\" rel=\"noopener\">38<\/a>]. This underscores AI\u2019s potential to enable precision trial designs in critical care.<\/p>\n<p>There are multiple domains for interaction between AI and RCT, though a comprehensive review is beyond the scope of this paper. 
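A minimal sketch of the automated-enrolment idea (the three criteria and thresholds below are invented for illustration; a real trial would encode hundreds): expressing inclusion criteria as machine-readable predicates lets software screen every admitted patient continuously.

```python
# Hypothetical inclusion criteria expressed as named predicates
CRITERIA = {
    "adult": lambda p: p["age"] >= 18,
    "hypoxemic": lambda p: p["pao2_fio2"] < 200,
    "early_admission": lambda p: p["hours_in_icu"] <= 24,
}

def screen(patient):
    """Return the list of failed criteria; an empty list means eligible."""
    return [name for name, rule in CRITERIA.items() if not rule(patient)]

p1 = {"age": 54, "pao2_fio2": 150, "hours_in_icu": 6}
p2 = {"age": 45, "pao2_fio2": 260, "hours_in_icu": 6}
print(screen(p1))  # [] -> eligible
print(screen(p2))  # ['hypoxemic'] -> excluded, with the reason recorded
```

Because each rule is evaluated independently, the screener also reports why a patient was excluded, which is what makes running hundreds of criteria across dozens of centres auditable as well as feasible.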
These include trial emulation to identify the patient populations most likely to benefit from an intervention, screening candidate drugs for trial interventions, detecting heterogeneity of treatment effects, and automated patient screening to improve the efficiency and reduce the cost of clinical trials.<\/p>\n<p>Ensuring that AI models are clinically effective, reproducible, and generalizable requires adherence to rigorous methodological standards, particularly in critical care, where patient heterogeneity, real-time decision-making, and high-frequency data collection pose unique challenges. Several established reporting and validation frameworks already provide guidance for improving AI research in ICU settings. While these frameworks are not specific to the ICU environment, we believe they should be rapidly disseminated across the critical care community through dedicated initiatives, courses, and scientific societies.<\/p>\n<p>For predictive models, the TRIPOD-AI extension of the TRIPOD guidelines focuses on transparent reporting for clinical prediction models, with specific emphasis on calibration, internal and external validation, and fairness [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 68\" title=\"Collins GS, Moons KGM, Dhiman P, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ. 2024;385: e078378.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR68\" id=\"ref-link-section-d2079306e1467\" target=\"_blank\" rel=\"noopener\">68<\/a>]. The PROBAST-AI framework complements this by offering a structured tool to assess risk of bias and applicability in prediction model studies [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 69\" title=\"Moons KGM, Damen JAA, Kaul T, et al. 
PROBAST+AI: an updated quality, risk of bias, and applicability assessment tool for prediction models using regression or artificial intelligence methods. BMJ 2025; 388. &#010;                  https:\/\/doi.org\/10.1136\/BMJ-2024-082505&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR69\" id=\"ref-link-section-d2079306e1470\" target=\"_blank\" rel=\"noopener\">69<\/a>]. CONSORT-AI extends the CONSORT framework to include AI-specific elements such as algorithm transparency and reproducibility for interventional trials with AI [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 70\" title=\"Liu X, Cruz Rivera S, Moher D, et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26:1364\u201374.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR70\" id=\"ref-link-section-d2079306e1473\" target=\"_blank\" rel=\"noopener\">70<\/a>], while STARD-AI provides a framework for reporting AI-based diagnostic accuracy studies [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 71\" title=\"Sounderajah V, Ashrafian H, Golub RM, et al. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open. 2021;11: e047709.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR71\" id=\"ref-link-section-d2079306e1476\" target=\"_blank\" rel=\"noopener\">71<\/a>]. 
Together, these guidelines encompass several issues related to transparency, reproducibility, fairness, external validation, and human oversight\u2014principles that must be considered foundational for any trustworthy AI research in healthcare. Despite the availability of these frameworks, many ICU studies involving AI methods still fail to meet these standards, leading to concerns about inadequate external validation and generalizability [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 68\" title=\"Collins GS, Moons KGM, Dhiman P, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ. 2024;385: e078378.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR68\" id=\"ref-link-section-d2079306e1479\" target=\"_blank\" rel=\"noopener\">68<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 72\" title=\"Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD). Ann Intern Med. 2015;162:735\u20136.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR72\" id=\"ref-link-section-d2079306e1483\" target=\"_blank\" rel=\"noopener\">72<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 73\" title=\"Leisman DE, Harhay MO, Lederer DJ, et al. Development and Reporting of Prediction Models: Guidance for Authors From Editors of Respiratory, Sleep, and Critical Care Journals. Crit Care Med. 
2020;48:623\u201333.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR73\" id=\"ref-link-section-d2079306e1486\" target=\"_blank\" rel=\"noopener\">73<\/a>].<\/p>\n<p>Beyond prediction models, critical care-specific guidelines proposed in recent literature offer targeted recommendations for evaluating AI tools in ICU environments, particularly regarding data heterogeneity, patient safety, and integration with clinical workflows. Moving forward, AI research in critical care must align with these established frameworks and adopt higher methodological standards, such as pre-registered AI trials, prospective validation in diverse ICU populations, and standardized benchmarks for algorithmic performance.<\/p>\n<p>Encourage collaborative AI models<\/p>\n<p>Centralizing data collection from multiple ICUs, or federating them into structured networks, enhances external validity and reliability by enabling a scale of data volume that would be unattainable for individual institutions alone [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 74\" title=\"Sauer CM, Dam TA, Celi LA, et al. Systematic Review and Comparison of Publicly Available ICU Data Sets-A Decision Guide for Clinicians and Data Scientists. Crit Care Med. 2022;50:E581\u20138.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR74\" id=\"ref-link-section-d2079306e1501\" target=\"_blank\" rel=\"noopener\">74<\/a>]. ICUs are at the forefront of data sharing efforts, offering several publicly available datasets for use by the research community [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 75\" title=\"Sauer CM, Dam TA, Celi LA, et al. Systematic Review and Comparison of Publicly Available ICU Data Sets\u2014A Decision Guide for Clinicians and Data Scientists. 
Crit Care Med. 2022;50: e581.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR75\" id=\"ref-link-section-d2079306e1504\" target=\"_blank\" rel=\"noopener\">75<\/a>]. There are several strategies to build collaborative databases. Networking refers to collaborative research consortia [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 76\" title=\"Benthin C, Pannu S, Khan A, Gong M. The nature and variability of automated practice alerts derived from electronic health records in a U.S. nationwide critical care research network. Ann Am Thorac Soc 2016; 13: 1784\u20138.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR76\" id=\"ref-link-section-d2079306e1507\" target=\"_blank\" rel=\"noopener\">76<\/a>] that align protocols and pool clinical research data across institutions. Federated learning, by contrast, involves a decentralized approach where data are stored locally and only models or weights are shared between centres [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 77\" title=\"Li R, Romano JD, Chen Y, Moore JH. Centralized and Federated Models for the Analysis of Clinical Data. Annu Rev Biomed Data Sci. 2024;7:179\u201399.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR77\" id=\"ref-link-section-d2079306e1510\" target=\"_blank\" rel=\"noopener\">77<\/a>]. 
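The weight-sharing loop at the heart of this decentralized approach can be illustrated with a minimal federated-averaging (FedAvg) sketch; the three simulated "sites", the toy logistic model, and all numbers below are hypothetical stand-ins, not drawn from any cited system:

```python
# Minimal FedAvg sketch: each hypothetical ICU site trains on local data that
# never leaves the site; only model weights are exchanged and averaged.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: logistic-regression gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted outcome probability
        w -= lr * X.T @ (p - y) / len(y)         # gradient step on local data only
    return w

# Hypothetical per-site data: two synthetic features -> binary outcome.
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(float)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                              # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)         # "server" averages weights only

print(global_w)
```

Only the weight vectors cross site boundaries in each round; the per-site `X` and `y` stay local, which is the property that makes this design attractive under strict privacy regulation.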
Finally, centralized approaches, such as the Epic Cosmos initiative, leverage de-identified data collected from EHRs and stored on a central server, providing access to large patient populations for research and quality improvement purposes across the healthcare system [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 78\" title=\"Tarabichi Y, Frees A, Honeywell S, et al. The Cosmos Collaborative: A Vendor-Facilitated Electronic Health Record Data Aggregation Platform. ACI open. 2021;5: e36.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR78\" id=\"ref-link-section-d2079306e1513\" target=\"_blank\" rel=\"noopener\">78<\/a>]. Federated learning is gaining traction in Europe, where data privacy regulation takes a more risk-averse approach to AI development, thus favouring decentralized models [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 79\" title=\"Brauneck A, Schmalhorst L, Majdabadi MMK, et al. Federated Machine Learning, Privacy-Enhancing Technologies, and Data Protection Laws in Medical Research: Scoping Review. J Med Internet Res. 2023;25: e41588.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR79\" id=\"ref-link-section-d2079306e1517\" target=\"_blank\" rel=\"noopener\">79<\/a>]. In contrast, centralized learning approaches like Epic Cosmos are more common in the United States, where a more risk-tolerant environment favours large-scale data aggregation.<\/p>\n<p>In parallel, the use of synthetic data is emerging as a complementary strategy to enable data sharing while preserving patient privacy. 
Synthetic datasets are artificially generated to reflect the characteristics of real patient data and can be used to train and test models without exposing sensitive information [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 80\" title=\"Tao F, Qi Q. Make more digital twins. Nature 2021 573:7775 2019; 573: 490\u20131.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR80\" id=\"ref-link-section-d2079306e1523\" target=\"_blank\" rel=\"noopener\">80<\/a>]. The availability of large-scale data may also support the creation of digital twins: virtual simulations that mirror an individual\u2019s biological and clinical state and rely on high-volume, high-fidelity datasets. Digital twins may allow predictive modelling and virtual testing of interventions before bedside application, improving their safety.<\/p>\n<p>The ICU community should advocate for further initiatives to extend collaborative AI models at the national and international levels.<\/p>\n<p>Governance and regulation for AI in Critical Care<\/p>\n<p>Despite growing regulatory efforts, AI regulation remains one of the greatest hurdles to clinical implementation, particularly in high-stakes environments like critical care, as regulatory governance, surveillance, and evaluation of model performance are not only conceptually difficult but also require a large operational effort across diverse healthcare settings. 
The recent European Union AI Act introduced a risk-based regulatory framework, classifying medical AI as high-risk and requiring stringent compliance with transparency, human oversight, and post-market monitoring [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"EU Artificial Intelligence Act| Up-to-date developments and analyses of the EU AI Act. &#010;                  https:\/\/artificialintelligenceact.eu\/&#010;                  &#010;                 (accessed July 6, 2024).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR18\" id=\"ref-link-section-d2079306e1539\" target=\"_blank\" rel=\"noopener\">18<\/a>]. While these regulatory efforts provide foundational guidance, critical care AI presents unique challenges requiring specialized oversight.<\/p>\n<p>By integrating regulatory, professional, and institutional oversight, AI governance in critical care can move beyond theoretical discussions toward actionable policies that balance technological innovation with patient safety [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 73\" title=\"Leisman DE, Harhay MO, Lederer DJ, et al. Development and Reporting of Prediction Models: Guidance for Authors From Editors of Respiratory, Sleep, and Critical Care Journals. Crit Care Med. 2020;48:623\u201333.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR73\" id=\"ref-link-section-d2079306e1545\" target=\"_blank\" rel=\"noopener\">73<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 81\" title=\"Warraich HJ, Tazbaz T, Califf RM. FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine. JAMA. 
2025;333:241\u20137.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR81\" id=\"ref-link-section-d2079306e1548\" target=\"_blank\" rel=\"noopener\">81<\/a>, <a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 82\" title=\"Nong P, Hamasha R, Singh K, Adler-Milstein J, Platt J. How Academic Medical Centers Govern AI Prediction Tools in the Context of Uncertainty and Evolving Regulation. NEJM AI 2024; 1. &#010;                  https:\/\/doi.org\/10.1056\/AIP2300048&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR82\" id=\"ref-link-section-d2079306e1551\" target=\"_blank\" rel=\"noopener\">82<\/a>].<\/p>\n<p>Promote collaboration between the public and private sectors<\/p>\n<p>Given the complexity and the significant economic, human, and computational resources needed to develop a large generative AI model, physicians and regulators should promote partnerships among healthcare institutions, technology companies, and governmental bodies to support the research, development, and deployment of AI-enabled care solutions [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 83\" title=\"Reddy S, Rogers W, Makinen VP, et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform. 2021;28: 100444.\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR83\" id=\"ref-link-section-d2079306e1561\" target=\"_blank\" rel=\"noopener\">83<\/a>]. Beyond regulatory agencies, professional societies and institutional governance structures must assume a more active role. 
Organizations such as the Society of Critical Care Medicine (SCCM), the European Society of Intensive Care Medicine (ESICM), and regulatory bodies like the European Medicines Agency (EMA) should establish specific clinical practice guidelines for AI in critical care, including standards for model validation, clinician\u2013AI collaboration, and accountability. Regulatory bodies should operate at both national and supranational levels, with transparent governance involving multidisciplinary representation\u2014including clinicians, data scientists, ethicists, and patient advocates\u2014to ensure decisions are both evidence-based and ethically grounded. To avoid postponing innovation indefinitely, regulation should be adaptive and proportionate, focusing on risk-based oversight and continuous post-deployment monitoring rather than rigid pre-market restrictions. Furthermore, implementing mandatory reporting requirements for AI performance and creating hospital-based AI safety committees could offer a structured, practical framework to safeguard the ongoing reliability and safety of clinical AI applications.<\/p>\n<p>Address the AI divide to improve health equality<\/p>\n<p>The adoption of AI may vary significantly across geographic regions, influenced by technological capacity (i.e., disparities in access to software or hardware resources) and by differences in investment and priorities between countries. This \u201cAI divide\u201d can separate those with high access to AI from those with limited or no access, exacerbating social and economic inequalities.<\/p>\n<p>The European Commission has been proposed as an umbrella body to coordinate EU-wide strategies to reduce the AI divide between European countries, implementing coordination and supporting programmes of activities [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 84\" title=\"Artificial Intelligence in Healthcare. 
&#010;                  https:\/\/www.europarl.europa.eu\/RegData\/etudes\/STUD\/2022\/729512\/EPRS_STU(2022)729512_EN.pdf&#010;                  &#010;                 (accessed June 12, 2025).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR84\" id=\"ref-link-section-d2079306e1575\" target=\"_blank\" rel=\"noopener\">84<\/a>]. Specific programmes, such as Marie-Curie training networks, are proposed there to strengthen human capital in AI while developing infrastructure and implementing common guidelines and approaches across countries.<\/p>\n<p>A recent document from the United Nations also addresses the digital divide across different economic sectors, recommending education, international cooperation, and technological development for an equitable allocation of AI resources and infrastructure [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 85\" title=\"United nations. Mind the AI Divide: Shaping a Global Perspective on the Future of Work. &#010;                  https:\/\/www.un.org\/digital-emerging-technologies\/sites\/www.un.org.techenvoy\/files\/MindtheAIDivide.pdf&#010;                  &#010;                 (accessed June 12, 2025).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR85\" id=\"ref-link-section-d2079306e1581\" target=\"_blank\" rel=\"noopener\">85<\/a>].<\/p>\n<p>Accordingly, the medical community in each country should lobby at both the national and international levels, through scientific societies and the WHO, for international collaborations, such as the development of specific grants and research initiatives. Intensivists should call for supranational approaches to standardized data collection and for policies governing AI technology and data analysis. 
Governments, the UN, the WHO, and scientific societies should be the targets of this coordinated effort.<\/p>\n<p>Continuous evaluation of dynamic models and post-marketing surveillance<\/p>\n<p>A major limitation in current regulation is the lack of established pathways for dynamic AI models. AI systems in critical care are inherently dynamic, evolving as they incorporate new real-world data, while most FDA approvals rely on static evaluation. In contrast, the EU AI Act emphasizes continuous risk assessment [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"EU Artificial Intelligence Act| Up-to-date developments and analyses of the EU AI Act. &#010;                  https:\/\/artificialintelligenceact.eu\/&#010;                  &#010;                 (accessed July 6, 2024).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR18\" id=\"ref-link-section-d2079306e1596\" target=\"_blank\" rel=\"noopener\">18<\/a>]. This approach should be expanded globally to enable real-time auditing, validation, and governance of AI-driven decision support tools in intensive care units, and should extend to post-market surveillance. The EU AI Act mandates ongoing surveillance of high-risk AI systems, a principle we advocate adopting internationally to mitigate the risks of AI degradation and bias drift in ICU environments. In practice, this requires commercial AI vendors to provide post-marketing surveillance plans and to report serious incidents within a predefined time window (15 days or less) [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"EU Artificial Intelligence Act| Up-to-date developments and analyses of the EU AI Act. 
&#010;                  https:\/\/artificialintelligenceact.eu\/&#010;                  &#010;                 (accessed July 6, 2024).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR18\" id=\"ref-link-section-d2079306e1599\" target=\"_blank\" rel=\"noopener\">18<\/a>]. Companies should also maintain this monitoring as their AI systems evolve over time. The implementation of these surveillance systems should include standardized monitoring protocols, incident reporting tools embedded within clinical workflows, participation in performance registries, and regular audits. These mechanisms are overseen by national Market Surveillance Authorities (MSAs), supported by EU-wide guidance and upcoming templates to ensure consistent and enforceable oversight of clinical AI systems.<\/p>\n<p>Require adequate regulations for AI deployment in clinical practice<\/p>\n<p>Deploying AI within complex clinical environments like the ICU, acute wards, or even regular wards presents a formidable challenge [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 86\" title=\"Greco M, Caruso PF, Cecconi M. Artificial Intelligence in the Intensive Care Unit. Semin Respir Crit Care Med. 2020. &#010;                  https:\/\/doi.org\/10.1055\/s-0040-1719037&#010;                  &#010;                .\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR86\" id=\"ref-link-section-d2079306e1610\" target=\"_blank\" rel=\"noopener\">86<\/a>].<\/p>\n<p>We underline three aspects of adequate regulation: first, a rigorous regulatory process to evaluate safety and efficacy before the clinical application of AI products. 
The second aspect relates to continuous post-market evaluation, which should be mandatory and conducted as it is for other types of medical devices [<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 18\" title=\"EU Artificial Intelligence Act| Up-to-date developments and analyses of the EU AI Act. &#010;                  https:\/\/artificialintelligenceact.eu\/&#010;                  &#010;                 (accessed July 6, 2024).\" href=\"http:\/\/ccforum.biomedcentral.com\/articles\/10.1186\/s13054-025-05532-2#ref-CR18\" id=\"ref-link-section-d2079306e1616\" target=\"_blank\" rel=\"noopener\">18<\/a>].<\/p>\n<p>The third important aspect is liability: identifying who should be held accountable if an AI decision, or a human decision based on AI, leads to harm. This relates to the need for adequate insurance policies. We urge regulatory bodies in each country to provide regulations on these issues, which are fundamental for the diffusion of AI.<\/p>\n<p>We also recommend that both patients and clinicians request that regulatory bodies in each country update current legislation and regulatory pathways, including clear rules for insurance policies, to anticipate and reduce the risk of litigation.<\/p>\n","protected":false},"excerpt":{"rendered":"Artificial intelligence (AI) is rapidly entering critical care, where it holds the potential to improve diagnostic accuracy 
and&hellip;\n","protected":false},"author":2,"featured_media":250658,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4316],"tags":[1942,96700,96702,34339,105,4348,46113,96701,9826,16,15],"class_list":{"0":"post-250657","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-artificial-intelligence","9":"tag-critical-care-medicine","10":"tag-emergency-medicine","11":"tag-ethics","12":"tag-health","13":"tag-healthcare","14":"tag-healthcare-innovation","15":"tag-intensive-critical-care-medicine","16":"tag-personalized-medicine","17":"tag-uk","18":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114823118720207412","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/250657","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=250657"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/250657\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/250658"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=250657"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=250657"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=250657"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}