{"id":77484,"date":"2025-07-20T07:50:10","date_gmt":"2025-07-20T07:50:10","guid":{"rendered":"https:\/\/www.europesays.com\/us\/77484\/"},"modified":"2025-07-20T07:50:10","modified_gmt":"2025-07-20T07:50:10","slug":"an-end-to-end-multifunctional-ai-platform-for-intraoperative-diagnosis","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/us\/77484\/","title":{"rendered":"An end-to-end multifunctional AI platform for intraoperative diagnosis"},"content":{"rendered":"<p>Patient cohorts<\/p>\n<p>This multicenter study included retrospective development and validation of the GAS platform, followed by a prospective validation to assess its generalizability. The Generation module was trained using retrospectively collected data from the Internal GZCC. External validation involved six independent cohorts from JMCH, HZCH, DGPH, JMPH, ZSZL, and TCGA. A prospective cohort was enrolled at Pro-External GZCC. All included patients had intraoperative diagnoses and matched post-operative FFPE-confirmed diagnoses. Frozen and FFPE slides were derived from the same surgical specimens, though exact pairing was not required due to the unpaired GAN architecture. Exclusion criteria included patients with incomplete clinical data and slides with extensive tissue folds, large fractures, or severe out-of-focus regions. The study was approved by the ethics committee of Sun Yat-sen University Cancer Center (No. SL-B2023-416-02), with informed consent waived for retrospective data and obtained for the prospective cohort.<\/p>\n<p>Data digitalization<\/p>\n<p>For each case, all available tumor slides were scanned at \u00d740 magnification (0.25\u2009\u03bcm\/pixel) for further analysis. Slides from the Internal GZCC, External HZCH, External DGPH, and Pro-External GZCC cohorts were scanned with a PHILIPS Ultra-Fast Scanner (Philips Electronics N.V., Amsterdam, Netherlands) and saved in iSyntax format. 
Slides from the External JMCH and External JMPH cohorts were scanned with an Aperio AT2 scanner (Leica Biosystems, Wetzlar, Germany) and stored in SVS format.

Data preprocessing

For each digitized slide, CLAM's preprocessing method [24] was used to segment the tissue regions in the WSIs. Based on this segmentation mask, the tissue regions were partitioned into non-overlapping 512 × 512 patches at ×20 magnification, with coordinates retained for the final reconstruction of the entire slide. Because the segmentation was imperfect, partial background regions such as text or blank areas were sometimes included. Erroneously included background patches exhibited distinctly different grayscale histogram distributions from tissue patches (Supplementary Fig. 8). To eliminate these background patches, all patches were converted to grayscale, the mean and variance of each grayscale image were computed, and background patches were filtered by thresholding these statistics.
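A minimal sketch of this grayscale mean/variance filtering step; the threshold values here are illustrative assumptions, not the ones used in the study:

```python
import numpy as np

def is_background(patch_rgb, mean_thresh=220.0, var_thresh=100.0):
    """Flag a patch as background from its grayscale statistics.

    A mostly blank patch is bright (high mean) and nearly uniform
    (low variance); the thresholds are illustrative only.
    """
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = (0.299 * patch_rgb[..., 0]
            + 0.587 * patch_rgb[..., 1]
            + 0.114 * patch_rgb[..., 2])
    return gray.mean() > mean_thresh and gray.var() < var_thresh

# A near-white patch is filtered out; a textured mid-gray patch is kept.
blank = np.full((512, 512, 3), 245.0)
rng = np.random.default_rng(0)
tissue = rng.uniform(80.0, 200.0, size=(512, 512, 3))
print(is_background(blank), is_background(tissue))  # True False
```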
Within the remaining patches, areas blurred by lens defocus during slide scanning were identified and removed using the Laplacian method [25].

Generation module

Model overview: To synthesize FFPE-like images from frozen sections, we developed the Generation module of the GAS platform, comprising an encoder, a style neck, and a decoder. The encoder downsampled inputs to expand the receptive field and reduce computational burden, the style neck performed domain translation, and the decoder reconstructed the output, aided by skip connections that retain fine details lost during downsampling. Training optimized a weighted sum of adversarial [26] and PatchNCE [27] losses: the adversarial loss drove global realism, while PatchNCE preserved local structural fidelity by maximizing mutual information between input and output patches.

Style neck: The style neck was composed of stacked residual blocks [28]. Crucially, the normalization layers within the residual blocks were Adaptive Instance Normalization (AdaIN) layers [29], which accomplished the style transformation of the input images through adaptive affine transformations (Eq. (1)):

$$\mathrm{AdaIN}\left(x,y\right)=\sigma \left(y\right)\left(\frac{x-\mu \left(x\right)}{\sigma \left(x\right)}\right)+\mu \left(y\right) \quad (1)$$

where x denotes the input image, y the target style image, and μ(·) and σ(·) the mean and standard deviation, respectively. The original AdaIN layer required repeated computation of the affine parameters (the mean and standard deviation) from a style input. We proposed that the affine parameters could instead be predicted directly from a text description of the FFPE style, without additional FFPE-style input images or repeated calculations. We therefore employed the text encoder of QuiltNet [30] to encode the text representing the FFPE style, namely "formalin-fixed paraffin-embedded tissues".
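The AdaIN operation of Eq. (1) can be sketched as follows; as a simplifying assumption, the style statistics are passed in directly rather than predicted by the text-conditioned MLP:

```python
import numpy as np

def adain(x, style_mean, style_std, eps=1e-5):
    """AdaIN (Eq. (1)): normalize each channel, then apply style affine params.

    x: feature map of shape (C, H, W); style_mean/style_std: shape (C,).
    In the paper's design these parameters come from an MLP fed with a text
    embedding of "formalin-fixed paraffin-embedded tissues"; here they are
    supplied directly for illustration.
    """
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    normalized = (x - mu) / (sigma + eps)
    return style_std[:, None, None] * normalized + style_mean[:, None, None]

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(2, 8, 8))
out = adain(x, style_mean=np.array([1.0, -1.0]), style_std=np.array([0.5, 2.0]))
# Each output channel now carries (approximately) the requested statistics.
print(out.mean(axis=(1, 2)), out.std(axis=(1, 2)))
```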
The encoded text was then fed into a multilayer perceptron (MLP) to predict the affine parameters. Through this design, the generative model efficiently leverages pathology-specific knowledge embedded in the pretrained network to encode FFPE-style representations, which guide the image synthesis process and enable the model to generate high-quality FFPE-like images from frozen-section inputs.

Training configurations: The model was optimized with the Adam optimizer [31] at a learning rate of 0.0002, using a batch size of 1 for a maximum of 200 epochs. To ensure a fair comparison, all baseline models were trained under consistent settings. The generative model was trained with flip-equivariance augmentation, in which the input image to the generator was horizontally flipped and the output features were flipped back before computing the PatchNCE loss.

Differences from prior methods: Unlike prior approaches, our method is tailored for intraoperative scenarios, where rapid inference is essential. To this end, we use a lightweight encoder–decoder architecture with fewer layers and incorporate skip connections to mitigate feature loss during downsampling. Rather than relying on computationally intensive attention modules, we leverage domain knowledge embedded in a pathology-pretrained model to encode textual descriptions of FFPE characteristics.
These are integrated into the generator via AdaIN, enabling fast, high-fidelity generation of FFPE-style images.

Reader study: To assess the fidelity of the generated images relative to real FFPE images, a reader study was conducted. Three pathologists reviewed 200 image patches (100 generated and 100 real FFPE) and classified each as generated or real, and separately reviewed 200 image patches (100 generated and 100 frozen) and classified each as generated or frozen. These patches were randomly sampled from all test sets without bias, and the tissue-region preprocessing pipeline was consistent across datasets.

Assessment module

With the application of foundation models in computational pathology [15, 32, 33, 34], the ability of such models to extract general features from pathological images has improved significantly. We developed the Assessment module as a pathological image quality control model derived from a foundation model, introducing an adapter architecture to enhance performance on the quality-assessment task. During training, the parameters of the foundation model were kept frozen; only the adapter and the projector used for prediction were trained. The foundation model consisted of 24 ViT blocks [35], and the adapter followed an MLP architecture. To retain the original knowledge encoded by the foundation model, a residual structure was introduced: the output of each ViT block was fed into the adapter, and the adapter's output was aggregated with the original features using a fixed parameter γ (Eq. (2)):

$${{\mathcal{F}}}_{l}^{* }=\gamma {A}_{l}{({F}_{l})}^{T}+(1-\gamma ){F}_{l} \quad (2)$$

where A_l denotes the adapter at layer l, F_l the output of the ViT block at layer l, and \({{\mathcal{F}}}_{l}^{*}\) the output feature, which serves as the input to the subsequent ViT block; γ was set to 0.8 in this experiment [36, 37].

Compared to Low-Rank Adaptation (LoRA) [38], a widely used parameter-efficient fine-tuning method that modifies model weights through low-rank matrix updates, the adapter approach adjusts features via external modules [36, 39, 40] and combines original and adapted features with the scaling factor γ, allowing finer control over the degree of adaptation. Because the foundation model, UNI [21], was pretrained on over 100 million pathology images with the MoCoV3 framework [41] and already captures extensive domain knowledge, the adapter strategy preserves its core representations while introducing task-specific features for quality control, thereby improving overall performance.

The quality control models were trained with a batch size of 16 for up to 50 epochs, using the Adam optimizer with an L2 weight decay of 5e−5 and a learning rate of 1e−5.
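The residual adapter update of Eq. (2) can be sketched as follows; the two-layer MLP adapter and all dimensions are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def adapter_mlp(F, W1, W2):
    """Hypothetical two-layer MLP adapter A_l, applied token-wise."""
    return np.maximum(F @ W1, 0.0) @ W2  # ReLU between down- and up-projection

def adapted_block_output(F, W1, W2, gamma=0.8):
    """Eq. (2): blend adapter output with the frozen ViT block's features."""
    return gamma * adapter_mlp(F, W1, W2) + (1.0 - gamma) * F

rng = np.random.default_rng(0)
F = rng.normal(size=(197, 64))             # tokens x embedding dim (illustrative)
W1 = rng.normal(scale=0.1, size=(64, 16))  # down-projection
W2 = rng.normal(scale=0.1, size=(16, 64))  # up-projection
out = adapted_block_output(F, W1, W2)
print(out.shape)  # (197, 64): same shape, so it can feed the next ViT block
```

Setting γ = 0 recovers the frozen features unchanged, which is what lets the residual structure preserve the foundation model's original representations.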
Segmentation and patching of WSIs were performed on Intel(R) Xeon(R) Gold 6240 CPUs, and the models were trained on four NVIDIA GeForce RTX 2080 Ti GPUs.

To verify whether the quality control model attended to relevant features, we applied Grad-CAM. A forward pass produced the final convolutional feature map and the pre-softmax prediction score; gradients of the score with respect to the feature map were then computed via backpropagation. Channel-wise importance weights were calculated, aggregated through a weighted sum, and passed through ReLU to generate the final localization map.

Deep-learning classification of support module

To evaluate whether the synthetic FFPE-like images generated by GAS provide greater utility in downstream classification tasks than frozen images, we designed a series of deep-learning classification tasks: margin assessment, sentinel lymph node metastasis prediction in breast cancer, benign versus malignant thyroid lesions, lung adenocarcinoma versus lung squamous cell carcinoma, benign versus malignant breast lesions, and breast carcinoma in situ versus invasive breast cancer.

For distinguishing lung adenocarcinoma from lung squamous cell carcinoma, cases from TCGA were used to develop the model. For the benign-versus-malignant thyroid, benign-versus-malignant breast, and carcinoma in situ versus invasive breast cancer tasks, FFPE data were used for model development. In contrast, for the margin assessment and sentinel lymph node metastasis tasks, diagnostic models were developed using either original frozen images or GAS-generated images only, without incorporating any FFPE data. The classification results obtained using GAS-generated images were compared with those using frozen images as inputs.
The prediction process using the generated images was as follows: tissue regions in the frozen images were segmented and divided into 512 × 512 patches; synthetic FFPE-like patches were generated from the frozen patches and their features extracted; finally, the patch-level features were aggregated into a slide-level feature for prediction. Feature aggregation followed the attention-based pooling function introduced in CLAM (Eq. (3)):

$${a}_{i,k}=\frac{\exp \{{W}_{a,i}(\tanh ({V}_{a}{{\bf{h}}}_{k})\odot \mathrm{sigm}({U}_{a}{{\bf{h}}}_{k}))\}}{{\sum }_{j=1}^{K}\exp \{{W}_{a,i}(\tanh ({V}_{a}{{\bf{h}}}_{j})\odot \mathrm{sigm}({U}_{a}{{\bf{h}}}_{j}))\}} \quad (3)$$

$${{\bf{f}}}_{slide,i}=\mathop{\sum }\limits_{k=1}^{K}{a}_{i,k}{{\bf{h}}}_{k}$$

Here, f_slide,i denotes the aggregated feature representation for class i, a_{i,k} is the attention score for patch k, and h_k is the patch-level feature. V_a and U_a are learnable fully connected layers, and W_{a,i} represents one of N parallel attention branches.

GAS software of support module

A prospective assessment using a sub-cohort of the Pro-External GZCC cohort was conducted to validate the clinical utility of GAS in intraoperative diagnostic workflows. We developed human–AI collaboration software to facilitate the process. Once frozen sections were prepared, they were scanned to create digital slides; after these slides were uploaded to the GAS software, pathologists could browse the frozen sections online.
If any areas appeared unclear, pathologists could click on the corresponding regions, and the software converted these areas into high-resolution FFPE-like images.

Quantification and statistical analysis

FID was used to measure image similarity for the generative models; in addition, the quality control model's scores served as a complementary metric for evaluating the generative models. For the quality control model, accuracy and AUROC were the primary evaluation metrics. External test sets from different centers allowed a more comprehensive assessment of the generalization properties of GAS. A P value less than 0.05 was considered statistically significant; significance was evaluated using the t test and the Wilcoxon signed-rank test. Data preprocessing and model development were conducted in Python (version 3.8.0) with the deep-learning platform PyTorch (version 2.3.0).
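FID compares the Gaussian statistics of real and generated feature embeddings. A minimal NumPy-only sketch, assuming features have already been extracted by an Inception-style network (the random feature sets below are synthetic illustrations):

```python
import numpy as np

def fid(feats_a, feats_b):
    """Frechet distance between two feature sets of shape (N, D).

    FID = ||mu_a - mu_b||^2 + Tr(Sa + Sb - 2 (Sa Sb)^{1/2}).
    Tr((Sa Sb)^{1/2}) is computed from the eigenvalues of Sa @ Sb,
    which are real and non-negative for PSD covariances.
    """
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 8))
close = rng.normal(0.0, 1.0, size=(1000, 8))   # same distribution as `real`
far = rng.normal(3.0, 1.0, size=(1000, 8))     # shifted distribution
print(fid(real, close) < fid(real, far))  # True: similar sets score lower
```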