Artificial intelligence-integrated video analysis of vessel area changes and instrument motion for microsurgical skill assessment

We employed a combined AI-based video analysis approach to assess microvascular anastomosis performance by integrating VA changes and instrument motion. By comparing technical category scores with AI-generated parameters, we demonstrated that the parameters from both AI models encompassed a wide range of the technical skills required for microvascular anastomosis. Furthermore, ROC curve analysis indicated that integrating parameters from both AI models distinguished surgical performance better than either model alone. A distinctive feature of this study was the integration of multiple AI models incorporating both tool and tissue elements.

AI-based technical analytic approach

Traditional criteria-based scoring by multiple blinded expert surgeons was a highly reliable method for assessing surgeon performance with minimal interrater bias (Fig. 2 and Supplementary Table 1).
However, the significant demand on human expertise and time makes real-time feedback impractical during surgery and training [10,11,18].
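The gain from integrating parameters of both AI models, as in the ROC analysis above, can be illustrated on toy data. The feature values and the simple score-averaging fusion below are hypothetical and not the study's actual classifier; `roc_auc` uses the standard Mann-Whitney formulation of the area under the ROC curve.

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-trial scores (higher = better) from the two models:
labels = [1, 1, 1, 0, 0, 0]               # 1 = expert-rated good performance
tissue = [0.9, 0.4, 0.8, 0.5, 0.6, 0.1]   # e.g. an inverted vessel-deformation metric
motion = [0.5, 0.9, 0.35, 0.6, 0.2, 0.4]  # e.g. an inverted jerk-based metric

# Fuse the two parameters by simple averaging and compare discriminability.
fused = [(a + b) / 2 for a, b in zip(tissue, motion)]
```

On this toy data each parameter alone is an imperfect discriminator, while the fused score separates good from poor trials completely, mirroring the qualitative finding that combining both models improves classification.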
A recent study demonstrated that self-directed learning using digital instructional materials provides non-inferior outcomes in the initial stages of microsurgical skill acquisition compared with traditional instructor-led training [28]. However, direct feedback from an instructor continues to play a critical role when progressing toward more advanced skill levels and actual clinical practice.

AI technology can rapidly analyze the vast amounts of clinical data generated in modern operating theaters, offering real-time feedback capabilities. The proposed method's reliance on surgical video analysis makes it highly applicable in clinical settings [18]. Moreover, the manner in which AI is utilized in this study addresses concerns regarding transparency, explainability, and interpretability, which are fundamental risks associated with AI adoption.
One anticipated application is AI-assisted devices that promptly provide feedback on technical challenges, allowing trainees to refine their surgical skills more effectively [29,30]. Additionally, objective assessment of microsurgical skills could facilitate surgeon certification and credentialing processes within the medical community.

Theoretically, this approach could help implement a real-time warning system that alerts surgeons or other staff when instrument motion or tissue deformation exceeds a predefined safety threshold, thereby enhancing patient safety [17,31].
However, a large dataset of clinical cases involving adverse events such as vascular injury, bypass occlusion, and ischemic stroke would be required. For real-time clinical applications, further data collection and computational optimization are necessary to reduce processing latency and enhance practical usability. Given that our AI model can be applied to clinical surgical videos, future research could explore its utility in this context.

Related works: AI-integrated instrument tracking

To contextualize our results, we compared our AI-integrated approach with recent methods implementing instrument tracking in microsurgical practice. Franco-González et al. compared stereoscopic marker-based tracking with a YOLOv8-based deep learning method, reporting high accuracy and real-time capability [32].
Similarly, Magro et al. proposed a robust dual-instrument Kalman-based tracker that effectively mitigates tracking errors due to occlusion or motion blur [33]. Koskinen et al. utilized YOLOv5 for real-time tracking of microsurgical instruments, demonstrating its effectiveness in monitoring instrument kinematics and eye-hand coordination [34].

Our integrated AI model employs semantic segmentation (ResNet-50) for vessel deformation analysis and a trajectory-tracking algorithm (YOLOv2) for assessment of instrument motion.
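As an illustration of how vessel-deformation parameters can be derived from per-frame segmentation output, the sketch below computes a vessel-area series from binary masks and two toy metrics in the spirit of Max-ΔVA and the No. of TDE. The exact definitions (first-frame baseline, relative deviation, 10% event threshold) are assumptions for illustration, not the study's published formulas.

```python
def vessel_area_series(masks):
    """Per-frame vessel area = number of vessel-labelled pixels in each binary mask."""
    return [sum(sum(row) for row in mask) for mask in masks]

def max_delta_va(areas):
    """Largest relative deviation of vessel area from the first-frame baseline
    (a Max-ΔVA-style metric; higher suggests stronger tissue deformation)."""
    baseline = areas[0]
    return max(abs(a - baseline) / baseline for a in areas)

def count_deformation_events(areas, threshold=0.10):
    """Count excursions where the relative area change crosses `threshold`
    (a No.-of-TDE-style event count)."""
    baseline = areas[0]
    events, above = 0, False
    for a in areas:
        dev = abs(a - baseline) / baseline
        if dev > threshold and not above:
            events += 1
        above = dev > threshold
    return events
```

For example, an area series of [100, 80, 100, 105] yields a Max-ΔVA-style value of 0.2 and a single deformation event at a 10% threshold.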
The major advantage of our approach is its comprehensive and simultaneous evaluation of tissue deformation and instrument-handling smoothness, enabling robust and objective skill assessment even under challenging conditions such as variable illumination and partial occlusion. YOLO was selected for its computational speed and precision in real-time object detection, making it particularly suitable for live microsurgical video analysis. ResNet was chosen for its effectiveness in detailed image segmentation, facilitating accurate quantification of tissue deformation. However, unlike three-dimensional (3D) tracking methods [32], our current method relies solely on 2D imaging, potentially limiting depth perception accuracy.

These comparisons highlight both the strengths and limitations of our approach, emphasizing the need for future studies incorporating 3D tracking technologies and expanded datasets to further validate and refine AI-driven microsurgical skill assessment methodologies.

Future challenges

Microvascular anastomosis tasks typically consist of distinct phases, including vessel preparation, needle insertion, suture placement, thread pulling, and knot tying. As demonstrated by our video parameters for each surgical phase (phases A–D), separate analysis of each surgical phase is essential to enhance skill evaluation and training efficiency.
However, our current AI model cannot automatically distinguish these surgical phases.

Previous studies utilizing convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have demonstrated high accuracy in recognizing surgical phases and steps, particularly through the analysis of intraoperative video data [35,36]. Khan et al. successfully applied a combined CNN-RNN model to achieve accurate automated recognition of surgical workflows during endoscopic pituitary surgery, despite significant variability in surgical procedures and video appearances [35].
Similarly, automated operative phase and step recognition in vestibular schwannoma surgery further highlights the ability of these models to handle complex and lengthy surgical tasks [36]. Such methods could be integrated into our current AI framework to segment and individually evaluate each distinct phase of microvascular anastomosis, enabling detailed performance analytics and precise feedback.

Furthermore, establishing global standards for video recording is critical for broadly implementing and enhancing computer vision techniques in surgical settings. Guidelines that standardize resolution, frame rate, camera angle, illumination, and surgical-field coverage can significantly reduce algorithmic misclassification caused by shadows or instrument occlusion [18,37].
Such standardization ensures the consistent data quality that is crucial for training accurate and widely applicable AI models across diverse clinical settings [37]. These guidelines would also facilitate large-scale data sharing and collaboration, substantially improving the reliability and effectiveness of AI-based surgical assessment tools globally.

Technical considerations

The semantic segmentation AI models were designed to assess respect for tissue during needle manipulation [24].
As expected, Max-ΔVA correlated with respect for tissue in Phase B (from needle insertion to extraction). Proper needle extraction requires following the needle's natural curve to avoid tearing the vessel wall [6,7], and these technical nuances were well captured by these parameters. Additionally, the No. of TDE correlated with respect for tissue in Phase C, indicating that even while pulling the threads, surgeons must exercise caution to prevent thread-induced vessel wall injury [6,7].
These parameters also correlated with instrument handling, efficiency, suturing technique, and overall performance, an expected finding given that proper instrument handling and suturing technique are fundamental to respecting tissue. Thus, the technical categories are interrelated and mutually influential.

Trajectory-tracking AI models were designed to assess motion economy and the smoothness of surgical instrument movements [25]. Motion economy can be represented by the PD during a procedure. The smoothness and coordination of movement are frequently assessed using jerk-based metrics, where jerk is defined as the time derivative of acceleration.
Since these jerk indexes are influenced by both movement duration and amplitude, we utilized the NJI, first proposed by Flash and Hogan [38]. The NJI is calculated by multiplying the jerk index by (duration interval)^5 / (path length)^2, with lower values indicating smoother movements. The dimensionless NJI has been used as a quantitative metric of movement irregularity in various contexts, such as jaw movements during chewing [39,40],
laparoscopic skills [41], and microsurgical skills [16,25]. In this study, Rt-PD and Lt-NJI correlated with a broad range of technical categories.
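A minimal sketch of the dimensionless NJI described above, applied to a 2D instrument trajectory sampled at a fixed interval. The third-finite-difference jerk estimate and the simple rectangle-rule integration are simplifications assumed for illustration; the scaling follows the text, multiplying the integrated squared jerk by (duration)^5 / (path length)^2.

```python
import math

def normalized_jerk_index(xs, ys, dt):
    """Dimensionless normalized jerk index of a 2D trajectory sampled every dt
    seconds. Lower values indicate smoother movement."""
    def derivative(seq):
        return [(b - a) / dt for a, b in zip(seq, seq[1:])]

    # Jerk = third time derivative of position, estimated per axis.
    jx = derivative(derivative(derivative(xs)))
    jy = derivative(derivative(derivative(ys)))
    jerk_integral = sum((a * a + b * b) * dt for a, b in zip(jx, jy))

    duration = dt * (len(xs) - 1)
    path_length = sum(math.hypot(x1 - x0, y1 - y0)
                      for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]))
    return jerk_integral * duration ** 5 / path_length ** 2
```

A constant-velocity straight path has zero jerk and hence an NJI of zero, while a jittery path of similar extent scores higher, matching the interpretation that lower NJI reflects smoother handling.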
Despite the distinct roles of the two hands in microvascular anastomosis, coordinated bimanual manipulation is essential for optimal surgical performance [6,7]. With regard to Rt-NJI, these trends were particularly evident in Phases C and D, highlighting the importance of motion smoothness during thread pulling and knot tying in determining overall surgical proficiency.

Overall, integrating these parameters enabled a comprehensive assessment of complex microsurgical skills, as each parameter captured a different technical aspect. Despite its effectiveness, the model still exhibited some misclassification when differentiating good from poor performance. Notably, procedural time, a key determinant of surgical performance [24,25], was intentionally excluded from the analysis.
Although further exploration of additional parameters remains essential, integrating procedural time could significantly improve classification accuracy.

This study employed the Stanford Microsurgery and Resident Training (SMaRT) scale [10,11] as a criteria-based objective assessment tool, as it covers a wide range of microsurgical technical aspects.
Future research incorporating leakage tests or the Anastomosis Lapse Index [13], which identifies ten distinct types of anastomotic errors, could provide deeper insight into the relationship between the quality of the final product and various technical factors.

Limitations

As mentioned above, a fundamental technical limitation of this analytical approach is the lack of 3D kinematic data, particularly depth information. Another constraint was that kinematic data could not be captured when the surgical tool was outside the microscope's visual field [25].
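One pragmatic mitigation for frames in which the tool leaves the field of view is to interpolate short tracking gaps before computing kinematic metrics, while leaving long gaps unfilled so they can be excluded from analysis. This is a hypothetical sketch, not the authors' pipeline; `max_gap` is an assumed tuning parameter, and the 1D coordinate track stands in for each axis of the detected tool position.

```python
def fill_short_gaps(track, max_gap=5):
    """Linearly interpolate runs of None (lost detections) no longer than
    max_gap frames; longer gaps, and gaps at either end, are left as None."""
    out = list(track)
    i, n = 0, len(out)
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1                      # find the end of the None run
            gap = j - i
            if i > 0 and j < n and gap <= max_gap:
                x0, x1 = out[i - 1], out[j]  # bracketing detections
                for k in range(gap):
                    out[i + k] = x0 + (x1 - x0) * (k + 1) / (gap + 1)
            i = j
        else:
            i += 1
    return out
```

For instance, a two-frame dropout between detections at 0.0 and 3.0 is filled with 1.0 and 2.0, whereas a gap at the start or end of a recording stays unfilled.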
Additionally, the semantic segmentation model occasionally misclassified images containing shadows from surgical instruments or hands [24]. To mitigate this issue, future studies should expand the training dataset to include shadowed images, thereby improving model robustness. Given that the AI model in this study utilized the ResNet-50 and YOLOv2 networks, further investigation is warranted to optimize network architecture selection. Exploring alternative deep learning models or fine-tuning existing architectures could further improve the accuracy and generalizability of surgical video analysis [18].

Our study had a relatively small sample size with respect to the number of participating surgeons, although it included surgeons with a diverse range of skills. Moreover, we did not evaluate data from repeated training sessions to estimate the learning curve or determine whether feedback could enhance training efficacy.
Future studies should evaluate the impact of AI-assisted feedback on the learning curve of surgical trainees and assess whether real-time performance tracking leads to more efficient skill acquisition.<\/p>\n","protected":false},"excerpt":{"rendered":"We employed a combined AI-based video analysis approach to assess the microvascular anastomosis performance by integrating VA changes&hellip;\n","protected":false},"author":3,"featured_media":107067,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[691,738,16622,68610,10046,68611,10047,68609,68612,159,793,68613,158,68614,67,132,68],"class_list":{"0":"post-107066","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-deep-learning","11":"tag-ec-ic-bypass","12":"tag-humanities-and-social-sciences","13":"tag-microsurgical-training","14":"tag-multidisciplinary","15":"tag-neurosurgery","16":"tag-objective-surgical-skill-evaluation","17":"tag-science","18":"tag-software","19":"tag-surgical-education","20":"tag-technology","21":"tag-tissue-deformation","22":"tag-united-states","23":"tag-unitedstates","24":"tag-us"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@us\/114946863442614341","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/107066","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/comments?post=107066"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/posts\/107066\/revi
sions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media\/107067"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/media?parent=107066"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/categories?post=107066"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/us\/wp-json\/wp\/v2\/tags?post=107066"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}