Results 1 - 20 of 58
1.
Surg Today ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38740574

ABSTRACT

We developed a laparoscopic sigmoidectomy simulator called Sigmaster, a training device that closely reproduces the membrane structures of the human body and allows surgeons to experience the entire laparoscopic sigmoidectomy process. Sigmaster was designed to accurately reproduce the anatomical layer structure and the arrangement of characteristic organs in each layer, and to be conductive so that energy devices can be used. Dry polyester fibers were used to reproduce the layered structures, which include characteristic blood vessels, nerve sheaths, and intestinal tracts. The adhesive strength of the layers was controlled to allow realistic peeling techniques. The features of Sigmaster are illustrated through a comparison of simulated sigmoidectomy using Sigmaster with actual surgery.

2.
Surg Endosc ; 38(2): 1088-1095, 2024 02.
Article in English | MEDLINE | ID: mdl-38216749

ABSTRACT

BACKGROUND: Precise recognition of liver vessels during liver parenchymal dissection is a crucial technique in laparoscopic liver resection (LLR). This retrospective feasibility study aimed to develop artificial intelligence (AI) models to recognize liver vessels in LLR and to evaluate their accuracy and real-time performance. METHODS: Images were extracted from LLR videos, and the hepatic veins and Glissonean pedicles were labeled separately. Two AI models were developed to recognize liver vessels: the "2-class model," which recognized both hepatic veins and Glissonean pedicles as equivalent vessels and distinguished them from the background class, and the "3-class model," which recognized them all separately. The Feature Pyramid Network was used as the neural network architecture for the semantic segmentation task of both models. The models were evaluated using fivefold cross-validation, with the Dice coefficient (DC) as the evaluation metric. Ten gastroenterological surgeons also evaluated the models qualitatively using a rubric. RESULTS: In total, 2421 frames from 48 video clips were extracted. The mean DC value of the 2-class model was 0.789, with a processing time of 0.094 s. The mean DC values for the hepatic vein and the Glissonean pedicle in the 3-class model were 0.631 and 0.482, respectively. The average processing time for the 3-class model was 0.097 s. In the qualitative evaluation by surgeons, the false-negative and false-positive ratings of the 2-class model averaged 4.40 and 3.46, respectively, on a five-point scale, while the false-negative, false-positive, and vessel differentiation ratings of the 3-class model averaged 4.36, 3.44, and 3.28, respectively. CONCLUSION: We successfully developed deep-learning models that recognize liver vessels in LLR with high accuracy and sufficient processing speed. These findings suggest the potential of a new real-time automated navigation system for LLR.
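As an aside on the Dice coefficient (DC) used as the evaluation metric above, a minimal sketch of how it is typically computed for a predicted binary mask against a ground-truth annotation (illustrative only, not the authors' code):

```python
# Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """2 * |A intersect B| / (|A| + |B|) for two boolean masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# A perfect prediction scores 1.0; no overlap scores ~0.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(dice_coefficient(mask, mask))  # 1.0
```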


Subject(s)
Artificial Intelligence , Laparoscopy , Humans , Retrospective Studies , Liver/diagnostic imaging , Liver/surgery , Liver/blood supply , Hepatectomy/methods , Laparoscopy/methods
3.
Surg Endosc ; 38(1): 171-178, 2024 01.
Article in English | MEDLINE | ID: mdl-37950028

ABSTRACT

BACKGROUND: In laparoscopic right hemicolectomy (RHC) for right-sided colon cancer, accurate recognition of the vascular anatomy is required for appropriate lymph node harvesting and safe operative procedures. We aimed to develop a deep learning model that enables automatic recognition and visualization of the major blood vessels in laparoscopic RHC. MATERIALS AND METHODS: This was a single-institution retrospective feasibility study. Semantic segmentation of three vessel areas, the superior mesenteric vein (SMV), ileocolic artery (ICA), and ileocolic vein (ICV), was performed using the developed deep learning model. The Dice coefficient, recall, and precision were used as evaluation metrics to quantify model performance after fivefold cross-validation. The model was further appraised qualitatively by 13 surgeons, based on a grading rubric, to assess its potential for clinical application. RESULTS: In total, 2624 images were extracted from 104 videos of laparoscopic colectomy for right-sided colon cancer, and the pixels corresponding to the SMV, ICA, and ICV were manually annotated and used as training data. SMV recognition was the most accurate, with all three evaluation metrics above 0.75, whereas the recognition accuracy for the ICA and ICV ranged from 0.53 to 0.57 across the three metrics. Additionally, all 13 surgeons gave acceptable ratings for the possibility of clinical application in the rubric-based quantitative evaluation. CONCLUSION: We developed a deep learning-based vessel segmentation model capable of feasible identification and visualization of the major blood vessels in RHC. This model may help surgeons achieve reliable navigation through vessel visualization.
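For context, the per-class Dice, recall, and precision reported here can all be derived from true-positive, false-positive, and false-negative pixel counts; a hedged sketch (not the study's implementation):

```python
# Pixel-wise metrics for one vessel class (e.g., the SMV) from binary masks.
import numpy as np

def pixel_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    recall = tp / (tp + fn + eps)     # fraction of true vessel pixels found
    precision = tp / (tp + fp + eps)  # fraction of predicted pixels correct
    return dice, recall, precision
```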


Subject(s)
Colonic Neoplasms , Deep Learning , Laparoscopy , Humans , Colonic Neoplasms/diagnostic imaging , Colonic Neoplasms/surgery , Colonic Neoplasms/blood supply , Retrospective Studies , Laparoscopy/methods , Colectomy/methods
4.
Gastric Cancer ; 27(1): 187-196, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38038811

ABSTRACT

BACKGROUND: Gastric surgery involves numerous surgical phases; however, its steps can be clearly defined. Deep learning-based surgical phase recognition can promote the stylization of gastric surgery, with applications in automatic surgical skill assessment. This study aimed to develop a deep learning-based surgical phase-recognition model using multicenter videos of laparoscopic distal gastrectomy and to examine the feasibility of automatic surgical skill assessment using the developed model. METHODS: Surgical videos from 20 hospitals were used. Laparoscopic distal gastrectomy was defined and annotated into nine phases, and a deep learning-based image classification model was developed for phase recognition. We examined whether the developed model's outputs, including the number of frames in each phase and the adequacy of surgical field development during the supra-pancreatic lymphadenectomy phase, correlated with the manually assigned skill assessment score. RESULTS: The overall accuracy of phase recognition was 88.8%. In the surgical skill assessment based on the number of frames during the phases of lymphadenectomy of the left greater curvature and reconstruction, the high-score group required significantly fewer frames than the low-score group (829 vs. 1,152, P < 0.01; 1,208 vs. 1,586, P = 0.01, respectively). The model's output score for the adequacy of surgical field development was significantly higher in the high-score group than in the low-score group (0.975 vs. 0.970, P = 0.04). CONCLUSION: The developed model achieved high accuracy in the phase-recognition task and has potential for application in automatic surgical skill assessment systems.
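To illustrate the frame-count-based skill index described above, one plausible way to tally frames per recognized phase from per-frame predictions (a sketch under stated assumptions; the nine-phase definition is from the abstract, the function name is hypothetical):

```python
# Tally how many frames a phase-recognition model assigns to each phase.
from collections import Counter

NUM_PHASES = 9  # the study defines nine phases of laparoscopic distal gastrectomy

def frames_per_phase(predicted_phases: list[int]) -> dict[int, int]:
    counts = Counter(predicted_phases)
    return {phase: counts.get(phase, 0) for phase in range(NUM_PHASES)}

# Fewer frames in a phase means a shorter phase; in the study, high-score
# surgeons needed significantly fewer frames in two of the phases.
print(frames_per_phase([0, 0, 1, 1, 1, 2, 2, 8]))
```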


Subject(s)
Laparoscopy , Stomach Neoplasms , Humans , Stomach Neoplasms/surgery , Laparoscopy/methods , Gastroenterostomy , Gastrectomy/methods
5.
Br J Surg ; 110(10): 1355-1358, 2023 09 06.
Article in English | MEDLINE | ID: mdl-37552629

ABSTRACT

To prevent intraoperative organ injury, surgeons strive to identify anatomical structures as early and accurately as possible during surgery. The objective of this prospective observational study was to develop artificial intelligence (AI)-based real-time automatic organ recognition models for laparoscopic surgery and to compare their performance with that of surgeons. The time taken to recognize the target anatomy was compared between the AI models and both expert and novice surgeons. The AI models recognized the target anatomy faster than the surgeons, especially the novice surgeons. These findings suggest that AI has the potential to compensate for the skill and experience gap between surgeons.


Subject(s)
Colorectal Surgery , Digestive System Surgical Procedures , Laparoscopy , Humans , Artificial Intelligence
6.
JAMA Surg ; 158(8): e231131, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37285142

ABSTRACT

Importance: Automatic surgical skill assessment with artificial intelligence (AI) is more objective than manual video review-based skill assessment and can reduce the human burden. Standardization of surgical field development is an important aspect of this skill assessment. Objective: To develop a deep learning model that can recognize standardized surgical fields in laparoscopic sigmoid colon resection and to evaluate the feasibility of automatic surgical skill assessment based on concordance with standardized surgical field development using the proposed deep learning model. Design, Setting, and Participants: This retrospective diagnostic study used intraoperative videos of laparoscopic colorectal surgery submitted to the Japan Society for Endoscopic Surgery between August 2016 and November 2017. Data were analyzed from April 2020 to September 2022. Interventions: Videos of surgery performed by expert surgeons with Endoscopic Surgical Skill Qualification System (ESSQS) scores higher than 75 were used to construct a deep learning model able to recognize a standardized surgical field and to output its similarity to standardized surgical field development as an AI confidence score (AICS). The other videos were extracted as the validation set. Main Outcomes and Measures: Videos with scores more than 2 SDs below or above the mean were defined as the low- and high-score groups, respectively. The correlation between the AICS and the ESSQS score, and the screening performance of the AICS for the low- and high-score groups, were analyzed. Results: The sample included 650 intraoperative videos, 60 of which were used for model construction and 60 for validation. The Spearman rank correlation coefficient between the AICS and the ESSQS score was 0.81. Receiver operating characteristic (ROC) curves for screening the low- and high-score groups were plotted, and the areas under the ROC curve for low- and high-score group screening were 0.93 and 0.94, respectively. Conclusions and Relevance: The AICS from the developed model strongly correlated with the ESSQS score, demonstrating the model's feasibility as a method of automatic surgical skill assessment. The findings also suggest the feasibility of the proposed model for creating an automated screening system for surgical skills and its potential application to other types of endoscopic procedures.
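The two statistics reported here, Spearman's rank correlation between AICS and ESSQS score and the area under the ROC curve for group screening, can be computed as follows (toy values, not the study data):

```python
# Rank correlation and ROC-AUC screening with toy scores.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

aics = np.array([0.91, 0.55, 0.78, 0.33, 0.86, 0.61])   # AI confidence scores
essqs = np.array([88.0, 62.0, 75.0, 48.0, 83.0, 70.0])  # manual ESSQS scores

rho, p = spearmanr(aics, essqs)  # the study reports rho = 0.81

low_score = np.array([0, 1, 0, 1, 0, 0])  # toy labels: 1 = low-score group
auc = roc_auc_score(low_score, -aics)     # lower AICS should flag low scores
print(f"rho = {rho:.2f}, AUC = {auc:.2f}")
```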


Subject(s)
Digestive System Surgical Procedures , Laparoscopy , Humans , Artificial Intelligence , Retrospective Studies , Laparoscopy/methods , ROC Curve
7.
BMC Surg ; 23(1): 121, 2023 May 11.
Article in English | MEDLINE | ID: mdl-37170107

ABSTRACT

BACKGROUND: Anastomotic leakage has been reported to occur when the load on the anastomotic site exceeds the resistance created by sutures, staples, and early scars. It may be possible to avoid anastomotic leakage by covering and reinforcing the anastomotic site with a biocompatible material. The aim of this study was to evaluate the safety and feasibility of a novel external reinforcement device for gastrointestinal anastomosis in an experimental model. METHODS: A single pig was used in this non-survival study, and end-to-end anastomoses were created in six small bowel loops by a single-stapling technique using a circular stapler. Three of the six anastomoses were covered with the novel external reinforcement device. Air was injected, a pressure test of each anastomosis was performed, and the bursting pressure was measured. RESULTS: Reinforcement of the anastomotic site with the device was successfully performed in all anastomoses. The bursting pressure was 76.1 ± 5.7 mmHg in the control group and 126.8 ± 6.8 mmHg in the device group; the pressure in the device group was significantly higher than that in the control group (p = 0.0006). CONCLUSIONS: The novel external reinforcement device was safe and feasible for reinforcing anastomoses in this experimental model.
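As a side note, one plausible way to test the bursting-pressure difference (the abstract does not name the test used, so Welch's t-test here is an assumption, and the values are toy numbers chosen to resemble the reported means):

```python
# Comparing bursting pressures (mmHg) between device and control anastomoses.
from scipy.stats import ttest_ind

control = [70.2, 75.8, 82.3]
device = [119.5, 127.4, 133.5]
stat, p = ttest_ind(device, control, equal_var=False)  # Welch's t-test
print(f"t = {stat:.2f}, p = {p:.4f}")
```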


Subject(s)
Anastomotic Leak , Intestine, Small , Swine , Animals , Anastomotic Leak/prevention & control , Anastomotic Leak/surgery , Anastomosis, Surgical/methods , Intestine, Small/surgery , Surgical Stapling/methods , Cicatrix
8.
Head Neck ; 45(6): 1549-1557, 2023 06.
Article in English | MEDLINE | ID: mdl-37045798

ABSTRACT

BACKGROUND: The entire pharynx should be observed endoscopically to avoid missing pharyngeal lesions. An artificial intelligence (AI) model that recognizes anatomical locations can help identify blind spots. We developed and evaluated an AI model that classifies pharyngeal and laryngeal endoscopic locations. METHODS: The AI model was trained using 5382 endoscopic images categorized into 15 anatomical locations and evaluated using an independent dataset of 1110 images. The main outcomes were model accuracy, precision, recall, and F1-score. Moreover, we investigated the regions in the input images that contributed to the model's predictions using gradient-weighted class activation mapping (Grad-CAM) and Guided Grad-CAM. RESULTS: Our AI model correctly classified pharyngeal and laryngeal images into the 15 anatomical locations with an accuracy of 93.3%. The weighted averages of precision, recall, and F1-score were 0.934, 0.933, and 0.933, respectively. CONCLUSION: Our AI model performs excellently in determining pharyngeal and laryngeal anatomical locations and can help alert endoscopists to blind spots.
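Grad-CAM, used here to visualize the regions driving each prediction, can be sketched with forward/backward hooks on a classifier's last convolutional block (an illustrative PyTorch sketch; the ResNet-18 backbone and layer choice are assumptions, not the paper's architecture):

```python
# Grad-CAM over the last conv block of a ResNet-style 15-class classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)                 # stand-in backbone
model.fc = torch.nn.Linear(model.fc.in_features, 15)  # 15 anatomical locations
model.eval()

store = {}
model.layer4.register_forward_hook(lambda m, i, o: store.update(act=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0].detach()))

x = torch.randn(1, 3, 224, 224)        # toy input frame
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top-class score

weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
```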


Subject(s)
Larynx , Pharynx , Humans , Pharynx/diagnostic imaging , Artificial Intelligence , Endoscopy , Larynx/diagnostic imaging
9.
BJS Open ; 7(2)2023 03 07.
Article in English | MEDLINE | ID: mdl-36882082

ABSTRACT

BACKGROUND: Purse-string suture in transanal total mesorectal excision is a key procedural step. The aims of this study were to develop an automatic skill assessment system for purse-string suture in transanal total mesorectal excision using deep learning and to evaluate the reliability of the score output by the proposed system. METHODS: Purse-string suturing scenes extracted from consecutive transanal total mesorectal excision videos were manually scored using a performance rubric scale and used as training data for a deep learning model. Deep learning-based image regression analysis was performed, and the purse-string suture skill scores predicted by the trained deep learning model (artificial intelligence score) were output as continuous variables. The outcomes of interest were the correlations, assessed using Spearman's rank correlation coefficient, between the artificial intelligence score and the manual score, the purse-string suture time, and the surgeon's experience. RESULTS: Forty-five videos obtained from five surgeons were evaluated. The mean(s.d.) total manual score was 9.2(2.7) points, the mean(s.d.) total artificial intelligence score was 10.2(3.9) points, and the mean(s.d.) absolute error between the artificial intelligence and manual scores was 0.42(0.39). Further, the artificial intelligence score significantly correlated with the purse-string suture time (correlation coefficient = -0.728) and the surgeon's experience (P < 0.001). CONCLUSION: An automatic purse-string suture skill assessment system using deep learning-based video analysis was shown to be feasible, and the results indicate that the artificial intelligence score is reliable. This application could be expanded to other endoscopic surgeries and procedures.
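Deep learning-based image regression, as used here to predict a continuous skill score, can be set up by replacing a classifier's output layer with a single continuous output trained against the manual rubric scores (a minimal sketch; the backbone and MSE loss are assumptions, not the paper's configuration):

```python
# A CNN regression head that maps a suturing frame to a continuous skill score.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet34(weights=None)             # stand-in backbone
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # one continuous output
criterion = nn.MSELoss()

frames = torch.randn(8, 3, 224, 224)  # toy batch of video frames
manual_scores = torch.rand(8) * 12    # toy rubric scores as regression targets
pred = backbone(frames).squeeze(1)    # shape (8,) of predicted skill scores
loss = criterion(pred, manual_scores)
loss.backward()
```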


Subject(s)
Deep Learning , Rectal Neoplasms , Humans , Artificial Intelligence , Reproducibility of Results , Sutures
10.
Urology ; 173: 98-103, 2023 03.
Article in English | MEDLINE | ID: mdl-36572225

ABSTRACT

OBJECTIVE: To develop a convolutional neural network to recognize the seminal vesicle and vas deferens (SV-VD) in the posterior approach of robot-assisted radical prostatectomy (RARP) and to assess the performance of the convolutional neural network model under clinically relevant conditions. METHODS: Intraoperative videos of posterior-approach RARP performed at 3 institutions were obtained between 2019 and 2020. Using SV-VD dissection videos, semantic segmentation of the seminal vesicle-vas deferens area was performed with a convolutional neural network-based approach. The dataset was split into training and test data in a 10:3 ratio. The average time required by 6 novice urologists to correctly recognize the SV-VD was compared between intraoperative videos with and without segmentation masks generated by the convolutional neural network model, which was evaluated on the test data using the Dice similarity coefficient. Training and test datasets were compared using the Mann-Whitney U test and chi-square test. The time required to recognize the SV-VD was evaluated using the Mann-Whitney U test. RESULTS: From 26 patient videos, 1040 images were created (520 images with the SV-VD annotated and 520 images in which the SV-VD was not displayed). The convolutional neural network model achieved a Dice similarity coefficient of 0.73 on the test data. Compared with the original videos, videos with the generated segmentation mask enabled significantly faster recognition of the SV-VD (P < .001). CONCLUSION: The convolutional neural network model provides accurate recognition of the SV-VD in posterior-approach RARP, which may be helpful, especially for novice urologists.
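The Mann-Whitney U test used for the recognition-time comparison is available in SciPy; a toy illustration (not the study data):

```python
# Comparing recognition times (s) with vs. without the segmentation mask.
from scipy.stats import mannwhitneyu

with_mask = [3.1, 2.8, 4.0, 3.5, 2.9, 3.3]     # toy times for 6 novices
without_mask = [7.2, 6.5, 8.1, 9.0, 7.7, 6.9]

stat, p = mannwhitneyu(with_mask, without_mask, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```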


Subject(s)
Deep Learning , Robotics , Male , Humans , Seminal Vesicles , Vas Deferens , Prostatectomy/methods , Image Processing, Computer-Assisted
11.
Ann Surg ; 278(2): e250-e255, 2023 08 01.
Article in English | MEDLINE | ID: mdl-36250677

ABSTRACT

OBJECTIVE: To develop a machine learning model that automatically quantifies the spread of blood in the surgical field using intraoperative videos of laparoscopic colorectal surgery, and to evaluate whether the index measured with the developed model can be used to assess tissue handling skill. BACKGROUND: Although skill evaluation is crucial in laparoscopic surgery, existing evaluation systems suffer from evaluator subjectivity and are labor-intensive. Therefore, automatic evaluation using machine learning is potentially useful. MATERIALS AND METHODS: In this retrospective experimental study, we used training data with annotated labels of blood or non-blood pixels on intraoperative images to develop a machine learning model that classifies pixel RGB values into blood and non-blood. The blood pixel count per frame (the total number of blood pixels throughout a surgery divided by the number of frames) was compared among groups of surgeons with different tissue handling skills. RESULTS: The overall accuracy of the machine learning model on the blood classification task was 85.7%. The high tissue handling skill group had the lowest blood pixel count per frame, and the novice surgeon group had the highest count (mean [SD]: high tissue handling skill group 20,972.23 [19,287.05] vs. low tissue handling skill group 34,473.42 [28,144.29] vs. novice surgeon group 50,630.04 [42,427.76], P < 0.01). The difference between any 2 groups was significant. CONCLUSIONS: We developed a machine learning model to measure blood pixels in laparoscopic colorectal surgery images using RGB information. The blood pixel count per frame measured with this model significantly correlated with surgeons' tissue handling skills.
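Classifying pixels into blood and non-blood from RGB values and deriving the blood pixel count per frame could look like the following (an illustrative sketch with toy data; the study's actual classifier and training set are not reproduced here):

```python
# Pixel-level blood/non-blood classification from RGB values, plus the
# blood-pixel-count-per-frame index.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: rows are [R, G, B]; 1 = blood, 0 = non-blood.
X = np.array([[180, 30, 40], [160, 20, 35], [90, 120, 110], [200, 190, 180]])
y = np.array([1, 1, 0, 0])
clf = LogisticRegression().fit(X, y)

def blood_pixels(frame: np.ndarray) -> int:
    """Count pixels classified as blood in one H x W x 3 RGB frame."""
    return int(clf.predict(frame.reshape(-1, 3)).sum())

frames = [np.random.randint(0, 256, (4, 4, 3)) for _ in range(5)]
count_per_frame = sum(blood_pixels(f) for f in frames) / len(frames)
```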


Subject(s)
Colorectal Surgery , Laparoscopy , Humans , Retrospective Studies , Clinical Competence , Laparoscopy/methods , Machine Learning
12.
Surg Endosc ; 37(2): 835-845, 2023 02.
Article in English | MEDLINE | ID: mdl-36097096

ABSTRACT

BACKGROUND: Prioritizing patient health is essential, and given the risk of mortality, surgical techniques should be objectively evaluated. However, there is no comprehensive cross-disciplinary system that evaluates skills across all aspects among surgeons of varying levels. Therefore, this study aimed to uncover universal surgical competencies by decomposing and reconstructing the specific descriptions in operative performance assessment tools, as the basis for building an automated evaluation system using computer vision and machine learning-based analysis. METHODS: The study participants were primarily expert surgeons in the gastrointestinal surgery field, and the methodology comprised data collection, thematic analysis, and validation. For data collection, participants identified global operative performance assessment tools according to detailed inclusion and exclusion criteria; nine assessment tools were selected on this basis. Thematic analysis was then used to conduct detailed analyses of the descriptions in the tools: specific rules were coded, integrated, and discussed to obtain high-level concepts, namely "skill meta-competencies," which were recategorized for data validation and reliability assurance. RESULTS: In total, 189 types of skill performance were extracted from the nine tools' descriptions and organized into the following five competencies: (1) tissue handling, (2) psychomotor skill, (3) efficiency, (4) dissection quality, and (5) exposure quality. How these competencies' evaluation targets and purposes evolved over time was also assessed; the categorization showed relatively high reliability, indicating that it was reproducible. The inclusion of basic (tissue handling, psychomotor skill, and efficiency) and advanced (dissection quality and exposure quality) skills in these competencies enhances the tools' comprehensiveness. CONCLUSIONS: The identified competencies, which help surgeons formalize and implement tacit knowledge of operative performance, are highly reproducible. These results can form the basis of an automated skill evaluation system and help surgeons improve the provision of care and training, consequently improving patient prognosis.


Subject(s)
Internship and Residency , Surgeons , Humans , Reproducibility of Results , Educational Measurement , Data Collection , Clinical Competence
14.
JAMA Netw Open ; 5(8): e2226265, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35984660

ABSTRACT

Importance: Deep learning-based automatic surgical instrument recognition is an indispensable technology for surgical research and development. However, pixel-level recognition with high accuracy is required to make it suitable for surgical automation. Objective: To develop a deep learning model that can simultaneously recognize 8 types of surgical instruments frequently used in laparoscopic colorectal operations and to evaluate its recognition performance. Design, Setting, and Participants: This quality improvement study was conducted at a single institution with a multi-institutional dataset. Laparoscopic colorectal surgical videos recorded between April 1, 2009, and December 31, 2021, were included in the video dataset. Deep learning-based instance segmentation, an image recognition approach that recognizes each object individually and pixel by pixel instead of roughly enclosing it with a bounding box, was performed for the 8 types of surgical instruments. Main Outcomes and Measures: Average precision, calculated from the area under the precision-recall curve, was used as the evaluation metric; it summarizes the numbers of true-positive, false-positive, and false-negative instances, and the mean average precision value across the 8 types of surgical instruments was calculated. Fivefold cross-validation was used as the validation method: the annotation dataset was split into 5 segments, of which 4 were used for training and the remainder for validation. The dataset was split at the per-case level instead of the per-frame level; thus, images extracted from an intraoperative video in the training set never appeared in the validation set. Validation was performed for all 5 validation sets, and the average mean average precision was calculated. Results: In total, 337 laparoscopic colorectal surgical videos were used. Pixel-by-pixel annotation was manually performed for 81,760 labels on 38,628 static images, constituting the annotation dataset. The mean average precisions of the instance segmentation for surgical instruments were 90.9% for 3 instruments, 90.3% for 4 instruments, 91.6% for 6 instruments, and 91.8% for 8 instruments. Conclusions and Relevance: A deep learning-based instance segmentation model that simultaneously recognizes 8 types of surgical instruments with high accuracy was successfully developed. The accuracy was maintained even as the number of instrument types increased. This model can be applied to surgical innovations such as intraoperative navigation and surgical automation.
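The per-case (rather than per-frame) split described here corresponds to grouped cross-validation, where all frames from one video stay in the same fold; e.g. (a sketch, not the authors' pipeline):

```python
# Fivefold cross-validation split at the per-case level with GroupKFold.
import numpy as np
from sklearn.model_selection import GroupKFold

frame_ids = np.arange(20)              # toy frame indices
case_ids = np.repeat(np.arange(5), 4)  # 5 cases, 4 frames per case

for train_idx, val_idx in GroupKFold(n_splits=5).split(frame_ids, groups=case_ids):
    # No case contributes frames to both training and validation.
    assert set(case_ids[train_idx]).isdisjoint(set(case_ids[val_idx]))
```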


Subject(s)
Colorectal Neoplasms , Laparoscopy , Automation , Humans , Laparoscopy/methods , Neural Networks, Computer , Surgical Instruments
15.
Int J Surg ; 105: 106856, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36031068

ABSTRACT

BACKGROUND: To perform accurate laparoscopic hepatectomy (LH) without injury, novel intraoperative computer-assisted surgery (CAS) systems for LH are anticipated. Automated surgical workflow identification is a key component for developing CAS systems. This study aimed to develop a deep-learning model for automated surgical step identification in LH. MATERIALS AND METHODS: We constructed a dataset comprising 40 cases of pure LH videos; 30 and 10 cases were used for the training and testing datasets, respectively. Each video was divided into static images at 30 frames per second. LH was divided into nine surgical steps (Steps 0-8), and each frame in the training set was annotated as belonging to one of these steps. After extracorporeal actions (Step 0) were excluded from the videos, two deep-learning models for automated surgical step identification, an 8-step model (Model 1) and a 6-step model (Model 2), were developed using a convolutional neural network. Each frame in the testing dataset was classified in real time using the constructed models. RESULTS: More than 8 million frames from the pure LH videos were annotated for surgical step identification. The overall accuracy of Model 1 was 0.891, which increased to 0.947 in Model 2. The median and average per-case accuracy of Model 2 were 0.927 (range, 0.884-0.997) and 0.937 ± 0.04 (standard deviation), respectively. Real-time automated surgical step identification was performed at 21 frames per second. CONCLUSIONS: We developed a highly accurate deep-learning model for surgical step identification in pure LH. Our model could be applied to intraoperative CAS systems.
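Per-frame classification over a video, with throughput measured in frames per second, might be organized as follows (an OpenCV sketch; `model` is a hypothetical per-frame step classifier, not the paper's network):

```python
# Classify every frame of a video and report achieved frames per second.
import time
import cv2  # OpenCV

def classify_video(path: str, model) -> float:
    cap = cv2.VideoCapture(path)
    n_frames, start = 0, time.time()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        _step = model(frame)  # hypothetical classifier returning a step label
        n_frames += 1
    cap.release()
    return n_frames / (time.time() - start)  # the study reports 21 FPS
```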


Subject(s)
Artificial Intelligence , Laparoscopy , Hepatectomy , Humans , Laparoscopy/methods , Neural Networks, Computer , Workflow
16.
Sci Rep ; 12(1): 12575, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35869249

ABSTRACT

Clarifying the generalizability of deep-learning-based surgical-instrument segmentation networks in diverse surgical environments is important for recognizing the challenges of overfitting in surgical-device development. This study comprehensively evaluated the generalizability of deep neural networks for surgical instrument segmentation using 5238 images randomly extracted from 128 intraoperative videos. The video dataset contained 112 laparoscopic colorectal resection, 5 laparoscopic distal gastrectomy, 5 laparoscopic cholecystectomy, and 6 laparoscopic partial hepatectomy cases. Deep-learning-based surgical-instrument segmentation was performed on test sets with (1) the same conditions as the training set; (2) the same recognition target surgical instrument and surgery type but a different laparoscopic recording system; (3) the same laparoscopic recording system and surgery type but slightly different recognition target laparoscopic surgical forceps; and (4) the same laparoscopic recording system and recognition target surgical instrument but different surgery types. The mean average precision and mean intersection over union for test sets 1, 2, 3, and 4 were 0.941 and 0.887, 0.866 and 0.671, 0.772 and 0.676, and 0.588 and 0.395, respectively. The recognition accuracy therefore decreased even under slightly different conditions. The results of this study reveal the limited generalizability of deep neural networks in the field of surgical artificial intelligence and caution against biased datasets and models in deep-learning-based development. Trial Registration Number: 2020-315; date of registration: October 5, 2020.
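The mean intersection over union (mIoU) reported for each test set is the per-image IoU averaged over the set, e.g. (an illustrative sketch):

```python
# IoU for one instrument mask, averaged over a test set to give mean IoU.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def mean_iou(pairs) -> float:
    """pairs: iterable of (predicted_mask, ground_truth_mask) tuples."""
    return float(np.mean([iou(p, t) for p, t in pairs]))
```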


Subject(s)
Artificial Intelligence , Laparoscopy , Laparoscopy/methods , Neural Networks, Computer , Surgical Instruments
17.
Surg Endosc ; 36(8): 6105-6112, 2022 08.
Article in English | MEDLINE | ID: mdl-35764837

ABSTRACT

BACKGROUND: Recognition of the inferior mesenteric artery (IMA) during colorectal cancer surgery is crucial to avoid intraoperative hemorrhage and to define the appropriate lymph node dissection line. This retrospective feasibility study aimed to develop an IMA anatomical recognition model for laparoscopic colorectal resection using deep learning and to evaluate its recognition accuracy and real-time performance. METHODS: A complete multi-institutional surgical video database, LapSig300, was used for this study. Intraoperative videos of 60 patients who underwent laparoscopic sigmoid colon resection or high anterior resection were randomly extracted from the database. The semantic segmentation accuracy and real-time performance of the developed deep learning-based IMA recognition model were evaluated using the Dice similarity coefficient (DSC) and frames per second (FPS), respectively. RESULTS: In a fivefold cross-validation conducted using 1200 annotated images for the IMA semantic segmentation task, the mean DSC value was 0.798 (± 0.0161 SD) and the maximum DSC was 0.816. The proposed deep learning model operated at a speed of over 12 FPS. CONCLUSION: To the best of our knowledge, this is the first study to evaluate the feasibility of real-time vascular anatomical navigation during laparoscopic colorectal surgery using a deep learning-based semantic segmentation approach. This experimental study was conducted to confirm the feasibility of our model; therefore, its safety and usefulness were not verified in clinical practice. However, the proposed deep learning model demonstrated relatively high accuracy in recognizing the IMA in intraoperative images. The proposed approach has potential applications in image navigation systems for unfixed soft tissues and organs during various laparoscopic surgeries.


Subject(s)
Laparoscopy , Mesenteric Artery, Inferior , Colon, Sigmoid/blood supply , Humans , Image Processing, Computer-Assisted , Laparoscopy/methods , Lymph Node Excision/methods , Mesenteric Artery, Inferior/surgery , Retrospective Studies
18.
Surg Endosc ; 36(7): 5531-5539, 2022 07.
Article in English | MEDLINE | ID: mdl-35476155

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has been investigated extensively in the field of surgery, particularly in quality assurance. However, AI-guided navigation during surgery has not yet been put into practice because a sufficient level of performance has not been reached. We aimed to develop deep learning-based AI image processing software that identifies the location of the recurrent laryngeal nerve during thoracoscopic esophagectomy and to determine whether the incidence of recurrent laryngeal nerve paralysis is reduced using this software. METHODS: More than 3000 images extracted from 20 thoracoscopic esophagectomy videos and 40 images extracted from 8 thoracoscopic esophagectomy videos were annotated for identification of the recurrent laryngeal nerve. The Dice coefficient was used to assess the detection performance of the model and that of surgeons (specialized esophageal surgeons and certified general gastrointestinal surgeons), and performance was compared on a test set. RESULTS: The average Dice coefficient of the AI model was 0.58. This was not significantly different from that of the group of specialized esophageal surgeons (P = 0.26) but was significantly higher than that of the group of certified general gastrointestinal surgeons (P = 0.019). CONCLUSIONS: Our software's performance in identifying the recurrent laryngeal nerve was superior to that of general surgeons and nearly reached that of specialized surgeons. The software provides real-time identification and will be useful for thoracoscopic esophagectomy after further development.


Subject(s)
Esophageal Neoplasms , Esophagectomy , Artificial Intelligence , Esophageal Neoplasms/surgery , Esophagectomy/methods , Humans , Lymph Node Excision/methods , Recurrent Laryngeal Nerve/surgery , Retrospective Studies
19.
Ann Gastroenterol Surg ; 6(1): 29-36, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35106412

ABSTRACT

Technology has advanced surgery, especially minimally invasive surgery (MIS), including laparoscopic and robotic surgery, and has increased the number of technologies in the operating room. These technologies can provide further information about a surgical procedure, e.g., instrument usage and trajectories. Among surgery-related technologies, the amount of information that can be extracted from a surgical video captured by an endoscope is especially great. Therefore, the automation of data analysis is essential in surgery to reduce the complexity of the data while maximizing its utility, enabling new opportunities for research and development. Computer vision (CV) is the field of study that deals with how computers can understand digital images or videos and seeks to automate tasks that can be performed by the human visual system. Because this field deals with all the processes by which computers acquire real-world information, the term "CV" is broad, ranging from hardware for image sensing to AI-based image recognition. AI-based image recognition for simple tasks, such as recognizing snapshots, has advanced in recent years to a level comparable to humans. Surgical video recognition is a more complex and challenging task, but if it can be applied effectively to MIS, it could lead to future surgical advances such as intraoperative decision-making support and image-navigated surgery. Ultimately, automated surgery might be realized. In this article, we summarize recent advances and future perspectives of AI-related research and development in the field of surgery.
