Results 1 - 4 of 4
1.
Surg Endosc; 38(1): 171-178, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37950028

ABSTRACT

BACKGROUND: In laparoscopic right hemicolectomy (RHC) for right-sided colon cancer, accurate recognition of the vascular anatomy is required for appropriate lymph node harvesting and safe operative procedures. We aimed to develop a deep learning model that enables automatic recognition and visualization of the major blood vessels in laparoscopic RHC.

MATERIALS AND METHODS: This was a single-institution retrospective feasibility study. Semantic segmentation of three vessel areas, the superior mesenteric vein (SMV), ileocolic artery (ICA), and ileocolic vein (ICV), was performed using the developed deep learning model. The Dice coefficient, recall, and precision were used as evaluation metrics to quantify model performance after fivefold cross-validation. The model was further qualitatively appraised by 13 surgeons, based on a grading rubric, to assess its potential for clinical application.

RESULTS: In total, 2624 images were extracted from 104 laparoscopic colectomy videos for right-sided colon cancer, and the pixels corresponding to the SMV, ICA, and ICV were manually annotated and used as training data. SMV recognition was the most accurate, with all three evaluation metrics above 0.75, whereas the recognition accuracy for the ICA and ICV ranged from 0.53 to 0.57 across the three metrics. Additionally, all 13 surgeons gave acceptable ratings for the possibility of clinical application in the rubric-based quantitative evaluation.

CONCLUSION: We developed a deep-learning-based vessel segmentation model capable of feasible identification and visualization of the major blood vessels in RHC. This model may provide surgeons with reliable navigational visualization of vessels.


Subject(s)
Colonic Neoplasms, Deep Learning, Laparoscopy, Humans, Colonic Neoplasms/diagnostic imaging, Colonic Neoplasms/surgery, Colonic Neoplasms/blood supply, Retrospective Studies, Laparoscopy/methods, Colectomy/methods
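The Dice coefficient reported above measures the overlap between a predicted segmentation mask and its ground-truth annotation. A minimal sketch of the standard formula, 2|A∩B| / (|A| + |B|), on binary masks (the study's exact implementation is not given; the example masks are hypothetical):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks.

    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Hypothetical 4x4 masks: 3 positive pixels each, 2 of which overlap
pred = np.zeros((4, 4)); pred[0, 0:3] = 1
target = np.zeros((4, 4)); target[0, 1:4] = 1
print(round(dice_coefficient(pred, target), 3))  # 2*2/(3+3) -> 0.667
```

A Dice value above 0.75, as reported for the SMV, means the predicted and annotated vessel regions overlap in well over half their combined area.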
2.
Br J Surg; 110(10): 1355-1358, 2023 Sep 6.
Article in English | MEDLINE | ID: mdl-37552629

ABSTRACT

To prevent intraoperative organ injury, surgeons strive to identify anatomical structures as early and accurately as possible during surgery. The objective of this prospective observational study was to develop artificial intelligence (AI)-based real-time automatic organ recognition models for laparoscopic surgery and to compare their performance with that of surgeons. The time taken to recognize target anatomy was compared between the AI models and both expert and novice surgeons. The AI models recognized target anatomy faster than the surgeons, especially the novices. These findings suggest that AI has the potential to compensate for the skill and experience gap between surgeons.


Subject(s)
Colorectal Surgery, Digestive System Surgical Procedures, Laparoscopy, Humans, Artificial Intelligence
3.
Comput Methods Programs Biomed; 236: 107561, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37119774

ABSTRACT

BACKGROUND AND OBJECTIVE: To be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition, but with the democratization of robot-assisted surgery, new modalities such as kinematics are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value.

METHODS: The PETRAW challenge provided a data set of 150 peg transfer sequences performed on a virtual simulator. The data set included videos, kinematic data, semantic segmentation data, and annotations describing the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three concerned recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric because it accounts for class imbalance and is more clinically relevant than a frame-by-frame score.

RESULTS: Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks).

CONCLUSION: For all teams, multimodal surgical workflow recognition methods improved significantly on unimodal ones. However, the longer execution time of video/kinematic-based methods compared with kinematic-only methods must be considered: one must ask whether it is wise to increase computing time by 2000% to 20,000% only to gain roughly 3% in accuracy. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.


Subject(s)
Algorithms, Robotic Surgical Procedures, Humans, Workflow, Robotic Surgical Procedures/methods
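The balanced accuracy underlying the AD-Accuracy metric averages per-class recall, so a rare workflow phase weighs as much as a dominant one; a plain frame-by-frame score would be inflated by the majority class. A minimal sketch with hypothetical phase labels (the challenge's application-dependent weighting is not reproduced here):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: each workflow phase contributes equally."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Hypothetical frame-level phase labels: "idle" dominates the sequence
y_true = ["idle"] * 8 + ["transfer"] * 2
y_pred = ["idle"] * 8 + ["idle", "transfer"]
print(balanced_accuracy(y_true, y_pred))  # (8/8 + 1/2) / 2 = 0.75
```

Plain frame accuracy on the same sequence would be 9/10 = 0.9, masking that half of the rare "transfer" frames were missed; the balanced score of 0.75 exposes it.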
4.
Int J Surg; 109(4): 813-820, 2023 Apr 1.
Article in English | MEDLINE | ID: mdl-36999784

ABSTRACT

BACKGROUND: The preservation of autonomic nerves is the most important factor in maintaining genitourinary function in colorectal surgery; however, these nerves are not clearly recognisable, and their identification is strongly affected by surgical ability. Therefore, this study aimed to develop a deep learning model for the semantic segmentation of autonomic nerves during laparoscopic colorectal surgery and to experimentally verify the model through intraoperative use and pathological examination.

MATERIALS AND METHODS: The annotation data set comprised videos of laparoscopic colorectal surgery. Images of the hypogastric nerve (HGN) and superior hypogastric plexus (SHP) were manually annotated under a surgeon's supervision. The Dice coefficient was used to quantify model performance after five-fold cross-validation. The model was used in actual surgeries to compare its recognition timing with that of surgeons, and pathological examination was performed to confirm whether the samples labelled by the model from the colorectal branches of the HGN and SHP were nerves.

RESULTS: The data set comprised 12 978 video frames of the HGN from 245 videos and 5198 frames of the SHP from 44 videos. The mean (±SD) Dice coefficients of the HGN and SHP were 0.56 (±0.03) and 0.49 (±0.07), respectively. The proposed model was used in 12 surgeries; it recognised the right HGN earlier than the surgeons did in 50.0% of the cases, the left HGN earlier in 41.7%, and the SHP earlier in 50.0%. Pathological examination confirmed that all 11 samples were nerve tissue.

CONCLUSION: An approach for the deep-learning-based semantic segmentation of autonomic nerves was developed and experimentally validated. This model may facilitate intraoperative recognition during laparoscopic colorectal surgery.


Subject(s)
Colorectal Surgery, Deep Learning, Laparoscopy, Humans, Pilot Projects, Semantics, Autonomic Pathways/surgery, Laparoscopy/methods
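Both segmentation studies above evaluate with five-fold cross-validation over data extracted from surgical videos. When many frames come from the same video, splitting by video rather than by frame keeps near-duplicate frames out of the validation fold. A hypothetical sketch of such a grouped split (the studies do not state their exact splitting procedure; names like `case_000` are invented):

```python
def five_fold_splits(video_ids, n_folds=5):
    """Partition video IDs into n_folds groups; each group is held out once.

    Splitting at the video level prevents frames from one surgery from
    appearing in both the training and validation sets.
    """
    folds = [video_ids[i::n_folds] for i in range(n_folds)]
    for k in range(n_folds):
        val = folds[k]
        train = [v for j, fold in enumerate(folds) if j != k for v in fold]
        yield train, val

videos = [f"case_{i:03d}" for i in range(10)]  # hypothetical video IDs
for train, val in five_fold_splits(videos):
    assert not set(train) & set(val)  # no video leaks between the sets
```

Each video then serves exactly once for validation, and the reported metric is the mean over the five held-out folds.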