Results 1-7 of 7

1.
Int J Comput Assist Radiol Surg ; 19(6): 1045-1052, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38526613

ABSTRACT

PURPOSE: Efficient and precise surgical skills are essential for positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, the same cannot be said for open surgery. Open surgery generally involves more degrees of freedom than minimally invasive surgery, making its motions more difficult to interpret. In this paper, we present novel approaches to skill assessment for open surgery. METHODS: We analyzed a novel video dataset for open suturing training. We provide a detailed analysis of the dataset and define evaluation guidelines, using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset. RESULTS: The dataset is composed of 314 videos of approximately five minutes each. Model benchmarking yields an accuracy and F1 score of up to 75% and 72%, respectively, which is comparable to the performance of the individual human raters given the observed inter-rater agreement and variability. We present the first end-to-end trained approach to skill assessment for open surgery training. CONCLUSION: We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the door to new advances in skill assessment by enabling video-based evaluation of classic surgical techniques, with the potential to improve patient outcomes.
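
To make the benchmarked setup concrete, the following is a minimal PyTorch sketch of segment-based video classification into three skill levels. It substitutes torchvision's r3d_18 for the I3D and Video Swin backbones used in the paper, and the segment count, snippet length, and input resolution are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: temporal-segment-style consensus over video snippets.
# NOTE: r3d_18 is a stand-in backbone; the paper uses I3D / Video Swin.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class SegmentConsensusClassifier(nn.Module):
    def __init__(self, num_classes: int = 3, num_segments: int = 8):
        super().__init__()
        self.num_segments = num_segments
        self.backbone = r3d_18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, snippets: torch.Tensor) -> torch.Tensor:
        # snippets: (batch, segments, channels, frames, height, width)
        b, s = snippets.shape[:2]
        logits = self.backbone(snippets.flatten(0, 1))  # per-snippet logits
        return logits.view(b, s, -1).mean(dim=1)        # segment consensus

model = SegmentConsensusClassifier()
dummy = torch.randn(2, 8, 3, 16, 112, 112)  # two videos, eight snippets each
print(model(dummy).shape)  # torch.Size([2, 3]) -> three skill levels
```

Averaging per-snippet logits is the simplest consensus function a temporal segment network can use; it lets the model see sparse samples from a whole five-minute video at constant memory cost.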


Subject(s)
Clinical Competence , Suture Techniques , Video Recording , Humans , Suture Techniques/education , Benchmarking
2.
Med Image Anal ; 94: 103126, 2024 May.
Article in English | MEDLINE | ID: mdl-38452578

ABSTRACT

Batch Normalization's (BN) unique property of depending on other samples in a batch is known to cause problems in several tasks, including sequence modeling. Yet BN-related issues have hardly been studied for long video understanding, despite the ubiquitous use of BN in convolutional neural networks (CNNs) for feature extraction. Especially in surgical workflow analysis, where the lack of pretrained feature extractors has led to complex, multi-stage training pipelines, limited awareness of BN issues may have hidden the benefits of training CNNs and temporal models end to end. In this paper, we analyze the pitfalls of BN in video learning, including issues specific to online tasks, such as a 'cheating' effect in anticipation. We observe that BN's properties create major obstacles for end-to-end learning. However, using BN-free backbones, even simple CNN-LSTMs beat the state of the art on three surgical workflow benchmarks when combined with end-to-end training strategies that maximize temporal context. We conclude that awareness of BN's pitfalls is crucial for effective end-to-end learning in surgical tasks. By reproducing results on natural-video datasets, we hope our insights will benefit other areas of video learning as well. Code is available at: https://gitlab.com/nct_tso_public/pitfalls_bn.
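
As an illustration of the BN-free backbones the abstract refers to, here is a minimal sketch of a CNN-LSTM in which GroupNorm replaces BatchNorm, so normalization statistics never depend on other samples in the batch. Layer sizes and the number of workflow-phase classes are invented for illustration; this is not the authors' architecture.

```python
# Minimal sketch: a BN-free CNN-LSTM for online video tasks.
# GroupNorm normalizes within each sample, avoiding BN's cross-batch leakage.
import torch
import torch.nn as nn

class BNFreeCNNLSTM(nn.Module):
    def __init__(self, num_classes: int = 7, hidden: int = 256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),
            nn.GroupNorm(8, 32),  # batch-independent normalization
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),
            nn.GroupNorm(8, 64),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(feats)  # causal over time, so usable online
        return self.head(out)      # per-frame workflow logits
```

Because the LSTM only looks backward in time and GroupNorm uses no batch statistics, per-frame predictions cannot leak information across samples, which is the kind of pitfall the abstract describes.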


Subject(s)
Neural Networks, Computer , Humans , Workflow
3.
Updates Surg ; 75(5): 1103-1115, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37160843

ABSTRACT

Training improves skills in minimally invasive surgery. This study aimed to investigate the learning curves of complex motion parameters for both hands during a standardized training course using a novel measurement tool. An additional focus was placed on parameters representing surgical safety and precision. Fifty-six laparoscopic novices participated in a training course on the basic skills of minimally invasive surgery based on a modified Fundamentals of Laparoscopic Surgery (FLS) curriculum. Before, twice during, and once after the practical lessons, all participants performed four laparoscopic tasks (peg transfer, precision cut, balloon resection, and laparoscopic suture and knot), which were recorded and analyzed using an instrument motion analysis system. Participants significantly reduced the time per task in all four tasks (all p < 0.001). The instrument path length decreased significantly for both the dominant and non-dominant hand in all four tasks. Similarly, both hands became significantly faster in all tasks, with the exception of the non-dominant hand in the precision cut task. In terms of relative idle time, both hands improved significantly only in the peg transfer task, while in the precision cut task only the dominant hand performed better. In contrast, the combined motion volume of both hands was reduced in only one task (precision cut, p = 0.01), and no significant improvement in the relative time instruments spent out of view was observed. FLS-based skills training thus increases motion efficiency primarily by increasing speed and reducing idle time and path length. Parameters relevant to surgical safety and precision (motion volume and relative time out of view) are minimally affected by short-term training. Consequently, surgical training should also focus on safety- and precision-related parameters, and their assessment should be incorporated into basic skills training.
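
To make the reported metrics concrete, here is a hedged sketch of how path length, mean velocity, and relative idle time might be computed from a sampled instrument-tip trajectory. The sampling rate, units, and idle-speed threshold are assumptions; the actual instrument motion analysis system may define these parameters differently.

```python
# Hedged sketch: basic motion parameters from a 3-D instrument-tip track.
import numpy as np

def motion_parameters(xyz: np.ndarray, fps: float, idle_thresh: float = 5.0):
    """xyz: (n_samples, 3) tip positions in mm; idle_thresh in mm/s (assumed)."""
    steps = np.linalg.norm(np.diff(xyz, axis=0), axis=1)  # mm per sample
    speed = steps * fps                                   # mm/s
    return {
        "path_length_mm": float(steps.sum()),
        "mean_velocity_mm_s": float(speed.mean()),
        # fraction of samples where the tip moves slower than the threshold
        "relative_idle_time": float((speed < idle_thresh).mean()),
    }

track = np.cumsum(np.random.randn(3000, 3), axis=0)  # dummy 100 s track @ 30 fps
print(motion_parameters(track, fps=30.0))
```

Motion volume would additionally require a volume estimate of the visited workspace (for example, a convex hull of the tip positions), which is omitted here.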


Subject(s)
Laparoscopy , Humans , Prospective Studies , Laparoscopy/education , Curriculum , Minimally Invasive Surgical Procedures , Learning Curve , Clinical Competence
4.
IEEE Trans Med Imaging ; 41(7): 1677-1687, 2022 07.
Article in English | MEDLINE | ID: mdl-35108200

ABSTRACT

Automatically recognizing surgical gestures from surgical data is an important building block of automated activity recognition and analytics, technical skill assessment, intra-operative assistance and, eventually, robotic automation. The complexity of articulated instrument trajectories and the inherent variability due to surgical style and patient anatomy make analysis and fine-grained segmentation of surgical motion patterns from robot kinematics alone very difficult. Surgical video provides crucial information from the surgical site, giving context to the kinematic data and capturing the interaction between instruments and tissue. Yet sensor fusion of robot data and the surgical video stream is non-trivial, because the data differ in frequency, dimensionality, and discriminative capability. In this paper, we integrate multimodal attention mechanisms into a two-stream temporal convolutional network that computes relevance scores and dynamically weights kinematic and visual feature representations over time, aiming to aid multimodal network training and achieve effective sensor fusion. We report the results of our system on the JIGSAWS benchmark dataset and on a new in vivo dataset of suturing segments from robotic prostatectomy procedures. Our results are promising: the multimodal networks produce prediction sequences with higher accuracy and better temporal structure than the corresponding unimodal solutions. Visualization of the attention scores also gives physically interpretable insight into how the network weighs the strengths and weaknesses of each sensor.
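
The sketch below illustrates the general idea of attention-weighted fusion of per-time-step kinematic and visual features. The 76-dimensional kinematic input mirrors JIGSAWS, but the projection sizes and scoring function are assumptions, and the two-stream temporal convolutional network surrounding this module in the paper is not reproduced here.

```python
# Minimal sketch: time-varying attention weights over two modalities.
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    def __init__(self, kin_dim: int = 76, vis_dim: int = 512, d: int = 128):
        super().__init__()
        self.kin_proj = nn.Linear(kin_dim, d)
        self.vis_proj = nn.Linear(vis_dim, d)
        self.score = nn.Linear(d, 1)  # per-modality relevance score

    def forward(self, kin: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        # kin: (batch, time, kin_dim), vis: (batch, time, vis_dim)
        feats = torch.stack([self.kin_proj(kin), self.vis_proj(vis)], dim=2)
        weights = torch.softmax(self.score(torch.tanh(feats)), dim=2)
        return (weights * feats).sum(dim=2)  # (batch, time, d) fused features

fusion = ModalityAttentionFusion()
fused = fusion(torch.randn(4, 300, 76), torch.randn(4, 300, 512))
print(fused.shape)  # torch.Size([4, 300, 128])
```

Because the softmax is taken over the modality axis at every time step, the weights can be plotted over time, which is one way such attention scores become physically interpretable.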


Subject(s)
Robotic Surgical Procedures , Robotics , Biomechanical Phenomena , Gestures , Humans , Motion , Robotics/methods
5.
Surg Endosc ; 36(6): 4359-4368, 2022 06.
Article in English | MEDLINE | ID: mdl-34782961

ABSTRACT

BACKGROUND: Coffee can increase vigilance and performance, especially during sleep deprivation. A hypothetical downside of caffeine in the surgical field is its potential interaction with the ergonomics of movement and the central nervous system. The objective of this trial was to investigate the influence of caffeine on laparoscopic performance. METHODS: Fifty laparoscopic novices participated in this prospective, randomized, blinded crossover trial and were trained in a modified FLS curriculum until reaching predefined proficiency. Subsequently, all participants performed four laparoscopic tasks twice, once after consuming a placebo beverage and once after a caffeinated (200 mg) beverage. Comparative analyses were performed between the two conditions. Primary endpoint analysis included task time, task errors, OSATS score, and a performance analysis with an instrument motion analysis (IMA) system. RESULTS: Fifty participants completed the study; 68% of them drank coffee daily. Time to completion was comparable between the caffeine and placebo conditions for peg transfer (119 s vs 121 s; p = 0.73), precision cutting (157 s vs 163 s; p = 0.74), gallbladder resection (190 s vs 173 s; p = 0.6), and surgical knot (171 s vs 189 s; p = 0.68). Instrument motion analysis showed no significant differences between the caffeine and placebo conditions in any parameter: motion volume, path length, idle time, velocity, acceleration, and time out of view. OSATS scores likewise did not differ between conditions, regardless of task. Major errors occurred similarly in both conditions, except for one error criterion in the circle cutting task, which occurred significantly more often under caffeine (34% vs. 16%, p < 0.05). CONCLUSION: Objective IMA and performance scores of laparoscopic skills revealed that caffeine consumption neither enhances nor impairs the overall laparoscopic performance of surgical novices. The findings on major errors are not conclusive, but their occurrence could in part be negatively influenced by caffeine intake.
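
As a side note on the analysis such a crossover design implies: each participant contributes one measurement per condition, so task times call for a paired test. The sketch below illustrates this with a Wilcoxon signed-rank test on dummy data; the abstract does not state which paired test the authors used, and these numbers are not the study's data.

```python
# Hedged sketch: paired comparison of task times in a crossover design.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
placebo = rng.normal(121.0, 20.0, size=50)            # task times in seconds
caffeine = placebo + rng.normal(-2.0, 15.0, size=50)  # same participants, other condition

stat, p = wilcoxon(caffeine, placebo)  # tests the paired differences
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.3f}")
```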


Subject(s)
Caffeine , Laparoscopy , Clinical Competence , Coffee , Cross-Over Studies , Humans , Laparoscopy/education , Prospective Studies
6.
Front Hum Neurosci ; 15: 675700, 2021.
Article in English | MEDLINE | ID: mdl-34675789

ABSTRACT

The ability to perceive differences in depth is important in many daily life situations. It is also relevant in laparoscopic surgical procedures, which require the extrapolation of three-dimensional visual information from two-dimensional planar images. Besides visual-motor coordination, laparoscopic skills and binocular depth perception are demanding visual tasks for which learning is important. This study explored potential relations between binocular depth perception and individual variations in performance gains during laparoscopic skill acquisition in medical students naïve to such procedures. Individual differences in perceptual learning of binocular depth discrimination in a random dot stereogram (RDS) task were measured as variations in the slope changes of the logistic disparity psychometric curves from the first to the last block of the experiment. The results showed that individuals differed not only in their depth discrimination; the extent to which this performance changed across blocks also differed substantially between individuals. Of note, individual differences in perceptual learning of depth discrimination were associated with performance gains from laparoscopic skill training, both with respect to movement speed and to an efficiency score that considered both speed and precision. These results indicate that learning-related benefits for enhancing demanding visual processes are, in part, shared between the two tasks. Future studies that include a broader selection of task-varying monocular and binocular cues, as well as visual-motor coordination, are needed to further investigate potential mechanistic relations between depth perceptual learning and laparoscopic skill acquisition. A deeper understanding of these mechanisms would be important for applied research that aims to design behavioral interventions for enhancing technology-assisted laparoscopic skills.
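
To illustrate the psychometric analysis described here, the sketch below fits a logistic function to dummy disparity-judgment data and reads off its slope; learning would then be quantified as the slope change between the first and last blocks. The parameterization, units, and data are assumptions, not the study's.

```python
# Hedged sketch: fitting a logistic psychometric curve to disparity data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(disparity, threshold, slope):
    # proportion of one response type as a function of disparity
    return 1.0 / (1.0 + np.exp(-slope * (disparity - threshold)))

disparities = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])     # assumed units
p_response = np.array([0.12, 0.20, 0.45, 0.78, 0.94, 0.99])  # dummy rates

(threshold, slope), _ = curve_fit(logistic, disparities, p_response,
                                  p0=[5.0, 0.5])
print(f"threshold={threshold:.2f}, slope={slope:.2f}")
# Perceptual learning: slope(last block) - slope(first block), per individual.
```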

7.
Int J Comput Assist Radiol Surg ; 14(7): 1217-1225, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31104257

ABSTRACT

PURPOSE: Thorough education of novice surgeons is crucial to ensure that surgical interventions are effective and safe. One important aspect is the teaching of technical skills for minimally invasive or robot-assisted procedures. This includes the objective, and preferably automatic, assessment of surgical skill. Recent studies have presented good results for automatic, objective skill evaluation by collecting and analyzing motion data such as trajectories of surgical instruments. However, obtaining the motion data generally requires additional equipment for instrument tracking or the availability of a robotic surgery system to capture kinematic data. In contrast, we investigate a method for automatic, objective skill assessment that requires video data only. This has the advantage that video can be collected effortlessly during minimally invasive and robot-assisted training scenarios. METHODS: Our method builds on recent advances in deep learning-based video classification. Specifically, we propose to use an inflated 3D ConvNet to classify snippets, i.e., stacks of a few consecutive frames, extracted from surgical video. The network is extended into a temporal segment network during training. RESULTS: We evaluate the method on the publicly available JIGSAWS dataset, which consists of recordings of basic robot-assisted surgery tasks performed on a dry-lab bench-top model. Our approach achieves high skill classification accuracies, ranging from 95.1% to 100.0%. CONCLUSIONS: Our results demonstrate the feasibility of deep learning-based assessment of technical skill from surgical video. Notably, the 3D ConvNet is able to learn meaningful patterns directly from the data, obviating the need for manual feature engineering. Further evaluation will require more annotated data for training and testing.
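
As an illustration of the snippet-based input the method describes, the following sketch extracts evenly spaced stacks of consecutive frames from a video, which is the form of input a 3D ConvNet inside a temporal segment network would consume. The segment count, snippet length, and resolution are assumptions, not the paper's settings.

```python
# Hedged sketch: sampling fixed-length frame stacks ("snippets") from a video.
import numpy as np

def sample_snippets(video: np.ndarray, num_segments: int = 10,
                    snippet_len: int = 16) -> np.ndarray:
    """video: (n_frames, H, W, C) -> (num_segments, snippet_len, H, W, C)."""
    n_frames = video.shape[0]
    starts = np.linspace(0, n_frames - snippet_len, num_segments).astype(int)
    return np.stack([video[s:s + snippet_len] for s in starts])

video = np.zeros((450, 112, 112, 3), dtype=np.uint8)  # dummy 15 s clip @ 30 fps
print(sample_snippets(video).shape)  # (10, 16, 112, 112, 3)
```

At training time, a temporal segment network would classify each snippet and aggregate the per-snippet predictions, so the video-level skill label supervises sparse samples from the whole recording.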


Subject(s)
Clinical Competence , Neural Networks, Computer , Deep Learning , Humans , Motion , Surgeons