Results 1 - 3 of 3
1.
Clin Ophthalmol ; 18: 647-657, 2024.
Article in English | MEDLINE | ID: mdl-38476358

ABSTRACT

Background: The capsulorhexis is one of the most important and challenging maneuvers in cataract surgery. Automated analysis of the anterior capsulotomy could aid surgical training through the provision of objective feedback and guidance to trainees.

Purpose: To develop and evaluate a deep learning-based system for the automated identification and semantic segmentation of the anterior capsulotomy in cataract surgery video.

Methods: In this study, we established the BigCat-Capsulotomy dataset, comprising 1556 video frames extracted from 190 recorded cataract surgery videos, for developing and validating the capsulotomy recognition system. The proposed system involves three primary stages: video preprocessing, capsulotomy video frame classification, and capsulotomy segmentation. To thoroughly evaluate its efficacy, we examined the performance of eight deep learning-based classification models and eleven segmentation models, assessing both accuracy and time consumption. Furthermore, we examined the factors influencing system performance by deploying it across the various surgical phases.

Results: The ResNet-152 model employed in the classification step of the proposed capsulotomy recognition system attained strong performance, with an overall Dice coefficient of 92.21%. Similarly, the UNet model with the DenseNet-169 backbone emerged as the most effective segmentation model among those investigated, achieving an overall Dice coefficient of 92.12%. Moreover, the time consumption of the system was low at 103.37 milliseconds per frame, facilitating its application in real-time scenarios. Phase-wise analysis indicated that the phacoemulsification phase (nuclear disassembly) was the most challenging to segment (Dice coefficient of 86.02%).

Conclusion: The experimental results showed that the proposed system is highly effective for intraoperative capsulotomy recognition during cataract surgery, demonstrating both high accuracy and real-time capability. This system holds significant potential for applications in surgical performance analysis, education, and intraoperative guidance systems.
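[Editor's note] For readers orienting themselves to the pipeline described above, the following is a minimal sketch of how such a frame-gating system could be wired together in PyTorch: a ResNet-152 classifier decides whether a frame shows the capsulotomy, and a UNet with a DenseNet-169 encoder (here taken from the segmentation_models_pytorch library) segments the positive frames. The weights, threshold, preprocessing, and library choices are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: gate a segmentation model on a per-frame classifier,
# in the spirit of the abstract above. Thresholds and input handling are placeholders.
import torch
import torchvision.models as tvm
import segmentation_models_pytorch as smp

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: binary frame classifier (capsulotomy frame vs. other phases).
classifier = tvm.resnet152(weights=None)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)
classifier = classifier.to(device).eval()

# Stage 2: pixel-wise capsulotomy segmentation (UNet with a DenseNet-169 encoder).
segmenter = smp.Unet(
    encoder_name="densenet169",
    encoder_weights=None,
    in_channels=3,
    classes=1,          # single foreground class: the capsulotomy region
).to(device).eval()

@torch.no_grad()
def process_frame(frame: torch.Tensor, threshold: float = 0.5):
    """frame: preprocessed tensor of shape (1, 3, H, W) with H, W divisible by 32."""
    frame = frame.to(device)
    is_capsulotomy = classifier(frame).softmax(dim=1)[0, 1] > threshold
    if not is_capsulotomy:
        return None                          # skip non-capsulotomy frames entirely
    mask = torch.sigmoid(segmenter(frame))   # (1, 1, H, W) probability map
    return (mask > threshold).float()

Gating the segmentation model on the classifier output keeps per-frame cost low during the many non-capsulotomy phases of a case, which is consistent with the real-time aim stated in the abstract.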

2.
IEEE J Biomed Health Inform ; 28(3): 1599-1610, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38127596

ABSTRACT

Cataract surgery remains the only definitive treatment for visually significant cataracts, which are a major cause of preventable blindness worldwide. Successful performance of cataract surgery relies on stable dilation of the pupil. Automated pupil segmentation from surgical videos can assist surgeons in detecting risk factors for pupillary instability before surgical complications develop. However, surgical illumination variations, surgical instrument obstruction, and lens material hydration during cataract surgery can limit pupil segmentation accuracy. To address these problems, we propose a novel method named adaptive wavelet tensor feature extraction (AWTFE), designed to enhance the accuracy of deep learning-powered pupil recognition systems. First, we represent the correlations among spatial information, color channels, and wavelet subbands by constructing a third-order tensor. We then utilize higher-order singular value decomposition to adaptively eliminate redundant information and estimate pupil feature information. We evaluated the proposed method by conducting experiments with state-of-the-art deep learning segmentation models on our BigCat dataset, consisting of 5,700 annotated intraoperative images from 190 cataract surgeries, and on the public CaDIS dataset. The experimental results reveal that the AWTFE method effectively identifies features relevant to the pupil region and improves the overall performance of segmentation models by up to 2.26% (BigCat) and 3.31% (CaDIS). Incorporation of the AWTFE method led to statistically significant improvements in segmentation performance (P < 1.29 × 10⁻¹⁰ for each model) and yielded the highest-performing model overall (Dice coefficients of 94.74% and 96.71% on the BigCat and CaDIS datasets, respectively). In performance comparisons, AWTFE consistently outperformed other feature extraction methods in enhancing model performance. In addition, the proposed AWTFE method significantly improved pupil recognition performance by up to 2.87% in particularly challenging phases of cataract surgery.
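[Editor's note] For orientation, the sketch below illustrates the general idea of a spatial × channel × wavelet-subband tensor followed by a truncated higher-order SVD; it is not the published AWTFE algorithm. The Haar wavelet, the rank choices left to the caller, and the helper names (build_feature_tensor, mode_multiply, hosvd_truncate) are all illustrative assumptions.

# Rough sketch of the general idea only (not the published AWTFE implementation):
# build a third-order tensor over (spatial locations, color channels, wavelet
# subbands) and use a truncated HOSVD to keep the leading components of each mode.
import numpy as np
import pywt  # PyWavelets

def build_feature_tensor(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) frame. Returns a (H/2 * W/2, 3, 4) tensor:
    spatial locations x color channels x Haar subbands (LL, LH, HL, HH)."""
    per_channel = []
    for c in range(3):
        cA, (cH, cV, cD) = pywt.dwt2(rgb[..., c].astype(np.float64), "haar")
        per_channel.append(np.stack([cA, cH, cV, cD], axis=-1))  # (H/2, W/2, 4)
    stacked = np.stack(per_channel, axis=-2)                     # (H/2, W/2, 3, 4)
    h, w = stacked.shape[:2]
    return stacked.reshape(h * w, 3, 4)

def mode_multiply(tensor: np.ndarray, matrix: np.ndarray, mode: int) -> np.ndarray:
    """n-mode product: contract `matrix` (J x I_mode) with the given tensor mode."""
    moved = np.moveaxis(tensor, mode, 0)
    flat = moved.reshape(moved.shape[0], -1)
    out = (matrix @ flat).reshape((matrix.shape[0],) + moved.shape[1:])
    return np.moveaxis(out, 0, mode)

def hosvd_truncate(tensor: np.ndarray, ranks: tuple) -> np.ndarray:
    """Truncated HOSVD: project each mode onto its leading left singular
    vectors and reconstruct, discarding low-energy (redundant) components."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)       # project onto the leading subspaces
    approx = core
    for mode, U in enumerate(factors):
        approx = mode_multiply(approx, U, mode)     # map back to the original space
    return approx

A feature tensor denoised this way could, for example, be reshaped back into an image-like array and supplied to a segmentation network in place of, or alongside, the raw RGB frame.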


Subject(s)
Cataract Extraction , Cataract , Humans , Pupil , Cataract Extraction/methods , Cataract/diagnostic imaging , Image Processing, Computer-Assisted
3.
Article in English | MEDLINE | ID: mdl-38082579

ABSTRACT

Cataract surgery remains the definitive treatment for cataracts, which are a major cause of preventable blindness worldwide. Adequate and stable dilation of the pupil is necessary for the successful performance of cataract surgery. Pupillary instability is a known risk factor for cataract surgery complications, and accurate segmentation of the pupil from surgical video streams can enable the analysis of intraoperative pupil changes in cataract surgery. However, pupil segmentation performance can suffer due to variations in surgical illumination, obscuration of the pupil by surgical instruments, and hydration of the lens material intraoperatively. To overcome these challenges, we present a novel method called tensor-based pupil feature extraction (TPFE) to improve the accuracy of pupil recognition systems. We analyzed the efficacy of this approach with experiments performed on a dataset of 4,560 annotated intraoperative images from 190 cataract surgeries in human patients. Our results indicate that TPFE can identify features relevant to pupil segmentation and that pupil segmentation with state-of-the-art deep learning models can be significantly improved with the TPFE method.
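[Editor's note] The pupil-segmentation abstracts above report per-model improvements and statistical significance (the second abstract reports Dice coefficients explicitly). The snippet below sketches only that evaluation pattern: per-image Dice scores for two model variants and a paired Wilcoxon signed-rank test. The choice of test and the mask arrays are assumptions for illustration; the abstracts do not specify which test was used.

# Minimal evaluation sketch: per-image Dice scores for two segmentation variants
# (e.g., baseline vs. feature-enhanced) and a paired significance test.
# The mask lists are hypothetical stand-ins for real model predictions.
import numpy as np
from scipy.stats import wilcoxon

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """pred, target: binary masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def compare_models(preds_a, preds_b, targets):
    """preds_a / preds_b: lists of predicted masks from two models;
    targets: matching ground-truth masks. Returns mean Dice for each model
    and the p-value of a paired Wilcoxon signed-rank test on per-image scores."""
    dice_a = np.array([dice_coefficient(p, t) for p, t in zip(preds_a, targets)])
    dice_b = np.array([dice_coefficient(p, t) for p, t in zip(preds_b, targets)])
    stat, p_value = wilcoxon(dice_a, dice_b)
    return dice_a.mean(), dice_b.mean(), p_value

Comparing per-image scores in a paired fashion, rather than pooling all pixels, keeps each frame as a matched observation and is one common way to substantiate claims of statistically significant improvement like those reported above.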


Subject(s)
Cataract Extraction , Cataract , Lens, Crystalline , Humans , Pupil , Cataract Extraction/methods , Surgical Instruments