Results 1 - 3 of 3
1.
Surg Endosc; 37(11): 8778-8784, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37580578

ABSTRACT

BACKGROUND: Automation of surgical phase recognition is a key effort in the development of Computer Vision (CV) algorithms for workflow optimization and video-based assessment. CV is a form of Artificial Intelligence (AI) that allows interpretation of images through deep learning (DL)-based algorithms. Improvements in Graphics Processing Unit (GPU) computing devices allow researchers to apply these algorithms to recognize video content in real time. Edge computing, where data is collected, analyzed, and acted upon close to the collection source, is essential to meet the demands of workflow optimization because it enables real-time algorithm application. We implemented a real-time phase recognition workflow and demonstrated its performance on 10 Robotic Inguinal Hernia Repairs (RIHR), obtaining phase predictions during the procedure. METHODS: Our phase recognition algorithm was developed with 211 videos of RIHR originally annotated into 14 surgical phases. Using these videos, a DL model with a ResNet-50 backbone was trained and validated to automatically recognize surgical phases. The model was deployed to the Nvidia® Jetson Xavier™ NX, a GPU-based edge computing device. RESULTS: The model was tested on 10 inguinal hernia repairs from four surgeons in real time. Its output was improved with post-recording processing methods, such as merging the original phases into seven final phases (peritoneal scoring, mesh placement, preperitoneal dissection, reduction of hernia, out of body, peritoneal closure, and transitionary idle) and frame averaging. Predictions were made once per second with a processing latency of approximately 250 ms. The accuracy of the real-time predictions ranged from 59.8% to 78.2%, with an average of 68.7%. CONCLUSION: Real-time phase prediction of RIHR using a CV deep learning model was successfully implemented. This real-time CV phase segmentation system can be useful for monitoring surgical progress and can be integrated into software for hospital workflow optimization.


Subject(s)
Artificial Intelligence , Hernia, Inguinal , Humans , Operating Rooms , Hernia, Inguinal/surgery , Algorithms , Peritoneum
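The abstract above describes once-per-second, frame-level phase inference with a ResNet-50 classifier on an edge device. The snippet below is a minimal sketch of what such an inference loop might look like, assuming a PyTorch ResNet-50 head fine-tuned for the seven merged phases; the video source, weights file, and preprocessing are illustrative assumptions rather than the authors' released code.

import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

# Seven merged phases listed in the abstract.
PHASES = ["peritoneal scoring", "mesh placement", "preperitoneal dissection",
          "reduction of hernia", "out of body", "peritoneal closure",
          "transitionary idle"]

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(PHASES))
# model.load_state_dict(torch.load("phase_model.pt"))  # hypothetical trained weights
model = model.eval().to(device)

preprocess = T.Compose([
    T.ToTensor(),                        # HWC uint8 -> CHW float in [0, 1]
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)                # assumed live video feed
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % fps:                  # sample roughly one frame per second
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    x = preprocess(rgb).unsqueeze(0).to(device)
    with torch.no_grad():
        phase = PHASES[model(x).softmax(dim=1).argmax(dim=1).item()]
    print(phase)
cap.release()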
2.
Bioengineering (Basel); 10(6), 2023 May 27.
Article in English | MEDLINE | ID: mdl-37370585

ABSTRACT

Video-recorded robotic-assisted surgeries allow the use of automated computer vision and artificial intelligence/deep learning methods for surgical phase recognition, supporting quality assessment and workflow analysis. We considered a dataset of 209 videos of robotic-assisted laparoscopic inguinal hernia repair (RALIHR) collected from 8 surgeons, defined rigorous ground-truth annotation rules, and then pre-processed and annotated the videos. We deployed seven deep learning models to establish a baseline accuracy for surgical phase recognition and explored four advanced architectures. For rapid execution of the studies, we initially engaged three dozen MS-level engineering students in a competitive classroom setting, followed by focused research. We unified the data processing pipeline in a confirmatory study and explored a number of scenarios that differed in how the DL networks were trained and evaluated. For the scenario with 21 validation videos drawn from all surgeons, the Video Swin Transformer model achieved ~0.85 validation accuracy and the Perceiver IO model achieved ~0.84. Our studies affirm the necessity of close collaborative research between medical experts and engineers to develop automated surgical phase recognition models deployable in clinical settings.
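As a rough illustration of the clip-level validation described above, the sketch below scores a video phase classifier on held-out clips, using torchvision's swin3d_t as a stand-in for the Video Swin Transformer; the clip length, phase count, and data loader interface are assumptions, not the study's actual pipeline.

import torch
from torchvision.models.video import swin3d_t

NUM_PHASES = 7      # assumed number of surgical phase classes
CLIP_LEN = 16       # frames per clip (assumption)

model = swin3d_t(weights=None)
model.head = torch.nn.Linear(model.head.in_features, NUM_PHASES)
model.eval()

def validation_accuracy(model, loader, device="cpu"):
    """Accuracy over a DataLoader yielding (clips, labels),
    where clips are shaped (batch, channels, time, height, width)."""
    model.to(device)
    correct = total = 0
    with torch.no_grad():
        for clips, labels in loader:
            preds = model(clips.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / max(total, 1)

# Shape check with a random clip: output should be (1, NUM_PHASES).
dummy = torch.randn(1, 3, CLIP_LEN, 224, 224)
print(model(dummy).shape)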

3.
Methods; 202: 110-116, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34245871

ABSTRACT

This paper presents a heart murmur detection and multi-class classification approach based on machine learning. We extracted heart sound and murmur features of diagnostic importance and developed an additional 16 features that are not perceivable by the human ear but are valuable for improving murmur classification accuracy. We examined and compared the classification performance of supervised machine learning with k-nearest neighbor (KNN) and support vector machine (SVM) algorithms. We assembled a test repertoire of more than 450 heart sound and murmur episodes to evaluate murmur classification performance using cross-validation with 80-20 and 90-10 splits. As demonstrated in our evaluation, the chosen feature set resulted in classification accuracy consistently exceeding 90% for both classifiers.


Subject(s)
Heart Murmurs , Heart Sounds , Algorithms , Heart Murmurs/diagnosis , Humans , Machine Learning , Support Vector Machine
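As a minimal sketch of the cross-validation protocol described above, the snippet below scores KNN and SVM classifiers under repeated 80-20 and 90-10 splits with scikit-learn; the feature matrix is random placeholder data standing in for the paper's extracted heart sound and murmur features.

import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(450, 20))          # placeholder feature matrix (episodes x features)
y = rng.integers(0, 5, size=450)        # placeholder multi-class murmur labels

classifiers = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

# Repeated random splits approximating the 80-20 and 90-10 protocols in the abstract.
for test_size in (0.2, 0.1):
    cv = ShuffleSplit(n_splits=10, test_size=test_size, random_state=0)
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=cv)
        print(f"{name}, {int((1 - test_size) * 100)}-{int(test_size * 100)} split: "
              f"{scores.mean():.3f} +/- {scores.std():.3f}")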