Results 1 - 2 of 2
1.
J Telemed Telecare; 1357633X231166226, 2023 Apr 24.
Article in English | MEDLINE | ID: mdl-37093788

ABSTRACT

Existing challenges in surgical education ("see one, do one, teach one") as well as the COVID-19 pandemic make it necessary to develop new approaches to surgical training. This work therefore describes the implementation of a scalable remote solution called "TeleSTAR", which uses immersive, interactive and augmented reality elements to enhance surgical training in the operating room. The system is built around a fully digital surgical microscope in the context of Ear-Nose-Throat surgery. The microscope is equipped with a modular augmented reality software interface consisting of an interactive annotation mode for marking anatomical landmarks via a touch device and an experimental intraoperative image-based stereo-spectral algorithm unit for measuring anatomical details and highlighting tissue characteristics. The new educational tool was evaluated and tested during the broadcast of three live XR-based three-dimensional cochlear implant surgeries. The system scaled to five remote locations in parallel with low latency, while also offering a separate two-dimensional YouTube stream with higher latency. In total, more than 150 people were trained, including healthcare professionals, biomedical engineers and medical students.
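The annotation mode described above marks anatomical landmarks on a touch device and shares them with remote training sites. One way such an annotation could be modeled and serialized for broadcast is sketched below; all names, fields and values are illustrative assumptions, not taken from the TeleSTAR implementation:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class LandmarkAnnotation:
    """One anatomical landmark marked on the microscope view (hypothetical schema)."""
    label: str         # e.g. "round window"
    x: float           # normalized image coordinates, 0..1
    y: float
    timestamp_ms: int  # position in the surgical video stream

def serialize(annotation: LandmarkAnnotation) -> str:
    """Encode an annotation as JSON for transmission to remote viewers."""
    return json.dumps(asdict(annotation))

note = LandmarkAnnotation(label="round window", x=0.42, y=0.58, timestamp_ms=1_250)
payload = serialize(note)
```

Normalized coordinates keep the annotation resolution-independent, so each remote site can overlay it on whatever stream quality it receives.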

2.
Surg Endosc; 35(11): 6150-6157, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33237461

ABSTRACT

BACKGROUND: Operating room planning is a complex task, as pre-operative estimations of procedure duration have limited accuracy due to large variations in the course of procedures. Information about the progress of ongoing procedures is therefore essential to adapt the daily operating room schedule accordingly. This information should ideally be objective, automatically retrievable and available in real time. Recordings made during endoscopic surgeries are a potential source of progress information: a trained observer can recognize the ongoing surgical phase from watching these videos. The introduction of deep learning techniques created opportunities to retrieve information from surgical videos automatically. The aim of this study was to apply state-of-the-art deep learning techniques to a new set of endoscopic videos to automatically recognize the progress of a procedure, and to assess the feasibility of the approach in terms of performance, scalability and practical considerations. METHODS: A dataset of 33 laparoscopic cholecystectomies (LC) and 35 total laparoscopic hysterectomies (TLH) was used. The surgical tools in use and the ongoing surgical phases were annotated in the recordings. Neural networks were trained on a subset of the annotated videos, and the automatic recognition of surgical tools and phases was then assessed on another subset. The scalability of the networks was tested and practical considerations were recorded. RESULTS: Surgical tool and phase recognition reached average precision and recall values between 0.77 and 0.89. The scalability tests showed diverging results. Legal considerations had to be taken into account, and a considerable amount of time was needed to annotate the datasets. CONCLUSION: This study shows the potential of deep learning to automatically recognize information contained in surgical videos. It also provides insight into the applicability of such a technique to support operating room planning.
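The precision and recall figures reported in the abstract (between 0.77 and 0.89) can be computed per phase from frame-level predicted versus annotated labels. A minimal sketch of that computation, where the phase names and label sequences are illustrative and not data from the study:

```python
def precision_recall(true_labels, pred_labels, phase):
    """Per-phase precision and recall from frame-level phase annotations."""
    tp = sum(1 for t, p in zip(true_labels, pred_labels) if t == phase and p == phase)
    fp = sum(1 for t, p in zip(true_labels, pred_labels) if t != phase and p == phase)
    fn = sum(1 for t, p in zip(true_labels, pred_labels) if t == phase and p != phase)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative frame-level labels for six video frames (not from the study)
true = ["prep", "prep", "dissect", "dissect", "dissect", "close"]
pred = ["prep", "dissect", "dissect", "dissect", "close", "close"]
p, r = precision_recall(true, pred, "dissect")  # both 2/3 here
```

Averaging these per-phase values over all annotated phases yields the kind of summary metric the abstract reports.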


Subject(s)
Cholecystectomy, Laparoscopic , Deep Learning , Laparoscopy , Humans , Neural Networks, Computer