Development of Machine Learning Copilot to Assist Novices in Learning Flexible Laryngoscopy.
Miller, Mattea E; Witte, Dan; Lina, Ioan; Walsh, Jonathan; Rameau, Anaïs; Bhatti, Nasir I.
Affiliation
  • Miller ME; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, Maryland, USA.
  • Witte D; Perceptron Health, Inc, New York, New York, USA.
  • Lina I; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, Maryland, USA.
  • Walsh J; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, Maryland, USA.
  • Rameau A; Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medical College, New York, New York, USA.
  • Bhatti NI; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, Maryland, USA.
Laryngoscope ; 2024 Oct 03.
Article in En | MEDLINE | ID: mdl-39363661
ABSTRACT

OBJECTIVES:

Here we describe the development and pilot testing of the first artificial intelligence (AI) software "copilot" designed to train novices to competently perform flexible fiberoptic laryngoscopy (FFL) on a manikin and improve their uptake of FFL skills.

METHODS:

Supervised machine learning was used to develop two models: an image classifier, dubbed the "anatomical region classifier," responsible for predicting the location of the camera in the upper aerodigestive tract, and an object detection model, dubbed the "anatomical structure detector," responsible for locating and identifying key anatomical structures in images. Training data were collected by performing FFL on an AirSim Combo Bronchi X manikin (TruCorp Ltd, United Kingdom) using an Ambu aScope 4 RhinoLaryngo Slim connected to an Ambu® aView™ 2 Advance Displaying Unit (Ambu A/S, Ballerup, Denmark). Medical students were prospectively recruited to try the FFL copilot, rate its ease of use, and self-rate their skills with and without the copilot.
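The two-model design described above can be illustrated with a minimal sketch of the per-frame copilot logic: a region classifier says where the camera is, and a structure detector marks which key structures have been captured. All names and the checklist logic here are hypothetical illustrations, not taken from the paper's implementation.

```python
# Hypothetical per-frame copilot logic: combine the "anatomical region
# classifier" output with the "anatomical structure detector" output to
# track which key structures have been sufficiently captured.
# Region and structure names are illustrative assumptions.

REGIONS = ["nasal_cavity", "nasopharynx", "oropharynx", "larynx"]
KEY_STRUCTURES = {"epiglottis", "vocal_folds", "arytenoids"}

def update_checklist(frame_region: str, detected: set, captured: set) -> set:
    """Mark key structures as captured once the detector reports them."""
    if frame_region not in REGIONS:
        raise ValueError(f"unknown region: {frame_region}")
    # Only structures on the exam checklist are tracked.
    return captured | (detected & KEY_STRUCTURES)

# Simulate a few frames of a pass toward the larynx.
captured = set()
captured = update_checklist("oropharynx", {"epiglottis"}, captured)
captured = update_checklist("larynx", {"vocal_folds", "arytenoids"}, captured)
remaining = KEY_STRUCTURES - captured  # structures still needed for a complete exam
```

In this toy run all three checklist structures are seen, so `remaining` is empty; a real copilot would prompt the user until that is true.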

RESULTS:

The model classified anatomical regions with an overall accuracy of 91.9% on the validation set and 80.1% on the test set. The detection model located anatomical structures with an overall mean average precision of 0.642. Through various optimizations, we were able to run the AI copilot at approximately 28 frames per second (FPS), which is effectively indistinguishable from real time and nearly matches the video frame rate of 30 FPS. Sixty-four novice medical students were recruited for feedback on the copilot. Although 90.9% strongly agreed or agreed that the AI copilot was easy to use, their self-ratings of FFL skills after using the copilot were, overall, equivalent to their self-ratings without it.
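The real-time claim can be checked with simple arithmetic: frame rate converts to a per-frame time budget as 1000 ms divided by FPS, so 28 FPS leaves about 35.7 ms of compute per frame against the 33.3 ms spacing of 30 FPS source video. A minimal sketch:

```python
# Convert a frame rate to the time available per frame, in milliseconds.
def frame_budget_ms(fps: float) -> float:
    return 1000.0 / fps

copilot_ms = frame_budget_ms(28)  # copilot inference time per frame
video_ms = frame_budget_ms(30)    # spacing between source video frames
gap_ms = copilot_ms - video_ms    # how far the copilot lags one frame interval
```

The ~2.4 ms gap per frame explains why 28 FPS is perceptually near-real-time against a 30 FPS source.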

CONCLUSIONS:

The AI copilot tracked successful capture of diagnosable views of key anatomical structures, effectively guiding users through FFL to ensure all anatomical structures were sufficiently captured. This tool has the potential to assist novices in efficiently gaining competence in FFL.
LEVEL OF EVIDENCE: NA. Laryngoscope, 2024.

Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: En | Journal: Laryngoscope | Journal subject: OTOLARYNGOLOGY | Publication year: 2024 | Document type: Article | Country of affiliation: United States | Country of publication: United States