Developing artificial intelligence models for medical student suturing and knot-tying video-based assessment and coaching.
Nagaraj, Madhuri B; Namazi, Babak; Sankaranarayanan, Ganesh; Scott, Daniel J.
  • Nagaraj MB; Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA. Madhuri.nagaraj@gmail.com.
  • Namazi B; University of Texas Southwestern Simulation Center, 2001 Inwood Road, Dallas, TX, 75390-9092, USA.
  • Sankaranarayanan G; Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA.
  • Scott DJ; Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA.
Surg Endosc ; 2022 Aug 18.
Article in English | MEDLINE | ID: covidwho-2243633
ABSTRACT

BACKGROUND:

Early introduction and distributed learning have been shown to improve student comfort with basic requisite suturing skills. The need for more frequent and directed feedback, however, remains an enduring concern for both remote and in-person training. A previous in-person curriculum for our second-year medical students transitioning to clerkships was adapted to an at-home video-based assessment model due to the social distancing implications of COVID-19. We aimed to develop an Artificial Intelligence (AI) model to perform video-based assessment.

METHODS:

Second-year medical students were asked to submit a video of a simple interrupted knot tied on a Penrose drain using instrument-tie technique after self-training to proficiency. Proficiency was defined as performing the task in under two minutes with no critical errors. All videos were first manually given a pass-fail rating and then underwent task segmentation. We developed and trained two AI models based on convolutional neural networks to identify errors (instrument holding and knot tying) and provide automated ratings.
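The two error detectors described above are binary classifiers whose outputs feed an overall pass/fail rating. A minimal sketch of that aggregation step is below; the function name, probability inputs, and 0.5 threshold are illustrative assumptions, not details from the paper, and the CNN detectors themselves are stand-ins for any callable that returns an error probability per video.

```python
def rate_video(p_instrument_error: float, p_knot_error: float,
               threshold: float = 0.5) -> str:
    """Combine two binary error-detector outputs into a pass/fail rating.

    A video fails if either detector flags a critical error; the failing
    error categories are listed in the rating string.
    """
    errors = []
    if p_instrument_error >= threshold:
        errors.append("instrument-holding")
    if p_knot_error >= threshold:
        errors.append("knot-tying")
    return "fail: " + ", ".join(errors) if errors else "pass"
```

For example, `rate_video(0.9, 0.1)` yields a fail for an instrument-holding error only, mirroring the study's observation that a single critical error is sufficient to fail the task.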

RESULTS:

A total of 229 medical student videos were reviewed (150 pass, 79 fail). Among the failures, the critical error distribution was 15 knot-tying, 47 instrument-holding, and 17 multiple errors. After excluding low-quality videos, 216 videos were used to train the models with k-fold cross-validation (k = 10). The instrument-holding model achieved 89% accuracy with an F1 score of 74%; the knot-tying model achieved 91% accuracy with an F1 score of 54%.
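The reported metrics follow standard definitions; a short sketch of how accuracy and F1 are computed from confusion-matrix counts, together with a simple k-fold index split, is shown below. The counts in the usage example are made up for illustration and are not the study's data.

```python
def accuracy_and_f1(tp: int, fp: int, fn: int, tn: int):
    """Return (accuracy, F1) from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return acc, f1

def kfold_indices(n: int, k: int = 10):
    """Yield (train, val) index lists for k roughly equal folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, val in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, val
```

With hypothetical counts such as `accuracy_and_f1(20, 5, 10, 181)` on 216 videos, accuracy is 201/216 (about 93%) while F1 is only about 73%, illustrating why the paper reports both: with imbalanced pass/fail classes, F1 can be much lower than accuracy, as seen in the knot-tying model (91% accuracy, 54% F1).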

CONCLUSIONS:

Medical students require assessment and directed feedback to acquire surgical skills, but such feedback is time-consuming to provide and often inadequately delivered. AI techniques can instead be employed to perform automated surgical video analysis. Future work will optimize the current models to identify discrete errors in order to supplement video-based ratings with specific feedback.
Full text: Available Collection: International databases Database: MEDLINE Type of study: Prognostic study / Randomized controlled trials Language: English Journal subject: Diagnostic Imaging / Gastroenterology Year: 2022 Document Type: Article Article ID: S00464-022-09509-y
