Results 1 - 2 of 2
1.
Comput Methods Programs Biomed. 2024 May;248:108118.
Article in English | MEDLINE | ID: mdl-38489935

ABSTRACT

BACKGROUND: Estimating the risk of a difficult tracheal intubation should help clinicians in better anaesthesia planning, to maximize patient safety. Routine bedside screenings suffer from low sensitivity.

OBJECTIVE: To develop and evaluate machine learning (ML) and deep learning (DL) algorithms for the reliable prediction of intubation risk, using information about airway morphology.

METHODS: Observational, prospective cohort study enrolling n = 623 patients who underwent tracheal intubation: 53/623 difficult cases (prevalence 8.51%). First, we used our previously validated deep convolutional neural network (DCNN) to extract 2D image coordinates for 27 + 13 relevant anatomical landmarks in two preoperative photos (frontal and lateral views). Here we propose a method to determine the 3D pose of the camera with respect to the patient and to obtain the 3D world coordinates of these landmarks. Then we compute a novel set of d_M = 59 morphological features (distances, areas, angles and ratios), engineered with our anaesthesiologists to characterize each individual's airway anatomy for prediction. Subsequently, we propose four ad hoc ML pipelines for difficult intubation prognosis, each with four stages: feature scaling, imputation, resampling for imbalanced learning, and binary classification (Logistic Regression, Support Vector Machines, Random Forests and eXtreme Gradient Boosting). These compound ML pipelines were fed with the d_M = 59 morphological features, alongside d_D = 7 demographic variables, and trained with automatic hyperparameter tuning (Bayesian search) and probability calibration (Platt scaling). In addition, we developed an ad hoc multi-input DCNN to estimate the intubation risk directly from each pair of photographs, i.e. without any intermediate morphological description. Performance was evaluated using optimal Bayesian decision theory, and compared against experts' judgement and against state-of-the-art methods (three clinical formulae, four ML, four DL models).

RESULTS: Our four ad hoc ML pipelines with engineered morphological features achieved similar discrimination capabilities: median AUCs between 0.746 and 0.766. They significantly outperformed both expert judgement and all state-of-the-art methods (highest AUC at 0.716). Conversely, our multi-input DCNN yielded low performance due to overfitting; the same behaviour occurred for the state-of-the-art DL algorithms. Overall, the best method was our XGB pipeline, with the fewest false negatives at the optimal Bayesian decision threshold.

CONCLUSIONS: We proposed and validated ML models to assist clinicians in anaesthesia planning, providing a reliable calibrated estimate of airway intubation risk, which outperformed expert assessments and state-of-the-art methods. Our novel set of engineered features succeeded in providing informative descriptions for prognosis.
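
The four-stage compound pipelines described in this abstract map naturally onto off-the-shelf tooling. Below is a minimal sketch, not the authors' code, of the XGBoost variant assembled with scikit-learn, imbalanced-learn, scikit-optimize and xgboost; the placeholder data, SMOTE resampler, median imputation and search ranges are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of one compound pipeline: scaling,
# imputation, resampling, XGBoost, Bayesian tuning and Platt calibration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.calibration import CalibratedClassifierCV
from imblearn.pipeline import Pipeline            # resampling-aware pipeline
from imblearn.over_sampling import SMOTE          # one possible resampler (assumed)
from skopt import BayesSearchCV                   # Bayesian hyperparameter search
from skopt.space import Integer, Real
from xgboost import XGBClassifier

# Placeholder data: n = 623 rows x (59 morphological + 7 demographic) features,
# with ~8.51% positive (difficult intubation) prevalence, as in the study.
rng = np.random.default_rng(0)
X = rng.random((623, 66))
y = (rng.random(623) < 0.0851).astype(int)

pipe = Pipeline([
    ("scale", StandardScaler()),                  # stage 1: feature scaling
    ("impute", SimpleImputer(strategy="median")), # stage 2: imputation
    ("resample", SMOTE(random_state=0)),          # stage 3: imbalanced learning (fit only)
    ("clf", XGBClassifier(eval_metric="logloss", random_state=0)),  # stage 4
])

search = BayesSearchCV(                           # automatic hyperparameter tuning
    pipe,
    {"clf__n_estimators": Integer(50, 500),
     "clf__max_depth": Integer(2, 8),
     "clf__learning_rate": Real(1e-3, 0.3, prior="log-uniform")},
    n_iter=32, cv=5, scoring="roc_auc", random_state=0,
)
search.fit(X, y)

# Platt scaling: a sigmoid fit on held-out folds turns scores into calibrated risks.
calibrated = CalibratedClassifierCV(search.best_estimator_, method="sigmoid", cv=5)
calibrated.fit(X, y)
risk = calibrated.predict_proba(X)[:, 1]          # calibrated intubation risk
```

The imblearn Pipeline (rather than sklearn's) matters here: it applies SMOTE only when fitting each training fold, so cross-validated scores are never computed on synthetic samples.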


Subject(s)
Intubation, Intratracheal; Machine Learning; Humans; Bayes Theorem; Prospective Studies; Intubation, Intratracheal/methods; Neural Networks, Computer
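
The "optimal Bayesian decision threshold" in the results above has a simple closed form once risks are calibrated: flag a patient when the expected cost of a missed difficult airway exceeds that of a false alarm. A sketch with an illustrative, assumed 10:1 cost ratio (the paper does not state its costs), reusing the `risk` array from the pipeline sketch:

```python
# Bayes-optimal threshold for a binary decision with asymmetric costs:
# predict "difficult" when p >= c_fp / (c_fp + c_fn).
# The 10:1 miss-to-false-alarm cost ratio is an assumption, not from the paper.
c_fn, c_fp = 10.0, 1.0
threshold = c_fp / (c_fp + c_fn)        # = 1/11, approx. 0.091
predicted_difficult = risk >= threshold
```
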
2.
Comput Methods Programs Biomed. 2023 Apr;232:107428.
Article in English | MEDLINE | ID: mdl-36870169

ABSTRACT

BACKGROUND: A reliable anticipation of a difficult airway may notably enhance safety during anaesthesia. In current practice, clinicians use bedside screenings based on manual measurements of patients' morphology.

OBJECTIVE: To develop and evaluate algorithms for the automated extraction of orofacial landmarks, which characterize airway morphology.

METHODS: We defined 27 frontal + 13 lateral landmarks. We collected n = 317 pairs of pre-surgery photos from patients undergoing general anaesthesia (140 females, 177 males). As the ground-truth reference for supervised learning, landmarks were independently annotated by two anaesthesiologists. We trained two ad hoc deep convolutional neural network architectures based on InceptionResNetV2 (IRNet) and MobileNetV2 (MNet) to predict simultaneously: (a) whether each landmark is visible or not (occluded, out of frame); (b) its 2D coordinates (x, y). We implemented successive stages of transfer learning, combined with data augmentation, and added custom top layers on these networks, whose weights were fully tuned for our application. Performance in landmark extraction was evaluated by 10-fold cross-validation (CV) and compared against 5 state-of-the-art deformable models.

RESULTS: With the annotators' consensus as the 'gold standard', our IRNet-based network performed comparably to humans in the frontal view: median CV loss L = 1.277×10⁻³, inter-quartile range (IQR) [1.001, 1.660]; versus median 1.360, IQR [1.172, 1.651], and median 1.352, IQR [1.172, 1.619], for each annotator against consensus, respectively (all loss values ×10⁻³ hereafter). MNet yielded slightly worse results: median 1.471, IQR [1.139, 1.982]. In the lateral view, both networks attained performances statistically poorer than humans: median CV loss 2.141, IQR [1.676, 2.915], and median 2.611, IQR [1.898, 3.535], respectively; versus median 1.507, IQR [1.188, 1.988], and median 1.442, IQR [1.147, 2.010] for the two annotators. However, standardized effect sizes in CV loss were small: 0.0322 and 0.0235 (non-significant) for IRNet, and 0.1431 and 0.1518 (p < 0.05) for MNet; therefore quantitatively similar to humans. The best-performing state-of-the-art model (a deformable regularized Supervised Descent Method, SDM) behaved comparably to our DCNNs in the frontal scenario, but markedly worse in the lateral view.

CONCLUSIONS: We successfully trained two DCNN models for the recognition of the 27 + 13 orofacial landmarks pertaining to the airway. Using transfer learning and data augmentation, they generalized without overfitting, reaching expert-like performances in CV. Our IRNet-based methodology achieved a satisfactory identification and location of landmarks, particularly in the frontal view, at the level of anaesthesiologists. In the lateral view its performance decayed, although with a non-significant effect size. Independent authors had also reported lower lateral performances, as certain landmarks may not be clearly salient points, even for a trained human eye.
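
A minimal sketch, assuming a Keras implementation, of the kind of multi-output top described above, shown here on the MobileNetV2 (MNet) backbone for the 27 frontal landmarks. The layer sizes, losses and two-stage freeze/unfreeze schedule are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch (not the authors' architecture): MobileNetV2 backbone with a
# custom top that jointly predicts per-landmark visibility and (x, y) coordinates.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2

N_LANDMARKS = 27                                   # frontal view

base = MobileNetV2(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                             # stage 1: freeze pretrained weights

x = layers.Dense(256, activation="relu")(base.output)        # custom top (assumed size)
vis = layers.Dense(N_LANDMARKS, activation="sigmoid", name="visibility")(x)
xy = layers.Dense(2 * N_LANDMARKS, activation="linear", name="coords")(x)

model = Model(base.input, [vis, xy])
model.compile(optimizer="adam",
              loss={"visibility": "binary_crossentropy", "coords": "mse"})

# ...train the top on augmented photos, then unfreeze for full fine-tuning:
base.trainable = True                              # stage 2: tune all weights
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),      # small LR for fine-tuning
              loss={"visibility": "binary_crossentropy", "coords": "mse"})
```

The two compile calls mirror the "successive stages of transfer learning" in the abstract: a frozen-backbone warm-up protects the pretrained features, and a low learning rate in the second stage limits catastrophic forgetting.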


Subject(s)
Algorithms; Neural Networks, Computer; Male; Female; Humans; Anesthesia, General
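
On the data augmentation mentioned in this abstract: for landmark regression, geometric transforms (flips, rotations, shifts) must also remap the (x, y) targets, so a safe minimal sketch uses photometric jitter only. The specific layers and magnitudes below are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Photometric-only augmentation: the image changes but landmark labels stay valid.
# Geometric transforms would require applying the same warp to the (x, y) targets.
augment = tf.keras.Sequential([
    layers.RandomBrightness(0.1),   # requires TF >= 2.9
    layers.RandomContrast(0.1),
    layers.GaussianNoise(0.01),     # active only while training
])
images_aug = augment(images, training=True)  # `images`: a batch of photos (assumed)
```
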