Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38985412

ABSTRACT

PURPOSE: Decision support systems and context-aware assistance in the operating room have emerged as key clinical applications supporting surgeons in their daily work and are generally based on single modalities. The model- and knowledge-based integration of multimodal data as a basis for decision support systems that can dynamically adapt to the surgical workflow has not yet been established. Therefore, we propose a knowledge-enhanced method for fusing multimodal data for anticipation tasks. METHODS: We developed a holistic, multimodal, graph-based approach that combines imaging and non-imaging information in a knowledge graph representing the intraoperative scene of a surgery. Node and edge features of the knowledge graph are extracted from suitable data sources in the operating room using machine learning. A spatiotemporal graph neural network architecture subsequently allows for the interpretation of relational and temporal patterns within the knowledge graph. We apply our approach to the downstream task of instrument anticipation and present a suitable modeling and evaluation strategy for this task. RESULTS: Our approach achieves an F1 score of 66.86% on instrument anticipation, supporting a seamless surgical workflow and providing valuable input for surgical decision support systems. A resting recall of 63.33% indicates that the anticipations are not premature. CONCLUSION: This work shows how multimodal data can be combined with the topological properties of an operating room in a graph-based approach. Our multimodal graph architecture serves as a basis for context-sensitive decision support systems in laparoscopic surgery that consider the comprehensive intraoperative scene.
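As a rough, stdlib-only illustration of the graph idea (not the authors' architecture), the sketch below runs one round of mean-neighbourhood aggregation over a toy intraoperative scene graph. The node names, feature vectors, and relations are invented for illustration; a real spatiotemporal GNN would apply learned transformations of this kind across a sequence of such graphs.

```python
from collections import defaultdict

def message_pass(node_feats, edges):
    """One round of mean-neighbour aggregation over a scene graph.

    node_feats: {node: feature vector (list of floats)}
    edges: [(src, dst), ...] directed relations, e.g. surgeon -> grasper
    """
    incoming = defaultdict(list)
    for src, dst in edges:
        incoming[dst].append(node_feats[src])
    updated = {}
    for node, feat in node_feats.items():
        msgs = incoming[node]
        if msgs:
            # per-dimension mean of all incoming neighbour features
            mean = [sum(dim) / len(msgs) for dim in zip(*msgs)]
            # average own features with the neighbourhood mean
            updated[node] = [(a + b) / 2 for a, b in zip(feat, mean)]
        else:
            updated[node] = list(feat)
    return updated

# Toy scene: surgeon holds a grasper, grasper acts on the patient
feats = {"surgeon": [1.0, 0.0], "grasper": [0.0, 1.0], "patient": [0.5, 0.5]}
edges = [("surgeon", "grasper"), ("grasper", "patient")]
step1 = message_pass(feats, edges)
```

Stacking one such graph per video frame and aggregating along time as well is the essence of the spatiotemporal formulation.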

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2631-2634, 2022 07.
Article in English | MEDLINE | ID: mdl-36086507

ABSTRACT

The period directly following surgery is critical for patients, as they are at risk of infections and other complications, often summarized as severe adverse events (SAEs). We hypothesize that impending complications might alter the circadian rhythm and therefore be detectable during the preceding night. We propose a SMOTE-enhanced XGBoost prediction model that classifies nighttime vital signs according to whether they precede a severe adverse event or come from a patient who develops no complication at all, based on data from 450 postoperative patients. The approach showed respectable results, with a ROC-AUC of 0.65 and an accuracy of 0.75. These findings warrant further investigation.


Subject(s)
Vital Signs, Humans
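The SMOTE step used above can be sketched in a few lines: a synthetic minority sample is produced by interpolating between a minority point and one of its k nearest minority neighbours. The toy "nighttime vital sign" vectors below are invented for illustration; a real pipeline would use imbalanced-learn's SMOTE together with XGBoost rather than this hand-rolled version.

```python
import random

def smote_sample(minority, k=2, rng=None):
    """SMOTE-style synthesis: pick a minority point, pick one of its
    k nearest minority neighbours (Euclidean distance), and interpolate
    between the two at a random fraction."""
    rng = rng or random.Random(0)
    base = rng.choice(minority)
    neighbours = sorted(
        (p for p in minority if p is not base),
        key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
    )[:k]
    neigh = rng.choice(neighbours)
    t = rng.random()  # fraction along the segment base -> neigh
    return [a + t * (b - a) for a, b in zip(base, neigh)]

# Toy (heart rate, temperature) vectors for the minority pre-SAE class
minority = [[60.0, 36.5], [62.0, 36.7], [90.0, 38.0]]
synthetic = smote_sample(minority)
```

Because each synthetic point lies on a segment between two real minority points, the oversampled class stays inside the original feature envelope.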
3.
Sensors (Basel) ; 21(4)2021 Feb 21.
Article in English | MEDLINE | ID: mdl-33670066

ABSTRACT

Infrared thermography for camera-based skin temperature measurement is increasingly used in medical practice, e.g., to detect fever and infection, as recently during the COVID-19 pandemic. This contactless method is a promising technology for continuously monitoring the vital signs of patients in clinical environments. In this study, we investigated both skin temperature trend measurement and the extraction of respiration-related chest movements to determine the respiratory rate, using low-cost hardware in combination with advanced algorithms. In addition, the frequency of medical examinations or visits to the patients was extracted. We implemented a deep learning-based algorithm for real-time vital sign extraction from thermography images. A clinical trial was conducted to record data from patients in an intensive care unit. The YOLOv4-Tiny object detector was applied to extract image regions containing vital signs (head and chest). The infrared frames were manually labeled for evaluation. Validation was performed on a hold-out test dataset of 6 patients and revealed good detector performance (0.75 intersection over union, 0.94 mean average precision). An optical flow algorithm was used to extract the respiratory rate from the chest region. The results show a mean absolute error of 2.69 bpm. We observed a computational performance of 47 fps for YOLOv4-Tiny on an NVIDIA Jetson Xavier NX module, demonstrating real-time capability on an embedded GPU system. In conclusion, the proposed method can perform real-time vital sign extraction on a low-cost system-on-module and may thus be useful for future contactless vital sign measurements.


Subject(s)
Deep Learning, Intensive Care Units, Thermography/instrumentation, Vital Signs, Humans
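A minimal sketch of the last step in such a pipeline, turning a 1-D chest-motion signal into a respiratory rate, is to count mean-crossings (two per breath cycle) over the recording window. The sinusoidal signal below is a synthetic stand-in for the optical-flow output; the study's actual algorithm is not reproduced here.

```python
import math

def respiratory_rate(signal, fps):
    """Estimate breaths per minute by counting mean-crossings of a
    1-D chest-motion signal (two crossings per breath cycle)."""
    mean = sum(signal) / len(signal)
    centred = [s - mean for s in signal]
    crossings = sum(
        1 for a, b in zip(centred, centred[1:]) if (a < 0) != (b < 0)
    )
    duration_min = len(signal) / fps / 60.0
    return crossings / 2.0 / duration_min

# Synthetic chest motion: 15 breaths/min sinusoid, 10 fps, 60 s
fps, bpm = 10, 15
signal = [
    math.sin(2 * math.pi * (bpm / 60.0) * (i / fps) + 0.3)
    for i in range(fps * 60)
]
rate = respiratory_rate(signal, fps)
```

Mean-crossing counting is robust to slow baseline drift once the signal is centred, but a real implementation would typically band-pass filter the motion signal first to suppress noise outside the plausible breathing band.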