Results 1 - 7 of 7
1.
Int J Comput Assist Radiol Surg ; 18(3): 493-500, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36242701

ABSTRACT

PURPOSE: In this study, we present and validate a novel concept for target tracking in 4D ultrasound. The key idea is to replace image patch similarity metrics with distances in a latent representation. For this, 3D ultrasound patches are mapped into a representation space using sliced-Wasserstein autoencoders. METHODS: A novel target tracking method for 4D ultrasound is presented that performs tracking in a representation space instead of in image space. Sliced-Wasserstein autoencoders are trained in an unsupervised manner and used to map 3D ultrasound patches into a representation space. The tracking procedure follows a greedy algorithm, measuring distances between representation vectors to relocate the target. The proposed algorithm is validated on an in vivo data set of liver images. Furthermore, three different concepts for training the autoencoder are presented to provide cross-patient generalizability while aiming at minimal training time on data of the individual patient. RESULTS: Eight annotated 4D ultrasound sequences are used to test the tracking method. Tracking could be performed in all sequences using all autoencoder training approaches. A mean tracking error of 3.23 mm was achieved using generalized fine-tuned autoencoders. Using generalized autoencoders and fine-tuning them achieves better tracking results than training subject-individual autoencoders. CONCLUSION: We show that distances between encoded image patches in a representation space can serve as a meaningful measure of image patch similarity, even under realistic deformations of the anatomical structure. On this basis, we validated the proposed tracking algorithm in an in vivo setting. Furthermore, our results indicate that with generalized autoencoders, fine-tuning on only a small number of patches from the individual patient yields promising results.
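The latent-distance tracking idea can be sketched as follows. This is a toy illustration only: a fixed random linear map stands in for the trained sliced-Wasserstein encoder, and the greedy step simply picks the candidate patch whose latent code lies closest to the target's code. All names and dimensions are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder: a random linear map playing the role of the trained
# sliced-Wasserstein autoencoder's encoder (hypothetical, for illustration).
PATCH_DIM, LATENT_DIM = 64, 8
W = rng.normal(size=(LATENT_DIM, PATCH_DIM))

def encode(patch):
    """Map a flattened 3D ultrasound patch into the representation space."""
    return W @ patch

def track_step(target_code, candidate_patches):
    """Greedy step: select the candidate whose latent code is closest to
    the target's code (Euclidean distance in representation space)."""
    dists = [np.linalg.norm(encode(p) - target_code) for p in candidate_patches]
    return int(np.argmin(dists))

target = rng.normal(size=PATCH_DIM)
candidates = [rng.normal(size=PATCH_DIM) for _ in range(5)]
candidates[3] = target + 0.01 * rng.normal(size=PATCH_DIM)  # slightly deformed copy
best = track_step(encode(target), candidates)
print(best)  # -> 3
```

The point of the latent comparison is that a well-trained encoder makes distances robust to deformations that would confuse raw image patch metrics; here the near-duplicate candidate is correctly relocated despite its perturbation.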


Subject(s)
Abdomen , Liver , Humans , Algorithms
2.
IEEE Trans Biomed Eng ; 70(4): 1340-1350, 2023 04.
Article in English | MEDLINE | ID: mdl-36269901

ABSTRACT

Tetanus is a life-threatening infectious disease that is still common in low- and middle-income countries, including Vietnam. The disease is characterized by muscle spasm and, in severe cases, is complicated by autonomic dysfunction. Ideally, continuous vital sign monitoring using bedside monitors allows prompt detection of the onset of autonomic nervous system dysfunction and helps avoid rapid deterioration. Detection can be improved using heart rate variability analysis of ECG signals. Recently, characteristic ECG and heart rate variability features have been shown to be of value in classifying tetanus severity. However, conventional manual analysis of ECG is time-consuming. The traditional convolutional neural network (CNN) is limited in extracting global context information due to its fixed-size kernel filters. In this work, we propose a novel hybrid CNN-Transformer model to automatically classify tetanus severity from ECG signals recorded by low-cost wearable sensors. This model captures local features with the CNN and global features with the Transformer. The one-dimensional ECG signal is transformed into a time-series image (a spectrogram) that serves as input to the proposed model. The CNN-Transformer model outperforms state-of-the-art methods in tetanus classification, achieving an F1 score of 0.82±0.03, precision of 0.94±0.03, recall of 0.73±0.07, specificity of 0.97±0.02, accuracy of 0.88±0.01 and AUC of 0.85±0.03. In addition, we found that a Random Forest with enough manually selected features can be comparable to the proposed CNN-Transformer model.
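The 1D-signal-to-spectrogram step the abstract describes can be sketched with a plain short-time FFT. The sampling rate, window length, and hop size below are assumptions for illustration; the paper's actual preprocessing parameters are not reproduced here.

```python
import numpy as np

def ecg_to_spectrogram(x, nperseg=128, hop=64):
    """Short-time FFT magnitude: turns a 1D signal into a 2D
    time-frequency image suitable as CNN input."""
    window = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * window
              for i in range(0, len(x) - nperseg + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time_frames)
    return np.log1p(spec)  # log compression, a common preprocessing choice

fs = 250  # assumed ECG sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)  # synthetic stand-in for a 10-s ECG trace
img = ecg_to_spectrogram(ecg)
print(img.shape)  # -> (65, 38): 65 frequency bins x 38 time frames
```

The resulting 2D array is what a hybrid model of this kind would consume: the CNN front-end sees local time-frequency patterns, while the Transformer attends across the whole frame axis.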


Subject(s)
Tetanus , Humans , Tetanus/diagnosis , Developing Countries , Electrocardiography/methods , Neural Networks, Computer , Heart Rate
3.
Sensors (Basel) ; 22(17)2022 Aug 30.
Article in English | MEDLINE | ID: mdl-36081013

ABSTRACT

Infectious diseases remain a common problem in low- and middle-income countries, including Vietnam. Tetanus is a severe infectious disease characterized by muscle spasms and complicated by autonomic nervous system dysfunction in severe cases. Patients require careful monitoring using electrocardiograms (ECGs) to detect deterioration and the onset of autonomic nervous system dysfunction as early as possible. Machine learning analysis of ECG has been shown to add value in predicting tetanus severity; however, any additional ECG signal analysis places a high demand on time-limited hospital staff and requires specialist equipment. Therefore, we present a novel approach to tetanus monitoring that combines low-cost wearable sensors with deep-learning-based automatic severity detection. This approach can automatically triage tetanus patients and reduce the burden on hospital staff. In this study, we propose a two-dimensional (2D) convolutional neural network with a channel-wise attention mechanism for the binary classification of ECG signals. According to the Ablett classification of tetanus severity, we define grades 1 and 2 as mild tetanus and grades 3 and 4 as severe tetanus. The one-dimensional ECG time series signals are transformed into 2D spectrograms. The 2D attention-based network is designed to extract features from the input spectrograms. Experiments demonstrate promising performance for the proposed method in tetanus classification, with an F1 score of 0.79 ± 0.03, precision of 0.78 ± 0.08, recall of 0.82 ± 0.05, specificity of 0.85 ± 0.08, accuracy of 0.84 ± 0.04 and AUC of 0.84 ± 0.03.
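A common form of the channel-wise attention mechanism the abstract mentions is squeeze-and-excitation-style gating: globally pool each feature channel, pass the pooled vector through a small MLP, and rescale channels by the resulting sigmoid weights. The sketch below assumes this form; the paper's exact mechanism may differ, and all shapes and weights here are illustrative.

```python
import numpy as np

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention over CNN activations.
    feature_maps: array of shape (C, H, W)."""
    squeeze = feature_maps.mean(axis=(1, 2))      # global average pool -> (C,)
    hidden = np.maximum(0, w1 @ squeeze)          # excitation MLP with ReLU
    weights = 1 / (1 + np.exp(-(w2 @ hidden)))    # sigmoid gate per channel, in (0, 1)
    return feature_maps * weights[:, None, None]  # rescale each channel

rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
fmaps = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // 2, C))  # bottleneck of C/2 units (assumed ratio)
w2 = rng.normal(size=(C, C // 2))
out = channel_attention(fmaps, w1, w2)
print(out.shape)  # -> (8, 16, 16)
```

Because the gates lie in (0, 1), attention can only attenuate channels, letting the network emphasize spectrogram frequency bands that are informative for severity while suppressing the rest.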


Subject(s)
Tetanus , Wearable Electronic Devices , Algorithms , Electrocardiography , Humans , Machine Learning , Neural Networks, Computer , Tetanus/diagnosis
4.
Front Robot AI ; 9: 892916, 2022.
Article in English | MEDLINE | ID: mdl-35572376

ABSTRACT

Reliable force-driven robot interaction requires precise contact wrench measurements. In most robot systems these measurements are highly inaccurate, so for most manipulation tasks expensive additional force sensors are installed. We follow a learning approach to model the dependencies between joint torques and end-effector contact wrenches. We used a redundant serial light-weight manipulator (KUKA iiwa 7 R800) with integrated force estimation based on the joint torques measured in each of the robot's seven axes. Firstly, a simulated dataset is created to let a feed-forward net learn the relationship between end-effector contact wrenches and joint torques for the static case. Secondly, an extensive real training dataset of 330,000 randomized robot positions and end-effector contact wrenches was acquired and used to retrain the simulation-trained feed-forward net. We show that the wrench prediction error is reduced by around 57% for the forces compared to the manufacturer's proprietary force estimation model. In addition, we show that the number of large outliers can be reduced substantially. Furthermore, we show that the approach can be transferred to another robot (KUKA iiwa 14 R820) with reasonable prediction accuracy and without acquiring new robot-specific data.
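The core mapping the abstract describes, seven joint torques in, a 6D contact wrench out, can be sketched as a small feed-forward net. The layer sizes, activation, and weights below are illustrative assumptions; the authors' architecture and trained parameters are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feed-forward net mapping 7 joint torques -> 6D contact wrench
# (Fx, Fy, Fz, Tx, Ty, Tz). One hidden layer of 32 units (assumed size).
W1, b1 = 0.1 * rng.normal(size=(32, 7)), np.zeros(32)
W2, b2 = 0.1 * rng.normal(size=(6, 32)), np.zeros(6)

def predict_wrench(joint_torques):
    """Forward pass: joint torques (7,) -> estimated end-effector wrench (6,)."""
    h = np.tanh(W1 @ joint_torques + b1)
    return W2 @ h + b2

tau = rng.normal(size=7)  # one torque reading per robot axis
wrench = predict_wrench(tau)
print(wrench.shape)  # -> (6,)
```

The sim-to-real scheme in the paper would first fit such a net on simulated (torque, wrench) pairs and then fine-tune it on the real 330,000-sample dataset, so the net corrects the systematic errors of the analytic torque-based estimate.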

5.
Sensors (Basel) ; 22(10)2022 May 19.
Article in English | MEDLINE | ID: mdl-35632275

ABSTRACT

Sepsis is associated with high mortality, particularly in low- and middle-income countries (LMICs). Critical care management of sepsis is challenging in LMICs due to the lack of care providers and the high cost of bedside monitors. Recent advances in wearable sensor technology and machine learning (ML) models in healthcare promise to deliver new ways of digital monitoring integrated with automated decision systems to reduce the mortality risk in sepsis. In this study, we aim firstly to assess the feasibility of using wearable sensors instead of traditional bedside monitors in the sepsis care management of hospital-admitted patients, and secondly to introduce automated models for the mortality prediction of sepsis patients. To this end, we continuously monitored 50 sepsis patients for nearly 24 h after their admission to the Hospital for Tropical Diseases in Vietnam. We then compared the performance and interpretability of state-of-the-art ML models for the task of mortality prediction of sepsis using the heart rate variability (HRV) signal from wearable sensors and vital signs from bedside monitors. Our results show that all ML models trained on wearable data outperformed ML models trained on data gathered from the bedside monitors for the task of mortality prediction, with the highest performance (area under the precision-recall curve = 0.83) achieved using time-varying features of HRV and recurrent neural networks. Our results demonstrate that the integration of automated ML prediction models with wearable technology is well suited to helping clinicians who manage sepsis patients in LMICs to reduce the mortality risk of sepsis.
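Extracting time-varying HRV features for a recurrent model typically means sliding a window over the beat-to-beat (RR) interval series and computing standard measures per window. The sketch below uses two textbook measures (SDNN and RMSSD) with assumed window sizes; the study's exact feature set is not reproduced here.

```python
import numpy as np

def hrv_features(rr_ms, window=60, step=30):
    """Time-varying HRV features over sliding windows of RR intervals:
    SDNN (std of intervals) and RMSSD (RMS of successive differences).
    Returns an array of shape (n_windows, 2), a sequence an RNN could consume."""
    feats = []
    for i in range(0, len(rr_ms) - window + 1, step):
        w = rr_ms[i:i + window]
        sdnn = np.std(w)
        rmssd = np.sqrt(np.mean(np.diff(w) ** 2))
        feats.append((sdnn, rmssd))
    return np.array(feats)

rng = np.random.default_rng(0)
rr = 800 + 50 * rng.normal(size=300)  # synthetic RR intervals in ms
X = hrv_features(rr)
print(X.shape)  # -> (9, 2): 9 overlapping windows, 2 features each
```

Feeding the windows in order preserves the temporal dynamics of autonomic function, which is what lets a recurrent network exploit trends rather than a single static summary.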


Subject(s)
Sepsis , Wearable Electronic Devices , Developing Countries , Humans , Machine Learning , Sepsis/diagnosis , Vital Signs
6.
Front Cardiovasc Med ; 9: 772222, 2022.
Article in English | MEDLINE | ID: mdl-35369295

ABSTRACT

Even though the field of medical imaging advances, there are structures in the human body that can barely be assessed with classical image acquisition modalities. One example is the three leaflets of the aortic valve, due to their thin structure and rapid motion. However, with the increasing accuracy of biomechanical simulation, for example of heart function, and the extensive computing capabilities available, concise knowledge of the individual morphology of these structures could have a high impact on personalized therapy and intervention planning as well as on clinical research. Thus, there is a high demand to estimate the individual shape of such inaccessible structures given only information on the geometry of the surrounding tissue. This leads to a domain adaptation problem in which the domain gap can be very large while typically only small datasets are available. Hence, classical approaches for domain adaptation are not capable of providing sufficient predictions. In this work, we present a new framework for bridging this domain gap in the scope of estimating anatomical shapes based on the surrounding tissue's morphology. We propose deep representation learning not to map from one image to another but to predict a latent shape representation. We formalize this framework and present two different approaches to solve the given problem. Furthermore, we perform a proof-of-concept study for estimating the individual shape of the aortic valve leaflets based on a volumetric ultrasound image of the aortic root. To this end, we collect an ex vivo porcine data set consisting of both ultrasound volume images and high-resolution leaflet images, evaluate both approaches on it, and analyze the model's hyperparameters.
Our results show that, using deep representation learning and domain mapping between the identified latent spaces, a robust prediction of the unknown leaflet shape based only on surrounding tissue information is possible, even in limited-data scenarios. The concept can be applied to a wide range of modeling tasks, not only in the scope of heart modeling but also for all kinds of inaccessible structures within the human body.
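The simplest instance of "domain mapping between identified latent spaces" is a linear map fitted between paired latent codes. The toy below fabricates paired codes with a known linear relation and recovers the map by least squares; the paper's actual encoders and mapping networks are certainly richer, and every dimension and variable name here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent codes: one space for ultrasound volumes of the aortic root,
# one for leaflet shapes (dimensions are illustrative assumptions).
Z_US, Z_SHAPE, N = 16, 8, 40
z_us = rng.normal(size=(N, Z_US))          # tissue codes for N paired samples
A_true = rng.normal(size=(Z_SHAPE, Z_US))  # ground-truth relation (synthetic)
z_shape = z_us @ A_true.T                  # paired shape codes

# Domain mapping: fit a linear latent-to-latent map by least squares.
A_fit, *_ = np.linalg.lstsq(z_us, z_shape, rcond=None)

z_new = rng.normal(size=Z_US)  # code of a new, unseen tissue image
pred = z_new @ A_fit           # predicted leaflet-shape code
print(np.allclose(pred, A_true @ z_new))  # -> True
```

In the full framework the predicted shape code would then be decoded into leaflet geometry; working in latent space is what makes the approach viable with small datasets, since the map has far fewer parameters than an image-to-image model.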

7.
Curr Robot Rep ; 2(1): 55-71, 2021.
Article in English | MEDLINE | ID: mdl-34977593

ABSTRACT

PURPOSE OF REVIEW: This review provides an overview of the robotic ultrasound systems that have emerged over the past five years, highlighting their status and future directions. The systems are categorized based on their level of robot autonomy (LORA). RECENT FINDINGS: Teleoperating systems show the highest level of technical maturity. Collaborative assisting and autonomous systems are still in the research phase, with a focus on ultrasound image processing and force adaptation strategies. However, clinical studies and appropriate safety strategies are key missing factors. Future research will likely focus on artificial intelligence and virtual/augmented reality to improve image understanding and ergonomics. SUMMARY: A review of robotic ultrasound systems is presented in which technical specifications are outlined first. Thereafter, the literature of the past five years is subdivided into teleoperation, collaborative assistance, or autonomous systems based on LORA. Finally, future trends for robotic ultrasound systems are reviewed with a focus on artificial intelligence and virtual/augmented reality.
