Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38082685

ABSTRACT

Leg length measurement is relevant for the early diagnosis and treatment of discrepancies, as they are associated with orthopedic and biomechanical changes. Plain radiography constitutes the gold standard, on which radiologists perform manual lower limb measurements. It is a simple task but represents an inefficient use of their time, expertise, and knowledge, which could be spent on more complex work. In this study, a pipeline for semantic bone segmentation in lower extremity radiographs is proposed. It uses a deep learning U-net model and performs an automatic measurement without consuming physicians' time. A total of 20 radiographs were used to test the proposed methodology, obtaining a high overlap between manual and automatic masks, with a Dice coefficient of 0.963. The Spearman's rank correlation coefficient between manual and automatic leg length measurements is statistically different from zero except for the angle of the left mechanical axis. Furthermore, in no case does the proposed automatic method make an absolute error greater than 2 cm in the quantification of leg length discrepancies, this being the degree of discrepancy from which medical treatment is required. Clinical Relevance- Measuring leg length discrepancy from X-ray images is of vital importance for proper treatment planning. This is a laborious task for radiologists that can be accelerated using deep learning techniques.
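The overlap metric reported above, the Dice coefficient, compares two binary masks as twice their intersection over the sum of their areas. A minimal NumPy sketch (the rectangular toy masks are illustrative, not data from the study):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy "manual" vs. "automatic" masks on a 10x10 grid
manual = np.zeros((10, 10), dtype=np.uint8)
automatic = np.zeros((10, 10), dtype=np.uint8)
manual[2:8, 3:7] = 1       # 24 pixels
automatic[3:8, 3:7] = 1    # 20 pixels, all inside the manual mask
print(round(dice_coefficient(manual, automatic), 3))  # 2*20/(24+20) = 0.909
```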


Subject(s)
Deep Learning , Leg , Humans , Leg/diagnostic imaging , Radiography , Lower Extremity/diagnostic imaging , Leg Length Inequality/diagnostic imaging
2.
Article in English | MEDLINE | ID: mdl-38083048

ABSTRACT

Revascularization of chronic total occlusions (CTO) is currently one of the most complex procedures in percutaneous coronary intervention (PCI), requiring the use of specific devices and a high level of experience to obtain good results. Once the clinical indication (extensive ischemia, or angina uncontrolled with medical treatment) has been established, the decision to perform coronary intervention is not simple, since this procedure has a higher rate of complications, higher ionizing radiation doses, and a lower success rate than non-CTO percutaneous interventions. However, CTO revascularization has been shown to be helpful in the symptomatic improvement of angina, the reduction of ischemic burden, and the improvement of ejection fraction. The aim of this work is to determine whether a model developed with deep learning techniques and trained on angiography images can predict the likelihood of a successful revascularization procedure for a patient with a CTO lesion in their coronary artery (measured as procedure success and fluoroscopy time, i.e., the duration for which X-ray imaging is used during the procedure) better than the traditionally used scales.
As a preliminary approach, patients with right coronary artery CTO will be included, since their angiographic projections are standard, performed in all patients, and show less technical variability (duration, projection angle, image similarity) among them. The ultimate objective is to develop a predictive model that helps the clinician in the decision to intervene, and to analyze its performance in predicting the success of the technique for the revascularization of chronic occlusions. Clinical Relevance- The development of a deep learning model based on angiography images could potentially outperform the gold standard and help interventional cardiologists in the treatment decision for percutaneous coronary intervention, maximizing the success rate of coronary intervention.


Subject(s)
Coronary Occlusion , Deep Learning , Percutaneous Coronary Intervention , Humans , Treatment Outcome , Coronary Angiography , Percutaneous Coronary Intervention/methods , Coronary Occlusion/diagnostic imaging , Coronary Occlusion/surgery
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2084-2087, 2022 07.
Article in English | MEDLINE | ID: mdl-36086174

ABSTRACT

The number of studies in the medical field that use machine learning and deep learning techniques has been increasing in recent years. However, these techniques require a huge amount of data, which can be difficult and expensive to obtain. This is especially true of cardiac magnetic resonance (MR) images. One solution to the problem is to increase the dataset size by generating synthetic data. The Convolutional Variational Autoencoder (CVAE) is a deep learning technique that allows the generation of synthetic images, although these can be slightly blurred. We propose combining the CVAE technique with a Style Transfer technique to generate synthetic, realistic cardiac MR images. Clinical Relevance- The current work presents a tool to increase, in a simple and fast way, the cardiac magnetic resonance image dataset with which to perform machine learning and deep learning studies.
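A variational autoencoder is trained with a reconstruction term plus a KL-divergence term that pulls the latent distribution toward a standard Gaussian prior; the convolutional variant changes only the encoder/decoder layers, not the loss. A minimal NumPy sketch of that objective (the diagonal-Gaussian parameterization is the standard one; the example values are illustrative):

```python
import numpy as np

def kl_divergence(mu: np.ndarray, logvar: np.ndarray) -> float:
    """KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder output."""
    return float(-0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar)))

def vae_loss(x: np.ndarray, x_recon: np.ndarray,
             mu: np.ndarray, logvar: np.ndarray) -> float:
    """Pixelwise reconstruction error plus latent-space regularization."""
    recon = float(np.sum((x - x_recon) ** 2))
    return recon + kl_divergence(mu, logvar)

# A latent code that matches the prior exactly incurs zero KL penalty
print(kl_divergence(np.zeros(8), np.zeros(8)))  # -> 0.0
```

The blurriness mentioned in the abstract is a known side effect of this pixelwise reconstruction term, which is what motivates adding a style-transfer stage afterwards.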


Subject(s)
Algorithms , Magnetic Resonance Imaging , Heart/diagnostic imaging , Machine Learning
4.
Comput Med Imaging Graph ; 99: 102085, 2022 07.
Article in English | MEDLINE | ID: mdl-35689982

ABSTRACT

The correct assessment and characterization of heart anatomy and function is usually done through inspection of magnetic resonance cine sequences. In the clinical setting it is especially important to determine the state of the left ventricle. This requires measuring its volume in the end-diastolic and end-systolic frames of the sequence through segmentation methods. However, the first step required for this analysis, before any segmentation, is the detection of the end-systolic and end-diastolic frames within the image acquisition. In this work we present a fully convolutional neural network that uses dilated convolutions to encode and process the temporal information of the sequences, in contrast to the more widespread recurrent networks usually employed for problems involving temporal information. We trained the network in two different settings employing different loss functions: the classical weighted cross-entropy, and the weighted Dice loss. We had access to a database comprising a total of 397 cases, out of which we used 98 cases as a test set to validate the network's performance. The final classification on the test set yielded a mean frame distance of 0 for the end-diastolic frame (i.e., the selected frame was the correct one in all images of the test set) and 1.242 (relative frame distance of 0.036) for the end-systolic frame with the optimum setting, which involved training the neural network with the Dice loss. Our neural network is capable of classifying each frame and enables the detection of the end-systolic and end-diastolic frames in short-axis cine MRI sequences with high accuracy.
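Dilated convolutions enlarge the temporal receptive field without adding parameters by spacing the kernel taps a fixed number of frames apart. A minimal 1-D NumPy sketch of the operation (the signal and kernel are illustrative, not the network described in the abstract):

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, kernel: np.ndarray, dilation: int) -> np.ndarray:
    """Valid 1-D convolution with kernel taps spaced `dilation` samples apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field of one output
    return np.array([
        np.dot(kernel, x[i:i + span:dilation])
        for i in range(len(x) - span + 1)
    ])

x = np.arange(8, dtype=float)              # stand-in for a per-frame feature
out = dilated_conv1d(x, np.ones(3), dilation=2)  # taps at t, t+2, t+4
print(out)  # each output aggregates three frames two steps apart
```

With dilation 2, a 3-tap kernel covers 5 frames instead of 3; stacking layers with growing dilation lets the network see the whole cardiac cycle without recurrence.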


Subject(s)
Magnetic Resonance Imaging, Cine , Neural Networks, Computer , Diastole , Heart , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging, Cine/methods , Systole
5.
J Xray Sci Technol ; 29(5): 823-834, 2021.
Article in English | MEDLINE | ID: mdl-34334443

ABSTRACT

BACKGROUND AND OBJECTIVE: Estimates of the parameters used to select patients for endovascular thrombectomy (EVT) for acute ischemic stroke differ among software packages for automated computed tomography (CT) perfusion analysis. To determine the impact of these differences on decision making, we analyzed intra-observer and inter-observer agreement in recommendations about whether to perform EVT based on perfusion maps from 4 packages. METHODS: Perfusion CT datasets from 63 consecutive patients with suspected acute ischemic stroke were retrospectively postprocessed with 4 packages: Minerva, RAPID, Olea, and IntelliSpace Portal (ISP). We used Pearson correlation coefficients and Bland-Altman analysis to compare volumes of infarct core, penumbra, and mismatch calculated by Minerva and RAPID. We used kappa analysis to assess agreement among the decisions of 3 radiologists about whether to recommend EVT based on the maps generated by the 4 packages. RESULTS: We found significant differences between Minerva and RAPID estimates of penumbra (67.39±41.37 mL vs. 78.35±45.38 mL, p < 0.001) and mismatch (48.41±32.03 mL vs. 61.27±32.73 mL, p < 0.001), but not of infarct core (p = 0.230). Pearson correlation coefficients were 0.94 (95% CI: 0.90-0.96) for infarct core, 0.87 (95% CI: 0.79-0.91) for penumbra, and 0.72 (95% CI: 0.57-0.83) for mismatch volumes (p < 0.001). Limits of agreement were (-21.22 to 25.02) for infarct core volumes, (-54.79 to 32.88) for penumbra volumes, and (-60.16 to 34.45) for mismatch volumes. Final agreement on EVT decision making was substantial between Minerva vs. RAPID (k = 0.722), Minerva vs. Olea (k = 0.761), and RAPID vs. Olea (k = 0.782), but moderate for ISP vs. the other three. CONCLUSIONS: Despite quantitative differences in the estimates of infarct core, penumbra, and mismatch from the 4 software packages, their impact on radiologists' decisions about EVT is relatively small.
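The agreement statistic used above, Cohen's kappa, corrects raw agreement between two raters for the agreement expected by chance from their marginal label frequencies. A minimal sketch (the yes/no decision vectors are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b) -> float:
    """Chance-corrected agreement between two raters' categorical decisions."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in freq_a.keys() | freq_b.keys())
    return (p_observed - p_chance) / (1 - p_chance)

# Two raters deciding EVT yes (1) / no (0) on the same four toy cases
a = [1, 1, 0, 0]
b = [1, 1, 0, 1]
print(cohens_kappa(a, b))  # 0.75 observed, 0.5 by chance -> kappa 0.5
```

On the conventional scale, values of 0.61-0.80 (as reported for Minerva vs. RAPID vs. Olea) are read as substantial agreement, 0.41-0.60 as moderate.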


Subject(s)
Brain Ischemia , Ischemic Stroke , Stroke , Humans , Perfusion , Perfusion Imaging/methods , Retrospective Studies , Software , Stroke/diagnostic imaging , Stroke/surgery , Tomography, X-Ray Computed/methods
6.
Comput Methods Programs Biomed ; 208: 106275, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34274609

ABSTRACT

BACKGROUND AND OBJECTIVE: Magnetic resonance imaging is the most reliable imaging technique to assess the heart. The analysis of the left ventricle is particularly important, as the main pathologies directly affect this region, and characterizing it requires extracting its volume. In this work we present a neural network architecture that directly estimates the left ventricle volume in short-axis cine magnetic resonance imaging in the end-diastolic frame and provides a segmentation of the region on which the volume calculation is based, thus offering explainability for the estimated value. METHODS: The network was designed to directly target the volumes to estimate, not requiring any labeled segmentation of the images. It was based on a 3D U-net with extra layers, defined in a scanning module, that learned features such as the circularity of the objects and the volumes to estimate in a weakly supervised manner. The only targets defined were the left ventricle volumes and the circularity of the detected object, enforced through the estimation of the value of π derived from its shape. We had access to 397 cases corresponding to 397 different subjects, of which we randomly selected 98 as the test set. RESULTS: The results show a good match between the real and estimated volumes in the test set, with a mean relative error of 8%, a mean absolute error of 9.12 ml, and a Pearson correlation coefficient of 0.95. The segmentations derived by the network achieved Dice coefficients with a mean value of 0.79.
CONCLUSIONS: The proposed method is capable of obtaining the left ventricle volume biomarker at end-diastole and of explaining how it obtains the result, in the form of a segmentation mask, without needing segmentation labels to train the algorithm. This makes it a potentially more trustworthy method for clinicians and a way to train neural networks more easily when segmentation labels are not readily available.
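The circularity target described above, recovering π from the detected object's shape, rests on the fact that a circular mask of radius r has area approaching πr². A minimal NumPy sketch of that geometric check (the synthetic rasterized disk stands in for a detected left-ventricle mask; the exact formulation in the paper may differ):

```python
import numpy as np

def estimate_pi(mask: np.ndarray, radius: float) -> float:
    """For a near-circular binary mask of known radius, area / r^2 -> pi."""
    return float(mask.sum()) / radius**2

# Rasterize a disk of radius 20 pixels on a 61x61 grid
r = 20
yy, xx = np.mgrid[-30:31, -30:31]
disk = (xx**2 + yy**2 <= r**2).astype(np.uint8)
print(round(estimate_pi(disk, r), 4))  # close to 3.1416
```

The closer the estimate lands to π, the more circular the detected region, which is why the network can use this value as a weak supervisory signal for left-ventricle shape without any segmentation labels.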


Subject(s)
Deep Learning , Heart Ventricles , Heart , Heart Ventricles/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Magnetic Resonance Imaging, Cine , Neural Networks, Computer