1.
Int J Neural Syst ; 34(8): 2450043, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38770651

ABSTRACT

Neurodegenerative diseases pose a formidable challenge to medical research, demanding a nuanced understanding of their progressive nature. In this regard, latent generative models can be used effectively for data-driven modeling of different dimensions of neurodegeneration, framed within the context of the manifold hypothesis. This paper proposes a joint framework for a multi-modal, common latent generative model to address the need for a more comprehensive understanding of the neurodegenerative landscape in the context of Parkinson's disease (PD). The proposed architecture uses coupled variational autoencoders (VAEs) to jointly model a common latent space for both neuroimaging and clinical data from the Parkinson's Progression Markers Initiative (PPMI). Alternative loss functions, different normalization procedures, and the interpretability and explainability of latent generative models are addressed, leading to a model that was able to predict clinical symptomatology in the test set, as measured by the unified Parkinson's disease rating scale (UPDRS), with R2 up to 0.86 for same-modality prediction and 0.441 for cross-modality prediction (using neuroimaging alone). The findings provide a foundation for further advancements in clinical research and practice, with potential applications in decision-making processes for PD. The study also highlights the limitations and capabilities of the proposed model, emphasizing its direct interpretability and potential impact on understanding and interpreting neuroimaging patterns associated with PD symptomatology.
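To make the architecture concrete, the following is a minimal sketch of two coupled VAEs that share a latent space, one for neuroimaging features and one for clinical scores. It is not the authors' code: the input dimensions, layer sizes, coupling penalty, and loss weights are illustrative assumptions.

```python
# Sketch: two VAEs coupled through a shared latent space (assumed dimensions).
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, in_dim, latent_dim=16, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def elbo(x, recon, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Assumed feature sizes: 116 imaging features, 10 clinical variables (e.g. UPDRS items).
img_vae, clin_vae = VAE(in_dim=116), VAE(in_dim=10)
opt = torch.optim.Adam(list(img_vae.parameters()) + list(clin_vae.parameters()), lr=1e-3)

def training_step(x_img, x_clin, coupling_weight=1.0):
    recon_i, mu_i, lv_i = img_vae(x_img)
    recon_c, mu_c, lv_c = clin_vae(x_clin)
    loss = elbo(x_img, recon_i, mu_i, lv_i) + elbo(x_clin, recon_c, mu_c, lv_c)
    # Coupling term: push paired samples to the same latent code, so clinical
    # symptomatology can be decoded from the imaging encoder alone (cross-modality).
    loss = loss + coupling_weight * nn.functional.mse_loss(mu_i, mu_c)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Coupling the encoders through their latent means is only one possible choice; the paper explores alternative loss functions and normalization procedures.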


Subject(s)
Deep Learning , Disease Progression , Neuroimaging , Parkinson Disease , Parkinson Disease/diagnostic imaging , Parkinson Disease/physiopathology , Humans , Neuroimaging/methods , Supervised Machine Learning , Multimodal Imaging , Male , Female
2.
Hum Brain Mapp ; 45(5): e26555, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38544418

ABSTRACT

Novel features derived from imaging and artificial intelligence systems are commonly coupled to construct computer-aided diagnosis (CAD) systems that are intended as clinical support tools or for investigation of complex biological patterns. This study used sulcal patterns from structural images of the brain as the basis for classifying patients with schizophrenia from unaffected controls. Statistical, machine learning and deep learning techniques were sequentially applied as a demonstration of how a CAD system might be comprehensively evaluated in the absence of prior empirical work or extant literature to guide development, and when only small sample datasets are available. Sulcal features of the entire cerebral cortex were derived from 58 schizophrenia patients and 56 healthy controls. No similar CAD system using sulcal features from the entire cortex has been reported. We considered all the stages in a CAD system workflow: preprocessing, feature selection and extraction, and classification. The explainable AI techniques Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) were applied to assess the relevance of features to classification. At each stage, alternatives were compared in terms of their performance in the context of a small sample. Differentiating sulcal patterns were located in temporal and precentral areas, as well as the collateral fissure. We also verified the benefits of applying dimensionality reduction techniques and validation methods, such as resubstitution with upper bound correction, to optimize performance.
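As a rough illustration of this kind of small-sample workflow, the sketch below chains feature selection, dimensionality reduction, an SVM classifier and cross-validated evaluation on tabular sulcal features, followed by SHAP-based feature relevance. The feature dimensionality, estimators and hyperparameters are assumptions, not the study's actual configuration.

```python
# Sketch: small-sample CAD pipeline on tabular sulcal features (assumed sizes).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(114, 500))          # 58 patients + 56 controls, placeholder features
y = np.array([1] * 58 + [0] * 56)

cad = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=50)),   # univariate feature selection
    ("reduce", PCA(n_components=10)),           # dimensionality reduction
    ("clf", SVC(kernel="linear", probability=True)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("mean CV accuracy:", cross_val_score(cad, X, y, cv=cv).mean())

# Model-agnostic feature relevance with SHAP on the fitted pipeline; with ~100
# samples a KernelExplainer over a small background set stays tractable.
cad.fit(X, y)
explainer = shap.KernelExplainer(cad.predict_proba, shap.sample(X, 20))
shap_values = explainer.shap_values(X[:5])
```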


Subject(s)
Artificial Intelligence , Schizophrenia , Humans , Schizophrenia/diagnostic imaging , Neuroimaging , Machine Learning , Diagnosis, Computer-Assisted
3.
Pharmacol Res ; 197: 106984, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37940064

ABSTRACT

The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. First, we briefly analyse how these algorithms have evolved and which are most widely applied in this domain. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease progression evaluation systems. These applications are possible because such algorithms can analyse complex patterns and relationships within imaging data and extract quantitative, objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
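As one concrete example of the opportunities the review mentions, the sketch below applies simple geometric data augmentation (random flips and small rotations) to a 3D PET/SPECT volume to mitigate limited sample sizes. The volume shape and parameter ranges are illustrative assumptions, not recommendations from the review.

```python
# Sketch: simple geometric augmentation of a 3D nuclear-imaging volume.
import numpy as np
from scipy.ndimage import rotate

def augment_volume(vol, rng, max_angle=10.0):
    """Return a randomly flipped and slightly rotated copy of a 3D volume."""
    out = vol.copy()
    if rng.random() < 0.5:                       # random left-right flip
        out = out[::-1, :, :]
    angle = rng.uniform(-max_angle, max_angle)   # small in-plane rotation
    out = rotate(out, angle, axes=(0, 1), reshape=False, order=1, mode="nearest")
    return out

rng = np.random.default_rng(42)
volume = rng.random((91, 109, 91))               # MNI-like grid, placeholder data
augmented = [augment_volume(volume, rng) for _ in range(4)]
```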


Subject(s)
Deep Learning , Tomography, X-Ray Computed , Positron-Emission Tomography/methods , Tomography, Emission-Computed, Single-Photon/methods , Machine Learning
4.
Int J Neural Syst ; 33(4): 2350015, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36799660

ABSTRACT

The prevalence of dementia is currently increasing worldwide. This syndrome produces a deterioration in cognitive function that cannot be reverted. However, an early diagnosis can be crucial for slowing its progress. The Clock Drawing Test (CDT) is a widely used paper-and-pencil test for cognitive assessment in which an individual has to manually draw a clock on paper. There are many scoring systems for this test, and most of them depend on the subjective assessment of the expert. This study proposes a computer-aided diagnosis (CAD) system based on artificial intelligence (AI) methods to analyze the CDT and obtain an automatic diagnosis of cognitive impairment (CI). This system employs a preprocessing pipeline in which the clock is detected, centered and binarized to decrease the computational burden. Then, the resulting image is fed into a Convolutional Neural Network (CNN) to identify the informative patterns within the CDT drawings that are relevant for the assessment of the patient's cognitive status. Performance is evaluated in a real context where patients with CI and controls have been classified by clinical experts in a balanced sample size of [Formula: see text] drawings. The proposed method provides an accuracy of [Formula: see text] in the binary case-control classification task, with an AUC of [Formula: see text]. These results are indeed relevant considering the use of the classic version of the CDT. The large sample size suggests that the proposed method is reliable enough for use in clinical contexts and demonstrates the suitability of CAD systems in the CDT assessment process. Explainable artificial intelligence (XAI) methods are applied to identify the regions most relevant to classification. Finding these patterns is extremely helpful for understanding the brain damage caused by CI. A validation method using resubstitution with upper bound correction in a machine learning approach is also discussed.
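The following is a minimal sketch, under stated assumptions, of the kind of preprocessing the pipeline describes: binarize the scanned drawing with Otsu's threshold, locate the clock as the largest contour, center it on a square canvas, and resize it for CNN input. The image path, output size, and OpenCV-based implementation are assumptions, not the paper's code.

```python
# Sketch: detect, center and binarize a Clock Drawing Test scan for a CNN.
import cv2
import numpy as np

def preprocess_cdt(image_path, out_size=128):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Binarize so the ink becomes foreground, reducing the computational burden.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Assume the clock is the largest external contour on the page.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    clock = binary[y:y + h, x:x + w]
    # Pad to a square so the drawing stays centered, then resize for the CNN.
    side = max(w, h)
    canvas = np.zeros((side, side), dtype=np.uint8)
    canvas[(side - h) // 2:(side - h) // 2 + h,
           (side - w) // 2:(side - w) // 2 + w] = clock
    return cv2.resize(canvas, (out_size, out_size)) / 255.0
```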


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Humans , Alzheimer Disease/diagnosis , Artificial Intelligence , Reproducibility of Results , Cognitive Dysfunction/diagnosis , Neuropsychological Tests
5.
Int J Neural Syst ; 32(5): 2250019, 2022 May.
Article in English | MEDLINE | ID: mdl-35313792

ABSTRACT

Spatial normalization helps us to compare two or more input brain scans quantitatively. Although an affine normalization approach preserves the anatomical structures, it is more common in the neuroimaging field to find works that use nonlinear transformations. The main reason is that they facilitate voxel-wise comparison, not only when studying functional images but also when comparing MRI scans, since the scans fit a reference template better. However, the amount of bias introduced by nonlinear transformations can potentially alter the final outcome of a diagnosis, especially when studying functional scans for neurological disorders like Parkinson's Disease (PD). In this context, we have tried to quantify the bias introduced by the affine and the nonlinear spatial registration of FP-CIT SPECT volumes of healthy control subjects and patients with PD. For that purpose, we calculated the deformation fields of each participant and applied these deformation fields to a 3D grid. As the spacing between the edges of the small cubes comprising the grid changes, we can quantify which parts of the brain have been enlarged, compressed, or left unchanged. When the nonlinear approach is applied, scans from PD patients show a region near the striatum very similar in shape to that of healthy subjects. This artificially increases the interclass separation between patients with PD and healthy subjects, as the local intensity is decreased in the latter region, and leads machine learning systems to biased results due to the artificial information introduced by these deformations.
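A standard way to quantify this kind of local expansion and compression, closely related to the grid experiment described above, is the Jacobian determinant of the deformation field. The sketch below computes it with NumPy under the assumption that the displacement field is stored as an (X, Y, Z, 3) array in voxel units; it is not the paper's implementation.

```python
# Sketch: Jacobian determinant of a displacement field; values > 1 indicate
# local expansion, values < 1 indicate local compression.
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """disp: displacement field of shape (X, Y, Z, 3) in voxel units."""
    # grads[c][i] = d u_c / d x_i for each displacement component c.
    grads = [np.gradient(disp[..., c], *spacing) for c in range(3)]
    # Jacobian of the mapping x -> x + u(x) is I + grad(u).
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2) + np.eye(3)
    return np.linalg.det(J)

disp = np.zeros((64, 64, 64, 3))          # identity transform: determinant is 1 everywhere
print(jacobian_determinant(disp).mean())  # -> 1.0
```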


Subject(s)
Parkinson Disease , Tropanes , Humans , Magnetic Resonance Imaging , Parkinson Disease/diagnostic imaging , Tomography, Emission-Computed, Single-Photon/methods
6.
Inf Fusion ; 64: 149-187, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32834795

ABSTRACT

Multimodal fusion in neuroimaging combines data from multiple imaging modalities to overcome the fundamental limitations of individual modalities. Neuroimaging fusion can achieve higher temporal and spatial resolution, enhance contrast, correct imaging distortions, and bridge physiological and cognitive information. In this study, we analyzed over 450 references from PubMed, Google Scholar, IEEE, ScienceDirect, Web of Science, and various sources published from 1978 to 2020. We provide a review that encompasses (1) an overview of current challenges in multimodal fusion, (2) current medical applications of fusion for specific neurological diseases, (3) strengths and limitations of available imaging modalities, (4) fundamental fusion rules, (5) fusion quality assessment methods, and (6) applications of fusion for atlas-based segmentation and quantification. Overall, multimodal fusion shows significant benefits in clinical diagnosis and neuroscience research. Widespread education and further research amongst engineers, researchers and clinicians will benefit the field of multimodal neuroimaging.
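To illustrate one of the fundamental fusion rules this kind of review surveys, the sketch below fuses two co-registered 2D slices in the wavelet domain by averaging the approximation coefficients and keeping the maximum-magnitude detail coefficients. The random input slices and the wavelet choice are assumptions made for the example, not a rule endorsed by the review.

```python
# Sketch: wavelet-domain fusion of two co-registered slices (e.g. MRI and PET).
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2"):
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    # Average the low-frequency content, keep the strongest high-frequency detail.
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = ((cA_a + cA_b) / 2.0,
             (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b)))
    return pywt.idwt2(fused, wavelet)

rng = np.random.default_rng(0)
mri_slice, pet_slice = rng.random((128, 128)), rng.random((128, 128))
fused_slice = wavelet_fuse(mri_slice, pet_slice)
```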
