Results 1 - 20 of 43
1.
Int J Neural Syst ; : 2450043, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38770651

ABSTRACT

Neurodegenerative diseases pose a formidable challenge to medical research, demanding a nuanced understanding of their progressive nature. In this regard, latent generative models can effectively be used for data-driven modeling of different dimensions of neurodegeneration, framed within the context of the manifold hypothesis. This paper proposes a joint framework for a multi-modal, common latent generative model to address the need for a more comprehensive understanding of the neurodegenerative landscape in the context of Parkinson's disease (PD). The proposed architecture uses coupled variational autoencoders (VAEs) to jointly model a common latent space for both neuroimaging and clinical data from the Parkinson's Progression Markers Initiative (PPMI). Alternative loss functions, different normalization procedures, and the interpretability and explainability of latent generative models are addressed, leading to a model that was able to predict clinical symptomatology in the test set, as measured by the unified Parkinson's disease rating scale (UPDRS), with R² up to 0.86 for same-modality prediction and 0.441 for cross-modality prediction (using solely neuroimaging). The findings provide a foundation for further advancements in clinical research and practice, with potential applications in decision-making processes for PD. The study also highlights the limitations and capabilities of the proposed model, emphasizing its direct interpretability and potential impact on understanding and interpreting neuroimaging patterns associated with PD symptomatology.
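As a minimal illustration of the coupled-VAE idea, the sketch below pairs two small VAEs and adds a latent-alignment term to their joint loss. The layer sizes, dimensionalities (IMG_DIM, CLIN_DIM, LATENT_DIM), and the alignment penalty are illustrative assumptions, not the paper's exact architecture or loss functions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, CLIN_DIM, LATENT_DIM = 512, 64, 16   # assumed dimensionalities

class VAE(nn.Module):
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def coupled_vae_loss(img, clin, vae_img, vae_clin, beta=1.0, gamma=1.0):
    # Reconstruction + KL for each modality, plus a latent-alignment term that
    # pulls the two posterior means together (one simple way to couple the VAEs;
    # the paper explores alternative loss functions).
    rec_i, mu_i, lv_i = vae_img(img)
    rec_c, mu_c, lv_c = vae_clin(clin)
    rec = F.mse_loss(rec_i, img) + F.mse_loss(rec_c, clin)
    kl = -0.5 * torch.mean(1 + lv_i - mu_i.pow(2) - lv_i.exp()) \
         - 0.5 * torch.mean(1 + lv_c - mu_c.pow(2) - lv_c.exp())
    align = F.mse_loss(mu_i, mu_c)            # encourage a shared latent space
    return rec + beta * kl + gamma * align

vae_img, vae_clin = VAE(IMG_DIM, LATENT_DIM), VAE(CLIN_DIM, LATENT_DIM)
loss = coupled_vae_loss(torch.randn(8, IMG_DIM), torch.randn(8, CLIN_DIM), vae_img, vae_clin)
print(loss.item())
```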

2.
Hum Brain Mapp ; 45(5): e26555, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38544418

ABSTRACT

Novel features derived from imaging and artificial intelligence systems are commonly coupled to construct computer-aided diagnosis (CAD) systems that are intended as clinical support tools or for investigation of complex biological patterns. This study used sulcal patterns from structural images of the brain as the basis for classifying patients with schizophrenia from unaffected controls. Statistical, machine learning, and deep learning techniques were sequentially applied as a demonstration of how a CAD system might be comprehensively evaluated in the absence of prior empirical work or extant literature to guide development, and with only small sample datasets available. Sulcal features of the entire cerebral cortex were derived from 58 schizophrenia patients and 56 healthy controls. No similar CAD system using sulcal features from the entire cortex has been reported. We considered all the stages in a CAD system workflow: preprocessing, feature selection and extraction, and classification. The explainable AI techniques Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) were applied to assess the relevance of features to classification. At each stage, alternatives were compared in terms of their performance in the context of a small sample. Differentiating sulcal patterns were located in temporal and precentral areas, as well as the collateral fissure. We also verified the benefits of applying dimensionality reduction techniques and validation methods, such as resubstitution with upper bound correction, to optimize performance.
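The sketch below shows how SHAP values might be used to rank sulcal features' contribution to such a classifier. The random data, feature count, and random-forest model are placeholders, not the study's pipeline, which also includes preprocessing, feature selection, and small-sample validation steps.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(114, 40))          # 114 subjects x 40 sulcal features (placeholders)
y = rng.integers(0, 2, size=114)        # 0 = control, 1 = schizophrenia (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Model-agnostic SHAP: explain the predicted probability of the patient class.
f = lambda data: clf.predict_proba(data)[:, 1]
explainer = shap.KernelExplainer(f, X[:20])      # small background sample
shap_values = explainer.shap_values(X[:10])      # (10 subjects, 40 features)

importance = np.abs(shap_values).mean(axis=0)    # mean |SHAP| per feature
print("Most relevant feature indices:", np.argsort(importance)[::-1][:5])
```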


Subject(s)
Artificial Intelligence , Schizophrenia , Humans , Schizophrenia/diagnostic imaging , Neuroimaging , Machine Learning , Diagnosis, Computer-Assisted
3.
Pharmacol Res ; 197: 106984, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37940064

ABSTRACT

The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. First, a brief analysis is provided of how these algorithms have evolved and which are most widely applied in this domain. Their different potential applications in nuclear imaging are then discussed, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease-progression evaluation systems, all of which build on the ability of these algorithms to analyse complex patterns and relationships within imaging data and to extract quantitative, objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.


Subject(s)
Deep Learning , Tomography, X-Ray Computed , Positron-Emission Tomography/methods , Tomography, Emission-Computed, Single-Photon/methods , Machine Learning
4.
J Imaging ; 9(7)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37504824

ABSTRACT

Artificial intelligence (AI) refers to the field of computer science theory and technology [...].

6.
Int J Neural Syst ; 33(4): 2303001, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36867103
7.
IEEE Sens J ; 22(18): 17573-17582, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36346095

ABSTRACT

(Aim) The COVID-19 pandemic has caused a heavy death toll to date. Chest CT is an effective imaging sensor system for accurate diagnosis. (Method) This article proposes a novel seven-layer convolutional neural network-based smart diagnosis model for COVID-19 (7L-CNN-CD). We propose a 14-way data augmentation to enhance the training set and introduce stochastic pooling to replace traditional pooling methods. (Results) Ten runs of 10-fold cross-validation show that our 7L-CNN-CD approach achieves a sensitivity of 94.44 ± 0.73%, a specificity of 93.63 ± 1.60%, and an accuracy of 94.03 ± 0.80%. (Conclusion) Our proposed 7L-CNN-CD is effective in diagnosing COVID-19 in chest CT images and outperforms several state-of-the-art algorithms. The data augmentation and stochastic pooling methods are shown to be effective.
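A brief numpy illustration of stochastic pooling, the replacement for traditional pooling mentioned above: within each window, the (non-negative, post-ReLU) activations are normalized into probabilities and one activation is sampled as the pooled output. The window size and synthetic feature map are assumptions; the paper embeds the operation inside its CNN.

```python
import numpy as np

def stochastic_pool2d(x, k=2, seed=0):
    rng = np.random.default_rng(seed)
    h, w = x.shape
    out = np.zeros((h // k, w // k))
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            window = x[i:i + k, j:j + k].ravel()
            s = window.sum()
            if s > 0:
                probs = window / s                    # activation-proportional probabilities
                idx = rng.choice(window.size, p=probs)
                out[i // k, j // k] = window[idx]
            # if the window is all zeros, the pooled value stays 0
    return out

feat = np.maximum(np.random.default_rng(1).normal(size=(8, 8)), 0)  # fake ReLU feature map
print(stochastic_pool2d(feat))
```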

8.
Comput Biol Med ; 149: 106053, 2022 10.
Article in English | MEDLINE | ID: mdl-36108415

ABSTRACT

Epilepsy is a brain disorder characterized by recurrent seizures. Symptoms of a seizure include confusion, abnormal staring, and rapid, sudden, and uncontrollable hand movements. Epileptic seizure detection methods involve neurological exams, blood tests, neuropsychological tests, and neuroimaging modalities. Among these, neuroimaging modalities have received considerable attention from specialist physicians. One way to facilitate the accurate and fast diagnosis of epileptic seizures is to employ computer-aided diagnosis systems (CADS) based on deep learning (DL) and neuroimaging modalities. This paper provides a comprehensive overview of DL methods employed for epileptic seizure detection and prediction using neuroimaging modalities. First, DL-based CADS for epileptic seizure detection and prediction using neuroimaging modalities are discussed, together with descriptions of the various datasets, preprocessing algorithms, and DL models that have been used for this purpose. Research on rehabilitation tools is then presented, covering brain-computer interfaces (BCI), cloud computing, the internet of things (IoT), hardware implementation of DL techniques on field-programmable gate arrays (FPGA), and more. The discussion section compares research on epileptic seizure detection with that on prediction, and the challenges of epileptic seizure detection and prediction using neuroimaging modalities and DL models are described. In addition, possible directions for future work in this field, specifically for addressing challenges in datasets, DL, rehabilitation, and hardware models, are proposed. The final section concludes by summarizing the paper's significant findings.


Subject(s)
Deep Learning , Epilepsy , Algorithms , Electroencephalography/methods , Epilepsy/diagnostic imaging , Humans , Neuroimaging , Seizures/diagnostic imaging
10.
Front Syst Neurosci ; 16: 838822, 2022.
Article in English | MEDLINE | ID: mdl-35720439

ABSTRACT

Aims: Brain diseases refer to intracranial tissue and organ inflammation, vascular diseases, tumors, degeneration, malformations, genetic diseases, immune diseases, nutritional and metabolic diseases, poisoning, trauma, parasitic diseases, etc. Taking Alzheimer's disease (AD) as an example, the number of patients is increasing dramatically in developed countries. By 2025, the number of elderly patients with AD aged 65 and over will reach 7.1 million, an increase of nearly 29% over the 5.5 million patients of the same age in 2018. Unless medical breakthroughs are made, the number of AD patients may increase from 5.5 million to 13.8 million by 2050, almost three times the current figure. Researchers have focused on developing complex machine learning (ML) algorithms, e.g., convolutional neural networks (CNNs), containing millions of parameters. However, CNN models need many training samples, and a small number of training samples may lead to overfitting. Alongside continued research on CNNs, other networks have been proposed, such as randomized neural networks (RNNs); the Schmidt neural network (SNN), random vector functional link (RVFL), and extreme learning machine (ELM) are three types of RNNs. Methods: To cope with these problems, we propose three novel models to classify brain diseases: DenseNet-based SNN (DSNN), DenseNet-based RVFL (DRVFL), and DenseNet-based ELM (DELM). The backbone of the three proposed models is a pre-trained, customized DenseNet. The modified DenseNet is fine-tuned on the empirical dataset, and the last five layers of the fine-tuned DenseNet are then substituted by the SNN, RVFL, or ELM head. Results: Overall, the DSNN achieves the best classification performance among the three proposed models. We evaluate the proposed DSNN by five-fold cross-validation. The accuracy, sensitivity, specificity, precision, and F1-score of the proposed DSNN on the test set are 98.46% ± 2.05%, 100.00% ± 0.00%, 85.00% ± 20.00%, 98.36% ± 2.17%, and 99.16% ± 1.11%, respectively. The proposed DSNN is compared with restricted DenseNet, spiking neural network, and other state-of-the-art methods, and obtains the best results among all compared models. Conclusions: DSNN is an effective model for classifying brain diseases.
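The sketch below illustrates the "deep features plus randomized head" idea with an ELM head: hidden weights are random and fixed, and output weights are solved in closed form. The DenseNet features are faked with random vectors so the example stays self-contained; dimensions and the class count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 1024))          # stand-in for DenseNet feature vectors
labels = rng.integers(0, 4, size=200)            # 4 brain-disease classes (placeholder)
Y = np.eye(4)[labels]                            # one-hot targets

n_hidden = 256
W = rng.normal(size=(1024, n_hidden))            # random, fixed input weights
b = rng.normal(size=n_hidden)
H = np.tanh(features @ W + b)                    # hidden-layer activations
beta = np.linalg.pinv(H) @ Y                     # output weights via least squares

pred = np.argmax(np.tanh(features @ W + b) @ beta, axis=1)
print("training accuracy:", (pred == labels).mean())
```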

11.
Biology (Basel) ; 11(1)2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35053131

ABSTRACT

As an important imaging modality, mammography is considered the global gold standard for early detection of breast cancer. Computer-aided diagnosis (CAD) systems have played a crucial role in facilitating quicker diagnostic procedures, which otherwise could take weeks if only radiologists were involved. Some of these CAD systems require pectoral muscle segmentation to separate the breast region from the pectoral muscle for specific analysis tasks. Therefore, accurate and efficient breast pectoral muscle segmentation frameworks are in high demand. Here, we propose a novel deep learning framework, code-named PeMNet, for breast pectoral muscle segmentation in mammography images. In the proposed PeMNet, we integrate a novel attention module called the Global Channel Attention Module (GCAM), which can effectively improve the segmentation performance of Deeplabv3+ with minimal parameter overhead. In GCAM, channel attention maps (CAMs) are first extracted by concatenating feature maps after parallel global average pooling and global maximum pooling operations. The CAMs are then refined and scaled up by a multi-layer perceptron (MLP) for elementwise multiplication with the CAMs of the next feature level. By iteratively repeating this procedure, global CAMs (GCAMs) are formed and multiplied elementwise with the final feature maps to produce the final segmentation. In this way, CAMs from early stages of a deep convolutional network can be effectively passed on to later stages, leading to better information usage. Experiments on a merged dataset derived from two datasets, INbreast and OPTIMAM, showed that PeMNet greatly outperformed state-of-the-art methods, achieving an IoU of 97.46%, a global pixel accuracy of 99.48%, a Dice similarity coefficient of 96.30%, and a Jaccard index of 93.33%.
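A simplified channel-attention block in the spirit of the described GCAM is sketched below: global average- and max-pooled channel descriptors are concatenated, passed through an MLP, and used to rescale the feature map. This is an illustrative approximation, not the exact PeMNet module or its cross-level propagation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, H, W)
        avg = x.mean(dim=(2, 3))                # global average pooling -> (N, C)
        mx = x.amax(dim=(2, 3))                 # global max pooling     -> (N, C)
        cam = self.mlp(torch.cat([avg, mx], dim=1))   # channel attention map (N, C)
        return x * cam.unsqueeze(-1).unsqueeze(-1)    # elementwise rescaling

x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)            # torch.Size([2, 64, 32, 32])
```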

12.
Int J Neural Syst ; 32(3): 2250001, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34931938

ABSTRACT

Implantable high-density multichannel neural recording microsystems provide simultaneous recording of brain activities. Wireless transmission of the entire recorded data causes high bandwidth usage, which is not tolerable for implantable applications. As a result, a hardware-friendly compression module is required to reduce the amount of data before it is transmitted. This paper presents a novel compression approach that utilizes a spike extractor and a vector quantization (VQ)-based spike compressor. In this approach, extracted spikes are vector quantized using an unsupervised learning process, providing a high spike compression ratio (CR) of 10-80. The combination of extracting and compressing neural spikes results in a significant data reduction while preserving the spike waveshapes. The compression performance of the proposed approach was evaluated under various conditions. We also developed new architectures so that the hardware blocks of our approach can be implemented more efficiently. The compression module was implemented in a 180-nm standard CMOS process, achieving an SNDR of 14.49 dB and a classification accuracy (CA) of 99.62% at a CR of 20, while consuming 4 µW of power and 0.16 mm² of chip area per channel.
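A minimal sketch of vector-quantization-based spike compression follows: extracted spike waveforms are clustered into a small codebook (unsupervised), so only a codebook index per spike needs to be transmitted and the decoder looks up the corresponding waveshape. Waveform length, codebook size, and the synthetic spikes are assumptions, and the hardware-oriented details of the paper are not modeled.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
spikes = rng.normal(size=(1000, 48))          # 1000 extracted spikes, 48 samples each

codebook_size = 32
vq = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(spikes)
indices = vq.predict(spikes)                  # 5-bit index per spike instead of 48 samples
reconstructed = vq.cluster_centers_[indices]  # decoder side: look up the waveshape

bits_raw = spikes.size * 16                   # e.g. 16-bit samples
bits_vq = len(indices) * int(np.ceil(np.log2(codebook_size)))
print("compression ratio on the spike payload:", bits_raw / bits_vq)
```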


Subject(s)
Data Compression , Signal Processing, Computer-Assisted , Action Potentials , Algorithms , Data Compression/methods
13.
Int J Intell Syst ; 37(2): 1572-1598, 2022 Feb.
Article in English | MEDLINE | ID: mdl-38607823

ABSTRACT

COVID-19 pneumonia emerged in December 2019 and has caused a large number of deaths and huge economic losses. In this study, we aimed to develop a computer-aided diagnosis system based on artificial intelligence to automatically identify COVID-19 in chest computed tomography images. We utilized transfer learning to obtain an image-level representation (ILR) based on a backbone deep convolutional neural network. Then, a novel neighboring aware representation (NAR) was proposed to exploit the neighboring relationships between the ILR vectors. To obtain the neighboring information in the feature space of the ILRs, an ILR graph was generated based on the k-nearest neighbors algorithm, in which the ILRs were linked with their k nearest neighboring ILRs. Afterward, the NARs were computed by fusing the ILRs and the graph. On the basis of this representation, a novel end-to-end COVID-19 classification architecture called the neighboring aware graph neural network (NAGNN) was proposed. Private and public datasets were used for evaluation in the experiments. Results revealed that our NAGNN outperformed all 10 state-of-the-art methods in terms of generalization ability. Therefore, the proposed NAGNN is effective in detecting COVID-19 and can be used in clinical diagnosis.
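The sketch below illustrates the graph-building step: each image-level representation is linked to its k nearest neighbours in feature space and fused with their mean to form a neighbouring-aware representation. The fusion rule, dimensions, and mixing weight are simple illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
ilr = rng.normal(size=(100, 256))                 # 100 CT scans, 256-d ILRs (placeholder)

k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(ilr) # +1 because each point is its own neighbour
_, idx = nn.kneighbors(ilr)
neighbour_mean = ilr[idx[:, 1:]].mean(axis=1)     # drop self, average the k neighbours

alpha = 0.5                                       # assumed mixing weight
nar = alpha * ilr + (1 - alpha) * neighbour_mean  # neighbouring-aware representation
print(nar.shape)
```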

14.
Front Neuroinform ; 15: 777977, 2021.
Article in English | MEDLINE | ID: mdl-34899226

ABSTRACT

Schizophrenia (SZ) is a mental disorder in which, due to the secretion of specific chemicals in the brain, the function of some brain regions falls out of balance, leading to a lack of coordination between thoughts, actions, and emotions. This study provides various intelligent deep learning (DL)-based methods for automated SZ diagnosis via electroencephalography (EEG) signals, and the obtained results are compared with those of conventional intelligent methods. To implement the proposed methods, the dataset of the Institute of Psychiatry and Neurology in Warsaw, Poland, was used. First, EEG signals were divided into 25-s time frames and then normalized by z-score or the L2 norm. In the classification step, two different approaches were considered for SZ diagnosis from EEG signals. Classification was first carried out with conventional machine learning methods, e.g., support vector machine, k-nearest neighbors, decision tree, naïve Bayes, random forest, extremely randomized trees, and bagging. Various DL models, namely long short-term memory networks (LSTMs), one-dimensional convolutional networks (1D-CNNs), and 1D-CNN-LSTMs, were then applied; the DL models were implemented and compared with different activation functions. Among the proposed DL models, the CNN-LSTM architecture achieved the best performance, using the ReLU activation function with combined z-score and L2 normalization. The proposed CNN-LSTM model achieved an accuracy of 99.25%, better than the results of most previous studies in this field. All simulations used k-fold cross-validation with k = 5.
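A compact 1D-CNN-LSTM sketch for EEG segment classification is given below, in the spirit of the best-performing architecture above. The channel count, sampling rate, and layer sizes are illustrative assumptions rather than the study's exact configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=19, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        h = self.cnn(x)                # (batch, 64, time')
        h = h.transpose(1, 2)          # (batch, time', 64) for the LSTM
        _, (hn, _) = self.lstm(h)
        return self.fc(hn[-1])         # class logits

x = torch.randn(8, 19, 25 * 128)       # eight 25-s segments at an assumed 128 Hz
print(CNNLSTM()(x).shape)              # torch.Size([8, 2])
```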

15.
Complex Intell Systems ; 7(3): 1295-1310, 2021.
Article in English | MEDLINE | ID: mdl-34804768

ABSTRACT

Ductal carcinoma in situ (DCIS) is a pre-cancerous lesion in the ducts of the breast, and early diagnosis is crucial for optimal therapeutic intervention. Thermography is a non-invasive imaging tool that can be utilized for the detection of DCIS, and although its accuracy is high (~88%), its sensitivity can still be improved. Hence, we aimed to develop an automated artificial intelligence-based system for improved detection of DCIS in thermographs. This study proposes a novel convolutional neural network (CNN)-based system, termed CNN-BDER, developed on a multisource dataset containing 240 DCIS images and 240 healthy breast images. Batch normalization, dropout, exponential linear units, and rank-based weighted pooling were integrated into the CNN, along with L-way data augmentation. Ten runs of tenfold cross-validation were used to report unbiased performance. Our proposed method achieved a sensitivity of 94.08 ± 1.22%, a specificity of 93.58 ± 1.49%, and an accuracy of 93.83 ± 0.96%. The proposed method outperforms eight state-of-the-art approaches and manual diagnosis. The trained model could serve as a visual question answering system and improve diagnostic accuracy.
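A small numpy sketch of rank-based weighted pooling follows: activations in each window are sorted and combined with weights that decrease with rank. The linearly decreasing weights used here are an assumption for illustration; the paper defines its own weighting scheme.

```python
import numpy as np

def rank_weighted_pool2d(x, k=2):
    n = k * k
    weights = np.arange(n, 0, -1, dtype=float)
    weights /= weights.sum()                       # e.g. [0.4, 0.3, 0.2, 0.1] for a 2x2 window
    h, w = x.shape
    out = np.zeros((h // k, w // k))
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            window = np.sort(x[i:i + k, j:j + k].ravel())[::-1]  # descending by rank
            out[i // k, j // k] = window @ weights
    return out

feat = np.random.default_rng(0).random((8, 8))
print(rank_weighted_pool2d(feat))
```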

16.
Cancers (Basel) ; 13(19)2021 Oct 06.
Article in English | MEDLINE | ID: mdl-34638493

ABSTRACT

Predicting functional outcomes after surgery and early adjuvant treatment is difficult due to the complex, extended, interlocking brain networks that underpin cognition. The aim of this study was to test glioma functional interactions with the rest of the brain, thereby identifying the risk factors of cognitive recovery or deterioration. Seventeen patients with diffuse non-enhancing glioma (aged 22-56 years) were longitudinally MRI scanned and cognitively assessed before and after surgery and during a 12-month recovery period (55 MRI scans in total after exclusions). We initially found, and then replicated in an independent dataset, that the spatial correlation pattern between regional and global BOLD signals (also known as global signal topography) was associated with tumour occurrence. We then estimated the coupling between the BOLD signal from within the tumour and the signal extracted from different brain tissues. We observed that the normative global signal topography is reorganised in glioma patients during the recovery period. Moreover, we found that the BOLD signal within the tumour and lesioned brain was coupled with the global signal and that this coupling was associated with cognitive recovery. Nevertheless, patients did not show any apparent disruption of functional connectivity within canonical functional networks. Understanding how tumour infiltration and coupling are related to patients' recovery represents a major step forward in prognostic development.
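As a brief illustration of the central measure, global signal topography can be computed by correlating each region's BOLD time series with the global mean signal, giving one coupling value per region. The random data below stands in for parcellated fMRI time series; the study's full pipeline (tumour masks, longitudinal sessions, tissue-specific coupling) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 100))                  # 200 time points x 100 regions (placeholder)

global_signal = ts.mean(axis=1)                   # global BOLD signal
topography = np.array([np.corrcoef(ts[:, r], global_signal)[0, 1]
                       for r in range(ts.shape[1])])
print(topography.shape)                           # one correlation per region
```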

17.
Comput Biol Med ; 136: 104697, 2021 09.
Article in English | MEDLINE | ID: mdl-34358994

ABSTRACT

Multiple sclerosis (MS) is a brain disease that causes visual, sensory, and motor problems and has a detrimental effect on the functioning of the nervous system. Multiple screening methods have been proposed for diagnosing MS; among them, magnetic resonance imaging (MRI) has received considerable attention from physicians. MRI modalities provide physicians with fundamental information about the structure and function of the brain, which is crucial for the rapid diagnosis of MS lesions. However, diagnosing MS from MRI is time-consuming, tedious, and prone to manual errors. Research on computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) for MS diagnosis involves both conventional machine learning and deep learning (DL) methods. In conventional machine learning, the feature extraction, feature selection, and classification steps are carried out by trial and error; in DL, by contrast, these steps are handled by deep layers whose parameters are learned automatically. This paper provides a complete review of automated MS diagnosis methods based on DL techniques and MRI neuroimaging modalities. Initially, the steps involved in the various CADS proposed for MS diagnosis using MRI modalities and DL techniques are investigated, and the important preprocessing techniques employed in various works are analyzed. Most of the published papers on MS diagnosis using MRI modalities and DL are presented. The most significant challenges and future directions of automated MS diagnosis using MRI modalities and DL techniques are also discussed.


Subject(s)
Deep Learning , Multiple Sclerosis , Artificial Intelligence , Humans , Magnetic Resonance Imaging , Magnetic Resonance Spectroscopy , Multiple Sclerosis/diagnostic imaging
18.
J Imaging ; 7(4)2021 Apr 20.
Article in English | MEDLINE | ID: mdl-34460524

ABSTRACT

Over recent years, deep learning (DL) has established itself as a powerful tool across a broad spectrum of domains in imaging, e.g., [...].

19.
Sensors (Basel) ; 21(11)2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34200287

ABSTRACT

In this paper, a novel medical image encryption method based on multi-mode synchronization of hyper-chaotic systems is presented. The synchronization of hyper-chaotic systems is of great significance in secure communication tasks such as image encryption. Multi-mode synchronization is a novel and highly complex problem, especially in the presence of uncertainty and disturbance. In this work, an adaptive-robust controller is designed for multi-mode synchronization of chaotic systems with variable and unknown parameters, despite bounded disturbance and uncertainty described by a known function, in two modes. In the first mode, one main system is synchronized with several response systems; in the second, the synchronization is circular. It is proven that the two synchronization methods are equivalent. Using Lyapunov's method, we show that the synchronization error and the parameter estimation error converge to zero. New laws for updating the time-varying parameters and for estimating the disturbance and uncertainty bounds are proposed such that system stability is guaranteed. To assess the performance of the proposed synchronization method, various statistical analyses were carried out on encrypted medical images and standard benchmark images. The results show the effective performance of the proposed synchronization technique for medical image encryption in telemedicine applications.
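For illustration only, the sketch below uses a single logistic map as a chaotic keystream XORed with the pixel bytes. The paper's scheme is far more involved (multi-mode synchronization of hyper-chaotic systems with an adaptive-robust controller); this sketch only conveys the basic idea of deriving an encryption keystream from a chaotic sequence.

```python
import numpy as np

def logistic_keystream(length, x0=0.37, r=3.99):
    x, stream = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1 - x)                    # logistic map iteration
        stream[i] = int(x * 256) % 256         # quantize to a byte
    return stream

image = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
key = logistic_keystream(image.size).reshape(image.shape)
cipher = image ^ key                           # encryption
recovered = cipher ^ key                       # decryption with the same keystream
assert np.array_equal(recovered, image)
```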


Subject(s)
Algorithms , Nonlinear Dynamics , Communication , Computer Simulation , Uncertainty
20.
Front Cell Dev Biol ; 9: 813996, 2021.
Article in English | MEDLINE | ID: mdl-35047515

ABSTRACT

Aims: Most blood diseases, such as chronic anemia, leukemia (commonly known as blood cancer), and hematopoietic dysfunction, are caused by environmental pollution, substandard decoration materials, radiation exposure, and long-term use of certain drugs. Thus, it is imperative to classify blood cell images accurately. Most cell classification is based on handcrafted features with machine learning classifiers or on deep convolutional neural network models. However, manual feature extraction is a very tedious process and the results are usually unsatisfactory, while deep convolutional neural networks are usually composed of many layers, each with many parameters, and therefore require considerable time to produce results. Another problem is that medical datasets are relatively small, which may lead to overfitting. Methods: To address these problems, we propose seven models for the automatic classification of blood cells: BCARENet, BCR5RENet, BCMV2RENet, BCRRNet, BCRENet, BCRSNet, and BCNet; the BCNet model is the best of the seven. The backbone of our method is ResNet-18, pre-trained on the ImageNet dataset. To improve performance, we replace the last four layers of the transferred ResNet-18 model with three randomized neural networks (RNNs): RVFL, ELM, and SNN. The final outputs of BCNet are generated by majority voting over the predictions of the three randomized neural networks. Four multi-class metrics are used to evaluate the model. Results: The accuracy, average precision, average F1-score, and average recall are 96.78%, 97.07%, 96.78%, and 96.77%, respectively. Conclusion: Compared with state-of-the-art methods, the proposed BCNet model obtains much better results.
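The majority-voting step over the three randomized heads (RVFL, ELM, SNN in the paper) can be sketched as below; the heads' predictions are placeholders and only the voting mechanics are shown.

```python
import numpy as np

rng = np.random.default_rng(0)
# Predicted class labels (4 classes) from three heads for 10 samples (placeholders).
preds = rng.integers(0, 4, size=(3, 10))

# Per-sample majority vote across the three heads; ties fall to the lowest label.
final = np.array([np.bincount(preds[:, i], minlength=4).argmax()
                  for i in range(preds.shape[1])])
print(final)
```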
