1.
Clin EEG Neurosci ; 55(4): 486-495, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38523306

ABSTRACT

Developing an electroencephalography (EEG)-based brain-computer interface (BCI) system is crucial to enhancing the control of external prostheses by accurately distinguishing various movements through brain signals. Such systems can improve quality of life for people with movement disabilities. This study combined two widely used BCI methods, one-versus-rest common spatial pattern (OVR-CSP) and a convolutional neural network (CNN), to automatically extract features and classify eight different movements of the shoulder, wrist, and elbow from EEG signals. Ten subjects participated in the experiment, and their EEG signals were recorded while performing movements at fast and slow speeds. After preprocessing, the EEG signals were transformed into another space by OVR-CSP and then fed into a CNN architecture consisting of four convolutional layers. In addition, feature vectors extracted after applying OVR-CSP were used as inputs to KNN, SVM, and MLP classifiers, whose performance was compared with that of the CNN. The results demonstrated that classifying the eight movements with the proposed CNN architecture achieved an average accuracy of 97.65% for slow movements and 96.25% for fast movements in the subject-independent model. This method outperformed the other classifiers by a substantial margin and can therefore help improve BCI systems for better control of prostheses.
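A minimal sketch of the OVR-CSP feature-extraction step described above, written with NumPy/SciPy as an illustration; the trial shapes, filter count, and variable names are assumptions, not the authors' exact configuration.

```python
import numpy as np
from scipy.linalg import eigh

def trial_cov(x):
    """Normalized spatial covariance of one trial (channels x samples)."""
    c = x @ x.T
    return c / np.trace(c)

def csp_filters(covs_target, covs_rest, n_filters=4):
    """CSP filters separating one target class from all other classes."""
    ca, cb = np.mean(covs_target, axis=0), np.mean(covs_rest, axis=0)
    # generalized eigenproblem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return vecs[:, picks]                              # (channels, n_filters)

def ovr_csp_features(trials, labels, n_filters=4):
    """Log-variance features from one CSP filter bank per class (one-versus-rest)."""
    covs = np.array([trial_cov(t) for t in trials])
    feats = []
    for cls in np.unique(labels):
        w = csp_filters(covs[labels == cls], covs[labels != cls], n_filters)
        proj = np.einsum('cf,ncs->nfs', w, trials)     # spatially filtered trials
        feats.append(np.log(proj.var(axis=2)))         # (trials, n_filters)
    return np.concatenate(feats, axis=1)               # (trials, classes * n_filters)
```

The resulting feature matrix could then be passed to a CNN or to the KNN/SVM/MLP baselines the abstract compares against.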


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Movement , Neural Networks, Computer , Humans , Electroencephalography/methods , Movement/physiology , Adult , Male , Brain/physiology , Signal Processing, Computer-Assisted , Female , Young Adult , Algorithms , Wrist/physiology
2.
Diagnostics (Basel) ; 13(21)2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37958240

ABSTRACT

Brain tumors may appear in only a small subset of scan slices, so important details can be missed. Furthermore, because labeling is typically a labor-intensive and time-consuming task, only a small number of medical imaging datasets are available for analysis. This research focuses on MRI images of the human brain and proposes a method for accurately segmenting these images to identify the correct location of tumors. In this study, a GAN is utilized as a classification network to detect and segment 3D MRI images. The 3D GAN model provides dense connectivity, leading to rapid network convergence and improved information extraction. Adversarial training in a generative adversarial network can bring the segmentation results closer to the labeled data and thereby improve image segmentation. The BraTS 2021 dataset of 3D images was used to compare two experimental models.
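A minimal sketch of one common way to use adversarial training for segmentation, as the abstract describes: a generator predicts a tumor mask and a discriminator judges (volume, mask) pairs. Built with PyTorch as an illustration; the tiny networks and loss weighting are assumptions, not the paper's exact 3D GAN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy 3D segmentation network: multi-modal MRI volume -> tumor probability map."""
    def __init__(self, in_ch=4, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, out_ch, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Judges whether a (volume, mask) pair comes from the labeled data."""
    def __init__(self, in_ch=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 1, 4, stride=2, padding=1))

    def forward(self, vol, mask):
        return self.net(torch.cat([vol, mask], dim=1))

def generator_loss(disc, vol, pred, target, adv_w=0.1):
    """Segmentation loss plus an adversarial term pulling predictions toward the labels."""
    logits = disc(vol, pred)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    seg = F.binary_cross_entropy(pred, target)
    return seg + adv_w * adv
```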

3.
Sensors (Basel) ; 23(17)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37688038

ABSTRACT

Segmenting the liver and liver tumors in computed tomography (CT) images is an important step toward quantifiable biomarkers for computer-aided decision-making and precise medical diagnosis. Radiologists and specialized physicians use CT images to identify and classify the liver and its tumors. Because neighboring internal organs such as the heart, spleen, stomach, and kidneys have similar shape, texture, and intensity values, visual recognition of the liver and delineation of tumors is difficult. Furthermore, visual identification of liver tumors is time-consuming, complicated, and error-prone, and incorrect diagnosis and segmentation can endanger the patient's life. Many automatic and semi-automatic methods based on machine learning have recently been proposed for liver recognition and tumor segmentation, but poor precision, slow processing, and limited reliability remain problems. This paper presents a novel deep learning-based technique for segmenting liver tumors and identifying the liver in CT images. Trained on the LiTS17 database, the proposed network comprises four Chebyshev graph convolution layers and a fully connected layer and can accurately segment the liver and liver tumors. The accuracy, Dice coefficient, mean IoU, sensitivity, precision, and recall obtained with the proposed method on the LiTS17 dataset are approximately 99.1%, 91.1%, 90.8%, 99.4%, 99.4%, and 91.2%, respectively. The method was also evaluated in a noisy environment and withstood a wide range of signal-to-noise ratios (SNRs); at SNR = -4 dB, the accuracy of liver segmentation remained around 90%. The proposed model obtained favorable results compared with previous research and is expected to assist radiologists and specialist physicians in the near future.
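A minimal sketch of a Chebyshev graph convolution layer (an order-K polynomial of the scaled graph Laplacian), the building block the abstract names, written in plain PyTorch; the layer sizes and the way CT pixels are turned into graph nodes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    """Chebyshev graph convolution: sum_k theta_k * T_k(L~) x."""
    def __init__(self, in_feats, out_feats, K=3):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(K, in_feats, out_feats) * 0.01)

    def forward(self, x, lap_scaled):
        # x: (nodes, in_feats); lap_scaled: (nodes, nodes) = 2 L / lambda_max - I
        tx_prev, tx = x, lap_scaled @ x            # T_0(L~) x and T_1(L~) x
        out = tx_prev @ self.theta[0]
        if self.theta.shape[0] > 1:
            out = out + tx @ self.theta[1]
        for k in range(2, self.theta.shape[0]):
            tx_next = 2 * lap_scaled @ tx - tx_prev  # Chebyshev recurrence
            out = out + tx_next @ self.theta[k]
            tx_prev, tx = tx, tx_next
        return torch.relu(out)
```

Stacking four such layers followed by a fully connected layer would mirror the structure the abstract outlines, given a node graph built from the CT volume.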


Subject(s)
Liver Neoplasms , Humans , Liver Neoplasms/diagnostic imaging , Kidney , Algorithms , Computer Systems
4.
Comput Intell Neurosci ; 2023: 9379618, 2023.
Article in English | MEDLINE | ID: mdl-36688224

ABSTRACT

The vast majority of sleep disturbances are caused by various types of sleep arousal. To diagnose sleep disorders and prevent health problems such as cardiovascular disease and cognitive impairment, sleep arousals must be detected accurately. Consequently, sleep specialists spend considerable time and effort analyzing polysomnography (PSG) recordings to determine the level of arousal during sleep. An automated sleep arousal detection system based on PSG would therefore considerably benefit clinicians. We quantify EEG-ECG signals using Lyapunov exponents, fractal features, and wavelet transforms to identify sleep stages and arousal disorders. In this paper, an efficient hybrid-learning method is introduced for the first time to detect and assess arousal incidents. A modified drone squadron optimization (mDSO) algorithm is used to tune a support vector machine (SVM) with a radial basis function (RBF) kernel. The EEG-ECG signals are preprocessed samples from the SHHS sleep dataset and the PhysioBank Challenge 2018. Compared with traditional methods for identifying sleep disorders, the proposed correlation-based combination of physiological signals performs considerably better. With the proposed model, the average error rate was below 2% and 7% for the two-class and four-class problems, respectively, and the five sleep stages were classified with 92.3% accuracy. In clinical trials of sleep disorders, the hybrid-learning model based on EEG-ECG signal correlation features is effective in detecting arousals.
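A minimal sketch of tuning an RBF-kernel SVM on extracted EEG-ECG features; a standard randomized search stands in here for the paper's modified drone squadron optimization (mDSO), and the placeholder feature matrix X and labels y are assumed to come from the wavelet, fractal, and Lyapunov descriptors described above.

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(200, 40)            # placeholder feature vectors per epoch
y = np.random.randint(0, 2, size=200)   # placeholder arousal / non-arousal labels

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = RandomizedSearchCV(
    pipe,
    param_distributions={"svc__C": loguniform(1e-1, 1e3),
                         "svc__gamma": loguniform(1e-4, 1e0)},
    n_iter=30, cv=5, scoring="accuracy", random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```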


Subject(s)
Sleep Arousal Disorders , Sleep Wake Disorders , Humans , Electroencephalography/methods , Sleep/physiology , Polysomnography/methods , Sleep Wake Disorders/diagnosis
5.
Brain Behav ; 12(11): e2763, 2022 11.
Article in English | MEDLINE | ID: mdl-36196623

ABSTRACT

INTRODUCTION: Evidence shows that an epileptic condition can be detected in EEG data seconds before it occurs. To reduce the long-term mortality and morbidity associated with epileptic seizures, it is critical to make an initial diagnosis, uncover underlying causes, and avoid applicable risk factors. Progress in detecting the onset of epileptic seizures can ensure that seizures and the resulting damage are detectable at the time of manifestation. Previous seizure detection models suffered from large feature sets, the lack of an appropriate signal descriptor, and time-consuming analysis, all of which led to uncertainty and differing interpretations. Deep learning has recently made tremendous progress in categorizing and detecting epilepsy. METHOD: This work proposes an effective classification strategy in response to these issues. The discrete wavelet transform (DWT) is used to decompose the EEG signal, and a deep convolutional neural network (DCNN) diagnoses epileptic seizures in the first phase. Using a medium-weight DCNN (mw-DCNN) architecture, we apply a preprocessing phase to improve the decision-making method. The proposed approach was tested on EEG signals from the CHB-MIT Scalp EEG database. RESULT: The results reveal that the mw-DCNN algorithm produces correct classification results under various conditions. To address the uncertainty challenge, K-fold cross-validation was used to assess the algorithm's repeatability at the test level, and accuracies in the range of 99%-100% were obtained. CONCLUSION: The suggested structure can assist medical specialists in analyzing the EEG signals of epileptic seizures more precisely.
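A minimal sketch of the preprocessing idea described in METHOD: decompose each EEG epoch with a discrete wavelet transform and feed the stacked sub-band coefficients to a small CNN. The wavelet family, decomposition level, truncation, and the toy network are illustrative assumptions, not the paper's mw-DCNN configuration.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_subbands(signal, wavelet="db4", level=4):
    """Approximation + detail coefficients, truncated to a common length."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    n = min(len(c) for c in coeffs)
    return np.stack([c[:n] for c in coeffs])      # (level + 1, n)

eeg_epoch = np.random.randn(256 * 4)              # placeholder 4 s epoch at 256 Hz
x = torch.tensor(dwt_subbands(eeg_epoch), dtype=torch.float32).unsqueeze(0)

cnn = nn.Sequential(                              # toy 1-D CNN over the sub-bands
    nn.Conv1d(x.shape[1], 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))
print(cnn(x).shape)                               # logits: seizure vs. non-seizure
```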


Subject(s)
Epilepsy , Seizures , Humans , Seizures/diagnosis , Epilepsy/diagnosis , Neural Networks, Computer , Electroencephalography/methods , Algorithms , Signal Processing, Computer-Assisted
6.
Sci Rep ; 12(1): 10282, 2022 06 18.
Article in English | MEDLINE | ID: mdl-35717542

ABSTRACT

Because emotions affect interactions, interpretations, and decisions, automatic detection and analysis of human emotions from EEG signals plays an important role in the treatment of psychiatric diseases. However, the low spatial resolution of EEG recordings poses a challenge. To overcome this problem, in this paper we model each emotion by mapping from scalp sensors to brain sources using a Bernoulli-Laplace-based Bayesian model. The standard low-resolution electromagnetic tomography (sLORETA) method is used to initialize the source signals in this algorithm. Finally, a dynamic graph convolutional neural network (DGCNN) is used to classify emotional EEG, with the sources from the proposed localization model serving as the underlying graph nodes. In the proposed method, the relationships between the EEG source signals are encoded in the DGCNN adjacency matrix. Experiments on our EEG dataset recorded at the Brain-Computer Interface Research Laboratory, University of Tabriz, as well as on the publicly available SEED and DEAP datasets, show that brain source modeling by the proposed algorithm significantly improves the accuracy of emotion recognition, achieving a classification accuracy of 99.25% for the two classes of positive and negative emotions. These results represent an absolute 1-2% improvement in classification accuracy over existing approaches in both subject-dependent and subject-independent scenarios.
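A minimal sketch of the DGCNN idea the abstract mentions: a learnable adjacency matrix over the source nodes, normalized and used for graph convolution on node features. Written in PyTorch as an illustration; the node count, feature size, and classifier head are assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class DynamicGraphConv(nn.Module):
    def __init__(self, n_nodes, in_feats, out_feats):
        super().__init__()
        self.adj = nn.Parameter(torch.rand(n_nodes, n_nodes))   # learned connectivity
        self.lin = nn.Linear(in_feats, out_feats)

    def forward(self, x):                       # x: (batch, nodes, in_feats)
        a = torch.relu(self.adj)
        a = a + torch.eye(a.shape[0], device=x.device)           # add self-loops
        d_inv_sqrt = a.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]   # D^-1/2 A D^-1/2
        return torch.relu(self.lin(a_norm @ x))

model = nn.Sequential(DynamicGraphConv(n_nodes=62, in_feats=5, out_feats=16),
                      nn.Flatten(), nn.Linear(62 * 16, 2))       # positive vs. negative
print(model(torch.randn(8, 62, 5)).shape)                        # (8, 2)
```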


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Bayes Theorem , Electroencephalography/methods , Emotions , Humans , Neural Networks, Computer
7.
J Med Signals Sens ; 11(4): 237-252, 2021.
Article in English | MEDLINE | ID: mdl-34820296

ABSTRACT

BACKGROUND: One of the common limitations in the treatment of cancer is the difficulty of early detection of the disease. The customary medical practice of cancer examination is a visual examination by the dermatologist followed by an invasive biopsy. Nonetheless, this diagnostic approach is time-consuming and prone to human error. An automated machine learning model is essential to enable fast diagnosis and early treatment. OBJECTIVE: The key objective of this study is to establish a fully automatic model that assists dermatologists in the skin cancer management process and improves skin lesion classification accuracy. METHOD: The work implements a Deep Convolutional Generative Adversarial Network (DCGAN) using the Python-based deep learning library Keras. We incorporated effective image filtering and enhancement algorithms, such as the bilateral filter, to enhance feature detection and extraction during training. The DCGAN required additional fine-tuning to yield better results. Hyperparameter optimization was used to select the best-performing combinations of several network hyperparameters. We decreased the learning rate from the default 0.001 to 0.0002 and the momentum for the Adam optimization algorithm from 0.9 to 0.5 to reduce the instability issues associated with GAN models, and at each iteration the weights of the discriminative and generative networks were updated to balance the loss between them. We address a binary classification problem with the two classes present in our dataset, namely benign and malignant. Well-known metrics such as the area under the receiver operating characteristic curve and the confusion matrix were used to evaluate the results and classification accuracy. RESULTS: The model generated very plausible lesions during the early stages of the experiment, and we could easily visualize a smooth transition in resolution along the way. We achieved an overall test accuracy of 93.5% after fine-tuning most parameters of our network. CONCLUSION: This classification model provides spatial intelligence that could be useful for cancer risk prediction in the future. Unfortunately, it remains difficult to generate high-quality synthetic images that closely resemble the real samples and to compare different classification methods, given that some methods use non-public datasets for training.
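A minimal Keras sketch of the optimizer settings described in METHOD (Adam with learning rate 0.0002 and momentum term 0.5) applied to a DCGAN discriminator; the network body and input size are placeholders, not the authors' exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

# GAN-stabilizing optimizer settings from the abstract: lr 0.001 -> 0.0002, beta_1 0.9 -> 0.5
opt = keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5)

discriminator = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),                           # placeholder lesion patch size
    layers.Conv2D(32, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
    layers.Conv2D(64, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),                    # real vs. generated lesion
])
discriminator.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])
```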

8.
J Neurosci Methods ; 339: 108740, 2020 06 01.
Article in English | MEDLINE | ID: mdl-32353472

ABSTRACT

In recent years, multiple noninvasive imaging modalities have been used to develop a better understanding of human brain function, including positron emission tomography, single-photon emission computed tomography, and functional magnetic resonance imaging, all of which provide brain images with millimeter spatial resolution. Despite this good spatial resolution, the temporal resolution of these methods is poor, on the order of seconds. Scalp electroencephalography recordings can instead be used to solve the inverse problem and specify the locations of the dominant sources of brain activity. In this paper, EEG source localization methods, the diagnosis of brain abnormalities using common EEG source localization methods, and the effect of the head model on EEG source imaging results are reviewed. This review presents evidence that motivates further research using EEG source localization methods.
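A minimal sketch of the inverse problem the review discusses: given a lead-field (forward) matrix L and scalp measurements x, a regularized minimum-norm estimate recovers source amplitudes. The random lead field, noise level, and regularization value are placeholders, and this is only one of the many localization methods the review covers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 500
L = rng.standard_normal((n_sensors, n_sources))     # placeholder forward model
s_true = np.zeros(n_sources); s_true[100] = 1.0     # one active source
x = L @ s_true + 0.01 * rng.standard_normal(n_sensors)

lam = 1e-2                                          # Tikhonov regularization
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), x)
print(int(np.argmax(np.abs(s_hat))))                # strongest estimated source index
```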


Subject(s)
Brain Diseases , Electroencephalography , Brain/diagnostic imaging , Brain Mapping , Humans , Magnetic Resonance Imaging
9.
Comput Biol Med ; 112: 103365, 2019 09.
Article in English | MEDLINE | ID: mdl-31374349

ABSTRACT

Stem cells are a group of competent cells capable of self-renewal and of differentiating into osteogenic, chondrogenic, and adipogenic lineages. These cells offer the possibility of successfully treating patients. During differentiation into adipose tissue, a large number of lipid droplets normally accumulate in these cells, which can be seen through oil red O staining. Although the oil red O staining technique is regularly used for assessing the degree of differentiation, its validity for quantitative studies has not yet been established. Lipid droplet counting is useful in differentiation studies and saves time and costs once automated. In this research, to demonstrate the differentiation of mesenchymal stem cells (MSCs) into adipocytes, microscopic images of the cells were acquired. The images were then divided into square patches, and the lipid droplets were annotated through single-point annotation. The proposed network, based on deep learning, is a fully convolutional regression network that processes an image with a small receptive field. This method not only counts the lipid droplets but also generates a count map. The average counting accuracy is 94%, which is higher than that of state-of-the-art methods. This allows cell biologists to check the percentage of differentiation in different samples. Also, with a count map, it is possible to observe regions with high concentrations of lipid droplets without oil red O staining and thus examine total adipocyte differentiation. The contribution of this paper is the first use of a deep learning algorithm for processing intracellular images.
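A minimal sketch of the counting idea: a small fully convolutional regression network maps an image patch to a density ("count") map whose sum approximates the number of lipid droplets. Written in PyTorch as an illustration; the layer sizes and patch size are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class CountingFCN(nn.Module):
    """Fully convolutional regression network: image patch -> non-negative count map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.ReLU())

    def forward(self, x):
        return self.body(x)                        # (batch, 1, H, W)

model = CountingFCN()
patch = torch.rand(1, 3, 128, 128)                 # placeholder microscopy patch
density = model(patch)
print(float(density.sum()))                        # predicted droplet count for the patch
```

Training such a network against Gaussian-blurred point annotations is a common way to turn single-point labels into a regression target, which matches the single-point annotation the abstract describes.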


Subject(s)
Adipocytes , Bone Marrow Cells , Cell Differentiation , Deep Learning , Image Processing, Computer-Assisted , Lipid Droplets/metabolism , Mesenchymal Stem Cells , Adipocytes/cytology , Adipocytes/metabolism , Animals , Bone Marrow Cells/cytology , Bone Marrow Cells/metabolism , Mesenchymal Stem Cells/cytology , Mesenchymal Stem Cells/metabolism , Mice
10.
Ren Fail ; 41(1): 57-68, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30747036

ABSTRACT

BACKGROUND AND OBJECTIVE: Renal diseases, such as nephritis and nephropathy, are very harmful to human health. Accordingly, achieving early diagnosis and improving treatment for kidney disorders is an important issue. Nevertheless, the clues from clinical data such as biochemistry, serological examination, and radiological studies are quite indirect and limited. There is no doubt that pathological examination of the kidney supplies direct evidence. A greater understanding of image processing techniques for renal diagnosis is needed to optimize treatment and patient care. METHODS: This study systematically reviews publications that have applied image processing methods to pathological microscopic images for renal diagnosis. RESULTS: Nine included studies revealed image analysis techniques for the diagnosis of renal abnormalities on pathological microscopic images; the studies cluster as follows: glomeruli segmentation and analysis of the glomerular basement membrane (55%), blood vessel and tubule classification and detection (22%), and grading of renal cell carcinomas (22%). CONCLUSIONS: A medical image analysis method should be auto-adaptive and not depend on external human intervention. In addition, since medical systems require high accuracy and reliability, clinical validation is highly recommended. New high-quality studies based on the Moore neighborhood contour tracking method for glomeruli segmentation and on powerful texture analysis techniques such as the local binary pattern are recommended.
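A minimal sketch of the local binary pattern (LBP) texture descriptor the review recommends, computed with scikit-image on a placeholder grayscale micrograph; the radius, point count, and histogram binning are illustrative choices, not a prescription from the review.

```python
import numpy as np
from skimage.feature import local_binary_pattern

image = (np.random.rand(256, 256) * 255).astype(np.uint8)   # placeholder microscopy image
radius, n_points = 2, 16
lbp = local_binary_pattern(image, n_points, radius, method="uniform")
hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
print(hist)                                                  # texture feature vector
```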


Subject(s)
Carcinoma, Renal Cell/pathology , Image Processing, Computer-Assisted/methods , Kidney Neoplasms/pathology , Kidney/diagnostic imaging , Nephritis/pathology , Algorithms , Biopsy , Carcinoma, Renal Cell/diagnostic imaging , Humans , Kidney/blood supply , Kidney/cytology , Kidney/pathology , Kidney Neoplasms/diagnostic imaging , Microscopy/methods , Neoplasm Grading/methods , Nephritis/diagnostic imaging , Reproducibility of Results