1.
Front Comput Neurosci ; 15: 650050, 2021.
Article in English | MEDLINE | ID: mdl-33897397

ABSTRACT

Over the last few decades, the electroencephalogram (EEG) has become one of the most vital tools used by physicians to diagnose several neurological disorders of the human brain and, in particular, to detect seizures. Because of the unpredictable nature of epileptic seizures and their impact on patients' quality of life, precise diagnosis of epilepsy is essential. This article therefore proposes a novel deep-learning approach for detecting seizures in pediatric patients, based on the classification of raw, minimally pre-processed multichannel EEG recordings. The approach takes advantage of the automatic feature-learning capabilities of a two-dimensional deep convolutional autoencoder (2D-DCAE) linked to a neural-network-based classifier, forming a unified system trained in a supervised way to best discriminate between ictal and interictal brain-state signals. To test and evaluate the approach, two models were designed and assessed using three different EEG segment lengths and a 10-fold cross-validation scheme. Across five evaluation metrics, the best-performing model was a supervised deep convolutional autoencoder (SDCAE) using a bidirectional long short-term memory (Bi-LSTM) based classifier and an EEG segment length of 4 s. On the public dataset collected by the Children's Hospital Boston (CHB) and the Massachusetts Institute of Technology (MIT), this model obtained 98.79 ± 0.53% accuracy, 98.72 ± 0.77% sensitivity, 98.86 ± 0.53% specificity, 98.86 ± 0.53% precision, and an F1-score of 98.79 ± 0.53%. Based on these results, the proposed approach is among the most effective seizure detection methods reported for this dataset when compared with existing state-of-the-art methods.
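
As a rough illustration of the architecture described in this abstract, the sketch below wires a 2D convolutional autoencoder to a Bi-LSTM classification head in Keras and trains both objectives jointly. The channel count (23, typical of CHB-MIT), the 4 s / 256 Hz segment length, and all layer sizes are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch of an SDCAE-style model: 2D convolutional encoder/decoder plus a
# Bi-LSTM classifier head, trained jointly in a supervised fashion.
import tensorflow as tf
from tensorflow.keras import layers, Model

N_CHANNELS, N_SAMPLES = 23, 4 * 256          # 4 s segments at 256 Hz (assumed)
inputs = layers.Input(shape=(N_CHANNELS, N_SAMPLES, 1))

# --- 2D convolutional encoder ---
x = layers.Conv2D(16, (3, 5), padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D((1, 4))(x)
x = layers.Conv2D(32, (3, 5), padding="same", activation="relu")(x)
encoded = layers.MaxPooling2D((1, 4))(x)

# --- decoder branch (reconstruction objective) ---
y = layers.Conv2DTranspose(32, (3, 5), strides=(1, 4), padding="same", activation="relu")(encoded)
y = layers.Conv2DTranspose(16, (3, 5), strides=(1, 4), padding="same", activation="relu")(y)
reconstruction = layers.Conv2D(1, (3, 5), padding="same", name="reconstruction")(y)

# --- Bi-LSTM classifier branch (ictal vs. interictal) ---
z = layers.Reshape((N_CHANNELS, -1))(encoded)          # treat channels as time steps
z = layers.Bidirectional(layers.LSTM(64))(z)
label = layers.Dense(1, activation="sigmoid", name="label")(z)

model = Model(inputs, [reconstruction, label])
model.compile(optimizer="adam",
              loss={"reconstruction": "mse", "label": "binary_crossentropy"},
              metrics={"label": "accuracy"})
model.summary()
```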

2.
Front Artif Intell ; 4: 636234, 2021.
Article in English | MEDLINE | ID: mdl-33748748

ABSTRACT

Soil moisture (SM) plays a significant role in determining the probability of flooding in a given area. Currently, SM is most commonly modeled using physically based numerical hydrologic models. Modeling the natural processes that take place in the soil is difficult and requires assumptions; moreover, hydrologic model runtime is strongly affected by the extent and resolution of the study domain. In this study, we propose a data-driven modeling approach using Deep Learning (DL) models. Different types of DL algorithms serve different purposes: the Convolutional Neural Network (CNN) algorithm is well suited for capturing and learning spatial patterns, while the Long Short-Term Memory (LSTM) algorithm is designed to exploit time-series information and learn from past observations. ConvLSTM, a DL algorithm that combines the capabilities of CNN and LSTM, was recently developed. In this study, we investigate the applicability of the ConvLSTM algorithm to predicting SM in a study area located in south Louisiana in the United States. The study reveals that ConvLSTM significantly outperformed CNN in predicting SM. We tested the performance of ConvLSTM-based models using combinations of different sets of predictors and different LSTM sequence lengths. The results show that ConvLSTM models can predict SM with a mean areal Root Mean Squared Error (RMSE) of 2.5% and a mean areal correlation coefficient of 0.9 for our study area. ConvLSTM models can also provide predictions between discrete SM observations, making them potentially useful for applications such as filling observational gaps between satellite overpasses.
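
The following minimal Keras sketch shows the kind of ConvLSTM model the abstract refers to: a stack of ConvLSTM2D layers that consumes a sequence of gridded predictor fields and outputs the soil-moisture map for the next time step. The grid size, sequence length, and number of predictors are placeholder assumptions; the study's actual domain, predictors, and hyperparameters may differ.

```python
# Hedged sketch of a ConvLSTM-based soil-moisture predictor.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, N_PRED = 6, 32, 32, 4       # 6 past steps, 32x32 grid, 4 predictor fields (assumed)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, H, W, N_PRED)),
    layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=True),
    layers.BatchNormalization(),
    layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=False),
    layers.Conv2D(1, kernel_size=1, activation="linear"),   # SM map for the next time step
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Synthetic usage: predict the next SM field from a sequence of gridded predictors.
X = np.random.rand(8, SEQ_LEN, H, W, N_PRED).astype("float32")
y = np.random.rand(8, H, W, 1).astype("float32")
model.fit(X, y, epochs=1, batch_size=4, verbose=0)
```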

3.
IEEE Trans Biomed Circuits Syst ; 14(4): 852-866, 2020 08.
Article in English | MEDLINE | ID: mdl-32746336

ABSTRACT

This paper proposes novel methods for making embryonic bio-inspired hardware resilient to faults through self-healing, fault prediction, and fault-prediction-assisted self-healing. The proposed self-healing recovers a faulty embryonic cell through innovative use of healthy cells. Experiments show that self-healing is effective, but the hardware takes a considerable amount of time to recover from a fault that occurs suddenly and without forewarning. To overcome this delay, novel deep-learning-based formulations are proposed for fault prediction. The proposed self-healing technique is then deployed along with the proposed fault-prediction methods to gauge the accuracy and delay of the embryonic hardware. The fault-prediction and self-healing methods have been implemented in VHDL on an FPGA. The proposed fault predictions achieve high accuracy with low training time: up to 99.36% accuracy with a training time of 2.16 min. The area overhead of the proposed self-healing method is 34%, and the fault-recovery percentage is 75%. To the best of our knowledge, this is the first such work on embryonic hardware, and it is expected to open a new frontier in fault-prediction-assisted self-healing for embryonic systems.


Subject(s)
Biomimetics , Machine Learning , Models, Biological , Signal Processing, Computer-Assisted , Equipment Failure Analysis , Neural Networks, Computer
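
The paper implements its fault predictors in VHDL on an FPGA; purely to illustrate the idea of fault-prediction-assisted self-healing, the Python sketch below shows how a small recurrent model could flag an impending fault from a window of health-monitor readings and trigger reconfiguration ahead of the failure. Window length, signal count, and the decision threshold are illustrative assumptions, not the paper's design.

```python
# Hedged illustration of sequence-based fault prediction feeding a self-healing trigger.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, N_SIGNALS = 32, 8            # 32 past samples of 8 monitored signals (assumed)

predictor = models.Sequential([
    layers.Input(shape=(WINDOW, N_SIGNALS)),
    layers.LSTM(16),
    layers.Dense(1, activation="sigmoid"),   # probability that a fault occurs soon
])
predictor.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy usage: when the predicted fault probability crosses a threshold, the
# self-healing logic would reconfigure the embryonic cell before the failure.
window = np.random.rand(1, WINDOW, N_SIGNALS).astype("float32")
if predictor.predict(window, verbose=0)[0, 0] > 0.5:
    print("Fault expected: trigger reconfiguration of the embryonic cell")
```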
4.
IEEE Trans Biomed Circuits Syst ; 14(2): 209-220, 2020 04.
Article in English | MEDLINE | ID: mdl-31796417

ABSTRACT

The task of epileptic focus localization receives great attention due to its role in effective epilepsy surgery. Clinicians depend heavily on intracranial EEG (iEEG) data to make surgical decisions for epileptic subjects suffering from uncontrollable seizures. This surgery usually aims to remove the epileptogenic region, which requires precise characterization of that area using the EEG recordings. In this paper, we propose two deep-learning-based methods for accurate automatic epileptic focus localization from non-stationary EEG recordings. The first method is based on semi-supervised learning: a deep convolutional autoencoder is trained, and the pre-trained encoder is then used with a multi-layer perceptron as a classifier. The goal is to determine which EEG signal location is responsible for the epileptic activity. The second method implements an unsupervised learning scheme by combining a deep convolutional variational autoencoder with the K-means algorithm to cluster the iEEG signals into two distinct clusters based on the seizure source. The proposed methods automate and integrate the feature-extraction and classification processes, instead of extracting features manually as done in previous studies. Dimensionality reduction is achieved using the autoencoder, while the important spatio-temporal features are extracted from the EEG recordings using the convolutional layers. Moreover, we implemented the inference network of the semi-supervised model on an FPGA. Our experiments demonstrate high classification accuracy and clustering performance in localizing the epileptic focus compared with the state of the art.


Subject(s)
Deep Learning , Electroencephalography/methods , Epilepsy/diagnosis , Signal Processing, Computer-Assisted , Algorithms , Humans , Seizures/diagnosis , Unsupervised Machine Learning
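
A minimal sketch of the semi-supervised pipeline described above: a 1D convolutional autoencoder is pre-trained on iEEG segments, and its encoder is then reused with a small MLP head to classify the seizure source. Segment length and layer sizes are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch: autoencoder pre-training followed by an encoder + MLP classifier.
import tensorflow as tf
from tensorflow.keras import layers, Model

SEG_LEN = 1024                                   # iEEG segment length (assumed)
seg = layers.Input(shape=(SEG_LEN, 1))

# 1D convolutional encoder / decoder
e = layers.Conv1D(16, 7, strides=4, padding="same", activation="relu")(seg)
e = layers.Conv1D(32, 7, strides=4, padding="same", activation="relu")(e)
d = layers.Conv1DTranspose(16, 7, strides=4, padding="same", activation="relu")(e)
d = layers.Conv1DTranspose(1, 7, strides=4, padding="same")(d)

autoencoder = Model(seg, d)
autoencoder.compile(optimizer="adam", loss="mse")   # stage 1: unsupervised pre-training

# Stage 2: reuse the pre-trained encoder with an MLP head for focus classification.
encoder = Model(seg, e)
h = layers.Flatten()(encoder.output)
h = layers.Dense(64, activation="relu")(h)
focal = layers.Dense(1, activation="sigmoid")(h)
classifier = Model(encoder.input, focal)
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```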
5.
IEEE Trans Biomed Circuits Syst ; 13(5): 804-813, 2019 10.
Article in English | MEDLINE | ID: mdl-31331897

ABSTRACT

Epilepsy is one of the world's most common neurological diseases, and early prediction of incoming seizures has a great influence on the lives of epileptic patients. In this paper, a novel patient-specific seizure prediction technique based on deep learning and applied to long-term scalp electroencephalogram (EEG) recordings is proposed. The goal is to accurately detect the preictal brain state and differentiate it from the prevailing interictal state as early as possible, in a manner suitable for real-time use. The feature-extraction and classification processes are combined into a single automated system, and the raw EEG signal is used as input without any preprocessing, which further reduces the computations. Four deep-learning models are proposed to extract the most discriminative features, enhancing classification accuracy and prediction time. The proposed approach takes advantage of convolutional neural networks to extract significant spatial features from different scalp positions and of recurrent neural networks to anticipate the incidence of seizures earlier than current methods. A semi-supervised approach based on transfer learning is introduced to improve the optimization process, and a channel-selection algorithm is proposed to select the most relevant EEG channels, making the proposed system a good candidate for real-time usage. An effective test method is utilized to ensure robustness. The highest achieved accuracy of 99.6%, the lowest false-alarm rate of 0.004 h⁻¹, and the very early seizure prediction time of 1 h make the proposed method the most efficient among the state of the art.


Subject(s)
Deep Learning , Electroencephalography , Models, Neurological , Seizures/physiopathology , Signal Processing, Computer-Assisted , Adolescent , Child , Child, Preschool , Female , Humans , Male
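
A hedged sketch of the CNN-plus-recurrent structure the abstract describes: a convolutional block extracts spatial features from each short EEG sub-window, and an LSTM aggregates the sequence of sub-windows into a preictal/interictal decision. Channel count, window sizes, and layer widths are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch: per-window CNN feature extraction followed by temporal aggregation.
import tensorflow as tf
from tensorflow.keras import layers, models

N_WIN, N_CH, WIN_SAMPLES = 10, 23, 256     # 10 sub-windows of 1 s at 256 Hz, 23 channels (assumed)

model = models.Sequential([
    layers.Input(shape=(N_WIN, N_CH, WIN_SAMPLES, 1)),
    layers.TimeDistributed(layers.Conv2D(16, (3, 5), activation="relu", padding="same")),
    layers.TimeDistributed(layers.MaxPooling2D((1, 4))),
    layers.TimeDistributed(layers.Conv2D(32, (3, 5), activation="relu", padding="same")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    layers.LSTM(64),                              # temporal aggregation across sub-windows
    layers.Dense(1, activation="sigmoid"),        # P(preictal)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.FalsePositives()])
```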
6.
IEEE Trans Image Process ; 20(12): 3566-79, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21521668

ABSTRACT

Hysteresis thresholding is a method that offers enhanced object detection but, due to its recursive nature, is time consuming and requires considerable memory resources; for this reason it is usually avoided in streaming processors with limited memory. We propose two versions of a memory-efficient and fast architecture for hysteresis thresholding: a high-accuracy pixel-based architecture and a faster block-based one, at the expense of some loss in accuracy. Both designs couple thresholding with connected component analysis and feature extraction in a single pass over the image. Unlike queue-based techniques, the proposed scheme treats candidate pixels almost as foreground until objects are complete; a decision is then made to keep or discard these pixels. This allows processing on the fly, avoiding additional passes for handling candidate pixels and extracting object features. Moreover, labels are reused, so only one row of compact labels is buffered. Both architectures are implemented in MATLAB and VHDL. Simulation results on a set of real and synthetic images show that the execution speed increases on average by up to 24× for the pixel-based design and 52× for the block-based design compared with state-of-the-art techniques, while memory requirements are drastically reduced, by about 99%.
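
For reference, the snippet below shows what hysteresis thresholding combined with connected-component analysis computes, using a simple two-pass scipy implementation; the paper's contribution is a streaming, single-pass, label-reusing hardware architecture, which this sketch does not attempt to reproduce. The threshold values are illustrative.

```python
# Reference (non-streaming) hysteresis thresholding with connected components.
import numpy as np
from scipy import ndimage

def hysteresis_threshold(img, low=0.3, high=0.7):
    """Keep weak pixels (>= low) only if their connected component contains
    at least one strong pixel (>= high)."""
    weak = img >= low
    strong = img >= high
    labels, n = ndimage.label(weak)                 # connected components of the weak mask
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True          # components touching a strong pixel
    keep[0] = False                                 # background label
    return keep[labels]

# Toy usage on a synthetic image.
image = np.random.rand(64, 64)
mask = hysteresis_threshold(image)
objects, count = ndimage.label(mask)
print(f"{count} detected objects")
```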
