1.
IEEE J Biomed Health Inform ; 26(4): 1614-1627, 2022 04.
Article in English | MEDLINE | ID: mdl-34516380

ABSTRACT

Optical coherence tomography (OCT) has been identified as a non-invasive and inexpensive imaging modality for discovering potential biomarkers for Alzheimer's diagnosis and progression assessment. Current hypotheses posit that the thickness of the retinal layers, which can be measured in OCT scans, is an effective biomarker for the presence of Alzheimer's disease. As a logical first step, this work concentrates on the accurate segmentation of the retinal layers to isolate them for further analysis. This paper proposes a generative adversarial network (GAN) that jointly learns to increase image resolution for higher clarity and to segment the retinal layers. We propose a multi-stage, multi-discriminator generative adversarial network (MultiSDGAN) specifically for super-resolution and segmentation of retinal layers in OCT scans. The resulting generator is adversarially trained against multiple discriminator networks at multiple stages. By satisfying all the discriminators at multiple scales, we aim to avoid the early saturation of generator training that leads to poor segmentation accuracy and to enhance the OCT domain translation process. We also investigate incorporating the Dice loss and the Structural Similarity Index Measure (SSIM) as additional loss terms to specifically improve the proposed GAN architecture's segmentation and super-resolution performance, respectively. The ablation study conducted on our data set suggests that the proposed MultiSDGAN with ten-fold cross-validation (10-CV) reduces the equal error rate, with 44.24% and 34.09% relative improvements, respectively (p-values of the improvement tests < .01). Furthermore, our experimental results demonstrate that adding the new terms to the loss function improves the segmentation results significantly, with a relative improvement of 31.33% (p-value < .01).
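As an illustration of the kind of objective described above, the following is a minimal sketch (not the authors' MultiSDGAN code) of a generator loss combining adversarial terms from multiple discriminators with a Dice term for segmentation and a simplified, single-window SSIM term for super-resolution; all function and variable names are assumptions.

```python
# Hedged sketch of a combined generator objective: multi-discriminator adversarial
# terms plus a Dice term (segmentation) and an SSIM term (super-resolution).
# Names and weights are illustrative, not the authors' implementation.
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified (single-window, global) SSIM between two images in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def generator_loss(disc_scores, seg_pred, seg_gt, sr_pred, sr_gt, w_dice=1.0, w_ssim=1.0):
    """disc_scores: list of discriminator logits on generated images, one per stage/scale;
    the generator tries to push all of them toward the 'real' label."""
    adv = sum(F.binary_cross_entropy_with_logits(s, torch.ones_like(s))
              for s in disc_scores) / len(disc_scores)
    return adv + w_dice * dice_loss(seg_pred, seg_gt) + w_ssim * (1.0 - global_ssim(sr_pred, sr_gt))
```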


Subject(s)
Alzheimer Disease , Tomography, Optical Coherence , Alzheimer Disease/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Retina/diagnostic imaging
2.
Biomed Res Int ; 2018: 9796238, 2018.
Article in English | MEDLINE | ID: mdl-29662908

ABSTRACT

A major predicament for Intensive Care Unit (ICU) patients is the lack of consistent and effective means of communication; patients rate most communication sessions as difficult and unsuccessful. This, in turn, can cause distress, unrecognized pain, anxiety, and fear. We therefore designed a portable BCI system for ICU communication (BCI4ICU) optimized to operate effectively in an ICU environment. The system couples a wearable EEG cap with an Android app on a mobile device that serves as both the visual stimulus presenter and the data processing module. Furthermore, to overcome the challenges that BCI systems face in real-world scenarios, we propose a novel subject-specific Gaussian Mixture Model (GMM)-based training and adaptation algorithm. First, we incorporate subject-specific information in the training phase of the SSVEP identification model using GMM-based training and adaptation, and we evaluate the subject-specific models against other subjects. Subsequently, from the GMM discriminative scores, we generate transformed vectors that are passed to our predictive model. Finally, the adapted mixture mean scores of the subject-specific GMMs are used to generate high-dimensional supervectors. Our experimental results demonstrate that the proposed system achieved 98.7% average identification accuracy, which is promising for providing effective and consistent communication to patients in intensive care.
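The following is a hedged sketch of the subject-specific GMM scoring and supervector construction described above, using scikit-learn's GaussianMixture; the SSVEP feature extraction, the adaptation step, and the downstream predictive model are simplified, and all names are illustrative.

```python
# Hedged sketch: one GMM per SSVEP class, discriminative log-likelihood scores,
# and a supervector stacked from the mixture means. Not the authors' exact pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(features_by_class, n_components=4, seed=0):
    """Fit one GMM per SSVEP stimulation class on subject-specific feature frames."""
    return {c: GaussianMixture(n_components=n_components, random_state=seed).fit(X)
            for c, X in features_by_class.items()}

def gmm_score_vector(gmms, trial_frames):
    """Average log-likelihood of one trial under each class GMM (discriminative scores)."""
    return np.array([gmms[c].score(trial_frames) for c in sorted(gmms)])

def supervector(gmm):
    """Stack the (adapted) mixture means into a single high-dimensional supervector."""
    return gmm.means_.ravel()
```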


Subject(s)
Brain-Computer Interfaces , Communication , Evoked Potentials, Visual/physiology , Intensive Care Units , Algorithms , Humans , Signal Processing, Computer-Assisted
3.
Sensors (Basel) ; 18(2)2018 Feb 07.
Article in English | MEDLINE | ID: mdl-29414902

ABSTRACT

One of the main causes of fatal road accidents is distracted driving. Driving requires the continuous attention of the driver, and certain levels of distraction can cause drivers to lose that attention, which might lead to an accident; early detection of distraction can therefore reduce the number of accidents. Many studies have been conducted to detect driver distraction automatically. Although camera-based techniques have been successfully employed to characterize driver distraction, the risk of privacy violation is high. Physiological signals, on the other hand, have been shown to be a privacy-preserving and reliable indicator of driver state, although the acquisition technology may be intrusive to drivers in practical implementations. In this study, we investigate a continuous measure of phasic Galvanic Skin Response (GSR) from a wrist-worn wearable to identify driver distraction during an on-road driving experiment. We first decompose the raw GSR signal into its phasic and tonic components using Continuous Decomposition Analysis (CDA), and the continuous phasic component, which contains the relevant characteristics of the skin conductance signal, is retained for further analysis. We generate a high-resolution spectro-temporal transformation of the GSR signals for non-distracted and distracted (calling and texting) scenarios to visualize the behavior of the decomposed phasic GSR signal in the distracted scenarios. Guided by the spectrogram observations, we extract spectral and temporal features that capture the patterns associated with distraction at the physiological level. We then perform feature selection using support vector machine recursive feature elimination (SVM-RFE) in order to: (1) rank the distinguishing features across the subject population, and (2) create a reduced feature subset for more efficient distraction identification on edge devices at the generalization phase. We employ a support vector machine (SVM) to generate 10-fold cross-validation (10-CV) identification performance measures. Our experimental results demonstrate a cross-validation accuracy of 94.81% using all the features and 93.01% using the reduced feature space. The SVM-RFE-selected feature set incurs only a marginal decrease in accuracy while reducing redundancy in the input feature space, supporting the shorter response time necessary for early notification of a distracted driver state.
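The feature-selection and evaluation step can be sketched with scikit-learn as below; the spectro-temporal GSR features are assumed to have been extracted already, and the number of retained features, fold structure, and kernel choice are illustrative rather than the authors' exact setup.

```python
# Hedged sketch: SVM-RFE feature ranking over extracted GSR features, followed by
# 10-fold cross-validated SVM accuracy on the full and reduced feature spaces.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def rank_and_evaluate(X, y, n_keep=10):
    """X: (n_trials, n_features) spectro-temporal GSR features; y: distraction labels."""
    selector = RFE(SVC(kernel="linear"), n_features_to_select=n_keep).fit(X, y)
    X_reduced = X[:, selector.support_]                       # keep top-ranked features
    acc_full = cross_val_score(SVC(kernel="linear"), X, y, cv=10).mean()
    acc_reduced = cross_val_score(SVC(kernel="linear"), X_reduced, y, cv=10).mean()
    return selector.ranking_, acc_full, acc_reduced
```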


Subject(s)
Distracted Driving , Accidents, Traffic , Attention , Automobile Driving , Galvanic Skin Response , Humans , Wearable Electronic Devices
4.
Sensors (Basel) ; 17(12)2017 Dec 13.
Article in English | MEDLINE | ID: mdl-29236042

ABSTRACT

As a diagnostic and monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by other interferences, particularly facial and ocular artifacts generated by the user. This is especially an issue during continuous EEG recording sessions, so identifying such artifacts and separating them from useful EEG components is a key step in using EEG signals for either physiological monitoring and diagnosis or brain-computer interfaces. In this study, we design a new generic framework to process and characterize EEG recordings as multi-component, non-stationary signals, with the aim of localizing and identifying their components (e.g., artifacts). The proposed method combines three complementary algorithms: time-frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. A combination of spectro-temporal and geometric features is then extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We apply a curvelet transform (as an MRA method) to the 2D TF representation of EEG segments to decompose the given space into various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion into TF space, analysis using MRA, extraction of a suitable feature set, and application of a proper predictive model is effective in enhancing EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, namely the 1D wavelet transform; our experimental results reveal that the proposed method outperforms the 1D wavelet approach.
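A minimal sketch of the first two stages of such a pipeline is shown below; since curvelet transforms require a dedicated library, a 2D wavelet decomposition from PyWavelets stands in for the curvelet-based MRA step, and all parameters are illustrative assumptions.

```python
# Hedged sketch: time-frequency expansion of an EEG segment, then a 2D
# multi-resolution decomposition of the TF image. A 2D wavelet substitutes for
# the curvelet transform here; this is an illustration, not the authors' code.
import numpy as np
import pywt
from scipy.signal import spectrogram

def tf_mra_features(eeg_segment, fs=256, levels=2):
    """eeg_segment: 1D EEG samples. Returns coarse MRA coefficients of its TF image."""
    _, _, tf_image = spectrogram(eeg_segment, fs=fs, nperseg=128, noverlap=96)
    tf_image = np.log1p(tf_image)                    # compress dynamic range
    coeffs = pywt.wavedec2(tf_image, "db2", level=levels)
    approx = coeffs[0]                               # coarsest sub-band of the TF image
    return approx.ravel()                            # simple feature vector for a classifier
```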

5.
Sensors (Basel) ; 17(12)2017 Nov 27.
Article in English | MEDLINE | ID: mdl-29186887

ABSTRACT

The widespread use of wearable sensors, such as those in smartwatches, provides continuous access to valuable user-generated data such as human motion, which can be used to identify an individual based on motion patterns such as gait. Several methods have been suggested to extract heuristic and high-level features from gait motion data in order to identify discriminative gait signatures and distinguish a target individual from others. However, manual, hand-crafted feature extraction is error-prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the detachment between the manual feature extraction module and the predictive learning model may limit generalization. In this paper, we propose a novel approach for human gait identification using a time-frequency (TF) expansion of human gait cycles in order to capture joint two-dimensional (2D) spectral and temporal patterns. We then design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and to jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collect raw motion data synchronously from five inertial sensors placed at the chest, lower back, right wrist, right knee, and right ankle of each subject in order to investigate the impact of sensor location on gait identification performance. We then present two methods for early (input-level) and late (decision-score-level) multi-sensor fusion to improve generalization performance. Specifically, we propose the minimum error score fusion (MESF) method, which discriminatively learns the linear fusion weights of the individual DCNN scores at the decision level by iteratively minimizing the error rate on the training data. Ten subjects participated in this study, so the problem is a 10-class identification task. Based on our experimental results, 91% subject identification accuracy was achieved using the best individual IMU and the 2DTF-DCNN. Our proposed early and late sensor fusion approaches then improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
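The decision-score-level fusion idea can be sketched as follows: per-sensor DCNN score matrices are combined with linear weights learned by a simple coordinate search that minimizes the training error rate, in the spirit of MESF; the search procedure and parameters below are simplifications, not the authors' exact algorithm.

```python
# Hedged sketch of late (decision-score-level) fusion: learn per-sensor linear
# weights by coordinate search over a small grid, minimizing training error rate.
import numpy as np

def fuse(scores_per_sensor, weights):
    """scores_per_sensor: list of (n_samples, n_classes) score matrices, one per sensor."""
    return sum(w * s for w, s in zip(weights, scores_per_sensor))

def error_rate(fused_scores, labels):
    return np.mean(np.argmax(fused_scores, axis=1) != labels)

def learn_fusion_weights(scores_per_sensor, labels, n_iters=20,
                         candidates=np.linspace(0.0, 2.0, 21)):
    weights = np.ones(len(scores_per_sensor))
    for _ in range(n_iters):
        for i in range(len(weights)):
            # try each candidate value for weight i, keeping the others fixed
            errs = [error_rate(fuse(scores_per_sensor,
                                    np.r_[weights[:i], c, weights[i + 1:]]), labels)
                    for c in candidates]
            weights[i] = candidates[int(np.argmin(errs))]
    return weights
```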


Subject(s)
Gait , Female , Humans , Male , Neural Networks, Computer
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 1757-60, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26736618

ABSTRACT

In recent years, there has been increasing interest in using steady-state visual evoked potentials (SSVEP) in brain-computer interface (BCI) systems because of their high signal-to-noise ratio. However, due to the limitations of brain physiology and the refresh rate of display devices, the available stimulation frequencies that evoke strong SSVEPs are limited. The goal of this paper is to investigate time-varying and simultaneous frequency stimulation in order to increase the number of visual stimuli obtainable from a fixed number of stimulation frequencies in multi-class SSVEP-based BCI systems. This study analyzes the SSVEPs induced by groups of light-emitting diodes (LEDs). The proposed method produces more selections than the number of stimulation frequencies through an efficient combination of time-varying and simultaneous frequencies for stimulation. The feasibility and effectiveness of the proposed method were confirmed by a set of experiments conducted on six subjects. The results confirm that the proposed stimulation is a promising method for increasing the number of stimuli from a fixed number of frequencies in multi-class SSVEP-based BCI tasks.
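As a rough illustration of how time-varying patterns multiply the number of selectable targets for a fixed frequency set, consider the sketch below; the frequencies, slot count, and pattern construction are assumptions for illustration, not the paper's actual stimulation design.

```python
# Hedged sketch: each target is a sequence of stimulation frequencies shown in
# consecutive time slots, so k frequencies over n slots yield up to k**n targets.
from itertools import product

def build_stimulation_patterns(frequencies, n_slots=2):
    """Enumerate all time-varying frequency sequences of length n_slots."""
    return list(product(frequencies, repeat=n_slots))

patterns = build_stimulation_patterns([8.0, 10.0, 12.0], n_slots=2)
print(len(patterns), "targets from 3 frequencies:", patterns[:4])
```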


Subject(s)
Brain-Computer Interfaces , Computer Simulation , Evoked Potentials, Visual , Neurologic Examination/methods , Equipment Design , Humans
7.
Article in English | MEDLINE | ID: mdl-25569911

ABSTRACT

Steady-state visual evoked potential (SSVEP) has become one of the most widely employed modalities in online brain-computer interfaces (BCI) because of its high signal-to-noise ratio. However, due to the limitations of brain physiology and the refresh rate of display devices, the available stimulation frequencies that evoke strong SSVEPs are generally limited for practical applications. In this paper, we introduce a novel stimulation method using patterns of time-varying frequencies that can increase the number of visual stimuli obtainable from a fixed number of stimulation frequencies in multi-class SSVEP-based BCI systems. We then propose a probabilistic framework and investigate three approaches to detect the different patterns of time-varying frequencies. The results confirm that the proposed stimulation is a promising method for multi-class SSVEP-based BCI tasks. Our pattern detection approaches improve detection performance significantly by extracting higher-quality discriminative information from the input signal.
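One simple way to decode a time-varying frequency pattern, assuming per-slot SSVEP class posteriors and slot-wise independence, is sketched below; this is an illustrative probabilistic decoder and not one of the three detection approaches evaluated in the paper.

```python
# Hedged sketch: pick the frequency pattern whose product of per-slot posteriors
# (sum of log-posteriors) is highest. Inputs and names are illustrative.
import numpy as np
from itertools import product

def detect_pattern(slot_posteriors, frequencies):
    """slot_posteriors: (n_slots, n_frequencies) posterior probabilities per time slot."""
    best, best_logp = None, -np.inf
    for pattern in product(range(len(frequencies)), repeat=slot_posteriors.shape[0]):
        logp = sum(np.log(slot_posteriors[t, f] + 1e-12) for t, f in enumerate(pattern))
        if logp > best_logp:
            best, best_logp = pattern, logp
    return tuple(frequencies[f] for f in best), best_logp
```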


Subject(s)
Brain-Computer Interfaces , Evoked Potentials, Visual/physiology , Photic Stimulation/methods , Electroencephalography , Humans , Pattern Recognition, Visual , Signal Processing, Computer-Assisted , Time Factors
8.
Article in English | MEDLINE | ID: mdl-24091391

ABSTRACT

A better understanding of the structural class of a given protein reveals important information about its overall folding type and its domains. It can also be used directly to provide critical information on the general tertiary structure of a protein, which has a profound impact on protein function determination and drug design. Despite tremendous advances made by pattern recognition-based approaches, this problem remains unsolved in bioinformatics and demands more attention and exploration. In this study, we propose a novel feature extraction model that incorporates physicochemical and evolutionary-based information simultaneously. We also propose overlapped segmented distribution and autocorrelation-based feature extraction methods to provide more local and global discriminatory information. The proposed feature extraction methods are explored for the 15 most promising attributes selected from a wide range of physicochemical-based attributes. Finally, by applying an ensemble of different classifiers, namely AdaBoost.M1, LogitBoost, naive Bayes, multilayer perceptron (MLP), and support vector machine (SVM), we show improved protein structural class prediction accuracy on four popular benchmarks.
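An ensemble over the named classifier families can be assembled with scikit-learn as sketched below; GradientBoostingClassifier stands in for LogitBoost, which scikit-learn does not provide, and the feature extraction stage is assumed to have produced the input matrix already.

```python
# Hedged sketch of a soft-voting ensemble over the classifier families named in
# the abstract. Hyperparameters are defaults; this is not the authors' exact setup.
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def build_ensemble():
    return VotingClassifier(
        estimators=[
            ("ada", AdaBoostClassifier()),
            ("gb", GradientBoostingClassifier()),   # stand-in for LogitBoost
            ("nb", GaussianNB()),
            ("mlp", MLPClassifier(max_iter=500)),
            ("svm", SVC(probability=True)),
        ],
        voting="soft",                              # average predicted class probabilities
    )
```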


Subject(s)
Pattern Recognition, Automated/methods , Proteins/chemistry , Sequence Analysis, Protein/methods , Bayes Theorem , Computational Biology/methods , Protein Conformation
9.
Proc Wirel Health ; 20132013 Nov.
Article in English | MEDLINE | ID: mdl-28224139

ABSTRACT

Gait and postural control are important aspects of human movement and balance, and normal movement control in humans changes with aging. With aging, the components of the nervous system that support movement control, including somatosensory, visual, and spatial orientation senses as well as neuromuscular control, degrade. As a result, body movement control, such as lateral sway while walking, is affected, and excessive sway has been shown to be a significant cause of falls among the elderly. Biofeedback has been investigated as a way to help the elderly improve their body movement and postural ability by supplementing the feedback available to the nervous system. In this paper, we propose a wearable, low-power sensor system capable of characterizing lateral sway and gait parameters and of providing corrective feedback in real time via vibratory feedback modules to reduce excessive sway. The real-time, low-power, and wearable characteristics of the proposed system allow long-term continuous monitoring of a subject's sway while giving direct feedback to reduce walking sway and prevent falls. It can also be used in clinics as a tool for evaluating fall risk and for training users to better maintain their balance. The effectiveness of the biofeedback system was evaluated on 12 older adults as they performed gait and stance tasks with and without biofeedback. Significant improvements (p-value < 0.1) in sway angle, variance of the sway angle, variance of the gait phases, and postural control on a perturbed surface were detected when the proposed Sway Error Feedback System was used.
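The core feedback rule can be sketched as follows: estimate the lateral (roll) sway angle from a trunk-worn accelerometer and trigger vibratory feedback when it exceeds a threshold; the threshold value and the angle estimate below are illustrative placeholders, not the authors' firmware.

```python
# Hedged sketch of a simple sway-feedback rule from accelerometer readings.
import numpy as np

def lateral_sway_angle(acc_y, acc_z):
    """Roll angle (degrees) from lateral (y) and vertical (z) acceleration components."""
    return np.degrees(np.arctan2(acc_y, acc_z))

def feedback_decision(acc_y, acc_z, threshold_deg=5.0):
    """Return True when corrective vibratory feedback should be triggered."""
    return abs(lateral_sway_angle(acc_y, acc_z)) > threshold_deg
```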
