Results 1 - 20 of 46
1.
J Healthc Eng ; 2022: 1573076, 2022.
Article in English | MEDLINE | ID: mdl-35126902

ABSTRACT

Early prediction of epileptic seizures can warn patients to take precautions and significantly improve their lives. In recent years, deep learning has become increasingly predominant in seizure prediction. However, existing deep learning-based approaches in this field require a great deal of labeled data to guarantee performance, and labeling EEG signals requires the expertise of an experienced pathologist and is incredibly time-consuming. To address this issue, we propose a novel Consistency-based Semisupervised Seizure Prediction Model (CSSPM), where only a fraction of the training data is labeled. Our method is based on the principle of consistency regularization, which holds that a robust model should produce consistent results for the same input under additional perturbations. Specifically, by using stochastic augmentation and dropout, we treat the entire neural network as a stochastic model and apply a consistency constraint to penalize the difference between the current prediction and previous predictions. In this way, unlabeled data can be fully utilized to improve the decision boundary and enhance prediction performance. Compared with existing studies requiring all training data to be labeled, the proposed method needs only a small portion of the data to be labeled while still achieving satisfactory results. Our method provides a promising solution for alleviating the labeling cost in real-world applications.
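The consistency constraint described here can be sketched in a few lines. Below is a minimal, illustrative PyTorch version of the idea: two stochastic forward passes (dropout active) over the same unlabeled batch, with their disagreement penalized alongside the supervised loss. The network, batch shapes, and the trade-off weight are assumptions for illustration, not the paper's values.

```python
# Minimal sketch of consistency-based semi-supervised training in the spirit
# of CSSPM; the network, shapes, and loss weight are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(                      # toy stand-in for the prediction network
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_lab = torch.randn(32, 128)                # labeled EEG feature windows
y_lab = torch.randint(0, 2, (32,))          # preictal / interictal labels
x_unl = torch.randn(128, 128)               # unlabeled windows (the majority)

model.train()                               # keep dropout stochastic
sup_loss = F.cross_entropy(model(x_lab), y_lab)

# Two stochastic passes over the same unlabeled input; dropout (and, in the
# paper, stochastic augmentation) perturbs each pass differently.
p1 = F.softmax(model(x_unl), dim=1)
p2 = F.softmax(model(x_unl), dim=1).detach()   # treat one pass as the target
cons_loss = F.mse_loss(p1, p2)              # penalize prediction disagreement

loss = sup_loss + 10.0 * cons_loss          # trade-off weight is assumed
opt.zero_grad(); loss.backward(); opt.step()
```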


Subject(s)
Epilepsy , Scalp , Electroencephalography/methods , Humans , Neural Networks, Computer , Seizures/diagnosis
2.
Clin Neurophysiol ; 130(1): 25-37, 2019 01.
Article in English | MEDLINE | ID: mdl-30472579

ABSTRACT

OBJECTIVE: Automatic detection of epileptic seizures based on deep learning methods has received much attention recently. However, the potential of deep neural networks in seizure detection has not been fully exploited, in terms of both the optimal design of the model architecture and the detection power of time-series brain data. In this work, a deep neural network architecture is introduced to learn the temporal dependencies in electroencephalogram (EEG) data for robust detection of epileptic seizures. METHODS: A deep Long Short-Term Memory (LSTM) network is first used to learn high-level representations of different EEG patterns. Then, a Fully Connected (FC) layer is adopted to extract the EEG features most relevant to epileptic seizures. Finally, these features are supplied to a softmax layer to output predicted labels. RESULTS: The results on a benchmark clinical dataset reveal the superiority of the proposed approach over the baseline techniques, achieving 100% classification accuracy, 100% sensitivity, and 100% specificity. Our approach is additionally shown to be robust in noisy, real-life conditions: it maintains high detection performance in the presence of common EEG artifacts (muscle activity and eye movement) as well as background noise. CONCLUSIONS: We demonstrate the clinical feasibility of our seizure detection approach, which outperforms cutting-edge techniques in terms of seizure detection performance and robustness. SIGNIFICANCE: Our approach can contribute to accurate and robust detection of epileptic seizures in both ideal and real-life conditions.
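As a rough illustration of the described pipeline (deep LSTM, then an FC layer, then softmax), here is a minimal PyTorch sketch; the channel count, window length, and layer sizes are assumptions, not the paper's architecture.

```python
# Illustrative LSTM -> FC -> softmax seizure detector (sizes are assumptions).
import torch
import torch.nn as nn

class SeizureLSTM(nn.Module):
    def __init__(self, n_channels=23, hidden=64, n_classes=2):
        super().__init__()
        # a deep LSTM learns high-level representations of EEG patterns
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)   # extracts seizure-relevant features

    def forward(self, x):                        # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return torch.softmax(self.fc(out[:, -1, :]), dim=1)  # predicted labels

probs = SeizureLSTM()(torch.randn(4, 256, 23))   # four 1 s EEG windows at 256 Hz
```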


Subject(s)
Deep Learning , Electroencephalography/methods , Neural Networks, Computer , Seizures/diagnosis , Seizures/physiopathology , Signal Processing, Computer-Assisted , Deep Learning/standards , Electroencephalography/standards , Humans
3.
PLoS One ; 13(1): e0190783, 2018.
Article in English | MEDLINE | ID: mdl-29351281

ABSTRACT

This paper addresses the problem of quantifying biomarkers in multi-stained tissues based on the color and spatial information in microscopy images of the tissue. A deep learning-based method is proposed that can automatically localize and quantify the regions expressing biomarker(s) in any selected area on a whole slide image. The deep learning network, which we refer to as Whole Image (WI)-Net, is a fully convolutional network whose input is the true RGB color image of a tissue and whose output is a map showing the locations of each biomarker. The WI-Net relies on a second network, Nuclei (N)-Net, a convolutional neural network that classifies each nucleus according to the biomarker(s) it expresses. In this study, images of immunohistochemistry (IHC)-stained slides were collected and used. Images of nuclei (4679 RGB images) were manually labeled based on the biomarkers expressed in each nucleus (p16 positive, Ki-67 positive, p16 and Ki-67 positive, or p16 and Ki-67 negative). The labeled nuclei images were used to train the N-Net (obtaining an accuracy of 92% on a test set). The trained N-Net was then extended to the WI-Net, which generates a map of all biomarkers in any selected sub-image of the whole slide image acquired by the scanner (instead of classifying every nucleus image separately). The results of our method compare well with manual labeling by humans (average F-score of 0.96). In addition, we carried out a layer-based immunohistochemical analysis of cervical epithelium and showed that our method can be used by pathologists to differentiate between different grades of cervical intraepithelial neoplasia by quantitatively assessing the percentage of proliferating cells in the different layers of HPV-positive lesions.
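For illustration, a toy stand-in for the nucleus-level classifier (N-Net) might look as follows in PyTorch; the patch size, layer sizes, and class encodings are assumptions rather than the paper's design.

```python
# Toy nucleus classifier in the spirit of N-Net (architecture is assumed).
import torch
import torch.nn as nn

classes = ["p16+", "Ki-67+", "p16+ & Ki-67+", "negative"]  # the four labels
net = nn.Sequential(                      # small CNN over RGB nucleus patches
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, len(classes)))

patch = torch.randn(1, 3, 32, 32)         # one 32x32 RGB nucleus image
probs = torch.softmax(net(patch), dim=1)  # per-class biomarker probabilities
# A WI-Net-style map would apply such a classifier over every nucleus in a
# selected sub-image of the whole slide, rather than patch by patch.
```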


Subject(s)
Automation , Biomarkers/metabolism , Neural Networks, Computer , Uterine Cervical Neoplasms/metabolism , Biopsy , Female , Humans , Immunohistochemistry , Uterine Cervical Neoplasms/pathology
4.
Int J Comput Assist Radiol Surg ; 11(10): 1765-77, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27287761

ABSTRACT

PURPOSE: Image models are central to all image processing tasks. The great advancements in digital image processing would not have been possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images, outperforming competing methods in many image processing tasks. These developments have come at a time when the greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. METHODS: We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are most relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. RESULTS: Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been fully appreciated. CONCLUSIONS: Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements over the current state of the art.
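The basic ingredient shared by all patch-based methods — decomposing an image into small overlapping patches and treating each as a vector — can be sketched in a few lines of NumPy (sizes here are arbitrary):

```python
# Extract overlapping patches from an image; the core patch-based primitive.
import numpy as np

def extract_patches(img, p=8, stride=4):
    """Collect overlapping p-by-p patches as rows of a matrix."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - p + 1, stride):
        for j in range(0, W - p + 1, stride):
            patches.append(img[i:i+p, j:j+p].ravel())
            coords.append((i, j))
    return np.array(patches), coords

img = np.random.rand(64, 64)            # stand-in for a CT slice
patches, coords = extract_patches(img)
print(patches.shape)                    # (225, 64): one row per patch
# Patch-based methods then model these rows (sparse coding, non-local
# grouping, ...) and average the processed patches back into place.
```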


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Humans , Models, Theoretical , Radiographic Image Enhancement
5.
Phys Med Biol ; 61(9): 3536-53, 2016 May 07.
Article in English | MEDLINE | ID: mdl-27055224

ABSTRACT

Reducing the radiation dose in computed tomography (CT) is highly desirable, but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images, so removing the noise in the projection measurements is essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms have proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method: the image is searched for similar patches, and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using it. We apply the proposed algorithm to simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
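The shape of the pipeline — stack projections into a 3D volume, extract 3D blocks, cluster them, and sparse-code each block in a learned dictionary — can be sketched with scikit-learn stand-ins. The clustering and dictionary-learning calls below are generic substitutes for the paper's specialized algorithms, and the block sizes are assumptions; joint sparsity (shared supports within a cluster) is not enforced here.

```python
# Hedged sketch: 3D block extraction + clustering + dictionary sparse coding.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

proj = np.random.rand(16, 64, 64)               # 16 noisy cone-beam projections
blocks = np.array([proj[k:k+4, i:i+4, j:j+4].ravel()   # small 4x4x4 blocks
                   for k in range(0, 12, 4)
                   for i in range(0, 60, 4)
                   for j in range(0, 60, 4)])

labels = MiniBatchKMeans(n_clusters=8, n_init=3).fit_predict(blocks)
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0)
codes = dico.fit(blocks).transform(blocks)      # sparse codes per block
denoised_blocks = codes @ dico.components_      # re-synthesized blocks
# The paper additionally ties the supports of all codes within one cluster
# (joint sparsity), which a per-block coder like this does not capture.
```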


Subject(s)
Algorithms , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Imaging, Three-Dimensional/methods , Phantoms, Imaging , Animals , Cluster Analysis , Computer Simulation , Humans , Rats
6.
J Neural Eng ; 13(2): 026001, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26824461

ABSTRACT

OBJECTIVE: The brain characteristics of different people are not the same, so brain-computer interfaces (BCIs) should be customized for each individual. In motor-imagery-based synchronous BCIs, a number of parameters (referred to as hyper-parameters), including the EEG frequency bands, the channels, and the time intervals from which the features are extracted, should be pre-determined based on each subject's brain characteristics. APPROACH: To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization, and a final classifier aggregates their results. MAIN RESULTS: We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests and shown the positive impact of hyper-parameter optimization on the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. SIGNIFICANCE: Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods and rely on pre-studies to determine the hyper-parameter values, our method is fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best-performing designs in the literature.
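A sketch of the tuning loop is below, using scikit-optimize's gp_minimize as a stand-in Bayesian optimizer (the authors' implementation, search space, and objective are not specified here; the band edges, window start, and toy objective are all assumptions).

```python
# Hedged sketch of Bayesian hyper-parameter tuning for a motor-imagery BCI.
import numpy as np
from skopt import gp_minimize      # scikit-optimize as a stand-in BO engine

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 22, 250))    # fake trials x channels x samples
y = rng.integers(0, 2, 100)

def cv_error(params):
    f_lo, f_hi, t0 = params        # band edges (Hz) and window start (s) to tune
    # In a real pipeline: band-pass X to [f_lo, f_hi], crop from t0, extract
    # features, cross-validate a classifier, and return the CV error.
    # A toy objective keeps this sketch runnable:
    return float(abs(f_lo - 10) + abs(f_hi - 26)) / 40

res = gp_minimize(cv_error, [(4.0, 14.0), (16.0, 40.0), (0.0, 1.0)], n_calls=20)
print(res.x)   # one proposed hyper-parameter set; the paper trains classifiers
               # on several such sets and aggregates their decisions
```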


Subject(s)
Algorithms , Brain-Computer Interfaces/standards , Pattern Recognition, Automated/standards , User-Computer Interface , Bayes Theorem , Brain/physiology , Brain-Computer Interfaces/trends , Electroencephalography/methods , Humans , Pattern Recognition, Automated/methods
7.
Biomed Eng Online ; 14: 96, 2015 Oct 24.
Article in English | MEDLINE | ID: mdl-26499452

ABSTRACT

BACKGROUND: Cervical cancer remains a major health problem, especially in developing countries. Colposcopic examination is used to detect high-grade lesions in patients with a history of abnormal Pap smears, but new technologies are needed to improve the sensitivity and specificity of this technique. We propose to test the potential of fluorescence confocal microscopy to identify high-grade lesions. METHODS: We examined quantitative ex vivo confocal fluorescence microscopy for differentiating among normal cervical tissue, low-grade cervical intraepithelial neoplasia (CIN), and high-grade CIN. We sought to (1) quantify nuclear morphology and tissue architecture features by analyzing images of cervical biopsies and (2) determine the accuracy of high-grade CIN detection via confocal microscopy relative to the accuracy of detection by colposcopic impression. Forty-six biopsies obtained from colposcopically normal and abnormal cervical sites were evaluated. Confocal images were acquired at different depths from the epithelial surface, and histological images were analyzed using in-house software. RESULTS: The features calculated from the confocal images compared well with those obtained from the histological images and the histopathological reviews of the specimens (performed by a gynecologic pathologist). The correlations between two of these features (the nuclear-cytoplasmic ratio and the average distance to the three nearest Delaunay neighbors) and the grade of dysplasia were higher than that of colposcopic impression. The sensitivity of detecting high-grade dysplasia by analyzing images collected at the epithelial surface and at 15 and 30 µm below it was 100%, 100%, and 92%, respectively. CONCLUSIONS: Quantitative analysis of confocal fluorescence images demonstrated its capacity for discriminating high-grade CIN lesions from low-grade CIN lesions and normal tissue at different imaging depths. This approach could help clinicians identify high-grade CIN in clinical settings.
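One of the tissue-architecture features named above — the average distance to the three nearest Delaunay neighbors of each nucleus — is straightforward to compute with SciPy. The sketch below uses synthetic centroids; the in-house software's exact definition may differ.

```python
# Average distance to the 3 nearest Delaunay neighbours of each nucleus.
import numpy as np
from scipy.spatial import Delaunay

pts = np.random.rand(50, 2) * 100           # nucleus centroids (synthetic)
tri = Delaunay(pts)

neigh = {i: set() for i in range(len(pts))}
for simplex in tri.simplices:                # collect Delaunay adjacency
    for a in simplex:
        neigh[a].update(b for b in simplex if b != a)

feat = []
for i, ns in neigh.items():
    d = np.sort(np.linalg.norm(pts[list(ns)] - pts[i], axis=1))
    feat.append(d[:3].mean())                # average of 3 nearest neighbours
print(np.mean(feat))                         # one tissue-architecture feature
```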


Subject(s)
Microscopy, Confocal/methods , Microscopy, Fluorescence/methods , Uterine Cervical Dysplasia/diagnosis , Uterine Cervical Neoplasms/diagnosis , Adult , Colposcopy , Female , Humans , Middle Aged , Neoplasm Grading , Phenotype , Uterine Cervical Neoplasms/pathology , Young Adult , Uterine Cervical Dysplasia/pathology
8.
PLoS One ; 10(6): e0129435, 2015.
Article in English | MEDLINE | ID: mdl-26090799

ABSTRACT

A problem that impedes progress in brain-computer interface (BCI) research is the difficulty of reproducing the results of different papers, which makes comparing different algorithms very difficult. Some improvement has come from the use of standard datasets to evaluate different algorithms, but a common comparison framework is still lacking. In this paper, we construct a new general framework for comparing different algorithms on several standard datasets. All of these datasets correspond to sensory-motor BCIs and were obtained from 21 subjects operating synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the best choice of classifier for a BCI system depends on the feature extraction method used in that system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.
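A minimal version of the comparison idea — evaluate several classifiers on the same per-subject feature sets, then test whether the accuracy differences are significant — can be sketched with scikit-learn and SciPy; synthetic data stands in for the 29 subjects' EEG features, and the specific test (Wilcoxon) is my choice, not necessarily the paper's.

```python
# Compare two classifiers across "subjects" with a paired statistical test.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
scores = {"LDA": [], "SVM": []}
for _ in range(10):                           # one iteration per "subject"
    X = rng.standard_normal((120, 16))        # stand-in feature matrix
    y = rng.integers(0, 2, 120)
    scores["LDA"].append(cross_val_score(LinearDiscriminantAnalysis(), X, y).mean())
    scores["SVM"].append(cross_val_score(SVC(), X, y).mean())

stat, p = wilcoxon(scores["LDA"], scores["SVM"])   # paired test across subjects
print(f"p = {p:.3f}")
```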


Subject(s)
Brain-Computer Interfaces , Psychomotor Performance , Algorithms , Area Under Curve , Datasets as Topic , Discriminant Analysis , Electroencephalography , Humans , Reproducibility of Results , Support Vector Machine
9.
Sensors (Basel) ; 14(10): 18370-89, 2014 Oct 01.
Article in English | MEDLINE | ID: mdl-25275348

ABSTRACT

Electroencephalogram (EEG) recordings are often contaminated with muscular artifacts that strongly obscure the EEG signals and complicate their analysis. For the conventional case, where the EEG recordings are obtained simultaneously over many EEG channels, a considerable range of methods exists for removing muscular artifacts. In recent years, there has been an increasing trend toward using EEG information in ambulatory healthcare and related physiological signal monitoring systems, where, for practical reasons, a single-channel EEG system must be used. Unfortunately, few studies address muscular artifact cancellation in single-channel EEG recordings. To address this issue, in this preliminary study we propose a simple yet effective method for muscular artifact cancellation in the single-channel EEG case, combining the ensemble empirical mode decomposition (EEMD) and joint blind source separation (JBSS) techniques. We also conduct a study that compares and investigates all possible single-channel solutions and demonstrate the performance of these methods using numerical simulations and real-life applications. The proposed method is shown to significantly outperform all other methods: it can successfully remove muscular artifacts without altering the underlying EEG activity, making it a promising tool for ambulatory healthcare systems.
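The single-channel trick is to expand the one channel into multiple modes with EEMD and then apply source separation across those modes. The sketch below uses the PyEMD package (installed as EMD-signal) for EEMD and FastICA as a simple stand-in for the paper's JBSS step; the roughness-based artifact scoring is also an assumption for illustration.

```python
# Hedged sketch: EEMD expands one channel into modes; a BSS step (FastICA
# here, not the paper's JBSS) separates and removes artifact components.
import numpy as np
from PyEMD import EEMD                    # pip install EMD-signal (assumption)
from sklearn.decomposition import FastICA

t = np.linspace(0, 2, 512)
eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * np.random.randn(512)  # EEG + "muscle"

imfs = EEMD().eemd(eeg, t)                # single channel -> multichannel modes
ica = FastICA(n_components=imfs.shape[0], random_state=0)
src = ica.fit_transform(imfs.T)           # separate sources across the modes

power = (np.diff(src, axis=0) ** 2).mean(axis=0)   # crude high-frequency score
src[:, power > np.percentile(power, 75)] = 0       # drop "muscle-like" sources
clean = ica.inverse_transform(src).sum(axis=1)     # recombine surviving modes
```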


Subject(s)
Artifacts , Electroencephalography/methods , Signal Processing, Computer-Assisted , Humans , Muscles/physiology
10.
Sensors (Basel) ; 14(2): 2036-51, 2014 Jan 24.
Article in English | MEDLINE | ID: mdl-24469356

ABSTRACT

The emergence of wireless sensor networks (WSNs) has motivated a paradigm shift in patient monitoring and disease control. Epilepsy management is one of the areas that could especially benefit from the use of WSNs: with miniaturized wireless electroencephalogram (EEG) sensors, it is possible to perform ambulatory EEG recording and real-time seizure detection outside clinical settings. One major consideration in using such a wireless EEG-based system is the stringent battery energy constraint on the sensor side, so solutions that reduce the power consumption there are highly desirable. The conventional approach incurs a high power consumption, as it transmits the entire EEG signal wirelessly to an external data server (where seizure detection is carried out). This paper examines the use of data reduction techniques for reducing the amount of data that has to be transmitted and, thereby, the required power consumption on the sensor side. Two data reduction approaches are examined: compressive sensing-based EEG compression and low-complexity feature extraction. Their performance is evaluated in terms of seizure detection effectiveness and power consumption. Experimental results show that by performing low-complexity feature extraction at the sensor and transmitting to the server only the features pertinent to seizure detection, a considerable overall power saving is achieved: the battery life of the system is increased 14-fold, while the same seizure detection rate as the conventional approach (95%) is maintained.
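The payoff of sensor-side feature extraction is easy to see with a back-of-the-envelope count of transmitted bytes per epoch. The feature set below (line length and energy per channel), the sample width, and the epoch geometry are illustrative assumptions, not the paper's configuration.

```python
# Transmit raw samples vs. a few low-complexity features per epoch.
import numpy as np

fs, epoch_s, n_ch = 256, 2, 18
epoch = np.random.randn(n_ch, fs * epoch_s)       # one 2 s multichannel epoch

def features(x):
    line_length = np.abs(np.diff(x, axis=1)).sum(axis=1)  # cheap to compute
    energy = (x ** 2).sum(axis=1)
    return np.stack([line_length, energy], axis=1)  # 2 features per channel

raw_bytes = epoch.size * 2                         # 16-bit samples
feat_bytes = features(epoch).size * 4              # 32-bit features
print(raw_bytes / feat_bytes)                      # transmission reduction factor
```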


Subject(s)
Seizures/diagnosis , Ambulatory Care , Electroencephalography , Humans , Miniaturization , Seizures/prevention & control , Wireless Technology
11.
Sensors (Basel) ; 14(1): 1474-96, 2014 Jan 15.
Article in English | MEDLINE | ID: mdl-24434840

ABSTRACT

The use of wireless body sensor networks is gaining popularity for monitoring and communicating information about a person's health. In such applications, the amount of data transmitted by the sensor node should be minimized, because the energy available in these battery-powered sensors is limited. In this paper, we study the wireless transmission of electroencephalogram (EEG) signals. We propose the use of a compressed sensing (CS) framework to efficiently compress these signals at the sensor node. Our framework exploits both the temporal correlation within EEG signals and the spatial correlations amongst the EEG channels. We show that our framework is up to eight times more energy efficient than the typical wavelet compression method in terms of compression and encoding computations and wireless transmission. We also show that, for a fixed compression ratio, our method achieves better reconstruction quality than the state-of-the-art CS-based method. Finally, we demonstrate that our method is robust to measurement noise and packet loss, and that it is applicable to a wide range of EEG signal types.
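A toy single-channel CS encoder/decoder illustrates the division of labor: cheap random projections at the sensor, l1 decoding in a sparsifying basis at the server. Lasso and the DCT basis below are my stand-ins, not the paper's solver or transform, and the joint multi-channel decoding that exploits spatial correlation is omitted.

```python
# Toy CS pipeline for one EEG channel: random projections + l1 decoding.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

n, m = 256, 96                               # samples per block, measurements
x = np.cos(2 * np.pi * 8 * np.arange(n) / 256)   # smooth, DCT-sparse "EEG"
Phi = np.random.randn(m, n) / np.sqrt(m)     # sensing matrix (cheap at sensor)
y = Phi @ x                                  # transmitted measurements

Psi = idct(np.eye(n), axis=0, norm="ortho")  # DCT synthesis basis
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
lasso.fit(Phi @ Psi, y)                      # l1 decoding at the server
x_hat = Psi @ lasso.coef_                    # reconstructed channel
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```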


Subject(s)
Data Compression/methods , Electroencephalography/methods , Algorithms , Signal Processing, Computer-Assisted
12.
Magn Reson Imaging ; 31(3): 448-55, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23102947

ABSTRACT

In this work we exploit two assumed properties of dynamic MRI in order to reconstruct the images from under-sampled k-space samples: the signal is sparse in the x-f space, and it is rank-deficient in the x-t space. These assumptions lead to an optimization problem that requires minimizing a combined l(p)-norm and Schatten-p norm. We propose a novel FOCUSS-based approach to solve this optimization problem. Our proposed method is compared with state-of-the-art techniques in dynamic MRI reconstruction. Experimental evaluation carried out on three real datasets shows that, for all of them, our method yields better reconstruction in both quantitative and qualitative evaluation.
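The FOCUSS ingredient — iteratively reweighted least squares for l(p)-minimization with p < 1 — is compact enough to sketch for a generic underdetermined system. This is only the algorithmic building block, not the paper's combined x-f/x-t formulation; the problem size and p are assumptions.

```python
# Minimal FOCUSS-style iteration for lp-minimization (p < 1).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))                 # underdetermined system
x_true = np.zeros(100); x_true[[7, 42, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true

p, x = 0.8, np.ones(100)
for _ in range(30):
    W = np.diag(np.abs(x) ** (1 - p / 2) + 1e-8)   # FOCUSS reweighting
    AW = A @ W
    x = W @ np.linalg.lstsq(AW, y, rcond=None)[0]  # weighted min-norm solution
print(np.flatnonzero(np.abs(x) > 1e-3))            # recovers the true support
```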


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
13.
Article in English | MEDLINE | ID: mdl-24505753

ABSTRACT

We address the problem of dynamic CT reconstruction from parsimoniously sampled sinograms. We propose a novel approach that models the dynamic CT sequence as a low-rank matrix, formed by stacking each frame as a column. Because the frames are temporally correlated, the columns of this matrix are not independent, and the matrix is therefore of low rank. We exploit this low-rank structure to reconstruct the CT matrix from its parsimoniously sampled sinograms. Mathematically, this is a low-rank matrix recovery problem, and we propose a novel algorithm to solve it. Our proposed method reduces the reconstruction error by 50% or more compared to previous recovery techniques.
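The low-rank idea can be illustrated with a toy matrix-completion version: frames stacked as columns, missing entries recovered by iterative singular-value thresholding. Note the paper measures sinograms, not matrix entries, so entry-wise sampling here is a simplifying assumption.

```python
# Toy low-rank recovery of a frame-stacked matrix via SVT iterations.
import numpy as np

M = np.outer(np.sin(np.linspace(0, 3, 100)), [1.0, 1.1, 1.2, 1.3])  # rank-1
mask = np.random.rand(*M.shape) < 0.5       # "parsimonious" sampling pattern
X = np.zeros_like(M)
for _ in range(100):                        # SVT-style completion
    U, s, Vt = np.linalg.svd(X + mask * (M - X), full_matrices=False)
    X = U @ np.diag(np.maximum(s - 0.1, 0)) @ Vt   # shrink singular values
print(np.linalg.norm(X - M) / np.linalg.norm(M))   # small relative error
```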


Subject(s)
Algorithms , Radiographic Image Enhancement/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Animals , Phantoms, Imaging , Reproducibility of Results , Sensitivity and Specificity
14.
IEEE Trans Med Imaging ; 31(12): 2253-66, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22949054

ABSTRACT

This work addresses the problem of real-time online reconstruction of dynamic magnetic resonance imaging sequences. The proposed method reconstructs the difference between the previous and current image frames. This difference image is sparse, and we recover it from its partial k-space scans using a nonconvex compressed sensing algorithm. As no existing algorithm was fast enough for real-time reconstruction, we derive a novel algorithm for this purpose. Our proposed method has been compared against state-of-the-art offline and online reconstruction methods. Its accuracy is lower than that of the offline methods but noticeably higher than that of the online techniques. For real-time reconstruction, we are also concerned with reconstruction speed: our method is capable of reconstructing 128 × 128 images at 6 frames/s, 180 × 180 images at 5 frames/s, and 256 × 256 images at 2.5 frames/s.
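The core idea — the frame-to-frame difference is sparse, so it can be recovered from few k-space samples — can be sketched in 1-D with iterative hard thresholding as a simple nonconvex solver (the paper derives its own, faster algorithm; sizes, sparsity level, and step size here are assumptions).

```python
# Recover a sparse difference "frame" from partial Fourier samples via IHT.
import numpy as np

n = 128
prev = np.zeros(n); prev[30:60] = 1.0           # previous frame (1-D toy image)
curr = prev.copy(); curr[45] += 0.5             # small sparse change
mask = np.random.rand(n) < 0.35                 # partial k-space pattern
y = mask * np.fft.fft(curr - prev)              # measured difference spectrum

d = np.zeros(n, dtype=complex)
for _ in range(50):                             # IHT: gradient step + keep k largest
    d = d + np.fft.ifft(y - mask * np.fft.fft(d))
    keep = np.argsort(np.abs(d))[-5:]
    z = np.zeros_like(d); z[keep] = d[keep]; d = z
recon = prev + d.real                           # current = previous + difference
print(np.abs(recon - curr).max())
```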


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Databases, Factual , Fourier Analysis , Humans , Larynx/anatomy & histology , Myocardial Perfusion Imaging/methods
15.
J Neuroeng Rehabil ; 9: 50, 2012 Jul 27.
Article in English | MEDLINE | ID: mdl-22838499

ABSTRACT

BACKGROUND: A novel artefact removal algorithm is proposed for a self-paced hybrid brain-computer interface (BCI) system. This hybrid system combines a self-paced BCI with an eye-tracker to operate a virtual keyboard. To select a letter, the user must gaze at the target for at least a specific period of time (the dwell time) and then activate the BCI by performing a mental task. Unfortunately, electroencephalogram (EEG) signals are often contaminated with artefacts, which degrade the quality of the EEG signals and subsequently the BCI's performance. METHODS: To remove artefacts from the EEG signals, the proposed algorithm uses the stationary wavelet transform combined with a new adaptive thresholding mechanism. To evaluate the performance of the proposed algorithm and other artefact handling/removal methods, semi-simulated EEG signals (i.e., real EEG signals mixed with simulated artefacts) and real EEG signals obtained from seven participants are used. For real EEG signals, the hybrid BCI system's performance is evaluated in an online-like manner, i.e., using the continuous data from the last session as in a real-time environment. RESULTS: With semi-simulated EEG signals, we show that the proposed algorithm achieves lower signal distortion in both the time and frequency domains. With real EEG signals, we demonstrate that for a dwell time of 0.0 s, the number of false positives per minute is 2 and the true positive rate (TPR) achieved by the proposed algorithm is 44.7%, more than 15.0% higher than that of other state-of-the-art artefact handling methods. As the dwell time increases to 1.0 s, the TPR increases to 73.1%. CONCLUSIONS: The proposed artefact removal algorithm greatly improves the BCI's performance. It also has the following advantages: (a) it does not require additional electrooculogram/electromyogram channels, long data segments or a large number of EEG channels; (b) it allows real-time processing; and (c) it reduces signal distortion.
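The SWT-plus-thresholding skeleton can be sketched with PyWavelets. The universal threshold used below is a simple stand-in; the paper's adaptive thresholding mechanism is more elaborate, and the wavelet, level, and artefact model are assumptions.

```python
# SWT-based artefact suppression: decompose, threshold detail coefficients,
# reconstruct (universal threshold as a stand-in for the adaptive rule).
import numpy as np
import pywt

t = np.linspace(0, 2, 1024)
eeg = np.sin(2 * np.pi * 10 * t)                 # clean 10 Hz "EEG"
eeg[400:480] += 5 * np.random.randn(80)          # burst artefact

coeffs = pywt.swt(eeg, "db4", level=4)           # stationary wavelet transform
den = []
for cA, cD in coeffs:
    thr = np.median(np.abs(cD)) / 0.6745 * np.sqrt(2 * np.log(len(cD)))
    den.append((cA, pywt.threshold(cD, thr, mode="soft")))
clean = pywt.iswt(den, "db4")                    # artefact-suppressed signal
```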


Subject(s)
Algorithms , Artifacts , Brain-Computer Interfaces , Data Interpretation, Statistical , Electroencephalography/instrumentation , Electroencephalography/methods , Electromyography , Electrooculography , Equipment Design , Eye Movements/physiology , Female , Humans , Male , Regression Analysis , Reproducibility of Results , Signal Processing, Computer-Assisted , User-Computer Interface , Wavelet Analysis , Young Adult
16.
Magn Reson Imaging ; 30(10): 1483-94, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22789845

ABSTRACT

This work addresses the problem of online reconstruction of dynamic magnetic resonance images (MRI). The proposed method reconstructs the difference between the images of the previous and current time frames. This difference image is modeled as a rank-deficient matrix and is recovered from the partially sampled k-space data via nuclear norm minimization. Our proposed method has been compared against state-of-the-art offline and online reconstruction methods. It has reconstruction accuracy similar to the offline method and significantly higher than the online technique, while being about an order of magnitude faster than the online technique it is compared against. Our experimental data consisted of dynamic MRI sequences collected at 6 to 7 frames per second, with resolutions of 128×128 and 256×256 pixels per frame. Experimental evaluation indicates that our proposed method is capable of reconstructing 128×128 images at 4 frames per second and 256×256 images at 2 frames per second, whereas the previous online method requires about 3.75 s to reconstruct each image; the improvement in reconstruction speed is clearly discernible. Moreover, our method's reconstruction error is about half that of the previous online method.


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Brain/pathology , Diagnostic Imaging/methods , Fourier Analysis , Humans , Models, Statistical , Normal Distribution , Perfusion , Software
17.
Magn Reson Imaging ; 30(7): 1032-45, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22503088

ABSTRACT

In parallel magnetic resonance imaging (MRI), the problem is to reconstruct an image given the partial k-space scans from all the receiver coils. Depending on its position within the scanner, each coil has a different sensitivity profile. All existing parallel MRI techniques require estimating parameters pertaining to these sensitivity profiles: the sensitivity maps must be estimated for SENSE and SMASH, and the interpolation weights must be calibrated for GRAPPA and SPIRiT. The assumption is that the estimated parameters remain applicable at the operational stage; this does not always hold, and consequently the reconstruction accuracy of existing parallel MRI methods may suffer. We propose a reconstruction method called Calibration-Less Multi-coil (CaLM) MRI. As the name suggests, our method does not require estimating any parameters related to the sensitivity maps and hence does not need a calibration stage. CaLM MRI is an image-domain method that produces a sensitivity-encoded image for each coil; these images are combined by the sum-of-squares method to yield the final image. It is based on the theory of Compressed Sensing (CS). During reconstruction, the constraint that "all the coil images should appear similar" is introduced within the CS framework, leading to a CS optimization problem that promotes group sparsity. The results of our proposed method are comparable (at least for the data used in this work) with the best results obtainable from state-of-the-art methods.
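The group-sparsity constraint has a simple computational core: coefficients at the same location across all coils are shrunk jointly (the mixed l2,1 proximal step), which encodes "all coil images should appear similar." A toy 1-D version is below; in the actual method this would act on sparsifying-transform coefficients inside an iterative solver, and the threshold is an assumption.

```python
# Group soft-thresholding across coils + sum-of-squares combination.
import numpy as np

C, n = 4, 256                                   # coils, pixels
coils = np.random.randn(C, n) * 0.1             # noise differs per coil
coils[:, 100:110] += 1.0                        # shared structure across coils

norms = np.linalg.norm(coils, axis=0)           # joint magnitude per location
shrink = np.maximum(1 - 0.3 / np.maximum(norms, 1e-12), 0)
coils_gs = coils * shrink                       # group soft-thresholding (l2,1 prox)
sos = np.sqrt((coils_gs ** 2).sum(axis=0))      # final sum-of-squares combine
```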


Subject(s)
Algorithms , Brain/anatomy & histology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/instrumentation , Magnetic Resonance Imaging/methods , Calibration , Humans , Magnetic Resonance Imaging/standards , Phantoms, Imaging , Reproducibility of Results , Sensitivity and Specificity
18.
Magn Reson Imaging ; 30(1): 9-18, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21937179

ABSTRACT

The reconstruction of magnetic resonance (MR) images from partial samples of their k-space data using compressed sensing (CS)-based methods has generated a lot of interest in recent years. To reconstruct the MR images, these techniques exploit the sparsity of the image in a transform domain (wavelets, total variation, etc.). In a recent work, it was shown that it is also possible to reconstruct MR images by exploiting their rank deficiency. In this work, we show that, instead of exploiting the sparsity of the image or its rank deficiency alone, better reconstruction results can be achieved by combining transform-domain sparsity with rank deficiency. To reconstruct an MR image using both properties, this work proposes a combined l(1)-norm (of the transform coefficients) and nuclear norm (of the MR image matrix) minimization problem. Since such an optimization problem has not been encountered before, we propose and derive a first-order algorithm to solve it. The reconstruction results show that the proposed approach yields significant improvements, in terms of both visual quality and signal-to-noise ratio, over previous works that reconstruct MR images either by exploiting rank deficiency alone or by the standard CS-based technique popularly known as 'Sparse MRI.'
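The two proximal operators behind such a combined objective are easy to exhibit: soft-thresholding of wavelet coefficients for the l(1) term, and singular-value thresholding of the image matrix for the nuclear norm term. The sketch below simply averages the two prox outputs for illustration; it is not the paper's derived first-order algorithm, and the wavelet, thresholds, and test image are assumptions.

```python
# Proximal steps for combined wavelet-l1 + nuclear-norm regularization.
import numpy as np
import pywt

def svt(M, t):                              # prox of the nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0)) @ Vt

img = np.outer(np.hanning(64), np.hanning(64))   # toy low-rank "MR image"

c = pywt.wavedec2(img, "db2", level=2)
arr, sl = pywt.coeffs_to_array(c)
arr = pywt.threshold(arr, 0.01, mode="soft")     # l1 prox in wavelet domain
img_l1 = pywt.waverec2(pywt.array_to_coeffs(arr, sl, output_format="wavedec2"),
                       "db2")
img_nuc = svt(img, 0.01)                         # nuclear-norm prox
img = 0.5 * (img_l1 + img_nuc)                   # crude combination of priors
```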


Subject(s)
Algorithms , Brain/anatomy & histology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
19.
Article in English | MEDLINE | ID: mdl-23366625

ABSTRACT

As the characteristics of EEG signals change over time, updating the classifier of a brain-computer interface (BCI) over time should improve the system's performance. Developing an adaptive classifier for a self-paced BCI is not easy, however, because the user's intentions (and therefore the true labels of the EEG signals) are not known during the operation of the system. For certain applications, it may be possible to predict the labels of some EEG segments using information about the user's state (e.g., error potentials or gaze information). This study proposes a method that adaptively updates the classifier of a self-paced BCI in a supervised or semi-supervised manner, using those EEG segments whose labels can be predicted. We employ eye-position information obtained from an eye-tracker to predict the EEG labels; this eye-tracker is also used along with the self-paced BCI to form a hybrid BCI system. The results obtained from seven individuals show that the proposed algorithm outperforms the non-adaptive and other unsupervised adaptive classifiers, achieving a true positive rate of 49.7% while significantly lowering the number of false positives to only 2.2 FPs/minute.
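The adaptation loop has a simple shape: whenever auxiliary information (here, gaze) lets us infer the true label of an EEG segment, use that segment to update the classifier online. Below is a sketch with scikit-learn's SGDClassifier.partial_fit as a stand-in for the paper's classifier; features, label availability rate, and the oracle are synthetic assumptions.

```python
# Online classifier adaptation using occasionally available inferred labels.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
clf = SGDClassifier(loss="log_loss")
X0, y0 = rng.standard_normal((200, 12)), rng.integers(0, 2, 200)
clf.partial_fit(X0, y0, classes=[0, 1])          # initial supervised training

for _ in range(100):                             # online operation
    x = rng.standard_normal((1, 12))             # features of one EEG segment
    label_known = rng.random() < 0.3             # e.g., inferable from gaze
    if label_known:
        label = int(rng.integers(0, 2))          # stand-in for the inferred label
        clf.partial_fit(x, [label])              # supervised/semi-supervised update
```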


Subject(s)
Adaptation, Physiological , Brain-Computer Interfaces , Algorithms , Analysis of Variance , Electroencephalography , Eye Movements , Humans , ROC Curve
20.
Magn Reson Imaging ; 30(2): 213-21, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22055747

ABSTRACT

SENSitivity Encoding (SENSE) is a mathematically optimal parallel magnetic resonance imaging (MRI) technique when the coil sensitivities are known. In recent times, compressed sensing (CS)-based techniques have been incorporated within the SENSE reconstruction framework to recover the underlying MR image. CS-based techniques exploit the fact that MR images are sparse in a transform domain (e.g., wavelets); mathematically, this leads to an l(1)-norm-regularized SENSE reconstruction. In this work, we show that instead of reconstructing the image by exploiting its transform-domain sparsity, we can exploit its rank deficiency, leading to a nuclear norm-regularized SENSE problem. The reconstruction accuracy of our proposed method is the same as that of l(1)-norm-regularized SENSE, but our method is about an order of magnitude faster.


Subject(s)
Algorithms , Brain/anatomy & histology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Data Interpretation, Statistical , Humans , Reproducibility of Results , Sensitivity and Specificity