Results 1 - 20 of 69
1.
J Pathol Inform ; 14: 100304, 2023.
Article in English | MEDLINE | ID: mdl-36967835

ABSTRACT

Strategies such as ensemble learning and averaging techniques aim to reduce the variance of single deep neural networks. The focus of this study is on ensemble averaging techniques, which fuse the results of differently initialized and trained networks. Using micrograph cell segmentation as an application example, various ensembles were initialized and formed during network training by the following methods: (a) random seeds, (b) L1-norm pruning, (c) variable numbers of training examples, and (d) a combination of (b) and (c). Furthermore, several averaging methods in common use were evaluated in this study: the mean, the median, the location parameter of an alpha-stable distribution fit to the histograms of class membership probabilities (CMPs), and a majority vote of the members of an ensemble. The performance of these methods is demonstrated and evaluated on a micrograph cell segmentation use case, employing a state-of-the-art deep convolutional neural network (DCNN) based on the common VGG architecture. The study demonstrates that for this data set, the choice of the ensemble averaging method has only a marginal influence on the evaluation metrics (accuracy and Dice coefficient) used to measure segmentation performance. Nevertheless, for practical applications, a simple and fast estimate of the mean of the distribution is highly competitive with the most sophisticated representation of the CMP distributions by an alpha-stable distribution, and hence appears to be the most appropriate ensemble averaging method for this application.
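The three simplest fusion rules named above (mean, median, majority vote of the CMPs) can be sketched in a few lines of numpy; the toy probability arrays below are invented for illustration:

```python
import numpy as np

def fuse_ensemble(cmps, method="mean"):
    """Fuse class-membership probabilities (CMPs) of an ensemble.

    cmps: array of shape (n_members, n_pixels, n_classes).
    Returns the predicted class label per pixel.
    """
    if method == "mean":
        fused = cmps.mean(axis=0)               # average CMPs over members
        return fused.argmax(axis=-1)
    if method == "median":
        fused = np.median(cmps, axis=0)         # per-class median CMP
        return fused.argmax(axis=-1)
    if method == "majority":
        votes = cmps.argmax(axis=-1)            # each member votes a label
        n_classes = cmps.shape[-1]
        counts = np.apply_along_axis(
            lambda v: np.bincount(v, minlength=n_classes), 0, votes)
        return counts.argmax(axis=0)
    raise ValueError(f"unknown method: {method}")

# Three ensemble members, two pixels, two classes (foreground/background).
cmps = np.array([
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.6, 0.4], [0.7, 0.3]],
])
labels_mean = fuse_ensemble(cmps, "mean")
labels_vote = fuse_ensemble(cmps, "majority")
```

On this toy input all three rules agree, which mirrors the study's finding that the choice of averaging method matters little for the final labels.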

2.
J Pathol Inform ; 13: 100114, 2022.
Article in English | MEDLINE | ID: mdl-36268092

ABSTRACT

In this work, the network complexity is reduced with a concomitant reduction in the number of necessary training examples. The focus is thus on the dependence of proper evaluation metrics on the number of adjustable parameters of the considered deep neural network. The data set used encompassed Hematoxylin and Eosin (H&E) stained cell images provided by various clinics. We used a deep convolutional neural network to determine the relation between a model's complexity, its concomitant set of parameters, and the size of the training sample necessary to achieve a given classification accuracy. The complexity of the deep neural networks was reduced by pruning a certain fraction of the filters in the network. As expected, the unpruned neural network showed the best performance. The network with the highest number of trainable parameters achieved, within the estimated standard error of the optimized cross-entropy loss, the best results up to 30% pruning. Strongly pruned networks remain highly viable, although classification accuracy declines quickly with a decreasing number of training patterns. However, up to a pruning ratio of 40%, we found comparable performance of pruned and unpruned deep convolutional neural networks (DCNN) and densely connected convolutional networks (DCCN).
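The filter-selection step of L1-norm pruning (ranking filters by weight magnitude and dropping the smallest fraction; the surrounding fine-tuning pipeline is not shown) can be sketched as follows, with invented filter shapes:

```python
import numpy as np

def prune_filters_l1(weights, pruning_ratio):
    """Rank convolutional filters by the L1 norm of their weights and
    drop the smallest fraction.

    weights: array of shape (n_filters, ...); returns sorted indices
    of the filters that are kept.
    """
    n_filters = weights.shape[0]
    l1 = np.abs(weights.reshape(n_filters, -1)).sum(axis=1)
    n_keep = n_filters - int(round(pruning_ratio * n_filters))
    keep = np.argsort(l1)[::-1][:n_keep]     # largest-L1 filters survive
    return np.sort(keep)

rng = np.random.default_rng(0)
w = rng.normal(size=(10, 3, 3, 3))           # 10 filters, 3x3 kernels, 3 channels
w[2] *= 0.01                                 # make filter 2 nearly dead
w[7] *= 0.01                                 # ... and filter 7
kept = prune_filters_l1(w, pruning_ratio=0.2)  # prune 20% = 2 of 10 filters
```

The two artificially weakened filters are the ones removed, which is exactly the heuristic's intent: low-magnitude filters contribute little to the activations.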

3.
Netw Neurosci ; 6(3): 665-701, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36607180

ABSTRACT

Comprehending the interplay between spatial and temporal characteristics of neural dynamics can contribute to our understanding of information processing in the human brain. Graph neural networks (GNNs) provide a new possibility to interpret graph-structured signals like those observed in complex brain networks. In our study we compare different spatiotemporal GNN architectures and assess their ability to model neural activity distributions obtained in functional MRI (fMRI) studies. We evaluate the performance of the GNN models on a variety of scenarios in MRI studies and also compare it to a vector autoregression (VAR) model, which is currently often used for directed functional connectivity analysis. We show that by learning localized functional interactions on the anatomical substrate, GNN-based approaches are able to robustly scale to large network studies, even when available data are scarce. By including anatomical connectivity as the physical substrate for information propagation, such GNNs also provide a multimodal perspective on directed connectivity analysis, offering a novel possibility to investigate the spatiotemporal dynamics of brain networks.

4.
Sci Rep ; 11(1): 8061, 2021 04 13.
Article in English | MEDLINE | ID: mdl-33850173

ABSTRACT

A central question in neuroscience is how self-organizing dynamic interactions in the brain emerge on their relatively static structural backbone. Due to the complexity of spatial and temporal dependencies between different brain areas, fully comprehending the interplay between structure and function remains challenging and an area of intense research. In this paper we present a graph neural network (GNN) framework to describe functional interactions based on the structural anatomical layout. A GNN allows us to process graph-structured spatio-temporal signals, providing a possibility to combine structural information derived from diffusion tensor imaging (DTI) with temporal neural activity profiles, such as those observed in functional magnetic resonance imaging (fMRI). Moreover, dynamic interactions between different brain regions discovered by this data-driven approach can provide a multi-modal measure of causal connectivity strength. We assess the proposed model's accuracy by evaluating its capability to replicate empirically observed neural activation profiles, and compare its performance to that of a vector autoregression (VAR) model, as typically used in Granger causality analysis. We show that GNNs are able to capture long-term dependencies in the data and also scale computationally to the analysis of large-scale networks. Finally, we confirm that features learned by a GNN can generalize across MRI scanner types and acquisition protocols by demonstrating that performance on small datasets can be improved by pre-training the GNN on data from an earlier study. We conclude that the proposed multi-modal GNN framework can provide a novel perspective on the structure-function relationship in the brain. Accordingly, this approach appears promising for characterizing the information flow in brain networks.
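The VAR baseline used for comparison above amounts to a least-squares fit of a linear model x[t] = A x[t-1] + noise; a minimal sketch, with a small invented three-region network standing in for real fMRI data:

```python
import numpy as np

def fit_var1(x):
    """Least-squares fit of a first-order vector autoregression
    x[t] = A @ x[t-1] + noise.

    x: array of shape (T, n_regions); returns the estimated A.
    """
    past, present = x[:-1], x[1:]
    # Solve present ≈ past @ A.T in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(past, present, rcond=None)
    return A_T.T

# Simulate a 3-region network with a known interaction matrix.
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.2, 0.0],
                   [0.0, 0.4, 0.3],
                   [0.1, 0.0, 0.6]])
x = np.zeros((2000, 3))
for t in range(1, 2000):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.normal(size=3)
A_hat = fit_var1(x)
```

With enough samples the off-diagonal entries of A_hat recover the directed interactions, which is the sense in which VAR coefficients are read as directed functional connectivity.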


Subject(s)
Brain , Diffusion Tensor Imaging , Magnetic Resonance Imaging , Neural Networks, Computer , Humans
5.
Ann Nucl Med ; 34(4): 244-253, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32114682

ABSTRACT

BACKGROUND: Patients with advanced neuroendocrine tumors (NETs) of the midgut are suitable candidates for 177Lu-DOTATOC therapy. Integrated SPECT/CT systems have the potential to help improve the accuracy of patient-specific tumor dosimetry. Dose estimates for target organs are generally obtained using the Medical Internal Radiation Dose (MIRD) scheme. We present a novel Monte Carlo-based voxel-wise dosimetry approach to determine organ doses and tumor-specific total tumor doses (TTD). METHODS: A cohort of 14 patients with histologically confirmed metastasized NETs of the midgut (11 men, 3 women, 62.3 ± 11.0 years of age) underwent a total of 39 cycles of 177Lu-DOTATOC therapy (mean 2.8 cycles, SD ± 1 cycle). After the first cycle of therapy, regions of interest were defined manually on the SPECT/CT images for the kidneys, the spleen, and all 198 tracer-positive tumor lesions in the field of view. Four SPECT images, taken at 4 h, 24 h, 48 h and 72 h after injection of the radiopharmaceutical, were used to determine the effective half-lives in the structures of interest. The absorbed doses were calculated by a three-dimensional dosimetry method based on Monte Carlo simulations. The TTD was calculated as the sum of the products of the single tumor doses with the single tumor volumes, divided by the sum of all tumor volumes. RESULTS: The average dose values per cycle were 3.41 ± 1.28 Gy (1.91-6.22 Gy) for the kidneys, 4.40 ± 2.90 Gy (1.14-11.22 Gy) for the spleen, and 9.70 ± 8.96 Gy (1.47-39.49 Gy) for all 177Lu-DOTATOC-positive tumor lesions. Low- and intermediate-grade tumors (G1-2) absorbed a higher TTD than high-grade tumors (G3) (signed-rank test, p < 0.05). The pre-therapeutic chromogranin A (CgA) value and the TTD correlated significantly (Pearson correlation: r = 0.67, p = 0.01). A higher TTD resulted in a significant decrease of CgA after therapy.
CONCLUSION: These results suggest that Monte Carlo-based voxel-wise dosimetry is a very promising tool for predicting the absorbed TTD based on histological and clinical parameters.
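The TTD definition above is a volume-weighted mean of the single-lesion doses; a minimal sketch with hypothetical lesion values:

```python
import numpy as np

def total_tumor_dose(doses_gy, volumes_ml):
    """Volume-weighted mean tumor dose: sum(d_i * v_i) / sum(v_i)."""
    doses_gy = np.asarray(doses_gy, dtype=float)
    volumes_ml = np.asarray(volumes_ml, dtype=float)
    return float((doses_gy * volumes_ml).sum() / volumes_ml.sum())

# Hypothetical lesions: one large low-dose and one small high-dose lesion.
ttd = total_tumor_dose([4.0, 20.0], [9.0, 1.0])
# (4*9 + 20*1) / (9 + 1) = 5.6 Gy: the large lesion dominates the TTD.
```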


Subject(s)
Antineoplastic Agents/pharmacokinetics , Lutetium/pharmacokinetics , Neuroendocrine Tumors/radiotherapy , Octreotide/analogs & derivatives , Organometallic Compounds/pharmacology , Radioisotopes/pharmacokinetics , Radiopharmaceuticals/pharmacokinetics , Aged , Antineoplastic Agents/administration & dosage , Chromogranin A/radiation effects , Female , Humans , Lutetium/administration & dosage , Male , Middle Aged , Monte Carlo Method , Octreotide/administration & dosage , Octreotide/chemistry , Octreotide/pharmacokinetics , Organometallic Compounds/administration & dosage , Organometallic Compounds/pharmacokinetics , Radioisotopes/administration & dosage , Radiometry , Radiopharmaceuticals/administration & dosage , Radiotherapy Dosage , Single Photon Emission Computed Tomography Computed Tomography , Treatment Outcome
6.
Phys Med Biol ; 65(3): 035007, 2020 02 04.
Article in English | MEDLINE | ID: mdl-31881547

ABSTRACT

Currently, methods for predicting the absorbed dose after administering a radiopharmaceutical are rather crude in daily clinical practice. Most importantly, individual tissue density distributions as well as local variations of the concentration of the radiopharmaceutical are commonly neglected. The current study proposes machine learning techniques, namely a Green's function-based empirical mode decomposition and deep learning methods built on U-net architectures, in conjunction with soft-tissue-kernel Monte Carlo (MC) simulations, to overcome current limitations in the precision and reliability of dose estimations for clinical dosimetric applications. We present a hybrid method (DNN-EMD) based on deep neural networks (DNN) in combination with empirical mode decomposition (EMD) techniques. The algorithm receives X-ray computed tomography (CT) tissue density maps and dose maps, estimated according to the MIRD protocol, i.e. employing whole-organ S-values and related time-integrated activities (TIAs) derived from measured SPECT distributions of the 177Lu radionuclide, and learns to predict individual absorbed dose distributions. In a second step, the density maps are replaced by their intrinsic modes as deduced from an EMD analysis. The system is trained using individual full MC simulation results as reference. Data from a patient cohort of 26 subjects are reported in this study. The proposed methods were validated employing a leave-one-out cross-validation technique. Deviations of the estimated dose from the corresponding MC results corroborate a superior performance of the newly proposed hybrid DNN-EMD method compared to the related MIRD DVK dose calculation. Not only are the mean deviations much smaller with the new method, but the related variances are also much reduced. If intrinsic modes of the tissue density maps are input to the algorithm, the variances are reduced even further, though the mean deviations are less affected.
The newly proposed hybrid DNN-EMD method for individualized radiation dose prediction outperforms the MIRD DVK dose calculation method. It is fast enough to be of use in daily clinical practice.


Subject(s)
Algorithms , Deep Learning , Lutetium/pharmacokinetics , Lutetium/therapeutic use , Monte Carlo Method , Neoplasms/radiotherapy , Organs at Risk/radiation effects , Radioisotopes/pharmacokinetics , Radioisotopes/therapeutic use , Glutamate Carboxypeptidase II/metabolism , Humans , Neoplasms/metabolism , Neural Networks, Computer , Radiation Dosage , Radiopharmaceuticals/therapeutic use , Reproducibility of Results , Tissue Distribution , Tomography, X-Ray Computed/methods
7.
Phys Med Biol ; 64(24): 245011, 2019 12 19.
Article in English | MEDLINE | ID: mdl-31766045

ABSTRACT

In [Formula: see text] radionuclide therapies, dosimetry is used to determine the patient-individual dose burden. Standard approaches provide whole-organ doses only. For assessing dose heterogeneity inside organs, voxel-wise dosimetry based on 3D SPECT/CT imaging can be applied. Often this is achieved by convolving voxel-wise time-activity curves with appropriate dose-voxel kernels (DVKs). The DVKs are meant to model dose deposition and can be more accurate if modelled for the specific tissue type under consideration. In the literature, DVKs are often not adapted to these inhomogeneities, or simple approximation schemes are applied. For 26 patients who had previously undergone a [Formula: see text]-PSMA or -DOTATOC therapy, decay maps, mass-density maps, and tissue-type maps were derived from SPECT/CT acquisitions. These were used for a voxel-based dosimetry based on convolution with DVKs (each of size [Formula: see text]) obtained by four different DVK methods proposed in the literature. The simplest considers only a spatially constant soft-tissue DVK (herein named 'constant'); the others either take into account only the local density at the center voxel of the DVK ('center-voxel'), scale each voxel linearly according to the proper mass density deduced from the CT image ('density'), or consider both the local mass density and the direct path between the center voxel and any voxel in its surrounding ('percentage'). Deviations between the resulting dose values and those from full Monte Carlo (MC) simulations were compared for selected organs and tissue types. For each DVK method, inter-patient variability was considerable, showing both under- and over-estimation of the absorbed dose compared to the MC result for all tissue densities higher than soft tissue.
In kidneys and spleen, 'constant' and 'density'-scaled DVKs achieved dose estimates with the smallest deviations from the full MC gold standard (∼[Formula: see text] underestimation). For low- and high-density tissue types such as lung and adipose or bone tissue, the alternative 'center-voxel'- and 'percentage'-scaled DVK methods achieved superior results, respectively. Concerning computational load, dose estimation with the 'constant' DVK method needs about 1.1 s per patient, center-voxel scaling 1.2 s, density scaling 1.4 s, and percentage scaling 860.3 s per patient. In this study, encompassing a large patient cohort and four different DVK estimation methods, no single DVK-adaptation method was consistently better than any other in the case of soft-tissue kernels. Hence, in such cases the simplest DVK method, labeled 'constant', suffices. In the case of tumors, often located in tissues of low (lung) or high (bone) density, the more sophisticated DVK methods excel. The high inter-patient variability indicates that a sufficiently large patient cohort needs to be involved when evaluating new algorithms.
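The 'constant' and 'density' methods contrasted above can be sketched as one convolution plus an optional per-voxel scaling. The kernel, map sizes, and the scaling direction (dose per unit mass falls as local density rises, i.e. a factor rho_soft / rho_local) are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def dose_dvk(decay_map, kernel, density=None, rho_soft=1.04):
    """Voxel-wise dose via DVK convolution.

    'constant' method: plain convolution with a soft-tissue kernel.
    'density' method (assumed form): additionally scale each voxel by
    rho_soft / rho_local.
    """
    dose = convolve(decay_map, kernel, mode='constant', cval=0.0)
    if density is not None:
        dose = dose * (rho_soft / density)
    return dose

decays = np.zeros((9, 9, 9))
decays[4, 4, 4] = 1.0                      # single point source
kernel = np.full((3, 3, 3), 1 / 27.0)      # toy normalized soft-tissue DVK
rho = np.full((9, 9, 9), 1.04)             # uniform soft tissue, g/cm^3
d_const = dose_dvk(decays, kernel)
d_dens = dose_dvk(decays, kernel, density=rho)
```

In uniform soft tissue the two methods coincide, consistent with the finding that the simple 'constant' kernel suffices there; they diverge only where the density map departs from soft tissue.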


Subject(s)
Algorithms , Radiation Dosage , Radiotherapy Planning, Computer-Assisted/methods , Aged , Aged, 80 and over , Dipeptides/therapeutic use , Female , Heterocyclic Compounds, 1-Ring/therapeutic use , Humans , Lutetium , Male , Middle Aged , Octreotide/analogs & derivatives , Octreotide/therapeutic use , Prostate-Specific Antigen , Radiopharmaceuticals/therapeutic use , Radiotherapy/methods , Radiotherapy Dosage , Single Photon Emission Computed Tomography Computed Tomography/methods
8.
Ann Nucl Med ; 33(7): 521-531, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31119607

ABSTRACT

INTRODUCTION: In any radionuclide therapy, the absorbed dose needs to be estimated from two factors: the time-integrated activity of the administered radiopharmaceutical and the patient-specific dose kernel. In this study, we consider the uncertainty with which such a dose estimation can be achieved in a clinical environment. METHODS: To calculate the total error of the dose estimation we considered the following contributions: the error resulting from computing the time-integrated activity, the difference between the S-value and the patient-specific full Monte Carlo simulation, the error from segmenting the volume of interest (kidney), and the intrinsic error of the activimeter. RESULTS: The total relative error in dose estimation can amount to 25.0%, composed of the error of the time-integrated activity (17.1%), the error of the S-value (16.7%), the segmentation error (5.4%), and the activimeter accuracy (5.0%). CONCLUSION: Errors from estimating the time-integrated activity and approximations applied in dose kernel computations contribute about equally and represent the dominant contributions, far exceeding the contributions from VOI segmentation and activimeter accuracy.
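The reported 25.0% total is consistent with combining the four contributions in quadrature, which suggests the error sources were treated as independent; a quick check:

```python
import math

def total_relative_error(*components):
    """Combine independent relative errors (in %) in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Components reported in the abstract:
# time-integrated activity, S-value, VOI segmentation, activimeter.
total = total_relative_error(17.1, 16.7, 5.4, 5.0)
# sqrt(17.1^2 + 16.7^2 + 5.4^2 + 5.0^2) ≈ 25.0 %
```

Note how the two ~17% terms dominate: halving the 5% terms would barely move the total, matching the abstract's conclusion.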


Subject(s)
Lutetium/therapeutic use , Radioisotopes/therapeutic use , Radiometry , Humans , Monte Carlo Method , Phantoms, Imaging , Precision Medicine , Radiotherapy Dosage , Time Factors , Tomography, Emission-Computed, Single-Photon
9.
Med Phys ; 46(5): 2025-2030, 2019 May.
Article in English | MEDLINE | ID: mdl-30748029

ABSTRACT

PURPOSE: High-dose-rate brachytherapy applies intense and destructive radiation. A treatment plan defines the radiation source dwell positions so as to avoid irradiating healthy tissue. This study discusses methods to quantify positional changes of the source locations across the various treatment sessions. METHODS: Electromagnetic tracking (EMT) localizes the radiation source during the treatment sessions. However, in each session the position of the patient relative to the field generator changes. Hence, the measured dwell point sets need to be registered onto each other to render them comparable. Two point set registration techniques are compared: a probabilistic method called coherent point drift (CPD) and a multidimensional scaling (MDS) technique. RESULTS: Both enable using EMT without external registration and achieve very similar results with respect to determining the dwell positions of the radiation source. Still, MDS achieves smaller grand average deviations (CPD-rPSR: MD = 2.55 mm, MDS-PSR: MD = 2.15 mm) between subsequent dwell position determinations, which also show less variance (CPD-rPSR: IQR = 4 mm, MDS-PSR: IQR = 3 mm). Furthermore, MDS is not based on approximations and does not need an iterative procedure to track sensor positions inside the implanted catheters. CONCLUSION: Although both methods achieve similar results, MDS is to be preferred over rigid CPD, while nonrigid CPD is unsuitable as it does not preserve topology.
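CPD and MDS themselves are too involved for a short sketch; as a hedged illustration of the registration step, under the simplifying assumptions of known point correspondences and a purely rigid session-to-session motion, the classic orthogonal Procrustes (Kabsch) solution via SVD recovers the rotation and translation exactly:

```python
import numpy as np

def rigid_register(src, dst):
    """Find rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    for corresponding point sets (orthogonal Procrustes via SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical dwell positions (mm) and a known session-to-session motion.
rng = np.random.default_rng(2)
dwell = rng.uniform(size=(20, 3)) * 50.0
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = dwell @ R_true.T + np.array([5.0, -3.0, 2.0])
R, t = rigid_register(dwell, moved)
aligned = dwell @ R.T + t
```

Real EMT data violate both assumptions (correspondences are unknown and catheters deform), which is precisely why the paper turns to CPD and MDS instead.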


Subject(s)
Brachytherapy/methods , Breast Neoplasms/radiotherapy , Image Processing, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods , Brachytherapy/instrumentation , Breast Neoplasms/pathology , Electromagnetic Phenomena , Equipment Design , Female , Humans , Organs at Risk/radiation effects , Radiotherapy Dosage , Tomography, X-Ray Computed/methods
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 194-197, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31945876

ABSTRACT

Independent component analysis (ICA), as a data-driven method, has proven to be a powerful tool for functional magnetic resonance imaging (fMRI) data analysis. One drawback of this multivariate approach is that it does not naturally extend to the analysis of group studies. Therefore, various techniques have been proposed to overcome this limitation of ICA. In this paper a novel ICA-based workflow for extracting resting-state networks from fMRI group studies is proposed. An empirical mode decomposition (EMD) is used to generate reference signals in a data-driven manner, which can be incorporated into a constrained version of ICA (cICA), helping to overcome the inherent ambiguities. The results of the proposed workflow are then compared to those obtained by a widely used group ICA approach. It is demonstrated that intrinsic modes, extracted by EMD, are suitable to serve as references for cICA to obtain typical resting-state patterns that are consistent over subjects. The novel processing pipeline makes it transparent to the user how comparable activity patterns across subjects emerge, and the trade-off between similarity across subjects and preservation of individual features can be adjusted and adapted to different requirements.


Subject(s)
Algorithms , Magnetic Resonance Imaging , Brain , Brain Mapping , Humans , Principal Component Analysis
11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3888-3891, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946722

ABSTRACT

This work presents an unsupervised mining strategy applied to an independent component analysis (ICA) of segments of data collected while participants answer the items of the Halstead Category Test (HCT). This new methodology was developed to obtain signal components at the trial level and thereby to study signal dynamics that are not accessible within participants' ensemble average signals. The study focuses on the signal component that can be elicited by the binary visual feedback which is part of the HCT protocol. The experimental study is conducted using a cohort of 58 participants.


Subject(s)
Electroencephalography , Scalp , Signal Processing, Computer-Assisted , Trail Making Test , Algorithms , Artifacts , Female , Humans , Male
12.
PLoS One ; 12(9): e0183608, 2017.
Article in English | MEDLINE | ID: mdl-28934238

ABSTRACT

During high-dose-rate brachytherapy (HDR-BT), the spatial position of the radiation source inside catheters implanted into the female breast is determined via electromagnetic tracking (EMT). Dwell positions and dwell times of the radiation source are established, relative to the patient's anatomy, from an initial X-ray CT image. During the irradiation treatment, catheter displacements can occur due to patient movements. The current study develops an automatic analysis tool for EMT data sets recorded with a solenoid sensor to assure concordance of the source movement with the treatment plan. The tool combines machine learning techniques such as multi-dimensional scaling (MDS), ensemble empirical mode decomposition (EEMD), singular spectrum analysis (SSA) and particle filtering (PF) to precisely detect and quantify any mismatch between the treatment plan and the actual EMT measurements. We demonstrate that movement artifacts as well as technical signal distortions can be removed automatically and reliably, resulting in artifact-free reconstructed signals. This is a prerequisite for a highly accurate determination of any deviations of dwell positions from the treatment plan.
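Of the techniques listed, singular spectrum analysis is the most compact to sketch: embed the 1D sensor signal in a Hankel trajectory matrix, take its SVD, and rebuild one additive component per singular triple by anti-diagonal averaging. Window length and test signal below are illustrative, not the paper's settings:

```python
import numpy as np

def ssa(signal, window, n_components):
    """Basic singular spectrum analysis decomposition."""
    n = len(signal)
    k = n - window + 1
    # Hankel trajectory matrix: lagged copies of the signal as columns.
    X = np.column_stack([signal[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(n_components):
        Xi = s[i] * np.outer(U[:, i], Vt[i])      # rank-1 piece of X
        # Anti-diagonal averaging maps the matrix back to a 1D series.
        comp = np.array([np.mean(np.diag(Xi[:, ::-1], j))
                         for j in range(k - 1, -window, -1)])
        comps.append(comp)
    return np.array(comps)

# Toy signal: slow trend (e.g. breathing drift) plus an oscillation.
x = np.linspace(0.0, 1.0, 100) + np.sin(np.arange(100) * 0.3)
comps = ssa(x, window=20, n_components=20)
```

Keeping only the leading components and discarding the rest is the artifact-removal step: slow, high-energy structure separates from low-energy distortions. Summing all components reproduces the signal exactly.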


Subject(s)
Brachytherapy/instrumentation , Breast Neoplasms/radiotherapy , Catheters , Electromagnetic Phenomena , Radiation Dosage , Aged , Automation , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted , Male , Middle Aged , Motion , Phantoms, Imaging , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , Tomography, X-Ray Computed
13.
Comput Methods Programs Biomed ; 151: 91-99, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28947009

ABSTRACT

BACKGROUND AND OBJECTIVE: The study follows the proposal of decomposing a given data matrix into a product of independent spatial and temporal component matrices. A multivariate decomposition approach is presented, based on an approximate diagonalization of a set of matrices computed using a latent space representation. METHODS: The proposed methodology follows an algebraic approach, which is common to spatial, temporal, and spatiotemporal blind source separation algorithms. More specifically, the algebraic approach relies on singular value decomposition techniques, which avoid computationally costly and numerically unstable matrix inversion. The method is equally applicable to correlation matrices determined from second-order correlations or by considering fourth-order correlations. RESULTS: The resulting algorithms are applied to fMRI data sets either to extract the underlying fMRI components or to extract connectivity maps from resting-state fMRI data collected for a dynamic functional connectivity analysis. Intriguingly, our algorithm shows increased spatial specificity compared to common approaches, while temporal precision stays similar. CONCLUSION: The study presents a novel spatiotemporal blind source separation algorithm which is robust and avoids parameters that are difficult to fine-tune. Applied to experimental data sets, the new method yields highly confined and focused areas with the least spatial extent in the retinotopy case, and results similar to other blind source separation algorithms in the dynamic functional connectivity analyses. We therefore conclude that our novel algorithm is highly competitive and yields results which are superior, or at least similar, to those of existing approaches.
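The paper's algorithm itself is more elaborate, but the algebraic core it shares with second-order blind source separation can be illustrated by the classic AMUSE scheme (a deliberately simpler stand-in): whiten the data, then diagonalize one time-lagged covariance of the whitened data via a symmetric eigendecomposition, with no full-matrix inversion. Signals and lag below are invented:

```python
import numpy as np

def amuse(x, lag):
    """Second-order blind source separation (AMUSE variant).

    x: array of shape (T, n_channels); returns the unmixing matrix B
    such that x @ B.T recovers the sources up to permutation/scale.
    """
    x = x - x.mean(axis=0)
    T = len(x)
    d, E = np.linalg.eigh(x.T @ x / T)     # covariance eigendecomposition
    W = (E / np.sqrt(d)).T                 # whitener: W @ C0 @ W.T = I
    z = x @ W.T
    C = z[:-lag].T @ z[lag:] / (T - lag)   # lagged covariance, whitened data
    C = (C + C.T) / 2                      # symmetrize against sampling noise
    _, R = np.linalg.eigh(C)               # rotation completing the unmixing
    return R.T @ W

# Two sinusoidal sources with distinct autocorrelations, linearly mixed.
t = np.arange(5000) / 1000.0
S = np.vstack([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 13 * t)]).T
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T
B = amuse(X, lag=25)
G = B @ A        # should be close to a scaled permutation matrix
```

A near-permutation G means each recovered component matches one source, which is the success criterion for any such separation.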


Subject(s)
Algorithms , Brain/diagnostic imaging , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Connectome , Humans
14.
Phys Med Biol ; 62(19): 7617-7640, 2017 Sep 12.
Article in English | MEDLINE | ID: mdl-28796645

ABSTRACT

Modern radiotherapy of female breast cancers often employs high dose rate brachytherapy, where a radioactive source is moved inside catheters, implanted in the female breast, according to a prescribed treatment plan. Source localization relative to the patient's anatomy is determined with solenoid sensors whose spatial positions are measured with an electromagnetic tracking system. Precise determination of the sensor dwell positions is of utmost importance to assure irradiation of the cancerous tissue according to the treatment plan. We present a hybrid data analysis system which combines multi-dimensional scaling with particle filters to precisely determine sensor dwell positions in the catheters during subsequent radiation treatment sessions. Both techniques are complemented with empirical mode decomposition for the removal of superimposed breathing artifacts. We show that the hybrid model robustly and reliably determines the spatial positions of all catheters used during the treatment and precisely determines any deviations of actual sensor dwell positions from the treatment plan. The hybrid system relies only on sensor positions measured with an EMT system and relates them to the spatial positions of the implanted catheters as initially determined with X-ray computed tomography.


Subject(s)
Brachytherapy/instrumentation , Breast Neoplasms/radiotherapy , Electromagnetic Phenomena , Phantoms, Imaging , Radiotherapy Planning, Computer-Assisted/methods , Aged , Artifacts , Breast Neoplasms/diagnostic imaging , Catheters , Female , Humans , Male , Middle Aged , Radiotherapy Dosage , Tomography, X-Ray Computed/methods
15.
Phys Med Biol ; 62(20): 7959-7980, 2017 Oct 03.
Article in English | MEDLINE | ID: mdl-28854159

ABSTRACT

High dose rate brachytherapy calls for frequent verification of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions, which are well represented by α-stable distributions. The newly proposed data analysis provides dwell position deviations with high accuracy and, furthermore, offers a convenient visualization of the actual shapes of the catheters which guide the radiation source during the treatment.


Subject(s)
Brachytherapy/instrumentation , Catheters , Electromagnetic Phenomena , Neoplasms/radiotherapy , Phantoms, Imaging , Radiotherapy Planning, Computer-Assisted/methods , Brachytherapy/methods , Humans , Neoplasms/diagnostic imaging , Radiotherapy Dosage
16.
J Neural Eng ; 14(1): 016011, 2017 02.
Article in English | MEDLINE | ID: mdl-27991435

ABSTRACT

OBJECTIVE: We propose a combination of constrained independent component analysis (cICA) with ensemble empirical mode decomposition (EEMD) to analyze electroencephalographic recordings from depressed or schizophrenic subjects during olfactory stimulation. APPROACH: EEMD serves to extract the intrinsic modes (IMFs) underlying the recorded EEG time series. The latter then serve as reference signals to extract the most similar underlying independent component within a constrained ICA. The extracted modes are further analyzed by considering their power spectra. MAIN RESULTS: The analysis of the extracted modes reveals clear differences in the related power spectra between the disease characteristics of depressed and schizophrenic patients. Such differences appear in the high-frequency γ-band of the intrinsic modes, but also, in much more detail, in the low-frequency α-, θ- and δ-bands. SIGNIFICANCE: The proposed method provides various means to discriminate between both clinical pictures in a clinical environment.


Subject(s)
Depression/diagnosis , Depression/physiopathology , Electroencephalography/methods , Olfaction Disorders/diagnosis , Olfaction Disorders/physiopathology , Schizophrenia/diagnosis , Schizophrenia/physiopathology , Adult , Brain/physiopathology , Depression/complications , Female , Humans , Male , Olfaction Disorders/complications , Olfactory Perception , Principal Component Analysis , Reproducibility of Results , Schizophrenia/complications , Sensitivity and Specificity , Young Adult
17.
Curr Alzheimer Res ; 13(7): 838-44, 2016.
Article in English | MEDLINE | ID: mdl-27087440

ABSTRACT

In this work, we present a fully automatic computer-aided diagnosis method for the early diagnosis of Alzheimer's disease. We study the distance between classes (labelled as normal controls and possible Alzheimer's disease) calculated in 116 regions of the brain using Welch's t-test, and select the regions with the highest Welch's t-test values as features for classification. Furthermore, we also study the least discriminative regions according to the t-test (regions with the lowest absolute t-test values) in order to use them as a reference. We show that the means and standard deviations of the intensity values in these two regions, the least and the most discriminative according to Welch's t-test, can be combined into a vector. The modulus and phase of this vector reveal statistical differences between groups which can be used to improve the classification task, and we show how they can be used as input for a support vector machine classifier. The proposed methodology is tested on a database of 70 SPECT brain images, yielding an accuracy of up to 91.5% for a wide range of selected voxels.
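The region-ranking step described above can be sketched with scipy's Welch variant of the two-sample t-test; the group sizes, region count, and effect are invented for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind

def rank_regions_welch(group_a, group_b):
    """Rank brain regions by |Welch's t| between two groups.

    group_a, group_b: (n_subjects, n_regions) arrays of mean regional
    intensities.  Returns region indices, most discriminative first.
    """
    t, _ = ttest_ind(group_a, group_b, equal_var=False, axis=0)
    return np.argsort(-np.abs(t))

rng = np.random.default_rng(3)
controls = rng.normal(0.0, 1.0, size=(35, 5))
patients = rng.normal(0.0, 1.0, size=(35, 5))
patients[:, 2] += 3.0        # strong group difference in region 2 only
order = rank_regions_welch(controls, patients)
```

The most discriminative region lands first in `order` and the tail of the ranking supplies the low-|t| reference regions used in the paper's modulus/phase construction.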


Subject(s)
Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Brain/diagnostic imaging , Brain/pathology , Female , Humans , Image Interpretation, Computer-Assisted , Male , Support Vector Machine , Tomography, Emission-Computed, Single-Photon
18.
Curr Alzheimer Res ; 13(6): 695-707, 2016.
Article in English | MEDLINE | ID: mdl-27001676

ABSTRACT

Positron emission tomography (PET) provides a functional imaging modality for detecting signs of dementia in human brains. Two-dimensional empirical mode decomposition (2D-EMD) provides a means to analyze such images. It decomposes the latter into characteristic modes which represent textures on different spatial scales. These textures provide informative features for subsequent classification purposes. The study proposes a new EMD variant which relies on a Green's function-based estimation method, including a tension parameter, to quickly and reliably estimate the envelope hypersurfaces interpolating the extremal points of the two-dimensional intensity distribution of the images. The new method represents a fast and stable bi-dimensional EMD which speeds up computations roughly 100-fold. In combination with proper classifiers, these exploratory feature extraction techniques can form a computer-aided diagnosis (CAD) system to assist clinicians in identifying various diseases from functional images alone. PET images of subjects suffering from Alzheimer's disease are used to illustrate this ability.


Subject(s)
Alzheimer Disease/diagnostic imaging , Alzheimer Disease/physiopathology , Brain Mapping/methods , Brain/diagnostic imaging , Brain/physiopathology , Positron-Emission Tomography/methods , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/physiopathology , Fluorodeoxyglucose F18 , Humans , Nonlinear Dynamics , Radiopharmaceuticals , Support Vector Machine
19.
J Neurosci Methods ; 253: 193-205, 2015 Sep 30.
Article in English | MEDLINE | ID: mdl-26162614

ABSTRACT

BACKGROUND: Empirical mode decomposition (EMD) is an empirical data decomposition technique. Recently, there has been growing interest in applying EMD in the biomedical field. NEW METHOD: EMDLAB is an extensible plug-in for the EEGLAB toolbox, an open software environment for electrophysiological data analysis. RESULTS: EMDLAB can be used to perform, easily and effectively, four common types of EMD on EEG data: plain EMD, ensemble EMD (EEMD), weighted sliding EMD (wSEMD) and multivariate EMD (MEMD). In addition, EMDLAB is a user-friendly toolbox that is closely integrated into the EEGLAB toolbox. COMPARISON WITH EXISTING METHODS: EMDLAB gains an advantage over other open-source toolboxes by exploiting the advantageous visualization capabilities of EEGLAB for the extracted intrinsic mode functions (IMFs) and event-related modes (ERMs) of the signal. CONCLUSIONS: EMDLAB is a reliable, efficient, and automated solution for extracting and visualizing IMFs and ERMs obtained by EMD algorithms in EEG studies.


Subject(s)
Algorithms , Brain/physiology , Signal Processing, Computer-Assisted , Software , Electroencephalography , Electromyography , Humans , Nonlinear Dynamics
20.
Comput Biol Med ; 43(5): 559-67, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23485201

ABSTRACT

This work presents a study of the distribution of grey matter (GM) and white matter (WM) in brain magnetic resonance imaging (MRI). The distributions of GM and WM are characterized using a mixture of α-stable distributions. A Bayesian α-stable mixture model for histogram data is presented, and the unknown parameters are sampled using the Metropolis-Hastings algorithm. The proposed methodology is tested on 18 real images from the MRI brain segmentation repository. The GM and WM distributions are accurately estimated. The α-stable mixture model presented in this paper can be used as a preliminary step in more complex MRI segmentation procedures using spatial information. Furthermore, because the α-stable distribution is a generalization of the Gaussian distribution, the proposed methodology can be applied instead of the Gaussian mixture model, which is widely used for the segmentation of brain MRI in the literature.
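The Metropolis-Hastings machinery used above can be sketched on a simpler stand-in target: since the α-stable density has no closed form, the toy below samples the mean of a Gaussian (the α = 2 special case) with known scale and a flat prior; the data, step size, and chain length are all illustrative:

```python
import numpy as np

def metropolis_mu(data, sigma, n_steps=4000, step=0.01, seed=0):
    """Random-walk Metropolis sampler for the mean of a Gaussian with
    known sigma and a flat prior (log-posterior = log-likelihood)."""
    rng = np.random.default_rng(seed)

    def loglik(mu):
        return -0.5 * np.sum((data - mu) ** 2) / sigma ** 2

    mu = data.mean() + 0.2            # deliberately biased starting point
    ll = loglik(mu)
    trace = []
    for _ in range(n_steps):
        prop = mu + step * rng.normal()
        ll_prop = loglik(prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < ll_prop - ll:
            mu, ll = prop, ll_prop
        trace.append(mu)
    return np.array(trace)

rng = np.random.default_rng(4)
gm = rng.normal(0.45, 0.05, size=200)   # toy "grey matter" intensities
trace = metropolis_mu(gm, sigma=0.05)
mu_hat = trace[1000:].mean()            # posterior mean after burn-in
```

Replacing `loglik` with a numerically evaluated α-stable mixture likelihood turns this same accept/reject loop into the sampler the paper describes.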


Subject(s)
Brain/anatomy & histology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Bayes Theorem , Brain/physiology , Databases, Factual , Humans , Linear Models , Signal Processing, Computer-Assisted