Results 1 - 20 of 29
1.
J Biomech ; 87: 161-166, 2019 Apr 18.
Article in English | MEDLINE | ID: mdl-30824236

ABSTRACT

Data reduction techniques are commonly applied to dynamic plantar pressure measurements, often prior to the measurement's analysis. In performing these data reductions, information is discarded from the measurement before it can be evaluated, with unknown consequences. In this study, we aim to provide the first assessment of the impact data reduction techniques have on plantar pressure measurements. Specifically, we quantify the extent to which information of any kind is discarded when performing common data reductions. Plantar pressure measurements were collected from 33 healthy controls, 8 Hallux Valgus patients, and 10 Metatarsalgia patients. Eleven common data reductions were then applied to the measurements, and the resulting datasets were compared to the original measurement in three ways. First, information theory was used to estimate the information content present in the original and reduced datasets. Second, principal component analysis was used to estimate the number of intrinsic dimensions present. Finally, a permutational multivariate ANOVA was performed to evaluate the significance of group differences between the healthy control, Hallux Valgus, and Metatarsalgia groups. The evaluated data reductions showed a minimum of 99.1% loss in information content and losses of dimensionality between 20.8% and 83.3%. Significant group differences were also lost after each of the 11 data reductions (α=0.05), but these results may differ for other patient groups (especially those with highly-deformed footprints) or other region-of-interest definitions. Nevertheless, these results suggest that the diagnostic content of dynamic plantar pressure measurements is yet to be fully exploited.


Subject(s)
Foot/physiopathology , Hallux Valgus/physiopathology , Metatarsalgia/physiopathology , Pressure , Principal Component Analysis/standards , Analysis of Variance , Female , Humans , Male , Plastic Surgery Procedures
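
A minimal sketch of the two quantitative comparisons this abstract describes, on simulated data (the array shapes, the 95% variance threshold, and the histogram entropy estimator are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
pressures = rng.random((51, 99 * 68))   # 51 subjects, flattened peak-pressure images (simulated)

# intrinsic dimensions: number of components needed to explain 95% of variance
pca = PCA().fit(pressures)
dims = np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.95) + 1

# information content: Shannon entropy of the discretized measurement
def entropy_bits(a, bins=64):
    counts, _ = np.histogram(a, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

full = entropy_bits(pressures)
reduced = entropy_bits(pressures.max(axis=1))   # e.g., a single peak-pressure summary per subject
print(dims, full, reduced, 1 - reduced / full)  # fraction of information lost by the reduction
```
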
2.
Genetics ; 211(4): 1179-1189, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30692194

ABSTRACT

High-throughput measurements of molecular phenotypes provide an unprecedented opportunity to model cellular processes and their impact on disease. These highly structured datasets are usually strongly confounded, creating false positives and reducing power. This has motivated many approaches based on principal components analysis (PCA) to estimate and correct for confounders, which have become indispensable elements of association tests between molecular phenotypes and both genetic and nongenetic factors. Here, we show that these correction approaches induce a bias, and that it persists for large sample sizes and replicates out-of-sample. We prove this theoretically for PCA by deriving an analytic, deterministic, and intuitive bias approximation. We assess other methods with realistic simulations, which show that perturbing any of several basic parameters can cause false positive rate (FPR) inflation. Our experiments show the bias depends on covariate and confounder sparsity, effect sizes, and their correlation. Surprisingly, when the covariate and confounder have [Formula: see text], standard two-step methods all have [Formula: see text]-fold FPR inflation. Our analysis informs best practices for confounder correction in genomic studies, and suggests many false discoveries have been made and replicated in some differential expression analyses.


Subject(s)
Genome-Wide Association Study/methods , Phenotype , Principal Component Analysis/methods , Animals , Genome-Wide Association Study/standards , Humans , Models, Genetic , Principal Component Analysis/standards , Quantitative Trait Loci , Reproducibility of Results
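
The two-step correction procedure the authors analyze can be sketched as follows; the simulation (sample sizes, effect structure, number of PCs) is invented for illustration and only depicts the setting in which the reported bias arises:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, m, k = 500, 2000, 10
confounder = rng.normal(size=n)
covariate = 0.5 * confounder + rng.normal(size=n)      # correlated with the confounder
Y = np.outer(confounder, rng.normal(size=m)) + rng.normal(size=(n, m))  # null phenotypes

# step 1: top PCs of the phenotype matrix stand in for unobserved confounders
Yc = Y - Y.mean(0)
pcs = np.linalg.svd(Yc, full_matrices=False)[0][:, :k]

# step 2: per-phenotype regression of Y on [1, covariate, PCs]
X = np.column_stack([np.ones(n), covariate, pcs])
beta, *_ = np.linalg.lstsq(X, Yc, rcond=None)
resid = Yc - X @ beta
sigma2 = (resid ** 2).sum(0) / (n - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t = beta[1] / se
p = 2 * stats.t.sf(np.abs(t), df=n - X.shape[1])
print((p < 0.05).mean())   # empirical FPR; per the abstract this can inflate above the nominal 0.05
```
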
3.
J Neural Eng ; 14(6): 066007, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29130452

ABSTRACT

OBJECTIVE: Making mistakes is inevitable, but identifying them allows us to correct or adapt our behavior to improve future performance. Current brain-machine interfaces (BMIs) make errors that need to be explicitly corrected by the user, thereby consuming time and thus hindering performance. We hypothesized that neural correlates of the user perceiving the mistake could be used by the BMI to automatically correct errors. However, it was unknown whether intracortical outcome error signals were present in the premotor and primary motor cortices, brain regions successfully used for intracortical BMIs. APPROACH: We report here for the first time a putative outcome error signal in spiking activity within these cortices when rhesus macaques performed an intracortical BMI computer cursor task. MAIN RESULTS: We decoded BMI trial outcomes shortly after and even before a trial ended with 96% and 84% accuracy, respectively. This led us to develop and implement in real-time a first-of-its-kind intracortical BMI error 'detect-and-act' system that attempts to automatically 'undo' or 'prevent' mistakes. The detect-and-act system works independently and in parallel to a kinematic BMI decoder. In a challenging task that resulted in substantial errors, this approach improved the performance of a BMI employing two variants of the ubiquitous Kalman velocity filter, including a state-of-the-art decoder (ReFIT-KF). SIGNIFICANCE: Detecting errors in real-time from the same brain regions that are commonly used to control BMIs should improve the clinical viability of BMIs aimed at restoring motor function to people with paralysis.


Subject(s)
Action Potentials/physiology , Brain-Computer Interfaces , Motor Cortex/physiology , Support Vector Machine , Acoustic Stimulation/methods , Animals , Brain-Computer Interfaces/standards , Electrodes, Implanted , Macaca mulatta , Male , Photic Stimulation/methods , Principal Component Analysis/methods , Principal Component Analysis/standards , Support Vector Machine/standards
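
For illustration, a trial-outcome decoder of the general kind described here can be sketched with a linear SVM on binned spike counts; the data, bin structure, and planted effect below are simulated assumptions, not the study's recordings or decoder:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_units, n_bins = 400, 96, 10
X = rng.poisson(3.0, size=(n_trials, n_units * n_bins)).astype(float)  # binned spike counts
y = rng.integers(0, 2, n_trials)            # 1 = error trial, 0 = success
X[y == 1, :n_units] += 1.0                  # planted outcome-related firing-rate change

clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01, dual=False))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated outcome-decoding accuracy: {acc:.2f}")
```
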
4.
Pharm Biol ; 55(1): 2129-2135, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28969478

ABSTRACT

CONTEXT: Dipsaci Radix is derived from the dried root of Dipsacus asper Wall. ex Henry (Dipsacaceae). It has attracted increasing attention as one of the most popular and precious herbal medicines in clinical use. OBJECTIVE: To develop an HPLC-DAD method for quantitative analysis and quality control of eight active components in crude and sweated Dipsaci Radix. MATERIALS AND METHODS: The eight components in Dipsaci Radix were analyzed by HPLC-DAD on an Agilent Eclipse XDB-C18 column with gradient elution of acetonitrile and 0.05% formic acid aqueous solution. ESI-MS spectra were acquired on a triple quadrupole mass spectrometer. Validation was performed to demonstrate the linearity, precision, repeatability, stability, and accuracy of the method. The results were processed with principal component analysis (PCA) and discriminant analysis (DA). RESULTS: The eight components showed good linearity (R2 > 0.9991) in the ranges of 60.40-1208.00, 151.00-3020.00, 3.06-61.20, 30.76-615.20, 5.13-102.60, 10.17-203.40, 10.20-204.00, and 151.60-3032.00 mg/mL, respectively. The overall recoveries were in the range of 99.03-102.38%, with RSDs ranging from 1.89% to 4.05%. By PCA, the order of importance of the eight components was CA > AVI > IA > LA > LN > IC > IB > CaA. Crude and sweated Dipsaci Radix were clearly distinguished by DA. DISCUSSION AND CONCLUSION: The method, using HPLC-DAD analysis in combination with PCA and DA, provides more comprehensive quantitative chemical pattern recognition and quality evaluation for crude and sweated Dipsaci Radix.


Subject(s)
Dipsacaceae , Drugs, Chinese Herbal/analysis , Plant Roots/chemistry , Principal Component Analysis/methods , Chromatography, High Pressure Liquid/methods , Chromatography, High Pressure Liquid/standards , Discriminant Analysis , Principal Component Analysis/standards
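
Two of the validation figures reported above (calibration linearity and spike recovery) reduce to short computations; the concentrations and peak areas below are invented placeholders:

```python
import numpy as np

conc = np.array([60.4, 151.0, 302.0, 604.0, 1208.0])   # standard calibration levels
area = np.array([12.1, 30.5, 61.2, 121.9, 244.5])      # detector response (made up)

slope, intercept = np.polyfit(conc, area, 1)
r2 = np.corrcoef(conc, area)[0, 1] ** 2                # linearity, cf. R2 > 0.9991

spiked_found = np.array([118.7, 121.2, 119.9])          # amounts measured after spiking
base, spike = 60.0, 60.0                                # sample content + added amount
recovery = (spiked_found - base) / spike * 100          # cf. recoveries of 99.03-102.38%
rsd = recovery.std(ddof=1) / recovery.mean() * 100      # cf. RSDs of 1.89-4.05%
print(f"R2={r2:.5f}, recovery={recovery.mean():.1f}%, RSD={rsd:.2f}%")
```
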
5.
J Stud Alcohol Drugs ; 77(2): 354-61, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26997195

ABSTRACT

OBJECTIVE: People consume alcohol at problematic levels for many reasons. These different motivational pathways may have different biological underpinnings. Valid, brief measures that discriminate individuals' reasons for drinking could facilitate inquiry into whether varied drinking motivations account for differential response to pharmacotherapies for alcohol use disorders. The current study evaluated the factor structure and predictive validity of a brief measure of alcohol use motivations developed for use in randomized clinical trials, the Reasons for Heavy Drinking Questionnaire (RHDQ). METHOD: The RHDQ was administered before treatment to 265 participants (70% male) with alcohol dependence according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, in three pharmacotherapy randomized clinical trials. Principal components analysis was used in half the sample to determine the RHDQ factor structure. This structure was verified with confirmatory factor analysis in the second half of the sample. The factors derived from this analysis were evaluated with respect to alcohol dependence severity indices. RESULTS: A two-factor solution was identified, with the factors interpreted as Reinforcement and Normalizing. Reinforcement scores were weakly to moderately associated with severity, whereas Normalizing scores were moderately to strongly associated with severity. In all cases in which significant associations between RHDQ scores and severity indices were observed, the relationship was significantly stronger for Normalizing than for Reinforcement. CONCLUSIONS: The RHDQ is a promising brief assessment of motivations for heavy alcohol use, particularly in the context of randomized clinical trials. Additional research should address factor structure stability in non-treatment-seeking individuals and the RHDQ's utility in detecting and accounting for changes in drinking behavior, including in response to intervention.


Subject(s)
Alcoholism/diagnosis , Alcoholism/epidemiology , Surveys and Questionnaires/standards , Adolescent , Adult , Alcoholism/drug therapy , Cross-Sectional Studies , Female , Humans , Male , Motivation , Principal Component Analysis/standards , Randomized Controlled Trials as Topic , Reproducibility of Results
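
The split-half strategy described here can be sketched as follows, substituting Tucker's congruence coefficient for a full confirmatory factor analysis as the replication check; the item responses and latent structure are simulated:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
f1 = 1.2 * rng.normal(size=(265, 1))    # stand-in "Reinforcement" factor
f2 = rng.normal(size=(265, 1))          # stand-in "Normalizing" factor
items = np.hstack([f1 + 0.6 * rng.normal(size=(265, 6)),
                   f2 + 0.6 * rng.normal(size=(265, 6))])  # 12 simulated items

half_a, half_b = items[:132], items[132:]   # exploratory half vs. verification half
load_a = PCA(n_components=2).fit(half_a).components_
load_b = PCA(n_components=2).fit(half_b).components_

def congruence(x, y):   # Tucker's phi; values near 1 indicate a replicated loading pattern
    return (x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum())

for i in range(2):
    for j in range(2):
        print(f"A{i + 1} vs B{j + 1}: phi = {abs(congruence(load_a[i], load_b[j])):.2f}")
```
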
6.
ACS Chem Neurosci ; 7(3): 349-59, 2016 Mar 16.
Article in English | MEDLINE | ID: mdl-26758246

ABSTRACT

The use of principal component regression, a multivariate calibration method, in the analysis of in vivo fast-scan cyclic voltammetry data allows for separation of overlapping signal contributions, permitting evaluation of the temporal dynamics of multiple neurotransmitters simultaneously. To accomplish this, the technique relies on information about current-concentration relationships across the scan-potential window gained from analysis of training sets. The ability of the constructed models to resolve analytes depends critically on the quality of these data. Recently, the use of standard training sets obtained under conditions other than those of the experimental data collection (e.g., with different electrodes, animals, or equipment) has been reported. This study evaluates the analyte resolution capabilities of models constructed using this approach from both a theoretical and experimental viewpoint. A detailed discussion of the theory of principal component regression is provided to inform this discussion. The findings demonstrate that the use of standard training sets leads to misassignment of the current-concentration relationships across the scan-potential window. This directly results in poor analyte resolution and, consequently, inaccurate quantitation, which may lead to erroneous conclusions being drawn from experimental data. Thus, it is strongly advocated that training sets be obtained under the experimental conditions to allow for accurate data analysis.


Subject(s)
Dopamine/analysis , Electrochemical Techniques/standards , Principal Component Analysis/standards , Animals , Brain/metabolism , Brain Chemistry/physiology , Calibration , Hydrogen-Ion Concentration
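
Principal component regression itself is compact; a hedged sketch with synthetic voltammograms (the waveform, noise level, and component count are assumptions) shows the training-set dependence the study highlights:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
template = np.sin(np.linspace(0, np.pi, 1000))        # stand-in cyclic voltammogram shape
conc = rng.uniform(0.1, 1.0, size=40)                 # known training concentrations
sweeps = np.outer(conc, template) + 0.05 * rng.normal(size=(40, 1000))

pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(sweeps, conc)

new_sweep = 0.4 * template + 0.05 * rng.normal(size=1000)
print(pcr.predict(new_sweep[None, :]))   # ~0.4 only if training and test conditions match
```

The study's point is exactly the failure mode this construction invites: if the training sweeps come from a different electrode or setup than `new_sweep`, the current-concentration relationship learned by the model is misassigned and the prediction degrades.
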
7.
J Pharm Biomed Anal ; 117: 345-51, 2016 Jan 05.
Article in English | MEDLINE | ID: mdl-26432385

ABSTRACT

The flowers of Lonicera japonica Thunb. are extensively used to treat many diseases. As demand for L. japonica has increased, related Lonicera species have often been confused with it or misused. Caffeoylquinic acids are generally regarded as chemical markers in the quality control of L. japonica, but they can be found in all Lonicera species. Thus, a simple and reliable method for the evaluation of different Lonicera flowers needs to be established. In this work, a method based on a single standard to determine multiple components (SSDMC), combined with principal component analysis (PCA), was developed to control and distinguish the flowers of Lonicera species. Six components, including three caffeoylquinic acids and three iridoid glycosides, were assayed simultaneously using chlorogenic acid as the reference standard. The credibility and feasibility of the SSDMC method were carefully validated, and the results demonstrated no remarkable differences compared with the external standard method. Finally, a total of fifty-one batches covering five Lonicera species were analyzed, and PCA was successfully applied to distinguish the Lonicera species. This strategy simplifies the quality control of multi-component herbal medicines and is well suited to improving the quality control of herbs belonging to closely related species.


Subject(s)
Flowers , Lonicera , Plant Extracts/analysis , Principal Component Analysis/methods , Principal Component Analysis/standards , Plant Extracts/chemistry , Reference Standards
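
The single-standard idea can be sketched numerically: relative correction factors are established once from authentic standards, after which routine quantification needs only the chlorogenic acid reference. All response factors and areas below are invented:

```python
import numpy as np

# one-time calibration with authentic standards: f_i = (A_ref/c_ref) / (A_i/c_i)
resp_ref = 25.0 / 1.0                          # peak area per unit conc., chlorogenic acid
resp_i = np.array([18.0, 30.0, 12.0]) / 1.0    # response factors of the other components
rcf = resp_ref / resp_i                        # relative correction factors

# routine analysis: only the chlorogenic acid standard is injected
area_ref, conc_ref = 24.6, 0.98                # reference peak in this run
areas = np.array([9.1, 16.4, 3.3])             # peaks of the other components in a sample
conc = areas * rcf * conc_ref / area_ref       # c_i = A_i * f_i * (c_ref / A_ref)
print(conc)
```
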
8.
Food Chem ; 196: 783-90, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26593555

ABSTRACT

Synchronous fluorescence spectroscopy was used in combination with principal component analysis (PCA) and linear discriminant analysis (LDA) for the differentiation of plum spirits according to their geographical origin. A total of 14 Czech, 12 Hungarian and 18 Slovak plum spirit samples were used. The samples were divided into two categories: colorless (22 samples) and colored (22 samples). Synchronous fluorescence spectra (SFS) obtained at a wavelength difference of 60 nm provided the best results. When PCA-LDA was applied to the SFS of all samples, Czech, Hungarian and Slovak colorless samples were properly classified in both the calibration and prediction sets. 100% correct classification was also obtained for Czech and Hungarian colored samples. However, one group of Slovak colored samples was classified as belonging to the Hungarian group in the calibration set. Thus, the total correct classifications obtained were 94% and 100% for the calibration and prediction steps, respectively. The results were compared with those obtained using near-infrared (NIR) spectroscopy. Applying PCA-LDA to NIR spectra (5500-6000 cm(-1)), the total correct classifications were 91% and 92% for the calibration and prediction steps, respectively, slightly lower than those obtained using SFS.


Subject(s)
Alcoholic Beverages/analysis , Prunus domestica/chemistry , Spectrometry, Fluorescence/methods , Calibration , Discriminant Analysis , Principal Component Analysis/methods , Principal Component Analysis/standards , Spectrometry, Fluorescence/standards
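
A PCA-LDA pipeline with a calibration/prediction split, as used above, might look like the following sketch; the "spectra" and group structure are simulated stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
wavelengths = 300
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(15, wavelengths))
               for m in (0.0, 0.3, 0.6)])      # three simulated origin groups
y = np.repeat([0, 1, 2], 15)                   # stand-ins for Czech/Hungarian/Slovak

X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.33,
                                                stratify=y, random_state=0)
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X_cal, y_cal)
print(model.score(X_cal, y_cal), model.score(X_pred, y_pred))  # calibration vs prediction
```
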
9.
Span J Psychol ; 19: e62.1-e62.13, 2016.
Article in English | IBECS | ID: ibc-160277

ABSTRACT

Correlation and principal component analysis (PCA) of behavioral measures from two experimental tasks (Delayed Match-to-Sample and Oddball) and standard scores from a neuropsychological test battery (Working Memory Test Battery for Children) was performed on data from participants aged 6-18 years. The correlation analysis (p < .05) showed a common maturational trend in working memory performance between these two types of tasks. Applying PCA (eigenvalues > 1), the scores of the first extracted component were significantly correlated (p < .05) with most behavioral measures, suggesting commonalities in the age-related changes of the measured variables. The results suggest that this first component is related to age but also to individual differences in the cognitive maturation process across childhood and adolescence. The fourth component appears to represent the speed-accuracy trade-off phenomenon, as its loadings have opposite signs for reaction times and errors.


Subject(s)
Humans , Male , Female , Child , Adolescent , Memory/physiology , Principal Component Analysis/methods , Principal Component Analysis/standards , Child Development/physiology , Adolescent Development/physiology , Mental Processes/physiology , Mental Competency/psychology , Helsinki Declaration , Child Behavior/psychology , Adolescent Behavior/psychology , Fujita-Pearson Scale
10.
J Neurosci Methods ; 241: 18-29, 2015 Feb 15.
Article in English | MEDLINE | ID: mdl-25481542

ABSTRACT

BACKGROUND: Functional magnetic resonance imaging (fMRI) time series are subject to corruption by many noise sources, especially physiological noise and motion. Researchers have developed many methods to reduce physiological noise, including RETROICOR, which retroactively removes cardiac and respiratory waveforms collected during the scan, and CompCor, which applies principal components analysis (PCA) to remove physiological noise components without any physiological monitoring during the scan. NEW METHOD: We developed four variants of the CompCor method. The optimized CompCor method applies PCA to time series in a noise mask, but orthogonalizes each component to the BOLD response waveform and uses an algorithm to determine a favorable number of components to use as "nuisance regressors." Whole brain component correction (WCompCor) is similar, except that it applies PCA to time series throughout the whole brain. Low-pass component correction (LCompCor) identifies low-pass filtered components throughout the brain, while high-pass component correction (HCompCor) identifies high-pass filtered components. COMPARISON WITH EXISTING METHOD: We compared the new methods with the original CompCor method by examining the resulting functional contrast-to-noise ratio (CNR), sensitivity, and specificity. RESULTS: (1) The optimized CompCor method increased the CNR and sensitivity compared to the original CompCor method and (2) the application of WCompCor yielded the best improvement in the CNR and sensitivity. CONCLUSIONS: The sensitivity of the optimized CompCor, WCompCor, and LCompCor methods exceeded that of the original CompCor method. However, regressing out noise signals had the paradoxical consequence of reducing specificity for all noise reduction methods attempted.


Subject(s)
Artifacts , Magnetic Resonance Imaging/standards , Movement/physiology , Principal Component Analysis/standards , Psychomotor Performance/physiology , Sensorimotor Cortex/physiology , Adult , Female , Humans , Image Enhancement/standards , Male , Middle Aged
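
The optimized-CompCor steps described above (PCs from a noise mask, orthogonalized to the task regressor, then regressed out) can be sketched as follows on simulated time series; mask sizes, the component count, and the task waveform are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_noise_vox, n_vox, k = 240, 500, 2000, 5
task = np.sin(np.linspace(0, 12 * np.pi, T))           # stand-in BOLD task regressor
noise_ts = rng.normal(size=(T, n_noise_vox))           # time series from a noise mask
data = rng.normal(size=(T, n_vox)) + 0.3 * task[:, None]
data = data - data.mean(0)

# PCA of the noise-mask time series (left singular vectors = component time courses)
U = np.linalg.svd(noise_ts - noise_ts.mean(0), full_matrices=False)[0]
nuisance = U[:, :k]

# orthogonalize each component to the task regressor to protect task signal
t = task - task.mean()
nuisance = nuisance - np.outer(t, t @ nuisance) / (t @ t)

# regress the nuisance components out of every voxel
beta, *_ = np.linalg.lstsq(nuisance, data, rcond=None)
cleaned = data - nuisance @ beta
```
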
11.
Parkinsonism Relat Disord ; 21(2): 142-6, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25523963

ABSTRACT

INTRODUCTION: Several studies have validated the Hamilton Depression Rating Scale (HAMD) in patients with Parkinson's disease (PD), and reported adequate reliability and construct validity. However, the factorial validity of the HAMD has not yet been investigated. The aim of our analysis was to explore the factor structure of the HAMD in a large sample of PD patients. METHODS: A principal component analysis of the 17-item HAMD was performed on data from 341 PD patients, available from a previous cross-sectional study on anxiety. An eigenvalue ≥1 was used to determine the number of factors. Factor loadings ≥0.4 in combination with oblique rotations were used to identify which variables made up the factors. The Kaiser-Meyer-Olkin measure (KMO), Cronbach's alpha, Bartlett's test, communality, percentage of non-redundant residuals and the component correlation matrix were computed to assess factor validity. RESULTS: The KMO verified the sample's adequacy for factor analysis and Cronbach's alpha indicated good internal consistency of the total scale. Six factors had eigenvalues ≥1 and together explained 59.19% of the variance. The number of items per factor varied from 1 to 6. Inter-item correlations within each component were low. There was a high percentage of non-redundant residuals and low communality. CONCLUSION: This analysis demonstrates that the factorial validity of the HAMD in PD is unsatisfactory. This implies that the scale is not appropriate for studying specific symptom domains of depression based on factorial structure in a PD population.


Subject(s)
Depression/diagnosis , Depression/psychology , Parkinson Disease/diagnosis , Parkinson Disease/psychology , Principal Component Analysis/standards , Psychiatric Status Rating Scales/standards , Adult , Aged , Aged, 80 and over , Cross-Sectional Studies , Depression/epidemiology , Factor Analysis, Statistical , Female , Humans , Male , Middle Aged , Parkinson Disease/epidemiology , Principal Component Analysis/methods
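
The extraction rule used here, eigenvalues ≥ 1 of the item correlation matrix, takes only a few lines; the item responses below are simulated stand-ins for the 17 HAMD items:

```python
import numpy as np

rng = np.random.default_rng(7)
items = rng.normal(size=(341, 17))                 # 341 patients x 17 items (simulated)
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]  # descending

n_factors = int((eigvals >= 1).sum())              # Kaiser criterion
explained = eigvals[:n_factors].sum() / eigvals.sum() * 100
print(n_factors, f"{explained:.2f}% of variance")  # cf. the reported 6 factors, 59.19%
```
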
12.
Anal Bioanal Chem ; 407(8): 2255-64, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25542565

ABSTRACT

Conventional mass spectrometry image preprocessing methods used for denoising, such as Savitzky-Golay smoothing or discrete wavelet transformation, typically remove not only noise but also weak signals. Recently, memory-efficient principal component analysis (PCA) in conjunction with random projections (RP) has been proposed for reversible compression and analysis of large mass spectrometry imaging datasets. It considers single-pixel spectra in their local context and consequently offers the prospect of using information from the spectra of adjacent pixels for denoising or signal enhancement. However, little systematic analysis of key RP-PCA parameters has been reported so far, and the utility and validity of this method for context-dependent enhancement of known medically or pharmacologically relevant weak analyte signals in linear-mode matrix-assisted laser desorption/ionization (MALDI) mass spectra has not yet been explored. Here, we investigate MALDI imaging datasets from mouse models of Alzheimer's disease and gastric cancer to systematically assess the importance of selecting the right number of random projections k and of principal components (PCs) L for reconstructing reproducibly denoised images after compression. We provide detailed quantitative data for comparison of RP-PCA denoising with Savitzky-Golay and wavelet-based denoising in these mouse models as a resource for the mass spectrometry imaging community. Most importantly, we demonstrate that RP-PCA preprocessing can enhance signals of low-intensity amyloid-ß peptide isoforms such as Aß1-26 even in sparsely distributed Alzheimer's ß-amyloid plaques and that it enables enhanced imaging of multiply acetylated histone H4 isoforms in response to pharmacological histone deacetylase inhibition in vivo. We conclude that RP-PCA denoising may be a useful preprocessing step in biomarker discovery workflows.


Subject(s)
Alzheimer Disease/metabolism , Image Processing, Computer-Assisted/standards , Principal Component Analysis/standards , Stomach Neoplasms/metabolism , Amyloid beta-Peptides/analysis , Amyloid beta-Peptides/metabolism , Animals , Disease Models, Animal , Female , Histones/metabolism , Humans , Image Processing, Computer-Assisted/methods , Mice , Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization/methods , Stomach Neoplasms/chemistry
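
In miniature, RP-PCA denoising might be sketched as below: compress with k random projections, keep L principal components, and reconstruct. Dimensions, k, and L are toy values; the paper's point is precisely that these choices matter:

```python
import numpy as np

rng = np.random.default_rng(8)
n_pixels, n_mz, k, L = 1000, 5000, 200, 20
X = rng.normal(size=(n_pixels, n_mz))              # stand-in MALDI imaging spectra

R = rng.normal(size=(n_mz, k)) / np.sqrt(k)        # random projection matrix
Z = X @ R                                          # memory-efficient compressed form

Zc = Z - Z.mean(0)
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)  # PCA in the compressed space
Z_denoised = U[:, :L] * s[:L] @ Vt[:L] + Z.mean(0) # keep L components only

# map back toward the original m/z space via the projection's pseudo-inverse
X_denoised = Z_denoised @ np.linalg.pinv(R)
```
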
13.
BMC Bioinformatics ; 14: 338, 2013 Nov 21.
Article in English | MEDLINE | ID: mdl-24261687

ABSTRACT

BACKGROUND: Determining sample sizes for metabolomic experiments is important, but due to the complexity of these experiments there are currently no standard methods for sample size estimation in metabolomics. Since pilot studies are rarely done in metabolomics, existing sample size estimation approaches that rely on pilot data cannot be applied. RESULTS: In this article, an analysis-based approach called MetSizeR is developed to estimate sample size for metabolomic experiments even when experimental pilot data are not available. The key motivation for MetSizeR is that it considers the type of analysis the researcher intends to use when estimating sample size. MetSizeR uses information about the data analysis technique and prior expert knowledge of the metabolomic experiment to simulate pilot data from a statistical model. Permutation-based techniques are then applied to the simulated pilot data to estimate the required sample size. CONCLUSIONS: The MetSizeR methodology, and a publicly available software package which implements the approach, are illustrated through real metabolomic applications. Sample size estimates, informed by the intended statistical analysis technique, and the associated uncertainty are provided.


Subject(s)
Metabolomics/statistics & numerical data , Algorithms , Animals , Computer Simulation , Longitudinal Studies , Models, Statistical , Nuclear Magnetic Resonance, Biomolecular/methods , Pilot Projects , Principal Component Analysis/standards , Sample Size , Software
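
A simplified stand-in for the MetSizeR logic (not its actual statistical model): simulate pilot-like data from assumed parameters, apply a permutation-based threshold, and see how detection power varies with the candidate sample size:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_power(n, n_metab=200, n_diff=20, effect=1.0, n_perm=200, alpha=0.05):
    # simulate pilot-like data: most metabolites null, a few shifted between groups
    x = rng.normal(size=(n, n_metab))
    y = rng.normal(size=(n, n_metab))
    y[:, :n_diff] += effect
    t_obs = (x.mean(0) - y.mean(0)) / np.sqrt(x.var(0, ddof=1) / n + y.var(0, ddof=1) / n)

    # permutation null: shuffle group labels, record the max |t| per permutation
    data = np.vstack([x, y])
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(2 * n)
        a, b = data[idx[:n]], data[idx[n:]]
        t = (a.mean(0) - b.mean(0)) / np.sqrt(a.var(0, ddof=1) / n + b.var(0, ddof=1) / n)
        null_max[i] = np.abs(t).max()
    thresh = np.quantile(null_max, 1 - alpha)          # FWER-controlling max-T threshold
    return np.mean(np.abs(t_obs[:n_diff]) > thresh)    # power on the truly shifted metabolites

for n in (5, 10, 20, 40):
    print(n, simulated_power(n))   # pick the smallest n reaching the target power
```
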
14.
Res Dev Disabil ; 34(10): 3576-82, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23962604

ABSTRACT

The objectives of this research were to: (1) investigate the component structure and psychometric properties of the Self- and Other-Deception Questionnaires-Intellectual Disabilities (SDQ-ID and ODQ-ID), (2) examine the relationship between social desirability and IQ, and (3) compare the social desirability scores of those with intellectual disabilities (IDs) and a history of criminal offending to those of participants with and without IDs and no such history, controlling for general intellectual functioning. Men with mild to borderline IDs detained within medium secure inpatient forensic mental health services (N=40) completed the SDQ-ID and ODQ-ID at Time 1 and then two weeks later at Time 2. Data for the men with and without IDs and no known criminal offending history were taken from a previous study (N=60). Following exploratory Principal Components Analysis, the number of questionnaire items was reduced, and a two-factor structure was found for the SDQ-ID, labelled: (1) Positive Self Representation and (2) Denial of Intrusive Thoughts. A two-factor structure was also found for the ODQ-ID, with factors labelled: (1) Denial of Negative Social Interaction and (2) Untrustworthiness. Both the SDQ-ID and ODQ-ID had acceptable internal consistency and test-retest reliability. Fifteen percent of the variance in SDQ-ID scores was explained by Full Scale IQ, while 21% of the variance in ODQ-ID scores was explained by Full Scale IQ. Between-group comparisons controlling for intelligence did not yield any significant differences. The shortened SDQ-ID and ODQ-ID have promising psychometric properties, and their component structures appear robust. Differences between men with and without IDs on these two measures of social desirability can be accounted for by differences in general intellectual functioning.


Subject(s)
Criminals/psychology , Intellectual Disability/diagnosis , Intellectual Disability/psychology , Principal Component Analysis/standards , Social Desirability , Surveys and Questionnaires/standards , Adult , Humans , Learning Disabilities/diagnosis , Learning Disabilities/psychology , Male , Middle Aged , Principal Component Analysis/methods , Prisoners/psychology , Psychometrics/methods , Psychometrics/standards , Reproducibility of Results , Young Adult
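
Two of the reliability statistics reported above reduce to direct computations; the questionnaire responses below are simulated:

```python
import numpy as np

def cronbach_alpha(items):      # items: subjects x questionnaire items
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(9)
latent = rng.normal(size=(40, 1))                    # shared trait across items
time1 = latent + 0.7 * rng.normal(size=(40, 10))     # 40 respondents, 10 items
time2 = latent + 0.7 * rng.normal(size=(40, 10))     # readministered two weeks later

print(cronbach_alpha(time1))                           # internal consistency
print(np.corrcoef(time1.sum(1), time2.sum(1))[0, 1])   # test-retest reliability
```
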
16.
Stat Appl Genet Mol Biol ; 11(4)2012 Jul 12.
Article in English | MEDLINE | ID: mdl-22850062

ABSTRACT

Viruses such as HIV and Hepatitis C (HCV) replicate rapidly and with high transcription error rates, which may facilitate their escape from immune detection through the encoding of mutations at key positions within human leukocyte antigen (HLA)-specific peptides, thus impeding T-cell recognition. Large-scale population-based host-viral association studies are conducted as hypothesis-generating analyses which aim to determine the positions within the viral sequence at which host HLA immune pressure may have led to these viral escape mutations. When transmission of the virus to the host is HLA-associated, however, standard tests of association can be confounded by the viral relatedness of contemporarily circulating viral sequences, as viral sequences descended from a common ancestor may share inherited patterns of polymorphisms, termed 'founder effects'. Recognizing the correspondence between this problem and the confounding of case-control genome-wide association studies by population stratification, we adapt methods taken from that field to the analysis of host-viral associations. In particular, we consider methods based on principal components analysis within a logistic regression framework motivated by alternative formulations in the Frisch-Waugh-Lovell Theorem. We demonstrate via simulation their utility in detecting true host-viral associations whilst minimizing confounding by associations generated by founder effects. The proposed methods incorporate relatively robust, standard statistical procedures which can be easily implemented using widely available software, and provide alternatives to the more complex computer intensive methods often implemented in this area.


Subject(s)
Founder Effect , Host-Pathogen Interactions/genetics , Principal Component Analysis , Virus Diseases/transmission , Alleles , Calibration , Case-Control Studies , Cohort Studies , Computer Simulation , Genetic Association Studies/standards , Genetic Association Studies/statistics & numerical data , Genetic Predisposition to Disease/epidemiology , Genetic Predisposition to Disease/genetics , HLA Antigens/genetics , Humans , Logistic Models , Polymorphism, Genetic , Principal Component Analysis/methods , Principal Component Analysis/standards , Research Design , Virus Diseases/epidemiology , Virus Diseases/genetics
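
The PC-adjusted logistic regression framework described here can be sketched as below (assuming statsmodels is available); the sequences, HLA carriage, and number of PCs are simulated assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n, n_sites = 300, 400
seqs = rng.integers(0, 2, size=(n, n_sites)).astype(float)  # binary viral polymorphisms
hla = rng.integers(0, 2, n).astype(float)                   # host HLA allele carriage
outcome = seqs[:, 0]                                        # polymorphism under test

# top PCs of the remaining sequence matrix adjust for founder effects
centered = seqs[:, 1:] - seqs[:, 1:].mean(0)
pcs = np.linalg.svd(centered, full_matrices=False)[0][:, :5]

X = sm.add_constant(np.column_stack([hla, pcs]))
fit = sm.Logit(outcome, X).fit(disp=0)
print(fit.pvalues[1])   # PC-adjusted association p-value for the HLA term
```
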
17.
Toxicology ; 290(1): 50-8, 2011 Nov 28.
Article in English | MEDLINE | ID: mdl-21871943

ABSTRACT

The application of toxicogenomics as a predictive tool for chemical risk assessment has been under evaluation by the toxicology community for more than a decade. However, it predominantly remains a tool for investigative research rather than for regulatory risk assessment. In this study, we assessed whether the current generation of microarray technology in combination with an in vitro experimental design was capable of generating robust, reproducible data of sufficient quality to show promise as a tool for regulatory risk assessment. To this end, we designed a prospective collaborative study to determine the level of inter- and intra-laboratory reproducibility between three independent laboratories. All test centres (TCs) adopted the same protocols for all aspects of the toxicogenomic experiment including cell culture, chemical exposure, RNA extraction, microarray data generation and analysis. As a case study, the genotoxic carcinogen benzo[a]pyrene (B[a]P) and the human hepatoma cell line HepG2 were used to generate three comparable toxicogenomic data sets. High levels of technical reproducibility were demonstrated using a widely employed gene expression microarray platform. While differences at the global transcriptome level were observed between the TCs, a common subset of B[a]P responsive genes (n=400 gene probes) was identified at all TCs which included many genes previously reported in the literature as B[a]P responsive. These data show promise that the current generation of microarray technology, in combination with a standard in vitro experimental design, can produce robust data that can be generated reproducibly in independent laboratories. Future work will need to determine whether such reproducible in vitro model(s) can be predictive for a range of toxic chemicals with different mechanisms of action and thus be considered as part of future testing regimes for regulatory risk assessment.


Subject(s)
Databases, Genetic/standards , Laboratories/standards , Research Design/standards , Toxicogenetics/standards , Hep G2 Cells , Humans , Principal Component Analysis/methods , Principal Component Analysis/standards , Prospective Studies , Protein Array Analysis/methods , Protein Array Analysis/standards , Reproducibility of Results , Toxicogenetics/methods
18.
J Nerv Ment Dis ; 199(6): 394-7, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21629018

ABSTRACT

Because there has been a lack of a single comprehensive measure for assessing workplace well-being, we elected to develop such a self-report measure. Provisional items were extracted from the literature on "positive psychology" and were adapted to capture their workplace application. The provisional 50-item set was completed by a nonclinical sample of 150 adults. Second and third samples were recruited to examine its reliability and any impact of depressed mood and of sociodemographic and work-related variables, respectively. Factor analysis identified four domains: "Work Satisfaction," "Organizational Respect for the Employee," "Employer Care," and a negative construct, "Intrusion of Work into Private Life." High test-retest reliability was demonstrated for the final 31-item measure, whereas there was no distinct impact of depressed mood on the scale scores. Work Satisfaction scale scores were influenced by job type. Gender effects were found for two of the four scales, whereas a longer period of employment was inversely linked to Organizational Respect for the Employee and Employer Care scores and associated with higher Intrusion of Work into Private Life scores. The refined measure should enable individuals and employers to quantify the levels of support and well-being provided by employing organizations.


Subject(s)
Job Satisfaction , Personal Satisfaction , Principal Component Analysis/standards , Surveys and Questionnaires/standards , Workplace/psychology , Adult , Female , Humans , Male , Reproducibility of Results
19.
J Neurosci Methods ; 199(2): 183-91, 2011 Aug 15.
Article in English | MEDLINE | ID: mdl-21600926

ABSTRACT

The use of Granger causality (GC) for studying dependencies in neuroimaging data has recently been gaining popularity. Several frameworks exist for applying GC to neurophysiological questions but many rely heavily on specific statistical assumptions regarding autoregressive (AR) models for hypothesis testing. Since it is often difficult to satisfy these assumptions in practical settings, this study proposes an alternative statistical methodology based on the classification of individual trials of data. Instead of testing for significance using statistics based on estimated AR models or prediction errors, hypotheses were tested by determining whether or not individual magnetoencephalography (MEG) recording segments belonging to either of two experimental conditions can be successfully classified using features derived from AR and GC concepts. Using this novel approach, we show that bivariate temporal GC can be used to distinguish button presses based on whether they were experimentally forced or free. Additionally, the methodology was used to determine useful parameter settings for various steps of the analysis and this revealed surprising insight into several aspects of AR and GC analysis which, previously, could not be obtained in a comparable manner. A final mean accuracy of 79.2% was achieved for classifying forced and free button presses for 6 subjects suggesting that classification using GC features is a viable option for studying MEG signals and useful for evaluating the effectiveness of parameter variations in GC analysis.


Subject(s)
Algorithms , Magnetoencephalography/methods , Magnetoencephalography/statistics & numerical data , Models, Neurological , Neurophysiology/statistics & numerical data , Signal Processing, Computer-Assisted , Bayes Theorem , Humans , Magnetoencephalography/standards , Neurophysiology/methods , Neurophysiology/standards , Principal Component Analysis/methods , Principal Component Analysis/standards , Time Factors
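
A bivariate temporal GC feature of the kind classified above reduces to comparing residual variances of two AR fits; the signals and model order below are simulated assumptions:

```python
import numpy as np

def gc_feature(x, y, p=5):
    """GC from y to x: log variance ratio of restricted vs full AR residuals."""
    X_own = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    X_full = np.column_stack([X_own] +
                             [y[p - k:len(y) - k] for k in range(1, p + 1)])
    target = x[p:]
    res_r = target - X_own @ np.linalg.lstsq(X_own, target, rcond=None)[0]
    res_f = target - X_full @ np.linalg.lstsq(X_full, target, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())   # > 0 when y's past helps predict x

rng = np.random.default_rng(11)
y = rng.normal(size=1000)
x = np.empty(1000)
x[0] = 0.0
x[1:] = 0.8 * y[:-1] + rng.normal(size=999)    # x driven by y's past

print(gc_feature(x, y), gc_feature(y, x))      # per-trial features for a classifier
```

Features like these, computed per recording segment, would then feed a standard classifier whose cross-validated accuracy serves as the hypothesis test, which is the essence of the approach described in the abstract.
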
20.
Neural Netw ; 24(5): 501-11, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21420831

ABSTRACT

A general-purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing of nonlinearly embedded manifolds, and handling of large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR comprises two modules. The first module learns manifolds by utilizing modified topology representing networks and geodesic distance in data space, and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of a specific loss function and synthesis of the training algorithm for the Radial Basis Function network result in global preservation of data structures and online processing of new patterns. RBF-NDR was applied for feature extraction and visualization and compared with Principal Component Analysis (PCA), a neural network for Sammon's projection (SAMANN), and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of a model describing a wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics.


Subject(s)
Algorithms , Artificial Intelligence , Neural Networks, Computer , Principal Component Analysis/standards , Computer Simulation/standards , Electronic Data Processing/methods , Electronic Data Processing/standards , Humans , Pattern Recognition, Automated/methods , Pattern Recognition, Automated/standards , Software/standards , Software Design , Time Factors
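
The core RBF-NDR idea, online projection via an RBF mapping into a learned nonlinear embedding, can be approximated with stock components; Isomap and kernel ridge regression below are stand-ins for the paper's own geodesic and RBF-training modules:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.kernel_ridge import KernelRidge
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1200, random_state=0)
X_ref, X_new = X[:1000], X[1000:]          # reference patterns vs streaming data

# learn a 2-D embedding of the reference set (geodesic-distance based)
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X_ref)

# RBF mapping from observation space to the embedding space
rbf_map = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
rbf_map.fit(X_ref, embedding)

projected_new = rbf_map.predict(X_new)     # online projection of unseen points
print(projected_new.shape)
```
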