Results 1 - 20 of 41
1.
Neurodegener Dis ; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38865972

ABSTRACT

INTRODUCTION: Manual motor problems have been reported in mild cognitive impairment (MCI) and Alzheimer's disease (AD), but the specific aspects that are affected, their neuropathology, and their potential value for classification modeling are unknown. The current study examined whether multiple measures of motor strength, dexterity, and speed are affected in MCI and AD, related to AD biomarkers, and able to classify MCI or AD. METHODS: Fifty-three cognitively normal (CN), 33 amnestic MCI, and 28 AD subjects completed five manual motor measures: grip force, Trail Making Test A, spiral tracing, finger tapping, and a simulated feeding task. Analyses included: 1) group differences in manual performance; 2) associations between manual function and AD biomarkers (PET amyloid-β, hippocampal volume, and APOE ε4 alleles); and 3) group classification accuracy of manual motor function using machine learning. RESULTS: Amnestic MCI and AD subjects exhibited slower psychomotor speed than CN subjects, and AD subjects also had weaker dominant-hand grip strength. Performance on these measures was related to amyloid-β deposition (both measures) and hippocampal volume (psychomotor speed only). Support vector classification discriminated control and AD subjects well (areas under the curve of 0.73 and 0.77, respectively), but poorly discriminated MCI from controls or AD. CONCLUSION: Grip strength and spiral tracing appear preserved, while psychomotor speed is affected, in amnestic MCI and AD. The association of motor performance with amyloid-β deposition and atrophy could indicate that these deficits are due to amyloid deposition in, and atrophy of, motor brain regions, which generally occurs later in the disease process. The promising discriminatory ability of manual motor measures for AD emphasizes their value alongside other cognitive and motor assessment outcomes in classification and prediction models, as well as their potential to enrich outcome variables in AD clinical trials.
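The classification step described above (support vector classification scored by area under the ROC curve) can be sketched roughly as follows. This is a hedged illustration, not the authors' code: the data are synthetic stand-ins for the five motor measures, and the group sizes are taken from the abstract.

```python
# Hypothetical sketch of SVC-based group classification scored by AUC.
# Synthetic features stand in for motor measures (grip force, tapping, etc.).
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cn, n_ad, n_feat = 53, 28, 5          # group sizes from the abstract
X = np.vstack([rng.normal(0.0, 1.0, (n_cn, n_feat)),   # "CN" subjects
               rng.normal(0.8, 1.0, (n_ad, n_feat))])  # "AD": shifted means
y = np.array([0] * n_cn + [1] * n_ad)

clf = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(y, scores)
print(f"cross-validated AUC: {auc:.2f}")
```

Cross-validated probabilities (rather than in-sample predictions) are what make the AUC an honest estimate of out-of-sample discrimination.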

3.
Nat Methods ; 21(5): 809-813, 2024 May.
Article in English | MEDLINE | ID: mdl-38605111

ABSTRACT

Neuroscience is advancing standardization and tool development to support rigor and transparency. Consequently, data pipeline complexity has increased, hindering FAIR (findable, accessible, interoperable and reusable) access. brainlife.io was developed to democratize neuroimaging research. The platform provides data standardization, management, visualization and processing and automatically tracks the provenance history of thousands of data objects. Here, brainlife.io is described and evaluated for validity, reliability, reproducibility, replicability and scientific utility using four data modalities and 3,200 participants.


Subjects
Cloud Computing , Neurosciences , Neurosciences/methods , Humans , Neuroimaging/methods , Reproducibility of Results , Software , Brain/physiology , Brain/diagnostic imaging
4.
J Alzheimers Dis ; 95(3): 1233-1252, 2023.
Article in English | MEDLINE | ID: mdl-37694362

ABSTRACT

BACKGROUND: Despite reports of gross motor problems in mild cognitive impairment (MCI) and Alzheimer's disease (AD), fine motor function has been relatively understudied. OBJECTIVE: We examined if finger tapping is affected in AD, related to AD biomarkers, and able to classify MCI or AD. METHODS: Forty-seven cognitively normal, 27 amnestic MCI, and 26 AD subjects completed unimanual and bimanual computerized tapping tests. We tested 1) group differences in tapping with permutation models; 2) associations between tapping and biomarkers (PET amyloid-β, hippocampal volume, and APOE ε4 alleles) with linear regression; and 3) the predictive value of tapping for group classification using machine learning. RESULTS: AD subjects had slower reaction time and larger speed variability than controls during all tapping conditions, except for dual tapping. MCI subjects performed worse than controls on reaction time and speed variability for dual and non-dominant hand tapping. Tapping speed and variability were related to hippocampal volume, but not to amyloid-β deposition or APOE ε4 alleles. Random forest classification (overall accuracy = 70%) discriminated control and AD subjects, but poorly discriminated MCI from controls or AD. CONCLUSIONS: MCI and AD are linked to more variable finger tapping with slower reaction time. Associations between finger tapping and hippocampal volume, but not amyloidosis, suggest that tapping deficits are related to neuropathology that presents later in the disease. Considering that tapping performance is able to differentiate between control and AD subjects, it may offer a cost-efficient tool for augmenting existing AD biomarkers.
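The random-forest step in this abstract can be sketched in the same hedged spirit. Everything here is an assumption for illustration: features are synthetic tapping-style measures, group sizes follow the abstract (47 CN, 27 MCI, 26 AD), and MCI is simulated as intermediate, which tends to reproduce the reported pattern of CN vs. AD separating better than MCI vs. either.

```python
# Hedged illustration of three-group random-forest classification on
# synthetic "tapping" features; not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
sizes, shifts = [47, 27, 26], [0.0, 0.5, 1.0]        # CN, MCI, AD
X = np.vstack([rng.normal(mu, 1.0, (n, 4))           # 4 synthetic features
               for n, mu in zip(sizes, shifts)])
y = np.repeat(np.arange(3), sizes)                   # 0=CN, 1=MCI, 2=AD

rf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(rf, X, y, cv=5).mean()         # overall accuracy
print(f"cross-validated accuracy: {acc:.0%}")
```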


Subjects
Alzheimer Disease , Amyloidosis , Cognitive Dysfunction , Humans , Alzheimer Disease/psychology , Amyloid beta-Peptides , Cognitive Dysfunction/psychology , Biomarkers
5.
J Comput Graph Stat ; 32(2): 413-433, 2023.
Article in English | MEDLINE | ID: mdl-37377728

ABSTRACT

Independent component analysis is commonly applied to functional magnetic resonance imaging (fMRI) data to extract independent components (ICs) representing functional brain networks. While ICA produces reliable group-level estimates, single-subject ICA often produces noisy results. Template ICA is a hierarchical ICA model using empirical population priors to produce more reliable subject-level estimates. However, this and other hierarchical ICA models assume unrealistically that subject effects are spatially independent. Here, we propose spatial template ICA (stICA), which incorporates spatial priors into the template ICA framework for greater estimation efficiency. Additionally, the joint posterior distribution can be used to identify brain regions engaged in each network using an excursion set approach. By leveraging spatial dependencies and avoiding massive multiple comparisons, stICA has high power to detect true effects. We derive an efficient expectation-maximization algorithm to obtain maximum likelihood estimates of the model parameters and posterior moments of the latent fields. Based on analysis of simulated data and fMRI data from the Human Connectome Project, we find that stICA produces estimates that are more accurate and reliable than benchmark approaches, and identifies larger and more reliable areas of engagement. The algorithm is computationally tractable, achieving convergence within 12 hours for whole-cortex fMRI analysis.

6.
ArXiv ; 2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37332566

ABSTRACT

Neuroscience research has expanded dramatically over the past 30 years by advancing standardization and tool development to support rigor and transparency. Consequently, the complexity of the data pipeline has also increased, hindering access to FAIR data analysis for portions of the worldwide research community. brainlife.io was developed to reduce these burdens and democratize modern neuroscience research across institutions and career levels. Using community software and hardware infrastructure, the platform provides open-source data standardization, management, visualization, and processing and simplifies the data pipeline. brainlife.io automatically tracks the provenance history of thousands of data objects, supporting simplicity, efficiency, and transparency in neuroscience research. Here, brainlife.io's technology and data services are described and evaluated for validity, reliability, reproducibility, replicability, and scientific utility. Using data from 4 modalities and 3,200 participants, we demonstrate that brainlife.io's services produce outputs that adhere to best practices in modern neuroscience research.

8.
Neuroimage ; 274: 120138, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37116766

ABSTRACT

Most neuroimaging studies display results that represent only a tiny fraction of the collected data. While it is conventional to present "only the significant results" to the reader, here we suggest that this practice has several negative consequences for both reproducibility and understanding. It hides away most of the results of the dataset and leads to problems of selection bias and irreproducibility, both of which have recently been recognized as major issues in neuroimaging. Opaque, all-or-nothing thresholding, even if well-intentioned, places undue influence on arbitrary filter values, hinders clear communication of scientific results, wastes data, is antithetical to good scientific practice, and leads to conceptual inconsistencies. It is also inconsistent with the properties of the acquired data and the underlying biology being studied. Instead of presenting only a few statistically significant locations and hiding away the remaining results, studies should "highlight" the former while also showing as much as possible of the rest. This is distinct from but complementary to utilizing data-sharing repositories: the initial presentation of results has an enormous impact on the interpretation of a study. We present practical examples and extensions of this approach for voxelwise, regionwise, and cross-study analyses using publicly available data that were analyzed previously by 70 teams (NARPS; Botvinik-Nezer et al., 2020), showing that it is possible to balance the goal of displaying a full set of results with providing the reader reasonably concise and "digestible" findings. In particular, the highlighting approach sheds useful light on the kind of variability present among the NARPS teams' results, which is primarily varied strength of agreement rather than disagreement. A meta-analysis built on the informative "highlighting" approach shows this relative agreement, while one using the standard "hiding" approach does not. We describe how this simple but powerful change in practice (focusing on highlighting results rather than hiding all but the strongest ones) can help address many large concerns within the field, or at least provide more complete information about them. We include a list of practical suggestions for results reporting to improve reproducibility, cross-study comparisons, and meta-analyses.
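The "hiding" versus "highlighting" distinction can be made concrete with a toy example. The values and the threshold below are synthetic placeholders, not from the paper: opaque thresholding zeroes out everything sub-threshold, while highlighting keeps every estimate and merely flags the supra-threshold ones.

```python
# Toy contrast: "hide" (zero sub-threshold results) vs. "highlight"
# (keep all results, flag significant ones). Synthetic statistics;
# 'thr' is an arbitrary illustration threshold.
import numpy as np

rng = np.random.default_rng(0)
stats = rng.normal(0, 1, 1000)        # stand-in voxelwise statistics
thr = 2.0

hidden = np.where(np.abs(stats) > thr, stats, 0.0)  # "hide": discard the rest
flags = np.abs(stats) > thr                         # "highlight": flag only

kept_hiding = np.count_nonzero(hidden)
print(f"hiding keeps {kept_hiding} of {stats.size} values")
print(f"highlighting keeps all {stats.size}, flags {flags.sum()}")
```

In a real figure the flags would drive opacity or outlines while the full map of estimates remains visible underneath.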


Subjects
Neuroimaging , Humans , Reproducibility of Results , Bias , Selection Bias
9.
Neuroimage ; 270: 119972, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36842522

ABSTRACT

Functional MRI (fMRI) data may be contaminated by artifacts arising from a myriad of sources, including subject head motion, respiration, heartbeat, scanner drift, and thermal noise. These artifacts cause deviations from common distributional assumptions, introduce spatial and temporal outliers, and reduce the signal-to-noise ratio of the data, all of which can have negative consequences for the accuracy and power of downstream statistical analysis. Scrubbing is a technique for excluding fMRI volumes thought to be contaminated by artifacts, and it generally comes in two flavors. Motion scrubbing, based on measures derived from subject head motion, is popular but suffers from a number of drawbacks, among them the need to choose a threshold, a lack of generalizability to multiband acquisitions, and high rates of censoring of individual volumes and entire subjects. Alternatively, data-driven scrubbing methods like DVARS are based on observed noise in the processed fMRI timeseries and may avoid some of these issues. Here we propose "projection scrubbing", a novel data-driven scrubbing method based on a statistical outlier detection framework and strategic dimension reduction, including independent component analysis (ICA), to isolate artifactual variation. We undertake a comprehensive comparison of motion scrubbing with data-driven projection scrubbing and DVARS. We argue that an appropriate metric for the success of scrubbing is maximal data retention subject to reasonable performance on typical benchmarks such as the validity, reliability, and identifiability of functional connectivity. We find that stringent motion scrubbing worsens validity and reliability while producing only small improvements to fingerprinting. Meanwhile, data-driven scrubbing methods tend to yield greater improvements to fingerprinting without generally worsening validity or reliability. Importantly, data-driven scrubbing excludes only a fraction of the volumes and entire sessions that motion scrubbing does. The ability of data-driven fMRI scrubbing to improve data retention without negatively impacting the quality of downstream analysis has major implications for sample sizes in population neuroscience research.
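The core idea of data-driven scrubbing, projecting the timeseries into a low-dimensional subspace and flagging volumes with outlying leverage, can be sketched loosely. This simplified version substitutes PCA for the paper's ICA step and uses an arbitrary median-based cutoff; both are assumptions for illustration only.

```python
# Loose sketch of leverage-based scrubbing: project volumes onto a few
# components, score each volume's leverage, flag outliers. PCA stands in
# for the paper's ICA step; the cutoff (4x median) is arbitrary.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
T, V = 200, 500                        # volumes x voxels (synthetic)
data = rng.normal(0, 1, (T, V))
data[50] += 5.0                        # inject two artifactual volumes
data[120] += 4.0

scores = PCA(n_components=10).fit_transform(data)
# leverage: mean of per-component variance-normalized squared scores
leverage = (scores**2 / (scores**2).mean(axis=0)).mean(axis=1)
flagged = np.flatnonzero(leverage > 4 * np.median(leverage))
print("flagged volumes:", flagged)
```

The injected volumes dominate a component and receive large leverage, so they are flagged while ordinary volumes are retained, which is the data-retention advantage the abstract emphasizes.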


Subjects
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Reproducibility of Results , Image Processing, Computer-Assisted/methods , Artifacts , Motion (Physics) , Brain/diagnostic imaging , Brain Mapping/methods
10.
Proc Natl Acad Sci U S A ; 119(32): e2203020119, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35925887

ABSTRACT

Inference in neuroimaging typically occurs at the level of focal brain areas or circuits. Yet, increasingly, well-powered studies paint a much richer picture of broad-scale effects distributed throughout the brain, suggesting that many focal reports may only reflect the tip of the iceberg of underlying effects. How focal versus broad-scale perspectives influence the inferences we make has not yet been comprehensively evaluated using real data. Here, we compare sensitivity and specificity across procedures representing multiple levels of inference using an empirical benchmarking procedure that resamples task-based connectomes from the Human Connectome Project dataset (∼1,000 subjects, 7 tasks, 3 resampling group sizes, 7 inferential procedures). Only broad-scale (network and whole brain) procedures obtained the traditional 80% statistical power level to detect an average effect, reflecting >20% more statistical power than focal (edge and cluster) procedures. Power also increased substantially for false discovery rate (FDR)-controlling procedures compared with familywise error rate (FWER)-controlling procedures. The downsides are fairly limited; the loss in specificity for broad-scale and FDR procedures was relatively modest compared with the gains in power. Furthermore, the broad-scale methods we introduce are simple, fast, and easy to use, providing a straightforward starting point for researchers. This also points to the promise of more sophisticated broad-scale methods for not only functional connectivity but also related fields, including task-based activation. Altogether, this work demonstrates that shifting the scale of inference and choosing FDR control are both immediately attainable and can help remedy the issues with statistical power plaguing typical studies in the field.
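The FDR-versus-FWER power gap noted above is easy to demonstrate on simulated p-values. The numbers below are synthetic (900 nulls, 100 true effects), not from the paper: on the same data, Benjamini-Hochberg (FDR control) typically declares many more true effects than Bonferroni (FWER control).

```python
# Illustration of the FDR-vs-FWER power gap on simulated p-values:
# Bonferroni (FWER) vs. the Benjamini-Hochberg step-up procedure (FDR).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m_null, m_alt = 900, 100
z = np.concatenate([rng.normal(0, 1, m_null), rng.normal(3, 1, m_alt)])
p = stats.norm.sf(z)                    # one-sided p-values

bonf = p < 0.05 / p.size                # Bonferroni: reject p < alpha/m

order = np.argsort(p)                   # BH step-up: largest k with
ranked = p[order] <= 0.05 * np.arange(1, p.size + 1) / p.size  # p_(k) <= k*alpha/m
k = np.max(np.nonzero(ranked)[0]) + 1 if ranked.any() else 0
bh = np.zeros(p.size, bool)
bh[order[:k]] = True                    # reject the k smallest p-values

print(f"Bonferroni discoveries: {bonf.sum()}, BH discoveries: {bh.sum()}")
```

Bonferroni's rejections are always a subset of BH's here, which is why the specificity cost of FDR control is bounded while the power gain can be large.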


Subjects
Connectome , Magnetic Resonance Imaging , Brain/physiology , Connectome/methods , Humans , Magnetic Resonance Imaging/methods
11.
Neuroimage ; 260: 119434, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35792293

ABSTRACT

BACKGROUND: Classic psychedelics, such as psilocybin and LSD, and other serotonin 2A receptor (5-HT2AR) agonists evoke acute alterations in perception and cognition. Altered thalamocortical connectivity has been hypothesized to underlie these effects, which is supported by some functional MRI (fMRI) studies. These studies have treated the thalamus as a unitary structure, despite known differential 5-HT2AR expression and functional specificity of different intrathalamic nuclei. Independent Component Analysis (ICA) has been previously used to identify reliable group-level functional subdivisions of the thalamus from resting-state fMRI (rsfMRI) data. We build on these efforts with a novel data-maximizing ICA-based approach to examine psilocybin-induced changes in intrathalamic functional organization and thalamocortical connectivity in individual participants. METHODS: Baseline rsfMRI data (n=38) from healthy individuals with a long-term meditation practice were utilized to generate a statistical template of thalamic functional subdivisions. This template was then applied in a novel ICA-based analysis of the acute effects of psilocybin on intra- and extra-thalamic functional organization and connectivity in follow-up scans from a subset of the same individuals (n=18). We examined correlations with subjective reports of drug effect and compared with a previously reported analytic approach (treating the thalamus as a single functional unit). RESULTS: Several intrathalamic components showed significant psilocybin-induced alterations in spatial organization, with effects of psilocybin largely localized to the mediodorsal and pulvinar nuclei. The magnitude of changes in individual participants correlated with reported subjective effects. These components demonstrated predominant decreases in thalamocortical connectivity, largely with visual and default mode networks. Analysis in which the thalamus is treated as a single unitary structure showed an overall numerical increase in thalamocortical connectivity, consistent with previous literature using this approach, but this increase did not reach statistical significance. CONCLUSIONS: We utilized a novel analytic approach to discover psilocybin-induced changes in intra- and extra-thalamic functional organization and connectivity of intrathalamic nuclei and cortical networks known to express the 5-HT2AR. These changes were not observed using whole-thalamus analyses, suggesting that psilocybin may cause widespread but modest increases in thalamocortical connectivity that are offset by strong focal decreases in functionally relevant intrathalamic nuclei.


Subjects
Psilocybin , Serotonin , Cerebral Cortex/physiology , Humans , Magnetic Resonance Imaging , Neural Pathways/physiology , Psilocybin/pharmacology , Rest , Thalamus/physiology
12.
Neuroimage ; 255: 119180, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35395402

ABSTRACT

Longitudinal fMRI studies hold great promise for the study of neurodegenerative diseases, development and aging, but realizing their full potential depends on extracting accurate fMRI-based measures of brain function and organization in individual subjects over time. This is especially true for studies of rare, heterogeneous and/or rapidly progressing neurodegenerative diseases. These often involve small samples with heterogeneous functional features, making traditional group-difference analyses of limited utility. One such disease is amyotrophic lateral sclerosis (ALS), a severe disease resulting in extreme loss of motor function and eventual death. Here, we use an advanced individualized task fMRI analysis approach to analyze a rich longitudinal dataset containing 190 hand clench fMRI scans from 16 ALS patients (78 scans) and 22 age-matched healthy controls (112 scans). Specifically, we adopt our cortical surface-based spatial Bayesian general linear model (GLM), which has high power and precision to detect activations in individual subjects, and propose a novel longitudinal extension to leverage information shared across visits. We perform all analyses in native surface space to preserve individual anatomical and functional features. Using mixed-effects models to subsequently study the relationship between size of activation and ALS disease progression, we observe for the first time an inverted U-shaped trajectory of motor activations: at relatively mild motor disability, activations enlarge, while at higher levels of motor disability, activation is severely diminished, reflecting progression toward complete loss of motor function. We further observe distinct trajectories depending on clinical progression rate, with faster progressors exhibiting more extreme changes at an earlier stage of disability. These differential trajectories suggest that initial hyper-activation is likely attributable to loss of inhibitory neurons, rather than functional compensation as earlier assumed. These findings substantially advance scientific understanding of the ALS disease process. This study also provides the first real-world example of how surface-based spatial Bayesian analysis of task fMRI can further scientific understanding of neurodegenerative disease and other phenomena. The surface-based spatial Bayesian GLM is implemented in the BayesfMRI R package.


Subjects
Amyotrophic Lateral Sclerosis , Disabled Persons , Motor Disorders , Neurodegenerative Diseases , Amyotrophic Lateral Sclerosis/diagnostic imaging , Bayes Theorem , Disease Progression , Humans , Linear Models , Magnetic Resonance Imaging , Neurodegenerative Diseases/diagnostic imaging
13.
Neuroimage ; 249: 118908, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35032660

ABSTRACT

The general linear model (GLM) is a widely popular and convenient tool for estimating the functional brain response and identifying areas of significant activation during a task or stimulus. However, the classical GLM is based on a massive univariate approach that does not explicitly leverage the similarity of activation patterns among neighboring brain locations. As a result, it tends to produce noisy estimates and be underpowered to detect significant activations, particularly in individual subjects and small groups. A recently proposed alternative, a cortical surface-based spatial Bayesian GLM, leverages spatial dependencies among neighboring cortical vertices to produce more accurate estimates and areas of functional activation. The spatial Bayesian GLM can be applied to individual and group-level analysis. In this study, we assess the reliability and power of individual and group-average measures of task activation produced via the surface-based spatial Bayesian GLM. We analyze motor task data from 45 subjects in the Human Connectome Project (HCP) and HCP Retest datasets. We also extend the model to multi-run analysis and employ subject-specific cortical surfaces rather than surfaces inflated to a sphere for more accurate distance-based modeling. Results show that the surface-based spatial Bayesian GLM produces highly reliable activations in individual subjects and is powerful enough to detect trait-like functional topologies. Additionally, spatial Bayesian modeling enhances reliability of group-level analysis even in moderately sized samples (n=45). Notably, the power of the spatial Bayesian GLM to detect activations above a scientifically meaningful effect size is nearly invariant to sample size, exhibiting high power even in small samples (n=10). The spatial Bayesian GLM is computationally efficient in individuals and groups and is convenient to implement with the open-source BayesfMRI R package.
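For readers unfamiliar with the "massive univariate" baseline that the spatial Bayesian GLM improves on, here is a minimal sketch: one ordinary least squares fit per vertex against a task regressor, with no pooling across neighbors. The block design, noise level, and vertex count are synthetic placeholders, and the boxcar regressor omits HRF convolution for brevity.

```python
# Minimal "massive univariate" GLM: independent OLS fit at each vertex.
# Synthetic toy data; real analyses would convolve the design with an HRF.
import numpy as np

rng = np.random.default_rng(0)
T, V = 120, 50
task = np.tile([0] * 10 + [1] * 10, 6).astype(float)  # toy block design
X = np.column_stack([np.ones(T), task])               # intercept + task

beta_true = np.zeros(V)
beta_true[:10] = 1.0                                  # 10 "active" vertices
Y = task[:, None] * beta_true + rng.normal(0, 0.5, (T, V))

# one least-squares fit per vertex; row 1 is the task amplitude estimate
beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0][1]
print("mean estimate, active vs. inactive:",
      beta_hat[:10].mean().round(2), beta_hat[10:].mean().round(2))
```

Because each vertex is fit in isolation, the estimates are noisy; the spatial Bayesian GLM described above shrinks them toward their neighbors to gain precision.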


Subjects
Cerebral Cortex/diagnostic imaging , Cerebral Cortex/physiology , Connectome/standards , Magnetic Resonance Imaging/standards , Models, Theoretical , Task Performance and Analysis , Adult , Bayes Theorem , Connectome/methods , Humans , Linear Models , Magnetic Resonance Imaging/methods , Reproducibility of Results
14.
Neuroimage ; 250: 118877, 2022 04 15.
Article in English | MEDLINE | ID: mdl-35051581

ABSTRACT

There is significant interest in adopting surface- and grayordinate-based analysis of MR data for a number of reasons, including improved whole-cortex visualization, the ability to perform surface smoothing to avoid issues associated with volumetric smoothing, improved inter-subject alignment, and reduced dimensionality. The CIFTI grayordinate file format introduced by the Human Connectome Project further advances grayordinate-based analysis by combining gray matter data from the left and right cortical hemispheres with gray matter data from the subcortex and cerebellum into a single file. Analyses performed in grayordinate space are well-suited to leverage information shared across the brain and across subjects through both traditional analysis techniques and more advanced statistical methods, including Bayesian methods. The R statistical environment facilitates use of advanced statistical techniques, yet little support for grayordinate analysis has previously been available in R. Indeed, few comprehensive programmatic tools for working with CIFTI files have been available in any language. Here, we present the ciftiTools R package, which provides a unified environment for reading, writing, visualizing, and manipulating CIFTI files and related data formats. We illustrate ciftiTools' convenient and user-friendly suite of tools for working with grayordinates and surface geometry data in R, and we describe how ciftiTools is being utilized to advance the statistical analysis of grayordinate-based functional MRI data.


Subjects
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neuroimaging , Connectome , Data Interpretation, Statistical , Humans , Software
15.
Biometrics ; 78(3): 1109-1112, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34897649

ABSTRACT

I applaud the authors on their innovative generalized independent component analysis (ICA) framework for neuroimaging data. Although ICA has enjoyed great popularity for the analysis of functional magnetic resonance imaging (fMRI) data, its applicability to other modalities has been limited because standard ICA algorithms may not be directly applicable to a diversity of data representations. This is particularly true for single-subject structural neuroimaging, where only a single measurement is collected at each location in the brain. The ingenious idea of Wu et al. (2021) is to transform the data to a vector of probabilities via a mixture distribution with K components, which (following a simple transformation to $\mathbb{R}^{K-1}$) can be directly analyzed with standard ICA algorithms, such as infomax (Bell and Sejnowski, 1995) or fastICA (Hyvarinen, 1999). The underlying distribution forming the basis of the mixture is customized to the particular modality being analyzed. This framework, termed distributional ICA (DICA), is applicable in theory to nearly any neuroimaging modality. This has substantial implications for ICA as a general tool for neuroimaging analysis, with particular promise for structural modalities and multimodal studies. This invited commentary focuses on the applicability and potential of DICA for different neuroimaging modalities, questions around details of implementation and performance, and limitations of the validation study presented in the paper.


Subjects
Algorithms , Magnetic Resonance Imaging , Brain/diagnostic imaging , Brain Mapping/methods , Magnetic Resonance Imaging/methods , Neuroimaging , Principal Component Analysis
16.
Front Neurosci ; 16: 1051424, 2022.
Article in English | MEDLINE | ID: mdl-36685218

ABSTRACT

Introduction: Analysis of task fMRI studies is typically based on ordinary least squares within a voxel- or vertex-wise linear regression framework known as the general linear model. This produces estimates and standard errors of the regression coefficients representing amplitudes of task-induced activations. To produce valid statistical inferences, several key statistical assumptions must be met, including that of independent residuals. Since task fMRI residuals often exhibit temporal autocorrelation, it is common practice to perform "prewhitening" to mitigate that dependence. Prewhitening involves estimating the residual correlation structure and then applying a filter to induce residual temporal independence. While theoretically straightforward, a major challenge in prewhitening for fMRI data is accurately estimating the residual autocorrelation at each voxel or vertex of the brain. Assuming a global model for autocorrelation, which is the default in several standard fMRI software tools, may under- or over-whiten in certain areas and produce differential false positive control across the brain. The increasing popularity of multiband acquisitions with faster temporal resolution increases the challenge of effective prewhitening because more complex models are required to accurately capture the strength and structure of autocorrelation. These issues are becoming more critical now because of a trend toward subject-level analysis and inference. In group-average or group-difference analyses, the within-subject residual correlation structure is accounted for implicitly, so inadequate prewhitening is of little real consequence. For individual subject inference, however, accurate prewhitening is crucial to avoid inflated or spatially variable false positive rates. Methods: In this paper, we first thoroughly examine the patterns, sources, and strength of residual autocorrelation in multiband task fMRI data. Second, we evaluate the ability of different autoregressive (AR) model-based prewhitening strategies to effectively mitigate autocorrelation and control false positives. We consider two main factors: the choice of AR model order and the level of spatial regularization of AR model coefficients, ranging from local smoothing to global averaging. We also consider determining the AR model order optimally at every vertex, but we do not observe an additional benefit of this over the use of higher-order AR models (e.g., AR(6)). To overcome the computational challenge associated with spatially variable prewhitening, we developed a computationally efficient R implementation using parallelization and fast C++ backend code. This implementation is included in the open-source R package BayesfMRI. Results: We find that residual autocorrelation exhibits marked spatial variability across the cortex and is influenced by many factors, including the task being performed, the specific acquisition protocol, mis-modeling of the hemodynamic response function, unmodeled noise due to subject head motion, and systematic individual differences. We also find that local regularization is much more effective than global averaging at mitigating autocorrelation. While increasing the AR model order is also helpful, it has a lesser effect than allowing AR coefficients to vary spatially. We find that prewhitening with an AR(6) model with local regularization is effective at reducing or even eliminating autocorrelation and controlling false positives. Conclusion: Our analysis revealed dramatic spatial differences in autocorrelation across the cortex. This spatial topology is unique to each session, being influenced by the task being performed, the acquisition technique, various modeling choices, and individual differences. If not accounted for, these differences will result in differential false positive control and power across the cortex and across subjects.
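The core prewhitening operation at a single vertex can be sketched in a stripped-down form: estimate AR coefficients from the residual series, then filter so that the innovations are approximately white. This is only the kernel of the procedure; the paper's actual approach (AR(6) with spatial regularization of coefficients) is far more elaborate, and the AR(2) order and simulation parameters below are arbitrary.

```python
# Stripped-down AR-based prewhitening for one vertex: fit AR(p) by least
# squares on lagged values, then remove the predicted part to whiten.
import numpy as np

rng = np.random.default_rng(0)
T, p = 500, 2
phi_true = np.array([0.5, 0.2])
e = rng.normal(0, 1, T)
x = np.zeros(T)
for t in range(2, T):                  # simulate an AR(2) residual series
    x[t] = phi_true[0] * x[t - 1] + phi_true[1] * x[t - 2] + e[t]

def lag1(v):                           # lag-1 autocorrelation
    v = v - v.mean()
    return (v[1:] @ v[:-1]) / (v @ v)

# estimate AR(p): regress x[t] on x[t-1], ..., x[t-p]
X = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])
phi_hat = np.linalg.lstsq(X, x[p:], rcond=None)[0]

w = x[p:] - X @ phi_hat                # prewhitened innovations
print(f"lag-1 autocorr before: {lag1(x):.2f}, after: {lag1(w):.2f}")
```

The before/after lag-1 autocorrelation shows the filter doing its job; in the paper's setting this estimation is repeated (and spatially regularized) across every cortical vertex.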

17.
J Am Stat Assoc ; 115(530): 501-520, 2020.
Article in English | MEDLINE | ID: mdl-33060871

ABSTRACT

Cortical surface fMRI (cs-fMRI) has recently grown in popularity versus traditional volumetric fMRI. In addition to offering better whole-brain visualization, dimension reduction, removal of extraneous tissue types, and improved alignment of cortical areas across subjects, it is also more compatible with common assumptions of Bayesian spatial models. However, as no spatial Bayesian model has been proposed for cs-fMRI data, most analyses continue to employ the classical general linear model (GLM), a "massive univariate" approach. Here, we propose a spatial Bayesian GLM for cs-fMRI, which employs a class of sophisticated spatial processes to model latent activation fields. We make several advances compared with existing spatial Bayesian models for volumetric fMRI. First, we use the integrated nested Laplace approximation (INLA), a highly accurate and efficient Bayesian computation technique, rather than variational Bayes (VB). To identify regions of activation, we utilize an excursion set method based on the joint posterior distribution of the latent fields, rather than the marginal distribution at each location. Finally, we propose the first multi-subject spatial Bayesian modeling approach, which addresses a major gap in the existing literature. The methods are computationally advantageous and are validated through simulation studies and two task fMRI studies from the Human Connectome Project.

18.
J Am Stat Assoc ; 115(531): 1151-1177, 2020.
Article in English | MEDLINE | ID: mdl-33060872

ABSTRACT

Large brain imaging databases contain a wealth of information on brain organization in the populations they target, and on individual variability. While such databases have been used to study group-level features of populations directly, they are currently underutilized as a resource to inform single-subject analysis. Here, we propose leveraging the information contained in large functional magnetic resonance imaging (fMRI) databases by establishing population priors to employ in an empirical Bayesian framework. We focus on estimation of brain networks as source signals in independent component analysis (ICA). We formulate a hierarchical "template" ICA model where source signals, including known population brain networks and subject-specific signals, are represented as latent variables. For estimation, we derive an expectation-maximization (EM) algorithm having an explicit solution. However, as this solution is computationally intractable, we also consider an approximate subspace algorithm and a faster two-stage approach. Through extensive simulation studies, we assess performance of both methods and compare with dual regression, a popular but ad hoc method. The two proposed algorithms have similar performance, and both dramatically outperform dual regression. We also conduct a reliability study utilizing the Human Connectome Project and find that template ICA achieves substantially better performance than dual regression, achieving 75-250% higher intra-subject reliability.
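Dual regression, the baseline that template ICA is compared against, is a simple two-stage least-squares procedure: first regress the group spatial maps onto the subject's data to obtain timecourses, then regress those timecourses onto the data to obtain subject-specific maps. A minimal numpy sketch (shapes and names are illustrative):

```python
import numpy as np

def dual_regression(Y, group_maps):
    """Two-stage dual regression.
    Y: (T, V) subject fMRI data; group_maps: (Q, V) group-level spatial maps.
    Returns (T, Q) timecourses and (Q, V) subject-specific maps."""
    A = group_maps.T                                # (V, Q) spatial design
    tcs, *_ = np.linalg.lstsq(A, Y.T, rcond=None)   # stage 1 -> (Q, T)
    tcs = tcs.T                                     # (T, Q) timecourses
    maps, *_ = np.linalg.lstsq(tcs, Y, rcond=None)  # stage 2 -> (Q, V) maps
    return tcs, maps
```

Unlike template ICA, this procedure has no explicit noise model or prior linking the subject maps to the population templates, which is one reason the abstract characterizes it as ad hoc.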

19.
Proc Natl Acad Sci U S A ; 117(39): 24154-24164, 2020 09 29.
Article in English | MEDLINE | ID: mdl-32929006

ABSTRACT

Science is undergoing rapid change, with the movement to improve science focused largely on reproducibility/replicability and open science practices. This moment of change-in which science turns inward to examine its methods and practices-provides an opportunity to address its historic lack of diversity and noninclusive culture. Through network modeling and semantic analysis, we provide an initial exploration of the structure, cultural frames, and women's participation in the open science and reproducibility literatures (n = 2,926 articles and conference proceedings). Network analyses suggest that the open science and reproducibility literatures are emerging relatively independently of each other, sharing few common papers or authors. We next examine whether the literatures differentially incorporate collaborative, prosocial ideals that are known to engage members of underrepresented groups more than independent, winner-takes-all approaches. We find that open science has a more connected, collaborative structure than does reproducibility. Semantic analyses of paper abstracts reveal that these literatures have adopted different cultural frames: open science includes more explicitly communal and prosocial language than does reproducibility. Finally, consistent with literature suggesting the diversity benefits of communal and prosocial purposes, we find that women publish more frequently in high-status author positions (first or last) within open science (vs. reproducibility). This finding is further patterned by team size and time: women are more represented in larger teams within reproducibility, and women's participation is increasing in open science over time and decreasing in reproducibility. We conclude with actionable suggestions for cultivating a more prosocial and diverse culture of science.


Subjects
Reproducibility of Results, Science/trends, Women, Authorship, Humans, Information Dissemination, Open Access Publishing
20.
Nat Commun ; 10(1): 4314, 2019 09 20.
Artigo em Inglês | MEDLINE | ID: mdl-31541096

RESUMO

Healthcare industry players make payments to medical providers for non-research expenses. While these payments may pose conflicts of interest, their relationship with overall healthcare costs remains largely unknown. In this study, we linked Open Payments data on providers' industry payments with Medicare data on healthcare costs. We investigated 374,766 providers' industry payments and healthcare costs. We demonstrate that providers receiving higher amounts of industry payments tend to bill higher drug and medical costs. Specifically, we find that a 10% increase in industry payments is associated with 1.3% higher medical and 1.8% higher drug costs. For a typical provider, for example, a 10% or $25 increase in annual industry payments would be associated with approximately $1,100 higher medical costs and $100 higher drug costs. Furthermore, the association between payments and healthcare costs varies markedly across states and correlates with political leaning, being stronger in more conservative states.
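The headline numbers above can be sanity-checked with simple arithmetic. The baseline figures below are back-solved from the reported percentages and dollar amounts; they are assumptions for illustration, not values stated in the abstract.

```python
# Reported associations: a 10% rise in industry payments corresponds to
# 1.3% higher medical costs and 1.8% higher drug costs (log-log elasticities).
medical_elasticity = 0.013 / 0.10  # 0.13
drug_elasticity = 0.018 / 0.10     # 0.18

# For the "typical provider," a 10% increase equals $25, so baseline
# annual industry payments are about $250 (back-solved, not stated).
baseline_payments = 25 / 0.10

# The stated ~$1,100 medical / ~$100 drug increases then imply these
# approximate baseline billing levels (back-solved, not stated):
implied_medical_costs = 1100 / 0.013  # about $84,600
implied_drug_costs = 100 / 0.018      # about $5,600
```

Note that these are associations from linked observational data, not causal effects; the back-solved baselines simply confirm that the percentage and dollar figures in the abstract are mutually consistent.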


Subjects
Conflict of Interest, Costs and Cost Analysis, Health Care Costs, Health Personnel/economics, Delivery of Health Care/economics, Drug Costs, Drug Industry/economics, Ethics, Medical, Health Expenditures, Health Services/economics, Humans, Medicare, Models, Theoretical, Public Health/economics, United States