Results 1-20 of 321
2.
bioRxiv ; 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38948863

ABSTRACT

Functional connectivity (FC) is the degree of synchrony of time series between distinct, spatially separated brain regions. While traditional FC analysis assumes temporal stationarity throughout a brain scan, there is growing recognition that connectivity can change over time, leading to the concept of dynamic FC (dFC). Resting-state functional magnetic resonance imaging (fMRI) can assess dFC using the sliding window method with correlation analysis of fMRI signals. Accurate statistical inference on sliding window correlations must account for the autocorrelated nature of the time series. Currently, dynamic considerations are mainly confined to point estimation of sliding window correlations. Using in vivo resting-state fMRI data, we first demonstrate nonstationarity in both the cross-correlation function (XCF) and the autocorrelation function (ACF). We then propose a variance estimator for the sliding window correlation that accounts for the nonstationarity of the XCF and ACF. This approach provides a means to dynamically estimate confidence intervals when assessing dynamic connectivity. Using simulations, we compare the performance of the proposed method with that of other methods, showing the impact of dynamic ACF and XCF on connectivity inference. Accurate variance estimation can help address the critical issue of false positives and false negatives.
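The sliding window correlation at the heart of this work can be sketched directly. This is a minimal illustration on synthetic signals whose coupling flips sign halfway through; the window length and noise level are arbitrary choices, and no autocorrelation correction (the paper's actual contribution) is attempted.

```python
import numpy as np

def sliding_window_correlation(x, y, window=30, step=1):
    """Pearson correlation of two time series within a moving window.

    Window length and step are illustrative; in fMRI practice they are
    chosen in TRs and strongly affect the estimate.
    """
    n = min(len(x), len(y))
    starts = range(0, n - window + 1, step)
    return np.array([np.corrcoef(x[s:s + window], y[s:s + window])[0, 1]
                     for s in starts])

# Two noisy signals whose coupling switches sign halfway through.
rng = np.random.default_rng(0)
t = np.arange(200)
common = np.sin(0.2 * t)
x = common + 0.5 * rng.standard_normal(200)
y = np.concatenate([common[:100], -common[100:]]) + 0.5 * rng.standard_normal(200)

r = sliding_window_correlation(x, y, window=30)
# Early windows show positive correlation, late windows negative,
# recovering the built-in connectivity change.
```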

3.
Crit Care ; 28(1): 217, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38961495

ABSTRACT

BACKGROUND: The outcomes of several randomized trials on extracorporeal cardiopulmonary resuscitation (ECPR) in patients with refractory out-of-hospital cardiac arrest were examined using frequentist methods, resulting in a dichotomous interpretation of results based on p-values rather than the probability of clinically relevant treatment effects. To determine such a probability of a clinically relevant ECPR-based treatment effect on neurological outcomes, the authors of these trials performed a Bayesian meta-analysis of the totality of randomized ECPR evidence. METHODS: A systematic search was applied to three electronic databases. Randomized trials that compared ECPR-based treatment with conventional CPR for refractory out-of-hospital cardiac arrest were included. The study was preregistered in INPLASY (INPLASY2023120060). The primary Bayesian hierarchical meta-analysis estimated the difference in 6-month neurologically favorable survival in patients with all rhythms, and a secondary analysis assessed this difference in patients with shockable rhythms (Bayesian hierarchical random-effects model). Primary Bayesian analyses were performed under vague priors. Outcomes were formulated as estimated median relative risks, mean absolute risk differences, and numbers needed to treat, with corresponding 95% credible intervals (CrIs). The posterior probabilities of various clinically relevant absolute risk difference thresholds were estimated. RESULTS: Three randomized trials were included in the analysis (ECPR, n = 209 patients; conventional CPR, n = 211 patients). The estimated median relative risk of ECPR for 6-month neurologically favorable survival was 1.47 (95%CrI 0.73-3.32) with a mean absolute risk difference of 8.7% (95%CrI -5.0 to 42.7%) in patients with all rhythms, and the median relative risk was 1.54 (95%CrI 0.79-3.71) with a mean absolute risk difference of 10.8% (95%CrI -4.2 to 73.9%) in patients with shockable rhythms. The posterior probabilities of an absolute risk difference > 0% and > 5% were 91.0% and 71.1% in patients with all rhythms, and 92.4% and 75.8% in patients with shockable rhythms, respectively. CONCLUSION: The current Bayesian meta-analysis found a 71.1% and 75.8% posterior probability of a clinically relevant ECPR-based treatment effect on 6-month neurologically favorable survival in patients with all rhythms and shockable rhythms, respectively. These results must be interpreted within the context of the reported credible intervals and the varying designs of the randomized trials. REGISTRATION: INPLASY (INPLASY2023120060, December 14th, 2023, https://doi.org/10.37766/inplasy2023.12.0060).
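The idea of reporting posterior probabilities of risk-difference thresholds can be sketched far more simply than the paper's hierarchical model: with independent beta-binomial posteriors per arm, P(ARD > threshold) is a Monte Carlo average. The event counts below are hypothetical, since the abstract reports only arm sizes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical arm-level counts (events / n) -- NOT the trial data;
# the abstract reports only the arm sizes (209 and 211).
ecpr_events, ecpr_n = 42, 209
ccpr_events, ccpr_n = 30, 211

# Vague Beta(1, 1) priors give Beta posteriors for each arm's
# 6-month favourable-survival probability.
p_ecpr = rng.beta(1 + ecpr_events, 1 + ecpr_n - ecpr_events, 100_000)
p_ccpr = rng.beta(1 + ccpr_events, 1 + ccpr_n - ccpr_events, 100_000)

ard = p_ecpr - p_ccpr                      # absolute risk difference draws
prob_gt_0 = (ard > 0.00).mean()            # P(ARD > 0%)
prob_gt_5 = (ard > 0.05).mean()            # P(ARD > 5%)
```

Unlike a p-value, these quantities answer the clinical question directly: the posterior probability that the treatment effect exceeds a chosen relevance threshold.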


Subjects
Bayes Theorem , Cardiopulmonary Resuscitation , Out-of-Hospital Cardiac Arrest , Humans , Out-of-Hospital Cardiac Arrest/therapy , Out-of-Hospital Cardiac Arrest/mortality , Cardiopulmonary Resuscitation/methods , Cardiopulmonary Resuscitation/standards , Extracorporeal Membrane Oxygenation/methods , Randomized Controlled Trials as Topic/methods , Treatment Outcome
5.
Stat Med ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38899515

ABSTRACT

Meta-analysis is an essential tool to comprehensively synthesize and quantitatively evaluate results of multiple clinical studies in evidence-based medicine. In many meta-analyses, the characteristics of some studies might markedly differ from those of the others, and these outlying studies can generate biases and potentially yield misleading results. In this article, we provide effective robust statistical inference methods using generalized likelihoods based on the density power divergence. The robust inference methods are designed to adjust the influences of outliers through modified estimating equations based on a robust criterion, even when multiple and serious influential outliers are present. We provide robust estimators, statistical tests, and confidence intervals via the generalized likelihoods for the fixed-effect and random-effects models of meta-analysis. We also assess the contribution rates of individual studies to the robust overall estimators, which indicate how the influences of outlying studies are adjusted. Through simulations and applications to two recently published systematic reviews, we demonstrate that the overall conclusions and interpretations of meta-analyses can change markedly when the robust inference methods are applied, and that conventional inference methods alone might produce misleading evidence. We recommend that these methods be used at least as a sensitivity analysis in the practice of meta-analysis. We have also developed an R package, robustmeta, that implements the robust inference methods.
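The flavour of density-power-divergence downweighting can be sketched for a fixed-effect pooled estimate. The exponential weight exp(-γz²/2) is the normal-model DPD kernel; everything else (the data, the tuning constant γ, the fixed-point iteration) is an illustrative simplification, not what the robustmeta package actually implements.

```python
import numpy as np

def dpd_fixed_effect(y, se, gamma=0.5, n_iter=100):
    """Robust fixed-effect pooled estimate via density-power-divergence
    style downweighting (illustrative sketch only)."""
    mu = np.average(y, weights=1 / se**2)       # start at inverse-variance mean
    for _ in range(n_iter):
        z2 = ((y - mu) / se) ** 2
        w = np.exp(-0.5 * gamma * z2) / se**2   # gross outliers get tiny weight
        mu = np.average(y, weights=w)
    return mu

# Five concordant study estimates plus one gross outlier.
y  = np.array([0.10, 0.12, 0.08, 0.11, 0.09, 1.50])
se = np.array([0.05, 0.04, 0.06, 0.05, 0.04, 0.05])

naive  = np.average(y, weights=1 / se**2)   # dragged toward the outlier
robust = dpd_fixed_effect(y, se)            # stays near the concordant five
```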

6.
Entropy (Basel) ; 26(6)2024 May 21.
Article in English | MEDLINE | ID: mdl-38920443

ABSTRACT

A road passenger transportation enterprise is a complex system, requiring a clear understanding of its active safety situation (ASS), trends, and influencing factors. This enables transportation authorities to promptly receive signals and take effective measures. Through exploratory factor analysis and confirmatory factor analysis, we identified potential factors for evaluating ASS and extracted an ASS index. To obtain a higher ASS information rate in prediction, we compared multiple time series models, including GRU (gated recurrent unit), LSTM (long short-term memory), ARIMA, Prophet, Conv-LSTM, and TCN (temporal convolutional network). We also proposed the WDA-DBN (water drop algorithm-deep belief network) model and employed DeepSHAP to identify factors with higher ASS information content. TCN and GRU performed well in the prediction task. Compared to the other models, WDA-DBN exhibited the best performance in terms of MSE and MAE. Overall, deep learning models outperform econometric models in terms of information processing. The total time spent processing alarms positively influences ASS, while variables such as fatigue driving occurrences, abnormal driving occurrences, and nighttime driving alarm occurrences have a negative impact on ASS.
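Model comparison by MSE and MAE, as used above, can be sketched with two naive forecasting baselines (persistence and drift) on a toy series. These stand in for the paper's deep learning and econometric models, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(9)
series = np.cumsum(rng.normal(0.1, 1, 200))      # toy trending series
train, test = series[:150], series[150:]

# Baseline 1: persistence (repeat the last observed value).
pred_persist = np.full_like(test, train[-1])
# Baseline 2: drift (extrapolate the mean training increment).
drift = (train[-1] - train[0]) / (len(train) - 1)
pred_drift = train[-1] + drift * np.arange(1, len(test) + 1)

def mse(a, b): return float(np.mean((a - b) ** 2))
def mae(a, b): return float(np.mean(np.abs(a - b)))

# Held-out error table, as in the paper's comparison.
scores = {"persistence": (mse(test, pred_persist), mae(test, pred_persist)),
          "drift": (mse(test, pred_drift), mae(test, pred_drift))}
```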

7.
Entropy (Basel) ; 26(6)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38920515

ABSTRACT

Information-theoretic (IT) and multi-model averaging (MMA) statistical approaches are widely used but suboptimal tools for pursuing a multifactorial approach (also known as the method of multiple working hypotheses) in ecology. (1) Conceptually, IT encourages ecologists to perform tests on sets of artificially simplified models. (2) MMA improves on IT model selection by implementing a simple form of shrinkage estimation (a way to make accurate predictions from a model with many parameters relative to the amount of data, by "shrinking" parameter estimates toward zero). However, other shrinkage estimators such as penalized regression or Bayesian hierarchical models with regularizing priors are more computationally efficient and better supported theoretically. (3) In general, the procedures for extracting confidence intervals from MMA are overconfident, providing overly narrow intervals. If researchers want to use limited data sets to accurately estimate the strength of multiple competing ecological processes along with reliable confidence intervals, the current best approach is to use full (maximal) statistical models (possibly with Bayesian priors) after making principled, a priori decisions about model complexity.
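Shrinkage estimation, mentioned in point (2), is most easily seen in ridge regression, which the abstract cites among the better-supported alternatives to MMA's implicit shrinkage. A minimal sketch with synthetic data and an arbitrary penalty:

```python
import numpy as np

def ridge(X, y, lam=1.0):
    """Ridge (L2-penalized) regression: shrinks coefficients toward zero.
    Closed form: beta = (X'X + lam*I)^{-1} X'y.
    No intercept; X is assumed centered/standardized."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [2.0, -1.0]                 # only two real effects
y = X @ beta_true + rng.standard_normal(n)

b_ols   = ridge(X, y, lam=0.0)              # ordinary least squares
b_ridge = ridge(X, y, lam=10.0)             # shrunken estimates
```

With many weak candidate predictors and limited data, the shrunken fit trades a little bias for a large reduction in variance, which is exactly the property the abstract credits to penalized regression.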

8.
Rev. neurol. (Ed. impr.) ; 78(7): 209-211, Jan-Jun 2024.
Article in Spanish | IBECS | ID: ibc-232183

ABSTRACT



Leading scientific journals in fields such as medicine, biology and sociology repeatedly publish articles and editorials claiming that a large percentage of doctors do not understand the basics of statistical analysis, which increases the risk of errors in interpreting data, makes them more vulnerable to misinformation and reduces the effectiveness of research. This problem extends throughout their careers and is largely due to the poor training they receive in statistics, a problem that is common in developed countries. As stated by H. Haller and S. Krauss, '90% of German university lecturers who regularly use the p-value in tests do not understand what that value actually measures'. It is important to note that the basic reasoning of statistical analysis is similar to what we do in our daily lives, and that understanding the basic concepts of statistical analysis does not require any knowledge of mathematics. Contrary to what many researchers believe, the p-value of the test is not a 'mathematical index' that allows us to clearly conclude whether, for example, a drug is more effective than a placebo. The p-value of the test is simply a percentage.
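The closing claim, that a p-value is simply a percentage, can be made concrete by simulation: it is the proportion of null-generated experiments producing a result at least as extreme as the observed one. A sketch under an assumed normal model with invented group sizes:

```python
import numpy as np

rng = np.random.default_rng(7)

# One observed experiment: drug vs placebo, both drawn from the SAME
# distribution, so the null hypothesis is true by construction.
drug, placebo = rng.standard_normal(30), rng.standard_normal(30)
observed_diff = abs(drug.mean() - placebo.mean())

# The p-value as a percentage: how often would chance alone produce a
# group difference at least this large?
n_sim = 20_000
null_diffs = np.abs(rng.standard_normal((n_sim, 30)).mean(axis=1)
                    - rng.standard_normal((n_sim, 30)).mean(axis=1))
p_value = (null_diffs >= observed_diff).mean()
```

Nothing in this computation says whether the drug works; it only quantifies how surprising the observed difference would be if chance alone were operating.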


Subjects
Humans , Male , Female , Biomedical Research , Periodicals as Topic , Scientific and Technical Publications , Hypothesis Testing , Predictive Value of Tests
9.
Psychometrika ; 89(2): 542-568, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38664342

ABSTRACT

When analyzing data, researchers make choices that are either arbitrary, based on subjective beliefs about the data-generating process, or for which equally justifiable alternatives exist. This wide range of data-analytic choices can be abused and has been one of the underlying causes of the replication crisis in several fields. The recently introduced multiverse analysis provides researchers with a method to evaluate the stability of results across the reasonable choices that could be made when analyzing data. However, multiverse analysis is confined to a descriptive role, lacking a proper and comprehensive inferential procedure. Specification curve analysis adds an inferential procedure to multiverse analysis, but this approach is limited to simple cases related to the linear model, and it only allows researchers to infer whether at least one specification rejects the null hypothesis, not which specifications should be selected. In this paper, we present a Post-selection Inference approach to Multiverse Analysis (PIMA), a flexible and general inferential approach that considers all possible models, i.e., the multiverse of reasonable analyses. The approach allows for a wide range of data specifications (i.e., preprocessing) and any generalized linear model; it allows testing the null hypothesis that a given predictor is not associated with the outcome by combining information from all reasonable models of the multiverse analysis, and it provides strong control of the family-wise error rate, allowing researchers to claim that the null hypothesis can be rejected for any specification that shows a significant effect. The inferential proposal is based on a conditional resampling procedure. We formally prove that the Type I error rate is controlled and compute the statistical power of the test through a simulation study. Finally, we apply the PIMA procedure to the analysis of a real dataset on self-reported hesitancy for the COronaVIrus Disease 2019 (COVID-19) vaccine before and after the 2020 lockdown in Italy. We conclude with practical recommendations to be considered when implementing the proposed procedure.
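The max-statistic logic behind multiverse-wide error control can be sketched with sign-flipping resampling. This toy version (three invented specifications, absolute correlation as the test statistic, symmetric errors assumed) is far simpler than PIMA's conditional resampling, but it shows the principle: calibrate each specification against the null maximum across all specifications.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.standard_normal(n)
y = 0.4 * x + rng.standard_normal(n)        # the predictor truly matters

# A toy "multiverse": the same association tested under several
# preprocessing specifications (illustrative, not PIMA itself).
specs = [y,
         np.log1p(y - y.min()),                      # monotone transform
         np.where(np.abs(y) < 3, y, np.nan)]         # outlier removal

def corr_stat(x, y):
    m = ~np.isnan(y)
    return abs(np.corrcoef(x[m], y[m])[0, 1])

obs = np.array([corr_stat(x, s) for s in specs])

# Sign-flip x under H0 and take the max statistic over specifications;
# comparing each observed statistic to this null maximum controls the
# family-wise error rate across the whole multiverse.
n_perm = 2000
max_null = np.empty(n_perm)
for b in range(n_perm):
    xf = x * rng.choice([-1, 1], size=n)
    max_null[b] = max(corr_stat(xf, s) for s in specs)

adj_p = [(max_null >= o).mean() for o in obs]
```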


Subjects
Psychometrics , Humans , Psychometrics/methods , Statistical Models , Statistical Data Interpretation , COVID-19/epidemiology , Linear Models , Computer Simulation
10.
Proc Natl Acad Sci U S A ; 121(15): e2322083121, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38568975

ABSTRACT

While reliable data-driven decision-making hinges on high-quality labeled data, the acquisition of quality labels often involves laborious human annotations or slow and expensive scientific measurements. Machine learning is becoming an appealing alternative as sophisticated predictive techniques are being used to quickly and cheaply produce large amounts of predicted labels; e.g., predicted protein structures are used to supplement experimentally derived structures, predictions of socioeconomic indicators from satellite imagery are used to supplement accurate survey data, and so on. Since predictions are imperfect and potentially biased, this practice brings into question the validity of downstream inferences. We introduce cross-prediction: a method for valid inference powered by machine learning. With a small labeled dataset and a large unlabeled dataset, cross-prediction imputes the missing labels via machine learning and applies a form of debiasing to remedy the prediction inaccuracies. The resulting inferences achieve the desired error probability and are more powerful than those that only leverage the labeled data. Closely related is the recent proposal of prediction-powered inference [A. N. Angelopoulos, S. Bates, C. Fannjiang, M. I. Jordan, T. Zrnic, Science 382, 669-674 (2023)], which assumes that a good pretrained model is already available. We show that cross-prediction is consistently more powerful than an adaptation of prediction-powered inference in which a fraction of the labeled data is split off and used to train the model. Finally, we observe that cross-prediction gives more stable conclusions than its competitors; its CIs typically have significantly lower variability.
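The core correction can be sketched in a few lines: use the model's predictions on the large unlabeled set, then rectify them with the average residual measured on the small labeled set. This stylized version assumes a fixed, deliberately biased predictor, whereas cross-prediction itself fits the model by cross-fitting on the labeled data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ground truth: labels depend on a feature; the true label mean is 0.
n_lab, n_unlab = 200, 10_000
x_lab, x_unlab = rng.standard_normal(n_lab), rng.standard_normal(n_unlab)
y_lab = 2.0 * x_lab + rng.standard_normal(n_lab)

def predictor(x):
    return 2.0 * x + 0.5        # systematically biased upward by 0.5

# Naive: trust predicted labels on the big unlabeled set (inherits bias).
naive = predictor(x_unlab).mean()

# Debiased (prediction-powered style): correct the predicted-label mean
# by the average residual observed on the labeled set.
rectifier = (y_lab - predictor(x_lab)).mean()
debiased = predictor(x_unlab).mean() + rectifier
```

The debiased estimate is centered on the truth regardless of the predictor's bias, while still exploiting the unlabeled data for precision.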

11.
Prog Transplant ; 34(1-2): 58-59, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38449093
12.
Comput Biol Med ; 173: 108349, 2024 May.
Article in English | MEDLINE | ID: mdl-38547660

ABSTRACT

BACKGROUND: Ventilator dyssynchrony (VD) can worsen lung injury and is challenging to detect and quantify due to the complex variability in the dyssynchronous breaths. While machine learning (ML) approaches are useful for automating VD detection from the ventilator waveform data, scalable severity quantification and its association with pathogenesis and ventilator mechanics remain challenging. OBJECTIVE: We develop a systematic framework to quantify pathophysiological features observed in ventilator waveform signals such that they can be used to create feature-based severity stratification of VD breaths. METHODS: A mathematical model was developed to represent the pressure and volume waveforms of individual breaths in a feature-based parametric form. Model estimates of respiratory effort strength were used to assess the severity of flow-limited (FL)-VD breaths compared to normal breaths. A total of 93,007 breath waveforms from 13 patients were analyzed. RESULTS: A novel model-defined continuous severity marker was developed and used to estimate breath phenotypes of FL-VD breaths. The phenotypes had a predictive accuracy of over 97% with respect to the previously developed ML-VD identification algorithm. To understand the incidence of FL-VD breaths and their association with the patient state, these phenotypes were further successfully correlated with ventilator-measured parameters and electronic health records. CONCLUSION: This work provides a computational pipeline to identify and quantify the severity of FL-VD breaths and paves the way for a large-scale study of VD causes and effects. This approach has direct application to clinical practice and in meaningful knowledge extraction from the ventilator waveform data.
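The feature-based parametric representation can be illustrated in miniature: reduce a synthetic pressure trace to a few waveform features. The waveform shape, feature names, and numbers below are invented for illustration; the paper's breath model is far richer.

```python
import numpy as np

# One synthetic 3-second breath: linear pressure rise, short plateau,
# return to PEEP-like baseline (all values illustrative).
t = np.linspace(0, 3, 300)
pressure = np.where(t < 1, 5 + 15 * t, np.where(t < 1.5, 20.0, 5.0))

# Two toy waveform features of the kind a parametric breath model
# might expose for severity stratification.
peak = pressure.max()                              # peak pressure
rise_time = t[np.argmax(pressure >= 0.9 * peak)]   # time to 90% of peak
```

In the paper's framework such per-breath parameters (for pressure and volume jointly) feed a continuous severity marker; here they simply show the waveform-to-feature reduction.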


Subjects
Lung Injury , Humans , Mechanical Ventilators , Lung/physiology , Artificial Respiration/methods
13.
Philos Trans A Math Phys Eng Sci ; 382(2270): 20230140, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38403052

ABSTRACT

The collective statistics of voting on judicial courts present hints about their inner workings. Many approaches for studying these statistics, however, assume that judges' decisions are conditionally independent: a judge reaches a decision based on the case at hand and his or her personal views. In reality, judges interact. We develop a minimal model that accounts for judge bias, depending on the context of the case, and peer interaction. We apply the model to voting data from the US Supreme Court. We find strong evidence that interaction is an important factor across natural courts from 1946 to 2021. We also find that, after accounting for interaction, the recovered biases differ from highly cited ideological scores. Our method exemplifies how physics and complexity-inspired modelling can drive the development of theoretical models and improved measures for political voting. This article is part of the theme issue 'A complexity science approach to law and governance'.

16.
Stat Med ; 43(6): 1103-1118, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38183296

ABSTRACT

Regression modeling is the workhorse of statistics, and there is a vast literature on estimation of the regression function. It has been recognized in recent years that the ultimate aim of a regression analysis may be the estimation of a level set of the regression function, i.e., the set of covariate values for which the regression function exceeds a predefined level, rather than the estimation of the regression function itself. The published work on estimation of level sets has thus far focused mainly on nonparametric regression, especially on point estimation. In this article, the construction of confidence sets for the level set of linear regression is considered. In particular, 1-α level upper, lower, and two-sided confidence sets are constructed for normal-error linear regression. It is shown that these confidence sets can be easily constructed from the corresponding 1-α level simultaneous confidence bands. It is also pointed out that the construction method is readily applicable to other parametric regression models where the mean response depends on a linear predictor through a monotonic link function, including generalized linear models, linear mixed models, and generalized linear mixed models. Therefore, the method proposed in this article is widely applicable. Simulation studies with both linear and generalized linear models are conducted to assess the method, and real examples are used to illustrate it.
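The band-to-level-set construction is easy to sketch for simple linear regression: a simultaneous confidence band immediately yields inner and outer confidence sets for {x : m(x) > c}. This illustration uses a Scheffé-type band with a hardcoded large-sample critical constant, not the article's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
x = np.linspace(0, 10, n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)   # true line crosses level c=4 at x=6

# Fit simple linear regression.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
XtX_inv = np.linalg.inv(X.T @ X)

# Simultaneous 95% Scheffé-type band: yhat(x) +/- k*se(x), with
# k = sqrt(2 * F_{2,inf;0.95}) ~ 2.448 for large n (2 parameters).
k = 2.448
grid = np.linspace(0, 10, 501)
G = np.column_stack([np.ones_like(grid), grid])
se = np.sqrt(s2 * np.einsum('ij,jk,ik->i', G, XtX_inv, G))
lower = G @ beta - k * se
upper = G @ beta + k * se

# Confidence sets for the level set {x : m(x) > c}: where even the
# lower band exceeds c (inner set), and where the upper band does
# (outer set); the true level set lies between them simultaneously
# with ~95% confidence.
c = 4.0
inner = grid[lower > c]
outer = grid[upper > c]
```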


Subjects
Statistical Models , Humans , Linear Models , Regression Analysis , Computer Simulation
17.
Rev. neurol. (Ed. impr.) ; 78(1), 1-15 Jan 2024.
Article in Spanish | IBECS | ID: ibc-229062

ABSTRACT



A very common practice in medical research, during the process of data analysis, is to dichotomise numerical variables into two groups. This leads to the loss of very useful information and can undermine the effectiveness of the research. Several examples are used to show how the dichotomisation of numerical variables can lead to a loss of statistical power in studies. This can be a critical issue when assessing, for example, whether a therapeutic procedure is more effective or whether a certain factor is a risk factor. Dichotomising continuous variables is therefore not recommended unless there is a very specific reason to do so.
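The power loss from dichotomization is easy to see by simulation. A minimal sketch (assumed normal data, large-sample z-tests, a median split) compares the chance of detecting the same mean difference before and after dichotomizing:

```python
import numpy as np

rng = np.random.default_rng(2)

def power(n=50, delta=0.5, sims=2000):
    """Power to detect a mean difference delta: continuous two-sample
    z-test vs the same data median-split into 'high'/'low' groups."""
    cont = dich = 0
    for _ in range(sims):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n) + delta
        # Continuous outcome: two-sample z statistic.
        z = (b.mean() - a.mean()) / np.sqrt(a.var(ddof=1)/n + b.var(ddof=1)/n)
        cont += abs(z) > 1.96
        # Dichotomized at the pooled median: compare proportions 'high'.
        cut = np.median(np.concatenate([a, b]))
        p1, p2 = (a > cut).mean(), (b > cut).mean()
        pbar = (p1 + p2) / 2
        zd = (p2 - p1) / np.sqrt(2 * pbar * (1 - pbar) / n)
        dich += abs(zd) > 1.96
    return cont / sims, dich / sims

p_cont, p_dich = power()
# Same data, same true effect: the dichotomized analysis detects it
# noticeably less often.
```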


Subjects
Biomedical Research/statistics & numerical data , Statistical Models
18.
bioRxiv ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38045416

ABSTRACT

Typical statistical practices in the biological sciences have increasingly been called into question due to difficulties in replicating a growing number of studies, many of which are confounded by the relative difficulty of null hypothesis significance testing designs and the interpretation of p-values. Bayesian inference, representing a fundamentally different approach to hypothesis testing, is receiving renewed interest as a potential alternative or complement to traditional null hypothesis significance testing, due to its ease of interpretation and explicit declaration of prior assumptions. Bayesian models are more mathematically complex than equivalent frequentist approaches, which has historically limited their application to simplified analysis cases. However, the advent of probability distribution sampling tools, together with exponential increases in computational power, now allows for quick and robust inference under any distribution of data. Here we present a practical tutorial on the use of Bayesian inference in the context of neuroscientific studies. We start with an intuitive discussion of Bayes' rule and inference, followed by the formulation of Bayesian-based regression and ANOVA models using data from a variety of neuroscientific studies. We show how Bayesian inference leads to easily interpretable analysis of data, and we provide an open-source toolbox to facilitate the use of Bayesian tools.
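The usual entry point for such tutorials is conjugate updating, where the posterior is available in closed form. A minimal normal-normal sketch with illustrative numbers (not the tutorial's data):

```python
import numpy as np

# Conjugate normal-normal updating: prior N(mu0, tau0^2) on a group
# mean, known observation sd sigma.
mu0, tau0 = 0.0, 2.0
sigma = 1.0
rng = np.random.default_rng(8)
data = rng.normal(1.5, sigma, size=20)

# Posterior precision is the sum of prior and data precisions.
prec = 1 / tau0**2 + len(data) / sigma**2
post_mean = (mu0 / tau0**2 + data.sum() / sigma**2) / prec
post_sd = np.sqrt(1 / prec)

# A 95% credible interval has a direct reading: given model and prior,
# the parameter lies in it with 95% posterior probability.
ci = (post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd)
```

This directness of interpretation, versus the indirect reading of a confidence interval, is the ease-of-interpretation argument the abstract makes.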

19.
Vox Sang ; 119(1): 34-42, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38018286

ABSTRACT

BACKGROUND AND OBJECTIVES: Although the genetic determinants of haemoglobin and ferritin have been widely studied, those of the clinically and globally relevant iron deficiency anaemia (IDA) and deferral due to hypohaemoglobinemia (Hb-deferral) are unclear. In this investigation, we aimed to quantify the value of genetic information in predicting IDA and Hb-deferral. MATERIALS AND METHODS: We analysed genetic data from up to 665,460 participants of the FinnGen, Blood Service Biobank and UK Biobank, and used INTERVAL (N = 39,979) for validation. We performed genome-wide association studies (GWASs) of IDA and Hb-deferral and utilized publicly available genetic associations to compute polygenic scores for IDA, ferritin and Hb. We fitted models to estimate the effect sizes of these polygenic risk scores (PRSs) on IDA and Hb-deferral risk while accounting for the individual's age, sex, weight, height, smoking status and blood donation history. RESULTS: Significant variants in the GWASs of IDA and Hb-deferral appear to be a small subset of the variants associated with ferritin and Hb. The effect sizes of genetic predictors of IDA and Hb-deferral are similar to those of age and weight, which are typically used in blood donor management. A total genetic score for Hb-deferral was estimated for each individual. The estimated odds ratio between the first and ninth deciles of the total genetic score distribution ranged from 1.4 to 2.2. CONCLUSION: The value of genetic data in predicting IDA or suitability to donate blood appears to be at a practically useful level.
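A polygenic score itself is just a weighted sum of risk-allele dosages. The sketch below, with invented dosages and weights, shows the construction and the decile comparison used in the abstract:

```python
import numpy as np

rng = np.random.default_rng(4)

# A polygenic score: weighted sum of allele dosages, with weights
# taken from GWAS effect sizes (all numbers here are invented).
n_people, n_snps = 1000, 50
dosages = rng.integers(0, 3, size=(n_people, n_snps))   # 0/1/2 copies
weights = rng.normal(0, 0.05, size=n_snps)              # per-allele effects

prs = dosages @ weights

# Compare the tails of the score distribution, as in the abstract's
# first-vs-ninth-decile odds ratio.
deciles = np.quantile(prs, np.linspace(0.1, 0.9, 9))
first_decile_mean = prs[prs <= deciles[0]].mean()
ninth_decile_mean = prs[prs >= deciles[-1]].mean()
```

In practice the score would then enter a risk model (e.g., logistic regression) alongside age, sex, weight and donation history, as the paper does.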


Subjects
Iron-Deficiency Anemia , Humans , Iron-Deficiency Anemia/genetics , Genome-Wide Association Study , Ferritins/genetics , Hemoglobins/analysis
20.
Rev. saúde pública (Online) ; 58: 01, 2024.
Article in English | LILACS | ID: biblio-1536768

ABSTRACT

OBJECTIVE: This study aims to propose a comprehensive alternative to the Bland-Altman plot method, addressing its limitations and providing a statistical framework for evaluating the equivalence of measurement techniques. This involves introducing an innovative three-step approach for assessing accuracy, precision, and agreement between techniques, which enhances objectivity in equivalence assessment. Additionally, an easy-to-use R package was developed, enabling researchers to efficiently analyze and interpret technique equivalence. METHODS: Inferential statistical support for equivalence between measurement techniques was proposed as three nested tests. These were based on structural regressions, with the goal of assessing the equivalence of structural means (accuracy), the equivalence of structural variances (precision), and concordance with the structural bisector line (agreement in measurements obtained from the same subject), using analytical methods and a robust bootstrap approach. To promote better understanding, graphical outputs following Bland and Altman's principles were also implemented. RESULTS: The performance of this method was demonstrated on five data sets from previously published articles that used Bland and Altman's method. One case showed strict equivalence, three cases showed partial equivalence, and one showed poor equivalence. The developed R package, containing open code and data, is freely available with installation instructions at Harvard Dataverse at https://doi.org/10.7910/DVN/AGJPZH. CONCLUSION: Although easy to communicate, the widely cited and applied Bland and Altman plot method is often misinterpreted, since it lacks suitable inferential statistical support. Common alternatives, such as Pearson's correlation or ordinary least-squares linear regression, also fail to locate the weakness of each measurement technique. It may be possible to test whether two techniques have full equivalence while preserving graphical communication in accordance with Bland and Altman's principles, by adding robust and suitable inferential statistics. Decomposing equivalence into three features (accuracy, precision, and agreement) helps to locate the sources of the problem when refining a new technique.
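For contrast with the proposed three-step framework, the classical Bland-Altman quantities that the article starts from can be computed in a few lines. The measurements are synthetic, and the 1-unit offset between techniques is an assumption of this sketch:

```python
import numpy as np

def bland_altman(m1, m2):
    """Classical Bland-Altman summary: mean bias and 95% limits of
    agreement. (The article replaces this descriptive summary with
    three nested structural-regression tests; only the classical
    quantities are shown here.)"""
    diff = m1 - m2
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(6)
true = rng.uniform(50, 150, 100)
tech_a = true + rng.normal(0, 2, 100)
tech_b = true + rng.normal(1, 2, 100)     # technique B reads ~1 unit high

bias, loa = bland_altman(tech_a, tech_b)
```

The sketch shows why the article's critique bites: the bias and limits describe disagreement but, on their own, support no formal decision about accuracy, precision, or agreement.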


Subjects
Confidence Intervals , Regression Analysis , Statistical Data Interpretation , Statistical Inference , Data Reliability