Results 1 - 20 of 578
1.
Cancer Epidemiol ; 92: 102624, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39094299

ABSTRACT

BACKGROUND: Renal cell carcinoma (RCC) remains a global health concern due to its poor survival rate. This study aimed to investigate the influence of medical determinants and socioeconomic status on survival outcomes of RCC patients. We analyzed the survival data of 41,563 RCC patients recorded under the Surveillance, Epidemiology, and End Results (SEER) program from 2012 to 2020. METHODS: We employed a competing risk model, assuming the lifetimes of RCC patients under the various risks follow a Chen distribution. This model accounts for uncertainty related to survival time as well as cause of death, including missing causes of death. For model analysis, we utilized Bayesian inference and obtained estimates of key quantities such as the cumulative incidence function (CIF) and cause-specific hazards. Additionally, we performed Bayesian hypothesis testing to assess the impact of multiple factors on the survival time of RCC patients. RESULTS: Our findings revealed that the survival time of RCC patients is significantly influenced by gender, income, marital status, chemotherapy, tumor size, and laterality. However, we observed no significant effect of race and origin on patients' survival time. The CIF plots indicated a number of important distinctions in the incidence of causes of death corresponding to income, marital status, race, chemotherapy, and tumor size. CONCLUSIONS: The study highlights the impact of various medical and socioeconomic factors on the survival time of RCC patients. Moreover, it demonstrates the utility of the competing risk model for survival analysis of RCC patients under the Bayesian paradigm. This model provides a robust and flexible framework for dealing with missing data, which can be particularly useful in real-life situations where patient information may be incomplete.
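[Editor's illustration] The abstract's model is parametric (Chen distribution) and Bayesian; as a minimal sketch of the central quantity, the cumulative incidence function, the following computes an empirical CIF from simulated competing-risks data. The cause names, rates, and sample size are hypothetical, and censoring is ignored for brevity.

```python
import random

random.seed(0)

# Simulate competing-risks data: each subject fails from one of two causes,
# with latent failure times drawn from cause-specific exponentials; the
# observed cause is the one with the earliest latent time.
rates = {"cancer": 0.03, "other": 0.01}   # hypothetical cause-specific hazards
n = 5000
records = []
for _ in range(n):
    times = {c: random.expovariate(r) for c, r in rates.items()}
    cause = min(times, key=times.get)
    records.append((times[cause], cause))

def cif(records, cause, t):
    """Empirical cumulative incidence: P(T <= t and event is `cause`)."""
    return sum(1 for time, c in records if c == cause and time <= t) / len(records)

t = 20.0
print(round(cif(records, "cancer", t), 3), round(cif(records, "other", t), 3))
```

Unlike a naive "1 minus Kaplan-Meier per cause" approach, the CIFs for all causes are sub-distribution functions that sum to the overall failure probability.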

2.
Rep Prog Phys ; 87(9)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39087757

ABSTRACT

Quantum illumination (QI) and quantum radar have emerged as potentially groundbreaking technologies, leveraging the principles of quantum mechanics to revolutionise the field of remote sensing and target detection. The protocol, particularly in the context of quantum radar, has been subject to a great deal of aspirational conjecture as well as criticism with respect to its realistic potential. In this review, we present a broad overview of the field of quantum target detection focusing on QI and its potential as an underlying scheme for a quantum radar operating at microwave frequencies. We provide context for the field by considering its historical development and fundamental principles. Our aim is to provide a balanced discussion on the state of theoretical and experimental progress towards realising a working QI-based quantum radar, and draw conclusions about its current outlook and future directions.

3.
J Comput Biol ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092497

ABSTRACT

To improve the forecasting accuracy of the spread of infectious diseases, a hybrid model was recently introduced in which the commonly assumed constant disease transmission rate was actively estimated from enforced mitigating policy data by a machine learning (ML) model and then fed to an extended susceptible-infected-recovered model to forecast the number of infected cases. Testing only one ML model, that is, a gradient boosting model (GBM), the work left open whether other ML models would perform better. Here, we compared GBMs, linear regressions, k-nearest neighbors, and Bayesian networks (BNs) in forecasting the number of COVID-19-infected cases in the United States and Canadian provinces based on policy indices for the next 35 days. There was no significant difference in the mean absolute percentage errors of these ML models over the combined dataset [H(3)=3.10, p=0.38]. In two provinces, a significant difference was observed [H(3)=8.77, H(3)=8.07, p<0.05], yet post hoc tests revealed no significant difference in pairwise comparisons. Nevertheless, BNs significantly outperformed the other models in most of the training datasets. The results suggest that the ML models have equal forecasting power overall and that BNs are best for data-fitting applications.
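[Editor's illustration] The comparison metric above, mean absolute percentage error (MAPE), is simple to compute. The sketch below uses hypothetical case counts and forecasts, not data from the study.

```python
def mape(actual, predicted):
    """Mean absolute percentage error (%), skipping zero actuals."""
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in pairs) / len(pairs)

# Hypothetical daily case counts and two models' forecasts
actual  = [100, 120, 150, 160, 180]
model_a = [110, 115, 140, 170, 175]   # e.g. a gradient boosting model
model_b = [ 90, 130, 165, 150, 200]   # e.g. a Bayesian network
print(mape(actual, model_a), mape(actual, model_b))
```

MAPE is scale-free, which is why it suits comparisons across regions with very different case counts; its weakness is instability when actual counts approach zero.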

4.
Biometrika ; 111(1): 255-272, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38948429

ABSTRACT

Quantile regression has become a widely used tool for analysing competing risk data. However, methods for quantile regression for competing risk data with a continuous mark are still scarce. The mark variable is an extension of the cause of failure in a classical competing risk model, where the cause of failure is replaced by a continuous mark observed only at uncensored failure times. An example of a continuous mark variable is the genetic distance that measures dissimilarity between the infecting virus and the virus contained in the vaccine construct. In this article, we propose a novel mark-specific quantile regression model. The proposed estimation method borrows strength from data in a neighbourhood of a mark and is based on an induced smoothed estimating equation, which is very different from the existing methods for competing risk data with discrete causes. The asymptotic properties of the resulting estimators are established across the mark and quantile continuums. In addition, a mark-specific quantile-type vaccine efficacy is proposed and its statistical inference procedures are developed. Simulation studies are conducted to evaluate the finite-sample performance of the proposed estimation and hypothesis testing procedures. An application to the first HIV vaccine efficacy trial is provided.
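[Editor's illustration] The idea of "borrowing strength from data in a neighbourhood of a mark" can be sketched with a kernel-weighted quantile: observations whose mark is close to the target value get more weight. This toy version ignores censoring and the induced smoothing of the paper's estimating equations; the function name and the Gaussian kernel are illustrative assumptions.

```python
import math

def mark_specific_quantile(times, marks, v, tau, h):
    """tau-quantile of failure times, weighting each observation by a
    Gaussian kernel in the mark variable centred at v (bandwidth h).
    Illustrative only: no censoring, no induced smoothing."""
    w = [math.exp(-0.5 * ((m - v) / h) ** 2) for m in marks]
    order = sorted(range(len(times)), key=lambda i: times[i])
    total, cum = sum(w), 0.0
    for i in order:
        cum += w[i]
        if cum / total >= tau:   # first time at which weighted CDF reaches tau
            return times[i]
    return times[order[-1]]

# With all marks at the target value, this reduces to an ordinary quantile.
print(mark_specific_quantile([1, 2, 3, 4], [0.1] * 4, 0.1, 0.5, 1.0))
```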

5.
Entropy (Basel) ; 26(7)2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39056958

ABSTRACT

A central challenge in hypothesis testing (HT) lies in determining the optimal balance between Type I (false positive) and Type II (non-detection or false negative) error probabilities. Analyzing these errors' exponential rate of convergence, known as error exponents, provides crucial insights into system performance. Error exponents offer a lens through which we can understand how operational restrictions, such as resource constraints and impairments in communications, affect the accuracy of distributed inference in networked systems. This survey presents a comprehensive review of key results in HT, from the foundational Stein's Lemma to recent advancements in distributed HT, all unified through the framework of error exponents. We explore asymptotic and non-asymptotic results, highlighting their implications for designing robust and efficient networked systems, such as event detection through lossy wireless sensor monitoring networks, collective perception-based object detection in vehicular environments, and clock synchronization in distributed environments, among others. We show that understanding the role of error exponents provides a valuable tool for optimizing decision-making and improving the reliability of networked systems.
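[Editor's illustration] Stein's Lemma, the foundational result named above, says that with the type I error held fixed, the optimal type II error for testing P against Q decays as exp(-n D(P||Q)), where D is the Kullback-Leibler divergence. A minimal numeric sketch for Bernoulli hypotheses:

```python
import math

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence D(Ber(p) || Ber(q)) in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Stein exponent for testing Ber(0.5) against Ber(0.25):
D = kl_bernoulli(0.5, 0.25)
for n in (10, 100, 1000):
    # Asymptotically optimal type II error scales as exp(-n * D)
    print(n, math.exp(-n * D))
```

The exponential decay in n is what makes error exponents a natural currency for comparing distributed-inference schemes under resource constraints.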

6.
J Exp Bot ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954539

ABSTRACT

Linear mixed models (LMMs) are a commonly used method for genome-wide association studies (GWAS) that aim to detect associations between genetic markers and phenotypic measurements in a population of individuals while accounting for population structure and cryptic relatedness. In a standard GWAS, hundreds of thousands to millions of statistical tests are performed, requiring control for multiple hypothesis testing. Typically, static corrections that penalize the number of tests performed are used to control for the family-wise error rate, which is the probability of making at least one false positive. However, it has been shown that in practice this threshold is too conservative for normally distributed phenotypes and not stringent enough for non-normally distributed phenotypes. Therefore, permutation-based LMM approaches have recently been proposed to provide a more realistic threshold that takes phenotypic distributions into account. In this work, we will discuss the advantages of permutation-based GWAS approaches, including new simulations and results from a re-analysis of all publicly available Arabidopsis thaliana phenotypes from the AraPheno database.
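[Editor's illustration] The permutation-based thresholding described above can be sketched with the classic maxT procedure: shuffle the phenotype, record the maximum association statistic across all markers, and take the 95th percentile of that null distribution as a family-wise 5% threshold. The data here are simulated and the statistic is a plain absolute correlation, not an LMM score.

```python
import random

random.seed(1)
n, m = 60, 40   # individuals, markers (tiny for speed)
geno = [[random.randint(0, 2) for _ in range(m)] for _ in range(n)]
pheno = [random.gauss(0, 1) for _ in range(n)]

def abs_corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return abs(sxy) / (sxx * syy) ** 0.5

def max_stat(ph):
    """Maximum association statistic over all markers."""
    return max(abs_corr([g[j] for g in geno], ph) for j in range(m))

# maxT permutation null: shuffling the phenotype breaks genotype-phenotype
# association while preserving the correlation structure among markers.
perms = []
for _ in range(200):
    shuffled = pheno[:]
    random.shuffle(shuffled)
    perms.append(max_stat(shuffled))
threshold = sorted(perms)[int(0.95 * len(perms))]
print(round(threshold, 3))
```

Because the maximum is taken over correlated tests, this threshold is adaptive to the phenotype's distribution and the markers' dependence, which is exactly what static Bonferroni-style corrections ignore.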

7.
Neurotrauma Rep ; 5(1): 699-707, 2024.
Article in English | MEDLINE | ID: mdl-39071981

ABSTRACT

The field of neurotrauma is grappling with the effects of the recently identified replication crisis. As such, care must be taken to identify and perform the most appropriate statistical analyses. This will prevent misuse of research resources and ensure that conclusions are reasonable and within the scope of the data. We anticipate that Bayesian statistical methods will see increasing use in the coming years. Bayesian methods integrate prior beliefs (or prior data) into a statistical model to merge historical information and current experimental data. These methods may improve the ability to detect differences between experimental groups (i.e., statistical power) when used appropriately. However, researchers need to be aware of the strengths and limitations of such approaches if they are to implement or evaluate these analyses. Ultimately, an approach using Bayesian methodologies may have substantial benefits to statistical power, but caution needs to be taken when identifying and defining prior beliefs.
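[Editor's illustration] The "integrate prior beliefs into a statistical model" step has a simple conjugate form for a success probability: a Beta prior updated by binomial data. The numbers below are hypothetical, chosen to show how a historical prior pulls the estimate away from the raw proportion.

```python
def posterior(a, b, successes, failures):
    """Beta(a, b) prior plus binomial data gives a Beta posterior."""
    return a + successes, b + failures

def mean(a, b):
    return a / (a + b)

# Hypothetical: historical data suggest ~70% recovery, encoded as
# Beta(14, 6) (worth 20 prior 'observations'); the current experiment
# observes 12 recoveries out of 30.
a0, b0 = 14, 6
a1, b1 = posterior(a0, b0, 12, 18)
print(mean(a0, b0), 12 / 30, mean(a1, b1))
```

The posterior mean lands between the prior mean and the sample proportion, weighted by their effective sample sizes; this shrinkage is the mechanism behind the power gains (and the risks of a poorly chosen prior) discussed above.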

8.
Accid Anal Prev ; 206: 107690, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38968865

ABSTRACT

Analyzing crash data is a complex and labor-intensive process that requires careful consideration of multiple interdependent modeling aspects, such as functional forms, transformations, likely contributing factors, correlations, and unobserved heterogeneity. Limited time, knowledge, and experience may lead to over-simplified, over-fitted, or misspecified models overlooking important insights. This paper proposes an extensive hypothesis testing framework including a multi-objective mathematical programming formulation and solution algorithms to estimate crash frequency models considering simultaneously likely contributing factors, transformations, non-linearities, and correlated random parameters. The mathematical programming formulation minimizes both in-sample fit and out-of-sample prediction. To address the complexity and non-convexity of the mathematical program, the proposed solution framework utilizes a variety of metaheuristic solution algorithms. Specifically, Harmony Search demonstrated minimal sensitivity to hyperparameters, enabling an efficient search for solutions without being influenced by the choice of hyperparameters. The effectiveness of the framework was evaluated using two real-world datasets and one synthetic dataset. Comparative analyses were performed using the two real-world datasets and the corresponding models published in literature by independent teams. The proposed framework showed its capability to pinpoint efficient model specifications, produce accurate estimates, and provide valuable insights for both researchers and practitioners. The proposed approach allows for the discovery of numerous insights while minimizing the time spent on model development. By considering a broader set of contributing factors, models with varied qualities can be generated. 
For instance, when applied to crash data from Queensland, the proposed approach revealed that including medians on sharply curved roads can effectively reduce the occurrence of crashes. When applied to crash data from Washington, the simultaneous consideration of traffic volume and road curvature resulted in a notable reduction in crash variances but an increase in crash means.
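[Editor's illustration] Harmony Search, the metaheuristic the abstract singles out, is easy to sketch: keep a memory of candidate solutions, compose new candidates variable-by-variable from memory (with occasional pitch adjustment) or at random, and replace the worst member when improved. The objective here is a toy sphere function standing in for the paper's model-fit criterion; all hyperparameter values are illustrative.

```python
import random

random.seed(2)

def f(x):
    """Objective to minimise (stand-in for a model-selection criterion)."""
    return sum(v * v for v in x)

dim, hm_size, iters = 2, 10, 2000
hmcr, par, bw = 0.9, 0.3, 0.1      # memory-consideration rate, pitch-adjust rate, bandwidth
lo_b, hi_b = -5.0, 5.0

memory = [[random.uniform(lo_b, hi_b) for _ in range(dim)] for _ in range(hm_size)]
for _ in range(iters):
    new = []
    for d in range(dim):
        if random.random() < hmcr:
            v = random.choice(memory)[d]          # draw from harmony memory
            if random.random() < par:
                v += random.uniform(-bw, bw)      # small pitch adjustment
        else:
            v = random.uniform(lo_b, hi_b)        # fresh random value
        new.append(min(hi_b, max(lo_b, v)))
    worst = max(range(hm_size), key=lambda i: f(memory[i]))
    if f(new) < f(memory[worst]):
        memory[worst] = new                       # keep the memory elitist
best = min(memory, key=f)
print(f(best))
```

The abstract's observation about hyperparameter insensitivity is plausible here: hmcr and par only steer the exploration/exploitation mix, so moderate changes rarely derail convergence on smooth objectives.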


Subject(s)
Accidents, Traffic , Algorithms , Models, Statistical , Humans , Accidents, Traffic/prevention & control , Accidents, Traffic/statistics & numerical data
9.
Br J Dev Psychol ; 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39011820

ABSTRACT

When learning new categories, do children benefit from the same types of training as adults? We compared the effects of feedback-based training with observational training in young adults (ages 18-25) and early school-aged children (ages 6-7) across two different multimodal category learning tasks: conjunctive rule-based and information integration. We used multimodal stimuli that varied across a visual feature (rotation speed of the "planet" stimulus) and an auditory feature (pitch frequency of a pure tone stimulus). We found an interaction between age and training type for the rule-based category task, such that adults performed better in feedback training than in observational training, whereas training type had no significant effect on children's category learning performance. Overall, adults performed better than children at learning both the rule-based and information integration category structures. In information integration category learning, feedback versus observational training did not have a significant effect on either adults' or children's category learning. Computational modelling revealed that children defaulted to univariate rules in both tasks. The finding that children do not benefit from feedback training and can learn successfully via observational learning has implications for the design of educational interventions appropriate for children.

10.
Epigenomics ; : 1-14, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39016098

ABSTRACT

Aim: Hypotheses about what phenotypes to include in causal analyses, that in turn can have clinical and policy implications, can be guided by hypothesis-free approaches leveraging the epigenome, for example. Materials & methods: Minimally adjusted epigenome-wide association studies (EWAS) using ALSPAC data were performed for example conditions, dysmenorrhea and heavy menstrual bleeding (HMB). Differentially methylated CpGs were searched in the EWAS Catalog and associated traits identified. Traits were compared between those with and without the example conditions in ALSPAC. Results: Seven CpG sites were associated with dysmenorrhea and two with HMB. Smoking and adverse childhood experience score were associated with both conditions in the hypothesis-testing phase. Conclusion: Hypothesis-generating EWAS can help identify associations for future analyses.


To inform policy and improve clinical practice, it is important that researchers who study people's health find out which traits might increase the risk of illness. However, it can be difficult to know which traits should be looked at. In this study, we wanted to look for traits that might increase the risk of painful and heavy periods, using data about the switches that turn our genes on and off. There are some people in the Children of the 90s study that have data on gene switches. We compared all the switches between those with and without painful or heavy periods. For painful periods, we found links with seven switches and for heavy periods, we found two. We then used another data source, called the EWAS Catalog, to see which traits were associated with these switches. The traits we found included body size, smoking and child abuse. Finally, when using data on traits from the wider Children of the 90s group, we found that smoking and more difficult childhoods were some of the traits related to painful and heavy periods. A good thing about this approach is that we could find new traits that might increase the risk of painful or heavy periods; these should be looked at in future studies.

11.
bioRxiv ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38979274

ABSTRACT

Within-individual coupling between measures of brain structure and function evolves in development and may underlie differential risk for neuropsychiatric disorders. Despite increasing interest in the development of structure-function relationships, rigorous methods to quantify and test individual differences in coupling remain nascent. In this article, we explore and address gaps in approaches for testing and spatially localizing individual differences in intermodal coupling. We propose a new method, called CIDeR, which is designed to simultaneously perform hypothesis testing in a way that limits false positive results and improve detection of true positive results. Through a comparison across different approaches to testing individual differences in intermodal coupling, we delineate subtle differences in the hypotheses they test, which may ultimately lead researchers to arrive at different results. Finally, we illustrate the utility of CIDeR in two applications to brain development using data from the Philadelphia Neurodevelopmental Cohort.

12.
Commun Stat Theory Methods ; 53(9): 3063-3077, 2024.
Article in English | MEDLINE | ID: mdl-38835516

ABSTRACT

This article considers a way to test the hypothesis that two collections of objects are from the same uniform distribution of such objects. The exact p-value is calculated based on the distribution for the observed overlaps. In addition, an interval estimate of the number of distinct objects, when all objects are equally likely, is indicated.
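[Editor's illustration] One simple formalisation of this overlap test, assuming each collection is a without-replacement sample of distinct, equally likely objects, is the hypergeometric distribution: the exact p-value is the upper tail of the overlap count. The function names and example numbers are illustrative.

```python
from math import comb

def overlap_pmf(N, m, n, k):
    """P(two without-replacement samples of sizes m and n, drawn from N
    equally likely distinct objects, share exactly k objects)."""
    return comb(m, k) * comb(N - m, n - k) / comb(N, n)

def overlap_p_value(N, m, n, k_obs):
    """Exact upper-tail p-value: probability of at least k_obs overlaps."""
    return sum(overlap_pmf(N, m, n, k) for k in range(k_obs, min(m, n) + 1))

# Two samples of 20 from a pool of 100 expect 20*20/100 = 4 overlaps;
# observing 10 is strong evidence against a pool this large.
print(overlap_p_value(100, 20, 20, 10))
```

A large observed overlap pushes this p-value down, which is also the logic behind the article's interval estimate for the number of distinct objects.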

13.
Evol Anthropol ; : e22037, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38859704

ABSTRACT

Smith and Smith and Wood proposed that the human fossil record offers special challenges for causal hypotheses because "unique" adaptations resist the comparative method. We challenge their notions of "uniqueness" and offer a refutation of the idea that there is something epistemologically special about human prehistoric data. Although paleontological data may be sparse, there is nothing inherent about this information that prevents its use in the inductive or deductive process, nor in the generation and testing of scientific hypotheses. The imprecision of the fossil record is well-understood, and such imprecision is often factored into hypotheses and methods. While we acknowledge some oversteps within the discipline, we also note that the history of paleoanthropology is clearly one of progress, with ideas tested and resolution added as data (fossils) are uncovered and new technologies applied, much like in sciences as diverse as astronomy, molecular genetics, and geology.

14.
J Biopharm Stat ; : 1-20, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38841980

ABSTRACT

For the implementation of adaptive designs, the adjustment of bias in treatment effect estimation has become an increasingly important topic in recent years. While the adaptive design literature traditionally focuses on the control of the type I error rate and the adjustment of overall unconditional bias, research on adjusting conditional bias has been limited. This paper proposes a conditional bias adjusted estimator of the treatment effect in the context of the 2-in-1 adaptive design and provides a comprehensive investigation of its statistical properties, including bias, mean squared error, and coverage probability of confidence intervals. We demonstrate that conditional bias adjusted estimators greatly reduce the conditional bias and have similarly negligible unconditional bias compared with mean and median (unconditionally) unbiased estimators. In addition, a test statistic is constructed based on the conditional bias adjusted estimators and compared with the naive unadjusted test.
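[Editor's illustration] The phenomenon being corrected, conditional bias after an adaptive decision, is easy to demonstrate by simulation: condition the naive estimate on a favourable interim result and its mean drifts above the true effect. This generic go/no-go sketch is not the paper's 2-in-1 design; all sample sizes and the zero-effect truth are assumptions for illustration.

```python
import random

random.seed(3)
true_effect, n1, n2, sims = 0.0, 50, 50, 20000
kept = []
for _ in range(sims):
    stage1 = [random.gauss(true_effect, 1) for _ in range(n1)]
    interim = sum(stage1) / n1
    if interim > 0:                        # 'go' decision at the interim look
        stage2 = [random.gauss(true_effect, 1) for _ in range(n2)]
        naive = (sum(stage1) + sum(stage2)) / (n1 + n2)
        kept.append(naive)
# Conditional bias: mean of the naive estimate among trials that continued
cond_bias = sum(kept) / len(kept) - true_effect
print(round(cond_bias, 4))
```

Even though the naive estimator is unconditionally unbiased here, selecting on the interim mean inflates its conditional expectation; conditional bias adjusted estimators target exactly this gap.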

15.
J Eval Clin Pract ; 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38825756

ABSTRACT

RATIONALE: Hypothesis testing is integral to health research and is commonly carried out through frequentist statistics focused on computing p values. p values have long been criticized for offering limited information about the relationships among variables and the strength of evidence concerning the plausibility, presence, and certainty of associations. Bayesian statistics is a potential alternative for inference-making. Despite emerging discussion of Bayesian statistics across various disciplines, its uptake in health research is still limited. AIM: To offer a primer on Bayesian statistics and Bayes factors so that health researchers can gain preliminary knowledge of their use, application, and interpretation in health research. METHODS: Theoretical and empirical literature on Bayesian statistics and methods was used to develop this methodological primer. CONCLUSIONS: Using Bayesian statistics in health research without a careful and complete understanding of its underlying philosophy, and of its differences from frequentist testing, estimation, and interpretation methods, can result in the same ritualistic use as has occurred with p values. IMPLICATIONS: Health researchers should supplement frequentist statistics with Bayesian statistics when analysing research data. Overreliance on p values for clinical decision-making should be avoided. Bayes factors offer a more intuitive measure of the strength of evidence for null and alternative hypotheses.
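[Editor's illustration] A Bayes factor has a closed form in the simplest case: k successes in n Bernoulli trials, comparing H0: p = 0.5 against H1: p uniform on (0, 1). Under the uniform prior the marginal likelihood is 1/(n+1) for every k, so the Bayes factor is just the null likelihood times (n+1).

```python
from math import comb

def bf01_binomial(k, n):
    """Bayes factor BF01 for H0: p = 0.5 vs H1: p ~ Uniform(0, 1),
    given k successes in n trials.  BF01 > 1 favours the null."""
    null_likelihood = comb(n, k) * 0.5 ** n
    marginal_under_h1 = 1 / (n + 1)
    return null_likelihood / marginal_under_h1

print(bf01_binomial(5, 10))   # balanced data: mild evidence FOR the null
print(bf01_binomial(9, 10))   # lopsided data: evidence AGAINST the null
```

Unlike a p value, BF01 can quantify support for the null (values above 1), which is the "more intuitive measure" the abstract refers to.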

16.
Hum Brain Mapp ; 45(8): e26714, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38878300

ABSTRACT

Functional networks often guide our interpretation of spatial maps of brain-phenotype associations. However, methods for assessing enrichment of associations within networks of interest have varied in terms of both scientific rigor and underlying assumptions. While some approaches have relied on subjective interpretations, others have made unrealistic assumptions about spatial properties of imaging data, leading to inflated false positive rates. We seek to address this gap in existing methodology by borrowing insight from a method widely used in genetics research for testing enrichment of associations between a set of genes and a phenotype of interest. We propose network enrichment significance testing (NEST), a flexible framework for testing the specificity of brain-phenotype associations to functional networks or other sub-regions of the brain. We apply NEST to study enrichment of associations with structural and functional brain imaging data from a large-scale neurodevelopmental cohort study.
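[Editor's illustration] A naive version of enrichment testing, the baseline NEST improves on, permutes network labels across vertices and compares the observed inside-versus-outside difference in association statistics to the permutation null. The data are simulated, and note the key caveat: naive label shuffling ignores the spatial autocorrelation of imaging maps, which is precisely the source of the inflated false positive rates discussed above.

```python
import random

random.seed(4)
# Hypothetical vertex-level association statistics, with a signal boost
# inside the network of interest (first 100 of 1000 vertices).
stats = [abs(random.gauss(1.0, 1)) for _ in range(100)] + \
        [abs(random.gauss(0.0, 1)) for _ in range(900)]
in_net = [True] * 100 + [False] * 900

def enrichment(stats, labels):
    """Mean statistic inside the network minus mean outside."""
    inside = [s for s, m in zip(stats, labels) if m]
    outside = [s for s, m in zip(stats, labels) if not m]
    return sum(inside) / len(inside) - sum(outside) / len(outside)

obs = enrichment(stats, in_net)
null = []
for _ in range(999):
    random.shuffle(in_net)                 # permute the network labels
    null.append(enrichment(stats, in_net))
p = (1 + sum(1 for s in null if s >= obs)) / (1 + len(null))
print(round(p, 3))
```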


Subject(s)
Brain , Phenotype , Humans , Brain/diagnostic imaging , Brain/physiology , Magnetic Resonance Imaging/methods , Nerve Net/diagnostic imaging , Nerve Net/physiology , Cohort Studies , Female , Male
17.
Stat Med ; 43(18): 3524-3538, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38863133

ABSTRACT

Moderate calibration, the expected event probability among observations with predicted probability z being equal to z, is a desired property of risk prediction models. Current graphical and numerical techniques for evaluating moderate calibration of risk prediction models are mostly based on smoothing or grouping the data. As well, there is no widely accepted inferential method for the null hypothesis that a model is moderately calibrated. In this work, we discuss recently developed methods, and propose novel ones, for the assessment of moderate calibration for binary responses. The methods are based on the limiting distributions of functions of standardized partial sums of prediction errors converging to the corresponding laws of Brownian motion. The novel method relies on well-known properties of the Brownian bridge, which enables joint inference on mean and moderate calibration, leading to a unified "bridge" test for detecting miscalibration. Simulation studies indicate that the bridge test is more powerful, often substantially, than the alternative test. As a case study we consider a prediction model for short-term mortality after a heart attack, where we provide suggestions on graphical presentation and the interpretation of results. Moderate calibration can be assessed without requiring arbitrary grouping of data or using methods that require tuning of parameters.
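[Editor's illustration] The building block of these tests, standardized partial sums of prediction errors ordered by predicted probability, can be computed directly. For a moderately calibrated model the running sum behaves like Brownian motion, so its supremum stays modest. This is only a sketch of the ingredient, not the paper's unified bridge test, and the simulated model is calibrated by construction.

```python
import math
import random

random.seed(5)
n = 2000
# Simulate a moderately calibrated model: each outcome is drawn with
# probability equal to its own predicted probability.
preds = [random.random() for _ in range(n)]
ys = [1 if random.random() < p else 0 for p in preds]

# Partial sums of prediction errors, ordered by predicted probability,
# standardized by the total Bernoulli standard deviation.
order = sorted(range(n), key=lambda i: preds[i])
errors = [ys[i] - preds[i] for i in order]
scale = math.sqrt(sum(p * (1 - p) for p in preds))
partial, sup_stat = 0.0, 0.0
for e in errors:
    partial += e
    sup_stat = max(sup_stat, abs(partial) / scale)
print(round(sup_stat, 3))
```

Under miscalibration (e.g. systematically optimistic predictions) the partial sums drift instead of fluctuating around zero, and the supremum statistic grows with sqrt(n).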


Subject(s)
Computer Simulation , Models, Statistical , Humans , Risk Assessment/methods , Myocardial Infarction/mortality , Statistics, Nonparametric , Calibration , Probability
18.
BMC Med Res Methodol ; 24(1): 110, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714936

ABSTRACT

Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when incorporating prior information into a new trial with quality external data, such as historical data or another source of co-data. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials where frequentist trials are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of the Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and power for all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of type I error rate, multiplicity adjustments, external data borrowing, etc., in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided to serve as a valuable resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.
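[Editor's illustration] The "frequentist operating characteristics of a Bayesian analysis strategy" are typically checked by simulation: fix a Bayesian decision rule, simulate trials under the null, and count rejections. The sketch below uses a Beta-binomial single-arm design with a Monte Carlo posterior; the threshold, sample size, and prior are illustrative assumptions.

```python
import random

random.seed(6)
p0, n, sims, draws = 0.3, 40, 2000, 400

def go_decision(successes, n):
    """Declare efficacy if posterior P(p > p0) > 0.975 under a Beta(1, 1)
    prior, with the posterior probability estimated by Monte Carlo."""
    a, b = 1 + successes, 1 + n - successes
    exceed = sum(1 for _ in range(draws) if random.betavariate(a, b) > p0)
    return exceed / draws > 0.975

# Frequentist type I error: simulate trials with the true rate at the null.
rejections = sum(
    go_decision(sum(1 for _ in range(n) if random.random() < p0), n)
    for _ in range(sims)
)
print(rejections / sims)
```

Regulators typically ask for exactly this quantity: the long-run rejection rate of the Bayesian rule when the null is true, which can then be tuned (via the posterior threshold or prior) to sit below a conventional alpha.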


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Humans , Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Research Design/standards , Sample Size , Data Interpretation, Statistical , Models, Statistical
19.
Brain Sci ; 14(5)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38790421

ABSTRACT

Information theory explains how systems encode and transmit information. This article examines the neuronal system, which processes information via neurons that react to stimuli and transmit electrical signals. Specifically, we focus on transfer entropy to measure the flow of information between sequences and explore its use in determining effective neuronal connectivity. We analyze the causal relationships between two discrete time series, X := (X_t : t ∈ Z) and Y := (Y_t : t ∈ Z), which take values in binary alphabets. When the bivariate process (X, Y) is a jointly stationary ergodic variable-length Markov chain with memory no larger than k, we demonstrate that the null hypothesis of the test-no causal influence-requires a zero transfer entropy rate. The plug-in estimator for this function is identified with the test statistic of the log-likelihood ratios. Since under the null hypothesis, this estimator follows an asymptotic chi-squared distribution, it facilitates the calculation of p-values when applied to empirical data. The efficacy of the hypothesis test is illustrated with data simulated from a neuronal network model, characterized by stochastic neurons with variable-length memory. The test results identify biologically relevant information, validating the underlying theory and highlighting the applicability of the method in understanding effective connectivity between neurons.
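[Editor's illustration] The plug-in transfer entropy estimator has a direct empirical form for binary sequences with memory k = 1: a conditional mutual information between the next value of X and the current value of Y, given the current value of X. The sketch below contrasts independent sequences with a lag-one copy; the relationship 2n·TE ≈ chi-squared under the null is the basis for the p-values described above.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy rate (nats) from y to x, memory k = 1:
    sum over (x1, x0, y0) of p(x1, x0, y0) * log[p(x1|x0, y0) / p(x1|x0)]."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs = Counter(zip(x[1:], x[:-1]))
    cond = Counter(zip(x[:-1], y[:-1]))
    marg = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_x1_given_both = c / cond[(x0, y0)]
        p_x1_given_x0 = pairs[(x1, x0)] / marg[x0]
        te += p_joint * math.log(p_x1_given_both / p_x1_given_x0)
    return te

random.seed(7)
y = [random.randint(0, 1) for _ in range(5000)]
x_indep = [random.randint(0, 1) for _ in range(5000)]
x_driven = [0] + y[:-1]            # x copies y with a one-step lag
print(transfer_entropy(x_indep, y), transfer_entropy(x_driven, y))
```

For the independent pair the estimate sits near zero (up to the usual positive plug-in bias of order 1/n), while for the driven pair it approaches ln 2, the full entropy of a fair binary source.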

20.
Elife ; 12: 2024 May 13.
Article in English | MEDLINE | ID: mdl-38739437

ABSTRACT

In several large-scale replication projects, statistically non-significant results in both the original and the replication study have been interpreted as a 'replication success.' Here, we discuss the logical problems with this approach: Non-significance in both studies does not ensure that the studies provide evidence for the absence of an effect and 'replication success' can virtually always be achieved if the sample sizes are small enough. In addition, the relevant error rates are not controlled. We show how methods, such as equivalence testing and Bayes factors, can be used to adequately quantify the evidence for the absence of an effect and how they can be applied in the replication setting. Using data from the Reproducibility Project: Cancer Biology, the Experimental Philosophy Replicability Project, and the Reproducibility Project: Psychology we illustrate that many original and replication studies with 'null results' are in fact inconclusive. We conclude that it is important to also replicate studies with statistically non-significant results, but that they should be designed, analyzed, and interpreted appropriately.
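[Editor's illustration] Equivalence testing, one of the remedies proposed above, can be sketched with the two one-sided tests (TOST) procedure under a normal approximation: the effect is declared equivalent to zero when both one-sided tests against the margins reject. The effect sizes, standard errors, and margin below are hypothetical.

```python
from statistics import NormalDist

def tost_p(diff, se, margin):
    """Two one-sided tests for equivalence within +/- margin (normal
    approximation).  Returns the larger one-sided p-value; equivalence
    is claimed when it falls below alpha."""
    z = NormalDist()
    p_lower = 1 - z.cdf((diff + margin) / se)   # H0: true diff <= -margin
    p_upper = z.cdf((diff - margin) / se)       # H0: true diff >= +margin
    return max(p_lower, p_upper)

# A small, precisely estimated effect: equivalent to zero within +/- 0.2
print(tost_p(0.05, 0.05, 0.2))
# The same point estimate, imprecisely estimated: inconclusive
print(tost_p(0.05, 0.15, 0.2))
```

The second call makes the abstract's point concrete: a non-significant result from a small, noisy study is not evidence of absence; the equivalence test stays inconclusive until the estimate is precise enough.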


Subject(s)
Bayes Theorem , Reproducibility of Results , Humans , Research Design , Sample Size , Data Interpretation, Statistical