Results 1 - 20 of 647
1.
Braz J Phys Ther ; 28(3): 101079, 2024.
Article in English | MEDLINE | ID: mdl-38865832

ABSTRACT

BACKGROUND: The physical therapy profession has made efforts to increase the use of confidence intervals due to the valuable information they provide for clinical decision-making. Confidence intervals indicate the precision of the results and describe the strength and direction of a treatment effect measure. OBJECTIVES: To determine the prevalence of reporting of confidence intervals, achievement of intended sample size, and adjustment for multiple primary outcomes in randomised trials of physical therapy interventions. METHODS: We randomly selected 100 trials published in 2021 and indexed on the Physiotherapy Evidence Database. Two independent reviewers extracted the number of participants, any sample size calculation, and any adjustments for multiple primary outcomes. We extracted whether at least one between-group comparison was reported with a 95 % confidence interval and whether any confidence intervals were interpreted. RESULTS: The prevalence of use of confidence intervals was 47 % (95 % CI: 38, 57). Only 6 % of trials (95 % CI: 3, 12) both reported and interpreted a confidence interval. Among the 100 trials, 59 (95 % CI: 49, 68) calculated and achieved the required sample size. Among the 100 trials, 19 % (95 % CI: 13, 28) had a problem with unadjusted multiplicity on the primary outcomes. CONCLUSIONS: Around half of trials of physical therapy interventions published in 2021 reported confidence intervals around between-group differences. This represents an increase of 5 % from five years earlier. Very few trials interpreted the confidence intervals. Most trials reported a sample size calculation, and among these most achieved that sample size. There is still a need to increase the use of adjustment for multiple comparisons.
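The prevalence estimates above are simple binomial proportions, so their confidence intervals can be reproduced from the counts. A minimal sketch, assuming the underlying counts (47 of 100 sampled trials) and using the Wilson score interval as one common choice:

```python
# Minimal sketch: 95% CI for a reported prevalence, e.g. 47 of 100 trials
# reporting a confidence interval (counts are assumed from the abstract).
from statsmodels.stats.proportion import proportion_confint

count, nobs = 47, 100  # trials reporting a CI, total trials sampled
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"Prevalence: {count / nobs:.0%} (95% CI {low:.0%} to {high:.0%})")
```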


Subject(s)
Physical Therapy Modalities , Randomized Controlled Trials as Topic , Humans , Sample Size , Confidence Intervals
2.
Korean J Anesthesiol ; 77(3): 316-325, 2024 06.
Article in English | MEDLINE | ID: mdl-38835136

ABSTRACT

The statistical significance of a clinical trial analysis result is determined by a mathematical calculation and probability based on null hypothesis significance testing. However, statistical significance does not always align with meaningful clinical effects; thus, assigning clinical relevance to statistical significance is unreasonable. A statistical result incorporating a clinically meaningful difference is a better approach to present statistical significance. Thus, the minimal clinically important difference (MCID), which requires integrating minimum clinically relevant changes from the early stages of research design, has been introduced. As a follow-up to the previous statistical round article on P values, confidence intervals, and effect sizes, in this article, we present hands-on examples of MCID and various effect sizes and discuss the terms statistical significance and clinical relevance, including cautions regarding their use.
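As a hedged illustration of relating an estimated treatment effect to an MCID, the sketch below computes a between-group mean difference with its 95% confidence interval and Cohen's d from two simulated samples; the data, sample sizes, and the MCID of 5 points are assumptions for illustration, not values from the article.

```python
# Sketch: mean difference with 95% CI and Cohen's d, compared to an assumed MCID.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treat = rng.normal(loc=12.0, scale=8.0, size=40)    # hypothetical outcome scores
control = rng.normal(loc=7.0, scale=8.0, size=40)

diff = treat.mean() - control.mean()
pooled_sd = np.sqrt(((len(treat) - 1) * treat.var(ddof=1)
                     + (len(control) - 1) * control.var(ddof=1))
                    / (len(treat) + len(control) - 2))
cohens_d = diff / pooled_sd

se = pooled_sd * np.sqrt(1 / len(treat) + 1 / len(control))
t_crit = stats.t.ppf(0.975, df=len(treat) + len(control) - 2)
ci = (diff - t_crit * se, diff + t_crit * se)

MCID = 5.0  # assumed minimal clinically important difference for this outcome
print(f"Difference {diff:.1f} (95% CI {ci[0]:.1f} to {ci[1]:.1f}), d = {cohens_d:.2f}")
print("Lower CI bound exceeds MCID:", ci[0] > MCID)
```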


Subject(s)
Minimal Clinically Important Difference , Humans , Probability , Research Design , Clinical Trials as Topic/methods , Data Interpretation, Statistical , Confidence Intervals
3.
J Biopharm Stat ; : 1-12, 2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38615346

ABSTRACT

The exact distributions of permutation tests are determined by the randomization design used to gather the data. One design frequently used in clinical trials to force balance and remove experimental bias is the truncated binomial design. This paper examines the exact distribution of the weighted log-rank class of tests for censored clustered medical data under the truncated binomial design. For p-values in this class, a double saddlepoint approximation is developed under the truncated binomial design. With right-censored clustered data, the saddlepoint approximation's speed and accuracy over the normal asymptotic approximation make it easier to invert the weighted log-rank tests and find nominal 95% confidence intervals for the treatment effect.
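For readers unfamiliar with the truncated binomial design itself, the sketch below illustrates only the randomization scheme (fair coin tosses until one arm fills its quota of n/2, after which the remaining assignments are forced to the other arm); it does not reproduce the paper's exact permutation distributions or saddlepoint approximations.

```python
# Sketch of a truncated binomial design for n subjects and two arms (A, B):
# assign by fair coin until one arm fills its quota of n/2, then force the rest.
import random

def truncated_binomial_design(n, seed=None):
    assert n % 2 == 0, "n should be even so both arms receive n/2 subjects"
    rng = random.Random(seed)
    quota, counts, assignments = n // 2, {"A": 0, "B": 0}, []
    for _ in range(n):
        if counts["A"] == quota:        # arm A full: remaining subjects go to B
            arm = "B"
        elif counts["B"] == quota:      # arm B full: remaining subjects go to A
            arm = "A"
        else:                           # both arms open: fair coin toss
            arm = rng.choice(["A", "B"])
        counts[arm] += 1
        assignments.append(arm)
    return assignments

print("".join(truncated_binomial_design(12, seed=1)))  # balanced: six A's, six B's
```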

4.
Anal Chim Acta ; 1305: 342597, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38677839

ABSTRACT

BACKGROUND: Measurement uncertainty is increasingly used in pure and applied analytical chemistry to support decision-making in commercial transactions and technical-scientific applications. Until recently, measurement uncertainty was considered to boil down to analytical uncertainty; over the last two decades, however, uncertainty arising from sampling has also been considered. The second version of the EURACHEM guide, published in 2019, assumes that the frequency distribution is approximately normal or can be normalized through logarithmic transformation, and does not treat data that deviate from normality. RESULTS: Here, six examples (four from the Eurachem guide) were treated by classical ANOVA and submitted to an innovative nonparametric approach for estimating the uncertainty contribution arising from sampling. Confidence intervals based on the bootstrapping method were used to guarantee metrological compatibility between the uncertainty ratios obtained from the traditional parametric tests and from the proposed nonparametric methodology. SIGNIFICANCE AND NOVELTY: The present study proposes an innovative methodology, based on nonparametric statistics and median absolute deviation concepts (NONPANOVA), to cover this gap in the literature. Supplementary material based on Excel spreadsheets was developed to assist users in the statistical treatment of their own examples.
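The NONPANOVA calculations themselves are not reproduced here, but the flavour of a nonparametric, bootstrap-based estimate of sampling uncertainty can be sketched with a MAD-based spread of duplicate sampling results and a percentile bootstrap interval; the data and scaling choices below are illustrative assumptions, not the article's method.

```python
# Illustrative sketch only (not the NONPANOVA algorithm): a MAD-based estimate
# of sampling spread from duplicate samples, with a percentile bootstrap CI.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical duplicate sampling results for 10 targets (rows: targets, cols: duplicates).
target_means = rng.normal(100, 2, size=(10, 1))
duplicates = target_means + rng.normal(0, 3, size=(10, 2))

def robust_sampling_spread(dup):
    diffs = dup[:, 0] - dup[:, 1]                    # between-duplicate differences
    mad = np.median(np.abs(diffs - np.median(diffs)))
    return 1.4826 * mad / np.sqrt(2)                 # robust sd of a single result

boot = [robust_sampling_spread(duplicates[rng.integers(0, 10, 10)]) for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Robust spread {robust_sampling_spread(duplicates):.2f}, "
      f"95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```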

5.
Health Technol Assess ; 28(16): 1-93, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38551135

ABSTRACT

Background: Guidelines for sepsis recommend treating those at highest risk within 1 hour. The emergency care system can only achieve this if sepsis is recognised and prioritised. Ambulance services can use prehospital early warning scores alongside paramedic diagnostic impression to prioritise patients for treatment or early assessment in the emergency department. Objectives: To determine the accuracy, impact and cost-effectiveness of using early warning scores alongside paramedic diagnostic impression to identify sepsis requiring urgent treatment. Design: Retrospective diagnostic cohort study and decision-analytic modelling of operational consequences and cost-effectiveness. Setting: Two ambulance services and four acute hospitals in England. Participants: Adults transported to hospital by emergency ambulance, excluding episodes with injury, mental health problems, cardiac arrest, direct transfer to specialist services, or no vital signs recorded. Interventions: Twenty-one early warning scores used alongside paramedic diagnostic impression, categorised as sepsis, infection, non-specific presentation, or other specific presentation. Main outcome measures: Proportion of cases prioritised at the four hospitals; diagnostic accuracy for the sepsis-3 definition of sepsis and receiving urgent treatment (primary reference standard); daily number of cases with and without sepsis prioritised at a large and a small hospital; the minimum treatment effect associated with prioritisation at which each strategy would be cost-effective, compared to no prioritisation, assuming willingness to pay £20,000 per quality-adjusted life-year gained. Results: Data from 95,022 episodes involving 71,204 patients across four hospitals showed that most early warning scores operating at their pre-specified thresholds would prioritise more than 10% of cases when applied to non-specific attendances or all attendances. Data from 12,870 episodes at one hospital identified 348 (2.7%) with the primary reference standard. The National Early Warning Score, version 2 (NEWS2), had the highest area under the receiver operating characteristic curve when applied only to patients with a paramedic diagnostic impression of sepsis or infection (0.756, 95% confidence interval 0.729 to 0.783) or sepsis alone (0.655, 95% confidence interval 0.63 to 0.68). None of the strategies provided high sensitivity (> 0.8) with acceptable positive predictive value (> 0.15). NEWS2 provided combinations of sensitivity and specificity that were similar or superior to all other early warning scores. Applying NEWS2 to paramedic diagnostic impression of sepsis or infection with thresholds of > 4, > 6 and > 8 respectively provided sensitivities and positive predictive values (95% confidence interval) of 0.522 (0.469 to 0.574) and 0.216 (0.189 to 0.245), 0.447 (0.395 to 0.499) and 0.274 (0.239 to 0.313), and 0.314 (0.268 to 0.365) and 0.333 (0.284 to 0.386). The mortality relative risk reduction from prioritisation at which each strategy would be cost-effective exceeded 0.975 for all strategies analysed. Limitations: We estimated accuracy using a sample of older patients at one hospital. Reliable evidence was not available to estimate the effectiveness of prioritisation in the decision-analytic modelling. Conclusions: No strategy is ideal, but using NEWS2 in patients with a paramedic diagnostic impression of infection or sepsis could identify one-third to half of sepsis cases without prioritising unmanageable numbers.
No other score provided clearly superior accuracy to NEWS2. Research is needed to develop better definition, diagnosis, and treatment of sepsis. Study registration: This study is registered with the Research Registry (reference: researchregistry5268). Funding: This award was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme (NIHR award ref: 17/136/10) and is published in full in Health Technology Assessment; Vol. 28, No. 16. See the NIHR Funding and Awards website for further award information.
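The sensitivity and positive predictive value estimates quoted above are simple proportions, so their confidence intervals follow from the underlying 2 × 2 counts. The sketch below uses counts chosen only to roughly echo the threshold > 4 figures (illustrative reconstructions, not the study's tabulated data) and Wilson intervals:

```python
# Sketch: sensitivity and PPV with 95% Wilson CIs from a 2x2 table. The counts
# below are illustrative reconstructions, not the study's tabulated data.
from statsmodels.stats.proportion import proportion_confint

tp, fn, fp = 182, 166, 660   # assumed true positives, false negatives, false positives

for name, k, n in [("Sensitivity", tp, tp + fn), ("PPV", tp, tp + fp)]:
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{name}: {k / n:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```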


Sepsis is a life-threatening condition in which an abnormal response to infection causes heart, lung or kidney failure. People with sepsis need urgent treatment. They need to be prioritised at the emergency department rather than waiting in the queue. Paramedics attempt to identify people with possible sepsis using an early warning score (based on simple measurements, such as blood pressure and heart rate) alongside their impression of the patient's diagnosis. They can then alert the hospital to assess the patient quickly. However, an inaccurate early warning score might miss cases of sepsis or unnecessarily prioritise people without sepsis. We aimed to measure how accurately early warning scores identified people with sepsis when used alongside paramedic diagnostic impression. We collected data from 71,204 people whom two ambulance services transported to four different hospitals in 2019. We recorded paramedic diagnostic impressions and calculated early warning scores for each patient. At one hospital, we linked ambulance records to hospital records and identified who had sepsis. We then calculated the accuracy of using the scores alongside diagnostic impression to diagnose sepsis. Finally, we used modelling to predict how many patients (with and without sepsis) paramedics would prioritise using different strategies based on early warning scores and diagnostic impression. We found that none of the currently available early warning scores were ideal. When they were applied to all patients, they prioritised too many people. When they were only applied to patients whom the paramedics thought had infection, they missed many cases of sepsis. The NEWS2 score, which ambulance services already use, was as good as or better than all the other scores we studied. We found that using the NEWS2 score in people with a paramedic impression of infection could achieve a reasonable balance between prioritising too many patients and avoiding missing patients with sepsis.


Subject(s)
Early Warning Score , Emergency Medical Services , Sepsis , Adult , Humans , Cost-Benefit Analysis , Retrospective Studies , Sepsis/diagnosis
6.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536746

ABSTRACT

The paper extends the empirical likelihood (EL) approach of Liu et al. to a new and very flexible family of latent class models for capture-recapture data that also allows for serial dependence on previous capture history, conditionally on latent type and covariates. The EL approach makes it possible to estimate the overall population size directly rather than by summing estimates conditional on covariate configurations. A Fisher-scoring algorithm for maximum likelihood estimation is proposed, and a more efficient alternative to the traditional EL approach for estimating the non-parametric component is introduced; this allows us to show that the mapping between the non-parametric distribution of the covariates and the probabilities of never being captured is one-to-one and strictly increasing. Asymptotic results are outlined, and a procedure for constructing profile likelihood confidence intervals for the population size is presented. Two examples based on real data are used to illustrate the proposed approach, and a simulation study indicates that, when estimating the overall undercount, the method proposed here is substantially more efficient than the one based on conditional maximum likelihood estimation, especially when the sample size is not sufficiently large.


Subject(s)
Models, Statistical , Likelihood Functions , Computer Simulation , Population Density , Sample Size
7.
Stat Med ; 43(8): 1577-1603, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38339872

ABSTRACT

Due to the dependency structure in the sampling process, adaptive trial designs create challenges in point and interval estimation and in the calculation of P-values. Optimal adaptive designs, which are designs where the parameters governing the adaptivity are chosen to maximize some performance criterion, suffer from the same problem. Various analysis methods which are able to handle this dependency structure have already been developed. In this work, we aim to give a comprehensive summary of these methods and show how they can be applied to the class of designs with planned adaptivity, of which optimal adaptive designs are an important member. The defining feature of these kinds of designs is that the adaptive elements are completely prespecified. This allows for explicit descriptions of the calculations involved, which makes it possible to evaluate different methods in a fast and accurate manner. We will explain how to do so, and present an extensive comparison of the performance characteristics of various estimators between an optimal adaptive design and its group-sequential counterpart.


Subject(s)
Research Design , Humans , Confidence Intervals , Sample Size
8.
Diagnostics (Basel) ; 14(4)2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38396440

ABSTRACT

The role of medical diagnosis is essential in patient care and healthcare. Established diagnostic practices typically rely on predetermined clinical criteria and numerical thresholds. In contrast, Bayesian inference provides an advanced framework that supports diagnosis via in-depth probabilistic analysis. This study's aim is to introduce a software tool dedicated to the quantification of uncertainty in Bayesian diagnosis, a field that has seen minimal exploration to date. The presented tool, a freely available specialized software program, utilizes uncertainty propagation techniques to estimate the sampling, measurement, and combined uncertainty of the posterior probability for disease. It features two primary modules and fifteen submodules, all designed to facilitate the estimation and graphical representation of the standard uncertainty of the posterior probability estimates for diseased and non-diseased population samples, incorporating parameters such as the mean and standard deviation of the test measurand, the size of the samples, and the standard measurement uncertainty inherent in screening and diagnostic tests. Our study showcases the practical application of the program by examining the fasting plasma glucose data sourced from the National Health and Nutrition Examination Survey. Parametric distribution models are explored to assess the uncertainty of Bayesian posterior probability for diabetes mellitus, using the oral glucose tolerance test as the reference diagnostic method.
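The Bayesian diagnostic logic underlying such a tool can be illustrated with Bayes' theorem for the post-test probability of disease given a pre-test probability, sensitivity, and specificity; the numerical values below are assumptions for illustration, not outputs of the program or the NHANES analysis.

```python
# Minimal sketch of Bayesian post-test probability of disease (Bayes' theorem).
# Prevalence, sensitivity and specificity values are illustrative assumptions.
def posterior_probability(prior, sensitivity, specificity, test_positive=True):
    if test_positive:
        num = sensitivity * prior
        den = num + (1 - specificity) * (1 - prior)
    else:
        num = (1 - sensitivity) * prior
        den = num + specificity * (1 - prior)
    return num / den

prior = 0.10          # assumed pre-test probability of disease
sens, spec = 0.80, 0.95
print(f"P(disease | positive test) = {posterior_probability(prior, sens, spec, True):.3f}")
print(f"P(disease | negative test) = {posterior_probability(prior, sens, spec, False):.3f}")
```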

9.
Cureus ; 16(1): e51964, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38333481

ABSTRACT

Overconfidence in statistical results in medicine is fueled by improper practices and historical biases afflicting the concept of statistical significance. In particular, the dichotomization of significance (i.e., significant vs. not significant), blending of Fisherian and Neyman-Pearson approaches, magnitude and nullification fallacies, and other fundamental misunderstandings distort the purpose of statistical investigations entirely, impacting their ability to inform public health decisions or other fields of science in general. For these reasons, the international statistical community has attempted to propose various alternatives or different interpretative modes. However, as of today, such misuses still prevail. In this regard, the present paper discusses the use of multiple confidence (or, more aptly, compatibility) intervals to address these issues at their core. Additionally, an extension of the concept of confidence interval, called the surprisal interval (S-interval), is proposed in the realm of statistical surprisal. The S-interval is based on comparing the statistical surprise to an easily interpretable phenomenon, such as obtaining S consecutive heads when flipping a fair coin. This allows for a complete departure from the notions of statistical significance and confidence, which carry with them longstanding misconceptions.
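The coin-flip analogy corresponds to the familiar S-value transformation S = -log2(p): a result with p-value p is as surprising as obtaining S consecutive heads from a fair coin. A minimal sketch of that transformation (an illustration of the general idea, not necessarily the article's exact S-interval construction):

```python
# Surprisal (S-value) of a p-value: S = -log2(p), read as "as surprising as
# S consecutive heads from a fair coin". Illustration of the general idea only.
import math

for p in (0.05, 0.005, 0.25):
    s = -math.log2(p)
    print(f"p = {p:<6} -> S = {s:.1f} bits (~{s:.1f} heads in a row)")
```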

10.
Contemp Clin Trials ; 138: 107453, 2024 03.
Article in English | MEDLINE | ID: mdl-38253253

ABSTRACT

BACKGROUND: Clinical trials often include interim analyses of the proportion of participants experiencing an event by a fixed time-point. A pre-specified proportion excluded from a corresponding confidence interval (CI) may lead an independent monitoring committee to recommend stopping the trial. Frequently, this cumulative proportion is estimated by the Kaplan-Meier estimator with a Wald approximate CI, which may have coverage issues with small samples. METHODS: We reviewed four alternative CI methods for cumulative proportions (Beta Product Confidence Procedure (BPCP), BPCP Mid P, Rothman-Wilson, Thomas-Grunkemeier) and two CI methods for simple proportions (Clopper-Pearson, Wilson). We conducted a simulation study comparing CI methods across true event proportions for 12 scenarios differentiated by sample sizes and censoring patterns. We re-analyzed interim data from A5340, an HIV cure trial, considering the proportion of participants experiencing virologic failure. RESULTS: Our simulation study highlights the lower and upper tail error probabilities for each CI method. Across scenarios, we found differences in the performance of lower versus upper bounds. No single method is always preferred. The upper bound of a Wald approximate CI performed reasonably with some error inflation, whereas the lower bound of the BPCP Mid P method performed well. For a trial design similar to A5340, we recommend BPCP Mid P. CONCLUSIONS: The design of future single-arm interim analyses of event proportions should consider the most appropriate CI method based on the relevant bound, anticipated sample size and event proportion. Our paper summarizes available methods, demonstrates performance in a simulation study, and includes code for implementation.
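The small-sample coverage concern for Wald-type intervals can be checked with a quick simulation; the sketch below compares Wald and Clopper-Pearson coverage for a simple binomial proportion at n = 25 (a generic illustration of the issue rather than a reproduction of the paper's simulation study, which involved censored time-to-event data).

```python
# Quick coverage check for 95% Wald vs Clopper-Pearson CIs on a simple
# proportion at small n; a generic illustration, not the paper's simulations.
import numpy as np
from scipy.stats import binomtest, norm

rng = np.random.default_rng(7)
n, p_true, reps = 25, 0.10, 10_000
z = norm.ppf(0.975)

cover_wald = cover_cp = 0
for k in rng.binomial(n, p_true, size=reps):
    phat = k / n
    se = np.sqrt(phat * (1 - phat) / n)
    if phat - z * se <= p_true <= phat + z * se:
        cover_wald += 1
    ci = binomtest(int(k), n).proportion_ci(confidence_level=0.95, method="exact")
    if ci.low <= p_true <= ci.high:
        cover_cp += 1

print(f"Wald coverage:            {cover_wald / reps:.3f}")
print(f"Clopper-Pearson coverage: {cover_cp / reps:.3f}")
```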


Subject(s)
Research Design , Humans , Confidence Intervals , Sample Size , Computer Simulation , Survival Analysis
11.
Entropy (Basel) ; 26(1)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38275503

ABSTRACT

The paper makes a case that the current discussions on replicability and the abuse of significance testing have overlooked a more general contributor to the untrustworthiness of published empirical evidence, which is the uninformed and recipe-like implementation of statistical modeling and inference. It is argued that this contributes to the untrustworthiness problem in several different ways, including [a] statistical misspecification, [b] unwarranted evidential interpretations of frequentist inference results, and [c] questionable modeling strategies that rely on curve-fitting. What is more, the alternative proposals to replace or modify frequentist testing, including [i] replacing p-values with observed confidence intervals and effect sizes, and [ii] redefining statistical significance, will not address the untrustworthiness of evidence problem since they are equally vulnerable to [a]-[c]. The paper calls for distinguishing unduly data-dependent 'statistical results', such as a point estimate, a p-value, and accept/reject H0, from 'evidence for or against inferential claims'. The post-data severity (SEV) evaluation of the accept/reject H0 results converts them into evidence for or against germane inferential claims. These claims can be used to address/elucidate several foundational issues, including (i) statistical vs. substantive significance, (ii) the large n problem, and (iii) the replicability of evidence. Also, the SEV perspective sheds light on the impertinence of the proposed alternatives [i]-[iii], and oppugns [iii] the alleged arbitrariness of framing H0 and H1, which is often exploited to undermine the credibility of frequentist testing.

12.
Curr Med Chem ; 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38173071

ABSTRACT

BACKGROUND AND OBJECTIVE: Blood cystatin C level has been introduced as a promising biomarker to detect early kidney injury in cirrhotic patients. The purpose of this meta-analysis was to investigate the association of blood cystatin C level with all-cause mortality in patients with liver cirrhosis. METHODS: PubMed, ScienceDirect, and Embase databases were searched from inception to November 15, 2022. Observational studies evaluating the value of blood cystatin C level in predicting all-cause mortality in patients with liver cirrhosis were selected. The pooled hazard ratio (HR) with 95% confidence intervals (CI) was calculated using a random-effects model meta-analysis. RESULTS: Twelve studies with 1983 cirrhotic patients were identified. The pooled adjusted HR of all-cause mortality was 3.59 (95% CI 2.39-5.39) for the high versus low group of cystatin C level. Stratified analysis by study design, patient characteristics, geographical region, sample size, and length of follow-up further supported the predictive value of elevated cystatin C level. CONCLUSION: Elevated cystatin C level was an independent predictor of poor survival in patients with liver cirrhosis. Measurement of blood cystatin C level may provide important prognostic information in cirrhotic patients.
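Random-effects pooling of hazard ratios of the kind described here is typically an inverse-variance weighted average of log HRs with a DerSimonian-Laird between-study variance; the sketch below applies that calculation to hypothetical study estimates, not to the twelve studies in this meta-analysis.

```python
# Sketch: DerSimonian-Laird random-effects pooling of log hazard ratios.
# Study HRs and CIs below are hypothetical, not those of the meta-analysis.
import numpy as np

hr = np.array([2.8, 4.1, 3.0, 5.2])                   # hypothetical study HRs
ci_low = np.array([1.5, 2.0, 1.4, 2.1])
ci_high = np.array([5.2, 8.4, 6.4, 12.9])

y = np.log(hr)                                        # log hazard ratios
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from 95% CI width
w = 1 / se**2                                         # fixed-effect weights

# DerSimonian-Laird between-study variance tau^2
q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (se**2 + tau2)                           # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se_pooled = np.sqrt(1 / np.sum(w_star))
print(f"Pooled HR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se_pooled):.2f}"
      f" to {np.exp(pooled + 1.96 * se_pooled):.2f})")
```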

13.
Theor Popul Biol ; 155: 1-9, 2024 02.
Article in English | MEDLINE | ID: mdl-38000513

ABSTRACT

By quantifying key life history parameters in populations, such as growth rate, longevity, and generation time, researchers and administrators can obtain valuable insights into their dynamics. Although point estimates of demographic parameters have been available since the inception of demography as a scientific discipline, the construction of confidence intervals has typically relied on approximations through series expansions or computationally intensive techniques. This study introduces the first mathematical expression for calculating confidence intervals for the aforementioned life history traits when individuals are unidentifiable and data are presented as a life table. The key finding is the accurate estimation of the confidence interval for r, the instantaneous growth rate, which is tested using Monte Carlo simulations with four arbitrary discrete distributions. In comparison to the bootstrap method, the proposed interval construction method proves more efficient, particularly for experiments with a total offspring size below 400. We discuss handling cases where data are organized in extended life tables or as a matrix of vital rates. We have developed and provided accompanying code to facilitate these computations.
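A point estimate of the instantaneous growth rate r from a life table comes from solving the Euler-Lotka equation, the sum over ages of exp(-r x) l_x m_x = 1; the sketch below solves it numerically for a small hypothetical life table (the paper's analytical confidence interval, or a bootstrap, would then be layered on top and is not reproduced here).

```python
# Sketch: solve the Euler-Lotka equation sum(exp(-r*x) * lx * mx) = 1 for the
# instantaneous growth rate r, using a small hypothetical life table.
import numpy as np
from scipy.optimize import brentq

x = np.array([0, 1, 2, 3, 4])                # age classes
lx = np.array([1.0, 0.8, 0.6, 0.3, 0.1])     # survivorship to age x (hypothetical)
mx = np.array([0.0, 0.5, 1.2, 1.0, 0.4])     # fecundity at age x (hypothetical)

def euler_lotka(r):
    return np.sum(np.exp(-r * x) * lx * mx) - 1.0

r_hat = brentq(euler_lotka, -2.0, 2.0)       # root-finding over a wide bracket
print(f"Estimated instantaneous growth rate r = {r_hat:.4f}")
```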


Subject(s)
Longevity , Population Growth , Humans , Confidence Intervals , Population Dynamics , Life Tables
14.
Rev. saúde pública (Online) ; 58: 01, 2024. graf
Article in English | LILACS | ID: biblio-1536768

ABSTRACT

OBJECTIVE: This study aims to propose a comprehensive alternative to the Bland-Altman plot method, addressing its limitations and providing a statistical framework for evaluating the equivalence of measurement techniques. This involves introducing an innovative three-step approach for assessing accuracy, precision, and agreement between techniques, which enhances objectivity in equivalence assessment. Additionally, an easy-to-use R package was developed to enable researchers to efficiently analyze and interpret technique equivalence. METHODS: Inferential statistical support for equivalence between measurement techniques was proposed in three nested tests. These were based on structural regressions aimed at assessing the equivalence of structural means (accuracy), the equivalence of structural variances (precision), and concordance with the structural bisector line (agreement in measurements obtained from the same subject), using analytical methods and a robust bootstrapping approach. To promote better understanding, graphical outputs following Bland and Altman's principles were also implemented. RESULTS: The performance of this method was demonstrated and compared using five data sets from previously published articles that used Bland and Altman's method. One case demonstrated strict equivalence, three cases showed partial equivalence, and one showed poor equivalence. The developed R package, containing open code and data, is available for free, with installation instructions, at Harvard Dataverse at https://doi.org/10.7910/DVN/AGJPZH. CONCLUSION: Although easy to communicate, the widely cited and applied Bland and Altman plot method is often misinterpreted, since it lacks suitable inferential statistical support. Common alternatives, such as Pearson's correlation or ordinary least-squares linear regression, also fail to locate the weaknesses of each measurement technique. It may be possible to test whether two techniques have full equivalence while preserving graphical communication, in accordance with Bland and Altman's principles, but also adding robust and suitable inferential statistics. Decomposing equivalence into three features (accuracy, precision, and agreement) helps to locate the sources of the problem when establishing a new technique.


Subject(s)
Confidence Intervals , Regression Analysis , Data Interpretation, Statistical , Statistical Inference , Data Accuracy
15.
Ann Appl Stat ; 17(4): 3550-3569, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38106966

ABSTRACT

The Scientific Registry of Transplant Recipients (SRTR) system has become a rich resource for understanding the complex mechanisms of graft failure after kidney transplant, a crucial step for allocating organs effectively and implementing appropriate care. As transplant centers that treated patients might strongly confound graft failures, Cox models stratified by centers can eliminate their confounding effects. Also, since recipient age is a proven non-modifiable risk factor, a common practice is to fit models separately by recipient age groups. The moderate sample sizes, relative to the number of covariates, in some age groups may lead to biased maximum stratified partial likelihood estimates and unreliable confidence intervals even when samples still outnumber covariates. To draw reliable inference on a comprehensive list of risk factors measured from both donors and recipients in SRTR, we propose a de-biased lasso approach via quadratic programming for fitting stratified Cox models. We establish asymptotic properties and verify via simulations that our method produces consistent estimates and confidence intervals with nominal coverage probabilities. Accounting for nearly 100 confounders in SRTR, the de-biased method detects that the graft failure hazard nonlinearly increases with donor's age among all recipient age groups, and that organs from older donors more adversely impact the younger recipients. Our method also delineates the associations between graft failure and many risk factors such as recipients' primary diagnoses (e.g. polycystic disease, glomerular disease, and diabetes) and donor-recipient mismatches for human leukocyte antigen loci across recipient age groups. These results may inform the refinement of donor-recipient matching criteria for stakeholders.

16.
Sensors (Basel) ; 23(21)2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37960554

ABSTRACT

The paper explores the application of Steiner's most-frequent-value (MFV) statistical method in sensor data analysis. The MFV is introduced as a powerful tool to identify the most-common value in a dataset, even when data points are scattered, unlike traditional mode calculations. Furthermore, the paper underscores the MFV method's versatility in estimating the environmental gamma background (the natural level of gamma radiation present in the environment, typically originating from natural sources such as rocks, soil, and cosmic rays), making it useful in scenarios where traditional statistical methods are challenging. It presents the MFV approach as a reliable technique for characterizing ambient radiation levels around large-scale experiments, such as the DEAP-3600 dark matter detector. Using the MFV alongside passive sensors such as thermoluminescent detectors and employing a bootstrapping approach, this study showcases its effectiveness in evaluating background radiation and its aptness for estimating confidence intervals. In summary, this paper underscores the importance of the MFV and bootstrapping as valuable statistical tools in various scientific fields that involve the analysis of sensor data. These tools help in estimating the most-common values and make data analysis easier, especially in complex situations where we need to be reasonably confident about our estimated ranges. Our calculations based on MFV statistics and bootstrapping indicate that the ambient radiation level in Cube Hall at SNOLAB is 35.19 µGy for 1342 h of exposure, with an uncertainty range of +3.41 to -3.59 µGy, corresponding to a 68.27% confidence level. In the vicinity of the DEAP-3600 water shielding, the ambient radiation level is approximately 34.80 µGy, with an uncertainty range of +3.58 to -3.48 µGy, also at a 68.27% confidence level. These findings offer crucial guidance for experimental design at SNOLAB, especially in the context of dark matter research.

17.
Inf inference ; 12(4): iaad040, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37982049

ABSTRACT

We consider asymptotically exact inference on the leading canonical correlation directions and strengths between two high-dimensional vectors under sparsity restrictions. In this regard, our main contribution is developing a novel representation of the Canonical Correlation Analysis problem, based on which one can operationalize a one-step bias correction on reasonable initial estimators. Our analytic results in this regard are adaptive over suitable structural restrictions of the high-dimensional nuisance parameters, which, in this set-up, correspond to the covariance matrices of the variables of interest. We further supplement the theoretical guarantees behind our procedures with extensive numerical studies.

18.
Health Technol Assess ; 27(15): 1-83, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37842916

ABSTRACT

Background: Antidepressants are commonly prescribed during pregnancy, despite a lack of evidence from randomised trials on the benefits or risks. Some studies have reported associations of antidepressants during pregnancy with adverse offspring neurodevelopment, but whether or not such associations are causal is unclear. Objectives: To study the associations of antidepressants for depression in pregnancy with outcomes using multiple methods to strengthen causal inference. Design: This was an observational cohort design using multiple methods to strengthen causal inference, including multivariable regression, propensity score matching, instrumental variable analysis, negative control exposures, comparison across indications, and exposure discordant pregnancies analysis. Setting: This took place in UK general practice. Participants: Participants were pregnant women with depression. Interventions: The interventions were initiation of antidepressants in pregnancy compared with no initiation, and continuation of antidepressants in pregnancy compared with discontinuation. Main outcome measures: The maternal outcome measures were the use of primary care and secondary mental health services during pregnancy, and during four 6-month follow-up periods up to 24 months after pregnancy, and antidepressant prescription status 24 months following pregnancy. The child outcome measures were diagnosis of autism, diagnosis of attention deficit hyperactivity disorder and intellectual disability. Data sources: UK Clinical Practice Research Datalink. Results: Data on 80,103 pregnancies were used to study maternal primary care outcomes and were linked to 34,274 children with at least 4-year follow-up for neurodevelopmental outcomes. Women who initiated or continued antidepressants during pregnancy were more likely to have contact with primary and secondary health-care services during and after pregnancy and more likely to be prescribed an antidepressant 2 years following the end of pregnancy than women who did not initiate or continue antidepressants during pregnancy (odds ratio for initiation 2.16, 95% confidence interval 1.95 to 2.39; odds ratio for continuation 2.40, 95% confidence interval 2.27 to 2.53). There was little evidence for any substantial association with autism (odds ratio from multivariable regression 1.10, 95% confidence interval 0.90 to 1.35; odds ratio from propensity score analysis 1.06, 95% confidence interval 0.84 to 1.32), attention deficit hyperactivity disorder (odds ratio from multivariable regression 1.02, 95% confidence interval 0.80 to 1.29; odds ratio from propensity score analysis 0.97, 95% confidence interval 0.75 to 1.25) or intellectual disability (odds ratio from multivariable regression 0.81, 95% confidence interval 0.55 to 1.19; odds ratio from propensity score analysis 0.89, 95% confidence interval 0.61 to 1.31) in children of women who continued antidepressants compared with those who discontinued antidepressants. There was inconsistent evidence of an association between initiation of antidepressants in pregnancy and diagnosis of autism in offspring (odds ratio from multivariable regression 1.23, 95% confidence interval 0.85 to 1.78; odds ratio from propensity score analysis 1.64, 95% confidence interval 1.01 to 2.66) but not attention deficit hyperactivity disorder or intellectual disability; however, results were imprecise owing to smaller numbers. Limitations: Several causal-inference analyses lacked precision owing to limited numbers. In addition, adherence to the prescribed treatment was not measured.
Conclusions: Women prescribed antidepressants during pregnancy had greater service use during and after pregnancy than those not prescribed antidepressants. The evidence against any substantial association with autism, attention deficit hyperactivity disorder or intellectual disability in the children of women who continued compared with those who discontinued antidepressants in pregnancy is reassuring. The potential association of initiation of antidepressants during pregnancy with offspring autism needs further investigation. Future work: Further research on larger samples could increase the robustness and precision of these findings. The methods applied here could be a template for future pharmaco-epidemiological investigation of other pregnancy-related prescribing safety concerns. Funding: This project was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme (15/80/19) and will be published in full in Health Technology Assessment; Vol. 27, No. 15. See the NIHR Journals Library website for further project information.


About one in seven women experience depression during pregnancy. Left untreated, this may harm them and their unborn babies. However, the decision to take antidepressants during pregnancy is difficult because women often worry about the risks to their unborn baby. Research findings have been inconsistent, so women often do not have clear information to enable them to make informed decisions. We studied women's and children's outcomes after starting (compared with not starting) or continuing (compared with stopping) antidepressants in pregnancy. We used a large UK primary care database and several novel methods of analysis. We tracked 80,103 pregnancies of women with depression for up to 2 years after pregnancy. We also tracked 34,274 children from these pregnancies for at least 4 years to check for developmental outcomes. Women prescribed antidepressants were more likely than women not prescribed antidepressants to use general practice and mental health services during and after pregnancy, and to be prescribed antidepressants 2 years after pregnancy. This suggests that antidepressants were being prescribed to women with greater clinical need. Women who continued antidepressants in pregnancy had no higher likelihood of autism, attention deficit hyperactivity disorder or intellectual disability in their children than women who discontinued antidepressants. This should reassure women making the decision to continue taking their medications in pregnancy. Women who started antidepressants in pregnancy may have had a slightly higher likelihood of autism in their children than those who did not start them. These findings were not seen in all analyses and were based on smaller numbers; therefore, they should be viewed with caution. Importantly, over 98 in every 100 children of women who initiated or continued antidepressants in pregnancy did not receive an autism diagnosis. The findings may help women and clinicians make informed decisions on treatment with antidepressants in pregnancy.


Subject(s)
Autistic Disorder , Intellectual Disability , Humans , Child , Female , Pregnancy , Intellectual Disability/drug therapy , Antidepressive Agents/adverse effects , Family , Technology Assessment, Biomedical
19.
Cancers (Basel) ; 15(19)2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37835368

ABSTRACT

This article describes rationales and limitations for making inferences based on data from randomized controlled trials (RCTs). We argue that obtaining a representative random sample from a patient population is impossible for a clinical trial because patients are accrued sequentially over time and thus comprise a convenience sample, subject only to protocol entry criteria. Consequently, the trial's sample is unlikely to represent a definable patient population. We use causal diagrams to illustrate the difference between random allocation of interventions within a clinical trial sample and true simple or stratified random sampling, as executed in surveys. We argue that group-specific statistics, such as a median survival time estimate for a treatment arm in an RCT, have limited meaning as estimates of larger patient population parameters. In contrast, random allocation between interventions facilitates comparative causal inferences about between-treatment effects, such as hazard ratios or differences between probabilities of response. Comparative inferences also require the assumption of transportability from a clinical trial's convenience sample to a targeted patient population. We focus on the consequences and limitations of randomization procedures in order to clarify the distinctions between pairs of complementary concepts of fundamental importance to data science and RCT interpretation. These include internal and external validity, generalizability and transportability, uncertainty and variability, representativeness and inclusiveness, blocking and stratification, relevance and robustness, forward and reverse causal inference, intention to treat and per protocol analyses, and potential outcomes and counterfactuals.

20.
J Med Life ; 16(6): 873-882, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37675163

ABSTRACT

The severity of the 2019 coronavirus disease (COVID-19) and its effects remain unpredictable. Certain factors, such as obesity, hypertension, and type 2 diabetes mellitus, may increase the severity of the disease. Rheumatology experts suggest that patients with active autoimmune conditions and controlled autoimmune diseases on immunosuppressive therapy may be at higher risk of developing severe COVID-19. In this retrospective observational study, we aimed to examine the patterns of COVID-19 in patients with underlying rheumatological diseases and their association with disease severity and hospital outcomes. A total of 34 patients with underlying rheumatological diseases who tested positive for severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) by polymerase chain reaction (PCR) were included between March 2020 and April 2021 at King Fahd Hospital of the University. The study population consisted of 76.47% female and 23.53% male patients, with a mean age ranging from 20 to 40 years. Female gender (p=0.0001) and younger age (p=0.004) were associated with milder disease. The most frequent rheumatological disease was systemic lupus erythematosus (SLE) (38.24%), which was associated with a milder infection (p=0.045). Patients treated with mycophenolate mofetil (MMF) had a milder disease course (p=0.0037). Hypertension was significantly associated with severe COVID-19 disease (p=0.037). There was no significant relationship between SLE and the need for ICU admission. Patients on hydroxychloroquine and MMF tended to develop milder disease, and there was no association between the severity of the infection and the treatment with steroids.


Subject(s)
Autoimmune Diseases , COVID-19 , Diabetes Mellitus, Type 2 , Hypertension , Lupus Erythematosus, Systemic , Rheumatic Diseases , Humans , Female , Male , Young Adult , Adult , Saudi Arabia/epidemiology , COVID-19/complications , COVID-19/epidemiology , SARS-CoV-2 , Lupus Erythematosus, Systemic/complications , Lupus Erythematosus, Systemic/drug therapy , Lupus Erythematosus, Systemic/epidemiology , Hypertension/complications , Hypertension/epidemiology , Mycophenolic Acid , Rheumatic Diseases/complications , Rheumatic Diseases/epidemiology