Results 1 - 20 of 59
1.
Res Synth Methods; 15(3): 398-412, 2024 May.
Article in English | MEDLINE | ID: mdl-38111354

ABSTRACT

Outcomes of meta-analyses are increasingly used to inform evidence-based decision making in various research fields. However, a number of recent studies have reported rapid temporal changes in the magnitude and significance of reported effects, which could cause policy-relevant recommendations from meta-analyses to go out of date quickly. We assessed the extent and patterns of temporal trends in the magnitude and statistical significance of cumulative effects in meta-analyses in applied ecology and conservation published between 2004 and 2018. Of the 121 meta-analyses analysed, 93% showed a temporal trend in cumulative effect magnitude or significance, with 27% of the datasets exhibiting temporal trends in both. The most common trend was the early-study effect, in which at least one of the effect size estimates from the first 5 years differed in magnitude by more than 50% from the subsequent estimate. The observed temporal trends persisted in the majority of datasets once moderators were accounted for. Only 5 datasets showed significant changes in sample size over time that could potentially explain the observed temporal change in the cumulative effects. The year of publication of a meta-analysis had no significant effect on the presence of temporal trends in cumulative effects. Our results show that temporal changes in magnitude and statistical significance in applied ecology are widespread and represent a serious potential threat to the use of meta-analyses for decision-making in conservation and environmental management. We recommend the use of cumulative meta-analyses and call for more studies exploring the causes of the temporal effects.
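The cumulative meta-analysis recommended here can be illustrated with a short sketch: studies are sorted by publication year and the pooled estimate is recomputed as each study accumulates, so an early-study effect shows up as a large difference between the first cumulative estimates and the later, stabilised ones. A minimal common-effect (inverse-variance) version in Python; the function name and the data are hypothetical.

```python
import numpy as np

def cumulative_meta(years, effects, variances):
    """Running inverse-variance-weighted pooled effect, in publication order."""
    order = np.argsort(years)
    y = np.asarray(effects, dtype=float)[order]
    w = 1.0 / np.asarray(variances, dtype=float)[order]
    cum_est = np.cumsum(w * y) / np.cumsum(w)   # pooled effect after each study
    cum_se = np.sqrt(1.0 / np.cumsum(w))        # its standard error
    return np.asarray(years)[order], cum_est, cum_se

# Hypothetical example: the cumulative estimate drops by more than 50%
# after the early studies -- the pattern flagged above as an early-study effect.
yrs, est, se = cumulative_meta([2004, 2006, 2009, 2013],
                               [0.9, 0.4, 0.3, 0.25],
                               [0.04, 0.05, 0.03, 0.02])
```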


Subjects
Conservation of Natural Resources; Ecology; Meta-Analysis as Topic; Humans; Decision Making; Sample Size; Time Factors; Research Design
2.
Res Synth Methods; 14(5): 671-688, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37381621

ABSTRACT

For estimation of the heterogeneity variance τ2 in meta-analysis of the log-odds-ratio, we derive new mean- and median-unbiased point estimators and new interval estimators based on a generalized Q statistic, Q_F, in which the weights depend only on the studies' effective sample sizes. We compare them with familiar estimators based on the inverse-variance-weights version of Q, Q_IV. In an extensive simulation, we studied the bias (including median bias) of the point estimators and the coverage (including left and right coverage error) of the confidence intervals. Most estimators add 0.5 to each cell of the 2 × 2 table when one cell contains a zero count; we include a version that always adds 0.5. The results show that: two of the new point estimators and two of the familiar point estimators are almost unbiased when the total sample size n ≥ 250 and the probability in the control arm (p_iC) is 0.1, and when n ≥ 100 and p_iC is 0.2 or 0.5; for 0.1 ≤ τ2 ≤ 1, all estimators have negative bias for small to medium sample sizes, but for larger sample sizes some of the new median-unbiased estimators are almost median-unbiased; choices of interval estimators depend on the values of the parameters, but one of the new estimators is reasonable when p_iC = 0.1 and another when p_iC = 0.2 or 0.5; and the lack of balance between left and right coverage errors for small n and/or p_iC implies that the available approximations for the distributions of Q_IV and Q_F are accurate only for larger sample sizes.
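As a rough illustration of the ingredients above, the sketch below computes study-level log-odds-ratios with the 0.5 continuity correction and a generalized Q statistic with constant, sample-size-based weights. The specific weight n1·n2/(n1+n2) is an assumed stand-in for "effective sample size"; the paper defines Q_F precisely.

```python
import numpy as np

def log_odds_ratio(a, b, c, d, always_add_half=False):
    """LOR from a 2x2 table (a, b = treatment events/non-events; c, d = control).
    Adds 0.5 to every cell when any cell is zero (or always, if requested)."""
    if always_add_half or 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return np.log(a * d / (b * c))

def q_constant_weights(lor, n_treat, n_ctrl):
    """Generalized Q with fixed weights that ignore estimated variances."""
    w = n_treat * n_ctrl / (n_treat + n_ctrl)   # assumed effective sample sizes
    mu = np.sum(w * lor) / np.sum(w)            # weighted mean LOR
    return np.sum(w * (lor - mu) ** 2)
```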


Subjects
Odds Ratio; Probability; Computer Simulation; Sample Size; Bias
3.
BMC Med Res Methodol; 23(1): 146, 2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37344771

ABSTRACT

BACKGROUND: Cochran's Q statistic is routinely used for testing heterogeneity in meta-analysis. Its expected value (under an incorrect null distribution) is part of several popular estimators of the between-study variance, τ2. Those applications generally do not account for the use of the studies' estimated variances in the inverse-variance weights that define Q (more explicitly, Q_IV). Importantly, those weights make approximating the distribution of Q_IV rather complicated. METHODS: As an alternative, we are investigating a Q statistic, Q_F, whose constant weights use only the studies' arm-level sample sizes. For the log-odds-ratio (LOR), log-relative-risk (LRR), and risk difference (RD) as the measures of effect, we study, by simulation, approximations to the distributions of Q_F and Q_IV, as the basis for tests of heterogeneity. RESULTS: The results show that: for LOR and LRR, a two-moment gamma approximation to the distribution of Q_F works well for small sample sizes, and an approximation based on an algorithm of Farebrother is recommended for larger sample sizes. For RD, the Farebrother approximation works very well, even for small sample sizes. For Q_IV, the standard chi-square approximation provides levels that are much too low for LOR and LRR and too high for RD. The Kulinskaya et al. (Res Synth Methods 2:254-70, 2011) approximation for RD and the Kulinskaya and Dollinger (BMC Med Res Methodol 15:49, 2015) approximation for LOR work well for Q_IV but have some convergence issues for very small sample sizes combined with small probabilities. CONCLUSIONS: The performance of the standard chi-square approximation is inadequate for all three binary effect measures. Instead, we recommend a test of heterogeneity based on Q_F and provide practical guidelines for choosing an appropriate test at the .05 level for all three effect measures.
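The two-moment gamma approximation mentioned in the results matches a gamma distribution to the first two moments of the statistic's null distribution. A generic sketch follows; the measure-specific moments derived in the paper are taken here as inputs, not reproduced.

```python
from scipy import stats

def gamma_pvalue(q_obs, mean_q, var_q):
    """Heterogeneity p-value from a two-moment gamma approximation:
    shape and scale are chosen so that shape*scale = mean_q and
    shape*scale**2 = var_q."""
    shape = mean_q ** 2 / var_q
    scale = var_q / mean_q
    return stats.gamma.sf(q_obs, a=shape, scale=scale)
```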


Subjects
Algorithms; Humans; Computer Simulation; Probability; Odds Ratio; Sample Size
4.
Stat Med; 42(18): 3114-3127, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37190904

ABSTRACT

Cox regression, a semi-parametric method of survival analysis, is extremely popular in biomedical applications. The proportional hazards assumption is a key requirement in the Cox model. To accommodate non-proportional hazards, we propose to parameterize the shape parameter of the baseline hazard function using an additional, separate Cox-regression term that depends on the vector of covariates. This parametrization retains the general form of the hazard function over the strata and is similar to that in Devarajan and Ebrahimi (Comput Stat Data Anal. 2011;55:667-676) in the case of the Weibull distribution, but differs for other hazard functions. We call this model the double-Cox model. We formally introduce the double-Cox model with shared frailty and investigate, by simulation, the estimation bias and the coverage of the proposed point and interval estimation methods for the Gompertz and the Weibull baseline hazards. For real-life applications with low frailty variance and a large number of clusters, the marginal likelihood estimation is almost unbiased, and the profile-likelihood-based confidence intervals provide good coverage for all model parameters. We also compare the results from the over-parametrized double-Cox model with those from the standard Cox model with frailty in the case of scale-only proportional hazards. The model is illustrated on an example of survival after a diagnosis of type 2 diabetes mellitus. The R programs for fitting the double-Cox model are available on GitHub.
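To make the parameterization concrete, here is a minimal sketch of a Weibull "double-Cox" negative log-likelihood: one Cox-regression term scales the hazard as usual, and a second term scales the Weibull shape parameter. This is an illustrative assumption about the parameterization, not the authors' R code, and the shared frailty terms are omitted.

```python
import numpy as np

def double_cox_neg_loglik(params, t, x, event, k):
    """Weibull hazard with covariate-dependent scale and shape.
    params = (beta[0:k], gamma[0:k], log a0, log b0); event is 0/1."""
    beta, gamma = params[:k], params[k:2 * k]
    a0, b0 = np.exp(params[2 * k]), np.exp(params[2 * k + 1])
    shape = a0 * np.exp(x @ gamma)          # second Cox-type term: shape
    ph_term = np.exp(x @ beta)              # usual proportional-hazards term
    H = ph_term * (t / b0) ** shape         # cumulative hazard
    log_h = np.log(ph_term) + np.log(shape / t) + shape * np.log(t / b0)
    return -(np.sum(event * log_h) - np.sum(H))
```

In practice, a function like this can be handed to a general-purpose optimizer (e.g., scipy.optimize.minimize) to obtain maximum likelihood estimates.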


Subjects
Diabetes Mellitus, Type 2; Frailty; Humans; Proportional Hazards Models; Likelihood Functions; Survival Analysis
5.
J Stroke Cerebrovasc Dis; 31(9): 106663, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35907306

ABSTRACT

OBJECTIVE: Transient ischaemic attacks (TIAs) serve as warning signs for future stroke, but their impact on long-term survival is uncertain. We assessed the long-term hazards of all-cause mortality following a first episode of a TIA. DESIGN: Retrospective matched cohort study. METHODS: Cohort study using electronic primary health care records from The Health Improvement Network (THIN) database in the United Kingdom. Cases born in or before 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017 were matched to three controls on age, sex and general practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying double-Cox Weibull survival model with a random frailty effect of general practice, adjusting for socio-demographic factors, medical therapies and comorbidities. RESULTS: 20,633 cases and 58,634 controls were included. During the study period, 24,176 participants died: 7,745 (37.5%) cases and 16,431 (28.0%) controls. Cases aged 39-60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared with their matched controls (HR = 3.04 (2.91 - 3.18)). The HRs for cases aged 61-70, 71-76 and 77+ years were 1.98 (1.55 - 2.30), 1.79 (1.20 - 2.07) and 1.52 (1.15 - 1.97), respectively, compared with their same-aged matched controls. In cases aged 39-60 at TIA onset, aspirin prescription was associated with reduced HRs of 0.93 (0.84 - 1.01), 0.90 (0.82 - 0.98) and 0.88 (0.80 - 0.96) at 5, 10 and 15 years, respectively, compared with same-aged cases not prescribed any antiplatelet. Statistically significant reductions in hazard ratios were observed with aspirin at 10 and 15 years in all age groups. Hazard ratio point estimates for other antiplatelets (dipyridamole or clopidogrel) and dual antiplatelet therapy were very similar to those for aspirin at 5, 10 and 15 years, but with wider confidence intervals that included 1. There was no survival benefit associated with antiplatelet prescription in controls. CONCLUSIONS: The overall risk of death was considerably elevated in all age groups after a first-ever TIA event. Aspirin prescription was associated with a reduced risk. These findings support the use of aspirin in secondary prevention for people with a TIA. The results do not support the use of antiplatelet medication in people without TIA.


Subjects
Ischemic Attack, Transient; Stroke; Aspirin/therapeutic use; Cohort Studies; Humans; Ischemic Attack, Transient/complications; Ischemic Attack, Transient/diagnosis; Ischemic Attack, Transient/therapy; Platelet Aggregation Inhibitors/therapeutic use; Retrospective Studies; Stroke/drug therapy; Stroke/therapy
6.
Br J Math Stat Psychol; 75(3): 444-465, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35094381

ABSTRACT

Cochran's Q statistic is routinely used for testing heterogeneity in meta-analysis. Its expected value is also used in several popular estimators of the between-study variance, τ2. Those applications generally have not considered the implications of its use of estimated variances in the inverse-variance weights. Importantly, those weights make approximating the distribution of Q (more explicitly, Q_IV) rather complicated. As an alternative, we investigate a new Q statistic, Q_F, whose constant weights use only the studies' effective sample sizes. For the standardized mean difference as the measure of effect, we study, by simulation, approximations to distributions of Q_IV and Q_F, as the basis for tests of heterogeneity and for new point and interval estimators of τ2. These include new DerSimonian-Kacker-type moment estimators based on the first moment of Q_F, and novel median-unbiased estimators. The results show that: an approximation based on an algorithm of Farebrother follows both the null and the alternative distributions of Q_F reasonably well, whereas the usual chi-squared approximation for the null distribution of Q_IV and the Biggerstaff-Jackson approximation to its alternative distribution are poor; in estimating τ2, our moment estimator based on Q_F is almost unbiased, the Mandel-Paule estimator has some negative bias in some situations, and the DerSimonian-Laird and restricted maximum likelihood estimators have considerable negative bias; and all 95% interval estimators have coverage that is too high when τ2 = 0, but otherwise the Q-profile interval performs very well.
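A DerSimonian-Kacker-type moment estimator for fixed weights follows from the standard identity E[Q_w] = Σw_i σ_i² − Σw_i²σ_i²/Σw_i + τ²(Σw_i − Σw_i²/Σw_i); solving for τ² gives the sketch below, with sample-size-based weights supplied by the caller. The SMD-specific first-moment corrections developed in the paper are not reproduced here.

```python
import numpy as np

def tau2_moment(y, sigma2, w):
    """Method-of-moments estimate of tau^2 from Q with fixed weights w;
    sigma2 are the studies' within-study variances."""
    W = np.sum(w)
    ybar = np.sum(w * y) / W
    Q = np.sum(w * (y - ybar) ** 2)
    e0 = np.sum(w * sigma2) - np.sum(w ** 2 * sigma2) / W  # E[Q] at tau^2 = 0
    c = W - np.sum(w ** 2) / W                             # coefficient of tau^2
    return max(0.0, (Q - e0) / c)
```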


Subjects
Algorithms; Models, Statistical; Computer Simulation
7.
Res Synth Methods; 13(1): 48-67, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34427058

ABSTRACT

To present time-varying evidence, cumulative meta-analysis (CMA) updates the results of previous meta-analyses to incorporate new study results. We investigate the properties of CMA, suggest possible improvements and provide the first in-depth simulation study of the use of CMA and CUSUM methods for detection of temporal trends in random-effects meta-analysis. We use the standardized mean difference (SMD) as the effect measure of interest. For CMA, we compare the standard inverse-variance-weighted estimation of the overall effect using REML-based estimation of the between-study variance τ2 with sample-size-weighted estimation of the effect accompanied by Kulinskaya-Dollinger-Bjørkestøl (Biometrics. 2011;67:203-212) (KDB) estimation of τ2. For all methods, we consider Type 1 error under no shift and power under a shift in the mean in the random-effects model. To ameliorate the lack of power in CMA, we introduce two-stage CMA, in which τ2 is estimated at Stage 1 (from the first 5-10 studies), and further CMA monitors a target value of the effect, keeping the τ2 value fixed. We recommend this two-stage CMA combined with cumulative testing for a positive shift in τ2. In practice, use of CMA requires at least 15-20 studies.
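A plain one-sided CUSUM of the kind used for monitoring a shift in the mean can be sketched in a few lines: each incoming study contributes a standardized deviation z from the target effect, and an alarm is raised when the cumulative score crosses a decision threshold. The reference value kappa and threshold h below are illustrative tuning constants, not the paper's calibrated values.

```python
def cusum_alarm(z_scores, kappa=0.5, h=4.0):
    """One-sided upper CUSUM: returns the index of the first alarm, or None."""
    s = 0.0
    for i, z in enumerate(z_scores):
        s = max(0.0, s + z - kappa)   # standard CUSUM recursion
        if s > h:
            return i
    return None
```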


Subjects
Sample Size; Computer Simulation
8.
Stat Methods Med Res; 30(7): 1667-1690, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34110941

ABSTRACT

Contemporary statistical publications rely on simulation to evaluate the performance of new methods and to compare them with established methods. In the context of random-effects meta-analysis of log-odds-ratios, we investigate how choices in generating data affect such conclusions. The choices we study include the overall log-odds-ratio, the distribution of probabilities in the control arm, and the distribution of study-level sample sizes. We retain the customary normal distribution of study-level effects. To examine the impact of the components of the simulations, we assess the performance of the best available inverse-variance-weighted two-stage method, a two-stage method with constant sample-size-based weights, and two generalized linear mixed models. The results show no important differences between fixed and random sample sizes. In contrast, we found differences among the data-generation models in the estimation of the heterogeneity variance and the overall log-odds-ratio. This sensitivity to design poses challenges for the use of simulation in choosing methods of meta-analysis.
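One concrete data-generation scheme of the kind being compared: study-level effects normal around the overall log-odds-ratio, a control-arm probability and sample sizes either fixed or drawn from chosen distributions (fixed here for brevity), and binomial counts. All settings below are illustrative placeholders, not the paper's designs.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lor_meta(k, theta, tau2, p_ctrl=0.2, n=100):
    """Generate k studies for a random-effects meta-analysis of LOR."""
    theta_i = rng.normal(theta, np.sqrt(tau2), size=k)   # study-level LORs
    logit_c = np.log(p_ctrl / (1 - p_ctrl))
    p_treat = 1 / (1 + np.exp(-(logit_c + theta_i)))     # treatment-arm probs
    x_treat = rng.binomial(n, p_treat)                   # treatment events
    x_ctrl = rng.binomial(n, p_ctrl, size=k)             # control events
    return x_treat, x_ctrl
```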


Subjects
Models, Statistical; Computer Simulation; Linear Models; Odds Ratio; Sample Size
9.
Res Synth Methods; 12(6): 711-730, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33969638

ABSTRACT

The conventional Q statistic, using estimated inverse-variance (IV) weights, underlies a variety of problems in random-effects meta-analysis. In previous work on the standardized mean difference and the log-odds-ratio, we found superior performance with an estimator of the overall effect whose weights use only group-level sample sizes. The Q statistic with those weights has the form proposed by DerSimonian and Kacker. The distributions of this Q and of the Q with IV weights must generally be approximated. We investigate approximations for those distributions, as a basis for testing and estimating the between-study variance (τ2). A simulation study, with the mean difference as the effect measure, provides a framework for assessing the accuracy of the approximations, the level and power of the tests, and the bias in estimating τ2. Two examples illustrate estimation of τ2 and the overall mean difference. Use of Q with sample-size-based weights and its exact distribution (available for the mean difference and evaluated by Farebrother's algorithm) provides precise levels even for very small and unbalanced sample sizes. The corresponding estimator of τ2 is almost unbiased for 10 or more small studies. This performance compares favorably with the extremely liberal behavior of the standard tests of heterogeneity and the largely biased estimators based on inverse-variance weights.


Subjects
Algorithms; Models, Statistical; Computer Simulation; Odds Ratio; Sample Size
10.
Article in English | MEDLINE | ID: mdl-34031184

ABSTRACT

OBJECTIVE: To assess whether statins reduce mortality in the general population aged 60 years and above. DESIGN: Retrospective cohort study. SETTING: Primary care practices contributing to The Health Improvement Network database, England and Wales, 1990-2017. PARTICIPANTS: Patients who turned 60 between 1990 and 2000, with no previous cardiovascular disease or statin prescription, followed up until 2017. RESULTS: Current statin prescription was associated with a significant reduction in all-cause mortality from age 65 years onward, with greater reductions seen at older ages. The adjusted HRs of mortality associated with statin prescription at ages 65, 70, 75, 80 and 85 years were 0.76 (95% CI 0.71 to 0.81), 0.71 (95% CI 0.68 to 0.75), 0.68 (95% CI 0.65 to 0.72), 0.63 (95% CI 0.53 to 0.73) and 0.54 (95% CI 0.33 to 0.92), respectively. The adjusted HRs did not vary by sex or cardiac risk. CONCLUSIONS: Using regularly updated clinical information on sequential treatment decisions in older people, mortality predictions were updated every 6 months until age 85 years in a combined primary and secondary prevention population. The consistent mortality reduction associated with statins from age 65 years onward supports their use where clinically indicated at age 75 and older, where there has been particular uncertainty about the benefits.


Subjects
Hydroxymethylglutaryl-CoA Reductase Inhibitors; Aged; Aged, 80 and over; Humans; Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use; Longitudinal Studies; Middle Aged; Primary Health Care; Retrospective Studies; Secondary Prevention
11.
BMC Biol; 19(1): 33, 2021 Feb 17.
Article in English | MEDLINE | ID: mdl-33596922

ABSTRACT

BACKGROUND: Meta-analysis is often used to make generalisations across all available evidence at the global scale. But how can these global generalisations be used for evidence-based decision making at the local scale, if the global evidence is not perceived to be relevant to local decisions? We show how an interactive method of meta-analysis, dynamic meta-analysis, can be used to assess the local relevance of global evidence. RESULTS: We developed Metadataset (www.metadataset.com) as a proof-of-concept for dynamic meta-analysis. Using Metadataset, we show how evidence can be filtered and weighted, and results can be recalculated, using dynamic methods of subgroup analysis, meta-regression, and recalibration. With an example from agroecology, we show how dynamic meta-analysis could lead to different conclusions for different subsets of the global evidence. Dynamic meta-analysis could also lead to a rebalancing of power and responsibility in evidence synthesis, since evidence users would be able to make decisions that are typically made by systematic reviewers, such as decisions about which studies to include (e.g., critical appraisal) and how to handle missing or poorly reported data (e.g., sensitivity analysis). CONCLUSIONS: In this study, we show how dynamic meta-analysis can meet an important challenge in evidence-based decision making: using global evidence for local decisions. We suggest that dynamic meta-analysis can be used for subject-wide evidence synthesis in several scientific disciplines, including agroecology and conservation biology. Future studies should develop standardised classification systems for the metadata that are used to filter and weight the evidence. Future studies should also develop standardised software packages, so that researchers can efficiently publish dynamic versions of their meta-analyses and keep them up to date as living systematic reviews. Metadataset is a proof-of-concept for this type of software, and it is open source. Future studies should improve the user experience, scale the software architecture, agree on standards for data and metadata storage and processing, and develop protocols for responsible evidence use.


Subjects
Decision Making; Meta-Analysis as Topic; Research Design; Software; Humans
13.
BMC Med Res Methodol; 20(1): 263, 2020 Oct 22.
Article in English | MEDLINE | ID: mdl-33092521

ABSTRACT

BACKGROUND: For outcomes that studies report as means in the treatment and control groups, some medical applications and nearly half of meta-analyses in ecology express the effect as the ratio of means (RoM), also called the response ratio (RR), analyzed on the logarithmic scale as the log-response-ratio, LRR. METHODS: In random-effects meta-analysis of LRR, with normal and lognormal data, we studied the performance (measured by bias and coverage) of estimators of the between-study variance, τ2, in assessing heterogeneity of study-level effects, and also the performance of related estimators of the overall effect in the log scale, λ. We obtained additional empirical evidence from two examples. RESULTS: The results of our extensive simulations showed several challenges in using LRR as an effect measure. Point estimators of τ2 had considerable bias or were unreliable, and interval estimators of τ2 seldom had the intended 95% coverage for small to moderate-sized samples (n < 40). Results for estimating λ differed between lognormal and normal data. CONCLUSIONS: For lognormal data, we can recommend only SSW, a weighted average in which a study's weight is proportional to its effective sample size (when n ≥ 40), and its companion interval (when n ≥ 10). Normal data posed greater challenges. When the means were far enough from 0 (more than one standard deviation; 4 in our simulations), SSW was practically unbiased, and its companion interval was the only option.
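The SSW estimator has a simple form: a weighted mean of the study-level log-response-ratios with weights proportional to effective sample sizes, so the weights involve no estimated variances. The weight n_T·n_C/(n_T+n_C) below is an assumed definition of effective sample size for illustration; the paper's exact definition may differ.

```python
import numpy as np

def ssw_estimate(lrr, n_treat, n_ctrl):
    """Sample-size-weighted (SSW) overall log-response-ratio."""
    w = n_treat * n_ctrl / (n_treat + n_ctrl)   # assumed effective sample sizes
    return np.sum(w * lrr) / np.sum(w)
```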


Subjects
Sample Size; Humans
14.
PLoS One; 15(8): e0236701, 2020.
Article in English | MEDLINE | ID: mdl-32750091

ABSTRACT

BACKGROUND: Hip replacement and hip resurfacing are common surgical procedures, with an estimated risk of revision of 4% over a 10-year period. Approximately 58% of hip replacements will last 25 years. Some implants have higher revision rates, and early identification of poorly performing hip replacement implant brands and cup/head brand combinations is vital. AIMS: Development of a dynamic monitoring method for the revision rates of hip implants. METHODS: Data on outcomes following hip replacement surgery between 2004 and 2012 were obtained from the National Joint Registry (NJR) in the UK. A novel dynamic algorithm based on the CUmulative SUM (CUSUM) methodology, with adjustment for casemix and a random frailty for the operating unit, was developed and implemented to monitor the revision rates over time. The Benjamini-Hochberg FDR method was used to adjust for multiple testing of numerous hip replacement implant brands and cup/head combinations at each time point. RESULTS: Three poorly performing cup brands and two cup/head brand combinations were detected. Wright Medical UK Ltd Conserve Plus Resurfacing Cup (cup o), DePuy ASR Resurfacing Cup (cup e), and Endo Plus (UK) Limited EP-Fit Plus Polyethylene cup (cup g) showed stable multiple alarms over a period of a year or longer. The addition of a random frailty term did not change the list of underperforming components. The model with the added random effect was more conservative, showing fewer and more delayed alarms. CONCLUSIONS: Our new algorithm is an efficient method for early detection of poorly performing components in hip replacement surgery. It can also be used for similar tasks of dynamic quality monitoring in healthcare.
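The Benjamini-Hochberg step-up adjustment applied at each monitoring time point is standard; a self-contained sketch:

```python
import numpy as np

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR control: reject the k smallest p-values,
    where k is the largest i with p_(i) <= i/m * alpha."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```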


Subjects
Arthroplasty, Replacement, Hip/adverse effects; Hip Prosthesis/adverse effects; Prosthesis Design; Prosthesis Failure; Reoperation/statistics & numerical data; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Registries; United Kingdom
15.
Res Synth Methods; 11(3): 426-442, 2020 May.
Article in English | MEDLINE | ID: mdl-32112619

ABSTRACT

In random-effects meta-analysis the between-study variance (τ2) has a key role in assessing heterogeneity of study-level estimates and combining them to estimate an overall effect. For odds ratios the most common methods suffer from bias in estimating τ2 and the overall effect and produce confidence intervals with below-nominal coverage. An improved approximation to the moments of Cochran's Q statistic, suggested by Kulinskaya and Dollinger (KD), yields new point and interval estimators of τ2 and of the overall log-odds-ratio. Another, simpler approach (SSW) uses weights based only on study-level sample sizes to estimate the overall effect. In extensive simulations we compare our proposed estimators with established point and interval estimators of τ2 and point and interval estimators of the overall log-odds-ratio (including the Hartung-Knapp-Sidik-Jonkman interval). Additional simulations included three estimators based on generalized linear mixed models and the Mantel-Haenszel fixed-effect estimator. The results of our simulations show that no single point estimator of τ2 can be recommended exclusively, but Mandel-Paule and KD provide better choices for small and large numbers of studies, respectively. The KD estimator provides reliable coverage of τ2. Inverse-variance-weighted estimators of the overall effect are substantially biased, as are the Mantel-Haenszel odds ratio and the estimators from the generalized linear mixed models. The SSW estimator of the overall effect and a related confidence interval provide reliable point and interval estimation of the overall log-odds-ratio.


Subjects
Meta-Analysis as Topic; Pre-Eclampsia/drug therapy; Algorithms; Analysis of Variance; Computer Simulation; Data Interpretation, Statistical; Diuretics; Female; Humans; Linear Models; Models, Statistical; Odds Ratio; Pregnancy; Research Design
16.
Stat Med; 39(2): 171-191, 2020 Jan 30.
Article in English | MEDLINE | ID: mdl-31709582

ABSTRACT

Methods for random-effects meta-analysis require an estimate of the between-study variance, τ2. The performance of estimators of τ2 (measured by bias and coverage) affects their usefulness in assessing heterogeneity of study-level effects and also the performance of related estimators of the overall effect. However, as we show, the performance of the methods varies widely among effect measures. For the effect measures mean difference (MD) and standardized MD (SMD), we use improved effect-measure-specific approximations to the expected value of Q to introduce two new methods of point estimation of τ2 for MD (Welch-type (WT) and corrected DerSimonian-Laird) and one WT interval method. We also introduce one point estimator and one interval estimator for τ2 for SMD. Extensive simulations compare our methods with four point estimators of τ2 (the popular methods of DerSimonian-Laird, restricted maximum likelihood, and Mandel and Paule, and the less-familiar method of Jackson) and four interval estimators for τ2 (profile likelihood, Q-profile, Biggerstaff and Jackson, and Jackson). We also study related point and interval estimators of the overall effect, including an estimator whose weights use only study-level sample sizes. We provide measure-specific recommendations from our comprehensive simulation study and discuss an example.
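Among the interval estimators compared, the Q-profile method inverts the generalized Q statistic: the confidence limits for τ2 are the values t at which Q(t) = Σ (y_i − μ̂(t))²/(v_i + t) hits the χ² quantiles with k − 1 degrees of freedom. A sketch follows; the upper bracket 1e4 is an arbitrary search limit, not part of the method.

```python
import numpy as np
from scipy import stats, optimize

def q_generalized(t, y, v):
    """Generalized Q(t) with weights 1/(v_i + t); decreasing in t."""
    w = 1.0 / (v + t)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def q_profile_ci(y, v, level=0.95):
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    upper_q = stats.chi2.ppf((1 + level) / 2, k - 1)
    lower_q = stats.chi2.ppf((1 - level) / 2, k - 1)

    def solve(target):
        f = lambda t: q_generalized(t, y, v) - target
        if f(0.0) <= 0.0:                        # limit truncated at tau^2 = 0
            return 0.0
        return optimize.brentq(f, 0.0, 1e4)      # assumes the root lies below 1e4

    return solve(upper_q), solve(lower_q)        # (lower, upper) bounds for tau^2
```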


Subjects
Likelihood Functions; Meta-Analysis as Topic; Computer Simulation; Humans
17.
BMC Med Res Methodol; 19(1): 217, 2019 Nov 27.
Article in English | MEDLINE | ID: mdl-31775636

ABSTRACT

BACKGROUND: Continuous monitoring of surgical outcomes after joint replacement is needed to detect which brands' components have a higher-than-expected failure rate and therefore should no longer be recommended for use in surgical practice. We developed a monitoring method based on a cumulative sum (CUSUM) chart specifically for this application. METHODS: Our method entails the use of a competing-risks model with Weibull and Gompertz hazard functions, adjusted for observed covariates, to approximate the baseline time-to-revision and time-to-death distributions, respectively. Correlated shared frailty terms for the competing risks, corresponding to the operating unit, are also included in the model. A bootstrap-based boundary adjustment is then required for risk-adjusted CUSUM charts to guarantee a given probability of false alarm. We propose a method to evaluate the CUSUM scores and the adjusted boundary for a survival model with shared frailty terms. We also introduce a unit performance quality score based on the posterior frailty distribution. The method is illustrated using 2003-2012 hip replacement data from the UK National Joint Registry (NJR). RESULTS: We found that the best model included the shared frailty for revision but not for death. This means that the competing risks of revision and death are independent in the NJR data. Our method was superior to the standard NJR methodology. For one of the two monitored components, it produced alarms four years before the increased failure rate came to the attention of the UK regulatory authorities. The hazard ratios of revision across the units varied from 0.38 to 2.28. CONCLUSIONS: The earlier detection of the failure signal by our method, in comparison with the standard method used by the NJR, may be explained by proper risk adjustment and the ability to accommodate time-dependent hazards. Continuous monitoring of hip replacement outcomes should include risk adjustment at both the individual and the unit level.
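CUSUM scores for survival monitoring are typically log-likelihood-ratio increments between an out-of-control and an in-control hazard. Under a proportional alternative that inflates the hazard by a factor rho, a subject with cumulative hazard H and event indicator d contributes d·log(rho) − (rho − 1)·H. A sketch under that assumption, with frailty terms and competing risks omitted and rho illustrative:

```python
import numpy as np

def cusum_increments(H, d, rho=2.0):
    """Risk-adjusted survival CUSUM increments: log-likelihood ratio of an
    out-of-control hazard rho*h(t) against the in-control hazard h(t)."""
    return np.asarray(d) * np.log(rho) - (rho - 1.0) * np.asarray(H)
```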


Subjects
Arthroplasty, Replacement, Hip/mortality; Frailty/mortality; Risk Adjustment; Risk Assessment; Aged; Female; Humans; Male; Middle Aged; Probability; Registries; Reoperation; Survival Analysis; Treatment Outcome; United Kingdom
18.
Trends Ecol Evol; 34(10): 895-902, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31196571

ABSTRACT

A shift towards evidence-based conservation and environmental management over the last two decades has resulted in an increased use of systematic reviews and meta-analyses as tools to combine existing scientific evidence. However, to guide policy decisions in conservation and management, the conclusions of meta-analyses need to remain stable for at least some years. Alarmingly, numerous recent studies indicate that the magnitude, statistical significance, and even the sign of the effects reported in the literature can change over relatively short time periods. We argue that such rapid temporal changes in cumulative evidence represent a real threat to policy making in conservation and environmental management and call for systematic monitoring of temporal changes in evidence and exploration of their causes.


Subjects
Decision Making; Policy Making
19.
Res Synth Methods; 10(3): 398-419, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30854785

ABSTRACT

For meta-analysis of studies that report outcomes as binomial proportions, the most popular measure of effect is the odds ratio (OR), usually analyzed as log(OR). Many meta-analyses use the risk ratio (RR) and its logarithm because of its simpler interpretation. Although log(OR) and log(RR) are both unbounded, use of log(RR) must ensure that estimates are compatible with study-level event rates in the interval (0, 1). These complications pose a particular challenge for random-effects models, both in applications and in generating data for simulations. As background, we review the conventional random-effects model and then binomial generalized linear mixed models (GLMMs) with the logit link function, which do not have these complications. We then focus on log-binomial models and explore implications of using them; theoretical calculations and simulation show evidence of biases. The main competitors to the binomial GLMMs use the beta-binomial (BB) distribution, either in BB regression or by maximizing a BB likelihood; a simulation produces mixed results. Two examples and an examination of Cochrane meta-analyses that used RR suggest bias in the results from the conventional inverse-variance-weighted approach. Finally, we comment on other measures of effect that have range restrictions, including risk difference, and outline further research.
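The beta-binomial competitors maximize a likelihood built from the beta-binomial pmf C(n,x)·B(x+a, n−x+b)/B(a,b). A sketch in the mean/overdispersion parameterization (p = a/(a+b), rho = 1/(a+b+1)); the function and parameter names are illustrative, not the paper's notation.

```python
import numpy as np
from scipy.special import betaln, gammaln

def bb_loglik(x, n, p, rho):
    """Beta-binomial log-likelihood for event counts x out of n."""
    a = p * (1 - rho) / rho            # mean/overdispersion -> shape parameters
    b = (1 - p) * (1 - rho) / rho
    log_comb = gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
    return np.sum(log_comb + betaln(x + a, n - x + b) - betaln(a, b))
```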


Subjects
Antidepressive Agents, Tricyclic/adverse effects; Antidepressive Agents, Tricyclic/therapeutic use; Depression/drug therapy; Meta-Analysis as Topic; Risk Assessment/methods; Risk; Algorithms; Computer Simulation; Diuretics/therapeutic use; Female; Humans; Likelihood Functions; Linear Models; Odds Ratio; Pre-Eclampsia/drug therapy; Pregnancy; Regression Analysis
20.
J Hypertens; 37(4): 837-843, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30817466

ABSTRACT

OBJECTIVE: To compare outcomes of intensive treatment of SBP to less than 120 mmHg versus standard treatment to less than 140 mmHg in the US clinical Systolic Blood Pressure Intervention Trial (SPRINT) with those of similar hypertensive patients managed in routine primary care in the United Kingdom. METHODS: Hypertensive patients aged 50-90 without diabetes or chronic kidney disease (CKD) were selected in SPRINT and in The Health Improvement Network (THIN) database. Patients were enrolled in 2010-2013 and followed up to 2015 (SPRINT N = 4112; THIN N = 8631). Cox proportional hazards regressions were fitted to estimate the hazard of all-cause mortality or CKD (the main adverse effect) associated with intensive treatment, adjusted for sex, age, ethnicity, smoking, blood pressure, cardiovascular disease, aspirin, statin, number of antihypertensive drugs at baseline, change in number of antihypertensive drugs at trial entry, and clinical site. RESULTS: Almost half of the patients received intensive treatment (43-45%). In SPRINT, intensive treatment was associated with a decreased hazard of mortality of 0.63 (0.43-0.92), whereas in THIN it was associated with an increased hazard of 1.66 (1.28-2.15). In THIN, this effect was time-dependent. Intensive treatment was associated with an increased hazard of CKD of 2.67 (1.74-4.11) in SPRINT and 1.35 (1.08-1.70) in THIN. In THIN, this effect differed by the number of antihypertensive drugs prescribed at baseline. CONCLUSION: It appears that intensive treatment of SBP may be harmful in the general population, where all have access to routine healthcare as with the UK National Health Service, but could be beneficial in high-risk patients who are closely monitored.


Subjects
Antihypertensive Agents/administration & dosage; Blood Pressure; Hypertension/drug therapy; Renal Insufficiency, Chronic/etiology; Aged; Aged, 80 and over; Blood Pressure Determination; Female; Humans; Hydroxymethylglutaryl-CoA Reductase Inhibitors; Hypertension/mortality; Male; Middle Aged; Risk Factors; Treatment Outcome; United Kingdom/epidemiology