Results 1 - 4 of 4
1.
JAMA Netw Open ; 7(2): e240649, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38421646

ABSTRACT

Importance: Systematic reviews of medical imaging diagnostic test accuracy (DTA) studies are affected by between-study heterogeneity due to a range of factors. Failure to appropriately assess the extent and causes of heterogeneity compromises the interpretability of systematic review findings. Objective: To assess how heterogeneity has been examined in medical imaging DTA studies. Evidence Review: The PubMed database was searched for systematic reviews of medical imaging DTA studies that performed a meta-analysis. The search was limited to the 40 journals with the highest impact factor in the radiology, nuclear medicine, and medical imaging category of the InCites Journal Citation Reports of 2021, to reach a sample size of 200 to 300 included studies. Descriptive analysis was performed to characterize the imaging modality, target condition, type of meta-analysis model used, strategies for evaluating heterogeneity, and sources of heterogeneity identified. Multivariable logistic regression was performed to assess whether any factors were associated with at least 1 source of heterogeneity being identified in the included meta-analyses. Methodological quality evaluation was not performed. Data analysis occurred from October to December 2022. Findings: A total of 242 meta-analyses involving a median (range) of 987 (119-441 510) patients across a diverse range of disease categories and imaging modalities were included. The extent of heterogeneity was adequately described (ie, whether it was absent, low, moderate, or high) in 220 studies (91%) and was most commonly assessed using the I² statistic (185 studies [76%]) and forest plots (181 studies [75%]). Heterogeneity was rated as moderate to high in 191 studies (79%). Of all included meta-analyses, 122 (50%) performed subgroup analysis and 87 (36%) performed meta-regression. Of the 242 studies assessed, 189 (78%) included 10 or more primary studies; of these 189 studies, 60 (32%) performed neither meta-regression nor subgroup analysis. Reasons for being unable to investigate sources of heterogeneity included inadequate reporting of primary study characteristics and a low number of included primary studies. Use of meta-regression was associated with identification of at least 1 source of variability (odds ratio, 1.90; 95% CI, 1.11-3.23; P = .02). Conclusions and Relevance: In this systematic review of heterogeneity assessment in medical imaging DTA meta-analyses, most meta-analyses were affected by a moderate to high level of heterogeneity, presenting interpretive challenges. These findings suggest that, despite the development and availability of more rigorous statistical models, the assessment of heterogeneity was often incomplete, inconsistent, or methodologically questionable, which lessened the interpretability of the analyses performed. Comprehensive heterogeneity assessment should be addressed at the author level, by improving familiarity with appropriate statistical methodology for assessing heterogeneity and by involving biostatisticians and epidemiologists in study design, and at the editorial level, by mandating adherence to methodologic standards in primary DTA studies and DTA meta-analyses.


Subject(s)
Data Analysis , Diagnostic Imaging , Humans , Systematic Reviews as Topic , Databases, Factual , Diagnostic Tests, Routine
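
For readers unfamiliar with the I² statistic that the review above reports as the most commonly used heterogeneity measure, the following is a minimal sketch of how it can be derived from Cochran's Q over study-level effect estimates. All numeric inputs are hypothetical and are not taken from any of the included meta-analyses.

```python
import numpy as np

def i_squared(effects, variances):
    """Cochran's Q and the I^2 statistic for study-level effect
    estimates (e.g. log diagnostic odds ratios) and their variances."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)        # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log-DOR estimates and variances from 5 primary studies
q, i2 = i_squared([2.1, 1.4, 2.8, 1.9, 3.2], [0.10, 0.25, 0.15, 0.30, 0.20])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

With these invented inputs the statistic lands around 59%, which the usual rule of thumb (roughly 25%, 50%, and 75% for low, moderate, and high) would call moderate heterogeneity; as the review notes, forest plots, subgroup analysis, and meta-regression are then needed to work out where that variability comes from.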
2.
J Clin Neurosci ; 115: 89-94, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37541083

ABSTRACT

BACKGROUND: Diagnostic neuroimaging plays an essential role in guiding clinical decision-making in the management of patients with cerebral aneurysms. Imaging technologies for investigating cerebral aneurysms constantly evolve, and clinicians rely on the published literature to remain up to date. Reporting guidelines have been developed to standardise and strengthen the reporting of clinical evidence. It is therefore essential that radiological diagnostic accuracy studies adhere to such guidelines to ensure completeness of reporting. Incomplete reporting hampers the reader's ability to detect bias, determine the generalisability of study results, or replicate investigation parameters, detracting from the credibility and reliability of studies. OBJECTIVE: The purpose of this systematic review was to evaluate adherence to the Standards for Reporting of Diagnostic Accuracy Studies (STARD) 2015 reporting guideline amongst imaging diagnostic accuracy studies for cerebral aneurysms. METHODS: A systematic search for cerebral aneurysm imaging diagnostic accuracy studies was conducted. The included articles were cross-examined against the STARD 2015 checklist, and compliance with each item was recorded. RESULTS: The search yielded 66 articles. The mean number of STARD items reported was 24.2 ± 2.7 (71.2% ± 7.9%), with a range of 19 to 30 out of a maximum of 34 items. CONCLUSION: Taken together, these results indicate that adherence to the STARD 2015 guideline in cerebral aneurysm imaging diagnostic accuracy studies was moderate. Measures to improve compliance include mandating STARD 2015 adherence in journals' instructions to authors.


Subject(s)
Intracranial Aneurysm , Humans , Intracranial Aneurysm/diagnostic imaging , Quality Control , Reproducibility of Results , Guideline Adherence , Neuroimaging , Research Design
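
As a side note on the arithmetic behind figures such as "24.2 ± 2.7 (71.2% ± 7.9%)", here is a small sketch of an adherence summary over the 34-item STARD 2015 checklist. The per-article item counts are invented for illustration (chosen only to span the reported 19-30 range) and are not the review's data.

```python
import statistics

STARD_TOTAL_ITEMS = 34  # length of the STARD 2015 checklist

def adherence_summary(items_reported):
    """Mean +/- SD of STARD items reported per article, also expressed
    as a percentage of the 34-item checklist."""
    mean = statistics.mean(items_reported)
    sd = statistics.stdev(items_reported)
    return mean, sd, 100 * mean / STARD_TOTAL_ITEMS, 100 * sd / STARD_TOTAL_ITEMS

# Invented per-article item counts, spanning the 19-30 range reported above
mean, sd, pct, pct_sd = adherence_summary([19, 30, 26, 22, 24, 27])
print(f"{mean:.1f} +/- {sd:.1f} items ({pct:.1f}% +/- {pct_sd:.1f}%)")
```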
3.
JAMA Netw Open ; 5(8): e2228776, 2022 08 01.
Article in English | MEDLINE | ID: mdl-36006641

ABSTRACT

Importance: Small study effects refer to the phenomenon whereby studies with smaller sample sizes tend to report larger and more favorable effect estimates than studies with larger sample sizes. Objective: To evaluate the presence and extent of small study effects in diagnostic imaging accuracy meta-analyses. Data Sources: A search was conducted in the PubMed database for diagnostic imaging accuracy meta-analyses published between 2010 and 2019. Study Selection: Meta-analyses with 10 or more studies of medical imaging diagnostic accuracy, assessing a single imaging modality, and providing 2 × 2 contingency data were included. Studies were excluded if they did not assess the diagnostic accuracy of medical imaging techniques, compared 2 or more imaging modalities or different methods of 1 imaging modality, were cost analyses, used predictive or prognostic tests, did not provide individual patient data, or were network meta-analyses. Data Extraction and Synthesis: Data extraction was performed in accordance with the PRISMA guidelines. Main Outcomes and Measures: The diagnostic odds ratio (DOR) was calculated for each primary study using 2 × 2 contingency data. Regression analysis was used to examine the association between effect size estimate and precision across meta-analyses. Results: A total of 31 meta-analyses involving 668 primary studies and 80 206 patients were included. A fixed-effects regression of the natural log of the DOR against the SE of the natural log of the DOR produced a coefficient of 2.19 (95% CI, 1.49-2.90; P < .001), with computed tomography as the reference modality. An interaction test showed that the association between the natural log of the DOR and its SE did not depend on imaging modality (Wald statistic, P = .50). Taken together, the analysis found an inverse association between effect size estimate and precision that was independent of imaging modality. Of 26 meta-analyses that formally assessed publication bias using funnel plots and statistical tests for funnel plot asymmetry, 21 found no evidence of such bias. Conclusions and Relevance: This meta-analysis found evidence that small study effects are widespread in the diagnostic imaging accuracy literature. One likely contributor to the observed effects is publication bias, which can undermine the results of many meta-analyses. The conventional methods for detecting funnel plot asymmetry used by the included studies appeared to underestimate the presence of small study effects. Further studies are required to elucidate the various factors that contribute to small study effects.


Subject(s)
Tomography, X-Ray Computed , Bias , Humans , Odds Ratio , Publication Bias , Sample Size
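
To make the outcome measure concrete, the sketch below computes the diagnostic odds ratio (and the SE of its natural log) from a 2 × 2 contingency table and then regresses ln(DOR) on its SE, the same general idea as the regression described above. The 2 × 2 tables, the 0.5 continuity correction, and the simple unweighted least-squares fit are illustrative assumptions, not the authors' exact model (which also adjusted for imaging modality).

```python
import numpy as np

def log_dor_and_se(tp, fp, fn, tn, cc=0.5):
    """Log diagnostic odds ratio and its standard error from a 2x2 table;
    a continuity correction (cc) is added when any cell is zero."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + cc for x in (tp, fp, fn, tn))
    log_dor = np.log((tp * tn) / (fp * fn))
    se = np.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    return log_dor, se

# Invented 2x2 tables (TP, FP, FN, TN): the smaller studies report larger DORs
tables = [(60, 20, 25, 150), (30, 8, 10, 70), (15, 2, 4, 30), (10, 1, 2, 20)]
y, x = zip(*(log_dor_and_se(*t) for t in tables))

# Regress ln(DOR) on its SE: a slope well above zero (less precise studies
# reporting larger DORs) is the signature of small study effects.
slope, intercept = np.polyfit(x, y, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```

Because the invented tables give the least precise studies the largest DORs, the fitted slope comes out clearly positive, mimicking the pattern the meta-analysis reports.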
4.
Neurosurgery ; 90(3): 262-269, 2022 03 01.
Article in English | MEDLINE | ID: mdl-35849494

ABSTRACT

BACKGROUND: Statistically significant positive results are more likely to be published than negative or nonsignificant outcomes. This phenomenon, termed publication bias, can skew the interpretation of meta-analyses. The widespread presence of publication bias in the biomedical literature has led to the development of various statistical approaches to assess and account for it, such as visual inspection of funnel plots and the Begg and Egger tests. OBJECTIVE: To determine how well publication bias is assessed and accounted for in meta-analyses of the neurosurgical literature. METHODS: A systematic search for meta-analyses from the top neurosurgery journals was conducted. Data relevant to the presence of, assessment of, and adjustment for publication bias were extracted. RESULTS: The search yielded 190 articles. Most of the articles (n = 108, 56.8%) assessed for publication bias; of these, 40 (37.0%) found evidence of publication bias, whereas 61 (56.5%) did not. Of the 40 that found evidence of bias, only 11 (27.5%) corrected for it using the trim-and-fill method, whereas 29 (72.5%) made no correction. Thus, 111 meta-analyses (58.4%) either did not assess for publication bias or, where bias was found, did not adjust for it. CONCLUSION: Taken together, these results indicate that publication bias remains largely unaccounted for in neurosurgical meta-analyses.


Subject(s)
Neurosurgery , Publication Bias , Humans , Meta-Analysis as Topic , Neurosurgical Procedures , Research Design
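
Since the abstract above lists the Egger test among the standard checks, here is a minimal, self-contained sketch of Egger's regression test for funnel plot asymmetry. The effect estimates and standard errors are made up for illustration, and a real analysis would normally rely on an established meta-analysis package rather than this hand-rolled ordinary least-squares fit.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test for funnel plot asymmetry: regress the
    standardized effect (effect / SE) on precision (1 / SE) and test
    whether the intercept differs from zero."""
    y = np.asarray(effects, dtype=float) / np.asarray(ses, dtype=float)
    x = 1.0 / np.asarray(ses, dtype=float)
    X = np.column_stack([np.ones_like(x), x])     # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    df = len(y) - 2
    resid = y - X @ beta
    cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0])         # t statistic for the intercept
    p = 2 * stats.t.sf(abs(t_stat), df)
    return beta[0], p

# Invented log odds ratios and standard errors from one meta-analysis
intercept, p = egger_test([0.9, 0.7, 1.3, 0.4, 1.6, 0.5],
                          [0.40, 0.30, 0.55, 0.20, 0.70, 0.25])
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```

The trim-and-fill adjustment mentioned in the abstract goes a step further: once asymmetry is detected, it imputes the presumed missing studies and recomputes the pooled estimate, and it is implemented in common meta-analysis software.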