1.
BMJ Open; 7(9): e017462, 2017 Sep 15.
Article in English | MEDLINE | ID: mdl-28918414

ABSTRACT

INTRODUCTION: Systematic reviews evaluating the impact of interventions to improve the quality of peer review for biomedical publications have highlighted that such interventions are limited and have little impact. This study aims to compare the accuracy of early career peer reviewers who use an innovative online tool with that of the usual peer review process in evaluating the completeness of reporting and switched primary outcomes in completed reports.

METHODS AND ANALYSIS: This is a cross-sectional study of individual two-arm parallel-group randomised controlled trials (RCTs) published in the BioMed Central series medical journals, BMJ, BMJ Open and Annals of Emergency Medicine and indexed with the publication type 'Randomised Controlled Trial'. First, we will develop an online tool and training module based on (a) the Consolidated Standards of Reporting Trials (CONSORT) 2010 checklist and the Explanation and Elaboration document, dedicated to junior peer reviewers assessing the completeness of reporting of key items, and (b) the Centre for Evidence-Based Medicine Outcome Monitoring Project process used to identify switched outcomes in completed reports of the primary results of RCTs as initially submitted. Then, we will compare the performance of early career peer reviewers who use the online tool with the usual peer review process in identifying inadequate reporting and switched outcomes in completed reports of RCTs at initial journal submission. The primary outcome will be the mean number of items accurately classified per manuscript. The secondary outcomes will be the mean number of CONSORT items accurately classified per manuscript, and the sensitivity, specificity and likelihood ratio for detecting an item as adequately reported and for identifying a switch in outcomes. We aim to include 120 RCTs and 120 early career peer reviewers.

ETHICS AND DISSEMINATION: The research protocol was approved by the ethics committee of the INSERM Institutional Review Board (21 January 2016). The study is based on voluntary participation and written informed consent.

TRIAL REGISTRATION NUMBER: NCT03119376.
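The secondary outcome metrics named in this protocol (sensitivity, specificity, likelihood ratios) follow standard diagnostic-accuracy definitions. A minimal sketch of how they would be computed from a 2x2 classification table; the counts below are hypothetical, not from the study:

```python
# Sensitivity, specificity and likelihood ratios from a 2x2 table of
# reviewer classifications against a reference standard.
# tp/fp/fn/tn counts here are invented for illustration.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
    return sensitivity, specificity, lr_pos, lr_neg

sens, spec, lr_pos, lr_neg = diagnostic_metrics(tp=80, fp=10, fn=20, tn=90)
```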


Subject(s)
Medical Writing/standards , Peer Review, Research/standards , Randomized Controlled Trials as Topic , Research Report/standards , Checklist , Cross-Sectional Studies , Evidence-Based Medicine , Humans , Publications/standards , Research Design
3.
Res Integr Peer Rev; 2: 20, 2017.
Article in English | MEDLINE | ID: mdl-29451534

ABSTRACT

BACKGROUND: There is evidence that direct journal endorsement of reporting guidelines can lead to important improvements in the quality and reliability of published research. However, over the last 20 years there has been a proliferation of reporting guidelines for different study designs, making it impractical for a journal to explicitly endorse them all. The objective of this study was to investigate whether a decision-tree tool made available during the submission process facilitates author identification of the relevant reporting guideline.

METHODS: This was a prospective 14-week before-after study across four speciality medical research journals. During the submission process, authors were prompted to follow the relevant reporting guideline from the EQUATOR Network and asked to confirm that they had followed it ('before'). After 7 weeks, this prompt was updated to include a direct link to the decision-tree tool and an additional prompt for those authors who stated that 'no guidelines were applicable' ('after'). For each submitted article, the authors' response, the guideline they followed (if any) and the reporting guideline they should have followed (including none relevant) were recorded.

RESULTS: Overall, 590 manuscripts were included in this analysis: 300 in the before cohort and 290 in the after cohort. A relevant reporting guideline existed for 75% of manuscripts in each group; STROBE was the most commonly applicable guideline, relevant for 35% (n = 106) and 37% (n = 106) of manuscripts, respectively. Use of the tool was associated with an 8.4% improvement in the number of authors correctly identifying the relevant reporting guideline for their study (p < 0.0001), a 14% reduction in the number of authors incorrectly stating that there were no relevant reporting guidelines (p < 0.0001), and a 1.7% reduction in authors choosing a guideline (p = 0.10). However, the 'after' cohort also saw a significant increase in the number of authors stating that there were relevant reporting guidelines for their study but not specifying which (34 vs 29%; p = 0.04).

CONCLUSION: This study suggests that use of a decision-tree tool during manuscript submission is associated with improved author identification of the relevant reporting guidelines for their study type; however, the majority of authors still failed to correctly identify the relevant guidelines.
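The before/after comparisons of proportions reported in this abstract can be tested with a standard two-proportion z-test. A hedged sketch of that calculation; the abstract does not state which test the authors used, and the success counts below are hypothetical:

```python
# Two-proportion z-test comparing a proportion between two cohorts
# (e.g. authors correctly identifying the relevant guideline, before vs after).
# Counts are invented for illustration, not taken from the study.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(success_a=120, n_a=300, success_b=140, n_b=290)
```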

4.
PeerJ; 4: e1887, 2016.
Article in English | MEDLINE | ID: mdl-27069817

ABSTRACT

Background. The Journal Citation Reports journal impact factors (JIFs) are widely used to rank and evaluate journals, standing as a proxy for the relative importance of a journal within its field. However, numerous criticisms have been made of the use of JIFs to evaluate importance. This problem is exacerbated when the use of JIFs is extended to evaluate not only journals but also the papers therein. The purpose of this study was therefore to investigate the relationship between the number of citations and the journal IF for identical articles published simultaneously in multiple journals.

Methods. Eligible articles were consensus research reporting statements listed on the EQUATOR Network website that were published simultaneously in three or more journals. For each reporting statement, the correlation between the citation count for each article and the median journal JIF over the publication period, and between the citation count and the number of article accesses, was calculated.

Results. Nine research reporting statements were included in this analysis, representing 85 articles published across 58 journals in biomedicine. The number of citations was strongly correlated with the JIF for six of the nine reporting guidelines, with moderate correlation for the remaining three (median r = 0.66, 95% CI [0.45, 0.90]). There was also a strong positive correlation between the number of citations and the number of article accesses (median r = 0.71, 95% CI [0.5, 0.8]), although the number of data points for this analysis was limited. When adjusted for the individual reporting guidelines, each logarithm unit of JIF predicted a median increase of 0.8 logarithm units of citation counts (95% CI [-0.4, 5.2]), and each logarithm unit of article accesses predicted a median increase of 0.1 logarithm units of citation counts (95% CI [-0.9, 1.4]). This model explained 26% of the variance in citations (median adjusted r² = 0.26, range 0.18-1.0).

Conclusion. The impact factor of the journal in which a reporting statement was published was shown to influence the number of citations that statement gathered over time. Similarly, the number of article accesses also influenced the number of citations, although to a lesser extent than the impact factor. This demonstrates that citation counts are not purely a reflection of scientific merit and that the impact factor is, in fact, auto-correlated.
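The "each logarithm unit of JIF predicts X logarithm units of citations" model described above is an ordinary least-squares fit on log-transformed variables. A minimal illustrative sketch, using invented data rather than the study's:

```python
# Log-log regression of citation counts on journal impact factor.
# A positive slope means higher-JIF journals predict more citations
# for the same article. All data below are hypothetical.
from math import log

jif = [2.5, 4.0, 7.9, 17.4, 30.0]        # hypothetical impact factors
citations = [40, 90, 300, 1200, 2500]    # hypothetical citation counts

x = [log(v) for v in jif]
y = [log(v) for v in citations]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

# OLS slope and intercept: y = intercept + slope * x (in log space)
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx
```

With real data, one fit per reporting guideline would yield the distribution of slopes whose median (0.8) the abstract reports.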

8.
Trials; 16: 261, 2015 Jun 05.
Article in English | MEDLINE | ID: mdl-26044814

ABSTRACT

Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence-free. Trial Forge ( www.trialforge.org ) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency.

This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance the initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants' views on the processes in the life of a randomised trial that should be covered by Trial Forge.

There was general support at the workshop for the Trial Forge approach to increasing the evidence base for randomised trial decision making and improving trial efficiency. Agreed key processes included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close down. Linking to existing initiatives where possible was considered crucial. Trial Forge will not be a guideline or a checklist but a 'go to' website for research on randomised trial methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials.

Some of the resources invested in randomised trials are wasted because of the limited evidence upon which to base many aspects of the design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.


Subject(s)
Efficiency, Organizational , Evidence-Based Medicine/methods , Randomized Controlled Trials as Topic/methods , Research Design , Access to Information , Cooperative Behavior , Efficiency , Evidence-Based Medicine/organization & administration , Humans , Information Dissemination , Interdisciplinary Communication , International Cooperation
9.
Trials; 16: 151, 2015 Apr 11.
Article in English | MEDLINE | ID: mdl-25873052

ABSTRACT

The limitations of the traditional research paper are well known and widely discussed. However, rather than seeking solutions to the problems created by this model of publication, it is time to do away with a print-era anachronism and design a new model of publication with modern technology embedded at its heart. Instead of the current system of multiple publications across multiple journals, publication could move towards a single, evolving document that begins with trial registration and then extends to include the full protocol and results as they become available, underpinned by the raw clinical data and all code used to obtain the results. This model would lead to research being evaluated prospectively, based on its hypothesis and methodology as stated in the study protocol, and would move away from treating serendipitous results as synonymous with quality, while also giving readers the opportunity to reliably evaluate bias and selective reporting in the published literature.


Subject(s)
Biomedical Research , Information Dissemination , Periodicals as Topic , Writing , Access to Information , Authorship , Biomedical Research/standards , Guidelines as Topic , Humans , Peer Review, Research , Periodicals as Topic/standards , Publication Bias , Quality Control
11.
J Negat Results Biomed; 13: 2, 2014 Jan 24.
Article in English | MEDLINE | ID: mdl-24460678