Results 1 - 2 of 2
1.
PLoS One ; 18(1): e0274429, 2023.
Article in English | MEDLINE | ID: mdl-36701303

ABSTRACT

As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce the repliCATS (Collaborative Assessments for Trustworthy Science) process, a new method for eliciting expert predictions about the replicability of research. The process is a structured expert elicitation approach, based on a modified Delphi technique, applied to the evaluation of research claims in the social and behavioural sciences. The utility of processes that predict replicability lies in their capacity to test scientific claims without the costs of full replication. Experimental data support the validity of this process: a validation study produced a classification accuracy of 84% and an area under the curve (AUC) of 0.94, meeting or exceeding the accuracy of other techniques used to predict replicability. The repliCATS process provides other benefits. It is highly scalable: it can be deployed both for rapid assessment of small numbers of claims and, through an online elicitation platform, for assessment of high volumes of claims over an extended period, having been used to assess 3,000 research claims over an 18-month period. It can be implemented in a range of ways, and we describe one such implementation. An important advantage of the repliCATS process is that it collects qualitative data with the potential to provide insight into the limits of generalizability of scientific claims. Its primary limitation is its reliance on human-derived predictions, with consequent costs in terms of participant fatigue, although careful design can minimise these costs. The repliCATS process has potential applications in alternative peer review and in the allocation of effort for replication studies.


Subject(s)
Behavioral Sciences , Data Accuracy , Humans , Reproducibility of Results , Costs and Cost Analysis , Peer Review
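
The validation figures quoted in this abstract (84% classification accuracy, area under the curve of 0.94) are standard classification metrics, and the R sketch below shows how they would be computed for a set of elicited predictions. It is illustrative only, not the repliCATS analysis code: the outcome and probability vectors are hypothetical, the 0.5 decision threshold is an assumption, and pROC is just one common package for AUC calculation.

# Illustrative only: hypothetical replication outcomes and elicited probabilities,
# showing how classification accuracy and AUC are computed.
library(pROC)

replicated <- c(1, 1, 0, 1, 0, 0, 1, 0)   # 1 = claim replicated, 0 = did not (hypothetical)
elicited_p <- c(0.90, 0.80, 0.30, 0.70, 0.40, 0.20, 0.60, 0.55)  # elicited probabilities (hypothetical)

accuracy  <- mean((elicited_p >= 0.5) == replicated)  # classify at an assumed 0.5 threshold
auc_value <- auc(roc(replicated, elicited_p))         # area under the ROC curve

print(accuracy)
print(auc_value)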
2.
Res Synth Methods ; 13(4): 533-545, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35472127

ABSTRACT

Systematic searching aims to find all possibly relevant research from multiple sources, providing the basis for an unbiased and comprehensive evidence base. Along with bibliographic databases, systematic reviewers use a variety of additional methods to minimise procedural bias. Citation chasing exploits connections between research articles to identify relevant records for a review by making use of explicit mentions of one article within another. Citation chasing is a popular supplementary search method because it helps to build on the work of primary research and review authors. It does so by identifying potentially relevant studies that might otherwise not be retrieved by other search methods, for example because they did not use the review authors' search terms in the specified combinations in their titles, abstracts, or keywords. Here, we provide a brief overview of citation chasing as a method for systematic reviews. Furthermore, given the challenges and high resource requirements associated with citation chasing, its limited application in otherwise rigorous systematic reviews, and the potential benefit of identifying terminologically disconnected but semantically linked research studies, we have developed, and describe here, a free and open-source tool that allows for rapid forward and backward citation chasing. We introduce citationchaser, an R package and Shiny app for conducting forward and backward citation chasing from a starting set of articles. We describe the sources of data, the backend code functionality, and the user interface provided in the Shiny app.


Subject(s)
Information Storage and Retrieval , Research Design , Databases, Bibliographic , Systematic Reviews as Topic
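
To make the forward/backward distinction concrete, the R sketch below shows the core logic of citation chasing against the public OpenAlex API. It is a conceptual sketch, not the citationchaser package or its interface: the helper names, the example identifier, and the use of OpenAlex (rather than citationchaser's own data source) are assumptions for illustration.

# Conceptual sketch only: backward chasing collects the works an article cites;
# forward chasing collects the works that cite it.
library(httr)
library(jsonlite)

get_work <- function(openalex_id) {
  # Retrieve a single work record from the OpenAlex API
  resp <- GET(paste0("https://api.openalex.org/works/", openalex_id))
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))
}

backward_chase <- function(openalex_id) {
  # Backward chasing: references listed in the starting article
  get_work(openalex_id)$referenced_works
}

forward_chase <- function(openalex_id) {
  # Forward chasing: articles that cite the starting article
  resp <- GET("https://api.openalex.org/works",
              query = list(filter = paste0("cites:", openalex_id), `per-page` = 200))
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))$results$id
}

# Hypothetical usage with an OpenAlex work identifier:
# backward_chase("W2741809807")
# forward_chase("W2741809807")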