Results 1 - 7 of 7
1.
Res Synth Methods ; 13(2): 214-228, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34558198

ABSTRACT

Randomised trials are often funded by commercial companies, and methodological studies support a widely held suspicion that commercial funding may influence trial results and conclusions. However, these studies often carry a risk of confounding and reporting bias. The risk of confounding is markedly reduced in meta-epidemiological studies that compare fairly similar trials within meta-analyses, and the risk of reporting bias is reduced with access to unpublished data. Therefore, we initiated the COMmercial Funding In Trials (COMFIT) study to investigate the impact of commercial funding on estimated intervention effects in randomised clinical trials. The study is based on a consortium of researchers who agreed to share datasets from their meta-epidemiological studies, including information on the meta-analyses and trials those studies cover. Here, we describe the COMFIT study, its database, and descriptive results. We included meta-epidemiological studies with published or unpublished data on trial funding source and results or conclusions. We searched five bibliographic databases and other sources. We invited authors of eligible meta-epidemiological studies to join the COMFIT consortium and to share data. The final construction of the COMFIT database involves checking data quality, identifying trial references, harmonising variable categories, and removing non-informative meta-analyses as well as correlated meta-analyses and trial results. We included data from 17 meta-epidemiological studies, covering 728 meta-analyses and 6841 trials. Seven studies (405 meta-analyses, 3272 trials) had not published analyses of the impact of commercial funding but shared unpublished data on funding source. On this basis, we initiated the construction of a combined database. Once completed, the database will enable comprehensive analyses of the impact of commercial funding on trial results and conclusions, with increased statistical power and a markedly reduced risk of confounding and reporting bias.
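
Harmonising variable categories across the shared datasets is one of the database construction steps mentioned above. Below is a minimal sketch of what that step might look like with pandas; the column names and category mapping are hypothetical and are not taken from the COMFIT protocol.

```python
# Hypothetical sketch of harmonising trial funding-source labels across
# shared meta-epidemiological datasets; column names and categories are
# illustrative, not those used by the COMFIT database.
import pandas as pd

# Each consortium dataset may label funding differently.
FUNDING_MAP = {
    "industry": "commercial",
    "pharma": "commercial",
    "commercial": "commercial",
    "public": "non-commercial",
    "government": "non-commercial",
    "non-profit": "non-commercial",
    "mixed": "mixed",
    "not reported": "unclear",
}

def harmonise(datasets: list[pd.DataFrame]) -> pd.DataFrame:
    """Concatenate trial-level datasets and map funding labels onto a
    common categorisation, dropping trials with unmappable labels."""
    combined = pd.concat(datasets, ignore_index=True)
    combined["funding_raw"] = combined["funding_source"].str.strip().str.lower()
    combined["funding_harmonised"] = combined["funding_raw"].map(FUNDING_MAP)
    return combined.dropna(subset=["funding_harmonised"])
```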


Subject(s)
Epidemiologic Studies , Bias
2.
J Affect Disord Rep ; 6: 100271, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34841385

ABSTRACT

BACKGROUND: The COVID-19 pandemic has had an impact on the mental health of healthcare and social care workers, and its potential effect on suicidal thoughts and behaviour is of particular concern. METHODS: This systematic review identified and appraised the published literature reporting on the impact of COVID-19 on suicidal thoughts and behaviour and self-harm amongst healthcare and social care workers worldwide up to May 31, 2021. RESULTS: Out of 37 potentially relevant papers identified, ten met our eligibility criteria. Our review highlights that the impact of COVID-19 has varied as a function of setting, working relationships, occupational roles, and psychiatric comorbidities. LIMITATIONS: There have been no completed cohort studies comparing pre- and post-pandemic suicidal thoughts and behaviours. Some relevant papers may have been missed by the search. CONCLUSIONS: The current quality of evidence on suicidal behaviour in healthcare workers is poor, and evidence is entirely absent for those working in social care. The clinical relevance of this work is to draw attention to the evidence that does exist and to encourage, in practice, proactive approaches to interventions for improving the mental health of healthcare and social care workers.

3.
F1000Res ; 10: 401, 2021.
Article in English | MEDLINE | ID: mdl-34408850

ABSTRACT

Background: The reliable and usable (semi)automation of data extraction can support the field of systematic review by reducing the workload required to gather information about the conduct and results of the included studies. This living systematic review examines published approaches for data extraction from reports of clinical studies. Methods: We systematically and continually search PubMed, ACL Anthology, arXiv, OpenAlex via EPPI-Reviewer, and the dblp computer science bibliography. Full-text screening and data extraction are conducted within an open-source living systematic review application created for the purpose of this review. This living review update includes publications up to December 2022 and OpenAlex content up to March 2023. Results: 76 publications are included in this review. Of these, 64 (84%) addressed extraction of data from abstracts, while 19 (25%) used full texts. A total of 71 (93%) publications developed classifiers for randomised controlled trials. Over 30 entities were extracted, with PICOs (population, intervention, comparator, outcome) being the most frequently extracted. Data are available from 25 (33%) publications, and code from 30 (39%). Six (8%) implemented publicly available tools. Conclusions: This living systematic review presents an overview of the (semi)automated data-extraction literature of interest to different types of literature review. We identified a broad evidence base of publications describing data extraction for interventional reviews and a small number of publications extracting epidemiological or diagnostic accuracy data. Between review updates, the sharing of data and code increased strongly: in the base review, data and code were available for 13% and 19% of publications, respectively; among the 23 new publications, these figures increased to 78% and 87%. Compared with the base review, we also observed a research trend away from straightforward data extraction and towards additionally extracting relations between entities or performing automatic text summarisation. With this living review we aim to review the literature continually.
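
To make the kind of task these publications automate more concrete, the toy sketch below extracts a reported sample size from abstract text with a regular expression. The pattern and example sentence are hypothetical, and the systems covered by the review typically rely on machine-learning models rather than hand-written rules.

```python
# Toy rule-based extraction of a randomised sample size from abstract text.
# Real systems in this review typically use ML/NLP models; this regex is a
# simplified, hypothetical stand-in to show the shape of the task.
import re

SAMPLE_SIZE_PATTERN = re.compile(
    r"(\d[\d,]*)\s+(?:patients|participants|subjects)\s+were\s+randomi[sz]ed",
    re.IGNORECASE,
)

def extract_sample_size(abstract: str) -> int | None:
    """Return the first reported randomised sample size, if any."""
    match = SAMPLE_SIZE_PATTERN.search(abstract)
    if match is None:
        return None
    return int(match.group(1).replace(",", ""))

print(extract_sample_size("In total, 1,204 patients were randomised to ..."))  # 1204
```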

4.
F1000Res ; 9: 210, 2020.
Article in English | MEDLINE | ID: mdl-32724560

ABSTRACT

Background: Researchers in evidence-based medicine cannot keep up with the volume of both existing and newly published primary research articles. Conducting and updating systematic reviews is time-consuming, and in practice data extraction is one of the most complex tasks in this process. Exponential improvements in computational processing speed and data storage are fostering the development of data extraction models and algorithms. This, in combination with quicker pathways to publication, has led to a large landscape of tools and methods for data extraction tasks. Objective: To review published methods and tools for data extraction to (semi)automate the systematic reviewing process. Methods: We propose to conduct a living review. With this methodology we aim to perform monthly search updates, as well as bi-annual review updates if new evidence permits. In a cross-sectional analysis we will extract methodological characteristics and assess the quality of reporting in our included papers. Conclusions: We aim to increase transparency in the reporting and assessment of machine learning technologies to the benefit of data scientists, systematic reviewers and funders of health research. This living review will help to reduce duplicated effort by data scientists who develop data extraction methods. It will also serve to inform systematic reviewers about possibilities to support their data extraction.
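
A minimal sketch of how monthly search updates could be automated against PubMed's E-utilities esearch endpoint is shown below; the query string is a placeholder, and the review protocol does not prescribe this particular implementation.

```python
# Sketch of a monthly PubMed search update via NCBI E-utilities (esearch).
# The query term is a placeholder; a real living review would also deduplicate
# against previously screened records and log each update.
from datetime import date, timedelta

import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def monthly_update(query: str, days: int = 31, retmax: int = 5000) -> list[str]:
    """Return PubMed IDs added within roughly the last month for a query."""
    today = date.today()
    params = {
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "datetype": "edat",  # Entrez date, i.e. when the record was added
        "mindate": (today - timedelta(days=days)).strftime("%Y/%m/%d"),
        "maxdate": today.strftime("%Y/%m/%d"),
        "retmax": retmax,
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

new_pmids = monthly_update('"data extraction" AND "systematic review"')
```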


Subject(s)
Automation , Data Mining , Evidence-Based Medicine , Systematic Reviews as Topic , Cross-Sectional Studies , Research Design
5.
F1000Res ; 9: 1097, 2020.
Article in English | MEDLINE | ID: mdl-33604025

ABSTRACT

Background: The COVID-19 pandemic has caused considerable morbidity, mortality and disruption to people's lives around the world. There are concerns that rates of suicide and suicidal behaviour may rise during the pandemic and in its aftermath. Our living systematic review synthesises findings from emerging literature on the incidence and prevalence of suicidal behaviour as well as suicide prevention efforts in relation to COVID-19, with this iteration synthesising relevant evidence up to 19th October 2020. Method: Automated daily searches feed into a web-based database with screening and data extraction functionalities. Eligibility criteria include incidence/prevalence of suicidal behaviour, exposure-outcome relationships and effects of interventions in relation to the COVID-19 pandemic. Outcomes of interest are suicide, self-harm or attempted suicide and suicidal thoughts. No restrictions are placed on language or study type, except for single-person case reports. We exclude one-off cross-sectional studies without either pre-pandemic measures or comparisons of COVID-19 positive vs. unaffected individuals. Results: Searches identified 6,226 articles. Seventy-eight articles met our inclusion criteria. We identified a further 64 relevant cross-sectional studies that did not meet our revised inclusion criteria. Thirty-four articles were not peer-reviewed (e.g. research letters, pre-prints). All articles were based on observational studies. There was no consistent evidence of a rise in suicide, but many studies noted that adverse economic effects were evolving. There was evidence of a rise in community distress, a fall in hospital presentations for suicidal behaviour, and early evidence of an increased frequency of suicidal thoughts in those who had become infected with COVID-19. Conclusions: Research evidence on the impact of COVID-19 on suicidal behaviour is accumulating rapidly. This living review provides a regular synthesis of the most up-to-date research evidence to guide public health and clinical policy to mitigate the impact of COVID-19 on suicide risk as the longer-term impacts of the pandemic on suicide risk are researched.


Subject(s)
COVID-19 , Self-Injurious Behavior , Cross-Sectional Studies , Humans , Pandemics , SARS-CoV-2 , Self-Injurious Behavior/epidemiology , Suicidal Ideation
6.
J Biomed Inform ; 94: 103202, 2019 06.
Article in English | MEDLINE | ID: mdl-31075531

ABSTRACT

CONTEXT: Citation screening (also called study selection) is a phase of the systematic review process that has attracted growing interest in the use of text mining (TM) methods to reduce the time and effort it requires. Search results are usually imbalanced between the relevant and the irrelevant classes of returned citations. Class imbalance, among other factors, has been a persistent problem that impairs the performance of TM models, particularly in the context of automatic citation screening for systematic reviews. This has often caused the performance of classification models that use only title and abstract data to fall short of expectations. OBJECTIVE: In this study, we explore the effect on text classification performance of using full bibliography data in addition to titles and abstracts for automatic citation screening. METHODS: We experiment with binary and Word2vec feature representations and SVM models using 4 software engineering (SE) and 15 medical review datasets. We build and compare 3 types of models (binary non-linear, Word2vec linear and Word2vec non-linear kernels) for each dataset using the two feature sets. RESULTS: The bibliography-enriched data exhibited consistently improved performance in terms of recall, work saved over sampling (WSS) and Matthews correlation coefficient (MCC) in 3 of the 4 SE datasets, which are fairly large. For the medical datasets the results vary; however, in the majority of cases the performance is the same or better. CONCLUSION: Inclusion of bibliography data has the potential to improve model performance, but to date the results are inconclusive.
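
As a rough illustration of the setup described above, the sketch below trains a non-linear SVM on binary bag-of-words features and reports recall, MCC and WSS. The data loading, split and model parameters are assumptions for illustration and are not taken from the paper.

```python
# Sketch of a binary bag-of-words + SVM setup for citation screening and the
# metrics named in the abstract (recall, MCC, WSS). Dataset loading is assumed
# to happen elsewhere; the paper's exact pipeline is not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix, matthews_corrcoef, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def work_saved_over_sampling(y_true, y_pred) -> float:
    """WSS = (TN + FN) / N - (1 - recall), a common formulation."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    n = tn + fp + fn + tp
    recall = tp / (tp + fn)
    return (tn + fn) / n - (1.0 - recall)

# texts: title + abstract (optionally bibliography-enriched) strings;
# labels: 1 = relevant citation, 0 = irrelevant.
def evaluate(texts: list[str], labels: list[int]) -> dict[str, float]:
    x_train, x_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, stratify=labels, random_state=0
    )
    vectoriser = CountVectorizer(binary=True)            # binary features
    model = SVC(kernel="rbf", class_weight="balanced")   # non-linear kernel
    model.fit(vectoriser.fit_transform(x_train), y_train)
    y_pred = model.predict(vectoriser.transform(x_test))
    return {
        "recall": recall_score(y_test, y_pred),
        "mcc": matthews_corrcoef(y_test, y_pred),
        "wss": work_saved_over_sampling(y_test, y_pred),
    }
```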


Subject(s)
Bibliographies as Topic , Data Mining/methods , Automation , Computational Biology/methods , Models, Theoretical
7.
J Biomed Inform ; 73: 1-13, 2017 09.
Article in English | MEDLINE | ID: mdl-28711679

ABSTRACT

CONTEXT: Independent validation of published scientific results through study replication is a pre-condition for accepting the validity of such results. In computational research, full replication is often unrealistic for independent validation of results; therefore, study reproduction has been justified as the minimum acceptable standard to evaluate the validity of scientific claims. The application of text mining techniques to citation screening in the context of systematic literature reviews is a relatively young and growing computational field with high relevance for software engineering, medical research and other fields. However, there is little work so far on reproduction studies in the field. OBJECTIVE: In this paper, we investigate the reproducibility of studies in this area based on information contained in published articles, and we propose reporting guidelines that could improve reproducibility. METHODS: The study was approached in two ways. Initially we attempted to reproduce results from six studies, which were based on the same raw dataset. Then, based on this experience, we identified steps considered essential to successful reproduction of text mining experiments and characterized them to measure how reproducible a study is, given the information provided on these steps. Thirty-three articles were systematically assessed for reproducibility using this approach. RESULTS: Our work revealed that it is currently difficult, if not impossible, to independently reproduce the results published in any of the studies investigated. The lack of information about the datasets used limits the reproducibility of about 80% of the studies assessed. Also, information about the machine learning algorithms is inadequate in about 27% of the papers. On the plus side, the third-party software tools used are mostly free and available. CONCLUSIONS: The reproducibility potential of most of the studies could be significantly improved if more attention were paid to the information provided on the datasets used, how they were partitioned and utilized, and how any randomization was controlled. We introduce a checklist of information that needs to be provided in order to ensure that a published study can be reproduced.
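
The abstract refers to a checklist of information needed for reproduction. The sketch below shows one hypothetical way such checklist items (dataset identification, partitioning, randomisation control, parameters, software versions) could be recorded programmatically; it is not the authors' published checklist.

```python
# Illustrative structure for recording reproducibility-relevant reporting
# items; the fields are inspired by the issues the abstract highlights and
# are NOT the authors' published checklist.
from dataclasses import dataclass, field

@dataclass
class ReproducibilityReport:
    dataset_identified: bool            # is the exact dataset named/available?
    partitioning_described: bool        # train/test splits or CV folds reported?
    randomisation_controlled: bool      # seeds or repeated-run procedure given?
    algorithm_parameters_reported: bool
    software_versions_listed: bool
    notes: list[str] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of checklist items satisfied (a crude summary only)."""
        items = [
            self.dataset_identified,
            self.partitioning_described,
            self.randomisation_controlled,
            self.algorithm_parameters_reported,
            self.software_versions_listed,
        ]
        return sum(items) / len(items)
```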


Subject(s)
Checklist , Data Mining , Review Literature as Topic , Biomedical Research , Humans , Publications , Reproducibility of Results