Results 1 - 4 of 4
1.
Syst Rev ; 12(1): 7, 2023 01 17.
Article in English | MEDLINE | ID: mdl-36650579

ABSTRACT

BACKGROUND: Machine learning (ML) tools exist that can reduce or replace human activities in repetitive or complex tasks. Yet ML is underutilized within evidence synthesis, despite the steadily growing rate of primary study publication and the need to periodically update reviews to reflect new evidence. Underutilization may be partially explained by a paucity of evidence on how ML tools can reduce resource use and time-to-completion of reviews.

METHODS: This protocol describes how we will answer two research questions using a retrospective study design: Is there a difference in the resources used to produce reviews with recommended ML use versus without ML, and is there a difference in time-to-completion? We will also compare recommended ML use to non-recommended ML use that merely adds ML to existing procedures. We will retrospectively include all reviews conducted at our institute from 1 August 2020, the commission date of the first review at our institute that used ML.

CONCLUSION: The results of this study will allow us to quantitatively estimate the effect of ML adoption on resource use and time-to-completion, providing our organization and others with better information for high-level organizational decisions about ML.


Subject(s)
Machine Learning , Humans , Retrospective Studies , Pilot Projects
2.
BMC Med Res Methodol ; 22(1): 167, 2022 06 08.
Article in English | MEDLINE | ID: mdl-35676632

ABSTRACT

BACKGROUND: Machine learning and automation are increasingly used to make the evidence synthesis process faster and more responsive to policymakers' needs. In systematic reviews of randomized controlled trials (RCTs), risk of bias assessment is a resource-intensive task that typically requires two trained reviewers. One function of RobotReviewer, an off-the-shelf machine learning system, is automated risk of bias assessment.

METHODS: We assessed the feasibility of adopting RobotReviewer within a national public health institute using a randomized, real-time, user-centered study. The study included 26 RCTs and six reviewers from two projects examining health and social interventions. We randomized the studies to one of two RobotReviewer platforms. We operationalized feasibility as accuracy, time use, and reviewer acceptability. We measured accuracy by the number of corrections made by human reviewers (either to automated assessments or to another human reviewer's assessments). We explored acceptability through group discussions and individual email responses after presenting the quantitative results.

RESULTS: During the consensus process, reviewers were equally likely to accept RobotReviewer's judgments as each other's when measured dichotomously; risk ratio 1.02 (95% CI 0.92 to 1.13; p = 0.33). We were not able to compare time use. Acceptability of the program among researchers was mixed. Less experienced reviewers were generally more positive: they saw more benefits and used the tool more flexibly. Reviewers positioned human input and human-to-human interaction as superior to even semi-automation of this process.

CONCLUSION: Despite being presented with evidence of RobotReviewer's performance being equal to that of humans, participating reviewers were not interested in modifying standard procedures to include automation. If further studies confirm equal accuracy and reduced time compared to manual practices, the benefits of RobotReviewer may support its future implementation as one of two assessors, despite reviewer ambivalence. Future research should examine barriers to adopting automated tools and how highly educated, experienced researchers can adapt to a job market increasingly challenged by new technologies.
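The dichotomous acceptance comparison above is summarized as a risk ratio with a 95% confidence interval. As a minimal illustration (not the study's actual analysis code, and using hypothetical acceptance counts), such a ratio can be computed from two groups' counts with the standard log-normal approximation:

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio of group A vs. group B with a 95% CI (log-normal approximation)."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    half_width = 1.96 * se
    lower = math.exp(math.log(rr) - half_width)
    upper = math.exp(math.log(rr) + half_width)
    return rr, lower, upper

# Hypothetical counts: assessments accepted without correction in each arm
rr, lower, upper = risk_ratio(80, 100, 78, 100)
```

A ratio near 1 with a confidence interval spanning 1, as reported in the abstract, indicates no detectable difference between accepting the machine's judgments and accepting a colleague's.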


Subject(s)
Bias , Systematic Reviews as Topic , Humans , Machine Learning , Randomized Controlled Trials as Topic , Risk Assessment
3.
Telemed J E Health ; 28(7): 942-969, 2022 07.
Article in English | MEDLINE | ID: mdl-34665645

ABSTRACT

Background: One lesson from the COVID-19 pandemic is the need to optimize health care provision outside traditional settings, potentially over longer periods of time. An important strategy is remote patient monitoring (RPM), which allows patients to remain at home while they transmit health data and receive follow-up services.

Materials and Methods: We conducted an overview of the latest systematic reviews that included randomized controlled trials with adult patients with chronic diseases. We summarized the results, displayed them in forest plots, and used GRADE (Grading of Recommendations Assessment, Development, and Evaluation) to assess our certainty in the evidence.

Results: We included 4 systematic reviews that together reported on 11 trials that met our definition of RPM, each including patients with diabetes and/or hypertension. RPM probably makes little to no difference to HbA1c levels. RPM probably leads to a slight reduction in systolic blood pressure, of questionable clinical meaningfulness. RPM probably has a small negative effect on the physical component of health-related quality of life, but the clinical significance of this reduction is uncertain. We have low confidence in the finding that RPM makes no difference to the remaining five primary outcomes.

Conclusion: Most of our findings are consistent with reviews using other, broader definitions of RPM. The type of RPM examined in this review is as effective as standard treatment for patients with diabetes and/or hypertension. If this or other types of RPM are to be used for "long COVID" patients or for other chronic disease groups post-pandemic, we need to understand why RPM may negatively affect quality of life.
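The forest-plot summaries described above rest on pooling trial-level effect estimates. As a minimal sketch of the underlying arithmetic (fixed-effect inverse-variance weighting, with made-up effect sizes and standard errors rather than data from the review), pooling works like this:

```python
def pooled_estimate(effects, std_errors):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]  # more precise trials weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical systolic blood pressure differences (mmHg) from three trials
effects = [-3.0, -1.5, -2.2]
std_errors = [1.0, 0.8, 1.2]
pooled, pooled_se = pooled_estimate(effects, std_errors)
```

The pooled standard error is always smaller than that of any single trial, which is why a combined estimate can detect a slight reduction that individual trials cannot.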


Subject(s)
COVID-19 , Diabetes Mellitus , Hypertension , Adult , COVID-19/epidemiology , Chronic Disease , Diabetes Mellitus/therapy , Humans , Hypertension/therapy , Monitoring, Physiologic/methods , Pandemics , Primary Health Care , Quality of Life
4.
Res Synth Methods ; 13(2): 229-241, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34919321

ABSTRACT

Systematic reviews are resource-intensive. The machine learning tools being developed mostly focus on the study identification process, but tools to assist in analysis and categorization are also needed. One possibility is unsupervised automatic text clustering, in which each study is automatically assigned to one or more meaningful clusters. Our main aim was to assess the usefulness of an automated clustering method, Lingo3G, in categorizing studies in a simplified rapid review, and then to compare the performance (precision and recall) of this method with manual categorization. We randomly assigned all 128 studies in a review to be coded by a human researcher blinded to cluster assignment (mimicking two independent researchers) or by a human researcher non-blinded to cluster assignment (mimicking one researcher checking another's work). We compared the time use, precision, and recall of manual categorization versus automated clustering. Both automated clustering and manual categorization organized studies by population and intervention/context. Automated clustering failed to identify two manually identified categories but identified one additional category not identified by the human researcher. We estimate that automated clustering has precision similar to that of both blinded and non-blinded researchers (e.g., 88% vs. 89%) but higher recall (e.g., 89% vs. 84%). Manual categorization required 49% more time than automated clustering. Using a specific clustering algorithm, automated clustering can help categorize studies and identify patterns across them in simpler systematic reviews. We found that the clustering was sensitive enough to group studies according to linguistic differences that often corresponded to the manual categories.
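The precision and recall figures above can be read as set comparisons between the studies an algorithm places in a cluster and the studies a human coder places in the corresponding category. A minimal sketch of that computation (with hypothetical study IDs, not the review's data):

```python
def precision_recall(assigned, reference):
    """Precision and recall of an assigned set of items against a reference set."""
    assigned, reference = set(assigned), set(reference)
    true_positives = len(assigned & reference)
    precision = true_positives / len(assigned) if assigned else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical study IDs: one automated cluster vs. the matching manual category
auto_cluster = {"s01", "s02", "s03", "s07"}
manual_category = {"s01", "s02", "s03", "s05"}
precision, recall = precision_recall(auto_cluster, manual_category)
```

Precision penalizes studies the algorithm adds that the human would not ("s07" here), while recall penalizes studies it misses ("s05"), matching the trade-off the abstract reports.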


Subject(s)
Algorithms , Machine Learning , Cluster Analysis , Humans , Research Design , Systematic Reviews as Topic