Results 1 - 20 of 740
1.
J Clin Epidemiol ; : 111422, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38849061

ABSTRACT

PURPOSE: Although comprehensive and widespread guidelines on how to conduct systematic reviews of outcome measurement instruments (OMIs) exist, for example from the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) initiative, key information is often missing in published reports. This article describes the development of an extension of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guideline: PRISMA-COSMIN for OMIs 2024. METHODS: The development process followed the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) guidelines and included a literature search, expert consultations, a Delphi study, a hybrid workgroup meeting, pilot testing, and an end-of-project meeting, with integrated patient/public involvement. RESULTS: From the literature and expert consultation, 49 potentially relevant reporting items were identified. Round 1 of the Delphi study was completed by 103 panelists, whereas rounds 2 and 3 were completed by 78 panelists. After 3 rounds, agreement (≥ 67%) on inclusion and wording was reached for 44 items. Eleven items without consensus for inclusion and/or wording were discussed at a workgroup meeting attended by 24 participants. Agreement was reached for the inclusion and wording of 10 items, and the deletion of 1 item. Pilot testing with 65 authors of OMI systematic reviews further improved the guideline through minor changes in wording and structure, finalized during the end-of-project meeting. The final checklist to facilitate the reporting of full systematic review reports contains 54 (sub)items addressing the review's title, abstract, plain language summary, open science, introduction, methods, results, and discussion. Thirteen items pertaining to the title and abstract are also included in a separate abstract checklist, guiding authors in reporting, for example, conference abstracts. CONCLUSION: PRISMA-COSMIN for OMIs 2024 consists of two checklists (full reports; abstracts), their corresponding explanation and elaboration documents detailing the rationale and examples for each item, and a data flow diagram. PRISMA-COSMIN for OMIs 2024 can improve the reporting of systematic reviews of OMIs, fostering their reproducibility and allowing end-users to appraise the quality of OMIs and select the most appropriate OMI for a specific application.

2.
PLoS One ; 19(5): e0302655, 2024.
Article in English | MEDLINE | ID: mdl-38701100

ABSTRACT

BACKGROUND: Open science practices are implemented across many scientific fields to improve transparency and reproducibility in research. Complementary, alternative, and integrative medicine (CAIM) is a growing field that may benefit from the adoption of open science practices. The efficacy and safety of CAIM practices, a common concern within the field, can be validated or refuted through transparent and reliable research. Investigating open science practices across CAIM journals using the Transparency and Openness Promotion (TOP) guidelines can potentially promote their adoption across the field. The purpose of this study is to conduct an audit that compares and ranks open science practices adopted by CAIM journals against the TOP guidelines laid out by the Center for Open Science (COS). METHODS: CAIM-specific journals with titles containing the words "complementary", "alternative" and/or "integrative" were included in this audit. Each of the eight TOP criteria was used to extract open science practices from each of the CAIM journals. Data were summarized by TOP guideline and ranked using the TOP Factor to identify commonalities and differences in practices across the included journals. RESULTS: A total of 19 CAIM journals were included in this audit. Across all journals, the mean TOP Factor was 2.95 with a median score of 2. The findings of this study reveal high variability among the open science practices required by journals in this field. Four journals (21%) had a final TOP score of 0, while the total scores of the remaining 15 (79%) ranged from 1 to 8. CONCLUSION: While several studies have audited open science practices across discipline-specific journals, none have focused on CAIM journals. The results of this study indicate that CAIM journals provide minimal guidelines to encourage or require authors to adhere to open science practices and that there is an opportunity to improve the use of open science practices in the field.
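The journal-level summary above (mean and median TOP Factor, and the share of journals scoring 0) amounts to simple descriptive statistics over per-journal scores. A minimal Python sketch of that calculation, using hypothetical journal names and scores rather than the 19 journals audited in the study:

```python
from statistics import mean, median

# Hypothetical TOP Factor totals for a handful of journals; NOT the journals
# or scores from the audit described above.
top_scores = {
    "Journal A": 0,
    "Journal B": 2,
    "Journal C": 1,
    "Journal D": 8,
    "Journal E": 3,
}

scores = list(top_scores.values())
zero_count = sum(1 for s in scores if s == 0)

print(f"Mean TOP Factor:   {mean(scores):.2f}")
print(f"Median TOP Factor: {median(scores)}")
print(f"Journals scoring 0: {zero_count}/{len(scores)} ({100 * zero_count / len(scores):.0f}%)")
```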


Subject(s)
Complementary Therapies , Integrative Medicine , Periodicals as Topic , Humans , Periodicals as Topic/standards , Integrative Medicine/standards
3.
PLoS One ; 19(5): e0302108, 2024.
Article in English | MEDLINE | ID: mdl-38696383

ABSTRACT

OBJECTIVE: To assess the reporting quality of published RCT abstracts regarding patients with endometriosis pelvic pain and to investigate the prevalence and characteristics of spin in these abstracts. METHODS: PubMed and Scopus were searched for RCT abstracts addressing endometriosis pelvic pain published from January 1st, 2010 to December 1st, 2023. The reporting quality of RCT abstracts was assessed using the CONSORT statement for abstracts. Additionally, spin was evaluated in the results and conclusions sections of the abstracts, defined as the misleading reporting of study findings to emphasize the perceived benefits of an intervention or to distract readers from statistically non-significant results. Linear and logistic regression were used to assess factors affecting reporting quality and the presence of spin, respectively. RESULTS: A total of 47 RCT abstracts were included. Of 16 checklist items, only three (objective, intervention, and conclusions) were sufficiently reported in most abstracts (more than 95%), and none of the abstracts presented precise data as required by the CONSORT-A guidelines. In the methods section, trial design, type of randomization, generation of the random allocation sequence, allocation concealment, and blinding were the items most often reported suboptimally. Total quality scores ranged from 5 to 15 (mean: 9.59, SD: 3.03, median: 9, IQR: 5). Word count (beta = 0.015, p-value = 0.005) and publication in open-access journals (beta = 2.023, p-value = 0.023) were significant factors affecting reporting quality. Evaluating spin within each included paper, we found that 18 (51.43%) papers had statistically non-significant results. Of these studies, 12 (66.66%) had spin in both the results and conclusion sections. Furthermore, spin intensity increased over 2010-2023, and 38.29% of abstracts had spin in both the results and conclusion sections. CONCLUSION: Overall, poor adherence to CONSORT-A was observed, with spin detected in several RCTs featuring non-significant primary endpoints in the obstetrics and gynecology literature.


Subject(s)
Endometriosis , Randomized Controlled Trials as Topic , Humans , Female , Randomized Controlled Trials as Topic/standards , Research Design/standards , Pelvic Pain , Abstracting and Indexing/standards
4.
Integr Med Res ; 13(2): 101047, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38799120

ABSTRACT

This educational article explores the convergence of open science practices and traditional, complementary, and integrative medicine (TCIM), shedding light on the potential benefits and challenges of open science for the development, dissemination, and implementation of evidence-based TCIM. We emphasize the transformative shift in medical science towards open and collaborative practices, highlighting the limited application of open science in TCIM research despite its growing acceptance among patients. We define open science practices and discuss those that are applicable to TCIM, including: study registration; reporting guidelines; data, code and material sharing; preprinting; publishing open access; and reproducibility/replication studies. We explore the benefits of open science in TCIM, spanning improved research quality, increased public trust, accelerated innovation, and enhanced evidence-based decision-making. We also acknowledge challenges such as data privacy concerns, limited resources, and resistance to cultural change. We propose strategies to overcome these challenges, including ethical guidelines, education programs, funding advocacy, interdisciplinary dialogue, and patient engagement. Looking to the future, we envision the maturation of open science in TCIM, the development of TCIM-specific guidelines for open science practices, advancements in data sharing platforms, the integration of open data and artificial intelligence in TCIM research, and changes in the context of policy and regulation. We foresee a future where open science in TCIM leads to a better evidence base, informed decision-making, interdisciplinary collaboration, and transformative impacts on healthcare and research methodologies, highlighting the promising synergy between open science and TCIM for holistic, evidence-based healthcare solutions.

5.
PLoS One ; 19(5): e0301251, 2024.
Article in English | MEDLINE | ID: mdl-38709739

ABSTRACT

INTRODUCTION AND OBJECTIVE: Open science (OS) aims to make the dissemination of knowledge and the research process transparent and accessible to everyone. With the increasing popularity of complementary, alternative, and integrative medicine (CAIM), our goal was to explore CAIM researchers' practices and perceived barriers related to OS. METHODS: We conducted an anonymous online survey of researchers who published in journals listed in Scopus containing the words "complementary", "alternative", or "integrative" medicine in their names. We emailed 6040 researchers our purpose-built electronic survey after extracting their email addresses from one of their publications in our sample of journals. We asked about their familiarity with different OS concepts, along with their experiences and challenges engaging in these practices over the last 12 months. RESULTS: The survey was completed by 392 researchers (6.5% response rate, 97.1% completion rate). Most respondents were CAIM researchers familiar with the overall concept of OS, as indicated by those actively publishing open access (OA) (n = 244, 76.0%), registering a study protocol (n = 148, 48.0%), and using reporting guidelines (n = 181, 59.0%) in the past 12 months. Preprinting, sharing raw data, and sharing study materials were less popular. Most respondents reported that a lack of funding was the greatest barrier to publishing OA (n = 252, 79.0%) and that additional funding is the most significant incentive for applying more OS practices to their research (n = 229, 72.2%). With respect to barriers to preprinting, 36.3% (n = 110) of participants believed there are potential harms in sharing non-peer-reviewed work and 37.0% (n = 112) feared preprinting would reduce the likelihood of their manuscript being accepted by a journal. Respondents were also concerned about intellectual property control with regard to sharing data (n = 94, 31.7%) and research study materials (n = 80, 28.7%). CONCLUSIONS: Although many participants were familiar with and practiced aspects of OS, many reported facing barriers related to a lack of funding to enable OS and perceived risks of revealing research ideas and data prior to publication. Future research should monitor the adoption and implementation of OS interventions in CAIM.


Subject(s)
Complementary Therapies , Integrative Medicine , Research Personnel , Humans , Cross-Sectional Studies , Research Personnel/psychology , Surveys and Questionnaires , Complementary Therapies/statistics & numerical data , Female , Male , Adult , Middle Aged
6.
J Clin Epidemiol ; 171: 111367, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38642717

ABSTRACT

Research integrity is guided by a set of principles to ensure research reliability and rigor. It serves as a pillar to uphold society's trust in science and foster scientific progress. However, over the past 2 decades, a surge in research integrity concerns, including fraudulent research, reproducibility challenges, and questionable practices, has raised critical questions about the reliability of scientific outputs, particularly in biomedical research. In the biomedical sciences, any breaches in research integrity could potentially lead to a domino effect impacting patient care, medical interventions, and the broader implementation of healthcare policies. Addressing these breaches requires measures such as rigorous research methods, transparent reporting, and changing the research culture. Institutional support through clear guidelines, robust training, and mentorship is crucial to fostering a culture of research integrity. However, structural and institutional factors, including research incentives and recognition systems, play an important role in research behavior. Therefore, promoting research integrity demands a collective effort from all stakeholders to maintain public trust in the scientific community and ensure the reliability of science. Here we discuss some definitions and principles, the implications for biomedical sciences, and propose actionable steps to foster research integrity.

7.
J Clin Epidemiol ; 169: 111309, 2024 May.
Article in English | MEDLINE | ID: mdl-38428538

ABSTRACT

OBJECTIVES: To describe, and explain the rationale for, the methods used and decisions made during development of the updated SPIRIT 2024 and CONSORT 2024 reporting guidelines. METHODS: We developed SPIRIT 2024 and CONSORT 2024 together to facilitate harmonization of the two guidelines, and incorporated content from key extensions. We conducted a scoping review of comments suggesting changes to SPIRIT 2013 and CONSORT 2010, and compiled a list of other possible revisions based on existing SPIRIT and CONSORT extensions, other reporting guidelines, and personal communications. From this, we generated a list of potential modifications or additions to SPIRIT and CONSORT, which we presented to stakeholders for feedback in an international online Delphi survey. The Delphi survey results were discussed at an online expert consensus meeting attended by 30 invited international participants. We then drafted the updated SPIRIT and CONSORT checklists and revised them based on further feedback from meeting attendees. RESULTS: We compiled 83 suggestions for revisions or additions to SPIRIT and/or CONSORT from the scoping review and 85 from other sources, from which we generated 33 potential changes to SPIRIT (n = 5) or CONSORT (n = 28). Of 463 participants invited to take part in the Delphi survey, 317 (68%) responded to Round 1, 303 (65%) to Round 2 and 290 (63%) to Round 3. Two additional potential checklist changes were added to the Delphi survey based on Round 1 comments. Overall, 14/35 (SPIRIT n = 0; CONSORT n = 14) proposed changes reached the predefined consensus threshold (≥80% agreement), and participants provided 3580 free-text comments. The consensus meeting participants agreed with implementing 11/14 of the proposed changes that reached consensus in the Delphi and supported implementing a further 4/21 changes (SPIRIT n = 2; CONSORT n = 2) that had not reached the Delphi threshold. They also recommended further changes to refine key concepts and for clarity. CONCLUSION: The forthcoming SPIRIT 2024 and CONSORT 2024 Statements will provide updated, harmonized guidance for reporting randomized controlled trial protocols and results, respectively. The simultaneous development of the SPIRIT and CONSORT checklists has been informed by current empirical evidence and extensive input from stakeholders. We hope that this report of the methods used will be helpful for developers of future reporting guidelines.
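The predefined consensus threshold described above (≥80% agreement per proposed change) reduces to a per-item tally against a cutoff. A minimal sketch of that check, with hypothetical item names and vote counts rather than the survey's actual data:

```python
# Apply a Delphi consensus threshold (>= 80% agreement) to per-item vote
# tallies. Item names and counts are hypothetical, not the SPIRIT/CONSORT data.
CONSENSUS_THRESHOLD = 0.80

# proposed change -> (votes agreeing with the change, total votes cast)
votes = {
    "proposed change A": (250, 290),
    "proposed change B": (198, 290),
    "proposed change C": (232, 285),
}

for item, (agree, total) in votes.items():
    share = agree / total
    status = "consensus reached" if share >= CONSENSUS_THRESHOLD else "discuss at consensus meeting"
    print(f"{item}: {share:.0%} agreement -> {status}")
```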


Subject(s)
Checklist , Delphi Technique , Guidelines as Topic , Humans , Checklist/standards , Research Design/standards , Consensus , Randomized Controlled Trials as Topic/standards
8.
Photochem Photobiol Sci ; 23(2): 387-394, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38341812

ABSTRACT

This is a protocol for an overview to summarize the findings of Systematic Reviews (SRs) dealing with Photodynamic Inactivation (PDI) for the control of oral diseases. Specific variables of oral infections will be considered as outcomes, according to dental specialty. The Cochrane Database of Systematic Reviews (CDSR), MEDLINE, LILACS, Embase, and Epistemonikos will be searched, as well as reference lists. A search strategy was developed for each database using only terms related to the intervention (PDI), aiming to maximize sensitivity. After checking for duplicate entries, selection of reviews will be performed using a two-stage technique: two authors will independently screen titles and abstracts, and then full texts will be assessed against the inclusion/exclusion criteria. Any disagreement will be resolved through discussion and/or consultation with a third reviewer. Data will be extracted following the recommendations in Chapter V of the Cochrane Handbook, using an electronic pre-specified form. The methodological quality and risk of bias (RoB) of the included SRs will be evaluated using AMSTAR 2 and ROBIS. Narrative summaries of relevant results from the individual SRs will be prepared and displayed in tables and figures. A specific summary will focus on PDI parameters and study designs, such as the type and concentration of photosensitizer, pre-irradiation time, irradiation dosimetry, and infection or microbiological models, to identify the PDI protocols with clinical potential. We will summarize the quantitative results of the SRs narratively.


Subject(s)
Specialties, Dental , Systematic Reviews as Topic
9.
Rev Panam Salud Publica ; 48: e13, 2024.
Article in Spanish | MEDLINE | ID: mdl-38352035

ABSTRACT

The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human-AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.



10.
Integr Med Res ; 13(1): 101024, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38384497

ABSTRACT

The convergence of traditional, complementary, and integrative medicine (TCIM) with artificial intelligence (AI) is a promising frontier in healthcare. TCIM is a patient-centric approach that combines conventional medicine with complementary therapies, emphasizing holistic well-being. AI can revolutionize healthcare through data-driven decision-making and personalized treatment plans. This article explores how AI technologies can complement and enhance TCIM, aligning with the shared objectives of researchers from both fields in improving patient outcomes, enhancing care quality, and promoting holistic wellness. This integration of TCIM and AI introduces exciting opportunities but also noteworthy challenges. AI may augment TCIM by assisting in early disease detection, providing personalized treatment plans, predicting health trends, and enhancing patient engagement. Challenges at the intersection of AI and TCIM include data privacy and security, regulatory complexities, maintaining the human touch in patient-provider relationships, and mitigating bias in AI algorithms. Patients' trust, informed consent, and legal accountability are all essential considerations. Future directions in AI-enhanced TCIM include advanced personalized medicine, understanding the efficacy of herbal remedies, and studying patient-provider interactions. Research on bias mitigation, patient acceptance, and trust in AI-driven TCIM healthcare is crucial. In this article, we outline how the merging of TCIM and AI holds great promise for enhancing healthcare delivery, personalizing treatment plans, and improving preventive care and patient engagement. Addressing challenges and fostering collaboration among AI experts, TCIM practitioners, and policymakers, however, is vital to harnessing the full potential of this integration.

11.
Medicine (Baltimore) ; 103(7): e37079, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363902

ABSTRACT

BACKGROUND: Quality reporting contributes to the effective translation of health research into practice and policy. As an initial step in the development of a reporting guideline for scaling, the Standards for reporting stUdies of sCaling evidenCEd-informED interventions (SUCCEED), we performed a systematic review to identify relevant guidelines and compile a list of potential items. METHODS: We conducted a systematic review according to Cochrane method guidelines. We searched the following databases from their respective inceptions: MEDLINE, Embase, PsycINFO, Cochrane Library, CINAHL, and Web of Science. We also searched websites of relevant organizations and Google. We included any document that provided instructions or recommendations (e.g., a reporting guideline, checklist, guidance, framework, or standard); that could inform the design or reporting of scaling interventions; and that related to the health sector. We extracted characteristics of the included guidelines and assessed their methodological quality using a 3-item internal validity assessment tool. We extracted all items from the guidelines and classified them according to the main sections of reporting guidelines (title, abstract, introduction, methods, results, discussion, and other information). We performed a narrative synthesis based on descriptive statistics. RESULTS: Of 7704 records screened (published between 1999 and 2019), we included 39 guidelines, for which data were extracted from 57 reports. Of the 39 guidelines, 17 were for designing scaling interventions and 22 for reporting implementation interventions. At least one female author was listed in 31 guidelines, and 21 first authors were female. None of the authors belonged to the patient stakeholder group. Only one guideline clearly identified a patient as having participated in the consensus process. More than half the guidelines (56%) had been developed using an evidence-based process. In total, 750 items were extracted from the 39 guidelines and distributed across the 7 main sections. CONCLUSION: The relevant items identified could inform the development of a reporting guideline for scaling studies of evidence-based health interventions. This, together with our assessment of the guidelines, could contribute to better reporting in the science and practice of scaling.


Subject(s)
Guidelines as Topic , Health Services Research , Humans , Health Services Research/standards
12.
Nat Commun ; 15(1): 1619, 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38388497

ABSTRACT

The Consolidated Standards of Reporting Trials extension for Artificial Intelligence interventions (CONSORT-AI) was published in September 2020. Since its publication, several randomised controlled trials (RCTs) of AI interventions have been published, but the completeness and transparency of their reporting are unknown. This systematic review assesses the completeness of reporting of AI RCTs following publication of CONSORT-AI and provides a comprehensive summary of RCTs published in recent years. 65 RCTs were identified, mostly conducted in China (37%) and the USA (18%). Median concordance with CONSORT-AI reporting was 90% (IQR 77-94%), although only 10 RCTs explicitly reported its use. Several items were consistently under-reported, including algorithm version, accessibility of the AI intervention or code, and references to a study protocol. Only 3 of 52 included journals explicitly endorsed or mandated CONSORT-AI. Despite generally high concordance amongst recent AI RCTs, some AI-specific considerations remain systematically poorly reported. Further endorsement of CONSORT-AI by journals and funders may enable more complete adherence to the full CONSORT-AI guidelines.
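Per-trial concordance as summarized above is typically computed as the share of applicable checklist items a trial reports, with the cohort then described by its median and IQR; a sketch under that assumption, with invented trial names and item counts:

```python
from statistics import median, quantiles

# Invented (items reported, items applicable) counts per trial; not data from
# the review above.
trials = {
    "trial 1": (38, 42),
    "trial 2": (30, 40),
    "trial 3": (41, 43),
    "trial 4": (33, 41),
}

concordance = [100 * reported / applicable for reported, applicable in trials.values()]
q1, _, q3 = quantiles(concordance, n=4)  # quartiles
print(f"Median concordance: {median(concordance):.0f}% (IQR {q1:.0f}-{q3:.0f}%)")
```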


Subject(s)
Artificial Intelligence , Reference Standards , China , Randomized Controlled Trials as Topic
13.
Open Heart ; 11(1)2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38233041

ABSTRACT

OBJECTIVE: Open science is a movement and set of practices to conduct research more transparently. Implementing open science will significantly improve public access and support equity. It also has the potential to foster innovation and reduce duplication through data and materials sharing. Here, we survey an international group of researchers publishing in cardiovascular journals regarding their perceptions and practices related to open science. METHODS: We identified the top 100 'Cardiology and Cardiovascular Medicine' subject category journals from the SCImago journal ranking platform, a publicly available portal that draws from Scopus. We then extracted the corresponding author's name and email from all articles published in these journals between 1 March 2021 and 1 March 2022. Participants were sent a purpose-built survey about open science. The survey contained primarily multiple-choice and scale-based questions, for which we report count data and percentages. For the few text-based responses, we conducted thematic content analysis. RESULTS: 198 participants responded to our survey. When indicating how familiar they were with open science, participants had a mean response of 6.8 (N=197, SD=1.8) on a 9-point scale with endpoints not at all familiar (1) and extremely familiar (9). When asked where they obtained open science training, most participants indicated it was self-initiated on the job while conducting research (n=103, 52%) or that they had no formal training with respect to open science (n=72, 36%). More than half of the participants indicated they would benefit from practical support from their institution on how to perform open science practices (N=106, 54%). Participants acknowledged a diversity of barriers to each of the open science practices presented to them. Participants indicated that funding was the most essential incentive to adopt open science. CONCLUSIONS: It is clear that policy alone will not lead to the effective implementation of open science. This survey serves as a baseline for the cardiovascular research community's open science performance and perceptions and can be used to inform future interventions and monitoring.


Subject(s)
Cardiology , Humans , Cardiology/trends , Biomedical Research/trends , Publishing/trends
14.
J Clin Epidemiol ; 168: 111247, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38185190

ABSTRACT

OBJECTIVES: Evidence-based research (EBR) is the systematic and transparent use of prior research to inform a new study so that it answers questions that matter in a valid, efficient, and accessible manner. This study surveyed experts about existing (e.g., citation analysis) and new methods for monitoring EBR and collected ideas about implementing these methods. STUDY DESIGN AND SETTING: We conducted a cross-sectional study via an online survey between November 2022 and March 2023. Participants were experts from the fields of evidence synthesis and research methodology in health research. Open-ended questions were coded by recurring themes; descriptive statistics were used for quantitative questions. RESULTS: Twenty-eight expert participants suggested that citation analysis should be supplemented with content evaluation (not just what is cited but also in which context), content expert involvement, and assessment of the quality of cited systematic reviews. They also suggested that citation analysis could be facilitated with automation tools. They emphasized that EBR monitoring should be conducted by ethics committees and funding bodies before the research starts. Challenges identified for monitoring EBR implementation were resource constraints and a lack of clarity about responsibility for EBR monitoring. CONCLUSION: The ideas proposed in this study for monitoring the implementation of EBR can be used to refine methods and define responsibility, but they should be further explored in terms of feasibility and acceptability. Different methods may be needed to determine whether the use of EBR is improving over time.


Subject(s)
Research Design , Humans , Cross-Sectional Studies
15.
Syst Rev ; 13(1): 17, 2024 01 05.
Article in English | MEDLINE | ID: mdl-38183086

ABSTRACT

PURPOSE: To inform updated recommendations by the Canadian Task Force on Preventive Health Care on screening in a primary care setting for hypertension in adults aged 18 years and older. This protocol outlines the scope and methods for a series of systematic reviews and one overview of reviews. METHODS: To evaluate the benefits and harms of screening for hypertension, the Task Force will rely on the relevant key questions from the 2021 United States Preventive Services Task Force systematic review. In addition, a series of reviews will be conducted to identify, appraise, and synthesize the evidence on (1) the association between blood pressure measurement methods and future cardiovascular disease (CVD)-related outcomes, (2) thresholds for discussions of treatment initiation, and (3) patient acceptability of hypertension screening methods. For the review of blood pressure measurement methods and future CVD-related outcomes, we will perform a de novo review and search MEDLINE, Embase, CENTRAL, and APA PsycInfo for randomized controlled trials, prospective or retrospective cohort studies, nested case-control studies, and within-arm analyses of intervention studies. For the review of thresholds for discussions of treatment initiation, we will perform an overview of reviews and update results from a relevant 2019 UK NICE review; we will search MEDLINE, Embase, APA PsycInfo, and Epistemonikos for systematic reviews. For the acceptability review, we will perform a de novo systematic review and search MEDLINE, Embase, and APA PsycInfo for randomized controlled trials, controlled clinical trials, and observational studies with comparison groups. Websites of relevant organizations, gray literature sources, and the reference lists of included studies and reviews will be hand-searched. Title and abstract screening will be completed by two independent reviewers. Full-text screening, data extraction, risk-of-bias assessment, and GRADE (Grading of Recommendations Assessment, Development and Evaluation) assessments will be completed independently by two reviewers. Results from included studies will be synthesized narratively and pooled via meta-analysis when appropriate. The GRADE approach will be used to assess the certainty of evidence for outcomes. DISCUSSION: The results of the evidence reviews will be used to inform Canadian recommendations on screening for hypertension in adults aged 18 years and older. SYSTEMATIC REVIEW REGISTRATION: This protocol is registered on PROSPERO and is available on the Open Science Framework (osf.io/8w4tz).


Subject(s)
Hypertension , Adult , Humans , Prospective Studies , Retrospective Studies , Canada , Systematic Reviews as Topic , Hypertension/diagnosis , Hypertension/prevention & control , Meta-Analysis as Topic
16.
Syst Rev ; 13(1): 48, 2024 01 31.
Article in English | MEDLINE | ID: mdl-38291528

ABSTRACT

BACKGROUND: The transition from childhood to adolescence is associated with an increase in rates of some psychiatric disorders, including major depressive disorder, a debilitating mood disorder. The aim of this systematic review is to update the evidence on the benefits and harms of screening for depression in primary care and non-mental health clinic settings among children and adolescents. METHODS: This review is an update of a previous systematic review, for which the last search was conducted in 2017. We searched Ovid MEDLINE® ALL, Embase Classic+Embase, PsycINFO, Cochrane Central Register of Controlled Trials, and CINAHL on November 4, 2019, and updated on February 19, 2021. If no randomized controlled trials were found, we planned to conduct an additional search for non-randomized trials with a comparator group. For non-randomized trials, we applied a non-randomized controlled trial filter and searched the same databases except for Cochrane Central Register of Controlled Trials from January 2015 to February 2021. We also conducted a targeted search of the gray literature for unpublished documents. Title and abstract, and full-text screening were completed independently by pairs of reviewers. RESULTS: In this review update, we were unable to find any randomized controlled studies that satisfied our eligibility criteria and evaluated the potential benefits and harms of screening for depression in children and adolescents. Additionally, a search for non-randomized trials yielded no studies that met the inclusion criteria. CONCLUSIONS: The findings of this review indicate a lack of available evidence regarding the potential benefits and harms of screening for depression in children and adolescents. This absence of evidence emphasizes the necessity for well-conducted clinical trials to evaluate the effectiveness of depression screening among children and adolescents in primary care and non-mental health clinic settings. SYSTEMATIC REVIEW REGISTRATION: PROSPERO CRD42020150373 .


Subject(s)
Depression , Depressive Disorder, Major , Adolescent , Child , Humans , Depression/diagnosis , Depression/prevention & control , Depressive Disorder, Major/diagnosis , Primary Health Care , Research Design
17.
Trials ; 25(1): 96, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38287439

ABSTRACT

BACKGROUND: Despite the critical importance of clinical trials to provide evidence about the effects of interventions for children and youth, a paucity of published high-quality pediatric clinical trials persists. Sub-optimal reporting of key trial elements necessary to critically appraise and synthesize findings is prevalent. To harmonize and provide guidance for reporting in pediatric controlled clinical trial protocols and reports, reporting guideline extensions to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) and Consolidated Standards of Reporting Trials (CONSORT) guidelines specific to pediatrics are being developed: SPIRIT-Children (SPIRIT-C) and CONSORT-Children (CONSORT-C). METHODS: The development of SPIRIT-C/CONSORT-C will be informed by the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) method for reporting guideline development in the following stages: (1) generation of a preliminary list of candidate items, informed by (a) items developed during initial development efforts and child-relevant items from recently published SPIRIT and CONSORT extensions, (b) two systematic reviews and an environmental scan of the literature, and (c) workshops with young people; (2) an international Delphi study, where a wide range of panelists will vote on the inclusion or exclusion of candidate items on a nine-point Likert scale; (3) a consensus meeting to discuss items that have not reached consensus in the Delphi study and to "lock" the checklist items; (4) pilot testing of items and definitions to ensure that they are understandable, useful, and applicable; and (5) a final project meeting to discuss each item in the context of pilot test results. Key partners, including young people (ages 12-24 years) and family caregivers (e.g., parents) with lived experience of pediatric clinical trials, and individuals with expertise and involvement in pediatric trials, will be involved throughout the project. SPIRIT-C/CONSORT-C will be disseminated through publications, academic conferences, and endorsement by pediatric journals and relevant research networks and organizations. DISCUSSION: SPIRIT-C/CONSORT-C may serve as resources to facilitate the comprehensive reporting needed to understand pediatric clinical trial protocols and reports, which may improve transparency within pediatric clinical trials and reduce research waste. TRIAL REGISTRATION: The development of these reporting guidelines is registered with the EQUATOR Network: SPIRIT-Children ( https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-clinical-trials-protocols/#35 ) and CONSORT-Children ( https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-clinical-trials/#CHILD ).


Subject(s)
Checklist , Child Health , Humans , Child , Adolescent , Consensus , Research Design , Reference Standards
18.
Am J Epidemiol ; 193(2): 323-338, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-37689835

ABSTRACT

A goal of evidence synthesis for trials of complex interventions is to inform the design or implementation of novel versions of complex interventions by predicting expected outcomes with each intervention version. Conventional aggregate data meta-analyses of studies comparing complex interventions have limited ability to provide such information. We argue that evidence synthesis for trials of complex interventions should forgo aspirations of estimating causal effects and instead model the response surface of study results to 1) summarize the available evidence and 2) predict the average outcomes of future studies or in new settings. We illustrate this modeling approach using data from a systematic review of diabetes quality improvement (QI) interventions involving at least 1 of 12 QI strategy components. We specify a series of meta-regression models to assess the association of specific components with the posttreatment outcome mean and compare the results to conventional meta-analysis approaches. Compared with conventional approaches, modeling the response surface of study results can better reflect the associations between intervention components and study characteristics with the posttreatment outcome mean. Modeling study results using a response surface approach offers a useful and feasible goal for evidence synthesis of complex interventions that rely on aggregate data.
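One concrete form the modeling approach described above can take is a meta-regression of each study's posttreatment outcome mean on indicator variables for the intervention components, weighted by inverse variance. The sketch below is a simplified illustration with invented data and component names, not the authors' actual model or the review's data:

```python
import numpy as np
import statsmodels.api as sm

# One row per study: indicators for two hypothetical QI strategy components.
components = np.array([
    [1, 0],   # audit & feedback only
    [1, 1],   # audit & feedback + clinician education
    [0, 1],   # clinician education only
    [0, 0],   # neither component
    [1, 1],
])
post_mean = np.array([7.4, 7.1, 7.6, 7.9, 7.0])      # e.g., mean posttreatment HbA1c (%)
variance = np.array([0.04, 0.06, 0.05, 0.03, 0.07])  # variance of each study-level mean

X = sm.add_constant(components)                      # intercept + component indicators
fit = sm.WLS(post_mean, X, weights=1.0 / variance).fit()
print(fit.params)  # estimated shift in the posttreatment mean associated with each component
```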

19.
J Clin Epidemiol ; 166: 111229, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38052277

ABSTRACT

OBJECTIVES: To determine the reproducibility of biomedical systematic review search strategies. STUDY DESIGN AND SETTING: A cross-sectional reproducibility study was conducted on a random sample of 100 systematic reviews indexed in MEDLINE in November 2021. The primary outcome measure is the percentage of systematic reviews for which all database searches can be reproduced, operationalized as fulfilling six key Preferred Reporting Items for Systematic reviews and Meta-Analyses literature search extension (PRISMA-S) reporting guideline items and having all database searches reproduced within 10% of the number of original results. Key reporting guideline items included database name, multi-database searching, full search strategies, limits and restrictions, date(s) of searches, and total records. RESULTS: The 100 systematic review articles contained 453 database searches. Only 22 (4.9%) database searches reported all six PRISMA-S items. Forty-seven (10.4%) database searches could be reproduced within 10% of the number of results from the original search; six searches differed by more than 1,000% between the originally reported number of results and the reproduction. Only one systematic review article provided the necessary search details to be fully reproducible. CONCLUSION: Systematic review search reporting is poor. To correct this will require a multifaceted response from authors, peer reviewers, journal editors, and database providers.
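The reproducibility criterion above ("within 10% of the number of original results") is a simple relative-difference check; a minimal sketch with invented search names and result counts:

```python
def within_tolerance(original: int, reproduced: int, tolerance: float = 0.10) -> bool:
    """True if the reproduced result count is within +/- tolerance of the original count."""
    return abs(reproduced - original) <= tolerance * original

# Invented examples; not searches from the reviewed sample.
searches = [
    ("MEDLINE search", 1250, 1302),
    ("Embase search", 980, 2210),
]
for name, original, reproduced in searches:
    ok = within_tolerance(original, reproduced)
    print(f"{name}: original={original}, reproduced={reproduced}, within 10%: {ok}")
```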


Subject(s)
Research Design , Systematic Reviews as Topic , Cross-Sectional Studies , Databases, Factual , MEDLINE , Reproducibility of Results
20.
J Clin Epidemiol ; 165: 111208, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37939742

ABSTRACT

OBJECTIVES: To investigate the extent to which articles of economic evaluations of healthcare interventions indexed in MEDLINE incorporate research practices that promote transparency, openness, and reproducibility. STUDY DESIGN AND SETTING: We evaluated a random sample of health economic evaluations indexed in MEDLINE during 2019. We included articles written in English reporting an incremental cost-effectiveness ratio in terms of costs per life years gained, quality-adjusted life years, and/or disability-adjusted life years. Reproducible research practices, openness, and transparency in each article were extracted in duplicate. We explored whether reproducible research practices were associated with self-reported use of a guideline. RESULTS: We included 200 studies published in 147 journals. Almost half were published as open access articles (n = 93; 47%). Most studies (n = 150; 75%) were model-based economic evaluations. In 109 (55%) studies, authors self-reported use of a guideline (e.g., for study conduct or reporting). Few studies (n = 31; 16%) reported working from a protocol. In 112 (56%) studies, authors reported the data needed to recreate the incremental cost-effectiveness ratio for the base case analysis. This percentage was higher in studies using a guideline than in studies not using a guideline (72/109 [66%] with guideline vs. 40/91 [44%] without guideline; risk ratio 1.50, 95% confidence interval 1.15-1.97). Only 10 (5%) studies mentioned access to raw data and analytic code for reanalyses. CONCLUSION: Transparency, openness, and reproducible research practices are frequently underused in health economic evaluations. This study provides baseline data against which to compare future progress in the field.
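The guideline versus no-guideline comparison above is a risk ratio with a confidence interval. A short check using the counts reported in the abstract (72/109 vs. 40/91) and a standard log-scale Wald interval (an assumption about the exact method used) closely reproduces the stated RR 1.50 (95% CI 1.15-1.97):

```python
from math import exp, log, sqrt

a, n1 = 72, 109   # guideline group: studies reporting the data, total
b, n2 = 40, 91    # no-guideline group: studies reporting the data, total

rr = (a / n1) / (b / n2)
se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)          # standard error of log(RR)
lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~1.50 (1.15-1.96) with this method
```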


Subject(s)
Delivery of Health Care , Research Design , Humans , Cost-Benefit Analysis , Reproducibility of Results , Quality-Adjusted Life Years