Results 1 - 20 of 202
1.
Article in English | MEDLINE | ID: mdl-39278616

ABSTRACT

OBJECTIVES: The task of writing structured content reviews and guidelines has grown in scale and complexity. We propose to go beyond search tools, toward curation tools, by automating the time-consuming and repetitive steps of extracting and organizing information. METHODS: SciScribe is built as an extension of IBM's Deep Search platform, which provides document processing and search capabilities. The platform was used to ingest and search full-content publications from PubMed Central (PMC) and official, structured records from the ClinicalTrials and OpenPayments databases. Author names and NCT numbers mentioned within the publications were used to link publications to these official records as context. Search strategies include traditional keyword-based search as well as natural-language question answering via large language models (LLMs). RESULTS: SciScribe is a web-based tool that accelerates literature reviews through four key features: (1) accumulating a personal collection from publication sources such as PMC; (2) incorporating contextual information from external databases into the presented papers, promoting a more informed assessment by readers; (3) semantic question answering over a single document to quickly assess its relevance and hierarchical organization; and (4) semantic question answering across every document in a collection, with the answers collated into tables. CONCLUSIONS: Emergent language-processing techniques open new avenues to accelerate and enhance the literature review process, for which we have demonstrated a use-case implementation within cardiac surgery. SciScribe automates and accelerates this process, mitigates errors associated with repetition and fatigue, and contextualizes results instantly by linking relevant external data sources.
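The NCT-number linking step described above lends itself to a small illustration. The sketch below is an assumption about how such record linking could work, not SciScribe's actual code: it extracts registry identifiers with a regular expression and maps them to ClinicalTrials.gov study URLs.

```python
import re

# ClinicalTrials.gov registry IDs have the form "NCT" followed by 8 digits.
NCT_PATTERN = re.compile(r"\bNCT\d{8}\b")

def extract_trial_links(text: str) -> dict:
    """Return a ClinicalTrials.gov URL for every NCT number mentioned in `text`."""
    return {nct: f"https://clinicaltrials.gov/study/{nct}"
            for nct in sorted(set(NCT_PATTERN.findall(text)))}

# Hypothetical abstract snippet; the NCT number is invented for illustration.
abstract = "Outcomes were reported for the pivotal trial (NCT01234567)."
print(extract_trial_links(abstract))
# {'NCT01234567': 'https://clinicaltrials.gov/study/NCT01234567'}
```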

2.
Front Cardiovasc Med ; 11: 1438556, 2024.
Article in English | MEDLINE | ID: mdl-39253389

ABSTRACT

Background: Patients with prior cardiac surgery undergoing repair of acute type A aortic dissection (ATAAD) are thought to have worse clinical outcomes than patients without prior cardiac surgery. Aim: To compare the safety and efficacy of ATAAD repair in patients with and without prior cardiac surgery. Methods: We systematically searched PubMed, Cochrane Library, and Google Scholar from database inception until April 2024. We included nine studies comprising 524 patients in the prior-surgery group and 5,249 in the non-prior-surgery group. Our primary outcome was mortality. Secondary outcomes included reoperation for bleeding, myocardial infarction, stroke, renal failure, sternal wound infection, cardiopulmonary bypass (CPB) time, cross-clamp time, hospital stay, and ICU stay. Results: Our pooled estimate shows a significantly lower rate of mortality in the non-prior cardiac surgery group compared with the prior cardiac surgery group (RR = 0.60, 95% CI = 0.48-0.74). Among the secondary outcomes, the rate of reoperation for bleeding was significantly lower in the non-prior cardiac surgery group (RR = 0.66, 95% CI = 0.50-0.88). Additionally, the non-prior cardiac surgery group had significantly shorter CPB time (MD = -31.06, 95% CI = -52.20 to -9.93) and cross-clamp time (MD = -21.95, 95% CI = -42.65 to -1.24). All other secondary outcomes showed no statistically significant differences. Conclusion: Patients with prior cardiac surgery undergoing ATAAD repair have higher mortality, more reoperations for bleeding, and longer CPB and cross-clamp times than patients without prior cardiac surgery. Tailored strategies are needed to improve outcomes in this high-risk group.
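For readers unfamiliar with the effect measure reported here, the following sketch shows how a single study's risk ratio (RR) and its 95% confidence interval are derived from 2x2 counts; the counts are invented for illustration, and the pooled estimates above combine many such study-level ratios.

```python
import math

def risk_ratio(events_a, total_a, events_b, total_b):
    """RR of group A vs group B, with a 95% CI computed on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of ln(RR) for a 2x2 table.
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
    return rr, lo, hi

# e.g. 30 deaths / 500 non-prior-surgery vs 25 deaths / 250 prior-surgery
rr, lo, hi = risk_ratio(30, 500, 25, 250)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.60 (0.36-1.00)
```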

3.
JMIR AI ; 3: e56537, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39159446

ABSTRACT

BACKGROUND: With the rapid evolution of artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT-4 (OpenAI), there is increasing interest in their potential to assist in scholarly tasks, including conducting literature reviews. However, the efficacy of AI-generated reviews compared with traditional human-led approaches remains underexplored. OBJECTIVE: This study aims to compare the quality of literature reviews conducted with the ChatGPT-4 model against those conducted by human researchers, focusing on the relational dynamics between physicians and patients. METHODS: We compared 2 literature reviews on the same topic, namely, factors affecting relational dynamics between physicians and patients in medicolegal contexts. One review used GPT-4 (training data last updated in September 2021), and the other was conducted by human researchers. The human review involved a comprehensive literature search using medical subject headings and keywords in Ovid MEDLINE, followed by a thematic analysis to synthesize information from selected articles. The AI-generated review used a new prompt engineering approach, with iterative and sequential prompts to generate results. The comparative analysis was based on qualitative measures: accuracy, response time, consistency, breadth and depth of knowledge, contextual understanding, and transparency. RESULTS: GPT-4 rapidly produced an extensive list of relational factors. The AI model demonstrated an impressive breadth of knowledge but exhibited limitations in depth and contextual understanding, occasionally producing irrelevant or incorrect information. In comparison, human researchers provided a more nuanced and contextually relevant review. While GPT-4 showed advantages in response time and breadth of knowledge, the human-led review excelled in accuracy, depth of knowledge, and contextual understanding. CONCLUSIONS: The study suggests that GPT-4, with structured prompt engineering, can be a valuable tool for conducting preliminary literature reviews by quickly providing a broad overview of a topic. However, its limitations necessitate careful expert evaluation and refinement, making it an assistant rather than a substitute for human expertise in comprehensive literature reviews. This research also highlights the potential and limitations of AI tools like GPT-4 in academic research, particularly in health services and medical research, and underscores the necessity of combining AI's rapid information retrieval with human expertise for accurate and contextually rich scholarly outputs.
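The study's "iterative and sequential prompts" are not published verbatim, but one plausible shape of such a pipeline is sketched below, feeding each answer back as context for the next step. The model name and prompts are illustrative assumptions; the snippet uses the openai Python client and requires an API key.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    """Send the running conversation to the model and return its reply text."""
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

# Step 1: broad question to elicit an initial list of factors.
history = [{"role": "user", "content":
            "List factors affecting physician-patient relational dynamics "
            "in medicolegal contexts."}]
factors = ask(history)

# Step 2 (sequential): reuse the first answer to drive a focused follow-up.
history += [{"role": "assistant", "content": factors},
            {"role": "user", "content":
             "For each factor above, summarise supporting evidence and note "
             "where the evidence is weak or contested."}]
print(ask(history))
```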

4.
Medwave ; 24(05): e2781, 30-06-2024.
Article in English | LILACS-Express | LILACS | ID: biblio-1570695

ABSTRACT

Introduction: Updating recommendations for guidelines requires a comprehensive and efficient literature search. Although new information platforms are available for developing groups, their relative contributions to this purpose remain uncertain. Methods: As part of a review/update of eight selected evidence-based recommendations for type 2 diabetes, we evaluated the following five literature search approaches (targeting systematic reviews, using predetermined criteria): PubMed for MEDLINE, Epistemonikos database basic search, Epistemonikos database using a structured search strategy, Living overview of evidence (L.OVE) platform, and TRIP database. Three reviewers independently classified the retrieved references as definitely eligible, probably eligible, or not eligible. Those falling in the same "definitely" categories for all reviewers were labelled as "true" positives/negatives. The rest went to re-assessment and, if found eligible/not eligible by consensus, became "false" negatives/positives, respectively. We described the yield of each approach and computed "diagnostic accuracy" measures and agreement statistics. Results: Altogether, the five approaches identified 318 to 505 references for the eight recommendations, of which reviewers considered 4.2 to 9.4% eligible after the two rounds. While PubMed outperformed the other approaches (diagnostic odds ratio 12.5 versus 2.6 to 5.3), no single search approach returned eligible references for all recommendations. Individually, searches found up to 40% of all eligible references (n = 71), and no combination of any three approaches could find over 80% of them. Kappa statistics for retrieval between searches were very poor (9 out of 10 paired comparisons did not surpass the chance-expected agreement). Conclusion: Among the information platforms assessed, PubMed appeared to be the most efficient for updating this set of recommendations. However, the very poor agreement among search approaches in the references they yield demands that developing groups draw on several (probably more than three) sources for this purpose. Further research is needed to replicate our findings and enhance our understanding of how to update recommendations efficiently.
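Two of the statistics reported here can be made concrete with a short worked example. The sketch below computes a diagnostic odds ratio from retrieval counts and a Cohen's kappa for between-search agreement; all counts are invented for illustration and are not the study's data.

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Odds of a search retrieving an eligible reference vs an ineligible one."""
    return (tp * tn) / (fp * fn)

def cohens_kappa(a_only, b_only, both, neither):
    """Chance-corrected agreement between two searches on which references
    they retrieve, counted over the union of candidate references."""
    total = a_only + b_only + both + neither
    po = (both + neither) / total           # observed agreement
    pa = (both + a_only) / total            # search A retrieval rate
    pb = (both + b_only) / total            # search B retrieval rate
    pe = pa * pb + (1 - pa) * (1 - pb)      # agreement expected by chance
    return (po - pe) / (1 - pe)

print(diagnostic_odds_ratio(tp=28, fp=290, fn=3, tn=180))        # ~5.8
print(cohens_kappa(a_only=40, b_only=35, both=10, neither=420))  # ~0.13: poor
```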



5.
Medwave ; 24(5): e2781, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38885522

Subject(s)
Diabetes Mellitus, Type 2 ; Evidence-Based Medicine ; Practice Guidelines as Topic ; Humans ; Colombia ; Databases, Bibliographic ; Information Storage and Retrieval/methods ; Information Storage and Retrieval/standards
6.
ArXiv ; 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38903741

ABSTRACT

Searching for a related article based on a reference article is an integral part of scientific research. PubMed, like many academic search engines, has a "similar articles" feature that recommends articles relevant to the one a user is currently viewing. Explaining recommended items can be of great utility to users, particularly in the literature search process. With more than a million biomedical papers published each year, explaining recommended similar articles would help researchers and clinicians search for related work. Nonetheless, the majority of current literature recommendation systems lack explanations for their suggestions. We employ a post hoc approach to explaining recommendations by identifying relevant tokens in the titles of similar articles. Our major contribution is building PubCLogs by repurposing 5.6 million pairs of coclicked articles from PubMed's user query logs. Using our PubCLogs dataset, we train Highlight Similar Article Title (HSAT), a transformer-based model designed to select the most relevant parts of a similar article's title, based on the title and abstract of a seed article. HSAT demonstrates strong performance in our empirical evaluations, achieving an F1 score of 91.72 percent on the PubCLogs test set, considerably outperforming several baselines including BM25 (70.62), MPNet (67.11), MedCPT (62.22), GPT-3.5 (46.00), and GPT-4 (64.89). Additional evaluations on a separate, manually annotated test set further verify HSAT's performance. Moreover, participants in our user study indicated a preference for HSAT, due to its superior balance between conciseness and comprehensiveness. Our study suggests that repurposing the user query logs of academic search engines can be a promising way to train state-of-the-art models for explaining literature recommendations.
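The reported F1 can be understood as overlap between the predicted and gold sets of "relevant" title tokens. The sketch below is one plausible reading of such a token-selection metric, not the paper's actual evaluation code; the example tokens are invented.

```python
def token_f1(predicted: set, gold: set) -> float:
    """F1 overlap between predicted and gold relevant-token sets."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)          # tokens highlighted in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {"aortic", "dissection", "outcomes"}
pred = {"aortic", "dissection", "repair"}
print(f"{token_f1(pred, gold):.2f}")  # 0.67: two of three highlights match
```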

7.
Eur Urol Focus ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38876943

ABSTRACT

BACKGROUND: Defining optimal therapeutic sequencing strategies in prostate cancer (PC) is challenging and may be assisted by artificial intelligence (AI)-based tools for analysis of the medical literature. OBJECTIVE: To demonstrate that INSIDE PC can help clinicians query the literature on therapeutic sequencing in PC, and to develop previously unestablished practices for evaluating the outputs of AI-based support platforms. DESIGN, SETTING, AND PARTICIPANTS: INSIDE PC was developed by customizing the PubMed Bidirectional Encoder Representations from Transformers (PubMedBERT) model. Publications were ranked and aggregated for relevance using data visualization and analytics. PC experts assigned normalized discounted cumulative gain (nDCG) scores, reflecting ranking and relevance, to the publications returned by INSIDE PC and PubMed. INTERVENTION: INSIDE PC for AI-based semantic literature analysis. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: INSIDE PC was evaluated for relevance and accuracy on three test questions on the efficacy of therapeutic sequencing of systemic therapies in PC. RESULTS AND LIMITATIONS: In this initial evaluation, INSIDE PC outperformed PubMed for question 1 (novel hormonal therapy [NHT] followed by NHT) for the top five, ten, and 20 publications (nDCG score, +43, +33, and +30 percentage points [pps], respectively). For question 2 (NHT followed by poly[adenosine diphosphate ribose] polymerase inhibitors [PARPi]), INSIDE PC and PubMed performed similarly. For question 3 (NHT or PARPi followed by 177Lu-prostate-specific membrane antigen-617), INSIDE PC outperformed PubMed for the top five, ten, and 20 publications (+16, +4, and +5 pps, respectively). CONCLUSIONS: We applied INSIDE PC to develop standards for evaluating the performance of AI-based tools for literature extraction. INSIDE PC performed competitively with PubMed and can assist clinicians with therapeutic sequencing in PC. PATIENT SUMMARY: The medical literature is often very difficult for doctors and patients to search. In this report, we describe INSIDE PC, an artificial intelligence (AI) system created to help search articles published in medical journals and determine the best order of treatments for advanced prostate cancer much more quickly. We found that INSIDE PC works as well as PubMed, a widely used resource for searching and retrieving articles published in medical journals. Our work with INSIDE PC shows new ways in which AI can be used to search published articles in medical journals and how such systems might be evaluated to support shared decision-making.
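For reference, nDCG rewards placing highly relevant results early in a ranking and normalizes by the best possible ordering of the same results. The worked example below uses invented relevance grades (2 = highly relevant, 1 = partly relevant, 0 = irrelevant).

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: rank 0 gets log2(2) = 1, i.e. no discount."""
    return sum(rel / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = sorted(relevances, reverse=True)
    return dcg(relevances) / dcg(ideal) if any(relevances) else 0.0

# Top five results as returned, vs the same set perfectly ordered.
print(f"{ndcg([2, 0, 1, 2, 0]):.2f}")  # 0.89: good but imperfect ranking
```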

8.
Lab Anim ; 58(4): 369-373, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38872231

ABSTRACT

The search for 3R-relevant information is a prerequisite for any planned experimental approach that considers animal use. Such a literature search covers all methods to replace, reduce, and refine (3Rs) animal testing with the aim of improving animal welfare, and it requires intensive screening of literature databases reflecting the current state of knowledge in experimental biomedicine. We developed SMAFIRA, a freely available online tool that facilitates screening PubMed/MEDLINE for possible alternatives to animal testing. SMAFIRA employs state-of-the-art language models from the field of deep learning and provides relevant literature citations in ranked order, classified according to the experimental model used. This classification makes the search for alternative methods in the biomedical literature considerably more efficient. The tool is available at https://smafira.bf3r.de.
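The retrieval stage such a tool rests on can be illustrated with the public NCBI E-utilities API. The sketch below only fetches candidate PubMed IDs for a query; SMAFIRA's ranking and experimental-model classification are beyond this illustration, and the example query is invented.

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(query: str, retmax: int = 20) -> list:
    """Return PubMed IDs (PMIDs) matching `query` via the ESearch endpoint."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "retmax": retmax, "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# Candidate citations for an in-vitro alternative; these would then be ranked.
pmids = pubmed_search('"organ-on-a-chip" AND hepatotoxicity')
print(pmids)
```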


Subject(s)
Internet ; Animals ; Animal Testing Alternatives/methods ; Information Storage and Retrieval/methods ; Animal Welfare ; Software
9.
J Ayurveda Integr Med ; 15(4): 100996, 2024.
Article in English | MEDLINE | ID: mdl-38943905

ABSTRACT

The basic concepts of research are learned through systematic literature searches, which form the basis of a research statement and research topic; from these, the research question, hypothesis, aim, objectives, and experimental design are developed. The primary focus here is the importance of adequately training postgraduates and young research investigators in research methodology and project development. There is a clear lack of proper training in these areas, and the rapid expansion of colleges in India exacerbates the issue. To address this, research students must receive comprehensive instruction in scientific research methodology, experimental design, statistics, scientific writing, publishing, and research ethics. Our team has conducted workshops and symposia for more than two decades to improve teaching in these areas. Most recently, we organized a series of national and international workshops and seminars across multiple states in India to fortify the core concepts of scientific research for students and faculty members. This report highlights the key aspects of these workshops and the positive outcomes reported by participants.

10.
Nurs Stand ; 39(7): 46-49, 2024 07 05.
Article in English | MEDLINE | ID: mdl-38712355

ABSTRACT

RATIONALE AND KEY POINTS: Scoping reviews have become a popular approach for exploring what literature has been published on a particular field of interest. They can enable nurses to gain an overview of the contemporary evidence base relating to, for example, a practice area, treatment or specific patient demographic. This article provides a concise guide for nurses planning to undertake a scoping review, explaining the various steps involved. REFLECTIVE ACTIVITY: 'How to' articles can help to update your practice and ensure it remains evidence-based. Apply this article to your practice. Reflect on and write a short account of:
• How this article might improve your practice when undertaking a scoping review.
• How you could use this information to educate nursing students and colleagues on the appropriate techniques and evidence base required for scoping the literature.


Subject(s)
Review Literature as Topic ; Humans