2.
Ann Fam Med; 22(2): 113-120, 2024.
Article in English | MEDLINE | ID: mdl-38527823

ABSTRACT

PURPOSE: Worldwide clinical knowledge is expanding rapidly, but physicians have sparse time to review scientific literature. Large language models (eg, Chat Generative Pretrained Transformer [ChatGPT]) might help summarize and prioritize research articles to review. However, large language models sometimes "hallucinate" incorrect information.

METHODS: We evaluated ChatGPT's ability to summarize 140 peer-reviewed abstracts from 14 journals. Physicians rated the quality, accuracy, and bias of the ChatGPT summaries. We also compared human ratings of relevance to various areas of medicine with ChatGPT's relevance ratings.

RESULTS: ChatGPT produced summaries that were 70% shorter than the original abstracts (mean length decreased from 2,438 to 739 characters). Summaries were nevertheless rated as high quality (median score 90, interquartile range [IQR] 87.0-92.5 on a 0-100 scale), high accuracy (median 92.5, IQR 89.0-95.0), and low bias (median 0, IQR 0-7.5). Serious inaccuracies and hallucinations were uncommon. ChatGPT's classification of the relevance of entire journals to various fields of medicine closely mirrored physician classifications (nonlinear standard error of the regression [SER] 8.6 on a 0-100 scale), but its relevance classification for individual articles agreed far less closely (SER 22.3).

CONCLUSIONS: Summaries generated by ChatGPT were 70% shorter than the mean abstract and were characterized by high quality, high accuracy, and low bias. Conversely, ChatGPT had only modest ability to classify the relevance of articles to medical specialties. We suggest that ChatGPT can help family physicians accelerate review of the scientific literature, and we have developed software (pyJournalWatch) to support this application. Life-critical medical decisions should remain based on full, critical, and thoughtful evaluation of the full text of research articles in context with clinical guidelines.


Subjects
Medicine, Humans, Physicians, Family
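The summarization workflow described in the abstract above can be illustrated with a short sketch. The code below is not the authors' published pyJournalWatch implementation; it only shows, assuming an OpenAI-style chat-completions client, how a single abstract could be condensed and its compression measured against the roughly 70% reduction reported in the study. The model name and prompt wording are illustrative assumptions.

```python
# Minimal sketch (not the published pyJournalWatch code): summarizing one
# journal abstract with an OpenAI-style chat-completions API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_abstract(abstract: str) -> str:
    """Ask the model for a short plain-language summary of one abstract."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption; any chat-capable model could be used
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize peer-reviewed medical abstracts for a "
                    "family physician in two or three sentences."
                ),
            },
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    abstract = open("abstract.txt").read()
    summary = summarize_abstract(abstract)
    # The study reports ~70% compression (mean 2,438 -> 739 characters).
    print(f"compression: {1 - len(summary) / len(abstract):.0%}")
    print(summary)
```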
3.
Kans J Med; 16: 294-296, 2023.
Article in English | MEDLINE | ID: mdl-38076615
4.
JAMIA Open; 6(2): ooad026, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37063406

ABSTRACT

Objective: To assess the accuracy of the COVID-19 vaccination status recorded in the electronic health record (EHR) for a panel of patients in a primary care practice when manual queries of the state immunization databases are required to access outside immunization records.

Materials and Methods: This study evaluated the COVID-19 vaccination status of adult primary care patients within a university-based health system EHR by manually querying the Kansas and Missouri Immunization Information Systems.

Results: A manual query of the local Immunization Information Systems for 4,114 adult patients with "unknown" vaccination status showed that 44% of these patients had previously been vaccinated. Attempts to assess the comprehensiveness of the Immunization Information Systems were hampered by incomplete documentation in the chart and poor response to patient outreach.

Conclusions: When the interface between the patient chart and the local Immunization Information System depends on a manual query for the transfer of data, the COVID-19 vaccination status for a panel of patients is often inaccurate.

5.
Iperception; 12(2): 2041669521994150, 2021.
Article in English | MEDLINE | ID: mdl-35145614

ABSTRACT

Visual crowding, the impairment of object recognition in peripheral vision due to flanking objects, has generally been studied using simple stimuli on blank backgrounds. While crowding is widely assumed to occur in natural scenes, it has not been shown rigorously yet. Given that scene contexts can facilitate object recognition, crowding effects may be dampened in real-world scenes. Therefore, this study investigated crowding using objects in computer-generated real-world scenes. In two experiments, target objects were presented with four flanker objects placed uniformly around the target. Previous research indicates that crowding occurs when the distance between the target and flanker is approximately less than half the retinal eccentricity of the target. In each image, the spacing between the target and flanker objects was varied considerably above or below the standard (0.5) threshold to either suppress or facilitate the crowding effect. Experiment 1 cued the target location and then briefly flashed the scene image before participants could move their eyes. Participants then selected the target object's category from a 15-alternative forced choice response set (including all objects shown in the scene). Experiment 2 used eye tracking to ensure participants were centrally fixating at the beginning of each trial and showed the image for the duration of the participant's fixation. Both experiments found object recognition accuracy decreased with smaller spacing between targets and flanker objects. Thus, this study rigorously shows crowding of objects in semantically consistent real-world scenes.
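The spacing rule used to manipulate crowding in this abstract (flankers closer than about half the target's retinal eccentricity) is easy to make concrete. The sketch below is only an illustration of that rule of thumb; the function and variable names are mine, not the study's, and the 0.5 factor is the standard threshold cited in the abstract.

```python
# Illustrative sketch of the critical-spacing rule described above:
# flankers closer than about half the target's eccentricity are expected
# to crowd. Names and values here are illustrative, not the study's code.

def critical_spacing(eccentricity_deg: float, threshold: float = 0.5) -> float:
    """Approximate critical target-flanker spacing in degrees of visual angle."""
    return threshold * eccentricity_deg


def is_crowded(spacing_deg: float, eccentricity_deg: float) -> bool:
    """True when the flanker spacing falls below the critical spacing."""
    return spacing_deg < critical_spacing(eccentricity_deg)


# Example: a target 10 degrees into the periphery has a critical spacing of
# about 5 degrees, so flankers placed at 3 degrees should crowd while
# flankers at 8 degrees should not -- the kind of manipulation used to
# suppress or facilitate crowding in both experiments.
print(is_crowded(3.0, 10.0))  # True  (below the 0.5 * eccentricity threshold)
print(is_crowded(8.0, 10.0))  # False (above the threshold)
```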
