Results 1 - 20 of 211
1.
Hu Li Za Zhi ; 71(5): 7-13, 2024 Oct.
Article in Chinese | MEDLINE | ID: mdl-39350704

ABSTRACT

Artificial intelligence (AI) is driving global change, and the implementation of generative AI in higher education is inevitable. AI language models such as the chat generative pre-trained transformer (ChatGPT) hold the potential to revolutionize the delivery of nursing education in the future. Nurse educators play a crucial role in preparing nursing students for a future technology-integrated healthcare system. While the technology has limitations and potential biases, the emergence of ChatGPT presents both opportunities and challenges. It is critical for faculty to be familiar with the capabilities and limitations of this model to foster effective, ethical, and responsible utilization of AI technology while preparing students in advance for the dynamic and rapidly advancing landscape of nursing and healthcare. Therefore, this article was written to present a strengths, weaknesses, opportunities, and threats (SWOT) analysis of integrating ChatGPT into nursing education, providing a guide for implementing ChatGPT in nursing education and offering a well-rounded assessment to help nurse educators make informed decisions.


Subjects
Artificial Intelligence, Nursing Education, Humans
2.
Front Artif Intell ; 7: 1393903, 2024.
Article in English | MEDLINE | ID: mdl-39351510

ABSTRACT

Introduction: Recent advances in generative Artificial Intelligence (AI) and Natural Language Processing (NLP) have led to the development of Large Language Models (LLMs) and AI-powered chatbots like ChatGPT, which have numerous practical applications. Notably, these models assist programmers with coding queries, debugging, solution suggestions, and providing guidance on software development tasks. Despite known issues with the accuracy of ChatGPT's responses, its comprehensive and articulate language continues to attract frequent use. This indicates potential for ChatGPT to support educators and serve as a virtual tutor for students. Methods: To explore this potential, we conducted a comprehensive analysis comparing the emotional content in responses from ChatGPT and human answers to 2000 questions sourced from Stack Overflow (SO). The emotional aspects of the answers were examined to understand how the emotional tone of AI responses compares to that of human responses. Results: Our analysis revealed that ChatGPT's answers are generally more positive compared to human responses. In contrast, human answers often exhibit emotions such as anger and disgust. Significant differences were observed in emotional expressions between ChatGPT and human responses, particularly in the emotions of anger, disgust, and joy. Human responses displayed a broader emotional spectrum compared to ChatGPT, suggesting greater emotional variability among humans. Discussion: The findings highlight a distinct emotional divergence between ChatGPT and human responses, with ChatGPT exhibiting a more uniformly positive tone and humans displaying a wider range of emotions. This variance underscores the need for further research into the role of emotional content in AI and human interactions, particularly in educational contexts where emotional nuances can impact learning and communication.
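The emotional comparison described above boils down to scoring each answer against an emotion vocabulary and contrasting the resulting profiles. The study's actual pipeline is not detailed in this abstract, so the following is a minimal lexicon-based sketch with a tiny invented word list (real analyses use resources such as the NRC emotion lexicon):

```python
from collections import Counter

# Tiny illustrative emotion lexicon (hypothetical; real studies use far
# larger resources such as the NRC emotion lexicon).
EMOTION_LEXICON = {
    "thanks": "joy", "great": "joy", "works": "joy",
    "broken": "anger", "stupid": "anger",
    "ugly": "disgust", "mess": "disgust",
}

def emotion_profile(text: str) -> Counter:
    """Count lexicon emotions appearing in an answer (case/punct-insensitive)."""
    tokens = (t.strip(".,!?") for t in text.lower().split())
    return Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)

# Invented example answers, echoing the paper's finding that AI answers
# skew positive while human answers show anger and disgust.
ai_answer = "Great question! This works if you adjust the flag. Thanks!"
human_answer = "This API is a broken mess and the docs are ugly."

print(emotion_profile(ai_answer))     # joy-dominated
print(emotion_profile(human_answer))  # anger and disgust present
```

Aggregating such profiles over thousands of paired answers is what allows the per-emotion significance tests the abstract reports.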

3.
JMIR Form Res ; 8: e51383, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39353189

ABSTRACT

BACKGROUND: Generative artificial intelligence (AI) and large language models, such as OpenAI's ChatGPT, have shown promising potential in supporting medical education and clinical decision-making, given their vast knowledge base and natural language processing capabilities. As a general-purpose AI system, ChatGPT can complete a wide range of tasks, including differential diagnosis, without additional training. However, the specific application of ChatGPT to learning and performing a series of specialized, context-specific tasks that mimic the workflow of a human assessor, such as administering a standardized assessment questionnaire, entering the results in a standardized form, and interpreting the results strictly according to credible, published scoring criteria, has not been thoroughly studied. OBJECTIVE: This exploratory study aims to evaluate and optimize ChatGPT's capabilities in administering and interpreting the Sour Seven Questionnaire, an informant-based delirium assessment tool. Specifically, the objectives were to train ChatGPT-3.5 and ChatGPT-4 to understand and correctly apply the Sour Seven Questionnaire to clinical vignettes using prompt engineering, assess the performance of these AI models in identifying and scoring delirium symptoms against scores from human experts, and refine and enhance the models' interpretation and reporting accuracy through iterative prompt optimization. METHODS: We used prompt engineering to train ChatGPT-3.5 and ChatGPT-4 models on the Sour Seven Questionnaire, a tool for assessing delirium through caregiver input. Prompt engineering is a methodology used to enhance the AI's processing of inputs by meticulously structuring the prompts to improve accuracy and consistency in outputs. In this study, prompt engineering involved creating specific, structured commands that guided the AI models in understanding and applying the assessment tool's criteria accurately to clinical vignettes.
This approach also included designing prompts to explicitly instruct the AI on how to format its responses, ensuring they were consistent with clinical documentation standards. RESULTS: Both ChatGPT models demonstrated promising proficiency in applying the Sour Seven Questionnaire to the vignettes, despite initial inconsistencies and errors. Performance notably improved through iterative prompt engineering, enhancing the models' capacity to detect delirium symptoms and assign scores. Prompt optimizations included adjusting the scoring methodology to accept only definitive "Yes" or "No" responses, revising the evaluation prompt to mandate responses in a tabular format, and guiding the models to adhere to the 2 recommended actions specified in the Sour Seven Questionnaire. CONCLUSIONS: Our findings provide preliminary evidence supporting the potential utility of AI models such as ChatGPT in administering standardized clinical assessment tools. The results highlight the significance of context-specific training and prompt engineering in harnessing the full potential of these AI models for health care applications. Despite the encouraging results, broader generalizability and further validation in real-world settings warrant additional research.


Subjects
Delirium, Humans, Delirium/diagnosis, Surveys and Questionnaires, Artificial Intelligence
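The prompt optimizations the study describes (definitive Yes/No answers only, mandatory tabular output) can be sketched as a prompt builder. The exact wording and item list below are illustrative, not the authors' published prompts:

```python
# Hypothetical sketch of the structured-prompt idea: force definitive
# Yes/No scoring and a tabular response, as the abstract describes.
SOUR_SEVEN_ITEMS = [  # abbreviated, illustrative item names only
    "Inattention",
    "Disorganized thinking",
    "Altered level of consciousness",
]

def build_assessment_prompt(vignette: str) -> str:
    """Assemble a constrained scoring prompt for a clinical vignette."""
    header = (
        "You are scoring a clinical vignette with the Sour Seven Questionnaire.\n"
        "For each item, answer ONLY 'Yes' or 'No' (no 'maybe' or free text).\n"
        "Return your answers as a table: Item | Yes/No | Points.\n"
    )
    items = "\n".join(f"- {item}" for item in SOUR_SEVEN_ITEMS)
    return f"{header}\nItems:\n{items}\n\nVignette:\n{vignette}"

prompt = build_assessment_prompt(
    "An 82-year-old inpatient is drowsy and cannot follow the conversation."
)
print(prompt)
```

Iterating on such constraints until the model's table matches expert scores is the "iterative prompt optimization" loop the abstract reports.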
4.
Small Methods ; : e2401108, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39359026

ABSTRACT

Transmission electron microscopy (TEM) plays a crucial role in heterogeneous catalysis for assessing the size distribution of supported metal nanoparticles. Typically, nanoparticle size is quantified by measuring the diameter under the assumption of spherical geometry, a simplification that limits the precision needed for advancing synthesis-structure-performance relationships. Currently, there is a lack of techniques that can reliably extract more meaningful information from atomically resolved TEM images, like nuclearity or geometry. Here, cycle-consistent generative adversarial networks (CycleGANs) are explored to bridge experimental and simulated images, directly linking experimental observations with information from their underlying atomic structure. Using the versatile Pt/CeO2 (Pt particles centered ≈2 nm) catalyst synthesized by impregnation, large datasets of experimental scanning transmission electron micrographs and physical image simulations are created to train a CycleGAN. A subsequent size-estimation network is developed to determine the nuclearity of imaged nanoparticles, providing plausible estimates for ≈70% of experimentally observed particles. This automatic approach enables precise size determination of supported nanoparticle-based catalysts, overcoming the crystal-orientation limitations of conventional techniques and promising high accuracy given sufficient training data. Tools like this are envisioned to be of great use in designing and characterizing catalytic materials with improved atomic precision.
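The core constraint that lets a CycleGAN bridge unpaired experimental and simulated images is the cycle-consistency loss: mapping an image to the other domain and back should reproduce the original. As a toy illustration (scalars and hand-picked linear maps stand in for the deep convolutional generators used on real micrographs):

```python
# Toy illustration of cycle consistency: G maps "experimental" values to
# "simulated" ones, F maps back, and the cycle loss penalizes any gap
# between F(G(x)) and x. Both generators here are invented linear maps.
def G(x: float) -> float:  # experimental -> simulated
    return 2.0 * x + 1.0

def F(y: float) -> float:  # simulated -> experimental (exact inverse of G)
    return (y - 1.0) / 2.0

def cycle_loss(xs) -> float:
    """Mean absolute reconstruction error |F(G(x)) - x| over a batch."""
    return sum(abs(F(G(x)) - x) for x in xs) / len(xs)

# Perfect inverses close the cycle, so the loss is ~0; in training, this
# term is minimized jointly with the adversarial losses of both domains.
print(cycle_loss([0.5, 1.2, 3.0]))
```

In the paper's setting, minimizing this loss ties each experimental micrograph to a simulation whose atomic structure (and hence nuclearity) is known.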

5.
Arthroplast Today ; 29: 101503, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39376670

ABSTRACT

Background: Discrepancies in medical data sets can perpetuate bias, especially when training deep learning models, potentially leading to biased outcomes in clinical applications. Understanding these biases is crucial for the development of equitable healthcare technologies. This study employs generative deep learning technology to explore and understand radiographic differences based on race among patients undergoing total hip arthroplasty. Methods: Utilizing a large institutional registry, we retrospectively analyzed pelvic radiographs from total hip arthroplasty patients, characterized by demographics and image features. Denoising diffusion probabilistic models generated radiographs conditioned on demographic and imaging characteristics. The Fréchet Inception Distance was used to assess the quality, diversity, and realism of the generated images. Sixty transition videos were generated that showed transforming White pelvises to their closest African American counterparts and vice versa while controlling for patients' sex, age, and body mass index. Two expert surgeons and 2 radiologists carefully studied these videos to understand the systematic differences that are present in the 2 races' radiographs. Results: Our data set included 480,407 pelvic radiographs, with a predominance of White patients over African Americans. The generative denoising diffusion probabilistic model created high-quality images and reached a Fréchet Inception Distance of 6.8. Experts identified 6 characteristics differentiating races, including interacetabular distance, osteoarthritis degree, obturator foramina shape, femoral neck-shaft angle, pelvic ring shape, and femoral cortical thickness. Conclusions: This study demonstrates the potential of generative models for understanding disparities in medical imaging data sets. By visualizing race-based differences, this method aids in identifying bias in downstream tasks, fostering the development of fairer healthcare practices.
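The Fréchet Inception Distance (FID) reported above compares the Gaussian statistics of real and generated image features. For full covariance matrices the formula is ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½); the sketch below implements the diagonal-covariance special case, where the matrix square root reduces to elementwise square roots (the feature values are invented, not the study's data):

```python
import math

def fid_diagonal(mu1, var1, mu2, var2) -> float:
    """Fréchet distance between two diagonal-covariance Gaussians:
    ||mu1 - mu2||^2 + sum_i (v1_i + v2_i - 2*sqrt(v1_i * v2_i)).
    Real FID uses full covariances of Inception-network features; the
    diagonal case keeps the formula readable."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical distributions -> distance 0; a shifted mean -> positive.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [3.0, 4.0], [1.0, 1.0]))  # 25.0
```

Lower FID means the generated-image feature distribution sits closer to the real one, which is why the study cites 6.8 as evidence of image quality.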

6.
Ann Nucl Med ; 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39320419

ABSTRACT

This review explores the potential applications of Large Language Models (LLMs) in nuclear medicine, especially nuclear medicine examinations such as PET and SPECT, reviewing recent advancements in both fields. Despite the rapid adoption of LLMs in various medical specialties, their integration into nuclear medicine has not yet been sufficiently explored. We first discuss the latest developments in nuclear medicine, including new radiopharmaceuticals, imaging techniques, and clinical applications. We then analyze how LLMs are being utilized in radiology, particularly in report generation, image interpretation, and medical education. We highlight the potential of LLMs to enhance nuclear medicine practices, such as improving report structuring, assisting in diagnosis, and facilitating research. However, challenges remain, including the need for improved reliability, explainability, and bias reduction in LLMs. The review also addresses the ethical considerations and potential limitations of AI in healthcare. In conclusion, LLMs have significant potential to transform existing frameworks in nuclear medicine, making it a critical area for future research and development.

7.
Front Artif Intell ; 7: 1415782, 2024.
Article in English | MEDLINE | ID: mdl-39263526

ABSTRACT

In this study, we aimed to explore the frequency of use and perceived usefulness of LLM generative AI chatbots (e.g., ChatGPT) for schoolwork, particularly in relation to adolescents' executive functioning (EF), which includes critical cognitive processes like planning, inhibition, and cognitive flexibility essential for academic success. Two studies were conducted, encompassing both younger (Study 1: N = 385, 46% girls, mean age 14 years) and older (Study 2: N = 359, 67% girls, mean age 17 years) adolescents, to comprehensively examine these associations across different age groups. In Study 1, approximately 14.8% of participants reported using generative AI, while in Study 2, the adoption rate among older students was 52.6%, with ChatGPT emerging as the preferred tool among adolescents in both studies. Consistently across both studies, we found that adolescents facing more EF challenges perceived generative AI as more useful for schoolwork, particularly in completing assignments. Notably, academic achievement showed no significant associations with AI usage or usefulness, as revealed in Study 1. This study represents the first exploration into how individual characteristics, such as EF, relate to the frequency and perceived usefulness of LLM generative AI chatbots for schoolwork among adolescents. Given the early stage of generative AI chatbots during the survey, future research should validate these findings and delve deeper into the utilization and integration of generative AI into educational settings. It is crucial to adopt a proactive approach to address the potential challenges and opportunities associated with these emerging technologies in education.

8.
PNAS Nexus ; 3(9): pgae387, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39290437

ABSTRACT

This article evaluated the effectiveness of using generative AI to simplify science communication and enhance the public's understanding of science. By comparing lay summaries of PNAS journal articles with AI-generated counterparts, this work first assessed differences in linguistic simplicity across such summaries and then examined public perceptions in follow-up experiments. Specifically, study 1a analyzed simplicity features of PNAS abstracts (scientific summaries) and significance statements (lay summaries), observing that lay summaries were indeed linguistically simpler, but effect size differences were small. Study 1b used a large language model, GPT-4, to create significance statements based on paper abstracts, and this more than doubled the average effect size without fine-tuning. Study 2 experimentally demonstrated that simply written generative pre-trained transformer (GPT) summaries facilitated more favorable perceptions of scientists (they were perceived as more credible and trustworthy, but less intelligent) than more complexly written human PNAS summaries. Crucially, study 3 experimentally demonstrated that participants comprehended scientific writing better after reading simple GPT summaries compared to complex PNAS summaries. In their own words, participants also summarized scientific papers in a more detailed and concrete manner after reading GPT summaries compared to PNAS summaries of the same article. AI has the potential to engage scientific communities and the public via a simple language heuristic, advocating for its integration into scientific dissemination for a more informed society.

9.
PNAS Nexus ; 3(9): pgae346, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39290441

ABSTRACT

Culture fundamentally shapes people's reasoning, behavior, and communication. As people increasingly use generative artificial intelligence (AI) to expedite and automate personal and professional tasks, cultural values embedded in AI models may bias people's authentic expression and contribute to the dominance of certain cultures. We conduct a disaggregated evaluation of cultural bias for five widely used large language models (OpenAI's GPT-4o/4-turbo/4/3.5-turbo/3) by comparing the models' responses to nationally representative survey data. All models exhibit cultural values resembling English-speaking and Protestant European countries. We test cultural prompting as a control strategy to increase cultural alignment for each country/territory. For later models (GPT-4, 4-turbo, 4o), this improves the cultural alignment of the models' output for 71-81% of countries and territories. We suggest using cultural prompting and ongoing evaluation to reduce cultural bias in the output of generative AI.
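"Cultural prompting," as described above, amounts to prepending a persona instruction that anchors the model's answers in a given country before posing the survey question. The wording below is an invented illustration, not the authors' exact prompt:

```python
# Minimal sketch of cultural prompting: prefix the survey question with
# an instruction to respond as a typical person from the target country.
# The persona wording here is illustrative, not the study's exact text.
def cultural_prompt(country: str, question: str) -> str:
    persona = (
        f"You are an average human being born and living in {country}. "
        "Respond to the following question as such a person would."
    )
    return f"{persona}\n\n{question}"

prompt = cultural_prompt(
    "Japan",
    "How important is family in your life? Answer on a scale of 1-4.",
)
print(prompt)
```

Comparing the model's answers with and without this prefix against nationally representative survey responses yields the per-country alignment scores the study reports.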

10.
Narra J ; 4(2): e917, 2024 08.
Article in English | MEDLINE | ID: mdl-39280327

ABSTRACT

Since its public release on November 30, 2022, ChatGPT has shown promising potential in diverse healthcare applications despite ethical challenges, privacy issues, and possible biases. The aim of this study was to identify and assess the most influential publications in the field of ChatGPT utility in healthcare using bibliometric analysis. The study employed an advanced search on three databases, Scopus, Web of Science, and Google Scholar, to identify ChatGPT-related records in healthcare education, research, and practice between November 27 and 30, 2023. The ranking was based on the retrieved citation count in each database. The additional alternative metrics that were evaluated included (1) Semantic Scholar highly influential citations, (2) PlumX captures, (3) PlumX mentions, (4) PlumX social media and (5) Altmetric Attention Scores (AASs). A total of 22 unique records published in 17 different scientific journals from 14 different publishers were identified in the three databases. Only two publications were in the top 10 list across the three databases. Variable publication types were identified, with the most common being editorial/commentary publications (n=8/22, 36.4%). Nine of the 22 records had corresponding authors affiliated with institutions in the United States (40.9%). The range of citation count varied per database, with the highest range identified in Google Scholar (1019-121), followed by Scopus (242-88), and Web of Science (171-23). Google Scholar citations were correlated significantly with the following metrics: Semantic Scholar highly influential citations (Spearman's correlation coefficient ρ=0.840, p<0.001), PlumX captures (ρ=0.831, p<0.001), PlumX mentions (ρ=0.609, p=0.004), and AASs (ρ=0.542, p=0.009). In conclusion, despite several acknowledged limitations, this study showed the evolving landscape of ChatGPT utility in healthcare. 
There is an urgent need for collaborative initiatives by all stakeholders involved to establish guidelines for ethical, transparent, and responsible use of ChatGPT in healthcare. The study revealed the correlation between citations and alternative metrics, highlighting its usefulness as a supplement to gauge the impact of publications, even in a rapidly growing research field.


Subjects
Bibliometrics, Humans, Social Media, Anniversaries and Special Events
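The Spearman correlations reported above compare the ranks of two metrics rather than their raw values. For samples without ties, ρ = 1 − 6·Σdᵢ² / (n(n² − 1)), where dᵢ is the rank difference for item i. A small self-contained sketch (the counts below are invented stand-ins, not the study's data):

```python
def spearman_rho(xs, ys) -> float:
    """Spearman's rank correlation for samples without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

citations = [1019, 242, 171, 121, 88]  # illustrative counts, not the data
captures = [900, 300, 150, 100, 90]    # perfectly monotone with citations
print(spearman_rho(citations, captures))  # 1.0
```

Because ρ depends only on rank order, it tolerates the heavy skew typical of citation counts, which is why it suits comparisons of citations against altmetrics.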
11.
Article in English | MEDLINE | ID: mdl-39278616

ABSTRACT

OBJECTIVES: The task of writing structured content reviews and guidelines has grown more demanding and complex. We propose to go beyond search tools, toward curation tools, by automating time-consuming and repetitive steps of extracting and organizing information. METHODS: SciScribe is built as an extension of IBM's Deep Search platform, which provides document processing and search capabilities. This platform was used to ingest and search full-content publications from PubMed Central (PMC) and official, structured records from the ClinicalTrials and OpenPayments databases. Author names and NCT numbers mentioned within the publications were used to link publications to these official records as context. Search strategies involve traditional keyword-based search as well as natural language question answering via large language models (LLMs). RESULTS: SciScribe is a web-based tool that helps accelerate literature reviews through key features: (1) accumulating a personal collection from publication sources such as PMC; (2) incorporating contextual information from external databases into the presented papers, promoting a more informed assessment by readers; (3) semantic question answering over a document to quickly assess its relevance and hierarchical organization; and (4) semantic question answering for each document within a collection, collated into tables. CONCLUSIONS: Emergent language processing techniques open new avenues to accelerate and enhance the literature review process, for which we have demonstrated a use case implementation within cardiac surgery. SciScribe automates and accelerates this process, mitigates errors associated with repetition and fatigue, and contextualizes results by instantaneously linking relevant external data sources.
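The NCT-number linking step described above relies on the fact that ClinicalTrials.gov identifiers follow a fixed pattern: "NCT" plus eight digits. SciScribe's actual extraction code is not published here, so the following is a minimal stand-in for that step:

```python
import re

# ClinicalTrials.gov identifiers are "NCT" followed by eight digits.
NCT_PATTERN = re.compile(r"\bNCT\d{8}\b")

def extract_nct_ids(text: str) -> list:
    """Return unique NCT identifiers in order of first appearance."""
    seen, out = set(), []
    for m in NCT_PATTERN.findall(text):
        if m not in seen:
            seen.add(m)
            out.append(m)
    return out

# Invented example passage.
passage = ("The trial (NCT01234567) enrolled 40 patients; follow-up is "
           "reported in NCT01234567 and the companion study NCT07654321.")
print(extract_nct_ids(passage))  # ['NCT01234567', 'NCT07654321']
```

Each extracted identifier can then be used as a key to fetch the official trial record and attach it to the publication as context.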

12.
Cureus ; 16(9): e69710, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39308847

ABSTRACT

This study introduces a novel methodology for enhancing intelligent tutoring systems (ITS) through the integration of generative artificial intelligence (GenAI) and specialized AI agents. We present a proof of concept (PoC) demo that implements a dual-layer GenAI validation approach that utilizes multiple large language models to ensure the reliability and pedagogical integrity of the AI-generated content. The system features role-specific AI agents, a GenAI-powered scoring mechanism, and an AI mentor that provides periodic guidance. This approach demonstrates capabilities in dynamic scenario generation and real-time adaptability while addressing key challenges in AI-driven education, such as personalization, scalability, and domain-specific knowledge integration. Although exemplified here through a case study in healthcare root cause analysis, the methodology is designed for broad applicability across various fields. Our findings suggest that this approach has significant potential for advancing adaptive learning and personalized instruction while raising important considerations regarding ethical AI application in education. This work provides a foundation for further research into the efficacy and impact of GenAI-enhanced ITS on learning outcomes and instructional design across diverse educational domains.

13.
Blood Purif ; : 1-13, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39217985

ABSTRACT

BACKGROUND: Generative artificial intelligence (AI) is rapidly transforming various aspects of healthcare, including critical care nephrology. Large language models (LLMs), a key technology in generative AI, show promise in enhancing patient care, streamlining workflows, and advancing research in this field. SUMMARY: This review analyzes the current applications and future prospects of generative AI in critical care nephrology. Recent studies demonstrate the capabilities of LLMs in diagnostic accuracy, clinical reasoning, and continuous renal replacement therapy (CRRT) alarm troubleshooting. As we enter an era of multiagent models and automation, the integration of generative AI into critical care nephrology holds promise for improving patient care, optimizing clinical processes, and accelerating research. However, careful consideration of ethical implications and continued refinement of these technologies are essential for their responsible implementation in clinical practice. This review explores the current and potential applications of generative AI in nephrology, focusing on clinical decision support, patient education, research, and medical education. Additionally, we examine the challenges and limitations of AI implementation, such as privacy concerns, potential bias, and the necessity for human oversight. KEY MESSAGES: (i) LLMs have shown potential in enhancing diagnostic accuracy, clinical reasoning, and CRRT alarm troubleshooting in critical care nephrology. (ii) Generative AI offers promising applications in patient education, literature review, and academic writing within the field of nephrology. (iii) The integration of AI into electronic health records and clinical workflows presents both opportunities and challenges for improving patient care and research. (iv) Addressing ethical concerns, ensuring data privacy, and maintaining human oversight are crucial for the responsible implementation of AI in critical care nephrology.

16.
Article in English | MEDLINE | ID: mdl-39238375

ABSTRACT

Artificial intelligence, especially machine learning, makes predictions using algorithms and past knowledge. Notably, there has been an increase in interest in using artificial intelligence, particularly generative AI, in the pharmacovigilance of pharmaceuticals under development as well as those already on the market. This review was conducted to understand how generative AI can play an important role in pharmacovigilance and in improving drug safety monitoring. Data from previously published articles and news items were reviewed. We used PubMed and Google Scholar as our search engines, with the keywords pharmacovigilance, artificial intelligence, machine learning, drug safety, and patient safety. In total, we reviewed 109 articles published up to 31 January 2024; the information obtained was interpreted, compiled, and evaluated, and conclusions were drawn. Generative AI has transformative potential in pharmacovigilance, showcasing benefits such as enhanced adverse event detection, data-driven risk prediction, and optimized drug development. By making it easier to process and analyze big datasets, generative artificial intelligence has applications across a variety of disease states. Machine learning and automation in this field can streamline pharmacovigilance procedures and provide a more efficient way to assess safety-related data. Nevertheless, more investigation is required to determine how this optimization affects the caliber of safety analyses. In the near future, increased utilization of artificial intelligence is anticipated, especially in predicting side effects and adverse drug reactions (ADRs).

17.
Stud Health Technol Inform ; 317: 21-29, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234703

ABSTRACT

Individual health data is crucial for scientific advancements, particularly in developing Artificial Intelligence (AI); however, sharing real patient information is often restricted due to privacy concerns. A promising solution to this challenge is synthetic data generation. This technique creates entirely new datasets that mimic the statistical properties of real data, while preserving confidential patient information. In this paper, we present the workflow and different services developed in the context of Germany's National Research Data Infrastructure project NFDI4Health. First, two state-of-the-art AI tools (namely, VAMBN and MultiNODEs) for generating synthetic health data are outlined. Further, we introduce SYNDAT, a public web-based tool that allows users to visualize and assess the quality and risk of synthetic data provided by desired generative models. Additionally, the utility of the proposed methods and the web-based tool is showcased using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Center for Cancer Registry Data of the Robert Koch Institute (RKI).


Subjects
Workflow, Humans, Germany, Risk Management, Artificial Intelligence, Alzheimer Disease
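The core idea behind synthetic health data, fitting a model to a real dataset and sampling a new one that reproduces its statistics without copying any individual record, can be shown at its simplest with a univariate Gaussian. VAMBN and MultiNODEs are far richer (multivariate, longitudinal, deep generative models); this is only a conceptual stand-in with invented data:

```python
import random
import statistics

random.seed(0)

# Invented "real" column: patient ages drawn for illustration only.
real_ages = [random.gauss(70.0, 8.0) for _ in range(5000)]

# Fit simple summary statistics to the real column...
mu = statistics.fmean(real_ages)
sigma = statistics.stdev(real_ages)

# ...then sample a fresh synthetic column that mimics them. No synthetic
# value is a copied patient record, yet aggregate statistics carry over.
synthetic_ages = [random.gauss(mu, sigma) for _ in range(5000)]

print(round(mu, 1), round(statistics.fmean(synthetic_ages), 1))
```

Quality assessment tools like SYNDAT then compare such real-vs-synthetic statistics (and re-identification risk) far more thoroughly than this one-column check.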
18.
Article in English | MEDLINE | ID: mdl-39243338

ABSTRACT

PURPOSE OF REVIEW: The integration of digital technology into medical practice is often thrust upon clinicians, with standards and routines developed long after initiation. Clinicians should endeavor towards a basic understanding even of emerging technologies so that they can direct its use. The intent of this review is to describe the current state of rapidly evolving generative artificial intelligence (GAI), and to explore both how pediatric gastroenterology practice may benefit as well as challenges that will be faced. RECENT FINDINGS: Although little research demonstrating the acceptance, practice, and outcomes associated with GAI in pediatric gastroenterology is published, there are relevant data adjacent to the specialty and overwhelming potential as professed in the media. Best practice guidelines are widely developed in academic publishing and resources to initiate and improve practical user skills are prevalent. Initial published evidence supports broad acceptance of the technology as part of medical practice by clinicians and patients, describes methods with which higher quality GAI can be developed, and identifies the potential for bias and disparities resulting from its use. GAI is broadly available as a digital tool for incorporation into medical practice and holds promise for improved quality and efficiency of care, but investigation into how GAI can best be used remains at an early stage despite rapid evolution of the technology.

19.
Oncotarget ; 15: 607-608, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39236061

ABSTRACT

Generative AI is revolutionizing oncological imaging, enhancing cancer detection and diagnosis. This editorial explores its impact on expanding datasets, improving image quality, and enabling predictive oncology. We discuss ethical considerations and introduce a unique perspective on personalized cancer screening using AI-generated digital twins. This approach could optimize screening protocols, improve early detection, and tailor treatment plans. While challenges remain, generative AI in oncological imaging offers unprecedented opportunities to advance cancer care and improve patient outcomes.


Subjects
Artificial Intelligence, Neoplasms, Humans, Neoplasms/diagnosis, Neoplasms/diagnostic imaging, Early Detection of Cancer/methods, Diagnostic Imaging/methods, Precision Medicine/methods
20.
J Med Internet Res ; 26: e56121, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39250188

ABSTRACT

Using simulated patients to mimic 9 established noncommunicable and infectious diseases, we assessed ChatGPT's performance in treatment recommendations for common diseases in low- and middle-income countries. ChatGPT had a high level of accuracy in both correct diagnoses (20/27, 74%) and medication prescriptions (22/27, 82%) but a concerning level of unnecessary or harmful medications (23/27, 85%) even with correct diagnoses. ChatGPT performed better in managing noncommunicable diseases than infectious ones. These results highlight the need for cautious AI integration in health care systems to ensure quality and safety.


Subjects
Developing Countries, Humans, Patient Simulation, Quality of Health Care/standards, Delivery of Health Care/standards, Noncommunicable Diseases/therapy, Communicable Diseases