1.
JMIR Med Educ; 10: e58355, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38989834

ABSTRACT

Background: The increasing importance of artificial intelligence (AI) in health care has generated a growing need for health care professionals to possess a comprehensive understanding of AI technologies, requiring an adaptation of medical education.

Objective: This paper explores stakeholder perceptions and expectations regarding AI in medicine and examines their potential impact on the medical curriculum. The study aims to assess the AI experience and awareness of different stakeholders and to identify essential AI-related topics in medical education in order to define the competencies students need.

Methods: The empirical data were collected as part of the TüKITZMed project between August 2022 and March 2023 using semistructured qualitative interviews. The interviews were administered to a diverse group of stakeholders to explore their experiences and perspectives on AI in medicine. A qualitative content analysis of the collected data was conducted using MAXQDA software.

Results: Semistructured interviews were conducted with 38 participants (6 lecturers, 9 clinicians, 10 students, 6 AI experts, and 7 institutional stakeholders). The qualitative content analysis yielded 6 primary categories with a total of 24 subcategories addressing the research questions. The stakeholders' statements revealed several commonalities and differences in their understanding of AI. The crucial AI themes identified from the main categories were possible curriculum contents, skills, and competencies; programming skills; curriculum scope; and curriculum structure.

Conclusions: The analysis underscores the need to integrate AI into medical curricula so that students become proficient in its clinical applications. A standardized understanding of AI is crucial for defining and teaching the relevant content. Considering diverse perspectives during implementation is essential to define AI comprehensively in the medical context, addressing gaps and enabling effective solutions for the future use of AI in medical studies. The results provide insights into potential curriculum content and structure concerning AI in medicine.


Subject(s)
Artificial Intelligence , Curriculum , Education, Medical , Humans , Education, Medical/methods , Qualitative Research , Stakeholder Participation , Male , Clinical Competence/standards , Female , Students, Medical/psychology , Awareness , Interviews as Topic , Adult
2.
JMIR Med Educ; 10: e53961, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38227363

ABSTRACT

BACKGROUND: Communication is a core competency of medical professionals and of utmost importance for patient safety. Although medical curricula emphasize communication training, traditional formats such as real or simulated patient interactions can cause psychological stress and allow only limited repetition. The recent emergence of large language models (LLMs), such as the generative pretrained transformer (GPT), offers an opportunity to overcome these restrictions.

OBJECTIVE: The aim of this study was to explore the feasibility of a GPT-driven chatbot for practicing history taking, one of the core competencies of communication.

METHODS: We developed an interactive chatbot interface using GPT-3.5 and a specific prompt comprising a chatbot-optimized illness script and a behavioral component. Following a mixed methods approach, we invited medical students to practice history taking voluntarily. To determine whether GPT provides suitable answers as a simulated patient, the conversations were recorded and analyzed quantitatively and qualitatively. We analyzed the extent to which the questions and answers aligned with the provided script, as well as the medical plausibility of the answers. Finally, the students completed the Chatbot Usability Questionnaire (CUQ).

RESULTS: A total of 28 students practiced with our chatbot (mean age 23.4, SD 2.9 years). We recorded a total of 826 question-answer pairs (QAPs), with a median of 27.5 QAPs per conversation; 94.7% (n=782) pertained to history taking. When questions were explicitly covered by the script (n=502, 60.3%), the GPT-provided answers were mostly based on explicit script information (n=471, 94.4%). For questions not covered by the script (n=195, 23.4%), the GPT answers drew on fictitious information in 56.4% (n=110) of cases. Regarding plausibility, 842 (97.9%) of 860 QAPs were rated as plausible. In the 14 (2.1%) implausible answers, GPT gave socially desirable rather than accurate answers, left its role identity, ignored script information, reasoned illogically, or made calculation errors. Despite these lapses, the CUQ revealed an overall positive user experience (77/100 points).

CONCLUSIONS: Our data showed that LLMs such as GPT can provide a simulated patient experience, yielding a good user experience and a majority of plausible answers. Our analysis revealed that GPT-provided answers either use explicit script information or build on other available information, which can be understood as abductive reasoning. Although rare, implausible answers do occur, and when they do the chatbot tends to give socially desirable rather than medically plausible information.
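The abstract names GPT-3.5 and a prompt combining a chatbot-optimized illness script with a behavioral component, but it does not publish the prompt or interface code. A minimal sketch of such a setup, assuming OpenAI's Python client; the illness-script content, behavioral rules, and the helper `ask_patient` are illustrative, not the authors' material:

```python
# Minimal sketch of a GPT-driven simulated patient for history taking.
# The illness script and behavioral prompt below are invented for
# illustration; the study's actual prompt is not published in the abstract.
from openai import OpenAI

ILLNESS_SCRIPT = """You are Ms. Weber, a 54-year-old patient.
Chief complaint: burning chest pain after meals for the past 3 weeks.
History: known hypertension, takes ramipril daily; no known allergies.
Reveal details only when the student explicitly asks about them."""

BEHAVIOR = """Stay in character as the patient at all times.
Answer briefly and colloquially; never volunteer the diagnosis.
If asked something outside the script, answer plausibly and consistently."""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
history = [{"role": "system", "content": ILLNESS_SCRIPT + "\n\n" + BEHAVIOR}]

def ask_patient(question: str) -> str:
    """Send one history-taking question and return the simulated patient's reply."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the study used GPT-3.5
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask_patient("Good morning, what brings you in today?"))
```

Resending the full message history with each request is what keeps the simulated patient consistent over the course of the interview; answers to questions outside the script are left to the model, which corresponds to the fictitious-but-plausible information the authors analyzed.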


Subject(s)
Communication , Students, Medical , Humans , Young Adult , Adult , Prospective Studies , Language , Medical History Taking
3.
Med Educ Online; 28(1): 2182659, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36855245

ABSTRACT

Artificial intelligence (AI) in medicine and digital assistance systems such as chatbots will play an increasingly important role in future doctor-patient communication. To benefit from the potential of this technical innovation and to ensure optimal patient care, future physicians should be equipped with the appropriate skills. Accordingly, a suitable place must be found in the medical curriculum for the management and adaptation of digital assistance systems. To determine medical students' existing knowledge of AI chatbots in the healthcare setting, this study surveyed medical students at the University of Luebeck and the University Hospital of Tuebingen. Using standardized quantitative questionnaires and a qualitative analysis of group discussions, the attitudes of medical students toward AI and chatbots in medicine were investigated, and relevant requirements for the future integration of AI into the medical curriculum were identified. The aim was to establish a basic understanding of the opportunities, limitations, and risks of the technology, as well as its potential areas of application. The participants (N = 12) developed an understanding of how AI and chatbots will affect their future daily work. Although basic attitudes toward the use of AI were positive, the students also expressed concerns. Agreement was high regarding the use of AI in administrative settings (83.3%) and in research with health-related data (91.7%). However, participants were concerned that data protection may be insufficiently guaranteed (33.3%) and that they might be increasingly monitored at work in the future (58.3%). The evaluations indicate that future physicians want to engage more intensively with AI in medicine. In view of future developments, AI and data competencies should be taught in a structured way and integrated into the medical curriculum.


Subject(s)
Students, Medical , Humans , Artificial Intelligence , Knowledge , Communication , Curriculum
4.
Digit Health; 8: 20552076221139092, 2022.
Article in English | MEDLINE | ID: mdl-36457813

ABSTRACT

Objective: Digital transformation in higher education has presented medical students with new challenges and has made organising their own studies more difficult. The main objective of this study was to evaluate the effectiveness of a chatbot in assessing the stress levels of medical students in everyday conversations and to identify the main conditions for accepting a chatbot as a conversational partner, based on validated stress instruments such as the Perceived Stress Questionnaire (PSQ20).

Methods: In this mixed methods research design, medical students' stress levels were assessed using a quantitative (digital and paper-based versions of the PSQ20) and a qualitative (chatbot conversation) study design. To investigate whether stress levels can be measured in everyday conversations, shortened PSQ20 items were integrated into chats between medical students and a chatbot named Melinda.

Results: The PSQ20 revealed increased stress levels in 43.4% of the participating medical students (N = 136). In the statistical analysis, the PSQ20 items integrated into the conversations with Melinda yielded subjective stress results similar to those of both standard PSQ20 versions. The qualitative analysis revealed that certain functional and technical requirements have a significant impact on the expected use and success of the chatbot.

Conclusion: The results suggest that chatbots are promising as personal digital assistants for medical students: they can detect students' stress factors during conversation. Improving the chatbot's technical and social capabilities could positively affect user acceptance.
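The abstract does not describe how the shortened PSQ20 items were embedded in Melinda's dialogue or scored. A minimal sketch under stated assumptions: the item wordings below are illustrative paraphrases rather than the licensed PSQ20 items, and the scoring assumes the PSQ's 4-point response scale with the usual linear transformation to a 0-1 index; the names `ITEMS`, `SCALE`, and `stress_index` are hypothetical.

```python
# Sketch of weaving shortened stress items into a chatbot dialogue and
# scoring them. Item wordings are illustrative paraphrases, not the
# licensed PSQ20 items; scoring assumes the PSQ's 4-point response scale
# and the usual linear transformation to a 0-1 index.
ITEMS = [
    # (conversational item, reverse-keyed?)
    ("By the way, have you been feeling rested lately?", True),
    ("Do you feel that too many demands are being made on you?", False),
    ("Have you felt calm during the past weeks?", True),
]

SCALE = {"almost never": 1, "sometimes": 2, "often": 3, "usually": 4}

def stress_index(answers: list[int], reverse: list[bool]) -> float:
    """Map raw 1-4 answers to a 0-1 stress index (higher = more stress)."""
    adjusted = [5 - a if rev else a for a, rev in zip(answers, reverse)]
    n = len(adjusted)
    return (sum(adjusted) - n) / (3 * n)

# Example: answers the chatbot extracted over the course of a conversation.
raw = [SCALE["sometimes"], SCALE["often"], SCALE["almost never"]]
reverse_flags = [rev for _, rev in ITEMS]
print(f"stress index: {stress_index(raw, reverse_flags):.2f}")  # -> 0.78
```

Spreading the items across the conversation, rather than presenting them as a block, is what lets the instrument masquerade as everyday small talk, which is the property the study compared against the standard digital and paper PSQ20 versions.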
