Results 1 - 2 of 2
1.
Cureus; 15(10): e47468, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38021810

ABSTRACT

Background: Artificial intelligence (AI) has the potential to be integrated into medical education. Among AI-based technologies, large language models (LLMs) such as ChatGPT, Google Bard, Microsoft Bing, and Perplexity have emerged as powerful tools for natural language processing. Against this background, this study investigated the knowledge, attitude, and practice of undergraduate medical students regarding the use of LLMs in medical education at a medical college in Jharkhand, India. Methods: A cross-sectional online survey was distributed via Google Forms to 370 undergraduate medical students. The questionnaire comprised three domains (knowledge, attitude, and practice), each containing six questions. Cronbach's alphas for the knowledge, attitude, and practice domains were 0.703, 0.707, and 0.809, respectively; intraclass correlation coefficients were 0.82, 0.87, and 0.78, respectively. The average scores in the three domains were compared using ANOVA. Results: A total of 172 students participated in the study (response rate: 46.49%). The largest proportion of students (45.93%) rarely used LLMs for teaching-learning purposes (chi-square(3) = 41.44, p < 0.0001). The overall scores for knowledge (3.21±0.55), attitude (3.47±0.54), and practice (3.26±0.61) differed significantly (ANOVA, F(2, 513) = 10.2, p < 0.0001), with the highest score for attitude and the lowest for knowledge. Conclusion: While there is a generally positive attitude toward incorporating LLMs into medical education, concerns about overreliance and potential inaccuracies are evident. LLMs can enhance learning resources and make education more accessible, but their integration requires further planning. Further studies are needed to explore the long-term impact of LLMs in diverse educational contexts.
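The domain comparison reported above can be reproduced in outline. The sketch below is purely illustrative: the per-student scores are simulated from the reported means and standard deviations, and the analysis is assumed to be a one-way ANOVA across the three domains, consistent with the reported F(2, 513); it is not the authors' actual dataset or script.

    # Illustrative sketch only: simulated scores, one-way ANOVA across the
    # knowledge, attitude, and practice domains as described in the abstract.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_students = 172  # reported sample size

    # Hypothetical per-student domain scores drawn from the reported mean +/- SD.
    knowledge = rng.normal(3.21, 0.55, n_students)
    attitude = rng.normal(3.47, 0.54, n_students)
    practice = rng.normal(3.26, 0.61, n_students)

    # One-way ANOVA comparing the three domains; with 3 x 172 observations the
    # degrees of freedom are (2, 513), matching the F(2, 513) in the abstract.
    f_stat, p_value = stats.f_oneway(knowledge, attitude, practice)
    print(f"F(2, {3 * n_students - 3}) = {f_stat:.2f}, p = {p_value:.4g}")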

2.
J Family Med Prim Care; 12(8): 1659-1662, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37767452

ABSTRACT

Background: Patient education is an essential component of improving public health, as it empowers individuals with the knowledge and skills needed to make informed decisions about their health and well-being. Primary care physicians play a crucial role in patient education because they are the first point of contact between patients and the healthcare system, but they may not have adequate time to prepare educational material for their patients. An AI-based writing assistant such as ChatGPT can help physicians draft this material. Aim: This study aimed to ascertain the capability of ChatGPT to generate patient education materials for common public health issues in India. Materials and Methods: This observational study was conducted on the internet using the free research version of ChatGPT, a conversational artificial intelligence that can generate human-like text. We prompted the program with the question "prepare a patients' education material for X in India," where X was replaced with the following words or phrases: "air pollution," "malnutrition," "maternal and child health," "mental health," "noncommunicable diseases," "road traffic accidents," "tuberculosis," and "water-borne diseases." The textual responses were collected, stored, and analyzed for readability, grammatical errors, and text similarity. Result: We generated a total of eight educational documents with a median of 26 (Q1-Q3: 21.5-34) sentences and a median of 349 (Q1-Q3: 329-450.5) words. The median Flesch Reading Ease score was 48.2 (Q1-Q3: 39-50.65), indicating text that a college student can understand. The text was grammatically accurate, with very few errors (seven in 3,415 words). The text was very clear in the majority of documents (8 out of 9), with a median score of 85 (Q1-Q3: 82.5-85) out of 100. The overall text similarity index was 18% (Q1-Q3: 7.5-26%). Conclusion: The research version of ChatGPT (January 30, 2023 version) can generate patient education materials for common public health issues in India at a reading level suited to college students and with high grammatical accuracy. However, text similarity should be checked before use. Primary care physicians can use ChatGPT to generate draft text for patient education materials.
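For readers unfamiliar with the readability metric cited above, the sketch below computes the Flesch Reading Ease score from its standard formula (206.835 - 1.015 x words per sentence - 84.6 x syllables per word). The syllable counter is a rough vowel-group heuristic rather than the tool the study authors used, and the sample sentence is invented; scores in the 30-50 band are conventionally read as college level.

    # Minimal sketch of the Flesch Reading Ease calculation; the syllable count
    # is an approximation (runs of vowels), not a dictionary-based count.
    import re

    def count_syllables(word: str) -> int:
        # Approximate syllables as vowel groups; count at least one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

    sample = "Drink only boiled or treated water. Wash your hands with soap before eating."
    print(round(flesch_reading_ease(sample), 1))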
