1.
BMC Med Educ ; 24(1): 165, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38383443

ABSTRACT

BACKGROUND: Obstetrics and gynecology (OB/GYN) is an essential medical field that focuses on women's health. Universities aim to provide high-quality healthcare services to women through comprehensive education of medical students. In Germany, medical education is undergoing a phase of restructuring towards the implementation of competency-based learning. The objective of the current survey was to gain insights into the teaching methods, resources, and challenges at German medical universities in the field of OB/GYN. This aims to document the current state of medical education and derive potential suggestions for improvement in the era of competency-based learning. The survey was conducted with teaching coordinators from the majority of OB/GYN departments at German universities.

METHODS: A questionnaire was sent to the teaching coordinators of all 41 OB/GYN departments at German university hospitals. The survey was delivered via email with a link to an online survey platform.

RESULTS: The study received 30 responses from 41 universities. Differences were observed in the work environment of teaching coordinators concerning release from clinical duties for teaching purposes and specialized academic training. Overall, medical education and student motivation were perceived positively, but with noticeable gaps, particularly in practical gynecological training. Deficiencies in supervision and feedback mechanisms were also evident. Subfields such as urogynecology and reproductive medicine appear to be underrepresented in the curriculum, correlating with poorer student performance. E-learning was widely utilized and considered advantageous.

CONCLUSION: The present study provides valuable insights into the current state of medical education in OB/GYN at German universities from the perspective of teaching experts. We highlight current deficits, discuss approaches to overcome present obstacles, and provide suggestions for improvement.


Subject(s)
Gynecology , Obstetrics , Pregnancy , Female , Humans , Gynecology/education , Competency-Based Education , Obstetrics/education , Curriculum , Surveys and Questionnaires
2.
Front Med (Lausanne) ; 10: 1296615, 2023.
Article in English | MEDLINE | ID: mdl-38155661

ABSTRACT

Background: Chat Generative Pre-Trained Transformer (ChatGPT) is an artificial intelligence large language model tool developed by OpenAI in 2022. It utilizes deep learning algorithms to process natural language and generate responses, which renders it suitable for conversational interfaces. ChatGPT's potential to transform medical education and clinical practice is currently being explored, but its capabilities and limitations in this domain remain incompletely investigated. The present study aimed to assess ChatGPT's performance in medical knowledge competency for problem assessment in obstetrics and gynecology (OB/GYN).

Methods: Two datasets were established for analysis: questions (1) from OB/GYN course exams at a German university hospital and (2) from the German medical state licensing exams. To assess ChatGPT's performance, questions were entered into the chat interface and responses were documented. A quantitative analysis compared ChatGPT's accuracy with that of medical students across different levels of difficulty and types of questions. Additionally, a qualitative analysis assessed the quality of ChatGPT's responses regarding ease of understanding, conciseness, accuracy, completeness, and relevance. Non-obvious insights generated by ChatGPT were evaluated, and a density index of insights was established to quantify the tool's ability to provide students with relevant and concise medical knowledge.

Results: ChatGPT demonstrated consistent and comparable performance across both datasets. It provided correct responses at a rate comparable with that of medical students, indicating its ability to handle a diverse spectrum of questions ranging from general knowledge to complex clinical case presentations. The tool's accuracy was partly affected by question difficulty in the medical state exam dataset. Our qualitative assessment revealed that ChatGPT provided mostly accurate, complete, and relevant answers. ChatGPT additionally provided many non-obvious insights, especially in correctly answered questions, which indicates its potential for enhancing autonomous medical learning.

Conclusion: ChatGPT shows promise as a supplementary tool in medical education and clinical practice. Its ability to provide accurate and insightful responses showcases its adaptability to complex clinical scenarios. As AI technologies continue to evolve, ChatGPT and similar tools may contribute to more efficient and personalized learning experiences and assistance for health care providers.
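The abstract does not specify how the density index of insights was computed. A minimal sketch of one plausible formulation, normalizing the count of non-obvious insights by response length, is shown below; the function name, the per-100-words normalization, and the example numbers are illustrative assumptions, not the authors' actual metric.

```python
def density_index(insights: int, word_count: int, per: int = 100) -> float:
    """Hypothetical density index: non-obvious insights per `per` words
    of a ChatGPT response. Higher values suggest more concise, relevant
    medical knowledge; 0.0 is returned for an empty response."""
    if word_count <= 0:
        return 0.0
    # Multiply before dividing to keep the arithmetic exact for
    # integer-friendly inputs (e.g. 3 * 100 / 250 == 1.2).
    return insights * per / word_count

# Example: a 250-word answer containing 3 non-obvious insights
print(density_index(3, 250))  # 1.2 insights per 100 words
```

Normalizing by length rather than reporting raw insight counts lets short, dense answers score higher than long, padded ones, which matches the abstract's stated goal of measuring relevance and conciseness together.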
