Results 1 - 20 of 24,509
1.
Pediatr Radiol ; 54(Suppl 2): 169-288, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38822865
4.
Acad Radiol ; 31(5): 1968-1975, 2024 May.
Article in English | MEDLINE | ID: mdl-38724131

ABSTRACT

RATIONALE AND OBJECTIVES: Radiology is a rapidly evolving field that benefits from continuous innovation and research participation among trainees. Traditional methods for involving residents in research are often inefficient and limited, usually because there is no standardized way to identify available research projects. A centralized online platform can enhance networking and offer equal opportunities to all residents. MATERIALS AND METHODS: Research Connect is an online platform built with PHP, SQL, and JavaScript. Its features include project and collaboration listings and the advertisement of project openings to medical/undergraduate students, residents, and fellows. The automated system maintains project data and sends notifications when new research opportunities meet a user's preference criteria. Pre- and post-launch surveys were used to assess the platform's efficacy. RESULTS: Before the introduction of Research Connect, 69% of respondents relied on informal conversations as their primary means of discovering research opportunities. One year after its launch, Research Connect had 141 active users, including 63 residents and 41 faculty members, along with 85 projects spanning various radiology subspecialties. The platform received a median satisfaction rating of 4 on a 1-5 scale, and 54% of users successfully located projects of interest through it. CONCLUSION: Research Connect addresses the need for a standardized method and a centralized platform of active research projects, and it is designed for scalability. Feedback suggests it has increased the visibility and accessibility of radiology research, promoting greater trainee involvement and academic collaboration.


Subject(s)
Internet , Radiology , Humans , Radiology/education , Cooperative Behavior , Biomedical Research , Internship and Residency , Surveys and Questionnaires
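
The Research Connect abstract above names its stack (PHP, SQL, JavaScript) and an automated system that matches new projects against user preference criteria, but publishes no code. Below is a minimal sketch of what such a preference-matching step could look like; the class and function names are hypothetical, not the platform's actual implementation, and Python is used purely for illustration.

```python
# Hypothetical sketch of the preference-matching notification step described
# in the abstract. Project, UserPreference, and matching_users are illustrative
# names; the actual Research Connect schema and PHP/SQL code are not public.
from dataclasses import dataclass, field

@dataclass
class Project:
    title: str
    subspecialty: str
    open_roles: set[str] = field(default_factory=set)  # e.g. {"resident", "fellow"}

@dataclass
class UserPreference:
    email: str
    subspecialties: set[str]
    role: str  # "student", "resident", or "fellow"

def matching_users(project: Project, prefs: list[UserPreference]) -> list[UserPreference]:
    """Return users whose saved preferences match a newly listed project."""
    return [
        p for p in prefs
        if project.subspecialty in p.subspecialties and p.role in project.open_roles
    ]

# Usage: when a new project is posted, notify every matching user.
prefs = [UserPreference("a@example.org", {"neuroradiology"}, "resident")]
new_project = Project("Stroke CT study", "neuroradiology", {"resident", "fellow"})
for user in matching_users(new_project, prefs):
    print(f"notify {user.email} about {new_project.title}")
```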
6.
BMC Med Ethics ; 25(1): 52, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734602

ABSTRACT

BACKGROUND: The integration of artificial intelligence (AI) in radiography presents transformative opportunities for diagnostic imaging and introduces complex ethical considerations. The aim of this cross-sectional study was to explore radiographers' perspectives on the ethical implications of AI in their field and to identify key concerns and potential strategies for addressing them. METHODS: A structured questionnaire was distributed to a diverse group of radiographers in Saudi Arabia. The questionnaire included items on ethical concerns related to AI, the perceived impact on clinical practice, and suggestions for ethical AI integration in radiography. The data were analyzed using quantitative and qualitative methods to capture a broad range of perspectives. RESULTS: Three hundred eighty-eight radiographers with varying levels of experience and specialization responded. A plurality (44.8%) of participants were unfamiliar with the integration of AI into radiography. Approximately 32.9% of radiographers expressed uncertainty about the importance of transparency and explanatory capabilities in the AI systems used in radiology, while 36.9% believed that such systems should be transparent and provide justifications for their decision-making procedures. A plurality (44%) of respondents agreed that implementing AI in radiology may increase ethical dilemmas, but 27.8% expressed uncertainty in recognizing and understanding the potential ethical issues that could arise from integrating AI in radiology. Of the respondents, 41.5% stated that the use of AI in radiology requires specific ethical guidelines, whereas a sizable percentage (28.9%) argued that utilizing AI in radiology does not require adherence to ethical standards. While 46.6% of respondents voiced concerns about patient privacy under AI implementation, 41.5% had no such apprehensions. CONCLUSIONS: This study revealed a complex ethical landscape in the integration of AI in radiography, characterized by both enthusiasm and apprehension among professionals. It underscores the necessity of ethical frameworks, education, and policy development to guide the implementation of AI in radiography. These findings contribute to the ongoing discourse on AI in medical imaging and provide insights that can inform policymakers, educators, and practitioners in navigating the ethical challenges of AI adoption in healthcare.


Subject(s)
Artificial Intelligence , Attitude of Health Personnel , Radiography , Humans , Cross-Sectional Studies , Artificial Intelligence/ethics , Male , Adult , Female , Surveys and Questionnaires , Radiography/ethics , Saudi Arabia , Middle Aged , Radiology/ethics
7.
Radiology ; 311(2): e241041, 2024 May.
Article in English | MEDLINE | ID: mdl-38742974
8.
Medicine (Baltimore) ; 103(20): e38156, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38758871

ABSTRACT

Radiology has become a fundamental constituent of modern medicine. However, medical students in Pakistan often lack sufficient guidance and education in this field. This study aims to establish whether Pakistani medical students possess the basic knowledge required in radiology and to assess their attitudes and perceptions toward radiology as a potential career path. This cross-sectional study surveyed 530 medical students in Pakistan via a self-reported online questionnaire from August 01, 2021 to September 01, 2021. The data were analyzed using SPSS, with logistic regression analyses to identify factors associated with interest in pursuing radiology as a career and with possessing a comprehensive understanding of radiology. Of the 530 participants, 44.2% rated their understanding of radiology as "poor," and only 17% indicated interest in pursuing a career in radiology. The logistic regression model showed significantly higher odds of choosing radiology as a career among males (crude odds ratio [COR] = 1.78, 95% confidence interval [CI] = 1.17-2.72, P = .007), medical students of Punjab (COR = 1.55, 95% CI = 1.01-2.40, P = .048), and those who self-reported their knowledge of radiology as excellent (COR = 14.35, 95% CI = 5.13-40.12, P < .001). In contrast, medical students from Punjab (COR = 0.504, 95% CI = 0.344-0.737, P < .001) and second-year medical students (COR = 0.046, 95% CI = 0.019-0.107, P < .001) had lower odds of good knowledge. Our study suggests that medical students' knowledge of radiology is deficient. We therefore advise that radiological societies work with medical school boards to integrate thorough and early radiology exposure into the undergraduate curriculum.


Subject(s)
Career Choice , Radiology , Students, Medical , Humans , Students, Medical/statistics & numerical data , Students, Medical/psychology , Cross-Sectional Studies , Pakistan , Male , Female , Radiology/education , Surveys and Questionnaires , Young Adult , Adult
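
The study above reports crude odds ratios (COR) with 95% confidence intervals. For readers unfamiliar with the statistic, a COR with a Wald confidence interval can be computed directly from a 2x2 table; the sketch below uses placeholder counts, since the study's underlying cell counts are not given in the abstract.

```python
# Crude odds ratio (COR) with a Wald 95% CI from a 2x2 table, the statistic
# reported in the abstract. The counts below are placeholders, not study data.
import math

def crude_odds_ratio(a: int, b: int, c: int, d: int) -> tuple[float, float, float]:
    """a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)   # Wald 95% CI lower bound
    hi = math.exp(math.log(or_) + 1.96 * se_log)   # Wald 95% CI upper bound
    return or_, lo, hi

# Example: interest in radiology (cases) among males vs. females,
# with made-up cell counts purely to show the calculation.
print(crude_odds_ratio(50, 150, 40, 290))  # -> OR ~ 2.42 with its 95% CI
```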
9.
Rofo ; 196(6): 623, 2024 Jun.
Article in German | MEDLINE | ID: mdl-38776936
10.
Rofo ; 196(6): 536, 2024 Jun.
Article in German | MEDLINE | ID: mdl-38776933
12.
Rofo ; 196(6): 620, 2024 Jun.
Article in German | MEDLINE | ID: mdl-38776934
13.
Rofo ; 196(6): 625-626, 2024 Jun.
Article in German | MEDLINE | ID: mdl-38776938
17.
Semin Musculoskelet Radiol ; 28(3): 352-355, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38768599

ABSTRACT

As per recommendations from the European Society of Radiology and the European Union of Medical Specialists, completion of level 3 radiology training should be followed by an objective assessment of the attained standards, aligned with national customs and practices. A subspecialty exam should ideally be an integral part of the training completion process. Ten of 13 European subspecialty societies currently offer a European subspecialty diploma; the European Society of Musculoskeletal Radiology (ESSR) formally introduced its diploma program in 2003. This article describes the evolution of the ESSR diploma, encompassing the current diploma program, validation procedures, endorsements, and future perspectives. Additionally, insights from a brief survey among ESSR diploma holders are shared, offering valuable tips for prospective candidates aiming to navigate the examination process successfully.


Subject(s)
Radiology , Humans , Radiology/education , Europe , Education, Medical, Graduate/methods , Societies, Medical , Musculoskeletal Diseases/diagnostic imaging , Certification/methods , Clinical Competence , Educational Measurement/methods
18.
Radiology ; 311(2): e240935, 2024 May.
Article in English | MEDLINE | ID: mdl-38771182
20.
Radiology ; 311(2): e232715, 2024 May.
Article in English | MEDLINE | ID: mdl-38771184

ABSTRACT

Background ChatGPT (OpenAI) can pass a text-based radiology board-style examination, but its stochasticity and confident language when it is incorrect may limit utility. Purpose To assess the reliability, repeatability, robustness, and confidence of GPT-3.5 and GPT-4 (ChatGPT; OpenAI) through repeated prompting with a radiology board-style examination. Materials and Methods In this exploratory prospective study, 150 radiology board-style multiple-choice text-based questions, previously used to benchmark ChatGPT, were administered to default versions of ChatGPT (GPT-3.5 and GPT-4) on three separate attempts (separated by ≥1 month and then 1 week). Accuracy and answer choices between attempts were compared to assess reliability (accuracy over time) and repeatability (agreement over time). On the third attempt, regardless of answer choice, ChatGPT was challenged three times with the adversarial prompt, "Your answer choice is incorrect. Please choose a different option," to assess robustness (ability to withstand adversarial prompting). ChatGPT was prompted to rate its confidence from 1 to 10 (with 10 being the highest level of confidence and 1 being the lowest) on the third attempt and after each challenge prompt. Results Neither version showed a difference in accuracy over three attempts: for the first, second, and third attempt, accuracy of GPT-3.5 was 69.3% (104 of 150), 63.3% (95 of 150), and 60.7% (91 of 150), respectively (P = .06); and accuracy of GPT-4 was 80.6% (121 of 150), 78.0% (117 of 150), and 76.7% (115 of 150), respectively (P = .42). Though both GPT-4 and GPT-3.5 had only moderate intrarater agreement (κ = 0.78 and 0.64, respectively), the answer choices of GPT-4 were more consistent across three attempts than those of GPT-3.5 (agreement, 76.7% [115 of 150] vs 61.3% [92 of 150], respectively; P = .006). After the challenge prompt, both changed responses for most questions, though GPT-4 did so more frequently than GPT-3.5 (97.3% [146 of 150] vs 71.3% [107 of 150], respectively; P < .001). Both rated "high confidence" (≥8 on the 1-10 scale) for most initial responses (GPT-3.5, 100% [150 of 150]; and GPT-4, 94.0% [141 of 150]) as well as for incorrect responses (ie, overconfidence; GPT-3.5, 100% [59 of 59]; and GPT-4, 77% [27 of 35], respectively; P = .89). Conclusion Default GPT-3.5 and GPT-4 were reliably accurate across three attempts, but both had poor repeatability and robustness and were frequently overconfident. GPT-4 was more consistent across attempts than GPT-3.5 but more influenced by an adversarial prompt. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Ballard in this issue.


Subject(s)
Clinical Competence , Educational Measurement , Radiology , Humans , Prospective Studies , Reproducibility of Results , Educational Measurement/methods , Specialty Boards
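
The protocol above (an initial question, then three repetitions of a fixed adversarial prompt, quoted from the abstract) is straightforward to reproduce programmatically. The study used the default ChatGPT web interface, so the sketch below, which drives the same loop through the OpenAI chat completions API, is only an approximation under that assumption; the model name and wrapper functions are illustrative.

```python
# Sketch of the study's repeated/adversarial prompting protocol using the
# OpenAI Python client. The study itself used the ChatGPT web interface, so
# this API-based loop is an approximation; "gpt-4" here is an assumed stand-in
# for the default ChatGPT model versions the authors tested.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
CHALLENGE = "Your answer choice is incorrect. Please choose a different option."

def ask(history: list[dict]) -> str:
    """Send the running conversation and return the model's reply text."""
    resp = client.chat.completions.create(model="gpt-4", messages=history)
    return resp.choices[0].message.content

def administer(question: str, n_challenges: int = 3) -> list[str]:
    """Ask a board-style question, then challenge the answer n_challenges times."""
    history = [{"role": "user", "content": question}]
    answers = [ask(history)]
    for _ in range(n_challenges):
        history += [{"role": "assistant", "content": answers[-1]},
                    {"role": "user", "content": CHALLENGE}]
        answers.append(ask(history))
    return answers  # initial answer plus one answer per challenge

# Robustness in the study's sense: the initial choice survives every challenge.
```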