Results 1 - 5 of 5
1.
J Med Ethics; 50(2): 90-96, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-37945336

ABSTRACT

Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight ethical vignettes. The main outcomes measured were relevance, reasoning, depth, technical and non-technical clarity, as well as acceptability of GPT-4's responses. The readability of the responses was also assessed. Across the six metrics evaluating the effectiveness of GPT-4's responses, the overall mean score was 4.1/5. GPT-4 was rated highest on technical clarity (4.7/5) and non-technical clarity (4.4/5), whereas the lowest-rated metrics were depth (3.8/5) and acceptability (3.8/5). There was poor-to-moderate inter-rater reliability, characterised by an intraclass correlation coefficient of 0.54 (95% CI: 0.30 to 0.71). Based on panellist feedback, GPT-4 was able to identify and articulate key ethical issues but struggled to appreciate the nuanced aspects of ethical dilemmas and misapplied certain moral principles. This study reveals limitations in the ability of GPT-4 to appreciate the depth and nuanced acceptability of real-world ethical dilemmas, particularly those that require a thorough understanding of relational complexities and context-specific values. Ongoing evaluation of LLM capabilities within medical ethics remains paramount, and further refinement is needed before these models can be used effectively in clinical settings.
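The abstract reports inter-rater reliability as an intraclass correlation coefficient but does not say which ICC form was used. As a minimal sketch, assuming the two-way random-effects, absolute-agreement, single-rater form (Shrout & Fleiss ICC(2,1)) and using fabricated illustrative ratings in place of the study's data, the statistic can be computed from a subjects-by-raters matrix:

import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Shrout & Fleiss ICC(2,1): two-way random effects, absolute agreement,
    single rater. `ratings` has shape (n_subjects, n_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ssr = k * np.sum((ratings.mean(axis=1) - grand) ** 2)   # between-subjects sum of squares
    ssc = n * np.sum((ratings.mean(axis=0) - grand) ** 2)   # between-raters sum of squares
    sse = np.sum((ratings - grand) ** 2) - ssr - ssc        # residual sum of squares
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical data: 8 vignettes rated 1-5 by 6 ethicists
# (values are illustrative only, not the study's ratings).
rng = np.random.default_rng(0)
ratings = rng.integers(3, 6, size=(8, 6)).astype(float)
print(f"ICC(2,1) = {icc2_1(ratings):.2f}")

Under common benchmarks (e.g., Koo and Li, 2016), ICC values between 0.5 and 0.75 indicate moderate reliability, which fits the abstract's "poor-to-moderate" reading once the 0.30 to 0.71 confidence interval is taken into account.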


Subject(s)
Ethicists; Ethics, Medical; Humans; Cross-Sectional Studies; Reproducibility of Results; Problem Solving
3.
AJOB Empir Bioeth; 14(3): 143-154, 2023.
Article in English | MEDLINE | ID: mdl-36574227

ABSTRACT

BACKGROUND: Nonanonymized direct contact between organ recipients and donor families is a topic of international interest in the adult context. However, there is limited discussion about whether direct contact should be extended to pediatric settings, owing to clinician and researcher concerns about potential harms to pediatric patients. METHODS: We interviewed pediatric organ recipients, their families, and donor families in British Columbia, Canada, to determine their views on direct contact. Interviews were conducted in two stages, with those who were further removed from the transplant process informing the approach to interviews with those who had more recently gone through the transplant process. RESULTS: Twenty-nine individuals participated in twenty in-depth interviews. The study included participants from three major organ systems: kidney, heart, and liver. Only five participants expressed that direct contact might cause harm or discomfort, while twenty-three indicated they saw significant potential for benefit. Nearly half focused on harms to others rather than to themselves, and nearly two-thirds focused on benefits for others rather than for themselves. CONCLUSION: There appears to be a community desire for direct contact in pediatric organ transplant programs among those living in British Columbia, Canada. These results suggest a need to revisit the medical community's assumptions around protection and paternalism in our practice as clinicians and researchers.


Subject(s)
Organ Transplantation; Adult; Humans; Child; Tissue Donors; Qualitative Research; Canada
4.
J Med Ethics; 2021 Jul 21.
Article in English | MEDLINE | ID: mdl-34290113

ABSTRACT

The 'black box problem' is a long-standing talking point in debates about artificial intelligence (AI) and a significant point of tension between ethicists, programmers, clinicians and anyone else working on developing AI for healthcare applications. However, a precise definition of such systems is often left vague or unclear, or is assumed to be standardised within AI circles. This leads to situations where individuals working on AI talk past each other, and the term has been invoked in numerous debates pitting opaque systems against explainable ones. This paper proposes a coherent and clear definition of the black box problem to assist future discussions about AI in healthcare. This is accomplished by synthesising various definitions in the literature and examining several criteria that can be extrapolated from these definitions.
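As a toy illustration (not drawn from the paper) of one criterion that recurs in such definitions, consider the difference between a model whose decision rule is directly inspectable and one whose parameters are individually meaningless to a human reader, even though both map the same inputs to outputs:

import numpy as np

# Transparent model: the decision rule is stated in human-readable terms.
def rule_based_triage(age: float, systolic_bp: float) -> str:
    """Flag a patient if systolic BP is under 90 mmHg, or under 100 mmHg for age 65+."""
    if systolic_bp < 90 or (age >= 65 and systolic_bp < 100):
        return "flag"
    return "ok"

# Opaque model: a small neural network. It produces definite outputs, but no
# individual weight corresponds to a clinically meaningful concept, so the
# *reason* for a prediction is not recoverable by inspection.
# Weights are random purely for illustration.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)

def opaque_triage(age: float, systolic_bp: float) -> str:
    x = np.array([age / 100.0, systolic_bp / 200.0])   # crude normalisation
    hidden = np.tanh(x @ W1 + b1)
    score = (hidden @ W2 + b2).item()
    return "flag" if score > 0 else "ok"

patient = (72.0, 95.0)
print(rule_based_triage(*patient))  # explainable: matches the stated rule
print(opaque_triage(*patient))      # output only; rationale is opaque

The contrast is deliberately simplified; which of these systems counts as a 'black box', and relative to whom, is exactly the kind of question on which the competing definitions the paper synthesises disagree.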
