1.
Am J Bioeth ; : 1-12, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662360

ABSTRACT

A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called "update problem," which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory approval. In this paper, we draw attention to a prior ethical question: whether the continuous learning that will occur in such systems after their initial deployment should itself be classified, and regulated, as medical research. We argue that there is a strong prima facie case that the use of continuous learning in medical ML systems should be categorized, and regulated, as research, and that individuals whose treatment involves such systems should be treated as research subjects.

2.
Camb Q Healthc Ethics ; : 1-10, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36624634

ABSTRACT

Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. Notoriously, however, some of these systems are "black boxes." The initial response in the literature was a demand for "explainable AI." More recently, several authors have suggested that making AI more explainable or "interpretable" is likely to come at the cost of accuracy, and that prioritizing interpretability in medical AI may constitute a "lethal prejudice." In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn gives designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits of AI depends critically on how the outputs of these systems are interpreted by physicians and patients. A preference for highly accurate black-box AI systems over less accurate but more interpretable systems may therefore itself constitute a form of lethal prejudice, one that diminishes the benefits of AI to patients and may even harm them.

3.
J Am Med Inform Assoc ; 30(2): 361-366, 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36377970

ABSTRACT

OBJECTIVES: Machine learning (ML) has the potential to facilitate "continual learning" in medicine, in which an ML system continues to evolve in response to new data even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such "adaptive" ML systems in medicine, issues that have thus far been neglected in the literature. TARGET AUDIENCE: The target audiences for this tutorial are developers of ML AI systems, healthcare regulators, the broader medical informatics community, and practicing clinicians. SCOPE: Discussions of adaptive ML systems to date have overlooked the distinction between two sorts of variance that such systems may exhibit, diachronic evolution (change over time) and synchronic variation (differences between cotemporaneous instantiations of the algorithm at different sites), and have underestimated the significance of the latter. We highlight the challenges that diachronic evolution and synchronic variation present for the quality of patient care, informed consent, and equity, and discuss the complex ethical trade-offs involved in the design of such systems.


Subject(s)
Artificial Intelligence; Medicine; Humans; Machine Learning; Algorithms; Delivery of Health Care
6.
J Med Ethics ; 46(7): 478-481, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32220870

ABSTRACT

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advances in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI's progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I argue that there is merit to these concerns: AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.


Subject(s)
Artificial Intelligence; Trust; Humans; Reproducibility of Results
7.
Hastings Cent Rep ; 50(1): 14-17, 2020 Jan.
Article in English | MEDLINE | ID: mdl-32068275

ABSTRACT

In the much-celebrated book Deep Medicine, Eric Topol argues that the development of artificial intelligence for health care will lead to a dramatic shift in the culture and practice of medicine. In the next several decades, he suggests, AI will become sophisticated enough that many of the everyday tasks of physicians could be delegated to it. Topol is perhaps the most articulate advocate of the benefits of AI in medicine, but he is hardly alone in touting its potential to allow physicians to dedicate more of their time and attention to providing empathetic care for their patients. Unfortunately, several factors suggest a radically different picture for the future of health care. Far from facilitating a return to a time of closer doctor-patient relationships, the use of medical AI seems likely to further erode therapeutic relationships and threaten professional and patient satisfaction.

8.
J Med Ethics ; 45(12): 817-820, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31462453

ABSTRACT

Advocates of physician-assisted suicide (PAS) often argue that, although the provision of PAS is morally permissible for persons with terminal, somatic illnesses, it is impermissible for patients suffering from psychiatric conditions. This claim is justified on the basis that psychiatric illnesses have certain morally relevant characteristics and/or implications that distinguish them from their somatic counterparts. In this paper, I address three arguments of this sort. First, that psychiatric conditions compromise a person's decision-making capacity. Second, that we cannot have sufficient certainty that a person's psychiatric condition is untreatable. Third, that the institutionalisation of PAS for mental illnesses presents morally unacceptable risks. I argue that, if we accept that PAS is permissible for patients with somatic conditions, then none of these three arguments are strong enough to demonstrate that the exclusion of psychiatric patients from access to PAS is justifiable.


Subject(s)
Mental Disorders; Prejudice; Suicide, Assisted/ethics; Decision Making/ethics; Humans; Mental Competency/psychology; Mental Disorders/diagnosis; Prejudice/ethics; Prejudice/psychology; Prognosis