Results 1 - 20 of 38
3.
Sci Eng Ethics; 30(4): 27, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38888795

ABSTRACT

Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.


Subjects
Artificial Intelligence; Decision Making; Social Responsibility; Humans; Artificial Intelligence/ethics; Decision Making/ethics; Decision Support Techniques; Judgment; Machine Learning/ethics; Ownership; Robotics/ethics
5.
Artif Intell Med; 152: 102873, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38643592

ABSTRACT

The COVID-19 pandemic has given rise to a broad range of research from fields alongside and beyond the core concerns of infectiology, epidemiology, and immunology. One significant subset of this work centers on machine learning-based approaches to supporting medical decision-making around COVID-19 diagnosis. To date, various challenges, including IT issues, have meant that the actual use of these methods in medical facilities remains incipient at best, despite their potential to relieve pressure on scarce medical resources, prevent infections, and help manage the difficulties and unpredictability surrounding the emergence of new mutations. The reasons behind this research-application gap are manifold and may have an interdisciplinary dimension. We argue that the discipline of AI ethics can provide a framework for interdisciplinary discussion and create a roadmap for the application of digital COVID-19 diagnosis that takes into account all disciplinary stakeholders involved. This article proposes such an ethical framework for the practical use of digital COVID-19 diagnosis, considering the legal, medical, operational-managerial, and technological aspects of the issue in accordance with our diverse research backgrounds, and noting the potential of the approach we set out here to guide future research.


Subjects
Artificial Intelligence; COVID-19; COVID-19/diagnosis; Humans; Artificial Intelligence/ethics; SARS-CoV-2; Machine Learning/ethics; Diagnosis, Computer-Assisted/ethics; Pandemics
6.
Bioethics; 38(5): 383-390, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38523587

ABSTRACT

After a wave of breakthroughs in image-based medical diagnostics and risk prediction models, machine learning (ML) has turned into a normal science. However, prominent researchers claim that another paradigm shift in medical ML is imminent: owing to the recent staggering successes of large language models, the field is moving from single-purpose applications toward generalist models driven by natural language. This article investigates the implications of this paradigm shift for the ethical debate. Focusing on issues such as trust, transparency, threats to patient autonomy, responsibility in the collaboration between clinicians and ML models, fairness, and privacy, it argues that the main problems will be continuous with the current debate. However, owing to the way large language models function, the complexity of all these problems increases. In addition, the article discusses some profound challenges for the clinical evaluation of large language models, as well as threats to the reproducibility and replicability of studies of large language models in medicine arising from corporate interests.


Subjects
Machine Learning; Humans; Machine Learning/ethics; Personal Autonomy; Trust; Privacy; Reproducibility of Results; Ethics, Medical
7.
Bioethics; 38(5): 391-400, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38554069

ABSTRACT

Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in healthcare, yet algorithmic performance can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and making urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to a minimum. We offer an ethical algorithm for algorithms, showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.
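At its core, the selective-deployment proposal reduces to gating deployment on measured per-subgroup performance rather than on aggregate performance. The following is a minimal sketch of that gating logic in Python; the threshold, metric choice (AUROC), group labels, and all names are illustrative assumptions rather than the authors' published procedure.

```python
# Hypothetical sketch of per-subgroup deployment gating. The threshold,
# metric (AUROC), and group labels are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class GroupMetrics:
    group: str      # demographic subgroup label
    auroc: float    # measured diagnostic performance for this subgroup
    n_train: int    # training examples available for this subgroup

def deployment_decision(metrics: list[GroupMetrics],
                        safe_auroc: float = 0.90) -> dict[str, str]:
    """Classify each subgroup as 'deploy' or 'delay'.

    A subgroup is included in deployment only if measured performance
    for that subgroup clears a clinically agreed safety threshold;
    otherwise deployment is delayed pending more training data.
    """
    decisions = {}
    for m in metrics:
        if m.auroc >= safe_auroc:
            decisions[m.group] = "deploy"
        else:
            decisions[m.group] = "delay: gather more training data"
    return decisions

# A well-represented group clears the bar; an underrepresented one does not.
metrics = [GroupMetrics("group_A", auroc=0.94, n_train=120_000),
           GroupMetrics("group_B", auroc=0.81, n_train=3_500)]
print(deployment_decision(metrics))
# {'group_A': 'deploy', 'group_B': 'delay: gather more training data'}
```

In practice the threshold would be set clinically and per task; the point of the sketch is only that the deploy-versus-delay decision is made separately for each subgroup rather than once for the model as a whole.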


Subjects
Algorithms; Artificial Intelligence; Humans; Female; Artificial Intelligence/ethics; Breast Neoplasms; Melanoma; Delivery of Health Care/ethics; Machine Learning/ethics; Social Justice; Prognosis
8.
Am J Bioeth; 24(7): 13-26, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38226965

ABSTRACT

When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient's (former) autonomy since it draws on the 'wrong' kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently 'fine-tuned' on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient's preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient's own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.
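To make the fine-tuning idea concrete, here is a minimal sketch of how person-specific material might be prepared as training data for a P4. The record fields (`scenario`, `decision`, `reasons`), the prompt/completion JSONL layout, and the file name are all illustrative assumptions, not the authors' specification.

```python
# Hypothetical sketch of preparing person-specific fine-tuning data for
# a P4. Record fields, prompt layout, and file name are illustrative
# assumptions; the actual fine-tuning step depends on the chosen model.
import json

def build_finetuning_examples(patient_records: list[dict]) -> list[dict]:
    """Turn prior treatment decisions into prompt/completion pairs."""
    examples = []
    for rec in patient_records:
        prompt = (f"Clinical scenario: {rec['scenario']}\n"
                  "What did this patient choose, and why?")
        completion = f"{rec['decision']} Stated reasons: {rec['reasons']}."
        examples.append({"prompt": prompt, "completion": completion})
    return examples

records = [
    {"scenario": "Offered invasive ventilation after respiratory failure",
     "decision": "Declined invasive ventilation.",
     "reasons": "valued remaining lucid over marginal life extension"},
]

# Write one JSON object per line, a common input format for fine-tuning.
with open("p4_training_data.jsonl", "w") as f:
    for ex in build_finetuning_examples(records):
        f.write(json.dumps(ex) + "\n")

# The resulting file would then be passed to whatever fine-tuning pipeline
# the chosen language model provides; at inference time the tuned model is
# queried with the current, novel treatment scenario for this patient.
```

The design point, on the authors' argument, is that every training example here is specific to the individual patient, so the tuned model's predictions track that patient's own decisions and stated reasons rather than population-level demographic correlations.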


Subjects
Judgment; Patient Preference; Humans; Personal Autonomy; Algorithms; Machine Learning/ethics; Decision Making/ethics
11.
Psychol Med; 51(15): 2522-2524, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33975655

ABSTRACT

The clinical interview is the psychiatrist's data-gathering procedure. However, the clinical interview is not a defined entity in the way that 'vitals' are defined as measurements of blood pressure, heart rate, respiration rate, temperature, and oxygen saturation. There are as many ways to approach a clinical interview as there are psychiatrists, and trainees can learn as many ways of performing and formulating the clinical interview as there are instructors (Nestler, 1990). Even in the same clinical setting, two clinicians might interview the same patient, conduct very different examinations, and reach different treatment recommendations. From the perspective of data science, this mismatch is not one of personal style or idiosyncrasy but rather one of uncertain salience: neither the clinical interview nor the data it generates is operationalized, and therefore neither can be rigorously evaluated, tested, or optimized.


Subjects
Interview, Psychological/methods; Machine Learning; Psychiatry/methods; Schizophrenia/diagnosis; Diagnosis, Computer-Assisted/ethics; Diagnosis, Computer-Assisted/methods; Humans; Machine Learning/ethics; Psychiatry/ethics
13.
Psychol Med; 51(15): 2515-2521, 2021 Nov.
Article in English | MEDLINE | ID: mdl-32536358

ABSTRACT

Recent advances in machine learning (ML) promise far-reaching improvements across medical care, not least within psychiatry. While to date no psychiatric application of ML constitutes standard clinical practice, it seems crucial to get ahead of these developments and address their ethical challenges early on. Following a short general introduction concerning ML in psychiatry, we do so by focusing on schizophrenia as a paradigmatic case. Based on recent research employing ML to further the diagnosis, treatment, and prediction of schizophrenia, we discuss three hypothetical case studies of ML applications with a view to their ethical dimensions. Throughout this discussion, we follow the principlist framework of Tom Beauchamp and James Childress to analyse potential problems in detail. In particular, we structure our analysis around their principles of beneficence, non-maleficence, respect for autonomy, and justice. We conclude with a call for cautious optimism concerning the implementation of ML in psychiatry, provided that close attention is paid to the particular intricacies of psychiatric disorders and that its success is evaluated on the basis of tangible clinical benefit for patients.


Subjects
Machine Learning; Psychiatry/methods; Schizophrenia; Algorithms; Bioethics; Diagnosis, Computer-Assisted/ethics; Diagnosis, Computer-Assisted/methods; Humans; Machine Learning/ethics; Schizophrenia/diagnosis; Schizophrenia/therapy
19.
J Am Acad Psychiatry Law; 48(3): 345-349, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32409300

ABSTRACT

Artificial intelligence is rapidly transforming the landscape of medicine. Specifically, algorithms powered by deep learning are already gaining increasingly wide adoption in fields such as radiology, pathology, and preventive medicine. Forensic psychiatry is a complex and intricate specialty that seeks to balance the disparate approaches of psychiatric science, which strives to explain human behavior deterministically, and the law, which emphasizes free choice and moral responsibility. This balancing, a central task of the forensic psychiatrist, is necessarily fraught with ambiguity. Such a complex task may intuitively seem impenetrable to artificial intelligence. This article first aims to challenge this assumption and then seeks to address the unique concerns posed by the adoption of artificial intelligence in violence risk assessment and prediction. The relevant ethics concerns are analyzed within the framework of traditional bioethics principles. Finally, recommendations for practitioners, ethicists, and others are offered as a starting point for further discussion.


Subjects
Artificial Intelligence/ethics; Forensic Psychiatry; Machine Learning/ethics; Risk Assessment/methods; Violence; Beneficence; Humans; Personal Autonomy; Social Justice
20.
Bull World Health Organ; 98(4): 270-276, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32284651

ABSTRACT

The application of digital technology to psychiatry research is rapidly leading to new discoveries and capabilities in the field of mobile health. However, the increase in opportunities to passively collect vast amounts of detailed information on study participants, coupled with advances in statistical techniques that enable machine learning models to process such information, has raised novel ethical dilemmas regarding researchers' duties to: (i) monitor adverse events and intervene accordingly; (ii) obtain fully informed, voluntary consent; (iii) protect the privacy of participants; and (iv) increase the transparency of powerful machine learning models to ensure they can be applied ethically and fairly in psychiatric care. This review highlights emerging ethical challenges and unresolved ethical questions in mobile health research and provides recommendations on how mobile health researchers can address these issues in practice. Ultimately, the hope is that this review will facilitate continued discussion on how to achieve best practice in mobile health research within psychiatry.


Subjects
Ethics, Research; Machine Learning/ethics; Psychiatry; Telemedicine/ethics; Informed Consent; Privacy