3.
BMJ Health Care Inform; 29(1), 2022 Apr.
Article in English | MEDLINE | ID: mdl-35396245

ABSTRACT

OBJECTIVE: To demonstrate what it takes to reconcile the idea of fairness in medical algorithms and machine learning (ML) with the broader discourse of fairness and health equality in health research. METHOD: The methodological approach used in this paper is theoretical and ethical analysis. RESULT: We show that the question of ensuring comprehensive ML fairness is interrelated with three quandaries and one dilemma. DISCUSSION: As fairness in ML depends on a nexus of inherent justice and fairness concerns embedded in health research, a comprehensive conceptualisation is called for to make the notion useful. CONCLUSION: This paper demonstrates that more analytical work is needed to conceptualise fairness in ML so that it adequately reflects the complexity of justice and fairness concerns within the field of health research.


Subject(s)
Machine Learning, Social Justice, Algorithms, Humans
4.
Sci Eng Ethics; 28(2): 17, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35362822

ABSTRACT

This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. We argue that the collaborative model is the most promising for covering most AI technology, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decision-making.


Subject(s)
Artificial Intelligence, Machine Learning, Humans, Morals
6.
Stud Hist Philos Sci; 69: 52-59, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29857801

ABSTRACT

The role of scientists as experts is crucial to public policymaking. However, the expert role is contested and unsettled in both public and scholarly discourse. In this paper, I provide a systematic account of the role of scientists as experts in policymaking by examining whether there are any normatively relevant differences between this role and the role of scientists as researchers. Two different interpretations can be given of how the two roles relate to each other. The separability view states that there is a normatively relevant difference between the two roles, whereas the inseparability view denies that there is such a difference. Based on a systematic analysis of the central aspects of the role of scientists as experts (that is, its aim, context, mode of output, and standards), I propose a moderate version of the separability view. Whereas the aim of scientific research is typically to produce new knowledge through the use of scientific method for evaluation and dissemination in internal settings, the aim of the expert is to provide policymakers and the public with relevant and applicable knowledge that can premise political reasoning and deliberation.
