1.
J Med Ethics ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955479

ABSTRACT

Considering public moral attitudes is a hallmark of the anticipatory governance of emerging biotechnologies, such as heritable human genome editing. However, such anticipatory governance often overlooks that future morality is open to change and that future generations may assess the very biotechnologies we are trying to govern quite differently. In this article, we identify an 'anticipatory gap' that has not been sufficiently addressed in the discussion on the public governance of heritable genome editing: uncertainty about future generations' moral visions of the emerging applications we are attempting to govern in the present. The paper motivates the relevance of this anticipatory gap, identifies the challenges it generates, and offers recommendations so that moral uncertainty does not lead to governance paralysis with regard to human germline genome editing.

2.
Camb Q Healthc Ethics ; : 1-14, 2023 Jul 27.
Article in English | MEDLINE | ID: mdl-37496126

ABSTRACT

We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with "locked-in" syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the "risk asymmetry argument," which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article frames the problem as an ethical-epistemic challenge. It argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human-AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.

3.
Ethical Theory Moral Pract ; : 1-22, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37362087

ABSTRACT

The idea that technologies can change moral beliefs and practices is an old one. But how, exactly, does this happen? This paper builds on an emerging field of inquiry by developing a synoptic taxonomy of the mechanisms of techno-moral change. It argues that technology affects moral beliefs and practices in three main domains: decisional (how we make morally loaded decisions), relational (how we relate to others) and perceptual (how we perceive situations). Across these three domains, it identifies six primary mechanisms of techno-moral change: (i) adding options; (ii) changing decision-making costs; (iii) enabling new relationships; (iv) changing the burdens and expectations within relationships; (v) changing the balance of power in relationships; and (vi) changing perception (information, mental models and metaphors). The paper also discusses the layered, interactive and second-order effects of these mechanisms.

4.
Philos Technol ; 35(2): 26, 2022.
Article in English | MEDLINE | ID: mdl-35378903

ABSTRACT

There is a concern that the widespread deployment of autonomous machines will open up a number of 'responsibility gaps' throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on 'plugging' or 'dissolving' the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are sometimes to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace certain kinds of responsibility gap. The argument is based on the idea that human morality is often tragic. We frequently confront situations in which competing moral considerations pull in different directions, and it is impossible to balance these considerations perfectly. This heightens the burden of responsibility associated with our choices. We cope with the tragedy of moral choice in different ways. Sometimes we delude ourselves into thinking the choices we make were not tragic (illusionism); sometimes we delegate the tragic choice to others (delegation); sometimes we make the choice ourselves and bear the psychological consequences (responsibilisation). Each of these strategies has its benefits and costs. One potential advantage of autonomous machines is that they enable a reduced-cost form of delegation. However, we only gain this advantage if we accept that some techno-responsibility gaps are virtuous.

5.
Camb Q Healthc Ethics ; 30(4): 585-603, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34702409

Subject(s)
Technology; Humans
6.
Camb Q Healthc Ethics ; 30(3): 472-478, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34109926

Subject(s)
Moral Status; Morals; Cognition; Humans
7.
Sci Eng Ethics ; 26(4): 2023-2049, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31222612

ABSTRACT

Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory, 'ethical behaviourism', which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.


Subject(s)
Moral Obligations; Robotics; Behaviorism; Beneficence; Ethical Analysis; Ethical Theory
9.
Am J Bioeth ; 19(7): 16-18, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31543067

Subject(s)
Gene Editing; Child; Humans
10.
Med Law Rev ; 27(4): 553-575, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-30938445

ABSTRACT

In July 2014, the roboticist Ronald Arkin suggested that child sex robots could be used to treat those with paedophilic predilections in the same way that methadone is used to treat heroin addicts. Taking this on board, it would seem that there is reason to experiment with the regulation of this technology. But most people seem to disagree with this idea, with legal authorities in both the UK and US taking steps to outlaw such devices. In this article, I subject these different regulatory attitudes to critical scrutiny. In doing so, I make three main contributions to the debate. First, I present a framework for thinking about the regulatory options that we confront when dealing with child sex robots. Second, I argue that there is a prima facie case for restrictive regulation, but that this is contingent on whether Arkin's hypothesis has a reasonable prospect of being successfully tested. Third, I argue that Arkin's hypothesis probably does not have a reasonable prospect of being successfully tested. Consequently, we should proceed with utmost caution when it comes to this technology.


Subject(s)
Commerce/ethics; Commerce/legislation & jurisprudence; Ethical Analysis; Government Regulation; Pedophilia/therapy; Robotics/ethics; Robotics/legislation & jurisprudence; Adult; Child; Child Abuse, Sexual/prevention & control; Humans; Morals; Pedophilia/economics; Play and Playthings; Robotics/economics
12.
Am J Bioeth ; 18(2): 3-19, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29393796

ABSTRACT

The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to question the ethical and normative implications of this development. In this article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship (QR). We identify eight core objections to the QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness toward this technology and advocate the development of a research agenda for the positive use of QR technologies.


Subject(s)
Interpersonal Relations; Medical Informatics/ethics; Personal Autonomy; Self-Management/ethics; Humans; Object Attachment; Personal Satisfaction
13.
Sci Eng Ethics ; 24(4): 1097-1118, 2018 Aug.
Article in English | MEDLINE | ID: mdl-28674931

ABSTRACT

This article argues that the creation of artificial offspring could make our lives more meaningful (i.e. satisfy more meaning-relevant conditions of value). By 'artificial offspring' I mean beings that we construct, with a mix of human and non-human-like qualities. Robotic artificial intelligences are paradigmatic examples of the form. There are two reasons for thinking that the creation of such beings could make our lives more meaningful and valuable. The first is that the existence of a collective afterlife, i.e. a set of human-like lives that continue after we die, is likely to be an important source and sustainer of meaning in our present lives (Scheffler in Death and the afterlife, OUP, Oxford, 2013). The second is that the creation of artificial offspring provides a plausible and potentially better pathway to a collective afterlife than the traditional biological pathway (i.e. there are reasons to favour this pathway and there are no good defeaters to trying it out). Both of these arguments are defended against a variety of objections and misunderstandings.


Subject(s)
Artificial Intelligence; Attitude to Death; Life; Robotics; Dissent and Disputes; Humans
14.
Sci Eng Ethics ; 23(1): 41-64, 2017 Feb.
Article in English | MEDLINE | ID: mdl-26968572

ABSTRACT

Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if (as is to be expected) they do not remain confined to the economic sphere. Third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: (1) the literature on technological unemployment and workplace automation; (2) the antiwork critique, which I argue gives reasons to embrace technological unemployment; and (3) the philosophical debate about the conditions for meaning in life, which I argue gives reasons for concern.


Subject(s)
Quality of Life; Unemployment/psychology; Humans; Social Justice/ethics; Technology/ethics; Technology/trends; Unemployment/trends
15.
Bioethics ; 30(8): 568-78, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27519124

ABSTRACT

Are universities justified in trying to regulate student use of cognitive enhancing drugs? In this article I argue that they can be, but that the most appropriate kind of regulatory intervention is likely to be voluntary in nature. To be precise, I argue that universities could justifiably adopt a commitment contract system of regulation wherein students are encouraged to voluntarily commit to not using cognitive enhancing drugs (or to using them in a specific way). If they are found to breach that commitment, they should be penalized by, for example, forfeiting a number of marks on their assessments. To defend this model of regulation, I adopt a recently proposed evaluative framework for determining the appropriateness of enhancement in specific domains of activity, and I focus on particular existing types of cognitive enhancement drugs, not hypothetical or potential forms. In this way, my argument is tailored to the specific features of university education and to common patterns of usage among students. It is not concerned with the general ethical propriety of using cognitive enhancing drugs.


Subject(s)
Bioethical Issues; Cognition/drug effects; Nootropic Agents; Students/psychology; Contracts; Humans; Policy; Universities
16.
J Med Ethics ; 42(9): 611-8, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27354246

ABSTRACT

It is widely believed that a conservative moral outlook is opposed to biomedical forms of human enhancement. In this paper, I argue that this widespread belief is incorrect. Using Cohen's evaluative conservatism as my starting point, I argue that there are strong conservative reasons to prioritise the development of biomedical enhancements. In particular, I suggest that biomedical enhancement may be essential if we are to maintain our current evaluative equilibrium (ie, the set of values that undergird and permeate our current political, economic and personal lives) against the threats to that equilibrium posed by external, non-biomedical forms of enhancement. I defend this view against modest conservatives who insist that biomedical enhancements pose a greater risk to our current evaluative equilibrium, and against those who see no principled distinction between biomedical and non-biomedical forms of human enhancement.


Subject(s)
Biomedical Enhancement; Politics; Biomedical Enhancement/ethics; Ethical Analysis; Ethical Theory; Humans; Moral Obligations; Social Change; Social Values