Results 1 - 20 of 104
1.
Proc Natl Acad Sci U S A ; 121(31): e2310458121, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39042690

ABSTRACT

The industrial revolution of the 19th century marked the onset of an era of machines and robots that transformed societies. Since the beginning of the 21st century, a new generation of robots envisions similar societal transformation. These robots are biohybrid: part living and part engineered. They may self-assemble and emerge from complex interactions between living cells. While this new era of living robots presents unprecedented opportunities for positive societal impact, it also poses a host of ethical challenges. A systematic, nuanced examination of these ethical issues is of paramount importance to guide the evolution of this nascent field. Multidisciplinary fields face the challenge that inertia around collective action to address ethical boundaries may result in unexpected consequences for researchers and societies alike. In this Perspective, we i) clarify the ethical challenges associated with biohybrid robotics; ii) discuss the need for and elements of a potential governance framework tailored to this technology; and iii) propose tangible steps toward ethical compliance and policy formation in the field of biohybrid robotics.


Subjects
Robotics, Robotics/ethics
2.
Sci Eng Ethics ; 30(4): 27, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38888795

ABSTRACT

Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.


Subjects
Artificial Intelligence, Decision Making, Social Responsibility, Humans, Artificial Intelligence/ethics, Decision Making/ethics, Decision Support Techniques, Judgment, Machine Learning/ethics, Ownership, Robotics/ethics
3.
J Med Internet Res ; 26: e48126, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38888953

ABSTRACT

BACKGROUND: Technological advances in robotics, artificial intelligence, cognitive algorithms, and internet-based coaches have contributed to the development of devices capable of responding to some of the challenges resulting from demographic aging. Numerous studies have explored the use of robotic coaching solutions (RCSs) for supporting healthy behaviors in older adults and have shown their benefits regarding the quality of life and functional independence of older adults at home. However, the use of RCSs by individuals who are potentially vulnerable raises many ethical questions. Establishing an ethical framework to guide the development, use, and evaluation practices regarding RCSs for older adults seems highly pertinent. OBJECTIVE: The objective of this paper was to highlight the ethical issues related to the use of RCSs for health care purposes among older adults and draft recommendations for researchers and health care professionals interested in using RCSs for older adults. METHODS: We conducted a narrative review of the literature to identify publications including an analysis of the ethical dimension and recommendations regarding the use of RCSs for older adults. We used a qualitative analysis methodology inspired by a Health Technology Assessment model. We included all article types such as theoretical papers, research studies, and reviews dealing with ethical issues or recommendations for the implementation of these RCSs in a general population, particularly among older adults, in the health care sector and published after 2011 in either English or French. The review was performed between August and December 2021 using the PubMed, CINAHL, Embase, Scopus, Web of Science, IEEE Xplore, SpringerLink, and PsycINFO databases. Selected publications were analyzed using the European Network of Health Technology Assessment Core Model (version 3.0) around 5 ethical topics: benefit-harm balance, autonomy, privacy, justice and equity, and legislation.
RESULTS: In the 25 publications analyzed, the most cited ethical concerns were the risk of accidents, lack of reliability, loss of control, risk of deception, risk of social isolation, data confidentiality, and liability in case of safety problems. Recommendations included collecting the opinion of target users, collecting their consent, and training professionals in the use of RCSs. Proper data management, anonymization, and encryption appeared to be essential to protect RCS users' personal data. CONCLUSIONS: Our analysis supports the interest in using RCSs for older adults because of their potential contribution to individuals' quality of life and well-being. This analysis highlights many ethical issues linked to the use of RCSs for health-related goals. Future studies should consider the organizational consequences of the implementation of RCSs and the influence of cultural and socioeconomic specificities of the context of experimentation. We suggest implementing a scalable ethical and regulatory framework to accompany the development and implementation of RCSs for various aspects related to the technology, individual, or legal aspects.


Subjects
Robotics, Humans, Aged, Robotics/ethics, Mentoring/methods, Mentoring/ethics, Quality of Life
4.
Stud Health Technol Inform ; 313: 41-42, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38682502

ABSTRACT

The present study aims to describe ethical and social requirements for technical and robotic systems for caregiving from the perspective of users. Users are interviewed in the ReduSys project during the development phase (prospective viewpoint) and after technology testing in the clinical setting (retrospective viewpoint). The preliminary results presented here refer to the prospective viewpoint.


Subjects
Robotics, Robotics/ethics, Humans, Morals, Patient Care/ethics
6.
Behav Brain Sci ; 46: e30, 2023 04 05.
Article in English | MEDLINE | ID: mdl-37017043

ABSTRACT

Do people hold robots responsible for their actions? While Clark and Fischer present a useful framework for interpreting social robots, we argue that they fail to account for people's willingness to assign responsibility to robots in certain contexts, such as when a robot performs actions not predictable by its user or programmer.


Subjects
Behavior, Psychological Models, Robotics, Humans, Robotics/ethics, Robotics/methods, Emotions, Consciousness
7.
Behav Brain Sci ; 46: e31, 2023 04 05.
Article in English | MEDLINE | ID: mdl-37017056

ABSTRACT

The target article proposes that people perceive social robots as depictions rather than as genuine social agents. We suggest that people might instead view social robots as social agents, albeit agents with more restricted capacities and moral rights than humans. We discuss why social robots, unlike other kinds of depictions, present a special challenge for testing the depiction hypothesis.


Subjects
Morals, Robotics, Humans, Robotics/ethics
8.
Rev. bioét. derecho ; (53): 181-202, 2021.
Article in English | IBECS | ID: ibc-228096

ABSTRACT

A few companies around the world are now developing and selling sex robots. Questions such as "how will relationships with robots impact human relations in the future?" emerge when technologies are used to meet the social and emotional needs of individuals. Considering that technology and design have embedded values and biases, this article surveys the use of sex robots from a bioethical perspective. Relationships with robots and computational systems, such as artificial intelligence, are a possibility for many people around the world. We present the questions raised by voices in favor of robot sex and against it. Beyond a binary polarization, the bioethical perspective recalls the Foucauldian concepts of biopolitics and biopower to situate the problems with the mechanization of intimate relationships. We argue that sex robots offer the opportunity to review old patterns regarding gender, inequality, and health.


Subjects
Humans, Bioethical Issues, Politics, Robotics/ethics, Artificial Intelligence
9.
PLoS One ; 15(7): e0235361, 2020.
Article in English | MEDLINE | ID: mdl-32673326

ABSTRACT

Most people struggle to understand probability, which is an issue for Human-Robot Interaction (HRI) researchers who need to communicate risks and uncertainties to the participants in their studies, the media, and policy makers. Previous work showed that even the use of numerical values to express probabilities does not guarantee an accurate understanding by laypeople. We therefore investigate whether words such as "likely" and "almost certainly not" can be used to communicate probability. We embedded these phrases in the context of the usage of autonomous vehicles. The results show that the association of phrases with percentages is not random and there is a preferred order of phrases. The association is, however, not as consistent as hoped for. Hence, it would be advisable to complement the use of words with a numerical expression of uncertainty. This study provides an empirically verified list of probability phrases that HRI researchers can use to complement numerical values.


Subjects
Brain-Computer Interfaces/trends, Robotics/trends, Brain-Computer Interfaces/ethics, Humans, Probability, Risk Factors, Robotics/ethics
10.
J Alzheimers Dis ; 76(2): 461-466, 2020.
Article in English | MEDLINE | ID: mdl-32568203

ABSTRACT

Socially assistive robots have the potential to improve aged care by providing assistance through social interaction. While some evidence suggests a positive impact of social robots on measures of well-being, the adoption of robotic technology remains slow. One approach to improve technology adoption is involving all stakeholders in the process of technology development using co-creation methods. To capture relevant stakeholders' priorities and perceptions on the ethics of robotic companions, we conducted an interactive co-creation workshop at the 2019 Geriatric Services Conference in Vancouver, BC. The participants were presented with different portrayals of robotic companions in popular culture and answered questions about perceptions, expectations, and ethical concerns about the implementation of robotic technology. Our results reveal that the most pressing ethical concerns with robotic technology, such as issues related to privacy, are critical potential barriers to technology adoption. We also found that most participants agree on the types of tasks that robots should help with, such as domestic chores, communication, and medication reminders. Activities that robots should not help with, according to the stakeholders, included bathing, toileting, and managing finances. The perspectives that were captured contribute to a preliminary outline of the areas of importance for geriatric care stakeholders in the process of ethical technology design and development.


Subjects
Aging/psychology, Congresses as Topic, Education/methods, Robotics/methods, Social Interaction, Aged, Aging/ethics, British Columbia, Congresses as Topic/ethics, Education/ethics, Feasibility Studies, Humans, Pilot Projects, Robotics/ethics
12.
Cuad Bioet ; 31(101): 87-100, 2020.
Article in Spanish | MEDLINE | ID: mdl-32304201

ABSTRACT

Beyond the utopian or dystopian scenarios that accompany the progressive introduction of care robots into everyday environments, their use in the medical field entails controversies that require alternative forms of ethical responsibility. With this general objective, in this article we propose a series of reflections to articulate an ethical framework capable of guiding the introduction and use of robots in the field of health. The proposal is developed from a series of considerations about robots and care, as a starting point for an ethical framework based on the precautionary principle and measured action. We propose a non-essentialist conceptualization of robots that emphasizes their relational and contextual nature, understanding robots as heterogeneous artifacts that are constituted in a network of therapeutic relationships and that mediate our care relationships. This approach has a set of implications, which we articulate around measured action as an ethical proposal. Measured action, in our interpretation, responds to the precautionary principle and is configured through four dimensions: (1) institutional commitment; (2) the integration of the fears and hopes of all concerned actors; (3) progressive and revocable actions under continuous monitoring and evaluation; and (4) the incorporation into the design process of those actors who practice "good care".


Subjects
Bioethical Issues, Delivery of Health Care/ethics, Robotics/ethics, Uncertainty, Humans, Morals
13.
J Alzheimers Dis ; 76(2): 445-455, 2020.
Article in English | MEDLINE | ID: mdl-32250295

ABSTRACT

Due to the high costs of providing long-term care to older adults with cognitive impairment, artificial companions are increasingly considered as a cost-efficient way to provide support. Artificial companions can comfort, entertain, and inform, and even induce a sense of being in a close relationship. Sensors and algorithms are increasingly leading to applications that exude a life-like feel. We focus on a case study of an artificial companion for people with cognitive impairment. This companion is an avatar on an electronic tablet that is displayed as a dog or a cat. Whereas artificial intelligence guides most artificial companions, this application also relies on technicians "behind" the on-screen avatar, who, via surveillance, interact with users. This case is notable because it particularly illustrates the tension between the endless opportunities offered by technology and the ethical issues stemming from limited regulations. Reviewing the case through the lens of biomedical ethics, concerns of deception, monitoring and tracking, as well as informed consent and social isolation are raised by the introduction of this technology to users with cognitive impairment. We provide a detailed description of the case, review the main ethical issues and present two theoretical frameworks, the "human-driven technology" platform and the emancipatory gerontology framework, to inform the design of future applications.


Subjects
Artificial Intelligence/ethics, Cognitive Dysfunction/therapy, Friends, Patient Care Team/ethics, Robotics/ethics, Aged, Animals, Artificial Intelligence/standards, Cats, Cognitive Dysfunction/psychology, Dogs, Friends/psychology, Humans, Patient Care Team/standards, Robotics/standards
14.
AJOB Neurosci ; 11(2): 120-127, 2020.
Article in English | MEDLINE | ID: mdl-32228385

ABSTRACT

The ethics of robots and artificial intelligence (AI) typically centers on "giving ethics" to as-yet imaginary AI with human levels of autonomy in order to protect us from their potentially destructive power. It is often assumed that to do that, we should program AI with the true moral theory (whatever that might be), much as we teach morality to our children. This paper argues that the focus on AI with human-level autonomy is misguided. The robots and AI that we have now and in the near future are "semi-autonomous" in that their ability to make choices and to act is limited across a number of dimensions. Further, it may be morally problematic to create AI with human-level autonomy, even if it becomes possible. As such, any useful approach to AI ethics should begin with a theory of giving ethics to semi-autonomous agents (SAAs). In this paper, we work toward such a theory by evaluating our obligations to and for "natural" SAAs, including nonhuman animals and humans with developing and diminished capacities. Drawing on research in neuroscience, bioethics, and philosophy, we identify the ways in which AI semi-autonomy differs from semi-autonomy in humans and nonhuman animals. We conclude on the basis of these comparisons that when giving ethics to SAAs, we should focus on principles and restrictions that protect human interests, but that we can only permissibly maintain this approach so long as we do not aim at developing technology with human-level autonomy.


Subjects
Artificial Intelligence/ethics, Bioethics, Personal Autonomy, Animals, Humans, Robotics/ethics
15.
Cuad. bioét ; 31(101): 87-100, ene.-abr. 2020.
Article in Spanish | IBECS | ID: ibc-197139

ABSTRACT

Beyond the utopian or dystopian scenarios that accompany the progressive introduction of care robots into everyday environments, their use in the medical field entails controversies that require alternative forms of ethical responsibility. With this general objective, in this article we propose a series of reflections to articulate an ethical framework capable of guiding the introduction and use of robots in the field of health. The proposal is developed from a series of considerations about robots and care, as a starting point for an ethical framework based on the precautionary principle and measured action. We propose a non-essentialist conceptualization of robots that emphasizes their relational and contextual nature, understanding robots as heterogeneous artifacts that are constituted in a network of therapeutic relationships and that mediate our care relationships. This approach has a set of implications, which we articulate around measured action as an ethical proposal. Measured action, in our interpretation, responds to the precautionary principle and is configured through four dimensions: (1) institutional commitment; (2) the integration of the fears and hopes of all concerned actors; (3) progressive and revocable actions under continuous monitoring and evaluation; and (4) the incorporation into the design process of those actors who practice "good care".


Subjects
Humans, Robotics/ethics, Medical Care/ethics, Medical Care/methods, Outcome Assessment (Health Care), Biomedical Technology Assessment
16.
Sci Eng Ethics ; 26(1): 141-157, 2020 02.
Article in English | MEDLINE | ID: mdl-30701408

ABSTRACT

This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is "computable" depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. The first type is so-called rookie mistakes, which could be addressed by providing these people with the necessary ethical knowledge. The second, more difficult methodological issue concerns areas of peer disagreement in ethics, where no easy solutions are currently available. This paper examines several existing approaches to highlight the ethical pitfalls and challenges involved. Familiarity with these and similar problems can help programmers to avoid pitfalls and build better moral machines. The paper concludes that ethical decisions regarding moral robots should be based on avoiding what is immoral (i.e. prohibiting certain immoral actions) in combination with a pluralistic ethical method of solving moral problems, rather than relying on a particular ethical approach, so as to avoid a normative bias.


Subjects
Artificial Intelligence/ethics, Decision Making/ethics, Ethical Theory, Morals, Robotics/ethics, Dissent and Disputes, Ethicists, Researchers/ethics, Software/ethics
17.
J Gerontol B Psychol Sci Soc Sci ; 75(9): 1996-2007, 2020 10 16.
Article in English | MEDLINE | ID: mdl-31131848

ABSTRACT

OBJECTIVES: Socially assistive robots (SARs) need to be studied from older adults' perspective, given their predicted future ubiquity in aged-care settings. Current ethical discourses on SARs in aged care are uninformed by primary stakeholders' ethical perceptions. This study reports on what community-dwelling older adults in Flanders, Belgium, perceive as ethical issues of SARs in aged care. METHODS: Constructivist grounded theory guided the study of 9 focus groups of 59 community-dwelling older adults (70+ years) in Flanders, Belgium. An open-ended topic guide and a modified Alice Cares documentary focused discussions. The Qualitative Analysis Guide of Leuven (QUAGOL) guided data analysis. RESULTS: Data revealed older adults' multidimensional perceptions on the ethics of SARs which were structured along three sections: (a) SARs as components of a techno-societal evolution, (b) SARs' embeddedness in aged-care dynamics, (c) SARs as embodiments of ethical considerations. DISCUSSION: Perceptions sociohistorically contextualize the ethics of SAR use by older adults' views on societal, organizational, and relational contexts in which aged care takes place. These contexts need to inform the ethical criteria for the design, development, and use of SARs. Focusing on older adults' ethical perceptions creates "normativity in place," viewing participants as moral subjects.


Subjects
Aging, Independent Living, Robotics, Assistive Technology, Social Perception/psychology, Aged, Aging/ethics, Aging/psychology, Belgium, Female, Focus Groups, Grounded Theory, Humans, Independent Living/ethics, Independent Living/psychology, Inventions/ethics, Male, Qualitative Research, Robotics/ethics, Robotics/trends, Assistive Technology/ethics, Assistive Technology/psychology, Assistive Technology/trends, Social Evolution
18.
J Med Ethics ; 46(2): 128-136, 2020 02.
Article in English | MEDLINE | ID: mdl-31818967

ABSTRACT

Different embodiments of technology permeate all layers of public and private domains in society. In the public domain of aged care, attention is increasingly focused on the use of socially assistive robots (SARs) supporting caregivers and older adults to guarantee that older adults receive care. The introduction of SARs in aged-care contexts is accompanied by intensive empirical and philosophical research. Although these efforts merit praise, empirical and philosophical research remain too far apart. Strengthening the connection between these two fields is crucial to have a full understanding of the ethical impact of these technological artefacts. To bridge this gap, we propose a philosophical-ethical framework for SAR use, one that is grounded in the dialogue between empirical-ethical knowledge about and philosophical-ethical reflection on SAR use. We highlight the importance of considering the intuitions of older adults and their caregivers in this framework. Grounding philosophical-ethical reflection in these intuitions opens the ethics of SAR use in aged care to its own socio-historical contextualisation. Referring to the work of Margaret Urban Walker, Joan Tronto and Andrew Feenberg, it is argued that this socio-historical contextualisation of the ethics of SAR use already has strong philosophical underpinnings. Moreover, this contextualisation enables us to formulate a rudimentary decision-making process about SAR use in aged care which rests on three pillars: (1) stakeholders' intuitions about SAR use as sources of knowledge; (2) interpretative dialogues as democratic spaces to discuss the ethics of SAR use; (3) the concretisation of ethics in SAR use.


Subjects
Decision Making/ethics, Homes for the Aged, Nursing Homes, Robotics/ethics, Social Interaction, Social Isolation, Aged, Aged 80 and over, Caregivers, Communication, Empirical Research, Humans, Intuition, Knowledge, Morals, Philosophy
19.
Camb Q Healthc Ethics ; 29(1): 115-121, 2020 01.
Article in English | MEDLINE | ID: mdl-31858938

ABSTRACT

This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations, and a more analytic look at related ethical issues such as data privacy, physician dependency on poorly understood AI helpware, bias in data used to create algorithms post-GDPR, and changes to the patient-physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information might suggest a positive shift in the patient-physician relationship, the physician's 'need to care' might be irreplaceable, and robot healthcare workers ('robot carers') might be seen as contributing to dehumanized healthcare practices.


Subjects
Artificial Intelligence/ethics, Medical Ethics, Physician-Patient Relations, Artificial Intelligence/legislation & jurisprudence, Confidentiality/ethics, European Union, Humans, Informed Consent, Physicians, Robotics/ethics, Robotics/legislation & jurisprudence
20.
BMC Med Ethics ; 20(1): 98, 2019 12 19.
Article in English | MEDLINE | ID: mdl-31856798

ABSTRACT

BACKGROUND: Advances in artificial intelligence (AI), robotics and wearable computing are creating novel technological opportunities for mitigating the global burden of population ageing and improving the quality of care for older adults with dementia and/or age-related disability. Intelligent assistive technology (IAT) is the umbrella term defining this ever-evolving spectrum of intelligent applications for the older and disabled population. However, the implementation of IATs has been observed to be sub-optimal due to a number of barriers in the translation of novel applications from the design lab to the bedside. Furthermore, since these technologies are designed to be used by vulnerable individuals with age- and multi-morbidity-related frailty and cognitive disability, they are perceived to raise important ethical challenges, especially when they involve machine intelligence, collect sensitive data or operate in close proximity to the human body. Thus, the goal of this paper is to explore and assess the ethical issues that professional stakeholders perceive in the development and use of IATs in elderly and dementia care. METHODS: We conducted a multi-site study involving semi-structured qualitative interviews with researchers and health professionals. We analyzed the interview data using a descriptive thematic analysis to inductively explore relevant ethical challenges. RESULTS: Our findings indicate that professional stakeholders find issues of patient autonomy and informed consent, quality of data management, distributive justice and human contact as ethical priorities. Divergences emerged in relation to how these ethical issues are interpreted, how conflicts between different ethical principles are resolved and what solutions should be implemented to overcome current challenges.
CONCLUSIONS: Our findings indicate a general agreement among professional stakeholders on the ethical promises and challenges raised by the use of IATs among older and disabled users. Yet, notable divergences persist regarding how these ethical challenges can be overcome and what strategies should be implemented for the safe and effective implementation of IATs. These findings provide technology developers with useful information about unmet ethical needs. Study results may guide policy makers with firsthand information from relevant stakeholders about possible solutions for ethically-aligned technology governance.


Subjects
Artificial Intelligence/ethics, Assistive Technology/ethics, Dementia, Europe, Female, Health Personnel/psychology, Humans, Interviews as Topic, Male, Qualitative Research, Researchers/psychology, Robotics/ethics, Stakeholder Participation