Results 1 - 20 of 45
1.
Medicine (Baltimore) ; 103(21): e38330, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38788002

ABSTRACT

This paper examines the legal challenges associated with medical robots, including their legal status, liability in cases of malpractice, and concerns over patient data privacy and security, and scrutinizes China's nuanced response to these dilemmas. An analysis of Chinese judicial practices and legislative actions suggests that the current denial of legal personality to AI is commendable. To effectively control the financial risks associated with medical robots, there is an urgent need for clear guidelines on allocating responsibility for medical accidents involving such robots, the implementation of strict data protection laws, and the strengthening of industry standards and regulations.


Subject(s)
Liability, Legal , Robotics , Humans , China , Robotics/legislation & jurisprudence , Malpractice/legislation & jurisprudence , Computer Security/legislation & jurisprudence , Confidentiality/legislation & jurisprudence
2.
Enferm. clín. (Ed. impr.) ; 30(supl.1): 54-59, feb. 2020.
Article in English | IBECS | ID: ibc-189615

ABSTRACT

The care of older adults is becoming a serious concern in Japan, which has a rapidly aging population and a low birthrate. The development of robots is being pushed forward as a measure to compensate for the healthcare worker shortage. The purpose of this paper is to consider the potential legal issues of caring healthcare robots (CHRs) for older adults' care. A CHR must recognize the speech, face, and presence of older adults, and make judgments and relay information based on the information it acquires. CHRs fulfill the caring function by being close to patients, knowing them deeply, and looking after them. Therefore, communication functions driven by advanced artificial intelligence and grounded in caring in nursing are essential for CHRs. Maintaining and improving the activities of daily living (ADL) of older adults means facilitating activities through bidirectional information relay. Furthermore, without guarantees on safety and the ascription of responsibilities, the introduction of CHRs into clinical practice will not proceed. As laws differ from country to country, it is necessary to examine Japanese policies and related laws when using CHRs in Japan. Currently, there are no central rules on information security. In Japan, law is typically made after a case has occurred; thus, novel issues will arise without the benefit of legal guidance. Creating a broad legal framework or taking preventive measures at an early stage is needed. Therefore, as a first step, establishing guidelines for the use of CHRs will be valuable.




Subject(s)
Humans , Aged , Old Age Assistance , Robotics/legislation & jurisprudence , Robotics/standards , Communication
4.
Camb Q Healthc Ethics ; 29(1): 115-121, 2020 01.
Article in English | MEDLINE | ID: mdl-31858938

ABSTRACT

This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations, and a more analytic look at related ethical issues such as data privacy, physician dependency on poorly understood AI helpware, bias in data used to create algorithms post-GDPR, and changes to the patient-physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information might suggest a positive shift in the patient-physician relationship, the physician's 'need to care' might be irreplaceable, and robot healthcare workers ('robot carers') might be seen as contributing to dehumanized healthcare practices.


Subject(s)
Artificial Intelligence/ethics , Ethics, Medical , Physician-Patient Relations , Artificial Intelligence/legislation & jurisprudence , Confidentiality/ethics , European Union , Humans , Informed Consent , Physicians , Robotics/ethics , Robotics/legislation & jurisprudence
6.
Rev. bioét. derecho ; (46): 5-8, jul. 2019.
Article in Spanish | IBECS | ID: ibc-184849

ABSTRACT

The expression "Artificial Intelligence" designates meanings that are very different from each other, if not antagonistic. In this article we examine some of these meanings, in particular, Artificial Intelligence as a "trending topic"; as "big data"; as "bias"; as a "socio-labor issue"; as an "entity without consciousness"; as an "entity with consciousness"; and, finally, as a "convergent discipline". Each of these expressions presents problems of a very different nature, some of which intersect with bioethics, hence their interest for our discipline. Legal regulations, dispersed and uncoordinated, and sometimes even absent, reflect the inherent difficulty of not being able to specify exactly what we are facing.


Subject(s)
Artificial Intelligence/ethics , Artificial Intelligence/legislation & jurisprudence , Big Data , Data Science , Bias , Robotics/ethics , Robotics/legislation & jurisprudence , Internet/ethics
7.
Rev. bioét. derecho ; (46): 47-66, jul. 2019.
Article in English | IBECS | ID: ibc-184851

ABSTRACT

The purpose of this paper is to illuminate the main ethical, legal and social implications (ELSIs) concerning social humanoid robots that are based on artificial intelligence (AI). The main dilemma highlighted touches upon the expansion of the concept of legal personhood, and the attribution of appropriate legal responses to govern the future proliferation of AI systems vis-à-vis social humanoid robots. The paper cautions on the need to carefully reflect on notions of personhood and human dignity for AI systems, balanced against the underlying representation of values and behaviors that may threaten to erode the human rights discourse. Additionally, it questions the wisdom of the broad expanse of the European legal response to the development and use of AI systems.


Subject(s)
Artificial Intelligence/ethics , Robotics/ethics , Robotics/legislation & jurisprudence , Human Rights/legislation & jurisprudence
8.
Rev. bioét. derecho ; (46): 37-84, jul. 2019. graf
Article in Portuguese | IBECS | ID: ibc-184852

ABSTRACT

It is the person that inaugurates the legal existence of the being: without it, it is difficult to reach agreement on the specifics of rights and duties. Within this relationship, transhumanism, a philosophy that advocates an enhanced human being that transcends its biological nature, finds practical support in the interaction of technology with biology, resulting in the gradual expansion of the modes of human "being", in which the cyborg emerges as a human potential differentiated in vulnerabilities and capabilities compared with modern Homo sapiens. Thus, the contemporary conception of legal personality is affected, and rethinking its formulation becomes necessary. It is argued that the unprecedented possibilities of protecting, and holding responsible, the cyborg imply its legal existence through a new person, the non-natural person.


Subject(s)
Humans , Cybernetics/ethics , Artificial Intelligence/ethics , Robotics/ethics , Aptitude/ethics , Adaptation, Physiological , Ergonomics , Human Development , Artificial Intelligence/legislation & jurisprudence , Robotics/legislation & jurisprudence , Human Rights/legislation & jurisprudence , Biomechanical Phenomena
9.
Med Law Rev ; 27(4): 553-575, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-30938445

ABSTRACT

In July 2014, the roboticist Ronald Arkin suggested that child sex robots could be used to treat those with paedophilic predilections in the same way that methadone is used to treat heroin addicts. Taking this on board, it would seem that there is reason to experiment with the regulation of this technology. But most people seem to disagree with this idea, with legal authorities in both the UK and US taking steps to outlaw such devices. In this article, I subject these different regulatory attitudes to critical scrutiny. In doing so, I make three main contributions to the debate. First, I present a framework for thinking about the regulatory options that we confront when dealing with child sex robots. Secondly, I argue that there is a prima facie case for restrictive regulation, but that this is contingent on whether Arkin's hypothesis has a reasonable prospect of being successfully tested. Thirdly, I argue that Arkin's hypothesis probably does not have a reasonable prospect of being successfully tested. Consequently, we should proceed with utmost caution when it comes to this technology.


Subject(s)
Commerce/ethics , Commerce/legislation & jurisprudence , Ethical Analysis , Government Regulation , Pedophilia/therapy , Robotics/ethics , Robotics/legislation & jurisprudence , Adult , Child , Child Abuse, Sexual/prevention & control , Humans , Morals , Pedophilia/economics , Play and Playthings , Robotics/economics
10.
J Int Bioethique Ethique Sci ; Vol. 30(3): 135-157, 2019 Nov 27.
Article in French | MEDLINE | ID: mdl-32372593

ABSTRACT

Which has its place in space, the human or the robot? The robot, because it can replace human beings on exploration missions that are particularly dangerous to both the health and the safety of astronauts. But humans also tend to gain a place in space, whether assisted by the robot as a tool that facilitates their work, or where the machine serves as a medium to extend humanity to the confines of the universe. All these hypotheses raise ethical and legal questions, to which the article offers some solutions.


Subject(s)
Artificial Intelligence , Astronauts , Robotics , Space Flight/ethics , Artificial Intelligence/ethics , Artificial Intelligence/legislation & jurisprudence , Astronauts/legislation & jurisprudence , Humans , Morals , Robotics/ethics , Robotics/legislation & jurisprudence
12.
J Int Bioethique Ethique Sci ; 29(3): 31-53, 2018.
Article in French | MEDLINE | ID: mdl-30767459

ABSTRACT

The robotization of the human implies a more or less intimate hybridization with the machine. When it participates in the repair of the human being, it generates a modification of the legal status of the robot, which passes from the category of things to that of persons, with remarkable effects in terms of civil liability. However, as the therapeutic aspect of hybridization disappears, not only does the robot move away from the category of persons, but the hybrid body raises many questions about fundamental rights.


Subject(s)
Delivery of Health Care , Legislation, Medical , Robotics/legislation & jurisprudence , Humans
13.
World J Urol ; 34(12): 1643-1650, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27000561

ABSTRACT

PURPOSE: To compare diameter as a continuous variable with categorical R.E.N.A.L. nephrometry score (RNS) in predicting surgical outcomes of robotic partial nephrectomy (RPN). METHODS: We retrospectively reviewed consecutive patients receiving RPN at our institution between July 2007 and June 2014 (n = 286). Three separate multivariate analyses were performed to assess the relationship between RNS components (R = radius, E = endophyticity, N = nearness to collecting system, L = location relative to polar lines), total RNS, and diameter as a continuous variable with operating time, warm ischemia time (WIT), and estimated blood loss (EBL). Each linear regression model's quality of fit to the data was assessed with the coefficient of determination (R²). RESULTS: Continuous tumor diameter and total RNS were each significantly correlated with operative time, EBL, and WIT (p < 0.001). Categorical R was related to operative time (R = 2 vs. R = 1, p = 0.001; R = 3 vs. R = 1, p = 0.001) and WIT (R = 2 vs. R = 1, p = 0.003; R = 3 vs. R = 1, p = 0.016), but not to EBL. For each of these outcomes, diameter outperformed both R and total RNS, as assessed by R². Age, body mass index, Charlson Comorbidity Index, and anterior versus posterior location did not correlate with surgical outcomes. CONCLUSIONS: In this series of RPN from a high-volume center, surgical outcomes were more closely related to tumor diameter than to RNS. While RNS provides surgeons a standardized tool for preoperative planning of renal masses, tumor size may be employed as a more familiar measurement when counseling patients on potential outcomes.
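The model comparison described in this abstract can be illustrated with a minimal sketch: fit one ordinary-least-squares model on a continuous predictor and one on the same predictor dummy-coded into categories, then compare their R² values. The data, size cutoffs, and outcome model below are synthetic stand-ins invented for illustration, not the study's actual data or code.

```python
# Illustrative sketch (synthetic data, not the study's cohort): comparing how
# well a continuous predictor vs. a categorical score explains an outcome,
# using the coefficient of determination (R²).
import numpy as np

rng = np.random.default_rng(0)
n = 286  # cohort size reported in the abstract

# Hypothetical stand-ins: tumor diameter (cm) and an outcome driven by it
diameter = rng.uniform(1.0, 8.0, n)
operative_time = 120 + 15 * diameter + rng.normal(0, 20, n)

def r_squared(X, y):
    """Fit ordinary least squares with an intercept and return R²."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

# Model 1: continuous diameter as the predictor
r2_continuous = r_squared(diameter.reshape(-1, 1), operative_time)

# Model 2: a categorical score (1/2/3 by assumed size cutoffs), dummy-coded
# so the model sees category membership rather than magnitude
score = np.digitize(diameter, [4.0, 7.0]) + 1
dummies = np.column_stack([(score == k).astype(float) for k in (2, 3)])
r2_categorical = r_squared(dummies, operative_time)

print(f"R² continuous:  {r2_continuous:.3f}")
print(f"R² categorical: {r2_categorical:.3f}")
```

Because the synthetic outcome depends linearly on diameter, discretizing into three bins discards information, so the continuous model fits better; the abstract reports the analogous pattern in the clinical data.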


Subject(s)
Kidney Neoplasms/surgery , Kidney/pathology , Laparoscopy/methods , Neoplasm Staging , Nephrectomy/methods , Robotics/legislation & jurisprudence , Tumor Burden , Female , Follow-Up Studies , Humans , Kidney Neoplasms/pathology , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , Treatment Outcome
17.
Ind Health ; 53(6): 498-504, 2015.
Article in English | MEDLINE | ID: mdl-26118854

ABSTRACT

In December 2013, the Japanese Ministry of Health, Labour and Welfare (MHLW) partially amended the safety regulations for the use of industrial robots so that "collaborative operation" could be performed at Japanese worksites, as allowed in the ISO standard for industrial robots. To show the global harmonization of Japanese legislation on machinery safety and the problems with applying ISO safety standards to Japanese worksites, this paper reports the progress of a research study that has been conducted at the National Institute of Occupational Safety and Health, Japan, from 2011 to the present at the request of the MHLW to examine the necessity and effect of the amendment. In the first phase of this study, a questionnaire survey was conducted among domestic robot manufacturers and users. The results revealed a potential demand for collaborative operation and problems concerning risk assessment and rule-based risk reduction. To solve these problems, we propose a method based on an investigation of the regulatory framework for machinery safety in the European Union. Furthermore, a model robot system capable of demonstrating collaborative operation and risk reduction measures, which is being developed to support appropriate implementation of the amendment, is also described.


Subject(s)
Occupational Health/legislation & jurisprudence , Occupational Health/standards , Robotics/legislation & jurisprudence , Humans , Japan , Man-Machine Systems , Manufacturing and Industrial Facilities , Risk Assessment , Robotics/methods , Safety Management , Surveys and Questionnaires
18.
Sci Eng Ethics ; 21(6): 1393-412, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25371277

ABSTRACT

Remotely piloted aviation systems (RPAS) or 'drones' are well known for their military applications, but could also be used for a range of non-military applications for state, industrial, commercial and recreational purposes. The technology is advanced, and regulatory changes are underway that will allow their use in domestic airspace. As well as the functional and economic benefits of a strong civil RPAS sector, the potential benefits for the military RPAS sector are also widely recognised. Several actors have nurtured this dual-use aspect of civil RPAS development. However, concerns have been raised that the public will reject the technology because of its association with military applications and potentially controversial uses, for example in policing and border control. In contrast with the enthusiasm for dual-use exhibited throughout the EC consultation process, the strategy for avoiding public rejection devised in its roadmap would downplay the connection between military and non-military RPAS and focus upon less controversial applications such as search and rescue. We reflect upon this contrast in the context of the European agenda of responsible research and innovation. In doing so, we do not rely upon a critique of drones per se, in either their civil or their military guise, but explore the extent to which current strategies for managing their public acceptability are compatible with a responsible and socially beneficial development of RPAS for civil purposes.


Subject(s)
Aircraft , Dual Use Research/ethics , Military Personnel , Public Opinion , Robotics , Social Responsibility , Technology/ethics , Attitude , Civil Rights , Dissent and Disputes , Dual Use Research/legislation & jurisprudence , Europe , Humans , Law Enforcement/methods , Machiavellianism , Marketing , Military Science , Pilots , Rescue Work/methods , Robotics/ethics , Robotics/legislation & jurisprudence , Social Control, Formal , Technology/legislation & jurisprudence , Weapons