Results 1 - 20 of 223
1.
J Med Internet Res ; 26: e50344, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38838309

ABSTRACT

The growing prominence of artificial intelligence (AI) in mobile health (mHealth) has given rise to a distinct subset of apps that provide users with diagnostic information based on their inputted health status and symptom information: AI-powered symptom checker apps (AISympCheck). While these apps may potentially increase access to health care, they raise consequential ethical and legal questions. This paper highlights two notable concerns with AI use in the health care system: the further entrenchment of preexisting biases, and issues with professional accountability. To provide an in-depth analysis of bias and of the complications of professional obligations and liability, we focus on 2 mHealth apps as examples, Babylon and Ada. We selected these 2 apps because both were widely distributed during the COVID-19 pandemic and both make prominent claims about their use of AI to assess user symptoms. First, bias entrenchment often originates from the data used to train AI systems, causing the AI to replicate these inequalities through a "garbage in, garbage out" phenomenon. Users of these apps are also unlikely to be demographically representative of the larger population, leading to distorted results. Second, professional accountability poses a substantial challenge given the vast diversity of AISympCheck apps and the lack of regulation surrounding their reliability. It is unclear whether these apps should be subject to safety reviews, who is responsible for app-mediated misdiagnosis, and whether these apps ought to be recommended by physicians. With the rapidly increasing number of apps, little guidance remains available for health professionals. Professional bodies and advocacy organizations have a particularly important role to play in addressing these ethical and legal gaps. Technical safeguards implemented within these apps could mitigate bias, AI systems could be trained primarily on neutral data, and the apps could be subject to a system of regulation that allows users to make informed decisions. In our view, it is critical that these legal concerns be considered throughout the design and implementation of these potentially disruptive technologies. Entrenched bias and professional responsibility, while operating in different ways, are ultimately exacerbated by the unregulated nature of mHealth.
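
A minimal sketch (not from the paper) of the training-data side of this "garbage in, garbage out" dynamic: a symptom classifier trained on data dominated by one demographic group can perform markedly worse on an underrepresented group whose symptom-condition relationship differs. All data, group sizes, and weights below are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Synthetic symptom vectors; `weights` encodes how symptoms map to
    the condition in this group (the mapping differs across groups)."""
    X = rng.normal(size=(n, 3))
    y = (X @ weights + rng.normal(scale=0.5, size=n)) > 0
    return X, y.astype(int)

# Group A dominates the training data; group B's symptom-condition
# relationship differs but is barely represented.
Xa, ya = make_group(900, np.array([1.0, 0.8, 0.0]))
Xb, yb = make_group(100, np.array([0.0, 0.8, 1.0]))

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: accuracy on the minority
# group is expected to be noticeably lower.
Xa_t, ya_t = make_group(1000, np.array([1.0, 0.8, 0.0]))
Xb_t, yb_t = make_group(1000, np.array([0.0, 0.8, 1.0]))
print("accuracy, majority group:", model.score(Xa_t, ya_t))
print("accuracy, minority group:", model.score(Xb_t, yb_t))
```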


Subject(s)
Artificial Intelligence , COVID-19 , Mobile Applications , Telemedicine , Humans , Artificial Intelligence/ethics , Bias , SARS-CoV-2 , Pandemics , Social Responsibility
2.
Rev Prat ; 74(5): 542-548, 2024 May.
Article in French | MEDLINE | ID: mdl-38833240

ABSTRACT

THE ETHICS OF AI IN MEDICINE MUST BE BASED ON THE PRACTICAL ETHICS OF THE HEALTHCARE RELATIONSHIP. Artificial intelligence (AI) offers more and more applications on the Internet, on smartphones and computers, and in telemedicine, and it is growing rapidly in health care. Transdisciplinary by nature, AI must respect software engineering requirements (reliability, robustness, security), the obsolescence of knowledge, law, and ethics, because a wide variety of more or less opaque algorithms process personal data to support clinical decisions. Hospital-based and private-practice physicians and caregivers question the benefits, risks, and costs of AI for the patient, the care relationship, deontology, and medical ethics. Drawing on 30 years of experience in AI and medical ethics, the author proposes a first indicator of the ethical risks of AI (axis 1), evaluated as the area of a radar diagram defined on the other axes: semantics; opacity and acceptability; complexity and autonomy; target population; and actors (roles and motivations). Highly autonomous, strong AI carries the greatest ethical risks.


THE ETHICS OF AI IN MEDICINE MUST BE BASED ON THE PRACTICAL ETHICS OF THE CARE RELATIONSHIP. Artificial intelligence (AI) offers more and more health applications on smartphones, computers, telemedicine, and the Internet of Things. Transdisciplinary, AI must respect software engineering (reliability, robustness, security), the obsolescence of knowledge, law, and ethics, because a wide variety of more or less opaque algorithms process personal data in clinical decision support. Hospital-based and private-practice physicians and caregivers question the benefits, risks, and costs of AI for the patient, the care relationship, deontology, and medical ethics. Drawing on thirty years of experience in AI and medical ethics, this article proposes a first indicator of the ethical risks of AI (the first axis), defined as the area of the radar diagram formed by the other axes (semantics; opacity and acceptability; complexity and autonomy; target population; actors [roles and motivations]). Autonomous strong AI carries the greatest ethical risks.
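
A minimal sketch of the radar-diagram risk indicator described above. The abstract does not specify the exact computation; a natural reading is the area of the polygon traced by the per-axis scores on equally spaced axes, i.e. area = 1/2 * sin(2*pi/n) * sum_i r_i * r_{i+1} (indices cyclic). The axis scores and the 0-5 scale below are illustrative assumptions.

```python
import math

def radar_area(scores):
    """Area of the radar polygon for scores placed on equally spaced axes."""
    n = len(scores)
    wedge = math.sin(2 * math.pi / n) / 2.0  # area factor per adjacent pair
    return wedge * sum(scores[i] * scores[(i + 1) % n] for i in range(n))

axes = ["semantics", "opacity/acceptability", "complexity/autonomy",
        "target population", "actors (roles, motivations)"]
scores = [3, 4, 5, 2, 3]  # hypothetical per-axis risk ratings on a 0-5 scale
print(f"ethical-risk indicator (axis 1): {radar_area(scores):.2f}")
```

A larger area signals higher overall ethical risk, which matches the paper's claim that highly autonomous, strong AI (scoring high on most axes) carries the greatest risk.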


Subject(s)
Artificial Intelligence , Ethics, Medical , Artificial Intelligence/ethics , Humans , Delivery of Health Care/ethics
3.
JMIR Res Protoc ; 13: e52349, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38838329

ABSTRACT

BACKGROUND: Responsible artificial intelligence (RAI) emphasizes the use of ethical frameworks implementing accountability, responsibility, and transparency to address concerns in the deployment and use of artificial intelligence (AI) technologies, including privacy, autonomy, self-determination, bias, and transparency. Standards are under development to guide the support and implementation of AI in light of these considerations. OBJECTIVE: The purpose of this review is to provide an overview of current research evidence and knowledge gaps regarding the implementation of RAI principles and the occurrence and resolution of ethical issues within AI systems. METHODS: A scoping review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines is proposed. PubMed, ERIC, Scopus, IEEE Xplore, EBSCO, Web of Science, ACM Digital Library, and ProQuest (Arts and Humanities) will be systematically searched for articles published since 2013 that examine RAI principles and ethical concerns within AI. Eligibility will be assessed independently, and coded data will be analyzed thematically and stratified across discipline-specific literature. RESULTS: The results will be included in the full scoping review, which is expected to start in June 2024 and to be completed for submission for publication by the end of 2024. CONCLUSIONS: This scoping review will summarize the state of the evidence and provide an overview of its impact, as well as the strengths, weaknesses, and gaps in research implementing RAI principles. The review may also reveal discipline-specific concerns, priorities, and proposed solutions. It will thereby identify priority areas that should be the focus of future regulatory options, connecting theoretical ethical requirements and principles with practical solutions. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/52349.
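
A minimal sketch of one piece of such a database search: querying PubMed programmatically via the NCBI E-utilities esearch endpoint, restricted to records published since 2013. The boolean query string here is an illustrative assumption, not the protocol's registered search strategy.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": '("responsible artificial intelligence" OR "AI ethics") '
            "AND (accountability OR transparency)",  # assumed query, for illustration
    "datetype": "pdat",   # filter on publication date
    "mindate": "2013",
    "maxdate": "2024",
    "retmode": "json",
}
resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print("records matching:", result["count"])
print("first PMIDs:", result["idlist"][:5])
```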


Subject(s)
Artificial Intelligence , Artificial Intelligence/ethics , Humans , Social Responsibility
5.
Sci Eng Ethics ; 30(3): 24, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833207

ABSTRACT

While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, promises about AI's beneficial outputs are multiplying, as are concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. Guidelines, as a form of written language, can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and the resulting hyper-criticism. This research provides insights into the ideas underlying AI guidelines and into how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Guidelines as Topic , Trust , Artificial Intelligence/ethics , Humans , Delivery of Health Care/ethics , Morals
6.
Br Dent J ; 236(9): 698, 2024 May.
Article in English | MEDLINE | ID: mdl-38730163
7.
BMC Med Ethics ; 25(1): 52, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734602

ABSTRACT

BACKGROUND: The integration of artificial intelligence (AI) in radiography presents transformative opportunities for diagnostic imaging and introduces complex ethical considerations. The aim of this cross-sectional study was to explore radiographers' perspectives on the ethical implications of AI in their field and to identify key concerns and potential strategies for addressing them. METHODS: A structured questionnaire was distributed to a diverse group of radiographers in Saudi Arabia. The questionnaire included items on ethical concerns related to AI, the perceived impact on clinical practice, and suggestions for the ethical integration of AI in radiography. The data were analyzed using quantitative and qualitative methods to capture a broad range of perspectives. RESULTS: Three hundred eighty-eight radiographers with varying levels of experience and specialization responded. A plurality (44.8%) of participants were unfamiliar with the integration of AI into radiography. Approximately 32.9% of radiographers expressed uncertainty regarding the importance of transparency and explanatory capabilities in the AI systems used in radiology, while 36.9% indicated that they believed such systems should be transparent and provide justifications for their decision-making procedures. Many respondents (44%) agreed that implementing AI in radiology may increase ethical dilemmas, but 27.8% expressed uncertainty in recognizing and understanding the potential ethical issues that could arise from integrating AI in radiology. Of the respondents, 41.5% stated that the use of AI in radiology requires specific ethical guidelines, whereas 28.9% expressed the opposite opinion, arguing that the use of AI in radiology does not require adherence to ethical standards. While 46.6% of respondents voiced concerns about patient privacy under AI implementation, 41.5% had no such apprehensions. CONCLUSIONS: This study revealed a complex ethical landscape in the integration of AI in radiography, characterized by both enthusiasm and apprehension among professionals. It underscores the necessity of ethical frameworks, education, and policy development to guide the implementation of AI in radiography. These findings contribute to the ongoing discourse on AI in medical imaging and provide insights that can inform policymakers, educators, and practitioners in navigating the ethical challenges of AI adoption in healthcare.
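
A minimal sketch (not part of the study's analysis) showing how to attach a 95% Wilson confidence interval to one of the reported proportions, here the 44.8% of the 388 radiographers who were unfamiliar with AI in radiography. Only the point estimate and sample size come from the abstract; everything else is standard arithmetic.

```python
import math

def wilson_ci(p_hat, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(0.448, 388)
print(f"44.8% of 388 -> 95% CI: [{lo:.1%}, {hi:.1%}]")  # roughly [39.9%, 49.8%]
```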


Subject(s)
Artificial Intelligence , Attitude of Health Personnel , Radiography , Humans , Cross-Sectional Studies , Artificial Intelligence/ethics , Male , Adult , Female , Surveys and Questionnaires , Radiography/ethics , Saudi Arabia , Middle Aged , Radiology/ethics
8.
BMC Med Ethics ; 25(1): 55, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38750441

ABSTRACT

BACKGROUND: Integrating artificial intelligence (AI) into healthcare has raised significant ethical concerns. In pharmacy practice, AI offers promising advances but also poses ethical challenges. METHODS: A cross-sectional study was conducted in countries from the Middle East and North Africa (MENA) region on 501 pharmacy professionals. A 12-item online questionnaire assessed ethical concerns related to the adoption of AI in pharmacy practice. Demographic factors associated with ethical concerns were analyzed via SPSS v.27 software using appropriate statistical tests. RESULTS: Participants expressed concerns about patient data privacy (58.9%), cybersecurity threats (58.9%), potential job displacement (62.9%), and lack of legal regulation (67.0%). Tech-savviness and basic AI understanding were correlated with higher concern scores (p < 0.001). Ethical implications include the need for informed consent, beneficence, justice, and transparency in the use of AI. CONCLUSION: The findings emphasize the importance of ethical guidelines, education, and patient autonomy in adopting AI. Collaboration, data privacy, and equitable access are crucial to the responsible use of AI in pharmacy practice.
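
A minimal sketch of the kind of group comparison reported above (higher concern scores among tech-savvy respondents, p < 0.001). The study used SPSS v.27; scipy provides an equivalent nonparametric test here. The scores are synthetic, purely illustrative stand-ins for the 12-item concern scale, with the two group sizes chosen to sum to the study's 501 participants.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical concern scores on a 0-12 scale for two respondent groups.
tech_savvy = rng.normal(loc=8.5, scale=1.5, size=250).clip(0, 12)
not_savvy  = rng.normal(loc=7.5, scale=1.5, size=251).clip(0, 12)

u_stat, p_value = mannwhitneyu(tech_savvy, not_savvy, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.2e}")
```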


Subject(s)
Artificial Intelligence , Humans , Cross-Sectional Studies , Female , Male , Adult , Artificial Intelligence/ethics , Middle East , Surveys and Questionnaires , Africa, Northern , Informed Consent/ethics , Confidentiality/ethics , Middle Aged , Beneficence , Pharmacists/ethics , Computer Security , Young Adult , Attitude of Health Personnel , Social Justice , Privacy
9.
Am Soc Clin Oncol Educ Book ; 44(3): e100043, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38788171

ABSTRACT

Providing a brief overview of past, present, and future ethics issues in oncology, this article begins with historical contexts, including the paternalistic approach to cancer care. It delves into present-day challenges such as navigating cancer treatment during pregnancy and addressing health care disparities faced by LGBTQ+ individuals. It also explores the ethical implications of emerging technologies, notably artificial intelligence and Big Data, in clinical decision making and medical education.


Subject(s)
Medical Oncology , Humans , Medical Oncology/ethics , Neoplasms/therapy , Ethics, Medical , Artificial Intelligence/ethics , Female
12.
JMIR Ment Health ; 11: e54781, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38787297

ABSTRACT

This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this narcissistic blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in perceptions of the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask the following important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? and (3) What remains of the irreplaceable human elements at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.


Subject(s)
Artificial Intelligence , Psychotherapy , Artificial Intelligence/ethics , Humans , Psychotherapy/methods , Psychotherapy/ethics
14.
Sci Eng Ethics ; 30(3): 22, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801621

ABSTRACT

Health Recommender Systems (HRS) are promising Artificial-Intelligence-based tools for promoting healthy lifestyles and therapy adherence in healthcare and medicine. Active aging (AA) is among the most supported areas. However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis. In particular, a brief overview of the technical aspects of HRS allows us to shed light on the ethical risks and challenges they might raise for individuals' well-being as they age. Moreover, the study proposes a categorization of, an understanding of, and possible preventive and mitigation actions for the elicited risks and challenges by rethinking the AI ethics core principle of autonomy. Finally, elaborating on autonomy-related ethical theories, the paper proposes an autonomy-based ethical framework and shows how it can foster the development of autonomy-enabling HRS for AA.


Subject(s)
Aging , Ethical Analysis , Personal Autonomy , Humans , Aging/ethics , Artificial Intelligence/ethics , Ethical Theory , Healthy Lifestyle , Delivery of Health Care/ethics , Healthy Aging/ethics
18.
Med Lav ; 115(2): e2024013, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38686573

ABSTRACT

Generative artificial intelligence and Large Language Models are reshaping labor dynamics and occupational health practices. As AI continues to evolve, there is a critical need to tailor ethical considerations to its specific impacts on occupational health. Recognizing potential ethical challenges and dilemmas, stakeholders and physicians are urged to proactively adjust the practice of occupational medicine in response to shifting ethical paradigms. By advocating for a comprehensive review of the International Commission on Occupational Health (ICOH) Code of Ethics, we can ensure responsible medical AI deployment, safeguarding the well-being of workers amidst the transformative effects of automation in healthcare.


Subject(s)
Artificial Intelligence , Occupational Medicine , Artificial Intelligence/ethics , Occupational Medicine/ethics , Humans , Codes of Ethics , Occupational Health/ethics
19.
Sci Rep ; 14(1): 8458, 2024 04 30.
Article in English | MEDLINE | ID: mdl-38688951

ABSTRACT

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by the proposal of Allen et al (Exp Theor Artif Intell 352:24-28, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model, GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations while blinded to their source. Remarkably, they rated the AI's moral reasoning as superior in quality to humans' along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of inferior moral reasoning but, potentially, because of its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans' raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.
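
A minimal sketch of the above-chance test implied by the abstract: can raters identify the source (human vs. GPT-4) better than a coin flip? The trial and success counts below are hypothetical; the abstract reports only that performance was significantly above chance.

```python
from scipy.stats import binomtest

n_trials = 2990   # e.g., 299 raters x 10 items (assumed, not reported)
n_correct = 1700  # hypothetical number of correct source attributions

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"observed accuracy: {n_correct / n_trials:.1%}")
print(f"one-sided binomial p-value vs. chance: {result.pvalue:.2e}")
```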


Subject(s)
Artificial Intelligence , Morals , Humans , Artificial Intelligence/ethics , Female , Male , Adult , Young Adult , Middle Aged , Judgment
20.
Lancet Digit Health ; 6(6): e428-e432, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38658283

ABSTRACT

With the rapid growth of interest in and use of large language models (LLMs) across various industries, we are facing some crucial and profound ethical concerns, especially in the medical field. The unique technical architecture and purported emergent abilities of LLMs differentiate them substantially from other artificial intelligence (AI) models and natural language processing techniques used, necessitating a nuanced understanding of LLM ethics. In this Viewpoint, we highlight ethical concerns stemming from the perspectives of users, developers, and regulators, notably focusing on data privacy and rights of use, data provenance, intellectual property contamination, and broad applications and plasticity of LLMs. A comprehensive framework and mitigating strategies will be imperative for the responsible integration of LLMs into medical practice, ensuring alignment with ethical principles and safeguarding against potential societal risks.


Subject(s)
Artificial Intelligence , Natural Language Processing , Humans , Artificial Intelligence/ethics , Intellectual Property