Results 1 - 20 of 228
1.
Rev Prat ; 74(5): 542-548, 2024 May.
Article in French | MEDLINE | ID: mdl-38833240

ABSTRACT

THE ETHICS OF AI IN MEDICINE MUST BE BASED ON THE PRACTICAL ETHICS OF THE HEALTHCARE RELATIONSHIP. Artificial intelligence (AI) offers more and more applications on the internet, smartphones, and computers, and in telemedicine, and is growing rapidly in health care. As a transdisciplinary field, AI must respect software engineering requirements (reliability, robustness, security), knowledge obsolescence, law, and ethics, because a wide variety of more or less opaque algorithms process personal data to support clinical decision-making. Hospital and community doctors and caregivers question the benefits, risks, and costs of AI for the patient, the care relationship, deontology, and medical ethics. Drawing on 30 years of experience in AI and medical ethics, the author proposes a first indicator of the ethical risks of AI (axis 1), evaluated as the surface of a radar diagram defined on the other six axes: semantics; opacity and acceptability; complexity and autonomy; target population; and actors (roles and motivations). Highly autonomous strong AI carries the greatest ethical risks.


THE ETHICS OF AI IN MEDICINE MUST REST ON THE PRACTICAL ETHICS OF THE CARE RELATIONSHIP. Artificial intelligence (AI) offers a growing number of health applications on smartphones, computers, telemedicine, and the Internet of Things. As a transdisciplinary field, AI must respect software engineering (reliability, robustness, security), knowledge obsolescence, law, and ethics, because a wide variety of more or less opaque algorithms process personal data in clinical decision support. Hospital and private-practice doctors and caregivers question the benefits, risks, and costs of AI for the patient, the care relationship, deontology, and medical ethics. Drawing on thirty years of experience in AI and medical ethics, this article proposes a first indicator of the ethical risks of AI (the first axis), defined as the surface of the radar diagram of the other axes (semantics; opacity and acceptability; complexity and autonomy; target population; actors [roles and motivations]). Autonomous strong AI is the kind that carries the most ethical risks.


Subject(s)
Artificial Intelligence , Ethics, Medical , Artificial Intelligence/ethics , Humans , Delivery of Health Care/ethics
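The first abstract's risk indicator, evaluated as the surface of a radar diagram over the remaining axes, can be sketched in a few lines. The abstract gives no formula, so the function name, the assumption of equally spaced axes, and the polygon-area approach are illustrative assumptions only:

```python
import math

def radar_area(scores):
    """Area of the radar (spider) polygon for equally spaced axes.

    scores: the risk score plotted on each axis, in axis order.
    """
    k = len(scores)
    angle = 2 * math.pi / k  # angle between adjacent axes
    # Sum the areas of the triangles spanned by each pair of
    # consecutive axes, wrapping around from the last to the first.
    return 0.5 * sum(scores[i] * scores[(i + 1) % k] * math.sin(angle)
                     for i in range(k))

# Example: six axes with unit scores form a regular hexagon.
print(round(radar_area([1, 1, 1, 1, 1, 1]), 4))  # 2.5981
```

Under this reading, a system scoring high on every axis (e.g., a highly autonomous, opaque AI) yields a large polygon area and thus a high aggregate risk indicator, matching the abstract's conclusion about autonomous strong AI.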
2.
Sci Eng Ethics ; 30(3): 24, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833207

ABSTRACT

While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and into how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Guidelines as Topic , Trust , Artificial Intelligence/ethics , Humans , Delivery of Health Care/ethics , Morals
3.
JMIR Res Protoc ; 13: e52349, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38838329

ABSTRACT

BACKGROUND: Responsible artificial intelligence (RAI) emphasizes the use of ethical frameworks implementing accountability, responsibility, and transparency to address concerns in the deployment and use of artificial intelligence (AI) technologies, including privacy, autonomy, self-determination, bias, and transparency. Standards are under development to guide the support and implementation of AI given these considerations. OBJECTIVE: The purpose of this review is to provide an overview of current research evidence and knowledge gaps regarding the implementation of RAI principles and the occurrence and resolution of ethical issues within AI systems. METHODS: A scoping review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines was proposed. PubMed, ERIC, Scopus, IEEE Xplore, EBSCO, Web of Science, ACM Digital Library, and ProQuest (Arts and Humanities) will be systematically searched for articles published since 2013 that examine RAI principles and ethical concerns within AI. Eligibility assessment will be conducted independently, and coded data will be analyzed along themes and stratified across discipline-specific literature. RESULTS: The results will be included in the full scoping review, which is expected to start in June 2024 and to be completed for submission for publication by the end of 2024. CONCLUSIONS: This scoping review will summarize the state of the evidence and provide an overview of its impact, as well as strengths, weaknesses, and gaps in research implementing RAI principles. The review may also reveal discipline-specific concerns, priorities, and proposed solutions. It will thereby identify priority areas that should be the focus of future regulatory options, connecting theoretical aspects of ethical requirements with practical solutions. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/52349.


Subject(s)
Artificial Intelligence , Artificial Intelligence/ethics , Humans , Social Responsibility
5.
J Med Internet Res ; 26: e50344, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38838309

ABSTRACT

The growing prominence of artificial intelligence (AI) in mobile health (mHealth) has given rise to a distinct subset of apps that provide users with diagnostic information based on their inputted health status and symptom information: AI-powered symptom checker apps (AISympCheck). While these apps may potentially increase access to health care, they raise consequential ethical and legal questions. This paper highlights notable concerns with AI usage in the health care system: further entrenchment of preexisting biases and issues with professional accountability. To provide an in-depth analysis of the issues of bias and the complications of professional obligations and liability, we focus on 2 mHealth apps as examples: Babylon and Ada. We selected these 2 apps because both were widely distributed during the COVID-19 pandemic and make prominent claims about their use of AI for the purpose of assessing user symptoms. First, bias entrenchment often originates from the data used to train AI systems, causing the AI to replicate these inequalities through a "garbage in, garbage out" phenomenon. Users of these apps are also unlikely to be demographically representative of the larger population, leading to distorted results. Second, professional accountability poses a substantial challenge given the vast diversity of AISympCheck apps and the lack of regulation surrounding their reliability. It is unclear whether these apps should be subject to safety reviews, who is responsible for app-mediated misdiagnosis, and whether these apps ought to be recommended by physicians. With the rapidly increasing number of apps, there remains little guidance available for health professionals. Professional bodies and advocacy organizations have a particularly important role to play in addressing these ethical and legal gaps. Implementing technical safeguards within these apps could mitigate bias: AIs could be trained with primarily neutral data, and apps could be subject to a system of regulation that allows users to make informed decisions. In our view, it is critical that these legal concerns be considered throughout the design and implementation of these potentially disruptive technologies. Entrenched bias and professional responsibility, while operating in different ways, are ultimately exacerbated by the unregulated nature of mHealth.


Subject(s)
Artificial Intelligence , COVID-19 , Mobile Applications , Telemedicine , Humans , Artificial Intelligence/ethics , Bias , SARS-CoV-2 , Pandemics , Social Responsibility
6.
Sci Eng Ethics ; 30(3): 26, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38856788

ABSTRACT

The rapid development of computer vision (CV) technologies and applications has brought forth a range of social and ethical challenges. Due to the unique characteristics of visual technology in terms of data modalities and application scenarios, computer vision poses specific ethical issues. However, the majority of the existing literature either addresses artificial intelligence as a whole or pays particular attention to natural language processing, leaving a gap in specialized research on ethical issues and systematic solutions in the field of computer vision. This paper uses bibliometrics and text-mining techniques to quantitatively analyze papers from prominent academic conferences in computer vision over the past decade. It first reveals the developing trends and specific distribution of attention regarding trustworthy aspects in the computer vision field, as well as the inherent connections between ethical dimensions and different stages of visual model development. A life-cycle framework for trustworthy computer vision is then presented by interconnecting the relevant trustworthy issues, the operation pipeline of AI models, and viable technical solutions, providing researchers and policymakers with references and guidance for achieving trustworthy CV. Finally, it discusses particular motivations for conducting trustworthy practices and underscores the consistency and ambivalence among various trustworthy principles and technical attributes.


Subject(s)
Artificial Intelligence , Humans , Artificial Intelligence/ethics , Artificial Intelligence/trends , Trust , Natural Language Processing , Data Mining/ethics , Bibliometrics
7.
JMIR Ment Health ; 11: e54781, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38787297

ABSTRACT

This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this narcissistic blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in perceptions of the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask the following important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? and (3) What remains of the irreplaceable human elements at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.


Subject(s)
Artificial Intelligence , Psychotherapy , Artificial Intelligence/ethics , Humans , Psychotherapy/methods , Psychotherapy/ethics
9.
Am Soc Clin Oncol Educ Book ; 44(3): e100043, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38788171

ABSTRACT

Providing a brief overview of past, present, and future ethics issues in oncology, this article begins with historical contexts, including the paternalistic approach to cancer care. It delves into present-day challenges such as navigating cancer treatment during pregnancy and addressing health care disparities faced by LGBTQ+ individuals. It also explores the ethical implications of emerging technologies, notably artificial intelligence and Big Data, in clinical decision making and medical education.


Subject(s)
Medical Oncology , Humans , Medical Oncology/ethics , Neoplasms/therapy , Ethics, Medical , Artificial Intelligence/ethics , Female
10.
Sci Eng Ethics ; 30(3): 22, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801621

ABSTRACT

Health Recommender Systems (HRS) are promising Artificial-Intelligence-based tools for promoting healthy lifestyles and therapy adherence in healthcare and medicine. Active aging (AA) is among the most supported areas. However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis. In particular, a brief overview of the technical aspects of HRS sheds light on the ethical risks and challenges they might raise for individuals' well-being as they age. Moreover, the study proposes a categorization, an understanding, and possible preventive/mitigation actions for the elicited risks and challenges by rethinking the core AI ethics principle of autonomy. Finally, elaborating on autonomy-related ethical theories, the paper proposes an autonomy-based ethical framework and shows how it can foster the development of autonomy-enabling HRS for AA.


Subject(s)
Aging , Ethical Analysis , Personal Autonomy , Humans , Aging/ethics , Artificial Intelligence/ethics , Ethical Theory , Healthy Lifestyle , Delivery of Health Care/ethics , Healthy Aging/ethics
13.
Asian J Psychiatr ; 97: 104067, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38718518

ABSTRACT

BACKGROUND: The integration of Artificial Intelligence (AI) in psychiatry presents opportunities for enhancing patient care but raises significant ethical concerns and challenges in clinical application. Addressing these challenges necessitates an informed and ethically aware psychiatric workforce capable of integrating AI into practice responsibly. METHODS: A mixed-methods study was conducted to assess the outcomes of the "CONNECT with AI" (Collaborative Opportunity to Navigate and Negotiate Ethical Challenges and Trials with Artificial Intelligence) workshop, aimed at exploring AI's ethical implications and applications in psychiatry. This workshop featured presentations, discussions, and scenario analyses focusing on AI's role in mental health care. Pre- and post-workshop questionnaires and focus group discussions evaluated participants' perspectives and ethical understanding regarding AI in psychiatry. RESULTS: Participants exhibited a cautious optimism towards AI, recognizing its potential to augment mental health care while expressing concerns over ethical usage, patient-doctor relationships, and AI's practical application in patient care. The workshop significantly improved participants' ethical understanding, highlighting a substantial knowledge gap and the need for further education in AI among psychiatrists. CONCLUSION: The study underscores the necessity of continuous education and ethical guideline development for psychiatrists in the era of AI, emphasizing collaborative efforts in AI system design to ensure they meet clinical needs ethically and effectively. Future initiatives should aim to broaden psychiatrists' exposure to AI, fostering a deeper understanding and integration of AI technologies in psychiatric practice.


Subject(s)
Artificial Intelligence , Psychiatry , Humans , Artificial Intelligence/ethics , Psychiatry/ethics , Adult , Attitude of Health Personnel , Female , Male
15.
BMC Med Ethics ; 25(1): 52, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734602

ABSTRACT

BACKGROUND: The integration of artificial intelligence (AI) in radiography presents transformative opportunities for diagnostic imaging and introduces complex ethical considerations. The aim of this cross-sectional study was to explore radiographers' perspectives on the ethical implications of AI in their field and to identify key concerns and potential strategies for addressing them. METHODS: A structured questionnaire was distributed to a diverse group of radiographers in Saudi Arabia. The questionnaire included items on ethical concerns related to AI, the perceived impact on clinical practice, and suggestions for ethical AI integration in radiography. The data were analyzed using quantitative and qualitative methods to capture a broad range of perspectives. RESULTS: Three hundred eighty-eight radiographers with varying levels of experience and specialization responded. The largest group of participants (44.8%) was unfamiliar with the integration of AI into radiography. Approximately 32.9% of radiographers expressed uncertainty regarding the importance of transparency and explanatory capabilities in the AI systems used in radiology. Many participants (36.9%) believed that AI systems used in radiology should be transparent and provide justifications for their decision-making procedures. A plurality (44%) of respondents agreed that implementing AI in radiology may increase ethical dilemmas, but 27.8% expressed uncertainty in recognizing and understanding the potential ethical issues that could arise from integrating AI in radiology. Of the respondents, 41.5% stated that the use of AI in radiology requires establishing specific ethical guidelines, whereas a notable percentage (28.9%) expressed the opposite opinion, arguing that the use of AI in radiology does not require adherence to ethical standards. In contrast to the 46.6% of respondents voicing concerns about patient privacy under AI implementation, 41.5% of respondents had no such apprehensions. CONCLUSIONS: This study revealed a complex ethical landscape in the integration of AI in radiography, characterized by both enthusiasm and apprehension among professionals. It underscores the necessity of ethical frameworks, education, and policy development to guide the implementation of AI in radiography. These findings contribute to the ongoing discourse on AI in medical imaging and provide insights that can inform policymakers, educators, and practitioners in navigating the ethical challenges of AI adoption in healthcare.


Subject(s)
Artificial Intelligence , Attitude of Health Personnel , Radiography , Humans , Cross-Sectional Studies , Artificial Intelligence/ethics , Male , Adult , Female , Surveys and Questionnaires , Radiography/ethics , Saudi Arabia , Middle Aged , Radiology/ethics
16.
BMC Med Ethics ; 25(1): 55, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38750441

ABSTRACT

BACKGROUND: Integrating artificial intelligence (AI) into healthcare has raised significant ethical concerns. In pharmacy practice, AI offers promising advances but also poses ethical challenges. METHODS: A cross-sectional study was conducted in countries from the Middle East and North Africa (MENA) region on 501 pharmacy professionals. A 12-item online questionnaire assessed ethical concerns related to the adoption of AI in pharmacy practice. Demographic factors associated with ethical concerns were analyzed via SPSS v.27 software using appropriate statistical tests. RESULTS: Participants expressed concerns about patient data privacy (58.9%), cybersecurity threats (58.9%), potential job displacement (62.9%), and lack of legal regulation (67.0%). Tech-savviness and basic AI understanding were correlated with higher concern scores (p < 0.001). Ethical implications include the need for informed consent, beneficence, justice, and transparency in the use of AI. CONCLUSION: The findings emphasize the importance of ethical guidelines, education, and patient autonomy in adopting AI. Collaboration, data privacy, and equitable access are crucial to the responsible use of AI in pharmacy practice.


Subject(s)
Artificial Intelligence , Humans , Cross-Sectional Studies , Female , Male , Adult , Artificial Intelligence/ethics , Middle East , Surveys and Questionnaires , Africa, Northern , Informed Consent/ethics , Confidentiality/ethics , Middle Aged , Beneficence , Pharmacists/ethics , Computer Security , Young Adult , Attitude of Health Personnel , Social Justice , Privacy
17.
Br Dent J ; 236(9): 698, 2024 May.
Article in English | MEDLINE | ID: mdl-38730163
20.
Am J Otolaryngol ; 45(4): 104303, 2024.
Article in English | MEDLINE | ID: mdl-38678799

ABSTRACT

Otolaryngologists can enhance workflow efficiency, provide better patient care, and advance medical research and education by integrating artificial intelligence (AI) into their practices. GPT-4 technology is a revolutionary and contemporary example of AI that may apply to otolaryngology. The knowledge of otolaryngologists should be supplemented, not replaced when using GPT-4 to make critical medical decisions and provide individualized patient care. In our thorough examination, we explore the potential uses of the groundbreaking GPT-4 technology in the field of otolaryngology, covering aspects such as potential outcomes and technical boundaries. Additionally, we delve into the intricate and intellectually challenging dilemmas that emerge when incorporating GPT-4 into otolaryngology, considering the ethical considerations inherent in its implementation. Our stance is that GPT-4 has the potential to be very helpful. Its capabilities, which include aid in clinical decision-making, patient care, and administrative job automation, present exciting possibilities for enhancing patient outcomes, boosting the efficiency of healthcare delivery, and enhancing patient experiences. Even though there are still certain obstacles and limitations, the progress made so far shows that GPT-4 can be a valuable tool for modern medicine. GPT-4 may play a more significant role in clinical practice as technology develops, helping medical professionals deliver high-quality care tailored to every patient's unique needs.


Subject(s)
Artificial Intelligence , Otolaryngology , Humans , Otolaryngology/ethics , Artificial Intelligence/ethics , Clinical Decision-Making/ethics