Results 1 - 20 of 88
1.
J Law Med ; 31(2): 353-369, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38963250

ABSTRACT

AI technologies can pose a major national security concern. AI programs could be used to develop chemical and biological agents that circumvent existing protective measures or medical treatments, or to design pathogens with capabilities they do not naturally possess (gain-of-function research). Although Australia has a strong legislative framework for research into genetically modified organisms, the framework requires the interaction of more than 10 different government departments, universities and funding agencies. Further, there are few guidelines on the responsible use of AI in biological research, and existing laws and policies do not apply to research conducted "virtually", even where that research may have national security implications. This article explores these under-scrutinised aspects of Australia's biological security frameworks.


Subject(s)
Artificial Intelligence , Security Measures , Synthetic Biology , Synthetic Biology/legislation & jurisprudence , Australia , Humans , Security Measures/legislation & jurisprudence , Artificial Intelligence/legislation & jurisprudence
2.
JAMA Health Forum ; 5(7): e242691, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38990560

ABSTRACT

This JAMA Forum discusses pending legislation in the US House and Senate and the history of the "firm-based approach" the US Food and Drug Administration (FDA) could use when regulating artificial intelligence (AI) medical devices to augment patient care.


Subject(s)
Artificial Intelligence , United States Food and Drug Administration , United States , United States Food and Drug Administration/legislation & jurisprudence , Humans , Artificial Intelligence/legislation & jurisprudence , Government Regulation
3.
J Law Med Ethics ; 52(S1): 70-74, 2024.
Article in English | MEDLINE | ID: mdl-38995251

ABSTRACT

Here, we analyze the public health implications of recent legal developments - including privacy legislation, intergovernmental data exchange, and artificial intelligence governance - with a view toward the future of public health informatics and the potential of diverse data to inform public health actions and drive population health outcomes.


Subject(s)
Artificial Intelligence , Humans , Artificial Intelligence/legislation & jurisprudence , United States , Confidentiality/legislation & jurisprudence , Public Health Informatics/legislation & jurisprudence , Public Health/legislation & jurisprudence , Privacy/legislation & jurisprudence
4.
BMC Psychol ; 12(1): 367, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926756

ABSTRACT

The AI Generated Content Law was extensively promoted in 2023; hence, it is crucial to uncover the factors influencing people's behavioral intentions to comply with it. This study extends the theory of planned behavior (TPB) to explore the factors that lead people to follow the AI Generated Content Law in China. In addition to the factors in the TPB model (attitude, subjective norms, and perceived behavioral control), we add moral obligation as a further factor. We used convenience sampling and obtained 712 valid responses. Analysis with the statistical software Amos 17.0 shows that attitude, subjective norms, perceived behavioral control, and moral obligation all have positive effects on intentions to follow the AI Generated Content Law.
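
The study itself fitted a structural equation model in Amos; as a rough illustration of the extended-TPB structure it describes, the following is a minimal sketch in Python. The variable names, synthetic data, and the simple OLS formulation are assumptions for illustration, not the authors' analysis.

```python
# Minimal sketch of the extended TPB model described above: intention is
# regressed on attitude, subjective norms, perceived behavioral control
# (PBC), and the added moral-obligation factor. Synthetic data stands in
# for the study's 712 survey responses; this is an OLS approximation,
# not the authors' Amos structural equation model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 712  # sample size matching the study

df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "subjective_norms": rng.normal(size=n),
    "pbc": rng.normal(size=n),
    "moral_obligation": rng.normal(size=n),
})
# Simulate intention with positive weights on all four predictors,
# mirroring the direction of the reported effects.
df["intention"] = (0.4 * df["attitude"] + 0.3 * df["subjective_norms"]
                   + 0.2 * df["pbc"] + 0.3 * df["moral_obligation"]
                   + rng.normal(scale=0.5, size=n))

model = smf.ols(
    "intention ~ attitude + subjective_norms + pbc + moral_obligation",
    data=df,
).fit()
print(model.summary())  # all four coefficients should come out positive
```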


Subject(s)
Attitude , Intention , Humans , China , Male , Female , Adult , Psychological Theory , Artificial Intelligence/legislation & jurisprudence , Models, Psychological , Young Adult , Moral Obligations , Theory of Planned Behavior
6.
Cell Genom ; 4(6): 100564, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38795704

ABSTRACT

Here, we examine the challenges posed by laws in the United States and China for generative-AI-assisted genomic research collaboration. We recommend renewing the Agreement on Cooperation in Science and Technology to promote responsible principles for sharing human genomic data and to enhance transparency in research.


Subject(s)
Artificial Intelligence , Genomics , China , Humans , Genomics/legislation & jurisprudence , United States , Artificial Intelligence/legislation & jurisprudence
7.
J Am Med Inform Assoc ; 31(7): 1622-1627, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38767890

ABSTRACT

OBJECTIVES: To surface the urgent dilemma that healthcare delivery organizations (HDOs) face in navigating the US Food and Drug Administration (FDA) final guidance on the use of clinical decision support (CDS) software. MATERIALS AND METHODS: We use sepsis as a case study to highlight the patient safety and regulatory compliance tradeoffs that 6129 hospitals in the United States must navigate. RESULTS: Sepsis CDS remains in broad, routine use. There is no commercially available sepsis CDS system that is FDA cleared as a medical device. There is no public disclosure of an HDO turning off sepsis CDS due to regulatory compliance concerns. And there is no public disclosure of FDA enforcement action against an HDO for using sepsis CDS that is not cleared as a medical device. DISCUSSION AND CONCLUSION: We present multiple policy interventions that would relieve the current tension, enabling HDOs to utilize artificial intelligence to improve patient care while also addressing FDA concerns about product safety, efficacy, and equity.


Subject(s)
Artificial Intelligence , Decision Support Systems, Clinical , Patient Safety , United States Food and Drug Administration , Artificial Intelligence/legislation & jurisprudence , United States , Humans , Sepsis , Guideline Adherence , Delivery of Health Care
10.
Eur J Radiol ; 175: 111462, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38608500

ABSTRACT

The integration of AI in radiology raises significant legal questions about responsibility for errors. Radiologists fear AI may introduce new legal challenges, despite its potential to enhance diagnostic accuracy. AI tools, even those approved by regulatory bodies such as the FDA or CE, are not perfect, posing a risk of failure. The key issue is how AI is implemented: as a stand-alone diagnostic tool or as an aid to radiologists. The latter approach could reduce undesired side effects. However, it is unclear who should be held liable for AI failures, with potential candidates ranging from the engineers and radiologists involved in AI development to the companies and department heads who integrate these tools into clinical practice. The EU's AI Act, recognizing AI's risks, categorizes applications by risk level, with many radiology-related AI tools considered high risk. Legal precedents involving autonomous vehicles offer some guidance on assigning responsibility. Yet the existing legal challenges in radiology, such as diagnostic errors, persist. AI's potential to improve diagnostics also raises the question of the legal implications of not using available AI tools: an AI tool that improves the detection of pediatric fractures, for instance, could reduce legal risk. This situation parallels innovations like car turn signals, where ignoring available safety enhancements could lead to legal problems. The debate underscores the need for further research and regulation to clarify AI's role in radiology, balancing innovation with legal and ethical considerations.


Subject(s)
Artificial Intelligence , Liability, Legal , Radiology , Humans , Radiology/legislation & jurisprudence , Radiology/ethics , Artificial Intelligence/legislation & jurisprudence , Diagnostic Errors/legislation & jurisprudence , Diagnostic Errors/prevention & control , Radiologists/legislation & jurisprudence
11.
Int J Law Psychiatry ; 94: 101985, 2024.
Article in English | MEDLINE | ID: mdl-38579525

ABSTRACT

People with impaired decision-making capacity enjoy the same rights to access technology as people with full capacity. Our paper looks at realising this right in the specific contexts of artificial intelligence (AI) and mental capacity legislation. Ireland's Assisted Decision-Making (Capacity) Act, 2015 commenced in April 2023 and refers to 'assistive technology' within its 'communication' criterion for capacity. We explore the potential benefits and risks of AI in assisting communication under this legislation and seek to identify principles or lessons which might be applicable in other jurisdictions. We focus especially on Ireland's provisions for advance healthcare directives because previous research demonstrates that common barriers to advance care planning include (i) lack of knowledge and skills, (ii) fear of starting conversations about advance care planning, and (iii) lack of time. We hypothesise that these barriers might be overcome, at least in part, by using generative AI, which is already freely available worldwide. Bodies such as the United Nations have produced guidance on the ethical use of AI, and this guidance informs our analysis. One of the ethical risks in the current context is that AI would reach beyond communication and start to influence the content of decisions, especially among people with impaired decision-making capacity. For example, when we asked one AI model to 'Make me an advance healthcare directive', its initial response did not explicitly suggest content for the directive, but it did suggest topics that might be included, which could be seen as setting an agenda. One possibility for circumventing this and other shortcomings, such as concerns about the accuracy of information, is to look to foundation models of AI. With their capacity to be trained and fine-tuned for downstream tasks, purpose-designed AI models could be adapted to provide education about capacity legislation, facilitate patient and staff interaction, and allow interactive updates by healthcare professionals. These measures could optimise the benefits of AI and minimise its risks. Similar efforts have been made to use AI more responsibly in healthcare by training large language models to answer healthcare questions more safely and accurately. We highlight the need for open discussion about optimising the potential of AI while minimising its risks in this population.
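
The agenda-setting risk the authors describe can, in principle, be constrained at the application layer before any purpose-built model exists. As a minimal sketch (the system prompt, model name, and overall approach are assumptions for illustration, not anything proposed in the paper), a general-purpose LLM could be instructed to explain and facilitate rather than recommend:

```python
# Minimal sketch: constraining a general-purpose LLM to a communication-
# support role, so it explains the advance-directive process without
# suggesting what the directive should contain. The system prompt and
# model name are illustrative assumptions, not a validated clinical tool.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You support communication about advance healthcare directives under "
    "Ireland's Assisted Decision-Making (Capacity) Act 2015. Explain the "
    "process and legal requirements in plain language, and rephrase the "
    "user's own wishes on request. Never propose, suggest, or endorse any "
    "content for the directive itself; decisions belong to the user."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Make me an advance healthcare directive"},
    ],
)
print(response.choices[0].message.content)
```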


Subject(s)
Artificial Intelligence , Mental Competency , Humans , Artificial Intelligence/legislation & jurisprudence , Mental Competency/legislation & jurisprudence , Ireland , Decision Making , Advance Directives/legislation & jurisprudence
12.
Australas Psychiatry ; 32(3): 214-219, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38545872

ABSTRACT

OBJECTIVE: This article explores the transformative impact of OpenAI and ChatGPT on Australian medical practitioners, particularly psychiatrists in the private practice setting. It delves into the benefits and limitations of integrating ChatGPT into medical practice, summarising current policies and scrutinising medicolegal implications. CONCLUSION: A careful assessment is imperative to determine whether the benefits of AI integration outweigh the associated risks. Practitioners are urged to review AI-generated content to ensure its accuracy, recognising that liability likely resides with them rather than with AI platforms, despite the current lack of Australian case law specific to negligence and AI. It is important to employ measures that ensure patient confidentiality is not breached, and practitioners are encouraged to seek counsel from their professional indemnity insurer. There is considerable potential for the future development of specialised AI software tailored specifically for the medical profession, making AI better suited to use within the Australian legal landscape. Moving forward, it is essential to embrace technology and actively address its challenges rather than dismissing AI integration into medical practice. It is becoming increasingly essential that the psychiatric community, the medical community at large, and policy makers develop comprehensive guidelines to fill existing policy gaps and adapt to the evolving landscape of AI technologies in healthcare.


Subject(s)
Private Practice , Psychiatry , Humans , Australia , Psychiatry/legislation & jurisprudence , Psychiatry/standards , Private Practice/legislation & jurisprudence , Private Practice/organization & administration , Artificial Intelligence/legislation & jurisprudence , Confidentiality/legislation & jurisprudence , Confidentiality/standards
14.
JAMA ; 331(11): 909-910, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38373004

ABSTRACT

This Viewpoint summarizes a recent lawsuit alleging that a hospital violated patients' privacy by sharing electronic health record (EHR) data with Google for development of medical artificial intelligence (AI) and discusses how the federal court's decision in the case provides key insights for hospitals planning to share EHR data with for-profit companies developing medical AI.


Subject(s)
Artificial Intelligence , Confidentiality , Delivery of Health Care , Search Engine , Humans , Artificial Intelligence/legislation & jurisprudence , Confidentiality/legislation & jurisprudence , Delivery of Health Care/legislation & jurisprudence , Delivery of Health Care/methods , Electronic Health Records/legislation & jurisprudence , Privacy/legislation & jurisprudence , Search Engine/legislation & jurisprudence
18.
J Osteopath Med ; 124(7): 287-290, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38295300

ABSTRACT

The emergence of generative large language model (LLM) artificial intelligence (AI) represents one of the most profound developments in healthcare in decades, with the potential to create revolutionary and seismic changes in the practice of medicine as we know it. However, significant concerns have arisen over questions of liability for bad outcomes associated with LLM AI-influenced medical decision making. Although the authors were unable to identify a US case that has been adjudicated on medical malpractice in the context of LLM AI at this time, sufficient precedent exists to interpret how analogous situations might be applied to these cases when they inevitably come to trial in the future. This commentary discusses areas of potential legal vulnerability for clinicians utilizing LLM AI through a review of past case law pertaining to third-party medical guidance, and reviews the patchwork of current regulations relating to medical malpractice liability in AI. Finally, we propose proactive policy recommendations, including creating an enforcement duty at the US Food and Drug Administration (FDA) to require algorithmic transparency, recommending reliance on peer-reviewed data and rigorous validation testing when LLMs are utilized in clinical settings, and encouraging tort reform to share liability between physicians and LLM developers.


Subject(s)
Artificial Intelligence , Liability, Legal , Malpractice , Artificial Intelligence/legislation & jurisprudence , Malpractice/legislation & jurisprudence , Humans , United States
19.
JAMA ; 331(3): 185-187, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38117529

ABSTRACT

In this Medical News article, JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, and Alondra Nelson, PhD, the Harold F. Linder Professor at the Institute for Advanced Study, discuss effective AI regulation frameworks to accommodate innovation.


Subject(s)
Artificial Intelligence , Biomedical Research , Health Policy , Inventions , Legislation, Medical , Education, Medical, Graduate , Medicine , Artificial Intelligence/legislation & jurisprudence , Health Policy/legislation & jurisprudence , Inventions/legislation & jurisprudence , Biomedical Research/legislation & jurisprudence
20.
Rev Derecho Genoma Hum ; (59): 129-148, 2023 Jul-Dec.
Article in Spanish | IBECS | ID: ibc-232451

ABSTRACT

The issue of bias presents a significant challenge in AI systems. These biases not only arise from existing data but are also introduced by the individuals using the systems, who are inherently biased, like all humans. This is a concerning reality because algorithms have the ability to significantly influence a doctor's diagnosis. Recent analyses indicate that this phenomenon can occur even in situations where doctors are no longer receiving guidance from the system, which implies not only an inability to perceive bias but also a propensity to propagate it. The potential consequences can lead to a self-perpetuating cycle with the capacity to inflict significant harm on individuals, especially when artificial intelligence (AI) systems are employed in sensitive contexts, such as healthcare. In response, legal frameworks have devised governance mechanisms that, at first glance, seem sufficient, especially in the European Union. Recently emerged regulations on data, and those now focusing on AI, serve as prime illustrations of how adequate supervision of AI systems might potentially be achieved. In practical application, however, numerous mechanisms are likely to prove ineffective at identifying biases that arise after these systems are integrated into the market. It is important to consider that, at that juncture, there may be multiple agents involved, to whom responsibility has predominantly been delegated. Hence, it is imperative to insist on the need to persuade AI developers to implement strict measures to regulate the biases inherent in their systems. If the developers do not detect these biases, it will be a significant challenge for others to do so, at least until their presence becomes very noticeable. Another possibility is that the long-term repercussions will be experienced collectively.
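
The post-market bias detection that the abstract says many governance mechanisms will miss can, in principle, be approximated by routine subgroup audits of a deployed model's logged outputs. The following is a minimal sketch; the metric choice, subgroup labels, and synthetic data are assumptions for illustration, not a method from the article.

```python
# Minimal sketch of a post-market bias audit: compare a deployed
# diagnostic model's false-negative rate across patient subgroups.
# The subgroup labels and synthetic data are illustrative assumptions;
# a real audit would use logged predictions and confirmed outcomes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

audit_log = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "has_condition": rng.random(n) < 0.1,
})
# Simulate a model that is systematically less sensitive for group B.
sensitivity = np.where(audit_log["group"] == "A", 0.90, 0.75)
audit_log["flagged"] = audit_log["has_condition"] & (rng.random(n) < sensitivity)

for group, sub in audit_log[audit_log["has_condition"]].groupby("group"):
    fnr = 1.0 - sub["flagged"].mean()
    print(f"group {group}: false-negative rate = {fnr:.2%}")
# A persistent gap between subgroups is the kind of post-market signal
# that, per the abstract, current oversight mechanisms may fail to catch.
```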


Subject(s)
Humans , Artificial Intelligence/ethics , Artificial Intelligence/legislation & jurisprudence , Artificial Intelligence/standards , Bias