Results 1 - 11 of 11
1.
AJOB Empir Bioeth ; 1-10, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38588388

ABSTRACT

BACKGROUND: Machine learning (ML) is increasingly utilized in health care and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known about the perspectives of developers themselves regarding their obligations to mitigate harms. METHODS: We conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States. RESULTS: Participants varied widely in their perspectives on personal responsibility and offered examples of both moral engagement and disengagement, albeit in a variety of forms. Most participants (70%) made a statement indicative of moral engagement, but the majority of these statements reflected only an awareness of moral issues; a smaller subset included additional elements of engagement such as recognizing responsibility, alignment with personal values, addressing conflicts of interest, and opportunities for action. Further, we identified eight distinct categories of moral disengagement reflecting efforts to minimize potential harms or deflect personal responsibility for preventing or mitigating them. CONCLUSIONS: These findings suggest possible facilitators of, and barriers to, the development of ethical ML that could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms may have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.

2.
J Med Internet Res ; 25: e47609, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37971798

ABSTRACT

BACKGROUND: Machine learning predictive analytics (MLPA) is increasingly used in health care to reduce costs and improve efficacy; it also has the potential to harm patients and undermine trust in health care. Academic and regulatory leaders have proposed a variety of principles and guidelines to address the challenges of evaluating the safety of machine learning-based software in the health care context, but accepted practices do not yet exist. However, there appears to be a shift toward process-based regulatory paradigms that rely heavily on self-regulation. At the same time, little research has examined the perspectives of MLPA developers themselves about potential harms, even though their role will be essential in overcoming the "principles-to-practice" gap. OBJECTIVE: The objective of this study was to understand how developers of MLPA health care products perceived the potential harms of those products and their responses to recognized harms. METHODS: We interviewed 40 individuals who were developing MLPA tools for health care at 15 US-based organizations, including data scientists, software engineers, and those in mid- and high-level management roles. These 15 organizations were selected to represent a range of organizational types and sizes from the 106 that we previously identified. We asked developers about their perspectives on the potential harms of their work, the factors that influence these harms, and their role in mitigation. We used standard qualitative analysis of transcribed interviews to identify themes in the data. RESULTS: We found that MLPA developers recognized a range of potential harms of MLPA to individuals, social groups, and the health care system, such as issues of privacy, bias, and system disruption. They also identified drivers of these harms related to the characteristics of machine learning and specific to the health care and commercial contexts in which the products are developed. MLPA developers also described strategies to respond to these drivers and potentially mitigate the harms. Opportunities included balancing algorithm performance goals against potential harms, emphasizing iterative integration of health care expertise, and fostering shared company values. However, their recognition of their own responsibility to address potential harms varied widely. CONCLUSIONS: Even though MLPA developers recognized that their products can harm patients, the public, and even health systems, robust procedures to assess the potential for harms and the need for mitigation do not exist. Our findings suggest that, to the extent that new oversight paradigms rely on self-regulation, they will face serious challenges if harms are driven by features that developers consider inescapable in health care and business environments. Furthermore, effective self-regulation will require MLPA developers to accept responsibility for safety and efficacy and to know how to act accordingly. Our results suggest that, at the very least, substantial education will be necessary to fill the "principles-to-practice" gap.


Subject(s)
Delivery of Health Care , Privacy , Humans , Social Behavior , Machine Learning
3.
Pac Symp Biocomput ; 28: 496-506, 2023.
Article in English | MEDLINE | ID: mdl-36541003

ABSTRACT

Machine learning predictive analytics (MLPA) are increasingly utilized in health care but can pose harms to patients, clinicians, health systems, and the public. The dynamic nature of this technology creates unique challenges for evaluating safety and efficacy and minimizing harms. In response, regulators have proposed an approach that would shift more responsibility to MLPA developers for mitigating potential harms. To be effective, this approach requires MLPA developers to recognize, accept, and act on responsibility for mitigating harms. In interviews with 40 developers of MLPA health care applications in the United States, we found that a subset of developers made statements reflecting moral disengagement, representing several different rationales that could create distance between personal accountability and harms. However, a different subset of developers expressed recognition of their role in creating potential hazards, the moral weight of their design decisions, and a sense of responsibility for mitigating harms. We also found evidence of moral conflict and uncertainty about responsibility for averting harms as an individual developer working within a company. These findings suggest possible facilitators of, and barriers to, the development of ethical ML that could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms may have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.


Subject(s)
Computational Biology , Morals , Humans , United States , Delivery of Health Care , Artificial Intelligence
4.
J Med Internet Res ; 23(6): e26391, 2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34156338

ABSTRACT

BACKGROUND: Considerable effort has been devoted to the development of artificial intelligence, including machine learning-based predictive analytics (MLPA), for use in health care settings. The growth of MLPA could be fueled by payment reforms that hold health care organizations responsible for providing high-quality, cost-effective care. Policy analysts, ethicists, and computer scientists have identified unique ethical and regulatory challenges arising from the use of MLPA in health care. However, little is known about the types of MLPA health care products available on the market today or their stated goals. OBJECTIVE: This study aims to better characterize available MLPA health care products by identifying and characterizing claims about products recently or currently in use in US health care settings that are marketed as tools to improve health care efficiency by improving quality of care while reducing costs. METHODS: We conducted systematic database searches of relevant business news and academic research to identify MLPA products for health care efficiency that met our inclusion and exclusion criteria. We used content analysis to generate MLPA product categories and to characterize the organizations marketing the products. RESULTS: We identified 106 products and characterized them, based on publicly available information, in terms of the types of predictions made and the size, type, and clinical training of the leadership of the companies marketing them. Based on publicly available product marketing materials, we identified 5 categories of predictions made by MLPA products: disease onset and progression, treatment, cost and utilization, admissions and readmissions, and decompensation and adverse events. CONCLUSIONS: Our findings provide a foundational reference to inform the analysis of specific ethical and regulatory challenges arising from the use of MLPA to improve health care efficiency.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Humans , Machine Learning , Quality of Health Care
5.
Qual Health Res ; 30(2): 293-302, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31409193

ABSTRACT

In this study, we present views on bipolar disorder and reproductive decision-making through an analysis of posts on Reddit™, a major Internet discussion forum. Prior research has shown that the Internet is a useful source of data on sensitive topics. We used qualitative textual analysis to examine posts on Reddit™ bipolar disorder discussion boards that dealt with genetics and related topics, reviewing all thread titles over 4 years (N = 1,800). Genetic risk was often raised in the context of Redditors' discussions about whether or not to have children. Reproductive decision-making for Redditors with bipolar disorder was complex and influenced by factors from their past, present, and imagined future. These factors coalesced under a summative theme: for adults with bipolar disorder, how manageable would parenting a child be? Reproductive decisions for individuals with bipolar disorder are complex, and Reddit™ is a novel source of information on their perspectives.


Subject(s)
Bipolar Disorder/psychology , Decision Making , Family Planning Services/methods , Social Media , Adult , Bipolar Disorder/genetics , Female , Humans , Male , Qualitative Research , Reproduction , Young Adult
6.
Genet Med ; 21(2): 505-509, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29970926

ABSTRACT

The Ethical, Legal, and Social Implications (ELSI) Research Program of the National Human Genome Research Institute sponsors research examining ethical, legal, and social issues arising in the context of genetics/genomics. The ELSI Program endorses an understanding of research not as the sole province of empirical study, but instead as systematic study or inquiry, of which there are many types and methods. ELSI research employs both empirical and nonempirical methods. Because the latter remain relatively unfamiliar to biomedical and translational scientists, this paper seeks to elucidate the relationship between empirical and nonempirical methods in ELSI research. It pays particular attention to the research questions and methods of normative and conceptual research, which examine questions of value and meaning, respectively. To illustrate the distinct but interrelated roles of empirical and nonempirical methods in ELSI research, including normative and conceptual research, the paper demonstrates how a range of methods may be employed both to examine the evolution of the concept of incidental findings (including the recent step toward terming them 'secondary findings') and to address the normative question of how genomic researchers and clinicians should manage such incidental findings.


Subject(s)
Ethics, Research , Genome, Human/genetics , Genomics/ethics , National Human Genome Research Institute (U.S.)/ethics , Humans , National Human Genome Research Institute (U.S.)/legislation & jurisprudence , Public Policy/legislation & jurisprudence , United States
7.
J Autism Dev Disord ; 47(5): 1453-1463, 2017 May.
Article in English | MEDLINE | ID: mdl-28229350

ABSTRACT

Despite increasing utilization of chromosomal microarray analysis (CMA) for autism spectrum disorders (ASD), limited information exists about how results influence parents' beliefs about etiology and prognosis. We conducted in-depth interviews and surveys with 57 parents of children with ASD who received CMA results categorized as pathogenic, negative or variant of uncertain significance. Parents tended to incorporate their child's CMA results within their existing beliefs about the etiology of ASD, regardless of CMA result. However, parents' expectations for the future tended to differ depending on results; those who received genetic confirmation for their children's ASD expressed a sense of concreteness, acceptance and permanence of the condition. Some parents expressed hope for future biomedical treatments as a result of genetic research.


Subject(s)
Attitude to Health , Autism Spectrum Disorder/psychology , Culture , Parents/psychology , Autism Spectrum Disorder/genetics , Child , Female , Genomics , Humans , Male , Microarray Analysis , Prognosis , Qualitative Research , Surveys and Questionnaires
8.
Genet Med ; 19(7): 743-750, 2017 Jul.
Article in English | MEDLINE | ID: mdl-27929525

ABSTRACT

The Precision Medicine Initiative (PMI) is an innovative approach to developing a new model of health care that takes into account individual differences in people's genes, environments, and lifestyles. A cornerstone of the initiative is the PMI All of Us Research Program (formerly known as the PMI Cohort Program), which will create a cohort of 1 million volunteers who will contribute their health data and biospecimens to a centralized national database to support precision medicine research. The PMI All of Us Research Program is the largest longitudinal study in the history of the United States. The designers of the Program anticipated and addressed some of the ethical, legal, and social issues (ELSI) associated with the initiative. To date, however, there is no plan to call for research regarding ELSI associated with the PMI All of Us Research Program. Based on an analysis of National Institutes of Health (NIH) funding announcements for the PMI All of Us program, we have identified three ELSI themes: cohort diversity and health disparities, participant engagement, and privacy and security. We review the All of Us Research Program's plans to address these issues and then identify additional ELSI within each domain that warrant ongoing investigation as the All of Us Research Program develops. We conclude that the PMI All of Us Research Program represents a significant opportunity and obligation to identify, analyze, and respond to ELSI, and we call on the PMI to initiate a research program capable of taking on these challenges. Genet Med advance online publication, 1 December 2016.


Subject(s)
Precision Medicine/ethics , Precision Medicine/methods , Ethics, Research , Humans , Longitudinal Studies , Morals , National Institutes of Health (U.S.) , Privacy , Research , United States
9.
Am J Bioeth ; 15(12): 18-24, 2015.
Article in English | MEDLINE | ID: mdl-26632356

ABSTRACT

Recent experiments have been used to "edit" the genomes of various plant, animal, and other species, including humans, with unprecedented precision. Furthermore, inserting the Cas9 endonuclease gene, together with a gene encoding the desired guide RNA, into an organism adjacent to an altered gene could create a "gene drive" capable of spreading a trait through an entire population of organisms. These experiments represent advances along a spectrum of technological abilities that genetic engineers have been working on since the advent of recombinant DNA techniques. The scientific and bioethics communities have built substantial literatures on the ethical and policy implications of genetic engineering, especially in the age of bioterrorism. However, recent CRISPR/Cas experiments have triggered a rehashing of previous policy discussions, suggesting that the scientific community requires guidance on how to think about social responsibility. We propose a framework to enable analysis of social responsibility, using two examples of genetic engineering experiments.


Subject(s)
Biological Science Disciplines/ethics , Ethical Analysis/methods , Genetic Engineering/ethics , Research Personnel/ethics , Social Responsibility , Social Values , Animals , Clustered Regularly Interspaced Short Palindromic Repeats/genetics , Ethics, Research , Humans , Influenza A Virus, H5N1 Subtype , Influenza, Human/prevention & control , Influenza, Human/virology
10.
J Autism Dev Disord ; 45(10): 3262-75, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26066358

ABSTRACT

Clinical guidelines recommend chromosomal microarray analysis (CMA) for all children with autism spectrum disorders (ASDs). We explored the test's perceived usefulness among parents of children with ASD who had undergone CMA, and received a result categorized as pathogenic, variant of uncertain significance, or negative. Fifty-seven parents participated in a semi-structured telephone interview, and 50 also completed a survey. Most parents reported that CMA was helpful for their child and family. Major themes regarding perceived usefulness were: medical care, educational and behavioral interventions, causal explanation, information for family members, and advancing knowledge. Limits to utility, uncertainties and negative outcomes were also identified. Our findings highlight the importance of considering both health and non-health related utility in genomic testing.


Subject(s)
Autism Spectrum Disorder/psychology , Genetic Testing , Parents/psychology , Patient Acceptance of Health Care , Autism Spectrum Disorder/diagnosis , Autism Spectrum Disorder/genetics , Child , Child, Preschool , Female , Humans , Male , Microarray Analysis
11.
Sci Technol Human Values ; 40(1): 71-95, 2015 Jan.
Article in English | MEDLINE | ID: mdl-36119463

ABSTRACT

The US National Institutes of Health's Human Microbiome Project aims to use genomic techniques to understand the microbial communities that live on the human body. The emergent field of microbiome science brought together diverse disciplinary perspectives and technologies, thus facilitating the negotiation of differing values. Here, we describe how values are conceptualized and negotiated within microbiome research. Analyzing discussions from a series of interdisciplinary workshops conducted with microbiome researchers, we argue that negotiations of epistemic, social, and institutional values were inextricable from the reflective and strategic category work (i.e., the work of anticipating and strategizing around divergent sets of institutional categories) that defined and organized the microbiome as an object of study and a potential future site of biomedical intervention. Negotiating the divergence or tension between emerging scientific and regulatory classifications also activated "values levers" and opened up reflective discussions of how classifications embody values and how these values might differ across domains. These data suggest that scholars at the intersections of science and technology studies, ethics, and policy could leverage such openings to identify and intervene in the ways that ethical/regulatory and scientific/technical practices are coproduced within unfolding research.
