Results 1 - 15 of 15
1.
AI Ethics ; 1(1): 61-65, 2021.
Article in English | MEDLINE | ID: mdl-38624388

ABSTRACT

Artificial Intelligence (AI) is reshaping the world in profound ways; some of its impacts are certainly beneficial, but widespread and lasting harms can result from the technology as well. The integration of AI into various aspects of human life is underway, and the complex ethical concerns emerging from the design, deployment, and use of the technology serve as a reminder that it is time to revisit what future developers and designers, along with professionals, are learning when it comes to AI. It is of paramount importance to train future members of the AI community, and other stakeholders as well, to reflect on the ways in which AI might impact people's lives and to embrace their responsibilities to enhance its benefits while mitigating its potential harms. This could occur in part through the fuller and more systematic inclusion of AI ethics in the curriculum. In this paper, we briefly describe different approaches to AI ethics and offer a set of recommendations related to AI ethics pedagogy.

2.
Sci Eng Ethics ; 26(6): 2957-2974, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32651773

ABSTRACT

The crash of two 737 MAX passenger aircraft in late 2018 and early 2019, and subsequent grounding of the entire fleet of 737 MAX jets, turned a global spotlight on Boeing's practices and culture. Explanations for the crashes include: design flaws within the MAX's new flight control software system designed to prevent stalls; internal pressure to keep pace with Boeing's chief competitor, Airbus; Boeing's lack of transparency about the new software; and the lack of adequate monitoring of Boeing by the FAA, especially during the certification of the MAX and following the first crash. While these and other factors have been the subject of numerous government reports and investigative journalism articles, little to date has been written on the ethical significance of the accidents, in particular the ethical responsibilities of the engineers at Boeing and the FAA involved in designing and certifying the MAX. Lessons learned from this case include the need to strengthen the voice of engineers within large organizations. There is also the need for greater involvement of professional engineering societies in ethics-related activities and for broader focus on moral courage in engineering ethics education.


Subject(s)
Engineering , Ethics, Professional , Aircraft , Morals , Writing
3.
AMA J Ethics ; 21(2): E138-145, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30794123

ABSTRACT

This commentary responds to a hypothetical case involving an assistive artificial intelligence (AI) surgical device and focuses on potential harms emerging from interactions between humans and AI systems. Informed consent and responsibility (specifically, how responsibility should be distributed among professionals, technology companies, and other stakeholders) for uses of AI in health care are discussed.


Subject(s)
Artificial Intelligence , Communication , Health Personnel/psychology , Intervertebral Disc Displacement/diagnosis , Intervertebral Disc Displacement/surgery , Patients/psychology , Robotic Surgical Procedures/psychology , Attitude to Computers , Decision Making, Computer-Assisted , Diagnosis, Computer-Assisted , Humans , Male , Middle Aged , Patient Education as Topic/methods , United States
4.
Sci Eng Ethics ; 25(2): 383-398, 2019 04.
Article in English | MEDLINE | ID: mdl-29134429

ABSTRACT

The literature on self-driving cars and ethics continues to grow. Yet much of it focuses on ethical complexities emerging from an individual vehicle. That is an important but insufficient step towards determining how the technology will impact human lives and society more generally. What must complement ongoing discussions is a broader, system level of analysis that engages with the interactions and effects that these cars will have on one another and on the socio-technical systems in which they are embedded. To bring the conversation of self-driving cars to the system level, we make use of two traffic scenarios which highlight some of the complexities that designers, policymakers, and others should consider related to the technology. We then describe three approaches that could be used to address such complexities and their associated shortcomings. We conclude by bringing attention to the "Moral Responsibility for Computing Artifacts: The Rules", a framework that can provide insight into how to approach ethical issues related to self-driving cars.


Subject(s)
Artificial Intelligence/ethics , Automation/ethics , Automobile Driving , Automobiles/ethics , Engineering/ethics , Technology/ethics , Accidents, Traffic , Computers , Ethical Analysis , Humans , Morals , Social Change , Social Responsibility , Systems Analysis
5.
Sci Eng Ethics ; 24(5): 1521-1536, 2018 10.
Article in English | MEDLINE | ID: mdl-28936795

ABSTRACT

Recently, there has been an upsurge of attention focused on bias and its impact on specialized artificial intelligence (AI) applications. Allegations of racism and sexism have permeated the conversation as stories surface about search engines delivering job postings for well-paying technical jobs to men and not women, or providing arrest mugshots when keywords such as "black teenagers" are entered. Learning algorithms are evolving; they are often created from parsing through large datasets of online information while having truth labels bestowed on them by crowd-sourced masses. These specialized AI algorithms have been liberated from the minds of researchers and startups, and released onto the public. Yet intelligent though they may be, these algorithms maintain some of the same biases that permeate society. They find patterns within datasets that reflect implicit biases and, in so doing, emphasize and reinforce these biases as global truth. This paper describes specific examples of how bias has infused itself into current AI and robotic systems, and how it may affect the future design of such systems. More specifically, we draw attention to how bias may affect the functioning of (1) a robot peacekeeper, (2) a self-driving car, and (3) a medical robot. We conclude with an overview of measures that could be taken to mitigate or halt bias from permeating robotic technology.


Subject(s)
Prejudice , Robotics , Social Justice , Algorithms , Artificial Intelligence , Automobiles , Bias , Biomedical Technology , Datasets as Topic , Female , Humans , Learning , Male , Racism , Sexism
6.
Sci Eng Ethics ; 22(1): 31-46, 2016 Feb.
Article in English | MEDLINE | ID: mdl-25736832

ABSTRACT

Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to "nudge" their human users in the direction of being "more ethical". More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture "socially just" tendencies in their human counterparts. Designing technological artifacts in such a way as to influence human behavior is already a well-established practice, but the fact that the practice is commonplace does not necessarily resolve the ethical issues associated with its implementation.


Subject(s)
Engineering/ethics , Moral Development , Robotics/ethics , Social Justice , Humans , Morals
7.
Account Res ; 22(5): 267-83, 2015.
Article in English | MEDLINE | ID: mdl-25928178

ABSTRACT

The size and complexity of research teams continue to grow, especially within the realms of science and engineering. This has intensified already existing concerns about relying on traditional authorship schemes as the way to allocate credit for a contribution to a research project. In this paper, we examine current authorship problems plaguing research communities and provide suggestions for how those problems could potentially be mitigated. We recommend that research communities, especially those involved in large-scale collaborations, revisit the contributor model and embrace it as a means for allocating credit more authentically and transparently.


Subject(s)
Authorship/standards , Cooperative Behavior , Research/organization & administration , Engineering , Humans , Peer Review/standards , Research/standards , Science , Writing/standards
8.
Sci Eng Ethics ; 20(1): 261-76, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23420467

ABSTRACT

As a committee of the National Academy of Engineering recognized, ethics education should foster the ability of students to analyze complex decision situations and ill-structured problems. Building on the NAE's insights, we report on an innovative teaching approach that has two main features: first, it places the emphasis on deliberation and on self-directed, problem-based learning in small groups of students; and second, it focuses on understanding ill-structured problems. The first innovation is motivated by an abundance of scholarly research that supports the value of deliberative learning practices. The second results from a critique of the traditional case-study approach in engineering ethics. A key problem with standard cases is that they are usually described in a fashion that renders the ethical problem too obvious and simplistic. The practitioner, by contrast, may face problems that are ill-structured. In the collaborative learning environment described here, groups of students use interactive, web-based argument visualization software called "AGORA-net: Participate - Deliberate!". The function of the software is to structure communication and problem solving in small groups. Students are confronted with the task of identifying possible stakeholder positions and reconstructing their legitimacy by constructing justifications for these positions in the form of graphically represented argument maps. The argument maps are then presented in class so that these stakeholder positions and their respective justifications become visible and can be brought into a reasoned dialogue. Argument mapping provides an opportunity for students to collaborate in teams and to develop critical thinking and argumentation skills.


Subject(s)
Cooperative Behavior , Engineering/ethics , Ethics, Research/education , Problem Solving , Problem-Based Learning , Teaching/methods , Communication , Comprehension , Humans , Software , Thinking
9.
Sci Eng Ethics ; 19(2): 653-68, 2013 Jun.
Article in English | MEDLINE | ID: mdl-22389209

ABSTRACT

This manuscript describes a pilot study in ethics education employing a problem-based learning approach to the study of novel, complex, ethically fraught, unavoidably public, and unavoidably divisive policy problems, called "fractious problems," in bioscience and biotechnology. Diverse graduate and professional students from four US institutions and disciplines spanning science, engineering, humanities, social science, law, and medicine analyzed fractious problems employing "navigational skills" tailored to the distinctive features of these problems. The students presented their results to policymakers, stakeholders, experts, and members of the public. This approach may provide a model for educating future bioscientists and bioengineers so that they can meaningfully contribute to the social understanding and resolution of challenging policy problems generated by their work.


Subject(s)
Biotechnology , Ethics, Professional/education , Ethics, Research/education , Problem Solving/ethics , Problem-Based Learning/methods , Science , Biotechnology/education , Biotechnology/ethics , Education, Graduate , Humans , Pilot Projects , Policy Making , Science/education , Science/ethics , Students , United States
10.
Sci Eng Ethics ; 19(1): 123-37, 2013 Mar.
Article in English | MEDLINE | ID: mdl-21918922

ABSTRACT

In this article, the authors examine whether and how robot caregivers can contribute to the welfare of children with various cognitive and physical impairments by expanding recreational opportunities for these children. The capabilities approach is used as a basis for informing the relevant discussion. Though important in its own right, having the opportunity to play is essential to the development of other capabilities central to human flourishing. Drawing from empirical studies, the authors show that the use of various types of robots has already helped some children with impairments. Recognizing the potential ethical pitfalls of robot caregiver intervention, however, the authors examine these concerns and conclude that an appropriately designed robot caregiver has the potential to contribute positively to the development of the capability to play while also enhancing the ability of human caregivers to understand and interact with care recipients.


Subject(s)
Caregivers/ethics , Cognition Disorders/therapy , Disabled Persons , Play and Playthings , Robotics/ethics , Adolescent , Child , Child Development , Humans
11.
Sci Eng Ethics ; 17(2): 355-64, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21512859

ABSTRACT

The primary aim of this article is to identify ethical challenges relating to authorship in engineering fields. Professional organizations and journals do provide crucial guidance in this realm, but this cannot replace the need for frequent and diligent discussions in engineering research communities about what constitutes appropriate authorship practice. Engineering researchers should seek to identify and address issues such as who is entitled to be an author and whether publishing their research could potentially harm the public.


Subject(s)
Authorship , Engineering/ethics , Ethics, Research , Periodicals as Topic/ethics , Social Responsibility , Humans , Publishing/ethics
12.
Sci Eng Ethics ; 16(2): 387-407, 2010 Jun.
Article in English | MEDLINE | ID: mdl-19597969

ABSTRACT

To assess ethics pedagogy in science and engineering, we developed a new tool called the Engineering and Science Issues Test (ESIT). ESIT measures moral judgment in a manner similar to the Defining Issues Test, second edition, but is built around technical dilemmas in science and engineering. We used a quasi-experimental approach with pre- and post-tests, and we compared the results to those of a control group with no overt ethics instruction. Our findings are that several (but not all) stand-alone classes showed a significant improvement compared to the control group when the metric includes multiple stages of moral development. We also found that the written test had a higher response rate and sensitivity to pedagogy than the electronic version. We do not find significant differences on pre-test scores with respect to age, education level, gender or political leanings, but we do on whether subjects were native English speakers. We did not find significant differences on pre-test scores based on whether subjects had previous ethics instruction; this could suggest a lack of a long-term effect from the instruction.


Subject(s)
Educational Measurement/methods , Engineering , Ethics, Professional/education , Judgment , Morals , Science , Adult , Decision Making/ethics , Educational Measurement/standards , Engineering/education , Engineering/ethics , Female , Georgia , Humans , Male , Principle-Based Ethics , Problem Solving/ethics , Science/education , Science/ethics , Sensitivity and Specificity
13.
Sci Eng Ethics ; 15(4): 517-30, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19915956

ABSTRACT

Many scholars predict that the technology to modify unborn children genetically is on the horizon. According to supporters of genetic enhancement, allowing parents to select a child's traits will enable him/her to experience a better life. Following their logic, the technology will not only increase our knowledge base and generate cures for genetic illness, but it may enable us to increase the intelligence, strength, and longevity of future generations as well. Yet it must be examined whether supporters of genetic enhancement, especially libertarians, adequately appreciate the ethical hazards emerging from the technology, including whether its use might violate the harm principle.


Subject(s)
Genetic Diseases, Inborn/therapy , Genetic Enhancement/ethics , Human Rights , Reproductive Techniques, Assisted/ethics , Adult , Child , Female , Forecasting , Humans , Parents , Pregnancy
14.
Account Res ; 15(3): 188-204, 2008.
Article in English | MEDLINE | ID: mdl-18792538

ABSTRACT

The implications of the institutional review board (IRB) system's growing purview are examined. Among the issues discussed are whether IRBs are censoring research and whether the IRB review process fundamentally alters the research that is being conducted. The intersection between IRB review and free speech is also explored. In general, it is argued that the review system for human subjects research (HSR) should be modified in order to limit the scope of IRB review.


Subject(s)
Ethics Committees, Research , Human Experimentation/ethics , Humans , Informed Consent