Results 1 - 13 of 13
1.
Hum Factors ; 65(5): 718-722, 2023 08.
Article in English | MEDLINE | ID: mdl-32779530

ABSTRACT

OBJECTIVE: To provide an evaluative and personal overview of the life and contributions of Professor John Senders and to introduce this Special Issue dedicated to his memory. BACKGROUND: John Senders made many profound contributions to HF/E. These various topics are exemplified by the range of papers which compose the Special Issue. Collectively, these works document and demonstrate the impact of his many valuable research contributions. METHOD: The Special Issue serves to summarize Senders' collective body of work as can be extracted from archival sources. This introductory paper recounts a series of remembrances derived from personal relationships, as well as the products of cooperative investigative research. RESULTS: This collective evaluative process documents Senders' evident and deserved status in the highest pantheon of HF/E pioneers. It records his extraordinary life, replete with accounts of his insights and joie de vivre in exploring and explaining the world which surrounded him. APPLICATIONS: Senders' record of critical contributions provides the example, par excellence, of the successful and fulfilling life in science. It encourages researchers and practitioners alike in their own search for excellence.

2.
Curr Opin Psychol ; 36: 7-12, 2020 12.
Article in English | MEDLINE | ID: mdl-32294577

ABSTRACT

Research in social robotics has a different emphasis from research in robotics for factory, military, hospital, home (vacuuming), aerial (drone), space, and undersea applications. A social robot is one whose purpose is to serve a person in a caring interaction rather than to perform a mechanical task. Both because of its newness and because of its narrower psychological rather than technological emphasis, research in social robotics tends currently to be concentrated in a single journal and single annual conference. This review categorizes such research into three areas: (1) Affect, Personality and Adaptation; (2) Sensing and Control for Action; and (3) Assistance to the Elderly and Handicapped. Current application is primarily for children's toys and devices to comfort the elderly and handicapped, as detailed in Section 'Toys and the market for social robots in general'.


Subject(s)
Robotics , Aged , Child , Humans , Social Interaction
3.
Front Psychol ; 10: 1117, 2019.
Article in English | MEDLINE | ID: mdl-31178783

ABSTRACT

Computer-based automation of sensing, analysis, memory, decision-making, and control in industrial, business, medical, scientific, and military applications is becoming increasingly sophisticated, employing various techniques of artificial intelligence for learning, pattern recognition, and computation. Research has shown that proper use of automation is highly dependent on operator trust. As a result, the topic of trust has become an active subject of research and discussion in the applied disciplines of human factors and human-systems integration. While various papers have pointed to the many factors that influence trust, there currently exists no consensual definition of trust. This paper reviews previous studies of trust in automation with emphasis on its meaning and factors determining subjective assessment of trust and automation trustworthiness (which sometimes but not always is regarded as an objectively measurable property of the automation). The paper asserts that certain attributes normally associated with human morality can usefully be applied to computer-based automation as it becomes more intelligent and more responsive to its human user. The paper goes on to suggest that the automation, based on its own experience with the user, can develop reciprocal attributes that characterize its own trust of the user and adapt accordingly. This situation can be modeled as a formal game where the user and the automation (computer) engage one another according to a payoff matrix of utilities (benefits and costs). While this is a concept paper lacking empirical data, it offers hypotheses by which future researchers can test for individual differences in the detailed attributes of trust in automation, and determine criteria for adjusting automation design to best accommodate these user differences.
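The game-theoretic framing in the abstract can be made concrete with a minimal sketch. The 2x2 payoff matrices below are invented purely for illustration (they are not values from the paper): the user chooses whether to rely on the automation, the automation chooses how much autonomy to exercise, and each side best-responds to the other.

```python
# Hypothetical 2x2 payoff matrices for a user-automation trust game.
# Rows index the user's action (0 = rely on automation, 1 = act manually).
# Columns index the automation's policy (0 = act autonomously, 1 = defer).
# All utilities are illustrative assumptions.
USER_PAYOFF = [[8, 5],
               [2, 6]]
AUTO_PAYOFF = [[7, 4],
               [3, 6]]

def best_response_user(auto_policy):
    """User's utility-maximizing action given the automation's policy."""
    return max(range(2), key=lambda a: USER_PAYOFF[a][auto_policy])

def best_response_auto(user_action):
    """Automation's utility-maximizing policy given the user's action."""
    return max(range(2), key=lambda p: AUTO_PAYOFF[user_action][p])

def pure_nash_equilibria():
    """All (user_action, auto_policy) pairs where neither side gains
    from a unilateral change."""
    return [(a, p) for a in range(2) for p in range(2)
            if best_response_user(p) == a and best_response_auto(a) == p]
```

With these particular payoffs the game has two stable outcomes, mutual reliance and mutual disengagement, which mirrors the paper's point that trust between user and automation is reciprocal.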

4.
Hum Factors ; 61(7): 1162-1170, 2019 11.
Article in English | MEDLINE | ID: mdl-30811950

ABSTRACT

OBJECTIVE: The objective is to propose three quantitative models of trust in automation. BACKGROUND: Current trust-in-automation literature includes various definitions and frameworks, which are reviewed. METHOD: This research shows how three existing models, namely those for signal detection, statistical parameter estimation calibration, and internal model-based control, can be revised and reinterpreted to apply to trust in automation useful for human-system interaction design. RESULTS: The resulting reinterpretation is presented quantitatively and graphically, and the measures for trust and trust calibration are discussed, along with examples of application. CONCLUSION: The resulting models can be applied to provide quantitative trust measures in future experiments or system designs. APPLICATIONS: Simple examples are provided to explain how model application works for the three trust contexts that correspond to signal detection, parameter estimation calibration, and model-based open-loop control.


Subject(s)
Automation , Ergonomics/methods , Man-Machine Systems , Trust/psychology , Humans , Models, Psychological , Models, Statistical , Signal Detection, Psychological/physiology
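The signal detection context named in the abstract can be illustrated with a standard sensitivity computation. This is a generic SDT sketch rather than the paper's own model: the automation's trustworthiness is proxied by its d', and the `calibration_error` helper (a hypothetical measure introduced here for illustration) expresses trust calibration as the gap between the sensitivity the operator behaves as if the automation has and its empirically measured sensitivity.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """SDT sensitivity index d' = z(H) - z(FA): how well the automation
    separates true events from noise (a proxy for trustworthiness)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def calibration_error(assumed_dprime, empirical_dprime):
    """Gap between the d' the operator acts as if the automation has and
    its measured d'; positive means overtrust. (Hypothetical measure.)"""
    return assumed_dprime - empirical_dprime
```

For example, an automation with an 84% hit rate and 16% false alarm rate has d' near 2; an operator who treats it as if d' were 2.5 is overtrusting by that margin.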
5.
Hum Factors ; 59(6): 901-910, 2017 09.
Article in English | MEDLINE | ID: mdl-28796971

ABSTRACT

OBJECTIVE: We address the question of necessary conditions for users to adjust system settings, such as alarm thresholds, correctly. BACKGROUND: When designing systems, we need to decide which system functions users should control. Giving control to users empowers them, but users must have the relevant information and the ability to adjust settings correctly for their control to be beneficial. METHOD: Using the example of adjusting an alerting threshold, we analyze the conditions for when users can and when they cannot possibly adjust threshold settings adequately. RESULTS: We identify two obstacles that limit users' ability to adjust thresholds adequately: (a) the difficulty of determining the correct threshold settings, especially because of users' strong response to false positive indications, and (b) the difficulty of collecting the information necessary for setting the threshold. CONCLUSION: Users often cannot identify the optimal settings for a system, so it is unlikely that they choose adequate system settings. APPLICATION: System designers must consider the difficulties users face and analyze them explicitly when deciding on user involvement in processes.


Subject(s)
Man-Machine Systems , Signal Detection, Psychological , User-Computer Interface , Adult , Humans
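The threshold-setting difficulty described in the abstract can be sketched with a standard signal detection calculation (the distributions, costs, and base rates below are illustrative assumptions, not the paper's data): the cost-minimizing alert threshold depends jointly on the event base rate and the relative costs of misses and false alarms, quantities users rarely know.

```python
from statistics import NormalDist

def expected_cost(threshold, base_rate, cost_miss, cost_fa,
                  noise=NormalDist(0, 1), signal=NormalDist(2, 1)):
    """Expected cost per observation for an alert that fires when the
    evidence exceeds `threshold`. Evidence distributions are assumed
    unit-variance Gaussians for illustration."""
    p_miss = signal.cdf(threshold)       # true event, no alert
    p_fa = 1 - noise.cdf(threshold)      # no event, alert anyway
    return base_rate * cost_miss * p_miss + (1 - base_rate) * cost_fa * p_fa

def best_threshold(base_rate, cost_miss, cost_fa,
                   lo=-3.0, hi=5.0, steps=801):
    """Grid search for the cost-minimizing alert threshold."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda t: expected_cost(t, base_rate,
                                                 cost_miss, cost_fa))
```

With equal costs and a 50% base rate the optimum sits midway between the two distributions, but making the event rare pushes the optimal threshold up substantially, which is exactly the kind of dependency a user adjusting a knob by feel is unlikely to get right.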
6.
Hum Factors ; 59(2): 229-241, 2017 03.
Article in English | MEDLINE | ID: mdl-27591207

ABSTRACT

OBJECTIVE: This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. BACKGROUND: Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. METHOD: We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. RESULTS: The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. CONCLUSION: Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. APPLICATION: Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.


Subject(s)
Attention/physiology , Automation , Awareness/physiology , Man-Machine Systems , Models, Theoretical , Task Performance and Analysis , Visual Perception/physiology , Humans
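The SEEV component of the model above can be sketched in a simplified additive form (the scoring rule and the channel ratings below are illustrative assumptions; the paper's modified version additionally directs attention using uncertainty in the operator's state estimates).

```python
def seev_scores(channels):
    """Additive SEEV score per display channel:
    salience - effort + expectancy + value.
    Inputs are ordinal ratings (e.g. on a 0-3 scale); higher score
    predicts more visual attention."""
    return {name: s - ef + ex + v
            for name, (s, ef, ex, v) in channels.items()}

def attention_shares(channels):
    """Normalize non-negative SEEV scores into predicted dwell fractions."""
    scores = {k: max(v, 0.0) for k, v in seev_scores(channels).items()}
    total = sum(scores.values()) or 1.0
    return {k: v / total for k, v in scores.items()}

# Hypothetical instrument ratings (salience, effort, expectancy, value):
panel = {"altimeter": (2, 1, 3, 3), "fuel_gauge": (1, 1, 1, 2)}
```

Under these made-up ratings the altimeter is predicted to draw 70% of dwell time, the kind of per-instrument percentage the paper validates against simulator data.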
7.
Appl Ergon ; 59(Pt B): 598-601, 2017 Mar.
Article in English | MEDLINE | ID: mdl-26724175

ABSTRACT

Two well-known Rasmussen models, the skill-rule-knowledge (SRK) paradigm and the abstraction hierarchy, are compared to well-known models in both physics and psychology. Some of the latter are quantitative and make explicit predictions; some are qualitative, such as the Rasmussen models, being more useful for provoking thought about the relevant issues. Each of the Rasmussen models is evaluated with respect to a six-attribute model taxonomy recently introduced by the author. The SRK model is shown to characterize modern automation as well as human behavior, with computer and physical devices exhibiting skill-based, rule-based, and knowledge-based properties, and with monitoring and intermittent intervention by a human supervisor. A further suggestion is that the Rasmussen abstraction hierarchy could be applied not only to systems such as air traffic control but also to general situations of living.


Subject(s)
Ergonomics , Knowledge , Models, Psychological , Physics , Thinking , Humans
8.
Hum Factors ; 58(4): 525-32, 2016 06.
Article in English | MEDLINE | ID: mdl-27098262

ABSTRACT

OBJECTIVE: The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. BACKGROUND: Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. METHODS: This mini-review describes HRI developments in four application areas and the challenges they pose for human factors research. RESULTS: In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. CONCLUSIONS: HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. APPLICATIONS: HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations.


Subject(s)
Man-Machine Systems , Robotics , Humans
9.
Appl Ergon ; 45(1): 78-84, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23615659

ABSTRACT

A model, as the term is used here, is a way of representing knowledge for the purpose of thinking, communicating to others, or implementing decisions as in system analysis, design or operations. It can be said that to the extent that we can model some aspect of nature we understand it. Models can range from fleeting mental images to highly refined mathematical equations or computer algorithms that precisely predict physical events. In constructing and evaluating models of ergonomic systems it is important that we consider the attributes of our models in relation to our objectives and what we can reasonably aspire to. To that end, this paper proposes a taxonomy of models in terms of six independent attributes: applicability to observables, dimensionality, metricity, robustness, social penetration and conciseness. Each of these attributes is defined along with the meaning of different levels of each. The attribute taxonomy may be used to evaluate the quality of a model. Examples of system ergonomics models having different combinations of attributes at different levels are provided. Philosophical caveats regarding models in system ergonomics are discussed, as well as the relation to scientific method.


Subject(s)
Classification , Ergonomics , Models, Theoretical , Evaluation Studies as Topic , Humans
10.
Hum Factors ; 50(3): 418-26, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18689048

ABSTRACT

OBJECTIVE: I review and critique basic ideas of both traditional error/risk analysis and the newer and contrasting paradigm of resilience engineering. BACKGROUND: Analysis of human error has matured and been applied over the past 50 years by human factors engineers, whereas the resilience engineering paradigm is relatively new. METHOD: Fundamental ideas and examples of human factors applications of each approach are presented and contrasted. RESULTS: Probabilistic risk analysis provides mathematical rigor in generalizing on past error events to identify system vulnerabilities, but prediction is problematical because (a) error definition is arbitrary, and thus it is difficult to infer valid probabilities of human error to input to quantitative models, and (b) future accident conditions are likely to be quite different from those of past accidents. The new resilience engineering paradigm, in contrast, is oriented toward organizational process and is concerned with anticipating, mitigating, and preparing for graceful recovery from future events. CONCLUSION: Resilience engineering complements traditional error analysis but has yet to provide useful quantification and operational methods. APPLICATION: A best safety strategy is to use both approaches.


Subject(s)
Ergonomics , Industry , Safety Management , Causality , Humans , Risk Assessment
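The quantitative machinery of probabilistic risk analysis referenced above reduces, in its simplest form, to combining independent basic-event probabilities through the AND/OR gates of a fault tree. The event structure and probabilities below are invented for illustration; the abstract's caveat applies, since the hard part is obtaining valid human-error probabilities to feed in.

```python
def p_and(probs):
    """AND gate: all independent basic events must occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def p_or(probs):
    """OR gate: at least one independent basic event occurs,
    computed via the complement rule."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical top event: (operator slip AND alarm failure) OR power loss.
top_event = p_or([p_and([1e-2, 1e-3]), 1e-5])
```

Here the top-event probability comes out near 2e-5, dominated equally by the two branches; the paper's critique is that such numbers inherit the arbitrariness of the underlying error definitions.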
11.
Am Surg ; 72(11): 1102-8; discussion 1126-48, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17120955

ABSTRACT

There is an increasing demand for interventions to improve patient safety, but there is limited data to guide such reform. In particular, because much of the existing research is outcome-driven, we have a limited understanding of the factors and process variations that influence safety in the operating room. In this article, we start with an overview of safety terminology, suggesting a model that emphasizes "safety" rather than "error" and that can encompass the spectrum of events occurring in the operating room. Next, we provide an introduction to techniques that can be used to understand safety at the point of care and we review the data that exists relating such studies to improved outcomes. Future work in this area will need to prospectively study the processes and factors that impact patient safety and vulnerability in the operating room.


Subject(s)
General Surgery/standards , Operating Rooms/standards , Quality Assurance, Health Care , Safety Management/methods , Humans , United States
12.
Surgery ; 139(2): 159-73, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16455323

ABSTRACT

BACKGROUND: To better understand the operating room as a system and to identify system features that influence patient safety, we performed an analysis of operating room patient care using a prospective observational technique. METHODS: A multidisciplinary team comprised of human factors experts and surgeons conducted prospective observations of 10 complex general surgery cases in an academic hospital. Minute-to-minute observations were recorded in the field, and later coded and analyzed. A qualitative analysis first identified major system features that influenced team performance and patient safety. A quantitative analysis of factors related to these systems features followed. In addition, safety-compromising events were identified and analyzed for contributing and compensatory factors. RESULTS: Problems in communication and information flow, and workload and competing tasks were found to have measurable negative impact on team performance and patient safety in all 10 cases. In particular, the counting protocol was found to significantly compromise case progression and patient safety. We identified 11 events that potentially compromised patient safety, allowing us to identify recurring factors that contributed to or mitigated the overall effect on the patient's outcome. CONCLUSIONS: This study demonstrates the role of prospective observational methods in exposing critical system features that influence patient safety and that can be the targets for patient safety initiatives. Communication breakdown and information loss, as well as increased workload and competing tasks, pose the greatest threats to patient safety in the operating room.


Subject(s)
Operating Rooms/standards , Patient Care Team , Safety , Surgical Procedures, Operative/standards , Communication , Data Collection , Humans , Information Services , Medical Errors/prevention & control , Postoperative Complications , Prospective Studies , Surgical Procedures, Operative/adverse effects , Workload
13.
Hum Factors ; 46(4): 587-99, 2004.
Article in English | MEDLINE | ID: mdl-15709322

ABSTRACT

Distraction from cell phones, navigation systems, information/entertainment systems, and other driver-interactive devices now finding their way into highway vehicles is a serious national safety concern. However, driver distraction is neither well defined nor well understood. In an effort to bring some better definition to the problem, a framework is proposed based on the ideas of control theory. Loci and causes of distraction are represented as disturbances to various functional elements of a control loop involving driver intending (goal setting), sensing, deciding on control response, dynamics of the vehicle, and human body activation and energetics. It is argued that activation should be classed separately from the other functions. Attention switching from environmental observation/control to internal device manipulation is modeled as sampled-data control. Also fit within the control framework are mental modeling and anticipation of events in the driver's preview. The control framework is shown to suggest some salient research questions and experiments. Actual or potential applications of this research include a refined understanding of driver distraction and better modeling and prediction of driving performance as a function of vehicle and highway design.


Subject(s)
Attention/physiology , Automobile Driving/psychology , Cell Phone , Mental Processes/physiology , Accidents, Traffic/prevention & control , Female , Humans , Male , Models, Theoretical , Predictive Value of Tests , Reaction Time , Risk Factors , Safety , Task Performance and Analysis , Visual Perception/physiology
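The sampled-data view of attention switching described in the abstract above can be sketched minimally: lateral error accumulates while the driver's eyes are off the road and is corrected only at each glance back, so peak error grows with glance-away duration. All dynamics and numbers below are illustrative assumptions, not the paper's model.

```python
def peak_lane_error(glance_away, steps=400, dt=0.05, drift=0.1):
    """Toy sampled-data abstraction of lane keeping: lateral error
    grows at `drift` (m/s) between road glances and is fully nulled
    at each glance back, which occurs every `glance_away` seconds.
    Returns the peak absolute lateral error over the run."""
    error, peak, t_since_glance = 0.0, 0.0, 0.0
    for _ in range(steps):
        t_since_glance += dt
        error += drift * dt              # open-loop drift while eyes are off road
        peak = max(peak, abs(error))
        if t_since_glance >= glance_away:
            error, t_since_glance = 0.0, 0.0   # glance back, correct fully
    return peak
```

Under these assumptions peak error scales roughly linearly with glance-away time (about drift x glance duration), the basic prediction that makes sampled-data control a useful frame for device-induced distraction.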