Results 1 - 9 of 9
1.
Neuropsychologia ; 188: 108615, 2023 09 09.
Article in English | MEDLINE | ID: mdl-37423423

ABSTRACT

The aspiration for insight into human cognitive processing has traditionally driven research in cognitive science. With methods such as the Hidden semi-Markov Model-Electroencephalography (HsMM-EEG) method, new approaches have been developed that help to understand the temporal structure of cognition by identifying temporally discrete processing stages. However, it remains challenging to assign concrete functional contributions of specific processing stages to the overall cognitive process. In this paper, we address this challenge by linking HsMM-EEG with cognitive modelling, with the aim of further validating the HsMM-EEG method and demonstrating the potential of cognitive models to facilitate functional interpretation of processing stages. For this purpose, we applied HsMM-EEG to data from a mental rotation task and developed an ACT-R cognitive model that closely replicates human performance in this task. Applying HsMM-EEG to the mental rotation experiment data revealed a strong likelihood for six distinct stages of cognitive processing during trials, with an additional stage for non-rotated conditions. The cognitive model predicted intra-trial mental activity patterns that map well onto the processing stages, while explaining the additional stage as a marker of non-spatial shortcut use. This combined methodology thereby provided substantially more information than either method by itself and suggests conclusions for cognitive processing in general.


Subject(s)
Cognition , Electroencephalography , Humans , Electroencephalography/methods , Probability
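The stage-identification idea in the abstract above can be illustrated with a much simpler stand-in. HsMM-EEG fits hidden semi-Markov bump models to multichannel EEG, but the core notion of carving a trial into temporally discrete stages can be sketched as a changepoint segmentation of a 1-D signal. The function below is an illustrative simplification, not the HsMM-EEG method itself (no duration distributions, no bump topologies):

```python
import numpy as np

def segment_into_stages(x, n_stages):
    """Split 1-D signal x into n_stages contiguous segments that minimise
    total within-segment squared deviation (dynamic programming).
    A simplified stand-in for stage discovery; HsMM-EEG additionally
    models stage durations and multichannel bump topologies."""
    n = len(x)
    prefix = np.concatenate([[0.0], np.cumsum(x)])
    prefix2 = np.concatenate([[0.0], np.cumsum(np.square(x))])

    def sse(i, j):  # sum of squared deviations of x[i:j] around its mean
        s, s2, m = prefix[j] - prefix[i], prefix2[j] - prefix2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    dp = np.full((n_stages + 1, n + 1), INF)
    back = np.zeros((n_stages + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for k in range(1, n_stages + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = dp[k - 1][i] + sse(i, j)
                if c < dp[k][j]:
                    dp[k][j] = c
                    back[k][j] = i
    # walk back through the table to recover the stage boundaries
    bounds, j = [], n
    for k in range(n_stages, 0, -1):
        i = back[k][j]
        bounds.append((int(i), int(j)))
        j = i
    return list(reversed(bounds))
```

On a signal with clear level changes, the recovered boundaries coincide with the true transition points, which is the analogue of locating stage onsets in the EEG.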
2.
Front Artif Intell ; 6: 1223251, 2023.
Article in English | MEDLINE | ID: mdl-38188590

ABSTRACT

Human-awareness is an ever more important requirement for AI systems designed to assist humans with daily physical interactions and problem solving. This is especially true for patients who need support to stay as independent as possible. To be human-aware, an AI should be able to anticipate the intentions of the individual humans it interacts with, in order to understand the difficulties and limitations they are facing and to adapt accordingly. While data-driven AI approaches have recently gained a lot of attention, more research is needed on assistive AI systems that can develop models of their partners' goals to offer proactive support without needing many training trials for new problems. We propose an integrated AI system that can anticipate the actions of individual humans, contributing to the foundations of trustworthy human-robot interaction. We test this in Tangram, an exemplary sequential problem-solving task that requires dynamic decision making. In this task, the sequences of steps to the goal may be variable and not known to the system; these aspects are also recognized as real-world challenges for robotic systems. A hybrid approach based on the cognitive architecture ACT-R is presented that is not purely data-driven but includes cognitive principles, i.e., heuristics that guide human decisions. The core of this Cognitive Tangram Solver (CTS) framework is an ACT-R cognitive model that simulates human problem-solving behavior in action, recognizes possible dead ends, and identifies ways forward. Based on this model, the CTS anticipates and adapts its predictions about the next action to take in any given situation. We conducted an empirical study and collected data from 40 participants. The predictions made by the CTS were evaluated against the participants' behavior, including comparative statistics as well as prediction accuracy. The model's anticipations, compared to the human test data, provide support for further steps built upon our conceptual approach.

3.
Top Cogn Sci ; 14(4): 718-738, 2022 10.
Article in English | MEDLINE | ID: mdl-35005841

ABSTRACT

The ability to anticipate team members' actions enables joint action towards a common goal. Task knowledge and mental simulation allow for anticipating other agents' actions and for making inferences about their underlying mental representations. In human-AI teams, providing AI agents with anticipatory mechanisms can facilitate collaboration and the successful execution of joint action. This paper presents a computational cognitive model demonstrating mental simulation of operators' mental models of a situation and anticipation of their behavior. The work proposes two successive steps: (1) A hierarchical clustering algorithm is applied to recognize patterns of behavior among pilots; these behavioral clusters are used to derive commonalities in situation models from empirical data (N = 13 pilots). (2) An ACT-R (adaptive control of thought - rational) cognitive model is implemented to mentally simulate different possible outcomes of a pilot's action decisions and timing. Model tracing in ACT-R allows following up on operators' individual actions. Two models are implemented using the symbolic representations of ACT-R: one simulating normative behavior and the other simulating individual differences using subsymbolic learning. Model performance is analyzed by comparing both models. Results indicate improved performance of the individual-differences model over the normative model and are discussed regarding implications for cognitive assistance capable of anticipating operator behavior.


Subject(s)
Pilots , Humans , Pilots/psychology , Unsupervised Machine Learning , Computer Simulation , Cognition
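Step (1) in the abstract above is a standard agglomerative clustering step. A minimal sketch with SciPy, using hypothetical per-pilot feature vectors (the actual features are derived from the recorded behaviour of the 13 pilots; the clustering machinery itself is generic):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy feature vectors, one row per pilot (e.g. action frequencies or
# timings). Values are illustrative only, not from the study's data.
pilots = np.array([
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.2],
    [0.1, 0.2, 1.0],
    [0.2, 0.1, 0.9],
])

Z = linkage(pilots, method="ward")               # agglomerative hierarchy
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 behaviour clusters
```

Each resulting cluster groups pilots with similar behaviour, from which a shared situation model can then be derived.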
4.
Front Psychol ; 12: 640186, 2021.
Article in English | MEDLINE | ID: mdl-33868112

ABSTRACT

Realizing a successful and collaborative interaction between humans and robots remains a big challenge. Emotional reactions of the user provide crucial information for a successful interaction; they carry key factors for preventing errors and fatal bidirectional misunderstandings. In cases where human-machine interaction does not proceed as expected, negative emotions, like frustration, can arise. Therefore, it is important to identify frustration in a human-machine interaction and to investigate its impact on other influencing factors such as dominance, sense of control, and task performance. This paper presents a study that investigates a close cooperative work situation between human and robot, and explores the influence frustration has on the interaction. The task for the participants was to hand over colored balls to two different robot systems (an anthropomorphic robot and a robotic arm). The robot systems had to throw the balls into appropriate baskets. The coordination between human and robot was controlled by various gestures and words by means of trial and error. Participants were divided into two groups, a frustration (FRUST) and a no-frustration (NOFRUST) group. Frustration was induced by the behavior of the robotic systems, which made errors during the ball handover. Subjective and objective methods were used. The sample size was N = 30 and the study was conducted in a between-subjects design. Results show clear differences in perceived frustration between the two condition groups, and the participants showed different behavioral interactions. Furthermore, frustration has a negative influence on interaction factors such as dominance and sense of control. The study provides important information concerning the influence of frustration on human-robot interaction (HRI) for the requirements of a successful, natural, and social HRI. The results (qualitative and quantitative) are discussed with regard to how a successful and effortless interaction between human and robot can be realized and which relevant factors, like the appearance of the robot and the influence of frustration on sense of control, have to be considered.

5.
Front Neurosci ; 14: 795, 2020.
Article in English | MEDLINE | ID: mdl-32848566

ABSTRACT

This study presents the integration of a passive brain-computer interface (pBCI) and cognitive modeling as a method to trace pilots' perception and processing of auditory alerts and messages during operations. Missed alerts on the flight deck can result in out-of-the-loop problems that can lead to accidents. By tracing pilots' perception of and responses to alerts, cognitive assistance can be provided based on individual needs to ensure they maintain adequate situation awareness. Data from 24 participating aircrew in a simulated flight study that included multiple alerts and air traffic control messages in a single-pilot setup are presented. A classifier was trained to identify pilots' neurophysiological reactions to alerts and messages from participants' electroencephalogram (EEG). A neuroadaptive ACT-R model using EEG data was compared to a conventional normative model regarding accuracy in representing individual pilots. Results show that the passive BCI can distinguish, based on the recorded EEG, between alerts the pilot processes as task-relevant and those treated as irrelevant in the cockpit. The neuroadaptive model's integration of this data resulted in significantly higher performance, with 87% overall accuracy in representing individual pilots' responses to alerts and messages, compared to 72% accuracy for a normative model that did not consider EEG data. We conclude that neuroadaptive technology allows for implicit measurement and tracing of pilots' perception and processing of alerts on the flight deck. Careful handling of the uncertainties inherent to passive BCI and cognitive modeling shows how the representation of pilots' cognitive states can be improved iteratively to provide assistance.
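The classifier step described above can be sketched with scikit-learn. The abstract does not name the classifier used; linear discriminant analysis is shown here only as a common baseline for single-trial EEG classification in passive BCI, and the feature values are synthetic stand-ins for post-alert EEG features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-ins for single-trial EEG features (e.g. amplitudes in a
# post-alert window); a real pBCI pipeline derives these from the EEG.
rng = np.random.default_rng(0)
processed = rng.normal(loc=2.0, scale=0.5, size=(50, 4))  # task-relevant alerts
ignored = rng.normal(loc=0.0, scale=0.5, size=(50, 4))    # irrelevant alerts

X = np.vstack([processed, ignored])
y = np.array([1] * 50 + [0] * 50)

clf = LinearDiscriminantAnalysis().fit(X, y)  # illustrative baseline classifier
acc = clf.score(X, y)                         # accuracy on this separable toy set
```

The trained classifier's per-trial output is what a neuroadaptive cognitive model would consume to adjust its representation of the individual pilot.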

6.
Top Cogn Sci ; 12(3): 1012-1029, 2020 07.
Article in English | MEDLINE | ID: mdl-32666616

ABSTRACT

A model-based approach for cognitive assistance is proposed to keep track of pilots' changing demands in dynamic situations. Based on model-tracing with flight deck interactions and EEG recordings, the model is able to represent individual pilots' behavior in response to flight deck alerts. As a first application of the concept, an ACT-R cognitive model is created using data from an empirical flight simulator study on neurophysiological signals of missed acoustic alerts. Results show that uncertainty of individual behavior representation can be significantly reduced by combining cognitive modeling with EEG data. Implications for cognitive assistance in aviation are discussed.


Subject(s)
Aviation , Cognition , Electroencephalography , Models, Theoretical , Pilots , Psychomotor Performance , Uncertainty , Adult , Auditory Perception/physiology , Cognition/physiology , Female , Humans , Male , Man-Machine Systems , Middle Aged , Psychomotor Performance/physiology
7.
PLoS One ; 14(7): e0219920, 2019.
Article in English | MEDLINE | ID: mdl-31318919

ABSTRACT

INTRODUCTION: Intraoperative software assistance is gaining increasing importance in laparoscopic and robot-assisted surgery. Within the user-centred development process of such systems, the first question to be asked is: What information does the surgeon need, and when does he or she need it? In this article, we present an approach to investigating these surgeon information needs for minimally invasive partial nephrectomy and compare these needs to the relevant surgical computer assistance literature. MATERIALS AND METHODS: First, we conducted a literature-based hierarchical task analysis of the surgical procedure. This task analysis served as the basis for a qualitative in-depth interview study with nine experienced surgical urologists. The study employed a cognitive task analysis method to elicit surgeons' information needs during minimally invasive partial nephrectomy. Finally, a systematic literature search was conducted to review proposed software assistance solutions for minimally invasive partial nephrectomy. The review focused on what information the solutions present to the surgeon and which phase of the surgery they aim to support. RESULTS: The task analysis yielded a workflow description for minimally invasive partial nephrectomy. During the subsequent interview study, we identified three challenging phases of the procedure that may particularly benefit from software assistance: I. hilar and vascular management, II. tumour excision, and III. repair of the renal defects. Across these phases, 25 individual challenges were found which define the surgeon information needs. The literature review identified 34 relevant publications, all of which aim to support the surgeon in hilar and vascular management (phase I) or tumour excision (phase II). CONCLUSION: The work presented in this article identified unmet surgeon information needs in minimally invasive partial nephrectomy. In particular, our results suggest that future solutions should address the repair of renal defects (phase III) or put more focus on the renal collecting system as a critical anatomical structure.


Subject(s)
Minimally Invasive Surgical Procedures/methods , Minimally Invasive Surgical Procedures/standards , Nephrectomy/methods , Nephrectomy/standards , Software , Surgeons , Surgery, Computer-Assisted/methods , Surgeons/psychology , Workflow
8.
Front Psychol ; 8: 1335, 2017.
Article in English | MEDLINE | ID: mdl-28824512

ABSTRACT

Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling to a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim was that the model provide a general account of how such tasks are solved and, with minor changes, be applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach and incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and lower overall performance in the modeling data compared to the human data, which may relate to perceptual preferences or additional knowledge and rules applied by the participants. As a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from behavioral, psychobiological, and neurophysiological data may help to optimize future applications of this model so that it can be transferred to other domains of comparable dynamic decision tasks.
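The strategy progression described above (one-feature rules first, two-feature conjunctions after repeated negative feedback) can be sketched procedurally. This is a minimal stand-in, not the ACT-R exemplar/rule mixture; the feature names and the target conjunction are hypothetical:

```python
import itertools

FEATURES = ["red", "large", "striped"]

def target(stim):
    # hypothetical ground truth: the category is a two-feature conjunction
    return stim["red"] and stim["large"]

def run(stimuli, patience=3):
    """Work through candidate rules: one-feature rules first, then
    two-feature conjunctions. After `patience` accumulated errors the
    current rule is abandoned for the next candidate."""
    strategies = [(f,) for f in FEATURES] + list(itertools.combinations(FEATURES, 2))
    current, errors = 0, 0
    for stim in stimuli:
        rule = strategies[current]
        guess = all(stim[f] for f in rule)  # predict membership via the rule
        if guess != target(stim):
            errors += 1
            if errors >= patience and current < len(strategies) - 1:
                current += 1  # repeated negative feedback: switch strategy
                errors = 0
    return strategies[current]
```

Given enough trials on which every one-feature rule keeps failing, the sketch settles on the correct two-feature conjunction, mirroring the model's one-feature-to-two-feature switch.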

9.
Behav Res Methods ; 49(3): 822-834, 2017 06.
Article in English | MEDLINE | ID: mdl-27287446

ABSTRACT

Today, capturing the behavior of the human eye is considered a standard method for measuring the information-gathering process and thereby gaining insights into cognitive processes. Due to the dynamic character of most task environments, there is still a lack of a structured and automated approach for analyzing eye movements in combination with moving objects. In this article, we present a guideline for advanced gaze analysis, called IGDAI (Integration Guideline for Dynamic Areas of Interest). Applying IGDAI allows gathering dynamic areas of interest (AOIs) and simplifies their combination with eye movement data. The first step of IGDAI defines the basic requirements for the experimental setup, including the embedding of an eye tracker. The second step covers storing the information of task environments for the dynamic AOI analysis; implementation examples in XML are presented that fulfill the requirements of most dynamic task environments. The last step includes algorithms to combine the captured eye movements with the dynamic areas of interest. A verification study was conducted in which participants were presented with an air traffic controller environment and had to distinguish between different types of dynamic objects. The results show that, in comparison to static areas of interest, IGDAI allows a faster and more detailed view of the distribution of eye movements.


Subject(s)
Aircraft , Algorithms , Eye Movements , Research Design/standards , Humans
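The final IGDAI step, combining gaze samples with dynamic areas of interest, reduces to evaluating each AOI's geometry at the gaze sample's own timestamp. A minimal sketch with a hypothetical moving rectangular AOI (the guideline itself stores AOI trajectories in XML; the motion model and numbers below are illustrative):

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # screen coordinates in pixels
    y: float

def aoi_rect(t):
    """Hypothetical dynamic AOI: an object moving right at 50 px/s,
    with a 40x40 px box around its centre (illustrative numbers)."""
    cx, cy = 100.0 + 50.0 * t, 200.0
    return cx - 20, cy - 20, cx + 20, cy + 20

def hits(samples, rect_at):
    """Intersect each gaze sample with the AOI evaluated at the
    sample's timestamp - the core dynamic-AOI matching step."""
    out = []
    for s in samples:
        x0, y0, x1, y1 = rect_at(s.t)
        out.append(x0 <= s.x <= x1 and y0 <= s.y <= y1)
    return out
```

A sample that would count as a hit against a static AOI can become a miss once the AOI has moved on, which is exactly the difference the verification study measures.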