Results 1 - 20 of 31
1.
Psychon Bull Rev ; 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38049574

ABSTRACT

Despite the ubiquitous nature of evidence accumulation models in cognitive and experimental psychology, there has been comparatively limited uptake of such techniques in the applied literature. While quantifying latent cognitive processing properties has significant potential for applied domains such as adaptive work systems, accumulator models often fall short in practical applications. Two primary reasons for these shortcomings are the complexity and time needed to apply cognitive models, and the failure of current models to capture systematic trial-to-trial variability in parameters. In this manuscript, we develop a novel, trial-varying extension of the shifted Wald model to address these concerns. By leveraging conjugate properties of the Wald distribution, we derive computationally efficient solutions for the threshold and drift parameters, which can be updated instantaneously with new data. The resulting model allows the quantification of systematic variation in latent cognitive parameters across trials, and we demonstrate the utility of such analyses through simulations and an exemplar application to an existing data set. The analytic nature of our solutions opens the door for real-world applications, significantly extending the reach of computational models of behavioral responses.
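
The paper's analytic solutions are not reproduced here, but the general idea of closed-form, trial-by-trial updating can be illustrated. A minimal sketch, assuming the standard shifted Wald parameterization (threshold alpha, drift gamma, shift theta) and treating alpha and theta as known: because the Wald likelihood is Gaussian in the drift, a normal prior on drift yields a posterior that updates in closed form after every trial. Parameter names, priors, and starting values are illustrative assumptions, not the authors' derivation.

```python
import numpy as np

def shifted_wald_logpdf(t, alpha, gamma, theta):
    """Log density of the shifted Wald: threshold alpha, drift gamma, shift theta (t > theta)."""
    x = t - theta
    return (np.log(alpha) - 0.5 * np.log(2 * np.pi * x ** 3)
            - (alpha - gamma * x) ** 2 / (2 * x))

def update_drift_posterior(rts, alpha, theta, prior_mean=2.0, prior_var=1.0):
    """Trial-by-trial conjugate update of a normal prior on drift gamma (alpha, theta known).

    The Wald likelihood is proportional to exp(-0.5 * ((t - theta) * gamma**2 - 2 * alpha * gamma)),
    which is Gaussian in gamma, so each new response time updates the posterior in closed form.
    """
    mean, var = prior_mean, prior_var
    trajectory = []
    for t in rts:
        x = t - theta
        precision = 1.0 / var + x            # posterior precision grows with decision time
        mean = (mean / var + alpha) / precision
        var = 1.0 / precision
        trajectory.append((mean, var))        # running, per-trial drift estimate
    return trajectory

# Example: the drift posterior sharpens over 100 simulated trials (alpha=1, gamma=2.5, theta=0.2)
rng = np.random.default_rng(1)
rts = 0.2 + rng.wald(1.0 / 2.5, 1.0, size=100)  # inverse-Gaussian mean alpha/gamma, shape alpha**2
print(update_drift_posterior(rts, alpha=1.0, theta=0.2)[-1])
```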

2.
Heliyon ; 9(9): e19736, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37809370

ABSTRACT

Previous research has presented conflicting evidence regarding whether Chinese characters are processed holistically. In past work, we applied Systems Factorial Technology (SFT) and discovered that native Chinese speakers exhibited limited capacity when processing characters and words. To pinpoint the source of this limitation, our current research delved further into the mental architecture involved in processing Chinese characters and English words, taking into consideration information from each component. In the current study, participants were directed to make same/different judgments on characters/words presented sequentially. Our results indicated that participants used a parallel self-terminating strategy when both or neither of the left/right components differed (Experiment 1). When faced with decisional uncertainty about whether the left or the right component would also differ, most participants processed with a parallel exhaustive architecture, while a few exhibited a coactive architecture (Experiment 2). Taken together, our work provides evidence that in word/character perception there is weak holistic processing (parallel self-terminating processing) when partial information is sufficient for the decision, and robust holistic processing (coactive or parallel exhaustive processing) under decisional uncertainty. Our findings underscore the significant role that task and presentation context play in visual word processing.

3.
Top Cogn Sci ; 2023 Jul 13.
Article in English | MEDLINE | ID: mdl-37439275

ABSTRACT

In the modern world, many important tasks have become too complex for a single unaided individual to manage. Some safety-critical tasks are therefore conducted by teams to improve performance and minimize the risk of error. These teams have traditionally consisted of human operators, yet artificial intelligence and machine systems are now being incorporated into team environments to improve performance and capacity. We used a computerized task modeled after a classic arcade game to investigate the performance of human-machine and human-human teams. We manipulated the group condition between team members: they were instructed to collaborate, compete, or work independently. We evaluated players' performance in the main task (gameplay) and, in post hoc analyses, examined participants' behavioral patterns to infer group strategies. We compared game performance between team types (human-human vs. human-machine) and group conditions (competitive, collaborative, independent). Adapting workload capacity analysis to human-machine teams, we found that both team types and all group conditions suffered a performance efficiency cost. We did, however, observe a reduced cost for collaborative over competitive teams within human-human pairings, an effect that was diminished when playing with a machine partner. The implications of workload capacity analysis as a powerful tool for measuring human-machine team performance are discussed.

4.
Acta Psychol (Amst) ; 238: 103986, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37454588

ABSTRACT

The Word Superiority Effect (WSE) refers to the phenomenon whereby a single letter is recognized more accurately when presented within a word than when it is presented alone or in a random string. However, previous research has produced conflicting findings regarding whether this effect also occurs in the processing of Chinese characters. The current study employed the capacity coefficient, a measure derived from the Systems Factorial Technology framework, to investigate processing efficiency and test for the superiority effect in Chinese characters and English words. We hypothesized that the WSE would result in more efficient processing of characters/words compared to their individual components, as reflected by super-capacity processing. However, contrary to our predictions, results from both the "same" (Experiment 1) and "different" (Experiment 2) judgment tasks revealed that native Chinese speakers exhibited limited processing capacity (inefficiency) for both English words and Chinese characters. In addition, the results supported an English WSE: participants integrated English words and pseudowords more efficiently than nonwords, and decomposed nonwords more efficiently than words and pseudowords. In contrast, no superiority effect was observed for Chinese characters. To conclude, the current work suggests that the superiority effect applies only to English processing efficiency under specific context rules and does not extend to Chinese characters.
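
To make the capacity coefficient concrete, a minimal sketch is given below using the OR (first-terminating) form of the coefficient and simple empirical survivor-function estimates; the published analyses use more refined estimators, and the appropriate form (OR vs. AND) depends on the task, so the condition names, simulated data, and grid below are purely illustrative.

```python
import numpy as np

def cumulative_hazard(rts, t_grid):
    """Empirical cumulative hazard H(t) = -log S(t) from correct-trial response times."""
    rts = np.sort(np.asarray(rts, float))
    surv = 1.0 - np.searchsorted(rts, t_grid, side="right") / len(rts)
    return -np.log(np.clip(surv, 1e-6, 1.0))   # clip avoids log(0) in the right tail

def capacity_or(rt_whole, rt_left, rt_right, t_grid):
    """OR capacity coefficient: C(t) = H_whole(t) / (H_left(t) + H_right(t)).

    C(t) near 1 suggests unlimited-capacity parallel processing; C(t) < 1 indicates limited
    capacity (the inefficiency reported for characters/words); C(t) > 1 indicates super capacity.
    """
    h_whole = cumulative_hazard(rt_whole, t_grid)
    h_parts = cumulative_hazard(rt_left, t_grid) + cumulative_hazard(rt_right, t_grid)
    return h_whole / np.clip(h_parts, 1e-6, None)

# Example with simulated RTs (seconds): whole-character trials vs. single-component trials
rng = np.random.default_rng(0)
grid = np.linspace(0.4, 1.2, 40)
c_t = capacity_or(0.2 + rng.gamma(6, 0.1, 500),
                  0.2 + rng.gamma(5, 0.1, 500),
                  0.2 + rng.gamma(5, 0.1, 500), grid)
```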


Subject(s)
Pattern Recognition, Visual; Word Processing; Humans; Reading; Visual Perception
5.
Front Neuroergon ; 3: 1007673, 2022.
Article in English | MEDLINE | ID: mdl-38235464

ABSTRACT

Introduction: A well-designed brain-computer interface (BCI) can make accurate and reliable predictions of a user's state through the passive assessment of their brain activity; in turn, the BCI can inform an adaptive system (such as artificial intelligence, or AI) to intelligently and optimally aid the user and maximize human-machine team (HMT) performance. Various groupings of spectro-temporal neural features have been shown to predict the same underlying cognitive state (e.g., workload) but vary in how well they generalize across contexts, experimental manipulations, and beyond a single session. In our work, we address an outstanding challenge in neuroergonomic research: we quantify whether (and how) identified neural features and a chosen modeling approach generalize to various manipulations defined by the same underlying psychological construct, (multi)task cognitive workload. Methods: To do this, we train and test 20 different support vector machine (SVM) models, each given a subset of neural features as recommended by previous research or matching the capabilities of commercial devices. We compute each model's accuracy in predicting which (monitoring, communications, tracking) and how many (one, two, or three) task(s) were completed simultaneously. Additionally, we investigate model accuracy in predicting task(s) within- vs. between-sessions, all at the individual level. Results: Our results indicate that gamma activity across all recording locations consistently outperformed all other subsets of the full model. Our work demonstrates that modelers must consider multiple types of manipulations, each of which may influence a common underlying psychological construct. Discussion: We offer a novel and practical modeling solution for system designers to predict task through brain activity and suggest next steps for expanding our framework to further contribute to research and development in the neuroergonomics community. Further, we quantified the cost in model accuracy should one choose to deploy our BCI approach using mobile EEG systems with fewer electrodes, a practical recommendation from our work.
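
The study's exact feature sets, channels, and pipeline are not reproduced here; the sketch below only illustrates the general approach of training a subject-level SVM on spectral (band-power) features to classify which task configuration was performed. The band edges, sampling rate, channel count, and classifier settings are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(epochs, fs=256):
    """Mean log band power per channel for each epoch; epochs is (n_epochs, n_channels, n_samples)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs)
    feats = [np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1))
             for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=-1)   # (n_epochs, n_channels * n_bands)

# Illustrative within-session evaluation: random data stands in for EEG epochs and task labels
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 8, 512))   # 120 epochs, 8 channels, 2 s at 256 Hz
labels = rng.integers(0, 3, size=120)         # e.g., monitoring / communications / tracking
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, band_power_features(epochs), labels, cv=5)
print(scores.mean())
```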

6.
Hum Factors ; 63(5): 833-853, 2021 08.
Article in English | MEDLINE | ID: mdl-33030381

ABSTRACT

OBJECTIVE: We propose and demonstrate a theory-driven, quantitative, individual-level estimate of the degree to which cognitive processes are degraded or enhanced when multiple tasks are completed simultaneously. BACKGROUND: To evaluate multitasking, we used a performance-based cognitive model to predict efficient performance. The model controls for single-task performance at the individual level and does not depend on parametric assumptions, such as normality, which do not apply to many performance evaluations. METHODS: Twenty participants attempted to maintain their isolated task performance in combination across three dual-task scenarios and one triple-task scenario. We utilized a computational model of multiple resource theory to form hypotheses about how performance in each environment would compare with the other multitask contexts. We assessed whether and to what extent multitask performance diverged from the model of efficient multitasking in each combination of tasks across multiple sessions. RESULTS: Across the two sessions, we found variable individual task performance but consistent patterns of multitask efficiency, such that deficits were evident in all task combinations. All participants exhibited decrements in the triple-task condition. CONCLUSIONS: We demonstrate a modeling framework that characterizes multitasking efficiency with a single score. Because it controls for single-task differences and makes no parametric assumptions, the measure enables researchers and system designers to directly compare efficiency across various individuals and complex situations. APPLICATION: Multitask efficiency scores offer practical implications for the design of adaptive automation and training regimes. Furthermore, a system may be tailored for individuals or may suggest task combinations that support productivity and minimize performance costs.


Subject(s)
Task Performance and Analysis; Humans
7.
Atten Percept Psychophys ; 82(7): 3340-3356, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32557004

ABSTRACT

Despite the increasing focus on target prevalence in visual search research, few papers have thoroughly examined how the way target prevalence is communicated affects search. Findings in the judgment and decision-making literature have demonstrated that people behave differently depending on whether probabilistic information is made explicit or learned through experience, so there is potential for a similar difference when communicating prevalence in visual search. Our current research examined how visual search changes depending on whether target prevalence information was explicitly given to observers or learned through experience, with additional manipulations of target reward and salience. We found that when target prevalence was low, learning prevalence from experience resulted in more target-present responses and longer search times before quitting compared with when observers were explicitly informed of the target probability. The discrepancy narrowed with increased prevalence and reversed in the high-prevalence condition. Eye-tracking results indicated that search with experience consistently produced longer fixation durations, with the largest difference in the low-prevalence conditions. Longer search times were primarily due to observers revisiting more items. Our work highlights the importance of examining how prevalence is communicated in future visual search studies of target prevalence.


Subject(s)
Judgment; Learning; Humans; Prevalence; Probability; Visual Perception
8.
Atten Percept Psychophys ; 82(2): 426-456, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32133598

ABSTRACT

The mechanisms guiding visual attention are of great interest within cognitive and perceptual psychology. Many researchers have proposed models of these mechanisms, which serve to both formalize their theories and to guide further empirical investigations. The assumption that a number of basic features are processed in parallel early in the attentional process is common among most models of visual attention and visual search. To date, much of the evidence for parallel processing has been limited to set-size manipulations. Unfortunately, set-size manipulations have been shown to be insufficient evidence for parallel processing. We applied Systems Factorial Technology, a general nonparametric framework, to test this assumption, specifically whether color and shape are processed in parallel or in serial, in three experiments representative of feature search, conjunctive search, and odd-one-out search, respectively. Our results provide strong evidence that color and shape information guides search through parallel processes. Furthermore, we found evidence for facilitation between color and shape when the target was known in advance but performance consistent with unlimited capacity, independent parallel processing in odd-one-out search. These results confirm core assumptions about color and shape feature processing instantiated in most models of visual search and provide more detailed clues about the manner in which color and shape information is combined to guide search.


Subject(s)
Color Perception/physiology; Form Perception/physiology; Photic Stimulation/methods; Reaction Time/physiology; Adult; Attention/physiology; Humans; Male; Pattern Recognition, Visual/physiology; Random Allocation; Young Adult
10.
Behav Res Methods ; 51(3): 1179-1186, 2019 06.
Article in English | MEDLINE | ID: mdl-29845553

ABSTRACT

A key question in the field of scene perception is what information people use when making decisions about images of scenes. A significant body of evidence has indicated the importance of global properties of a scene image. Ideally, well-controlled, real-world images would be used to examine the influence of these properties on perception. Unfortunately, real-world images are generally complex and impractical to control. In the current research, we elicited ratings of naturalness and openness from a large number of subjects using Amazon Mechanical Turk. Subjects were asked to indicate which of a randomly chosen pair of scene images was more representative of a global property. A score and rank for each image were then estimated from those comparisons using the Bradley-Terry-Luce model. These ranked images offer the opportunity to exercise control over global scene properties in a stimulus set drawn from complex real-world images. This will allow a deeper exploration of the relationship between global scene properties and behavioral and neural responses.
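
The paper's exact estimation procedure is not given in the abstract; the sketch below shows one standard way to recover Bradley-Terry-Luce scores from pairwise "which image is more natural/open?" choices, using the classic minorization-maximization (Zermelo) updates. The wins matrix and variable names are illustrative.

```python
import numpy as np

def btl_scores(wins, n_iter=200):
    """Bradley-Terry-Luce scores from a pairwise wins matrix.

    wins[i, j] = number of times image i was chosen over image j.
    Uses the standard MM (Zermelo) update; scores are normalized to sum to 1.
    """
    n = wins.shape[0]
    comparisons = wins + wins.T                 # n_ij: total comparisons between i and j
    total_wins = wins.sum(axis=1)
    p = np.ones(n)
    for _ in range(n_iter):
        denom = comparisons / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)            # an image is never compared with itself
        p = total_wins / denom.sum(axis=1)
        p = p / p.sum()
    return p

# Example: 4 images, simulated pairwise choices
wins = np.array([[0, 8, 9, 7],
                 [2, 0, 6, 5],
                 [1, 4, 0, 6],
                 [3, 5, 4, 0]])
scores = btl_scores(wins)
ranking = np.argsort(-scores)                   # most "natural"/"open" image first
```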


Subject(s)
Visual Perception/physiology; Pattern Recognition, Visual/physiology
11.
Top Cogn Sci ; 11(1): 261-276, 2019 01.
Article in English | MEDLINE | ID: mdl-30592180

ABSTRACT

The time-based resource-sharing (TBRS) model envisions working memory as a rapidly switching, serial, attentional refreshing mechanism. Executive attention trades its time between rebuilding decaying memory traces and processing extraneous activity. To thoroughly investigate the implications of the TBRS theory, we integrated TBRS within the ACT-R cognitive architecture, which allowed us to test the TBRS model against both participant accuracy and response time data in a dual task environment. In the current work, we extend the model to include articulatory rehearsal, which has been argued in the literature to be a separate mechanism from attentional refreshing. Additionally, we use the model to predict performance under a larger range of cognitive load (CL) than typically administered to human subjects. Our simulations support the hypothesis that working memory capacity is a linear function of CL and suggest that this effect is less pronounced when articulatory rehearsal is available.
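
The ACT-R integration itself is not sketched here; the toy simulation below only illustrates the core TBRS intuition that traces decay while attention is captured by concurrent processing and are refreshed one at a time during free time, so that maintainable span falls as cognitive load (the proportion of time occupied by processing) rises. All parameter values, the threshold rule, and the round-robin refreshing scheme are invented for illustration, not taken from the model in the paper.

```python
import numpy as np

def simulated_span(cognitive_load, total_time=10.0, dt=0.01,
                   decay=0.3, refresh=1.5, threshold=0.1, start=0.5, max_items=8):
    """Toy decay-and-refresh simulation in the spirit of TBRS (not the ACT-R model itself).

    Within each 1-s cycle, a fraction `cognitive_load` is occupied by distractor processing
    (all traces decay); during the remaining free time one trace at a time is refreshed
    (round-robin). Span = largest set size whose weakest trace never falls below threshold.
    """
    steps_per_cycle = int(round(1.0 / dt))
    for n_items in range(max_items, 0, -1):
        act = np.full(n_items, start)
        focus, ok = 0, True
        for step in range(int(total_time / dt)):
            act -= decay * dt                                   # everything decays
            if (step % steps_per_cycle) * dt >= cognitive_load:  # free time in this cycle
                act[focus] = min(1.0, act[focus] + refresh * dt)
                focus = (focus + 1) % n_items                   # rapid, switching refreshment
            if act.min() < threshold:
                ok = False
                break
        if ok:
            return n_items
    return 0

# Simulated span declines roughly linearly as cognitive load increases
for cl in (0.2, 0.4, 0.6, 0.8):
    print(cl, simulated_span(cl))
```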


Subject(s)
Attention/physiology; Executive Function/physiology; Memory, Short-Term/physiology; Models, Theoretical; Pattern Recognition, Visual/physiology; Humans
12.
Vision Res ; 148: 49-58, 2018 07.
Article in English | MEDLINE | ID: mdl-29678536

ABSTRACT

Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences.
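
As background for the efficiency construct (the Bayesian hierarchical modeling details are in the paper), one common definition compares human and ideal sensitivity on the same stimuli; the specific efficiency metric used in the paper may differ, so the form below is only the textbook version.

```latex
% Efficiency as the squared ratio of human to ideal observer sensitivity (d') at matched stimulus levels
F \;=\; \left(\frac{d'_{\text{human}}}{d'_{\text{ideal}}}\right)^{2},
\qquad 0 \le F \le 1 .
```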


Subject(s)
Bayes Theorem; Logistic Models; Sensory Thresholds/physiology; Visual Perception; Humans; Linear Models; Psychometrics/methods
13.
Behav Res Methods ; 50(5): 2074-2096, 2018 10.
Article in English | MEDLINE | ID: mdl-29076106

ABSTRACT

The first stage of analyzing eye-tracking data is commonly to code the data into sequences of fixations and saccades. This process is usually automated using simple, predetermined rules for classifying ranges of the time series into events, such as "if the dispersion of gaze samples is lower than a particular threshold, then code as a fixation; otherwise code as a saccade." More recent approaches incorporate additional eye-movement categories into automated parsing algorithms by using time-varying, data-driven thresholds. We describe an alternative approach using the beta-process vector auto-regressive hidden Markov model (BP-AR-HMM). The BP-AR-HMM offers two main advantages over existing frameworks. First, it provides a statistical model for eye-movement classification rather than a single estimate. Second, the BP-AR-HMM uses a latent process to model the number and nature of the types of eye movements and hence is not constrained to predetermined categories. We applied the BP-AR-HMM both to high-sampling-rate gaze data from Andersson et al. (Behavior Research Methods, 49(2), 1-22, 2016) and to low-sampling-rate data from the DIEM project (Mital et al., Cognitive Computation, 3(1), 5-24, 2011). Driven by the data properties, the BP-AR-HMM identified more than five categories of movements, some of which clearly mapped onto fixations and saccades, while others potentially captured post-saccadic oscillations, smooth pursuit, and various recording errors. The BP-AR-HMM serves as an effective algorithm for data-driven event parsing on its own or as an initial step in exploring the characteristics of gaze data sets.
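
The BP-AR-HMM itself is too involved to sketch here, but the simple dispersion-threshold rule that the abstract contrasts it with fits in a few lines. This is a simplified I-DT-style baseline, not the algorithm from the paper; the thresholds, sampling rate, and window scheme are illustrative assumptions.

```python
import numpy as np

def classify_fixations(x, y, fs=60.0, max_dispersion=1.0, min_duration=0.1):
    """Label gaze samples 'fixation' or 'saccade' with a simple dispersion rule (I-DT style)."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    def dispersion(a, b):
        # x-range + y-range of the window [a, b), e.g. in degrees of visual angle
        return (x[a:b].max() - x[a:b].min()) + (y[a:b].max() - y[a:b].min())

    labels = np.array(["saccade"] * len(x), dtype=object)
    min_len = max(int(min_duration * fs), 2)
    i = 0
    while i + min_len <= len(x):
        j = i + min_len
        if dispersion(i, j) <= max_dispersion:
            # grow the window while it stays compact, then mark it as one fixation
            while j < len(x) and dispersion(i, j + 1) <= max_dispersion:
                j += 1
            labels[i:j] = "fixation"
            i = j
        else:
            i += 1   # slide past samples that never settle into a fixation
    return labels

# Example on synthetic gaze: two stable periods separated by a large, fast shift
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 60)
gx = np.where(t < 1, 0.0, 8.0) + rng.normal(0, 0.1, t.size)
gy = rng.normal(0, 0.1, t.size)
print(classify_fixations(gx, gy))
```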


Subject(s)
Algorithms; Data Collection; Eye Movements; Markov Chains; Data Visualization; Humans
14.
Psychol Methods ; 22(2): 288-303, 2017 06.
Article in English | MEDLINE | ID: mdl-28594226

ABSTRACT

The question of cognitive architecture (how cognitive processes are temporally organized) has arisen in many areas of psychology. This question has proved difficult to answer, with many proposed solutions turning out to be spurious. Systems factorial technology (Townsend & Nozawa, 1995) provided the first rigorous empirical and analytical method of identifying cognitive architecture, using the survivor interaction contrast (SIC) to determine when people are using multiple sources of information in parallel or in series. Although the SIC is based on rigorous nonparametric mathematical modeling of response time distributions, for many years inference about cognitive architecture has relied solely on visual assessment. Houpt and Townsend (2012) recently introduced null hypothesis significance tests, and here we develop both parametric and nonparametric (encompassing prior) Bayesian inference. We show that the Bayesian approaches can have considerable advantages.
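
For readers unfamiliar with the statistic, the SIC is defined from the survivor functions S(t) of response times in a double-factorial design, where each of two channels is selectively slowed (L, low salience) or sped up (H, high salience):

```latex
\mathrm{SIC}(t) \;=\; \bigl[S_{LL}(t) - S_{LH}(t)\bigr] \;-\; \bigl[S_{HL}(t) - S_{HH}(t)\bigr]
```

The qualitative signatures from Townsend and Nozawa (1995): an SIC that is positive for all t indicates parallel first-terminating processing; entirely negative indicates parallel exhaustive processing; serial first-terminating processing gives SIC(t) = 0 everywhere; serial exhaustive processing produces a negative-then-positive curve that integrates to zero; and coactivation produces a small early negative region followed by a larger positive one.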


Subject(s)
Bayes Theorem; Cognition; Models, Theoretical; Humans; Reaction Time
15.
Behav Brain Sci ; 40: e145, 2017 01.
Article in English | MEDLINE | ID: mdl-29342608

ABSTRACT

Much of the evidence for theories in visual search (including Hulleman & Olivers' [H&O's]) comes from inferences made using changes in mean RT as a function of the number of items in a display. We have known for more than 40 years that these inferences are based on flawed reasoning and obscured by model mimicry. Here we describe a method that avoids these problems.


Subject(s)
Attention; Reaction Time
16.
Behav Res Methods ; 49(4): 1261-1277, 2017 08.
Article in English | MEDLINE | ID: mdl-27503304

ABSTRACT

The extent to which distracting information influences decisions can be informative about the nature of the underlying cognitive and perceptual processes. A recent paper introduced a response-time-based measure, termed resilience, for quantifying the degree of interference (or facilitation) from distracting information. Although the measure is statistical, the original analysis was limited to qualitative comparisons between different model predictions. In this paper, we demonstrate how statistical procedures from workload capacity analysis can be applied to the new resilience functions. In particular, we present an approach to null-hypothesis testing of resilience functions and a method based on functional principal components analysis for analyzing differences in the functional form of the resilience functions across participants and conditions.
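
The resilience function itself is not defined in the abstract and is not reproduced here; the sketch below only illustrates the second ingredient, a generic discretized functional PCA over per-participant functions of time (resilience, capacity, or similar curves sampled on a common grid). The paper's implementation (basis smoothing, alignment, inference on the scores) may differ, and the simulated curves are purely illustrative.

```python
import numpy as np

def functional_pca(curves, n_components=2):
    """Discretized functional PCA over curves sampled on a common time grid.

    curves: array (n_subjects, n_timepoints), e.g. per-participant resilience functions.
    Returns the mean curve, principal component functions, per-subject scores,
    and the proportion of variance explained by each retained component.
    """
    curves = np.asarray(curves, float)
    mean_curve = curves.mean(axis=0)
    centered = curves - mean_curve
    # SVD of the centered data matrix: rows of vt are the eigenfunctions
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    scores = centered @ components.T          # one score per subject per component
    explained = (s ** 2 / (s ** 2).sum())[:n_components]
    return mean_curve, components, scores, explained

# Example: 20 simulated participants whose curves differ mainly in overall level and slope
rng = np.random.default_rng(0)
grid = np.linspace(0.3, 1.2, 40)
curves = 1.0 + rng.normal(0, 0.2, (20, 1)) + rng.normal(0, 0.1, (20, 1)) * (grid - grid.mean())
mean_curve, pcs, scores, var_explained = functional_pca(curves)
```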


Subject(s)
Behavioral Research/methods; Principal Component Analysis; Resilience, Psychological; Humans; Male; Reaction Time
18.
Cogn Res Princ Implic ; 1(1): 31, 2016.
Article in English | MEDLINE | ID: mdl-28180181

ABSTRACT

Multi-spectral imagery can enhance decision-making by supplying multiple complementary sources of information. However, overloading an observer with information can hinder decision-making. Hence, it is critical to assess multi-spectral image displays using human performance. Accuracy and response times (RTs) are fundamental for assessment, although without sophisticated empirical designs they offer little information about why performance is better or worse. Systems factorial technology (SFT) is a framework for study design and analysis that examines observers' processing mechanisms, not just overall performance. In the current work, we use SFT to compare a display presenting two sensor images side by side with a display presenting a single composite image. In our first experiment, the SFT results indicated that both display approaches suffered from limited workload capacity, more so for the composite imagery. In the second experiment, we examined the change in observer performance over the course of multiple days of practice. Participants' accuracy and RTs improved with training, but their capacity limitations were unaffected. Using SFT, we found that the capacity limitation was not due to an inefficient serial examination of the imagery by the participants. There are two clear implications of these results: observers are less efficient with multi-spectral images than with single images, and the side-by-side display of source images is a viable alternative to composite imagery. SFT was necessary for these conclusions because it provided an appropriate mechanism for comparing single-source images to multi-spectral images and because it ruled out serial processing as the source of the capacity limitation.

19.
Vision Res ; 126: 19-33, 2016 09.
Article in English | MEDLINE | ID: mdl-25986994

ABSTRACT

While there is widespread agreement among vision researchers on the importance of some local aspects of visual stimuli, such as hue and intensity, there is no general consensus on a full set of basic sources of information used in perceptual tasks or how they are processed. Gestalt theories place particular value on emergent features, which are based on the higher-order relationships among elements of a stimulus rather than local properties. Thus, arbitrating between different accounts of features is an important step in arbitrating between local and Gestalt theories of perception in general. In this paper, we present the capacity coefficient from Systems Factorial Technology (SFT) as a quantitative approach for formalizing and rigorously testing predictions made by local and Gestalt theories of features. As a simple, easily controlled domain for testing this approach, we focus on the local feature of location and the emergent features of Orientation and Proximity in a pair of dots. We introduce a redundant-target change detection task to compare our capacity measure on (1) trials where the configuration of the dots changed along with their location against (2) trials where the amount of local location change was exactly the same, but there was no change in the configuration. Our results, in conjunction with our modeling tools, favor the Gestalt account of emergent features. We conclude by suggesting several candidate information-processing models that incorporate emergent features, which follow from our approach.


Subject(s)
Gestalt Theory; Pattern Recognition, Visual/physiology; Visual Perception/physiology; Analysis of Variance; Attention/physiology; Humans; Models, Psychological; Perceptual Masking; Photic Stimulation; Reaction Time
20.
Front Psychol ; 6: 594, 2015.
Article in English | MEDLINE | ID: mdl-26074828

ABSTRACT

Working memory capacity (WMC) is typically measured by the amount of task-relevant information an individual can keep in mind while resisting distraction or interference from task-irrelevant information. The current research investigated the extent to which differences in WMC were associated with performance on a novel redundant memory probes (RMP) task that systematically varied the amount of to-be-remembered (targets) and to-be-ignored (distractor) information. The RMP task was designed to both facilitate and inhibit working memory search processes, as evidenced by differences in accuracy, response time, and Linear Ballistic Accumulator (LBA) model estimates of information processing efficiency. Participants (N = 170) completed standard intelligence tests and dual-span WMC tasks, along with the RMP task. As expected, accuracy, response-time, and LBA model results indicated memory search and retrieval processes were facilitated under redundant-target conditions, but also inhibited under mixed target/distractor and redundant-distractor conditions. Repeated measures analyses also indicated that, while individuals classified as high (n = 85) and low (n = 85) WMC did not differ in the magnitude of redundancy effects, groups did differ in the efficiency of memory search and retrieval processes overall. Results suggest that redundant information reliably facilitates and inhibits the efficiency or speed of working memory search, and these effects are independent of more general limits and individual differences in the capacity or space of working memory.
