Results 1 - 18 of 18
1.
Brain Sci ; 12(8)2022 Aug 17.
Article in English | MEDLINE | ID: mdl-36009157

ABSTRACT

Task fMRI provides an opportunity to analyze the working mechanisms of the human brain during specific experimental paradigms. Deep learning models have increasingly been applied to decode and encode representations in task fMRI data. More recently, graph neural networks, neural network models designed to leverage the properties of graph representations, have shown promise in task fMRI decoding studies. Here, we propose an end-to-end graph convolutional network (GCN) framework with three convolutional layers to classify task fMRI data from the Human Connectome Project dataset. We compared the predictive performance of our GCN model across four of the most widely used node embedding algorithms (NetMF, RandNE, Node2Vec, and Walklets), which automatically extract the structural properties of the nodes in the functional graph. The empirical results indicated that our GCN framework accurately predicted individual differences, with accuracies of 0.978 and 0.976 for the NetMF and RandNE embedding methods, respectively. Furthermore, to assess the effects of individual differences, we tested the classification performance of the model on sub-datasets divided according to gender and fluid intelligence. Experimental results indicated significant differences in classification performance for gender, but not for high versus low fluid intelligence. Our experiments yielded promising results and demonstrated the superior ability of our GCN to model task fMRI data.
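The abstract names the architecture but not its implementation; below is a minimal sketch, in plain PyTorch, of a three-layer GCN of the general kind described. The layer arithmetic, pooling, dimensions, and the 360-node parcellation in the usage lines are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a three-layer GCN classifier over a functional brain graph.
# Layer structure, pooling, and tensor shapes are assumptions, not the authors' code.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x, adj):
        # x: (N, F) node features (e.g., NetMF/RandNE/Node2Vec/Walklets embeddings)
        # adj: (N, N) dense functional adjacency; add self-loops, then normalize
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(a_norm @ x))

class GCNClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_task_classes):
        super().__init__()
        self.layers = nn.ModuleList([GCNLayer(in_dim, hidden_dim),
                                     GCNLayer(hidden_dim, hidden_dim),
                                     GCNLayer(hidden_dim, hidden_dim)])
        self.readout = nn.Linear(hidden_dim, n_task_classes)

    def forward(self, x, adj):
        for layer in self.layers:
            x = layer(x, adj)
        return self.readout(x.mean(dim=0))  # mean-pool nodes, then classify the task

model = GCNClassifier(in_dim=128, hidden_dim=64, n_task_classes=7)
logits = model(torch.randn(360, 128), torch.rand(360, 360))  # 360-node parcellation (illustrative)
```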

2.
Hum Factors ; 64(4): 694-713, 2022 06.
Article in English | MEDLINE | ID: mdl-32678682

ABSTRACT

OBJECTIVE: The aim of this study is to describe information acquisition theory, explaining how drivers acquire and represent the information they need. BACKGROUND: While questions of what drivers are aware of underlie many questions in driver behavior, existing theories do not directly address how drivers in particular and observers in general acquire visual information. Understanding the mechanisms of information acquisition is necessary to build predictive models of drivers' representation of the world and can be applied beyond driving to a wide variety of visual tasks. METHOD: We describe our theory of information acquisition, looking to questions in driver behavior and results from vision science research that speak to its constituent elements. We focus on the intersection of peripheral vision, visual attention, and eye movement planning and identify how an understanding of these visual mechanisms and processes in the context of information acquisition can inform more complete models of driver knowledge and state. RESULTS: We set forth our theory of information acquisition, describing the gap in understanding that it fills and how existing questions in this space can be better understood using it. CONCLUSION: Information acquisition theory provides a new and powerful way to study, model, and predict what drivers know about the world, reflecting our current understanding of visual mechanisms and enabling new theories, models, and applications. APPLICATION: Using information acquisition theory to understand how drivers acquire, lose, and update their representation of the environment will aid development of driver assistance systems, semiautonomous vehicles, and road safety overall.


Subject(s)
Automobile Driving , Accidents, Traffic/prevention & control , Eye Movements , Humans
3.
Article in English | MEDLINE | ID: mdl-33922924

ABSTRACT

The COVID-19 pandemic has changed our lifestyles, habits, and daily routines. Some of the impacts of COVID-19 have already been widely reported. However, many effects of the pandemic are still to be discovered. The main objective of this study was to assess changes in the frequency of physical back pain complaints reported during the COVID-19 pandemic. In contrast to other published studies, we target the general population using Twitter as a data source. Specifically, we aim to investigate differences in the number of back pain complaints between the pre-pandemic period and the pandemic itself. A total of 53,234 and 78,559 tweets were analyzed for November 2019 and November 2020, respectively. Because Twitter users do not always complain explicitly when they tweet about the experience of back pain, we designed an intelligent filter based on natural language processing (NLP) to automatically classify the examined tweets into back pain complaints and other tweets. Analysis of the filtered tweets indicated an 84% increase in back pain complaints reported in November 2020 compared to November 2019. These results might indicate significant changes in lifestyle during the COVID-19 pandemic, including restrictions in daily body movements and reduced exposure to routine physical exercise.
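The abstract does not specify how the NLP filter works; as a hedged illustration of one common approach, the sketch below trains a supervised text classifier to separate explicit back pain complaints from other back-related tweets. The pipeline, labels, and example tweets are assumptions, not the authors' system.

```python
# Sketch of a back-pain-complaint filter as a supervised text classifier.
# The labeled examples and pipeline choices are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "my lower back is killing me after sitting all day",   # complaint
    "back pain awareness week starts tomorrow",            # not a complaint
]
train_labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

new_tweets = ["wfh desk setup is wrecking my back"]
print(clf.predict(new_tweets))  # 1 = back pain complaint, 0 = other
```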


Subject(s)
COVID-19 , Social Media , Back Pain/epidemiology , Humans , Natural Language Processing , Pandemics , SARS-CoV-2 , United States/epidemiology
4.
Hum Factors ; 63(4): 706-726, 2021 06.
Article in English | MEDLINE | ID: mdl-32091937

ABSTRACT

OBJECTIVE: The objective of this meta-analysis is to explore the presently available, empirical findings on transfer of training from virtual (VR), augmented (AR), and mixed reality (MR) and determine whether such extended reality (XR)-based training is as effective as traditional training methods. BACKGROUND: MR, VR, and AR have already been used as training tools in a variety of domains. However, the question of whether or not these manipulations are effective for training has not been quantitatively and conclusively answered. Evidence shows that, while extended realities can often be time-saving and cost-saving training mechanisms, their efficacy as training tools has been debated. METHOD: The current body of literature was examined and all qualifying articles pertaining to transfer of training from MR, VR, and AR were included in the meta-analysis. Effect sizes were calculated to determine the effects that XR-based factors, trainee-based factors, and task-based factors had on performance measures after XR-based training. RESULTS: Results showed that training in XR does not produce a different outcome than training in a nonsimulated, control environment; it is equally effective at enhancing performance. CONCLUSION: Across numerous studies in multiple fields, extended realities are as effective a training mechanism as the commonly accepted methods. The value of XR then lies in providing training in circumstances that exclude traditional methods, such as situations in which danger or cost may make traditional training impossible.
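The meta-analysis turns each qualifying study into a standardized effect size; a common choice for comparing an XR-trained group with a traditionally trained control is Hedges' g. The snippet below is a generic illustration with invented summary statistics, not the paper's data.

```python
# Generic standardized mean difference (Hedges' g) between an XR-trained
# group and a control group; the summary statistics here are invented.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample bias correction
    return d * correction

# Example: XR group mean 82 (SD 10, n 30) vs. control mean 80 (SD 11, n 30)
print(round(hedges_g(82, 10, 30, 80, 11, 30), 3))
```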


Subject(s)
Augmented Reality , Virtual Reality , Humans
5.
Ergonomics ; 63(7): 864-883, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32425139

ABSTRACT

Modern digital interfaces display type in ways new to the 500-year-old art of typography, driving a shift in reading from primarily long-form to increasingly short-form. In safety-critical settings, such at-a-glance reading competes with the need to understand the environment. To keep both the type and the environment legible, a variety of 'middle layer' approaches are employed. But what is the best approach to presenting type over complex backgrounds so as to preserve legibility? This work tests and ranks middle layers in three studies. In the first study, Gaussian blur and semi-transparent 'scrim' middle layer techniques best maximise legibility. In the second, an optimal combination of the two is identified. In the third, letter-localised middle layers are tested, with results favouring drop-shadows. These results, discussed in the context of mixed reality (MR) including overlays, virtual reality (VR), and augmented reality (AR), point to a future in which glanceable reading amidst complex backgrounds is common. Practitioner summary: Typography over complex backgrounds, meant to be read and understood at a glance, was once niche but today is a growing design challenge for graphical user interface HCI. We provide a technique, evidence-based strategies, and illuminating results for maximising legibility of glanceable typography over complex backgrounds. Abbreviations: AR: augmented reality; VR: virtual reality; HUD: head-up display; OLED: organic light-emitting diode; UX: user experience; ms: millisecond; cm: centimeter.
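As an illustration of the two best-performing middle-layer techniques, the sketch below blurs the background region behind text and composites a semi-transparent dark 'scrim' over it with Pillow. The file name, blur radius, and scrim opacity are placeholders, not the parameters tested in the studies.

```python
# Illustrative blur + scrim middle layer behind text, using Pillow.
# The blur radius and scrim opacity are arbitrary, not the study's values.
from PIL import Image, ImageFilter

background = Image.open("scene.jpg").convert("RGBA")  # hypothetical source image

# 1. Gaussian blur the region where text will be drawn
region = (100, 100, 500, 200)
patch = background.crop(region).filter(ImageFilter.GaussianBlur(radius=8))

# 2. Overlay a semi-transparent black scrim on the blurred patch
scrim = Image.new("RGBA", patch.size, (0, 0, 0, 96))  # ~38% opacity
patch = Image.alpha_composite(patch, scrim)

background.paste(patch, region[:2])
background.save("scene_with_middle_layer.png")
```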


Subject(s)
Data Display , Form Perception , Reading , User-Computer Interface , Adult , Aged , Ergonomics , Female , Humans , Male , Middle Aged
6.
Ergonomics ; 63(4): 391-398, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32089101

ABSTRACT

Typography plays an increasingly important role in today's dynamic digital interfaces. Graphic designers and interface engineers have more typographic options than ever before. Sorting through this maze of design choices can be a daunting task. Here we present the results of an experiment comparing differences in glance-based legibility between eight popular sans-serif typefaces. The results show typography to be more than a matter of taste, especially in safety-critical contexts such as in-vehicle interfaces. Our work provides both a method and rationale for using glanceable typefaces, as well as actionable information to guide design decisions for optimised usability in the fast-paced mobile world in which information is increasingly consumed in a few short glances. Practitioner summary: There is presently no accepted scientific method for comparing font legibility under time pressure in 'glanceable' interfaces such as automotive displays and smartphone notifications. A 'bake-off' method is demonstrated with eight popular sans-serif typefaces. The results produce actionable information to guide design decisions when information must be consumed at a glance. Abbreviations: DOT: Department of Transportation; FAA: Federal Aviation Administration; GHz: gigahertz; Hz: hertz; IEC: International Electrotechnical Commission; ISO: International Organization for Standardization; LCD: liquid crystal display; MIT: Massachusetts Institute of Technology; ms: milliseconds; OS: operating system.


Subject(s)
Automobile Driving , Form Perception , Reading , Adult , Aged , Data Display , Female , Humans , Male , Middle Aged
7.
Atten Percept Psychophys ; 81(8): 2798-2813, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31222659

ABSTRACT

Drivers rarely focus exclusively on driving, even with the best of intentions. They are distracted by passengers, navigation systems, smartphones, and driver assistance systems. Driving itself requires performing simultaneous tasks, including lane keeping, looking for signs, and avoiding pedestrians. The dangers of multitasking while driving, and efforts to combat it, often focus on the distraction itself, rather than on how a distracting task can change what the driver can perceive. Critically, some distracting tasks require the driver to look away from the road, which forces the driver to use peripheral vision to detect driving-relevant events. As a consequence, both looking away and being distracted may degrade driving performance. To assess the relative contributions of these factors, we conducted a laboratory experiment in which we separately varied cognitive load and point of gaze. Subjects performed a visual 0-back or 1-back task at one of four fixation locations superimposed on a real-world driving video, while simultaneously monitoring for brake lights in their lane of travel. Subjects were able to detect brake lights in all conditions, but once the eccentricity of the brake lights increased, they responded more slowly and missed more braking events. However, our cognitive load manipulation had minimal effects on detection performance, reaction times, or miss rates for brake lights. These results suggest that, for tasks that require the driver to look off-road, the decrements observed may be due to the need to use peripheral vision to monitor the road, rather than due to the distraction itself.
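A minimal sketch of the kind of aggregation the results imply: summarizing reaction time and miss rate by brake-light eccentricity and cognitive load with pandas. The column names and values are hypothetical, not the study's data.

```python
# Hypothetical trial-level data: reaction time (s) and hit/miss for brake
# lights at different eccentricities, split by cognitive load condition.
import pandas as pd

trials = pd.DataFrame({
    "eccentricity_deg": [0, 0, 10, 10, 20, 20],
    "load": ["0-back", "1-back", "0-back", "1-back", "0-back", "1-back"],
    "rt_s": [0.62, 0.64, 0.81, 0.85, None, 1.10],  # None = missed event
    "hit": [1, 1, 1, 1, 0, 1],
})

summary = trials.groupby(["eccentricity_deg", "load"]).agg(
    mean_rt=("rt_s", "mean"),
    miss_rate=("hit", lambda h: 1 - h.mean()),
)
print(summary)
```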


Subject(s)
Attention/physiology , Automobile Driving , Cognition/physiology , Visual Fields/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Reaction Time , Young Adult
8.
Article in English | MEDLINE | ID: mdl-31096546

ABSTRACT

The present study investigated the risk-taking behaviors of angry drivers, which were concurrently measured via behavioral and electroencephalographic (EEG) recordings. We manipulated a driving scenario that involved a Go/No-Go decision at an intersection when the controlling traffic light was in its yellow phase. This protocol was based upon the underlying format of the Iowa gambling task. Variation in anger level was induced through task frustration. The data of twenty-four drivers were analyzed via behavioral and neural recordings, and the P300 was specifically extracted from the EEG traces. In addition, behavioral performance was indexed by the number of high-risk choices minus the number of low-risk choices taken, which identified the risk-taking propensity. Results confirmed a significant main effect of anger on the decisions taken. The risk-taking propensity decreased across the sequence of trial blocks in baseline assessments. However, with anger, the risk-taking propensity increased across the trial regimen. Drivers in the anger state also showed a higher mean P300 amplitude than in the baseline state. Additionally, high-risk choices evoked a larger P300 amplitude than low-risk choices during the anger state. Moreover, the P300 amplitude for high-risk choices was significantly larger in the anger state than in the baseline state. Negative feedback induced a larger P300 amplitude than positive feedback. The results corroborated that drivers exhibited a higher risk-taking propensity when angry, although they were sensitive to the inherent risk-reward evaluations within the scenario. To reduce this type of risk-taking, we proposed some effective/affective intervention methods.
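A sketch of the two measures described: a risk-taking propensity index (high-risk minus low-risk choices) and a mean P300 amplitude in a fixed post-feedback window. The window, sampling rate, and data below are illustrative assumptions, not the study's recording parameters.

```python
# Sketch of the two measures: risk-taking propensity and P300 mean amplitude.
# The 300-500 ms window, sampling rate, and data are illustrative assumptions.
import numpy as np

def risk_propensity(choices):
    # choices: list of "go" (high-risk) / "stop" (low-risk) decisions per block
    high = sum(c == "go" for c in choices)
    low = sum(c == "stop" for c in choices)
    return high - low

def p300_mean_amplitude(epoch_uv, srate_hz=500, window_s=(0.30, 0.50)):
    # epoch_uv: 1-D EEG epoch in microvolts, time-locked to feedback onset
    start = int(window_s[0] * srate_hz)
    stop = int(window_s[1] * srate_hz)
    return float(np.mean(epoch_uv[start:stop]))

epoch = np.random.randn(500)  # 1 s of synthetic data at 500 Hz
print(risk_propensity(["go", "go", "stop", "go"]), p300_mean_amplitude(epoch))
```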


Subject(s)
Anger , Automobile Driving/psychology , Risk-Taking , Adolescent , Adult , Decision Making , Female , Humans , Male , Reward , Young Adult
9.
Games Health J ; 8(2): 112-120, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30964717

ABSTRACT

OBJECTIVE: To develop and test the feasibility and preliminary impact of the social card game prototype, One Night Stan, a theory-driven and evidence-based human immunodeficiency virus (HIV) prevention intervention for young black women. MATERIALS AND METHODS: The study enrolled 21 young, heterosexual black women (mean age 19) to test the feasibility and preliminary impact of the card game, using a pre/post design. Participant satisfaction and gameplay experience were assessed using quantitative and qualitative measures. Knowledge, self-efficacy, and intentions regarding condom use and HIV/sexually transmitted infection partner testing were assessed using standardized assessments. Effect sizes for the change in these outcome variables were calculated to determine the preliminary efficacy of the game. RESULTS: One hundred percent of participants reported that they would play the game again, 95% liked the way the game looked, 100% enjoyed playing the game, and 100% reported that they would tell their friends to play. Effect sizes were large (ranging from 0.21 to 0.51) for all variables except perceived susceptibility (0.07), and suggest that playing the game can lead to increased self-efficacy and intentions to use condoms and to insist that partners get tested for HIV across time. CONCLUSIONS: One Night Stan is a feasible intervention approach and may be efficacious in helping players develop a pattern of cognitions and motivation that can protect them against the risk of HIV.


Subject(s)
Black or African American , HIV Infections/prevention & control , Health Knowledge, Attitudes, Practice , Risk Reduction Behavior , Sexually Transmitted Diseases/prevention & control , Adult , Feasibility Studies , Female , Games, Recreational , HIV Infections/ethnology , Humans , Sexual Behavior , Sexually Transmitted Diseases/ethnology , Young Adult
10.
J Eye Mov Res ; 12(6)2019 Jun 01.
Article in English | MEDLINE | ID: mdl-33828752

ABSTRACT

Understanding our visual world requires both looking and seeing. Dissociation of these processes can result in the phenomenon of inattentional blindness, or 'looking without seeing'. Concomitant errors in applied settings can be serious, and even deadly. Current visual data analysis cannot differentiate between just 'looking' and actual processing of visual information, i.e., 'seeing'. Differentiation may be possible through the examination of microsaccades: the involuntary, small-magnitude saccadic eye movements that occur during visual fixation. Recent work has suggested that microsaccades are post-attentional biosignals, potentially modulated by task. Specifically, microsaccade rates decrease with increased mental task demand, and increase with growing visual task difficulty. Such findings imply that there are fundamental differences in microsaccadic activity between visual and nonvisual tasks. To evaluate this proposition, we used a high-speed eye tracker to record participants while they looked for differences between two images, performed mental arithmetic, or did both tasks in combination. Results showed that microsaccade rate significantly increased in conditions requiring high visual attention, and decreased in conditions requiring less visual attention. The results support microsaccade rate as a reflection of visual attention and the level of visual information processing. A measure that reflects to what extent, and how, an operator is processing visual information represents a critical step toward the application of sophisticated visual assessment to real-world tasks.
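A compact sketch of a median-velocity-threshold microsaccade detector (Engbert and Kliegl style), returning a microsaccade rate per second. The sampling rate, threshold multiplier, and gaze traces are placeholders, and the authors' actual detection pipeline may differ.

```python
# Velocity-threshold microsaccade detection sketch; parameters are placeholders.
import numpy as np

def microsaccade_rate(x_deg, y_deg, srate_hz=1000, lam=6.0, min_samples=6):
    # Velocities in deg/s from position traces in degrees
    vx = np.gradient(x_deg) * srate_hz
    vy = np.gradient(y_deg) * srate_hz
    # Median-based velocity thresholds, one per axis
    tx = lam * np.sqrt(max(np.median(vx**2) - np.median(vx)**2, 1e-12))
    ty = lam * np.sqrt(max(np.median(vy**2) - np.median(vy)**2, 1e-12))
    above = (vx / tx)**2 + (vy / ty)**2 > 1.0
    # Count runs of consecutive supra-threshold samples of sufficient duration
    padded = np.concatenate(([False], above, [False]))
    edges = np.diff(padded.astype(int))
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    n_events = int(sum((e - s) >= min_samples for s, e in zip(starts, ends)))
    return n_events / (len(x_deg) / srate_hz)  # microsaccades per second

x = np.cumsum(np.random.randn(5000)) * 1e-3  # 5 s of synthetic fixation data (deg)
y = np.cumsum(np.random.randn(5000)) * 1e-3
print(round(microsaccade_rate(x, y), 2))
```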

11.
Hum Factors ; 60(5): 597-609, 2018 08.
Article in English | MEDLINE | ID: mdl-29986155

ABSTRACT

OBJECTIVE: This work assesses the efficacy of the "prevalence effect" as a form of cyberattack in human-automation teaming, using an email task. BACKGROUND: Under the prevalence effect, rare signals are more difficult to detect, even when taking into account their proportionally low occurrence. This decline represents diminished human capability to both detect and respond. As signal probability (SP) approaches zero, accuracy exhibits logarithmic decay. Cybersecurity, a context in which the environment is entirely artificial, provides an opportunity to manufacture conditions that enhance or degrade human performance, such as prevalence effects. Email cybersecurity prevalence effects have not previously been demonstrated, nor intentionally manipulated. METHOD: The Email Testbed (ET) provides a simulation of clerical email work involving messages containing sensitive personal information. Using the ET, participants were presented with 300 email interactions and received cyberattacks at rates of 1%, 5%, or 20%. RESULTS: Results demonstrated the existence and power of prevalence effects in email cybersecurity. Attacks delivered at a rate of 1% were significantly more likely to succeed, and the overall pattern of accuracy across declining SP exhibited logarithmic decay. APPLICATION: These findings suggest a "prevalence paradox" within human-machine teams. As automation reduces attack SP, the human operator becomes increasingly likely to fail in detecting and reporting the attacks that remain. In the cyber realm, the potential to artificially inflict this state on adversaries, hacking the human operator rather than the algorithmic defense, is considered. Specific and general information security design countermeasures are offered.
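To illustrate the reported relationship, the snippet below fits a logarithmic curve, accuracy = a + b*ln(SP), to hypothetical hit rates at the three attack rates used. Only the 1%, 5%, and 20% rates come from the study design; the accuracies are invented.

```python
# Fit accuracy ~ a + b*ln(signal probability); the hit rates are invented,
# only the 1%/5%/20% attack rates come from the study design.
import numpy as np

sp = np.array([0.01, 0.05, 0.20])        # attack (signal) probability
hit_rate = np.array([0.55, 0.72, 0.86])  # hypothetical detection accuracy

b, a = np.polyfit(np.log(sp), hit_rate, 1)
print(f"accuracy ~ {a:.2f} + {b:.2f} * ln(SP)")
print("predicted accuracy at SP=0.02:", round(a + b * np.log(0.02), 2))
```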


Subject(s)
Computer Security , Electronic Mail , Man-Machine Systems , User-Computer Interface , Adult , Humans , Probability , Young Adult
12.
Ergonomics ; 60(2): 234-240, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27007605

ABSTRACT

Brain processes responsible for the error-related negativity (ERN) evoked response potential (ERP) have historically been studied in highly controlled laboratory experiments through presentation of simple visual stimuli. The present work describes the first time the ERN has been evoked and successfully detected in visual search of complex stimuli. A letter flanker task and a motorcycle conspicuity task were presented to participants during electroencephalographic (EEG) recording. Direct visual inspection and subsequent statistical analysis of the resultant time-locked ERP data clearly indicated that the ERN was detectable in both groups. Further, the ERN pattern did not differ between groups. Such results show that the ERN can be successfully elicited and detected in visual search of complex static images, opening the door to applied neuroergonomic use. Harnessing the brain's error detection system presents significant opportunities and complex challenges, and the implications are discussed in the context of human-machine systems. Practitioner Summary: For the first time, error-related negativity (ERN) has been successfully elicited and detected in a visually complex applied search task. Brain-process-based error detection in human-machine systems presents unique challenges, but promises broad neuroergonomic applications.
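A schematic of how an ERN is conventionally isolated: averaging response-locked EEG epochs for error and correct trials and inspecting the error-minus-correct difference wave. The synthetic epochs, sampling rate, and 0-100 ms window below are illustrative assumptions, not the study's recording parameters.

```python
# Schematic ERN extraction: average response-locked epochs for error and
# correct trials, then inspect the error-minus-correct difference wave.
import numpy as np

srate = 250                      # Hz (assumed)
t = np.arange(-0.2, 0.6, 1 / srate)

rng = np.random.default_rng(0)
correct_epochs = rng.normal(0, 2, (40, t.size))
error_epochs = rng.normal(0, 2, (40, t.size))
error_epochs[:, (t > 0.0) & (t < 0.1)] -= 5    # fake post-response negativity

erp_error = error_epochs.mean(axis=0)
erp_correct = correct_epochs.mean(axis=0)
difference_wave = erp_error - erp_correct

window = (t > 0.0) & (t < 0.1)
print("mean ERN amplitude (uV):", round(difference_wave[window].mean(), 2))
```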


Subject(s)
Attention , Brain , Evoked Potentials , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Middle Aged , Motorcycles , Psychomotor Performance , Young Adult
13.
Hum Factors ; 58(8): 1143-1157, 2016 12.
Article in English | MEDLINE | ID: mdl-27613827

ABSTRACT

OBJECTIVE: We examine how transitions in task demand are manifested in mental workload and performance in a dual-task setting. BACKGROUND: Hysteresis has been defined as the ongoing influence of demand levels prior to a demand transition. Authors of previous studies predominantly examined hysteretic effects in terms of performance. However, little is known about the temporal development of hysteresis in mental workload. METHOD: A simulated driving task was combined with an auditory memory task. Participants were instructed to prioritize driving or to prioritize both tasks equally. Three experimental conditions with low, high, and low task demands were constructed by manipulating the frequency of lane changing. Multiple measures of subjective mental workload were taken during experimental conditions. RESULTS: Contrary to our prediction, no hysteretic effects were found after the high- to low-demand transition. However, a hysteretic effect in mental workload was found within the high-demand condition, which degraded toward the end of the high condition. Priority instructions were not reflected in performance. CONCLUSION: Online assessment of both performance and mental workload demonstrates the transient nature of hysteretic effects. An explanation for the observed hysteretic effect in mental workload is offered in terms of effort regulation. APPLICATION: An informed arrival at the scene is important in safety operations, but peaks in mental workload should be avoided to prevent buildup of fatigue. Therefore, communication technologies should incorporate the historical profile of task demand.
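One simple way to quantify the hysteretic pattern described is to compare subjective workload ratings for matched demand levels before and after the transition, along with the within-block trend during high demand. The ratings below are hypothetical, not the study's data.

```python
# Compare subjective workload ratings for matched demand levels before and
# after the high-demand block; ratings and probe times are hypothetical.
import numpy as np

low_before = np.array([3.1, 3.4, 3.2, 3.0])   # workload probes, 1-10 scale
high = np.array([6.8, 7.4, 7.1, 6.5])         # ratings drop toward end of block
low_after = np.array([3.3, 3.2, 3.1, 3.0])

print("low before vs. after:", low_before.mean(), "vs", low_after.mean())
print("within-high trend (slope per probe):",
      np.polyfit(np.arange(high.size), high, 1)[0])
```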


Subject(s)
Auditory Perception/physiology , Automobile Driving , Executive Function/physiology , Memory, Short-Term/physiology , Psychomotor Performance/physiology , Adult , Humans
14.
Hum Factors ; 57(8): 1339-42, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26534852

ABSTRACT

The laudable effort by Strayer and his colleagues to derive a systematic method to assess forms of cognitive distraction in the automobile is beset by the problem of nonstationarity in driver response capacity. At the level of the overall goal of driving, this problem conflates actual on-road behavior, characterized by underspecified task satisficing, with our own understandable, scientifically inspired aspiration for measuring deterministic performance optimization. Measures of response conceived under this latter imperative are, at best, only shadowy reflections of the actual phenomenological experience involved in real-world vehicle control. Whether we, as a research community, can resolve this issue remains uncertain. However, we believe we can mount a positive attack on what is arguably another equally important dimension of the collision problem.


Subject(s)
Attention , Automobiles , Accidents, Traffic , Automobile Driving/psychology , Cognition , Humans
15.
Hum Factors ; 56(7): 1307-21, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25490810

ABSTRACT

OBJECTIVE: We assess the driving distraction potential of texting with Google Glass (Glass), a mobile wearable platform capable of receiving and sending short-message-service and other messaging formats. BACKGROUND: A known roadway danger, texting while driving has been targeted by legislation and widely banned. Supporters of Glass claim the head-mounted wearable computer is designed to deliver information without concurrent distraction. Existing literature supports the supposition that design decisions incorporated in Glass might facilitate messaging for drivers. METHOD: We asked drivers in a simulator to drive and use either Glass or a smartphone-based messaging interface, then interrupted them with an emergency brake event. Both the response event and subsequent recovery were analyzed. RESULTS: Glass-delivered messages served to moderate but did not eliminate distracting cognitive demands. A potential passive cost to drivers merely wearing Glass was also observed. Messaging using either device impaired driving as compared to driving without multitasking. CONCLUSION: Glass is not a panacea as some supporters claim, but it does point the way to design interventions that effect reduced load in multitasking. APPLICATION: Discussions of these identified benefits are framed within the potential of new in-vehicle systems that bring both novel forms of distraction and tools for mitigation into the driver's seat.


Subject(s)
Attention , Task Performance and Analysis , Text Messaging , Adult , Automobile Driving , Humans , Male
16.
Work ; 41 Suppl 1: 4273-8, 2012.
Article in English | MEDLINE | ID: mdl-22317376

ABSTRACT

The driving task is highly complex and places considerable perceptual, physical, and cognitive demands on the driver. As driving is fundamentally an information processing activity, distracted or impaired drivers have diminished safety margins compared with non-distracted drivers (Hancock and Parasuraman, 1992; TRB 1998 a & b). This competition for sensory and decision-making capacities can lead to failures that cost lives. Some groups, teens and elderly drivers for example, show patterns of systematically poor perceptual, physical, and cognitive performance while driving. Although there are technologies developed to aid these different drivers, these systems are often misused and underutilized. The DriveID project aims to design and develop a passive, automated face identification system capable of robustly identifying the driver of the vehicle, retrieving a stored profile, and intelligently prescribing specific accident prevention systems and driving environment customizations.
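The abstract describes the intended system rather than an algorithm; as a generic illustration of profile retrieval from a face embedding, the sketch below matches a query embedding against stored driver profiles by cosine similarity. The embeddings, profile names, and threshold are placeholders, not DriveID internals.

```python
# Generic driver-profile lookup by cosine similarity between face embeddings.
# The embeddings, profile names, and threshold are placeholder assumptions.
import numpy as np

profiles = {
    "driver_a": np.random.randn(128),   # stored face embeddings (e.g., 128-D)
    "driver_b": np.random.randn(128),
}

def identify(query_embedding, profiles, threshold=0.6):
    best_name, best_sim = None, -1.0
    for name, emb in profiles.items():
        sim = float(np.dot(query_embedding, emb) /
                    (np.linalg.norm(query_embedding) * np.linalg.norm(emb)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None  # None = unknown driver

query = profiles["driver_a"] + 0.05 * np.random.randn(128)
print(identify(query, profiles))
```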


Subject(s)
Automation , Automobile Driving , Biometric Identification , Safety , Automobile Driving/psychology , Automobiles , Female , Humans , Individuality , Male
17.
Work ; 41 Suppl 1: 5384-5, 2012.
Article in English | MEDLINE | ID: mdl-22317558

ABSTRACT

This poster presents a study to assess one's ability to detect motorcycles under different conditions of conspicuity while performing a secondary visual load task. Previous research in which participants were required to detect motorcycles revealed differences in age (young adults/older adults) as well as differences associated with motorcycle conspicuity conditions. Past research has specifically found motorcycles with headlights ON and modulating (flashing) headlights to be more conspicuous than motorcycles with headlights OFF within traffic conditions. The present study seeks to provide more information on the effects of multitasking on motorcycle conspicuity and safety, and to determine the degree to which multitasking limits the conspicuity of a motorcycle within traffic. We expect our results will indicate main effects of distraction task, age, gender, motorcycle lighting conditions, and vehicular daytime running lights (DRLs) on one's ability to effectively detect a motorcycle. The results have implications for motorcycle safety in general; through this research, a better understanding of motorcycle conspicuity can be established so as to minimize the risk involved in motorcycle operation.


Subject(s)
Attention , Automobile Driving , Motorcycles , Visual Perception , Accidents, Traffic/prevention & control , Accidents, Traffic/psychology , Adolescent , Adult , Age Factors , Aged , Female , Humans , Male , Middle Aged , Young Adult
18.
IEEE Comput Graph Appl ; 28(6): 83-5, 2008.
Article in English | MEDLINE | ID: mdl-19004688

ABSTRACT

Healthcare offers special opportunities for the application of game research and technology. This was evident in the presentations at the 2007 Games for Health Conference.


Subject(s)
Delivery of Health Care/trends , Diagnosis, Computer-Assisted/trends , Therapy, Computer-Assisted/trends , User-Computer Interface , Video Games/trends , Systems Integration