Results 1 - 3 of 3
1.
J Neural Eng; 21(1). 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38176028

ABSTRACT

Objective. To date, most research on electroencephalography (EEG)-based mental workload detection for passive brain-computer interface (pBCI) applications has focused on identifying the overall level of cognitive resources required, such as whether the workload is high or low. We propose, however, that being able to determine the specific type of cognitive resources being used, such as visual or auditory, would also be useful. This would enable the pBCI to take more appropriate action to reduce the overall level of cognitive demand on the user. For example, if a high level of workload was detected and the user was determined to be primarily engaged in visual information processing, the pBCI could present some information aurally instead. In our previous work we showed that EEG could be used to differentiate visual from auditory processing tasks when the level of processing is high, but the two modalities could not be distinguished when the level of cognitive processing demand was very low. The current study builds on this work and moves toward the overall objective of developing a pBCI capable of predicting both the level and the type of cognitive resources being used.

Approach. Fifteen individuals undertook carefully designed visual and auditory tasks while their EEG data were recorded. This study incorporated a more diverse range of sensory processing conditions, including not only single-modality conditions (i.e. those requiring either visual or auditory processing), as in our previous study, but also dual-modality conditions (i.e. those requiring both visual and auditory processing) and no-task/baseline conditions (i.e. when the individual is engaged in neither visual nor auditory processing).

Main results. Using regularized linear discriminant analysis within a hierarchical classification algorithm, the overall cognitive demand was predicted with an accuracy of more than 86%, while the presence or absence of visual and of auditory sensory processing were each predicted with an accuracy of approximately 70%.

Significance. The findings support the feasibility of establishing a pBCI that can determine both the level and the type of attentional resources required by the user at any given moment. This pBCI could help enhance safety in hazardous jobs by triggering the most effective and efficient adaptation strategies when high-workload conditions are detected.
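For readers wanting a concrete picture of the classification scheme named in this abstract, the following is a minimal sketch of how regularized linear discriminant analysis could be arranged hierarchically with scikit-learn: one model for overall demand, and separate models for the presence of visual and of auditory processing. The feature matrix X, the label arrays, and the choice of shrinkage LDA as the regularizer are illustrative assumptions, not the authors' published implementation.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def make_rlda():
    # 'lsqr' with automatic shrinkage gives a regularized LDA suited to
    # high-dimensional, low-trial-count EEG features (illustrative choice).
    return LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")


def hierarchical_scores(X, y_demand, y_visual, y_auditory, cv=5):
    """Cross-validated accuracy for each stage of the hierarchy.

    X          : (n_trials, n_features) EEG feature matrix (NumPy array, assumed input)
    y_demand   : 1 = task present (cognitive demand), 0 = no-task baseline
    y_visual   : 1 = visual processing required, 0 = not
    y_auditory : 1 = auditory processing required, 0 = not
    """
    scores = {}
    # Stage 1: overall cognitive demand (task vs. baseline).
    scores["demand"] = cross_val_score(make_rlda(), X, y_demand, cv=cv).mean()
    # Stage 2: type of sensory processing, assessed on task trials only.
    task = y_demand == 1
    scores["visual"] = cross_val_score(make_rlda(), X[task], y_visual[task], cv=cv).mean()
    scores["auditory"] = cross_val_score(make_rlda(), X[task], y_auditory[task], cv=cv).mean()
    return scores
```

The two-stage split mirrors the idea in the abstract: decide first whether the user is under load at all, and only then ask which sensory modality is engaged.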


Subject(s)
Electroencephalography, Visual Perception, Humans, Electroencephalography/methods, Cognition, Auditory Perception, Attention
2.
J Neural Eng; 20(1). 2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36749989

ABSTRACT

Objective. A passive brain-computer interface (pBCI) is a system that enhances human-machine interaction by monitoring the mental state of the user and, based on this implicit information, making appropriate modifications to the interaction. Key to the development of such a system is the ability to reliably detect the mental state of interest via neural signals. Many different mental states have been investigated, including fatigue, attention, and various emotions; however, one of the most commonly studied states is mental workload, i.e. the amount of attentional resources required to perform a task. The emphasis of mental workload studies to date has been almost exclusively on detecting and predicting the 'level' of cognitive resources required (e.g. high vs. low), but we argue that having information regarding the specific 'type' of resources (e.g. visual or auditory) would allow the pBCI to apply more suitable adaptation techniques than would be possible knowing just the overall workload level.

Approach. Fifteen participants performed carefully designed visual and auditory tasks while electroencephalography (EEG) data were recorded. The tasks were designed to be as similar as possible to one another except for the type of attentional resources required, and were performed at two levels of demand. Using traditional machine learning algorithms, we investigated, firstly, whether EEG can be used to distinguish between auditory and visual processing tasks and, secondly, what effect the level of sensory processing demand has on the ability to distinguish between them.

Main results. At the high level of demand, the auditory vs. visual processing tasks could be distinguished with an average accuracy of 77.1%. In the low demand condition, however, the tasks could not be classified with an accuracy exceeding chance.

Significance. These results support the feasibility of developing a pBCI that detects not only the level, but also the type, of attentional resources required of the user at a given time. Further research is needed to determine whether there is a threshold of demand below which the type of sensory processing cannot be detected; even if so, these results remain promising, since it is the high end of demand that is of most concern in safety-critical scenarios. Such a BCI could help improve safety in high-risk occupations by initiating the most effective and efficient adaptation strategies when high-workload conditions are detected.
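As a companion sketch, the auditory-vs-visual discrimination described in this abstract can be evaluated separately at each demand level and compared against the 50% chance level. The classifier (shrinkage LDA), the feature matrix X, and the demand labels below are assumptions for illustration; the abstract specifies only 'traditional machine learning algorithms'.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def modality_accuracy(X, y_modality, demand, cv=5):
    """Per-demand-level accuracy for auditory (0) vs. visual (1) trials.

    X          : (n_trials, n_features) EEG feature matrix (NumPy array, assumed input)
    y_modality : 0 = auditory task, 1 = visual task
    demand     : per-trial demand labels, e.g. 'low' or 'high' (NumPy array)
    """
    # Regularized LDA chosen here for consistency with the sketch above.
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    results = {}
    for level in ("low", "high"):
        sel = demand == level
        acc = cross_val_score(clf, X[sel], y_modality[sel], cv=cv).mean()
        results[level] = acc
        print(f"{level} demand: {acc:.1%} accuracy (chance = 50%)")
    return results
```

Evaluating each demand level separately is what exposes the pattern reported above: above-chance discrimination under high demand, chance-level performance under low demand.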


Subject(s)
Auditory Perception, Electroencephalography, Humans, Electroencephalography/methods, Visual Perception, Attention, Workload
3.
Cereb Cortex Commun; 2(2): tgab026, 2021.
Article in English | MEDLINE | ID: mdl-34296171

ABSTRACT

The locus coeruleus (LC) produces phasic and tonic firing patterns that are theorized to have distinct functional consequences. However, how different firing modes affect learning and the valence encoding of sensory information is unknown. Here, we show that bilateral optogenetic activation of rat LC neurons using 10-Hz phasic trains of either 300 ms or 10 s accelerated acquisition of a similar odor discrimination. Similar odor discrimination learning was impaired by noradrenergic blockade in the piriform cortex (PC). However, the 10-Hz phasic light-mediated learning facilitation was prevented by a dopaminergic antagonist in the PC, or by ventral tegmental area (VTA) silencing with lidocaine, suggesting involvement of an LC-VTA-PC dopamine circuit. Ten-hertz tonic stimulation did not alter odor discrimination acquisition and was ineffective in activating VTA dopamine neurons. For valence encoding, tonic stimulation at 25 Hz induced conditioned odor aversion, whereas 10-Hz phasic stimulation produced an odor preference. Both forms of conditioning were prevented by noradrenergic blockade in the basolateral amygdala (BLA). Cholera toxin B retro-labeling showed greater engagement of nucleus accumbens-projecting neurons in the BLA with 10-Hz phasic activation, and greater engagement of central amygdala-projecting cells with 25-Hz tonic light. These outcomes argue that LC activation patterns differentially influence target networks and behavior.
