ABSTRACT
Brain-computer interfaces (BCIs) provide a means of controlling a device by brain activity alone. One major drawback of noninvasive BCIs is their low information transfer rate (ITR), which obstructs wider deployment outside the lab. BCIs based on codebook visually evoked potentials (cVEPs) outperform all other state-of-the-art systems in that regard. Previous work investigated cVEPs for spelling applications. We present the first cVEP-based BCI for use in real-world settings to accomplish everyday tasks such as navigation or action selection. To this end, we developed and evaluated a cVEP-based on-line BCI that controls a virtual agent in a simulated, but realistic, 3-D kitchen scenario. We show that cVEPs can be reliably triggered with stimuli in less restricted presentation schemes, such as on dynamic, changing backgrounds. We introduce a novel dynamic repetition algorithm that allows the balance between accuracy and speed to be optimized individually for each user. Using these novel mechanisms in a 12-command cVEP-BCI in the 3-D simulation yields ITRs of 50 bits/min on average and 68 bits/min at maximum. This work thus supports the notion of cVEP-BCIs as a particularly fast and robust approach suitable for real-world use.
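The reported ITRs can be related to command-set size, accuracy, and selection speed via the Wolpaw formula, the standard ITR convention in the BCI literature. A minimal sketch follows; the accuracy and selection-rate values are illustrative stand-ins, not figures taken from the study:

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate in bits/min per the Wolpaw formula.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect accuracy: full log2(N) bits per selection
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# Illustrative only: a 12-command BCI at 95% accuracy and 16 selections/min
# lands in the ~50 bits/min range reported in the abstract.
print(round(wolpaw_itr(12, 0.95, 16), 1))  # → 50.0
```

Note that the formula rewards accuracy sharply: at chance level (e.g. 50% with two targets) the ITR drops to zero regardless of selection speed.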
Subject(s)
Brain-Computer Interfaces; Communication Aids for Disabled; Electroencephalography/methods; Evoked Potentials, Visual/physiology; Pattern Recognition, Automated/methods; Visual Perception/physiology; Adult; Algorithms; Female; Humans; Male; Man-Machine Systems; Task Performance and Analysis

ABSTRACT
Brain-Computer Interfaces provide a direct communication channel from the brain to a technical device. One major problem with state-of-the-art BCIs is their low communication speed. BCIs based on Codebook Visually Evoked Potentials (cVEP) outperform all other non-invasive approaches in terms of information transfer rate. However, cVEPs have been used only in spelling tasks so far, and more flexibility with respect to stimulus structure and properties is needed. We propose using hierarchical codebook vectors together with varying color schemes to increase stimulus flexibility. An off-line study showed that our novel hcVEP approach can discriminate groups of targets after only 250 ms of stimulus flickering, and the final target within the group after 1 s, with accuracies of 81% and 67%, respectively. Different color schemes (black/white and green/red) are equally effective.
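The hierarchical idea can be sketched as two-stage template matching: the first portion of each code identifies the group, the remainder the member within it. Everything below is an illustrative toy, assuming a 4×3 hierarchy, a 60 Hz refresh rate, and simple bit-pattern codes; the paper's actual codebook vectors (and classifier) are not specified here:

```python
import numpy as np

GROUPS, MEMBERS = 4, 3   # 12 targets split into a 4x3 hierarchy (assumed)
PREFIX, TOTAL = 15, 60   # ~250 ms and ~1 s of stimulation at 60 Hz (assumed)

# Toy codebooks: targets in the same group share a code prefix; the suffix
# then separates members. Real cVEP codes are typically derived from
# m-sequences; simple binary encodings suffice for this sketch.
group_prefixes = np.array(
    [[(g >> b) & 1 for b in range(PREFIX)] for g in range(GROUPS)])
member_suffixes = np.array(
    [[(m >> b) & 1 for b in range(TOTAL - PREFIX)] for m in range(MEMBERS)])

def decode(signal: np.ndarray) -> tuple[int, int]:
    """Stage 1: pick the group from the first ~250 ms; stage 2: the member."""
    g = int(np.argmax([(p == signal[:PREFIX]).sum() for p in group_prefixes]))
    m = int(np.argmax([(s == signal[PREFIX:]).sum() for s in member_suffixes]))
    return g, m

# A noiseless "recording" of target (group 2, member 1) decodes correctly.
sig = np.concatenate([group_prefixes[2], member_suffixes[1]])
print(decode(sig))  # → (2, 1)
```

The point of the hierarchy is latency: a group decision is available after only the prefix has flickered, so coarse feedback (or early stopping) is possible long before the full code finishes.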