Results 1 - 20 of 25
1.
Assist Technol ; 35(1): 74-82, 2023 01 02.
Article in English | MEDLINE | ID: mdl-34184974

ABSTRACT

Augmentative and alternative communication (AAC) techniques can provide access to communication for individuals with severe physical impairments. Brain-computer interface (BCI) access techniques may serve alongside existing AAC access methods to provide communication device control. However, there is limited information available about how individual perspectives change with motor-based BCI-AAC learning. Four individuals with ALS completed 12 BCI-AAC training sessions in which they made letter selections during an automatic row-column scanning pattern via a motor-based BCI-AAC. Recurring measures were taken before and after each BCI-AAC training session to evaluate changes associated with BCI-AAC performance, and included measures of fatigue, frustration, mental effort, physical effort, device satisfaction, and overall ease of device control. Levels of pre- to post-fatigue were low for use of the BCI-AAC system. However, participants indicated different perceptions of the term fatigue, with three participants describing fatigue as generally synonymous with physical effort, and one with mental effort. Satisfaction with the BCI-AAC system was related to BCI-AAC performance for two participants, and to levels of frustration for two participants. Considering a range of person-centered measures in future clinical BCI-AAC applications is important for optimizing and standardizing BCI-AAC assessment procedures.


Subject(s)
Brain-Computer Interfaces , Communication Aids for Disabled , Humans , Learning , Communication , Fatigue
2.
Disabil Rehabil Assist Technol ; : 1-10, 2022 Aug 16.
Article in English | MEDLINE | ID: mdl-35972860

ABSTRACT

PURPOSE: This survey was conducted to investigate American and Indian clinicians' preferences and usage of high-tech communication supports (HTCS) for aphasia rehabilitation, and to identify factors in each country that support the use of HTCS for improving post-aphasia communicative outcomes. In this study, HTCS include speech-generating augmentative and alternative communication (AAC) devices with varying methods of access. METHOD: The survey, exploring clinically practicing speech-language pathologists' (SLPs) training, assessment and aphasia rehabilitation practices using HTCS, was electronically distributed in both countries. The raw responses from the US SLPs (n = 56) and Indian SLPs (n = 43) were collected, segregated and then converted into percentages for all 41 survey questions. RESULTS: The responses from SLPs indicated higher (70%) and lower (58%) use of HTCS for aphasia in a developed country (USA) and a developing country (India), respectively. In the US, identifiable factors for successful use of HTCS for aphasia rehabilitation were familiarity in procuring and programming the device, caregiver training and effectiveness in reducing the time of communicating through the device. In India, factors leading to successful inclusion of HTCS were AAC coursework and clinical training for clinicians and availability of HTCS at affordable prices for clients.
CONCLUSION: There is a considerable difference in the educational and clinical practice of AAC: SLPs in the US tend to have more clinical AAC experience and a stronger network for device dissemination than SLPs in India, leading to higher usage of high-tech AAC for aphasia rehabilitation in the developed country.
Implications for Rehabilitation
For the SLPs: improve exposure to programming AAC devices in developed countries, and increase coursework, clinical training and exposure to programming AAC devices in developing countries; enhance awareness about integrating high-tech AAC devices in intervention programs; improve efficiency by minimizing the time spent on message creation on high-tech AAC devices in developed countries.
For the bioengineers: develop AAC application interfaces in regional languages for easier usage in developing countries.

3.
Mov Disord ; 37(9): 1798-1802, 2022 09.
Article in English | MEDLINE | ID: mdl-35947366

ABSTRACT

Task-specificity in isolated focal dystonias is a powerful feature that may successfully be targeted with therapeutic brain-computer interfaces. While performing a symptomatic task, the patient actively modulates momentary brain activity (disorder signature) to match activity during an asymptomatic task (target signature), which is expected to translate into symptom reduction.


Subject(s)
Brain-Computer Interfaces , Dystonic Disorders , Dystonic Disorders/diagnosis , Dystonic Disorders/therapy , Humans
4.
Assist Technol ; 34(4): 468-477, 2022 07 04.
Article in English | MEDLINE | ID: mdl-33667154

ABSTRACT

Current BCI-AAC systems largely utilize custom-made software and displays that may be unfamiliar to AAC stakeholders. Further, there is limited information available exploring the heterogeneous profiles of individuals who may use BCI-AAC. Therefore, in this study, we aimed to evaluate how individuals with amyotrophic lateral sclerosis (ALS) learned to control a motor-based BCI switch in a row-column AAC scanning pattern, and the person-centered factors associated with BCI-AAC performance. Four individuals with ALS completed 12 BCI-AAC training sessions, and three individuals without neurological impairment completed 3 BCI-AAC training sessions. To assess person-centered factors associated with BCI-AAC performance, participants completed both initial and recurring assessment measures including levels of cognition, motor ability, fatigue, and motivation. Three of four participants demonstrated either BCI-AAC performance in the range of neurotypical peers, or an improving BCI-AAC learning trajectory. However, BCI-AAC learning trajectories were variable. Assessment measures revealed that two participants presented with a suspicion for cognitive impairment yet achieved the highest levels of BCI-AAC accuracy, with their increased levels of performance possibly supported by largely unimpaired motor skills. Motor-based BCI switch access to a commercial AAC row-column scanning display may be feasible for individuals with ALS, possibly supported by timely intervention.


Subject(s)
Amyotrophic Lateral Sclerosis , Brain-Computer Interfaces , Communication Aids for Disabled , Cognition , Communication , Electroencephalography , Humans
5.
J Speech Lang Hear Res ; 64(6S): 2392-2399, 2021 06 18.
Article in English | MEDLINE | ID: mdl-33684301

ABSTRACT

Purpose: This study investigated whether changes in brain activity preceding spoken words can be used as a neural marker of speech intention. Specifically, changes in the contingent negative variation (CNV) were examined prior to speech production in three different study designs to determine a method that maximizes signal detection in a speaking task. Method: Electroencephalography data were collected in three different protocols to elicit the CNV in a spoken word task that varied the timing and type of linguistic information. The first protocol provided participants with the word to be spoken before the instruction of whether or not to speak, the second provided both the word and the instruction to speak, and the third provided the instruction to speak before the word. Participants (N = 18) were split into three groups (one for each protocol) and were instructed to either speak (Go) or refrain from speaking (NoGo) each word according to task instructions. The CNV was measured by analyzing the difference in slope between Go and NoGo trials. Results: Statistically significant effects of hemispheric laterality on the CNV slope confirm the third protocol, in which participants know they will speak before the word is given, as the paradigm that reliably elicits a CNV response related to speech intention. Conclusions: The maximal CNV response when the instruction is known before the word indicates that the neural processing measured in this protocol may reflect a generalized speech intention process, in which the speech-language systems become prepared to speak and then execute production once the word information is provided. The optimal protocol identified in this study requires additional experimental investigation to confirm its role in eliciting an objective marker of speech intention. Supplemental Material: https://doi.org/10.23641/asha.14111468.
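The CNV measure described above is the difference in slope between averaged Go and NoGo trials. A minimal sketch of that computation, using synthetic averaged ERPs and an assumed sampling rate (real data would come from the EEG recordings):

```python
def slope(samples, fs):
    """Least-squares slope (amplitude units per second) of an ERP segment."""
    n = len(samples)
    t = [i / fs for i in range(n)]
    mt = sum(t) / n
    ms = sum(samples) / n
    num = sum((ti - mt) * (si - ms) for ti, si in zip(t, samples))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den

fs = 250  # Hz, assumed sampling rate for illustration
# Synthetic averaged ERPs over a 1 s pre-speech window:
# Go trials show a negative-going ramp (a CNV); NoGo trials stay flat.
go = [-5.0 * i / fs for i in range(fs)]   # drifts toward -5 µV
nogo = [0.0 for _ in range(fs)]           # flat baseline

cnv_effect = slope(go, fs) - slope(nogo, fs)  # µV/s; negative => CNV present
```

A more negative Go-minus-NoGo slope indicates a stronger preparatory (CNV) response in that protocol.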


Subject(s)
Speech Perception , Speech , Contingent Negative Variation , Electroencephalography , Functional Laterality , Humans
6.
Assist Technol ; 32(3): 161-172, 2020 05 03.
Article in English | MEDLINE | ID: mdl-30380372

ABSTRACT

Purpose: The use of standardized screening protocols may inform brain-computer interface (BCI) research procedures to help maximize BCI performance outcomes and provide foundational information for clinical translation. Therefore, in this study we developed and evaluated a new BCI screening protocol incorporating cognitive, sensory, motor and motor imagery tasks. Methods: Following development, BCI screener outcomes were compared to the Amyotrophic Lateral Sclerosis Cognitive Behavioral Screen (ALS-CBS), and ALS Functional Rating Scale (ALS-FRS) for twelve individuals with a neuromotor disorder. Results: Scores on the cognitive portion of the BCI screener demonstrated limited variability, indicating all participants possessed core BCI-related skills. When compared to the ALS-CBS, the BCI screener was able to modestly discriminate possible cognitive difficulties that are likely to influence BCI performance. In addition, correlations between the motor imagery section of the screener and ALS-CBS and ALS-FRS were non-significant, suggesting the BCI screener may provide information not captured on other assessment tools. Additional differences were found between motor imagery tasks, with greater self-ratings on first-person explicit imagery of familiar tasks compared to unfamiliar/generic BCI tasks. Conclusion: The BCI screener captures factors likely relevant for BCI, which has value for guiding person-centered BCI assessment across different devices to help inform BCI trials.


Subject(s)
Amyotrophic Lateral Sclerosis/diagnosis , Brain-Computer Interfaces , Aged , Aged, 80 and over , Communication , Female , Humans , Male , Middle Aged
7.
J Speech Lang Hear Res ; 62(7): 2133-2140, 2019 07 15.
Article in English | MEDLINE | ID: mdl-31306609

ABSTRACT

Purpose: Speech motor control relies on neural processes for generating sensory expectations via an efference copy mechanism to maintain accurate productions. The N100 auditory event-related potential (ERP) has been identified as a possible neural marker of the efference copy, with a reduced amplitude during active listening while speaking compared to passive listening. This study investigates N100 suppression while participants control a motor imagery speech synthesizer brain-computer interface (BCI) with instantaneous auditory feedback, to determine whether similar mechanisms are used for monitoring BCI-based speech output. Such mechanisms may both support BCI learning through existing speech motor networks and serve as a clinical marker of speech network integrity in individuals without severe speech and physical impairments. Method: The motor-induced N100 suppression is examined based on data from 10 participants who controlled a BCI speech synthesizer using limb motor imagery. We considered listening to auditory target stimuli (without motor imagery) in the BCI study as passive listening, and listening to BCI-controlled speech output (with motor imagery) as active listening, since audio output depends on imagined movements. The resulting ERP was assessed for statistical significance using a mixed-effects general linear model. Results: Statistically significant N100 ERP amplitude differences were observed between active and passive listening during the BCI task. Post hoc analyses confirmed the N100 amplitude was suppressed during active listening. Conclusion: Observation of the N100 suppression suggests motor planning brain networks are active as participants control the BCI synthesizer, which may aid speech BCI mastery.


Subject(s)
Brain-Computer Interfaces , Evoked Potentials, Auditory/physiology , Imagination/physiology , Psychomotor Performance/physiology , Speech/physiology , Adult , Auditory Perception/physiology , Communication Aids for Disabled , Electroencephalography , Evoked Potentials, Motor/physiology , Feedback, Sensory/physiology , Female , Humans , Male , Young Adult
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3111-3114, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946546

ABSTRACT

Millions of individuals suffer from impairments that significantly disrupt or completely eliminate their ability to speak. An ideal intervention would restore one's natural ability to physically produce speech. Recent progress has been made in decoding speech-related brain activity to generate synthesized speech. Our vision is to extend these recent advances toward the goal of restoring physical speech production, using decoded speech-related brain activity to modulate the electrical stimulation of the orofacial musculature involved in speech. In this pilot study, we take a step toward this vision by investigating the feasibility of stimulating orofacial muscles during vocalization in order to alter acoustic production. The results of our study provide a necessary foundation for eventual orofacial stimulation controlled directly from decoded speech-related brain activity.


Subject(s)
Electric Stimulation , Facial Muscles/physiology , Movement , Speech , Brain/physiology , Humans , Pilot Projects
9.
Article in English | MEDLINE | ID: mdl-34531937

ABSTRACT

PURPOSE: Brain-computer interfaces (BCIs) aim to provide access to augmentative and alternative communication (AAC) devices via brain activity alone. However, while BCI technology is expanding in the laboratory setting, there is minimal incorporation into clinical practice. Building upon established AAC research and clinical best practices may aid the clinical translation of BCI practice, allowing advancements in both fields to be fully leveraged. METHOD: A multidisciplinary team developed considerations for how BCI products, practice, and policy may build upon existing AAC research, based upon published reports of existing AAC and BCI procedures. OUTCOMES/BENEFITS: Within each consideration, a review of BCI research is provided, along with considerations regarding how BCI procedures may build upon existing AAC methods. The consistent use of clinical/research procedures across disciplines can help facilitate collaborative efforts, engaging a range of individuals within the AAC community in the transition of BCI into clinical practice.

10.
Disabil Rehabil Assist Technol ; 14(3): 241-249, 2019 04.
Article in English | MEDLINE | ID: mdl-29385839

ABSTRACT

PURPOSE: We investigated how overt visual attention and oculomotor control influence successful use of a visual feedback brain-computer interface (BCI) for accessing augmentative and alternative communication (AAC) devices in a heterogeneous population of individuals with profound neuromotor impairments. BCIs are often tested within a single patient population, limiting generalization of results. This study focuses on examining individual sensory abilities with an eye toward possible interface adaptations to improve device performance. METHODS: Five individuals with a range of neuromotor disorders participated in a four-choice BCI control task involving the steady state visually evoked potential. The BCI graphical interface was designed to simulate a commercial AAC device to examine whether an integrated device could be used successfully by individuals with neuromotor impairment. RESULTS: All participants were able to interact with the BCI, and the highest performance was found for participants able to employ an overt visual attention strategy. For participants with visual deficits due to impaired oculomotor control, effective performance increased after accounting for mismatches between the graphical layout and participant visual capabilities. CONCLUSION: As BCIs are translated from research environments to clinical applications, the assessment of BCI-related skills will help facilitate proper device selection and provide individuals who use BCI the greatest likelihood of immediate and long-term communicative success. Overall, our results indicate that adaptations can be an effective strategy to reduce barriers and increase access to BCI technology. These efforts should be directed by comprehensive assessments for matching individuals to the most appropriate device to support their complex communication needs.
Implications for Rehabilitation
Brain-computer interfaces using the steady state visually evoked potential can be integrated with an augmentative and alternative communication device to provide access to language and literacy for individuals with neuromotor impairment.
Comprehensive assessments are needed to fully understand the sensory, motor, and cognitive abilities of individuals who may use brain-computer interfaces, for proper feature matching and selection of the most appropriate device, including optimization of device layouts and control paradigms.
Oculomotor impairments negatively impact brain-computer interfaces that use the steady state visually evoked potential, but modifications that place interface stimuli and communication items in the intact visual field can improve outcomes.
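An SSVEP BCI like the four-choice task above typically selects the target whose flicker frequency dominates the EEG spectrum. A minimal sketch of that detection step, with assumed flicker rates and sampling rate, and simulated (not real) occipital EEG:

```python
import numpy as np

def ssvep_choice(eeg, fs, flicker_freqs):
    """Pick the flicker frequency with the greatest spectral power in `eeg`."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = []
    for f in flicker_freqs:
        idx = int(np.argmin(np.abs(freqs - f)))  # nearest FFT bin to f
        powers.append(spectrum[idx])
    return flicker_freqs[int(np.argmax(powers))]

fs = 256                           # Hz, assumed sampling rate
t = np.arange(fs * 2) / fs         # 2 s analysis window
targets = [8.0, 10.0, 12.0, 15.0]  # illustrative flicker rates for 4 choices
rng = np.random.default_rng(0)
# Simulated EEG: a 12 Hz SSVEP response buried in noise.
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

selected = ssvep_choice(eeg, fs, targets)  # expected: 12.0
```

Practical systems refine this with harmonics and canonical correlation analysis, but the nearest-bin power comparison captures the core selection logic.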


Subject(s)
Brain Diseases/physiopathology , Brain Diseases/rehabilitation , Brain-Computer Interfaces , Evoked Potentials, Visual/physiology , Adult , Aged , Attention/physiology , Eye Movements/physiology , Feedback, Sensory/physiology , Female , Humans , Male , Middle Aged , Task Performance and Analysis , User-Computer Interface
11.
Perspect ASHA Spec Interest Groups ; 4(6): 1622-1636, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32529035

ABSTRACT

PURPOSE: Brain-computer interface (BCI) techniques may provide computer access for individuals with severe physical impairments. However, the relatively hidden nature of BCI control obscures how BCI systems work behind the scenes, making it difficult to understand how electroencephalography (EEG) records the BCI related brain signals, what brain signals are recorded by EEG, and why these signals are targeted for BCI control. Furthermore, in the field of speech-language-hearing, signals targeted for BCI application have been of primary interest to clinicians and researchers in the area of augmentative and alternative communication (AAC). However, signals utilized for BCI control reflect sensory, cognitive and motor processes, which are of interest to a range of related disciplines including speech science. METHOD: This tutorial was developed by a multidisciplinary team emphasizing primary and secondary BCI-AAC related signals of interest to speech-language-hearing. RESULTS: An overview of BCI-AAC related signals are provided discussing 1) how BCI signals are recorded via EEG, 2) what signals are targeted for non-invasive BCI control, including the P300, sensorimotor rhythms, steady state evoked potentials, contingent negative variation, and the N400, and 3) why these signals are targeted. During tutorial creation, attention was given to help support EEG and BCI understanding for those without an engineering background. CONCLUSION: Tutorials highlighting how BCI-AAC signals are elicited and recorded can help increase interest and familiarity with EEG and BCI techniques and provide a framework for understanding key principles behind BCI-AAC design and implementation.

12.
Am J Speech Lang Pathol ; 27(3): 950-964, 2018 08 06.
Article in English | MEDLINE | ID: mdl-29860376

ABSTRACT

Purpose: Brain-computer interfaces (BCIs) can provide access to augmentative and alternative communication (AAC) devices using neurological activity alone without voluntary movements. As with traditional AAC access methods, BCI performance may be influenced by the cognitive-sensory-motor and motor imagery profiles of those who use these devices. Therefore, we propose a person-centered, feature matching framework consistent with clinical AAC best practices to ensure selection of the most appropriate BCI technology to meet individuals' communication needs. Method: The proposed feature matching procedure is based on the current state of the art in BCI technology and published reports on cognitive, sensory, motor, and motor imagery factors important for successful operation of BCI devices. Results: Considerations for successful selection of BCI for accessing AAC are summarized based on interpretation from a multidisciplinary team with experience in AAC, BCI, neuromotor disorders, and cognitive assessment. The set of features that support each BCI option are discussed in a hypothetical case format to model possible transition of BCI research from the laboratory into clinical AAC applications. Conclusions: This procedure is an initial step toward consideration of feature matching assessment for the full range of BCI devices. Future investigations are needed to fully examine how person-centered factors influence BCI performance across devices.


Subject(s)
Brain-Computer Interfaces , Brain/physiopathology , Communication Aids for Disabled , Communication Disorders/rehabilitation , Communication , Adolescent , Aged , Auditory Threshold , Brain Waves , Clinical Decision-Making , Cognition , Communication Disorders/diagnosis , Communication Disorders/physiopathology , Communication Disorders/psychology , Disability Evaluation , Equipment Design , Event-Related Potentials, P300 , Female , Humans , Imagination , Male , Motor Activity , Patient Selection , Predictive Value of Tests , Visual Perception
13.
IEEE Trans Neural Syst Rehabil Eng ; 26(4): 874-881, 2018 04.
Article in English | MEDLINE | ID: mdl-29641392

ABSTRACT

We conducted a study of a motor imagery brain-computer interface (BCI) using electroencephalography to continuously control a formant frequency speech synthesizer with instantaneous auditory and visual feedback. Over a three-session training period, sixteen participants learned to control the BCI for production of three vowel sounds (/i/ [heed], /ɑ/ [hot], and /u/ [who'd]) and were split into three groups: those receiving unimodal auditory feedback of synthesized speech, those receiving unimodal visual feedback of formant frequencies, and those receiving multimodal, audio-visual (AV) feedback. Audio feedback was provided by a formant frequency artificial speech synthesizer, and visual feedback was given as a 2-D cursor on a graphical representation of the plane defined by the first two formant frequencies. We found that combined AV feedback led to the greatest performance in terms of percent accuracy, distance to target, and movement time to target compared with either unimodal feedback of auditory or visual information. These results indicate that performance is enhanced when multimodal feedback is meaningful for the BCI task goals, rather than as a generic biofeedback signal of BCI progress.
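Two of the outcome measures above (accuracy and distance to target) fall out of the cursor's position in the F1-F2 plane. A sketch of that scoring, using approximate textbook formant values as illustrative targets (the study's actual targets were synthesizer-specific):

```python
import math

# Illustrative (F1, F2) vowel targets in Hz; a real system would use
# calibrated, synthesizer-specific values.
VOWEL_TARGETS = {
    "i": (300.0, 2300.0),   # "heed"
    "a": (700.0, 1100.0),   # "hot"
    "u": (300.0, 870.0),    # "who'd"
}

def distance_to_target(cursor, vowel):
    """Euclidean distance (Hz) from an (F1, F2) cursor to a vowel target."""
    f1, f2 = VOWEL_TARGETS[vowel]
    return math.hypot(cursor[0] - f1, cursor[1] - f2)

def nearest_vowel(cursor):
    """Classify an (F1, F2) cursor position by its closest vowel target."""
    return min(VOWEL_TARGETS, key=lambda v: distance_to_target(cursor, v))
```

For example, a cursor at (320 Hz, 2200 Hz) would be scored as an /i/ attempt, and its distance to the /i/ target quantifies how close the production came.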


Subject(s)
Brain-Computer Interfaces , Communication Aids for Disabled , Feedback, Psychological , Acoustic Stimulation , Adult , Algorithms , Data Interpretation, Statistical , Electroencephalography , Feedback, Sensory , Female , Humans , Imagination , Learning , Male , Mental Fatigue , Practice, Psychological , Psychomotor Performance , Reproducibility of Results , Young Adult
14.
Am J Speech Lang Pathol ; 27(1): 1-12, 2018 02 06.
Article in English | MEDLINE | ID: mdl-29318256

ABSTRACT

Purpose: Brain-computer interfaces (BCIs) have the potential to improve communication for people who require but are unable to use traditional augmentative and alternative communication (AAC) devices. As BCIs move toward clinical practice, speech-language pathologists (SLPs) will need to consider their appropriateness for AAC intervention. Method: This tutorial provides a background on BCI approaches to provide AAC specialists foundational knowledge necessary for clinical application of BCI. Tutorial descriptions were generated based on a literature review of BCIs for restoring communication. Results: The tutorial responses directly address 4 major areas of interest for SLPs who specialize in AAC: (a) the current state of BCI with emphasis on SLP scope of practice (including the subareas: the way in which individuals access AAC with BCI, the efficacy of BCI for AAC, and the effects of fatigue), (b) populations for whom BCI is best suited, (c) the future of BCI as an addition to AAC access strategies, and (d) limitations of BCI. Conclusion: Current BCIs have been designed as access methods for AAC rather than a replacement; therefore, SLPs can use existing knowledge in AAC as a starting point for clinical application. Additional training is recommended to stay updated with rapid advances in BCI.


Subject(s)
Brain-Computer Interfaces , Communication Aids for Disabled , Speech Disorders/rehabilitation , Brain-Computer Interfaces/trends , Communication , Communication Aids for Disabled/trends , Fatigue , Humans , Patient Selection
15.
Speech Commun ; 104: 95-105, 2018 Nov.
Article in English | MEDLINE | ID: mdl-31105365

ABSTRACT

Speech technology applications have emerged as a promising method for assessing speech-language abilities and at-home therapy, including prosody. Many applications assume that observed prosody errors are due to an underlying disorder; however, they may be instead due to atypical representations of prosody such as immature and developing speech motor control, or compensatory adaptations by those with congenital neuromotor disorders. The result is the same - vocal productions may not be a reliable measure of prosody knowledge. Therefore, in this study we examine the usability of a new technology application to express prosody knowledge without relying on vocalizations using the Prosodic Marionette (PM) graphical user interface for artificial resynthesis of speech prosody. We tested the ability of neurotypical participants to use the PM interface to control prosody through 2D movements of word-icon blocks vertically (fundamental frequency), horizontally (pause length), and by stretching (word duration) to correctly mark target prosodic contrasts. Nearly all participants used vertical movements to correctly mark fundamental frequency changes where appropriate (e.g., raised second word for pitch accent on second word). A smaller percentage of participants used the stretching feature to mark duration changes; when used, participants correctly lengthened the appropriate word (e.g., stretch the second item to accent the second word). Our results suggest the PM interface can be used reliably to correctly signal speech prosody, which validates future use of the interface to assess prosody in clinical and developmental populations with atypical speech motor control.
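The block manipulations described above map directly onto resynthesis parameters: vertical position controls fundamental frequency, horizontal gaps control pause length, and stretching controls word duration. A hypothetical sketch of that mapping (the block structure, names, and scaling constants are all assumptions, not the PM interface's actual internals):

```python
from dataclasses import dataclass

@dataclass
class WordBlock:
    """One word-icon block on a hypothetical 2D prosody canvas."""
    x: float       # left edge, px
    y: float       # elevation, px (higher block = higher pitch)
    width: float   # px (stretched block = longer word)

# Assumed, illustrative scaling constants; a real interface would calibrate these.
BASE_F0 = 120.0        # Hz at y = 0
HZ_PER_PX = 0.5        # pitch change per pixel of elevation
MS_PER_PX = 2.0        # word duration per pixel of width
PAUSE_MS_PER_PX = 1.5  # pause per pixel of gap between adjacent blocks

def resynthesis_params(blocks):
    """Turn a row of blocks into (f0_hz, duration_ms, following_pause_ms) tuples."""
    params = []
    for i, b in enumerate(blocks):
        f0 = BASE_F0 + HZ_PER_PX * b.y
        duration = MS_PER_PX * b.width
        if i + 1 < len(blocks):
            gap = blocks[i + 1].x - (b.x + b.width)
            pause = PAUSE_MS_PER_PX * max(gap, 0.0)
        else:
            pause = 0.0
        params.append((f0, duration, pause))
    return params

# Raising the second block marks a pitch accent on the second word.
row = [WordBlock(x=0, y=0, width=100), WordBlock(x=120, y=40, width=100)]
params = resynthesis_params(row)
```

Under this mapping, the raised second block yields a higher F0 for the second word, which is exactly the contrast participants used to mark pitch accent.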

16.
PLoS One ; 11(11): e0166872, 2016.
Article in English | MEDLINE | ID: mdl-27875590

ABSTRACT

How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech. Specifically, we asked subjects to repeat continuous sentences aloud or silently while we recorded electrical signals directly from the surface of the brain (electrocorticography (ECoG)). We then determined the relationship between cortical activity and speech output across different areas of cortex and at sub-second timescales. The results highlight a spatio-temporal progression of cortical involvement in the continuous speech process that initiates utterances in frontal-motor areas and ends with the monitoring of auditory feedback in superior temporal gyrus. Direct comparison of cortical activity related to overt versus covert conditions revealed a common network of brain regions involved in speech that may implement orthographic and phonological processing. Our results provide one of the first characterizations of the spatiotemporal electrophysiological representations of the continuous speech process, and also highlight the common neural substrate of overt and covert speech. These results thereby contribute to a refined understanding of speech functions in the human brain.


Subject(s)
Cerebral Cortex/physiology , Electrocorticography , Reading , Speech/physiology , Adult , Female , Humans , Male , Middle Aged
17.
Front Hum Neurosci ; 9: 97, 2015.
Article in English | MEDLINE | ID: mdl-25759647

ABSTRACT

Acoustic speech output results from coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates.

18.
Article in English | MEDLINE | ID: mdl-24678295

ABSTRACT

The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty-both in the functional network edges and the corresponding aggregate measures of network topology-are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here-appropriate for static and dynamic network inference and different statistical measures of coupling-permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.
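The trial-resampling idea above can be illustrated with a percentile bootstrap on a single correlation edge: resample whole trials with replacement and report an interval for the edge weight. This is a simplified sketch under assumed synthetic data, not the paper's exact procedure:

```python
import numpy as np

def edge_corr(trials, i, j):
    """Mean across trials of the Pearson correlation between sensors i and j."""
    rs = [np.corrcoef(tr[i], tr[j])[0, 1] for tr in trials]
    return float(np.mean(rs))

def bootstrap_edge_ci(trials, i, j, n_boot=200, alpha=0.05, seed=0):
    """Percentile bootstrap CI for edge (i, j), resampling whole trials,
    which preserves the within-trial temporal structure of the data."""
    rng = np.random.default_rng(seed)
    n = len(trials)
    stats = []
    for _ in range(n_boot):
        sample = [trials[k] for k in rng.integers(0, n, size=n)]
        stats.append(edge_corr(sample, i, j))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(lo), float(hi)

# Synthetic task data: 40 trials x 3 sensors x 200 samples, where sensors
# 0 and 1 share a common signal (a true functional edge) and sensor 2 is noise.
rng = np.random.default_rng(1)
trials = []
for _ in range(40):
    common = rng.standard_normal(200)
    s0 = common + 0.3 * rng.standard_normal(200)
    s1 = common + 0.3 * rng.standard_normal(200)
    s2 = rng.standard_normal(200)
    trials.append(np.vstack([s0, s1, s2]))

lo, hi = bootstrap_edge_ci(trials, 0, 1)  # interval well above 0: confident edge
```

An interval excluding zero supports declaring the edge present; the same machinery applies to aggregate topology measures by recomputing them on each resample.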

19.
Article in English | MEDLINE | ID: mdl-23366434

ABSTRACT

In this paper we present a framework for reducing the development time needed for creating applications for use in non-invasive brain-computer interfaces (BCIs). Our framework is primarily focused on facilitating rapid software "app" development akin to current efforts in consumer portable computing (e.g., smart phones and tablets). This is accomplished by handling intermodule communication without direct user or developer implementation, instead relying on a core subsystem for communication of standard, internal data formats. We also provide a library of hardware interfaces for common mobile EEG platforms for immediate use in BCI applications. A use-case example is described in which a user with amyotrophic lateral sclerosis participated in an electroencephalography-based BCI protocol developed using the proposed framework. We show that our software environment is capable of running in real-time with updates occurring 50-60 times per second with limited computational overhead (5 ms system lag) while providing accurate data acquisition and signal analysis.
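The core-subsystem idea above, where modules exchange standard-format messages without knowing about each other, is essentially a publish/subscribe bus. A minimal, hypothetical sketch of that pattern (names and message format are illustrative, not the framework's actual API):

```python
from collections import defaultdict

class MessageBus:
    """Tiny publish/subscribe core: modules register handlers for topics and
    exchange standard-format messages without direct coupling."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register `handler` to receive every message published on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver `message` to all handlers subscribed to `topic`."""
        for handler in self._subscribers[topic]:
            handler(message)

# Example: an "acquisition" module publishes EEG frames in a shared internal
# format; a downstream "classifier" module consumes them.
bus = MessageBus()
received = []
bus.subscribe("eeg/frame", received.append)
bus.publish("eeg/frame", {"samples": [0.1, 0.2], "fs": 256})
```

Because only the bus and the message format are shared, acquisition, signal-analysis, and application modules can be developed and swapped independently, which is the development-time saving the framework targets.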


Subject(s)
Brain-Computer Interfaces , Programming Languages , Electroencephalography , Humans , Software
20.
Front Neurosci ; 5: 65, 2011.
Article in English | MEDLINE | ID: mdl-21629876

ABSTRACT

We conducted a neurophysiological study of attempted speech production in a paralyzed human volunteer using chronic microelectrode recordings. The volunteer suffers from locked-in syndrome leaving him in a state of near-total paralysis, though he maintains good cognition and sensation. In this study, we investigated the feasibility of supervised classification techniques for prediction of intended phoneme production in the absence of any overt movements including speech. Such classification or decoding ability has the potential to greatly improve the quality-of-life of many people who are otherwise unable to speak by providing a direct communicative link to the general community. We examined the performance of three classifiers on a multi-class discrimination problem in which the items were 38 American English phonemes including monophthong and diphthong vowels and consonants. The three classifiers differed in performance, but averaged between 16 and 21% overall accuracy (chance-level is 1/38 or 2.6%). Further, the distribution of phonemes classified statistically above chance was non-uniform though 20 of 38 phonemes were classified with statistical significance for all three classifiers. These preliminary results suggest supervised classification techniques are capable of performing large scale multi-class discrimination for attempted speech production and may provide the basis for future communication prostheses.
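The chance-level comparison above (16-21% accuracy against 1/38 ≈ 2.6%) can be checked with an exact one-sided binomial test. A stdlib-only sketch; the trial count is an assumption for illustration, not the study's actual number:

```python
from math import comb

def binomial_p_above_chance(correct, total, chance):
    """Exact one-sided p-value: P(X >= correct) for X ~ Binomial(total, chance)."""
    return sum(
        comb(total, k) * chance**k * (1 - chance) ** (total - k)
        for k in range(correct, total + 1)
    )

n_classes = 38
chance = 1 / n_classes        # ~2.6%, as reported
trials = 380                  # assumed trial count, for illustration only
correct = int(0.18 * trials)  # 18% accuracy, mid-range of the 16-21% report

p = binomial_p_above_chance(correct, trials, chance)  # far below any alpha
```

Even at this modest absolute accuracy, performance roughly seven times chance on a 38-way problem is overwhelmingly unlikely under the null, which is why the decoding result is meaningful despite being well under 100%.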
