Results 1 - 5 of 5
1.
Nat Biomed Eng ; 2024 May 20.
Article in English | MEDLINE | ID: mdl-38769157

ABSTRACT

Advancements in decoding speech from brain activity have focused on decoding a single language. Hence, the extent to which bilingual speech production relies on unique or shared cortical activity across languages has remained unclear. Here, we leveraged electrocorticography, along with deep-learning and statistical natural-language models of English and Spanish, to record and decode activity from speech-motor cortex of a Spanish-English bilingual with vocal-tract and limb paralysis into sentences in either language. This was achieved without requiring the participant to manually specify the target language. Decoding models relied on shared vocal-tract articulatory representations across languages, which allowed us to build a syllable classifier that generalized across a shared set of English and Spanish syllables. Transfer learning expedited training of the bilingual decoder by enabling neural data recorded in one language to improve decoding in the other language. Overall, our findings suggest shared cortical articulatory representations that persist after paralysis and enable the decoding of multiple languages without the need to train separate language-specific decoders.
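The cross-language generalization described in this entry can be caricatured in a toy sketch. This is not the paper's model; the syllables, the three "articulatory" feature dimensions, and all numbers below are invented for illustration. The idea shown is only that a classifier fit on one language's exemplars can label the other language's tokens when the two share articulatory representations.

```python
# Toy sketch (invented features, not the paper's model): a nearest-centroid
# syllable classifier over hypothetical vocal-tract articulatory features.
# Because English and Spanish share many articulations, centroids fit on
# one language's exemplars can also label the other language's syllables.
import math

# Hypothetical 3-D features: (lip closure, tongue height, voicing)
ENGLISH_EXEMPLARS = {
    "ba": [(0.9, 0.2, 1.0), (0.8, 0.3, 0.9)],
    "ta": [(0.1, 0.8, 0.0), (0.2, 0.7, 0.1)],
}

def fit_centroids(exemplars):
    """Average each syllable's feature vectors into one centroid."""
    return {
        syl: tuple(sum(dim) / len(vecs) for dim in zip(*vecs))
        for syl, vecs in exemplars.items()
    }

def classify(features, centroids):
    """Return the syllable whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda s: math.dist(features, centroids[s]))

centroids = fit_centroids(ENGLISH_EXEMPLARS)
# A Spanish /ba/ token with similar articulation maps to the same class.
print(classify((0.85, 0.25, 0.95), centroids))  # -> ba
```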

2.
Nature ; 620(7976): 1037-1046, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37612505

ABSTRACT

Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive¹. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant's pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.


Subject(s)
Face , Neural Prostheses , Paralysis , Speech , Humans , Cerebral Cortex/physiology , Cerebral Cortex/physiopathology , Clinical Trials as Topic , Communication , Deep Learning , Gestures , Movement , Neural Prostheses/standards , Paralysis/physiopathology , Paralysis/rehabilitation , Vocabulary , Voice
3.
Nat Commun ; 13(1): 6510, 2022 Nov 8.
Article in English | MEDLINE | ID: mdl-36347863

ABSTRACT

Neuroprostheses have the potential to restore communication to people who cannot speak or type due to paralysis. However, it is unclear if silent attempts to speak can be used to control a communication neuroprosthesis. Here, we translated direct cortical signals in a clinical-trial participant (ClinicalTrials.gov; NCT03698149) with severe limb and vocal-tract paralysis into single letters to spell out full sentences in real time. We used deep-learning and language-modeling techniques to decode letter sequences as the participant attempted to silently spell using code words that represented the 26 English letters (e.g. "alpha" for "a"). We leveraged broad electrode coverage beyond speech-motor cortex to include supplemental control signals from hand cortex and complementary information from low- and high-frequency signal components to improve decoding accuracy. We decoded sentences using words from a 1,152-word vocabulary at a median character error rate of 6.13% and speed of 29.4 characters per minute. In offline simulations, we showed that our approach generalized to large vocabularies containing over 9,000 words (median character error rate of 8.23%). These results illustrate the clinical viability of a silently controlled speech neuroprosthesis to generate sentences from a large vocabulary through a spelling-based approach, complementing previous demonstrations of direct full-word decoding.


Subject(s)
Speech Perception , Speech , Humans , Language , Vocabulary , Paralysis
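The final stage of the spelling interface described in entry 3 can be sketched in miniature. The code-word inventory below is just the standard NATO-style mapping the abstract exemplifies ("alpha" for "a"), and the tiny vocabulary is invented; the real system used neural classification plus language modeling over a 1,152-word vocabulary, which this does not attempt to reproduce.

```python
# Toy sketch (invented vocabulary): classified code words are mapped to
# letters, and the resulting spelling is accepted only if it is a valid
# vocabulary word, mimicking how a vocabulary-constrained language model
# keeps decoded output to legal words.
CODE_WORDS = {"alpha": "a", "bravo": "b", "charlie": "c", "tango": "t"}
VOCAB = {"cat", "bat", "tab"}

def decode_spelling(code_word_sequence, vocab=VOCAB):
    """Join the decoded letters; return the word only if in-vocabulary."""
    word = "".join(CODE_WORDS[w] for w in code_word_sequence)
    return word if word in vocab else None

print(decode_spelling(["charlie", "alpha", "tango"]))  # -> cat
print(decode_spelling(["alpha", "alpha", "alpha"]))    # -> None (not a word)
```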
4.
Neuron ; 110(15): 2409-2421.e3, 2022 Aug 3.
Article in English | MEDLINE | ID: mdl-35679860

ABSTRACT

The action potential is a fundamental unit of neural computation. Even though significant advances have been made in recording large numbers of individual neurons in animal models, translation of these methodologies to humans has been limited because of clinical constraints and electrode reliability. Here, we present a reliable method for intraoperative recording of dozens of neurons in humans using the Neuropixels probe, yielding up to ∼100 simultaneously recorded single units. Most single units were active within 1 min of reaching target depth. The motion of the electrode array had a strong inverse correlation with yield, identifying a major challenge and opportunity to further increase the probe utility. Cell pairs active close in time were spatially closer in most recordings, demonstrating the power to resolve complex cortical dynamics. Altogether, this approach provides access to population single-unit activity across the depth of human neocortex at scales previously only accessible in animal models.


Subject(s)
Neocortex , Neurons , Action Potentials/physiology , Electrodes , Electrodes, Implanted , Humans , Neurons/physiology , Reproducibility of Results
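The pair analysis in entry 4 (cell pairs active close in time were spatially closer) amounts to correlating pairwise probe distance with pairwise firing-time separation. The sketch below uses fabricated unit depths and times purely to show the shape of that computation, not the paper's actual statistics.

```python
# Toy sketch (fabricated numbers): for each unit pair, compare spatial
# separation on the probe with how closely in time the units fire, and
# check that the two quantities are positively correlated.
from itertools import combinations

units = {  # unit id -> (depth on probe in um, mean spike time in s)
    0: (100, 0.010), 1: (140, 0.012), 2: (900, 0.200), 3: (950, 0.210),
}

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / var

dists, lags = [], []
for a, b in combinations(units, 2):
    (da, ta), (db, tb) = units[a], units[b]
    dists.append(abs(da - db))   # pairwise distance on the probe
    lags.append(abs(ta - tb))    # pairwise firing-time separation

# Positive correlation: pairs firing close in time sit closer on the probe.
print(pearson(dists, lags) > 0)  # -> True
```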
5.
N Engl J Med ; 385(3): 217-227, 2021 Jul 15.
Article in English | MEDLINE | ID: mdl-34260835

ABSTRACT

BACKGROUND: Technology to restore the ability to communicate in paralyzed persons who cannot speak has the potential to improve autonomy and quality of life. An approach that decodes words and sentences directly from the cerebral cortical activity of such patients may represent an advancement over existing methods for assisted communication. METHODS: We implanted a subdural, high-density, multielectrode array over the area of the sensorimotor cortex that controls speech in a person with anarthria (the loss of the ability to articulate speech) and spastic quadriparesis caused by a brain-stem stroke. Over the course of 48 sessions, we recorded 22 hours of cortical activity while the participant attempted to say individual words from a vocabulary set of 50 words. We used deep-learning algorithms to create computational models for the detection and classification of words from patterns in the recorded cortical activity. We applied these computational models, as well as a natural-language model that yielded next-word probabilities given the preceding words in a sequence, to decode full sentences as the participant attempted to say them. RESULTS: We decoded sentences from the participant's cortical activity in real time at a median rate of 15.2 words per minute, with a median word error rate of 25.6%. In post hoc analyses, we detected 98% of the attempts by the participant to produce individual words, and we classified words with 47.1% accuracy using cortical signals that were stable throughout the 81-week study period. CONCLUSIONS: In a person with anarthria and spastic quadriparesis caused by a brain-stem stroke, words and sentences were decoded directly from cortical activity during attempted speech with the use of deep-learning models and a natural-language model. (Funded by Facebook and others; ClinicalTrials.gov number, NCT03698149.).


Subject(s)
Brain Stem Infarctions/complications , Brain-Computer Interfaces , Deep Learning , Dysarthria/rehabilitation , Neural Prostheses , Speech , Adult , Dysarthria/etiology , Electrocorticography , Electrodes, Implanted , Humans , Male , Natural Language Processing , Quadriplegia/etiology , Sensorimotor Cortex/physiology
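Entry 5 combines word classifiers with a natural-language model that yields next-word probabilities given the preceding words. That combination can be sketched as log-score addition over a bigram table. All probabilities and word scores below are invented, and the real system's 50-word vocabulary and Viterbi-style decoding are not reproduced; the point is only how a language model breaks ties the classifier cannot.

```python
# Toy sketch (invented probabilities): combine a word classifier's
# log-probabilities with a bigram language model's next-word
# probabilities by summing log-scores along the candidate sentence.
import math

BIGRAM = {  # hypothetical P(next word | previous word); "<s>" = sentence start
    ("<s>", "i"): 0.5, ("<s>", "eye"): 0.1,
    ("i", "am"): 0.6, ("eye", "am"): 0.01,
}

def sentence_score(words, classifier_logprobs):
    """Sum classifier log-probs and bigram log-probs over the sentence."""
    score = sum(classifier_logprobs)
    prev = "<s>"
    for w in words:
        score += math.log(BIGRAM.get((prev, w), 1e-6))  # floor unseen bigrams
        prev = w
    return score

# Near-homophone confusion: the classifier is unsure, the language model
# prefers the sentence whose word sequence is more probable.
a = sentence_score(["i", "am"], [-0.7, -0.3])
b = sentence_score(["eye", "am"], [-0.6, -0.3])
print(a > b)  # -> True
```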