Results 1 - 20 of 182
1.
J Neurosci ; 44(30)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-38926087

ABSTRACT

Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
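
The phrase-tracking measure described above (slow, ~0.1 Hz modulations of EEG power and their phase at phrase boundaries) can be illustrated with a minimal sketch. The snippet below runs on synthetic data with an assumed sampling rate and assumed boundary times; it shows only the general band-power, band-pass, and phase-concentration steps, not the authors' pipeline.

```python
# Minimal sketch, assuming synthetic data: extract ~0.1 Hz modulations of EEG
# power and measure how concentrated their phase is at phrase boundaries.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)                 # 5 minutes of one EEG channel
eeg = np.random.randn(t.size)                 # placeholder signal

power = np.abs(hilbert(eeg)) ** 2             # instantaneous power
b, a = butter(2, [0.05, 0.2], btype="bandpass", fs=fs)
slow_power = filtfilt(b, a, power)            # isolate ~0.1 Hz modulations
phase = np.angle(hilbert(slow_power))         # phase of the slow modulation

phrase_onsets = np.arange(8, 300, 8)          # assumed phrase boundaries (s)
boundary_phases = phase[(phrase_onsets * fs).astype(int)]
itpc = np.abs(np.mean(np.exp(1j * boundary_phases)))   # phase concentration
print(f"Phase concentration at phrase boundaries: {itpc:.2f}")
```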


Subject(s)
Auditory Perception , Electroencephalography , Music , Humans , Female , Male , Electroencephalography/methods , Adult , Auditory Perception/physiology , Young Adult , Acoustic Stimulation/methods , Brain/physiology
2.
Neurobiol Lang (Camb) ; 5(2): 432-453, 2024.
Article in English | MEDLINE | ID: mdl-38911458

ABSTRACT

Research points to neurofunctional differences underlying fluent speech between stutterers and non-stutterers. Considerably less work has focused on the processes that underlie stuttered vs. fluent speech. Additionally, most of this research has focused on speech motor processes despite contributions from cognitive processes prior to the onset of stuttered speech. We used magnetoencephalography (MEG) to test the hypothesis that reactive inhibitory control is triggered prior to stuttered speech. Twenty-nine stutterers completed a delayed-response task that featured a cue (prior to a go cue) signaling the imminent requirement to produce a word that was either stuttered or fluent. Consistent with our hypothesis, we observed increased beta power, likely emanating from the right pre-supplementary motor area (R-preSMA), an area implicated in reactive inhibitory control, in response to the cue preceding stuttered vs. fluent productions. Beta power differences between stuttered and fluent trials correlated with stuttering severity, and participants' percentage of trials stuttered increased exponentially with beta power in the R-preSMA. Trial-by-trial beta power modulations in the R-preSMA following the cue predicted whether a trial would be stuttered or fluent. Stuttered trials were also associated with delayed speech onset, suggesting an overall slowing or freezing of the speech motor system that may be a consequence of inhibitory control. Post-hoc analyses revealed that independently generated anticipated words were associated with greater beta power and more stuttering than researcher-assisted anticipated words, pointing to a relationship between self-perceived likelihood of stuttering (i.e., anticipation) and inhibitory control. This work offers a neurocognitive account of stuttering by characterizing cognitive processes that precede overt stuttering events.
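
The trial-by-trial analysis described above can be sketched as a cross-validated logistic regression predicting stuttered vs. fluent trials from cue-locked beta power. The data below are simulated and the variable names are placeholders, not the study's actual measures.

```python
# Minimal sketch on simulated data: does single-trial beta power after the cue
# predict whether the upcoming word is stuttered?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200
beta_power = rng.normal(0, 1, n_trials)                   # cue-locked beta power (z-scored)
p_stutter = 1 / (1 + np.exp(-(0.8 * beta_power - 0.5)))   # simulated link to stuttering
stuttered = rng.binomial(1, p_stutter)                    # 1 = stuttered, 0 = fluent

acc = cross_val_score(LogisticRegression(), beta_power[:, None],
                      stuttered, cv=5).mean()
print(f"Cross-validated accuracy predicting stuttered trials: {acc:.2f}")
```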

3.
PLoS Biol ; 22(5): e3002622, 2024 May.
Article in English | MEDLINE | ID: mdl-38814982

ABSTRACT

Combinatoric linguistic operations underpin human language processes, but how meaning is composed and refined in the mind of the reader is not well understood. We address this puzzle by exploiting the ubiquitous function of negation. We track the online effects of negation ("not") and intensifiers ("really") on the representation of scalar adjectives (e.g., "good") in parametrically designed behavioral and neurophysiological (MEG) experiments. The behavioral data show that participants first interpret negated adjectives as affirmative and later modify their interpretation towards, but never exactly as, the opposite meaning. Decoding analyses of neural activity further reveal significant above chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., "not bad" represented as "good"); furthermore, decoding accuracy for negated adjectives is found to be significantly lower than that for affirmative adjectives. Overall, these results suggest that negation mitigates rather than inverts the neural representations of adjectives. This putative suppression mechanism of negation is supported by increased synchronization of beta-band neural activity in sensorimotor areas. The analysis of negation provides a steppingstone to understand how the human brain represents changes of meaning over time.
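
The decoding analysis referred to above follows a standard time-resolved scheme: a classifier is trained and tested at each time point to separate affirmative from negated adjective trials. The sketch below uses simulated epochs with an injected late effect to illustrate that scheme; it is not the authors' exact pipeline.

```python
# Minimal sketch of time-resolved decoding on simulated MEG-like epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 120, 64, 100
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # 0 = affirmative, 1 = negated
X[y == 1, :, 60:] += 0.3                      # injected late condition effect

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="roc_auc").mean()
    for t in range(n_times)
])
print("Peak decoding AUC:", scores.max().round(2), "at sample", int(scores.argmax()))
```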


Subject(s)
Language , Humans , Female , Male , Adult , Young Adult , Brain/physiology , Magnetoencephalography/methods , Semantics , Linguistics/methods
4.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article in English | MEDLINE | ID: mdl-38805517

ABSTRACT

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle was used consistently by all listeners for speech judgments, but only by musically sophisticated listeners for music judgments. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgments, regardless of musical sophistication. The data suggest that the auditory system can rely on a low-level acoustic property as basic as AM to distinguish music from speech, a simple principle that provokes both neurophysiological and evolutionary experiments and speculations.
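
The noise-probing logic can be made concrete with a small synthesis sketch: white noise is given an amplitude envelope whose burst rate and timing regularity are controlled directly. The parameters and burst shape below are illustrative assumptions, not the stimuli used in the study.

```python
# Minimal sketch: noise with a controllable AM rate and AM regularity.
import numpy as np

def am_noise(duration=2.0, fs=16000, am_rate=4.0, jitter=0.0, seed=0):
    """White noise with quasi-periodic amplitude bursts.
    am_rate: mean burst rate (Hz); jitter: relative timing irregularity."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    carrier = rng.standard_normal(n)
    period = 1.0 / am_rate
    onsets, t = [], 0.0
    while t < duration:
        onsets.append(t)
        t += period * (1 + jitter * rng.uniform(-1, 1))
    envelope = np.zeros(n)
    burst = np.hanning(int(0.6 * period * fs))        # smooth burst shape
    for onset in onsets:
        i = int(onset * fs)
        j = min(i + burst.size, n)
        envelope[i:j] += burst[: j - i]
    return carrier * envelope

speech_like = am_noise(am_rate=5.0, jitter=0.4)       # faster, irregular AM
music_like = am_noise(am_rate=1.5, jitter=0.05)       # slower, regular AM
```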


Subject(s)
Acoustic Stimulation , Auditory Perception , Music , Speech Perception , Humans , Male , Female , Adult , Auditory Perception/physiology , Acoustic Stimulation/methods , Speech Perception/physiology , Young Adult , Speech/physiology , Adolescent
5.
Sci Rep ; 14(1): 8977, 2024 04 18.
Article in English | MEDLINE | ID: mdl-38637516

ABSTRACT

Why do we prefer some singers to others? We investigated how much singing voice preferences can be traced back to objective features of the stimuli. To do so, we asked participants to rate short excerpts of singing performances in terms of how much they liked them, as well as in terms of 10 perceptual attributes (e.g., pitch accuracy, tempo, breathiness). We modeled liking ratings based on these perceptual ratings, as well as on acoustic features and low-level features derived from Music Information Retrieval (MIR). Mean liking ratings for each stimulus were highly correlated between Experiments 1 (online, US-based participants) and 2 (in the lab, German participants), suggesting a role for attributes of the stimuli in grounding average preferences. We show that acoustic and MIR features barely explain any variance in liking ratings; in contrast, perceptual ratings of the voices explained around 43% of the variance. Inter-rater agreement in liking and perceptual ratings was low, indicating substantial (and unsurprising) individual differences in participants' preferences and perception of the stimuli. Our results indicate that singing voice preferences are not grounded in acoustic attributes of the voices per se, but in how these features are perceptually interpreted by listeners.
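
The modeling comparison described above (acoustic/MIR features vs. perceptual ratings as predictors of liking) can be sketched with cross-validated ridge regression. The feature matrices below are random placeholders; only the comparison logic is illustrated.

```python
# Minimal sketch: cross-validated variance explained by two feature sets.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_stimuli = 96
liking = rng.normal(size=n_stimuli)                   # mean liking per excerpt
acoustic_mir = rng.normal(size=(n_stimuli, 20))       # placeholder MIR/acoustic features
perceptual = 0.6 * liking[:, None] + rng.normal(size=(n_stimuli, 10))  # informative by design

def cv_r2(X, y):
    return cross_val_score(RidgeCV(), X, y, cv=10, scoring="r2").mean()

print("Acoustic/MIR features, cross-validated R^2:", round(cv_r2(acoustic_mir, liking), 2))
print("Perceptual ratings, cross-validated R^2:   ", round(cv_r2(perceptual, liking), 2))
```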


Subject(s)
Music , Singing , Voice , Humans , Voice Quality , Acoustics
6.
bioRxiv ; 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38659750

ABSTRACT

Speech comprehension requires the human brain to transform an acoustic waveform into meaning. To do so, the brain generates a hierarchy of features that converts the sensory input into increasingly abstract language properties. However, little is known about how these hierarchical features are generated and continuously coordinated. Here, we propose that each linguistic feature is dynamically represented in the brain to simultaneously represent successive events. To test this 'Hierarchical Dynamic Coding' (HDC) hypothesis, we use time-resolved decoding of brain activity to track the construction, maintenance, and integration of a comprehensive hierarchy of language features spanning acoustic, phonetic, sub-lexical, lexical, syntactic and semantic representations. For this, we recorded 21 participants with magnetoencephalography (MEG) while they listened to two hours of short stories. Our analyses reveal three main findings. First, the brain incrementally represents and simultaneously maintains successive features. Second, the duration of these representations depends on their level in the language hierarchy. Third, each representation is maintained by a dynamic neural code, which evolves at a speed commensurate with its corresponding linguistic level. This hierarchical dynamic coding allows information to be maintained over time while limiting interference between successive features. Overall, HDC reveals how the human brain continuously builds and maintains a language hierarchy during natural speech comprehension, thereby anchoring linguistic theories to their biological implementations.
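
A dynamic neural code of the kind proposed here is typically diagnosed with temporal generalization: a decoder trained at one time point is tested at all others, and poor off-diagonal generalization indicates that the code evolves over time. The sketch below simulates a drifting spatial pattern to illustrate that logic; it is not the study's analysis.

```python
# Minimal sketch of temporal generalization on simulated data with a drifting code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_trials, n_sensors, n_times = 200, 30, 40
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)
for t in range(n_times):                              # condition effect drifts across sensors
    pattern = np.roll(np.eye(n_sensors)[0], t % n_sensors)
    X[y == 1, :, t] += 0.5 * pattern

tr, te = train_test_split(np.arange(n_trials), test_size=0.3, random_state=0)
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[tr, :, t_train], y[tr])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[te, :, t_test], y[te])

print("Mean accuracy on the diagonal: ", gen.diagonal().mean().round(2))
print("Mean accuracy off the diagonal:", gen[~np.eye(n_times, dtype=bool)].mean().round(2))
```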

7.
Ann N Y Acad Sci ; 1535(1): 121-136, 2024 May.
Article in English | MEDLINE | ID: mdl-38566486

ABSTRACT

While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.


Subject(s)
Auditory Perception , Emotions , Music , Reward , Music/psychology , Humans , Male , Female , Emotions/physiology , Adult , Auditory Perception/physiology , Pleasure/physiology , Young Adult , Acoustic Stimulation/methods
8.
Cognition ; 245: 105737, 2024 04.
Article in English | MEDLINE | ID: mdl-38342068

ABSTRACT

Phonological statistical learning - our ability to extract meaningful regularities from spoken language - is considered critical in the early stages of language acquisition, in particular for helping to identify discrete words in continuous speech. Most phonological statistical learning studies use an experimental task introduced by Saffran et al. (1996), in which the syllables forming the words to be learned are presented continuously and isochronously. This raises the question of the extent to which this purportedly powerful learning mechanism is robust to the kinds of rhythmic variability that characterize natural speech. Here, we tested participants with arrhythmic, semi-rhythmic, and isochronous speech during learning. In addition, we investigated how input rhythmicity interacts with two other factors previously shown to modulate learning: prior knowledge (syllable order plausibility with respect to participants' first language) and learners' speech auditory-motor synchronization ability. We show that words are extracted by all learners even when the speech input is completely arrhythmic. Interestingly, high auditory-motor synchronization ability increases statistical learning when the speech input is temporally more predictable, but only when prior knowledge can also be used. This suggests an additional mechanism for learning, based on predictions not only about when upcoming speech will occur but also about what it will be.
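
The statistical-learning cue in Saffran-style streams is the syllable-to-syllable transitional probability, which is high within words and drops at word boundaries. The toy lexicon below illustrates the computation; it is not the stimulus set used in the study.

```python
# Minimal sketch: transitional probabilities in a concatenated syllable stream.
from collections import Counter
import random

words = ["tupiro", "golabu", "bidaku", "padoti"]          # toy artificial lexicon
syllabify = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]

random.seed(0)
stream = []
for _ in range(300):                                      # random word concatenation
    stream += syllabify(random.choice(words))

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

print("Within-word TP, P(pi|tu):", round(tp[("tu", "pi")], 2))        # close to 1.0
cross = {p[1]: round(v, 2) for p, v in tp.items() if p[0] == "ro"}    # word-final syllable
print("Cross-boundary TPs from 'ro':", cross)                         # roughly 0.25 each
```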


Subject(s)
Individuality , Speech Perception , Humans , Learning , Linguistics , Language Development , Speech
9.
Perspect Psychol Sci ; : 17456916231217722, 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38232303

ABSTRACT

Emotional voices attract considerable attention. A search on any browser using "emotional prosody" as a key phrase leads to more than a million entries. Such interest is evident in the scientific literature as well; readers are reminded in the introductory paragraphs of countless articles of the great importance of prosody and that listeners easily infer the emotional state of speakers through acoustic information. However, despite decades of research on this topic and important achievements, the mapping between acoustics and emotional states is still unclear. In this article, we chart the rich literature on emotional prosody for both newcomers to the field and researchers seeking updates. We also summarize problems revealed by a sample of the literature of the last decades and propose concrete research directions for addressing them, ultimately to satisfy the need for more mechanistic knowledge of emotional prosody.

10.
Sci Data ; 10(1): 862, 2023 12 04.
Article in English | MEDLINE | ID: mdl-38049487

ABSTRACT

The "MEG-MASC" dataset provides a curated set of raw magnetoencephalography (MEG) recordings of 27 English speakers who listened to two hours of naturalistic stories. Each participant performed two identical sessions, involving listening to four fictional stories from the Manually Annotated Sub-Corpus (MASC) intermixed with random word lists and comprehension questions. We time-stamp the onset and offset of each word and phoneme in the metadata of the recording, and organize the dataset according to the 'Brain Imaging Data Structure' (BIDS). This data collection provides a suitable benchmark to large-scale encoding and decoding analyses of temporally-resolved brain responses to speech. We provide the Python code to replicate several validations analyses of the MEG evoked responses such as the temporal decoding of phonetic features and word frequency. All code and MEG, audio and text data are publicly available to keep with best practices in transparent and reproducible research.


Subject(s)
Magnetoencephalography , Speech Perception , Humans , Brain/physiology , Brain Mapping/methods , Magnetoencephalography/methods , Speech , Speech Perception/physiology
11.
Commun Biol ; 6(1): 1153, 2023 11 13.
Article in English | MEDLINE | ID: mdl-37957351

ABSTRACT

In natural environments, background noise can degrade the integrity of acoustic signals, posing a problem for animals that rely on their vocalizations for communication and navigation. A simple behavioral strategy to combat acoustic interference would be to restrict call emissions to periods of low-amplitude or no noise. Using audio playback and computational tools for the automated detection of over 2.5 million vocalizations from groups of freely vocalizing bats, we show that bats (Carollia perspicillata) can dynamically adapt the timing of their calls to avoid acoustic jamming in both predictably and unpredictably patterned noise. This study demonstrates that bats spontaneously seek out temporal windows of opportunity for vocalizing in acoustically crowded environments, providing a mechanism for efficient echolocation and communication in cluttered acoustic landscapes.


Subject(s)
Chiroptera , Echolocation , Animals , Vocalization, Animal , Noise , Acoustics
12.
Trends Cogn Sci ; 27(11): 996-1007, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37625973

ABSTRACT

The classical notion of a 'language of thought' (LoT), advanced prominently by the philosopher Jerry Fodor, is an influential position in cognitive science whereby the mental representations underpinning thought are considered to be compositional and productive, enabling the construction of new complex thoughts from more primitive symbolic concepts. LoT theory has been challenged because a neural implementation has been deemed implausible. We disagree. Examples of critical computational ingredients needed for a neural implementation of a LoT have in fact been demonstrated, in particular in the hippocampal spatial navigation system of rodents. Here, we show that cell types found in spatial navigation (border cells, object cells, head-direction cells, etc.) provide key types of representation and computation required for the LoT, underscoring its neurobiological viability.

13.
Proc Natl Acad Sci U S A ; 120(36): e2215710120, 2023 09 05.
Article in English | MEDLINE | ID: mdl-37639606

ABSTRACT

The beginnings of words are, in some informal sense, special. This intuition is widely shared, for example, when playing word games. Less apparent is whether the intuition is substantiated empirically and what the underlying organizational principle(s) might be. Here, we answer this seemingly simple question in a quantitatively clear way. Based on arguments about the interplay between lexical storage and speech processing, we examine whether the distribution of information among different speech sounds of words is governed by a critical computational unit for online speech perception and production: syllables. By analyzing lexical databases of twelve languages, we demonstrate that there is a compelling asymmetry between syllable beginnings (onsets) versus ends (codas) in their involvement in distinguishing words stored in the lexicon. In particular, we show that the functional advantage of syllable onset reflects an asymmetrical distribution of lexical informativeness within the syllable unit but not an effect of a global decay of informativeness from the beginning to the end of a word. The converging finding across languages from a range of typological families supports the conjecture that the syllable unit, while being a critical primitive for both speech perception and production, is also a key organizational constraint for lexical storage.
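
The onset/coda asymmetry can be illustrated with a toy calculation: in a small CVC lexicon, compare how much uncertainty about the word remains after observing only the onset vs. only the coda. The mini-lexicon below is made up for illustration; the study itself analyzed full lexical databases of twelve languages.

```python
# Minimal sketch: residual uncertainty about the word after seeing one segment.
import math
from collections import Counter

lexicon = ["cat", "cap", "can", "cut", "cup", "bat", "bad", "bit",
           "pit", "pin", "pan", "pad", "map", "man", "mat", "nap"]

def residual_entropy(position):
    """Expected entropy (bits) of word identity given the segment at `position`."""
    counts = Counter(w[position] for w in lexicon)
    total = len(lexicon)
    return sum((n / total) * math.log2(n) for n in counts.values())

print("After the onset (C1):", round(residual_entropy(0), 2), "bits of uncertainty left")
print("After the coda  (C3):", round(residual_entropy(2), 2), "bits of uncertainty left")
# In this toy lexicon the onset leaves less residual uncertainty than the coda,
# mirroring the onset advantage reported above.
```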


Subject(s)
Dissent and Disputes , Intuition , Humans , Databases, Factual , Language , Speech
14.
Neurobiol Lang (Camb) ; 4(1): 120-144, 2023.
Article in English | MEDLINE | ID: mdl-37229144

ABSTRACT

Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level acoustic processing for speech segmentation. How syllabic processing interacts with higher levels of speech processing, beyond segmentation, including the anatomical and neurophysiological characteristics of the networks involved, is debated. In two MEG experiments, we investigate lexical and sublexical word-level processing and the interactions with (acoustic) syllable processing using a frequency-tagging paradigm. Participants listened to disyllabic words presented at a rate of 4 syllables/s. Lexical content (native language), sublexical syllable-to-syllable transitions (foreign language), or mere syllabic information (pseudo-words) were presented. Two conjectures were evaluated: (i) syllable-to-syllable transitions contribute to word-level processing; and (ii) processing of words activates brain areas that interact with acoustic syllable processing. We show that syllable-to-syllable transition information, compared to mere syllable information, activated a bilateral superior temporal, middle temporal, and inferior frontal network. Lexical content additionally resulted in increased neural activity. Evidence for an interaction of word-level and acoustic syllable-level processing was inconclusive. Decreases in syllable tracking (cerebroacoustic coherence) in auditory cortex and increases in cross-frequency coupling between right superior and middle temporal and frontal areas were found when lexical content was present compared to all other conditions; however, these effects did not hold when conditions were compared separately. The data provide experimental insight into how subtle and sensitive syllable-to-syllable transition information is for word-level processing.
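
The frequency-tagging logic is simple to sketch: with disyllabic words at 4 syllables/s, a syllable-rate response is expected at 4 Hz, and word-level grouping adds a response at 2 Hz. The simulated signal below illustrates how those spectral peaks are read out; it is not the recorded MEG data.

```python
# Minimal sketch: spectral peaks at the syllable (4 Hz) and word (2 Hz) rates.
import numpy as np

fs = 200.0
t = np.arange(0, 60, 1 / fs)                      # one minute of simulated signal
signal = (np.cos(2 * np.pi * 4 * t)               # syllable-rate component
          + 0.4 * np.cos(2 * np.pi * 2 * t)       # word-rate component
          + np.random.randn(t.size))              # noise

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(signal)) / t.size
for f_target in (2.0, 4.0):
    i = int(np.argmin(np.abs(freqs - f_target)))
    print(f"Amplitude at {f_target:.0f} Hz: {spectrum[i]:.2f}")
```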

15.
J Exp Psychol Gen ; 152(9): 2438-2462, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37079828

ABSTRACT

Prosodic stresses are known to affect the meaning of utterances, but exactly how they do this is not known in many cases. We focus on the mechanisms underlying the meaning effects of ironic prosody (e.g., teasing or blaming through an ironic twist), which is frequently used in both personal and mass-media communication. To investigate ironic twists, we created 30 sentences that can be interpreted both ironically and nonironically, depending on the context. In Experiment 1, 14 of these sentences were identified as being most reliably understood in the two conditions. In Experiment 2, we recorded the 14 sentences spoken in both a literal and an ironic condition by 14 speakers, and the resulting 392 recorded sentences were acoustically analyzed. In Experiment 3, 20 listeners marked the acoustically prominent words, thus identifying perceived prosodic stresses. In Experiment 4, 53 participants rated how ironic they perceived the 392 recorded sentences to be. The combined analysis of irony ratings, acoustic features, and various prosodic stress characteristics revealed that ironic meaning is primarily signaled by a stress shift from the end of a sentence to an earlier position. This change in position might function as a "warning" cue for listeners to consider potential alternative meanings of the sentence. Thus, beyond giving individual words a stronger contrastive or emphatic role, the distribution of prosodic stresses can also prime opposite meanings for identical sentences, supporting the view that the dynamic aspect of prosody conveys important cues in human communication.


Subject(s)
Speech Perception , Humans , Language , Communication
16.
Psychol Sci ; 34(5): 633-643, 2023 05.
Article in English | MEDLINE | ID: mdl-37053267

ABSTRACT

Much of human learning happens through interaction with other people, but little is known about how this process is reflected in the brains of students and teachers. Here, we concurrently recorded electroencephalography (EEG) data from nine groups, each of which contained four students and a teacher. All participants were young adults from the northeast United States. Alpha-band (8-12 Hz) brain-to-brain synchrony between students predicted both immediate and delayed posttest performance. Further, brain-to-brain synchrony was higher in specific lecture segments associated with questions that students answered correctly. Brain-to-brain synchrony between students and teachers predicted learning outcomes at an approximately 300-ms lag in the students' brain activity relative to the teacher's brain activity, which is consistent with the time course of spoken-language comprehension. These findings provide key new evidence for the importance of collecting brain data simultaneously from groups of learners in ecologically valid settings.
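
The lagged brain-to-brain synchrony measure can be sketched as a cross-correlation between alpha-power envelopes, scanning lags of the student's signal relative to the teacher's. The envelopes below are simulated with a built-in 300 ms lag; this illustrates the measure, not the study's pipeline.

```python
# Minimal sketch: find the lag at which a student's alpha-power envelope best
# matches the teacher's, using simulated envelopes.
import numpy as np

fs = 100.0
rng = np.random.default_rng(4)
teacher = rng.standard_normal(int(600 * fs))           # 10-minute power envelope
lag_true = int(0.3 * fs)                               # student follows by ~300 ms
student = np.roll(teacher, lag_true) + 0.5 * rng.standard_normal(teacher.size)

lags_ms, corrs = [], []
for lag in range(0, int(0.6 * fs) + 1):                # test lags from 0 to 600 ms
    r = np.corrcoef(teacher[: teacher.size - lag], student[lag:])[0, 1]
    lags_ms.append(1000 * lag / fs)
    corrs.append(r)

best = int(np.argmax(corrs))
print(f"Peak student-teacher correlation at ~{lags_ms[best]:.0f} ms lag")
```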


Subject(s)
Brain , Learning , Young Adult , Humans , Students
17.
Proc Biol Sci ; 290(1994): 20222410, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36855868

ABSTRACT

When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of these systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (digit span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also auditory-motor synchronization may play a modulatory role.


Subject(s)
Comprehension , Speech , Humans , Acoustics , Extremities , Linguistics
18.
iScience ; 26(3): 106257, 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36909667

ABSTRACT

In conversational settings, seeing the speaker's face elicits internal predictions about the upcoming acoustic utterance. Understanding how the listener's cortical dynamics tune to the temporal statistics of audiovisual (AV) speech is thus essential. Using magnetoencephalography, we explored how large-scale frequency-specific dynamics of human brain activity adapt to AV speech delays. First, we show that the amplitude of phase-locked responses parametrically decreases with natural AV speech synchrony, a pattern that is consistent with predictive coding. Second, we show that the temporal statistics of AV speech affect large-scale oscillatory networks at multiple spatial and temporal resolutions. We demonstrate a spatial nestedness of oscillatory networks during the processing of AV speech: these oscillatory hierarchies are such that high-frequency activity (beta, gamma) is contingent on the phase response of low-frequency (delta, theta) networks. Our findings suggest that the endogenous temporal multiplexing of speech processing confers adaptability within the temporal regimes that are essential for speech comprehension.
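
The nestedness claim (high-frequency activity contingent on low-frequency phase) is commonly quantified with a phase-amplitude coupling index. The sketch below computes a simple mean-vector-length index on a simulated signal in which gamma amplitude is locked to theta phase; the band limits and rates are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch: phase-amplitude coupling between theta phase and gamma amplitude.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(5)
theta = np.cos(2 * np.pi * 5 * t)                                      # 5 Hz rhythm
gamma = (1 + np.cos(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 60 * t)   # gamma locked to theta phase
signal = theta + 0.5 * gamma + rng.standard_normal(t.size)

def bandpass(sig, lo, hi):
    b, a = butter(3, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, sig)

phase = np.angle(hilbert(bandpass(signal, 4, 6)))               # theta phase
amp = np.abs(hilbert(bandpass(signal, 50, 70)))                 # gamma envelope
mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()    # normalized mean vector length
print(f"Phase-amplitude coupling index: {mvl:.2f}")
```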

19.
Cogn Sci ; 47(3): e13267, 2023 03.
Article in English | MEDLINE | ID: mdl-36949729

ABSTRACT

The grammatical paradigm used to be a model for entire areas of cognitive science. Its primary tenet was that theories are axiomatic-like systems. A secondary tenet was that their predictions should be tested quickly and in great detail with introspective judgments. While the grammatical paradigm now often seems passé, we argue that in fact it continues to be as efficient as ever. Formal models are essential because they are explicit, highly predictive, and typically modular. They make numerous critical predictions, which must be tested efficiently; introspective judgments do just this. We further argue that the grammatical paradigm continues to be fruitful. Within linguistics, implicature theory is a recent example, with a combination of formal explicitness, modularity, and interaction with experimental work. Beyond traditional linguistics, the grammatical paradigm has proven fruitful in the study of gestures and emojis; literature ("Free Indirect Discourse"); picture semantics and comics; music and dance cognition; and even reasoning and concepts. We argue, however, that the grammatical paradigm must be adapted to contemporary cognitive science. Computational methods are essential to derive quantitative predictions from formal models (Bayesian pragmatics is an example). And data collection techniques offer an ever richer continuum of options, from introspective judgments to large-scale experiments, which makes it possible to optimize the cost/benefit ratio of the empirical methods that are chosen to test theories.


Subject(s)
Linguistics , Semantics , Humans , Bayes Theorem , Cognition , Judgment
20.
Neural Netw ; 162: 199-211, 2023 May.
Article in English | MEDLINE | ID: mdl-36913820

ABSTRACT

Natural and artificial audition can in principle acquire different solutions to a given problem. The constraints of the task, however, can nudge the cognitive science and engineering of audition to qualitatively converge, suggesting that a closer mutual examination would potentially enrich artificial hearing systems and process models of the mind and brain. Speech recognition - an area ripe for such exploration - is inherently robust in humans to a number of transformations at various spectrotemporal granularities. To what extent are these robustness profiles accounted for by high-performing neural network systems? We bring together experiments in speech recognition under a single synthesis framework to evaluate state-of-the-art neural networks as stimulus-computable, optimized observers. In a series of experiments, we (1) clarify how influential speech manipulations in the literature relate to each other and to natural speech, (2) show the granularities at which machines exhibit out-of-distribution robustness, reproducing classical perceptual phenomena in humans, (3) identify the specific conditions where model predictions of human performance differ, and (4) demonstrate a crucial failure of all artificial systems to perceptually recover where humans do, suggesting alternative directions for theory and model building. These findings encourage a tighter synergy between the cognitive science and engineering of audition.
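
The evaluation harness implied above can be sketched generically: apply a controlled manipulation to speech, run it through the system under test, and score the transcript against the intended words. In the sketch below, `recognize` is a placeholder for whichever recognizer is being treated as the observer, and time compression stands in for the many manipulations studied.

```python
# Minimal sketch of a robustness harness; `recognize` is a placeholder, not a real API.
import numpy as np
from scipy.signal import resample

def time_compress(waveform, factor):
    """Uniformly speed up speech by resampling (factor > 1 = faster)."""
    return resample(waveform, int(len(waveform) / factor))

def recognize(waveform, fs):
    """Stand-in for a neural speech recognizer; returns a transcript string."""
    raise NotImplementedError("plug in an actual ASR system here")

def word_error_rate(reference, hypothesis):
    """Word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[-1, -1] / max(len(r), 1)

# Example use, assuming `audio`, `fs`, and the reference `transcript` are available:
# for factor in (1.0, 2.0, 3.0):
#     print(factor, word_error_rate(transcript, recognize(time_compress(audio, factor), fs)))
```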


Subject(s)
Speech Perception , Speech , Humans , Neural Networks, Computer , Brain