Results 1 - 20 of 41
1.
Cognition ; 244: 105689, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38219453

ABSTRACT

Learning from sequential statistics is a general capacity common across many cognitive domains and species. One form of statistical learning (SL) - learning to segment "words" from continuous streams of speech syllables in which the only segmentation cue is ostensibly the transitional (or conditional) probability from one syllable to the next - has been studied in great detail. Typically, this phenomenon is modeled as the calculation of probabilities over discrete, featureless units. Here we present an alternative model, in which sequences are learned as trajectories through a similarity space. A simple recurrent network coding syllables with representations that capture the similarity relations among them correctly simulated the result of a classic SL study, as did a similar model that encoded syllables as three-dimensional points in a continuous similarity space. We then used the simulations to identify a sequence of "words" that produces the reverse of the typical SL effect, i.e., part-words are predicted to be more familiar than words. Results from two experiments with human participants are consistent with the simulation results. Additional analyses identified features that drive differences in what is learned from a set of artificial languages that have the same transitional probabilities among syllables.
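The transitional-probability computation at the heart of this paradigm can be sketched in a few lines. This is a generic illustration with made-up syllables, not the authors' model:

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next syllable | current syllable) from adjacent pairs."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream built from two hypothetical "words" (tu-pi-ro, go-la-bu):
# within-word transitions repeat consistently, across-word transitions vary.
stream = ["tu", "pi", "ro", "go", "la", "bu", "tu", "pi", "ro",
          "tu", "pi", "ro", "go", "la", "bu"]
tps = transitional_probabilities(stream)
# Within-word TP ("tu" -> "pi") exceeds the across-word TP ("ro" -> "go").
```

Classic SL accounts assume segmentation falls out of dips in these probabilities at word boundaries; the abstract's point is that a similarity-space model can reproduce, and for some languages reverse, the familiarity pattern those probabilities predict.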


Subject(s)
Speech Perception , Humans , Phonetics , Language , Speech , Probability
2.
J Educ Psychol ; 114(4): 855-869, 2022 May.
Article in English | MEDLINE | ID: mdl-35602092

ABSTRACT

There is now considerable evidence regarding the types of interventions that are effective, on average, at remediating reading disabilities. It is generally unclear, however, what predicts the magnitude of individual-level change following a given intervention. We examine new predictors of intervention gains that are theoretically grounded in computational models of reading and focus on individual differences in the functional organization of the reading system. Specifically, we estimate the extent to which children with reading disabilities (n = 118, 3rd-4th graders) rely on two sources of information during an oral word reading task - print-speech correspondences and semantic imageability - before and after a phonologically weighted intervention. We show that children who relied more on print-speech regularities and less on imageability pre-intervention had better intervention gains. In parallel, children who over the course of the intervention exhibited greater increases in their reliance on print-speech correspondences and greater decreases in their reliance on imageability had better intervention outcomes. Importantly, these two factors were differentially related to specific reading task outcomes: greater reliance on print-speech correspondences was associated with pseudoword naming, whereas (lesser) reliance on imageability was related to word reading and comprehension. We discuss the implications of these findings for theoretical models of reading acquisition and educational practice.

3.
Brain Lang ; 219: 104961, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33965686

ABSTRACT

Previous studies have shown that reading experience reshapes speech processing. Orthography may be implemented in the brain either by restructuring phonological representations or by being co-activated during spoken word recognition. This study utilized event-related functional magnetic resonance imaging and functional connectivity analysis to examine the neural mechanisms underlying two types of orthographic effects in a Chinese auditory semantic categorization task: phonology-to-orthography consistency (POC) and homophone density (HD). We found that the POC effects originated from the speech network, suggesting that sublexical orthographic information could change the organization of preexisting phonological representations when learning to read. Meanwhile, the HD effects were localized to the left fusiform and lingual gyri, suggesting that lexical orthographic knowledge may be activated online during spoken word recognition. These results demonstrate the distinct natures and neural mechanisms of the POC and HD effects on Chinese spoken word recognition.


Subject(s)
Speech Perception , China , Humans , Language , Phonetics , Reading , Semantics , Speech
4.
J Mem Lang ; 114, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32694882

ABSTRACT

Statistical views of literacy development maintain that proficient reading requires the assimilation of myriad statistical regularities present in the writing system. Indeed, previous studies have tied statistical learning (SL) abilities to reading skills, establishing the existence of a link between the two. However, some issues are currently left unanswered, including questions regarding the underlying bases for these associations as well as the types of statistical regularities actually assimilated by developing readers. Here we present an alternative approach to study the role of SL in literacy development, focusing on individual differences among beginning readers. Instead of using an artificial task to estimate SL abilities, our approach identifies individual differences in children's reliance on statistical regularities as reflected by actual reading behavior. We specifically focus on individuals' reliance on regularities in the mapping between print and speech versus associations between print and meaning in a word naming task. We present data from 399 children, showing that those whose oral naming performance is impacted more by print-speech regularities and less by associations between print and meaning have better reading skills. These findings suggest that a key route by which SL mechanisms impact developing reading abilities is via their role in the assimilation of sub-lexical regularities between printed and spoken language - and more generally, in detecting regularities that are more reliable than others. We discuss the implications of our findings for both SL and reading theories.

5.
Psychon Bull Rev ; 27(5): 1052-1058, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32542482

ABSTRACT

A large body of research has demonstrated that humans attend to adjacent co-occurrence statistics when processing sequential information, and bottom-up prosodic information can influence learning. In this study, we investigated how top-down grouping cues can influence statistical learning. Specifically, we presented English sentences that were structurally equivalent to each other, which induced top-down expectations of grouping in the artificial language sequences that immediately followed. We show that adjacent dependencies in the artificial language are learnable when these entrained boundaries bracket the adjacent dependencies into the same sub-sequence, but are not learnable when the elements cross an induced boundary, even though that boundary is not present in the bottom-up sensory input. We argue that when there is top-down bracketing information in the learning sequence, statistical learning takes place for elements bracketed within sub-sequences rather than all the elements in the continuous sequence. This limits the amount of linguistic computations that need to be performed, providing a domain over which statistical learning can operate.


Subject(s)
Cues , Probability Learning , Psycholinguistics , Adult , Female , Humans , Male , Young Adult
6.
Cogn Sci ; 43(8): e12740, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31446661

ABSTRACT

In typical statistical learning studies, researchers define sequences in terms of the probability of the next item in the sequence given the current item (or items), and they show that high probability sequences are treated as more familiar than low probability sequences. Existing accounts of these phenomena all assume that participants represent statistical regularities more or less as they are defined by the experimenters - as sequential probabilities of symbols in a string. Here we offer an alternative, or possibly supplementary, hypothesis. Specifically, rather than identifying or labeling individual stimuli discretely in order to predict the next item in a sequence, we need only assume that the participant is able to represent the stimuli as evincing particular similarity relations to one another, with sequences represented as trajectories through this similarity space. We present experiments in which this hypothesis makes sharply different predictions from hypotheses based on the assumption that sequences are learned over discrete, labeled stimuli. We also present a series of simulation models that encode stimuli as positions in a continuous two-dimensional space, and predict the next location from the current location. Although no model captures all of the data presented here, the results of three critical experiments are more consistent with the view that participants represent trajectories through similarity space rather than sequences of discrete labels under particular conditions.
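The trajectory hypothesis can be caricatured as a model that stores, for each position in a two-dimensional similarity space, the average position that followed it during training, and scores test sequences by prediction error. The coordinates and the averaging rule below are my assumptions for illustration, not the paper's simulation models:

```python
import numpy as np

def train_transitions(points):
    """For each 2D point (a stimulus), store the mean of the points that followed it."""
    successors = {}
    for cur, nxt in zip(points[:-1], points[1:]):
        successors.setdefault(cur, []).append(nxt)
    return {cur: np.mean(nxts, axis=0) for cur, nxts in successors.items()}

def prediction_error(model, points):
    """Mean Euclidean distance between predicted and actual next positions."""
    errs = [np.linalg.norm(model[cur] - np.array(nxt))
            for cur, nxt in zip(points[:-1], points[1:]) if cur in model]
    return float(np.mean(errs))

# Training: a repeating trajectory between two locations in similarity space.
train = [(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (1.0, 0.0), (0.0, 0.0)]
model = train_transitions(train)
# A familiar trajectory yields low error; a deviant one yields high error.
familiar = prediction_error(model, [(0.0, 0.0), (1.0, 0.0)])
deviant = prediction_error(model, [(0.0, 0.0), (5.0, 5.0)])
```

On this view, "familiarity" is simply how well a test trajectory follows the learned flow through the space, with no discrete stimulus labels anywhere in the computation.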


Subject(s)
Learning , Recognition, Psychology , Computer Simulation , Humans
7.
Cogn Psychol ; 113: 101223, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31212192

ABSTRACT

Much of the statistical learning literature has focused on adjacent dependency learning, which has shown that learners are capable of extracting adjacent statistics from continuous language streams. In contrast, studies on non-adjacent dependency learning have mixed results, with some showing success and others failure. We review the literature on non-adjacent dependency learning and examine various theories proposed to account for these results, including the proposed necessity of the presence of pauses in the learning stream, or proposals regarding competition between adjacent and non-adjacent dependency learning such that high variability of middle elements is beneficial to learning. Here we challenge those accounts by showing successful learning of non-adjacent dependencies under conditions that are inconsistent with predictions of previous theories. We show that non-adjacent dependencies are learnable without pauses at dependency edges in a variety of artificial language designs. Moreover, we find no evidence of a relationship between non-adjacent dependency learning and the robustness of adjacent statistics. We demonstrate that our two-step statistical learning model can account for all of our non-adjacent dependency learning results, providing a unified learning account of adjacent and non-adjacent dependency learning. Finally, we discuss the theoretical implications of our findings for natural language acquisition, and argue that the dependency learning process can be a precursor to other language acquisition tasks that are vital to natural language acquisition.
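The non-adjacent dependencies at issue here (A_X_B structures with a variable middle element) can be quantified by skipping the middle item when counting transitions. The sketch below is a generic illustration; the gap parameter and element names are mine, not the authors' two-step model:

```python
from collections import Counter

def nonadjacent_tps(stream, gap=1):
    """Estimate P(element at i+gap+1 | element at i), skipping `gap` middle items."""
    pairs = list(zip(stream, stream[gap + 1:]))
    first_counts = Counter(a for a, _ in pairs)
    pair_counts = Counter(pairs)
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A_X_B language with a variable middle element: A predicts B perfectly
# at a distance of one, even though the adjacent A -> X statistics are weak.
stream = ["A", "x", "B", "A", "y", "B", "A", "z", "B"]
tps = nonadjacent_tps(stream, gap=1)
```

High variability of the middle element (x, y, z) leaves the non-adjacent A-B statistic at ceiling while diluting every adjacent statistic, which is exactly the contrast the competition accounts reviewed above turn on.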


Subject(s)
Language Development , Language , Learning , Humans , Psycholinguistics
8.
J Exp Psychol Gen ; 146(12): 1738-1748, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29251987

ABSTRACT

Because of the hierarchical organization of natural languages, words that are syntactically related are not always linearly adjacent. For example, the subject and verb in the child always runs agree in person and number, although they are not adjacent in the sequence of words. Because such dependencies are indicative of abstract linguistic structure, it is of significant theoretical interest how these relationships are acquired by language learners. Most experiments that investigate nonadjacent dependency (NAD) learning have used artificial languages in which the to-be-learned dependencies are isolated, by presenting the minimal sequences that contain the dependent elements. However, dependencies in natural language are not typically isolated in this way. We report the first demonstration to our knowledge of successful learning of embedded NADs, in which silences do not mark dependency boundaries. Subjects heard passages of English with a predictable structure, interspersed with passages of the artificial language. The English sentences were designed to induce boundaries in the artificial languages. In Experiments 1 and 3 the artificial NADs were contained within the induced boundaries and subjects learned them, whereas in Experiments 2 and 4 the NADs crossed the induced boundaries and subjects did not learn them. We take this as evidence that sentential structure was "carried over" from the English sentences and used to organize the artificial language. This approach provides several new insights into the basic mechanisms of NAD learning in particular and statistical learning in general.


Subject(s)
Multilingualism , Probability Learning , Psycholinguistics , Adult , Humans , Young Adult
9.
Hum Brain Mapp ; 38(12): 6096-6106, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28940969

ABSTRACT

Drawing from a common lexicon of semantic units, humans fashion narratives whose meaning transcends that of their individual utterances. However, while brain regions that represent lower-level semantic units, such as words and sentences, have been identified, questions remain about the neural representation of narrative comprehension, which involves inferring cumulative meaning. To address these questions, we exposed English, Mandarin, and Farsi native speakers to native language translations of the same stories during fMRI scanning. Using a new technique in natural language processing, we calculated the distributed representations of these stories (capturing the meaning of the stories in high-dimensional semantic space), and demonstrate that using these representations we can identify the specific story a participant was reading from the neural data. Notably, this was possible even when the distributed representations were calculated using stories in a different language than the participant was reading. Our results reveal that identification relied on a collection of brain regions most prominently located in the default mode network. These results demonstrate that neuro-semantic encoding of narratives happens at levels higher than individual semantic units and that this encoding is systematic across both individuals and languages.
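The identification step described above reduces to nearest-neighbor matching in a shared semantic space: the pattern decoded from the neural data is compared against each story's distributed representation and the best match wins. A toy sketch with fabricated vectors (the study's actual embeddings and decoding pipeline are not reproduced here):

```python
import numpy as np

def identify_story(decoded, story_vectors):
    """Return the story whose representation is most cosine-similar to `decoded`."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(story_vectors, key=lambda name: cosine(decoded, story_vectors[name]))

# Fabricated story embeddings and a fabricated decoded neural pattern.
stories = {"story_A": np.array([1.0, 0.2, 0.0]),
           "story_B": np.array([0.0, 1.0, 0.3])}
decoded = np.array([0.9, 0.3, 0.1])
```

Because cosine similarity only cares about direction in the semantic space, the same matching works even when the story vectors were computed from translations in another language, which is the cross-language result the abstract highlights.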


Subject(s)
Brain/physiology , Comprehension/physiology , Multilingualism , Narration , Reading , Semantics , Adult , Brain/diagnostic imaging , Brain Mapping , Culture , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Pattern Recognition, Visual/physiology , Psycholinguistics , Translating , Young Adult
10.
Cereb Cortex ; 27(11): 5197-5210, 2017 Nov 1.
Article in English | MEDLINE | ID: mdl-27664959

ABSTRACT

Mental and neural representations of words are at the core of understanding the cognitive and neural mechanisms of reading. Despite extensive studies, the nature of visual word representation remains highly controversial due to methodological limitations. In particular, it is unclear whether the fusiform cortex contains only abstract orthographic representation, or represents both lower and higher level orthography as well as phonology. Using representational similarity analysis, we integrated behavioral ratings, computational models of reading and visual object recognition, and neuroimaging data to examine the nature of visual word representations in the fusiform cortex. Our results provided clear evidence that the middle and anterior fusiform represented both phonological and orthographic information. Whereas lower level orthographic information was represented at every stage of the ventral visual stream, abstract orthographic information was increasingly represented along the posterior-to-anterior axis. Furthermore, the left and right hemispheres were tuned to high- and low-frequency orthographic information, respectively. These results help to resolve the long-standing debates regarding the role of the fusiform in reading, and have significant implications for the development of psychological, neural, and computational theories of reading.
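Representational similarity analysis, used above to integrate behavioral ratings, computational models, and neuroimaging data, compares the dissimilarity structure of item-by-item response patterns across measurement spaces. A minimal sketch with toy patterns and correlation-distance RDMs (the study's specific models and ROIs are not reproduced):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between item patterns."""
    return 1.0 - np.corrcoef(patterns)

def rsa_similarity(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

# Toy "neural" patterns for three items (rows), four measurement channels each.
patterns = np.array([[1.0, 2.0, 3.0, 4.0],
                     [2.0, 4.0, 6.0, 8.0],
                     [4.0, 3.0, 2.0, 1.0]])
m = rdm(patterns)
```

The same recipe applies whether the two RDMs come from a fusiform ROI and an orthographic model, or from behavior and a phonological model, which is what lets RSA adjudicate between orthographic and phonological accounts of the same voxels.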


Subject(s)
Pattern Recognition, Visual/physiology , Phonetics , Reading , Semantics , Temporal Lobe/physiology , Adolescent , Adult , Brain Mapping , Female , Functional Laterality , Humans , Magnetic Resonance Imaging , Male , Models, Theoretical , Neuropsychological Tests , Photic Stimulation , Psycholinguistics , Temporal Lobe/diagnostic imaging , Young Adult
11.
Front Psychol ; 7: 947, 2016.
Article in English | MEDLINE | ID: mdl-27445914

ABSTRACT

Visual word recognition involves mappings among orthographic, phonological, and semantic codes. In alphabetic languages, it is hard to disentangle the effects of these codes, because orthographically well-formed words are typically pronounceable, confounding orthographic and phonological processes, and orthographic cues to meaning are rare and, where they occur, morphological, confounding orthographic and semantic processes. In Chinese character recognition, it is possible to explore orthography-to-phonology (O-P) and orthography-to-semantics (O-S) processes independently by taking advantage of the distinct phonetic and semantic components in Chinese phonograms. We analyzed data from an fMRI experiment using lexical decision for Chinese characters to explore the sensitivity of areas associated with character recognition to orthographic, phonological, and semantic processing. First, a correlation approach was used to identify regions associated with reaction time, frequency, consistency, and visual complexity. Then, these ROIs were examined for their responses to stimuli with different types of information available. These results revealed two neural pathways, one for O-S processing relying on the left middle temporal gyrus and angular gyrus, and the other for O-P processing relying on the inferior frontal gyrus and insula. The two neural routes form a shared neural network for both real and pseudo-characters, and their cooperative division of labor reflects the neural basis for processing different types of characters. Results are broadly consistent with findings from alphabetic languages, as predicted by reading models that assume the same general architecture for logographic and alphabetic scripts.

12.
Sci Stud Read ; 20(1): 1-5, 2016.
Article in English | MEDLINE | ID: mdl-26966346

ABSTRACT

Reading research is increasingly a multi-disciplinary endeavor involving more complex, team-based science approaches. These approaches offer the potential of capturing the complexity of reading development, the emergence of individual differences in reading performance over time, how these differences relate to the development of reading difficulties and disability, and more fully understanding the nature of skilled reading in adults. This special issue focuses on the potential opportunities and insights that early and richly integrated advanced statistical and computational modeling approaches can provide to our foundational (and translational) understanding of reading. The issue explores how computational and statistical modeling, using both observed and simulated data, can serve as a contact point among research domains and topics, complement other data sources and critically provide analytic advantages over current approaches.

13.
Proc Natl Acad Sci U S A ; 112(50): 15510-5, 2015 Dec 15.
Article in English | MEDLINE | ID: mdl-26621710

ABSTRACT

We propose and test a theoretical perspective in which a universal hallmark of successful literacy acquisition is the convergence of the speech and orthographic processing systems onto a common network of neural structures, regardless of how spoken words are represented orthographically in a writing system. During functional MRI, skilled adult readers of four distinct and highly contrasting languages, Spanish, English, Hebrew, and Chinese, performed an identical semantic categorization task to spoken and written words. Results from three complementary analytic approaches demonstrate limited language variation, with speech-print convergence emerging as a common brain signature of reading proficiency across the wide spectrum of selected languages, whether their writing system is alphabetic or logographic, whether it is opaque or transparent, and regardless of the phonological and morphological structure it represents.


Subject(s)
Brain/physiology , Language , Reading , Analysis of Variance , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Speech , Task Performance and Analysis , Young Adult
14.
Neuropsychologia ; 72: 94-104, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25934634

ABSTRACT

Learning a foreign language in a natural immersion context with high exposure to the new language has been shown to change the way speech sounds of that language are processed at the neural level. It remains unclear, however, to what extent this is also the case for classroom-based foreign language learning, particularly in children. To this end, we presented a mismatch negativity (MMN) experiment during EEG recordings as part of a longitudinal developmental study: 38 monolingual (Swiss-) German-speaking children (7.5 years) were tested shortly before they started to learn English at school and followed up one year later. Moreover, 22 (Swiss-) German adults were recorded. In contrast to the positive mismatch response originally found in children, an MMN emerged when a high-pass filter of 3 Hz was applied. The overlap of a slow-wave positivity with the MMN indicates that two concurrent mismatch processes were elicited in children. The children's MMN in response to the non-native speech contrast was smaller than that to the native speech contrast irrespective of foreign language learning, suggesting that no additional neural resources were committed to processing the foreign language speech sound after one year of classroom-based learning.


Subject(s)
Contingent Negative Variation/physiology , Discrimination Learning , Multilingualism , Phonetics , Speech Perception/physiology , Acoustic Stimulation , Age Factors , Analysis of Variance , Brain Mapping , Child , Discrimination Learning/physiology , Electroencephalography , Evoked Potentials, Auditory , Female , Fourier Analysis , Humans , Longitudinal Studies , Male , Reaction Time
15.
J Exp Psychol Hum Percept Perform ; 41(4): 1124-38, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26010588

ABSTRACT

Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning.


Subject(s)
Auditory Perception/physiology , Concept Formation/physiology , Learning/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Adult , Humans , Young Adult
16.
PLoS One ; 10(5): e0124388, 2015.
Article in English | MEDLINE | ID: mdl-26017384

ABSTRACT

Differences in how writing systems represent language raise important questions about whether there could be a universal functional architecture for reading across languages. In order to study potential language differences in the neural networks that support reading skill, we collected fMRI data from readers of alphabetic (English) and morpho-syllabic (Chinese) writing systems during two reading tasks. In one, participants read short stories under conditions that approximate natural reading, and in the other, participants decided whether individual stimuli were real words or not. Prior work comparing these two writing systems has overwhelmingly used meta-linguistic tasks, generally supporting the conclusion that the reading system is organized differently for skilled readers of Chinese and English. We observed that language differences in the reading network were strongly dependent on task. In lexical decision, a pattern consistent with prior research was observed in which the middle frontal gyrus (MFG) and right fusiform gyrus (rFFG) were more active for Chinese than for English, whereas the posterior temporal sulcus was more active for English than for Chinese. We found a very different pattern of language effects in a naturalistic reading paradigm, during which significant differences were only observed in visual regions not typically considered specific to the reading network, and in the middle temporal gyrus, which is thought to be important for direct mapping of orthography to semantics. Indeed, in areas that are often discussed as supporting distinct cognitive or linguistic functions between the two languages, we observed an interaction. Specifically, language differences were most pronounced in MFG and rFFG during the lexical decision task, whereas no language differences were observed in these areas during silent reading of text for comprehension.


Subject(s)
Brain/physiology , Language , Female , Humans , Linguistics , Male , Prefrontal Cortex/physiology
17.
Brain Lang ; 141: 35-49, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25528287

ABSTRACT

This study investigates the role of age of acquisition (AoA), socioeducational status (SES), and second language (L2) proficiency on the neural processing of L2 speech sounds. In a task of pre-attentive listening and passive viewing, Spanish-English bilinguals and a control group of English monolinguals listened to English syllables while watching a film of natural scenery. Eight regions of interest were selected from brain areas involved in speech perception and executive processes. The regions of interest were examined in two separate two-way ANOVAs (AoA × SES; AoA × L2 proficiency). The results showed that AoA was the main variable affecting the neural response in L2 speech processing. Direct comparisons between AoA groups of equivalent SES and proficiency level enhanced the intensity and magnitude of the results. These results suggest that AoA, more than SES and proficiency level, determines which brain regions are recruited for the processing of second language speech sounds.


Subject(s)
Brain/physiology , Multilingualism , Phonetics , Speech Perception , Adolescent , Adult , Age Factors , Attention , Brain/growth & development , Brain Mapping , Child , Female , Humans , Male , Middle Aged , Socioeconomic Factors
18.
J Neurosci ; 34(18): 6267-72, 2014 Apr 30.
Article in English | MEDLINE | ID: mdl-24790197

ABSTRACT

Recent research has shown that the degree to which speakers and listeners exhibit similar brain activity patterns during human linguistic interaction is correlated with communicative success. Here, we used an intersubject correlation approach in fMRI to test the hypothesis that a listener's ability to predict a speaker's utterance increases such neural coupling between speakers and listeners. Nine subjects listened to recordings of a speaker describing visual scenes that varied in the degree to which they permitted specific linguistic predictions. In line with our hypothesis, the temporal profile of listeners' brain activity was significantly more synchronous with the speaker's brain activity for highly predictive contexts in left posterior superior temporal gyrus (pSTG), an area previously associated with predictive auditory language processing. In this region, predictability differentially affected the temporal profiles of brain responses in the speaker and listeners respectively, in turn affecting correlated activity between the two: whereas pSTG activation increased with predictability in the speaker, listeners' pSTG activity instead decreased for more predictable sentences. Listeners additionally showed stronger BOLD responses for predictive images before sentence onset, suggesting that highly predictable contexts lead comprehenders to preactivate predicted words.
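Intersubject correlation in its simplest form is the Pearson correlation between the speaker's and a listener's regional BOLD time courses, computed region by region. A toy sketch with synthetic signals (a shared signal plus listener-specific noise; the study's preprocessing and statistics are not reproduced):

```python
import numpy as np

def intersubject_correlation(speaker_ts, listener_ts):
    """Pearson correlation between two subjects' time series for one region."""
    return float(np.corrcoef(speaker_ts, listener_ts)[0, 1])

# Synthetic time courses: the listener tracks the speaker's signal with noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
speaker = np.sin(t)
listener = speaker + 0.1 * rng.standard_normal(t.size)
coupling = intersubject_correlation(speaker, listener)
```

The hypothesis tested above is then a comparison of this coupling value between highly predictive and less predictive contexts within a region such as left pSTG.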


Subject(s)
Auditory Perception/physiology , Brain Mapping , Communication , Language , Temporal Lobe/physiology , Acoustic Stimulation , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen/blood , Predictive Value of Tests , Psycholinguistics , Temporal Lobe/blood supply , Young Adult
19.
Neuroimage ; 97: 262-70, 2014 Aug 15.
Article in English | MEDLINE | ID: mdl-24746955

ABSTRACT

Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information, ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may play in phonological awareness impairments thought to underlie developmental reading disabilities.


Subject(s)
Attention/physiology , Functional Laterality/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Cues , Evoked Potentials/physiology , Female , Humans , Judgment/physiology , Male , Photic Stimulation , Psychomotor Performance/physiology , Reaction Time/physiology , Young Adult
20.
Cognition ; 128(1): 82-102, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23618755

ABSTRACT

In order for statistical information to aid in complex developmental processes such as language acquisition, learning from higher-order statistics (e.g. across successive syllables in a speech stream to support segmentation) must be possible while perceptual abilities (e.g. speech categorization) are still developing. The current study examines how perceptual organization interacts with statistical learning. Adult participants were presented with multiple exemplars from novel, complex sound categories designed to reflect some of the spectral complexity and variability of speech. These categories were organized into sequential pairs and presented such that higher-order statistics, defined based on sound categories, could support stream segmentation. Perceptual similarity judgments and multi-dimensional scaling revealed that participants only perceived three perceptual clusters of sounds and thus did not distinguish the four experimenter-defined categories, creating a tension between lower level perceptual organization and higher-order statistical information. We examined whether the resulting pattern of learning is more consistent with statistical learning being "bottom-up," constrained by the lower levels of organization, or "top-down," such that higher-order statistical information of the stimulus stream takes priority over perceptual organization and perhaps influences perceptual organization. We consistently find evidence that learning is constrained by perceptual organization. Moreover, participants generalize their learning to novel sounds that occupy a similar perceptual space, suggesting that statistical learning occurs based on regions of or clusters in perceptual space. Overall, these results reveal a constraint on learning of sound sequences such that statistical information is determined based on lower level organization. These findings have important implications for the role of statistical learning in language acquisition.
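The claim that statistics are computed over perceptual clusters rather than experimenter-defined categories can be sketched as a two-stage computation: assign each exemplar to its nearest cluster in perceptual space, then count transitions over the cluster labels. The centroids and exemplar coordinates below are made up for illustration:

```python
import numpy as np
from collections import Counter

def assign_clusters(exemplars, centroids):
    """Label each exemplar (a point in perceptual space) with its nearest centroid."""
    return [int(np.argmin([np.linalg.norm(np.asarray(e) - np.asarray(c))
                           for c in centroids]))
            for e in exemplars]

def cluster_tps(labels):
    """Transitional probabilities computed over cluster labels, not raw exemplars."""
    pair_counts = Counter(zip(labels, labels[1:]))
    first_counts = Counter(labels[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Two perceptual clusters; the exemplar stream alternates between them.
centroids = [(0.0, 0.0), (10.0, 10.0)]
exemplars = [(0.0, 1.0), (9.0, 10.0), (1.0, 0.0), (10.0, 9.0)]
labels = assign_clusters(exemplars, centroids)
tps = cluster_tps(labels)
```

If listeners perceive three clusters where the experimenter defined four categories, the statistics available to the learner are those over the three labels, which is the bottom-up constraint the abstract argues for.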


Subject(s)
Learning/physiology , Mathematics , Perception/physiology , Acoustic Stimulation , Auditory Perception/physiology , Female , Humans , Judgment , Language , Language Development , Male , Speech Perception/physiology , Young Adult