Results 1 - 20 of 61
1.
Cogn Sci ; 48(5): e13449, 2024 May.
Article in English | MEDLINE | ID: mdl-38773754

ABSTRACT

We recently reported strong, replicable (i.e., replicated) evidence for lexically mediated compensation for coarticulation (LCfC; Luthra et al., 2021), whereby lexical knowledge influences a prelexical process. Critically, evidence for LCfC provides robust support for interactive models of cognition that include top-down feedback and is inconsistent with autonomous models that allow only feedforward processing. McQueen, Jesse, and Mitterer (2023) offer five counter-arguments against our interpretation; we respond to each of those arguments here and conclude that top-down feedback provides the most parsimonious explanation of extant data.


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Cognition , Language
2.
J Cogn ; 7(1): 38, 2024.
Article in English | MEDLINE | ID: mdl-38681820

ABSTRACT

The Time-Invariant String Kernel (TISK) model of spoken word recognition (Hannagan, Magnuson & Grainger, 2013; You & Magnuson, 2018) is an interactive activation model with many similarities to TRACE (McClelland & Elman, 1986). However, by replacing most time-specific nodes in TRACE with time-invariant open-diphone nodes, TISK uses orders of magnitude fewer nodes and connections than TRACE. Although TISK performed remarkably similarly to TRACE in simulations reported by Hannagan et al., the original TISK implementation did not include lexical feedback, precluding simulation of top-down effects, and leaving open the possibility that adding feedback to TISK might fundamentally alter its performance. Here, we demonstrate that when lexical feedback is added to TISK, it gains the ability to simulate top-down effects without losing the ability to simulate the fundamental phenomena tested by Hannagan et al. Furthermore, with feedback, TISK demonstrates graceful degradation when noise is added to input, although parameters can be found that also promote (less) graceful degradation without feedback. We review arguments for and against feedback in cognitive architectures, and conclude that feedback provides a computationally efficient basis for robust constraint-based processing.
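
To make the time-invariant coding concrete: a word's open diphones are all ordered pairs of its phonemes, adjacent or not. The minimal Python sketch below is an editorial illustration, not code from the article, and the phoneme labels are placeholders.

```python
from itertools import combinations

def open_diphones(phonemes):
    """Return the open diphones (ordered, possibly non-adjacent phoneme pairs)
    that a word would activate in a TISK-style time-invariant layer."""
    return [(a, b) for a, b in combinations(phonemes, 2)]

# Example: /k/ /ae/ /t/ ("cat") activates k-ae, k-t, and ae-t units.
print(open_diphones(["k", "ae", "t"]))
# [('k', 'ae'), ('k', 't'), ('ae', 't')]
```

Because the same open-diphone unit can respond wherever the pair occurs in the input stream, the representation avoids the duplicated, time-specific nodes that make TRACE so large.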

3.
Atten Percept Psychophys ; 86(3): 942-961, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38383914

ABSTRACT

Listeners have many sources of information available in interpreting speech. Numerous theoretical frameworks and paradigms have established that various constraints impact the processing of speech sounds, but it remains unclear how listeners might simultaneously consider multiple cues, especially those that differ qualitatively (i.e., with respect to timing and/or modality) or quantitatively (i.e., with respect to cue reliability). Here, we establish that cross-modal identity priming can influence the interpretation of ambiguous phonemes (Exp. 1, N = 40) and show that two qualitatively distinct cues - namely, cross-modal identity priming and auditory co-articulatory context - have additive effects on phoneme identification (Exp. 2, N = 40). However, we find no effect of quantitative variation in a cue - specifically, changes in the reliability of the priming cue did not influence phoneme identification (Exp. 3a, N = 40; Exp. 3b, N = 40). Overall, we find that qualitatively distinct cues can additively influence phoneme identification. While many existing theoretical frameworks address constraint integration to some degree, our results provide a step towards understanding how information that differs in both timing and modality is integrated in online speech perception.


Subject(s)
Cues , Phonetics , Speech Perception , Humans , Speech Perception/physiology , Young Adult , Female , Male , Adult
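
The additive pattern reported in Experiment 2 can be pictured as the two cues combining in log-odds space. The sketch below is a toy illustration under that assumption; the cue weights are invented, not estimates from the study.

```python
import math

def p_s_response(prime_bias, context_bias, baseline=0.0):
    """Probability of an /s/ response when a cross-modal priming cue and a
    coarticulatory-context cue contribute additively in log-odds."""
    logit = baseline + prime_bias + context_bias
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical cue weights (log-odds units): each cue shifts identification
# on its own, and their joint effect is the sum of the two shifts.
print(round(p_s_response(0.8, 0.0), 2))   # priming cue only
print(round(p_s_response(0.0, 0.6), 2))   # coarticulatory context only
print(round(p_s_response(0.8, 0.6), 2))   # both cues, additive in log-odds
```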
4.
Cognition ; 242: 105661, 2024 01.
Article in English | MEDLINE | ID: mdl-37944313

ABSTRACT

Whether top-down feedback modulates perception has deep implications for cognitive theories. Debate has been vigorous in the domain of spoken word recognition, where competing computational models and agreement on at least one diagnostic experimental paradigm suggest that the debate may eventually be resolvable. Norris and Cutler (2021) revisit arguments against lexical feedback in spoken word recognition models. They also incorrectly claim that recent computational demonstrations that feedback promotes accuracy and speed under noise (Magnuson et al., 2018) were due to the use of the Luce choice rule rather than adding noise to inputs (noise was in fact added directly to inputs). They also claim that feedback cannot improve word recognition because feedback cannot distinguish signal from noise. We have two goals in this paper. First, we correct the record about the simulations of Magnuson et al. (2018). Second, we explain how interactive activation models selectively sharpen signals via joint effects of feedback and lateral inhibition that boost lexically-coherent sublexical patterns over noise. We also review a growing body of behavioral and neural results consistent with feedback and inconsistent with autonomous (non-feedback) architectures, and conclude that parsimony supports feedback. We close by discussing the potential for synergy between autonomous and interactive approaches.


Subject(s)
Speech Perception , Feedback , Speech Perception/physiology , Language , Noise
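
The mechanism summarized above, lexical feedback and lateral inhibition jointly favoring lexically coherent sublexical patterns over noise, can be caricatured in a few lines of Python. The network below is a deliberately tiny stand-in, not TRACE and not the Magnuson et al. (2018) simulations; the lexicon, weights, and update rule are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

phonemes = ["k", "ae", "t", "d", "g"]
words = {"cat": ["k", "ae", "t"], "cad": ["k", "ae", "d"]}

# Bottom-up and top-down weights: a word supports, and is supported by,
# its constituent phonemes.
W = np.array([[1.0 if p in ph else 0.0 for p in phonemes]
              for ph in words.values()])          # words x phonemes

def run(noise_sd, feedback=0.3, inhibition=0.2, steps=30):
    phon = np.zeros(len(phonemes))
    lex = np.zeros(len(words))
    target = np.array([1.0, 1.0, 1.0, 0.0, 0.0])   # input resembling "cat"
    for _ in range(steps):
        inp = target + rng.normal(0.0, noise_sd, len(phonemes))
        # Phonemes receive noisy bottom-up input plus lexical feedback,
        # minus lateral inhibition from the other phoneme units.
        phon += 0.1 * (inp + feedback * W.T @ lex - inhibition * (phon.sum() - phon))
        lex += 0.1 * (W @ phon - inhibition * (lex.sum() - lex))
        phon = np.clip(phon, 0.0, 1.0)
        lex = np.clip(lex, 0.0, 1.0)
    return phon.round(2)

print(run(noise_sd=0.5, feedback=0.3))   # with lexical feedback
print(run(noise_sd=0.5, feedback=0.0))   # without feedback, for comparison
```

The point of the toy is only that feedback arrives selectively: it flows to phonemes that form lexically coherent patterns, while lateral inhibition suppresses the rest, which is how such models can sharpen signal without amplifying noise indiscriminately.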
5.
Neurobiol Lang (Camb) ; 4(1): 145-177, 2023.
Article in English | MEDLINE | ID: mdl-37229142

ABSTRACT

Though the right hemisphere has been implicated in talker processing, it is thought to play a minimal role in phonetic processing, at least relative to the left hemisphere. Recent evidence suggests that the right posterior temporal cortex may support learning of phonetic variation associated with a specific talker. In the current study, listeners heard a male talker and a female talker, one of whom produced an ambiguous fricative in /s/-biased lexical contexts (e.g., epi?ode) and one who produced it in /ʃ/-biased contexts (e.g., friend?ip). Listeners in a behavioral experiment (Experiment 1) showed evidence of lexically guided perceptual learning, categorizing ambiguous fricatives in line with their previous experience. Listeners in an fMRI experiment (Experiment 2) showed differential phonetic categorization as a function of talker, allowing for an investigation of the neural basis of talker-specific phonetic processing, though they did not exhibit perceptual learning (likely due to characteristics of our in-scanner headphones). Searchlight analyses revealed that the patterns of activation in the right superior temporal sulcus (STS) contained information about who was talking and what phoneme they produced. We take this as evidence that talker information and phonetic information are integrated in the right STS. Functional connectivity analyses suggested that the process of conditioning phonetic identity on talker information depends on the coordinated activity of a left-lateralized phonetic processing system and a right-lateralized talker processing system. Overall, these results clarify the mechanisms through which the right hemisphere supports talker-specific phonetic processing.

6.
Cogn Sci ; 47(5): e13291, 2023 05.
Article in English | MEDLINE | ID: mdl-37183557

ABSTRACT

Distributional semantic models (DSMs) are a primary method for distilling semantic information from corpora. However, a key question remains: What types of semantic relations among words do DSMs detect? Prior work typically has addressed this question using limited human data that are restricted to semantic similarity and/or general semantic relatedness. We tested eight DSMs that are popular in current cognitive and psycholinguistic research (positive pointwise mutual information; global vectors (GloVe); and three variations each of Skip-gram and continuous bag of words (CBOW) using word, context, and mean embeddings) on a theoretically motivated, rich set of semantic relations involving words from multiple syntactic classes and spanning the abstract-concrete continuum (19 sets of ratings). We found that, overall, the DSMs are best at capturing overall semantic similarity and also can capture verb-noun thematic role relations and noun-noun event-based relations that play important roles in sentence comprehension. Interestingly, Skip-gram and CBOW performed the best in terms of capturing similarity, whereas GloVe dominated the thematic role and event-based relations. We discuss the theoretical and practical implications of our results, make recommendations for users of these models, and demonstrate significant differences in model performance on event-based relations.


Subject(s)
Language , Semantics , Humans , Psycholinguistics , Comprehension
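
Of the models tested, positive pointwise mutual information (PPMI) is simple enough to compute directly from a co-occurrence matrix. The sketch below uses an invented 3 x 3 count matrix purely to show the arithmetic; it is not data or code from the article.

```python
import numpy as np

# Rows: target words, columns: context words (toy counts).
counts = np.array([[10.0, 2.0, 0.0],
                   [ 3.0, 8.0, 1.0],
                   [ 0.0, 1.0, 6.0]])

total = counts.sum()
p_wc = counts / total                    # joint probabilities P(w, c)
p_w = p_wc.sum(axis=1, keepdims=True)    # marginal P(w)
p_c = p_wc.sum(axis=0, keepdims=True)    # marginal P(c)

with np.errstate(divide="ignore"):       # log2(0) -> -inf, clipped just below
    pmi = np.log2(p_wc / (p_w * p_c))
ppmi = np.maximum(pmi, 0.0)              # keep only positive associations

# Word vectors are the rows of the PPMI matrix; similarity is typically cosine.
def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(ppmi.round(2))
print(round(cosine(ppmi[0], ppmi[1]), 2))
```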
7.
Brain Lang ; 240: 105264, 2023 05.
Article in English | MEDLINE | ID: mdl-37087863

ABSTRACT

Theories suggest that speech perception is informed by listeners' beliefs of what phonetic variation is typical of a talker. A previous fMRI study found right middle temporal gyrus (RMTG) sensitivity to whether a phonetic variant was typical of a talker, consistent with literature suggesting that the right hemisphere may play a key role in conditioning phonetic identity on talker information. The current work used transcranial magnetic stimulation (TMS) to test whether the RMTG plays a causal role in processing talker-specific phonetic variation. Listeners were exposed to talkers who differed in how they produced voiceless stop consonants while TMS was applied to RMTG, left MTG, or scalp vertex. Listeners subsequently showed near-ceiling performance in indicating which of two variants was typical of a trained talker, regardless of previous stimulation site. Thus, even though the RMTG is recruited for talker-specific phonetic processing, modulation of its function may have only modest consequences.


Subject(s)
Phonetics , Speech Perception , Humans , Transcranial Magnetic Stimulation , Temporal Lobe/diagnostic imaging , Speech Perception/physiology , Magnetic Resonance Imaging
8.
Behav Res Methods ; 54(3): 1388-1402, 2022 06.
Article in English | MEDLINE | ID: mdl-34595672

ABSTRACT

Language scientists often need to generate lists of related words, such as potential competitors. They may do this for purposes of experimental control (e.g., selecting items matched on lexical neighborhood but varying in word frequency), or to test theoretical predictions (e.g., hypothesizing that a novel type of competitor may impact word recognition). Several online tools are available, but most are constrained to a fixed lexicon and fixed sets of competitor definitions, and may not give the user full access to or control of source data. We present LexFindR, an open-source R package that can be easily modified to include additional, novel competitor types. LexFindR is easy to use. Because it can leverage multiple CPU cores and uses vectorized code when possible, it is also extremely fast. In this article, we present an overview of LexFindR usage, illustrated with examples. We also explain the details of how we implemented several standard lexical competitor types used in spoken word recognition research (e.g., cohorts, neighbors, embeddings, rhymes), and show how "lexical dimensions" (e.g., word frequency, word length, uniqueness point) can be integrated into LexFindR workflows (for example, to calculate "frequency-weighted competitor probabilities"), for both spoken and visual word recognition research.


Subject(s)
Speech Perception , Humans , Language
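
LexFindR itself is an R package; the Python sketch below only illustrates the kinds of competitor definitions it implements (here, cohorts share the first two phonemes, neighbors differ by a single phoneme addition, deletion, or substitution, and rhymes mismatch only at onset), plus a frequency-weighted competitor probability in the spirit of the neighborhood probability rule. The toy lexicon, transcriptions, and frequencies are invented.

```python
def edit_distance(a, b):
    """Phoneme-level Levenshtein distance."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def competitors(target, lexicon):
    others = {w: p for w, p in lexicon.items() if w != target}
    t = lexicon[target]
    return {
        "cohorts":   [w for w, p in others.items() if p[:2] == t[:2]],
        "neighbors": [w for w, p in others.items() if edit_distance(p, t) == 1],
        "rhymes":    [w for w, p in others.items()
                      if len(p) == len(t) and p[1:] == t[1:] and p[0] != t[0]],
    }

# Toy lexicon: word -> phoneme list (invented transcriptions and frequencies).
lexicon = {"cat": ["k", "ae", "t"], "cap": ["k", "ae", "p"],
           "bat": ["b", "ae", "t"], "cast": ["k", "ae", "s", "t"]}
freq = {"cat": 80, "cap": 20, "bat": 40, "cast": 10}

comps = competitors("cat", lexicon)
print(comps)

# Frequency-weighted probability of the target relative to its neighbors.
pool = ["cat"] + comps["neighbors"]
print(freq["cat"] / sum(freq[w] for w in pool))
```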
9.
J Exp Psychol Hum Percept Perform ; 47(12): 1673-1680, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34881952

ABSTRACT

Determining how human listeners achieve phonetic constancy despite a variable mapping between the acoustics of speech and phonemic categories is the longest-standing challenge in speech perception. A clue comes from studies where the talker changes randomly between stimuli, which slows processing compared with a single-talker baseline. These multitalker processing costs have been observed most often in speeded monitoring paradigms, where participants respond whenever a specific item occurs. Notably, the conventional paradigm imposes attentional demands via two forms of varied mapping in mixed-talker conditions. First, target recycling (i.e., allowing items to serve as targets on some trials but as distractors on others) potentially prevents the development of task automaticity. Second, in mixed trials, participants must respond to two unique stimuli (i.e., one target produced by each talker), whereas in blocked conditions, they need only respond to one unique stimulus (i.e., multiple tokens of a single target). We seek to understand how attentional demands influence talker normalization, as measured by multitalker processing costs. Across four experiments, multitalker processing costs persisted when target recycling was not allowed but diminished when only one stimulus served as the target on mixed trials. We discuss the logic of using varied mapping to elicit attentional effects and implications for theories of speech perception. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Speech Perception , Acoustics , Attention , Humans , Phonetics , Speech
10.
Atten Percept Psychophys ; 83(6): 2367-2376, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33948883

ABSTRACT

Researchers have hypothesized that in order to accommodate variability in how talkers produce their speech sounds, listeners must perform a process of talker normalization. Consistent with this proposal, several studies have shown that spoken word recognition is slowed when speech is produced by multiple talkers compared with when all speech is produced by one talker (a multitalker processing cost). Nusbaum and colleagues have argued that talker normalization is modulated by attention (e.g., Nusbaum & Morin, 1992, Speech Perception, Production and Linguistic Structure, pp. 113-134). Some of the strongest evidence for this claim is from a speeded monitoring study where a group of participants who expected to hear two talkers showed a multitalker processing cost, but a separate group who expected one talker did not (Magnuson & Nusbaum, 2007, Journal of Experimental Psychology, 33[2], 391-409). In that study, however, the sample size was small and the crucial interaction was not significant. In this registered report, we present the results of a well-powered attempt to replicate those findings. In contrast to the previous study, we did not observe multitalker processing costs in either of our groups. To rule out the possibility that the null result was due to task constraints, we conducted a second experiment using a speeded classification task. As in Experiment 1, we found no influence of expectations on talker normalization, with no multitalker processing cost observed in either group. Our data suggest that the previous findings of Magnuson and Nusbaum (2007) be regarded with skepticism and that talker normalization may not be permeable to high-level expectations.


Subject(s)
Motivation , Speech Perception , Attention , Humans , Phonetics , Speech
11.
J Exp Psychol Learn Mem Cogn ; 47(4): 685-704, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33983786

ABSTRACT

A challenge for listeners is to learn the appropriate mapping between acoustics and phonetic categories for an individual talker. Lexically guided perceptual learning (LGPL) studies have shown that listeners can leverage lexical knowledge to guide this process. For instance, listeners learn to interpret ambiguous /s/-/ʃ/ blends as /s/ if they have previously encountered them in /s/-biased contexts like epi?ode. Here, we examined whether the degree of preceding lexical support might modulate the extent of perceptual learning. In Experiment 1, we first demonstrated that perceptual learning could be obtained in a modified LGPL paradigm where listeners were first biased to interpret ambiguous tokens as one phoneme (e.g., /s/) and then later as another (e.g., /ʃ/). In subsequent experiments, we tested whether the extent of learning differed depending on whether the ambiguous auditory target (e.g., epi?ode) was preceded by a predictive or a neutral context. Experiment 2 used auditory sentence contexts (e.g., "I love The Walking Dead and eagerly await every new . . ."), whereas Experiment 3 used written sentence contexts. In Experiment 4, participants did not receive sentence contexts but rather saw the written form of the target word (episode) or filler text (########) prior to hearing the critical auditory token. While we consistently observed effects of context on in-the-moment processing of critical words, the size of the learning effect was not modulated by the type of context. We hypothesize that boosting lexical support through preceding context may not strongly influence perceptual learning when ambiguous speech sounds can be identified solely from lexical information. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Learning , Phonetics , Speech Perception , Writing , Female , Humans , Knowledge , Male
12.
Psychon Bull Rev ; 28(4): 1381-1389, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33852158

ABSTRACT

Pervasive behavioral and neural evidence for predictive processing has led to claims that language processing depends upon predictive coding. Formally, predictive coding is a computational mechanism where only deviations from top-down expectations are passed between levels of representation. In many cognitive neuroscience studies, a reduction of signal for expected inputs is taken as being diagnostic of predictive coding. In the present work, we show that despite not explicitly implementing prediction, the TRACE model of speech perception exhibits this putative hallmark of predictive coding, with reductions in total lexical activation, total lexical feedback, and total phoneme activation when the input conforms to expectations. These findings may indicate that interactive activation is functionally equivalent or approximant to predictive coding or that caution is warranted in interpreting neural signal reduction as diagnostic of predictive coding.


Subject(s)
Speech Perception , Humans , Language
13.
Cogn Sci ; 45(4): e12962, 2021 04.
Article in English | MEDLINE | ID: mdl-33877697

ABSTRACT

A long-standing question in cognitive science is how high-level knowledge is integrated with sensory input. For example, listeners can leverage lexical knowledge to interpret an ambiguous speech sound, but do such effects reflect direct top-down influences on perception or merely postperceptual biases? A critical test case in the domain of spoken word recognition is lexically mediated compensation for coarticulation (LCfC). Previous LCfC studies have shown that a lexically restored context phoneme (e.g., /s/ in Christma#) can alter the perceived place of articulation of a subsequent target phoneme (e.g., the initial phoneme of a stimulus from a tapes-capes continuum), consistent with the influence of an unambiguous context phoneme in the same position. Because this phoneme-to-phoneme compensation for coarticulation is considered sublexical, scientists agree that evidence for LCfC would constitute strong support for top-down interaction. However, results from previous LCfC studies have been inconsistent, and positive effects have often been small. Here, we conducted extensive piloting of stimuli prior to testing for LCfC. Specifically, we ensured that context items elicited robust phoneme restoration (e.g., that the final phoneme of Christma# was reliably identified as /s/) and that unambiguous context-final segments (e.g., a clear /s/ at the end of Christmas) drove reliable compensation for coarticulation for a subsequent target phoneme. We observed robust LCfC in a well-powered, preregistered experiment with these pretested items (N = 40) as well as in a direct replication study (N = 40). These results provide strong evidence in favor of computational models of spoken word recognition that include top-down feedback.


Subject(s)
Speech Perception , Humans , Phonetics
14.
Atten Percept Psychophys ; 83(4): 1842-1860, 2021 May.
Article in English | MEDLINE | ID: mdl-33398658

ABSTRACT

A fundamental problem in speech perception is how (or whether) listeners accommodate variability in the way talkers produce speech. One view of the way listeners cope with this variability is that talker differences are normalized - a mapping between talker-specific characteristics and phonetic categories is computed such that speech is recognized in the context of the talker's vocal characteristics. Consistent with this view, listeners process speech more slowly when the talker changes randomly than when the talker remains constant. An alternative view is that speech perception is based on talker-specific auditory exemplars in memory clustered around linguistic categories that allow talker-independent perception. Consistent with this view, listeners become more efficient at talker-specific phonetic processing after voice identification training. We asked whether phonetic efficiency would increase with talker familiarity by testing listeners with extremely familiar talkers (family members), newly familiar talkers (based on laboratory training), and unfamiliar talkers. We also asked whether familiarity would reduce the need for normalization. As predicted, phonetic efficiency (word recognition in noise) increased with familiarity (unfamiliar < trained-on < family). However, we observed a constant processing cost for talker changes even for pairs of family members. We discuss how normalization and exemplar theories might account for these results, and constraints the results impose on theoretical accounts of phonetic constancy.


Subject(s)
Speech Perception , Voice , Humans , Phonetics , Recognition, Psychology , Speech
15.
Dev Sci ; 24(2): e13023, 2021 03.
Article in English | MEDLINE | ID: mdl-32691904

ABSTRACT

Word learning is critical for the development of reading and language comprehension skills. Although previous studies have indicated that word learning is compromised in children with reading disability (RD) or developmental language disorder (DLD), it is less clear how word learning difficulties manifest in children with comorbid RD and DLD. Furthermore, it is unclear whether word learning deficits in RD or DLD include difficulties with offline consolidation of newly learned words. In the current study, we employed an artificial lexicon learning paradigm with an overnight design to investigate how typically developing (TD) children (N = 25), children with only RD (N = 93), and children with both RD and DLD (N = 34) learned and remembered a set of phonologically similar pseudowords. Results showed that compared to TD children, children with RD exhibited: (i) slower growth in discrimination accuracy for cohort item pairs sharing an onset (e.g. pibu-pibo), but not for rhyming item pairs (e.g. pibu-dibu); and (ii) lower discrimination accuracy for both cohort and rhyme item pairs on Day 2, even when accounting for differences in Day 1 learning. Moreover, children with comorbid RD and DLD showed learning and retention deficits that extended to unrelated item pairs that were phonologically dissimilar (e.g. pibu-tupa), suggestive of broader impairments compared to children with only RD. These findings provide insights into the specific learning deficits underlying RD and DLD and motivate future research concerning how children use phonological similarity to guide the organization of new word knowledge.


Subject(s)
Dyslexia , Language Development Disorders , Child , Humans , Language , Learning , Verbal Learning
16.
Cogn Sci ; 44(12): e12917, 2020 12.
Article in English | MEDLINE | ID: mdl-33274485

ABSTRACT

Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses of reaction time data from a visual word naming task indicate that the influence of enemies differs across letter positions, with the negative impacts of enemies being most pronounced at letter positions where readers have low prior uncertainty about which letters they will encounter (i.e., positions with low entropy). To understand the computational mechanisms that give rise to such positional entropy effects, we introduce a new computational model, VOISeR (Visual Orthographic Input Serial Reader), which receives orthographic inputs in parallel and produces an over-time sequence of phonemes as output. VOISeR produces a similar pattern of results as in the human data, suggesting that positional entropy effects may emerge even when letters are not sampled serially. Finally, we demonstrate that these effects also emerge in human subjects' data from a lexical decision task, illustrating the generalizability of positional entropy effects across visual word recognition paradigms. Taken together, such work suggests that research into orthographic neighbor effects in visual word recognition should also consider differences between letter positions.


Subject(s)
Pattern Recognition, Visual , Reading , Humans , Reaction Time
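
The study's two key lexicon-level quantities, per-position enemy counts and per-position letter entropy, are straightforward to compute from a word list. The sketch below does so for a tiny invented lexicon; it is not VOISeR and not the authors' analysis pipeline.

```python
import math
from collections import Counter

lexicon = ["cat", "cot", "cut", "car", "cap", "bat"]   # toy three-letter lexicon

def enemies(word, lexicon):
    """Per-position count of same-length words mismatching the target at exactly
    that one position (i.e., single-substitution neighbors, split by position)."""
    return [sum(1 for w in lexicon
                if w != word and len(w) == len(word)
                and sum(a != b for a, b in zip(w, word)) == 1
                and w[i] != word[i])
            for i in range(len(word))]

def positional_entropy(lexicon, length):
    """Shannon entropy (bits) of the letter distribution at each position."""
    words = [w for w in lexicon if len(w) == length]
    ent = []
    for i in range(length):
        freqs = Counter(w[i] for w in words)
        n = sum(freqs.values())
        ent.append(-sum(c / n * math.log2(c / n) for c in freqs.values()))
    return ent

print(enemies("cat", lexicon))                          # [1, 2, 2] for this toy lexicon
print([round(h, 2) for h in positional_entropy(lexicon, 3)])
```

In this framing, a "low-entropy" position is one where readers can strongly anticipate the letter, which is where the reported enemy effects were most pronounced.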
17.
New Dir Child Adolesc Dev ; 2020(169): 131-155, 2020 Jan.
Article in English | MEDLINE | ID: mdl-32324324

ABSTRACT

The etiological mechanisms of the genetic underpinnings of developmental language disorder (DLD) are unknown, in part due to the behavioral heterogeneity of the disorder's manifestations. In this study, we explored an association between the SETBP1 gene (18q21.1), revealed in a genome-wide association study of DLD in a geographically isolated population, and brain network-based endophenotypes of functional intracortical coherence between major language-related brain areas. We analyzed electroencephalogram (EEG) data from thirty-nine children (twenty-three with, sixteen without DLD) aged 7.17-15.83 years acquired during an auditory picture-word matching paradigm. Variation at a single nucleotide polymorphism in the intronic region of the SETBP1 gene, rs8085464, explained 19% of the variance in intracortical network cohesion (p = .00478). This suggests that the development of these brain networks might be partially associated with the variation in SETBP1.


Subject(s)
Brain/physiopathology , Carrier Proteins/genetics , Language Development Disorders/genetics , Language Development Disorders/physiopathology , Nuclear Proteins/genetics , Adolescent , Brain/diagnostic imaging , Child , Cognition , Electroencephalography , Genome-Wide Association Study , Humans , Language , Male , Polymorphism, Genetic , Russia
18.
Cogn Sci ; 44(4): e12823, 2020 04.
Article in English | MEDLINE | ID: mdl-32274861

ABSTRACT

Despite the lack of invariance problem (the many-to-many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side-stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real-world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them to provide direct insights into mechanisms that may support HSR. In this brief article, we report preliminary results from a two-layer network that borrows one element from ASR, long short-term memory nodes, which provide dynamic memory for a range of temporal spans. This allows the model to learn to map real speech from multiple talkers to semantic targets with high accuracy, with human-like timecourse of lexical access and phonological competition. Internal representations emerge that resemble phonetically organized responses in human superior temporal gyrus, suggesting that the model develops a distributed phonological code despite no explicit training on phonetic or phonemic targets. The ability to work with real speech is a major advance for cognitive models of HSR.


Subject(s)
Computer Simulation , Models, Neurological , Neural Networks, Computer , Speech Perception , Speech , Female , Humans , Male , Phonetics , Semantics
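
A minimal PyTorch stand-in for the kind of architecture described (acoustic frames in, a distributed semantic output, with LSTM memory) is sketched below. The layer sizes, input representation, and training objective are placeholders, not the model reported in the article.

```python
import torch
import torch.nn as nn

class SpeechToSemantics(nn.Module):
    """Toy two-layer LSTM mapping acoustic feature frames to a semantic vector."""
    def __init__(self, n_acoustic=80, n_hidden=256, n_semantic=300):
        super().__init__()
        self.lstm = nn.LSTM(n_acoustic, n_hidden, num_layers=2, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_semantic)

    def forward(self, frames):                  # frames: (batch, time, n_acoustic)
        hidden, _ = self.lstm(frames)
        return self.readout(hidden)             # a semantic estimate at every frame

model = SpeechToSemantics()
frames = torch.randn(4, 120, 80)                # 4 utterances, 120 frames each (placeholder input)
target = torch.randn(4, 300)                    # placeholder semantic target vectors

pred = model(frames)
# One possible objective: push the final-frame output toward the word's semantic vector.
loss = nn.functional.mse_loss(pred[:, -1, :], target)
loss.backward()
print(pred.shape, float(loss))
```

Because the output is produced frame by frame, the over-time rise of similarity between the prediction and each word's semantic vector can be read off directly, which is how lexical-access timecourse and phonological competition are typically assessed in models of this kind.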
19.
J Mem Lang ; 107: 195-215, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31431796

ABSTRACT

Many studies have established a link between phonological abilities (indexed by phonological awareness and phonological memory tasks) and typical and atypical reading development. Individuals who perform poorly on phonological assessments have been mostly assumed to have underspecified (or "fuzzy") phonological representations, with typical phonemic categories, but with greater category overlap due to imprecise encoding. An alternative posits that poor readers have overspecified phonological representations, with speech sounds perceived allophonically (phonetically distinct variants of a single phonemic category). On both accounts, mismatch between phonological categories and orthography leads to reading difficulty. Here, we consider the implications of these accounts for online speech processing. We used eye tracking and an individual differences approach to assess sensitivity to subphonemic detail in a community sample of young adults with a wide range of reading-related skills. Subphonemic sensitivity inversely correlated with meta-phonological task performance, consistent with overspecification.

20.
Lang Cogn Neurosci ; 33(10): 1275-1295, 2018.
Article in English | MEDLINE | ID: mdl-30505876

ABSTRACT

This exploratory study investigated relations between individual differences in cortical grey matter structure and young adult readers' cognitive profiles. Whole-brain analyses revealed neuroanatomical correlations with word and nonword reading ability (decoding), and experience with printed matter. Decoding was positively correlated with grey matter volume (GMV) in left superior temporal sulcus, and thickness (GMT) in right superior temporal gyrus. Print exposure was negatively correlated with GMT in left inferior frontal gyrus (pars opercularis) and left fusiform gyrus (including the visual word form area). Both measures also correlated with supramarginal gyrus (SMG), but in spatially distinct subregions: decoding was positively associated with GMV in left anterior SMG, and print exposure was negatively associated with GMT in left posterior SMG. Our comprehensive approach to assessment both confirms and refines our understanding of the novel relation between the structure of pSMG and proficient reading, and unifies previous research relating cortical structure and reading skill.
