Results 1 - 20 of 97
1.
Brain Neurosci Adv ; 8: 23982128241255798, 2024.
Article in English | MEDLINE | ID: mdl-38800359

ABSTRACT

The binding of information from different sensory or neural sources is critical for associative memory. Previous research in animals suggested that the timing of theta oscillations in the hippocampus is critical for long-term potentiation, which underlies associative and episodic memory. Studies with human participants showed correlations between theta oscillations in the medial temporal lobe and episodic memory. Clouter et al. directly investigated this link by modulating the luminance and sound intensity of video clips so that they 'flickered' at certain frequencies and with varying synchronicity between the visual and auditory streams. Across several experiments, better memory was found for stimuli that flickered synchronously at theta frequency compared with no-flicker, asynchronous theta, or synchronous alpha and delta frequencies. This effect - which they called the theta-induced memory effect - is consistent with the importance of theta synchronicity for long-term potentiation. In addition, electroencephalography data showed entrainment of cortical regions to the visual and auditory flicker, and that synchronicity was achieved in neuronal oscillations (with a fixed delay between visual and auditory streams). The theoretical importance, large effect size, and potential application to enhancing real-world memory mean that a replication of the theta-induced memory effect would be highly valuable. The present study aimed to replicate the key differences among synchronous theta, asynchronous theta, synchronous delta, and no-flicker conditions, but within a single experiment. The results do not show evidence of improved memory for theta synchronicity in any of the comparisons. We suggest a reinterpretation of the theta-induced memory effect to accommodate this non-replication.
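To make the flicker manipulation concrete, the sketch below generates synchronous and asynchronous audiovisual amplitude modulations at a theta rate. The 4 Hz rate, quarter-cycle offset, carrier tone, and sample rates are illustrative assumptions, not the stimulus parameters used in the original or replication studies.

```python
import numpy as np

def flicker_envelope(freq_hz, duration_s, fs, phase_rad=0.0):
    """Raised-sinusoid modulation envelope (0..1) at the given flicker rate."""
    t = np.arange(int(duration_s * fs)) / fs
    return 0.5 * (1.0 + np.sin(2 * np.pi * freq_hz * t + phase_rad))

fs_audio = 44_100          # audio sample rate (Hz), illustrative
theta_hz = 4.0             # illustrative theta flicker rate
dur = 3.0                  # clip duration in seconds

# Synchronous condition: visual luminance and audio amplitude share one envelope.
env_sync = flicker_envelope(theta_hz, dur, fs_audio)

# Asynchronous condition: audio envelope shifted by a quarter cycle (illustrative offset).
env_async = flicker_envelope(theta_hz, dur, fs_audio, phase_rad=np.pi / 2)

# Apply the envelope to an audio carrier (a 440 Hz tone standing in for a soundtrack).
t = np.arange(int(dur * fs_audio)) / fs_audio
carrier = np.sin(2 * np.pi * 440.0 * t)
audio_sync = env_sync * carrier
audio_async = env_async * carrier

# For the video, the same envelope would scale frame luminance at the display frame rate.
fs_video = 60
lum_sync = flicker_envelope(theta_hz, dur, fs_video)
```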

2.
J Exp Psychol Gen ; 153(4): 957-981, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38095981

ABSTRACT

Poor performance on phonological tasks is characteristic of neurodevelopmental language disorders (dyslexia and/or developmental language disorder). Perceptual deficit accounts attribute phonological dysfunction to lower-level deficits in speech-sound processing. However, a causal pathway from speech perception to phonological performance has not been established. We assessed this relationship in typical adults by experimentally disrupting speech-sound discrimination in a phonological short-term memory (pSTM) task. We used an automated audio-morphing method (Rogers & Davis, 2017) to create ambiguous intermediate syllables between 16 letter name-letter name ("B"-"P") and letter name-word ("B"-"we") pairs. High- and low-ambiguity syllables were used in a pSTM task in which participants (N = 36) recalled six- and eight-letter name sequences. Low-ambiguity sequences were better recalled than high-ambiguity sequences, for letter name-letter name but not letter name-word morphed syllables. A further experiment replicated this ambiguity cost (N = 26), but failed to show retroactive or prospective effects for mixed high- and low-ambiguity sequences, in contrast to pSTM findings for speech-in-noise (SiN; Guang et al., 2020; Rabbitt, 1968). These experiments show that ambiguous speech sounds impair pSTM, via a different mechanism to SiN recall. We further show that the effect of ambiguous speech on recall is context-specific, limited, and does not transfer to recall of nonconfusable items. This indicates that speech perception deficits are not a plausible cause of pSTM difficulties in language disorders. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Dyslexia , Language Disorders , Speech Perception , Adult , Humans , Speech , Memory, Short-Term , Phonetics , Articulation Disorders
3.
Article in English | MEDLINE | ID: mdl-37929612

ABSTRACT

BACKGROUND: The use of telepractice in aphasia research and therapy is increasing in frequency. Teleassessment in aphasia has been demonstrated to be reliable. However, neuropsychological and clinical language comprehension assessments are not always readily translatable to an online environment and people with severe language comprehension or cognitive impairments have sometimes been considered to be unsuitable for teleassessment. AIM: This project aimed to produce a battery of language comprehension teleassessments at the single word, sentence and discourse level suitable for individuals with moderate-severe language comprehension impairments. METHODS: Assessment development prioritised response consistency and clinical flexibility during testing. Teleassessments were delivered in PowerPoint over Zoom using screen sharing and remote control functions. The assessments were evaluated in 14 people with aphasia and 9 neurotypical control participants. Modifiable assessment templates are available here: https://osf.io/r6wfm/. MAIN CONTRIBUTIONS: People with aphasia were able to engage in language comprehension teleassessment with limited carer support. Only one assessment could not be completed for technical reasons. Statistical analysis revealed above chance performance in 141/151 completed assessments. CONCLUSIONS: People with aphasia, including people with moderate-severe comprehension impairments, are able to engage with teleassessment. Successful teleassessment can be supported by retaining clinical flexibility and maintaining consistent task demands. WHAT THIS PAPER ADDS: What is already known on the subject Teleassessment for aphasia is reliable but assessment of auditory comprehension is difficult to adapt to the online environment. There has been limited evaluation of the ability of people with severe aphasia to engage in auditory comprehension teleassessment. What this paper adds to existing knowledge Auditory comprehension assessment can be adapted for videoconferencing administration while maintaining clinical flexibility to support people with severe aphasia. What are the potential or actual clinical implications of this work? Teleassessment is time and cost effective and can be designed to support inclusion of severely impaired individuals.
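As an illustration of the above-chance analysis mentioned in the main contributions, the minimal sketch below tests a single completed teleassessment against its chance level with a one-sided binomial test; the trial count and chance level are hypothetical, not taken from the assessment battery.

```python
from scipy.stats import binomtest

# Hypothetical single-assessment outcome: 28 correct out of 40 trials on a
# 4-alternative word-to-picture matching task (chance = 0.25).
n_correct, n_trials, chance = 28, 40, 0.25

result = binomtest(n_correct, n_trials, p=chance, alternative='greater')
print(f"p(correct) = {n_correct / n_trials:.2f}, "
      f"one-sided binomial p = {result.pvalue:.4g}")
```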

5.
Cell Rep ; 42(5): 112422, 2023 05 30.
Article in English | MEDLINE | ID: mdl-37099422

ABSTRACT

Humans use predictions to improve speech perception, especially in noisy environments. Here we use 7-T functional MRI (fMRI) to decode brain representations of written phonological predictions and degraded speech signals in healthy humans and people with selective frontal neurodegeneration (non-fluent variant primary progressive aphasia [nfvPPA]). Multivariate analyses of item-specific patterns of neural activation indicate dissimilar representations of verified and violated predictions in left inferior frontal gyrus, suggestive of processing by distinct neural populations. In contrast, precentral gyrus represents a combination of phonological information and weighted prediction error. In the presence of intact temporal cortex, frontal neurodegeneration results in inflexible predictions. This manifests neurally as a failure to suppress incorrect predictions in anterior superior temporal gyrus and reduced stability of phonological representations in precentral gyrus. We propose a tripartite speech perception network in which inferior frontal gyrus supports prediction reconciliation in echoic memory, and precentral gyrus invokes a motor model to instantiate and refine perceptual predictions for speech.


Subject(s)
Motor Cortex , Speech , Humans , Speech/physiology , Brain Mapping , Frontal Lobe/physiology , Brain , Temporal Lobe , Magnetic Resonance Imaging/methods
6.
PLoS One ; 18(1): e0279024, 2023.
Article in English | MEDLINE | ID: mdl-36634109

ABSTRACT

Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is solely derived from acoustic properties (e.g., rapid amplitude changes), or if it is also influenced by the linguistic units (syllables, words, etc.) that listeners extract from intelligible speech. Here, we present three experiments in which participants were asked to detect an irregularity in rhythmically spoken speech sequences. In each experiment, we reduce the number of possible stimulus properties that differ between intelligible and unintelligible speech sounds and show that these acoustically matched intelligibility conditions nonetheless lead to differences in rhythm perception. In Experiment 1, we replicate a previous study showing that rhythm perception is improved for intelligible (16-channel vocoded) as compared to unintelligible (1-channel vocoded) speech, despite near-identical broadband amplitude modulations. In Experiment 2, we use spectrally rotated 16-channel speech to show that the effect of intelligibility cannot be explained by differences in spectral complexity. In Experiment 3, we compare rhythm perception for sine-wave speech signals when they are heard as non-speech (for naïve listeners) and, subsequent to training, when identical sounds are perceived as speech. In all cases, detection of rhythmic regularity is enhanced when participants perceive the stimulus as speech compared to when they do not. Together, these findings demonstrate that intelligibility enhances the perception of timing changes in speech, which is hence linked to processes that extract abstract linguistic units from sound.
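For readers unfamiliar with the vocoding manipulation in Experiment 1, the sketch below implements a generic N-channel noise vocoder: speech is filtered into frequency bands, the amplitude envelope of each band is extracted and used to modulate band-limited noise, and the bands are summed. Filter orders, channel spacing, and cutoff frequencies are illustrative defaults rather than the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels, f_lo=100.0, f_hi=7000.0, env_lp=30.0):
    """Generic N-channel noise vocoder: speech envelopes modulate band-limited noise."""
    # Logarithmically spaced channel edges between f_lo and f_hi.
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    noise = np.random.randn(len(speech))
    lp = butter(4, env_lp, btype='low', fs=fs, output='sos')   # envelope smoother
    out = np.zeros(len(speech), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        bp = butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        band = sosfiltfilt(bp, speech)
        env = sosfiltfilt(lp, np.abs(hilbert(band)))            # smoothed amplitude envelope
        env = np.clip(env, 0, None)
        out += env * sosfiltfilt(bp, noise)                     # envelope applied to noise band
    return out / (np.max(np.abs(out)) + 1e-12)

# Example: a 1-channel version is unintelligible, a 16-channel version intelligible.
fs = 16_000
speech = np.random.randn(fs * 2)   # placeholder for a recorded utterance
unintelligible = noise_vocode(speech, fs, n_channels=1)
intelligible = noise_vocode(speech, fs, n_channels=16)
```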


Subject(s)
Speech Intelligibility , Speech Perception , Humans , Phonetics , Acoustics , Cognition , Acoustic Stimulation , Auditory Perception
7.
Cortex ; 159: 54-63, 2023 02.
Article in English | MEDLINE | ID: mdl-36608420

ABSTRACT

Studies of inter-brain relationships are thriving, yet the scientific community has raised many reservations about the scope and interpretation of these phenomena. It is thus essential to establish common ground on methodological and conceptual definitions related to this topic and to open debate about any remaining points of uncertainty. Here we offer insights to improve the conceptual clarity and empirical standards of social neuroscience studies of interpersonal interaction using hyperscanning, with a particular focus on verbal communication.


Subject(s)
Brain , Communication , Humans , Auditory Perception , Thalamus
8.
J Neurosci ; 42(31): 6108-6120, 2022 08 03.
Article in English | MEDLINE | ID: mdl-35760528

ABSTRACT

Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explore phase-locking to auditory and visual signals in MEG recordings from 14 human participants (6 females, 8 males) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information was present in auditory, visual, or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase-locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase-locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus did not show above-chance partial coherence with visual speech signals during AV conditions but did show partial coherence in visual-only conditions. Hence, visual speech enabled stronger phase-locking to auditory signals in visual areas, whereas phase-locking of visual speech in auditory regions only occurred during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception. SIGNIFICANCE STATEMENT: Verbal communication in noisy environments is challenging, especially for hearing-impaired individuals. Seeing facial movements of communication partners improves speech perception when auditory signals are degraded or absent. The neural mechanisms supporting lip-reading or audio-visual benefit are not fully understood. Using MEG recordings and partial coherence analysis, we show that speech information is used differently in brain regions that respond to auditory and visual speech. While visual areas use visual speech to improve phase-locking to auditory speech signals, auditory areas do not show phase-locking to visual speech unless auditory speech is absent and visual speech is used to substitute for missing auditory signals. These findings highlight brain processes that combine visual and auditory signals to support speech understanding.
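The partial coherence logic described above can be sketched as follows: coherence between a sensor signal and the auditory envelope is computed after removing, at each frequency, the component linearly explained by the correlated visual (lip aperture) signal. The toy signals, sampling rate, and spectral-estimation parameters are assumptions for illustration only, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import csd

def partial_coherence(x, y, z, fs, nperseg=1024):
    """Magnitude-squared coherence between x and y after partialling out z at each frequency."""
    f, Sxx = csd(x, x, fs=fs, nperseg=nperseg)
    _, Syy = csd(y, y, fs=fs, nperseg=nperseg)
    _, Szz = csd(z, z, fs=fs, nperseg=nperseg)
    _, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Sxz = csd(x, z, fs=fs, nperseg=nperseg)
    _, Syz = csd(y, z, fs=fs, nperseg=nperseg)
    num = np.abs(Sxy - Sxz * np.conj(Syz) / Szz) ** 2
    den = (np.abs(Sxx) - np.abs(Sxz) ** 2 / np.abs(Szz)) * \
          (np.abs(Syy) - np.abs(Syz) ** 2 / np.abs(Szz))
    return f, num / den

# Toy example: a sensor tracks a ~4 Hz auditory envelope; the lip signal is
# correlated with the audio. Partial coherence asks how much audio tracking
# remains once the shared lip signal is accounted for.
fs = 250
t = np.arange(0, 60, 1 / fs)
audio_env = np.sin(2 * np.pi * 4 * t) + 0.5 * np.random.randn(len(t))
lip_aperture = audio_env + 0.5 * np.random.randn(len(t))     # correlated with audio
sensor = 0.8 * audio_env + 0.3 * lip_aperture + np.random.randn(len(t))
f, pcoh = partial_coherence(sensor, audio_env, lip_aperture, fs)
```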


Subject(s)
Auditory Cortex , Speech Perception , Visual Cortex , Acoustic Stimulation , Auditory Cortex/physiology , Auditory Perception , Female , Humans , Lipreading , Male , Speech/physiology , Speech Perception/physiology , Visual Cortex/physiology , Visual Perception/physiology
9.
Nat Protoc ; 17(3): 596-617, 2022 03.
Article in English | MEDLINE | ID: mdl-35121855

ABSTRACT

Low-intensity transcranial electrical stimulation (tES), including alternating or direct current stimulation, applies weak electrical stimulation to modulate the activity of brain circuits. Integration of tES with concurrent functional MRI (fMRI) allows for the mapping of neural activity during neuromodulation, supporting causal studies of both brain function and tES effects. Methodological aspects of tES-fMRI studies underpin the results, and reporting them in appropriate detail is required for reproducibility and interpretability. Despite the growing number of published reports, there are no consensus-based checklists for disclosing methodological details of concurrent tES-fMRI studies. The objective of this work was to develop a consensus-based checklist of reporting standards for concurrent tES-fMRI studies to support methodological rigor, transparency and reproducibility (ContES checklist). A two-phase Delphi consensus process was conducted by a steering committee (SC) of 13 members and 49 expert panelists through the International Network of the tES-fMRI Consortium. The process began with a circulation of a preliminary checklist of essential items and additional recommendations, developed by the SC on the basis of a systematic review of 57 concurrent tES-fMRI studies. Contributors were then invited to suggest revisions or additions to the initial checklist. After the revision phase, contributors rated the importance of the 17 essential items and 42 additional recommendations in the final checklist. The state of methodological transparency within the 57 reviewed concurrent tES-fMRI studies was then assessed by using the checklist. Experts refined the checklist through the revision and rating phases, leading to a checklist with three categories of essential items and additional recommendations: (i) technological factors, (ii) safety and noise tests and (iii) methodological factors. The level of reporting of checklist items varied among the 57 concurrent tES-fMRI papers, ranging from 24% to 76%. On average, 53% of checklist items were reported in a given article. In conclusion, use of the ContES checklist is expected to enhance the methodological reporting quality of future concurrent tES-fMRI studies and increase methodological transparency and reproducibility.


Subject(s)
Checklist , Transcranial Direct Current Stimulation , Consensus , Magnetic Resonance Imaging , Reproducibility of Results
10.
Cognition ; 224: 105051, 2022 07.
Article in English | MEDLINE | ID: mdl-35219954

ABSTRACT

This study investigates the dynamics of speech envelope tracking during speech production, listening and self-listening. We use a paradigm in which participants listen to natural speech (Listening), produce natural speech (Speech Production), and listen to the playback of their own speech (Self-Listening), all while their neural activity is recorded with EEG. After time-locking EEG data collection and auditory recording and playback, we used a Gaussian copula mutual information measure to estimate the relationship between information content in the EEG and auditory signals. In the 2-10 Hz frequency range, we identified different latencies for maximal speech envelope tracking during speech production and speech perception. Maximal speech tracking takes place approximately 110 ms after auditory presentation during perception and 25 ms before vocalisation during speech production. These results describe a specific timeline for speech tracking in speakers and listeners, in line with the idea of a speech chain and, hence, with delays in communication.
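A minimal sketch of the Gaussian-copula mutual information (GCMI) approach is shown below: each signal is rank-transformed onto a standard normal (copula normalisation) and MI is computed from the resulting correlation, here evaluated at a range of EEG-to-envelope lags. The sampling rate, lag range, and simulated 110 ms delay are illustrative, and this one-dimensional version omits the multivariate extensions used in full analyses.

```python
import numpy as np
from scipy.stats import rankdata, norm

def copnorm(x):
    """Copula-normalise: map data through its empirical CDF onto a standard normal."""
    return norm.ppf(rankdata(x) / (len(x) + 1))

def gcmi_1d(x, y):
    """Gaussian-copula mutual information (bits) between two 1-D signals."""
    cx, cy = copnorm(x), copnorm(y)
    r = np.corrcoef(cx, cy)[0, 1]
    return -0.5 * np.log2(1.0 - r ** 2)

def lagged_mi(eeg, envelope, fs, lags_ms):
    """MI between EEG and speech envelope at a range of lags (positive = EEG after audio)."""
    out = []
    for lag_ms in lags_ms:
        shift = int(round(lag_ms / 1000.0 * fs))
        if shift >= 0:
            x, y = eeg[shift:], envelope[:len(envelope) - shift]
        else:
            x, y = eeg[:shift], envelope[-shift:]
        out.append(gcmi_1d(x, y))
    return np.array(out)

# Toy example: EEG that tracks the envelope ~110 ms later should show peak MI near +110 ms.
fs = 250
env = np.random.randn(fs * 120)
delay = int(0.110 * fs)
eeg = np.roll(env, delay) + np.random.randn(len(env))
lags = np.arange(-100, 301, 10)            # lags in ms
mi = lagged_mi(eeg, env, fs, lags)
print("peak lag (ms):", lags[np.argmax(mi)])
```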


Subject(s)
Speech Perception , Speech , Auditory Perception , Brain , Electroencephalography , Humans
11.
Neurobiol Lang (Camb) ; 3(4): 665-698, 2022.
Article in English | MEDLINE | ID: mdl-36742011

ABSTRACT

Listening to spoken language engages domain-general multiple demand (MD; frontoparietal) regions of the human brain, in addition to domain-selective (frontotemporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of understanding language. In a behavioural study of volunteers (n = 19) with chronic brain lesions, but without aphasia, we assessed the causal role of these networks in perceiving, comprehending, and adapting to spoken sentences made more challenging by acoustic degradation or lexico-semantic ambiguity. We measured perception of and adaptation to acoustically degraded (noise-vocoded) sentences with a word report task before and after training. Participants with greater damage to MD but not language regions required more vocoder channels to achieve 50% word report, indicating impaired perception. Perception improved following training, reflecting adaptation to acoustic degradation, but adaptation was unrelated to lesion location or extent. Comprehension of spoken sentences with semantically ambiguous words was measured with a sentence coherence judgement task. Accuracy was high and unaffected by lesion location or extent. Adaptation to semantic ambiguity was measured in a subsequent word association task, which showed that availability of lower-frequency meanings of ambiguous words increased following their comprehension (word-meaning priming). Word-meaning priming was reduced for participants with greater damage to language but not MD regions. Language and MD networks make dissociable contributions to challenging speech comprehension: using recent experience to update word-meaning preferences depends on language-selective regions, whereas the domain-general MD network plays a causal role in reporting words from degraded speech.
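The "channels needed for 50% word report" measure can be illustrated by fitting a logistic psychometric function to word-report proportions across vocoder channel counts and reading off its midpoint. The data points and starting values below are hypothetical, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: proportion of words reported vs. vocoder channels."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical word-report proportions for one participant at several channel counts.
channels = np.array([1, 2, 4, 6, 8, 12, 16])
report = np.array([0.02, 0.10, 0.35, 0.60, 0.78, 0.90, 0.95])

(x0, k), _ = curve_fit(logistic, channels, report, p0=[5.0, 1.0])
print(f"channels needed for 50% word report: {x0:.2f}")
```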

12.
J Neurosci ; 41(32): 6919-6932, 2021 08 11.
Article in English | MEDLINE | ID: mdl-34210777

ABSTRACT

Human listeners achieve quick and effortless speech comprehension through computations of conditional probability using Bayes' rule. However, the neural implementation of Bayesian perceptual inference remains unclear. Competitive-selection accounts (e.g., TRACE) propose that word recognition is achieved through direct inhibitory connections between units representing candidate words that share segments (e.g., hygiene and hijack share /haidʒ/). Manipulations that increase lexical uncertainty should increase neural responses associated with word recognition when words cannot be uniquely identified. In contrast, predictive-selection accounts (e.g., Predictive Coding) propose that spoken word recognition involves comparing heard and predicted speech sounds and using prediction error to update lexical representations. Increased lexical uncertainty in words such as hygiene and hijack will increase prediction error, and hence neural activity, only at later time points when different segments are predicted. We collected MEG data from male and female listeners to test these two Bayesian mechanisms and used a competitor priming manipulation to change the prior probability of specific words. Lexical decision responses showed delayed recognition of target words (hygiene) following presentation of a neighboring prime word (hijack) several minutes earlier. However, this effect was not observed with pseudoword primes (higent) or targets (hijure). Crucially, MEG responses in the STG showed greater neural responses for word-primed words after the point at which they were uniquely identified (after /haidʒ/ in hygiene) but not before, while similar changes were again absent for pseudowords. These findings are consistent with accounts of spoken word recognition in which neural computations of prediction error play a central role. SIGNIFICANCE STATEMENT: Effective speech perception is critical to daily life and involves computations that combine speech signals with prior knowledge of spoken words (i.e., Bayesian perceptual inference). This study specifies the neural mechanisms that support spoken word recognition by testing two distinct implementations of Bayesian perceptual inference. Most established theories propose direct competition between lexical units such that inhibition of irrelevant candidates leads to selection of critical words. Our results instead support predictive-selection theories (e.g., Predictive Coding): by comparing heard and predicted speech sounds, neural computations of prediction error can help listeners continuously update lexical probabilities, allowing for more rapid word identification.
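To illustrate how competitor priming changes Bayesian word recognition, the toy example below applies Bayes' rule over a three-word lexicon as the shared onset /haidʒ/ is heard, comparing a baseline prior with one in which the competitor's prior has been raised by recent exposure. The lexicon and prior values are invented for illustration; this is not the authors' model.

```python
import numpy as np

# Toy lexicon: two words that share an onset (/haidʒ/) plus an unrelated word.
lexicon = {"hygiene": "haidʒiːn", "hijack": "haidʒak", "hello": "hɛləʊ"}

def posterior(prior, heard_prefix):
    """Bayes' rule over candidate words given the segments heard so far.
    Likelihood is 1 if the word is consistent with the prefix, else (near) 0."""
    like = np.array([1.0 if form.startswith(heard_prefix) else 1e-6
                     for form in lexicon.values()])
    p = prior * like
    return p / p.sum()

words = list(lexicon)
baseline_prior = np.array([0.4, 0.4, 0.2])

# Competitor priming: hearing "hijack" minutes earlier raises its prior probability,
# so after the shared segments /haidʒ/ the target "hygiene" is at a disadvantage.
primed_prior = np.array([0.3, 0.5, 0.2])

for label, prior in [("baseline", baseline_prior), ("hijack primed", primed_prior)]:
    post = posterior(prior, "haidʒ")
    print(label, dict(zip(words, np.round(post, 3))))
```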


Subject(s)
Recognition, Psychology/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Adult , Bayes Theorem , Comprehension/physiology , Female , Humans , Magnetoencephalography , Male , Middle Aged , Young Adult
13.
Psychol Sci ; 32(4): 471-484, 2021 04.
Article in English | MEDLINE | ID: mdl-33634711

ABSTRACT

There is profound and long-standing debate over the role of explicit instruction in reading acquisition. In this research, we investigated the impact of teaching regularities in the writing system explicitly rather than relying on learners to discover these regularities through text experience alone. Over 10 days, 48 adults learned to read novel words printed in two artificial writing systems. One group learned spelling-to-sound and spelling-to-meaning regularities solely through experience with the novel words, whereas the other group received a brief session of explicit instruction on these regularities before training commenced. Results showed that virtually all participants who received instruction performed at ceiling on tests that probed generalization of underlying regularities. In contrast, despite up to 18 hr of training on the novel words, less than 25% of discovery learners performed on par with those who received instruction. These findings illustrate the dramatic impact of teaching method on outcomes during reading acquisition.


Subject(s)
Learning , Reading , Adult , Generalization, Psychological , Humans , Language , Writing
14.
PLoS Biol ; 19(2): e3001142, 2021 02.
Article in English | MEDLINE | ID: mdl-33635855

ABSTRACT

Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or "entrained") to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses which continue after the end of the stimulus. Such sustained effects provide evidence for true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible speech produces oscillatory responses in magnetoencephalography (MEG) which outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) responses and rhythmic intelligible speech can predict the tACS phase that leads to most accurate speech perception. Together, we provide fundamental results for several lines of research, including neural entrainment and tACS, and reveal endogenous neural oscillations as a key underlying principle for speech perception.
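One simple way to ask whether rhythmic responses outlast the stimulus, in the spirit of Experiment 1, is to compare spectral power at the stimulation rate in a post-offset window against neighbouring frequencies. The 2 Hz rate, window length, and decay constant below are illustrative assumptions, not the analysis reported in the paper.

```python
import numpy as np

def sustained_power(post_signal, fs, stim_hz, bw=0.5):
    """Power at the stimulation rate in a post-stimulus window, relative to neighbours.
    Values > 1 suggest rhythmic activity outlasting the stimulus."""
    freqs = np.fft.rfftfreq(len(post_signal), 1 / fs)
    power = np.abs(np.fft.rfft(post_signal * np.hanning(len(post_signal)))) ** 2
    target = power[np.abs(freqs - stim_hz) <= bw].mean()
    neighb = power[(np.abs(freqs - stim_hz) > bw) & (np.abs(freqs - stim_hz) <= 3 * bw)].mean()
    return target / neighb

# Toy example: a decaying 2 Hz oscillation after stimulus offset vs. pure noise.
fs, stim_hz = 250, 2.0
t = np.arange(0, 4, 1 / fs)                      # 4 s of post-offset data
decaying = np.exp(-t / 2) * np.sin(2 * np.pi * stim_hz * t) + 0.5 * np.random.randn(len(t))
noise_only = 0.5 * np.random.randn(len(t))
print(sustained_power(decaying, fs, stim_hz), sustained_power(noise_only, fs, stim_hz))
```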


Subject(s)
Brain/physiology , Speech Perception/physiology , Adult , Biological Clocks , Electroencephalography , Female , Humans , Magnetoencephalography , Male , Middle Aged , Transcranial Direct Current Stimulation
15.
Elife ; 9, 2020 11 04.
Article in English | MEDLINE | ID: mdl-33147138

ABSTRACT

Human speech perception can be described as Bayesian perceptual inference, but how are these Bayesian computations instantiated neurally? We used magnetoencephalographic recordings of brain responses to degraded spoken words and experimentally manipulated signal quality and prior knowledge. We first demonstrate that spectrotemporal modulations in speech are more strongly represented in neural responses than alternative speech representations (e.g., spectrogram or articulatory features). Critically, we found an interaction between speech signal quality and expectations from prior written text on the quality of neural representations; increased signal quality enhanced neural representations of speech that mismatched with prior expectations, but led to greater suppression of speech that matched prior expectations. This interaction is a unique neural signature of prediction error computations and is apparent in neural responses within 100 ms of speech input. Our findings contribute to the detailed specification of a computational model of speech perception based on predictive coding frameworks.


Subject(s)
Brain/physiology , Brain/physiopathology , Magnetoencephalography , Speech Disorders/physiopathology , Speech Perception , Adolescent , Adult , Bayes Theorem , Computer Simulation , Female , Humans , Linear Models , Male , Neurons/physiology , Regression Analysis , Speech , Young Adult
16.
J Cogn Neurosci ; 32(2): 226-240, 2020 02.
Article in English | MEDLINE | ID: mdl-31659922

ABSTRACT

Several recent studies have used transcranial alternating current stimulation (tACS) to demonstrate a causal role of neural oscillatory activity in speech processing. In particular, it has been shown that the ability to understand speech in a multi-speaker scenario or background noise depends on the timing of speech presentation relative to simultaneously applied tACS. However, it is possible that tACS did not change actual speech perception but rather auditory stream segregation. In this study, we tested whether the phase relation between tACS and the rhythm of degraded words, presented in silence, modulates word report accuracy. We found strong evidence for a tACS-induced modulation of speech perception, but only if the stimulation was applied bilaterally using ring electrodes (not for unilateral left hemisphere stimulation with square electrodes). These results were only obtained when data were analyzed using a statistical approach that was identified as optimal in a previous simulation study. The effect was driven by a phasic disruption of word report scores. Our results suggest a causal role of neural entrainment for speech perception and emphasize the importance of optimizing stimulation protocols and statistical approaches for brain stimulation research.


Subject(s)
Cerebral Cortex/physiology , Speech Perception/physiology , Transcranial Direct Current Stimulation , Adult , Female , Humans , Male , Placebos , Psychomotor Performance/physiology , Time Factors , Young Adult
17.
J Cogn Neurosci ; 32(3): 403-425, 2020 03.
Article in English | MEDLINE | ID: mdl-31682564

ABSTRACT

Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and EEG, we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) initial activation and selection of meanings in response to an ambiguous word and (ii) sentence reinterpretation in response to subsequent disambiguation to a subordinate meaning. Ambiguous words elicited an increased neural response approximately 400-800 msec after their acoustic offset compared with unambiguous control words in left frontotemporal MEG sensors, corresponding to sources in bilateral frontotemporal brain regions. This response may reflect increased demands on processes by which multiple alternative meanings are activated and maintained until later selection. Disambiguating words heard after an ambiguous word were associated with marginally increased neural activity over bilateral temporal MEG sensors and a central cluster of EEG electrodes, which localized to similar bilateral frontal and left temporal regions. This later neural response may reflect effortful semantic integration or elicitation of prediction errors that guide reinterpretation of previously selected word meanings. Across participants, the amplitude of the ambiguity response showed a marginal positive correlation with comprehension scores, suggesting that sentence comprehension benefits from additional processing around the time of an ambiguous word. Better comprehenders may have increased availability of subordinate meanings, perhaps due to higher quality lexical representations and reflected in a positive correlation between vocabulary size and comprehension success.


Subject(s)
Brain/physiology , Comprehension/physiology , Semantics , Speech Perception/physiology , Adult , Electroencephalography , Female , Humans , Magnetoencephalography , Male , Vocabulary , Young Adult
18.
Neuroimage ; 202: 116175, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31499178

ABSTRACT

Research on whether perception or other processes depend on the phase of neural oscillations is rapidly gaining popularity. However, it is unknown which methods are optimally suited to evaluate the hypothesized phase effect. Using a simulation approach, we here test the ability of different methods to detect such an effect on dichotomous (e.g., "hit" vs "miss") and continuous (e.g., scalp potentials) response variables. We manipulated parameters that characterise the phase effect or define the experimental approach to test for this effect. For each parameter combination and response variable, we identified an optimal method. We found that methods regressing single-trial responses on circular (sine and cosine) predictors perform best for all of the simulated parameters, regardless of the nature of the response variable (dichotomous or continuous). In sum, our study lays a foundation for optimized experimental designs and analyses in future studies investigating the role of phase for neural and behavioural responses. We provide MATLAB code for the statistical methods tested.
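The best-performing approach identified in these simulations, regressing single-trial responses on sine and cosine of phase, can be sketched as follows for a continuous response variable. The permutation test, trial numbers, and effect size are illustrative choices (a logistic model would replace the least-squares fit for dichotomous responses); the original MATLAB implementation accompanies the paper.

```python
import numpy as np

def phase_regression(phase, response, n_perm=1000, rng=None):
    """Regress single-trial responses on sine and cosine of the oscillatory phase.

    Returns the fitted modulation amplitude and a permutation p-value for the
    joint sine + cosine effect.
    """
    rng = np.random.default_rng(rng)
    X = np.column_stack([np.ones_like(phase), np.sin(phase), np.cos(phase)])

    def amplitude(y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.hypot(beta[1], beta[2])          # size of the phasic modulation

    obs = amplitude(response)
    null = np.array([amplitude(rng.permutation(response)) for _ in range(n_perm)])
    p = (np.sum(null >= obs) + 1) / (n_perm + 1)
    return obs, p

# Toy example: a response that genuinely depends on the phase of an oscillation.
rng = np.random.default_rng(0)
phase = rng.uniform(0, 2 * np.pi, 400)             # per-trial phase estimate
response = 0.6 + 0.1 * np.cos(phase - 1.0) + 0.2 * rng.standard_normal(400)
print(phase_regression(phase, response, rng=1))
```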


Subject(s)
Brain/physiology , Models, Neurological , Neurons/physiology , Perception/physiology , Computer Simulation , Data Interpretation, Statistical , Electroencephalography , Humans , Magnetoencephalography , Transcranial Direct Current Stimulation
19.
Proc Natl Acad Sci U S A ; 116(36): 17723-17728, 2019 09 03.
Article in English | MEDLINE | ID: mdl-31427523

ABSTRACT

Reading involves transforming arbitrary visual symbols into sounds and meanings. This study interrogated the neural representations in ventral occipitotemporal cortex (vOT) that support this transformation process. Twenty-four adults learned to read 2 sets of 24 novel words that shared phonemes and semantic categories but were written in different artificial orthographies. Following 2 wk of training, participants read the trained words while neural activity was measured with functional MRI. Representational similarity analysis on item pairs from the same orthography revealed that right vOT and posterior regions of left vOT were sensitive to basic visual similarity. Left vOT encoded letter identity and representations became more invariant to position along a posterior-to-anterior hierarchy. Item pairs that shared sounds or meanings, but were written in different orthographies with no letters in common, evoked similar neural patterns in anterior left vOT. These results reveal a hierarchical, posterior-to-anterior gradient in vOT, in which representations of letters become increasingly invariant to position and are transformed to convey spoken language information.
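A minimal representational similarity analysis (RSA) of the kind described above correlates a neural representational dissimilarity matrix, built from item-wise activation patterns, with a model matrix such as letter-identity overlap. The random patterns and model matrix below are placeholders standing in for per-item fMRI estimates and the actual orthographic model.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical inputs: one activation pattern per trained word in a region of
# interest (items x voxels), plus a model matrix of pairwise letter overlap.
rng = np.random.default_rng(0)
n_items, n_voxels = 24, 200
patterns = rng.standard_normal((n_items, n_voxels))        # stand-in for per-item fMRI patterns
letter_overlap = rng.uniform(0, 1, (n_items, n_items))     # stand-in for shared-letter proportions
letter_overlap = (letter_overlap + letter_overlap.T) / 2   # symmetrise

# Neural RDM: correlation distance between all item pairs (condensed upper triangle).
neural_rdm = pdist(patterns, metric='correlation')

# Model RDM in the same condensed order: dissimilarity = 1 - letter overlap.
iu = np.triu_indices(n_items, k=1)
model_rdm = 1.0 - letter_overlap[iu]

# RSA: rank-correlate neural and model dissimilarities across item pairs.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA Spearman rho = {rho:.3f}, p = {p:.3g}")
```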


Subject(s)
Language , Magnetic Resonance Imaging , Occipital Lobe , Reading , Verbal Learning/physiology , Adolescent , Adult , Female , Humans , Male , Occipital Lobe/diagnostic imaging , Occipital Lobe/physiology