Results 1 - 10 of 10
1.
BMC Neurol ; 23(1): 359, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803266

ABSTRACT

BACKGROUND: Sleep spindle activity is commonly estimated by measuring sigma power during stage 2 non-rapid eye movement (NREM2) sleep. However, spindles account for little of the total NREM2 interval, so sigma power over the entire interval may be misleading. This study compares spindle measures derived from direct automated spindle detection with those from gross power spectral analyses for the purposes of clinical trial design. METHODS: We estimated spindle activity in a set of 8,440 overnight electroencephalogram (EEG) recordings from 5,793 patients from the Sleep Heart Health Study using both sigma power and direct automated spindle detection. Performance of the two methods was evaluated by determining the sample size each would require to detect age-related decline in spindle coherence in a simulated clinical trial. RESULTS: In the simulated trial, sigma power required a sample size of 115 to achieve 95% power to identify age-related changes in sigma coherence, while automated spindle detection required a sample size of only 60. CONCLUSIONS: Measurements of spindle activity utilizing automated spindle detection outperformed conventional sigma power analysis by a wide margin, suggesting that many studies would benefit from incorporation of automated spindle detection. These results further suggest that some previous studies that failed to detect changes in sigma power or coherence may have failed simply because they were underpowered.
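The sample-size comparison above follows the logic of a standard power calculation: a noisier outcome measure (sigma power) implies a smaller standardized effect, and therefore a larger required n. A minimal sketch using the normal approximation for a one-sample z-test; the effect sizes here are hypothetical, chosen only to illustrate the direction of the comparison, not taken from the study.

```python
from math import ceil

def required_n(effect_size, alpha=0.05, power=0.95):
    """Normal-approximation sample size for detecting a standardized
    effect with a two-sided z-test at the given alpha and power."""
    z_alpha = 1.959964  # Phi^-1(1 - 0.05/2)
    z_beta = 1.644854   # Phi^-1(0.95)
    return ceil(((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical standardized effects: the noisier sigma-power measure
# corresponds to a smaller effect, hence a larger required sample.
print(required_n(0.34))  # sigma power (illustrative effect size)
print(required_n(0.48))  # automated detection (illustrative)
```

The larger the portion of the NREM2 interval contributing only noise, the smaller the standardized effect and the steeper the sample-size penalty, which is the pattern the study reports.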


Subject(s)
Sleep Stages , Sleep , Humans , Polysomnography/methods , Electroencephalography/methods
2.
J Cogn Neurosci ; 32(10): 2001-2012, 2020 10.
Article in English | MEDLINE | ID: mdl-32662731

ABSTRACT

A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80-93, 2014]. In that study, participants first heard ambiguous /s/-/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/-/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain.
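The decoding logic described above, training a classifier on unambiguous trials and generalizing it to ambiguous tokens, can be sketched with a simple nearest-class-mean decoder on synthetic "voxel patterns". This is an illustrative stand-in only: the data are simulated and the classifier here is not the study's actual MVPA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 50

# Synthetic voxel patterns for unambiguous /s/ and /sh/ trials
# (hypothetical stand-in for the archival fMRI data).
mu_s = rng.normal(size=n_vox)
mu_sh = rng.normal(size=n_vox)
train_s = mu_s + 0.5 * rng.normal(size=(20, n_vox))
train_sh = mu_sh + 0.5 * rng.normal(size=(20, n_vox))

# "Train": a nearest-class-mean decoder over the two categories.
c_s, c_sh = train_s.mean(axis=0), train_sh.mean(axis=0)

def classify(pattern):
    d_s = np.linalg.norm(pattern - c_s)
    d_sh = np.linalg.norm(pattern - c_sh)
    return "s" if d_s < d_sh else "sh"

# "Generalize": an ambiguous token evokes a blend of the two patterns;
# the decoder assigns it to whichever category its pattern leans toward.
ambiguous = 0.6 * mu_s + 0.4 * mu_sh
print(classify(ambiguous))
```

The key property mirrored here is that the decoder's label for an ambiguous pattern tracks which category the pattern approximates, paralleling the finding that activation patterns track trial-by-trial perception.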


Subject(s)
Speech Perception , Speech , Humans , Language , Learning , Phonetics
4.
Top Cogn Sci ; 10(4): 818-834, 2018 10.
Article in English | MEDLINE | ID: mdl-29542857

ABSTRACT

Social and linguistic perceptions are linked. On one hand, talker identity affects speech perception. On the other hand, speech itself provides information about a talker's identity. Here, we propose that the same probabilistic knowledge might underlie both socially conditioned linguistic inferences and linguistically conditioned social inferences. Our computational-level approach, the ideal adapter, starts from the idea that listeners use probabilistic knowledge of covariation between social, linguistic, and acoustic cues in order to infer the most likely explanation of the speech signals they hear. As a first step toward understanding social inferences in this framework, we use a simple ideal observer model to show that it would be possible to infer aspects of a talker's identity using cue distributions based on actual speech production data. This suggests the possibility of a single formal framework for social and linguistic inferences and the interactions between them.
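The ideal observer inference described above amounts to Bayes' rule over talker groups given an acoustic cue. A minimal sketch with made-up Gaussian cue distributions; the group labels, means, and SDs are hypothetical, whereas the paper uses distributions estimated from actual production data.

```python
import numpy as np

# Hypothetical per-group cue distributions (e.g., a VOT-like cue in ms):
# (mean, sd) pairs invented for illustration only.
groups = {"A": (15.0, 5.0), "B": (30.0, 8.0)}
prior = {"A": 0.5, "B": 0.5}

def posterior(cue):
    """P(group | cue) via Bayes' rule with Gaussian likelihoods."""
    def lik(mu, sd):
        return np.exp(-0.5 * ((cue - mu) / sd) ** 2) / sd
    unnorm = {g: prior[g] * lik(*params) for g, params in groups.items()}
    z = sum(unnorm.values())
    return {g: v / z for g, v in unnorm.items()}

print(posterior(18.0))  # a cue near group A's mean favors group A
```

With realistic cue distributions in place of these invented ones, the same computation yields graded social inferences from the speech signal, the direction of inference the abstract highlights.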


Subject(s)
Models, Psychological , Psycholinguistics , Social Perception , Speech Perception/physiology , Thinking/physiology , Uncertainty , Adult , Female , Humans , Male , Middle Aged , Proof of Concept Study
5.
Lang Cogn Neurosci ; 34(1): 43-68, 2018.
Article in English | MEDLINE | ID: mdl-30619905

ABSTRACT

One of the persistent puzzles in understanding human speech perception is how listeners cope with talker variability. One thing that might help listeners is structure in talker variability: rather than varying randomly, talkers of the same gender, dialect, age, etc. tend to produce language in similar ways. Listeners are sensitive to this covariation between linguistic variation and socio-indexical variables. In this paper I present new techniques based on ideal observer models to quantify (1) the amount and type of structure in talker variation (informativity of a grouping variable), and (2) how useful such structure can be for robust speech recognition in the face of talker variability (the utility of a grouping variable). I demonstrate these techniques in two phonetic domains, word-initial stop voicing and vowel identity, and show that these domains have different amounts and types of talker variability, consistent with previous, impressionistic findings. An R package (phondisttools) accompanies this paper, and the source and data are available from osf.io/zv6e3.
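The intuition behind "informativity of a grouping variable" can be sketched as the degree to which knowing the group shrinks uncertainty about the cue. The measure below is a simple variance-reduction proxy, analogous in spirit but not identical to the paper's ideal-observer measures, and the cue distributions are simulated.

```python
import numpy as np

def informativity(cues_by_group):
    """Fraction of total cue variance explained by group membership:
    0 means the grouping is uninformative; values near 1 mean group
    identity strongly predicts the cue."""
    pooled = np.concatenate(list(cues_by_group.values()))
    total_var = pooled.var()
    within_var = np.mean([c.var() for c in cues_by_group.values()])
    return 1.0 - within_var / total_var

rng = np.random.default_rng(2)
# Structured variability: the groups differ systematically in the cue.
structured = {"g1": rng.normal(0, 1, 200), "g2": rng.normal(3, 1, 200)}
# Unstructured variability: the groups share one cue distribution.
unstructured = {"g1": rng.normal(0, 1, 200), "g2": rng.normal(0, 1, 200)}
print(informativity(structured) > informativity(unstructured))
```

A grouping variable like dialect is useful for robust recognition exactly to the extent that it behaves like the structured case rather than the unstructured one.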

6.
Psychon Bull Rev ; 23(3): 678-91, 2016 06.
Article in English | MEDLINE | ID: mdl-26438255

ABSTRACT

When a listener hears many good examples of a /b/ in a row, they are less likely to classify other sounds on, e.g., a /b/-to-/d/ continuum as /b/. This phenomenon is known as selective adaptation and is a well-studied property of speech perception. Traditionally, selective adaptation is seen as a mechanistic property of the speech perception system, and attributed to fatigue in acoustic-phonetic feature detectors. However, recent developments in our understanding of non-linguistic sensory adaptation and higher-level adaptive plasticity in speech perception and language comprehension suggest that it is time to revisit the phenomenon of selective adaptation. We argue that selective adaptation is better thought of as a computational property of the speech perception system. Drawing on a common thread in recent work on both non-linguistic sensory adaptation and plasticity in language comprehension, we furthermore propose that selective adaptation can be seen as a consequence of distributional learning across multiple levels of representation. This proposal opens up new questions for research on selective adaptation itself, and also suggests that selective adaptation can be an important bridge between work on adaptation in low-level sensory systems and the complicated plasticity of the adult language comprehension system.


Subject(s)
Adaptation, Physiological/physiology , Learning/physiology , Speech Perception/physiology , Comprehension , Fatigue , Humans , Language , Phonetics
7.
Lang Learn ; 66(4): 900-944, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28348442

ABSTRACT

We present a framework of second and additional language (L2/Ln) acquisition motivated by recent work on socio-indexical knowledge in first language (L1) processing. The distribution of linguistic categories covaries with socio-indexical variables (e.g., talker identity, gender, dialects). We summarize evidence that implicit probabilistic knowledge of this covariance is critical to L1 processing, and propose that L2/Ln learning uses the same type of socio-indexical information to probabilistically infer latent hierarchical structure over previously learned and new languages. This structure guides the acquisition of new languages based on their inferred place within that hierarchy, and is itself continuously revised based on new input from any language. This proposal unifies L1 processing and L2/Ln acquisition as probabilistic inference under uncertainty over socio-indexical structure. It also offers a new perspective on crosslinguistic influences during L2/Ln learning, accommodating gradient and continued transfer (both negative and positive) from previously learned to novel languages, and vice versa.

8.
Psychol Rev ; 122(2): 148-203, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25844873

ABSTRACT

Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker's /p/ might be physically indistinguishable from another talker's /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively nonstationary world and propose that the speech perception system overcomes this challenge by (a) recognizing previously encountered situations, (b) generalizing to other situations based on previous similar experience, and (c) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (a) to (c) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on 2 critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires that listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these 2 aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension.
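The distributional (belief-updating) learning model described above can be sketched as conjugate Bayesian updating of a listener's belief about a category's mean cue value. The prior strength, exposure values, and counts below are illustrative, not the paper's fitted parameters.

```python
# Incremental belief updating over a category's mean cue value:
# a minimal sketch of distributional learning under a conjugate
# (normal-mean) model with known cue variance.
def update(mu, kappa, x):
    """One belief-updating step after observing cue value x.
    mu: current mean belief; kappa: pseudo-observation count
    encoding confidence in that belief."""
    mu_new = (kappa * mu + x) / (kappa + 1)
    return mu_new, kappa + 1

mu, kappa = 0.0, 10.0        # prior: category centered at cue value 0
for x in [2.0] * 20:         # exposure: shifted productions at +2
    mu, kappa = update(mu, kappa, x)
print(round(mu, 3))  # belief has moved most of the way toward 2
```

Because each observation shifts the mean belief toward the evidence in proportion to its weight against the accumulated pseudo-count, the model naturally produces incremental adaptation: early exposure moves beliefs quickly, and continued consistent input yields diminishing adjustments.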


Subject(s)
Adaptation, Psychological/physiology , Generalization, Psychological/physiology , Recognition, Psychology/physiology , Speech Perception/physiology , Humans
9.
J Mem Lang ; 71(1): 145-163, 2014 Feb 01.
Article in English | MEDLINE | ID: mdl-24511179

ABSTRACT

Two visual-world experiments examined listeners' use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as "The … ladder is the target". With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200-240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles which contained natural anticipatory coarticulation pertaining to the onset of the target word ("The ladder … is the target"). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article's vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for "data explanation" approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed.
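A "simple Gaussian classifier" of the kind mentioned above can be sketched as one Gaussian per class over a formant measure, with classification by higher log-likelihood. The formant values, class labels, and effect magnitude below are invented for illustration; the study's classifier worked from measured formants of the article's vowel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic F2 values (Hz) from an article's vowel, coarticulated
# toward two hypothetical upcoming onsets; real input would be
# formants measured over the first few pitch periods.
f2_before_b = rng.normal(1100, 80, size=40)  # before a labial onset
f2_before_d = rng.normal(1500, 80, size=40)  # before an alveolar onset

def fit(samples):
    """Fit a one-dimensional Gaussian: (mean, sd)."""
    return samples.mean(), samples.std()

def loglik(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

params_b = fit(f2_before_b)
params_d = fit(f2_before_d)

def predict(f2):
    """Label a new token by the class with higher log-likelihood."""
    return "b" if loglik(f2, *params_b) > loglik(f2, *params_d) else "d"

print(predict(1150), predict(1450))
```

The point of the demonstration is that coarticulatory information available before word onset is, in principle, sufficient for a statistical listener to anticipate the upcoming sound, which is what the earlier eye-movement effects in Experiment 2 suggest listeners actually do.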

10.
Mem Cognit ; 42(3): 508-24, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24217892

ABSTRACT

According to an influential multiple-systems model of category learning, an implicit procedural system governs the learning of information-integration category structures, whereas a rule-based system governs the learning of explicit rule-based categories. Support for this idea has come in part from demonstrations that motor interference, in the form of inconsistent mapping between response location and category labels, results in observed deficits, but only for learning information-integration category structures. In this article, we argue that this response location manipulation results in a potentially more cognitively complex task in which the feedback is difficult to interpret. In one experiment, we attempted to attenuate the cognitive complexity by providing more information in the feedback, and demonstrated that this eliminates the observed performance deficit for information-integration category structures. In a second experiment, we demonstrated similar interference of the inconsistent mapping manipulation in a rule-based category structure. We claim that task complexity, and not separate systems, might be the source of the original dissociation between performance on rule-based and information-integration tasks.


Subject(s)
Concept Formation/physiology , Learning/physiology , Task Performance and Analysis , Adult , Humans , Young Adult