Results 1 - 13 of 13
1.
Eur J Neurosci ; 59(12): 3162-3183, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38626924

ABSTRACT

Musical engagement can be conceptualized through various activities, modes of listening and listener states. Recent research has reported that a state of focused engagement can be indexed by the inter-subject correlation (ISC) of audience responses to a shared naturalistic stimulus. While statistically significant ISC has been reported during music listening, we lack insight into the temporal dynamics of engagement over the course of musical works-such as those composed in the Western classical style-which involve the formulation of expectations that are realized or derailed at subsequent points of arrival. Here, we use the ISC of electroencephalographic (EEG) and continuous behavioural (CB) responses to investigate the time-varying dynamics of engagement with functional tonal music. From a sample of adult musicians who listened to a complete cello concerto movement, we found that ISC varied throughout the excerpt for both measures. In particular, significant EEG ISC was observed during periods of musical tension that built to climactic highpoints, while significant CB ISC corresponded more to declarative entrances and points of arrival. Moreover, we found that a control stimulus retaining envelope characteristics of the intact music, but little other temporal structure, also elicited significantly correlated EEG and CB responses, though to lesser extents than the original version. In sum, these findings shed light on the temporal dynamics of engagement during music listening and clarify specific aspects of musical engagement that may be indexed by each measure.
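The ISC measure described here can be illustrated as the mean pairwise correlation between listeners' response time series. A minimal numpy sketch, assuming one preprocessed time series per subject; published EEG-ISC pipelines typically first project multichannel recordings onto maximally correlated components, a step omitted here:

```python
import numpy as np

def inter_subject_correlation(responses):
    """Mean pairwise Pearson correlation across subjects.

    responses: array of shape (n_subjects, n_samples), one response
    time series per subject (e.g., a single EEG component or a
    continuous behavioural rating trace).
    """
    # z-score each subject's time series (population std)
    z = (responses - responses.mean(axis=1, keepdims=True)) \
        / responses.std(axis=1, keepdims=True)
    n = responses.shape[0]
    corrs = [np.mean(z[i] * z[j])  # Pearson r between subjects i and j
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))
```

Time-resolved variants apply the same computation within short sliding windows and test each window against a surrogate (e.g., phase-scrambled) null distribution.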


Subject(s)
Auditory Perception; Electroencephalography; Music; Humans; Electroencephalography/methods; Male; Female; Adult; Auditory Perception/physiology; Young Adult; Acoustic Stimulation/methods; Brain/physiology
2.
Dev Sci ; 27(2): e13435, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37465984

ABSTRACT

Learning to read depends on the ability to extract precise details of letter combinations, which convey critical information that distinguishes tens of thousands of visual word forms. One crucial neural developmental process supporting fluent reading is the brain's growing sensitivity to the statistical constraints inherent in combining letters into visual word forms. To test this idea in early readers, we tracked the impact of two years of schooling on within-subject longitudinal changes in cortical responses to three different properties of words: coarse tuning for print, and fine tuning to either familiar letter combinations within visual word forms or whole word representations. We then examined how each related to growth in reading skill. Three stimulus contrasts-words versus pseudofonts, words versus pseudowords, pseudowords versus nonwords-were presented while high-density EEG Steady-State Visual Evoked Potentials (SSVEPs, n = 31) were recorded. Internalization of abstract visual word form structures over two years of reading experience resulted in a near doubling of SSVEP amplitude, with increasing left lateralization. Longitudinal changes (decreases) in brain responses to such word form structural information were linked to growth in reading skills, especially in rapid automatic naming of letters. No such changes were observed for whole word representation processing or coarse tuning for print. Collectively, these findings indicate that sensitivity to visual word form structure develops rapidly through exposure to print and is linked to growth in reading skill. RESEARCH HIGHLIGHTS: Longitudinal changes in cortical responses to coarse print tuning, visual word form structure, and whole word representation were examined in early readers. Visual word form structure processing demonstrated striking growth, with a near doubling of SSVEP amplitude and increased left lateralization.
Longitudinal changes (decreases) in brain responses to visual word form structural information were linked to growth in rapid automatic naming of letters. No longitudinal changes were observed for whole word representation processing or coarse tuning for print.


Subject(s)
Electroencephalography; Reading; Humans; Evoked Potentials, Visual; Brain Mapping; Schools; Pattern Recognition, Visual/physiology
3.
Dev Sci ; 26(4): e13352, 2023 07.
Article in English | MEDLINE | ID: mdl-36413170

ABSTRACT

There are multiple levels of processing relevant to reading that vary in their visual, sublexical, and lexical orthographic processing demands. Segregating distinct cortical sources for each of these levels has been challenging in EEG studies of early readers. To address this challenge, we applied recent advances in analyzing high-density EEG using Steady-State Visual Evoked Potentials (SSVEPs) via data-driven Reliable Components Analysis (RCA) in a group of early readers spanning kindergarten to second grade. Three controlled stimulus contrasts-familiar words versus unfamiliar pseudofonts, familiar words versus pseudowords, and pseudowords versus nonwords-were used to isolate coarse print tuning, lexical processing, and sublexical orthography-related processing, respectively. First, three overlapping yet distinct neural sources-left ventral occipito-temporal (vOT), dorsal parietal, and primary visual cortex-were revealed underlying coarse print tuning. Second, we segregated distinct cortical sources for the other two levels of processing: lexical fine tuning over occipito-temporal/parietal regions and sublexical orthographic fine tuning over left occipital regions. Finally, exploratory group analyses based on children's reading fluency suggested that coarse print tuning emerges early even in children with limited reading knowledge, while sublexical and higher-level lexical processing emerge only in children with sufficient reading knowledge. RESEARCH HIGHLIGHTS: Cognitive processes underlying coarse print tuning, sublexical, and lexical fine tuning were examined in beginning readers. Three overlapping yet distinct neural sources-left ventral occipito-temporal (vOT), left temporo-parietal, and primary visual cortex-were revealed underlying coarse print tuning. Responses to sublexical orthographic fine tuning were found over left occipital regions, while responses to higher-level linguistic fine tuning were found over occipito-temporal/parietal regions.
Exploratory group analyses suggested that coarse print tuning emerges in children with limited reading knowledge, while sublexical and higher-level linguistic fine tuning effects emerge in children with sufficient reading knowledge.


Subject(s)
Evoked Potentials, Visual; Occipital Lobe; Child; Humans; Occipital Lobe/physiology; Reading; Temporal Lobe/physiology; Parietal Lobe; Evoked Potentials/physiology; Pattern Recognition, Visual/physiology; Brain Mapping
4.
PLoS One ; 17(1): e0260750, 2022.
Article in English | MEDLINE | ID: mdl-34986153

ABSTRACT

Today, collaborative playlists (CPs) translate long-standing social practices around music consumption to enable people to curate and listen to music together over streaming platforms. Yet despite the critical role of CPs in digitally connecting people through music, we still understand very little about the needs and desires of real-world users, and how CPs might be designed to best serve them. To bridge this gap in knowledge, we conducted a survey with CP users, collecting open-ended text responses on what aspects of CPs they consider most important and useful, and what they viewed as missing or desired. Using thematic analysis, we derived from these responses the Codebook of Critical CP Factors, which comprises eight categories. We gained insights into which aspects of CPs are particularly useful-for instance, the ability for multiple collaborators to edit a single playlist-and which are absent and desired-such as the ability for collaborators to communicate about a CP or the music contained therein. From these findings we propose design implications to inform further design of CP functionalities and platforms, and highlight potential benefits and challenges related to their adoption in current music services.


Subject(s)
Information Dissemination/methods; Music/psychology; Humans; Surveys and Questionnaires
5.
Front Neurosci ; 15: 702067, 2021.
Article in English | MEDLINE | ID: mdl-34955706

ABSTRACT

Musical minimalism utilizes the temporal manipulation of restricted collections of rhythmic, melodic, and/or harmonic materials. One example, Steve Reich's Piano Phase, offers listeners readily audible formal structure with unpredictable events at the local level. For example, pattern recurrences may generate strong expectations which are violated by small temporal and pitch deviations. A hyper-detailed listening strategy prompted by these minute deviations stands in contrast to the type of listening engagement typically cultivated around functional tonal Western music. Recent research has suggested that the inter-subject correlation (ISC) of electroencephalographic (EEG) responses to natural audio-visual stimuli objectively indexes a state of "engagement," demonstrating the potential of this approach for analyzing music listening. But can ISCs capture engagement with minimalist music, which features less obvious expectation formation and has historically received a wide range of reactions? To approach this question, we collected EEG and continuous behavioral (CB) data while 30 adults listened to an excerpt from Steve Reich's Piano Phase, as well as three controlled manipulations and a popular-music remix of the work. Our analyses reveal that EEG and CB ISC are highest for the remix stimulus and lowest for our most repetitive manipulation, that overall EEG ISC does not differ statistically between our most musically meaningful manipulations and Reich's original piece, and that compositional features drove engagement in time-resolved ISC analyses. We also found that aesthetic evaluations corresponded well with overall EEG ISC. Finally, we highlight co-occurrences between stimulus events and time-resolved EEG and CB ISC. We offer the CB paradigm as a useful analysis measure and note the value of minimalist compositions as a limit case for the neuroscientific study of music listening. Overall, our participants' neural, continuous behavioral, and questionnaire responses showed strong similarities that may help refine our understanding of the type of engagement indexed by ISC for musical stimuli.

6.
Sci Rep ; 11(1): 18229, 2021 09 14.
Article in English | MEDLINE | ID: mdl-34521874

ABSTRACT

EEG has been central to investigations of the time course of various neural functions underpinning visual word recognition. Recently the steady-state visual evoked potential (SSVEP) paradigm has been increasingly adopted for word recognition studies due to its high signal-to-noise ratio. Such studies, however, have been typically framed around a single source in the left ventral occipitotemporal cortex (vOT). Here, we combine SSVEP recorded from 16 adult native English speakers with a data-driven spatial filtering approach-Reliable Components Analysis (RCA)-to elucidate distinct functional sources with overlapping yet separable time courses and topographies that emerge when contrasting words with pseudofont visual controls. The first component topography was maximal over left vOT regions with a shorter latency (approximately 180 ms). A second component was maximal over more dorsal parietal regions with a longer latency (approximately 260 ms). Both components consistently emerged across a range of parameter manipulations including changes in the spatial overlap between successive stimuli, and changes in both base and deviation frequency. We then contrasted word-in-nonword and word-in-pseudoword to test the hierarchical processing mechanisms underlying visual word recognition. Results suggest that these hierarchical contrasts fail to evoke a unitary component that might be reasonably associated with lexical access.
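The core SSVEP measurement behind such paradigms, reading out response amplitude at a known stimulation frequency, can be sketched as follows. The sampling rate and frequencies in the example are illustrative, not the parameters of this study:

```python
import numpy as np

def ssvep_amplitude(signal, fs, target_freq):
    """Spectral amplitude at target_freq (Hz) for a 1-D EEG time series.

    A minimal sketch: real SSVEP pipelines average many epochs and use
    noise-corrected or signal-to-noise metrics, which are omitted here.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / n  # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_freq))]
```

In a word/control oddball design, the response at the base presentation rate indexes general visual processing, while the response at the deviation (oddball) rate isolates the contrast of interest.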


Subject(s)
Evoked Potentials, Visual; Reading; Adolescent; Adult; Female; Humans; Male; Middle Aged; Temporal Lobe/physiology; Visual Perception
7.
Hear Res ; 398: 108101, 2020 12.
Article in English | MEDLINE | ID: mdl-33142106

ABSTRACT

Successful mapping of meaningful labels to sound input requires accurate representation of that sound's acoustic variances in time and spectrum. For some individuals, such as children or those with hearing loss, having an objective measure of the integrity of this representation could be useful. Classification is a promising machine learning approach that can be used to objectively predict a stimulus label from the brain response. This approach has been previously used with auditory evoked potentials (AEPs) such as the frequency following response (FFR), but a number of key issues remain unresolved before classification can be translated into clinical practice. Specifically, past efforts at FFR classification have used data from a given subject for both training and testing the classifier. It is also unclear which components of the FFR elicit optimal classification accuracy. To address these issues, we recorded FFRs from 13 adults with normal hearing in response to speech and music stimuli. We compared labeling accuracy of two cross-validation classification approaches using FFR data: (1) a more traditional method combining subject data in both the training and testing sets, and (2) a "leave-one-out" approach, in which a subject's data are classified based on a model built exclusively from the data of other individuals. We also examined classification accuracy on decomposed and time-segmented FFRs. Our results indicate that the accuracy of leave-one-subject-out cross-validation approaches that obtained with the more conventional cross-validation classifications, while allowing a subject's results to be analyzed with respect to normative data pooled from a separate population. In addition, we demonstrate that classification accuracy is highest when the entire FFR is used to train the classifier. Taken together, these efforts contribute key steps toward translation of classification-based machine learning approaches into clinical practice.
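The leave-one-subject-out scheme contrasted above can be sketched as follows; the nearest-centroid rule is an illustrative stand-in for whichever classifier a given study actually trains:

```python
import numpy as np

def loso_accuracy(features, labels, subjects):
    """Leave-one-subject-out accuracy with a nearest-centroid classifier.

    features: (n_trials, n_features) response features (e.g., FFR epochs);
    labels: (n_trials,) stimulus labels; subjects: (n_trials,) subject IDs.
    """
    correct = 0
    for s in np.unique(subjects):
        train = subjects != s  # model built only from the other subjects
        test = subjects == s
        classes = np.unique(labels[train])
        centroids = np.stack([features[train & (labels == c)].mean(axis=0)
                              for c in classes])
        for x, y in zip(features[test], labels[test]):
            pred = classes[np.argmin(np.linalg.norm(centroids - x, axis=1))]
            correct += int(pred == y)
    return correct / len(labels)
```

Because every test subject is excluded from training, the resulting accuracy reflects how well an individual can be evaluated against a model built from a separate, normative population.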


Subject(s)
Music; Speech Perception; Acoustic Stimulation; Electroencephalography; Evoked Potentials, Auditory; Hearing Loss; Humans; Speech
8.
Vision Res ; 172: 27-45, 2020 07.
Article in English | MEDLINE | ID: mdl-32388211

ABSTRACT

The ventral visual stream is known to be organized hierarchically, where early visual areas processing simplistic features feed into higher visual areas processing more complex features. Hierarchical convolutional neural networks (CNNs) were largely inspired by this type of brain organization and have been successfully used to model neural responses in different areas of the visual system. In this work, we aim to understand how an instance of these models corresponds to temporal dynamics of human object processing. Using representational similarity analysis (RSA) and various similarity metrics, we compare the model representations with two electroencephalography (EEG) data sets containing responses to a shared set of 72 images. We find that there is a hierarchical relationship between the depth of a layer and the time at which peak correlation with the brain response occurs for certain similarity metrics in both data sets. However, when comparing across layers in the neural network, the correlation onset time did not appear in a strictly hierarchical fashion. We present two additional methods that improve upon the achieved correlations by optimally weighting features from the CNN and show that depending on the similarity metric, deeper layers of the CNN provide a better correspondence than shallow layers to later time points in the EEG responses. However, we do not find that shallow layers provide better correspondences than those of deeper layers to early time points, an observation that violates the hierarchy and is in agreement with the finding from the onset-time analysis. This work makes a first comparison of various response features-including multiple similarity metrics and data sets-with respect to a neural network.
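The RSA comparisons described above reduce to two operations: building a representational dissimilarity matrix from a set of condition responses and correlating two such matrices. A minimal numpy sketch, assuming correlation distance and a tie-free Spearman comparison (rank ties, which real pipelines handle explicitly, are ignored here):

```python
import numpy as np

def rdm(patterns):
    """RDM as 1 - Pearson correlation between condition patterns.

    patterns: (n_conditions, n_features), e.g., CNN layer activations
    or EEG responses at one time point, one row per image.
    """
    return 1 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation of the two RDMs' upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rank = lambda v: np.argsort(np.argsort(v))  # tie-free ranking
    return np.corrcoef(rank(rdm_a[iu]), rank(rdm_b[iu]))[0, 1]
```

Sweeping rdm_similarity over CNN layers and EEG time points yields the layer-by-time correlation profiles from which peak and onset latencies are read.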


Subject(s)
Electroencephalography; Neural Networks, Computer; Visual Cortex/physiology; Visual Perception/physiology; Humans; Signal Processing, Computer-Assisted; Time Factors
9.
Neuroimage ; 214: 116559, 2020 07 01.
Article in English | MEDLINE | ID: mdl-31978543

ABSTRACT

The brain activity of multiple subjects has been shown to synchronize during salient moments of natural stimuli, suggesting that correlation of neural responses indexes a brain state operationally termed 'engagement'. While past electroencephalography (EEG) studies have considered both auditory and visual stimuli, the extent to which these results generalize to music-a temporally structured stimulus for which the brain has evolved specialized circuitry-is less understood. Here we investigated neural correlation during natural music listening by recording EEG responses from N=48 adult listeners as they heard real-world musical works, some of which were temporally disrupted through shuffling of short-term segments (measures), reversal, or randomization of phase spectra. We measured correlation between multiple neural responses (inter-subject correlation) and between neural responses and stimulus envelope fluctuations (stimulus-response correlation) in the time and frequency domains. Stimuli retaining basic musical features, such as rhythm and melody, elicited significantly higher behavioral ratings and neural correlation than did phase-scrambled controls. However, while unedited songs were self-reported as most pleasant, time-domain correlations were highest during measure-shuffled versions. Frequency-domain measures of correlation (coherence) peaked at frequencies related to the musical beat, although the magnitudes of these spectral peaks did not explain the observed temporal correlations. Our findings show that natural music evokes significant inter-subject and stimulus-response correlations, and suggest that the neural correlates of musical 'engagement' may be distinct from those of enjoyment.


Subject(s)
Auditory Perception/physiology; Brain/physiology; Music; Acoustic Stimulation/methods; Adolescent; Adult; Brain Mapping/methods; Electroencephalography/methods; Female; Humans; Male; Young Adult
11.
Dev Cogn Neurosci ; 38: 100670, 2019 08.
Article in English | MEDLINE | ID: mdl-31228678

ABSTRACT

Motion sensitivity increases during childhood, but little is known about the neural correlates. Most studies investigating children's evoked responses have not dissociated direction-specific and non-direction-specific responses. To isolate direction-specific responses, we presented coherently moving dot stimuli preceded by incoherent motion, to 6- to 7-year-olds (n = 34), 8- to 10-year-olds (n = 34), 10- to 12-year-olds (n = 34) and adults (n = 20). Participants reported the coherent motion direction while high-density EEG was recorded. Using a data-driven approach, we identified two stimulus-locked EEG components with distinct topographies: an early component with an occipital topography likely reflecting sensory encoding and a later, sustained positive component over centro-parietal electrodes that we attribute to decision-related processes. The component waveforms showed clear age-related differences. In the early, occipital component, all groups showed a negativity peaking at ~300 ms, like the previously reported coherent-motion N2. However, the children, unlike adults, showed an additional positive peak at ~200 ms, suggesting differential stimulus encoding. The later positivity in the centro-parietal component rose more steeply for adults than for the youngest children, likely reflecting age-related speeding of decision-making. We conclude that children's protracted development of coherent motion sensitivity is associated with maturation of both early sensory and later decision-related processes.


Subject(s)
Brain/physiology; Child Development/physiology; Evoked Potentials, Visual/physiology; Motion Perception/physiology; Photic Stimulation/methods; Adolescent; Adult; Child; Electroencephalography/methods; Female; Humans; Male; Reaction Time/physiology; Young Adult
12.
Front Psychol ; 8: 416, 2017.
Article in English | MEDLINE | ID: mdl-28386241

ABSTRACT

Music discovery in everyday situations has been facilitated in recent years by audio content recognition services such as Shazam. The widespread use of such services has produced a wealth of user data, specifying where and when a global audience takes action to learn more about music playing around them. Here, we analyze a large collection of Shazam queries of popular songs to study the relationship between the timing of queries and corresponding musical content. Our results reveal that the distribution of queries varies over the course of a song, and that salient musical events drive an increase in queries during a song. Furthermore, we find that the distribution of queries at the time of a song's release differs from the distribution following a song's peak and subsequent decline in popularity, possibly reflecting an evolution of user intent over the "life cycle" of a song. Finally, we derive insights into the data size needed to achieve consistent query distributions for individual songs. The combined findings of this study suggest that music discovery behavior, and other facets of the human experience of music, can be studied quantitatively using large-scale industrial data.

13.
PLoS One ; 10(8): e0135697, 2015.
Article in English | MEDLINE | ID: mdl-26295970

ABSTRACT

The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings remain not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes.
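One way to derive an RDM from multi-class classifier confusions, as described above, is to convert counts into symmetrized confusion probabilities. The transform below is a plausible sketch, not necessarily the paper's exact construction:

```python
import numpy as np

def confusion_to_rdm(confusions):
    """Dissimilarity matrix from a multi-class confusion matrix.

    confusions[i, j]: number of trials of class i labelled as class j.
    Classes the classifier frequently confuses come out as similar
    (low dissimilarity); the diagonal carries the self-similarity
    information that pairwise-only approaches lack.
    """
    p = confusions / confusions.sum(axis=1, keepdims=True)  # row-wise P(j|i)
    sym = (p + p.T) / 2                                     # symmetrize
    return 1 - sym
```

Unlike RDMs built from pairwise classification or correlation, this construction keeps a meaningful diagonal, which can be used to estimate the overall size of the representational space.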


Subject(s)
Image Processing, Computer-Assisted/methods; Pattern Recognition, Visual/physiology; Recognition, Psychology; Visual Cortex/physiology; Visual Pathways/physiology; Adult; Brain Mapping; Electrodes; Electroencephalography; Female; Humans; Male; Middle Aged; Photic Stimulation; Photography; Reaction Time; Visual Cortex/anatomy & histology