Results 1 - 16 of 16
1.
Front Psychol ; 14: 1012839, 2023.
Article in English | MEDLINE | ID: mdl-37496799

ABSTRACT

Communication has been studied extensively in the context of speech and language. While speech is tremendously effective at transferring ideas between people, music is another communicative mode that has a unique power to bring people together and transmit a rich tapestry of emotions, through joint music-making and listening in a variety of everyday contexts. Research has begun to examine the behavioral and neural correlates of the joint action required for successful musical interactions, but it has yet to fully account for the rich, dynamic, multimodal nature of musical communication. We review the current literature in this area and propose that naturalistic musical paradigms will open up new ways to study communication more broadly.

2.
Cognition ; 214: 104752, 2021 09.
Article in English | MEDLINE | ID: mdl-33965782

ABSTRACT

Social interactions, such as joint book reading, have a well-studied influence on early development and language learning. Recent work has begun to investigate the neural mechanisms that underlie shared representations of input, documenting neural synchrony (measured using intersubject temporal correlations of neural activity) between individuals exposed to the same stimulus. Neural synchrony has been found to predict the quality of engagement with a stimulus and with communicative cues, but studies have yet to address how neural synchrony among children may relate to real-time learning. Using functional near-infrared spectroscopy (fNIRS), we recorded the neural activity of 45 children (3.5-4.5 years) during joint book reading with an adult experimenter. The custom children's book contained four novel words and objects embedded in an unfolding story, as well as a range of narrative details about object functions and character roles. We observed synchronized neural activity between child participants during book reading and found a positive correlation between learning and intersubject neural synchronization in parietal cortex, an area implicated in narrative-level processing in adult research. Our findings suggest that signature patterns of neural engagement with the dynamics of stories facilitate children's learning.
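The synchrony measure described here, intersubject temporal correlation of neural activity, can be sketched as follows. This is a minimal illustration on synthetic time series, not the study's actual fNIRS pipeline; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def intersubject_correlation(signals):
    """Mean pairwise Pearson correlation across subjects' neural time series.

    signals: array of shape (n_subjects, n_timepoints).
    """
    r = np.corrcoef(signals)                      # subject-by-subject correlation matrix
    iu = np.triu_indices(signals.shape[0], k=1)   # unique subject pairs (upper triangle)
    return r[iu].mean()

# Toy example: three "subjects" whose signals share a stimulus-driven component
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 8 * np.pi, 200))
subjects = np.stack([shared + 0.5 * rng.standard_normal(200) for _ in range(3)])
isc = intersubject_correlation(subjects)
```

A higher `isc` would indicate that the subjects' responses track the shared stimulus more tightly, which is the quantity correlated with learning in the abstract.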


Subject(s)
Learning , Reading , Adult , Child , Cues , Humans , Language Development , Narration
3.
Dev Sci ; 24(1): e12997, 2021 01.
Article in English | MEDLINE | ID: mdl-32441385

ABSTRACT

Young children have an overall preference for child-directed speech (CDS) over adult-directed speech (ADS), and its structural features are thought to facilitate language learning. Many studies have supported these findings, but less is known about processing of CDS at short, sub-second timescales. How do the moment-to-moment dynamics of CDS influence young children's attention and learning? In Study 1, we used hierarchical clustering to characterize patterns of pitch variability in a natural CDS corpus, which uncovered four main word-level contour shapes: 'fall', 'rise', 'hill', and 'valley'. In Study 2, we adapted a measure from adult attention research-pupil size synchrony-to quantify real-time attention to speech across participants, and found that toddlers showed higher synchrony to the dynamics of CDS than to ADS. Importantly, there were consistent differences in toddlers' attention when listening to the four word-level contour types. In Study 3, we found that pupil size synchrony during exposure to novel words predicted toddlers' learning at test. This suggests that the dynamics of pitch in CDS not only shape toddlers' attention but guide their learning of new words. By revealing a physiological response to the real-time dynamics of CDS, this investigation yields a new sub-second framework for understanding young children's engagement with one of the most important signals in their environment.


Subject(s)
Language Development , Speech , Adult , Child, Preschool , Comprehension , Family , Humans , Learning
4.
Curr Dir Psychol Sci ; 30(6): 459-467, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35177881

ABSTRACT

How do young children learn to organize the statistics of communicative input across milliseconds and months? Developmental science has made progress in understanding how infants learn patterns in language and how infant-directed speech is engineered to ease short-timescale processing, but less is known about how they link perceptual experiences across multiple levels of processing within an interaction (from syllables to stories) and across development. In this article, we propose that three domains of research - statistical summary, neural processing hierarchies, and neural coupling - will be fruitful in uncovering the dynamic exchange of information between children and adults, both in the moment and in aggregate. In particular, we discuss how the study of brain-to-brain and brain-to-behavior coupling between children and adults will further our understanding of how children's neural representations become aligned with the increasingly complex statistics of communication across timescales.

5.
Sci Rep ; 10(1): 3561, 2020 Feb 21.
Article in English | MEDLINE | ID: mdl-32081889

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

6.
Psychol Sci ; 31(1): 6-17, 2020 01.
Article in English | MEDLINE | ID: mdl-31845827

ABSTRACT

Infancy is the foundational period for learning from adults, and the dynamics of the social environment have long been considered central to children's development. Here, we reveal a novel, naturalistic approach for studying live interactions between infants and adults. Using functional near-infrared spectroscopy (fNIRS), we simultaneously and continuously measured the brains of infants (N = 18; 9-15 months of age) and an adult while they communicated and played with each other. We found that time-locked neural coupling within dyads was significantly greater when dyad members interacted with each other than with control individuals. In addition, we characterized the dynamic relationship between neural activation and the moment-to-moment fluctuations of mutual gaze, joint attention to objects, infant emotion, and adult speech prosody. This investigation advances what is currently known about how the brains and behaviors of infants both shape and reflect those of adults during real-life communication.


Subject(s)
Brain/physiology , Fixation, Ocular , Nonverbal Communication , Speech Perception , Adult , Brain/growth & development , Female , Humans , Infant , Language , Male , Spectroscopy, Near-Infrared
7.
Sci Rep ; 8(1): 13826, 2018 09 14.
Article in English | MEDLINE | ID: mdl-30218053

ABSTRACT

Timbre, the unique quality of a sound that points to its source, allows us to quickly identify a loved one's voice in a crowd and distinguish a buzzy, bright trumpet from a warm cello. Despite its importance for perceiving the richness of auditory objects, timbre is a relatively poorly understood feature of sounds. Here we demonstrate for the first time that listeners adapt to the timbre of a wide variety of natural sounds. For each of several sound classes, participants were repeatedly exposed to two sounds (e.g., clarinet and oboe, male and female voice) that formed the endpoints of a morphed continuum. Adaptation to timbre resulted in consistent perceptual aftereffects, such that hearing sound A significantly altered perception of a neutral morph between A and B, making it sound more like B. Furthermore, these aftereffects were robust to moderate pitch changes, suggesting that adaptation to timbral features used for object identification drives these effects, analogous to face adaptation in vision.
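A morphed continuum between two endpoint sounds can be illustrated, very crudely, as interpolation between their magnitude spectra. The study's actual stimuli would have been built with dedicated morphing tools, so this is only a sketch of the idea of intermediate points between endpoints A and B; the spectra below are made up.

```python
import numpy as np

def morph(spectrum_a, spectrum_b, alpha):
    """Linearly interpolate two magnitude spectra; alpha=0 gives A, alpha=1 gives B."""
    a = np.asarray(spectrum_a, dtype=float)
    b = np.asarray(spectrum_b, dtype=float)
    return (1 - alpha) * a + alpha * b

# Toy spectral envelopes for two endpoint sounds and the neutral midpoint morph
clarinet_like = np.array([1.0, 0.2, 0.8, 0.1])
oboe_like = np.array([0.3, 0.9, 0.4, 0.7])
neutral = morph(clarinet_like, oboe_like, 0.5)
```

In the aftereffect paradigm, adapting to one endpoint shifts how a neutral morph like this is categorized toward the other endpoint.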


Subject(s)
Auditory Perception/physiology , Hearing/physiology , Pitch Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Female , Humans , Male , Music , Pitch Discrimination , Psychoacoustics , Sound , Sound Spectrography/methods , Voice , Young Adult
8.
J Vis ; 18(3): 1, 2018 03 01.
Article in English | MEDLINE | ID: mdl-29497742

ABSTRACT

Incoming sensory signals are often ambiguous and consistent with multiple perceptual interpretations. Information from one sensory modality can help to resolve ambiguity in another modality, but the mechanisms by which multisensory associations come to influence the contents of conscious perception are unclear. We asked whether and how novel statistical information about the coupling between sounds and images influences the early stages of awareness of visual stimuli. We exposed subjects to consistent, arbitrary pairings of sounds and images and then measured the impact of this recent passive statistical learning on subjects' initial conscious perception of a stimulus by employing binocular rivalry, a phenomenon in which incompatible images presented separately to the two eyes result in a perceptual alternation between the two images. On each trial of the rivalry test, subjects were presented with a pair of rivalrous images (one of which had been consistently paired with a specific sound during exposure while the other had not) and an accompanying sound. We found that, at the onset of binocular rivalry, an image was significantly more likely to be perceived, and was perceived for a longer duration, when it was presented with its paired sound than when presented with other sounds. Our results indicate that recently acquired multisensory information helps resolve sensory ambiguity, and they demonstrate that statistical learning is a fast, flexible mechanism that facilitates this process.


Subject(s)
Learning/physiology , Models, Statistical , Visual Perception/physiology , Adolescent , Adult , Awareness , Biometry , Female , Humans , Male , Photic Stimulation , Time Factors , Vision, Binocular , Young Adult
9.
Curr Biol ; 27(20): 3162-3167.e3, 2017 Oct 23.
Article in English | MEDLINE | ID: mdl-29033333

ABSTRACT

The voice is the most direct link we have to others' minds, allowing us to communicate using a rich variety of speech cues [1, 2]. This link is particularly critical early in life as parents draw infants into the structure of their environment using infant-directed speech (IDS), a communicative code with unique pitch and rhythmic characteristics relative to adult-directed speech (ADS) [3, 4]. To begin breaking into language, infants must discern subtle statistical differences about people and voices in order to direct their attention toward the most relevant signals. Here, we uncover a new defining feature of IDS: mothers significantly alter statistical properties of vocal timbre when speaking to their infants. Timbre, the tone color or unique quality of a sound, is a spectral fingerprint that helps us instantly identify and classify sound sources, such as individual people and musical instruments [5-7]. We recorded 24 mothers' naturalistic speech while they interacted with their infants and with adult experimenters in their native language. Half of the participants were English speakers, and half were not. Using a support vector machine classifier, we found that mothers consistently shifted their timbre between ADS and IDS. Importantly, this shift was similar across languages, suggesting that such alterations of timbre may be universal. These findings have theoretical implications for understanding how infants tune in to their local communicative environments. Moreover, our classification algorithm for identifying infant-directed timbre has direct translational implications for speech recognition technology.
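The classification step can be sketched with a linear support vector machine over summary spectral features. The feature vectors here are random synthetic stand-ins for timbre fingerprints, not the study's acoustic measures, and the linear kernel is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic "timbre fingerprints": rows are utterances, columns are
# spectral-envelope features; IDS (label 1) is shifted relative to ADS (label 0).
ads = rng.normal(0.0, 1.0, size=(40, 8))
ids_speech = rng.normal(1.0, 1.0, size=(40, 8))
X = np.vstack([ads, ids_speech])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="linear").fit(X, y)
accuracy = clf.score(X, y)  # training accuracy on the toy data
```

With real recordings, a cross-validated version of this classifier is what would let infant-directed timbre be detected across speakers and languages.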


Subject(s)
Communication , Mother-Child Relations , Mothers , Speech Acoustics , Female , Humans , Infant , Voice
10.
Front Psychol ; 8: 559, 2017.
Article in English | MEDLINE | ID: mdl-28469585

ABSTRACT

Visual stimuli with different spatial frequencies (SFs) are processed asymmetrically in the two cerebral hemispheres. Specifically, low SFs are processed relatively more efficiently in the right hemisphere than the left hemisphere, whereas high SFs show the opposite pattern. In this study, we ask whether these differences between the two hemispheres reflect a low-level division that is based on absolute SF values or a flexible comparison of the SFs in the visual environment at any given time. In a recent study, we showed that conscious awareness of SF information (i.e., visual perceptual selection from multiple SFs simultaneously present in the environment) differs between the two hemispheres. Building upon that result, here we employed binocular rivalry to test whether this hemispheric asymmetry is due to absolute or relative SF processing. In each trial, participants viewed a pair of rivalrous orthogonal gratings of different SFs, presented either to the left or right of central fixation, and continuously reported which grating they perceived. We found that the hemispheric asymmetry in perception is significantly influenced by relative processing of the SFs of the simultaneously presented stimuli. For example, when a medium SF grating and a higher SF grating were presented as a rivalry pair, subjects were more likely to report that they initially perceived the medium SF grating when the rivalry pair was presented in the left visual hemifield (right hemisphere), compared to the right hemifield. However, this same medium SF grating, when it was paired in rivalry with a lower SF grating, was more likely to be perceptually selected when it was in the right visual hemifield (left hemisphere). Thus, the visual system's classification of a given SF as "low" or "high" (and therefore, which hemisphere preferentially processes that SF) depends on the other SFs that are present, demonstrating that relative SF processing contributes to hemispheric differences in visual perceptual selection.

11.
Sci Rep ; 7: 43293, 2017 02 27.
Article in English | MEDLINE | ID: mdl-28240295

ABSTRACT

The present study investigates brain-to-brain coupling, defined as inter-subject correlations in the hemodynamic response, during natural verbal communication. We used functional near-infrared spectroscopy (fNIRS) to record brain activity of 3 speakers telling stories and 15 listeners comprehending audio recordings of these stories. Listeners' brain activity was significantly correlated with speakers' with a delay. This between-brain correlation disappeared when verbal communication failed. We further compared the fNIRS and functional Magnetic Resonance Imaging (fMRI) recordings of listeners comprehending the same story and found a significant relationship between the fNIRS oxygenated-hemoglobin concentration changes and the fMRI BOLD in brain areas associated with speech comprehension. This correlation between fNIRS and fMRI was only present when data from the same story were compared between the two modalities and vanished when data from different stories were compared; this cross-modality consistency further highlights the reliability of the spatiotemporal brain activation pattern as a measure of story comprehension. Our findings suggest that fNIRS can be used for investigating brain-to-brain coupling during verbal communication in natural settings.
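The delayed speaker-listener correlation can be illustrated with a lagged Pearson correlation between two time series. The signal names and the 3-sample delay in the toy data are illustrative only, not the study's parameters.

```python
import numpy as np

def lagged_correlation(speaker, listener, max_lag):
    """Pearson correlation at each lag by which the listener trails the speaker."""
    corrs = {}
    for lag in range(max_lag + 1):
        s = speaker[:len(speaker) - lag] if lag else speaker
        l = listener[lag:]
        corrs[lag] = np.corrcoef(s, l)[0, 1]
    return corrs

# Toy signals: the "listener" follows the "speaker" with a 3-sample delay.
rng = np.random.default_rng(2)
speaker = rng.standard_normal(500)
listener = np.roll(speaker, 3) + 0.3 * rng.standard_normal(500)
corrs = lagged_correlation(speaker, listener, max_lag=6)
best_lag = max(corrs, key=corrs.get)
```

The peak of `corrs` at a positive lag is the analogue of the finding that listeners' brain activity correlated with speakers' with a delay.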


Subject(s)
Comprehension/physiology , Magnetic Resonance Imaging/methods , Spectroscopy, Near-Infrared/methods , Speech Perception/physiology , Speech/physiology , Adolescent , Adult , Attention , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods , Female , Hemodynamics/physiology , Humans , Male , Neuroimaging/methods , Reproducibility of Results
12.
Ecol Psychol ; 26(1-2): 30-46, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-25089080
13.
J Cogn Neurosci ; 26(9): 2021-7, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24666124

ABSTRACT

Previous research has shown that the right hemisphere processes low spatial frequencies more efficiently than the left hemisphere, which preferentially processes high spatial frequencies. These studies have typically measured RTs to single, briefly flashed gratings and/or have directed observers to attend to a particular spatial frequency immediately before making a judgment about a subsequently presented stimulus. Thus, it is unclear whether the hemispheres differ in perceptual selection from multiple spatial frequencies that are simultaneously present in the environment, without bias from selective attention. Moreover, the time course of hemispheric asymmetry in spatial frequency processing is unknown. We addressed both of these questions with binocular rivalry, a measure of perceptual selection from competing alternatives over time. Participants viewed a pair of rivalrous orthogonal gratings with different spatial frequencies, presented either to the left or right of central fixation, and continuously reported which grating they perceived. At the beginning of a trial, the low spatial frequency grating was perceptually selected more often when presented in the left hemifield (right hemisphere) than in the right hemifield (left hemisphere), whereas the high spatial frequency grating showed the opposite pattern of results. This hemispheric asymmetry in perceptual selection persisted for the entire 30-sec stimulus presentation, continuing long after stimulus onset. These results indicate stable differences in the resolution of ambiguity across spatial locations and demonstrate the importance of considering sustained differences in perceptual selection across space when characterizing conscious representations of complex scenes.


Subject(s)
Attention/physiology , Choice Behavior/physiology , Functional Laterality/physiology , Space Perception/physiology , Adolescent , Adult , Depth Perception/physiology , Female , Humans , Male , Photic Stimulation , Psychophysics , Time Factors , Young Adult
14.
Psychol Sci ; 24(8): 1389-97, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23761928

ABSTRACT

In vision, humans use summary statistics (e.g., the average facial expression of a crowd) to efficiently perceive the gist of groups of features. Here, we present direct evidence that ensemble coding is also important for auditory processing. We found that listeners could accurately estimate the mean frequency of a set of logarithmically spaced pure tones presented in a temporal sequence (Experiment 1). Their performance was severely reduced when only a subset of tones from a given sequence was presented (Experiment 2), which demonstrates that ensemble coding is based on a substantial number of the tones in a sequence. This precise ensemble coding occurred despite very limited representation of individual tones from the sequence: Listeners were poor at identifying specific individual member tones (Experiment 3) and at determining their positions in the sequence (Experiment 4). Together, these results indicate that summary statistical coding is not limited to visual processing and is an important auditory mechanism for extracting ensemble frequency information from sequences of sounds.
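One simple way to model the ensemble estimate for logarithmically spaced tones is an average on the log-frequency scale, i.e., the geometric mean in Hz. Whether listeners average on a log or linear scale is not stated in the abstract, so treat this model, and the tone set below, as illustrative assumptions.

```python
import math

# Roughly log-spaced pure tones (each step is about a major third, 2**(1/3))
tones_hz = [400, 504, 635, 800]

# Ensemble estimate modeled as the mean in log-frequency, converted back to Hz
log_mean = sum(math.log2(f) for f in tones_hz) / len(tones_hz)
mean_hz = 2 ** log_mean
```

For an exactly geometric series the estimate reduces to the geometric mean of the endpoints, here close to sqrt(400 * 800) ≈ 566 Hz, which sits between the middle tones rather than at the arithmetic mean.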


Subject(s)
Pitch Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Perception/physiology , Female , Humans , Logistic Models , Male , Young Adult
15.
J Vis ; 12(5): 8, 2012 May 25.
Article in English | MEDLINE | ID: mdl-22637709

ABSTRACT

Photographers, cinematographers, and computer-graphics engineers use certain techniques to create striking pictorial effects. By using lenses of different focal lengths, they can make a scene look compressed or expanded in depth, make a familiar object look natural or distorted, or make a person look smarter, more attractive, or more neurotic. We asked why pictures taken with a certain focal length look natural, while those taken with other focal lengths look distorted. We found that people's preferred viewing distance when looking at pictures leads them to view long-focal-length pictures from too near and short-focal-length pictures from too far. Perceptual distortions occur because people do not take their incorrect viewing distances into account. By following the rule of thumb of using a 50-mm lens, photographers greatly increase the odds of a viewer looking at a photograph from the correct distance, where the percept will be undistorted. Our theory leads to new guidelines for creating pictorial effects that are more effective than conventional guidelines.
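The geometry behind the 50-mm rule of thumb can be sketched as follows: a picture's perspective is geometrically correct when it is viewed from its center of projection, whose distance is the lens focal length scaled by the print's magnification relative to the sensor. The function, sensor size, and print size below are illustrative, not values from the paper.

```python
def correct_viewing_distance_cm(focal_length_mm, sensor_width_mm, print_width_cm):
    """Distance at which a print reproduces the camera's perspective exactly:
    focal length times the enlargement from sensor to print."""
    magnification = (print_width_cm * 10.0) / sensor_width_mm  # print/sensor, unitless
    return focal_length_mm * magnification / 10.0              # back to cm

# A 50 mm lens on a 36 mm-wide (full-frame) sensor, printed 30 cm wide:
d = correct_viewing_distance_cm(50, 36, 30)
```

For this example the correct distance is about 42 cm, near typical preferred viewing distances for a print of that size, which is one way to see why 50-mm pictures tend to look natural.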


Subject(s)
Depth Perception/physiology , Learning/physiology , Photography , Distance Perception/physiology , Humans , Photic Stimulation , Young Adult
16.
Front Hum Neurosci ; 5: 166, 2011.
Article in English | MEDLINE | ID: mdl-22180741

ABSTRACT

Prediction may be a fundamental principle of sensory processing: it has been proposed that the brain continuously generates predictions about forthcoming sensory information. However, little is known about how prediction contributes to the selection of a conscious percept from among competing alternatives. Here, we used binocular rivalry to investigate the effects of prediction on perceptual selection. In binocular rivalry, incompatible images presented to the two eyes result in a perceptual alternation between the images, even though the visual stimuli remain constant. If predictive signals influence the competition between neural representations of rivalrous images, this influence should generate a bias in perceptual selection that depends on predictive context. To manipulate predictive context, we developed a novel binocular rivalry paradigm in which rivalrous test images were immediately preceded by a sequence of context images presented identically to the two eyes. One of the test images was consistent with the preceding image sequence (it was the expected next image in the series), and the other was inconsistent (non-predicted). We found that human observers were more likely to perceive the consistent image at the onset of rivalry, suggesting that predictive context biased selection in favor of the predicted percept. This prediction effect was distinct from the effects of adaptation to stimuli presented before the binocular rivalry test. In addition, perceptual reports were speeded for predicted percepts relative to non-predicted percepts. These results suggest that predictive signals related to visual stimulus history exist at neural sites that can bias conscious perception during binocular rivalry. Our paradigm provides a new way to study how prior information and incoming sensory information combine to generate visual percepts.
