1.
Brain Lang ; 254: 105439, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38945108

ABSTRACT

Considerable work has investigated similarities between the processing of music and language, but it remains unclear whether typical, genuine music can influence speech processing via cross-domain priming. To investigate this, we measured ERPs to musical phrases and to syntactically ambiguous Chinese phrases that could be disambiguated by early or late prosodic boundaries. Musical primes also had either early or late prosodic boundaries, and we asked participants to judge whether the prime and target had the same structure. Within musical phrases, prosodic boundaries elicited reduced N1 and enhanced P2 components (relative to the no-boundary condition), and musical phrases with late boundaries exhibited a closure positive shift (CPS) component. More importantly, primed target phrases elicited a smaller CPS than did non-primed phrases, regardless of the type of ambiguous phrase. These results suggest that prosodic priming can occur across domains, supporting the existence of common neural processes in music and language processing.
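The ERP effects reported above (reduced N1, enhanced P2, and a smaller CPS for primed targets) are conventionally quantified as mean amplitudes within predefined time windows. As a rough illustration only, here is a minimal Python sketch of that quantification step on synthetic epoched data; the window boundaries, channel counts, and sampling rate are assumptions, not the paper's actual parameters.

```python
import numpy as np

# Synthetic epoched EEG: (n_trials, n_channels, n_times); 500 Hz, -0.2 to 1.0 s
rng = np.random.default_rng(0)
sfreq, tmin, n_times = 500, -0.2, 600
times = tmin + np.arange(n_times) / sfreq            # seconds
epochs_primed = rng.normal(0.0, 5.0, (60, 32, n_times))
epochs_unprimed = rng.normal(0.0, 5.0, (60, 32, n_times))

def mean_amplitude(epochs, window):
    """Trial-average the epochs, then average within a time window (per channel)."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs.mean(axis=0)[:, mask].mean(axis=1)  # one value per channel

# Hypothetical CPS window (500-800 ms post boundary); the paper's exact windows
# and channel selections may differ.
cps_primed = mean_amplitude(epochs_primed, (0.5, 0.8))
cps_unprimed = mean_amplitude(epochs_unprimed, (0.5, 0.8))
print("Mean CPS difference (unprimed - primed), averaged over channels:",
      round(float((cps_unprimed - cps_primed).mean()), 3))
```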

2.
Q J Exp Psychol (Hove) ; 77(5): 909-923, 2024 May.
Article in English | MEDLINE | ID: mdl-37382107

ABSTRACT

Most research on mental lexical representations (lemmas) assumes they are discrete and correspond in number to a word's number of distinct meanings. Thus, homophones (bat), whose meanings are unrelated, have separate lemmas for each meaning (one for baseball bat, another for flying bat), whereas polysemes (paper), whose senses are related, have shared lemmas (the same lemma for printer paper and term paper). However, most aspects of cognition are thought to be graded, not discrete; could lemmas be graded too? We conducted a preregistered picture-word interference study with pictures of words whose meanings ranged from unrelated (homophones) to very related (regular polysemes). Whereas semantic competitors to picture names slow picture naming, semantic competitors to non-depicted meanings of homophones facilitate naming, suggesting distinct lemmas for homophones' meanings. We predicted that competitors to non-depicted senses of polysemes would slow naming, as polysemes' depicted and non-depicted senses presumably share a lemma. Crucially, we aimed to examine the transition from facilitation to inhibition: two groupings (where competitors to non-depicted senses led to facilitation for words with two lemmas but inhibition for words with one lemma) would imply that lemmas are indeed discrete. But a transition that varies continuously by sense relatedness would imply that lemmas are graded. Unexpectedly, competitors to non-depicted senses of both homophones and polysemes facilitated naming. Although these results do not indicate whether lemmas are graded or discrete, they do inform a long-standing question on the nature of polysemes, supporting a multiple-lemma (vs. core-lemma) account.
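The graded-versus-discrete question posed above is, statistically, a question about whether the competitor effect on naming latencies varies continuously with sense relatedness. A minimal sketch of one way to test that on simulated data follows; the variable names, effect sizes, and mixed-model specification are illustrative assumptions, not the authors' preregistered analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_items = 30, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_items),
    # 0 = homophone-like (unrelated meanings) ... 1 = regular polyseme (related senses)
    "relatedness": np.tile(rng.uniform(0, 1, n_items), n_subj),
    # 1 = competitor to a non-depicted sense/meaning, 0 = unrelated control word
    "competitor": np.tile(rng.integers(0, 2, n_items), n_subj),
})
# Simulated naming latencies: a graded account predicts that the competitor effect
# shifts from facilitation to interference as relatedness increases (interaction).
df["rt"] = (650
            + 40 * df["competitor"] * (df["relatedness"] - 0.5)
            + rng.normal(0, 60, len(df)))

# Mixed-effects model with by-subject random intercepts; the interaction term is
# the graded-lemma test, whereas a discrete account predicts a step change instead.
fit = smf.mixedlm("rt ~ competitor * relatedness", df, groups=df["subject"]).fit()
print(fit.summary())
```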

3.
J Exp Psychol Learn Mem Cogn ; 50(3): 500-508, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37439729

ABSTRACT

Interpreting a sentence can be characterized as a rational process in which comprehenders integrate linguistic input with top-down knowledge (e.g., plausibility). One type of evidence for this is that comprehenders sometimes reinterpret sentences to arrive at interpretations that conflict with the original language input. Does this reflect a reinterpretation of only the message, or also of earlier stages of linguistic representation such as the syntactic parse? The present study relies both on comprehension questions as a measure of the eventual interpretation (as in past work) and on syntactic priming as an implicit measure of the eventual parse of a sentence. Plausible dative sentences yielded a classic syntactic priming effect. Implausible dative sentences, for which a plausible alternative version corresponded to the alternate dative structure, not only tended to be interpreted as the plausible alternative, but also showed no priming effect from the perceived syntactic structure. These results suggest that the plausibility of a message can impact not only the interpretation of a perceived sentence but also its underlying syntactic representation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Language; Motivation; Humans; Linguistics; Comprehension; Databases, Factual
4.
Neurobiol Lang (Camb) ; 2(4): 487-512, 2021.
Article in English | MEDLINE | ID: mdl-37214629

ABSTRACT

The study of how bilingualism is linked to cognitive processing, including executive functioning, has historically focused on comparing bilinguals to monolinguals across a range of tasks. These group comparisons presume to capture relatively stable cognitive traits and have revealed important insights about the architecture of the language processing system that could not have been gleaned from studying monolinguals alone. However, there are drawbacks to using a group-comparison, or Traits, approach. In this theoretical review, we outline some limitations of treating executive functions as stable traits and of treating bilinguals as a uniform group when compared to monolinguals. To build on what we have learned from group comparisons, we advocate for an emerging complementary approach to the question of cognition and bilingualism. Using an approach that compares bilinguals to themselves under different linguistic or cognitive contexts allows researchers to ask questions about how language and cognitive processes interact based on dynamically fluctuating cognitive and neural states. A States approach, which has already been used by bilingualism researchers, allows for cause-and-effect hypotheses and shifts our focus from questions of group differences to questions of how varied linguistic environments influence cognitive operations in the moment and how fluctuations in cognitive engagement impact language processing.

5.
Cognition ; 197: 104183, 2020 04.
Article in English | MEDLINE | ID: mdl-31982849

ABSTRACT

We report two experiments that suggest that syntactic category plays a key role in limiting competition in lexical access in speaking. We introduce a novel sentence-picture interference (SPI) paradigm, and we show that nouns (e.g., running as a noun) do not compete with verbs (e.g., walking as a verb) and verbs do not compete with nouns in sentence production, regardless of their conceptual similarity. Based on this finding, we argue that lexical competition in production is limited by syntactic category. We also suggest that even complex words containing category-changing derivational morphology can be stored and accessed together with their final syntactic category information. We discuss the potential underlying mechanism and how it may enable us to speak relatively fluently.


Subject(s)
Language; Semantics; Humans
6.
J Cogn Neurosci ; 32(1): 111-123, 2020 01.
Article in English | MEDLINE | ID: mdl-31560265

ABSTRACT

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
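Time-resolved decoding of the kind described above is typically implemented by training a classifier independently at each time sample of the epoched sensor data. The sketch below uses synthetic MEG-like arrays and scikit-learn; the trial counts, sensor counts, labels (superordinate category rather than individual token), and classifier choice are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 180, 100, 60    # synthetic stand-in for MEG epochs
X = rng.normal(size=(n_trials, n_sensors, n_times))
# For brevity, decode the three superordinate categories (speech, instrument,
# environmental); the study itself decoded the 36 individual tokens.
y = rng.integers(0, 3, n_trials)

scores = []
for t in range(n_times):                        # one classifier per time sample
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores.append(cross_val_score(clf, X[:, :, t], y, cv=5).mean())

peak = int(np.argmax(scores))
print(f"peak decoding accuracy {max(scores):.2f} at time sample {peak} (chance ~0.33)")
```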


Subject(s)
Auditory Pathways/physiology; Auditory Perception/physiology; Cerebral Cortex/physiology; Concept Formation/physiology; Magnetoencephalography/methods; Acoustics; Adult; Female; Functional Neuroimaging; Humans; Male; Time Factors; Young Adult
7.
Front Psychol ; 10: 1594, 2019.
Article in English | MEDLINE | ID: mdl-31379658

ABSTRACT

Human listeners must identify and orient themselves to auditory objects and events in their environment. What acoustic features support a listener's ability to differentiate the great variety of natural sounds they might encounter? Studies of auditory object perception typically examine identification (and confusion) responses or dissimilarity ratings between pairs of objects and events. However, the majority of this prior work has been conducted within single categories of sound. This separation has precluded a broader understanding of the general acoustic attributes that govern auditory object and event perception within and across different behaviorally relevant sound classes. The present experiments take a broader approach by examining multiple categories of sound relative to one another. This approach bridges critical gaps in the literature and allows us to identify (and assess the relative importance of) features that are useful for distinguishing sounds within, between, and across behaviorally relevant sound categories. To do this, we conducted behavioral sound identification (Experiment 1) and dissimilarity rating (Experiment 2) studies using a broad set of stimuli that leveraged the acoustic variability within and between different sound categories via a diverse set of 36 sound tokens (12 utterances from different speakers, 12 instrument timbres, and 12 everyday objects from a typical human environment). Multidimensional scaling solutions as well as analyses of item-pair-level responses as a function of different acoustic qualities were used to understand what acoustic features informed participants' responses. In addition to the spectral and temporal envelope qualities noted in previous work, listeners' dissimilarity ratings were associated with spectrotemporal variability and aperiodicity. Subsets of these features (along with fundamental frequency variability) were also useful for making specific within or between sound category judgments. Dissimilarity ratings largely paralleled sound identification performance; however, the results of these tasks did not completely mirror one another. In addition, musical training was related to improved sound identification performance.
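Multidimensional scaling, as used above, embeds pairwise dissimilarity ratings into a low-dimensional space whose axes can then be related to acoustic features. A minimal scikit-learn sketch on a synthetic 36 x 36 dissimilarity matrix follows; the non-metric variant, the two dimensions, and the random ratings are assumptions for illustration, not the study's reported solution.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_tokens = 36                                   # 12 speech, 12 instrument, 12 environmental
# Synthetic rater-averaged dissimilarity matrix (symmetric, zero diagonal)
raw = rng.uniform(1, 7, (n_tokens, n_tokens))
dissim = (raw + raw.T) / 2
np.fill_diagonal(dissim, 0)

# Non-metric MDS is a common choice for ordinal rating data
mds = MDS(n_components=2, dissimilarity="precomputed", metric=False, random_state=0)
coords = mds.fit_transform(dissim)              # (36, 2) embedding
print("stress:", round(float(mds.stress_), 3))
# coords can then be related to acoustic features of each token (e.g., by correlation)
```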

8.
Neuroimage ; 191: 116-126, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30731247

ABSTRACT

Human listeners can quickly and easily recognize different sound sources (objects and events) in their environment. Understanding how this impressive ability is accomplished can improve signal processing and machine intelligence applications along with assistive listening technologies. However, it is not clear how the brain represents the many sounds that humans can recognize (such as speech and music) at the level of individual sources, categories and acoustic features. To examine the cortical organization of these representations, we used patterns of fMRI responses to decode 1) four individual speakers and instruments from one another (separately, within each category), 2) the superordinate category labels associated with each stimulus (speech or instrument), and 3) a set of simple synthesized sounds that could be differentiated entirely on their acoustic features. Data were collected using an interleaved silent steady state sequence to increase the temporal signal-to-noise ratio, and mitigate issues with auditory stimulus presentation in fMRI. Largely separable clusters of voxels in the temporal lobes supported the decoding of individual speakers and instruments from other stimuli in the same category. Decoding the superordinate category of each sound was more accurate and involved a larger portion of the temporal lobes. However, these clusters all overlapped with areas that could decode simple, acoustically separable stimuli. Thus, individual sound sources from different sound categories are represented in separate regions of the temporal lobes that are situated within regions implicated in more general acoustic processes. These results bridge an important gap in our understanding of cortical representations of sounds and their acoustics.
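The decoding analyses described above amount to cross-validated classification of voxel patterns, once for superordinate category and once for individual sources within a category. A minimal sketch with synthetic ROI data and scikit-learn follows; the run structure, voxel count, and linear SVM are assumptions for illustration, not the study's reported pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_runs, n_voxels = 8, 300
# One ROI pattern per stimulus per run: 4 speakers (items 0-3) and 4 instruments (4-7)
item = np.tile(np.arange(8), n_runs)
category = (item >= 4).astype(int)              # 0 = speech, 1 = instrument
X = rng.normal(size=(item.size, n_voxels))

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))

# 1) Superordinate category decoding (speech vs. instrument)
acc_category = cross_val_score(clf, X, category, cv=n_runs).mean()

# 2) Within-category decoding: which of the four speakers?
speech = item < 4
acc_speaker = cross_val_score(clf, X[speech], item[speech], cv=n_runs).mean()

print(f"category decoding accuracy: {acc_category:.2f} (chance 0.50)")
print(f"speaker decoding accuracy:  {acc_speaker:.2f} (chance 0.25)")
```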


Subject(s)
Auditory Perception/physiology; Brain/physiology; Music; Acoustic Stimulation; Adult; Female; Humans; Male; Young Adult
9.
Perspect Psychol Sci ; 14(3): 361-375, 2019 05.
Article in English | MEDLINE | ID: mdl-30629888

ABSTRACT

There is a growing interest in changing the culture of psychology to improve the quality of our science. At the root of this interest is concern over the reproducibility of key findings. A variety of large-scale replication attempts have revealed that several previously published effects cannot be reproduced, whereas other analyses indicate that the published literature is rife with underpowered studies and publication bias. These revelations suggest that it is time to change how psychological science is carried out and increase the transparency of reporting. We argue that change will be slow until institutions adopt new procedures for evaluating scholarly activity. We consider three actions that individuals and departments can take to facilitate change throughout psychological science: the development of individualized research-philosophy statements, the creation of an annotated curriculum vitae to improve the transparency of scholarly reporting, and the use of a formal evaluative system that explicitly captures behaviors that support reproducibility. Our recommendations build on proposals for open science by enabling researchers to have a voice in articulating (and contextualizing) how they would like their work to be evaluated and by providing a mechanism for more detailed and transparent reporting of scholarly activities.


Subject(s)
Psychology/methods; Research Design; Scholarly Communication; Access to Information; Culture; Humans; Peer Review, Research; Reproducibility of Results; Research Personnel/psychology; Universities
10.
Mem Cognit ; 46(7): 1076-1092, 2018 10.
Article in English | MEDLINE | ID: mdl-29752659

ABSTRACT

Learning and performing music draw on a host of cognitive abilities, and previous research has postulated that musicians might have advantages in related cognitive processes. One such aspect of cognition that may be related to musical training is executive functions (EFs), a set of top-down processes that regulate behavior and cognition according to task demands. Previous studies investigating the link between musical training and EFs have yielded mixed results and are difficult to compare. In part, this is because most studies have looked at only one specific cognitive process, and even studies looking at the same process have used different experimental tasks. Furthermore, most correlational studies have used different "musician" and "non-musician" categorizations for their comparisons, so generalizing the findings is difficult. The present study provides a more comprehensive assessment of how individual differences in musical training relate to latent measures of three separable aspects of EFs. We administered a well-validated EF battery containing multiple tasks tapping the EF components of inhibition, shifting, and working memory updating (Friedman et al., Journal of Experimental Psychology: General, 137, 201-225, 2008), as well as a comprehensive, continuous measure of musical training and sophistication (Müllensiefen et al., PLoS ONE, 9, e89642, 2014). Musical training correlated with some individual EF tasks involving inhibition and working memory updating, but not with individual tasks involving shifting. However, after controlling for IQ, socioeconomic status, and handedness, musical training predicted only the latent variable of working memory updating, not the latent variables of inhibition or shifting. Although these data are correlational, they nonetheless suggest that musical experience places particularly strong demands on working memory updating processes.
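The key test above is whether continuous musical training still predicts an EF component after covariates are controlled. A minimal regression sketch on simulated data follows; it uses a simple composite score and ordinary least squares, whereas the study used latent variables, so the variable names and model form are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "music_training": rng.normal(50, 15, n),    # continuous training/sophistication score
    "iq": rng.normal(100, 15, n),
    "ses": rng.normal(0, 1, n),                 # standardized socioeconomic status
    "right_handed": rng.integers(0, 2, n),
})
# Simulated working-memory-updating score with a small training effect built in
df["updating"] = (0.01 * df["music_training"] + 0.02 * df["iq"]
                  + rng.normal(0, 1, n))

# Does musical training predict updating over and above the covariates?
fit = smf.ols("updating ~ music_training + iq + ses + right_handed", data=df).fit()
print(fit.params["music_training"], fit.pvalues["music_training"])
```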


Subject(s)
Executive Function/physiology; Individuality; Inhibition, Psychological; Memory, Short-Term/physiology; Music; Adult; Humans; Young Adult
11.
Behav Brain Sci ; 40: e310, 2017 01.
Article in English | MEDLINE | ID: mdl-29342736

ABSTRACT

Understanding the nature of linguistic representations undoubtedly will benefit from multiple types of evidence, including structural priming. Here, we argue that successfully gaining linguistic insights from structural priming requires us to better understand (1) the precise mappings between linguistic input and comprehenders' syntactic knowledge; and (2) the role of cognitive faculties such as memory and attention in structural priming.


Subject(s)
Comprehension; Linguistics; Attention; Memory
12.
J Acoust Soc Am ; 142(6): 3459, 2017 12.
Article in English | MEDLINE | ID: mdl-29289109

ABSTRACT

Humans have an impressive, automatic capacity for identifying and organizing sounds in their environment. However, little is known about the timescale on which sound identification operates, or about the acoustic features that listeners use to identify auditory objects. To better understand the temporal and acoustic dynamics of sound category identification, two go/no-go perceptual gating studies were conducted. Participants heard speech, musical instrument, and human-environmental sounds ranging from 12.5 to 200 ms in duration. Listeners could reliably identify sound categories from just 25 ms of sound. In Experiment 1, participants' performance on instrument sounds showed a distinct processing advantage at shorter durations. Experiment 2 revealed that this advantage was largely dependent on regularities in instrument onset characteristics relative to the spectrotemporal complexity of environmental sounds and speech. Models of participant responses indicated that listeners used spectral, temporal, noise, and pitch cues in the task. Aspects of spectral centroid were associated with responses for all categories, while noisiness and spectral flatness were associated with environmental and instrument responses, respectively. Responses for speech and environmental sounds were also associated with spectral features that varied over time. Experiment 2 indicated that variability in fundamental frequency was useful in identifying steady state speech and instrument stimuli.
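The acoustic descriptors mentioned above (spectral centroid, spectral flatness/noisiness) are standard features that can be computed on brief gated excerpts with a library such as librosa. The sketch below does this for two synthetic stand-in sounds; the signals, the 25 ms gate, and the FFT size are assumptions for illustration, not the study's stimuli or exact feature definitions.

```python
import numpy as np
import librosa

sr = 44100
dur = 0.2                                        # 200 ms stimuli, as in the longest gate
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)               # harmonic, instrument-like stand-in
noise = np.random.default_rng(0).normal(0, 0.3, t.size)  # noisy, environmental-like stand-in

def gated_features(y, label, gate_ms=25):
    """Spectral centroid and flatness over the first gate_ms milliseconds."""
    y_gated = y[: int(sr * gate_ms / 1000)]
    centroid = librosa.feature.spectral_centroid(y=y_gated, sr=sr, n_fft=512).mean()
    flatness = librosa.feature.spectral_flatness(y=y_gated, n_fft=512).mean()
    print(f"{label:5s}  centroid = {centroid:7.1f} Hz   flatness = {flatness:.3f}")

gated_features(tone, "tone")    # low flatness: energy concentrated at the harmonic
gated_features(noise, "noise")  # high flatness: energy spread across the spectrum
```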

13.
Cogn Sci ; 41 Suppl 6: 1532-1548, 2017 May.
Article in English | MEDLINE | ID: mdl-27471136

ABSTRACT

Every word signifies multiple senses. Many studies using comprehension-based measures suggest that polysemes' senses (e.g., paper as in printer paper or term paper) share lexical representations, whereas homophones' meanings (e.g., pen as in ballpoint pen or pig pen) correspond to distinct lexical representations. Less is known about the lexical representations of polysemes compared to homophones in language production. In this study, speakers named pictures after reading sentence fragments that primed polysemes and homophones either as direct competitors to pictures (i.e., semantic competitors) or as indirect competitors to pictures (e.g., polysemous senses of semantic competitors, or homophonous meanings of semantic competitors). Polysemes (e.g., paper) elicited as many intrusions of picture names (e.g., cardboard) as in control conditions, whether primed as direct competitors (printer paper) or as indirect competitors (term paper). This contrasted with the finding that homophones (e.g., pen) elicited more intrusions of picture names (e.g., crayon) relative to control conditions when primed as direct competitors (ballpoint pen) than when primed as indirect competitors (pig pen). These results suggest that polysemes, unlike homophones, are stored and retrieved as unified lexical representations.


Subject(s)
Comprehension/physiology; Language; Reading; Speech/physiology; Vocabulary; Female; Humans; Male
14.
Neurocase ; 22(6): 505-511, 2016 12.
Article in English | MEDLINE | ID: mdl-27112951

ABSTRACT

Evidence for shared processing of structure (or syntax) in language and in music conflicts with neuropsychological dissociations between the two. However, while harmonic structural processing can be impaired in patients with spared linguistic syntactic abilities (Peretz, I. (1993). Auditory atonalia for melodies. Cognitive Neuropsychology, 10, 21-56. doi:10.1080/02643299308253455), evidence for the opposite dissociation (preserved harmonic processing despite agrammatism) is largely lacking. Here, we report one such case: HV, a former musician with Broca's aphasia and agrammatic speech, was impaired in making linguistic, but not musical, acceptability judgments. Similarly, she showed no sensitivity to linguistic structure, but normal sensitivity to musical structure, in implicit priming tasks. To our knowledge, this is the first non-anecdotal report of a patient with agrammatic aphasia demonstrating preserved harmonic processing abilities, supporting claims that aspects of musical and linguistic structure rely on distinct neural mechanisms.


Subject(s)
Aphasia, Broca/physiopathology; Music; Pitch Perception/physiology; Adult; Aged; Aphasia, Broca/diagnostic imaging; Female; Humans; Judgment/physiology; Linguistics; Magnetic Resonance Imaging; Middle Aged; Pregnancy; Speech; Vocabulary
15.
Cognition ; 152: 199-211, 2016 07.
Article in English | MEDLINE | ID: mdl-27107499

ABSTRACT

A growing body of research suggests that musical experience and ability are related to a variety of cognitive abilities, including executive functioning (EF). However, it is not yet clear if these relationships are limited to specific components of EF, limited to auditory tasks, or reflect very general cognitive advantages. This study investigated the existence and generality of the relationship between musical ability and EFs by evaluating the musical experience and ability of a large group of participants and investigating whether this predicts individual differences on three different components of EF - inhibition, updating, and switching - in both auditory and visual modalities. Musical ability predicted better performance on both auditory and visual updating tasks, even when controlling for a variety of potential confounds (age, handedness, bilingualism, and socio-economic status). However, musical ability was not clearly related to inhibitory control and was unrelated to switching performance. These data thus show that cognitive advantages associated with musical ability are not limited to auditory processes, but are limited to specific aspects of EF. This supports a process-specific (but modality-general) relationship between musical ability and non-musical aspects of cognition.


Subject(s)
Executive Function; Music; Acoustic Stimulation; Adolescent; Adult; Female; Humans; Individuality; Inhibition, Psychological; Male; Memory, Short-Term; Photic Stimulation; Reaction Time; Stroop Test; Young Adult
16.
J Exp Psychol Learn Mem Cogn ; 42(5): 813-24, 2016 05.
Article in English | MEDLINE | ID: mdl-26569434

ABSTRACT

Many influential models of sentence production (e.g., Bock & Levelt, 1994; Kempen & Hoenkamp, 1987; Levelt, 1989) emphasize the central role of verbs in structural encoding, and thus predict that verbs should be selected early in sentence formulation, possibly even before the phonological encoding of the first constituent (Ferreira, 2000). However, the most direct experimental test of this hypothesis (Schriefers, Teruel, & Meinshausen, 1998) found no evidence for advance verb selection in verb-final (subject-verb and subject-object-verb) utterances in German. The current study, based on a multiword picture-word interference task (Meyer, 1996; Schriefers et al., 1998), demonstrates that in Japanese, a strongly verb-final language, verbs are indeed planned in advance, but selectively before object noun articulation and not before subject noun articulation. This contrasting pattern of advance verb selection may reconcile the motivation for advance verb selection in structural encoding while explaining the previous failures to demonstrate it. Potential mechanisms that might underlie this contrasting pattern of advance verb selection are discussed.


Subject(s)
Association; Choice Behavior/physiology; Semantics; Vocabulary; Asian People; Female; Humans; Linguistics; Male; Reaction Time; Students; Universities
18.
Handb Clin Neurol ; 129: 573-87, 2015.
Article in English | MEDLINE | ID: mdl-25726291

ABSTRACT

Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.


Subject(s)
Agnosia; Agnosia/classification; Agnosia/diagnosis; Humans
19.
Psychon Bull Rev ; 22(3): 637-52, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25092390

ABSTRACT

The relationship between structural processing in music and language has received increasing interest in the past several years, spurred by the influential Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, Nature Neuroscience, 6, 674-681, 2003). According to this resource-sharing framework, music and language rely on separable syntactic representations but recruit shared cognitive resources to integrate these representations into evolving structures. The SSIRH is supported by findings of interactions between structural manipulations in music and language. However, other recent evidence suggests that such interactions also can arise with nonstructural manipulations, and some recent neuroimaging studies report largely nonoverlapping neural regions involved in processing musical and linguistic structure. These conflicting results raise the question of exactly what shared (and distinct) resources underlie musical and linguistic structural processing. This paper suggests that one shared resource is prefrontal cortical mechanisms of cognitive control, which are recruited to detect and resolve conflict that occurs when expectations are violated and interpretations must be revised. By this account, musical processing involves not just the incremental processing and integration of musical elements as they occur, but also the incremental generation of musical predictions and expectations, which must sometimes be overridden and revised in light of evolving musical input.


Subject(s)
Cognition/physiology; Executive Function/physiology; Language; Music; Prefrontal Cortex/physiology; Humans
20.
Front Psychol ; 6: 1962, 2015.
Article in English | MEDLINE | ID: mdl-26733930

ABSTRACT

What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.
