1.
Neurobiol Lang (Camb) ; 5(2): 484-496, 2024.
Article in English | MEDLINE | ID: mdl-38911463

ABSTRACT

Cortical tracking, the synchronization of brain activity to linguistic rhythms, is a well-established phenomenon. However, its nature has been heavily contested: Is it purely epiphenomenal, or does it play a fundamental role in speech comprehension? Previous research has used intelligibility manipulations to examine this topic. Here, we instead varied listeners' language comprehension skills while keeping the auditory stimulus constant. To do so, we tested 22 native English speakers and 22 Spanish/Catalan bilinguals learning English as a second language (SL) in an EEG cortical entrainment experiment and correlated the responses with the magnitude of the N400 component of a semantic comprehension task. As expected, native listeners effectively tracked sentential, phrasal, and syllabic linguistic structures. In contrast, SL listeners exhibited limitations in tracking sentential structures but successfully tracked phrasal and syllabic rhythms. Importantly, the amplitude of the neural entrainment correlated with the amplitude of the detection of semantic incongruities in SL listeners, showing a direct connection between tracking and the ability to understand speech. Together, these findings shed light on the interplay between language comprehension and cortical tracking, identifying neural entrainment as a fundamental principle of speech comprehension.

2.
Cogn Neuropsychol ; : 1-17, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38377394

ABSTRACT

This study investigates factors influencing lexical access in language production across modalities (signed and oral). Data from deaf and hearing signers were reanalyzed (Baus and Costa, 2015, On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC). Brain Research, 1609(1), 40-53. https://doi.org/10.1016/j.brainres.2015.03.013; Gimeno-Martínez and Baus, 2022, Iconicity in sign language production: Task matters. Neuropsychologia, 167, 108166. https://doi.org/10.1016/j.neuropsychologia.2022.108166), testing the influence of psycholinguistic variables and ERP mean amplitudes on signing and naming latencies. Deaf signers' signing latencies were influenced by sign iconicity in the picture signing task, and by spoken-language psycholinguistic variables in the word-to-sign translation task. Additionally, ERP amplitudes before the response influenced signing but not translation latencies. Hearing signers' latencies, both signing and naming, were influenced by sign iconicity and word frequency, with early ERP amplitudes predicting only naming latencies. These findings highlight general and modality-specific determinants of lexical access in language production.

3.
Sci Rep ; 13(1): 20037, 2023 11 16.
Article in English | MEDLINE | ID: mdl-37973908

ABSTRACT

When we encounter people, their faces are usually paired with their voices. We know that if the face looks familiar and the voice is high-pitched, the first impression will be positive and trustworthy. But how do we integrate these two multisensory physical attributes? Here, we explore (1) the automaticity of audiovisual integration in shaping first impressions of trustworthiness, and (2) the relative contribution of each modality to the final judgment. We find that, even though participants can focus their attention on one modality to judge trustworthiness, they fail to completely filter out the other modality, for both faces (Experiment 1a) and voices (Experiment 1b). When asked to judge the person as a whole, people rely more on voices (Experiment 2) or faces (Experiment 3). We link this change to the distinctiveness of each cue in the stimulus set rather than to a general property of the modality. Overall, we find that people weigh faces and voices automatically based on cue saliency when forming trustworthiness impressions.


Subject(s)
Cues , Voice , Humans , Attention , Facial Expression , Physical Examination , Trust
4.
PLoS One ; 17(11): e0276334, 2022.
Article in English | MEDLINE | ID: mdl-36322568

ABSTRACT

This registered report article investigates the role of language as a dimension of social categorization. Our critical aim was to investigate whether categorization based on language occurs even when the languages coexist within the same sociolinguistic context, as is the case in bilingual communities. Bilingual individuals from two bilingual communities, the Basque Country (Spain) and Veneto (Italy), were tested using the memory confusion paradigm in a 'Who said what?' task. In the encoding part of the task, participants were presented with different faces together with auditory sentences. Two languages were used in each study, with half of the faces always associated with one language and the other half with the other language: Spanish and Basque in Study 1, and Italian and Venetian dialect in Study 2. In the test phase, the auditory sentences were presented again and participants were required to decide which face had uttered each sentence. As expected, participants' error rates were high. Critically, participants were more likely to confuse faces from the same language category than from the other (different) language category. The results indicate that bilinguals categorize individuals belonging to the same sociolinguistic community based on the language these individuals speak, suggesting that social categorization based on language is an automatic process.


Subject(s)
Language , Multilingualism , Humans , Cues , Linguistics , Spain
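The critical comparison in the 'Who said what?' paradigm above is between attribution errors that stay within a language category and those that cross it. As a minimal, hypothetical sketch (the variable names and data are illustrative, not from the study), that classification could be computed like this:

```python
# Hypothetical sketch of the 'Who said what?' error classification.
# Each trial is (language of the true speaker, language of the chosen
# speaker, whether the attribution was correct). Data are fabricated.

def confusion_rates(trials):
    """Split attribution errors into within- vs. between-category rates.

    Returns (within_rate, between_rate) computed over error trials only:
    an error is 'within-category' when the wrongly chosen face belongs
    to the same language category as the true speaker.
    """
    errors = [t for t in trials if not t[2]]
    if not errors:
        return 0.0, 0.0
    within = sum(1 for true_lang, chosen_lang, _ in errors
                 if true_lang == chosen_lang)
    return within / len(errors), (len(errors) - within) / len(errors)

# Example: 6 errors, of which 4 are within-category and 2 between.
trials = [
    ("basque", "basque", False), ("basque", "basque", False),
    ("spanish", "spanish", False), ("spanish", "spanish", False),
    ("basque", "spanish", False), ("spanish", "basque", False),
    ("basque", "basque", True),   # a correct trial is ignored
]
within, between = confusion_rates(trials)
```

A within-category rate reliably above the between-category rate is the signature of categorization by language reported in the abstract.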
5.
Cognition ; 227: 105213, 2022 10.
Article in English | MEDLINE | ID: mdl-35803105

ABSTRACT

In this study we investigated whether people conceptually align when performing a language task together with a robot. In a joint picture-naming task, 24 French native speakers took turns with a robot in naming images of objects belonging to fifteen different semantic categories. For a subset of those semantic categories, the robot was programmed to produce the superordinate, semantic-category name (e.g., fruit) instead of the more typical basic-level name associated with an object (e.g., pear). Importantly, while semantic categories were shared between the participant and the robot (e.g., fruits), different objects were assigned to each of them (e.g., 'a pear' for the robot and 'an apple' for the participant). Logistic regression models on participants' responses revealed that they aligned with the conceptual choices of the robot, producing over the course of the experiment more superordinate names (e.g., saying 'fruit' to the picture of an 'apple') for objects belonging to the same semantic categories for which the robot had produced a superordinate name (e.g., saying 'fruit' to the picture of a 'pear'). These results provide evidence that conceptual alignment affects speakers' word choices as a result of adaptation to the partner, even when the partner is a robot.


Subject(s)
Pattern Recognition, Visual , Robotics , Humans , Reaction Time/physiology , Semantics , Social Interaction
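The abstract above models a binary naming choice (superordinate vs. basic-level name) with logistic regression. As a purely illustrative sketch, not the authors' analysis, a minimal version of that model can be fit in plain Python; the fabricated data code a predictor x (1 if the robot used a superordinate name for that category) against a response y (1 if the participant produced a superordinate name):

```python
import math

# Minimal logistic-regression sketch fit by gradient descent.
# Illustrative only: data and variable coding are invented, and a real
# analysis would use a statistics package with per-trial covariates.

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit P(y=1) = sigmoid(b0 + b1*x); returns the estimates (b0, b1)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y)          # gradient wrt intercept
            g1 += (p - y) * x      # gradient wrt slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# x = 1: robot produced a superordinate name in this category, else 0.
# y = 1: participant produced a superordinate name on this trial.
xs = [1, 1, 1, 1, 0, 0, 0, 0]
ys = [1, 1, 1, 0, 0, 0, 1, 0]
b0, b1 = fit_logistic(xs, ys)
# A positive slope b1 is the alignment pattern the abstract describes:
# higher odds of a superordinate name when the robot used one.
```

The point of the sketch is only the direction of the effect; the study's actual models presumably included trial-level and participant-level structure.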
6.
Sci Data ; 9(1): 431, 2022 07 21.
Article in English | MEDLINE | ID: mdl-35864133

ABSTRACT

The growing interdisciplinary research field of psycholinguistics is in constant need of new and up-to-date tools which allow researchers to answer complex questions, and also to expand into languages other than English, which dominates the field. One such type of tool is the picture dataset, which provides naming norms for everyday objects. However, existing databases tend to be small in terms of the number of items they include, and have been normed in only a limited number of languages, despite the recent boom in multilingualism research. In this paper we present the Multilingual Picture (Multipic) database, containing naming norms and familiarity scores for 500 coloured pictures, in thirty-two languages or language varieties from around the world. The data were validated with standard methods that have been used for existing picture datasets. This is the first dataset to provide naming norms, and translation equivalents, for such a variety of languages; as such, it will be of particular value to psycholinguists and other interested researchers. The dataset has been made freely available.


Subject(s)
Multilingualism , Psycholinguistics , Databases, Factual , Humans , Language , Recognition, Psychology
7.
Cogn Sci ; 46(2): e13102, 2022 02.
Article in English | MEDLINE | ID: mdl-35122322

ABSTRACT

How does prior linguistic knowledge modulate learning in verbal auditory statistical learning (ASL) tasks? Here, we address this question by assessing to what extent the frequency of syllabic co-occurrences in the learners' native language determines ASL performance. We computed the frequency of co-occurrences of syllables in spoken Spanish through a transliterated corpus, and used this measure to construct two artificial familiarization streams. One stream was constructed by embedding pseudowords with high co-occurrence frequency in Spanish (the "Spanish-like" condition), the other by embedding pseudowords with low co-occurrence frequency (the "Spanish-unlike" condition). Native Spanish-speaking participants listened to one of the two streams, and were tested in an old/new identification task to examine their ability to discriminate the embedded pseudowords from foils. Our results show that performance in the verbal ASL task was significantly influenced by the frequency of syllabic co-occurrences in Spanish: when the embedded pseudowords were more "Spanish-like," participants were better able to identify them as part of the stream. These findings demonstrate that learners' performance in verbal ASL tasks changes as a function of the artificial language's similarity to their native language, and highlight how prior linguistic knowledge biases the learning of regularities.


Subject(s)
Learning , Speech Perception , Auditory Perception , Humans , Language , Linguistics , Verbal Learning
8.
Neuropsychologia ; 167: 108166, 2022 03 12.
Article in English | MEDLINE | ID: mdl-35114219

ABSTRACT

The present study explored the influence of iconicity on sign lexical retrieval and whether it is modulated by the task at hand. Lexical frequency was also manipulated to provide an index of lexical processing during sign production. Behavioural and electrophysiological (ERP) measures were collected from 22 Deaf bimodal bilinguals while they performed a picture naming task in Catalan Sign Language (Llengua de Signes Catalana, LSC) and a word-to-sign translation task (Spanish written words to LSC). Iconicity effects were observed in the picture naming task, but not in the word-to-sign translation task, both behaviourally and at the ERP level. In contrast, frequency effects were observed in both tasks, with ERP effects appearing earlier in the word-to-sign translation task than in the picture naming task. These results support the idea that iconicity in sign language is not pervasive but is modulated by task demands. As discussed, iconicity effects in sign language would be emphasised when naming pictures because sign lexical representations in this task are retrieved via semantic-to-phonological links. Conversely, attenuated iconicity effects when translating words might result from sign lexical representations being accessed directly from the lexical representations of the words.


Subject(s)
Semantics , Sign Language , Humans , Language , Linguistics
9.
Lang Cogn Neurosci ; 36(7): 824-839, 2021.
Article in English | MEDLINE | ID: mdl-34485588

ABSTRACT

Speakers learning a second language show systematic differences from native speakers in the retrieval, planning, and articulation of speech. A key challenge in examining the interrelationship between these differences at various stages of production is the need for manual annotation of fine-grained properties of speech. We introduce a new method for automatically analyzing voice onset time (VOT), a key phonetic feature indexing differences in sound systems cross-linguistically. In contrast to previous approaches, our method allows reliable measurement of prevoicing, a dimension of VOT variation used by many languages. Analysis of VOTs, word durations, and reaction times from German-speaking learners of Spanish (Baus et al., 2013) suggests that while there are links between the factors impacting planning and articulation, these two processes also exhibit some degree of independence. We discuss the implications of these findings for theories of speech production and future research in bilingual language processing.

10.
PLoS One ; 16(7): e0254513, 2021.
Article in English | MEDLINE | ID: mdl-34252169

ABSTRACT

The present pre-registration aims to investigate the role of language as a dimension of social categorization. Our critical aim is to investigate whether language can be used as a dimension of social categorization even when the languages coexist within the same sociolinguistic group, as is the case in bilingual communities where two languages are used in daily social interactions. We will use the memory confusion paradigm (also known as the 'Who said what?' task). In the first part of the task (encoding), participants will be presented with a face (i.e., a speaker) and will listen to an auditory sentence. Two languages will be used, with half of the faces always associated with one language and the other half with the other language. In the second phase (recognition), all the faces will be presented on the screen and participants will decide which face uttered which sentence in the encoding phase. Based on previous literature, we expect that participants will be more likely to confuse faces from within the same language category than from the other language category. Participants will be bilingual individuals from two bilingual communities, the Basque Country (Spain) and Veneto (Italy). The two languages of each community will be used: Spanish and Basque (Study 1), and Italian and Venetian dialect (Study 2). Furthermore, we will explore whether the amount of daily exposure to the two languages modulates the effect of language as a social categorization cue. This research will allow us to test whether bilingual people categorize individuals belonging to the same sociolinguistic community based on the language these individuals are speaking. Our findings may have relevant political and social implications for linguistic policies in bilingual communities.


Subject(s)
Language , Humans , Italy , Male , Multilingualism , Spain , White People
11.
Sci Rep ; 11(1): 9715, 2021 05 06.
Article in English | MEDLINE | ID: mdl-33958663

ABSTRACT

Does language categorization influence face identification? The present study addressed this question by means of two experiments. First, to establish language categorization of faces, the memory confusion paradigm was used to create two language categories of faces, Spanish and English. Subsequently, participants underwent an oddball paradigm in which faces that had previously been paired with one of the two languages (Spanish or English) were presented. We measured EEG perceptual differences (vMMN) between standard faces and two types of deviant faces: within-language category (faces sharing a language with the standards) or between-language category (faces paired with the other language). Participants were more likely to confuse faces within a language category than between categories, indicating that faces had been categorized by language. At the neural level, early vMMNs were obtained for between-language category faces, but not for within-language category faces. At a later stage, however, larger vMMNs were obtained for faces from the same language category. Our results showed that language is a relevant social cue that individuals use to categorize others, and that this categorization subsequently affects face perception.


Subject(s)
Facial Recognition , Language , Electroencephalography , Female , Humans , Male , Young Adult
12.
J Deaf Stud Deaf Educ ; 25(1): 80-90, 2020 01 03.
Article in English | MEDLINE | ID: mdl-31504619

ABSTRACT

In recent years, there has been a significant increase in the number of people learning sign languages. For hearing second language (L2) signers, acquiring a sign language involves acquiring a new language in a different modality. Exploring how L2 sign perception is accomplished and how newly learned categories are created is the aim of the present study. In particular, we investigated handshape perception by means of two tasks, identification and discrimination. In two experiments, we compared groups of hearing L2 signers with groups with different knowledge of sign language. Experiment 1 explored three groups of children: hearing L2 signers, deaf signers, and hearing nonsigners. All groups obtained similar results in both identification and discrimination tasks regardless of sign language experience. In Experiment 2, two groups of adults, learners of Catalan Sign Language (LSC) and nonsigners, perceived handshapes that could be permissible (either as a sign or as a gesture) or not. Both groups obtained similar results in both tasks and performed significantly differently when perceiving handshapes depending on their permissibility. The results obtained here suggest that sign language experience is not a determining factor in handshape perception and support other hypotheses that take gesture experience into account.


Subject(s)
Deafness/psychology , Gestures , Sign Language , Adolescent , Adult , Case-Control Studies , Child , Child, Preschool , Comprehension , Discrimination, Psychological , Female , Humans , Learning , Linguistics , Male , Multilingualism , Young Adult
13.
Brain Sci ; 9(11)2019 Oct 27.
Article in English | MEDLINE | ID: mdl-31717882

ABSTRACT

Word reduction refers to the shortening of predictable words in features such as duration, intensity, or pitch. However, its origin is still unclear: are words reduced because it is the second time that their conceptual representations are activated, or because the words themselves are articulated twice? If word reduction is conceptually driven, it should be irrelevant whether the same referent is mentioned twice using different words. However, if it is articulatory, using different words for the same referent could prevent word reduction. In the present work, we use bilingualism to explore the conceptual or articulatory origin of word reduction in language production. Word reduction was compared in two conditions: a non-switch condition, where the two mentions of a referent were uttered in the same language, and a switch condition, where the referent was named in both languages. Dyads of participants completed collaborative maps in which words were uttered twice in Catalan or in Spanish, either repeating or switching the language between mentions. Words were equally reduced in duration, intensity, and pitch in the non-switch and switch conditions. Furthermore, the cognate status of words did not play any role. These findings support the theory that word reduction is conceptually driven.

14.
Sci Rep ; 9(1): 414, 2019 01 23.
Article in English | MEDLINE | ID: mdl-30674913

ABSTRACT

We form very rapid personality impressions of speakers on hearing a single word. This implies that the acoustical properties of the voice (e.g., pitch) are very powerful cues for forming social impressions. Here, we aimed to explore how personality impressions for brief social utterances transfer across languages and whether acoustical properties play a similar role in driving personality impressions. Additionally, we examined whether evaluations are similar in the native and a foreign language of the listener. In two experiments we asked Spanish listeners to evaluate personality traits from different instances of the Spanish word "Hola" (Experiment 1) and the English word "Hello" (Experiment 2), their native and a foreign language, respectively. The results revealed that listeners form very similar personality impressions across languages, irrespective of whether the voices belong to the native or the foreign language of the listener. A social voice space was summarized by two main personality traits, one emphasizing valence (e.g., trust) and the other strength (e.g., dominance). Conversely, the acoustical properties that listeners pay attention to when judging others' personality vary across languages. These results provide evidence that social voice perception contains certain elements invariant across cultures/languages, while others are modulated by the cultural/linguistic background of the listener.


Subject(s)
Attention , Language , Personality , Social Perception , Speech Perception , Adolescent , Adult , Female , Humans , Male
15.
Front Psychol ; 9: 1032, 2018.
Article in English | MEDLINE | ID: mdl-29988490

ABSTRACT

Bilingual speakers are thought to use control processes to avoid linguistic interference from the unintended language. It is debated whether these bilingual language control (BLC) processes are an instantiation of more domain-general executive control (EC) processes. Previous studies inconsistently report correlations between measures of linguistic and non-linguistic control in bilinguals. In the present study, we investigate the extent to which there is cross-talk between these two domains of control for two switch costs, namely the n-1 shift cost and the n-2 repetition cost. We also address an important problem, namely the reliability of the measures used to investigate cross-talk: if the reliability of a measure is low, it is ill-suited to testing cross-talk between domains through correlations. We asked participants to perform both a linguistic and a non-linguistic switching task at two sessions about a week apart. The results show a dissociation between the two types of switch costs. Regarding test-retest reliability, we found stronger reliability for the n-1 shift cost than for the n-2 repetition cost within both domains, as measured by correlations across sessions. This suggests the n-1 shift cost is more suitable for exploring cross-talk between BLC and EC. Indeed, we do find cross-talk for the n-1 shift cost, as demonstrated by a significant cross-domain correlation. This suggests that there are at least some shared processes between the linguistic and non-linguistic tasks.

16.
Acta Psychol (Amst) ; 186: 63-70, 2018 May.
Article in English | MEDLINE | ID: mdl-29704743

ABSTRACT

The information we obtain from how speakers sound, for example their accent, affects how we interpret the messages they convey. A clear example is foreign-accented speech, where reduced intelligibility and the speaker's social categorization (as an out-group member) affect memory for the message and its credibility (e.g., lower trustworthiness). In the present study, we go one step further and ask whether evaluations of messages are also affected by regional accents, that is, accents from a different region than the listener's. We report results from three experiments on immediate memory recognition and immediate credibility assessments, as well as the illusory truth effect. These revealed no differences between messages conveyed in local accents (from the same region as the participant) and regional accents (from native speakers of a different country than the participants). Our results suggest that when the accent of a speaker is highly intelligible, social categorization by accent does not seem to negatively affect how we treat the speaker's messages.


Subject(s)
Cross-Cultural Comparison , Language , Memory/physiology , Phonetics , Speech Perception/physiology , Adolescent , Adult , Cognition/physiology , Cuba/ethnology , Female , Humans , Male , South America/ethnology , Spain/ethnology , Speech/physiology , Young Adult
17.
Front Psychol ; 8: 709, 2017.
Article in English | MEDLINE | ID: mdl-28539898

ABSTRACT

Here we investigated how the language in which a person addresses us, native or foreign, influences subsequent face recognition. In an old/new paradigm, we explored the behavioral and electrophysiological activity associated with face recognition memory. Participants were first presented with faces accompanied by voices speaking either their native language (NL) or a foreign language (FL). Faces were then presented in isolation and participants decided whether each face had been presented before (old) or not (new). The results revealed that participants were more accurate at remembering faces previously paired with their native language than with the foreign language. At the event-related potential (ERP) level, we obtained evidence that faces in the NL condition were encoded differently from those in the FL condition, potentially due to differences in processing demands. During recognition, the frontal old/new effect was present (with a difference in latency) regardless of the language with which a face was associated, while the parietal old/new effect appeared only for faces associated with the native language. These results suggest that the language of our social interactions has an impact on the memory processes underlying the recognition of individuals.

18.
Behav Res Methods ; 48(1): 123-37, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25630312

ABSTRACT

The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.


Subject(s)
Databases, Factual , Sign Language , Humans , Movement , Video Recording
19.
Front Psychol ; 6: 351, 2015.
Article in English | MEDLINE | ID: mdl-25870577

ABSTRACT

The present study examined whether the processing of words with affective connotations in a listener's native language may be modulated by accented speech. To address this question, we used the event-related potential (ERP) technique and recorded the cerebral activity of native Spanish listeners, who performed a semantic categorization task while listening to positive, negative, and neutral words produced in standard Spanish or in four foreign accents. The behavioral results yielded longer latencies for emotional than for neutral words in both native and foreign-accented speech, with no difference between positive and negative words. The electrophysiological results replicated previous findings from the emotional language literature, with the amplitude of the Late Positive Complex (LPC), associated with emotional language processing, being larger (more positive) for emotional than for neutral words at posterior scalp sites. Interestingly, foreign-accented speech was found to interfere with the processing of positive valence and to be accompanied by a negativity bias, possibly suggesting heightened attention to negative words. The manipulation employed in the present study provides an interesting perspective on the effects of accented speech on the processing of affect-laden information. It shows that higher-order semantic processes that involve emotion-related aspects are sensitive to a speaker's accent.

20.
Brain Res ; 1609: 40-53, 2015 Jun 03.
Article in English | MEDLINE | ID: mdl-25801115

ABSTRACT

This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates of sign frequency and iconicity in a picture signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants was tested in the same task but naming the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while absent in the spoken modality), suggesting a stronger activation of the semantic properties of iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but that its time course is modulated by particular aspects of the modality in which a lexical item is finally articulated.


Subject(s)
Brain/physiology , Multilingualism , Pattern Recognition, Visual/physiology , Sign Language , Adult , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Language Tests , Male , Neuropsychological Tests , Photic Stimulation , Semantics , Spain , Time Factors