Results 1 - 20 of 71
1.
Thorax ; 79(5): 395-402, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38184370

ABSTRACT

BACKGROUND: The potential association between the use of inhaled corticosteroids (ICS) and the risk of pneumonia among adults is disputed and paediatric-specific evidence is scarce. AIM: To assess the potential association between ICS use and the risk of hospitalisation for pneumonia among children (age 2-17 years) with asthma. METHODS: This was a cohort study based on nationwide data from routine clinical practice in Sweden (January 2007 to November 2021). From 425 965 children with confirmed asthma, episodes of new ICS use and no use were identified using records of dispensed drugs. We adjusted for potential confounders with propensity score overlap weighting and the risk of a hospitalisation with pneumonia as primary diagnosis was estimated. Multiple subgroup and sensitivity analyses were also performed. RESULTS: We identified 249 351 ICS (mean follow-up of 0.9 years) and 214 840 no-use (mean follow-up of 0.7 years) episodes. During follow-up, 369 and 181 events of hospitalisation for pneumonia were observed in the ICS and no-use episodes, respectively. The weighted incidence rate of hospitalisation for pneumonia was 14.5 per 10 000 patient-years for ICS use episodes and 14.6 for no-use episodes. The weighted HR for hospitalisation for pneumonia associated with ICS use was 1.06 (95% CI 0.88 to 1.28) and the absolute rate difference was -0.06 (95% CI -2.83 to 2.72) events per 10 000 patient-years, compared with no use. CONCLUSIONS: In this nationwide cohort study, we found no evidence of an association between ICS use and the risk of hospitalisation for pneumonia among children with asthma, as compared with no use.


Subject(s)
Anti-Asthmatic Agents , Asthma , Pneumonia , Adult , Child , Humans , Child, Preschool , Adolescent , Anti-Asthmatic Agents/therapeutic use , Cohort Studies , Administration, Inhalation , Asthma/drug therapy , Asthma/epidemiology , Adrenal Cortex Hormones/adverse effects , Hospitalization , Pneumonia/chemically induced , Pneumonia/epidemiology
2.
Lancet Digit Health ; 5(11): e821-e830, 2023 11.
Article in English | MEDLINE | ID: mdl-37890904

ABSTRACT

BACKGROUND: Novel immunisation methods against respiratory syncytial virus (RSV) are emerging, but knowledge of risk factors for severe RSV disease is insufficient for optimal targeting of interventions against them. Our aims were to identify predictors for RSV hospital admission from registry-based data and to develop and validate a clinical prediction model to guide RSV immunoprophylaxis for infants younger than 1 year. METHODS: In this model development and validation study, we studied all infants born in Finland between June 1, 1997, and May 31, 2020, and in Sweden between June 1, 2006, and May 31, 2020, along with the data for their parents and siblings. Infants were excluded if they died or were admitted to hospital for RSV within the first 7 days of life. The outcome was hospital admission due to RSV bronchiolitis during the first year of life. The Finnish study population was divided into a development dataset (born between June 1, 1997, and May 31, 2017) and a temporal hold-out validation dataset (born between June 1, 2017, and May 31, 2020). The development dataset was used for predictor discovery and selection in which we screened 1511 candidate predictors from the infants', parents', and siblings' data, and developed a logistic regression model with the 16 most important predictors. This model was then validated using the Finnish hold-out validation dataset and the Swedish dataset. FINDINGS: In total, there were 1 124 561 infants in the Finnish development dataset, 130 352 infants in the Finnish hold-out validation dataset, and 1 459 472 infants in the Swedish dataset. In addition to known predictors such as severe congenital heart defects (adjusted odds ratio 2·89, 95% CI 2·28-3·65), we confirmed some less established predictors for RSV hospital admission, most notably oesophageal malformations (3·11, 1·86-5·19) and lower complexity congenital heart defects (1·43, 1·25-1·63). 
The prediction model's C-statistic was 0·766 (95% CI 0·742-0·789) in Finnish data and 0·737 (0·710-0·762) in Swedish validation data. The infants in the highest decile of predicted RSV hospital admission probability had 4·5 times higher observed risk compared with others. Calibration varied according to epidemic intensity. The model's performance was similar to a machine learning (XGBoost) model using all 1511 candidate predictors (C-statistic in Finland 0·771, 95% CI 0·754-0·788). The prediction model showed clinical utility in decision curve analysis and in hypothetical number needed to treat calculations for immunisation, and its C-statistic was similar across different strata of parental income. INTERPRETATION: The identified predictors and the prediction model can be used in guiding RSV immunoprophylaxis in infants, or as a basis for further immunoprophylaxis targeting tools. FUNDING: Sigrid Jusélius Foundation, European Research Council, Pediatric Research Foundation, and Academy of Finland.


Subject(s)
Heart Defects, Congenital , Respiratory Syncytial Virus Infections , Infant , Child , Humans , Respiratory Syncytial Virus Infections/epidemiology , Respiratory Syncytial Virus Infections/prevention & control , Models, Statistical , Prognosis , Respiratory Syncytial Viruses , Risk Factors
3.
Atten Percept Psychophys ; 85(7): 2437-2458, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37264293

ABSTRACT

The speech perception system adjusts its phoneme categories based on the current speech input and lexical context. This is known as lexically driven perceptual recalibration, and it is often assumed to underlie accommodation to non-native accented speech. However, recalibration studies have focused on maximally ambiguous sounds (e.g., a sound ambiguous between "sh" and "s" in a word like "superpower"), a scenario that does not represent the full range of variation present in accented speech. Indeed, non-native speakers sometimes completely substitute a phoneme for another, rather than produce an ambiguous segment (e.g., saying "shuperpower"). This has been called a "bad map" in the literature. In this study, we scale up the lexically driven recalibration paradigm to such cases. Because previous research suggests that the position of the critically accented phoneme modulates the success of recalibration, we include such a manipulation in our study. And to ensure that participants treat all critical items as words (an important point for successful recalibration), we use a new exposure task that incentivizes them to do so. Our findings suggest that while recalibration is most robust after exposure to ambiguous sounds, it also occurs after exposure to bad maps. But interestingly, positional effects may be reversed: recalibration was more likely for ambiguous sounds late in words, but more likely for bad maps occurring early in words. Finally, a comparison of an online versus in-lab version of these conditions shows that experimental setting may have a non-trivial effect on the results of recalibration studies.


Subject(s)
Phonetics , Speech Perception , Humans , Speech , Sound , Accommodation, Ocular
4.
J Exp Psychol Learn Mem Cogn ; 48(3): 394-415, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35389728

ABSTRACT

Does saying a novel word help to recognize it later? Previous research on the effect of production on this aspect of word learning is inconclusive, as both facilitatory and detrimental effects of production are reported. In a set of three experiments, we sought to reconcile the seemingly contrasting findings by disentangling the production from other effects. In Experiment 1, participants learned eight new words and their visual referents. On each trial, participants heard a novel word twice: either (a) by hearing the same speaker produce it twice (Perception-Only condition) or (b) by first hearing the speaker once and then producing it themselves (Production condition). At test, participants saw two pictures while hearing a novel word and were asked to choose its correct referent. Experiment 2 was identical to Experiment 1, except that in the Perception-Only condition each word was spoken by 2 different speakers (equalizing talker variability between conditions). Experiment 3 was identical to Experiment 2, but at test words were spoken by a novel speaker to assess generalizability of the effect. Accuracy, reaction time, and eye-movements to the target image were collected. Production had a facilitatory effect during early stages of learning (after short training), but its effect became detrimental after additional training. The results help to reconcile conflicting findings regarding the role of production on word learning. This work is relevant to a wide range of research on human learning in showing that the same factor may play a different role at different stages of learning. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Speech Perception , Humans , Learning , Reaction Time , Verbal Learning
5.
Atten Percept Psychophys ; 84(3): 960-980, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35277847

ABSTRACT

Speech perception and production are critical skills when acquiring a new language. However, the nature of the relationship between these two processes is unclear, particularly for non-native speech sound contrasts. Although it has been assumed that perception and production are supportive, recent evidence has demonstrated that, under some circumstances, production can disrupt perceptual learning. Specifically, producing the to-be-learned contrast on each trial can disrupt perceptual learning of that contrast. Here, we treat speech perception and speech production as separate tasks. From this perspective, perceptual learning studies that include a production component on each trial create a task switch. We report two experiments that test how task switching can disrupt perceptual learning. One experiment demonstrates that the disruption caused by switching to production is sensitive to time delays: Increasing the delay between perception and production on a trial can reduce and even eliminate disruption of perceptual learning. The second experiment shows that if a task other than producing the to-be-learned contrast is imposed, the task-switching component of disruption is not influenced by a delay. These experiments provide a new understanding of the relationship between speech perception and speech production, and clarify conditions under which the two cooperate or compete.


Subject(s)
Phonetics , Speech Perception , Humans , Language , Learning , Speech
7.
Ann N Y Acad Sci ; 1511(1): 191-209, 2022 05.
Article in English | MEDLINE | ID: mdl-35124815

ABSTRACT

In Basque-Spanish bilinguals, statistical learning (SL) in the visual modality was more efficient on nonlinguistic than linguistic input; in the auditory modality, we found the reverse pattern of results. We hypothesize that SL was shaped for processing nonlinguistic environmental stimuli and only later, as the language faculty emerged, recycled for speech processing. This led to further adaptive changes in the neurocognitive mechanisms underlying speech processing, including SL. By contrast, as a recent cultural innovation, written language has not yet led to adaptations. The current study investigated whether such phylogenetic influences on SL can be modulated by ontogenetic influences on a shorter timescale, over the course of individual development. We explored how SL is modulated by the ambient linguistic environment. We found that SL in the auditory modality can be further modulated by exposure to a bilingual environment, in which speakers need to process a wider range of diverse speech cues. This effect was observed only on linguistic, not nonlinguistic, material. We conclude that ontogenetic factors modulate the efficiency of already existing SL ability, honing it for specific types of input, by providing new targets for selection via exposure to different cues in the sensory input.


Subject(s)
Learning , Speech Perception , Humans , Language , Language Development , Phylogeny , Speech
8.
Neuropsychologia ; 165: 108107, 2022 01 28.
Article in English | MEDLINE | ID: mdl-34921819

ABSTRACT

We investigated how aging modulates lexico-semantic processes in the visual (seeing written items), auditory (hearing spoken items) and audiovisual (seeing written items while hearing congruent spoken items) modalities. Participants were young and older adults who performed a delayed lexical decision task (LDT) presented in blocks of visual, auditory, and audiovisual stimuli. Event-related potentials (ERPs) revealed differences between young and older adults despite older adults' ability to identify words and pseudowords as accurately as young adults. The observed differences included more focalized lexico-semantic access in the N400 time window in older relative to young adults, stronger re-instantiation and/or more widespread activity of the lexicality effect at the time of responding, and stronger multimodal integration for older relative to young adults. Our results offer new insights into how functional neural differences in older adults can result in efficient access to lexico-semantic representations across the lifespan.


Subject(s)
Electroencephalography , Semantics , Aged , Aging , Brain , Evoked Potentials , Female , Humans , Male , Regression Analysis , Young Adult
9.
J Exp Psychol Hum Percept Perform ; 47(8): 1023-1042, 2021 08.
Article in English | MEDLINE | ID: mdl-34516210

ABSTRACT

Speech selective adaptation is a phenomenon in which repeated presentation of a speech stimulus alters subsequent phonetic categorization. Prior work has reported that lexical, but not multisensory, context influences selective adaptation. This dissociation suggests that lexical and multisensory contexts influence speech perception through separate and independent processes (see Samuel & Lieblich, 2014). However, this dissociation is based on results reported by different studies using different stimuli. This leaves open the possibility that the divergent effects of multisensory and lexical contexts on selective adaptation may be the result of idiosyncratic differences in the stimuli rather than separate perceptual processes. The present investigation used a single stimulus set to compare the selective adaptation produced by lexical and multisensory contexts. In contrast to the apparent dissociation in the literature, we find that multisensory information can in fact support selective adaptation. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Speech Perception , Speech , Humans , Phonetics
11.
J Exp Psychol Hum Percept Perform ; 47(4): 596-615, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33983792

ABSTRACT

Over the course of a lifetime, adults develop perceptual categories for the vowels and consonants in their native language, based on the distribution of those sounds in their environment. However, in any given listening situation, the short-term distribution of sounds can cause changes in this long-term categorization. For example, if the same sound (the "adaptor") is heard many times in a short period of time, listeners adapt and become less prone to hearing that sound. Although hundreds of speech selective adaptation experiments have been published, there is almost no information about how long this adaptation lasts. Using stimuli chosen to produce very large initial adaptation, we test adaptation effects with essentially no delay, and with delays of 25 min, 90 min, and 5.5 hr; these tests probe the duration of adaptation both in the (single) ear to which the adaptor was presented, and in the opposite ear. Reliable adaptation remains 5.5 hr after exposure in the same-ear condition, whereas it is undetectable at 90 min in the opposite ear. Surprisingly, the amount of residual adaptation is largely unaffected by whether the listener is exposed to speech between adaptation and test, unless the speech shares critical acoustic properties with the adapting sounds. Analyses of the shifts on three time scales (seconds, minutes, and hours) provide information about the multiple levels of analysis that the speech signal undergoes. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Speech Perception , Acoustic Stimulation , Adaptation, Physiological , Adult , Auditory Perception , Humans , Sound , Speech
12.
Neuroimage ; 237: 118168, 2021 08 15.
Article in English | MEDLINE | ID: mdl-34000398

ABSTRACT

Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context. Prior knowledge includes all sounds and words that are familiar to a listener and depends on linguistic experience. For bilinguals, the phonetic and lexical repertoire encompasses two languages, and the degree of overlap between word forms across languages affects the degree to which they influence one another during auditory word recognition. To support spoken word recognition, listeners often rely on semantic information (i.e., the words we hear are usually related in a meaningful way). Although the number of multilinguals across the globe is increasing, little is known about how crosslinguistic effects (i.e., word overlap) interact with semantic context and affect the flexible neural systems that support accurate word recognition. The current multi-echo functional magnetic resonance imaging (fMRI) study addresses this question by examining how prime-target word pair semantic relationships interact with the target word's form similarity (cognate status) to the translation equivalent in the dominant language (L1) during accurate word recognition of a non-dominant (L2) language. We tested 26 early-proficient Spanish-Basque (L1-L2) bilinguals. When L2 targets matching L1 translation-equivalent phonological word forms were preceded by unrelated semantic contexts that drive lexical competition, a flexible language control (fronto-parietal-subcortical) network was upregulated, whereas when they were preceded by related semantic contexts that reduce lexical competition, it was downregulated. 
We conclude that an interplay between semantic and crosslinguistic effects regulates flexible control mechanisms of speech processing to facilitate L2 word recognition in noise.


Subject(s)
Cerebral Cortex/physiology , Multilingualism , Nerve Net/physiology , Psycholinguistics , Recognition, Psychology/physiology , Speech Perception/physiology , Adult , Brain Mapping , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Nerve Net/diagnostic imaging , Semantics , Young Adult
13.
Ann N Y Acad Sci ; 1486(1): 76-89, 2021 02.
Article in English | MEDLINE | ID: mdl-33020959

ABSTRACT

The cognitive mechanisms underlying statistical learning are engaged for the purposes of speech processing and language acquisition. However, these mechanisms are shared by a wide variety of species that do not possess the language faculty. Moreover, statistical learning operates across domains, including nonlinguistic material. Ancient mechanisms for segmenting continuous sensory input into discrete constituents have evolved for general-purpose segmentation of the environment and been readopted for processing linguistic input. Linguistic input provides a rich set of cues for the boundaries between sequential constituents. Such input engages a wider variety of more specialized mechanisms operating on these language-specific cues, thus potentially reducing the role of conditional statistics in tokenizing a continuous linguistic stream. We provide an explicit within-subject comparison of the utility of statistical learning in language versus nonlanguage domains across the visual and auditory modalities. The results showed that in the auditory modality statistical learning is more efficient with speech-like input, while in the visual modality efficiency is higher with nonlanguage input. We suggest that the speech faculty has been important for individual fitness for an extended period, leading to the adaptation of statistical learning mechanisms for speech processing. This is not the case in the visual modality, in which linguistic material presents a less ecological type of sensory input.


Subject(s)
Biological Evolution , Language Development , Language , Learning , Speech Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Speech/physiology , Young Adult
14.
Infancy ; 25(3): 304-318, 2020 05.
Article in English | MEDLINE | ID: mdl-32749062

ABSTRACT

Attunement theories of speech perception development suggest that native-language exposure is one of the main factors shaping infants' phonemic discrimination capacity within the second half of their first year. Here, we focus on the role of acoustic-perceptual salience and language-specific experience by assessing the discrimination of acoustically subtle Basque sibilant contrasts. We used the infant-controlled version of the habituation procedure to assess discrimination in 6- to 7-month and 11- to 12-month-old infants who varied in their amount of exposure to Basque and Spanish. We observed no significant variation in the infants' discrimination behavior as a function of their linguistic experience. Infants in both age-groups exhibited poor discrimination, consistent with Basque adults finding these contrasts more difficult than some others. Our findings are in agreement with previous research showing that perceptual discrimination of subtle speech sound contrasts may follow a different developmental trajectory, where increased native-language exposure seems to be a requisite.


Subject(s)
Language Development , Phonetics , Speech Perception , Acoustic Stimulation , Discrimination Learning , Female , Humans , Infant , Language , Male , Spain
15.
J Exp Psychol Hum Percept Perform ; 46(8): 759-788, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32324035

ABSTRACT

How bilinguals control their languages and switch between them may change across the life span. Furthermore, bilingual language control may depend on the demands imposed by the context. Across 2 experiments, we examined how Spanish-Basque children, teenagers, younger, and older adults switch between languages in voluntary and cued picture-naming tasks. In the voluntary task, bilinguals could freely choose a language while the cued task required them to use a prespecified language. In the cued task, youths and older adults showed larger language mixing costs than young adults, suggesting that using 2 languages in response to cues was more effortful. Cued switching costs, especially when the switching sequence was predictable, were also greater for youths and older adults. The voluntary switching task showed limited age effects. Older adults, but not youths, showed larger switching costs than younger adults. A voluntary mixing benefit was found in all ages, implying that voluntarily using 2 languages was less effortful than using one language across the life span. Thus, while youths and older adults experience greater difficulties using multiple languages in response to external cues, they are affected less when they can freely use their languages. This shows that age effects on bilingual language control are context-dependent. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Cues , Executive Function/physiology , Multilingualism , Pattern Recognition, Visual/physiology , Psycholinguistics , Adolescent , Adult , Aged , Child , Female , Humans , Male , Middle Aged , Young Adult
16.
Neuropsychologia ; 137: 107305, 2020 02 03.
Article in English | MEDLINE | ID: mdl-31838100

ABSTRACT

In two experiments, we investigated the relationship between lexical access processes, and processes that are specifically related to making lexical decisions. In Experiment 1, participants performed a standard lexical decision task in which they had to respond as quickly and as accurately as possible to visual (written), auditory (spoken) and audiovisual (written + spoken) items. In Experiment 2, a different group of participants performed the same task but were required to make responses after a delay. Linear mixed effect models on reaction times and single trial Event-Related Potentials (ERPs) revealed that ERP lexicality effects started earlier in the visual than auditory modality, and that effects were driven by the written input in the audiovisual modality. More negative ERP amplitudes predicted slower reaction times in all modalities in both experiments. However, these predictive amplitudes were mainly observed within the window of the lexicality effect in Experiment 1 (the speeded task), and shifted to post-response-probe time windows in Experiment 2 (the delayed task). The lexicality effects lasted longer in Experiment 1 than in Experiment 2, and in the delayed task, we additionally observed a "re-instantiation" of the lexicality effect related to the delayed response. Delaying the response in an otherwise identical lexical decision task thus allowed us to separate lexical access processes from processes specific to lexical decision.


Subject(s)
Decision Making/physiology , Evoked Potentials/physiology , Pattern Recognition, Visual/physiology , Psycholinguistics , Psychomotor Performance/physiology , Reaction Time/physiology , Reading , Speech Perception/physiology , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Young Adult
17.
J Exp Psychol Learn Mem Cogn ; 46(7): 1270-1292, 2020 Jul.
Article in English | MEDLINE | ID: mdl-31633368

ABSTRACT

People often experience difficulties when they first hear a novel accent. Prior research has shown that relatively fast natural accent accommodation can occur. However, there has been little investigation of the underlying perceptual mechanism that drives the learning. The current study examines whether phonemic boundary changes play a central role in natural accent accommodation. Two well-established boundary shifting phenomena were used here-recalibration and selective adaptation-to index the flexibility of phonemic category boundaries. Natural accent accommodation was measured with a task in which listeners heard accented words and nonwords before and after listening to English sentences produced by one of two native Mandarin Chinese speakers with moderate accents. In two experiments, participants completed recalibration, selective adaptation, and natural accent accommodation tasks focusing on a consonant contrast that is difficult for native Chinese speakers to produce. We found that: (a) On the accent accommodation task, participants showed an increased endorsement of accented/mispronounced words after exposure to a speaker's accented speech, indicating a potential relaxation of criteria in the word recognition process; (b) There was no strong link between recalibrating phonemic boundaries and natural accent accommodation; (c) There was no significant correlation between recalibration and selective adaptation. These results suggest that recalibration of phonemic boundaries does not play a central role in natural accent accommodation. Instead, there is some evidence suggesting that natural accent accommodation involves a relaxation of phonemic categorization criteria. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Adaptation, Physiological/physiology , Multilingualism , Psycholinguistics , Speech Intelligibility/physiology , Speech Perception/physiology , Adolescent , Adult , Female , Humans , Male , Phonetics , Young Adult
18.
J Exp Psychol Learn Mem Cogn ; 46(6): 1121-1145, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31647287

ABSTRACT

In conversational speech, it is very common for words' segments to be reduced or deleted. However, previous research has consistently shown that during spoken word recognition, listeners prefer words' canonical pronunciation over their reduced pronunciations (e.g., pretty pronounced [prɪti] vs. [prɪɾi]), even when the latter are far more frequent. This surprising effect violates most current accounts of spoken word recognition. The current study tests the possibility that words' orthography may be 1 factor driving the advantage for canonical pronunciations during spoken word recognition. Participants learned new words presented in their reduced pronunciation (e.g., [trɒti]), paired with 1 of 3 spelling possibilities: (a) no accompanying spelling, (b) a spelling consistent with the reduced pronunciation (a reduced spelling, e.g., "troddy"), or (c) a spelling consistent with the canonical pronunciation (a canonical spelling, e.g., "trotty"). When listeners were presented with the new words' canonical forms for the first time, they erroneously accepted them at a higher rate if the words had been learned with a canonical spelling. These results remained robust after a delay period of 48 hr, and after additional learning trials. Our findings suggest that orthography plays an important role in the recognition of spoken words and that it is a significant factor driving the canonical pronunciation advantage observed previously. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Psycholinguistics , Recognition, Psychology/physiology , Speech Perception/physiology , Adult , Choice Behavior/physiology , Female , Humans , Male , Phonetics , Young Adult
19.
Evol Psychol ; 17(3): 1474704919879335, 2019.
Article in English | MEDLINE | ID: mdl-31564124

ABSTRACT

Patterns of nonverbal and verbal behavior of interlocutors become more similar as communication progresses. Rhythm entrainment promotes prosocial behavior and signals social bonding and cooperation. Yet, it is unknown if the convergence of rhythm in human speech is perceived and is used to make pragmatic inferences regarding the cooperative urge of the interactors. We conducted two experiments to answer this question. For analytical purposes, we separate pulse (recurring acoustic events) and meter (hierarchical structuring of pulses based on their relative salience). We asked the listeners to make judgments on the hostile or collaborative attitude of interacting agents who exhibit different or similar pulse (Experiment 1) or meter (Experiment 2). The results suggest that rhythm convergence can be a marker of social cooperation at the level of pulse, but not at the level of meter. The mapping of rhythmic convergence onto social affiliation or opposition is important at the early stages of language acquisition. The evolutionary origin of this faculty is possibly the need to transmit and perceive coalition information in social groups of human ancestors. We suggest that this faculty could promote the emergence of the speech faculty in humans.


Subject(s)
Biological Evolution , Cooperative Behavior , Interpersonal Relations , Social Perception , Verbal Behavior/physiology , Adolescent , Adult , Humans , Time Factors , Young Adult
20.
Ann N Y Acad Sci ; 1453(1): 153-165, 2019 10.
Article in English | MEDLINE | ID: mdl-31373001

ABSTRACT

Regular rhythm facilitates audiomotor entrainment and synchronization in motor behavior and vocalizations between individuals. As rhythm entrainment between interacting agents is correlated with higher levels of cooperation and prosocial affiliative behavior, humans can potentially map regular speech rhythm onto higher cooperation and friendliness between interacting individuals. We tested this hypothesis at two rhythmic levels: pulse (recurrent acoustic events) and meter (hierarchical structuring of pulses based on their relative salience). We asked the listeners to make judgments of the hostile or collaborative attitude of two interacting agents who exhibit either regular or irregular pulse (Experiment 1) or meter (Experiment 2). The results confirmed a link between the perception of social affiliation and rhythmicity: evenly distributed pulses (vowel onsets) and consistent grouping of pulses into recurrent hierarchical patterns are more likely to be perceived as cooperation signals. People are more sensitive to regularity at the level of pulse than at the level of meter, and they are more confident when they associate cooperation with isochrony in pulse. The evolutionary origin of this faculty is possibly the need to transmit and perceive coalition information in social groups of human ancestors. We discuss the implications of these findings for the emergence of speech in humans.


Subject(s)
Periodicity , Social Behavior , Speech Perception/physiology , Speech/physiology , Adolescent , Adult , Female , Humans , Judgment/physiology , Language , Male , Multilingualism , Young Adult