Results 1 - 20 of 35
1.
Cogn Psychol ; 147: 101597, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37827092

ABSTRACT

Connectives such as but are critical for building coherent discourse. They also express meanings that do not fit neatly into the standard distinction between semantics and implicated pragmatics. How do children acquire them? Corpus analyses indicate that children use these words in a sophisticated way by the early preschool years, but a small number of experimental studies suggest that children do not understand that but has a contrastive meaning until they reach school age. In a series of eight experiments, we tested children's understanding of contrastive but compared to the causal connective so, using a word learning paradigm (e.g., It was a warm day but/so Katy put on a pagle). When the connective so was used, we found that even 2-year-olds inferred a novel word meaning that was associated with the sentence context (a t-shirt). However, for the connective but, children did not infer a non-associated contrastive meaning (a winter coat) until age 7. Before that, even 5-year-old children reliably inferred an associated referent, indicating that they failed to correctly assign but a contrastive meaning. Five control experiments ruled out explanations for this pattern based on basic task demands, sentence processing skills or difficulty making adult-like inferences. A sixth experiment reports one particular context in which five-year-olds do interpret but contrastively. However, that same context also leads children to interpret so contrastively. We conclude that children's sophisticated production of connectives like but and so masks a major difficulty learning their meanings. We suggest that discourse connectives exemplify a class of words whose usage is easy to mimic, but whose meanings are difficult to acquire from everyday conversations, with implications for theories of word learning and discourse processing.


Subject(s)
Language , Learning , Humans , Child, Preschool , Child , Semantics , Verbal Learning , Language Tests , Language Development
2.
Emotion ; 23(7): 2059-2079, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36877488

ABSTRACT

Detecting faces and identifying their emotional expressions are essential for social interaction. The importance of expressions has prompted suggestions that some emotionally relevant facial features may be processed unconsciously, and it has been further suggested that this unconscious processing yields preferential access to awareness. Evidence for such preferential access has predominantly come from reaction times in the breaking continuous flash suppression (bCFS) paradigm, which measures how long it takes different stimuli to overcome interocular suppression. For instance, it has been claimed that fearful expressions break through suppression faster than neutral expressions. However, in the bCFS procedure, observers can decide how much information they receive before committing to a report, so although their responses may reflect differential detection sensitivity, they may also be influenced by differences in decision criteria, stimulus identification, and response production processes. Here, we employ a procedure that directly measures sensitivity for both face detection and identification of facial expressions, using predefined exposure durations. We apply diverse psychophysical approaches: forced-choice localization, presence/absence detection, and staircase-based threshold measurement. Across six experiments, we find that emotional expressions do not alter detection sensitivity to faces as they break through CFS. Our findings constrain the possible mechanisms underlying previous findings: faster reporting of emotional expressions' breakthrough into awareness is unlikely to be due to the presence of emotion affecting perceptual sensitivity; the source of such effects is likely to reside in one of the many other processes that influence response times. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Awareness , Emotions , Humans , Awareness/physiology , Facial Expression , Reaction Time
3.
Behav Brain Res ; 437: 114116, 2023 02 02.
Article in English | MEDLINE | ID: mdl-36113728

ABSTRACT

Human faces convey essential information for understanding others' mental states and intentions. The importance of faces in social interaction has prompted suggestions that some relevant facial features such as configural information, emotional expression, and gaze direction may promote preferential access to awareness. This evidence has predominantly come from interocular suppression studies, with the most common method being the Breaking Continuous Flash Suppression (bCFS) procedure, which measures the time it takes different stimuli to overcome interocular suppression. However, the procedures employed in such studies suffer from multiple methodological limitations. For example, they are unable to disentangle detection from identification processes, their results may be confounded by participants' response bias and decision criteria, they typically use small stimulus sets, and some of their results attributed to detecting high-level facial features (e.g., emotional expression) may be confounded by differences in low-level visual features (e.g., contrast, spatial frequency). In this article, we review the evidence from the bCFS procedure on whether relevant facial features promote access to awareness, discuss the main limitations of this very popular method, and propose strategies to address these issues.


Subject(s)
Facial Recognition , Humans , Awareness/physiology
4.
Psychol Sci ; 33(11): 1842-1856, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36126649

ABSTRACT

We studied the fundamental issue of whether children evaluate the reliability of their language interpretation, that is, their confidence in understanding words. In two experiments, 2-year-olds (Experiment 1: N = 50; Experiment 2: N = 60) saw two objects and heard one of them being named; both objects were then hidden behind screens and children were asked to look toward the named object, which was eventually revealed. When children knew the label used, they showed increased postdecision persistence after a correct compared with an incorrect anticipatory look, a marker of confidence in word comprehension (Experiment 1). When interacting with an unreliable speaker, children showed accurate word comprehension but reduced confidence in the accuracy of their own choice, indicating that children's confidence estimates are influenced by social information (Experiment 2). Thus, by the age of 2 years, children can estimate their confidence during language comprehension, long before they can talk about their linguistic skills.


Subject(s)
Comprehension , Eye Movements , Child , Humans , Child, Preschool , Reproducibility of Results , Language , Language Development
5.
Sci Rep ; 12(1): 7640, 2022 05 10.
Article in English | MEDLINE | ID: mdl-35538138

ABSTRACT

Faces convey information essential for social interaction. Their importance has prompted suggestions that some facial features may be processed unconsciously. Although some studies have provided empirical support for this idea, it remains unclear whether these findings were due to perceptual processing or to post-perceptual decisional factors. Evidence for unconscious processing of facial features has predominantly come from the Breaking Continuous Flash Suppression (b-CFS) paradigm, which measures the time it takes different stimuli to overcome interocular suppression. For example, previous studies have found that upright faces are reported faster than inverted faces, and direct-gaze faces are reported faster than averted-gaze faces. However, this procedure suffers from important problems: observers can decide how much information they receive before committing to a report, so their detection responses may be influenced by differences in decision criteria and by stimulus identification. Here, we developed a new procedure that uses predefined exposure durations, enabling independent measurement of perceptual sensitivity and decision criteria. We found higher detection sensitivity to both upright and direct-gaze (compared to inverted and averted-gaze) faces, with no effects on decisional factors. For identification, we found both greater sensitivity and more liberal criteria for upright faces. Our findings demonstrate that face orientation and gaze direction influence perceptual sensitivity, indicating that these facial features may be processed unconsciously.


Subject(s)
Face , Fixation, Ocular , Head , Photic Stimulation
6.
J Autism Dev Disord ; 52(8): 3560-3573, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34406588

ABSTRACT

One factor that may influence how executive functions develop is exposure to more than one language in childhood. This study explored the impact of bilingualism on inhibitory control in autistic (n = 38) and non-autistic children (n = 51). Bilingualism was measured on a continuum of exposure to investigate the effects of language environment on two facets of inhibitory control. Behavioural control of motor impulses was modulated positively through increased bilingual exposure, irrespective of diagnostic status, but bilingual exposure did not significantly affect inhibition involving visual attention. The results partially support the hypothesis that bilingual exposure differentially affects components of inhibitory control and provide important evidence for families that bilingualism is not detrimental to their children's development.


Subject(s)
Autism Spectrum Disorder , Multilingualism , Child , Executive Function/physiology , Humans , Language , Psychomotor Performance
7.
Open Mind (Camb) ; 5: 1-19, 2021.
Article in English | MEDLINE | ID: mdl-34485794

ABSTRACT

There has been little investigation of the way source monitoring, the ability to track the source of one's knowledge, may be involved in lexical acquisition. In two experiments, we tested whether toddlers (mean age 30 months) can monitor the source of their lexical knowledge and reevaluate their implicit belief about a word mapping when this source is proven to be unreliable. Experiment 1 replicated previous research (Koenig & Woodward, 2010): children displayed better performance in a word learning test when they learned words from a speaker who had previously revealed themselves as reliable (correctly labeling familiar objects) as opposed to an unreliable labeler (incorrectly labeling familiar objects). Experiment 2 then provided the critical test for source monitoring: children first learned novel words from a speaker before watching that speaker label familiar objects correctly or incorrectly. Children who were exposed to the reliable speaker were significantly more likely to endorse the word mappings taught by the speaker than children who were exposed to a speaker who they later discovered was an unreliable labeler. Thus, young children can reevaluate recently learned word mappings upon discovering that the source of their knowledge is unreliable. This suggests that children can monitor the source of their knowledge in order to decide whether that knowledge is justified, even at an age when they are not credited with the ability to verbally report how they have come to know what they know.

8.
Q J Exp Psychol (Hove) ; 74(12): 2193-2209, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34120522

ABSTRACT

Language comprehension depends heavily upon prediction, but how predictions are generated remains poorly understood. Several recent theories propose that these predictions are in fact generated by the language production system. Here, we directly test this claim. Participants read sentence contexts that either were or were not highly predictive of a final word, and we measured how quickly participants recognised that final word (Experiment 1), named that final word (Experiment 2), or used that word to name a picture (Experiment 3). We manipulated engagement of the production system by asking participants to read the sentence contexts either aloud or silently. Across the experiments, participants responded more quickly following highly predictive contexts. Importantly, the effect of contextual predictability was greater when participants had read the sentence contexts aloud rather than silently, a finding that was significant in Experiment 3, marginally significant in Experiment 2, and again significant in combined analyses of Experiments 1-3. These results indicate that language production (as used in reading aloud) can be used to facilitate prediction. We consider whether prediction benefits from production only in particular contexts and discuss the theoretical implications of our evidence.


Subject(s)
Language , Names , Comprehension , Humans , Reading
9.
Cognition ; 211: 104650, 2021 06.
Article in English | MEDLINE | ID: mdl-33721717

ABSTRACT

How do we update our linguistic knowledge? In seven experiments, we asked whether error-driven learning can explain under what circumstances adults and children are more likely to store and retain a new word meaning. Participants were exposed to novel object labels in the context of more or less constraining sentences or visual contexts. Both two-to-four-year-olds (mean age = 38 months) and adults were strongly affected by expectations based on sentence constraint when choosing the referent of a new label. In addition, adults formed stronger memory traces for novel words that violated a stronger prior expectation. However, preschoolers' memory was unaffected by the strength of their prior expectations. We conclude that the encoding of new word-object associations in memory is affected by prediction error in adults, but not in preschoolers.


Subject(s)
Learning , Verbal Learning , Adult , Child , Child, Preschool , Humans , Knowledge , Language , Linguistics
10.
Cognition ; 210: 104602, 2021 05.
Article in English | MEDLINE | ID: mdl-33550116

ABSTRACT

Speakers' lexical choices are affected by interpersonal-level influences, like a tendency to reuse an interlocutor's words. Here, we examined how those choices are additionally affected by community-level factors, like whether the interlocutor is from their own or another speech community (in-community vs. out-community partner), and how such interpersonal experiences contribute to the acquisition of community-level linguistic knowledge. Our three experiments tested (i) how speakers' lexical choices varied depending on their partner's choices and speech community, and (ii) how speakers' extrapolation of these choices to a subsequent partner was influenced by their partners' speech communities. In Experiment 1, Spanish participants played two sessions of an online picture-matching-and-naming task, encountering the same pictures but different confederates in each session. The first confederate was either an in-community partner (Spanish) or an out-community partner (Latin American); the second confederate was either from the same community as the first confederate or not. Participants' referential choices in Session 1 were influenced by their partner's choices, but not by their community. However, participants' likelihood to subsequently maintain these choices was affected by their partners' communities. Experiment 2 replicated this pattern in Mexicans, and Experiment 3 confirmed that these results were driven by confederates' communities, rather than perceived linguistic status. Our results suggest that speakers encode speech community information during dialogue and store it to inform future contexts of language use, even when it has not affected their choices during that particular encounter. Thus, speakers learn community-level knowledge by extrapolating linguistic information from interpersonal-level experiences.


Subject(s)
Speech Perception , Speech , Humans , Interpersonal Relations , Language , Linguistics
11.
Child Dev ; 92(3): 1048-1066, 2021 05.
Article in English | MEDLINE | ID: mdl-32865231

ABSTRACT

By age 2, children are developing foundational language processing skills, such as quickly recognizing words and predicting words before they occur. How do these skills relate to children's structural knowledge of vocabulary? Multiple aspects of language processing were simultaneously measured in a sample of 2-to-5-year-olds (N = 215): While older children were more fluent at recognizing words, at predicting words in a graded fashion, and at revising incorrect predictions, only revision was associated with concurrent vocabulary knowledge once age was accounted for. However, an exploratory longitudinal follow-up (N = 55) then found that word recognition and prediction skills were associated with rate of subsequent vocabulary development, but revision skills were not. We argue that prediction skills may facilitate language learning through enhancing processing speed.


Subject(s)
Language Development , Vocabulary , Adolescent , Child , Child, Preschool , Humans , Language , Language Tests
12.
Nat Hum Behav ; 4(12): 1217, 2020 12.
Article in English | MEDLINE | ID: mdl-33106628

Subject(s)
Publishing , Humans
14.
J Exp Psychol Learn Mem Cogn ; 46(6): 1091-1105, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31580124

ABSTRACT

Language use is intrinsically variable, such that the words we use vary widely across speakers and communicative situations. For instance, we can call the same entity refrigerator or fridge. However, attempts to understand individual differences in how we process language have made surprisingly little progress, perhaps because most psycholinguistic instruments are better-suited to experimental comparisons than differential analyses. In particular, investigations of individual differences require instruments that have high test-retest reliability, such that they consistently distinguish between individuals across measurement sessions. Here, we established the reliability of an instrument measuring lexical entrainment, or the tendency to use a name that a partner has used before (e.g., using refrigerator after a partner used refrigerator), which is a key phenomenon for the psycholinguistics of dialogue. Online participants completed two sessions of a picture matching-and-naming task, using different pictures and different (scripted) partners in each session. Entrainment was measured as the proportion of trials on which participants followed their partner in using a low-frequency name, and we assessed reliability by comparing entrainment scores across sessions. The estimated reliability was substantial, both when sessions were separated by minutes and when sessions were a week apart. These results suggest that our instrument is well-suited for differential analyses, opening new avenues for understanding language variability. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Individuality , Psycholinguistics , Social Interaction , Verbal Behavior/physiology , Adult , Female , Humans , Male , Pattern Recognition, Visual/physiology
15.
Psychol Sci ; 30(4): 504-515, 2019 04.
Article in English | MEDLINE | ID: mdl-30747577

ABSTRACT

Conversation is the natural setting for language learning and use, and a key property of conversation is the smooth taking of turns. In adult conversations, delays between turns are minimal (typically 200 ms or less) because listeners display a striking ability to predict what their partner will say, and they formulate a response before their partner's turn ends. Here, we tested how this ability to coordinate comprehension and production develops in preschool children. In an interactive paradigm, 106 children (ages 3-5 years) and 48 adults responded to questions that varied in predictability but were controlled for linguistic complexity. Using a novel distributional approach to data analysis, we found that when children can predict a question's ending, they leave shorter gaps before responding, suggesting that they can optimize the timing of their conversational turns like adults do. In line with a recent ethological theory of turn taking, this early competency helps explain how conversational contexts support language development.


Subject(s)
Child Language , Communication , Interpersonal Relations , Adult , Auditory Perception , Child, Preschool , Comprehension , Female , Humans , Male
16.
J Exp Psychol Gen ; 148(5): 926-942, 2019 May.
Article in English | MEDLINE | ID: mdl-30010371

ABSTRACT

It is well-known that children rapidly learn words, following a range of heuristics. What is less well appreciated is that-because most words are polysemous and have multiple meanings (e.g., "glass" can label a material and drinking vessel)-children will often be learning a new meaning for a known word, rather than an entirely new word. Across 4 experiments we show that children flexibly adapt a well-known heuristic-the shape bias-when learning polysemous words. Consistent with previous studies, we find that children and adults preferentially extend a new object label to other objects of the same shape. But we also find that when a new word for an object ("a gup") has previously been used to label the material composing that object ("some gup"), children and adults override the shape bias, and are more likely to extend the object label by material (Experiments 1 and 3). Further, we find that, just as an older meaning of a polysemous word constrains interpretations of a new word meaning, encountering a new word meaning leads learners to update their interpretations of an older meaning (Experiment 2). Finally, we find that these effects only arise when learners can perceive that a word's meanings are related, not when they are arbitrarily paired (Experiment 4). Together, these findings show that children can exploit cues from polysemy to infer how new word meanings should be extended, suggesting that polysemy may facilitate word learning and invite children to construe categories in new ways. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Form Perception/physiology , Learning/physiology , Vocabulary , Adolescent , Adult , Age Factors , Child, Preschool , Cues , Female , Humans , Male , Middle Aged , Young Adult
17.
Dev Sci ; 22(1): e12704, 2019 01.
Article in English | MEDLINE | ID: mdl-30014590

ABSTRACT

Everyone agrees that infants possess general mechanisms for learning about the world, but the existence and operation of more specialized mechanisms is controversial. One mechanism-rule learning-has been proposed as potentially specific to speech, based on findings that 7-month-olds can learn abstract repetition rules from spoken syllables (e.g. ABB patterns: wo-fe-fe, ga-tu-tu…) but not from closely matched stimuli, such as tones. Subsequent work has shown that learning of abstract patterns is not simply specific to speech. However, we still lack a parsimonious explanation to tie together the diverse, messy, and occasionally contradictory findings in that literature. We took two routes to creating a new profile of rule learning: meta-analysis of 20 prior reports on infants' learning of abstract repetition rules (including 1,318 infants in 63 experiments total), and an experiment on learning of such rules from a natural, non-speech communicative signal. These complementary approaches revealed that infants were most likely to learn abstract patterns from meaningful stimuli. We argue that the ability to detect and generalize simple patterns supports learning across domains in infancy but chiefly when the signal is meaningfully relevant to infants' experience with sounds, objects, language, and people.


Subject(s)
Child Development , Learning , Communication , Female , Humans , Infant , Language , Male , Sound , Speech
18.
Psychol Med ; 49(8): 1335-1345, 2019 06.
Article in English | MEDLINE | ID: mdl-30131083

ABSTRACT

BACKGROUND: People with schizophrenia process language in unusual ways, but the causes of these abnormalities are unclear. In particular, it has proven difficult to empirically disentangle explanations based on impairments in the top-down processing of higher level information from those based on the bottom-up processing of lower level information. METHODS: To distinguish these accounts, we used visual-world eye tracking, a paradigm that measures spoken language processing during real-world interactions. Participants listened to and then acted out syntactically ambiguous spoken instructions (e.g. 'tickle the frog with the feather', which could either specify how to tickle a frog, or which frog to tickle). We contrasted how 24 people with schizophrenia and 24 demographically matched controls used two types of lower level information (prosody and lexical representations) and two types of higher level information (pragmatic and discourse-level representations) to resolve the ambiguous meanings of these instructions. Eye tracking allowed us to assess how participants arrived at their interpretation in real time, while recordings of participants' actions measured how they ultimately interpreted the instructions. RESULTS: We found a striking dissociation in participants' eye movements: the two groups were similarly adept at using lower level information to immediately constrain their interpretations of the instructions, but only controls showed evidence of fast top-down use of higher level information. People with schizophrenia, nonetheless, did eventually reach the same interpretations as controls. CONCLUSIONS: These data suggest that language abnormalities in schizophrenia partially result from a failure to use higher level information in a top-down fashion, to constrain the interpretation of language as it unfolds in real time.


Subject(s)
Comprehension , Eye Movements , Language , Schizophrenia/physiopathology , Speech Perception , Adult , Female , Humans , Logistic Models , Male , Middle Aged , Schizophrenic Psychology , Visual Perception
19.
J Exp Child Psychol ; 173: 351-370, 2018 09.
Article in English | MEDLINE | ID: mdl-29793772

ABSTRACT

Language processing in adults is facilitated by an expert ability to generate detailed predictions about upcoming words. This may seem like an acquired skill, but some models of language acquisition assume that the ability to predict is a prerequisite for learning. This raises a question: Do children learn to predict, or do they predict to learn? We tested whether children, like adults, can generate expectations about not just the meanings of upcoming words but also their sounds, which would be critical for using prediction to learn about language. In two looking-while-listening experiments, we show that 2-year-olds can generate expectations about meaning based on a determiner (Can you see one…ball/two…ice creams?) but that even children as old as 5 years do not show an adult-like ability to predict the phonology of upcoming words based on a determiner (Can you see a…ball/an…ice cream?). Our results, therefore, suggest that the ability to generate detailed predictions is a late-acquired skill. We argue that prediction might not be the key mechanism driving children's learning, but that the ability to generate accurate semantic predictions may nevertheless have facilitative effects on language development.


Subject(s)
Anticipation, Psychological , Comprehension , Language Development , Learning , Child, Preschool , Female , Humans , Language , Male
20.
J Exp Psychol Gen ; 147(2): 190-208, 2018 02.
Article in English | MEDLINE | ID: mdl-29369681

ABSTRACT

Is consciousness required for high level cognitive processes, or can the unconscious mind perform tasks that are as complex and difficult as, for example, understanding a sentence? Recent work has argued that, yes, the unconscious mind can: Sklar et al. (2012) found that sentences, masked from consciousness using the technique of continuous flash suppression (CFS), broke into awareness more rapidly when their meanings were more unusual or more emotionally negative, even though processing the sentences' meaning required unconsciously combining each word's meaning. This has motivated the important claim that consciousness plays little-to-no functional role in high-level cognitive operations. Here, we aimed to replicate and extend these findings, but instead, across 10 high-powered studies, we found no evidence that the meaning of a phrase or word could be understood without awareness. We did, however, consistently find evidence that low-level perceptual features, such as sentence length and familiarity of alphabet, could be processed unconsciously. Our null findings for sentence processing are corroborated by a meta-analysis that aggregates our studies with the prior literature. We offer a potential explanation for prior positive results through a set of computational simulations, which show how the distributional characteristics of this type of CFS data, in particular its skew and heavy tail, can cause an elevated level of false positive results when common data exclusion criteria are applied. Our findings thus have practical implications for analyzing such data. More importantly, they suggest that consciousness may well be required for high-level cognitive tasks such as understanding language. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
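The abstract's claim about skewed, heavy-tailed breakthrough-time data and exclusion criteria can be sketched in a small Monte Carlo simulation. This is a hypothetical illustration, not the authors' actual simulation code: the lognormal distribution, sample sizes, and mean + k*SD trimming rule are all assumptions chosen only to show the general logic of estimating a false-positive rate under the null with and without per-condition outlier exclusion.

```python
import numpy as np

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    return (x.mean() - y.mean()) / np.sqrt(
        x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y)
    )

def false_positive_rates(n_sims=2000, n_trials=40, trim_sd=2.0, seed=0):
    """Draw two *identical* right-skewed 'RT' conditions (so the null is
    true) and count how often |t| > 1.96 flags a spurious difference,
    before and after a common per-condition exclusion rule that drops
    trials above mean + trim_sd * SD."""
    rng = np.random.default_rng(seed)
    hits_raw = hits_trimmed = 0
    for _ in range(n_sims):
        # lognormal draws: skewed and heavy-tailed, like bCFS times
        a = rng.lognormal(mean=0.0, sigma=0.8, size=n_trials)
        b = rng.lognormal(mean=0.0, sigma=0.8, size=n_trials)
        hits_raw += abs(welch_t(a, b)) > 1.96
        # exclusion computed separately within each condition
        a_t = a[a < a.mean() + trim_sd * a.std(ddof=1)]
        b_t = b[b < b.mean() + trim_sd * b.std(ddof=1)]
        hits_trimmed += abs(welch_t(a_t, b_t)) > 1.96
    return hits_raw / n_sims, hits_trimmed / n_sims
```

Comparing the two returned rates against the nominal alpha of .05 shows how sensitive the error rate is to the trimming rule on skewed data; how large the inflation is (if any) depends on the distribution and exclusion criteria, which is the point the paper's simulations make with its real data characteristics.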


Subject(s)
Awareness/physiology , Cognition/physiology , Comprehension/physiology , Consciousness/physiology , Language , Humans