Results 1 - 20 of 62
1.
J Exp Child Psychol ; 238: 105803, 2024 02.
Article in English | MEDLINE | ID: mdl-37924661

ABSTRACT

Infants reason about support configurations (e.g., teddy bear on table) and young children talk about a variety of support relations, including support-from-below (e.g., apple on table) and many other types (e.g., Band-Aid on leg, picture on wall). Given this wide variation in support types, we asked whether early differentiation of the semantic space of support may play a key role in helping children to learn spatial language in this domain. Previous research has shown such differentiation with 20-month-olds mapping the basic locative construction (BE on) to support-from-below (cube on top of box), but not to a mechanical support configuration (cube on side of box via adhesion). Older children and adults show the same differentiation, with preferential mapping of BE on to support-from-below and lexical verbs to mechanical support. We further explored the development of this differentiation by testing how children aged 2 to 4.5 years map lexical verbs to a wide variety of support configurations. In Experiment 1, using an intermodal preferential pointing paradigm, we found that 2- to 3.5-year-olds map a lexical verb phrase ("sticks to") to mechanical support via adhesion. In Experiments 2 and 3, we expanded the range of mechanical support relations and used production and forced-choice tasks to ask whether 2- to 4.5-year-olds also encode mechanical relations using lexical verbs. We found that they do. These findings suggest continuity between infancy and childhood in the way that children use spatial language to differentially map to support-from-below versus mechanical support and raise new questions about how mechanical support language develops.


Subject(s)
Language; Linguistics; Adult; Humans; Child, Preschool; Child; Adolescent; Semantics; Language Development; Learning
2.
J Exp Psychol Gen ; 152(2): 509-527, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36107694

ABSTRACT

Symmetry is ubiquitous in nature, in logic and mathematics, and in perception, language, and thought. Although humans are exquisitely sensitive to visual symmetry (e.g., of a butterfly), symmetry in natural language goes beyond visuospatial properties: many words point to abstract concepts with symmetrical content (e.g., equal, marry). For example, if Mark marries Bill, then Bill marries Mark. In both cases (vision and language), symmetry may be formally characterized as invariance under transformation. Is this a coincidence, or is there some deeper psychological resemblance? Here we asked whether representations of symmetry correspond across language and vision. To do so, we developed a novel cross-modal matching paradigm. On each trial, participants observed a visual stimulus (either symmetrical or nonsymmetrical) and had to choose between a symmetrical and nonsymmetrical English predicate unrelated to the stimulus (e.g., "negotiate" vs. "propose"). In a first study with visual events (symmetrical collision or asymmetrical launch), participants reliably chose the predicate matching the event's symmetry. A second study showed that this "language-vision correspondence" generalized to objects and was weakened when the stimuli's binary nature was made less apparent (i.e., for one object, rather than two inward-facing objects). A final study showed the same effect when nonsigners guessed English translations of signs from American Sign Language, which expresses many symmetrical concepts spatially. Taken together, our findings support the existence of an abstract representation of symmetry which humans access via both perceptual and linguistic means. More broadly, this work sheds light on the rich, structured nature of the language-cognition interface. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Language; Sign Language; Humans; Linguistics; Cognition; Concept Formation
3.
Proc Natl Acad Sci U S A ; 119(42): e2207293119, 2022 10 18.
Article in English | MEDLINE | ID: mdl-36215488

ABSTRACT

The mature human brain is lateralized for language, with the left hemisphere (LH) primarily responsible for sentence processing and the right hemisphere (RH) primarily responsible for processing suprasegmental aspects of language such as vocal emotion. However, it has long been hypothesized that in early life there is plasticity for language, allowing young children to acquire language in other cortical regions when LH areas are damaged. If true, what are the constraints on functional reorganization? Which areas of the brain can acquire language, and what happens to the functions these regions ordinarily perform? We address these questions by examining long-term outcomes in adolescents and young adults who, as infants, had a perinatal arterial ischemic stroke to the LH areas ordinarily subserving sentence processing. We compared them with their healthy age-matched siblings. All participants were tested on a battery of behavioral and functional imaging tasks. While stroke participants were impaired in some nonlinguistic cognitive abilities, their processing of sentences and of vocal emotion was normal and equal to that of their healthy siblings. In almost all of these participants, both abilities had developed in the healthy RH. Our results provide insights into the remarkable ability of the young brain to reorganize language. Reorganization is highly constrained, with sentence processing almost always in the RH frontotemporal regions homotopic to its location in the healthy brain. This activation is somewhat segregated from RH emotion processing, suggesting that the two functions perform best when each has its own neural territory.


Subject(s)
Language; Stroke; Adolescent; Brain/physiology; Brain Mapping/methods; Child; Child, Preschool; Functional Laterality/physiology; Humans; Magnetic Resonance Imaging/methods; Neuronal Plasticity/physiology; Young Adult
5.
Cogn Sci ; 46(1): e13081, 2022 01.
Article in English | MEDLINE | ID: mdl-35066920

ABSTRACT

Spatial construction, the activity of creating novel spatial arrangements or copying existing ones, is a hallmark of human spatial cognition. Spatial construction abilities predict math and other academic outcomes and are regularly used in IQ testing, but we know little about the cognitive processes that underlie them. In part, this lack of understanding is due to both the complex nature of construction tasks and the tendency to limit measurement to the overall accuracy of the end goal. Using an automated recording and coding system, we examined adults' performance on a block copying task in detail, specifying their step-by-step actions across the full construction of the build-path. The results revealed the consistent use of a structured plan that unfolded in an organized way, layer by layer (bottom to top). We also observed that complete layers served as convergence points, where the most agreement among participants occurred, whereas the specific steps taken to achieve each of those layers diverged, or varied, both across and even within individuals. This pattern of convergence and divergence suggests that the layers themselves served as the common subgoals across both inter- and intraindividual builds of the same model, reflecting cognitive "chunking." This structured use of layers as subgoals was functionally related to better performance among builders. Our findings offer a foundation for further exploration that may yield insights into the development and training of block construction as well as other complex cognitive-motor skills. In addition, this work offers proof-of-concept for systematic investigation into a wide range of complex action-based cognitive tasks.


Subject(s)
Cognition; Memory; Adult; Humans; Intelligence Tests
6.
Dev Sci ; 25(4): e13217, 2022 07.
Article in English | MEDLINE | ID: mdl-34913543

ABSTRACT

Studies of hemispheric specialization have traditionally cast the left hemisphere as specialized for language and the right hemisphere for spatial function. Much of the supporting evidence for this separation of function comes from studies of healthy adults and those who have sustained lesions to the right or left hemisphere. However, we know little about the developmental origins of lateralization. Recent evidence suggests that the young brain represents language bilaterally, with 4-6-year-olds activating the left-hemisphere regions known to support language in adults as well as homotopic regions in the right hemisphere. This bilateral pattern changes over development, converging on left-hemispheric activation in late childhood. In the present study, we ask whether this same developmental trajectory is observed in a spatial task that is strongly right-lateralized in adults-the line bisection (or "Landmark") task. We examined fMRI activation among children ages 5-11 years as they were asked to judge which end of a bisected vertical line was longer. We found that young children showed bilateral activation, with activation in the same areas of the right hemisphere as has been shown among adults, as well as in the left hemisphere homotopic regions. By age 10, activation was right-lateralized. This strongly resembles the developmental trajectory for language, moving from bilateral to lateralized activation. We discuss potential underlying mechanisms and suggest that understanding the development of lateralization for a range of cognitive functions can play a crucial role in understanding general principles of how and why the brain comes to lateralize certain functions.


Subject(s)
Brain Mapping; Functional Laterality; Adult; Brain/physiology; Child; Child, Preschool; Functional Laterality/physiology; Humans; Language; Magnetic Resonance Imaging
7.
Infant Behav Dev ; 65: 101616, 2021 11.
Article in English | MEDLINE | ID: mdl-34418794

ABSTRACT

Spatial terms that encode support (e.g., "on", in English) are among the first to be understood by children across languages (e.g., Bloom, 1973; Johnston & Slobin, 1979). Such terms apply to a wide variety of support configurations, including Support-From-Below (SFB; cup on table) and Mechanical Support, such as stamps on envelopes, coats on hooks, etc. Research has yet to delineate infants' semantic space for the term "on" when considering its full range of usage. Do infants initially map "on" to a very broad, highly abstract category - one including cups on tables, stamps on envelopes, etc.? Or do infants begin with a much more restricted interpretation - mapping "on" to certain configurations over others? Much infant cognition research suggests that SFB is an event category that infants learn about early - by five months of age (Baillargeon & DeJong, 2017) - raising the possibility that they may also begin by interpreting the word "on" as referring to configurations like cups on tables, rather than stamps on envelopes. Further, studies examining language production suggest that children and adults map the basic locative expression (BE on, in English) to SFB over Mechanical Support (Landau et al., 2016). We tested the hypothesis that this 'privileging' of SFB in early infant cognition and in child and adult language also characterizes infants' language comprehension. Using the intermodal preferential looking paradigm in combination with infant eye-tracking, we presented 20-month-olds with two support configurations: SFB and Mechanical Support-Via-Adhesion (henceforth, SVA). Infants preferentially mapped "is on" to SFB (rather than SVA), suggesting that infants differentiate between these two quite different kinds of support configurations when mapping spatial language and, moreover, that SFB is privileged in early understanding of the English spatial term "on".


Subject(s)
Language Development; Semantics; Adult; Child; Cognition; Humans; Infant; Language; Learning
8.
Cognition ; 212: 104683, 2021 07.
Article in English | MEDLINE | ID: mdl-33774508

ABSTRACT

Classic theories emphasize the primacy of first-person sensory experience for learning meanings of words: to know what "see" means, one must be able to use the eyes to perceive. Contrary to this idea, blind adults and children acquire normative meanings of "visual" verbs, e.g., interpreting "see" and "look" to mean with the eyes for sighted agents. Here we ask the flip side of this question: how easily do sighted children acquire the meanings of "visual" verbs as they apply to blind agents? We asked sighted 4-, 6- and 9-year-olds to tell us what part of the body a blind or a sighted agent would use to "see", "look" (and other visual verbs, n = 5), vs. "listen", "smell" (and other non-visual verbs, n = 10). Even the youngest children consistently reported the correct body parts for sighted agents (eyes for "look", ears for "listen"). By contrast, there was striking developmental change in applying "visual" verbs to blind agents. Adults, 9- and 6-year-olds, either extended visual verbs to other modalities for blind agents (e.g., "seeing" with hands or a cane) or stated that the blind agent "cannot" "look" or "see". By contrast, 4-year-olds said that a blind agent would use her eyes to "see", "look", etc., even while explicitly acknowledging that the agent's "eyes don't work". Young children also endorsed "she is looking at the dax" descriptions of photographs where the blind agent had the object in her "line of sight", irrespective of whether she had physical contact with the object. This pattern held for leg-motion verbs ("walk", "run") applied to wheelchair users. The ability to modify verb modality for agents with disabilities undergoes developmental change between 4 and 6. Despite this, we find that 4-year-olds are sensitive to the semantic distinction between active ("look") and stative ("see"), even when applied to blind agents. These results challenge the primacy of first-person sensory experience and highlight the importance of linguistic input and social interaction in the acquisition of verb meaning.


Subject(s)
Disabled Persons; Visually Impaired Persons; Adult; Child; Child, Preschool; Female; Humans; Learning; Linguistics; Semantics
9.
Dev Sci ; 24(4): e13067, 2021 07.
Article in English | MEDLINE | ID: mdl-33226713

ABSTRACT

The neural representation of visual-spatial functions has traditionally been ascribed to the right hemisphere, but little is known about these representations in children, including whether and how lateralization of function changes over the course of development. Some studies suggest bilateral activation early in life that develops toward right-lateralization in adulthood, while others find evidence of right-hemispheric dominance in both children and adults. We used a complex visual-spatial construction task to examine the nature of lateralization and its developmental time course in children ages 5-11 years. Participants were shown two puzzle pieces and were asked whether the pieces could fit together to make a square; responses required either mental translation of the pieces (Translation condition) or both mental translation and rotation of the pieces (Rotation condition). Both conditions were compared to a matched Luminance control condition that was similar in terms of visual content and difficulty but required no spatial analysis. Group and single-subject analyses revealed that the Rotation and Translation conditions elicited strongly bilateral activation in the same parietal and occipital locations as have been previously found for adults. These findings show that visual-spatial construction consistently elicits robust bilateral activation from age 5 through adulthood. This challenges the idea that spatial functions are all right-lateralized, either during early development or in adulthood. More generally, these findings provide insights into the developmental course of lateralization across different spatial skills and how this may be influenced by the computational requirements of the particular functions involved.


Subject(s)
Brain Mapping; Functional Laterality; Adult; Child; Child, Preschool; Humans; Magnetic Resonance Imaging; Space Perception
10.
Prog Neurobiol ; 191: 101819, 2020 08.
Article in English | MEDLINE | ID: mdl-32380224

ABSTRACT

Repeated stimuli elicit attenuated responses in visual cortex relative to novel stimuli. This adaptation can be considered as a form of rapid learning and a signature of perceptual memory. Adaptation occurs not only when a stimulus is repeated immediately, but also when there is a lag in terms of time and other intervening stimuli before the repetition. But how does the visual system keep track of which stimuli are repeated, especially after long delays and many intervening stimuli? We hypothesized that the hippocampus and medial temporal lobe (MTL) support long-lag adaptation, given that this memory system can learn from single experiences, maintain information over delays, and send feedback to visual cortex. We tested this hypothesis with fMRI in an amnesic patient, LSJ, who has encephalitic damage to the MTL resulting in extensive bilateral lesions including complete hippocampal loss. We measured adaptation at varying time lags between repetitions in functionally localized visual areas that were intact in LSJ. We observed that these areas track information over a few minutes even when the hippocampus and extended parts of the MTL are unavailable. LSJ and controls were identical when attention was directed away from the repeating stimuli: adaptation occurred for lags up to three minutes, but not six minutes. However, when attention was directed toward stimuli, controls now showed an adaptation effect at six minutes but LSJ did not. These findings suggest that visual cortex can support one-shot perceptual memories lasting for several minutes but that the hippocampus and surrounding MTL structures are necessary for adaptation in visual cortex after longer delays when stimuli are task-relevant.


Subject(s)
Adaptation, Physiological/physiology; Amnesia/physiopathology; Feedback, Physiological/physiology; Hippocampus/physiology; Pattern Recognition, Visual/physiology; Temporal Lobe/physiology; Visual Cortex/physiology; Aged; Attention; Female; Hippocampus/pathology; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Temporal Lobe/pathology; Time Factors
11.
Top Cogn Sci ; 12(1): 7-21, 2020 01.
Article in English | MEDLINE | ID: mdl-31904915

ABSTRACT

This is the Editor's introduction to the Special Issue of TopiCS in honor of Lila R. Gleitman's receipt of the 2017 David E. Rumelhart Prize. The introduction gives an overview of Gleitman's intellectual history and scientific contributions, and it briefly reviews each of the contributions to the issue.


Subject(s)
Awards and Prizes; Cognitive Science/history; Psycholinguistics/history; History, 20th Century; History, 21st Century; Humans
12.
Top Cogn Sci ; 12(1): 91-114, 2020 01.
Article in English | MEDLINE | ID: mdl-30478989

ABSTRACT

How do children learn the meanings of simple spatial prepositions like in and on? In this paper, I argue that children come to spatial term learning with an a priori conceptual distinction between core versus non-core concepts of containment and support, and that they learn how language maps onto this distinction by considering both the simple prepositions and the company they keep; that is, the distributions of their co-occurrences with particular verbs. Core types of containment and support are largely expressed by in/on together with the light verb BE; non-core types are expressed by lexical verbs such as insert, hang, stick, and so on, which represent the specific mechanical means by which containment or support is achieved. These latter types arguably depend on extensive learning about the particular mechanisms of containment and support, many of which are invented by humans, as well as learning the specific lexical verbs that encode these mechanisms. The core versus non-core distinction is reflected in young children's and adults' linguistic descriptions of different spatial configurations, via different distributions of expression types across different configurations. Differences between children and adults are not likely to be rooted in either conceptual or semantic differences, but rather, in the probabilistic nature of available expressions, along with early limits on children's vocabulary of lexical verbs that express complex mechanical relationships between objects.


Subject(s)
Child Development/physiology; Concept Formation/physiology; Learning/physiology; Psycholinguistics; Space Perception/physiology; Child; Child, Preschool; Humans; Language Development; Semantics
13.
Cogn Psychol ; 116: 101249, 2020 02.
Article in English | MEDLINE | ID: mdl-31743869

ABSTRACT

Previous studies have shown that the basic properties of the visual representation of space are reflected in spatial language. This close relationship between linguistic and non-linguistic spatial systems has been observed both in typical development and in some developmental disorders. Here we provide novel evidence for structural parallels along with a degree of autonomy between these two systems among individuals with Autism Spectrum Disorder, a developmental disorder with uneven cognitive and linguistic profiles. In four experiments, we investigated language and memory for locations organized around an axis-based reference system. Crucially, we also recorded participants' eye movements during the tasks in order to provide new insights into the online processes underlying spatial thinking. Twenty-three intellectually high-functioning individuals with autism (HFA) and 23 typically developing controls (TD), all native speakers of Norwegian matched on chronological age and cognitive abilities, participated in the studies. The results revealed a well-preserved axial reference system in HFA and weakness in the representation of direction within the axis, which was especially evident in spatial language. Performance on the non-linguistic tasks did not differ between HFA and control participants, and we observed clear structural parallels between spatial language and spatial representation in both groups. However, there were some subtle differences in the use of spatial language in HFA compared to TD, suggesting that despite the structural parallels, some aspects of spatial language in HFA deviated from the typical pattern. These findings provide novel insights into the prominence of the axial reference systems in non-linguistic spatial representations and spatial language, as well as the possibility that the two systems are, to some degree, autonomous.


Subject(s)
Autism Spectrum Disorder/physiopathology; Eye Movements; Language Development; Spatial Memory; Adolescent; Adult; Child; Comprehension; Female; Humans; Male; Young Adult
14.
Cortex ; 121: 264-276, 2019 12.
Article in English | MEDLINE | ID: mdl-31655392

ABSTRACT

Boundaries are crucial to our representation of the geometric shape of scenes, which can be used to reorient in space. Behavioral research has shown that children and adults share exquisite sensitivity to a defining feature of a boundary: its vertical extent. Imaging studies have shown that this boundary property is represented in the parahippocampal place area (PPA) among typically developed (TD) adults. Here, we show that sensitivity to the vertical extent of scene boundaries is impaired at both the behavioral and neural level in people with Williams syndrome (WS), a genetic deficit that results in severely impaired spatial functions. Behavioral reorientation was tested in three boundary conditions: a flat Mat, a 5 cm high Curb, and full Walls. Adults with WS could reorient in a rectangular space defined by Wall boundaries, but not Curb or Mat boundaries. In contrast, TD age-matched controls could reorient by all three boundary types and TD 4-year-olds could reorient by either Wall or Curb boundaries. Using fMRI, we find that the WS behavioral deficit is echoed in their neural representation of boundaries. While TD age-matched controls showed distinct neural responses to scenes depicting Mat, Curb, and Wall boundaries in the PPA, people with WS showed only a distinction between the Wall and Mat or Curb, but no distinction between the Mat and Curb. Taken together, these results reveal a close coupling between the representation of boundaries as they are used in behavioral reorientation and neural encoding, suggesting that damage to this key element of spatial representation may have a genetic foundation.


Subject(s)
Parahippocampal Gyrus/physiology; Pattern Recognition, Visual/physiology; Space Perception/physiology; Williams Syndrome/physiopathology; Adolescent; Adult; Brain Mapping/methods; Female; Humans; Magnetic Resonance Imaging/methods; Male; Young Adult
15.
Neuropsychologia ; 127: 57-65, 2019 04.
Article in English | MEDLINE | ID: mdl-30802463

ABSTRACT

The "Landmark Task" (LT) is a line bisection judgment task that predominantly activates right parietal cortex. The typical version requires observers to judge bisections for horizontal lines that cross their egocentric midline and therefore may depend on spatial attention as well as spatial representation of the line segments. To ask whether the LT is indeed right-lateralized regardless of spatial attention (for which the right hemisphere is known to be important), we examined LT activation in 26 neurologically healthy young adults using vertical (instead of horizontal) stimuli, as compared with a luminance control task that made similar demands on spatial attention. We also varied task difficulty, which is known to affect lateralization in both spatial and language tasks. Despite these changes to the task, we observed right-lateralized parietal activations similar to those reported in other LT studies, both at group level and in individual lateralization indices. We conclude that LT activation is robustly right-lateralized, perhaps uniquely so among commonly-studied spatial tasks. We speculate that the unique properties of the LT reside in its requirement to judge relative magnitudes of the two line segments, rather than in the more general aspects of spatial attention or visual-spatial representation.


Subject(s)
Dominance, Cerebral/physiology; Functional Laterality/physiology; Neuropsychological Tests; Visual Perception/physiology; Adult; Brain Mapping; Female; Humans; Judgment; Magnetic Resonance Imaging; Male; Parietal Lobe/physiology; Photic Stimulation; Psychomotor Performance/physiology; Reaction Time/physiology; Space Perception; Young Adult
16.
Neuropsychologia ; 106: 194-206, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28987904

ABSTRACT

In this paper, we examine brain lateralization patterns for a complex visual-spatial task commonly used to assess general spatial abilities. Although spatial abilities have classically been ascribed to the right hemisphere, evidence suggests that at least some tasks may be strongly bilateral. For example, while functional neuroimaging studies show right-lateralized activations for some spatial tasks (e.g., line bisection), bilateral activations are often reported for others, including classic spatial tasks such as mental rotation. Moreover, constructive apraxia has been reported following left- as well as right-hemisphere damage in adults, suggesting a role for the left hemisphere in spatial function. Here, we use functional neuroimaging to probe lateralization while healthy adults carry out a simplified visual-spatial construction task, in which they judge whether two geometric puzzle pieces can be combined to form a square. The task evokes strong bilateral activations, predominantly in parietal and lateral occipital cortex. Bilaterality was observed at the single-subject as well as at the group level, and regardless of whether specific items required mental rotation. We speculate that complex visual-spatial tasks may generally engage more bilateral activation of the brain than previously thought, and we discuss implications for understanding hemispheric specialization for spatial functions.


Subject(s)
Parietal Lobe/physiology; Space Perception/physiology; Visual Perception/physiology; Adult; Brain Mapping; Female; Functional Laterality; Humans; Magnetic Resonance Imaging; Male; Photic Stimulation; Spatial Processing; Young Adult
17.
Cognition ; 168: 146-153, 2017 11.
Article in English | MEDLINE | ID: mdl-28683351

ABSTRACT

Prior work suggests that our understanding of how things work ("intuitive physics") and how people work ("intuitive psychology") are distinct domains of human cognition. Here we directly test the dissociability of these two domains by investigating knowledge of intuitive physics and intuitive psychology in adults with Williams syndrome (WS) - a genetic developmental disorder characterized by severely impaired spatial cognition, but relatively spared social cognition. WS adults and mental-age matched (MA) controls completed an intuitive physics task and an intuitive psychology task. If intuitive physics is a distinct domain (from intuitive psychology), then we should observe differential impairment on the physics task for individuals with WS compared to MA controls. Indeed, adults with WS performed significantly worse on the intuitive physics than the intuitive psychology task, relative to controls. These results support the hypothesis that knowledge of the physical world can be disrupted independently from knowledge of the social world.


Subject(s)
Intuition; Physical Phenomena; Social Perception; Williams Syndrome/psychology; Child, Preschool; Comprehension; Female; Humans; Male; Neuropsychological Tests
18.
Cogn Sci ; 41 Suppl 4: 748-779, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27323000

ABSTRACT

Containment and support have traditionally been assumed to represent universal conceptual foundations for spatial terms. This assumption can be challenged, however: English in and on are applied across a surprisingly broad range of exemplars, and comparable terms in other languages show significant variation in their application. We propose that the broad domains of both containment and support have internal structure that reflects different subtypes, that this structure is reflected in basic spatial term usage across languages, and that it constrains children's spatial term learning. Using a newly developed battery, we asked how adults and 4-year-old children speaking English or Greek distribute basic spatial terms across subtypes of containment and support. We found that containment showed similar distributions of basic terms across subtypes among all groups while support showed such similarity only among adults, with striking differences between children learning English versus Greek. We conclude that the two domains differ considerably in the learning problems they present, and that learning in and on is remarkably complex. Together, our results point to the need for a more nuanced view of spatial term learning.


Subject(s)
Child Language; Language Development; Language; Spatial Learning/physiology; Adult; Child, Preschool; Female; Humans; Learning/physiology; Male
19.
Open Mind (Camb) ; 1(3): 136-147, 2017 Dec 01.
Article in English | MEDLINE | ID: mdl-30931420

ABSTRACT

Previous studies have shown that adults are able to remember more than 1,000 images with great detail. However, little is known about the development of this visual capacity, nor its presence early in life. This study tests the level of detail of young children's memory for a large number of items, adapting the method of Brady, Konkle, Alvarez, and Oliva (2008). Four- and six-year-old children were shown more than 100 images of everyday objects. They were then tested for recognition of familiar items in a binary decision task. The identity of the foil test item was manipulated in three conditions (Category, Exemplar, and State). Children demonstrated high accuracy across all conditions, remembering not only the basic-level category (Category), but also unique details (Exemplar), and information about position and arrangement of parts (State). These findings demonstrate that children spontaneously encode a high degree of visual detail. Early in life, visual memory exhibits high fidelity and extends over a large set of items.
