Results 1 - 7 of 7
1.
Cognition ; 234: 105368, 2023 05.
Article in English | MEDLINE | ID: mdl-36641868

ABSTRACT

Near-scale environments, like work desks, restaurant place settings or lab benches, are the interface of our hand-based interactions with the world. How are our conceptual representations of these environments organized? What properties distinguish among reachspaces, and why? We obtained 1.25 million similarity judgments on 990 reachspace images, and generated a 30-dimensional embedding which accurately predicts these judgments. Examination of the embedding dimensions revealed key properties underlying these judgments, such as reachspace layout, affordance, and visual appearance. Clustering performed over the embedding revealed four distinct interpretable classes of reachspaces, distinguishing among spaces related to food, electronics, analog activities, and storage or display. Finally, we found that reachspace similarity ratings were better predicted by the function of the spaces than their locations, suggesting that reachspaces are largely conceptualized in terms of the actions they support. Altogether, these results reveal the behaviorally-relevant principles that structure our internal representations of reach-relevant environments.


Subject(s)
Brain Mapping , Pattern Recognition, Visual , Humans , Brain Mapping/methods , Judgment , Food , Hand
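
The abstract above describes generating a 30-dimensional embedding of 990 reachspace images that predicts pairwise similarity judgments. As a minimal illustrative sketch (not the paper's actual model or data), a common modeling choice is to predict the judged similarity of two images from the dot product of their embedding vectors; the array shapes below mirror the abstract's numbers, but the values and the dot-product link function are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 990 images, each assigned a non-negative
# 30-dimensional embedding whose dimensions stand in for the
# interpretable properties the abstract mentions (layout,
# affordance, visual appearance).
n_images, n_dims = 990, 30
embedding = rng.random((n_images, n_dims))

def predicted_similarity(i, j, emb):
    """Predict the judged similarity of images i and j as the dot
    product of their embedding vectors. This is one standard link
    function for similarity-judgment embeddings; the paper's exact
    formulation is not reproduced here."""
    return float(emb[i] @ emb[j])

sim = predicted_similarity(0, 1, embedding)
```

With non-negative embeddings, predicted similarities are non-negative, and clustering the rows of `embedding` would correspond to the abstract's grouping of reachspaces into interpretable classes.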
2.
J Vis ; 21(7): 14, 2021 07 06.
Article in English | MEDLINE | ID: mdl-34289491

ABSTRACT

Near-scale spaces are a key component of our visual experience: Whether for work or for leisure, we spend much of our days immersed in, and acting upon, the world within reach. Here, we present the Reachspace Database, a novel stimulus set containing over 10,000 images depicting first-person, motor-relevant views at an approximated reachable scale (hereafter "reachspaces"), which reflect the visual input that an agent would experience while performing a task with her hands. These images are divided into over 350 categories, based on a taxonomy we developed, which captures information relating to the identity of each reachspace, including the broader setting and room it is found in, the locus of interaction (e.g., kitchen counter, desk), and the specific action it affords. Summary analyses of the taxonomy labels in the database suggest a tight connection between activities and the spaces that support them: While a small number of rooms and interaction loci afford many diverse actions (e.g., workshops, tables), most reachspaces were relatively specialized, typically affording only one main activity (e.g., gas station pump, airplane cockpit, kitchen cutting board). Overall, this Reachspace Database represents a large sampling of reachable environments and provides a new resource to support behavioral and neural research into the visual representation of reach-relevant environments. The database is available for download on the Open Science Framework (osf.io/bfyxk/).


Subject(s)
Databases, Factual , Female , Humans
3.
Proc Natl Acad Sci U S A ; 117(47): 29354-29362, 2020 11 24.
Article in English | MEDLINE | ID: mdl-33229533

ABSTRACT

Space-related processing recruits a network of brain regions separate from those recruited in object processing. This dissociation has largely been explored by contrasting views of navigable-scale spaces to views of close-up, isolated objects. However, in naturalistic visual experience, we encounter spaces intermediate to these extremes, like the tops of desks and kitchen counters, which are not navigable but typically contain multiple objects. How are such reachable-scale views represented in the brain? In three human functional neuroimaging experiments, we find evidence for a large-scale dissociation of reachable-scale views from both navigable scene views and close-up object views. Three brain regions were identified that showed a systematic response preference to reachable views, located in the posterior collateral sulcus, the inferior parietal sulcus, and superior parietal lobule. Subsequent analyses suggest that these three regions may be especially sensitive to the presence of multiple objects. Further, in all classic scene and object regions, reachable-scale views dissociated from both objects and scenes with an intermediate response magnitude. Taken together, these results establish that reachable-scale environments have a distinct representational signature from both scene and object views in visual cortex.


Subject(s)
Pattern Recognition, Visual/physiology , Space Perception/physiology , Visual Cortex/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation/methods , Visual Cortex/diagnostic imaging
4.
Atten Percept Psychophys ; 82(1): 31-43, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31429044

ABSTRACT

Searching for a "Q" among "O"s is easier than the opposite search (Treisman & Gormican in Psychological Review, 95, 15-48, 1988). In many cases, such "search asymmetries" occur because it is easier to search when a target is defined by the presence of a feature (i.e., the line terminator defining the tail of the "Q"), rather than by its absence. Treisman proposed that features that produce a search asymmetry are "basic" features in visual search (Treisman & Gormican in Psychological Review, 95, 15-48, 1988; Treisman & Souther in Journal of Experimental Psychology: General, 114, 285-310, 1985). Other stimulus attributes, such as color, orientation, and motion, have been found to produce search asymmetries (Dick, Ullman, & Sagi in Science, 237, 400-402, 1987; Treisman & Gormican in Psychological Review, 95, 15-48, 1988; Treisman & Souther in Journal of Experimental Psychology: General, 114, 285-310, 1985). Other stimulus properties, such as facial expression, produce asymmetries because one type of item (e.g., neutral faces) demands less attention in search than another (e.g., angry faces). In the present series of experiments, search for a rolling target among spinning distractors proved to be more efficient than searching for a spinning target among rolling distractors. The effect does not appear to be due to differences in physical plausibility, direction of motion, or texture movement. Our results suggest that the spinning stimuli demand less attention, making search through spinning distractors for a rolling target easier than the opposite search.


Subject(s)
Attention , Pattern Recognition, Visual , Psychological Theory , Color , Humans , Movement , Orientation , Rotation
5.
J Exp Psychol Hum Percept Perform ; 45(6): 715-728, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31120300

ABSTRACT

In everyday experience, we interact with objects and we navigate through space. Extensive research has revealed that these visual behaviors are mediated by separable object-based and scene-based processing mechanisms in the mind and brain. However, we also frequently view near-scale spaces, for example, when sitting at the breakfast table or preparing a meal. How should such spaces (operationalized here as "reachspaces"), which contain multiple objects but not enough space to navigate through, be considered in this dichotomy? Here, we used visual search to explore the possibility that reachspace views are perceptually distinctive from full-scale scene views as well as object views. In the first experiment, we found evidence for this dissociation. In the second experiment, we found that the perceptual differences between reachspaces and scenes were substantially larger than those between scene categories (e.g., kitchens vs. offices). Finally, we provide computational support for this perceptual dissociation: Deep neural network models also naturally separate reachspaces from both scenes and objects, suggesting that mid- to high-level features may underlie this dissociation. Taken together, these results demonstrate that our perceptual systems are sensitive to systematic visual feature differences that distinguish objects, reachspaces, and full-scale scene views. Broadly, these results raise the possibility that our visual system may use different perceptual primitives to support the perception of reachable and navigable views of the world. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Attention/physiology , Neural Networks, Computer , Space Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Deep Learning , Humans , Pattern Recognition, Automated , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Young Adult
6.
Acta Psychol (Amst) ; 169: 100-108, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27270227

ABSTRACT

Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500 ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization.


Subject(s)
Attention , Discrimination, Psychological , Mental Recall , Pattern Recognition, Visual , Semantics , Association Learning , Color Perception , Cues , Depth Perception , Humans , Intention , Orientation, Spatial , Photic Stimulation , Reaction Time , Size Perception
7.
J Exp Psychol Hum Percept Perform ; 41(6): 1576-87, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26191615

ABSTRACT

In "hybrid" search tasks, observers hold multiple possible targets in memory while searching for those targets among distractor items in visual displays. Wolfe (2012) found that, if the target set is held constant over a block of trials, reaction times (RTs) in such tasks were a linear function of the number of items in the visual display and a linear function of the log of the number of items held in memory. However, in such tasks, the targets can become far more familiar than the distractors. Does this "familiarity" (operationalized here as the frequency and recency with which an item has appeared) influence performance in hybrid tasks? In Experiment 1, we compared searches where distractors appeared with the same frequency as the targets to searches where all distractors were novel. Distractor familiarity did not have any reliable effect on search. In Experiment 2, most distractors were novel but some critical distractors were as common as the targets while others were 4× more common. Familiar distractors did not produce false alarm errors, though they did slightly increase RTs. In Experiment 3, observers successfully searched for the new, unfamiliar item among distractors that, in many cases, had been seen only once before. We conclude that when the memory set is held constant for many trials, item familiarity alone does not cause observers to mistakenly confuse targets with distractors.


Subject(s)
Attention/physiology , Mental Recall/physiology , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Humans , Photic Stimulation , Recognition, Psychology/physiology
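
The Wolfe (2012) result summarized in this abstract — RT linear in the number of display items and linear in the log of the number of memorized targets — can be sketched as a simple additive model. The coefficient values below are illustrative placeholders, not fitted parameters from the paper:

```python
import math

def predicted_rt(visual_set_size, memory_set_size,
                 base=400.0, visual_slope=40.0, memory_slope=60.0):
    """Hybrid-search RT model described in the abstract: RT (in ms)
    grows linearly with the number of items in the visual display
    and linearly with log2 of the number of targets held in memory.
    base, visual_slope, and memory_slope are hypothetical values
    chosen for illustration."""
    return (base
            + visual_slope * visual_set_size
            + memory_slope * math.log2(memory_set_size))

rt_small_memory = predicted_rt(8, 2)   # 8 display items, 2 targets in memory
rt_large_memory = predicted_rt(8, 16)  # same display, 16 targets in memory
```

Under this model, doubling the memory set adds a constant increment to RT (here, `memory_slope` ms), whereas adding display items adds a per-item cost, which is the signature pattern the abstract attributes to hybrid search.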