Results 1 - 4 of 4
1.
PLoS One ; 18(4): e0284485, 2023.
Article in English | MEDLINE | ID: mdl-37058466

ABSTRACT

We not only perceive the physical state of the environment, but also the causal structures underlying that physical state. Determining whether an object has intentionality is a key component of this process. Among all possible intentions, the one that has arguably been studied the most is chasing, often operationalized via a reasonably simple and stereotyped computer algorithm ("heat-seeking"). The current study investigated the perception of multiple types of chasing approaches, asking whether it is the intention of chasing itself that triggers the perception of chasing, whether the chasing agent and the agent being chased play equally important roles, and whether the perception of chasing requires the presence of both agents. We implemented a well-studied "wolf chasing sheep" paradigm in which participants viewed recordings of a disc (the wolf) chasing another disc (the sheep) among other distracting discs. We manipulated the type of chasing algorithm, the density of the distractors, the target agent in the task, and the presence of the agent being chased. We found that participants could successfully identify the chasing agent in all conditions in which both agents were present, albeit with different levels of performance (e.g., participants were best at detecting the chasing agent when it engaged in a direct chasing strategy and worst at detecting a human-controlled chasing agent). This work therefore extends our understanding of the types of cues that are and are not utilized by the visual system to detect the chasing intention.
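
As an illustration of the "heat-seeking" strategy named in the abstract, the sketch below shows one minimal way such a chasing algorithm is commonly implemented (the chaser moves directly toward the target's current position on every frame). The function names, speed, and heading-noise parameters are illustrative assumptions, not the authors' stimulus code.

```python
import math
import random

def heat_seeking_step(wolf, sheep, speed=2.0, heading_noise=0.0):
    """One frame of a simple 'heat-seeking' chase: the wolf heads directly
    toward the sheep's current position, optionally with a small random
    perturbation of its heading. All parameter values are illustrative."""
    heading = math.atan2(sheep[1] - wolf[1], sheep[0] - wolf[0])
    heading += random.uniform(-heading_noise, heading_noise)
    return (wolf[0] + speed * math.cos(heading),
            wolf[1] + speed * math.sin(heading))

def random_step(disc, speed=2.0):
    """Sheep and distractor discs drift in a random direction each frame."""
    heading = random.uniform(0.0, 2.0 * math.pi)
    return (disc[0] + speed * math.cos(heading),
            disc[1] + speed * math.sin(heading))

# Minimal simulation: one wolf, one sheep, and a few distractor discs.
wolf, sheep = (0.0, 0.0), (100.0, 50.0)
distractors = [(random.uniform(0, 200), random.uniform(0, 200)) for _ in range(5)]
for frame in range(100):
    wolf = heat_seeking_step(wolf, sheep, heading_noise=0.2)
    sheep = random_step(sheep)
    distractors = [random_step(d) for d in distractors]
```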


Subject(s)
Motion Perception; Wolves; Humans; Animals; Sheep; Attention; Cues; Stereotyping; Intention
2.
J Vis ; 20(8): 16, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32790849

ABSTRACT

A sizeable body of work has demonstrated that participants can show substantial increases in performance on perceptual tasks given appropriate practice. This has generated significant interest in using such perceptual learning techniques to improve performance in real-world domains where extracting perceptual information to guide decisions is at a premium. Radiological training is one clear example of such a domain. Here we examine a number of basic science questions related to the use of perceptual learning techniques in the context of a radiology-inspired task. On each trial of this task, participants were presented with a single axial slice from a CT image of the abdomen. They were then asked to indicate whether or not the image was consistent with appendicitis. We first demonstrate that, although the task differs in many ways from standard radiological practice, it nonetheless draws on expert knowledge, as trained radiologists who underwent the task showed high (near-ceiling) levels of performance. Then, in a series of four studies we show that (1) performance on this task improves significantly over a reasonably short period of training (on the scale of a few hours); (2) the learning transfers to previously unseen images and to untrained image orientations; (3) purely correct/incorrect feedback produces weak learning compared to more informative feedback in which the spatial position of the appendix is indicated in each image; and (4) little benefit was seen from purposefully structuring the learning experience by starting with easier images and then moving on to more difficult images (as compared to simply presenting all images in a random order). The implications of these findings for the use of perceptual learning techniques as part of radiological training are then discussed.
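
To illustrate the easy-to-hard ordering compared against random ordering in study (4), the sketch below shows one possible way such a trial sequence might be built. The image records, difficulty scores, and field names are hypothetical and do not reflect the authors' materials.

```python
import random

# Hypothetical image records with a precomputed difficulty score
# (e.g., derived from pilot error rates); all fields are illustrative.
images = [{"id": f"ct_{i:03d}", "difficulty": random.random()} for i in range(200)]

def easy_to_hard_order(images):
    """Structured condition: present the easiest images first."""
    return sorted(images, key=lambda img: img["difficulty"])

def random_order(images):
    """Control condition: present all images in a random order."""
    shuffled = list(images)
    random.shuffle(shuffled)
    return shuffled

curriculum_trials = easy_to_hard_order(images)
control_trials = random_order(images)
```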


Subject(s)
Appendicitis/diagnostic imaging; Clinical Competence/standards; Learning/physiology; Radiologists/standards; Tomography, X-Ray Computed; Visual Perception/physiology; Adult; Female; Humans; Male; Orientation; Transfer, Psychology
3.
PLoS One ; 15(3): e0229929, 2020.
Article in English | MEDLINE | ID: mdl-32150569

ABSTRACT

The visual system exploits multiple signals, including monocular and binocular cues, to determine the motion of objects through depth. In the laboratory, sensitivity to different three-dimensional (3D) motion cues varies across observers and is often weak for binocular cues. However, laboratory assessments may reflect factors beyond inherent perceptual sensitivity. For example, the appearance of weak binocular sensitivity may relate to extensive prior experience with two-dimensional (2D) displays in which binocular cues are not informative. Here we evaluated the impact of experience on motion-in-depth (MID) sensitivity in a virtual reality (VR) environment. We tested a large cohort of observers who reported having no prior VR experience and found that binocular cue sensitivity was substantially weaker than monocular cue sensitivity. As expected, sensitivity was greater when monocular and binocular cues were presented together than in isolation. Surprisingly, the addition of motion parallax signals appeared to cause observers to rely almost exclusively on monocular cues. As observers gained experience in the VR task, sensitivity to monocular and binocular cues increased. Notably, most observers were unable to distinguish the direction of MID based on binocular cues above chance level when tested early in the experiment, whereas most showed statistically significant sensitivity to binocular cues when tested late in the experiment. This result suggests that observers may discount binocular cues when they are first encountered in a VR environment. Laboratory assessments may thus underestimate the sensitivity of inexperienced observers to MID, especially for binocular cues.
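
As an illustration of testing whether a single observer's direction discrimination exceeds chance, the sketch below uses a one-sided binomial test on two-alternative (approaching vs. receding) judgments. The trial counts are hypothetical, and the choice of a binomial test is an assumption, not necessarily the analysis used in the paper.

```python
from scipy.stats import binomtest

# Hypothetical single-observer data: two-alternative direction judgments
# on binocular-cue trials, where 50% correct corresponds to chance.
n_trials, n_correct = 120, 74

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
```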


Subject(s)
Virtual Reality; Cues; Depth Perception; Humans; Motion Perception; Vision Disparity; Vision, Binocular
4.
J Vis ; 19(3): 2, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30836382

ABSTRACT

Intercepting and avoiding moving objects requires accurate motion-in-depth (MID) perception. Such motion can be estimated based on both binocular and monocular cues. Because previous studies largely characterized sensitivity to these cues individually, their relative contributions to MID perception remain unclear. Here we measured sensitivity to binocular, monocular, and combined cue MID stimuli using a motion coherence paradigm. We first confirmed prior reports of substantial variability in binocular MID cue sensitivity across the visual field. The stimuli were matched for eccentricity and speed, suggesting that this variability has a neural basis. Second, we determined that monocular MID cue sensitivity also varied considerably across the visual field. A major component of this variability was geometric: An MID stimulus produces the largest motion signals in the eye contralateral to its visual field location. This resulted in better monocular discrimination performance when the contralateral rather than ipsilateral eye was stimulated. Third, we found that monocular cue sensitivity generally exceeded, and was independent of, binocular cue sensitivity. Finally, contralateral monocular cue sensitivity was found to be a strong predictor of combined cue sensitivity. These results reveal distinct factors constraining the contributions of binocular and monocular cues to three-dimensional motion perception.
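
As an illustration of a generic motion coherence paradigm, the sketch below assigns a proportion of dots to move coherently (toward or away in depth) while the remainder move in random directions. The dot counts and direction coding are illustrative assumptions, not the study's stimulus code.

```python
import random

def assign_dot_directions(n_dots, coherence, signal_direction=+1):
    """Label each dot as signal (moving coherently toward or away in depth,
    coded +1 / -1) or noise (random direction). `coherence` is the proportion
    of signal dots; all values here are illustrative."""
    n_signal = round(coherence * n_dots)
    directions = [signal_direction] * n_signal
    directions += [random.choice([+1, -1]) for _ in range(n_dots - n_signal)]
    random.shuffle(directions)
    return directions

# Example: 100 dots at 30% coherence, with the signal moving toward the observer.
dot_directions = assign_dot_directions(100, 0.30, signal_direction=+1)
```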


Subject(s)
Cues; Depth Perception/physiology; Motion Perception/physiology; Vision, Binocular/physiology; Vision, Monocular/physiology; Female; Humans; Male; Mathematics; Photic Stimulation/methods; Visual Fields/physiology