Results 1 - 19 of 19

1.
Cognition ; 254: 105964, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357431

ABSTRACT

Motor imagery (MI) of one's own movements is thought to involve the sub-threshold activation of one's own motor codes. Movement coordination during joint action is thought to occur because co-actors integrate a simulation of their own actions with the simulated actions of the partner. The present experiments gained insight into MI of joint action by investigating if and how the assumed motor capabilities of the imagined partner affected MI. Participants performed a serial disc transfer task alone and then imagined performing the same task alone and with an imagined partner. In the individual tasks, participants transferred all four discs. In the joint task, participants imagined themselves transferring the first two discs and a partner transferring the last two discs. The description of the imagined partner (high/low performer) was manipulated across blocks to determine if participants adapted their MI of the joint task based on the partner's characteristics. Results revealed that imagined movement times (MTs) were shorter when the imagined partner was described as a 'high' performer than when described as a 'low' performer. Interestingly, participants not only adjusted the partner's portion of the task, but also adjusted their own portion of the task: imagined MTs of the first disc transfers were shorter when imagining performing the task with a high performer than with a low performer. These findings suggest that MI is based on the simulation of one's own response code, and that adapting MI to a partner's movements influences the MI of one's own movements.

2.
Virtual Real ; 28(2): 95, 2024.
Article in English | MEDLINE | ID: mdl-39233779

ABSTRACT

Mixed reality technologies, such as virtual (VR) and augmented (AR) reality, present promising opportunities to advance education and professional training due to their adaptability to diverse contexts. Distortions in the perceived distance in such mediated conditions, however, are well documented and have imposed nontrivial challenges that complicate and limit transferring task performance in a virtual setting to the unmediated reality (UR). One potential source of the distance distortion is the vergence-accommodation conflict: the discrepancy between the depth specified by the eyes' accommodative state and the angle at which the eyes converge to fixate on a target. The present study used a manual pointing task in UR, VR, and AR to quantify the magnitude of the potential depth distortion in each modality. Conceptualizing the effect of the vergence-accommodation conflict as a constant offset to the vergence angle, a model was developed based on the stereoscopic viewing geometry. Different versions of the model were used to fit and predict the behavioral data for all modalities. Results confirmed the validity of conceptualizing the vergence-accommodation conflict as a device-specific vergence offset, which predicted up to 66% of the variance in the data. The fitted parameters indicate that, due to the vergence-accommodation conflict, participants' vergence angle was driven outwards by approximately 0.2°, which disrupted the stereoscopic viewing geometry and produced distance distortion in VR and AR. The implications of this finding are discussed in the context of developing virtual environments that minimize the effect of depth distortion.
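
The abstract does not reproduce the model equations. As a rough illustration only (not the authors' implementation), the assumed geometry can be sketched by treating the conflict as a constant outward offset to the vergence angle; the interpupillary distance, the sign convention for "outward," and the example distances below are assumptions:

```python
import numpy as np

IPD = 0.063  # interpupillary distance in meters (assumed typical value)

def vergence_angle(distance_m, ipd=IPD):
    """Vergence angle (radians) needed to fixate a target at a given distance."""
    return 2.0 * np.arctan(ipd / (2.0 * distance_m))

def perceived_distance(true_distance_m, offset_deg=0.2, ipd=IPD):
    """Distance implied by a vergence angle driven outwards by a constant offset.

    'Outwards' is modeled here as a reduction of the geometric vergence angle;
    the 0.2 deg default echoes the value reported in the abstract.
    """
    theta = vergence_angle(true_distance_m, ipd) - np.radians(offset_deg)
    return ipd / (2.0 * np.tan(theta / 2.0))

for d in (0.3, 0.5, 0.8):  # illustrative pointing distances in meters
    print(f"true {d:.2f} m -> implied {perceived_distance(d):.3f} m")
```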

3.
Sci Rep ; 14(1): 18938, 2024 08 15.
Article in English | MEDLINE | ID: mdl-39147910

ABSTRACT

The popularity of mixed reality (MR) technologies, including virtual (VR) and augmented (AR) reality, has advanced many training and skill development applications. If successful, these technologies could be valuable for high-impact professional training, such as medical operations or sports, where physical resources can be limited or inaccessible. Despite MR's potential, it is still unclear whether repeatedly performing a task in MR would affect performance in the same or related tasks in the physical environment. To investigate this issue, participants executed a series of visually guided manual pointing movements in the physical world before and after spending one hour in VR or AR performing similar movements. Results showed that, due to the MR headsets' intrinsic perceptual geometry, movements executed in VR were shorter, and movements executed in AR were longer, than the veridical Euclidean distance. Crucially, the sensorimotor bias in the MR conditions also manifested in the subsequent post-test pointing task; participants transferring from VR initially undershot, whereas those transferring from AR overshot, the target in the physical environment. These findings call for careful consideration of MR-based training, because exposure to MR may perturb sensorimotor processes in the physical environment and negatively impact performance accuracy and the transfer of training from MR to the unmediated reality (UR).


Subjects
Psychomotor Performance , Task Performance and Analysis , Virtual Reality , Humans , Male , Female , Adult , Young Adult , Psychomotor Performance/physiology , Augmented Reality , Movement/physiology
4.
Behav Res Methods ; 56(4): 4103-4129, 2024 04.
Article in English | MEDLINE | ID: mdl-38504077

ABSTRACT

Human movement trajectories can reveal useful insights regarding the underlying mechanisms of human behaviors. Extracting information from movement trajectories, however, can be challenging because of their complex and dynamic nature. The current paper presents a Python toolkit developed to help users analyze and extract meaningful information from the trajectories of discrete rapid aiming movements executed by humans. This toolkit uses various open-source Python libraries, such as NumPy and SciPy, and offers a collection of common functionalities to analyze movement trajectory data. To ensure flexibility and ease of use, the toolkit offers two approaches: an automated approach that processes raw data and generates relevant measures automatically, and a manual approach that allows users to selectively use different functions based on their specific needs. A behavioral experiment based on the spatial cueing paradigm was conducted to illustrate how one can use this toolkit in practice. Readers are encouraged to access the publicly available data and relevant analysis scripts as an opportunity to learn about kinematic analysis for human movements.
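
The toolkit's actual API is not given in the abstract. As a minimal sketch of the kind of measures it computes, using the NumPy/SciPy stack the abstract mentions (the function name, filter settings, and onset threshold below are illustrative, not the toolkit's):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def kinematic_summary(positions, fs=200.0, cutoff=10.0):
    """Basic kinematic measures from a 1D array of sampled positions (mm).

    fs     : sampling rate in Hz
    cutoff : low-pass cutoff in Hz for a dual-pass Butterworth filter
    """
    b, a = butter(2, cutoff / (fs / 2.0), btype="low")
    pos = filtfilt(b, a, positions)            # smooth the trajectory
    vel = np.gradient(pos) * fs                # numerical differentiation -> mm/s

    peak_vel = np.max(np.abs(vel))
    t_peak_vel = np.argmax(np.abs(vel)) / fs   # time of peak velocity (s)

    # crude onset/offset detection at 5% of peak velocity (a common heuristic)
    moving = np.abs(vel) > 0.05 * peak_vel
    onset = np.argmax(moving)
    offset = len(moving) - np.argmax(moving[::-1]) - 1
    movement_time = (offset - onset) / fs

    return {"peak_velocity": peak_vel,
            "time_to_peak_velocity": t_peak_vel,
            "movement_time": movement_time}

# Synthetic minimum-jerk-like reach, 200 mm amplitude over 0.6 s at 200 Hz
t = np.linspace(0.0, 0.6, 120)
tau = t / 0.6
demo = 200.0 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
print(kinematic_summary(demo, fs=200.0))
```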


Subjects
Movement , Software , Humans , Movement/physiology , Biomechanical Phenomena , Programming Languages , Male
5.
Q J Exp Psychol (Hove) ; 77(2): 230-241, 2024 Feb.
Article in English | MEDLINE | ID: mdl-36999402

ABSTRACT

Social cues, such as eye gaze and pointing fingers, can increase the prioritisation of specific locations for cognitive processing. A previous study using a manual reaching task showed that, although both gaze and pointing cues altered target prioritisation (reaction times [RTs]), only pointing cues affected action execution (trajectory deviations). These differential effects of gaze and pointing cues on action execution could be because the gaze cue was conveyed through a disembodied head; hence, the model lacked the potential for a body part (i.e., hands) to interact with the target. In the present study, the image of a male gaze model, whose gaze direction coincided with two potential target locations, was centrally presented. The model either had his arms and hands extended underneath the potential target locations, indicating the potential to act on the targets (Experiment 1), or had his arms crossed in front of his chest, indicating the absence of potential to act (Experiment 2). Participants reached to a target that followed a nonpredictive gaze cue at one of three stimulus onset asynchronies. RTs and reach trajectories of the movements to cued and uncued targets were analysed. RTs showed a facilitation effect for both experiments, whereas trajectory analysis revealed facilitatory and inhibitory effects, but only in Experiment 1 when the model could potentially act on the targets. The results of this study suggested that when the gaze model had the potential to interact with the cued target location, the model's gaze affected not only target prioritisation but also movement execution.


Subjects
Attention , Cues (Psychology) , Humans , Male , Ocular Fixation , Reaction Time , Movement
6.
PLoS One ; 18(10): e0293178, 2023.
Article in English | MEDLINE | ID: mdl-37871043

ABSTRACT

BACKGROUND: Joint range of motion (ROM) is an important quantitative measure for physical therapy. ROM measurement commonly relies on a goniometer, and accurate, reliable measurement requires extensive training and practice. This, in turn, imposes a significant barrier for those who have limited in-person access to healthcare. OBJECTIVE: The current study presents and evaluates an alternative, machine learning-based ROM evaluation method that can be accessed remotely via a webcam. METHODS: To evaluate its reliability, the ROM measurements for a diverse set of joints (neck, spine, and upper and lower extremities) derived using this method were compared to those obtained from a marker-based optical motion capture system. RESULTS: Data collected from 25 healthy adults demonstrated that the webcam solution exhibited high test-retest reliability, with substantial to almost perfect intraclass correlation coefficients for most joints. Compared with the marker-based system, the webcam-based system demonstrated substantial to almost perfect inter-rater reliability for some joints, and lower inter-rater reliability for other joints (e.g., shoulder flexion and elbow flexion), which could be attributed to reduced sensitivity to joint locations at the apex of the movement. CONCLUSIONS: The proposed webcam-based method exhibited high test-retest and inter-rater reliability, making it a versatile alternative to existing ROM evaluation methods in clinical practice and in the tele-implementation of physical therapy and rehabilitation.
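
The abstract does not describe the underlying pose-estimation pipeline. As a generic illustration of how such webcam-based methods typically derive a joint angle from estimated keypoints (the keypoint coordinates and joint choice below are hypothetical, not from the study):

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by the proximal and distal keypoints.

    Each argument is an (x, y) pixel coordinate, e.g. shoulder-elbow-wrist
    keypoints from a pose estimator for elbow flexion.
    """
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical webcam keypoints (pixels)
shoulder, elbow, wrist = (320, 180), (360, 300), (300, 400)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```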


Subjects
Articular Arthrometry , Shoulder , Adult , Humans , Articular Arthrometry/methods , Reproducibility of Results , Articular Range of Motion , Upper Extremity
7.
Vision Res ; 203: 108152, 2023 02.
Article in English | MEDLINE | ID: mdl-36442368

ABSTRACT

Visually guided reaches are performed in ≈1 s. Because feedback control with neural transmission delay is unstable, stable visually guided reaching is assumed to require internal feedforward models that generate simulated, undelayed feedback, which is combined with actual feedback for stability. We investigated whether stable visually guided reaching requires internal models to handle such delay. Participants performed rapid targeted reaches in a virtual environment with different mappings between the speeds of the hand and the hand avatar. First, participants reached with visual guidance and a constant mapping. Second, feedforward reaches were performed with a constant mapping and the hand avatar visible only at reach start and end. Reaches were accurate. Third, participants performed reaches with visual guidance and a different mapping on every trial. We expected performance as in the first condition. Finally, feedforward reaches with variable mapping yielded large errors, showing that visual guidance in the previous condition was successful despite an ineffective internal model. We simulated reaches using a proportional rate model in which disparity tau controlled the virtual equilibrium point in an Equilibrium Point (EP) model. The time-dimensioned information and dynamics remained stable with delayed feedback. Finally, we fit movement times using the proportional rate EP model with 0, 50, and 100 ms delays. With the fitted model parameters, we compared the model reach trajectories with the behavioral trajectories. Stable visually guided reaching did not require an internal feedforward model.
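
The abstract names the model but gives no equations. The toy simulation below (all function names and parameter values are invented, and the disparity-tau term is replaced by a simple proportional term on the seen gap) only illustrates the general flavor of a proportional-rate update of a virtual equilibrium point under delayed visual feedback:

```python
import numpy as np

def simulate_reach(target=0.3, duration=1.0, dt=0.001, delay=0.05,
                   k_rate=8.0, stiffness=120.0, damping=22.0):
    """Toy proportional-rate / equilibrium-point reach with delayed feedback.

    The virtual equilibrium point (EP) moves toward the target at a rate
    proportional to the *delayed* seen gap; the hand is driven to the EP by
    a spring-damper. Parameter values are illustrative, not fitted.
    """
    n = int(duration / dt)
    lag = int(delay / dt)
    x = np.zeros(n)   # hand position (m)
    v = 0.0           # hand velocity
    ep = 0.0          # virtual equilibrium point
    for i in range(1, n):
        seen = x[max(i - lag, 0)]              # delayed visual feedback of the hand
        ep += k_rate * (target - seen) * dt    # proportional-rate EP update
        a = stiffness * (ep - x[i - 1]) - damping * v
        v += a * dt
        x[i] = x[i - 1] + v * dt
    return x

for d in (0.0, 0.05, 0.1):  # 0, 50, and 100 ms feedback delays
    print(f"delay {int(d * 1000):3d} ms -> final position {simulate_reach(delay=d)[-1]:.3f} m")
```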


Subjects
Hand , Movement , Humans , Feedback , Psychomotor Performance
8.
Sci Rep ; 12(1): 17538, 2022 10 20.
Article in English | MEDLINE | ID: mdl-36266406

ABSTRACT

Studies have demonstrated that perceiving human and animal movements from point-light displays is effortless. However, simply inverting the display can significantly impair this ability. Compared to non-dancers and typical dancers, vertical dancers have the unique experience of observing and performing movements upside down while suspended in the air. We studied whether this unique visuomotor experience makes them better at perceiving inverted movements. We presented ten pairs of dance movements as point-light displays. Each pair included one version performed on the ground and one performed in the air. We inverted the display in half of the trials and asked vertical dancers, typical dancers, and non-dancers whether the display was inverted. We found that only vertical dancers, who have extended visual and motor experience with the configural and dynamic information of these movements, could identify the inversion of movements performed in the air. Neither typical dancers nor non-dancers, who have no motor experience with performing the inverted movements, could detect the inversion. Our findings suggest that motor experience plays a more critical role in enabling observers to use dynamic information to identify artificial inversion in biological motion.


Subjects
Dancing , Motion Perception , Humans , Animals , Movement , Spatial Orientation
9.
Vision Res ; 196: 108029, 2022 07.
Article in English | MEDLINE | ID: mdl-35248890

ABSTRACT

Reaches guided using monocular versus binocular vision have been found to be equally fast and accurate only when optical texture was available, projected from a support surface across which the reach was performed. We now investigate which property of optical texture elements is used to perceive relative distance: image width, image height, or image shape. Participants performed reaches to match target distances. Targets appeared on a textured surface on the left, and participants reached to place their hand at the target distance along a surface on the right. A perturbation discriminated which texture property was being used: the right-hand surface was higher than the left-hand one by either 2, 4, or 6 cm. Participants should overshoot if they matched texture image width at the target, undershoot if they matched image shape, and undershoot far distances and, depending on the overall eye height, overshoot near distances if they matched image height. In Experiment 1, participants reached by moving a joystick to control a hand avatar in a virtual environment display. Their eye height was 15 cm. For each texture property, distances were predicted from the viewing geometry. Results ruled out image width in favor of image height or shape. In Experiment 2, participants at a 50 cm eye height reached in an actual environment with the same manipulations. Results supported the use of image shape (or foreshortening), consistent with findings on the texture properties used in slant perception. We discuss implications for models of visually guided reaching.
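
The paper derives its predictions from the specific viewing geometry of the two surfaces; as a generic ground-plane projection sketch only (not the authors' derivation; variable names and example values are illustrative), the three candidate texture properties scale differently with eye height and distance:

```python
import numpy as np

def projected_texture(eye_height, distance, width, depth_extent):
    """Approximate visual angles (radians) of a texture element on a ground plane.

    width        : physical width of the element (frontoparallel direction)
    depth_extent : physical extent of the element along the depth direction
    """
    line_of_sight = np.hypot(eye_height, distance)              # eye-to-element distance
    ang_width = width / line_of_sight                            # image width
    ang_height = eye_height * depth_extent / line_of_sight**2    # image height
    shape = ang_height / ang_width                               # foreshortening (image shape)
    return ang_width, ang_height, shape

# Matching the same image property across surfaces at different heights implies
# different reach distances, which is the logic behind the height perturbation.
print(projected_texture(eye_height=0.15, distance=0.30, width=0.02, depth_extent=0.02))
print(projected_texture(eye_height=0.11, distance=0.30, width=0.02, depth_extent=0.02))
```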


Subjects
Depth Perception , Binocular Vision , Distance Perception , Humans , Monocular Vision
10.
Exp Brain Res ; 239(3): 765-776, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33388908

ABSTRACT

We investigated monocular information for the continuous online guidance of reaches-to-grasp and present a dynamical control model thereof. We defined an information variable using optical texture projected from a support surface (i.e., a table) over which participants reached-to-grasp target objects sitting on the table surface at different distances. Using either binocular or monocular vision in the dark, participants rapidly reached-to-grasp a phosphorescent square target object with visibly phosphorescent thumb and index finger. Targets were one of three sizes. The target either sat flat on the support surface or was suspended a few centimeters above the surface at a slant. The latter condition perturbed the visible relation of the target to the support surface. The support surface was either invisible in the dark or covered with a visible phosphorescent checkerboard texture. Reach-to-grasp trajectories were recorded, and Maximum Grasp Apertures (MGA), Movement Times (MT), Time of MGA (TMGA), and Time of Peak Velocity (TPV) were analyzed. These measures were selected as most indicative of the participant's certainty about the relation of hand to target object during the reaches. The findings were that, in general, monocular reaches especially were less certain (slower, with earlier TMGA and TPV) than binocular reaches, except with the target flat on the visible support surface, where performance with monocular and binocular vision was equivalent. The hypothesized information was the difference in the image width of optical texture (equivalent to the density of optical texture) at the hand versus the target. A control dynamic equation was formulated representing proportional rate control of the reaches-to-grasp (akin to the model using binocular disparity formulated by Anderson and Bingham (Exp Brain Res 205: 291-306, 2010)). Simulations were performed and presented using this model. Simulated performance was compared to actual performance and found to replicate it. To our knowledge, this is the first study of monocular information used for continuous online guidance of reaches-to-grasp, complete with a control dynamic model.


Subjects
Hand Strength , Biomechanical Phenomena , Humans , Psychomotor Performance , Binocular Vision , Monocular Vision
11.
Atten Percept Psychophys ; 83(1): 389-398, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33000441

ABSTRACT

Information used in visual event perception includes both static image structure projected from opaque object surfaces and dynamic optic flow generated by motion. Events presented in static blurry grayscale displays have been shown to be recognized only when and after they are presented with optic flow. In this study, we investigated the effects of optic flow and color on identifying blurry events by studying identification accuracy and eye-movement patterns. Three types of color displays were tested: grayscale, original colors, or rearranged colors (where the RGB values of the original colors were adjusted). In each color condition, participants identified 12 blurry events in five experimental phases. In the first two phases, static blurry images were presented alone or sequentially with a motion mask between consecutive frames, and identification was poor. In Phase 3, where optic flow was added, identification was comparably good. In Phases 4 and 5, motion was removed, but identification remained good. Thus, optic flow improved event identification during and after its presentation. Color also improved performance: participants were consistently better at identifying color displays than grayscale or rearranged-color displays. Importantly, the effects of optic flow and color were additive. Finally, in both the motion and post-motion phases, a significant portion of eye fixations fell in strong optic flow areas, suggesting that participants continued to look where flow was available even after it stopped. We infer that optic flow specified depth structure in the blurry image structure and yielded an improvement in identification from static blurry images.


Subjects
Motion Perception , Optic Flow , Color , Eye Movements , Humans , Visual Perception
12.
Vision Res ; 173: 77-89, 2020 08.
Article in English | MEDLINE | ID: mdl-32480110

ABSTRACT

Previously, we developed a stratified process for slant perception. First, optical transformations in structure-from-motion (SFM) and stereo were used to derive 3D relief structure (where depth scaling remains arbitrary). Second, with sufficient continuous perspective change (≥45°), a bootstrap process derived 3D similarity structure. Third, the perceived slant was derived. As predicted by theoretical work on SFM, viewing at small visual angles (<5°) requires non-coplanar points. Slanted surfaces with small 3D cuboids or tetrahedrons yielded accurate judgments while planar surfaces did not. Normally, object perception entails non-coplanar points. Now, we apply the stratified process to object perception where, after deriving similarity structure, alternative metric properties of the object can be derived (e.g., the slant of the top surface or the width-to-depth aspect ratio). First, we tested slant judgments of the smooth planar tops of three different polyhedral objects. We tested rectangular, hexagonal, and asymmetric pentagonal surfaces, finding that symmetry was required to determine the direction of slant (AP&P, 2019, https://doi.org/10.3758/s13414-019-01859-5). Our current results replicated the previous findings. Second, we tested judgments of aspect ratios, finding accurate performance only for symmetric objects. The results from this study suggest that, first, trackable non-coplanar points can be attained in the form of 3D objects. Second, symmetry is necessary to constrain slant and aspect ratio perception. Finally, deriving 3D similarity structure precedes estimating object properties, such as slant or aspect ratio. Together, the evidence presented here supports the stratified bootstrap process for 3D object perception. STATEMENT OF SIGNIFICANCE: Planning interactions with objects in the surrounding environment entails the perception of 3D shape and slant. Studying ways in which 3D metric shape and slant can be perceived accurately by moving observers not only sheds light on how the visual system works, but also provides understanding that can be applied to other fields, like machine vision or remote sensing. The current study is a logical extension of previous studies by the same authors and explores the roles of large continuous perspective changes, relief structure, and symmetry in a stratified process for object perception.


Subjects
Form Perception/physiology , Three-Dimensional Imaging , Motion Perception/physiology , Adult , Female , Humans , Judgment , Male , Photic Stimulation , Research Design , Sensory Thresholds
13.
Atten Percept Psychophys ; 82(3): 1488-1503, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31502187

ABSTRACT

Empirical studies have consistently shown 3-D slant and shape perception to be inaccurate as a result of relief scaling (an unknown scaling along the depth direction). Wang, Lind, and Bingham (Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1508-1522, 2018) discovered that sufficient relative motion between the observer and 3-D objects, in the form of continuous perspective change (≥45°), could enable accurate 3-D slant perception. They attributed this to a bootstrap process (Lind, Lee, Mazanowski, Kountouriotis, & Bingham in Journal of Experimental Psychology: Human Perception and Performance, 40(1), 83, 2014) in which the perceiver identifies right angles formed by texture elements and tracks them in the 3-D relief structure through rotation to extrapolate the unknown scaling factor, which is then used to convert 3-D relief structure to 3-D Euclidean structure. This study examined the nature of the bootstrap process in slant perception. In a series of four experiments, we demonstrated that (1) features of 3-D relief structure, instead of 2-D texture elements, were tracked (Experiment 1); (2) identifying right angles was not necessary, and a different implementation of the bootstrap process is more suitable for 3-D slant perception (Experiment 2); and (3) mirror symmetry is necessary to produce accurate slant estimation using the bootstrapped scaling factor (Experiments 3 and 4). Together, the results support the hypothesis that a symmetry axis is used to determine the direction of slant and that 3-D relief structure is tracked over sufficiently large perspective change to produce metric depth. Altogether, the results supported the bootstrap process.


Subjects
Depth Perception , Humans , Rotation
14.
Atten Percept Psychophys ; 82(3): 1504-1519, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31506917

ABSTRACT

Lind et al. (Journal of Experimental Psychology: Human Perception and Performance, 40 (1), 83, 2014) proposed a bootstrap process that used right angles on 3D relief structure, viewed over sufficiently large continuous perspective change, to recover the scaling factor for metric shape. Wang, Lind, and Bingham (Journal of Experimental Psychology: Human Perception and Performance, 44(10), 1508-1522, 2018) replicated these results in the case of 3D slant perception. However, subsequent work by the same authors (Wang et al., 2019) suggested that the original solution could be ineffective for 3D slant and presented an alternative that used two equidistant points (a portion of the original right angle). We now describe a three-step stratified process to recover 3D slant using this new solution. Starting with 2D inputs, we (1) used an existing structure-from-motion (SFM) algorithm to derive the object's 3D relief structure and (2) applied the bootstrap process to it to recover the unknown scaling factor, which (3) was then used to produce a slant estimate. We presented simulations of results from four previous experiments (Wang et al., 2018, 2019) to compare model and human performance. We showed that the stratified process has great predictive power, reproducing a surprising number of phenomena found in human experiments. The modeling results also confirmed arguments made in Wang et al. (2019) that an axis of mirror symmetry in an object allows observers to use the recovered scaling factor to produce an accurate slant estimate. Thus, poor estimates in the context of a lack of symmetry do not mean that the scaling factor has not been recovered, but merely that the direction of slant was ambiguous.
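
The three steps are described only at a high level in the abstract. The sketch below (coordinates and function names invented) illustrates one way steps (2) and (3) can be realized, assuming step (1) has already delivered relief coordinates with a single unknown depth scale and that a rigid segment, such as the two equidistant points, is tracked across the perspective change:

```python
import numpy as np

def bootstrap_depth_scale(seg_view1, seg_view2):
    """Recover the unknown relief depth scale from one rigid segment tracked
    across a perspective change.

    Each argument is a pair of relief points (x, y, z_relief). True depth is
    assumed to be s * z_relief with the same unknown s in both views, and the
    segment's Euclidean length is the same in both views (rigidity), so
    dx1^2 + dy1^2 + s^2 * dz1^2 = dx2^2 + dy2^2 + s^2 * dz2^2 can be solved
    for s (requires the segment's relation to the depth axis to change).
    """
    d1 = np.diff(np.asarray(seg_view1, float), axis=0)[0]
    d2 = np.diff(np.asarray(seg_view2, float), axis=0)[0]
    num = (d2[0]**2 + d2[1]**2) - (d1[0]**2 + d1[1]**2)
    den = d1[2]**2 - d2[2]**2
    return np.sqrt(num / den)

def surface_slant(p0, p1, p2, scale):
    """Slant (deg) of the plane through three relief points after rescaling depth.

    Slant is taken here as the angle between the surface normal and the z axis,
    one common convention.
    """
    pts = np.asarray([p0, p1, p2], float)
    pts[:, 2] *= scale                        # relief depth -> metric depth
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    n /= np.linalg.norm(n)
    return np.degrees(np.arccos(abs(n[2])))

# Demo: the true depth scale is 2.0; the same unit-length segment is seen
# before and after a ~45 deg rotation about the vertical axis.
view1 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5)]     # relief coordinates, view 1
view2 = [(0.0, 0.0, 0.0), (1.4142, 0.0, 0.0)]  # same segment after rotation
print(bootstrap_depth_scale(view1, view2))      # ~2.0
```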


Subjects
Depth Perception , Humans
15.
Vision Res ; 158: 49-57, 2019 05.
Article in English | MEDLINE | ID: mdl-30796993

ABSTRACT

Perceiving the spatial layout of objects is crucial in visual scene perception. Optic flow provides information about spatial layout. This information is not affected by image blur because motion detection uses low spatial frequencies in image structure. Therefore, perceiving scenes with blurry vision should be effective when optic flow is available. Furthermore, when blurry images and optic flow interact, optic flow specifies spatial relations and calibrates the blurry images. The calibrated image structure then preserves the spatial relations specified by optic flow after motion stops. Thus, perceiving blurry scenes should be stable when optic flow and blurry images are available. We investigated the types of optic flow that facilitate recognition of blurry scenes and evaluated the stability of performance. Participants identified scenes in blurry videos when viewing single frames and when viewing the entire videos, which contained translational flow (Experiment 1), rotational flow (Experiment 2), or both (Experiment 3). When first viewing the blurry images, participants identified only a few scenes. When viewing the blurry video clips, their performance improved with translational flow, whether it was available alone or in combination with rotational flow. Participants were still able to perceive the scenes from static blurry images one week later. Therefore, translational flow interacts with blurry image structures to yield effective and stable scene perception. These results imply that observers with blurry vision may be able to identify their surroundings when they locomote.


Subjects
Optic Flow/physiology , Low Vision/physiopathology , Visual Perception/physiology , Female , Humans , Male , Visual Pattern Recognition/physiology , Young Adult
16.
Exp Brain Res ; 237(3): 817-827, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30610264

ABSTRACT

Mon-Williams and Bingham (Exp Brain Res 211(1):145-160, 2011) developed a geometrical affordance model for reaches-to-grasp and identified a constant scaling relationship, P, between safety margins (SM) and available apertures (AA) that are determined by the sizes of the objects and the individual hands. Bingham et al. (J Exp Psychol Hum Percept Perform 40(4):1542-1550, 2014) extended the model by introducing a dynamical component that scales the geometrical relationship to the stability of the reach-to-grasp. The goal of the current study was to explore whether and how quickly a change in the relevant effectivity (functionally determined hand size = maximum grip) would affect the geometrical and dynamical scaling relationships. The maximum grip of large-handed males was progressively restricted. Participants responded to this restriction by using progressively smaller safety margins, but progressively larger P (= SM/AA) values that preserved an invariant dynamical scaling relationship. The recalibration was relatively fast, occurring over five trials or fewer, presumably the number required to detect the variability or stability of performance. The results supported the affordance model for reaches-to-grasp in which the invariance is determined by the dynamical component, because it serves the goal of not colliding with the object before successful grasping can be achieved. The findings were also consistent with those of Snapp-Childs and Bingham (Exp Brain Res 198(4):527-533, 2009), who found changes in age-specific geometric scaling for stepping affordances as a function of changes in effectivities over the life span, where those changes preserved a dynamic scaling constant similar to that in the current study.
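
As a minimal illustration of the geometric ratio (the operationalization of SM and AA here follows the abstract's wording and may differ in detail from the cited papers; the numbers are made up):

```python
def grasp_scaling(max_grip, object_width, max_grasp_aperture):
    """Geometric affordance ratio P = SM / AA (all measures in the same units).

    SM (safety margin)     : how much wider the hand opens than the object
    AA (available aperture): how much wider the hand *could* open than the object
    """
    sm = max_grasp_aperture - object_width
    aa = max_grip - object_width
    return sm / aa

# Example: 7 cm object, 11 cm maximum grip, hand opens to 9 cm at peak aperture
print(grasp_scaling(max_grip=11.0, object_width=7.0, max_grasp_aperture=9.0))  # 0.5
```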


Subjects
Motor Activity/physiology , Psychomotor Performance/physiology , Size Perception/physiology , Adult , Biomechanical Phenomena , Body Size , Humans , Male
17.
J Exp Psychol Hum Percept Perform ; 44(10): 1508-1522, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29927269

ABSTRACT

Perceived slant has often been characterized as a component of 3D shape perception for polyhedral objects. Like 3D shape, slant is often perceived inaccurately. Lind, Lee, Mazanowski, Kountouriotis, and Bingham (2014) found that 3D shape was perceived accurately with perspective changes ≥ 45°. We now similarly tested perception of 3D slant. To account for their results, Lind et al. (2014) developed a bootstrap model based on the assumption that optical information yields perception of 3D relief structure, which is then used with large perspective changes to bootstrap perception of 3D Euclidean structure. However, slant perception usually entails planar surfaces, and structure-from-motion fails in the absence of noncoplanar points. Nevertheless, the displays in Lind et al. (2014) included stereomotion in addition to monocular optical flow. Because stereomotion is higher order, the bootstrap model might apply in the case of strictly planar surfaces. We investigated whether stereomotion, monocular structure-from-motion (SFM), or the combination of the two would yield accurate 3D slant perception with large continuous perspective change. In Experiment 1, we found that judgments of slant were inaccurate in all information conditions. In Experiment 2, we added noncoplanar structure to the surfaces. We found that judgments in the monocular SFM and combined conditions became accurate once perspective changes were ≥ 45°, replicating the results of Lind et al. (2014) and supporting the bootstrap model. In short, we found that noncoplanar structure was required to enable accurate perception of 3D slant with sufficiently large perspective changes.


Subjects
Space Perception/physiology , Visual Perception/physiology , Adult , Depth Perception/physiology , Female , Humans , Male , Motion Perception/physiology , Monocular Vision/physiology , Young Adult
18.
J Vis ; 17(12): 13, 2017 10 01.
Article in English | MEDLINE | ID: mdl-29067401

ABSTRACT

Events consist of objects in motion. When objects move, their opaque surfaces reflect light and produce both static image structure and dynamic optic flow. The static and dynamic optical information co-specify events. Patients with age-related macular degeneration (AMD) and amblyopia cannot identify static objects because of weakened image structure. However, optic flow is detectable despite blurry vision because visual motion measurement uses low spatial frequencies. When motion ceases, image structure persists and might preserve properties specified by optic flow. We tested whether optic flow and image structure interact to allow event perception with poor static vision. AMD (Experiment 1), amblyopic (Experiments 2 and 3), and normally sighted observers identified common events from either blurry (Experiments 1 and 2) or clear images (Experiment 3), when either single image frames were presented, a sequence of frames was presented with motion masks, or a sequence of frames was presented with detectable motion. Results showed that with static images, but no motion, events were not perceived well by participants other than controls in Experiment 3. However, with detectable motion, events were perceived. Immediately following this and again after five days, participants were able to identify events from the original static images. So, when image structure information is weak, optic flow compensates for it and enables event perception. Furthermore, weakened static image structure information nevertheless preserves information that was once available in optic flow. The combination is powerful and allows events to be perceived accurately and stably despite blurry vision.


Subjects
Amblyopia/physiopathology , Macular Degeneration/physiopathology , Motion Perception/physiology , Optic Flow/physiology , Low Vision/physiopathology , Aged , Aged, 80 and over , Analysis of Variance , Case-Control Studies , Female , Humans , Male
19.
Hum Mov Sci ; 45: 172-81, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26684725

ABSTRACT

Previous empirical and theoretical work suggests that effective skill acquisition requires movements to be generated actively and that learning new skills supports the acquisition of prospective control. However, there are many ways in which practice can be structured, and these may affect the acquisition and use of prospective control after training. Here, we tested whether the progressive modulation and reduction of support during training was required to yield good performance after training without support. The task was to use a stylus to push a bead over a complex 3D wire path. The support "magnetically" attracted and held the stylus onto the wire. Three groups of adult participants each experienced one of three training regimes: gradual reduction of magnetic attraction, only a medium level of attraction, or low magnetic attraction. The results showed that use of a single (medium) level of support was significantly less effective in yielding good performance with low support after training. Training with low support yielded post-training performance that was as good as that yielded by training with progressive reduction of support; however, performance during training was significantly poorer in the former. Thus, less support during training yields effective learning but more difficult training sessions. The results are discussed in the context of application to training with special populations.


Subjects
Magnetics , Motor Skills , Psychological Practice , Psychomotor Performance , Adult , Psychological Anticipation , Female , Humans , Internal-External Control , Male , Prospective Studies , Young Adult