Results 1 - 20 of 276
1.
J. optom. (Internet) ; 17(3): [100491], Jul.-Sept. 2024. illus., tables, graphs
Article in English | IBECS | ID: ibc-231873

ABSTRACT

Background and objectives: The invention described herein is a prototype based on computer vision technology that measures depth perception and is intended for the early examination of stereopsis. Materials and methods: The prototype (software and hardware) is a depth perception measurement system that consists of: (a) a screen showing stereoscopic models with a guide point that the subject must point to; (b) a camera capturing the distance between the screen and the subject's finger; and (c) a unit for recording, processing and storing the captured measurements. For test validation, the reproducibility and reliability of the platform were calculated by comparing results with standard stereoscopic tests. A demographic study of depth perception by subgroup analysis is shown. Subjective comparison of the different tests was carried out by means of a satisfaction survey. Results: We included 94 subjects, 25 children and 69 adults, with a mean age of 34.2 ± 18.9 years; 36.2 % were men and 63.8 % were women. The DALE3D platform obtained good repeatability, with an intraclass correlation coefficient (ICC) between 0.87 and 0.94 and a coefficient of variation (CV) between 0.1 and 0.26. Thresholds separating optimal from suboptimal results were calculated for the Randot and DALE3D tests; Spearman's correlation between these thresholds was not statistically significant (p > 0.05). Participants considered the test more visually appealing and easier to use (90 % gave the maximum score). Conclusions: The DALE3D platform is a potentially useful tool for measuring depth perception with optimal reproducibility rates. Its innovative design makes it a more intuitive tool for children than current stereoscopic tests. Nevertheless, further studies will be needed to assess whether the depth perception measured by the DALE3D platform is a sufficiently reliable parameter for assessing stereopsis. (AU)


Subject(s)
Humans , Male , Female , Child , Adolescent , Young Adult , Vision, Binocular , Depth Perception , Vision, Ocular , Vision Tests
2.
Elife ; 12, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39023517

ABSTRACT

We reliably judge the locations of static objects as we walk, even though the retinal images of these objects move with every step we take. Here, we show that our brains solve this perceptual challenge by adopting an allocentric spatial reference frame. We measured perceived target location after the observer walked a short distance from the home base. Supporting the allocentric coding scheme, we found that the intrinsic bias, which acts as a spatial reference frame for perceiving the location of a dimly lit target in the dark, remained anchored at the home base rather than traveling with the observer. The path-integration mechanism responsible for this can utilize both active and passive (vestibular) translational motion signals, but only along the horizontal direction. This asymmetric path-integration finding in human visual space perception is reminiscent of the asymmetric spatial memory finding in desert ants, pointing to nature's wondrous and logically simple design for terrestrial creatures.


Subject(s)
Distance Perception , Humans , Distance Perception/physiology , Male , Female , Space Perception/physiology , Adult , Young Adult , Optical Illusions/physiology , Visual Perception/physiology
3.
Laryngoscope ; 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38877827

ABSTRACT

INTRODUCTION: The bowing index (BI) and normalized glottal gap area (NGGA) are used to quantify vocal fold morphology in age-related vocal atrophy (ARVA); however, the influence of the distance between the flexible laryngoscope lens and the target area is not known. The goal was to test whether endoscopic distance affects vocal fold morphology measurements in patients with ARVA during flexible video laryngostroboscopy (VLS). METHOD: Patients with ARVA who underwent VLS were included. Images were classified into near (close to the petiole of the epiglottis) and far (below the nasopharynx, with the tongue base and entire epiglottis visible) conditions. BI was calculated using a mobile application, and NGGA was measured using ImageJ. RESULTS: This study included 23 patients; the mean age was 77 ± 7 years. Mean BI measured at the near distance was higher than at the far distance, with a mean difference of 1.94 (95% CI: 0.92-2.96, p = 0.001). NGGA also differed with distance, with a mean difference of -0.24 (95% CI: -0.48 to 0.01, p < 0.05). When stratifying patients into two groups based on the median BI measurement, there was a statistically significant difference between near and far conditions, with increased BI in the near condition for patients above the median (p < 0.05), but no difference between the near and far conditions for patients with BI below the median. CONCLUSION: The BI and NGGA were affected by endoscopic distance during flexible VLS. BI was significantly higher in the near condition than in the far condition. The difference in BI between the near and far conditions was more pronounced when vocal fold bowing was greater. These findings call for heightened awareness of measurement discrepancies secondary to endoscopic distance during flexible laryngostroboscopy. LEVEL OF EVIDENCE: Level 2 Laryngoscope, 2024.
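The near-versus-far comparison above is a paired design; a minimal sketch of how such a mean difference and its 95% CI can be computed (illustrative only; bi_near and bi_far are hypothetical arrays, not the study's data):

    import numpy as np
    from scipy import stats

    # Hypothetical paired BI measurements for the same patients (not study data)
    bi_near = np.array([10.2, 8.7, 9.5, 11.1, 7.9])
    bi_far = np.array([8.1, 7.0, 7.8, 9.0, 6.5])

    diff = bi_near - bi_far
    mean_diff = diff.mean()
    sem = stats.sem(diff)                        # standard error of the mean difference
    ci = stats.t.interval(0.95, df=len(diff) - 1, loc=mean_diff, scale=sem)
    t, p = stats.ttest_rel(bi_near, bi_far)      # paired t-test
    print(mean_diff, ci, p)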

4.
Neuropsychologia ; 201: 108941, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-38908477

ABSTRACT

Utilizing the high temporal resolution of event-related potentials (ERPs), we compared the time course of processing incongruent color versus 3D-depth information. Participants judged whether the food's color (color condition) or 3D structure (3D-depth condition) was congruent or incongruent with their prior knowledge and experience. Behaviorally, reaction times in the 3D-depth conditions were slower than in the corresponding color conditions, for both congruent and incongruent stimuli. The ERP results showed that incongruent color stimuli induced larger N270 and P300 components and a smaller N400 component in the fronto-central region than congruent color stimuli. Incongruent 3D-depth stimuli induced a smaller N1 in the occipital region and a larger P300 and smaller N400 in the parieto-occipital region than congruent 3D-depth stimuli. Time-frequency analysis found that incongruent color stimuli induced larger theta-band (360-580 ms) activation in the fronto-central region than congruent color stimuli, whereas incongruent 3D-depth stimuli induced larger alpha- and beta-band (240-350 ms) activation in the parietal region than congruent 3D-depth stimuli. Our results suggest that the human brain handles violations of general color and depth knowledge on different time courses. We speculate that the depth-perception conflict was resolved primarily through visual processing, whereas the color-perception conflict was resolved primarily as a semantic violation.
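For readers unfamiliar with the band-power step, here is a minimal sketch of extracting theta power in the reported window with MNE-Python (assumed tooling; the epochs object, the 4-7 Hz theta definition, the channel picks, and the Morlet parameters are illustrative, not the authors'):

    import numpy as np
    from mne.time_frequency import tfr_morlet

    # epochs: an mne.Epochs object for one condition (assumed to exist)
    freqs = np.arange(4.0, 8.0)                        # theta band, assumed 4-7 Hz
    power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                       return_itc=False, average=True)
    theta = power.copy().crop(tmin=0.36, tmax=0.58)    # the reported 360-580 ms window
    fc = theta.copy().pick(["Fz", "FCz", "Cz"])        # fronto-central channels (assumed)
    mean_theta_power = fc.data.mean()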


Subject(s)
Brain , Color Perception , Depth Perception , Electroencephalography , Evoked Potentials , Reaction Time , Humans , Male , Female , Color Perception/physiology , Young Adult , Reaction Time/physiology , Brain/physiology , Evoked Potentials/physiology , Depth Perception/physiology , Adult , Photic Stimulation , Time Factors , Brain Mapping
5.
Acta Psychol (Amst) ; 248: 104368, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38936232

ABSTRACT

Inhibition of return (IOR) is a phenomenon in which response times (RTs) to a target appearing at a previously cued location are slower than those to an uncued location; IOR can improve visual search efficiency. This study investigated IOR in badminton athletes at different cue depths using a cue-target paradigm in three-dimensional (3-D) static and dynamic scenarios. The study involved 28 badminton athletes (M age = 21.29, SD = 2.39, 14 males) and 25 non-athletes (M age = 21.56, SD = 2.38, 11 males). In the static scenario (Experiment 1), there was no significant difference in IOR between the near-cue and far-cue conditions; IOR appeared in both, and badminton athletes responded faster than non-athletes. In the dynamic scenario (Experiment 2), only badminton athletes showed IOR in the far-to-near condition, but not in the near-to-far condition. The present study showed that depth information influenced IOR only in the far-to-near condition and that badminton athletes were more sensitive to depth information than non-athletes. Additionally, the study extends object-based IOR to 3-D dynamic scenarios.

6.
Psychon Bull Rev ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806789

ABSTRACT

When processing visual scenes, we tend to prioritize information in the foreground, often at the expense of background information. This foreground bias is supported by data demonstrating more fixations to the foreground and faster, more accurate detection of targets embedded in it. However, it is also known that semantic consistency is associated with more efficient search. Here, we examined whether semantic context interacts with foreground prioritization, either amplifying or mitigating the effect of target semantic consistency. For each scene, targets were placed in the foreground or background and were either semantically consistent or inconsistent with the context of the immediately surrounding depth region. Results indicated faster response times (RTs) for foreground and for semantically consistent targets, replicating established effects. More importantly, the magnitude of the semantic consistency effect was significantly smaller in the foreground than in the background region. To examine the robustness of this effect, in Experiment 2 we strengthened the reliability of semantics by increasing the proportion of targets consistent with the scene region to 80%. The overall pattern of results replicated the uneven effect of semantic consistency across depth observed in Experiment 1. This suggests that foreground bias modulates the effects of semantics, so that performance is less affected by semantic inconsistency in near space.

7.
Psychon Bull Rev ; 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38519758

ABSTRACT

Recent studies have examined whether the internal selection mechanism functions similarly for perception and visual working memory (VWM). However, how we access and manipulate object representations distributed in a 3D space remains unclear. In this study, we used a memory search task to investigate the effect of depth on object selection and manipulation within VWM. The memory display consisted of colored items, half positioned at the near depth plane and half at the far plane. During memory maintenance, participants were instructed to search for a target representation and update its color. The results showed that under object-based attention (Experiments 1, 3, and 5), updating was faster for targets at the near plane than for those at the far plane. This effect was absent in VWM when spatial attention was deployed (Experiment 2) and in visual search regardless of the type of attention deployed (Experiment 4). The differential effects of depth on spatial and object-based attention in VWM suggest that spatial attention relied primarily on 2D location information irrespective of depth, whereas object-based attention prioritized memory representations at the front plane before shifting to the back. Our findings shed light on the interaction between depth perception and selection mechanisms within VWM in a 3D context, emphasizing the importance of ordinal, rather than metric, spatial information in guiding object-based attention in VWM.

8.
J Optom ; 17(3): 100491, 2024.
Article in English | MEDLINE | ID: mdl-38218113

ABSTRACT

BACKGROUND AND OBJECTIVES: The invention described herein is a prototype based on computer vision technology that measures depth perception and is intended for the early examination of stereopsis. MATERIALS AND METHODS: The prototype (software and hardware) is a depth perception measurement system that consists of: (a) a screen showing stereoscopic models with a guide point that the subject must point to; (b) a camera capturing the distance between the screen and the subject's finger; and (c) a unit for recording, processing and storing the captured measurements. For test validation, the reproducibility and reliability of the platform were calculated by comparing results with standard stereoscopic tests. A demographic study of depth perception by subgroup analysis is shown. Subjective comparison of the different tests was carried out by means of a satisfaction survey. RESULTS: We included 94 subjects, 25 children and 69 adults, with a mean age of 34.2 ± 18.9 years; 36.2 % were men and 63.8 % were women. The DALE3D platform obtained good repeatability, with an intraclass correlation coefficient (ICC) between 0.87 and 0.94 and a coefficient of variation (CV) between 0.1 and 0.26. Thresholds separating optimal from suboptimal results were calculated for the Randot and DALE3D tests; Spearman's correlation between these thresholds was not statistically significant (p > 0.05). Participants considered the test more visually appealing and easier to use (90 % gave the maximum score). CONCLUSIONS: The DALE3D platform is a potentially useful tool for measuring depth perception with optimal reproducibility rates. Its innovative design makes it a more intuitive tool for children than current stereoscopic tests. Nevertheless, further studies will be needed to assess whether the depth perception measured by the DALE3D platform is a sufficiently reliable parameter for assessing stereopsis.
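Repeatability statistics of this kind can be computed along the following lines (a minimal sketch assuming the pingouin library and a hypothetical long-format file; not the authors' code):

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format data: one row per subject x session with the
    # DALE3D depth-perception score (file and column names are illustrative)
    df = pd.read_csv("dale3d_sessions.csv")

    icc = pg.intraclass_corr(data=df, targets="subject",
                             raters="session", ratings="score")
    print(icc[["Type", "ICC", "CI95%"]])

    # Coefficient of variation across repeated sessions, averaged over subjects
    cv = df.groupby("subject")["score"].apply(lambda s: s.std() / s.mean())
    print(cv.mean())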


Subject(s)
Depth Perception , Humans , Depth Perception/physiology , Female , Male , Adult , Reproducibility of Results , Young Adult , Adolescent , Child , Middle Aged , Vision Tests/instrumentation , Vision Tests/methods , Aged , Equipment Design , Vision, Binocular/physiology
9.
bioRxiv ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-37662197

ABSTRACT

Remarkably, human brains can accurately perceive and process the real-world size of objects despite vast differences in distance and perspective. While previous studies have examined this phenomenon, distinguishing this ability from other visual percepts, such as depth, has been challenging. Using the THINGS EEG2 dataset, with high time-resolution human brain recordings and ecologically valid naturalistic stimuli, our study disentangles neural representations of object real-world size from retinal size and perceived real-world depth in a way that was not previously possible. Leveraging this state-of-the-art dataset, our EEG representational similarity results reveal a pure representation of object real-world size in human brains. We report a representational timeline of visual object processing: object real-world depth appeared first, then retinal size, and finally real-world size. Additionally, we input both these naturalistic images and object-only images without natural background into artificial neural networks. Consistent with the human EEG findings, we successfully disentangled the representation of object real-world size from retinal size and real-world depth in all three types of artificial neural networks tested (visual-only ResNet, visual-language CLIP, and language-only Word2Vec). Moreover, our multi-modal representational comparison framework across human EEG and artificial neural networks reveals real-world size as a stable, higher-level dimension in object space that incorporates both visual and semantic information. Our research provides a detailed characterization of object processing and offers further insights into our understanding of object space and the construction of more brain-like visual models.
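The representational similarity step can be sketched as follows (illustrative only; eeg_patterns and real_world_sizes are hypothetical stand-ins for the per-object EEG patterns and size values, not the study's variables):

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    # eeg_patterns: (n_objects, n_features) EEG pattern per object at one time point
    # real_world_sizes: (n_objects,) real-world size value per object
    neural_rdm = pdist(eeg_patterns, metric="correlation")   # neural dissimilarities
    size_rdm = pdist(real_world_sizes.reshape(-1, 1))        # model dissimilarities
    rho, p = spearmanr(neural_rdm, size_rdm)                 # RDM correspondence
    # Disentangling size from retinal size and depth would additionally use
    # partial correlations with those model RDMs controlled for.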

10.
Adv Mater ; : e2310134, 2023 Dec 03.
Article in English | MEDLINE | ID: mdl-38042993

ABSTRACT

Fluid flow behavior is visualized through particle image velocimetry (PIV) to understand and study experimental fluid dynamics. However, traditional PIV methods require multiple cameras and conventional lens systems to resolve multi-dimensional velocity fields, which adds complexity to the entire system. Meta-lenses are flat optical devices composed of artificial nanoantenna arrays; they can manipulate the wavefront of light and are ultrathin, compact, and free of spherical aberration. Meta-lenses offer novel functionalities and promise to replace traditional optical imaging systems. Here, a binocular meta-lens PIV technique is proposed, in which a pair of GaN meta-lenses fabricated on one substrate is integrated with an imaging sensor to form a compact binocular PIV system. The meta-lens weighs only 116 mg, much lighter than commercial lenses. The 3D velocity field is obtained from the binocular disparity and particle-image displacement of the fluid flow. A measurement error of ≈1.25% in vortex-ring diameter was experimentally validated using a Reynolds-number (Re) 2000 vortex ring. This work points toward a more compact, easy-to-deploy PIV technique, rejuvenating traditional flow diagnostics and enabling miniaturized, low-power systems for portable, field-use, and space-constrained PIV applications.
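For orientation, binocular ranging of this kind typically rests on the standard triangulation relation (our summary, with assumed symbols: Z depth, f focal length, B the baseline between the two lenses, d the disparity between the paired particle images):

    Z = \frac{f\,B}{d}, \qquad v_z \approx \frac{\Delta Z}{\Delta t}

so the change in a particle's triangulated depth between frames, divided by the frame interval, gives the out-of-plane velocity component.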

11.
Sensors (Basel) ; 23(24), 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38139548

ABSTRACT

With the rapid development of vision sensing, artificial intelligence, and robotics, one of the challenges we face is equipping welding robots with more advanced vision sensors to achieve intelligent welding manufacturing and obtain high-quality welded components. Depth perception is one of the bottlenecks in the development of welding sensors. This review assesses active and passive sensing methods for depth perception and classifies and elaborates on depth perception mechanisms based on monocular, binocular, and multi-view vision. It explores the principles and means of using deep learning for depth perception in robotic welding processes. Further, the application of welding robot visual perception in different industrial scenarios is summarized. Finally, the problems and countermeasures of welding robot visual perception technology are analyzed, and future developments are proposed. This review analyzed a total of 2662 articles and cited 152 as references. Suggested future research topics include deep learning for object detection and recognition, transfer learning for welding robot adaptation, multi-modal sensor fusion, integration of models and hardware, and comprehensive requirement analysis and system evaluation, in collaboration with welding experts, to design a multi-modal sensor-fusion architecture.
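As a concrete illustration of the binocular route the review describes, a minimal stereo-depth sketch with OpenCV (assumed library; file names and calibration values are hypothetical):

    import cv2

    # Rectified grayscale frames from a calibrated stereo pair (hypothetical files)
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disp = stereo.compute(left, right).astype(float) / 16.0  # SGBM output is fixed-point
    focal_px, baseline_m = 700.0, 0.12                       # from calibration (assumed)
    depth_m = focal_px * baseline_m / disp                   # valid only where disp > 0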

12.
J Eye Mov Res ; 16(1), 2023.
Article in English | MEDLINE | ID: mdl-37965285

ABSTRACT

Precise perception of three-dimensional (3D) images is crucial for a rewarding experience with novel displays. However, the capability of the human visual system to perceive binocular disparities varies across the visual field, meaning that depth perception may be affected by the two-dimensional (2D) layout of items on the screen. Nevertheless, potential difficulties in perceiving 3D images during free viewing have received little attention so far, limiting opportunities to enhance the visual effectiveness of information presentation. The aim of this study was to elucidate how the 2D layout of items in 3D images affects visual search and the distribution of attention, based on analysis of the viewer's gaze. Participants searched for a target projected one plane closer to the viewer than the distractors on a multi-plane display. The 2D layout was manipulated by varying item distance from the center of the display plane from 2° to 8°. Targets were identified correctly when items were displayed close to the center of the display plane; however, the number of errors grew with increasing distance. Moreover, correct responses were given more often when subjects paid more attention to targets than to other items on the screen, whereas a more balanced distribution of attention over time across all items was characteristic of incorrectly completed trials. Our results therefore suggest that items should be displayed close to one another in the 2D layout to facilitate precise perception of 3D images, and that assessing the distribution of attention with eye tracking may be useful for the objective evaluation of user experience with novel displays.

13.
Elife ; 12, 2023 Sep 04.
Article in English | MEDLINE | ID: mdl-37665324

ABSTRACT

Crowding occurs when the presence of nearby features causes highly visible objects to become unrecognizable. Although crowding has implications for many everyday tasks, and the tremendous amount of research on it reflects its importance, surprisingly little is known about how depth affects crowding. Most available studies show that stereoscopic disparity reduces crowding, suggesting that crowding may be relatively unimportant in three-dimensional environments. However, most previous studies tested only small stereoscopic differences in depth, for which disparity, defocus blur, and accommodation are inconsistent with the real world. Using a novel multi-depth-plane display, this study investigated how large (0.54-2.25 diopters), real differences in target-flanker depth, representative of those experienced between many objects in the real world, affect crowding. Our findings show that large differences in target-flanker depth increased crowding in the majority of observers, contrary to previous work showing reduced crowding in the presence of small depth differences. Furthermore, when the target was at fixation depth, crowding was generally more pronounced when the flankers were behind the target rather than in front of it; when the flankers were at fixation depth, crowding was generally more pronounced when the target was behind the flankers. These findings suggest that crowding from clutter outside the limits of binocular fusion can still have a significant impact on object recognition and visual perception in the peripheral field.


While human eyesight is clearest at the point where the gaze is focused, peripheral vision makes objects to the side visible. This ability to detect movement and objects in a wider field of vision helps people maintain awareness of their surroundings. However, it is more difficult to identify an object using peripheral vision when it is surrounded by other items. This phenomenon, known as crowding, can affect many aspects of daily life, such as driving or spotting a friend in a crowd. In our three-dimensional world, peripheral objects are often at different distances, and this variation in depth could influence the effect of crowding, yet little is known about its role. Previous research investigated the effect of small differences in depth on crowding, but those studies did not replicate real-world conditions. To replicate depths likely to be encountered in the real world, Smithers et al. created a display using multiple screens positioned 0.4, 1.26 and 4 meters from the viewer. Images were displayed on the screens, and the researchers measured how well study participants could identify a target image when it was surrounded by similar, nearby images displayed closer or further away than the target. The experiments showed that most viewers are less able to recognize a target object when there are surrounding items, and that this effect worsens when the items are separated from the object by large differences in depth. The findings show that instead of diminishing the effect of crowding, as suggested by previous studies with small depth differences, large depth differences that more closely recreate those encountered in the real world can amplify it. This greater understanding of how humans process objects in three-dimensional environments could help to better estimate the impact of crowding on people with eye and neurological disorders. In turn, the information could be used to design environments that are easier for such individuals to navigate.
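The dioptric depth differences quoted in the abstract follow directly from these screen distances, since dioptric depth is the reciprocal of viewing distance in meters:

    D = \frac{1}{d}: \quad \frac{1}{0.4\,\text{m}} = 2.50\,\text{D}, \quad \frac{1}{1.26\,\text{m}} \approx 0.79\,\text{D}, \quad \frac{1}{4\,\text{m}} = 0.25\,\text{D}

so the pairwise separations between planes run from about 0.54 D (1.26 m vs. 4 m) up to 2.25 D (0.4 m vs. 4 m), matching the 0.54-2.25 diopter range studied.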


Subject(s)
Histological Techniques , Visual Perception
14.
Biomimetics (Basel) ; 8(5), 2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37754196

ABSTRACT

Depth estimation is an ill-posed problem: objects of different shapes or dimensions, at different distances, may project to the same image on the retina. Our brain uses several cues for depth estimation, including monocular cues such as motion parallax and binocular cues such as diplopia. However, it remains unclear how the computations required for depth estimation are implemented in biologically plausible ways. State-of-the-art approaches to depth estimation based on deep neural networks implicitly describe the brain as a hierarchical feature detector. Here, we instead propose an alternative approach that casts depth estimation as a problem of active inference. We show that depth can be inferred by inverting a hierarchical generative model that simultaneously predicts the two eyes' projections from a 2D belief over an object. Model inversion consists of a series of biologically plausible homogeneous transformations based on predictive coding principles. Under the plausible assumption of nonuniform foveal resolution, depth estimation favors an active vision strategy that fixates the object with the eyes, rendering the depth belief more accurate. This strategy is not realized by first fixating on a target and then estimating depth; rather, the two processes are combined through action-perception cycles, with a mechanism similar to that of saccades during object recognition. The proposed approach requires only local (top-down and bottom-up) message passing, which can be implemented in biologically plausible neural circuits.
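To make the inversion idea concrete, here is a toy sketch (our illustration under simplifying assumptions, not the paper's model): a belief about an object's depth is refined by descending the prediction error between predicted and observed projections in the two eyes, the basic operation of predictive coding.

    import numpy as np

    f, b = 1.0, 0.065                 # focal length, interocular baseline (assumed)
    x, z_true = 0.1, 1.5              # known lateral position, true depth (toy values)

    def project(z):
        """Generative model: predict the left/right eye projections of the point."""
        return np.array([f * (x + b / 2) / z, f * (x - b / 2) / z])

    obs = project(z_true)             # observed projections
    z = 1.0                           # initial depth belief
    lr = 50.0                         # belief update rate (tuned for this toy)
    for _ in range(500):
        err = obs - project(z)                            # prediction errors
        grad = np.array([-f * (x + b / 2) / z**2,
                         -f * (x - b / 2) / z**2])        # d(prediction)/dz
        z += lr * err @ grad                              # error-driven belief update
    print(round(z, 3))                # converges to ~1.5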

15.
World Neurosurg ; 179: 109-117, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37619840

ABSTRACT

BACKGROUND: Rotational angiography, often referred to as a "spin", is typically presented in 2D. Since rotational angiograms are composed of images acquired from multiple angles, we took advantage of this property to develop a method for converting any rotational angiogram into a three-dimensional (3D) video. METHODS: Our aim was to develop a low-cost and easily distributable solution requiring no additional hardware and no change to acquisition techniques. Six previously acquired rotational angiograms from our institution were imported using custom-written code and exported as anaglyph (red-cyan) videos. RESULTS: The resulting 3D videos convey anatomical depth that is not apparent from the 2D images alone. Processing time was 1.3 ± 0.6 s (mean ± SD) per angiogram. The only associated cost was $10 for red-cyan 3D glasses. Using our software, any rotational angiogram with at least 0.3 frames per degree of rotation can be converted into 3D. CONCLUSIONS: Our solution is an inexpensive and rapid method for generating stereoscopic videos from existing angiograms. It requires no additional hardware and is readily deployable in low-resource settings. Because the videos are in anaglyph format, they are viewable on any two-dimensional (2D) display in the interventional suite or operating room, on a mobile device, or at home.
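The anaglyph construction can be sketched as follows (a hedged illustration of the general technique, not the authors' code; the file names and inter-view frame offset are assumptions): each frame is paired with one acquired slightly later in the rotation, and the two views are merged into the red and cyan channels.

    import cv2

    cap = cv2.VideoCapture("spin.avi")            # hypothetical rotational angiogram
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()

    offset = 2            # frames between the two views, i.e. a few degrees (assumed)
    h, w = frames[0].shape
    writer = cv2.VideoWriter("spin_anaglyph.avi",
                             cv2.VideoWriter_fourcc(*"MJPG"), 30, (w, h))
    for left, right in zip(frames, frames[offset:]):
        writer.write(cv2.merge([right, right, left]))  # BGR: cyan <- right, red <- left
    writer.release()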


Subject(s)
Angiography , Software , Humans , Imaging, Three-Dimensional/methods
16.
Perception ; 52(9): 670-675, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37427447

ABSTRACT

A novel geometrical optical illusion is reported in this article: the horizontal distances of the contextual structures distort the perceived vertical positions of observed objects. Specifically, the illusion manifests in the form of connected boxes of varying widths but equal heights, each containing a circle at the center. Despite identical vertical positioning of the circles, they appear misaligned. The illusion diminishes when the boxes are removed. Potential underlying mechanisms are discussed.


Subject(s)
Optical Illusions , Orientation , Humans
17.
Vision Res ; 211: 108279, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37422937

ABSTRACT

The debate surrounding the advantages of binocular versus monocular vision has persisted for decades. This study investigated whether individuals with monocular vision loss can perceive large egocentric distances in real-world environments, under natural viewing conditions, as accurately and precisely as those with normal vision. A total of 49 participants took part, divided into three groups based on their viewing conditions. Two experiments assessed the accuracy and precision of estimating egocentric distances to visual targets and the coordination of actions during blind walking. In Experiment 1, participants stood in both a hallway and a large open field and judged the midpoint of self-to-target distances spanning 5 to 30 m. Experiment 2 involved a blind walking task in which participants attempted to walk to the same targets, without visual or environmental feedback, at an unusually rapid pace. The findings revealed that perceptual accuracy and precision were influenced primarily by the environmental context, motion condition, and target distance, rather than by the viewing condition. Surprisingly, individuals with monocular vision loss perceived egocentric distances with accuracy and precision comparable to those of individuals with normal vision.


Subject(s)
Distance Perception , Vision, Ocular , Humans , Vision, Monocular , Walking , Vision, Binocular
18.
Vision Res ; 210: 108267, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37285783

ABSTRACT

People with amblyopia demonstrate a reduced ability to judge depth using stereopsis. Our understanding of this deficit is limited, as standard clinical stereo tests may not be suited to giving a quantitative account of residual stereo ability in amblyopia. In this study we used a stereo test designed specifically for that purpose: participants identified the location of a disparity-defined odd-one-out target within a random-dot display. We tested 29 amblyopic (3 strabismic, 17 anisometropic, 9 mixed) participants and 17 control participants, and obtained stereoacuity thresholds from 59% of the amblyopic participants. There was a factor-of-two difference between the median stereoacuity of the amblyopic (103 arcsec) and control (56 arcsec) groups. We used the equivalent noise method to evaluate the roles of equivalent internal noise and processing efficiency in amblyopic stereopsis. Using the linear amplifier model (LAM), we determined that the threshold difference was due to greater equivalent internal noise in the amblyopic group (238 vs. 135 arcsec), with no significant difference in processing efficiency. A multiple linear regression determined that 56% of the stereoacuity variance within the amblyopic group was predicted by the two LAM parameters, with equivalent internal noise alone predicting 46%. Analysis of the control group data aligned with our previous work in finding that trade-offs between equivalent internal noise and efficiency play a greater role. Our results allow a better understanding of what limits amblyopic performance in our task: a reduced quality of the disparity signals entering task-specific processing.
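In its usual equivalent-noise form (our summary; symbols assumed: T the measured threshold, \sigma_{\mathrm{int}} equivalent internal noise, \sigma_{\mathrm{ext}} externally added noise, \eta processing efficiency), the LAM predicts

    T^2 = \frac{\sigma_{\mathrm{int}}^2 + \sigma_{\mathrm{ext}}^2}{\eta}

As an illustrative check at zero external noise, the reported values give \eta \approx (238/103)^2 \approx 5.3 for the amblyopic group and (135/56)^2 \approx 5.8 for controls: similar efficiencies, consistent with the reported null difference in efficiency.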


Subject(s)
Amblyopia , Humans , Depth Perception , Noise , Vision, Binocular , Vision, Ocular , Visual Acuity , Case-Control Studies
19.
Minim Invasive Ther Allied Technol ; 32(4): 190-198, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37293947

ABSTRACT

Introduction: This study compares five augmented reality (AR) vasculature visualization techniques in a mixed-reality laparoscopy simulator with 50 medical professionals and analyzes their impact on the surgeon. Material and methods: The ability of each visualization technique to convey depth was measured by participants' accuracy in an objective depth-sorting task. Demographic data and subjective measures, such as the preference for each AR visualization technique and potential application areas, were collected with questionnaires. Results: Although differences in the objective measurements were observed across the visualization techniques, they were not statistically significant. In the subjective measures, however, 55% of the participants rated visualization technique II, 'Opaque with single-color Fresnel highlights', as their favorite. Participants felt that AR could be useful for various operations, especially complex surgeries (100%). Almost all participants agreed that AR could potentially improve surgical parameters such as patient safety (88%), complication rate (84%), and identification of risk structures (96%). Conclusions: More studies are needed on the effect of different visualizations on task performance, as well as more sophisticated and effective visualization techniques for the operating room. With the findings of this study, we encourage the development of new study setups to advance surgical AR.


Subject(s)
Augmented Reality , Laparoscopy , Surgeons , Surgery, Computer-Assisted , Humans , Laparoscopy/methods , Surgery, Computer-Assisted/methods