1.
Nat Commun ; 15(1): 2907, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649369

ABSTRACT

Holographic displays can generate light fields by dynamically modulating the wavefront of a coherent beam of light using a spatial light modulator, promising rich virtual and augmented reality applications. However, the limited spatial resolution of existing dynamic spatial light modulators imposes a tight bound on the diffraction angle. As a result, modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light. The low étendue forces a sacrifice of either the field-of-view (FOV) or the display size. In this work, we lift this limitation by presenting neural étendue expanders. This new breed of optical elements, which is learned from a natural image dataset, enables higher diffraction angles for ultra-wide FOV while maintaining both a compact form factor and the fidelity of displayed contents to human viewers. With neural étendue expanders, we experimentally achieve 64 × étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically, with high-fidelity reconstruction quality (measured in PSNR) over 29 dB on retinal-resolution images.
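A minimal Python sketch of the étendue arithmetic behind this abstract: étendue scales with display area times the solid angle of diffracted light, and the maximum diffraction angle of a pixelated SLM follows from the grating equation. The pixel pitch, wavelength, and display size below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

wavelength = 532e-9        # green light, metres (assumed)
pixel_pitch = 8e-6         # assumed SLM pixel pitch, metres
display_side = 15.36e-3    # assumed SLM active-area side length, metres

# Maximum diffraction half-angle of a pixelated SLM (grating equation).
theta_max = np.arcsin(wavelength / (2 * pixel_pitch))

# Etendue ~ display area x solid angle of the diffracted cone.
area = display_side ** 2
solid_angle = np.pi * np.sin(theta_max) ** 2
etendue = area * solid_angle
print(f"max diffraction half-angle: {np.degrees(theta_max):.2f} deg")
print(f"native etendue: {etendue:.3e} m^2 sr")

# A 64x etendue expansion corresponds to ~8x larger diffraction angles per axis,
# i.e. roughly an order of magnitude wider FOV horizontally and vertically.
expansion = 64
theta_expanded = np.arcsin(min(1.0, np.sqrt(expansion) * np.sin(theta_max)))
print(f"expanded half-angle: {np.degrees(theta_expanded):.2f} deg")
```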

2.
Biomed Opt Express ; 14(5): 2166-2180, 2023 May 01.
Article in English | MEDLINE | ID: mdl-37206152

ABSTRACT

A large portion of today's world population suffers from vision impairments and wears prescription eyeglasses. However, prescription glasses add bulk and discomfort when used with virtual reality (VR) headsets, degrading the viewer's visual experience. In this work, we remove the need for prescription eyeglasses when viewing screens by shifting the optical complexity into software. We propose a prescription-aware rendering approach that provides sharper and more immersive imagery on screens, including VR headsets. To this end, we develop a differentiable display and visual perception model that encapsulates display-specific parameters together with the viewer's color perception, visual acuity, and user-specific refractive errors. Using this differentiable visual perception model, we optimize the rendered imagery with gradient-descent solvers. In this way, we provide sharper images, without prescription glasses, to people with vision impairments. Our evaluation shows significant quality and contrast improvements for users with vision impairments.
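A hedged sketch of the pre-correction idea described above: an image sent to the display is optimized by gradient descent so that, after passing through a differentiable model of the viewer's refractive blur, it matches the intended target. The Gaussian kernel here is only a stand-in for the paper's richer display and perception model.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=11, sigma=2.0):
    ax = torch.arange(size) - size // 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    k = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return (k / k.sum()).float()

def perceived(image, kernel):
    # Differentiable stand-in for the eye's point spread function (per-channel blur).
    k = kernel.repeat(image.shape[1], 1, 1, 1)
    return F.conv2d(image, k, padding=kernel.shape[-1] // 2, groups=image.shape[1])

target = torch.rand(1, 3, 64, 64)              # placeholder target image
display = target.clone().requires_grad_(True)  # image actually sent to the screen
kernel = gaussian_kernel()

opt = torch.optim.Adam([display], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(perceived(display.clamp(0, 1), kernel), target)
    loss.backward()
    opt.step()
print(f"final perceptual loss: {loss.item():.5f}")
```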

3.
Opt Express ; 31(26): 43864-43876, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38178472

ABSTRACT

Diffractive optical elements (DOEs) have widespread applications in optics, ranging from point spread function engineering to holographic display. Conventionally, DOE design relies on Cartesian simulation grids, resulting in square features in the final design. Unfortunately, Cartesian grids provide an anisotropic sampling of the plane, and the resulting square features can be challenging to fabricate with high fidelity using methods such as photolithography. To address these limitations, we explore the use of hexagonal grids as a new grid structure for DOE design and fabrication. In this study, we demonstrate wave propagation simulation using an efficient hexagonal coordinate system and compare simulation accuracy with the standard Cartesian sampling scheme. Additionally, we have implemented algorithms for the inverse DOE design. The resulting hexagonal DOEs, encoded with wavefront information for holograms, are fabricated and experimentally compared to their Cartesian counterparts. Our findings indicate that employing hexagonal grids enhances holographic imaging quality. The exploration of new grid structures holds significant potential for advancing optical technology across various domains, including imaging, microscopy, photography, lighting, and virtual reality.
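A small illustrative sketch of the hexagonal sampling the abstract refers to: every interior sample of a regular hexagonal lattice has six equidistant nearest neighbours, unlike the anisotropic 4-plus-4 neighbourhood of a Cartesian grid. The pitch and grid extent are arbitrary assumptions.

```python
import numpy as np

def hex_grid_centres(n_rows=6, n_cols=6, pitch=1.0):
    """Return (x, y) centres of a regular hexagonal lattice (offset-row layout)."""
    centres = []
    for r in range(n_rows):
        for q in range(n_cols):
            x = pitch * (q + 0.5 * (r % 2))   # odd rows shifted by half a pitch
            y = pitch * (np.sqrt(3) / 2) * r  # row spacing of a hex lattice
            centres.append((x, y))
    return np.array(centres)

pts = hex_grid_centres()
# Distances from an interior sample to its six lattice neighbours are all equal to the
# pitch, unlike the sqrt(2)-longer diagonals of a Cartesian grid.
centre = pts[np.argmin(np.linalg.norm(pts - pts.mean(axis=0), axis=1))]
dists = np.sort(np.linalg.norm(pts - centre, axis=1))[1:7]
print("six nearest-neighbour distances:", np.round(dists, 6))
```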

4.
Opt Express ; 31(26): 43908-43919, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38178475

ABSTRACT

The Joint Photographic Experts Group (JPEG) compression standard is widely adopted for digital images. However, because JPEG encoding is not designed for holograms, applying it typically leads to severe distortions in holographic projections. In this work, we overcome this problem by accounting for the influence of JPEG compression on hologram generation in an end-to-end fashion. To this end, we introduce a novel approach that merges hologram generation and JPEG compression into one differentiable model, enabling joint optimization via efficient first-order solvers. Our JPEG-aware, end-to-end optimized holograms show significant improvements over conventional holograms compressed with the JPEG standard, both in simulation and on an experimental display prototype. Specifically, the proposed algorithm improves peak signal-to-noise ratio (PSNR) by 4 dB and structural similarity (SSIM) by 0.27 at the same compression rate. At the same reconstruction quality, our method reduces the size of compressed holograms by about 35% compared to conventional JPEG-compressed holograms. Consistent with the simulations, the experimental results further demonstrate that our method is robust to JPEG compression loss. Moreover, our method generates holograms compatible with the JPEG standard, making them friendly to a wide range of commercial software and edge devices.
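For reference, a small sketch of the PSNR metric quoted above, together with the straight-through quantisation trick commonly used to keep a hard rounding step (such as a codec's quantiser) inside a differentiable pipeline. Neither is claimed to be the paper's exact formulation.

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    mse = np.mean((reference - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def straight_through_quantise(x, levels=256):
    # Forward pass quantises; conceptually, the backward pass treats the rounding as
    # identity, which is how hard quantisation is usually handled end to end.
    q = np.round(x * (levels - 1)) / (levels - 1)
    return x + (q - x)   # numerically equal to q, written in straight-through form

img = np.random.rand(64, 64)
print(f"PSNR after 8-bit quantisation: {psnr(img, straight_through_quantise(img)):.2f} dB")
```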

5.
IEEE Trans Vis Comput Graph ; 28(11): 3854-3864, 2022 11.
Article in English | MEDLINE | ID: mdl-36044494

ABSTRACT

Virtual Reality (VR) is becoming ubiquitous with the rise of consumer displays and commercial VR platforms. Such displays require low-latency, high-quality rendering of synthetic imagery with reduced compute overhead. Recent advances in neural rendering have shown promise for unlocking new possibilities in 3D computer graphics via image-based representations of virtual or physical environments. Specifically, neural radiance fields (NeRF) demonstrated that photo-realistic quality and continuous view changes of 3D scenes can be achieved without loss of view-dependent effects. While NeRF can significantly benefit rendering for VR applications, it faces unique challenges posed by wide field-of-view, high resolution, and stereoscopic/egocentric viewing, which typically cause low quality and high latency in the rendered images. In VR, this not only harms the interaction experience but may also cause sickness. To tackle these problems toward six-degrees-of-freedom, egocentric, and stereo NeRF in VR, we present the first gaze-contingent 3D neural representation and view synthesis method. We incorporate the human psychophysics of visual and stereo acuity into an egocentric neural representation of 3D scenery. We then jointly optimize latency/performance and visual quality while mutually bridging human perception and neural scene synthesis to achieve perceptually high-quality immersive interaction. We conducted both objective analyses and subjective studies to evaluate the effectiveness of our approach. We find that our method significantly reduces latency (up to 99% time reduction compared with NeRF) without loss of high-fidelity rendering (perceptually identical to full-resolution ground truth). The presented approach may serve as a first step toward future VR/AR systems that capture, teleport, and visualize remote environments in real time.


Subject(s)
Computer Graphics , Virtual Reality , Humans , User-Computer Interface , Psychophysics
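A hedged sketch of the gaze-contingent allocation idea behind this abstract: visual acuity falls with eccentricity from the gaze point, so ray-sample budgets can shrink toward the periphery. The inverse-linear falloff model and the numeric budgets below are illustrative assumptions, not the paper's calibrated psychophysical model.

```python
import numpy as np

def eccentricity_deg(h, w, gaze, fov_deg=100.0):
    ys, xs = np.mgrid[0:h, 0:w]
    px_per_deg = w / fov_deg
    return np.hypot(ys - gaze[0], xs - gaze[1]) / px_per_deg

def samples_per_ray(ecc_deg, max_samples=192, min_samples=16, e0=2.5):
    # Inverse-linear acuity falloff (a common foveation model): full budget in the
    # fovea, shrinking toward a floor in the periphery.
    budget = max_samples * e0 / (e0 + ecc_deg)
    return np.clip(budget, min_samples, max_samples).astype(int)

ecc = eccentricity_deg(90, 160, gaze=(45, 80))
budget = samples_per_ray(ecc)
print(f"fovea: {budget.max()} samples/ray, far periphery: {budget.min()} samples/ray")
print(f"mean budget vs uniform full-res: {budget.mean() / 192:.1%}")
```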
6.
JMIR Res Protoc ; 11(8): e40445, 2022 Aug 24.
Article in English | MEDLINE | ID: mdl-36001370

ABSTRACT

BACKGROUND: Preventable surgical errors, causing varying degrees of physical, emotional, and financial harm, account for a significant number of adverse events. These errors are frequently tied to systemic problems within a health care system, including the absence of necessary policies/procedures, obstructive cultural hierarchy, and communication breakdown between staff. We developed an innovative, theory-based virtual reality (VR) training program to promote understanding and sensemaking of a holistic view of the culture of patient safety and high reliability. OBJECTIVE: We aim to assess the effect of VR training on health care workers' (HCWs') understanding of contributing factors to patient safety events, sensemaking of patient safety culture, and high reliability organization principles in the laboratory environment. Further, we aim to assess the effect of VR training on patient safety culture, TeamSTEPPS behavior scores, and reporting of patient safety events in the surgery department of an academic medical center in the clinical environment. METHODS: This mixed methods study uses a pre-VR versus post-VR training study design involving attending faculty, residents, nurses, and technicians of the department of surgery, as well as frontline HCWs in the operating rooms at an academic medical center. HCWs' understanding of contributing factors to patient safety events will be assessed using a scale based on the Human Factors Analysis and Classification System. We will use the data-frame theory framework, supported by a semistructured interview guide, to capture the sensemaking process of patient safety culture and principles of high reliability organizations. Changes in the culture of patient safety will be quantified using the Agency for Healthcare Research and Quality surveys on patient safety culture. TeamSTEPPS behavior scores based on observation will be measured using the Teamwork Evaluation of Non-Technical Skills tool. Patient safety events reported in the voluntary institutional reporting system will be compared before versus after the training. We will compare the Agency for Healthcare Research and Quality patient safety culture scores and patient safety event reporting before versus after the training using descriptive statistics and a within-subject 2-tailed, 2-sample t test with the significance level set at .05. RESULTS: Ethics approval was obtained in May 2021 from the institutional review board of the University of North Carolina at Chapel Hill (22-1150). The enrollment of participants for this study will start in fall 2022 and is expected to be completed by early spring 2023. The data analysis is expected to be completed by July 2023. CONCLUSIONS: Our findings will help assess the effectiveness of VR training in improving HCWs' understanding of contributing factors to patient safety events, sensemaking of patient safety culture, and principles and behaviors of high reliability organizations. These findings will contribute to developing VR training to improve patient safety culture in other specialties.
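A minimal sketch of the planned pre- versus post-training comparison, read as a paired (within-subject), two-tailed t test at alpha = .05 as stated in the protocol. The scores below are synthetic placeholders, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_scores = rng.normal(3.4, 0.5, size=30)            # placeholder safety-culture scores before training
post_scores = pre_scores + rng.normal(0.3, 0.4, 30)   # hypothetical post-training shift

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at .05: {p_value < 0.05}")
```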

7.
IEEE Trans Vis Comput Graph ; 27(11): 4194-4203, 2021 11.
Article in English | MEDLINE | ID: mdl-34449368

ABSTRACT

Computer-generated holographic (CGH) displays show great potential and are emerging as the next-generation displays for augmented and virtual reality, as well as automotive heads-up displays. One of the critical problems hindering the wide adoption of such displays is the speckle noise inherent to holography, which compromises image quality by introducing perceptible artifacts. Although speckle noise suppression has been an active research area, previous works have not considered the perceptual characteristics of the Human Visual System (HVS), which receives the final displayed imagery. However, it is well studied that the sensitivity of the HVS is not uniform across the visual field, which has led to gaze-contingent rendering schemes for maximizing perceptual quality in various forms of computer-generated imagery. Inspired by this, we present the first method that reduces the "perceived speckle noise" by integrating foveal and peripheral vision characteristics of the HVS, along with the retinal point spread function, into the phase hologram computation. Specifically, we introduce the anatomical and statistical retinal receptor distribution into our computational hologram optimization, which places a higher priority on reducing the perceived foveal speckle noise while being adaptable to any individual's optical aberration on the retina. Our method demonstrates superior perceptual quality on our emulated holographic display. Our evaluations with objective measurements and subjective studies demonstrate a significant reduction in human-perceived noise.


Subject(s)
Holography , Artifacts , Computer Graphics , Humans , Retina , Visual Fields
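A hedged sketch of an eccentricity-weighted error term of the kind this abstract describes: reconstruction errors near the gaze point are penalised more, mimicking the falloff of retinal receptor density. The weight model and image sizes are illustrative assumptions, not the paper's calibrated receptor distribution.

```python
import numpy as np

def foveal_weight(h, w, gaze, fov_deg=60.0, e0=2.0):
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - gaze[0], xs - gaze[1]) / (w / fov_deg)   # eccentricity in degrees
    return e0 / (e0 + ecc)                                        # cone-density-like falloff

def perceived_speckle_loss(reconstruction, target, gaze):
    weight = foveal_weight(*target.shape, gaze)
    return np.mean(weight * (reconstruction - target) ** 2)

target = np.zeros((128, 128)); target[32:96, 32:96] = 1.0
noisy = target + 0.1 * np.random.randn(128, 128)   # stand-in for a speckled reconstruction
print(f"foveally weighted loss: {perceived_speckle_loss(noisy, target, gaze=(64, 64)):.5f}")
```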
8.
Opt Express ; 28(18): 26636-26650, 2020 Aug 31.
Article in English | MEDLINE | ID: mdl-32906933

ABSTRACT

The goal of computer-generated holography (CGH) is to synthesize custom illumination patterns by modulating a coherent light beam. CGH algorithms typically rely on iterative optimization with a built-in trade-off between computation speed and hologram accuracy that limits performance in advanced applications such as optogenetic photostimulation. We introduce a non-iterative algorithm, DeepCGH, that relies on a convolutional neural network with unsupervised learning to compute accurate holograms with fixed computational complexity. Simulations show that our method generates holograms orders of magnitude faster and with up to 41% greater accuracy than alternate CGH techniques. Experiments in a holographic multiphoton microscope show that DeepCGH substantially enhances two-photon absorption and improves performance in photostimulation tasks without requiring additional laser power.
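A minimal sketch of the unsupervised training idea this abstract outlines: a CNN predicts a phase pattern from a target intensity, a differentiable propagation step reconstructs the projected intensity, and the loss compares that reconstruction to the target, so no ground-truth holograms are needed. The tiny network and the single-FFT (far-field) propagation are stand-in assumptions, not DeepCGH's actual architecture or propagation model.

```python
import torch
import torch.nn as nn

class PhaseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, target):
        return torch.pi * torch.tanh(self.net(target))   # phase in (-pi, pi)

def reconstruct(phase):
    field = torch.exp(1j * phase)
    far_field = torch.fft.fftshift(torch.fft.fft2(field), dim=(-2, -1))
    return far_field.abs() ** 2

model = PhaseNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
target = torch.rand(8, 1, 64, 64)                          # placeholder target intensities
for _ in range(10):                                        # a few illustrative steps
    recon = reconstruct(model(target))
    recon = recon / recon.amax(dim=(-2, -1), keepdim=True) # normalise for comparison
    loss = nn.functional.mse_loss(recon, target)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"unsupervised loss after a few steps: {loss.item():.4f}")
```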

9.
IEEE Trans Vis Comput Graph ; 25(11): 3114-3124, 2019 11.
Article in English | MEDLINE | ID: mdl-31403422

ABSTRACT

In this paper, we present our novel design for switchable AR/VR near-eye displays that can help solve the vergence-accommodation conflict. The principal idea is to time-multiplex virtual imagery and real-world imagery and use a tunable lens to adjust focus for the virtual display and the see-through scene separately. With this novel design, prescription eyeglasses for near- and far-sighted users become unnecessary. This is achieved by integrating the wearer's corrective optical prescription into the tunable lens for both the virtual display and the see-through environment. We built a prototype based on the design, comprising a micro-display, optical systems, a tunable lens, and active shutters. The experimental results confirm that the proposed near-eye display design can switch between AR and VR and can provide correct accommodation for both modes.


Subject(s)
Augmented Reality , Computer Graphics , Image Processing, Computer-Assisted/methods , Virtual Reality , Equipment Design , Eyeglasses , Holography , Humans
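A rough sketch of the per-sub-frame state in the time-multiplexed design described above: in the virtual sub-frame the shutter blocks the world and the tunable lens images the micro-display at the desired depth with the prescription folded in; in the see-through sub-frame the shutter opens and the lens carries only the prescription. The distances and the simple thin-lens treatment are illustrative assumptions, not the paper's optical design.

```python
def lens_power_virtual(display_dist_m, virtual_image_dist_m, prescription_D):
    # Thin-lens equation (real-is-positive convention), virtual image distance negative;
    # the corrective power is added on the assumption that thin-lens powers add.
    focusing_power = 1.0 / display_dist_m + 1.0 / (-virtual_image_dist_m)
    return focusing_power + prescription_D

def sub_frame_state(sub_frame, prescription_D=-1.5):
    if sub_frame == "virtual":
        power = lens_power_virtual(display_dist_m=0.04, virtual_image_dist_m=2.0,
                                   prescription_D=prescription_D)
        return {"shutter_open": False, "lens_power_D": round(power, 2)}
    return {"shutter_open": True, "lens_power_D": prescription_D}

for frame in ("virtual", "see-through"):
    print(frame, sub_frame_state(frame))
```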
10.
IEEE Trans Vis Comput Graph ; 25(5): 1928-1939, 2019 05.
Article in English | MEDLINE | ID: mdl-30794179

ABSTRACT

Traditional optical manufacturing poses a great challenge to near-eye display designers due to long lead times on the order of multiple weeks, limiting the ability of optical designers to iterate quickly and explore beyond conventional designs. We present a complete near-eye display manufacturing pipeline with a one-day lead time using commodity hardware. Our novel manufacturing pipeline consists of several innovations, including a rapid production technique that improves the surface of a 3D-printed component to optical quality suitable for near-eye display applications, a computational design methodology using machine learning and ray tracing to create freeform static projection screen surfaces for near-eye displays that can represent arbitrary focal surfaces, and a custom projection lens design that distributes pixels non-uniformly for a foveated near-eye display hardware design candidate. We have demonstrated untethered augmented reality near-eye display prototypes to assess the success of our technique, and show that a ski-goggles form factor, a large monocular field of view (30°×55°), and a resolution of 12 cycles per degree can be achieved.
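A back-of-the-envelope sketch relating the reported monocular FOV (30°×55°) and 12 cycles-per-degree resolution to a display pixel budget, assuming the usual Nyquist rule of two pixels per cycle; this is an illustration, not the paper's analysis.

```python
fov_deg = (30, 55)
cycles_per_degree = 12
pixels_per_degree = 2 * cycles_per_degree          # Nyquist: at least 2 samples per cycle

pixels = tuple(f * pixels_per_degree for f in fov_deg)
print(f"angular resolution: {pixels_per_degree} px/deg")
print(f"minimum pixel count to support 12 cpd over the FOV: {pixels[0]} x {pixels[1]} "
      f"(~{pixels[0] * pixels[1] / 1e6:.1f} MP per eye)")
```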

11.
IEEE Trans Vis Comput Graph ; 24(11): 2906-2916, 2018 11.
Article in English | MEDLINE | ID: mdl-30207958

ABSTRACT

We describe a system that dynamically corrects the focus of the real world surrounding the user's near-eye display and, simultaneously, the focus of the internal display for augmented synthetic imagery, with the aim of completely replacing the user's prescription eyeglasses. The ability to adjust focus for both real and virtual stimuli will be useful for a wide variety of users, but especially for users over 40 years of age who have a limited accommodation range. Our proposed solution employs a tunable-focus lens for dynamic prescription vision correction, and a varifocal internal display that sets the virtual imagery at appropriate, spatially registered depths. We also demonstrate a proof-of-concept prototype to verify our design and discuss the challenges of building auto-focus augmented reality eyeglasses for both real and virtual imagery.


Subject(s)
Computer Graphics , Eyeglasses , Image Processing, Computer-Assisted/instrumentation , Image Processing, Computer-Assisted/methods , User-Computer Interface , Virtual Reality , Adult , Humans
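A hedged sketch of the dynamic focus logic this abstract describes: for the fixated real-world distance, the see-through tunable lens supplies the focusing power a presbyopic eye can no longer provide, and the varifocal display is driven to the same depth so spatially registered virtual content stays in focus. The accommodation figures and prescription are illustrative assumptions.

```python
def autofocus_state(fixation_dist_m, prescription_D, accommodation_range_D):
    demand_D = 1.0 / fixation_dist_m                  # dioptric demand at this distance
    # Near-addition: the part of the demand the eye cannot cover (presbyopia).
    near_add_D = max(0.0, demand_D - accommodation_range_D)
    return {
        "see_through_lens_D": prescription_D + near_add_D,    # distance Rx plus near add
        "varifocal_display_depth_m": fixation_dist_m,          # virtual image at the same depth
    }

# Example: a user with ~1 D of accommodation left and a -2.0 D distance prescription
# reading a label at 40 cm.
print(autofocus_state(fixation_dist_m=0.4, prescription_D=-2.0, accommodation_range_D=1.0))
```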