Results 1 - 20 of 81
1.
Opt Express ; 32(7): 11296-11306, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38570980

ABSTRACT

The tabletop three-dimensional light field display is a compelling display technology that can simultaneously provide stereoscopic vision to multiple viewers around the lateral side of the device. However, if a flat-panel light field display device is simply placed horizontally and viewed from directly above, the visual frustum is tilted and the 3D content outside the display panel becomes invisible; the large oblique viewing angle also leads to serious aberrations. In this paper, we demonstrate what we believe to be a new vertically spliced light field cave display system with extended depth content. Separate optimization of the different compound lens arrays attenuates the aberrations arising at different oblique viewing angles, and a local heating fitting method is implemented to ensure the accuracy of the fabrication process. The image coding method and the correction of the multiple viewpoints realize the correct construction of spliced voxels. In the experiment, a high-definition and precisely spliced 3D city terrain scene is demonstrated on the prototype with a correct oblique perspective over a 100-degree horizontal viewing range. We envision that this research will provide inspiration for future immersive, large-scale, glasses-free virtual reality display technologies.

2.
Opt Express ; 32(6): 9857-9866, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38571210

ABSTRACT

The three-dimensional (3D) light field display (LFD) with dense views can provide smooth motion parallax for the human eye. Increasing the number of views widens the lens pitch, however, which decreases the view resolution. In this paper, an approach to smoothing motion parallax based on optimizing the divergence angle of the light beam (DALB) for 3D LFDs with a narrow pitch is proposed. The DALB is controlled through lens design. A view-fitting optimization algorithm is established based on a mathematical model relating the DALB to the view distribution, and the lens is then reverse-designed from the optimization results. A co-designed convolutional neural network (CNN) is used to implement the algorithm. The optical experiment shows that a 3D image with smooth motion parallax is achievable through the proposed method.

3.
Sci Rep ; 13(1): 19372, 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37938607

ABSTRACT

Learning-based computer-generated hologram (CGH) generation demonstrates great potential for real-time high-quality holographic displays. However, real-time 4K CGH generation for 3D scenes remains a challenge due to the computational burden. Here, a variant convolutional neural network (CNN) is presented for CGH encoding with learned layered initial phases for layered CGH generation. Specifically, the CNN predicts the CGH from the input complex amplitude on the CGH plane, and the learned initial phases act as a universal phase for any target image at the corresponding depth layer. These phases are generated during the training of the encoding CNN to further optimize quality. The CNN is trained to encode 3D CGHs by randomly selecting the depth layer during training, and it contains only 938 parameters. The generation time for a 2D 4K CGH is 18 ms, increasing by 12 ms for each additional layer in a layered 3D scene. The average peak signal-to-noise ratio (PSNR) of each layer is above 30 dB over the depth range from 160 to 210 mm. Experiments verify that the method can achieve real-time layered 4K CGH generation.
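
As background for the layered pipeline described above, the following sketch (our illustration, not the authors' released code) propagates each depth layer to the hologram plane with the standard angular spectrum method and sums the complex fields before a phase-only encoding step; the layer depths, wavelength, pixel pitch, and the naive angle-based encoding at the end are placeholders for the learned CNN encoder.

import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))          # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

# Hypothetical layered scene: target amplitudes per layer plus learned initial phases.
wavelength, pitch = 532e-9, 3.74e-6                          # placeholder values
layer_depths = [0.16, 0.185, 0.21]                           # metres, within the 160-210 mm range
layers = [np.random.rand(1072, 1920) for _ in layer_depths]  # stand-in target amplitudes
init_phases = [np.random.rand(1072, 1920) * 2 * np.pi for _ in layer_depths]

field_on_cgh = sum(
    angular_spectrum(a * np.exp(1j * p), wavelength, pitch, -z)
    for a, p, z in zip(layers, init_phases, layer_depths)
)
phase_only_cgh = np.angle(field_on_cgh)   # the trained CNN encoder would replace this naive step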

4.
Opt Express ; 31(20): 32273-32286, 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37859034

ABSTRACT

Tabletop light field displays are compelling display technologies that offer stereoscopic vision and can present annular viewpoint distributions to multiple viewers around the display device. When a lens array is employed to realize an integral imaging tabletop light field display, there is a critical trade-off between increasing the angular resolution and the spatial resolution. Moreover, because the viewers are located around the device, the central viewing range of the reconstructed 3D images is wasted. In this paper, we explore what we believe to be a new method for realizing tabletop flat-panel light field displays that improves both pixel utilization efficiency and the angular resolution of the tabletop 3D display. A 360-degree directional micro prism array is newly designed to refract the collimated light rays to different viewing positions and form viewpoints, so that a uniform 360-degree annular viewpoint distribution can be accurately formed. In the experiment, a micro prism array sample is fabricated to verify the performance of the proposed tabletop flat-panel light field display system. One hundred viewpoints are uniformly distributed over the 360-degree viewing area, providing a full-color, smooth-parallax 3D scene.
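
To make the prism geometry concrete, here is a small sketch under assumptions of our own (not the paper's design data): each facet is hit at normal incidence on one face and refracts at the inclined exit face, so Snell's law gives sin(theta_out) = n*sin(A) and a deflection of theta_out - A; the refractive index, target tilt, and 100 uniformly spaced azimuths are placeholders.

import numpy as np

def deflection_deg(apex_deg, n=1.49):                        # n: placeholder refractive index
    """Deflection of a prism facet for normal incidence on the first face."""
    apex = np.radians(apex_deg)
    return np.degrees(np.arcsin(np.clip(n * np.sin(apex), -1, 1)) - apex)

# 100 viewpoints uniformly spaced around 360 degrees of azimuth.
azimuths_deg = np.arange(100) * 3.6

# Pick the facet apex angle whose deflection best matches an assumed target tilt.
target_tilt_deg = 20.0
candidates = np.linspace(1.0, 40.0, 400)
best_apex = candidates[np.argmin(np.abs(deflection_deg(candidates) - target_tilt_deg))]
print(f"apex angle ~ {best_apex:.1f} deg for ~{target_tilt_deg} deg deflection")
print("facet azimuths (first five):", azimuths_deg[:5])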

5.
Opt Express ; 31(18): 29664-29675, 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37710762

ABSTRACT

With the development of three-dimensional (3D) light-field display technology, 3D scenes with correct location and depth information can be perceived without wearing any external device. Traditional portrait stylization methods can only generate 2D stylized portrait images, so it is difficult to produce high-quality stylized portrait content for 3D light-field displays, which require content with accurate depth and spatial information that 2D images alone cannot provide. New portrait stylization techniques are therefore needed to meet the requirements of 3D light-field displays. A portrait stylization method for 3D light-field displays is proposed that maintains the consistency of the dense views of the light-field display when the 3D stylized portrait is generated. An example-based portrait stylization method is used to transfer the designated style image to the portrait image, which prevents the loss of contour information in 3D light-field portraits. To minimize color discrepancies and further constrain the contour details of the portraits, a Laplacian loss function is introduced into the pre-trained deep learning model. The three-dimensional representation of the stylized portrait scene is reconstructed, and the stylized 3D light-field image of the portrait is generated with a mask-guided light-field coding method. Experimental results demonstrate the effectiveness of the proposed method, which can use real portrait photographs to generate high-quality 3D light-field portrait content.
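
The abstract mentions a Laplacian loss used to constrain contour detail; a minimal sketch of one common formulation is shown below, assuming a fixed 3x3 Laplacian kernel applied per channel and an L1 penalty between the filtered outputs. The kernel choice and the way the term is weighted are our assumptions, not the paper's exact definition.

import torch
import torch.nn.functional as F

def laplacian_loss(pred, target):
    """L1 distance between Laplacian-filtered images (a common contour-preserving term)."""
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]], device=pred.device).view(1, 1, 3, 3)
    c = pred.shape[1]
    kernel = kernel.repeat(c, 1, 1, 1)                       # one kernel per channel
    lap_pred = F.conv2d(pred, kernel, padding=1, groups=c)
    lap_target = F.conv2d(target, kernel, padding=1, groups=c)
    return F.l1_loss(lap_pred, lap_target)

# Usage sketch: total loss = style/content terms + lambda * laplacian_loss(stylized, source).
stylized = torch.rand(1, 3, 256, 256)
source = torch.rand(1, 3, 256, 256)
loss = laplacian_loss(stylized, source)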

6.
Appl Opt ; 62(16): E83-E91, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37706893

ABSTRACT

In this paper, a photonic crystal fiber (PCF) sensor based on the surface plasmon resonance (SPR) effect for refractive index (RI) detection is proposed. We design a D-shaped polished PCF structure consisting of air holes arranged in a hexagonal lattice. The silver film is coated on the middle channel of the polished surface of the PCF. The finite element method is used to analyze the propagation characteristics of the proposed D-shaped SPR-PCF sensor. Simulation results show that the proposed D-shaped SPR-PCF sensor has a maximum wavelength sensitivity of 30,000 nm/RIU, an average wavelength sensitivity of 6785.71 nm/RIU, and a maximum resolution of 3.33×10⁻⁶ RIU in the RI range of 1.22-1.36. Owing to the high wavelength sensitivity in the considered RI range, the proposed D-shaped SPR-PCF sensor is suitable for applications in water contamination detection, liquid concentration measurement, food safety monitoring, etc.
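
The quoted resolution can be checked with the convention commonly used for SPR sensors, resolution = minimum detectable wavelength shift / wavelength sensitivity; the 0.1 nm instrument limit below is an assumption on our part, not stated in the abstract.

# Worked check of the quoted figures.
max_sensitivity_nm_per_riu = 30_000.0
min_detectable_shift_nm = 0.1        # assumed spectrometer resolution

resolution_riu = min_detectable_shift_nm / max_sensitivity_nm_per_riu
print(f"resolution ~ {resolution_riu:.2e} RIU")   # ~3.33e-06 RIU, matching the abstract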

7.
Opt Express ; 31(12): 20505-20517, 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37381444

ABSTRACT

A true-color light-field display system with a large depth of field (DOF) is demonstrated. Reducing crosstalk between viewpoints and increasing viewpoint density are the keys to realizing a light-field display system with a large DOF. The aliasing and crosstalk of light beams in the light control unit (LCU) are reduced by adopting a collimated backlight and reversely placing the aspheric cylindrical lens array (ACLA). One-dimensional (1D) light-field encoding of halftone images increases the number of controllable beams within the LCU and improves viewpoint density. However, 1D light-field encoding reduces the color depth of the light-field display system, so joint modulation of the size and arrangement of halftone dots (JMSAHD) is used to increase the color depth. In the experiment, a three-dimensional (3D) model was constructed using halftone images generated by JMSAHD, and a light-field display system with a viewpoint density of 1.45 viewpoints per degree and a DOF of 50 cm was achieved over a 100° viewing angle.

8.
Opt Express ; 31(11): 18017-18025, 2023 May 22.
Article in English | MEDLINE | ID: mdl-37381520

ABSTRACT

Image visual quality is of fundamental importance for three-dimensional (3D) light-field displays. The pixels of a light-field display are enlarged by the imaging of the light-field system, increasing the graininess of the image, which leads to a severe decline in image edge smoothness and overall image quality. In this paper, a joint optimization method is proposed to minimize the "sawtooth edge" phenomenon of reconstructed images in light-field display systems. In the joint optimization scheme, neural networks are used to simultaneously optimize the point spread functions of the optical components and the elemental images, and the optical components are designed based on the results. The simulations and experimental data show that a less grainy 3D image is achievable through the proposed joint edge-smoothing method.
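
To illustrate the kind of joint optimization the abstract describes, here is a minimal sketch under assumptions of our own: the reconstructed view is modeled as the elemental image convolved with a parameterized Gaussian point spread function, and both the image and the PSF width are updated by gradient descent. The PSF model, target, and loss are placeholders, not the paper's optical model.

import torch
import torch.nn.functional as F

def gaussian_psf(sigma, size=11):
    """Normalized Gaussian PSF parameterized by a differentiable sigma."""
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

target = torch.rand(1, 1, 64, 64)                    # stand-in for the desired view
elemental = torch.rand(1, 1, 64, 64, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)    # optimize PSF width in log space

opt = torch.optim.Adam([elemental, log_sigma], lr=1e-2)
for _ in range(200):
    psf = gaussian_psf(torch.exp(log_sigma))
    recon = F.conv2d(elemental, psf, padding=5)      # simulated reconstruction
    loss = F.mse_loss(recon, target)
    opt.zero_grad()
    loss.backward()
    opt.step()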

9.
Biosens Bioelectron ; 234: 115337, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37126876

ABSTRACT

The rapid detection of low concentrations of Salmonella Typhimurium (S. Typhimurium) is an essential preventive measure for food safety and the prevention of foodborne illness. The study presented in this paper addresses this critical issue by proposing a single mode-tapered seven core-single mode (STSS) fiber ring laser (FRL) biosensor for S. Typhimurium detection. The experimental results show that the specific detection time of S. Typhimurium is less than 20 min and that the wavelength shift reaches -0.906 nm for an S. Typhimurium solution of 10 cells/mL. Furthermore, at a lower concentration of 1 cell/mL, a shift of -0.183 nm is observed in 9% of samples (1/11), which indicates that the proposed FRL biosensor has the ability to detect 1 cell/mL of S. Typhimurium. In addition, the detection results in chicken and pickled pork samples deviate from the measurements in phosphate-buffered saline by an average of -27% and -23%, respectively. Taken together, these results show that the proposed FRL biosensor may have potential applications in food safety monitoring, medical diagnostics, and related fields.


Subject(s): Biosensing Techniques; Biosensing Techniques/methods; Salmonella typhimurium; Food Microbiology; Food; Food Safety
10.
Article in English | MEDLINE | ID: mdl-37022034

ABSTRACT

Holographic displays are ideal display technologies for virtual and augmented reality because they provide all visual cues. However, real-time high-quality holographic displays are difficult to achieve because existing algorithms generate high-quality computer-generated holograms (CGHs) inefficiently. Here, a complex-valued convolutional neural network (CCNN) is proposed for phase-only CGH generation. The CCNN-CGH architecture is effective with a simple network structure built around the characteristics of complex amplitude. A holographic display prototype is set up for optical reconstruction. Experiments verify that state-of-the-art performance is achieved in terms of quality and generation speed among existing end-to-end neural holography methods using the ideal wave propagation model. The generation speed is three times faster than HoloNet and one-sixth faster than Holo-encoder, and the peak signal-to-noise ratio (PSNR) is increased by 3 dB and 9 dB, respectively. Real-time high-quality CGHs are generated at 1920×1072 and 3840×2160 resolutions for dynamic holographic displays.
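
A complex-valued convolution can be implemented with two real-valued convolutions; the sketch below shows one standard way to do this in PyTorch and is our illustration of the building block, not the CCNN-CGH architecture itself. The layer sizes and the crude phase readout at the end are placeholders.

import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution via two real convolutions:
    (a + ib) * (w_r + i*w_i) = (a*w_r - b*w_i) + i*(a*w_i + b*w_r)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, real, imag):
        out_real = self.conv_r(real) - self.conv_i(imag)
        out_imag = self.conv_i(real) + self.conv_r(imag)
        return out_real, out_imag

# Usage sketch: feed the complex amplitude on the hologram plane and read out a phase map.
layer = ComplexConv2d(1, 8)
real, imag = torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)
out_r, out_i = layer(real, imag)
phase = torch.atan2(out_i.sum(1), out_r.sum(1))     # crude phase readout for illustration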

11.
Micromachines (Basel) ; 14(3)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36985013

ABSTRACT

In this paper, we propose a method to generate multi-depth phase-only holograms using a stochastic gradient descent (SGD) algorithm with a weighted complex loss function and masked multi-layer diffraction. The 3D scene is represented by a combination of layers at different depths. During wave propagation between layers at different depths, the complex amplitude of one layer gradually spreads and produces occlusion at another layer. To address this occlusion problem, a mask is applied while the layers are diffracted, so that both forward and backward wave propagation between layers suffer less inter-layer occlusion. In addition, a weighted complex loss function is used in the gradient descent optimization, which compares the real part, the imaginary part, and the amplitude of the focus region between the reconstructed images of the hologram and the target images. A weight parameter adjusts the ratio of the amplitude loss of the focus region within the whole loss function; this weighted amplitude term reduces the interference of the defocus region with the focus region. Simulations and experiments validate the effectiveness of the proposed method.
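
One plausible form of such a weighted complex loss is sketched below, assuming a binary focus-region mask per depth layer, squared-error terms on the real part, imaginary part, and amplitude, and a single weight balancing the amplitude term; the exact terms and weighting in the paper may differ.

import torch

def weighted_complex_loss(recon, target, mask, weight=0.5):
    """Sketch of a weighted complex loss over the in-focus region.

    recon, target: complex tensors (reconstructed and target complex amplitudes)
    mask: 1 inside the focus region of this depth layer, 0 elsewhere
    weight: assumed ratio of the amplitude term within the total loss
    """
    diff_real = (recon.real - target.real) * mask
    diff_imag = (recon.imag - target.imag) * mask
    diff_amp = (recon.abs() - target.abs()) * mask
    complex_term = diff_real.pow(2).mean() + diff_imag.pow(2).mean()
    amplitude_term = diff_amp.pow(2).mean()
    return (1 - weight) * complex_term + weight * amplitude_term

# Usage sketch with stand-in tensors.
recon = torch.complex(torch.rand(256, 256), torch.rand(256, 256))
target = torch.complex(torch.rand(256, 256), torch.rand(256, 256))
mask = (torch.rand(256, 256) > 0.5).float()
loss = weighted_complex_loss(recon, target, mask)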

12.
Opt Express ; 31(2): 1125-1140, 2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36785154

ABSTRACT

Real-time dense view synthesis based on three-dimensional (3D) reconstruction of real scenes is still a challenge for 3D light-field displays: reconstructing an entire model and then synthesizing the target views by volume rendering is time-consuming. To address this issue, the Light-field Visual Hull (LVH) is presented with free-viewpoint texture mapping for 3D light-field display, which can directly produce synthetic images from the 3D reconstruction of real scenes in real time using forty free-viewpoint RGB cameras. An end-to-end subpixel calculation procedure for the synthetic image is demonstrated, which defines a rendering ray for each subpixel based on light-field image coding. During ray propagation, only the essential spatial point of the target model is located for the corresponding subpixel by projecting the frontmost point of the ray to all the free viewpoints, and the color of each subpixel is identified in one pass. A dynamic free-viewpoint texture mapping method is proposed to determine the correct texture given the free-viewpoint cameras. To improve efficiency, only the visible 3D positions and textures that contribute to the synthetic image are calculated based on backward ray tracing, rather than computing the entire 3D model and generating all elemental images. In addition, an incremental calibration method based on dividing the cameras into groups is proposed to ensure calibration accuracy. Experimental results show the validity of the method. All the rendered views are analyzed to justify the texture mapping method, and the PSNR is improved by an average of 11.88 dB. Finally, the LVH achieves a natural and smooth viewing effect at 4K resolution and a frame rate of 25-30 fps with a large viewing angle.
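
The per-subpixel rendering described above starts from a light-field coding step that assigns each display subpixel to a view; a common slanted-lenticular-style mapping is sketched below with made-up panel and lens parameters, as one illustration of how a rendering ray could be chosen per subpixel. It is not the LVH coding itself.

import numpy as np

# Hypothetical panel/lens parameters (illustrative only).
width, height, n_views = 1920, 1080, 40
lens_pitch_subpix = 7.5          # lens pitch measured in subpixel columns
slant = 1.0 / 3.0                # slant of the lenticular sheet in subpixels per row

# View index for every subpixel: subpixel column = 3*x + c (c = 0, 1, 2 for R, G, B).
y, x = np.mgrid[0:height, 0:width]
view_map = np.empty((height, width, 3), dtype=np.int32)
for c in range(3):
    phase = ((3 * x + c) - y * slant) % lens_pitch_subpix
    view_map[..., c] = (phase / lens_pitch_subpix * n_views).astype(np.int32)

# Each (y, x, c) entry names the free-viewpoint/virtual view whose rendering ray
# supplies that subpixel's color in the synthetic image.
print(view_map.shape, view_map.min(), view_map.max())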

13.
Micromachines (Basel) ; 14(1)2023 Jan 06.
Article in English | MEDLINE | ID: mdl-36677208

ABSTRACT

Limited by the low space-bandwidth product of the spatial light modulator (SLM), it is difficult to realize a multiview holographic three-dimensional (3D) display. To overcome this problem, a method based on a holographic optical element (HOE), which acts as a light-controlling element, is proposed in this study. The SLM is employed to upload the synthetic phase-only hologram generated with the angular spectrum diffraction theory. A digital grating is introduced in the generation of the hologram to splice the reconstructions and adjust their positions. The HOE, fabricated by computer-generated hologram printing, redirects the reconstructed images of the multiple views into multiple viewing zones. Thus, the modulation function of the HOE must be well designed to avoid crosstalk between perspectives. The experimental results show that the proposed system can achieve a multiview holographic augmented reality (AR) 3D display without crosstalk. The resolution of each perspective is 4K, which is higher than that of existing multiview 3D display systems.
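
A digital grating can be modeled as a linear phase ramp added to the hologram phase, which steers the reconstruction laterally; the sketch below shows this standard construction with placeholder SLM pitch, wavelength, and grating period, not the paper's exact encoding.

import numpy as np

ny, nx = 1080, 1920
pitch = 8e-6                      # SLM pixel pitch (assumed)
wavelength = 532e-9
grating_period = 64 * pitch       # assumed digital grating period

y, x = np.mgrid[0:ny, 0:nx]
hologram_phase = np.random.rand(ny, nx) * 2 * np.pi           # stand-in phase-only CGH
grating_phase = 2 * np.pi * (x * pitch) / grating_period      # linear ramp along x

combined = np.mod(hologram_phase + grating_phase, 2 * np.pi)  # phase to upload to the SLM

# Steering angle introduced by the grating: sin(theta) = wavelength / period.
theta = np.degrees(np.arcsin(wavelength / grating_period))
print(f"reconstruction steered by ~{theta:.3f} degrees")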

14.
J Opt Soc Am A Opt Image Sci Vis ; 39(12): 2131-2141, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36520728

ABSTRACT

Light field (LF) image super-resolution (SR) can improve the limited spatial resolution of LF images by using complementary information from different perspectives. However, current LF image SR methods use only RGB data and exploit the information among perspectives implicitly, ignoring both the information lost in converting raw data to RGB and the explicit use of structural information. To address the first issue, a data generation pipeline is developed to collect LF raw data for LF image SR. In addition, to make full use of the multiview information, an end-to-end convolutional neural network architecture (LF-RawSR) is proposed for LF image SR. Specifically, an aggregation module first fuses the angular information based on a volume transformer with a plane-sweep volume. The aggregated feature is then warped to all LF views using a cross-view transformer to exploit nonlocal dependencies. The experimental results demonstrate that the method outperforms existing state-of-the-art methods at comparable computational cost, restoring fine details and clear structures.

15.
J Opt Soc Am A Opt Image Sci Vis ; 39(12): 2316-2324, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36520753

ABSTRACT

Due to the limited pixel pitch of the spatial light modulator (SLM), the field of view (FOV) is insufficient to meet the needs of binocular observation. Here, an optimized light-controlling method for a binocular holographic three-dimensional (3D) display system based on a holographic optical element (HOE) is proposed. The synthetic phase-only hologram uploaded onto the SLM is generated with the layer-based angular spectrum diffraction theory, and two different reference waves are introduced to separate the left and right views of the 3D scene. The HOE, with directional light-controlling parameters, guides the binocular information into the left-eye and right-eye viewing zones simultaneously. Optical experiments verify that the proposed system successfully achieves a binocular holographic augmented reality 3D effect with real physical depth, which eliminates the accommodation-vergence conflict and the associated visual fatigue. For each perspective, the FOV is 8.7° when the focal length of the HOE is 10 cm, and the width of the viewing zone is 2.3 cm at a viewing distance of 25 cm.

16.
Opt Express ; 30(24): 44201-44217, 2022 Nov 21.
Article in English | MEDLINE | ID: mdl-36523100

ABSTRACT

Three-dimensional (3D) light-field displays can provide an immersive visual experience, which has attracted significant attention. However, generating high-quality 3D light-field content of the real world is still a challenge because it is difficult to capture dense high-resolution viewpoints of a real scene with a camera array. Novel view synthesis based on CNNs can generate dense high-resolution viewpoints from sparse inputs but suffers from high computational resource consumption, low rendering speed, and a limited camera baseline. Here, a two-stage virtual view synthesis method based on cutoff-NeRF and 3D voxel rendering is presented, which can quickly synthesize dense novel views with smooth parallax and 3D images with a resolution of 7680 × 4320 for the 3D light-field display. In the first stage, an image-based cutoff-NeRF is proposed to implicitly represent the distribution of scene content and improve the quality of the virtual views. In the second stage, a 3D voxel-based image rendering and coding algorithm is presented, which quantizes the scene content distribution learned by cutoff-NeRF to render high-resolution virtual views quickly and output high-resolution 3D images. Within this stage, a coarse-to-fine 3D voxel rendering method is proposed to effectively improve the accuracy of the voxel representation, and a 3D voxel-based off-axis pixel encoding method is proposed to speed up 3D image generation. Finally, a sparse-view dataset is built to analyze the effectiveness of the proposed method. Experimental results demonstrate its effectiveness: novel views and high-resolution 3D images can be synthesized quickly in real 3D scenes and physical simulation environments. The PSNR of the virtual views is about 29.75 dB, the SSIM is about 0.88, and the synthesis time for an 8K 3D image is about 14.41 s. We believe that this fast, high-resolution virtual viewpoint synthesis method can effectively advance the application of 3D light-field displays.
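
The coarse-to-fine voxel idea can be illustrated with a simple two-pass occupancy refinement: evaluate a density field on a coarse grid, keep only cells above a cutoff, and subdivide those cells for fine sampling. The density function, grid sizes, and cutoff below are placeholders standing in for cutoff-NeRF, not the paper's implementation.

import numpy as np

def density(points):
    """Stand-in for a learned scene density (cutoff-NeRF would supply this)."""
    return np.exp(-np.linalg.norm(points - 0.5, axis=-1) * 8.0)

def coarse_to_fine(coarse_res=16, refine=4, cutoff=0.3):
    # Coarse pass: evaluate density at the centers of a low-resolution voxel grid.
    lin = (np.arange(coarse_res) + 0.5) / coarse_res
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
    occupied = density(grid) > cutoff                 # keep voxels above the cutoff

    # Fine pass: subdivide only the occupied voxels and re-evaluate density there.
    fine_points = []
    step = 1.0 / coarse_res
    offsets = (np.arange(refine) + 0.5) / refine * step
    for idx in np.argwhere(occupied):
        origin = idx * step
        ox, oy, oz = np.meshgrid(*(origin[i] + offsets for i in range(3)), indexing="ij")
        fine_points.append(np.stack([ox, oy, oz], axis=-1).reshape(-1, 3))
    fine_points = np.concatenate(fine_points) if fine_points else np.empty((0, 3))
    return fine_points, density(fine_points)

points, sigma = coarse_to_fine()
print(points.shape, sigma.shape)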

17.
Micromachines (Basel) ; 13(12)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36557406

ABSTRACT

A holographic function screen (HFS) can recompose the wavefront and re-modulate the light-field distribution of a three-dimensional (3D) light field display (LFD) system. However, the spread function of existing HFSs is not well suited to integral imaging (II) 3D LFD systems, which causes crosstalk and reduces the sharpness of the reconstructed 3D images. An optimized holographic function screen with a flat-top rectangular spread function (FRSF) is designed for an II 3D LFD system. A ray-tracing simulation verifies that the proposed diffusion function can suppress crosstalk and improve the overall display effect.

18.
Opt Express ; 30(12): 22260-22276, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-36224928

ABSTRACT

Three-dimensional (3D) light-field displays have achieved promising improvements in recent years. However, since dense-view images cannot be captured quickly in real-world 3D scenes, real-time 3D light-field display is still challenging to achieve in real scenes, especially for high-resolution 3D displays. Here, a real-time dense-view 3D light-field display method based on image color correction and self-supervised optical flow estimation is proposed, realizing a high-quality and high-frame-rate 3D light-field display simultaneously. A sparse camera array is first used to capture sparse-view images. To eliminate the color deviation of the sparse views, the imaging process of the camera is analyzed, and a practical multi-layer perceptron (MLP) network is proposed to perform color calibration. Given sparse views with consistent color, the optical flow is estimated at high speed by a lightweight convolutional neural network (CNN) that learns the flow from input image pairs in a self-supervised manner. Dense-view images are then synthesized with an inverse warp operation. Quantitative and qualitative experiments are performed to evaluate the feasibility of the proposed method. Experimental results show that over 60 dense-view images at a resolution of 1024 × 512 can be generated from 11 input views at a frame rate of over 20 fps, which is 4× faster than the previous optical flow estimation methods PWC-Net and LiteFlowNet3. Finally, a large viewing angle and high-quality 3D light-field display at 3840 × 2160 resolution are achieved in real time.
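
The inverse warp step can be sketched with a standard flow-based backward warp built on grid_sample; the flow scaling used to place an intermediate view, and the tensor sizes, are our assumptions rather than the paper's exact procedure.

import torch
import torch.nn.functional as F

def inverse_warp(src, flow):
    """Warp a source view with a dense flow field (displacements from target to source).

    src:  (N, C, H, W) source image
    flow: (N, 2, H, W) per-pixel displacement in pixels (x, y)
    """
    n, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=src.dtype),
                            torch.arange(w, dtype=src.dtype), indexing="ij")
    grid_x = xs.unsqueeze(0) + flow[:, 0]            # sample locations in the source view
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize to [-1, 1] as required by grid_sample.
    grid = torch.stack([2 * grid_x / (w - 1) - 1, 2 * grid_y / (h - 1) - 1], dim=-1)
    return F.grid_sample(src, grid, mode="bilinear", align_corners=True)

# Usage sketch: synthesize an intermediate view by scaling the estimated flow.
src = torch.rand(1, 3, 512, 1024)
flow = torch.rand(1, 2, 512, 1024) * 2 - 1           # stand-in for the CNN-estimated flow
intermediate = inverse_warp(src, 0.5 * flow)         # halfway view between the two inputs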

19.
Opt Express ; 30(10): 17577-17590, 2022 May 09.
Article in English | MEDLINE | ID: mdl-36221577

ABSTRACT

Accurate, fast, and reliable modeling and optimization methods play a crucial role in designing light field display (LFD) systems. Here, an automatic co-design method for LFD systems based on simulated annealing and visual simulation is proposed. The processes of LFD content acquisition and optical reconstruction are modeled and simulated, and an objective function for evaluating the display effect of the LFD system is established from the simulation results. Simulated annealing is then used to find the LFD system parameters that maximize this objective function. The validity of the proposed method is confirmed through optical experiments.
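
A generic simulated-annealing loop of the kind this co-design could use is sketched below; the Gaussian proposal, cooling schedule, and the toy objective (standing in for the score produced by the visual simulation) are all our placeholders.

import math
import random

def simulated_annealing(objective, init_params, step=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Generic simulated-annealing search for parameters that maximize `objective`."""
    params = list(init_params)
    best = current = objective(params)
    best_params = params[:]
    t = t0
    for _ in range(iters):
        candidate = [p + random.gauss(0.0, step) for p in params]
        value = objective(candidate)
        # Accept better candidates always, worse ones with a temperature-dependent probability.
        if value > current or random.random() < math.exp((value - current) / t):
            params, current = candidate, value
            if value > best:
                best, best_params = value, candidate[:]
        t *= cooling
    return best_params, best

# Toy objective: in the paper this would be the display-quality score computed from the
# visual simulation of LFD content acquisition and optical reconstruction.
demo_objective = lambda p: -((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2)
print(simulated_annealing(demo_objective, [0.0, 0.0]))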

20.
Opt Express ; 30(22): 40087-40100, 2022 Oct 24.
Article in English | MEDLINE | ID: mdl-36298947

ABSTRACT

Holographic display is an ideal technology for near-eye displays in virtual and augmented reality applications because it can provide all depth perception cues. However, existing computer-generated hologram (CGH) methods sacrifice depth performance for real-time calculation. In this paper, a volume representation and an improved ray tracing algorithm are proposed for real-time CGH generation with enhanced depth performance. Using the single fast Fourier transform (S-FFT) method, the volume representation keeps the calculation burden low and is efficient for the Graphics Processing Unit (GPU) to implement the diffraction calculation. The improved ray tracing algorithm accounts for accurate depth cues in complex 3D scenes with reflection and refraction, which are represented by adding extra shapes to the volume. Numerical evaluation is used to verify the depth precision, and experiments show that the proposed method provides a real-time interactive holographic display with accurate depth and a large depth range. The CGH of a 3D scene with 256 depth values is calculated at 30 fps, and the depth range can reach hundreds of millimeters. Depth cues of reflected and refracted images are also reconstructed correctly. The proposed method significantly outperforms existing fast methods by achieving a more realistic 3D holographic display with ideal depth performance and real-time calculation at the same time.
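
The S-FFT step referred to above is the standard single-FFT Fresnel propagation, which needs only one FFT per depth value; a minimal NumPy sketch is below, with the wavelength, pitch, and distance as placeholder values and a square field assumed for simplicity.

import numpy as np

def sfft_fresnel(field, wavelength, pitch, z):
    """Single-FFT Fresnel propagation (S-FFT): one FFT per depth value.

    field: complex square input field sampled with pixel pitch `pitch`
    Returns the propagated field and its output sampling pitch lambda*z/(N*pitch).
    """
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    pre_phase = np.exp(1j * k / (2 * z) * (X**2 + Y**2))       # quadratic phase at the input

    out_pitch = wavelength * z / (n * pitch)
    xo = (np.arange(n) - n / 2) * out_pitch
    Xo, Yo = np.meshgrid(xo, xo)
    post_phase = (np.exp(1j * k * z) / (1j * wavelength * z)
                  * np.exp(1j * k / (2 * z) * (Xo**2 + Yo**2)))  # quadratic phase at the output

    out = post_phase * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * pre_phase)))
    return out, out_pitch

# Usage sketch: propagate one depth slice of the volume (placeholder values).
slice_field = np.random.rand(1024, 1024).astype(complex)
propagated, out_pitch = sfft_fresnel(slice_field, 532e-9, 8e-6, 0.2)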
