Results 1-6 of 6
1.
Light Sci Appl ; 13(1): 231, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39237561

ABSTRACT

In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, and then compensating for the resulting defects with deep learning models trained on large amounts of ideal, superior, or alternative data. This strategy has gained popularity for its potential to enhance various aspects of biophotonic imaging. One of its primary motivations is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fast dynamic biological processes. It also promises to simplify hardware requirements and complexity, making advanced imaging standards more accessible in cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recover them through deep learning networks but also, in return, to bolster other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategy. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning applied to compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
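As a toy illustration of the compromise-then-compensate strategy reviewed above, the sketch below deliberately degrades images by 2x block averaging and learns a restoration operator from pairs of (degraded, ideal) data. A linear least-squares map stands in for a deep network, and the low-frequency image model, grid sizes, and sample counts are all illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.cos(np.outer(np.arange(8), np.arange(3)) * np.pi / 8)  # low-frequency basis (8, 3)

def sample():
    # toy "ideal" image drawn from a 9-dimensional smooth subspace
    c = rng.standard_normal((3, 3))
    return u @ c @ u.T

def degrade(img):
    # deliberate compromise: 8x8 -> 4x4 by 2x2 block averaging
    return img.reshape(4, 2, 4, 2).mean(axis=(1, 3))

ideal = np.stack([sample() for _ in range(500)])     # "ideal, superior" training data
low = np.stack([degrade(x) for x in ideal])          # compromised measurements

X = low.reshape(500, -1)                             # degraded inputs  (N, 16)
Y = ideal.reshape(500, -1)                           # ideal targets    (N, 64)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)            # learned restoration operator

test_img = sample()
restored = (degrade(test_img).reshape(1, -1) @ W).reshape(8, 8)
naive = np.kron(degrade(test_img), np.ones((2, 2)))  # pixel replication baseline
err_model = np.abs(restored - test_img).mean()
err_naive = np.abs(naive - test_img).mean()
```

Because the toy images live in a low-dimensional subspace on which the degradation is invertible, the learned map recovers them almost exactly, whereas naive pixel replication cannot.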

2.
Light Sci Appl ; 13(1): 43, 2024 Feb 04.
Article in English | MEDLINE | ID: mdl-38310118

ABSTRACT

Image denoising, one of the essential inverse problems, aims to remove noise and artifacts from input images. In general, digital image denoising algorithms executed on computers exhibit latency due to the several iterations they run on, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they still introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser that all-optically and non-iteratively cleans various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250 × λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field of view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
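The sketch below illustrates only the generic forward model behind such diffractive processors, not the trained denoiser itself: a phase-only transmissive layer followed by free-space propagation simulated with the angular-spectrum method, plus the output-FoV power bookkeeping. The grid size, wavelength, pixel pitch, distances, and the random (untrained) phase mask are all assumptions.

```python
import numpy as np

N, wl, dx, z = 64, 1.0, 0.5, 40.0            # samples, wavelength, pitch, distance (a.u.)
fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)
arg = 1 - (wl * FX) ** 2 - (wl * FY) ** 2
kz = 2 * np.pi / wl * np.sqrt(np.maximum(arg, 0))
H = np.exp(1j * kz * z) * (arg > 0)          # angular-spectrum kernel, evanescent waves dropped

def propagate(field):
    # free-space propagation over distance z
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(1)
phase_mask = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))  # stand-in for a trained layer

field_in = np.zeros((N, N), complex)
field_in[24:40, 24:40] = 1.0                 # input aperture
field_out = propagate(propagate(field_in) * phase_mask)

fov = np.zeros((N, N), bool)
fov[16:48, 16:48] = True                     # output field of view
efficiency = (np.abs(field_out[fov]) ** 2).sum() / (np.abs(field_in) ** 2).sum()
```

In the paper, the layer phases would be optimized by deep learning so that noise modes land outside the output FoV; here the mask is random, so the efficiency value only demonstrates the accounting, not the reported ~30-40% figure.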

3.
Nat Commun ; 14(1): 6830, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37884504

ABSTRACT

Free-space optical communication becomes challenging when an occlusion blocks the light path. Here, we demonstrate a direct communication scheme, passing optical information around a fully opaque, arbitrarily shaped occlusion that partially or entirely occludes the transmitter's field of view. In this scheme, an electronic neural network encoder and a passive, all-optical diffractive network-based decoder are jointly trained using deep learning to transfer the optical information of interest around the opaque occlusion of an arbitrary shape. Following its training, the encoder-decoder pair can communicate any arbitrary optical information around opaque occlusions, where the information decoding occurs at the speed of light propagation through passive light-matter interactions, with resilience against various unknown changes in the occlusion shape and size. We also validate this framework experimentally in the terahertz spectrum using a 3D-printed diffractive decoder. Scalable for operation in any wavelength regime, this scheme could be particularly useful in emerging high data-rate free-space communication systems.
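The encoder-decoder idea can be abstracted with linear stand-ins (not the paper's networks): a fixed channel matrix C whose blocked columns play the role of the opaque occlusion, an encoder E that spreads the message across all channel modes, and a decoder obtained here in closed form via a pseudo-inverse rather than joint training. All dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 32                            # message size, channel modes (assumed)
C = rng.standard_normal((m, m))
C[:, : m // 2] = 0.0                    # "occlusion": first half of the input modes is blocked

msg = rng.standard_normal(n)
naive = np.zeros(m)
naive[:n] = msg                         # uncoded transmission uses only blocked modes
blocked = C @ naive                     # nothing gets through

E = rng.standard_normal((m, n))         # encoder spreads the message across all modes
D = np.linalg.pinv(C @ E)               # closed-form decoder, stand-in for joint training
recovered = D @ (C @ (E @ msg))         # message survives the occluded channel
```

The uncoded message is lost entirely, while the encoded one is recovered exactly, which is the linear-algebra core of routing information around an occlusion; the paper's diffractive decoder additionally performs this decoding passively at the speed of light.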

4.
Sci Adv ; 8(48): eadd3433, 2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36459555

ABSTRACT

High-resolution image projection over a large field of view (FOV) is hindered by the restricted space-bandwidth product (SBP) of wavefront modulators. We report a deep learning-enabled diffractive display based on a jointly trained pair of an electronic encoder and a diffractive decoder to synthesize/project super-resolved images using low-resolution wavefront modulators. The digital encoder rapidly preprocesses the high-resolution images so that their spatial information is encoded into low-resolution patterns, projected via a low SBP wavefront modulator. The diffractive decoder processes these low-resolution patterns using transmissive layers structured using deep learning to all-optically synthesize/project super-resolved images at its output FOV. This diffractive image display can achieve a super-resolution factor of ~4, increasing the SBP by ~16-fold. We experimentally validate its success using 3D-printed diffractive decoders that operate in the terahertz spectrum. This diffractive image decoder can be scaled to operate at visible wavelengths and used to design large SBP displays that are compact, low power, and computationally efficient.
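The reported figures follow simple space-bandwidth bookkeeping: a linear super-resolution factor of s per axis multiplies the pixel count, and hence the SBP of the projected image, by s squared. The modulator pixel count below is an illustrative assumption, not a value from the paper.

```python
s = 4                          # linear super-resolution factor per axis (from the abstract)
n_low = 128 * 128              # assumed pixel count of the low-SBP wavefront modulator
n_eff = n_low * s ** 2         # effective pixel count of the super-resolved projection
gain = n_eff // n_low          # SBP increase relative to the modulator
```

With s = 4 the SBP gain is 16-fold, matching the ~16x figure quoted above.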

5.
Appl Opt ; 58(20): 5422-5431, 2019 Jul 10.
Article in English | MEDLINE | ID: mdl-31504010

ABSTRACT

The classical phase retrieval problem is the recovery of a constrained image from the magnitude of its Fourier transform. Although there are several well-known phase retrieval algorithms, including the hybrid input-output (HIO) method, reconstruction performance is generally sensitive to initialization and measurement noise. Recently, deep neural networks (DNNs) have been shown to provide state-of-the-art performance in solving several inverse problems such as denoising, deconvolution, and super-resolution. In this work, we develop a phase retrieval algorithm that utilizes two DNNs together with the model-based HIO method. First, a DNN is trained to remove the HIO artifacts and is used iteratively with the HIO method to improve the reconstructions. After this iterative phase, a second DNN is trained to remove the remaining artifacts. Numerical results demonstrate the effectiveness of our approach, which adds little computational cost compared to the HIO method. Our approach not only achieves state-of-the-art reconstruction performance but is also more robust to different initializations and noise levels.
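The model-based HIO step that the two DNNs wrap around can be sketched as follows. The DNN artifact-removal stages are omitted, and the test object, known-support assumption, beta value, and iteration counts are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
obj = np.zeros((N, N))
obj[12:20, 10:22] = rng.random((8, 12))      # nonnegative toy object
support = obj > 0                            # assume the support is known
M = np.abs(np.fft.fft2(obj))                 # measured Fourier magnitude

def fourier_project(g):
    # impose the measured magnitude, keep the current phase estimate
    G = np.fft.fft2(g)
    return np.real(np.fft.ifft2(M * np.exp(1j * np.angle(G))))

g = rng.random((N, N))                       # random initialization
beta = 0.9
err0 = np.linalg.norm(np.abs(np.fft.fft2(g)) - M)

for _ in range(300):                         # HIO iterations
    gp = fourier_project(g)
    ok = support & (gp >= 0)                 # support + nonnegativity satisfied
    g = np.where(ok, gp, g - beta * gp)      # HIO feedback elsewhere

for _ in range(50):                          # a few error-reduction steps to settle
    g = np.clip(fourier_project(g) * support, 0, None)

err1 = np.linalg.norm(np.abs(np.fft.fft2(g)) - M)
```

In the paper, the residual artifacts remaining after such iterations would be handled by the first trained DNN (used in the loop) and a second DNN applied afterward.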

6.
Appl Opt ; 57(10): 2545-2552, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29714238

ABSTRACT

Wide-field interferometric microscopy is a highly sensitive, label-free, and low-cost biosensing imaging technique capable of visualizing individual biological nanoparticles such as viral pathogens and exosomes. However, further resolution enhancement is necessary to increase the detection and classification accuracy of subdiffraction-limited nanoparticles. In this study, we propose a deep-learning approach, based on coupled deep autoencoders, to improve the resolution of images of L-shaped nanostructures. During training, our method uses microscope image patches and their corresponding ground-truth image patches to learn the transformation between them. After training, the network reconstructs denoised, resolution-enhanced image patches for unseen inputs.
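The patch-pairing step of such training can be sketched as below: corresponding microscope and truth patches are extracted at the same locations to form input/target pairs. The patch size, stride, and random stand-in images are assumptions, and the coupled autoencoders themselves are omitted.

```python
import numpy as np

def paired_patches(low, truth, size=8, stride=4):
    # extract co-located (input, target) patch pairs for training
    pairs = []
    H, W = low.shape
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            pairs.append((low[i:i + size, j:j + size],
                          truth[i:i + size, j:j + size]))
    return pairs

rng = np.random.default_rng(4)
low = rng.random((32, 32))      # stand-in microscope image
truth = rng.random((32, 32))    # stand-in truth image at the same grid
pairs = paired_patches(low, truth)
```

Each pair then serves as one training example for the network that maps low-quality patches to their enhanced counterparts.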
