Results 1 - 4 of 4
1.
Opt Lett ; 48(2): 231-234, 2023 Jan 15.
Article in English | MEDLINE | ID: mdl-36638425

ABSTRACT

Co-design methods have been introduced to jointly optimize various optical systems along with neural network processing. In the literature, the aperture is generally a fixed parameter, although it controls an important trade-off between the depth of focus, the dynamic range, and the noise level in an image. In contrast, we include the aperture in co-design by using a differentiable image formation pipeline that models the effect of the aperture on image noise, dynamic range, and blur. We validate this pipeline on examples of image restoration and extension of the depth of focus. These simple examples illustrate the importance of optimizing the aperture in the co-design framework.


Subject(s)
Lenses; Neural Networks, Computer
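The aperture trade-off this abstract describes can be illustrated with a toy geometric model: a larger aperture collects more light (lower relative shot noise) but produces a larger defocus blur circle. This is only a sketch under simplifying assumptions (thin lens, pure Poisson noise); all parameter values are hypothetical, and it is not the authors' differentiable pipeline.

```python
import math

def defocus_blur_radius(aperture_d, focal=25e-3, focus_dist=2.0, obj_dist=4.0):
    """Geometric blur-circle radius on the sensor for a thin lens (metres).
    The blur grows linearly with the aperture diameter."""
    # lens-to-sensor distance when focused at focus_dist (thin-lens equation)
    s = 1.0 / (1.0 / focal - 1.0 / focus_dist)
    return (aperture_d / 2.0) * s * abs(1.0 / focal - 1.0 / obj_dist - 1.0 / s)

def relative_photon_noise(aperture_d, exposure=1.0, scene_flux=1e10):
    """Relative shot-noise level: collected light grows with the aperture
    area, so the relative Poisson noise falls as 1/aperture_d."""
    photons = scene_flux * exposure * (aperture_d / 2.0) ** 2
    return math.sqrt(photons) / photons

for d in (2e-3, 8e-3):  # 2 mm vs 8 mm aperture diameter
    blur = defocus_blur_radius(d)
    noise = relative_photon_noise(d)
    print(f"D = {d*1e3:.0f} mm: blur {blur*1e6:.1f} um, relative noise {noise:.4f}")
```

Because both quantities are differentiable in the aperture diameter, a trade-off of this kind can sit inside an end-to-end optimization loop.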
2.
Opt Express ; 29(21): 34748-34761, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34809257

ABSTRACT

In this paper, we propose a new method to jointly design a sensor and its neural-network-based processing. Using a differential ray tracing (DRT) model, we simulate the sensor point-spread function (PSF) and its partial derivative with respect to any of the sensor lens parameters. The proposed ray tracing model makes neither the thin-lens nor the paraxial approximation and is valid for any field of view and point source position. Using the gradient backpropagation framework for neural network optimization, any of the lens parameters can then be jointly optimized along with the neural network parameters. We validate our method for image restoration applications using three proofs of concept of focus setting optimization of a given sensor. We provide interpretations of the joint optical and processing optimization results obtained with the proposed method in these simple cases. Our method paves the way to end-to-end design of a neural network and lens using the complete set of optical parameters within the full sensor field of view.
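The joint optical/processing optimization loop can be sketched on a toy problem: one "optical" parameter (a focus setting that controls PSF width) and one "processing" parameter (a deconvolution regularization weight) descend the same restoration loss. Everything below is a hypothetical stand-in, with central finite differences standing in for the backpropagated gradients used in the paper.

```python
def blur_sigma(p, p_star=1.5):
    # hypothetical optical model: PSF width grows quadratically with focus error
    return 0.1 + (p - p_star) ** 2

def restoration_loss(p, w, noise=0.05):
    # schematic trade-off of a linear deconvolution filter: the residual-blur
    # term shrinks as the weight w grows, the amplified-noise term grows with w
    s = blur_sigma(p)
    return s ** 2 / (1.0 + w) + noise ** 2 * w

def grad(f, x, eps=1e-6):
    # central finite difference, standing in for autograd backpropagation
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

p, w = 0.0, 0.0          # focus setting and "network" weight, trained jointly
lr_p, lr_w = 0.1, 20.0
for _ in range(2000):
    p -= lr_p * grad(lambda v: restoration_loss(v, w), p)
    w -= lr_w * grad(lambda v: restoration_loss(p, v), w)
    w = max(w, 0.0)
print(f"focus p = {p:.3f} (optimum 1.5), weight w = {w:.3f}")
```

Both parameters converge to the joint optimum (here p = 1.5 and w = 1.0 analytically), which is the essential mechanism behind end-to-end lens/network design.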

3.
J Opt Soc Am A Opt Image Sci Vis ; 38(10): 1489-1500, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-34612979

ABSTRACT

In this paper, we present a generic performance model able to evaluate the accuracy of depth estimation using depth from defocus (DFD). This model only requires the sensor point spread function at a given depth to evaluate the theoretical accuracy of depth estimation. Hence, it can be used for any conventional or unconventional system, using either one or several images. This model is validated experimentally on two unconventional DFD cameras, using either a coded aperture or a lens with chromatic aberration. Then, we use the proposed model for the end-to-end design of a 3D camera using an unconventional lens with chromatic aberration, for the specific use case of small unmanned aerial vehicle navigation.
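One common way to build such a PSF-based performance model is a Cramer-Rao lower bound: the achievable depth accuracy is limited by how sensitive the PSF is to depth. The sketch below assumes a 1-D Gaussian PSF whose width grows with defocus and additive Gaussian pixel noise; all constants are hypothetical and this is not necessarily the paper's exact model.

```python
import numpy as np

def psf(z, grid, z_focus=2.0, k=20.0, eps0=0.5):
    """Hypothetical depth-dependent Gaussian PSF sampled on a 1-D pixel grid:
    blur width (pixels) grows with the defocus |1/z - 1/z_focus|."""
    sigma = eps0 + k * abs(1.0 / z - 1.0 / z_focus)
    h = np.exp(-grid ** 2 / (2.0 * sigma ** 2))
    return h / h.sum()

def crlb_depth_std(z, photons=1e4, noise=10.0, dz=1e-4):
    """Cramer-Rao lower bound on the depth standard deviation, from the
    finite-difference sensitivity of the PSF to depth, under additive
    Gaussian pixel noise of std `noise`."""
    grid = np.arange(-15.0, 16.0)  # 31 pixels
    dh_dz = (psf(z + dz, grid) - psf(z - dz, grid)) / (2.0 * dz)
    fisher = np.sum((photons * dh_dz) ** 2) / noise ** 2
    return 1.0 / np.sqrt(fisher)

for z in (1.0, 1.5, 3.0):
    print(f"z = {z:.1f} m -> depth std >= {crlb_depth_std(z):.4f} m")
# near the in-focus plane the PSF barely changes with depth,
# so the bound becomes much larger there:
print(f"z = 2.0 m -> depth std >= {crlb_depth_std(2.0):.2f} m")
```

Evaluating such a bound over the working depth range is exactly the kind of criterion that can drive the end-to-end design of the chromatic 3D camera mentioned above.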

4.
Appl Opt ; 57(10): 2553-2563, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29714240

ABSTRACT

We propose to add an optical component in front of a conventional camera to improve the depth estimation performance of depth from defocus (DFD), an approach based on the relation between defocus blur and depth. The add-on overcomes ambiguity and the dead zone, the fundamental limitations of DFD with a conventional camera, by adding an optical aberration to the whole system that makes the blur unambiguous and measurable at every depth. We look into two optical components: the first adds astigmatism, the second chromatic aberration. In both cases, we present the principle of the add-on and experimental validations on real prototypes.
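The ambiguity the add-on removes can be shown with a toy thin-lens model: for a conventional camera, two depths placed symmetrically (in inverse distance) about the focus plane give exactly the same blur, while per-channel focal lengths (a crude stand-in for a chromatic lens, with all values hypothetical) make the RGB blur triple unique.

```python
APERTURE = 5e-3    # aperture diameter (m), illustrative value
F_GREEN = 25e-3    # green-channel focal length (m)
# lens-to-sensor distance chosen so the green channel is in focus at 2 m
S = 1.0 / (1.0 / F_GREEN - 1.0 / 2.0)

def blur_radius(z, focal):
    """Geometric blur-circle radius at depth z (m) for one colour channel."""
    return (APERTURE / 2.0) * S * abs(1.0 / focal - 1.0 / z - 1.0 / S)

# 1) Conventional camera: two depths symmetric about the 2 m focus plane
#    produce the same blur -- this is the DFD ambiguity.
z_near, z_far = 1.0 / 0.6, 1.0 / 0.4
print(f"conventional: blur({z_near:.2f} m) = {blur_radius(z_near, F_GREEN)*1e6:.1f} um, "
      f"blur({z_far:.2f} m) = {blur_radius(z_far, F_GREEN)*1e6:.1f} um")

# 2) Chromatic lens (hypothetical per-channel focal lengths): each channel is
#    in focus at a different depth, so the RGB blur triple becomes unique.
F_RGB = {"R": 25.2e-3, "G": 25.0e-3, "B": 24.8e-3}
for z in (z_near, z_far):
    triple = {c: round(blur_radius(z, f) * 1e6, 1) for c, f in F_RGB.items()}
    print(f"chromatic, z = {z:.2f} m: blur (um) = {triple}")
```

The same construction explains the dead zone: near the focus plane a conventional blur is too small to measure, whereas with channel-dependent focus at least one channel stays usefully defocused at every depth.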
