Results 1 - 5 of 5
1.
Sensors (Basel) ; 23(23)2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38067835

ABSTRACT

Many state-of-the-art works address increasing the camera depth of field (DoF) through the joint optimization of an optical component (typically a phase mask) and a digital processing step, the latter using either a deconvolution with infinite support or a neural network. This can be used either to see sharp objects at a greater distance or to reduce manufacturing costs thanks to a larger tolerance on the sensor position. Here, we study the case of embedded processing with a single convolution of finite kernel size. The finite impulse response (FIR) filter coefficients are learned or computed based on a Wiener filter paradigm. This involves an optical model typical of codesigned systems for DoF extension and a scene power spectral density, which is either learned or modeled. We compare different FIR filters and present a method for dimensioning their sizes prior to a joint optimization. We also show that, among the filters compared, the learning approach adapts easily to a database, while the other approaches are equally robust.
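To make the Wiener-filter paradigm concrete, below is a minimal Python sketch of deriving a finite-support (FIR) deconvolution kernel by truncating a Wiener filter. The Gaussian defocus PSF, the 1/f² scene PSD model, and all parameter values are illustrative assumptions, not the authors' settings or pipeline.

```python
# Minimal sketch: truncate a Wiener deconvolution filter to a finite (FIR) kernel.
# The PSF, scene PSD, noise level, and kernel size below are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def wiener_fir_kernel(psf, shape=(256, 256), kernel_size=15, noise_psd=1e-2):
    """Truncate the spatial Wiener deconvolution filter to a finite kernel."""
    # Embed the PSF with its peak at the origin so the transfer function has no linear phase.
    pad = np.zeros(shape)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    # Assumed 1/f^2 scene power spectral density (a common natural-image model).
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    rho2 = fy**2 + fx**2
    rho2[0, 0] = rho2[0, 1]                        # avoid division by zero at DC
    scene_psd = 1.0 / rho2
    # Wiener filter in the Fourier domain, back to the spatial domain, then cropped.
    W = np.conj(H) * scene_psd / (np.abs(H)**2 * scene_psd + noise_psd)
    w = np.fft.fftshift(np.real(np.fft.ifft2(W)))
    cy, cx = shape[0] // 2, shape[1] // 2
    r = kernel_size // 2
    return w[cy - r:cy + r + 1, cx - r:cx + r + 1]

# Illustrative usage: restore a blurred image with the finite kernel only.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()                                    # toy defocus blur
kernel = wiener_fir_kernel(psf)
blurred = fftconvolve(np.random.rand(256, 256), psf, mode="same")
restored = fftconvolve(blurred, kernel, mode="same")
```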

2.
Appl Opt ; 61(29): 8843-8849, 2022 Oct 10.
Article in English | MEDLINE | ID: mdl-36256020

ABSTRACT

We present a novel, to the best of our knowledge, patch-based approach for depth regression from defocus blur. Most state-of-the-art methods for depth from defocus (DFD) use a patch classification approach over a set of potential defocus blurs, each related to a depth, which induces errors because depth varies continuously. Here, we propose to adapt a simple classification model using a soft-assignment encoding of the true depth into a membership probability vector during training and a regression scale to predict intermediate depth values. Our method uses no blur model or scene model; it only requires a training dataset of image patches (raw, grayscale, or RGB) and their corresponding depth labels. We show that our method outperforms both classification and direct regression on simulated images from structured or natural texture datasets, and on raw real data with optical aberrations from an active DFD experiment.
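A minimal sketch of the soft-assignment idea follows, assuming a Gaussian membership encoding over illustrative depth bins; the authors' exact encoding, bins, and network are not reproduced here.

```python
# Minimal sketch (illustrative, not the authors' exact encoding): a continuous depth
# label is spread over discrete depth classes as a membership probability vector;
# a predicted probability vector is turned back into a continuous depth by taking
# its expectation over the bins.
import numpy as np

depth_bins = np.linspace(0.5, 2.0, 16)        # assumed discrete depth classes (metres)

def soft_encode(depth, bins, bandwidth=0.05):
    """Encode a scalar depth as a membership probability vector over the bins."""
    w = np.exp(-0.5 * ((depth - bins) / bandwidth) ** 2)
    return w / w.sum()

def soft_decode(probs, bins):
    """Regress a continuous depth as the probability-weighted mean of the bins."""
    return float(np.dot(probs, bins))

target = soft_encode(1.23, depth_bins)         # training target instead of a one-hot label
print(soft_decode(target, depth_bins))         # ~1.23: intermediate depths are recoverable
```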

3.
Appl Opt ; 60(31): 9966-9974, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34807187

ABSTRACT

In this paper, we propose what we believe is a new monocular depth estimation algorithm based on local estimation of defocus blur, an approach referred to as depth from defocus (DFD). Using a limited set of calibration images, we directly learn the image covariance, which encodes both scene and blur (i.e., depth) information. Depth is then estimated from a single image patch using a maximum likelihood criterion defined with the learned covariance. This method is applied here within a new active DFD setup that uses a dense textured projection and a chromatic lens for image acquisition. The projector adds texture to low-textured objects, usually a limitation of DFD, and the chromatic aberration increases the estimated depth range with respect to conventional DFD. We provide quantitative evaluations of the depth estimation performance of our method on simulated and real data of fronto-parallel untextured scenes. The proposed method is then evaluated qualitatively in experiments on a 3D-printed benchmark.
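A minimal sketch of the maximum-likelihood criterion with learned patch covariances is given below; the calibration data are random stand-ins and the patch size, regularization, and depth values are assumptions for illustration only.

```python
# Minimal sketch: one covariance matrix is learned per calibration depth, and a test
# patch is assigned the depth whose covariance gives the highest zero-mean Gaussian
# log-likelihood. Shapes, regularization, and data below are illustrative assumptions.
import numpy as np

def learn_covariances(patches_per_depth, eps=1e-6):
    """patches_per_depth: dict depth -> (N, d) array of vectorized calibration patches."""
    covs = {}
    for depth, P in patches_per_depth.items():
        P = P - P.mean(axis=1, keepdims=True)             # remove per-patch mean (DC)
        C = P.T @ P / len(P) + eps * np.eye(P.shape[1])   # regularized sample covariance
        covs[depth] = C
    return covs

def ml_depth(patch, covs):
    """Return the depth maximizing the zero-mean Gaussian log-likelihood of the patch."""
    y = patch - patch.mean()
    best_depth, best_ll = None, -np.inf
    for depth, C in covs.items():
        sign, logdet = np.linalg.slogdet(C)
        ll = -0.5 * (logdet + y @ np.linalg.solve(C, y))
        if ll > best_ll:
            best_depth, best_ll = depth, ll
    return best_depth

# Illustrative usage with random stand-in data (d = 7x7 patches, flattened).
rng = np.random.default_rng(0)
calib = {z: rng.normal(scale=1 + z, size=(200, 49)) for z in (1.0, 1.5, 2.0)}
covs = learn_covariances(calib)
print(ml_depth(rng.normal(scale=2.5, size=49), covs))
```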

4.
Sensors (Basel) ; 19(3)2019 Feb 08.
Article in English | MEDLINE | ID: mdl-30743993

ABSTRACT

In the context of underwater robotics, the visual degradation induced by the medium properties makes it difficult to rely exclusively on cameras for localization. Hence, many underwater localization methods are based on expensive navigation sensors combined with acoustic positioning. On the other hand, purely visual localization methods have shown great potential for underwater localization, but challenging conditions, such as turbidity and scene dynamics, remain difficult to handle. In this paper, we propose a new visual odometry method designed to be robust to these visual perturbations. The proposed algorithm has been assessed on both simulated and real underwater datasets and outperforms state-of-the-art terrestrial visual SLAM methods under many of the most challenging conditions. The main application of this work is the localization of Remotely Operated Vehicles used for underwater archaeological missions, but the developed system can be used in any other application where visual information is available.
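For context only, here is a generic sketch of the pose bookkeeping common to any visual odometry pipeline: per-frame relative motions, however they are estimated from the images, are chained into an absolute camera trajectory. This is not the paper's underwater-robust method, and the motion values are purely illustrative.

```python
# Generic visual-odometry bookkeeping (illustrative, not the paper's method):
# chain per-frame relative SE(3) motions into an absolute trajectory.
import numpy as np

def compose(pose, R_rel, t_rel):
    """Chain a relative motion (R_rel, t_rel) onto an absolute 4x4 pose matrix."""
    T_rel = np.eye(4)
    T_rel[:3, :3] = R_rel
    T_rel[:3, 3] = t_rel
    return pose @ T_rel

# Toy motion: constant forward translation with a slight yaw at every frame.
yaw = np.deg2rad(1.0)
R_rel = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                  [0.0,         1.0, 0.0        ],
                  [-np.sin(yaw), 0.0, np.cos(yaw)]])
t_rel = np.array([0.0, 0.0, 0.1])                 # 10 cm forward per frame

pose = np.eye(4)
trajectory = [pose[:3, 3].copy()]
for _ in range(100):
    pose = compose(pose, R_rel, t_rel)
    trajectory.append(pose[:3, 3].copy())          # accumulated camera positions
```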

5.
J Opt Soc Am A Opt Image Sci Vis ; 31(12): 2650-62, 2014 Dec 01.
Article in English | MEDLINE | ID: mdl-25606754

ABSTRACT

In this paper we present a performance model for depth estimation using single-image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence of the optical parameters of a conventional camera, such as the focal length, the aperture, and the position of the in-focus plane (IFP), on performance. We derive an approximate analytical expression of the CRB away from the IFP and propose an interpretation of SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
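A minimal numerical sketch of a Cramér-Rao bound computation is given below for a toy zero-mean Gaussian model whose covariance depends on depth; the paper's SIDFD-specific expression involves the optical model and scene statistics and is not reproduced here, and the toy covariance is an assumption for illustration.

```python
# Minimal sketch: Cramér-Rao bound for depth z under a zero-mean Gaussian patch model
# with depth-dependent covariance C(z). The Fisher information in that case is
#   I(z) = 0.5 * trace( (C^{-1} dC/dz)^2 ),  and  CRB(z) = 1 / I(z).
# The toy covariance below is an illustrative stand-in, not the paper's optical model.
import numpy as np

def crb(C_of_z, z, dz=1e-4):
    """Cramér-Rao bound on depth for a zero-mean Gaussian model with covariance C(z)."""
    C = C_of_z(z)
    dC = (C_of_z(z + dz) - C_of_z(z - dz)) / (2 * dz)   # numerical derivative of C(z)
    A = np.linalg.solve(C, dC)                          # C^{-1} dC/dz
    fisher = 0.5 * np.trace(A @ A)
    return 1.0 / fisher

def toy_cov(z, d=32):
    """Stand-in covariance: pixel correlation grows with depth z (more defocus blur)."""
    i = np.arange(d)
    return np.exp(-np.abs(i[:, None] - i[None, :]) / z)

print(crb(toy_cov, z=1.5))   # variance floor for any unbiased depth estimator at z = 1.5
```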
