Results 1 - 5 of 5
1.
Sensors (Basel) ; 21(21)2021 Oct 23.
Article in English | MEDLINE | ID: mdl-34770330

ABSTRACT

In the last stage of colored point cloud registration, depth measurement errors hinder the achievement of accurate and visually plausible alignments. Recently, an algorithm has been proposed that extends the Iterative Closest Point (ICP) algorithm to refine the measured depth values instead of the pose between point clouds. However, that algorithm suffers from numerical instability, so a postprocessing step is needed to restrict erroneous output depth values. In this paper, we present a new algorithm with improved numerical stability. Unlike the previous algorithm, which relies heavily on point-to-plane distances, our algorithm constructs a cost function based on an adaptive combination of two different projected distances to prevent numerical instability. We address the problem of registering a source point cloud to the union of the source and reference point clouds. This extension allows all source points to be processed in a unified filtering framework, irrespective of the existence of their corresponding points in the reference point cloud. The extension also improves the numerical stability of using point-to-plane distances. The experiments show that the proposed algorithm improves the registration accuracy and provides high-quality alignments of colored point clouds.
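The core idea of combining two distance terms per correspondence can be illustrated with a minimal sketch. This is not the paper's cost function; it only shows the generic pattern of blending a point-to-plane residual with a point-to-point residual under a hypothetical weight `w` (the paper instead combines two projected distances with an adaptive weighting):

```python
import numpy as np

def blended_residual(p, q, n, w):
    """Blend point-to-plane and point-to-point residuals for one
    correspondence.

    p: source point, q: matched reference point, n: unit normal at q,
    w in [0, 1]: hypothetical weight favoring the plane term.
    """
    r_plane = np.dot(n, p - q)           # signed point-to-plane distance
    r_point = np.linalg.norm(p - q)      # point-to-point distance
    return w * r_plane**2 + (1.0 - w) * r_point**2
```

A pure point-to-plane cost (`w = 1`) is ill-conditioned when a point slides along its matched plane; mixing in the point-to-point term keeps the residual sensitive to such motion, which is the general motivation for combining distance types.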

2.
Sensors (Basel) ; 20(18)2020 Sep 17.
Article in English | MEDLINE | ID: mdl-32957672

ABSTRACT

We present two algorithms for aligning two colored point clouds. The two algorithms are designed to minimize a probabilistic cost based on the color-supported soft matching of points in a point cloud to their K-closest points in the other point cloud. The first algorithm, like prior iterative closest point algorithms, refines the pose parameters to minimize the cost. Assuming that the point clouds are obtained from RGB-depth images, our second algorithm regards the measured depth values as variables and minimizes the cost to obtain refined depth values. Experiments with our synthetic dataset show that our pose refinement algorithm gives better results than the existing algorithms. Our depth refinement algorithm is shown to achieve more accurate alignments from the outputs of the pose refinement step. Our algorithms are applied to a real-world dataset, providing accurate and visually improved results.
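Color-supported soft matching can be sketched as a normalized weighting over the K candidate matches, where both geometric and color distances contribute. The bandwidths `sigma_d` and `sigma_c` below are hypothetical parameters, not values from the paper:

```python
import numpy as np

def soft_match_weights(p, c_p, Q, C_q, sigma_d=0.05, sigma_c=0.1):
    """Soft-assignment weights of point p (color c_p) to its K candidate
    matches Q (K x 3 positions) with colors C_q (K x 3).

    sigma_d, sigma_c: hypothetical geometric and color bandwidths.
    Returns weights that sum to 1; nearby, similarly colored candidates
    receive larger weight.
    """
    d2 = np.sum((Q - p) ** 2, axis=1) / sigma_d**2      # geometric term
    c2 = np.sum((C_q - c_p) ** 2, axis=1) / sigma_c**2  # color term
    w = np.exp(-0.5 * (d2 + c2))
    return w / w.sum()
```

A probabilistic cost of the kind the abstract describes would then sum, over all source points, the weighted distances to the K candidates rather than committing to a single hard nearest neighbor.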

3.
Sensors (Basel) ; 19(7)2019 Mar 29.
Article in English | MEDLINE | ID: mdl-30934950

ABSTRACT

RGB-Depth (RGB-D) cameras are widely used in computer vision and robotics applications such as 3D modeling and human-computer interaction. To capture 3D information of an object from different viewpoints simultaneously, we need to use multiple RGB-D cameras. To minimize costs, the cameras are often sparsely distributed without shared scene features. Due to the advantage of being visible from different viewpoints, spherical objects have been used for extrinsic calibration of widely-separated cameras. Assuming that the projected shape of the spherical object is circular, this paper presents a multi-cue-based method for detecting circular regions in a single color image. Experimental comparisons with existing methods show that our proposed method accurately detects spherical objects with cluttered backgrounds under different illumination conditions. The circle detection method is then applied to extrinsic calibration of multiple RGB-D cameras, for which we propose to use robust cost functions to reduce errors due to misdetected sphere centers. Through experiments, we show that the proposed method provides accurate calibration results in the presence of outliers and performs better than a least-squares-based method.
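The advantage of a robust cost over least squares for sphere-center alignment can be sketched with a Huber penalty, one common robust choice (the paper does not necessarily use this exact function; the threshold `delta` is also a hypothetical value):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber penalty: quadratic near zero, linear for large residuals,
    so a single misdetected sphere center cannot dominate the cost."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def calibration_cost(R, t, centers_a, centers_b, delta=0.02):
    """Robust extrinsic-calibration cost: sphere centers observed by
    camera A, mapped through rotation R and translation t, compared
    against the centers observed by camera B (N x 3 arrays).
    A least-squares method would sum squared residuals instead."""
    residuals = np.linalg.norm(centers_a @ R.T + t - centers_b, axis=1)
    return huber(residuals, delta).sum()
```

Because the penalty grows only linearly beyond `delta`, an outlier center contributes a bounded gradient, which is what makes the minimization tolerant of misdetections.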

4.
IEEE Trans Image Process ; 23(8): 3321-35, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24951693

ABSTRACT

This paper presents a method for increasing the spatial resolution of a depth map using its corresponding high-resolution (HR) color image as a guide. Most of the previous methods rely on the assumption that depth discontinuities are highly correlated with color boundaries, leading to artifacts in regions where the assumption breaks down. To prevent scene texture from being erroneously transferred to reconstructed scene surfaces, we propose a framework for dividing the color image into different regions and applying different methods tailored to each region type. For the region classification, we first segment the low-resolution (LR) depth map into regions of smooth surfaces, and then use them to guide the segmentation of the color image. Using the consensus of multiple image segmentations obtained by different super-pixel generation methods, the color image is divided into continuous and discontinuous regions. In the continuous regions, HR depth values are interpolated from LR depth samples without exploiting the color information; in the discontinuous regions, HR depth values are estimated by sequentially applying more elaborate depth-histogram-based methods. Through experiments, we show that each step of our method improves depth map upsampling both quantitatively and qualitatively. We also show that our method can be extended to handle real data with occluded regions caused by the displacement between color and depth sensors.
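The region-dependent strategy can be illustrated with a toy 1-D sketch: smooth interpolation where the surface is continuous, and a sample-selection rule where it is not. The nearest-sample rule below is only a stand-in for the paper's depth-histogram-based estimation; both it and the plain linear interpolation are illustrative assumptions:

```python
import numpy as np

def upsample_depth(lr_depth, factor, discontinuous_mask):
    """Toy 1-D region-dependent depth upsampling.

    lr_depth: 1-D array of LR depth samples.
    factor: integer upsampling factor.
    discontinuous_mask: boolean array of length len(lr_depth) * factor,
    True where the HR location is classified as discontinuous.

    Continuous regions: linear interpolation (no color needed).
    Discontinuous regions: copy the nearest LR sample, a crude proxy for
    histogram-based selection that avoids blending across a depth edge.
    """
    n = len(lr_depth)
    x_hr = np.linspace(0, n - 1, n * factor)
    interp = np.interp(x_hr, np.arange(n), lr_depth)
    nearest = lr_depth[np.round(x_hr).astype(int)]
    return np.where(discontinuous_mask, nearest, interp)
```

Selecting an existing sample instead of interpolating is what prevents the "flying pixel" artifact of averaging foreground and background depths across a discontinuity.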

5.
Opt Lett ; 39(1): 166-9, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24365849

ABSTRACT

We present a method to enhance the depth quality of a time-of-flight (ToF) camera without additional devices or hardware modifications. By controlling the turn-off patterns of the LEDs of the camera, we obtain depth and normal maps simultaneously. Sixteen subphase images are acquired with variations in the gate-pulse timing and the light emission pattern of the camera. The subphase images allow us to obtain a normal map, which is combined with the depth map to improve depth details. These details typically cannot be captured by conventional ToF cameras. With the proposed method, the average absolute difference between the measured and laser-scanned depth maps decreases from 4.57 to 3.77 mm.
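For context, the standard four-phase ToF principle underlying such cameras can be sketched as follows: depth is recovered from the phase shift of the modulated light, estimated from four gated samples. This is the conventional basis only, not the paper's sixteen-subphase LED-pattern extension:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_depth(a0, a90, a180, a270, f_mod):
    """Standard four-phase time-of-flight depth.

    a0..a270: correlation samples gated at 0, 90, 180, 270 degrees.
    f_mod: modulation frequency in Hz.
    The phase shift of the reflected signal is recovered with atan2,
    then converted to distance (round trip halved, hence 4*pi).
    """
    phase = math.atan2(a270 - a90, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```

The unambiguous range of this scheme is C / (2 * f_mod); at a typical 20 MHz modulation that is about 7.5 m, which is why consumer ToF cameras wrap distances beyond that.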
