Results 1 - 3 of 3
1.
Sensors (Basel) ; 21(13)2021 Jun 23.
Article in English | MEDLINE | ID: mdl-34201455

ABSTRACT

High-resolution 3D scanning devices produce high-density point clouds, which require large storage capacity and time-consuming processing algorithms. To reduce both needs, it is common to apply surface simplification algorithms as a preprocessing stage. The goal of point cloud simplification algorithms is to reduce the volume of data while preserving the most relevant features of the original point cloud. In this paper, we present a new feature-preserving point cloud simplification algorithm. We use a global approach to detect saliencies in a given point cloud. Our method estimates a feature vector for each point in the cloud; its components are the normal vector coordinates, the point coordinates, and the surface curvature at the point. The feature vectors are used as basis signals to carry out a dictionary learning process, producing a trained dictionary, and the corresponding sparse coding process produces a sparse matrix. To detect the saliencies, the proposed method uses two measures: the first takes into account the number of nonzero elements in each column vector of the sparse matrix, and the second the reconstruction error of each signal. These measures are then combined to produce the final saliency value for each point in the cloud. Next, we simplify the point cloud guided by the detected saliency, using the saliency value of each point as a dynamic clustering radius. We validate the proposed method by comparing it with a set of state-of-the-art methods, demonstrating the effectiveness of the simplification.
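As a rough illustration of the saliency computation described above, the sketch below pairs scikit-learn's `DictionaryLearning` with the two measures named in the abstract: the count of nonzero coefficients per sparse code and the per-signal reconstruction error. The random feature matrix (standing in for real normals, coordinates, and curvatures), the dictionary size, the sparsity level, and the equal-weight combination of the two measures are all assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Hypothetical 7-D feature per point: (nx, ny, nz, x, y, z, curvature).
# In the paper these come from the scanned cloud; here they are random.
F = rng.normal(size=(200, 7))

# Learn an overcomplete dictionary and sparse-code each feature vector
# with OMP, limited to 3 nonzero coefficients per point (assumed values).
dl = DictionaryLearning(n_components=10, transform_algorithm="omp",
                        transform_n_nonzero_coefs=3,
                        max_iter=50, random_state=0)
codes = dl.fit_transform(F)                  # one sparse code per point

# Measure 1: fraction of dictionary atoms used by each point's code.
density = (codes != 0).sum(axis=1) / codes.shape[1]

# Measure 2: reconstruction error of each signal, normalized to [0, 1].
recon_err = np.linalg.norm(F - codes @ dl.components_, axis=1)
recon_err /= recon_err.max()

# Final saliency: equal-weight combination (the weighting is an assumption).
saliency = 0.5 * density + 0.5 * recon_err
```

Points whose features are poorly explained by few atoms (high error, dense codes) score as more salient, so the simplification stage would keep them and thin out the rest.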


Subject(s)
Algorithms
2.
Sensors (Basel) ; 20(11)2020 Jun 05.
Article in English | MEDLINE | ID: mdl-32516976

ABSTRACT

Denoising a point cloud is fundamental for reconstructing high-quality, detailed surfaces, since the 3D scanning process introduces noise and outliers. The challenges for a denoising algorithm are noise reduction and sharp-feature preservation. In this paper, we present a new model to reconstruct and smooth point clouds that combines L1-median filtering with sparse L1 regularization, both denoising the normal vectors and updating the positions of the points to preserve sharp features. The L1-median filter is robust to outliers and noise compared with the mean. The L1 norm measures the sparsity of a solution, and applying L1 optimization to the point cloud captures the sparsity of sharp features, producing clean point-set surfaces with sharp features. We solve the L1 minimization problem using the proximal gradient descent algorithm. Experimental results show that our approach is comparable to state-of-the-art methods: it denoises 3D models with high levels of noise while keeping their geometric features.
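The two building blocks named above — the L1-median and the proximal step of proximal gradient descent — can be sketched in a few lines. The Weiszfeld iteration for the L1 (geometric) median and the soft-thresholding proximal operator for the L1 norm are standard textbook forms, assumed here; the paper's full model couples them with normal estimation and position updates that this sketch omits.

```python
import numpy as np

def l1_median(P, iters=100, eps=1e-8):
    """Geometric (L1) median of the rows of P via Weiszfeld iterations.
    Unlike the mean, it is robust to outliers in the neighborhood."""
    m = P.mean(axis=0)                       # initial guess: the mean
    for _ in range(iters):
        d = np.linalg.norm(P - m, axis=1)
        w = 1.0 / np.maximum(d, eps)         # inverse-distance weights
        m_new = (w[:, None] * P).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 — the shrinkage step applied
    at each iteration of proximal gradient descent."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

A quick check of the robustness claim: the L1-median of five points clustered at the origin plus one far outlier stays near the origin, whereas the mean is dragged far off.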

3.
Sensors (Basel) ; 20(5)2020 Mar 10.
Article in English | MEDLINE | ID: mdl-32164373

ABSTRACT

Magnetic Resonance (MR) imaging is a diagnostic technique that produces noisy images, which must be filtered before processing to prevent diagnostic errors. However, filtering the noise while keeping fine details is a difficult task. This paper presents a method, based on sparse representations and singular value decomposition (SVD), for non-local denoising of MR images that prevents blurring, artifacts, and residual noise. Our method is composed of three stages. The first stage divides the image into sub-volumes and obtains their sparse representation using the KSVD algorithm; the global influence of the dictionary atoms is then computed to update the dictionary and obtain a better reconstruction of the sub-volumes. In the second stage, based on the sparse representation, the noise-free sub-volume is estimated using a non-local approach and SVD; each noise-free voxel is reconstructed by aggregating the overlapping voxels, weighted by the rarity of the sub-volumes it belongs to, which is computed from the global influence of the atoms. The third stage repeats the process with a different sub-volume size, producing a new filtered image that is averaged with the previously filtered images. The results show that our method outperforms several state-of-the-art methods on both simulated and real data.
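The SVD step at the heart of the second stage can be sketched as below: a group of similar patches is stacked into a matrix, and its singular values are shrunk so that the small ones — which mostly carry noise — are suppressed. The grouping of similar sub-volumes, the KSVD dictionary update, and the rarity-weighted aggregation are omitted, and soft-thresholding of singular values is a common low-rank surrogate assumed here, not necessarily the paper's exact shrinkage rule.

```python
import numpy as np

def svd_denoise_group(G, tau):
    """Low-rank estimate of a group of similar patches (rows of G).
    Singular values below tau are zeroed, the rest shrunk by tau,
    so the noisy small-variance directions are discarded."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s = np.maximum(s - tau, 0.0)        # soft-threshold singular values
    return U @ np.diag(s) @ Vt
```

On a matrix with one dominant and one small singular value, a threshold between the two removes the weak component entirely and returns a rank-1 estimate, which is exactly the behavior the non-local stage relies on.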


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Algorithms , Artifacts , Brain/diagnostic imaging , Computer Simulation , Humans , Models, Statistical , Phantoms, Imaging , Signal-To-Noise Ratio , Support Vector Machine