1.
IEEE Trans Vis Comput Graph ; 29(12): 4920-4935, 2023 Dec.
Article in English | MEDLINE | ID: mdl-35862319

ABSTRACT

Tree modeling has been extensively studied in computer graphics. Recent advances in the development of high-resolution sensors and data processing techniques are extremely useful for collecting 3D datasets of real-world trees and generating increasingly plausible branching structures. The wide availability of versatile acquisition platforms allows us to capture multi-view images and scanned data that can be used for guided 3D tree modeling. In this paper, we carry out a comprehensive review of the state-of-the-art methods for the 3D modeling of botanical tree geometry that take input data from real scenarios. A wide range of studies following different approaches has been proposed. The most relevant contributions are summarized and classified into three categories: (1) procedural reconstruction, (2) geometry-based extraction, and (3) image-based modeling. In addition, we describe other approaches that enhance the reconstruction process by adding features to achieve a realistic appearance of the tree models. Thus, we provide an overview of the most effective procedures to assist researchers in the photorealistic modeling of trees in both geometry and appearance. The article concludes with remarks and trends pointing to promising research opportunities in 3D tree modeling using real-world data.

2.
Sensors (Basel) ; 20(8)2020 Apr 15.
Article in English | MEDLINE | ID: mdl-32326663

ABSTRACT

The characterization of natural spaces through the precise observation of their material properties is in high demand in remote sensing and computer vision. The production of novel sensors enables the collection of heterogeneous data to gain comprehensive knowledge of the living and non-living entities in the ecosystem. The high resolution of consumer-grade RGB cameras is frequently used for the geometric reconstruction of many types of environments. Nevertheless, the understanding of natural spaces is still challenging. The automatic segmentation of homogeneous materials in nature is a complex task because overlapping structures and indirect illumination make object recognition difficult. In this paper, we propose a method based on fusing spatial and multispectral characteristics for the unsupervised classification of natural materials in a point cloud. A high-resolution camera and a multispectral sensor are mounted on a custom camera rig in order to simultaneously capture RGB and multispectral images. Our method is tested in a controlled scenario, where different natural objects coexist. Initially, the input RGB images are processed to generate a point cloud by applying the structure-from-motion (SfM) algorithm. Then, the multispectral images are mapped onto the three-dimensional model to characterize the geometry with the reflectance captured from four narrow bands (green, red, red-edge and near-infrared). The reflectance, the visible colour and the spatial component are combined to extract key differences among all existing materials. For this purpose, a hierarchical cluster analysis is applied to partition the point cloud and identify the feature pattern for every material. As a result, the tree trunk, the leaves, different species of low plants, the ground and rocks can be clearly recognized in the scene. These results demonstrate the feasibility of performing semantic segmentation by considering multispectral and spatial features with an unknown number of clusters to be detected in the point cloud. Moreover, our solution is compared to another method based on supervised learning in order to assess the improvement of the proposed approach.


Subject(s)
Imaging, Three-Dimensional/methods; Photography/methods; Algorithms; Ecosystem; Plant Leaves; Semantics
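
The unsupervised classification pipeline described in the abstract above (fused spatial, colour, and reflectance features grouped by hierarchical cluster analysis) can be illustrated with a minimal sketch. This is not the authors' implementation: the feature layout, the Ward linkage, and the distance threshold are illustrative assumptions, and the data are synthetic stand-ins for two materials.

```python
# Minimal sketch (not the paper's implementation): each 3D point carries
# spatial coordinates (x, y, z), visible colour (R, G, B) and four
# narrow-band reflectances (green, red, red-edge, near-infrared).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic stand-ins for two materials, e.g. "leaves" and "trunk".
leaves = np.hstack([
    rng.normal([0.0, 0.0, 0.0], 0.05, (50, 3)),       # spatial
    rng.normal([0.2, 0.6, 0.2], 0.02, (50, 3)),       # colour
    rng.normal([0.5, 0.1, 0.6, 0.8], 0.02, (50, 4)),  # reflectance
])
trunk = np.hstack([
    rng.normal([2.0, 2.0, 0.0], 0.05, (50, 3)),
    rng.normal([0.4, 0.3, 0.25], 0.02, (50, 3)),
    rng.normal([0.2, 0.3, 0.3, 0.4], 0.02, (50, 4)),
])
points = np.vstack([leaves, trunk])

# Standardize each feature so no single component dominates the distance.
feats = (points - points.mean(axis=0)) / points.std(axis=0)

# Agglomerative (hierarchical) clustering; the number of clusters is not
# fixed in advance -- a distance threshold cuts the dendrogram instead
# (the threshold value here is an illustrative choice).
Z = linkage(feats, method="ward")
labels = fcluster(Z, t=8.0, criterion="distance")
```

Each resulting flat cluster groups points whose fused features are similar; in the paper's setting such clusters would correspond to materials like the trunk, leaves, low plants, ground and rocks.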
3.
J Forensic Sci ; 50(1): 127-33, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15831006

ABSTRACT

Bite mark analysis assumes that the uniqueness of the dentition can be accurately recorded on skin or an object. However, biting is a dynamic process involving three moving systems: the maxilla, the mandible, and the victim's reaction. Moreover, bite marks can be distorted by the anatomic location of the injury or the elasticity of the skin tissue. Therefore, the same dentition can produce bite marks that exhibit variations in appearance. The complexity of this source of evidence emphasizes the need for new 3D imaging technologies in bite mark analysis. This article presents a new software package, DentalPrint (2004, University of Granada, Department of Forensic Medicine and Forensic Odontology, Granada, Spain), that generates different comparison overlays from 3D dental cast images depending on the pressure of the bite or the distortion caused by victim-biter interaction. The procedure for generating comparison overlays is entirely automatic, thus avoiding observer bias. Moreover, the software makes it impossible for third parties to manipulate or alter the 3D images, making DentalPrint suitable for bite mark analyses to be used in court proceedings.


Subject(s)
Bites, Human/classification; Dentition; Image Processing, Computer-Assisted; Software; Automation; Forensic Medicine/methods; Humans; Imaging, Three-Dimensional; Jurisprudence