ABSTRACT
We present an investigation into adopting a model of the retino-cortical mapping found in biological visual systems to improve the efficiency of image analysis using Deep Convolutional Neural Networks (DCNNs) in the context of robot vision and egocentric perception systems. This work has enabled DCNNs to process input images approaching one million pixels in size, in real time and in a single pass of the DCNN, using only consumer-grade graphics processing unit (GPU) hardware.
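The abstract does not specify the particular retino-cortical model used; a common choice in the literature is log-polar sampling, which concentrates samples at the centre of gaze (fovea) and thins them toward the periphery, drastically reducing pixel count before the DCNN. The sketch below is illustrative only, with assumed grid dimensions and function names, not the paper's actual mapping:

```python
import numpy as np

def log_polar_sample(image, out_rings=64, out_wedges=128, r_min=1.0):
    """Resample a square image onto a log-polar grid centred on the image,
    mimicking the dense-centre, sparse-periphery retino-cortical layout."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Ring radii grow exponentially from r_min out to the image border.
    radii = r_min * (r_max / r_min) ** (np.arange(out_rings) / (out_rings - 1))
    thetas = np.linspace(0.0, 2 * np.pi, out_wedges, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]  # shape (out_rings, out_wedges)

img = np.random.rand(1024, 1024)          # ~1-megapixel input
cortical = log_polar_sample(img)
print(img.size, "->", cortical.size)      # 1048576 -> 8192
```

With these (assumed) grid sizes, a ~1 Mpixel input collapses to roughly 8K samples, which is the kind of reduction that makes single-pass, real-time DCNN processing on consumer GPUs plausible.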
ABSTRACT
This paper presents a general framework for segmenting three-dimensional (3-D) scan data representing the human form into subsets corresponding to functional human body parts. This task is challenging due to the articulated and deformable nature of the human body. A salient feature of the framework is that it copes with various body postures and is robust to noise, holes, irregular sampling, and rigid transformations. Although whole-body scanners can now routinely capture the shape of the whole body in machine-readable format, they have not yet realized their potential to provide automatic extraction of key body measurements. Automated production of anthropometric databases is a prerequisite to satisfying the needs of certain industrial sectors (e.g., the clothing industry). This implies that, in order to extract specific measurements of interest, whole-body 3-D scan data must be segmented by machine into subsets corresponding to functional human body parts. However, previously reported attempts at automating the segmentation process suffer from various limitations, such as being restricted to a specific standard posture and being vulnerable to scan-data artifacts. Our human body segmentation algorithm advances the state of the art to overcome these limitations, and we present experimental results obtained using both real and synthetic data that confirm the validity, effectiveness, and robustness of our approach.
Subjects
Artificial Intelligence, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Pattern Recognition, Automated/methods, Posture/physiology, Whole Body Imaging/methods, Algorithms, Humans, Information Storage and Retrieval/methods, Reproducibility of Results, Sensitivity and Specificity, Subtraction Technique
ABSTRACT
OBJECTIVE: To validate a new method of facial volumetric assessment based on stereophotogrammetric models and a software-based Facial Analysis Tool. DESIGN: The method was validated in vitro with three-dimensional (3D) models of a lifelike plastic female dummy head and in vivo with a male subject's head. METHODS: Thirty facial silicone explants were added in the nasal and perioral regions of each head, and their volumes were obtained by three different algorithms. These were compared with the actual values obtained by a "water displacement" method. RESULTS: The smallest mean error was found with the "tetrahedron formation" method, followed by the "projection" method and the "back-plane construction" method. The error with the tetrahedron formation method was 0.071 cm³ (95% confidence interval [CI]: -0.074 to 0.216 cm³) with the in vitro models and 0.314 cm³ (95% CI: -0.080 to 0.708 cm³) with the in vivo models. The increased volumetric assessment error observed in vivo was attributed to the registration procedure and possible changes in facial expression. CONCLUSIONS: These results encourage the use of this method in the 3D assessment of orthognathic surgical outcome, provided a standardized facial expression is used for image acquisition.
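The abstract does not detail the "tetrahedron formation" algorithm. A standard way to compute the volume enclosed by a closed triangle mesh by tetrahedron decomposition is to join each surface triangle to a fixed point (here, the origin) and sum the signed tetrahedron volumes; the sketch below shows that idea on a unit cube and is illustrative, not the Facial Analysis Tool's actual implementation:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Signed volume of a closed triangle mesh: each face forms a tetrahedron
    with the origin, and the signed tetrahedron volumes sum to the enclosed
    volume (triple product / 6 per face)."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    return np.einsum("ij,ij->", v0, np.cross(v1, v2)) / 6.0

# Unit cube as 12 triangles; vertex index = 4x + 2y + z.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=float)
faces = np.array([
    [0, 1, 3], [0, 3, 2],   # x = 0 face
    [4, 6, 7], [4, 7, 5],   # x = 1 face
    [0, 4, 5], [0, 5, 1],   # y = 0 face
    [2, 3, 7], [2, 7, 6],   # y = 1 face
    [0, 2, 6], [0, 6, 4],   # z = 0 face
    [1, 5, 7], [1, 7, 3],   # z = 1 face
])
print(abs(mesh_volume(verts, faces)))  # 1.0
```

An explant volume would then follow as the difference between the mesh volumes of the post- and pre-explant surface scans, which is consistent with the sub-cm³ errors reported above.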