Results 1 - 6 of 6
1.
Sensors (Basel); 23(24), 2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38139622

ABSTRACT

Camera network design is a challenging task in many applications, including photogrammetry, biomedical engineering, robotics, and industrial metrology. Many factors drive the design, among them the camera specifications, the object of interest, and the type of application. One interesting application is 3D face modeling and recognition, which involves recognizing an individual based on facial attributes derived from the constructed 3D model. Developers and researchers still struggle to reach the high level of accuracy and reliability required for image-based 3D face models, owing among other factors to hardware limitations and imperfections of the cameras, and to the lack of established practice for designing the ideal camera-system configuration. Accordingly, precise measurements still require engineering-based techniques to ascertain the achievable quality of the deliverables. In this paper, an optimal geometric design methodology for the camera network is presented by investigating multi-camera configurations of four to eight cameras. A nonlinear constrained optimization technique is applied to solve the problem, and each camera-system configuration is tested on a facial 3D model, with a quality assessment used to determine the best configuration. The optimal configuration is found to be a 7-camera array, a pentagon shape enclosing two additional cameras, which offers high accuracy. For those who prioritize point density, a 9-camera array with a pentagon and quadrilateral arrangement in the X-Z plane is a viable choice, while a 5-camera array offers a balance between accuracy and the number of cameras.
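
The abstract does not give the optimization model itself; the following is a minimal sketch of the kind of nonlinear constrained optimization involved, under assumed stand-ins: cameras on a circular rig, an illustrative spread-based objective, and a minimum angular-separation constraint. None of these choices are from the paper.

```python
# Minimal sketch (not the paper's method): optimize camera angles on a
# circular rig around a face so that viewing directions are well spread,
# subject to a minimum angular separation between cameras. Objective and
# constraints here are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

N_CAMERAS = 7          # e.g. the 7-camera array the paper found optimal
MIN_SEPARATION = 0.3   # minimum angular separation in radians (assumed)

def spread_cost(angles):
    # Penalize clustered viewing directions: sum of inverse pairwise
    # angular distances (a stand-in for network geometry quality).
    d = np.abs(angles[:, None] - angles[None, :])
    d = np.minimum(d, 2 * np.pi - d)            # wrap-around distance
    iu = np.triu_indices(N_CAMERAS, k=1)
    return np.sum(1.0 / (d[iu] + 1e-6))

def separation_constraints(angles):
    # Each pairwise separation must exceed MIN_SEPARATION (>= 0 when met).
    d = np.abs(angles[:, None] - angles[None, :])
    d = np.minimum(d, 2 * np.pi - d)
    iu = np.triu_indices(N_CAMERAS, k=1)
    return d[iu] - MIN_SEPARATION

x0 = np.sort(np.random.default_rng(0).uniform(0, 2 * np.pi, N_CAMERAS))
res = minimize(spread_cost, x0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": separation_constraints}])
print("optimized camera angles (rad):", np.sort(res.x) % (2 * np.pi))
```

A real design objective would instead propagate camera geometry through the collinearity equations to the expected precision of the reconstructed face points; the constrained-solver pattern stays the same.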

2.
PLoS One; 16(10): e0259036, 2021.
Article in English | MEDLINE | ID: mdl-34705870

ABSTRACT

The color of particular parts of a flower is often employed as one of the features to differentiate between flower types, so color is also used in flower-image classification. Color labels such as 'green', 'red', and 'yellow' are used by taxonomists and lay people alike to describe the color of plants. Flower-image datasets usually consist of images only and do not contain flower descriptions. In this research, we have built a flower-image dataset, focused on orchid species, which pairs human-friendly textual descriptions of the features of specific flowers with digital photographs showing what the flower looks like. Using this dataset, a new automated color detection model was developed; it is the first research of its kind to use color labels and deep learning for color detection in flower recognition. As deep learning often excels at pattern recognition in digital images, we applied transfer learning with various amounts of layer unfreezing to five neural network architectures (VGG16, Inception, Resnet50, Xception, Nasnet) to determine which architecture and which transfer-learning scheme performs best. In addition, various color-scheme scenarios were tested, including the use of primary and secondary colors together, and the effectiveness of handling multi-class classification with multi-class, combined binary, and ensemble classifiers was studied. The best overall performance was achieved by the ensemble classifier. The results show that the proposed method can detect the color of the flower and labellum very well without having to perform image segmentation. The results of this study can serve as a foundation for an image-based plant recognition system that is able to explain a provided classification.
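
As a minimal sketch of the transfer-learning-with-partial-unfreezing scheme the abstract describes, the snippet below uses one of the named architectures (ResNet50) in Keras. The number of color classes, the unfreezing depth, and the classifier head are assumptions, not the paper's settings.

```python
# Transfer-learning sketch (assumptions: Keras/TensorFlow, images resized
# to 224x224, 10 color labels; the paper's exact unfreezing schedule and
# head architecture are not specified here).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_COLORS = 10        # assumed number of color labels
UNFREEZE_FROM = -30    # unfreeze only the last 30 layers (one such scheme)

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = True
for layer in base.layers[:UNFREEZE_FROM]:
    layer.trainable = False          # keep earlier layers frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_COLORS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```

The same loop over `UNFREEZE_FROM` values and over the five architectures reproduces the kind of grid the study compares.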


Subjects
Color, Deep Learning, Flowers, Plants/classification, Algorithms
3.
PLoS One; 11(7): e0159950, 2016.
Article in English | MEDLINE | ID: mdl-27462986

ABSTRACT

Research overwhelmingly shows that facial appearance predicts leader selection. However, the evidence on the relevance of faces for actual leader ability, and consequently performance, is inconclusive. Using a state-of-the-art, objective measure for face recognition, we test the predictive value of CEOs' faces for firm performance in a large sample of faces. We first compare the faces of Fortune 500 CEOs with those of US citizens and professors. We find clear confirmation that CEOs do look different from citizens or professors, replicating the finding that faces matter for selection. More importantly, we also find that the faces of CEOs of top-performing firms do not differ from those of other CEOs. Based on our advanced face recognition method, our results suggest that facial appearance matters for leader selection but not for leader performance.
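
The abstract does not name the face-recognition system used. As an illustrative sketch of one way such a group comparison can be run, the snippet below applies a permutation test to face embeddings; `embed_faces` is a hypothetical stand-in for any off-the-shelf embedding model, not the paper's measure.

```python
# Illustrative sketch (not the paper's pipeline): test whether two groups
# of face embeddings differ, via a permutation test on the distance
# between group mean embeddings.
import numpy as np

def group_difference(emb_a, emb_b, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = np.linalg.norm(emb_a.mean(0) - emb_b.mean(0))
    pooled = np.vstack([emb_a, emb_b])
    n_a = len(emb_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # shuffle rows in place
        stat = np.linalg.norm(pooled[:n_a].mean(0) - pooled[n_a:].mean(0))
        count += stat >= observed
    return observed, count / n_perm              # distance, permutation p

# ceo_emb = embed_faces(ceo_images)        # hypothetical embedding step
# prof_emb = embed_faces(professor_images)
# dist, p = group_difference(ceo_emb, prof_emb)
```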


Subjects
Facial Recognition, Leadership, Social Class, Biometric Identification, Facial Expression, Faculty, Humans, Male
4.
Sci Justice; 55(6): 499-508, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26654086

ABSTRACT

Recently, in the forensic biometric community, there has been growing interest in computing a metric called the "likelihood ratio" when a pair of biometric specimens is compared using a biometric recognition system. A biometric recognition system generally outputs a score, so a likelihood-ratio computation method is used to convert the score to a likelihood ratio. The likelihood ratio is the probability of the score given the hypothesis of the prosecution, Hp (the two biometric specimens arose from the same source), divided by the probability of the score given the hypothesis of the defense, Hd (the two biometric specimens arose from different sources). Given a set of training scores under Hp and a set of training scores under Hd, several methods exist to convert a score to a likelihood ratio. In this work, we focus on the issue of sampling variability in the training sets and carry out a detailed empirical study to quantify its effect on commonly proposed likelihood-ratio computation methods. We study the effect of sampling variability while varying: 1) the shapes of the probability density functions that model the distributions of scores in the two training sets; 2) the sizes of the training sets; and 3) the score for which a likelihood ratio is computed. For this purpose, we introduce a simulation framework that can be used to study several properties of a likelihood-ratio computation method and to quantify the effect of sampling variability on the computed likelihood ratio. It is shown empirically that the sampling variability can be considerable, particularly when the training sets are small. Furthermore, a given likelihood-ratio computation method can behave very differently for different shapes of the probability density functions of the scores in the training sets and for different scores for which likelihood ratios are computed.
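
A minimal sketch of the score-to-likelihood-ratio idea and of sampling variability follows. The score distributions, the kernel-density fit, and the bootstrap sizes are assumptions for illustration; they are not the paper's simulation framework.

```python
# Sketch: convert a biometric comparison score to a likelihood ratio by
# fitting densities to training scores under Hp (same source) and Hd
# (different sources), then expose the sampling variability of the LR
# across resampled small training sets.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic training scores (assumed distributions, illustration only).
hp_scores = rng.normal(2.0, 1.0, 200)   # same-source scores
hd_scores = rng.normal(0.0, 1.0, 200)   # different-source scores

def lr_of_score(score, hp, hd):
    # LR = p(score | Hp) / p(score | Hd), densities fitted by Gaussian KDE.
    return gaussian_kde(hp)(score)[0] / gaussian_kde(hd)(score)[0]

score = 1.5
# Bootstrap the training sets to see how much the LR moves around.
lrs = [lr_of_score(score,
                   rng.choice(hp_scores, 50),   # small training sets
                   rng.choice(hd_scores, 50))
       for _ in range(200)]
print(f"LR at score={score}: median={np.median(lrs):.2f}, "
      f"5-95% range=({np.percentile(lrs, 5):.2f}, "
      f"{np.percentile(lrs, 95):.2f})")
```

Shrinking the bootstrap sample from 50 toward 10 widens the 5-95% range sharply, which is the qualitative effect the study quantifies.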


Subjects
Likelihood Functions, Forensic Sciences, Humans
5.
IEEE Trans Pattern Anal Mach Intell; 36(1): 127-39, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24231871

ABSTRACT

The increase in the dimensionality of data sets often leads to problems during estimation, collectively denoted the curse of dimensionality. One problem of second-order statistics (SOS) estimation in high-dimensional data is that the resulting covariance matrices are not of full rank, so their inversion, needed for example in verification systems based on the likelihood ratio, is an ill-posed problem known as the singularity problem. A classical solution is to project the data onto a lower-dimensional subspace using principal component analysis (PCA), under the assumption that any further estimation on the dimension-reduced data is free from the effects of high dimensionality. Using theory on SOS estimation in high-dimensional spaces, we show that the PCA solution is far from optimal in verification systems if high dimensionality is the sole source of error. For moderate dimensionality it is already outperformed by solutions based on Euclidean distances, and it breaks down completely when the dimensionality becomes very high. We propose a new method, the fixed-point eigenwise correction, which does not have these disadvantages and performs close to optimal.
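
A minimal sketch of the singularity problem and the classical PCA workaround follows; the sample sizes and subspace dimension are arbitrary assumptions, and the paper's fixed-point eigenwise correction is not reproduced here.

```python
# Sketch: with fewer samples than dimensions, the sample covariance is
# rank-deficient, so likelihood-ratio-style inversion fails unless the
# data are first projected onto a PCA subspace.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dims = 50, 200          # n_samples < n_dims => singular
X = rng.normal(size=(n_samples, n_dims))

cov = np.cov(X, rowvar=False)
print("covariance rank:", np.linalg.matrix_rank(cov), "of", n_dims)

# Classical workaround: project onto the leading principal components.
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 20                               # assumed subspace dimension
Z = Xc @ Vt[:k].T                    # PCA-reduced data
cov_z = np.cov(Z, rowvar=False)
cov_z_inv = np.linalg.inv(cov_z)     # now well-posed (full rank k)
print("reduced covariance condition number:", np.linalg.cond(cov_z))
```

The paper's point is that this workaround only appears to solve the problem: the estimation error moved into the retained eigenvalues still degrades verification, which the eigenwise correction addresses directly.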

6.
IEEE Trans Med Imaging; 22(10): 1224-34, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14552577

ABSTRACT

Blood pool agents (BPAs) for contrast-enhanced (CE) magnetic resonance angiography (MRA) allow prolonged imaging times for higher contrast and resolution. Imaging is performed during the steady state, when the contrast agent is distributed through the complete vascular system. However, simultaneous venous and arterial enhancement in this steady state hampers interpretation. To improve visualization of the arteries and veins in steady-state BPA data, a semiautomated method for artery-vein separation is presented. In this method, the central arterial axis and central venous axis are used as initializations for two surfaces that evolve simultaneously to capture the arterial and venous parts of the vasculature within the level-set framework. Since arteries and veins can be in close proximity to each other, leakage from the evolving arterial (venous) surface into the venous (arterial) part of the vasculature is inevitable; in these situations, voxels are labeled arterial or venous based on the arrival time of the respective surface. The evolution is steered by external forces related to feature images derived from the image data and by internal forces related to the geometry of the level sets. In this paper, the robustness and accuracy of three external forces (based on image intensity, image gradient, and vessel-enhancement filtering), and combinations of them, are investigated on seven patient datasets. To this end, the level-set-based segmentations are compared to reference-standard manually obtained segmentations. The best results are achieved with a combination of intensity- and gradient-based forces and a smoothness constraint based on the curvature of the surface. Applying this combination to the seven datasets shows that, with minimal user interaction, artery-vein separation for improved arterial and venous visualization in BPA CE-MRA can be achieved.
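
As a much-simplified illustration of the level-set machinery the abstract describes, the snippet below performs explicit 2-D level-set updates combining an intensity-based external speed with a curvature (smoothness) term. The speed definition, parameters, and toy data are assumptions; arrival-time labeling of contested voxels and the gradient and vesselness forces are omitted.

```python
# Toy level-set sketch (2-D, not the paper's 3-D method): the zero level
# set of phi expands where the image matches a target intensity, damped
# by a curvature smoothness term.
import numpy as np

def level_set_step(phi, image, target_intensity=1.0,
                   alpha=1.0, beta=0.2, dt=0.1):
    # External speed: high where the image matches the target intensity.
    speed = 1.0 - np.abs(image - target_intensity)

    gy, gx = np.gradient(phi)                     # axis0, axis1 derivatives
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    # Curvature = divergence of the normalized gradient of phi.
    ky, kx = np.gradient(gy / norm)[0], np.gradient(gx / norm)[1]
    curvature = kx + ky

    grad_mag = np.sqrt(gx**2 + gy**2)
    # phi_t = -alpha * speed * |grad phi| + beta * curvature * |grad phi|:
    # the phi < 0 interior expands with the speed, smoothed by curvature.
    return phi + dt * (beta * curvature - alpha * speed) * grad_mag

# phi < 0 inside the evolving surface; seed it inside a toy "vessel".
phi = np.ones((64, 64)); phi[30:34, 30:34] = -1.0
image = np.zeros((64, 64)); image[28:36, 10:54] = 1.0
for _ in range(50):
    phi = level_set_step(phi, image)              # surface grows along vessel
```

In the paper's setting two such surfaces evolve at once, one per central axis, and a voxel reached by both is assigned to whichever surface arrived first.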


Subjects
Algorithms, Arteries/anatomy & histology, Contrast Media, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Angiography/methods, Subtraction Technique, Veins/anatomy & histology, Humans, Imaging, Three-Dimensional/methods, Pattern Recognition, Automated, Reproducibility of Results, Sensitivity and Specificity