Results 1 - 5 of 5
1.
Phys Rev Lett; 125(4): 047702, 2020 Jul 24.
Article in English | MEDLINE | ID: mdl-32794809

ABSTRACT

High-order perturbation theory has seen an unexpected recent revival for controlled calculations of quantum many-body systems, even at strong coupling. We adapt integration methods based on low-discrepancy sequences to this problem; they greatly outperform state-of-the-art diagrammatic Monte Carlo simulations. In practical applications, we show speed-ups of several orders of magnitude, with error scaling as fast as 1/N in the number of samples N, parametrically faster than the 1/√N of Monte Carlo simulations. We illustrate the technique with a solution of the Kondo ridge in quantum dots, where it enables large parameter sweeps.
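The scaling contrast the abstract describes can be seen on a toy integral. The following is an illustrative sketch (not the paper's code, and the test function is made up): both estimators average the integrand over N points in the unit hypercube, but the low-discrepancy (scrambled Sobol) points fill the domain far more evenly than pseudorandom ones, so the quadrature error shrinks much faster than the 1/√N of plain Monte Carlo.

```python
# Compare plain Monte Carlo with quasi-Monte Carlo (scrambled Sobol points)
# on a smooth test integral over [0,1]^d with known value 1.
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # prod(2*x_i): each factor integrates to 1 over [0,1]
    return np.prod(2.0 * x, axis=1)

d, n = 4, 2**12
rng = np.random.default_rng(0)

# Plain Monte Carlo: error decays ~ 1/sqrt(N)
mc_est = integrand(rng.random((n, d))).mean()

# Low-discrepancy sequence: error can decay ~ 1/N for smooth integrands
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_est = integrand(sobol.random(n)).mean()

print(abs(mc_est - 1.0), abs(qmc_est - 1.0))
```

With these seeds the Sobol estimate is far closer to the exact value than the pseudorandom one at the same sample count, which is the practical content of the 1/N versus 1/√N claim.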

2.
Med Image Anal; 58: 101551, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31499319

ABSTRACT

The advent of deep learning has pushed medical image analysis to new levels, rapidly replacing more traditional machine learning and computer vision pipelines. However, segmenting and labelling anatomical regions remains challenging owing to appearance variations, imaging artifacts, the paucity and variability of annotated data, and the difficulty of fully exploiting domain constraints such as anatomical knowledge about inter-region relationships. We address the last point, improving the network's region-labeling consistency by introducing NonAdjLoss, an adjacency-graph based auxiliary training loss that penalizes outputs containing regions with anatomically incorrect adjacency relationships. NonAdjLoss supports both fully-supervised training and a semi-supervised extension in which it is applied to unlabeled supplementary training data. The approach substantially reduces segmentation anomalies on the MICCAI-2012 and IBSRv2 brain MRI datasets and the Anatomy3 whole-body CT dataset, especially when semi-supervised training is included.
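The core idea of an adjacency-graph penalty can be sketched on hard label maps. This is a minimal illustration, not the paper's NonAdjLoss implementation (which operates on differentiable network outputs); the function names and the toy "anatomical" graph are invented for the example. We count which label pairs touch in the predicted segmentation and sum the counts for pairs the graph forbids.

```python
# Penalize label maps whose region adjacencies violate a known adjacency graph.
import numpy as np

def adjacency_counts(labels, n_classes):
    """Count horizontally and vertically adjacent label pairs in a 2D label map."""
    a = np.zeros((n_classes, n_classes), dtype=float)
    for s, t in [(labels[:, :-1], labels[:, 1:]),   # horizontal neighbours
                 (labels[:-1, :], labels[1:, :])]:  # vertical neighbours
        np.add.at(a, (s.ravel(), t.ravel()), 1.0)
    return a + a.T  # symmetrize: adjacency is undirected

def non_adjacency_penalty(labels, allowed, n_classes):
    """Sum of adjacency counts over pairs forbidden by the graph (allowed[i,j]=0)."""
    a = adjacency_counts(labels, n_classes)
    return float((a * (1.0 - allowed)).sum())

# Toy graph: classes 0 (background), 1, 2; regions 1 and 2 must never touch.
allowed = np.ones((3, 3))
allowed[1, 2] = allowed[2, 1] = 0.0
good = np.array([[1, 0, 2],
                 [1, 0, 2]])   # 1 and 2 separated by background
bad = np.array([[1, 2, 2],
                [1, 2, 2]])    # 1 and 2 directly adjacent
print(non_adjacency_penalty(good, allowed, 3))  # 0.0
print(non_adjacency_penalty(bad, allowed, 3))
```

In the actual loss this count would be computed from soft class-probability maps so it stays differentiable and can be backpropagated through the network.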


Subjects
Brain Mapping/methods; Deep Learning; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Supervised Machine Learning; Tomography, X-Ray Computed; Humans
3.
IEEE Trans Image Process; 21(9): 4232-43, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22614643

ABSTRACT

We introduce the hierarchical Markov aspect model (HMAM), a computationally efficient graphical model for densely labeling large remote sensing images with their underlying terrain classes. HMAM resolves local ambiguities efficiently by combining the benefits of quadtree representations and aspect models: the former incorporate multiscale visual features and hierarchical smoothing to provide improved local label consistency, while the latter sharpen the labelings by focusing them on the classes that are most relevant for the broader local image context. The full HMAM model takes a grid of local hierarchical Markov quadtrees over image patches and augments it by incorporating a probabilistic latent semantic analysis aspect model over a larger local image tile at each level of the quadtree forest. Bag-of-words visual features are extracted for each level and patch, and given these, the parent-child transition probabilities from the quadtree and the label probabilities from the tile-level aspect models, an efficient forwards-backwards inference pass allows local posteriors for the class labels to be obtained for each patch. Variational expectation-maximization is then used to train the complete model from either pixel-level or tile-keyword-level labelings. Experiments on a complete TerraSAR-X synthetic aperture radar terrain map with pixel-level ground truth show that HMAM is both accurate and efficient, providing significantly better results than comparable single-scale aspect models with only a modest increase in training and test complexity. Keyword-level training greatly reduces the cost of providing training data with little loss of accuracy relative to pixel-level training.
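The forwards-backwards pass over a quadtree can be illustrated on the smallest possible tree: one parent node with four children. This is a generic upward/downward belief-propagation sketch, not the HMAM code; the transition matrix, prior, and leaf likelihoods are random stand-ins for the quantities the model would supply. Each child sends an upward message summarizing its evidence; the downward pass then combines the root posterior (with the child's own evidence divided out) with the transition probabilities and the child's likelihood to give exact per-node class posteriors.

```python
# Upward/downward inference on a one-parent, four-child quadtree cell.
import numpy as np

K = 3                                    # number of terrain classes
rng = np.random.default_rng(1)
T = rng.dirichlet(np.ones(K), size=K)    # T[p, c] = P(child = c | parent = p)
prior = np.full(K, 1.0 / K)              # prior over the parent (root) label
leaf_lik = rng.dirichlet(np.ones(K), size=4)  # per-child observation likelihoods

# Upward pass: child i sends m_i[p] = sum_c T[p, c] * lik_i[c] to the parent.
up = leaf_lik @ T.T                      # shape (4, K), indexed by parent label
root_post = prior * up.prod(axis=0)      # combine all upward messages
root_post /= root_post.sum()

# Downward pass: child posterior combines the parent belief (excluding that
# child's own upward message) with the transition and the child's likelihood.
child_post = np.empty((4, K))
for i in range(4):
    parent_excl = root_post / up[i]      # parent belief without child i's evidence
    child_post[i] = leaf_lik[i] * (parent_excl @ T)
    child_post[i] /= child_post[i].sum()

print(np.round(child_post, 3))
```

Running the same two passes level by level over a deeper quadtree yields posteriors for every patch in a single sweep, which is what makes this family of models computationally attractive.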

4.
IEEE Trans Image Process; 19(6): 1635-50, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20172829

ABSTRACT

Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching, kernel-based feature extraction and multiple feature fusion. Specifically, we make three main contributions: 1) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; 2) we introduce local ternary patterns (LTP), a generalization of the local binary pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions, and we show that replacing comparisons based on local spatial histograms with a distance transform based similarity metric further improves the performance of LBP/LTP based face recognition; and 3) we further improve robustness by adding kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources (Gabor wavelets and LBP), showing that the combination is considerably more accurate than either feature set alone. The resulting method provides state-of-the-art performance on three data sets that are widely used for testing recognition under difficult illumination conditions: Extended Yale-B, CAS-PEAL-R1, and Face Recognition Grand Challenge version 2 experiment 4 (FRGC-204). For example, on the challenging FRGC-204 data set it halves the error rate relative to previously published methods, achieving a face verification rate of 88.1% at 0.1% false accept rate. Further experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions.
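The LTP descriptor is easy to illustrate on a single 3x3 neighbourhood. Where LBP thresholds each neighbour against the centre pixel to produce one bit, LTP uses a tolerance band of width ±t around the centre, giving a three-valued code that is then conventionally split into two binary patterns (the "upper" pattern for neighbours above the band, the "lower" for those below). The sketch below follows that standard construction; the threshold value and the sample patch are illustrative.

```python
# Local ternary pattern (LTP) code for one 3x3 neighbourhood,
# split into the usual upper/lower 8-bit binary patterns.
import numpy as np

def ltp_codes(patch, t=5):
    """patch: 3x3 intensity array; returns (upper, lower) 8-bit codes."""
    c = patch[1, 1]
    # 8 neighbours in clockwise order starting at the top-left corner
    nbrs = patch.ravel()[[0, 1, 2, 5, 8, 7, 6, 3]]
    # ternary code: +1 above the band, -1 below, 0 inside it
    tern = np.where(nbrs > c + t, 1, np.where(nbrs < c - t, -1, 0))
    upper = sum(1 << i for i, v in enumerate(tern) if v == 1)
    lower = sum(1 << i for i, v in enumerate(tern) if v == -1)
    return upper, lower

patch = np.array([[90, 100, 120],
                  [80, 100, 140],
                  [60, 100,  98]])
print(ltp_codes(patch))  # (12, 193)
```

The tolerance band is what makes LTP less sensitive than LBP to noise in uniform regions: small fluctuations around the centre value map to the 0 state instead of flipping bits.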


Subjects
Biometry/methods; Face/anatomy & histology; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Lighting/methods; Pattern Recognition, Automated/methods; Algorithms; Humans; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique
5.
IEEE Trans Pattern Anal Mach Intell; 28(1): 44-58, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16402618

ABSTRACT

We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, Relevance Vector Machine (RVM) regression, and Support Vector Machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4-6 degrees are obtained for a variety of walking motions.
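The regression step at the heart of this approach can be sketched with the simplest of the evaluated methods, ridge regression: a linear map from descriptor vectors to pose-parameter vectors, fitted in closed form with an L2 penalty. The data below are synthetic stand-ins (random features and a random true map), not silhouette shape-context histograms or motion-capture poses; only the estimator itself is the real technique.

```python
# Ridge regression from descriptor vectors X to pose-parameter vectors Y.
import numpy as np

rng = np.random.default_rng(0)
n, d_x, d_y = 200, 60, 10        # samples, descriptor dim, pose dim (illustrative)
X = rng.standard_normal((n, d_x))
W_true = rng.standard_normal((d_x, d_y))
Y = X @ W_true + 0.01 * rng.standard_normal((n, d_y))  # noisy linear targets

lam = 1.0                        # ridge regularizer
# Closed-form solution: W = (X^T X + lam * I)^(-1) X^T Y
W = np.linalg.solve(X.T @ X + lam * np.eye(d_x), X.T @ Y)

pred = X @ W
rmse = np.sqrt(np.mean((pred - Y) ** 2))
print(rmse)
```

The RVM and kernel variants the abstract compares replace this dense linear solve with sparse or kernelized regressors, trading a small amount of accuracy or training cost for much sparser models.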


Subjects
Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Joints/anatomy & histology; Joints/physiology; Pattern Recognition, Automated/methods; Posture/physiology; Algorithms; Humans; Photography/methods; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique