Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38502619

ABSTRACT

Photorealistic stylization of 3D scenes aims to generate photorealistic images from arbitrary novel views according to a given style image, while ensuring consistency when rendering video from different viewpoints. Some existing stylization methods based on neural radiance fields can effectively predict stylized scenes by combining the features of the style image with multi-view images during training. However, these methods produce novel-view images with undesirable artifacts, and they cannot achieve universal photorealistic stylization of a 3D scene: each new style image requires retraining the neural-radiance-field-based scene representation network. To address these issues, we propose a novel photorealistic 3D scene stylization framework that transfers the style of a 2D image to a 3D scene for novel-view video rendering. We first pre-train a 2D photorealistic style transfer network that can perform photorealistic style transfer between any content image and any style image. We then optimize a 3D scene using voxel features to obtain its geometric representation. Finally, we jointly optimize a hypernetwork to realize photorealistic style transfer for arbitrary style images. In the transfer stage, the pre-trained 2D photorealistic network constrains the photorealistic style across different views and style images of the 3D scene. Experimental results show that our method not only realizes photorealistic 3D style transfer for arbitrary style images, but also outperforms existing methods in visual quality and consistency. Project page: https://semchan.github.io/UPST_NeRF/.
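As a rough illustration of how a hypernetwork can condition a radiance field's appearance on a style embedding, here is a minimal PyTorch sketch. It is not the authors' code: the module names, dimensions, and architecture (a style encoder plus a hypernetwork that emits the weights of a small per-point color MLP applied to voxel features) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Maps a 2D style image to a compact style embedding (illustrative)."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, style_img):              # style_img: (B, 3, H, W)
        feat = self.net(style_img).flatten(1)  # (B, 64)
        return self.fc(feat)                   # (B, embed_dim)

class HyperColorHead(nn.Module):
    """Hypernetwork: predicts the weights of a small per-point color MLP
    from the style embedding, so one trained scene can be re-colored for
    arbitrary styles without retraining the geometry."""
    def __init__(self, embed_dim=64, feat_dim=32, hidden=16):
        super().__init__()
        self.feat_dim, self.hidden = feat_dim, hidden
        n_params = feat_dim * hidden + hidden + hidden * 3 + 3
        self.hyper = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(), nn.Linear(128, n_params)
        )

    def forward(self, voxel_feat, style_emb):
        # voxel_feat: (N, feat_dim) features sampled from the voxel grid
        # style_emb: (1, embed_dim) embedding of the style image
        p = self.hyper(style_emb).squeeze(0)
        f, h = self.feat_dim, self.hidden
        w1 = p[: f * h].view(h, f)
        b1 = p[f * h : f * h + h]
        w2 = p[f * h + h : f * h + h + h * 3].view(3, h)
        b2 = p[-3:]
        x = torch.relu(voxel_feat @ w1.t() + b1)
        return torch.sigmoid(x @ w2.t() + b2)  # per-point RGB in [0, 1]
```

In a full pipeline along the lines the abstract describes, these per-point RGB values would be composited by volume rendering, and the pre-trained 2D photorealistic network would supply the style loss that keeps renderings from different viewpoints consistent.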

2.
Methods; 214: 48-59, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37120080

ABSTRACT

Image anomaly detection (AD) is widely studied in computer vision. Detecting anomalies in high-dimensional data such as images, which contain noise and complex backgrounds, remains challenging when only imbalanced or incomplete data are available. Some deep learning methods can be trained in an unsupervised way, mapping the input onto low-dimensional manifolds so that, after dimension reduction, anomalies deviate more strongly from normal samples. However, a single low-dimensional latent space has limited representational power, because noise and irrelevant features are mapped into the same space, so the learned manifolds are not discriminative enough for anomaly detection. To address this problem, this study proposes a new autoencoder framework, LSP-CAE, which builds two trainable, mutually orthogonal, complementary subspaces in the latent space through a latent subspace projection (LSP) mechanism. Specifically, latent subspace projection is used to train a latent image subspace (LIS) and a latent kernel subspace (LKS) in the latent space of the autoencoder-like model, which enhances the ability to learn different kinds of features from an input instance. Through end-to-end training, the features of normal data are projected into the latent image subspace, while the latent kernel subspace is trained to absorb information irrelevant to normal features. To verify the generality and effectiveness of the proposed method, we replace the convolutional network with a fully connected network and conduct experiments on real-world medical datasets. At test time, an anomaly score based on the projection norms in the two subspaces is used to evaluate anomalies. Consequently, the proposed method achieves the best performance on four public datasets in comparison with state-of-the-art methods.
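To make the two-subspace idea concrete, here is a minimal PyTorch sketch of an autoencoder whose latent code is decomposed by two trainable projections. The dimensions, loss weights, and exact form of the anomaly score are assumptions for illustration, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSPAutoencoder(nn.Module):
    """Sketch: an autoencoder whose latent code is split by two trainable
    projections onto complementary subspaces, a latent image subspace (LIS)
    for normal-data features and a latent kernel subspace (LKS) for
    irrelevant/noise features (names follow the abstract; details assumed)."""
    def __init__(self, in_dim=784, latent_dim=64, sub_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim)
        )
        # Trainable bases of the two subspaces (columns span each subspace).
        self.U_img = nn.Parameter(torch.randn(latent_dim, sub_dim) * 0.1)
        self.U_ker = nn.Parameter(torch.randn(latent_dim, sub_dim) * 0.1)

    def project(self, z, U):
        # Orthogonal projection of z onto span(U): z U (U^T U)^-1 U^T
        G = U.t() @ U + 1e-6 * torch.eye(U.shape[1], device=U.device)
        return z @ U @ torch.linalg.inv(G) @ U.t()

    def forward(self, x):
        z = self.encoder(x)
        z_img = self.project(z, self.U_img)  # normal-feature component
        z_ker = self.project(z, self.U_ker)  # irrelevant component
        x_hat = self.decoder(z_img)          # reconstruct from the LIS only
        return x_hat, z_img, z_ker

def training_loss(model, x):
    x_hat, z_img, z_ker = model(x)
    recon = F.mse_loss(x_hat, x)
    # Keep the kernel-subspace component small for normal training data.
    norm_reg = z_ker.pow(2).sum(1).mean()
    # Encourage the two subspaces to stay mutually orthogonal.
    ortho = (model.U_img.t() @ model.U_ker).pow(2).sum()
    return recon + 0.1 * norm_reg + 0.1 * ortho  # weights are assumptions

def anomaly_score(model, x):
    # Larger kernel-subspace norm relative to image-subspace norm => anomaly
    # (one plausible score built from the two projection norms).
    _, z_img, z_ker = model(x)
    return z_ker.norm(dim=1) / (z_img.norm(dim=1) + 1e-8)
```

Training on normal data only pushes reconstructable structure into the LIS, so at test time an anomalous input leaves a disproportionately large residue in the LKS, which the score exposes.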


Subjects
Algorithms