Results 1 - 4 of 4
1.
Nanotechnology; 34(41), 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37437554

ABSTRACT

It is important to develop novel nanocomposites as electrode materials for supercapacitors (SCs). MoSe2 porous nanospheres were prepared by a one-step hydrothermal method, and polyaniline (PANI) nanosheets were grown in situ to obtain MoSe2/PANI capsule nanospheres (CNs). By varying the amount of aniline, MoSe2/PANI-16 CNs were found to have the best electrochemical performance, delivering a high specific capacitance of 753.2 F g-1 at a current density of 1 A g-1. In addition, the interfacial electron transport path was clarified: C-Mo-Se bridge bonds may form and enable rapid electron transfer. The reaction kinetics were also explored. The large specific surface area of the MoSe2/PANI CNs provided more reactive sites, so the pseudocapacitive contribution was much larger than the diffusion-controlled contribution. The assembled MoSe2/PANI//activated carbon asymmetric supercapacitor has an energy density of 20.1 Wh kg-1 at a power density of 650 W kg-1. These results indicate that MoSe2/PANI CNs are a promising electrode material.
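
As a quick consistency check on the reported figures, the standard galvanostatic-discharge relations connect energy density, power density, and discharge time. The sketch below is illustrative only; the voltage window is an assumption, not a value from the abstract.

```python
# Back-of-the-envelope check of the reported asymmetric-cell metrics,
# using the standard galvanostatic-discharge relations:
#   E (Wh kg^-1) = C * V^2 / (2 * 3.6),   P (W kg^-1) = 3600 * E / dt
# The abstract reports only E = 20.1 Wh kg^-1 and P = 650 W kg^-1;
# the 1.5 V operating window below is an assumed value for illustration.

E = 20.1    # energy density, Wh kg^-1 (reported)
P = 650.0   # power density, W kg^-1 (reported)

# Discharge time implied by the reported pair (from P = 3600 * E / dt):
dt = 3600 * E / P
print(f"implied discharge time: {dt:.0f} s")   # ~111 s

# Cell capacitance implied for an assumed 1.5 V voltage window:
V = 1.5                    # assumed operating window, V
C = 2 * 3.6 * E / V**2     # F g^-1, from E = C * V^2 / (2 * 3.6)
print(f"implied cell capacitance at {V} V: {C:.1f} F g^-1")   # ~64.3
```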

2.
IEEE Trans Vis Comput Graph; 27(3): 2028-2040, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31514140

ABSTRACT

There have been significant advances in capturing gigapixel panoramas (GPPs). However, solutions for viewing GPPs on head-mounted displays (HMDs) are lagging: an immersive experience requires ultra-fast rendering, while directly loading a GPP onto the GPU is infeasible due to limited texture memory. In this paper, we present a novel out-of-core rendering technique that supports not only classic panning, tilting, and zooming but also dynamic refocusing for viewing a GPP on an HMD. Inspired by the network packet transmission mechanisms in distributed visualization, our approach employs hierarchical image tiling and on-demand data updates between main memory and GPU memory. We further present a multi-resolution rendering scheme and a refocused light field rendering technique based on RGBD GPPs with minimal memory overhead. Comprehensive experiments demonstrate that our technique is highly efficient and reliable, achieving ultra-high frame rates even on low-end GPUs. With an embedded gaze tracker, our technique enables immersive panorama viewing experiences with unprecedented resolution, field of view, and focus variation while maintaining smooth spatial, angular, and focal transitions.
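
The hierarchical tiling and on-demand update idea can be illustrated with a small sketch: only the tiles of a mip-style pyramid that cover the current view are kept in a bounded, LRU-evicted GPU-side cache. Class and parameter names below are hypothetical, not the paper's API.

```python
from collections import OrderedDict

# Minimal sketch of out-of-core tile streaming for a gigapixel panorama:
# a mip-style pyramid of fixed-size tiles, with only the tiles covering
# the current view resident in a bounded LRU "GPU" cache.

class TilePyramidCache:
    def __init__(self, full_w, full_h, tile_px=256, capacity=512):
        self.full_w, self.full_h = full_w, full_h
        self.tile_px = tile_px
        self.capacity = capacity           # max tiles resident at once
        self.resident = OrderedDict()      # (level, tx, ty) -> tile data

    def visible_tiles(self, cx, cy, view_w, view_h, level):
        """Tile keys covering a view_w x view_h pixel window centered
        at normalized (cx, cy), at the given pyramid level."""
        lw = max(1, self.full_w >> level)  # level image width, pixels
        lh = max(1, self.full_h >> level)
        x0 = max(0, int(cx * lw - view_w / 2))
        y0 = max(0, int(cy * lh - view_h / 2))
        x1 = min(lw - 1, int(cx * lw + view_w / 2))
        y1 = min(lh - 1, int(cy * lh + view_h / 2))
        t = self.tile_px
        return [(level, tx, ty)
                for ty in range(y0 // t, y1 // t + 1)
                for tx in range(x0 // t, x1 // t + 1)]

    def request(self, keys, load_tile):
        """Ensure tiles are resident, evicting least-recently-used ones."""
        for key in keys:
            if key in self.resident:
                self.resident.move_to_end(key)        # recently used
            else:
                self.resident[key] = load_tile(key)   # disk -> GPU upload
                if len(self.resident) > self.capacity:
                    self.resident.popitem(last=False) # evict LRU tile
```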

3.
Article in English | MEDLINE | ID: mdl-32790628

ABSTRACT

Recent progress in visual tracking has greatly improved tracking performance. However, challenges such as occlusion and view change remain obstacles to real-world deployment. A natural response to these challenges is to use multiple cameras with multiview inputs, though existing systems are mostly limited to specific targets (e.g., humans), static cameras, and/or require camera calibration. To overcome these limitations, we propose a generic multiview tracking (GMT) framework that allows camera movement while requiring neither a specific object model nor camera calibration. A key innovation in our framework is a cross-camera trajectory prediction network (TPN), which implicitly and dynamically encodes camera geometric relations and hence addresses missing-target issues such as occlusion. Moreover, during tracking, we assemble information across different cameras to dynamically update a novel collaborative correlation filter (CCF), which is shared among cameras to achieve robustness against view change. The two components are integrated into a correlation filter tracking framework, where features are trained offline using existing single-view tracking datasets. For evaluation, we first contribute a new generic multiview tracking dataset (GMTD) with careful annotations, and then run experiments on the GMTD and CAMPUS datasets. The proposed GMT algorithm shows clear advantages in robustness over state-of-the-art trackers.
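
For readers unfamiliar with the correlation-filter backbone that the CCF builds on, a minimal MOSSE-style sketch follows. The cross-camera "sharing" is approximated here by pooling per-camera filter statistics; this is an illustration of the general technique, not the paper's CCF update.

```python
import numpy as np

# MOSSE-style correlation filter in the Fourier domain: the filter is
# the ratio of running sufficient statistics A (numerator) and B
# (denominator), updated with a learning rate.

class CorrelationFilter:
    def __init__(self, lam=1e-3, lr=0.125):
        self.lam, self.lr = lam, lr     # regularizer, learning rate
        self.A = self.B = None          # running numerator / denominator

    def update(self, patch, target_response):
        F = np.fft.fft2(patch)          # training patch spectrum
        G = np.fft.fft2(target_response)  # desired (Gaussian) response
        A_new = G * np.conj(F)
        B_new = F * np.conj(F) + self.lam
        if self.A is None:
            self.A, self.B = A_new, B_new
        else:
            self.A = (1 - self.lr) * self.A + self.lr * A_new
            self.B = (1 - self.lr) * self.B + self.lr * B_new

    def respond(self, patch):
        """Correlation response map; its peak locates the target."""
        H = self.A / self.B
        return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

def merge(filters):
    """Share one filter across cameras by pooling filter statistics
    (an illustrative stand-in for the collaborative update)."""
    shared = CorrelationFilter()
    shared.A = np.mean([f.A for f in filters], axis=0)
    shared.B = np.mean([f.B for f in filters], axis=0)
    return shared
```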

4.
IEEE Trans Pattern Anal Mach Intell; 42(7): 1570-1581, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32305897

ABSTRACT

Fuzzy objects composed of hair, fur, or feathers are impossible to scan even with the latest active or passive 3D scanners. We present a novel and practical neural rendering (NR) technique, the neural opacity point cloud (NOPC), that allows high-quality rendering of such fuzzy objects from any viewpoint. NOPC employs a learning-based scheme to extract geometric and appearance features, including opacity, from 3D point clouds. It then maps the 3D features onto virtual viewpoints, where a new U-Net-based NR module handles noisy and incomplete geometry while maintaining translation equivariance. Comprehensive experiments on existing and new datasets show that NOPC produces photorealistic renderings from multi-view setups such as a turntable system for capturing hair and furry toys.
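
The core projection step, mapping per-point features and opacities onto a virtual viewpoint, can be sketched as a front-to-back alpha-compositing splat over a 2D feature map, which a U-Net-style renderer would then translate into an image. The camera model and function names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Project per-point (feature, opacity) pairs into a virtual view and
# alpha-composite them front to back into a 2D feature map.

def splat_features(points, feats, alphas, K, R, t, H, W):
    """points: (N,3) world coords; feats: (N,C); alphas: (N,) in [0,1].
    K: (3,3) intrinsics; R, t: world-to-camera rotation/translation."""
    cam = points @ R.T + t                 # world -> camera space
    z = cam[:, 2]
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide
    order = np.argsort(z)                  # composite front to back
    img = np.zeros((H, W, feats.shape[1]))
    trans = np.ones((H, W))                # remaining transmittance
    for i in order:
        if z[i] <= 0:
            continue                       # point behind the camera
        u, v = int(round(uv[i, 0])), int(round(uv[i, 1]))
        if 0 <= u < W and 0 <= v < H:
            w = trans[v, u] * alphas[i]    # alpha-compositing weight
            img[v, u] += w * feats[i]
            trans[v, u] *= 1.0 - alphas[i]
    return img, 1.0 - trans                # feature map + coverage mask
```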
