IEEE Trans Pattern Anal Mach Intell; 45(12): 14284-14300, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37552593

ABSTRACT

This article presents a simple yet effective multilayer perceptron (MLP) architecture, namely CycleMLP, a versatile neural backbone network capable of solving various dense visual prediction tasks such as object detection, segmentation, and human pose estimation. Compared to recent advanced MLP architectures such as MLP-Mixer (Tolstikhin et al. 2021), ResMLP (Touvron et al. 2021), and gMLP (Liu et al. 2021), which are sensitive to image size and therefore infeasible for dense prediction tasks, CycleMLP has two appealing advantages: 1) it can cope with various spatial sizes of images; 2) it achieves linear computational complexity with respect to image size by using local windows, whereas previous MLPs incur O(N²) computational complexity due to their full connections in space. In addition, the relationship between convolution, multi-head self-attention in Transformers, and CycleMLP is discussed through an intuitive theoretical analysis. We build a family of models that surpass state-of-the-art MLP and Transformer models, e.g., Swin Transformer (Liu et al. 2021), while using fewer parameters and FLOPs. CycleMLP expands the applicability of MLP-like models, making them versatile backbone networks that achieve competitive results on dense prediction tasks. For example, CycleMLP-Tiny outperforms Swin-Tiny by 1.3% mIoU on the ADE20K dataset with fewer FLOPs. Moreover, CycleMLP also shows excellent zero-shot robustness on the ImageNet-C dataset.
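To make the complexity claim concrete, below is a minimal PyTorch sketch of the cyclic-offset idea the abstract alludes to: each channel samples its input from a spatial position shifted within a small local window, with the offset cycling along the channel dimension. The class name CycleFC, the default window size of 7, and the roll-based shift are illustrative assumptions on my part; the official CycleMLP implementation realizes this sampling differently, and this sketch only approximates the mechanism.

```python
import torch
import torch.nn as nn

class CycleFC(nn.Module):
    """Illustrative cycle fully-connected layer (a sketch, not the official code).

    Each channel reads its input from a spatial offset that cycles through a
    small local window, so spatial context is mixed at a cost linear in the
    number of pixels, instead of the O(N^2) cost of a fully connected
    spatial MLP.
    """

    def __init__(self, dim: int, window: int = 7):
        super().__init__()
        self.window = window
        # Channel-mixing projection applied after the cyclic spatial shift.
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C); works for any H and W, unlike a spatial MLP
        # whose weight matrix is tied to a fixed number of pixels.
        B, H, W, C = x.shape
        shifted = torch.empty_like(x)
        for c in range(C):
            # Offset cycles over the local window as c increases.
            off = (c % self.window) - self.window // 2
            shifted[..., c] = torch.roll(x[..., c], shifts=off, dims=1)
        return self.proj(shifted)

# Usage: the same layer handles different spatial resolutions.
layer = CycleFC(dim=64)
out_a = layer(torch.randn(2, 56, 56, 64))   # (2, 56, 56, 64)
out_b = layer(torch.randn(2, 80, 120, 64))  # (2, 80, 120, 64)
```

Because the shift is per-channel and confined to a local window, the spatial mixing costs O(H·W·C) on top of the channel projection, which is how such a layer can stay linear in image size while remaining applicable to arbitrary input resolutions.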
