StochCA: A novel approach for exploiting pretrained models with cross-attention.
Seo, Seungwon; Lee, Suho; Hwang, Sangheum.
Affiliation
  • Seo S; Department of Data Science, Seoul National University of Science and Technology, Seoul 01811, South Korea. Electronic address: swseo@ds.seoultech.ac.kr.
  • Lee S; Department of Data Science, Seoul National University of Science and Technology, Seoul 01811, South Korea. Electronic address: swlee@ds.seoultech.ac.kr.
  • Hwang S; Department of Data Science, Seoul National University of Science and Technology, Seoul 01811, South Korea; Department of Industrial Engineering, Seoul National University of Science and Technology, Seoul 01811, South Korea; Research Center for Electrical and Information Technology, Seoul National University of Science and Technology, Seoul 01811, South Korea. Electronic address: shwang@seoultech.ac.kr.
Neural Netw; 180: 106663, 2024 Aug 23.
Article in En | MEDLINE | ID: mdl-39208459
ABSTRACT
Utilizing large-scale pretrained models is a well-known strategy to enhance performance on various target tasks. It is typically achieved through fine-tuning pretrained models on target tasks. However, naïve fine-tuning may not fully leverage knowledge embedded in pretrained models. In this study, we introduce a novel fine-tuning method, called stochastic cross-attention (StochCA), specific to Transformer architectures. This method modifies the Transformer's self-attention mechanism to selectively utilize knowledge from pretrained models during fine-tuning. Specifically, in each block, instead of self-attention, cross-attention is performed stochastically according to a predefined probability, where keys and values are extracted from the corresponding block of a pretrained model. By doing so, queries and channel-mixing multi-layer perceptron layers of a target model are fine-tuned to target tasks to learn how to effectively exploit rich representations of pretrained models. To verify the effectiveness of StochCA, extensive experiments are conducted on benchmarks in the areas of transfer learning and domain generalization, where the exploitation of pretrained models is critical. Our experimental results show the superiority of StochCA over state-of-the-art approaches in both areas. Furthermore, we demonstrate that StochCA is complementary to existing approaches, i.e., it can be combined with them to further improve performance. We release the code at https://github.com/daintlab/stochastic_cross_attention.
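The following is a minimal sketch of the stochastic cross-attention idea as described in the abstract, written in PyTorch. Module and parameter names (StochCABlock, cross_attn_prob, pretrained_hidden) are illustrative assumptions, not the authors' code; see the released repository for the actual implementation.

import torch
import torch.nn as nn


class StochCABlock(nn.Module):
    """One Transformer block that, with a predefined probability, swaps
    self-attention for cross-attention against a frozen pretrained model."""

    def __init__(self, dim, num_heads, cross_attn_prob=0.5):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.cross_attn_prob = cross_attn_prob  # predefined probability

    def forward(self, x, pretrained_hidden=None):
        # pretrained_hidden: hidden states from the corresponding block of the
        # frozen pretrained model, assumed to be precomputed; normalization
        # details are simplified in this sketch.
        h = self.norm1(x)
        use_cross = (
            self.training
            and pretrained_hidden is not None
            and torch.rand(1).item() < self.cross_attn_prob
        )
        if use_cross:
            # Cross-attention: queries come from the target model, while keys
            # and values come from the pretrained model's representations.
            attn_out, _ = self.attn(h, pretrained_hidden, pretrained_hidden)
        else:
            # Ordinary self-attention.
            attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        return x

Because cross-attention reuses the target block's own projection weights, the target model's queries and channel-mixing MLP layers are the components that learn to exploit the pretrained representations, which is the mechanism the abstract describes.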
Keywords
Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Neural Netw Journal subject: NEUROLOGY Year of publication: 2024 Document type: Article