Results 1 - 2 of 2
1.
Front Microbiol ; 14: 1111794, 2023.
Article in English | MEDLINE | ID: mdl-36819037

ABSTRACT

Microalgae are a large group of organisms that can produce various useful substances through photosynthesis. To serve as "chassis cells" for food, medicine, energy, and environmental protection, and thus to realize the value of microalgae resources, microalgae must be genetically modified at the molecular level. Insertional mutagenesis with transposons is a practical approach to probing the function of microalgal genes. By summarizing methods for sequencing transposon insertion sites, this manuscript provides theoretical and technical support for applying transposons to the study of microalgal gene function.
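The core of insertion-site sequencing is to find reads that span the junction between a known transposon end and flanking genomic DNA, then map the flank back to the reference. A minimal illustrative sketch of that idea (the reference, the terminal-repeat sequence, and all thresholds are hypothetical, not taken from the article):

```python
# Hypothetical sketch: locating transposon insertion sites from reads.
# REFERENCE and TIR are made-up example sequences, not from the article.

REFERENCE = "ATGCCGTACGGATTACAGGCTTAACCGGTTAAGCTAGCTAGGCTA"
TIR = "CAGTTGAA"  # assumed transposon terminal inverted repeat

def insertion_sites(reads, reference=REFERENCE, tir=TIR):
    """Return reference positions where the genomic flank adjacent to the
    transposon end maps, i.e. candidate insertion sites."""
    sites = set()
    for read in reads:
        idx = read.find(tir)
        if idx == -1:
            continue                      # read does not span the junction
        flank = read[idx + len(tir):]     # genomic sequence after the TIR
        if len(flank) >= 12:              # require a mappable flank
            pos = reference.find(flank[:12])
            if pos != -1:
                sites.add(pos)
    return sorted(sites)
```

Real pipelines would use proper alignment (e.g. BWA/Bowtie) rather than exact string search, but the junction-then-map logic is the same.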

2.
Proc Conf Assoc Comput Linguist Meet ; 2020: 2359-2369, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33782629

ABSTRACT

Recent Transformer-based contextual word representations, including BERT and XLNet, have shown state-of-the-art performance in multiple disciplines within NLP. Fine-tuning these pre-trained contextual models on task-specific datasets has been the key to achieving superior downstream performance. While fine-tuning is straightforward for lexical applications (applications with only the language modality), it is not trivial for multimodal language (a growing area in NLP focused on modeling face-to-face communication), since pre-trained models lack the components to accept the two extra modalities of vision and acoustics. In this paper, we propose an attachment to BERT and XLNet called the Multimodal Adaptation Gate (MAG). MAG allows BERT and XLNet to accept multimodal nonverbal data during fine-tuning by generating a shift to the internal representations of BERT and XLNet, conditioned on the visual and acoustic modalities. In our experiments, we study the commonly used CMU-MOSI and CMU-MOSEI datasets for multimodal sentiment analysis. Fine-tuning MAG-BERT and MAG-XLNet significantly boosts sentiment analysis performance over previous baselines as well as over language-only fine-tuning of BERT and XLNet. On the CMU-MOSI dataset, MAG-XLNet achieves human-level multimodal sentiment analysis performance for the first time in the NLP community.
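The shift the abstract describes can be sketched as a gated displacement added to a lexical embedding, with its magnitude capped relative to the embedding's norm. This is a minimal numpy sketch of that idea; the dimensions, random weights, and the `beta` scaling constant are illustrative assumptions, not the paper's trained parameters:

```python
import numpy as np

# Illustrative sketch of a Multimodal Adaptation Gate (MAG): shift a lexical
# embedding z by a displacement H conditioned on visual (v) and acoustic (a)
# vectors. All weights and sizes below are made-up for demonstration.

rng = np.random.default_rng(0)
d_text, d_vis, d_ac = 8, 4, 3

W_gv = rng.standard_normal((d_text + d_vis, d_text))  # visual gate weights
W_ga = rng.standard_normal((d_text + d_ac, d_text))   # acoustic gate weights
W_v = rng.standard_normal((d_vis, d_text))            # visual projection
W_a = rng.standard_normal((d_ac, d_text))             # acoustic projection
beta = 0.5  # assumed hyperparameter capping the shift magnitude

def relu(x):
    return np.maximum(x, 0.0)

def mag(z, v, a):
    """Return the shifted representation z' = z + alpha * H."""
    g_v = relu(np.concatenate([z, v]) @ W_gv)   # gate from text + vision
    g_a = relu(np.concatenate([z, a]) @ W_ga)   # gate from text + acoustics
    H = g_v * (v @ W_v) + g_a * (a @ W_a)       # nonverbal displacement
    # cap the shift so it stays small relative to the lexical vector's norm
    alpha = min(np.linalg.norm(z) / (np.linalg.norm(H) + 1e-6), 1.0) * beta
    return z + alpha * H

z = rng.standard_normal(d_text)
z_shifted = mag(z, rng.standard_normal(d_vis), rng.standard_normal(d_ac))
```

In the paper's setting this shift is applied inside a Transformer layer during fine-tuning, so the gate weights are learned jointly with BERT or XLNet rather than fixed as here.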
