Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-37256813

ABSTRACT

Given a model well trained on a large-scale base dataset, few-shot class-incremental learning (FSCIL) aims to incrementally learn novel classes from a few labeled samples while avoiding overfitting and without catastrophically forgetting previously encountered classes. Semi-supervised learning, which harnesses freely available unlabeled data to compensate for limited labeled data, boosts performance in numerous vision tasks and can heuristically be applied to tackle the issues in FSCIL, i.e., semi-supervised FSCIL (Semi-FSCIL). So far, very little work has focused on the Semi-FSCIL task, leaving the adaptability of semi-supervised learning to the FSCIL task unresolved. In this article, we focus on this adaptability issue and present a simple yet efficient Semi-FSCIL framework named uncertainty-aware distillation with class-equilibrium (UaD-ClE), encompassing two modules: uncertainty-aware distillation (UaD) and class equilibrium (ClE). Specifically, when incorporating unlabeled data into each incremental session, we introduce the ClE module, which employs class-balanced self-training (CB_ST) to prevent easy-to-classify classes from gradually dominating pseudo-label generation. To distill reliable knowledge from the reference model, we further implement the UaD module, which combines uncertainty-guided knowledge refinement with adaptive distillation. Comprehensive experiments on three benchmark datasets demonstrate that our method improves the adaptability of unlabeled data, via semi-supervised learning, to FSCIL tasks. The code is available at https://github.com/yawencui/UaD-ClE.
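Below is a minimal Python sketch of the class-balanced pseudo-label selection idea behind CB_ST: instead of a single global confidence threshold, pseudo-labels are capped per class so easy-to-classify classes cannot dominate the pool. The function name, quota parameter, and selection rule are illustrative assumptions, not the authors' implementation.

```python
# A minimal NumPy sketch of class-balanced pseudo-label selection; names and
# the per-class quota rule are illustrative, not the published CB_ST code.
import numpy as np

def class_balanced_pseudo_labels(probs: np.ndarray, per_class_quota: int):
    """Select at most `per_class_quota` pseudo-labeled samples per class.

    probs: (num_unlabeled, num_classes) softmax outputs of the reference model.
    Returns (indices, labels) of the selected unlabeled samples.
    """
    preds = probs.argmax(axis=1)        # hard pseudo-labels
    confidences = probs.max(axis=1)     # confidence of each pseudo-label
    selected_idx, selected_lbl = [], []
    for c in range(probs.shape[1]):
        candidates = np.where(preds == c)[0]
        if candidates.size == 0:
            continue
        # Keep only the most confident samples of this class, capped by the quota,
        # so easy-to-classify classes cannot dominate pseudo-label generation.
        order = candidates[np.argsort(-confidences[candidates])]
        keep = order[:per_class_quota]
        selected_idx.extend(keep.tolist())
        selected_lbl.extend([c] * len(keep))
    return np.array(selected_idx), np.array(selected_lbl)

# Example: 200 unlabeled samples, 5 classes, at most 10 pseudo-labels per class.
rng = np.random.default_rng(0)
logits = rng.normal(size=(200, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
idx, lbl = class_balanced_pseudo_labels(probs, per_class_quota=10)
print(idx.shape, np.bincount(lbl, minlength=5))
```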

2.
IEEE Trans Cybern ; 52(10): 10735-10749, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33784633

ABSTRACT

Unsupervised domain adaptation (UDA) aims to learn a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. Most existing approaches learn domain-invariant features by adapting the entire information content of the images. However, forcing adaptation of domain-specific variations undermines the effectiveness of the learned features. To address this problem, we propose a novel yet elegant module, called the deep ladder-suppression network (DLSN), which is designed to better learn the cross-domain shared content by suppressing domain-specific variations. Our proposed DLSN is an autoencoder with lateral connections from the encoder to the decoder. With this design, the domain-specific details, which are only necessary for reconstructing the unlabeled target data, are fed directly to the decoder to complete the reconstruction task, relieving the pressure of learning domain-specific variations at the later layers of the shared encoder. As a result, DLSN allows the shared encoder to focus on learning cross-domain shared content and to ignore domain-specific variations. Notably, the proposed DLSN can be used as a standard module and integrated with various existing UDA frameworks to further boost performance. Without bells and whistles, extensive experimental results on four gold-standard domain adaptation datasets, namely 1) Digits; 2) Office-31; 3) Office-Home; and 4) VisDA-C, demonstrate that the proposed DLSN can consistently and significantly improve the performance of various popular UDA frameworks.
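A minimal PyTorch sketch of the ladder-style autoencoder idea described above follows: lateral connections route shallow encoder features straight to the decoder, so the later, shared encoder layers are relieved of encoding domain-specific detail. Layer sizes, names, and the simple additive lateral connection are assumptions for illustration, not the published DLSN architecture.

```python
# A toy ladder-suppression autoencoder: shallow features bypass the deep encoder
# via a lateral connection and feed the decoder directly. Illustrative only.
import torch
import torch.nn as nn

class LadderSuppressionAE(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.dec2 = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
        self.dec1 = nn.Linear(128, dim)

    def forward(self, x):
        h1 = self.enc1(x)    # shallow features (carry domain-specific detail)
        h2 = self.enc2(h1)   # deeper shared representation
        d2 = self.dec2(h2)
        # Lateral connection: shallow encoder features are added back before the
        # final reconstruction, relieving h2 of encoding those details.
        recon = self.dec1(d2 + h1)
        return recon, h2

x = torch.randn(8, 256)                    # e.g., a batch of target-domain features
model = LadderSuppressionAE()
recon, shared = model(x)
loss = nn.functional.mse_loss(recon, x)    # reconstruction loss on unlabeled target data
print(recon.shape, shared.shape, loss.item())
```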

3.
IEEE Trans Image Process ; 30: 7842-7855, 2021.
Article in English | MEDLINE | ID: mdl-34506283

ABSTRACT

Unsupervised Domain Adaptation (UDA) aims to learn a classifier for an unlabeled target domain by leveraging knowledge from a labeled source domain with a different but related distribution. Many existing approaches learn a domain-invariant representation space by directly matching the marginal distributions of the two domains. However, they neglect to explore the underlying discriminative features of the target data and to align the cross-domain discriminative features, which may lead to suboptimal performance. To tackle these two issues simultaneously, this paper presents a Joint Clustering and Discriminative Feature Alignment (JCDFA) approach for UDA, which naturally unifies the mining of discriminative features and the alignment of class-discriminative features in a single framework. Specifically, to mine the intrinsic discriminative information of the unlabeled target data, JCDFA jointly learns a shared encoding representation for two tasks: supervised classification of labeled source data, and discriminative clustering of unlabeled target data, where classification of the source domain guides the clustering of the target domain toward the object categories. We then conduct cross-domain discriminative feature alignment by separately optimizing two new metrics: 1) an extended supervised contrastive learning, i.e., semi-supervised contrastive learning; and 2) an extended Maximum Mean Discrepancy (MMD), i.e., conditional MMD, explicitly minimizing intra-class dispersion and maximizing inter-class separation. When these two procedures, i.e., discriminative feature mining and alignment, are integrated into one framework, they benefit from each other and enhance the final performance from a cooperative learning perspective. Experiments are conducted on four real-world benchmarks (Office-31, ImageCLEF-DA, Office-Home, and VisDA-C). All the results demonstrate that JCDFA achieves remarkable margins over state-of-the-art domain adaptation methods. Comprehensive ablation studies also verify the importance of each key component of the proposed algorithm and the effectiveness of combining the two learning strategies in one framework.
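The sketch below illustrates one plausible form of the conditional MMD mentioned above: a (biased) MMD with an RBF kernel is computed per class between source features and pseudo-labeled target features and then averaged, aligning class-discriminative features across domains. The kernel bandwidth, estimator, and function names are illustrative assumptions rather than the authors' exact formulation.

```python
# Class-conditional MMD sketch with an RBF kernel; illustrative, not the paper's code.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of a and rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(xs, xt, gamma=1.0):
    # Biased estimate of squared MMD between two samples.
    kss = rbf_kernel(xs, xs, gamma).mean()
    ktt = rbf_kernel(xt, xt, gamma).mean()
    kst = rbf_kernel(xs, xt, gamma).mean()
    return kss + ktt - 2.0 * kst

def conditional_mmd(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes, gamma=1.0):
    # Average per-class MMD between source samples and pseudo-labeled target samples.
    vals = []
    for c in range(num_classes):
        xs = src_feats[src_labels == c]
        xt = tgt_feats[tgt_pseudo == c]
        if len(xs) and len(xt):
            vals.append(mmd2(xs, xt, gamma))
    return float(np.mean(vals)) if vals else 0.0

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 16)); src_y = rng.integers(0, 4, 100)
tgt = rng.normal(size=(80, 16));  tgt_y = rng.integers(0, 4, 80)   # pseudo-labels
print(conditional_mmd(src, src_y, tgt, tgt_y, num_classes=4))
```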

4.
Opt Express ; 27(8): 11084-11102, 2019 Apr 15.
Article in English | MEDLINE | ID: mdl-31052958

ABSTRACT

The adaptive wavefront interferometer (AWI) we reported recently is used to test in-process surfaces whose severe surface figure error is beyond the dynamic range of conventional interferometers [S. Xue, S. Chen, Z. Fan, and D. Zhai, Opt. Express 26, 21910 (2018)]. However, Monte-Carlo simulations applying AWI to a variety of surface figure errors reveal limited robustness: in some simulated cases, the unresolvable fringes remain unchanged or cannot be turned into completely resolvable fringes. To troubleshoot this issue, we studied AWI within a general framework of global optimization for the first time. Under this framework, we explained that three optimization issues contribute to the poor performance of AWI. On this basis, we proposed a combined machine-vision and genetic-algorithm method (MV-GA) to control the AWI and realize efficient and robust testing of various surface figure errors. Monte-Carlo simulations and experiments verify that the robustness is greatly enhanced.
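The following Python sketch shows a genetic algorithm of the general kind MV-GA relies on: each individual encodes deformable-mirror control coefficients, and the fitness would in practice come from a machine-vision measure of fringe resolvability. Here a toy quadratic fitness stands in for that measure; the population size, operators, and all names are illustrative assumptions, not the authors' method.

```python
# Toy genetic algorithm over mirror control coefficients; the quadratic fitness
# is a stand-in for a machine-vision fringe-resolvability score. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)
TARGET = rng.normal(size=8)            # stand-in for the compensating wavefront coefficients

def fitness(coeffs):
    # A real AWI controller would grade the camera fringes; here, closer to TARGET is better.
    return -np.sum((coeffs - TARGET) ** 2)

def genetic_search(pop_size=40, n_genes=8, generations=200, mut_sigma=0.3):
    pop = rng.normal(size=(pop_size, n_genes))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_genes) < 0.5                    # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, mut_sigma, n_genes)  # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return best, fitness(best)

best, score = genetic_search()
print("best fitness:", round(score, 4))
```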

5.
Eur J Pharmacol ; 702(1-3): 85-92, 2013 Feb 28.
Article in English | MEDLINE | ID: mdl-23399769

ABSTRACT

The proliferation of Schwann cells around injured peripheral nerves supports the process of Wallerian degeneration and is critical for axonal regeneration. In this study, carboxymethylated chitosan (CMCS) was examined to determine its capacity (i) to induce proliferation and secretion of nerve growth factor (NGF) and (ii) to activate the Wingless-type (Wnt) protein/β-catenin signaling pathway in rat Schwann cells. CMCS was found to induce Schwann cell proliferation and NGF synthesis in Schwann cells in a dose- and time-dependent manner. CMCS was also shown to activate factors in the Wnt/β-catenin signaling pathway, including Dvl-1, β-catenin, Tcf4, Lef1, c-Myc, and cyclin D1, which are involved in the proliferation of Schwann cells and the biosynthesis of NGF. Overall, this study suggests that CMCS can promote the proliferation of cultured Schwann cells and the synthesis of NGF by activating the Wnt/β-catenin signaling pathway.


Subjects
Chitosan/analogs & derivatives, Chitosan/pharmacology, Nerve Growth Factor/biosynthesis, Schwann Cells/drug effects, Wnt Signaling Pathway/drug effects, Adaptor Proteins, Signal Transducing/genetics, Animals, Cell Proliferation/drug effects, DNA-Binding Proteins/metabolism, Dishevelled Proteins, Lymphoid Enhancer-Binding Factor 1/metabolism, Phosphoproteins/genetics, Rats, Rats, Sprague-Dawley, Schwann Cells/cytology, Schwann Cells/metabolism, Transcription Factor 4, Transcription Factors/metabolism, beta Catenin/metabolism