Results 1 - 2 of 2
1.
IEEE Trans Cybern ; 54(4): 2193-2205, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37022277

ABSTRACT

Unsupervised multidomain adaptation has attracted increasing attention because it delivers richer information when tackling a task from an unlabeled target domain by leveraging knowledge attained from labeled source domains. However, it is the quality of the training samples, not just their quantity, that determines transfer performance. In this article, we propose a multidomain adaptation method with sample and source distillation (SSD), which develops a two-step selective strategy to distill source samples and rank the importance of source domains. To distill samples, a pseudo-labeled target domain is constructed and used to learn a series of category classifiers that separate transferable source samples from inefficient ones. To rank domains, the degree to which each source domain accepts a target sample as an insider is estimated by a domain discriminator built on the selected transferable source samples. Using the selected samples and ranked domains, transfer from the source domains to the target domain is achieved by adapting multilevel distributions in a latent feature space. Furthermore, to exploit more of the target information that is expected to enhance the cross-domain performance of the source predictors, an enhancement mechanism matches selected pseudo-labeled and unlabeled target samples. The degrees of acceptance learned by the domain discriminator are finally employed as source merging weights to predict the target task. The superiority of the proposed SSD is validated on real-world visual classification tasks.
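The final step of the abstract above, merging source predictors with discriminator-learned degrees of acceptance as weights, can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the function name, the toy probabilities, and the acceptance scores are all hypothetical, and the real SSD pipeline learns the acceptance scores with a trained domain discriminator.

```python
import numpy as np

def merge_source_predictions(source_probs, acceptance):
    """Combine per-source class-probability predictions for target samples,
    weighting each source domain by its learned degree of acceptance.

    source_probs: array of shape (n_sources, n_samples, n_classes)
    acceptance:   array of shape (n_sources,) with non-negative scores
    """
    weights = np.asarray(acceptance, dtype=float)
    weights = weights / weights.sum()           # normalize to merging weights
    # weighted average of the source predictions, shape (n_samples, n_classes)
    merged = np.tensordot(weights, np.asarray(source_probs), axes=(0, 0))
    return merged.argmax(axis=-1)               # predicted target labels

# toy example: two source domains, three target samples, two classes
probs = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]],   # predictions of source A
    [[0.4, 0.6], [0.3, 0.7], [0.1, 0.9]],   # predictions of source B
])
labels = merge_source_predictions(probs, acceptance=[0.8, 0.2])
```

With source A accepted far more strongly than source B, the merged prediction follows A except where both sources agree, yielding labels `[0, 1, 0]` in this toy case.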

2.
IEEE Trans Neural Netw Learn Syst ; 33(10): 5293-5307, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33835927

ABSTRACT

Transfer learning has become an attractive technique for tackling a task in a target domain by leveraging knowledge previously acquired from a similar (source) domain. Many existing transfer learning methods focus on learning one discriminator from a single source domain. Sometimes, knowledge from a single source domain is not enough to predict the target task; thus, multiple source domains carrying richer transferable information are used to complete it. Although previous studies have addressed multi-source domain adaptation, these methods commonly combine source predictions by averaging source performances. Different source domains contain different transferable information, so they may contribute differently to the target domain; hence, the source contributions should be taken into account when predicting a target task. In this article, we propose a novel multi-source contribution learning method for domain adaptation (MSCLDA). In the proposed method, the similarities and diversities of the domains are learned simultaneously by extracting multi-view features. One view represents the features common to all domains (similarities); the other views represent differing characteristics of the target domain (diversities), each expressed by features extracted in one source domain. Multi-level distribution matching is then employed to improve the transferability of the latent features, reducing the misclassification of boundary samples by maximizing the discrepancy between different classes and minimizing the discrepancy within the same class. Concurrently, when completing the target task by combining source predictions, instead of averaging source predictions or weighting sources by normalized similarities, the original weights obtained by normalizing the similarities between the source and target domains are adjusted using pseudo target labels to increase the disparity of the weight values, which is expected to improve the final target predictor when the source predictions differ significantly. Experiments on real-world visual data sets demonstrate the superiority of the proposed method.
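The weight-adjustment idea in the final step above, sharpening similarity-based source weights with pseudo target labels rather than simply averaging, can be sketched as below. This is an illustrative sketch under an assumed adjustment rule (multiplying each source's normalized similarity by its pseudo-label agreement rate and renormalizing); the function name and data are hypothetical, and the actual MSCLDA adjustment may differ.

```python
import numpy as np

def adjust_source_weights(similarities, source_preds, pseudo_labels):
    """Turn source-target similarities into merging weights, then sharpen
    them by each source's agreement with pseudo target labels, so that
    more reliable sources receive disproportionately larger weights.

    similarities:  (n_sources,) similarity of each source to the target
    source_preds:  (n_sources, n_samples) hard predictions on target data
    pseudo_labels: (n_samples,) pseudo labels of the target samples
    """
    base = np.asarray(similarities, dtype=float)
    base = base / base.sum()                    # normalized similarities
    # fraction of target samples on which each source matches the pseudo labels
    agreement = (np.asarray(source_preds) == np.asarray(pseudo_labels)).mean(axis=1)
    adjusted = base * agreement                 # increase weight disparity
    return adjusted / adjusted.sum()

# toy example: two equally similar sources, four pseudo-labeled target samples
sims = [0.5, 0.5]
preds = np.array([[0, 1, 1, 0],                 # agrees with all 4 pseudo labels
                  [0, 0, 1, 1]])                # agrees with 2 of 4
pseudo = np.array([0, 1, 1, 0])
w = adjust_source_weights(sims, preds, pseudo)
```

Starting from equal similarity weights, the adjustment shifts the weights to roughly `[2/3, 1/3]`, so the source that tracks the pseudo labels better dominates the final combination.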
