Results 1 - 4 of 4
1.
Evol Comput ; 31(4): 433-458, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37155647

ABSTRACT

Existing work on offline data-driven optimization mainly focuses on problems in static environments, and little attention has been paid to problems in dynamic environments. Offline data-driven optimization in dynamic environments is challenging because the distribution of the collected data varies over time, requiring both the surrogate models and the optimal solutions to be tracked over time. This paper proposes a knowledge-transfer-based data-driven optimization algorithm to address these issues. First, an ensemble learning method is adopted to train surrogate models that leverage the knowledge of data from historical environments while adapting to new environments. Specifically, given data in a new environment, a model is constructed with the new data, and the preserved models of historical environments are further trained with the new data. These models are then treated as base learners and combined into an ensemble surrogate model. After that, all base learners and the ensemble surrogate model are simultaneously optimized in a multitask setting to find optimal solutions for the real fitness function. In this way, the optimization tasks in previous environments can be used to accelerate the tracking of the optimum in the current environment. Since the ensemble model is the most accurate surrogate, we assign more individuals to the ensemble surrogate than to its base learners. Empirical results on six dynamic optimization benchmark problems demonstrate the effectiveness of the proposed algorithm compared with four state-of-the-art offline data-driven optimization algorithms. Code is available at https://github.com/Peacefulyang/DSE_MFS.git.


Subject(s)
Algorithms, Biological Evolution, Humans, Knowledge Bases, Benchmarking
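
As a rough illustration of the ensemble idea described above (not the authors' DSE_MFS implementation; the class and function names here are hypothetical), the sketch below preserves one surrogate per historical environment, continues training the preserved models on each new environment's data, averages them into an ensemble surrogate, and gives the ensemble a larger candidate budget than any single base learner:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

class SurrogateEnsemble:
    """Preserve surrogates from historical environments and adapt them to new data."""

    def __init__(self, seed=0):
        self.models = []
        self.seed = seed

    def update(self, X_new, y_new):
        # Continue training the preserved historical models on the new data,
        # so knowledge of past environments is retained but adapted.
        for model in self.models:
            model.partial_fit(X_new, y_new)
        # Add a fresh model fitted only to the new environment's data.
        fresh = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                             random_state=self.seed)
        fresh.fit(X_new, y_new)
        self.models.append(fresh)

    def predict(self, X):
        # The ensemble surrogate averages all base learners.
        return np.mean([m.predict(X) for m in self.models], axis=0)


def propose(ensemble, X_seed, n_ensemble=40, n_base=10, sigma=0.1, rng=None):
    """Mutate seed solutions and keep the best candidate under each surrogate;
    the ensemble, being the most accurate, gets the largest evaluation budget."""
    rng = rng or np.random.default_rng(0)
    tasks = [(ensemble.predict, n_ensemble)]
    tasks += [(m.predict, n_base) for m in ensemble.models]
    picks = []
    for predict, budget in tasks:
        cands = X_seed[rng.integers(len(X_seed), size=budget)]
        cands = cands + sigma * rng.standard_normal(cands.shape)
        picks.append(cands[np.argmin(predict(cands))])  # assume minimization
    return np.asarray(picks)
```

In the paper's multitask setting each surrogate defines its own optimization task; the proposal step above compresses that into a single budget-weighted sampling pass for brevity.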
2.
IEEE Trans Neural Netw Learn Syst ; 34(10): 7621-7634, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35130173

ABSTRACT

This work addresses unsupervised partial domain adaptation (PDA), in which the classes in the target domain are a subset of those in the source domain. The key challenges of PDA are how to leverage source samples in the shared classes to promote positive transfer and how to filter out irrelevant source samples to mitigate negative transfer. Existing PDA methods based on adversarial domain adaptation do not account for the loss of class-discriminative representations. To this end, this article proposes a contrastive learning-assisted alignment (CLA) approach for PDA that jointly aligns distributions across domains for better adaptation and reweights source instances to reduce the contribution of outlier instances. A contrastive learning-assisted conditional alignment (CLCA) strategy is presented for distribution alignment. CLCA first exploits contrastive losses to discover the class-discriminative information in both domains. It then employs a contrastive loss to match the clusters across the two domains based on adversarial domain learning. In this respect, CLCA attempts to reduce the domain discrepancy by matching both the class-conditional and the marginal distributions. Moreover, a new reweighting scheme is developed to improve the quality of weight estimation by exploring information from both the source and the target domains. Empirical results on several benchmark datasets demonstrate that the proposed CLA outperforms existing state-of-the-art PDA methods.
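
To make two of the ingredients concrete, here is a minimal PyTorch sketch of (a) a supervised contrastive loss that sharpens class-discriminative features and (b) a source-class reweighting rule estimated from averaged target predictions. This is an illustrative reading of the abstract, not the authors' exact CLCA formulation, and the function names are hypothetical:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive(features, labels, tau=0.1):
    """Pull same-class embeddings together, push others apart (InfoNCE-style)."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / tau                                   # pairwise similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))         # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)     # keep positives only
    per_anchor = -pos_log_prob.sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor[pos_mask.any(1)].mean()               # skip anchors w/o positives

def source_class_weights(target_probs):
    """Average predictions on a target batch; source classes receiving little
    probability mass are likely outside the target label set and get down-weighted."""
    w = target_probs.mean(dim=0)        # shape: (n_classes,)
    return w / w.max()                  # normalize so the largest weight is 1
```

A training loop would apply the contrastive loss within each domain (using pseudolabels on the target side) and feed the class weights into the adversarial alignment term to suppress outlier source classes.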

3.
Article in English | MEDLINE | ID: mdl-36269922

ABSTRACT

The purpose of this article is to address unsupervised domain adaptation (UDA), where a labeled source domain and an unlabeled target domain are given. Recent advanced UDA methods attempt to remove domain-specific properties by separating domain-specific information from domain-invariant representations, which relies heavily on the designed neural network structures. Meanwhile, they do not consider class-discriminative representations when learning domain-invariant representations. To this end, this article proposes a co-training framework for heterogeneous heuristic domain adaptation (CO-HHDA) to address the above issues. First, a heterogeneous heuristic network is introduced to model domain-specific characteristics. It allows the structures of the heuristic networks to differ between domains to avoid underfitting or overfitting. Specifically, we initialize a small structure that is shared between domains and add a subnetwork for the domain that preserves rich domain-specific information. Second, we propose a co-training scheme that trains two classifiers, a source classifier and a target classifier, to enhance class-discriminative representations. Both classifiers are built on the domain-invariant representations: the source classifier learns from the labeled source data, and the target classifier is trained on generated pseudolabeled target data. The two classifiers teach each other during training with high-quality pseudolabeled data, and an adaptive threshold is presented to select reliable pseudolabels for each classifier. Empirical results on three commonly used benchmark datasets demonstrate that the proposed CO-HHDA outperforms state-of-the-art domain adaptation methods.
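
Below is a minimal sketch of the co-training exchange described above, assuming target features and two classifier heads as PyTorch modules. The adaptive-threshold rule shown (cap the base threshold at the batch-mean confidence) is a guess at the general idea, not the paper's exact rule, and all names are hypothetical:

```python
import torch
import torch.nn.functional as F

def select_pseudolabels(logits, base_tau=0.9):
    """Keep target predictions whose confidence clears an adaptive threshold."""
    probs = torch.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    # Hypothetical adaptive rule: never demand more confidence than the batch
    # average, so early low-confidence epochs still contribute some labels.
    tau = min(base_tau, conf.mean().item())
    return pseudo, conf >= tau

def co_training_step(clf_src, clf_tgt, feats_tgt):
    """One exchange on a target batch: each classifier supplies confident
    pseudolabels for the other classifier to learn from."""
    with torch.no_grad():
        y_src, keep_src = select_pseudolabels(clf_src(feats_tgt))
        y_tgt, keep_tgt = select_pseudolabels(clf_tgt(feats_tgt))
    loss = feats_tgt.new_zeros(())
    if keep_src.any():  # target classifier learns from the source classifier
        loss = loss + F.cross_entropy(clf_tgt(feats_tgt[keep_src]), y_src[keep_src])
    if keep_tgt.any():  # and vice versa
        loss = loss + F.cross_entropy(clf_src(feats_tgt[keep_tgt]), y_tgt[keep_tgt])
    return loss
```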

4.
IEEE Trans Neural Netw Learn Syst ; 33(8): 3857-3871, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33566771

ABSTRACT

Existing transfer learning methods that focus on problems in stationary environments are usually not applicable to dynamic environments, where concept drift may occur. To the best of our knowledge, concept drift-tolerant transfer learning (CDTL), whose major challenge is the need to adapt both the target model and the knowledge of source domains to changing environments, has yet to be well explored in the literature. This article therefore proposes a hybrid ensemble approach to the CDTL problem, assuming that data in the target domain arrive in a streaming, chunk-by-chunk manner from nonstationary environments. At each time step, a class-wise weighted ensemble is presented to adapt the model of the target domain to new environments. It assigns a weight vector to each classifier generated from the previous data chunks, allowing each class of the current data to leverage historical knowledge independently. Then, a domain-wise weighted ensemble is introduced to combine the source and target models so as to select useful knowledge from each domain. The source models are updated with source instances transformed by the proposed adaptive weighted CORrelation ALignment (AW-CORAL). AW-CORAL iteratively minimizes the domain discrepancy while decreasing the effect of unrelated source instances. In this way, positive knowledge from source domains can be promoted while negative knowledge is reduced. Empirical studies on synthetic and real benchmark data sets demonstrate the effectiveness of the proposed algorithm.
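
For context, the sketch below implements standard CORAL alignment, the building block that AW-CORAL extends with adaptive instance weights (the weighting itself is paper-specific and omitted here), together with a generic class-wise weighted vote; function names are hypothetical:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral(Xs, Xt, eps=1e-5):
    """Standard CORAL (Sun et al.): re-color source features so their
    covariance matches the target's."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)   # regularized covariances
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)
    Xs_white = Xs @ fractional_matrix_power(Cs, -0.5)            # whiten source
    return np.real(Xs_white @ fractional_matrix_power(Ct, 0.5))  # re-color

def classwise_weighted_vote(probas, W):
    """probas: (n_models, n_samples, n_classes) probabilities from classifiers
    built on previous chunks; W: (n_models, n_classes) per-class weights, so
    each class can draw on historical knowledge independently."""
    return (probas * W[:, None, :]).sum(axis=0)   # -> (n_samples, n_classes)
```

In a full CDTL pipeline, the per-class weights would be re-estimated from each classifier's recent accuracy per class, and a second, domain-wise weighting would then combine the source and target ensembles.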
