Results 1 - 18 of 18
1.
IEEE Trans Med Imaging ; PP, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38466592

ABSTRACT

Deep neural networks (DNNs) have immense potential for precise clinical decision-making in the field of biomedical imaging. However, access to high-quality data is crucial for ensuring the high performance of DNNs, and medical imaging data are often challenging to obtain in terms of both quantity and quality. To address these issues, we propose a score-based counterfactual generation (SCG) framework that creates counterfactual images from the latent space to compensate for the scarcity and imbalance of data. In addition, uncertainties in external physical factors may introduce unnatural features and further affect the estimation of the true data distribution. Therefore, we integrate a learnable FuzzyBlock into the classifier of the proposed framework to manage these uncertainties. The proposed SCG framework can be applied to both classification and lesion localization tasks. The experimental results reveal a remarkable performance boost in classification tasks, with an average improvement of 3-5% over previous state-of-the-art (SOTA) methods, along with interpretable lesion localization.

2.
Article in English | MEDLINE | ID: mdl-38324429

ABSTRACT

The adversarial vulnerability of convolutional neural networks (CNNs) refers to the performance degradation of CNNs under adversarial attacks, leading to incorrect decisions. However, the causes of adversarial vulnerability in CNNs remain unknown. To address this issue, we propose a unique cross-scale analytical approach from a statistical-physics perspective. It reveals that the abundance of nonlinear effects inherent in CNNs is the fundamental cause of the formation and evolution of system vulnerability. Vulnerability forms spontaneously at the macroscopic level after the symmetry of the system is broken through nonlinear interactions between microscopic state order parameters. We develop a cascade failure algorithm that visualizes how micro-perturbations of neuron activations can cascade and influence macro decision paths. Our empirical results demonstrate the interplay between microlevel activation maps and macrolevel decision-making and provide a statistical-physics perspective for understanding the causality behind CNN vulnerability. Our work will help subsequent research improve the adversarial robustness of CNNs.
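
The cascade idea above can be illustrated with a toy linearized sketch: a perturbation of early activations is pushed through successive layers, and its direction-dependent growth hints at how micro failures can reach the macro decision path. All names here are illustrative; the paper's actual cascade failure algorithm works with order parameters, not raw weight matrices.

```python
def propagate_perturbation(weights, delta):
    """Push an activation perturbation through successive linear layers.

    weights: list of layer matrices (rows x cols); delta: perturbation of
    the input activations. Returns the perturbation reaching the output.
    This is a linearized toy: real CNNs interleave nonlinearities, which
    is exactly where the paper locates the source of vulnerability.
    """
    for W in weights:
        delta = [sum(W[r][c] * delta[c] for c in range(len(delta)))
                 for r in range(len(W))]
    return delta
```

For a single layer [[2, 0], [0, 0.5]], a unit perturbation doubles along the first coordinate and halves along the second: a minimal picture of direction-dependent amplification along a decision path.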

3.
IEEE Trans Cybern ; 52(3): 1960-1976, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33296320

ABSTRACT

High-dimensional problems are ubiquitous in many fields, yet they remain challenging to solve. To tackle such problems with high effectiveness and efficiency, this article proposes a simple yet efficient stochastic dominant learning swarm optimizer. In particular, this optimizer not only balances swarm diversity and convergence speed properly but also consumes as little computing time and space as possible to locate the optima. In this optimizer, a particle is updated only when its two exemplars, randomly selected from the current swarm, are its dominators. In this way, each particle has an implicit probability of directly entering the next generation, making it possible to maintain high swarm diversity. Since each updated particle learns only from its dominators, good convergence is likely to be achieved. To alleviate the sensitivity of this optimizer to the newly introduced parameters, an adaptive parameter adjustment strategy is further designed based on the evolutionary information of particles at the individual level. Finally, extensive experiments on two high-dimensional benchmark sets substantiate that the devised optimizer achieves competitive or even better performance in terms of solution quality, convergence speed, scalability, and computational cost, compared to several state-of-the-art methods. In particular, experimental results show that the proposed optimizer performs excellently on partially separable problems, especially partially separable multimodal problems, which are very common in real-world applications. In addition, an application to feature selection problems further demonstrates the effectiveness of this optimizer in tackling real-world problems.
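
The dominator-based update rule can be sketched as follows. This is a hedged reconstruction from the abstract, not the authors' code: the combination coefficients (the random weights and phi) are assumptions, and fitness is treated as minimized.

```python
import random

def sdlso_step(swarm, fitness, phi=0.3, rng=random):
    """One generation of a stochastic-dominant-learning-style update (sketch).

    swarm: list of dicts with 'x' (position) and 'v' (velocity) lists.
    fitness: callable mapping a position to a scalar (lower is better).
    A particle moves only if both randomly drawn exemplars dominate it;
    otherwise it enters the next generation unchanged, preserving diversity.
    """
    fits = [fitness(p['x']) for p in swarm]  # evaluated once per generation
    for i, p in enumerate(swarm):
        a, b = rng.sample([j for j in range(len(swarm)) if j != i], 2)
        if not (fits[a] < fits[i] and fits[b] < fits[i]):
            continue  # no pair of dominators drawn: particle survives as is
        if fits[b] < fits[a]:
            a, b = b, a  # learn mainly from the better dominator
        for d in range(len(p['x'])):
            r1, r2, r3 = rng.random(), rng.random(), rng.random()
            p['v'][d] = (r1 * p['v'][d]
                         + r2 * (swarm[a]['x'][d] - p['x'][d])
                         + phi * r3 * (swarm[b]['x'][d] - p['x'][d]))
            p['x'][d] += p['v'][d]
    return swarm
```

Because the generation-best particle never has two dominators, it always survives unchanged, so the best fitness in the swarm can never worsen across generations.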

4.
IEEE Trans Cybern ; 52(9): 9467-9480, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33705333

ABSTRACT

Co-location pattern mining plays an important role in spatial data mining. With the rapid growth of spatial datasets, the usefulness of co-location patterns is strongly limited by the huge number of discovered patterns. Although several methods have been proposed to reduce the number of discovered patterns, these statistical algorithms cannot guarantee that the extracted co-location patterns are user preferred. Therefore, it is crucial to help the decision maker discover his/her preferred co-location patterns via efficient interactive procedures. This article proposes a new interactive approach that enables the user to discover his/her preferred co-location patterns. First, we present a novel and flexible interactive framework to assist the user in discovering preferred co-location patterns. Second, we propose using ontologies to measure the similarity of two co-location patterns. Furthermore, we design a pruning scheme that introduces a pattern filtering model for expressing the user's preference, to reduce the size of the final output. By applying our proposed approach over voluminous sets of co-location patterns, we show that the number of filtered co-location patterns is reduced to several dozen or fewer and that, on average, 80% of the selected co-location patterns are user preferred.
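
One simple way to realize an ontology-based similarity between two co-location patterns, shown only as a stand-in for the paper's actual measure: expand each pattern's feature set with its ontology ancestors and take the Jaccard overlap. The ancestor map and feature names below are hypothetical.

```python
def ontology_similarity(pat_a, pat_b, ancestors):
    """Jaccard similarity of two co-location patterns after expanding each
    feature with its ontology ancestors (illustrative stand-in only).

    pat_a, pat_b: sets of spatial feature names.
    ancestors: dict mapping a feature to the set of its ontology ancestors.
    """
    def expand(pat):
        return set().union(*({f} | ancestors.get(f, set()) for f in pat))
    sa, sb = expand(pat_a), expand(pat_b)
    return len(sa & sb) / len(sa | sb)
```

Two patterns whose features share an ancestor (e.g. two tree species) score higher than patterns from unrelated branches of the ontology, which is the intuition an interactive filter can exploit.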


Subjects
Algorithms, Data Mining, Data Mining/methods, Female, Humans, Male
5.
IEEE Trans Neural Netw Learn Syst ; 33(11): 6613-6626, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34081586

ABSTRACT

Co-location pattern mining refers to discovering neighboring relationships of spatial features distributed in geographic space. With the rapid growth of spatial datasets, the usefulness of co-location patterns is strongly limited by the large number of discovered patterns containing multiple redundancies. To address this problem, in this article, we propose a novel approach for discovering the super participation index-closed (SPI-closed) co-location patterns which are a newly proposed lossless condensed representation of co-location patterns by considering distributions of the spatial instances. In the proposed approach, first, a linear-time method is designed to generate complete and correct neighboring cliques using extended neighboring relationships. Based on these cliques, a hash structure is then constructed to store the distributions of the co-location patterns in a condensed way. Finally, using this hash structure, the SPI-closed co-location patterns (SCPs) are efficiently discovered even if the prevalence threshold is changed, while similar approaches have to restart their mining processes. To confirm the efficiency of the proposed method, we compared its performance with similar approaches in the literature on multiple real and synthetic spatial datasets. The experiments confirm that our new approach is more efficient, effective, and flexible than similar approaches.

6.
IEEE Trans Cybern ; 51(4): 2055-2067, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31380777

ABSTRACT

Recent studies in multiobjective particle swarm optimization (PSO) tend to employ Pareto-based techniques, which are effective to a certain extent. However, these techniques encounter scalability difficulties on many-objective optimization problems (MaOPs) due to the poor discriminability of Pareto optimality, which affects the selection of leaders and thereby deteriorates the effectiveness of the algorithm. This paper presents a new scheme for discriminating among solutions in objective space. Based on the properties of Pareto optimality, we propose the dominant difference of a solution, which can demonstrate its dominance in every dimension. By investigating the norm of the dominant difference over the entire population, the discriminability between candidates that is difficult to obtain directly in the objective space is obtained indirectly. By integrating it into PSO, we obtain a novel algorithm named many-objective PSO based on the norm of the dominant difference (MOPSO/DD) for dealing with MaOPs. Moreover, we design an Lp-norm-based density estimator that gives MOPSO/DD not only good convergence and diversity but also lower complexity. Experiments on benchmark problems demonstrate that our proposal is competitive with state-of-the-art MOPSOs and multiobjective evolutionary algorithms.
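
A plausible, non-authoritative reading of the dominant difference and its norm: compare a candidate's objective vector componentwise against every member of the population and aggregate signed Lp norms of the gaps, so that a lower total indicates stronger dominance even when plain Pareto comparison cannot separate candidates. The paper's exact definition may differ.

```python
import math

def dominant_difference(fu, fv):
    """Componentwise objective gap between two solutions (minimization;
    an illustrative guess at the paper's notion, not its exact formula)."""
    return [a - b for a, b in zip(fu, fv)]

def dd_norm_score(f_x, population_objs, p=2):
    """Score a candidate by signed Lp norms of its dominant differences
    against the whole population; lower totals suggest more dominance."""
    total = 0.0
    for f_y in population_objs:
        diff = dominant_difference(f_x, f_y)
        norm = sum(abs(d) ** p for d in diff) ** (1 / p)
        total += math.copysign(norm, sum(diff))  # sign: net better or worse
    return total
```

Unlike the Pareto dominance relation, this score is a scalar, so any two candidates remain comparable even as the number of objectives grows.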

7.
IEEE Trans Cybern ; 51(7): 3752-3766, 2021 Jul.
Article in English | MEDLINE | ID: mdl-32175884

ABSTRACT

The control of virus spreading over complex networks with a limited budget has attracted much attention but remains challenging. This article addresses the combinatorial, discrete resource allocation problems (RAPs) in virus spreading control. To meet the challenges of increasing network scales and improve solving efficiency, an evolutionary divide-and-conquer algorithm is proposed, namely, a coevolutionary algorithm with network-community-based decomposition (NCD-CEA). It is characterized by a community-based dividing technique and a cooperative-coevolution conquering strategy. First, to reduce time complexity, NCD-CEA divides a network into multiple communities using a modified community detection method, so that the most relevant variables in the solution space are clustered together. The problem and the global swarm are subsequently decomposed into subproblems and subswarms with low-dimensional embeddings. Second, to obtain high-quality solutions, an alternating evolutionary approach is designed that promotes the evolution of the subswarms and the global swarm in turn, with subsolutions evaluated by local fitness functions and global solutions evaluated by a global fitness function. Extensive experiments on different networks show that NCD-CEA achieves competitive performance in solving RAPs. This article advances the control of virus spreading over large-scale networks.
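
The community-based dividing step can be approximated with plain label propagation, used here only as a stand-in for the paper's modified community detection method; the resulting communities would define the variable groups handed to the subswarms. Integer node IDs are assumed.

```python
import random

def label_propagation(adj, seed=0, max_iter=100):
    """Partition a graph into communities by simple label propagation.

    adj: dict mapping each node (int) to a list of neighbor nodes.
    Each node repeatedly adopts the most frequent label among its
    neighbors (ties broken toward the smaller label) until stable.
    Returns a list of sorted node lists, one per community.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(max_iter):
        changed = False
        rng.shuffle(nodes)
        for v in nodes:
            if not adj[v]:
                continue  # isolated node keeps its own label
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts, key=lambda l: (counts[l], -l))
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break
    groups = {}
    for v, l in labels.items():
        groups.setdefault(l, []).append(v)
    return [sorted(g) for g in groups.values()]
```

Densely connected variables end up in one community, so a subswarm optimizing that community touches only strongly interacting budget variables.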

8.
IEEE Trans Cybern ; 50(6): 2715-2729, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31545753

ABSTRACT

Cloud workflow scheduling is a significant topic in both commercial and industrial applications. However, the growing scale of workflows has made such scheduling problems increasingly challenging. Many current algorithms deal with small- or medium-scale problems (e.g., fewer than 1000 tasks) and have difficulty providing satisfactory solutions for large-scale problems, due to the curse of dimensionality. To this end, this article proposes a dynamic group learning distributed particle swarm optimization (DGLDPSO) for large-scale optimization and extends it to large-scale cloud workflow scheduling. DGLDPSO is efficient for large-scale optimization due to the following two advantages. First, the entire population is divided into many groups, and these groups are coevolved using a master-slave multigroup distributed model, forming a distributed PSO (DPSO) that enhances algorithm diversity. Second, a dynamic group learning (DGL) strategy is adopted in DPSO to balance diversity and convergence. When applying DGLDPSO to large-scale cloud workflow scheduling, an adaptive renumber strategy (ARS) is further developed to relate solutions to the resource characteristics and to make the search behavior meaningful rather than aimless. Experiments are conducted on a large-scale benchmark function set and large-scale cloud workflow scheduling instances to investigate the performance of DGLDPSO. The comparison results show that DGLDPSO is better than or at least comparable to other state-of-the-art large-scale optimization algorithms and workflow scheduling algorithms.

9.
IEEE Trans Cybern ; 50(10): 4454-4468, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31545754

ABSTRACT

Supply chain network design (SCND) is a complicated constrained optimization problem that plays a significant role in business management. This article extends the SCND model to a large-scale SCND with uncertainties (LUSCND), which is more practical but also more challenging. It is difficult for traditional approaches to obtain feasible solutions in the large-scale search space within a limited time. This article proposes a cooperative coevolutionary bare-bones particle swarm optimization (CCBBPSO) with function-independent decomposition (FID), called CCBBPSO-FID, for a multiperiod three-echelon LUSCND problem. For the large-scale issue, the binary encoding of the original model is converted to integer encoding for dimensionality reduction, and a novel FID is designed to efficiently decompose the problem. To obtain feasible solutions, two repair methods are designed for the infeasible solutions that appear frequently in the LUSCND problem: a step translation method deals with variables out of bounds, and a labeled reposition operator with adaptive probabilities repairs infeasible solutions that violate the constraints. Experiments are conducted on 405 instances at three different scales. The results show that CCBBPSO-FID has an evident superiority over the contestant algorithms.

10.
IEEE Trans Cybern ; 50(9): 4053-4065, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31295135

ABSTRACT

The rapid development of online social networks not only enables prompt and convenient dissemination of desirable information but also incurs fast and wide propagation of undesirable information. A common way to control the spread of pollutants is to block some nodes, but such a strategy may affect the service quality of a social network and lead to a high control cost if too many nodes are blocked. This paper formulates node selection as a biobjective optimization problem: find a subset of nodes to block so that the effect of the control is maximized while its cost is minimized. To solve this problem, we design an ant colony optimization algorithm with adaptive dimension size selection under the decomposition-based multiobjective evolutionary algorithm framework (MOEA/D-ADACO). The proposed algorithm divides the biobjective problem into a set of single-objective subproblems, and each ant takes charge of optimizing one subproblem. Moreover, two types of pheromone and heuristic information are incorporated into MOEA/D-ADACO: that for dimension size selection and that for node selection. While constructing solutions, the ants first determine the dimension size according to the former type of pheromone and heuristic information. Then, the ants select a specific number of nodes to build solutions according to the latter type. Experiments conducted on a set of real-world online social networks confirm that the proposed biobjective optimization model and the developed MOEA/D-ADACO are promising for pollutant-spreading control.
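
Both of the ants' choices described above, picking a dimension size and then picking nodes, reduce to pheromone-weighted roulette selection, sketched below. The beta exponent on heuristic information is a conventional ACO assumption, not a detail taken from the paper.

```python
import random

def roulette(pheromone, heuristic, beta, rng):
    """Choose option i with probability proportional to
    pheromone[i] * heuristic[i] ** beta (standard ACO construction step).

    pheromone, heuristic: parallel lists of nonnegative floats.
    rng: a random.Random instance for reproducibility.
    """
    weights = [p * (h ** beta) for p, h in zip(pheromone, heuristic)]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off
```

An ant would call this once against the dimension-size pheromone table, then repeatedly against the node pheromone table until the chosen number of nodes is selected.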


Subjects
Information Dissemination, Biological Models, Social Networking, Algorithms, Computer Heuristics, Environmental Pollutants, Internet, Statistical Models, Pheromones
11.
IEEE Trans Cybern ; 50(7): 3393-3408, 2020 Jul.
Article in English | MEDLINE | ID: mdl-30969936

ABSTRACT

Large-scale optimization with high dimensionality and high computational cost is now ubiquitous. To tackle such challenging problems efficiently, devising distributed evolutionary computation algorithms is imperative. To this end, this paper proposes a distributed swarm optimizer based on a special master-slave model. Specifically, in this distributed optimizer, the master is mainly responsible for communication with the slaves, while each slave iterates a swarm to traverse the solution space. An asynchronous and adaptive communication strategy based on a request-response mechanism is devised to let the slaves communicate with the master efficiently; the communication between the master and each slave is adaptively triggered during the iteration. To help the slaves search the space efficiently, an elite-guided learning strategy is designed that utilizes elite particles in the current swarm and the historically best solutions found by different slaves to guide the update of particles. Together, this distributed optimizer asynchronously iterates multiple swarms to collaboratively seek the optimum in parallel. Extensive experiments on a widely used large-scale benchmark set substantiate that the distributed optimizer can: 1) achieve competitive effectiveness in terms of solution quality compared to state-of-the-art large-scale methods; 2) accelerate execution compared with the sequential version, obtaining almost linear speedup as the number of cores increases; and 3) preserve good scalability for solving higher-dimensional problems.

12.
IEEE Trans Neural Netw Learn Syst ; 31(5): 1557-1570, 2020 May.
Article in English | MEDLINE | ID: mdl-31329131

ABSTRACT

In dynamic optimization problems (DOPs), as the environment changes over time, the optima also change. How to adapt to the dynamic environment and quickly find the optima in each environment is a challenging issue in solving DOPs. Usually, a new environment is strongly related to its previous environment. If we know how the previous environment changed into the new one, we can transfer information from the previous environment, e.g., past solutions, to obtain promising new information about the new environment, e.g., new high-quality solutions. Thus, in this paper, we propose a neural network (NN)-based information transfer method, named NNIT, which learns the transfer model of environment changes with an NN and then uses the learned model to reuse past solutions. When the environment changes, NNIT first collects solutions from both the previous environment and the new environment and then uses an NN to learn the transfer model from these solutions. After that, the NN is used to transfer the past solutions into new promising solutions that assist optimization in the new environment. The proposed NNIT can be incorporated into population-based evolutionary algorithms (EAs) to solve DOPs. Several typical state-of-the-art EAs for DOPs are selected for a comprehensive study and evaluated on the widely used moving peaks benchmark. The experimental results show that the proposed NNIT is promising and can accelerate algorithm convergence.
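
The transfer idea can be shown with a deliberately simplified stand-in: fit a per-dimension affine map from solutions collected in the previous environment to solutions in the new one, then push old solutions through it to seed the new search. The paper trains an NN rather than this least-squares linear model; function names are illustrative.

```python
def fit_affine_transfer(old_pts, new_pts):
    """Fit per-dimension maps new = a * old + b by least squares
    (a linear stand-in for the NN transfer model in NNIT).

    old_pts, new_pts: paired lists of equal-length coordinate lists.
    Returns one (a, b) pair per dimension.
    """
    maps = []
    for d in range(len(old_pts[0])):
        xs = [p[d] for p in old_pts]
        ys = [p[d] for p in new_pts]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        var = sum((x - mx) ** 2 for x in xs)
        a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
             if var else 0.0)
        maps.append((a, my - a * mx))
    return maps

def transfer(maps, point):
    """Map a past solution into the new environment."""
    return [a * x + b for (a, b), x in zip(maps, point)]
```

If the environment change really is a shift-and-scale of the landscape, the transferred points land near the new optima and give the EA a warm start.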

13.
IEEE Trans Cybern ; 49(1): 27-41, 2019 Jan.
Article in English | MEDLINE | ID: mdl-29990116

ABSTRACT

This paper develops a decomposition-based coevolutionary algorithm for many-objective optimization, which evolves a number of subpopulations in parallel for approaching the set of Pareto optimal solutions. The many-objective problem is decomposed into a number of subproblems using a set of well-distributed weight vectors. Accordingly, each subpopulation of the algorithm is associated with a weight vector and is responsible for solving the corresponding subproblem. The exploration ability of the algorithm is improved by using a mating pool that collects elite individuals from the cooperative subpopulations for breeding the offspring. In the subsequent environmental selection, the top-ranked individuals in each subpopulation, which are appraised by aggregation functions, survive for the next iteration. Two new aggregation functions with distinct characteristics are designed in this paper to enhance the population diversity and accelerate the convergence speed. The proposed algorithm is compared with several state-of-the-art many-objective evolutionary algorithms on a large number of benchmark instances, as well as on a real-world design problem. Experimental results show that the proposed algorithm is very competitive.
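
For reference, the classic aggregation functions used in decomposition-based frameworks of this kind look like the following; the paper designs two new aggregation functions with distinct characteristics, which are not reproduced here.

```python
def weighted_sum(objs, w):
    """Weighted-sum aggregation: fast to converge but cannot reach
    nonconvex parts of the Pareto front."""
    return sum(wi * fi for wi, fi in zip(w, objs))

def tchebycheff(objs, w, z_star):
    """Tchebycheff aggregation against the ideal point z_star: handles
    nonconvex fronts at the cost of slower convergence."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, objs, z_star))
```

Each subpopulation would minimize one such scalarization for its own weight vector, and the top-ranked individuals under that scalar value survive the environmental selection.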

14.
IEEE Trans Cybern ; 49(8): 2912-2926, 2019 Aug.
Article in English | MEDLINE | ID: mdl-29994556

ABSTRACT

Cloud workflow scheduling is significantly challenging due not only to the large scale of workflows but also to the elasticity and heterogeneity of cloud resources. Moreover, the pricing model of clouds makes execution time and execution cost two critical issues in scheduling. This paper models cloud workflow scheduling as a multiobjective optimization problem that optimizes both execution time and execution cost. A novel multiobjective ant colony system based on a co-evolutionary multiple-populations-for-multiple-objectives framework is proposed, which adopts two colonies to deal with these two objectives, respectively. Moreover, the proposed approach incorporates the following three novel designs to efficiently handle the multiobjective challenges: 1) a new pheromone update rule based on a set of nondominated solutions from a global archive to guide each colony to search its optimization objective sufficiently; 2) a complementary heuristic strategy that prevents a colony from focusing only on its corresponding single optimization objective, cooperating with the pheromone update rule to balance the search of both objectives; and 3) an elite study strategy to improve the solution quality of the global archive and help further approach the global Pareto front. Experimental simulations are conducted on five types of real-world scientific workflows, considering the properties of the Amazon EC2 cloud platform. The experimental results show that the proposed algorithm performs better than both state-of-the-art multiobjective optimization approaches and constrained optimization approaches.

15.
IEEE Trans Cybern ; 48(5): 1383-1396, 2018 May.
Article in English | MEDLINE | ID: mdl-28475072

ABSTRACT

The objective of cluster analysis is to partition a set of data points into several groups based on a suitable distance measure. We first propose a model called local gravitation among data points. In this model, each data point is viewed as an object with mass, and associated with a local resultant force (LRF) generated by its neighbors. The motivation of this paper is that there exist distinct differences between the LRFs (including magnitudes and directions) of the data points close to the cluster centers and at the boundary of the clusters. To capture these differences efficiently, two new local measures named centrality and coordination are further investigated. Based on empirical observations, two new clustering methods called local gravitation clustering and communication with local agents are designed, and several test cases are conducted to verify their effectiveness. The experiments on synthetic data sets and real-world data sets indicate that both clustering approaches achieve good performance on most of the data sets.
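
A minimal sketch of the local resultant force (LRF): treat every point as unit mass and sum inverse-square attractions from its k nearest neighbors. Points near a cluster center receive nearly cancelling forces, while boundary points feel a strong net inward pull; the parameter names and the plain-gravity force law are illustrative assumptions.

```python
import math

def lrf(points, i, k=3):
    """Local resultant force on point i from its k nearest neighbors.

    points: list of coordinate tuples; each neighbor attracts with
    magnitude 1/d**2 along the unit vector toward it (unit masses, G=1).
    Returns the force vector acting on point i.
    """
    xi = points[i]
    dists = sorted((math.dist(xi, p), j)
                   for j, p in enumerate(points) if j != i)
    force = [0.0] * len(xi)
    for d, j in dists[:k]:
        for t in range(len(xi)):
            # (neighbor - xi)/d is the unit direction; 1/d**2 the magnitude
            force[t] += (points[j][t] - xi[t]) / (d ** 3 if d else 1.0)
    return force
```

Comparing force magnitudes already separates interior from boundary points, which is the raw signal behind the paper's centrality and coordination measures.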

16.
IEEE Trans Cybern ; 48(7): 2139-2153, 2018 Jul.
Article in English | MEDLINE | ID: mdl-28792909

ABSTRACT

This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely, the permutation-based MOCOPs. Many common MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, since the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To coordinate with the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure; problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely, the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

17.
IEEE Trans Cybern ; 47(9): 2924-2937, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28186918

ABSTRACT

The popular performance profiles and data profiles for benchmarking deterministic optimization algorithms are extended to benchmark stochastic algorithms for global optimization problems. A general confidence interval is employed to replace the significance test, which is common in traditional benchmarking methods but increasingly criticized. By computing confidence bounds of the general confidence interval and visualizing them with performance profiles and/or data profiles, our benchmarking method can compare stochastic optimization algorithms graphically. Compared with traditional benchmarking methods, our method aggregates results statistically and is therefore suitable for large sets of benchmark problems. Compared with sample-mean-based benchmarking methods, e.g., the method adopted in the black-box optimization benchmarking workshop/competition, our method considers not only sample means but also sample variances. The most important property of our method is that it is distribution free, i.e., it does not depend on any distributional assumption about the population. This makes it a promising benchmarking method for stochastic optimization algorithms. Some examples are provided to illustrate how to use our method to compare stochastic optimization algorithms.
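
The object being extended, the classic Dolan-Moré performance profile, can be computed as below. The paper's contribution of replacing the single per-problem costs with confidence bounds from repeated stochastic runs is not reproduced here.

```python
def performance_profile(costs, taus):
    """Classic performance profile for deterministic benchmarking.

    costs[s][p]: cost of solver s on problem p (float('inf') = failure).
    Returns, per solver, the fraction of problems it solves within a
    factor tau of the best solver, for each tau in taus.
    """
    n_solvers, n_probs = len(costs), len(costs[0])
    best = [min(costs[s][p] for s in range(n_solvers))
            for p in range(n_probs)]
    ratios = [[costs[s][p] / best[p] for p in range(n_probs)]
              for s in range(n_solvers)]
    return [[sum(r <= tau for r in ratios[s]) / n_probs for tau in taus]
            for s in range(n_solvers)]
```

Plotting each solver's row against tau gives the familiar staircase curves; the extension would draw one curve per confidence bound instead of one per solver run.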

18.
IEEE Trans Cybern ; 47(9): 2896-2910, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28113797

ABSTRACT

Large-scale optimization has become a significant yet challenging area in evolutionary computation. To solve this problem, this paper proposes a novel segment-based predominant learning swarm optimizer (SPLSO) that lets several predominant particles guide the learning of a particle. First, a segment-based learning strategy is proposed to randomly divide the whole set of dimensions into segments. During the update, variables in different segments are evolved by learning from different exemplars, while variables in the same segment are evolved by the same exemplar. Second, to accelerate search speed and enhance search diversity, a predominant learning strategy is also proposed, which lets several predominant particles guide the update of a particle, with each predominant particle responsible for one segment of dimensions. By combining these two learning strategies, SPLSO evolves all dimensions simultaneously and possesses competitive exploration and exploitation abilities. Extensive experiments are conducted on two large-scale benchmark function sets to investigate the influence of each algorithmic component, and comparisons with several state-of-the-art meta-heuristic algorithms for large-scale problems demonstrate the competitive efficiency and effectiveness of the proposed optimizer. Furthermore, the scalability of the optimizer to problems with dimensionality up to 2000 is also verified.
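
The pairing of random dimension segments with predominant exemplars can be sketched as follows; segment sizing and the way exemplars are sampled are assumptions made for illustration, and fitness is treated as minimized.

```python
import random

def segment_exemplars(dim, n_segments, fits, i, rng):
    """Randomly split the dimensions into segments and pick, for each
    segment, a predominant exemplar (a particle fitter than i).

    dim: number of decision variables; fits: fitness per particle
    (lower is better); i: index of the particle being updated.
    Returns (segments, exemplars); exemplars is None when particle i
    is already the best and would not be updated.
    """
    dims = list(range(dim))
    rng.shuffle(dims)
    size = -(-dim // n_segments)  # ceiling division
    segments = [dims[s:s + size] for s in range(0, dim, size)]
    better = [j for j, f in enumerate(fits) if f < fits[i]]
    if not better:
        return segments, None
    exemplars = [rng.choice(better) for _ in segments]
    return segments, exemplars
```

Each dimension of particle i would then learn from the exemplar assigned to its segment, so all dimensions evolve in the same generation while drawing guidance from several predominant particles at once.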
