1.
Evol Comput; 28(3): 437-461, 2020.
Article in English | MEDLINE | ID: mdl-31120773

ABSTRACT

Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the first three HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it embeds a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the "simple" Random Gradient HH so that success is measured over a fixed period of time τ, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.


Subject(s)
Algorithms, Computer Heuristics, Benchmarking/statistics & numerical data, Biological Evolution, Computer Simulation, Machine Learning, Search Engine
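The period-based selection mechanism described in the abstract above lends itself to a compact illustration. The following Python sketch shows one way the Generalised Random Gradient hyper-heuristic could be realised for LeadingOnes, with RLS variants that flip exactly k bits as the low-level heuristics. The function names, the strict-improvement success criterion, and the iteration budget are illustrative assumptions rather than details taken from the paper.

```python
import random

def leading_ones(x):
    """LeadingOnes: length of the prefix of consecutive 1-bits."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def rls_k(x, k):
    """Low-level heuristic: Randomised Local Search flipping exactly k distinct bits."""
    y = list(x)
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def generalised_random_gradient(n, k_max, tau, max_iters=1_000_000):
    """Illustrative GRG hyper-heuristic: a heuristic chosen uniformly at random
    keeps control for another period of tau iterations whenever the current
    period produced at least one improvement; otherwise a new heuristic is drawn."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    iters = 0
    while fx < n and iters < max_iters:
        k = random.randint(1, k_max)            # draw a low-level heuristic at random
        period_successful = True
        while period_successful and fx < n and iters < max_iters:
            period_successful = False
            for _ in range(tau):                # run the chosen heuristic for tau steps
                y = rls_k(x, k)
                fy = leading_ones(y)
                iters += 1
                if fy > fx:                     # a strict improvement counts as success
                    x, fx = y, fy
                    period_successful = True
                if fx == n or iters >= max_iters:
                    break
    return x, fx, iters
```

In this reading, a heuristic keeps control for as long as each consecutive period of τ iterations yields at least one improvement, which is the behaviour the abstract attributes to GRG.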
2.
Algorithmica; 80(5): 1634-1657, 2018.
Article in English | MEDLINE | ID: mdl-30996505

ABSTRACT

Island models are distributed systems of evolutionary algorithms which operate independently but occasionally share their solutions with each other along a so-called migration topology. We investigate the impact of the migration topology by introducing a simplified island model with behavior similar to λ islands optimizing the so-called Maze fitness function (Kötzing and Molter in Proceedings of parallel problem solving from nature (PPSN XII), Springer, Berlin, pp 113-122, 2012). Previous work has shown that when a complete migration topology is used, migration must not occur too frequently, nor too soon before the optimum changes, in order to track the optimum of the Maze function. We show that using a sparse migration topology alleviates these restrictions. More specifically, we prove that there exist choices of model parameters for which using a unidirectional ring of logarithmic diameter as the migration topology allows the model to track the oscillating optimum through n Maze-like phases with high probability, while using any graph of diameter less than c ln n for some sufficiently small constant c > 0 results in the island model losing track of the optimum with overwhelming probability. Experimentally, we show that very frequent migration on a ring topology is not an effective diversity mechanism, while a lower migration rate allows the ring topology to track the optimum for a wider range of oscillation patterns. When migration occurs only rarely, we prove that dense migration topologies of small diameter may be advantageous. Combined, our results show that the sparse migration topology is able to track the optimum through a wider range of oscillation patterns, and to cope with a wider range of migration frequencies.
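To make the role of the migration topology concrete, the sketch below outlines a generic island model in Python with a unidirectional ring topology of the kind discussed above. The Maze function itself is dynamic and is not reproduced here; the example plugs in OneMax as a stand-in fitness, and the synchronous best-of-incoming migration rule, function names, and parameter values are illustrative assumptions, not the model analysed in the paper.

```python
import random

def make_ring_topology(num_islands):
    """Unidirectional ring: island i sends migrants only to island (i + 1) mod num_islands."""
    return {i: [(i + 1) % num_islands] for i in range(num_islands)}

def island_model(fitness, init, mutate, topology, migration_interval, generations):
    """Simplified island model: each island performs an independent (1+1)-style
    search; every migration_interval generations it sends a copy of its current
    solution along the topology, and each island keeps the best of its own
    solution and the incoming migrants."""
    islands = [init() for _ in topology]
    for gen in range(1, generations + 1):
        for i, x in enumerate(islands):         # independent search step on every island
            y = mutate(x)
            if fitness(y) >= fitness(x):
                islands[i] = y
        if gen % migration_interval == 0:       # synchronous migration along the topology
            incoming = {i: [] for i in topology}
            for i, neighbours in topology.items():
                for j in neighbours:
                    incoming[j].append(islands[i])
            for j, migrants in incoming.items():
                islands[j] = max(migrants + [islands[j]], key=fitness)
    return max(islands, key=fitness)

if __name__ == "__main__":
    n, num_islands = 32, 8
    fitness = sum                               # OneMax as a stand-in; Maze is dynamic and not reproduced here

    def init():
        return [random.randint(0, 1) for _ in range(n)]

    def mutate(x):                              # flip a single uniformly chosen bit
        y = list(x)
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
        return y

    ring = make_ring_topology(num_islands)
    best = island_model(fitness, init, mutate, ring, migration_interval=10, generations=500)
    print(fitness(best), "of", n)
```

Swapping `make_ring_topology` for a complete graph (every island listed as every other island's neighbour) reproduces the dense-topology setting that the abstract contrasts with the sparse ring.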

3.
Algorithmica; 78(2): 641-659, 2017.
Article in English | MEDLINE | ID: mdl-32103847

ABSTRACT

A simple island model with λ islands and migration occurring after every τ iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (1+λ) EA if τ = 1, i.e., if migration occurs during every iteration. It is proved that even for an increased offspring population size up to λ = O(n^(1-ϵ)), the (1+λ) EA is still not able to track the optimum of Maze. If the migration interval is chosen carefully, the algorithm is able to track the optimum even for logarithmic λ. The relationship between τ, λ, and the ability of the island model to track the optimum is then investigated more closely. Finally, experiments are performed to supplement the asymptotic results, and to investigate the impact of the migration topology.
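For reference, the abstract's remark that the island model with τ = 1 is equivalent to a (1+λ) EA refers to the standard scheme sketched below. This is a minimal, generic (1+λ) EA with standard bit mutation on a user-supplied fitness function, not the dynamic Maze setting analysed in the paper; the stand-in fitness and parameter choices in the usage example are arbitrary assumptions.

```python
import random

def one_plus_lambda_ea(fitness, n, lam, generations):
    """Minimal (1+λ) EA with standard bit mutation: in each generation, lam
    offspring are created by flipping every bit of the parent independently
    with probability 1/n; the best offspring replaces the parent if it is at
    least as good."""
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        offspring = [
            [1 - b if random.random() < 1 / n else b for b in parent]
            for _ in range(lam)
        ]
        best = max(offspring, key=fitness)
        if fitness(best) >= fitness(parent):
            parent = best
    return parent

# Example usage on a static stand-in fitness (OneMax); parameters are arbitrary.
if __name__ == "__main__":
    result = one_plus_lambda_ea(fitness=sum, n=50, lam=8, generations=2000)
    print(sum(result), "of", 50)
```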
