1.
Biomimetics (Basel) ; 9(6)2024 May 21.
Article in English | MEDLINE | ID: mdl-38921187

ABSTRACT

In the complex and dynamic landscape of cyber threats, organizations require sophisticated strategies for managing Cybersecurity Operations Centers and deploying Security Information and Event Management systems. Our study enhances these strategies by integrating the precision of well-known biomimetic optimization algorithms, namely Particle Swarm Optimization, the Bat Algorithm, the Gray Wolf Optimizer, and the Orca Predator Algorithm, with the adaptability of Deep Q-Learning, a reinforcement learning technique that leverages deep neural networks to teach algorithms optimal actions through trial and error in complex environments. This hybrid methodology targets the efficient allocation and deployment of network intrusion detection sensors while balancing cost-effectiveness with essential network security imperatives. Comprehensive computational tests show that versions enhanced with Deep Q-Learning significantly outperform their native counterparts, especially in complex infrastructures. These results highlight the efficacy of integrating metaheuristics with reinforcement learning to tackle complex optimization challenges, underscoring Deep Q-Learning's potential to boost cybersecurity measures in rapidly evolving threat environments.
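The Deep Q-Learning component described above can be pictured with a much smaller stand-in. The sketch below uses tabular Q-learning (not a deep network) to decide which metaheuristic operator to apply next; the state, action names, and reward signal are illustrative assumptions, not the paper's formulation.

```python
import random

ACTIONS = ["pso_move", "bat_move", "gwo_move", "opa_move"]  # candidate operators
q = {}                      # Q-table: (state, action) -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.2

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard Q-learning update rule."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Toy loop: the "state" is simply whether the last iteration improved the incumbent.
best_cost, state = 100.0, "stalled"
for _ in range(200):
    action = choose(state)
    candidate = best_cost - random.random() if random.random() < 0.3 else best_cost
    reward = 1.0 if candidate < best_cost else -0.1
    next_state = "improving" if candidate < best_cost else "stalled"
    update(state, action, reward, next_state)
    best_cost, state = min(best_cost, candidate), next_state
print({k: round(v, 2) for k, v in q.items()})
```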

2.
Biomimetics (Basel) ; 9(5)2024 May 09.
Article in English | MEDLINE | ID: mdl-38786493

ABSTRACT

The set-covering problem aims to find the smallest possible set of subsets that cover all the elements of a larger set. The difficulty of solving the set-covering problem increases as the number of elements and sets grows, making it a complex problem for which traditional integer programming solutions may become inefficient in real-life instances. Given this complexity, various metaheuristics have been successfully applied to solve the set-covering problem and related issues. This study introduces, implements, and analyzes a novel metaheuristic inspired by the well-established Growth Optimizer algorithm. Drawing insights from human behavioral patterns, this approach has shown promise in optimizing complex problems in continuous domains, where experimental results demonstrate the effectiveness and competitiveness of the metaheuristic compared to other strategies. The Growth Optimizer algorithm is modified and adapted to the realm of binary optimization for solving the set-covering problem, resulting in the creation of the Binary Growth Optimizer algorithm. Upon the implementation and analysis of its outcomes, the findings illustrate its capability to achieve competitive and efficient solutions in terms of resolution time and result quality.
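A continuous optimizer such as the one described above cannot act on 0/1 variables directly; a binarization layer translates its positions. The sketch below shows one common layer (a sigmoid transfer function plus random thresholding) on a tiny, made-up set-covering instance; it illustrates the general mechanism, not the paper's Binary Growth Optimizer.

```python
import math
import random

costs = [3, 2, 4, 1, 5]                           # cost of each column (subset)
covers = [{1, 2}, {2, 3}, {1, 4}, {4}, {3, 4}]    # rows covered by each column
rows = {1, 2, 3, 4}

def binarize(position):
    """Map a continuous position vector to a 0/1 column-selection vector."""
    return [1 if random.random() < 1 / (1 + math.exp(-x)) else 0 for x in position]

def is_feasible(solution):
    covered = set()
    for j, bit in enumerate(solution):
        if bit:
            covered |= covers[j]
    return covered >= rows

def cost(solution):
    return sum(c for c, bit in zip(costs, solution) if bit)

position = [random.uniform(-4, 4) for _ in costs]
sol = binarize(position)
print(sol, "feasible:", is_feasible(sol), "cost:", cost(sol))
```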

3.
Biomimetics (Basel) ; 9(2)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38392128

ABSTRACT

Population-based metaheuristics can be seen as a set of agents that smartly explore the space of solutions of a given optimization problem. These agents are commonly governed by movement operators that decide how the exploration is driven. Although metaheuristics have been used successfully for more than 20 years, performing rapid and high-quality parameter control is still a main concern. For instance, deciding on a proper population size that yields a good balance between quality of results and computing time remains a hard task, even more so in the presence of an unexplored optimization problem. In this paper, we propose a self-adaptive strategy based on the on-line population balance, which aims to improve the performance and search process of population-based algorithms. The design behind the proposed approach relies on three different components. Firstly, an optimization-based component defines all metaheuristic tasks related to carrying out the resolution of the optimization problem. Secondly, a learning-based component focuses on transforming dynamic data into knowledge in order to influence the search in the solution space. Thirdly, a probabilistic selector component dynamically adjusts the population. We present an extensive experimental process on large instance sets from three well-known discrete optimization problems: the Manufacturing Cell Design Problem, the Set Covering Problem, and the Multidimensional Knapsack Problem. The proposed approach is able to compete against classic, autonomous, and IRace-tuned metaheuristics, yielding competitive results and pointing to future work on dynamically adjusting the number of solutions that interact at different times within the search process.
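The probabilistic selector component that adjusts the population can be pictured with a small sketch. The resize rule below (grow when recent iterations stall, shrink when they improve) and its thresholds are assumptions for illustration, not the paper's exact component.

```python
import random

def adjust_population(population, improved_ratio, min_size=10, max_size=100):
    """Resize the population according to the fraction of recent improving iterations."""
    grow_prob = 1.0 - improved_ratio          # a stalled search tends to grow
    if random.random() < grow_prob and len(population) < max_size:
        # Diversify: clone a random individual and perturb it slightly.
        base = random.choice(population)
        population.append([x + random.gauss(0, 0.1) for x in base])
    elif len(population) > min_size:
        # Intensify: drop the worst individual (largest sum used as a toy fitness).
        population.remove(max(population, key=sum))
    return population

pop = [[random.random() for _ in range(5)] for _ in range(20)]
pop = adjust_population(pop, improved_ratio=0.1)   # mostly stalled -> tends to grow
print(len(pop))
```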

4.
Biomimetics (Basel) ; 9(2)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38392135

ABSTRACT

In this study, we introduce an innovative policy in the field of reinforcement learning, specifically designed as an action selection mechanism and applied here as a selector for binarization schemes. These schemes enable continuous metaheuristics to be applied to binary problems, thereby paving new paths in combinatorial optimization. To evaluate its efficacy, we implemented this policy within our BSS framework, which integrates a variety of reinforcement learning and metaheuristic techniques. After solving 45 instances of the Set Covering Problem, our results demonstrate that reinforcement learning can play a crucial role in enhancing the binarization techniques employed. The policy significantly outperformed traditional methods in terms of precision and efficiency, and it also proved extensible and adaptable to other techniques and similar problems, which could have important implications for a wide range of real-world applications. This study underscores the philosophy behind our approach: utilizing reinforcement learning not as an end in itself, but as a powerful tool for solving binary combinatorial problems, emphasizing its practical applicability and potential to transform the way we address complex challenges across various fields.
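A minimal way to picture the action-selection policy is a bandit-style selector over binarization schemes, as sketched below; the scheme names and the reward signal are placeholders, not the BSS framework's actual catalogue or reward definition.

```python
import random

actions = ["V1+standard", "V1+elitist", "S2+standard", "S2+elitist"]
totals = {a: 0.0 for a in actions}   # accumulated reward per scheme
counts = {a: 0 for a in actions}     # times each scheme was applied

def select(eps=0.1):
    """Epsilon-greedy choice by running average reward."""
    if random.random() < eps or all(c == 0 for c in counts.values()):
        return random.choice(actions)
    return max(actions, key=lambda a: totals[a] / max(counts[a], 1))

def feed_back(action, reward):
    totals[action] += reward
    counts[action] += 1

for _ in range(50):
    a = select()
    # The reward would normally be the fitness improvement the scheme produced.
    feed_back(a, random.random())
print(max(actions, key=lambda a: totals[a] / max(counts[a], 1)))
```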

5.
Biomimetics (Basel) ; 8(5)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37754151

ABSTRACT

In this work, an approach is proposed to solve binary combinatorial problems using continuous metaheuristics. It focuses on the importance of binarization in the optimization process, as it can have a significant impact on the performance of the algorithm. Different binarization schemes are presented, and a set of actions that combine different transfer functions and binarization rules, governed by a reinforcement-learning-based selector, is proposed. The experimental results show that the binarization rules have a greater impact than the transfer functions on the performance of the algorithms and that some sets of actions are statistically better than others. In particular, it was found that the sets incorporating the elite or elite roulette binarization rule are the best. Furthermore, exploration and exploitation were analyzed through percentage graphs, and a statistical test was performed to determine the best set of actions. Overall, this work provides a practical approach for the selection of binarization schemes in binary combinatorial problems and offers guidance for future research in this field.
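One of the actions discussed above pairs a transfer function with the elite binarization rule. The sketch below illustrates that pairing under assumed names: the transfer function decides which bits may change, and the changed bits are copied from the best solution found so far.

```python
import math
import random

def s_shaped(x):
    """S-shaped transfer function mapping a velocity to a change probability."""
    return 1 / (1 + math.exp(-x))

def elite_rule(velocity, current_bits, elite_bits):
    new_bits = []
    for v, b, e in zip(velocity, current_bits, elite_bits):
        # Change the bit with probability given by the transfer function,
        # taking the new value from the elite (best-so-far) solution.
        new_bits.append(e if random.random() < s_shaped(v) else b)
    return new_bits

velocity = [random.uniform(-6, 6) for _ in range(8)]
current = [random.randint(0, 1) for _ in range(8)]
elite = [1, 0, 1, 1, 0, 0, 1, 0]
print(elite_rule(velocity, current, elite))
```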

6.
Biomimetics (Basel) ; 9(1)2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38248581

ABSTRACT

In the optimization field, the ability to efficiently tackle complex and high-dimensional problems remains a persistent challenge. Metaheuristic algorithms, with a particular emphasis on their autonomous variants, are emerging as promising tools to overcome this challenge. The term "autonomous" refers to these variants' ability to dynamically adjust certain parameters based on their own outcomes, without external intervention. The objective is to leverage the advantages and characteristics of an unsupervised machine learning clustering technique to configure the population parameter with autonomous behavior, and emphasize how we incorporate the characteristics of search space clustering to enhance the intensification and diversification of the metaheuristic. This allows dynamic adjustments based on its own outcomes, whether by increasing or decreasing the population in response to the need for diversification or intensification of solutions. In this manner, it aims to imbue the metaheuristic with features for a broader search of solutions that can yield superior results. This study provides an in-depth examination of autonomous metaheuristic algorithms, including Autonomous Particle Swarm Optimization, Autonomous Cuckoo Search Algorithm, and Autonomous Bat Algorithm. We submit these algorithms to a thorough evaluation against their original counterparts using high-density functions from the well-known CEC LSGO benchmark suite. Quantitative results revealed performance enhancements in the autonomous versions, with Autonomous Particle Swarm Optimization consistently outperforming its peers in achieving optimal minimum values. Autonomous Cuckoo Search Algorithm and Autonomous Bat Algorithm also demonstrated noteworthy advancements over their traditional counterparts. A salient feature of these algorithms is the continuous nature of their population, which significantly bolsters their capability to navigate complex and high-dimensional search spaces. However, like all methodologies, there were challenges in ensuring consistent performance across all test scenarios. The intrinsic adaptability and autonomous decision making embedded within these algorithms herald a new era of optimization tools suited for complex real-world challenges. In sum, this research accentuates the potential of autonomous metaheuristics in the optimization arena, laying the groundwork for their expanded application across diverse challenges and domains. We recommend further explorations and adaptations of these autonomous algorithms to fully harness their potential.
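A rough picture of the clustering-driven adjustment is sketched below: the population is clustered in the search space and the number of occupied clusters is used as a diversity signal for growing or shrinking the population. The choice of k-means and the thresholds are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def adjust_by_clustering(population, k=3, low=0.5, rng=np.random.default_rng(0)):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(population)
    occupied = len(set(labels))
    if occupied / k < low:
        # Population collapsed into few regions: add a random individual to diversify.
        new = rng.uniform(population.min(0), population.max(0))
        return np.vstack([population, new])
    # Population is spread out: drop one individual to intensify.
    return population[:-1]

pop = np.random.default_rng(1).normal(size=(20, 5))
print(adjust_by_clustering(pop).shape)
```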

7.
Biomimetics (Basel) ; 9(1)2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38248583

ABSTRACT

Feature selection is becoming a relevant problem within the field of machine learning. The feature selection problem focuses on selecting the small, necessary, and sufficient subset of features that represents the general set of features, eliminating redundant and irrelevant information. Given the importance of the topic, in recent years there has been a boom in the study of the problem, generating a large number of related investigations. Accordingly, this work analyzes 161 articles published between 2019 and 2023 (up to 20 April 2023), emphasizing the formulation of the problem and its performance measures and proposing classifications for the objective functions and evaluation metrics. Furthermore, an in-depth description and analysis of metaheuristics, benchmark datasets, and practical real-world applications are presented. Finally, in light of recent advances, this review paper outlines future research opportunities.
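Many of the surveyed objective functions are weighted combinations of classification error and subset size. A typical wrapper-style formulation, with commonly used (here assumed) weights and a k-NN wrapper, is sketched below.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask, alpha=0.9):
    """Weighted sum of classification error and fraction of selected features."""
    if not mask.any():
        return 1.0                      # selecting nothing is the worst case
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return alpha * (1 - acc) + (1 - alpha) * mask.mean()

mask = np.random.default_rng(0).random(X.shape[1]) < 0.5   # random feature subset
print(round(fitness(mask), 4))
```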

10.
Entropy (Basel) ; 24(9)2022 Sep 14.
Article in English | MEDLINE | ID: mdl-36141179

ABSTRACT

Nature-inspired computing is a promising field of artificial intelligence. This area is mainly devoted to designing computational models based on natural phenomena to address complex problems. Nature provides a rich source of inspiration for designing smart procedures capable of becoming powerful algorithms. Many of these procedures have been successfully developed to treat optimization problems, with impressive results. Nonetheless, for these algorithms to reach their maximum performance, a proper balance between the intensification and the diversification phases is required. Intensification generates local solutions around the best solution by exploiting a promising region. Diversification is responsible for finding new solutions when the main procedure is trapped in a local region. This procedure is usually carried out by non-deterministic mechanisms that do not necessarily provide the expected results. Here we encounter the stagnation problem, which describes a scenario where the search for the optimum solution stalls before discovering a globally optimal solution. In this work, we propose an efficient technique for detecting and leaving local optimum regions based on Shannon entropy. This component can measure the uncertainty level of the observations taken from random variables. We employ this principle in three well-known population-based bio-inspired optimization algorithms: particle swarm optimization, bat optimization, and the black hole algorithm. The proposal's performance is evidenced by solving twenty of the most challenging instances of the multidimensional knapsack problem. Computational results show that the proposed exploration approach is a legitimate alternative for managing the diversification of solutions, since the improved techniques can generate a better distribution of the optimal values found. The best results are obtained with the bat method, where in all instances the enhanced solver with the Shannon exploration strategy works better than its native version. For the other two bio-inspired algorithms, the proposal operates significantly better in over 70% of instances.
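The entropy-based trigger can be sketched compactly: compute the Shannon entropy of each bit position across the binary population, and when the average entropy drops below a threshold, treat the swarm as stagnant and perturb it. The threshold and the perturbation used below are assumptions for illustration.

```python
import math
import random

def average_entropy(population):
    """Mean per-bit Shannon entropy of a binary population, in [0, 1]."""
    n, length = len(population), len(population[0])
    total = 0.0
    for j in range(length):
        p = sum(ind[j] for ind in population) / n      # frequency of bit value 1
        for q in (p, 1 - p):
            if q > 0:
                total -= q * math.log2(q)
    return total / length

def escape_if_stagnant(population, threshold=0.3):
    if average_entropy(population) < threshold:
        # Re-randomize half of the population to diversify the search.
        for ind in population[: len(population) // 2]:
            for j in range(len(ind)):
                ind[j] = random.randint(0, 1)
    return population

pop = [[1] * 10 for _ in range(20)]                    # fully converged population
print(average_entropy(pop), "->", average_entropy(escape_if_stagnant(pop)))
```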

11.
Comput Intell Neurosci ; 2019: 3238574, 2019.
Article in English | MEDLINE | ID: mdl-31636660

ABSTRACT

The integration of machine learning techniques and metaheuristic algorithms is an area of interest due to the great potential for applications. In particular, using these hybrid techniques to solve combinatorial optimization problems (COPs) to improve the quality of the solutions and convergence times is of great interest in operations research. In this article, the db-scan unsupervised learning technique is explored with the goal of using it in the binarization process of continuous swarm intelligence metaheuristic algorithms. The contribution of the db-scan operator to the binarization process is analyzed systematically through the design of random operators. Additionally, the behavior of this algorithm is studied and compared with other binarization methods based on clusters and transfer functions (TFs). To verify the results, the well-known set covering problem is addressed, and a real-world problem is solved. The results show that the integration of the db-scan technique produces consistently better results in terms of computation time and quality of the solutions when compared with TFs and random operators. Furthermore, when it is compared with other clustering techniques, we see that it achieves significantly improved convergence times.
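One way to picture the db-scan-driven binarization is sketched below: the continuous components of a solution are clustered with DBSCAN and each cluster is mapped to a transition probability. The eps/min_samples values and the probability mapping are assumptions, not the paper's exact operator.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_binarize(position, probs=(0.1, 0.5, 0.9), rng=np.random.default_rng(0)):
    values = np.asarray(position).reshape(-1, 1)
    labels = DBSCAN(eps=0.3, min_samples=2).fit_predict(values)
    # Rank clusters by their mean value; higher clusters get higher flip probability.
    order = {lab: rank for rank, lab in enumerate(
        sorted(set(labels), key=lambda lab: values[labels == lab].mean()))}
    bits = []
    for x, lab in zip(values.ravel(), labels):
        p = probs[min(order[lab], len(probs) - 1)]
        bits.append(1 if rng.random() < p else 0)
    return bits

print(dbscan_binarize([0.1, 0.15, 0.9, 0.95, 2.0, 2.05, 0.12, 1.98]))
```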


Subjects
Algorithms, Artificial Intelligence, Computer Simulation, Machine Learning, Cluster Analysis, Data Analysis
12.
Comput Intell Neurosci ; 2019: 4787856, 2019.
Article in English | MEDLINE | ID: mdl-30906316

ABSTRACT

In this research, we present a Binary Cat Swarm Optimization for solving the Manufacturing Cell Design Problem (MCDP). This problem divides an industrial production plant into a certain number of cells. Each cell contains machines with similar types of processes or part families. The goal is to identify a cell organization such that the transportation of the different parts between cells is minimized. The organization of these cells is performed through Cat Swarm Optimization, a recent swarm metaheuristic technique based on the behavior of cats. In this technique, cats have two modes of behavior, seeking mode and tracing mode, selected according to a mixture ratio. For experimental purposes, a version of the Autonomous Search algorithm was developed with dynamic mixture ratios. The experimental results for both standard Binary Cat Swarm Optimization (BCSO) and Autonomous Search BCSO reach all known global optima, both for a set of 90 instances with known optima and for a set of 35 new instances with 13 known optima.
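The mixture-ratio mechanism can be sketched in a few lines: each iteration, a fraction MR of the cats traces while the rest seek, and the Autonomous Search variant adjusts MR on-line. The update rule shown below is an assumed illustration of that idea.

```python
import random

def assign_modes(n_cats, mixture_ratio):
    """Put a fraction MR of the cats in tracing mode and the rest in seeking mode."""
    tracing = set(random.sample(range(n_cats), round(n_cats * mixture_ratio)))
    return ["tracing" if i in tracing else "seeking" for i in range(n_cats)]

def update_mixture_ratio(mr, improved, step=0.05, lo=0.1, hi=0.9):
    # More tracing (exploitation) while improving, more seeking when stalled.
    mr = mr + step if improved else mr - step
    return min(hi, max(lo, mr))

mr = 0.3
modes = assign_modes(10, mr)
mr = update_mixture_ratio(mr, improved=False)
print(modes, round(mr, 2))
```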


Subjects
Algorithms, Animal Behavior/physiology, Computer Simulation, Biological Models, Nonlinear Dynamics, Animals, Cats, Computer-Aided Design, Humans, Computer-Assisted Signal Processing
13.
Sensors (Basel) ; 19(3)2019 Feb 07.
Article in English | MEDLINE | ID: mdl-30736434

ABSTRACT

During the last decade, wireless sensor networks (WSNs) have attracted interest due to the excellent monitoring capabilities they offer. However, WSNs present shortcomings, such as energy cost and reliability, which hinder real-world applications. As a solution, Relay Node (RN) deployment strategies could help to improve WSNs. Deciding where to place these relays is known as the Relay Node Placement Problem (RNPP), which is an NP-hard optimization problem. This paper addresses two Multi-Objective (MO) formulations of the RNPP. The first one optimizes average energy cost and average sensitivity area. The second one optimizes the two previous objectives and network reliability. The authors propose to solve the two problems through a wide range of MO metaheuristics from the three main groups in the field: evolutionary algorithms, swarm intelligence algorithms, and trajectory algorithms. These algorithms are the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Strength Pareto Evolutionary Algorithm 2 (SPEA2), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Multi-Objective Artificial Bee Colony (MO-ABC), Multi-Objective Firefly Algorithm (MO-FA), Multi-Objective Gravitational Search Algorithm (MO-GSA), and Multi-Objective Variable Neighbourhood Search Algorithm (MO-VNS). The results obtained are statistically analysed to determine whether there is a robust metaheuristic that can be recommended for solving the RNPP independently of the number of objectives.
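All of the compared algorithms rest on Pareto dominance. The sketch below shows the dominance test and a non-dominated filter, assuming every objective has been expressed as minimization (maximized objectives such as reliability would be negated first).

```python
def dominates(a, b):
    """True if objective vector a is no worse than b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    return [p for p in front if not any(dominates(q, p) for q in front if q != p)]

# Toy bi-objective points: (average energy cost, negated average sensitivity area).
points = [(3.0, -7.0), (2.5, -6.0), (4.0, -8.0), (2.5, -6.5), (5.0, -5.0)]
print(non_dominated(points))
```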

14.
Springerplus ; 5(1): 1921, 2016.
Article in English | MEDLINE | ID: mdl-27867827

ABSTRACT

BACKGROUND: The importance of quality assurance in the software development process cannot be overemphasized, because its adoption results in high reliability and easy maintenance of the software system and other software products. Software quality assurance includes different activities such as quality control, quality management, quality standards, quality planning, process standardization, and improvement, among others. The aim of this work is to further investigate the software quality assurance practices of practitioners in Nigeria. While our previous work covered quality planning, adherence to standardized processes, and the inherent challenges, this work has been extended to include quality control, software process improvement, and membership in international quality standards organizations. It also makes a comparison with a similar study carried out in Turkey. The goal is to generate more robust findings that can properly support decision making by the software community. A qualitative research approach, specifically the use of questionnaire instruments, was applied to acquire data from software practitioners. RESULTS: In addition to the previous results, it was observed that quality assurance practices are quite neglected, and this can be a cause of low patronage. Moreover, software practitioners are aware of neither international standards organizations nor the required process improvement techniques; as such, their claimed standards are not aligned with those of accredited bodies and are limited to their local experience and knowledge, which makes them questionable. The comparison with Turkey also yielded similar findings, making the results typical of developing countries. The research instrument used was tested for internal consistency using Cronbach's alpha and proved reliable. CONCLUSION: For the software industry in developing countries to grow strong and become a viable source of external revenue, software quality assurance practices have to be taken seriously, because their effect is evident in the final product. Moreover, quality frameworks and tools that require minimum time and cost are highly needed in these countries.

15.
Springerplus ; 5(1): 1936, 2016.
Article in English | MEDLINE | ID: mdl-27872799

ABSTRACT

BACKGROUND: Many open source software (OSS) quality assessment models have been proposed and are available in the literature. However, there is little or no adoption of these models in practice. In order to guide the formulation of newer models so that they can be acceptable to practitioners, there is a need for a clear discrimination of the existing models based on their specific properties. Accordingly, the aim of this study is to perform a systematic literature review investigating the properties of the existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer, and Google Search were performed to retrieve all relevant primary studies in this regard. Journal and conference papers between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. RESULTS: A total of 19 OSS quality assessment model papers were selected. To select these models, we developed assessment criteria to evaluate the quality of the existing studies. Quality assessment models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded category, community-only attribute, non-community attribute, and non-quality-in-use models. Our study shows that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that nearly half (47%) of the existing models do not specify any domain of application. CONCLUSIONS: Our study is a valuable contribution to the community: it helps quality assessment model developers in formulating newer models and practitioners (software evaluators) in selecting suitable OSS from among alternatives.

16.
BMC Bioinformatics ; 17(1): 330, 2016 Aug 31.
Article in English | MEDLINE | ID: mdl-27581798

ABSTRACT

BACKGROUND: Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution of a size-limited population, and this step involves higher latencies than other parts of the algorithms, which is why the execution time of the applications mainly depends on the execution time of the fitness function. In addition, it is usual to find floating-point arithmetic formulations for the fitness functions. Thus, a careful parallelization of these functions using reconfigurable hardware technology will accelerate the computation, especially if they are applied in parallel to several solutions of the population. RESULTS: A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, allowed for obtaining higher speedups and power-reduced computation with respect to conventional microprocessors. CONCLUSIONS: The results show better performance using reconfigurable hardware technology instead of conventional microprocessors, in terms of computing time and power consumption, not only because of the parallelization of the arithmetic operations, but also thanks to the concurrent fitness evaluation of several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
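As a software analogue of the point above, the fitness function dominates the run time, so evaluating several individuals concurrently is where parallelism pays off. The sketch below uses process-level parallelism in Python merely to illustrate that idea; the paper's implementation targets reconfigurable hardware (FPGAs).

```python
import math
from concurrent.futures import ProcessPoolExecutor

def fitness(individual):
    # Placeholder floating-point fitness with deliberately heavy arithmetic.
    return sum(math.sin(x) ** 2 + math.log1p(abs(x)) for x in individual)

def evaluate_population(population):
    """Evaluate the fitness of all individuals concurrently."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(fitness, population))

if __name__ == "__main__":
    population = [[float(i + j) for j in range(1000)] for i in range(32)]
    print(evaluate_population(population)[:4])
```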


Subjects
Computational Biology/methods, Neoplasms/genetics, Algorithms, Neoplastic Gene Expression Regulation, Humans, Neoplasms/classification, Neoplasms/pathology, Software
17.
Comput Intell Neurosci ; 2015: 286354, 2015.
Article in English | MEDLINE | ID: mdl-26078751

ABSTRACT

The Sudoku problem is a well-known logic-based puzzle of combinatorial number placement. It consists in filling an n² × n² grid, composed of n² columns, n² rows, and n² subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, for which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints whose variables must be pairwise different, which are exactly the kind of constraints that Sudokus have. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrid and approximate methods.
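The filtering role of alldifferent can be pictured with the basic consistency step it subsumes: remove from each empty cell's domain every value already placed in its row, column, and box. The real propagator prunes more (via matchings); the sketch below shows only this basic step on a 9 × 9 grid where 0 marks an empty cell.

```python
def filter_domains(grid):
    """Return the candidate values left for every empty cell of a 9x9 grid."""
    domains = {}
    for r in range(9):
        for c in range(9):
            if grid[r][c] != 0:
                continue
            used = set(grid[r]) | {grid[i][c] for i in range(9)}
            br, bc = 3 * (r // 3), 3 * (c // 3)
            used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
            domains[(r, c)] = set(range(1, 10)) - used
    return domains

grid = [[0] * 9 for _ in range(9)]
grid[0][:4] = [5, 3, 0, 0]
grid[1][0] = 6
print(sorted(filter_domains(grid)[(0, 2)]))   # values still possible for cell (0, 2)
```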


Subjects
Algorithms, Artificial Intelligence, Game Theory, Problem Solving, Software, Humans
18.
ScientificWorldJournal ; 2014: 745921, 2014.
Article in English | MEDLINE | ID: mdl-25254257

ABSTRACT

Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single-objective and, to a lesser extent, multiobjective optimisation problems. In order to solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies: the first one implements historical knowledge, the second one circumstantial knowledge, and the third one normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none of them has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric.
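The fronts produced by the compared algorithms are ranked with the hypervolume S metric. For the two-objective case it can be computed with a simple sweep, as sketched below; both objectives are assumed to be expressed as minimization (the coverage objective would be negated), and the reference point is an assumption.

```python
def hypervolume_2d(front, reference):
    """Area dominated by a bi-objective front, bounded by the reference point."""
    # Sweep over points sorted by the first objective; dominated points add no area.
    pts = sorted(front)
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # this point adds a new rectangle
            hv += (reference[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front_a = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
front_b = [(1.5, 5.0), (3.0, 3.5), (4.5, 2.0)]
ref = (6.0, 6.0)
print(hypervolume_2d(front_a, ref), ">", hypervolume_2d(front_b, ref))
```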


Subjects
Algorithms, Computer Simulation, Decision Support Techniques, Theoretical Models, Reproducibility of Results
19.
ScientificWorldJournal ; 2014: 189164, 2014.
Article in English | MEDLINE | ID: mdl-24883356

ABSTRACT

The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem.
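Swarm methods on the non-unicost set-covering problem typically need a repair step to make candidate solutions feasible. The greedy cost-effectiveness repair sketched below, on a made-up instance, is a standard heuristic assumed for illustration rather than taken from the paper.

```python
costs = [4, 3, 2, 5, 1]
covers = [{1, 2, 3}, {2, 4}, {3, 4}, {1, 5}, {5}]
rows = {1, 2, 3, 4, 5}

def repair(solution):
    """Add columns with the best cover-per-cost ratio until every row is covered."""
    chosen = {j for j, bit in enumerate(solution) if bit}
    uncovered = rows - set().union(*(covers[j] for j in chosen)) if chosen else set(rows)
    while uncovered:
        j = max(range(len(costs)),
                key=lambda j: len(covers[j] & uncovered) / costs[j])
        chosen.add(j)
        uncovered -= covers[j]
    return [1 if j in chosen else 0 for j in range(len(costs))]

print(repair([0, 0, 0, 0, 1]))   # starts covering only row 5 and gets repaired
```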


Subjects
Algorithms, Artificial Intelligence, Animals, Bees, Animal Behavior, Statistical Models, Theoretical Models
20.
ScientificWorldJournal ; 2014: 465359, 2014.
Article in English | MEDLINE | ID: mdl-24707205

ABSTRACT

The Sudoku is a famous logic-placement game, originally popularized in Japan and today widely employed as a pastime and as a testbed for search algorithms. The classic Sudoku consists in filling a 9 × 9 grid, divided into nine 3 × 3 regions, so that each column, row, and region contains the digits from 1 to 9 without repetition. This game is known to be NP-complete, and various complete and incomplete search algorithms exist that are able to solve different instances of it. In this paper, we present a new cuckoo search algorithm for solving Sudoku puzzles that combines prefiltering phases and geometric operations. The geometric operators allow one to correctly move toward promising regions of the combinatorial space, while the prefiltering phases delete in advance from the domains those values that do not lead to any feasible solution. This integration leads to more efficient domain filtering and, as a consequence, to a faster solving process. We illustrate encouraging experimental results where our approach noticeably competes with the best approximate methods reported in the literature.
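The geometric intuition can be sketched for a single row: an offspring stays "between" its parents by keeping every cell on which they agree and re-sampling only the disagreeing, non-clue cells, so the row remains a permutation of 1 to 9. This is an illustrative reconstruction under the assumption that both parents carry the same clues, not the paper's exact operator.

```python
import random

def geometric_crossover_row(parent_a, parent_b, clue_mask):
    # Keep agreed cells and clue cells (clues are assumed identical in both parents).
    child = [a if (a == b or fixed) else 0
             for a, b, fixed in zip(parent_a, parent_b, clue_mask)]
    missing = [v for v in range(1, 10) if v not in child]
    random.shuffle(missing)
    # Fill the remaining cells so the row stays a permutation of 1..9.
    return [v if v else missing.pop() for v in child]

row_a = [5, 3, 4, 6, 7, 8, 9, 1, 2]
row_b = [5, 3, 6, 4, 7, 8, 9, 2, 1]
clues = [True, True, False, False, True, True, True, False, False]
print(geometric_crossover_row(row_a, row_b, clues))
```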


Subjects
Algorithms, Game Theory, Problem Solving