Swarm intelligence based algorithms mimic collective animal behavior, such as that of bird flocks, fish schools, animal herds or insect colonies, and are used to solve optimization problems in a wide range of fields. In these algorithms, agents exhibit collective behavior by interacting locally with their environment. These local rules and interactions between self-organized agents give rise to the collective intelligence known as swarm intelligence, see Karaboga et al. [1], Mavrovouniotis et al. [2] and Yang et al. [3]. A commonly used swarm intelligence based algorithm is particle swarm optimization (PSO). At present, this algorithm is widely used to solve complex multi-objective and non-linear problems. However, PSO was originally used to simulate the flocking behavior of birds, and only later was its capability as an optimizer discovered, see the work of Eberhart et al. [4]. Although PSO cannot guarantee an optimal solution, its best solution improves, or at least does not worsen, with every iteration. PSO is similar to a genetic algorithm in that the system is initialized with a population of random solutions. Each solution is assigned a randomized velocity, and the solutions, which are called particles, are then flown through the problem space. Each particle keeps track of its coordinates in the problem space, which are associated with the best solution (fitness) it has achieved so far, see Eberhart et al. [4] and Zulueta et al. [5]. In the study of Poli [6], several works that have utilized PSO are surveyed. The authors have successfully employed particle swarm optimization algorithms in Zulueta et al. [5], Martinez-Filgueira et al. [7] and Uriarte et al. [8].
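The velocity and position updates described above can be sketched as follows. This is a minimal global-best PSO in plain Python, not the implementation used in the cited works; the parameter values (inertia weight and acceleration coefficients) are common defaults, not taken from the source.

```python
import random

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimise f over [bounds]^dim with a basic global-best PSO sketch."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                  # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                 # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
best, value = pso(lambda x: sum(v * v for v in x), dim=2)
```

Note how the tracked global best can only improve between iterations, which is the sense in which PSO "always improves" even without an optimality guarantee.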
Another widely used swarm algorithm is ant colony optimization (ACO), proposed by M. Dorigo et al. [9], which is inspired by the foraging dynamics of ant colonies. Unlike PSO, its applicability is mostly limited to combinatorial problems such as transport and routing optimization.
Evolutionary algorithms process a set of solutions in parallel, while also exploiting similarities between solutions via recombination [10]. Genetic algorithms (GA) are a good example of evolutionary algorithms. These algorithms are inspired by evolution theories stating that the strongest species are more likely to survive and pass their genes on to upcoming generations. Furthermore, during evolution, random gene changes can occur. If these changes are beneficial, new species evolve; unsuccessful changes are removed by natural selection. Following this process, GA operate on a collection of chromosomes known as a population. This population is usually created randomly. With each iteration, the population achieves fitter solutions than before, until it eventually converges to a single solution [11]. The authors have previously worked with genetic algorithms in Sanchez-Chica et al. [12].
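The selection, crossover and mutation steps above can be illustrated with a small sketch. This is a generic GA on bit strings (tournament selection, one-point crossover, bit-flip mutation) applied to the classic OneMax toy problem; it is an illustration, not the GA configuration used in [12].

```python
import random

def ga_max(fitness, n_bits=16, pop_size=40, gens=60, p_mut=0.02):
    """Maximise `fitness` over bit strings: tournament selection,
    one-point crossover and bit-flip mutation (a generic sketch)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = [(fitness(ind), ind) for ind in pop]
        def tournament():
            a, b = random.sample(scored, 2)
            return (a if a[0] >= b[0] else b)[1]     # fitter of two random picks
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

random.seed(1)
best = ga_max(sum)   # OneMax: fitness is simply the number of ones
```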
Another example of an evolutionary algorithm is differential evolution (DE). DE is a vector-based metaheuristic algorithm that is similar to genetic algorithms and pattern search. As a matter of fact, it can be considered a further development of GA. DE is a stochastic search algorithm with a self-organizing tendency that does not utilize any derivative information. It is therefore a population-based and derivative-free method, as explained in Storn et al. [13] and Yang et al. [14]. The authors have applied this algorithm in previous works, such as Centeno-Telleria et al. [15].
As stated by Wang et al. [16], the Backtracking Search Optimization Algorithm (BSA) is one of the most recently proposed population-based evolutionary algorithms for global optimization. Unlike many search algorithms, BSA has a single control parameter. BSA has a simple yet effective structure that is fast, capable of solving multimodal problems, and easily adaptable to different numerical optimization problems, such as the ones proposed by Civicioglu et al. [17] and Passos et al. [18].
Furthermore, there are other algorithms worth mentioning that represent a more classical approach, such as the Nelder–Mead simplex method or the hill-climbing algorithm. On the one hand, the Nelder–Mead simplex algorithm, first published in 1965, is a popular direct search method for multidimensional unconstrained minimization, see Chang et al. [19] and Lagarias et al. [20]. On the other hand, the hill-climbing algorithm, in which each point corresponds to a solution and the height of the point corresponds to the fitness of the solution, aims to ascend to a peak by continuously moving to an adjacent state with a higher fitness, as shown by Muhlenbein et al. [21].
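The hill-climbing idea, accepting a neighbouring state only when its fitness is higher, fits in a few lines. This is a simple stochastic variant for illustration, assuming a continuous search space and a fixed neighbourhood size; it is not taken from [21].

```python
import random

def hill_climb(f, start, step=0.1, iters=1000):
    """Maximise f by repeatedly proposing a random neighbour and
    moving there only when it has higher fitness (stochastic hill climbing)."""
    x, fx = list(start), f(start)
    for _ in range(iters):
        cand = [v + random.uniform(-step, step) for v in x]
        fc = f(cand)
        if fc > fx:                  # only uphill moves are accepted
            x, fx = cand, fc
    return x, fx

random.seed(3)
# A single concave "peak" at (1, -2); the climber should end near it.
peak = lambda p: -((p[0] - 1) ** 2 + (p[1] + 2) ** 2)
x, fx = hill_climb(peak, [0.0, 0.0])
```

The obvious weakness, which motivates the population-based methods above, is that a climber started near a local peak has no mechanism for escaping it.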
The authors have employed a trained neural network in this work: a multi-layer perceptron trained with back-propagation (MLP-BP). A multilayer perceptron is a feed-forward neural network that maps sets of input data onto a set of appropriate outputs [22]. Back-propagation, in turn, is the learning method that allows the MLP to iteratively adjust the weights in the network.
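A minimal sketch of the forward pass and the back-propagation weight updates is shown below on the XOR toy problem. The architecture (one hidden layer of 8 sigmoid units), learning rate and loss are illustrative assumptions, not the network used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)      # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)      # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass through the feed-forward network
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the mean squared error, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

mse = float(np.mean((out - y) ** 2))
```

The two lines of the backward pass are the "iterative weight adjustment" mentioned above: each layer's error is propagated back through the weights of the layer after it.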
In typical swarm and evolutionary algorithms, the researcher must understand the cost function in order to guide the algorithm towards a good optimal solution. This implies that if the researcher does not have broad and deep prior knowledge of the problem on which they are working, it is quite likely that an extensive series of trial-and-error runs of the algorithm will have to be performed, which means spending a lot of time just to understand how to run the algorithm properly for that particular problem.
Although it is not a bad thing for the researcher to gain more knowledge about the problem, it is interesting to develop tools that solve the problem while automatically acquiring knowledge of it during the solution process.
In this work, the authors propose a new optimization algorithm called Basque Optimization that, in contrast to commonly used algorithms such as swarm or evolutionary algorithms, does not stop when a solution is reached, but when the function is adequately known.
The proposed algorithm works with a number of possible solutions, which are distributed among three different solution types. The first type consists of the best positions (N1 solutions at each iteration), the second covers the unknown positions (N2 solutions at each iteration), and the last contains the known positions for which the interpolation algorithm does not make a good prediction (N3 solutions at each iteration). The process is repeated until the previously established stopping conditions are fulfilled.
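One way to picture the three-way split of candidates at each iteration is the following sketch. The routine name `next_candidates`, the sampling policy for the unknown positions, and the use of absolute prediction error to rank type-3 points are all assumptions for illustration, not the authors' exact method.

```python
import random

def next_candidates(evaluated, f, surrogate, N1, N2, N3, bounds):
    """Assemble one iteration's candidate set from the three solution types
    (hypothetical sketch; `surrogate` stands in for the interpolation model)."""
    by_value = sorted(evaluated, key=f)
    best = by_value[:N1]                          # type 1: current best positions
    unknown = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
               for _ in range(N2)]                # type 2: unexplored positions
    by_error = sorted(evaluated,
                      key=lambda p: abs(f(p) - surrogate(p)), reverse=True)
    mispredicted = by_error[:N3]                  # type 3: points the interpolator
    return best + unknown + mispredicted          #         predicts badly

random.seed(4)
f = lambda p: p[0] ** 2 + p[1] ** 2
surrogate = lambda p: p[0] ** 2      # crude model ignoring the second coordinate
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(20)]
batch = next_candidates(pts, f, surrogate, N1=3, N2=4, N3=3,
                        bounds=[(-2, 2), (-2, 2)])
```

Under this reading, type 1 exploits, type 2 explores, and type 3 spends evaluations where the function model is weakest, which is what lets the algorithm stop once the function is adequately known rather than once a single optimum is found.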
Finally, the achieved results are compared to those of a PSO algorithm working on the same data, in order to determine the difference between the proposed algorithm and a widely used one such as PSO.