Improving Bees Algorithm Using Gradual Search Space Reduction

The aim of this research is to propose a new technique to improve the Bees Algorithm. The Bees Algorithm is a well-known metaheuristic optimization method that has been the subject of several attempts at improvement by overcoming some of its weaknesses. The suggested method is derived from numerical optimization methods, namely the bracketing and region elimination methods. It employs an adaptation of the region elimination method to achieve abandonment and reduction of the search space within the Bees Algorithm. The utilization of the exhaustive search involves exploring the whole search space to find the optimum at equally spaced intervals. To assess performance, the proposed method was evaluated on twenty-four benchmark functions and two engineering problems. The acquired results indicated a statistically significant improvement.


Introduction
The Bees Algorithm (BA) belongs to a relatively new category of algorithms called Swarm Intelligence (SI) algorithms, which mimic the social behaviour of natural creatures living together in swarms. SI algorithms are metaheuristic algorithms that aim to solve certain types of problems that are infeasible to solve using deterministic approaches due to their high complexity. These problems belong to the class of NP-hard problems and can be solved using SI methods, which are stochastic in nature. Many SI methods suffer from slow convergence rates and from getting trapped in local optima, particularly for hard problems ( [14]; [3]). Moreover, due to their stochasticity, they need massive processing during their search for the optimum. The Bees Algorithm is not an exception ( [19]; [2]; [24]; [11]). The remainder of this paper is structured as follows: Section 2 provides an overview of the Bees Algorithm (BA) and the issues with BA. Section 3 discusses the search space reduction method and its implementation. Section 4 presents the experiment designed to test the proposed method and discusses the results. Section 5 tests the proposed method on engineering problems.

Bees Algorithm
The Bees Algorithm was originally developed by Pham and his team in 2005 ( [20]). BA is a nature-inspired algorithm that mimics the foraging behaviour of bees in their real environment. BA starts with the initialization stage, where a number of scout bees are randomly distributed to search for the best sites. The visited sites are ranked according to their fitness. The local search commences when the best sites are selected for searching in their vicinity, within a certain neighbourhood distance. This involves recruiting more bees for the neighbourhood search around the elite sites and fewer bees for the sites with lower quality. The global search starts when the remaining bees, which have not been selected among the best sites, are assigned to search randomly again as scout bees in the search area. The new population consists of the fittest bees from the local search and the global search. The searching process continues until the stopping criterion is satisfied, usually defined as either finding the optimum (or a near-optimum) or reaching a certain number of iterations. The number of fitness evaluations per iteration can be calculated as ne x nre + (nb - ne) x nrb + (ns - nb). Below is a list of Bees Algorithm parameters that need to be initialized by the user:
i. number of scout bees (ns),
ii. number of best sites (nb) out of the sites visited by ns,
iii. number of elite sites out of the nb selected sites (ne),
iv. number of bees recruited for the ne sites (nre),
v. number of bees recruited for the other nb - ne selected sites (nrb),
vi. size of the neighbourhood (ngh).
The steps of the Bees Algorithm in its basic form:
1. Initialise the scout population with random solutions.
2. Evaluate fitness of the population.
3. While (stopping criterion not met) // Forming the new population:
4. Select sites for the neighbourhood search.
5. Recruit bees for selected sites (more bees for the fittest sites) and evaluate fitness.
6. Select the fittest bee from each patch.
7. Assign remaining bees to search randomly and evaluate their fitness.
8. End while.
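The steps above can be sketched in Python. This is a minimal illustrative implementation for continuous minimization, with parameter names following the list above; the default values chosen here are arbitrary placeholders, not the paper's settings.

```python
import random

def bees_algorithm(f, bounds, ns=20, nb=5, ne=2, nre=10, nrb=5,
                   ngh=0.1, max_iter=200):
    """Minimal sketch of the basic Bees Algorithm (minimization).

    f      : objective function taking a list of coordinates
    bounds : list of (low, high) pairs, one per dimension
    ns, nb, ne, nre, nrb, ngh follow the parameter list in the text.
    """
    rand_point = lambda: [random.uniform(lo, hi) for lo, hi in bounds]

    # Steps 1-2: initialise scouts with random solutions, evaluate and rank.
    pop = sorted((rand_point() for _ in range(ns)), key=f)

    for _ in range(max_iter):                      # Step 3: main loop
        new_pop = []
        for i, site in enumerate(pop[:nb]):        # Step 4: select best sites
            n_rec = nre if i < ne else nrb         # Step 5: more bees for elites
            patch = [site] + [
                [min(max(x + random.uniform(-ngh, ngh) * (hi - lo), lo), hi)
                 for x, (lo, hi) in zip(site, bounds)]
                for _ in range(n_rec)]
            new_pop.append(min(patch, key=f))      # Step 6: fittest bee per patch
        # Step 7: remaining bees scout randomly in the whole space.
        new_pop += [rand_point() for _ in range(ns - nb)]
        pop = sorted(new_pop, key=f)               # Step 8: repeat until done
    return pop[0]
```

Because each patch retains its current site, the best fitness in the population never worsens between iterations.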

Issues with Bees Algorithm
While many successful applications of the Bees Algorithm have been reported, like many other population-based metaheuristics, BA experiences some problems due to its stochastic nature; for example, getting trapped in local optima in functions such as Rosenbrock, Langerman and Bukin6, and slow convergence to the optimum due to a lack of search guidance. Although BA performs reliably on noisy problems ( [18]), its performance degrades on smooth unimodal functions ( [18]; [21]). BA involves a great deal of randomness, and the introduced method aims to minimize this randomness by gradually reducing the search space for a more focused acquisition of the population.

Gradual Search Space Reduction Method
The method proposed in this research is inspired by numerical optimization methods, namely the bracketing and region elimination methods ( [4]; [5]). It borrows the exhaustive search technique from bracketing while employing an adapted notion of the region elimination method to achieve abandonment and reduction of the search space within the Bees Algorithm as a metaheuristic approach. Additionally, to make the global search more intelligent, the roles of the bees in the global search stage have been varied, with different swarms of bees performing their task in different parts of the search area.

Bracketing methods and Regional Elimination:
The proposed algorithm utilizes the exhaustive search from the bracketing method in addition to the region elimination method, which are both numerical optimization approaches ( [4]; [5]). The exhaustive search involves calculating the optimum from samples taken at equally spaced intervals. This process takes place within the defined boundaries of the search space ( [16]; [5]). In the region elimination method, the core concept is to consecutively eliminate parts of the search space until the exact minimum is found. The following are the steps of this method:
1. A search space region will be specified, for example (a, b), where b > a.
2. Two interior points x1 and x2 will be selected within the region, where x1 < x2.
3. f(x1) and f(x2) will be evaluated, and
• If f(x1) < f(x2), the minimum cannot exist beyond x2 in the interval (x2, b), hence the segment (x2, b) is abandoned from the search space.
• Otherwise, the minimum cannot exist before x1, hence the segment (a, x1) is abandoned.
The process is repeated on the reduced region until the interval containing the minimum is sufficiently small.
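For a unimodal function, the region elimination steps above can be sketched as follows. Placing the two interior points at the interval thirds is one common variant of the method, not necessarily the exact rule used in the cited sources.

```python
def region_elimination(f, a, b, tol=1e-6):
    """Sketch of two-point region elimination for a unimodal f on (a, b).

    Each iteration evaluates two interior points x1 < x2 and discards
    the sub-interval that cannot contain the minimum, shrinking the
    bracket until its width falls below tol.
    """
    while b - a > tol:
        x1 = a + (b - a) / 3.0
        x2 = b - (b - a) / 3.0
        if f(x1) < f(x2):
            b = x2          # minimum cannot lie in (x2, b): abandon it
        else:
            a = x1          # minimum cannot lie in (a, x1): abandon it
    return (a + b) / 2.0    # midpoint of the final bracket
```

Each iteration keeps two thirds of the current interval, so the bracket width decays geometrically.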

The Bees Algorithm with Search Space Reduction (BASR)
To adapt the region elimination method, a few essential changes should be applied. The region elimination method was originally designed for optimizing unimodal functions, whereas the proposed method should cater for multimodal problems where more than one local or global optimum might exist. Moreover, the region elimination method is a deterministic method used to find the exact minimum. Thus, given that the Bees Algorithm is a metaheuristic approximation method, the minimum or a near-minimum within a stipulated error margin will satisfy the requirements. Consequently, the following modifications to the region elimination method were introduced: • The search space region (a, b) is considered to be the whole search space specified for the domain of the problem being tested, where b > a.
• The number of points selected should be equal to the number of initial samples specified by parameter n of the Bees Algorithm (x1, x2, . . . , xn). • The elimination interval is L = (b - a)/n, where n is the number of initial samples. Then, F(x) is evaluated, and if F(x) - F_optm > 0.001, where F(x) is the calculated optimum, F_optm is the standard optimum for the problem being tested and 0.001 is the stipulated error margin, the segment L will be eliminated from the search space from both ends. The new search space will be calculated as a_new = a + L and b_new = b - L. The process will continue until the search yields values close to the optimum within the stipulated error value, where F(x) - F_optm <= 0.001; then the search process will be terminated. If too many unsuccessful attempts have taken place and the search area has become too small, the search will be restarted from the original search space (a, b). If the maximum number of function evaluations (500000) is reached without finding a good solution within the stipulated error margin, the search will be stopped.
Furthermore, to make better use of the global search, bees are assigned to search in different areas of the search space. There are five suggested search scenarios:
• searching the whole search space, which decreases gradually from both ends by the elimination factor L of equation (2),
• searching after the first quarter (1/4) of the search space from each of the two ends, gradually decreasing from both ends by the elimination factor L,
• searching the area between the center and the right end (b) of the search space, gradually decreased from both sides by the elimination factor L,
• searching only the area between the center and the left end (a) of the search space, gradually decreased from both sides by the elimination factor L,
• searching the whole search space, which gradually decreases from the left end (a) only by the elimination factor L.
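As an illustration, the five scenarios can be expressed as initial sub-intervals of a one-dimensional search space (a, b). The helper below is a hypothetical sketch of how the starting bounds for each scenario might be derived, before any gradual reduction by L is applied; it is not code from the paper.

```python
def scenario_bounds(scenario, a, b):
    """Return the initial (low, high) sub-interval searched by scout bees
    under each of the five global-search scenarios, before the gradual
    reduction by the elimination factor L begins."""
    mid = (a + b) / 2.0
    quarter = (b - a) / 4.0
    if scenario == 1:                     # whole search space, reduced from both ends
        return (a, b)
    if scenario == 2:                     # after the first quarter from each end
        return (a + quarter, b - quarter)
    if scenario == 3:                     # center to the right end b
        return (mid, b)
    if scenario == 4:                     # center to the left end a
        return (a, mid)
    if scenario == 5:                     # whole space; later reduced from a only
        return (a, b)
    raise ValueError("scenario must be between 1 and 5")
```

Scenarios 1 and 5 start from the same interval; they differ only in whether the subsequent reduction trims both ends or the left end alone.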
The steps of the BASR algorithm:
1. Initialize the population with random solutions using the search space reduction technique.
2. Evaluate fitness of the population.
3. While (stopping criterion not met) // Forming the new bee population:
4. Select best and elite sites for the neighborhood search.
5. Recruit bees around best and elite sites and evaluate fitness.
6. Select the fittest bee from each site.
7. Assign remaining bees to search randomly using the search space reduction technique with the 5 different scenarios and evaluate their fitness.
8. If (stopping criterion not met) reduce the search space by L.
9. End while.
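Step 8, the gradual reduction itself, can be sketched as follows, assuming the elimination factor L = (b - a)/n of equation (2) computed from the original bounds. The minimum-width threshold below is an illustrative value, not one taken from the paper.

```python
def reduce_interval(a, b, a0, b0, n, min_width=1e-3):
    """One gradual-reduction step of BASR's search space.

    Shrinks the current interval (a, b) by the elimination factor
    L = (b0 - a0) / n from both ends, where (a0, b0) is the original
    search space and n the number of initial samples. If the interval
    has become too narrow after too many unsuccessful attempts, the
    search restarts from the original space (a0, b0).
    """
    L = (b0 - a0) / n
    a, b = a + L, b - L
    if b - a < min_width:        # search area too small:
        return a0, b0            # restart from the original search space
    return a, b
```

In the full algorithm this step runs only while the stopping criterion is unmet, and the 500000-evaluation budget bounds the total number of restarts.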

Experiment Setup
To evaluate the performance of the proposed algorithm, twenty-four continuous benchmark functions were used. The criteria for selecting benchmark functions were designed to account for varying degrees of search space topography, complexity, and modality. This is essential to ensure testing objectivity and reliability ( [10]). Complexity includes separability and dimensionality. Separability refers to the interdependency between function parameters; non-separable functions are mostly harder to solve than separable functions ( [10]). The dimensionality of a function is its number of decision variables or parameters; as the number of dimensions increases, a function generally becomes harder to optimize. Modality is a property related to the number of peaks in the search space: a function is unimodal if it has only one optimum; otherwise, with many global and/or local optima, it is regarded as multimodal ( [10]). The test functions used in this research are briefly displayed in the list and classified according to their properties in Table 1. Although more than half of them are two-dimensional, they are predominantly multimodal and non-separable, which adds to their difficulty.
The three key performance metrics used in this investigation are the accuracy of the fitness solution, the success rate, and the number of function evaluations (NFE). The accuracy of the fitness solution refers to how close the found optimum is to the standard one; it can also be described as the quality of the solution. The success rate refers to the number of times the algorithm was able to converge to the optimum within the maximum number of function evaluations permitted, across all runs. The number of function evaluations (NFE) denotes the number of times the benchmark function being tested was executed in each individual run. It can also be described as the speed with which the algorithm converged to the optimum.
This is because the algorithm needs less time to converge when the function under test is executed fewer times. The methodology of this research primarily considers the quality of the solution, or accuracy, and then looks at the success rate and the number of function evaluations (NFE). Hence, the main contribution of this research should be to the quality of the solution.
The results of the test will be compared to those of the basic Bees Algorithm (BBA) and versions of two other well-known algorithms, PSO and ABC. For PSO, the relatively new version SPSO2011 will be used, and for ABC, the quick ABC variant qABC will be used ( [13]; [11]). The algorithm will continue to run until one of the stopping criteria defined below is met:
• the global optimum is found within an acceptable error rate (ER), chosen here as ER <= 0.001, or
• the maximum number of function evaluations is reached (stipulated in this research as 500000).

Results and Discussion
The assessment of the performance of the proposed algorithm BASR involves a comparison with three other algorithms, namely BBA, qABC and SPSO2011. The parameter settings for these algorithms are shown above in Tables 2, 3, 4 and 5.
The analysis of performance according to the figures in Table 6 shows that the proposed algorithm BASR performed better than all the other algorithms in at least 20 functions. The mean and standard deviation figures in Table 7 confirm this conclusion, with the proposed algorithm achieving a better mean in 19 functions and a better standard deviation in 17 functions. However, in two functions, BASR reached the maximum NFE without finding the optimum within the stipulated error margin. Additionally, for the Shekel function, BASR achieved a better mean value even though qABC achieved a more accurate optimum than BASR. Despite these considerations, BASR's performance remains remarkable considering its results on functions with complex topography and multi-pocketed landscapes such as Rosenbrock, Ackley, Shaffer, Rastrigin, Griewank, and Drop, where BASR obtained a better optimum in all 50 runs. To assess the statistical significance of the performance of BASR, the p-value was calculated using the Mann-Whitney significance test. Inspection of Table 8 reveals that the figures confirm the good performance of BASR, with a better p-value in at least 18 functions, comprising three-quarters of the functions. These findings support the conclusion that the gradual search space reduction technique used in the proposed algorithm BASR helped to improve the performance of the Bees Algorithm.

Success Rate and Number of Function Evaluation
The examination of Table 9 indicates that the proposed algorithm BASR achieved the highest SR average among all the algorithms. It was able to achieve a 100% SR in all but three functions. Table 10 indicates another aspect of improved performance by BASR against qABC and BBA. Nonetheless, the comparison of BASR against SPSO2011 reveals a smaller improvement than against the other algorithms. Although the figures show lower NFE values for SPSO2011, which would suggest better performance, this cannot be considered an indication of good performance, because SPSO2011 ended the search early even though its obtained optimum did not fall within the stipulated error margin of 0.001. In fact, for the functions Easom, Shekel, Langerman, Crossit, Drop, Shubert, and McCorm, the obtained optimum was located far outside the error margin. This phenomenon is referred to as premature convergence. This is also reflected in the mean and standard deviation figures in Table 11. However, the above findings have yet to be confirmed by the statistical significance test using the Mann-Whitney test. The p-value figures in Table 12 for BASR against BBA and qABC confirm that BASR improved significantly in terms of NFE: BASR was better than BBA in 19 test functions and better than qABC in 17 functions. This means that BASR performed faster than these two algorithms. With regards to SPSO2011, the p-value figures were lower, suggesting less significant performance by BASR; however, as discussed earlier, this cannot be considered an indication of good performance by SPSO2011. It is an aspect of the premature convergence phenomenon, where an algorithm abruptly stops processing without finding a good solution.

Testing with Engineering Problems
To further evaluate the performance of the proposed algorithm BASR, two engineering problems were selected: the gear train problem, an unconstrained problem, and the tension/compression spring problem, a constrained problem.

Gear Train Problem
The gear train design problem is an engineering problem aiming to produce a gear ratio as close as possible to 1/6.931 ( [17]). The design variables represent the number of teeth of each gear, restricted to integer values between 12 and 60.

Figure 2: Gear train design scheme
To reduce the error value, the gear ratio should be as close as possible to 1/6.931 ( [12]). The objective function is formulated as the squared error between the obtained ratio and the target: f = (1/6.931 - (Tb Td)/(Ta Tf))^2, where Ta, Tb, Td and Tf are the numbers of teeth of the four gears. The results of the experiment were taken over 100 runs. The selection of parameter values for the proposed algorithm was partially decided from the literature on other BA-based algorithms and partially through trial and error. To evaluate the performance of the proposed algorithm, it was first compared with the same algorithms used to assess performance on the benchmark functions, namely BBA, qABC and SPSO2011, with the same parameter settings. The figures for BBA, SPSO2011 and qABC were obtained experimentally. The algorithm was forced to conduct a minimum of 100 iterations to acquire a good result. The stopping criteria are the same as those used to test the algorithm on the benchmark functions, or 500000 NFE. Furthermore, the obtained results were compared with some of the well-performing algorithms on this problem from the literature: ABC ( [1]), CS ( [6]) and PSO-GA ( [7]). According to Table 14, the figures show that the proposed algorithm performed better than all the other algorithms in terms of the best value achieved. Additionally, the statistical indicators suggest that BASR and BBA behaved in a more stable and consistent manner than the other two. However, the comparison with the second group of algorithms, PSO-GA, BASR, ABC, and CS, in Table 15 indicates that the proposed algorithm performed competitively with the other algorithms in terms of best value, obtaining a gear ratio of 0.14428 and an error value of 0.001%. However, the mean and standard deviation suggest that BASR was not as stable and consistent as the other algorithms. With regards to speed, ABC was the fastest, needing only 60 NFE to achieve its best result, the lowest among all. BASR was the slowest in this group with 14555 NFE, while CS came in the middle with 5000 NFE.
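For reference, the gear train objective as commonly formulated in the literature can be written as a short function; the variable names ta, tb, td and tf for the four teeth counts are illustrative.

```python
def gear_train_error(ta, tb, td, tf):
    """Gear train design objective as commonly formulated in the
    literature: the squared difference between the obtained gear
    ratio tb*td/(ta*tf) and the target ratio 1/6.931. The teeth
    counts are integers restricted to [12, 60]."""
    target = 1.0 / 6.931
    ratio = (tb * td) / (ta * tf)
    return (target - ratio) ** 2
```

The frequently reported solution (49, 16, 19, 43) yields a ratio of about 0.14428, matching the best value quoted above.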

The Tension/Compression Spring
The aim of the tension/compression spring problem is to reduce the volume of the spring steel wire, which means decreasing the weight of the spring ( [8]).

Figure 3: Tension/Compression Spring
The objective function of the tension/compression problem is f(d, D, N) = (N + 2) D d^2, where d is the wire diameter, D is the mean coil diameter and N is the number of active coils. The parameter values are restricted to 0.05 <= d <= 2.0, 0.25 <= D <= 1.3 and 2 <= N <= 15, subject to the following constraints:
g1 = 1 - (D^3 N)/(71785 d^4) <= 0,
g2 = (4D^2 - dD)/(12566(D d^3 - d^4)) + 1/(5108 d^2) - 1 <= 0,
g3 = 1 - (140.45 d)/(D^2 N) <= 0,
g4 = (D + d)/1.5 - 1 <= 0.
Engineering design problems are predominantly complex and nonlinear, with many different variables and constraints. These variables and constraints must be satisfied in order to achieve an optimized solution ( [6]). To ensure the variables remain within the domain of permitted values, a method was implemented to check that the variable values are within the limits of the domain and to regenerate them if they fall outside. With regards to the problem constraints, only feasible solutions that satisfy all the constraints are considered. 100 runs were performed, and every run was forced to complete at least 100 iterations. To evaluate the performance of the proposed algorithm, it was first compared with the same algorithms used to assess performance on the benchmark functions, BBA, qABC and SPSO2011, using the same parameter settings. The figures for BBA and qABC were obtained experimentally, while the SPSO2011 figures were obtained from the literature on applying SPSO2011 to this problem ( [23]), due to the inability to modify the algorithm code to test the problem constraints. Additionally, to further assess performance, a collection of algorithms that have been applied successfully according to the literature was used for comparison: evolutionary strategies ( [15]), PSO ( [9]), ABC ( [1]), the society and civilization algorithm (SCA) ( [22]) and unified particle swarm optimisation (UPSOm) ( [17]). Table 16 shows the parameters used to apply the proposed algorithm to the tension/compression spring problem. Inspection of the figures in Tables 17 and 18 indicates that BBA achieved the best value among all, followed by the proposed algorithm BASR. This is also true in terms of NFE, or speed, with BBA and BASR requiring 8099 and 17889 NFE respectively.
However, in terms of mean and standard deviation, both BBA and BASR were in the middle, suggesting that their performance was moderately stable and consistent. Although BASR could not improve over BBA on this engineering problem, the result indicates that the method used has the potential to achieve good performance.
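For reference, the objective and constraints of the tension/compression spring problem, in the form commonly cited in the literature, can be sketched as follows; the constraint constants are those of the standard formulation, and the feasibility check mirrors the paper's approach of considering only solutions that satisfy all constraints.

```python
def spring_volume(d, D, N):
    """Objective (N + 2) * D * d^2, proportional to the spring wire volume.
    d: wire diameter, D: mean coil diameter, N: number of active coils."""
    return (N + 2) * D * d ** 2

def spring_feasible(d, D, N):
    """Check the four standard constraints of the tension/compression
    spring problem; a design is accepted only if all gi <= 0."""
    g1 = 1 - (D ** 3 * N) / (71785 * d ** 4)
    g2 = (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4)) \
         + 1 / (5108 * d ** 2) - 1
    g3 = 1 - (140.45 * d) / (D ** 2 * N)
    g4 = (D + d) / 1.5 - 1
    return all(g <= 0 for g in (g1, g2, g3, g4))
```

A design such as (d, D, N) = (0.052, 0.36, 12) satisfies all four constraints, while loosely wound designs with few coils typically violate the deflection constraint g1.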

Conclusion and Future Work
This research presented a new method, BASR, to improve the performance of the Bees Algorithm. It used an adaptation of two numerical methods, namely the bracketing (exhaustive search) and region elimination methods, fitted into a metaheuristic approach. It was tested on twenty-four benchmark functions and two engineering problems. Its performance on the benchmarks was accurate and fast, with an acceptable level of consistency. However, it performed only moderately well on the engineering problems, with a moderate level of consistency. The analysis of the obtained results indicated that search space reduction has the potential to improve BA performance on optimization problems with continuous search spaces, due to its ability to achieve a focused search. This suggests the need for further improvement of the articulated method on engineering problems.