Artificial bee colony algorithm with directed scout

As a relatively new model, the artificial bee colony algorithm (ABC) has shown impressive success in solving optimization problems. Nevertheless, its efficiency is still unsatisfactory on some complex optimization problems. This paper modifies the scout phase of ABC and of several of its recent variants to improve their performance. The modification enhances exploitation ability by intensifying the search in regions of the search space that are likely to contain good solutions. The experiments were performed on the CEC2014 and CEC2015 benchmark suites and on real-life problems. The proposed modification was applied to basic ABC, Gbest-Guided ABC, Depth-First Search ABC, and Teaching-Learning-Based ABC, and each original algorithm was compared with its modified counterpart. The results show that our modification successfully increases the performance of the original versions. Moreover, the proposed modified algorithm was compared with state-of-the-art optimization algorithms and produced competitive results.


Introduction
Most real-world problems are non-convex, non-differentiable, or lack explicit formulations. For conventional gradient-based optimization approaches, solving these problems is rather difficult (Zhou et al. 2017). Meta-heuristics have shown superior performance on such problems as a practical alternative and have attracted researchers for several reasons. They are simple, so scientists from different fields can quickly learn and apply them to solve their problems. They are flexible, requiring no specific structural modification to be used in a new application. They are derivative-free, making them well suited to real problems with costly or unknown derivative information. Finally, their stochastic nature helps them avoid local optima, making them a good option for real problems (Mirjalili et al. 2014). These characteristics have motivated scientists to simulate various natural concepts, introduce new meta-heuristics, hybridize two or more of them, or enhance existing meta-heuristics.
The performance of P-meta-heuristic algorithms is application-dependent; an algorithm may show promising results in one specific application but poor results in others (Wolpert and Macready 1997). This fact motivates researchers to keep developing novel P-meta-heuristic algorithms or to modify and improve the available ones. Accordingly, in this work, we introduce a new variant of one of the most common P-meta-heuristic algorithms, the artificial bee colony (ABC) algorithm.
Recently, the ABC algorithm, initially developed in 2005 by Karaboga (2005), has proved its efficiency in solving many optimization problems in different applications. Indeed, it is one of the best-known population-based meta-heuristics, with citations exceeding 6813 in Google Scholar. Its simple structure, small number of parameters, and ease of implementation have attracted many researchers to apply its variants in different applications. Although the performance of ABC in solving complex optimization problems is competitive with other population-based optimizers, owing to its good exploration, it is still not good enough because of its poor exploitation (Jadon et al. 2017). That is why many researchers have tried to modify the original ABC and proposed new versions. Despite the high performance of the modified versions of ABC in the literature, they may be improved further by directing the scout search to exploit promising search areas. It is worth mentioning that most ABC variants have focused their modifications on the employed and onlooker bee phases, as the following review shows.
In both the employed and onlooker bee phases of ABC, only one dimension of the parent solution is modified. This update mechanism searches along the current axis and ignores the other axes, which leads to weak exploitation (Akay and Karaboga 2012). Besides, in the scout phase, the abandoned food source is replaced by a new random source, confirming that the ABC search equation and structure focus more on exploration. At the same time, an effective meta-heuristic depends heavily on a good compromise between exploration and exploitation. Therefore, the main challenge is to improve the balance between exploration and exploitation in the ABC solution search equation. Hence, enhancing ABC performance has become an active research topic.
The versions of ABC can be classified into three categories: versions that keep the framework of ABC but modify its search equations, versions that modify both the search equations and the framework of ABC, and versions that hybridize ABC with other algorithms.
According to the versions that keep the framework of ABC but modify its search equations, the reason behind the weak exploitation is the search equations of ABC, since the update during the search is along the current axis and ignores the other axes (Akay and Karaboga 2012). As a result, many versions of ABC have modified only the search equations, as in the following examples. Inspired by PSO search equations, a gbest-guided ABC (GABC) was introduced; this version incorporated the knowledge of the global best solution into the search equation to improve the speed of convergence (Zhu and Kwong 2010). In Akay and Karaboga (2012), two control parameters were added to ABC to control the magnitude and frequency of perturbation. In Tsai et al. (2009), the interactive ABC included the principle of universal gravitation in the solution search equation. In Gao et al. (2012), two search equations were designed, focusing respectively on exploitation and exploration, together with a control parameter that combines them to balance the two (called MABC). A Gaussian ABC was provided, using a parameter to control the frequency of the Gaussian and uniform distributions (dos Santos and Alotto 2011). PS-MEABC borrowed from PSO a search equation that uses the best solution and other promising solutions (Xiang et al. 2014). Another way of using global best knowledge in the onlooker phase has also been developed (Luo et al. 2013). The experimental results show that, in some cases, these approaches beat the others.
According to the versions that modify both the search equations and the framework of ABC, the search equations are not the only source of weakness; the searching strategy also needs to focus more on exploitation. For example, in Gao and Liu (2011), a parameter was introduced to balance the search equations, together with a new search strategy. Gao et al. (2013a) suggested a new search equation similar to the Genetic Algorithm crossover procedure (named CABG); in addition, an orthogonal learning method was integrated with three different versions of ABC to extract more valuable information during the search. The simulation results show that CABG is competitive and effective. A Gaussian search equation has been proposed to generate solutions in the onlooker phase; it exploits the hidden information of the best solution to improve exploitation (Gao et al. 2015). The same authors then developed two new search equations for the onlooker and employed bee phases and used Powell's method to enhance exploitation (Gao et al. 2013b). A quick ABC (qABC), which employs a novel search equation in the onlooker phase, was introduced to exploit the neighbors of the best solution. Directional information was appended to ABC to create a new search strategy that constructs a candidate solution based on previous directional information (called dABC) (Kıran and Fındık 2015). A self-adaptive local search strategy was implemented with ABC to exploit the search area near the best solution, which accelerates ABC convergence (Jadon et al. 2015). A depth-first search (DFS) framework, which assigns more computing resources to higher-quality food sources, and two search equations, which exploit the elite solutions, were added to enhance the performance of ABC (called DFSABC_elite).
It was reported that the suggested framework can speed up the convergence rate and that DFSABC_elite is more robust than the compared algorithms (Cui et al. 2016). In Wang et al. (2020), the mechanism used to select food sources in the onlooker phase of ABC was replaced by a new mechanism based on a neighborhood radius in a ring topology; furthermore, the scout phase was strengthened using opposition-based learning and the neighborhood-radius concept. In Zhou et al. (2021), MGABC used multi-elite guidance to increase the exploitation ability of ABC without affecting its exploration ability. For this purpose, two novel solution search equations replace the original ones in the employed and onlooker bee phases, respectively; furthermore, a modified neighborhood search operator is developed.
According to the versions that hybridize ABC with other algorithms, a successful hybrid combines the advantages of its components; thus, a successful hybrid version of ABC should increase its exploitation ability. For example, in Rosenbrock ABC (RABC), ABC carries out the exploration phase, and the Rosenbrock rotational direction method completes the exploitation phase. In Xiang and An (2013), the chaotic mapping technique is used in the initialization and scout phases to increase ABC's performance and solution quality (called ERABC). In Kang et al. (2013), Hooke-Jeeves pattern search is combined with ABC, where exploitation is carried out by the pattern search; the results showed promising convergence speed, solution accuracy, and efficiency. The ABC algorithm with a multi-strategy ensemble, called MEABC, was proposed to solve optimization problems with various features; it includes a pool of different solution search strategies that compete with each other during the search to produce offspring (Wang et al. 2014). Also, five update strategies were combined with ABC to enhance its performance (Kiran et al. 2015). The differential evolution strategy has also been hybridized with ABC, and the performance of this hybridization, called DE-ABC, is better than that of both DE and ABC (Abraham et al. 2012). A hybridization of ABC and PSO was produced to share valuable information between them; two information-sharing processes were added between PSO and ABC, and the proposed hybrid method shows competitive results. To combine the parallelism of GA computation and the fast convergence of ABC, a hybridization between GA and ABC was produced by exchanging knowledge between the bee colony population and the GA population (Zhao et al. 2010). Levy flight and opposition-based learning methods have been used to improve the performance of ABC by enhancing its exploitation in different works (Sharma et al. 2016; Sharma et al. 2013; Saleh and Akay 2019). Another hybridization of ABC with teaching-learning-based optimization (TLBO), called TLABC, combines ABC's exploration with TLBO's exploitation and enhances the performance of both ABC and TLBO (Chen and Xu 2018).
Motivated by the fact that none of the previous versions of ABC is optimal, in this paper we focus on improving not only the performance of the ABC algorithm but also that of its most promising variants. The contributions of this paper are summarized as follows.
- Indicator parameters are introduced for each solution to improve the exploitation ability of the ABC algorithm.
- A new local search equation is proposed to obtain more robust performance and enhance the convergence speed of the ABC algorithm.
- Different well-known ABC variants, such as GABC, DFSABC, and TLABC, are also used to validate the efficiency of the proposed idea.
The rest of this paper has the following structure. The original ABC is briefly discussed in Sect. 2, and related works on improved ABCs are also introduced. The structure of the proposed algorithms is defined in Sect. 3 based on the proposed modifications. Section 4 presents simulation tests and comparisons with other algorithms. The conclusion of the paper is in Sect. 5.

Artificial bee colony algorithm
The ABC algorithm is an optimization algorithm inspired by the waggle dance and intelligent foraging behavior of honey bees. In ABC, a solution represents a food source, and the solution's fitness represents the nectar amount of that food source. The ABC algorithm splits the bee foraging activities into three stages. The first half of the colony comprises employed bees, which are responsible for randomly looking for better food in the vicinity of their respective parent food sources and then transmitting the food-source information to the onlooker bees. The second half of the colony comprises onlooker bees, which use the information coming from the employed bees to search for better food sources. If a food source's quality does not improve within a predetermined number of trials (limit), the food source is abandoned by its employed bee, which then becomes a scout bee and starts to look for a new random food source. In ABC, three control parameters are used: the number of food sources SN (equal to the number of employed bees and to the number of onlooker bees), the limit, and the maximum number of cycles. The main steps of ABC are given in Algorithm 1.

Initialization
The first step of any meta-heuristic algorithm is initialization. Equation (1) is used to generate a random population of SN food sources. In its standard ABC form,

x_i^j = x_min^j + rand(0, 1) * (x_max^j - x_min^j),    (1)

where x_max^j is the upper bound and x_min^j is the lower bound of the jth parameter of a solution (food source).
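As a concrete illustration, the uniform initialization of Eq. (1) can be sketched in Python (a hedged sketch; the function and parameter names are ours, not the paper's):

```python
import numpy as np

def init_population(sn, dim, x_min, x_max, rng=None):
    """Eq. (1): x_i^j = x_min^j + rand(0, 1) * (x_max^j - x_min^j)."""
    rng = np.random.default_rng() if rng is None else rng
    # One uniform random draw per parameter of every food source.
    return x_min + rng.random((sn, dim)) * (x_max - x_min)
```

For the CEC2014 setting used later in the paper, x_min = -100 and x_max = 100.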

Employed bee phase
Equation (2) is used to modify the existing solution in this phase, where k ∈ [1, SN] and j ∈ [1, D] are randomly chosen indices, SN is the number of food sources, D is the dimension of the solution, and k is not equal to i.
The employed bee then applies greedy selection: the current solution is replaced by the new solution if the new solution's fitness, determined by Eq. (3), is better.
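The employed-bee update and the fitness mapping can be sketched as follows, assuming the standard ABC forms of Eqs. (2) and (3): v_i^j = x_i^j + phi * (x_i^j - x_k^j) with phi ~ U(-1, 1), and fit = 1/(1+f) for f >= 0, else 1 + |f|:

```python
import numpy as np

def employed_update(pop, i, rng):
    """Eq. (2), standard ABC form: perturb one randomly chosen
    dimension j of solution i towards/away from a random peer k != i."""
    sn, dim = pop.shape
    k = int(rng.choice([s for s in range(sn) if s != i]))
    j = int(rng.integers(dim))
    v = pop[i].copy()
    v[j] = pop[i, j] + rng.uniform(-1.0, 1.0) * (pop[i, j] - pop[k, j])
    return v

def fitness(f):
    """Eq. (3), common ABC fitness mapping: higher fitness means a
    better (lower) objective value f."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)
```

Note that only a single dimension is changed per update, which is exactly the single-axis behavior criticized by Akay and Karaboga (2012).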

Onlooker bee phase
The task of the onlooker bees starts when the employed bees finish. In this phase, every onlooker bee flies to a food source according to the quality information calculated from the food sources reported by the employed bees. Equation (4) computes the selection probability from the quality of those sources: the higher the fitness value, the greater the chance of selection.
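The selection probability of Eq. (4) can be sketched in its fitness-proportional form (an assumption; some ABC implementations use p_i = 0.9 * fit_i / max(fit) + 0.1 instead):

```python
import numpy as np

def selection_probabilities(fits):
    """Eq. (4), fitness-proportional form: p_i = fit_i / sum_n fit_n.
    Onlooker bees then sample food sources from this distribution
    (roulette-wheel selection)."""
    fits = np.asarray(fits, dtype=float)
    return fits / fits.sum()
```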

Scout bee phase
If a food source is not improved for limit trials, it is presumed to be exhausted. The corresponding employed bee then becomes a scout, and a new random food source in the search area is generated using Eq. (1).

Gbest-guided ABC (GABC)
Gbest-guided ABC is a modified version of ABC that adjusts the search equation by adopting the global-best guidance of PSO to direct the search for new candidate solutions towards better exploitation. In this version, search Eq. (2) is replaced by Eq. (5) to improve the exploitation of ABC (Zhu and Kwong 2010).
where C is a nonnegative constant whose value controls the balance between exploration and exploitation, and x_gbest^j is the jth parameter of the global best solution in the population. When the value of C decreases, exploration increases, whereas when it increases, exploitation increases. However, this constant should be chosen carefully, because if its value is too high, both exploration and exploitation may decrease.
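The GABC update of Eq. (5) can be sketched as follows (a sketch of the published form v_i^j = x_i^j + phi * (x_i^j - x_k^j) + psi * (x_gbest^j - x_i^j), with phi ~ U(-1, 1) and psi ~ U(0, C); function names are ours):

```python
import numpy as np

def gabc_update(pop, i, gbest, C, rng):
    """Eq. (5) of GABC (Zhu and Kwong 2010): the standard ABC
    perturbation plus a gbest-guided term scaled by psi ~ U(0, C)."""
    sn, dim = pop.shape
    k = int(rng.choice([s for s in range(sn) if s != i]))
    j = int(rng.integers(dim))
    v = pop[i].copy()
    v[j] = (pop[i, j]
            + rng.uniform(-1.0, 1.0) * (pop[i, j] - pop[k, j])
            + rng.uniform(0.0, C) * (gbest[j] - pop[i, j]))
    return v
```

With C = 0 the gbest term vanishes and the update reduces to the original Eq. (2); the paper's experiments use C = 1.5.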

Depth first search ABC (DFSABC)
It is common knowledge that ABC is one of the most popular swarm algorithms due to its high exploration. However, its search equation (Eq. (2)) gives it slow convergence. Consequently, in Cui et al. (2016), a better balance between exploration and exploitation, together with a depth-first search framework, was proposed. Two modifications to ABC are introduced in this version:

- Two new search equations replace the ones used in the employed and onlooker bee phases of ABC, increasing the exploitation ability.
- In the onlooker bee phase, only the best solutions (elite solutions) of the overall population are used. Since more attention is paid to the regions of the elite solutions, a further improvement in exploitation is obtained.

These two modifications improve ABC's exploitation and its overall efficiency. Since DFSABC_elite is another version of ABC, it has the same phases, with some changes in the employed and onlooker phases, which we discuss in detail in this section.
Employed Bee Phase: At the beginning of this phase, the elite solutions (P food sources, usually 10% or 20% of the population) are chosen, and a flag parameter Flag1 is set to 1. While Flag1 equals 1, for each employed bee a random food source x_r is selected, and a new solution x_new is generated in its neighborhood by Eq. (6).
where j ∈ [1, D]; x_i^j is the jth dimension of the ith new solution in each iteration; x_b^j is the jth dimension of the bth solution, chosen randomly from the elite solutions; and x_k^j is the jth dimension of the kth solution, chosen randomly from the whole current population, with k, b, and r all different from each other. The old food source is replaced by the new one if and only if the new objective value is better; in that case, the failure counter of the ith food source is reset to 0 and Flag1 is set to 0 (meaning that the next employed bee in the same iteration will continue searching around the same food source, since Flag1 is not 1). Otherwise, the old food source is kept, the failure counter of the ith food source is increased by 1, and Flag1 is set to 1 (meaning that the next employed bee looks for another random food source, since Flag1 is 1).
Onlooker Bee Phase: At the beginning of this phase, a flag parameter Flag2 is set to 1. While Flag2 equals 1, for each onlooker bee a random food source x_b is selected from the elite solutions, and a new solution x_new is generated in its neighborhood by Eq. (7).
where x_best^j is the jth dimension of the best solution in the whole population, which is different from x_b^j. The old food source is replaced by the new one if and only if the new objective value is better; in that case, the failure counter of the ith food source is reset to 0 and Flag2 is set to 0 (meaning that the next onlooker bee in the same iteration will continue searching around the same food source, since Flag2 is not 1). Otherwise, the old food source is kept, the failure counter of the ith food source is increased by 1, and Flag2 is set to 1 (meaning that the next onlooker bee looks for another random food source, since Flag2 is 1).
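The depth-first flag mechanism described above can be sketched as follows (a simplified single-flag sketch; `search_one` is a placeholder for the Eq. (6)/(7) operators, whose exact forms are given in Cui et al. (2016)):

```python
import numpy as np

def dfs_phase(pop, objective, search_one, failures, rng):
    """Depth-first bee phase sketch: while a food source keeps
    improving (flag == 0), the next bee revisits the same source;
    after a failure (flag == 1), the next bee draws a new random
    source. `search_one(x, rng)` stands in for Eq. (6) or (7)."""
    sn = pop.shape[0]
    fvals = np.array([objective(x) for x in pop])
    flag, i = 1, 0
    for _ in range(sn):                   # one trial per bee
        if flag == 1:                     # previous trial failed
            i = int(rng.integers(sn))     # pick a new random source
        x_new = search_one(pop[i], rng)
        f_new = objective(x_new)
        if f_new < fvals[i]:              # improvement: stay on source
            pop[i], fvals[i] = x_new, f_new
            failures[i], flag = 0, 0
        else:                             # failure: move on
            failures[i] += 1
            flag = 1
    return pop, failures
```

This is the sense in which DFSABC_elite "assigns more computing resources to higher-quality food sources": a source that keeps improving receives consecutive trials instead of one trial per cycle.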

Teaching-learning based ABC (TLABC)
TLABC is a hybridization of the TLBO and ABC algorithms that combines the advantages of both (the exploration of ABC and the exploitation of TLBO). It employs three hybrid search phases, as follows (Chen and Xu 2018).
Teaching-Based Employed Bee Phase: Here each employed bee uses a hybrid of TLBO's teaching strategy and the mutation operator of differential evolution to search for a new food source, which greatly increases the variety of search tendencies and upgrades the search ability of TLABC. Equation (8) is the search equation used in this phase.
Learning-Based Onlooker Bee Phase: In this stage, an onlooker bee chooses a food source to search according to the selection probability determined using Eq. (4). After that, the onlooker bee generates new food sources using TLBO's learning strategy, expressed in Eq. (9), where j ∈ [1, SN] and j ≠ s.
Generalized Oppositional Scout Bee Phase: In this stage, if a food source cannot be improved further for a specific period, it is considered depleted and is abandoned. A random candidate solution and its generalized oppositional solution are then created, and the better of the two replaces the old depleted food source. Equations (1) and (10) are the equations used in this phase, respectively.
where k is a random number in [0, 1], a j = max(X ), and b j = min(X ).
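The generalized opposition of Eq. (10) can be sketched as follows (standard generalized opposition-based learning, x_opp^j = k * (a_j + b_j) - x_j; function names are ours):

```python
import numpy as np

def generalized_opposition(x, a, b, k):
    """Eq. (10): x_opp^j = k * (a_j + b_j) - x_j, where k ~ U(0, 1),
    a_j = max(X) and b_j = min(X) over the current population's jth
    dimension. With k = 1 this reduces to ordinary opposition
    a + b - x."""
    return k * (np.asarray(a) + np.asarray(b)) - np.asarray(x)
```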

Proposed intensification in the ABC algorithm
In the scout phase of the original ABC and its other versions, the abandoned food source is replaced by a new random source or by its opposite, which increases exploration. However, ABC and its versions already have high exploration, so in this work we focus on using the scout phase for exploitation instead. Accordingly, to achieve good exploitation, two steps must be taken: first, we must know where the fertile area to be exploited is; then, we can exploit this area with a good local search equation. The first step is accomplished by introducing an indicator parameter for each solution, which is incremented at each evaluation in which that solution improves, as introduced in Eq. (11). The idea here is that a solution will keep improving in a fertile area, unlike in a useless one.
where h_i is the indicator of the ith solution. It is clear from Eq. (11) that each solution has its own indicator, which increases whenever its fitness improves. As a result, at the early stages of the search we may have many promising areas; our strategy is to visit them one by one, from the most fertile to the least. By doing so, we direct the scout bees to search only in the most promising areas. The second step is accomplished by exploiting the neighborhood of the solution with the maximum indicator (the fertile area). Here, the scout phase is converted from an exploration phase into an exploitation phase by replacing the abandoned food source with a new solution in the fertile area. To achieve this, the proposed search equation (Eq. (12)) is used, where a solution is created in the fertile area of the search space rather than around the old depleted food source.
In the second term of Eq. (12), the absolute value of the difference between the two solutions x_g and x_h is calculated. The absolute value guarantees that the direction of approach is towards x_g. We want the new solution to converge towards x_g because x_g is always nearer to the global solution than x_h, except when x_g = x_h. This structure is incorporated into the original ABC and other well-known versions of ABC, such as GABC, DFSABC_elite, and TLABC; the respective amended algorithms are referred to as ABC/ds, GABC/ds, DFSABC/ds, and TLABC/ds.
where x_ind is the new solution generated in the fertile area, x_g is the current best solution, rand is a uniformly distributed random number in [0, 1], and x_h is the solution with the maximum indicator value. The abandoned solution in the scout phase is replaced by x_ind to increase exploitation. To sum up, one of the following four cases might occur during the scout-phase search.
1. When x_g and x_h are in opposite directions around a convex, as shown in Fig. 1a, the new solution x_ind will be somewhere near the optimum of this convex.

2. When x_g and x_h are in the same direction on a convex, as shown in Fig. 1b, the new solution x_ind will be somewhere between them. Moreover, the number of solutions in this area increases, which guarantees increased exploitation.

3. When x_g and x_h are in opposite directions but the search area is not convex, the following scenarios might occur:

(a) When x_g and x_h are around the global convex, the new solution x_ind will be somewhere near the global convex, as shown in Fig. 1c.

(b) When x_g and x_h are around a vertex, the new solution x_ind will be somewhere around this vertex, as shown in Fig. 1d. This is the case in which our proposed model affects the performance of the actual algorithm neither positively nor negatively.

(c) When x_g and x_h are around a local convex, the new solution x_ind will be somewhere around this local convex, as shown in Fig. 1e. In this scenario, the area is exploited very well, and in subsequent iterations x_ind will become its local optimum.

4. When x_g and x_h are in the same direction and the search area is not convex, two scenarios might occur:

(a) When x_g and x_h are in the same direction on the global convex, as shown in Fig. 1f, the new solution x_ind will be somewhere near the global solution. Moreover, the number of solutions in this area increases, which guarantees increased exploitation.

(b) When x_g and x_h are in the same direction on a local convex, as shown in Fig. 1g, the new solution x_ind will be somewhere near this local optimum. In this scenario, the area is exploited very well; in subsequent iterations, x_g will be its local optimum and scenario 3(c) will occur.

The following points are worth mentioning. First, after each update in the scout phase, the maximum indicator is updated so that it points to another promising solution, and another fertile area comes into play. Second, this process not only guarantees that the search covers all fertile areas but also guarantees that the search follows one of the scenarios of Fig. 1c or 1d, where x_g is already the local optimum. Finally, since ABC is a stochastic algorithm, our idea is stochastic too; however, following its logic, there is a high likelihood of avoiding getting stuck in a local optimum, although more iterations are needed.
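The bookkeeping around Eqs. (11) and (12) can be sketched as follows. The indicator update of Eq. (11) is fully specified by the text; the exact form of Eq. (12) is not reproduced here, so the update line below (x_ind = x_g + rand * |x_g - x_h|) is an assumed reading and should be treated as a sketch only:

```python
import numpy as np

def on_improvement(i, indicators, failures):
    """Eq. (11): increment a solution's indicator whenever it
    improves, and reset its failure counter."""
    indicators[i] += 1
    failures[i] = 0

def directed_scout(pop, indicators, gbest, rng):
    """Directed scout: replace the abandoned source with a solution
    in the fertile area, i.e. near the solution x_h with the maximum
    indicator, pulled towards the current best x_g. The update line
    is an ASSUMED form of Eq. (12), not a verbatim reproduction."""
    x_h = pop[int(np.argmax(indicators))]
    return gbest + rng.random(gbest.shape) * np.abs(gbest - x_h)
```

In a full implementation, `on_improvement` would be called at every greedy selection of the employed and onlooker phases, and `directed_scout` would replace the random regeneration of the scout phase.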
However, the exploration of ABC is already high, so we trade some exploration for exploitation by generating solutions near the best solution instead of randomly. In this way, we improve the exploitation of the algorithm while keeping an acceptable balance between exploration and exploitation. The other modification we add is the fertile-area indicator: by keeping an indicator for each solution, which increases whenever the solution improves, the fertile area can be discovered, since the solution whose indicator is highest is indeed in the fertile area. The ABC/ds, GABC/ds, DFSABC/ds, and TLABC/ds pseudo-codes are shown in Algorithm 2, Algorithm 3, Algorithm 4, and Algorithm 5, respectively.

Time complexity of the proposed models
The complexity of the proposed models is determined on the benchmark functions of CEC2014, in compliance with the CEC2014 directives. The parameters T0, T1, and T2 are the same as the CEC2014 parameters. As defined in the benchmark, T0 is the time taken to run the reference test loop (for i = 1 : 1000000), T1 is the time needed to evaluate function f18 200,000 times, and T2 is the time taken by the optimization algorithm to solve function f18 with 200,000 function evaluations. The calculation of T2 is executed 5 times, yielding 5 values whose mean is T2_bar = Mean(T2). The complexity of the algorithm is reflected by (T2_bar - T1)/T0. Table 1 shows the complexity of the four proposed models against their original versions. It is clear from Table 1 that the complexity of the proposed models is very close to that of the original ones, except for DFSABC/ds, which has lower complexity than DFSABC.

Algorithm 3: Gbest-Guided ABC with Directed Scout (GABC/ds)
1: Initialize the population by Eq. (1)
2: Calculate the fitness values of the population
3: repeat
4: for i = 1 ... SN do
5: Generate a new solution x_new using Eq. (5)
6: Proposed solution change phase (Algorithm 6)
7: end for
8: Determine the nectar amounts of the food sources by Eq. (4)
9: for i = 1 ... SN do
10: Using the roulette method, select onlooker bees to move onto food sources according to the selection probability of Eq. (4)
11: Generate a new solution x_new using Eq. (5)
12: Proposed solution change phase (Algorithm 6)
13: end for
14: Proposed scout bee phase (Algorithm 7)
15: Memorize the best solution
16: until Cycle = MaxCycle or termination criteria are met

Algorithm 6: The procedure of the proposed solution change phase
1: if fitness(x_new) ≥ fitness(x_i) then
2: Replace x_i by x_new
3: Set failure(i) = 0 and indicator(i) = indicator(i) + 1
4: else
5: Set failure(i) = failure(i) + 1
6: end if

Algorithm 7: The procedure of the proposed scout bee phase
1: Find the solution with the maximum failure value
2: if max(failure) ≥ limit then
3: Find the food source with the maximum improvement indicator
4: Update its value by applying the update Eq. (12)
5: Create its generalized oppositional solution using Eq. (10)
6: Use the better of the two instead of the old depleted food source
7: end if

The pseudo-codes of ABC/ds, DFSABC/ds, and TLABC/ds (Algorithms 2, 4, and 5) follow the same skeleton, each with its own search equations (e.g., Eq. (9) in the onlooker phase of TLABC/ds).

Experimental results
In this paper, we divide the discussion of the results into three subsections. In the first subsection, we use the CEC2014 benchmark functions to show the performance of the proposed models. In the second subsection, we use the CEC2015 benchmark functions to evaluate the proposed models on 15 challenging, computationally expensive problems.
In the third and final subsection, the validity of the proposed models is checked on some common real-life problems.

Proposed models for solving CEC2014 benchmark functions
In this section, the validity of the proposed optimization algorithms is checked on the CEC2014 benchmark problems. This benchmark collection comprises 30 unconstrained (unimodal, multimodal, hybrid, and composite) optimization problems of varying degrees of difficulty. Table 2 summarizes the four types of functions in the CEC2014 benchmark.

Experiments configuration
The configuration of the experiments follows the benchmark guidelines: each experiment was run 51 times; the stopping criterion was 10^4 * D function evaluations; the search space range was [-100, 100]; the dimensions were D = 30 and D = 50; and the population was initialized uniformly at random within the search space. All the compared algorithms (ABC, GABC, DFSABC, and TLABC) used the same values for the common parameters. The population size (NP), the number of food sources (SN), and the limit were set to 100, 50, and SN*D for all algorithms (Karaboga and Akay 2009; Kıran and Fındık 2015). For GABC, the nonnegative constant C was set to 1.5 (Zhu and Kwong 2010). For DFSABC, the number of elite solutions P was set to 0.1*SN (Cui et al. 2016). Table 3 shows the control parameters of all algorithms used in this paper.
The comparison results of ABC, ABC/ds, GABC, GABC/ds, DFSABC, DFSABC/ds, TLABC, and TLABC/ds on the CEC2014 benchmarks for 30 and 50 dimensions are shown in Tables 4 and 5, respectively. These tables report the mean absolute error of the objective function value for the test functions. The better results are highlighted in bold.
The tables above show that the results of the models are very close to each other, so statistical tests are needed to establish whether the differences between them are significant. Thus, the Wilcoxon rank-sum test is used to check whether there is a significant difference between the proposed models and the original versions.
The pair-wise Wilcoxon test results are also shown in Tables 4 and 5 for 30 and 50 dimensions, respectively. This test shows whether there is a significant difference between the original algorithms and their proposed modified versions. The symbols +, =, and − mean that the proposed model is significantly superior, similar, or inferior to the original version, respectively, according to the Wilcoxon rank-sum test at the α = 0.05 significance level.
Tables 4 and 5 show that the proposed modification improves the performance of the original versions to different degrees. We can conclude that it affects the performance of ABC more than GABC, DFSABC, and TLABC, and the performance of GABC more than DFSABC and TLABC, and so on. This is due to the varying exploitation power of the original versions. Since our proposed modification improves the exploitation of the ABC versions, it has a smaller effect on TLABC because of its already strong exploitation (inherited from the TLBO algorithm, one of the best local-search optimization algorithms). Nevertheless, the proposed modification has clearly enhanced the performance of the TLABC algorithm in solving the hybrid problems.
We can also conclude from Table 4 that, among the four original algorithms and the four modified versions, GABC/ds achieved the best results on 9 of the 30-dimensional problems, while ABC/ds, DFSABC/ds, TLABC/ds, TLABC, DFSABC, ABC, and GABC achieved the best results on 8, 7, 6, 4, 2, 1, and 1 problems, respectively. On problem f24, TLABC and TLABC/ds achieved the same best result.

Wilcoxon rank-sum test results
From the results presented in Tables 4 and 5, GABC/ds is the best version, so we compare its success against the other versions. Tables 6 and 7 report the paired comparison of GABC/ds against the other ABC versions on the CEC2014 benchmark for 30 and 50 dimensions, respectively. A cell such as (26-3-1) means that GABC/ds is better than ABC in 26 problems, equal in three, and worse in one. From these two tables, we can conclude that the proposed GABC/ds is better than ABC, GABC, DFSABC, TLABC, ABC/ds, DFSABC/ds, and TLABC/ds. Since GABC/ds is a meta-heuristic algorithm, the Wilcoxon rank-sum test was used to validate the results statistically and to test for significant differences between GABC/ds and the other ABC versions. Tables 6 and 7 show that significant differences exist in most cases. The symbols +, =, and − indicate significant superiority, similarity, or inferiority relative to GABC/ds, respectively, according to the Wilcoxon rank-sum test at the α = 0.05 significance level.
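As an illustration of how such a pairwise comparison can be computed, the following sketch implements a two-sided Wilcoxon rank-sum test with a normal approximation and the +/=/− labelling used in the tables. Function names are ours, and details of the paper's exact procedure (e.g. tie correction, exact vs. approximate p-values) may differ:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via normal approximation.

    Average ranks are assigned to ties; the variance omits the tie
    correction for brevity. Returns (z, p).
    """
    combined = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0          # average rank of the tied block [i, j)
        for k in range(i, j):
            ranks[k] = avg
        i = j
    n1, n2 = len(x), len(y)
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)  # rank sum of x
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (w - mean) / math.sqrt(var)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

def label(errors_modified, errors_original, alpha=0.05):
    """Return '+', '=' or '-' in the sense used in the tables (lower error is better).

    Direction is decided by the mean error, which is an assumption on our part.
    """
    _, p = rank_sum_test(errors_modified, errors_original)
    if p >= alpha:
        return "="
    better = sum(errors_modified) / len(errors_modified) < \
             sum(errors_original) / len(errors_original)
    return "+" if better else "-"
```

For two clearly separated error samples, `label` returns "+"; for identical samples it returns "=", mirroring the entries summarized in Tables 6 and 7.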

Convergence trajectory and box plot comparison
The convergence curves of all the compared algorithms on some test functions are shown in Figs. 2 and 3 for dimensions 30 and 50, respectively. Each convergence curve is the average of 51 independent runs. These two figures show that the modified versions converge faster than the original versions in most cases. Although the modified versions initially track the convergence rate of the original versions, they become faster after some iterations in most cases. This is because our modification acts on the scout phase of the ABC versions, and the search does not enter the scout phase until the failure counter exceeds the limit value. It is worth mentioning that our modified versions are therefore sensitive to the limit value. The box plots of all the compared algorithms on some test functions are shown in Figs. 4 and 5 for dimensions 30 and 50, respectively. The objective-value distributions of the proposed methods are narrower than those of the original methods, demonstrating their robust performance on the test functions.

Comparison with other existing evolutionary algorithms
In this section, the performance of the proposed algorithm GABC/ds is compared to that of state-of-the-art algorithms. Table 8 shows the average error in objective function value over 51 runs. It indicates that, compared to the other meta-heuristic algorithms, the proposed GABC/ds produces very competitive results. Moreover, Table 9 compares GABC/ds with the other algorithms as a paired comparison; for example, the cell in row 6 and column 2, (21-0-9), means that GABC/ds is better than GSA in 21 problems, equal in none, and worse in 9. The analysis of Tables 8 and 9 confirms that GABC/ds produces very competitive results. Furthermore, as Table 9 suggests, GABC/ds performs particularly well on multimodal problems, where it defeated all the other algorithms.

Proposed models for solving CEC2015 benchmark functions
Tables 10 and 11 show that the proposed models achieved the best results in most cases. These results confirm the conclusion drawn above from the CEC2014 benchmark functions: adding our proposed modification improves the efficiency of the original ABC versions. They also confirm that the modification improves the original versions to different degrees: it affects the performance of ABC more than GABC, DFSABC, and TLABC, and the performance of GABC more than DFSABC and TLABC, and so on, owing to the varying exploitation power of the original versions. Since our modification improves the exploitation of the ABC versions, it has a smaller effect on TLABC because of its already strong exploitation compared with ABC or GABC (inherited from the TLBO algorithm, one of the best local-search optimization algorithms).

From the results presented in Tables 10 and 11, ABC/ds is the best version on the CEC2015 benchmark functions, so we compare its success against the other versions. Tables 12 and 13 report the paired comparison of ABC/ds against the other ABC versions on the CEC2015 benchmark for 10 and 30 dimensions, respectively. For example, the cell in row 6 and column 2 of Table 12, (12-1-2), means that ABC/ds is better than ABC in 12 problems, equal in one, and worse in two. From these two tables, we can conclude that the proposed ABC/ds is better than ABC, GABC, DFSABC, TLABC, GABC/ds, DFSABC/ds, TLABC/ds, DPABC, DPGABC, CABC, DPCABC, ARABC, OCABC, and HFPSO. The results in Tables 12 and 13 indicate the large improvement obtained when our proposed idea is implemented in the ABC algorithm: ABC/ds performs well on all types of problems in the CEC2015 benchmark. Nonetheless, Table 12 shows that none of our proposed models achieves the best results on F3, F4, and F5, so we need to discuss why this failure occurs.

In the CEC2015 benchmark, F3-F9 are non-separable multimodal problems. Among them, only F3, F4, and F5 have additional special characteristics: F3 is continuous but differentiable only on a set of points; F4 has a huge number of local optima, and its second-best local optimum is far from the global optimum; and F5 is continuous everywhere yet differentiable nowhere. These characteristics can affect the performance of our idea negatively, as follows:
- Our idea depends on the indicator of the fertile area, whose value increases when there is a slope. According to our suggestion, the fitness of the solutions improves more in areas that are more inclined than others. As a result, the indicators of the solutions in these areas become larger than the others, and these areas are exploited first. This makes our model slower than the original models and sometimes leads to worse results; for this reason, our model performs poorly on F3 and F5.
- Whenever there are many local optima, there are many fertile areas. Our model therefore needs to exploit many areas, which slows down finding the global optimum; for this reason, our model performs poorly on F4.

Proposed models for solving engineering design problems
Three engineering design problems are used to assess how effective the proposed algorithms are at solving practical optimization problems. These problems are commonly used in the literature, and their mathematical formulations and descriptions are readily available.

Pressure vessel design problem
This is a common problem in the literature (Van den Bergh and Engelbrecht 2004) that aims to minimize the overall cost of the material, forming, and welding of a cylindrical vessel, as shown in Fig. 6. The problem is mathematically formulated by Eq. (13). Four design parameters need to be optimized: two are continuous (L and R), and two are integer multiples of 0.0625 (T_s and T_h). The experiment settings and the results of the other optimization algorithms are obtained directly from the related papers (Gupta and Deep 2019; Le-Duc et al. 2020). To make the comparison fair, the same stopping criterion and number of runs are used as in Auger and Hansen (2005): 100,000 evaluations and 100 independent runs. The results presented in Table 14 confirm the efficiency of the proposed modified versions of ABC against their original versions in solving the pressure vessel design problem.
Table 10 Comparison results of the proposed algorithms against the other versions of ABC for 10-dimensional CEC2015 benchmark problems in terms of average error value
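For reference, the pressure vessel problem can be sketched as below, assuming Eq. (13) matches the standard formulation widely used in the literature, with x = [T_s, T_h, R, L] (function names are illustrative, not from the paper):

```python
import math

def pressure_vessel_cost(x):
    """Total cost of material, forming and welding for the cylindrical vessel.

    x = [Ts, Th, R, L]; Ts and Th are integer multiples of 0.0625 (plate thicknesses).
    Standard formulation assumed; the paper's Eq. (13) may differ in notation.
    """
    ts, th, r, l = x
    return (0.6224 * ts * r * l
            + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l
            + 19.84 * ts ** 2 * r)

def pressure_vessel_constraints(x):
    """Inequality constraints g_i(x) <= 0 for a feasible design."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,                    # minimum shell thickness
        -th + 0.00954 * r,                   # minimum head thickness
        -math.pi * r ** 2 * l
            - (4.0 / 3.0) * math.pi * r ** 3
            + 1_296_000,                     # minimum enclosed volume
        l - 240.0,                           # maximum length
    ]
```

Evaluating the well-known near-optimal design (0.8125, 0.4375, 42.098446, 176.636596) yields a cost of roughly 6060, which is the ballpark reported for this problem in the literature.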

Tension and compression spring design problem
This problem aims to minimize the weight of a tension/compression spring, subject to constraints on shear stress, deflection, surge frequency, and outer diameter, as illustrated in Fig. 7 (Kannan and Kramer 1994). The problem is mathematically formulated by Eq. (14). The design parameters to be optimized are the wire diameter (x_1), the mean coil diameter (x_2), and the number of active coils (x_3). For example, one of its constraints is g_2(x) = (4x_2^2 − x_1 x_2) / (12566(x_2 x_1^3 − x_1^4)) + 1/(5108 x_1^2) − 1 ≤ 0. Table 15 presents the experiment settings and the results of the other optimization algorithms, obtained directly from the related papers (Auger and Hansen 2005; Aydilek 2018; Sadollah et al. 2013; Gupta and Deep 2019; Le-Duc et al. 2020; Lu et al. 2018). The results shown in Table 15 are obtained from 100 independent runs and indicate that the proposed modified versions are efficient and competitive in solving the tension and compression spring design problem.
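Assuming Eq. (14) matches the standard spring design formulation from Kannan and Kramer, the objective and constraints can be sketched as follows (function names are ours):

```python
def spring_weight(x):
    """Spring weight: f(x) = (x3 + 2) * x2 * x1^2.

    x1 = wire diameter, x2 = mean coil diameter, x3 = number of active coils.
    Standard formulation assumed; the paper's Eq. (14) may differ in notation.
    """
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    """Inequality constraints g_i(x) <= 0 for a feasible design."""
    x1, x2, x3 = x
    return [
        1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4),                      # deflection
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
            + 1.0 / (5108.0 * x1 ** 2) - 1.0,                          # shear stress
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),                            # surge frequency
        (x1 + x2) / 1.5 - 1.0,                                         # outer diameter
    ]
```

At the commonly reported near-optimal point (0.051749, 0.358179, 11.203763), the weight is about 0.012665, consistent with values reported across the related papers.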

Frequency-modulated (FM) problem
In some contemporary music systems, FM sound wave synthesis plays a significant role. The FM synthesizer optimization problem is to produce a sound similar to a target sound. It is a highly complex, multimodal, six-dimensional problem with a minimum value of f(x) = 0 (Gupta and Deep 2019). The problem is mathematically formulated by Eq. (15); the decision vector to be optimized is [a_1, a_2, a_3, w_1, w_2, w_3]:

min f(x) = Σ_{t=0}^{100} (y(t) − y_0(t))^2
y(t) = a_1 sin(w_1 tθ + a_2 sin(w_2 tθ + a_3 sin(w_3 tθ)))
y_0(t) = 1.0 sin(5.0 tθ − 1.5 sin(4.8 tθ + 2.0 sin(4.9 tθ)))

where θ = 2π/100 and the parameter range is [−6.4, 6.35] (Liang et al. 2006). The results shown in Table 16 are obtained from 30 independent runs with a maximum of 30,000 evaluations. They indicate that the proposed modified versions can also solve the FM problem. Although the proposed modified versions did not find the optimal solution to this problem, they found better results than the original versions, except for TLABC. However, the best mean result on the FM problem among the four proposed versions is achieved by TLABC/ds.
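The FM objective above can be sketched directly in code. We assume the standard FM parameter-estimation formulation; the target wave's middle coefficient enters as a_2 = −1.5 because of the minus sign in y_0(t), and function names are illustrative:

```python
import math

THETA = 2.0 * math.pi / 100.0
# Target parameters (a1, a2, a3, w1, w2, w3) of y0(t); a2 = -1.5 encodes the
# minus sign in y0(t) under the nested-sine form of y(t). Assumed standard values.
TARGET = (1.0, -1.5, 2.0, 5.0, 4.8, 4.9)

def fm_wave(t, a1, a2, a3, w1, w2, w3):
    """Nested frequency-modulated waveform y(t)."""
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

def fm_error(x):
    """Sum of squared differences between candidate and target waves, t = 0..100."""
    a1, a2, a3, w1, w2, w3 = x
    return sum((fm_wave(t, a1, a2, a3, w1, w2, w3) - fm_wave(t, *TARGET)) ** 2
               for t in range(101))
```

By construction, the error is exactly 0 at the target parameter vector, matching the stated minimum f(x) = 0, and any other vector in [−6.4, 6.35]^6 yields a positive error.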

Discussion and conclusion
This paper suggests a modification that can be applied to ABC versions to enhance their performance. To assess the proposed modified versions, some well-known practical engineering design problems are used in addition to the CEC2014 and CEC2015 benchmark functions. In the CEC2014 and CEC2015 experiments, problems with 10, 30, and 50 variables are used. The proposed modified versions are compared to the original version of ABC, some common versions of ABC, and some state-of-the-art meta-heuristic algorithms. Overall, the modification enhanced the performance of the original versions. Moreover, based on the results of the CEC2014 benchmark experiments, GABC/ds provides highly competitive performance compared with ABC, GABC, DFSABC, TLABC, ABC/ds, DFSABC/ds, TLABC/ds, GSA, CS, LX-BBO, B-BBO, SOS, GWO, and RW-GWO. Furthermore, based on the results of the CEC2015 benchmark experiments, the other proposed version, ABC/ds, provides more competitive performance than ABC, GABC, DFSABC, TLABC, GABC/ds, DFSABC/ds, and TLABC/ds. We can summarize our results according to the type of optimization problem as follows. First, the proposed modified versions of the ABC algorithm have superior exploitation ability on the unimodal functions. Second, the results on the multimodal functions verified the efficient exploration ability of the proposed modified versions. Third, the results on the hybrid and composite functions were strong evidence of their superior ability to avoid local optima. Fourth, the convergence analysis of these versions against their original versions verified an improvement in convergence. Finally, the experiments on the real-life problems showed that the proposed versions are also effective in practice. For future studies, this proposed modification can be applied to other meta-heuristics to improve their efficiency. Multi-objective and binary versions of the proposed algorithm can also be investigated.
Furthermore, we will integrate the proposed model with machine learning techniques such as artificial neural networks and support vector machines to optimize their parameters.