An Improved Biogeography-based Optimization with Hybrid Migration and Feedback Differential Evolution and its Performance Analysis

Abstract: Biogeography-based optimization (BBO) is not well suited to high-dimensional or multi-modal problems. To improve its optimization efficiency, this study proposes a novel BBO variant named ZGBBO. For the selection operator, an example learning method is designed to ensure that inferior solutions cannot destroy superior ones. For the migration operator, a convex migration is proposed to increase convergence speed, and the probability of finding the optimal solution is increased by using opposition-based learning to generate opposite individuals. The mutation operator of BBO is removed to eliminate the generation of poor solutions, and a differential evolution with a feedback mechanism is merged in to improve convergence accuracy on multi-modal and irregular problems. Meanwhile, greedy selection keeps the population always moving toward better regions. The global convergence of ZGBBO is then proved with a Markov model and a sequence convergence model. In quantitative evaluations against three self-variants, seven improved BBO variants and six state-of-the-art evolutionary algorithms on 24 benchmark functions, every improved strategy proves indispensable and the overall performance of ZGBBO is superior. In addition, a complexity analysis comparing ZGBBO with BBO shows that ZGBBO requires less computation and has lower complexity.

(Figure: islands with low HSI tend to immigrate, i.e., poor individuals tend to accept variables from good ones; islands with high HSI tend to emigrate, i.e., good individuals tend to share variables with poor ones.)

BBO mainly improves the quality of habitats through population evolution, using migration between species to find the habitat most suitable for the survival of the species, which corresponds to the optimal solution of the optimization problem. This is accomplished by the following three steps.

Initialization
The initialization steps of BBO include parameter initialization and population initialization. Parameter initialization sets the population size NP, the maximum number of species each habitat can accommodate Smax, the maximum immigration rate I and the maximum emigration rate E. Only when Smax, I and E are determined can the immigration rate λi, emigration rate μi, species number Si and species probability Pi of each habitat be calculated. In addition, to calculate the mutation rate mi of each habitat, the maximum mutation rate mmax must be set.
When initializing the population, NP habitats are randomly generated as the original population: each habitat randomly generates a number within the value range of each variable. Assuming that each habitat has n variables, each habitat is initialized according to Eq. (1), in which a randomly generated number in the range (0, 1) ensures that the variable does not exceed its bounds. Once the population is initialized, the HSI of each habitat, i.e. its evaluation function value, can be calculated to measure its quality.
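As an illustration, the initialization step can be sketched in Python as follows; the sphere function used here as the HSI evaluation is a hypothetical stand-in, not one of this paper's benchmarks, and the function names are ours:

```python
import random

def init_population(NP, n, lb, ub):
    """Sketch of Eq. (1): each of the NP habitats draws every one of its
    n variables uniformly inside [lb[j], ub[j]]; the random number in
    (0, 1) keeps each variable inside its bounds."""
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(n)]
            for _ in range(NP)]

def hsi(habitat):
    # Hypothetical evaluation function (sphere) standing in for the HSI.
    return sum(v * v for v in habitat)

pop = init_population(NP=5, n=2, lb=[-10, -10], ub=[10, 10])
fitness = [hsi(h) for h in pop]
```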

Migration
The BBO algorithm designs migration operators between habitats and uses the migration rate of each individual to share information between individuals according to probability (Simon 2008). There is a positive correlation between habitat HSI and species number S: the higher the HSI, the greater the S, and vice versa. After the fitness values of the NP habitats are calculated, the habitats xi are sorted in descending order, that is, rearranged from high HSI to low.
According to the number of species in a habitat, its immigration rate and emigration rate can be calculated. Different migration rate models have an important impact on the optimization performance of the algorithm. Ma (2010) proved through experiments that complex migration rate models perform better than simple ones. Therefore, when the BBO algorithm and the other improved BBO algorithms in this paper are implemented, the cosine migration rate model proposed by Ma is adopted, where I is the maximum immigration rate, E is the maximum emigration rate, and Smax is the maximum number of species.
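As a sketch, the cosine migration-rate model can be written as follows; the exact formula is the form commonly stated in the BBO literature for Ma's cosine model, not copied from this paper's (lost) equation image:

```python
import math

def cosine_rates(S, Smax, I=1.0, E=1.0):
    """Cosine migration-rate model (commonly attributed to Ma 2010):
    immigration falls and emigration rises as the species count S grows."""
    lam = (I / 2.0) * (math.cos(math.pi * S / Smax) + 1.0)
    mu = (E / 2.0) * (-math.cos(math.pi * S / Smax) + 1.0)
    return lam, mu

# An empty habitat only immigrates; a full one only emigrates.
lam0, mu0 = cosine_rates(0, Smax=50)
lamF, muF = cosine_rates(50, Smax=50)
```

Note how, unlike the linear model, the rates change slowly near the extremes and quickly in the middle of the species range.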
When performing the migration operation, BBO first determines the feature k of habitat xi to be migrated according to the immigration rate λi. In each generation, the probability of substitution for each independent variable of the i-th individual is λi, that is, λi equals the probability that each independent variable of xi is substituted.
A random number in (0, 1) is generated for each dimension. If the random number is smaller than the immigration rate, that dimension of habitat xi needs to be replaced, and xi is called the habitat to be immigrated. Once the variable to be replaced has been chosen, the emigrating individual xj is selected with probability proportional to the emigration rates {μj}. According to Eq. (4), the original BBO uses the roulette-wheel method to determine the emigrating habitat xj, and finally replaces the k-th dimension variable of xi with the k-th dimension variable of xj. Algorithm 1 gives the calculation flow of the BBO migration operator.
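A minimal Python sketch of Algorithm 1, the original migration operator with roulette-wheel emigration selection (variable and function names are illustrative):

```python
import random

def bbo_migration(pop, lam, mu):
    """Sketch of the original BBO migration operator (Algorithm 1).

    For habitat i, each variable k is replaced with probability lam[i];
    the emigrating habitat j is then drawn by roulette wheel with
    probability proportional to mu[j], and pop[j][k] overwrites pop[i][k]."""
    NP, n = len(pop), len(pop[0])
    for i in range(NP):
        for k in range(n):
            if random.random() < lam[i]:
                # Roulette-wheel selection over the emigration rates.
                r = random.random() * sum(mu)
                acc, j = 0.0, 0
                for idx, m in enumerate(mu):
                    acc += m
                    if r <= acc:
                        j = idx
                        break
                pop[i][k] = pop[j][k]
    return pop

pop = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
lam = [0.0, 0.5, 1.0]   # best habitat never immigrates in this toy setup
mu = [1.0, 0.5, 0.0]
out = bbo_migration([row[:] for row in pop], lam, mu)
```

Note that nothing in the roulette step stops a poorly ranked habitat from being chosen as the emigration source, which is exactly the defect analyzed below.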
The species probability of a habitat is inversely proportional to its mutation rate. Habitats with high species probability have better environments and a lower mutation probability. Habitats with low species probability are poorer, less suitable for species and more likely to undergo environmental mutation. In the original BBO, Eq. (6) is used to calculate the habitat mutation rate mi.
Here mmax is the maximum mutation rate, given as an initial value. Like the immigration and emigration rates, the mutation rate mi of habitat xi equals the probability of mutation for each variable in xi. During mutation, a random number in (0, 1) is generated for each habitat; if the random number is less than the mutation rate mi, a number within the value range is randomly generated for each dimension to replace the original variable value. Algorithm 2 gives the calculation flow of the BBO mutation operator. The three main operators of BBO (selection, migration and mutation) have defects: the convergence speed of the algorithm is slow in the late stage, and it easily falls into local optima, which degrades the optimization performance and running speed of the algorithm. The reasons for these shortcomings are analyzed in depth below.
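A minimal sketch of Eq. (6) and Algorithm 2, assuming the standard form mi = mmax(1 − Pi/Pmax) from Simon's original formulation (parameter and function names are ours):

```python
import random

def mutation_rate(P, Pmax, mmax=0.01):
    """Eq. (6) sketch: habitats with high species probability P mutate less.
    The linear form mmax * (1 - P/Pmax) is an assumption based on the
    standard BBO formulation."""
    return mmax * (1.0 - P / Pmax)

def bbo_mutation(habitat, m, lb, ub):
    """Sketch of Algorithm 2: each variable is reset uniformly at random
    inside its bounds with probability m."""
    return [random.uniform(lb[k], ub[k]) if random.random() < m else v
            for k, v in enumerate(habitat)]
```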
(1) As can be seen from Eq. (4), the original BBO algorithm uses roulette-wheel selection to choose habitats for emigration. Although the probability of being selected is proportional to a habitat's emigration rate, this cannot prevent inferior individuals from immigrating into superior ones. For each habitat selected for immigration, the corresponding emigrating habitat is chosen from the remaining NP-1 habitats. If habitat xi is immigrated, the emigrating habitat may well be xj with j > i, which means a habitat with lower HSI migrates into a habitat with higher HSI: variable values of poorer individuals replace those of better individuals, reducing the fitness of better individuals and causing a "negative action". Therefore, random roulette-wheel selection tends to bring in poor individuals, reduce population quality and slow down the convergence rate.
(2) The original BBO directly copies variables between candidate solutions: the migration operator replaces the corresponding feature of the immigrating habitat with the feature of the emigrating habitat. However, an individual that performs well on the problem does not necessarily perform well on the variable value of every dimension. The original migration operator can therefore make a dimension of a habitat worse after replacement, reducing its suitability. Moreover, this single migration method is not suitable for multi-modal problems and easily traps the algorithm in local optima, resulting in search stagnation.
(3) The original BBO uses random mutation to improve population diversity and help the algorithm escape local optima. For individuals with low fitness, random mutation tends to produce better individuals and improve population quality. For individuals with high fitness, however, random mutation easily destroys them, producing worse individuals and lowering population diversity. This kind of mutation is blind and cannot guarantee movement toward the optimal solution. In addition, the mutation operator must calculate the species probability and mutation rate of all habitats, and Eq. (5) shows that computing the species probabilities is complicated and resource-intensive. Therefore, the random mutation strategy not only tends to destroy highly fit individuals but also increases the running time and reduces the convergence speed of the algorithm.
(4) In biogeography-based optimization, the information transfer mechanism between individuals gives the algorithm a good ability to exploit population information, efficiently using existing habitat information for optimization. However, BBO's search ability is weak: it relies only on substituting a few variables to search the problem space, so there is little chance of generating new solutions. Because of this weakness, the population diversity of BBO decreases rapidly in late iterations, and the convergence rate slows in late evolution. Improving BBO's capacity to explore new solutions has also been a research focus.

Review of BBO's work
Since BBO was proposed, it has been widely favored by scholars around the world. In the first year after the algorithm was proposed, 38 articles about BBO were published, 81 in the second year, and 145 in the third year. In recent years, articles on BBO have continued to appear in a steady stream. To improve the optimization performance of the BBO algorithm, many scholars have conducted extensive research; we briefly review the work on BBO in the past five years. Loon et al. (2016) proposed NBBO to improve the search ability of BBO. Under the NBBO framework, each iteration considers two or more sub-iterations to perform the search task. In each sub-iteration, a sub-population is selected from the current population according to a triangular probability distribution, and the migration habitat is selected from this sub-population. The algorithm also introduces a two-stage migration operator into BBO, enabling it to search for the optimal solution quickly. The overall framework of NBBO balances its exploitation and exploration abilities well, so it does not easily fall into local optima. Moreover, the mutation operator of the original algorithm is deleted, which compensates for the computation introduced by the new strategies. To reduce computational complexity, Zhang et al. (2019a) also deleted the mutation operator in BBO, combined differential mutation and a sharing operator into BBO's migration operator, and alternated the improved migration operator with one-dimensional and full-dimensional search strategies. Because many operators are integrated into the new algorithm to improve search performance, it is named efficient and merged biogeography-based optimization (EMBBO). In the same year, to alleviate the rotational variance of BBO and overcome its premature convergence, TDBBO was designed by Zhao et al. (2019).
TDBBO adopts a linear migration model and a sinusoidal migration model in the early and late evolution stages respectively, and uses a differential mutation operator to alleviate rotational variance. TDBBO is much better than the original BBO in solution quality, convergence speed and stability. In 2020, An et al. (2020) proposed an improved non-dominated sorting biogeography-based optimization (INSBBO) to solve the multi-objective flexible job-shop scheduling problem. To relieve the individual selection pressure of the Pareto dominance principle, INSBBO proposed a V-dominance method based on the volume enclosed by the normalized objective function values to enhance the convergence of the algorithm. To avoid losing good solutions during evolution, the authors also constructed an elite storage strategy (ESS) to preserve them, and improved the migration and mutation operators of the original BBO, further improving the optimization ability of the algorithm. Meanwhile, BBO was also designed as a binary algorithm for feature selection (FS) (Albashish et al. 2020). Support vector machine recursive feature elimination (SVM-RFE) is embedded into the mutation operator of BBO to improve the quality of the obtained solutions, striking an adequate balance between the exploitation and exploration of the original BBO. The new method, BBO-SVM-RFE, outperforms BBO and other existing wrapper and filter methods in terms of accuracy and number of selected features. In 2021, Ghatte (2021) studied the firefly algorithm (FA) and BBO, and fused the two algorithms to obtain a hybrid algorithm (FABBO). One defect of FA is that all individuals converge to a better solution at the end of iteration, so the algorithm converges to a local optimum, which makes it unsuitable for global optimization problems.
Considering that the global search capability of FA is weak and the local search capability of BBO is weaker than FA's, integrating FA and BBO can overcome these defects. FABBO is essentially a two-stage algorithm: in the first stage, FA performs preliminary optimization to find some local optimal solutions; in the second stage, BBO performs a more refined search around the solutions obtained in the first stage.
Recently, Sang et al. (2021) proposed an improved BBO algorithm based on a hierarchical tissue-like P system with triggering ablation rules, named DCGBBO, in view of BBO's shortcomings in global optimization, convergence speed and algorithm complexity. They first proposed a dynamic crossover migration operator to improve global search capability and increase species diversity. Then, a dynamic Gaussian mutation operator is introduced to accelerate convergence and improve the local search ability of the algorithm. Finally, the hierarchical tissue-like P system is combined with BBO to implement the migration and mutation of habitats using evolution rules and communication rules, which substantially reduces computational complexity.
Research on BBO has continued ever since the algorithm was put forward, and it has always been a hot spot for scholars worldwide. In just 13 years, BBO has developed a body of theory, including Markov models, dynamic system models and statistical mechanics models (Ma et al. 2017). Although many scholars' improvements have raised the performance of the original algorithm to a certain extent, with the rapid development of modern science and technology, practical problems place ever higher requirements on algorithm performance, so the overall performance of BBO still needs to be improved.

ZGBBO algorithm
In section 2.2, we analyzed the shortcomings of the standard biogeography-based optimization in detail. To improve performance and make BBO more advanced and competitive, this paper proposes, based on extensive experiments and study, four improvement strategies for the four defects of BBO identified in section 2.2, which effectively remedy the shortcomings of the original algorithm. The new algorithm is highly competitive, with fast convergence speed and high convergence precision when searching for the optimal solution. We name it ZGBBO after the surnames of the first two authors of this paper. The principle and calculation process of ZGBBO are described in detail in this section.

Selection operator based on example learning method
In view of the first deficiency of the original BBO identified in section 2.2, we use an example learning method to replace the original roulette-wheel selection. Random roulette selection tends to bring in poor individuals, which reduces population quality and slows convergence. The ranking of each individual in the population is determined by its HSI value: the higher the HSI, the higher the habitat's rank, and vice versa. In the improvement strategy, we therefore set examples based on the ranking of each habitat and propose the example learning method. For habitat xi, ranked i among all habitats, the habitats with higher HSI can only be those ranked 1, 2, …, i-1. These habitats rank higher than xi, so they become the role models of xi, and xi becomes the learner. To explain the principle of the example learning method more intuitively, we elaborate on it with Fig. 3.
Assume that all individuals in the population have been ranked, with x1 the best individual and xNP the worst. Fig. 3(a) shows the random selection mode: any two habitats can migrate with each other, so x1 can be replaced by any lower-quality solution, while xNP can migrate its variable values to any better solution. The random selection operator therefore lets bad solutions destroy better ones, reducing the population diversity of the algorithm. In Fig. 3, unifrnd(*) is a random number between 1 and i-1, while round(*) is a rounding function that computes the rank of the example individual. All the BBO work reviewed in section 2.3 uses random selection operators, so none of those algorithms avoids the bad influence of poor solutions on better ones. The comparison algorithm in this paper, PRBBO (Feng et al. 2017), takes this problem into account and adopts a selection method based on random topological rings, so that a habitat can only migrate between adjacent individuals. However, this only reduces the probability of an inferior solution replacing a superior one; it does not eliminate the phenomenon.
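The example-learning selection described by unifrnd(*) and round(*) above can be sketched as follows (the function name select_example is ours):

```python
import random

def select_example(i):
    """Example-learning selection sketch (Fig. 3 / Eq. (7)): for the habitat
    ranked i (1-based, rank 1 = best), draw an example uniformly among the
    i-1 better-ranked habitats, so a worse solution can never overwrite a
    better one."""
    assert i >= 2, "the best habitat has no example to learn from"
    return round(random.uniform(1, i - 1))

# Every selected example for the 5th-ranked habitat ranks strictly better.
ranks = [select_example(5) for _ in range(200)]
```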
Moreover, the ring topology reduces the algorithm's ability to exploit population information, which slows convergence.
Using the example learning method to select the emigrating habitat ensures that the habitat migrating into xi has higher fitness than xi, so a better solution can never be replaced by a poorer one, avoiding the destruction of good solutions by bad ones. In addition, the example learning method does not need to calculate the emigration rate of each habitat, which removes one calculation step and reduces the computational load.

Hybrid migration strategy
Section 2.2 points out that the second disadvantage of the biogeography-based optimization is the use of direct migration operators.
This single migration method is not suitable for multi-modal problems, and it easily traps the algorithm in local optima, resulting in search stagnation. To address this, we integrate a convex migration mechanism and an opposition-based learning strategy into BBO to obtain a hybrid migration operator. Firstly, the convex migration mechanism replaces the original migration operator. In convex migration, the variable of the immigrating habitat is no longer a copy of the emigrating habitat's variable, but is replaced by a convex combination of the variables of two habitats: the emigrating habitat, selected by the example learning method of section 3.1, and the optimal individual of the current population as the other parent. The specific calculation is given in Eq. (8), where xbest is the current optimal individual, t is the current iteration number, and xj is the example selected through Eq. (7). When the combination coefficient equals 0, convex migration degenerates into the original direct migration. Secondly, the opposition-based learning strategy is integrated into the original BBO. The convex migration mechanism can only move the selected individuals toward the optimal solution; it cannot help the remaining individuals reach better positions, so an updating formula for the positions of the remaining individuals is needed. When the random number generated for a variable of habitat xi is greater than the immigration rate λi, that dimension still needs to be replaced: the opposite individual of habitat xi is generated, and the variable of xi is replaced by the corresponding variable of the opposite individual. Ergezer et al. (2014) proved that a quasi-reflected opposite individual has a high probability of being close to the optimal solution of the problem. The opposite individual of xi is generated by Eq. (9): the value of each dimension of the opposite individual is a random number between the corresponding variable value of xi and the midpoint of that variable's upper and lower bounds.
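The convex migration step can be sketched as follows; the uniformly random convex coefficient alpha is an assumption for illustration, since Eq. (8) may use a different coefficient schedule:

```python
import random

def convex_migrate(x_i, x_best, x_j, k):
    """Eq. (8) sketch: variable k of the immigrating habitat becomes a
    convex combination of the current best individual and the example x_j,
    instead of a plain copy. The coefficient schedule is an assumption."""
    alpha = random.random()               # convex coefficient in (0, 1)
    x_i = x_i[:]                          # leave the caller's habitat intact
    x_i[k] = alpha * x_best[k] + (1.0 - alpha) * x_j[k]
    return x_i

x = convex_migrate([0.0, 0.0], [1.0, 1.0], [3.0, 3.0], k=0)
```

Because the result is a convex combination, the new variable always lies between the example's value and the best individual's value, pulling the learner toward the promising region rather than overwriting it outright.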
However, the opposition-based learning strategy consumes considerable computing resources: each generated opposite individual requires an additional fitness evaluation. Opposite individuals should therefore not be generated at random, but only when there is reason to believe the extra computation will lead to better results. Although Ergezer et al. (2014) use quasi-reflection to generate opposite individuals, they apply opposition-based learning to all individuals in the population; in fact, the top individuals are not worth opposing. EMBBO (Zhang et al. 2019a) generates an opposite individual for only one individual in the population, which greatly reduces the complexity of the algorithm, but that individual is chosen at random rather than being the worst, so it may generate opposites of better solutions and thus reduce population diversity. Therefore, when the opposition-based learning strategy is applied to BBO in this paper, only the NP/2 less fit individuals of the population generate opposite individuals. A well-adapted individual is unlikely to produce an opposite individual stronger than itself; in other words, a solution close to the optimum is not worth opposing. Randomly generating opposite individuals wastes function evaluations and can reduce population quality, so only the opposite solutions of poor individuals are generated.
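A sketch of the quasi-reflected opposition step of Eq. (9), applied only to the worse half of a best-first sorted population (function names are ours):

```python
import random

def quasi_reflected(x, lb, ub):
    """Eq. (9) sketch: each variable of the opposite individual is drawn
    uniformly between the domain midpoint (lb+ub)/2 and the original value."""
    return [random.uniform((l + u) / 2.0, v) if v >= (l + u) / 2.0
            else random.uniform(v, (l + u) / 2.0)
            for v, l, u in zip(x, lb, ub)]

def opposite_worse_half(sorted_pop, lb, ub):
    """Only the worse NP/2 habitats (population sorted best-first) get
    opposite individuals, saving NP/2 fitness evaluations per generation."""
    half = len(sorted_pop) // 2
    return [quasi_reflected(x, lb, ub) for x in sorted_pop[half:]]

# Toy 1-D population sorted best-first; only the last two are opposed.
opp = opposite_worse_half([[8.0], [4.0], [-6.0], [-9.0]],
                          lb=[-10.0], ub=[10.0])
```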
Combining the example learning method of section 3.1 with the convex migration mechanism and opposition-based learning strategy of section 3.2, the pseudo-code of the migration operator of ZGBBO is given in Algorithm 4.

Feedback differential evolution mechanism
The improvement strategies in sections 3.1 and 3.2 mainly improve the search efficiency of the algorithm, but cannot effectively help it escape local optima. A mutation operator can add new individuals to the population, making the algorithm less prone to getting stuck in a local optimum; however, as section 2.2 shows, the random mutation of the original BBO easily generates low-quality individuals and reduces population diversity. Random mutation cannot effectively help the algorithm escape local optima, and the calculation of species probabilities consumes CPU resources. Therefore, ZGBBO deletes the random mutation operator, further reducing computational complexity and eliminating the bad solutions caused by random mutation. Meanwhile, in view of BBO's weak search ability pointed out in section 2.2, this paper integrates the differential evolution (DE) algorithm into BBO, using DE's search ability to balance BBO's capacities for exploiting existing information and developing new solutions. We design a differential evolution with a feedback mechanism in ZGBBO to replace the original mutation operator, so that the population can intelligently select the mutation mode according to the change of the current optimal value.
In the differential evolution algorithm, each generation adds new genetic information to individuals through addition, subtraction and scaling operations between multiple distinct vectors to generate mutant individuals and realize population replacement. The mutation operation is a key step in helping the algorithm escape local optima and search for the optimal value. The original differential evolution algorithm has a variety of mutation mechanisms; through comparative analysis, the characteristics and suitable situations of the various mutation formulas are listed in Table 2.
The vector mutation mode of the traditional differential evolution algorithm is always set in advance and will not be affected by the change of the overall fitness value of the population. In other words, the whole differential evolution system is a "static" evolution process.
However, a single mutation method cannot suit all objective functions and optimization problems, and a single mutation mechanism leads to low adaptability and a narrow scope of application. In addition, different mutation modes suit different search states, and a statically configured population cannot adjust its search direction in time according to the distribution of individuals in the search space during evolution. For example, when an individual falls into a local optimum, it is likely to drag the whole population into that local optimum, decreasing population diversity; random individuals are then needed to help the population escape, so the mutation modes DE/rand/1 and DE/rand/2 are suitable. For a large search range, the initial population is easily spread across all regions of the space, the search directions differ, and the algorithm easily diverges; the population then needs to move toward the optimal individual to improve search efficiency and convergence speed, so DE/best/1 and DE/best/2 are the optimal choices. No single mutation method can adjust the population's search direction in time during evolution, which reduces the flexibility and convergence accuracy of the algorithm. The improved algorithms reviewed in section 2.3, such as EMBBO (Zhang et al. 2019a), TDBBO (Zhao et al. 2019) and DCGBBO (Sang et al. 2021), all add mutation operators to improve the search ability of the original BBO. However, these algorithms design dynamic schemes with at most two mutation modes, so they are not universal and cannot search effectively on multi-modal problems. These algorithms are in fact static, that is, the population cannot adaptively adjust its search direction.
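The four classic DE mutation formulas discussed above can be sketched as follows (F is the usual scale factor; in a full implementation the sampled indices would also exclude the target individual):

```python
import random

def de_mutant(pop, best, F=0.5, mode="rand/1"):
    """Classic DE mutation formulas (sketch): rand/* variants add diversity,
    best/* variants pull the population toward the current leader."""
    r = random.sample(range(len(pop)), 5)
    a, b, c, d, e = (pop[i] for i in r)
    if mode == "rand/1":   # v = a + F(b - c)
        return [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
    if mode == "rand/2":   # v = a + F(b - c) + F(d - e)
        return [ai + F * (bi - ci) + F * (di - ei)
                for ai, bi, ci, di, ei in zip(a, b, c, d, e)]
    if mode == "best/1":   # v = best + F(b - c)
        return [gi + F * (bi - ci) for gi, bi, ci in zip(best, b, c)]
    if mode == "best/2":   # v = best + F(b - c) + F(d - e)
        return [gi + F * (bi - ci) + F * (di - ei)
                for gi, bi, ci, di, ei in zip(best, b, c, d, e)]
    raise ValueError(mode)
```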
To let the population intelligently select the mutation formula according to the current evolution state, we set up a loop with a feedback mechanism, forming a closed-loop evolution process in which the search direction is adjusted according to the current population standard deviation. Specifically, the standard deviation of the habitat suitability values is taken as feedback information, and the mutation mode is dynamically selected in each iteration according to the population standard deviation, forming an evolution process with a feedback mechanism. The standard deviation and relative standard deviation of the population are calculated by Eqs. (10) and (11), where the mean fitness of all individuals in the population is used, and the relative standard deviation serves as a temporary variable measuring the relative size of the population standard deviation. Based on Eqs. (10) and (11), a differential evolution process with a feedback mechanism is designed, and this calculation step replaces the mutation operator of the original BBO. Algorithm 5 describes the specific calculation process of the feedback differential evolution.
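A sketch of the feedback rule built from Eqs. (10) and (11); the threshold value and the exact pairing of mutation modes to the two regimes are our assumptions for illustration:

```python
import statistics

def feedback_mode(fitness, threshold=0.1):
    """Feedback rule sketch: a small relative standard deviation means the
    population has clustered (possibly at a local optimum), so the
    diversity-seeking DE/rand/1 mode is chosen; otherwise DE/best/1 pulls
    the dispersed population toward the leader. Threshold is illustrative."""
    mu = statistics.fmean(fitness)
    sigma = statistics.pstdev(fitness)              # Eq. (10): population std dev
    delta = sigma / abs(mu) if mu != 0 else sigma   # Eq. (11): relative std dev
    return "DE/rand/1" if delta < threshold else "DE/best/1"
```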
The standard deviation of the population can be used as feedback on the degree of aggregation or dispersion of the population, so the mutation mode of the algorithm can be dynamically adjusted in real time to ensure that the population always performs a global search in the direction of the optimal solution. At the same time, the powerful search ability of differential evolution enables BBO to make full use of existing population information while constantly developing new individuals in the search space. This feedback mechanism greatly improves the convergence speed and precision of the algorithm, and improves its comprehensive performance.

Greedy selection for the best individual
In order to ensure that the population always searches in a better direction, we do greedy selection of the optimal individuals in each generation in the algorithm, so that the optimal individuals in each generation of the population will not be worse than the previous generation. Algorithm 6 gives the specific operation of the greedy selection strategy for optimal individuals.
Algorithm 6 (greedy selection strategy for optimal individuals in ZGBBO) compares the fitness of the current generation's best individual with that of the previous generation and retains the better of the two. The reason for designing Algorithm 6 is that the optimal individual of the current population is used in both Eq. (8) and Algorithm 5.
Only if the best individual of each generation does not degenerate can we ensure that the population does not degenerate during evolution. If the optimal individual of the previous generation is better than that of the current generation, the population will search in the opposite direction and begin to reverse-evolve, which not only affects the convergence accuracy of the algorithm but also leads to blind search in the solution space. INSBBO (An et al. 2020) uses an elite storage strategy to preserve superior individuals, but it works by having the elite of the previous generation directly replace inferior individuals of the current generation, which may leave several identical individuals in the population and thus reduce its diversity. Nor does the elite storage strategy guarantee that the population searches in a better direction. Therefore, this paper uses greedy selection of the optimal individual instead of elite storage.
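The greedy retention of the best individual can be sketched as follows (minimization is assumed; the function name is ours):

```python
def greedy_keep_best(prev_best, prev_fit, cur_best, cur_fit):
    """Algorithm 6 sketch (minimization assumed): the generation's best
    individual is kept only if it does not degrade the previous best,
    so the population can never reverse-evolve."""
    if cur_fit <= prev_fit:
        return cur_best, cur_fit
    return prev_best, prev_fit

# The current generation regressed, so the previous best survives.
kept, kept_fit = greedy_keep_best([1.0], 0.5, [2.0], 0.9)
```

Unlike elite storage, this never injects duplicate individuals into the population; it only pins the single best solution found so far.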
ZGBBO removes the mutation operator of the original BBO, reducing computational cost, and replaces it with a differential evolution mechanism with a feedback loop. We improve the migration operator of the original algorithm, design a convex migration operator, and incorporate an opposition-based learning strategy. The example learning method selects excellent individuals for migration, which removes the calculation of habitat emigration rates and further reduces computational complexity. Algorithm 7 describes the pseudo-code of ZGBBO's calculation process in detail. In each iteration of the original BBO, the immigration and emigration rates of every habitat in the population must be recalculated; but these rates are based on rankings, i.e., they depend only on the ranking of the habitats. According to Eq. (3), once an individual's ranking is determined, its immigration and emigration rates are determined, so they need not be recomputed from scratch in every iteration. The main loop of Algorithm 7 performs Algorithm 4 to complete the migration, calculates the standard deviation and relative standard deviation of the population through Eqs. (10) and (11), performs Algorithm 5 to search with feedback differential evolution, calculates the fitness value HSI of each habitat in the current population, and performs Algorithm 6 to retain the optimal individual, repeating until the termination condition is met.

ZGBBO convergence proof
In this chapter, we use two methods to prove the convergence of the ZGBBO algorithm. In section 4.1, a Markov model is used to prove the convergence of ZGBBO. In section 4.2, a new proof method is proposed, establishing a sequence convergence model to prove that ZGBBO has global convergence.

Markov model
This section uses a Markov model to prove the convergence of the ZGBBO algorithm. The following explanations should be made: (1) the ZGBBO algorithm proposed in this paper is based on real-number coding and is proposed for continuous variables, so when proving global convergence, the search space of the algorithm is a continuous state space.
(2) The improved BBO is composed of selection, migration and feedback differential evolution, which is independent of the maximum number of iterations, and the population size of the algorithm is fixed. Therefore, it can be considered that the optimization process of ZGBBO satisfies a finite homogeneous Markov model.
Theorem 1. An n-order matrix $Q$ is reducible if it can be brought into the form of Eq. (12) by identical row and column permutations:

$$Q = \begin{pmatrix} C & 0 \\ R & T \end{pmatrix} \qquad (12)$$

where $C$ is a primitive matrix of order $m$ ($m < n$), while $R$ and $T$ are non-zero matrices of order $n-m$. If $Q$ is such a reducible stochastic matrix, then

$$Q^{\infty} = \lim_{t \to \infty} Q^{t} = \begin{pmatrix} C^{\infty} & 0 \\ R^{\infty} & 0 \end{pmatrix}$$

is a stable stochastic matrix, and the limit distribution is uniquely determined and independent of the initial distribution (Kingman 1981).
If $Q^{\infty}$ is a stable stochastic matrix, then $Q^{\infty}$ satisfies the conditions above. The population of ZGBBO is randomly divided into $w$ subsets, so when the iteration number is $t$, the population can be expressed as $X(t) = \{x_1(t), x_2(t), \dots, x_w(t)\}$. According to the above description, for each sub-population $x_i(t)$, $i = 1, 2, \dots, w$, the state transition matrix can be written in the block form of Eqs. (15) to (18), where $S_1$ and $S_2$ are non-zero matrices. The sum of the probabilities in each row of a Markov transition matrix is 1, so $p_{11} = 1$; that is, $p_{11}$ is a first-order primitive matrix. Therefore, the Markov state transition matrix $P$ meets the conditions of Theorem 1, so $P$ is a reducible stochastic matrix.
Therefore, the limit of $P$ can be obtained. We know that $p_{11} = 1$, so $p_{11}^{\infty} = p_{11} = 1$ and $S_2^{\infty} = O$. The sum of the probabilities in each row of the Markov state transition matrix is 1, so there must be $S_1^{\infty} = (1, 1, \dots, 1)^{T}$. According to Eq. (21), as the number of generations $t \to \infty$, this probability tends to 1. Therefore, regardless of the initial state, after sufficiently many iterations, each state $x_i(t)$ will converge to the global optimal solution with probability 1. That is, the following formula holds:

$$\lim_{t \to \infty} P\big(f(x_{best}^{t}) = f(x^{*})\big) = 1 \qquad (22)$$

where $f(x^{*})$ is the global optimal value, and $P(\cdot)$ is the probability that the optimal value of the t-th iteration of the algorithm converges to the global optimal value. According to Eq. (22), the ZGBBO algorithm proposed in this paper must converge to the global optimal value after sufficiently many iterations, so the algorithm has global convergence.
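The absorbing-state argument can be checked numerically. The toy 2-state transition matrix below is in the reducible form of Eq. (12) (state 1 = "population contains the global optimum", assumed absorbing; the 0.3/0.7 entries are illustrative); repeated multiplication drives the second column to zero and the first to one, matching $P^{\infty}$:

```python
def matmul2(A, B):
    """2x2 matrix product (pure Python, no dependencies)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Toy transition matrix in the reducible form of Eq. (12): state 1 is
# absorbing, state 2 reaches it with probability 0.3 per step.
P = [[1.0, 0.0],
     [0.3, 0.7]]

Pt = [row[:] for row in P]
for _ in range(200):            # approximate P^infinity by P^201
    Pt = matmul2(Pt, P)
```

After 200 multiplications `Pt` is numerically the stable matrix with both rows equal to (1, 0): every starting state ends in the absorbing optimal state with probability 1.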

Sequence convergence model
In this section, we prove the global convergence of ZGBBO with a new method. For a global optimization problem, assume that its optimal solution is $x^{*}$; then $f(x^{*})$ is the global optimal value. The optimal solution of the ZGBBO algorithm in the t-th iteration is $x_{best}^{t}$, and $f(x_{best}^{t})$ is the current optimal value. According to the sequence convergence theorem, the equivalent condition for the ZGBBO algorithm to find the global optimal value $f(x^{*})$ is that some $f(x_{best}^{t})$ lies in the $\varepsilon$-neighborhood of $f(x^{*})$, that is,

$$\left| f(x_{best}^{t}) - f(x^{*}) \right| < \varepsilon \qquad (23)$$

During the evolution of ZGBBO, each iteration of the population contains a best individual. The set formed by these individuals is

$$B = \{ x_{best}^{1}, x_{best}^{2}, \dots, x_{best}^{T} \} \qquad (24)$$

where $T$ represents the maximum number of iterations. Thus, the sequence $A = \{ f(x_{best}^{t}) \}_{t=1}^{T}$ can be constructed according to Eq. (24). As can be seen from section 3.4, since greedy selection is adopted for the optimal individual of each generation in ZGBBO, the optimal value of each generation must be better than or equal to that of the previous generation. Therefore, the following formula must be true:

$$f(x_{best}^{t+1}) \le f(x_{best}^{t}) \qquad (25)$$

As evolution proceeds, the population gradually moves closer to the region where the optimal solution exists; that is, the probability that the optimal individual of the population enters the $\varepsilon$-neighborhood of the global optimal solution increases. Eq. (26) expresses the probability that the current optimal value $f(x_{best}^{t})$ converges to the global optimal value:

$$p_t = P\big( \left| f(x_{best}^{t}) - f(x^{*}) \right| < \varepsilon \big) \qquad (26)$$

According to Eq. (25), the following relationship must exist:

$$p_{t+1} \ge p_t \qquad (27)$$

Therefore, after the t-th iteration, the probability that the current optimal value has not converged to the global optimal value is

$$\prod_{i=1}^{t} (1 - p_i) \qquad (28)$$

From Eq. (27) it can be seen that $p_t$ is monotone non-decreasing, so the following formula is true:

$$\prod_{i=1}^{t} (1 - p_i) \le (1 - p_1)^{t} \qquad (29)$$

And since $p_1$ is a probability with $0 < p_1 \le 1$, we have

$$\lim_{t \to \infty} (1 - p_1)^{t} = 0 \qquad (30)$$

According to Eq. (30), after a large number of iterations, the probability that the algorithm has not converged to the optimal value is 0.
Therefore, as the iteration number $t$ increases, ZGBBO will eventually converge to the global optimal value $f(x^{*})$ with probability 1, which completes the proof.
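The tail bound in Eqs. (28)-(30) can be sanity-checked with any non-decreasing probability sequence; the particular sequence below ($p_1 = 0.1$, growing by 0.01 per step) is just an illustration:

```python
# p_t is monotone non-decreasing (Eq. (27)); the probability of NOT having
# converged after t iterations is the product in Eq. (28), bounded above by
# (1 - p_1)^t as in Eq. (29).
p = [min(0.1 + 0.01 * t, 1.0) for t in range(100)]
miss = 1.0
for t, pt in enumerate(p, start=1):
    miss *= (1.0 - pt)
    assert miss <= (1.0 - p[0]) ** t + 1e-12   # Eq. (29) holds at every t
```

Because the sequence eventually reaches 1, `miss` collapses to 0, mirroring the limit of Eq. (30).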

Experiment and analysis
In order to verify the practicality and advancement of the proposed ZGBBO algorithm, a series of comparative experiments are carried out in this chapter. Firstly, the parameter sensitivity of the proposed algorithm is analyzed. Secondly, ZGBBO is compared with its own variants, i.e., the algorithms obtained by removing one improved strategy from ZGBBO are compared with the complete ZGBBO, in order to prove the effectiveness and necessity of the three improved strategies proposed in this paper. Then, ZGBBO is compared with the original BBO and 6 other excellent BBO variants proposed in the recent 5 years. Finally, ZGBBO is compared with 6 new state-of-the-art swarm-intelligence evolutionary algorithms proposed in the recent 3 years to fully prove its advancement. All experiments in this paper were carried out on a computer with an Intel(R) Core(TM) i5-8500 CPU @ 3.00 GHz, 8.00 GB of RAM and a 64-bit operating system, and the development environment was Matlab R2020a.
At present, the test functions for evolutionary algorithms are mainly derived from the Congress on Evolutionary Computation (CEC); the most widely used are CEC2013 (Liao et al. 2013), CEC2014 (Erlich et al. 2014) and CEC2017 (Awad et al. 2017). This paper selects 24 different types of high-dimensional benchmark functions for testing, and the specific information is shown in Table 3. Among them, f1-f10 are static single-objective functions with only one optimal value, which are usually used to test the convergence speed, optimization accuracy and execution ability of the algorithm. f11-f24 are high-dimensional multi-modal functions, whose number of local minima increases exponentially with the dimension. They are mainly used to test the ability of the algorithm to jump out of local optima and its global search ability. In order to fully illustrate the testing ability of the selected multi-modal functions, we take the two-dimensional search space as an example and plot several representative multi-modal functions, as shown in Fig. 4.

Fig. 4 The two-dimensional graphs of multi-modal functions f13, f16, f18, f19, f22 and f24
It can be seen from the six sub-graphs of Fig. 4 that these multi-modal functions have many local minima even in the two-dimensional problem environment. As the dimension of the problem increases, the number of local optima increases exponentially. For example, the function f22 has only one global minimum, but the basin at that minimum is very shallow and there are many local minima nearby, so it is difficult to find the theoretical minimum with high accuracy. Searching for the optimal solution on these irregular multi-modal problems is a huge challenge for any algorithm. Therefore, these benchmark functions can effectively test the optimization performance of the algorithms.
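For concreteness, two classic functions of this multi-modal class can be written as below (standard Rastrigin and Ackley; whether they correspond to the paper's exact f-numbering is not assumed):

```python
import math

def rastrigin(x):
    """Typical high-dimensional multi-modal benchmark: global minimum 0
    at the origin, with a number of local minima that grows exponentially
    in the dimension."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def ackley(x):
    """A function with one very shallow global minimum (0 at the origin)
    surrounded by many local minima, similar to the behaviour described
    for f22."""
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / d
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

Both evaluate to 0 at the origin and are strictly positive elsewhere, which is what makes reported means of exactly 0 in the result tables meaningful.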

Parameter sensitivity analysis
In this section, we analyze the parameter sensitivity of the proposed algorithm. An appropriate parameter value is very important to the optimization performance of the algorithm, and can even affect its ability to solve most problems. According to Algorithm 5, Cr is the only parameter that needs to be initialized in addition to the original parameters. Cr denotes the individual crossover rate in the feedback differential evolution mechanism, which can also be understood as the mutation rate of our algorithm. For convenience, the maximum immigration rate I and emigration rate E are set to 1, and the optimal value of Cr is determined by adjusting its value over 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 and 1.0; the resulting optimal values on the 24 benchmark functions are reported in Table 4. From Table 4, it can be found that when Cr is 0.8, the Friedman test result is the best and the average ranking is the highest (2.5714). When Cr is 0.0 or 1.0, the experimental results are the worst, which shows that either lacking the feedback differential evolution mechanism or using it too frequently will seriously reduce the optimization efficiency of the algorithm. In addition, the value of Cr strongly influences the optimization results on some problems. As shown in Fig. 5, on the functions f7, f8, f9, f10, f18 and f22, different values of Cr significantly affect the optimization performance of the algorithm, and Cr = 0.8 is the best choice on these functions. Therefore, Cr = 0.8 is the suggested parameter setting for our algorithm.
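In a standard DE implementation (our assumption for how Cr enters the feedback differential-evolution step), the crossover rate acts through binomial crossover; Cr = 0.0 then touches only one forced dimension per trial, while Cr = 1.0 rewrites every dimension, which matches the two worst settings in Table 4:

```python
import random

def binomial_crossover(target, mutant, Cr=0.8, rng=random):
    """Binomial crossover of differential evolution: each dimension is
    taken from the mutant with probability Cr; one randomly chosen
    dimension always comes from the mutant so that the trial vector
    differs from the target."""
    D = len(target)
    jrand = rng.randrange(D)
    return [mutant[j] if (rng.random() < Cr or j == jrand) else target[j]
            for j in range(D)]
```

With Cr = 0.8 roughly four of every five dimensions are perturbed, which is the balance the sensitivity study settles on.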

Comparison between ZGBBO and its own variants
In this paper, ZGBBO algorithm is compared with its own three variants to prove the necessity of three main improvement strategies.
Since the greedy selection of the optimal individual in section 3.4 only ensures that the optimal value of each generation is not worse than that of the previous generation, thereby improving convergence accuracy, the improved strategy of section 3.4 need not be verified. The three variant algorithms used for comparison each lack one of the three improved mechanisms of sections 3.1, 3.2 and 3.3; the specific information is shown in Table 5. The three variants are: ZGBBO_1, which lacks the example learning method of section 3.1, i.e., a BBO algorithm using the roulette selection operator, the improved migration operator and the feedback difference mechanism; ZGBBO_2, which lacks the improved migration operator of section 3.2, i.e., a BBO algorithm using the example learning method, the original migration operator and the feedback difference mechanism; and ZGBBO_3, which lacks the feedback difference mechanism of section 3.3, i.e., a BBO algorithm using the example learning method, the improved migration operator and a random mutation operator. For all four ZGBBO algorithms, the maximum immigration rate I and maximum emigration rate E are 1, the population size is NP = 5D (D is the problem dimension), the maximum species number is Smax = 2NP, and the maximum number of iterations is T = 1000; the maximum mutation rate of ZGBBO_3 is $m_{max} = 0.05$. In order to avoid contingency and keep the experiment scientific, the four algorithms are each run independently 50 times on every test function (D = 50), and the mean value (Mean) and standard deviation (Std) of the 50 results are reported.
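The 50-run protocol reduces to a small harness; the `optimizer` argument below is a placeholder for any of the four variants (not the paper's code):

```python
import random
import statistics

def run_trials(optimizer, runs=50, seed0=0):
    """Run an optimizer `runs` times with distinct seeds and report the
    Mean and Std evaluation indexes used in the result tables."""
    results = [optimizer(random.Random(seed0 + r)) for r in range(runs)]
    return statistics.mean(results), statistics.stdev(results)
```

Seeding each run separately keeps the 50 trials independent yet reproducible, which is what "avoid contingency" requires.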
At the significance level of $\alpha = 0.05$, the Wilcoxon rank sum test is performed (Derrac et al. 2011). The comparison results of the four algorithms are shown in Table 6, where bold data represent the optimal results. The Wilcoxon rank sum test results on the 24 test functions are summarized as (w/t/l), meaning w (+: win) / t (=: tie) / l (-: lose). For each test function in the table, "-" means that the competitor algorithm performs worse than ZGBBO, "+" means that it performs better, and "=" means that its result is the same as that of ZGBBO, i.e., the performance difference is not statistically significant. As can be seen from Table 6, the overall optimization performance of the three ZGBBO variants is not as good as that of ZGBBO.
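The rank-sum comparison can be reproduced without a statistics library. The helper below computes the normal-approximation z statistic of the Wilcoxon rank-sum test (a minimal stand-in, without the tie-variance correction, for the routine of Derrac et al.); |z| > 1.96 marks "+" or "-" at the 0.05 level, otherwise "=":

```python
import math

def ranksum_z(a, b):
    """Wilcoxon rank-sum z statistic (normal approximation, ties get
    average ranks; tie-variance correction omitted for brevity)."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    n = len(combined)
    rank_of = [0.0] * n
    i = 0
    while i < n:                       # assign average ranks to tied runs
        j = i
        while j + 1 < n and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            rank_of[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    R1 = sum(rank_of[:n1])             # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (R1 - mu) / sigma
```

Feeding the 50 per-run results of ZGBBO and one variant into `ranksum_z` yields the sign recorded in Table 6 for that function.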
ZGBBO_1 without the example learning method has the same results on 9 problems as ZGBBO, but is inferior to ZGBBO on 15 problems.
ZGBBO_2, which lacks convex migration mechanism and opposition-based learning, tied with ZGBBO on 3 problems, and is inferior to ZGBBO on 21 problems. The convergence of ZGBBO_3 without feedback differential evolution is the same as that of ZGBBO in only one problem, and the convergence accuracy is not as high as that of ZGBBO on the remaining 23 problems. None of the three variants showed better performance than ZGBBO, indicating that the lack of any improvement strategy would reduce the search capability of the algorithm.
ZGBBO, which integrates all the improved mechanisms, shows obvious advantages on the 24 test functions and has higher convergence accuracy, which indicates that the improved strategies proposed in this paper are effective and can improve the performance of the algorithm. Moreover, all three improvement mechanisms are indispensable.

Comparison of convergence rates
This section compares the convergence speed of ZGBBO and the three variants on different test functions. Specifically, for each algorithm we take the best of the 50 runs in section 5.2.1 together with the search results of each generation, and plot the optimal convergence curves of the four algorithms on different test functions (D = 50), as shown in Fig. 6. As can be seen from Fig. 6, ZGBBO's convergence speed and convergence accuracy are significantly higher than those of the other three self-variants, on both single-modal and multi-modal functions. Therefore, the three improved strategies are all effective. The overall convergence behaviour of the four variants on different test functions is roughly the same: ZGBBO has the fastest convergence speed and the highest accuracy, while ZGBBO_2 has the slowest convergence speed. From Fig. 6, the convergence of the four algorithms is basically the same on the single-peak functions f1-f9 and the multi-modal functions f16, f17 and f21, yet the convergence accuracy of ZGBBO improves rapidly and is 20 to 100 orders of magnitude higher than that of the other three variants. The optimization performance of ZGBBO_1 and ZGBBO_3 lies between ZGBBO and ZGBBO_2. For the multi-modal functions f9-f11, f13-f22 and f24, ZGBBO_2 cannot jump out of local optima, and its convergence accuracy is low. Therefore, the convex migration mechanism and opposition-based learning strategy adopted in this paper not only accelerate the convergence of the algorithm but also effectively help it escape local optima. On single-peak problems, the convergence speed of ZGBBO_3, which lacks the feedback difference mechanism, is obviously faster than that of ZGBBO_1, which lacks the example learning method.
However, on multi-modal problems, ZGBBO_3, without the feedback differential mechanism, cannot select the mutation mode intelligently according to the population information, so it struggles to jump out of local optima and its search speed is slower than that of ZGBBO_1. It can be seen that the feedback difference mechanism helps the population evolve intelligently, changes the direction of optimization and improves calculation accuracy. In addition, ZGBBO_1, without the example learning method, converges faster than ZGBBO_2 on all test functions and searches better than ZGBBO_3 on functions f1-f3, f5-f11 and f16-f23. However, the convergence speed and accuracy of ZGBBO_1 are always worse than those of ZGBBO. Therefore, the example learning method effectively improves the convergence speed of the algorithm and enables it to reach the global optimal value more quickly.
In general, ZGBBO is the best of the four algorithms, and its competitiveness is significantly stronger than that of the other three variants; the three improvement mechanisms this paper applies to BBO are therefore all essential. In addition, ZGBBO_1 and ZGBBO_3 are obviously superior to ZGBBO_2 and show clear advantages in convergence accuracy on most test functions, which indicates that the convex migration operator and the opposition-based learning strategy adopted in this paper have the greatest influence on the performance of the improved algorithm.

Comparison between ZGBBO and congeneric algorithms
This section compares ZGBBO with BBO and six state-of-the-art improved BBO algorithms. The six most competitive algorithms with the most obvious improvement in the past five years are selected for simulation experiments to compare overall performance. Table 7 shows the detailed information of the six congeneric BBO algorithms. The parameters of each algorithm are set according to its original reference, as shown in Table 8. As can be seen from Table 8, the proposed ZGBBO has few parameter settings, so the algorithm has good robustness and its performance is basically unaffected by parameter tuning. The test functions are those of Table 3. Specifically, a maximum number of function evaluations (FEs) is set for each test function, and the optimal value found when FEs is reached is recorded. Following the FEs setting of CEC2017, FEs for each test function equals $10^4 \times D$ (D is the problem dimension). Each algorithm searches for the optimal values of the 24 test functions in search spaces of D = 10, D = 30 and D = 50. In order to avoid contingency, each algorithm is run independently 50 times in each problem dimension, and performance is reported via the best value (Best), mean value (Mean) and standard deviation (Std). Of the three evaluation indexes, the best value reflects the convergence accuracy of the algorithm, the mean value reflects its optimization ability, and the standard deviation represents its stability; the mean value is therefore the focus of comparison. Tables 9-11 show the optimization results of the eight algorithms on the 24 test functions in 10, 30 and 50 dimensions respectively, with the best results shown in bold. It can be seen from Table 9 that the optimization performance of the original BBO is not as good as that of the seven improved algorithms on the 10-dimensional test functions.
Therefore, both the work of other scholars and the algorithm proposed in this paper improve the performance of the original BBO to some extent. Except for the original BBO, every algorithm converges accurately to the optimal value on at least one test function with a standard deviation of 0. In contrast, the proposed ZGBBO has the best overall performance: it converges exactly to the global optimal value every time on 15 test functions, including the single-peak functions f1, f3, f5, f6, f8-f10 and the multi-modal functions f11-f14, f17, f19, f23 and f24. Although ZGBBO does not converge to the global optimal value every time on the test functions f4, f7, f15, f18 and f20-f22, its mean value and standard deviation are the best among the eight comparison algorithms, showing better searching ability. It can be seen that ZGBBO not only has high convergence accuracy but also rarely falls into local optima when solving multi-modal problems. In addition, PRBBO, WRBBO, HBBO-CMA and DCGBBO all show strong competitiveness in the 10-dimensional search space. The mean value of PRBBO converges to the optimal value on five problems, and its result is the best on one problem; on four problems, neither PRBBO nor ZGBBO converges to the optimal value, but they obtain the same mean value. WRBBO converges to the global optimal value on four problems, and its results on two problems match those of ZGBBO. HBBO-CMA finds the global optimal value on three problems, and on five problems where the best solution is not found its mean value equals that of ZGBBO.
DCGBBO accurately finds the optimal result on five test functions, i.e., both the mean value and standard deviation obtained are 0.
Therefore, in the 10-dimensional optimization environment, the overall performance of the ZGBBO algorithm is the best: it does not easily fall into local optima, and its convergence accuracy is significantly higher than that of the other seven comparison algorithms. In the 30-dimensional environment (Table 10), ZGBBO shows obvious advantages over the other seven comparison algorithms in terms of mean values on the single-peak problems f1-f10. For the multi-modal problems, ZGBBO converges to the global optimal value or obtains the best result on all functions except f20, where the mean obtained is not as accurate as those of PRBBO and WRBBO. Especially on functions f2, f4, f16 and f17, the mean obtained by ZGBBO is at least 100 orders of magnitude better than those of the other algorithms. Besides, PRBBO also shows strong competitiveness: its mean value on seven test functions is the same as that of ZGBBO, and its result on one test function is better.
According to the above analysis, the ZGBBO algorithm maintains outstanding search performance in the 30-dimensional optimization environment and is more effective than the other comparison algorithms. As can be seen from Table 11, in the 50-dimensional problem environment, ZGBBO still has the best overall performance among the eight comparison algorithms. Obviously, ZGBBO shows an absolute advantage on the single-peak functions f1-f10, obtaining the smallest mean value of the eight algorithms on all of them. This is because the convex migration mechanism and example learning approach proposed in this paper effectively accelerate the movement of the population toward the global optimal solution, so the algorithm converges quickly. On the multi-modal functions, the performance of ZGBBO on f20-f22 decreases as the problem dimension increases, and the results are not as good as those of WRBBO and PRBBO. However, on the other multi-modal problems, the search results of ZGBBO are the best among the eight algorithms; for example, on the multi-modal functions f11-f14, f19, f23 and f24, ZGBBO converges exactly to the global optimal value, with mean and standard deviation both 0, showing excellent search performance. Therefore, on the 50-dimensional problems, ZGBBO maintains good overall performance, and its search ability basically does not decrease with increasing dimension, which indicates good robustness.
From the overall observation of Tables 9-11, it can be found that the optimization performance of the ZGBBO algorithm is insensitive to the problem dimension, and it maintains excellent searching ability on high-dimensional problems. On single-peak problems, ZGBBO always moves quickly to the global optimal solution in high-dimensional environments while keeping high calculation accuracy. On multi-modal problems, ZGBBO consistently jumps out of local optima and converges to the global optimal value in high dimensions. Among the other seven BBO variants, PRBBO, WRBBO and DCGBBO are strongly competitive in the 10-dimensional problem environment with excellent overall performance; however, as the problem dimension increases, only PRBBO remains competitive in the 30-dimensional environment, and in the 50-dimensional environment the performance of these algorithms degrades quickly and is not as stable as that of ZGBBO. The convergence accuracy of ZGBBO decreases only slightly with the increase of problem dimension; in particular, on the functions f5, f9, f11-f14, f19, f23 and f24, its convergence accuracy is unaffected by the problem dimension and it still finds the global optimal value precisely. In contrast, the convergence precision of the other seven comparison algorithms decreases greatly with the increase of the problem dimension, so these algorithms lack robustness and are not suitable for high-dimensional problems. As the demands of practical applications grow, algorithms must be able to solve high-dimensional problems effectively, and the proposed ZGBBO maintains excellent optimization performance in high dimensions.
Therefore, ZGBBO algorithm is more advanced and effective, which is worth adopting and promoting.
In order to fully compare the overall performance of the eight BBO algorithms, Friedman test is performed on the eight algorithms according to the optimization results of the algorithms in Table 11 in a 50-dimensional search environment. The specific results are shown in Table 12 and Fig. 7.
It can be seen from Table 12 that the search results of the ZGBBO algorithm on 21 test functions are the best among the eight BBO algorithms, and only on functions f20-f22 are its results less ideal than those of other algorithms. According to the average ranking of the eight algorithms in Fig. 7, PRBBO and WRBBO are the two most competitive BBO variants, which are effective and advanced among the improved BBO algorithms. However, the DCGBBO algorithm proposed in 2021 outperforms only the original BBO, so its competitiveness is weak. According to Table 12 and Fig. 7, ZGBBO ranks highest on average and first overall among the eight comparison algorithms, giving it the best overall performance. Therefore, ZGBBO is an advanced algorithm with strong competitiveness and high optimization performance.
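The Friedman ranking behind Table 12 and Fig. 7 can be recomputed directly from the result tables; this sketch (lower score = better, ranks averaged over problems, plus the usual chi-square statistic) assumes only the standard definition of the test:

```python
def friedman(results):
    """Average Friedman ranks and chi-square statistic for k algorithms
    over n problems; `results` is a list of per-problem score lists
    (lower is better), ties receiving average ranks."""
    n, k = len(results), len(results[0])
    avg = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:                   # average ranks over tied scores
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            r = (i + j) / 2 + 1
            for t in range(i, j + 1):
                ranks[order[t]] = r
            i = j + 1
        for jdx in range(k):
            avg[jdx] += ranks[jdx] / n
    chi2 = 12 * n / (k * (k + 1)) * (sum(r * r for r in avg) - k * (k + 1) ** 2 / 4)
    return avg, chi2
```

Feeding in the 24 rows of mean values for the eight algorithms reproduces the average rankings plotted in Fig. 7.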

Comparison of convergence rates
The experiment in this section compares the convergence speed of ZGBBO and the other seven BBO variants on the 24 test functions in 50 dimensions. Specifically, the maximum number of iterations is set to T = 1000 with population size NP = 5D, each algorithm is run independently 50 times, and the best run together with the search results of each generation is used to plot the optimal convergence curves of the eight algorithms, as shown in Fig. 8. It can be observed from the 24 sub-graphs of Fig. 8 that the convergence speed of the proposed ZGBBO is significantly faster than that of the other seven comparison algorithms on 20 test functions, showing obvious advantages. On the single-peak functions f1-f7, f9 and f10, ZGBBO converges rapidly to the global optimal solution from the beginning of the iteration, with convergence precision 10 to 150 orders of magnitude higher than the other algorithms. On function f8, ZGBBO's convergence speed is not as fast as those of TDBBO and WRBBO but is better than those of the other five algorithms. Therefore, on single-peak problems ZGBBO is more competitive than the other BBO algorithms, with higher search efficiency and a faster convergence rate. On the multi-modal problems, ZGBBO also shows a large advantage. On the functions f11-f19, f23 and f24, the convergence rate of ZGBBO is significantly faster than that of the other seven BBO variants. Especially on f11, f12, f19, f23 and f24, the convergence curve of ZGBBO drops almost immediately, approaching the optimal value from the very beginning of evolution. It can also be seen from Fig. 8 that on the multi-modal functions f11, f16-f19, f23 and f24, the other seven comparison algorithms fall into local optima and their searches stagnate, while ZGBBO avoids local optima and remains in a convergent state throughout.
Therefore, ZGBBO can effectively jump out of local optima, and its overall performance is better than that of the other congeneric algorithms. Although ZGBBO is not the fastest to converge on f20-f22, it is slower only than WRBBO or TDBBO, which show strong competitiveness in convergence speed. No algorithm can show optimal performance on all problems, so ZGBBO remains an algorithm worth adopting and developing.

Comparison between ZGBBO and other evolutionary algorithms
In this section, we compare the performance of ZGBBO with six state-of-the-art evolutionary algorithms proposed in the recent three years; their detailed information is shown in Table 13. As can be seen from Table 14, in the 10-dimensional problem environment, the six state-of-the-art algorithms and ZGBBO all show excellent optimization performance. In particular, on the multi-modal problem f12 all seven algorithms successfully jump out of local optima and converge to the global optimal value, with mean and standard deviation equal to zero. On the multi-modal problem f23, except for ESDA and AOA, which do not perform well, the mean value and standard deviation of the other algorithms are all 0, and the theoretical optimal solution is found. In the low-dimensional search environment, MPA shows strong competitiveness: the mean value it obtains on 10 problems is better than those of the other algorithms or the same as ZGBBO's, and its mean value is zero on 8 problems, successfully converging to the global optimal value. The mean value of ZGBBO is better than those of the other algorithms on 21 test functions, and on 15 functions it equals the theoretical optimal value. Therefore, in the 10-dimensional problem environment, the overall performance of ZGBBO is the best, while the other algorithms also perform as advanced algorithms should. In the 30-dimensional problem environment, the convergence accuracy of the six state-of-the-art algorithms is less impressive than their results in the 10-dimensional environment. But, as advanced algorithms, they still keep high convergence accuracy on some problems. For instance, on the multi-modal problems f12, f14 and f23, most of the six algorithms give the same results as in 10 dimensions, with a mean value of zero. ChOA still has the smallest mean value on f7, and ESDA still has the highest convergence accuracy on f18, at least 4 orders of magnitude higher than the other algorithms.
In contrast, the convergence accuracy of ZGBBO does not decrease obviously and is basically the same as in the 10-dimensional problem environment. Therefore, in the 30-dimensional search environment, ZGBBO still maintains superior search performance and has better robustness. To compare overall performance, the Friedman test is performed on the seven algorithms according to the optimization results of Table 16 in a 50-dimensional search space; the specific results are shown in Table 17 and Fig. 9 (average ranking of ZGBBO and the six state-of-the-art algorithms). As can be seen from Table 17, the optimization result of ZGBBO on 18 benchmark functions is the best among the seven evolutionary algorithms, a success rate of 75% against the six state-of-the-art competitors. According to the average rankings in Fig. 9, MPA, proposed in 2020, is the most competitive of the competitors, ranking second only to ZGBBO on average, and is an excellent representative of the evolutionary algorithms proposed in recent years. In contrast, the search results of AOA, proposed in 2021, on the 24 test functions are better only than those of AEFA, proposed in 2019. The remaining state-of-the-art algorithms show high performance and excellent search ability on some test functions, but low convergence accuracy and slow search speed on others, so their comprehensive competitiveness is average. According to Table 17 and Fig. 9, ZGBBO has the best overall performance among the seven evolutionary algorithms and is more competitive than the other six state-of-the-art algorithms. Therefore, ZGBBO is a new evolutionary algorithm with advancement and superiority.

Comparison of convergence rates
In order to fully verify the advancement of ZGBBO, this section compares the convergence rate of ZGBBO and the six state-of-the-art evolutionary algorithms on the 24 benchmark functions in 50 dimensions. Whereas Section 5.4.1 compares the convergence accuracy of the algorithms under the same maximum number of evaluations, this section compares their convergence speed under the same number of iterations. The setup is the same as in Section 5.2.3: the maximum number of iterations is T=1000, the population size is NP=5D, and each algorithm is run 50 times independently. The best run and the result of each generation are recorded, and the best convergence curves of the seven algorithms on the high-dimensional test functions are drawn in Fig. 10. It can be observed from Fig. 10 that, under the same number of evolutions, ZGBBO converges faster than the other six state-of-the-art algorithms on 75% of the test functions. Especially for the unimodal functions f1-f10, the convergence curve of ZGBBO decreases rapidly and is clearly better than those of the other algorithms. Although the convergence speed of ZGBBO on f8 is not the fastest among the seven algorithms, it is second only to the electrostatic discharge algorithm (ESDA). For multimodal problems, ZGBBO, which incorporates differential evolution, shows outstanding performance among the six new evolutionary algorithms. For example, on the multimodal functions f11-f13, f15-f17, f19, f23 and f24, ZGBBO does not fall into a local optimum like the other evolutionary algorithms, but rapidly converges to the global optimum, with a significantly decreasing convergence curve. The electrostatic discharge algorithm proposed in 2019 and the marine predator algorithm proposed in 2020 also show strong competitiveness in convergence speed.
For example, on f11-f13, f15, f19 and f23, the convergence curve of MPA almost coincides with that of ZGBBO. On the multimodal function f14, the convergence curve of MPA decreases faster than that of ZGBBO, while on f20-f22, the convergence rate of ESDA is the best among the seven comparison algorithms. Even so, the convergence speed of ZGBBO on f14 and f20-f22 is second only to MPA or ESDA, and better than that of the other state-of-the-art algorithms. In addition, ESDA has the fastest convergence on 20.8% of the test functions, while ZGBBO has the fastest convergence on 75% of them. Therefore, the convergence of ZGBBO is more competitive than that of the other six state-of-the-art evolutionary algorithms, and the algorithm performs better overall, making ZGBBO an ideal algorithm to choose and use.
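The convergence curves in Fig. 10 plot the best-so-far fitness recorded at every generation. As a minimal illustration of that bookkeeping (a sketch, not the authors' code, using a toy perturbation optimizer and a sphere function as stand-ins):

```python
import random

def record_convergence(step, pop, fitness, T=1000):
    """Run an optimizer for T generations, recording the best
    fitness found so far at each generation (a convergence curve)."""
    best = min(fitness(x) for x in pop)
    curve = []
    for _ in range(T):
        pop = step(pop)
        best = min(best, min(fitness(x) for x in pop))
        curve.append(best)
    return curve

def sphere(x):  # f1-style unimodal test function
    return sum(v * v for v in x)

def toy_step(pop):
    """Toy stand-in for one generation: perturb each individual and
    keep the better of the original/perturbed pair (greedy selection)."""
    out = []
    for x in pop:
        y = [v + random.uniform(-0.1, 0.1) for v in x]
        out.append(y if sphere(y) < sphere(x) else x)
    return out

pop0 = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
curve = record_convergence(toy_step, pop0, sphere, T=100)
```

Because the curve tracks the best value found so far, it is non-increasing by construction, which is why all the curves in Fig. 10 decrease monotonically.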

ZGBBO complexity analysis
The evaluation of an algorithm covers three aspects: optimization performance, convergence speed and complexity. So far, we have verified that ZGBBO has excellent optimization performance and search ability. In this section, the complexity of ZGBBO is analyzed to further demonstrate its effectiveness and advancement. In the same development environment, the number of function evaluations and the computational design of the algorithm jointly determine its running time. In the experimental settings of this paper, however, all algorithms use the same maximum number of evaluations on each benchmark function, so ZGBBO's fast convergence and short running time are not caused by using more function evaluations; rather, the computational cost of ZGBBO's search process has been substantially reduced. We analyze and discuss this in detail below.

Time consumption
The experiments in Chapter 5 show that ZGBBO converges faster than the other algorithms. However, fast convergence and a short running time do not by themselves imply low algorithmic complexity. Many real-life problems must be solved within a limited time budget, so sacrificing a large amount of time to obtain a solution is undesirable. To verify that ZGBBO is also acceptable in terms of time consumption, we compare the running time of ZGBBO with that of the original BBO. We measure the average CPU time per iteration of BBO and ZGBBO on the 24 benchmark problems, and compare the resulting time cost of each algorithm, as shown in Fig. 11.
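A per-iteration timing measurement of the kind summarized in Fig. 11 can be sketched as follows. This is an assumption about the measurement procedure, not the authors' harness; the trivial sorting "generation" is only a placeholder for one BBO/ZGBBO iteration:

```python
import time

def mean_iteration_time(step, pop, iters=1000):
    """Average wall-clock time of one optimizer iteration,
    measured over `iters` consecutive generations."""
    start = time.perf_counter()
    for _ in range(iters):
        pop = step(pop)
    return (time.perf_counter() - start) / iters

# toy example: time a trivial "generation" that just re-sorts the population
t = mean_iteration_time(lambda p: sorted(p), list(range(1000)), iters=100)
```

`time.perf_counter()` is used rather than `time.time()` because it is a monotonic, high-resolution clock intended for interval measurement.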
As can be seen from Fig. 11, although several improvement strategies are added to ZGBBO, the time consumption of the algorithm does not increase. Moreover, on some functions, such as f19, ZGBBO has a shorter average running time than BBO. This is because, when adding the improved strategies, ZGBBO also deletes the mutation operator and other calculation steps of the original BBO, which balances the running time. Combined with the above experimental results, ZGBBO can obtain higher-precision solutions in the same or shorter time than the original BBO.

Fig. 11
Average runtime comparisons among BBO and ZGBBO on the benchmark functions

Amount of calculation
To verify the comparison results in Fig. 11, we will analyze the calculation steps of ZGBBO and original BBO in depth. In order to facilitate intuitive comparison and understand the differences between two algorithms, we transform the calculation steps of BBO and ZGBBO into flow charts, as shown in Fig. 12. Based on the observation of Fig. 12, we carry out the following analysis and discussion.
First of all, in each iteration of the original BBO, the immigration rate and emigration rate of every habitat in the population must be recalculated, so the total number of calculations is 2·NP·T. However, the immigration and emigration rates are rank-based: they depend only on the ranking of the habitats. According to Eqs. (2) and (3), once the ranking of the individuals is determined, the immigration and emigration rates of the corresponding individuals are determined as well.
Therefore, in the iteration process of ZGBBO, no matter how many iterations are performed in total, the immigration and emigration rates of all habitats in the population are calculated only once. In addition, ZGBBO adopts the example learning method to select habitats for migration, which does not require calculating the emigration rate of each habitat, further reducing the computation; the total number of calculations is NP. Compared with BBO, ZGBBO thus greatly reduces the operational cost of computing the immigration and emigration rates, saving at least 2·(NP-1)·T calculations.
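This point can be made concrete with a small sketch. Assuming a standard linear migration model in the spirit of Eqs. (2) and (3) (the exact constants here are illustrative, not taken from the paper), the rates can be tabulated once before the main loop, and each generation then simply indexes them by the habitat's fitness rank:

```python
def migration_rates(NP, I=1.0, E=1.0):
    """Rank-based immigration (lam) and emigration (mu) rates,
    computed once: they depend only on a habitat's rank, not on
    its actual fitness value. Rank 1 is the best habitat."""
    lam = [I * r / NP for r in range(1, NP + 1)]        # best habitat: lowest immigration
    mu = [E * (NP - r) / NP for r in range(1, NP + 1)]  # best habitat: highest emigration
    return lam, mu

lam, mu = migration_rates(50)
# In each generation, sort the population by fitness and reuse lam[rank]
# and mu[rank] directly, instead of recomputing 2*NP rates per iteration
# as the original BBO does.
```

The precomputed tables cost O(NP) once, versus O(NP·T) for recomputing the rates in every generation.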

Fig. 12 Flowchart of BBO and ZGBBO
Secondly, the mutation rate of each habitat needs to be calculated only once. According to Eqs. (5) and (6), once the immigration and emigration rates are determined, the mutation rate of a habitat follows directly. The mutation rate is therefore also based on the habitat suitability ranking and does not need to be recomputed in each iteration. The original BBO, however, does not avoid this repeated calculation: it computes the mutation rate of each habitat in every generation. Since the species probability of a habitat must be calculated before its mutation rate, each generation requires 2·NP calculations, so the total cost of the BBO mutation operator is at least 2·NP·T. By contrast, ZGBBO deletes the mutation operator from the search process and replaces it with the feedback differential evolution mechanism. Although the standard deviation of the population suitability must then be calculated, this cost is far less than that of the species probabilities: ZGBBO computes the population standard deviation 0.8 times per iteration on average, i.e., 0.8·T calculations in total. ZGBBO therefore greatly reduces the computational cost, saving at least 2·NP·(T-0.8) calculations in the mutation step.

In addition, ZGBBO has one more judgment step than BBO in the migration process, but it introduces no additional loops or calculations. BBO uses roulette selection to choose the habitat to emigrate from, requiring NP calculations per generation, i.e., NP·T in total. ZGBBO instead adopts the example learning method to select the emigration habitat: according to Eq. (7), each individual needs only one calculation to select a habitat for migration. The total cost of ZGBBO in the migration process is therefore also NP·T, so the amount of calculation does not increase.
Finally, ZGBBO adds an opposition-based learning strategy, which generates opposite individuals for the half of the population with the worse fitness in each iteration. Generating each opposite individual adds one calculation, so each generation adds NP/2 calculations and the total is NP·T/2. Although opposition-based learning adds NP·T/2 calculations, this is compensated by the savings in the other operators discussed above.
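Opposition-based learning itself is a standard construction: the opposite of a point x within a box [a, b] is a + b - x. A minimal sketch of the step described above follows; the greedy keep-the-better rule is our reading of the text (the paper applies greedy selection elsewhere) and is not guaranteed to match the exact implementation:

```python
def opposite(x, lower, upper):
    """Opposite point of x within per-dimension bounds [lower, upper]."""
    return [lo + hi - v for v, lo, hi in zip(x, lower, upper)]

def obl_on_worse_half(pop, fitness, lower, upper):
    """Generate opposite individuals for the worse half of the population
    (minimization) and keep the better of each original/opposite pair."""
    pop = sorted(pop, key=fitness)
    for i in range(len(pop) // 2, len(pop)):
        opp = opposite(pop[i], lower, upper)
        if fitness(opp) < fitness(pop[i]):
            pop[i] = opp
    return pop
```

For example, the opposite of 1 within [0, 4] is 3, and the midpoint of the box is its own opposite; an individual stuck near one boundary is thus given a chance to probe the mirror-image region of the search space.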
To sum up, ZGBBO reduces the computational cost of the algorithm in several places. Although the amount of calculation increases in the process of generating opposite individuals, this is fully compensated elsewhere. Compared with the original BBO, the number of calculations saved by ZGBBO is at least: 2·(NP-1)·T + 2·NP·(T-0.8) - NP·T/2 = (3.5T-1.6)·NP - 2T.
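As a quick numerical sanity check on the closed form above, the three per-operator terms can be summed and compared against (3.5T-1.6)·NP - 2T (the sample values of NP and T below are arbitrary):

```python
def saved_calculations(NP, T):
    """Sum the per-operator savings derived in the text."""
    rates = 2 * (NP - 1) * T       # migration rates computed once, not per iteration
    mutation = 2 * NP * (T - 0.8)  # deleted mutation operator (minus 0.8*T std-devs)
    obl = NP * T / 2               # extra cost of opposition-based learning
    return rates + mutation - obl

def closed_form(NP, T):
    return (3.5 * T - 1.6) * NP - 2 * T
```

Expanding by hand gives 2·NP·T - 2T + 2·NP·T - 1.6·NP - 0.5·NP·T = 3.5·NP·T - 1.6·NP - 2T, which matches the closed form for any NP and T.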

Conclusions and future research
In this paper, we analyze the performance deficiencies of the standard BBO algorithm, and propose a new biogeography-based optimization named ZGBBO. The framework of ZGBBO mainly includes three operators: selection, migration and feedback differential evolution. The selection operator and migration operator are used to improve the convergence accuracy and speed of the algorithm, so as to enhance the optimization efficiency. The feedback differential evolution is used to help the algorithm escape the local optimal solution, so as to enhance the optimization ability on multi-modal problems. To reduce the computational complexity, the mutation operator of BBO is deleted. Meanwhile, we creatively establish a population-based sequence convergence model to prove the convergence of the algorithm. In order to verify the effectiveness and necessity of the three improved operators, simulation experiments are carried out on ZGBBO and its three variants. Experimental results on 24 benchmark functions show that the three improved strategies are indispensable. In addition, ZGBBO is compared with seven improved BBO variants and six state-of-the-art evolutionary algorithms. On the whole, ZGBBO has the best overall performance. Finally, the algorithm complexity of ZGBBO is analyzed. By comparing with the original BBO, the effectiveness of the proposed algorithm is fully verified.
With the rapid development of modern society, practical problems in real life place ever higher demands on optimization algorithms.
Although the ZGBBO algorithm presented in this paper performs well on high-dimensional test functions, it has not yet been applied to practical problems, and an excellent algorithm needs to serve human life. Therefore, the next step is to further study the defects of ZGBBO, improve its optimization ability, and reduce its running time. In the future, ZGBBO will be applied to more complex practical problems, such as engineering optimization, wireless sensor network coverage, and environmental monitoring.