Elite-guided multi-objective cuckoo search algorithm based on crossover operation and information enhancement

In recent years, the cuckoo search (CS) algorithm has been successfully applied to single-objective optimization problems, but most real-world optimization problems are multi-objective optimization problems (MOPs). To enable CS to solve MOPs more effectively, this paper proposes an elite-guided multi-objective cuckoo search algorithm based on crossover operation and information enhancement (CIE-MOCS). The algorithm first enhances population diversity through a crossover operation, then adds elite individuals to guide the update process and accelerate convergence. Finally, an information-enhancement method is adopted in the abandonment process so that the algorithm does not easily fall into a local optimum. To verify the performance of the algorithm, a variety of benchmark functions and performance indicators are used to evaluate it, and a case study verifies its effectiveness in practical applications. The experimental results show that CIE-MOCS performs well compared with the contrasting algorithms.


Introduction
There are many complex optimization problems in real life (Abdel-Baset et al. 2018). Many researchers have made great contributions to solving these problems and proposed many effective optimization methods, but these methods still have shortcomings. For example, what must often be optimized is not a single objective but multiple conflicting objectives (Garg 2015; Tang et al. 2019; Toktos et al. 2020), namely multi-objective optimization problems (MOPs) (Zheng et al. 2017; Gaspar-Cunha et al. 2008; Ustun et al. 2021). For multi-objective optimization, the current mainstream method for constructing multi-objective optimization algorithms is the Pareto construction method (Miettinen 1998), as used in NSGA-II (Deb et al. 2002), SPEA2 (Zitzler et al. 2001) and MOEA/D (Zhang et al. 2007). Although the above methods have made great contributions to the field of optimization, researchers have found that problems such as slow convergence and a tendency to fall into local optima still arise when these algorithms are applied to multi-objective problems (Guo et al. 2016; Pan et al. 2015).
To solve the above problems and improve optimization performance, researchers have analyzed the weaknesses of existing algorithms. Glamsch et al. (2021) observed a strong dependence and correlation between the diversity of the initial population and optimization performance, and based on this idea used different initialization methods (McKay et al. 1965; Macqueen et al. 1965; Schmidt 2006; Wessing 2017) to diversify the initial population before solving with NSGA-III; the resulting algorithm performance differed significantly across initialization methods. In view of this influence, recent research applies quasi-random sequences and ring sequences to the initial population and combines them with particle swarm optimization (PSO); the results show that initial population diversity has a large impact on algorithm performance (Ashraf et al. 2022). Going beyond the initial population, Haupt (2004) pointed out that in population-based iterative global optimization, a population with good diversity during the iterative process improves the global optimization ability of the algorithm. This suggests that diversity should also be intervened in during iteration to ensure better algorithm performance. It can be seen that population diversity strongly influences the performance of an optimization algorithm throughout the search. In the Genetic Algorithm (GA) (Holland 1975), the crossover operation is one method that can enhance population diversity.
In this paper, a crossover operation is performed in each iteration to ensure the diversity of the population during optimization; improving population diversity improves the comprehensive performance of the algorithm. Regarding convergence speed, various scholars have also put forward a variety of ideas (Xiang et al. 2017; Wang et al. 2017). Zaenudin and Kistijantoro (2017) proposed an enhanced Pareto evolutionary algorithm based on the selection process of SPEA2 and obtained an optimization algorithm with a faster convergence speed. Ding et al. (2018) introduced an artificial immune mechanism into BSO, stratified individual fitness, and introduced a concentration mechanism, which improved the convergence speed and solution accuracy of the algorithm while maintaining population diversity. But the best-known method for improving convergence speed comes from the differential evolution algorithm (DE) (Storn and Price 1997): by changing the mutation strategy from DE/rand/1 to DE/best/1, the basis vector is replaced with the optimal individual from the previous iteration, so that the optimal solution guides the search and speeds up convergence. This method of guiding through the optimal solution may, however, cause the algorithm to fall into a local optimum due to misguided guidance, so how to avoid local optima is also a research hotspot in the field of optimization.
To avoid falling into a local optimum, PSO (Kennedy et al. 1995) iterates by updating position and velocity formulas. The velocity formula contains random individual information from the population; this information provides a vector for the velocity, which updates an existing individual to obtain a new one. This update method based on individual information makes full use of existing information to escape the local optimum dilemma. To further enhance the global optimization capability of PSO, Zhou et al. (2022) designed a quantum particle swarm optimization algorithm based on a truncated average stabilization strategy, which uses information computed from the quantum wave function to determine the global optimal value. In addition to PSO, Cao et al. (2019) proposed an active-learning brainstorming optimization algorithm (ALBSO) with a dynamically changing clustering period. This algorithm adopts a one-step dynamically changing clustering period and periodically classifies the population information, which reduces complexity while preserving global search ability. The optimization process of an algorithm is in fact a process of information exchange, so obtaining more and better information is of great significance to the performance of an optimization algorithm.
The above summarizes several factors that affect the performance of optimization algorithms and lists improvement methods targeting each factor, all of which achieved good performance gains. However, each of these improvements addresses only a single factor and does not provide a method for balancing all factors. As an effective swarm optimization algorithm, cuckoo search (CS) (Yang et al. 2009) has been applied to many fields with good results (Mohamad et al. 2014). Based on the above analysis, this paper proposes an elite-guided multi-objective cuckoo search algorithm based on crossover operation and information enhancement (CIE-MOCS). The algorithm first extends CS into a multi-objective cuckoo search (MOCS) through the Pareto method; MOCS then combines elite guidance, information enhancement, and crossover operation into CIE-MOCS, enabling it to handle multi-objective optimization problems with high performance.
The rest of the paper is structured as follows: In Sect. 2, the background and related work are described. Section 3 presents the method of building MOCS. A detailed description of the proposed improved algorithm is given in Sect. 4. Section 5 explains the experimental results and analyzes them using statistical testing methods. Section 6 presents a simple case study of CIE-MOCS.

Problem description and related work
For the CIE-MOCS proposed in this paper, this section gives a general description of the multi-objective problem and analyzes recent related work on cuckoo search. Section 2.1 gives a general description of MOPs, and Sect. 2.2 analyzes some current improvement methods for cuckoo search (CS).

Problem description
The multi-objective optimization problem (MOP) can be simply defined as:

min f(X) = (f1(X), f2(X), ..., fr(X))
s.t. gi(X) <= 0, i = 1, 2, ..., p
     hj(X) = 0, j = 1, 2, ..., q

In the above formulation, min f(X) is an optimization problem with r optimization objectives, gi(X) is an inequality constraint, and hj(X) is an equality constraint, where X = (x1, x2, ..., xn) is a decision variable vector that satisfies the above constraints. In an MOP, all optimal solutions found constitute the Pareto front (PF_known); the corresponding set in decision space is called the Pareto optimal solution set. The goal of a multi-objective optimization algorithm is to find the optimal set of solutions closest to the true Pareto front (PF_true).
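The Pareto relation underlying this definition can be sketched as a minimal dominance check for minimization (the function name and interface here are illustrative, not from the paper):

```python
import numpy as np

def dominates(f_a, f_b):
    """Return True if objective vector f_a Pareto-dominates f_b under
    minimization: no worse in every objective, strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))
```

A solution belongs to PF_known exactly when no other solution in the population dominates it under this check.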

Related work
Cuckoo search (CS) is a swarm intelligence algorithm proposed by Yang et al. (2009). Due to its simple structure, few parameters and easy implementation, it has attracted extensive attention from scholars. This subsection briefly reviews research on CS.
CS was originally designed to solve single-objective optimization problems. To solve such problems better, many researchers hybridized it with other algorithms to obtain better application value or algorithm performance. Shehab et al. (2019) mixed the bat algorithm and CS into CSBA, and experiments proved that CSBA is stronger than CS in local search. Guo et al. (2016) combined CS and PSO into a new optimization algorithm to optimize the preventive maintenance optimization model (PMPOM), obtaining fast convergence and better solution performance. Besides mixing optimization algorithms, some researchers use optimization algorithms to tune the parameters of prediction algorithms to make predictions more accurate: Zhang et al. (2020) combined SVM with CS to construct a new adaptive forecasting model to predict power load. Hybridizing algorithms to take advantage of each component has become a common improvement method. For cuckoo search, in addition to the hybrid approach above, there are also ideas for improving the internal structure. Wang et al. (2016) used a chaotic map to adjust the CS optimization step size while adding an elite scheme, constructing a chaotic cuckoo search (CCS); experimental results show that its optimization ability is better than that of CS. Pan et al. (2020) proposed a compact cuckoo search based on mixed uniform sampling, which enhanced the optimization ability of CS, and applied it to UAV logistics to reduce usage costs. The above analysis shows that CS has wide application prospects, but since most real-life optimization problems are multi-objective, extending CS to multi-objective settings is of great significance.
Yang et al. (2013) proposed a Pareto-based multi-objective cuckoo search algorithm (MOCS) and verified its good multi-objective optimization performance through benchmark problems and case analysis. Since MOCS was proposed, many improved optimization algorithms have emerged. Chen et al. (2021) proposed a decomposition-based multi-objective cuckoo search (MOCS/D) that uses the idea of the MOEA/D algorithm; this method balances the convergence and diversity of the algorithm and achieved good results on test cases. Cui et al. (2019) used non-dominated sorting and reference points to enhance the performance of the algorithm; the experimental results show that this improved algorithm has better optimization performance than other algorithms. Zhang et al. (2018) also used non-dominated sorting to generate Pareto fronts, but additionally designed a dynamic local search method to enhance local search ability, thereby improving the convergence, scalability and distribution performance of MOCS.
The purpose of algorithm performance improvement is to solve practical problems better and faster. Othman et al. (2020) proposed a hybrid multi-objective cuckoo search algorithm based on evolutionary operators for cancer diagnosis and classification; the experimental results show that the improved algorithm outperforms MOCS in diagnosis and classification. In optimization scheduling research, completion time and carbon emissions were used as objective functions, and the optimal sub-region was obtained using a hybrid MOCS with a controllable optimization space, effectively solving the flow shop scheduling problem (Gu et al. 2021). Madni et al. (2019) proposed a MOCSO to solve the problem of cloud computing resource allocation, and the experimental results show that it balances more goals within the expected time and cost. MOCS thus has a good ability to solve practical problems, and when facing a specific problem, researchers often propose problem-specific improvements; however, the application of such specialized algorithms in other fields may be limited. Proposing a modified MOCS with improved performance and wide application value is the research purpose of this paper.
To sum up, the improvement of MOCS has achieved remarkable results, but as multi-objective problems become more and more complex, how to propose a high-performance and easy-to-apply MOCS has become a research hotspot. Combined with the analysis in the introduction, this paper improves CS and proposes an elite-guided multi-objective cuckoo search algorithm based on crossover operation and information enhancement (CIE-MOCS) to handle MOPs. The main contributions of this paper are as follows:
This paper builds a Pareto-based multi-objective cuckoo search algorithm.
The crossover operation diversifies the population to obtain a better solution space, making it easier for the algorithm to obtain the global optimum.
An elite guidance strategy based on fast non-dominated sorting and crowding degree sorting is proposed to enhance the fast convergence of the algorithm.
An information enhancement method is proposed to make the algorithm obtain more solution space information in the process of abandonment, so that the algorithm is not easy to fall into the local optimum dilemma.
Multi-objective cuckoo search

Cuckoo search
The cuckoo search algorithm finds the optimal solution quickly and efficiently by simulating the brood-parasitic behavior of cuckoos and combining it with the Lévy flight mechanism observed in birds and insects to perform a stochastic search. To simplify the description of cuckoo search, three idealized rules are used.
Each cuckoo lays one egg at a time and places it in a randomly chosen nest.
The best nests with high quality will be preserved for the next generation.
The number of available host nests is fixed, and the probability of a cuckoo's egg being discovered by the host bird is Pa ∈ [0, 1]. When a host bird discovers a cuckoo's egg, it either throws the egg away or abandons the nest and builds a new one.
Corresponding to the algorithm itself, each egg in a nest represents a solution, while a cuckoo egg represents a new solution whose purpose is to replace a not-so-good solution in the nest with a new and better one. For a minimization problem, the quality or fitness of a solution can simply be taken as inversely proportional to the objective function value. Based on the three rules above, the cuckoo search pseudo-code is shown in Algorithm 1.
The update process of cuckoo search performs a global search by Lévy flight; the location-and-path formula for the host nest is given in Eq. (2):

x_i^(t+1) = x_i^(t) + a ⊕ L(s, λ)    (2)

where a is the step factor and L(s, λ) is a random step length drawn from the Lévy distribution. After the update, the abandonment process draws a random number rand ∈ [0, 1] and compares it with the abandonment probability Pa. If rand < Pa, the nest position is randomly updated once; otherwise the nest position remains unchanged. The equation for the abandonment process is shown in Eq. (4):

v_i^(t) = x_i^(t) + rand × (x_r1^(t) − x_r2^(t))    (4)

where x_r1^(t) and x_r2^(t) are two random nests at iteration t and rand is a random number from 0 to 1. When rand is less than Pa, nest abandonment is performed to generate a new solution; otherwise x_i^(t) is kept unchanged. The abandoned solution v_i^(t) is obtained after this process.
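The Lévy-flight update and the abandonment step above can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's implementation: it uses Mantegna's algorithm for the Lévy step and a constant step factor, and all function names and defaults (alpha = 0.1, pa = 0.25, matching the settings later in Sect. 4.4) are ours:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Lévy-distributed step via Mantegna's algorithm (beta ~ 1.5)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cs_iteration(nests, alpha=0.1, pa=0.25, rng=None):
    """One basic cuckoo-search pass: Lévy-flight update (Eq. 2) followed by
    the random abandonment of Eq. (4). `nests` is an (N, dim) array."""
    rng = rng or np.random.default_rng()
    n, dim = nests.shape
    # Global search: x_i(t+1) = x_i(t) + alpha (+) L(s, lambda)
    updated = nests + alpha * np.array([levy_step(dim, rng=rng) for _ in range(n)])
    # Abandonment: with probability pa, v_i = x_i + rand * (x_r1 - x_r2)
    r1, r2 = rng.permutation(n), rng.permutation(n)
    step = rng.random((n, 1)) * (updated[r1] - updated[r2])
    mask = rng.random((n, 1)) < pa
    return np.where(mask, updated + step, updated)
```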

Multi-objective cuckoo search
There are two general approaches to handling multi-objective optimization problems. One is to linearly weight the multiple objectives, giving each a reasonable weighting factor according to its importance, so that the MOP is converted into a single-objective problem (SOP). The second is to construct the Pareto optimum, forming an optimal solution set from which the best individual is selected. Linear weighting has two drawbacks: on the one hand, the weighting coefficients are difficult to select accurately; on the other hand, the single solution obtained is too absolute, since the objectives of a multi-objective problem conflict with each other and theoretically there is no single absolutely optimal solution. Therefore, most current multi-objective optimization methods are based on the Pareto approach. The Pareto optimal solution set can be constructed by methods such as the dealer's rule, the ring rule, recursion, and fast non-dominated sorting. After analyzing these classes of methods, this paper uses fast non-dominated sorting, which has low time complexity and is easy to implement, to convert CS into an MOCS that can handle multi-objective problems, and combines it with an elite population retention strategy based on crowding sorting to achieve better overall population fitness.

Fast non-dominated sort
In a population-based multi-objective optimization algorithm, after one iteration there are various relationships between the individuals in the population: dominating, being dominated, and mutual non-domination. These relationships between individuals are the core idea of fast non-dominated sorting. Taking two-dimensional objective minimization as an example, it is assumed that several individuals build the Pareto fronts through fast non-dominated sorting as shown in Fig. 1.
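The layering in Fig. 1 can be produced by the fast non-dominated sort of Deb et al. (2002). The following is a straightforward sketch (interface and names are ours):

```python
import numpy as np

def fast_non_dominated_sort(F):
    """Fast non-dominated sort (Deb et al. 2002).
    F: (N, m) array of objective values (minimization).
    Returns a list of fronts, each a list of row indices into F."""
    n = len(F)
    S = [[] for _ in range(n)]  # solutions dominated by p
    n_dom = [0] * n             # number of solutions dominating p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if all(F[p] <= F[q]) and any(F[p] < F[q]):
                S[p].append(q)          # p dominates q
            elif all(F[q] <= F[p]) and any(F[q] < F[p]):
                n_dom[p] += 1           # q dominates p
        if n_dom[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n_dom[q] -= 1
                if n_dom[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]
```

For the four points (1,1), (2,2), (1,2), (2,1), the sort yields three layers: the dominating point (1,1), then the mutually non-dominated pair (1,2) and (2,1), then (2,2).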

Crowding ranking
Crowding ranking is a sorting method, built on fast non-dominated sorting, that measures the relative quality of mutually non-dominated individuals within the same layer. As shown in Fig. 2, for a set of N individuals on the Pareto front, the crowding value of individual i is calculated from its two adjacent individuals i − 1 and i + 1.
As shown in Fig. 2, to calculate the distance of individual i, the distances of all individuals in the same layer are first initialized, setting L[i]_d = 0. Then the individuals in the layer are sorted in ascending order of each objective function value, and the distances of the first and last individuals are set to infinity, that is, L[1]_d = L[N]_d = ∞. For the remaining individuals, the crowding distance is accumulated over the objectives as

L[i]_d = L[i]_d + (L[i+1]_m − L[i−1]_m) / (f_m_max − f_m_min)

where L[i+1]_m and L[i−1]_m are the mth objective function values of the (i+1)th and (i−1)th individuals, and f_m_max and f_m_min are the maximum and minimum values of the mth objective among individuals in the same layer, respectively.
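The accumulation above is the standard NSGA-II crowding distance; a compact sketch for one non-dominated layer (names ours) is:

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II-style crowding distance for one non-dominated layer.
    F: (N, m) objective values; boundary individuals get infinite distance."""
    n, m = F.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary individuals
        f_min, f_max = F[order[0], k], F[order[-1], k]
        if f_max == f_min:
            continue  # degenerate objective: no contribution
        for j in range(1, n - 1):
            dist[order[j]] += (F[order[j + 1], k] - F[order[j - 1], k]) / (f_max - f_min)
    return dist
```

Individuals with larger crowding distance lie in sparser regions of the front and are preferred when a layer must be truncated.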

Elite retention strategies
The elite retention strategy mixes the parent and child populations during optimization and selects from the mixed population, through non-dominated and crowding sorting, the population for the next iteration. Assuming the population size is N, the specific process is shown in Fig. 3.
As shown in Fig. 3, the parent population (N) is mixed with the child population (N) generated by the parent iteration to form a mixed population (2N). The mixed population is stratified by non-dominated sorting, the crowding within each layer is then sorted, and an elite population (N) of the same size as the initial population is selected to participate in the next iteration.
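The selection in Fig. 3 can be sketched as follows: fronts are filled in order, and the last front that does not fit is truncated by crowding distance. Minimal inline helpers are included so the sketch stands alone (all names are ours):

```python
import numpy as np

def elite_retention(F_mixed, N):
    """Select N elite indices from a mixed 2N population (Fig. 3).
    F_mixed: (2N, m) objective values (minimization)."""
    def fronts_of(F):
        n = len(F)
        S = [[] for _ in range(n)]; n_dom = [0] * n; fr = [[]]
        for p in range(n):
            for q in range(n):
                if p == q:
                    continue
                if all(F[p] <= F[q]) and any(F[p] < F[q]):
                    S[p].append(q)
                elif all(F[q] <= F[p]) and any(F[q] < F[p]):
                    n_dom[p] += 1
            if n_dom[p] == 0:
                fr[0].append(p)
        while fr[-1]:
            nxt = []
            for p in fr[-1]:
                for q in S[p]:
                    n_dom[q] -= 1
                    if n_dom[q] == 0:
                        nxt.append(q)
            fr.append(nxt)
        return fr[:-1]

    def crowd(F):
        n, m = F.shape; d = np.zeros(n)
        for k in range(m):
            o = np.argsort(F[:, k]); d[o[0]] = d[o[-1]] = np.inf
            span = F[o[-1], k] - F[o[0], k]
            if span == 0:
                continue
            for j in range(1, n - 1):
                d[o[j]] += (F[o[j + 1], k] - F[o[j - 1], k]) / span
        return d

    chosen = []
    for front in fronts_of(F_mixed):
        if len(chosen) + len(front) <= N:
            chosen.extend(front)          # whole front fits
        else:
            d = crowd(F_mixed[front])     # truncate by crowding distance
            ranked = [front[i] for i in np.argsort(-d)]
            chosen.extend(ranked[: N - len(chosen)])
            break
    return chosen
```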

CIE-MOCS
Section 3 describes the method for converting CS into MOCS and explains the components used during construction. The purpose of this paper, however, is to provide an improved MOCS with high performance and broad application prospects. Yang et al. (2009) noted that CS has few parameters and that parameter adjustment has little effect on its optimization ability, so it is difficult to improve the algorithm's performance by tuning parameters alone. This paper therefore proposes a population diversification method based on crossover operations, an elite guidance strategy in the nest update process, and information enhancement in the abandonment process to improve MOCS.

Population diversification based on crossover operations
As discussed in the introduction, population diversity has a great influence on algorithm performance. In GA, one of the most widely used optimization algorithms, the mutation and crossover operations both strongly improve population diversity. But the mutation operation has a large impact on individuals, which may lead to the loss of original data information in practical applications. Therefore, this paper uses the crossover operation, whose probability can be adjusted, to diversify the population of CS. Compared with methods that only enhance the diversity of the initial population, this paper performs a crossover operation before each iteration, so as to improve the global optimization ability of MOCS and make it easier for the algorithm to reach the global optimum. The crossover operation encodes two individuals as two chromosomes: the chromosomes are cut at the same position, and the two substrings are crossed and recombined to form two new chromosomes. The specific process is shown in Fig. 4.
As shown in Fig. 4, the start and end positions of a segment of genes are first randomly selected in a pair of chromosomes (parents), and the two segments are exchanged. At this point the chromosomes may contain gene conflicts, so conflict detection is required. During conflict detection, a mapping relationship is established according to the two exchanged gene segments. As shown in the figure, the conflicting genes in Proto-child1 are (1, 2, 9) and those in Proto-child2 are (3, 4, 5); the mapping relationship shown in the figure is established, and finally progeny chromosomes without gene conflicts are obtained. In this paper, the crossover operation is applied in each iteration of MOCS, acting on the population produced by the elite retention strategy, to enhance population diversity and thereby the performance of the algorithm.
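The segment swap with mapping-based conflict resolution described for Fig. 4 corresponds to a partially-mapped-style crossover on permutation-encoded chromosomes. A sketch under that assumption (function name, cut-point interface, and the permutation encoding are ours):

```python
def pmx(parent1, parent2, cut1, cut2):
    """Partially-mapped-style crossover: swap the gene segment
    [cut1:cut2] between the parents, then resolve conflicts outside the
    segment through the mapping induced by the swapped genes."""
    def make_child(p_outer, segment_from_other):
        child = list(p_outer)
        child[cut1:cut2] = segment_from_other
        # Mapping: swapped-in gene -> gene it replaced at the same position
        mapping = {segment_from_other[i]: p_outer[cut1 + i]
                   for i in range(len(segment_from_other))}
        for i in list(range(cut1)) + list(range(cut2, len(child))):
            g = child[i]
            while g in child[cut1:cut2]:  # conflict: follow the mapping chain
                g = mapping[g]
            child[i] = g
        return child

    seg1, seg2 = parent1[cut1:cut2], parent2[cut1:cut2]
    return make_child(parent1, seg2), make_child(parent2, seg1)
```

Because conflicts are resolved through the mapping, each child remains a valid permutation of the parent genes.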

The update process of elite guidance
After Sect. 4.1, the algorithm has obtained a population with good diversity and enters the MOCS update process, shown in Eq. (2). Its main component is the step a ⊕ L(s, λ), where L(s, λ) (Lévy flight) is a random walk and a is the step factor formed from the difference between a random individual x_j^(t) and the current individual x_i^(t). This difference vector, together with the Lévy flight, determines the step size and direction of the update, as shown in Fig. 5. When x_j^(t) is close to x_i^(t), the difference is small and the search radius k1 is small; when x_j^(t) is far from x_i^(t), the difference is large, the search radius k1 is large, and the search direction is completely random. As a result, the search process has no clear direction, which can keep the search range of k1 far away from x_best^(t). As shown in Fig. 5, in the search range with r as the radius, the closest distance to x_best^(t) is r1, so it is not easy to find the optimal solution x_best^(t) by this random method, which leads to slow convergence.
In this subsection, aiming at the slow convergence caused by the large randomness of the search radius and direction, an elite guidance strategy is proposed to improve the optimization efficiency of MOCS during solution update. The method uses the elite retention strategy of Sect. 3.2.3 to extract the individual x_best^(t−1) with the best fitness as the elite individual for the tth iteration, providing a reference search radius and direction for the update process. The specific update process is shown in Fig. 6.
A population-based optimization algorithm progresses gradually from suboptimal to optimal as iterations increase, so the best solutions found in two consecutive iterations are likely to be adjacent. Therefore, using the (t−1)th optimal solution to guide the tth optimization can steer the search radius and direction toward the region where the optimal solution exists. As shown in Fig. 6, when x_best^(t−1) is used as the elite to replace the random solution x_j^(t), the newly generated step factor is shown in Eq. (6):

ã = a0 (x_best^(t−1) − x_i^(t))    (6)

In this way, the optimization radius k2 = ã ⊕ L(s, λ) automatically adjusts the search radius according to the distance between x_best^(t−1) and x_i^(t), and also provides a direction closer to x_best^(t) for optimization. The final search radius and direction are still combined with the step size and direction of the Lévy flight. As shown in Fig. 6, in the search range with k2 as the radius, the shortest distance to x_best^(t) is r2 with r2 < r1, so x_i^(t+1) has a greater probability of approaching x_best^(t), which speeds up the convergence of the algorithm. The improved update process is shown in Eq. (7):

x_i^(t+1) = x_i^(t) + ã ⊕ L(s, λ)    (7)
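A sketch of this elite-guided step for a single individual follows; the equations are as reconstructed above from the surrounding text, the Lévy step uses Mantegna's algorithm, and the function name and interface are ours (a0 = 0.1 follows the setting in Sect. 4.4):

```python
import numpy as np
from math import gamma, sin, pi

def elite_guided_update(x_i, x_best_prev, alpha0=0.1, beta=1.5, rng=None):
    """Elite-guided update (Eqs. 6-7 as reconstructed): the random individual
    in the step factor is replaced by the previous iteration's elite, so the
    Lévy step is scaled and oriented toward the region of the elite."""
    rng = rng or np.random.default_rng()
    dim = len(x_i)
    # Mantegna's algorithm for a Lévy-distributed step L(s, lambda)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    levy = rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)
    a_tilde = alpha0 * (x_best_prev - x_i)   # elite-scaled step factor (Eq. 6)
    return x_i + a_tilde * levy              # updated position (Eq. 7)
```

Note the self-adjusting property: when x_i already coincides with the elite, the step factor vanishes and the individual stays put, while distant individuals take proportionally larger, elite-oriented steps.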
The elite strategy not only improves the convergence speed of the algorithm; because the elite individuals are selected as the individuals with the highest fitness, in practical applications it is also possible to specify how they are selected from the Pareto front, so the method has great application prospects in practice. A practical case is given in Sect. 6.

Abandonment process for information enhancement
After the elite-guided update process, the algorithm proceeds to the abandonment process shown in Eq. (4). Cuckoo search risks falling into a local optimum because the algorithm may receive too little information, or wrong information, while searching, causing it to end the global search too quickly and switch to local search. To reduce the probability of MOCS falling into a local optimum, its ability to obtain information during the search should be enhanced. This paper therefore improves the abandonment process of Eq. (4). The purpose of the improvement, applied when rand < Pa, is to let the abandonment process learn the information of the search space more effectively, enhance the ability to reach the global optimum, and avoid local optima. The improved abandonment process is shown in Eq. (8), and the information-enhancement diagram is shown in Fig. 7. As shown in Fig. 7a, before information enhancement the algorithm randomly selects two individuals x_r1^(t) and x_r2^(t) to form the basis vector (x_r1^(t) − x_r2^(t)). The direction of this vector is fixed; multiplied by rand, it gives the optimization path for x_i^(t). The step size and direction of this path are too uniform, the difference between the solution before and after abandonment is small, and the effect on jumping out of a local optimum is limited.
In this paper, an information-enhanced abandonment process is proposed, as shown in Fig. 7b. On the basis of the two random individuals, a second difference vector formed by two further random individuals x_r3^(t) and x_r4^(t) is added to the abandonment process of MOCS:

v_i^(t) = x_i^(t) + rand × (x_r1^(t) − x_r2^(t)) + rand × (x_r3^(t) − x_r4^(t))    (8)

This added vector plays the same role as the basis vector. Having obtained more information about the search space, the improved basis vector provides more possibilities for the direction and distance of the abandonment step, and the resulting variety of distances and directions increases the possibility of the algorithm jumping out of a local optimum.
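The enhanced abandonment can be sketched for a whole population as follows (Eq. 8 as reconstructed above; the function name and interface are ours, and Pa = 0.25 follows Sect. 4.4):

```python
import numpy as np

def abandon_enhanced(nests, pa=0.25, rng=None):
    """Information-enhanced abandonment: a second random difference vector
    (x_r3 - x_r4) is added to the basis vector of Eq. (4), widening the
    possible step directions and distances. nests: (N, dim) array."""
    rng = rng or np.random.default_rng()
    n, dim = nests.shape
    r1, r2, r3, r4 = (rng.permutation(n) for _ in range(4))
    v = (nests
         + rng.random((n, 1)) * (nests[r1] - nests[r2])
         + rng.random((n, 1)) * (nests[r3] - nests[r4]))
    mask = rng.random((n, 1)) < pa   # abandon only when rand < Pa
    return np.where(mask, v, nests)
```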

Algorithm flow chart and steps
Following Sects. 4.1-4.3, this section takes two-dimensional objectives as an example to explain the algorithm flow of CIE-MOCS and provides pseudo code.

CIE-MOCS flow chart
In the following description, N represents the initial population size, and the algorithm flow chart is shown in Fig. 8. In order to have a clearer understanding of the algorithm flow, the algorithm steps are described in detail here, and the detailed parameter settings and algorithm stop conditions are given.
Step 1: Parameter initialization. Generate an initial population P1 with N individuals, and set the maximum number of iterations T, the crossover probability CR = 0.7, the step size scaling factor a0 = 0.1, and the abandonment probability Pa = 0.25.
Step 2: Perform the crossover operation on the initial population with crossover probability CR = 0.7 to generate a diverse population P2.
Step 3: Calculate the objective value and perform fast non-dominated sorting and crowding sorting to obtain the Pareto optimal solution set, and obtain elite individuals from it.
Step 4: Use elite individuals to guide the algorithm update process and speed up convergence, obtaining the updated population P3 with the step size scaling factor a0 = 0.1 in the elite guidance.
Step 5: Input P3 into the abandonment process and apply information enhancement in this process, so that the algorithm does not easily fall into a local optimum, obtaining a new population P4; here the abandonment probability is Pa = 0.25 and rand ∈ [0, 1].
Step 6: Mix P2 and P4 to obtain a mixed population, calculate its objective values, and perform fast non-dominated sorting and crowding sorting on them. Take the top N individuals to form the initial population of the next iteration. After sorting, the solutions on the Pareto front are taken out to form the Pareto optimal solution set.
Step 7: Determine whether the stopping condition is reached. If the number of iterations of the algorithm is t\T, return to Step 2 to continue the iteration. Otherwise, the algorithm stops the iteration, ends the optimization, and outputs the optimization result.
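To make the flow above concrete, the following is a minimal, self-contained NumPy sketch of one possible realisation of Steps 1-7. The test problem, the binomial crossover form, the elite-selection rule (a random first-front member) and the "information enhancement" abandonment move (reusing the difference of two random survivors instead of a blind restart) are simplifying assumptions for illustration, not the paper's exact equations.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def objectives(x):
    # Simple bi-objective test problem standing in for the ZDT/DTLZ models
    return np.array([np.sum(x**2), np.sum((x - 2.0)**2)])

def dominates(f, g):
    return np.all(f <= g) and np.any(f < g)

def nondominated_sort(F):
    # Front index per individual, obtained by repeated front peeling
    n = len(F); rank = np.full(n, -1); r = 0; remaining = set(range(n))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        for i in front:
            rank[i] = r
        remaining -= set(front); r += 1
    return rank

def crowding(F, idx):
    # Crowding distance within one front
    d = np.zeros(len(idx))
    for m in range(F.shape[1]):
        order = np.argsort(F[idx, m])
        d[order[0]] = d[order[-1]] = np.inf
        span = (F[idx, m].max() - F[idx, m].min()) or 1.0
        for k in range(1, len(idx) - 1):
            d[order[k]] += (F[idx[order[k+1]], m] - F[idx[order[k-1]], m]) / span
    return d

def levy(dim, beta=1.5):
    # Mantegna's algorithm for a Levy-stable step
    sigma = (gamma(1+beta)*sin(pi*beta/2) /
             (gamma((1+beta)/2)*beta*2**((beta-1)/2)))**(1/beta)
    u = rng.normal(0, sigma, dim); v = rng.normal(0, 1, dim)
    return u / np.abs(v)**(1/beta)

def cie_mocs(dim=5, N=20, T=30, CR=0.7, a0=0.1, Pa=0.25, lo=-5.0, hi=5.0):
    P = rng.uniform(lo, hi, (N, dim))                 # Step 1
    for t in range(T):
        P2 = P.copy()                                 # Step 2: crossover
        for i in range(N):
            j = rng.integers(N)
            mask = rng.random(dim) < CR
            P2[i, mask] = P[j, mask]
        F2 = np.array([objectives(x) for x in P2])    # Step 3: sort, pick elite
        rank = nondominated_sort(F2)
        elite = P2[rng.choice(np.flatnonzero(rank == 0))]
        # Step 4: elite-guided Levy-flight update
        P3 = np.clip(P2 + a0 * levy(dim) * (elite - P2), lo, hi)
        P4 = P3.copy()                                # Step 5: abandonment
        for i in range(N):
            if rng.random() < Pa:
                j, k = rng.choice(N, 2, replace=False)
                P4[i] = np.clip(P3[i] + rng.random() * (P3[j] - P3[k]), lo, hi)
        mixed = np.vstack([P2, P4])                   # Step 6: mix and truncate
        F = np.array([objectives(x) for x in mixed])
        rank = nondominated_sort(F)
        keep = []
        for r in range(rank.max() + 1):
            idx = np.flatnonzero(rank == r)
            idx = idx[np.argsort(-crowding(F, idx))]
            keep.extend(idx[:N - len(keep)])
            if len(keep) >= N:
                break
        P = mixed[keep]                               # Step 7: loop until t = T
    F = np.array([objectives(x) for x in P])
    return P, F[nondominated_sort(F) == 0]

pop, pf = cie_mocs()
```

Returning only the first front of the final population mirrors Step 6's extraction of the Pareto optimal solution set.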

CIE-MOCS pseudocode
Combining the above process, the pseudocode of CIE-MOCS is given in Algorithm 2.

Experiment
This section presents several experimental studies conducted to analyze the advantages of CIE-MOCS. First, Sects. 5.1 and 5.2 introduce two sets of benchmark models and seven performance metrics, respectively. Then, the parameter settings of all the comparison algorithms are given in Sect. 5.3. Next, 30 independent runs are carried out for CIE-MOCS and its comparison algorithms, the experimental results are statistically tested by mean and standard deviation (std), and the necessary discussions are given in Sect. 5.4 according to the statistical test results. Finally, the overall conclusions of the experiments are given in Sect. 5.5.

Benchmark models
To evaluate the performance of CIE-MOCS, two sets of benchmark models, namely the ZDT (Zitzler et al. 2000) and DTLZ (Deb et al. 2005) problems, are employed in the comparison experiments. It should be pointed out that the number of points on the true Pareto front (PF_true) differs between benchmark problems, and PF_true also serves as the reference front for the evaluation metrics; since the number of points on the approximate Pareto front (PF_known) obtained when solving a test problem cannot exceed the number of PF_true points, different benchmark models are given different initial population sizes. The characteristics of the benchmark problems and the initial population settings are shown in Table 1.

Performance metrics
In this section, according to the aspects of multi-objective algorithm performance to be evaluated, five groups of evaluation metrics are used: running time to measure computational cost; the average coverage (C) (Zitzler et al. 1998) to measure the proportion of non-dominated solutions; Spacing (S) (Schott 1995) and Maximum Spread (MS) (Tsou et al. 2006) to evaluate diversity; the distance-based evaluation metric (D) (Zitzler 1999) and generational distance (GD) (Veldhuizen and Lamont 1998) to evaluate convergence; and the comprehensive metric Hypervolume (HV) (Zitzler 1999). The definitions and explanations of these evaluation metrics are given in Table 2.
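As an illustration of how the distance-based metrics work, the sketch below implements one common formulation of GD and of Schott's Spacing; exact definitions vary slightly between papers (e.g. the averaging exponent in GD), so these are representative variants rather than the paper's exact formulas.

```python
import numpy as np

def generational_distance(pf_known, pf_true):
    # GD: average Euclidean distance from each obtained point to its
    # nearest reference-front point (smaller = better convergence).
    # One common variant; others take the root of the mean squared distance.
    d = np.sqrt(((pf_known[:, None, :] - pf_true[None, :, :])**2).sum(-1))
    return d.min(axis=1).mean()

def spacing(pf_known):
    # S: standard deviation of nearest-neighbour distances within the
    # obtained front (smaller = more uniform distribution of solutions)
    d = np.sqrt(((pf_known[:, None, :] - pf_known[None, :, :])**2).sum(-1))
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)
    return np.sqrt(((nn - nn.mean())**2).mean())

# Toy check: a front lying exactly on the reference has GD = 0, and
# equally spaced points have S = 0
ref = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(generational_distance(ref, ref))  # 0.0
print(spacing(ref))                     # 0.0
```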

Algorithm parameter settings
In our experiments, MOCS is compared with CIE-MOCS to verify the effectiveness of the improvements, and the classical algorithm MOEA/D is also selected to verify the advantages of the proposed algorithm. Here Pc is the crossover probability, Pm is the mutation probability, x_num is the number of decision variables, η1 is the simulated binary crossover parameter, and η2 is the polynomial mutation parameter; for other details of MOEA/D, please refer to previous work (Zhang et al. 2007). The parameters of each algorithm are shown in Table 3.

Performance metrics analysis
In this section, we conduct statistical tests and analyze the performance of the algorithms on the metrics in Table 2. Bold data indicate the algorithm with better performance on a benchmark model, and +/−/*/9 denotes better (+), worse (−), and similar (*) performance among the 9 benchmark models, for a more intuitive view of the experimental results.

Time efficiency indicator
In order to assess the time efficiency of the algorithms, this paper uses the CPU running time, measured in seconds (s). The experiments were run on an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz with 16 GB of RAM. The comparison results are shown in Table 4. As shown in Table 4, the CPU running times of MOCS and CIE-MOCS are short, and their efficiency is clearly better than that of MOEA/D. This is because MOCS is constructed with the Pareto-based method, which enables it to obtain solutions faster, showing that the way the multi-objective optimization algorithm is constructed in this paper is superior.

Count indicator
The count metric counts the number or proportion of non-dominated solutions in the solution set PF_known that meet given requirements. A non-dominated solution is a better, elite solution, so the number or proportion of non-dominated solutions in PF_known reflects, to a certain extent, the quality of the algorithm. In this paper, the metric C is used to calculate the dominance relationship between two solution sets: the smaller C is, the higher the proportion of non-dominated solutions obtained and the better the algorithm's performance. The specific data are given in Table 5. As shown in Table 5, CIE-MOCS outperforms the comparison algorithms on 6 of the 9 benchmark models; on ZDT4 all three algorithms score 1, so no clear difference can be seen, and MOEA/D performs better than the other algorithms on DTLZ2 and DTLZ4, but the overall performance of CIE-MOCS is better. MOCS is never the best in the comparison. With the improvements proposed in this paper, the ability of CIE-MOCS to obtain the non-dominated solution set is greatly improved.
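The pairwise coverage metric C(A, B) can be sketched as follows. Note that Zitzler's original definition uses weak dominance ("covers"); this sketch uses strict dominance for simplicity, which is an assumption of the illustration.

```python
import numpy as np

def dominates(f, g):
    # Strict Pareto dominance for minimisation
    return np.all(f <= g) and np.any(f < g)

def coverage(A, B):
    # C(A, B): fraction of solutions in B dominated by at least one
    # solution in A; C(A, B) = 1 means A dominates all of B.
    covered = sum(any(dominates(a, b) for a in A) for b in B)
    return covered / len(B)

A = np.array([[0.0, 0.0]])              # dominates both points of B
B = np.array([[1.0, 1.0], [2.0, 0.5]])
print(coverage(A, B))  # 1.0
print(coverage(B, A))  # 0.0
```

Because C(A, B) and C(B, A) are generally not complementary, both directions are usually reported, as in Table 5.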

Convergence indicator
Most convergence metrics reflect the closeness of PF_known to PF_true through the distance from PF_known to PF_true, with different metrics using different types of distance; most of them therefore require PF_true for comparison. The closer PF_known is to PF_true, the better the convergence of the algorithm. This paper uses GD and D to measure convergence, as shown in Table 6. In the D metric, CIE-MOCS outperforms the comparison algorithms on all 9 benchmark models except ZDT1. In the GD metric, MOEA/D is best on ZDT1 and CIE-MOCS performs similarly to MOCS on DTLZ7, while on the other seven benchmark models CIE-MOCS is better than the comparison algorithms. The comparison results show that CIE-MOCS has better convergence performance.

Comprehensive indicator
The hypervolume metric (HV) evaluates the performance of an algorithm by calculating the hypervolume of the space enclosed by the non-dominated solution set and a reference point. The larger the HV value, the stronger the comprehensive performance of the algorithm. Table 8 reports the HV values of the different algorithms on the benchmark problems.
As shown in Table 8, CIE-MOCS outperforms the comparison algorithms on 5 of the 9 benchmark problems, while MOEA/D is significantly better than CIE-MOCS and MOCS on ZDT4, DTLZ1 and DTLZ2, and CIE-MOCS performs similarly to MOEA/D on DTLZ4. On the whole, however, CIE-MOCS has the better comprehensive performance in this evaluation.
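For a bi-objective minimisation front, HV can be computed exactly with a simple sweep; the sketch below illustrates the principle (higher-dimensional HV requires more involved algorithms).

```python
import numpy as np

def hypervolume_2d(front, ref):
    # Exact HV for a bi-objective minimisation front: sort by f1 and sum
    # the rectangles between consecutive points and the reference point.
    pts = front[np.argsort(front[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
print(hypervolume_2d(front, ref=np.array([4.0, 4.0])))  # 6.0
```

Dominated points contribute nothing, which is why HV rewards both convergence toward PF_true and spread along it.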

Experimental conclusion
As described in Sects. 5.1-5.4, this experiment uses two groups of nine benchmark models and seven performance metrics in total to test CIE-MOCS and its comparison algorithms. First, the experimental results show that, with the improvements proposed in this paper, CIE-MOCS is greatly improved over MOCS in convergence, diversity and solving ability, indicating that the proposed improvements are effective. Compared with the classical algorithm MOEA/D, the construction method of the multi-objective optimization algorithm in this paper clearly has the advantage of high running efficiency, and CIE-MOCS also has obvious advantages on most other performance measures. Scalability, however, remains a shortcoming of CIE-MOCS and the main problem to be addressed in later research. In summary, the CIE-MOCS proposed in this paper is a hybrid optimization algorithm with high performance.

Case study
To verify that the algorithm proposed in this paper has practical application value, CIE-MOCS is applied to the cement industry. The cement industry is a major energy-consuming industry in China; its production process includes mining of raw materials, grinding of raw materials, firing of clinker, grinding of clinker, and packaging and transportation. Clinker grinding, the last process in cement production, accounts for up to 40% of the energy consumption of the production process. The cement grinding system ultimately produces finished cement with a certain specific surface area; the specific surface area is a quality indicator that determines the cement type and reflects whether the cement is qualified. Therefore, the clinker grinding process strongly influences both the energy consumption and the quality of the finished cement, and the stability of the working conditions during the operation of the cement grinding system is of great significance for energy saving and product qualification.
Based on the above description, we use the cement mill power consumption E and the cement specific surface area Q as optimization objectives. However, the two objectives cannot both simply be minimized: the power consumption can be minimized, but the specific surface area must lie in a range, because cement with too small a specific surface area is substandard. Therefore, we take the standard value of a cement type, set to 350, and minimize the absolute value of the difference between the specific surface area and this standard value; the closer this absolute value is to 0, the closer the specific surface area is to the standard value. The objective function of the optimization model is thus obtained.
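Following the description above, the optimization model can be written as follows (a reconstruction of Eqs. (8)-(10) from the surrounding text; the bound symbols are notational assumptions):

```latex
\begin{align}
\min_{x} \quad & E(x) \tag{8} \\
\min_{x} \quad & \lvert Q(x) - 350 \rvert \tag{9} \\
\text{s.t.} \quad & x_i^{\min} \le x_i \le x_i^{\max}, \quad i = 1, \dots, 11 \tag{10}
\end{align}
```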
where Eqs. (8) and (9) represent the minimization of the electrical consumption and of the absolute deviation of the specific surface area from its standard value, and Eq. (10) indicates that there are 11 decision variables, all subject to inequality constraints. Since the cement mill power consumption E and the cement specific surface area Q cannot be obtained directly, and the cement data have strong time-series characteristics, a long short-term memory network (LSTM) is used to predict them and provide the E and Q in Eqs. (8) and (9). After the objective function is established, the elites in the elite-guided strategy of Sect. 4.2 can be specified manually according to engineering practice: this optimization model takes the individual with the lowest power consumption as the elite individual participating in the update process, a choice that further reduces power consumption.
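A sketch of how the bi-objective evaluation might be wired up is shown below. The surrogate functions `predict_E` and `predict_Q` and the bounds are hypothetical placeholders standing in for the paper's trained LSTM models and plant constraints; only the objective structure (minimise E, minimise |Q − 350|) comes from the text.

```python
import numpy as np

# Hypothetical stand-ins for the trained LSTM surrogates that predict mill
# power consumption E and specific surface area Q from the 11 process
# variables; the real models in the paper are learned from plant data.
def predict_E(x):
    return float(np.sum(x**2))               # placeholder, not the real surrogate

def predict_Q(x):
    return float(300.0 + 10.0 * np.sum(x))   # placeholder, not the real surrogate

Q_STANDARD = 350.0  # standard specific-surface-area value from the paper

def objectives(x):
    # Bi-objective vector of Eqs. (8)-(9): minimise power consumption and
    # the deviation of the specific surface area from the standard value
    return np.array([predict_E(x), abs(predict_Q(x) - Q_STANDARD)])

# Eq. (10): box constraints on the 11 decision variables (illustrative bounds)
LOWER = np.zeros(11)
UPPER = np.ones(11)

x = np.clip(np.full(11, 0.5), LOWER, UPPER)
print(objectives(x))
```

This objective vector is what the optimizer (CIE-MOCS or a comparison algorithm) would evaluate at each candidate operating point.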
To show the superior performance of CIE-MOCS relative to other algorithms, this case again uses MOCS and MOEA/D as the comparison algorithms, and the optimal decision values obtained are applied through the model decision for 10 min. Without loss of generality, data from different moments were used to run the optimization 10 times; the experimental results are shown in Fig. 9 and Table 9.
As shown in Fig. 9, the power consumption and specific surface area values after optimization are better than the historical values, indicating that the framework of this case is effective. In the comparison between algorithms, CIE-MOCS outperforms MOCS in electrical consumption and outperforms both MOCS and MOEA/D in specific surface area, although its electrical consumption does not differ significantly from that of MOEA/D. The analysis in Table 9 shows that the specific surface area deviation of CIE-MOCS is still lower than that of MOEA/D, so CIE-MOCS performs best in this practical application. In addition, the convergence speed over 100 iterations of one optimization run is also used as an evaluation criterion; Fig. 10 compares the convergence of the cement mill power consumption and of the absolute deviation of the cement specific surface area.
As shown in Fig. 10, the convergence speed of CIE-MOCS is better than that of MOCS and MOEA/D for both power consumption and specific surface area. In the power consumption comparison, CIE-MOCS had converged within 30 iterations, MOEA/D converged at about 60 iterations, and MOCS had not converged to the minimum by the end of the 100 iterations. The comparison of the absolute deviation likewise shows that CIE-MOCS has the fastest convergence speed.

Conclusion
This paper proposes an elite-guided multi-objective cuckoo search algorithm based on crossover operation and information enhancement, namely CIE-MOCS. The working principle and existing problems of the CS algorithm are analyzed, and a Pareto-based multi-objective cuckoo search algorithm, MOCS, is built. To enhance the performance of MOCS, this paper strengthens population diversity through the crossover operation, enhances the information acquisition ability of the algorithm in the abandonment process through information enhancement so that the algorithm does not easily fall into a local optimum, and combines these with an elite-guided strategy that uses optimal individuals to guide the update process, making it easier for the algorithm to find the optimal solution and thus speeding up convergence. The experimental results also show that, over 9 benchmark functions and 5 groups of evaluation metrics, CIE-MOCS achieves better running efficiency, faster convergence, better solving ability and better distribution performance, with the best comprehensive performance on most test problems. To further verify the practical applicability of CIE-MOCS, this paper also gives a simple practical application case, in which CIE-MOCS shows better solving ability and convergence speed than the comparison algorithms when solving a minimization multi-objective optimization problem, further demonstrating the superior performance of the algorithm.
In future work, we will investigate the limited scalability of CIE-MOCS and apply it to more real-life multi-objective optimization problems. At the same time, we intend to extend our study to many-objective optimization problems (MaOPs). MaOPs appear widely in real-world applications and are a hot research topic, so solving MaOPs with CIE-MOCS is a promising direction.