NERS_HEAD: a new hybrid evolutionary algorithm for solving the graph coloring problem

The graph coloring problem is NP-hard. Currently, one of the most effective methods for solving it is the hybrid evolutionary algorithm. This paper proposes NERS_HEAD, a hybrid evolutionary algorithm with a new elite replacement strategy. In NERS_HEAD, a method for detecting the local optimal state is proposed so that the evolutionary process can escape the local optimal state by introducing diversity in time; a new elite structure and replacement strategy are designed to increase the diversity of the evolutionary population so that the evolution not only converges quickly but also escapes local optima in time. Comparison experiments with excellent current graph coloring algorithms on 59 DIMACS benchmark instances show that NERS_HEAD effectively improves both the efficiency and the success rate of solving graph coloring problems.


Introduction
The graph coloring problem (GCP) is a typical combinatorial optimization problem that has been applied to many issues, such as printed circuit board testing (Garey et al. 1976), frequency assignment in mobile radio telephone systems (Gamst 1986), resource allocation in bus networks (Woo et al. 1991), noise reduction in Very Large Scale Integration (VLSI) circuits (Maitra et al. 2010), minimizing the maximum vertex interference in Wi-Fi channel assignment (Orden et al. 2018), efficient reflection string analysis (Grech et al. 2018), and scheduling for high-throughput cascaded classifiers (Hsiao et al. 2019).
The GCP is a well-known NP-hard problem, and there are many approaches to solving it. The greedy heuristic is one of them: colors are assigned to the vertices one by one, in a specific or random order, so that no conflicts occur, forming a K-coloring scheme step by step. Such a method is swift but does not give the minimum number of colors in most cases. Efficient greedy heuristics include DSATUR (Brélaz 1979) and RLF (Leighton 1979). Meanwhile, exact algorithms have also been proposed for the graph coloring problem, such as that of Jabrayilov and Mutzel (2018).
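As a concrete illustration of the greedy heuristic described above, the following minimal Python sketch (ours, not taken from any of the cited works) colors vertices in a fixed order, always picking the smallest color not used by an already-colored neighbor:

```python
# Minimal greedy coloring sketch (illustrative): visit vertices in a fixed
# order and give each the smallest color absent among its colored neighbors.
# Fast and conflict-free, but rarely optimal in the number of colors.
def greedy_coloring(adj):
    """adj: dict mapping each vertex to a set of its neighbors."""
    color = {}
    for v in adj:                      # a specific (here: insertion) order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:               # smallest color not used by neighbors
            c += 1
        color[v] = c
    return color

# A triangle needs 3 colors; greedy finds them in this order.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(greedy_coloring(triangle))       # prints {0: 0, 1: 1, 2: 2}
```

The order of visitation matters; heuristics such as DSATUR refine exactly this choice by recoloring the most constrained vertex first.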
Local search algorithms are also widely used for the GCP. For a fixed number of colors (e.g., the chromatic number), a random scheme with conflicting edges is generated, and the conflicting edges are then reduced by iteratively adjusting vertex colors within the neighborhood. Hertz and de Werra proposed TabuCol (Hertz and de Werra 1987), which introduces the short-term memory mechanism of the tabu list: the last vertex color change is recorded in the tabu list and may not be repeated within a certain tabu tenure, effectively preventing cycling. Blöchliger and Zufferey proposed two improvement strategies, partial schemes and a reactive tabu tenure, achieving good results on hard benchmark instances (Blöchliger and Zufferey 2008). Other local search algorithms include the simulated annealing algorithm (Chams et al. 1987), the quantum annealing algorithm (Titiloye and Crispin 2011a), and local search with improved probability learning (Zhou et al. 2018).
Population-based evolutionary algorithms are also an effective approach to solving the GCP. In 1996, Fleurent and Ferland improved the genetic algorithm to solve the GCP (Fleurent and Ferland 1996). In 1998, Mitchell introduced the genetic algorithm in detail (Mitchell 1998). In 1999, Galinier and Hao introduced the hybrid evolutionary algorithm (HEA) by combining the tabu search algorithm with the evolutionary algorithm framework (Galinier and Hao 1999). In 2008, an adaptive memory algorithm based on recombination operators was proposed (Galinier et al. 2008), achieving results comparable to HEA. In 2010, an adaptive multi-parent crossover operator and a pool updating strategy were proposed (Lü and Hao 2010). In 2011, Titiloye and Crispin combined an evolutionary algorithm with an improved simulated annealing algorithm to form a distributed hybrid quantum annealing algorithm (Titiloye and Crispin 2011b), which further improved the results on the benchmark instances. Wu and Hao first used a preprocessing method to extract large independent sets from dense graphs with many vertices and then used a memetic algorithm to color the residual graphs, achieving improved results on the benchmark instances (Wu and Hao 2012). In 2013, Marappan and Sethumadhavan proposed uniparental conflict-gene crossover and conflict-gene mutation operators to make the genetic algorithm more effective (Marappan and Sethumadhavan 2013); this algorithm is also more flexible than HEA. In 2015, Moalic and Gondran proposed a hybrid evolutionary algorithm based on two individuals (HEAD), which reduced computation time (Moalic and Gondran 2015). In 2018, Moalic and Gondran introduced random crossover and unbalanced crossover into HEAD to improve its performance (Moalic and Gondran 2018).
In 2020, Sharma and Chaudhari proposed a tree-based maximum independent set extraction method in two steps to solve the GCP (Sharma and Chaudhari 2020), obtaining some reasonable experimental results. In 2021, a population-based weight learning framework was proposed to solve GCP (Goudet et al. 2021).
Reducing the computation time and increasing the success rate while preserving the optimal solution are the goals of improving approximation algorithms for the GCP. For example, hybrid evolutionary algorithms use tabu search to improve the obtained schemes locally and then use crossover operators to provide good diversity globally. Inspired by HEAD, we propose a hybrid evolutionary algorithm, NERS_HEAD, for solving the GCP. The main innovations are: 1. Strategy 1, a method for judging the local optimal state during the evolutionary process, so that diversity can be introduced at the right time; 2. Strategy 2, a new method for managing diversity that makes the elite individuals more diverse; 3. The combination of strategies 1 and 2 to form NERS_HEAD, improving the efficiency of solving the GCP.
The organization of this paper is as follows: Sect. 2 presents some necessary knowledge needed for the GCP and the basic framework of the hybrid evolutionary algorithm HEAD. Section 3 focuses on the new elite individual replacement strategy proposed in this paper. Experimental results are given in Sect. 4, and the necessary analysis of the experimental results is presented. Section 5 presents the conclusion and possible future improvements.

Related work
This section introduces concepts and basic algorithms that need to be used when solving GCP with hybrid evolutionary algorithms and the basic framework of the HEAD algorithm referenced in this paper.

Schemes and objective function
Given an undirected graph G = {V, E}, the graph coloring problem (GCP) can be described as dividing the vertex set V into K disjoint subsets. The vertices in each subset are assigned the same color, and the vertices in different subsets are assigned different colors. Each such division is called a K-coloring scheme (also simply a scheme). If the K-coloring scheme s = {V1, V2, …, VK} is such that for all u, v ∈ V with (u, v) ∈ E, u and v have different colors, then s is called a K-coloring solution (solution) of the GCP.
For a K-coloring scheme s, the objective function can be defined as the number of conflicting edges:

f(s) = |{(u, v) ∈ E : u and v are assigned the same color}|.    (1)

Obviously f(s) ≥ 0, and when f(s) = 0 the scheme s is a solution of the GCP. Therefore, the GCP can be expressed as an optimization problem: given the set S of all possible schemes, find s* ∈ S such that f(s*) = 0 and K is minimum. Solving the GCP means finding (searching for) K-coloring schemes that reduce the value of the objective function f to 0. The K-coloring scheme s1 is better than s2 if f(s1) < f(s2).
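The objective function of formula (1) can be sketched as follows (illustrative code, not the authors' implementation): f(s) simply counts the conflicting edges, and f(s) = 0 identifies a solution.

```python
# Sketch of the objective f(s) from formula (1): the number of edges whose
# endpoints share a color. f(s) = 0 means the scheme is a K-coloring solution.
def conflicts(edges, color):
    """edges: iterable of (u, v) pairs; color: dict vertex -> color index."""
    return sum(1 for u, v in edges if color[u] == color[v])

edges = [(0, 1), (1, 2), (0, 2)]               # a triangle
assert conflicts(edges, {0: 0, 1: 1, 2: 0}) == 1   # edge (0, 2) conflicts
assert conflicts(edges, {0: 0, 1: 1, 2: 2}) == 0   # a proper 3-coloring
```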

The tabu search algorithm Tabucol
The tabu search algorithm has been widely used since it was proposed in 1987. Algorithm 1 gives a typical tabu search algorithm for solving the GCP (Galinier and Hao 1999). Since tabu search does not depend on the quality of the initial scheme, the initial scheme is generally random: each vertex is assigned a color no greater than K, yielding a scheme with conflicting edges as the input to the algorithm. Lines 1-2 reset the number of iterations and create s* to save the gbest scheme. Line 3 is the termination condition; the size of MaxIter is determined by experiment. In line 5, the color of a vertex v is changed to minimize the objective function value (using a one-step move strategy, see Sect. 2.3). In line 6, the tabu list is introduced to avoid making the same vertex color change within a certain number of iterations. It is a two-dimensional list: one dimension is the vertices and the other is the colors. When a color change is performed, it is recorded in the tabu list, and the same operation cannot be performed again within a certain tabu tenure tl. Line 9 updates the gbest scheme s*. In line 12, after the number of iterations is reached, the gbest scheme s* is returned.
The parameter tl in line 6 of Algorithm 1 is called the tabu tenure. Its size directly affects the search over neighboring schemes: a longer tabu tenure can explore a larger search space, but if set too long it no longer plays the role of a tabu, while a small tabu tenure confines the search to a smaller range. So the size of the parameter tl is essential. A better method was obtained in Galinier and Hao (1999), which generates the tabu tenure in a semi-random manner using formula (3):

tl = A + α · f(s),    (3)

where A is a random integer in the range [0, 9], α = 0.6, and f(s) is the number of conflicting edges (objective function value) of the current scheme s. In this paper, we adopt the same configuration. Meanwhile, static, dynamic, and reactive tabu tenure strategies are introduced in FOO-PARTIALCOL (Blöchliger and Zufferey 2008).
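The Tabucol loop of Algorithm 1, together with the semi-random tenure of formula (3), can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function names and the adjacency-dict representation of the graph are our own choices.

```python
import random

# Tabucol sketch (illustrative): one-step moves on conflicting vertices,
# a (vertex, color) tabu list, aspiration for moves improving the best,
# and the semi-random tenure of formula (3): tl = A + 0.6 * f(s), A in [0, 9].
def tabucol(adj, k, max_iter=10_000, seed=0):
    rng = random.Random(seed)
    color = {v: rng.randrange(k) for v in adj}          # random initial scheme

    def f(col):  # number of conflicting edges (each edge counted once)
        return sum(1 for v in adj for u in adj[v] if u > v and col[u] == col[v])

    tabu = {}                                           # (vertex, color) -> iteration when freed
    best, best_f = dict(color), f(color)
    for it in range(max_iter):
        if best_f == 0:
            break                                       # solution found
        cur_f = f(color)
        conflicted = [v for v in adj if any(color[u] == color[v] for u in adj[v])]
        moves = []
        for v in conflicted:
            for c in range(k):
                if c == color[v]:
                    continue
                # change in f(s) if v takes color c (one-step move)
                delta = (sum(1 for u in adj[v] if color[u] == c)
                         - sum(1 for u in adj[v] if color[u] == color[v]))
                allowed = tabu.get((v, c), -1) < it or cur_f + delta < best_f
                if allowed:
                    moves.append((delta, v, c))
        if not moves:
            continue
        delta, v, c = min(moves, key=lambda m: m[0])    # best non-tabu move
        tabu[(v, color[v])] = it + rng.randrange(10) + int(0.6 * cur_f)  # formula (3)
        color[v] = c
        if cur_f + delta < best_f:
            best, best_f = dict(color), cur_f + delta   # update gbest s*
    return best, best_f
```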

Distance between schemes
For any two schemes s1 = {V^1_1, …, V^1_K} and s2 = {V^2_1, …, V^2_K}, changing the color of a vertex u in s1 from i to j moves u from V^1_i to V^1_j; this is called a one-step move. The distance between schemes s1 and s2 is defined as the number of one-step moves needed to convert s1 into s2, denoted d(s1, s2); obviously d(s1, s2) = d(s2, s1).
When the distance between two schemes is 1, they are said to be neighbors of each other. In Fig. 1a, s1 and s2 are neighbors: when the vertex G ∈ V^2_3 moves to V^2_2, s2 becomes the same as s1, and the transformation requires only one one-step move, so d(s1, s2) = 1. The distance between schemes depends only on the division of the vertices. In Fig. 1b, the colors of the vertices in s1 and s2 are different, but the vertex division is exactly the same, so d(s1, s2) = 0.
The distance between schemes can be computed using maximum weight matching on a bipartite graph. The schemes s1 and s2 are regarded as the two disjoint sides of a bipartite graph, the color subsets of each scheme are its vertices, and the number of common vertices between a color subset of s1 and one of s2 is the weight of the edge between them. Once the maximum weight matching of s1 and s2 is computed, the distance between s1 and s2 equals the number of vertices of the undirected graph G minus this maximum matching weight. Figure 2 gives an example of converting two schemes into a bipartite graph and calculating the distance: Fig. 2b is the bipartite graph abstracted from Fig. 2a. The number of vertices in Fig. 2a is 10, and the bipartite graph shows that the maximum matching weight is 6 (3 + 2 + 1), so the distance between s1 and s2 is 10 - 6 = 4 (see Fig. 2).
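The matching-based distance computation can be sketched as follows (illustrative; for small K a brute-force search over permutations stands in for a proper maximum-weight-matching algorithm such as the Hungarian method):

```python
from itertools import permutations

# Distance d(s1, s2) sketch: build the K x K matrix of class overlaps
# |V^1_i ∩ V^2_j|, find the maximum-weight matching between color classes,
# and subtract the matching weight from the number of vertices n.
def distance(s1, s2, n):
    """s1, s2: lists of K vertex sets (the color classes); n: |V| of G."""
    k = len(s1)
    w = [[len(s1[i] & s2[j]) for j in range(k)] for i in range(k)]
    # brute-force matching: adequate for small K, exponential in general
    best = max(sum(w[i][p[i]] for i in range(k)) for p in permutations(range(k)))
    return n - best
```

For example, with s1 = [{0, 1, 2}, {3, 4}, {5}] and s2 = [{0, 1}, {2, 3, 4}, {5}] on 6 vertices, the maximum matching weight is 2 + 2 + 1 = 5, giving a distance of 1; any scheme is at distance 0 from itself.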

Greedy partition crossover GPX
The Greedy Partition Crossover (GPX) is a crossover operator. The two parents (schemes) s1 = {V^1_1, …, V^1_K} and s2 = {V^2_1, …, V^2_K} partition the vertices into K subsets by color; for example, V^1_i in s1, where the superscript 1 denotes parent 1 and i denotes color i, contains the vertices assigned color i. Algorithm 2 obtains a new offspring by combining the two parents: the largest color subset of the two parents is selected alternately, in turn (lines 2-7); this subset is assigned to the offspring, and its vertices are removed from both parents (lines 8-9), until all subsets of the parents have been selected. In line 11, the vertices not yet assigned are randomly assigned to subsets of the offspring.
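A compact sketch of Algorithm 2 (ours, not the authors' code; the list-of-sets representation of a scheme is an assumption):

```python
import random

# GPX sketch: alternately take the largest remaining color class from each
# parent, copy it to the child, and delete its vertices from both parents;
# leftover vertices receive random colors.
def gpx(p1, p2, k, seed=0):
    rng = random.Random(seed)
    parents = [[set(c) for c in p1], [set(c) for c in p2]]   # working copies
    child = [set() for _ in range(k)]
    for i in range(k):
        src = parents[i % 2]                      # alternate between parents
        chosen = set(max(src, key=len))           # largest remaining class
        child[i] = chosen
        for p in parents:                         # remove chosen vertices
            for c in p:
                c -= chosen
    assigned = set().union(*child)
    leftover = set().union(*(set(c) for c in p1)) - assigned
    for v in leftover:                            # random colors for the rest
        child[rng.randrange(k)].add(v)
    return child
```

With p1 = [{0, 1, 2}, {3, 4}, {5}] and p2 = [{0, 3}, {1, 4, 5}, {2}], the child inherits {0, 1, 2} from p1, then {4, 5} from p2, then {3} from p1, with no leftover vertices.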

Hybrid evolutionary algorithm in duet (HEAD)
Hybrid evolutionary algorithms are a class of algorithms that combine local search algorithms with evolutionary algorithms and are often used to solve optimization problems. In 2018, Moalic and Gondran gave the hybrid evolutionary algorithm HEAD (Moalic and Gondran 2018), shown as Algorithm 3. HEAD removes the complex selection and update operators of the evolutionary algorithm and uses an elite strategy to manage population diversity. Algorithm 3 randomly initializes two parents (p1, p2), two elite individuals (elite1, elite2), and the gbest (line 1). GPX is used to generate two different offspring, and Tabucol then improves the two offspring, which replace the two parents (lines 4-7). The gbest and elite1 are updated in lines 8 and 9. After each generation cycle (Iter_cycle = 10), the elite2 from the previous cycle replaces p1 (line 11). The final output is the gbest scheme.
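The two-individual loop of Algorithm 3 can be rendered schematically as follows. The crossover and local search are passed in as callables so the skeleton stays self-contained; the elite bookkeeping is a simplified reading of HEAD's cycle mechanism, not the authors' code.

```python
# Schematic of HEAD's two-individual loop (illustrative): each generation,
# both parents are replaced by locally improved crossover offspring; every
# Iter_cycle generations the elite of the cycle before last is injected.
def head_loop(init, crossover, local_search, f, cycles=3, iter_cycle=10):
    p1, p2 = init(), init()
    elite1 = elite2 = min(p1, p2, key=f)    # elites of current / previous cycle
    gbest = elite1
    for gen in range(cycles * iter_cycle):
        c1, c2 = crossover(p1, p2), crossover(p2, p1)
        p1, p2 = local_search(c1), local_search(c2)
        best = min(p1, p2, key=f)
        if f(best) < f(elite1):
            elite1 = best                   # best individual of running cycle
        if f(best) < f(gbest):
            gbest = best
        if f(gbest) == 0:
            break                           # solution found
        if (gen + 1) % iter_cycle == 0:     # end of a generation cycle
            p1, elite2, elite1 = elite2, elite1, best
    return gbest
```

A toy run with integer "schemes" shows the control flow: starting from 10, a local search that subtracts 1 per generation drives the best objective to 0.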
Because the population uses only two individuals, there is no redundant selection, and each individual is involved in the evolutionary process. Experiments show that it can quickly find the solution of small and medium-sized graphs. However, introducing the elite individual with a fixed generation cycle is not flexible. Moreover, there are fewer options for elite individuals, which sometimes do not effectively provide diversity. So this paper proposes a new elite individual replacement strategy to improve the HEAD algorithm.

Hybrid evolutionary algorithm (NERS_HEAD)
This section discusses the hybrid evolutionary algorithm NERS_HEAD proposed in this paper to solve GCP and the replacement strategy of its elite individuals. This section includes: (1) The framework of NERS_HEAD (Sect. 3.1); (2) The local optimal state detection method based on the change of objective function value(Sect. 3.2); (3) the elite construction method and fitness function for selecting elite individual replacements (Sect. 3.3).

NERS_HEAD framework
Inspired by HEAD, we give a hybrid evolutionary algorithm NERS_HEAD with a new elite replacement strategy, shown as Algorithm 4. In Algorithm 4, six schemes are randomly generated: the two parents (p1, p2), two elite individuals (elite1, elite2), temp, the individual with the smallest objective function value in each generation period, and gbest, the individual with the smallest objective function value over the whole evolutionary process (line 1). GPX is used to generate two different offspring, and Tabucol then improves the two offspring, which replace the two parents (lines 4-7). Lines 8-9 update temp and gbest. During evolution, the change of the objective function value is used to determine whether the evolutionary process has fallen into a local optimal state (lines 10-13) (Sect. 3.2). If it is trapped, a new scheme is obtained by MPX (multi-parental crossover) of the elite individuals from the previous two periods (line 15); this scheme is added to the elite pool after being improved by Tabucol (line 16) (Sect. 3.3). The fitness between the elites and the parents is calculated, and the pair with the smallest fitness is selected for elite replacement (lines 17-21) (Sect. 3.3). Finally, the elite individuals and the generation counters are updated (lines 22-27), and gbest is output (line 29). This paper uses MPX for the case of two parents (m = 2); the pseudo-code is Algorithm 5. The input of the algorithm is two schemes (parents), and the output is one scheme (offspring). It first selects the largest color subset among the parents, assigns it to the offspring, and removes all vertices of this subset from the parents (lines 2-4). After all K color subsets have been selected, the vertices in the offspring that have not been assigned a color are assigned random colors (line 6).

Method of detecting local optimal state
Moalic and Gondran proposed two versions of HEAD (Moalic and Gondran 2018): the first, without the strategy of adding elite individuals, allows the population to converge quickly but has a low success rate; the second introduces elite individuals after a fixed number of generations, which improves the success rate but reduces efficiency. Inspired by these two versions, in NERS_HEAD only p1 and p2 participate in evolution so that the process converges faster. At the same time, every u generations it is judged whether the evolutionary process has fallen into a local optimal state; if so, it must jump out in time. This section discusses the method for judging whether the evolutionary process is in a local optimal state; the strategy for jumping out of it is discussed in Sect. 3.3.
In NERS_HEAD, the change of the objective function value and the distance between p 1 and p 2 are used to detect whether the evolutionary process falls into a local optimal state.
First, if the objective function value is unchanged after u generations, the evolutionary process has fallen into a local optimal state. For a scheme s, searching for a feasible scheme according to formula (1) is the process of driving the objective function value f(s) down to 0. Every u generations, the objective function value of the gbest is recorded in f_curr (Algorithm 4, line 11), while f_pre records the objective function value of the gbest from the previous u generations (Algorithm 4, line 13). Δ is the difference between f_pre and f_curr (Algorithm 4, line 12); Δ = 0 means the population has fallen into a local optimal state. The length u of the generation period is determined experimentally (Sect. 4.2).
Second, d(p1, p2) < ε · n also indicates that the evolutionary process is trapped in a local optimal state, where n is the number of vertices and ε = 0.1. This condition prevents the algorithm from stalling when the first condition is not triggered, since the algorithm terminates when the distance between the parents reaches 0.
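The two triggers above can be sketched as a single predicate (illustrative helper, not from the paper's code):

```python
# Local-optimum detection sketch (Sect. 3.2): every u generations, declare the
# evolution stuck if the gbest objective has not improved (delta = 0) or the
# parents have drifted too close together (d(p1, p2) < eps * n, eps = 0.1).
def is_local_optimum(f_pre, f_curr, dist_p1_p2, n, eps=0.1):
    delta = f_pre - f_curr          # improvement of gbest over the last period
    return delta == 0 or dist_p1_p2 < eps * n

assert is_local_optimum(f_pre=5, f_curr=5, dist_p1_p2=300, n=1000)  # no progress
assert is_local_optimum(f_pre=7, f_curr=5, dist_p1_p2=50, n=1000)   # parents too close
assert not is_local_optimum(f_pre=7, f_curr=5, dist_p1_p2=300, n=1000)
```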

Strategy for jumping out of local optimal state
In the process of finding the gbest scheme, HEAD converges quickly in the early stage but, after falling into a local optimal state, finds it difficult to escape. Completely breaking the structure of a scheme to increase diversity leads to worse results, while too little diversity cannot enable a timely escape. Hence, introducing the proper amount of diversity is crucial.
Since there are only two individuals, as much diversity as possible is obtained from the elite individuals of previous generation periods. This paper proposes saving the elite individuals of the two preceding generation periods and combining the two elites with a multi-parental crossover (MPX) to produce an individual added to the elite pool. This adds diversity while preserving the structure of most of the schemes.
MPX (m = 2) differs from GPX: instead of alternating between the two parents, MPX selects the largest color subset over both parents at each step, so color subsets of the same parent can be chosen consecutively. Since crossover does not improve the objective function value (on the contrary, the random coloring in the last step can increase it), the offspring is improved by Tabucol before being added to the elite pool (Algorithm 4, line 16). Because Tabucol is time-consuming, and the purpose of this step is not to find a solution but to reduce the conflicts generated in the last step, the number of iterations is set to 0.1 · Iter_TC. In this way, the elite pool contains the elite individuals (elite1 and elite2) from the two preceding generation periods and the newly generated individual (elite3) from MPX.
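A sketch of MPX for m = 2 (ours, following the description of Algorithm 5; the list-of-sets scheme representation is an assumption). The only structural difference from GPX is that the largest class is taken over the pooled classes of both parents rather than alternately:

```python
import random

# MPX sketch (m = 2): each of the K steps takes the largest color class over
# BOTH parents, so classes of the same parent may be chosen several times in
# a row; leftover vertices receive random colors.
def mpx(p1, p2, k, seed=0):
    rng = random.Random(seed)
    pool = [set(c) for c in p1] + [set(c) for c in p2]   # classes of both parents
    child = []
    for _ in range(k):
        chosen = set(max(pool, key=len))      # largest class over both parents
        child.append(chosen)
        for c in pool:                        # remove chosen vertices everywhere
            c -= chosen
    leftover = set().union(*(set(c) for c in p1)) - set().union(*child)
    for v in leftover:                        # random colors for the rest
        child[rng.randrange(k)].add(v)
    return child
```

With p1 = [{0, 1, 2, 3}, {4}, {5}] and p2 = [{0, 4}, {1, 2}, {3, 5}], MPX takes {0, 1, 2, 3} from p1 first because it is the largest class overall, even though GPX would have alternated to p2 next.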
The reasons for choosing MPX rather than GPX are: 1. GPX is used between the two parents(p 1 and p 2 ) during each evolution, but the preserved elite individuals come from both parents, so the reuse of GPX between elite individuals may produce the same scheme as the offspring. 2. The combination of different crossover operators (GPX between parents and MPX between elite individuals) can improve the global search ability of the population.
Meanwhile, this paper proposes a new approach to choosing the elite for replacement, which calculates the fitness between the elite individuals(elite 1 , elite 2 , elite 3 ) and the parents (p 1 and p 2 ) based on the objective function and distance, as in the formula (4).
where fit(i, j) represents the fitness value of i and j, i is an elite individual, j is a parent, f_ij is the difference between the objective function values of i and j, d_ij is the distance between i and j, β = 0.15, and n is the number of vertices. The proposed fitness calculation combines distance and objective function. According to the experiment (Sect. 4.5), the difference of the objective function reflects the degree of similarity between two individuals to some extent: the larger the difference, the less similar the two individuals. However, d_ij reflects similarity better than f_ij, because two individuals at distance 0 must be identical, while f_ij = 0 does not imply that two individuals are identical. The influence of f_ij on the fitness value is therefore weakened by taking its square root. Adjusting β changes the influence of the distance on the fitness value: the smaller the value of β, the greater the influence of the distance, and the larger the value of β, the smaller the influence. The idea of the formula is to find the smallest fitness between elite individuals and parents, i.e., the most similar pair. Since the purpose of increasing diversity is to prevent the individuals from gradually homogenizing, the parent j is replaced by the elite i corresponding to the minimum fitness value fit(i, j), which prevents the two individuals after replacement from being too similar.
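Since the exact composition of formula (4) must be partly inferred, the following sketch assumes the form fit(i, j) = sqrt(f_ij) + d_ij / (β · n). The square root on f_ij and the fact that a smaller β amplifies the distance term are stated in the text, but the precise combination of the two terms is our assumption.

```python
import math

# Hedged sketch of formula (4). ASSUMED form: sqrt(|f_i - f_j|) + d_ij/(beta*n).
# Identical individuals give fit = 0; a smaller beta amplifies the distance
# term, matching the description in the text.
def fitness(f_i, f_j, d_ij, n, beta=0.15):
    """f_i, f_j: objective values; d_ij: distance; n: number of vertices."""
    f_diff = abs(f_i - f_j)                 # objective function difference f_ij
    return math.sqrt(f_diff) + d_ij / (beta * n)
```

Under this form, the elite-parent pair with the minimum fitness is the most similar one, which is exactly the pair the replacement step targets.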

Experimental and analysis
This section introduces the experimental instances, the parameter settings of the experiments, and the detailed results and analysis. Experiments on the effectiveness of strategy 2 are given in Sect. 4.3. A comparison with the results of excellent current algorithms is also made to verify the effectiveness of the algorithm improvements (Sect. 4.5).
(Fig. 3: the experiment on the local optimal state detection period u.)

Instances and experimental environment
The instances used in our experiments come from the DIMACS challenge benchmark instances, most of which are random or quasi-random. There are a total of 59 instances, which are widely used in the research of graph coloring algorithms.
The instance details are shown in Table 1, where the chromatic number is denoted χ(G) (a and b in [a, b] are the lower and upper bounds of the chromatic number). The density is 2m/n(n - 1), where m and n are the numbers of edges and vertices of the graph, respectively. The NERS_HEAD algorithm is coded in C++. The experimental environment is Windows 10 with an Intel Xeon Platinum 8369HC 3.3 GHz processor (4 cores) and 8 GB of RAM. Since HEAD is open source, we run HEAD and NERS_HEAD under the same experimental conditions to compare the results.

Local optimal state detection period u
During the evolution of NERS_HEAD, every u generations the algorithm detects whether it has fallen into a local optimal state. Two instances, dsjc1000.1 and dsjc500.5, are used to determine a suitable value for the detection period. The experimental results are shown in Fig. 3, where the horizontal coordinate is the size of the period u, ranging from 1 to 15. For each value, the instance is run 50 times; the left vertical coordinate indicates the number of generations, and the right vertical coordinate indicates the success rate (the number of runs that obtain the optimal solution / the total number of runs). As u increases, the success rate tends to a stable value, but the average number of generations gradually increases and takes more time. Therefore, u = 5 is chosen as the local optimal state detection period for the final experiments, because at this point the success rate is high and the number of generations is low.

Elite individual replacement effectiveness experiment

Figure 4 compares the average number of evolutionary generations using the elite replacement of strategy 2 with the elite replacement strategy of HEAD. NERS_HEAD reduces the number of generations in most instances; Fig. 4a shows the instances requiring fewer than 2000 generations and Fig. 4b those requiring more than 2000. If better-quality elites can be selected to provide diversity during the evolutionary process, the number of generations can be reduced. So, strategy 2 is effective.

Meanwhile, the instance dsjc1000.1 is used to explore the change of the objective function value during the evolutionary process. HEAD and NERS_HEAD output the objective function value of the gbest scheme every 50 generations. After 20 runs, the objective function values at the same evolutionary generation were averaged and plotted (see Fig. 5).
Overall, the objective function value decreases with the number of generations during evolution, and NERS_HEAD finds the solution faster.

Relationship between the difference of objective function and distance
In designing the fitness function, the objective function difference is used to measure the degree of similarity between two individuals. There is no theoretical basis for whether individuals with larger or smaller objective function differences are more similar, so we determine this roughly by experiment. For the instance dsjc1000.1, at each generation we output the distance and the objective function difference between the two parents. After Tabucol, the objective function difference of the schemes lies in a small range, and two schemes with the same objective function difference may correspond to multiple distances. Therefore, the horizontal coordinate is the objective function difference and the vertical coordinate is the average distance corresponding to each difference. In Fig. 6, the curve shows that as the objective function difference grows, the distance between the two individuals increases. This shows that the objective function difference of two schemes is inversely correlated with their degree of similarity.

Algorithm comparison
Solving the vertex coloring problem (VCP) based on integer linear programming (ILP) is an important exact approach. For example, Jabrayilov and Mutzel (2018) gave the algorithm ILP-solver, which includes four models: AREP (basic sequencing problem), POP (mixed partial ordering problem), ASS (extended ASS-S), and POP2 (a hybrid of POP and ASS). The comparison of HEAD, NERS_HEAD, and ILP-solver (Jabrayilov and Mutzel 2018) is shown in Table 2. The first seven columns have the form x(y), where x and y are the experimental results of NERS_HEAD and HEAD (Moalic and Gondran 2018), respectively; when x = y, only x is listed. The first column is the name of the instance, the second is the chromatic number, and the third, K, is the number of colors. The fourth column is the number of Tabucol iterations. The fifth column, Success, is success_runs/total_runs: the right side is the total number of runs, usually 20 (10 for the larger graphs C2000.5 and C4000.5), and the left side is the number of runs in which the solution was found. The sixth column, Generation, is the number of crossovers (generations). The seventh column is the average computation time. The last two columns show the optimal solution and the average computation time of ILP-solver from Jabrayilov and Mutzel (2018); for each instance, only the result of the ILP-solver model with lb = ub and minimum time is given. In Table 2, bold font indicates better values.
For the success rate, NERS_HEAD is worse than HEAD on two data items, better on three, and the same on the rest. Comparing the reductions in generations and computation time is meaningful only when the success rate is maintained. For the 12 instances of the first category (dsjc), NERS_HEAD reduces the number of generations for seven instances and the computation time for six. The reduction in computation time is not particularly significant because of the additional time cost of constructing the elite pool. In the second category (le450), most instances are relatively simple and can be solved quickly; NERS_HEAD reduces the number of generations and the computation time for 3 instances. For le450_15c and le450_15d, an alternative approach is used: adding a randomly generated scheme to the elite pool and directly replacing one of the two parents with it yields a more significant improvement in the success rate (le450_15c improves from 3/20 to 20/20, and le450_15d from 1/20 to 20/20). In the remaining instances, the number of generations and the computation time are reduced for most instances.
From Table 2, among the 59 instances and 66 data items, NERS_HEAD reduces the number of evolutionary generations for 28 data items and the computation time for 26 data items compared with HEAD. The average reduction in the number of evolutionary generations is 29.77%, and the average reduction in computation time is 22.56%. In particular, dsjc500.1 reduces the number of evolutionary generations by 51.40% and the computation time by 40%, and r1000.1c reduces the number of evolutionary generations by 82.30% and the computation time by 73.45%. Although the number of generations in each run is somewhat random, the effectiveness of the strategies is demonstrated if it is reduced in most instances. Therefore, NERS_HEAD is effective in reducing the number of evolutionary generations and the computation time. Table 2 also compares the solutions of the four models (AREP, POP, ASS and POP2) of ILP-solver (Jabrayilov and Mutzel 2018) with the optimal solutions of NERS_HEAD. On the 37 instances reported in Jabrayilov and Mutzel (2018), ILP-solver obtained the solutions of 28 instances within one hour, while NERS_HEAD found the optimal solution of 33 instances (in 29 cases with a success rate of 100%, in 4 cases not in all runs). In terms of the average computation time on the 27 instances with the same solution, NERS_HEAD takes 0.75 s and ILP-solver 192.92 s. Therefore, compared with ILP-solver, NERS_HEAD can obtain the same solutions at a lower time cost.
In general, in most instances, NERS_HEAD can obtain the optimal solution with high success rate and low time cost.

Conclusion
This paper proposes a hybrid evolutionary algorithm, NERS_HEAD, based on elite replacement to solve the GCP. NERS_HEAD guides the evolution direction of the population by setting criteria for whether the evolutionary process is trapped in a local optimal state, and improves the global search ability by increasing the diversity of the elites. The comparison experiment with the excellent current GCP algorithm HEAD on 59 DIMACS instances shows that NERS_HEAD can reduce the number of evolutionary generations and the computation time for most instances while maintaining the success rate; the average reductions in evolutionary generations and computation time reach 29.77% and 22.56%, respectively. Meanwhile, NERS_HEAD is compared with the ILP-based exact algorithm ILP-solver and performs better in terms of optimal solutions and solution time. Therefore, NERS_HEAD is a more effective GCP solving algorithm.
Constructing elite individuals suitable for general instances has always been one of the main problems in solving the GCP. MPX is used in this paper, but many crossover and mutation operators exist for solving the GCP (Marappan and Sethumadhavan 2020). For different types of graphs, changing the operator may give better results. In future work, we can study the characteristics of graphs and choose different operators to construct elite individuals.