Optimization of Process Parameters on Surface Hardness and Energy Consumption in Milling of 7050 Aluminum Alloy using Enhanced NSGA-II

Aluminum alloys combine high strength with light weight and are widely used for aircraft fuselages, propellers, and other parts that operate under high loads. High-quality aluminum alloy parts processed on computerized numerical control (CNC) machines are often costly to machine. In order to achieve high surface quality while controlling processing costs, this article takes the workpiece surface hardness and machining energy consumption as the optimization targets, and an intelligent optimization algorithm is used to find the combination of milling parameters that best achieves them. CNC milling parameter optimization is a multi-parameter, multi-objective, multi-constraint, discrete nonlinear optimization problem that is difficult to solve. To address this challenge, an improved NSGA-II, named enhanced population diversity NSGA-II (EPD-NSGA-II), is presented. EPD-NSGA-II incorporates the normal distribution crossover, the adaptive mutation operator of differential evolution, a crowding distance calculation that considers variance, and a modified elite retention strategy to enhance population diversity. Twelve test functions are chosen to verify the performance of EPD-NSGA-II. The values of three evaluation indicators show that the proposed approach has good distribution and convergence performance. Finally, the approach is applied to the milling parameter optimization of 7050 aluminum alloy to obtain the optimal solutions. Results indicate that EPD-NSGA-II is effective for the milling parameter optimization problem.


Introduction
Modern machining technology has expanded significantly, and milling is an important part of the machining process (Mahdavinejad et al. 2012). Milling is used to process complex shapes and features such as molds, thin-walled complex curved surfaces, artificial prostheses, and blades. Aluminum alloy has high strength, good resistance to stress corrosion cracking, good fracture toughness, and good fatigue resistance (Rambabu et al. 2017). It is widely used in the mold manufacturing and aerospace industries as a common material for mass production of high-strength lightweight parts (Sathish and Karthick 2020). Aluminum alloy is also used for thin-walled parts (Dai et al. 2020) and has a wide range of application prospects. However, the aluminum alloy milling process consumes valuable natural resources and emits harmful pollutants (Diaz et al. 2011). Therefore, aluminum alloy milling requires high-quality machined surfaces and low energy consumption.
Surface work hardening of materials is a common phenomenon in machining, so it is one of the important factors affecting the surface quality of the workpiece. After the workpiece is processed, the hardness of the machined surface is higher than the original hardness of the workpiece. This phenomenon is called work hardening, and the machined surface is called the surface hardening layer. The hardness of the surface hardening layer caused by machining is inhomogeneous, which reduces the wear resistance and fatigue strength of parts. In addition, the formed surface hardening layer may cause subsequent tool wear, which affects machining efficiency and quality. Microhardness, Rockwell hardness, and Brinell hardness tests are usually used to obtain the hardness of the surface hardening layer. The formation of work hardening is very complex and depends on material properties, cooling and lubrication conditions, tool geometry, and cutting parameters. Bhopale et al. (2015) used the microhardness test to measure the surface hardness of machined Inconel 718 and studied the relationship between different combinations of milling parameters and the surface hardness. The test results show that the surface hardness changes significantly when the milling parameters are changed.
The milling process consumes a lot of energy while converting materials into high-quality products. As the global climate deteriorates, research on reducing processing energy consumption is of great significance. The energy consumption of machine tools in the use stage has been studied since the early 1980s (Imani Asrai et al. 2018). Filippi et al. (1981) noticed that increasingly powerful motors were installed on machine tools, but the installed power was never fully utilized. There are generally two ways to improve energy efficiency: the first is to use advanced equipment and new machining technologies, and the second is to determine optimal parameters via optimization techniques (Nguyen 2019). Vu et al. (2020) optimized the process parameters of AISI H13 steel hard milling; the cutting energy could be reduced by about 14% compared with the worst case.
Researchers have used different methods to optimize cutting parameters in order to improve workpiece surface quality and reduce energy consumption. Some scholars used the orthogonal design method (Hanafi et al. 2012; Ghani et al. 2004) to obtain the optimal cutting parameter level within the experimental design, but this method cannot determine whether a better solution exists beyond the experimental parameter levels. To solve this problem, other scholars established a mathematical relationship between the input parameters and the target and then applied a meta-heuristic algorithm to search among different solutions. A genetic algorithm (GA) was applied to find the cutting conditions that minimize machining time in micro end milling of C360 copper alloy (Kumar 2018). An improved K-means multi-objective particle swarm optimization algorithm was used to decrease the temperature and energy consumption in the corner milling of brass. An adaptive simulated annealing algorithm was used to optimize the milling parameters of stainless steel 304 to improve energy efficiency and power factor and decrease surface roughness (Nguyen et al. 2020). However, these algorithms differ in solving efficiency and solution accuracy; each algorithm has the problems it suits best, so it is necessary to choose a suitable algorithm for a specific problem.
The NSGA-II algorithm proposed by Deb et al. (2002) has good convergence and high accuracy of optimization results. It is one of the most popular multi-objective optimization algorithms (Yusoff et al. 2011). Sen et al. (2019) applied the NSGA-II algorithm to the multi-objective optimization of end milling parameters for Inconel 690. NSGA-II was used to optimize cutting parameters in the turning of AISI 4140 steel to reduce cutting energy and improve energy efficiency (Park et al. 2016). These studies successfully verified the feasibility of applying NSGA-II to the optimization of cutting parameters. However, the optimization of milling parameters is usually a multimodal landscape optimization problem (Huang et al. 2015), which makes NSGA-II easily fall into a local optimum. To improve the performance of NSGA-II, Wang et al. (2011) used dynamic crowding distance and controlled elitism, which increased the diversity of the algorithm. Fu et al. (2014) divided the population into internal and external parts to improve population diversity and local search capability: the external population stored non-dominated solutions, while the internal population combined crowding distance and hybrid grid methods to participate in the evolution of generations. Gu et al. (2020) adopted a symmetric Latin hypercube design to generate the initial population and introduced an adaptive differential evolution algorithm to improve the diversity of candidate solutions and the convergence efficiency. D'Souza et al. (2010) proposed a time-space trade-off method to improve non-dominated sorting and reduce runtime complexity. It can be seen that current improvements to NSGA-II mainly target population diversity and convergence efficiency; few articles improve population diversity, search performance, and solving efficiency at the same time.
Thus, this paper proposes an improved NSGA-II, named enhanced population diversity NSGA-II (EPD-NSGA-II), to optimize the 7050 aluminum alloy milling parameters and obtain minimum surface hardness and energy consumption. The remainder of this paper is organized as follows: Section 2 proposes the EPD-NSGA-II. Section 3 verifies the performance of the EPD-NSGA-II. Section 4 applies the EPD-NSGA-II to find an ideal solution in a 7050 aluminum alloy milling experiment. Conclusions are drawn in Section 5.

An improved NSGA-II for multi-objective optimization
NSGA-II was obtained from NSGA (Srinivas and Deb 1994) after a series of improvements. NSGA-II uses fast non-dominated sorting to reduce computational complexity and adopts an elitism strategy and crowding distance assignment to maintain diversity.

Multi-objective optimization problem
The multi-objective optimization problem (MOP) has a set of globally optimal solutions, called the Pareto set; every solution in the set is optimal for the problem. A MOP can usually be expressed as:

min F(x) = [f_1(x), f_2(x), ..., f_m(x)]
s.t. g_j(x) ≤ 0, j = 1, 2, ..., p
x_L ≤ x ≤ x_U

where F is the objective vector and m is the number of objective functions. g_j(x) is the j-th inequality constraint and x is the decision variable; x_L and x_U are the lower and upper bounds of the decision variable, respectively. Assuming that x_1 and x_2 are two solutions to this MOP, they have the following relationship:
- If f_k(x_1) ≤ f_k(x_2) for all objectives k and f_k(x_1) < f_k(x_2) for at least one sub-objective k, then x_1 dominates x_2.
- If there exist sub-objectives with f_k(x_1) ≤ f_k(x_2) and f_l(x_1) ≥ f_l(x_2) at the same time, then x_1 and x_2 are non-dominated with respect to each other.
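The dominance relations above can be sketched in a few lines of Python (an illustrative check for minimization problems, not code from the paper):

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse in every objective and strictly better in at least one."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

# x1 dominates x2; the last two pairs are mutually non-dominated
print(dominates([1.0, 2.0], [1.0, 3.0]))   # True
print(dominates([1.0, 2.0], [0.5, 3.0]))   # False
print(dominates([0.5, 3.0], [1.0, 2.0]))   # False
```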

Original NSGA-II
NSGA-II algorithm is composed of seven parts: population initialization, selection, crossover, mutation, non-dominated sorting, crowding distance calculation, and elitism strategy. The pseudocode of the NSGA-II is shown in Table 1.

Enhanced population diversity NSGA-II
EPD-NSGA-II is proposed, focusing on three aspects: population diversity, search performance and solving efficiency. Firstly, the normal distribution crossover (Min et al. 2009) (NDX) and the adaptive mutation operator of differential evolution (DE) algorithm (Storn and Price 1997) are introduced into NSGA-II to enhance the spatial search ability and increase the population diversity to avoid falling into local optima. Secondly, the deductive sort (McClymont and Keedwell 2012) is adopted to increase the solving efficiency. In addition, a novel crowding distance formula considering variance is proposed to reserve individuals with large differences on different sub targets. Finally, the elite retention strategy is modified to reserve more non-dominated sets in the early stages of evolution. The EPD-NSGA-II flow chart is shown in Fig. 1. The improved part is shown with a blue dashed line. In the following, all parts of the EPD-NSGA-II are described comprehensively.

(a) Normal distribution crossover
The crossover operator combines parent solutions to generate offspring solutions (Chacón and Segura 2018). The simulated binary crossover (SBX) operator used in NSGA-II (Deb and Agrawal 1995) is the most popular operator in multi-objective evolutionary algorithms. However, the small exploration and exploitation interval of the SBX operator affects the convergence speed. To explore a wider space when the exploration and exploitation probabilities are both 0.5, the NDX is introduced into NSGA-II. In the NDX formula, u is a random number uniformly distributed in the interval (0, 1), and N(0, 1) is a normally distributed variable with a mean of 0 and a standard deviation of 1. x_1 and x_2 are two randomly selected parent individuals, y_1 and y_2 are the two offspring individuals after crossover, and j denotes the j-th dimension of the decision vector. The SBX and NDX methods are compared in a one-dimensional search space to show the advantage of the NDX: 10,000 offspring are generated from the parent individuals x1 = 0.3 and x2 = 0.6. Fig. 2 shows the distribution of offspring obtained by the SBX and NDX methods. The NDX method searches a wider space, so the generated individuals have better diversity.
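The NDX formula itself is not reproduced above. The sketch below follows the commonly published NDX form, in which offspring are placed symmetrically about the parents' midpoint with a normally distributed spread; the 1.481 coefficient and the 50/50 sign choice are taken from the NDX literature, not from this paper, so treat them as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ndx_crossover(x1, x2):
    """One NDX step per dimension: offspring are spread around the parents'
    midpoint with a normally distributed perturbation, which searches a
    wider interval than SBX."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    mid = (x1 + x2) / 2.0
    # |N(0,1)|-scaled spread; 1.481 is the coefficient cited in NDX papers
    spread = 1.481 * np.abs(x1 - x2) * np.abs(rng.standard_normal(x1.shape)) / 2.0
    u = rng.random(x1.shape)
    # with probability 0.5 the perturbation is added, otherwise subtracted
    y1 = np.where(u <= 0.5, mid + spread, mid - spread)
    y2 = np.where(u <= 0.5, mid - spread, mid + spread)
    return y1, y2

y1, y2 = ndx_crossover([0.3], [0.6])
# the two offspring are always symmetric about the midpoint 0.45
```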

(b) Adaptive mutation operator of differential evolution
The mutation operator plays a critical role in controlling the optimization process and affects the performance of evolutionary algorithms (Chen et al. 2019). The offspring produced by the typical mutation operator in NSGA-II are highly random, which often slows down the algorithm. Therefore, the mutation of the DE algorithm is adopted to guarantee the convergence speed of NSGA-II. For each target vector, the DE mutation can be expressed as:

V(g) = X_p1(g) + F (X_p2(g) − X_p3(g))

where F is the mutation scale factor and g is the current generation. X_p1(g), X_p2(g), and X_p3(g) are three mutually different individuals randomly selected from the population X(g) = [X_1(g), X_2(g), ..., X_N(g)], whose size is N.
The mutation scale factor F is normally a fixed value, generally generated at random between 0 and 1. Here, the adaptive mutation scale factor proposed by Xia and Liang (2020) is used instead: F gradually decreases from its maximum F_max to its minimum F_min as the current iteration g approaches the maximum number of iterations g_max. This method ensures the diversity of the population in the early stage of the search and also effectively increases the possibility of finding the global optimal solution.
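A minimal sketch of the DE/rand/1 mutation with a decaying scale factor follows. The linear decay schedule and the f_min/f_max values are illustrative assumptions, since the exact formula of Xia and Liang (2020) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_de_mutation(pop, g, g_max, f_min=0.2, f_max=0.8):
    """DE/rand/1 mutation V = X_p1 + F * (X_p2 - X_p3) with a scale factor
    that decays linearly from f_max to f_min over the run.
    The f_min/f_max values and linear schedule are illustrative assumptions."""
    n = len(pop)
    F = f_max - (f_max - f_min) * g / g_max   # assumed linear decay
    p1, p2, p3 = rng.choice(n, size=3, replace=False)  # three distinct indices
    return pop[p1] + F * (pop[p2] - pop[p3])

pop = rng.random((20, 3))
v_early = adaptive_de_mutation(pop, g=0, g_max=500)    # uses F = f_max
v_late = adaptive_de_mutation(pop, g=500, g_max=500)   # uses F = f_min
```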

(c) Deductive sort
Fast non-dominated sorting is proposed in NSGA-II. Its computational complexity is O(MN^2) and its space complexity is O(N^2), where M is the number of objectives and N is the population size. The computational and space complexity of NSGA-II can therefore be further reduced. Considering that the number of objectives in the milling parameter optimization of this paper is two, the deductive sort is introduced into NSGA-II for the first time, because deductive sort executes faster than other sorting methods when the number of objectives is between 2 and 16. The computational complexity of deductive sort is O(MN^2) in the worst case and its space complexity is O(N), so it can significantly increase solving efficiency. The pseudocode of the deductive sort is given in Table 2.
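The comparison-skipping idea of deductive sort can be sketched as follows; this is a minimal reading of McClymont and Keedwell (2012), not the authors' code, and it returns fronts as lists of population indices:

```python
def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def deductive_sort(objs):
    """Deductive sort sketch: while building a front, solutions already found
    to be dominated in this pass are skipped ('deduced'), avoiding many
    redundant comparisons compared with fast non-dominated sorting."""
    n = len(objs)
    fronts, assigned = [], [False] * n
    while not all(assigned):
        dominated = [False] * n   # reset per front-building pass
        front = []
        for i in range(n):
            if assigned[i] or dominated[i]:
                continue
            for j in range(i + 1, n):
                if assigned[j] or dominated[j]:
                    continue          # skip solutions already ruled out
                if dominates(objs[i], objs[j]):
                    dominated[j] = True
                elif dominates(objs[j], objs[i]):
                    dominated[i] = True
                    break             # i cannot belong to this front
            if not dominated[i]:
                front.append(i)
                assigned[i] = True
        fronts.append(front)
    return fronts

# [1,1] dominates both others; [2,3] and [3,2] are mutually non-dominated
print(deductive_sort([[2, 3], [1, 1], [3, 2]]))  # [[1], [0, 2]]
```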

(d) A new crowding distance formula considering variance
The crowding distance formula in the NSGA-II algorithm only considers the distance between adjacent individuals (Lai and Deng 2018). In order to reserve individuals with large differences between their per-objective distance components, a new crowding distance formula that takes the variance of these components into account is proposed. In the formula, d_i is the crowding distance from NSGA-II, f_m^(i+1) and f_m^(i−1) are the (i+1)-th and (i−1)-th fitness values in the m-th objective, respectively, and D_i is the new crowding distance considering variance.
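Since the exact formula is not reproduced above, the sketch below shows one plausible reading, assuming the new distance is the standard crowding distance plus the variance of the per-objective gap components; the combination rule is an assumption, not the paper's verbatim formula:

```python
import numpy as np

def crowding_with_variance(front_objs):
    """Standard NSGA-II crowding distance plus the variance of the
    per-objective neighbour gaps; the combination rule (sum + variance)
    is an assumed reading of the paper's formula, not a verbatim copy."""
    F = np.asarray(front_objs, float)
    n, m = F.shape
    gaps = np.zeros((n, m))   # normalised |f_m(i+1) - f_m(i-1)| per objective
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        span = F[order[-1], k] - F[order[0], k] or 1.0
        dist[order[0]] = dist[order[-1]] = np.inf  # boundary points kept
        for r in range(1, n - 1):
            g = (F[order[r + 1], k] - F[order[r - 1], k]) / span
            gaps[order[r], k] = g
            dist[order[r]] += g
    # individuals whose gaps differ strongly across objectives get a bonus
    return dist + gaps.var(axis=1)

d = crowding_with_variance([[0.0, 1.0], [0.2, 0.7], [0.5, 0.4], [1.0, 0.0]])
# boundary points keep infinite distance; interior points get distance + variance
```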

(e) Modified elite retention strategy
The elitist strategy preserves the best individuals of each generation (Luo et al. 2018). In addition, an appropriate elite retention strategy can further improve the solution quality as well as the convergence speed (Wang et al. 2017). Therefore, the elite retention strategy is modified to ensure the diversity of the population in the early stages of evolution. All non-dominated individuals in the first layer are retained. If the number of individuals in a later non-dominated layer is more than 5, individuals are selected from it at a ratio of 0.9. The selection formula is:

N_i = floor(n_i × 0.9)

where n_i is the number of individuals in the i-th non-dominated set, N_i is the allowable number of individuals retained from that set, and floor(·) rounds its argument down to the nearest integer.
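The retention rule can be sketched directly (layer sizes below are hypothetical):

```python
import math

def allowed_count(layer_index, layer_size):
    """Modified elite retention: keep the whole first non-dominated layer;
    for later layers with more than 5 individuals keep floor(0.9 * size)."""
    if layer_index == 0 or layer_size <= 5:
        return layer_size
    return math.floor(layer_size * 0.9)

print([allowed_count(i, s) for i, s in enumerate([40, 30, 4, 12])])  # [40, 27, 4, 10]
```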

Performance measures
In this paper, the diversity metric Δ (Deb et al. 2002), generational distance (GD) (Van Veldhuizen and Lamont 1998), and inverted generational distance (IGD) (Reyes-Sierra and Coello 2005) are adopted to evaluate the algorithm's performance. The diversity metric Δ measures the non-uniformity of the distribution and is defined as:

Δ = (d_f + d_l + Σ_{i=1}^{N−1} |d_i − d̄|) / (d_f + d_l + (N − 1) d̄)

where d_f and d_l are the Euclidean distances between the extreme solutions and the boundary solutions of the obtained non-dominated set, d_i is the Euclidean distance between two adjacent solutions, and d̄ is the average of all distances d_i, i = 1, 2, ..., (N − 1), assuming that the cardinality of the set is N. The lower the value of Δ, the better the distribution of the solutions. GD is the most classic metric of convergence and is defined as:

GD = (1/|P|) sqrt( Σ_{x∈P} d(x, P*)^2 )

where P represents the obtained Pareto optimal solutions and P* represents a set of uniformly distributed points on the true PF. d(x, P*) is the Euclidean distance from a point x in P to its nearest neighbor in P*, and |P| is the cardinality of P. The smaller the value of GD, the better the convergence performance of the algorithm.
IGD represents the average of the minimum distances between the obtained solutions and reference points in the objective space:

IGD = (1/|P*|) Σ_{v∈P*} d(v, P)

where P* represents a set of uniformly distributed points on the true PF and P represents the obtained approximate PF. d(v, P) is the Euclidean distance from a point v in P* to its nearest neighbor in P, and |P*| is the cardinality of P*. The smaller the value of IGD, the better the diversity and convergence of the solutions.
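GD and IGD can be sketched as follows; the GD normalization shown (square-root of summed squares divided by |P|) follows the classic definition, and the reference points here are a tiny hypothetical front, not the paper's data:

```python
import numpy as np

def gd(P, true_pf):
    """Generational distance: sqrt of summed squared nearest distances from
    each obtained point to the true front, divided by |P|."""
    d = np.min(np.linalg.norm(P[:, None, :] - true_pf[None, :, :], axis=2), axis=1)
    return np.sqrt((d ** 2).sum()) / len(d)

def igd(P, true_pf):
    """Inverted GD: average nearest distance from each true-front point to
    the obtained set, reflecting both convergence and diversity."""
    d = np.min(np.linalg.norm(true_pf[:, None, :] - P[None, :, :], axis=2), axis=1)
    return d.mean()

true_pf = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
P = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(gd(P, true_pf), igd(P, true_pf))  # 0.0 0.0 for a perfect match
```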

Test results and discussion
Twelve test functions from the ZDT and DTLZ series are chosen. In all tests, the parameter settings of EPD-NSGA-II and the original NSGA-II are as follows: the population size is set to 100, the maximum number of iterations is 500, the crossover rate is 0.9, the mutation probability is 0.1, and the distribution indices of the crossover and mutation operators of NSGA-II are both 20. The experiments were run in MATLAB 2020a on a computational platform equipped with a 2.6 GHz Intel Core 6 Duo CPU and 8 GB RAM.
To observe the results obtained by EPD-NSGA-II and the original NSGA-II on the test functions more intuitively, the PFs obtained by both algorithms are plotted in Figs. 3 and 4. Fig. 3 shows the PF obtained by EPD-NSGA-II and the true PF on the ZDT series functions. As shown, EPD-NSGA-II comes close to the true PF on all ZDT functions, while NSGA-II converges to the true PF only on ZDT6. In both convergence and distribution of solutions, EPD-NSGA-II performed better than the original NSGA-II on the ZDT series. Fig. 4 shows the obtained PF and the true PF on the DTLZ series functions. As shown, EPD-NSGA-II comes close to the true PF except on the DTLZ1 and DTLZ3 test functions. Although it does not converge to the true PF on DTLZ1 and DTLZ3, the obtained PF has better distribution than that of NSGA-II, owing to the use of the new crowding distance formula. NSGA-II converges to the true PF only on DTLZ5, and the PF obtained by EPD-NSGA-II has better distribution and overlap with the true front (Fig. 4). To further analyze the superiority of the proposed algorithm, a comparison with NSGA-II, MOEA/D, and NMPSO is performed on the chosen benchmarks. Each algorithm is run for 21 independent trials. Tables 3, 4, and 5 record the statistical comparison results regarding the mean value and standard deviation (std) of the diversity metric Δ, GD, and IGD, respectively. The Δ metric measures the diversity of the solutions in the Pareto-optimal set, GD measures the extent of convergence to a known set of Pareto solutions, and IGD reflects both the convergence and the diversity of the obtained solutions. The smallest mean value obtained by the four algorithms is marked in bold font. Table 3 shows the mean and standard deviation of the diversity metric Δ values obtained by the four algorithms.
It can be observed that EPD-NSGA-II performs best on 11 test functions, while the original NSGA-II obtains the best Δ metric result only for ZDT2. This means that EPD-NSGA-II outperforms the other algorithms in diversity for most of the test functions. Table 4 shows the convergence of the solutions obtained by the different algorithms. With respect to the GD value, EPD-NSGA-II produces better convergence on all test problems except ZDT6, DTLZ1, and DTLZ3, and it obtains the smallest standard deviation on 9 test functions, which indicates good convergence and robustness. Table 5 shows the IGD values obtained by the four algorithms. EPD-NSGA-II ranks first on 9 test functions, although its performance on DTLZ1 and DTLZ3 is weak. In summary, EPD-NSGA-II still obtains a well-converged and diverse PF.
In conclusion, the EPD-NSGA-II approach successfully finds a PF with good convergence and uniformity relative to the other three algorithms.

CNC milling experiment
Aluminum alloy has the characteristics of high strength and low density, and it is widely used in the mold manufacturing and aerospace industries. The application scenarios and production conditions of aluminum alloy determine that its parts require high quality and low cost. To study the influence of milling parameters on part quality and production cost, a CNC milling experiment on 7050 aluminum alloy was conducted in this paper. To simulate the production of aluminum alloy parts with complex curved surfaces, a ball-end milling cutter was used to machine curved surface contours. The surface hardness of the workpiece and the processing energy consumption are taken as the optimization targets, and the ideal combination of milling parameters is sought to optimize them.

Experimental equipment and materials
The workpiece material was 7050 aluminum alloy, which offers high strength and good fatigue resistance. The cutting experiment was performed on a five-axis machining center. The milling method was down milling, and the curved surface profile was formed after multiple passes. A three-phase power analyzer was used to measure the milling power, and the surface hardness of the machined workpiece was measured with an HR-150A Rockwell hardness tester. A photograph of the experimental setup is shown in Fig. 5, and an overview of the setup and measurement process is shown in Fig. 6. Coolant was used to lower the milling temperature. The three-phase power analyzer was connected between the working motor and the machining center, and a computer collected the data obtained by the power meter and recorded the power over the entire CNC milling process. The surface hardness of the workpiece was measured after the experiment was completed. The workpiece specification is 80 mm × 50 mm × 25 mm (length (L) × width (W) × height (H)), as shown in Fig. 7. The specifications of the machine tool, measuring instruments, and cutter are listed in Table 6.

Data collection and handling
The rectangular workpiece was rough milled first with a flat-end milling cutter, and then a ball-end milling cutter was used for finishing. The stepped shape formed by roughing is indicated by blue lines and the parabola formed by finishing is shown in green in Fig. 8. The data collected in this experiment are from the finishing stage. Collecting data over the entire finishing process would consume a lot of time and greatly increase the workload, so the cutting path of the milling cutter from w = 30 mm to w = 32 mm was selected as the data collection interval. The tool cut along the length of the rectangular workpiece, and the curved surface was formed through multiple small-feed passes. The measurements of surface hardness and power were both performed in this interval.

Fig. 8 Experimental data collection interval
Energy consumption was obtained by multiplying the processing time by the average power, where the average power is the mean of the milling power curve recorded during processing. The milling power curve is shown in Fig. 9.

Experiment design and results
The experiment was designed using the orthogonal design method with three parameters at four levels. The input parameters were the feed per tooth, the axial depth of cut, and the spindle speed. The ranges of the input parameters were determined according to the parameters commonly used for aluminum alloy finish milling. The ranges of the milling parameters and their levels are listed in Table 7, and the design of the 16 orthogonal experiments is shown in Table 8. The Rockwell hardness (HRC) of the machined surface and the energy consumption (EC) were the two responses. To reduce measurement error, the HRC was measured at 5 different positions on the workpiece and the mean of the five measurements was computed. Table 9 shows the measurement results of the 16 orthogonal experiments.

Modeling hardness and energy consumption
To find the ideal combination of milling parameters, the relationship between the independent variables and the responses is established first. Because the relationship between the milling parameters and the surface hardness and energy consumption is complex and nonlinear, this paper uses the ordinary least squares algorithm to establish a multiple nonlinear regression model for the two responses. Ordinary least squares determines the unknown parameters of the regression model by minimizing the residual sum of squares between the true values and the predicted values:

min Q(a, b) = Σ_{i=1}^{n} (y_i − ŷ_i)^2

where y_i is the value obtained from the experiment, ŷ_i is the corresponding predicted value, x is the independent variable, and a and b are the unknown parameters of the regression model. Taking the partial derivatives of Q with respect to a and b and setting the results equal to zero yields the values of a and b; for the linear case with intercept a and slope b, this gives:

b = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / Σ_{i=1}^{n} (x_i − x̄)^2,  a = ȳ − b x̄
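As a numeric illustration of the closed-form least squares estimates and the coefficient of determination, using hypothetical data rather than the paper's measurements:

```python
import numpy as np

# Closed-form ordinary least squares for y ≈ a + b*x; with log-transformed
# inputs the same machinery fits power-law style regression models.
# The data points below are hypothetical.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
a = y.mean() - b * x.mean()
y_hat = a + b * x

# R^2 = 1 - SS_res / SS_tot, as used to assess model fit
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(a, 3), round(b, 3))  # 0.15 1.94
```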
To facilitate calculations, the milling parameters are standardized to the range 0 to 1, and multiple nonlinear regression models of the Rockwell hardness (HRC) and energy consumption (EC) are obtained. The coefficient of determination R^2 is used to test the fitting quality of the models:

R^2 = 1 − Σ_{i=1}^{n} (y_i − ŷ_i)^2 / Σ_{i=1}^{n} (y_i − ȳ)^2

where y_i is the measured value, ŷ_i is the predicted value, ȳ is the average of the measured values, and n is the number of experiments. The R^2 values of the surface hardness and energy consumption models are 0.92 and 0.98, respectively. The distribution of actual and predicted values in Fig. 10 shows that the established regression models fit well. The relative errors between the measured and predicted values of surface hardness and energy consumption are listed in Table 10; the average errors are 1.9% and 3.2%, respectively, so the established regression models are accurate and reliable. The optimization results are listed in Table 11. As Table 11 presents, the energy consumption reaches the minimum of 2421 J and the surface hardness reaches the maximum of 28 HRC under parameter combination Number 1, while the energy consumption reaches the maximum of 5454.4 J and the surface hardness reaches the minimum of 16.7 HRC under parameter combination Number 16. The optimal ranges of spindle speed, feed per tooth, and cutting depth are 6000 r/min to 12000 r/min, 0.05 mm to 0.08 mm, and 0.199 mm to 0.2 mm, respectively. Table 11 shows that slight changes in the parameters may cause large fluctuations in the targets. In most cases, a reasonable reduction of the spindle speed can reduce energy consumption without a significant increase in surface hardness. However, decreasing energy consumption leads to an increase in surface hardness, so no single solution is better than all the others. Decision makers need to choose a solution that suits their actual situation.

TOPSIS-based multi-objective decision-making
This section demonstrates how to help mechanical engineers make decisions on multi-objective problems in engineering practice. This article supplements how to use the TOPSIS technique (Hwang and Yoon 1981) to find an optimal solution once the 100 non-dominated solutions have been obtained. The basic process of this method is as follows. X is the original data matrix, where n indicates the number of available alternatives and m is the number of objectives; each column of X is first normalized. D_i^+ represents the distance between each alternative and the positive ideal solution, and D_i^− represents the distance between each alternative and the negative ideal solution. The best alternative is then chosen by the relative closeness C_i = D_i^− / (D_i^+ + D_i^−): the larger the value of C_i, the closer the alternative is to the positive ideal solution and the farther it is from the negative ideal solution. The TOPSIS technique is used to make a decision among the 100 non-dominated solutions obtained above. The values of the optimal parameter combination and the corresponding target values are shown in Table 12. The best compromise solution corresponds to a spindle speed n of 6000 r/min, a feed per tooth of 0.08 mm, and an axial cutting depth of 0.2 mm, which gives a suitable surface hardness (20.5 HRC) and energy consumption (4453.9 J).
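A compact TOPSIS sketch on hypothetical (hardness, energy) alternatives follows; the normalization is the usual column-wise vector norm and no objective weights are applied, which may differ from the paper's setup:

```python
import numpy as np

def topsis(X, benefit):
    """TOPSIS ranking: vector-normalise each column, find the ideal and
    anti-ideal points, and score each alternative by relative closeness C_i.
    benefit[j] is True when objective j should be maximised."""
    X = np.asarray(X, float)
    R = X / np.linalg.norm(X, axis=0)            # column-wise vector normalisation
    ideal = np.where(benefit, R.max(axis=0), R.min(axis=0))
    anti = np.where(benefit, R.min(axis=0), R.max(axis=0))
    d_pos = np.linalg.norm(R - ideal, axis=1)    # distance to positive ideal
    d_neg = np.linalg.norm(R - anti, axis=1)     # distance to negative ideal
    return d_neg / (d_pos + d_neg)               # larger C_i = better compromise

# three hypothetical (hardness HRC, energy J) alternatives, both minimised
C = topsis([[20.5, 4453.9], [28.0, 2421.0], [16.7, 5454.4]], benefit=[False, False])
best = int(np.argmax(C))
```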

Conclusion
In this paper, the NDX method, the adaptive mutation operator of DE, deductive sort, a crowding distance considering variance, and an improved elite retention strategy were used simultaneously to improve NSGA-II. EPD-NSGA-II was applied to optimize the milling parameters to obtain suitable surface hardness and energy consumption. Few studies have taken surface hardness as an optimization target, yet better part quality can be obtained by optimizing it, and taking energy consumption as an optimization target saves resources and reduces costs. The concluding remarks of this paper are listed below: (1) EPD-NSGA-II and three other multi-objective algorithms were tested on the ZDT and DTLZ test functions, with three indicators used for performance evaluation. The results of the three indicators show that EPD-NSGA-II is superior to the other three multi-objective algorithms for most test functions, and the EPD-NSGA-II algorithm finds non-dominated solutions closer to the true PF than NSGA-II. This is because the NDX method, the adaptive mutation operator of DE, and the improved elite retention strategy ensure convergence while maintaining the diversity of the Pareto set in the early stage of evolution. However, EPD-NSGA-II faces difficulties in solving multimodal problems such as DTLZ1 and DTLZ3, because the algorithm falls into a local optimum of the objective function space and the convergence slows down or stops.
(2) The end milling operation of 7050 aluminum alloy curved surface contours was successfully designed with 16 sets of orthogonal experiments; the orthogonal array reduces the number of experiments and saves resources. The processing power and the surface hardness after milling were measured. The multiple nonlinear regression models were established by the least squares method, and the predicted values were compared with the experimental values. The comparison shows that the predicted output is in good agreement with the actual data: the R^2 values of the predicted surface hardness and energy consumption models are 0.92 and 0.98, respectively.
(3) The established regression models were solved by the EPD-NSGA-II algorithm, with the original NSGA-II algorithm used for comparison. EPD-NSGA-II obtained better-converged and more uniformly distributed solutions than NSGA-II when optimizing the milling parameters of 7050 aluminum alloy; the optimization time required by EPD-NSGA-II and NSGA-II was 0.4 s and 2.4 s, respectively. The obtained non-dominated solutions indicate that a lower spindle speed can reduce energy consumption without a significant increase in surface hardness when the feed per tooth is 0.08 mm and the axial cutting depth is 0.2 mm. The best compromise solution obtained by TOPSIS corresponds to a spindle speed n of 9271 r/min, a feed per tooth of 0.08 mm, and an axial cutting depth of 0.2 mm, which gives an ideal surface hardness (20.5 HRC) and energy consumption (4453.9 J). This solution provides a reference for the actual milling of 7050 aluminum alloy, especially when energy consumption and machined quality are both considered.

Declarations
a. Funding