A mixed Harris hawks optimization algorithm based on the pinhole imaging strategy for solving numerical optimization problems

The Harris hawks optimization (HHO) algorithm is a new metaheuristic algorithm proposed in recent years. Because this algorithm suffers from slow convergence speed, low accuracy, and a high likelihood of falling into local optima when solving complex high-dimensional optimization problems, a mixed Harris hawks optimization (MHHO) algorithm based on the pinhole imaging strategy is proposed, incorporating four strategies to improve optimization performance. First, the pinhole imaging strategy enables the Harris' hawks to approach the optimal solution faster and accelerates convergence. Second, a spiral parameter is introduced into the exploration phase to make the search paths of the Harris' hawks more diverse and improve the global search ability of the algorithm. Finally, the greedy strategy of the aquila optimization algorithm and the position update strategy of the flower pollination algorithm are embedded in the exploitation stage to help the algorithm jump out of local optima effectively. To verify the effectiveness of the proposed MHHO algorithm, it is compared with the classical HHO algorithm and 16 other state-of-the-art algorithms, and extensively tested on 23 well-known benchmark functions, the IEEE CEC2017 test sets, and three complex constrained engineering optimization problems. The test results show that MHHO achieves the top ranking on both the benchmark functions and the CEC2017 test sets, demonstrating its superior performance in terms of faster convergence speed and higher accuracy.


Introduction
In the history of biological evolution in nature, each population has its own way of survival, and swarm intelligence optimization algorithms are optimization strategies inspired by this natural evolution. With the rapid development of science and technology, various complex optimization problems have emerged continually. Optimization problems in the real world include single-objective [1], multi-objective [2], linear, and nonlinear problems [3], and they are accompanied by various decision constraints as well as expensive computational costs. Therefore, when seeking the optimal solution of complex functions, traditional methods face many difficulties, such as high computational costs and poor optimization results [4]. In recent years, to address these shortcomings of traditional optimization methods, an increasing number of metaheuristic algorithms have been proposed and applied to solve various real-life global optimization and engineering problems owing to their effectiveness and practicality. The working principle of metaheuristic algorithms is to randomly initialize a set of solutions and iteratively approach the optimal solution within a predetermined number of iterations. These algorithms have the advantages of simple principles and easy implementation, and their search mechanisms can better locate the global optimal solution of a given objective function. Therefore, metaheuristic algorithms are widely used in various engineering fields, such as path planning [5], feature selection [6], scheduling optimization [7], image segmentation [8], and model prediction [9]. According to their sources of inspiration, they are usually classified into four major categories.
The first category is evolutionary algorithms, the most famous of which employ the heuristic mechanism of Darwinian biological evolution. The genetic algorithm (GA) [10], differential evolution (DE) [11], and genetic programming (GP) [12] belong to this category; they use selection, crossover, mutation, and recombination in each iteration to approach the chromosome with the best fitness value. The second category is swarm intelligence algorithms, which are inspired by the living habits, life cycles, hunting behaviors, and migration and mating strategies of biological populations. The ant colony optimization algorithm (ACO) [13] is inspired by ants' food-seeking strategy and pheromone-based communication within the colony. The grasshopper optimization algorithm (GOA) [14] is inspired by the large-scale movement of larval and adult locusts and their aggregation behavior when locating food sources. The whale optimization algorithm (WOA) [15] is inspired by the hunting behavior of humpback whales. The seagull optimization algorithm (SOA) [16] is inspired by the foraging behavior of seagulls during migration. The third category is algorithms based on physical and chemical phenomena, which imitate the laws of physics in nature. This category includes the simulated annealing algorithm (SA) [17], the central force optimization algorithm (CFO) [18], and the moth flame optimization algorithm (MFO) [19].

The fourth category is human behavior-based algorithms, which are inspired by human social behaviors; examples include tabu search (TS) [20], social evolution and learning optimization (SELO) [21], and the imperialist competitive algorithm (ICA) [22].
The Harris hawks optimizer (HHO) [23] is a new swarm intelligence algorithm proposed by Heidari et al., inspired by the cooperative raiding and chasing behavior of Harris' hawks in nature. Due to its effectiveness and practicability, it is widely used to solve a variety of complex optimization problems. Nevertheless, when tackling highly complex optimization problems, the HHO algorithm may become trapped in local optima, and its convergence speed tends to be slow. To solve this problem, the MHHO algorithm is proposed in this paper to enhance the optimization performance of the original algorithm. The main contributions of this paper are as follows:
• The pinhole imaging-based strategy enables the algorithm to adjust the leader position at each iteration, facilitating a faster approximation of the global optimum and accelerating the convergence process.
• The spiral factor increases the diversity of the Harris' hawks' search paths, enhancing the algorithm's global search capability.
• Introducing two distinct aquila position update strategies, with a greedy approach to determine the leader between the two types of hawk individuals, enhances the algorithm's ability to optimize complex functions and improves its stability.
• Replacing the original hard besiege with progressive rapid dives by the flower pollination position update strategy enables the algorithm to avoid local optima and converge more quickly to the global optimum.

Related work
Many scholars have put forward their own improvements to the HHO algorithm, motivated by the insufficient convergence accuracy and speed of the original algorithm in different research applications. Abd et al. [24] proposed a hybrid Harris hawks and moth flame optimization algorithm based on fractional-order chaotic maps and population dynamics, and the test results showed the competitive effectiveness of the proposed algorithm. Hussain et al. [25] proposed an efficient sine-cosine mixed Harris hawks optimization algorithm for low- and high-dimensional feature selection, and experimental statistical analysis showed that the proposed algorithm produced efficient search results without increasing time cost. Gölcük et al. [26] proposed a quantum particle swarm optimized multi-population Harris hawks optimization algorithm, designed as a multi-population algorithm to further handle possible multiple optimal solutions, and demonstrated its effectiveness on dynamic optimization problems. The multi-strategy Harris hawks optimization algorithm proposed by Abbasi et al. [27] was used to solve tapered roller bearing design problems, which confirmed the effectiveness of the algorithm under complex and variable constraints. The improved HHO algorithm proposed by Hamza et al. [28] was used to optimize the prioritization of software testing, and the experimental results highlighted the high effectiveness of the proposed approach.

Fundamental framework of the HHO algorithm
The HHO algorithm simulates the actual hunting behavior of Harris' hawks in nature. The algorithm uses two different search mechanisms for optimization: global exploration and local exploitation. Next, a mathematical model is used to simulate the predation strategy of the Harris' hawks under each mechanism.

Exploration stage
In the global exploration stage, the Harris' hawks select a predation strategy according to the probability q and update their positions according to Eq. (1).
In Eq. (1), X(t) and X(t + 1) are the positions of the Harris' hawks at the t-th and (t + 1)-th iterations, respectively, and X_prey(t) represents the position of the prey at the t-th iteration. r_1, r_2, r_3, r_4, and q are random numbers between 0 and 1, UB and LB are the upper and lower bounds of the search space, X_rand(t) is the position of a randomly selected individual from the current Harris' hawks population, and X_mean(t) represents the average position of the Harris' hawks at the t-th iteration, as given in Eq. (2).
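For reference, the exploration update and the mean position in the original HHO paper [23] take the following form:

```latex
% HHO exploration update (Eq. (1)) and mean hawk position (Eq. (2)),
% where N is the population size.
X(t+1) =
\begin{cases}
  X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X(t) \right|, & q \ge 0.5 \\
  \left( X_{prey}(t) - X_{mean}(t) \right) - r_3 \left( LB + r_4 (UB - LB) \right), & q < 0.5
\end{cases}
\tag{1}

X_{mean}(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t)
\tag{2}
```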

Transitory phase
Switching between global exploration and local exploitation during optimization is necessary for the correct operation of swarm intelligence algorithms. The HHO algorithm realizes the transition between the global exploration and local exploitation stages through the escape energy defined in Eq. (3). If |E| ≥ 1, the algorithm performs the global exploration stage; if |E| < 1, the algorithm performs the local exploitation stage. As the prey is chased, the energy E decreases with the number of iterations, and the following four strategies are then used during the local exploitation stage.
In Eq. (3), E is the escape energy of the prey, E_0 is the initial energy before the prey is captured and satisfies E_0 = 2 rand − 1, where rand is a random number in the interval (0, 1), and T_max is the maximum number of iterations.
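For reference, the escape-energy equation in the original HHO paper [23] is:

```latex
% Escape energy of the prey, decaying linearly over the iterations.
E = 2 E_0 \left( 1 - \frac{t}{T_{max}} \right)
\tag{3}
```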

Soft besiege
If r ≥ 0.5 and |E| ≥ 0.5, the prey has enough energy to escape by jumping, and the Harris' hawks choose a soft besiege, pouncing from the best position to seize the prey. The position update strategy is described in Eq. (4). In Eqs. (4) and (5), ΔX(t) represents the difference in position between the Harris' hawks and the prey during the iteration process, and J denotes the random jump strength of the escaping prey.
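For reference, the standard soft besiege update of HHO [23] is:

```latex
% Soft besiege: the hawks encircle the prey softly (Eq. (4)), with the
% position difference and random jump strength defined in Eq. (5).
X(t+1) = \Delta X(t) - E \left| J \, X_{prey}(t) - X(t) \right|
\tag{4}

\Delta X(t) = X_{prey}(t) - X(t), \qquad J = 2 (1 - r_5)
\tag{5}
```

where r_5 is a random number in (0, 1).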

Hard besiege
If r ≥ 0.5 and |E| < 0.5, the prey's escape energy is low, and the Harris' hawks quickly conduct a hard besiege of their prey. The position update strategy is shown in Eq. (6).
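The standard hard besiege update from [23] reads:

```latex
% Hard besiege: the hawks attack the exhausted prey directly.
X(t+1) = X_{prey}(t) - E \left| \Delta X(t) \right|
\tag{6}
```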

Soft besiege with progressive rapid dives
If r < 0.5 and |E| ≥ 0.5, the prey has enough energy to escape, so the Harris' hawks adopt a more sophisticated soft besiege with progressive rapid dives to pounce on and capture the prey. The Levy flight is introduced to simulate the prey's escape pattern. The position update strategy is shown in Eq. (9).
In Eq. (8), D is the dimension of the optimization problem, S is a random vector of size 1 × D, and LF is the Levy flight function.
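For reference, in the original HHO paper [23] this strategy evaluates two candidate moves, a plain dive Y and a Levy-flight-based dive Z, and keeps the better one:

```latex
% Soft besiege with progressive rapid dives: try Y and Z greedily.
Y = X_{prey}(t) - E \left| J \, X_{prey}(t) - X(t) \right|

Z = Y + S \times LF(D)
\tag{8}

X(t+1) =
\begin{cases}
  Y, & \text{if } F(Y) < F(X(t)) \\
  Z, & \text{if } F(Z) < F(X(t))
\end{cases}
\tag{9}
```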

Hard besiege with progressive rapid dives
If r < 0.5 and |E| < 0.5, the prey's escape energy is low, and the Harris' hawks quickly perform a hard besiege with progressive rapid dives on the prey. The position update strategy is described in Eq. (12). The pseudo-code of the HHO algorithm is given in Table 1.
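The corresponding standard update from [23], with the dives now referenced to the mean position of the flock, is:

```latex
% Hard besiege with progressive rapid dives: the candidate dives are
% computed relative to the mean position X_mean.
Y = X_{prey}(t) - E \left| J \, X_{prey}(t) - X_{mean}(t) \right|, \qquad
Z = Y + S \times LF(D)

X(t+1) =
\begin{cases}
  Y, & \text{if } F(Y) < F(X(t)) \\
  Z, & \text{if } F(Z) < F(X(t))
\end{cases}
\tag{12}
```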

Pinhole imaging strategy
Inspired by opposition-based learning [34] and the lens imaging principle of light [35], this paper introduces a new pinhole imaging-based learning strategy into the HHO algorithm and applies it to generate the opposite individual of the global optimal solution after population initialization, thus producing more diverse opposition points. Figure 1 depicts a one-dimensional theoretical learning model of pinhole imaging. In Fig. 1, the constants m and n represent the upper and lower bounds of the search space, respectively, and the projection of the light source of height h on the x-axis is the global optimal position X_prey. The projection of the inverted image h* on the x-axis is X*_prey, which is the opposite of the individual X_prey. The resulting mathematical model is given in Eq. (14).
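From the similar-triangle relation of pinhole imaging in Fig. 1, with scaling factor k = h/h*, the opposite point is:

```latex
% Pinhole imaging opposition point (Eq. (14)); setting k = 1 recovers the
% classical opposition-based learning formula discussed below.
X^{*}_{prey} = \frac{m + n}{2} + \frac{m + n}{2k} - \frac{X_{prey}}{k}
\tag{14}
```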
When k = 1, X*_prey = m + n − X_prey, which is the equation of general opposition-based learning. It can be seen that general opposition-based learning is a special case of the pinhole imaging learning strategy. Properly changing the value of k changes the position of the opposite individual, yielding a new position closer to the optimal solution in each iteration.
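As a minimal sketch of how this strategy slots into the main loop (matching the greedy acceptance step in the Table 2 pseudo-code), assuming box bounds n and m and a fitness function to be minimized:

```python
import numpy as np

def pinhole_imaging_opposition(X_prey, n, m, k, fitness):
    """Pinhole imaging opposite of the current best solution (Eq. (14)),
    kept greedily only if it improves the fitness.

    n, m: lower and upper bounds of the search space; k: scaling factor.
    """
    X_opp = (m + n) / 2.0 + (m + n) / (2.0 * k) - X_prey / k  # Eq. (14)
    X_opp = np.clip(X_opp, n, m)  # keep the opposite point inside the bounds
    return X_opp if fitness(X_opp) < fitness(X_prey) else X_prey
```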

Spiral position update strategy 1
In the exploration stage of the HHO algorithm, the random variables in Eq. (1) control the position update of the Harris' hawks, so that they approach the target along a relatively fixed pattern each time, and the whole algorithm is prone to falling into local optima. To solve this problem, Eq. (15) introduces a spiral parameter that changes with the number of iterations. The spiral parameter simulates the change in the distance between the Harris' hawks and the prey during the chase, so that the predator can dynamically adjust its flight position and increase its ability to explore unknown areas. This ensures a more diverse range of population positions, thereby improving the global search ability and the convergence accuracy of the entire algorithm. The position update strategy is shown in Eq. (15).
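The exact form of Eq. (15) is not reproduced in this excerpt. As an illustrative sketch only, the fragment below shows one common way an iteration-dependent spiral factor can be injected into the exploration update of Eq. (1); the logarithmic spiral s = e^(bl) cos(2πl) and the constant b are assumptions borrowed from WOA-style spirals, not the authors' exact formulation.

```python
import numpy as np

def spiral_exploration_step(X, X_rand, X_prey, X_mean, LB, UB, t, T_max, b=1.0):
    """Exploration step of Eq. (1) modulated by a spiral factor (sketch only)."""
    r1, r2, r3, r4, q = np.random.rand(5)
    # l shrinks from 1 to -1 over the run, so the spiral radius decays and
    # the search tightens around the prey as iterations proceed.
    l = 1.0 - 2.0 * t / T_max
    s = np.exp(b * l) * np.cos(2.0 * np.pi * l)  # iteration-dependent spiral
    if q >= 0.5:
        X_new = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X) * s
    else:
        X_new = (X_prey - X_mean) - r3 * (LB + r4 * (UB - LB)) * s
    return np.clip(X_new, LB, UB)
```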

Expanded exploration position update strategy 2
If r ≥ 0.5 and |E| ≥ 0.5, the prey has enough energy to escape. Inspired by the high-altitude flight of the aquila, which bends vertically to choose the best prey area, a greedy hunting strategy for the Harris' hawks and the aquila is introduced. That is, whichever predator captures the prey at the better position becomes the leader of the next hunt; the position update strategy is shown in Eq. (17).
If r ≥ 0.5 and |E| < 0.5, the escape energy of the prey becomes lower. Inspired by the low-altitude, slow-descent attack of the aquila, which approaches and attacks the prey within a selected target area, the same greedy hunting strategy for the Harris' hawks and the aquila is introduced. The strategy is exactly the same as when the escape energy is higher: the predator with the better position is selected as the leader. The position update strategy is shown in Eq. (19).
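A minimal sketch of this greedy leader selection follows. The Harris-hawk candidate uses the soft besiege of Eq. (4), and the aquila-style candidate uses an AO-like expanded-exploration move; both update forms are stand-ins for the paper's Eqs. (17) and (19), which are not reproduced here.

```python
import numpy as np

def greedy_hawk_aquila_update(X, X_prey, X_mean, LB, UB, E, t, T_max, fitness):
    """Greedy fusion of a Harris-hawk move and an aquila-style move (sketch).

    Whichever candidate reaches the better position leads the next hunt.
    E: current escape energy from HHO Eq. (3); fitness is minimized.
    """
    J = 2.0 * (1.0 - np.random.rand())  # random jump strength of the prey
    # Harris-hawk candidate: classical soft besiege (HHO Eq. (4))
    X_hawk = (X_prey - X) - E * np.abs(J * X_prey - X)
    # Aquila-style candidate: high-flight move toward the prey area
    X_aq = X_prey * (1.0 - t / T_max) + (X_mean - X_prey) * np.random.rand()
    X_hawk, X_aq = np.clip(X_hawk, LB, UB), np.clip(X_aq, LB, UB)
    # Greedy selection: the better of the two candidates wins
    return X_hawk if fitness(X_hawk) < fitness(X_aq) else X_aq
```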

Flower pollination location update strategy 3
To effectively improve the local search capability of the HHO algorithm, inspired by the cross-pollination and self-pollination of flowers [36], the pollen position update strategy is used to replace the hard besiege with progressive rapid dives. The position update strategy is given in Eq. (22).
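For reference, the standard flower pollination updates [36] on which this strategy builds are, with a switch probability p choosing between global and local pollination:

```latex
% Global (cross-) pollination via a Levy flight L(lambda) toward the best
% solution, and local (self-) pollination between two random individuals.
X(t+1) = X(t) + \gamma \, L(\lambda) \left( X_{prey}(t) - X(t) \right)

X(t+1) = X(t) + \varepsilon \left( X_m(t) - X_k(t) \right)
\tag{21}
```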

In Eq. (21), X_m(t) and X_k(t) are the positions of two individuals randomly selected from the current Harris' hawks population, and ε is a random number between 0 and 1. Table 2 presents the pseudo-code of the MHHO algorithm.

Analysis of algorithm computational complexity
The time complexity of an algorithm refers to the amount of time resources consumed by a computer during execution and serves as an important reference for evaluating algorithm performance. For optimization algorithms, time complexity depends mainly on three parameters: the population size N, the number of iterations T, and the dimensionality D. The overall time complexity consists of three parts: population initialization, updating of the global optimal solution and the optimal individual position, and updating of the positions of the other individuals in the population during iterations. Table 3 provides the time complexity of all the comparison algorithms.
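As a rough sketch, assuming the four added strategies contribute only constant-factor work per position update, the complexity decomposes as:

```latex
O(\mathrm{MHHO})
= \underbrace{O(N)}_{\text{initialization}}
+ \underbrace{O(T \times N)}_{\text{best-solution updates}}
+ \underbrace{O(T \times N \times D)}_{\text{position updates}}
= O\bigl( N (1 + T + T D) \bigr)
```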

Benchmark functions and comparison algorithms
To verify the effectiveness of the MHHO algorithm, the 23 classical benchmark test functions [37, 38] are first selected for testing. These test functions consist of the unimodal functions (F1-F7), the multimodal functions (F8-F13), and the fixed-dimensional functions (F14-F23). The unimodal functions (F1-F7) are complex sphere- and valley-type numerical optimization problems with only one global optimum, which is nevertheless not easy to find; such functions can therefore be used to test the exploitation capability of an algorithm. Unlike unimodal functions, the multimodal functions (F8-F13) have many local optima. Moreover, as the dimensionality increases, the scale of such problems grows exponentially, so they can be used to test the exploration ability of an algorithm. The fixed-dimensional functions (F14-F23) are complex multimodal functions with specific dimensions, which can be used to test the overall performance of an algorithm. The basic formulas of these functions are shown in Table 4. The standard IEEE CEC2017 test functions (F1-F30) are also introduced in this section to further evaluate the effectiveness of the proposed algorithm. On all the optimization problems in this paper, the MHHO algorithm is compared with other state-of-the-art optimization algorithms, including the grey wolf optimizer (GWO) [39], the chimp optimization algorithm (ChOA) [40], the aquila optimization algorithm (AO) [41], the sparrow search algorithm (SSA-Sparrow) [42], HHO [23], the improved Harris hawks optimization algorithm (HHOSCA) [43], the sooty tern optimization algorithm (STOA) [44], and the multi-verse optimizer (MVO), among others.


Experimental setup
The test environment for this experiment is an Intel(R) Core(TM) i7-10510U CPU @ 1.80 GHz (2.30 GHz) running MATLAB 2019b. During testing, the population size N is set to 30 and the maximum number of iterations T_max to 500. Apart from the fixed-dimensional functions (F14-F23), the functions F1-F13 are tested in three dimensions (50, 100, and 300), and the IEEE CEC2017 test functions are tested in 10, 30, and 50 dimensions. To overcome the influence of randomness and obtain more accurate results, each algorithm is run 30 times independently, and the Wilcoxon rank-sum test and the Friedman statistical test are also applied. Table 5 presents the parameter settings of the comparison algorithms.

Analysis and discussion of variable dimension benchmark functions
First come the comparison results of our proposed algorithm and six other algorithms on the variable-dimension benchmark functions. The means and standard deviations of the GWO, ChOA, AO, SSA-Sparrow, HHO, HHOSCA, and MHHO algorithms in 50, 100, and 300 dimensions are presented in Tables 6, 7 and 8, respectively. Based on the Wilcoxon rank-sum test, the best results are marked in bold. As given in Tables 6, 7 and 8, the MHHO algorithm obtains better results in all the spatial dimensions of the optimization problems and stably maintains its advantage as the spatial dimension increases. Figures 2 and 3 show representative convergence curves of six benchmark functions in 50 and 300 dimensions, respectively. These figures make it evident that the convergence speed and accuracy of the MHHO algorithm are superior to those of the other six algorithms. A few examples illustrate this. In Tables 6, 7 and 8, four of the comparison algorithms (AO, SSA-Sparrow, HHO, and HHOSCA), like MHHO, can find the optimal solution of the benchmark functions F9 and F11 in 50 and 300 dimensions. However, in sub-figures (d) and (e) of Figs. 2 and 3, our MHHO algorithm demonstrates an impressively faster convergence speed, obtaining the optimal solution by the second generation, which is undoubtedly a major improvement. Additionally, the benchmark function F5 (Rosenbrock) is a typical non-convex function. Due to its extreme complexity, algorithms easily fall into a local optimum on this problem; even the newly proposed AO algorithm does not perform satisfactorily in solving it. Our MHHO algorithm, in contrast, improves the precision of the optimal result to five decimal places, and even when the problem dimension rises to 100, this precision is stably maintained. These results make it abundantly clear that the local exploitation ability and convergence speed of the algorithm have been significantly improved. Furthermore, Table 9 presents the Wilcoxon rank-sum test results at the 0.05 significance level for the 13 classical benchmark functions in 100 dimensions. As can be seen from this table, compared with the GWO, ChOA, AO, SSA-Sparrow, HHO, and HHOSCA algorithms, MHHO achieves P-values below 0.05 on all 13 benchmark functions, confirming the feasibility of the MHHO algorithm. Based on the average results in Tables 6, 7 and 8, Fig. 4 plots the average Friedman rank test results of MHHO and the six comparison algorithms on the 13 benchmark functions with different dimensions. In Fig. 4, it is clear that for each dimension the MHHO algorithm ranks first, followed by AO, HHOSCA, HHO, SSA-Sparrow, GWO, and ChOA. In conclusion, the above statistical analysis shows that the MHHO algorithm significantly outperforms the other six comparison algorithms on the 13 classical benchmark functions in 50, 100, and 300 dimensions.

Analysis and discussion of fixed dimension benchmark functions
The comparison results of MHHO and the six comparison algorithms on the fixed-dimension benchmark functions are listed in Table 10. As shown in Table 10 and Fig. 5, although most of the comparison algorithms can reach the optimal value on F16 and F17, the variance of the optimal solution obtained by MHHO is reduced to 16 decimal places, an accuracy that none of the other comparison algorithms can achieve. Meanwhile, Fig. 6 shows the average Friedman rank test results of MHHO and the other algorithms on the ten fixed-dimension benchmark functions. In this figure, the MHHO algorithm ranks first, followed by GWO, HHO, AO, SSA-Sparrow, HHOSCA, and ChOA.

In addition to assessing the quality of an algorithm's solutions, the time required by each algorithm to solve the corresponding optimization problems should also be taken into account to evaluate the performance of metaheuristic algorithms more comprehensively. Table 12 displays the total CPU running time of the seven comparison algorithms on the 23 classical benchmark functions with three different dimensions.

It can be seen from Table 12 that although the proposed MHHO algorithm does not have the shortest running time, the extra time cost in seconds is negligible considering the algorithm's convergence accuracy and speed. Therefore, with this small time cost set aside, the MHHO algorithm is more competitive than the other algorithms.


Effectiveness analysis and discussion of the MHHO algorithm
In this section, a set of experiments is designed to verify the effectiveness of each strategy introduced into the MHHO algorithm. More specifically, the four strategies, i.e., the pinhole imaging strategy, the spiral parameter position update strategy, the aquila greedy position update strategy, and the flower pollination position update strategy, are added to the original HHO algorithm one by one, and the resulting algorithms are named MHHO-1, MHHO-2, MHHO-3, and MHHO-4, respectively. The MHHO-4 variant is thus equivalent to the MHHO algorithm discussed earlier. Several typical benchmark functions, F1, F5, F12, F21, F22 and F23, are chosen to compare HHO and the four new algorithms. To ensure the consistency of the experiment, the dimension of the variable-dimension functions F1, F5 and F12 is set to 50. The numerical and graphical results of this set of experiments are presented in Table 13 and Fig. 7, respectively. It can be seen that each strategy has a certain effect on different test functions, and the performance of the MHHO-x algorithms improves steadily as the strategies are added. These results demonstrate both the individual effectiveness of the different strategies and their complementarity.

Experiments on CEC2017 benchmark functions and discussion
To further evaluate the performance of our MHHO algorithm, the more challenging IEEE CEC2017 benchmark problems are chosen in this section. From Table 17, it can be observed that ChOA has the longest CPU running time, followed by MHHO. Although the total running time of the other algorithms on the 29 function sets is shorter, their performance is inferior. Moreover, the total running time of MHHO on the 29 function sets is only a few seconds longer, which is negligible compared to the accuracy achieved.

Engineering application problems
To better verify the practicability and effectiveness of our proposed algorithm in the engineering field, MHHO is applied in this section to solve three complex constrained engineering optimization problems, and the optimal results obtained are compared with those of other metaheuristic algorithms. The specific model parameters of these engineering optimization problems come from the literature [52].
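The text does not detail how constraint violations are handled when a metaheuristic such as MHHO is applied to these problems; a static penalty function is a common choice in comparable studies. The following Python sketch is illustrative only, and the penalty weight is an assumption, not a value from the paper.

```python
import numpy as np

def penalized_objective(f, constraints, penalty=1e9):
    """Wrap objective f with a static penalty for constraint violations.

    constraints: list of callables g_i with g_i(x) <= 0 when feasible.
    Illustrative sketch; the paper does not state its penalty scheme.
    """
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + penalty * violation
    return wrapped

# Usage sketch for the tubular column problem described below
# (f and g1 follow the formulas given in the text).
f = lambda x: 9.8 * x[0] * x[1] + 2.0 * x[0]
g1 = lambda x: 1.59 - x[0] * x[1]
obj = penalized_objective(f, [g1])
print(obj(np.array([5.0, 0.5])))  # feasible point: no penalty added
```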

Design of the I-beam
I-beams are rolled iron, steel, or cast steel beams with an I-shaped section. The 3D model and cross-sectional view of the I-beam are shown in Fig. 9. For engineers and technicians in the related fields, the goal of the I-beam design problem is to minimize the beam's vertical deflection while simultaneously satisfying the cross-sectional area and stress constraints under a given load. The objective function and constraints of this problem are expressed as follows:
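In the formulation of this benchmark commonly used in the metaheuristics literature, the design variables are the flange width b, section height h, web thickness t_w, and flange thickness t_f; a bending stress constraint (omitted here) accompanies the area constraint. The constants below are those of the standard benchmark and should be checked against [52]:

```latex
% Standard I-beam deflection objective (the denominator is the moment of
% inertia of the I-section) with the cross-sectional area constraint.
\min\; f(b, h, t_w, t_f) =
  \frac{5000}{\frac{t_w (h - 2 t_f)^3}{12} + \frac{b\, t_f^3}{6}
  + 2\, b\, t_f \left( \frac{h - t_f}{2} \right)^{2}}
\quad \text{s.t.}\quad
g_1 = 2\, b\, t_w + t_w (h - 2 t_f) \le 300
```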

The optimal numerical results of MHHO and PSO, RSO, WOA, BOA, and ALO are shown in Table 18. On the basis of a comparison of these numerical values, it is clear that the optimum cost obtained by MHHO is the smallest (equal to that of ALO), which also indicates that the optimization ability of this algorithm is the best.

Design of the reducer
In the problem of reducer design, the optimization objective is to minimize the weight of the reducer under 11 constraints, and the plan view is shown in Fig. 10.
There are seven design variables: the face width b(x_1), tooth module m(x_2), number of pinion teeth z(x_3), length of the first shaft between bearings l_1(x_4), length of the second shaft between bearings l_2(x_5), diameter of the first shaft d_1(x_6), and diameter of the second shaft d_2(x_7). The objective function and variable bounds of this problem are expressed as follows:

f(X) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2)

subject to the 11 constraints g_1-g_11 detailed in [52], with 2.6 ≤ x_1 ≤ 3.6, 0.7 ≤ x_2 ≤ 0.8, 17 ≤ x_3 ≤ 28, 7.3 ≤ x_4 ≤ 8.3, 7.3 ≤ x_5 ≤ 8.3, 2.9 ≤ x_6 ≤ 3.9, and 5.0 ≤ x_7 ≤ 5.5.

Table 19 shows the objective function values of MHHO and four comparison algorithms (HHO, GWO, WOA, and ChOA) on the reducer design problem, together with the corresponding optimization variables. The comparison of objective function values is equally encouraging: MHHO outperforms all the other algorithms.

Design of the tubular column
The problem models a uniform cross-section tubular column, and the objective is to carry a compressive load at minimal cost. As shown in Fig. 11, the design variables are the average diameter of the column cross-section d(x_1) and the thickness of the column t(x_2). Its optimization model is as follows:

f(X) = 9.8 x_1 x_2 + 2 x_1

subject to: g_1(X) = 1.59 - x_1 x_2 ≤ 0, together with the remaining constraints given in [52].

In Table 20, the best numerical result of MHHO is compared with those of the AO, SSA-Salp, MVO, SSA-Sparrow, ChOA, and HHOSCA algorithms. Consistent with the conclusions of the previous two engineering optimization problems, the MHHO algorithm performs the best among all the comparison algorithms.
In addition, Table 21 provides the CPU running time of the different algorithms on the three engineering application problems. From Table 21, it can be seen that for the I-beam problem, both the ALO and MHHO algorithms are able to reach the optimal value, but the running time of ALO is slightly longer. Although the running time of MHHO is the longest on the reducer and tubular column problems, its performance is the best.

Conclusion and future work
Aiming at the shortcomings of the HHO algorithm, this paper proposes four improvement strategies to keep the algorithm from falling into local optima and to improve its performance. First, inspired by the lens imaging principle of light, the addition of the pinhole imaging strategy enables the algorithm to approach the optimal solution faster and accelerates convergence. Second, the position update strategy based on the spiral parameter is introduced in the exploration phase to diversify the search of the Harris' hawks. Third, the greedy predation strategy shared by the Harris hawks and aquila optimization algorithms is employed to obtain a better prey position. Finally, the update strategy of the flower pollination method replaces the Harris hawks' hard besiege with progressive rapid dives, which improves the local search ability of the algorithm.
The improved algorithm is extensively tested on 23 well-known benchmark functions, the complex IEEE CEC2017 benchmark functions, and three complex constrained engineering optimization problems, and the numerical outcomes are compared with those of other state-of-the-art algorithms. The comparison results show that the improved algorithm is strongly competitive in convergence accuracy and speed. Although the improved MHHO algorithm shows significant performance improvements, its optimization performance on some test functions tends to decline as the dimension increases, which reflects that the performance of MHHO is not yet perfect. Therefore, how to keep the leaders of the algorithm close to the optimal solution when optimizing non-convex functions like F5, and prevent them from falling into local optima, is the direction of our next work.
In future work, research can be conducted from two perspectives. First, we can attempt to use other improvement strategies to further enhance the optimization performance of HHO and solve higher-dimensional optimization problems. Second, we can apply the proposed MHHO algorithm to real-world problems in practical applications, such as job shop scheduling, the traveling salesman problem, and multi-objective optimization, to expand the application fields of the algorithm.

Availability of data and materials Data and materials are available.
Code availability Code is available.