An improved cooperation search algorithm for the multi-degree reduction in Ball Bézier surfaces

Cooperation search algorithm (CSA) is a new metaheuristic algorithm inspired by the team cooperation behaviors in modern enterprises and is characterized by fast convergence. However, on complex multimodal problems it may get trapped in local optima and suffer from premature convergence, because population updating is guided only by leading individuals. In this paper, the low convergence efficiency and accuracy of the CSA algorithm on complex multimodal problems are dramatically alleviated by integrating the mutation and crossover operators of the DE algorithm. Experimental results demonstrate the better performance of the resulting CCSA in convergence speed and accuracy compared with other existing optimizers. Furthermore, since there is no universal approach for the multi-degree reduction in Ball Bézier surfaces under different interpolation constraints, we propose a new method that solves this problem by introducing metaheuristic methods, where a change of interpolation constraints is treated as a change of decision variables. The modeling examples show that the proposed method is effective and easy to implement under different interpolation constraints: it can achieve the multi-degree reduction in Ball Bézier surfaces in one pass and can simplify the degree reduction procedure significantly.


Introduction
Mathematical optimization refers to the seeking of a solution within an available solution space such that the objective function reaches its maximum or minimum. A large number of real-world problems can be attributed to optimization problems. Therefore, the solving methods for optimization problems and their solution accuracy have always been the focus of many scholars. Conventional methods based on derivative theory have the advantage of high precision, whereas they place high requirements on the objective function and constraint functions. Besides, they are sensitive to the initial value. Inspired by the abundant mechanisms and principles in nature, scholars have developed numerous metaheuristic methods to address various optimization problems in the past few decades. These metaheuristic methods have the advantages of easy implementation, high flexibility, no demand for derivative information, high robustness and an excellent ability to escape from local optima. Thus, metaheuristic methods have been successfully applied to solve optimization problems in many fields, such as system control (Nedic et al. 2015; Stojanovic and Nedic 2016a, b), task scheduling, image processing (Gan et al. 2022), path planning (He et al. 2021), electric power design and many other engineering design problems. Metaheuristic methods are well appreciated by the scientific community and are becoming increasingly popular due to their derivative-free mechanism and their superiority over conventional methods, especially in the case of multimodal, discrete and non-differentiable complex optimization problems.
According to the thought origin and employed search mechanism, metaheuristic methods can be broadly classified into six main categories (Molina et al. 2020): (1) breeding-based evolutionary algorithms, which mimic the evolution laws of natural species. These algorithms generally start with a randomly generated population, then a process of reproduction and survival iterated over successive generations enables the population to potentially evolve toward promising regions. The representatives include the genetic algorithm (GA) Holland (1975) and differential evolution (DE) Das and Suganthan (2011); (2) swarm intelligence-based algorithms, which simulate the collective behavior of decentralized self-organized systems, in either natural or artificial environments, to effectively collect and utilize the obtained information about the search space to guide the search of the population during the iterations. The representatives of this category include particle swarm optimization (PSO) Eberhart and Kennedy (1995), gray wolf optimization (GWO) Mirjalili and Mirjalili (2014), cuckoo search (CS) Yang and Deb (2009), squirrel search algorithm (SSA) Jain et al. (2019), marine predators algorithm (ODMPA) Hu et al. (2021), lion optimization algorithm Yazdani et al. (2022) and walnut optimization algorithm (Emami et al. 2022).
(3) physics-/chemistry-based algorithms, which are inspired by the physical/chemical rules in the universe. The representatives include gravitational search algorithm (GSA) Rashedi et al. (2009), multi-verse optimizer (MVO) Mirjalili et al. (2016) and chemical reaction optimization algorithm (CRO) Lam and Li (2010); (4) social human behavior-based algorithms, which mimic the intelligent social behaviors of human beings. The representatives include ideology algorithm (IA) Huan et al. (2017) and most valuable player algorithm (MVPA) Khattab et al. (2019); (5) plant-based algorithms, such as the forest optimization algorithm (FOA) Ghaemi and Feizi-Derakhshi (2014); (6) algorithms with miscellaneous sources of inspiration, i.e., algorithms which do not fit in any of the previous categories, such as the Yin-Yang pair optimization (YYPO) Punnathanam and Kotecha (2016).
For almost all metaheuristic algorithms, the search procedure includes two aspects: exploration for discovering new search regions in the entire solution space, and exploitation for investigating the potential of available promising areas (Elaziz et al. 2017). However, it is not easy to keep a proper balance between exploration and exploitation because of the randomness rooted in the search mechanism. So far, no single optimizer has been found to be adequate to optimally solve all optimization problems. This means that one method may produce satisfying solutions for a certain set of problems but fail to achieve this on other classes of problems (Wolpert and Macready 1997). With the advancement in technology, real-world optimization problems are becoming increasingly complex and high-dimensional. When the existing metaheuristic methods are used to solve these high-dimensional complex problems, they may get trapped in local optima and suffer from premature convergence, and thus fail to yield a feasible solution. Therefore, it is of great importance to further develop new and effective metaheuristic methods for real-world complex problems with unknown decision spaces. This practical necessity keeps the domain of metaheuristic research open and allows scholars to improve the existing algorithms or propose novel algorithms for better optimization. Besides, extending the scope of application of metaheuristic algorithms to problems in different fields is also the focus of many scholars.
As for the improvement of existing algorithms, since different optimizers may have different strengths, it is reasonable to think, and has been documented, that applying multiple operators in one optimizer framework can help the optimizer to obtain better performance in solving diverse optimization problems. As an illustration, in order to improve the exploration capability of GWO, the personal best history of each individual, as well as crossover and greedy selection, is applied in the memory-based gray wolf optimizer (mGWO) Gupta and Deep (2020), and better search efficiency, as well as solution accuracy, is achieved. Besides, chaotic mapping is introduced into GWO to maintain population diversity (Saxena et al. 2019), and opposition-based learning is employed in GWO to enhance the exploration capability (Ibrahim et al. 2018). The slow convergence of the sine cosine algorithm (SCA) Mirjalili (2016) is ameliorated in the improved sine cosine algorithm combined with optimal neighborhood and quadratic interpolation strategy (QISCA) Guo et al. (2020). The same issue is resolved by embedding the reproduction mechanism of the invasive weed optimization (IWO) Mehrabian and Lucas (2006) in the SSA algorithm, and improved exploitation capability is achieved in the ISSA method (Hu et al. 2019). A random replacement strategy and double adaptive weights are introduced into the whale optimization algorithm (WOA) to enhance its convergence speed and exploration ability, respectively (Chen et al. 2020). Likewise, there are a great many studies on the improvement of the PSO algorithm, such as the opposition-based particle swarm algorithm (OBPSO) Wang et al. (2007) and the synergy of the sine cosine algorithm and particle swarm optimizer (SCA-PSO) Nenavath et al. (2018). A brief literature review on nature-inspired algorithms is presented in Table 1.
Cooperation search algorithm (CSA) Feng et al. (2021) is a novel social human behavior-based algorithm proposed by Feng in 2021, which simulates the efficient team cooperation behaviors and dynamic position updating mechanism in modern enterprises to achieve population evolution during the optimization procedure. Compared with most of the existing methods, the CSA has superior performance in convergence rate; however, when used to solve some high-dimensional complex problems, especially multimodal problems, it may get trapped in local optima, just like most metaheuristic methods with quick convergence, and lead to a low convergence speed. This is because the population updates in these algorithms are almost entirely guided by the leading individuals in the swarm and do not make full use of the population's search information to guide the movement direction (Tanweer et al. 2015), so these algorithms have a high probability of getting stuck in local optima and suffering from premature convergence. To alleviate the premature convergence of the CSA, this paper proposes an improved cooperation search algorithm (CCSA) by incorporating the mutation and crossover operators of differential evolution methods. The CCSA employs the mutation operator to explore the neighborhood potential areas of personal best solutions to enhance population diversity. Then, the crossover operator is employed to make full use of the excellent solution structure of individuals and strengthen the collaboration among individuals to improve the population's global exploration capability. Though these operators increase the time complexity of the CSA algorithm, they accelerate the global convergence speed and effectively boost the exploration efficiency and solution quality of the cooperation search algorithm on complex problems. Furthermore, since there is no universal approach for the multi-degree reduction in Ball Bézier surfaces under different interpolation constraints, metaheuristic algorithms are introduced into the multi-degree reduction in Ball Bézier surfaces in this paper. In the computer-aided design (CAD) and computer-aided geometric design (CAGD) fields, there are extensive studies on curves and surfaces.

Table 1 Brief literature review on nature-inspired optimization algorithms

Algorithm | Inspiration | Year
Genetic algorithm (GA) Holland (1975) | Evolution | 1975
Particle swarm optimization (PSO) Eberhart and Kennedy (1995) | Intelligent social behavior of bird flock | 1995
Invasive weed optimization (IWO) Mehrabian and Lucas (2006) | Law of colonizing weeds | 2006
Walnut optimization algorithm (Emami et al. 2022) | The growth of walnut trees in nature | 2022
However, there is no object without thickness in nature, and a curve/surface without thickness is usually only a mathematical abstraction or simplification in geometric modeling. Besides, in recent years, 3D printing has found extensive applications in modern industrial manufacturing and has also aroused great interest in many other fields. 3D printing turns 3D digital models into real objects by building them up layer by layer. With the development of multi-material 3D printers, 3D printing provides an appealing way of fabricating complex solid objects of the real world (Pasko et al. 2008; Song et al. 2018); thus, the modeling of various 3D solid objects has attracted considerable attention.
As a skeleton-based solid parametric representation of 3D freeform objects, Ball curves and surfaces play an important role in the modeling of objects with variable thickness, such as the modeling of cerebral vasculature (Wang et al. 2016), plant stems (Zhu et al. 2008), 3D human modeling (Xu et al. 2011) and other objects with varying thickness. Ball curves/surfaces use control balls instead of simple control points in curve/surface design, and their deformation can be easily achieved by manipulating their control balls directly. Besides, since Ball curves/surfaces are defined by explicit parametric equations, their properties can be discussed directly and globally. Wu et al. investigated the properties (Wu et al. 2007), intersection (Fu et al. 2018) and extension (Liu et al. 2020) of Ball B-spline curves (BBSC). Hu and Wang (2008) investigated the exact boundary of the Ball Bézier surface and its approximation in polynomial form. Liu and Deng (2008) proposed the fitting of scattered data with Disk/Ball Bézier and B-spline curves/surfaces. In practical applications, product data need to be transferred between different design and manufacturing systems, whereas these systems usually have different requirements on the representation of curves/surfaces, such as limits on the degree of polynomial curves/surfaces. As a result, for a Ball curve/surface, multi-degree reduction plays a key role in its data transmission between diverse CAD/CAM systems. Therefore, Wu and Deng (2006) presented the degree reduction in Ball Bézier surfaces over rectangular domains, and Chen et al. (2007) proposed the degree reduction of Ball Bézier surfaces over triangular domains. Due to the complex constraints in the degree reduction definition of Ball surfaces, previous works required a concrete analysis for each kind of specific interpolation constraint.
These tedious and complicated analyses and derivations lead to the lack of a universal approach for the multi-degree reduction in Ball surfaces under different interpolation constraints. To solve this problem, we propose a new method for the multi-degree reduction in Ball Bézier surfaces by introducing the CCSA algorithm, which can deal with different interpolation constraints and simplifies the degree reduction procedure significantly. Of course, compared with traditional degree reduction methods for Ball Bézier surfaces, the computation load of the proposed method also increases.
The arrangement of this paper is as follows: Sect. 2 describes the cooperation search algorithm; Sect. 3 presents the main inspiration for this paper and proposes the improved cooperation search algorithm combined with a crossover operation (CCSA). In Sect. 4, a conceptual comparative analysis of CCSA with other metaheuristic algorithms is provided. In Sect. 5, the performance of CCSA is analyzed based on experimental results. In Sect. 6, the CCSA is applied to the optimal multi-degree reduction in Ball Bézier surfaces. Finally, conclusions and future work are discussed in Sect. 7.

Cooperation search algorithm
As a novel social human behavior-based algorithm, the CSA algorithm simulates the efficient team cooperation behaviors and dynamic position updating mechanism in modern enterprises to realize the process of optimization.

Team cooperation behaviors in modern enterprises
The team cooperation behaviors and position updating mechanism in modern enterprises are introduced as follows. Generally, a company provides four different kinds of positions: the board of directors, the board of supervisors, the chairman and the staff. The board of directors is in charge of the company, and its members are elected from the individuals in the company. The board of supervisors is supposed to exercise responsible supervision over the executive directors to promote the interests of shareholders. The chairman on duty is elected from the members of the board of directors, is mainly responsible for the smooth and ordered running of the company, and has conclusive influence on the company. The staffs are engaged in specific work under the leadership of the board of directors and are empowered to elect the leaders of the board of directors and the board of supervisors.
As we all know, human beings are one of the most important factors for productivity improvement. Thus, in order to achieve the scientific development of a company and the promotion of its market competitiveness, it is essential to improve the staff's strength, i.e., to help the staffs gain as much knowledge as possible. The knowledge of a staff is mainly affected by the chairman on duty, who holds the highest position in the team. Besides, the leaders of the board of directors and the board of supervisors can offer extensive information to the staff to reduce or even avoid possible mistakes. After a period of time, under-performing staffs and leaders may be replaced by staffs with better performance, while ordinary staffs are encouraged to advance their job positions through good performance. This kind of team cooperation behavior and dynamic position updating mechanism helps the company to maintain its initiative and thus realize its sustainable development. Figure 1 gives a sketch map to illustrate the team relationship in modern enterprises.

Search principle of the CSA method
In the CSA method, the optimization process of the problem under consideration is regarded as the development of a company. Each solution is treated as a staff, and a group of staffs constitutes a company team. The fitness value of a solution for the target problem represents the performance of the corresponding staff. The board of directors and the board of supervisors are composed of the M global best solutions the group has found so far (an external elite set) and the N personal best solutions, respectively. The chairman on duty is randomly selected from the board of directors. The implementation process of the CSA is described as follows.
(1) Team building phase. To conduct a uniform search in the initial phase, all the staffs in the team are randomly initialized as

x_{i,j}^{1} = \varphi(\underline{x}_j, \overline{x}_j), \quad i = 1, 2, \ldots, N, \; j = 1, 2, \ldots, D, \tag{1}

where N denotes the population size, D is the number of decision variables, and x_{i,j}^{t} is the jth decision variable of the ith solution at the tth iteration. \varphi(\underline{x}, \overline{x}) represents a uniformly distributed random number in the interval [\underline{x}, \overline{x}].
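The team building phase above amounts to uniform random initialization over the search box. A minimal sketch (function and variable names are our own):

```python
import numpy as np

def team_building(n_staff, dim, lower, upper, rng=None):
    """Randomly place every staff (candidate solution) inside the box
    [lower, upper]^dim, mirroring Eq. (1): x_{i,j} = phi(lower_j, upper_j)."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(lower, upper, size=(n_staff, dim))

# Example: a team of 5 staffs on a 3-dimensional problem in [-10, 10]^3.
team = team_building(5, 3, -10.0, 10.0)
assert team.shape == (5, 3) and (team >= -10).all() and (team <= 10).all()
```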
(2) Team communication operator. Every staff can obtain new knowledge by exchanging information with the chairman and the leaders of the board of directors and the board of supervisors. As shown in Eq. (2), the team communication operator involves three parts: the knowledge A from the chairman on duty (namely a randomly selected member of the board of directors), the collective knowledge B from the board of directors and the collective knowledge C from the board of supervisors:

y_{i,j}^{t+1} = A_{i,j}^{t} + B_{i,j}^{t} + C_{i,j}^{t}, \tag{2}
where y_{i,j}^{t+1} denotes the jth decision variable of the ith communication solution at the (t+1)th iteration, gbest_{m,j}^{t} denotes the jth decision variable of the mth global best solution that the population has found so far, and pbest_{i,j}^{t} is the jth decision variable of the ith personal best solution. A_{i,j}^{t} is the knowledge gained from the chairman, and cha is a randomly selected index from the set {1, 2, ..., M}. B_{i,j}^{t} is the mean knowledge gained from the M global best solutions. C_{i,j}^{t} is the mean knowledge gained from the N personal best solutions. α and β are learning coefficients, which are used to control the relevant effects of the corresponding subpopulations.

Fig. 2 Sketch map of the team communication mechanism in the CSA method
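The three knowledge terms of Eq. (2) can be sketched as follows. This is one plausible realization of the description only: the random weighting factors and the coefficient values are illustrative assumptions, and the exact forms of A, B and C are those defined in the original CSA paper (Feng et al. 2021):

```python
import numpy as np

def team_communication(x, gbest, pbest, alpha=0.1, beta=0.15, rng=None):
    """Illustrative sketch of y = A + B + C (Eq. (2)):
    A pulls each staff toward a randomly chosen chairman,
    B toward the mean of the M global best solutions (board of directors),
    C toward the mean of the N personal bests (board of supervisors)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    cha = rng.integers(len(gbest))                 # chairman index in {1,...,M}
    A = x + rng.random((n, d)) * (gbest[cha] - x)             # chairman knowledge
    B = alpha * rng.random((n, d)) * (gbest.mean(axis=0) - x) # directors' knowledge
    C = beta * rng.random((n, d)) * (pbest.mean(axis=0) - x)  # supervisors' knowledge
    return A + B + C
```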
(3) Reflective learning operator. In addition to exchanging information with leaders, each staff can also obtain new knowledge by summarizing its own experience in the opposite search direction, producing a reflective solution whose jth value at the (t+1)th iteration is denoted z_{i,j}^{t+1}.
(4) Internal competition operator. In order to guarantee that staffs with better performance are conserved to promote the company's market competitiveness, the better of the communication solution and the reflective solution is kept for each staff:

x_{i}^{t+1} = \begin{cases} y_{i}^{t+1}, & F(y_{i}^{t+1}) \le F(z_{i}^{t+1}), \\ z_{i}^{t+1}, & \text{otherwise}, \end{cases}

where F(y) is the fitness value of the solution y (minimization is assumed).
The pseudo-code of CSA is shown in Table 2 and the sketch map of team communication mechanism is given in Fig. 2.

Motivation of the work
For an optimization problem with D decision variables, the fitness value of a solution is determined by the solution structure in all D dimensions, that is, F(x_i) = F(x_{i,1}, x_{i,2}, \ldots, x_{i,D}). It is natural to think that individuals whose fitness values are slightly worse may still have a good solution structure in some particular dimensions, and thus learning from these individuals can help an individual gain as much knowledge as possible (Gupta and Deep 2020; Wang et al. 2018). The search mechanism of CSA makes the search directions depend on the leading individuals, which causes the good solution structure that some slightly worse individuals hold in some dimensions to be neglected and causes the population to lose diversity, increasing the possibility of getting trapped in local optima and decreasing the convergence accuracy of the population. Though the reflective learning operator can increase population diversity and thus alleviate this issue to some extent, it still has some blindness in the search process.
In order to make full use of every individual's excellent solution structure during the search process and enhance the exploration and exploitation capability of the CSA, the CSA algorithm is modified by incorporating a DE/best/1 mutation operator and a binomial crossover operator; the resulting algorithm is named CCSA.

Search principle of the CCSA algorithm
The search principle of the CCSA algorithm is described as follows.
(1) Team building phase. Generate an initial population to form the team of an enterprise by Eq. (1). Determine the leaders of the board of directors (the M global best solutions {gbest_m^1}_{m=1}^{M}) and the board of supervisors (the N personal best solutions {pbest_i^1}_{i=1}^{N}), as well as the chairman (a solution gbest_{cha}^1 randomly selected from the M global best solutions).
(2) Team communication operator. Ordinary staffs learn simultaneously from the leaders of the board of directors and the board of supervisors, as well as the chairman, to gain new knowledge by Eqs. (2)-(5).
(3) Mutation operator. In order to strengthen the collaboration among individual staffs and explore the neighborhood potential areas of their personal best states, we utilize the personal best solution pbest_i^t of each individual and modify the DE/best/1 mutation operator of differential evolution algorithms as follows to help individual staffs gain new knowledge:

s_{i}^{t+1} = pbest_{i}^{t} + k \left( x_{r_1}^{t} - x_{r_2}^{t} \right), \tag{12}

where s_i^{t+1} is the ith mutation solution and x_{r_1}^{t} and x_{r_2}^{t} are two randomly selected individuals in the current population. The learning coefficient k controls the influence of the difference vector: a high value of k supports extensive exploration and a small value of k encourages local exploitation. In this study, we set k as a variable that is linearly decreased from the initial value 1.5 to the final value 0.
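The mutation step can be sketched directly from the description: each personal best is perturbed by the scaled difference of two randomly selected population members, with k shrinking linearly over the run (function and variable names are our own):

```python
import numpy as np

def pbest_mutation(pbest, x, t, t_max, rng=None):
    """Modified DE/best/1 mutation around each personal best:
    s_i = pbest_i + k * (x_r1 - x_r2), with the learning coefficient k
    decreased linearly from 1.5 to 0 over the iterations."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    k = 1.5 * (1.0 - t / t_max)                       # k: 1.5 -> 0
    s = np.empty_like(x)
    for i in range(n):
        r1, r2 = rng.choice(n, size=2, replace=False)  # two distinct staffs
        s[i] = pbest[i] + k * (x[r1] - x[r2])
    return s
```

At the final iteration k = 0, so each mutation solution collapses onto its personal best, which matches the intended shift from exploration to exploitation.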
(4) Crossover operator. To make full use of the good solution structure of individual staffs, the binomial crossover operator of DE is utilized to merge the messages obtained from the team communication and mutation operators:

g_{i,j}^{t+1} = \begin{cases} s_{i,j}^{t+1}, & \varphi(0,1) \le CR \ \text{or} \ j = j_{rand}, \\ y_{i,j}^{t+1}, & \text{otherwise}, \end{cases} \tag{13}

where g_{i,j}^{t+1} is the jth value of the ith crossover solution, CR is the crossover probability, and j_{rand} is a randomly chosen dimension index. y_i^{t+1} is the team communication solution obtained from Eq. (2) and s_i^{t+1} is the mutation solution obtained from Eq. (12).
(5) Reflective learning operator. Each individual gains new knowledge by summarizing its experience in the opposite direction, where g_{i,j}^{t+1} is the jth value of the ith crossover solution obtained from Eq. (13) and z_{i,j}^{t+1} is the jth value of the ith reflective solution at the (t+1)th iteration.
(6) Greedy selection. In an enterprise, staffs with better performance are usually conserved to promote the company's market competitiveness. To realize this goal, the following greedy selection between the crossover solution and the reflective solution is applied:

x_{i}^{t+1} = \begin{cases} g_{i}^{t+1}, & F(g_{i}^{t+1}) \le F(z_{i}^{t+1}), \\ z_{i}^{t+1}, & \text{otherwise}. \end{cases}

Table 3 gives the pseudo-code of the proposed CCSA algorithm.
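The greedy selection between the crossover solution g_i and the reflective solution z_i can be sketched as follows, assuming a minimization problem as in the benchmark experiments (names are our own):

```python
import numpy as np

def greedy_selection(g, z, fitness):
    """For every staff, keep whichever of the crossover solution g_i and the
    reflective solution z_i has the better (smaller) fitness."""
    fg = np.apply_along_axis(fitness, 1, g)
    fz = np.apply_along_axis(fitness, 1, z)
    keep_g = fg <= fz
    return np.where(keep_g[:, None], g, z)

# Example with the sphere function f(v) = sum(v^2):
sphere = lambda v: float(np.sum(v * v))
g = np.array([[0.0, 0.0], [3.0, 3.0]])
z = np.array([[1.0, 1.0], [1.0, 2.0]])
sel = greedy_selection(g, z, sphere)
# row 0: g wins (0 <= 2); row 1: z wins (5 < 18)
assert np.array_equal(sel, np.array([[0.0, 0.0], [1.0, 2.0]]))
```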
The traits of the CCSA algorithm are summarized as below: (1) In the team communication phase of CCSA, the new movement direction of an individual is guided by the M Gbest solutions, the N Pbest solutions and a randomly selected Gbest solution, i.e., the cumulative effect of all three is considered, which can help the population find promising regions and accelerate convergence effectively.
(2) The mutation operator implemented on personal best solution and the crossover operator can significantly increase population diversity, guaranteeing the exploration capability of the algorithm.
(3) The reflective learning operator enables the search of an individual to be transferred from one region to another, which can help the population jump out of local optima. Compared with the well-known quasi-opposition learning strategy, the value range of the reflective point of a point g_{i,j} is wider than that of its quasi-opposite point, and thus the reflective points of the population can be more dispersed. Consequently, the CCSA possesses an excellent exploration ability.
(4) The greedy selection enables the population to seek out high-quality solutions for the next iteration to accelerate the convergence.
(5) The swarm is able to achieve an appropriate balance between global exploration and local exploitation via the operators above, increasing the probability of approximating the global optimal solution.

Time complexity analysis of CCSA
Time complexity is a key factor in evaluating the efficiency of an intelligent algorithm. Suppose a population of size N and T iterations are involved in the optimization procedure of a target problem with D decision variables, and assume that the fitness evaluation of a solution is much more expensive than the other calculations. In the optimization process of the target problem with the CCSA algorithm, N solutions are evaluated in the initial team building phase, then N team communication solutions, N crossover solutions and N reflective solutions are evaluated in each iteration. Therefore, the total number of evaluations with the CCSA algorithm is O(N + 3NT).
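Under these assumptions the evaluation budget can be counted directly:

```python
def total_evaluations(n, t):
    """Evaluation budget of CCSA: N fitness evaluations at team building,
    then 3N per iteration (communication, crossover and reflective
    solutions), giving N + 3NT in total."""
    return n + 3 * n * t

# With the settings used in the experiments (N = 50, T = 500):
assert total_evaluations(50, 500) == 75050
```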

Conceptual comparative analysis of CCSA with other metaheuristic algorithms
In general, all nature-inspired metaheuristics present two features, i.e., adaptability and selection of the fittest, so nature-inspired metaheuristics apparently look quite similar. In fact, their solution updating mechanisms differ from each other. In this section, the updating mechanism of the proposed CCSA algorithm is compared with those of particle swarm optimization (PSO), the genetic algorithm (GA) and the squirrel search algorithm (SSA).

CCSA versus PSO
Both the CCSA and PSO utilize the cumulative effect of global best information and personal best information to guide the evolution of the population; however, technically, they present several differences in their formulation and updating mechanisms. Some of the major differences are described as follows:
(1) The PSO utilizes one global best solution, while the CCSA utilizes M number of global best solutions to guide the search direction.
(2) In PSO, the solution updating of an individual x_i involves only its own personal best solution pbest_i and the global best solution gbest, while in CCSA, the solution updating involves the collective knowledge from all the N personal best solutions and the M global best solutions. More importantly, a randomly selected global best solution gbest_cha plays an important and even conclusive role in the solution updating.
(3) The mutation and crossover operators, as well as the reflective learning operator, are employed in the CCSA to maintain population diversity, whereas PSO does not utilize them.

CCSA versus GA
The mutation and crossover operators are involved in both the CCSA and GA algorithms, but their meanings in the two algorithms are totally different.
(1) In GA, all the updating operators are implemented on chromosomes encoded as arrays of parameter values. In contrast, in CCSA, all the updating operators are implemented on the parameter values directly.
(2) The crossover operator in GA swaps a subsequence of two chosen chromosomes to create two offspring. In CCSA, for two given solutions, a crossover solution is generated by randomly choosing each of its parameter values from the two candidate solutions.
(3) The mutation operator in GA randomly flips individual bits in the chromosomes (turning a 0 into a 1 and vice versa). In CCSA, the mutation operator performs a probabilistic neighborhood search, i.e., it basically replaces one solution vector by an arithmetic recombination of the solution vector and the difference vector of two randomly selected solution vectors.

CCSA versus SSA
Squirrel search algorithm is a novel nature-inspired metaheuristic algorithm, inspired by the dynamic foraging behavior of flying squirrels and their efficient gliding flight. The typical assumptions of the SSA algorithm are: (a) There are N flying squirrels in a forest, each of which is considered to be on a tree and searches for food individually.
(b) There are three kinds of trees in the forest: one hickory tree FS_ht (hickory nut food source), three acorn trees FS_at (acorn nut food source) and the remaining normal trees FS_nt.
The position updating mechanism of the squirrel search algorithm includes the following three cases. Case 1. Flying squirrels on acorn trees (FS_{at}^{t}) may move toward the hickory tree (FS_{ht}^{t}), which can be represented as

FS_{at}^{t+1} = \begin{cases} FS_{at}^{t} + d_g G_c \left( FS_{ht}^{t} - FS_{at}^{t} \right), & R_1 \ge P_{dp}, \\ \text{Random location}, & \text{otherwise}, \end{cases}
where d_g is the gliding distance, G_c is a gliding constant that controls the balance between exploration and exploitation in SSA, R_1 is a random number uniformly distributed in [0, 1], and P_dp is the predator presence probability.
Case 2. Some of the flying squirrels on normal trees (FS_{nt}^{t}) may move toward the acorn trees for food, which can be represented as

FS_{nt}^{t+1} = \begin{cases} FS_{nt}^{t} + d_g G_c \left( FS_{at}^{t} - FS_{nt}^{t} \right), & R_2 \ge P_{dp}, \\ \text{Random location}, & \text{otherwise}, \end{cases}

where R_2 is a random number uniformly distributed in [0, 1].
Case 3. The remaining flying squirrels on normal trees (FS_{nt}^{t}) may move toward the hickory tree (FS_{ht}^{t}) for food, which can be represented as

FS_{nt}^{t+1} = \begin{cases} FS_{nt}^{t} + d_g G_c \left( FS_{ht}^{t} - FS_{nt}^{t} \right), & R_3 \ge P_{dp}, \\ \text{Random location}, & \text{otherwise}, \end{cases}

where R_3 is a random number uniformly distributed in [0, 1]. Both the CCSA and SSA rely on an effective division of labor, which gives them a superficially similar appearance; however, their population updating mechanisms are entirely different.
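The three SSA cases above share one template: glide toward a target tree unless a predator appears. A minimal sketch (bounds and parameter values are illustrative assumptions):

```python
import numpy as np

def glide(fs, target, dg=0.8, gc=1.9, pdp=0.1, bounds=(-1.0, 1.0), rng=None):
    """One SSA position update (any of Cases 1-3): with probability 1 - P_dp
    the squirrel glides toward its target tree, FS + d_g * G_c * (target - FS);
    otherwise it is relocated to a random position inside the search bounds."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() >= pdp:                          # no predator: glide
        return fs + dg * gc * (target - fs)
    return rng.uniform(bounds[0], bounds[1], size=fs.shape)  # random location
```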
(1) In SSA, the pattern matrix is divided into three regions and SSA employs three strategies in the different regions of the pattern matrix. On the contrary, in CCSA, the whole pattern matrix uses one universal updating strategy, which is much easier to implement.
(2) In the mutation operator of CCSA, the difference information of two randomly selected individuals is used to increase population diversity, while in SSA this goal is achieved by introducing predator presence, which is modeled as a probabilistic behavior.

Numerical experiments
To verify the optimization performance of the CCSA algorithm in terms of convergence speed, convergence accuracy and robustness, in this section the CCSA algorithm is tested on 23 classic benchmark functions (Guo et al. 2020) and the CEC 2017 benchmark collection (Awad et al. 2016).

Effect of the crossover probability CR
According to the search mechanism of the CCSA algorithm, the value of the crossover probability CR has a certain impact on its performance. To investigate the effect of CR, experimental studies are carried out on some of the 23 well-known benchmark functions with CR set to 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6 and 0.7, respectively. The 23 well-known benchmark functions include the unimodal functions F1-F7 in Table 4, the multimodal functions F8-F13 in Table 5, and the fixed-dimensional multimodal functions F14-F23 in Table 6. The experimental results are shown in Table 7. As shown in Table 7, the CCSA algorithm performs best with CR = 0.5.

Comparison of CCSA and other algorithms on 23 classic benchmarks
To verify the performance of the proposed CCSA algorithm, we first carry out a comparison study between the proposed CCSA algorithm and six existing metaheuristic algorithms, including the PSO Eberhart and Kennedy (1995), GWO Mirjalili and Mirjalili (2014), mGWO Gupta and Deep (2020), QISCA Guo et al. (2020), SSA Jain et al. (2019) and the CSA algorithm, on the 23 classic benchmark functions. To conduct a fair comparison, the parameter configurations of the algorithms are set in accordance with those presented by their authors, and each algorithm is run independently 20 times on each benchmark function. The population size N and the maximum iteration number T in these algorithms are set as 50 and 500, respectively. The mean (AVE) and standard deviation (STD) results of the 23 benchmark functions with the seven algorithms are provided in Table 9, where the best and second-best results obtained on each test function are marked in bold and underlined, respectively. The last row of Table 9 provides the Wilcoxon rank-sum-test result with the CCSA algorithm as the baseline: "+/=/−" denotes the number of functions on which each comparison algorithm is superior/comparable/inferior to the CCSA method. From Table 9, it can be found that the CCSA method achieves the best or second-best result on 21 out of 23 benchmark functions, 15 of which are the best. Besides, the CCSA uniformly converges to the theoretical optimum in all 20 independent experiments on F1, F3, F9, F11 and F17. Table 10 shows the detection value P in the Wilcoxon rank-sum test.
With the significance level α = 0.05, the bold-face data in Table 10 indicate that there is no significant difference between the optimization results obtained by the comparison algorithm and the CCSA algorithm.
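The rank-sum test used for these significance comparisons can be computed without external libraries. The sketch below uses the large-sample normal approximation and ignores ties, which is enough for illustration; the function name and sample data are ours:

```python
import math

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the large-sample normal
    approximation (ties are ignored for simplicity)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes distinct values
    W = sum(rank[v] for v in x)                      # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

a = [1.2, 1.4, 1.1, 1.3, 1.5]   # e.g. best values of algorithm A over runs
b = [2.2, 2.4, 2.1, 2.3, 2.5]   # e.g. best values of algorithm B over runs
p = ranksum_pvalue(a, b)
print(p < 0.05)  # the two samples differ significantly at alpha = 0.05
```

A p-value below the significance level α = 0.05 rejects the hypothesis that the two algorithms' result distributions are the same, which is exactly the criterion behind the "+/=/−" counts in Table 9.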
To reflect the search efficiency of each algorithm more intuitively, Fig. 3 provides the convergence curves on some of the 23 benchmark functions with the seven algorithms. As shown in Fig. 3, the CCSA can converge to the global optimal value on most of the test functions and has the fastest convergence speed among these algorithms. This might be because the team communication operator enables the algorithm to conduct global exploration and locate the promising search area quickly, while the crossover and reflective learning operators enable the algorithm to explore different areas within the search space, which helps it jump out of local optima as well as enhance convergence speed and solution accuracy. All these reveal that the CCSA algorithm outperforms the other comparison algorithms in terms of convergence speed and solution accuracy.
To explore the distributions of the optimal values obtained in different runs, Fig. 4 shows the box plots of the seven algorithms on several benchmark functions, in which abundant information (the minimum, quartiles, median and maximum) about the obtained results is shown intuitively. From Fig. 4, we can see that the objective-value distributions of the CCSA algorithm are more concentrated than those of the other comparison algorithms in most cases, indicating the excellent robustness of the CCSA in solving the test problems.
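The five quantities summarized by each box plot can be computed directly from the per-run results; a minimal sketch, using linear-interpolation quantiles (one common convention among several, and the sample run data are hypothetical):

```python
def five_number_summary(runs):
    """Minimum, lower quartile, median, upper quartile and maximum of the
    best objective values collected over independent runs."""
    s = sorted(runs)
    def q(p):
        # linear interpolation between closest ranks
        idx = p * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])
    return s[0], q(0.25), q(0.5), q(0.75), s[-1]

# hypothetical best objective values from 7 independent runs
print(five_number_summary([3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.0]))
# → (1.0, 1.75, 3.0, 4.5, 9.0)
```

A narrow interquartile range (small spread between the first and third quartiles) across runs is what the text above reads as robustness.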

Comparison of CCSA and other algorithms on CEC 2017 benchmarks
To fully verify the performance of the CCSA algorithm in solving high-dimensional complex optimization problems, we use the CCSA algorithm to solve the CEC 2017 benchmark suite and compare the results with six recent metaheuristic algorithms, including the OBPSO Wang et al. (2007), SCA-PSO Nenavath et al. (2018), QISCA Guo et al. (2020), mGWO Gupta and Deep (2020), ISSA Hu et al. (2019) and the CSA algorithm. In the CEC 2017 test, the population size N and the maximum iteration number T are set as 50 and 1000, respectively. Table 11 shows the average (AVE), standard deviation (STD) and rank of the seven algorithms over 20 independent runs in 50 dimensions. The second-to-last row of Table 11 provides the total ranking over the 29 test functions for each algorithm and the last row provides the total running time of each algorithm. Table 12 shows the detection value P in the Wilcoxon rank-sum test with the significance level α = 0.05. The bold-face data in Table 12 indicate that there is no significant difference between the optimization results obtained by the comparison algorithm and the CCSA algorithm. Besides, Figs. 5 and 6 display the convergence curves and box plots of nine test functions solved by the seven algorithms. As shown in Table 11, the CCSA algorithm performs best on 19 out of 29 test functions with a total ranking of 47, while the total rankings of the comparison algorithms OBPSO, SCA-PSO, QISCA, mGWO, ISSA and CSA are 174, 100, 88, 117, 100 and 186, respectively. Besides, the Wilcoxon rank-sum-test results of the six comparison algorithms are 0/0/29, 3/0/26, 5/0/24, 4/0/25, 6/0/23 and 0/0/29, respectively. Figures 5 and 6 reveal that the CCSA has a fast convergence rate and a high convergence accuracy. It is clear from the obtained results that the CCSA significantly outperforms the other comparison algorithms.
The time complexity of the PSO is O(N + NT), while that of the SSA, GWO, mGWO, CSA, QISCA and the proposed CCSA is O(N + 3NT); the time complexity of the CCSA is thus relatively high. However, the test results on the 23 classic benchmark functions and the 29 CEC 2017 benchmark functions demonstrate that the CCSA has excellent local exploitation performance on unimodal problems and the ability to jump out of local optima on multimodal problems. Besides, as shown in Tables 9, 10, 11 and 12, the results obtained by the CCSA algorithm are much better than those obtained by the CSA in solving multimodal problems, indicating that the exploration capability of the CSA algorithm is greatly improved after incorporating these modifications. This may be because the probabilistic neighborhood selection in the mutation operator and the merging of different dimensional information of various solutions in the crossover operator can help the population find more excellent solutions in different areas, and thus, the population has a high probability of jumping out of local optima. Consequently, the CCSA can yield better optimization results than the other comparison algorithms in general. Given the excellent optimization performance of the proposed CCSA algorithm, we apply the CCSA algorithm to solve the multi-degree reduction problems of Ball Bézier surfaces.

The optimal multi-degree reduction in Ball Bézier surfaces

Ball Bézier surfaces and their degree reduction
Let $\mathbb{R}$ denote the set of all real numbers and $\mathbb{R}^+$ the set of all non-negative real numbers. A ball is defined as the point set

$$B(p,r)=\{x\in\mathbb{R}^3:\ \|x-p\\|\le r\},\qquad p\in\mathbb{R}^3,\ r\in\mathbb{R}^+.$$

Definition 1 Given $(m+1)\times(n+1)$ control balls $\{\langle p_{i,j},r_{i,j}\rangle\}_{i,j=0}^{m,n}$, a Ball Bézier surface of degree $(m\times n)$ is defined as

$$P(u,v)=\sum_{i=0}^{m}\sum_{j=0}^{n}B_i^m(u)\,B_j^n(v)\,\langle p_{i,j},r_{i,j}\rangle,\qquad (u,v)\in[0,1]^2,$$

where $B_i^m(u)$ is the $i$th Bernstein basis function of degree $m$ and $B_j^n(v)$ is the $j$th Bernstein basis function of degree $n$. As $P(u,v)$ can be rewritten as $\langle S(u,v),R(u,v)\rangle$, a Ball surface can be regarded as having two parts: (1) the center surface

$$S(u,v)=\sum_{i=0}^{m}\sum_{j=0}^{n}B_i^m(u)\,B_j^n(v)\,p_{i,j};$$

(2) the radius function

$$R(u,v)=\sum_{i=0}^{m}\sum_{j=0}^{n}B_i^m(u)\,B_j^n(v)\,r_{i,j}.$$

For a Ball Bézier surface $P(u,v)$ of degree $(m\times n)$, its multi-degree reduction is to find a Ball Bézier surface $Q(u,v)$ of degree $(m_1\times n_1)$ ($m_1\le m$, $n_1\le n$) such that

$$P(u,v)\subseteq Q(u,v),\qquad (u,v)\in[0,1]^2,\tag{25}$$

and the radius of the low-$(m_1\times n_1)$th-degree Ball Bézier surface $Q(u,v)$ is as small as possible (Wu and Deng 2006). Since the inclusion condition in Eq.(25) is not easy to deal with directly, it is usually replaced by

$$G(u,v)\ge R(u,v)+\|C(u,v)-S(u,v)\|,\qquad (u,v)\in[0,1]^2,\tag{26}$$

where $S(u,v)$ and $R(u,v)$ are the center surface and radius function of the original Ball Bézier surface $P(u,v)$, respectively, $C(u,v)$ and $G(u,v)$ are the center surface and radius function of the lower-degree Ball Bézier surface $Q(u,v)$, respectively, and, for a fixed pair of parameters $(u,v)$, $\|C(u,v)-S(u,v)\|$ is the distance between the corresponding center points. In the previous literature, the constraint in Eq.(26) is usually further simplified to the stronger condition

$$G(u,v)\ge R(u,v)+\max_{(u,v)\in[0,1]^2}\|C(u,v)-S(u,v)\|.\tag{27}$$

The simplification from Eq.(26) to Eq.(27) makes the radius of the resulting Ball Bézier surface larger than the theoretical optimum; besides, the consequent derivation and calculation are quite complicated.
More importantly, the degree reduction problems under different interpolation constraints, such as degree reduction without interpolation constraints, degree reduction with endpoint interpolation, or degree reduction with boundary interpolation, have to be analyzed and solved by different methods, so that no universal approach exists for the multi-degree reduction problems of Ball surfaces under different interpolation constraints.

The algorithm for the optimal multi-degree reduction in Ball Bézier surfaces
In some cases, there are no interpolation constraints on the degree-reduced Ball surfaces, while in others, the degree-reduced Ball surfaces are required to interpolate some endpoints or boundaries of the original one, which means that some control balls of the degree-reduced Ball surface are fixed and the others are movable (or unknown). In order to determine the optimal positions and radii of the unknown control balls, we first separate the fixed and unknown control balls. Let I_1 = {0, 1, …, m} × {0, 1, …, n}, I_2 = {0, 1, …, m_1} × {0, 1, …, n_1}, and let M ⊆ I_2 be the (possibly empty) subset containing the row-column index pairs of the unknown control balls of the Ball Bézier surface Q(u, v). Similar to the work in Wu and Deng (2006), the center surface and radius function of a Ball Bézier surface are treated as two separate parts, and thus the multi-degree reduction in a Ball Bézier surface is divided into two steps:
Step 1 The optimal multi-degree reduction in center surface.
Step 2 The optimal multi-degree reduction in radius function.
In this study, for a Ball Bézier surface, the optimal multi-degree reduction in its center surface and radius function is formulated as two optimization problems. A change of interpolation constraints is treated as a change of decision variables, while the objective function and constraint conditions remain unchanged; thus, a single optimization method can deal with all these cases.

The optimal multi-degree reduction in center surface
As for the optimal multi-degree reduction in center surface in Step 1, we formulate the problem as an optimization one in which the objective function is defined as the $L_2$ distance between the original $(m\times n)$th-degree center Bézier surface and the low-$(m_1\times n_1)$th-degree center Bézier surface:

$$F_1[C;S]=\int_0^1\!\!\int_0^1\|C(u,v)-S(u,v)\|^2\,\mathrm{d}u\,\mathrm{d}v.\tag{28}$$

By separating the fixed and unknown control balls of the low-$(m_1\times n_1)$th-degree Ball Bézier surface $Q(u,v)$, its center surface $C(u,v)$ can be represented as (Juhász and Róth 2019)

$$C(u,v)=\sum_{(i,j)\in M}B_i^{m_1}(u)B_j^{n_1}(v)\,q_{i,j}+\sum_{(i,j)\in I_2\setminus M}B_i^{m_1}(u)B_j^{n_1}(v)\,q_{i,j},$$

where the control points with indices in $I_2\setminus M$ are fixed by the interpolation constraints and those with indices in $M$ are unknown. Substituting this representation into Eq.(28) and using the Bernstein-basis formulas

$$B_i^m(u)\,B_k^n(u)=\frac{\binom{m}{i}\binom{n}{k}}{\binom{m+n}{i+k}}\,B_{i+k}^{m+n}(u),\qquad \int_0^1 B_i^m(u)\,\mathrm{d}u=\frac{1}{m+1},$$

the objective function $F_1[C;S]$ in Eq.(28) can be expanded into a quadratic function of the unknown control points, whose coefficients involve the constants

$$B_{ijkl}=\frac{1}{(2m_1+1)(2n_1+1)}\cdot\frac{\binom{m_1}{i}\binom{m_1}{k}}{\binom{2m_1}{i+k}}\cdot\frac{\binom{n_1}{j}\binom{n_1}{l}}{\binom{2n_1}{j+l}}.$$

In order to determine the positions of the unknown control points $\{q_{i,j}\}_{(i,j)\in M}$ which minimize the objective function $F_1[C;S]$, one has to solve the quadratic minimization problem $F_1[C;S]\to\min$. Taking the partial derivatives of the expanded objective in Eq.(35) with respect to the unknown control points and setting them to zero yields the system of linear equations in Eq.(37), by solving which the unknown control points $\{q_{i,j}\}_{(i,j)\in M}$ of the approximate center surface $C(u,v)$ can be obtained.
Compared with the degree reduction methods based on Chebyshev polynomials, which can only reduce the degree by one at a time (Wu and Deng 2006), our method obtains the optimal low-(m_1 × n_1)th-degree Bézier surface directly by solving a single system of linear equations, simplifying the degree reduction procedure of the center surface remarkably.
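The least-squares construction above can be illustrated on its one-dimensional analogue: degree reduction of a Bézier curve by solving the normal equations of a sampled L2 distance. This is only a sketch of the same idea (the paper's surface formulation uses exact Bernstein integrals rather than sampling, and all function names here are ours):

```python
import math

def bernstein(n, i, t):
    """Bernstein basis function B_i^n(t)."""
    return math.comb(n, i) * t**i * (1 - t)**(n - i)

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][k] - f * M[c][k] for k in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

def degree_reduce(ctrl, m1, samples=64):
    """Least-squares degree reduction of a (scalar) Bézier curve: find the
    m1+1 control values minimizing the sampled L2 distance to the original
    degree-m curve -- the 1-D analogue of the center-surface problem."""
    m = len(ctrl) - 1
    ts = [k / (samples - 1) for k in range(samples)]
    s = [sum(bernstein(m, i, t) * ctrl[i] for i in range(m + 1)) for t in ts]
    # normal equations  (B^T B) q = B^T s  for the degree-m1 basis matrix B
    A = [[sum(bernstein(m1, i, t) * bernstein(m1, j, t) for t in ts)
          for j in range(m1 + 1)] for i in range(m1 + 1)]
    b = [sum(bernstein(m1, i, ts[k]) * s[k] for k in range(samples))
         for i in range(m1 + 1)]
    return solve(A, b)

# a cubic that is a degree-elevated quadratic reduces back exactly
cubic = [0.0, 2/3, 4/3, 2.0]      # degree elevation of [0, 1, 2]
print(degree_reduce(cubic, 2))    # ≈ [0.0, 1.0, 2.0]
```

Fixing some control values (the interpolation-constrained ones) simply moves the corresponding terms to the right-hand side of the normal equations, which is how the index set M enters the paper's formulation.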
After determining the center surface of the low-(m_1 × n_1)th-degree Ball Bézier surface Q(u, v), take sufficiently many uniform sampling points {(u_i, v_j)}_{i,j=1}^{K_1,K_2} ⊂ [0, 1]^2 (here, we set K_1 = K_2 = 20) to compute and store the discrete distances between the original center surface and the approximate one for the next step.

Fig. 7 The optimal multi-degree reduction from a Ball Bézier surface of degree (6 × 6) to one of degree (5 × 5) without interpolation constraints. a The original Ball Bézier surface. b The original center surface and the optimal approximate center surface obtained by the proposed method. c Convergence curves of the objective function F_2[G; R] in Eq.(39) with the proposed CCSA and six existing metaheuristic algorithms. d Comparison between the function R(u, v) + ‖C(u, v) − S(u, v)‖ and the optimal radius function G(u, v) obtained using the CCSA. e The optimal low-(5 × 5)th-degree Ball Bézier surface obtained using the CCSA

The optimal multi-degree reduction in radius function
As for the optimal multi-degree reduction in the radius function of Ball Bézier surface P (u, v) in Step 2, we formulate the problem as the following optimization one.
Objective function:

$$F_2[G;R]=\sum_{i=1}^{K_1}\sum_{j=1}^{K_2}\big[G(u_i,v_j)-R(u_i,v_j)-d_{i,j}\big]^2+\lambda\sum_{i=1}^{K_1}\sum_{j=1}^{K_2}\max\big(0,\;R(u_i,v_j)+d_{i,j}-G(u_i,v_j)\big),\tag{39}$$

where $R(u,v)$ and $G(u,v)$ denote the radius functions of the Ball Bézier surface before and after degree reduction, respectively, and $d_{i,j}$ is the discrete distance between the original and the approximate center surface at $(u_i,v_j)$. Variable ranges:

$$0\le g_{i,j}\le k\max_{(i,j)\in I_1}r_{i,j},\qquad (i,j)\in M,$$

where the decision variables $\{g_{i,j}\}_{(i,j)\in M}$ of the optimization problem are the unknown control radii of the low-$(m_1\times n_1)$th-degree Ball Bézier surface $Q(u,v)$. The first term in Eq.(39) is used to ensure that the radius function $G(u,v)$ of the lower-degree Ball Bézier surface $Q(u,v)$ is as small as possible, while the second term is employed to guarantee that $G(u_i,v_j)$ is larger than $R(u_i,v_j)+d_{i,j}$ for every $(u_i,v_j)$, so as to fulfill the inclusion condition in Eq.(25). $k$ is a parameter determined by the reduced degree; here, we set $k=3$. $\lambda$ is a weighting coefficient, which should be sufficiently large to enforce the inclusion condition in Eq.(25); here, we set $\lambda=100$. The optimization problem can be solved using a metaheuristic method, with which we search within the variable ranges for the optimal set of control radii of the lower-degree Ball Bézier surface $Q(u,v)$; the optimal low-$(m_1\times n_1)$th-degree approximate radius function of the original Ball Bézier surface is then obtained. Combining the optimal center Bézier surface $C(u,v)$ obtained in Step 1 and the radius function $G(u,v)$ obtained in Step 2 yields the optimal low-$(m_1\times n_1)$th-degree Ball Bézier surface $Q(u,v)$. The implementation steps of the optimal multi-degree reduction in Ball Bézier surfaces based on the proposed CCSA algorithm are shown in Table 13.
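The penalty-type radius optimization can be sketched in one dimension. Plain random search stands in here for the CCSA, the lower-bound samples R(u_i, v_j) + d_{i,j} are replaced by a hypothetical curve, and the penalty form is a simplification; only λ = 100 follows the text:

```python
import math, random

def bernstein(n, i, t):
    return math.comb(n, i) * t**i * (1 - t)**(n - i)

def penalty_objective(g, lower, ts, lam=100.0):
    """First term: keep the degree-reduced radius curve G small (close to
    the lower bound); second term: penalize violating G(t) >= lower(t)."""
    n = len(g) - 1
    fit, viol = 0.0, 0.0
    for t, lo in zip(ts, lower):
        G = sum(bernstein(n, i, t) * g[i] for i in range(n + 1))
        fit += (G - lo) ** 2
        viol += max(0.0, lo - G)
    return fit / len(ts) + lam * viol

# hypothetical lower bound R(t) + d(t) at 20 uniform samples
ts = [k / 19 for k in range(20)]
lower = [1.0 + 0.3 * math.sin(math.pi * t) for t in ts]

random.seed(2)
best, best_f = None, float("inf")
for _ in range(5000):                 # random search as a simple stand-in
    g = [random.uniform(0.5, 3.0) for _ in range(4)]  # cubic radius curve
    f = penalty_objective(g, lower, ts)
    if f < best_f:
        best, best_f = g, f

print(round(best_f, 3))
```

Any population-based optimizer (including the CCSA) can replace the sampling loop unchanged, since the objective only needs function evaluations; that derivative-free property is why a metaheuristic fits this step.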

Experiments to verify the optimal multi-degree reduction algorithm
To verify the effectiveness of the proposed algorithm, the degree reduction results of different Ball Bézier surfaces without interpolation constraints, with four-endpoint interpolation, and with two-boundary interpolation are provided in Figs. 7, 8 and 9, respectively. In each case, the comparison diagram of the center surfaces before and after degree reduction is displayed to show the degree reduction effect of the center surface intuitively. Moreover, the comparison diagram of the radius function G(u, v) and the function R(u, v) + ‖C(u, v) − S(u, v)‖ is displayed to check whether G(u, v) ≥ R(u, v) + ‖C(u, v) − S(u, v)‖ holds for all (u, v) ∈ [0, 1]^2. If so, the inclusion condition in Eq.(25) is satisfied, and the approximation quality of G(u, v) to R(u, v) + ‖C(u, v) − S(u, v)‖ can be used to evaluate the degree reduction effect of the radius function; if not, the inclusion condition in Eq.(25) is not satisfied and the degree reduction is not valid. Finally, the Ball Bézier surfaces before and after degree reduction are displayed to show the overall degree reduction effect intuitively.
Fig. 9 The optimal multi-degree reduction from a Ball Bézier surface of degree (7 × 7) to one of degree (7 × 5) with two-boundary interpolation. a The original Ball Bézier surface. b The original center surface and the optimal approximate center surface obtained by the proposed method. c Convergence curves of the objective function F_2[G; R] in Eq.(39) with the proposed CCSA and six existing metaheuristic algorithms. d Comparison between the function R(u, v) + ‖C(u, v) − S(u, v)‖ and the optimal radius function G(u, v) obtained using the CCSA. e The optimal low-(7 × 5)th-degree Ball Bézier surface obtained by the CCSA, where the control balls of the two unchanged boundaries are drawn in red
Example 1 Given a Ball Bézier surface of degree (6 × 6), Fig. 7 shows the optimal approximate Ball Bézier surface of degree (5 × 5) under no interpolation constraints.
Example 2 Given a Ball Bézier surface of degree (7 × 7), Fig. 8 shows the optimal approximate Ball Bézier surface of degree (5 × 5) with the interpolation of the four endpoints of the original Ball surface.
Example 3 Given a Ball Bézier surface of degree (7 × 7), Fig. 9 shows the optimal approximate Ball Bézier surface of degree (7 × 5) with the interpolation of two boundaries of the original Ball surface.

Results and discussion
(1). As for the method for the multi-degree reduction in center surface, from Figs. 7(b), 8(b) and 9(b), we can see that the obtained optimal lower-degree center surface is very close to the original one. The boundary curves of the center surface before and after degree reduction basically coincide, and there are large overlapping regions between the degree-reduced center surface and the original one. All these indicate that a good approximation effect is achieved by the proposed multi-degree reduction method for Bézier surfaces.
(2). As for the multi-degree reduction in radius functions, it can be seen from Figs. 7(d), 8(d) and 9(d) that the green surface, i.e., the radius function after degree reduction, is always above the yellow surface, i.e., the function R(u, v) + ‖C(u, v) − S(u, v)‖, indicating that the condition G(u, v) ≥ R(u, v) + ‖C(u, v) − S(u, v)‖ is satisfied for any (u, v) ∈ [0, 1]^2. Under this condition, G(u, v) approximates R(u, v) + ‖C(u, v) − S(u, v)‖ as closely as possible for different original Ball Bézier surfaces, which reveals that both the inclusion condition in Eq.(25) and the requirement that the radius of the lower-degree Ball Bézier surface be as small as possible are fully satisfied by the proposed multi-degree reduction method for the radius function.
(3). It is obvious from the convergence curves of the CCSA and the other existing metaheuristic algorithms in Figs. 7(c), 8(c) and 9(c) that the CCSA algorithm performs better in both convergence speed and accuracy in solving the degree reduction problems of radius functions.
(4). By comparing the original Ball Bézier surfaces with those after degree reduction, we can see that each pair of Ball Bézier surfaces is similar in both shape and thickness.
All these demonstrate that Algorithm 3 works well for the optimal multi-degree reduction in Ball Bézier surfaces under different interpolation constraints.

Conclusion
This study proposes an improved cooperation search algorithm (CCSA) by incorporating a mutation operator and a crossover operator. The mutation operator is adopted to enhance population diversity, and the crossover operator is employed to retain potentially useful information already obtained about the search space. These operators help enhance the collaborative strength of the team and maintain an appropriate balance between exploration and exploitation. The efficiency of the proposed CCSA is investigated on 23 classic benchmark functions and 29 CEC 2017 benchmark functions. Numerical experiments on these benchmark problems demonstrate the better search efficiency, solution accuracy and convergence speed of the proposed CCSA.
Besides, in order to address the issue that, under each kind of specific interpolation constraint, a concrete analysis is required for the multi-degree reduction in Ball Bézier surfaces, we propose a new universal method for the multi-degree reduction in Ball Bézier surfaces under different interpolation constraints by introducing metaheuristic algorithms. A comparison study is carried out on the performance of the CCSA and some existing metaheuristic algorithms in solving the degree reduction problem; among these algorithms, the CCSA performs best in both convergence speed and solution accuracy. The degree reduction results under different interpolation constraints show that the proposed method is effective and flexible, achieving automatic and intelligent multi-degree reduction in Ball Bézier surfaces. Owing to the excellent optimization ability of the CCSA algorithm, it is a future research topic to apply the CCSA algorithm to economic and production areas, such as the modeling of production inventory under carbon emission (Ruidas et al. 2021) and price revision (Ruidas et al. 2022), and the modeling of economic production quantity (EPQ). Besides, exploring the practical applications of metaheuristic algorithms in the areas of computer-aided design (CAD) and computer-aided geometric design (CAGD), such as curve and surface fitting and 3D reconstruction, will be a promising research issue in the future.