Self-adaptive salp swarm algorithm for optimization problems

In this paper, an enhanced version of the salp swarm algorithm (SSA) for global optimization problems is developed. Two improvements are proposed: (i) diversification of the SSA population, referred to as SSA_std; (ii) tuning of the SSA parameters using a self-adaptive technique based on a genetic algorithm (GA), referred to as SSA_GA-tuner. The motivation for developing a self-adaptive SSA is to enhance its performance by balancing search exploration and exploitation. The enhanced SSA versions are evaluated using twelve benchmark functions. The diversified population of SSA_std improves convergence behavior, and the self-adaptive parameter tuning of SSA_GA-tuner improves convergence behavior as well, thus improving overall performance. A comparative evaluation against nine well-established methods shows the superiority of the proposed SSA versions. The improvement in accuracy ranged from 2.97% to 99% across all versions of the algorithm.
In a nutshell, the proposed SSA versions show a powerful enhancement and can be applied to a wide range of optimization problems.

Owing to their successful track record, swarm-based intelligence algorithms have been widely tailored to various types of optimization problems. However, the efficiency of these algorithms is directly affected by the nature of the search space of the optimization problem (Tubishat et al. 2020). Therefore, the theoretical aspects of the solving algorithm can be improved in line with the search space properties of the optimization problem at hand. The improvements normally target either parameter settings or operator behavior. Sometimes, improvement can be achieved by hybridizing such algorithms with other algorithms to enhance convergence characteristics. From another perspective, these algorithms do not share the same structure and search mechanism. They differ in their properties, and they also behave differently depending on the problem under consideration. For instance, BFO has a complex structure while PSO has a very simple one (Sun and Xu 2017); at the same time, PSO converges easily while ACO converges slowly. Moreover, exploration and exploitation are well organized and more capable in ABC and BFO, whereas they are poor in ACO (Edathil and Singh 2019). Furthermore, ABC and ACO are strongly affected by and sensitive to parameter settings and initial values, whereas PSO is sensitive to problem dimension. Finally, ABC exhibits strong randomness with low-accuracy outcomes (Heydarpoor et al. 2020).
Based on the "No Free Lunch" (NFL) theorem (Wolpert and Macready 1997), no algorithm is able to handle all types of problems. Several works have attempted to improve the performance of swarm intelligence algorithms to obtain efficient and accurate outcomes. A new swarm intelligence algorithm, called the salp swarm algorithm (SSA), was introduced a few years ago by Mirjalili et al. (2017). This algorithm mimics the swarming behavior of salps in the sea. SSA has been verified on different engineering applications and several benchmark problems, and it has consequently been applied to a wide variety of optimization problems. For instance, in (Khamees et al. 2018; Sayed et al. 2018), the SSA algorithm is employed for the feature selection problem, whereas in El-Fergany (2018), SSA is utilized to extract the optimal parameters of polymer electrolyte membrane (PEM) fuel cells. Another noticeable use of SSA is designing a complementary metal-oxide-semiconductor (CMOS) analog integrated circuit (IC) by Asaithambi and Rajappa (2018). Also, Ekinci and Hekimoglu (2018) employ the SSA algorithm in calibrating the power system stabilizer for a multi-machine power system. Next, SSA is used by Hussien et al. (2017) to estimate the activities of a chemical substance. Furthermore, in Wang et al. (2018), a study of short-term load forecasting utilizing an SSA-based classifier was conducted. SSA has also been applied to the problem of fish image segmentation and to predicting parameter values for the soil water retention curve, and it has been proposed for parameter optimization of a detection model used for photovoltaic cell techniques (Abbassi et al. 2019). Furthermore, SSA was utilized for load frequency control of power systems in Barik and Das (2018).
Several attempts to improve SSA have been proposed in recent years. Hegazy et al. (2020) enhanced the SSA structure through tuning its control parameters. In addition, a binary SSA algorithm based on an Arctan transformation was introduced (Rizk-Allah et al. 2018). Also, a chaos-induced SSA is proposed in Yu et al. (2018), where variables are initialized through a chaotic sequence employed in place of random variables. Furthermore, chaos-induced and mutation-driven schemes, as well as greedy criteria, have been hybridized with the SSA algorithm to improve convergence (Sayed et al. 2018).
As the above limitations of the SSA algorithm and the modifications made to it by several researchers show, the SSA algorithm does not perform as required in its standard form, and it needs to be modified and enhanced in order to produce satisfactory and competitive results. This motivates us to modify the SSA algorithm in ways that enhance its performance.
The main objective of this work is to propose a new improved salp swarm algorithm (SSA) by incorporating two improvement strategies, one in the initial population and one in the parameter tuning. To achieve this objective, the following contributions are made: (1) In the first improvement strategy, the initial population is chosen based on a diversity measurement: several populations are generated and the one with the highest diversity value is selected. The resulting algorithm is called the diversified SSA algorithm (SSA_std).
(2) In the second strategy, self-adaptive parameter control of the SSA parameters is carried out using a genetic algorithm to find the optimal parameters for SSA; the resulting algorithm is called the self-adaptive SSA (SSA_GA-tuner). (3) For verification and validation purposes, the proposed self-adaptive SSA algorithm is compared against the standard SSA algorithm and also against state-of-the-art algorithms using twelve standard benchmark functions. (4) The results prove the high impact of the self-adaptive SSA on the final outcomes.
The rest of the paper is organized as follows: Sect. 2 scans the Related Works. The fundamental background for the standard SSA and GA are discussed in Sect. 3. The proposed diversified SSA algorithm and self-adaptive SSA algorithms are described in Sect. 4. Results and discussions are thoroughly discussed and analyzed in Sect. 5. Finally, the conclusion and possible future directions are illustrated in Sect. 6.

Related works
In this section, the two main concepts of diversity and parameter control are overviewed, as they relate to the main contributions of the present paper. The related work on population diversity is provided in Sect. 2.1, while the relevant literature on parameter control is given in Sect. 2.2.

Population diversity
As is conventionally known, the initial population affects the convergence behavior of any swarm-based or evolutionary-based metaheuristic algorithm. When a population-based algorithm is initiated with a strong population of appreciable diversity, the problem search space can be navigated entirely with an effective scan. It is generally agreed in the optimization domain that the search should emphasize diversity in its initial stage and then move toward an equilibrium state until the search stagnates, at which point the intensification state is reached. In general, population diversity strategies can be categorized into two types: off-line (diversity initialization) and on-line (diversity preservation) (Senkerik et al. 2018; Dash et al. 2019). An off-line diversity strategy is defined as the process of initializing a diverse population before the metaheuristic is executed, whereas a diversity preservation strategy is defined as the process of monitoring the population and keeping it as diverse as possible throughout the algorithm's execution. Population diversity must be maintained either before or during algorithm execution, because metaheuristic performance is sensitive to initial population diversity (Talbi 2009; Dash et al. 2019). Several research studies on initial population diversity have been proposed to investigate its impact on algorithm performance and final solution quality. Some of those studies are summarised in Table 1.
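As a hypothetical illustration of the off-line strategy (not the paper's own measure, which is introduced in Sect. 4.1), the sketch below scores several randomly generated candidate populations with a mean pairwise-distance diversity metric and keeps the most diverse one; the function names and the metric are our own illustrative choices:

```python
import numpy as np

def mean_pairwise_distance(pop):
    """Average Euclidean distance between all pairs of individuals."""
    n = len(pop)
    dists = [np.linalg.norm(pop[i] - pop[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def diverse_init(n, dim, lb, ub, candidates=10, seed=0):
    """Off-line strategy: draw several random populations and
    keep the one with the highest diversity score."""
    rng = np.random.default_rng(seed)
    pops = [lb + rng.random((n, dim)) * (ub - lb) for _ in range(candidates)]
    return max(pops, key=mean_pairwise_distance)
```

An on-line (preservation) strategy would instead apply such a metric repeatedly during the run and react when diversity drops.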
All the mentioned methods focus on increasing the initial population diversity in order to increase the robustness of the proposed algorithm against premature convergence (Song et al. 2019; Deng et al. 2019), avoid trapping in local optima (Eskandari et al. 2019), and achieve a balance between exploitation and exploration (Balande and Shrimankar 2019).

Parameter control strategies
Normally, the parameter settings of any optimization algorithm can be classified into two types: parameter tuning and parameter control (Eiben et al. 1999). Parameter tuning is the process of choosing the right parameters before the search; these parameter values remain unchanged during the search. Parameter tuning is normally carried out either by experienced users or through exhaustive ad-hoc experimental parameter studies. On the other hand, parameter control is defined as the process of finding the optimal parameter settings for the optimization algorithm during the search to empower its search process and thus improve the final outcomes. The main purpose of parameter control is twofold: (i) to build a parameter-less optimization algorithm that can be used by naive users as a black box; (ii) to make full use of the algorithm's efficiency by striking the right balance between wide-area exploration and local-neighborhood exploitation during the search.
There are three types of parameter control strategies, illustrated in Fig. 1: deterministic, adaptive, and self-adaptive (Eiben and Smith 2015). Deterministic parameter control modifies the value of the parameters during the search, normally as a function of the number of generations, without feedback from the accumulated search process. Adaptive parameter control is a strategy that updates the parameter values during the search based on feedback from the accumulated search: for example, when the search frequently improves the population, the parameters are updated so that their operators focus on intensification rather than diversification; in contrast, when the search stagnates and the population becomes idle, the parameter values are updated so that their operators focus on diversification rather than intensification. The third type is self-adaptive, or "evolution of evolution", parameter control, in which the parameter values of the outer evolutionary algorithm are updated using another, inner evolutionary algorithm. The inner evolutionary algorithm uses the set of parameters as a chromosome to be optimized and evaluated by the outer evolutionary algorithm. This type of parameter control strategy is the core contribution of this paper, where SSA is the outer evolutionary algorithm and GA is the inner one. Some research works from the literature on parameter control strategies are summarized in Table 2.
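The distinction between the first two strategies can be sketched as follows; the linear schedule and the step-based feedback rule are illustrative assumptions, not taken from any cited work:

```python
def deterministic_c(t, T, c_max=2.0, c_min=0.0):
    """Deterministic control: the parameter follows a fixed schedule
    of the iteration counter t (out of T), with no search feedback."""
    return c_max - (c_max - c_min) * t / T

def adaptive_c(c, improved, step=0.1, lo=0.0, hi=2.0):
    """Adaptive control: shrink the parameter (intensify) while the
    search keeps improving, grow it (diversify) on stagnation."""
    c = c - step if improved else c + step
    return min(hi, max(lo, c))
```

Self-adaptive control, by contrast, encodes the parameters themselves into the chromosomes of an inner evolutionary algorithm, as done in this paper.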

Research background
In order to provide a self-exploratory paper, this section presents the standard Salp Swarm Algorithm (SSA) in Sect. 3.1 and standard Genetic Algorithm (GA) in Sect. 3.2.

Fundamentals to salp swarm algorithm
The salp swarm algorithm (SSA) was proposed by Mirjalili et al. (2017). It is inspired by the swarming behavior of sea salps. Salps belong to the Salpidae family and have a cylindrical shape, as shown in Fig. 2a. In order to move in the sea, salps form a chain, namely the "swarm chain", shown in Fig. 2b. This swarm behavior assists salps in foraging and moving easily. The first salp in the chain is called the leader, and the rest of the salps are called the followers.
To elaborate, the leader has the important task of guiding the swarm chain in movement and foraging. The salp positions are formulated in a dim-dimensional search space, where dim is the number of decision variables (or the solution dimension) for a certain problem. The salp positions are stored in a matrix, namely x. Furthermore, the food source (F) in the search space is the main target of the salp chain.
The SSA mechanism begins with a set of random positions for the salps. Formally, the positions of the salps are generated using Eq. (5), where ub_j and lb_j are the upper and lower bounds of decision variable j, respectively. A set of salps constitutes one solution, and a set of solutions constitutes one population, as shown in Fig. 3.
After computing the fitness value of every solution, the best solution is found and assigned as the food source (F). The leader movement is then computed using Eq. (2),
where x^1_j represents the position of the leader salp in the jth dimension and F_j is the position of the food source in the jth dimension. It is notable from Eq. (2) that the leader's movement is updated with respect to the food source. The parameter c_1, computed using Eq. (3), is the most important parameter, since it is largely responsible for the balance between exploration and exploitation,
where l and L are the current iteration and the maximum number of iterations, respectively. The parameters c_2 and c_3 are random numbers uniformly drawn from the range [0, 1]. On the other hand, the follower salp positions are computed using Eq. (4),
where i ≥ 2 and x^i_j represents the position of the ith follower salp in the jth dimension.
The simulation of salp swarm behavior proceeds as follows. The algorithm starts with the initialization stage by generating a collection of salp chains representing a group of solutions; together, these solutions constitute the initial population of the algorithm. Next, the fitness of all solutions is calculated and the best solution is determined. After initialization, the SSA improvement stage starts by calculating the values of the c_1, c_2, and c_3 parameters. Then, the salp positions are updated, either by Eq. (2) for the first salp in the chain or by Eq. (4) for the remaining salps. The solution-update step is followed by checking whether the salps are still within the lower and upper limits: if any salp exceeds the upper limit it is reset to the upper bound, and if any salp falls below the lower limit it is reset to the lower bound. At this point, the fitness of the updated solutions is calculated and compared with the fitness of the previous solutions, and the best one is kept as the best solution.
The above-mentioned steps, except of course the initialization step, are carried out iteratively until the termination criteria are met. Finally, the standard salp swarm algorithm flowchart is presented in Fig. 4, and the pseudo-code is given in Algorithm 1.
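The steps above can be sketched as a minimal Python implementation of standard SSA, following the update rules of Mirjalili et al. (2017) (Eqs. (2)-(5)); the function names and loop structure are our own illustrative choices:

```python
import numpy as np

def ssa(obj, lb, ub, n=30, dim=2, max_iter=200, seed=1):
    """Minimal sketch of standard SSA; obj is the fitness function
    to minimize, lb/ub are scalar bounds."""
    rng = np.random.default_rng(seed)
    x = lb + rng.random((n, dim)) * (ub - lb)        # Eq. (5): random init
    fit = np.apply_along_axis(obj, 1, x)
    food = x[fit.argmin()].copy()                    # best solution so far
    food_fit = float(fit.min())
    for l in range(1, max_iter + 1):
        c1 = 2 * np.exp(-(4 * l / max_iter) ** 2)    # Eq. (3)
        for j in range(dim):                         # leader update, Eq. (2)
            c2, c3 = rng.random(), rng.random()
            step = c1 * ((ub - lb) * c2 + lb)
            x[0, j] = food[j] + step if c3 >= 0.5 else food[j] - step
        for i in range(1, n):                        # follower update, Eq. (4)
            x[i] = (x[i] + x[i - 1]) / 2
        x = np.clip(x, lb, ub)                       # reset out-of-bound salps
        fit = np.apply_along_axis(obj, 1, x)
        if fit.min() < food_fit:                     # greedy food update
            food_fit = float(fit.min())
            food = x[fit.argmin()].copy()
    return food, food_fit
```

On a low-dimensional Sphere function, this sketch converges close to the global optimum at the origin as c_1 shrinks and the search shifts from exploration to exploitation.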

Fundamentals to genetic algorithm
The genetic algorithm (GA) is a popular population-based evolutionary algorithm proposed by Holland (1992). It is initiated with a population of individuals, each having a set of genes. GA conventionally applies the survival-of-the-fittest rule of the natural selection principle (Goldberg and Holland 1988; Holland 1992). Generation after generation, GA regenerates the current population using three main operators: selection, crossover, and mutation. Each GA gene is a decision variable and each individual is a solution. Individuals are evaluated to obtain their fitness by means of the objective function. To avoid losing the fittest individuals, an elitism mechanism that retains the best individuals is employed. Also, a mechanism that gives poor solutions some probability of being selected is employed to improve local optima avoidance.
In addition, GA is considered a reliable and trustworthy algorithm for finding the global optimum (Premalatha and Natarajan 2009; Ghorbani et al. 2018; Mirjalili 2019): its technique preserves the best solutions through all generations and utilizes them to enhance poor solutions, so that all individuals in the population gradually improve. Crossover between individuals leads to exploitation of the "zone" between the two given parental solutions. Mutation also benefits the algorithm: this operator randomly modifies genes inside chromosomes, which preserves the diversity of the population's individuals and strengthens the exploratory behavior of GA. Furthermore, the mutation operator may produce essentially better solutions and guide other solutions toward the global minimum. Procedurally, GA executes several steps, discussed as follows. Initial population: GA begins with a random population comprising multiple individuals called chromosomes. Every chromosome has a group of variables that imitate natural genes, as presented in Fig. 5. Selection: the main inspiration for GA is natural selection. The fittest individuals have a higher chance of being selected for mating, which increases the contribution of their genes to the production of the next generation. The selection of individuals depends on their probability values, which in turn depend on the fitness values assigned by the GA algorithm. Crossover: the crossover process exchanges genes between two individuals (parent solutions) that have been pre-selected based on their fitness, to generate two new individuals (child solutions), as seen in Fig. 6. The two popular crossover methods are the single-point and double-point methods. This operator is normally controlled by a crossover rate γ_r, where γ_r ∈ [0, 1]. Mutation: mutation is the process of altering one or more genes in the child solutions, as presented in Fig. 7.
Usually, the mutation rate is set low, because raising it may turn the GA into a mere random search technique. A further advantage of mutation is that it preserves the diversity of the population by introducing more randomness and strengthening exploration. In a nutshell, GA always begins its process with random individuals comprising its population, and throughout its process it utilizes the aforementioned operators (selection, crossover, and mutation) to enhance the population. The best solution found is considered the best approximation of the global optimum for the problem being solved. Finally, the high-level schema of the standard genetic algorithm is given in Algorithm 2.
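The GA loop described above can be sketched as follows, with elitism, roulette-wheel selection, single-point crossover, and bit-flip mutation; the bit-string representation and default rates are our own illustrative choices:

```python
import random

def ga(fitness, bits=16, pop_size=20, gens=60, cx_rate=0.7, mut_rate=0.01, seed=3):
    """Minimal GA sketch maximizing a non-negative fitness over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scores = [fitness(ind) for ind in pop]
        best = pop[scores.index(max(scores))][:]     # elitism: keep the fittest
        total = sum(scores) or 1.0

        def pick():                                  # roulette-wheel selection
            r, acc = rng.random() * total, 0.0
            for ind, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return ind[:]
            return pop[-1][:]

        nxt = [best]
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            if rng.random() < cx_rate:               # single-point crossover
                cut = rng.randrange(1, bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(bits):                # bit-flip mutation
                    if rng.random() < mut_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)
```

For example, with `fitness=sum` (the OneMax problem), the returned individual quickly approaches the all-ones string.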

Proposed method
As aforementioned, two main contributions are proposed in this paper to improve the performance of SSA: (1) Initial population diversification of SSA. This is achieved by generating multiple initial populations and choosing the most diversified one based on statistical indications related to the standard deviation. The algorithm using the selected diversified population is referred to as SSA_std.
(2) Self-adaptive SSA. This incorporates self-adaptive concepts in order to select the most appropriate parameters for SSA. The following subsections thoroughly discuss the two contributions.

Diversification of initial population
The standard SSA algorithm structure is improved by modifying the initial population generation strategy. The modification includes a statistical indication based on the idea of computing the standard deviations of the initial populations. Thereby, the diversity of SSA is improved, striking the right balance between exploration and exploitation during the search. The proposed algorithm is referred to as SSA_std.
Initially, multiple random populations, as many as (Max # Pop), are generated using Eq. (5). This is done in a loop of k cycles, as shown in Fig. 8. In each cycle k, the standard deviation std(X_k) of the generated population is calculated. To elaborate, for every decision variable (x_ij)^(k) in population k, the standard deviation is calculated using Eq. (6), and the average of the standard deviations of the decision variables (i.e., avg(std(x_ij)), for i = 1, 2, ..., dim and j = 1, 2, ..., n) is calculated using Eq. (7). Thereafter, the averages of the standard deviations (i.e., avg(std(x_ij))^(k)) of all generated populations are compared. The population with the highest average standard deviation is selected and used for SSA. The generation process of the diversified initial population for SSA is pseudo-coded in Algorithm 3.
Here x_i represents the ith generated solution, where i = 1, 2, ..., N and j = 1, 2, ..., dim. N represents the size of the population, dim represents the dimension (size) of the solution, and ub and lb represent the upper and lower bounds of the solution space.
Here x_i represents the ith generated solution, x̄ represents the average of the generated solutions, and n represents the population size.
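The diversified initialization described above (Eqs. (5)-(7)) can be sketched as follows, with Max # Pop candidate populations scored by the average standard deviation of their decision variables; the function name and signature are our own:

```python
import numpy as np

def diversified_init(n, dim, lb, ub, max_pop=10, seed=0):
    """Generate max_pop candidate populations (Eq. (5)) and keep the
    one whose decision-variable standard deviations (Eq. (6)) have
    the highest average (Eq. (7)) -- the SSA_std initialization."""
    rng = np.random.default_rng(seed)
    best_pop, best_score = None, -1.0
    for _ in range(max_pop):
        pop = lb + rng.random((n, dim)) * (ub - lb)   # one random population
        score = float(np.mean(pop.std(axis=0)))       # avg std over variables
        if score > best_score:
            best_pop, best_score = pop, score
    return best_pop, best_score
```

The selected population is then handed to the standard SSA loop in place of a single random draw.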

Self-adaptive salp swarm algorithm
The second contribution of this work is a self-adaptive salp swarm algorithm in which the genetic algorithm (GA) is used to tune the parameters of the salp swarm algorithm. The proposed algorithm is referred to as SSA_GA-tuner. In SSA_GA-tuner, GA plays a crucial role in determining the optimal parameters for SSA.
The proposed SSA_GA-tuner has five steps, as shown in the flowchart in Fig. 10 and pseudo-coded in Algorithm 5. These steps are discussed as follows. Initialization: initially, a GA individual is represented as a vector (i.e., y = (p_1, p_2)) of length d = 2. The decision variables in the individual are p_1 and p_2, the first and second parameters used to determine c_1 as presented in Eq. (9). To evaluate each individual, SSA is run on a standard benchmark function with a predefined population size and a specific maximum number of iterations (i.e., maxItr). The result obtained by SSA for each individual is taken as its fitness value. For GA, the initial population of size GA_PopSize is randomly generated with the discrete range p_1, p_2 ∈ {0, 1, ..., 15}. This value range was selected after intensive experiments, as it yields the best results. The idea of the evolution of evolution is used to implement the self-adaptation of the parameters: the parameters to be adapted are encoded into the chromosomes and undergo mutation and recombination. Better values of these encoded parameters lead to better individuals, which in turn are more likely to survive and produce offspring and hence propagate these better parameter values. Selection: the proportional selection scheme (i.e., roulette wheel selection), which utilizes the survival-of-the-fittest principle, is used to select the fittest individuals. In the proportional selection scheme, the fitness of each individual is calculated using SSA: the individual is used as an input parameter set for SSA, and the fittest solution produced is taken as the objective function value for that individual. The selection probability of each individual is calculated from its fitness value relative to the fitness values of the other individuals in the GA population (GA_Pop).
Formally, let ϕ_i be the selection probability of individual i; ϕ_i is calculated as the fitness of individual i divided by the sum of the fitness values of all individuals in GA_Pop. Note that the pie chart represents the selection probabilities of five individuals of GA_Pop: the selection probability of each individual is represented as a portion of the pie chart, and a larger portion means a higher chance of selecting that individual. Encoding: in the encoding step, all decision variables of the individuals stored in GA_Pop are reformulated in binary format. For example, the individual x = {6, 14} is reformulated in binary as x = {0110, 1110}. Crossover: the selected individuals pass to a crossover operator in which two encoded parents are randomly chosen; thereafter, single- or double-point crossover is used, as shown in Fig. 6. In single-point crossover, the parent chromosomes exchange their parts after a randomly selected cut point to yield two new chromosomes. In double-point crossover, two cut points are pinned and the genes between the two cut points are exchanged to yield two new chromosomes. The crossover rate γ_r, where γ_r ∈ [0, 1], determines the probability of applying the crossover operator; a value of γ_r close to one leads to the crossover operator being applied to almost the entire population of individuals, meaning that genes are heavily inherited between individuals. In the proposed method, γ_r = 70%. Mutation: mutation is the next GA operator, in which one or more genes are altered in the chromosome, based on the mutation rate μ_r, to avoid similarity between solutions and to keep solutions away from local optima. The mutation rate is assigned a very low value to ensure that the GA search process does not degenerate into primitive random search. An example of this operator is shown in Fig. 7; it is clear from the figure that only a trivial change occurs in the chromosome genes after the mutation process. Conventionally, μ_r is assigned a small value to control the search better; in the proposed method, μ_r = 1%.
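The encoding, crossover, and mutation steps on the 4-bit parameters p_1, p_2 ∈ {0, ..., 15} can be sketched as follows; the helper names are our own:

```python
import random

def encode(p):
    """4-bit binary encoding of p in {0, ..., 15}, e.g. 6 -> '0110'."""
    return format(p, '04b')

def decode(bits):
    """Inverse of encode, e.g. '1110' -> 14."""
    return int(bits, 2)

def single_point_crossover(a, b, cut):
    """Exchange the tails of two parent bit strings after the cut point."""
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(bits, mu=0.01, rng=None):
    """Flip each bit with probability mu (kept low so the search does
    not degenerate into random search)."""
    rng = rng or random.Random(7)
    return ''.join(b if rng.random() >= mu else '10'[int(b)] for b in bits)
```

For example, `encode(6)` gives `'0110'` and `encode(14)` gives `'1110'`, matching the x = {6, 14} example above.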
Decoding: the new chromosomes are decoded from binary back into decimal format. Evaluation using SSA: to evaluate each individual, SSA is used. The gene values in each individual are used by SSA as the initial values of p_1 and p_2, using one benchmark function for the whole GA population. Thereafter, the optimal value obtained by SSA is the fitness value of that individual. Note that to evaluate any individual, SSA is repeated for 31 replications and the average of the best solutions obtained over all replications is taken as the fitness value. The pseudo-code for calculating the fitness is given in Algorithm 4. It is worth mentioning that the diversified initial population strategy presented in Sect. 4.1 is used in SSA to generate the initial population.
Algorithm 4 Evaluating GA Individuals (p_1, p_2) using SSA Pseudo Code
1: set GA population size referred as GA_PopSize
2: set No of SSA Runs referred as repetition
3: set counter j ← 1
4: while (j ≤ GA_PopSize) do
5:   set counter k ← 1
6:   while (k ≤ repetition) do
7:     Calculate (SSA(f_j, Sol_k)) using Pop_init_best
8:     Calculate (best−average_(f_j, Sol_k))
9:     k++
10:   end while
11:   set counter i ← 1
12:   f(Sol_i) = sum(Sol_i)/repetition
15:   i++
16:   end while
17:   j++
18: end while

GA termination criteria: the selection, encoding, crossover, mutation, decoding, and evaluation with elitism operators are repeated until the maximum number of generations (GA_MaxGen) is reached. After GA_MaxGen is met, the best individual is selected as the optimized parameter set for SSA.
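The replication-and-averaging core of Algorithm 4 can be sketched as below; `run_ssa` stands in for a full SSA run returning its best objective value and is an assumed callable, not part of the paper:

```python
def evaluate_individual(p1, p2, run_ssa, benchmark, repetitions=31):
    """Fitness of a GA individual (p1, p2): run SSA `repetitions` times
    with these parameter values on one benchmark function and return
    the average of the best values found, as in Algorithm 4."""
    bests = [run_ssa(p1, p2, benchmark, seed=k) for k in range(repetitions)]
    return sum(bests) / repetitions
```

Each GA individual is thus scored by the average quality SSA achieves when configured with that individual's (p_1, p_2) values.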

Results and discussion
To evaluate the performance of the proposed algorithms, two sets of experiments are conducted. In the first set, the effect of the proposed diversified population in SSA_std is studied by comparing it against the standard SSA algorithm over twelve benchmark functions. In the second set, the effect of the self-adaptive parameter tuning in SSA_GA-tuner is studied by comparing it against the diversified-population SSA_std without self-adaptive parameter tuning, as well as against the standard SSA algorithm, using the same twelve benchmark functions. To comparatively evaluate the proposed method, nine comparative algorithms are considered over ten benchmark functions. Finally, a statistical evaluation is also conducted, where the Wilcoxon Mann-Whitney statistical test is used to provide statistical indications of significant results.

Benchmark functions
The benchmark functions are grouped into two types, uni-modal and multi-modal, and are listed with their mathematical formulations, boundaries, global optima, and dimensions in Tables 4 and 5. In general, uni-modal functions are convenient for examining an algorithm's exploitation capabilities, while multi-modal problems, which have multiple local minima, are more convenient for examining an algorithm's exploration capability.

Algorithm 5 SSA_GA-tuner Pseudo Code (fragment):
8: generate random values for (p_1, p_2) to form a GA population GA_Pop
9: calculate the fitness for Sol_best using SSA
10: i++
11: end while
12: set counter j ← 1
13: while (j ≤ GA_MaxGen) do
14:   Selection: select 2-pairs of (p_1, p_2) at random referred as (parents)
15:   Encoding: encode the selected parents in binary format
16:   Crossover: apply single or double point crossover {considering probability}
17:   Mutation: mutate one or two digits randomly {considering probability}
18:   Decoding: decode the generated offspring
19:   calculate the fitness for each offspring using SSA
20:   if offspring fitness better than parents fitness then
21:     replace parent with offspring
22:   end if
23:   j++
24: end while
25: return best (p_1, p_2)
26: −−− Stage 3: Improvement: Salp Swarm Algorithm −−−
27: as in Algorithm 1 (from line-6 to line-8)
28: update c_1 based on the best (p_1, p_2) using Eq. (9)
29: as in Algorithm 1 (from line-10 to line-30)
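The two function classes can be illustrated with the well-known Sphere (uni-modal) and Rastrigin (multi-modal) definitions:

```python
import math

def sphere(x):
    """Uni-modal: a single global minimum f(0, ..., 0) = 0."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multi-modal: many local minima; global minimum f(0, ..., 0) = 0."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
```

Sphere rewards pure exploitation toward the origin, while Rastrigin's grid of local minima punishes algorithms with weak exploration.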
To evaluate the performance of the proposed algorithms, the collection of parameter settings shown in Table 3 is used, as suggested in Mirjalili et al. (2017), and the following evaluation criteria are considered in this work: • Mean value: the average of the best obtained values over multiple experimental runs. • Standard deviation (STD): shows whether the proposed algorithm is able to consistently generate the best value over multiple experimental runs.

Effect of diversified population on SSA_std
The comparison between the proposed diversified SSA_std and the standard SSA algorithm over 31 experimental runs is presented in Table 6. The performance measures of the obtained results, including the mean and standard deviation for each benchmark function, are calculated. It is notable that the SSA_std algorithm obtains the best results and outperforms the standard SSA on almost all tested functions. The results of the SSA_std algorithm demonstrate that a more diverse initial population has a remarkable positive impact on the quality of the algorithm's final results.

The effect of self-adaptive parameter tuning on SSA_GA-tuner
The comparison between the proposed tuned SSA_GA-tuner algorithm, the diversified SSA_std, and the standard SSA over 31 experimental runs is shown in Table 6 (best results in bold). The performance measures of the algorithms, including the mean and standard deviation for the twelve benchmark problems, are calculated. It is notable that SSA_GA-tuner outperforms both SSA_std and the standard SSA on almost all tested functions. In addition, SSA_GA-tuner obtains the best result on all functions in comparison with the other two algorithms. The results of SSA_GA-tuner prove that parameter tuning gives the algorithm the ability to deal with populations of different natures without requiring special experience from the users. Furthermore, parameter tuning enhances the algorithm's outcomes.
In order to validate the significance of the obtained results, the Wilcoxon-Mann-Whitney statistical test is conducted, and its results are recorded in Table 7. These tests are based on the best-obtained results. The statistical indications prove that the results obtained by the SSA std algorithm show a significant difference (p-value < 0.05) in comparison with the standard SSA.
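The statistic underlying this test counts, across all pairs of runs, how often one algorithm's result beats the other's. A pure-Python sketch of the U statistic and its normal approximation is shown below; the paper's actual testing tool is not stated (in practice `scipy.stats.mannwhitneyu` returns both U and the p-value).

```python
import math

def mann_whitney_u(a, b):
    """U statistic of the Wilcoxon-Mann-Whitney rank-sum test:
    number of pairs (x in a, y in b) with x > y, ties counted 1/2."""
    u = 0.0
    for x in a:
        for y in b:
            u += 1.0 if x > y else (0.5 if x == y else 0.0)
    return u

def u_z_score(u, n1, n2):
    """Normal approximation: z-score of U under the null hypothesis
    that both samples come from the same distribution."""
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mean_u) / sd_u
```

With 31 runs per algorithm, a |z| beyond roughly 1.96 corresponds to the p-value < 0.05 threshold used in Table 7.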

Computational time
It is clear from Table 8 that the improved versions of the algorithm require more computational time than the standard algorithm. The increase in computational time is due to the algorithm performing additional tasks before proceeding with the main optimization process. For example, the SSA std algorithm searches for a highly diverse population before moving to the optimization process, and the SSA GA-tuner algorithm performs two additional tasks on top of the main optimization process (i.e., searching for a highly diverse population and finding the optimal parameter values for a specific problem). In addition, it is noticeable that the SSA GA-tuner algorithm shows a large increase in computational time over the rest of the algorithms; this is due to the large amount of time required to find the parameter values.
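Per-stage overhead of this kind can be attributed by timing each stage separately with a wall-clock timer; the helper below is an illustrative measurement pattern, not the authors' instrumentation.

```python
import time

def timed(fn, *args):
    """Run `fn(*args)` and return (result, elapsed wall-clock seconds)."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

# Illustrative usage (run_ssa and ga_tune are assumed stage functions):
#   _, t_ssa  = timed(run_ssa, problem)     # main optimization
#   _, t_tune = timed(ga_tune, problem)     # tuning overhead of SSA GA-tuner
```

Since the tuner runs a full SSA per candidate parameter pair, its elapsed time grows with the GA population size and generation count, which explains the gap observed in Table 8.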

Comparative evaluation
To validate our work, two comparisons with state-of-the-art methods were conducted. These methods used ten benchmark functions in the first comparison and seven benchmark functions in the second comparison, all of which were adopted in this research. Note that the results of the comparative methods are taken from Harifi et al. (2019) and Zivkovic et al. (2022) for the first and second comparisons, respectively. In the first comparison, the comparative algorithms include the Artificial Bee Colony (ABC) algorithm, among others, with the parameter settings of Table 9 used by all comparative methods to allow a fair comparison. The mean and standard deviation results of the first and second comparisons over 30 experimental runs are shown in Tables 10 and 11, respectively, with the best results highlighted in bold font. For the first comparison, the proposed method achieves the best results for the Sphere, Bukin, Bohachevsky, Zakharov, Booth, and Michalewicz functions. In particular, for the uni-modal "Sphere" function, which has no local minima, the SSA GA-tuner algorithm obtains the best result. On the other hand, the Emperor Penguins Colony (EPC) algorithm achieves the best results for the Rastrigin, Ackley, and Griewank functions, which have multiple local minima. Furthermore, SSA GA-tuner obtains the best result for the Michalewicz function, which is a complex multi-modal function. Based on the conducted experiments, the overall results confirm that the proposed SSA GA-tuner algorithm is appropriate for optimization problems with either uni-modal or multi-modal search spaces. For the standard deviation results in the same table, it is notable that the performance of the proposed SSA GA-tuner algorithm is stable.
For the second comparison, the proposed SSA GA-tuner achieves the best results for the Sphere and Ackley functions only, while the SSARM-SCA algorithm obtains the best results for the Rastrigin and Griewank functions. In addition, the RGA and BH-GSA algorithms obtain the best results for the Rosenbrock function. Although the proposed algorithm did not achieve the best results for most of the functions, its results were close to those of the competing algorithms. This comparison confirms that the proposed SSA GA-tuner algorithm is appropriate for optimization problems with either uni-modal or multi-modal search spaces.

Conclusion
This paper proposes an enhanced version of the salp swarm algorithm (SSA) for optimization problems. The enhancements of SSA involve the initial population diversity and the parameter control strategy. Firstly, diversification of the salp swarm population is introduced to control the exploration aspects. Secondly, a new version of SSA, referred to as SSA GA-tuner, is proposed to enhance the parameter control of SSA using a self-adaptive parameter setting, whereby a genetic algorithm is adopted to find the optimal parameters for SSA at each generation.
Initially, the effect of the diversified population on the convergence behavior of the SSA std version is studied. The proposed algorithm outperforms the standard version of SSA on all benchmark functions; in conclusion, the diversified population has a positive impact on the performance of SSA std. To evaluate the impact of the self-adaptive parameter control on the convergence of SSA GA-tuner, comparative results against the standard SSA and SSA std show that SSA GA-tuner yields the best results. Briefly, the results prove that the self-adaptive parameter control has a direct impact on the performance of the proposed SSA versions. In a nutshell, the proposed SSA versions are powerful enhancements that can be applied to a wide range of optimization problems.
Based on the experimental evaluation and verification carried out, it is notable that the proposed methods tackle the exploration issue by increasing the population diversity, which in turn ensures covering the entire search area as much as possible. In addition, the proposed methods tune the SSA algorithm to address variations in problem nature, so the algorithm becomes suitable for tackling any prediction problem.
Additionally, the experimental results confirm the robustness of the diversified and parameter-controlled algorithm: it develops an optimal set of weight and bias values for the BPNN predictor, which gives the prediction process an edge. It also yields a parameter-less optimization algorithm and makes full use of the algorithm's efficiency by striking the right balance between wide-area exploration and local-nearby exploitation during the search, which helps enhance the BPNN's performance.
As the proposed SSA versions reveal very successful outcomes, they can in the future be adapted for combinatorial optimization problems such as scheduling problems. Furthermore, other parameter-tuning approaches, such as control-parameter tuning and adaptive parameter-control strategies, can be investigated. Other enhancements to the SSA can also be studied, such as adopting structured-population methods and fusing natural-selection principles.