The CLPB algorithm uses ten common chaotic map functions, applied separately inside LPB, to create the initial population of individuals. Each chaotic map generates values between 0 and 1 that are used to compute the positions of the individuals; in this work, the initial point of the chaotic maps was set to 0.7. The functions of the chaotic maps are given by (Mohammed and Rashid, 2021). As mentioned previously, Table 2 lists the functions and names of the ten chaotic maps. Each individual in the population of the single-objective CLPB algorithm has position and cost characteristics, where the cost is the fitness of the individual's position. The cost of each individual in CLPB is calculated by applying either one of the nineteen classical benchmark functions (TF1-TF19) or one of the ten CEC2019 functions (CEC01-CEC10), and, as mentioned above, one of the ten chaotic maps is used at the same time to produce the positions of the individuals (initial population); this is how CLPB is examined. The results are then evaluated against the standard LPB and three popular algorithms from the literature: DA, PSO, and GA. The results of the 19 classical benchmark functions for PSO, DA, and GA are taken from (Mirjalili, 2016), and those for LPB are taken from (Rahman and Rashid, 2021). Likewise, the results of the 10 CEC-C06 2019 test functions for the participating algorithms LPB, PSO, and DA are taken from (Rahman and Rashid, 2021). In addition, the processing time (PT) of the algorithm on the two groups of test functions is computed to examine how quickly the algorithm finds the optimal results. Moreover, the significance of the results is verified using the T.TEST. The parameter settings for CLPB are shown in Table 3.
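As a minimal illustration of this initialization scheme, the logistic map (one of the ten maps listed in Table 2), started at 0.7, can be iterated to fill the initial population. The helper names, the choice of map parameter, and the scaling to the search range are illustrative assumptions, not details taken from the CLPB specification:

```python
def logistic_map(x, a=4.0):
    # Logistic map, one of the ten chaotic maps of Table 2; for a = 4
    # the iterates remain in the interval (0, 1).
    return a * x * (1.0 - x)

def chaotic_init(pop_size, dim, lb, ub, x0=0.7, chaos=logistic_map):
    """Sketch: build an initial population whose coordinates come from a
    chaotic sequence in (0, 1) instead of a pseudo-random generator."""
    x = x0
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = chaos(x)                           # next chaotic value in (0, 1)
            individual.append(lb + x * (ub - lb))  # scale to the range [lb, ub]
        population.append(individual)
    return population

# Hypothetical usage: 5 individuals in a 3-dimensional search space.
pop = chaotic_init(pop_size=5, dim=3, lb=-100, ub=100)
```

Any of the other nine maps of Table 2 can be substituted for `logistic_map` via the `chaos` parameter, which mirrors how CLPB1 to CLPB10 differ only in the map used.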
5.1 Benchmark Test Functions
A benchmark is a standard against which things are measured or evaluated. In this work, CLPB is evaluated on two powerful and common sets of benchmarks to assess its overall performance enhancement.
The first set of benchmarks consists of the classical (traditional) benchmark functions, which are divided into three groups: unimodal, multi-modal, and composite (Yao et al., 1999; Molga and Smutnicki, 2005; Liang et al., 2005; Mirjalili, 2016; Price et al., 2018). The properties of each group differ. The unimodal test functions have a single optimum and are designed to benchmark the convergence and exploitation of an algorithm. Multi-modal test functions, as their name suggests, have multiple optima: one global optimum and several local optima. One of the most important requirements for an algorithm approaching the global optimum is avoiding all local optima; thus, this group of test functions can test the algorithm's ability to avoid local optima and benchmark its exploration ability. Lastly, composite test functions are composed of various versions of the unimodal and multi-modal benchmark groups, such as combined, rotated, shifted, and biased versions (Liang et al., 2005). These functions reflect the difficulties of real search spaces through their diversity of shapes and local optima. An algorithm must suitably balance exploration and exploitation on such a function to reach the global optimum, so this group can be used to benchmark the combined efforts of exploration and exploitation (Mirjalili, 2016). Tables 4, 5, and 6 provide more information about the test functions, with their ranges and dimensions.
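To make the distinction between the first two groups concrete, the sphere function is a standard unimodal benchmark and the Rastrigin function a standard multi-modal one; both appear in the classical suites cited above, although their mapping to specific TF numbers is not restated here. A minimal sketch:

```python
import math

def sphere(x):
    # Unimodal: a single global optimum f(0, ..., 0) = 0; tests exploitation
    # and convergence speed.
    return sum(v * v for v in x)

def rastrigin(x):
    # Multi-modal: one global optimum at the origin surrounded by many
    # local optima; tests the ability to escape local solutions.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

print(sphere([0.0, 0.0]))     # 0.0 at the global optimum
print(rastrigin([0.0, 0.0]))  # 0.0 at the global optimum
```

Away from the origin, Rastrigin's cosine term creates a regular grid of local minima, which is exactly the trap that the exploration-oriented benchmarks are designed to expose.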
The second set of benchmark functions is the CEC-C06 2019 test functions. The author of this work has applied this benchmark to the CLPB algorithm and the other participating algorithms to show the ability of CLPB to solve large-scale optimization problems (Price et al., 2018). In the evolutionary field, various optimization algorithms take advantage of the known properties of benchmark functions; for instance, they can find optimal values located along the horizontal axes. The CEC-C06 test functions are designed to evaluate an algorithm on a horizontal slice of the convergence plot (Liang et al., 2005). These functions are used to evaluate algorithms on large-scale optimization problems. The CEC-C06 test functions make up the challenge known as "The 100-digit challenge", which helps users find the most effective algorithm for their particular situation. The first three functions, CEC01, CEC02, and CEC03, have different dimensions, whereas the remaining functions, CEC04 to CEC10, are set as 10-dimensional minimization problems in the range [-100, 100], and they are shifted and rotated. All the CEC-C06 2019 functions are scalable, and the global optima of all these functions are unified at point 1 (Rahman and Rashid, 2021). Table 7 shows the CEC-C06 2019 test function names, dimensions, and ranges.
5.1.1 Results of the Classical Benchmark Test Functions
Tables 8 to 14 show the rank results of the comparison between the CLPB algorithm and the other popular participating algorithms (GA, PSO, DA, and LPB) to demonstrate the differences and optimal results. The parameters for GA, PSO, and DA are given in (Mirjalili, 2016), and the parameters for standard LPB are discussed in (Rahman and Rashid, 2021). The data for the test functions (TF1-TF19) for LPB, DA, PSO, and GA is taken from (Mirjalili, 2016; Rahman and Rashid, 2021). For each participating algorithm, every traditional benchmark test function is solved 30 times using 80 search agents over 500 iterations, after which the average and standard deviation of the results are calculated; the PT is likewise recorded over the 30 runs. The remaining test functions were run by the author of this work for the contributed algorithms. The dp for all the test functions is set to 0.5. The average and standard deviation of the optimal solution are calculated at the last iteration, along with the PT. For each TF, the results of CLPB1 to CLPB10 are averaged into a single solution so that they can be compared with the other contributed algorithms. In Table 8, the overall average performance of CLPB is 2.47 on a scale of 1 to 5, with 1 being the best and 5 the worst. This ranking takes into account that CLPB ranked first three times, second seven times, third four times, fourth two times, and fifth once; see Table 9 for more details. Additionally, the ranking of CLPB by benchmark function type is 3.1428 for the unimodal test functions, 2.1667 for the multi-modal test functions, and 1.75 for the composite test functions. Table 10 shows that, on the same scale, the global standard deviation performance of CLPB is 2.294. This ranking takes into account that CLPB ranked first four times, second six times, third five times, fourth two times, and never fifth; see Table 11 for more details.
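The per-function statistics described above reduce to a mean and a sample standard deviation over the 30 independent runs. A minimal sketch, in which the helper name and the run values are purely illustrative:

```python
import statistics

def summarize_runs(costs):
    """Average and sample standard deviation of the best cost over
    repeated independent runs of one benchmark function (sketch)."""
    return statistics.mean(costs), statistics.stdev(costs)

# Illustrative best-cost values from 30 hypothetical runs of one TF.
runs = [0.12, 0.09, 0.15, 0.11, 0.10] * 6   # 30 entries
avg, std = summarize_runs(runs)
```

For CLPB, this summary would be produced once per chaotic map (CLPB1 to CLPB10) and the ten averages then combined into the single value compared against the other algorithms.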
Generally, the processing-time rank of CLPB is 1.117 on a scale of 1 to 5, with 1 being the best and 5 the worst, as shown in Table 12. This ranking takes into account that CLPB ranked first fifteen times, second twice, and never third, fourth, or fifth; see Table 13 for more details. Additionally, the ranking of CLPB by benchmark function type is 1.1428 for the unimodal test functions, 1 for the multi-modal test functions, and 1.25 for the composite test functions. Table 14 confirms the superiority of the CLPB algorithm over all the participating algorithms in terms of PT in seconds: the PT of CLPB for optimizing all the functions is much smaller, and CLPB ranked first (superior) in 15 out of 17 applicable functions, with a significantly large difference. There are two possible reasons for this. First, as discussed previously, a subset of the population is chosen in the first step of CLPB, and the other subpopulations are constructed based on this smaller group. Priority is given to optimizing the perfect subpopulation, followed by the good and bad subpopulations. Because the subpopulations are substantially smaller than the overall population, searching them is quicker, which reduces the overall optimization time. Second, the chaotic map functions help to choose good values from the unit interval when creating the initial population, rather than generating individuals purely at random and consuming time before reaching the aimed solution. This speeds up the processing time and therefore the convergence towards the global optimum, allowing the best solution to be chosen in the shortest possible time. In addition, the convergence curves of some of the classical benchmark test functions for the CLPB algorithm are presented in Figure 2, which confirms the CLPB data shown in the aforementioned tables.
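Measuring PT for a single run amounts to wrapping the solver in a wall-clock timer; a minimal sketch, in which the `timed` helper and its use of the built-in `min` as a stand-in solver are hypothetical:

```python
import time

def timed(solver, *args):
    """Return a solver's result together with its processing time (PT)
    in seconds, measured with a monotonic high-resolution clock."""
    start = time.perf_counter()
    result = solver(*args)
    return result, time.perf_counter() - start

# Stand-in "solver": in practice this would be one full CLPB run.
best, pt = timed(min, [3, 1, 2])
```

Averaging `pt` over the 30 runs per function gives the PT entries of the kind reported in Tables 12 to 14.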
5.1.2 Results of the CEC-C06 2019 Test Functions
The results of the CEC-C06 2019 test functions for the CLPB algorithm and the participating algorithms (LPB, DA, GA, and PSO) are shown in Table 15. In addition to the CLPB algorithm, the author of this work evaluated LPB, DA, GA, and PSO on these benchmarks in order to compare them with CLPB. For each test function in Table 15, bold results indicate superior outcomes. Each test function is solved 30 times using 80 search agents over 500 iterations, and the average, standard deviation, and processing time are computed from the last iteration. The standard deviation of the CLPB algorithm on almost all the CEC-C06 2019 test functions is smaller than that of DA, PSO, GA, and LPB. In contrast, according to the average metric, LPB showed its superiority on almost all of the functions compared to the participating algorithms. However, on CEC03, CLPB has the same average and standard deviation as LPB but with a significant difference in processing time: 2.101408 seconds for CLPB versus 144.194876 seconds for LPB. In addition, CLPB proved its enormous superiority in processing time on all the CEC-C06 2019 test functions in comparison to all the contributed algorithms (LPB, DA, PSO, and GA). The results of CLPB and LPB for optimizing CEC08 and CEC10 are comparable, while on the CEC02, CEC03, CEC06, and CEC10 test functions CLPB scored superior results compared to DA, PSO, and GA.
The aforementioned comparison can be observed in the rank Tables 16, 17, 18, 19, 20, and 21, which show that, generally, the global average performance of CLPB is 2.4 on a scale of 1 to 5, with 1 being the best and 5 the worst. This ranking takes into account that CLPB ranked first twice, second four times, third twice, fourth twice, and never fifth; see Table 17 for more details. The standard deviation performance of CLPB is 1.9 on the same scale, while LPB ranked 2.3. In terms of processing time, CLPB ranked 1.1 on a scale of 1 to 4, while LPB ranked 2.9. In general, the results of the CEC-C06 2019 benchmark functions reveal that, for large-scale optimization problems, CLPB provides better results than DA, PSO, GA, and LPB. The convergence curves of some of the CEC-C06 2019 test functions for the CLPB algorithm are presented in Figure 3, which confirms the CLPB data shown in Table 15. Precisely, the processing time and the name of the function, together with the chaotic map that has been used, are highlighted in the figures.
5.2 Statistical Tests
Assessing performance using only the standard deviation, average, and PT is not sufficiently rigorous. To evaluate the chaotic CLPB algorithm, a statistical test is therefore used to verify the significance of the results. The result of the test is compared with the other metaheuristic algorithms to evaluate the means and establish whether the difference is significant. An independent T.TEST is a statistical hypothesis test that measures the difference between the mean values of two unrelated groups of populations; it can be used to determine whether there is a significant difference between the two groups (Microsoft). The p-value of the CLPB algorithm is computed using the T.TEST function; when it is less than 0.05, the results are significant. Significant results are underlined in Tables 22 and 23. Figure 4 shows which chaotic map or maps are the best and worst at improving the performance of CLPB; the circles indicate the number of times a specific chaotic map scored significant results in the performance improvement of CLPB.
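The significance check can be sketched as follows. This sketch computes Welch's two-sample t-statistic and approximates its two-sided p-value with the standard normal distribution, which is reasonable for the 30 runs per function used here; the spreadsheet T.TEST function used in the paper evaluates the exact t-distribution instead, so the values below are an approximation:

```python
import math
import statistics

def welch_t_pvalue(sample_a, sample_b):
    """Approximate two-sided p-value for the difference between the means
    of two unrelated samples (Welch's t-statistic, normal approximation)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Two-sided tail probability under the standard normal approximation.
    return math.erfc(abs(t) / math.sqrt(2.0))

# A p-value below 0.05 marks the difference between the two result
# groups (e.g. CLPB vs. LPB on one function) as significant.
```

Given the 30 best-cost values per algorithm per function, this comparison is what populates the underlined entries of Tables 22 and 23.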
It can be seen that the Gauss/mouse map and the Tent map both show the best significant results, in 9 out of 17 functions, while the Chebyshev, Logistic, Sine, and Singer maps provide the least impact on improving CLPB, in 6 out of 17 functions. The Iterative, Piecewise, and Sinusoidal maps have the second-best significant results, in 8 out of 17 functions, and the Circle map has the third-best, in 7 out of 17 functions. The p-values reported in Tables 22 and 23 for the classical benchmark test functions show that almost all ten chaotic maps can enhance the performance of CLPB, in particular the Gauss/mouse and Tent maps. This is because these chaotic maps improve the exploitation and exploration capability of CLPB and speed up convergence, which leads to finding the best global optimum among various solutions and thereby avoiding local optima. In (Rahman and Rashid, 2021), the results of LPB were proved to be statistically significant compared to DA, PSO, and GA. Consequently, there is no need to compare the CLPB algorithm with DA, PSO, and GA statistically, since the CLPB algorithm proved its superiority over LPB in most of the test functions and with the use of ten chaotic maps.