5.1 Particle Swarm Optimization
Particle swarm optimization (PSO), introduced by James Kennedy and Russell Eberhart in 1995 [22], has been widely employed to deal with single- and multi-objective optimization problems. Operating on the principles of social behavior within a swarm of particles, PSO adjusts the trajectory of each particle toward its individual best and the overall best solution in each generation. PSO is known for its simplicity and quick convergence and is implemented through velocity (vel) and position update equations involving parameters such as the inertia weight (ω) and the cognitive and social factors (\(c_1\) and \(c_2\)). Here, \(x_i\) represents the position of particle i, and the individual best and global best positions are denoted by pbest and gbest, respectively.
$$vel_{i,j}^{iter+1}=\omega \times vel_{i,j}^{iter}+c_1 \times rand() \times (pbest_{i,j}^{iter}-x_{i,j}^{iter})+c_2 \times rand() \times (gbest_{j}^{iter}-x_{i,j}^{iter}) \tag{17}$$
$$x_{i}^{iter+1}=x_{i}^{iter}+vel_{i}^{iter+1} \tag{18}$$
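As a minimal sketch of Eqs. (17) and (18), the NumPy implementation below applies the velocity and position updates to a small swarm. The swarm size, bounds, coefficient values, and the sphere test objective are illustrative assumptions rather than settings taken from the text.

```python
import numpy as np

def pso(f, dim=5, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5,
        lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))      # particle positions
    vel = np.zeros((n_particles, dim))               # particle velocities
    pbest = x.copy()                                 # individual best positions
    pbest_f = np.array([f(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_f)].copy()         # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Eq. (17): inertia term + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Eq. (18): move each particle, clipped to the assumed search bounds
        x = np.clip(x + vel, lb, ub)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f                      # minimization assumed
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Usage with the sphere function (an assumed test objective)
best_x, best_f = pso(lambda v: float(np.sum(v**2)))
```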
5.2 Crow Search Algorithm
Crows exhibit a behavior of observing and tracking other birds to discover the whereabouts of their food and seize the opportunity to take it when the owner is absent. Moreover, if a crow pilfers food from another bird or crow, it becomes notably cautious and repeatedly changes its own hiding place to prevent future thefts. Additionally, a crow uses its acquired knowledge to safeguard its food from potential robbers. These behaviors form the foundation of the crow search algorithm (CSA), formulated by Askarzadeh in 2016 [23].
In the algorithm, during iteration ‘iter’, if crow ‘j’ wishes to visit its hiding location and crow ‘i’ decides to trail crow ‘j’ in the same iteration, two scenarios may unfold:
Case 1
Crow ‘j’ remains unaware that it is being followed by crow ‘i’. Consequently, crow ‘i’ successfully locates the food hidden by crow ‘j’.
Case 2
Crow ‘j’ becomes aware that it is being trailed by crow ‘i’ and opts to mislead it by guiding it to a different random location within the search space.
The position updates for these two scenarios are represented by the equation below [23]:
$$x_{i}^{iter+1}=\begin{cases} x_{i}^{iter}+rand_i \times fl_i \times (m_{j}^{iter}-x_{i}^{iter}), & \text{if } rand_j \geqslant AP_j \\ \text{a random position}, & \text{otherwise} \end{cases} \tag{19}$$
Here, \(x_i^{iter}\) is the position of the ith crow; \(fl_i\) is the flight length of crow ‘i’, which controls the extent of its search around the followed memory; \(AP_j\) represents the awareness probability of crow ‘j’, with its value ranging between 0 and 1; and \(rand_i\) and \(rand_j\) are random numbers in [0, 1]. Eq. (20) updates the memory, where \(m_i^{iter}\) is the memory of the ith crow at iteration ‘iter’.
$$m_{i}^{iter+1}=\begin{cases} x_{i}^{iter+1}, & \text{if } f(x_{i}^{iter+1}) \text{ is better than } f(m_{i}^{iter}) \\ m_{i}^{iter}, & \text{otherwise} \end{cases} \tag{20}$$
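The two cases of Eq. (19) and the memory update of Eq. (20) can be sketched as follows. The flight length, awareness probability, bounds, and the minimization objective are assumed values chosen for illustration.

```python
import numpy as np

def csa(f, dim=5, n_crows=30, iters=100, fl=2.0, ap=0.1,
        lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_crows, dim))          # crow positions
    mem = x.copy()                                   # hiding places (memories)
    mem_f = np.array([f(m) for m in mem])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)                # crow i trails a random crow j
            if rng.random() >= ap:
                # Case 1 of Eq. (19): crow j is unaware, so crow i
                # moves toward crow j's hiding place m_j
                new = x[i] + rng.random() * fl * (mem[j] - x[i])
            else:
                # Case 2 of Eq. (19): crow j is aware, so crow i
                # ends up at a random position in the search space
                new = rng.uniform(lb, ub, dim)
            x[i] = np.clip(new, lb, ub)
            fn = f(x[i])
            if fn < mem_f[i]:                        # Eq. (20): keep the better position
                mem[i], mem_f[i] = x[i].copy(), fn
    best = np.argmin(mem_f)
    return mem[best], mem_f[best]
```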
5.3 Pigeon-Inspired Optimization
Pigeon-inspired optimization (PIO) [24] is based on the homing behavior of pigeons, from which two operators have been devised based on specific rules:
- Map and compass operator: Pigeons build a conceptual map of the Earth's magnetic field by sensing it through magnetoreception, and they use the height of the sun as a compass. As they move closer to their destination, they become less dependent on the sun and on magnetic cues.
- Landmark operator: Pigeons rely on neighboring landmarks to guide them as they get closer to their objective. If they are familiar with these landmarks, they fly straight to the destination; pigeons that are unfamiliar with the area or the goal follow other pigeons headed in the same direction.
Virtual pigeons are used in the PIO model. In the map and compass operator, the rules are defined using the position \(x_i\) and velocity \(vel_i\) of pigeon ‘i’, both of which are updated in the search space at each iteration. Equations (21) and (22) give the new velocity and position of pigeon ‘i’ at iteration ‘iter’.
$$vel_{i}^{iter+1}=vel_{i}^{iter} \times e^{-R \cdot iter}+rand \times (gbest^{iter}-x_{i}^{iter}) \tag{21}$$
$$x_{i}^{iter+1}=x_{i}^{iter}+vel_{i}^{iter+1} \tag{22}$$
Here, \(gbest^{iter}\) is the global best position at iteration ‘iter’, found by comparing the positions of all pigeons; R is the map and compass factor; and rand is a random number. \(vel_i^{iter}\) and \(x_i^{iter}\) are the velocity and position of the ith pigeon at iteration ‘iter’.
In the landmark operator, the number of pigeons \(N_P\) is halved in each generation; the discarded pigeons are unfamiliar with the landmarks and are still far from the destination. Assuming every remaining pigeon can fly directly to the destination, let \(x_C^{iter}\) denote the center of the pigeons' positions at iteration ‘iter’. The position update rule for pigeon ‘i’ at the same iteration can then be expressed by the equations given below.
$$N_{P}^{iter+1}=\frac{N_{P}^{iter}}{2} \tag{23}$$
$$x_{C}^{iter+1}=\begin{cases} \dfrac{\sum x_{i}^{iter+1} \cdot \frac{1}{f(x_{i}^{iter+1})+\varepsilon}}{N_P\sum \frac{1}{f(x_{i}^{iter+1})+\varepsilon}}, & \text{for minimization problems} \\[2ex] \dfrac{\sum x_{i}^{iter+1} \cdot f(x_{i}^{iter+1})}{N_P\sum f(x_{i}^{iter+1})}, & \text{for maximization problems} \end{cases} \tag{24}$$
$$x_{i}^{iter+1}=x_{i}^{iter}+rand \times (x_{C}^{iter+1}-x_{i}^{iter}) \tag{25}$$
For each pigeon, the best position encountered up to the \(N_C\)th iteration is denoted by \(x_P\), where \(x_{P}=\min \left[ {x_{i}^{1},x_{i}^{2},\ldots,x_{i}^{{N_C}}} \right]\).
Half of the pigeons may be far away from the desired location and have to follow the pigeons near it, which also means that two pigeons may end up at the same position. The pigeons near the target move directly toward the destination.
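A compact sketch of both PIO operators is given below. The phase lengths, the map and compass factor R, and the bounds are illustrative assumptions; the landmark center is computed as the fitness-weighted mean of the surviving pigeons, a common simplification that omits the extra \(1/N_P\) factor of Eq. (24), and a non-negative objective is assumed so the minimization fitness \(1/(f+\varepsilon)\) stays positive.

```python
import numpy as np

def pio(f, dim=5, n_pigeons=30, map_iters=80, land_iters=4,
        R=0.2, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_pigeons, dim))
    vel = np.zeros((n_pigeons, dim))
    # Phase 1: map and compass operator, Eqs. (21) and (22)
    for it in range(1, map_iters + 1):
        fx = np.array([f(p) for p in x])
        gbest = x[np.argmin(fx)]
        vel = vel * np.exp(-R * it) + rng.random((x.shape[0], dim)) * (gbest - x)
        x = np.clip(x + vel, lb, ub)
    # Phase 2: landmark operator, Eqs. (23)-(25)
    for _ in range(land_iters):
        fx = np.array([f(p) for p in x])
        order = np.argsort(fx)                        # rank pigeons, best first
        x, fx = x[order], fx[order]
        n = max(1, x.shape[0] // 2)                   # Eq. (23): halve the flock
        x, fx = x[:n], fx[:n]
        weights = 1.0 / (fx + 1e-12)                  # minimization fitness for Eq. (24)
        x_c = (x * weights[:, None]).sum(axis=0) / weights.sum()  # weighted center
        # Eq. (25): the remaining pigeons fly toward the center
        x = np.clip(x + rng.random((n, dim)) * (x_c - x), lb, ub)
    fx = np.array([f(p) for p in x])
    return x[np.argmin(fx)], fx.min()
```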
5.4 Sine Cosine Algorithm
A stochastic population-based optimization method can be broken down into two stages. The first stage, known as the exploration phase, searches the space broadly to locate the promising regions of superior solutions, using a high degree of randomness in the candidate solutions of the fitness function. In contrast, in the second stage, known as the exploitation phase, the degree of randomness is reduced and modest, progressive adjustments move the best solution found so far toward a higher-quality solution. The sine cosine algorithm (SCA) [25] combines these two stages in its governing equation, given below.
$$x_{i}^{iter+1}=\begin{cases} x_{i}^{iter}+rand_1 \times \sin(rand_2) \times \left|rand_3 \times (P_{i}^{iter}-x_{i}^{iter})\right|, & \text{if } rand_4<0.5 \\ x_{i}^{iter}+rand_1 \times \cos(rand_2) \times \left|rand_3 \times (P_{i}^{iter}-x_{i}^{iter})\right|, & \text{if } rand_4 \geqslant 0.5 \end{cases} \tag{26}$$
Here, \(x_i^{iter}\) is the position of the current solution and \(P_i^{iter}\) is the position of the destination point in the ith dimension at iteration ‘iter’; \(rand_1\), \(rand_2\), \(rand_3\), and \(rand_4\) are the random numbers used in SCA.
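Eq. (26) can be sketched as follows. Following the original SCA paper, \(rand_1\) is decreased linearly over the iterations to shift the search from exploration to exploitation; this schedule, along with the population size, bounds, and the use of the best point found so far as the destination P, is an assumption beyond the text above.

```python
import numpy as np

def sca(f, dim=5, n_agents=30, iters=100, a=2.0, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_agents, dim))
    fx = np.array([f(p) for p in x])
    dest = x[np.argmin(fx)].copy()                   # destination point P
    dest_f = fx.min()
    for it in range(iters):
        r1 = a - it * a / iters                      # rand1 decays: exploration -> exploitation
        for i in range(n_agents):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)  # direction angle per dimension
            r3 = rng.uniform(0.0, 2.0, dim)          # random weight on the destination
            r4 = rng.random(dim)                     # sine/cosine switch, per dimension
            pull = np.abs(r3 * (dest - x[i]))        # |rand3 * (P - x)| from Eq. (26)
            step = np.where(r4 < 0.5, r1 * np.sin(r2) * pull, r1 * np.cos(r2) * pull)
            x[i] = np.clip(x[i] + step, lb, ub)
        fx = np.array([f(p) for p in x])
        if fx.min() < dest_f:                        # keep the best point found as destination
            dest, dest_f = x[np.argmin(fx)].copy(), fx.min()
    return dest, dest_f
```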