The Particle Swarm Optimization (PSO) algorithm is a population-based search method in which particles, the candidate solutions, modify their positions (states) over time. In a PSO system, particles traverse a search space characterized by several dimensions. During the course of a flight, each particle adjusts its position based on its own experience, referred to as Pbest, as well as the experience of nearby particles, referred to as Gbest; that is, the best positions found by the particle itself and by its neighbors guide the adjustment. We make two significant modifications to the conventional PSO, as follows:
Improvement in Local Optima: The problem of local optima is addressed by updating velocity in both the worst-case and the best-case directions (in conventional PSO, velocity is updated in the best-case direction only).
Improvement in Population Generation: Since PSO lacks a mechanism for generating new populations, we incorporate the GA operators (crossover and mutation) for new-population generation.
With the above modifications, we present our MPSO algorithm for PID controller design in wind energy systems.
3.1 PARTICLE SWARM OPTIMIZATION (PSO)
The standard PSO (SPSO) algorithm moves a swarm of particles through a search space of D dimensions with the objective of finding the optimal solution. Each particle, denoted as i, is characterized by a velocity vector Vi=[vi1,vi2,…,viD] and a position vector Xi=[xi1,xi2,…,xiD], where D is the number of dimensions. The SPSO procedure starts with the random initialization of Vi and Xi. In each iteration, the best position discovered by particle i, Pbesti=[Pbesti1,Pbesti2,…,PbestiD], and the overall best position found by the whole swarm, Gbest=[Gbest1,Gbest2,…,GbestD], guide particle i in updating its velocity and position,
$$\:{v}_{id}\left(t+1\right)={v}_{id}\left(t\right)+{c}_{1}{r}_{1}\left(PBes{t}_{id}\left(t\right)-{x}_{id}\left(t\right)\right)+{c}_{2}{r}_{2}\left(GBes{t}_{d}\left(t\right)-{x}_{id}\left(t\right)\right)$$
$$\:{x}_{id}\left(t+1\right)={x}_{id}\left(t\right)+{v}_{id}\left(t+1\right)$$
The cognitive and social acceleration coefficients are denoted as c1 and c2, respectively, and r1 and r2 are two random variables drawn uniformly from [0,1]. Velocity clamping restricts the movement of particles to a defined region of the search space by imposing an upper limit on their velocity, denoted as Vmax. If the velocity of a particle exceeds Vmax, it is reset to Vmax in the following manner:
$$\:{v}_{id}\left(t+1\right)=\text{min}({v}_{id}\left(t+1\right),{V}_{max})$$
While velocity clamping helps to mitigate the issue of velocity explosion, determining an appropriate value for Vmax is crucial but challenging, and a poor choice may lead to suboptimal performance. If Vmax is too large, particles may follow highly erratic trajectories and overshoot the optimal solution. Conversely, if Vmax is too small, the particles have a limited exploration range and may become confined in a local optimum. To address this issue, the maximum velocity Vmax is set in the following manner,
$$\:{V}_{max}=\tau\:({x}_{max}-{x}_{min})$$
The variables xmax and xmin represent the upper and lower limits of the search space, respectively, and τ is a real number in the interval (0,1]. In general, two stopping criteria are commonly used to terminate the PSO process. The first halts execution after a predefined number of iterations has been completed, in which case the number of function evaluations is calculated in the following manner,

$$\:FEs=S\times\:T$$

where S is the swarm size and T is the maximum number of iterations.
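The SPSO update equations and velocity clamping described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `pso_step` and the default parameter values are assumptions for the example.

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             c1=2.0, c2=2.0, x_min=-5.0, x_max=5.0, tau=0.5):
    """One SPSO iteration: velocity update, clamping, position update.

    Parameter values (c1, c2, tau, bounds) are illustrative choices.
    """
    v_max = tau * (x_max - x_min)  # Vmax = tau * (xmax - xmin)
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            # v(t+1) = v(t) + c1*r1*(Pbest - x) + c2*r2*(Gbest - x)
            v = (velocities[i][d]
                 + c1 * r1 * (pbest[i][d] - positions[i][d])
                 + c2 * r2 * (gbest[d] - positions[i][d]))
            # clamp the velocity magnitude to [-v_max, v_max]
            v = max(-v_max, min(v, v_max))
            velocities[i][d] = v
            # x(t+1) = x(t) + v(t+1)
            positions[i][d] += v
    return positions, velocities
```

Running this step S times per iteration for T iterations yields the S × T function evaluations noted above.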
3.2 GENETIC ALGORITHM & OPERATORS
The Genetic Algorithm (GA) is an optimization technique that draws inspiration from natural selection. It is a population-based search method built on the principle of survival of the fittest: new populations are generated by repeatedly applying genetic operators to individuals in the current population. The fundamental components of a GA are the encoding of solutions into chromosomes, selection, the crossover and mutation operators, and the computation of a fitness function. The GA process is outlined as follows. A population Y of n chromosomes is randomly initialized, and the fitness of every chromosome in Y is calculated. Two chromosomes, denoted as C1 and C2, are selected from Y according to their fitness values. The single-point crossover operator, applied with a crossover probability Cp, recombines the genetic material of C1 and C2 to produce an offspring O. The uniform mutation operator is then applied to O with a mutation probability Mp to create O′, which is placed into a new population. Selection, crossover, and mutation are applied iteratively to the current population until the new population is fully formed. Mathematically, the GA dynamically adjusts the search by manipulating the crossover and mutation probabilities, ultimately converging toward an optimal solution; it can alter the encoded genes, and it can evaluate many individuals and generate multiple candidate optima.
Genetic algorithms therefore offer strong global search capability. However, offspring produced by recombining parental chromosomes are liable to disrupt favorable genetic patterns. The relationship between parent chromosomes and the crossover coefficient may be formally described as follows,
$$\:R=\frac{\left(G+2\sqrt{g}\right)}{3G}$$
In this context, g represents the current generation number, whereas G denotes the total number of evolutionary generations. The equation shows that R changes dynamically, rising as the number of evolutionary generations increases. In the early stage of a GA, the similarity between individuals is usually low; to preserve the promising genetic structure of individuals, R should be kept small. At the culmination of the evolutionary process, the resemblance among individuals increases markedly, and R rises correspondingly.
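The adaptive coefficient R can be computed directly from the equation above. A minimal sketch follows; the function name `crossover_coefficient` is an assumption for illustration.

```python
import math

def crossover_coefficient(g, G):
    """Adaptive coefficient R = (G + 2*sqrt(g)) / (3*G).

    R starts at 1/3 when g = 0 and rises as generation g increases.
    """
    return (G + 2 * math.sqrt(g)) / (3 * G)
```

For example, with G = 100 the coefficient grows from 1/3 at the first generation to 0.4 at the last, matching the described rise in R over the evolutionary process.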
a. Crossover Operator
Crossover operators produce children by combining genetic information from two or more parents. Commonly recognized crossover operators in genetic algorithms are single-point, two-point, k-point, uniform, partially matched, order, precedence-preserving, shuffle, reduced-surrogate, and cycle crossover.
b. Mutation Operator
Mutation is a fundamental mechanism that preserves genetic variation as it passes from one generation to the next. Well-known mutation operators include displacement, simple inversion, and scramble mutation. The Displacement Mutation (DM) operator shifts a substring of an individual solution within itself; the insertion position is selected at random so that a valid solution results. DM has two variants: exchange mutation, which swaps a portion of the solution with another component, and insertion mutation, which removes a portion and reinserts it at a different position.
3.3 PROPOSED MODIFIED PSO ALGORITHM
1. Worst particles selection: The first step identifies the most unfavorable particles, whose velocities will be updated in the worst-case direction. In our work, we designate the subset of particles with the lowest fitness values as the worst particles (WP); these particles are subject to a distinct velocity function. All remaining particles are regarded as superior particles (BP).
2. Velocity updation: In the proposed MPSO algorithm, the optimum solution is tracked in both the best and the worst regions of the search space to enhance the algorithm's ability to locate the global solution. For BP, the velocity is adjusted using the particle's best position (pbest) and the global best position (gbest). This adjustment is carried out in the following manner,
$$\:v\left[t+1\right]=v\left[t\right]+{C}_{1}*rand*\left(pbest-present\right)+{C}_{2}*rand(gbest-present)$$
Similarly, for WP the velocity is updated based on the particle worst (pworst) and global worst (gworst) positions. Velocity updation for WP is performed as follows,
$$\:v\left[t+1\right]=v\left[t\right]+{C}_{1}*rand*\left(pworst-present\right)+{C}_{2}*rand(gworst-present)$$
Therefore, the WP particles search for the optimum solution within the worst region, whereas the BP particles search within the best region. By using this approach, we effectively mitigate the issue of local optima.
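The two velocity-update rules above (one for BP, one for WP) can be sketched per dimension as follows. This is a minimal illustration; the function name `mpso_velocity` and its signature are assumptions, not the authors' code.

```python
import random

def mpso_velocity(v, x, p_best, g_best, p_worst, g_worst, is_worst,
                  c1=2.0, c2=2.0, rng=random):
    """One-dimension MPSO velocity update.

    BP particles follow pbest/gbest; WP particles follow pworst/gworst,
    per the two update equations above. c1, c2 values are illustrative.
    """
    if is_worst:
        # v[t+1] = v[t] + c1*rand*(pworst - x) + c2*rand*(gworst - x)
        return (v + c1 * rng.random() * (p_worst - x)
                  + c2 * rng.random() * (g_worst - x))
    # v[t+1] = v[t] + c1*rand*(pbest - x) + c2*rand*(gbest - x)
    return (v + c1 * rng.random() * (p_best - x)
              + c2 * rng.random() * (g_best - x))
```

A swarm step would call this once per particle and dimension, passing `is_worst=True` for the WP subset identified in step 1.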
3. Position updation: Based on computed velocity, the position is updated for all particles as follows,
$$\:Pos\left[t+1\right]=Pos\left[t\right]+v\left[t+1\right]$$
The current position of the particle is computed from the previous position Pos[t] and the current velocity v[t+1].
4. New population generation: Two chromosomes are chosen by tournament selection. The GA employs a crossover fraction, XoverFrac, specifying the percentage of offspring produced by the crossover operation, excluding the elite individuals already selected from the current population; XoverFrac is subject to the constraint 0 ≤ XoverFrac < 1. In the present investigation, XoverFrac is set to 0.8, and the chosen crossover function is of the arithmetic type. Because the parent chromosomes are binary, the crossover is performed as an XOR operation on the two parents, expressed in the following way,
$$\:Cross={P}_{1}\oplus\:{P}_{2}$$
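The XOR crossover above is straightforward for binary chromosomes; a minimal sketch (with chromosomes as lists of bits, an assumed representation) is:

```python
def xor_crossover(p1, p2):
    """Bitwise XOR of two equal-length binary parent chromosomes,
    per Cross = P1 XOR P2."""
    return [b1 ^ b2 for b1, b2 in zip(p1, p2)]
```

A resulting bit is 1 exactly where the two parents disagree, so the child marks the positions at which the parents differ.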
Mutation: Mutation is a genetic perturbation applied to individuals within the population. It plays a pivotal role in preserving genetic diversity and in exploring a broader spectrum of viable solutions. Uniform mutation was chosen for our investigation. In uniform mutation, a collection of random numbers drawn from a uniform distribution is generated, one per gene, so the number of random numbers equals the length of the genome. Whether each gene (bit) in the chromosome is altered is decided by its associated random number.
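Uniform mutation on a binary chromosome can be sketched as follows; the flip rule (flip a bit when its uniform draw falls below the mutation probability) is a common convention assumed here, and the name `uniform_mutation` is illustrative.

```python
import random

def uniform_mutation(chrom, mp=0.05, rng=random):
    """Draw one uniform random number per gene and flip the bit
    whenever that number falls below the mutation probability mp."""
    return [1 - g if rng.random() < mp else g for g in chrom]
```

With mp = 0 the chromosome is returned unchanged, and with mp = 1 every bit is flipped; typical values keep mp small to perturb only a few genes.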
$$\:NewGen=\#\:of\:ElitKids+\#\:of\:Crossover+\#\:of\:Mutation$$
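The composition of the new generation in the equation above can be computed from the population size, the elite count, and XoverFrac. This sketch assumes an elite count of 2 and rounds the crossover share, both illustrative choices not fixed by the source.

```python
def generation_counts(pop_size, elite_count=2, xover_frac=0.8):
    """Split the next generation per
    NewGen = #ElitKids + #Crossover + #Mutation,
    with crossover kids as XoverFrac of the non-elite slots."""
    n_crossover = round(xover_frac * (pop_size - elite_count))
    n_mutation = pop_size - elite_count - n_crossover
    return elite_count, n_crossover, n_mutation
```

For a population of 50 with XoverFrac = 0.8, this gives 2 elite kids, 38 crossover kids, and 10 mutation kids, summing back to the population size.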