Socio-inspired evolutionary algorithms: a unified framework and survey

It is well recognized that human societies and their problem-solving capabilities have evolved much faster than biological evolution. The inspiration from human behaviors, knowledge exchange, and transformation has given rise to a new evolutionary computation paradigm. Multiple research endeavors have been reported in the literature inspired by diverse aspects of human societies, each with its own terminology to describe the algorithm. These endeavors have resulted in a plethora of algorithms, worded differently but with more or less similar underlying mechanisms, causing immense confusion for a new reader. This paper presents a generalized framework for these socio-inspired evolutionary algorithms (SIEAs) or socio-inspired meta-heuristic algorithms. A survey of various SIEAs is provided to highlight the working of these algorithms on a common framework, their variations and improved versions proposed in the literature, and their applications in various fields of search and optimization. The algorithmic description and the comparison of each SIEA with the general framework enable a clearer understanding of the similarities and differences between these methodologies. Efforts have been made to provide an extensive list of references with due context. This paper could thus serve as an excellent starting point for anyone interested in this fascinating field of research.


Introduction
Optimization is an essential task in engineering and many other aspects of life and living. Various optimization techniques have been proposed in the literature to solve such problems. These techniques are classified into two broad categories: exact and heuristic. Exact or deterministic methods comprise the classical approaches founded on mathematical and dynamic programming models and can solve optimization problems exactly. However, they may require infeasible computational time with increased problem size and complexity. Since real-world optimization problems are typically nonlinear, multi-modal, vast, dynamic, and complex, exact algorithms are not practically suitable for solving them. Heuristics and meta-heuristics are usually employed in such cases because of their ease of implementation and generic algorithmic framework and design. Heuristics are designed for a specific problem, whereas meta-heuristics have broad applicability.

Evolutionary algorithms
Several meta-heuristics are nature-inspired algorithms. Evolutionary algorithms (EAs) are one of the leading meta-heuristics. EAs drew inspiration from observing flora and fauna closely to understand their relationship with their environment. Darwin (1859, 2018) explained this relationship in 1859 and called it natural selection. According to his theory, individuals not well adapted to their environment do not survive long enough to reproduce, or have fewer chances to reproduce than individuals of the same species that have acquired beneficial characteristics through variation during reproduction. This adaptation of the individuals to their environment is called their fitness; an individual with higher fitness is better adapted to the environment. Each algorithm in the EA class is a population-based, guided random meta-heuristic algorithm. These algorithms simulate species' natural-biological evolution concepts such as reproduction, mutation, recombination, selection, migration, locality, and neighborhood. Evolutionary programming (EP) (Fogel 1997), evolution strategies (ES) (Rechenberg 1970), and genetic algorithms (GA) (Goldberg 1989) are some notable EA techniques.
Akin to EAs, there are other classes of meta-heuristics. Swarm intelligence (SI) algorithms were designed taking inspiration from the collective intelligence of insects, birds, and fish. SI works on the social behavior and self-organizing nature of various natural or artificial agents. Particle swarm optimization (PSO) (Kennedy and Eberhart 1995), ant colony optimization (ACO) (Dorigo and Di Caro 1999), and artificial bee colony (ABC) optimization (Karaboga and Basturk 2007) are some notable swarm-based algorithms. Physics- and chemistry-based algorithms are inspired by phenomena seen in the physical and chemical sciences, respectively (Biswas et al. 2013; Siddique and Adeli 2017). Some good survey articles on bio-inspired meta-heuristics are Darwish (2018) and Fan et al. (2020).
Memetic algorithms (MAs) introduced local search techniques at specific parts of evolutionary algorithms to improve their performance (Moscato 1989; Norman et al. 1991). These algorithms are inspired by complex natural systems where a population of individuals evolves through evolutionary processes and individual learning. This learning or imitation of behavior and knowledge from other individuals is represented as memes, a cultural equivalent of genes, introduced by Dawkins (1976). Memetic algorithms utilize Lamarckian and Baldwinian learning (Whitley et al. 1994) and are also known as Lamarckian EAs and Baldwinian EAs. MAs have been successfully applied to solve real-world problems and have shown high performance (Neri and Cotta 2012). Reynolds (1994) showed that introducing a belief space in a general GA framework makes it more powerful. He developed a framework of cultural algorithms in which the experiences of the candidate solutions are exchanged with the help of a belief space. This cultural framework has been improved and hybridized many times. Some conspicuous algorithms inspired by this framework are the cultural algorithm with evolutionary programming (Chung and Reynolds 1998), cultural swarms (Iacoban et al. 2003), cultured differential evolution (Becerra and Coello 2006), social learning optimization (Liu et al. 2016), and others.

Socio-inspired evolutionary algorithms
It is well accepted that human societies and humans' problem-solving capabilities have evolved much faster than biological evolution (Bjorklund et al. 2010). This evolution is due to humans' biological structure and social means of living and surviving. DNA studies reveal that a newborn child is not genetically much different from their ancestors born 40,000 years ago (Varki and Altheide 2005), i.e., humanity's hardware has remained substantially the same. However, socially, humans have left the caves and are planning to leave the surface of the Earth to colonize other planets in the next few decades, i.e., the software part has changed immensely over time. With the evolution of human life on this planet, humans started living in groups or social organizations for convenience, resource sharing, collective problem-solving, and division of workload and responsibilities. Social organizations of humans can eventually be regarded as an optimization process for the overall welfare of individuals and the organization. Individuals in societies learn to live and survive from their parents and other older members. This behavior and knowledge transfer among members is pivotal for the evolution of societies. Occasionally, this influence works in the reverse order, i.e., older members can also learn from their descendants.
The social phenomena of human interaction, behavioral exchange, learning mechanisms, and others have motivated a new class of meta-heuristics dubbed socio-inspired evolutionary algorithms (SIEAs) or socio-inspired meta-heuristics (see Fig. 1).
Many research endeavors have been reported in the literature claiming inspiration from different social phenomena. These endeavors have resulted in a plethora of algorithms worded differently from each other. However, the underlying mechanisms could be more or less similar, causing immense confusion to a new reader. One of the first serious discussions and analyses of SIEAs emerged during the 2000s with the work of Neme and Hernández (2009). They classified these algorithms into four categories: leadership, alliance formation, social labeling, and neighborhood delimitation and segregation. Similarly, Khuat et al. (2016) and Kumar and Kulkarni (2019) surveyed many SIEAs. Khuat et al. (2016) provided a limited survey without describing several powerful SIEAs. Kumar and Kulkarni (2019) provided detailed descriptions of each algorithm independently. However, these previously published studies are limited to individual surveys without any effort to put the reported SIEAs into a common framework. The algorithms surveyed in these articles are selected based on the social jargon used. Moreover, only some of these algorithms satisfy the necessary criterion of social grouping for being an SIEA; in the others, social tagging alone is used to distinguish the types of individuals. This paper proposes a common framework for describing the prominent SIEAs. In addition, a survey is provided on the state of the art of SIEAs. For each SIEA, the pseudocode description indicates points of similarity and distinctiveness in the context of the broad framework provided. A detailed analysis of each algorithm highlights how the basic steps of EAs and SIEAs, viz. population initialization, selection, recombination, exploration, and exploitation, are incorporated into these algorithms. This description cuts through the diverse terminology of each algorithm to present it in an easily comprehensible manner.
A comparison table that clarifies the similarities and differences between surveyed SIEAs is also furnished.
Furthermore, a detailed table lists the applications to which each algorithm has been applied. An extensive list of references has been provided to cover the significant publications in the area. Therefore, this paper could become an excellent starting point for a researcher trying to understand the field.
The rest of the paper is organized as follows. The following section, Sect. 2, presents an introduction to social phenomena and the division of SIEAs according to a well-known classification of social phenomena. In Sect. 3, the unified framework for SIEAs is proposed and discussed. Then prior art on SIEAs is given in Sect. 4. Section 5 lists the application areas of these algorithms. Finally, Sect. 6 concludes the paper and provides future directions for researchers.

Human social phenomena
Human society provides a complex system with diverse ideas of optimization working in parallel and in series. These societies are built on the conventions of living in peace and working to fulfill everyone's needs for food, clothing, and shelter. The basic needs of a human being are significantly achieved with various social activities or social actions. Interactions among humans are primal social activities that lead to the transfer of information. This information is usually utilized to generalize facts and to benefit society. Although social interactions are not based on any overarching optimizing principle, they lead to robust societal structures.
To improve their lifestyle, humans observe other similar or distinct species and change their behavior and actions accordingly. These observations also influence people to do numerous activities of interest. Furthermore, humans self-observe their actions and try to improve in the future following the previous results. These social means of behavior and other qualities, including information exchange and self-observation, are examples of social phenomena. The algorithms inspired by ideas of social phenomena have been shown to perform well in practice for various optimization problems. Compared to other swarm intelligence mechanisms, algorithms inspired by human society are more robust (Neme and Hernández 2009). Other common social phenomena include collective behavior, self-organizing systems, social interaction, coordination, cooperation and competition, information propagation, communication, language, relationships, leadership, politics, elections, sports, tourism, disease spread, consensus, and social opinion, to name a few. These social phenomena are categorized by various sociologists (Hayes 1911). One of the most influential classifications is provided by Giddings (1914) into cultural, economic, moral and juristic, and political categories. Table 1 lists common phenomena in each category.
This paper utilizes the classification proposed by Giddings to organize SIEAs into various categories for easy comprehension.

Framework for SIEAs
Exploration and exploitation are two fundamental operations in optimization meta-heuristics. In the search for better solutions, the feasible solution space is explored in the exploration process. In exploitation, on the other hand, the solution space is searched around a good solution under consideration to find more promising solutions in its vicinity. The terms diversification and intensification are also used for exploration and exploitation, respectively. A proper balance in the computational effort devoted to these two operations is critical to the success of any meta-heuristic (Bäck et al. 1997). Selection, crossover, and mutation-like operators typically perform these operations. It is not within this paper's scope to discuss the correspondence between each operator and its operation; please refer to Eiben and Schippers (1998), Črepinšek et al. (2013), and Cuevas et al. (2021) for details. The essential idea in SIEAs is to divide the population into subpopulations, each subpopulation searching in a different search sub-space, while simultaneously maintaining the number of individuals in each subpopulation commensurate with the importance of its search sub-space. Initially, individuals in the search space are divided according to a predefined method. Gradually, these individuals migrate to the promising regions in the search space. Thus, well-designed SIEAs typically start with a higher exploration rate and progressively increase exploitation in later iterations.
In terms of individual representation, fitness evaluation, selection mechanisms, and initial population diversity, SIEAs are analogous to conventional EAs. However, operators in SIEAs simulate social phenomena rather than natural-biological evolution. As an SIEA runs, an increasing number of solution individuals are allocated to the most promising sub-regions of the solution space. These algorithms divide the feasible search space into subclasses/clusters/groups based on societal organizations like families, villages, colonies, states, and others, as seen in the real world. The non-duplication of the population individuals in these groups is not guaranteed, and these groups can overlap. The best-fitted individual in every group is appointed as the local leader, and the best individual among all the groups becomes the global leader in each iteration. An individual improves himself by interacting with and imitating the characteristics of other better individuals. Specifically, an individual gains influence from peer members (in the same group and sometimes from another group), local or group leaders, and global leaders. The group with the best individuals is the best group and usually provides the global optimal solution. A flowchart and a pseudocode description of a generic SIEA are succinctly provided in Fig. 2 and Algorithm 1.
Algorithm 1 Pseudocode of a generic SIEA
1: Initialize parameters and randomly generate candidate solutions
2: Create logical groups of solutions
3: while termination condition not satisfied do
4:     Intra-group reproduction
5:     Inter-group reproduction
6:     Update groups
7: end while
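The generic loop of Algorithm 1 can be sketched in Python as follows. This is a minimal illustration on a toy sphere function, not any specific published SIEA: the grouping scheme (round-robin over the fitness-sorted population), the step sizes, and the self-observation mutation of the global best are all assumptions made for this sketch.

```python
import random

def sphere(x):
    # Toy objective to be minimized: f(x) = sum of squares.
    return sum(v * v for v in x)

def generic_siea(fitness, dim=2, pop_size=20, n_groups=4, iters=300, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    def move_toward(ind, target):
        # Weighted move of an individual toward a better individual; the
        # change is kept only if it improves fitness (Lamarckian update).
        cand = [xi + rng.uniform(0, 1) * (ti - xi) for xi, ti in zip(ind, target)]
        return cand if fitness(cand) < fitness(ind) else ind

    for _ in range(iters):
        # Step 2/6: (re)create logical groups -- here round-robin over the
        # fitness-sorted population, an assumed grouping scheme.
        pop.sort(key=fitness)
        groups = [pop[g::n_groups] for g in range(n_groups)]
        leaders = [min(g, key=fitness) for g in groups]
        global_best = min(leaders, key=fitness)
        new_pop = []
        for leader, group in zip(leaders, groups):
            for ind in group:
                if ind is global_best:
                    # Self-observation: the global best mutates itself slightly.
                    cand = [xi + rng.gauss(0, 0.1) for xi in ind]
                    new_pop.append(cand if fitness(cand) < fitness(ind) else ind)
                elif ind is leader:
                    # Step 5: inter-group reproduction -- leaders follow the global best.
                    new_pop.append(move_toward(ind, global_best))
                else:
                    # Step 4: intra-group reproduction -- members follow their leader.
                    new_pop.append(move_toward(ind, leader))
        pop = new_pop  # groups are refreshed at the top of the next iteration
    return min(pop, key=fitness)

best = generic_siea(sphere)
```

In line with the framework, members follow their group leader (intra-group), leaders follow the global best (inter-group), improvements are retained in a Lamarckian fashion, and groups are rebuilt at the start of every iteration (group refreshment).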

Algorithmic components
The two main components of SIEAs are social grouping mechanisms and social phenomena-inspired operators. The initialization in SIEAs depends upon the social grouping mechanism used in it. Best individual (leader) selection, evaluation of individuals, number of individuals in a group, and group updating mechanisms are other components that depend on the choice of the social grouping mechanism. These and other algorithmic ingredients that define a new SIEA in line with the above pseudocode are the following.
1. Initialization: A population of N individuals is typically generated randomly to initialize an SIEA. This initialization of population individuals can also be seeded using some heuristics for a better start. Then, the initial population is divided into numerous groups of individuals.
The best individual in each group is declared the group leader, and the best individual across all groups is declared the global leader. This global leader represents the best-found solution in an SIEA.
A. Social Organization or Grouping: The population in SIEAs is typically divided into M mutually exclusive but collectively exhaustive subpopulations or groups. An individual can also be assigned to more than one group. Individuals may be selected randomly, sequentially, or by a distance-based clustering technique for this division. Senadji and Dawes (2010) argued that the groups generated using clustering methods are more prone to social loafing than the randomly allocated groups. Mechanisms for this division include the following.
i. Fitness-proportionate grouping: the number of individuals in a group M_i equals round(f(L_i) / Σ_j f(L_j) × (n − m)), where f(L_i) represents the i-th leader's fitness. Hence, a better leader will lead more individuals.
ii. Equal-sized random grouping: select l = (n − m)/m individuals randomly from the rest of the n − m individuals and assign them to some group M_i, again select l individuals from the remaining n − m − l individuals and assign them to another group, and so on. This mechanism leads to equal-sized groups with randomly selected individuals.
iii. Nearest-leader grouping: select an individual I_i ∈ {I_1, I_2, . . . , I_{n−m}} and assign it to the group M_i whose leader L_i is most similar to the selected individual I_i, i.e., L_i is the nearest neighbor of I_i.
iv. Distance-based clustering: generate n individuals randomly. Select an individual I_i sequentially or randomly from this population and assign it to the cluster C_g, g = 1, . . . , k, from which it has the minimum Euclidean distance, i.e., C_g ← I_i. Create a new cluster C_{k+1} and assign I_i to it, i.e., C_{k+1} ← I_i, if I_i's maximum distance from all the clusters is greater than the average of its distances from the different clusters.
The random selection of individuals while creating groups can enhance diversification. However, sequential and distance-based clustering methods can provide better intensification. The fitness-proportionate grouping process also maintains a good amount of diversification. A comparison of all the social group-forming mechanisms in terms of intensification and diversification is presented in Table 3. B.
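Two of the grouping mechanisms described above, fitness-proportionate group sizes and nearest-leader assignment, can be sketched as follows. The function names are illustrative, and fitness is assumed to be positive and maximized so that a fitter leader attracts more followers.

```python
import math

def group_sizes(leader_fitness, n_followers):
    # Fitness-proportionate grouping: group i receives
    # round(f(L_i) / sum_j f(L_j) * n_followers) members.
    total = sum(leader_fitness)
    sizes = [round(f / total * n_followers) for f in leader_fitness]
    # Rounding may leave a surplus or deficit; assign the difference
    # to the best leader (an assumption made for this sketch).
    best = max(range(len(sizes)), key=lambda i: leader_fitness[i])
    sizes[best] += n_followers - sum(sizes)
    return sizes

def nearest_leader_groups(leaders, followers):
    # Nearest-leader grouping: each follower joins the group of the
    # leader to which it has the minimum Euclidean distance.
    groups = {i: [] for i in range(len(leaders))}
    for ind in followers:
        dists = [math.dist(ind, leader) for leader in leaders]
        groups[dists.index(min(dists))].append(ind)
    return groups
```

For example, with two leaders of fitness 3.0 and 1.0 and eight followers, the first leader receives six followers and the second receives two.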
Evaluation: Evaluation of individuals is typically performed to find the leaders and create the groups of individuals. A grouping mechanism may utilize the individuals' evaluated fitness to create highly diverse groups or groups whose members resemble each other in terms of fitness. However, some grouping mechanisms, such as the distance-based clustering method, do not require prior evaluation of the individuals. Thus, the evaluation of individuals may be performed in one of two ways as follows. (a) Individuals are evaluated (typically to determine the leaders), and then grouping is performed. (b) Individuals are evaluated after their grouping. C. Leader Selection: Some grouping mechanisms utilize selected leaders to create groups, whereas some do not require leader selection. Alternatively, leaders can be generated beforehand, and the remaining population can be generated in the proximity of these generated leaders. Thus, leader selection is performed in one of the following ways.
(a) The best of the evaluated individuals are selected as leaders, and groups are created by allocating the remaining individuals sequentially or randomly to these leaders. (b) Groups are created using some distance-based method. Then the best of the evaluated individuals are announced as leaders. (c) Leaders are generated randomly in the initial phase of an algorithm. Then group members are generated in the leader's neighborhood. After evaluating all group members, leaders are updated by the best individuals in the group. This selection of leaders is typically based on their objective function values. However, it may also consider constraint satisfaction in the case of constrained optimization problems.
A schematic of the above-described grouping mechanisms is provided in Fig. 3 to differentiate between them. Each downward trail in this figure determines a different grouping mechanism.

2. Reproduction Mechanisms: A reproductive mechanism used in socially motivated systems depicts the social convention of individual evolution. Imitation and transfer of mannerisms are suggested and utilized by various SIEAs as reproductive mechanisms. These reproduction mechanisms mostly exhibit the Lamarckian evolution of individuals, i.e., acquired improvements in an individual's fitness change its genetic encoding rather than just updating its fitness or creating a new individual. As the best individual in a group is its most influential entity, other individuals in the group mimic the characteristics of this best individual to evolve themselves. Similarly, the global best individual influences the group leaders to follow him. Moreover, individuals can follow other better peers or use self-improvement methods. These reproductive mechanisms typically simulate asexual versions of various recombination mechanisms used in the Darwinian evolutionary system. Hence, reproduction in SIEAs can occur in two phases.
A. Intra-group Reproduction: Intra-group reproduction mechanisms utilized by SIEAs are as follows.
(a) Most SIEAs use the "Follow the Leader(s)" principle within a group. The best group individual(s) influences the other in-group members in this phase. Group individuals are usually modified to become more like the leader(s) or are moved toward the leader(s). (b) The individuals follow better peer individual(s) to achieve a better place in the group. (c) Random changes in the population members are made to enhance or maintain population diversity. (d) New randomly generated members are introduced to groups. These new individuals usually replace the worst members of the group. B. Inter-group Reproduction: The inter-group reproduction can happen in multiple ways.
(a) The global best individual influences the best individuals (local leaders) from all other groups in this phase, i.e., each group's best individual is usually moved toward the global best. This movement may be produced using appropriate operators or the same method used for intragroup reproduction. (b) The local leaders follow another leader(s). (c) The local leaders go through self-observation and introduce a quantum of change in themselves using mutation-like operators. (d) Individuals interact and follow the members of other groups. (e) Individuals switch their groups.
Following leaders or members can be modeled with a reinsertion operator or a weighted movement of members toward leaders or other members for increased search power, which contributes to diversification in the initial phases and intensification toward the end of the algorithm. The quantum of change in each group member is essential in deciding how fast the member converges toward the leader(s) or other member(s). Too rapid a convergence may destroy diversity in the group, resulting in a local optimum. However, too small changes increase the computational cost. As leaders change themselves in the inter-group reproduction process, members of the population that were hitherto closer in the search space to one leader may become closer to another leader and change their group affiliation accordingly. A new randomly generated individual may replace a leader or member exhibiting no improvement for a predefined number of iterations. Likewise, replacing the weak members of a group with new randomly generated individuals can promote diversification. The intra- and inter-group reproduction phases can be implemented in an intertwined manner or in the reverse sequence. Initially, the groups must explore the subpopulation with a higher diversification rate, so in the initial stages, the algorithm typically performs more exploration. As the algorithm progresses, the individuals converge toward their respective leaders, focusing more on intensification.

3. Group Refreshment: Groups in SIEAs are updated after or between each iteration of the intra- and inter-group reproduction processes. This refreshment of groups may be performed using one or more of the following.
(a) To maintain the influence cycle in each group, the group leaders are updated, i.e., the group member(s) exhibiting a better evaluated value (fitness) than the leader(s) exchanges its (their) role with it (them). As a result, the global best individual is also updated. (b) The low-powered groups are merged into other, stronger groups (influenced by their possessiveness or strength). (c) Strong groups are merged to form a diverse group with better individuals. (d) Despite the resource-intensive and expensive nature of the grouping process, some SIEAs undergo regrouping after each iteration for a fresh start, i.e., the groups are reformed with the updated population after the intra- and inter-group processes.
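Refreshment steps (a) and (b) can be sketched as follows: leaders are replaced by fitter members, and a low-powered group (one whose leader is weaker than an assumed threshold) is merged into the strongest group. Fitness is minimized here, and `merge_threshold` is an illustrative parameter, not prescribed by any particular SIEA.

```python
def refresh_groups(groups, fitness, merge_threshold=None):
    """groups: list of lists of individuals; returns (leaders, refreshed groups)."""
    # (a) Update each group's leader to its best (minimum-fitness) member.
    groups = [g for g in groups if g]
    leaders = [min(g, key=fitness) for g in groups]
    if merge_threshold is not None and len(groups) > 1:
        strongest = min(range(len(groups)), key=lambda i: fitness(leaders[i]))
        kept, kept_leaders = [], []
        for i, (g, leader) in enumerate(zip(groups, leaders)):
            if i != strongest and fitness(leader) > merge_threshold:
                # (b) Merge the low-powered group into the strongest group.
                groups[strongest].extend(g)
            else:
                kept.append(g)
                kept_leaders.append(leader)
        groups, leaders = kept, kept_leaders
    return leaders, groups
```

For example, with groups [5, 3] and [100, 90] under identity fitness and a threshold of 50, the weak second group is absorbed into the first, leaving a single group led by 3.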

4. Termination Criterion:
The following conditions can be utilized as the termination condition for an SIEA.
(a) The population in an SIEA is driven toward the global optimum solution after each group refreshment, which may include the migration of group members to other groups and the collapsing of low- or no-power groups into other, better groups. Often, only one group remains after many group refreshments; in that case, the SIEA ends. (b) The SIEA terminates after a predefined number of iterations, a standard termination criterion in EAs. (c) Due to the higher convergence speed of an SIEA, the population within a group may become similar in a relatively short period. In such cases, the termination criterion in SIEAs can be modeled in terms of the comparative similarity of the whole population. If the population has not changed significantly for several iterations, there is no need to run the algorithm any further; hence, a termination point is reached.
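Criterion (c) can be sketched as a simple similarity check: stop when all individuals have nearly identical fitness values. The tolerance is an assumed parameter; a practical SIEA might instead track genotypic distances or stagnation over a window of iterations.

```python
import statistics

def population_converged(population, fitness, tol=1e-6):
    # Terminate when the spread of fitness values across the whole
    # population falls below tol -- a simple proxy for population similarity.
    values = [fitness(ind) for ind in population]
    return statistics.pstdev(values) < tol
```

A population of identical individuals triggers termination immediately, whereas a spread-out population does not.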

Prior art on SIEAs
In this section, eleven well-known SIEAs are reviewed in the order of their appearance in the classification as in Table 2 and Fig. 4. For each SIEA, inspirational social phenomena, their algorithmic description, and pseudocode are presented. Furthermore, a table of algorithmic steps utilized in each SIEA in reference to the proposed general framework is provided. These tables highlight the differences between the algorithmic steps used by different SIEAs. Also, a consolidated table combining these tables is furnished at the end of the section to provide a broad view of the similarities and differences between reviewed SIEAs.

Society and civilization algorithm (SCA)
Society and civilization algorithm (Ray and Liew 2003) is inspired by the human behavior of interacting in a symbiotic relationship and improving while living together in societies. The algorithm rests on the following basic premises. Social interactions are improvement-oriented, i.e., society members interact to create opportunities to improve themselves. A society's success relies on its members' progress and success. Hence, a good balance of cooperative and competitive relationships among individuals advances societies and civilizations. Other social phenomena in societies include migration, knowledge sharing, and leadership.
The algorithmic steps for SCA are outlined as follows, and the correspondence of these steps with those of the general framework is given in Table 4.

Initialization and group formation:
A group of individuals forms a society, and societies form a civilization. In SCA, some random points (individuals) are initially generated in the parametric space. These generated individuals together represent a complete civilization. All individuals in the civilization are evaluated for two types of fitness corresponding to the objective function and constraint satisfaction. Individuals close to each other in terms of Euclidean distance are assigned to the same society. Thus, mutually exclusive but collectively exhaustive societies are created so that every society has some individuals, and every individual of the civilization is assigned to some society. The best individuals in terms of both fitness measures in each society are designated leaders. Likewise, the best of the leaders become the civilization leaders.

Reproduction mechanisms:
(a) Intra-group reproduction(s): Society leaders facilitate the improvement of the other society individuals. This is achieved by the movement of individuals toward nearby society leaders in the parametric space. In effect, this simulates the acquisition of knowledge from the leader to improve individuals. In terms of EAs, this process implements intensification. (b) Inter-group reproduction(s): Society leaders improve by migrating toward the civilization leaders. This migration of society leaders allows promising regions of the parametric space to receive increased individual counts. In terms of EAs, this movement implements diversification. The improved individuals newly generated by the intra- and inter-group improvement processes, together with the civilization leaders, appear directly in the subsequent civilizations.
3. Group refreshment: Society formation and leader selection are repeated at each subsequent iteration.

Termination condition(s):
The termination of SCA depends on the computational resources available and is specified as the maximum number of iterations permitted, i.e., the maximum civilization count. SCA stores parametrically unique individuals (solutions) across the civilizations, enabling diverse solutions to be maintained in the search process. At termination, the civilization leaders are reported as the best-found solutions.
SCA was initially proposed for constrained optimization problems. In SCA, a constraint satisfaction vector is calculated for each individual and maintained throughout the run. A nonzero value at the i-th position of this vector represents a violation of the i-th constraint, and a zero at the i-th position means that the i-th constraint is satisfied. Pseudocode for SCA is provided in Algorithm 2.
Algorithm 2 Society and Civilization Algorithm
1: Initialize the parameters and generate a random population representing a civilization
2: Evaluate each individual according to the objective function and constraint satisfaction
3: repeat
4:     Build mutually exclusive societies using the clustering method
5:     Identify leaders and create a leader set for each society
6:     Move each non-leader individual in the direction of the nearest leader and add this newly positioned individual to the new civilization
7:     Identify the civilization leader
8:     Move all the leaders except the civilization leader in the direction of the nearest best leader and add these newly positioned leaders to the new civilization
9:     Add the civilization leader to the new civilization
10: until given civilization count

Talent-based social algorithm (Daneshyari and Yen 2004) is an improved version of SCA. This algorithm proposes two new concepts, Liberty Rate and Talent, to provide new mechanisms for moving individuals toward the corresponding leaders. Liberty Rate measures the independence of a society's individuals and is defined as the ratio of the society's average fitness to the average fitness of the civilization. Liberal societies allow people to move freely toward the leaders. Moreover, each individual's Talent is defined as the product of its objective and constraint ranks. Civilized Swarm Optimization (CSO) (Selvakumar and Thanushkodi 2009), a prominent hybrid variation of SCA, integrates the self-experience concept from particle swarm optimization (PSO). In CSO, a society individual explores the society through guidance based on its own experience and that of its leaders. Table 5 provides a list of hybrid versions of SCA.
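The constraint-satisfaction vector that SCA maintains for each individual (described above) can be sketched as follows. The two example constraints are made up for illustration, using the common convention that g_i(x) ≤ 0 means the i-th constraint is satisfied.

```python
def constraint_vector(x, constraints):
    # Entry i is zero if constraint i is satisfied and equals the
    # violation amount (nonzero) otherwise.
    return [max(0.0, g(x)) for g in constraints]

g1 = lambda x: x[0] + x[1] - 1.0  # example constraint: x0 + x1 <= 1
g2 = lambda x: -x[0]              # example constraint: x0 >= 0
vec = constraint_vector([0.5, 0.2], [g1, g2])  # both satisfied -> [0.0, 0.0]
```

A point violating the first constraint, such as [2.0, 0.0], yields a nonzero first entry equal to the violation amount.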

Soccer league competition algorithm (SLC)
Soccer league competition algorithm (Moosavian and Roodsari 2013) is inspired by the competitions within and among the teams of a soccer league. In a soccer league, players compete for a better position, while teams compete for a better rank in the league table. Four socio-inspired operators are utilized in SLC: imitation, provocation, self-inspection, and substitution. The imitation operator expedites the algorithm's searching capability, while the provocation operator provides high-accuracy solutions to complex optimization problems. The self-inspection (realized through mutation) and substitution operators, in turn, help the algorithm escape from local minima and plateaus. The algorithm rests on the following basic premises.
Every team in a league has several fixed and substitute players. The super player (SP) and the super-star player (SSP) represent the best player in a team and among all teams, respectively. Team players admire and follow their SP or another team's SP, while each SP admires and follows the SSP. After each match, the winner and the loser are declared, and some players, both fixed and substitutes, experience changes. These changes aim to improve the performance of both players and teams. The winning team's players go through the imitation and provocation processes, whereas the losing team's members self-inspect their strategies (using the self-inspection operator) and are substituted more often (using the substitution operator). In imitation, fixed players in a winning team imitate both their SP and the league SSP to improve their future activities. In provocation, substitute players are promoted to fixed players when their performance exceeds the average performance level of the team's fixed players. In self-inspection, a fixed player in a losing team revises his activity to prevent failure in future games; for this, the player tries other strategies that may benefit him. In substitution, new substitutes are tested in a match for their performance.
The algorithmic steps for SLC are outlined as follows. The correspondence of the steps of SLC with those of the general framework is provided in Table 6.
1. Initialization and groups formation: The initial population in SLC represents the complete set of players in a league. These players are evaluated for their fitness, called their power, and are sorted to form some teams. There are two types of players in a team, fixed players and substitutes, and the team's total power is defined as the average power of these players. When a league starts, competitions between all possible pairs of teams are performed, and the winning and losing teams are declared based on their power. In a league, teams compete for a higher rank in the league table, and players compete for a higher position in their team. The best-ranked player in a team is the super player (SP), and the best-ranked player in the league is the super-star player (SSP).

Reproduction mechanisms:
(a) Intra-group reproduction(s): In the winning team, fixed players imitate their team SP, and substitute players are promoted to fixed players if their power exceeds the team's mean power. In the losing team, fixed players self-inspect their behavior and try mutation to improve their strategy, and new substitutes are introduced for a change.
(b) Inter-group reproduction(s): Fixed players of the winning team imitate the SSP of the league.
3. Group refreshment: After applying the operators on the winning and losing teams, all the players are sorted if the termination condition is not satisfied, and new teams are formed for the successive leagues. Again, the power of the competing players and teams in the league is calculated. Furthermore, the SP and SSP are also updated.

Termination condition(s):
The number of seasons is the termination condition in this algorithm, and the SSP at the termination represents the best-found global solution.
The pseudocode for SLC is provided in Algorithm 3.

Algorithm 3 Soccer League Competition Algorithm
1: Initialize the parameters and generate random players
2: Evaluate players' fitness
3: repeat
4: Form the teams and allocate sorted players to them sequentially
5: Start the league competition
6: Find the losing and winning teams
7: Apply corresponding operators in both the teams
8: until maximum number of seasons

Diversity-team soccer league competition algorithm (DSLC) (Qiao et al. 2020) is an improved version of SLC. It improves SLC by adding trading and drafting of players and combining these strategies. In trading, players between teams are exchanged, while new players are introduced to a team in drafting. Table 7 lists a hybrid version of SLC.
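The imitation operator for a winning team's fixed player can be sketched as below. The two acceleration coefficients tau1 and tau2 are illustrative assumptions; the SLC paper defines its own coefficients.

```python
import random

def imitate(player, sp, ssp, tau1=0.5, tau2=0.5):
    """A fixed player of the winning team moves toward both its team's
    super player (SP) and the league's super-star player (SSP).
    tau1 and tau2 weight the two attractions; their values here are
    assumptions for illustration."""
    return [x + tau1 * random.random() * (s - x)
              + tau2 * random.random() * (g - x)
            for x, s, g in zip(player, sp, ssp)]
```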

Socio evolution and learning optimization (SELO)
Socio evolution & learning optimization algorithm (Kumar et al. 2018) is inspired by human learning behavior as a member of the family, a basic unit of the modern societal system. A family is an elementary social group in which people live closely and communicate continuously throughout the day. Through these communications, the better members influence the other members, and SELO imitates this influential process seen in families to improve the members' behavior. The algorithm rests on the following basic premises. Each family member possesses different behavioral traits, which result from the successive observation and imitation of the behavior of other family members. Children learn values, behaviors, and manners from their parents, peers, and other members of society. Moreover, every individual in society learns mannerisms and behaviors from other individuals; e.g., parents learn parenting and other things from other parents. This interaction and learning from others help improve individual performance.
Below are the algorithmic steps for SELO. Table 8 provides the correlation of these steps with those of the general framework. The pseudocode for SELO is provided in Algorithm 4.
1. Initialization and groups formation: Initially, several families with an equal number of individuals (2 parents and a few children) are initialized. The individuals in these families are generated in close neighborhoods in the objective space using a clustering mechanism: first, the parents are generated in a close neighborhood, and then the kids are generated following one of the parents. These families collectively represent a society. Each family's parents and kids are evaluated for their fitness, and the globally best society member is determined.

Reproduction mechanisms:
(a) Intra-group reproduction(s): After initialization, each member of a family follows another, better member selected with a roulette-wheel selection procedure. Parents try to follow other, better parents, and kids follow their parents and better siblings. If following other better parents in society does not improve a parent, then a self-contemplation operator (modeled by a mutation operation) is utilized.
(b) Inter-group reproduction(s): Kids in a family follow their parents in the initial phase. Gradually, this influence shifts to their siblings, peers, and finally to other better society individuals. Similarly, parents follow parents from other families. Whenever a kid is influenced onto a wrong path, i.e., the newly generated solution is worse than the current solution, parent intervention is required; a behavior-correction operator is utilized to rectify the situation.
3. Group refreshment: The globally best society member is redetermined at the beginning of each iteration.
4. Termination condition(s): The algorithm terminates when the maximum number of learning attempts is reached or the families converge. The convergence of a family means that all its members have saturated at a single position. At termination, the best society member found is reported as the best solution. As a rule, the best family in society contains the best solution(s).

Algorithm 4 Socio Evolution and Learning Optimization Algorithm
1: Initialize the parameters and generate the population of parents in their pairwise close neighborhood
2: Generate kids in the neighborhood of the parents in a family
3: repeat
4: Find the best family and society member
5: Each parent of every family decides to randomly follow the behavior of a parent from one of the other families
6: Kids decide to follow either their parents or their siblings or kids from other families
7: until maximum number of learning attempts, or families converge
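The roulette-wheel selection that parents and kids use to pick whom to follow can be sketched as below, assuming maximization with non-negative fitness values.

```python
import random

def roulette_select(fitnesses):
    """Return the index of the individual chosen with probability
    proportional to its fitness (fitness-proportionate selection)."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for i, f in enumerate(fitnesses):
        running += f
        if pick <= running:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```

Individuals with zero fitness are never selected, and a single-entry list always returns index 0.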

Nomadic people optimizer (NPO)
Nomadic people optimizer (Salih and Alsewari 2020) is inspired by the behavior of nomads in their search for food and water. NPO is motivated by the lifestyle of Bedouin Arabic nomads, known as Bedu or Beduin. The algorithm rests on the following basic premises.
Bedouins travel their entire life with all their belongings (mostly camel, cattle, and sheep herds) in search of locations rich in the resources necessary for their lives. Their families are classified into normal families and the Sheikh family. The Sheikh is the clan leader and determines the places where the families settle and their distribution pattern at each location. The Sheikh selects a few normal families to explore the surrounding regions to find suitable locations. Whenever a better place is located, the Sheikh moves the entire clan there. The algorithmic steps for NPO are described below and mapped to those of the general framework in Table 9.
1. Initialization and groups formation: Initially, a set of clan leaders is generated randomly. Then, other families are generated in the vicinity of these clan leaders. Leaders and other families are evaluated for their fitness, and if any new family is found to be better than the previous leader, the leader for that clan is updated.

Reproduction mechanisms:
(a) Intra-group reproduction(s): If no newly generated family is better than the corresponding clan leader, then the families use Lévy-flight moves (Kamaruzaman et al. 2013) to search for better locations in other regions. After performing the Lévy flights, the clan leaders are updated again.
(b) Inter-group reproduction(s): The globally best leader is determined among the leaders, and all other leaders strive to follow that leader.
3. Group refreshment: Group refreshment is applied after each intra-group improvement; the leader of each clan is updated. Furthermore, the globally best leader is determined at the beginning of the inter-group improvement process.
4. Termination condition(s): NPO terminates after a maximum number of iterations have been completed.
The pseudocode for NPO is provided in Algorithm 5, and Table 10 lists its hybrid version.

Algorithm 5 Nomadic People Optimizer
1: Initialize the parameters and generate random clan leaders
2: Generate normal families in the vicinity of their clan leaders
3: Evaluate the leaders and the normal families
4: repeat
5: Update the leaders if newly generated normal families have better fitness than their corresponding clan leader; otherwise, explore the search space using Lévy flight
6: Update the leaders with the families having better fitness than their corresponding leaders after the Lévy flight
7: Periodical meetings: leaders participate in these meetings, which aim to determine the best leader and move all the normal leaders toward the best leader
8: until maximum number of iterations
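The Lévy-flight move used for exploration can be generated with Mantegna's algorithm, sketched below. The stability exponent beta = 1.5 is a common default in the Lévy-flight literature, not necessarily NPO's prescribed setting.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one heavy-tailed Lévy step (Mantegna's algorithm).

    Most steps are small, but occasional very large jumps let a family
    escape the current region, which is the behavior NPO exploits."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

A family position x is then updated per dimension as x + alpha * levy_step() for some problem-dependent scale alpha.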

Imperialist competitive algorithm (ICA)
Imperialist competitive algorithm, or colonial competitive algorithm (CCA) (Atashpaz-Gargari and Lucas 2007b), is inspired by imperialism and the imperialistic competition among empires, which improve themselves by taking control of the weaker colonies of other empires. The algorithm rests on the following basic premises. Stronger countries overcome weaker countries and form empires. The strongest country in each empire is called the imperialist, and the possessed countries are called the colonies of these imperialists. Imperialists attempt to improve their colonies by imposing their more profitable characteristics upon them. Colonies can revolt during this process and can exhibit better characteristics independently. Stronger empires acquire colonies from weaker empires through imperialistic competition and become more powerful. An empire with no colonies collapses or is defeated by a better empire in this imperialistic competition.
Below are the algorithmic steps for ICA. Table 11 provides the correlation of these steps with those of the general framework, and the pseudocode for ICA is provided in Algorithm 6.

Initialization and groups formation:
A random population of countries (solutions) is generated in the parametric space. Each gene in a country represents one characteristic. These countries are evaluated for their cost (fitness). A few of the best countries are designated imperialists, and the rest are termed colonies. The imperialists take over the nearby weaker countries in proportion to their normalized cost (power) to form empires.

Reproduction mechanisms:
(a) Intra-group reproduction(s): Imperialists impose their characteristics on the possessed colonies. To simulate this process, the colonies move in the search space toward their imperialist. In effect, this imitates the colonies' acquisition of knowledge from the imperialists for their improvement; in EA terms, it implements exploitation. Colonies occasionally undergo a revolution while following the imperialists, which provides exploration. In this moving process, a colony may attain better fitness than its imperialist; the roles of that colony and its imperialist are then interchanged.
(b) Inter-group reproduction(s): Empires compete to acquire the weakest colony of the worst empire. Thus, better empires flourish, and weaker ones weaken and finally collapse into better ones.
3. Group refreshment: Imperialists in each empire represent the best solution in that empire and are updated after performing assimilation and revolution processes. Furthermore, empires with no colonies collapse into strong empires after the imperialistic competition.

Termination condition(s):
The weaker colonies and empires collapse into stronger empires, and eventually, the algorithm ends up with just one empire. In this case, the imperialist of the only empire is reported as the best solution found, and the algorithm is terminated. Additionally, or alternatively, a maximum iteration count can be specified as the termination condition. Empires are distinguished based on their power, calculated from the costs of their imperialists and colonies, and, upon such termination, the imperialist of the best empire provides the best solution.
Social-based algorithm (Ramezani and Lotfi 2013) provides a hybrid version of ICA by combining an EA with ICA. Various discrete versions of ICA have also been proposed over time, as listed in Table 12.

Algorithm 6 Imperialist Competitive Algorithm
1: Initialize the parameters and generate some random solutions (countries)
2: Create the empires
3: repeat
4: Assimilate and revolute empires: move the colonies toward the nearby best imperialist with some probability
5: Exchange a colony and its imperialist if the colony's fitness is better than the imperialist's
6: Compute the total power of the empires
7: Imperialist competition: move the weakest colony (colonies) from the weakest empire to the empire that has the most likelihood of possessing it
8: Eliminate empires having no colonies
9: until only one empire remains
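The assimilation move of a colony (step 4 of Algorithm 6) is commonly written as below. beta = 2 is the value usually suggested in the ICA literature, and the deviation angle of the original paper is omitted in this sketch.

```python
import random

def assimilate(colony, imperialist, beta=2.0):
    """Move a colony a random fraction (up to beta) of the distance
    toward its imperialist, dimension by dimension. With beta > 1 the
    colony can overshoot the imperialist, which aids the search."""
    return [c + beta * random.random() * (i - c)
            for c, i in zip(colony, imperialist)]
```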

Parliamentary optimization algorithm (POA)
Parliamentary optimization algorithm (POA) (Borji 2007; Borji and Hamidi 2009) is inspired by the competitive human behavior seen in the parliamentary head selection process. This algorithm also mimics human behavior in athletic championships or presidential campaign competitions. The algorithm rests on the following basic premises. The members of parliament belong to political parties and are elected in a general election process. In a parliamentary head election, the parliament members of a political group are divided into candidates and regular members. Candidates are the members nominated for the parliamentary head position, while regular members vote for the candidates according to their interests. Members can migrate to other parties under the influence of their ideology. In parliamentary elections, candidates or nominated party leaders try to get as many votes as possible from their party members. At times, political parties also form alliances to win the elections.
The algorithmic steps for POA are as follows. The correspondence of these steps with those of the general framework is provided in Table 13. Further, the pseudocode for POA is provided in Algorithm 7.
1. Initialization and groups formation: A random population of individuals is generated, each representing a member of parliament. This population of members is divided randomly into a few equal-sized political groups or parties. Each parliament member is marked as either a candidate or a regular member depending upon its evaluated fitness: the best party members are selected as candidates, the representatives of their respective parties in a parliamentary head election, and the regular members support them by voting in their favor.

Reproduction mechanisms:
(a) Intra-group reproduction(s): Regular political party members favor the candidates to improve their fitness. If a regular member attains better fitness than a candidate, their roles in that party are exchanged.
(b) Inter-group reproduction(s): The overall power of each political group is calculated in terms of its members' fitness. Members of weaker groups are forced to change their parties and join better political groups. In this process, the worst-performing groups gradually lose their power and eventually collapse into other groups. Alliances are also formed with a predefined probability.
3. Group refreshment: Candidate members in each group are reassigned after the intra-group improvement. Some of the best groups are merged, and some of the worst groups are deleted, with predefined probabilities.
4. Termination condition(s): Unlike the real world, POA converges to a state where only one political party remains. The best solutions in POA are represented by the candidates in each political party, and the best among them is the best solution in each iteration.
An improved version of POA for permutation constraint satisfaction problems is proposed by de Marcos et al. (2010). Table 14 lists some hybrid versions of POA.
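The group power used to decide which parties gain or lose members (step 8 of Algorithm 7) can be sketched as a weighted mean of candidate and regular-member fitness. The weight w is an assumption of this sketch, not the paper's exact formula.

```python
def group_power(candidate_fitnesses, regular_fitnesses, w=0.7):
    """Combine the mean fitness of a group's candidates and of its
    regular members into a single power value (maximization assumed).
    The weighting w = 0.7 is illustrative."""
    mean_cand = sum(candidate_fitnesses) / len(candidate_fitnesses)
    mean_reg = sum(regular_fitnesses) / len(regular_fitnesses)
    return w * mean_cand + (1 - w) * mean_reg
```

Groups are then ranked by power; the weakest are dissolved and their members migrate to stronger parties.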

Election campaign optimization (ECO)
Election campaign optimization (ECO) algorithm simulates the candidates' behavior in an imaginary election campaign scenario, where each candidate strives to achieve the highest support from the voters. The algorithm rests on the following basic premises.

Algorithm 7 Parliamentary Optimization Algorithm
1: Initialize the parameters and generate a random population
2: Divide the population into some equal-sized groups
3: Evaluate the individuals
4: Select the best-fit individuals in each group as candidate members
5: repeat
6: Bias regular members of each group toward the candidates of the same group
7: Reassign candidate members
8: Compute each group's power
9: Select some best groups and merge them with some predefined probability
10: Delete some groups with a predefined probability
11: until only one political group remains

Candidates with a higher social status (prestige) can participate in elections. Candidates organize campaigns to get the maximum support from the voters for their election advancement. Voters support the candidates that they find promising for their betterment.
The algorithmic steps for ECO are outlined as follows. The correspondence of these steps with those of the general framework is provided in Table 15. Further, the pseudocode for ECO is provided in Algorithm 8.

Initialization and groups formation:
The population space in this algorithm comprises two types of individuals: candidates and voters. Initially, a random population of candidates is generated. Then, voters are generated both globally, using a uniform distribution, and within the candidates' support regions (ranges), using a normal distribution. The support region of a candidate represents the area in which the candidate can affect voters. The candidate's effect decreases as the difference between the candidate and the voter increases, and beyond the range limit of the support region, it reduces to zero. The support regions can overlap, i.e., a voter can be affected by one or more candidates depending on their support ranges.

Reproduction mechanisms:
(a) Intra-group reproduction(s): Voters affected by multiple candidates distribute their support in proportion to the candidates' effects. In this process, a voter's prestige can become better than a candidate's; the voter's role is then interchanged with the candidate's.
(b) Inter-group reproduction(s): Through campaigns, candidates affect the voters in their support regions. In return, the voters support the candidates in proportion to their prestige. A candidate's total support is the aggregated sum over all the affected voters, and this support helps the candidate move to an updated location, which defines its new election position in the next iteration.
3. Group refreshment: Candidates are reassigned in each iteration with the voters with better prestige in the same support region.

4. Termination condition(s): A maximum number of iterations is used as the termination condition. At termination, the candidate with the highest prestige provides the best global solution.

Algorithm 8 Election Campaign Optimization
1: Initialize the parameters and generate a random population of candidates
2: Calculate candidates' prestige and ranges
3: Generate voters
4: repeat
5: Calculate the effect of candidates on voters
6: Calculate the prestige and support of the voters
7: Find the new support focus and ranges of the candidates
8: Substitute the candidate with the voter with better prestige than the candidate, if any
9: until maximum number of iterations

A comparative study verifying the good performance of ECO on constrained optimization problems is reported by Xie et al. (2010). A parameter design and performance study on the ECO algorithm is presented by Zhang et al. (2011b).
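ECO's proportional support rule, where a voter covered by several candidates splits its support according to each candidate's effect, can be sketched as follows; this is a simplified reading of the rule.

```python
def distribute_support(voter_prestige, effects):
    """Split a voter's prestige-weighted support among the candidates
    affecting it, in proportion to each candidate's effect. A zero total
    effect (voter outside every support region) yields no support."""
    total = sum(effects)
    if total == 0:
        return [0.0 for _ in effects]
    return [voter_prestige * e / total for e in effects]
```

Summing these shares over all voters gives each candidate's total support, which drives its move to a new election position.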

Group leaders optimization algorithm (GLOA or GLA)
Group leaders optimization algorithm (Daskin and Kais 2011b) is inspired by the influence that leaders and members of other groups exert on people. The algorithm rests on the following basic premises. Leaders are the best individuals in a group and act as its inspirational body. Leaders influence other individuals and inspire them to improve. Individuals also draw inspiration from better members of different groups. These influenced members can replace the leader if they attain better characteristics than the leader.
The algorithmic steps of GLOA are as follows. Table 16 shows how they correspond to those of the general framework, and the pseudocode for GLOA is provided in Algorithm 9.
1. Initialization and groups formation: In this algorithm, a few groups are generated randomly and then filled with equal-sized populations of randomly generated members. Each member of these groups is evaluated for fitness, and the best-fit member in each group is designated as the leader.

Reproduction mechanisms:
(a) Intra-group reproduction(s): The leader in each group helps improve the individuals using recombination and mutation operators. Randomly selected members in each group are crossed over with the group leader, and the better of the previous and newly generated members is stored.
(b) Inter-group reproduction(s): One-way crossover is performed between different groups to maintain diversity in each group. In this process, randomly selected members from one group influence randomly selected individuals in another: the characteristics of one member are overwritten onto the other member, and the better of the previous and newly generated members is stored.
3. Group refreshment: Sometimes, a newly generated group member surpasses its leader; hence, the leader is redetermined at the start of each iteration.

Termination condition(s):
The maximum number of iterations is used to terminate the algorithm. The leader in each group represents the best local solution, and the best of these leaders represents the best-found solution.

Algorithm 9 Group Leaders Optimization Algorithm
1: Initialize the parameters and generate a random population of individuals
2: Create equal-sized groups
3: Evaluate the individuals
4: repeat
5: Determine the leader for each group
6: Generate new individuals using the crossover operator between an individual and the group leader, and mutate the generated individual
7: Influence randomly selected individuals using randomly selected members from other groups
8: until maximum number of iterations

Xiang et al. (2014) proposed a Pareto-GLA to solve multi-objective optimization problems. Table 17 lists some hybrid versions of GLOA.
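The inter-group one-way crossover, in which a donor's genes overwrite part of a receiver from another group, can be sketched as below. The contiguous-segment choice and the transfer rate are illustrative assumptions.

```python
import random

def one_way_crossover(receiver, donor, rate=0.25):
    """Overwrite a random contiguous segment of `receiver` with the
    corresponding genes of `donor` (one-way: the donor is unchanged).
    `rate` fixes the fraction of genes transferred."""
    n = len(receiver)
    k = max(1, int(rate * n))               # number of genes to transfer
    start = random.randrange(0, n - k + 1)  # random segment position
    child = list(receiver)
    child[start:start + k] = donor[start:start + k]
    return child
```

GLOA then keeps the better of the original receiver and the returned child.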

Election algorithm (EA)
The election algorithm (Emami and Derakhshan 2015) is inspired by the presidential elections held to elect a country's president. This algorithm mimics the election strategy used in the real world. The algorithm rests on the following basic premises.
Every individual participates in an election, either as a candidate or as a voter. Initially, candidates and their supporters form political parties. The supporters follow their party's candidate because of his ideology. Candidates run advertising campaigns to attract more and more supporters in their favor. The advertisement can be of two types: positive and negative. In a positive advertisement, candidates convey their good characteristics, while in a negative advertisement, candidates disparage their opponents. During the advertising campaigns, candidates with the same ideologies might join and create a united party. This coalition of parties increases their chance of success in the election.
The algorithmic steps for EA are outlined as follows, and the correspondence of these steps with those of the general framework is provided in Table 18. Algorithm 10 provides the pseudocode for the election algorithm.

Initialization and groups formation:
In the election algorithm, a random population of individuals is generated. These individuals can be of one of two types: a candidate or a voter (supporter). Initially, a few individuals are selected as candidates from this population. Afterward, using a clustering method, various electoral parties are formed according to shared interests, beliefs, and ideas to participate in the election.

Reproduction mechanisms:
(a) Intra-group reproduction(s): Advertising is the counterpart of the operators in a GA. Before the election starts, the electoral parties try to influence individuals using advertising; the election algorithm has three forms of it: positive advertisement, negative advertisement, and coalition. In a positive advertisement, candidates impact the individuals by introducing their positive images and qualities.
(b) Inter-group reproduction(s): In a negative advertisement, candidates impact individuals by introducing the negative images and qualities of their opponents. This advertisement affects individuals globally and converges them to a specific electoral party. This winning electoral party contains the global optimum of the whole solution space.

Group refreshment:
At the end of each iteration, the candidates are reassigned for each party. Furthermore, in the coalition process, candidates with similar ideas join to create a united party.
4. Termination condition(s): An election day, defined as the maximum number of iterations, is used as the termination criterion. The candidate in each group represents the best solution in the group, and the best of these candidates represents the best-found solution.
A new election algorithm based on assistance in distributed systems (Zargarnataj 2007) is an improvement over this algorithm. Emami (2019) proposed an improved election algorithm by modifying the party formation step and introducing a chaotic positive advertisement and a migration operator. A modified version of the election algorithm to solve the random k-satisfiability problem has also been proposed in the literature.

Algorithm 10 Election Algorithm
1: Initialize the parameters and generate an initial population of individuals
2: Evaluate each individual
3: Create electoral parties
4: repeat
5: Candidates advertise their plans and enhance their positions within the party by learning new ideas
6: Candidates aim to win over supporters from other parties
7: Form coalitions among candidates if they have similar ideas
8: Re-evaluate the eligibility of candidates
9: until maximum number of iterations
A comparison between the election and election campaign optimization algorithms is presented in Abubakar and Sathasivam (2020). A significant enhancement of the election algorithm upon incorporating a negative campaign strategy has also been reported. Table 19 lists the hybrid versions of the election algorithm.
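The coalition step, in which two parties with similar ideas merge under the fitter candidate, can be sketched as below (maximization assumed).

```python
def coalition(party_a, party_b, fitness):
    """Merge two parties into a united party whose members are ordered
    by fitness; the first member becomes the united party's candidate."""
    return sorted(party_a + party_b, key=fitness, reverse=True)
```

For instance, merging parties [3, 1] and [5, 2] under the identity fitness yields a united party led by 5.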

Ideology algorithm (IA)
Ideology algorithm (IA) (Huan et al. 2017) is a socio-politically inspired population-based meta-heuristic algorithm. It simulates political party members' self-interest and their competitive behavior to improve their rank in the party. The algorithm rests on the following basic premises.
Ideologies guide individuals to achieve their life goals. Hence, individuals in a society follow and support some ideologies. In the case of local parties, each party follows certain ideologies. The individuals in a party follow their party's ideology. At the same time, every party member wants to become the local party leader, and every local party leader wants to become a global leader.
The algorithmic steps for IA are outlined as follows. The correspondence of the steps of IA with those of the general framework is provided in Table 20. The pseudocode for IA is provided in Algorithm 11.
1. Initialization and groups formation: In IA, a population of randomly generated possible solutions is divided into some equal-sized political parties. Here, each member of a party is considered a solution. The position of an individual in a political party depends upon his fitness: the individual with the highest fitness in a political party is the local party leader, and the best local party leader is the global leader. Individuals in the parties compete with the others, desiring to improve or at least maintain their current position.

Reproduction mechanisms:
(a) Intra-group reproduction(s): The individuals in each party are ranked based on their evaluated fitness as the local party leader, the second-best individual, ordinary individuals, the local second-worst individual, and the local worst individual. Each local party leader tries to maintain its status in the party by selecting, via roulette-wheel selection, the best of the fitness improvements obtained through introspection, competing with the second-best individual, and following the global leader. The ordinary individuals simply introspect and follow the leader.
(b) Inter-group reproduction(s): If the difference between the fitness of the local worst individual and that of the second-worst individual in a party is higher than a pre-specified value, the local worst individual changes its ideology by switching to another party. Every other ordinary party individual introspects once and follows all the local party leaders with the desire to become one.

Group refreshment:
The ranking of individuals is performed to find the new local best, worst, and ordinary individuals.
4. Termination condition(s): IA terminates if there is no significant change in the local party leaders for a significant number of iterations or the maximum number of iterations is reached. The best among all local party leaders provides the best solution.

Algorithm 11 Ideology Algorithm
1: Initialize the parameters and generate a random population of individuals
2: Divide the population into some equal-sized political parties
3: Evaluate each individual
4: repeat
5: Rank individuals in each party
6: Search in the neighborhood of each party best, each second party best, and the global best members
7: Worst individuals switch their parties due to a predefined condition
8: Ordinary individuals introspect and follow other local party individuals
9: Update party individuals
10: until no significant change over iterations or the maximum number of iterations
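The per-party ranking that IA performs can be sketched as below, assuming maximization and at least four members per party.

```python
def rank_party(members, fitness):
    """Partition a party into the roles IA distinguishes: local leader,
    second best, ordinary individuals, local second worst, local worst."""
    s = sorted(members, key=fitness, reverse=True)
    return {
        "leader": s[0],
        "second_best": s[1],
        "ordinary": s[2:-2],
        "second_worst": s[-2],
        "worst": s[-1],
    }
```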

Political optimizer (PO)
Political optimizer (PO) (Askari et al. 2020) is a socio-politically inspired meta-heuristic. It mimics all phases of politics, including inter-party elections, constituency/seat allocation, election campaigns, party switching, and parliamentary affairs. The algorithm works on the following premises.
In politics, each party member tries to win the election, and each party tries to maximize its number of seats in parliament to form a government. During the electoral process, party members campaign for votes and use their previous experience to win over voters, while the parties both collaborate and compete for those votes. Furthermore, candidates follow other candidates and sometimes switch parties.
The algorithmic steps for PO are outlined as follows, and the correspondence of the steps of PO with those of the general framework is provided in Table 21. The pseudocode for PO is provided in Algorithm 12.
1. Initialization and groups formation: PO considers a special case in which the number of parties, the number of constituencies, and the number of members per party are all equal. During initialization, a population of individuals is generated and divided sequentially into several equal-sized political parties. Each party member also represents an election candidate: the corresponding members of the various parties contest the election from the corresponding constituency. The fitness of each party member is evaluated in a general election, and the fittest member in each party is elected as the party leader. Moreover, the winners from all the constituencies become the constituency winners, or parliamentarians.
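This initialization can be sketched as below, assuming minimization of an illustrative sphere objective. The array layout and function names are our own; PO itself is problem-agnostic.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    # Illustrative objective to minimize (not specified by PO itself).
    return float(np.sum(x ** 2))

def initialize_po(n, dim, lb=-10.0, ub=10.0):
    """Sketch of PO initialization: n parties with n members each,
    where member j of every party contests constituency j."""
    # Axis 0: party, axis 1: member (= constituency index), axis 2: dimension.
    pop = rng.uniform(lb, ub, size=(n, n, dim))
    fit = np.array([[sphere(ind) for ind in party] for party in pop])
    leaders = fit.argmin(axis=1)  # fittest member within each party (row)
    winners = fit.argmin(axis=0)  # fittest candidate per constituency (column)
    return pop, fit, leaders, winners

pop, fit, leaders, winners = initialize_po(n=5, dim=3)
```

The square party-by-constituency layout reflects PO's special case in which both counts coincide: party leaders are row-wise minima and constituency winners are column-wise minima of the same fitness matrix.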

Reproduction mechanisms:
(a) Intra-group reproduction(s): In the election campaign phase, a recent past-based position updating strategy (RPPUS), which models how politicians learn from the previous election, is used. The election campaign updates party members and candidates toward their corresponding party leaders and constituency winners, depending on their current and previous fitness. (b) Inter-group reproduction(s): Each party member is selected with a gradually decreasing probability and exchanged with the least fit member of a randomly selected party. Furthermore, during parliamentary affairs, each constituency winner is updated based on another randomly selected constituency winner.
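The party-switching step can be sketched as follows. The linear decay schedule and in-place swap bookkeeping are simplified stand-ins for the paper's adaptive parameter, and the RPPUS position-update equations themselves are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def party_switching(fit, t, t_max, lam_max=1.0):
    """Sketch of PO's party-switching phase on a fitness matrix `fit`
    of shape (parties, members), assuming minimization. Each member is
    selected with a linearly decaying probability and swapped with the
    least fit member of another randomly chosen party. Returns the list
    of swaps as (party_a, member_a, party_b, member_b) tuples."""
    lam = lam_max * (1 - t / t_max)  # switching rate: high early, low late
    n_parties, n_members = fit.shape
    swaps = []
    for p in range(n_parties):
        for m in range(n_members):
            if rng.random() < lam:
                q = rng.choice([i for i in range(n_parties) if i != p])
                worst = int(fit[q].argmax())  # least fit = highest cost
                fit[p, m], fit[q, worst] = fit[q, worst], fit[p, m]
                swaps.append((p, m, q, worst))
    return swaps
```

The decaying probability makes inter-party exchange an exploration mechanism that fades as the search converges; a real implementation would swap the candidate positions alongside their fitness values.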
3. Group refreshment: After each election, the constituency winners and party leaders are reallocated in the government formation process. 4. Termination condition(s): PO terminates after a maximum number of iterations, and the best party leader is reported as the best solution.
The core loop of the PO pseudocode (Algorithm 12) proceeds as:
6: Election campaign: update party members and candidates according to their corresponding party leaders and constituency winners, respectively
7: Party switching: exchange members between parties
8: Election and government formation
9: Parliamentary affairs: constituency winners are updated based upon another randomly selected constituency winner
10: until the maximum number of iterations

Manita and Korbaa (2020) proposed a binary version of PO using eight transfer functions (Mirjalili and Lewis 2013), categorized into S-shaped and V-shaped, to solve feature selection problems on gene expression data. Askari and Younas (2021a) proposed an improved political optimizer (IPO) by enhancing its position-updating mechanism and demonstrated its applicability on complex benchmark landscapes and engineering optimization problems. Basetti et al. (2021) proposed a quasi-oppositional-based political optimizer (QOPO) by incorporating quasi-opposition-based learning (QOBL) (Tizhoosh 2005) to improve the exploration and convergence capability of PO and utilized it to solve the economic emission load dispatch problem with valve-point loading. Zhu et al. (2021) proposed seven variants of PO with different interpolation and refraction learning strategies. Xu et al. (2022) proposed an improved political optimizer, namely the quantum Nelder-Mead political optimizer (QNMPO), to solve performance optimization in photovoltaic systems. QNMPO uses the quantum rotation gate method to rotate the population of individual solutions and the Nelder-Mead simplex method to improve solution quality by searching the neighborhood of the best-found solution. Table 22 lists hybrid versions of PO.

A comprehensive view of the similarities and differences between the SIEAs described is provided in Table 23, which shows how the steps of each SIEA are mapped into the generic framework by choosing amongst the possible alternatives for each step.
The table is designed to give the overall perspective in a single picture. From the table, it is possible to discern precisely where the different SIEAs are similar and how they differ in a pair-wise comparison. Furthermore, Table 24 provides the utilization count of the framework components in the considered SIEAs. The following points emerge based on the data provided in it.
1. The highlighted (in bold) mechanisms serve as the most popular choices in SIEAs. However, the best choice of mechanisms for a new SIEA depends on the combination of mechanisms that balances the exploration and exploitation requirements. 2. Some SIEA components are yet to be attempted and can be investigated in the future. 3. It might be possible to develop new and more promising approaches by judiciously selecting other social phenomena and modeling the search on their basis.

Application areas
This section summarizes the reported applications of the state-of-the-art SIEAs, collected through an exhaustive literature search. The results are given in Table 25, which lists the various reported applications of each algorithm. The following salient points can be drawn from a study of Table 25.
1. Almost all SIEAs have been applied to solve various real-function optimization problems. 2. The imperialist competitive algorithm and election campaign optimization have been more popular among researchers attempting real-function optimization problems. 3. The society and civilization algorithm, imperialist competitive algorithm, parliamentary optimization, election campaign optimization, group leaders optimization, and soccer league competition algorithm have been applied to solve discrete combinatorial optimization problems. 4. The imperialist competitive algorithm appears to be the most popular SIEA, with the maximum reported applications over both combinatorial and real-function optimization problems, as surveyed in Hosseini and Khaled (2014). 5. In most cases, the paper proposing an SIEA reports better results than traditional EAs.

Conclusions and future directions
Evolutionary computation has undergone vast and diverse developments in the past few decades. Although the initial inspiration came from biological evolution, some subsequent developments were inspired by the collective intelligence of animals and insects. It is well known that human social evolution has been much faster than biological evolution. Therefore, inspiration from social phenomena, e.g., human behavior exchange and knowledge transfer, has been used to design a new evolutionary computing paradigm dubbed socio-inspired evolutionary algorithms (SIEAs). Numerous SIEAs have been proposed that employ a variety of terminologies picked from their domains of inspiration, viz. elections, societies, and imperial colonization, to name a few. This diverse terminology has created a situation where some algorithms appear to be entirely different but are very similar in matters of significant detail. Anyone trying to understand the area must devote considerable effort to sifting through this terminological maze to arrive at a meaningful understanding of the algorithms and their differences.
In this paper, we present a generalized framework and survey for SIEAs. The proposed framework captures the idea of socially grouping a given population into various groups and evolving them through both intra-group and inter-group mechanisms. The framework provides a detailed description of each algorithmic component and how it is implemented in practice. A survey of multiple SIEAs is also provided to highlight the working of these algorithms and their improved and hybrid versions, along with their algorithmic descriptions and pseudocodes. A mapping of each algorithm to the steps of the general framework helps clarify the similarities and differences between these methodologies. Various applications of these algorithms found in the literature are also listed for ready reference. Thus, this paper could serve as an excellent starting reference for anyone interested in this fascinating and rapidly growing field of research.
In future work, it would be interesting to conduct a rigorous study comparing the performance of various SIEAs on a fair platform and across diverse applications. Such a study is needed to clarify the relative strengths of the different SIEAs and the respective domains in which each outperforms the others. Since the different SIEAs appear to have different strengths and search power, further experimental investigations may be made to create a meta-SIEA that combines the most robust exploration-exploitation balancing mechanisms of the individual SIEAs so as to perform better than each of them individually on as many problems as possible.
Author Contributions All authors contributed equally.
Funding The authors declare that no funds, grants, or other support was received during the preparation of this manuscript.
Data availability This manuscript has no associated data.

Conflict of interest
The authors have no relevant financial or nonfinancial interests to disclose.
Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.