Constrained Optimization Based on Hybridized Version of Superiority of Feasibility Solution Strategy

Teaching-learning-based optimization (TLBO) is a stochastic, population-based, nature-inspired meta-heuristic that was first proposed for unconstrained optimization problems and imitates the teaching-learning process. It has two phases, teacher and learner. In the teacher phase, the teacher, a well-learned person, transfers his/her knowledge to the learners to raise their grades/results; in the learner phase, the learners/pupils refine their knowledge through mutual interaction. To solve constrained optimization problems (COPs) with TLBO, it must be merged with a constraint handling technique (CHT). Superiority of feasibility (SF) is a concept for constructing CHTs that exists in different forms based on various decisive factors. The most commonly used decision-making factors in SF are the number of constraints violated (NCV) and the weighted mean (WM) of constraint violations. In this work, SF based on the number of constraints violated (NCVSF) and on the weighted mean (WMSF) are incorporated into the framework of TLBO. Testing these on the CEC-2006 constrained suite suggests that relying on a single factor to decide the winner is not a wise idea. This observation led us to construct a single CHT that carries the capabilities of both discussed CHTs, laying the foundation of hybrid superiority of feasibility (HSF), in which the NCV and WM factors are combined with dominance given to NCV over WM. In the current research, three constrained versions of TLBO are formulated, named NCVSF-TLBO, WMSF-TLBO, and HSF-TLBO, by implanting NCVSF, WMSF, and HSF, respectively, in the framework of TLBO. These constrained versions of TLBO are evaluated on CEC-2006, with the finding that HSF-TLBO attains the most prominent and flourishing status among them.


Introduction
Optimization is the mathematical process of finding the best possible vector of decision variables that minimizes or maximizes the objective function of a given optimization or search problem. Optimization is a natural process, such as maximizing profit and minimizing expenditure in daily-life applications. It has become an essential part of all branches of science and engineering technology since its first practical application [1,2]. The standard form of a constrained optimization problem is described as follows:

minimize f(x)
subject to: g_i(x) ≤ 0, i = 1, 2, ..., p,
h_j(x) = 0, j = 1, 2, ..., q,     (1)

where x = (x_1, x_2, ..., x_n)^T ∈ Ω is a vector of n decision variables, f(x) consists of m objective functions, g_i(x) describes the p inequality constraint functions, and h_j(x) denotes the q equality constraint functions. If Ω is a closed and connected region in R^n and all objective functions are real-valued, then problem (1) is said to be a continuous multi-objective optimization problem (MOP). In global optimization, the main objective is to find the best solution of the given optimization problem in the presence of multiple optimal solutions. In constrained optimization, one has to find a feasible solution subject to several constraint functions [3,4,5].
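As an illustration of how the constraint side of problem (1) is evaluated in practice (both the NCV and WM factors used later build on per-constraint violations), the following is a minimal sketch, not the authors' code; the helper names and the equality-constraint tolerance eps = 1e-4 (the usual CEC-2006 convention) are our assumptions.

```python
import numpy as np

def violations(x, ineq, eq, eps=1e-4):
    """Per-constraint violation vector for a candidate x.

    Inequality constraints g_i(x) <= 0 contribute max(0, g_i(x));
    equality constraints h_j(x) = 0 are relaxed to |h_j(x)| <= eps,
    contributing max(0, |h_j(x)| - eps).
    """
    g = [max(0.0, gi(x)) for gi in ineq]
    h = [max(0.0, abs(hj(x)) - eps) for hj in eq]
    return np.array(g + h)

# Toy example: g1(x) = x1^2 + x2^2 - 1 <= 0, h1(x) = x1 - x2 = 0
v = violations(np.array([1.0, 0.0]),
               ineq=[lambda x: x[0] ** 2 + x[1] ** 2 - 1.0],
               eq=[lambda x: x[0] - x[1]])
# v[0] == 0.0 (inequality satisfied); v[1] > 0 (equality violated)
```

A solution is then feasible exactly when every entry of this vector is zero.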
In the last decade or two, evolutionary optimization algorithms have contributed much to dealing with different optimization and search problems. Unlike local techniques, where a single design point is updated, evolutionary algorithms do not require any gradient information and typically utilize a set of solutions to find the optimum of the given problem [6,4,7,8,9,10]. This family of algorithms is typically inspired by phenomena of nature. EAs have many advantages, including being extremely robust, easy to implement, and well suited for discrete optimization problems [11,12,13]. Classical evolutionary algorithms can be distinguished by the nature of their solution representation and the operators employed for evolution. Evolution strategies employ a mutation operator to create new solutions represented in real numbers [14,15,16]; evolutionary programming also requires solutions represented in real numbers or integers [17]; genetic algorithms employ crossover operators to evolve their population [18,19]; and genetic programming requires a tree-based representation of computer programs to perform the search process [20,21,22]. The main drawbacks associated with most classical EAs are their high computational cost, poor constraint-handling abilities, problem-specific parameter tuning, limited problem size, and lack of ability to cope with large-scale global optimization problems.
Teaching-learning-based optimization (TLBO) is one of the most efficient recently developed population-based EAs. TLBO was proposed by R. Venkata Rao and his colleagues [23,24]. TLBO is a stochastic algorithm and was first applied to unconstrained optimization problems. It is a nature-inspired, population-based algorithm that imitates the teaching-learning process and has two phases, teacher and learner. In the teacher phase, the teacher, a well-learned person, transfers his/her knowledge to the learners to raise their grades/results; in the learner phase, the learners/pupils refine their knowledge through mutual interaction. TLBO is among the most popular algorithms in the field of optimization and is used on a large scale in various fields of engineering and industry. In this algorithm, the best solution plays the role of the teacher. TLBO differs from other EAs in that each member of the swarm improves its results/grades within the given search space. In TLBO, the starting population is randomly generated in the given search space, and the members of the swarm update their positions as given below [23]. In the teacher phase of TLBO, positions are updated according to Eq. (3):

x_i^(t+1) = x_i^t + r (x_best^t − F µ^t),     (3)

where x_i^(t+1) is the updated position, r is a randomly generated number between 0 and 1, x_i^t is the current position, x_best^t is the best position, F is a randomly generated teaching factor equal to either 1 or 2, and µ^t is the mean position of the population at iteration t.
In the learner phase of TLBO, positions are updated according to Eq. (4):

x_i^(t+1) = x_i^t + r (x_i^t − x_j^t)  if f(x_i^t) < f(x_j^t),
x_i^(t+1) = x_i^t + r (x_j^t − x_i^t)  otherwise,     (4)

where x_i^(t+1) is the updated position, r is a randomly generated number between 0 and 1, x_i^t is the current position, x_j^t is the position of a randomly selected solution other than the i-th one, and f(x_i^t) and f(x_j^t) are the fitness values of x_i^t and x_j^t, respectively.
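The two update rules above can be sketched in Python as follows. This is an illustrative minimal implementation for a minimization problem, not the authors' code; the population layout (one row per learner) and the random seed are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def teacher_phase(pop, fitness):
    """Teacher phase, Eq. (3): each learner moves toward the best
    solution (the teacher) relative to F times the class mean."""
    best = pop[np.argmin(fitness)]    # minimization: smallest fitness
    mean = pop.mean(axis=0)           # mean position mu^t
    F = rng.integers(1, 3)            # teaching factor, 1 or 2
    r = rng.random(pop.shape)         # r ~ U(0, 1), per entry
    return pop + r * (best - F * mean)

def learner_phase(pop, fitness):
    """Learner phase, Eq. (4): each learner interacts with a random
    partner and steps toward the better of the pair."""
    new = np.empty_like(pop)
    n = len(pop)
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        r = rng.random(pop.shape[1])
        if fitness[i] < fitness[j]:   # learner i is better
            new[i] = pop[i] + r * (pop[i] - pop[j])
        else:                         # partner j is better
            new[i] = pop[i] + r * (pop[j] - pop[i])
    return new
```

In the unconstrained algorithm, each phase is followed by a greedy selection that keeps the new position only if its fitness improves; the constrained variants below replace that fitness comparison with a CHT.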

Number of Constraints Violated based SF (NCVSF)
In NCVSF, the decisive factor for declaring the winner among the solutions is the number of constraints violated (NCV). Two solutions x_i^t and x_j^t are compared as follows: the one with the smaller NCV value wins and becomes the updated position x_i^(t+1); when both violate the same number of constraints, the winner is selected by their cost values. Here x_i^(t+1) is the updated position, x_i^t and x_j^t are the competing positions, and NCV is the number of constraints violated. Pseudo-code of the NCVSF technique is given in Algorithm 1:

if NCV(x_i^t) ≠ NCV(x_j^t) then
    Winner of x_i^t and x_j^t is the one with smaller NCV;
else
    Winner of x_i^t and x_j^t is selected by their cost values;
end if
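The NCVSF comparison rule can be sketched as below; a minimal illustration in which the callables ncv and cost are stand-ins for the problem's violation counter and objective function.

```python
def ncvsf_winner(x_i, x_j, ncv, cost):
    """NCVSF: fewer violated constraints wins; on an NCV tie
    (including two feasible solutions) the cost value decides."""
    if ncv(x_i) != ncv(x_j):
        return x_i if ncv(x_i) < ncv(x_j) else x_j
    return x_i if cost(x_i) <= cost(x_j) else x_j

# Toy check: a solution violating 1 constraint beats one violating 2,
# regardless of cost.
w = ncvsf_winner("a", "b",
                 ncv={"a": 1, "b": 2}.get,
                 cost={"a": 9.0, "b": 1.0}.get)
# w == "a"
```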

Weighted Mean based Superiority of Feasibility (WMSF)
In WMSF, the factor upon which the superior of the compared solutions is nominated is the weighted mean (WM). In this CHT the winner of x_i^t and x_j^t is selected as follows: the solution with the smaller WM value wins and becomes the updated position x_i^(t+1); when both have the same WM value, the winner is selected by their cost values. Here x_i^(t+1) is the updated position, x_i^t and x_j^t are the competing positions, and WM is the weighted mean of all constraint violations, defined as follows [25]:

WM(x) = ( Σ_i w_i G_i(x) ) / ( Σ_i w_i ),

where w_i (= 1/G_max_i) is a weight parameter, G_i(x) is the violation of the i-th constraint, and G_max_i is the maximum violation of the i-th constraint in the combined population. Pseudo-code of the WMSF technique is given in Algorithm 2:

if WM(x_i^t) ≠ WM(x_j^t) then
    Winner of x_i^t and x_j^t is the one with smaller WM;
else
    Winner of x_i^t and x_j^t is selected by their cost values;
end if
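The weighted mean can be computed as below. This sketch assumes WM is the w_i-weighted average of the violations G_i(x) normalized by the sum of weights, and that constraints never violated in the combined population (G_max_i = 0) are skipped; both conventions are our reading of the definition above.

```python
import numpy as np

def weighted_mean(G, G_max):
    """WM of a solution: G is its per-constraint violation vector,
    G_max the per-constraint maximum violation over the combined
    population. Weights are w_i = 1 / G_max_i; constraints with
    G_max_i == 0 (never violated by anyone) are skipped."""
    mask = G_max > 0
    if not mask.any():
        return 0.0                      # whole population feasible
    w = 1.0 / G_max[mask]
    return float((w * G[mask]).sum() / w.sum())

# Toy example: violations (1, 2) with population maxima (2, 4);
# weights are (0.5, 0.25), so WM = (0.5 + 0.5) / 0.75 = 4/3
wm = weighted_mean(np.array([1.0, 2.0]), np.array([2.0, 4.0]))
```

Normalizing by G_max_i keeps constraints of very different magnitudes from dominating the mean.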

Main Research Contributions
Using a single decisive factor, NCV or WM, solely for SF reduces the diversity among the solutions, pushing them in the specific direction of the used criterion. In NCVSF, when both competing solutions are infeasible with the same NCV value, the winner is decided based on fitness values, which is not a wise move. A similar situation occurs in WMSF: when the WM values of two infeasible solutions are the same, the winner is again decided upon fitness values, which is likewise not a good decision. In this work, the discussed gaps are removed by introducing HSF. Taking a step based on fitness values while neglecting the constraint violations creates infeasibility among solutions; this gap is filled by taking the step via the discussed CHT in each proposed algorithm. Details of the described contributions are given below.

Hybrid Superiority of Feasibility (HSF)
HSF is a hybrid of NCVSF and WMSF. In it, the decisive factor NCV is considered superior to the weighted mean value, i.e., more importance is given to NCV than to WM. Here the winner of x_i^t and x_j^t is selected as follows: the solution with the smaller NCV wins; on an NCV tie the solution with the smaller WM wins; only when both are equal is the winner selected by their cost values. For constrained optimization, taking the step in the learner phase of TLBO according to fitness values is not a good idea, since the solutions then neglect the constraints in each iteration, which brings infeasibility among solutions; therefore, for constrained optimization, selecting the step via some CHT is the better concept. The implantation of this concept yields three variants: 1. the learner phase based on NCVSF; 2. the learner phase based on WMSF; 3. the learner phase based on HSF. In this paper, TLBO is combined with NCVSF, WMSF, and HSF for handling the constraints of problem (1). This results in new constrained versions of TLBO, denoted NCVSF-TLBO, WMSF-TLBO, and HSF-TLBO, whose pseudo-code is given below.
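The lexicographic HSF rule (NCV first, then WM, then cost) can be sketched as follows; as before, the callables ncv, wm, and cost are illustrative stand-ins for the problem's violation counter, weighted mean, and objective.

```python
def hsf_winner(x_i, x_j, ncv, wm, cost):
    """HSF: NCV dominates; on an NCV tie the weighted mean WM
    decides; only when both tie does the cost value decide."""
    key_i = (ncv(x_i), wm(x_i), cost(x_i))
    key_j = (ncv(x_j), wm(x_j), cost(x_j))
    return x_i if key_i <= key_j else x_j  # lexicographic comparison

# Toy check: equal NCV, so WM breaks the tie in favour of "b",
# even though "b" has the worse cost.
w = hsf_winner("a", "b",
               ncv={"a": 2, "b": 2}.get,
               wm={"a": 0.8, "b": 0.3}.get,
               cost={"a": 1.0, "b": 5.0}.get)
# w == "b"
```

Packing the three factors into a tuple makes the dominance of NCV over WM, and of WM over cost, explicit through Python's lexicographic tuple ordering.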

Simulation Results, Comparison, and Discussion
The CEC 2006 constrained optimization suite [26], comprising 24 problems, has been selected from the available literature to test the robustness and consistency of the proposed algorithms. The basic properties of these problems are given in Table 1, where n is the problem dimension, f(x*) is the known optimal value at the optimum x*, ρ denotes the ratio between the feasible and the whole search space, LI denotes the number of linear inequality constraints, NI the number of non-linear inequality constraints, LE the number of linear equality constraints, NE the number of non-linear equality constraints, TC the total number of constraints, and a the number of active constraints at x*. The results are compiled for all problems over 25 independent runs. Before the simulations commence, the parameters for the selected schemes are set as follows.

PC and software Specification
The system used for performing the simulations runs Windows 10 with 8 GB RAM and an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz processor. All experiments were performed in the MATLAB 2013a (32-bit) environment.

Illustration of terms used for comparison
The terms used in the comparison tables are illustrated below.
• Best: the best fitness value among the final best solutions obtained in 25 runs.
• Median: the median fitness value among the final best solutions obtained in 25 runs.
• Worst: the worst fitness value among the final best solutions obtained in 25 runs.
• υ: the mean constraint violation of the median solution.
• Mean: the mean fitness value among the final best solutions obtained in 25 runs.
• St. Dev: the standard deviation of the fitness values of the final best solutions obtained in 25 runs.
• FR: the percentage feasibility rate over 25 runs.
• SR: the percentage success rate over 25 runs.

Comparison of Proposed Algorithms
Simulation results are compared for the three proposed algorithms, NCVSF-TLBO, WMSF-TLBO, and HSF-TLBO, on the CEC-2006 constrained suite consisting of 24 benchmark constrained problems. Details of the compared statistical data are displayed in the tables below.

Fitness and feasibility convergence graphs
Fitness and feasibility convergence graphs of the proposed algorithms are displayed for representative problems in the following.

Discussion
In this paper, three CHTs named NCVSF, WMSF, and HSF, based on the concept of SF, are implemented and integrated with TLBO. The three constrained variants of TLBO, namely NCVSF-TLBO, WMSF-TLBO, and HSF-TLBO, are applied to the CEC 2006 benchmark functions [26] to check their robustness, consistency, and efficiency. The statistical analysis enables us to state the following observations about the developed algorithms.
• G01: For this problem HSF-TLBO is the leading algorithm, since it is comparable to or defeats the rest on all statistics used for evaluation.
• G02: For this problem HSF-TLBO is the leading algorithm, since the statistics show that it wins or is comparable in all respects except St. Dev.
• G03: For this problem HSF-TLBO is the leading algorithm, since the statistical analysis shows that it wins or is comparable in all respects except Best.
• G04: For this problem the proposed algorithms are comparable in all statistics except St. Dev.; on the basis of St. Dev. we can say that HSF-TLBO is the winner.
• G05: For this problem HSF-TLBO is the leading algorithm, since the statistics show that it wins or is comparable in all respects except St. Dev.
• G06: For this problem HSF-TLBO and WMSF-TLBO are the leading, comparable algorithms, since the statistics are the same for these two.
• G07: For this problem HSF-TLBO is the winner on the statistics Best, Median, and SR, and all algorithms are comparable on c, υ, and FR. On the remaining statistics the performance of HSF-TLBO is not notable.
• G08: For this problem the proposed algorithms are comparable in all statistics except St. Dev.; on the basis of St. Dev. we can say that NCVSF-TLBO is the winner, since the other statistics match for all.
• G09: For this problem HSF-TLBO is the leading algorithm, since it is comparable to or defeats the rest on all statistics used for evaluation.
• G10: For this problem HSF-TLBO is the leading algorithm, since it is comparable to or defeats the rest on all statistics used for evaluation.
• G11: For this problem HSF-TLBO and WMSF-TLBO are the leading, comparable algorithms, since the statistics are the same for these two.
• G12: For this problem the proposed algorithms are comparable, since the statistics are the same for all.
• G13: For this problem HSF-TLBO is the leading algorithm, since it is comparable to or defeats the rest on all statistics used for evaluation.
• G14: For this problem HSF-TLBO is comparable or the winner in all statistics except Median and Mean, where it is very close to the winner.
• G15: For this problem HSF-TLBO is comparable or the winner in all statistics except Mean and St. Dev., where it is very close to the winner.
• G16: For this problem, after a very close competition, HSF-TLBO leads over WMSF-TLBO based on St. Dev.
• G17: For this problem HSF-TLBO exclusively leads over the remaining algorithms on almost all statistics.
• G18: For this problem HSF-TLBO exclusively defeats the remaining algorithms on almost all statistics, with a high SR value.
• G19: For this problem HSF-TLBO exclusively defeats the remaining algorithms on almost all statistics, with a non-zero SR value, while for the remaining algorithms the SR value is zero.
• G20: This is a hard problem of the suite; therefore, the performance of the proposed algorithms is not notable.
• G21: For this problem HSF-TLBO and WMSF-TLBO are very close in competition, but HSF-TLBO can be declared the winner based on its SR value, which is very high compared to WMSF-TLBO.
• G22: This is the hardest problem of the suite; therefore, the performance of the proposed algorithms is not notable, but based on its lower infeasibility the claim that HSF-TLBO is better than the rest is justified.
• G23: This is also a hard problem of the suite, but based on its high SR value HSF-TLBO can be declared the winner.
• G24: For this problem the proposed algorithms are comparable in all statistics except St. Dev.; based on it, WMSF-TLBO can be declared the winner.

Conclusion
The current research work introduces three constrained versions of TLBO: NCVSF-TLBO, WMSF-TLBO, and HSF-TLBO. In these algorithms, the platform of superiority of feasibility is used for handling the constraints of the problem, with the decision of the winner made upon NCV, WM, or a hybrid of these, i.e., HSF. These are integrated into the framework of TLBO to solve COPs. For evaluation purposes the CEC-2006 constrained benchmarks are used. The obtained simulation results are compared, and based on them the following notable points can be concluded:
• Success Rate (SR): Based on the SR values, HSF-TLBO has winning status, since no compared algorithm beats it on any problem of the suite.
• Feasibility Rate (FR): Based on the FR values, HSF-TLBO and WMSF-TLBO are comparable on almost all problems except G21.
• Best Fitness: Based on the Best fitness values, HSF-TLBO is the leading algorithm on most problems, i.e., the winner of the competition based on the Best fitness value.
• Median Fitness: Based on the Median fitness values, HSF-TLBO is the leading algorithm on almost all problems, i.e., the winner of the competition based on the Median fitness value.
• Worst Fitness: Based on the Worst fitness values, HSF-TLBO is the leading algorithm on almost all problems, i.e., the winner of the competition based on the Worst fitness value.
• c: Based on the ordered-triple value c, HSF-TLBO and WMSF-TLBO are comparable over the whole suite.
• υ: Based on the υ value, HSF-TLBO and WMSF-TLBO are comparable on most problems, but there exist some problems on which one can say HSF-TLBO holds a more prominent status than WMSF-TLBO.
• Mean Fitness: Based on the Mean fitness values, HSF-TLBO is the leading algorithm on most problems, i.e., the winner of the competition based on the Mean fitness value.
• St. Dev: Based on the St. Dev values, HSF-TLBO wins the competition, since it beats the others compared.
• Overall Conclusion: The overall conclusion about the proposed algorithms is that HSF-TLBO wins on most problems; therefore, it is the most consistent, robust, and prominent among those compared.
In future work we intend to pursue the following tasks:
• To check the efficiency and robustness of HSF in the environments of other existing nature-inspired algorithms (NIAs).
• To extend the researched idea to solve other constrained problems.
• To adjust parameters for specific types of problems, to ensure that a certain setting suits a certain type of problem.
• To check the algorithms' efficiency when the order of implantation of the teacher phase and the learner phase is reversed.