Various heuristic algorithms have been proposed for different optimization purposes, including simulated annealing (SA), genetic algorithms (GA), Tabu search (TS), ant colony (AC), bee colony (BC), and particle swarm optimization (PSO); among them, PSO and SA are the most widely employed owing to their advantages. Easy programming (few input parameters to adjust) and fast convergence are the major merits of the PSO algorithm, whereas falling into local optimum traps in high-dimensional spaces may be considered its main weakness. The SA algorithm, by virtue of its acceptance mechanism, can avoid getting trapped in local optima, which is a major advantage over other algorithms [19].
In this study, PSO and SA, owing to the merits mentioned above, have been employed as the heuristic algorithms to optimize the A-GTAW process variables in order to achieve maximum DOP, minimum WBW, and a proper value of ASR. PSO has been used twice: to determine the most appropriate architecture for the BPNN and to optimize the process measures. The SA algorithm has then been used to evaluate the performance of the PSO algorithm. Furthermore, a set of validation experiments has been conducted to confirm the proposed approach.
5.3. Simulated annealing algorithm
To interpolate between the intervals of the process input variables and select the values that yield the desired output characteristics (e.g., minimum WBW and maximum DOP), different procedures have been introduced, among which heuristic algorithms have been extensively employed for various optimization problems. All heuristic algorithms are reminiscent of biological or physical processes; in this regard, the SA algorithm mimics annealing in the heat treatment process [21, 22]. In the annealing process, metals are heated up to a specific, pre-determined temperature (near the melting point), at which all metal particles are in random motion. Then, the particles rearrange as the metal cools down slowly toward the lowest energy state. Provided the cooling is conducted sufficiently slowly, lower and lower energy states are reached until the lowest energy state is attained. Similarly, in the A-TIG welding problem an energy (objective) function is created and minimized, and its lowest level gives the optimized values of the variables. The mechanism of the SA algorithm is defined as follows [23]:
First, an acceptable answer space is defined and an initial random solution is generated in this space. Next, the objective function of the new solution (C1) is computed and compared with that of the current one (C0). A move to the new solution is made if either the new solution has a better value or the value of the SA probability function (Eq. (9)) is higher than a randomly generated number between 0 and 1 [22]:

P = exp(−(C1 − C0)/Tk) (9)
where the temperature parameter Tk acts as the temperature in the physical annealing process does [21]. Eq. (10) is used as the temperature reduction rule to cool down the pre-determined temperature at each iteration:

Tk+1 = α × Tk (10)
where Tk+1 and Tk are the current and former temperatures, respectively, and α is the cooling rate. Consequently, in the first iterations of SA, owing to the higher temperature, most non-improving (or even worsening) moves may be accepted. Nonetheless, as the algorithm proceeds and the temperature is reduced, only improving moves are likely to be accepted. This strategy helps the algorithm jump out of local minima rather than being trapped in them. The algorithm may be terminated after a specified number of iterations, after a number of iterations in which no improvement is detected, or after a pre-determined run time.
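The SA mechanism described above (Metropolis acceptance rule with geometric cooling) can be sketched as follows. This is a minimal illustration, not the study's actual welding objective: the cost and neighbor functions, iteration count, and the demo problem are assumptions for demonstration.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, T0=700.0, alpha=0.91, n_iter=1000):
    """Minimal SA sketch: accept-if-better or with the probability of Eq. (9),
    then cool the temperature geometrically per Eq. (10)."""
    x, c0 = x0, cost(x0)
    best_x, best_c = x, c0
    T = T0
    for _ in range(n_iter):
        x_new = neighbor(x)          # candidate solution in the answer space
        c1 = cost(x_new)
        # Accept if improving, or with probability exp(-(C1 - C0)/Tk) (Eq. (9))
        if c1 < c0 or random.random() < math.exp(-(c1 - c0) / T):
            x, c0 = x_new, c1
            if c1 < best_c:          # keep the best solution seen so far
                best_x, best_c = x_new, c1
        T *= alpha                   # Tk+1 = alpha * Tk (Eq. (10))
    return best_x, best_c

# Illustrative 1-D demo: minimize (x - 3)^2 from x = 0
random.seed(0)
bx, bc = simulated_annealing(lambda x: (x - 3.0) ** 2,
                             lambda x: x + random.uniform(-1.0, 1.0),
                             0.0)
```

Because the early high-temperature phase accepts almost every move, the search wanders freely; once the temperature has decayed, the loop behaves like a hill-climber, which is exactly the behavior described in the text.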
5.4. Particle swarm optimization (PSO) algorithm
PSO is a heuristic algorithm proposed by Kennedy and Eberhart [24]. It begins with a population of random solutions (known as particles) which is updated in search of optimum ones. The particles fly through the problem space following the current optimum particles. Each particle keeps track of the best solution it has obtained so far, denoted "pBest", and of the best solution obtained by the whole swarm, denoted "gBest", and changes its velocity towards them. The following equations (Eqs. (11) and (12)) are used to update the particles [25–27].
Vi+1 = w × Vi + (C1 × r1 × (pBesti − Xi)) + (C2 × r2 × (gBesti − Xi)) (11)
Xi+1 = Xi + Vi+1 (12)
where the velocity (Vi+1) of each particle is determined based on its previous velocity (Vi), its personal best solution (pBesti), and the global best solution (gBesti). Eq. (12) is used for updating the particle's position (Xi) [27]. The terms "r1" and "r2" are two random numbers generated independently in the range [0, 1]. The acceleration constants "c1" and "c2" pull each particle (solution) towards the "pBest" and "gBest" positions. The inertia weight "w" is an important parameter for the convergence behavior of the PSO algorithm: a large value of "w" is required to explore the answer space globally, while small values explore nearby regions of the space [28].
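Eqs. (11) and (12) can be sketched as a minimal PSO loop. The population size, iteration count, and c1 = c2 = 2 mirror the settings reported later in this section; the inertia weight w and the sphere-function demo are illustrative assumptions, since the study's actual objective is the BPNN response model.

```python
import random

def pso(cost, dim, bounds, n_particles=50, n_iter=30, w=0.7, c1=2.0, c2=2.0):
    """Minimal PSO sketch of Eqs. (11)-(12); w = 0.7 is an assumed value."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # per-particle best positions
    pbest_c = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_c[i])
    gbest, gbest_c = pbest[g][:], pbest_c[g]  # swarm-wide best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (11): velocity update towards pBest and gBest
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Eq. (12): position update
                X[i][d] += V[i][d]
            c = cost(X[i])
            if c < pbest_c[i]:
                pbest[i], pbest_c[i] = X[i][:], c
                if c < gbest_c:
                    gbest, gbest_c = X[i][:], c
    return gbest, gbest_c

# Illustrative demo: minimize the 2-D sphere function over [-5, 5]^2
random.seed(1)
gb, gc = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

Note how "pBest" memory is per particle while "gBest" is shared, so each velocity update blends the particle's own experience with the swarm's, as described above.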
Based on the literature survey, the architecture of the BPNN (number of hidden layers and nodes per hidden layer) has in most studies been determined by trial and error, whereas in this study the PSO algorithm has been employed to determine the BPNN architecture. Furthermore, the optimization of the proposed BPNN models has been carried out using the PSO algorithm. Moreover, the SA algorithm has been used to confirm the performance of the PSO algorithm (i.e., that it has not become trapped in local optima).
The performance of each evolutionary algorithm is affected by its own distinctive adjusting parameters; the details of the PSO parameters are well documented in Refs. [23–27]. The adjusting parameters used to control the SA and PSO algorithms are as follows.
PSO variables: population size: 50; learning factors c1 and c2: 2; number of iterations performed: 30.
SA variables: temperature reduction rate (α): 0.91; processing time: 30 seconds; initial temperature: 700.