Solving a new bi-objective multi-echelon supply chain problem with a Jackson open-network issue under uncertainty

In today’s world, changes in the economic and industrial sectors are taking place more quickly than in the past. In today’s competitive market, considering the needs and desires of the customer is very important. Thus, the supply chain network should be designed to minimize costs and improve system efficiency, service facilities, and the timeliness of service delivery to the customer. Deployment costs, which determine the number and positions of facilities, are another strategic issue. Therefore, to come closer to real-world applications, the present study uses the Jackson network model (queuing theory) to design a closed-loop supply chain network that reduces the waiting time in queues. Two objectives, namely minimizing the cost and the waiting time incurred in the queues formed at producers and distributors, are investigated. Queuing mathematical expressions are used both in the objectives and in the constraints. To develop a model close to reality under uncertainty, the parameters including the demand rate and the shipping cost of products are considered trapezoidal fuzzy numbers. Also, we convert this fuzzy uncertainty into a crisp form using both the Jimenez and chance-constrained programming approaches. Due to the suggested model’s complexity and nonlinearity, we linearize the mathematical model as much as possible; then, two meta-heuristic algorithms, namely multi-objective particle swarm optimization and the non-dominated sorting genetic algorithm (NSGA-II), are used to solve the model. First, the structural parameters of these algorithms are tuned by the Taguchi method. Then, numerous numerical test problems ranging from small to large scales are used to compare these algorithms, considering Pareto front measurement indices and various hypothesis tests.


Introduction
A supply chain is a collection of organizations linked to each other through materials, information, and financial flows. Depending on the decision-making variables within the time horizon, the decisions taken in the supply chain can be placed in one of three levels of decision-making, including strategic, tactical, and operational. Supply chain networks are divided into different categories depending on the organization and environment. The forward supply chain includes the flow of products and services from the supplier to the final consumer. The network consists of facilities, such as manufacturing, distribution, and retailing centers.
On the other hand, in the reverse supply chain, consumed products flow from the final consumer to manufacturing centers. The network consists of facilities, such as a collection center, reproducer, producer, and disposal center. In this paper, distribution centers are also considered collection centers for defective products.
Loosely speaking, the integration of the forward and reverse supply chains leads to the closed-loop supply chain (CLSC) (Souza 2013; Deng et al. 2020; Yang et al. 2020; Zhang et al. 2020).
The aim of designing a supply chain network is to determine the number and locations of facilities, the materials flow, and the like, so as to reduce costs and increase network efficiency. Since the market is moving toward competitiveness, attention to the needs of the customer is significant. Customer needs include quality, price, timely delivery of the product or service, etc. Queueing theory was integrated into supply chain topics to meet customer needs. Queuing systems can be divided into two groups: discrete and continuous. A discrete system is a system in which state variables change only at a set of distinct points in time. A bank is an example of a discrete system because the state variable (the number of customers present in the bank) changes only when a customer enters or a customer is served.
A continuous system is a system whose state variables change continuously over time (e.g., water leakage behind a dam). The features of the queueing process include the rate at which customers enter the system, customer behavior, service time, and service sequences. The quantification criteria of queueing performance include the waiting time in the line or the system, the number of clients in the queue or the system, and the free time of servers.
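These performance measures can be made concrete with the classical M/M/c formulas. The sketch below is our own illustrative example, not the paper's model: it computes server utilization, the Erlang-C waiting probability, the mean queue length, and the mean waiting time.

```python
import math

def mmc_metrics(lam, mu, c):
    """Steady-state metrics for an M/M/c queue (requires lam < c * mu)."""
    rho = lam / (c * mu)                  # server utilization
    a = lam / mu                          # offered load (Erlangs)
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                + a**c / (math.factorial(c) * (1.0 - rho)))
    pw = a**c / (math.factorial(c) * (1.0 - rho)) * p0   # Erlang-C: P(wait > 0)
    lq = pw * rho / (1.0 - rho)           # mean number waiting in the queue
    wq = lq / lam                         # mean waiting time (Little's law)
    return {"utilization": rho, "P_wait": pw, "Lq": lq, "Wq": wq}

# e.g., a facility with 2 servers, arrivals at 3/hour, service rate 2/hour each
m = mmc_metrics(lam=3.0, mu=2.0, c=2)
print(m)
```

With these hypothetical rates, roughly 64% of arrivals must wait, which is exactly the kind of quantity the second objective of the paper penalizes.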
Intuitively, supply chain management (SCM) is based on a customer-centric approach. Therefore, timely and complete communication between all the chain elements, to understand customer needs and the level at which they are met, is a necessity of the chain. The provision of reliable and fast service to customers requires proper planning for deploying the facilities and managing them. Considering today's competitive market, paying attention to customer demand is of great importance, and addressing it will increase market share. In this research, a novel mathematical model is proposed whose objectives include minimizing the chain's costs and minimizing the average waiting time in product delivery to the customer.
To achieve the second objective, we examine the queue problem in two positions:
• A queue formed in the production line at the producer, where Jackson queuing theory (the main novelty) is used to constrain the queue length in producer facilities.
• A queue formed in the service lines of distribution centers, where reducing the waiting time serves as the second objective of the mathematical model.
The probability of queuing in distribution centers is an important factor in calculating the waiting time for delivering the product or service to the customer; it therefore plays a decisive role in obtaining efficient solutions with respect to the second objective function. Also, numerous decision variables are considered in this research.
Constraints on the queue at producing plants are considered with one difference: the number of production lines is a variable. Besides, the present study aims to determine the optimal number of lines in producing centers using the Jackson network, regarding the constraint on the queue length and balancing the costs. This study also develops the previous models and considers the real-world constraints in more detail. Due to the NP-hardness of these kinds of problems (i.e., supply chain mathematical models), two meta-heuristic algorithms, namely multi-objective particle swarm optimization (MOPSO) and the non-dominated sorting genetic algorithm (NSGA-II), are employed to solve this developed mathematical model. Other purposes of this research are summarized below:
• Determining the acceptable number of potential facilities and their locations.
• Determining the adequate flow rate between the facilities.
• Providing an appropriate encoding to solve the model.
• Comparing these meta-heuristic algorithms using comparative metrics.
In brief, this paper is organized as follows. Section 2 presents the motives for doing this research. Section 3 reviews the relevant literature. Section 4 illustrates the uncertain model of the problem with its assumptions, variables, and parameters. In Sect. 5, the solution method is developed. We solve numerical examples, including an extensive sensitivity analysis in Sect. 6. Finally, managerial insights and conclusions are drawn in Sects. 7 and 8, respectively.

Motives
In this paper, we consider a reverse supply chain problem in which the cycle of flow is taken into account as follows: defective parts are first returned from the customer to the distributor, which also acts as the collection center. The collection center transfers the products to the recycling center. Some of the goods are lost and discarded in the recycling center, and some of them are sent to the producer as semi-finished products and continue on to the distributor, and the cycle continues. Also, one aspect of queuing theory (i.e., the Jackson network) is incorporated into the bi-objective supply chain problem. One objective function aims to minimize the total cost, including the establishment cost of facilities. The other aims to minimize the total waiting time of materials or goods to be served at the distributor. We use the probabilistic mathematical expressions related to queuing theory defined by Marianov and Serra (2003).
Due to the inherent nonlinearity of the mathematical model, we linearize it as much as possible. To be close to reality, we consider uncertainty (i.e., fuzzy logic) in some parameters (e.g., demands and transportation costs). We convert this uncertainty into a crisp form using a chance-constrained programming approach (Inuiguchi and Ramik 2000). We use multi-objective meta-heuristic algorithms because of the structure of the mathematical model (i.e., the existence of binary or integer variables and nonlinear expressions).
To generate a feasible solution, we exploit a method defined by Gen et al. (2006). Due to the two objectives, we use two multi-objective meta-heuristic algorithms, namely MOPSO and NSGA-II. To investigate the Pareto front formed by each algorithm, five indices are examined. Numerous test problems ranging from small to large sizes, with 10 replicates of each problem and defined lower and upper bounds for each parameter, are solved. Numerous figures related to each index, involving trends and boxplots, are drawn to show the impact of test-problem size on each algorithm's behavior. Finally, to decide conclusively which algorithm is superior, we use a multi-attribute decision-making method, namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). Results show that NSGA-II is more appropriate overall.
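As a rough illustration of how TOPSIS can pick between two algorithms scored on several Pareto-front indices, consider the following sketch; the index values, equal weights, and benefit/cost labels are hypothetical, not the paper's data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS closeness to the ideal solution.
    matrix: alternatives x criteria; benefit[j]: True if higher is better."""
    m = np.asarray(matrix, dtype=float)
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # best per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))    # worst per criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)        # higher closeness => preferred

# hypothetical scores of NSGA-II (row 0) and MOPSO (row 1) on three indices
scores = topsis([[0.8, 0.2, 0.9],
                 [0.6, 0.3, 0.7]],
                weights=[1 / 3, 1 / 3, 1 / 3],
                benefit=[True, False, True])  # second index treated as cost-type
print(scores)
```

The alternative with the larger closeness score is the one TOPSIS would declare superior.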

Literature review
In recent years, several papers have been published due to the growing importance of designing closed-loop and reverse supply chain networks. For example, Pishvaee et al. (2012) developed a robust optimization approach to the closed-loop supply chain network design (SCND) under uncertainty.
Mahmoodi (2019) investigated a new multi-objective model of agile supply chain network design considering transportation limits. Mohebalizadehgashti et al. (2020) addressed designing a green meat supply chain network. Resat and Unsal (2019) designed a novel multi-objective optimization approach for a sustainable supply chain. Singh and Goh (2019) devised a multi-objective mixed-integer programming model and investigated an application in a pharmaceutical supply chain. Vafaeenezhad et al. (2019) considered a multi-objective mathematical model for sustainable SCM in the paper industry.
However, some researchers have devised and solved the supply chain mathematical model by efficient algorithms; for example, Robles et al. (2020) optimized a hydrogen supply chain network design by multi-objective genetic algorithms. Kayvanfar et al. (2017) investigated a bi-objective intelligent water drops algorithm for a practical multi-echelon supply chain optimization problem. Nobil et al. (2017) considered a multi-objective evolutionary approach in location and allocation decisions for a multi-echelon supply chain network. Cheraghalipour et al. (2018) considered a bi-objective optimization model for the citrus CLSC using Pareto-based algorithms. Kusolpuchong et al. (2019) studied a genetic algorithm for multi-objective cross-dock scheduling in supply chains. Mardan (2019) planned an accelerated Benders decomposition algorithm for a bi-objective green CLSC network design problem. Niu et al. (2019) devised a cooperative bacterial foraging optimization method for a multi-objective multi-echelon supply chain optimization problem. Roghanian and Cheraghalipour (2019) addressed a set of meta-heuristics to solve a multi-objective model for a citrus CLSC considering CO2 emissions. Mohammed and Duffuaa (2020) investigated a tabu search-based algorithm for the optimal design of multi-objective multi-product supply chain networks.
Some researchers investigated congestion related to the supply chain network. For example, Teimoury et al. (2012) used a queuing approach for making decisions about order penetration points in multi-echelon supply chains. Rioux et al. (2016) investigated the economic impacts of debottlenecking congestion in the Chinese coal supply chain. Marufuzzaman and Ekşioglu (2017) managed congestion in supply chains via dynamic freight routing, including an application in the biomass supply chain. Fathian (2018) considered location and transportation planning in supply chains under uncertainty and congestion by using an improved electromagnetism-like algorithm. Hu et al. (2018) investigated a joint design of fleet size, hub locations, and hub capacities for third-party logistics networks with road congestion constraints. Siswanto (2018) took into account a simulation study of sea transport-based fertilizer products considering disruptive supply and congestion problems. Darestani and Hemmati (2019) optimized a robust bi-objective CLSC network for perishable goods considering a queue system. Poudel et al. (2019) managed congestion in a multi-modal transportation network under biomass supply uncertainty.
Jain and Raghavan (2019) used a queuing approach for inventory planning with batch ordering in multi-echelon supply chains. Nazari-Ghanbarloo and Ghodratnama (2019) optimized a robust tri-objective multi-period reliable supply chain network considering the queuing system and operational and disruption risks. Liu et al. (2020a, b) coordinated a location-inventory problem with supply disruptions and optimized it using a two-phase queuing theory-optimization model approach. Mohtashami et al. (2020) designed a green CLSC model using a queuing system for reducing environmental impact and energy consumption. Fathi et al. (2021) integrated queuing-stochastic optimization with the genetic algorithm for a location-inventory supply chain network. Wu and Yang (2021) optimized a bi-objective queueing model with a two-phase heterogeneous service.
Salkuti and Kim (2019) managed congestion using a multi-objective glowworm swarm optimization algorithm. Li et al. (2019) took into account factors affecting bike-sharing behavior in Beijing, involving price, traffic congestion, and supply chain. Liu et al. (2020a, b) addressed a time-dependent vehicle routing problem with time windows of city logistics with a congestion avoidance approach. Mohtashami et al. (2020) considered the green closed-loop SCND using a queuing system for reducing environmental impact and energy consumption.
Uncertainty has also been investigated by many researchers recently. For example, Gholamian (2016) considered multi-objective multi-product multi-site aggregate production planning in a supply chain under a fuzzy environment. Dai (2016) designed a multi-objective fuzzy model of the closed-loop SCND considering risks and environmental impact. Kumar et al. (2017) designed multi-period SCND considering risk and emission. Yu et al. (2018) investigated multi-objective models and a real-case study for dual-channel SCND with fuzzy information. Gholami et al. (2019) took into account the multi-objective robust SCND considering reliability. However, designing a multi-objective model under uncertainty was also investigated by many researchers. Amin and Zhang (2013) considered a multi-objective facility location model for the CLSC network under uncertain demand and return. Fakhrzad and Goodarzian (2019) accounted for a fuzzy multi-objective programming approach to develop a green closed-loop SCND problem under uncertainty and used an imperialist competitive algorithm. Goodarzian et al. (2020) developed a multi-objective pharmaceutical supply chain network based on a robust fuzzy model and compared the meta-heuristic algorithms used to solve it. Goodarzian and Hosseini-Nasab (2021) applied a fuzzy multi-objective model for a production-distribution problem by using a novel self-adaptive evolutionary algorithm.
After scrutinizing the research papers, specifically in recent years, to the best of our knowledge, there exists no research paper handling a supply chain issue regarding the Jackson queuing type and other features (e.g., uncertainty) and solving by multi-objective algorithms.

Mathematical model
In this section, the suggested model is described to design the multi-product closed-loop SCND problem with a constraint on the queue length at the production center. The forward network of this model includes raw-material supply centers, production centers, distribution centers, and customers; the backward (reverse) network includes centers for collecting returned products, reproduction centers, and waste disposal centers (i.e., the downstream flow). For this purpose, the assumptions, indices, parameters, and variables used are first described before presenting the model. Figure 1 shows the proposed model of the closed-loop SCND.

Assumptions
The assumptions of the multi-product closed-loop SCND problem are as follows:
• The proposed model includes several different products.
• Customers may demand any product, and shortages are not allowed.
• The locations of the potential facilities are predefined and distinct.
• The capacity of all potential facilities is limited and specific.
• The demand and transportation cost parameters are uncertain and are considered trapezoidal fuzzy numbers.
• Products produced at the production center form a queue with an unknown number of service providers.
• Products in the distribution centers are sent to the customer in a distribution line with an unknown number of service providers.
• The number of service providers in the queues of the production and distribution centers is unspecified.
• Service rates in distribution and production centers are known.
According to the above-mentioned assumptions, the most crucial aim of this research is to choose the locations of supplier centers, production centers, distribution centers, reproduction centers, and disposal centers, as well as to determine the optimal flow rate among centers and the number of service providers in production and distribution centers.

Indices
This section introduces indices used in our proposed mathematical model.

Parameters
This section introduces the parameters used in our proposed mathematical model.

• F#K_k: Establishment cost of a server at distributer k
• F#J_j: Establishment cost of a server at producer j
• TIJP_{i,j,p}: Transportation cost of each raw or semi-finished product p from supplier i to producer j
• TJKP_{j,k,p}: Transportation cost of each product p from producer j to distributer k
• TKLP_{k,l,p}: Transportation cost of each product p from distributer k to customer l
• TLKP_{l,k,p}: Transportation cost of each product p from customer l to collection center k (distributer k)
• TKCP_{k,c,p}: Transportation cost of each product p from collection center k (distributer k) to reproducer c
• TCJP_{c,j,p}: Transportation cost of each product p from reproducer c to producer j
• TCDP_{c,d,p}: Transportation cost of each product p from reproducer c to disposal center d
• DE_{l,p}: Demand amount of customer l for product p
• a1_{l,p}: Percentage of returned products p from customer l
• bCP_{c,p}: Percentage of defective returned product p from reproducer c
• bdp_{d,p}: Percentage of defective returned products p sent from the reproducer to disposal center d
• CIP_{i,p}: Supplying capacity of raw or semi-finished product p related to supplier i
• CJP_{j,p}: Producing capacity of product p related to producer j
• CKP_{k,p}: Distribution capacity of product p related to distributer k
• CKP'_{k,p}: Collecting capacity of product p in collection center k (distributer k)
• CCP_{c,p}: Reproducing capacity of product p in reproducer c
• CDP_{d,p}: Disposal capacity of product p in disposal center d
• μK_k: Service rate of each server at distributer k (exponential distribution)
• μJ_j: Service rate of each server at producer j (exponential distribution)
• bJP_{j,p}: Upper limit of the queue length for product p in producer j
• BQ_{j,p}: Upper limit on the probability of excessive queue length of product p in producer j
• M: A big value number

Decisions variables
This section introduces the decision variables used in our developed mathematical model.

• #J_j: Number of service providers (servers) in producer j
• xijp_{i,j,p}: Transferred amount of semi-finished product p between supplier i and producer j
• xjkp_{j,k,p}: Transferred amount of finished product p between producer j and distribution center k
• xklp_{k,l,p}: Transferred amount of finished product p between distribution center k and customer l
• xlkp_{l,k,p}: Transferred amount of finished product p between customer l and collection center k
• xkcp_{k,c,p}: Transferred amount of finished product p between distributer k (collection center k) and reproducer c
• xcjp_{c,j,p}: Transferred amount of finished product p between reproducer c and producer j
• xcdp_{c,d,p}: Transferred amount of finished product p between reproducer c and disposal center d
• xkp_{k,p}: Amount of product p transferred from distributer k
• xjp_{j,p}: Amount of product p transferred from producer j
• p_{0,k,p}: Probability of a queue forming at distributor k for product p
• pr_{j',j,p}: Probability of a queue forming at producer j with j' in the system for product p
• w_{k,p}: Waiting time of product p in distributer k

Objective functions
Two separate objective functions are taken into consideration in our proposed mathematical model, as follows.

First objective function: minimizing the total cost. The first objective function minimizes the cost of the entire supply chain network; these costs include the fixed costs of establishing centers, the fixed costs of utilizing servers, and the shipping (transportation) costs.

Second objective function: minimizing the total waiting time. The second objective function minimizes the total waiting time of products in the queues of the distribution centers.

Proposed bi-objective probabilistic fuzzy mixed-integer nonlinear mathematical model
Regarding the indices, parameters, and decision variables, the multi-product closed-loop SCND is formulated as a bi-objective probabilistic fuzzy mixed-integer nonlinear mathematical model. For instance, Constraints (4) and (22) read:

$$\sum_{k=1}^{KT} xlkp_{l,k,p} \geq a1_{l,p}\,\widetilde{DE}_{l,p} \quad \forall\, l, p \qquad (4)$$

$$yi_i,\; yj_j,\; yk_k,\; yc_c,\; yd_d \in \{0,1\} \quad \forall\, i, j, k, c, d \qquad (22)$$

Constraint (4) shows the fraction of the consumed and returned products from customers to the collection center. Constraint (5) ensures the equality of returned products from customers to the collection center with transferred products to the reproduction center. Constraint (6) indicates the fraction of returned products from a customer that can be reproduced. Constraint (7) shows the fraction of defective returned products from the customer, which are transferred from the reproduction center to the disposal center. Constraint (8) enforces balance at the distribution center and ensures the transmission of newly produced and reproduced products to the customer. Constraint (9) enforces balance at the producing center and shows that customer demand is met by the total of reproduced products and products received from the supplier. Constraints (10)-(15) bound the capacity of potential facilities, including the collection, disposal, reproduction, manufacturing, and supplying centers, respectively. It should be noted that, in all of the above constraints, if any facility is chosen, only the capacity of that center can be used. Constraint (16) determines the number of required dual distribution/collection centers. Constraint (17) defines the product's input rate to the queueing system of the distribution center. Constraints (18) and (19) are related to queue-length constraints in the distribution center and give the queuing probability and waiting time of each product in the constructed distribution center. For further information about how these equations were obtained, please refer, for example, to Shortle et al. (2018).
Constraint (20) limits the queue length of manufactured and reproduced products in the producing center. Constraints (21) and (22) define the types of the decision variables.

Jackson network (Queuing theory)
The studied Jackson network in the present research is illustrated in Fig. 2, and the equations of the proposed network follow from it. Throughout our investigation, the value of p'_1 (i.e., the probability of a product entering the producer from the supplier) and the value of p_2 (i.e., the probability of a product entering the producer from the reproducer) are both considered equal to one. This means that the amount of product specified to be sent from a specific supplier or reproducer to a specific producer will enter the production center with probability 1. In other words, we assume that there exists no waste or difficulty (e.g., a disaster) in receiving the raw materials or semi-finished products from the supplier and the reproducer (i.e., the reworking process) at the producer. Therefore, according to the mentioned values, the input rate to the producing center (λ_1) is calculated from the variables λ'_1 and λ_2, which are the input rates of the product from the supplier and the reproducer to the producer, respectively.
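Under the stated assumption p'_1 = p_2 = 1, the Jackson traffic equation for the producing center reduces to a simple sum of the two incoming flows (a reconstruction using the symbols defined above):

```latex
\lambda_1 = p'_1\,\lambda'_1 + p_2\,\lambda_2 = \lambda'_1 + \lambda_2
```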
Consequently, the input rate corresponding to the proposed CLSC network model follows. Based on Constraint (20), we use a queueing model to turn this constraint into a deterministic one. In this M/M/C/K queueing model, we have C channels with system capacity K, a service rate μ for each channel, and an entrance rate λ into the production or distribution centers (note that the time between occurrences follows the exponential distribution, and as a result, the number of occurrences follows a Poisson distribution). With the notes mentioned above, Constraint (20) therefore changes into Constraint (26) as follows (Marianov and Serra 2003):

$$1 - \sum_{j'=0}^{\#J_j + bJP_{j,p}} pr_{j',j,p} \leq BQ_{j,p} \qquad (26)$$

Constraint (26) ensures that the probability that the queue length for product p in production center j with #J_j servers exceeds bJP_{j,p} is lower than BQ_{j,p}. Note that pr_{j',j,p} is the probability corresponding to j' in the system.
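This chance constraint can be evaluated numerically from the M/M/C/K steady-state distribution. The sketch below is an illustrative implementation with hypothetical rates, not the paper's data: it computes p_0, ..., p_K and the probability that the number waiting exceeds a limit b.

```python
import math

def mmck_probs(lam, mu, c, K):
    """Steady-state probabilities p_0..p_K of an M/M/c/K queue."""
    a = lam / mu
    unnorm = [a**n / math.factorial(n) if n < c
              else a**n / (math.factorial(c) * c**(n - c))
              for n in range(K + 1)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def prob_queue_exceeds(lam, mu, c, K, b):
    """P(queue length > b), i.e. P(N > c + b) = 1 - sum_{n <= c+b} p_n."""
    p = mmck_probs(lam, mu, c, K)
    return 1.0 - sum(p[: min(c + b, K) + 1])

# hypothetical producer: 2 lines, service rate 2/h each, arrivals 3/h, K = 10
tail = prob_queue_exceeds(lam=3.0, mu=2.0, c=2, K=10, b=4)
print(tail)  # this probability must stay below the allowed threshold BQ
```

In the constraint above, the decision variable is the number of servers; increasing it drives this tail probability down until the BQ limit is satisfied.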
Note that the service rate in producer j is logically computed from the number of established servers and the service rate of each server. By combining Constraint (27), Constraint (18) changes into Constraint (28). As a result, Constraint (26) changes into Constraint (29), where j and p are the indices of the production center and the product, respectively.

Crisp auxiliary model
In this section, after determining the probabilistic queue model, we derive the model with fuzzy parameters. In this model, the parameters, including transportation costs and customer demand, are considered fuzzy parameters with a trapezoidal membership function. The following steps describe how the uncertain model is converted into a crisp one.

Given the uncertain nature of some important parameters (including demands, the cost of manufacturing each unit of product, the cost of transporting each unit of product, and the purchase cost of each unit from the supplier) and the unavailability of the required historical data at the design stage, these parameters are estimated based on expert opinions. Therefore, these ambiguous parameters are formulated as uncertain ones in the form of trapezoidal fuzzy numbers. It is worth noting that evaluating a deterministic demand is hard, and sometimes impossible for long-term (strategic) decisions, because of the lack of past experience and data. The demand for each product, along with the cost of producing and transporting a unit of product, which changes in short-term planning, is considered fuzzy. Moreover, due to the uncertainty of the purchase price of the parts from the supplier, it is assumed to be a vague (fuzzy) value as well.

Possibilistic chance-constrained programming (PCCP) is often used to deal with uncertain constraints having non-deterministic data on the left- or right-hand side. With this method, to control the level of assurance with which these uncertain constraints hold, the decision-maker can set a minimum confidence level as a safety margin for each of these constraints (Pishvaee et al. 2012). In this case, two standard fuzzy measures, possibility (POS) and necessity (NEC), are usually used.
It should be noted that POS indicates the optimistic possibility level of an uncertain event involving uncertain parameters, while NEC represents pessimistic decision-making about an uncertain event. Consequently, using NEC is more conservative (Inuiguchi and Ramik 2000); i.e., we assume that the decision-maker has a pessimistic (conservative) attitude toward establishing the uncertain constraints.
Therefore, NEC is used to ensure that the uncertain constraints hold. Based on the fuzzy parameters, using the expected value for the objective function and the NEC measure for the uncertain constraints, the crisp counterpart of the uncertain model can be formulated. To do so, the compact form of the proposed model is first written, where C, F, and d are the fixed costs, variable costs, and demand, respectively; A and B are coefficient matrices; and x and y are the continuous and binary (zero-one) variables, respectively. Now, it is assumed that the F and d vectors are uncertain parameters in the above model. According to PCCP, the expected value of the objective function and the NEC measure are used to deal with the objective function and the uncertain constraints, respectively. The compact form of the basic PCCP model is shown below, where α controls the minimum degree of assurance with which an uncertain constraint holds under the decision-maker's (NEC) approach. Given the trapezoidal distribution function for the vague (fuzzy) parameters, the general form of Model (37) follows. Finally, the crisp mathematical model of the CLSC network is obtained by considering the length of the queue in the production and distribution systems, as follows: s.t.
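For a trapezoidal fuzzy number (d1, d2, d3, d4), the expected value is the average of its four defining points, and a standard NEC-based crisp counterpart of a constraint with a fuzzy right-hand side, lhs >= d~, at confidence level alpha is lhs >= (1 - alpha) * d3 + alpha * d4. The sketch below assumes these standard formulas; the demand values are hypothetical.

```python
def expected_value(tfn):
    """Expected value of a trapezoidal fuzzy number (d1, d2, d3, d4)."""
    d1, d2, d3, d4 = tfn
    return (d1 + d2 + d3 + d4) / 4.0

def nec_rhs(tfn, alpha):
    """Crisp right-hand side so that NEC(lhs >= d~) >= alpha holds for a
    trapezoidal fuzzy RHS: lhs >= (1 - alpha) * d3 + alpha * d4."""
    d1, d2, d3, d4 = tfn
    return (1.0 - alpha) * d3 + alpha * d4

demand = (80.0, 90.0, 110.0, 120.0)    # hypothetical fuzzy demand
print(expected_value(demand))           # used in the objective function
print(nec_rhs(demand, alpha=0.9))       # conservative crisp demand in constraints
```

The larger alpha is, the closer the crisp right-hand side moves to the pessimistic end d4, which matches the conservative NEC attitude described above.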

Solution approaches
Given that the problem is NP-hard, meta-heuristic algorithms are used to solve our developed mathematical model. These algorithms are NSGA-II and MOPSO. In the following, the mechanism for generating feasible solutions and the structures of the above-mentioned algorithms are described.
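Both NSGA-II and MOPSO rely on Pareto dominance to rank candidate solutions. A minimal sketch of the dominance test and the extraction of the first non-dominated front for two minimization objectives follows; the objective vectors are hypothetical.

```python
def dominates(a, b):
    """For minimization: a dominates b if no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_pareto_front(points):
    """Return the non-dominated subset (first front) of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (total cost, waiting time) pairs for five candidate designs
pts = [(10.0, 5.0), (8.0, 7.0), (9.0, 6.0), (12.0, 4.0), (11.0, 6.0)]
front = first_pareto_front(pts)
print(front)  # (11.0, 6.0) is dominated by (10.0, 5.0) and drops out
```

NSGA-II repeats this peeling to build successive fronts, while MOPSO uses the same dominance relation to maintain its external archive of non-dominated particles.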

Generating feasible solution
Considering the mathematical model below, which is a brief version of our primary proposed mathematical model, we explain through a simple numerical example how we generate a feasible solution and navigate the feasible solution space.
In the above model, the aim is to minimize the total cost, involving the shipment and establishment costs, such that customer demands are met and the shipment amounts do not exceed the considered capacities. Note that C_ij is the shipment cost between supplier i and customer j, Dem_j is the demand of customer j, Cap_i is the capacity of supplier i, x_ij is the shipment amount between supplier i and customer j, and y_i is one if supplier i is established and zero otherwise. In this example, the priority-based decoding introduced by Gen et al. (2008) is used. This encoding considers a permutation of real numbers whose length equals the number of nodes at each level.
The encoding method is illustrated in Fig. 3. At one level of the supply chain network, with four suppliers and three customers, the encoding is a permutation of the node numbers, shown in this figure as (2-5-3-7-4-1-6). The priorities (4-1-6) are customer-related and (2-5-3-7) are supplier-related. The following two steps decode this chromosome.
Step 1: First, the supplier with the largest priority is selected (priority 7, the fourth supplier); if this supplier can meet all customer demand, the priorities of the remaining suppliers are reduced to zero.
In the example, the fourth supplier's capacity is 850, while the total customer demand is 1050, so the supplier with the next largest priority is also selected (priority 5, the second supplier). Now the combined capacity of these two suppliers (1600 units) exceeds the total customer demand (1050 units), so the priorities of the first and third suppliers are reduced to zero.
Step 2: After determining the number and location of the suppliers, the optimal allocation between the selected suppliers and the customers is made. At this stage, the supplier with the highest priority (priority 7, the fourth supplier) is selected, the lowest shipping cost associated with this supplier is identified (the first customer, with cost 11.42), and the minimum of the remaining capacity and the unmet demand is allocated (here, the minimum value is 300).
The corresponding priority is reduced to zero after updating the residual capacity or the unmet demand. Step 2 is repeated until all priorities are reduced to zero.
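The decoding steps above can be sketched as follows. This is a condensed variant that interleaves supplier selection and allocation, the tie-breaking details are one plausible reading of Gen et al. (2008), and the data in the usage example are hypothetical rather than the instance of Fig. 3.

```python
def priority_decode(priority, cap, dem, cost):
    """Priority-based decoding for a transportation structure (sketch).
    priority: values for suppliers then customers; cap/dem: capacities, demands;
    cost[i][j]: unit shipping cost. Assumes total capacity covers total demand.
    Returns the shipment matrix x."""
    nS, nC = len(cap), len(dem)
    cap, dem, pri = list(cap), list(dem), list(priority)
    x = [[0] * nC for _ in range(nS)]
    while sum(dem) > 0:
        k = max(range(nS + nC), key=lambda n: pri[n])
        if pri[k] == 0:          # nothing left to allocate from
            break
        if k < nS:               # highest priority is a supplier:
            i = k                #   pick its cheapest unmet customer
            j = min((j for j in range(nC) if dem[j] > 0), key=lambda j: cost[i][j])
        else:                    # highest priority is a customer:
            j = k - nS           #   pick its cheapest open supplier
            i = min((i for i in range(nS) if cap[i] > 0), key=lambda i: cost[i][j])
        q = min(cap[i], dem[j])  # allocate min(residual capacity, unmet demand)
        x[i][j] += q
        cap[i] -= q
        dem[j] -= q
        if cap[i] == 0:          # exhausted node: priority reduced to zero
            pri[i] = 0
        if dem[j] == 0:
            pri[nS + j] = 0
    return x
```

For instance, with two suppliers of capacities (5, 7), two customers with demands (4, 6), and priorities (2-4-1-3), the decoder first serves the cheapest customer of the priority-4 supplier, then fills the remainder.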
The feasible space of our main model is generated in this way. Note that, for some of the constraints, a penalty-function approach is used, in which the deviation from the right-hand-side value of the related constraint is added to the objective functions; whenever this deviation equals zero, a feasible solution is obtained. Figure 3 illustrates the described mechanism.

NSGA-II algorithm
The NSGA-II is one of the most widely used and powerful algorithms for multi-objective optimization and has proven effective on a variety of problems. Srinivas and Deb (1994) introduced the original NSGA method for solving multi-objective optimization problems. In the NSGA-II, a number of solutions in each generation are chosen using binary tournament selection: two solutions are drawn at random from the population, compared, and the better one is selected. The selection criteria in the NSGA-II are, primarily, the rank and, secondarily, the crowding distance.
The lower the rank and the larger the crowding distance, the more desirable the solution. By repeatedly applying the binary tournament operator to each generation's population, a set of individuals is selected to undergo crossover and mutation. Crossover is performed on part of this selected set and mutation on the rest, producing a population of offspring and mutants, which is then merged with the original population. The merged population is first sorted by rank in ascending order; members with the same rank are then sorted by crowding distance in descending order. A number of members equal to the main population size is taken from the top of the sorted list, and the rest are discarded. The selected members form the next generation, and this cycle repeats until the termination condition is met. The non-dominated solutions of the multi-objective optimization problem are referred to as the Pareto front; none of the Pareto front's solutions is preferable to the others, and, depending on the circumstances, each can be taken as an optimal decision. The steps of the NSGA-II are summarized as follows:
Step 1: Generate the initial population, as usual, according to the scale and constraints of the problem.
Step 2: Evaluate the produced population from the defined objective functions.
Step 3: Apply the non-dominated sorting method. Population members are grouped so that the first category is a set not dominated by any member of the current population, members of the second category are dominated only by members of the first category, and the process continues in the same way until every member is assigned a rank equal to its category number.
Step 4: Calculate the control parameter called the crowding distance. This parameter is calculated for each member in each group and represents how close that member is to the other members of its group; large values of this parameter lead to better diversity and spread in the population.
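Step 4 can be sketched as follows: a minimal crowding-distance routine over the objective vectors of one rank, with boundary points receiving infinite distance so they are always preserved.

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors within one rank."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # Sort indices by the k-th objective; extremes get infinite distance.
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax == fmin:
            continue
        # Interior points accumulate the normalized gap between neighbours.
        for r in range(1, n - 1):
            dist[order[r]] += (front[order[r + 1]][k]
                               - front[order[r - 1]][k]) / (fmax - fmin)
    return dist
```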
Step 5: Select the parents for reproduction. The selection mechanism is a binary tournament between two randomly selected members of the population.
Step 6: Perform the mutation and crossover. Figure 4 illustrates the aforementioned steps schematically.
Crossover operator: To determine the number of crossovers, the desired crossover rate is multiplied by the population size, the result is divided by two and rounded, and the outcome is finally multiplied by two. The crossover mechanism is y_1 = a x_1 + (1 - a) x_2 and y_2 = (1 - a) x_1 + a x_2, where x_1 and x_2 are the parents, a belongs to the [0, 1] interval, and y_1 and y_2 are the resulting children.
Mutation operator: To determine the number of mutations, the desired mutation rate is multiplied by the population size and the result is rounded. The members to be mutated are selected at random from the whole population. In the mutation mechanism, x is the selected member of the population, r is a random variable uniformly distributed on the [0, 1] interval, and y is the resulting mutated member.
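Under the stated assumptions (real-valued chromosomes, a and r drawn uniformly from [0, 1]), the two operators can be sketched as below. The exact mutation formula is not fully specified in the text, so uniform resampling of randomly chosen genes is shown as one plausible reading.

```python
import random

def arithmetic_crossover(x1, x2, a=None):
    """Convex (arithmetic) crossover: y1 = a*x1 + (1-a)*x2, y2 = (1-a)*x1 + a*x2."""
    if a is None:
        a = random.random()  # a in [0, 1]
    y1 = [a * u + (1 - a) * v for u, v in zip(x1, x2)]
    y2 = [(1 - a) * u + a * v for u, v in zip(x1, x2)]
    return y1, y2

def uniform_mutation(x, lo=0.0, hi=1.0, rate=0.6):
    """Resample each gene from U(lo, hi) with the given mutation rate (assumed form)."""
    return [random.uniform(lo, hi) if random.random() < rate else g for g in x]
```

With a = 0.5 the two children coincide at the parents' midpoint, which is why a is normally drawn at random for each crossover.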

MOPSO algorithm
Reyes-Sierra and Coello Coello (2006) introduced the multi-objective particle swarm optimization (MOPSO) algorithm, and research shows that it performs well on multi-objective optimization problems. Like other swarm-based algorithms, it starts from a random population, in which each member is a particle of the swarm. The swarm moves toward the optimal region of the decision space according to the velocity of each particle. The movement vector of each particle at each iteration is influenced by the best position that particle has ever reached (Pbest) and the best position any member of the swarm has ever achieved (Gbest). At each iteration, the swarm is updated and the associated Gbest is extracted, while a Pbest is maintained for each particle individually. After a specified number of iterations, the non-dominated points of the final swarm are reported as the Pareto set of solutions. The members of the Pareto set have no advantage over one another: decreasing the value of one objective function increases the value of the other, and vice versa.
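The particle update driven by Pbest and Gbest can be sketched as follows. The inertia weight w is an assumption of ours, while c1 = 2 and c2 = 1.5 mirror the velocity coefficients tuned later in this study.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=1.5):
    """One velocity/position update for a single particle (sketch)."""
    r1, r2 = random.random(), random.random()
    # Inertia term plus cognitive (Pbest) and social (Gbest) attraction.
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

In the multi-objective variant, the "gbest" passed in is a leader chosen from the repository of non-dominated solutions rather than a single global best.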
As a whole, the steps of this algorithm are summarized as follows: Step 1: Create an initial population.
Step 2: Separate the non-dominated members of the population and store them in the repository.
Step 3: Tabulate (grid) the explored objective space.
Step 4: Each particle chooses a leader from the repository members and makes its move.
Step 5: Update the best personal memory (Pbest) of each particle.
Step 6: Add the non-dominated members of the current population to the repository.
Step 7: Delete dominated members from a repository.
Step 8: If the repository member numbers exceed the specified capacity, remove the extra members and tabulate once again.
Step 9: Return to Step 3 if the termination condition is not fulfilled; otherwise, terminate.

6 Computational results

Generating test problems
To validate the proposed model, five test problems are examined. The problems differ in size, while all parameters are generated in the same way and, to be closer to reality, follow the specified normal distribution functions. The characteristics of the considered problems are given in Table 1.
The values of the parameters are specified in Table 2; the model parameters were generated in the MATLAB software according to the normal distributions presented there.

Comparison criteria
To compare the two aforementioned algorithms, five measures are considered; except for one of them (i.e., computational time), all assess the Pareto front of non-dominated solutions formed by each algorithm.

Central processing unit (CPU) time
Logically, a large computational time for a given maximum number of iterations is undesirable. Thus, in the comparison between the two considered algorithms, the one with the smaller CPU time is preferable.

Number of Pareto solutions in the first frontier
In the comparison between the two aforementioned algorithms, the more non-dominated solutions formed in the first frontier, the better, because the user has more alternatives to choose from.

Maximum spreading index (MSI)
Zitzler et al. (2007) presented this criterion, which measures the length of the frontier formed by the endpoints of the non-dominated Pareto front. The bigger this index, the better.

Spacing metric index (SI)
The criterion presented by Schott (1995) measures the relative distance between successive non-dominated solutions. The distance of the i-th solution is the minimum, over the other solutions in the final frontier, of the sum of the absolute differences in the objective function values between the i-th solution and that solution. The index is the standard deviation of these distance values: when the solutions are evenly scattered, the value of SI is small, so the algorithm whose non-dominated solutions have the smaller spacing is better.

Surface metric index (SM)
The most important criterion is the covering surface. It indicates the ratio of the number of non-dominated Pareto solutions belonging to the related algorithm in the ideal common frontier formed by the considered algorithms. The higher this criterion, and the closer it is to one in comparison with the other algorithms, the more acceptable the algorithm.
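The three Pareto-front indices (MSI, SI, and SM) can be sketched as follows. This is a minimal reading of the definitions above; duplicate points and exact ties are not handled.

```python
def msi(front):
    """Maximum spreading: diagonal length of the front's bounding box."""
    m = len(front[0])
    return sum((max(p[k] for p in front) - min(p[k] for p in front)) ** 2
               for k in range(m)) ** 0.5

def spacing(front):
    """Schott's spacing: sample std-dev of each point's nearest-neighbour
    L1 distance on the front (needs at least two points)."""
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for q in front if q is not p) for p in front]
    mean = sum(d) / len(d)
    return (sum((di - mean) ** 2 for di in d) / (len(d) - 1)) ** 0.5

def surface_metric(front_a, front_b):
    """Share of the merged non-dominated set contributed by algorithm A."""
    def dominated(p, pool):
        return any(all(q[k] <= p[k] for k in range(len(p))) and q != p
                   for q in pool)
    merged = front_a + front_b
    ideal = [p for p in merged if not dominated(p, merged)]
    return sum(1 for p in ideal if p in front_a) / len(ideal)
```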

Tuning structural parameters of NSGA-II & MOPSO algorithms
First of all, the software used to run these two algorithms is MATLAB R2016a, and the computer used has an Intel(R) Core(TM) i5-4200U CPU @ 1.6-2.3 GHz and 3.89 GB of RAM. The performance of the NSGA-II is at its highest when its structural parameters, namely the population size per iteration, the maximum number of iterations, and the mutation and crossover rates, are set to 80, 120, 0.6, and 0.5, respectively. For the MOPSO algorithm, the number of particles per iteration, the maximum number of iterations, the initial velocity coefficient, and the secondary velocity coefficient must be set to 80, 120, 2, and 1.5, respectively. Tables 3 and 4 show the considered levels of these parameters and their obtained optimal values, respectively. As a whole, the obtained results are organized as follows. In the first step, a small instance problem is solved by the two algorithms (i.e., NSGA-II and MOPSO), and the decision variables are extracted and analyzed. In the second step, five different sizes of the proposed problem are generated, with ten instance problems designed for each size. The instance problems of each size are solved by the proposed algorithms and compared using the comparison metrics of multi-objective meta-heuristics (i.e., NPF, CPU time, MSI, SI, and SM). The t-test is used to assess the significance of the differences between the indices obtained at each size. Finally, the best algorithm is identified and selected using the TOPSIS method.

Analysis based on test problem no. 1 in a probabilistic-fuzzy mode
In this section, as a first step, a simple problem is provided and solved by the NSGA-II and MOPSO algorithms to evaluate the results of the proposed model in the probabilistic-fuzzy mode; the decision variables are then extracted and analyzed. Since the bi-objective model yields a set of efficient solutions rather than a single optimum, this study reports the decision-variable values of one of these efficient solutions. Table 5 shows the obtained efficient solutions, including the first and second objective values. Figure 6 shows the Pareto frontier obtained in 120 iterations for both algorithms; the convex form of the Pareto frontiers confirms the conflict between the two objective functions. Tables 6, 7, 8, 9, 10, 11, 12 and 13 show the optimal selected locations in one of the efficient solutions obtained by the two algorithms, in which the cut-off level α of the trapezoidal fuzzy model is 0.5. Note that the results for both algorithms correspond to the first solution from the left-hand side of the Pareto front. These tables exhibit the amount of goods transported between the levels of the supply chain and specify which facilities (producers and distributors) are located and active, as well as how many servers are established at the producers and distributors. For example, Table 7 shows the material flow between the facilities selected by the two meta-heuristic algorithms, namely the flow transferred between suppliers and producers.
In the second step, five different sizes of the proposed problem are generated, with ten instance problems designed for each size. After evaluating the algorithms on an efficient solution of the small instance, NSGA-II and MOPSO are compared on the larger dimensions. For this purpose, ten instance problems of each size are created, and the metrics obtained by each algorithm (i.e., NPF, CPU, MSI, SI, and SM) are compared with the t-test for each size. Finally, the best algorithm for solving the proposed closed-loop supply chain model is chosen using the TOPSIS method.
6.5 Analysis based on test problem no. 1

Table 14 shows the results of the ten instance problems of size 1. In this table, the average results over all efficient solutions obtained for each instance problem are reported. The comparison of the meta-heuristic algorithms over the different indices (i.e., NPF, CPU, MSI, SI, and SM) is depicted in Figs. 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16 in detail.
NPF index: For test problem no. 1, the average NPF over the ten instances is roughly equal for the two algorithms; however, the solution scattering for MOPSO is somewhat larger than for the NSGA-II. A descending trend is reported for both algorithms, but at some points the NSGA-II and MOPSO outperform each other.
CPU time index: For test problem no. 1, the average CPU time over the ten instances is larger for the NSGA-II than for MOPSO, and the solution scattering for the NSGA-II is larger as well. As a whole, both algorithms behave fairly similarly: descending and ascending patterns are reported for both, and at some points MOPSO and the NSGA-II outperform each other.
MSI index: The NSGA-II has a larger average MSI than MOPSO, and the scattering of its solutions is larger as well. All in all, both algorithms behave similarly, with the same descending and ascending patterns, and at some points each outperforms the other.
SI index: The NSGA-II has a larger average SI than MOPSO, and the scattering of its solutions is also tremendously larger. Generally, MOPSO outperforms the NSGA-II on all instances, and at points 5 and 6 by a wide margin.
SM index: The NSGA-II and MOPSO have the same average SM, and the scattering of their solutions is also roughly the same. All in all, both algorithms behave similarly, with the same descending and ascending patterns, and at some points each outperforms the other.
6.6 Analysis based on test problem no. 2

Table 15 illustrates the numerical results obtained by the aforementioned algorithms for the specified indices in detail. The comparison of the meta-heuristic algorithms over the different indices (i.e., NPF, CPU, MSI, SI, and SM) is depicted in Figs. 17, 18, 19, 20, 21, 22, 23, 24, 25 and 26 in detail.
NPF index: For test problem no. 2, the average NPF over the ten instances is reported to be larger for MOPSO, while the solution scattering of MOPSO and the NSGA-II is roughly equal. At all points except point 9, the NPF measure for MOPSO is larger.
CPU time index: For test problem no. 2, the average CPU time over the ten instances is larger for the NSGA-II than for MOPSO, while the solution scattering of the two algorithms is roughly the same. As a whole, the CPU time of MOPSO is better than that of the NSGA-II at all points.
MSI index: The NSGA-II has a larger average MSI than MOPSO, and the scattering of its solutions is larger as well. All in all, at the majority of points, except points 1, 5, and 10, the NSGA-II outperforms the MOPSO algorithm.
SI index: The NSGA-II has a larger average SI than MOPSO, and the scattering of its solutions is also tremendously larger. Generally, MOPSO outperforms the NSGA-II on all instances except points 5 and 10.
SM index: The NSGA-II and MOPSO have the same average SM, while the scattering of MOPSO's solutions is reported to be larger. Descending and ascending patterns are reported for both algorithms; however, at some points each outperforms the other.
6.7 Analysis based on test problem no. 3

NPF index: For test problem no. 3, the average NPF over the ten instances is reported to be larger than for MOPSO, while the solution scattering of MOPSO is tremendously larger. As a whole, the same trend is reported for both algorithms; however, at some points the NSGA-II and MOPSO outperform each other.
CPU time index: For test problem no. 3, the average CPU time over the ten instances is larger for MOPSO than for the NSGA-II, while the solution scattering of the two algorithms is roughly the same. As a whole, the CPU time of the NSGA-II is better than that of MOPSO at all points.
MSI index: The NSGA-II and MOPSO have the same average MSI, and the scattering of their solutions is roughly the same as well. As a whole, the same trend is reported for both algorithms; however, at some points each outperforms the other.
SI index: MOPSO has a larger average SI than the NSGA-II, and the scattering of its solutions is reported to be larger as well. Generally, the two algorithms behave similarly; however, at some points each outperforms the other.
SM index: The NSGA-II has a smaller average SM than MOPSO, while the scattering of the NSGA-II's solutions is reported to be larger. Generally, the two algorithms behave similarly; however, at some points each outperforms the other.
6.8 Analysis based on test problem no. 4

Table 17 illuminates the numerical results obtained by the aforementioned algorithms for the specified indices in detail. The comparison of the meta-heuristic algorithms over the different indices (i.e., NPF, CPU, MSI, SI, and SM) is depicted in Figs. 37, 38, 39, 40, 41, 42, 43, 44, 45 and 46 in detail.
NPF index: For test problem no. 4, the average NPF over the ten instances is reported to be larger for the NSGA-II, and the solution scattering of the NSGA-II is also relatively larger. Generally, the two algorithms behave similarly; however, at some points each outperforms the other.
CPU time index: For test problem no. 4, the average CPU time over the ten instances is larger for MOPSO than for the NSGA-II, while the solution scattering of the two algorithms is fairly the same. As a whole, the CPU time of the NSGA-II is better than that of MOPSO at all points.
MSI index: MOPSO has a relatively larger average MSI than the NSGA-II, while the scattering of the NSGA-II's solutions is somewhat smaller. At some points, each algorithm outperforms the other.
SI index: The NSGA-II has a larger average SI than MOPSO, while the scattering of MOPSO's solutions is reported to be larger. At some points, each algorithm outperforms the other.
SM index: The NSGA-II and MOPSO have relatively the same average SM, while the scattering of MOPSO's solutions is reported to be larger. At some points, each algorithm outperforms the other.
6.9 Analysis based on test problem no. 5

NPF index: For test problem no. 5, the average NPF over the ten instances is relatively the same for both algorithms, and the solution scattering of the NSGA-II is the same as well. Generally, the two algorithms behave similarly; however, at some points each outperforms the other.
CPU time index: For test problem no. 5, the average CPU time over the ten instances is larger for MOPSO than for the NSGA-II, while the solution scattering of the two algorithms is fairly the same. As a whole, the CPU time of the NSGA-II is better than that of MOPSO at all points.
MSI index: MOPSO has a relatively larger average MSI than the NSGA-II, while the scattering of the NSGA-II's solutions is tremendously larger. At some points, each algorithm outperforms the other.
SI index: The averages of the NSGA-II and MOPSO are fairly the same, while the scattering of the NSGA-II's solutions is reported to be larger. Generally, the two algorithms behave similarly; however, at some points each outperforms the other.
SM index: The NSGA-II and MOPSO have fairly the same average SM, while the scattering of the NSGA-II's solutions is reported to be larger. Generally, the two algorithms behave similarly; however, at some points each outperforms the other.

Comparison between two algorithms
The t-test is used to measure the significance of the differences between the indices obtained by the two algorithms. Finally, the best algorithm is identified and selected using the TOPSIS method.

Performing the t-test hypothesis
For the overall evaluation of these algorithms, the significance of the mean differences between these indices is determined by the t-test. Tables 19, 20, 21 and 22 examine the significance of the NPF, CPU, MSI, and SI indices, respectively, using the t-test. If the p-value is less than 0.05 (note that the considered confidence level is 0.95), there is a significant difference between the averages of the proposed algorithms on that particular index.
NPF index: Because the related p-value is larger than 0.05, the null hypothesis is accepted: neither algorithm is superior to the other with respect to the NPF index. Table 19 exhibits the results in detail.
CPU time index: Because the related p-value is less than 0.05, the null hypothesis is rejected; as a result, the NSGA-II has a significantly longer computational time.
As a whole, according to Tables 19, 20, 21 and 22, considering the p-values, there is a significant difference between the proposed meta-heuristic algorithms only in the computational-time averages.
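A minimal sketch of the t statistic behind such comparisons is shown below, assuming a paired test over the ten common instances (the text does not specify whether a paired or an independent-samples test was used; function names are ours). The resulting |t| would be compared with the critical value of the t distribution with n - 1 degrees of freedom at the 0.95 confidence level.

```python
import math
from statistics import mean, stdev

def paired_t(sample_a, sample_b):
    """Paired t statistic for per-instance index values of two algorithms."""
    d = [a - b for a, b in zip(sample_a, sample_b)]  # per-instance differences
    n = len(d)
    # t = mean difference over its standard error.
    return mean(d) / (stdev(d) / math.sqrt(n))
```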

Using the TOPSIS method
Briefly, the TOPSIS method considers a set of criteria and alternatives. In our survey, the alternatives are the NSGA-II and MOPSO algorithms, and the criteria are CPU time, NPF, MSI, and SI. The mechanism of this approach is that, for each criterion, a positive and a negative ideal alternative are considered; the main aim is to be as close as possible to the positive ideal and as far as possible from the negative ideal. The key point is that the interplay of the criteria and alternatives generally does not allow the ideal to occur for any single alternative. Figure 57 shows this mechanism.
However, as a whole, this approach yields acceptable choices; for further details on its mechanism, please refer to Munda (2005). In our survey, after comparing the instance problems of different sizes, it was observed that the MOPSO algorithm performed better on the small sizes (1 and 2) and the NSGA-II on the medium and large sizes (3, 4, and 5). Therefore, the TOPSIS method is used to determine the best and most efficient algorithm over all sample problems. Table 23 shows the results of comparing the indices of the NSGA-II and MOPSO; it is concluded that the NSGA-II is the most efficient algorithm for the proposed model.
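The described mechanism can be sketched as follows: a standard TOPSIS closeness computation with vector normalization, where the weights and the benefit/cost orientation of each criterion are inputs (the weights used in Table 23 are not stated, so they are assumptions here). Alternatives with identical scores on every criterion are not handled.

```python
def topsis(matrix, weights, benefit):
    """TOPSIS closeness scores. matrix[i][j]: alternative i on criterion j;
    benefit[j] is True for the-bigger-the-better criteria (e.g., NPF, MSI)
    and False for cost criteria (e.g., CPU time, SI)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize and weight the decision matrix.
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Positive and negative ideal alternatives per criterion.
    pos = [max(v[i][j] for i in range(m)) if benefit[j]
           else min(v[i][j] for i in range(m)) for j in range(n)]
    neg = [min(v[i][j] for i in range(m)) if benefit[j]
           else max(v[i][j] for i in range(m)) for j in range(n)]
    # Euclidean distances to both ideals; closeness in [0, 1], larger is better.
    d_pos = [sum((v[i][j] - pos[j]) ** 2 for j in range(n)) ** 0.5 for i in range(m)]
    d_neg = [sum((v[i][j] - neg[j]) ** 2 for j in range(n)) ** 0.5 for i in range(m)]
    return [dn / (dp + dn) for dp, dn in zip(d_pos, d_neg)]
```

The alternative with the larger closeness score is ranked best, which is how a single winner is extracted from the per-size comparisons.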

Managerial and decision-support insights
In this research, the Jackson model of queuing theory is applied to the supply chain problem for the first time. Because of the problem's complexity, two multi-objective meta-heuristic algorithms are used. The computational results confirm that the two objective functions conflict (i.e., the upward convex form of the Pareto front). For the first (smallest) test problem, the computational results confirm that all mathematical constraints are satisfied.
Due to the numerous indices, for the other test problems only the Pareto-front indices defined above are reported for the decision-makers. Five test problems are investigated, with ten replicates solved for each; both algorithms report reasonable solutions in an appropriate time. For each Pareto-front index, trend figures and box plots (statistical analyses) are drawn and interpreted separately. Finally, to reach a decisive conclusion, the TOPSIS method shows that the NSGA-II outperforms MOPSO with respect to the five indices. This method can help managers utilize resources effectively. Also, from the hypothesis tests, decision-makers and managers can conclude that the MOPSO algorithm is more efficient for small-sized problems, whereas the NSGA-II algorithm is more efficient for medium- and large-sized problems. For further investigation, actual data taken from a case study can be used, which would reinforce our contribution.

Conclusion and further research remarks
The study was a follow-up of research conducted on closed-loop supply chains (CLSC) and facility location. By considering Jackson queuing models in the manufacturing centers and a waiting queue in the distribution centers, it aimed to design a multi-product, multi-level CLSC network with two objectives, comprising supplier, producer, distributor/collector, reproducer, and disposal centers, as well as uncertain parameters (e.g., shipping cost and amount of demand). First, a mixed-integer nonlinear model of the CLSC network design was formulated, and then Jackson's queuing models were added. Finally, after completing the model, a trapezoidal fuzzy membership function was considered for the mentioned fuzzy parameters.
In the analysis and evaluation section, the proposed model was solved using the proposed meta-heuristic algorithms, namely NSGA-II and MOPSO. First, a numerical example was developed and analyzed using the two algorithms, and the solutions they obtained (i.e., the first solution on the left-hand side of the Pareto front) were reported in detail in the tables. Next, the model was solved at larger sizes: five different sizes were designed, ten instance problems were generated for each size, and all were solved using the NSGA-II and MOPSO algorithms. Five Pareto-front evaluation indices were used to compare these algorithms. These indices were first analyzed in general, and numerous trend figures and statistical box plots were drawn for each index to show the impact of problem size on the solutions. According to the hypothesis tests, the MOPSO algorithm was more efficient at small sizes and the NSGA-II algorithm at medium and large sizes. Finally, to select the better of the two algorithms, the TOPSIS method was used, which resulted in selecting the NSGA-II algorithm over MOPSO for solving the bi-objective CLSC. Based on this research, the following suggestions are offered to improve the proposed model and solution methods in future research:
• Providing other priority-based encodings.
• Remodeling so that the boundaries of all indices are predefined and all relations are linear, so that the model can be solved by commercial software (e.g., GAMS and CPLEX) in a reasonable time.
• Using other algorithms, including heuristic, meta-heuristic, and exact methods, to solve the related model.
• Using other queuing systems in the modeling.
• Adding a green objective function to the considered model, given the importance of this issue in recent years.
• Integrating the bi-objective model into a single-objective model and solving it by multi-objective decision-making methods (e.g., Lp-metric, goal attainment, and goal programming).

Declarations
Conflict of interest The authors declare that there is no potential conflict of interest regarding the publication of this paper.
Ethical approval The authors certify that they have no affiliation with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this paper, and that no research involving human participants or animals was performed by any of the authors.