A Neurodynamic Approach for Solving Robust Linear Programming under the Polyhedral Uncertainty Set

This paper converts robust linear programming under a polyhedral uncertainty set into standard linear programming. After eliminating the parameter uncertainty, we design a projection neural network to solve the resulting problem. The equilibrium point of the neural network model is theoretically proved to be stable in the Lyapunov sense and globally convergent to the optimal solution of the robust linear programming. Two numerical examples are provided to illustrate the validity and performance of the proposed approach.


Introduction
Uncertainty, which typically arises from observed values, models and parameters, is prevalent in real-world applications. Although uncertainty is unavoidable in optimization problems, it can be handled through robust optimization, which has been widely used in power systems [1], logistics [2], economics [3], etc. In 1973, Soyster [4] first utilized robust optimization to handle uncertainty in linear programming. The main idea of robust optimization is to construct a feasible set for the uncertain parameters and then optimize against the worst-case scenario over that set. However, the optimality of the objective function was sacrificed to guarantee the robustness of the solution, and the result was too conservative. Then, in 1995, Mulvey et al. [5] developed a less conservative uncertainty set (the box uncertainty set) by balancing the robustness of the objective function against the optimal value. Each type of uncertainty set has distinct parameter characteristics, which are stated in terms of different norms. The box uncertainty set, commonly expressed as an interval set, is the most basic uncertainty set. Since robust optimization is a worst-case approach, for some models every uncertain parameter may be pushed to the upper or lower bound of the interval set; in practice, the chances of this happening are extremely low. In 1998, Nemirovski and Ben-Tal introduced ellipsoidal uncertainty sets and polyhedral uncertainty sets [6], respectively, and pointed out that ellipsoidal uncertainty sets reflect the relationship among the parameters to a certain extent.
However, when the problem is relatively simple (with few or even a single uncertain parameter), the ellipsoidal uncertainty set makes the problem more complicated [7]. On the contrary, although the polyhedral uncertainty set is poorer at describing the correlation between uncertain parameters, it has a linear structure and the uncertainty of the parameters is easier to control [8]. Thus, it is more widely used than box and ellipsoidal uncertainty sets in engineering problems. In 2009, Nemirovski and Ben-Tal [9] presented the concept of robust linear programming and investigated data uncertainty in linear programming, the robust counterpart of robust linear programming, and its feasibility. In robust linear programming, when the uncertainty set is relatively broad, the optimal solution obtained will be too conservative, resulting in a mismatch with the actual situation. In real applications, when the problem size increases significantly, the difficulty of solving the robust optimization also grows greatly, so traditional algorithms become much harder to apply. Thus, we resort to intelligent optimization methods such as neurodynamic optimization.
The essential aspect of neurodynamic optimization is the design of a neural network model whose equilibrium point corresponds to the optimal solution of the optimization problem [10]. Given any initial state of the system, we can obtain the optimal solution to the problem as the system gradually evolves to a stable point. The emergence of neurodynamic optimization can be traced back to 1982, when Hopfield [11] proposed a recurrent neural network model and introduced an energy-like function for it. Hopfield's achievement laid the groundwork for the study of neurodynamic optimization. In 1986, a recurrent neural network was employed to handle linear programming by Hopfield and Tank [12]. Kennedy and Chua [13] introduced a neural network model based on the penalty function approach to address nonlinear programming in 1988. Since then, the neurodynamic approach has been applied to various optimization problems, such as nonlinear programming [14], linear programming [15], multi-objective programming [16], quadratic programming [17], etc. Interval-valued optimization is a simple model of uncertain optimization. In 2022, Li et al. [18] investigated interval-valued optimization: by using the weighted-sum scalarization method, the interval-valued optimization is transformed into an ordinary optimization problem, and a one-layer recurrent neural network is designed to address it. For more literature on the state of the art of neurodynamic optimization, see [19-22].
To the best of our knowledge, there have been few works on neurodynamic approaches to robust optimization. The main difficulty is how to transform and substitute the uncertainty set into the original problem so as to eliminate the parameter uncertainty according to its characteristics. In this paper we investigate robust linear programming under the polyhedral uncertainty set. Existing work on robust optimization under the polyhedral uncertainty set usually represents the polyhedral set via the ℓ1-norm. However, since the uncertain parameters cannot be eliminated, this representation is not available when utilizing the neurodynamic approach to address robust optimization. In this paper we propose a neurodynamic framework for solving robust optimization. By eliminating the uncertain parameters, we design a projection neural network to solve the problem. The rest of this paper is organized as follows. Section 2 introduces some preliminaries used in this paper. Section 3 introduces the robust counterpart of robust linear programming under the polyhedral uncertainty set; we then design a neural network model according to the optimality condition and the projection theorem. In Sect. 4, the stability of the neural network model is proved. In Sect. 5, we provide two numerical simulations to demonstrate the feasibility of the results obtained in Sect. 4. Finally, the research in this paper is summarized in Sect. 6.

Preliminaries
This section lays out the mathematical background for the proposed method.
We first introduce the Karush-Kuhn-Tucker (KKT) conditions for the following convex optimization problem:

    min  f(x)
    s.t. g_i(x) ≤ 0,  i = 1, ..., m,                                    (1)
         h_j(x) = 0,  j = 1, ..., p,

where f(x), g_i(x) and h_j(x) are convex and differentiable functions.
According to [23], x* is an optimal solution of (1) iff there exist multipliers y ∈ R^m with y ≥ 0 and z ∈ R^p such that

    ∇f(x*) + Σ_{i=1}^{m} y_i ∇g_i(x*) + Σ_{j=1}^{p} z_j ∇h_j(x*) = 0,
    y_i g_i(x*) = 0,  g_i(x*) ≤ 0,  i = 1, ..., m,                      (2)
    h_j(x*) = 0,  j = 1, ..., p,

holds. Here, we call (x*, y, z) a Karush-Kuhn-Tucker (KKT) point of (1). As is well known, the solution of a mathematical program corresponds to the solution of a related variational inequality. Therefore, x* is a solution to (1) iff x* is a solution to the following variational inequality VI(F, Ω_r): find x ∈ Ω_r such that

    F(x)^T (v − x) ≥ 0,  ∀ v ∈ Ω_r,                                     (3)

where the mapping F and the closed convex set Ω_r are determined by the objective and constraints of (1). The lemmas and definitions relevant to variational inequalities are presented as follows. Here we consider the following projected dynamical system [24]:

    dx/dt = γ { P_{Ω_r}(x − η F(x)) − x },                              (4)

where γ and η are positive constants, and Ω_r is defined as above.
Lemma 1 [24] Let Ω_0 be the equilibrium point set of system (4). x* is a solution to VI(F, Ω_r) iff the following equation holds:

    P_{Ω_r}(x* − αF(x*)) = x*,                                          (5)

where α is a positive constant and P_{Ω_r}: R^n → Ω_r is the projection operator defined by P_{Ω_r}(x) = arg min_{ω ∈ Ω_r} ||x − ω||. In particular, if the projection operator is onto a closed convex set, it satisfies

    (x − P_{Ω_r}(x))^T (P_{Ω_r}(x) − y) ≥ 0,  ∀ y ∈ Ω_r.                (6)

Lemma 2 Ω_0, the equilibrium point set of system (4), is exactly the solution set of (5). According to the Brouwer fixed-point theorem, Ω_0 ≠ ∅.
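As a concrete illustration of the projection operator in Lemma 1, the following Python listing (not part of the original derivation) checks property (6) and the fixed-point characterization (5) numerically on a box set; the box [0, 1]^2, the mapping F(x) = x − a, and the step size α = 0.5 are illustrative assumptions only.

    import numpy as np

    # Projection onto the box [0, 1]^2, a simple instance of a closed convex set Omega_r.
    lo, hi = np.zeros(2), np.ones(2)
    def proj(v):
        return np.clip(v, lo, hi)

    rng = np.random.default_rng(0)

    # Check inequality (6): (x - P(x))^T (P(x) - y) >= 0 for every y in the set.
    for _ in range(1000):
        x = rng.uniform(-2.0, 3.0, size=2)   # arbitrary point of R^2
        y = rng.uniform(0.0, 1.0, size=2)    # arbitrary point of the box
        assert (x - proj(x)) @ (proj(x) - y) >= -1e-12

    # Check the fixed-point characterization (5) on a toy VI with
    # F(x) = x - a (the gradient of 0.5*||x - a||^2); its solution is x* = P(a).
    a = np.array([2.0, -0.5])
    x_star = proj(a)
    alpha = 0.5
    print(np.allclose(proj(x_star - alpha * (x_star - a)), x_star))   # True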
According to Lemma 2, a solution of VI(F, Ω_r), characterized by (5), is exactly an equilibrium point of system (4). Referring to (6), we can also obtain the following lemma.
Lemma 3 [25] Let the projection operator P_{Ω_r} be defined as in Lemma 1. Then the following inequality holds for any x, y ∈ R^n:

    || P_{Ω_r}(x) − P_{Ω_r}(y) || ≤ || x − y ||.
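A quick numerical check of Lemma 3 can be made on the same kind of box set; the set and the sampling distribution below are arbitrary choices used only for illustration.

    import numpy as np

    # Non-expansiveness of the box projection (Lemma 3): ||P(x) - P(y)|| <= ||x - y||.
    lo, hi = np.zeros(3), np.ones(3)
    proj = lambda v: np.clip(v, lo, hi)

    rng = np.random.default_rng(1)
    for _ in range(1000):
        x, y = rng.normal(size=3), rng.normal(size=3)
        assert np.linalg.norm(proj(x) - proj(y)) <= np.linalg.norm(x - y) + 1e-12
    print("non-expansiveness verified on random samples")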
Lemma 4 [26] Assume that F is locally Lipschitz continuous in a domain of R^n. If the initial point x_0 ∉ Ω_r, the solution of the system approaches the feasible set Ω_r; furthermore, if x_0 ∈ Ω_r, the trajectory x(t) remains in Ω_r.
According to Lemma 4 and the boundedness of Ω_r, every solution of the neural network is bounded.

Definition 1 Assume that x(t) is a solution to the system ẋ = f(x). The equilibrium point x* is said to be stable in the Lyapunov sense if, for any ε > 0, there exists σ > 0 such that ||x(t_0) − x*|| < σ implies ||x(t) − x*|| < ε for all t ≥ t_0. If, in addition, there exists σ > 0 such that ||x(t_0) − x*|| < σ implies lim_{t→∞} x(t) = x*, the system is said to be asymptotically stable at x*.
Definition 2 If all trajectories of the dynamical system converge to the equilibrium point, the system is said to be globally convergent.

Main Results
Robust linear programming under the polyhedral uncertainty set is described in this section. To deal with the robust linear programming, we first derive the optimality conditions for the robust counterpart of the robust linear programming. Next, we design a projection neural network to solve the robust linear programming.
We are concerned with the following linear programming (LP):

    min  c^T x
    s.t. Ã x ≤ b.

The standard form of the polyhedral uncertainty set is usually presented in terms of the ℓ1-norm [28], where ||u||_1 is defined by ||u||_1 = Σ_{i=1}^{n} |u_i|. Since the parameter matrix Ã in robust linear programming is uncertain, we introduce the uncertain parameter matrix U to describe it, writing Ã = A + U, and investigate the following linear programming:

    min  c^T x
    s.t. (a_i + u_i)^T x ≤ b_i,  i = 1, ..., m,                         (9)

where c, x ∈ R^n, b ∈ R^m, a_i is the i-th row of A, and u_i is the i-th row of U.
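To see why robustness matters in (9), the following sketch (with hypothetical data, not taken from the paper) solves a nominal LP with SciPy and then shows that a small ℓ1-bounded perturbation of the constraint row can make the nominal optimum infeasible.

    import numpy as np
    from scipy.optimize import linprog

    # Nominal LP data (illustrative only): min c^T x  s.t.  a^T x <= b, 0 <= x <= 3.
    c = np.array([-1.0, -1.0])
    a = np.array([1.0, 2.0])
    b = 4.0

    res = linprog(c, A_ub=a.reshape(1, -1), b_ub=[b], bounds=[(0, 3), (0, 3)])
    x_nom = res.x                        # nominal optimum, approximately (3.0, 0.5)

    # Perturb the constraint row within an l1-ball of radius 0.5.
    u = np.array([0.3, 0.2])             # ||u||_1 = 0.5
    print("nominal value  :", a @ x_nom, "<=", b)
    print("perturbed value:", (a + u) @ x_nom, "<=", b, "?")   # 5.0 > 4: infeasible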
Next we investigate the case where the uncertain parameter matrix U is perturbed within the polyhedral uncertainty set. For a matrix M ∈ R^{m×n} and a vector g ∈ R^m, we have the robust inequality [28]

    (a + u)^T x ≤ b,  ∀ u ∈ { u : M u ≤ g }.

Under this scenario, duality plays a significant role in providing a tractable representation of the above semi-infinite inequality. Indeed, by inserting the multiplier λ ≥ 0 into the Lagrangian of the maximization problem over u^T x, we have

    sup_{u: Mu ≤ g} u^T x ≤ inf_{λ ≥ 0} sup_{u} { u^T x + λ^T (g − M u) }.

In particular, strong duality holds, since all the inequality constraints are linear, which is the case here; hence the supremum on the left equals the value of the dual linear programming in λ ≥ 0. As a result, the robust linear inequality with polyhedral uncertainty is equivalent to a finite system of linear inequalities in x and λ ≥ 0. Therefore, the robust counterpart of (9) under the polyhedral uncertainty set, denoted by (10), is obtained by replacing each semi-infinite inequality with this finite system. By introducing the polyhedral uncertainty set, we eliminate the uncertain parameter u and ensure the robustness of the linear programming without affecting the feasibility of the problem. So, in order to obtain the optimality conditions for (10), we introduce Lagrange multipliers 0 ≤ y ∈ R and 0 ≤ z ∈ R^n to obtain the Karush-Kuhn-Tucker conditions for (10).
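The strong-duality step above can be checked numerically. The sketch below uses hypothetical data (a box written in the form Mu ≤ g and an arbitrary decision vector x) and computes the inner supremum both directly and via its dual with scipy.optimize.linprog; in the robust counterpart it is this dual that replaces the supremum, yielding finitely many linear constraints.

    import numpy as np
    from scipy.optimize import linprog

    # Polyhedral uncertainty set {u : M u <= g}; here a box in R^2 (illustrative data).
    M = np.array([[ 1.0,  0.0],
                  [-1.0,  0.0],
                  [ 0.0,  1.0],
                  [ 0.0, -1.0]])
    g = np.array([1.0, 1.0, 1.0, 1.0])
    x = np.array([2.0, -3.0])            # a fixed decision vector

    # Primal: sup { u^T x : M u <= g }, solved as min of -x^T u.
    primal = linprog(-x, A_ub=M, b_ub=g, bounds=[(None, None)] * 2)
    # Dual:   min { g^T lam : M^T lam = x, lam >= 0 }.
    dual = linprog(g, A_eq=M.T, b_eq=x, bounds=[(0, None)] * 4)

    print(-primal.fun, dual.fun)         # both equal 5.0: strong duality holds

Because the two values coincide, the semi-infinite constraint "for all u with Mu ≤ g" can be replaced by ordinary linear constraints in (x, λ), which is exactly the reduction used to form the robust counterpart (10).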
x* is a solution to (10) iff there exist 0 ≤ y* ∈ R and 0 ≤ z* ∈ R^n such that (x*, y*, z*) satisfies the corresponding KKT conditions. Referring to the projection theorem [29], we transform the inequality conditions into an equivalent system of projection equations, denoted (11). Based on the equality conditions (11), a projection neural network is designed to solve problem (10); its dynamical equation, denoted (12), is built from the projection equations, where γ > 0 and the operator (·)+ = max{0, ·} is realized by a piecewise linear activation function, so that equations (11) are readily implemented in a projection neural network. In the subsequent discussion, we set γ = 1.
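Since the explicit data of (10)-(12) are not reproduced above, the following Python sketch only illustrates the general projection-neural-network mechanism on a hypothetical box-constrained strongly convex quadratic; Q, c, the box [0, 2]^2, and γ = 1 are assumptions, and np.clip plays the role of the projection and of the operator (·)+.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy strongly convex problem: min 0.5*x^T Q x + c^T x over the box [0, 2]^2.
    # (Illustrative stand-in for (12); the paper's network is built from the KKT
    #  system of the robust counterpart (10), which is not reproduced here.)
    Q = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    c = np.array([-4.0, -3.0])
    lo, hi = 0.0, 2.0
    gamma = 1.0

    def rhs(t, x):
        # Projection neural network: dx/dt = gamma * (P_Omega(x - grad f(x)) - x).
        return gamma * (np.clip(x - (Q @ x + c), lo, hi) - x)

    sol = solve_ivp(rhs, (0.0, 30.0), np.array([2.0, 0.0]), rtol=1e-8, atol=1e-10)
    print(sol.y[:, -1])                  # approaches the optimum (1.0, 1.0)

Loosely speaking, the same structure carries over to (12): the state is the stacked vector ψ = (x, y, z), the mapping is built from the KKT system of (10), and the projection enforces the sign constraints on the multipliers.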

Remark 1 In [18], Li et al. addressed interval-valued optimization by using a one-layer neural network. Interval-valued optimization is also a mathematical programming model for uncertain optimization, but it is much simpler than robust optimization. In interval-valued optimization, the parameters vary between the upper and lower bounds of a closed interval, while in the robust optimization investigated in this paper the parameters are taken, more accurately, in the polyhedral uncertainty set and have an additional dimension. Compared with the neural network presented in [18], the neural network (12) presented in this paper is more suitable for practical applications and can prevent the solution of the original problem from being overly conservative.

Stability Analysis
The global stability of system (12) is proved in this section. First we provide some useful definitions and lemmas.
Lemma 5 [29] x* is the solution to linear programming (10) iff there exist y* ∈ R and z* ∈ R^n such that (x*, y*, z*) is an equilibrium point of system (12).

Definition 3
For any given initial point, the projection neural network is globally convergent iff the trajectory of system (12) converges to the equilibrium point.
Lemma 6 For any initial point ψ(t_0) = (x(t_0), y(t_0), z(t_0)) with x(t_0) ∈ Ω and y(t_0) ≥ 0, system (12) has a unique solution ψ(t) on [t_0, +∞), which is bounded and remains in the feasible region.

Proof Referring to [30], the proposed system (12) has a unique solution ψ(t) on (t_0, +∞) by the local existence and uniqueness theory of ordinary differential equations (ODEs). It is also shown that ψ(t) is bounded, and the local existence of the solution to system (12) can easily be extended to global existence.
The dynamical system (12) can be written componentwise in terms of the projection operator. Since the y-component of the right-hand side involves the operator (·)+ and y(t_0) ≥ 0, we get y(t) ≥ 0 for all t ≥ t_0. Moreover, expressing the first equation by the Integral Mean Value Theorem and noting that the projection maps into Ω and x(t_0) ∈ Ω, we get x(t) ∈ Ω. Similarly, the remaining components stay in their feasible regions. Next we will show the key result on the convergence of the projection neural network.
Theorem 1 Assume that f(x): R^n → R is a continuously differentiable and convex function. Then system (12) is Lyapunov stable and globally convergent to the optimal solution to problem (10).
Proof Consider the following Lyapunov function, where ψ* = (x*, y*, z*) is the equilibrium point of system (12) and According to the result in [31] and Lemma 1, we obtain the following inequality, According to (14), Therefore, we have that According to [32], we get that Then and From the above inequalities we get that Then we conclude that and Integrating (16), we obtain that According to (17), we obtain that Note the right-hand side of inequality (18) where ψ̂ = ψ* + s(ψ − ψ*), so we obtain where ψ̂ = (x̂, ŷ, ẑ), then J(ψ̂)^T (ψ − ψ*) ≥ 0. According to (15), we get that Therefore, we obtain that the projection neural network converges to x*, indicating that it is globally stable.
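As a purely numerical companion to Theorem 1 (it reuses the toy box-constrained quadratic from the earlier sketch, not the paper's Lyapunov function or problem (10)), the following listing integrates the projection network from several initial points and prints the distance to the known minimizer, illustrating Lyapunov stability and global convergence on that hypothetical instance.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy data from the earlier sketch (illustrative only, not problem (10)).
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    c = np.array([-4.0, -3.0])
    x_star = np.array([1.0, 1.0])                    # known minimizer of the toy problem

    rhs = lambda t, x: np.clip(x - (Q @ x + c), 0.0, 2.0) - x

    for x0 in ([2.0, 0.0], [0.0, 2.0], [2.0, 2.0]):
        sol = solve_ivp(rhs, (0.0, 30.0), np.array(x0), t_eval=[0.0, 5.0, 15.0, 30.0])
        dist = np.linalg.norm(sol.y - x_star[:, None], axis=0)
        print(x0, np.round(dist, 6))                 # distances shrink toward 0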

Simulation Results
Some numerical examples are presented in this section to show the performance and effectiveness of system (12) for robust linear programming.
Example 1 By introducing the polyhedral uncertainty set and duality, we eliminate the uncertainty parameters u_1, u_2 and thus the effect of uncertainty on the model. We obtain the robust counterpart with the additional constraint x_3 ≥ 0, where x_3 is the Lagrange multiplier, g = 6, and M = (0.1, 0.3)^T. Figure 1 demonstrates that the outputs of the neural network converge to the unique optimal solution x* = (−5, −5, 0)^T from any initial point x_0.
Example 2 By introducing the polyhedral uncertainty set and duality, we eliminate the uncertainty parameters u_1, u_2, u_3 and thus the effect of uncertainty on the model. We obtain the robust counterpart with objective min f(x) = f(x_1, x_2) = 0.0043x_1 + 0.0037x_2 + ··· . Figure 2 shows that the outputs of the neural network converge to the unique optimal solution x* = (−7, −7, −7, 0, 0)^T from any initial point x_0.

Remark 2
In Example 1, where n = 2 and λ = 1, the number of equations in the Matlab code is 2 * 1 + 1 + 1 + 2 * 1 = 6. This is the simplest case. In Example 2, however, n = 3 and λ = 2, and each group gains an extra variable, so the number of equations in the Matlab code increases to 3 * 1 + 2 * 1 + 1 + 3 * 1 = 9. Based on these two examples, we can conclude that the number of equations in the numerical simulation is 2n + λ + 1. That means that an increase in the number of variables forces the neural network to satisfy more constraints in the stability conditions, thus increasing the computational complexity of the problem. As a result, when the problem size increases, the difficulty of solving the robust optimization also increases considerably.
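A minimal arithmetic check of the counting rule in Remark 2 (here λ is taken, as the two examples suggest, to be the number of dual variables introduced by the reformulation); it reproduces the counts 6 and 9 reported above.

    def eq_count(n, lam):
        # Number of equations in the simulation: 2n + lambda + 1 (Remark 2).
        return 2 * n + lam + 1

    print(eq_count(2, 1), eq_count(3, 2))   # 6 9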

Conclusion
In this paper, by eliminating the parameter uncertainty, a projection neural network is proposed for solving robust linear programming under the polyhedral uncertainty set. The proposed neural network model is proved to be globally convergent to the optimal solution of the robust linear programming. The present work focuses on robust optimization with a single uncertain parameter under the polyhedral uncertainty set. In future research, we will try to investigate robust optimization with multiple uncertain parameters and robust optimization under different uncertainty sets.

Fig. 1 Transient behaviors of the state variables of neural network (12) in Example 1

Fig. 2 Transient behaviors of the state variables of neural network (12) in Example 2