A New Sequential Optimality Condition for Cardinality-Constrained Optimization Problems and Its Application

In this paper, we consider cardinality-constrained optimization problems and propose a new sequential optimality condition for their continuous relaxation reformulation, which has recently become popular. The new condition is stronger than the existing ones and is still a first-order necessary condition for cardinality-constrained problems without any additional assumptions. Meanwhile, we provide a problem-tailored, weaker constraint qualification, which guarantees that points satisfying the new sequential condition are Mordukhovich-type stationary points. On the other hand, we improve the theoretical results for the augmented Lagrangian algorithm. Under the same conditions as in existing results, we prove that any feasible accumulation point of the iterative sequence generated by the algorithm satisfies the new sequential optimality condition. Furthermore, the algorithm converges to a Mordukhovich-type (essentially strong) stationary point if the problem-tailored constraint qualification is satisfied.


Introduction
In recent years, cardinality-constrained optimization problems (CCOP) have attracted great attention due to their wide applications in portfolio selection [11,10,22], compressed sensing [15], statistical regression [24,16], and other fields, and a large number of scholars have tried to solve these problems from different perspectives. According to whether a model transformation is carried out, the existing methods are mainly divided into direct and indirect methods. Since CCOP is non-convex and discontinuous, solving it directly is extremely difficult. Therefore, this paper mainly focuses on indirect methods, among which [14] presents a relaxed reformulation with orthogonality-type constraints by introducing an auxiliary variable y.
The paper [14] studied the relationship between the relaxation problem and CCOP, and proved that the two are equivalent in terms of global solutions and feasibility. Compared with CCOP, the relaxation problem has better structural properties, such as continuity and smoothness, which gives us more tools to deal with the problem. However, the relaxation problem is an optimization problem with orthogonality-type constraints, which means it is highly non-convex and difficult to solve. Because of the similarity between the relaxation problem and mathematical programs with complementarity constraints (MPCC), a natural idea is to directly use MPCC's rich theory and numerical methods to solve the problem. However, this idea is often not feasible. For example, most MPCC constraint qualifications (CQs) cannot be directly applied to the relaxation problem (such as MPCC-LICQ). Even when they can be applied, they often lead to stronger conclusions than in the MPCC setting; Remark 5.7 of [25] summarizes the differences between the two in detail. This means that we cannot simply treat the relaxation problem as a special case of MPCC, but should develop problem-tailored theories and numerical algorithms. In recent years, as a large number of scholars have continued to pay attention to this model, some results have been achieved.
With the help of the tightened nonlinear program of CCOP, denoted by TNLP(x*), Červinka et al. extended the classic constraint qualifications to CCOP in [25], proposed some CCOP-tailored constraint qualifications (CC-CQ), and discussed the relationships between them. In addition, Kanzow et al. [18] adapted the quasi-normality CQ in [9] and obtained a form corresponding to CCOP, and [19] proposed a cone-continuity constraint qualification. At the same time, [14] defines first-order stationarity concepts for the relaxation problem, called CC-Strong stationarity (CC-S-stationarity) and CC-Mordukhovich stationarity (CC-M-stationarity), where CC-S-stationarity is equivalent to the Karush-Kuhn-Tucker (KKT) conditions of the relaxation problem, and CC-M-stationarity is equivalent to the KKT conditions of TNLP(x*); [21] provides a weak-type stationarity. It is worth mentioning that, unlike CC-S-stationarity and weak-type stationarity, CC-M-stationarity involves only the original variable x, and [18] proves that CC-S-stationarity and CC-M-stationarity are equivalent in the original variable space. Consequently, this paper will focus on CC-M-stationarity.
Because of the similarity between the relaxation problem and MPCC, some researchers have tried to apply classic MPCC algorithms to solve the relaxation problem. [14] and [13] respectively applied two classic MPCC regularization methods to the relaxation problem, and both obtained better convergence results than for general MPCC. However, the regularization strategy actually relaxes the relaxation problem further into a sequence of regularized subproblems and obtains a solution of the relaxation problem by solving them. Can the relaxation problem be solved directly, without further relaxation? This is an issue worthy of attention, and in this paper we give an affirmative answer. The main contributions of this paper are as follows:
• We propose a new sequential optimality condition: CC-PAM-stationarity.
In recent years, the application of sequential optimality conditions to stopping criteria and unified convergence analyses of algorithms has received great attention. In this area, several sequential optimality conditions have been proposed for nonlinear programming (NLP) [4,6,17,3], where [3] gives the relationships between them. However, there are still very few such results for CCOP. [21] establishes a sequential optimality condition, called CC-approximate weak stationarity (CC-AW-stationarity), but this condition is based on the (x, y) space. Therefore, Kanzow et al. [19] proposed CC-approximate Mordukhovich stationarity (CC-AM-stationarity), which involves only x, and proved that it is equivalent to CC-AW-stationarity. However, for some problems the CC-AM-stationary points are numerous, and these points are often far from the optimal solutions (e.g. Example 3.1). In order to obtain fewer candidate points, we propose CC-PAM-stationarity, which is strictly stronger than CC-AM-stationarity, and we show that it is a necessary condition for CCOP without any assumptions.
• We define a new problem-tailored constraint qualification, called CC-PAM-regularity, which is weaker than the CC-AM-regularity proposed in [19]. We prove that any CC-M-stationary point is CC-PAM-stationary and, conversely, that a CC-PAM-stationary point is CC-M-stationary if the CC-PAM-regularity condition is satisfied. In other words, the CC-PAM-regularity condition is a CC-CQ. Borrowing the terminology of [12], such a constraint qualification is called a strict constraint qualification (SCQ). Furthermore, we show that the CC-PAM-regularity condition is the weakest SCQ relative to CC-PAM-stationarity.
• We apply CC-PAM-stationarity to the safeguarded augmented Lagrangian method and further improve its convergence theory.
Different from the regularization methods, [18] and [19] directly apply the safeguarded augmented Lagrangian method for general NLP to the relaxation problem, and [18] uses the corresponding solver ALGENCAN [12,2,1] to solve portfolio problems, verifying the advantages of the augmented Lagrangian algorithm over the regularization methods. In addition, the two regularization methods above both require exact KKT points of their subproblems, while the safeguarded augmented Lagrangian method only requires the subproblems to be solved inexactly. Kanzow et al. [19] show that any feasible limit point of the safeguarded augmented Lagrangian method is CC-AM-stationary. We prove that under mild conditions such as semialgebraicity (or the same conditions as in [19]), these points are CC-PAM-stationary, which is strictly better than CC-AM-stationary. If the CC-PAM-regularity condition additionally holds, they are CC-M-stationary points.
The paper is organized as follows: we give some basic definitions and preliminary results in Sect. 2, propose a new sequential optimality condition in Sect. 3, and define a new problem-tailored constraint qualification in Sect. 4. The convergence of the safeguarded augmented Lagrangian method is discussed in Sect. 5, and Sect. 6 gives a brief summary.

Preliminaries
In this paper, we consider the optimization problem

    min_x f(x)  s.t.  g(x) ≤ 0,  h(x) = 0,  ‖x‖₀ ≤ κ,    (1)

where κ < n is an integer, f : R^n → R, g : R^n → R^m, and h : R^n → R^p are continuously differentiable, and ‖x‖₀, the number of nonzero components of x, is also called the cardinality of x. Thus, problem (1) is called a cardinality-constrained optimization problem (CCOP). Let x* ∈ R^n, and write I_0(x*) := {i : x*_i = 0} and I_±(x*) := {i : x*_i ≠ 0}. The tightened NLP problem of CCOP, TNLP(x*), is defined as

    min_x f(x)  s.t.  g(x) ≤ 0,  h(x) = 0,  x_i = 0 (i ∈ I_0(x*)),    (2)

and the relaxation problem of CCOP is defined as

    min_{x,y} f(x)  s.t.  g(x) ≤ 0,  h(x) = 0,  n − κ ≤ e^T y,  x_i y_i = 0 (i = 1, ..., n),  y ≤ e,    (3)

where e := (1, ..., 1)^T. Note that problem (3) has one non-negativity constraint (y ≥ 0) fewer than the form in [14]. The literature [18] shows that this change does not affect the original conclusions and leads to a larger feasible set. In the introduction we mentioned the relationship between CCOP and problem (3); below we give the precise statements.
Proposition 2.1 [14] Let x ∈ R^n. Then the following statements hold.
- x is a feasible point (or global minimizer) of CCOP if and only if there exists y ∈ R^n such that (x, y) is a feasible point (or global minimizer) of problem (3).
- If x is a local minimizer of CCOP, then there exists y ∈ R^n such that (x, y) is a local minimizer of problem (3). Conversely, if (x, y) is a local minimizer of problem (3) and ‖x‖₀ = κ holds, then x is a local minimizer of CCOP.
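To make the correspondence in Proposition 2.1 concrete, the following sketch (our illustration, not from the paper) builds, for a point x with ‖x‖₀ ≤ κ, an auxiliary y that is feasible for the orthogonality and counting constraints of problem (3): simply set y_i = 1 exactly on the zero components of x.

```python
def card(x, tol=0.0):
    """Cardinality ||x||_0: number of nonzero entries."""
    return sum(1 for xi in x if abs(xi) > tol)

def lift(x):
    """Given x, return y with x_i * y_i = 0 and sum(y) = n - ||x||_0."""
    return [0.0 if xi != 0 else 1.0 for xi in x]

n, kappa = 4, 2
x = [1.5, 0.0, -2.0, 0.0]          # feasible for ||x||_0 <= kappa
y = lift(x)

assert card(x) <= kappa
assert all(xi * yi == 0 for xi, yi in zip(x, y))   # orthogonality x_i y_i = 0
assert sum(y) >= n - kappa                          # counting constraint e^T y >= n - kappa
assert all(yi <= 1 for yi in y)                     # box constraint y <= e
print(y)                                            # [0.0, 1.0, 0.0, 1.0]
```

Note that when ‖x‖₀ < κ the same construction still works, since the counting constraint in (3) is an inequality.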
There are several stationarity concepts for the relaxation problem (3).
Definition 2.1 [14] Let (x*, y*) be feasible for (3). Then it is called
- CC-S-stationary if there exist multipliers λ ∈ R^m, μ ∈ R^p, γ ∈ R^n such that

    ∇f(x*) + Σ_{i=1}^m λ_i ∇g_i(x*) + Σ_{j=1}^p μ_j ∇h_j(x*) + γ = 0,

  with λ ≥ 0, λ_i = 0 for all i ∉ I_g(x*), and γ_ı = 0 for every ı with y*_ı = 0;
- CC-M-stationary if the same conditions hold with the last requirement replaced by γ_ı = 0 for every ı with x*_ı ≠ 0.
Obviously, CC-M-stationarity is weaker than CC-S-stationarity, but it depends only on the variable x, so it can serve as an optimality measure for CCOP. Another important reason to focus on CC-M-stationary points in this paper is the validity of the following conclusion.
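As a quick numerical illustration (ours, with an assumed toy objective), consider f(x) = (x₁ − 1)² + (x₂ − 1)² with cardinality bound κ = 1 and no g, h constraints. At x* = (1, 0), choosing a multiplier γ supported on I_0(x*) = {2} verifies the CC-M-stationarity system:

```python
# Toy check of the CC-M-stationarity system at x* = (1, 0) for
# f(x) = (x1 - 1)^2 + (x2 - 1)^2 with no g, h constraints:
# we need gamma with grad f(x*) + gamma = 0 and gamma_i = 0 on supp(x*).

def grad_f(x):
    return [2 * (x[0] - 1.0), 2 * (x[1] - 1.0)]

x_star = [1.0, 0.0]
support = [i for i, xi in enumerate(x_star) if xi != 0]   # supp(x*) = {0}

gamma = [-gi for gi in grad_f(x_star)]    # gamma = -grad f(x*) = (0, 2)

# Stationarity: grad f(x*) + gamma = 0 holds by construction.
residual = [gi + ci for gi, ci in zip(grad_f(x_star), gamma)]
assert all(abs(r) < 1e-12 for r in residual)

# M-condition: gamma vanishes on the support of x*.
assert all(gamma[i] == 0 for i in support)
print(gamma)
```

The point x* = (1, 0) is in fact a global minimizer of this toy instance, so the check is consistent with Definition 2.1.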
Let us now recall some basic concepts needed for the theoretical analysis [23]. The upper (outer) limit of a set-valued map Θ : R^n ⇉ R^m at x* is defined as

    lim sup_{x → x*} Θ(x) := { θ : ∃ x^k → x*, ∃ θ^k → θ with θ^k ∈ Θ(x^k) }.

For a function l : R^n → R, the (lower) level set is defined as l_α := { x : l(x) ≤ α }. If every level set of l is bounded, then l is said to be level bounded. Since some conclusions of this article are obtained under a semialgebraicity assumption, let us briefly introduce the basic definition and properties of semialgebraic sets and functions. We say that a set C ⊆ R^n is semialgebraic if it can be written as a finite union of sets of the form

    { x ∈ R^n : p_i(x) = 0 (i = 1, ..., s), q_j(x) < 0 (j = 1, ..., t) },

where the p_i, q_j are polynomials. A function is called semialgebraic if its graph is a semialgebraic set; obviously, polynomial functions are semialgebraic. Because of their strong stability properties, semialgebraic functions form a very broad class of functions.
Lemma 2.1 The following properties hold.
- A linear combination of finitely many semialgebraic functions is semialgebraic.
- The composition of semialgebraic functions is semialgebraic.
- The generalized inverse of a semialgebraic function is semialgebraic.
- If the set C and the function f are semialgebraic, then both F and G are semialgebraic.
Semialgebraic functions have another important property: they satisfy the Kurdyka-Łojasiewicz property.
Definition 2.2 (KL property) [7] We say that a function f satisfies the Kurdyka-Łojasiewicz property at a limiting-critical point x* (i.e., 0 ∈ ∂f(x*)) if there exist η ∈ (0, +∞], a neighborhood U of x*, and a continuous concave function φ : [0, η) → R₊ with φ(0) = 0, φ continuously differentiable on (0, η), and φ' > 0, such that for all x ∈ U with f(x*) < f(x) < f(x*) + η,

    φ'(f(x) − f(x*)) · dist(0, ∂f(x)) ≥ 1.

If the constraint set of an NLP is denoted by X := { x ∈ R^n : g(x) ≤ 0, h(x) = 0 }, then the standard MFCQ is defined as follows.
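As a simple worked example (ours, not from the paper), the function f(x) = x² satisfies the KL property at its critical point x* = 0 with the desingularizing function φ(s) = 2√s:

```latex
% f(x) = x^2 at the critical point x^* = 0, with \varphi(s) = 2\sqrt{s}:
%   f(x) - f(x^*) = x^2,
%   \operatorname{dist}(0, \partial f(x)) = |2x|,
%   \varphi'(s) = 1/\sqrt{s},
\varphi'\bigl(f(x) - f(x^*)\bigr)\,\operatorname{dist}\bigl(0, \partial f(x)\bigr)
  = \frac{1}{\sqrt{x^2}}\cdot |2x| = 2 \;\ge\; 1
  \qquad \text{for all } x \neq 0 \text{ near } x^*.
```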

Definition 2.3 (MFCQ) Let x ∈ X. We say that x satisfies the Mangasarian-Fromovitz CQ if the gradient vectors ∇h_j(x) (j = 1, ..., p) are linearly independent, and there exists d ∈ R^n such that

    ∇h_j(x)^T d = 0 (j = 1, ..., p)  and  ∇g_i(x)^T d < 0 (i ∈ I_g(x)),

where I_g(x) := { i : g_i(x) = 0 } denotes the set of active inequality constraints.
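A tiny numeric illustration (our toy instance): for X = { x : h(x) = x₁ + x₂ − 1 = 0, g(x) = −x₁ ≤ 0 } at the point x = (0, 1), the direction d = (1, −1) certifies MFCQ:

```python
# MFCQ check at x = (0, 1) for h(x) = x1 + x2 - 1 and g(x) = -x1.
x = (0.0, 1.0)
grad_h = (1.0, 1.0)       # single nonzero vector, hence linearly independent
grad_g = (-1.0, 0.0)      # g is active at x since g(x) = 0

d = (1.0, -1.0)           # candidate MFCQ direction

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
assert dot(grad_h, d) == 0.0      # d lies in the null space of grad h
assert dot(grad_g, d) < 0.0       # strict descent for the active inequality
print("MFCQ holds at", x, "with d =", d)
```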

A New Sequential Optimality Condition
Recently, due to their excellent properties, sequential optimality conditions have become very popular. Although there are many theoretical results for NLP problems, in order to avoid auxiliary variables we do not directly apply the NLP results to problem (3), but instead propose a new problem-tailored sequential optimality condition.
Definition 3.1 (CC-PAM-stationarity) Let x* be feasible for CCOP. We say that x* is a CC-PAM-stationary point if there exist sequences {x^k} ⊆ R^n and {(λ^k, μ^k, γ^k)} ⊆ R^m₊ × R^p × R^n such that
(a) x^k → x*;
(b) ∇f(x^k) + ∇g(x^k) λ^k + ∇h(x^k) μ^k + γ^k → 0, with λ_i^k = 0 for all i ∉ I_g(x*) and γ_ı^k = 0 for all ı ∉ I_0(x*);
(c) λ_i^k ≠ 0 implies λ_i^k g_i(x^k) > 0;
(d) μ_j^k ≠ 0 implies μ_j^k h_j(x^k) > 0;
(e) γ_ı^k ≠ 0 implies γ_ı^k x_ı^k > 0.
Such a sequence {(x^k, λ^k, μ^k, γ^k)} is called a CC-PAM sequence.
The conditions in Definition 3.1 only need to hold for sufficiently large k. For example, if there is an N such that conditions (a)-(e) are satisfied for k ≥ N, then one can set x̂^k = x^{N+k}, and the new sequence obtained is a CC-PAM sequence. Observe that conditions (a)-(b) are the same as in CC-AM-stationarity, so the following conclusion holds.
The converse of the above conclusion does not hold, as the following example shows.
Obviously, problem (5) has a unique global optimal solution (1, ... It is easy to verify that the above sequence satisfies conditions (a)-(b), that is, x is a CC-AM-stationary point; but it is not CC-PAM-stationary, because for any sequence satisfying conditions (a)-(b), γ_3^k and x_3^k must have different signs, which violates condition (e).
As can be seen from Example 3.1, CC-AM-stationary points can be numerous, and these points may be far from the optimal solution, while the CC-PAM-stationary points form a smaller set of candidates; that is, CC-PAM-stationarity is strictly stronger than CC-AM-stationarity. The following Theorem 3.1 states that CC-PAM-stationarity is a necessary optimality condition for CCOP without any additional assumptions.
Theorem 3.1 Let x* be a local minimizer of CCOP. Then x* is a CC-PAM-stationary point.
Proof If x* is a local minimizer of CCOP, then it is also a local minimizer of TNLP(x*), so there exists ε > 0 such that x* is the unique global minimizer of the localized problem

    min_x f(x) + ½‖x − x*‖²  s.t.  g(x) ≤ 0,  h(x) = 0,  x_i = 0 (i ∈ I_0(x*)),  ‖x − x*‖ ≤ ε.    (6)

Let

    p(x) := ‖max(g(x), 0)‖² + ‖h(x)‖² + Σ_{i ∈ I_0(x*)} x_i²,

and consider the penalized problems

    min_x f(x) + ½‖x − x*‖² + (M_k/2) p(x)  s.t.  ‖x − x*‖ ≤ ε,    (7)

where 0 < M_k → +∞. For each M_k, the objective function of problem (7) is continuous and the feasible set is compact, so a global minimizer exists; denote it by x^k. The sequence {x^k} is bounded, so it has a convergent subsequence; for simplicity, assume x^k → x̄. We now prove that x̄ = x*. Since x^k is a global minimizer of problem (7) and p(x*) = 0,

    f(x^k) + ½‖x^k − x*‖² + (M_k/2) p(x^k) ≤ f(x*).    (8)

Dividing both sides of (8) by M_k and taking the limit, we obtain p(x̄) ≤ 0, so x̄ is feasible for the localized problem (6). In addition, from (8), f(x^k) + ½‖x^k − x*‖² ≤ f(x*); letting k → +∞ yields f(x̄) + ½‖x̄ − x*‖² ≤ f(x*). But x* is the unique global minimizer of problem (6), so x̄ = x*.
From the necessary optimality conditions of problem (7) (note that x^k lies in the interior of the ball for k large) we obtain

    ∇f(x^k) + (x^k − x*) + M_k [ ∇g(x^k) max(g(x^k), 0) + ∇h(x^k) h(x^k) + Σ_{i ∈ I_0(x*)} x_i^k e_i ] = 0,

and we define

    λ_i^k := M_k max(g_i(x^k), 0),  μ_j^k := M_k h_j(x^k),  γ_ı^k := M_k x_ı^k (ı ∈ I_0(x*)),  γ_ı^k := 0 otherwise.

For all i ∉ I_g(x*) we have g_i(x*) < 0, so g_i(x^k) < 0 when k is sufficiently large, and hence λ_i^k = 0. Meanwhile, by the definition of γ^k, obviously γ_ı^k = 0 for ı ∉ I_0(x*). If λ_i^k ≠ 0, then g_i(x^k) > 0, so λ_i^k g_i(x^k) = M_k (max(g_i(x^k), 0))² > 0. Similarly, if μ_j^k ≠ 0, then h_j(x^k) ≠ 0, and if γ_ı^k ≠ 0, then x_ı^k ≠ 0, so μ_j^k h_j(x^k) = M_k (h_j(x^k))² > 0 and γ_ı^k x_ı^k = M_k (x_ı^k)² > 0. In summary, {(x^k, λ^k, μ^k, γ^k)} is a CC-PAM sequence, that is, x* is a CC-PAM-stationary point. ⊓⊔
Theorem 3.1 and Example 3.1 show that the CC-PAM-stationary points we propose can serve as candidate points for the optimal solution, and CC-PAM-stationarity is more suitable as a measure of optimality than CC-AM-stationarity. On the other hand, another advantage of sequential optimality conditions is that they are independent of any specific algorithm. That is, a CC-PAM-stationary point has certain theoretical properties, and any algorithm that generates CC-PAM-stationary points inherits the same properties. Therefore, sequential optimality conditions provide a tool for establishing a unified framework for optimality theory.
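The penalty construction in the proof of Theorem 3.1 can be traced numerically on a toy instance (our choice of objective; the subproblem minimizer below is computed in closed form, which is possible only because the toy has no g or h constraints): f(x) = (x₁ − 1)² + (x₂ − 1)², x* = (1, 0), I_0(x*) = {2}, so p(x) = x₂².

```python
# Penalty construction from the proof of Theorem 3.1 on a toy instance:
# f(x) = (x1-1)^2 + (x2-1)^2, x* = (1, 0), I_0(x*) = {2}, p(x) = x2^2.

def subproblem_minimizer(M):
    # Closed-form global minimizer of f(x) + 0.5*||x - x*||^2 + (M/2)*x2^2:
    #   x1: 2(x1 - 1) + (x1 - 1) = 0      -> x1 = 1
    #   x2: 2(x2 - 1) + x2 + M*x2 = 0     -> x2 = 2/(3 + M)
    return (1.0, 2.0 / (3.0 + M))

for M in (1.0, 10.0, 100.0, 1e6):
    x1, x2 = subproblem_minimizer(M)
    gamma2 = M * x2                      # multiplier gamma_2^k = M_k * x_2^k
    # Stationarity residual of the penalized subproblem (should vanish):
    res = 2.0 * (x2 - 1.0) + (x2 - 0.0) + gamma2
    assert abs(res) < 1e-9
    assert gamma2 * x2 > 0               # condition (e): same sign
print("x2 -> 0, gamma2 -> 2:", x2, gamma2)
```

As M grows, x^k → x* = (1, 0) while γ₂^k → 2, exhibiting exactly the CC-PAM sequence produced in the proof.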
The converse of Theorem 3.1 does not hold, as the following example shows.
Example 3.2 In two-dimensional space, consider a simple geometric problem: given z = (1, 1)^T, find the point closest to z on the coordinate axes. This problem can be modeled as

    min_x ‖x − z‖²  s.t.  ‖x‖₀ ≤ 1.    (9)

Obviously, (1, 0)^T and (0, 1)^T are the two global optimal solutions of this problem. We now show that x* = (0, 0)^T is a CC-PAM-stationary point. Taking a suitable sequence, it is easy to verify that {(x^k, γ^k)} satisfies conditions (a)-(e); that is, x* = (0, 0)^T is a CC-PAM-stationary point, but it is not a local minimizer of problem (9).
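One concrete candidate sequence for Example 3.2 (our choice, since the exact sequence is elided above) is x^k = (1/k, 1/k) with γ^k = 2(z − x^k); here I_0(x*) = {1, 2}, so γ^k may have full support, and conditions (c)-(d) are vacuous:

```python
# Example 3.2: a candidate CC-PAM sequence at x* = (0, 0) for
# f(x) = ||x - z||^2, z = (1, 1):  x^k = (1/k, 1/k), gamma^k = 2*(z - x^k).

z = (1.0, 1.0)

def grad_f(x):                       # gradient of f(x) = ||x - z||^2
    return [2.0 * (xi - zi) for xi, zi in zip(x, z)]

for k in range(2, 1000, 100):
    xk = (1.0 / k, 1.0 / k)
    gk = [2.0 * (zi - xi) for zi, xi in zip(z, xk)]
    # (b): grad f(x^k) + gamma^k = 0 exactly along this sequence
    res = [gi + ci for gi, ci in zip(grad_f(xk), gk)]
    assert all(abs(r) < 1e-12 for r in res)
    # (e): gamma_i^k and x_i^k share the same sign (both positive)
    assert all(ci * xi > 0 for ci, xi in zip(gk, xk))
print("x* = (0, 0) admits a CC-PAM sequence")
```

This confirms that CC-PAM-stationarity, while strictly stronger than CC-AM-stationarity, is still only a necessary condition.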
We know that CC-M-stationarity is stronger than CC-AM-stationarity, and the CC-PAM-stationarity we propose is also strictly stronger than CC-AM-stationarity. An interesting question is whether CC-M-stationarity is stronger than CC-PAM-stationarity, and under what conditions the two are equivalent. This issue is addressed in detail in the next section.

A New Constraint Qualification
Let x* be feasible for CCOP, α ≥ 0, β ≥ 0, x ∈ R^n, and define the set Θ(x, α, β). Obviously, if x* is a CC-M-stationary point, then it can be written in terms of Θ(x*, 0, 0). In addition, CC-PAM-stationarity can be expressed as a limit of this sequence of sets, and the following conclusions hold.
Let π_k = ‖(1, λ^k, μ^k, γ^k)‖_∞. Then the sequence {(λ^k, μ^k, γ^k)/π_k} must have a convergent subsequence; for simplicity, assume it converges. Define α_k accordingly; obviously α_k → 0, and when k is sufficiently large the corresponding inclusion holds. Similar conclusions hold for μ and γ. Define β_k analogously; obviously β_k → 0. Combining the non-negativity of {α_k} and {β_k}, we may assume α_k ↓ 0 and β_k ↓ 0 (taking subsequences if necessary), and we obtain θ_k ∈ Θ(x^k, α_k, β_k).

Now we give the relationship between CC-PAM-stationarity and CC-M-stationarity.
Theorem 4.1 Let x* be feasible for CCOP. Then the following statements hold.
(i) x* is a CC-PAM-stationary point if and only if −∇f(x*) ∈ lim sup_{x → x*, α ↓ 0, β ↓ 0} Θ(x, α, β).
(ii) By Lemma 4.1, we obtain θ* ∈ lim sup_{x → x*, α ↓ 0, β ↓ 0} Θ(x, α, β); defining f̃(x) = x^T θ*, we have ∇f̃(x*) = θ*. Since ... In Theorem 3.1, we proved that any local minimizer of CCOP (or TNLP(x*)) satisfies the sequential optimality condition (CC-PAM-stationarity), and Theorem 4.1 (ii) shows that

    CC-PAM + CC-PAM-regularity ⟹ CC-M,    (11)

in other words, the CC-PAM-regularity condition is a CC-CQ. The literature [12] calls a constraint qualification satisfying property (11) a strict constraint qualification (SCQ). And conclusion (iii) means that the CC-PAM-regularity condition is the weakest SCQ relative to CC-PAM-stationarity. Next, we will apply CC-PAM-stationarity and the CC-PAM-regularity condition to enhance the theoretical results of the augmented Lagrangian method.
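The relationships established so far can be summarized (our compilation of Theorem 3.1, Theorem 4.1, and the discussion in the introduction) as the following implication chain:

```latex
\text{local minimizer of CCOP} \;\Longrightarrow\; \text{CC-PAM} \;\Longrightarrow\; \text{CC-AM},
\qquad
\text{CC-M} \;\Longrightarrow\; \text{CC-PAM},
\qquad
\text{CC-PAM} + \text{CC-PAM-regularity} \;\Longrightarrow\; \text{CC-M}.
```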

Convergence of Safeguarded Augmented Lagrangian Method
This section discusses the convergence of the safeguarded augmented Lagrangian method applied directly to the relaxation problem (3). Let Λ = (λ, μ, γ, δ, η) ∈ R^m₊ × R^p × R^n × R₊ × R^n₊ and ρ > 0. Then the augmented Lagrangian function of problem (3) can be written as

    L(x, y, Λ, ρ) = f(x) + (ρ/2) [ ‖max(g(x) + λ/ρ, 0)‖² + ‖h(x) + μ/ρ‖² + ‖x ∘ y + γ/ρ‖² + (max(n − κ − e^T y + δ/ρ, 0))² + ‖max(y − e + η/ρ, 0)‖² ],    (12)

where x ∘ y denotes the componentwise product. Now we state the safeguarded augmented Lagrangian method [12].
Step 2 (Update of the iterate): Compute (x^k, y^k) as an approximate solution of

    min_{x,y} L(x, y, Λ̄^{k−1}, ρ_{k−1}),    (13)

satisfying ‖∇_{(x,y)} L(x^k, y^k, Λ̄^{k−1}, ρ_{k−1})‖ ≤ ε_k.    (14)
Step 3 (Update of the approximate multipliers): Compute Λ^k from (x^k, y^k), Λ̄^{k−1}, and ρ_{k−1} via the usual augmented Lagrangian formulas (15)-(17).
Step 4 (Update of the penalty parameter): If the constraint violation has decreased sufficiently, set ρ_k = ρ_{k−1}; otherwise set ρ_k = σρ_{k−1}.
Algorithm 5.1 introduces safeguarded multipliers into the classic augmented Lagrangian method. The convergence theory is thereby improved: the whole-sequence convergence required by the classic method is relaxed to subsequence convergence [20]. The update rule for the safeguarded multipliers in Step 5 is not unique; the most popular choice is projection onto a bounded set. In addition, it should be emphasized that no stopping criterion is set in Algorithm 5.1; we will explore this issue in the subsequent convergence analysis.
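The classical multiplier updates in Step 3 can be sketched as follows (a minimal illustration with hypothetical sample numbers; the exact formulas in Algorithm 5.1 follow the safeguarded PHR scheme, with the bounded safeguarded multipliers Λ̄ in place of Λ):

```python
def update_multipliers(g_val, h_val, orth_val, lam_bar, mu_bar, gamma_bar, rho):
    """PHR-style multiplier updates for g(x) <= 0, h(x) = 0 and x_i*y_i = 0.

    lam_bar, mu_bar, gamma_bar are the (bounded) safeguarded multipliers.
    """
    lam = [max(0.0, lb + rho * gi) for lb, gi in zip(lam_bar, g_val)]   # inequality
    mu = [mb + rho * hj for mb, hj in zip(mu_bar, h_val)]               # equality
    gamma = [gb + rho * oi for gb, oi in zip(gamma_bar, orth_val)]      # orthogonality
    return lam, mu, gamma

# Hypothetical iterate values (for illustration only):
lam, mu, gamma = update_multipliers(
    g_val=[-0.05], h_val=[0.1], orth_val=[0.02],
    lam_bar=[1.0], mu_bar=[0.2], gamma_bar=[0.3], rho=10.0)
print(lam, mu, gamma)   # [0.5] [1.2] [0.5]
```

Note how the projection max(0, ·) in the inequality update drives λ to zero on inactive constraints, which is exactly the mechanism exploited in the proof of Theorem 5.1 below.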
Before proceeding to the convergence analysis, we state a useful assumption.
Assumption 5.1 The functions g : R^n → R^m and h : R^n → R^p in CCOP are semialgebraic.
In Sect. 2, we introduced the basic concepts and properties of semialgebraic functions, which show that Assumption 5.1 is a relatively mild condition covering a large class of problems. In the subsequent analysis, we will see that the semialgebraicity assumption can be further relaxed. Let p(x, y, Λ, ρ) := (1/ρ)[L(x, y, Λ, ρ) − f(x)]; this is the penalty part of (12) (excluding the penalty parameter). Under Assumption 5.1, the following conclusion, Lemma 5.1, that p(·, ·, Λ, ρ) is semialgebraic, is easily obtained from Lemma 2.1.
Let {(x^k, y^k)} be the iterative sequence generated by Algorithm 5.1. We already know that if {x^k} is bounded on a subsequence, then {y^k} is bounded on the corresponding subsequence (for a detailed proof, see [18]). This property shows that the boundedness of the entire iterative sequence can be obtained by ensuring boundedness in the x space alone (that is, in the original problem). A sufficient condition is given below.

Lemma 5.2 If f is level bounded, then for any given Λ and ρ, the function L(x, y, Λ, ρ) is also level bounded.
Proof Let T(y, δ, η, ρ) := (ρ/2)[(max(n − κ − e^T y + δ/ρ, 0))² + ‖max(y − e + η/ρ, 0)‖²] denote the part of (12) depending only on y. For any α ∈ R, let f_α and T_α denote the respective α-level sets. By hypothesis, f_α is bounded, and for T(y, δ, η, ρ) we have ‖y‖ → ∞ ⟹ T(y, δ, η, ρ) → ∞, so T_α is also bounded. Since the remaining penalty terms in (12) are non-negative, every level set of L(x, y, Λ, ρ) is contained in a product of bounded sets, so L(x, y, Λ, ρ) is level bounded. ⊓⊔ Lemma 5.2 shows that L is uniformly level bounded with respect to Λ and ρ. If the subproblem (13) is solved using a descent algorithm, then the sequence generated by Algorithm 5.1 must be bounded. Let us now discuss the convergence of Algorithm 5.1.
Theorem 5.1 Let (x*, y*) be an accumulation point of {(x^k, y^k)} generated by Algorithm 5.1 that is feasible for problem (3), and let Assumption 5.1 hold. Then x* is a CC-PAM-stationary point.
Proof For simplicity, we assume (x^k, y^k) → (x*, y*). By (14), we obtain the corresponding approximate stationarity conditions. We now divide the proof into two cases. i) {ρ_k} is bounded.
If {ρ_k} is bounded, then combined with the boundedness of {Λ̄^k} and (15)-(17), we know that {Λ^k} is also bounded. To avoid repeatedly taking subsequences, we assume Λ^k → Λ, and let I, J, K denote the supports of λ, μ, γ, respectively. If all three are empty, then setting x̂^k = x*, λ̂^k = μ̂^k = γ̂^k = 0 shows that x* is a CC-PAM-stationary point. Conversely, if at least one of them is non-empty, then by Lemma 1 of [5] there exist Ī ⊆ I, J̄ ⊆ J, K̄ ⊆ K and (λ̄_Ī, μ̄_J̄, γ̄_K̄) such that the representation (18) holds and the vector family F of the corresponding gradients is linearly independent. The next key step is to find a sequence {x̂^k} with x̂^k → x* such that {(x̂^k, λ̂^k, μ̂^k, γ̂^k)} is a CC-PAM sequence. Let Z denote the constraint system associated with F. Since F is linearly independent, Z satisfies LICQ at x*, and hence also MFCQ. Thus, there exists d ∈ R^n satisfying the corresponding strict sign conditions for i ∈ Ī, j₊ ∈ J̄₊, j₋ ∈ J̄₋, ı₊ ∈ K̄₊, ı₋ ∈ K̄₋. For simplicity, we set ‖d‖ = 1 and take x̂^k = x* + t_k d, where t_k ↓ 0, which implies x̂^k → x*. By (19) and (23), we obtain the corresponding approximate stationarity. Let i ∉ I_g(x*); then g_i(x*) < 0, and g_i(x^k) < 0 when k is sufficiently large. By (18), we know the associated multiplier representation, and (15) implies (g_i(x^k) + λ̄_i^{k−1}/ρ_{k−1})₊ = 0 for all k sufficiently large, so λ_i = 0, that is, i ∉ Ī. Hence, by (23), we have λ̂_i^k = 0. Take an index ı ∈ I_±(x*); then x*_ı ≠ 0 and y*_ı = 0, so γ_ı y*_ı = 0, i.e. ı ∉ K̄, and by (23) we have γ̂_ı^k = 0. We now verify that {(x̂^k, λ̂^k, μ̂^k, γ̂^k)} satisfies conditions (c)-(e) of Definition 3.1. If λ̂_i^k ≠ 0, then i ∈ Ī, and by (24) we know Ī ⊆ I_g(x*). This implies g_i(x̂^k) = g_i(x*) + t_k ∇g_i(x*)^T d + r(t_k), where r(t_k) denotes a higher-order infinitesimal of t_k. Dividing both sides by t_k, we see that g_i(x̂^k) has the required sign when k is sufficiently large, so λ̂_i^k g_i(x̂^k) > 0. We only discuss j ∈ J̄₋ here (the case j ∈ J̄₊ is analogous): for any j ∈ J̄₋, when k is sufficiently large, μ̂_j^k h_j(x̂^k) > 0. Finally, if γ̂_ı^k ≠ 0, then by (23) ı ∈ K̄, and hence γ̂_ı^k x̂_ı^k > 0. In summary, {(x̂^k, λ̂^k, μ̂^k, γ̂^k)} is a CC-PAM sequence, and therefore x* is a CC-PAM-stationary point.
ii) {ρ_k} is unbounded. By (14), we obtain the corresponding approximate stationarity estimate. Let i ∉ I_g(x*); then g_i(x*) < 0, and g_i(x^k) < 0 when k is sufficiently large. Since ρ_k → ∞, the corresponding multiplier components vanish for large k. Take an index ı ∈ I_±(x*); then x*_ı ≠ 0 and y*_ı = 0, so y_ı^k → 0. Now let us prove that ρ_{k−1} x_ı^k (y_ı^k)² → 0. For simplicity, we abbreviate p(x, y, Λ, ρ) from Lemma 5.1 as p̄(x) and define the associated error terms. The key estimate then follows, where the inequality comes from the Lipschitz property of (·)₊ (the other terms are treated similarly). Since {Λ̄^{k−1}} is bounded, g, h ∈ C¹, and (x^k, y^k) → (x*, y*), there exists M₁ > 0 bounding the corresponding terms. On the other hand, by (14) we obtain a bound on the gradient of the subproblem objective. Furthermore, since ε_k ↓ 0, f ∈ C¹, and x^k → x*, there exists M > 0 such that the combined bound holds. At the same time, by Assumption 5.1 and Lemma 2.1, we know that p̄ is semialgebraic, so there exist C > 0 and θ ∈ [0, 1) such that the Łojasiewicz inequality holds for p̄. Combining (26) and (27), we obtain ρ_{k−1} x_ı^k (y_ı^k)² → 0. It follows readily that ∇f(x^k) + ∇g(x^k) λ^k + ∇h(x^k) μ^k + γ̂^k → 0.
Let π_k := ‖(1, λ^k, μ^k, γ̂^k)‖_∞. If {π_k} is bounded, the argument of case i) verifies that conditions (c)-(e) hold. Now consider the case where {π_k} is unbounded. If lim_{k→∞} λ_i^k / π_k > 0, then obviously g_i(x^k) > 0 for all k sufficiently large, so λ_i^k g_i(x^k) > 0. Observe that μ_j^k has the same sign as h_j(x^k), which implies μ_j^k h_j(x^k) > 0. Similarly, if lim_{k→∞} |γ̂_ı^k| / π_k > 0, then by (28) we know ı ∈ I_0(x*), and y_ı^k → y*_ı = 0. Meanwhile, when k is sufficiently large, γ̂_ı^k has the same sign as x_ı^k, namely γ̂_ı^k x_ı^k > 0. This completes the proof. ⊓⊔
As can be seen from the proof of Theorem 5.1, the semialgebraicity hypothesis is essentially used to ensure that, for any given Λ and ρ, the function p(x, y, Λ, ρ) has the KL property. Therefore, Assumption 5.1 can be relaxed to definability in an o-minimal structure, or even to the bare assumption that p(x, y, Λ, ρ) has the KL property (that is, the same condition as in [19]), and the conclusion of Theorem 5.1 still holds. There are two reasons why we did not state it this way. First, y is an artificial variable, so assumptions should not be imposed on the y space; second, Lemma 5.1 and Lemma 5.2 show that the introduction of y does not destroy the good properties assumed on the x space.
Theorem 5.1 states that any feasible accumulation point of Algorithm 5.1 is a CC-PAM-stationary point if Assumption 5.1 holds. From the proof it can also be seen that the sequence generated by Algorithm 5.1 is not necessarily itself a CC-PAM sequence. In fact, this is not contradictory, because Definition 3.1 only requires the existence of a corresponding CC-PAM sequence. But it does show that it is not appropriate to take CC-PAM-stationarity as a stopping criterion. On the other hand, according to Theorem 4.1, if the CC-PAM-regularity condition additionally holds at x*, then x* is a CC-M-stationary point. In other words, when the CC-PAM-regularity condition holds, CC-M-stationarity itself is a very suitable stopping criterion; [18] has verified the validity of this approach, and since this paper mainly emphasizes the theoretical improvement, we do not repeat the experiments. In addition, in conjunction with Proposition 2.2, it is clear that the following conclusion holds.
Theorem 5.2 Let (x*, y*) be an accumulation point of {(x^k, y^k)} generated by Algorithm 5.1, let Assumption 5.1 hold, let (x*, y*) be feasible for the relaxation problem (3), and let the CC-PAM-regularity condition hold at x*. Then x* is a CC-M-stationary point, and there exists z* ∈ R^n such that (x*, z*) is a CC-S-stationary point.

Final Remarks
In this paper, we study the continuous relaxation of CCOP, which has become popular in recent years, and propose a new sequential optimality condition called CC-PAM-stationarity. In Sect. 3, we prove that CC-PAM-stationarity is strictly stronger than CC-AM-stationarity; moreover, any local minimizer of CCOP is a CC-PAM-stationary point without any additional assumptions. Hence CC-PAM-stationarity is a better measure of optimality than CC-AM-stationarity. In addition, we introduce a new constraint qualification called CC-PAM-regularity in Sect. 4, which is weaker than CC-AM-regularity, and prove that if the CC-PAM-regularity condition holds, then every CC-PAM-stationary point is a CC-M-stationary point.
In Sect. 5, we apply the new sequential optimality condition, CC-PAM-stationarity, to the safeguarded augmented Lagrangian method (Algorithm 5.1), which further improves the existing theoretical results. We prove that under mild conditions such as the KL property, any feasible accumulation point of Algorithm 5.1 is a CC-PAM-stationary point; furthermore, if the CC-PAM-regularity condition holds, the algorithm converges to a CC-M-stationary (essentially CC-S-stationary) point. In other words, in this case CC-M-stationarity is the natural termination criterion for Algorithm 5.1. Meanwhile, we emphasize that under the same assumptions as the existing results, the conclusions of this article remain valid.