A unified framework for three accelerated extragradient methods and further acceleration for variational inequality problems

The main strategy of this paper is to accelerate the convergence of the inertial Mann iterative method, and to accelerate it further through the normal S-iterative method, for a certain class of nonexpansive-type operators linked with variational inequality problems. Our new convergence theory allows us to resolve the difficulty of unifying Korpelevich's extragradient method, Tseng's extragradient method, and the subgradient extragradient method for solving variational inequality problems through an auxiliary algorithmic operator associated with the seed operator. The paper establishes the noteworthy fact that the relaxed inertial normal S-iterative extragradient methods have a strong influence on convergence behaviour. Finally, numerical experiments are carried out to illustrate that the relaxed inertial iterative methods, in particular the relaxed inertial normal S-iterative extragradient methods, may have a number of advantages over other methods in computing solutions of variational inequality problems in many cases.


Introduction
Many problems arising in different areas of mathematics, such as signal/image recovery, optimization, variational analysis, and differential and integral equations, can be modelled by the fixed point problem (FPP):

find x ∈ C such that x = Sx, (1)

where S : X → C is an operator and C is a nonempty subset of a Hilbert space X. The solutions of this problem are called fixed points of the operator S. We denote the set of solutions of FPP (1) (equivalently, the set of fixed points of S) by Fix(S) = {x ∈ C : x = Sx}.
A variational inequality problem can be modelled as FPP (1). This paper concerns iterative methods for approximating a solution of a variational inequality problem in the Hilbert space setting. We first review the research background of fixed point iterative methods and of iterative methods for solutions of variational inequality problems, and then present some observations and our contribution.

Convergence rates of iterative methods for nonexpansive operators
There is a vast literature on the topic "computation of fixed points" covering important aspects such as the types of operators involved, robustness, efficiency and convergence rate. It is well known that the Picard iterative method for nonexpansive operators may, in general, not behave well (see Agarwal et al. 2009; Chidume 2009; Xu 2002). One of the powerful iterative techniques for computation of fixed points of nonexpansive-type operators in Hilbert spaces as well as Banach spaces is the Mann iterative method, introduced by Mann (1953) as follows:

x_{n+1} = (1 − α_n)x_n + α_n T x_n for all n ∈ N,

where {α_n} is a real sequence in [0, 1] and T : C → C is nonexpansive on a nonempty closed convex set C of a suitable Banach space. This method has been extensively studied and applied by many authors to various science and engineering problems; see, for instance, Agarwal et al. (2009); Ansari and Sahu (2014); Bauschke and Combettes (2011). Accelerating the convergence rate of iterative techniques fascinates researchers both theoretically and numerically for various nonlinear problems. In order to accelerate the convergence rate, multi-step iterative algorithms have been studied by using additional inertial terms (see Boţ et al. 2015; Dixit et al. 2019; Dong et al. 2019; Maingé 2008; Sahu 2020; Verma et al.
2017; Polyak 1964). The inertial term is motivated by the heavy ball method proposed by Polyak (1964). The heavy ball method is a discretization of the second-order ordinary differential equation x″(t) + γ x′(t) + ∇ω(x(t)) = 0, where x(t) is a time-continuous trajectory, γ > 0 is a friction parameter and ω(x(t)) is an external gravitational field. In particular, Maingé (2008) adopted the inertial Mann iterative method for computation of fixed points of a nonexpansive operator T defined on a Hilbert space X as follows:

w_n = x_n + θ_n(x_n − x_{n−1}), x_{n+1} = (1 − α_n)w_n + α_n T w_n. (2)

Maingé (2008) studied weak convergence of algorithm (2) under the conditions (M0) θ_n ∈ [0, θ] for all n ∈ N and for some θ ∈ [0, 1), and (M1). The verification of condition (M1) is laborious in practical situations. Boţ et al. (2015) proved a result for computation of fixed points of nonexpansive operators under a different condition, without the condition (M1). Recently, Sahu (2020) studied weak convergence and the convergence rate of the inertial Mann iterative method (2) for computation of fixed points of a quasi-nonexpansive operator T with affine domain in a Hilbert space under a practical condition which is simpler than the condition (6) of Boţ et al. (2015, Theorem 5).
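As a toy illustration (not one of the paper's experiments), the inertial Mann scheme (2) takes only a few lines; the nonexpansive operator T (a metric projection), the starting point, and the constant parameters α and θ below are illustrative choices:

```python
import numpy as np

# A sketch of the inertial Mann iteration (2) in R^2.  Here T is the
# projection onto the closed unit ball, which is nonexpansive and whose
# fixed-point set is the ball itself.  The parameters alpha and theta are
# illustrative, not the ones analysed in the paper.

def proj_unit_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def inertial_mann(T, x0, alpha=0.5, theta=0.3, iters=200):
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        w = x + theta * (x - x_prev)                   # inertial extrapolation
        x_prev, x = x, (1 - alpha) * w + alpha * T(w)  # Mann step at w
    return x

x = inertial_mann(proj_unit_ball, np.array([3.0, 4.0]))
print(np.linalg.norm(x - proj_unit_ball(x)))  # fixed-point residual, near 0
```

The residual ‖x_n − T x_n‖ → 0, so the limit is a fixed point of T, in line with the weak convergence results discussed above.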

Variational inequality problems
We now consider the following nonlinear problem:

find x* ∈ C such that ⟨F x*, y − x*⟩ ≥ 0 for all y ∈ C, (3)

where C is a nonempty closed convex subset of a real Hilbert space X and F : C → X is a nonlinear operator. Problem (3) is called the variational inequality problem over C for the operator F, and it is denoted by VI(C, F). The solution set of the variational inequality problem VI(C, F) is denoted by Ω[VI(C, F)].
Many problems in convex optimization, economics, engineering mechanics and mathematical physics can be modelled as variational inequality problems (3) (see Nachaoui and Nachaoui 2022). It is well known that the convergence of the projected gradient iterative method introduced by Goldstein (1964) requires strong monotonicity of the operator F involved in (3). Without strong monotonicity of F, the sequence generated by the projection method may diverge. In order to relax the strong monotonicity of F, Korpelevich (1976) introduced an extragradient method (KEM) for solving the variational inequality problem VI(C, F) as follows:

y_n = P_C(x_n − λF x_n), x_{n+1} = P_C(x_n − λF y_n), (4)

where F : C → X is a monotone and L-Lipschitz continuous operator and λ ∈ (0, 1/L). Korpelevich's algorithm and its variants have been studied by many authors; see, for instance, Ansari and Sahu (2016); Yao et al. (2014). In subsequent work, Dong et al. (2016) introduced an extragradient method with inertial effect (IM-KEM) by combining the extragradient algorithm (Korpelevich 1976) and the inertial extrapolation method (Boţ et al. 2015) for the solution of variational inequality problems. The inertial extragradient method in Dong et al. (2016) is given as follows:

w_n = x_n + θ_n(x_n − x_{n−1}), y_n = P_C(w_n − λF w_n), x_{n+1} = (1 − α_n)w_n + α_n P_C(w_n − λF y_n), (5)

where F : C → X is a monotone and L-Lipschitz continuous operator, {θ_n} is nondecreasing with θ_1 = 0 and 0 ≤ θ_n < θ < 1 for all n ∈ N, and λ, σ, δ > 0 are such that the condition (6) holds, where ξ = (1 + τL)²θ(1 + θ) + (1 − τ²L²)θσ + σ(1 + τL)². On the other hand, it has been remarked in the literature that the KEM (4) requires computing the projection onto the set C twice in each iteration, which affects its execution efficiency. In view of the current literature, to overcome this difficulty, a large number of variants of the KEM (4), which only need to compute the projection onto the feasible set C once in each iteration, have been studied for solving variational inequalities (see, e.g., Cai et al. 2021; Censor et al.
2012; Malitsky 2015). In this direction, Tseng (2000) introduced the following extragradient method (TEM) for solving the variational inequality problem (3):

y_n = P_C(x_n − λF x_n), x_{n+1} = y_n − λ(F y_n − F x_n), (7)

where λ ∈ (0, 1/L). Recently, extragradient methods have received great attention for solving pseudo-monotone variational inequality problems in infinite-dimensional Hilbert spaces (Boţ et al. 2020; Cai et al. 2021; Khanh 2016; Anh et al. 2020; Vuong 2018). Censor et al. (2011) proposed the following subgradient extragradient method (SEM) for solving the variational inequality problem (3):

y_n = P_C(x_n − λF x_n), H_n = {w ∈ X : ⟨x_n − λF x_n − y_n, w − y_n⟩ ≤ 0}, x_{n+1} = P_{H_n}(x_n − λF y_n), (8)

where λ ∈ (0, 1/L). A number of improved variants of the SEM have been designed to solve variational inequalities, equilibrium problems, and other optimization problems (see Cai et al. 2021; Jolaoso and Aphane 2022). In Boţ et al. (2020), Boţ et al. studied a relaxed extragradient method and observed that its performance is better than that of the subgradient extragradient method (8). However, the comparison of the inertial version of the relaxed extragradient method with that of the relaxed subgradient extragradient method is not studied in Boţ et al. (2020). Recently, Tan et al. (2022, 2023) studied the strong convergence of some variants of the inertial extragradient approach to the solution of pseudo-monotone variational inequality problems in Hilbert spaces. Moreover, other interesting observations are given in the next subsection.
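For concreteness (this is not one of the paper's experiments), the three methods (4), (7) and (8) can be sketched side by side on a toy monotone problem; the operator F, the set C, the starting point and the step size λ below are illustrative choices:

```python
import numpy as np

# Toy monotone VI: F(x) = (x2, -x1) is monotone and 1-Lipschitz, C is the
# closed ball of radius 10, and the unique solution is x* = 0.  Plain
# projected gradient diverges on this F, while all three extragradient
# variants converge.  All problem data and parameters are illustrative.

def F(x):
    return np.array([x[1], -x[0]])

def proj_C(x, radius=10.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def kem(x, lam=0.5, iters=200):          # Korpelevich (4): two projections
    for _ in range(iters):
        y = proj_C(x - lam * F(x))
        x = proj_C(x - lam * F(y))
    return x

def tem(x, lam=0.5, iters=200):          # Tseng (7): one projection
    for _ in range(iters):
        y = proj_C(x - lam * F(x))
        x = y - lam * (F(y) - F(x))
    return x

def sem(x, lam=0.5, iters=200):          # subgradient extragradient (8)
    for _ in range(iters):
        y = proj_C(x - lam * F(x))
        a = x - lam * F(x) - y           # normal of the half-space H_n
        z = x - lam * F(y)
        s = a @ (z - y)                  # project z onto H_n (H_n = X if a = 0)
        x = z if a @ a == 0 or s <= 0 else z - (s / (a @ a)) * a
    return x

x0 = np.array([1.0, 1.0])
print([np.linalg.norm(m(x0.copy())) for m in (kem, tem, sem)])  # all near 0
```

Note how SEM replaces the second projection onto C by a closed-form projection onto the half-space H_n, and TEM avoids it altogether, which is exactly the efficiency point raised in remarks (R1)-(R2) below.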

Observations
The following remarks are given in Censor et al. (2011): (R1) If projections onto C are easily executed, then the extragradient method (4) is particularly useful. (R2) If C is a general closed and convex set, then a minimal distance problem has to be solved (twice) in order to obtain the next iterate. This might seriously affect the efficiency of the extragradient method (4).
In view of remarks (R1) and (R2), the following natural questions arise: (Q1) Is it possible to unify KEM (4), TEM (7) and SEM (8) for pseudo-monotone variational inequality problems by defining a certain class of algorithmic operators? (Q2) If the projection onto C is executed, how can one compare the convergence rates of KEM (4), TEM (7) and SEM (8)?
In the light of Dong et al. (2016), we have the following observations: (O1) The upper bound of the iteration parameter {α_n} determined by (6) depends on the other parameters τ, σ, δ, θ and L. Therefore, the determination of an upper bound of {α_n} for the convergence behaviour of the extragradient method (5) is not easy. (O2) The rate of convergence of the inertial Mann iterative method (5) is not reported in Dong et al. (2016). (O3) There is a huge number of variants of extragradient methods for the computational solution of variational inequality problems involving monotone/pseudo-monotone operators. Interestingly, many researchers have focussed their attention on the inertial technique as one of the ways of acceleration. The normal S-iterative technique is also a way of acceleration; for details, see Sect. 3.2.
While doing so, we first investigate the convergence theory of two broad classes of relaxed inertial iterative methods, based on the Mann iteration method and the normal S-iteration method, generated by strongly quasi-nonexpansive operators in Hilbert spaces for computation of solutions of FPP (1) under practical assumptions (see Sect. 3). Our convergence theory strengthens the observation that if the projection onto C is executed, then Korpelevich's extragradient method (4) behaves as well as Tseng's extragradient method (7) and the subgradient extragradient method (8). Our operator-theoretic methodology creates a common platform for handling the issue of unification of the three extragradient methods, and therefore our convergence theory provides affirmative answers to questions (Q1)-(Q2).

Organization of the paper
This paper is organized as follows: In Sect. 2, we summarize some basic definitions and results. In Sect. 3, we first introduce the relaxed inertial iterative method based on the Mann iteration method and, to further speed up the relaxed inertial Mann iterative method, we introduce the normal S-iteration method with relaxed iteration parameters {α_n} in (0, ∞); we establish their convergence theory with worst-case convergence rates for computation of solutions of FPP (1) when there exists a strongly quasi-nonexpansive operator T : X → X associated with the original operator S. In Sect. 4, we combine the relaxed inertial Mann iterative method and the relaxed inertial normal S-iterative method with the three basic extragradient methods (4), (7) and (8) and develop corresponding relaxed inertial extragradient methods for solving pseudo-monotone variational inequality problems. In Sect. 5, we give numerical test problems demonstrating the convergence behaviour of the six newly developed extragradient methods. Finally, in the last section, we conclude the paper with some plans for future research.

Mathematical preliminaries
Throughout the paper, X denotes a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖. The strong (weak) convergence of a sequence {x_n} to x is denoted by x_n → x (x_n ⇀ x). An operator T : C → X defined on a nonempty subset C of X is said to be: (i) nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C; (ii) firmly nonexpansive if ‖Tx − Ty‖² ≤ ⟨Tx − Ty, x − y⟩ for all x, y ∈ C; (iii) quasi-nonexpansive if Fix(T) ≠ ∅ and ‖Tx − y‖ ≤ ‖x − y‖ for all x ∈ C and y ∈ Fix(T); (iv) κ-strongly quasi-nonexpansive if Fix(T) ≠ ∅ and there exists a constant κ ∈ (0, ∞) such that ‖Tx − y‖² ≤ ‖x − y‖² − κ‖x − Tx‖² for all x ∈ C and y ∈ Fix(T). An operator A : X ⇒ X is monotone if ⟨u − v, x − y⟩ ≥ 0 for all x, y ∈ X, u ∈ Ax and v ∈ Ay. A monotone operator A is said to be maximal monotone if there is no proper monotone extension of A.
Definition 2.1 (Kato 1964) Let C be a nonempty subset of X and F : C → X an operator. Then F is said to be hemi-continuous at x ∈ C if for any y ∈ X and t > 0 such that x + ty ∈ C, we have F(x + ty) ⇀ Fx as t → 0⁺.

Let C be a nonempty closed convex subset of X. For every element x ∈ X, there exists a unique nearest point in C, denoted by P_C x, such that ‖x − P_C x‖ = inf{‖x − y‖ : y ∈ C}.
The operator P_C : X → C is called the projection operator onto C.

Lemma 2.1 (see Agarwal et al. 2009) Let C be a nonempty closed convex subset of X and P_C the metric projection from X onto C. Then the following hold: (a) ⟨x − P_C x, y − P_C x⟩ ≤ 0 for all x ∈ X and y ∈ C; (b) P_C is firmly nonexpansive.

The property (A) plays an important role in finding fixed points (common fixed points) of nonexpansive operators (families of nonexpansive operators) and zeros of monotone operators (see Sahu et al. 2012, 2016, 2020). Following Sahu et al. (2020), we define the property (A) of an operator S : X → C with respect to an operator T : X → X, where C is a nonempty subset of X.

Definition 2.2 Let C be a nonempty subset of X and let S : X → C and T : X → X be operators. We say that the operator S has the property (A) with respect to the operator T if the following holds: for any bounded sequence {x_n} in X, lim_{n→∞} ‖x_n − T x_n‖ = 0 implies lim_{n→∞} ‖x_n − S x_n‖ = 0.

Definition 2.3 Let C be a nonempty subset of X and S : X → C an operator. Then I − S is said to be demiclosed at zero if, whenever {x_n} is a sequence in X such that x_n ⇀ x for some x ∈ X and x_n − Sx_n → 0, we have x − Sx = 0.

Following Sahu (2020, Proposition 2.2), we have

Proposition 2.1 Let C be a nonempty weakly closed subset of X and S : X → C an operator such that Fix(S) ≠ ∅ and I − S is demiclosed at zero. Let {x_n} be a sequence in X such that lim_{n→∞} ‖x_n − v‖ exists for all v ∈ Fix(S) and lim_{n→∞} ‖x_n − Sx_n‖ = 0. If {x_n} has a weakly convergent subsequence, then {x_n} converges weakly to a fixed point of S.
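The nearest-point property of P_C and the standard variational characterization ⟨x − P_C x, y − P_C x⟩ ≤ 0 for all y ∈ C (Lemma 2.1) can be checked numerically; the set C (the closed unit ball) and the sample points below are illustrative:

```python
import numpy as np

# Numerical check of the projection onto C = closed unit ball: P_C x is the
# nearest point of C to x, and <x - P_C x, y - P_C x> <= 0 for every y in C.

def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(0)
x = np.array([2.0, -1.0])
p = proj_ball(x)
ys = [proj_ball(y) for y in rng.normal(size=(1000, 2))]   # random points of C

ok_char = max((x - p) @ (y - p) for y in ys) <= 1e-12              # Lemma 2.1(a)
ok_nearest = all(np.linalg.norm(x - p) <= np.linalg.norm(x - y) + 1e-9
                 for y in ys)                                      # nearest point
print(ok_char, ok_nearest)  # True True
```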
Lemma 2.2 (see Bauschke and Combettes 2011) The following identity holds: ‖tx + (1 − t)y‖² = t‖x‖² + (1 − t)‖y‖² − t(1 − t)‖x − y‖² for all t ∈ R and x, y ∈ X.
Let x, y ∈ X, α ≥ 0 and ρ > 0. Following Sahu (2020), suppose that Σ_{n=1}^∞ b_n < ∞ and that there exists a real number θ with 0 ≤ θ_n ≤ θ < 1 for all n ∈ N. Then the following hold:

Lemma 2.5 Let C be a nonempty subset of X, v ∈ C and {x_n}_{n≥0} a sequence in X satisfying the stated inequality, where ε ∈ (0, ∞) and {θ_n} is an increasing sequence in [0, 1). Assume that ϕ_1 > 0 and that there exists θ ∈ (0, 1) such that θ_n ≤ θ for all n ∈ N. Then we have the following. The assumption "ϕ_1 > 0" holds automatically if x_0 = x_1 or θ_1 = 0.

Lemma 2.6 Let C be a nonempty subset of X and let T : C → X be a κ-strongly quasi-nonexpansive operator. Then, for α ∈ R, x ∈ X and v ∈ Fix(T), we have:

Accelerated computation methods for FPP (1) and their convergence analysis

Motivated by the idea adopted in Sahu et al. (2020), our main goal in this section is to introduce and study the convergence analysis of relaxed inertial iterative methods based on the Mann iteration method and the normal S-iteration method, which involve an algorithmic operator T for solving FPP (1). First, we introduce the notion of an algorithmic operator for computation of solutions of FPP (1).

Definition 3.1 Let C be a nonempty subset of X and let S : X → C be a given operator. Then an operator T : X → X is said to be an algorithmic operator for FPP (1) if the following condition (A0) holds: (A0) Fix(S) ⊆ Fix(T) and the operator S has property (A) with respect to the operator T.
We say that S : X → C is a seed operator for computation of solutions of FPP (1).

Remark 3.1 If the seed operator S itself is taken as the algorithmic operator (i.e., T = S), then the assumption (A0) holds automatically.

The relaxed inertial Mann iteration method: RI-MIM
In this section, we introduce and analyse the relaxed inertial Mann iteration method for computation of solutions of FPP (1) and study its weak convergence and convergence rate in a Hilbert space. We introduce the relaxed inertial Mann iteration method (RI-MIM) for computation of solutions of FPP (1) as follows.

Algorithm 3.1 Let C be a nonempty subset of X and let S : X → C be a given operator such that Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1). Choose x_0 = x_1 ∈ C and compute

w_n = x_n + θ_n(x_n − x_{n−1}), x_{n+1} = (1 − α_n)w_n + α_n T w_n, (13)

where {α_n} is a sequence in (0, ∞) and {θ_n} is a sequence in [0, 1).

Remark 3.2
In view of the relaxed inertial Mann iteration method (13), we have the following observations: (i) The orbit {x_n} of Algorithm 3.1 is well defined without the convexity of C. (ii) The initial points x_0 and x_1 can also be chosen from X. (iii) In the papers Kanzow and Shehu (2017) and Zhang et al. (2019), the authors studied the convergence analysis of Mann-like iteration methods for fixed points of nonexpansive operators in Hilbert and Banach spaces, respectively. Our approach is different from theirs.

Now our goal is to analyse the relaxed inertial Mann iteration method (13) when the involved algorithmic operator is strongly quasi-nonexpansive. More precisely, for the convergence analysis of Algorithm 3.1, we assume that the algorithmic operator T : X → X is κ-strongly quasi-nonexpansive and the sequences {α_n} and {θ_n} satisfy the conditions (C1), (C2) and (C3). In view of the condition (C1), one can take α_n = 1 for all n ∈ N. In this case, the relaxed inertial Mann iteration method (13) reduces to the inertial Picard iteration method:

w_n = x_n + θ_n(x_n − x_{n−1}), x_{n+1} = T w_n. (15)

Remark 3.3 For α_n = b = 1 for all n ∈ N, the condition (C3) reduces to the condition (C4): (C4) there exists θ ∈ (0, 1) such that θ_n ≤ θ < δ(κ) for all n ∈ N.
First, we study some basic properties of the orbit of the relaxed inertial Mann iteration method (13) for solving FPP (1).
Proposition 3.1 Let C be a nonempty subset of X and S : X → C an operator such that Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1) such that T is κ-strongly quasi-nonexpansive. For x_0 = x_1 ∈ C, let {x_n} be a sequence in X generated by the relaxed inertial Mann iteration method (13), where the sequences {α_n} and {θ_n} satisfy the conditions (C1), (C2) and (C3). Then, for v ∈ Fix(S), the following hold:

We are now in a position to establish the convergence theorem for the relaxed inertial Mann iteration method (13).
Theorem 3.2 Let C be a nonempty weakly closed subset of X and S : X → C an operator such that I − S is demiclosed at zero and Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1) such that T is uniformly continuous and κ-strongly quasi-nonexpansive. For x_0 = x_1 ∈ C, let {x_n} be a sequence in X generated by the relaxed inertial Mann iteration method (13), where the sequences {α_n} and {θ_n} satisfy the conditions (C1), (C2) and (C3). Then {x_n} converges weakly to an element of Fix(S).
Proof Proposition 3.1 shows that lim_{n→∞} ‖x_n − w_n‖ = 0 and lim_{n→∞} ‖w_n − T w_n‖ = 0. By the uniform continuity of T, we have lim_{n→∞} ‖x_n − T x_n‖ = 0. Observe that (i) lim_{n→∞} ‖x_n − z‖ exists for all z ∈ Fix(S), (ii) lim_{n→∞} ‖x_n − Sx_n‖ = 0 by the property (A), and (iii) I − S is demiclosed at zero. Proposition 2.1 shows that {x_n} converges weakly to an element of Fix(S).
We derive the following result in which T is not necessarily uniformly continuous.
Corollary 3.1 Let C be a nonempty weakly closed subset of X and S : X → C an operator such that I − S is demiclosed at zero and Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1) such that T is κ-strongly quasi-nonexpansive. For x_1 ∈ C, let {x_n} be a sequence in X generated by the relaxed Mann iteration method (14), where the sequence {α_n} satisfies the condition (C1). Then {x_n} converges weakly to an element of Fix(S).
Proof For v ∈ Fix(S), from (55) (see Appendix-II), we have the required estimate. Observe that lim_{n→∞} ‖x_n − z‖ exists for all z ∈ Fix(S) and lim_{n→∞} ‖x_n − Sx_n‖ = 0 by the property (A). Since I − S is demiclosed at zero, we conclude from Proposition 2.1 that {x_n} converges weakly to an element of Fix(S).

The relaxed inertial normal S-iteration method: RInSIM
In this section, we introduce another relaxed inertial iterative method for computation of solutions of FPP (1) and study its weak convergence and convergence rate in a Hilbert space. This relaxed inertial iterative method is based on the normal S-iteration method (NSIM) (Sahu 2011).
The normal S-iteration method (NSIM) was introduced by Sahu (2011) as follows:

y_n = (1 − α_n)x_n + α_n T x_n, x_{n+1} = T y_n, (16)

where {α_n} is a real sequence in (0, 1) and T : C → C is nonexpansive on a nonempty closed convex set C of a suitable Banach space. In recent years, the S-iterative methodology has been applied to various nonlinear problems, inclusion problems, optimization problems and image recovery problems (see Agarwal et al. 2007, 2009; Sahu 2011, 2020; Sahu et al. 2017, 2019, 2020). We now introduce the relaxed inertial normal S-iteration method (RI-NSIM) for computation of solutions of FPP (1).
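To see why the extra application of T can pay off, one can compare the Mann iteration with NSIM on a toy nonexpansive operator (a metric projection); the operator, starting point, tolerance and step size below are illustrative choices:

```python
import numpy as np

# Mann iteration versus the normal S-iteration x_{n+1} = T((1-a)x_n + a T x_n)
# for the nonexpansive operator T = projection onto the closed unit ball.
# Both counts are the number of iterations needed to reach the tolerance.

def T(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def residual(x):
    return np.linalg.norm(x - T(x))

def mann(x, alpha=0.5, tol=1e-6, max_iter=1000):
    for k in range(max_iter):
        if residual(x) < tol:
            return k
        x = (1 - alpha) * x + alpha * T(x)
    return max_iter

def nsim(x, alpha=0.5, tol=1e-6, max_iter=1000):
    for k in range(max_iter):
        if residual(x) < tol:
            return k
        x = T((1 - alpha) * x + alpha * T(x))   # extra application of T
    return max_iter

m = mann(np.array([3.0, 4.0]))
s = nsim(np.array([3.0, 4.0]))
print(m, s)  # NSIM reaches the tolerance in far fewer iterations here
```

On this toy operator, the closing application of T lands the iterate on the fixed-point set immediately, which illustrates (but of course does not prove) the acceleration effect discussed in the paper.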

Algorithm 3.3 Let C be a nonempty subset of X and let S : X → C be a given operator such that Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1).
Choose x_0 = x_1 ∈ C and compute the (n + 1)th iterate as follows:

w_n = x_n + θ_n(x_n − x_{n−1}), y_n = (1 − α_n)w_n + α_n T w_n, x_{n+1} = T y_n. (17)

For θ_n = 0 for all n ∈ N, the relaxed inertial normal S-iteration method (17) reduces to

y_n = (1 − α_n)x_n + α_n T x_n, x_{n+1} = T y_n, (18)

where {α_n} is a sequence in (0, α_max) for some α_max ∈ (1, ∞). We call (18) the relaxed normal S-iteration method.

Remark 3.4
The relaxed normal S-iteration method (18) differs from the normal S-iteration method (16) in the following respects: (i) the iteration parameter {α_n} lies in the interval (0, α_max); (ii) for solving FPP (1), the given operator S is not involved in Algorithm (18).
We now study some basic properties of the orbit of the relaxed inertial normal S-iteration method (17) for solving FPP (1).

Proposition 3.2 Let C be a nonempty subset of X and S : X → C an operator such that Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1) such that T is β-Lipschitz continuous and κ-strongly quasi-nonexpansive. For x_0 = x_1 ∈ C, let {x_n} be a sequence in X generated by the relaxed inertial normal S-iteration method (17), where the sequences {α_n} and {θ_n} satisfy the conditions (C1), (C2) and (C5): (C5) there exists θ ∈ (0, 1) such that θ_n ≤ θ < δ(E) for all n ∈ N, where δ is given in (12). Then, for v ∈ Fix(S), the following hold:

Now we are in a position to state our weak convergence theorem for the relaxed inertial normal S-iteration method (17).
Theorem 3.4 Let C be a nonempty weakly closed subset of X and S : X → C an operator such that I − S is demiclosed at zero and Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1) such that T is β-Lipschitz continuous and κ-strongly quasi-nonexpansive. For x_0 = x_1 ∈ C, let {x_n} be a sequence in X generated by the relaxed inertial normal S-iteration method (17), where {α_n} is a sequence in (0, ∞) and {θ_n} is a sequence in [0, 1) satisfying the conditions (C1), (C2) and (C5). Then {x_n} converges weakly to an element of Fix(S).

Proof Proposition 3.2(c) shows that lim_{n→∞} ‖x_n − T x_n‖ = 0. Observe that (i) lim_{n→∞} ‖x_n − z‖ exists for all z ∈ Fix(S), (ii) lim_{n→∞} ‖x_n − Sx_n‖ = 0 by the property (A), and (iii) I − S is demiclosed at zero. Proposition 2.1 shows that {x_n} converges weakly to an element of Fix(S).
Corollary 3.2 Let C be a nonempty weakly closed subset of X and S : X → C an operator such that I − S is demiclosed at zero and Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1) such that T is β-Lipschitz continuous and κ-strongly quasi-nonexpansive. For x_1 ∈ C, let {x_n} be a sequence in X generated by the relaxed normal S-iteration method (18), where {α_n} is a sequence in (0, ∞). Then we have the following:

Proof For v ∈ Fix(S), from (62) (see Appendix-III), we have the inequality (21). (a) Suppose that {α_n} satisfies the condition (C1). From (21), lim_{n→∞} ‖x_n − z‖ exists for all z ∈ Fix(S) and lim_{n→∞} ‖x_n − Sx_n‖ = 0 by the property (A). Since I − S is demiclosed at zero, Proposition 2.1 shows that {x_n} converges weakly to an element of Fix(S). (b) Suppose that T is β-Lipschitz continuous and 0 < α_n ≤ 1 + κ for all n ∈ N. For v ∈ Fix(S), from (21), we deduce that lim_{n→∞} ‖x_n − z‖ exists for all z ∈ Fix(S) and lim_{n→∞} ‖y_n − T y_n‖ = 0. From (65) (see Appendix-III), we obtain lim_{n→∞} ‖x_n − T x_n‖ = 0 and hence lim_{n→∞} ‖x_n − Sx_n‖ = 0 by the property (A). As in part (a), we see that {x_n} converges weakly to an element of Fix(S).
We observe the following facts: (i) Theorem 3.4 together with Proposition 3.2 establishes that the sequence {x_n} generated by the relaxed inertial normal S-iteration method (17) converges weakly to an element of Fix(S) with the convergence rate given in (20). (ii) In view of Remark 3.4, Theorem 3.4 is new in the literature.

Some deductions for nonexpansive-type operators
In view of Remark 3.1, we have

Theorem 3.5 Let C be a nonempty weakly closed subset of X and T : X → C a κ-strongly quasi-nonexpansive operator such that I − T is demiclosed at zero. For x_0 = x_1 ∈ C, let {x_n} be a sequence in X generated by the relaxed inertial Mann iteration method (13), where {θ_n} is a sequence in [0, 1) satisfying the conditions (C1), (C2) and (C3). Then {x_n} converges weakly to an element of Fix(T).
Proof Note that the assumption (A0) holds with S = T. Therefore, Theorem 3.5 follows from Theorem 3.2.
Similarly, from Theorem 3.4, we have

Theorem 3.6 Let C be a nonempty weakly closed subset of X and T : X → C a κ-strongly quasi-nonexpansive operator such that T is β-Lipschitz continuous and I − T is demiclosed at zero. For x_0 = x_1 ∈ C, let {x_n} be a sequence in X generated by the relaxed inertial normal S-iteration method (17), where {α_n} is a sequence in (0, ∞) and {θ_n} is a sequence in [0, 1) satisfying the conditions (C1), (C2) and (C5). Then {x_n} converges weakly to an element of Fix(T).
Remark 3.5 Sahu (2020, Theorem 3.1) studied weak convergence of the inertial Mann iteration method (13) for computation of fixed points of quasi-nonexpansive operators T : C → C, where the domain C is necessarily affine. Our approach in Theorem 3.5 is different from that of Sahu (2020, Theorem 3.1).
If the algorithmic operator T is κ-strongly quasi-nonexpansive for solving FPP (1), then, in view of (C4) (see Remark 3.3) and Theorem 3.2, we have

Corollary 3.3 Let C be a nonempty weakly closed subset of X and S : X → C an operator such that I − S is demiclosed at zero and Fix(S) ≠ ∅. Suppose that T : X → X is an algorithmic operator for FPP (1) such that T is κ-strongly quasi-nonexpansive. For x_0 = x_1 ∈ C, let {x_n} be a sequence in X generated by the inertial Picard iteration method (15), where {θ_n} is a non-decreasing sequence in [0, 1) for which there exists θ ∈ (0, 1) such that θ_n ≤ θ < δ(κ) for all n ∈ N. Then {x_n} converges weakly to an element of Fix(S).

In particular, for the class of firmly nonexpansive operators, we have

Corollary 3.4 Let T : X → X be a firmly nonexpansive operator such that Fix(T) ≠ ∅. Assume that {θ_n} is a non-decreasing sequence in [0, 1) and there exists θ ∈ (0, 1) such that θ_n ≤ θ < δ(1) for all n ∈ N. Then, for x_0 = x_1 ∈ X, the sequence {x_n} in X generated by the inertial Picard iteration method (15) converges weakly to an element of Fix(T).
Corollary 3.5 Let T : X → X be a firmly nonexpansive operator such that Fix(T) ≠ ∅. For x_0 = x_1 ∈ X, let {x_n} be a sequence in X generated by the relaxed inertial normal S-iteration method (17), where the sequences {α_n} and {θ_n} satisfy the conditions (C1), (C2) and (C5). Then {x_n} converges weakly to an element of Fix(T).
From Corollary 3.5, we derive the following new result:

Theorem 3.7 Let A : X ⇒ X be a maximally monotone operator such that A⁻¹(0) ≠ ∅. For λ ∈ (0, ∞) and x_0 = x_1 ∈ X, let {x_n} be a sequence in X generated by the relaxed inertial normal S-iteration method

w_n = x_n + θ_n(x_n − x_{n−1}), y_n = (1 − α_n)w_n + α_n J_λ^A w_n, x_{n+1} = J_λ^A y_n,

where J_λ^A = (I + λA)⁻¹ is the resolvent of A and {α_n} and {θ_n} satisfy all the corresponding assumptions of Corollary 3.5. Then {x_n} converges weakly to an element of A⁻¹(0).
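A minimal numerical sketch in the spirit of Theorem 3.7, assuming the standard resolvent J_λ^A = (I + λA)⁻¹: we take A = ∂|·| on R, whose resolvent is soft-thresholding and whose unique zero is 0; the parameters α, θ and λ below are illustrative choices:

```python
import numpy as np

# Relaxed inertial normal S-iteration applied to the resolvent of the
# maximally monotone operator A = subdifferential of |.| on R.  The
# resolvent J_lam is soft-thresholding; the unique zero of A is 0.

def J(x, lam=1.0):
    # resolvent of A = d|.|: soft-thresholding
    return np.sign(x) * max(abs(x) - lam, 0.0)

def ri_nsim_resolvent(x0, alpha=0.8, theta=0.2, iters=100):
    x_prev, x = x0, x0
    for _ in range(iters):
        w = x + theta * (x - x_prev)        # inertial step
        y = (1 - alpha) * w + alpha * J(w)  # relaxed Mann-type step
        x_prev, x = x, J(y)                 # closing resolvent application
    return x

print(ri_nsim_resolvent(5.0))  # converges to the zero of A, i.e. 0.0
```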

Applications to pseudo-monotone variational inequality problems
In a Hilbert space, the relationship between the variational inequality problem VI(C, F) and a fixed point problem can be made through the following characterization of the projection operator P_C:

Lemma 4.1 Let C be a nonempty closed convex subset of X and F : X → X a given operator. Then Ω[VI(C, F)] = Fix(P_C(I − μF)) for any μ ∈ (0, ∞).
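Lemma 4.1 can be verified numerically on a toy instance: with F(x) = x − b (the gradient of a strictly convex quadratic) and C = [0, 1]², the VI solution is x* = P_C(b), and the fixed point equation x* = P_C(x* − μF x*) holds for every μ > 0; all data below are illustrative:

```python
import numpy as np

# Check of Lemma 4.1: x* solves VI(C, F) iff x* = P_C(x* - mu F x*), mu > 0.

b = np.array([2.0, -0.5])
F = lambda x: x - b                      # gradient of 0.5*||x - b||^2
proj_C = lambda x: np.clip(x, 0.0, 1.0)  # projection onto the box [0, 1]^2

x_star = proj_C(b)                       # VI solution for this F and C
checks = [np.allclose(proj_C(x_star - mu * F(x_star)), x_star)
          for mu in (0.1, 0.5, 1.0)]
print(checks)  # [True, True, True]
```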
Let us recall the following result.
Lemma 4.2 (see Zeidler 2013) Let C be a nonempty closed convex subset of X and let F : X → X be a hemi-continuous pseudo-monotone operator. Then the variational inequality problem VI(C, F), find x* ∈ C such that ⟨F x*, y − x*⟩ ≥ 0 for all y ∈ C, is equivalent to the dual variational inequality problem DVI(C, F): find x* ∈ C such that ⟨F y, y − x*⟩ ≥ 0 for all y ∈ C.

Proposition 4.1 Let C be a nonempty closed convex subset of X and let F : X → X be a hemi-continuous monotone operator satisfying the condition (B): (B) F maps each bounded set into a bounded set.
Proof Since F is a hemi-continuous monotone operator, from Lemma 4.2, we have Ω[VI(C, F)] = Ω[DVI(C, F)]. Let {x_n} be a sequence in X such that x_n ⇀ z ∈ X and (I − S_λ)x_n → 0. We now show that z ∈ Fix(S_λ). From Lemma 2.1(a) and the monotonicity of F, the required inequality follows. Note that (i) F satisfies the condition (B); (ii) the operator S_λ defined by (22) is (1 + λL)-Lipschitz continuous: indeed, for u, v ∈ X, we have ‖S_λ u − S_λ v‖ = ‖P_C(u − λFu) − P_C(v − λFv)‖ ≤ ‖(u − v) − λ(Fu − Fv)‖ ≤ (1 + λL)‖u − v‖; and (iii) in Propositions 4.1 and 4.2, both the hemi-continuity of F and the condition (B) are satisfied.
Let C be a nonempty closed convex subset of X and F : X → X a pseudo-monotone operator. For the computational theory of solutions of VI(C, F), we consider three extragradient operators as below. For λ ∈ (0, ∞), following Ansari and Sahu (2016), we define the operators

E_λ x = P_C(x − λF S_λ(x)), (24)

T_λ x = S_λ(x) − λ(F S_λ(x) − F x), (25)

and

U_λ x = P_{H_x}(x − λF S_λ(x)), (26)

respectively, where S_λ : X → C is the operator defined by (22) and H_x = {w ∈ X : ⟨x − λF x − S_λ(x), w − S_λ(x)⟩ ≤ 0}. We now explore some basic properties of the extragradient operators E_λ, T_λ and U_λ. In particular, we show that all three extragradient operators E_λ, T_λ and U_λ are strongly quasi-nonexpansive.
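A small numerical sketch, assuming the standard forms E_λx = P_C(x − λF S_λx), T_λx = S_λx − λ(F S_λx − Fx) and U_λx = P_{H_x}(x − λF S_λx) built from S_λx = P_C(x − λFx): on an illustrative problem, the VI solution is a common fixed point of all three operators:

```python
import numpy as np

# Common-fixed-point check for the three extragradient operators (their
# formulas are assumed standard forms here).  Illustrative data:
# F(x) = x - b is monotone (hence pseudo-monotone), C = [0, 1]^2, lam = 0.5.

b = np.array([2.0, -0.5])
F = lambda x: x - b
P_C = lambda x: np.clip(x, 0.0, 1.0)
lam = 0.5

S_lam = lambda x: P_C(x - lam * F(x))                    # seed operator (22)
E_lam = lambda x: P_C(x - lam * F(S_lam(x)))             # Korpelevich-type
T_lam = lambda x: S_lam(x) - lam * (F(S_lam(x)) - F(x))  # Tseng-type

def U_lam(x):                                            # subgradient-type
    y = S_lam(x)
    a = x - lam * F(x) - y                               # normal of H_x
    z = x - lam * F(y)
    s = a @ (z - y)                                      # project z onto H_x
    return z if a @ a == 0 or s <= 0 else z - (s / (a @ a)) * a

x_star = P_C(b)                                          # solution of VI(C, F)
fixed = [np.allclose(op(x_star), x_star) for op in (S_lam, E_lam, T_lam, U_lam)]
print(fixed)  # [True, True, True, True]
```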
Proposition 4.3 Let C be a nonempty closed convex subset of X and F : X → X a pseudo-monotone and L-Lipschitz continuous operator such that Ω[VI(C, F)] ≠ ∅. For λ ∈ (0, ∞), let E_λ : X → C be the extragradient operator defined by (24). Then we have the following: (d) For λ ∈ (0, 1/L), the operator S_λ has the property (A) with respect to the operator T_λ.
Proposition 4.5 Let C be a nonempty closed convex subset of X and F : X → X a pseudo-monotone and L-Lipschitz continuous operator such that Ω[VI(C, F)] ≠ ∅. For λ ∈ (0, ∞), define the operator U_λ : X → X by (26). Then we have the following: By the pseudo-monotonicity of F, we have ⟨F y, y − v⟩ ≥ 0. Hence, by the definition of H_x, from (26) and (28), we obtain the required inequality. (For the proof of Proposition 4.5, see Appendix-VI, p. 31.)
Hence, from (29), we get the conclusion. All other assertions of Proposition 4.5 are proved as in Proposition 4.3.

Remark 4.2
We observe the following: (i) In the light of Lemma 4.1 and Propositions 4.3, 4.4 and 4.5, we see that the operators E_λ : X → X, T_λ : X → X and U_λ : X → X are algorithmic operators for computation of solutions of FPP (1) with S = S_λ. (ii) All three algorithmic operators E_λ, T_λ and U_λ belong to the class of strongly quasi-nonexpansive operators, which settles the question of unifying the extragradient methods KEM (4), TEM (7) and SEM (8) for pseudo-monotone variational inequality problems. This provides an affirmative answer to (Q1).
In view of Remark 4.2(i), various inertial and non-inertial extragradient methods for solving the variational inequality problem VI(C, F) can be derived by applying fixed point algorithms (see Agarwal et al. 2007; Mann 1953; Sahu 2011, 2020; Sahu et al. 2019).
Theorem 4.3 Let C be a nonempty closed convex subset of X and F : X → X a pseudo-monotone and L-Lipschitz continuous operator such that Ω[VI(C, F)] ≠ ∅. For λ ∈ (0, 1/L) and x_0 = x_1 ∈ C, let {x_n} be a sequence in X generated by RInS-KEM (32) or RInS-SEM (38), where the sequences {α_n} and {θ_n} satisfy the conditions (C1), (C2) and (C5). Then {x_n} converges weakly to an element of Ω[VI(C, F)].

Remark 4.4 Thong and Hieu (2018) studied weak convergence of I-TEM (34) for solving VI(C, F) under the condition (39) involving ε = (1 − λL)/(1 + λL), where F is a monotone and L-Lipschitz continuous operator. Clearly, the condition (39) depends on the constants λ and L. By contrast, Theorem 4.5 shows the weak convergence of I-TEM (34) under the simpler condition (40), which is independent of the constants λ and L.
(ii) In Sect. 3, the rate of convergence of the relaxed inertial Mann iterative method and the rate of convergence of the relaxed inertial normal S-iterative method are estimated. Note that all three algorithmic operators E_λ, T_λ and U_λ are strongly quasi-nonexpansive operators. This provides an affirmative answer to (Q2) in the light of the convergence rates of the relaxed inertial Mann iterative method (13) and the relaxed inertial normal S-iterative method (17).
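A toy comparison (ours; a one-dimensional ρ-contraction, not the operators of Sect. 4) illustrating why one normal S-iteration step tends to beat one Mann step: for a linear ρ-contraction T, a Mann step contracts the error by (1 − α + αρ), whereas a normal S-step x_{n+1} = T((1 − α_n)x_n + α_n T x_n) contracts it by ρ(1 − α + αρ):

```python
rho = 0.9
T = lambda x: rho * x            # toy contraction with Fix(T) = {0}
alpha = 0.5

x_mann, x_s = 1.0, 1.0
for _ in range(50):
    # Mann iteration: x_{n+1} = (1 - a) x_n + a T x_n
    x_mann = (1 - alpha) * x_mann + alpha * T(x_mann)
    # normal S-iteration: x_{n+1} = T((1 - a) x_n + a T x_n)
    y = (1 - alpha) * x_s + alpha * T(x_s)
    x_s = T(y)

print(abs(x_s) < abs(x_mann))    # normal S-iteration error is smaller
```

With ρ = 0.9 and α = 0.5, the per-step factors are 0.95 (Mann) versus 0.855 (normal S), so the gap widens geometrically.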

Applications to pseudo-convex optimization problems
Let G be a nonempty open set in R^m and f : G → R a differentiable function. Then f is called pseudo-convex on G if, for all x, y ∈ G, ⟨∇f(y), x − y⟩ ≥ 0 implies f(x) ≥ f(y). It is well known that f is pseudo-convex on G if and only if ∇f is pseudo-monotone on G.
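As a quick numerical sanity check of this definition (our illustration, not from the paper): the function f(t) = t + t³ is strictly increasing, hence pseudo-convex on R, but not convex (f″(t) = 6t < 0 for t < 0). The defining implication can be tested on random pairs:

```python
import numpy as np

def f(t):
    return t + t ** 3

def grad_f(t):
    return 1.0 + 3.0 * t ** 2    # always positive

rng = np.random.default_rng(0)
violations = 0
for _ in range(10_000):
    x, y = rng.uniform(-5.0, 5.0, size=2)
    # pseudo-convexity: <grad f(y), x - y> >= 0  implies  f(x) >= f(y)
    if grad_f(y) * (x - y) >= 0 and f(x) < f(y):
        violations += 1

print(violations)
```

No violation is found, as expected, since grad_f > 0 forces x ≥ y and f is increasing.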
Consider the pseudo-convex optimization problem (42): minimize f(x) subject to x ∈ C, where f : G ⊆ R^m → R is a differentiable function with Lipschitz continuous gradient, which is also pseudo-convex on an open convex set G, and C is a nonempty closed convex subset of G. Thus, the relaxed extragradient methods studied in Sect. 4.1 can be applied to solve the pseudo-convex optimization problem (42). We now give an example of a pseudo-convex function for numerical solutions of the pseudo-convex optimization problem (42).

Example 4.1 Let
Then we have the following: We observe that Ψ(t) ≤ 125/(54d) for all t ≥ 0. Thus, F is 125/(54d)-Lipschitz continuous on X. The graph of the function Ψ is given in Fig. 1 for d = 0.8, 1, 1.2.
Following Sahu and Singh (2021, Example 2.1), we have the following examples.

Example 5.1 Let F : R^3 → R^3 be the operator defined by (45), where r, s, t are real numbers. Then F is monotone and L-Lipschitz continuous.

Example 5.2 Let U, V : X → X be bounded linear operators such that U is self-adjoint, ⟨V x, x⟩ ≥ 0 and ⟨U x, x⟩ ≥ η‖x‖^2 for all x ∈ X, where η > 0. Let F : X → X be the operator defined in terms of U, V, a scalar α ≥ 0 and q ∈ X. Then F is pseudo-monotone.

Example 5.3 The operator F is defined by F(x) := Mx + q for all x ∈ R^m, where M = BB^T + S + D, the matrices B, S, D ∈ R^{m×m} are randomly generated such that S is skew-symmetric and D is a positive definite diagonal matrix (hence the variational inequality has a unique solution), and q = 0.
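A minimal sketch (ours) of how an instance of Example 5.3 can be generated and sanity-checked; the reading M = BBᵀ + S + D, the dimension, the seed and the diagonal range are all assumptions:

```python
import numpy as np

m = 50
rng = np.random.default_rng(1)
B = rng.standard_normal((m, m))
K = rng.standard_normal((m, m))
S = K - K.T                                  # skew-symmetric: S^T = -S
D = np.diag(rng.uniform(1.0, 2.0, size=m))   # positive definite diagonal
M = B @ B.T + S + D
q = np.zeros(m)

def F(x):
    return M @ x + q

# x^T M x = x^T B B^T x + x^T D x > 0 for x != 0 (the skew part contributes
# nothing), so F is strongly monotone and VI(C, F) has a unique solution.
x = rng.standard_normal(m)
quad = x @ (M @ x)
print(quad > 0)
```

The strict positivity of the quadratic form is exactly what guarantees uniqueness of the solution claimed in the example.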

Numerical experiments based on Example 5.1
Choose r = 1, s = −1, t = 1 in Example 5.1. Then the operator F defined by (45) is monotone and L-Lipschitz continuous on R^3, where L = √18. It is easy to verify that the point x* = (0, 0, 0) ∈ R^3 is a solution of the problem VI(C, F).
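Since the matrix in (45) is not reproduced here, the following sketch runs Korpelevich's extragradient method KEM (4) on a hypothetical skew-symmetric (hence monotone) affine operator, with C the Euclidean ball of radius 2; the step size, radius and iteration count are assumptions:

```python
import numpy as np

A = np.array([[0.0, 1.0, -1.0],
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])    # skew-symmetric, so F(x) = Ax is monotone
L = np.linalg.norm(A, 2)            # Lipschitz constant of F
lam = 0.4 / L                       # step size lambda in (0, 1/L)

def F(x):
    return A @ x

def proj_C(x, radius=2.0):          # projection onto a Euclidean ball
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x = np.array([1.0, 0.0, 0.0])
for _ in range(500):
    y = proj_C(x - lam * F(x))      # prediction step
    x = proj_C(x - lam * F(y))      # correction step

residual = np.linalg.norm(F(x))     # F vanishes at the interior solution
print(residual)
```

The double projection per iteration is the hallmark of KEM; SEM (8) replaces the second projection by a projection onto a half-space.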
Tables 3, 4 and 5 demonstrate that the RInS-SEM (38)-based inertial normal S-iterative method performs better than the other inertial algorithms. Note that F = ∇f is pseudo-monotone and 125/(54d)-Lipschitz continuous on X = R^m; here we take m = 500 and d = 0.8, 1, 1.2.

Concluding remarks and further research
In this paper, we have achieved the goal of unifying and further accelerating three different extragradient methods by using an operator-theoretic methodology and the normal S-iterative methodology. More precisely, we developed a unified framework for the three well-known extragradient methods KEM (4), TEM (7) and SEM (8) and investigated a computational theory of inertial extragradient methods with relaxation parameters for solving infinite-dimensional variational inequality problems over a nonempty closed convex set governed by a pseudo-monotone and Lipschitz continuous operator. Indeed, we showed that inertial fixed point iterative methods with relaxation parameters can be transformed into inertial extragradient methods for efficiently solving variational inequality and pseudo-convex optimization problems. Our operator-theoretic methodology provides affirmative answers to questions (Q1) and (Q2). Our numerical experiments clearly show that the relaxed inertial normal S-iterative technique has better convergence behaviour than the relaxed inertial Mann iterative variants of Tseng's method, Korpelevich's extragradient method and the subgradient extragradient method. These methods may be applied to optimal control problems, signal/image recovery problems, etc. The strong convergence of the extragradient methods with relaxation parameters is still an open question and could be an interesting topic for future research.
Funding The authors have not disclosed any funding.
Appendix-I: Proof of Lemma 2.5 (a) From (11), we have the first estimate, and hence, since θ_n ≤ θ_{n+1} for all n ∈ N, the estimate can be continued. Note that K > 0 by the inequality (12). Hence ϕ_{n+1} ≤ ϕ_n for all n ∈ N; thus, the sequence {ϕ_n} is non-increasing. (b) For n ∈ N, we have the stated identities. Since ϕ_1 > 0, from (52) we obtain the next bound. Combining (53) and (54), and then using (51) and (50), we conclude from Lemma 2.4 that lim_{n→∞} ‖x_n − v‖ exists.
Since ℓ := lim_{n→∞} ‖x_n − v‖ exists, it follows from lim_{n→∞} ‖x_n − w_n‖ = 0 that lim_{n→∞} ‖w_n − v‖ = ℓ. Hence, from (62), we obtain that lim_{n→∞} ‖w_n − T w_n‖ = 0 = lim_{n→∞} ‖y_n − T y_n‖.
Remark 4.5 (i) Theorem 4.4 is the inertial version of Bot et al. (2020, Theorem 3.1), whereas Theorem 4.6 is the inertial version of Bot et al. (2020, Theorem 3.1) based on the normal S-iteration method (16). (ii) Theorem 4.4 and Theorem 4.6 improve upon Bot et al. (2020, Theorem 3.1) and Thong and Hieu (2018, Theorem 3.3) in the context of the relaxed inertial variants of their extragradient methods.

In view of Corollaries 3.1 and 3.2 and Remark 4.3, we have

Theorem 4.7 Let C be a nonempty closed convex subset of X and F : X → X a pseudo-monotone and L-Lipschitz continuous operator such that Ω[VI(C, F)] ≠ ∅. Let λ ∈ (0, 1/L) and let {α_n} be a sequence satisfying the condition (C1). Then we have the following: (a) If κ = (1 − λL)/2 and {x_n} is a sequence in X generated by the relaxed Mann iteration-based Korpelevich extragradient method (RM-KEM), then {x_n} converges weakly to an element of Ω[VI(C, F)]. (b) If κ = (1 − λL)/(1 + λL) and {x_n} is a sequence in X generated by the relaxed Mann iteration-based Tseng extragradient method (RM-TEM):

Lemma 2.3 (see Dong 2015) Let {β_n} and {γ_n} be sequences of positive real numbers such that ∑_{n=1}^∞ β_n γ_n < ∞. Suppose that the sequence {β_n} is nonsummable and the sequence {γ_n} is decreasing. Then γ_n = o(1/∑_{k=1}^n β_k), where s_n = o(t_n) means lim_{n→∞} s_n/t_n = 0. Let {a_n}, {b_n} and {θ_n

Numerical experiments for pseudo-convex optimization problem (42) with Example 4.1
Consider the pseudo-convex optimization problem (42), where the objective function f is defined by (43) (see Example 4.1).
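The objective (43) is not reproduced here, so the following sketch (ours) applies Tseng's extragradient method TEM (7) to a simple convex quadratic (convex, hence pseudo-convex) over a box; all problem data are assumptions:

```python
import numpy as np

m = 20
a = 2.0 * np.ones(m)              # unconstrained minimizer of f, outside C

def grad_f(x):                    # f(x) = ||x - a||^2
    return 2.0 * (x - a)          # Lipschitz continuous with constant L = 2

L = 2.0
lam = 0.2                         # lambda in (0, 1/L)

def proj_C(x):                    # C = [0, 1]^m
    return np.clip(x, 0.0, 1.0)

x = np.zeros(m)
for _ in range(300):
    y = proj_C(x - lam * grad_f(x))
    # Tseng's correction: no second projection, one extra gradient call
    x = y - lam * (grad_f(y) - grad_f(x))

x_star = np.ones(m)               # minimizer of f over C
err = np.linalg.norm(x - x_star)
print(err)
```

Because TEM needs only one projection per iteration, it is attractive whenever projecting onto C is the dominant cost.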