A new self-adaptive iterative method for variational inclusion problems on Hadamard manifolds with applications

The objective of this work is to design a new iterative method, based on an Armijo-type modified extragradient method, for solving the inclusion problem of finding a point of (A + B)^{-1}(0), where A is a maximal monotone vector field and B is a continuous monotone vector field. The proposed method requires one projection at each iteration, reducing the computational cost and improving the convergence rate. A convergence theorem is established for the proposed extragradient method, significantly improving existing results. We provide concrete examples of Hadamard manifolds and convergence behavior for numerical confirmation. Moreover, we demonstrate convergence results for variational inequality problems in which the monotonicity of the vector field can be removed.


Introduction
The theory of variational inclusion problems has been studied in various spaces, namely Hilbert spaces and Banach spaces, in nonlinear analysis (see [1][2][3][4][5]). Let C be a nonempty closed convex subset of a Hilbert space X. The variational inclusion problem is to find x* ∈ C such that

x* ∈ (A + B)^{-1}(0), (1)

where A : C → 2^X is a set-valued map and B : C → X is a single-valued map; for more information, see [1][2][3][4] and the references therein. If we take B = 0, then problem (1) transforms into the following inclusion problem: find x* ∈ C such that

0 ∈ A(x*). (2)

Problem (2) was introduced by Martinet [6]. Furthermore, Rockafellar [7] proposed a proximal point algorithm for the solution of problem (2).
Over the last few decades, many researchers (see [1][2][3][4]) have investigated the problem of finding solutions to problem (1) in the setting of linear spaces and developed algorithms that converge to a fixed point of the mapping J^A_λ(I − λB), where λ > 0 and J^A_λ is the resolvent of the set-valued map A. Indeed, the fixed point of the mapping J^A_λ(I − λB) is a solution to problem (1) in Hilbert spaces (see [1]). Furthermore, in general, the operator I − λB is not nonexpansive in Banach spaces; therefore, Sahu et al. [5] introduced the concept of 'property (N)' for the nonexpansivity of the operator I − λB as follows: Let C be a nonempty closed convex subset of a Banach space X. An operator B : C → X is said to satisfy the property (N) on (0, γ_{X,B}) if there exists γ_{X,B} ∈ (0, ∞], depending on X and B, such that I − λB : C → C is nonexpansive for each λ ∈ (0, γ_{X,B}). The authors also proved that in a Hilbert space, every η-inverse strongly monotone operator satisfies the property (N) for each λ ∈ (0, 2η).
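For orientation, the Hilbert-space claim at the end of the previous paragraph can be checked in two lines; the following computation is standard and only illustrates the result of [5]: expand the square and apply the definition of η-inverse strong monotonicity, ⟨Bx − By, x − y⟩ ≥ η‖Bx − By‖².

```latex
\|(I-\lambda B)x-(I-\lambda B)y\|^{2}
 = \|x-y\|^{2} - 2\lambda\langle Bx-By,\,x-y\rangle + \lambda^{2}\|Bx-By\|^{2}
 \le \|x-y\|^{2} - \lambda(2\eta-\lambda)\|Bx-By\|^{2}
 \le \|x-y\|^{2}\qquad\text{whenever } \lambda\in(0,2\eta).
```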
On the other hand, if we replace A(x) by N_C(x) = {z ∈ X : ⟨z, y − x⟩ ≤ 0 for all y ∈ C}, the normal cone to C at x, then the variational inclusion problem (1) reduces to the classical variational inequality problem, shortly VI(B, C): find x* ∈ C such that ⟨B(x*), y − x*⟩ ≥ 0 for all y ∈ C.
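To make this reduction explicit, unfolding the inclusion 0 ∈ B(x*) + N_C(x*) with the definition of the normal cone gives exactly the variational inequality:

```latex
0 \in B(x^{*}) + N_{C}(x^{*})
\iff -B(x^{*}) \in N_{C}(x^{*})
\iff \langle -B(x^{*}),\, y - x^{*}\rangle \le 0 \ \ \forall y \in C
\iff \langle B(x^{*}),\, y - x^{*}\rangle \ge 0 \ \ \forall y \in C.
```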

The Minty variational inequality problem (MVI(B, C)) associated with VI(B, C) is given by: find u ∈ C such that ⟨B(y), u − y⟩ ≤ 0 for all y ∈ C. (3)

The projection methods play a crucial part in variational inequality theory and are one of the most effective and often used techniques to solve variational inequalities, see [8][9][10][11][12]. The projection method may only converge for variational inequality problems if the operator B is strongly monotone (see [13]). To obtain a convergent projection method for such problems, the extragradient (or double projection) method was proposed by Korpelevich [14], who provided a convergence result for the variational inequality problem under pseudomonotonicity and Lipschitz continuity of B. However, when B is not Lipschitz continuous, or the Lipschitz constant is unknown, the extragradient method may not be applied to solve the variational inequality directly. In this case, Solodov and Svaiter [11] proposed an extragradient method and provided a convergence result by taking a pseudomonotone operator B and the nonemptiness of the solution set of VI(B, C). In addition, they pointed out that their algorithm works well for non-Lipschitz mappings by taking an appropriate stepsize in each iteration through an Armijo-type search. In the method of Solodov and Svaiter [11], the projection onto the feasible set C is computationally expensive and the mapping B must be pseudomonotone on C.
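As a point of reference for the manifold algorithms below, here is a minimal Euclidean sketch of the extragradient idea with a backtracking (Armijo-type) stepsize. It is not the exact scheme of [11] or [14]: the operator F, the projection proj_C, the stepsize test and the toy problem are all illustrative choices.

```python
import numpy as np

def extragradient(F, proj_C, x0, lam0=1.0, sigma=0.9, eta=0.5,
                  tol=1e-8, max_iter=1000):
    """Korpelevich-style extragradient with a backtracking stepsize.

    The stepsize lam is shrunk until
        lam * ||F(x) - F(y)|| <= sigma * ||x - y||,
    which removes the need to know a Lipschitz constant of F in advance.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        lam = lam0
        y = proj_C(x - lam * F(x))                  # first (trial) projection
        while lam * np.linalg.norm(F(x) - F(y)) > sigma * np.linalg.norm(x - y):
            lam *= eta                              # backtrack
            y = proj_C(x - lam * F(x))
        if np.linalg.norm(x - y) <= tol:            # residual is small
            return y
        x = proj_C(x - lam * F(y))                  # second (extragradient) step
    return x

# Toy monotone VI on the box C = [0, 2]^2 with affine F (illustrative data).
M_mat = np.array([[2.0, 1.0], [-1.0, 2.0]])         # positive definite symmetric part
q = np.array([-1.0, -1.0])
sol = extragradient(lambda u: M_mat @ u + q,
                    lambda u: np.clip(u, 0.0, 2.0),
                    np.array([2.0, 2.0]))
print(sol)
```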
Later on, Ye and He [15] proposed a double projection method whose convergence requires only the continuity of B and the nonemptiness of the solution set of the Minty variational inequality problem. One of the advantages of this algorithm is that one can solve the variational inequality problem without monotonicity. However, one has to solve an optimization problem whose constraint set is the intersection of the constraint set of the previous iteration and a half-space. This may lead to significantly high computational costs as the dimension of the problem under consideration increases. To overcome this situation, in 2022, Dinh et al. [16] proposed a modified Solodov-Svaiter method in which only one projection needs to be calculated at each iteration.
In 2017, Ferreira [26] considered the inclusion problem for the sum of two vector fields in the setting of a Hadamard manifold M as follows: find x* ∈ M such that

0 ∈ A(x*) + B(x*), (4)

where A : M → 2^{TM} is a set-valued vector field and B : M → TM is a single-valued vector field.
Németh [18] introduced the variational inequality problem, shortly VI(B, C), in the setting of Hadamard manifolds as follows: find x* ∈ C such that

⟨B(x*), exp_{x*}^{-1} y⟩ ≥ 0 for all y ∈ C, (5)

where C is a nonempty closed geodesic convex subset of a Hadamard manifold M and B : M → TM is a vector field. The solution set of problem (5) is denoted by (VI(B, C)). In 2012, Tang and Huang [22] established Korpelevich's method and its convergence result for the variational inequality problem (5) when B is a pseudomonotone and continuous vector field. Later, Tang et al. [23] proposed a projection-type method, which is an improvement of the method given in [22]. In 2021, Chen et al. [28] introduced modified Tseng's extragradient methods with new steps to solve the variational inequality problem (5).
Later, in 2018, Ansari et al. [25] introduced Korpelevich's method to solve the inclusion problem (4) in Hadamard manifolds. Moreover, they proved that the fixed point of the map J^A_λ(exp_•(−λB(•))) is the solution of problem (4), where J^A_λ is the resolvent of the vector field A. Also, Ansari et al. [25] proposed an Armijo-type extragradient method in the framework of Hadamard manifolds. They proved that the sequence generated by their algorithm converges to the solution of problem (4) if A is a maximal monotone vector field and B is a monotone and continuous vector field. In 2019, Al-Homidan et al. [29] proposed Halpern- and Mann-type algorithms to solve problem (4) under an assumption on the vector field B: they assumed that the map exp_•(−λB(•)) : M → M satisfies the nonexpansivity condition

d(exp_x(−λB(x)), exp_y(−λB(y))) ≤ d(x, y) for all x, y ∈ M,

which is the property (N) in the sense of Hadamard manifolds (see [5, 30]). In 2022, Khammahawong et al. [31] introduced two Tseng-type methods which converge to the solution of the inclusion problem (4) under certain conditions.

Question 1 Is it possible to construct an algorithm that converges to the solution of the inclusion problem (4) without using the property (N) in Hadamard manifolds?
This paper aims to design an iterative algorithm with an Armijo-type line search for solving the inclusion problem (4) in Hadamard manifolds. We establish a convergence theorem for the proposed iterative algorithm without the property (N); the considered method is new and computationally less expensive in the Hadamard manifold setting. The proposed algorithm needs only one projection at each iteration, which significantly improves the rate of convergence and existing results. We provide concrete examples of Hadamard manifolds and convergence behavior for numerical confirmation. Moreover, we establish a convergence result for the nonmonotone variational inequality problem, in which the monotonicity of the vector field can be removed; this improves the outcomes of [8, 17, 22, 23, 31].

Preliminaries
This section recalls some fundamental definitions, properties and notations of Riemannian manifolds.For more intricate details, we refer to [32][33][34].
Let M be an m-dimensional differentiable manifold and x ∈ M. The set of all tangent vectors at the point x is called the tangent space of M at x, denoted by T_xM, which forms a vector space of dimension m. The tangent bundle of M is defined as TM = ∪_{x∈M} T_xM. If ⟨·,·⟩_x is an inner product on T_xM for every x ∈ M, then the smooth mapping ⟨·,·⟩ : x ↦ ⟨·,·⟩_x is called a Riemannian metric on M. The norm corresponding to the inner product ⟨·,·⟩_x on T_xM is denoted by ‖·‖_x. We omit the subscript x if no confusion occurs. A differentiable manifold M with the Riemannian metric ⟨·,·⟩ is said to be a Riemannian manifold.
Let x, y ∈ M and γ : [a, b] → M be a piecewise smooth curve joining x to y. Then, the length of the curve γ is defined as L(γ) = ∫_a^b ‖γ′(t)‖ dt, where γ′(t) ∈ T_{γ(t)}M is the tangent vector at the point γ(t) ∈ M. The minimal length of all such curves joining x to y is called the Riemannian distance, which induces the original topology on M. It is denoted by d(x, y) and defined by d(x, y) = inf{L(γ) : γ ∈ C_{xy}}, where C_{xy} denotes the set of all continuously differentiable curves γ : [0, 1] → M such that γ(0) = x and γ(1) = y. Also, the metric induces a mapping f ↦ grad f that associates each differentiable function with its gradient via the rule ⟨grad f, X⟩ = df(X) for every vector field X : M → TM.
Let ∇ be the Levi-Civita connection associated with the Riemannian manifold M. A vector field A along γ is said to be parallel if ∇_{γ′}A = 0. If γ′ is parallel along γ, i.e., ∇_{γ′}γ′ = 0, then γ is said to be a geodesic. In this case, ‖γ′‖ is constant. Moreover, if ‖γ′‖ = 1, then γ is called a normalized geodesic. A geodesic joining x to y in the Riemannian manifold M is said to be a minimal geodesic if its length is equal to d(x, y). We will denote the geodesic joining x and y by γ(x, y; ·), that is, γ(x, y; 0) = x and γ(x, y; 1) = y.

Let M be a Riemannian manifold and let γ : [a, b] → M be the geodesic joining γ(a) = x to γ(b) = y. The parallel transport on the tangent bundle TM along the geodesic γ with respect to the Riemannian connection ∇ is defined as P_{γ,γ(b),γ(a)}(v) = V(γ(b)), where V is the unique vector field satisfying ∇_{γ′(x,y;t)}V = 0 for all t ∈ [a, b] and V(γ(x, y; a)) = v. A Riemannian manifold M is said to be complete if for any x ∈ M all geodesics emanating from x are defined for all t ∈ R. By the Hopf-Rinow Theorem [33], if M is complete, then any pair of points in M can be joined by a minimal geodesic. Moreover, (M, d) is a complete metric space. If M is a complete Riemannian manifold, then the exponential map exp_x : T_xM → M is defined by exp_x v = γ_{x,v}(1) for all v ∈ T_xM, where γ_{x,v} : R → M is the unique geodesic starting from x with velocity v, i.e., γ_{x,v}(0) = x and γ′_{x,v}(0) = v. It is known that exp_x(tv) = γ_{x,v}(t) for each real number t and exp_x 0 = γ_{x,0}(0) = x, where 0 is the zero tangent vector. Note that the exponential map exp_x is a diffeomorphism on T_xM, see [34]. Moreover, for any x, y ∈ M, we have d(x, y) = ‖exp_x^{-1} y‖. A complete, simply connected Riemannian manifold of nonpositive sectional curvature is called a Hadamard manifold.
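As a concrete illustration of these objects, consider the one-dimensional Hadamard manifold M = R_{++} equipped with the metric ⟨u, v⟩_x = uv/x² (a flat manifold of the same family as in the numerical examples below); then all three maps have closed forms:

```latex
\exp_{x}(v) = x\, e^{\,v/x},\qquad
\exp_{x}^{-1} y = x \ln\frac{y}{x},\qquad
d(x,y) = \bigl\|\exp_{x}^{-1} y\bigr\|_{x} = \Bigl|\ln\frac{y}{x}\Bigr|.
```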

Proposition 1 ([33]) Let M be a Hadamard manifold and let
Let C be a nonempty subset of a Hadamard manifold M and let T : C → M be an operator.Then T is said to be (i) nonexpansive if

d(T x, T y) ≤ d(x, y) for all x, y ∈ C;
(ii) firmly nonexpansive (see [20]) if, for all x, y ∈ C, the function φ : [0, 1] → [0, ∞) defined by φ(t) = d(exp_x t exp_x^{-1} Tx, exp_y t exp_y^{-1} Ty) is nonincreasing.

Remark 1 ([19]) Since any two points in a Hadamard manifold M can be joined by a unique geodesic, we denote the parallel transport by P_{y,x} instead of P_{γ,γ(b),γ(a)}. That is, if γ(x, y; ·) : [0, 1] → M is the geodesic joining x and y, then P_{y,x} : T_xM → T_yM is defined by P_{y,x}v = V(γ(x, y; 1)), where V is the unique vector field satisfying ∇_{γ′(x,y;t)}V = 0 for all t ∈ [0, 1] and V(γ(x, y; 0)) = v.

Remark 2 ([19])
(i) For every x, y, z ∈ M, we have P y,z • P z,x = P y,x and P −1 y,x = P x,y .
(ii) P_{y,x} is an isometry from T_xM to T_yM, i.e., the parallel transport preserves the inner product: ⟨P_{y,x}u, P_{y,x}v⟩_y = ⟨u, v⟩_x for all u, v ∈ T_xM.

A subset C of a Hadamard manifold M is said to be geodesic convex [19] if, for any two points x and y in C, the geodesic joining x and y is contained in C.

Proposition 2 ([35]) Let M_1 and M_2 be Riemannian manifolds and Φ : M_1 → M_2 be an isometry between M_1 and M_2. Then, a function f : M_2 → (−∞, ∞] is proper, continuous and geodesic convex if and only if f ∘ Φ : M_1 → (−∞, ∞] is proper, continuous and geodesic convex.

Let C be a subset of a Hadamard manifold M. The projection map onto C is given by P_C(x) = {z ∈ C : d(x, z) ≤ d(x, y) for all y ∈ C}.

Lemma 2 ([19]) Let {x_n} be a sequence in a Hadamard manifold M such that x_n → x ∈ M and let y ∈ M. Then exp_{x_n}^{-1} y → exp_x^{-1} y and exp_y^{-1} x_n → exp_y^{-1} x.

Lemma 3 ([27]) Let M be a Riemannian manifold with constant curvature. For given q ∈ M and s ∈ T_qM, the set L_{q,s} = {p ∈ M : ⟨s, exp_q^{-1} p⟩ ≤ 0} is geodesic convex.
Let A : M → 2^{TM} be a set-valued vector field. The domain of A is defined by D(A) = {x ∈ M : A(x) ≠ ∅}. The vector field A is said to be: (a) monotone if, for any x, y ∈ D(A), ⟨u, exp_x^{-1} y⟩ ≤ ⟨v, −exp_y^{-1} x⟩ for all u ∈ A(x) and for all v ∈ A(y); (b) maximal monotone if it is monotone and, for any x ∈ D(A) and u ∈ T_xM, the condition that ⟨u, exp_x^{-1} y⟩ ≤ ⟨v, −exp_y^{-1} x⟩ for all y ∈ D(A) and for all v ∈ A(y) implies that u ∈ A(x); (c) pseudomonotone if, for any x, y ∈ D(A), the following implication holds: ⟨u, exp_x^{-1} y⟩ ≥ 0 for some u ∈ A(x) implies ⟨v, −exp_y^{-1} x⟩ ≥ 0 for all v ∈ A(y).
Let A : M → 2^{TM}. Given λ > 0, the resolvent of A of order λ is the set-valued mapping J^A_λ : M → 2^M defined by J^A_λ(x) = {z ∈ M : x ∈ exp_z(λA(z))} for all x ∈ M; in the Euclidean case this reduces to the classical resolvent, as the display after Proposition 3 shows.

Proposition 3 ([20]) Let M be a Hadamard manifold and A : D(A) → 2^{TM} be a set-valued vector field. Then, for any λ > 0, the following assertions hold: (a) The vector field A is monotone if and only if J^A_λ is single-valued and firmly nonexpansive.

(b) If D(A) = M, then the vector field A is maximal monotone if and only if J^A_λ is single-valued, firmly nonexpansive and the domain D(J^A_λ) = M.
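In the Euclidean case M = R^n one has exp_z(u) = z + u, so the definition above unfolds to the classical resolvent of monotone operator theory:

```latex
z \in J^{A}_{\lambda}(x)
\iff x \in z + \lambda A(z)
\iff z \in (I + \lambda A)^{-1}(x).
```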
Lemma 4 ([36]) Let M be a Hadamard manifold and let A : M → 2^{TM} be a maximal monotone vector field. If {x_n} is bounded and u_n ∈ A(x_n) for all n ∈ N, then {u_n} is bounded.
Proposition 4 ([25]) Let C be a closed geodesic convex subset of the Hadamard manifold M. Let A : M → 2^{TM} be a monotone vector field and B : M → TM be a vector field. For any λ ∈ (0, ∞), define J^{A,B}_λ(x) = J^A_λ(exp_x(−λB(x))) and r(x, λ) = d(x, J^{A,B}_λ(x)). Then, for any x* ∈ C and any λ ∈ (0, ∞), the following statements are equivalent: (a) x* is a solution of problem (4); (b) x* = J^{A,B}_λ(x*); (c) r(x*, λ) = 0.

Lemma 5 ([25]) Let C be a nonempty geodesic convex subset of a Hadamard manifold M. Let A : M → 2^{TM} be a monotone vector field and B : M → TM be a vector field. Then

Let M be a Hadamard manifold and f : M → (−∞, ∞] be a proper lower semicontinuous convex function with Dom(f) ≠ ∅. The subdifferential of f at x ∈ Dom(f) is defined by ∂f(x) = {u ∈ T_xM : ⟨u, exp_x^{-1} y⟩ ≤ f(y) − f(x) for all y ∈ M}. It is well known that ∂f(x) is a nonempty, closed and geodesic convex set (see [19]). The following lemma extends Lemma 2.5 of Ye and He [15] from Euclidean spaces to Hadamard manifolds.

Lemma 7 Let C be a nonempty closed geodesic convex subset of a Hadamard manifold M, let h : C → R be a real-valued function and set H = {x ∈ C : h(x) ≤ 0}. If H is a nonempty set and h is Lipschitz continuous on C with modulus θ > 0, then d(x, H) ≥ θ^{-1} h(x) for all x ∈ C.
Lemma 8 Let M be a Hadamard manifold, let A : M → 2^{TM} be a monotone vector field and let B : M → TM be a vector field. Let x ∈ M and define γ : [0, 1] → M as the geodesic joining x and J^{A,B}_λ(x). It follows from Lemma 5 that

Self-adaptive extragradient method
This section introduces a new self-adaptive iterative method for solving the inclusion problem (4) and studies the proposed method's convergence in the Hadamard manifold framework.Throughout the section, we denote N 0 = N ∪ {0}.
First, we introduce a new self-adaptive iterative algorithm for the inclusion problem (4) that requires one projection at each step, reducing the number of iterations needed to reach the desired solution.
Step 3. Define the half-space H_n and compute the line-search point y_n = γ_n(η^{i_n}) on the geodesic γ_n joining x_n and J^{A,B}_λ(x_n), where i_n is the smallest index satisfying the line search condition (7).

Step 4. Compute x_{n+1} = P_{C_n}(x_n), where C_n = C ∩ H_n, and go to iteration n with n replaced by n + 1.
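Parts of the algorithm display are lost to extraction, so the following Python sketch reconstructs only the structure that the convergence analysis below relies on: a forward-backward trial point, an Armijo-type search along the geodesic toward it, and a single projection onto C_n = C ∩ H_n. Every manifold primitive (exp, log, inner, resolvent, A_sel, proj) is an assumed user-supplied callable, and the acceptance test marked below is a placeholder for the paper's condition (7), not the authors' exact inequality.

```python
import numpy as np

def algorithm1(B, A_sel, resolvent, exp, log, inner, proj,
               x0, lam=0.7, sigma=0.3, eta=0.8, tol=1e-8, max_iter=500):
    """Hedged structural sketch of Algorithm 1 (not the verbatim method).

    Assumed user-supplied primitives:
      exp(x, v)       -- exponential map exp_x(v)
      log(x, y)       -- inverse exponential exp_x^{-1} y
      inner(p, u, v)  -- Riemannian inner product <u, v>_p
      resolvent(x)    -- J^A_lam(x), resolvent of A of order lam
      A_sel(y)        -- a selection u_y in A(y)
      proj(x, y, w)   -- P_{C_n}(x), C_n = C ∩ {z : <w, exp_y^{-1} z>_y <= 0}
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Forward-backward trial point z_n = J^A_lam(exp_{x_n}(-lam B(x_n))).
        z = resolvent(exp(x, -lam * B(x)))
        r = np.sqrt(inner(x, log(x, z), log(x, z)))  # residual r(x_n, lam)
        if r <= tol:
            return x                                  # x_n already solves (4)
        # Armijo-type search along the geodesic joining x_n and z_n.
        i = 0
        while i < 50:  # safeguard; the paper proves the search terminates
            y = exp(x, (eta ** i) * log(x, z))
            # Placeholder acceptance test standing in for condition (7).
            if inner(y, B(y), log(y, x)) >= sigma * (eta ** i) * r ** 2:
                break
            i += 1
        # Single projection onto C_n = C ∩ H_n (H_n a geodesic half-space).
        x = proj(x, y, B(y) + A_sel(y))
    return x
```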

Proposition 8 Let M be a Hadamard manifold of constant curvature and let C be a nonempty closed and geodesic convex subset of M. Let A : M → 2^{TM} be a maximal monotone vector field and B : M → TM be a continuous and monotone vector field such that (A + B)^{-1}(0) ∩ C ≠ ∅. Let {x_n} be the sequence defined by Algorithm 1. Then, the following assertions hold: (a) If r(x_n, λ) = 0, then the current term x_n is a solution of problem (4). (b) If r(x_n, λ) ≠ 0, then the line search procedure defined by (7) is well defined. (c) C_n is a nonempty closed and geodesic convex set and x_{n+1} is well defined.

Proof (a) The result follows directly from Proposition 4(c).

(b) Suppose by contradiction that, for every positive integer m, the line search condition (7) fails. From (9) and (10), and since γ_n(η^m) → x_n as m → ∞, the continuity of the vector field B yields d(x_n, z_n) < 0, which contradicts the fact that x_n ≠ z_n. Thus, the line search procedure (7) is well defined.

(c) Recall that y_n = γ_n(η^{i_n}), where γ_n is the geodesic joining x_n and J^{A,B}_λ(x_n). Since C is geodesic convex, by the definition of y_n we obtain y_n ∈ C. Note that y_n ∈ H_n; thus, C_n = C ∩ H_n is nonempty. From Lemma 2, H_n is closed, and from Lemma 3, H_n is geodesic convex. Hence C_n = C ∩ H_n is a closed and geodesic convex set. Thus, P_{C_n} is well defined and hence x_{n+1} is well defined.

Lemma 9 Let M be a Hadamard manifold of constant curvature and let C be a nonempty closed and geodesic convex subset of M. Let A : M → 2^{TM} be a maximal monotone vector field and B : M → TM be a continuous and monotone vector field such that (A + B)^{-1}(0) ∩ C ≠ ∅. Let {x_n} be the sequence generated by Algorithm 1. Then (A + B)^{-1}(0) ∩ C ⊆ C_n for all n ∈ N_0.
Proof Let x* ∈ (A + B)^{-1}(0) ∩ C. Since A is monotone and −B(x*) ∈ A(x*), for any x ∈ M and any u_x ∈ A(x), we have (11). Since B is monotone, we have (12). From (11) and (12) we obtain, in particular for x = y_j ∈ C and u_{y_j} ∈ A(y_j), 0 ≤ j ≤ n, that ⟨B(y_j) + u_{y_j}, exp_{y_j}^{-1} x*⟩ ≤ 0 for all 0 ≤ j ≤ n.
From (8), we obtain that x* ∈ H_j for 0 ≤ j ≤ n. In particular, for j = i_n, we get x* ∈ H_{i_n}, and hence x* ∈ C_n.

Theorem 1 Let M be a Hadamard manifold of constant curvature and let C be a nonempty closed and geodesic convex subset of M. Let A : M → 2^{TM} be a maximal monotone vector field and B : M → TM be a continuous and monotone vector field such that (A + B)^{-1}(0) ∩ C ≠ ∅. Then, the sequence {x_n} generated by Algorithm 1 converges to the solution of problem (4).
Proof From Proposition 8, Algorithm 1 is well defined. We proceed with the proof in the following steps.

Step 1. lim_{n→∞} d(x_n, H_n) = 0. Let x* ∈ (A + B)^{-1}(0) ∩ C. By Lemma 9, we see that x* ∈ C_n. Since x_{n+1} = P_{C_n}(x_n), it follows from Lemma 1 that (13) holds, which implies that lim_{n→∞} d(x_n, x*) exists and hence the sequence {x_n} is bounded. From (13), we have (14). From (14), we obtain (15). From the definition of i_n and C_n, we get (16). From (15) and (16), the claim of Step 1 follows.

Step 2.
Since {x_n} is bounded, it follows from the continuity of B that {B(x_n)} is bounded. Hence, from the nonexpansiveness of J^A_λ, the sequence {z_n} is bounded. Thus, the sequence {y_n} is bounded. Since B is continuous and A is maximal monotone, it follows from Lemma 4 that the sequence {B(y_n) + u_{y_n}} is bounded. Hence, there exists κ > 0 such that

For n ∈ N_0, let h_n : C → R be a function defined by

It follows from the Cauchy-Schwarz inequality and Proposition 1 that

Thus, each function h n is Lipschitz continuous with modulus κ.
On the other hand, from (7), we have

which implies that

From Lemma 8, we get

which implies that

From Lemma 7, (18) and (17), we have

Step 3. Any cluster point of {x_n} solves problem (4). Let w be a cluster point of {x_n}. Then there exists a subsequence {x_{n_k}} of {x_n} converging to w. Now, we have two cases.

Case 1. Suppose that {η_{n_k}} does not converge to 0. Then, there exist some ε > 0 and a subsequence, still denoted by {η_{n_k}}, such that η_{n_k} ≥ ε for all k. Thus, lim_{k→∞} d(x_{n_k}, z_{n_k}) = 0. Since lim_{k→∞} x_{n_k} = w, we have

Case 2. Assume that lim_{k→∞} η_{n_k} = 0. By the choice of η_n, it follows from (7) that

From (20), we have

Since B is continuous, A is maximal monotone and the parallel transport is an isometry, passing to the limit as k → ∞, we obtain

Hence, we conclude that

Thus, w ∈ (A + B)^{-1}(0) ∩ C. Replacing x* by w in (13), we obtain that lim_{n→∞} d(x_n, w) exists. Note that w is a cluster point of {x_n}. Therefore, the sequence {x_n} converges to w.
Remark 4 (i) Our Theorem 1 deals with the iterative solution of problem (4). Here our approach is different from Theorem 3.6 of Ansari et al. [25]. (ii) For the convergence of Algorithm 1, the property (N) is not required in Theorem 1. This provides an affirmative answer to Question 1.

Numerical experiments
In this section, we conduct numerical experiments to compare the proposed and existing methods based on their efficiency.
Example 1 Let M = (R^m_{++}, ⟨·,·⟩) be the Riemannian manifold with the Riemannian metric defined by ⟨u, v⟩_x = u^T G(x) v for u, v ∈ T_xM, where G(x) is a diagonal matrix depending on x ∈ M. Then M is a Hadamard manifold with sectional curvature 0. For more details, see [37].
The geodesic γ(x, y; ·) : [0, 1] → M joining x = γ(x, y; 0) and y = γ(x, y; 1), the exponential map exp_x : T_xM → M, its inverse exp_x^{-1} : M → T_xM, the Riemannian distance d : M × M → R_+ and the parallel transport P_{y,x} : T_xM → T_yM are all available in closed form on this manifold. Let C be the closed geodesic convex subset of M under consideration, let A : M → 2^{TM} be the given set-valued vector field and let B : C → TM be the given single-valued vector field. Then A is a maximal monotone vector field, and B is a continuous and monotone vector field; the resolvent of the vector field A, J^A_λ : C → M, is available in closed form. For the numerical experiment, we choose two sets of parameters λ = 0.7, σ = 0.3, η = 0.8 and λ = 1, σ = 0.9, η = 0.3. It can be observed that all the conditions of Theorem 1 are satisfied; hence the sequence {x_n} generated by Algorithm 1 converges to the solution of problem (4).
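Since the display defining G(x) is lost in extraction, the following sketch assumes the usual choice G(x) = diag(x_1^{-2}, …, x_m^{-2}) for this family of examples; under that assumption M is flat and all primitives have simple componentwise forms. The formulas below are consequences of that assumption, not a transcript of the lost display.

```python
import numpy as np

# Primitives for M = (R^m_{++}, <u, v>_x = u^T G(x) v). The display
# defining G(x) is lost in the source, so we ASSUME the usual choice
# G(x) = diag(x_1^{-2}, ..., x_m^{-2}); under this assumption M is a
# flat Hadamard manifold (isometric to R^m via the componentwise log).

def exp_map(x, v):
    """Exponential map exp_x(v) = x * e^{v/x} (componentwise)."""
    return x * np.exp(v / x)

def log_map(x, y):
    """Inverse exponential exp_x^{-1}(y) = x * ln(y/x) (componentwise)."""
    return x * np.log(y / x)

def dist(x, y):
    """Riemannian distance d(x, y) = ||ln y - ln x||_2."""
    return float(np.linalg.norm(np.log(y) - np.log(x)))

def transport(y, x, v):
    """Parallel transport P_{y,x} v = y * v / x (componentwise)."""
    return y * v / x

def geodesic(x, y, t):
    """gamma(x, y; t) = x^(1-t) * y^t joins x (t = 0) to y (t = 1)."""
    return x ** (1.0 - t) * y ** t

# Sanity checks of identities used in the paper (under the assumption).
x, y = np.array([1.0, 2.0, 0.5]), np.array([3.0, 1.0, 2.0])
assert np.allclose(exp_map(x, log_map(x, y)), y)
assert np.isclose(dist(x, y), np.linalg.norm(log_map(x, y) / x))  # d = ||exp_x^{-1} y||_x
assert np.allclose(geodesic(x, y, 0.0), x) and np.allclose(geodesic(x, y, 1.0), y)
```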
Example 2 Let M = (R^m_{++}, ⟨·,·⟩) be the Riemannian manifold with the Riemannian metric defined by ⟨u, v⟩_x = u^T G(x) v for all x ∈ M. The sectional curvature of the Riemannian manifold M is 0.
The inverse of the exponential map exp_x^{-1} : M → T_xM is defined in closed form for all x, y ∈ M.
The parallel transport P_{y,x} : T_xM → T_yM is defined analogously. Let A : M → 2^{TM} and B : M → TM be the vector fields under consideration. Then A is a maximal monotone vector field on C, and B is a continuous and monotone vector field on C.
Note that the resolvent of A, J^A_λ : M → M, is available in closed form. For the numerical experiments, we choose random initial points x_0 and the iteration parameters λ = 0.7, σ = 0.3 and η = 0.8 for Algorithm 1 and Algorithm KKCM1; σ = 0.3 for Algorithm KKCM2; and λ = 0.7 and η = 0.8 for Algorithm ABL. We use r(x_n, λ_n) ≤ 10^{-8} as the stopping criterion to study the convergence of the algorithms. All codes were run in MATLAB 2019b on a machine with an Intel i5 processor and 8 GB RAM.
From Fig. 2 and Table 2, we can see that, for different initial points, our proposed Algorithm 1 takes a smaller number of iterations to converge to the solution of problem (4) than the algorithms Alg. KKCM1, Alg. KKCM2 and Alg. ABL.

Application to variational inequality problems
In this section, we explore the application of the discussed results to variational inequality problems. First, we prove the result for the monotone variational inequality problem, and later we prove it for nonmonotone variational inequality problems.
Let C be a nonempty closed geodesic convex subset of a Hadamard manifold M and let A : M → 2^{TM} be the vector field defined by A(x) = ∂i_C(x), where i_C is the indicator function of C, i.e., i_C(x) = 0 for x ∈ C and i_C(x) = ∞ otherwise. Let B : M → TM be a single-valued vector field. Then, problem (4) transforms into the variational inequality problem (5). Moreover, the resolvent of the vector field ∂i_C is the projection operator P_C; for more details, see [25]. Thus, Algorithm 1 transforms into the following algorithm, which is different from Algorithm 4.1 of [22] and Algorithm 3.1 of [23].

Algorithm 2
Let C be a nonempty geodesic convex subset of a Hadamard manifold M and B : M → TM be a vector field. Initialization: Choose x_0 ∈ C and two parameters η ∈ (0, 1) and σ ∈ (0, 1). Iteration: For a given x_n, do the following steps.
Step 1. Compute the trial point z_n = P_C(exp_{x_n}(−λB(x_n))) and the line-search point y_n, with stepsize determined by the Armijo-type search.

Step 4. Compute x_{n+1} = P_{C_n}(x_n), where C_n = C ∩ H_n and H_n = {x ∈ M : ⟨B(y_n), exp_{y_n}^{-1} x⟩ ≤ 0}, and go to iteration n with n replaced by n + 1.
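Remark 6 below notes that for M = R^n Algorithm 2 reduces to a Euclidean one-projection method; the following hedged sketch mirrors that structure. The line-search acceptance test is a placeholder in the spirit of Solodov-Svaiter [11], not the paper's exact condition, and the projection onto C_n = C ∩ H_n is carried out by Dykstra's algorithm for simplicity.

```python
import numpy as np

def halfspace_proj(w, y):
    """Projection onto the half-space H = {z : <w, z - y> <= 0}."""
    def P(x):
        s = np.dot(w, x - y)
        return x - (max(s, 0.0) / np.dot(w, w)) * w if np.dot(w, w) > 0 else x
    return P

def dykstra(p, projs, iters=200):
    """Dykstra's algorithm: project p onto an intersection of convex
    sets, each given by its own projection operator."""
    x, incs = p.copy(), [np.zeros_like(p) for _ in projs]
    for _ in range(iters):
        for k, P in enumerate(projs):
            z = P(x + incs[k])
            incs[k] = x + incs[k] - z
            x = z
    return x

def algorithm2_euclidean(B, proj_C, x0, lam=1.0, sigma=0.7, eta=0.5,
                         tol=1e-5, max_iter=500):
    """Hedged Euclidean sketch of Algorithm 2 (M = R^n, J^A_lam = P_C)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        z = proj_C(x - lam * B(x))                  # trial point
        r = np.linalg.norm(x - z)                   # residual r(x_n, lam)
        if r <= tol:
            return x
        i = 0
        while i < 50:                               # safeguard
            y = x + (eta ** i) * (z - x)            # point on the segment [x, z]
            # Placeholder acceptance test (not the paper's exact condition).
            if np.dot(B(y), x - z) >= sigma * (eta ** i) * r ** 2:
                break
            i += 1
        # One projection onto C_n = C ∩ {u : <B(y), u - y> <= 0}.
        x = dykstra(x, [proj_C, halfspace_proj(B(y), y)])
    return x

# Toy usage: B(u) = u - a on the unit ball; the VI solution is P_C(a).
a = np.array([1.0, 1.0])
proj_ball = lambda u: u / max(1.0, np.linalg.norm(u))
print(algorithm2_euclidean(lambda u: u - a, proj_ball, np.array([0.5, -0.5])))
```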
Theorem 2 Let M be a Hadamard manifold of constant curvature and let C be a nonempty closed and geodesic convex subset of M. Let B : M → TM be a continuous and monotone vector field such that (VI(B, C)) ≠ ∅. Then, the sequence {x_n} generated by Algorithm 2 converges to the solution of problem (5).
Proof Set A = ∂i_C. From Proposition 6, A is a maximal monotone vector field and J^A_λ = P_C. Therefore, Theorem 2 follows from Theorem 1.

Remark 5
In Theorem 2, we used the monotonicity of the vector field B. The question is whether the monotonicity condition can be removed. Although [22] and [23] have established results for the pseudomonotone case, we want to remove even the pseudomonotonicity condition.
The Minty variational inequality problem, in short MVI(B, C), associated with the variational inequality problem (5) is defined as follows: find u ∈ C such that

⟨B(y), exp_y^{-1} u⟩ ≤ 0 for all y ∈ C, (22)

where C is a nonempty closed geodesic convex subset of a Hadamard manifold M and B : M → TM is a vector field. The solution set of problem (22) is denoted by (MVI(B, C)). The reverse inclusion is valid if the vector field B is pseudomonotone (see [23, 38]). Now, we apply Algorithm 2 to solve the variational inequality problem (5) when the vector field B is only continuous. Before establishing another convergence theorem, we need Lemma 10 below.
Remark 6 (i) If M = R^n, then Algorithm 2 transforms into Algorithm 1 of Dinh et al. [16] with a new linesearch procedure. (ii) Algorithm 2 is an improvement of Algorithm 4.1 of Tang and Huang [22] and Algorithm 3.1 of Tang et al. [23]. (iii) From Theorem 3, we can see that for the convergence of Algorithm 2, we need only the continuity of the vector field B.
The following example is devoted to the comparison of the numerical behavior of three algorithms, including our Algorithm 2, the Algorithm of Tang et al. [23] (denoted by Algorithm TWL) and the Algorithm of Tang and Huang [22] (denoted by Algorithm TH).
Then g is an isometry between R^3_{++} and R^3. The gradient of f in the Euclidean sense is ∇f(x), and it follows from Remark 3 that

grad f(x) = ∇f(x) G^{-1}(x) = (3x_1 ln(x_1x_2x_3), 3x_2 ln(x_1x_2x_3), −3x_3 ln(x_1x_2x_3))

for all x ∈ M.
Let C = {(x_1, x_2, x_3) ∈ M : 1 ≤ x_i ≤ 2 for i = 1, 2, 3}. Then, C is a closed geodesic convex subset of M. Note that h is a proper continuous convex function on R^3 (see [39]). Since g is an isometry, it follows from Proposition 2 that f is a proper continuous geodesic convex function on M. From Proposition 5, grad f is a monotone vector field. From Proposition 7, x* solves the minimization problem min_{x∈C} f(x) if and only if it solves VI(grad f, C). For the numerical experiment, we take random initial points x_0 and the iteration parameters λ = 1, σ = 0.7 and η = 0.5 for our Algorithm 2, Algorithm TWL and Algorithm TH. We use r(x_n, λ) ≤ 10^{-5} as the termination criterion to study the convergence of the algorithms.
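The rule grad f(x) = ∇f(x) G^{-1}(x) used in this example is easy to mechanize. The sketch below assumes the diagonal metric G(x) = diag(x_i^{-2}) (the definition of G is lost in the source) and uses a hypothetical test function f(x) = (3/2)(ln(x_1 x_2 x_3))², chosen only because its Euclidean gradient matches the displayed pattern componentwise.

```python
import numpy as np

def riemannian_grad(egrad, x):
    """grad f(x) = G(x)^{-1} * egrad(x), assuming the diagonal metric
    G(x) = diag(x_i^{-2}), so that G^{-1}(x) = diag(x_i^2)."""
    return (x ** 2) * egrad(x)

# Hypothetical f(x) = (3/2) * (ln(x1*x2*x3))^2; its Euclidean gradient
# has components 3 * ln(x1*x2*x3) / x_i.
def egrad(x):
    return 3.0 * np.log(np.prod(x)) / x

x = np.array([1.5, 1.0, 2.0])
print(riemannian_grad(egrad, x))   # components 3 * x_i * ln(x1*x2*x3)
```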
From Fig. 3 and Table 3, we can observe that, for different initial points, the proposed Algorithm 2 takes the least number of iterations and comparatively less time to achieve the termination criterion, while the algorithm of Tang et al. [23] (Algorithm TWL) remains the second fastest.

Conclusion
This article proposes a new self-adaptive extragradient method for solving the variational inclusion problem in Hadamard manifolds of constant curvature. The proposed method uses an Armijo-type line search procedure, which allows the method to converge without prior knowledge of the Lipschitz constant. The convergence theorem for the proposed algorithm is established under suitable conditions without using the property (N). Furthermore, we proposed an iterative method for the variational inequality problem and established a convergence result for the nonmonotone variational inequality. Numerical experiments have been conducted to support the proposed algorithm, and comparisons with several existing methods show favorable results.

Proposition 5 ([34]) Let C be an open geodesic convex subset of a Hadamard manifold M and let f : M → R be differentiable on C. Then f is convex on C if and only if grad f is monotone on C.

Proposition 6 ([19]) Let f : M → (−∞, ∞] be a proper, lower semicontinuous and geodesic convex function on a Hadamard manifold M. Then, the subdifferential ∂f of f is a monotone vector field. Moreover, if Dom(f) = M, then ∂f is a maximal monotone vector field.

Proposition 7 ([18]) Let C be a geodesic convex subset of a Hadamard manifold M and f : C → R be a differentiable geodesic convex function. Then, x* is a solution of the minimization problem min_{x∈C} f(x) if and only if it is a solution of VI(B, C) with B = grad f.

Let C be a nonempty subset of a Hadamard manifold M. The distance from x ∈ M to C is denoted and defined by d(x, C) = inf_{y∈C} d(x, y).

Lemma 6 Let C be a nonempty closed subset of a Hadamard manifold M. Then, for any x ∈ M, there exists a point x_0 ∈ C such that d(x, C) = d(x, x_0).

Fig. 1 Performance of Algorithm 1 for different sets of iteration parameters for Example 1

Lemma 10 Let M be a Hadamard manifold of constant curvature and let C be a nonempty closed and geodesic convex subset of M. Let B : M → TM be a continuous vector field such that (MVI(B, C)) ≠ ∅. Let {x_n} be the sequence generated by Algorithm 2. Then (MVI(B, C)) ⊆ C_n.

Proof Let x* ∈ (MVI(B, C)). Then, we have ⟨B(y), exp_y^{-1} x*⟩ ≤ 0 for all y ∈ C. In particular, for y = y_j ∈ C, 0 ≤ j ≤ n, we have ⟨B(y_j), exp_{y_j}^{-1} x*⟩ ≤ 0 for all 0 ≤ j ≤ n. From (21), we have x* ∈ H_j for 0 ≤ j ≤ n. In particular, for j = i_n, we get x* ∈ H_{i_n}, and hence x* ∈ C_n.

Theorem 3 Let M be a Hadamard manifold of constant curvature and let C be a nonempty closed and geodesic convex subset of M. Let B : M → TM be a continuous vector field such that (MVI(B, C)) ≠ ∅. Then, the sequence {x_n} generated by Algorithm 2 converges to the solution of problem (5).

Proof Set A = ∂i_C. From Proposition 6, A is a maximal monotone vector field and J^A_λ = P_C. Since (MVI(B, C)) ≠ ∅, by Lemma 10, if x* ∈ (MVI(B, C)), then x* ∈ C_n. Therefore, Theorem 3 follows along the same lines as Theorem 1.

Table 3 Numerical comparison of Algorithm 2, Algorithm TWL and Algorithm TH