A double projection algorithm for quasimonotone variational inequalities in Banach spaces

We propose a double projection algorithm for solving variational inequality problems in Banach spaces. We establish the strong convergence of the whole sequence generated by the proposed method under quasimonotonicity and uniform continuity on bounded sets, which are weaker conditions than those used in existing projection-type methods for solving variational inequality problems in Banach spaces.


Introduction
Let $B$ be a reflexive Banach space with norm $\|\cdot\|$, and let $B^*$ be its topological dual with norm $\|\cdot\|_*$. By $\langle x^*, x \rangle$ we denote the duality pairing on $B^* \times B$ defined by $\langle f, x \rangle = f(x)$ for all $x \in B$ and $f \in B^*$. By $x_n \to x$ and $x_n \rightharpoonup x$ we denote the strong and weak convergence of a sequence $\{x_n\}$ to $x$, respectively. We consider the following variational inequality problem, denoted by VI($T$, $C$): find a vector $x^* \in C$ such that $\langle T(x^*), y - x^* \rangle \geq 0$ for all $y \in C$, where $C$ is a nonempty closed convex subset of $B$, and $T : B \to B^*$ is an operator.
Let $S$ be the solution set of VI($T$, $C$), and let $S_D$ be the solution set of the dual variational inequality, that is,
$$S_D = \{\bar{x} \in C : \langle T(y), y - \bar{x} \rangle \geq 0 \text{ for all } y \in C\}.$$
If $T$ is continuous and $C$ is convex, then $S_D \subset S$. Indeed, for any $\bar{x} \in S_D$, we have $\bar{x} \in C$. For any given $y \in C$ and $t \in [0, 1]$, applying the convexity of $C$, we obtain $(1-t)\bar{x} + ty \in C$.
Therefore the definition of $S_D$ implies that
$$\langle T((1-t)\bar{x} + ty), (1-t)\bar{x} + ty - \bar{x} \rangle \geq 0$$
or, equivalently, $t \langle T((1-t)\bar{x} + ty), y - \bar{x} \rangle \geq 0$. Letting $t \to 0^+$, by the continuity of $T$ we obtain $\langle T(\bar{x}), y - \bar{x} \rangle \geq 0$, that is, $\bar{x} \in S$, and thus $S_D \subset S$. The variational inequality problem was first introduced by Hartman and Stampacchia [1] in 1966. Projection-type algorithms for solving the variational inequality problem have been extensively studied in finite-dimensional spaces, for example proximal point methods [2], extragradient projection methods [3-6], double projection methods [7-10], and self-adaptive projection methods [11, 12]. To prove the convergence of the generated sequence, all the methods mentioned share the common assumption $S \subset S_D$, that is,
$$\langle T(y), y - x^* \rangle \geq 0 \quad \text{for all } x^* \in S \text{ and } y \in C, \qquad (2)$$
which is a direct consequence of the pseudomonotonicity of $T$ on $C$ in the sense of Karamardian [13]: $T$ is said to be pseudomonotone on $C$ if for all $x, y \in C$,
$$\langle T(x), y - x \rangle \geq 0 \implies \langle T(y), y - x \rangle \geq 0;$$
$T$ is said to be quasimonotone on $C$ if for all $x, y \in C$,
$$\langle T(x), y - x \rangle > 0 \implies \langle T(y), y - x \rangle \geq 0.$$
Note that pseudomonotonicity implies quasimonotonicity, but the converse is not true: for instance, $T(x) = x^2$ is quasimonotone but not pseudomonotone on $C = B = \mathbb{R}$.
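To make the gap between the two notions concrete, here is a small numerical check; it is an illustration of ours, not part of the paper: the operator $T(x) = x^2$ and the random sampling scheme are our choices.

```python
# Numerical sanity check (not a proof) that T(x) = x^2 on B = R is
# quasimonotone but not pseudomonotone.
import random

T = lambda x: x * x

def pseudo_ok(x, y):
    # pseudomonotonicity: <T(x), y - x> >= 0  =>  <T(y), y - x> >= 0
    return not (T(x) * (y - x) >= 0 and T(y) * (y - x) < 0)

def quasi_ok(x, y):
    # quasimonotonicity: <T(x), y - x> > 0  =>  <T(y), y - x> >= 0
    return not (T(x) * (y - x) > 0 and T(y) * (y - x) < 0)

random.seed(0)
pairs = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(10**5)]
print(all(quasi_ok(x, y) for x, y in pairs))  # True: no quasimonotonicity violation found
print(pseudo_ok(0.0, -1.0))                   # False: x = 0, y = -1 violates pseudomonotonicity
```

The sampling finds no violation of quasimonotonicity, while the pair $(x, y) = (0, -1)$ witnesses the failure of pseudomonotonicity: $\langle T(0), -1 - 0 \rangle = 0 \geq 0$, but $\langle T(-1), -1 - 0 \rangle = -1 < 0$.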
Recently, in [14, 15], an interior proximal algorithm for solving quasimonotone variational inequalities was proposed, and global convergence was obtained under assumptions stronger than $S_D \neq \emptyset$ and quasimonotonicity. Clearly, quasimonotonicity is weaker than pseudomonotonicity, and $S_D \neq \emptyset$ is weaker than assumption (2). Thus $S \neq \emptyset$ together with pseudomonotonicity implies $S_D \neq \emptyset$ together with quasimonotonicity, whereas the converse implications are not true. For sufficient conditions guaranteeing $S_D \neq \emptyset$, see Lemma 2.6.
On the other hand, in [14-16] an extragradient-type method proposed in [5] was recently extended from Euclidean spaces to Banach spaces. Under the assumptions of pseudomonotonicity, uniform (or strong) continuity, and $S \neq \emptyset$, global strong convergence was obtained. In [17] a double projection method in Banach spaces was studied, and global weak convergence was obtained under assumptions stronger than pseudomonotonicity and uniform continuity.
Inspired by the works mentioned, in this paper, using the Bregman projection, we extend the double projection algorithm proposed by Solodov and Svaiter [7] for solving variational inequalities from Euclidean spaces to Banach spaces. Under the assumptions of $S_D \neq \emptyset$, uniform continuity, and quasimonotonicity, we prove that the whole sequence generated by the proposed method converges strongly to a solution of the variational inequality; our proof techniques differ from those presented in [14-17].

Preliminaries
In this section, we recall some useful definitions and results. First, we state some properties of the Bregman distance taken from [18].
(i) The Bregman distance with respect to $g$ is the function $D_g : B \times B \to \mathbb{R}$ defined as
$$D_g(x, y) = g(x) - g(y) - \langle g'(y), x - y \rangle,$$
where $\langle g'(y), x - y \rangle = \lim_{t \to 0^+} \frac{g(y + t(x - y)) - g(y)}{t}$.
(ii) The modulus of total convexity of $g$ at the point $x \in B$ is the function $\nu_g(x, \cdot) : [0, \infty) \to [0, \infty]$ defined by
$$\nu_g(x, t) = \inf\{D_g(y, x) : y \in B, \|y - x\| = t\}.$$
(iii) A function $g$ is said to be totally convex if $\nu_g(x, t) > 0$ for all $t > 0$ and $x \in B$.
(iv) $g$ is said to be a strongly convex function if there exists $\alpha > 0$ such that
$$g(y) \geq g(x) + \langle g'(x), y - x \rangle + \alpha \|y - x\|^2 \quad \text{for all } x, y \in B,$$
that is, $D_g(y, x) \geq \alpha \|y - x\|^2$ for all $x, y \in B$.

Remark 2.1 (1) It should be noted that $D_g$ is not a distance in the usual sense of the term. In general, $D_g$ is not symmetric and does not satisfy the triangle inequality. Clearly, $D_g(x, x) = 0$, but $D_g(x, y) = 0$ may not imply $x = y$, for instance, when $g$ is a linear functional on $B$.
If $g$ is strictly convex or strongly convex on $B$, then $D_g(x, y) > 0$ for all $x, y \in B$ with $x \neq y$.
(2) Clearly, if $g$ is a strongly convex function, then $g$ is a totally convex function: strong convexity gives $\nu_g(x, t) \geq \alpha t^2 > 0$ for all $t > 0$.
We now present some conditions on the auxiliary function $g$ that are important for the feasibility and the convergence analysis of our algorithm.
(H1) The level sets of $D_g(x, \cdot)$ are bounded for all $x \in B$.
(H2) $g$ is strongly convex on $B$.
(H3) $g'$ is uniformly continuous on bounded subsets of $B$.
(H4) $g'$ is onto, that is, for all $y \in B^*$, there exists $x \in B$ such that $g'(x) = y$.
(H5) $(g')^{-1}$ is uniformly continuous on bounded subsets of $B^*$.
On the feasibility of assumptions (H1)-(H5), see [17, 19, 20] and the references therein. If $B = \mathbb{R}$, then $g(x) = x^2$ satisfies assumptions (H1)-(H5).
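As a quick worked check (in the slightly more general setting $B = \mathbb{R}^n$ with $g(x) = \|x\|^2$, our illustrative extension of this example):
$$g'(x) = 2x, \qquad (g')^{-1}(y) = \tfrac{1}{2}y, \qquad D_g(x, y) = \|x\|^2 - \|y\|^2 - \langle 2y, x - y \rangle = \|x - y\|^2.$$
Hence $D_g(x, \cdot)$ has bounded level sets (H1); $D_g(y, x) = \|y - x\|^2$ gives strong convexity with $\alpha = 1$ (H2); and $g'$ and $(g')^{-1}$ are bounded linear bijections, hence onto and uniformly continuous on bounded sets, giving (H3)-(H5).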
We recall the definition of the Bregman projection and some useful results.

Lemma 2.1
Assume that $B$ is a Banach space, $C$ is a nonempty, closed, and convex subset of $B$, and $g : B \to \mathbb{R}$ is a totally convex function on $B$ satisfying (H1). Then for each $\bar{x} \in B$ there exists a unique $\hat{x} \in C$ such that
$$D_g(\hat{x}, \bar{x}) = \min_{x \in C} D_g(x, \bar{x});$$
$\hat{x}$ is called the Bregman projection of $\bar{x}$ onto $C$ and is denoted by $\Pi_C^g(\bar{x})$.

Proof See p. 70 of [18].
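To illustrate the definition, the following Python sketch (ours; the choices of $g$, the box, and the simplex are assumptions for the illustration, not objects from the paper) computes Bregman projections in $B = \mathbb{R}^n$ for two classical choices of $g$.

```python
# Sketch of Bregman projections in B = R^n for two classical choices of g:
# g(x) = ||x||^2, for which D_g(x, y) = ||x - y||^2 and the Bregman projection
# is the metric projection, and the negative entropy g(x) = sum_i x_i log x_i,
# for which the Bregman projection onto the unit simplex is y / sum(y).
import numpy as np

def D_ent(x, y):
    # Bregman distance of the negative entropy (generalized KL divergence), x, y > 0
    return np.sum(x * np.log(x / y) - x + y)

def proj_box_sq(y, lo, hi):
    # Bregman projection onto a box under g = ||.||^2, i.e., the metric projection
    return np.clip(y, lo, hi)

def proj_simplex_ent(y):
    # Bregman projection onto {x >= 0, sum(x) = 1} under the negative entropy
    return y / y.sum()

assert np.allclose(proj_box_sq(np.array([2.0, -3.0, 0.5]), -1, 1), [1.0, -1.0, 0.5])

y = np.array([0.5, 2.0, 1.0])
p = proj_simplex_ent(y)
rng = np.random.default_rng(0)
# brute-force check that p minimizes D_ent(., y) over random points of the simplex
assert all(D_ent(p, y) <= D_ent(s, y) + 1e-12 for s in rng.dirichlet(np.ones(3), 10000))
print(p)  # [0.142857..., 0.571428..., 0.285714...] = y / y.sum()
```

For $g(x) = \|x\|^2$ the Bregman projection coincides with the metric projection, while the entropy case shows that changing $g$ can turn a constrained minimization into a closed-form normalization; this flexibility is what the algorithm below exploits through (H1)-(H5).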
Proof See Proposition 5 of [19].

Lemma 2.5 Let $T$ be a continuous and quasimonotone operator, and let $y \in C$. If for some $x_0 \in C$ we have $\langle T(y), x_0 - y \rangle \geq 0$, then at least one of the two alternatives stated in Lemma 3.1 of [21] must hold.

Proof See Lemma 3.1 of [21].

Lemma 2.6 If either
(i) $T$ is pseudomonotone on $C$ and $S \neq \emptyset$;
(ii) $T$ is the gradient of $G$, where $G$ is a differentiable quasiconvex function on an open set $K \supset C$ and attains its global minimum on $C$;
(iii) $T$ is quasimonotone on $C$, $T \neq 0$ on $C$, and $C$ is bounded;
(iv) $T$ is quasimonotone on $C$, $T \neq 0$ on $C$, and there exists a positive number $r$ such that, for every $x \in C$ with $\|x\| \geq r$, there exists $y \in C$ with $\|y\| \leq r$ such that $\langle T(x), y - x \rangle \leq 0$;
then $S_D \neq \emptyset$.

Proof See Proposition 2.1 of [22].
Remark 2.2 Strong continuity and uniform continuity are two different concepts, and both imply continuity, whereas the converse implications are not true. Under the assumptions of strong continuity and pseudomonotonicity, the convergence of the produced sequence is proved in [16].
The algorithm

The feasibility of Steps 1 and 2 of Algorithm 3.1 is explained in the following lemma.

Proof For given $x^k \in C$, the feasibility of $z^k$ follows from (H4). If $x^k = \Pi_C^g(z^k)$, then it follows from Lemma 2.1 that $x^k$ is a solution of VI($T$, $C$). If $\Pi_C^g(z^k) \neq x^k$, then Step 2 of the algorithm is well defined: otherwise, the line-search inequality (6) would fail for all nonnegative integers $m$. Since $\gamma \in (0, 1)$, letting $m \to \infty$ in (6), by $\sigma > 0$ and the continuity of $T$ we would obtain $D_g(\Pi_C^g(z^k), x^k) \leq 0$. But $\Pi_C^g(z^k) \neq x^k$ and $g$ is strongly convex, which implies that $D_g(\Pi_C^g(z^k), x^k) > 0$, a contradiction. So $m_k$, $\alpha_k$, and $y^k$ are well defined.
The following lemma shows that Step 3 of Algorithm 3.1 is also feasible.

Lemma 3.2
For all $x \in C$, we have the estimate stated in Lemma 2.5 of [16].

Proof See Lemma 2.5 of [16].
Proof Applying Remark 3.1, $\alpha_k > 0$, and $\sigma \in (0, 1)$, we have $\langle T(y^k), x^k - y^k \rangle > 0$, that is, $x^k \notin H_k$. For all $x^* \in S_D$, we have $\langle T(y^k), x^* - y^k \rangle \leq 0$, from which it follows that $x^* \in C \cap H_k$, so $S_D \subset C \cap H_k$.

Remark 3.2 Clearly, $C \cap H_k$ is closed and convex. It follows from Lemma 2.1 that the generation of the iteration point $x^{k+1}$ in Step 3 is feasible, so Step 3 is well defined. By Lemma 3.3 we know that the hyperplane $H_k$ strictly separates the current iterate from the solutions of VI($T$, $C$).
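With all three steps now in place, it may help to see them together. The following Python sketch is a minimal illustration of a double projection scheme of this type in the Euclidean special case $B = \mathbb{R}^n$ with $g(x) = \frac{1}{2}\|x\|^2$, where every Bregman projection reduces to a metric projection and the method reduces to the scheme of Solodov and Svaiter [7]. The box constraint set, the exact line-search inequality, the parameter values, and the use of Dykstra's algorithm for the projection onto $C \cap H_k$ are our assumptions for the demo, not the paper's exact statement of Algorithm 3.1.

```python
# Minimal sketch: double projection method in R^n with g(x) = 0.5*||x||^2.
# Parameters and the Armijo-type search follow Solodov-Svaiter [7] in spirit.
import numpy as np

def project_box(x, lo, hi):
    # metric projection onto C = [lo, hi]^n
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    # metric projection onto H = {z : <a, z> <= b}
    viol = a @ x - b
    return x - (viol / (a @ a)) * a if viol > 0 else x

def dykstra(x, proj_list, iters=200):
    # Dykstra's algorithm: metric projection onto an intersection of convex sets
    y = x.copy()
    corr = [np.zeros_like(x) for _ in proj_list]
    for _ in range(iters):
        for i, proj in enumerate(proj_list):
            z = proj(y + corr[i])
            corr[i] = y + corr[i] - z
            y = z
    return y

def double_projection(T, lo, hi, x0, sigma=0.5, gamma=0.5, tol=1e-6, max_it=2000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        x_bar = project_box(x - T(x), lo, hi)   # Step 1: x_bar = P_C(x - T(x))
        r = x - x_bar                           # residual; r = 0 iff x solves VI(T, C)
        if np.linalg.norm(r) < tol:
            return x
        m = 0                                   # Step 2: Armijo-type line search
        while T(x + gamma**m * (x_bar - x)) @ r < sigma * (r @ r):
            m += 1
        y = x + gamma**m * (x_bar - x)
        a, b = T(y), T(y) @ y                   # H = {z : <T(y), z - y> <= 0}
        x = dykstra(x, [lambda z: project_box(z, lo, hi),
                        lambda z: project_halfspace(z, a, b)])  # Step 3
    return x

# Demo: T(x) = x^2 (componentwise) is quasimonotone but not pseudomonotone.
T = lambda x: x * x
print(double_projection(T, -1.0, 1.0, np.array([0.8])))  # approaches the solution x* = 0
```

In the Banach setting the two metric projections above become Bregman projections with respect to $g$, and the point $z^k$ satisfies $g'(z^k) = g'(x^k) - T(x^k)$ (as used in the proof of Lemma 3.4 below), so the sketch corresponds to the special case $g' = \mathrm{id}$.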

Lemma 3.4 If $x^k \neq \Pi_C^g(z^k)$, then $T(x^k) \neq 0$.

Proof Since $x^k \neq \Pi_C^g(z^k)$, there exists $y_0 \in C$ such that $\langle g'(z^k) - g'(x^k), y_0 - x^k \rangle > 0$. By the definition of $z^k$ we obtain
$$\langle g'(x^k) - T(x^k) - g'(x^k), y_0 - x^k \rangle = -\langle T(x^k), y_0 - x^k \rangle > 0,$$
which implies $T(x^k) \neq 0$.

Lemma 3.5 Let $C$ be a closed convex subset of $B$, and let $g$ be a continuously differentiable function satisfying (H1) and (H2). Define $h : B \times B \to \mathbb{R}$ by $h(x, v) = \langle T(v), x - v \rangle$ for any given $v \in B$, and take $K(v) = \{x \in C : h(x, v) \leq 0\}$. If $K(v) \neq \emptyset$ and $h(\cdot, \cdot)$ is Lipschitz continuous with respect to the first variable on $C$ with modulus $L > 0$, then the following error bound holds.

Proof First, we prove that, for all $v \in B$, $K(v)$ is a convex set. In fact, for all $x_1, x_2 \in K(v)$ and $\theta \in (0, 1)$, we have
$$h(\theta x_1 + (1-\theta)x_2, v) = \langle T(v), \theta x_1 + (1-\theta)x_2 - v \rangle = \theta h(x_1, v) + (1-\theta)h(x_2, v) \leq 0.$$
So $\theta x_1 + (1-\theta)x_2 \in K(v)$, and $K(v)$ is convex. Since $h$ is continuous, we conclude that $K(v)$ is also a closed set. For all $x \in C \setminus K(v)$, it follows from (H1), (H2), and Lemma 2.1 that there exists a unique $y(x) \in K(v)$ such that $D_g(y(x), x) = \min_{y \in K(v)} D_g(y, x)$. By the definition of $K(v)$ and the Lipschitz continuity of $h(\cdot, \cdot)$ with respect to the first variable on $C$, we obtain (10). Since $g$ is strongly convex, there exists $\alpha > 0$ such that $D_g(y, x) \geq \alpha \|y - x\|^2$ for all $x, y \in B$, which by (10) implies the desired error bound.
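To see the flavor of this error bound in the simplest setting, consider the Euclidean case $g(x) = \|x\|^2$ on $B = \mathbb{R}^n$ (our illustrative choice) and drop the intersection with $C$, so that $K(v)$ is the halfspace $\{x : \langle T(v), x - v \rangle \leq 0\}$ (assuming $T(v) \neq 0$). Then the projection onto $K(v)$ and the distance to $K(v)$ have closed forms:
$$P_{K(v)}(x) = x - \frac{[h(x, v)]_+}{\|T(v)\|^2}\, T(v), \qquad \operatorname{dist}(x, K(v)) = \frac{[h(x, v)]_+}{\|T(v)\|},$$
so the distance from $x$ to $K(v)$ is controlled by the residual $h(x, v)$; Lemma 3.5 provides the Bregman-distance analogue of this bound under (H1), (H2), and the Lipschitz assumption.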

Convergence of the algorithm
Proof (i) Applying the definition of $D_g$, for all $x, y, z \in B$, we have the three-point identity
$$D_g(y, x) = D_g(y, z) + D_g(z, x) + \langle g'(z) - g'(x), y - z \rangle. \qquad (13)$$
Taking $z = x^{k+1}$ and $x = x^k$ in (13), it follows from $x^{k+1} = \Pi_{C \cap H_k}^g(x^k)$ and Lemma 2.1 that, for all $y \in C \cap H_k$,
$$D_g(y, x^{k+1}) \leq D_g(y, x^k) - D_g(x^{k+1}, x^k). \qquad (14)$$
Taking $y = x^* \in S_D$ in (14), we obtain (11).
(ii) It follows from $D_g(x^{k+1}, x^k) \geq 0$ and (11) that the sequence $\{D_g(x^*, x^k)\}$ is nonincreasing and bounded below and hence convergent. This implies that $\{D_g(x^*, x^k)\}$ is a bounded sequence. Using (H2), we obtain $\alpha \|x^* - x^k\|^2 \leq D_g(x^*, x^k)$. Consequently, $\{x^k\}$ is a bounded sequence.
(iv) By (ii) the sequence $\{x^k\}$ is bounded, and $T$ is uniformly continuous on bounded subsets of $B$, which by (H3), (H5), and Lemma 2.4 implies that $\{z^k\}$ is bounded; thus, by Lemma 2.3, $\{\Pi_C^g(z^k)\}$ is bounded. Consequently, $\{y^k\}$ is bounded. Taking into account the uniform continuity of $T$, we obtain that $\{T(y^k)\}$ is also bounded, that is, there exists a positive number $L$ such that $\|T(y^k)\|_* \leq L$ for all $k$.
Then from (5) it follows that, for all $x_1, x_2 \in C$,
$$|h_k(x_1) - h_k(x_2)| = |\langle T(y^k), x_1 - x_2 \rangle| \leq L \|x_1 - x_2\|,$$
that is, $h_k$ is Lipschitz continuous on $C$ with modulus $L$. Combining Lemma 3.3 and Lemma 3.5, we obtain the desired estimate.

Theorem 4.2 Assume that $S_D$ is a nonempty set, $T : B \to B^*$ is uniformly continuous on bounded subsets of $B$, and $g : B \to \mathbb{R}$ satisfies (H1)-(H5). If $C$ is a closed and convex subset of $B$ and $\{x^k\}$ is an infinite sequence generated by Algorithm 3.1, then each weak accumulation point of $\{x^k\}$ is a solution of VI($T$, $C$).
Proof Applying Theorem 4.1(iii) and (iv), we get (16). Since $B$ is a reflexive Banach space and $\{x^k\}$ is bounded by Theorem 4.1(ii), $\{x^k\}$ has at least one weak accumulation point. Let $x^*$ be any weak accumulation point of $\{x^k\}$, and let $\{x^{k_i}\}$ be a subsequence of $\{x^k\}$ weakly converging to $x^*$, that is, $x^{k_i} \rightharpoonup x^*$. We prove that $x^*$ is a solution of VI($T$, $C$) by distinguishing two cases.
Case 1: If $\limsup_{i \to \infty} \alpha_{k_i} > 0$, then there exist a subsequence, without loss of generality still denoted by $\{\alpha_{k_i}\}$, and a constant $\theta > 0$ such that $\alpha_{k_i} > \theta$ for all $i$. Therefore, using (16), we obtain (17), and it follows from Lemma 2.2 that (18) holds. Lemma 2.1 implies (19). It follows from the definition of $z^{k_i}$ in Algorithm 3.1 that (20) holds, which implies (21). Using (H3), (18), and the boundedness of $\{x^{k_i}\}$ and $\{\Pi_C^g(z^{k_i})\}$, for any given $y \in C$, letting $i \to \infty$ on both sides of (21), we obtain (22). Therefore, for any given $\varepsilon > 0$, there exists a positive integer $N$ large enough that, for all $i \geq N$, we have (23). Note that $T(x^{k_i}) \neq 0$ by Lemma 3.4, so we can take $v^{k_i} \in B$ such that $\langle T(x^{k_i}), v^{k_i} \rangle = 1$. Then inequality (23) can be written as (24), which implies, by Lemma 2.5, that at least one of inequalities (25) and (26) must hold. Inequality (26) implies that $x^{k_i}$ is a solution of VI($T$, $C$), which contradicts $x^{k_i} \neq \Pi_C^g(z^{k_i})$. Thus inequality (25) must hold, and it can be equivalently written as (27). From the continuity of $T$ and the boundedness of $\{x^{k_i}\}$, letting $\varepsilon \to 0$, we obtain (28). Taking into account that $x^{k_i} \rightharpoonup x^*$ as $i \to \infty$, we obtain $\langle T(y), y - x^* \rangle \geq 0$ for all $y \in C$, that is, $x^* \in S_D$. It follows from $S_D \subset S$ that $x^*$ is a solution of VI($T$, $C$).
Case 2: If $\lim_{i \to \infty} \alpha_{k_i} = 0$, then the conclusion still holds. In fact, since $\alpha_{k_i} \to 0$, the line-search inequality fails for $m = m_{k_i} - 1$, that is, (30) or, equivalently, (31). From the boundedness of $\{\Pi_C^g(z^{k_i}) - x^{k_i}\}$ and $\lim_{i \to \infty} \alpha_{k_i} = 0$ we obtain (32). It follows from the definition of $\alpha_{k_i}$ that (33). Using the uniform continuity of $T$ on bounded subsets of $B$, $\sigma \in (0, 1)$, and the boundedness of $\{\Pi_C^g(z^{k_i})\}$ and $\{x^{k_i}\}$, we obtain (34). Next, applying a similar argument as in Case 1, we get the desired result.
Now we can state and prove our main convergence result.

Proof Let $\bar{x}$ be any weak accumulation point of $\{x^k\}$, and let $\{x^{k_i}\}$ be a subsequence of $\{x^k\}$ such that $x^{k_i} \rightharpoonup \bar{x}$ as $i \to \infty$. By Theorem 4.2, $\bar{x}$ is a solution of VI($T$, $C$). We next prove that the whole sequence $\{x^k\}$ strongly converges to $\bar{x}$. Indeed, since $g$ is strongly convex, we have (35). The function $g$ is lower semicontinuous and convex and, thus, weakly lower semicontinuous. Hence (36). Since $g'$ is uniformly continuous on bounded subsets of $B$ and $\{x^{k_i}\}$ is bounded, by Lemma 2.4 the sequence $\{g'(x^{k_i})\}$ is bounded. From (35) and (36) we have (37). Since $g$ and $g'$ are uniformly continuous on bounded subsets of $B$, $\{x^{k_i}\}$ is bounded, and
$$D_g(\bar{x}, x^{k_i}) = g(\bar{x}) - g(x^{k_i}) - \langle g'(x^{k_i}), \bar{x} - x^{k_i} \rangle.$$
Letting $i \to \infty$ in this identity and combining it with (37), we obtain $\lim_{i \to \infty} D_g(\bar{x}, x^{k_i}) = 0$.
Applying the convergence of the whole sequence $\{D_g(\bar{x}, x^k)\}$, we get $\lim_{k \to \infty} D_g(\bar{x}, x^k) = 0$.
From Lemma 2.2 it follows that $\lim_{k \to \infty} \|\bar{x} - x^k\| = 0$, that is, the whole sequence $\{x^k\}$ strongly converges to $\bar{x}$.

Conclusion
In this paper, by means of the Bregman projection, we extend the double projection algorithm proposed by Solodov and Svaiter [7] for solving variational inequalities from Euclidean spaces to Banach spaces. Under the assumptions of $S_D \neq \emptyset$, uniform continuity, and quasimonotonicity, we prove that the whole sequence generated by the proposed method converges strongly to a solution of the variational inequality; our proof techniques differ from those presented in [14-17].