
Strong convergence of extragradient method for generalized variational inequalities in Hilbert space

Abstract

In this paper, we present a new type of extragradient method for generalized variational inequalities with multi-valued mapping in an infinite-dimensional Hilbert space. For this method, the generated sequence possesses an expansion property with respect to the initial point, and the existence of a solution to the problem can be verified through the behavior of the generated sequence. Furthermore, under mild conditions, we show that the sequence generated by the method converges strongly to the solution of the problem closest to the initial point.

MSC: 90C30, 15A06.

1 Introduction

Let $F$ be a multi-valued mapping from $\mathcal{H}$ into $2^{\mathcal{H}}$ with nonempty values, where $\mathcal{H}$ is a real Hilbert space, and let $X$ be a nonempty, closed, and convex subset of $\mathcal{H}$. The generalized variational inequality problem, abbreviated as GVIP, is to find a vector $x^* \in X$ such that there exists $\omega^* \in F(x^*)$ satisfying

$$\langle \omega^*, x - x^* \rangle \ge 0, \quad \forall x \in X, \tag{1.1}$$

where $\langle \cdot, \cdot \rangle$ stands for the inner product in $\mathcal{H}$. If the multi-valued mapping $F$ is a single-valued mapping from $\mathcal{H}$ to $\mathcal{H}$, then the GVIP collapses to the classical variational inequality problem [1, 2].

Generalized variational inequalities find applications in economics, transportation equilibrium, the engineering sciences, etc., and have received much attention in the past decades [3–11]. It is well known that the extragradient method [5, 12] is a popular solution method; it has a contraction property, i.e., the sequence $\{x_k\}_{k=0}^\infty$ generated by the method satisfies

$$\|x_{k+1} - x^*\| \le \|x_k - x^*\|, \quad \forall k \ge 0,$$

for any solution $x^*$ of the GVIP. It should be noted that the proximal point algorithm also possesses this property [13]. In this paper, inspired by the work in [14] on finding zeros of maximal monotone operators in a real Hilbert space, we propose a new type of extragradient method for the GVIP which has the following expansion property with respect to the initial point:

$$\|x_k - x_0\| \le \|x_{k+1} - x_0\|, \quad \forall k.$$

Furthermore, we establish the strong convergence of the method in the case that the solution set $X^*$ is nonempty, and we show that the sequence $\{\|x_k\|\}_{k=0}^\infty$ diverges to infinity if the solution set is empty.

The rest of this paper is organized as follows. In Section 2, we recall some concepts and results needed in the subsequent analysis. In Section 3, we present the designed algorithm and establish its convergence.

2 Preliminaries

Let $x \in \mathcal{H}$ and let $K$ be a nonempty, closed, and convex subset of $\mathcal{H}$. A point $y_0 \in K$ is said to be the orthogonal projection of $x$ onto $K$ if it is the closest point to $x$ in $K$, i.e.,

$$y_0 = \arg\min \{ \|y - x\| \mid y \in K \},$$

and we denote $y_0$ by $P_K(x)$. The well-known properties of the projection operator are as follows.

Lemma 2.1 [15]

Let $K$ be a nonempty, closed, and convex subset of $\mathcal{H}$. Then, for any $x, y \in \mathcal{H}$ and $z \in K$, the following statements hold:

(i) $\langle P_K(x) - x, z - P_K(x) \rangle \ge 0$;

(ii) $\|P_K(x) - P_K(y)\|^2 \le \|x - y\|^2 - \|P_K(x) - x + y - P_K(y)\|^2$.

Remark 2.1 In fact, (i) in Lemma 2.1 also characterizes the projection of the vector $x$: $u = P_K(x)$ if and only if

$$\langle u - x, z - u \rangle \ge 0, \quad \forall z \in K.$$
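To make the projection operator concrete, here is a small numerical sketch, not part of the original paper, assuming $K$ is a Euclidean ball (for which $P_K$ has a simple closed form); it spot-checks property (i) of Lemma 2.1 at random points:

```python
import numpy as np

def project_ball(x, center, radius):
    """Projection onto the closed ball K = {y : ||y - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

rng = np.random.default_rng(0)
c, r = np.zeros(3), 1.0
for _ in range(5):
    x = rng.normal(size=3) * 3.0
    z = project_ball(rng.normal(size=3) * 3.0, c, r)  # some point of K
    Px = project_ball(x, c, r)
    # Lemma 2.1(i): <P_K(x) - x, z - P_K(x)> >= 0 for every z in K
    assert np.dot(Px - x, z - Px) >= -1e-12
```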

Definition 2.1 Let $K$ be a nonempty subset of $\mathcal{H}$. The multi-valued mapping $F: K \to 2^{\mathcal{H}}$ is said to be

(i) monotone if and only if

$$\langle u - v, x - y \rangle \ge 0, \quad \forall x, y \in K,\ u \in F(x),\ v \in F(y);$$

(ii) pseudo-monotone if and only if, for any $x, y \in K$, $u \in F(x)$, $v \in F(y)$,

$$\langle u, y - x \rangle \ge 0 \implies \langle v, y - x \rangle \ge 0.$$

To proceed, we recall the definition of a maximal monotone multi-valued mapping.

Definition 2.2 Let $K$ be a nonempty subset of $\mathcal{H}$. The multi-valued mapping $F: K \to 2^{\mathcal{H}}$ is said to be a maximal monotone operator if $F$ is monotone and its graph

$$G(F) = \{ (x, u) \in K \times \mathcal{H} \mid u \in F(x) \}$$

is not properly contained in the graph of any other monotone operator.

It is clear that a monotone multi-valued mapping $F$ is maximal if and only if, for any $(x, u) \in K \times \mathcal{H}$ such that $\langle u - v, x - y \rangle \ge 0$ for all $(y, v) \in G(F)$, it holds that $u \in F(x)$.

Definition 2.3 Let $K$ be a nonempty, closed, and convex subset of the Hilbert space $\mathcal{H}$. A multi-valued mapping $F: K \to 2^{\mathcal{H}}$ is said to be

(i) upper semi-continuous at $x \in K$ if for every open set $V$ containing $F(x)$, there is an open set $U$ containing $x$ such that $F(y) \subset V$ for all $y \in K \cap U$;

(ii) lower semi-continuous at $x \in K$ if for any sequence $x_k$ converging to $x$ and any $y \in F(x)$, there exists a sequence $y_k \in F(x_k)$ that converges to $y$;

(iii) continuous at $x \in K$ if it is both upper semi-continuous and lower semi-continuous at $x$.

Throughout this paper, we assume that the multi-valued mapping $F: X \to 2^{\mathcal{H}}$ is maximal monotone and continuous on $X$ with nonempty compact convex values, where $X \subseteq \mathcal{H}$ is a nonempty, closed, and convex set.

3 Main results

For any $x \in \mathcal{H}$ and $\xi \in F(x)$, set

$$r(x, \xi) = x - P_X(x - \xi).$$

Then the projection residue $r(x, \xi)$ characterizes the solution set of problem (1.1).

Proposition 3.1 A point $x \in X$ solves problem (1.1) if and only if $r(x, \xi) = 0$, i.e.,

$$x = P_X(x - \xi) \quad \text{for some } \xi \in F(x).$$
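As a quick illustration, not taken from the paper, the following Python lines check the residual test of Proposition 3.1 on a toy instance with $X = \mathbb{R}^2_+$ and the single-valued monotone map $F(x) = x$, whose unique solution is the origin:

```python
import numpy as np

def residual(x, xi, project):
    """Projection residue r(x, xi) = x - P_X(x - xi)."""
    return x - project(x - xi)

project = lambda v: np.maximum(v, 0.0)   # P_X for X = R^2_+
F = lambda x: x                          # single-valued monotone map
print(residual(np.zeros(2), F(np.zeros(2)), project))  # [0. 0.]: solves (1.1)
x = np.array([1.0, 2.0])
print(residual(x, F(x), project))                      # nonzero: not a solution
```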

Now, we describe the designed algorithm for problem (1.1), whose basic idea is as follows. At each step, compute the projection residue $r(x_k, \xi_k)$ at the iterate $x_k$. If it is the zero vector for some $\xi_k \in F(x_k)$, then stop with $x_k$ being a solution of problem (1.1); otherwise, find a trial point $y_k$ by a backtracking search at $x_k$ along the residue $r(x_k, \xi_k)$, and obtain the new iterate by projecting $x_0$ onto the intersection of $X$ with two halfspaces associated with $y_k$ and $x_k$, respectively. Repeat this process until the projection residue is the zero vector.

Algorithm 3.1

Step 0: Choose $\sigma, \gamma \in (0,1)$ and $x_0 \in X$; set $k = 0$.

Step 1: Given the current iterate $x_k$, if $\|r(x_k, \xi)\| = 0$ for some $\xi \in F(x_k)$, stop; otherwise, take any $\xi_k \in F(x_k)$ and compute

$$z_k = P_X(x_k - \xi_k).$$

Take

$$y_k = (1 - \eta_k) x_k + \eta_k z_k,$$

where $\eta_k = \gamma^{m_k}$, with $m_k$ the smallest non-negative integer $m$ for which there exists $\zeta_k \in F(x_k - \gamma^m r(x_k, \xi_k))$ such that

$$\langle \zeta_k, r(x_k, \xi_k) \rangle \ge \sigma \|r(x_k, \xi_k)\|^2. \tag{3.1}$$

Step 2: Let $x_{k+1} = P_{H_k^1 \cap H_k^2 \cap X}(x_0)$, where

$$H_k^1 = \{ x \in \mathcal{H} \mid \langle x - y_k, \zeta_k \rangle \le 0,\ \forall \zeta_k \in F(y_k) \},$$

$$H_k^2 = \{ x \in \mathcal{H} \mid \langle x - x_k, x_0 - x_k \rangle \le 0 \}.$$

Set $k = k + 1$ and go to Step 1.
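The following is a minimal numerical sketch of Algorithm 3.1, not a definitive implementation: it assumes a finite-dimensional space, a single-valued affine monotone $F$, a box $X$ (so that $P_X$ is a componentwise clip), and it delegates the Step 2 projection onto $H_k^1 \cap H_k^2 \cap X$ to SciPy's SLSQP solver. None of these simplifications come from the paper, which treats multi-valued $F$ in a Hilbert space.

```python
import numpy as np
from scipy.optimize import minimize

def solve_gvip(F, project_X, bounds, x0, sigma=0.5, gamma=0.5,
               tol=1e-8, max_iter=100):
    """Sketch of Algorithm 3.1 for single-valued F and a box X."""
    x0 = project_X(np.asarray(x0, dtype=float))  # Step 0 requires x0 in X
    xk = x0.copy()
    for _ in range(max_iter):
        xi = F(xk)
        r = xk - project_X(xk - xi)          # projection residue r(x_k, xi_k)
        if np.linalg.norm(r) <= tol:         # Step 1 stopping test
            return xk
        eta = 1.0                            # backtracking: eta_k = gamma**m_k in (3.1)
        while np.dot(F(xk - eta * r), r) < sigma * np.dot(r, r):
            eta *= gamma
        yk = xk - eta * r                    # trial point y_k
        zeta = F(yk)
        cons = [  # Step 2: project x0 onto H_k^1 ∩ H_k^2 ∩ X (a small QP)
            {"type": "ineq",
             "fun": lambda x, yk=yk, zeta=zeta: -np.dot(x - yk, zeta)},  # H_k^1
            {"type": "ineq",
             "fun": lambda x, xk=xk: -np.dot(x - xk, x0 - xk)},          # H_k^2
        ]
        res = minimize(lambda x: np.dot(x - x0, x - x0), xk,
                       method="SLSQP", bounds=bounds, constraints=cons)
        xk = res.x
    return xk

# Toy instance: X = [0, 10]^2 and the affine monotone map F(x) = A x + b.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
F = lambda x: A @ x + b
lo, hi = np.zeros(2), 10.0 * np.ones(2)
project_X = lambda v: np.clip(v, lo, hi)
print(solve_gvip(F, project_X, list(zip(lo, hi)), x0=np.array([5.0, 5.0])))
# expected: approximately [1/3, 1/3], the point where F vanishes inside X
```

In this finite-dimensional box setting, the Step 2 subproblem reduces to a small quadratic program with two linear inequalities and bound constraints; SLSQP merely stands in for the abstract projection $P_{H_k^1 \cap H_k^2 \cap X}$ of the paper.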

The following lemma addresses the feasibility of the stepsize rule (3.1), i.e., the existence of the point $\zeta_k$.

Lemma 3.1 If $x_k$ is not a solution of problem (1.1), then there exists a smallest non-negative integer $m$ satisfying (3.1).

Proof By the definition of $r(x_k, \xi_k)$ and Lemma 2.1, it follows that

$$\langle P_X(x_k - \xi_k) - (x_k - \xi_k), x_k - P_X(x_k - \xi_k) \rangle \ge 0,$$

which implies

$$\langle \xi_k, r(x_k, \xi_k) \rangle \ge \|r(x_k, \xi_k)\|^2 > 0. \tag{3.2}$$

Since $\gamma \in (0,1)$, we get

$$\lim_{m \to \infty} \bigl( x_k - \gamma^m r(x_k, \xi_k) \bigr) = x_k.$$

Combining this with the continuity of $F$, we know that there exist $\zeta_m \in F(x_k - \gamma^m r(x_k, \xi_k))$ such that

$$\lim_{m \to \infty} \zeta_m = \xi_k;$$

hence, by (3.2), one has

$$\lim_{m \to \infty} \langle \zeta_m, r(x_k, \xi_k) \rangle = \langle \xi_k, r(x_k, \xi_k) \rangle \ge \|r(x_k, \xi_k)\|^2 > 0.$$

Since $\sigma \in (0,1)$, it follows that $\langle \zeta_m, r(x_k, \xi_k) \rangle \ge \sigma \|r(x_k, \xi_k)\|^2$ for all sufficiently large $m$, so a smallest non-negative integer satisfying (3.1) exists. This completes the proof. □

Lemma 3.2 Suppose the solution set $X^*$ is nonempty. Then the halfspace $H_k^1$ in Algorithm 3.1 separates the point $x_k$ from the set $X^*$. Moreover,

$$X^* \subseteq H_k^1 \cap X, \quad \forall k \ge 0.$$

Proof By the definition of $r(x_k, \xi_k)$ and Algorithm 3.1, we have

$$y_k = (1 - \eta_k) x_k + \eta_k z_k = x_k - \eta_k r(x_k, \xi_k),$$

which can be written as

$$\eta_k r(x_k, \xi_k) = x_k - y_k.$$

Then, by this and (3.1), one has

$$\langle \zeta_k, x_k - y_k \rangle > 0, \tag{3.3}$$

where $\zeta_k$ is the vector in $F(y_k)$ chosen by the line search. So, by the definition of $H_k^1$ and (3.3), it follows that $x_k \notin H_k^1$.

On the other hand, for any $x^* \in X^*$ and $x \in X$, we have

$$\langle \omega^*, x - x^* \rangle \ge 0, \quad \omega^* \in F(x^*).$$

Since $F$ is monotone on $X$, one has

$$\langle \omega, x - x^* \rangle \ge \langle \omega^*, x - x^* \rangle \ge 0, \quad \forall \omega \in F(x). \tag{3.4}$$

Let $x = y_k$ in (3.4). Then, for any $\zeta_k \in F(y_k)$,

$$\langle \zeta_k, y_k - x^* \rangle \ge 0,$$

which implies $x^* \in H_k^1$. Since $x^* \in X^*$ was arbitrary and $X^* \subseteq X$, it follows that

$$X^* \subseteq H_k^1 \cap X, \quad \forall k \ge 0,$$

and the desired result follows. □

Regarding the projection step, we shall prove that the set $H_k^1 \cap H_k^2 \cap X$ is always nonempty, even when the solution set $X^*$ is empty. Therefore, the whole algorithm is well defined in the sense that it generates an infinite sequence $\{x_k\}_{k=0}^\infty$.

Lemma 3.3 If the solution set $X^* \ne \emptyset$, then $X^* \subseteq H_k^1 \cap H_k^2 \cap X$ for all $k \ge 0$.

Proof From the analysis in Lemma 3.2, it suffices to prove that $X^* \subseteq H_k^2$ for all $k \ge 0$. The proof is by induction. Obviously, for $k = 0$,

$$X^* \subseteq H_0^2 = \mathcal{H}.$$

Now, suppose that

$$X^* \subseteq H_k^2$$

holds for $k = l \ge 0$. Then

$$X^* \subseteq H_l^1 \cap H_l^2 \cap X.$$

For any $x^* \in X^*$, by Lemma 2.1 and the fact that

$$x_{l+1} = P_{H_l^1 \cap H_l^2 \cap X}(x_0),$$

we have

$$\langle x^* - x_{l+1}, x_0 - x_{l+1} \rangle \le 0.$$

Thus $X^* \subseteq H_{l+1}^2$. This shows that $X^* \subseteq H_k^2$ for all $k \ge 0$, and the desired result follows. □

Lemma 3.4 Suppose that $X^* = \emptyset$. Then $H_k^1 \cap H_k^2 \cap X \ne \emptyset$ for all $k \ge 0$.

Proof Suppose, on the contrary, that $k_0$ is the smallest non-negative integer such that

$$H_{k_0}^1 \cap H_{k_0}^2 \cap X = \emptyset.$$

Then $x_k, y_k, \zeta_k$ are defined for $k = 0, 1, \ldots, k_0$, and there exists a positive number $M$ such that

$$\{ x_k \mid 0 \le k \le k_0 \} \subseteq B(x_0, M)$$

and

$$\{ x_k - \xi_k \mid 0 \le k \le k_0,\ \xi_k \in F(x_k) \} \subseteq B(x_0, M),$$

where

$$B(x_0, M) = \{ x \in \mathcal{H} \mid \|x - x_0\| \le M \}.$$

Set

$$h(x) = \begin{cases} 0, & \text{if } x \in \operatorname{int}(X) \cap B(x_0, 2M), \\ +\infty, & \text{otherwise.} \end{cases}$$

Then $h: \mathcal{H} \to \mathbb{R} \cup \{+\infty\}$ is a lower semi-continuous proper convex function. By the definition of the subgradient, we have

$$\partial h(x) = \begin{cases} \{0\}, & \text{if } x \in \operatorname{int}(X) \cap \{ x \mid \|x - x_0\| < 2M \}, \\ \{ \lambda (x - x_0) \mid \lambda \ge 0 \}, & \text{if } x \in \operatorname{int}(X) \cap \{ x \mid \|x - x_0\| = 2M \}, \\ \emptyset, & \text{otherwise.} \end{cases}$$

So $\partial h$ and

$$F' = F + \partial h$$

are both maximal monotone mappings [16]. Furthermore,

$$F'(x) = F(x), \quad \text{if } x \in \operatorname{int}(X) \cap \{ x \mid \|x - x_0\| < 2M \},$$

and $x_k, y_k, \zeta_k$ for $k = 0, 1, \ldots, k_0$ also satisfy the conditions of Algorithm 3.1 applied to $F'$. Since the domain of $F'$ is bounded, by the proof of Theorem 2 in [14], we know that $F'$ has a zero, i.e., there exists a point $\bar{x} \in \operatorname{int}(X) \cap \{ x \mid \|x - x_0\| < 2M \}$ such that

$$0 \in F'(\bar{x}) = F(\bar{x}),$$

which implies that the solution set $X^*$ is nonempty. This is a contradiction, and the desired result follows. □

To establish the convergence of the algorithm, we first show its expansion property with respect to the initial point.

Lemma 3.5 Suppose Algorithm 3.1 reaches iteration $k+1$. Then

$$\|x_{k+1} - x_k\|^2 + \|x_k - x_0\|^2 \le \|x_{k+1} - x_0\|^2.$$

Proof By the iterative process of Algorithm 3.1, one has

$$x_{k+1} = P_{H_k^1 \cap H_k^2 \cap X}(x_0).$$

So $x_{k+1} \in H_k^2$ and

$$P_{H_k^2}(x_{k+1}) = x_{k+1}.$$

From the definition of $H_k^2$, it follows that

$$\langle z - x_k, x_0 - x_k \rangle \le 0, \quad \forall z \in H_k^2.$$

Thus $x_k = P_{H_k^2}(x_0)$ by Remark 2.1. Then, from Lemma 2.1(ii), we have

$$\|P_{H_k^2}(x_{k+1}) - P_{H_k^2}(x_0)\|^2 \le \|x_{k+1} - x_0\|^2 - \|P_{H_k^2}(x_{k+1}) - x_{k+1} + x_0 - P_{H_k^2}(x_0)\|^2,$$

which can be written as

$$\|x_{k+1} - x_k\|^2 \le \|x_{k+1} - x_0\|^2 - \|x_k - x_0\|^2,$$

i.e.,

$$\|x_{k+1} - x_k\|^2 + \|x_k - x_0\|^2 \le \|x_{k+1} - x_0\|^2,$$

and the proof is completed. □

From Lemma 3.4, Algorithm 3.1 generates an infinite sequence if the solution set of problem (1.1) is empty. More precisely, we have the following conclusion.

Theorem 3.1 Suppose Algorithm 3.1 generates an infinite sequence $\{x_k\}_{k=0}^\infty$, and assume the sequence $\{\eta_k\}_{k=0}^\infty$ is bounded away from zero. If the solution set $X^*$ is nonempty, then $\{x_k\}_{k=0}^\infty$ is bounded and each of its weak accumulation points is a solution of problem (1.1). Otherwise, if the solution set $X^*$ is empty, then

$$\lim_{k \to +\infty} \|x_k - x_0\| = +\infty.$$

Proof For the case that $X^* \ne \emptyset$, by Lemma 3.3 and

$$x_{k+1} = P_{H_k^1 \cap H_k^2 \cap X}(x_0),$$

we know that

$$\|x_{k+1} - x_0\| \le \|x^* - x_0\|$$

for any $x^* \in X^*$. So $\{x_k\}_{k=0}^\infty$ is a bounded sequence.

Then, by Lemma 3.5, the sequence $\{\|x_k - x_0\|\}_{k=0}^\infty$ is nondecreasing and bounded, hence convergent, and Lemma 3.5 then implies that

$$\lim_{k \to \infty} \|x_{k+1} - x_k\|^2 = 0. \tag{3.5}$$

On the other hand, by the fact that $x_{k+1} \in H_k^1$, we have

$$\langle x_{k+1} - y_k, \zeta_k \rangle \le 0, \tag{3.6}$$

where $\zeta_k$ can be chosen as in (3.1). Since

$$y_k = (1 - \eta_k) x_k + \eta_k z_k = x_k - \eta_k r(x_k, \xi_k),$$

by (3.6), one has

$$\langle x_{k+1} - y_k, \zeta_k \rangle = \langle x_{k+1} - x_k + \eta_k r(x_k, \xi_k), \zeta_k \rangle \le 0,$$

which can be written as

$$\eta_k \langle r(x_k, \xi_k), \zeta_k \rangle \le \langle x_k - x_{k+1}, \zeta_k \rangle.$$

Using the Cauchy-Schwarz inequality and (3.1), we obtain

$$\eta_k \sigma \|r(x_k, \xi_k)\|^2 \le \eta_k \langle r(x_k, \xi_k), \zeta_k \rangle \le \|x_{k+1} - x_k\| \|\zeta_k\|. \tag{3.7}$$

Since $F$ is continuous with compact values, Proposition 3.11 in [17] implies that $\{F(y_k) : k \in \mathbb{N}\}$ is a bounded set, and hence the sequence $\{\zeta_k : \zeta_k \in F(y_k)\}$ is bounded. Thus, by (3.5) and (3.7), it follows that

$$\lim_{k \to \infty} \eta_k \|r(x_k, \xi_k)\|^2 = 0.$$

By the assumption that $\{\eta_k\}_{k=0}^\infty$ is bounded away from zero, we have

$$\lim_{k \to \infty} \|r(x_k, \xi_k)\|^2 = 0. \tag{3.8}$$

Since the sequence $\{x_k\}_{k=0}^\infty$ is bounded, it has weak accumulation points. Without loss of generality, assume that the subsequence $\{x_{k_j}\}$ converges weakly to $\bar{x}$, i.e.,

$$x_{k_j} \rightharpoonup \bar{x} \quad \text{as } j \to \infty.$$

Since $r(x, \xi)$ is a continuous single-valued operator, from Theorem 2 of [18] we know that $r(x, \xi)$ is weakly continuous. Thus,

$$\|r(\bar{x}, \xi)\| = \lim_{j \to \infty} \|r(x_{k_j}, \xi_{k_j})\| = 0$$

for some $\xi \in F(\bar{x})$, and by Proposition 3.1, $\bar{x}$ is a solution of problem (1.1).

Now, consider the case that the solution set is empty. In this case, the inequality

$$\|x_{k+1} - x_k\|^2 + \|x_k - x_0\|^2 \le \|x_{k+1} - x_0\|^2$$

still holds, so the sequence $\{\|x_k - x_0\|\}_{k=0}^\infty$ is again nondecreasing. We claim that

$$\lim_{k \to +\infty} \|x_k - x_0\| = +\infty.$$

Otherwise, $\{x_k\}_{k=0}^\infty$ is bounded, (3.5) still holds, and an argument similar to the one above shows that any weak accumulation point of $\{x_k\}_{k=0}^\infty$ is a solution of problem (1.1), which contradicts the emptiness of the solution set. The conclusion follows. □

We are now in a position to prove the strong convergence of Algorithm 3.1.

Theorem 3.2 Suppose Algorithm 3.1 generates an infinite sequence $\{x_k\}_{k=0}^\infty$. If the solution set $X^*$ is nonempty and the sequence $\{\eta_k\}$ is bounded away from zero, then $\{x_k\}_{k=0}^\infty$ converges strongly to the solution $x^* = P_{X^*}(x_0)$; otherwise, $\lim_{k \to +\infty} \|x_k - x_0\| = +\infty$. That is, the solution set of problem (1.1) is empty if and only if the sequence generated by Algorithm 3.1 diverges to infinity.

Proof For the case that the solution set is nonempty, from Theorem 3.1 we know that the sequence $\{x_k\}_{k=0}^\infty$ is bounded and that every weak accumulation point of $\{x_k\}_{k=0}^\infty$ is a solution of problem (1.1). Let $\{x_{k_j}\}_{j=0}^\infty$ be a weakly convergent subsequence of $\{x_k\}_{k=0}^\infty$, and let $x^* \in X^*$ be its weak limit. Let $\bar{x} = P_{X^*}(x_0)$. Then, by Lemma 3.3,

$$\bar{x} \in H_{k_j - 1}^1 \cap H_{k_j - 1}^2 \cap X$$

for all $j$. So, from the iterative procedure of Algorithm 3.1,

$$x_{k_j} = P_{H_{k_j - 1}^1 \cap H_{k_j - 1}^2 \cap X}(x_0),$$

one has

$$\|x_{k_j} - x_0\| \le \|\bar{x} - x_0\|. \tag{3.9}$$

Thus,

$$\begin{aligned} \|x_{k_j} - \bar{x}\|^2 &= \|x_{k_j} - x_0 + x_0 - \bar{x}\|^2 \\ &= \|x_{k_j} - x_0\|^2 + \|x_0 - \bar{x}\|^2 + 2 \langle x_{k_j} - x_0, x_0 - \bar{x} \rangle \\ &\le \|\bar{x} - x_0\|^2 + \|x_0 - \bar{x}\|^2 + 2 \langle x_{k_j} - x_0, x_0 - \bar{x} \rangle, \end{aligned}$$

where the inequality follows from (3.9). Letting $j \to \infty$ and using $x_{k_j} \rightharpoonup x^*$, it follows that

$$\limsup_{j \to \infty} \|x_{k_j} - \bar{x}\|^2 \le 2 \|\bar{x} - x_0\|^2 + 2 \langle x^* - x_0, x_0 - \bar{x} \rangle = 2 \langle x^* - \bar{x}, x_0 - \bar{x} \rangle. \tag{3.10}$$

By Lemma 2.1 and the fact that $\bar{x} = P_{X^*}(x_0)$ and $x^* \in X^*$, we have

$$\langle x^* - \bar{x}, x_0 - \bar{x} \rangle \le 0.$$

Combining this with (3.10) and the fact that $x^*$ is the weak limit of $\{x_{k_j}\}_{j=0}^\infty$, we conclude that the sequence $\{x_{k_j}\}_{j=0}^\infty$ converges strongly to $\bar{x}$ and

$$x^* = \bar{x} = P_{X^*}(x_0).$$

Since $x^*$ was an arbitrary weak accumulation point of $\{x_k\}_{k=0}^\infty$, it follows that $\bar{x}$ is the unique weak accumulation point of this sequence. As $\{x_k\}_{k=0}^\infty$ is bounded, the whole sequence converges weakly to $\bar{x}$. On the other hand, we have shown that every weakly convergent subsequence of $\{x_k\}_{k=0}^\infty$ converges strongly to $\bar{x}$. Hence, the whole sequence $\{x_k\}_{k=0}^\infty$ converges strongly to $\bar{x} \in X^*$.

For the case that the solution set is empty, the conclusion follows directly from Theorem 3.1. □
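To see the divergence branch of Theorem 3.2 concretely, the following toy run, assuming the hypothetical solve_gvip sketch given after Algorithm 3.1 is in scope, uses $X = \mathbb{R}$ and the constant monotone map $F(x) \equiv 1$; no $x^*$ can satisfy (1.1) for this instance, so $X^*$ is empty, and the iterates march away from $x_0$:

```python
import numpy as np

F = lambda x: np.ones(1)        # constant map: (1.1) has no solution on X = R
project_X = lambda v: v         # P_X is the identity
for k in (5, 10, 20):
    xk = solve_gvip(F, project_X, bounds=[(None, None)],
                    x0=np.zeros(1), max_iter=k)
    print(k, np.linalg.norm(xk))  # ||x_k - x_0|| grows without bound (here, like k)
```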

References

  1. Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 1990, 48: 161–220. doi:10.1007/BF01582255

  2. Wang YJ, Xiu NH, Zhang JZ: Modified extragradient method for variational inequalities and verification of solution existence. J. Optim. Theory Appl. 2003, 119: 167–183.

  3. Auslender A, Teboulle M: Lagrangian duality and related multiplier methods for variational inequality problems. SIAM J. Optim. 2000, 10: 1097–1115. doi:10.1137/S1052623499352656

  4. Ben-Tal A, Nemirovski A: Robust convex optimization. Math. Oper. Res. 1998, 23: 769–805. doi:10.1287/moor.23.4.769

  5. Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. doi:10.1007/s10957-010-9757-3

  6. Fang SC, Peterson EL: Generalized variational inequalities. J. Optim. Theory Appl. 1982, 38: 363–383. doi:10.1007/BF00935344

  7. Fang CJ, He Y: A double projection algorithm for multi-valued variational inequalities and a unified framework of the method. Appl. Math. Comput. 2011, 217: 9543–9551. doi:10.1016/j.amc.2011.04.009

  8. He Y: Stable pseudomonotone variational inequality in reflexive Banach spaces. J. Math. Anal. Appl. 2007, 330: 352–363. doi:10.1016/j.jmaa.2006.07.063

  9. Huang NJ: Generalized nonlinear variational inclusions with noncompact valued mappings. Appl. Math. Lett. 1996, 9(3): 25–29. doi:10.1016/0893-9659(96)00026-2

  10. Li S, Chen G: On relations between multiclass, multicriteria traffic network equilibrium models and vector variational inequalities. J. Syst. Sci. Syst. Eng. 2006, 15(3): 284–297. doi:10.1007/s11518-006-5012-8

  11. Saigal R: Extension of the generalized complementarity problem. Math. Oper. Res. 1976, 1: 260–266. doi:10.1287/moor.1.3.260

  12. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.

  13. Allevi E, Gnudi A, Konnov IV: The proximal point method for nonmonotone variational inequalities. Math. Methods Oper. Res. 2006, 63: 553–565. doi:10.1007/s00186-005-0052-2

  14. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.

  15. Polyak BT: Introduction to Optimization. Optimization Software Incorporation, Publications Division, New York; 1987.

  16. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–78. doi:10.1090/S0002-9947-1970-0282272-5

  17. Aubin JP, Ekeland I: Applied Nonlinear Analysis. Wiley, New York; 1984.

  18. Levine N: A decomposition of continuity in topological spaces. Am. Math. Mon. 1961, 68(1): 44–46. doi:10.2307/2311363


Acknowledgements

This work was supported by the Natural Science Foundation of China (Grant Nos. 11171180, 11101303), and the Specialized Research Fund for the Doctoral Program of Chinese Higher Education (20113705110002). The authors would like to thank the reviewers for their careful reading, insightful comments, and constructive suggestions, which helped improve the presentation of the paper.

Author information


Correspondence to Haibin Chen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Chen, H., Wang, Y. & Wang, G. Strong convergence of extragradient method for generalized variational inequalities in Hilbert space. J Inequal Appl 2014, 223 (2014). https://doi.org/10.1186/1029-242X-2014-223
