Strong convergence of extragradient method for generalized variational inequalities in Hilbert space
Journal of Inequalities and Applications volume 2014, Article number: 223 (2014)
Abstract
In this paper, we present a new type of extragradient method for generalized variational inequalities with a multivalued mapping in an infinite-dimensional Hilbert space. For this method, the generated sequence possesses an expansion property with respect to the initial point, and the existence of a solution to the problem can be verified through the behavior of the generated sequence. Furthermore, under mild conditions, we show that the generated sequence strongly converges to the solution of the problem that is closest to the initial point.
MSC: 90C30, 15A06.
1 Introduction
Let F be a multivalued mapping from ℋ into {2}^{\mathcal{H}} with nonempty values, where ℋ is a real Hilbert space, and let X be a nonempty, closed, and convex subset of ℋ. The generalized variational inequality problem, abbreviated as GVIP, is to find a vector {x}^{\ast}\in X such that there exists {\omega}^{\ast}\in F({x}^{\ast}) satisfying

\langle {\omega}^{\ast},x-{x}^{\ast}\rangle \ge 0,\quad \mathrm{\forall}x\in X, \qquad (1.1)

where \langle \cdot ,\cdot \rangle stands for the inner product in ℋ. If the multivalued mapping F is a single-valued mapping from ℋ to ℋ, then the GVIP collapses to the classical variational inequality problem [1, 2].
Generalized variational inequalities find applications in economics, transportation equilibrium, engineering sciences, etc., and have received much attention in the past decades [3–11]. It is well known that the extragradient method [5, 12] is a popular solution method which has a contraction property, i.e., the sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} generated by the method satisfies

\parallel {x}^{k+1}-{x}^{\ast}\parallel \le \parallel {x}^{k}-{x}^{\ast}\parallel ,\quad \mathrm{\forall}k\ge 0,

for any solution {x}^{\ast} of the GVIP. It should be noted that the proximal point algorithm also possesses this property [13]. In this paper, inspired by the work in [14] on finding the zeros of maximal monotone operators in a real Hilbert space, we propose a new type of extragradient method for the GVIP which has the following expansion property with respect to the initial point:

\parallel {x}^{k+1}-{x}^{0}\parallel \ge \parallel {x}^{k}-{x}^{0}\parallel ,\quad \mathrm{\forall}k\ge 0.
Furthermore, we establish the strong convergence of the method in the case that the solution set {X}^{\ast} is nonempty, and we show that the generated sequence {\{\parallel {x}^{k}\parallel \}}_{k=0}^{\mathrm{\infty}} diverges to infinity if the solution set is empty.
The rest of this paper is organized as follows. In Section 2, we give some related concepts and conclusions needed in the subsequent analysis. In Section 3, we present our designed algorithm and establish the convergence of the algorithm.
2 Preliminaries
Let x\in \mathcal{H} and let K be a nonempty, closed, and convex subset of ℋ. A point {y}_{0}\in K is said to be the orthogonal projection of x onto K if it is the closest point to x in K, i.e.,

\parallel {y}_{0}-x\parallel =min\{\parallel y-x\parallel \mid y\in K\},

and we denote {y}_{0} by {P}_{K}(x). The well-known properties of the projection operator are as follows.
Lemma 2.1 [15]
Let K be a nonempty, closed, and convex subset of ℋ. Then, for any x,y\in \mathcal{H} and z\in K, the following statements hold:

(i)
\langle {P}_{K}(x)-x,z-{P}_{K}(x)\rangle \ge 0;

(ii)
{\parallel {P}_{K}(x)-{P}_{K}(y)\parallel}^{2}\le {\parallel x-y\parallel}^{2}-{\parallel {P}_{K}(x)-x+y-{P}_{K}(y)\parallel}^{2}.
Remark 2.1 In fact, (i) in Lemma 2.1 also provides a sufficient condition for a vector u to be the projection of the vector x, i.e., u={P}_{K}(x) if and only if u\in K and

\langle u-x,z-u\rangle \ge 0,\quad \mathrm{\forall}z\in K.
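For intuition in a concrete finite-dimensional setting, the characterization in Lemma 2.1(i) and Remark 2.1 can be checked numerically when K is a box in {R}^{n}, where the projection reduces to coordinatewise clipping. The following Python sketch is only an illustration (the set K, the function name proj_box, and the sampling scheme are our assumptions, not part of the paper):

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """Orthogonal projection onto the box K = [lo, hi]^n (coordinatewise clipping)."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=4)              # a point, possibly outside K
px = proj_box(x)                          # P_K(x)
for _ in range(1000):
    z = proj_box(3.0 * rng.normal(size=4))    # an arbitrary point z in K
    # Lemma 2.1(i): <P_K(x) - x, z - P_K(x)> >= 0 for every z in K
    assert (px - x) @ (z - px) >= -1e-12
```

By Remark 2.1, the displayed inequality holding for all z in K is also sufficient for px to be the projection of x.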
Definition 2.1 Let K be a nonempty subset of ℋ. The multivalued mapping F:K\to {2}^{\mathcal{H}} is said to be

(i)
monotone if and only if
\langle u-v,x-y\rangle \ge 0,\quad \mathrm{\forall}x,y\in K,u\in F(x),v\in F(y);
(ii)
pseudomonotone if and only if, for any x,y\in K, u\in F(x), v\in F(y),
\langle u,y-x\rangle \ge 0\quad \Longrightarrow \quad \langle v,y-x\rangle \ge 0.
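Every monotone mapping is pseudomonotone, but not conversely. A standard one-dimensional example (our illustration, not taken from the paper) is the single-valued mapping F(x)=1/(1+{x}^{2}): it is pseudomonotone on R because it is everywhere positive, yet it is not monotone because it is decreasing on (0,+\mathrm{\infty}). A quick Python check:

```python
# F(x) = 1/(1 + x^2): positive and single-valued on R.
F = lambda x: 1.0 / (1.0 + x * x)

# Not monotone: <F(x) - F(y), x - y> < 0 for x = 0, y = 1.
x, y = 0.0, 1.0
assert (F(x) - F(y)) * (x - y) < 0

# Pseudomonotone: since F > 0, F(x)*(y - x) >= 0 forces y >= x,
# which in turn gives F(y)*(y - x) >= 0.  Spot-check on a grid:
grid = [i / 10.0 for i in range(-30, 31)]
for x in grid:
    for y in grid:
        if F(x) * (y - x) >= 0:
            assert F(y) * (y - x) >= 0
```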
To proceed, we present the definition of maximal monotone multivalued mapping F.
Definition 2.2 Let K be a nonempty subset of ℋ. The multivalued mapping F:K\to {2}^{\mathcal{H}} is said to be a maximal monotone operator if F is monotone and its graph

G(F)=\{(x,u)\in K\times \mathcal{H}\mid u\in F(x)\}

is not properly contained in the graph of any other monotone operator.
It is clear that a monotone multivalued mapping F is maximal if and only if, for any (x,u)\in K\times \mathcal{H} such that \langle u-v,x-y\rangle \ge 0 for all (y,v)\in G(F), it holds that u\in F(x).
Definition 2.3 Let K be a nonempty, closed, and convex subset of the Hilbert space ℋ. A multivalued mapping F:K\to {2}^{\mathcal{H}} is said to be

(i)
upper semicontinuous at x\in K if for every open set V containing F(x), there is an open set U containing x such that F(y)\subset V for all y\in K\cap U;

(ii)
lower semicontinuous at x\in K if given any sequence {x}^{k} converging to x and any y\in F(x), there exists a sequence {y}^{k}\in F({x}^{k}) that converges to y;

(iii)
continuous at x\in K if it is both upper semicontinuous and lower semicontinuous at x.
Throughout this paper, we assume that the multivalued mapping F:X\to {2}^{\mathcal{H}} is maximal monotone and continuous on X with nonempty compact convex values, where X\subseteq \mathcal{H} is a nonempty, closed, and convex set.
3 Main results
For any x\in \mathcal{H} and \xi \in F(x), set

r(x,\xi ):=x-{P}_{X}(x-\xi ).

Then the projection residue r(x,\xi ) characterizes the solution set of problem (1.1).
Proposition 3.1 A point x\in X solves problem (1.1) if and only if r(x,\xi )=0 for some \xi \in F(x), i.e.,

x={P}_{X}(x-\xi ).
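Proposition 3.1 can be made concrete in a simple special case (our illustration under assumptions not made in the paper: finite dimension, X a box, and F single-valued and affine). There the residue vanishes exactly at the solution {P}_{X}(c) and is nonzero elsewhere:

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # projection onto X = [lo, hi]^n
    return np.clip(x, lo, hi)

c = np.array([2.0, 0.5])
F = lambda x: x - c                            # single-valued monotone mapping
residue = lambda x: x - proj_box(x - F(x))     # r(x, xi) with xi = F(x)

x_star = proj_box(c)                           # solution of the VI over the box
assert np.allclose(residue(x_star), 0.0)           # r = 0 at the solution
assert np.linalg.norm(residue(np.zeros(2))) > 0    # r != 0 at a non-solution
```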
Now, we give the description of the designed algorithm for problem (1.1), whose basic idea is as follows. At each step of the algorithm, compute the projection residue r({x}^{k},{\xi}^{k}) at the iterate {x}^{k}. If it is the zero vector for some {\xi}^{k}\in F({x}^{k}), then stop with {x}^{k} being a solution of problem (1.1); otherwise, find a trial point {y}^{k} by a backtracking search at {x}^{k} along the residue r({x}^{k},{\xi}^{k}), and obtain the new iterate by projecting {x}^{0} onto the intersection of X with two half-spaces associated with {y}^{k} and {x}^{k}, respectively. Repeat this process until the projection residue is the zero vector.
Algorithm 3.1
Step 0: Choose \sigma ,\gamma \in (0,1), {x}^{0}\in X, k=0.
Step 1: Given the current iterate {x}^{k}, if \parallel r({x}^{k},\xi )\parallel =0 for some \xi \in F({x}^{k}), stop; else take any {\xi}^{k}\in F({x}^{k}) and compute

r({x}^{k},{\xi}^{k})={x}^{k}-{P}_{X}({x}^{k}-{\xi}^{k}).
Take

{y}^{k}={x}^{k}-{\eta}_{k}r({x}^{k},{\xi}^{k}),

where {\eta}_{k}={\gamma}^{{m}_{k}}, with {m}_{k} being the smallest nonnegative integer m for which there exists {\zeta}^{k}\in F({x}^{k}-{\gamma}^{m}r({x}^{k},{\xi}^{k})) such that

\langle {\zeta}^{k},r({x}^{k},{\xi}^{k})\rangle \ge \sigma {\parallel r({x}^{k},{\xi}^{k})\parallel}^{2}. \qquad (3.1)
Step 2: Let {x}^{k+1}={P}_{{H}_{k}^{1}\cap {H}_{k}^{2}\cap X}({x}^{0}), where

{H}_{k}^{1}=\{x\in \mathcal{H}\mid \langle {\zeta}^{k},x-{y}^{k}\rangle \le 0\},\qquad {H}_{k}^{2}=\{x\in \mathcal{H}\mid \langle x-{x}^{k},{x}^{0}-{x}^{k}\rangle \le 0\}.
Set k=k+1 and go to Step 1.
The following conclusion addresses the feasibility of the step-size rule (3.1), i.e., the existence of the point {\zeta}^{k}.
Lemma 3.1 If {x}^{k} is not a solution of problem (1.1), then there exists a smallest nonnegative integer m satisfying (3.1).
Proof By the definition of r({x}^{k},{\xi}^{k}) and Lemma 2.1 (applied with x={x}^{k}-{\xi}^{k} and z={x}^{k}\in X), it follows that

\langle {P}_{X}({x}^{k}-{\xi}^{k})-({x}^{k}-{\xi}^{k}),{x}^{k}-{P}_{X}({x}^{k}-{\xi}^{k})\rangle \ge 0,

which implies

\langle {\xi}^{k},r({x}^{k},{\xi}^{k})\rangle \ge {\parallel r({x}^{k},{\xi}^{k})\parallel}^{2}. \qquad (3.2)

Since \gamma \in (0,1), we get

\underset{m\to +\mathrm{\infty}}{lim}({x}^{k}-{\gamma}^{m}r({x}^{k},{\xi}^{k}))={x}^{k}.

Combining this with the fact that F is continuous, we know that there exists {\zeta}^{m}\in F({x}^{k}-{\gamma}^{m}r({x}^{k},{\xi}^{k})) such that

\underset{m\to +\mathrm{\infty}}{lim}{\zeta}^{m}={\xi}^{k};

hence, since \sigma \in (0,1) and \parallel r({x}^{k},{\xi}^{k})\parallel >0, by (3.2) one has

\langle {\zeta}^{m},r({x}^{k},{\xi}^{k})\rangle \ge \sigma {\parallel r({x}^{k},{\xi}^{k})\parallel}^{2}

for all sufficiently large m. This completes the proof. □
Lemma 3.2 Suppose the solution set {X}^{\ast} is nonempty. Then the half-space {H}_{k}^{1} in Algorithm 3.1 separates the point {x}^{k} from the set {X}^{\ast}. Moreover,

dist({x}^{k},{H}_{k}^{1})\ge \sigma {\eta}_{k}{\parallel r({x}^{k},{\xi}^{k})\parallel}^{2}/\parallel {\zeta}^{k}\parallel .
Proof By the definition of r({x}^{k},{\xi}^{k}) and Algorithm 3.1, we have

{y}^{k}={x}^{k}-{\eta}_{k}r({x}^{k},{\xi}^{k}),

which can be written as

{x}^{k}-{y}^{k}={\eta}_{k}r({x}^{k},{\xi}^{k}).

Then, by this and (3.1), one has

\langle {\zeta}^{k},{x}^{k}-{y}^{k}\rangle ={\eta}_{k}\langle {\zeta}^{k},r({x}^{k},{\xi}^{k})\rangle \ge \sigma {\eta}_{k}{\parallel r({x}^{k},{\xi}^{k})\parallel}^{2}>0, \qquad (3.3)

where {\zeta}^{k} is a vector in F({y}^{k}). So, by the definition of {H}_{k}^{1} and (3.3), it follows that {x}^{k}\notin {H}_{k}^{1}.

On the other hand, for any {x}^{\ast}\in {X}^{\ast}, there exists {\omega}^{\ast}\in F({x}^{\ast}) such that

\langle {\omega}^{\ast},x-{x}^{\ast}\rangle \ge 0,\quad \mathrm{\forall}x\in X.

Since F is monotone on X, one has

\langle v,x-{x}^{\ast}\rangle \ge 0,\quad \mathrm{\forall}x\in X,v\in F(x). \qquad (3.4)

Let x={y}^{k} in (3.4). Then for any {\zeta}^{k}\in F({y}^{k}),

\langle {\zeta}^{k},{x}^{\ast}-{y}^{k}\rangle \le 0,

which implies {x}^{\ast}\in {H}_{k}^{1}. Moreover, since

dist({x}^{k},{H}_{k}^{1})=\langle {\zeta}^{k},{x}^{k}-{y}^{k}\rangle /\parallel {\zeta}^{k}\parallel ,

the desired result follows from (3.3). □
Regarding the projection step, we shall prove that the set {H}_{k}^{1}\cap {H}_{k}^{2}\cap X is always nonempty, even when the solution set {X}^{\ast} is empty. Therefore the whole algorithm is well defined in the sense that it generates an infinite sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}}.
Lemma 3.3 If the solution set {X}^{\ast}\ne \mathrm{\varnothing}, then {X}^{\ast}\subseteq {H}_{k}^{1}\cap {H}_{k}^{2}\cap X for all k\ge 0.
Proof From the analysis in Lemma 3.2, it is sufficient to prove that {X}^{\ast}\subseteq {H}_{k}^{2} for all k\ge 0. The proof proceeds by induction. Obviously, if k=0, then

{H}_{0}^{2}=\{x\in \mathcal{H}\mid \langle x-{x}^{0},{x}^{0}-{x}^{0}\rangle \le 0\}=\mathcal{H}\supseteq {X}^{\ast}.

Now, suppose that

{X}^{\ast}\subseteq {H}_{l}^{2}

holds for some k=l\ge 0. Then

{X}^{\ast}\subseteq {H}_{l}^{1}\cap {H}_{l}^{2}\cap X.

For any {x}^{\ast}\in {X}^{\ast}, by Lemma 2.1 and the fact that

{x}^{l+1}={P}_{{H}_{l}^{1}\cap {H}_{l}^{2}\cap X}({x}^{0}),

we have

\langle {x}^{\ast}-{x}^{l+1},{x}^{0}-{x}^{l+1}\rangle \le 0.

Thus {X}^{\ast}\subseteq {H}_{l+1}^{2}. This shows that {X}^{\ast}\subseteq {H}_{k}^{2} for all k\ge 0, and the desired result follows. □
Lemma 3.4 Suppose that {X}^{\ast}=\mathrm{\varnothing}. Then {H}_{k}^{1}\cap {H}_{k}^{2}\cap X\ne \mathrm{\varnothing} for all k\ge 0.
Proof On the contrary, suppose that {k}_{0}\ge 1 is the smallest positive integer such that

{H}_{{k}_{0}}^{1}\cap {H}_{{k}_{0}}^{2}\cap X=\mathrm{\varnothing}.

Then {x}^{k}, {y}^{k}, {\zeta}^{k} are defined for k=0,1,\dots ,{k}_{0}, and there exists a positive number M such that

\parallel {x}^{k}-{x}^{0}\parallel \le M\quad \mathrm{and}\quad \parallel {y}^{k}-{x}^{0}\parallel \le M

for k=0,1,\dots ,{k}_{0}. Set

h(x)=0\ \mathrm{if}\ \parallel x-{x}^{0}\parallel \le 2M,\qquad h(x)=+\mathrm{\infty}\ \mathrm{otherwise}.

Then h:\mathcal{H}\to R\cup \{+\mathrm{\infty}\} is a lower semicontinuous proper convex function. By the definition of the subgradient, we have

\partial h(x)=\{w\in \mathcal{H}\mid \langle w,y-x\rangle \le 0,\mathrm{\forall}y\ \mathrm{with}\ \parallel y-{x}^{0}\parallel \le 2M\}\quad \mathrm{for}\ \parallel x-{x}^{0}\parallel \le 2M.

So \partial h(x) and

{F}^{\prime}(x):=F(x)+\partial h(x)

are maximal monotone mappings [16]. Furthermore,

{F}^{\prime}(x)=F(x)\quad \mathrm{for\ all}\ x\ \mathrm{with}\ \parallel x-{x}^{0}\parallel <2M,

and {x}^{k}, {y}^{k}, {\zeta}^{k} for k=0,1,\dots ,{k}_{0} also satisfy the conditions of Algorithm 3.1 applied to {F}^{\prime}. Since the domain of {F}^{\prime} is bounded, by the proof of Theorem 2 in [14], we know that {F}^{\prime} has a zero point, i.e., there exists a point \overline{x}\in int(X)\cap \{x\mid \parallel x-{x}^{0}\parallel <2M\} such that

0\in {F}^{\prime}(\overline{x})=F(\overline{x}),

which implies that the solution set {X}^{\ast} is nonempty. We arrive at a contradiction, and the desired result follows. □
In order to establish the convergence of the algorithm, we first show the expansion property of the algorithm w.r.t. the initial point.
Lemma 3.5 Suppose Algorithm 3.1 reaches iteration k+1. Then

{\parallel {x}^{k+1}-{x}^{0}\parallel}^{2}\ge {\parallel {x}^{k}-{x}^{0}\parallel}^{2}+{\parallel {x}^{k+1}-{x}^{k}\parallel}^{2}.

Proof By the iterative process of Algorithm 3.1, one has

{x}^{k+1}={P}_{{H}_{k}^{1}\cap {H}_{k}^{2}\cap X}({x}^{0}).

So {x}^{k+1}\in {H}_{k}^{2} and

\langle {x}^{k+1}-{x}^{k},{x}^{0}-{x}^{k}\rangle \le 0.

From the definition of {H}_{k}^{2}, it follows that

\langle {x}^{k}-{x}^{0},x-{x}^{k}\rangle \ge 0,\quad \mathrm{\forall}x\in {H}_{k}^{2}.

Thus {x}^{k}={P}_{{H}_{k}^{2}}({x}^{0}) by Remark 2.1. Then, from Lemma 2.1(ii) applied with K={H}_{k}^{2}, x={x}^{0}, and y={x}^{k+1}\in {H}_{k}^{2}, we have

{\parallel {x}^{k}-{x}^{k+1}\parallel}^{2}\le {\parallel {x}^{0}-{x}^{k+1}\parallel}^{2}-{\parallel {x}^{k}-{x}^{0}\parallel}^{2},

which can be written as

{\parallel {x}^{k+1}-{x}^{0}\parallel}^{2}\ge {\parallel {x}^{k}-{x}^{0}\parallel}^{2}+{\parallel {x}^{k+1}-{x}^{k}\parallel}^{2},

i.e.,

\parallel {x}^{k+1}-{x}^{0}\parallel \ge \parallel {x}^{k}-{x}^{0}\parallel ,

and the proof is completed. □
From Lemma 3.4, Algorithm 3.1 generates an infinite sequence if the solution set of problem (1.1) is empty. More precisely, we have the following conclusion.
Theorem 3.1 Suppose Algorithm 3.1 generates an infinite sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}}, and assume the sequence {\{{\eta}_{k}\}}_{k=0}^{\mathrm{\infty}} is bounded away from zero. If the solution set {X}^{\ast} is nonempty, then {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} is bounded and each of its weak accumulation points is a solution of problem (1.1). If the solution set {X}^{\ast} is empty, then

\underset{k\to +\mathrm{\infty}}{lim}\parallel {x}^{k}-{x}^{0}\parallel =+\mathrm{\infty}.
Proof For the case that {X}^{\ast}\ne \mathrm{\varnothing}, by Lemma 3.3 and

{x}^{k+1}={P}_{{H}_{k}^{1}\cap {H}_{k}^{2}\cap X}({x}^{0}),

we know that

\parallel {x}^{k+1}-{x}^{0}\parallel \le \parallel {x}^{\ast}-{x}^{0}\parallel

for any {x}^{\ast}\in {X}^{\ast}. So {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} is a bounded sequence.

Then, by Lemma 3.5, the sequence {\{\parallel {x}^{k}-{x}^{0}\parallel \}}_{k=0}^{\mathrm{\infty}} is nondecreasing and bounded, hence convergent, which together with Lemma 3.5 implies that

\underset{k\to +\mathrm{\infty}}{lim}\parallel {x}^{k+1}-{x}^{k}\parallel =0. \qquad (3.5)
On the other hand, by the fact that {x}^{k+1}\in {H}_{k}^{1}, we have

\langle {\zeta}^{k},{x}^{k+1}-{y}^{k}\rangle \le 0, \qquad (3.6)

where {\zeta}^{k} can be chosen as in (3.1). Since

{y}^{k}={x}^{k}-{\eta}_{k}r({x}^{k},{\xi}^{k}),

by (3.6), one has

\langle {\zeta}^{k},{x}^{k+1}-{x}^{k}+{\eta}_{k}r({x}^{k},{\xi}^{k})\rangle \le 0,

which can be written as

{\eta}_{k}\langle {\zeta}^{k},r({x}^{k},{\xi}^{k})\rangle \le \langle {\zeta}^{k},{x}^{k}-{x}^{k+1}\rangle .

Using the Cauchy-Schwarz inequality and (3.1), we obtain

\sigma {\eta}_{k}{\parallel r({x}^{k},{\xi}^{k})\parallel}^{2}\le \parallel {\zeta}^{k}\parallel \parallel {x}^{k}-{x}^{k+1}\parallel . \qquad (3.7)

Since F is continuous with compact values, Proposition 3.11 in [17] implies that \{F({y}^{k}):k\in N\} is a bounded set, and hence the sequence \{{\zeta}^{k}:{\zeta}^{k}\in F({y}^{k})\} is bounded. Thus, by (3.5) and (3.7), it follows that

\underset{k\to +\mathrm{\infty}}{lim}{\eta}_{k}{\parallel r({x}^{k},{\xi}^{k})\parallel}^{2}=0.

By the assumption that {\{{\eta}_{k}\}}_{k=0}^{\mathrm{\infty}} is bounded away from zero, we have

\underset{k\to +\mathrm{\infty}}{lim}\parallel r({x}^{k},{\xi}^{k})\parallel =0.
Since the sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} is bounded, it has weak accumulation points. Without loss of generality, assume that the subsequence \{{x}^{{k}_{j}}\} weakly converges to \overline{x}, i.e.,

{x}^{{k}_{j}}\rightharpoonup \overline{x}\quad \mathrm{as}\ j\to \mathrm{\infty}.

Since r(x,\xi ) is a continuous single-valued operator, from Theorem 2 of [18], we know that r(x,\xi ) is weakly continuous. Thus,

r(\overline{x},\xi )=0

for some \xi \in F(\overline{x}), and \overline{x} is a solution of problem (1.1).
Now, consider the case that the solution set is empty. In this case, the inequality

\parallel {x}^{k+1}-{x}^{0}\parallel \ge \parallel {x}^{k}-{x}^{0}\parallel

still holds, and thus the sequence {\{\parallel {x}^{k}-{x}^{0}\parallel \}}_{k=0}^{\mathrm{\infty}} is nondecreasing. Now, we claim that

\underset{k\to +\mathrm{\infty}}{lim}\parallel {x}^{k}-{x}^{0}\parallel =+\mathrm{\infty}.

Otherwise, {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} is bounded and (3.5) still holds, so an argument similar to the one above shows that each weak accumulation point of {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} is a solution of problem (1.1), which contradicts the emptiness of the solution set, and the conclusion follows. □
We are in a position to prove strong convergence of Algorithm 3.1.
Theorem 3.2 Suppose Algorithm 3.1 generates an infinite sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}}. If the solution set {X}^{\ast} is nonempty and the sequence \{{\eta}_{k}\} is bounded away from zero, then the sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} converges strongly to a solution {x}^{\ast} such that {x}^{\ast}={P}_{{X}^{\ast}}({x}^{0}); otherwise, {lim}_{k\to +\mathrm{\infty}}\parallel {x}^{k}-{x}^{0}\parallel =+\mathrm{\infty}. That is, the solution set of problem (1.1) is empty if and only if the sequence generated by Algorithm 3.1 diverges to infinity.
Proof For the case that the solution set is nonempty, from Theorem 3.1 we know that the sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} is bounded and that every weak accumulation point of {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} is a solution of problem (1.1). Let {\{{x}^{{k}_{j}}\}}_{j=0}^{\mathrm{\infty}} be a weakly convergent subsequence of {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}}, and let {x}^{\ast}\in {X}^{\ast} be its weak limit. Let \overline{x}={P}_{{X}^{\ast}}({x}^{0}). Then, by Lemma 3.3,

\overline{x}\in {H}_{k}^{1}\cap {H}_{k}^{2}\cap X

for all k. So, from the iterative procedure of Algorithm 3.1,

{x}^{k+1}={P}_{{H}_{k}^{1}\cap {H}_{k}^{2}\cap X}({x}^{0}),

one has

\parallel {x}^{{k}_{j}}-{x}^{0}\parallel \le \parallel \overline{x}-{x}^{0}\parallel . \qquad (3.9)

Thus,

{\parallel {x}^{{k}_{j}}-\overline{x}\parallel}^{2}={\parallel {x}^{{k}_{j}}-{x}^{0}\parallel}^{2}+{\parallel {x}^{0}-\overline{x}\parallel}^{2}+2\langle {x}^{{k}_{j}}-{x}^{0},{x}^{0}-\overline{x}\rangle \le 2{\parallel \overline{x}-{x}^{0}\parallel}^{2}+2\langle {x}^{{k}_{j}}-{x}^{0},{x}^{0}-\overline{x}\rangle ,

where the inequality follows from (3.9). Letting j\to \mathrm{\infty}, it follows that

\underset{j\to \mathrm{\infty}}{lim\,sup}{\parallel {x}^{{k}_{j}}-\overline{x}\parallel}^{2}\le 2{\parallel \overline{x}-{x}^{0}\parallel}^{2}+2\langle {x}^{\ast}-{x}^{0},{x}^{0}-\overline{x}\rangle =2\langle \overline{x}-{x}^{\ast},\overline{x}-{x}^{0}\rangle . \qquad (3.10)

Due to Lemma 2.1 and the fact that \overline{x}={P}_{{X}^{\ast}}({x}^{0}) and {x}^{\ast}\in {X}^{\ast}, we have

\langle \overline{x}-{x}^{0},{x}^{\ast}-\overline{x}\rangle \ge 0,\quad \mathrm{i.e.},\quad \langle \overline{x}-{x}^{\ast},\overline{x}-{x}^{0}\rangle \le 0.

Combining this with (3.10) and the fact that {x}^{\ast} is the weak limit of {\{{x}^{{k}_{j}}\}}_{j=0}^{\mathrm{\infty}}, we conclude that the sequence {\{{x}^{{k}_{j}}\}}_{j=0}^{\mathrm{\infty}} strongly converges to \overline{x} and

{x}^{\ast}=\overline{x}.
Since {x}^{\ast} was taken as an arbitrary weak accumulation point of {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}}, it follows that \overline{x} is the unique weak accumulation point of this sequence. Since {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} is bounded, the whole sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} weakly converges to \overline{x}. On the other hand, we have shown that every weakly convergent subsequence of {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} converges strongly to \overline{x}. Hence, the whole sequence {\{{x}^{k}\}}_{k=0}^{\mathrm{\infty}} converges strongly to \overline{x}\in {X}^{\ast}.
For the case that the solution set is empty, the conclusion can be obtained directly from Theorem 3.1. □
References
Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 1990, 48: 161–220. 10.1007/BF01582255
Wang YJ, Xiu NH, Zhang JZ: Modified extragradient method for variational inequalities and verification of solution existence. J. Optim. Theory Appl. 2003, 119: 167–183.
Auslender A, Teboulle M: Lagrangian duality and related multiplier methods for variational inequality problems. SIAM J. Optim. 2000, 10: 1097–1115. 10.1137/S1052623499352656
Ben-Tal A, Nemirovski A: Robust convex optimization. Math. Oper. Res. 1998, 23: 769–805. 10.1287/moor.23.4.769
Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. 10.1007/s10957-010-9757-3
Fang SC, Peterson EL: Generalized variational inequalities. J. Optim. Theory Appl. 1982, 38: 363–383. 10.1007/BF00935344
Fang CJ, He Y: A double projection algorithm for multivalued variational inequalities and a unified framework of the method. Appl. Math. Comput. 2011, 217: 9543–9551. 10.1016/j.amc.2011.04.009
He Y: Stable pseudomonotone variational inequality in reflexive Banach spaces. J. Math. Anal. Appl. 2007, 330: 352–363. 10.1016/j.jmaa.2006.07.063
Huang NJ: Generalized nonlinear variational inclusions with noncompact valued mappings. Appl. Math. Lett. 1996, 9(3): 25–29. 10.1016/0893-9659(96)00026-2
Li S, Chen G: On relations between multiclass, multicriteria traffic network equilibrium models and vector variational inequalities. J. Syst. Sci. Syst. Eng. 2006, 15(3): 284–297. 10.1007/s11518-006-5012-8
Saigal R: Extension of the generalized complementarity problem. Math. Oper. Res. 1976, 1: 260–266. 10.1287/moor.1.3.260
Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.
Allevi E, Gnudi A, Konnov IV: The proximal point method for nonmonotone variational inequalities. Math. Methods Oper. Res. 2006, 63: 553–565. 10.1007/s00186-005-0052-2
Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.
Polyak BT: Introduction to Optimization. Optimization Software Incorporation, Publications Division, New York; 1987.
Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88. 10.1090/S0002-9947-1970-0282272-5
Aubin JP, Ekeland I: Applied Nonlinear Analysis. Wiley, New York; 1984.
Levine N: A decomposition of continuity in topological spaces. Am. Math. Mon. 1961,68(1):44–46. 10.2307/2311363
Acknowledgements
This work was supported by the Natural Science Foundation of China (Grant Nos. 11171180, 11101303), and the Specialized Research Fund for the Doctoral Program of Chinese Higher Education (20113705110002). The authors would like to thank the reviewers for their careful reading, insightful comments, and constructive suggestions, which helped improve the presentation of the paper.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Chen, H., Wang, Y. & Wang, G. Strong convergence of extragradient method for generalized variational inequalities in Hilbert space. J Inequal Appl 2014, 223 (2014). https://doi.org/10.1186/1029-242X-2014-223
Keywords
 generalized variational inequalities
 extragradient method
 multivalued mapping
 maximal monotone mapping