- Open Access
Strong convergence of extragradient method for generalized variational inequalities in Hilbert space
© Chen et al.; licensee Springer. 2014
- Received: 15 December 2013
- Accepted: 23 May 2014
- Published: 3 June 2014
In this paper, we present a new type of extragradient method for generalized variational inequalities with a multi-valued mapping in an infinite-dimensional Hilbert space. The sequence generated by this method possesses an expansion property with respect to the initial point, and the existence of a solution to the problem can be verified through the behavior of the generated sequence. Furthermore, under mild conditions, we show that the generated sequence converges strongly to the solution of the problem that is closest to the initial point.
- generalized variational inequalities
- extragradient method
- multi-valued mapping
- maximal monotone mapping
where ⟨·,·⟩ stands for the inner product in ℋ. If the multi-valued mapping F is a single-valued mapping from ℋ to ℋ, then the GVIP collapses to the classical variational inequality problem [1, 2].
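Written out in the standard form used in the literature on generalized variational inequalities (see Fang and Peterson; the symbols x*, ξ, and y below are the customary ones and are assumed here), the GVIP reads:

```latex
\text{GVIP: find } x^{*}\in X \text{ and } \xi\in F(x^{*}) \text{ such that}
\quad \langle \xi,\, y-x^{*}\rangle \ge 0 \quad \text{for all } y\in X.
```

When F is single-valued, necessarily ξ = F(x*), and this reduces to the classical variational inequality.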
Furthermore, we establish the strong convergence of the method in the case that the solution set is nonempty, and we show that the generated sequence diverges to infinity if the solution set is empty.
The rest of this paper is organized as follows. In Section 2, we give some related concepts and conclusions needed in the subsequent analysis. In Section 3, we present our designed algorithm and establish the convergence of the algorithm.
For x ∈ ℋ, we denote by P_X(x) the projection of x onto X, that is, P_X(x) = argmin{‖y − x‖ : y ∈ X}. The well-known properties of the projection operator are as follows.
Lemma 2.1 
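The properties in question are standard facts about the metric projection (see, e.g., Polyak's Introduction to Optimization); in the notation above, with P_X the projection onto the closed convex set X:

```latex
\langle x - P_X(x),\, z - P_X(x)\rangle \le 0
  \quad \text{for all } x\in\mathcal{H},\ z\in X;\\
\|P_X(x) - P_X(y)\| \le \|x - y\|
  \quad \text{for all } x, y\in\mathcal{H};\\
\|P_X(x) - z\|^{2} \le \|x - z\|^{2} - \|x - P_X(x)\|^{2}
  \quad \text{for all } x\in\mathcal{H},\ z\in X.
```

The first inequality characterizes the projection; the second states that P_X is nonexpansive.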
- (i) monotone if and only if ⟨u − v, x − y⟩ ≥ 0 for all x, y ∈ X, u ∈ F(x), v ∈ F(y);
- (ii) pseudo-monotone if and only if, for any x, y ∈ X, u ∈ F(x), v ∈ F(y), ⟨v, x − y⟩ ≥ 0 implies ⟨u, x − y⟩ ≥ 0.
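For intuition, the monotonicity inequality can be sanity-checked numerically for a single-valued affine map F(x) = Mx + q (a hypothetical example, not from the paper; such a map is monotone exactly when M + Mᵀ is positive semidefinite):

```python
import numpy as np

rng = np.random.default_rng(0)

# affine map F(x) = M x + q; monotone iff M + M^T is positive semidefinite
M = np.array([[2.0, 1.0], [0.0, 1.0]])  # M + M^T has eigenvalues > 0 here
q = np.array([0.5, -0.5])
F = lambda x: M @ x + q

# sample random pairs (x, y) and verify <F(x) - F(y), x - y> >= 0
monotone = all(
    np.dot(F(x) - F(y), x - y) >= 0
    for x, y in (rng.standard_normal((2, 2)) for _ in range(1000))
)
print(monotone)
```

Note that ⟨F(x) − F(y), x − y⟩ = (x − y)ᵀM(x − y), so the check reduces to a quadratic-form sign condition, independent of q.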
To proceed, we present the definition of maximal monotone multi-valued mapping F.
A monotone multi-valued mapping F is said to be maximal if its graph is not properly contained in the graph of any other monotone operator.
It is clear that a monotone multi-valued mapping F is maximal if and only if, for any pair (x, u) such that ⟨u − v, x − y⟩ ≥ 0 for all y ∈ X and v ∈ F(y), it holds that u ∈ F(x).
upper semi-continuous at x ∈ X if for every open set V containing F(x), there is an open set U containing x such that F(y) ⊂ V for all y ∈ X ∩ U;
lower semi-continuous at x ∈ X if, given any sequence {x^k} in X converging to x and any y ∈ F(x), there exists a sequence {y^k} with y^k ∈ F(x^k) that converges to y;
continuous at x ∈ X if it is both upper semi-continuous and lower semi-continuous at x.
Throughout this paper, we assume that the multi-valued mapping is maximal monotone and continuous on X with nonempty compact convex values, where is a nonempty, closed, and convex set.
The projection residue can then be used to verify membership of the solution set of problem (1.1): a point solves the problem exactly when its projection residue vanishes.
Now we describe the designed algorithm for problem (1.1), whose basic idea is as follows. At each step, compute the projection residue at the current iterate. If the residue is the zero vector, stop; the current iterate is a solution of problem (1.1). Otherwise, find a trial point by a backtracking search from the current iterate along the residue direction, and obtain the new iterate by projecting onto the intersection of X with two halfspaces, associated with the current iterate and the trial point, respectively. Repeat this process until the projection residue is the zero vector.
Step 0: Choose , , .
Set and go to Step 1.
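The predictor–corrector structure underlying the method can be sketched for the simpler single-valued case. This is a classical Korpelevich-type extragradient iteration, not the paper's Algorithm 3.1: it omits the halfspace projection step, and the box constraint set, the affine map F, and all parameter values are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def extragradient(F, proj, x0, tau=0.1, tol=1e-8, max_iter=5000):
    """Classical extragradient sketch for VI(F, X):
    find x* in X with <F(x*), y - x*> >= 0 for all y in X.
    tau must be smaller than 1/L for L-Lipschitz F."""
    x = x0.copy()
    for _ in range(max_iter):
        # predictor: trial point obtained by projecting along F(x)
        y = proj(x - tau * F(x))
        # the projection residue x - P_X(x - tau F(x)) vanishes at solutions
        if np.linalg.norm(x - y) < tol:
            break
        # corrector: re-project using F evaluated at the trial point
        x = proj(x - tau * F(y))
    return x

# strongly monotone affine example: F(x) = M x + q, M symmetric positive definite
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

x_star = extragradient(F, project_box, np.array([1.0, 0.0]))
```

Since M is symmetric positive definite, the VI solution coincides with the minimizer of ½xᵀMx + qᵀx over the box; the unconstrained stationary point (1/3, 1/3) lies inside the box, so the iteration converges to it.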
The following conclusion addresses the feasibility of the stepsize rule (3.1), i.e., the existence of the trial point.
Lemma 3.1 If is not a solution of problem (1.1), then there exists a smallest non-negative integer m satisfying (3.1).
This completes the proof. □
where is a vector in . So, by the definition of and (3.3) it follows that .
the desired result follows. □
Regarding the projection step, we shall prove that the feasible set of the projection is always nonempty, even when the solution set is empty. Therefore the whole algorithm is well defined, in the sense that it generates an infinite sequence.
Lemma 3.3 If the solution set , then for all .
Thus . This shows that for all and the desired result follows. □
Lemma 3.4 Suppose that , then for all .
which implies that the solution set is nonempty. We arrive at a contradiction and the desired result follows. □
In order to establish the convergence of the algorithm, we first show the expansion property of the generated sequence with respect to the initial point.
and the proof is completed. □
From Lemma 3.4, Algorithm 3.1 generates an infinite sequence if the solution set of problem (1.1) is empty. More precisely, we have the following conclusion.
if the solution set is empty.
for any . So, is a bounded sequence.
for some and is a solution of problem (1.1).
Otherwise, a similar argument to the one above leads to the conclusion that any weak accumulation point of is a solution of problem (1.1), which contradicts the emptiness of the solution set, and the conclusion follows. □
We are in a position to prove strong convergence of Algorithm 3.1.
Theorem 3.2 Suppose Algorithm 3.1 generates an infinite sequence . If the solution set is nonempty and the sequence is bounded away from zero, then the sequence converges strongly to a solution such that ; otherwise, . That is, the solution set of problem (1.1) is empty if and only if the sequence generated by Algorithm 3.1 diverges to infinity.
Since was taken as an arbitrary weak accumulation point of , it follows that is the unique weak accumulation point of this sequence. Since is bounded, the whole sequence weakly converges to . On the other hand, we have shown that every weakly convergent subsequence of converges strongly to . Hence, the whole sequence converges strongly to .
For the case that the solution set is empty, the conclusion can be obtained directly from Theorem 3.1. □
This work was supported by the Natural Science Foundation of China (Grant Nos. 11171180, 11101303), and the Specialized Research Fund for the Doctoral Program of Chinese Higher Education (20113705110002). The authors would like to thank the reviewers for their careful reading, insightful comments, and constructive suggestions, which helped improve the presentation of the paper.
- Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 1990, 48: 161–220. doi:10.1007/BF01582255
- Wang YJ, Xiu NH, Zhang JZ: Modified extragradient method for variational inequalities and verification of solution existence. J. Optim. Theory Appl. 2003, 119: 167–183.
- Auslender A, Teboulle M: Lagrangian duality and related multiplier methods for variational inequality problems. SIAM J. Optim. 2000, 10: 1097–1115. doi:10.1137/S1052623499352656
- Ben-Tal A, Nemirovski A: Robust convex optimization. Math. Oper. Res. 1998, 23: 769–805. doi:10.1287/moor.23.4.769
- Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. doi:10.1007/s10957-010-9757-3
- Fang SC, Peterson EL: Generalized variational inequalities. J. Optim. Theory Appl. 1982, 38: 363–383. doi:10.1007/BF00935344
- Fang CJ, He Y: A double projection algorithm for multi-valued variational inequalities and a unified framework of the method. Appl. Math. Comput. 2011, 217: 9543–9551. doi:10.1016/j.amc.2011.04.009
- He Y: Stable pseudomonotone variational inequality in reflexive Banach spaces. J. Math. Anal. Appl. 2007, 330: 352–363. doi:10.1016/j.jmaa.2006.07.063
- Huang NJ: Generalized nonlinear variational inclusions with noncompact valued mappings. Appl. Math. Lett. 1996, 9(3): 25–29. doi:10.1016/0893-9659(96)00026-2
- Li S, Chen G: On relations between multiclass, multicriteria traffic network equilibrium models and vector variational inequalities. J. Syst. Sci. Syst. Eng. 2006, 15(3): 284–297. doi:10.1007/s11518-006-5012-8
- Saigal R: Extension of the generalized complementarity problem. Math. Oper. Res. 1976, 1: 260–266. doi:10.1287/moor.1.3.260
- Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.
- Allevi E, Gnudi A, Konnov IV: The proximal point method for nonmonotone variational inequalities. Math. Methods Oper. Res. 2006, 63: 553–565. doi:10.1007/s00186-005-0052-2
- Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.
- Polyak BT: Introduction to Optimization. Optimization Software Incorporation, Publications Division, New York; 1987.
- Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–78. doi:10.1090/S0002-9947-1970-0282272-5
- Aubin JP, Ekeland I: Applied Nonlinear Analysis. Wiley, New York; 1984.
- Levine N: A decomposition of continuity in topological spaces. Am. Math. Mon. 1961, 68(1): 44–46. doi:10.2307/2311363
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.