A modified subgradient extragradient method for solving monotone variational inequalities
 Songnian He^{1, 2} and
 Tao Wu^{1}
https://doi.org/10.1186/s13660-017-1366-3
© The Author(s) 2017
Received: 18 January 2017
Accepted: 17 April 2017
Published: 27 April 2017
Abstract
In the setting of Hilbert space, a modified subgradient extragradient method is proposed for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function. Our iterative process is relaxed and self-adaptive: each iteration involves only two metric projections onto half-spaces containing the domain, and the step size can be selected adaptively. A weak convergence theorem for our algorithm is proved. We also prove that our method has an \(O(\frac{1}{n})\) convergence rate.
1 Introduction
In 2006, Nadezhkina and Takahashi [3] generalized the above EG method to general Hilbert spaces (including infinite-dimensional spaces) and also established a weak convergence theorem.
In each iteration of the EG method, two projections onto C must be calculated in order to obtain the next iterate \(x_{k+1}\). However, projections onto a general closed convex subset are not easy to compute, and this may greatly affect the efficiency of the EG method. To overcome this weakness, Censor et al. developed the subgradient extragradient method in Euclidean space [4], in which the second projection in (1.4) onto C is replaced with a projection onto a specific constructible half-space, which is in fact one of the subgradient half-spaces. Then, in [5, 6], Censor et al. studied the subgradient extragradient method for solving the VIP in Hilbert spaces. They also proved a weak convergence theorem under the assumption that f is a Lipschitz-continuous and monotone mapping.
The main purpose of this paper is to propose an improved subgradient extragradient method for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function [13], that is, \(C:=\{x\in H \mid c(x)\leq0\}\), where \(c:H\rightarrow R\) is a convex function. In our algorithm, the two projections \(P_{C}\) in (1.3) and (1.4) are replaced with \(P_{C_{k}}\) and \(P_{T_{k}}\), respectively, where \(C_{k}\) and \(T_{k}\) are half-spaces such that \(C_{k}\supset C\) and \(T_{k}\supset C\). \(C_{k}\) is based on the subdifferential inequality, an idea first proposed by Fukushima [14], and \(T_{k}\) is the same half-space as in Censor's method [5].
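The computational gain of this replacement is that, unlike projection onto a general convex set, projection onto a half-space \(\{w \mid \langle a,w\rangle\leq b\}\) has a simple closed form. The following NumPy sketch (our own naming, not from the paper) illustrates this, together with how \(C_{k}\) fits that pattern.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form projection of x onto the half-space {w : <a, w> <= b}."""
    viol = np.dot(a, x) - b
    if viol <= 0 or np.dot(a, a) == 0:
        return x.copy()                    # x already lies in the half-space
    return x - (viol / np.dot(a, a)) * a

# The half-space C_k = {w : c(x_k) + <c'(x_k), w - x_k> <= 0} has this form
# with a = c'(x_k) and b = <c'(x_k), x_k> - c(x_k).
p = project_halfspace(np.array([2.0, 0.0]), np.array([1.0, 0.0]), 1.0)
```

Here `project_halfspace` is a hypothetical helper name; the formula itself is the standard projection onto a half-space.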
It is also worth pointing out that the step size in our algorithm can be selected adaptively; that is, there is no need to know or to estimate the Lipschitz constant of f, so our algorithm is easy to implement.
Our paper is organized as follows. In Section 2, we list some basic definitions, properties and lemmas. In Section 3, the improved subgradient extragradient algorithm and its geometrical intuition are presented. In Section 4, the weak convergence theorem for our method is proved. Finally, in Section 5, we prove that our algorithm has an \(O(\frac{1}{n})\) convergence rate.
2 Preliminaries
A function \(c:H\rightarrow R\) is said to be weakly lower semicontinuous (wlsc) at \(x\in H\), if \(x_{k}\rightharpoonup x\) implies \(c(x)\leq\liminf_{k\rightarrow\infty}c(x_{k})\). We say c is weakly lower semicontinuous on H, if for each \(x\in H\), c is weakly lower semicontinuous at x.
Definition 2.1
Normal cone
Definition 2.2
Maximal monotone operator
It is clear that a monotone mapping T is maximal if and only if, for any \((x,u)\in H\times H\), \(\langle u-v,x-y\rangle\geq0\) for all \((y,v)\in G(T)\) implies \(u\in T(x)\).
The next property is known as the Opial condition and all Hilbert spaces have this property [19].
Lemma 2.3
The following lemma was proved in [20].
Lemma 2.4
3 The modified subgradient extragradient method
In the rest of this paper, we always assume that the following conditions are satisfied.
Condition 3.1
The solution set of \(\operatorname{VI}(C,f)\), denoted by \(\operatorname {SOL}(C,f)\), is nonempty.
Condition 3.2
The mapping \(f:H\rightarrow H\) is monotone and Lipschitz-continuous on H (but its Lipschitz constant need not be known or estimated).
Condition 3.3
 (i)
\(c(x)\) is a convex function;
 (ii)
\(c(x)\) is weakly lower semicontinuous on H;
 (iii)
\(c(x)\) is Gâteaux differentiable on H and \(c'(x)\) is an \(M_{1}\)-Lipschitz-continuous mapping on H;
 (iv)
there exists a positive constant \(M_{2}\) such that \(\Vert f(x) \Vert \leq M_{2} \Vert c'(x) \Vert \) for any \(x\in\partial C\), where ∂C denotes the boundary of C.
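As a concrete illustration (our own example, not from the paper), take C to be the unit ball written as the level set of \(c(x)=\Vert x\Vert^{2}-1\). Then \(c'(x)=2x\) is Lipschitz with \(M_{1}=2\), and for a skew-symmetric (hence monotone) linear map f, condition (iv) holds on the boundary with \(M_{2}=\frac{1}{2}\). A quick numerical sanity check:

```python
import numpy as np

# Illustrative check of Condition 3.3(iv) for c(x) = ||x||^2 - 1 (unit ball)
# and f(x) = A x with A skew-symmetric; the example is ours, not the paper's.
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric => f is monotone
f = lambda x: A @ x
c_grad = lambda x: 2.0 * x                # c'(x), Lipschitz with M1 = 2
M2 = 0.5
for _ in range(1000):
    x = rng.normal(size=2)
    x /= np.linalg.norm(x)                # sample the boundary ||x|| = 1
    # (iv): ||f(x)|| <= M2 * ||c'(x)||, here ||Ax|| = 1 and M2 * ||2x|| = 1
    assert np.linalg.norm(f(x)) <= M2 * np.linalg.norm(c_grad(x)) + 1e-12
```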
Next, we present the modified subgradient extragradient method as follows.
Algorithm 3.4
The modified subgradient extragradient method
 Step 1:
select an initial guess \(x_{0}\in H\) arbitrarily, set \(k=0\), and construct the half-space$$ C_{k}:=\bigl\{ w\in H \mid c(x_{k})+\bigl\langle c'(x_{k}),w-x_{k}\bigr\rangle \leq0\bigr\} ; $$
 Step 2:
given the current iterate \(x_{k}\), compute$$ y_{k}=P_{C_{k}}\bigl(x_{k}-\beta_{k}f(x_{k})\bigr), $$(3.2)where$$\begin{aligned} \beta_{k}=\sigma\rho^{m_{k}},\quad\sigma>0, \rho\in(0,1), \end{aligned}$$(3.3)and \(m_{k}\) is the smallest nonnegative integer such that$$\begin{aligned} \beta_{k}^{2} \bigl\Vert f(x_{k})-f(y_{k}) \bigr\Vert ^{2}+2M \beta_{k} \Vert x_{k}-y_{k} \Vert ^{2}\leq\nu^{2} \Vert x_{k}-y_{k} \Vert ^{2}, \end{aligned}$$(3.4)where \(M=M_{1}M_{2}\) and \(\nu\in(0,1)\);
 Step 3:
calculate the next iterate$$ x_{k+1}=P_{T_{k}}\bigl(x_{k}-\beta_{k}f(y_{k})\bigr), $$(3.5)where$$ T_{k}=\bigl\{ w\in H \mid\bigl\langle x_{k}-\beta _{k}f(x_{k})-y_{k},w-y_{k}\bigr\rangle \leq0\bigr\} , $$(3.6)which is the same half-space as in Censor's method [5].
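The three steps above can be sketched in NumPy on a toy problem. All names, default parameters, and the test problem below are ours, not the paper's; this is a finite-dimensional illustration of Algorithm 3.4, not a definitive implementation.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form projection of x onto the half-space {w : <a, w> <= b}."""
    viol = np.dot(a, x) - b
    if viol <= 0 or np.dot(a, a) == 0:
        return x.copy()                    # x already lies in the half-space
    return x - (viol / np.dot(a, a)) * a

def modified_seg(f, c, c_grad, x0, M, sigma=1.0, rho=0.5, nu=0.9, iters=200):
    """Sketch of Algorithm 3.4 (M = M1 * M2; defaults are illustrative)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = c_grad(x)
        # Step 1: C_k = {w : <c'(x_k), w> <= <c'(x_k), x_k> - c(x_k)}
        a, b = g, np.dot(g, x) - c(x)
        # Step 2: back-track beta_k = sigma * rho**m_k until (3.4) holds;
        # y_k is recomputed for each trial step size.
        beta = sigma
        while True:
            y = project_halfspace(x - beta * f(x), a, b)
            d = x - y
            lhs = beta**2 * np.dot(f(x) - f(y), f(x) - f(y)) \
                + 2 * M * beta * np.dot(d, d)
            if lhs <= nu**2 * np.dot(d, d):
                break
            beta *= rho
        # Step 3: T_k = {w : <t, w> <= <t, y_k>} with t = x_k - beta_k f(x_k) - y_k
        t = x - beta * f(x) - y
        x = project_halfspace(x - beta * f(y), t, np.dot(t, y))
    return x

# Toy VI: C is the unit ball, the level set of c(x) = ||x||^2 - 1, and
# f(x) = A x with A skew-symmetric (monotone); the unique solution is x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = modified_seg(lambda x: A @ x,
                      lambda x: np.dot(x, x) - 1.0,
                      lambda x: 2.0 * x,
                      np.array([0.9, 0.4]), M=1.0)
```

Note that both projections in each iteration reduce to the closed-form half-space projection, which is the point of the method.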
At the end of this section, we list the alternating theorem [21, 22] for the solutions of \(\operatorname{VI}(C,f)\), where C is given by (3.1). This result will be used to prove the convergence theorem of our algorithm in the next section.
Theorem 3.5
 1.
\(f(x^{*})=0\), or
 2.
\(x^{*}\in\partial C\) and there exists a positive constant β such that \(f(x^{*})=-\beta c'(x^{*})\).
4 Convergence theorem of the algorithm
In this section, we prove the weak convergence theorem for Algorithm 3.4. First of all, we give the following lemma, which plays a crucial role in the proof of our main result.
Lemma 4.1
Proof
The subsequent proof is divided into the following two cases.
Case 1: \(f(u)\neq0\).
Case 2: \(f(u)=0\).
Theorem 4.2
Proof
5 Convergence rate of the modified method
Lemma 5.1
Theorem 5.2
Proof
On the other hand, since \(z_{n}\) is a convex combination of \(y_{0}, y_{1}, \ldots, y_{n}\), it is easy to see that \(z_{n}\rightharpoonup z\in\operatorname{SOL}(C,f)\), because \(y_{k}\rightharpoonup z\in\operatorname{SOL}(C,f)\) as proved in Theorem 4.2. The proof is complete. □
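For intuition, the averaged iterate \(z_{n}\) can be formed as a weighted mean of the \(y_{k}\). The specific weights below (the step sizes \(\beta_{k}\), normalized) are an assumption on our part: rate proofs of this type typically use step-size weights, but the paper's exact choice is defined in Lemma 5.1.

```python
import numpy as np

def ergodic_average(ys, betas):
    """z_n = (sum_k beta_k * y_k) / (sum_k beta_k): a convex combination
    of the iterates y_0, ..., y_n (weights are an illustrative assumption)."""
    ys = np.asarray(ys, dtype=float)
    w = np.asarray(betas, dtype=float)
    return (w[:, None] * ys).sum(axis=0) / w.sum()

z = ergodic_average([[1.0, 0.0], [3.0, 0.0]], [1.0, 1.0])
```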
6 Results and discussion
Since the modified subgradient extragradient method proposed in this paper is relaxed and self-adaptive, it is easy to implement. A weak convergence theorem for our algorithm is proved by means of the alternating theorem for the solutions of variational inequalities. Our results effectively improve the existing related results.
7 Conclusion
Although extragradient methods and subgradient extragradient methods have been widely studied, the existing algorithms all face the problem that the projection operator is hard to calculate. This problem is solved effectively by the modified subgradient extragradient method proposed in this paper, since both projections onto the original domain are replaced with projections onto half-spaces, which are easy to calculate. Besides, the step size can be selected adaptively, which means that there is no need to know or to estimate the Lipschitz constant of the operator. Furthermore, we prove that our method has an \(O(\frac{1}{n})\) convergence rate.
Declarations
Acknowledgements
This work was supported by the Foundation of Tianjin Key Lab for Advanced Signal Processing (2016 ASPTJ02).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
1. Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Society for Industrial and Applied Mathematics, Philadelphia (2000)
2. Korpelevich, GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747-756 (1976)
3. Nadezhkina, N, Takahashi, W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)
4. Censor, Y, Gibali, A, Reich, S: Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 61, 1119-1132 (2012)
5. Censor, Y, Gibali, A, Reich, S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318-335 (2011)
6. Censor, Y, Gibali, A, Reich, S: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 26, 827-845 (2011)
7. Yao, YH, Postolache, M, Liou, YC, Yao, ZS: Construction algorithms for a class of monotone variational inequalities. Optim. Lett. 10, 1519-1528 (2016)
8. Yao, YH, Liou, YC, Kang, SM: Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 59, 3472-3480 (2010)
9. Yao, YH, Noor, MA, Liou, YC, Kang, SM: Iterative algorithms for general multivalued variational inequalities. Abstr. Appl. Anal. 2012, 768272 (2012)
10. Yao, YH, Noor, MA, Liou, YC: Strong convergence of a modified extragradient method to the minimum-norm solution of variational inequalities. Abstr. Appl. Anal. 2012, 817436 (2012)
11. Zegeye, H, Shahzad, N, Yao, YH: Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 64, 453-471 (2015)
12. Yao, YH, Shahzad, N: Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 6, 621-628 (2012)
13. He, S, Yang, C: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 942315 (2013)
14. Fukushima, M: A relaxed projection method for variational inequalities. Math. Program. 35, 58-70 (1986)
15. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
16. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry and Nonexpansive Mappings. Dekker, New York (1984)
17. Hiriart-Urruty, JB, Lemaréchal, C: Fundamentals of Convex Analysis. Springer, Berlin (2001)
18. Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)
19. Opial, Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591-597 (1967)
20. Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417-428 (2003)
21. He, S, Xu, HK: Uniqueness of supporting hyperplanes and an alternative to solutions of variational inequalities. J. Glob. Optim. 57, 1375-1384 (2013)
22. Nguyen, HQ, Xu, HK: The supporting hyperplane and an alternative to solutions of variational inequalities. J. Nonlinear Convex Anal. 16, 2323-2331 (2015)
23. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, vols. I and II. Springer Series in Operations Research. Springer, New York (2003)
24. Cai, XJ, Gu, GY, He, BS: On the \(O(\frac{1}{t})\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators. Comput. Optim. Appl. 57, 339-363 (2014)