A modified subgradient extragradient method for solving monotone variational inequalities

In the setting of Hilbert spaces, a modified subgradient extragradient method is proposed for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function. Our iterative process is relaxed and self-adaptive: each iteration involves only the calculation of two metric projections onto half-spaces containing the domain, and the step size can be selected in an adaptive way. A weak convergence theorem for our algorithm is proved. We also prove that our method has an O(1/n) convergence rate.


Introduction
Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ∥·∥. The variational inequality problem (VIP) is to find a point x* ∈ C such that

⟨f(x*), x - x*⟩ ≥ 0 for all x ∈ C,

where C is a nonempty closed convex subset of H and f : C → H is a given mapping. This problem and its solution set are denoted by VI(C, f) and SOL(C, f), respectively. We always assume that SOL(C, f) ≠ ∅. The variational inequality problem VI(C, f) has received much attention due to its applications in a large variety of problems arising in structural analysis, economics, optimization, operations research and the engineering sciences; see [-] and the references therein. It is well known that VI(C, f) is equivalent to the fixed point problem of finding a point x* ∈ C such that []

x* = P_C(x* - λ f(x*)),

where λ is an arbitrary positive constant and P_C denotes the metric projection of H onto C. Many algorithms for VI(C, f) are based on this fixed point formulation. Korpelevich [] proposed an algorithm for solving the problem in the Euclidean space R^n, known as the extragradient method (EG):

x_0 ∈ C, y_n = P_C(x_n - λ f(x_n)), x_{n+1} = P_C(x_n - λ f(y_n)),

where λ is some positive number. She proved that if f is monotone and κ-Lipschitz-continuous and λ is selected such that λ ∈ (0, 1/κ), then the two sequences {x_n} and {y_n} generated by the EG method converge to the same point z ∈ SOL(C, f). Nadezhkina and Takahashi [] later generalized the EG method to general Hilbert spaces (including infinite-dimensional spaces) and also established the weak convergence theorem.
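For intuition, the following is a minimal numerical sketch of the EG iteration, assuming a feasible set with a cheap projection (here the unit Euclidean ball) and an affine monotone operator. The operator, the projection helper, and all parameter values are illustrative choices, not data from this paper.

```python
import numpy as np

def extragradient(f, proj, x0, lam, tol=1e-8, max_iter=10_000):
    """Korpelevich's extragradient method: two projections per iteration."""
    x = x0
    for _ in range(max_iter):
        y = proj(x - lam * f(x))       # predictor step
        x_next = proj(x - lam * f(y))  # corrector step
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Illustrative problem: f(x) = A x + b with A skew-symmetric (hence monotone,
# Lipschitz with constant ||A||), and C the closed unit Euclidean ball.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, -0.5])
f = lambda x: A @ x + b
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))  # P_C for the unit ball

kappa = np.linalg.norm(A, 2)          # Lipschitz constant of f
z = extragradient(f, proj_ball, np.zeros(2), lam=0.9 / kappa)
```

The choice lam = 0.9/kappa respects Korpelevich's condition λ ∈ (0, 1/κ).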
In each iteration of the EG method, two projections onto C need to be calculated in order to obtain the next iterate x_{k+1}. However, projections onto a general closed convex subset are not easily computed, and this can greatly affect the efficiency of the EG method. To overcome this weakness, Censor et al. developed the subgradient extragradient method in Euclidean space [], in which the second projection onto C in the EG iteration was replaced with a projection onto a specific constructible half-space, which is in fact one of the subgradient half-spaces. Then, in [, ], Censor et al. studied the subgradient extragradient method for solving the VIP in Hilbert spaces, and they proved a weak convergence theorem under the assumption that f is a Lipschitz-continuous and monotone mapping.
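The practical point is that the projection onto a half-space has a closed form: for T = {x ∈ H | ⟨a, x⟩ ≤ b} with a ≠ 0, one has P_T(w) = w - max{0, ⟨a, w⟩ - b}/∥a∥² · a, so no inner optimization is needed. A short Python helper (the function name is ours) as a sketch:

```python
import numpy as np

def proj_halfspace(w, a, b):
    """Project w onto the half-space {x : <a, x> <= b}, assuming a != 0."""
    excess = np.dot(a, w) - b
    if excess <= 0.0:
        return w                          # w is already feasible
    return w - (excess / np.dot(a, a)) * a
```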
The main purpose of this paper is to propose an improved subgradient extragradient method for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function [], that is, C := {x ∈ H | c(x) ≤ 0}, where c : H → R is a convex function. In our algorithm, the two projections P_C in the EG iteration are replaced with P_{C_k} and P_{T_k}, respectively, where C_k and T_k are half-spaces such that C_k ⊃ C and T_k ⊃ C. The half-space C_k is based on the subdifferential inequality, an idea first proposed by Fukushima [], and T_k is the same half-space as in Censor's method [].
It is also worth pointing out that the step size in our algorithm can be selected in an adaptive way; that is, there is no need to know or to estimate the Lipschitz constant of f, and therefore our algorithm is easy to implement.
Our paper is organized as follows. In Section 2, we list some basic definitions, properties and lemmas. In Section 3, the improved subgradient extragradient algorithm and its geometrical intuition are presented. In Section 4, the weak convergence theorem for our method is proved. Finally, we prove that our algorithm has an O(1/n) convergence rate in the last section.

Preliminaries
In this section, we list some basic concepts and lemmas which are useful for constructing the algorithm and analyzing its convergence. Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ∥·∥, and let C be a closed convex subset of H. We write x_k ⇀ x and x_k → x to indicate that the sequence {x_k}_{k=0}^∞ converges weakly and strongly to x, respectively. For each point x ∈ H, there exists a unique nearest point in C, denoted by P_C(x), that is,

∥x - P_C(x)∥ ≤ ∥x - y∥ for all y ∈ C.

The mapping P_C : H → C is called the metric projection of H onto C. It is well known that P_C is characterized by the inequality

⟨x - P_C(x), y - P_C(x)⟩ ≤ 0 for all y ∈ C,

and that in a Hilbert space the identity

∥tx + (1 - t)y∥² = t∥x∥² + (1 - t)∥y∥² - t(1 - t)∥x - y∥²

holds for all t ∈ [0, 1] and x, y ∈ H. A convex function c : H → R is said to be subdifferentiable at a point x ∈ H if the set

∂c(x) := {ξ ∈ H | c(y) ≥ c(x) + ⟨ξ, y - x⟩ for all y ∈ H}

is not empty, where each element of ∂c(x) is called a subgradient of c at x, ∂c(x) is the subdifferential of c at x, and the inequality above is said to be the subdifferential inequality of c at x.
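To illustrate the subdifferential inequality and the half-space it generates, consider the hypothetical example c(x) = ∥x∥² - 1, whose level set {x | c(x) ≤ 0} is the closed unit ball; since c is differentiable, ∂c(x) = {2x}. The sketch below (function names are ours) builds the relaxed half-space of the kind used in Section 3:

```python
import numpy as np

# Illustrative level-set description of the closed unit ball:
#   C = {x : c(x) <= 0},  c(x) = ||x||^2 - 1  (convex and differentiable,
#   so the subdifferential reduces to the gradient c'(x) = 2x).
c = lambda x: np.dot(x, x) - 1.0
c_prime = lambda x: 2.0 * x

def relaxed_halfspace(xk):
    """Half-space {x : <a, x> <= b} from the subdifferential inequality at xk.

    Since c(x) >= c(xk) + <c'(xk), x - xk> for all x, every x with c(x) <= 0
    satisfies c(xk) + <c'(xk), x - xk> <= 0, i.e. the half-space contains C.
    """
    a = c_prime(xk)
    b = np.dot(a, xk) - c(xk)
    return a, b
```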
A mapping f : H → H is said to be monotone if ⟨f(x) - f(y), x - y⟩ ≥ 0 for all x, y ∈ H. If there exists κ > 0 such that ∥f(x) - f(y)∥ ≤ κ∥x - y∥ for all x, y ∈ H, then f is also said to be a κ-Lipschitz-continuous mapping.
A multivalued operator T : H → 2^H with graph G(T) := {(x, u) | u ∈ T(x)} is said to be monotone if ⟨u - v, x - y⟩ ≥ 0 for all (x, u), (y, v) ∈ G(T), and it is maximal monotone if, in addition, the graph G(T) of T is not properly contained in the graph of any other monotone operator.
It is clear that a monotone operator T is maximal if and only if, for any (x, u) ∈ H × H, ⟨u - v, x - y⟩ ≥ 0 for every (y, v) ∈ G(T) implies u ∈ T(x). We also use the following fact: if {x_k}_{k=0}^∞ is a sequence in H such that ∥x_{k+1} - x∥ ≤ ∥x_k - x∥ holds for every x ∈ C, then {P_C(x_k)}_{k=0}^∞ converges strongly to some z ∈ C.

The modified subgradient extragradient method
In this section, we give our algorithm for solving VI(C, f) in the setting of Hilbert spaces, where C is a level set given as follows:

C := {x ∈ H | c(x) ≤ 0},

with c : H → R a convex function. In the rest of this paper, we always assume that the following conditions are satisfied.
where ∂C denotes the boundary of C.
Next, we present the modified subgradient extragradient method as follows.
Algorithm . (The modified subgradient extragradient method) Step : select an initial guess x  ∈ H arbitrarily, set k =  and construct the half-space Step : given the current iteration x k , compute and m k is the smallest nonnegative integer, such that where M = M  M  and ν ∈ (, ).
Step : calculate the next iterate, where which is the same half-space as Censor's method [].

Convergence theorem of the algorithm
In this section, we prove the weak convergence theorem for Algorithm 3.1. First of all, we give the following lemma, which plays a crucial role in the proof of our main result.
Lemma . Let {x k } ∞ k= and {y k } ∞ k= be the two sequences generated by Algorithm .. Let u ∈ SOL(C, f ) and let β k be selected as (.) and (.). Then, under the Conditions ., . and ., we have Proof Taking u ∈ SOL(C, f ) arbitrarily, for all k ≥ , using (.) and the monotonicity of f , we have By the definition of T k , we get Substituting (.) into the last inequality of (.), thus we obtain The subsequent proof is divided into following two cases.
Case : f (u) = . Using Theorem ., there exists a β u >  such that f (u) = -β u c (u). By the subdifferential inequality, we have Noting the fact that c(u) =  due to u ∈ ∂C, we have By the definition of C k , we have using the subdifferential inequality again, Adding the above two inequalities, we obtain Combining (.) and (.), we have by using (iii) and (iv) of Condition . where M is defined as before. Substituting (.) into the last inequality of (.), we obtain Finally, from the condition of β k given by (.), we get Case : f (u) = . From (.), we can easily get Obviously, (.) implies Thus, (.) follows from the combination of (.) and (.).
Indeed, substituting (.) into (.), we get Let m be the smallest nonnegative integer, such that where κ is the Lipschitz constant of f . Noting that f (x k )f (y k ) ≤ κ x ky k , we assert from (.) and (.) that m k ≤ m, which implies Theorem . Assume that Conditions .-. hold. Then the two sequences {x k } ∞ k= and {y k } ∞ k= generated by Algorithm . converge weakly to the same point z ∈ SOL(C, f ), furthermore (.) Proof By Lemma ., Hence, In addition, Using the Cauchy-Schwartz inequality, hence, the sequence {y k } ∞ k= is also bounded. Let ω(x k ) be the set of weak limit points of {x k } ∞ k= , i.e., Since the sequence {x k } ∞ k= is bounded, ω(x k ) = ∅. Taking z ∈ ω(x k ) arbitrarily, there exists some subsequence {x k j } ∞ j= of {x k } ∞ k= , such that Due to y k ∈ C k and the definition of C k , we have then, using the Cauchy-Schwartz inequality again, According to (iii) in Condition ., we can deduce that c (x) is bounded on any bounded sets of H, so there exists M >  such that c (x k ) ≤ M for all k ≥ , and then According to (ii) in Condition ., we have Hence, z ∈ C. Now, we turn to showing z ∈ SOL(C, f ). Define where N C (v) is defined by (.). Obviously, T is a maximal monotone operator. For arbitrary (v, w) ∈ G(T), we have equivalently, Setting y = z in (.), we get On the other hand, by the definition of y k and (.), we have for all k ≥ . Using (.) and (.), we obtain By virtue of (.), (.) and Condition ., taking j → ∞ in (.), we have Since T is a maximal monotone operator, (.) means that  ∈ T(z) and consequently z ∈ T - () = SOL(C, f ). Now we are in a position to verify that x k z (k → ∞). In fact, if there exists an- ing the fact that { x ku } ∞ k= is decreasing for all u ∈ SOL(C, f ), we obtain by using This is a contradiction, soz = z. Consequently, we have x k z (k → ∞) and y k z Finally, we show that z = lim k→∞ P SOL(C,f ) (x k ). Put u k = P SOL(C,f ) (x k ), using (.) again and z ∈ SOL(C, f ), On the other hand, since z n is a convex combination of y  , y  , . . . , y n , it is easy to see that z n z ∈ SOL(C, f ) due to the fact that y k z ∈ SOL(C, f ) proved by Theorem .. The proof is complete.

Convergence rate of the algorithm

For n ≥ 0, set Υ_n := β_0 + β_1 + ⋯ + β_n and let z_n := (β_0 y_0 + β_1 y_1 + ⋯ + β_n y_n)/Υ_n. Since z_n is a convex combination of y_0, y_1, . . . , y_n and y_k ⇀ z ∈ SOL(C, f) by Theorem 4.1, it is easy to see that z_n ⇀ z ∈ SOL(C, f). Let β = σρ^m. From (.), β_k ≥ β holds for all k ≥ 0, and this together with (.) leads to Υ_n ≥ (n + 1)β, which means that the modified subgradient extragradient method has an O(1/n) convergence rate. In fact, for any bounded subset D ⊂ C and given accuracy ε > 0, our algorithm achieves the prescribed accuracy on D after at most O(1/ε) iterations.
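For orientation, the display below records the typical shape of such an ergodic estimate for extragradient-type methods, measured by the dual gap function; the precise constants in the paper's inequality (.) may differ, so this is a hedged sketch rather than a restatement:

```latex
% Standard ergodic rate for extragradient-type methods (sketch): with
% \Upsilon_n = \sum_{k=0}^{n}\beta_k and
% z_n = \Upsilon_n^{-1}\sum_{k=0}^{n}\beta_k y_k, one typically obtains
\[
  \sup_{x \in D}\,\langle f(x),\, z_n - x\rangle
  \;\le\; \frac{\sup_{x \in D}\|x_0 - x\|^2}{2\,\Upsilon_n}
  \;\le\; \frac{\sup_{x \in D}\|x_0 - x\|^2}{2(n+1)\beta}
  \;=\; O\!\left(\frac{1}{n}\right).
\]
```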

Results and discussion
Since the modified subgradient extragradient method proposed in this paper is relaxed and self-adaptive, it is easily implemented. A weak convergence theorem for our algorithm is proved with the help of the alternating theorem for the solutions of variational inequalities. The results in this paper effectively improve the existing related results.

Conclusion
Although extragradient methods and subgradient extragradient methods have been widely studied, the existing algorithms all face the problem that the projection onto the feasible set may be hard to calculate. This problem is addressed effectively by the modified subgradient extragradient method proposed in this paper, since both projections onto the original domain are replaced with projections onto half-spaces, which are easily calculated. Besides, the step size can be selected in an adaptive way, which means that there is no need to know or to estimate the Lipschitz constant of the operator. Furthermore, we prove that our method has an O(1/n) convergence rate.