Iterative methods for finding the minimum-norm solution of the standard monotone variational inequality problems with applications in Hilbert spaces
- Yu Zhou^{1},
- Haiyun Zhou^{1, 2} and
- Peiyuan Wang^{1}
https://doi.org/10.1186/s13660-015-0659-7
© Zhou et al.; licensee Springer. 2015
Received: 29 October 2014
Accepted: 6 April 2015
Published: 16 April 2015
Abstract
In this paper, we introduce two kinds of iterative methods for finding the minimum-norm solution to the standard monotone variational inequality problems in a real Hilbert space. We then prove that the proposed iterative methods converge strongly to the minimum-norm solution of the variational inequality. Finally, we apply our results to the constrained minimization problem and the split feasibility problem as well as the minimum-norm fixed point problem for pseudocontractive mappings.
1 Introduction
A mapping F is said to be hemicontinuous if it is continuous along line segments with respect to the weak topology; that is, for all \(x_{0},x\in H\), \(F({x_{0}} + {t_{n}}x)\rightharpoonup F{x_{0}}\) as \(t_{n}\rightarrow0\).
Theorem 1.1
However, if F fails to be Lipschitz continuous or strongly monotone, then the result above is false in general. We will assume only that F is a hemicontinuous and merely monotone mapping. Thus, VIP (1.2) is ill-posed, regularization is needed, and a solution is usually constructed by iterative methods.
Questions
- 1.
Can one modify the extragradient method for variational inequalities with a general monotone operator so that the modified algorithm converges strongly?
- 2.
If F is a hemicontinuous and strongly monotone mapping, is the solution of VIP (1.2) unique?
The purpose of this paper is to solve the questions above. We introduce implicit and explicit iterative methods for construction of the solution of the monotone variational inequality problem and prove that our algorithms converge strongly to the minimum-norm solution of variational inequality problem (1.2). Finally, we apply our results to the constrained minimization problem and the split feasibility problem as well as the minimum-norm fixed point problem for pseudocontractive mappings.
2 Preliminaries
For our main results, we shall make use of the following lemmas.
Lemma 2.1
(see [8])
Let C be a nonempty, closed, and convex subset of a real Hilbert space H and let \(A:C\to H\) be a hemicontinuous monotone operator. Then, for \(x^{*}\in C\), the following two statements are equivalent:
- (i)
\(\langle Ax,x - {x^{*}}\rangle \ge0\), \(\forall x \in C\);
- (ii)
\(\langle A{x^{*}},x - {x^{*}}\rangle \ge0\), \(\forall x \in C\).
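As a numerical illustration of Lemma 2.1 (not part of the original argument), the following sketch checks that a point satisfying condition (ii) also satisfies condition (i) on sampled points of C. The operator \(A(x)=x-p\) and the box C are hypothetical choices of ours, picked so that the solution of (ii) is the metric projection of p onto C.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustration of Lemma 2.1 (Minty): for a monotone operator A, a point x*
# satisfying (ii) <Ax*, x - x*> >= 0 on C also satisfies (i) <Ax, x - x*> >= 0.
# Hypothetical example: C = [0,1]^2 and A(x) = x - p (the gradient of
# ||x - p||^2 / 2), whose VI solution on C is the projection of p onto C.
p = np.array([1.5, -0.5])
A = lambda x: x - p
x_star = np.clip(p, 0.0, 1.0)          # P_C(p) = (1.0, 0.0) solves condition (ii)

for _ in range(1000):
    x = rng.uniform(0.0, 1.0, size=2)  # sample x in C
    assert A(x_star) @ (x - x_star) >= -1e-12   # condition (ii)
    assert A(x) @ (x - x_star) >= -1e-12        # condition (i), by monotonicity
print("Minty equivalence verified on samples")
```

That (ii) implies (i) is exactly the monotonicity estimate \(\langle Ax, x-x^{*}\rangle \ge \langle Ax^{*}, x-x^{*}\rangle \ge 0\); the converse direction is where hemicontinuity is used.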
Lemma 2.2
(see [9])
In Lemma 2.2, \(\theta\in K\) is needed. Indeed, if \(A:K \to{X^{*}}\) is a hemicontinuous η-strongly monotone operator, then the restriction that \(\theta\in K\) can be omitted. To prove this, we give the following lemma.
Lemma 2.3
Let K be an unbounded, closed, and convex subset of a reflexive Banach space X. Let \(A:K\to X^{*}\) be a hemicontinuous η-strongly monotone operator. Then, for every \(w^{*}\in X^{*}\), there exists \(u_{0}^{*}\in K\) such that VI (2.1) holds.
Proof
Therefore, \({u_{0}}^{*}\) is a solution of VIP (2.1). □
Lemma 2.4
Proof
Therefore, \(x^{*}=y^{*}\). This completes the proof. □
Lemma 2.5
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(A:C\to H\) be a hemicontinuous monotone operator and let \(\{\gamma_{n}\}\) be a sequence of positive real numbers. Then \({\gamma_{n}}I + A\) is \({\gamma_{n}}\)-strongly monotone for each n.
Proof
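A quick numerical check of Lemma 2.5 (our own illustration, not part of the proof): the rotation operator below is a standard example of a monotone operator that is not strongly monotone, and adding \(\gamma I\) restores γ-strong monotonicity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical illustration of Lemma 2.5: if A is merely monotone, then
# gamma*I + A is gamma-strongly monotone.  Hypothetical example: the
# rotation A(x) = (x2, -x1) satisfies <Ax - Ay, x - y> = 0 (monotone but
# not strongly monotone); adding gamma*I gives the strong monotonicity bound.
A = lambda x: np.array([x[1], -x[0]])
gamma = 0.3

for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = (gamma * x + A(x) - gamma * y - A(y)) @ (x - y)
    assert lhs >= gamma * np.dot(x - y, x - y) - 1e-12
print("gamma*I + A is gamma-strongly monotone on samples")
```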
Lemma 2.6
(see [10])
Assume that \(\{\alpha_{n}\}\) is a sequence of nonnegative real numbers satisfying \(\alpha_{n+1}\le(1-\gamma_{n})\alpha_{n}+\gamma_{n}\sigma_{n}\), \(n\ge0\), where \(\{\gamma_{n}\}\subset(0,1)\) and \(\{\sigma_{n}\}\) is a sequence of real numbers such that:
- (i)
\(\sum_{n = 0}^{\infty}{\gamma_{n}} = \infty\);
- (ii)
either \(\lim{\sup_{n \to\infty}}{\sigma_{n}} \le0\) or \(\sum_{n = 0}^{\infty}|{\gamma_{n}}{\sigma_{n}}| < \infty\).
Then \(\lim_{n\to\infty}\alpha_{n}=0\).
Lemma 2.7
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\to H\) be a mapping and write \(A:=I-T\). Then \(\operatorname{VI}(C,A)=\operatorname{Fix}(P_{C} T)\). In particular, if \(T:C\to C\) is a self-mapping, then \(\operatorname{VI}(C,A)=\operatorname{Fix}(T)\).
Proof
This completes the proof. □
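Lemma 2.7 can be checked numerically on a toy instance (a hypothetical example of ours: the map T and the box C below are not from the paper). A fixed point of \(P_{C}T\) is computed by Picard iteration and then verified to solve \(\operatorname{VI}(C, I-T)\).

```python
import numpy as np

# Sanity check of Lemma 2.7: with A := I - T, a fixed point of P_C T solves
# VI(C, A).  Hypothetical example: C = [0,1]^2 and the nonexpansive (hence
# pseudocontractive) map T(x) = 0.5*x + (0.8, 0.8).
P_C = lambda x: np.clip(x, 0.0, 1.0)
T = lambda x: 0.5 * x + 0.8

x = np.zeros(2)
for _ in range(200):                 # Picard iteration for the contraction P_C T
    x = P_C(T(x))
assert np.allclose(x, P_C(T(x)))     # x is a fixed point of P_C T

A = lambda z: z - T(z)
rng = np.random.default_rng(2)
for _ in range(1000):                # verify <Ax, c - x> >= 0 for sampled c in C
    c = rng.uniform(0.0, 1.0, size=2)
    assert A(x) @ (c - x) >= -1e-9
print("fixed point of P_C T solves VI(C, I - T)")
```

Here the fixed point is the corner \((1,1)\), where \(Ax=(-0.3,-0.3)\) points outward, so the variational inequality holds over the whole box.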
Now we are in a position to state and prove the main results of this paper.
3 Main results
In this section we will introduce two iterative methods (one implicit and the other explicit). First, we introduce the implicit one. In what follows, we assume that \(A:C\to H\) is hemicontinuous and monotone.
Theorem 3.1
Assume that \(\operatorname{VI}(C,A)\neq\emptyset\). Then the sequence \(\{y_{n}\}\) generated by (3.7) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\) which is the minimum-norm solution of VIP (2.2).
Proof
Therefore, \(\{y_{n}\}\) is bounded. Then we know that \(\{y_{n}\}\) has a subsequence \(\{y_{n_{j}}\}\) such that \(y_{n_{j}}\rightharpoonup x^{*}\) as \(j\to\infty\).
Furthermore, without loss of generality, we may assume that \(\{y_{n}\}\) converges weakly to a point \({x^{*}} \in C\).
Since \({y_{n}} \rightharpoonup{x^{*}}\) as \(n\to\infty\), by (3.19) we get \({y_{n}} \to{x^{*}}\) as \(n\to\infty\).
So, the sequence \(\{y_{n}\}\) generated by (3.7) converges in norm to \(x^{*}=P_{\operatorname{VI}(C,A)}\theta\) as \(n\to\infty\).
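Since the implicit scheme (3.7) itself is not reproduced in this excerpt, the following sketch only illustrates the underlying idea, under our own assumptions: by Lemma 2.5, \(A+\gamma I\) is strongly monotone for each \(\gamma>0\), so the regularized problem \(\operatorname{VI}(C, A+\gamma I)\) has a unique solution, and letting \(\gamma\to0\) selects the minimum-norm solution. The operator, the box C, and the inner solver below are all hypothetical choices.

```python
import numpy as np

# Hedged sketch of the implicit idea behind Theorem 3.1 (scheme (3.7) is not
# reproduced here): for each gamma > 0 solve the strongly monotone
# VI(C, A + gamma*I) (cf. Lemma 2.5), then let gamma -> 0.  Hypothetical
# example: C = [1,2]^2 and A = I - T with T the coordinate swap, so that
# VI(C, A) = {(t, t) : 1 <= t <= 2} and the minimum-norm solution is (1, 1).
P_C = lambda x: np.clip(x, 1.0, 2.0)
A = lambda x: x - x[::-1]            # A(x) = (x1 - x2, x2 - x1)

def regularized_solution(gamma, tau=0.02, iters=20000):
    """Inner solver: projected fixed-point iteration for VI(C, A + gamma*I)."""
    y = np.array([2.0, 1.2])
    for _ in range(iters):
        y = P_C(y - tau * (A(y) + gamma * y))
    return y

for gamma in [0.5, 0.1, 0.02]:
    y = regularized_solution(gamma)
    print(gamma, y)                  # each close to the minimum-norm solution (1, 1)
```

In this particular example the regularized solution equals \((1,1)\) for every \(\gamma>0\), which makes the selection of the minimum-norm solution visible already at finite γ.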
Now, we introduce an explicit method and establish its strong convergence.
Theorem 3.2
- (i)
\(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{\beta _{n}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);
- (ii)
\({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);
- (iii)
\(\frac{|{\alpha_{n}} - {\alpha_{n - 1}}| + |{\beta_{n}} - {\beta _{n - 1}}|}{\alpha_{n}^{2}} \to0\) as \(n \to\infty\).
Assume that both \(\{Ax_{n}\}\) and \(\{Ay_{n}\}\) are bounded and that \(\operatorname{VI}(C,A)\neq\emptyset\). Then the iterative sequence \(\{x_{n}\}\) generated by (3.23) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\), which is the minimum-norm solution to VIP (2.2).
Proof
By using Theorem 3.1, we know that \(\{y_{n}\}\) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\).
From conditions (i) and (iii) we know that \(\frac{{\vert {{\alpha_{n}} - {\alpha_{n - 1}}} \vert + \vert {{\beta_{n}} - {\beta_{n - 1}}} \vert }}{{{\alpha_{n}}}} = o({\alpha_{n}})\) and \(\beta_{n}^{2}=o(\alpha_{n})\).
By Lemma 2.6 and condition (ii), we have \(\Vert {{x_{n + 1}} - {y_{n}}} \Vert \to0\), as \(n \to\infty\). It follows that \(\{ {x_{n}}\} \) converges strongly to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\). This completes the proof. □
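The exact recursion (3.23) is not reproduced in this excerpt; since Question 1 concerns modifying the extragradient method, a plausible sketch (ours, not the paper's) is an extragradient step in which A is replaced by the regularized operator \(A+\alpha_{n}I\) and the step size is \(\beta_{n}\). The rotation operator, the box C, and the parameter exponents below are hypothetical choices; the exponents are picked so that conditions (i)-(iii) hold.

```python
import numpy as np

# Hedged sketch in the spirit of Theorem 3.2 (the exact recursion (3.23) is not
# reproduced here): a regularized extragradient step with operator A + alpha_n*I
# and step size beta_n.  Hypothetical example: the rotation A(x) = (x2, -x1) on
# C = [-1,1]^2 is monotone but not strongly monotone, VI(C, A) = {0}, and the
# minimum-norm solution is the origin.
P_C = lambda x: np.clip(x, -1.0, 1.0)
A = lambda x: np.array([x[1], -x[0]])

x = np.array([1.0, 0.5])
for n in range(1, 2001):
    alpha, beta = (n + 1) ** -0.6, (n + 1) ** -0.4   # satisfy conditions (i)-(iii)
    F = lambda z: A(z) + alpha * z                    # regularized operator
    y = P_C(x - beta * F(x))                          # prediction step
    x = P_C(x - beta * F(y))                          # correction step
print(x)   # close to the minimum-norm solution (0, 0)
```

For this rotation the plain projection method spirals, while the extragradient correction contracts; the vanishing regularization \(\alpha_{n}\) then steers the iterates to the minimum-norm element.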
If \(A:C\to H\) is k-Lipschitz continuous and monotone, we have the following convergence result.
Theorem 3.3
- (i)
\(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{\beta _{n}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);
- (ii)
\({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);
- (iii)
\(\frac{{|{\alpha_{n}} - {\alpha_{n - 1}}| + |{\beta_{n}} - {\beta _{n - 1}}|}}{{\alpha_{n}^{2}}} \to0\) as \(n \to\infty\).
Assume that \(\operatorname{VI}(C,A)\neq\emptyset\). Then the iterative sequence \(\{ x_{n}\}\) generated by (3.23) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\), which is the minimum-norm solution of VIP (2.2).
Proof
From Theorem 3.1, we know that \(\{y_{n}\}\) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\). Therefore, it is sufficient to show that \(x_{n+1}-y_{n}\to0\) as \(n\to\infty\).
Remark 3.1
- (1)
- (2)
The recursion (3.23) has the strong convergence property, while (EM) has only the weak convergence property in general.
- (3)
The choice of the parameter sequences \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) in (3.23) does not depend on the Lipschitz constant of A; thus (3.23) remains applicable even when the Lipschitz constant of A is unknown.
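A concrete parameter pair satisfying conditions (i)-(iii) of Theorems 3.2 and 3.3 can be checked directly. The power exponents below are our own choice, not taken from the paper.

```python
# A hypothetical parameter choice satisfying conditions (i)-(iii) of Theorems
# 3.2 and 3.3: alpha_n = (n+1)^(-a), beta_n = (n+1)^(-b) with a = 0.6, b = 0.4.
# These exponents work because b < a (first limit in (i)), a < 2b (second limit
# in (i)), a <= 1 (divergent sum in (ii)), and 2a - b < 1 with a < 1 (iii).
alpha = lambda n: (n + 1) ** -0.6
beta = lambda n: (n + 1) ** -0.4

for n in [10, 10**3, 10**6]:
    print(alpha(n) / beta(n),                     # condition (i): -> 0
          beta(n) ** 2 / alpha(n),                # condition (i): -> 0
          (abs(alpha(n) - alpha(n - 1)) + abs(beta(n) - beta(n - 1)))
          / alpha(n) ** 2)                        # condition (iii): -> 0
```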
Remark 3.2
4 Applications
In this section, we give some applications of our results.
Problem 4.1
By virtue of Theorem 3.3, we can deduce the following convergence result.
Theorem 4.1
Assume that \(S_{B}\ne\emptyset\) and that \(\{x_{n}\}\) is generated by (4.3). Then \(\{x_{n}\}\) converges in norm to \(x^{*}\).
Proof
Next, we turn to consider the split feasibility problem (SFP).
Problem 4.2
Theorem 4.2
Assume that \(\Gamma\ne\emptyset\) and that \(\{x_{n}\}\) is generated by (4.7). Then \(\{ {x_{n}}\} \) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,\nabla g)}}\theta\).
Proof
Note that \(\nabla g(x)=B^{*}(I-P_{Q})Bx\). Hence ∇g is \({\Vert B \Vert ^{2}}\)-Lipschitz continuous and monotone, so by Theorem 3.3 we conclude that \(\{ {x_{n}}\} \) converges in norm to \({x^{*}} = {P_{\operatorname{VI}(C,\nabla g)}}\theta=P_{\Gamma}\theta\). This completes the proof. □
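The scheme (4.7) is not reproduced in this excerpt; as a hedged sketch, the following applies a regularized projected-gradient step to \(g(x)=\frac{1}{2}\Vert (I-P_{Q})Bx\Vert^{2}\), using the gradient formula from the proof above. The matrix B, the boxes C and Q, and the parameter sequences are hypothetical data chosen so that \(\theta\in\Gamma\), making the minimum-norm solution the origin.

```python
import numpy as np

# Hedged sketch of an SFP iteration of the kind analysed in Theorem 4.2 (the
# exact scheme (4.7) is not reproduced here): a regularized projected-gradient
# step for g(x) = ||(I - P_Q)Bx||^2 / 2, with gradient B^T (I - P_Q) B x.
# Hypothetical data: C = [0,2]^2, Q = [0,1]^2, B = [[1, 0.5], [0, 1]]; here
# 0 lies in Gamma = {x in C : Bx in Q}, so the minimum-norm solution is 0.
B = np.array([[1.0, 0.5], [0.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 2.0)
P_Q = lambda y: np.clip(y, 0.0, 1.0)
grad_g = lambda x: B.T @ (B @ x - P_Q(B @ x))

x = np.array([2.0, 0.5])
for n in range(1, 3001):
    alpha, beta = (n + 1) ** -0.6, (n + 1) ** -0.4
    x = P_C(x - beta * (grad_g(x) + alpha * x))
print(x)   # close to P_Gamma(0) = (0, 0)
```

Note that the steps never use \(\Vert B\Vert\), in line with Remark 4.1.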
Finally, we apply our results to the minimum-norm fixed point problem for pseudocontractive mappings.
Theorem 4.3
- (i)
\(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{\beta _{n}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);
- (ii)
\({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);
- (iii)
\(\frac{{|{\alpha_{n}} - {\alpha_{n - 1}}| + |{\beta_{n}} - {\beta _{n - 1}}|}}{{\alpha_{n}^{2}}} \to0\) as \(n \to\infty\).
Proof
Put \(A=I-T\). Since \(T:C\to C\) is a hemicontinuous pseudocontractive mapping, A is a hemicontinuous monotone operator. It follows from Theorem 1.1 that \(\operatorname{VI}(C,A)\ne\emptyset \). From the boundedness of C, we know that \(\{Ax_{n}\}\) and \(\{Ay_{n}\}\) are bounded. By Theorem 3.2, the iterative sequence \(\{x_{n}\}\) converges strongly to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta\). By Lemma 2.7 and noting that T is a self-mapping, we know that \(\operatorname{VI}(C,A)=\operatorname{Fix}(T)\). This completes the proof. □
Theorem 4.4
- (i)
\(\frac{{{\alpha_{n}}}}{{{\beta_{n}}}} \to0\), \(\frac{{\beta _{n}^{2}}}{{{\alpha_{n}}}} \to0\) as \(n \to\infty\);
- (ii)
\({\alpha_{n}} \to0\) as \(n\to\infty\), \(\sum_{n = 1}^{\infty}{{\alpha_{n}}} = \infty\);
- (iii)
\(\frac{{|{\alpha_{n}} - {\alpha_{n - 1}}| + |{\beta_{n}} - {\beta _{n - 1}}|}}{{\alpha_{n}^{2}}} \to0\) as \(n \to\infty\).
Proof
Put \(A=I-T\). Since \(T:C\to C\) is a k-Lipschitz continuous pseudocontractive mapping, A is a \((k+1)\)-Lipschitz continuous monotone operator. By Lemma 2.7 and our assumption, we see that \(\operatorname{VI}(C,A)=\operatorname{Fix}(T)\ne\emptyset\). By Theorem 3.3, the iterative sequence \(\{x_{n}\}\) generated by (4.9) converges strongly to \({x^{*}} = {P_{\operatorname{VI}(C,A)}}\theta=P_{\operatorname{Fix}(T)}\theta\). This completes the proof. □
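Since the recursion (4.9) is not reproduced in this excerpt, the following is only a hedged sketch of the reduction used in the proof: apply the explicit regularized scheme to \(A=I-T\). The coordinate-swap map, the box C, and the parameters are hypothetical choices; the swap is nonexpansive, hence a Lipschitz pseudocontraction, with a whole segment of fixed points.

```python
import numpy as np

# Hedged sketch of the minimum-norm fixed point computation in Theorem 4.4
# (the recursion (4.9) is not reproduced here): apply an explicit regularized
# projection step to A = I - T.  Hypothetical example: the coordinate swap
# T(x1, x2) = (x2, x1) is nonexpansive (hence Lipschitz pseudocontractive) on
# C = [-1,2]^2, Fix(T) is the diagonal segment of C, and the minimum-norm
# fixed point is the origin.
P_C = lambda x: np.clip(x, -1.0, 2.0)
T = lambda x: x[::-1]

x = np.array([2.0, -1.0])
for n in range(1, 3001):
    alpha, beta = (n + 1) ** -0.6, (n + 1) ** -0.4
    x = P_C(x - beta * ((x - T(x)) + alpha * x))
print(x)   # close to the minimum-norm fixed point (0, 0)
```

Among the continuum of fixed points \(\{(t,t)\}\), the vanishing regularization \(\alpha_{n}x\) is what singles out \(P_{\operatorname{Fix}(T)}\theta\).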
Remark 4.1
Theorem 4.2 improves some related results of [10] and [11] in the sense that the iterative parameter sequences do not depend on the norm of the operator B. Theorem 4.3 appears to be new. Theorem 4.4 is similar to Theorem 3.2 of [7], with a different condition (iii) and different arguments.
Declarations
Acknowledgements
This research was supported by the National Natural Science Foundation of China (11071053).
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Authors’ Affiliations
References
- Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
- Duvaut, D, Lions, JL: Inequalities in Mechanics and Physics. Springer, Berlin (1976)
- Zhou, HY, Pei, YW: A simpler explicit iterative algorithm for a class of variational inequalities in Hilbert spaces. J. Optim. Theory Appl. (2013). doi:10.1007/s10957-013-0470-x
- Zhou, HY, Pei, YW: A new iteration method for variational inequalities on set of common fixed points for a finite family of quasi-pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2014, 218 (2014)
- Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)
- Chen, RD, Su, YF, Xu, HK: Regularization and iteration methods for a class of monotone variational inequalities. Taiwan. J. Math. 13, 739-752 (2009)
- Yao, YH, Marino, G, Xu, HK, Liou, YC: Construction of minimum-norm fixed points of pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2014, 206 (2014)
- Minty, GJ: On the maximal domain of a ‘monotone’ function. Mich. Math. J. 8, 135-157 (1961)
- Browder, FE: Nonlinear monotone operators and convex sets in Banach spaces. Bull. Am. Math. Soc. 71, 780-785 (1965)
- Zhou, HY, Pei, YW, Zhou, Y: Minimum-norm fixed point of nonexpansive mappings with applications. Optimization (2013). doi:10.1080/02331934.2013.811667
- Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)