Weak convergence theorem for variational inequality problems with monotone mapping in Hilbert space
- Ming Tian^{1, 2} and
- Bing-Nan Jiang^{1}
https://doi.org/10.1186/s13660-016-1237-3
© Tian and Jiang 2016
Received: 8 June 2016
Accepted: 9 November 2016
Published: 17 November 2016
Abstract
The variational inequality problem plays an important role in nonlinear analysis. The main purpose of this paper is to propose an iterative method, based on the extragradient method, for finding an element of the set of solutions of a variational inequality problem with a monotone and Lipschitz continuous mapping in a Hilbert space. We prove a weak convergence theorem and, as applications, obtain three weak convergence theorems for the equilibrium problem, the constrained convex minimization problem, and the split feasibility problem.
1 Introduction
The variational inequality problem is a generalization of the nonlinear complementarity problem. It is widely used in economics, engineering, mechanics, signal processing, image processing, and other fields. Variational inequalities first arose from problems in mechanics in the early 1960s; in 1964 the existence and uniqueness of solutions of variational inequalities were established for the first time, and a series of papers followed. By the 1970s the variational inequality problem was being applied in many fields, and since the 1990s it has played an increasingly important role in nonlinear analysis.
In 1976, Korpelevich [1] proposed the following so-called extragradient method for solving the variational inequality problem in the finite-dimensional Euclidean space \(\mathbb{R}^{n}\).
Theorem 1.1
[1]
In this paper, based on the extragradient method, we introduce an iterative method for finding an element of the set of solutions of a variational inequality problem with a monotone and Lipschitz continuous mapping in a Hilbert space, and we prove a weak convergence theorem. As applications, this result can be used to solve equilibrium problems, constrained convex minimization problems, and split feasibility problems.
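The extragradient iteration of [1] combines a prediction step and a correction step, each followed by a projection onto C. The following sketch runs it on an illustrative instance not taken from the paper: \(A(x)=Mx+q\) with M skew-symmetric (hence monotone and Lipschitz with constant \(\|M\|=1\)) and \(C=[0,1]^{2}\), so the projection is coordinatewise clipping.

```python
import numpy as np

# Extragradient method (Korpelevich) for the VI: find x* in C with
# <A x*, y - x*> >= 0 for all y in C.  Illustrative instance (not from
# the paper): A(x) = M x + q, M skew-symmetric, C = [0,1]^2.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-0.5, 0.5])
A = lambda x: M @ x + q
proj_C = lambda x: np.clip(x, 0.0, 1.0)

lam = 0.5                  # step size; convergence needs 0 < lam < 1/L, L = ||M|| = 1
x = np.array([1.0, 0.0])
for _ in range(500):
    y = proj_C(x - lam * A(x))   # prediction step
    x = proj_C(x - lam * A(y))   # correction step

print(x)  # converges to (0.5, 0.5), the interior point where A vanishes
```

Note that plain projected-gradient iterates would circle around the solution for this skew-symmetric A; the extra evaluation of A at the predicted point is what forces convergence.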
2 Preliminaries
Lemma 2.1
[2]
Lemma 2.2
[2]
Lemma 2.3
[3]
Lemma 2.4
[4]
- (i)
\(\lim_{n\rightarrow\infty}\|x_{n}-u\|\) exists for each \(u\in C\);
- (ii)
\(\omega_{w}(x_{n})\subset C\).
Lemma 2.5
[5]
3 Main results
The main task of this article is to find an element of the set of solutions of a variational inequality problem with a monotone and Lipschitz continuous mapping in a Hilbert space. We obtain a weak convergence theorem.
Theorem 3.1
Proof
Since \(\{x_{n}\}\) is bounded, there is a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) that converges weakly to a point z. We prove that \(z\in \operatorname{VI}(C,A)\). From (3.5) and (3.9), we have \(y_{n_{i}}\rightharpoonup z\) and \(x_{n_{i}+1}\rightharpoonup z\).
4 Application
The method of Theorem 3.1 is useful for nonlinear analysis and optimization problems in Hilbert spaces. In this section we derive from Theorem 3.1 three weak convergence theorems, for the equilibrium problem, the constrained convex minimization problem, and the split feasibility problem.
Lemma 4.1
- (A1)
\(F(x,x)=0\) for all \(x\in C\);
- (A2)
for each \(x\in C\), \(y\mapsto F(x,y)\) is convex and differentiable.
Proof
Conversely, if \(z\in \operatorname{VI}(C,S)\), that is, \(\langle\nabla F_{y}(z,y)|_{y=z}, y-z\rangle\geq0\) for all \(y\in C\), then, since \(y\mapsto F(z,y)\) is convex, \(F(z,y)\geq F(z,z)+\langle\nabla F_{y}(z,y)|_{y=z}, y-z\rangle\geq F(z,z)=0\) for all \(y\in C\). □
Applying Theorem 3.1 and Lemma 4.1, we obtain the following result.
Theorem 4.2
Lemma 4.3
Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let f be a convex function of H into \(\mathbb{R}\). If f is differentiable, then z is a solution of (4.3) if and only if \(z\in \operatorname{VI}(C,\nabla f)\).
Proof
Applying Theorem 3.1 and Lemma 4.3, we obtain the following result.
Theorem 4.4
Proof
Since f is convex, we see that ∇f is monotone. Putting \(A=\nabla f\) in Theorem 3.1, we obtain the desired result by Lemma 4.3. □
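The equivalence in Lemma 4.3 can be illustrated numerically: z minimizes a differentiable convex f over C exactly when \(z=P_{C}(z-\lambda\nabla f(z))\) for any \(\lambda>0\). The instance below is hypothetical, chosen so the constrained minimizer is known in closed form.

```python
import numpy as np

# Hypothetical instance: f(x) = ||x - a||^2 / 2 over the box C = [0,1]^3,
# with a outside C, so the constrained minimizer is simply P_C(a).
a = np.array([2.0, -1.0, 0.5])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
grad_f = lambda x: x - a

z = proj_C(a)                        # constrained minimizer of f over C
lam = 0.7                            # any positive step size works here
fixed_point = proj_C(z - lam * grad_f(z))
print(np.allclose(z, fixed_point))   # True: z satisfies the VI fixed-point characterization
```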
Lemma 4.5
[8]
- (i)
\(z\in \operatorname{VI}(C,B^{*}(I-P_{Q})B)\);
- (ii)
\(z=P_{C}(I-\lambda B^{*}(I-P_{Q})B)z\);
- (iii)
\(z\in C\cap B^{-1}Q\),
Lemma 4.6
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let B be a bounded linear operator of \(H_{1}\) into \(H_{2}\) such that \(B\neq0\). Let Q be a nonempty closed convex subset of \(H_{2}\). Then \(B^{*}(I-P_{Q})B\) is monotone and \(\|B\|^{2}\)-Lipschitz continuous.
Proof
For all \(x, y\in H_{1}\), since \(I-P_{Q}\) is firmly nonexpansive, we have \(\langle x-y, B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By\rangle = \langle Bx-By, (I-P_{Q})Bx-(I-P_{Q})By\rangle \geq \|(I-P_{Q})Bx-(I-P_{Q})By\|^{2}\geq0\). Moreover, \(\|B^{*}(I-P_{Q})Bx-B^{*}(I-P_{Q})By\| \leq \|B\|\|(I-P_{Q})Bx-(I-P_{Q})By\| \leq \|B\|\|Bx-By\| \leq \|B\|^{2}\|x-y\|\). Then \(B^{*}(I-P_{Q})B\) is monotone and \(\|B\|^{2}\)-Lipschitz continuous. □
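The conclusion of Lemma 4.6 can be spot-checked numerically. The matrix B, the box Q, and the sample size below are arbitrary illustrative choices; the spectral norm \(\|B\|\) is computed directly.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 3))               # bounded linear operator H1 -> H2
proj_Q = lambda z: np.clip(z, -1.0, 1.0)      # P_Q for the box Q = [-1,1]^4
T = lambda x: B.T @ (B @ x - proj_Q(B @ x))   # the operator B*(I - P_Q)B
L = np.linalg.norm(B, 2) ** 2                 # claimed Lipschitz constant ||B||^2

for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    assert (T(x) - T(y)) @ (x - y) >= -1e-12                          # monotonicity
    assert np.linalg.norm(T(x) - T(y)) <= L * np.linalg.norm(x - y) + 1e-12
print("Lemma 4.6 checks passed on random samples")
```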
Applying Theorem 3.1 and Lemma 4.5, we obtain the following result.
Theorem 4.7
5 Numerical result
In this section we apply our iterative method to specific numerical problems. Using the algorithms of Theorem 4.4 and Theorem 4.7, we illustrate their convergence for a constrained convex minimization problem and for a linear system of equations.
The first example is a constrained convex minimization problem for a function of one variable, solved by the algorithm of Theorem 4.4.
Example 1
Numerical results as regards Example 1
n | \(\boldsymbol{x_{n}}\) | \(\boldsymbol{E_{n}}\) |
---|---|---|
0 | 0.5000 | 5.00E − 01 |
10 | 0.9214 | 7.86E − 02 |
50 | 0.9998 | 1.69E − 04 |
100 | 1.0000 | 8.75E − 08 |
500 | 1.0000 | 4.44E − 16 |
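The function minimized in Example 1 is not reproduced in this excerpt. As an illustration of the scheme of Theorem 4.4 (the extragradient method with \(A=\nabla f\)), the sketch below uses a hypothetical one-variable instance whose minimizer is 1, matching the limit shown in the table; it does not reproduce the table's exact iterates.

```python
# Hypothetical instance of Theorem 4.4: minimize f(x) = (x - 1)^2 / 2 over
# C = [0, 2].  Here grad f(x) = x - 1 is monotone and 1-Lipschitz, and the
# constrained minimizer is x* = 1.
grad_f = lambda x: x - 1.0
proj_C = lambda x: min(max(x, 0.0), 2.0)

lam = 0.5                 # step size, 0 < lam < 1/L with L = 1
x = 0.5                   # same starting point as the table
for n in range(100):
    y = proj_C(x - lam * grad_f(x))   # prediction step
    x = proj_C(x - lam * grad_f(y))   # correction step

print(abs(x - 1.0))       # error E_n after 100 iterations, near machine precision
```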
The second example is a \(3\times3\) linear system of equations, which uses the algorithm in Theorem 4.7.
Example 2
Numerical results as regards Example 2
n | \(\boldsymbol{x_{n}^{1}}\) | \(\boldsymbol{x_{n}^{2}}\) | \(\boldsymbol{x_{n}^{3}}\) | \(\boldsymbol{E_{n}}\) |
---|---|---|---|---|
0 | 1.0000 | 1.0000 | 1.0000 | 7.00E + 00 |
10 | 2.9932 | 3.9399 | −2.3981 | 2.60E + 00 |
50 | 2.7238 | 4.3377 | −4.7600 | 4.98E − 01 |
100 | 2.7501 | 4.2636 | −4.9161 | 3.73E − 01 |
500 | 2.9237 | 4.0805 | −4.9746 | 1.14E − 01 |
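The \(3\times3\) system of Example 2 is not reproduced in this excerpt. As an illustration of the split-feasibility scheme of Theorem 4.7 with \(C=H_{1}\) and \(Q=\{b\}\), the sketch below uses a hypothetical invertible B with solution \((3,4,-5)\), matching the apparent limit in the table; with \(P_{Q}\) constant at b, the operator reduces to \(A(x)=B^{*}(Bx-b)\), which is monotone and \(\|B\|^{2}\)-Lipschitz by Lemma 4.6.

```python
import numpy as np

# Hypothetical system: B is symmetric positive definite and b = B @ (3, 4, -5),
# so Bx = b has the unique solution (3, 4, -5).
B = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
x_star = np.array([3.0, 4.0, -5.0])
b = B @ x_star
A = lambda x: B.T @ (B @ x - b)          # B*(I - P_Q)B with P_Q(z) = b

lam = 0.9 / np.linalg.norm(B, 2) ** 2    # step size below 1 / ||B||^2
x = np.ones(3)                           # same starting point as the table
for _ in range(2000):
    y = x - lam * A(x)                   # P_C is the identity since C = H1
    x = x - lam * A(y)

print(x)                                 # approaches (3, 4, -5)
```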
6 Conclusion
- (i)
The finite-dimensional Euclidean space \(\mathbb{R}^{n}\) is extended to the case of an infinite-dimensional Hilbert space H.
- (ii)
The fixed coefficient λ is extended to the case of a sequence \(\{\lambda_{n}\}\).
Recently, the variational inequality problem has seen further development, and we expect it to attract continued attention and study in the future.
Declarations
Acknowledgements
The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This paper was supported by the Fundamental Research Funds for the Central Universities (Grant: 3122016L006). Ming Tian was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Korpelevich, GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747-756 (1976)
- Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)
- Takahashi, W, Nadezhkina, N: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)
- Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
- Takahashi, W, Toyoda, M: Weak convergence theorem for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417-428 (2003)
- Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506-515 (2007)
- Ceng, LC, Ansari, QH, Yao, JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 1(3), 341-359 (2011)
- Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
- Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18(2), 441-453 (2002)
- Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
- Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)