Weak convergence theorem for a class of split variational inequality problems and applications in a Hilbert space
- Ming Tian^{1} and
- Bing-Nan Jiang^{1}
https://doi.org/10.1186/s13660-017-1397-9
© The Author(s) 2017
Received: 27 December 2016
Accepted: 12 May 2017
Published: 25 May 2017
Abstract
In this paper, we consider the algorithm proposed in recent years by Censor, Gibali and Reich for solving the split variational inequality problem, together with Korpelevich’s extragradient method for solving variational inequality problems. As our main result, we propose an iterative method for finding an element that solves a class of split variational inequality problems under weaker conditions and obtain a weak convergence theorem. As applications, we derive some new weak convergence theorems by using our weak convergence result to solve related problems in nonlinear analysis and optimization.
1 Introduction
The variational inequality problem (VIP) arose from the methods of mathematical physics and nonlinear programming. It has considerable applications in many fields, such as physics, mechanics, engineering, economic decision-making and control theory. In 1964, Stampacchia [1] first introduced the VIP to model a problem in mechanics. The VIP was connected to mathematical physics equations early on: by extending the Lax-Milgram theorem from a Hilbert space to a nonempty closed convex subset of it, one obtains the first existence and uniqueness theorem for the VIP. Since the 1990s, the VIP has become increasingly important in nonlinear analysis and optimization.
The split feasibility problem (SFP) is also important in nonlinear analysis and optimization. In 1994, Censor and Elfving [3] first proposed it to model problems in medical image reconstruction. Recently, the SFP has been widely used in intensity-modulated radiation therapy treatment planning.
Theorem 1.1
In this paper, based on the work by Censor et al. combined with Korpelevich’s extragradient method and Byrne’s CQ algorithm, we propose an iterative method for finding an element that solves a class of split variational inequality problems under weaker conditions and obtain a weak convergence theorem. As applications, we derive some new weak convergence theorems by using our weak convergence result to solve related problems in nonlinear analysis and optimization.
2 Preliminaries
In this section, we introduce some mathematical symbols, definitions, lemmas and corollaries which can be used in the proofs of our main results.
Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively. Let H be a real Hilbert space with inner product \(\langle\cdot,\cdot\rangle \) and norm \(\Vert \cdot \Vert \). Let \(\{x_{n}\}\) be a sequence in H; we denote weak convergence of \(\{x_{n}\}\) to x by ‘\(x_{n}\rightharpoonup x\)’ and strong convergence of \(\{x_{n}\}\) to x by ‘\(x_{n}\rightarrow x\)’. A point z is called a weak cluster point of \(\{x_{n}\}\) if there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) converging weakly to z. The set of all weak cluster points of \(\{x_{n}\}\) is denoted by \(\omega_{w}(x_{n})\). A fixed point of a mapping \(T:H\rightarrow H\) is a point \(x\in H\) such that \(Tx=x\). The set of all fixed points of T is denoted by \(\operatorname {Fix}(T)\).
We introduce some useful operators.
Definition 2.1
[7]
Let \(T:H\rightarrow H\) and \(A:H\rightarrow H\) be given mappings.
- (i) T is nonexpansive if $$ \Vert Tx-Ty \Vert \leq \Vert x-y \Vert , \quad \forall x,y\in H. $$
- (ii) T is firmly nonexpansive if $$ \langle Tx-Ty, x-y\rangle\geq \Vert Tx-Ty \Vert ^{2}, \quad \forall x,y\in H. $$
- (iii) T is L-Lipschitz continuous, with \(L>0\), if $$ \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad \forall x,y\in H. $$
- (iv) T is α-averaged if $$ T=(1-\alpha)I+\alpha S, $$ where \(\alpha\in(0,1)\) and \(S:H\rightarrow H\) is nonexpansive. In particular, a firmly nonexpansive mapping is \(\frac {1}{2}\)-averaged.
- (v) A is monotone if $$ \langle Ax-Ay, x-y\rangle\geq0, \quad\forall x,y\in H. $$
- (vi) A is η-strongly monotone, with \(\eta>0\), if $$ \langle Ax-Ay, x-y\rangle\geq\eta \Vert x-y \Vert ^{2}, \quad \forall x,y\in H. $$
- (vii) A is v-inverse strongly monotone (v-ism), with \(v>0\), if $$ \langle Ax-Ay, x-y\rangle\geq v \Vert Ax-Ay \Vert ^{2}, \quad \forall x,y\in H. $$
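As an illustration of (i) and (ii), the metric projection onto a closed convex set is firmly nonexpansive (and hence nonexpansive and \(\frac{1}{2}\)-averaged). The following Python sketch, which is our illustration and not part of the original development, checks the firm-nonexpansiveness inequality numerically for the projection onto the closed unit ball; the helper `proj_ball` is an assumed name.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Metric projection of x onto the closed Euclidean ball of the given radius."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Px, Py = proj_ball(x), proj_ball(y)
    # Firm nonexpansiveness (Definition 2.1(ii)): <Px - Py, x - y> >= ||Px - Py||^2.
    assert np.dot(Px - Py, x - y) >= np.linalg.norm(Px - Py) ** 2 - 1e-12
    # This implies nonexpansiveness (Definition 2.1(i)) by the Cauchy-Schwarz inequality.
    assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y) + 1e-12
```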
One can easily show that if T is a nonexpansive mapping with \(\operatorname {Fix}(T)\neq\emptyset\), then \(\operatorname {Fix}(T)\) is a closed convex subset of H.
We need the following properties of averaged mappings and inverse strongly monotone mappings.
Lemma 2.2
[7]
Let \(T, T_{1}, T_{2}:H\rightarrow H\) be given operators.
- (i) T is nonexpansive if and only if the complement \(I-T\) is \(\frac {1}{2}\)-ism.
- (ii) If T is v-ism and \(\gamma>0\), then γT is \(\frac {v}{\gamma}\)-ism.
- (iii) T is averaged if and only if the complement \(I-T\) is v-ism for some \(v>\frac{1}{2}\). Indeed, for \(\alpha\in(0,1)\), T is α-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha}\)-ism.
- (iv) If \(T_{1}\) is \(\alpha_{1}\)-averaged and \(T_{2}\) is \(\alpha_{2}\)-averaged, where \(\alpha_{1},\alpha_{2}\in(0,1)\), then the composite \(T_{1}T_{2}\) is α-averaged, where \(\alpha=\alpha_{1}+\alpha_{2}-\alpha_{1}\alpha _{2}\).
- (v) If \(T_{1}\) and \(T_{2}\) are averaged and have a common fixed point, then \(\operatorname {Fix}(T_{1}T_{2})=\operatorname {Fix}(T_{1})\cap \operatorname {Fix}(T_{2})\).
Lemma 2.3
[8]
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with \(A\neq0\) and \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping. Then \(A^{*}(I-T)A\) is \(\frac{1}{2 \Vert A \Vert ^{2}}\)-ism.
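Lemma 2.3 can be checked numerically on a small finite-dimensional example. The sketch below is our illustration (not taken from [8]): it takes a random matrix A and the projection onto the unit ball as the nonexpansive mapping T, and verifies the v-ism inequality with \(v=\frac{1}{2\Vert A\Vert ^{2}}\) at random pairs of points.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    # Projection onto the closed unit ball: a nonexpansive mapping T.
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))       # bounded linear operator A : R^3 -> R^4
op_norm = np.linalg.norm(A, 2)    # ||A|| = largest singular value
v = 1.0 / (2.0 * op_norm ** 2)    # the ism constant from Lemma 2.3

def F(x):
    # F = A^T (I - T) A with T = proj_ball (A^* is the transpose in the real case).
    Ax = A @ x
    return A.T @ (Ax - proj_ball(Ax))

for _ in range(1000):
    x, y = rng.normal(size=3) * 3, rng.normal(size=3) * 3
    Fx, Fy = F(x), F(y)
    # v-ism inequality: <Fx - Fy, x - y> >= v * ||Fx - Fy||^2.
    assert np.dot(Fx - Fy, x - y) >= v * np.linalg.norm(Fx - Fy) ** 2 - 1e-10
```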
Lemma 2.4
[9]
Lemma 2.5
[9]
We introduce some definitions and propositions about set-valued mappings.
Definition 2.6
[8]
In fact, the definition of a maximal monotone mapping is not very convenient to use directly. One usually works with the following characterization: a monotone mapping B is maximal if and only if, for \((x,u)\in H\times H\), the condition \(\langle x-y, u-v\rangle\geq0\) for each \((y,v)\in G(B)\) implies \(u\in Bx\).
Lemma 2.7
[10]
Lemma 2.8
[10]
- (i)
\(z=J_{r}(I-\gamma A^{*}(I-T)A)z\);
- (ii)
\(0\in A^{*}(I-T)Az+Bz\);
- (iii)
\(z\in B^{-1}0\cap A^{-1}\operatorname {Fix}(T)\).
Corollary 2.9
- (i)
\(z=P_{C}(I-\gamma A^{*}(I-T)A)z\);
- (ii)
\(0\in A^{*}(I-T)Az+N_{C}z\);
- (iii)
\(z\in C\cap A^{-1}\operatorname {Fix}(T)\).
Proof
Putting \(B=N_{C}\) in Lemma 2.8, we obtain the desired result. □
We also need the following lemmas.
Lemma 2.10
[11]
Let H be a real Hilbert space and \(T:H\rightarrow H\) be a nonexpansive mapping with \(\operatorname {Fix}(T)\neq\emptyset\). If \(\{x_{n}\}\) is a sequence in H converging weakly to x and if \(\{(I-T)x_{n}\}\) converges strongly to y, then \((I-T)x=y\).
Lemma 2.11
[12]
- (i)
\(\lim_{n\rightarrow\infty} \Vert x_{n}-u \Vert \) exists for each \(u\in C\);
- (ii)
\(\omega_{w}(x_{n})\subset C\).
Lemma 2.12
[13]
3 Main results
In this section, based on the recent work of Censor, Gibali and Reich, combined with Byrne’s CQ algorithm and Korpelevich’s extragradient method, we propose a new iterative method for solving a class of SVIPs under weaker conditions in a Hilbert space.
Theorem 3.1
Proof
Now, we show that \(z\in \operatorname {VI}(C,f)\). From (3.12), (3.15) and (3.18), we have \(y_{n_{i}}\rightharpoonup z\), \(t_{n_{i}}\rightharpoonup z\) and \(x_{n_{i}+1}\rightharpoonup z\).
Remark 3.2
- (i)
If \(f=0\) and \(T=P_{Q}\), problem (3.1) reduces to the SFP introduced by Censor and Elfving, and the algorithm in Theorem 3.1 reduces to Byrne’s CQ algorithm for solving the SFP.
- (ii)
If \(T=I\), problem (3.1) reduces to the VIP, and the algorithm in Theorem 3.1 reduces to Korpelevich’s extragradient method for solving the VIP with a monotone mapping.
- (iii)
If \(A=I\), \(f=0\) and \(C=H_{1}\), problem (3.1) reduces to the fixed point problem of a nonexpansive mapping.
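The formal statement of Theorem 3.1 is not reproduced above, but the special cases in Remark 3.2 suggest an iteration that combines an extragradient step for f on C with a CQ-type correction for the split constraint. The Python sketch below is our reconstruction under that assumption; the toy problem, the exact update rule and all parameter choices are ours, not the paper's. With \(f=0\) and \(T=P_{Q}\) the loop collapses to a CQ-type step, and with \(T=I\) it collapses to Korpelevich's extragradient method.

```python
import numpy as np

# Toy split problem: find z in VI(C, f) with Az in Fix(T).
# Here C = [0,2]^2, f(x) = x - a (monotone and 1-Lipschitz), A = I, and
# T = projection onto the ball of radius 2, so z* = a = (1, 1) is a solution.
a = np.array([1.0, 1.0])

def f(x):
    return x - a

def P_C(x):
    return np.clip(x, 0.0, 2.0)   # projection onto the box C

def T(u):
    n = np.linalg.norm(u)
    return u if n <= 2.0 else (2.0 / n) * u   # projection onto the ball of radius 2

A = np.eye(2)
lam, gamma = 0.5, 0.5   # assumed step sizes: lam in (0, 1/L), gamma in (0, 1/||A||^2)

x = np.array([5.0, -3.0])
for _ in range(200):
    y = P_C(x - lam * f(x))                      # extragradient prediction step
    t = P_C(x - lam * f(y))                      # extragradient correction step
    At = A @ t
    x = P_C(t - gamma * (A.T @ (At - T(At))))    # CQ-type step for the split constraint
# x now approximates the solution z* = (1, 1).
```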
Now, we consider SVIP (1.6).
Theorem 3.3
Proof
From Lemma 2.4, it is easy to see that \(z\in \operatorname {VI}(Q,g)\) if and only if \(z=P_{Q}(I-\mu g)z\) for \(\mu>0\), and that \(P_{Q}(I-\mu g)\) is nonexpansive for \(\mu\in(0,2\alpha)\). Putting \(T=P_{Q}(I-\mu g)\) in Theorem 3.1, we get the desired result. □
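The fixed-point characterization used in this proof can be illustrated concretely. In the sketch below (our example, not from the paper), \(g(x)=x-b\) is 1-inverse strongly monotone, Q is a box, and the solution of \(\operatorname {VI}(Q,g)\) is \(z=P_{Q}(b)\); we check that z is a fixed point of \(P_{Q}(I-\mu g)\) for several \(\mu\in(0,2)\) and that this map is nonexpansive at sampled points.

```python
import numpy as np

b = np.array([2.0, -1.0])

def g(x):
    # g(x) = x - b is 1-ism: <g(x) - g(y), x - y> = ||x - y||^2 = ||g(x) - g(y)||^2.
    return x - b

def P_Q(x):
    # Projection onto the box Q = [0,1]^2.
    return np.clip(x, 0.0, 1.0)

z = P_Q(b)   # z = (1, 0) solves VI(Q, g)
for mu in (0.25, 0.5, 1.0, 1.5):   # any mu in (0, 2*alpha) with alpha = 1
    # z is a fixed point of P_Q(I - mu*g).
    assert np.allclose(P_Q(z - mu * g(z)), z)

rng = np.random.default_rng(2)
mu = 0.5
for _ in range(500):
    x, y = rng.normal(size=2) * 3, rng.normal(size=2) * 3
    Sx, Sy = P_Q(x - mu * g(x)), P_Q(y - mu * g(y))
    # P_Q(I - mu*g) is nonexpansive for mu in (0, 2*alpha).
    assert np.linalg.norm(Sx - Sy) <= np.linalg.norm(x - y) + 1e-12
```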
Remark 3.4
- (i)
If \(f=0\) and \(g=0\), SVIP (1.6) reduces to the SFP introduced by Censor and Elfving, and the algorithm in Theorem 3.3 reduces to Byrne’s CQ algorithm for solving the SFP.
- (ii)
If \(f=0\), \(C=H_{1}\) and \(A=I\), SVIP (1.6) reduces to the VIP and the algorithm in Theorem 3.3 reduces to (1.2) for solving the VIP with an inverse strongly monotone mapping.
- (iii)
If \(g=0\) and \(Q=H_{2}\), SVIP (1.6) reduces to the VIP and the algorithm in Theorem 3.3 reduces to Korpelevich’s extragradient method for solving the VIP with a monotone mapping.
4 Application
In this section, we present some applications that are useful in nonlinear analysis and optimization in a Hilbert space. We establish weak convergence theorems for some generalized split feasibility problems (GSFP) and related problems by means of Theorem 3.1 and Theorem 3.3.
For the equilibrium problem, we assume that the bifunction \(F:C\times C\rightarrow\mathbb{R}\) satisfies the following conditions:
- (A1) \(F(x,x)=0\) for all \(x\in C\);
- (A2) F is monotone, i.e., \(F(x,y)+F(y,x)\leq0\) for all \(x,y\in C\);
- (A3) for each \(x, y, z\in C\), \(\lim_{t\downarrow0}F(tz+(1-t)x,y)\leq F(x,y)\);
- (A4) for each \(x\in C\), \(y\mapsto F(x,y)\) is convex and lower semicontinuous.
Lemma 4.1
[14]
Lemma 4.2
[15]
- (i)
\(T_{r}\) is single-valued;
- (ii) \(T_{r}\) is firmly nonexpansive, i.e., $$ \Vert T_{r}x-T_{r}y \Vert ^{2}\leq\langle T_{r}x-T_{r}y, x-y\rangle, \quad\forall x,y\in H; $$
- (iii)
\(\operatorname {Fix}(T_{r})=\operatorname {EP}(F)\);
- (iv)
\(\operatorname {EP}(F)\) is closed and convex.
Applying Theorem 3.1, Lemma 4.1 and Lemma 4.2, we get the following results.
Theorem 4.3
The following theorems are related to the zero points of maximal monotone mappings.
Theorem 4.4
Proof
Putting \(T=J_{r}\) in Theorem 3.1, we get the desired result. □
Theorem 4.5
Proof
We can easily prove that \(z\in(B+F)^{-1}0\) if and only if \(z=J_{r}(I-rF)z\) and \(J_{r}(I-rF)\) is nonexpansive. Putting \(T=J_{r}(I-rF)\) in Theorem 3.1, we get the desired result. □
Lemma 4.6
Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let ϕ be a convex function of H into \(\mathbb{R}\). If ϕ is differentiable, then z is a solution of (4.5) if and only if \(z\in \operatorname {VI}(C,\nabla\phi)\).
Proof
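Lemma 4.6 can be checked on a concrete instance. In the sketch below (our illustration), \(\phi(x)=\Vert x-b\Vert ^{2}\) with \(\nabla\phi(x)=2(x-b)\) is minimized over a box C at \(z=P_{C}(b)\); we verify the variational inequality \(\langle\nabla\phi(z), y-z\rangle\geq0\) at sampled points \(y\in C\).

```python
import numpy as np

b = np.array([3.0, -2.0])

def grad_phi(x):
    # Gradient of phi(x) = ||x - b||^2.
    return 2.0 * (x - b)

def P_C(x):
    # Projection onto the box C = [0,1]^2.
    return np.clip(x, 0.0, 1.0)

z = P_C(b)   # minimizer of phi over C, here z = (1, 0)
rng = np.random.default_rng(3)
for _ in range(1000):
    y = rng.uniform(0.0, 1.0, size=2)   # an arbitrary point of C
    # Lemma 4.6: <grad_phi(z), y - z> >= 0 for all y in C, i.e., z in VI(C, grad_phi).
    assert np.dot(grad_phi(z), y - z) >= -1e-12
```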
Applying Theorem 3.3 and Lemma 4.6, we obtain the following result.
Theorem 4.7
Applying Theorem 3.3 and Lemma 4.6 again, we obtain the following result. The problem solved in the next theorem is called the split minimization problem (SMP); it is also important in nonlinear analysis and optimization.
Theorem 4.8
5 Conclusion
It should be pointed out that the variational inequality problem and the split feasibility problem are important in nonlinear analysis and optimization. Censor et al. recently introduced an algorithm that solves the split variational inequality problem, which arises from combining the variational inequality problem with the split feasibility problem. This paper builds on the work of Censor et al., combined with Byrne’s CQ algorithm for solving the split feasibility problem and Korpelevich’s extragradient method for solving the variational inequality problem with a monotone mapping. The main aim of this paper is to propose an iterative method for finding an element that solves a class of split variational inequality problems under weaker conditions. As applications, we obtain some new weak convergence theorems by using our weak convergence result to solve related problems in a Hilbert space.
Declarations
Acknowledgements
The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing. The first author was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing and the Financial Funds for the Central Universities (grant number 3122017072). Bing-Nan Jiang was supported in part by Technology Innovation Funds of Civil Aviation University of China for Graduate (Y17-39).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Stampacchia, G: Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413-4416 (1964)
- Korpelevich, GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747-756 (1976)
- Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
- Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
- Censor, Y, Gibali, A, Reich, S: The split variational inequality problem. The Technion - Israel Institute of Technology, Haifa. arXiv:1009.3780 (2010)
- Moudafi, A: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275-283 (2011)
- Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
- Takahashi, W, Xu, HK, Yao, JC: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23(2), 205-221 (2015)
- Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)
- Takahashi, W, Nadezhkina, N: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)
- Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
- Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
- Takahashi, W, Toyoda, M: Weak convergence theorem for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417-428 (2003)
- Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
- Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)