Weak convergence theorem for a class of split variational inequality problems and applications in a Hilbert space
Journal of Inequalities and Applications, volume 2017, Article number: 123 (2017)
Abstract
In this paper, we consider the algorithm proposed in recent years by Censor, Gibali and Reich, which solves the split variational inequality problem, and Korpelevich's extragradient method, which solves variational inequality problems. As our main result, we propose an iterative method for finding an element that solves a class of split variational inequality problems under weaker conditions and obtain a weak convergence theorem. As applications, we derive some new weak convergence theorems by applying our weak convergence result to related problems in nonlinear analysis and optimization.
1 Introduction
The variational inequality problem (VIP) arose from the methods of mathematical physics and nonlinear programming. It has considerable applications in many fields, such as physics, mechanics, engineering, economic decision making and control theory. A variational inequality can be viewed as a weak formulation of a system of partial differential equations. In 1964, Stampacchia [1] first introduced the VIP to model problems in mechanics. The VIP emerged early from the equations of mathematical physics: extending the Lax-Milgram theorem from a Hilbert space to a nonempty closed convex subset of it yields the first existence and uniqueness theorem for the VIP. Since the 1990s, the VIP has become increasingly important in nonlinear analysis and optimization.
The VIP is to find an element \(x^{*}\in C\) satisfying the inequality

$$ \bigl\langle Ax^{*}, x-x^{*}\bigr\rangle \geq0, \quad\forall x\in C, $$
(1.1)
where C is a nonempty closed convex subset of a real Hilbert space H and A is a mapping of C into H. The set of solutions of VIP (1.1) is denoted by \(\operatorname {VI}(C,A)\).
We can easily show that \(x^{*}\in \operatorname {VI}(C,A)\) is equivalent to the fixed point equation

$$ x^{*}=P_{C}\bigl(x^{*}-\lambda Ax^{*}\bigr) $$

for any \(\lambda>0\).
A simple iterative algorithm for solving VIP (1.1) is the projection method

$$ x_{n+1}=P_{C}(x_{n}-\lambda Ax_{n}) $$
(1.2)
for each \(n\in\mathbb{N}\), where \(P_{C}\) is the metric projection of H into C and λ is a positive real number. Indeed, if A is η-strongly monotone and L-Lipschitz continuous and \(0<\lambda<\frac{2\eta}{L^{2}}\), then there exists a unique point in \(\operatorname {VI}(C,A)\), and the sequence \(\{x_{n}\}\) generated by (1.2) converges strongly to this point. If A is α-inverse strongly monotone, a solution of VIP (1.1) does not always exist. Assume that \(\operatorname {VI}(C,A)\) is nonempty and \(0<\lambda<2\alpha\); then \(\operatorname {VI}(C,A)\) is a closed and convex subset of H, and the sequence \(\{x_{n}\}\) generated by (1.2) converges weakly to a point in \(\operatorname {VI}(C,A)\). But if A is merely monotone and Lipschitz continuous, the sequence \(\{x_{n}\}\) generated by (1.2) need not converge, so we cannot use algorithm (1.2) to solve VIP (1.1) in this case.
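As a numerical illustration (not from the paper), the projection method (1.2) can be sketched in a few lines. Here C is taken to be the closed unit ball and \(A(x)=x-b\), which is 1-strongly monotone and 1-Lipschitz, so any \(\lambda\in(0,2)\) gives strong convergence; the names `project_ball` and `projection_method` are our own.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Metric projection P_C onto the closed Euclidean ball of the given radius.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def projection_method(A, project, x0, lam, iters=100):
    # Iteration (1.2): x_{n+1} = P_C(x_n - lam * A(x_n)).
    x = x0
    for _ in range(iters):
        x = project(x - lam * A(x))
    return x

# A(x) = x - b is 1-strongly monotone and 1-Lipschitz; take lam in (0, 2).
b = np.array([2.0, 0.0])
x_star = projection_method(lambda x: x - b, project_ball, np.zeros(2), lam=0.5)
# For this A, the unique point of VI(C, A) is P_C(b) = (1, 0).
```

The computed `x_star` agrees with the unique solution \(P_{C}(b)\), consistent with the strong convergence stated above.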
In 1976, Korpelevich [2] introduced the following so-called extragradient method for solving VIP (1.1) when A is monotone and k-Lipschitz continuous in the finite-dimensional Euclidean space \(\mathbb{R}^{n}\):

$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\lambda Ax_{n}),\\ x_{n+1}=P_{C}(x_{n}-\lambda Ay_{n}) \end{cases} $$
(1.3)
for each \(n\in\mathbb{N}\), where \(\lambda\in(0,\frac{1}{k})\). The sequence \(\{x_{n}\}\) converges to a point in \(\operatorname {VI}(C,A)\).
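A minimal sketch of the extragradient method on an illustrative example: a skew-symmetric linear operator is monotone and 1-Lipschitz but not strongly monotone, so plain iteration (1.2) fails for it, while the extragradient iterates converge. The operator, set, and step size below are our own choices.

```python
import numpy as np

def project_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def extragradient(A, project, x0, lam, iters=2000):
    # Korpelevich's method (1.3): a predictor step followed by a corrector step.
    x = x0
    for _ in range(iters):
        y = project(x - lam * A(x))   # y_n = P_C(x_n - lam * A(x_n))
        x = project(x - lam * A(y))   # x_{n+1} = P_C(x_n - lam * A(y_n))
    return x

# The rotation A(x) = Mx with skew-symmetric M is monotone and 1-Lipschitz but
# not strongly monotone; on the unit ball the unique solution is x* = 0.
# Iteration (1.2) merely rotates along the boundary here, never converging.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = extragradient(lambda x: M @ x, project_ball,
                       np.array([1.0, 0.0]), lam=0.5)
```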
The split feasibility problem (SFP) is also important in nonlinear analysis and optimization. In 1994, Censor and Elfving [3] first proposed it for modeling in medical image reconstruction. Recently, the SFP has been widely used in intensity-modulation therapy treatment planning.
The SFP is to find a point \(x^{*}\) satisfying the conditions

$$ x^{*}\in C \quad\text{and}\quad Ax^{*}\in Q, $$
(1.4)
where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and A is a bounded linear operator of \(H_{1}\) into \(H_{2}\).
Censor and Elfving used their algorithm to solve the SFP in the finite-dimensional Euclidean space \(\mathbb{R}^{n}\). In 2001, Byrne [4] improved and extended Censor and Elfving's algorithm and introduced a new method, called the CQ algorithm, for solving SFP (1.4):

$$ x_{n+1}=P_{C}\bigl(x_{n}-\gamma A^{*}(I-P_{Q})Ax_{n}\bigr) $$
(1.5)
for each \(n\in\mathbb{N}\), where \(0<\gamma<\frac{2}{ \Vert A \Vert ^{2}}\) and \(A^{*}\) is the adjoint operator of A. The sequence \(\{x_{n}\}\) generated by (1.5) converges weakly to a point which solves SFP (1.4).
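Byrne's CQ iteration (1.5) can be sketched on a toy SFP; the sets C and Q, the operator A, and the step size below are illustrative choices, not from the paper.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=100):
    # CQ iteration (1.5): x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n).
    x = x0
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

# Toy SFP: C = [0,1]^2, Q = [2,3], A = [1 1]; the only feasible point is (1, 1).
A = np.array([[1.0, 1.0]])
x_star = cq_algorithm(
    A,
    lambda x: np.clip(x, 0.0, 1.0),   # P_C
    lambda t: np.clip(t, 2.0, 3.0),   # P_Q
    np.zeros(2),
    gamma=0.5,                        # must lie in (0, 2/||A||^2) = (0, 1)
)
```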
Very recently, Censor et al. [5] combined the variational inequality problem and the split feasibility problem and proposed a new problem called the split variational inequality problem (SVIP). The SVIP is to find a point \(x^{*}\) satisfying

$$ x^{*}\in \operatorname {VI}(C,f) \quad\text{such that}\quad Ax^{*}\in \operatorname {VI}(Q,g), $$
(1.6)
where C and Q are nonempty closed convex subsets of the real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and A is a bounded linear operator of \(H_{1}\) into \(H_{2}\).
For solving SVIP (1.6), Censor, Gibali and Reich introduced the following algorithm:

$$ x_{n+1}=P_{C}(I-\lambda f) \bigl(x_{n}+\gamma A^{*}\bigl(P_{Q}(I-\lambda g)-I\bigr)Ax_{n}\bigr) $$
(1.7)
for each \(n\in\mathbb{N}\). They obtained the following result.
Theorem 1.1
Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator, \(f:H_{1}\rightarrow H_{1}\) and \(g:H_{2}\rightarrow H_{2}\) be \(\alpha_{1}\)- and \(\alpha _{2}\)-inverse strongly monotone operators, respectively, and set \(\alpha:=\min\{\alpha_{1},\alpha _{2}\}\). Assume that SVIP (1.6) is consistent, \(\gamma\in(0,\frac{1}{L})\) with L being the spectral radius of the operator \(A^{*}A\), \(\lambda\in (0,2\alpha)\), and suppose that for all \(x^{*}\) solving SVIP (1.6),
Then the sequence \(\{x_{n}\}\) generated by (1.7) converges weakly to a solution of SVIP (1.6).
In this paper, based on the work by Censor et al. combined with Korpelevich's extragradient method and Byrne's CQ algorithm, we propose an iterative method for finding an element that solves a class of split variational inequality problems under weaker conditions and obtain a weak convergence theorem. As applications, we derive some new weak convergence theorems by applying our weak convergence result to related problems in nonlinear analysis and optimization.
2 Preliminaries
In this section, we introduce some mathematical symbols, definitions, lemmas and corollaries which can be used in the proofs of our main results.
Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively. Let H be a real Hilbert space with inner product \(\langle\cdot,\cdot\rangle \) and norm \(\Vert \cdot \Vert \). For a sequence \(\{x_{n}\}\) in H, we denote weak convergence of \(\{x_{n}\}\) to x by '\(x_{n}\rightharpoonup x\)' and strong convergence by '\(x_{n}\rightarrow x\)'. A point z is called a weak cluster point of \(\{x_{n}\}\) if there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) converging weakly to z. The set of all weak cluster points of \(\{x_{n}\}\) is denoted by \(\omega_{w}(x_{n})\). A fixed point of a mapping \(T:H\rightarrow H\) is a point \(x\in H\) such that \(Tx=x\). The set of all fixed points of T is denoted by \(\operatorname {Fix}(T)\).
We introduce some useful operators.
Definition 2.1
[7]
Let T, \(A:H\rightarrow H\) be nonlinear operators.

(i)
T is nonexpansive if
$$ \Vert Tx-Ty \Vert \leq \Vert x-y \Vert , \quad \forall x,y\in H. $$ 
(ii)
T is firmly nonexpansive if
$$ \langle Tx-Ty, x-y\rangle\geq \Vert Tx-Ty \Vert ^{2}, \quad \forall x,y\in H. $$ 
(iii)
T is L-Lipschitz continuous, with \(L>0\), if
$$ \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad \forall x,y\in H. $$ 
(iv)
T is α-averaged if
$$ T=(1-\alpha)I+\alpha S, $$where \(\alpha\in(0,1)\) and \(S:H\rightarrow H\) is nonexpansive. In particular, a firmly nonexpansive mapping is \(\frac {1}{2}\)-averaged.

(v)
A is monotone if
$$ \langle Ax-Ay, x-y\rangle\geq0, \quad\forall x,y\in H. $$ 
(vi)
A is η-strongly monotone, with \(\eta>0\), if
$$ \langle Ax-Ay, x-y\rangle\geq\eta \Vert x-y \Vert ^{2}, \quad \forall x,y\in H. $$ 
(vii)
A is v-inverse strongly monotone (v-ism), with \(v>0\), if
$$ \langle Ax-Ay, x-y\rangle\geq v \Vert Ax-Ay \Vert ^{2}, \quad \forall x,y\in H. $$
We can easily show that if T is a nonexpansive mapping and \(\operatorname {Fix}(T)\) is nonempty, then \(\operatorname {Fix}(T)\) is a closed convex subset of H.
We need the following propositions about averaged mappings and inverse strongly monotone mappings.
Lemma 2.2
[7]
We have the following propositions.

(i)
T is nonexpansive if and only if the complement \(I-T\) is \(\frac {1}{2}\)-ism.

(ii)
If T is v-ism and \(\gamma>0\), then γT is \(\frac {v}{\gamma}\)-ism.

(iii)
T is averaged if and only if the complement \(I-T\) is v-ism for some \(v>\frac{1}{2}\). Indeed, for \(\alpha\in(0,1)\), T is α-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha}\)-ism.

(iv)
If \(T_{1}\) is \(\alpha_{1}\)-averaged and \(T_{2}\) is \(\alpha_{2}\)-averaged, where \(\alpha_{1}\), \(\alpha_{2}\in(0,1)\), then the composite \(T_{1}T_{2}\) is α-averaged, where \(\alpha=\alpha_{1}+\alpha_{2}-\alpha_{1}\alpha _{2}\).

(v)
If \(T_{1}\) and \(T_{2}\) are averaged and have a common fixed point, then \(\operatorname {Fix}(T_{1}T_{2})=\operatorname {Fix}(T_{1})\cap \operatorname {Fix}(T_{2})\).
Lemma 2.3
[8]
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator with \(A\neq0\) and \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping. Then \(A^{*}(I-T)A\) is \(\frac{1}{2 \Vert A \Vert ^{2}}\)-ism.
Let C be a nonempty closed convex subset of H. For each \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}x\), such that

$$ \Vert x-P_{C}x \Vert \leq \Vert x-y \Vert , \quad\forall y\in C. $$
\(P_{C}\) is called the metric projection of H into C. We know that \(P_{C}\) is firmly nonexpansive.
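The firm nonexpansiveness of \(P_{C}\) can be checked numerically; the ball projection below is an illustrative instance (our own choice of C), sampling random pairs and testing the defining inequality.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # P_C for C the closed ball: the unique nearest point of C to x.
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

# Firm nonexpansiveness: <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = project_ball(x), project_ball(y)
    assert np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12
```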
Lemma 2.4
[9]
Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then \(z=P_{C}x\) if and only if there holds the inequality

$$ \langle x-z, z-y\rangle\geq0, \quad\forall y\in C. $$
Lemma 2.5
[9]
Let C be a nonempty closed convex subset of a real Hilbert space H. Given \(x\in H\) and \(z\in C\). Then \(z=P_{C}x\) if and only if there holds the inequality

$$ \Vert x-z \Vert ^{2}\leq \Vert x-y \Vert ^{2}- \Vert y-z \Vert ^{2}, \quad\forall y\in C. $$
We introduce some definitions and propositions about set-valued mappings.
Definition 2.6
[8]
Let B be a set-valued mapping of H into \(2^{H}\). The effective domain of B, denoted by \(D(B)\), is defined by

$$ D(B)=\bigl\{ x\in H: Bx\neq\emptyset\bigr\} . $$
A set-valued mapping B is called monotone if

$$ \langle x-y, u-v\rangle\geq0, \quad\forall x,y\in D(B), u\in Bx, v\in By. $$
A monotone mapping B is called maximal if its graph is not properly contained in the graph of any other monotone mappings on \(D(B)\).
In fact, the definition of a maximal monotone mapping is not very convenient to use. We usually use the following characterization: a monotone mapping B is maximal if and only if, for \((x,u)\in H\times H\), \(\langle x-y, u-v\rangle\geq0\) for each \((y,v)\in G(B)\) implies \(u\in Bx\).
For a maximal monotone mapping B, we define its resolvent \(J_{r}\) by

$$ J_{r}=(I+rB)^{-1}, $$
where \(r>0\). We know that \(J_{r}\) is firmly nonexpansive and \(\operatorname {Fix}(J_{r})=B^{-1}0\) for each \(r>0\).
For a nonempty closed convex subset C, the normal cone to C at \(x\in C\) is defined by

$$ N_{C}x=\bigl\{ z\in H: \langle z, y-x\rangle\leq0, \forall y\in C\bigr\} . $$
It can be easily shown that \(N_{C}\) is a maximal monotone mapping and its resolvent is \(P_{C}\).
Lemma 2.7
[10]
Let C be a nonempty closed convex subset of a real Hilbert space H. Let A be a monotone and k-Lipschitz continuous mapping of C into H. Define

$$ Bv= \textstyle\begin{cases} Av+N_{C}v, & v\in C,\\ \emptyset, & v\notin C. \end{cases} $$
Then B is maximal monotone and \(0\in Bv\) if and only if \(v\in \operatorname {VI}(C,A)\).
Lemma 2.8
[10]
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(B:H_{1}\rightarrow 2^{H_{1}}\) be a maximal monotone mapping, and let \(J_{r}\) be the resolvent of B for \(r>0\). Let \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping, and let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator. Suppose that \(B^{-1}0\cap A^{-1}\operatorname {Fix}(T)\neq\emptyset\). Let \(r,\gamma>0\) and \(z\in H_{1}\). Then the following are equivalent:

(i)
\(z=J_{r}(I-\gamma A^{*}(I-T)A)z\);

(ii)
\(0\in A^{*}(I-T)Az+Bz\);

(iii)
\(z\in B^{-1}0\cap A^{-1}\operatorname {Fix}(T)\).
Corollary 2.9
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C be a nonempty closed convex subset of \(H_{1}\). Let \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping, and let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator. Suppose that \(C\cap A^{-1}\operatorname {Fix}(T)\neq \emptyset\). Let \(\gamma>0\) and \(z\in H_{1}\). Then the following are equivalent:

(i)
\(z=P_{C}(I-\gamma A^{*}(I-T)A)z\);

(ii)
\(0\in A^{*}(I-T)Az+N_{C}z\);

(iii)
\(z\in C\cap A^{-1}\operatorname {Fix}(T)\).
Proof
Putting \(B=N_{C}\) in Lemma 2.8, we obtain the desired result. □
We also need the following lemmas.
Lemma 2.10
[11]
Let H be a real Hilbert space and \(T:H\rightarrow H\) be a nonexpansive mapping with \(\operatorname {Fix}(T)\neq\emptyset\). If \(\{x_{n}\}\) is a sequence in H converging weakly to x and if \(\{(I-T)x_{n}\}\) converges strongly to y, then \((I-T)x=y\).
Lemma 2.11
[12]
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(\{x_{n}\}\) be a sequence in H satisfying the properties:

(i)
\(\lim_{n\rightarrow\infty} \Vert x_{n}-u \Vert \) exists for each \(u\in C\);

(ii)
\(\omega_{w}(x_{n})\subset C\).
Then \(\{x_{n}\}\) converges weakly to a point in C.
Lemma 2.12
[13]
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(\{x_{n}\}\) be a sequence in H. Suppose that

$$ \Vert x_{n+1}-u \Vert \leq \Vert x_{n}-u \Vert , \quad\forall u\in C, $$

for every \(n=0,1,2,\ldots\) . Then the sequence \(\{P_{C}x_{n}\}\) converges strongly to a point in C.
3 Main results
In this section, based on Censor, Gibali and Reich's recent work and combining it with Byrne's CQ algorithm and Korpelevich's extragradient method, we propose a new iterative method for solving a class of the SVIP under weaker conditions in a Hilbert space.
First, we consider a class of generalized split feasibility problems (GSFP): find an element that solves a variational inequality problem and whose image under a given bounded linear operator lies in the fixed point set of a nonexpansive mapping, i.e., find \(x^{*}\) satisfying

$$ x^{*}\in \operatorname {VI}(C,f) \quad\text{such that}\quad Ax^{*}\in \operatorname {Fix}(T). $$
(3.1)
Theorem 3.1
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C be a nonempty closed convex subset of \(H_{1}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(A\neq0\), \(f:C\rightarrow H_{1}\) be a monotone and k-Lipschitz continuous mapping and \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping. Setting \(\Gamma=\{z\in \operatorname {VI}(C,f): Az\in \operatorname {Fix}(T)\}\), assume that \(\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{t_{n}\}\) be generated by \(x_{1}=x\in C\) and

$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\gamma_{n}A^{*}(I-T)Ax_{n}),\\ t_{n}=P_{C}(y_{n}-\lambda_{n}f(y_{n})),\\ x_{n+1}=P_{C}(y_{n}-\lambda_{n}f(t_{n})) \end{cases} $$
(3.2)
for each \(n\in\mathbb{N}\), where \(\{\gamma_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{ \Vert A \Vert ^{2}})\) and \(\{\lambda_{n}\} \subset[c,d]\) for some \(c,d\in(0,\frac{1}{k})\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\Gamma\), where \(z=\lim_{n\rightarrow \infty} P_{\Gamma}x_{n}\).
Proof
From Lemma 2.2(ii), (iii), (iv) and Lemma 2.3, we can easily see that \(P_{C}(I-\gamma_{n}A^{*}(I-T)A)\) is \(\frac{1+\gamma_{n} \Vert A \Vert ^{2}}{2}\)-averaged. So, \(y_{n}\) can be rewritten as

$$ y_{n}=(1-\alpha_{n})x_{n}+\alpha_{n}V_{n}x_{n}, $$
where \(\alpha_{n}=\frac{1+\gamma_{n} \Vert A \Vert ^{2}}{2}\) and \(V_{n}\) is a nonexpansive mapping for each \(n\in\mathbb{N}\).
Let \(u\in\Gamma\); we have
So, we obtain that
On the other hand, from Lemma 2.5, we have
Then, from Lemma 2.4, we obtain that
So, we have
Therefore, there exists
and the sequence \(\{x_{n}\}\) is bounded. From (3.5), we also get
Hence
From (3.3), we can get
From (3.8), we also get
So,
From (3.12) and (3.14), we have
Notice that \(P_{C}\) is firmly nonexpansive, so
We obtain
From (3.15) and (3.17), we have
Since \(\{x_{n}\}\) is bounded, for each \(z\in\omega_{w}(x_{n})\) there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) converging weakly to z. Without loss of generality, we may assume that the subsequence \(\{\gamma_{n_{i}}\}\) of \(\{\gamma_{n}\}\) converges to a point \(\hat{\gamma}\in(0,\frac{1}{ \Vert A \Vert ^{2}})\). Since \(A^{*}(I-T)A\) is inverse strongly monotone, we know that \(\{A^{*}(I-T)Ax_{n_{i}}\}\) is bounded. Since \(P_{C}\) is firmly nonexpansive,
From \(\gamma_{n_{i}}\rightarrow\hat{\gamma}\), we have
From (3.12), we have
Since
we have
From Lemma 2.10, we obtain that
From Corollary 2.9, we obtain
Now, we show that \(z\in \operatorname {VI}(C,f)\). From (3.12), (3.15) and (3.18), we have \(y_{n_{i}}\rightharpoonup z\), \(t_{n_{i}}\rightharpoonup z\) and \(x_{n_{i}+1}\rightharpoonup z\).
Let
From Lemma 2.7, we know that B is maximal monotone and \(0\in Bv\) if and only if \(v\in \operatorname {VI}(C,f)\).
For each \((v,w)\in G(B)\), we have
Hence
So, we obtain that
On the other hand, from \(v\in C\) and
we get
and hence
Therefore, from (3.23) and (3.24), we obtain that
As \(i\rightarrow\infty\), we have
Since B is maximal monotone, we have \(0\in Bz\) and hence \(z\in \operatorname {VI}(C,f)\). So, we obtain that
From Lemma 2.11, we get
and from Lemma 2.12, we obtain
□
Remark 3.2

(i)
If \(f=0\) and \(T=P_{Q}\), problem (3.1) reduces to the SFP introduced by Censor and Elfving, and the algorithm in Theorem 3.1 reduces to Byrne's CQ algorithm for solving the SFP.

(ii)
If \(T=I\), problem (3.1) reduces to the VIP, and the algorithm in Theorem 3.1 reduces to Korpelevich's extragradient method for solving the VIP with a monotone mapping.

(iii)
If \(A=I\), \(f=0\) and \(C=H_{1}\), problem (3.1) reduces to the fixed point problem of a nonexpansive mapping.
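Assuming the iteration of Theorem 3.1 takes the form suggested by the reductions above (a CQ-type step producing \(y_{n}\), followed by an extragradient step in \(t_{n}\) and \(x_{n+1}\)), a numerical sketch on toy data might look as follows. All problem data, step sizes, and function names here are illustrative choices of ours, not from the paper.

```python
import numpy as np

def project_box(x):
    # P_C for C = [0, 1]^2.
    return np.clip(x, 0.0, 1.0)

def project_ball(y, r=1.0):
    # Nonexpansive T = P_Q for Q the closed unit ball in H_2.
    n = np.linalg.norm(y)
    return y if n <= r else (r / n) * y

def gsfp_iteration(A, f, T, proj_C, x0, gamma, lam, iters=200):
    # Sketch of the Theorem 3.1 iteration (consistent with Remark 3.2):
    #   y_n     = P_C(x_n - gamma_n A^*(I - T) A x_n)   (CQ-type step)
    #   t_n     = P_C(y_n - lam_n f(y_n))               (extragradient step)
    #   x_{n+1} = P_C(y_n - lam_n f(t_n))
    x = x0
    for _ in range(iters):
        Ax = A @ x
        y = proj_C(x - gamma * A.T @ (Ax - T(Ax)))
        t = proj_C(y - lam * f(y))
        x = proj_C(y - lam * f(t))
    return x

# Toy data: H1 = H2 = R^2, A = I, f(x) = x - b (monotone and 1-Lipschitz).
# Here Gamma = {z in VI(C, f) : Az in Fix(T)} = {(1, 0)}.
b = np.array([2.0, 0.0])
x_star = gsfp_iteration(np.eye(2), lambda x: x - b, project_ball,
                        project_box, np.zeros(2), gamma=0.5, lam=0.5)
```

Since \(\Vert A\Vert=1\) and \(k=1\) here, both step sizes 0.5 satisfy the hypotheses \(\gamma_{n}\in(0,\frac{1}{\Vert A\Vert^{2}})\) and \(\lambda_{n}\in(0,\frac{1}{k})\).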
Now, we consider SVIP (1.6).
Theorem 3.3
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C be a nonempty closed convex subset of \(H_{1}\) and Q be a nonempty closed convex subset of \(H_{2}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(A\neq0\), let \(f:C\rightarrow H_{1}\) be a monotone and k-Lipschitz continuous mapping and \(g:H_{2}\rightarrow H_{2}\) be an α-inverse strongly monotone mapping. Setting \(\Gamma=\{z\in \operatorname {VI}(C,f): Az\in \operatorname {VI}(Q,g)\}\), assume that \(\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{t_{n}\}\) be generated by \(x_{1}=x\in C\) and

$$ \textstyle\begin{cases} y_{n}=P_{C}(x_{n}-\gamma_{n}A^{*}(I-P_{Q}(I-\mu g))Ax_{n}),\\ t_{n}=P_{C}(y_{n}-\lambda_{n}f(y_{n})),\\ x_{n+1}=P_{C}(y_{n}-\lambda_{n}f(t_{n})) \end{cases} $$
for each \(n\in\mathbb{N}\), where \(\{\gamma_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{ \Vert A \Vert ^{2}})\), \(\{\lambda_{n}\} \subset[c,d]\) for some \(c,d\in(0,\frac{1}{k})\), \(\mu\in(0,2\alpha)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\Gamma\), where \(z=\lim_{n\rightarrow \infty} P_{\Gamma}x_{n}\).
Proof
From Lemma 2.4, we can easily see that \(z\in \operatorname {VI}(Q,g)\) if and only if \(z=P_{Q}(I-\mu g)z\) for \(\mu>0\), and that for \(\mu\in(0,2\alpha)\), \(P_{Q}(I-\mu g)\) is nonexpansive. Putting \(T=P_{Q}(I-\mu g)\) in Theorem 3.1, we get the desired result. □
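The reduction in this proof can be checked numerically on a toy example; the choices of g, Q, and μ below are ours. The map \(g(y)=y-c\) is 1-inverse strongly monotone, so any \(\mu\in(0,2)\) makes \(T=P_{Q}(I-\mu g)\) nonexpansive, and iterating T locates its fixed point, which solves \(\operatorname{VI}(Q,g)\).

```python
import numpy as np

def project_ball(y, r=1.0):
    n = np.linalg.norm(y)
    return y if n <= r else (r / n) * y

# g(y) = y - c is 1-inverse strongly monotone (alpha = 1).
c = np.array([1.5, 2.0])
mu = 0.5
T = lambda y: project_ball(y - mu * (y - c))   # T = P_Q(I - mu g)

# Numerical check of nonexpansiveness on random pairs.
rng = np.random.default_rng(1)
for _ in range(1000):
    u, v = rng.normal(size=2), rng.normal(size=2)
    assert np.linalg.norm(T(u) - T(v)) <= np.linalg.norm(u - v) + 1e-12

# Iterating T finds its fixed point, here P_Q(c) = c / ||c||, which
# solves VI(Q, g) for Q the closed unit ball.
z = np.zeros(2)
for _ in range(200):
    z = T(z)
```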
Remark 3.4

(i)
If \(f=0\) and \(g=0\), SVIP (1.6) reduces to the SFP introduced by Censor and Elfving, and the algorithm in Theorem 3.3 reduces to Byrne's CQ algorithm for solving the SFP.

(ii)
If \(f=0\), \(C=H_{1}\) and \(A=I\), SVIP (1.6) reduces to the VIP and the algorithm in Theorem 3.3 reduces to (1.2) for solving the VIP with an inverse strongly monotone mapping.

(iii)
If \(g=0\) and \(Q=H_{2}\), SVIP (1.6) reduces to the VIP, and the algorithm in Theorem 3.3 reduces to Korpelevich's extragradient method for solving the VIP with a monotone mapping.
4 Application
In this part, we present some applications which are useful for nonlinear analysis and optimization problems in a Hilbert space. This section derives weak convergence theorems for some generalized split feasibility problems (GSFP) and related problems from Theorem 3.1 and Theorem 3.3.
Let C be a nonempty closed convex subset of a real Hilbert space H, and let \(F:C\times C\rightarrow\mathbb{R}\) be a bifunction. Consider the equilibrium problem, i.e., find \(x^{*}\in C\) satisfying

$$ F\bigl(x^{*},y\bigr)\geq0, \quad\forall y\in C. $$
(4.1)
We denote the set of solutions of equilibrium problem (4.1) by \(\operatorname {EP}(F)\). For solving equilibrium problem (4.1), we assume that F satisfies the following properties:

(A1)
\(F(x,x)=0\) for all \(x\in C\);

(A2)
F is monotone, i.e., \(F(x,y)+F(y,x)\leq0\) for all \(x,y\in C\);

(A3)
for each \(x, y, z\in C, \lim_{t\downarrow0}F(tz+(1-t)x,y)\leq F(x,y)\);

(A4)
for each \(x\in C, y\mapsto F(x,y)\) is convex and lower semicontinuous.
Then we have the following lemmas.
Lemma 4.1
[14]
Let C be a nonempty closed convex subset of a real Hilbert space H, and let F be a bifunction of \(C\times C\) into \(\mathbb{R}\) satisfying the properties (A1)-(A4). Let r be a positive real number and \(x\in H\). Then there exists \(z\in C\) such that

$$ F(z,y)+\frac{1}{r}\langle y-z, z-x\rangle\geq0, \quad\forall y\in C. $$
Lemma 4.2
[15]
Assume that \(F:C\times C\rightarrow\mathbb{R}\) is a bifunction satisfying the properties (A1)-(A4). For \(r>0\) and \(x\in H\), define the resolvent \(T_{r}:H\rightarrow C\) of F by

$$ T_{r}x=\biggl\{ z\in C: F(z,y)+\frac{1}{r}\langle y-z, z-x\rangle\geq0, \forall y\in C\biggr\} . $$
Then the following hold:

(i)
\(T_{r}\) is singlevalued;

(ii)
\(T_{r}\) is firmly nonexpansive, i.e.,
$$ \Vert T_{r}x-T_{r}y \Vert ^{2}\leq\langle T_{r}x-T_{r}y, x-y\rangle, \quad\forall x,y\in H; $$ 
(iii)
\(\operatorname {Fix}(T_{r})=\operatorname {EP}(F)\);

(iv)
\(\operatorname {EP}(F)\) is closed and convex.
Applying Theorem 3.1, Lemma 4.1 and Lemma 4.2, we get the following results.
Theorem 4.3
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C be a nonempty closed convex subset of \(H_{1}\) and Q be a nonempty closed convex subset of \(H_{2}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(A\neq0\), let \(f:C\rightarrow H_{1}\) be a monotone and k-Lipschitz continuous mapping and \(F:Q\times Q\rightarrow\mathbb{R}\) be a bifunction satisfying (A1)-(A4). Setting \(\Gamma=\{z\in \operatorname {VI}(C,f): Az\in \operatorname {EP}(F)\}\), assume that \(\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{t_{n}\}\) be generated by \(x_{1}=x\in C\) and
for each \(n\in\mathbb{N}\), where \(\{\gamma_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{ \Vert A \Vert ^{2}})\) and \(\{\lambda_{n}\} \subset[c,d]\) for some \(c,d\in(0,\frac{1}{k})\), \(T_{r}\) is a resolvent of F for \(r>0\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\Gamma\), where \(z=\lim_{n\rightarrow \infty} P_{\Gamma}x_{n}\).
Proof
Putting \(T=T_{r}\) in Theorem 3.1, we get the desired result by Lemmas 4.1 and 4.2. □
The following theorems are related to the zero points of maximal monotone mappings.
Theorem 4.4
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C be a nonempty closed convex subset of \(H_{1}\) and Q be a nonempty closed convex subset of \(H_{2}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(A\neq0\), let \(f:C\rightarrow H_{1}\) be a monotone and k-Lipschitz continuous mapping and \(B:H_{2}\rightarrow2^{H_{2}}\) be a maximal monotone mapping with \(D(B)\neq\emptyset\). Setting \(\Gamma=\{z\in \operatorname {VI}(C,f): Az\in B^{-1}0\}\), assume that \(\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{t_{n}\}\) be generated by \(x_{1}=x\in C\) and
for each \(n\in\mathbb{N}\), where \(\{\gamma_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{ \Vert A \Vert ^{2}})\) and \(\{\lambda_{n}\} \subset[c,d]\) for some \(c,d\in(0,\frac{1}{k})\), \(J_{r}\) is a resolvent of B for \(r>0\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\Gamma\), where \(z=\lim_{n\rightarrow \infty} P_{\Gamma}x_{n}\).
Proof
Putting \(T=J_{r}\) in Theorem 3.1, we get the desired result. □
Theorem 4.5
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C be a nonempty closed convex subset of \(H_{1}\) and Q be a nonempty closed convex subset of \(H_{2}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(A\neq0\), \(f:C\rightarrow H_{1}\) be a monotone and k-Lipschitz continuous mapping, \(B:H_{2}\rightarrow2^{H_{2}}\) be a maximal monotone mapping with \(D(B)\neq\emptyset\) and \(F:H_{2}\rightarrow H_{2}\) be an α-inverse strongly monotone mapping. Setting \(\Gamma=\{z\in \operatorname {VI}(C,f): Az\in(B+F)^{-1}0\}\), assume that \(\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{t_{n}\}\) be generated by \(x_{1}=x\in C\) and
for each \(n\in\mathbb{N}\), where \(\{\gamma_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{ \Vert A \Vert ^{2}})\) and \(\{\lambda_{n}\} \subset[c,d]\) for some \(c,d\in(0,\frac{1}{k})\), \(J_{r}\) is a resolvent of B for \(r\in (0,2\alpha)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\Gamma\), where \(z=\lim_{n\rightarrow \infty} P_{\Gamma}x_{n}\).
Proof
We can easily prove that \(z\in(B+F)^{-1}0\) if and only if \(z=J_{r}(I-rF)z\), and that \(J_{r}(I-rF)\) is nonexpansive for \(r\in(0,2\alpha)\). Putting \(T=J_{r}(I-rF)\) in Theorem 3.1, we get the desired result. □
For a nonempty closed convex subset C, the constrained convex minimization problem is to find \(x^{*}\in C\) such that

$$ \phi\bigl(x^{*}\bigr)=\min_{x\in C}\phi(x), $$
(4.5)
where C is a nonempty closed convex subset of a real Hilbert space H and ϕ is a real-valued convex function. The set of solutions of constrained convex minimization problem (4.5) is denoted by \(\arg\min_{x\in C}\phi(x)\).
Lemma 4.6
Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let ϕ be a convex function of H into \(\mathbb{R}\). If ϕ is differentiable, then z is a solution of (4.5) if and only if \(z\in \operatorname {VI}(C,\nabla\phi)\).
Proof
Let z be a solution of (4.5). For each \(x\in C\), \(z+\lambda(x-z)\in C\) for all \(\lambda\in(0,1)\). Since ϕ is differentiable, we have

$$ 0\leq\lim_{\lambda\downarrow0}\frac{\phi(z+\lambda(x-z))-\phi(z)}{\lambda}=\bigl\langle \nabla\phi(z), x-z\bigr\rangle , \quad\forall x\in C. $$
Conversely, if \(z\in \operatorname {VI}(C,\nabla\phi)\), i.e., \(\langle\nabla\phi(z), x-z\rangle\geq0\) for all \(x\in C\), then, since ϕ is convex, we have

$$ \phi(x)\geq\phi(z)+\bigl\langle \nabla\phi(z), x-z\bigr\rangle \geq\phi(z), \quad\forall x\in C. $$
Hence z is a solution of (4.5). □
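Lemma 4.6 can be verified numerically on a simple instance; the function ϕ, the set C, and the sampling scheme below are illustrative choices of ours.

```python
import numpy as np

def project_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

# phi(x) = ||x - b||^2 / 2 is differentiable and convex with grad phi(x) = x - b.
# Its minimizer over the closed unit ball C is z = P_C(b); by Lemma 4.6 this z
# must satisfy the variational inequality <grad phi(z), x - z> >= 0 on C.
b = np.array([0.0, 3.0])
z = project_ball(b)   # argmin of phi over C, here (0, 1)
grad = z - b

rng = np.random.default_rng(2)
for _ in range(1000):
    x = project_ball(rng.normal(size=2) * 2)   # random points of C
    assert np.dot(grad, x - z) >= -1e-12
```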
Applying Theorem 3.3 and Lemma 4.6, we obtain the following result.
Theorem 4.7
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C be a nonempty closed convex subset of \(H_{1}\) and Q be a nonempty closed convex subset of \(H_{2}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(A\neq0\), let \(f:C\rightarrow H_{1}\) be a monotone and k-Lipschitz continuous mapping and \(\phi:H_{2}\rightarrow\mathbb{R}\) be a differentiable convex function, and suppose that \(\nabla\phi\) is α-ism. Setting \(\Gamma=\{z\in \operatorname {VI}(C,f): Az\in\arg\min_{y\in Q}\phi(y)\}\), assume that \(\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{t_{n}\}\) be generated by \(x_{1}=x\in C\) and
for each \(n\in\mathbb{N}\), where \(\{\gamma_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{ \Vert A \Vert ^{2}})\) and \(\{\lambda_{n}\} \subset[c,d]\) for some \(c,d\in(0,\frac{1}{k})\), \(\mu\in(0,2\alpha)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\Gamma\), where \(z=\lim_{n\rightarrow \infty} P_{\Gamma}x_{n}\).
Proof
Putting \(g=\nabla\phi\) in Theorem 3.3, we get the desired result by Lemma 4.6. □
Applying Theorem 3.3 and Lemma 4.6, we obtain the following result. The problem solved in the following theorem is called the split minimization problem (SMP). It is also important in nonlinear analysis and optimization.
Theorem 4.8
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let C be a nonempty closed convex subset of \(H_{1}\) and Q be a nonempty closed convex subset of \(H_{2}\). Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(A\neq0\), \(\phi_{1}\) and \(\phi_{2}\) be differentiable convex functions of \(H_{1}\) into \(\mathbb{R}\) and \(H_{2}\) into \(\mathbb{R}\), respectively. Suppose that \(\nabla\phi_{1}\) is k-Lipschitz continuous and \(\nabla \phi_{2}\) is α-ism. Setting \(\Gamma=\{z\in\arg\min_{x\in C}\phi_{1}(x): Az\in\arg\min_{y\in Q}\phi_{2}(y)\}\), assume that \(\Gamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\) and \(\{t_{n}\}\) be generated by \(x_{1}=x\in C\) and
for each \(n\in\mathbb{N}\), where \(\{\gamma_{n}\}\subset[a,b]\) for some \(a,b\in(0,\frac{1}{ \Vert A \Vert ^{2}})\) and \(\{\lambda_{n}\} \subset[c,d]\) for some \(c,d\in(0,\frac{1}{k})\), \(\mu\in(0,2\alpha)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\Gamma\), where \(z=\lim_{n\rightarrow \infty} P_{\Gamma}x_{n}\).
Proof
Since \(\phi_{1}\) is convex, we can easily obtain that \(\nabla\phi_{1}\) is monotone. Putting \(f=\nabla\phi_{1}\) and \(g=\nabla\phi_{2}\) in Theorem 3.3, we obtain the desired result by Lemma 4.6. □
5 Conclusion
It should be pointed out that the variational inequality problem and the split feasibility problem are important in nonlinear analysis and optimization. Censor et al. recently introduced an algorithm which solves the split variational inequality problem, a problem generated from the variational inequality problem and the split feasibility problem. The basis of this paper is the work done by Censor et al. combined with Byrne's CQ algorithm for solving the split feasibility problem and Korpelevich's extragradient method for solving the variational inequality problem with a monotone mapping. The main aim of this paper is to propose an iterative method for finding an element that solves a class of split variational inequality problems under weaker conditions. As applications, we obtain some new weak convergence theorems by using our weak convergence result to solve related problems in a Hilbert space.
Theorem 3.3 improves and extends Theorem 1.1 in the following ways:
References
Stampacchia, G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413-4416 (1964)
Korpelevich, GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747-756 (1976)
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
Censor, Y, Gibali, A, Reich, S: The split variational inequality problem. The Technion - Israel Institute of Technology, Haifa. arXiv:1009.3780 (2010)
Moudafi, A: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275-283 (2011)
Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert space. Inverse Probl. 26, 105018 (2010)
Takahashi, W, Xu, HK, Yao, JC: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23(2), 205-221 (2015)
Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286-5302 (2011)
Takahashi, W, Nadezhkina, N: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)
Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
Takahashi, W, Toyoda, M: Weak convergence theorem for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417-428 (2003)
Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)
Acknowledgements
The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing. The first author was supported by the Foundation of Tianjin Key Laboratory for Advanced Signal Processing and the Financial Funds for the Central Universities (grant number 3122017072). Bing-Nan Jiang was supported in part by Technology Innovation Funds of Civil Aviation University of China for Graduate (Y1739).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All the authors read and approved the final manuscript.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Tian, M., Jiang, B.-N. Weak convergence theorem for a class of split variational inequality problems and applications in a Hilbert space. J Inequal Appl 2017, 123 (2017). https://doi.org/10.1186/s13660-017-1397-9