New strong convergence theorems for split variational inclusion problems in Hilbert spaces
Journal of Inequalities and Applications volume 2015, Article number: 176 (2015)
Abstract
The split variational inclusion problem is an important generalization of the split feasibility problem. In this paper, we present a descent-conjugate gradient algorithm for split variational inclusion problems in Hilbert spaces, and we prove a strong convergence theorem for the proposed algorithm under suitable conditions. As an application, we obtain a new strong convergence theorem for the split feasibility problem in Hilbert spaces. Finally, we present numerical results for split variational inclusion problems to demonstrate the efficiency of the proposed algorithm.
1 Introduction
Let H be a real Hilbert space, and \(B:H\multimap H\) be a set-valued mapping with domain \(\mathcal{D}(B):=\{x\in H:B(x)\neq\emptyset\}\). Recall that B is called monotone if \(\langle u-v,x-y\rangle\geq0\) for all \(x,y\in\mathcal{D}(B)\), \(u\in Bx\), and \(v\in By\); B is maximal monotone if its graph \(\{(x,y):x\in\mathcal{D}(B), y\in Bx\}\) is not properly contained in the graph of any other monotone mapping. An important problem for set-valued monotone mappings is to find \(\bar{x}\in H\) such that \(0\in B\bar{x}\); such an \(\bar{x}\) is called a zero point of B. A well-known method for approximating a zero point of a maximal monotone mapping defined on a real Hilbert space is the proximal point algorithm, first introduced by Martinet [1] and further developed by Rockafellar [2]. This is an iterative procedure which generates \(\{x_{n}\}\) by \(x_{1}=x\in H\) and
$$ x_{n+1}=J_{\beta_{n}}^{B}x_{n},\quad n\in\mathbb{N}, $$(1.1)
where \(\{\beta_{n}\}\subseteq (0,\infty)\), B is a maximal monotone mapping in a real Hilbert space, and \(J_{r}^{B}\) is the resolvent mapping of B defined by \(J_{r}^{B}=(I+r B)^{-1}\) for each \(r>0\). In 1976, Rockafellar [2] proved the following in the Hilbert space setting: If the solution set \(B^{-1}(0)\) is nonempty and \(\liminf_{n\rightarrow \infty}\beta _{n}>0\), then the sequence \(\{x_{n}\}\) in (1.1) converges weakly to an element of \(B^{-1}(0)\). In particular, if B is the subdifferential ∂f of a proper lower semicontinuous and convex function \(f:H\rightarrow (-\infty,\infty]\), then (1.1) is reduced to
$$ x_{n+1}=\operatorname{argmin}_{y\in H}\biggl\{f(y)+\frac{1}{2\beta_{n}}\|y-x_{n}\|^{2}\biggr\},\quad n\in\mathbb{N}. $$
In this case, \(\{x_{n}\}\) converges weakly to a minimizer of f.
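To make the proximal point iteration (1.1) concrete, the following sketch (a hypothetical example, not from the paper) applies it to \(B=\partial f\) with \(f(x)=\frac{1}{2}\|x-c\|^{2}\), whose resolvent has the closed form \(J_{\beta}^{\partial f}(x)=(x+\beta c)/(1+\beta)\):

```python
import numpy as np

# Proximal point iteration x_{n+1} = J_{beta_n}^{B} x_n for B = ∂f with
# f(x) = 0.5*||x - c||^2; the resolvent (I + beta*∂f)^{-1} is
# x -> (x + beta*c)/(1 + beta), so each step contracts toward c.
def proximal_point(x0, c, betas):
    x = x0
    for beta in betas:
        x = (x + beta * c) / (1.0 + beta)  # resolvent step
    return x

c = np.array([1.0, -2.0])
x = proximal_point(np.array([5.0, 5.0]), c, [1.0] * 50)
# x approaches c, the unique minimizer of f (and zero point of ∂f)
```

Here the weak limit of the theorem is attained exactly, since f has a unique minimizer.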
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and \(A^{*}\) be the adjoint of A. Let \(f:H_{1}\rightarrow H_{1}\) and \(g:H_{2}\rightarrow H_{2}\) be two single-valued monotone operators. In 2011, Moudafi [3] presented the following general split variational inclusion problem:
$$\text{find } \bar{x}\in H_{1} \text{ such that } 0\in f(\bar{x})+B_{1}(\bar{x}) \text{ and } \bar{y}=A\bar{x} \text{ solves } 0\in g(\bar{y})+B_{2}(\bar{y}). \quad (\mathrm{GSFVIP}) $$
Clearly, problem (GSFVIP) generalizes both variational inclusion problems and the split feasibility problem. Hence, it is important to study split variational inclusion problems in Hilbert spaces.
For problem (GSFVIP), Moudafi [3] gave the following algorithm and a weak convergence theorem under suitable conditions:
$$x_{n+1}=J_{\lambda}^{B_{1}}(I-\lambda f)\bigl[x_{n}+\gamma A^{*}\bigl(J_{\lambda}^{B_{2}}(I-\lambda g)-I\bigr)Ax_{n}\bigr],\quad n\in\mathbb{N}, $$where \(\lambda>0\) and \(\gamma\in(0,1/L)\) with L the spectral radius of \(A^{*}A\).
It is worth noting that λ and γ are fixed numbers in this algorithm. Hence, it is important to establish more general iteration processes and strong convergence theorems for problem (SFVIP).
In this paper, we consider the following split variational inclusion problems in Hilbert spaces:
$$\text{find } \bar{x}\in H_{1} \text{ such that } 0\in B_{1}(\bar{x}) \text{ and } 0\in B_{2}(A\bar{x}). \quad (\mathrm{SFVIP}) $$
In 2011, Byrne et al. [4] gave the following two convergence theorems for split variational inclusion problems.
First, from the viewpoint of fixed point algorithms, the algorithm given in Theorem 1.1 can be seen as a Picard iteration method.
Theorem 1.1
[4]
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and let \(A^{*}\) denote the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings. Let \(\beta>0\) and \(\rho\in(0,\frac {2}{\|A\|^{2}})\). Let Ω be the solution set of (SFVIP) and suppose that \(\Omega\neq\emptyset\). Let \(\{x_{n}\}\) be defined by
$$x_{n+1}=J_{\beta}^{B_{1}}\bigl[x_{n}-\rho A^{*}\bigl(I-J_{\beta}^{B_{2}}\bigr)Ax_{n}\bigr] $$
for each \(n\in\mathbb{N}\). Then \(\{x_{n}\}\) converges weakly to an element \(\bar{x}\in\Omega\).
Next, from the viewpoint of fixed point algorithms, the algorithm given in Theorem 1.2 can be seen as a Halpern iteration method.
Theorem 1.2
[4]
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and let \(A^{*}\) denote the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings. Let \(\{a_{n}\}\) be a sequence of real numbers in \([0,1]\) and let \(\beta>0\). Let \(u\in H_{1}\) be fixed and let \(\rho\in(0,\frac{2}{\|A\|^{2}})\). Let Ω be the solution set of (SFVIP) and suppose that \(\Omega\neq\emptyset\). Let \(\{x_{n}\}\) be defined by
$$x_{n+1}=a_{n}u+(1-a_{n})J_{\beta}^{B_{1}}\bigl[x_{n}-\rho A^{*}\bigl(I-J_{\beta}^{B_{2}}\bigr)Ax_{n}\bigr] $$
for each \(n\in\mathbb{N}\). Assume that \(\lim_{n\rightarrow \infty}a_{n}=0\), \(\sum_{n=1}^{\infty}a_{n}=\infty\). Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\) for some \(\bar{x}\in \Omega\).
Remark 1.1
In Theorems 1.1 and 1.2, we know that β and ρ are fixed numbers.
In 2013, Chuang [5] gave the following two convergence theorems for problem (SFVIP). Indeed, from the viewpoint of fixed point algorithms, the algorithm given in Theorem 1.3 can be seen as a Halpern-Mann type iteration method.
Theorem 1.3
[5]
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and let \(A^{*}\) denote the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings. Let \(\{a_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\) be sequences of real numbers in \([0,1]\) with \(a_{n}+b_{n}+c_{n}=1\) and \(0< a_{n}<1\) for each \(n\in\mathbb{N}\). Let \(\{\beta_{n}\}\) be a sequence in \((0,\infty)\). Let \(u\in H_{1}\) be fixed. Let \(\{\rho_{n}\}\) be a sequence in \((0,\frac{2}{\|A\|^{2}+1})\). Let Ω be the solution set of (SFVIP) and suppose that \(\Omega\neq\emptyset\). Let \(\{x_{n}\}\) be defined by
$$x_{n+1}=a_{n}u+b_{n}x_{n}+c_{n}J_{\beta_{n}}^{B_{1}}\bigl[x_{n}-\rho_{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr] $$
for each \(n\in\mathbb{N}\). Assume that \(\lim_{n\rightarrow \infty}a_{n}=0\), \(\sum_{n=1}^{\infty}a_{n}=\infty\), \(\liminf_{n\rightarrow \infty}c_{n}\rho_{n}>0\), \(\liminf_{n\rightarrow \infty}b_{n}c_{n}>0\), and \(\liminf_{n\rightarrow \infty}\beta_{n}>0\). Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\), where \(\bar {x}=P_{\Omega}u\).
Besides, the algorithm in Theorem 1.4 comes from optimization theory and the Tikhonov regularization method.
Theorem 1.4
[5]
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and let \(A^{*}\) denote the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings. Let \(\{\beta_{n}\}\) be a sequence in \((0,\infty)\), \(\{a_{n}\}\) be a sequence in \((0,1)\), and \(\{\rho_{n}\}\) be a sequence in \((0,2/(\|A\|^{2}+2))\). Let Ω be the solution set of (SFVIP) and suppose that \(\Omega\neq\emptyset\). Let \(\{x_{n}\}\) be defined by
$$x_{n+1}=J_{\beta_{n}}^{B_{1}}\bigl[(1-a_{n}\rho_{n})x_{n}-\rho_{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr] $$
for each \(n\in\mathbb{N}\). Assume that \(\lim_{n\rightarrow \infty}a_{n}=0\), \(\sum _{n=1}^{\infty}a_{n}\rho_{n}=\infty\), \(\liminf_{n\rightarrow \infty}\rho_{n}>0\) and \(\liminf_{n\rightarrow \infty}\beta_{n}>0\). Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\), where \(\bar {x}=P_{\Omega}0\), i.e., \(\bar{x}\) is the minimal norm solution of (SFVIP).
Further, we also observe that Bnouhachem et al. [6] proposed the following descent-projection algorithm for the split feasibility problem.
Let \(A:\mathbb{R}^{n}\rightarrow \mathbb{R}^{m}\) be a bounded linear operator, and \(A^{*}\) be the adjoint of A. Let \(f:\mathbb{R}^{n}\rightarrow (-\infty,\infty]\) and \(g:\mathbb {R}^{m}\rightarrow (-\infty,\infty]\) be two proper, lower semicontinuous, and convex functions. Let \(\{\rho_{k}\}\) be a sequence of positive real numbers.
Algorithm 1.1
For given \(x_{k}\in\mathbb{R}^{n}\), find the approximate solution by the following iterative process.
- Step 1.:
-
For \(k\in\mathbb{N}\), let \(C_{k}\) and \(Q_{k}\) be
$$\left \{ \begin{array}{l} C_{k}:=\{u\in\mathbb{R}^{n}: f(x_{k})+\langle u_{k},u-x_{k}\rangle\leq0\}, \\ Q_{k}:=\{v\in\mathbb{R}^{m}: g(Ax_{k})+\langle v_{k},v-Ax_{k}\rangle\leq0\}, \end{array} \right . $$where \(u_{k}\in\partial f(x_{k})\) and \(v_{k}\in\partial g(Ax_{k})\).
- Step 2.:
-
\(y_{k}=P_{C_{k}}[x_{k}-\rho_{k}A^{\top}(I-P_{Q_{k}})Ax_{k}]\), where \(\rho_{k}>0\) satisfies
$$\rho_{k}\bigl\Vert A^{\top}(I-P_{Q_{k}})Ax_{k}-A^{\top}(I-P_{Q_{k}})Ay_{k} \bigr\Vert \leq\delta \|x_{k}-y_{k}\|,\quad 0< \delta< 1. $$
- Step 3.:
-
If \(y_{k}=x_{k}\), then stop. Otherwise, go to Step 4.
- Step 4.:
-
The new iterative \(x_{k+1}\) is defined by
$$x_{k+1}=P_{C_{k}}\bigl[x_{k}-\alpha_{k} d(x_{k},\rho_{k})\bigr], $$where
$$\left \{ \begin{array}{l} d(x_{k},\rho_{k}):=x_{k}-y_{k}+\rho_{k}A^{\top}(I-P_{Q_{k}})Ay_{k}, \\ \varepsilon_{k}:=\rho_{k}[A^{\top}(I-P_{Q_{k}})Ay_{k}-A^{\top}(I-P_{Q_{k}})Ax_{k}], \\ D(x_{k},\rho_{k}):=x_{k}-y_{k}+\varepsilon_{k}, \\ \phi(x_{k},\rho_{k}):=\langle x_{k}-y_{k},D(x_{k},\rho_{k})\rangle, \\ \alpha_{k}:=\frac{\phi(x_{k},\rho_{k})}{\|d(x_{k},\rho_{k})\|^{2}}. \end{array} \right . $$
Let \(H_{1}\) and \(H_{2}\) be infinite dimensional Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator, and \(A^{*}\) be the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be set-valued maximal monotone mappings. Let \(\{a_{n}\}\), \(\{\eta_{n}\}\), \(\{\gamma_{n}\}\), and \(\{\rho_{n}\}\) be real sequences. Let δ be a fixed real number. Let Ω be the solution set of problem (SFVIP). In this paper, motivated by the above works and related results, we present the following algorithm with a conjugate gradient direction for split variational inclusion problems in Hilbert spaces.
Motivated by Algorithm 1.1 and the above results, we want to give a strong convergence theorem in infinite dimensional real Hilbert spaces. (Indeed, for computers and programming languages, we can only give examples in finite dimensional spaces.) Next, we want the convergence rate of the given algorithm to be faster than those of the above algorithms. Hence, we give the following algorithm with the conjugate gradient method. Our numerical results show that this algorithm is very fast under some conditions.
Algorithm 1.2
- Step 0.:
-
Choose \(x_{1}\in H_{1}\) arbitrarily, set \(r_{1}\in(0,1)\) and \(d_{0}=0\).
- Step 1.:
-
\(d_{n}:=-A^{*}(I-J^{B_{2}}_{\beta_{n}})Ax_{n}+\eta_{n} d_{n-1}\).
- Step 2.:
-
For \(n\in\mathbb{N}\), set \(y_{n}\) as
$$ y_{n}=J^{B_{1}}_{\beta_{n}} \bigl[(1-a_{n}\rho_{n})x_{n}-\rho_{n}A^{*} \bigl(I-J^{B_{2}}_{\beta _{n}}\bigr)Ax_{n}+\gamma_{n} d_{n}\bigr], $$(1.3)where \(\rho_{n}>0\) satisfies
$$ \rho_{n}\bigl\Vert A^{*}\bigl(I-J^{B_{2}}_{\beta_{n}} \bigr)Ax_{n}-A^{*}\bigl(I-J^{B_{2}}_{\beta_{n}} \bigr)Ay_{n}\bigr\Vert \leq\delta\|x_{n}-y_{n}\|, \quad 0< \delta< 1. $$(1.4)
- Step 3.:
-
If \(x_{n}=y_{n}\), then set \(n:=n+1\) and go to Step 1. Otherwise, go to Step 4.
- Step 4.:
-
The new iterative \(x_{n+1}\) is defined by
$$ x_{n+1}=J^{B_{1}}_{\beta_{n}} \bigl[x_{n}-\alpha_{n} D(x_{n},\rho_{n}) \bigr], $$(1.5)where
$$\begin{aligned}& D(x_{n},\rho_{n}):=x_{n}-y_{n}+ \rho_{n}\bigl[A^{*}\bigl(I-J^{B_{2}}_{\beta _{n}} \bigr)Ay_{n}-A^{*}\bigl(I-J^{B_{2}}_{\beta_{n}} \bigr)Ax_{n}\bigr], \end{aligned}$$(1.6)$$\begin{aligned}& \alpha_{n}:=\frac{\langle x_{n}-y_{n},D(x_{n},\rho_{n})\rangle}{\|D(x_{n},\rho_{n})\|^{2}}. \end{aligned}$$(1.7)Then update \(n:=n+1\) and go to Step 1.
Remark 1.2
-
(1)
It is worth noting that \(d_{n}\) is defined by using the idea of the so-called conjugate gradient direction ([7], Chapter 5). Further, it is natural to assume that \(\{x_{n}\}\) is a bounded sequence for the convergence theorems with the conjugate gradient direction method.
-
(2)
If we set
$$ \varepsilon_{n}:=\rho_{n}\bigl[A^{*} \bigl(I-J^{B_{2}}_{\beta _{n}}\bigr)Ay_{n}-A^{*} \bigl(I-J^{B_{2}}_{\beta_{n}}\bigr)Ax_{n}\bigr], $$(1.8)then it follows from (1.4) and (1.8) that
$$ \bigl\vert \langle x_{n}-y_{n}, \varepsilon_{n}\rangle\bigr\vert \leq\|x_{n}-y_{n} \|\cdot\|\varepsilon_{n}\| \leq\delta\|x_{n}-y_{n}\| \cdot\|x_{n}-y_{n}\|=\delta\|x_{n}-y_{n} \|^{2}. $$(1.9) -
(3)
If we choose \(\rho_{n}\) such that \(0<\rho_{n}\leq\frac{\delta}{\|A\|\cdot \|A^{*}\|}=\frac{\delta}{\|A\|^{2}}\), then (1.4) holds.
-
(4)
In our convergence theorem, we may assume that \(x_{n}\neq y_{n}\) for each \(n\in\mathbb{N}\) by the assumptions on the sequence \(\{a_{n}\}\).
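The steps of Algorithm 1.2 can be sketched numerically. In the toy instance below (hypothetical, not from the paper), \(B_{i}x=x-b_{i}\), so each resolvent has the closed form \(J_{\beta}^{B_{i}}(x)=(x+\beta b_{i})/(1+\beta)\); A is a diagonal matrix with \(Ab_{1}=b_{2}\), so \(\Omega=\{b_{1}\}\), and \(\rho_{n}\) is the constant value from Remark 1.2(3):

```python
import numpy as np

# Toy instance: B_i x = x - b_i (maximal monotone) with closed-form
# resolvent J_beta^{B_i}(x) = (x + beta*b_i)/(1 + beta); A b1 = b2,
# so the solution set of (SFVIP) is Omega = {b1}.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b1 = np.array([1.0, -1.0])
b2 = A @ b1
beta = 1.0
J1 = lambda x: (x + beta * b1) / (1.0 + beta)   # resolvent of B1
J2 = lambda y: (y + beta * b2) / (1.0 + beta)   # resolvent of B2
T = lambda x: A.T @ (A @ x - J2(A @ x))         # A*(I - J_beta^{B2})A

delta = 0.4
rho = delta / np.linalg.norm(A, 2) ** 2         # Remark 1.2(3): (1.4) holds

x, d = np.array([5.0, 5.0]), np.zeros(2)
for n in range(1, 501):
    a_n, eta_n, gamma_n = 1 / (n + 1), 1 / (n + 1), 1 / (n + 1) ** 2
    d = -T(x) + eta_n * d                                   # Step 1
    y = J1((1 - a_n * rho) * x - rho * T(x) + gamma_n * d)  # Step 2, (1.3)
    if np.allclose(x, y):                                   # Step 3
        continue
    D = x - y + rho * (T(y) - T(x))                         # (1.6)
    alpha = np.dot(x - y, D) / np.dot(D, D)                 # (1.7)
    x = J1(x - alpha * D)                                   # Step 4, (1.5)
# x approximates b1 = P_Omega(0)
```

With these parameter choices (\(a_{n}=\eta_{n}=1/(n+1)\), \(\gamma_{n}=1/(n+1)^{2}\), so \(\gamma_{n}/a_{n}\rightarrow0\)), the iterates approach \(b_{1}=P_{\Omega}0\), in line with the convergence theorem of Section 3.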
Next, a strong convergence theorem of the proposed algorithm is proved under suitable conditions. As an application, we give a descent-projection-conjugate gradient algorithm and a strong convergence theorem for the split feasibility problem. Finally, we give numerical results to demonstrate the efficiency of the proposed algorithm.
2 Preliminaries
Let H be a real Hilbert space with inner product \(\langle\cdot,\cdot\rangle\) and norm \(\|\cdot\|\). We denote the strong convergence and the weak convergence of \(\{x_{n}\}\) to \(x\in H\) by \(x_{n}\rightarrow x\) and \(x_{n}\rightharpoonup x\), respectively. From [8, 9], for each \(x,y,u,v\in H\) and \(\lambda\in[0,1]\), we have
Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let \(T:C\rightarrow H\) be a mapping. Let \(\operatorname{Fix}(T):=\{x\in C: Tx=x\}\). Then T is said to be a nonexpansive mapping if \(\|Tx-Ty\|\leq\|x-y\|\) for every \(x,y\in C\). T is said to be a quasi-nonexpansive mapping if \(\operatorname{Fix}(T)\neq\emptyset\) and \(\|Tx-y\|\leq\|x-y\|\) for every \(x\in C\) and \(y\in \operatorname{Fix}(T)\). It is easy to see that \(\operatorname{Fix}(T)\) is a closed convex subset of C if T is a quasi-nonexpansive mapping. Besides, T is said to be a firmly nonexpansive mapping if \(\|Tx-Ty\|^{2}\leq\langle x-y,Tx-Ty\rangle\) for every \(x,y\in C\), that is, \(\|Tx-Ty\|^{2}\leq \|x-y\|^{2}-\|(I-T)x-(I-T)y\|^{2}\) for every \(x,y\in C\).
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Then for each \(x\in H\), there is a unique element \(\bar{x}\in C\) such that
$$\|x-\bar{x}\|=\min_{y\in C}\|x-y\|. $$
Here, set \(P_{C}x=\bar{x}\), and \(P_{C}\) is called the metric projection from H onto C.
Lemma 2.1
[8]
Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let \(P_{C}\) be the metric projection from H onto C. Then \(\langle x-P_{C}x,P_{C}x-y\rangle\geq0\) for all \(x\in H\) and \(y\in C\).
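For instance, when C is a box the metric projection is a componentwise clip; the following sketch (a hypothetical example) checks the inequality of Lemma 2.1 at a few sample points:

```python
import numpy as np

# Metric projection onto the box C = [0,1]^2 is the componentwise clip.
# We check the variational characterization of Lemma 2.1:
# <x - P_C x, P_C x - y> >= 0 for all y in C.
P_C = lambda x: np.clip(x, 0.0, 1.0)

x = np.array([2.0, -0.5])
px = P_C(x)                      # nearest point of C to x
ys = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([0.3, 0.7])]
ok = all(np.dot(x - px, px - y) >= 0 for y in ys)
```

The same inequality is what makes \(P_{C}\) firmly nonexpansive.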
Lemma 2.2
Let H be a real Hilbert space. Let \(B:H\multimap H\) be a set-valued maximal monotone mapping, \(\beta>0\), and let \(J_{\beta}^{B}\) be defined by \(J_{\beta}^{B}:=(I+\beta B)^{-1}\) (\(J_{\beta}^{B}\) is called resolvent mapping). Then the following are satisfied:
-
(i)
for each \(\beta>0\), \(J_{\beta}^{B}\) is a single-valued and firmly nonexpansive mapping;
-
(ii)
\(\mathcal{D}(J_{\beta}^{B})=H\) and \(\operatorname{Fix}(J_{\beta}^{B})=\{x\in \mathcal{D}(B):0\in Bx\}\);
-
(iii)
\(\|x-J_{\beta}^{B} x\|\leq\|x-J_{\gamma}^{B} x\|\) for all \(0<\beta\leq\gamma\) and for all \(x\in H\);
-
(iv)
\((I-J_{\beta}^{B})\) is a firmly nonexpansive mapping for each \(\beta>0\);
-
(v)
suppose that \(B^{-1}(0)\neq\emptyset\), then \(\|x-J_{\beta}^{B} x\|^{2}+\|J_{\beta}^{B} x-\bar{x}\|^{2}\leq\|x-\bar{x}\|^{2}\) for each \(x\in H\), each \(\bar{x}\in B^{-1}(0)\), and each \(\beta>0\);
-
(vi)
suppose that \(B^{-1}(0)\neq\emptyset\), then \(\langle x-J_{\beta}^{B} x,J_{\beta}^{B} x-w\rangle\geq0\) for each \(x\in H\) and each \(w\in B^{-1}(0)\), and each \(\beta>0\).
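As a concrete illustration (not from the paper), take \(B=\partial f\) with \(f(x)=|x|\) on \(\mathbb{R}\); the resolvent \(J_{\beta}^{B}\) is then the soft-thresholding map, and parts (i) and (iii) of Lemma 2.2 can be spot-checked numerically:

```python
import numpy as np

# For B = ∂f with f(x) = |x| on the real line, the resolvent
# J_beta^B = (I + beta*B)^{-1} is soft-thresholding at level beta.
def J(beta, x):
    return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

rng = np.random.default_rng(0)
xs, ys = rng.normal(size=100), rng.normal(size=100)

# (i) firm nonexpansiveness: |Jx - Jy|^2 <= <x - y, Jx - Jy>, elementwise
firm = np.all((J(1.0, xs) - J(1.0, ys)) ** 2
              <= (xs - ys) * (J(1.0, xs) - J(1.0, ys)) + 1e-12)

# (iii) |x - J_beta x| <= |x - J_gamma x| whenever 0 < beta <= gamma
mono = np.all(np.abs(xs - J(0.5, xs)) <= np.abs(xs - J(2.0, xs)) + 1e-12)
```

Here \(x-J_{\beta}^{B}x\) clamps x to \([-\beta,\beta]\), which makes property (iii) transparent.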
Lemma 2.3
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator, and \(A^{*}\) be the adjoint of A, and let \(\beta>0\) be fixed. Let \(B:H_{2}\multimap H_{2}\) be a set-valued maximal monotone mapping, and let \(J_{\beta}^{B}\) be the resolvent mapping of B. Let \(T:H_{1}\rightarrow H_{1}\) be defined by \(Tx:=A^{*}(I-J_{\beta}^{B})Ax\) for each \(x\in H_{1}\). Then
-
(i)
\(\|(I-J_{\beta}^{B})Ax-(I-J_{\beta}^{B})Ay\|^{2}\leq\langle Tx-Ty,x-y\rangle\) for all \(x,y\in H_{1}\);
-
(ii)
\(\|A^{*}(I-J_{\beta}^{B})Ax-A^{*}(I-J_{\beta}^{B})Ay\|^{2} \leq\|A\|^{2}\cdot\langle Tx-Ty,x-y\rangle\) for all \(x,y\in H_{1}\).
Lemma 2.4
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator, and \(A^{*}\) be the adjoint of A, let \(\beta>0\) be fixed, and let \(\rho\in(0,\frac{2}{\|A\|^{2}})\). Let \(B_{2}:H_{2}\multimap H_{2}\) be a set-valued maximal monotone mapping, and let \(J_{\beta}^{B_{2}}\) be the resolvent mapping of \(B_{2}\). Then
$$\bigl\Vert \bigl(I-\rho A^{*}\bigl(I-J_{\beta}^{B_{2}}\bigr)A\bigr)x-\bigl(I-\rho A^{*}\bigl(I-J_{\beta}^{B_{2}}\bigr)A\bigr)y\bigr\Vert ^{2} \leq\|x-y\|^{2}-\rho\bigl(2-\rho\|A\|^{2}\bigr)\bigl\Vert \bigl(I-J_{\beta}^{B_{2}}\bigr)Ax-\bigl(I-J_{\beta}^{B_{2}}\bigr)Ay\bigr\Vert ^{2} $$
for all \(x,y\in H_{1}\). Furthermore, \(I-\rho A^{*}(I-J_{\beta}^{B_{2}})A\) is a nonexpansive mapping.
Lemma 2.5
[10]
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow H\) be a nonexpansive mapping, and let \(\{x_{n}\}\) be a sequence in C. If \(x_{n}\rightharpoonup w\) and \(\lim_{n\rightarrow \infty }\|x_{n}-Tx_{n}\|=0\), then \(Tw=w\).
Lemma 2.6
[11]
Let \(\{a_{n}\}\) be a sequence of real numbers such that there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) such that \(a_{n_{i}}< a_{n_{i}+1}\) for all \(i\in\mathbb{N}\). Then there exists a nondecreasing sequence \(\{m_{k}\}\subseteq \mathbb{N}\) such that \(m_{k}\rightarrow \infty\), \(a_{m_{k}}\leq a_{m_{k}+1}\), and \(a_{k}\leq a_{m_{k}+1}\) are satisfied by all (sufficiently large) numbers \(k\in\mathbb{N}\). In fact, \(m_{k}=\max\{j\leq k:a_{j}< a_{j+1}\}\).
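The index sequence of Lemma 2.6 is easy to compute directly; the following sketch (with an arbitrary stand-in finite sequence, 0-indexed) checks the stated properties of \(m_{k}=\max\{j\leq k:a_{j}<a_{j+1}\}\):

```python
# Stand-in finite sequence that is not eventually decreasing.
a = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]

def m(k):
    # m_k = max{ j <= k : a_j < a_{j+1} }
    return max(j for j in range(k + 1) if j + 1 < len(a) and a[j] < a[j + 1])

ks = range(5, len(a) - 1)
nondec = all(m(k) <= m(k + 1) for k in range(5, len(a) - 2))  # m_k nondecreasing
prop1 = all(a[m(k)] <= a[m(k) + 1] for k in ks)               # a_{m_k} <= a_{m_k+1}
prop2 = all(a[k] <= a[m(k) + 1] for k in ks)                  # a_k <= a_{m_k+1}
```

In the lemma the sequence is infinite and the properties hold for all sufficiently large k; the finite check above only illustrates the mechanism.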
Lemma 2.7
[12]
Let \(\{a_{n}\}_{n\in\mathbb{N}}\) be a sequence of nonnegative real numbers, \(\{\alpha_{n}\}\) a sequence of real numbers in \([0,1]\) with \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\{u_{n}\}\) a sequence of nonnegative real numbers with \(\sum_{n=1}^{\infty}u_{n}<\infty\), \(\{t_{n}\}\) a sequence of real numbers with \(\limsup_{n\rightarrow \infty}t_{n}\leq0\). Suppose that \(a_{n+1}\leq(1-\alpha_{n})a_{n}+\alpha_{n}t_{n}+u_{n}\) for each \(n\in\mathbb{N}\). Then \(\lim_{n\rightarrow \infty}a_{n}=0\).
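A quick numerical sanity check of Lemma 2.7 (with hypothetical sequences): take \(\alpha_{n}=1/(n+1)\) (so \(\sum\alpha_{n}=\infty\)), \(t_{n}=1/(n+1)\) (so \(\limsup t_{n}=0\leq0\)), and \(u_{n}=1/(n+1)^{2}\) (summable); the recursion then drives \(a_{n}\) to 0:

```python
# Simulate a_{n+1} = (1 - alpha_n)*a_n + alpha_n*t_n + u_n from a_1 = 10;
# by Lemma 2.7 the sequence tends to 0.
a = 10.0
for n in range(1, 100001):
    alpha, t, u = 1.0 / (n + 1), 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    a = (1 - alpha) * a + alpha * t + u
# a is now several orders of magnitude below its starting value
```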
3 Strong convergence theorems for (SFVIP)
In Remark 1.2, we noted that it is natural to assume that \(\{x_{n}\}\) is a bounded sequence in the following result. For example, ([13], Theorem 3.1) uses the assumption that \(\{\nabla f_{2}(z_{n})\}\) is a bounded sequence; ([14], Assumption 3.2, Theorem 3.1) uses the assumption that \(\{y_{n}^{i}\}_{n\in\mathbb{N}}\) is a bounded sequence; and ([15], Assumption 2, Proposition 2.7) uses the assumption that there exists a positive number \(M_{3}\) such that \(\|\nabla f_{\ell}(x)\|\leq M_{3}\) for each \(x\in\mathbb{R}^{p}\) and each \(\ell=1,2,\ldots,L\). Here, we need a similar assumption for our algorithm and convergence theorem in this paper.
Theorem 3.1
Let \(H_{1}\) and \(H_{2}\) be infinite dimensional Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator, and \(A^{*}\) be the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be set-valued maximal monotone mappings. Let \(\{a_{n}\}\), \(\{\eta_{n}\}\), \(\{\gamma_{n}\}\) be sequences in \([0,1]\), and let \(\{\beta_{n}\}\) be a sequence in \((0,\infty)\). Choose \(\delta\in(0,1/2)\), and let \(\{\rho_{n}\}\) be a sequence in \((0,\min\{\frac{\delta}{\|A\|^{2}},\frac{2}{\|A\|^{2}+2}\})\). Let Ω be the solution set of problem (SFVIP) and assume that \(\Omega\neq\emptyset\). For the sequence \(\{x_{n}\}\) in Algorithm 1.2, we further assume that:
-
(i)
\(\lim_{n\rightarrow \infty}a_{n}=\lim_{n\rightarrow \infty}\eta_{n}=0\), \(\sum_{n=1}^{\infty}a_{n}=\infty\), \(\liminf_{n\rightarrow \infty}\rho _{n}>0\), and \(\liminf_{n\rightarrow \infty}\beta_{n}>0\);
-
(ii)
\(\lim_{n\rightarrow \infty}\frac{\gamma_{n}}{a_{n}}=t\) for some \(t\geq0\), and \(\{x_{n}\}\) is a bounded sequence.
Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\), where \(\bar{x}:=P_{\Omega}0\).
Proof
Clearly, Ω is a closed and convex subset of \(H_{1}\). Let \(\bar{x}=P_{\Omega}0\). Since \(\liminf_{n\rightarrow \infty }\rho_{n}>0\), we may assume that \(\rho_{n}\geq\rho\) for some \(\rho>0\). Without loss of generality, we may assume that \(x_{n}\neq y_{n}\) for each \(n\in\mathbb{N}\). Take any \(w\in\Omega\) and let w be fixed. Take any \(n\in\mathbb{N}\), and let n be fixed. Since \(w\in\Omega\), we know that \(Aw\in B_{2}^{-1}(0)\). By Lemma 2.2(ii), we know that
By (3.2),
Besides, by Lemma 2.3,
By (3.7), we know that
Here, we set
Then it follows from (1.9) and (3.9) that
and
By (1.7) and (3.11), \(\alpha_{n}\geq\frac{1}{2}\) for each \(n\in\mathbb{N}\). It follows from (1.9) and \(1>2\delta\) that
By (1.6), (1.7), (3.9), and (3.13),
So, \(\{\alpha_{n}\}\) is a bounded sequence. By (3.12) and (3.10),
It follows from (2.3) and (3.15) that
Since \(\lim_{n\rightarrow \infty}a_{n}=0\) and the sequences \(\{\rho_{n}\}\) and \(\{\alpha_{n}\}\) are bounded, we may assume that \(a_{n}\rho_{n}<1-\delta\) and \(0<\alpha_{n}a_{n}\rho_{n}<1\) for each \(n\in\mathbb{N}\). Since \(\{x_{n}\}\) is a bounded sequence, it is easy to see that \(\{A^{*}(I-J_{\beta_{n}}^{B_{2}})Ax_{n}\}\) is a bounded sequence. Then there exists \(M>0\) such that \(\|A^{*}(I-J_{\beta_{n}}^{B_{2}})Ax_{n}\|\leq M\) for each \(n\in\mathbb{N}\).
Since \(\lim_{n\rightarrow \infty}\eta_{n}=0\), there exists \(k\in \mathbb{N}\) such that \(\eta_{n}<1/2\) for each \(n>k\). Let \(M^{*}=\max\{M,\|d_{k}\|\}\). Then \(\|d_{k}\|< 2M^{*}\). Suppose that \(\|d_{n}\|\leq2M^{*}\) for some \(n>k\). Then we have
By induction, we know that \(\|d_{n}\|\leq2M^{*}\) for each \(n\geq k\). So, \(\{d_{n}\}\) is a bounded sequence.
Next, we know that
Hence, it follows from (3.17) and the boundedness of the two sequences \(\{x_{n}\}\) and \(\{d_{n}\}\) that the sequence \(\{y_{n}\}\) is bounded.
Besides, we have
Next, we know that
Further, by Lemma 2.2, we have
This implies that
By (3.22), we also have
That is,
By (3.2) again, we have
This implies that
This implies that
Case 1: there exists a natural number N such that \(\|x_{n+1}-\bar{x}\|\leq\|x_{n}-\bar{x}\|\) for each \(n\geq N\).
Clearly, \(\lim_{n\rightarrow \infty}\|x_{n}-\bar{x}\|\) exists. By (3.15) and \(\lim_{n\rightarrow \infty}\gamma_{n}=0\), we know that
By (3.23), (3.28), (3.29), and \(\lim_{n\rightarrow \infty}\gamma_{n}=0\),
By (3.19), (3.30), and \(\lim_{n\rightarrow \infty}\gamma_{n}=0\),
By (3.31),
By (3.20), (3.32), and \(\lim_{n\rightarrow \infty}a_{n}=0\),
Since \(\liminf_{n\rightarrow \infty}\beta_{n}>0\), we may assume that \(\beta_{n}\geq\beta\) for some \(\beta>0\). By (3.32), (3.34) and Lemma 2.2(iii),
Since \(\{x_{n}\}\) is a bounded sequence, there is a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) and \(z\in H_{1}\) such that \(x_{n_{k}}\rightharpoonup z\) and
It follows from \(x_{n_{k}}\rightharpoonup z\) and (3.35) that \(z\in \operatorname{Fix}(J_{\beta}^{B_{1}})=B_{1}^{-1}(0)\). Besides, since \(x_{n_{k}}\rightharpoonup z\), we have
Then \(Ax_{n_{k}}\rightharpoonup Az\). Similarly, we know that \(Az\in \operatorname{Fix}(J_{\beta}^{B_{2}})=B_{2}^{-1}(0)\). So, \(z\in\Omega\). By (3.36) and Lemma 2.1, we know that
We also have
Hence, it follows from (3.28) and (3.39) that
By (3.27), (3.38), (3.40), and Lemma 2.7, we know that \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\).
Case 2: suppose that there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) such that \(\|x_{n_{i}}-\bar{x}\|\leq\|x_{n_{i}+1}-\bar{x}\|\) for all \(i\in\mathbb{N}\). By Lemma 2.6, there exists a nondecreasing sequence \(\{m_{k}\}\) in \(\mathbb{N}\) such that \(m_{k}\rightarrow \infty\),
for all \(k\in\mathbb{N}\). By (3.26),
for all \(k\in\mathbb{N}\). By (3.41) and (3.42),
for all \(k\in\mathbb{N}\). This implies that
for all \(k\in\mathbb{N}\). By (3.41), and following a similar argument to the above, we know that
By (3.41), (3.44), and (3.45), we obtain the following, which is the conclusion of Theorem 3.1:
Now, for completeness, we show the proof of (3.45).
By (3.15),
By (3.48), we know that
Further,
We also have
Hence, it follows from (3.49) and (3.53) that
By (3.19),
where
By (3.55),
By (3.49) and (3.56), we know that
This implies that
By (3.20), (3.49), and (3.58), we have
Since \(\{x_{m_{k}}\}\) is bounded, there is a subsequence \(\{z_{k}\}\) of \(\{x_{m_{k}}\}\) such that \(z_{k}\rightharpoonup u\) and
By (3.58), (3.59), Lemma 2.2, and Lemma 2.5, we know that \(u\in\Omega\). So, by (3.60) and Lemma 2.1, we know that
Therefore, the proof is completed. □
4 Application: split feasibility problems
Let C and Q be nonempty, closed, and convex subsets of infinite dimensional real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator. The split feasibility problem is the following problem:
$$\text{find } \bar{x}\in C \text{ such that } A\bar{x}\in Q. \quad (\mathrm{SFP}) $$
Let \(\{a_{n}\}\), \(\{\eta_{n}\}\), \(\{\gamma_{n}\}\), and \(\{\rho_{n}\}\) be real sequences. Let δ be a fixed real number. Let \(\Omega_{1}\) be the solution set of problem (SFP).
Algorithm 4.1
- Step 0.:
-
Choose \(x_{1}\in H_{1}\) arbitrarily, set \(r_{1}\in(0,1)\) and \(d_{0}=0\).
- Step 1.:
-
\(d_{n}:=-A^{*}(I-P_{Q})Ax_{n}+\eta_{n} d_{n-1}\).
- Step 2.:
-
For \(n\in\mathbb{N}\), set \(y_{n}\) as
$$ y_{n}=P_{C}\bigl[(1-a_{n} \rho_{n})x_{n}-\rho_{n}A^{*}(I-P_{Q})Ax_{n}+ \gamma_{n} d_{n}\bigr], $$(4.1)where \(\rho_{n}>0\) satisfies
$$ \rho_{n}\bigl\Vert A^{*}(I-P_{Q})Ax_{n}-A^{*}(I-P_{Q})Ay_{n} \bigr\Vert \leq\delta\|x_{n}-y_{n}\|,\quad 0< \delta< 1. $$(4.2)
- Step 3.:
-
If \(x_{n}=y_{n}\), then set \(n:=n+1\) and go to Step 1. Otherwise, go to Step 4.
- Step 4.:
-
The new iterative \(x_{n+1}\) is defined by
$$ x_{n+1}=P_{C}\bigl[x_{n}- \alpha_{n} D(x_{n},\rho_{n})\bigr], $$(4.3)where
$$\begin{aligned}& D(x_{n},\rho_{n}):=x_{n}-y_{n}+ \rho_{n}\bigl[A^{*}(I-P_{Q})Ay_{n}-A^{*}(I-P_{Q})Ax_{n} \bigr], \end{aligned}$$(4.4)$$\begin{aligned}& \alpha_{n}:=\frac{\langle x_{n}-y_{n},D(x_{n},\rho_{n})\rangle}{\|D(x_{n},\rho_{n})\|^{2}}. \end{aligned}$$(4.5)Then update \(n:=n+1\) and go to Step 1.
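A toy run of Algorithm 4.1 (hypothetical instance, not from the paper): take \(C=Q=[0,1]^{2}\) and \(A=\operatorname{diag}(1,2)\), so that \(\Omega_{1}=\{x\in C:Ax\in Q\}=[0,1]\times[0,1/2]\), with the same parameter pattern as in the sketch of Algorithm 1.2:

```python
import numpy as np

# Hypothetical SFP instance: C = Q = [0,1]^2, A = diag(1,2), so
# Omega_1 = [0,1] x [0,1/2]; projections onto boxes are clips.
A = np.diag([1.0, 2.0])
P_C = lambda x: np.clip(x, 0.0, 1.0)
P_Q = lambda y: np.clip(y, 0.0, 1.0)
T = lambda x: A.T @ (A @ x - P_Q(A @ x))      # A*(I - P_Q)A

delta = 0.4
rho = delta / np.linalg.norm(A, 2) ** 2       # satisfies step-size rule (4.2)

x, d = np.array([3.0, -2.0]), np.zeros(2)
for n in range(1, 501):
    a_n, eta_n, gamma_n = 1 / (n + 1), 1 / (n + 1), 1 / (n + 1) ** 2
    d = -T(x) + eta_n * d                                   # Step 1
    y = P_C((1 - a_n * rho) * x - rho * T(x) + gamma_n * d) # Step 2, (4.1)
    if np.allclose(x, y):                                   # Step 3
        continue
    D = x - y + rho * (T(y) - T(x))                         # (4.4)
    alpha = np.dot(x - y, D) / np.dot(D, D)                 # (4.5)
    x = P_C(x - alpha * D)                                  # Step 4, (4.3)
# x is (approximately) feasible: x in C and Ax in Q
```

After a few hundred iterations the iterate is feasible for (SFP) and drifts slowly toward the minimal norm solution, as Theorem 4.1 predicts.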
Since the metric projection \(P_{C}\) is the resolvent of the subdifferential of the indicator function of C, following the same argument as in Theorem 3.1, we can get the following strong convergence theorem of the proposed algorithm for the split feasibility problem.
Theorem 4.1
Let C and Q be nonempty, closed, and convex subsets of infinite dimensional real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator. Let \(\{a_{n}\}\), \(\{\eta_{n}\}\), \(\{\gamma_{n}\}\) be sequences in \([0,1]\). Choose \(\delta\in(0,1/2)\), and let \(\{\rho_{n}\}\) be a sequence in \((0,\min\{\frac{\delta}{\|A\|^{2}},\frac{2}{\|A\|^{2}+2}\})\). Let \(\Omega_{1}\) be the solution set of problem (SFP) and assume that \(\Omega_{1}\neq\emptyset\). For the sequence \(\{x_{n}\}\) in Algorithm 4.1, we further assume that:
-
(i)
\(\lim_{n\rightarrow \infty}a_{n}=\lim_{n\rightarrow \infty}\eta_{n}=0\), \(\sum_{n=1}^{\infty}a_{n}=\infty\), \(\liminf_{n\rightarrow \infty}\rho_{n}>0\);
-
(ii)
\(\lim_{n\rightarrow \infty}\frac{\gamma_{n}}{a_{n}}=t\) for some \(t\geq0\), and \(\{x_{n}\}\) is a bounded sequence.
Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\), where \(\bar{x}:=P_{\Omega_{1}}0\).
5 Numerical results for (SFVIP)
All codes were written in the R language (version 2.15.2, 2012-10-26), and all numerical results were run on an ASUS All-in-One PC with an i3-2100 CPU.
Set \(u=(1,1)\), \(\beta_{1}=1\), \(\beta_{n}=1+\frac{1}{n-1}\) for \(n\geq2\), \(\eta_{n}=\frac{1}{n+1}\), \(a_{n}=\frac{1}{n+1}\), \(\gamma_{1}=1\), \(\gamma_{n}=\frac{1}{n-1}\) for \(n\geq2\), and \(\beta=1\). Let \(\varepsilon>0\), and stop the algorithm when \(\|x_{n-1}-x_{n}\|<\varepsilon\).
Example 5.1
Let A and \(B_{1},B_{2}:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\) be defined by
Find a point \(\bar{x}=(\bar{x}_{1},\bar{x}_{2})^{\top}\in\mathbb{R}^{2}\) such that \(B_{1}(\bar{x})=(0,0)^{\top}\) and \(B_{2}(A\bar{x})=(0,0)^{\top}\). Indeed, \(\bar{x}_{1}=1\) and \(\bar{x}_{2}=0\).
Example 5.2
Let \(B_{1}\) and \(B_{2}\) be the same as in Example 5.1. Let
Find a point \(\bar{x}=(\bar{x}_{1},\bar{x}_{2})^{\top}\in\mathbb{R}^{2}\) such that \(B_{1}(\bar{x})=(0,0)^{\top}\) and \(B_{2}(A\bar{x})=(0,0)^{\top}\). Indeed, \(\bar{x}_{1}=0.5\) and \(\bar{x}_{2}=-0.5\).
Example 5.3
Let \(B_{1}:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\), \(B_{2}:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}\) be defined by
Find a point \(\bar{x}=(\bar{x}_{1},\bar{x}_{2})^{\top}\in\mathbb{R}^{2}\) such that \(B_{1}(\bar{x})=(0,0)^{\top}\) and \(B_{2}(A\bar{x})=(0,0,0)^{\top}\). Indeed, \(\bar{x}_{1}=1.5\) and \(\bar {x}_{2}=-0.5\).
For the above examples, we give the numerical results (see Tables 1-3) for the proposed algorithm and related algorithms.
References
Martinet, B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 4(R3), 154-158 (1970)
Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
Moudafi, A: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275-283 (2011)
Byrne, C, Censor, Y, Gibali, A, Reich, S: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759-775 (2011)
Chuang, CS: Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 350 (2013)
Bnouhachem, A, Noor, MA, Khalfaoui, M, Zhaohan, S: On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 54, 627-639 (2012)
Nocedal, J, Wright, SJ: Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (1999)
Takahashi, W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)
Browder, FE: Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 53, 1272-1276 (1965)
Maingé, PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899-912 (2008)
Aoyama, K, Kimura, Y, Takahashi, W, Toyoda, M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 67, 2350-2360 (2007)
Iiduka, H: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22, 862-878 (2012)
Iiduka, H: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1-26 (2013)
Blatt, D, Hero, A, Gauchman, H: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18, 29-51 (2007)
Acknowledgements
This research was supported by the Ministry of Science and Technology of the Republic of China.
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Both authors contributed equally and significantly in writing this paper. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Chuang, CS., Lin, IJ. New strong convergence theorems for split variational inclusion problems in Hilbert spaces. J Inequal Appl 2015, 176 (2015). https://doi.org/10.1186/s13660-015-0697-1
Keywords
- split variational inclusion problem
- maximal monotone mapping
- split feasibility problem
- resolvent mapping
- conjugate gradient method