Open Access

New strong convergence theorems for split variational inclusion problems in Hilbert spaces

Journal of Inequalities and Applications 2015, 2015:176

https://doi.org/10.1186/s13660-015-0697-1

Received: 26 February 2015

Accepted: 17 May 2015

Published: 3 June 2015

Abstract

The split variational inclusion problem is an important problem, and it is a generalization of the split feasibility problem. In this paper, we present a descent-conjugate gradient algorithm for the split variational inclusion problems in Hilbert spaces. Next, a strong convergence theorem of the proposed algorithm is proved under suitable conditions. As an application, we give a new strong convergence theorem for the split feasibility problem in Hilbert spaces. Finally, we give numerical results for split variational inclusion problems to demonstrate the efficiency of the proposed algorithm.

Keywords

split variational inclusion problem; maximal monotone mapping; split feasibility problem; resolvent mapping; conjugate gradient method

1 Introduction

Let H be a real Hilbert space, and \(B:H\multimap H\) be a set-valued mapping with domain \(\mathcal{D}(B):=\{x\in H:B(x)\neq\emptyset\}\). Recall that B is called monotone if \(\langle u-v,x-y\rangle\geq0\) for any \(x,y\in\mathcal{D}(B)\), \(u\in Bx\), and \(v\in By\); B is maximal monotone if its graph \(\{(x,y):x\in\mathcal{D}(B), y\in Bx\}\) is not properly contained in the graph of any other monotone mapping. An important problem for set-valued monotone mappings is to find \(\bar{x}\in H\) such that \(0\in B\bar{x}\); such an \(\bar{x}\) is called a zero point of B. A well-known method for approximating a zero point of a maximal monotone mapping in a real Hilbert space is the proximal point algorithm, first introduced by Martinet [1] and further developed by Rockafellar [2]. This iterative procedure generates \(\{x_{n}\}\) by \(x_{1}=x\in H\) and
$$ x_{n+1}=J_{\beta_{n}}^{B} x_{n}, \quad n\in\mathbb{N}, $$
(1.1)
where \(\{\beta_{n}\}\subseteq (0,\infty)\), B is a maximal monotone mapping in a real Hilbert space, and \(J_{r}^{B}\) is the resolvent mapping of B defined by \(J_{r}^{B}=(I+r B)^{-1}\) for each \(r>0\). In 1976, Rockafellar [2] proved the following in the Hilbert space setting: If the solution set \(B^{-1}(0)\) is nonempty and \(\liminf_{n\rightarrow \infty}\beta _{n}>0\), then the sequence \(\{x_{n}\}\) in (1.1) converges weakly to an element of \(B^{-1}(0)\). In particular, if B is the subdifferential ∂f of a proper lower semicontinuous and convex function \(f:H\rightarrow \mathbb{R}\), then (1.1) is reduced to
$$ x_{n+1}=\arg\min_{y\in H}\biggl\{ f(y)+ \frac{1}{2\beta _{n}}\|y-x_{n}\|^{2}\biggr\} , \quad n\in \mathbb{N}. $$
(1.2)
In this case, \(\{x_{n}\}\) converges weakly to a minimizer of f.
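In \(H=\mathbb{R}\) with \(f(x)=|x|\), the proximal step in (1.2) has the closed form of soft-thresholding, so the iteration can be sketched numerically as follows. This is a toy illustration of ours; the theorem itself concerns general maximal monotone mappings in Hilbert spaces.

```python
# Proximal point iteration (1.2) sketched in H = R for f(x) = |x|.
# The proximal step argmin_y |y| + (1/(2*beta)) * (y - x)**2 is the
# soft-thresholding map, i.e. the resolvent of the subdifferential of |.|.

def soft_threshold(x, beta):
    """Resolvent (I + beta * d|.|)^{-1} applied at x."""
    if x > beta:
        return x - beta
    if x < -beta:
        return x + beta
    return 0.0

def proximal_point(x0, betas):
    """Run (1.2) with step sizes beta_n from the list `betas`."""
    x = x0
    for beta in betas:
        x = soft_threshold(x, beta)
    return x

x = proximal_point(10.0, [1.0] * 20)
# x approaches the minimizer 0 of f(x) = |x|
```

With \(\beta_{n}\equiv1\) the iterate decreases by 1 per step until it reaches the minimizer 0, consistent with the weak (here: ordinary) convergence asserted above.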
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and \(A^{*}\) be the adjoint of A. Let \(f:H_{1}\rightarrow H_{1}\) and \(g:H_{2}\rightarrow H_{2}\) be two proper lower semicontinuous, and convex functions. In 2011, Moudafi [3] presented the following general split variational inclusion problem:
$$ \mbox{Find }\bar{x}\in H_{1}\mbox{ such that }0\in f( \bar {x})+B_{1}(\bar{x}) \mbox{ and }0\in g(A\bar{x})+B_{2}(A \bar{x}). $$
(GSFVIP)
Clearly, problem (GSFVIP) is a generalization of variational inclusion problems as well as of the split feasibility problem. Hence, it is important to study split variational inclusion problems in Hilbert spaces.
For problem (GSFVIP), Moudafi [3] gave the following algorithm and a weak convergence theorem under suitable conditions:
$$x_{n+1}:=J_{\lambda}^{B_{1}}(I-\lambda f) \bigl(x_{n}+\gamma A^{*}\bigl(J_{\lambda }^{B_{2}}(I-\lambda g)-I\bigr)Ax_{n}\bigr). $$
It is worth noting that λ and γ are fixed numbers in this algorithm, and only weak convergence is obtained. Hence, it is important to establish generalized iteration processes and strong convergence theorems for problem (SFVIP).
In this paper, we consider the following split variational inclusion problems in Hilbert spaces:
$$ \mbox{Find }\bar{x}\in H_{1}\mbox{ such that }0\in B_{1}(\bar{x}) \mbox{ and }0\in B_{2}(A\bar{x}). $$
(SFVIP)

In 2011, Byrne et al. [4] gave the following two convergence theorems for split variational inclusion problems.

First, from the viewpoint of fixed point algorithms, the algorithm given in Theorem 1.1 can be seen as a Picard iteration method.

Theorem 1.1

[4]

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and let \(A^{*}\) denote the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings. Let \(\beta>0\) and \(\rho\in(0,\frac {2}{\|A\|^{2}})\). Let Ω be the solution set of (SFVIP) and suppose that \(\Omega\neq\emptyset\). Let \(\{x_{n}\}\) be defined by
$$x_{n+1}:=J_{\beta}^{B_{1}}\bigl[x_{n}-\rho A^{*} \bigl(I-J_{\beta}^{B_{2}}\bigr)Ax_{n}\bigr] $$
for each \(n\in\mathbb{N}\). Then \(\{x_{n}\}\) converges weakly to an element \(\bar{x}\in\Omega\).
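Theorem 1.1 can be illustrated in \(\mathbb{R}\) by taking \(B_{1}\), \(B_{2}\) to be the normal cones of the intervals \(C=[1,2]\) and \(Q=[2,6]\) and \(A\) the scalar 2, so that each resolvent reduces to a metric projection and (SFVIP) becomes a split feasibility problem. All concrete choices below are ours, not from [4].

```python
# Picard iteration of Theorem 1.1 sketched in R with A = 2 and
# B1, B2 the normal cones of C = [1, 2] and Q = [2, 6]; then
# J_beta^{B1} = P_C and J_beta^{B2} = P_Q for every beta > 0.

def proj(lo, hi, x):
    """Metric projection of x onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def picard_step(x, A, rho):
    # x_{n+1} = J_beta^{B1}[x_n - rho * A^*(I - J_beta^{B2}) A x_n]
    residual = A * x - proj(2.0, 6.0, A * x)      # (I - P_Q) A x
    return proj(1.0, 2.0, x - rho * A * residual)

A, rho = 2.0, 0.4        # rho in (0, 2/||A||^2) = (0, 0.5)
x = 10.0
for _ in range(50):
    x = picard_step(x, A, rho)
# x lies in the solution set: x in C and A*x in Q
```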

Next, from the viewpoint of fixed point algorithms, the algorithm given in Theorem 1.2 can be seen as Halpern's iteration method.

Theorem 1.2

[4]

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and let \(A^{*}\) denote the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings. Let \(\{a_{n}\}\) be a sequence of real numbers in \([0,1]\) and let \(\beta>0\). Let \(u\in H_{1}\) be fixed and let \(\rho\in(0,\frac{2}{\|A\|^{2}})\). Let Ω be the solution set of (SFVIP) and suppose that \(\Omega\neq\emptyset\). Let \(\{x_{n}\}\) be defined by
$$x_{n+1}:=a_{n} u+(1-a_{n}) J_{\beta}^{B_{1}} \bigl[x_{n}-\rho A^{*}\bigl(I-J_{\beta}^{B_{2}} \bigr)Ax_{n}\bigr] $$
for each \(n\in\mathbb{N}\). Assume that \(\lim_{n\rightarrow \infty}a_{n}=0\), \(\sum_{n=1}^{\infty}a_{n}=\infty\). Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\) for some \(\bar{x}\in \Omega\).

Remark 1.1

In Theorems 1.1 and 1.2, we know that β and ρ are fixed numbers.

In 2013, Chuang [5] gave the following two convergence theorems for problem (SFVIP). Indeed, from the viewpoint of fixed point algorithms, the algorithm given in Theorem 1.3 can be seen as a Halpern-Mann type iteration method.

Theorem 1.3

[5]

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and let \(A^{*}\) denote the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings. Let \(\{a_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\) be sequences of real numbers in \([0,1]\) with \(a_{n}+b_{n}+c_{n}=1\) and \(0< a_{n}<1\) for each \(n\in\mathbb{N}\). Let \(\{\beta_{n}\}\) be a sequence in \((0,\infty)\). Let \(u\in H_{1}\) be fixed. Let \(\{\rho_{n}\}\) be a sequence in \((0,\frac{2}{\|A\|^{2}+1})\). Let Ω be the solution set of (SFVIP) and suppose that \(\Omega\neq\emptyset\). Let \(\{x_{n}\}\) be defined by
$$x_{n+1}:=a_{n} u+b_{n} x_{n}+c_{n} J_{\beta_{n}}^{B_{1}}\bigl[x_{n}-\rho_{n}A^{*} \bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n}\bigr] $$
for each \(n\in\mathbb{N}\). Assume that \(\lim_{n\rightarrow \infty}a_{n}=0\), \(\sum_{n=1}^{\infty}a_{n}=\infty\), \(\liminf_{n\rightarrow \infty}c_{n}\rho_{n}>0\), \(\liminf_{n\rightarrow \infty}b_{n}c_{n}>0\), and \(\liminf_{n\rightarrow \infty}\beta_{n}>0\). Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\), where \(\bar {x}=P_{\Omega}u\).

Besides, the algorithm in Theorem 1.4 comes from optimization theory and the Tikhonov regularization method.

Theorem 1.4

[5]

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear and bounded operator, and let \(A^{*}\) denote the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be two set-valued maximal monotone mappings. Let \(\{\beta_{n}\}\) be a sequence in \((0,\infty)\), \(\{a_{n}\}\) be a sequence in \((0,1)\), and \(\{\rho_{n}\}\) be a sequence in \((0,2/(\|A\|^{2}+2))\). Let Ω be the solution set of (SFVIP) and suppose that \(\Omega\neq\emptyset\). Let \(\{x_{n}\}\) be defined by
$$x_{n+1}:=J_{\beta_{n}}^{B_{1}}\bigl[(1-a_{n} \rho_{n})x_{n}-\rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}} \bigr)Ax_{n}\bigr] $$
for each \(n\in\mathbb{N}\). Assume that \(\lim_{n\rightarrow \infty}a_{n}=0\), \(\sum _{n=1}^{\infty}a_{n}\rho_{n}=\infty\), \(\liminf_{n\rightarrow \infty}\rho_{n}>0\) and \(\liminf_{n\rightarrow \infty}\beta_{n}>0\). Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\), where \(\bar {x}=P_{\Omega}0\), i.e., \(\bar{x}\) is the minimal norm solution of (SFVIP).

Further, we also observe that Bnouhachem et al. [6] proposed the following descent-projection algorithm for the split feasibility problem.

Let \(A:\mathbb{R}^{n}\rightarrow \mathbb{R}^{m}\) be a bounded linear operator, and \(A^{*}\) be the adjoint of A. Let \(f:\mathbb{R}^{n}\rightarrow (-\infty,\infty]\) and \(g:\mathbb {R}^{m}\rightarrow (-\infty,\infty]\) be two proper, lower semicontinuous, and convex functions. Let \(\{\rho_{k}\}\) be a sequence of positive real numbers.

Algorithm 1.1

For given \(x_{k}\in\mathbb{R}^{n}\), compute the next iterate by the following process.
Step 1.: 
For \(k\in\mathbb{N}\), let \(C_{k}\) and \(Q_{k}\) be
$$\left \{ \begin{array}{l} C_{k}:=\{u\in\mathbb{R}^{n}: f(x_{k})+\langle u_{k},u-x_{k}\rangle\leq0\}, \\ Q_{k}:=\{v\in\mathbb{R}^{m}: g(Ax_{k})+\langle v_{k},v-Ax_{k}\rangle\leq0\}, \end{array} \right . $$
where \(u_{k}\in\partial f(x_{k})\) and \(v_{k}\in\partial g(Ax_{k})\).
Step 2.: 
\(y_{k}=P_{C_{k}}[x_{k}-\rho_{k}A^{\top}(I-P_{Q_{k}})Ax_{k}]\), where \(\rho_{k}>0\) satisfies
$$\rho_{k}\bigl\Vert A^{\top}(I-P_{Q_{k}})Ax_{k}-A^{\top}(I-P_{Q_{k}})Ay_{k} \bigr\Vert \leq\delta \|x_{k}-y_{k}\|,\quad 0< \delta< 1. $$
Step 3.: 

If \(y_{k}=x_{k}\), then stop. Otherwise, go to Step 4.

Step 4.: 
The new iterate \(x_{k+1}\) is defined by
$$x_{k+1}=P_{C_{k}}\bigl[x_{k}-\alpha_{k} d(x_{k},\rho_{k})\bigr], $$
where
$$\left \{ \begin{array}{l} d(x_{k},\rho_{k}):=x_{k}-y_{k}+\rho_{k}A^{\top}(I-P_{Q_{k}})Ay_{k}, \\ \varepsilon_{k}:=\rho_{k}[A^{\top}(I-P_{Q_{k}})Ay_{k}-A^{\top}(I-P_{Q_{k}})Ax_{k}], \\ D(x_{k},\rho_{k}):=x_{k}-y_{k}+\varepsilon_{k}, \\ \phi(x_{k},\rho_{k}):=\langle x_{k}-y_{k},D(x_{k},\rho_{k})\rangle, \\ \alpha_{k}:=\frac{\phi(x_{k},\rho_{k})}{\|d(x_{k},\rho_{k})\|^{2}}. \end{array} \right . $$
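A minimal sketch of Algorithm 1.1 in \(\mathbb{R}\), with the affine choices \(f(x)=x-1\), \(g(y)=y-4\), and \(A\) the scalar 2 (our choices, not from [6]). For affine f and g the half-spaces \(C_{k}\) and \(Q_{k}\) coincide with \(C=\{f\leq0\}\) and \(Q=\{g\leq0\}\), so both projections are simple caps.

```python
# Algorithm 1.1 sketched in R for f(x) = x - 1, g(y) = y - 4, A = 2.
# Here C_k = {u <= 1} and Q_k = {v <= 4} for every k, so P_{C_k} and
# P_{Q_k} are the "cap" maps below. The fixed step rho = 0.1 satisfies
# the inequality of Step 2 with delta = rho * ||A||^2 = 0.4 < 1.

P_C = lambda u: min(u, 1.0)               # projection onto C_k = {u <= 1}
P_Q = lambda v: min(v, 4.0)               # projection onto Q_k = {v <= 4}

A = 2.0
T = lambda x: A * (A * x - P_Q(A * x))    # A^T (I - P_{Q_k}) A x
rho = 0.1

x = 5.0
for _ in range(100):
    y = P_C(x - rho * T(x))               # Step 2
    if y == x:                            # Step 3: stop
        break
    d = x - y + rho * T(y)                # Step 4: d(x_k, rho_k)
    eps = rho * (T(y) - T(x))             # epsilon_k
    D = x - y + eps                       # D(x_k, rho_k)
    phi = (x - y) * D                     # phi(x_k, rho_k)
    alpha = phi / d ** 2                  # alpha_k
    x = P_C(x - alpha * d)
# x solves the split feasibility problem: x <= 1 and A*x <= 4
```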

Let \(H_{1}\) and \(H_{2}\) be infinite dimensional Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator, and \(A^{*}\) be the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be set-valued maximal monotone mappings. Let \(\{a_{n}\}\), \(\{\eta_{n}\}\), \(\{\gamma_{n}\}\), and \(\{\rho_{n}\}\) be real sequences, and let δ be a fixed real number. Let Ω be the solution set of problem (SFVIP). In this paper, motivated by the above works and related results, we present the following conjugate gradient type algorithm for split variational inclusion problems in Hilbert spaces.

Motivated by Algorithm 1.1 and the above results, we want to give a strong convergence theorem in infinite dimensional real Hilbert spaces. (Of course, numerical experiments on a computer can only be carried out in finite dimensional spaces.) We also want the convergence rate of the proposed algorithm to be faster than those of the above algorithms. Hence, we give the following algorithm with a conjugate gradient direction. Our numerical results show that this algorithm is very fast under suitable conditions.

Algorithm 1.2

Step 0.: 

Choose \(x_{1}\in H_{1}\) arbitrarily, set \(r_{1}\in(0,1)\) and \(d_{0}=0\).

Step 1.: 

\(d_{n}:=-A^{*}(I-J^{B_{2}}_{\beta_{n}})Ax_{n}+\eta_{n} d_{n-1}\).

Step 2.: 
For \(n\in\mathbb{N}\), set \(y_{n}\) as
$$ y_{n}=J^{B_{1}}_{\beta_{n}} \bigl[(1-a_{n}\rho_{n})x_{n}-\rho_{n}A^{*} \bigl(I-J^{B_{2}}_{\beta _{n}}\bigr)Ax_{n}+\gamma_{n} d_{n}\bigr], $$
(1.3)
where \(\rho_{n}>0\) satisfies
$$ \rho_{n}\bigl\Vert A^{*}\bigl(I-J^{B_{2}}_{\beta_{n}} \bigr)Ax_{n}-A^{*}\bigl(I-J^{B_{2}}_{\beta _{n}} \bigr)Ay_{n}\bigr\Vert \leq\delta\|x_{n}-y_{n}\|, \quad 0< \delta< 1. $$
(1.4)
Step 3.: 

If \(x_{n}=y_{n}\), then set \(n:=n+1\) and go to Step 1. Otherwise, go to Step 4.

Step 4.: 
The new iterate \(x_{n+1}\) is defined by
$$ x_{n+1}=J^{B_{1}}_{\beta_{n}} \bigl[x_{n}-\alpha_{n} D(x_{n},\rho_{n}) \bigr], $$
(1.5)
where
$$\begin{aligned}& D(x_{n},\rho_{n}):=x_{n}-y_{n}+ \rho_{n}\bigl[A^{*}\bigl(I-J^{B_{2}}_{\beta _{n}} \bigr)Ay_{n}-A^{*}\bigl(I-J^{B_{2}}_{\beta_{n}} \bigr)Ax_{n}\bigr], \end{aligned}$$
(1.6)
$$\begin{aligned}& \alpha_{n}:=\frac{\langle x_{n}-y_{n},D(x_{n},\rho_{n})\rangle}{\|D(x_{n},\rho_{n})\|^{2}}. \end{aligned}$$
(1.7)
Then update \(n:=n+1\) and go to Step 1.
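Algorithm 1.2 can be sketched in \(\mathbb{R}\) by taking \(B_{1}\), \(B_{2}\) to be the normal cones of \(C=[1,2]\) and \(Q=[2,6]\) and \(A\) the scalar 2, so that both resolvents reduce to interval projections. All parameter choices below are ours and are only meant to satisfy the hypotheses of Theorem 3.1.

```python
# Algorithm 1.2 sketched in R with A = 2 and resolvents J^{B1} = P_C,
# J^{B2} = P_Q for C = [1, 2], Q = [2, 6] (B1, B2 normal cones).
# The fixed rho = delta / ||A||^2 makes (1.4) hold by Remark 1.2(3).

def proj(lo, hi, x):
    return min(max(x, lo), hi)

A = 2.0
J1 = lambda u: proj(1.0, 2.0, u)          # resolvent of B1
J2 = lambda v: proj(2.0, 6.0, v)          # resolvent of B2
T = lambda u: A * (A * u - J2(A * u))     # A^*(I - J^{B2}) A u

delta = 0.4                               # delta in (0, 1/2)
rho = delta / A ** 2
x, d = 10.0, 0.0                          # Step 0
for n in range(1, 200):
    a_n, eta_n = 1.0 / n, 1.0 / n ** 2
    gamma_n = a_n / n                     # gamma_n / a_n -> t = 0
    d = -T(x) + eta_n * d                 # Step 1: conjugate direction
    y = J1((1 - a_n * rho) * x - rho * T(x) + gamma_n * d)  # Step 2, (1.3)
    if x == y:                            # Step 3
        continue
    D = x - y + rho * (T(y) - T(x))       # Step 4, (1.6)
    alpha = (x - y) * D / D ** 2          # (1.7), scalar inner product
    x = J1(x - alpha * D)                 # (1.5)
# x approximates the minimal norm solution of this toy (SFVIP)
```

In this toy setting the iterates reach 1, which is \(P_{\Omega}0\), in agreement with the conclusion of Theorem 3.1.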

Remark 1.2

  1. (1)

    It is worth noting that \(d_{n}\) is defined by using the idea of the so-called conjugate gradient direction ([7], Chapter 5). Further, it is natural to assume that \(\{x_{n}\}\) is a bounded sequence for the convergence theorems with the conjugate gradient direction method.

     
  2. (2)
    If we set
    $$ \varepsilon_{n}:=\rho_{n}\bigl[A^{*} \bigl(I-J^{B_{2}}_{\beta _{n}}\bigr)Ay_{n}-A^{*} \bigl(I-J^{B_{2}}_{\beta_{n}}\bigr)Ax_{n}\bigr], $$
    (1.8)
    then it follows from (1.4) and (1.8) that
    $$ \bigl\vert \langle x_{n}-y_{n}, \varepsilon_{n}\rangle\bigr\vert \leq\|x_{n}-y_{n} \|\cdot\|\varepsilon_{n}\| \leq\delta\|x_{n}-y_{n}\| \cdot\|x_{n}-y_{n}\|=\delta\|x_{n}-y_{n} \|^{2}. $$
    (1.9)
     
  3. (3)

    If we choose \(\rho_{n}\) such that \(0<\rho_{n}\leq\frac{\delta}{\|A\|\cdot \|A^{*}\|}=\frac{\delta}{\|A\|^{2}}\), then (1.4) holds.

     
  4. (4)

    In our convergence theorem, we may assume that \(x_{n}\neq y_{n}\) for each \(n\in\mathbb{N}\) by the assumptions on the sequence \(\{a_{n}\}\).

     

Next, a strong convergence theorem of the proposed algorithm is proved under suitable conditions. As an application, we give a descent-projection-conjugate gradient algorithm and a strong convergence theorem for the split feasibility problem. Finally, we give numerical results to demonstrate the efficiency of the proposed algorithm.

2 Preliminaries

Let H be a real Hilbert space with inner product \(\langle\cdot,\cdot\rangle\) and norm \(\|\cdot\|\). We denote the strong convergence and the weak convergence of \(\{x_{n}\}\) to \(x\in H\) by \(x_{n}\rightarrow x\) and \(x_{n}\rightharpoonup x\), respectively. From [8, 9], for each \(x,y,u,v\in H\), we have
$$\begin{aligned}& \|x+y\|^{2}\leq\|x\|^{2}+2\langle y,x+y \rangle; \end{aligned}$$
(2.1)
$$\begin{aligned}& \|x+y\|^{2}=\|x\|^{2}+2\langle x,y\rangle+\|y \|^{2}; \end{aligned}$$
(2.2)
$$\begin{aligned}& 2\langle x-y,u-v\rangle=\|x-v\|^{2}+\|y-u \|^{2}-\|x-u\|^{2}-\|y-v\|^{2}. \end{aligned}$$
(2.3)

Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let \(T:C\rightarrow H\) be a mapping. Let \(\operatorname{Fix}(T):=\{x\in C: Tx=x\}\). Then T is said to be a nonexpansive mapping if \(\|Tx-Ty\|\leq\|x-y\|\) for every \(x,y\in C\). T is said to be a quasi-nonexpansive mapping if \(\operatorname{Fix}(T)\neq\emptyset\) and \(\|Tx-y\|\leq\|x-y\|\) for every \(x\in C\) and \(y\in \operatorname{Fix}(T)\). It is easy to see that \(\operatorname{Fix}(T)\) is a closed convex subset of C if T is a quasi-nonexpansive mapping. Besides, T is said to be a firmly nonexpansive mapping if \(\|Tx-Ty\|^{2}\leq\langle x-y,Tx-Ty\rangle\) for every \(x,y\in C\), that is, \(\|Tx-Ty\|^{2}\leq \|x-y\|^{2}-\|(I-T)x-(I-T)y\|^{2}\) for every \(x,y\in C\).

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Then for each \(x\in H\), there is a unique element \(\bar{x}\in C\) such that
$$\|x-\bar{x}\|=\min_{y\in C}\|x-y\|. $$
Here, set \(P_{C}x=\bar{x}\), and \(P_{C}\) is called the metric projection from H onto C.

Lemma 2.1

[8]

Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let \(P_{C}\) be the metric projection from H onto C. Then \(\langle x-P_{C}x,P_{C}x-y\rangle\geq0\) for all \(x\in H\) and \(y\in C\).
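The metric projection onto a closed interval of \(\mathbb{R}\) has a closed form, and the characterizing inequality of Lemma 2.1 can be spot-checked numerically; the interval, the point x, and the test points below are our choices.

```python
# Metric projection onto C = [lo, hi] in H = R, with a numerical check
# of Lemma 2.1: <x - P_C(x), P_C(x) - y> >= 0 for every y in C.

def metric_projection(lo, hi, x):
    """Nearest point of the interval [lo, hi] to x."""
    return min(max(x, lo), hi)

lo, hi, x = 1.0, 2.0, 5.0
px = metric_projection(lo, hi, x)        # nearest point of C to x
ok = all((x - px) * (px - y) >= 0.0 for y in (1.0, 1.3, 1.7, 2.0))
```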

Lemma 2.2

Let H be a real Hilbert space. Let \(B:H\multimap H\) be a set-valued maximal monotone mapping, \(\beta>0\), and let \(J_{\beta}^{B}\) be defined by \(J_{\beta}^{B}:=(I+\beta B)^{-1}\) (\(J_{\beta}^{B}\) is called resolvent mapping). Then the following are satisfied:
  1. (i)

    for each \(\beta>0\), \(J_{\beta}^{B}\) is a single-valued and firmly nonexpansive mapping;

     
  2. (ii)

    \(\mathcal{D}(J_{\beta}^{B})=H\) and \(\operatorname{Fix}(J_{\beta}^{B})=\{x\in \mathcal{D}(B):0\in Bx\}\);

     
  3. (iii)

    \(\|x-J_{\beta}^{B} x\|\leq\|x-J_{\gamma}^{B} x\|\) for all \(0<\beta\leq\gamma\) and for all \(x\in H\);

     
  4. (iv)

    \((I-J_{\beta}^{B})\) is a firmly nonexpansive mapping for each \(\beta>0\);

     
  5. (v)

    suppose that \(B^{-1}(0)\neq\emptyset\), then \(\|x-J_{\beta}^{B} x\|^{2}+\|J_{\beta}^{B} x-\bar{x}\|^{2}\leq\|x-\bar{x}\|^{2}\) for each \(x\in H\), each \(\bar{x}\in B^{-1}(0)\), and each \(\beta>0\);

     
  6. (vi)

    suppose that \(B^{-1}(0)\neq\emptyset\), then \(\langle x-J_{\beta}^{B} x,J_{\beta}^{B} x-w\rangle\geq0\) for each \(x\in H\) and each \(w\in B^{-1}(0)\), and each \(\beta>0\).

     
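For \(B=\partial f\) with \(f(x)=|x|\) on \(H=\mathbb{R}\), the resolvent \(J_{\beta}^{B}\) is the soft-thresholding map, and properties (i) and (iii) of Lemma 2.2 can be spot-checked numerically; the sample points below are our choices.

```python
# Lemma 2.2 illustrated in H = R for B = the subdifferential of |.|,
# whose resolvent J_beta^B = (I + beta*B)^{-1} is soft-thresholding.

def resolvent(beta, x):
    """(I + beta * d|.|)^{-1}(x), i.e. soft-thresholding at level beta."""
    if x > beta:
        return x - beta
    if x < -beta:
        return x + beta
    return 0.0

pairs = [(3.0, -2.0), (0.5, 4.0), (-1.0, -0.2)]
beta = 1.0
# (i) firm nonexpansiveness: |Jx - Jy|^2 <= <x - y, Jx - Jy>
firmly = all(
    (resolvent(beta, x) - resolvent(beta, y)) ** 2
    <= (x - y) * (resolvent(beta, x) - resolvent(beta, y)) + 1e-12
    for x, y in pairs
)
# (iii) |x - J_beta x| <= |x - J_gamma x| for 0 < beta <= gamma
monotone = all(
    abs(x - resolvent(1.0, x)) <= abs(x - resolvent(2.0, x)) + 1e-12
    for x, _ in pairs
)
```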

Lemma 2.3

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear operator, and \(A^{*}\) be the adjoint of A, and let \(\beta>0\) be fixed. Let \(B:H_{2}\multimap H_{2}\) be a set-valued maximal monotone mapping, and let \(J_{\beta}^{B}\) be a resolvent mapping of B. Let \(T:H_{1}\rightarrow H_{1}\) be defined by \(Tx:=A^{*}(I-J_{\beta}^{B})Ax\) for each \(x\in H_{1}\). Then
  1. (i)

    \(\|(I-J_{\beta}^{B})Ax-(I-J_{\beta}^{B})Ay\|^{2}\leq\langle Tx-Ty,x-y\rangle\) for all \(x,y\in H_{1}\);

     
  2. (ii)

    \(\|A^{*}(I-J_{\beta}^{B})Ax-A^{*}(I-J_{\beta}^{B})Ay\|^{2} \leq\|A\|^{2}\cdot\langle Tx-Ty,x-y\rangle\) for all \(x,y\in H_{1}\).

     

Lemma 2.4

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a linear operator, and \(A^{*}\) be the adjoint of A, and let \(\beta>0\) be fixed, and let \(\rho\in(0,\frac{2}{\|A\|^{2}})\). Let \(B_{2}:H_{2}\multimap H_{2}\) be a set-valued maximal monotone mapping, and let \(J_{\beta}^{B_{2}}\) be a resolvent mapping of \(B_{2}\). Then
$$\begin{aligned}& \bigl\Vert \bigl[x-\rho A^{*}\bigl(I-J_{\beta}^{B_{2}}\bigr)Ax \bigr]-\bigl[y-\rho A^{*}\bigl(I-J_{\beta}^{B_{2}}\bigr)Ay\bigr]\bigr\Vert ^{2} \\& \quad \leq\|x-y\|^{2}-\bigl(2\rho-\rho^{2}\|A \|^{2}\bigr)\bigl\Vert \bigl(I-J_{\beta}^{B_{2}} \bigr)Ax-\bigl(I-J_{\beta}^{B_{2}}\bigr)Ay\bigr\Vert ^{2} \end{aligned}$$
for all \(x,y\in H_{1}\). Furthermore, \(I-\rho A^{*}(I-J_{\beta}^{B_{2}})A\) is a nonexpansive mapping.

Lemma 2.5

[10]

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow H\) be a nonexpansive mapping, and let \(\{x_{n}\}\) be a sequence in C. If \(x_{n}\rightharpoonup w\) and \(\lim_{n\rightarrow \infty }\|x_{n}-Tx_{n}\|=0\), then \(Tw=w\).

Lemma 2.6

[11]

Let \(\{a_{n}\}\) be a sequence of real numbers for which there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) such that \(a_{n_{i}}< a_{n_{i}+1}\) for all \(i\in\mathbb{N}\). Then there exists a nondecreasing sequence \(\{m_{k}\}\subseteq \mathbb{N}\) such that \(m_{k}\rightarrow \infty\) and, for all sufficiently large \(k\in\mathbb{N}\), \(a_{m_{k}}\leq a_{m_{k}+1}\) and \(a_{k}\leq a_{m_{k}+1}\). In fact, \(m_{k}=\max\{j\leq k:a_{j}< a_{j+1}\}\).

Lemma 2.7

[12]

Let \(\{a_{n}\}_{n\in\mathbb{N}}\) be a sequence of nonnegative real numbers, \(\{\alpha_{n}\}\) a sequence of real numbers in \([0,1]\) with \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), \(\{u_{n}\}\) a sequence of nonnegative real numbers with \(\sum_{n=1}^{\infty}u_{n}<\infty\), \(\{t_{n}\}\) a sequence of real numbers with \(\limsup t_{n}\leq0\). Suppose that \(a_{n+1}\leq(1-\alpha_{n})a_{n}+\alpha_{n}t_{n}+u_{n}\) for each \(n\in\mathbb{N}\). Then \(\lim_{n\rightarrow \infty}a_{n}=0\).
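Lemma 2.7 can be illustrated numerically with the admissible choices \(\alpha_{n}=1/n\) (so \(\sum\alpha_{n}=\infty\)), \(t_{n}=1/n\rightarrow0\), and \(u_{n}=1/n^{2}\) summable; these concrete choices are ours.

```python
# Numerical illustration of Lemma 2.7: with alpha_n = 1/n (divergent sum),
# t_n = 1/n (limsup t_n = 0 <= 0), and u_n = 1/n^2 (summable), the
# recursion a_{n+1} <= (1 - alpha_n) a_n + alpha_n t_n + u_n forces a_n -> 0.

a = 5.0
for n in range(1, 20001):
    alpha_n, t_n, u_n = 1.0 / n, 1.0 / n, 1.0 / n ** 2
    a = (1 - alpha_n) * a + alpha_n * t_n + u_n
# a is now close to 0 (roughly of order (log n) / n)
```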

3 Strong convergence theorems for (SFVIP)

In Remark 1.2 we noted that it is natural to assume that \(\{x_{n}\}\) is a bounded sequence in the following result. For example, ([13], Theorem 3.1) uses the assumption that \(\{\nabla f_{2}(z_{n})\}\) is a bounded sequence; ([14], Assumption 3.2, Theorem 3.1) uses the assumption that \(\{y_{n}^{i}\}_{n\in\mathbb{N}}\) is a bounded sequence; and ([15], Assumption 2, Proposition 2.7) uses the assumption that there exists a positive number \(M_{3}\) such that \(\|\nabla f_{\ell}(x)\|\leq M_{3}\) for each \(x\in\mathbb{R}^{p}\) and each \(\ell=1,2,\ldots,L\). Here, we need a similar assumption for our algorithm and convergence theorem in this paper.

Theorem 3.1

Let \(H_{1}\) and \(H_{2}\) be infinite dimensional Hilbert spaces, \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator, and \(A^{*}\) be the adjoint of A. Let \(B_{1}:H_{1}\multimap H_{1}\) and \(B_{2}:H_{2}\multimap H_{2}\) be set-valued maximal monotone mappings. Let \(\{a_{n}\}\), \(\{\eta_{n}\}\), \(\{\gamma_{n}\}\) be sequences in \([0,1]\), and let \(\{\beta_{n}\}\) be a sequence in \((0,\infty)\). Choose \(\delta\in(0,1/2)\), and let \(\{\rho_{n}\}\) be a sequence in \((0,\min\{\frac{\delta}{\|A\|^{2}},\frac{2}{\|A\|^{2}+2}\})\). Let Ω be the solution set of problem (SFVIP) and assume that \(\Omega\neq\emptyset\). For the sequence \(\{x_{n}\}\) generated by Algorithm 1.2, we further assume that:
  1. (i)

    \(\lim_{n\rightarrow \infty}a_{n}=\lim_{n\rightarrow \infty}\eta_{n}=0\), \(\sum_{n=1}^{\infty}a_{n}=\infty\), \(\liminf_{n\rightarrow \infty}\rho _{n}>0\), and \(\liminf_{n\rightarrow \infty}\beta_{n}>0\);

     
  2. (ii)

    \(\lim_{n\rightarrow \infty}\frac{\gamma_{n}}{a_{n}}=t\) for some \(t\geq0\), and \(\{x_{n}\}\) is a bounded sequence.

     
Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\), where \(\bar{x}:=P_{\Omega}0\).

Proof

Clearly, Ω is a closed and convex subset of \(H_{1}\). Let \(\bar{x}=P_{\Omega}0\). Since \(\liminf_{n\rightarrow \infty }\rho_{n}>0\), we may assume that \(\rho_{n}\geq\rho\) for some \(\rho>0\). Without loss of generality, we may assume that \(x_{n}\neq y_{n}\) for each \(n\in\mathbb{N}\). Take any \(w\in\Omega\) and let w be fixed. Take any \(n\in\mathbb{N}\), and let n be fixed. Since \(w\in\Omega\), we know that \(Aw\in B_{2}^{-1}(0)\). By Lemma 2.2(ii), we know that
$$ A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw=A^{*} \bigl(Aw-J_{\beta_{n}}^{B_{2}}Aw\bigr)=A^{*}(Aw-Aw)=0. $$
(3.1)
By Lemma 2.2(v) and (1.5),
$$ \|x_{n+1}-w\|^{2}+\bigl\Vert x_{n+1}-x_{n}+\alpha_{n} D(x_{n}, \rho_{n})\bigr\Vert ^{2} \leq \bigl\Vert x_{n}- \alpha_{n} D(x_{n},\rho_{n})-w\bigr\Vert ^{2}. $$
(3.2)
By (3.2),
$$\begin{aligned}& \Vert x_{n}-w\Vert ^{2}-\Vert x_{n+1}-w\Vert ^{2} \\& \quad \geq \Vert x_{n}-w\Vert ^{2}-\bigl\Vert x_{n}-\alpha_{n} D(x_{n},\rho _{n})-w \bigr\Vert ^{2}+\bigl\Vert x_{n+1}-x_{n}+ \alpha_{n} D(x_{n},\rho_{n})\bigr\Vert ^{2} \\& \quad \geq \Vert x_{n}-w\Vert ^{2}-\bigl\Vert x_{n}-\alpha_{n} D(x_{n},\rho_{n})-w\bigr\Vert ^{2} \\& \quad = \Vert x_{n}-w\Vert ^{2}-\Vert x_{n}-w\Vert ^{2}-\bigl\Vert \alpha_{n} D(x_{n},\rho_{n})\bigr\Vert ^{2}+ 2\bigl\langle x_{n}-w,\alpha_{n} D(x_{n},\rho_{n})\bigr\rangle \\& \quad = 2\bigl\langle x_{n}-w,\alpha_{n} D(x_{n}, \rho_{n})\bigr\rangle -\alpha _{n}^{2}\bigl\Vert D(x_{n},\rho_{n})\bigr\Vert ^{2}. \end{aligned}$$
(3.3)
Besides, by Lemma 2.3,
$$ \bigl\langle A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}} \bigr)Ay_{n}-A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw, y_{n}-w\bigr\rangle \geq0. $$
(3.4)
By (3.1) and (3.4),
$$ \bigl\langle A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}} \bigr)Ay_{n},y_{n}-w\bigr\rangle \geq0. $$
(3.5)
By Lemma 2.2(vi) and (1.3),
$$ \bigl\langle x_{n}-y_{n}-\rho_{n}A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n},y_{n}-w \bigr\rangle \geq \langle a_{n}\rho_{n}x_{n}- \gamma_{n}d_{n},y_{n}-w\rangle. $$
(3.6)
By (3.5) and (3.6), we have
$$\begin{aligned}& \langle a_{n}\rho_{n}x_{n}- \gamma_{n}d_{n},y_{n}-w\rangle \\& \quad \leq \bigl\langle x_{n}-y_{n}-\rho_{n}A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}+\rho _{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ay_{n},y_{n}-w \bigr\rangle \\& \quad = \bigl\langle D(x_{n},\rho_{n}),y_{n}-w \bigr\rangle . \end{aligned}$$
(3.7)
By (3.7), we know that
$$ \bigl\langle D(x_{n},\rho_{n}),x_{n}-y_{n} \bigr\rangle +\langle a_{n}\rho_{n}x_{n}-\gamma _{n}d_{n},y_{n}-w\rangle\leq\bigl\langle D(x_{n},\rho_{n}),x_{n}-w\bigr\rangle . $$
(3.8)
Here, we set
$$ \varepsilon_{n}:=\rho_{n}\bigl[A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ay_{n}-A^{*} \bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n}\bigr]. $$
(3.9)
Then it follows from (1.9) and (3.9) that
$$\begin{aligned} \bigl\langle D(x_{n},\rho_{n}),x_{n}-y_{n} \bigr\rangle &= \langle x_{n}-y_{n},x_{n}-y_{n}+ \varepsilon_{n}\rangle \\ &= \|x_{n}-y_{n}\|^{2}+\langle x_{n}-y_{n},\varepsilon_{n}\rangle \\ &\geq \|x_{n}-y_{n}\|^{2}-\bigl|\langle x_{n}-y_{n},\varepsilon_{n}\rangle\bigr| \\ &\geq (1-\delta) \|x_{n}-y_{n}\|^{2} \end{aligned}$$
(3.10)
and
$$\begin{aligned} \bigl\langle D(x_{n},\rho_{n}),x_{n}-y_{n} \bigr\rangle =& \langle x_{n}-y_{n},x_{n}-y_{n}+ \varepsilon_{n}\rangle \\ = & \|x_{n}-y_{n}\|^{2}+\langle x_{n}-y_{n},\varepsilon_{n}\rangle \\ = & \frac{1}{2}\|x_{n}-y_{n}\|^{2}+ \langle x_{n}-y_{n},\varepsilon_{n}\rangle+ \frac{1}{2}\|x_{n}-y_{n}\|^{2} \\ \geq& \frac{1}{2}\|x_{n}-y_{n}\|^{2}+ \langle x_{n}-y_{n},\varepsilon _{n}\rangle+ \frac{1}{2}\|\varepsilon_{n}\|^{2} \\ = & \frac{1}{2}\|x_{n}-y_{n}+\varepsilon_{n} \|^{2} \\ = & \frac{1}{2}\bigl\Vert D(x_{n},\rho_{n})\bigr\Vert ^{2}. \end{aligned}$$
(3.11)
By (3.3) and (3.8),
$$\begin{aligned}& \|x_{n}-w\|^{2}-\|x_{n+1}-w \|^{2} \\& \quad \geq 2\alpha_{n}\bigl\langle x_{n}-w, D(x_{n},\rho_{n})\bigr\rangle - \alpha_{n}^{2} \bigl\Vert D(x_{n},\rho_{n})\bigr\Vert ^{2} \\& \quad \geq 2\alpha_{n}\bigl\langle D(x_{n}, \rho_{n}),x_{n}-y_{n}\bigr\rangle +2 \alpha_{n}a_{n}\rho_{n}\langle x_{n},y_{n}-w \rangle -2\alpha_{n}\gamma_{n}\langle d_{n},y_{n}-w \rangle \\& \qquad {} -\alpha_{n}^{2}\bigl\Vert D(x_{n}, \rho_{n})\bigr\Vert ^{2} \\& \quad = \alpha_{n}\bigl\langle D(x_{n},\rho_{n}),x_{n}-y_{n} \bigr\rangle +2\alpha_{n}a_{n}\rho_{n}\langle x_{n},y_{n}-w\rangle -2\alpha_{n} \gamma_{n}\langle d_{n},y_{n}-w\rangle. \end{aligned}$$
(3.12)
By (1.7) and (3.11), \(\alpha_{n}\geq\frac{1}{2}\) for each \(n\in\mathbb{N}\). It follows from (1.9) and \(1>2\delta\) that
$$\begin{aligned} \|x_{n}-y_{n}+\varepsilon_{n} \|^{2} = & \|x_{n}-y_{n}\|^{2}+\| \varepsilon _{n}\|^{2}+2\langle x_{n}-y_{n}, \varepsilon_{n}\rangle \\ \geq& \|x_{n}-y_{n}\|^{2}+\| \varepsilon_{n}\|^{2}-2\bigl\vert \langle x_{n}-y_{n},\varepsilon_{n}\rangle\bigr\vert \\ \geq& \|x_{n}-y_{n}\|^{2}+\| \varepsilon_{n}\|^{2}-2\delta\|x_{n}-y_{n} \|^{2} \\ \geq& (1-2\delta)\|x_{n}-y_{n}\|^{2}>0. \end{aligned}$$
(3.13)
By (1.6), (1.7), (3.9), and (3.13),
$$ \alpha_{n}^{2}\leq \biggl(\frac{\|x_{n}-y_{n}\|\cdot\|x_{n}-y_{n}+\varepsilon _{n}\|}{\|x_{n}-y_{n}+\varepsilon_{n}\|^{2}} \biggr)^{2} \leq \frac{\|x_{n}-y_{n}\|^{2}}{(1-2\delta)\|x_{n}-y_{n}\|^{2}}=\frac {1}{1-2\delta}. $$
(3.14)
So, \(\{\alpha_{n}\}\) is a bounded sequence. By (3.12) and (3.10),
$$\begin{aligned}& \|x_{n+1}-w\|^{2} \\& \quad \leq \|x_{n}-w\|^{2}-\alpha_{n}\bigl\langle D(x_{n},\rho_{n}),x_{n}-y_{n}\bigr\rangle + 2\alpha_{n}a_{n}\rho_{n}\langle x_{n},w-y_{n}\rangle+2\alpha_{n}\gamma _{n}\langle d_{n},y_{n}-w\rangle \\& \quad \leq \|x_{n}-w\|^{2}-\alpha_{n}(1-\delta) \|x_{n}-y_{n}\|^{2}+ 2\alpha_{n}a_{n} \rho_{n}\langle x_{n},w-y_{n}\rangle+2 \alpha_{n}\gamma _{n}\langle d_{n},y_{n}-w \rangle \\& \quad \leq \|x_{n}-w\|^{2}-\frac{1-\delta}{2} \|x_{n}-y_{n}\|^{2}+ 2\alpha_{n}a_{n} \rho_{n}\langle x_{n},w-y_{n}\rangle+2 \alpha_{n}\gamma _{n}\langle d_{n},y_{n}-w \rangle. \end{aligned}$$
(3.15)
It follows from (2.3) and (3.15) that
$$\begin{aligned}& \|x_{n+1}-w\|^{2} \\& \quad \leq \|x_{n}-w\|^{2}-\alpha_{n}(1-\delta) \|x_{n}-y_{n}\|^{2}+2\alpha _{n} \gamma_{n}\langle d_{n},y_{n}-w\rangle \\& \qquad {} +\alpha_{n}a_{n}\rho_{n}\bigl( \|x_{n}-y_{n}\|^{2}+\|w\|^{2}- \|x_{n}-w\|^{2}-\|y_{n}\|^{2}\bigr) \\& \quad \leq (1-\alpha_{n}a_{n}\rho_{n}) \|x_{n}-w\|^{2} -\alpha_{n}[1-\delta-a_{n} \rho_{n}] \|x_{n}-y_{n}\|^{2} \\& \qquad {} +\alpha_{n}a_{n}\rho_{n}\bigl(\|w \|^{2}-\|y_{n}\|^{2}\bigr)+2\alpha_{n} \gamma_{n}\langle d_{n},y_{n}-w\rangle. \end{aligned}$$
(3.16)
Since \(\lim_{n\rightarrow \infty}a_{n}=0\) and the two sequences \(\{\rho _{n}\}\) and \(\{\alpha_{n}\}\) are bounded, we may assume that \(a_{n}\rho_{n}<1-\delta\) and \(0<\alpha_{n}a_{n}\rho_{n}<1\) for each \(n\in\mathbb{N}\). Since \(\{x_{n}\}\) is a bounded sequence, it is easy to see that \(\{A^{*}(I-J_{\beta_{n}}^{B_{2}})Ax_{n}\}\) is a bounded sequence. Then there exists \(M>0\) such that \(\|A^{*}(I-J_{\beta_{n}}^{B_{2}})Ax_{n}\|\leq M\) for each \(n\in\mathbb{N}\).
Since \(\lim_{n\rightarrow \infty}\eta_{n}=0\), there exists \(k\in \mathbb{N}\) such that \(\eta_{n}<1/2\) for each \(n>k\). Let \(M^{*}=\max\{M,\|d_{k}\|\}\). Then \(\|d_{k}\|< 2M^{*}\). Suppose that \(\|d_{n}\|\leq2M^{*}\) for some \(n>k\). Then we have
$$\|d_{n+1}\|\leq\bigl\Vert A^{*}\bigl(I-J_{\beta_{n+1}}^{B_{2}} \bigr)Ax_{n+1}\bigr\Vert +\eta _{n+1}\|d_{n}\|\leq M +\frac{1}{2}\|d_{n}\|\leq2M^{*}. $$
By induction, we know that \(\|d_{n}\|\leq2M^{*}\) for each \(n\geq k\). So, \(\{d_{n}\}\) is a bounded sequence.
Next, we know that
$$\begin{aligned}& \|y_{n}-w\| \\& \quad \leq \bigl\Vert J_{\beta_{n}}^{B_{1}}\bigl[(1-a_{n} \rho_{n})x_{n}-\rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}} \bigr)Ax_{n}+\gamma_{n} d_{n}\bigr] \\& \qquad {}- J_{\beta_{n}}^{B_{1}}\bigl[(1-a_{n}\rho_{n})w- \rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Aw\bigr]\bigr\Vert \\& \qquad {} +\bigl\Vert J_{\beta_{n}}^{B_{1}}\bigl[(1-a_{n} \rho_{n})w-\rho_{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}} \bigr)Aw\bigr]- J_{\beta_{n}}^{B_{1}}\bigl[w-\rho_{n}A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw\bigr]\bigr\Vert \\& \quad \leq (1-a_{n}\rho_{n})\|x_{n}-w\|+ \gamma_{n} \|d_{n}\|+\bigl\Vert J_{\beta _{n}}^{B_{1}} \bigl[(1-a_{n}\rho_{n})w\bigr]- J_{\beta_{n}}^{B_{1}}[w] \bigr\Vert \\& \quad \leq (1-a_{n}\rho_{n})\|x_{n}-w \|+a_{n}\rho_{n}\|w\|+\gamma_{n} \|d_{n} \|. \end{aligned}$$
(3.17)
Hence, it follows from (3.17) and the boundedness of the sequences \(\{x_{n}\}\) and \(\{d_{n}\}\) that \(\{y_{n}\}\) is bounded.
Besides, we have
$$\begin{aligned}& \|y_{n}-w\|^{2} \\& \quad \leq \bigl\Vert \bigl[(1-a_{n}\rho_{n})x_{n}- \rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n} \bigr]-\bigl[w-\rho_{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}} \bigr)Aw\bigr]+ \gamma_{n} d_{n}\bigr\Vert ^{2} \\& \quad = \bigl\Vert \bigl[x_{n}-\rho_{n}A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr]-\bigl[w-\rho _{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw\bigr]\bigr\Vert ^{2} \\& \qquad {} +2\bigl\langle \bigl[x_{n}-\rho_{n}A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr]-\bigl[w-\rho _{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw\bigr], \gamma_{n} d_{n}-a_{n}\rho_{n}x_{n} \bigr\rangle \\& \qquad {} +\|\gamma_{n} d_{n}-a_{n} \rho_{n}x_{n}\|^{2} \\& \quad \leq \bigl\Vert \bigl[x_{n}-\rho_{n}A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr]-\bigl[w-\rho _{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw\bigr]\bigr\Vert ^{2} \\& \qquad {} +2\bigl\langle \bigl[x_{n}-\rho_{n}A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr]-\bigl[w-\rho _{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw\bigr], \gamma_{n} d_{n}-a_{n}\rho_{n}x_{n} \bigr\rangle \\& \qquad {} +\gamma_{n}^{2}\|d_{n} \|^{2}+(a_{n}\rho_{n})^{2} \|x_{n}\|^{2}+2\gamma_{n} a_{n} \rho_{n} \|d_{n}\|\cdot\|x_{n}\|. \end{aligned}$$
(3.18)
By (3.18) and Lemma 2.4,
$$\begin{aligned}& \|y_{n}-w\|^{2} \\& \quad \leq \|x_{n}-w\|^{2}-\bigl(2\rho_{n}- \rho_{n}^{2}\|A\|^{2}\bigr)\bigl\Vert \bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n}- \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw\bigr\Vert ^{2} \\& \qquad {} +2\bigl\langle \bigl[x_{n}-\rho_{n}A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr]-\bigl[w-\rho _{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Aw\bigr], \gamma_{n} d_{n}-a_{n}\rho_{n}x_{n} \bigr\rangle \\& \qquad {} +\gamma_{n}^{2}\|d_{n} \|^{2}+(a_{n}\rho_{n})^{2} \|x_{n}\|^{2}+2\gamma_{n} a_{n} \rho_{n} \|d_{n}\|\cdot\|x_{n}\| \\& \quad \leq \|x_{n}-w\|^{2}-\bigl(2\rho_{n}- \rho_{n}^{2}\|A\|^{2}\bigr)\bigl\Vert \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr\Vert ^{2} +\gamma_{n}^{2}\|d_{n} \|^{2} \\& \qquad {} +2\|x_{n}-w\|\cdot\bigl(\gamma_{n} \Vert d_{n}\Vert +a_{n}\rho_{n}\|x_{n}\| \bigr)+(a_{n}\rho _{n})^{2}\|x_{n} \|^{2}+2\gamma_{n} a_{n}\rho_{n} \|d_{n}\|\cdot\|x_{n}\|. \end{aligned}$$
(3.19)
Next, we know that
$$\begin{aligned}& \bigl\Vert y_{n}-J_{\beta_{n}}^{B_{1}}x_{n} \bigr\Vert \\& \quad = \bigl\Vert J_{\beta_{n}}^{B_{1}}\bigl[(1-a_{n} \rho_{n})x_{n}-\rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}} \bigr)Ax_{n}+\gamma_{n}d_{n}\bigr]-J_{\beta_{n}}^{B_{1}}x_{n} \bigr\Vert \\& \quad \leq \bigl\Vert \bigl[(1-a_{n}\rho_{n})x_{n}- \rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n} \bigr]-x_{n}\bigr\Vert +\gamma_{n} \|d_{n}\| \\& \quad \leq a_{n}\rho_{n}\|x_{n}\|+\rho_{n} \bigl\Vert A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)Ax_{n}\bigr\Vert +\gamma _{n} \|d_{n}\| \\& \quad \leq a_{n}\rho_{n}\|x_{n}\|+\rho_{n}\|A \|\cdot\bigl\| Ax_{n}-J_{\beta _{n}}^{B_{2}}Ax_{n}\bigr\| + \gamma_{n} \|d_{n}\|. \end{aligned}$$
(3.20)
Further, by Lemma 2.2, we have
$$\begin{aligned}& \|y_{n}-\bar{x}\|^{2} \\& \quad = \bigl\Vert J_{\beta_{n}}^{B_{1}}\bigl[(1-a_{n} \rho_{n})x_{n}-\rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}} \bigr)Ax_{n}+\gamma_{n} d_{n}\bigr]-J_{\beta_{n}}^{B_{1}} \bigl[\bar{x}-\rho _{n}A^{*}\bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)A \bar{x}\bigr]\bigr\Vert ^{2} \\& \quad \leq \bigl\langle (1-a_{n}\rho_{n})x_{n}- \rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n}+ \gamma_{n} d_{n}-\bar{x}+\rho_{n}A^{*} \bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)A\bar{x},y_{n}-\bar{x} \bigr\rangle \\& \quad = \bigl\langle (1-a_{n}\rho_{n})x_{n}- \rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n}-(1-a_{n} \rho_{n})\bar{x}+\rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}} \bigr)A\bar{x},y_{n}-\bar{x}\bigr\rangle \\& \qquad {} -a_{n}\rho_{n}\langle\bar{x},y_{n}- \bar{x}\rangle+\gamma_{n}\langle d_{n},y_{n}- \bar{x}\rangle \\& \quad \leq \bigl\Vert (1-a_{n}\rho_{n})x_{n}- \rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n}-(1-a_{n} \rho_{n})\bar{x}+\rho_{n}A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}} \bigr)A\bar{x}\bigr\Vert \\& \qquad {} \cdot\|y_{n}-\bar{x}\|+a_{n}\rho_{n} \langle-\bar{x},y_{n}-\bar {x}\rangle+\gamma_{n}\langle d_{n},y_{n}-\bar{x}\rangle \\& \quad \leq (1-a_{n}\rho_{n})\|x_{n}-\bar{x}\|\cdot \|y_{n}-\bar{x}\| +a_{n}\rho_{n}\langle- \bar{x},y_{n}-\bar{x}\rangle +\gamma_{n}\langle d_{n},y_{n}-\bar{x}\rangle \\& \quad \leq \frac{(1-a_{n}\rho_{n})^{2}}{2}\|x_{n}-\bar{x}\|^{2}+ \frac {1}{2}\|y_{n}-\bar{x}\|^{2}+a_{n} \rho_{n}\langle-\bar{x},y_{n}-\bar {x}\rangle + \gamma_{n}\langle d_{n},y_{n}-\bar{x}\rangle \\& \quad \leq \biggl(\frac{1-a_{n}\rho_{n}}{2}\biggr)\|x_{n}-\bar{x} \|^{2}+\frac {1}{2}\|y_{n}-\bar{x}\|^{2}+a_{n} \rho_{n}\langle-\bar{x},y_{n}-\bar {x}\rangle + \gamma_{n}\langle d_{n},y_{n}-\bar{x}\rangle. \end{aligned}$$
(3.21)
This implies that
$$\begin{aligned}& \|y_{n}-\bar{x}\|^{2} \\& \quad \leq (1-a_{n}\rho_{n})\|x_{n}-\bar{x} \|^{2} +2a_{n}\rho_{n}\langle-\bar{x},y_{n}- \bar{x}\rangle +2\gamma_{n}\langle d_{n},y_{n}- \bar{x}\rangle \\& \quad = (1-a_{n}\rho_{n})\|x_{n}-\bar{x} \|^{2}+2a_{n}\rho_{n}\langle-\bar {x},y_{n}-x_{n}\rangle+2a_{n}\rho_{n} \langle-\bar{x},x_{n}-\bar{x}\rangle \\& \qquad {} +2\gamma_{n}\langle d_{n},y_{n}-\bar{x} \rangle \\& \quad \leq (1-a_{n}\rho_{n})\|x_{n}-\bar{x} \|^{2}+2a_{n}\rho_{n}\|\bar{x}\|\cdot \|y_{n}-x_{n}\|+2a_{n}\rho_{n}\langle- \bar{x},x_{n}-\bar{x}\rangle \\& \qquad {} +2\gamma_{n}\|d_{n}\|\cdot\|y_{n}- \bar{x}\|. \end{aligned}$$
(3.22)
By (3.22), we also have
$$\begin{aligned}& \|x_{n+1}-\bar{x}\|^{2}-\|x_{n+1}-y_{n}\|^{2}-2\langle x_{n+1}-y_{n},y_{n}-\bar{x}\rangle \\& \quad = \|y_{n}-\bar{x}\|^{2} \\& \quad \leq (1-a_{n}\rho_{n})\|x_{n}-\bar{x}\|^{2}+2a_{n}\rho_{n}\|\bar{x}\|\cdot \|y_{n}-x_{n}\|+2a_{n}\rho_{n}\langle-\bar{x},x_{n}-\bar{x}\rangle \\& \qquad {} +2\gamma_{n}\|d_{n}\|\cdot\|y_{n}-\bar{x}\|. \end{aligned}$$
(3.23)
That is,
$$\begin{aligned}& \|x_{n+1}-\bar{x}\|^{2} \\& \quad \leq (1-a_{n}\rho_{n})\|x_{n}-\bar{x}\|^{2}+2a_{n}\rho_{n}\|\bar{x}\|\cdot \|y_{n}-x_{n}\|+2a_{n}\rho_{n}\langle-\bar{x},x_{n}-\bar{x}\rangle \\& \qquad {} +\|x_{n+1}-y_{n}\|^{2}+2\langle x_{n+1}-y_{n},y_{n}-\bar{x}\rangle +2\gamma_{n}\|d_{n}\|\cdot\|y_{n}-\bar{x}\|. \end{aligned}$$
(3.24)
By (3.2) again, we have
$$\begin{aligned}& \|x_{n+1}-\bar{x}\|^{2}+\|x_{n+1}-x_{n} \|^{2}+\bigl\Vert \alpha_{n} D(x_{n}, \rho_{n})\bigr\Vert ^{2} +2\bigl\langle x_{n+1}-x_{n}, \alpha_{n} D(x_{n},\rho_{n})\bigr\rangle \\& \quad \leq \|x_{n}-\bar{x}\|^{2}+\bigl\Vert \alpha_{n} D(x_{n},\rho_{n})\bigr\Vert ^{2}- 2\bigl\langle x_{n}-\bar{x},\alpha_{n} D(x_{n},\rho_{n})\bigr\rangle . \end{aligned}$$
This implies that
$$\begin{aligned} \|x_{n+1}-x_{n}\|^{2} \leq& \|x_{n}-\bar{x}\|^{2}-\|x_{n+1}-\bar{x} \|^{2} -2\bigl\langle x_{n+1}-x_{n}, \alpha_{n} D(x_{n},\rho_{n})\bigr\rangle \\ &{} -2\bigl\langle x_{n}-\bar{x},\alpha_{n} D(x_{n},\rho_{n})\bigr\rangle . \end{aligned}$$
(3.25)
By (2.1) and (3.16), we have
$$\begin{aligned}& \|x_{n+1}-\bar{x}\|^{2}-2\alpha_{n} \gamma_{n}\langle d_{n},y_{n}-\bar {x}\rangle \\& \quad \leq (1-\alpha_{n}a_{n}\rho_{n}) \|x_{n}-\bar{x}\|^{2} +\alpha_{n}a_{n} \rho_{n}\bigl(\|\bar{x}\|^{2}-\|y_{n} \|^{2}\bigr) \\& \quad = (1-\alpha_{n}a_{n}\rho_{n}) \|x_{n}-\bar{x}\|^{2} +\alpha_{n}a_{n} \rho_{n}\bigl(\|\bar{x}-y_{n}+y_{n}\|^{2}- \|y_{n}\|^{2}\bigr) \\& \quad \leq (1-\alpha_{n}a_{n}\rho_{n}) \|x_{n}-\bar{x}\|^{2} +\alpha_{n}a_{n} \rho_{n}\bigl(2\langle\bar{x}-y_{n},\bar{x}\rangle + \|y_{n}\|^{2}-\|y_{n}\|^{2}\bigr) \\& \quad \leq (1-\alpha_{n}a_{n}\rho_{n}) \|x_{n}-\bar{x}\|^{2} +2\alpha_{n}a_{n} \rho_{n}\langle\bar{x}-y_{n},\bar{x}\rangle \\& \quad = (1-\alpha_{n}a_{n}\rho_{n}) \|x_{n}-\bar{x}\|^{2} +2\alpha_{n}a_{n} \rho_{n}\langle-\bar{x},y_{n}-\bar{x}\rangle \\& \quad = (1-\alpha_{n}a_{n}\rho_{n}) \|x_{n}-\bar{x}\|^{2} +2\alpha_{n}a_{n} \rho_{n}\bigl(\langle-\bar{x},y_{n}-x_{n}\rangle + \langle-\bar{x},x_{n}-\bar{x}\rangle\bigr). \end{aligned}$$
(3.26)
This implies that
$$\begin{aligned}& \|x_{n+1}-\bar{x}\|^{2} \\& \quad \leq \biggl(1-\frac{1}{2}a_{n}\rho\biggr) \|x_{n}-\bar{x}\|^{2} +\frac{a_{n}\rho}{2}\cdot \frac{4\alpha_{n}\rho_{n}}{\rho}\bigl(\langle -\bar{x},y_{n}-x_{n}\rangle + \langle-\bar{x},x_{n}-\bar{x}\rangle\bigr) \\& \qquad {} +\frac{a_{n}\rho}{2}\cdot\frac{4\alpha_{n}}{\rho}\cdot\frac{\gamma _{n}}{a_{n}} \langle d_{n},y_{n}-\bar{x}\rangle. \end{aligned}$$
(3.27)

Case 1: there exists a natural number N such that \(\|x_{n+1}-\bar{x}\|\leq\|x_{n}-\bar{x}\|\) for each \(n\geq N\).

Clearly, \(\lim_{n\rightarrow \infty}\|x_{n}-\bar{x}\|\) exists. By (3.15) and \(\lim_{n\rightarrow \infty}\gamma_{n}=0\), we know that
$$ \lim_{n\rightarrow \infty}\|x_{n}-y_{n}\|= \lim_{n\rightarrow \infty }\bigl\Vert D(x_{n},\rho_{n}) \bigr\Vert =0. $$
(3.28)
By (3.25) and (3.28),
$$ \lim_{n\rightarrow \infty}\|x_{n+1}-x_{n}\|= \lim_{n\rightarrow \infty}\|x_{n+1}-y_{n}\|=0. $$
(3.29)
By (3.23), (3.28), (3.29), and \(\lim_{n\rightarrow \infty}\gamma_{n}=0\),
$$ \lim_{n\rightarrow \infty}\|y_{n}-\bar{x}\|=\lim _{n\rightarrow \infty}\|x_{n}-\bar{x}\|. $$
(3.30)
By (3.19), (3.30), and \(\lim_{n\rightarrow \infty}\gamma_{n}=0\),
$$ \lim_{n\rightarrow \infty}\bigl(2\rho_{n}- \rho_{n}^{2}\|A\|^{2}\bigr)\bigl\Vert Ax_{n}-J_{\beta _{n}}^{B_{2}}Ax_{n}\bigr\Vert ^{2}=0. $$
(3.31)
By (3.31),
$$ \lim_{n\rightarrow \infty}\bigl\Vert Ax_{n}-J_{\beta_{n}}^{B_{2}}Ax_{n} \bigr\Vert =0. $$
(3.32)
By (3.20), (3.32), \(\lim_{n\rightarrow \infty}a_{n}=0\), and \(\lim_{n\rightarrow \infty}\gamma_{n}=0\),
$$ \lim_{n\rightarrow \infty}\bigl\Vert y_{n}-J_{\beta_{n}}^{B_{1}}x_{n} \bigr\Vert =0. $$
(3.33)
By (3.28) and (3.33),
$$ \lim_{n\rightarrow \infty}\bigl\Vert x_{n}-J_{\beta_{n}}^{B_{1}}x_{n} \bigr\Vert =0. $$
(3.34)
Since \(\liminf_{n\rightarrow \infty}\beta_{n}>0\), we may assume that \(\beta_{n}\geq\beta\) for some \(\beta>0\). By (3.32), (3.34) and Lemma 2.2(iii),
$$ \lim_{n\rightarrow \infty}\bigl\Vert x_{n}-J_{\beta}^{B_{1}}x_{n} \bigr\Vert =\lim_{n\rightarrow \infty}\bigl\Vert Ax_{n}-J_{\beta}^{B_{2}}Ax_{n} \bigr\Vert =0. $$
(3.35)
Since \(\{x_{n}\}\) is a bounded sequence, there is a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) and \(z\in H\) such that \(x_{n_{k}}\rightharpoonup z\) and
$$ \limsup_{n\rightarrow \infty}\langle-\bar{x},x_{n}- \bar{x}\rangle= \lim_{k\rightarrow \infty}\langle-\bar{x},x_{n_{k}}- \bar{x}\rangle= \langle-\bar{x},z-\bar{x}\rangle. $$
(3.36)
It follows from \(x_{n_{k}}\rightharpoonup z\) and (3.35) that \(z\in \operatorname{Fix}(J_{\beta}^{B_{1}})=B_{1}^{-1}(0)\). Besides, since \(x_{n_{k}}\rightharpoonup z\), we have, for each \(y\in H_{2}\),
$$ \lim_{k\rightarrow \infty}\langle Ax_{n_{k}}-Az,y\rangle= \lim_{k\rightarrow \infty}\bigl\langle x_{n_{k}}-z,A^{*}y\bigr\rangle =0. $$
(3.37)
Then \(Ax_{n_{k}}\rightharpoonup Az\). Similarly, we know that \(Az\in \operatorname{Fix}(J_{\beta}^{B_{2}})=B_{2}^{-1}(0)\). So, \(z\in\Omega\). By (3.36) and Lemma 2.1, we know that
$$ \limsup_{n\rightarrow \infty}\langle-\bar{x},x_{n}- \bar{x}\rangle= \lim_{k\rightarrow \infty}\langle-\bar{x},x_{n_{k}}- \bar{x}\rangle= \langle-\bar{x},z-\bar{x}\rangle\leq0. $$
(3.38)
We also have
$$\begin{aligned}& \limsup_{n\rightarrow \infty}\langle d_{n},x_{n}- \bar{x}\rangle \\& \quad = \limsup_{n\rightarrow \infty}\bigl\langle -A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}} \bigr)Ax_{n}+\eta_{n}d_{n-1},x_{n}-\bar{x} \bigr\rangle \\& \quad = \limsup_{n\rightarrow \infty}\bigl(\bigl\langle -A^{*} \bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n}+A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)A\bar{x},x_{n}-\bar{x}\bigr\rangle + \langle\eta_{n}d_{n-1},x_{n}-\bar{x}\rangle \bigr) \\& \quad \leq \limsup_{n\rightarrow \infty}\eta_{n}\langle d_{n-1},x_{n}-\bar {x}\rangle=0. \end{aligned}$$
(3.39)
Hence, it follows from (3.28) and (3.39) that
$$\begin{aligned}& \limsup_{n\rightarrow \infty}\langle d_{n},y_{n}- \bar{x}\rangle \\& \quad = \limsup_{n\rightarrow \infty}\bigl(\langle d_{n},y_{n}-x_{n} \rangle+\langle d_{n},x_{n}-\bar{x}\rangle\bigr) \\& \quad \leq \limsup_{n\rightarrow \infty}\langle d_{n},y_{n}-x_{n} \rangle+ \limsup_{n\rightarrow \infty}\langle d_{n},x_{n}- \bar{x}\rangle\leq0. \end{aligned}$$
(3.40)
By (3.27), (3.38), (3.40), and Lemma 2.7, we know that \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\).
Case 2: suppose that there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) such that \(\|x_{n_{i}}-\bar{x}\|\leq\|x_{n_{i}+1}-\bar{x}\|\) for all \(i\in\mathbb{N}\). By Lemma 2.6, there exists a nondecreasing sequence \(\{m_{k}\}\) in \(\mathbb{N}\) such that \(m_{k}\rightarrow \infty\),
$$ \|x_{m_{k}}-\bar{x}\|\leq\|x_{m_{k}+1}-\bar{x}\|\quad \text{and}\quad \|x_{k}-\bar{x}\|\leq\|x_{m_{k}+1}-\bar{x}\| $$
(3.41)
for all \(k\in\mathbb{N}\). By (3.26),
$$\begin{aligned}& \|x_{m_{k}+1}-\bar{x}\|^{2} \\& \quad \leq (1-\alpha_{m_{k}}a_{m_{k}}\rho_{m_{k}}) \|x_{m_{k}}-\bar{x}\|^{2} +2\alpha_{m_{k}}a_{m_{k}} \rho_{m_{k}}\langle-\bar {x},y_{m_{k}}-x_{m_{k}}\rangle \\& \qquad {} +2\alpha_{m_{k}}a_{m_{k}}\rho_{m_{k}}\langle- \bar{x},x_{m_{k}}-\bar {x}\rangle +2\alpha_{m_{k}} \gamma_{m_{k}}\langle d_{m_{k}},y_{m_{k}}-\bar{x}\rangle \end{aligned}$$
(3.42)
for all \(k\in\mathbb{N}\). By (3.41) and (3.42),
$$\begin{aligned}& \alpha_{m_{k}}a_{m_{k}}\rho_{m_{k}} \|x_{m_{k}}-\bar{x}\|^{2} \\& \quad \leq \|x_{m_{k}}-\bar{x}\|^{2}-\|x_{m_{k}+1}-\bar{x} \|^{2} +2\alpha_{m_{k}}a_{m_{k}}\rho_{m_{k}} \langle-\bar {x},y_{m_{k}}-x_{m_{k}}\rangle \\& \qquad {} +2\alpha_{m_{k}}a_{m_{k}}\rho_{m_{k}}\langle- \bar{x},x_{m_{k}}-\bar {x}\rangle +2\alpha_{m_{k}} \gamma_{m_{k}}\langle d_{m_{k}},y_{m_{k}}-\bar{x}\rangle \\& \quad \leq 2\alpha_{m_{k}}a_{m_{k}}\rho_{m_{k}}\langle-\bar {x},y_{m_{k}}-x_{m_{k}}\rangle +2\alpha_{m_{k}}a_{m_{k}} \rho_{m_{k}}\langle-\bar{x},x_{m_{k}}-\bar {x}\rangle \\& \qquad {} +2\alpha_{m_{k}}\gamma_{m_{k}}\langle d_{m_{k}},y_{m_{k}}-\bar{x}\rangle \end{aligned}$$
(3.43)
for all \(k\in\mathbb{N}\). This implies that
$$\begin{aligned} \|x_{m_{k}}-\bar{x}\|^{2} \leq&2\langle- \bar{x},y_{m_{k}}-x_{m_{k}}\rangle +2\langle-\bar{x},x_{m_{k}}- \bar{x}\rangle \\ &{}+\frac{2\gamma_{m_{k}}}{a_{m_{k}}\rho_{m_{k}}}\langle d_{m_{k}},y_{m_{k}}-\bar {x}\rangle \end{aligned}$$
(3.44)
for all \(k\in\mathbb{N}\). By (3.41) and arguments similar to those above, we know that
$$ \left \{ \begin{array}{l} \lim_{k\rightarrow \infty}\|y_{m_{k}}-x_{m_{k}}\|=\lim_{k\rightarrow \infty}\|x_{m_{k}+1}-x_{m_{k}}\|=0, \\ \limsup_{k\rightarrow \infty}\langle-\bar{x},x_{m_{k}}-\bar {x}\rangle\leq0, \\ \limsup_{k\rightarrow \infty}\langle d_{m_{k}},y_{m_{k}}-\bar{x}\rangle \leq0. \end{array} \right . $$
(3.45)
By (3.41), (3.44), and (3.45), we obtain the following, which is the conclusion of Theorem 3.1:
$$ \lim_{k\rightarrow \infty}\|x_{m_{k}}-\bar{x}\|=\lim _{k\rightarrow \infty}\|x_{k}-\bar{x}\|=0. $$
(3.46)
Now, for completeness, we give the proof of (3.45).
By (3.15),
$$\begin{aligned}& \|x_{m_{k}+1}-\bar{x}\|^{2} \\& \quad \leq \|x_{m_{k}}-\bar{x}\|^{2}-\frac{1-\delta}{2} \|x_{m_{k}}-y_{m_{k}}\|^{2}+ 2\alpha_{m_{k}}a_{m_{k}} \rho_{m_{k}}\langle x_{m_{k}},\bar{x}-y_{m_{k}}\rangle \\& \qquad {} +2\alpha_{m_{k}}\gamma_{m_{k}}\langle d_{m_{k}},y_{m_{k}}-\bar{x}\rangle. \end{aligned}$$
(3.47)
By (3.41) and (3.47),
$$\begin{aligned}& \frac{1-\delta}{2} \|x_{m_{k}}-y_{m_{k}} \|^{2} \\& \quad \leq \|x_{m_{k}}-\bar{x}\|^{2}-\|x_{m_{k}+1}-\bar{x} \|^{2}+ 2\alpha_{m_{k}}a_{m_{k}}\rho_{m_{k}}\langle x_{m_{k}},\bar{x}-y_{m_{k}}\rangle \\& \qquad {} +2\alpha_{m_{k}}\gamma_{m_{k}}\langle d_{m_{k}},y_{m_{k}}-\bar{x}\rangle \\& \quad \leq 2\alpha_{m_{k}}a_{m_{k}}\rho_{m_{k}}\langle x_{m_{k}},\bar {x}-y_{m_{k}}\rangle +2\alpha_{m_{k}} \gamma_{m_{k}}\langle d_{m_{k}},y_{m_{k}}-\bar{x}\rangle. \end{aligned}$$
(3.48)
By (3.48), we know that
$$ \lim_{k\rightarrow \infty}\|x_{m_{k}}-y_{m_{k}} \|=0. $$
(3.49)
Further,
$$ \lim_{k\rightarrow \infty}\bigl\Vert D(x_{m_{k}}, \rho_{m_{k}})\bigr\Vert =0. $$
(3.50)
By (3.25) and (3.41),
$$\begin{aligned}& \|x_{m_{k}+1}-x_{m_{k}}\|^{2} \\& \quad \leq -2\bigl\langle x_{{m_{k}}+1}-x_{m_{k}}, \alpha_{m_{k}} D(x_{m_{k}},\rho _{m_{k}})\bigr\rangle -2\bigl\langle x_{m_{k}}-\bar{x},\alpha_{m_{k}} D(x_{m_{k}}, \rho_{m_{k}})\bigr\rangle . \end{aligned}$$
(3.51)
By (3.50) and (3.51),
$$ \lim_{k\rightarrow \infty}\|x_{m_{k}+1}-x_{m_{k}} \|=0. $$
(3.52)
We also have
$$\begin{aligned} \begin{aligned}[b] &\limsup_{n\rightarrow \infty}\langle d_{n},x_{n}- \bar{x}\rangle \\ &\quad = \limsup_{n\rightarrow \infty}\bigl\langle -A^{*}\bigl(I-J_{\beta _{n}}^{B_{2}} \bigr)Ax_{n}+\eta_{n}d_{n-1},x_{n}-\bar{x} \bigr\rangle \\ &\quad = \limsup_{n\rightarrow \infty}\bigl(\bigl\langle -A^{*} \bigl(I-J_{\beta _{n}}^{B_{2}}\bigr)Ax_{n}+A^{*} \bigl(I-J_{\beta_{n}}^{B_{2}}\bigr)A\bar{x},x_{n}-\bar{x}\bigr\rangle + \langle\eta_{n}d_{n-1},x_{n}-\bar{x}\rangle \bigr) \\ &\quad \leq \limsup_{n\rightarrow \infty}\eta_{n}\langle d_{n-1},x_{n}-\bar {x}\rangle=0. \end{aligned} \end{aligned}$$
(3.53)
Hence, it follows from (3.49) and (3.53) that
$$\begin{aligned}& \limsup_{k\rightarrow \infty}\langle d_{m_{k}},y_{m_{k}}- \bar{x}\rangle \\& \quad = \limsup_{k\rightarrow \infty}\bigl(\langle d_{m_{k}},y_{m_{k}}-x_{m_{k}}\rangle+\langle d_{m_{k}},x_{m_{k}}-\bar{x}\rangle \bigr) \\& \quad \leq \limsup_{k\rightarrow \infty}\langle d_{m_{k}},y_{m_{k}}-x_{m_{k}} \rangle+ \limsup_{k\rightarrow \infty}\langle d_{m_{k}},x_{m_{k}}- \bar{x}\rangle \leq0. \end{aligned}$$
(3.54)
By (3.19),
$$\begin{aligned}& \|y_{m_{k}}-\bar{x}\|^{2} \\& \quad \leq \|x_{m_{k}}-\bar{x}\|^{2}-\bigl(2 \rho_{m_{k}}-\rho _{m_{k}}^{2}\|A\|^{2}\bigr) \bigl\Vert \bigl(I-J_{\beta_{m_{k}}}^{B_{2}}\bigr)Ax_{m_{k}}\bigr\Vert ^{2} \\& \qquad {} +M_{1}\cdot \bigl(\gamma_{m_{k}} \Vert d_{m_{k}}\Vert +a_{m_{k}}\rho_{m_{k}}\|x_{m_{k}}\| \bigr) \\& \qquad {} +\gamma_{m_{k}}^{2}\|d_{m_{k}} \|^{2}+(a_{m_{k}}\rho _{m_{k}})^{2} \|x_{m_{k}}\|^{2}+2\gamma_{m_{k}} a_{m_{k}} \rho_{m_{k}} \|d_{m_{k}}\|\cdot\|x_{m_{k}}\|, \end{aligned}$$
(3.55)
where
$$M_{1}:=\sup_{k\in\mathbb{N}} \bigl\{ 2\bigl\Vert \bigl[x_{m_{k}}-\rho_{m_{k}}A^{*}\bigl(I-J_{\beta_{m_{k}}}^{B_{2}} \bigr)Ax_{m_{k}}\bigr]- \bigl[\bar{x}-\rho_{m_{k}}A^{*} \bigl(I-J_{\beta_{m_{k}}}^{B_{2}}\bigr)A\bar{x}\bigr]\bigr\Vert \bigr\} . $$
By (3.55),
$$\begin{aligned}& \bigl(2\rho_{m_{k}}-\rho_{m_{k}}^{2}\|A \|^{2}\bigr)\bigl\Vert \bigl(I-J_{\beta _{m_{k}}}^{B_{2}} \bigr)Ax_{m_{k}}\bigr\Vert ^{2} \\& \quad \leq \|x_{m_{k}}-y_{m_{k}}\|\cdot\bigl(\Vert x_{m_{k}}-\bar{x}\Vert +\|y_{m_{k}}-\bar {x}\|\bigr) \\& \qquad {} +M_{1}\cdot \bigl(\gamma_{m_{k}} \Vert d_{m_{k}}\Vert +a_{m_{k}}\rho_{m_{k}}\|x_{m_{k}}\| \bigr) \\& \qquad {} +\gamma_{m_{k}}^{2}\|d_{m_{k}} \|^{2}+(a_{m_{k}}\rho _{m_{k}})^{2} \|x_{m_{k}}\|^{2}+2\gamma_{m_{k}} a_{m_{k}} \rho_{m_{k}} \|d_{m_{k}}\|\cdot\|x_{m_{k}}\|. \end{aligned}$$
(3.56)
By (3.49) and (3.56), we know that
$$ \lim_{k\rightarrow \infty}\bigl(2\rho_{m_{k}}-\rho _{m_{k}}^{2}\|A\|^{2}\bigr)\bigl\Vert \bigl(I-J_{\beta_{m_{k}}}^{B_{2}}\bigr)Ax_{m_{k}}\bigr\Vert ^{2}=0. $$
(3.57)
This implies that
$$ \lim_{k\rightarrow \infty}\bigl\Vert Ax_{m_{k}}-J_{\beta_{m_{k}}}^{B_{2}}Ax_{m_{k}} \bigr\Vert =0. $$
(3.58)
By (3.20), (3.49), and (3.58), we have
$$ \lim_{k\rightarrow \infty}\bigl\Vert y_{m_{k}}-J_{\beta_{m_{k}}}^{B_{1}}x_{m_{k}} \bigr\Vert = \lim_{k\rightarrow \infty}\bigl\Vert x_{m_{k}}-J_{\beta_{m_{k}}}^{B_{1}}x_{m_{k}} \bigr\Vert =0. $$
(3.59)
Since \(\{x_{m_{k}}\}\) is bounded, there is a subsequence \(\{z_{k}\}\) of \(\{x_{m_{k}}\}\) such that \(z_{k}\rightharpoonup u\) and
$$ \limsup_{k\rightarrow \infty}\langle-\bar{x},x_{m_{k}}- \bar {x}\rangle= \lim_{k\rightarrow \infty}\langle-\bar{x},z_{k}- \bar{x}\rangle= \langle-\bar{x},u-\bar{x}\rangle. $$
(3.60)
By (3.58), (3.59), Lemma 2.2, and Lemma 2.5, we know that \(u\in\Omega\). So, by (3.60) and Lemma 2.1, we know that
$$ \limsup_{k\rightarrow \infty}\langle-\bar{x},x_{m_{k}}- \bar {x}\rangle\leq0. $$
(3.61)
Therefore, the proof is completed. □

4 Application: split feasibility problems

Let C and Q be nonempty, closed, and convex subsets of infinite dimensional real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator. The split feasibility problem is the following problem:
$$ \mbox{Find }\bar{x}\in H_{1}\mbox{ such that }\bar{x} \in C\mbox{ and }A\bar {x}\in Q. $$
(SFP)

Let \(\{a_{n}\}\), \(\{\eta_{n}\}\), \(\{\gamma_{n}\}\), and \(\{\rho_{n}\}\) be real sequences. Let δ be a fixed real number. Let \(\Omega_{1}\) be the solution set of problem (SFP).

Algorithm 4.1

Step 0.: 

Choose \(x_{1}\in H_{1}\) arbitrarily, set \(r_{1}\in(0,1)\) and \(d_{0}=0\).

Step 1.: 

\(d_{n}:=-A^{*}(I-P_{Q})Ax_{n}+\eta_{n} d_{n-1}\).

Step 2.: 
For \(n\in\mathbb{N}\), set \(y_{n}\) as
$$ y_{n}=P_{C}\bigl[(1-a_{n} \rho_{n})x_{n}-\rho_{n}A^{*}(I-P_{Q})Ax_{n}+ \gamma_{n} d_{n}\bigr], $$
(4.1)
where \(\rho_{n}>0\) satisfies
$$ \rho_{n}\bigl\Vert A^{*}(I-P_{Q})Ax_{n}-A^{*}(I-P_{Q})Ay_{n} \bigr\Vert \leq\delta\|x_{n}-y_{n}\|,\quad 0< \delta< 1. $$
(4.2)
Step 3.: 

If \(x_{n}=y_{n}\), then set \(n:=n+1\) and go to Step 1. Otherwise, go to Step 4.

Step 4.: 
The new iterative \(x_{n+1}\) is defined by
$$ x_{n+1}=P_{C}\bigl[x_{n}- \alpha_{n} D(x_{n},\rho_{n})\bigr], $$
(4.3)
where
$$\begin{aligned}& D(x_{n},\rho_{n}):=x_{n}-y_{n}+ \rho_{n}\bigl[A^{*}(I-P_{Q})Ay_{n}-A^{*}(I-P_{Q})Ax_{n} \bigr], \end{aligned}$$
(4.4)
$$\begin{aligned}& \alpha_{n}:=\frac{\langle x_{n}-y_{n},D(x_{n},\rho_{n})\rangle}{\|D(x_{n},\rho_{n})\|^{2}}. \end{aligned}$$
(4.5)
Then update \(n:=n+1\) and go to Step 1.
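The steps above can be sketched numerically. The following Python snippet is an illustrative implementation only (the paper's experiments were run in R): it assumes C and Q are boxes, so that \(P_{C}\) and \(P_{Q}\) are coordinatewise clippings, and the parameter sequences \(a_{n}\), \(\eta_{n}\), \(\gamma_{n}\) and the fixed step ρ are sample choices satisfying the hypotheses of Theorem 4.1, not choices prescribed by the paper.

```python
import numpy as np

def sfp_descent_cg(A, P_C, P_Q, x1, n_iter=200, delta=0.4):
    """Sketch of Algorithm 4.1 for the (SFP); parameter choices are illustrative."""
    x = np.asarray(x1, dtype=float)
    d = np.zeros_like(x)                       # Step 0: d_0 = 0
    norm_A2 = np.linalg.norm(A, 2) ** 2        # ||A||^2 (squared spectral norm)
    # A fixed rho below delta / ||A||^2 makes condition (4.2) hold automatically,
    # since I - P_Q is nonexpansive.
    rho = 0.9 * min(delta / norm_A2, 2.0 / (norm_A2 + 2.0))
    for n in range(1, n_iter + 1):
        a_n = 1.0 / (n + 1)                    # a_n -> 0, sum a_n = infinity
        eta_n = gamma_n = 1.0 / (n + 1) ** 2   # eta_n -> 0, gamma_n / a_n -> 0
        grad = A.T @ (A @ x - P_Q(A @ x))      # A*(I - P_Q)Ax_n
        d = -grad + eta_n * d                  # Step 1
        y = P_C((1.0 - a_n * rho) * x - rho * grad + gamma_n * d)  # Step 2, (4.1)
        if np.allclose(x, y):                  # Step 3: x_n = y_n
            continue
        grad_y = A.T @ (A @ y - P_Q(A @ y))
        D = x - y + rho * (grad_y - grad)      # (4.4); D != 0 by (4.2)
        alpha = np.dot(x - y, D) / np.dot(D, D)  # (4.5)
        x = P_C(x - alpha * D)                 # Step 4, (4.3)
    return x
```

For instance, with \(A=I\), \(C=[1,2]^{2}\), and \(Q=[0,3]^{2}\) one has \(\Omega_{1}=[1,2]^{2}\), and the iterates approach the minimum-norm solution \(P_{\Omega_{1}}0=(1,1)^{\top}\), in line with Theorem 4.1.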

Following the same argument as in [5], we can get the following strong convergence theorem of the proposed algorithm for the split feasibility problem.

Theorem 4.1

Let C and Q be nonempty, closed, and convex subsets of infinite dimensional real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Let \(A:H_{1}\rightarrow H_{2}\) be a bounded linear operator. Let \(\{a_{n}\}\), \(\{\eta_{n}\}\), \(\{\gamma_{n}\}\) be sequences in \([0,1]\). Choose \(\delta\in(0,1/2)\), and let \(\{\rho_{n}\}\) be a sequence in \((0,\min\{\frac{\delta}{\|A\|^{2}},\frac{2}{\|A\|^{2}+2}\})\). Let \(\Omega_{1}\) be the solution set of problem (SFP) and assume that \(\Omega_{1}\neq\emptyset\). For the sequence \(\{x_{n}\}\) in Algorithm 4.1, we further assume that:
  1. (i)

    \(\lim_{n\rightarrow \infty}a_{n}=\lim_{n\rightarrow \infty}\eta_{n}=0\), \(\sum_{n=1}^{\infty}a_{n}=\infty\), \(\liminf_{n\rightarrow \infty}\rho_{n}>0\);

     
  2. (ii)

    \(\lim_{n\rightarrow \infty}\frac{\gamma_{n}}{a_{n}}=t\) for some \(t\geq0\), and \(\{x_{n}\}\) is a bounded sequence.

     
Then \(\lim_{n\rightarrow \infty}x_{n}=\bar{x}\), where \(\bar{x}:=P_{\Omega_{1}}0\).

5 Numerical results for (SFVIP)

All code was written in R (version 2.15.2, released 2012-10-26), and all numerical experiments were run on an ASUS All-in-One PC with an Intel Core i3-2100 CPU.

Set \(u=(1,1)\), \(\beta_{1}=1\), \(\beta_{n}=1+\frac{1}{n-1}\) for \(n\geq2\), \(\eta_{n}=\frac{1}{n+1}\), \(a_{n}=\frac{1}{n+1}\), \(\gamma_{1}=1\), \(\gamma_{n}=\frac{1}{n-1}\) for \(n\geq2\), and \(\beta=1\). Let \(\varepsilon>0\); the algorithm stops when \(\|x_{n-1}-x_{n}\|<\varepsilon\).
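As a quick sanity check (not part of the paper), these parameter choices satisfy condition (ii) of Theorem 4.1 with \(t=1\), since \(\gamma_{n}/a_{n}=(n+1)/(n-1)\rightarrow 1\):

```python
# Illustrative check of condition (ii): gamma_n / a_n -> t, here t = 1,
# for the choices a_n = 1/(n+1) and gamma_n = 1/(n-1) (n >= 2) used above.
def a(n):
    return 1.0 / (n + 1)

def gamma(n):
    return 1.0 if n == 1 else 1.0 / (n - 1)

ratios = [gamma(n) / a(n) for n in (2, 10, 100, 10000)]
```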

Example 5.1

Let A and \(B_{1},B_{2}:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\) be defined by
$$\begin{aligned}& A:=\left [ \begin{array}{@{}c@{\quad}c@{}} 1 & 0 \\ 0 & 1 \end{array} \right ], \\ & B_{1} \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} 2 & -2 \\ -2 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} -2 \\ 2 \end{bmatrix} , \\ & B_{2} \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} -2 \\ -2 \end{bmatrix} . \end{aligned}$$
Find a point \(\bar{x}=(\bar{x}_{1},\bar{x}_{2})^{\top}\in\mathbb{R}^{2}\) such that \(B_{1}(\bar{x})=(0,0)^{\top}\) and \(B_{2}(A\bar{x})=(0,0)^{\top}\). Indeed, \(\bar{x}_{1}=1\) and \(\bar{x}_{2}=0\).
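Since \(B_{1}\) and \(B_{2}\) here are affine, \(B(x)=Mx+b\), the resolvent has the closed form \(J_{r}^{B}(x)=(I+rM)^{-1}(x-rb)\). The following sketch (an illustrative check, not code from the paper) verifies that \(\bar{x}=(1,0)^{\top}\) is a fixed point of \(J_{r}^{B_{1}}\) and that \(A\bar{x}\) is a fixed point of \(J_{r}^{B_{2}}\), taking \(r=1\) to match the choice \(\beta=1\) above:

```python
import numpy as np

def affine_resolvent(M, b, r=1.0):
    """Resolvent J_r^B = (I + rB)^(-1) of the affine mapping B(x) = Mx + b."""
    K = np.eye(M.shape[0]) + r * M
    return lambda x: np.linalg.solve(K, x - r * b)

# Data of Example 5.1: B_i(x) = M_i x + b_i, A = identity.
M1, b1 = np.array([[2.0, -2.0], [-2.0, 2.0]]), np.array([-2.0, 2.0])
M2, b2 = np.array([[2.0, 2.0], [2.0, 2.0]]), np.array([-2.0, -2.0])
A = np.eye(2)
x_bar = np.array([1.0, 0.0])

J1 = affine_resolvent(M1, b1)
J2 = affine_resolvent(M2, b2)
```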

Example 5.2

Let \(B_{1}\) and \(B_{2}\) be the same as in Example 5.1. Let
$$A:=\left [ \begin{array}{@{}c@{\quad}c@{}} 2 & 1 \\ 0 & -1 \end{array} \right ]. $$
Find a point \(\bar{x}=(\bar{x}_{1},\bar{x}_{2})^{\top}\in\mathbb{R}^{2}\) such that \(B_{1}(\bar{x})=(0,0)^{\top}\) and \(B_{2}(A\bar{x})=(0,0)^{\top}\). Indeed, \(\bar{x}_{1}=0.5\) and \(\bar{x}_{2}=-0.5\).

Example 5.3

Let \(A:\mathbb{R}^{2}\rightarrow \mathbb{R}^{3}\), \(B_{1}:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}\), and \(B_{2}:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}\) be defined by
$$\begin{aligned}& A:=\left [ \begin{array}{@{}c@{\quad}c@{}} 2 & 1 \\ 1 & 2 \\ 2 & 2 \end{array} \right ], \\ & B_{1} \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} -2 \\ -2 \end{bmatrix} , \\ & B_{2} \begin{bmatrix} x\\ y\\ z \end{bmatrix} = \begin{bmatrix} 2 & -2 & -2 \\ -2 & 2 & 2 \\ -2 & 2 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} . \end{aligned}$$
Find a point \(\bar{x}=(\bar{x}_{1},\bar{x}_{2})^{\top}\in\mathbb{R}^{2}\) such that \(B_{1}(\bar{x})=(0,0)^{\top}\) and \(B_{2}(A\bar{x})=(0,0,0)^{\top}\). Indeed, \(\bar{x}_{1}=1.5\) and \(\bar {x}_{2}=-0.5\).
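Because every mapping in this example is affine, the common-zero condition \(B_{1}(\bar{x})=0\), \(B_{2}(A\bar{x})=0\) reduces to a linear system, which can be checked directly. The snippet below is an illustrative verification (not code from the paper):

```python
import numpy as np

# Stack B1(x) = 0 and B2(Ax) = 0 from Example 5.3 into one linear system
# K x = rhs; the system is consistent with unique solution (1.5, -0.5).
A = np.array([[2.0, 1.0], [1.0, 2.0], [2.0, 2.0]])
M1, b1 = np.array([[2.0, 2.0], [2.0, 2.0]]), np.array([-2.0, -2.0])
M2 = np.array([[2.0, -2.0, -2.0], [-2.0, 2.0, 2.0], [-2.0, 2.0, 2.0]])

K = np.vstack([M1, M2 @ A])                 # rows of B1(x) and B2(Ax)
rhs = np.concatenate([-b1, np.zeros(3)])
x_bar, *_ = np.linalg.lstsq(K, rhs, rcond=None)
```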
For the above examples, we give the numerical results (see Tables 1-3) for the proposed algorithm and related algorithms.
Table 1. Numerical results for Example 5.1 (\(\rho=\rho_{n}=0.01\)), starting from \(x_{1}=(1,1)^{\top}\).

\(\varepsilon=10^{-3}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.02 | 19 | (0.9853714, −0.01460836) |
| Theorem 1.1 | 0.01 | 218 | (1.087499, 0.08749895) |
| Theorem 1.2 | 0.04 | 213 | (1.237467, 0.2433358) |
| Theorem 1.3 | 0.01 | 137 | (1.376151, 0.3943719) |
| Theorem 1.4 | 0.02 | 206 | (1.083916, 0.08392859) |

\(\varepsilon=10^{-4}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.02 | 60 | (0.9932032, −0.006789955) |
| Theorem 1.1 | 0.05 | 505 | (1.008727, 0.008726755) |
| Theorem 1.2 | 0.08 | 939 | (1.065859, 0.06719048) |
| Theorem 1.3 | 0.14 | 1,308 | (1.094086, 0.09599743) |
| Theorem 1.4 | 0.06 | 484 | (1.007433, 0.007437749) |

\(\varepsilon=10^{-5}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.06 | 287 | (0.9977524, −0.002246182) |
| Theorem 1.1 | 0.06 | 792 | (1.000870, 0.0008703675) |
| Theorem 1.2 | 0.25 | 2,974 | (1.020805, 0.02122562) |
| Theorem 1.3 | 0.34 | 4,205 | (1.029428, 0.03002226) |
| Theorem 1.4 | 0.07 | 749 | (1.000037, 4.018706e−05) |

\(\varepsilon=10^{-6}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.25 | 970 | (0.9993371, −0.0006624296) |
| Theorem 1.1 | 0.08 | 1,078 | (1.000088, 8.750662e−05) |
| Theorem 1.2 | 0.99 | 9,403 | (1.006580, 0.006713283) |
| Theorem 1.3 | 1.59 | 13,297 | (1.009306, 0.009494477) |
| Theorem 1.4 | 0.07 | 953 | (0.9994309, −5.664470e−04) |

\(\varepsilon=10^{-7}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.75 | 2,999 | (0.9997898, −0.0002100474) |
| Theorem 1.1 | 0.11 | 1,365 | (1.000009, 8.727519e−06) |
| Theorem 1.2 | 5.03 | 29,732 | (1.002081, 0.002123133) |
| Theorem 1.3 | 8.73 | 42,047 | (1.002943, 0.003002578) |
| Theorem 1.4 | 0.09 | 1,034 | (0.9994033, −5.942738e−04) |

\(\varepsilon=10^{-8}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 2.62 | 9,426 | (0.9999335, −6.645199e−05) |
| Theorem 1.1 | 0.14 | 1,562 | (1.000001, 8.704438e−07) |
| Theorem 1.2 | 19.63 | 94,018 | (1.000658, 0.000671414) |
| Theorem 1.3 | 21.03 | 132,961 | (1.000931, 0.0009495180) |
| Theorem 1.4 | 0.09 | 1,047 | (0.9994030, −5.945951e−04) |

Table 2. Numerical results for Example 5.2 (\(\rho=\rho_{n}=0.001\)), starting from \(x_{1}=(1,1)^{\top}\).

\(\varepsilon=10^{-3}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 |  | 20 | (0.4872068, −0.5128408) |
| Theorem 1.1 | 0.02 | 157 | (1.382916, 0.3832697) |

\(\varepsilon=10^{-4}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.02 | 61 | (0.4953678, −0.5046371) |
| Theorem 1.1 | 0.26 | 3,035 | (0.5882673, −0.4116973328) |

\(\varepsilon=10^{-5}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.07 | 209 | (0.4985109, −0.5014895) |
| Theorem 1.1 | 0.56 | 5,912 | (0.5088314, −0.4911650960) |

\(\varepsilon=10^{-6}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.19 | 716 | (0.4996333, −0.5003667) |
| Theorem 1.1 | 0.90 | 8,790 | (0.5008829, −0.4991167527) |

\(\varepsilon=10^{-7}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.49 | 1,984 | (0.4999414, −0.5000586) |
| Theorem 1.1 | 1.33 | 11,668 | (0.5000883, −0.4999116996) |

\(\varepsilon=10^{-8}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 1.04 | 3,944 | (0.4999930, −0.5000070) |
| Theorem 1.1 | 1.79 | 14,545 | (0.5000088, −0.4999911653) |

Table 3. Numerical results for Example 5.3 (\(\rho=\rho_{n}=0.001\)), starting from \(x_{1}=(1,1)^{\top}\).

\(\varepsilon=10^{-3}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.02 | 40 | (1.540193, −0.5400902) |
| Theorem 1.1 |  | 7 | (0.5038810, 0.4956130) |

\(\varepsilon=10^{-4}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.04 | 162 | (1.515364, −0.5153539) |
| Theorem 1.1 | 0.33 | 3,658 | (1.3762567, −0.3763273899) |

\(\varepsilon=10^{-5}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.19 | 697 | (1.504028, −0.5040270) |
| Theorem 1.1 | 0.76 | 7,689 | (1.4876280, −0.4876350226) |

\(\varepsilon=10^{-6}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 0.55 | 2,233 | (1.500311, −0.5003114) |
| Theorem 1.1 | 1.38 | 11,719 | (1.4987623, −0.4987630241) |

\(\varepsilon=10^{-7}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 1.06 | 4,175 | (1.499761, −0.4997615) |
| Theorem 1.1 | 2.06 | 15,750 | (1.4998763, −0.4998763253) |

\(\varepsilon=10^{-8}\):

| Method | Time | Iterations | Approximate solution |
| --- | --- | --- | --- |
| Algorithm 1.2 | 1.32 | 5,188 | (1.499727, −0.4997275) |
| Theorem 1.1 | 2.83 | 19,781 | (1.4999876, −0.4999876348) |

Declarations

Acknowledgements

This research was supported by the Ministry of Science and Technology of the Republic of China.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Applied Mathematics, National Sun Yat-sen University
(2)
Department of Mathematics, National Kaohsiung Normal University

References

  1. Martinet, B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 4(R3), 154-158 (1970)
  2. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
  3. Moudafi, A: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275-283 (2011)
  4. Byrne, C, Censor, Y, Gibali, A, Reich, S: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759-775 (2011)
  5. Chuang, CS: Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 350 (2013)
  6. Bnouhachem, A, Noor, MA, Khalfaoui, M, Zhaohan, S: On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 54, 627-639 (2012)
  7. Nocedal, J, Wright, SJ: Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (1999)
  8. Takahashi, W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
  9. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)
  10. Browder, FE: Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 53, 1272-1276 (1965)
  11. Maingé, PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899-912 (2008)
  12. Aoyama, K, Kimura, Y, Takahashi, W, Toyoda, M: Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 67, 2350-2360 (2007)
  13. Iiduka, H: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22, 862-878 (2012)
  14. Iiduka, H: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1-26 (2013)
  15. Blatt, D, Hero, A, Gauchman, H: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18, 29-51 (2007)

Copyright

© Chuang and Lin 2015