
  • Research
  • Open Access

New inertial algorithm for solving split common null point problem in Banach spaces

Journal of Inequalities and Applications20192019:17

https://doi.org/10.1186/s13660-019-1971-4

  • Received: 20 September 2018
  • Accepted: 15 January 2019
  • Published:

Abstract

Inspired by the works of Alvarez and Attouch (Set-Valued Anal. 9:3–11, 2001), López et al. (Inverse Probl. 28:ID085004, 2012), Takahashi (Arch. Math. (Basel) 104(4):357–365, 2015) and Suantai et al. (Appl. Gen. Topol. 18(2):345–360, 2017), as well as Promluang and Kuman (J. Inform. Math. Sci. 9(1):27–44, 2017), we propose a new inertial algorithm for solving the split common null point problem in Banach spaces without prior knowledge of the operator norms. Under mild and standard conditions, weak and strong convergence theorems for the proposed algorithms are obtained. The split minimization problem is also considered as an application of our results. Finally, computational examples and a comparison with related algorithms are provided to illustrate the efficiency and applicability of our new algorithm.

Keywords

  • Split common null point problem
  • Inertial technique
  • Self-adaptive step size
  • Set-valued
  • Strong convergence

MSC

  • 47H05
  • 47H09
  • 49J53
  • 65J15
  • 90C25

1 Introduction

In an excellent paper [6], Byrne, Censor, Gibali and Reich introduced the following split common null point problem (SCNPP) for set-valued operators: find a point \(x^{*}\in H_{1}\) such that
$$\begin{aligned} 0\in \bigcap_{i=1}^{p} A_{i}x^{*}, \end{aligned}$$
(1.1)
and
$$\begin{aligned} y_{j}^{*}=T_{j}x^{*}\in H_{2} \quad \text{such that } 0\in B_{j} \bigl(y_{j}^{*} \bigr), \text{ for each } j=1,2,\dots ,r, \end{aligned}$$
(1.2)
where \(H_{1}\) and \(H_{2}\) are two real Hilbert spaces, \(A_{i}:H_{1}\rightarrow 2^{H_{1}}\), \(B_{j}:H_{2}\rightarrow 2^{H_{2}}\) are maximal monotone operators, and \(T_{j}:H_{1}\rightarrow H_{2}\) are bounded linear operators.
The split common null point problem is motivated by many related problems. The first is the split inverse problem (SIP) which is formulated in Censor, Gibali and Reich [7]. It concerns a model in which two vector spaces X and Y and a linear operator \(A:X\rightarrow Y\) are given. In addition, two inverse problems are involved. The first one, denoted by \(\mathit{IP}_{1}\), is formulated in the space X, and the second one, denoted by \(\mathit{IP}_{2}\), is formulated in the space Y. Given these data, the SIP is formulated as follows:
$$\begin{aligned}& \text{find a point }x^{*} \in X\text{ that solves } \mathit{IP}_{1} \\& \text{and such that} \\& \text{the point }y=Tx^{*}\in Y\text{ solves }\mathit{IP}_{2}. \end{aligned}$$

The first instance of SIP is the split convex feasibility problem (SCFP) (see, e.g., Censor and Elfving [8]) in which \(\mathit{IP}_{1}\) and \(\mathit{IP}_{2}\) are convex feasibility problems (CFP). The SCFP has been well studied during the last two decades, both theoretically and practically. In particular, the CFP has been used to model significant real-world problems in sensor networks, radiation therapy treatment planning, resolution enhancement, and in many other instances; see, e.g., Byrne [9] and the references therein.

Soon after, many authors asked whether other inverse problems, besides the CFP, can be used for \(\mathit{IP}_{1}\) and \(\mathit{IP}_{2}\) and be embedded in the SIP methodology. For example, can the SIP be formulated with a null point problem in each of the two spaces?

In fact, the SCNPP can be put in the context of the SIP and related works. For instance, the split variational inclusion problem (SVIP) is an SIP with a variational inequality problem (VIP) in each of the two spaces (see, e.g., Censor et al. [7]). The SVIP is formulated as follows:
$$\begin{aligned}& \text{find a point }x^{*}\in C\text{ such that } \bigl\langle f \bigl(x^{*} \bigr),x-x^{*} \bigr\rangle \geq 0\text{ for all }x \in C, \\& \text{and such that} \\& \text{the point }y^{*}=Tx^{*}\in Q\text{ and solves } \bigl\langle g \bigl(y^{*} \bigr),y-y^{*} \bigr\rangle \geq 0\text{ for all }y\in Q, \end{aligned}$$
where C and Q are nonempty closed convex subsets of Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(f:H_{1}\rightarrow H_{1}\) and \(g:H_{2}\rightarrow H_{2}\) are two given operators, \(T:H_{1}\rightarrow H_{2}\) is a bounded linear operator. If we take \(C=H_{1}\), \(Q=H_{2}\) and \(x=x^{*}-f(x^{*})\), \(y=y^{*}-g(y^{*})\) in SVIP, then we can get the split zero problem (SZP) which is introduced in Censor et al. [7] (Sect. 7.3). The formulation of SZP is as follows:
$$\begin{aligned}& \text{find a point }x^{*}\in H_{1} \\& \text{such that }f \bigl(x^{*} \bigr)=0\text{ and }g \bigl(Tx^{*} \bigr)=0. \end{aligned}$$
Following the idea in Censor et al. [7] and Rockafellar [10], Moudafi [11] introduced the split monotone variational inclusion (SMVI) which generalized the SVIP. The SMVI is as follows:
$$\begin{aligned}& \text{find a point }x^{*}\in H_{1}\text{ such that }0\in f \bigl(x^{*} \bigr)+B_{1} \bigl(x^{*} \bigr) \\& \text{and such that the point} \\& y^{*}=Tx^{*}\in H_{2}\text{ and solves }0 \in g \bigl(y^{*} \bigr)+B_{2} \bigl(y^{*} \bigr), \end{aligned}$$
where \(B_{1}:H_{1}\rightarrow 2^{H_{1}}\) and \(B_{2}:H_{2}\rightarrow 2^{H_{2}}\) are two set-valued mappings, \(f:H_{1}\rightarrow H_{1}\) and \(g:H_{2}\rightarrow H_{2}\) are two given operators, and \(T:H_{1}\rightarrow H_{2}\) is a bounded linear operator. It is easy to see that the SZP is obtained from the SMVI by letting \(B_{1}\) and \(B_{2}\) be zero operators. Since in Moudafi [11] all the applications of the SMVI were presented for \(f=g=0\), these applications are also covered by the SCNPP. In addition, the SCNPP is a generalization of the SZP.

Consequently, the SCNPP (1.1)–(1.2) has attracted wide attention thanks to the motivation of the above related problems and works. As for its applications in signal processing and image reconstruction, the reader can refer to Ansari and Rehan [12, 13], Censor et al. [14], Ceng et al. [15] and the references therein.

Building on the CQ algorithm of Byrne [16, 17], the relaxed CQ algorithm of Yang [18], and the extragradient method of Ceng et al. [15], many authors have in recent years studied the approximation of solutions of the SCNPP (1.1)–(1.2) for two set-valued mappings in Hilbert spaces. For instance, Byrne et al. [6] studied the following iterative method for two set-valued maximal monotone operators in Hilbert spaces:
$$\begin{aligned} x_{n+1}=J_{\lambda }^{A} \bigl(x_{n}+\gamma T^{*} \bigl(J_{\lambda }^{B}-I \bigr)Tx_{n} \bigr), \quad \forall n\geq 1, \end{aligned}$$
(1.3)
where \(\lambda >0\), and they obtained weak convergence of the generated sequence under suitable conditions. For more details on methods for solving the SCNPP and related issues in Hilbert spaces, the reader may refer to Moudafi and Thakur [19], Gibali et al. [20], Censor et al. [7], Shehu and Iyiola [21], Ceng et al. [22], and Sitthithakerngkiet et al. [23].
Based on the above works, Takahashi [3, 24] extended this problem from Hilbert spaces to Banach spaces and obtained strong convergence theorems. Soon afterwards, Alofi, Alsulami and Takahashi [25] introduced the following Halpern-type iteration to find a solution of the split null point problem between Hilbert and Banach spaces:
$$\begin{aligned} x_{n+1}=\beta _{n}x_{n}+(1-\beta _{n}) \bigl(\alpha _{n}u_{n}+(1-\alpha _{n})J_{\lambda _{n}}^{A} \bigl(x_{n}+\lambda _{n} T^{*}J_{E} \bigl(Q_{\mu }^{B}-I \bigr)Tx_{n} \bigr) \bigr), \quad \forall n\geq 1, \end{aligned}$$
(1.4)
where \(J_{E}\) is the duality mapping on the Banach space E, \(\{u_{n}\}\) is a sequence in a Hilbert space such that \(u_{n}\rightarrow u\), and the step size \(\lambda _{n}\) satisfies \(0<\lambda _{n}\|T\|^{2}< 2\). Under suitable assumptions, they obtained a strong convergence theorem. Very recently, Suantai et al. [4] proposed the following scheme to approximate the solution of the SCNPP for two set-valued mappings in Banach spaces:
$$\begin{aligned} x_{n+1}=\alpha _{n}f(x_{n})+\beta _{n}x_{n}+\gamma _{n}J_{\lambda _{n}} ^{A} \bigl(x_{n}+\lambda _{n} T^{*}J_{E} \bigl(Q_{\mu }^{B}-I \bigr)Tx_{n} \bigr),\quad \forall n\geq 1, \end{aligned}$$
(1.5)
where the step size satisfies \(0<\lambda _{n}\|T\|^{2}< 2\). For further work on solving the SCNPP for two set-valued mappings in Banach spaces and related issues, the reader may refer to Promluang and Kuman [5], Kamimura and Takahashi [26], Takahashi [27], Ansari and Rehan [28], Kazmi and Rizvi [29], among others.

Although the algorithms mentioned above enjoy good theoretical properties, such as weak and strong convergence to a solution of the SCNPP in Hilbert or Banach spaces, they share a drawback: (1.3) in Hilbert spaces, as well as (1.4) and (1.5) in Banach spaces, requires prior knowledge of the operator norm \(\|T\|\). In general, computing \(\|T\|\) is not a simple task, and an inaccurate estimate might affect the convergence of these algorithms.

On the other hand, López et al. [2] presented an algorithm in Hilbert spaces for solving the split feasibility problem whose step size is self-adaptive:
$$\begin{aligned} x_{k+1}=P_{C_{k}} \bigl(I-\tau _{k}T^{*}(I-P_{Q_{k}})T \bigr)x_{k},\quad \forall k \geq 1, \end{aligned}$$
where the step size \(\tau _{k}=\frac{\rho _{k}f(x_{k})}{\|\nabla f(x_{k})\|^{2}}\), with \(f(x_{k})=\frac{1}{2}\|(I-P_{Q_{k}})Tx_{k}\|^{2}\) and \(\nabla f(x_{k})= T^{*}(I-P_{Q_{k}})Tx_{k}\).
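To make the self-adaptive rule concrete, the following minimal Python sketch applies it to a one-dimensional split feasibility problem. This is purely illustrative and not taken from the paper: the intervals C and Q, the scalar operator T, and the constant \(\rho _{k}\equiv 1\) are all hypothetical choices. Note that the iteration never uses \(\|T\|\) explicitly.

```python
# Hypothetical 1-D instance of the Lopez et al. self-adaptive step size
# tau_k = rho_k * f(x_k) / ||grad f(x_k)||^2 for the split feasibility
# problem: find x in C with Tx in Q.

def proj_interval(x, lo, hi):              # metric projection onto [lo, hi]
    return min(max(x, lo), hi)

T = 2.0                                    # bounded linear operator (scalar)
C = (-1.0, 1.0)                            # IP_1: x in C
Q = (0.0, 4.0)                             # IP_2: Tx in Q  (solution set [0, 1])

x, rho = -1.0, 1.0                         # start infeasible; 0 < rho < 4
for _ in range(200):
    r = T * x - proj_interval(T * x, *Q)   # (I - P_Q) T x
    f, grad = 0.5 * r * r, T * r           # f(x) and grad f(x) = T*(I-P_Q)Tx
    if grad == 0.0:                        # Tx already in Q: f is minimized
        break
    x = proj_interval(x - rho * f / grad**2 * grad, *C)
```

In this toy instance the iterate halves its distance to the solution set at every step, so `x` approaches 0 geometrically while remaining in C.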
In addition, for approximating the null point of a maximal monotone operator, Alvarez and Attouch [1] introduced the following inertial proximal algorithm:
$$\begin{aligned} x_{n+1}=J_{\lambda _{n}}^{A} \bigl(x_{n}+\alpha _{n}(x_{n}-x_{n-1}) \bigr),\quad \forall n\geq 1, \end{aligned}$$
and obtained the weak convergence of the algorithm. Roughly speaking, the inertial technique may be exploited in some situations in order to “accelerate” convergence. This point of view has inspired various numerical methods built on the inertial terminology, all of which enjoy nice convergence properties by incorporating second-order information; see, e.g., Maingé [30], Alvarez [31, 32].
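The inertial proximal step can be sketched on a toy maximal monotone operator; the operator \(A(x)=x-b\) on the real line, the values of b, λ and the constant inertial weight α below are hypothetical choices for illustration only.

```python
# Sketch of the Alvarez--Attouch inertial proximal algorithm for the toy
# maximal monotone operator A(x) = x - b on R, so A^{-1}(0) = {b} and the
# resolvent has the closed form J_lam(y) = (y + lam*b) / (1 + lam).

b, lam, alpha = 5.0, 1.0, 0.3

def resolvent(y):                       # J_lam^A = (I + lam*A)^(-1)
    return (y + lam * b) / (1.0 + lam)

x_prev, x = 0.0, 1.0
for _ in range(100):
    w = x + alpha * (x - x_prev)        # inertial extrapolation step
    x_prev, x = x, resolvent(w)         # proximal step applied to w
# x now approximates the null point b of A
```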

So it is natural to ask the following question:

Question 1.1

Can we construct a new inertial algorithm for solving the SCNPP (1.1)–(1.2) for two set-valued mappings in Banach spaces without prior knowledge of the operator norm \(\|T\|\)?

Motivated and inspired by the works of Alvarez and Attouch [1], Gibali et al. [20], López et al. [2], Takahashi [3], Alofi et al. [25] and Suantai et al. [4], as well as Promluang and Kuman [5], we wish to provide an affirmative answer to this question. Our contribution is a new inertial method, combining the inertial proximal technique with a self-adaptive step-size rule, for solving the split common null point problem (SCNPP) (1.1)–(1.2) for two set-valued mappings in Banach spaces.

The outline of the paper is as follows. In Sect. 2, we collect definitions and results which are needed for our further analysis. In Sect. 3, our new inertial algorithms in Banach spaces are introduced and analyzed, and weak and strong convergence theorems are obtained. In addition, the split minimization problem is treated as an application of our results in Sect. 4. Finally, numerical experiments, including compressed sensing, and a comparison with related algorithms are provided to illustrate the performance of our new algorithms.

2 Preliminaries

Let E be a real Banach space with norm \(\|\cdot \|\) and let \(E^{*}\) be the dual space of E. A normalized duality mapping \(J:E\rightarrow 2^{E^{*}}\) is defined by
$$\begin{aligned} Jx= \bigl\{ x^{*}\in E^{*}: \bigl\langle x,x^{*} \bigr\rangle = \bigl\Vert x^{*} \bigr\Vert ^{2}= \Vert x \Vert ^{2} \bigr\} , \end{aligned}$$
where \(\langle \cdot ,\cdot \rangle \) denotes generalized duality pairing between E and \(E^{*}\). Let \(U=\{x\in E:\|x\|=1\}\). The norm of E is said to be Gâteaux differentiable if for each \(x,y\in U\), the limit
$$\begin{aligned} \lim_{t\rightarrow 0}\frac{ \Vert x+ty \Vert - \Vert x \Vert }{t} \end{aligned}$$
exists. In this case, E is called smooth. It is well known that E is smooth if and only if J is single-valued, and that if E is uniformly smooth then J is uniformly continuous on bounded subsets of E. We note that in a Hilbert space, J is the identity operator.

A Banach space E is said to be p-uniformly smooth if, for a fixed real number \(1< p\leq 2\), there exists a constant \(c>0\) such that \(\rho _{E}(t)\leq ct^{p}\) for all \(t>0\), where \(\rho _{E}\) denotes the modulus of smoothness of E. From Chang et al. [33] and Chidume [34], we know that if E is a 2-uniformly smooth Banach space, then there exists a constant \(c>0\) such that \(\|Jx-Jy\|\leq c\|x-y\|\) for all \(x,y\in E\).

A multi-valued mapping \(A: E\rightarrow 2^{E^{*}}\) with domain \(D(A)=\{x\in E: Ax\neq \emptyset \}\) is said to be monotone if
$$\begin{aligned} \bigl\langle x-y,x^{*}-y^{*} \bigr\rangle \geq 0, \end{aligned}$$
for all \(x,y \in D(A)\), \(x^{*}\in Ax\) and \(y^{*}\in Ay\). A monotone operator A on E is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on E. The following theorem is due to Browder [35], see also Takahashi [36].

Theorem 2.1

(Browder [35])

Let E be a uniformly convex and smooth Banach space, and let J be the duality mapping of E into \(E^{*}\). Let A be a monotone operator of E into \(2^{E^{*}}\). Then A is maximal if and only if for any \(r>0\),
$$\begin{aligned} \mathfrak{R}(J+rA)=E^{*}, \end{aligned}$$
where \(\mathfrak{R}(J+rA)\) is the range of \(J+rA\).
Let E be a uniformly convex Banach space with a Gâteaux differentiable norm, and let \(A: E\rightarrow 2^{E^{*}}\) be a maximal monotone operator. Now we consider the metric resolvent of A
$$\begin{aligned} Q^{A}_{\mu }= \bigl(I+\mu J^{-1}A \bigr)^{-1}, \quad \mu >0. \end{aligned}$$
It is well known that the operator \(Q^{A}_{\mu }\) is firmly nonexpansive and the fixed points of the operator \(Q^{A}_{\mu }\) are the null points of A; see, e.g., Kohsaka and Takahashi [37, 38]. The resolvent plays an essential role in the approximation theory for zero points of maximal monotone operators in Banach spaces. According to the work of Aoyama et al. [39], we have the following properties:
$$\begin{aligned} \bigl\langle Q^{A}_{\mu }z_{n}-y,J \bigl(z_{n}-Q^{A}_{\mu }z_{n} \bigr) \bigr\rangle \geq 0, \quad y\in A^{-1}(0), \end{aligned}$$
(2.1)
in particular, if E is a real Hilbert space, then
$$\begin{aligned} \bigl\langle J^{A}_{\mu }z_{n}-y,z_{n}-J^{A}_{\mu }z_{n} \bigr\rangle \geq 0, \quad y\in A^{-1}(0), \end{aligned}$$
(2.2)
where \(J^{A}_{\mu }=(I+\mu A)^{-1}\) is the general resolvent, \(A^{-1}(0)=\{z\in E:0\in Az\}\). For more details on the properties of firmly nonexpansive mappings, one can see, e.g., Aoyama et al. [39], Bauschke et al. [40].
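In a Hilbert space, the general resolvent admits a closed form for simple operators, which makes property (2.2) easy to verify numerically. As an illustrative (hypothetical) instance, take \(H=\mathbb{R}\) and \(A=\partial \vert \cdot \vert \); then \(J^{A}_{\mu }\) is the soft-thresholding map and \(A^{-1}(0)=\{0\}\).

```python
# For A = subdifferential of |.| on R, the resolvent J_mu^A = (I + mu*A)^(-1)
# is soft-thresholding, A^{-1}(0) = {0}, and inequality (2.2),
# <J_mu z - y, z - J_mu z> >= 0 for y in A^{-1}(0), can be checked directly.

def soft_threshold(z, mu):                  # closed form of J_mu^A
    return max(abs(z) - mu, 0.0) * (1.0 if z >= 0 else -1.0)

mu, y = 0.5, 0.0                            # y = 0 is the unique null point of A
for z in [-3.0, -0.2, 0.0, 0.4, 2.5]:
    Jz = soft_threshold(z, mu)
    assert (Jz - y) * (z - Jz) >= 0.0       # property (2.2) holds at each z
```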
Let H be a Hilbert space with the inner product \(\langle \cdot , \cdot \rangle \), induced norm \(\|\cdot \|\) and identity operator I. The symbols “→” and “⇀” denote strong and weak convergence, respectively. For a given sequence \(\{x_{n}\}\subset H\), \(w_{w}(x_{n})\) denotes the weak w-limit set of \(\{x_{n}\}\), that is,
$$\begin{aligned} w_{w}(x_{n}):= \bigl\{ x\in H:x_{n_{j}} \rightharpoonup x \text{ for some subsequence }\{n_{j}\} \text{ of } \{n\} \bigr\} . \end{aligned}$$
It is well known that
$$\begin{aligned} \Vert \alpha x+\beta y+\gamma z \Vert ^{2} =&\alpha \Vert x \Vert ^{2}+\beta \Vert y \Vert ^{2}+ \gamma \Vert z \Vert ^{2} \\ &{}-\alpha \beta \Vert x-y \Vert ^{2}-\beta \gamma \Vert y-z \Vert ^{2}-\gamma \alpha \Vert x-z \Vert ^{2}, \end{aligned}$$
(2.3)
for any \(x,y,z \in H\) and for all α, β, γ with \(\alpha +\beta +\gamma =1\).
Moreover, the following inequality holds:
$$\begin{aligned} \Vert x+y \Vert ^{2}\leq \Vert x \Vert ^{2}+2\langle y,x+y\rangle ,\quad \forall x,y \in H. \end{aligned}$$
(2.4)
Let C be a closed convex subset of H. For every element \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}x\), such that
$$\begin{aligned} \Vert x-P_{C}x \Vert =\min \bigl\{ \Vert x-y \Vert :y\in C \bigr\} . \end{aligned}$$
The operator \(P_{C}\) is called the metric projection of H onto C and some of its properties are summarized as follows:
$$\begin{aligned} \langle x-y,P_{C}x-P_{C}y\rangle \geq \Vert P_{C}x-P_{C}y \Vert ^{2},\quad \forall x,y\in H. \end{aligned}$$
Moreover, for all \(x\in H\) and \(y\in C\), \(P_{C}x\) is characterized by
$$\begin{aligned} \langle x-P_{C}x,y-P_{C}x\rangle \leq 0. \end{aligned}$$
(2.5)
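The characterization (2.5) can be checked numerically for a concrete set. In the illustrative sketch below, C is the closed unit ball of \(\mathbb{R}^{2}\) (a hypothetical choice), for which \(P_{C}\) has a simple closed form.

```python
import math

# Check the variational characterization (2.5) of the metric projection,
# <x - P_C x, y - P_C x> <= 0 for all y in C, with C the closed unit ball.

def proj_unit_ball(v):                      # P_C for C = {v : ||v|| <= 1}
    n = math.hypot(*v)
    return v if n <= 1.0 else (v[0] / n, v[1] / n)

x = (3.0, 4.0)                              # a point outside C
px = proj_unit_ball(x)                      # its nearest point in C
for y in [(0.0, 0.0), (1.0, 0.0), (-0.6, 0.8)]:   # sample points of C
    inner = (x[0] - px[0]) * (y[0] - px[0]) + (x[1] - px[1]) * (y[1] - px[1])
    assert inner <= 1e-12                   # inequality (2.5)
```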

Lemma 2.2

(see, e.g., Xu [41] and Maingé [42])

Assume that \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that
$$\begin{aligned} a_{n+1}\leq (1-\theta _{n})a_{n}+\delta _{n},\quad n\geq 0, \end{aligned}$$
where \(\{\theta _{n}\}\) is a sequence in \((0,1)\) and \(\{\delta _{n}\}\) is a sequence such that
  1. (1)

    \(\sum_{n=1}^{\infty }\theta _{n}=\infty \);

     
  2. (2)

    \(\lim \sup_{n\rightarrow \infty }\frac{\delta _{n}}{\theta _{n}}\leq 0\) or \(\sum_{n=1}^{\infty }|\delta _{n}|<\infty \).

     

Then \(\lim_{n\rightarrow \infty }a_{n}=0\).

Lemma 2.3

(see, e.g., Maingé [43])

Let \(\{\varGamma _{n}\}\) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{\varGamma _{n_{j}}\}\) of \(\{\varGamma _{n}\}\) such that \(\varGamma _{n_{j}}<\varGamma _{n_{j}+1}\) for all \(j\geq 0\). Also consider the sequence of integers \(\{\sigma (n)\}_{n \geq n_{0}}\) defined, for \(n_{0}\) large enough that the set below is nonempty, by
$$\begin{aligned} \sigma (n)=\max \{k\leq n:\varGamma _{k}\leq \varGamma _{k+1}\}. \end{aligned}$$
Then \(\{\sigma (n)\}_{n\geq n_{0}}\) is a nondecreasing sequence verifying \(\lim_{n\rightarrow \infty }\sigma (n)=\infty \) and, for all \(n\geq n_{0}\),
$$\begin{aligned} \max \{\varGamma _{\sigma (n)},\varGamma _{n}\}\leq \varGamma _{\sigma (n)+1}. \end{aligned}$$
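Maingé's device is easy to observe numerically. The short sketch below evaluates \(\sigma (n)\) on a small, hypothetical non-monotone sequence and verifies the conclusion \(\max \{\varGamma _{\sigma (n)},\varGamma _{n}\}\leq \varGamma _{\sigma (n)+1}\) on the available range.

```python
# Toy illustration of Lemma 2.3: sigma(n) = max{k <= n : Gamma_k <= Gamma_{k+1}}.
Gamma = [3.0, 1.0, 2.0, 1.5, 2.5, 2.0, 3.0, 2.8]   # hypothetical non-monotone data

def sigma(n):
    return max(k for k in range(n + 1) if Gamma[k] <= Gamma[k + 1])

for n in range(2, len(Gamma) - 1):
    s = sigma(n)
    assert max(Gamma[s], Gamma[n]) <= Gamma[s + 1]  # conclusion of Lemma 2.3
```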

Lemma 2.4

(see, e.g., Halpern [44] and Suzuki [40])

Let H be a real Hilbert space and \(\{x_{n}\}\subset H\) a sequence such that there exists a nonempty closed convex subset \(C\subset H\) satisfying
  1. (i)

    For every \(z\in C\), \(\lim_{n\rightarrow \infty }\|x_{n}-z \|\) exists;

     
  2. (ii)

    Any weak cluster point of \(\{x_{n}\}\) belongs to C.

     

Then there exists \(\bar{x}\in C\) such that \(\{x_{n}\}\) converges weakly to \(\bar{x}\).

Lemma 2.5

(see, e.g., Maingé [30])

Let \(\{\varGamma _{n}\}\) and \(\{\delta _{n}\}\) be sequences in \([0,+\infty )\) which satisfy:
  1. (i)

    \(\varGamma _{n+1}-\varGamma _{n}\leq \theta _{n}(\varGamma _{n}- \varGamma _{n-1})+\delta _{n}\);

     
  2. (ii)

    \(\sum_{n=1}^{\infty }\delta _{n}<\infty \);

     
  3. (iii)

    \(\theta _{n}\in [0,\theta ]\), where \(\theta \in [0,1)\).

     

Then \(\{\varGamma _{n}\}\) is a convergent sequence and \(\sum_{n=1}^{ \infty }[\varGamma _{n+1}-\varGamma _{n}]_{+}<\infty \), where \([t]_{+}=\max \{t,0\}\) for any \(t\in \mathbb{R}\).

3 Main results

In this section, we introduce our algorithms and state our main results.

Throughout the rest of this paper, we always assume that H is a real Hilbert space and E is a uniformly convex and 2-uniformly smooth Banach space. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be two maximal monotone operators. Let \(T:H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*}\rightarrow H\).

Consider the following split common null point problem in Banach spaces:
$$\begin{aligned}& \text{find }x^{*}\in H\text{ such that }0\in Ax^{*} \\& \text{and }y^{*}=Tx^{*}\in E\text{ such that }0\in By^{*}. \end{aligned}$$
Now we define the functions
$$\begin{aligned} f(x_{n})=\frac{1}{2} \bigl\Vert J \bigl(I-Q^{B}_{\mu } \bigr)Tx_{n} \bigr\Vert ^{2}, \qquad h(x_{n})= \frac{1}{2} \bigl\Vert \bigl(I-J_{r}^{A} \bigr)x_{n} \bigr\Vert ^{2}, \end{aligned}$$
and
$$\begin{aligned} F(x_{n})=T^{*}J \bigl(I-Q^{B}_{\mu } \bigr)Tx_{n}, \qquad H(x_{n})= \bigl(I-J_{r}^{A} \bigr)x_{n}, \end{aligned}$$
where J is the duality operator on E.

In the rest of this paper, we denote \(\varOmega =A^{-1}(0)\cap T^{-1}(B ^{-1}0)\), which means \(\varOmega =\{x^{*}\in H: x^{*}\in A^{-1}(0), Tx ^{*}\in B^{-1}(0)\}\).

3.1 Algorithms

Algorithm 3.1

Choose two positive sequences \(\{\epsilon _{n} \}\), \(\{\rho _{n}\}\) satisfying \(\sum_{n=1}^{\infty }\epsilon _{n}< \infty \), \(0<\rho _{n}<4\).

Select arbitrary starting points \(x_{0}, x_{1}\in H\), a constant \(\alpha \in [0,1)\), and choose \(\alpha _{n}\) such that \(0<\alpha _{n}<\bar{ \alpha _{n}}\), where
$$\begin{aligned} \bar{\alpha _{n}}= \textstyle\begin{cases} \min \{\alpha ,\epsilon _{n}(\max \{ \Vert x_{n}-x_{n-1} \Vert ^{2}, \Vert x_{n}-x _{n-1} \Vert \})^{-1}\},&x_{n}\neq x_{n-1}, \\ \alpha , &\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
Iterative Step. Given the iterates \(x_{n}\) (\(n\geq 1\)), for \(r>0\), \(\mu >0\), compute
$$\begin{aligned} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \end{aligned}$$
(3.1)
and calculate the step size
$$\begin{aligned} \lambda _{n}=\rho _{n} \frac{f(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}}, \end{aligned}$$
and the next iterate
$$\begin{aligned} x_{n+1}=J^{A}_{r} \bigl(I-\lambda _{n}T^{*}J \bigl(I-Q^{B}_{\mu } \bigr)T \bigr)w_{n}. \end{aligned}$$
(3.2)

Stop Criterion. If \(x_{n+1}=w_{n}\) then stop. Otherwise, set \(n:=n+1\) and return to Iterative Step.
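To illustrate the steps of Algorithm 3.1, the following minimal sketch runs it in a one-dimensional setting where both spaces are the real line (so J is the identity). The operators \(A(x)=x-2\), \(B(y)=y-6\) and \(Tx=3x\), as well as the parameter values, are hypothetical choices with \(\varOmega =\{2\}\); they are not the paper's numerical examples.

```python
# Minimal 1-D sketch of Algorithm 3.1 (hypothetical data): A(x) = x - 2,
# B(y) = y - 6, Tx = 3x, so Omega = A^{-1}(0) ∩ T^{-1}(B^{-1}(0)) = {2}.

r, mu, rho, alpha = 1.0, 1.0, 2.0, 0.3           # r, mu > 0 and 0 < rho < 4

def JA(y): return (y + r * 2.0) / (1.0 + r)      # resolvent J_r^A
def QB(y): return (y + mu * 6.0) / (1.0 + mu)    # metric resolvent Q_mu^B
T = lambda x: 3.0 * x                            # T* = T for a scalar

x_prev, x = 0.0, 1.0
for n in range(1, 101):
    eps_n = 1.0 / n**2                           # summable sequence eps_n
    d = abs(x - x_prev)
    a_bar = min(alpha, eps_n / max(d * d, d)) if d > 0 else alpha
    w = x + 0.5 * a_bar * (x - x_prev)           # inertial step (3.1), alpha_n < a_bar
    Fw = T(T(w) - QB(T(w)))                      # F(w) = T* J (I - Q_mu^B) T w
    Hw = w - JA(w)                               # H(w) = (I - J_r^A) w
    denom = Fw * Fw + Hw * Hw
    if denom == 0.0:                             # stop criterion: w solves the SCNPP
        x_prev, x = x, w
        break
    f = 0.5 * (T(w) - QB(T(w))) ** 2             # f(w)
    lam = rho * f / denom                        # self-adaptive step size
    x_prev, x = x, JA(w - lam * Fw)              # next iterate (3.2)
```

Observe that \(\|T\|\) is never used: the step size is built entirely from \(f(w_{n})\), \(F(w_{n})\) and \(H(w_{n})\).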

Algorithm 3.2

Choose positive sequences \(\{\epsilon _{n}\}\), \(\{\rho _{n}\}\), \(\{\beta _{n}\}\) and \(\{\gamma _{n}\}\) satisfying \(\sum_{n=1}^{\infty }\epsilon _{n}<\infty \), \(0<\rho _{n}<4\) and
$$\begin{aligned}& 0< \beta _{n},\gamma _{n}< 1, \qquad \inf \beta _{n}(1-\beta _{n}-\gamma _{n})>0, \qquad \forall n\in \mathbb{N}; \\& \lim_{n\rightarrow \infty }\gamma _{n}=0, \qquad \sum _{n=1}^{\infty }\gamma _{n}=\infty , \qquad \epsilon _{n}=o(\gamma _{n}). \end{aligned}$$
Select arbitrary starting points \(x_{0}, x_{1}\in H\), a constant \(\alpha \in [0,1)\), and choose \(\alpha _{n}\) such that \(0<\alpha _{n}<\bar{ \alpha _{n}}\), where
$$\begin{aligned} \bar{\alpha _{n}}= \textstyle\begin{cases} \min \{\alpha ,\epsilon _{n}(\max \{ \Vert x_{n}-x_{n-1} \Vert ^{2}, \Vert x_{n}-x _{n-1} \Vert \})^{-1}\},&x_{n}\neq x_{n-1}, \\ \alpha , &\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
Iterative Step. Given the iterates \(x_{n}\) (\(n\geq 1\)), for \(r>0\), \(\mu >0\), compute
$$\begin{aligned} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \end{aligned}$$
and calculate the step size
$$\begin{aligned} \lambda _{n}=\rho _{n} \frac{f(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}}, \end{aligned}$$
and the next iterate
$$\begin{aligned} x_{n+1}=(1-\beta _{n}-\gamma _{n})w_{n}+ \beta _{n}J^{A}_{r} \bigl(I-\lambda _{n}T^{*}J \bigl(I-Q^{B}_{\mu } \bigr)T \bigr)w_{n}. \end{aligned}$$
(3.3)

Stop Criterion. If \(x_{n+1}=w_{n}\) then stop. Otherwise, set \(n:=n+1\) and return to Iterative Step.
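Algorithm 3.2 differs from Algorithm 3.1 only in the relaxation step (3.3). A minimal sketch in the same hypothetical one-dimensional setting as before (\(A(x)=x-2\), \(B(y)=y-6\), \(Tx=3x\), \(\varOmega =\{2\}\)) is given below; the sequences \(\beta _{n}\), \(\gamma _{n}\), \(\epsilon _{n}\) are illustrative choices satisfying the stated conditions.

```python
# Minimal 1-D sketch of Algorithm 3.2 (hypothetical data, Omega = {2}).
r, mu, rho, alpha, beta = 1.0, 1.0, 2.0, 0.3, 0.5

def JA(y): return (y + r * 2.0) / (1.0 + r)      # resolvent J_r^A
def QB(y): return (y + mu * 6.0) / (1.0 + mu)    # metric resolvent Q_mu^B
T = lambda x: 3.0 * x                            # T* = T for a scalar

x_prev, x = 0.0, 1.0
for n in range(1, 2001):
    gamma_n = 1.0 / (n + 4)                      # gamma_n -> 0, sum gamma_n = inf
    eps_n = 1.0 / n**2                           # eps_n = o(gamma_n)
    d = abs(x - x_prev)
    a_bar = min(alpha, eps_n / max(d * d, d)) if d > 0 else alpha
    w = x + 0.5 * a_bar * (x - x_prev)           # inertial step
    Fw = T(T(w) - QB(T(w)))
    Hw = w - JA(w)
    denom = Fw * Fw + Hw * Hw
    if denom == 0.0:                             # stop criterion
        x_prev, x = x, w
        break
    lam = rho * 0.5 * (T(w) - QB(T(w))) ** 2 / denom
    z = JA(w - lam * Fw)
    x_prev, x = x, (1.0 - beta - gamma_n) * w + beta * z   # relaxation step (3.3)
```

Here \(\beta _{n}\equiv 0.5\) and \(\gamma _{n}\leq 0.2\) give \(\inf \beta _{n}(1-\beta _{n}-\gamma _{n})\geq 0.125>0\); convergence toward Ω is slower than in Algorithm 3.1 because of the vanishing relaxation \(\gamma _{n}\).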

3.2 Weak convergence analysis for Algorithm 3.1

Lemma 3.1

Let H be a real Hilbert space, E a strictly convex, reflexive and smooth Banach space, and let J be the duality mapping on E. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be maximal monotone operators such that \(A^{-1}(0)\neq \emptyset \) and \(B^{-1}(0) \neq \emptyset \). Let \(T:H\rightarrow E\) be a nonzero bounded linear operator and \(T^{*}\) be the adjoint operator of T. Suppose that \(\varOmega =A^{-1}(0)\cap T^{-1}(B^{-1}(0))\neq \emptyset \). Let \(\lambda ,\mu ,r>0\) and \(z\in H\). Then the following are equivalent:
  1. (1)

    \(z\in A^{-1}(0)\cap T^{-1} (B^{-1}(0))\);

     
  2. (2)

    \(z=J^{A}_{r}(I-\lambda T^{*}J(I-Q_{\mu }^{B})T)z\),

     
where \(J^{A}_{r}=(I+rA)^{-1}\), \(Q_{\mu }^{B}=(I+\mu J^{-1} B)^{-1}\).

Proof

Since \(A^{-1}(0)\cap T^{-1} (B^{-1}(0))\neq \emptyset \), there exists \(z_{0}\in A^{-1}(0)\) such that \(Tz_{0}\in B^{-1}(0)\).

\((2)\Rightarrow (1)\). Assuming \(z=J^{A}_{r}(I-\lambda T^{*}J(I-Q _{\mu }^{B})T)z\), it follows from property (2.2) of \(J^{A}_{r}\) that
$$\begin{aligned} \bigl\langle z-\lambda T^{*}J \bigl(Tz-Q_{\mu }^{B}Tz \bigr)-z,z-y \bigr\rangle \geq 0,\quad \forall y\in A^{-1}(0), \end{aligned}$$
which yields
$$\begin{aligned} \bigl\langle -\lambda T^{*}J \bigl(Tz-Q_{\mu }^{B}Tz \bigr),z-y \bigr\rangle \geq 0,\quad \forall y\in A^{-1}(0), \end{aligned}$$
and hence
$$\begin{aligned} \bigl\langle T^{*}J \bigl(Tz-Q_{\mu }^{B}Tz \bigr),z-y \bigr\rangle \leq 0,\quad \forall y \in A^{-1}(0). \end{aligned}$$
Therefore
$$\begin{aligned} \bigl\langle J \bigl(Tz-Q_{\mu }^{B}Tz \bigr),Tz-Tz_{0} \bigr\rangle \leq 0. \end{aligned}$$
(3.4)
On the other hand, since \(Q_{\mu }^{B}\) is the resolvent of B for \(\mu >0\), we have
$$\begin{aligned} \bigl\langle J \bigl(Tz-Q_{\mu }^{B}Tz \bigr),Q_{\mu }^{B}Tz-v \bigr\rangle \geq 0,\quad v \in B^{-1}(0). \end{aligned}$$
It follows from \(Tz_{0}\in B^{-1}(0)\) that
$$\begin{aligned} \bigl\langle J \bigl(Tz-Q_{\mu }^{B}Tz \bigr),Q_{\mu }^{B}Tz-Tz_{0} \bigr\rangle \geq 0. \end{aligned}$$
(3.5)
Combining with (3.4) and (3.5), we can get
$$\begin{aligned} \bigl\langle J \bigl(Tz-Q_{\mu }^{B}Tz \bigr),Tz-Q_{\mu }^{B}Tz \bigr\rangle \leq 0, \end{aligned}$$
that is,
$$\begin{aligned} \bigl\Vert Q_{\mu }^{B}Tz-Tz \bigr\Vert ^{2}\leq 0, \end{aligned}$$
which means that \(Q_{\mu }^{B}Tz=Tz\). Therefore we obtain \(z=J^{A} _{r}(I-\lambda T^{*}J(I-Q_{\mu }^{B})T)z=J^{A}_{r}z\). Consequently, \(z\in A^{-1}(0)\cap T^{-1} (B^{-1}0)\).
\((1)\Rightarrow (2)\). Since \(z\in A^{-1}(0)\cap T^{-1} (B ^{-1}(0))\), we have that \(Tz\in B^{-1}(0)\) and \(z\in A^{-1}(0)\). It follows that \(z=J^{A}_{r}z\) and \(Tz=Q_{\mu }^{B}Tz\). Thus we get
$$\begin{aligned} J^{A}_{r} \bigl(I-\lambda T^{*}J \bigl(I-Q_{\mu }^{B} \bigr)T \bigr)z=J^{A}_{r}z=z. \end{aligned}$$
This completes the proof. □

Lemma 3.2

Let H be a real Hilbert space, E a real 2-uniformly smooth Banach space, and let J be the duality mapping on E. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be maximal monotone operators such that \(A^{-1}(0)\neq \emptyset \) and \(B^{-1}(0)\neq \emptyset \). Let \(T: H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*}\rightarrow H\). Assume that \(T^{-1}(B^{-1}(0))\neq \emptyset \). Let \(F= T^{*}J(I-Q _{\mu }^{B})T\). Then F is Lipschitz continuous.

Proof

According to the work of Kohsaka and Takahashi [37, 38], \(Q_{\mu }^{B}\) is nonexpansive. Moreover, since E is a 2-uniformly smooth Banach space, there exists a constant \(c>0\) such that \(\|Jx-Jy\|\leq c\|x-y\|\) for all \(x,y\in E\), therefore we estimate
$$\begin{aligned} \Vert Fx-Fy \Vert =& \bigl\Vert T^{*}J \bigl(I-Q_{\mu }^{B} \bigr)Tx- T^{*}J \bigl(I-Q_{\mu }^{B} \bigr)Ty \bigr\Vert \\ =& \bigl\Vert T^{*} \bigl(J \bigl(I-Q_{\mu }^{B} \bigr)Tx-J \bigl(I-Q_{\mu }^{B} \bigr)Ty \bigr) \bigr\Vert \\ \leq & \bigl\Vert T^{*} \bigr\Vert \bigl\Vert J \bigl(I-Q_{\mu }^{B} \bigr)Tx-J \bigl(I-Q_{\mu }^{B} \bigr)Ty \bigr\Vert \\ \leq & c \bigl\Vert T^{*} \bigr\Vert \bigl\Vert \bigl(I-Q_{\mu }^{B} \bigr)Tx- \bigl(I-Q_{\mu }^{B} \bigr)Ty \bigr\Vert \\ \leq & c \bigl\Vert T^{*} \bigr\Vert \bigl( \Vert Tx-Ty \Vert + \bigl\Vert Q_{\mu }^{B}Tx-Q_{\mu }^{B}Ty \bigr\Vert \bigr) \\ \leq & 2c \Vert T \Vert ^{2} \Vert x-y \Vert , \end{aligned}$$
which implies that F is Lipschitz continuous. Similarly, \(I-J^{A} _{r}\) is Lipschitz continuous. This completes the proof. □

Lemma 3.3

Consider the split common null point problem with solution set \(\varOmega =A^{-1}(0)\cap T^{-1} (B^{-1}(0))\) in Banach spaces. If \(x_{n+1}=w_{n}\) in Algorithm 3.1, then \(w_{n}\in \varOmega \).

Proof

If \(x_{n+1}=w_{n}\), then we have \(w_{n}=J^{A}_{r}(I- \lambda _{n}T^{*}J(I-Q_{\mu }^{B})T)w_{n}\). According to Lemma 3.1, we conclude that \(w_{n}\in A^{-1}(0)\) and \(Tw_{n}\in B^{-1}(0)\), that is, \(w_{n}\in \varOmega \). The proof is complete. □

Theorem 3.4

Let H be a real Hilbert space and E be a uniformly convex and 2-uniformly smooth Banach space. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be two maximal monotone operators such that \(\varOmega =A^{-1}(0)\cap T^{-1} (B^{-1}(0))\neq \emptyset \). Let \(T: H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*}\rightarrow H\). Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges weakly to a point \(x^{*} \in \varOmega \).

Proof

Take any \(z\in \varOmega \); then \(z=J^{A}_{r} z\), \(Tz=Q_{\mu }^{B}Tz\) and \(J_{r}^{A}(I-\lambda _{n} T^{*}J(I-Q_{\mu }^{B})T)z=z\). Therefore from (2.3) we obtain
$$\begin{aligned} \Vert w_{n}-z \Vert ^{2} =& \bigl\Vert x_{n}+\alpha _{n}(x_{n}-x_{n-1}) \bigr\Vert ^{2} \\ =& \bigl\Vert (1+\alpha _{n}) (x_{n}-z)- \alpha _{n}(x_{n-1}-z) \bigr\Vert ^{2} \\ \leq &(1+\alpha _{n}) \Vert x_{n}-z \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-z \Vert ^{2}+ \alpha _{n}(1+\alpha _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2} \\ \leq &(1+\alpha _{n}) \Vert x_{n}-z \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-z \Vert ^{2}+2 \alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}, \end{aligned}$$
(3.6)
and since \(J_{r}^{A}\) is nonexpansive,
$$\begin{aligned} \Vert x_{n+1}-z \Vert ^{2} =& \bigl\Vert J_{r}^{A} \bigl(I-\lambda _{n} T^{*}J \bigl(I-Q_{\mu }^{B} \bigr)T \bigr)w _{n}-z \bigr\Vert ^{2} \\ \leq & \bigl\Vert \bigl(I-\lambda _{n} T^{*}J \bigl(I-Q_{\mu }^{B} \bigr)T \bigr)w_{n}-z \bigr\Vert ^{2} \\ =& \Vert w_{n}-z \Vert ^{2}-2\lambda _{n} \bigl\langle w_{n}-z,T^{*}J \bigl(I-Q_{\mu }^{B} \bigr)Tw _{n} \bigr\rangle +\lambda _{n}^{2} \bigl\Vert T^{*}J \bigl(I-Q_{\mu }^{B} \bigr)Tw_{n} \bigr\Vert ^{2} \\ =& \Vert w_{n}-z \Vert ^{2}-2\lambda _{n} \bigl\langle Tw_{n}-Tz,J \bigl(I-Q_{\mu }^{B} \bigr)Tw _{n} \bigr\rangle +\lambda _{n}^{2} \bigl\Vert F(w_{n}) \bigr\Vert ^{2}. \end{aligned}$$
It follows from property (2.1) of \(Q_{\mu }^{B}\) that
$$\begin{aligned} \bigl\langle Q_{\mu }^{B}Tw_{n}-Tz,J \bigl(Tw_{n}-Q_{\mu }^{B}Tw_{n} \bigr) \bigr\rangle \geq 0,\quad Tz\in B^{-1}(0), \end{aligned}$$
and then we have that
$$\begin{aligned} \bigl\langle Tw_{n}-Tz,J \bigl(I-Q_{\mu }^{B} \bigr)Tw_{n} \bigr\rangle =& \bigl\langle Tw_{n}-Q _{\mu }^{B}Tw_{n},J \bigl(Tw_{n}-Q_{\mu }^{B}Tw_{n} \bigr) \bigr\rangle \\ & + \bigl\langle Q_{\mu }^{B}Tw_{n}-Tz,J \bigl(Tw_{n}-Q_{\mu }^{B}Tw_{n} \bigr) \bigr\rangle \\ =& \bigl\Vert Tw_{n}-Q_{\mu }^{B}Tw_{n} \bigr\Vert ^{2}+ \bigl\langle Q_{\mu }^{B}Tw_{n}-Tz,J \bigl(Tw _{n}-Q_{\mu }^{B}Tw_{n} \bigr) \bigr\rangle \\ \geq & \bigl\Vert J \bigl(Tw_{n}-Q_{\mu }^{B}Tw_{n} \bigr) \bigr\Vert ^{2} \\ =&2f(w_{n}). \end{aligned}$$
Therefore we have from (3.6) that
$$\begin{aligned} \Vert x_{n+1}-z \Vert ^{2} \leq & \Vert w_{n}-z \Vert ^{2}-4\lambda _{n}f(w_{n})+ \lambda _{n}^{2} \bigl\Vert F(w_{n}) \bigr\Vert ^{2} \\ \leq &(1+\alpha _{n}) \Vert x_{n}-z \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-z \Vert ^{2}+2 \alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &{}+ \frac{\rho ^{2}_{n}f^{2}(w_{n})}{( \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2})^{2}} \bigl\Vert F(w_{n}) \bigr\Vert ^{2}-4\frac{\rho _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w _{n}) \Vert ^{2}} \\ \leq &(1+\alpha _{n}) \Vert x_{n}-z \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-z \Vert ^{2}+2 \alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &{}+(\rho _{n}-4)\frac{\rho _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}}. \end{aligned}$$
(3.7)
Thus we get
$$\begin{aligned} \Vert x_{n+1}-z \Vert ^{2}- \Vert x_{n}-z \Vert ^{2} \leq &\alpha _{n} \bigl( \Vert x_{n}-z \Vert ^{2}- \Vert x _{n-1}-z \Vert ^{2} \bigr)+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &{}+(\rho _{n}-4)\frac{\rho _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}} \\ \leq &\alpha _{n} \bigl( \Vert x_{n}-z \Vert ^{2}- \Vert x_{n-1}-z \Vert ^{2} \bigr)+2\alpha _{n} \Vert x _{n}-x_{n-1} \Vert ^{2} \end{aligned}$$
(3.8)
and
$$\begin{aligned}& (4-\rho _{n})\frac{\rho _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}} \\& \quad \leq \Vert x_{n}-z \Vert ^{2}- \Vert x_{n+1}-z \Vert ^{2}+\alpha _{n} \bigl( \Vert x_{n}-z \Vert ^{2}- \Vert x_{n-1}-z \Vert ^{2} \bigr)+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}. \end{aligned}$$
(3.9)
The fact that
$$\begin{aligned} \alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}\leq \bar{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert ^{2}\leq \epsilon _{n} \end{aligned}$$
implies \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|^{2}< \sum_{n=1}^{\infty }\epsilon _{n}<\infty \). Denoting \(\varGamma _{n}=\|x _{n}-z\|^{2}\) and using Lemma 2.5 in (3.8), we conclude that \(\{\|x_{n}-z\|^{2}\}\) is a convergent sequence, which implies that \(\{x_{n}\}\) is bounded, and so is \(\{w_{n}\}\).
Moreover, we have \(\sum_{n=1}^{\infty }(\|x_{n}-z\|^{2}-\|x_{n-1}-z \|^{2})_{+}<\infty \), and it follows from (3.9) that
$$\begin{aligned} \frac{f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}}\rightarrow 0. \end{aligned}$$
Since F and H are Lipschitz continuous by Lemma 3.2 and \(\{w_{n}\}\) is bounded, \(F(w_{n})\) and \(H(w_{n})\) are bounded; thus we have \(f(w_{n})\rightarrow 0\), and therefore
$$\begin{aligned} \bigl\Vert J \bigl(Tw_{n}-Q_{\mu }^{B}Tw_{n} \bigr) \bigr\Vert = \bigl\Vert Tw_{n}-Q_{\mu }^{B}Tw_{n} \bigr\Vert \rightarrow 0. \end{aligned}$$
(3.10)
Next we show that \(w_{w}(x_{n})\subset \varOmega \). Let \(\bar{x} \in w_{w}(x_{n})\) be an arbitrary element. Since \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{nk}\}\) of \(\{x_{n}\}\) which converges weakly to \(\bar{x}\). Note again that
$$\begin{aligned} \alpha _{n} \Vert x_{n}-x_{n-1} \Vert \leq \bar{\alpha _{n}} \Vert x_{n}-x_{n-1} \Vert \leq \epsilon _{n}\rightarrow 0, \end{aligned}$$
which implies that \(\|w_{n}-x_{n}\|=\alpha _{n}\|x_{n}-x_{n-1}\|\rightarrow 0\). Therefore, there exists a subsequence \(\{w_{nk}\}\) of \(\{w_{n}\}\) which converges weakly to \(\bar{x}\). It follows from the lower semicontinuity of \((I-Q_{\mu }^{B})T\) and J that
$$\begin{aligned} \bigl\Vert T\bar{x}-Q_{\mu }^{B}T\bar{x} \bigr\Vert =\liminf _{k\rightarrow \infty } \bigl\Vert Tw _{nk}-Q_{\mu }^{B}Tw_{nk} \bigr\Vert = 0, \end{aligned}$$
which means that \(T\bar{x}\in B^{-1}(0)\).
On the other hand, according to (2.4), we have
$$\begin{aligned} \Vert x_{n+1}-w_{n} \Vert ^{2} =& \Vert x_{n+1}-z-w_{n}+z \Vert ^{2} \\ \leq & \Vert w_{n}-z \Vert ^{2}- \Vert x_{n+1}-z \Vert ^{2}+2\langle x_{n+1}-w_{n}, x _{n+1}-z\rangle \\ \leq &(1+\alpha _{n}) \Vert x_{n}-z \Vert ^{2}-\alpha _{n} \Vert x_{n-1}-z \Vert ^{2}+2 \alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}- \Vert x_{n+1}-z \Vert ^{2} \\ &{}+2\langle x_{n+1}-w_{n}, x_{n+1}-z \rangle \\ =& \Vert x_{n}-z \Vert ^{2}- \Vert x_{n+1}-z \Vert ^{2}+\alpha _{n} \bigl( \Vert x_{n}-z \Vert ^{2}- \Vert x _{n-1}-z \Vert ^{2} \bigr) \\ &{}+2\langle x_{n+1}-w_{n}, x_{n+1}-z\rangle . \end{aligned}$$
(3.11)
According to property (2.2) of the resolvent, we have \(\langle J^{A} _{r}z_{n}-z,(I-\lambda _{n} T^{*}J(I-Q_{\mu }^{B})T)w_{n}-x_{n+1} \rangle \geq 0\), where \(z_{n}=(I-\lambda _{n} T^{*}J(I-Q_{\mu }^{B})T)w _{n}\), \(x_{n+1}= J^{A}_{r}z_{n} \) and \(z\in A^{-1}(0)\). Therefore, it follows from (3.10) that
$$\begin{aligned} \langle x_{n+1}-z,x_{n+1}-w_{n} \rangle \leq \bigl\langle x_{n+1}-z,- \lambda _{n} T^{*}J \bigl(Tw_{n}-Q_{\mu }^{B}Tw_{n} \bigr) \bigr\rangle \rightarrow 0. \end{aligned}$$
Thus, it follows from (3.11) that \(\|x_{n+1}-w_{n}\|\rightarrow 0\), which yields \(\|J^{A}_{r}(I-\lambda _{n} T^{*}J(I-Q_{\mu }^{B})T)w_{n}-w _{n}\|\rightarrow 0\). Since recursion (3.3) can be rewritten as \(w_{n}-x_{n+1}-\lambda _{n}T^{*}J(I-Q^{B}_{\mu })Tw_{n}\in rA x_{n+1}\), we can conclude that
$$\begin{aligned} \frac{1}{r} \bigl(w_{n}-x_{n+1}-\lambda _{n}T^{*}J \bigl(I-Q^{B}_{\mu } \bigr)Tw_{n} \bigr) \in Ax_{n+1}. \end{aligned}$$
In addition, from (3.10) and (3.11), we get that
$$\begin{aligned} \bigl\Vert w_{n}-x_{n+1}-\lambda _{n}T^{*}J \bigl(I-Q^{B}_{\mu } \bigr)Tw_{n} \bigr\Vert \leq & \Vert w _{n}-x_{n+1} \Vert +\lambda _{n} \bigl\Vert T^{*}J \bigl(I-Q^{B}_{\mu } \bigr)Tw_{n} \bigr\Vert \\ =& \Vert w_{n}-x_{n+1} \Vert +\lambda _{n} \bigl\Vert F(w_{n}) \bigr\Vert \rightarrow 0, \end{aligned}$$
which, by the demiclosedness of the graph of the maximal monotone operator A, means that \(0\in A\bar{x}\), that is, \(\bar{x}\in A^{-1}(0)\). Consequently, \(\bar{x}\in \varOmega \). Since the choice of \(\bar{x}\) is arbitrary, we conclude that \(w_{w}(x_{n}) \subset \varOmega \). Hence it follows from Lemma 2.4 that the result holds and the proof is complete. □

Remark 3.5

If the operators \(A:H\rightarrow 2^{H}\) and \(B:E\rightarrow 2^{E}\) are set-valued, odd and maximal monotone mappings, then the operator \(J_{r}^{A}(I-\lambda _{n}T^{*}J(I-Q_{ \mu }^{B})T)\) is asymptotically regular (see Ishikawa [45, Theorem 4.1] and Browder and Petryshyn [46, Theorem 5]) and odd. Consequently, the strong convergence of Algorithm 3.1 is obtained (see Baillon et al. [47, Theorem 1.1] and Byrne et al. [6, Theorem 4.3]).

Remark 3.6

If we take \(\lambda _{n}\equiv \gamma \) in Theorem 3.4, where \(\gamma \in (0,\frac{2}{L})\) and \(L=\|T^{*}T\|\), then the conclusion still holds.

3.3 Strong convergence analysis for Algorithm 3.2

For the strong convergence theorem of Algorithm 3.2, which we present next, we recall the minimum-norm element of Ω, which is a solution of the following problem:
$$\begin{aligned} \operatorname{argmin} \bigl\{ \Vert x \Vert :x\in H, x\in A^{-1}(0) \text{ and } y=Tx\in B^{-1}(0)\subset E \bigr\} . \end{aligned}$$

Theorem 3.7

Let H be a real Hilbert space and E be a uniformly convex and 2-uniformly smooth Banach space. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be two maximal monotone operators such that \(\varOmega \neq \emptyset \). Let \(T: H\rightarrow E\) be a bounded linear operator with adjoint operator \(T^{*}:E^{*}\rightarrow H\) and \(T\neq 0\). Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.2 converges strongly to \(z=P_{\varOmega }(0)\), the minimum-norm element of Ω.

Proof

We divide the proof into several steps. For simplicity, we denote \(u_{n}=J^{A}_{r}(I-\lambda _{n}T^{*}J(I-Q_{ \mu }^{B})T)w_{n}\).

Step 1. We show that the sequences \(\{x_{n}\}\), \(\{u_{n}\}\) and \(\{w_{n}\}\) are bounded. Since Ω is not empty, we take \(p\in \varOmega \), and then it follows from (3.7) that
$$\begin{aligned} \Vert u_{n}-p \Vert ^{2} \leq & \Vert w_{n}-p \Vert ^{2}+(\rho _{n}-4) \frac{\rho _{n}f^{2}(w _{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}}, \end{aligned}$$
which means that \(\|u_{n}-p\|\leq \|w_{n}-p\|\leq \|x_{n}-p\|+\alpha _{n}\|x_{n}-x_{n-1}\|\).
At the same time, we have that
$$\begin{aligned} \Vert x_{n+1}-p \Vert =& \bigl\Vert (1-\beta _{n}- \gamma _{n})w_{n}+\beta _{n}J_{r}^{A} \bigl(I- \lambda _{n}T^{*}J \bigl(I-Q_{\mu }^{B} \bigr)T \bigr)w_{n}-p \bigr\Vert \\ \leq & (1-\beta _{n}-\gamma _{n}) \Vert w_{n}-p \Vert +\beta _{n} \Vert u_{n}-p \Vert + \gamma _{n} \Vert p \Vert \\ \leq &(1-\gamma _{n}) \Vert w_{n}-p \Vert +\gamma _{n} \Vert p \Vert \\ \leq & (1-\gamma _{n}) \Vert x_{n}-p \Vert +(1-\gamma _{n})\alpha _{n} \Vert x_{n}-x _{n-1} \Vert +\gamma _{n} \Vert p \Vert \\ \leq & (1-\gamma _{n}) \Vert x_{n}-p \Vert +\gamma _{n} \biggl(\frac{(1-\gamma _{n}) \alpha _{n}}{\gamma _{n}} \Vert x_{n}-x_{n-1} \Vert + \Vert p \Vert \biggr). \end{aligned}$$
Since
$$\begin{aligned} \alpha _{n} \Vert x_{n}-x_{n-1} \Vert < \epsilon _{n}=o(\gamma _{n}), \end{aligned}$$
we have \(\frac{(1-\gamma _{n})\alpha _{n}}{\gamma _{n}}\|x_{n}-x_{n-1}\| \rightarrow 0\), so the sequence \(\{\frac{(1-\gamma _{n})\alpha _{n}}{ \gamma _{n}}\|x_{n}-x_{n-1}\|\}\) is bounded, and hence
$$\begin{aligned} \Vert x_{n+1}-p \Vert \leq & (1-\gamma _{n}) \Vert x_{n}-p \Vert +\gamma _{n} \biggl( \Vert p \Vert + \frac{(1- \gamma _{n})\alpha _{n}}{\gamma _{n}} \Vert x_{n}-x_{n-1} \Vert \biggr) \\ \leq &\max \bigl\{ \Vert x_{n}-p \Vert , \Vert p \Vert + \sigma \bigr\} , \end{aligned}$$
where \(\sigma =\sup_{n\in \mathbb{N}}\{\frac{(1-\gamma _{n})\alpha _{n}}{ \gamma _{n}}\|x_{n}-x_{n-1}\|\}\). By induction, \(\|x_{n}-p\|\leq \max \{\|x_{1}-p\|,\|p\|+\sigma \}\), so the sequence \(\{\|x_{n}-p\|\}\) is bounded, which in turn means that \(\{x_{n}\}\) is bounded, and so are \(\{u_{n}\}\) and \(\{w_{n}\}\).
Step 2. We show that \(\|x_{n+1}-x_{n}\|\rightarrow 0\) and \(x_{n}\rightarrow z\), where \(z=P_{\varOmega }(0)\), the minimum-norm element of Ω. To this end, we set \(y_{n}=(1-\beta _{n})w_{n}+\beta _{n}u _{n}\), and then \(x_{n+1}=y_{n}-\gamma _{n}w_{n}=(1-\gamma _{n})y_{n}- \gamma _{n}\beta _{n}(w_{n}-u_{n})\). Therefore we have from (2.4) that
$$\begin{aligned} \Vert x_{n+1}-z \Vert ^{2} =& \bigl\Vert (1- \gamma _{n}) (y_{n}-z)-\gamma _{n}z-\gamma _{n} \beta _{n}(w_{n}-u_{n}) \bigr\Vert ^{2} \\ \leq &(1-\gamma _{n})^{2} \Vert y_{n}-z \Vert ^{2}-2 \bigl\langle \gamma _{n} \beta _{n}(w _{n}-u_{n})+\gamma _{n}z,x_{n+1}-z \bigr\rangle \\ \leq &(1-\gamma _{n})^{2} \Vert y_{n}-z \Vert ^{2}-2\gamma _{n}\beta _{n}\langle w_{n}-u_{n},x_{n+1}-z\rangle +2\gamma _{n}\langle -z,x_{n+1}-z\rangle . \end{aligned}$$
Note that
$$\begin{aligned} \Vert y_{n}-z \Vert \leq &(1-\beta _{n}) \Vert w_{n}-z \Vert +\beta _{n} \Vert u_{n}-z \Vert \\ \leq & \Vert w_{n}-z \Vert , \end{aligned}$$
so
$$\begin{aligned} \Vert x_{n+1}-z \Vert ^{2} \leq &(1-\gamma _{n})^{2} \Vert w_{n}-z \Vert ^{2}-2\gamma _{n} \beta _{n}\langle w_{n}-u_{n},x_{n+1}-z\rangle +2\gamma _{n}\langle -z,x _{n+1}-z\rangle . \end{aligned}$$
(3.12)
On the other hand, it follows from (3.6) that
$$\begin{aligned} \Vert w_{n}-z \Vert ^{2} \leq & \Vert x_{n}-z \Vert ^{2}+\alpha _{n} \bigl( \Vert x_{n}-z \Vert ^{2}- \Vert x _{n-1}-z \Vert ^{2} \bigr)+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}, \end{aligned}$$
hence we have from (2.3) that
$$\begin{aligned} \Vert x_{n+1}-z \Vert ^{2} =& \bigl\Vert (1- \beta _{n}-\gamma _{n})w_{n}+\beta _{n}u_{n}-z \bigr\Vert ^{2} \\ =& \bigl\Vert (1-\beta _{n}-\gamma _{n}) (w_{n}-z)+\beta _{n}(u_{n}-z)+\gamma _{n}(-z) \bigr\Vert ^{2} \\ \leq &(1-\beta _{n}-\gamma _{n}) \Vert w_{n}-z \Vert ^{2}+\beta _{n} \Vert u_{n}-z \Vert ^{2}+\gamma _{n} \Vert z \Vert ^{2}\\ &{}-(1-\beta _{n}-\gamma _{n})\beta _{n} \Vert u_{n}-w _{n} \Vert ^{2} \\ \leq &(1-\gamma _{n}) \Vert w_{n}-z \Vert ^{2}-\rho _{n}(4-\rho _{n})\frac{\beta _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}}+ \gamma _{n} \Vert z \Vert ^{2} \\ &{}-(1-\beta _{n}-\gamma _{n})\beta _{n} \Vert u_{n}-w_{n} \Vert ^{2} \\ \leq &(1-\gamma _{n}) \bigl( \Vert x_{n}-z \Vert ^{2}+\alpha _{n} \bigl( \Vert x_{n}-z \Vert ^{2}- \Vert x _{n-1}-z \Vert ^{2} \bigr)+2 \alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \bigr) \\ &{}-\rho _{n}(4-\rho _{n})\frac{\beta _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}}\\ &{} + \gamma _{n} \Vert z \Vert ^{2}-(1-\beta _{n}- \gamma _{n})\beta _{n} \Vert u_{n}-w_{n} \Vert ^{2} \\ \leq & \Vert x_{n}-z \Vert ^{2}+\alpha _{n}(1-\gamma _{n}) \bigl( \Vert x_{n}-z \Vert ^{2}- \Vert x _{n-1}-z \Vert ^{2} \bigr)+2 \alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &{}-\rho _{n}(4-\rho _{n})\frac{\beta _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}}+ \gamma _{n} \bigl( \Vert z \Vert ^{2}- \Vert x_{n}-z \Vert ^{2} \bigr) \\ &{}-(1-\beta _{n}-\gamma _{n})\beta _{n} \Vert u_{n}-w_{n} \Vert ^{2} , \end{aligned}$$
(3.13)
which implies
$$\begin{aligned}& (1-\beta _{n}-\gamma _{n})\beta _{n} \Vert u_{n}-w_{n} \Vert ^{2}+ \rho _{n}(4-\rho _{n})\frac{\beta _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w_{n}) \Vert ^{2}} \\& \quad \leq \Vert x_{n}-z \Vert ^{2}- \Vert x_{n+1}-z \Vert ^{2}+\alpha _{n}(1-\gamma _{n}) \bigl( \Vert x _{n}-z \Vert ^{2}- \Vert x_{n-1}-z \Vert ^{2} \bigr) \\& \qquad {}+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}+\gamma _{n} \bigl( \Vert z \Vert ^{2}- \Vert x_{n}-z \Vert ^{2} \bigr). \end{aligned}$$
(3.14)

Next we consider two possible cases for the convergence of the sequence \(\{\|x_{n}-z\|^{2}\}\).

Case I. Assume that \(\{\|x_{n}-z\|\}\) is eventually nonincreasing, that is, there exists \(n_{0}\geq 0\) such that \(\|x_{n+1}-z\|\leq \|x_{n}-z \|\) for each \(n\geq n_{0}\). Then the limit of \(\|x_{n}-z\|\) exists and \(\lim_{n\rightarrow \infty }(\|x_{n+1}-z\|-\|x_{n}-z\|)=0\). Since \(\lim_{n\rightarrow \infty }\gamma _{n}=0\) and \(\alpha _{n}\|x_{n}-x _{n-1}\|^{2}<\epsilon _{n}=o (\gamma _{n})\rightarrow 0\), it follows from formula (3.14) that
$$\begin{aligned} \rho _{n}(4-\rho _{n})\frac{\beta _{n}f^{2}(w_{n})}{ \Vert F(w_{n}) \Vert ^{2}+ \Vert H(w _{n}) \Vert ^{2}}\rightarrow 0; \qquad (1-\beta _{n}-\gamma _{n})\beta _{n} \Vert u_{n}-w_{n} \Vert ^{2}\rightarrow 0. \end{aligned}$$
Note that since \(\inf (1-\beta _{n}-\gamma _{n})\beta _{n}>0\), \(\inf \rho _{n}(4-\rho _{n})>0\) and F and H are Lipschitz continuous, we obtain
$$\begin{aligned} \lim_{n\rightarrow \infty }f^{2}(w_{n})=0; \qquad \lim _{n\rightarrow \infty } \Vert u_{n}-w_{n} \Vert ^{2}=0, \end{aligned}$$
so \(f(w_{n})\rightarrow 0\) and \(\|u_{n}-w_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \); furthermore, \(\|y_{n}-w_{n}\|=\beta _{n}\|u _{n}-w_{n}\|\rightarrow 0\). In view of the fact that \(\|x_{n+1}-y_{n} \|=\gamma _{n}\|w_{n}\|\rightarrow 0\) and \(\|w_{n}-x_{n}\|=\alpha _{n} \|x_{n}-x_{n-1}\|<\epsilon _{n}\rightarrow 0\), we have
$$\begin{aligned} \Vert x_{n+1}-x_{n} \Vert \leq \Vert x_{n+1}-y_{n} \Vert + \Vert y_{n}-w_{n} \Vert + \Vert w_{n}-x_{n} \Vert \rightarrow 0. \end{aligned}$$
Similarly as in the proof of Theorem 3.4, we conclude that \(w_{w}(x _{n})\subset \varOmega \). It follows from (3.6) that
$$\begin{aligned} \Vert w_{n}-z \Vert ^{2} \leq & \Vert x_{n}-z \Vert ^{2}+\alpha _{n} \bigl( \Vert x_{n}-z \Vert ^{2}- \Vert x _{n-1}-z \Vert ^{2} \bigr)+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ \leq & \Vert x_{n}-z \Vert ^{2}+\alpha _{n} \Vert x_{n}-x_{n-1} \Vert \bigl( \Vert x_{n}-z \Vert + \Vert x _{n-1}-z \Vert \bigr)+2\alpha _{n} \Vert x_{n}-x_{n-1} \Vert ^{2} \\ =& \Vert x_{n}-z \Vert ^{2}+\alpha _{n} \Vert x_{n}-x_{n-1} \Vert \bigl( \Vert x_{n}-z \Vert + \Vert x_{n-1}-z \Vert +2 \Vert x_{n}-x_{n-1} \Vert \bigr) \\ \leq & \Vert x_{n}-z \Vert ^{2}+\alpha _{n} M \Vert x_{n}-x_{n-1} \Vert , \end{aligned}$$
where \(M=\sup_{n\in \mathbb{N}}\{\|x_{n}-z\|+\|x_{n-1}-z\|+2\|x_{n}-x _{n-1}\|\}\).
Combining with (3.12), we have
$$\begin{aligned} \Vert x_{n+1}-z \Vert ^{2} \leq &(1-\gamma _{n})^{2} \Vert w_{n}-z \Vert ^{2}-2\gamma _{n} \beta _{n}\langle w_{n}-u_{n},x_{n+1}-z\rangle \\ &{}+2\gamma _{n}\langle -z,x_{n+1}-z\rangle \\ \leq & (1-\gamma _{n})^{2} \bigl( \Vert x_{n}-z \Vert ^{2}+\alpha _{n} M \Vert x_{n}-x_{n-1} \Vert \bigr)-2\gamma _{n}\beta _{n}\langle w_{n}-u_{n},x_{n+1}-z \rangle \\ &{}+2\gamma _{n}\langle -z,x_{n+1}-z\rangle \\ \leq &(1-\gamma _{n}) \Vert x_{n}-z \Vert ^{2}+\alpha _{n}(1-\gamma _{n}) M \Vert x _{n}-x_{n-1} \Vert -2\gamma _{n}\beta _{n}\langle w_{n}-u_{n},x_{n+1}-z \rangle \\ &{}+2\gamma _{n}\langle -z,x_{n+1}-z\rangle . \end{aligned}$$
(3.15)
Due to property (2.5), we have that \(\langle 0-P_{\varOmega }(0),y-P_{ \varOmega }(0)\rangle \leq 0\), \(\forall y\in \varOmega \), and therefore
$$\begin{aligned} \limsup_{n\rightarrow \infty } \langle x_{n+1}-z,-z\rangle = \max _{\hat{z}\in w_{w}(x_{n})}\langle \hat{z}-z,-z\rangle \leq 0. \end{aligned}$$
Applying Lemma 2.2 to (3.15), since \(\gamma _{n}\rightarrow 0\), \(w_{n}-u _{n}\rightarrow 0\) and \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|< \infty \), we conclude that \(\|x_{n}-z\|\rightarrow 0\), that is, the sequence \(\{x_{n}\}\) converges strongly to \(z=P_{\varOmega }(0)\). Furthermore, we have from the property of the metric projection that, for all \(p\in \varOmega \),
$$\begin{aligned} \langle p-z,-z\rangle \leq 0\quad \Longleftrightarrow\quad \Vert z \Vert ^{2} \leq \langle p,z\rangle \leq \Vert z \Vert \Vert p \Vert \quad \Longrightarrow\quad \Vert z \Vert \leq \Vert p \Vert , \end{aligned}$$
which implies that z is the minimum-norm solution of the split null point problem.
Case II. Assume that Case I does not hold; then there exists a subsequence \(\{\|x_{nk}-z\|\}\) of \(\{\|x_{n}-z\|\}\) such that \(\|x_{nk}-z\|\leq \|x_{nk+1}-z\|\) for all \(k\in \mathbb{N}\). In this case, we define an indicator
$$\begin{aligned} \sigma (n)=\max \bigl\{ m\leq n: \Vert x_{m}-z \Vert \leq \Vert x_{m+1}-z \Vert \bigr\} \end{aligned}$$
such that \(\sigma (n)\rightarrow \infty \) as \(n\rightarrow \infty \) and \(\|x_{\sigma (n)}-z\|^{2}\leq \|x_{\sigma (n)+1}-z\|^{2}\), and so from (3.14)
$$\begin{aligned} &\rho _{\sigma (n)}(4-\rho _{\sigma (n)})\frac{\beta _{\sigma (n)}f ^{2}(w_{\sigma (n)})}{ \Vert F(w_{\sigma (n)}) \Vert ^{2}+ \Vert H(w_{\sigma (n)}) \Vert ^{2}}+(1-\beta _{\sigma (n)}-\gamma _{\sigma (n)})\beta _{\sigma (n)} \Vert u _{\sigma (n)}-w_{\sigma (n)} \Vert ^{2} \\ &\quad \leq \Vert x_{\sigma (n)}-z \Vert ^{2}- \Vert x_{\sigma (n)+1}-z \Vert ^{2}+\alpha _{\sigma (n)}(1-\gamma _{\sigma (n)}) \bigl( \Vert x_{\sigma (n)}-z \Vert ^{2}- \Vert x_{{\sigma (n)}-1}-z \Vert ^{2} \bigr) \\ &\qquad {}+2\alpha _{\sigma (n)} \Vert x_{\sigma (n)}-x_{{\sigma (n)}-1} \Vert ^{2}+\gamma _{\sigma (n)} \bigl( \Vert z \Vert ^{2}- \Vert x_{\sigma (n)}-z \Vert ^{2} \bigr) \\ &\quad \leq \alpha _{\sigma (n)}(1-\gamma _{\sigma (n)}) \bigl( \Vert x_{\sigma (n)}-z \Vert ^{2}- \Vert x_{{\sigma (n)}-1}-z \Vert ^{2} \bigr)+2\alpha _{\sigma (n)} \Vert x_{\sigma (n)}-x_{{\sigma (n)}-1} \Vert ^{2} \\ &\qquad {}+\gamma _{\sigma (n)} \bigl( \Vert z \Vert ^{2}- \Vert x_{\sigma (n)}-z \Vert ^{2} \bigr). \end{aligned}$$
Since \(\gamma _{\sigma (n)}\rightarrow 0\), \(\alpha _{\sigma (n)}\|x_{\sigma (n)}-x_{\sigma (n)-1}\|\rightarrow 0\) and \(\{x_{n}\}\) is bounded, similarly as in the proof of Case I we get that
$$\begin{aligned}& \lim_{n\rightarrow \infty }f^{2}(w_{\sigma (n)})=0; \\& \lim_{n\rightarrow \infty }\beta _{\sigma (n)} \Vert u_{\sigma (n)}-w_{ \sigma (n)} \Vert ^{2}=0; \end{aligned}$$
(3.16)
$$\begin{aligned}& \limsup_{n\rightarrow \infty } \langle x_{\sigma (n)+1}-z,-z\rangle =\max _{\tilde{z}\in w_{w}(x_{\sigma (n)})}\langle \tilde{z}-z,-z \rangle \leq 0; \end{aligned}$$
(3.17)
and
$$\begin{aligned} \Vert x_{\sigma (n)+1}-z \Vert ^{2} \leq &(1-\gamma _{\sigma (n)}) \Vert x_{\sigma (n)}-z \Vert ^{2} \\ &{}+\alpha _{\sigma (n)}(1-\gamma _{\sigma (n)}) M_{1} \Vert x_{\sigma (n)}-x _{{\sigma (n)}-1} \Vert \\ &{}-2\gamma _{\sigma (n)}\beta _{\sigma (n)}\langle w_{\sigma (n)}-u _{\sigma (n)},x_{\sigma (n)+1}-z\rangle +2\gamma _{\sigma (n)}\langle -z,x_{\sigma (n)+1}-z \rangle , \end{aligned}$$
(3.18)
where \(M_{1}= \sup_{{\sigma (n)}\in \mathbb{N}}\{\|x_{\sigma (n)}-z\|+ \|x_{{\sigma (n)}-1}-z\|+2\|x_{\sigma (n)}-x_{{\sigma (n)}-1}\|\}\).
Therefore
$$\begin{aligned} \Vert x_{\sigma (n)}-z \Vert ^{2} \leq &2\beta _{\sigma (n)}\langle u_{\sigma (n)}-w_{\sigma (n)},x_{\sigma (n)+1}-z \rangle +2\langle -z,x_{\sigma (n)+1}-z\rangle \\ &{}+ \frac{\alpha _{\sigma (n)}(1-\gamma _{\sigma (n)})}{\gamma _{\sigma (n)}} M_{1} \Vert x_{\sigma (n)}-x_{{\sigma (n)}-1} \Vert . \end{aligned}$$
(3.19)
Combining the above formulas (3.16), (3.17) and (3.19), and noting that \(\frac{\alpha _{\sigma (n)}}{\gamma _{\sigma (n)}}\|x_{\sigma (n)}-x_{\sigma (n)-1}\|\rightarrow 0\), yields
$$\begin{aligned} \limsup_{n\rightarrow \infty } \Vert x_{\sigma (n)}-z \Vert ^{2}\leq 0, \end{aligned}$$
and hence
$$\begin{aligned} \lim_{n\rightarrow \infty } \Vert x_{\sigma (n)}-z \Vert ^{2}=0. \end{aligned}$$
From (3.18) we have
$$\begin{aligned} \limsup_{n\rightarrow \infty } \Vert x_{\sigma (n)+1}-z \Vert ^{2} = \limsup_{n\rightarrow \infty } \Vert x_{\sigma (n)}-z \Vert ^{2}=0. \end{aligned}$$
Thus \(\lim_{n\rightarrow \infty }\|x_{\sigma (n)+1}-z\|^{2}=0 \). Therefore, according to Lemma 2.3, we have
$$\begin{aligned} 0\leq \Vert x_{n}-z \Vert ^{2}\leq \max \bigl\{ \Vert x_{\sigma (n)}-z \Vert ^{2}, \Vert x_{n}-z \Vert ^{2} \bigr\} \leq \Vert x_{\sigma (n)+1}-z \Vert ^{2} \rightarrow 0. \end{aligned}$$
Consequently, sequence \(\{x_{n}\}\) converges strongly to \(z=P_{\varOmega }(0)\), which is the minimum-norm element of Ω. The proof is complete. □

Remark 3.8

If there are two firmly nonexpansive operators \(U:H\rightarrow H\) and \(W:E\rightarrow E\), where H and E are n- and m-dimensional Euclidean spaces, respectively, and T is a real \(m\times n\) matrix, we can consider the following split common fixed point problem: find \(x^{*}\in \operatorname{Fix}(U)\) such that \(Tx^{*}\in \operatorname{Fix}(W)\), where \(\operatorname{Fix}(U)\) and \(\operatorname{Fix}(W)\) are the fixed point sets of U and W, respectively. Taking \(J_{r}^{A}= U\) and \(Q_{\mu }^{B}=W\), we can then approximate the solution of the split common fixed point problem with the above algorithms. For directed operators in Euclidean spaces, the above algorithms also work for the split common fixed point problem; one can refer to the work of Censor and Segal [48] and Kaznoon [49].

4 Applications

In this part, we apply our results to solving the split minimization problem. The split minimization problem in Banach spaces is formulated as follows: find \(x\in H\) such that
$$\begin{aligned} x\in \operatorname{argmin} f \subset H,\qquad Tx\in \operatorname{argmin} g \subset E, \end{aligned}$$
(4.1)
where H and E are real Hilbert and Banach spaces, respectively, \(f:H\rightarrow \mathbb{R}\), \(g:E\rightarrow \mathbb{R}\) are two proper convex lower semicontinuous functions and \(T:H\rightarrow E\) is a bounded linear operator.
Denote
$$\begin{aligned} \mathrm{Prox}_{\mu g}(x)=\mathop{\operatorname{argmin}}\limits_{u\in E} \biggl\{ g(u)+\frac{ \Vert u-x \Vert ^{2}}{2 \mu } \biggr\} , \end{aligned}$$
and
$$\begin{aligned} \mathrm{Prox}_{r f}(x)=\mathop{\operatorname{argmin}}\limits_{v\in H} \biggl\{ f(v)+\frac{ \Vert v-x \Vert ^{2}}{2r} \biggr\} . \end{aligned}$$
From Rockafellar [50, 51], one can see that \(\mathrm{Prox}_{\mu g}(x)\) is the metric resolvent of ∂g, and \(\mathrm{Prox}_{r f}(x)\) is the general resolvent of ∂f, where
$$\begin{aligned} \partial g= \bigl\{ x^{*}\in E^{*}: g(y)\geq \bigl\langle y-x,x^{*} \bigr\rangle +g(x); \forall y\in E \bigr\} , \end{aligned}$$
and
$$\begin{aligned} \partial f= \bigl\{ z\in H: f(y)\geq \langle y-x,z\rangle +f(x); \forall y \in H \bigr\} , \end{aligned}$$
are subdifferential operators of g and f, respectively. It is clear that \(\partial g:E\rightarrow 2^{E^{*}}\) and \(\partial f:H\rightarrow 2^{H}\) are maximal monotone operators and \((\partial g)^{-1}(0)= \operatorname{argmin} \{ g(x):x\in E\}\), \((\partial f)^{-1}(0)= \operatorname{argmin} \{ f(x):x\in H\}\).
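For concrete choices of f and g these proximal mappings have closed forms. As a small illustration (ours, not taken from the paper), take \(g(u)=\|u\|_{1}\), whose proximal mapping is componentwise soft thresholding, and \(f(v)=\frac{a}{2}\|v\|^{2}\), whose proximal mapping is the linear resolvent \(v\mapsto v/(1+ra)\):

```python
import math

def prox_l1(x, mu):
    # Prox_{mu g} for g(u) = ||u||_1: componentwise soft thresholding,
    # argmin_u { ||u||_1 + ||u - x||^2 / (2*mu) }.
    return [math.copysign(max(abs(v) - mu, 0.0), v) for v in x]

def prox_quad(x, r, a):
    # Prox_{r f} for f(v) = (a/2)||v||^2: the resolvent (I + r*df)^{-1},
    # which here is simply v -> v / (1 + r*a).
    return [v / (1.0 + r * a) for v in x]

print(prox_l1([3.0, -0.5, 1.5], 1.0))    # each entry shrunk toward 0 by mu
print(prox_quad([7.0, -14.0], 2.0, 3.0))
```

Both maps are single-valued and firmly nonexpansive, in line with the resolvent properties used above.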

Now we take \(A=\partial f\), \(B=\partial g\) in our theorems, and then the following results hold:

Theorem 4.1

Let H be a real Hilbert space and E be a uniformly convex and 2-uniformly smooth Banach space. Let \(f:H\rightarrow \mathbb{R}\), \(g:E\rightarrow \mathbb{R}\) be two proper convex lower semicontinuous functions such that \(\varOmega =(\partial f)^{-1}(0) \cap T^{-1} ((\partial g)^{-1}(0))\neq \emptyset \). Let \(T: H\rightarrow E\) be a bounded linear operator with adjoint operator \(T^{*}:E^{*} \rightarrow H\) and \(T\neq 0\). For arbitrary \(x_{0}, x_{1} \in H\), let
$$ \textstyle\begin{cases} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ x_{n+1}=\mathrm{Prox}_{r f}(I-\lambda _{n}T^{*}J(I-\mathrm{Prox}_{\mu g})T)w_{n}, \end{cases} $$
where the sequences \(\{\alpha _{n}\}\) and \(\{\lambda _{n}\}\) are the same as in Algorithm 3.1. Then the sequence \(\{x_{n}\}\) converges weakly to a point \(x^{*}\in \varOmega \).

Theorem 4.2

Let H be a real Hilbert space and E be a uniformly convex and 2-uniformly smooth Banach space. Let \(f:H\rightarrow \mathbb{R}\), \(g:E\rightarrow \mathbb{R}\) be two proper convex lower semicontinuous functions such that \(\varOmega =(\partial f)^{-1}(0) \cap T^{-1} ((\partial g)^{-1}(0))\neq \emptyset \). Let \(T: H\rightarrow E\) be a bounded linear operator with adjoint operator \(T^{*}:E^{*} \rightarrow H\) and \(T\neq 0\). For arbitrary \(x_{0}, x_{1} \in H\), let
$$ \textstyle\begin{cases} w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ x_{n+1}=(1-\beta _{n}-\gamma _{n})w_{n}+\beta _{n}\mathrm{Prox}_{r f}(I-\lambda _{n}T^{*}J(I-\mathrm{Prox}_{\mu g})T)w_{n}, \end{cases} $$
where the sequences \(\{\alpha _{n}\}\), \(\{\lambda _{n}\}\), \(\{\beta _{n} \}\) and \(\{\gamma _{n}\}\) are the same as in Algorithm 3.2. Then the sequence \(\{x_{n}\}\) converges strongly to \(z=P_{\varOmega }(0)\).

5 Numerical examples

In this section, we present some examples to illustrate the applicability, efficiency and stability of our inertial self-adaptive step size iterative algorithms. We have written all the codes in Matlab R2016b and ran them on an LG dual core personal computer.

5.1 Numerical behavior of Algorithm 3.1

Example 5.1

Let \(H=E=\mathbb{R}\). Define the operators A, B and T by \(Ax=3x\), \(Bx=2x\), \(Tx=x\). In this example, we choose \(\epsilon _{n}=\frac{1}{(n+1)^{2}}\) and \(\alpha \in (0,1)\). If \(\alpha <\epsilon _{n}(\max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x_{n-1}\| \})^{-1}\), then we set \(\alpha _{n}=\frac{\alpha }{2}\); otherwise, we take \(\alpha _{n}= \frac{1}{(n+2)^{2}}(\max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x _{n-1}\|\})^{-1}\). We set \(\rho _{n}=3-\frac{1}{n+1}\) for all \(n\in \mathbb{N}\) in Algorithm 3.1. We first test different α for given initial points \(x_{0}\) and \(x_{1}\), and then test different initial points for \(r=1\). We aim to find the common null point of A and B. According to Algorithm 3.1, we obtain the numerical results in Fig. 1.
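To make the recursion concrete, the following is a minimal sketch (ours, not the authors' Matlab code) of Algorithm 3.1 for this scalar example. Here the resolvents are explicit, \(J_{r}^{A}x=x/(1+3r)\) and \(Q_{\mu }^{B}x=x/(1+2\mu )\), the duality map J is the identity on \(\mathbb{R}\), and we assume the step-size ingredients \(f(w)=\frac{1}{2}\|(I-Q_{\mu }^{B})Tw\|^{2}\), \(F(w)=T^{*}J(I-Q_{\mu }^{B})Tw\) and \(H(w)=J(I-Q_{\mu }^{B})Tw\), consistent with the identities used in the convergence proof:

```python
def resolvent_A(x, r=1.0):
    # J_r^A x = (I + rA)^{-1} x with Ax = 3x
    return x / (1.0 + 3.0 * r)

def metric_resolvent_B(x, mu=1.0):
    # Q_mu^B x = (I + mu*B)^{-1} x with Bx = 2x
    return x / (1.0 + 2.0 * mu)

def algorithm31_scalar(x0=13.0, x1=10.0, alpha=0.5, steps=60):
    x_prev, x = x0, x1
    for n in range(1, steps + 1):
        # inertial parameter alpha_n chosen as in Example 5.1
        dx = abs(x - x_prev)
        m, eps_n = max(dx * dx, dx), 1.0 / (n + 1) ** 2
        a_n = alpha / 2 if (m == 0 or alpha < eps_n / m) else 1.0 / ((n + 2) ** 2 * m)
        w = x + a_n * (x - x_prev)
        g = w - metric_resolvent_B(w)        # (I - Q_mu^B) T w with T = I
        f_val, F, H = 0.5 * g * g, g, g      # assumed step-size ingredients
        rho_n = 3.0 - 1.0 / (n + 1)
        denom = F * F + H * H
        lam = rho_n * f_val / denom if denom > 0 else 0.0
        x_prev, x = x, resolvent_A(w - lam * F)
    return x

print(abs(algorithm31_scalar()))  # the iterates approach the unique null point 0
```

The initial points 13 and 10 are arbitrary; with \(\rho _{n}\in (2,3)\) the self-adaptive step reduces to \(\lambda _{n}=\rho _{n}/4\) in this scalar case, and the iterates contract rapidly toward 0.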
Figure 1

Results of Algorithm 3.1 for different parameters in Example 5.1

Example 5.2

Let \(H=E=\mathbb{R}^{3}\). Define the operators A, B and T as follows:
$$\begin{aligned} T=\begin{pmatrix} 6 & 3 & 1 \\ 8 & 7 & 5 \\ 3 & 6 & 2 \end{pmatrix},\qquad A=\begin{pmatrix} 1/3 & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix},\qquad B=\begin{pmatrix} 4 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 6 \end{pmatrix}. \end{aligned}$$
In this example, we set the parameters of Algorithm 3.1 as \(\rho _{n}=3- \frac{1}{n+1}\) and \(\epsilon _{n}=\frac{1}{(n+1)^{2}}\) for all \(n\in \mathbb{N}\). Similarly as in Example 5.1, if \(\alpha <\epsilon _{n}( \max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x_{n-1}\|\})^{-1}\), then \(\alpha _{n}=\frac{\alpha }{2}\); otherwise, we take \(\alpha _{n}= \frac{1}{(n+2)^{2}}(\max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x_{n-1}\|\}) ^{-1}\). At the same time, we set the parameters \(\beta _{n}= \frac{n-1}{n+1}\), \(\gamma _{n}=\frac{1}{n+1}\) in Algorithm 3.2. The experimental results of Algorithms 3.1 and 3.2 are reported in Figs. 2–3 and Tables 1 and 2.
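A hedged sketch of the same iteration for this matrix example (again ours, not the paper's code; the choices of f, F and H below are assumptions consistent with the identities used in the proof of Theorem 3.4) reproduces the behavior reported in Table 1:

```python
T = [[6.0, 3.0, 1.0], [8.0, 7.0, 5.0], [3.0, 6.0, 2.0]]
A_DIAG = [1.0 / 3.0, 1.0 / 2.0, 1.0]     # A = diag(1/3, 1/2, 1)
B_DIAG = [4.0, 5.0, 6.0]                 # B = diag(4, 5, 6)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def norm(v):
    return sum(c * c for c in v) ** 0.5

def algorithm31(x0, x1, alpha=0.5, r=1.0, mu=1.0, steps=200):
    Tt = [list(col) for col in zip(*T)]          # adjoint T* is the transpose
    x_prev, x = x0[:], x1[:]
    for n in range(1, steps + 1):
        dx = norm([x[i] - x_prev[i] for i in range(3)])
        m, eps_n = max(dx * dx, dx), 1.0 / (n + 1) ** 2
        a_n = alpha / 2 if (m == 0 or alpha < eps_n / m) else 1.0 / ((n + 2) ** 2 * m)
        w = [x[i] + a_n * (x[i] - x_prev[i]) for i in range(3)]
        Tw = matvec(T, w)
        QTw = [Tw[i] / (1.0 + mu * B_DIAG[i]) for i in range(3)]   # Q_mu^B Tw
        g = [Tw[i] - QTw[i] for i in range(3)]                     # (I - Q)Tw
        F = matvec(Tt, g)
        f_val = 0.5 * norm(g) ** 2
        denom = norm(F) ** 2 + norm(g) ** 2
        rho_n = 3.0 - 1.0 / (n + 1)
        lam = rho_n * f_val / denom if denom > 0 else 0.0
        y = [w[i] - lam * F[i] for i in range(3)]
        x_prev, x = x, [y[i] / (1.0 + r * A_DIAG[i]) for i in range(3)]  # J_r^A y
    return x

print(norm(algorithm31([13.0, -12.0, 25.0], [10.0, -5.0, 15.0])))  # tends to 0
```

Starting from \(x_{0}=(13,-12,25)\) and \(x_{1}=(10,-5,15)\), the first extrapolated point computed by this sketch is approximately \((9.998,-4.995,14.993)\), matching the row \(n=0\) of Table 1.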
Figure 2

Results of Algorithms 3.1 and 3.2 for different initial points in Example 5.2

Figure 3

Comparison of Algorithms 3.1 and 3.2 with VM and HM

Table 1

Convergence of the sequence in Algorithm 3.1

| n | \(x_{n}\) | \(x_{n+1}\) | \(w_{n}\) | \(\|x_{n+1}-x_{n}\|\) | \(\|x_{n+1}-w_{n}\|\) |
| --- | --- | --- | --- | --- | --- |
| 0 | (13,−12,25) | (10,−5,15) | (9.998,−4.995,14.993) | 12.5750 | 0.0088 |
| 1 | (10,−5,15) | (2.280,−7.305,5.707) | (2.277,−7.305,5.703) | 4.8141 | 0.0051 |
| 2 | (2.280,−7.305,5.707) | (2.972,−3.161,3.356) | (2.973,−3.154,3.352) | 2.8272 | 0.0083 |
| 3 | (2.972,−3.161,3.356) | (1.053,−2.830,1.306) | (1.047,−2.829,1.299) | 1.6818 | 0.0098 |
| 4 | (1.053,−2.830,1.306) | (1.336,−1.229,0.874) | (1.338,−1.218,0.871) | 1.0192 | 0.0121 |
| 5 | (1.336,−1.229,0.874) | (0.511,−1.101,0.289) | (0.498,−1.099,0.281) | 0.6520 | 0.0153 |
| ⋮ |  |  |  |  |  |
| 10 | (0.106,−0.166,0.006) | (0.111,−0.063,0.019) | (0.111,−0.057,0.020) | 0.0700 | 0.0059 |
| 20 | (0.334,−0.989,−0.329)e−04 | (0.007,0.269,0.041)e−04 | (−0.091,0.647,0.152)e−04 | 3.5041e−05 | 4.0568e−05 |
| 26 | (−0.378,0.219,−0.010)e−05 | (−0.111,0.156,0.023)e−05 | (−0.031,0.136,0.033)e−05 | 1.2179e−06 | 8.2810e−07 |
| 27 | (−0.111,0.156,0.023)e−06 | (−0.741,0.424,−0.028)e−06 | (−0.629,0.085,−0.106)e−06 | 4.864e−07 | 1.44e−07 |

Table 2

Convergence of the sequence in Algorithm 3.2

| n | \(x_{n}\) | \(x_{n+1}\) | \(w_{n}\) | \(\|x_{n+1}-x_{n}\|\) | \(\|x_{n+1}-w_{n}\|\) |
| --- | --- | --- | --- | --- | --- |
| 0 | (10,0,−10) | (−10,5,10) | (−10.003,5.001,10.003) | 28.7228 | 0.0039 |
| 1 | (−10,5,10) | (−2.985,1.319,4.384) | (−2.981,1.317,4.379) | 9.7107 | 0.0064 |
| 2 | (−2.985,1.319,4.384) | (−2.291,−0.037,1.814) | (−2.288,−0.043,1.802) | 2.9877 | 0.0134 |
| 3 | (−2.291,−0.037,1.814) | (−1.033,0.336,1.048) | (−1.018,0.341,1.039) | 1.5187 | 0.0183 |
| 4 | (−1.033,0.336,1.048) | (−0.313,0.343,0.568) | (−0.296,0.344,0.557) | 0.8660 | 0.0204 |
| 5 | (−0.313,0.343,0.568) | (−0.332,0.091,0.225) | (−0.332,0.082,0.212) | 0.4264 | 0.0156 |
| ⋮ |  |  |  |  |  |
| 10 | (−0.034,0.009,0.009) | (−0.012,0.011,0.006) | (−0.006,0.012,0.005) | 0.0223 | 0.0059 |
| 20 | (0.318,−0.203,0.045)e−05 | (0.311,−0.452,−0.013)e−06 | (0.321,−0.128,0.062)e−05 | 2.5567e−06 | 7.6702e−07 |
| 21 | (0.311,−0.452,−0.013)e−06 | (0.119,−0.167,−0.005)e−06 | (0.059,−0.156,−0.020)e−05 | 2.0896e−06 | 6.2688e−07 |
| 22 | (0.119,−0.167,−0.005)e−06 | (0.879,−0.575,0.075)e−06 | (0.787,−0.246,0.113)e−06 | 8.665e−07 | 3.432e−07 |

At this stage we would like to emphasize that the step sizes of Algorithms 3.1 and 3.2 are self-adaptive and not given beforehand, so there is no need to know the norm of the operator T. The above figures and tables show that the sequences \(\{x_{n}\}\) generated by Algorithms 3.1 and 3.2 approximate the null point \(x^{*}\in A^{-1}(0)\cap T^{-1}(B^{-1}(0))\).

5.2 Comparison of Algorithm 3.2 with other algorithms

In this part, we present several experiments to compare Algorithm 3.2 with other algorithms. The two algorithms used for comparison are the viscosity method (VM) of Suantai et al. [4] and the Halpern-type method (HM) of Alofi et al. [25], in which the step size depends on the norm of the operator T. For all three algorithms, the operators A, B, T are defined as in Example 5.2. In view of the fact that \(\|T\|\simeq 14.87\), we take the step size \(\lambda _{n}=0.001\) in the algorithms of Suantai et al. [4] and Alofi et al. [25].
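For reference, \(\|T\|\) can be estimated without any library support by power iteration on \(T^{*}T\), since \(\|T\|_{2}\) is the square root of the largest eigenvalue of \(T^{*}T\). This is a sketch of ours, not the code used in the paper:

```python
T = [[6.0, 3.0, 1.0], [8.0, 7.0, 5.0], [3.0, 6.0, 2.0]]

def operator_norm(M, iters=200):
    # power iteration on M'M; ||M||_2 = sqrt(lambda_max(M'M))
    rows, cols = len(M), len(M[0])
    v = [1.0] * cols
    for _ in range(iters):
        Mv = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        w = [sum(M[i][j] * Mv[i] for i in range(rows)) for j in range(cols)]  # M'Mv
        s = sum(c * c for c in w) ** 0.5
        v = [c / s for c in w]
    Mv = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    return sum(c * c for c in Mv) ** 0.5

print(round(operator_norm(T), 2))  # approximately 14.87
```

This confirms the value \(\|T\|\simeq 14.87\) used to pick the fixed step size for VM and HM, and illustrates the extra work that the self-adaptive step size avoids.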

We set the parameters \(\beta _{n}=\frac{2n-1}{2n+1}\), \(\rho _{n}=3- \frac{1}{n+1}\) and \(\gamma _{n}=\frac{1}{2n+1}\) in our Algorithm 3.2; \(\alpha _{n}=\frac{1}{2n+1}\), \(\beta _{n}=\gamma _{n}=\frac{n}{2n+1}\) and the contraction \(f(x)=\frac{x}{5}\) in the method of Suantai et al. [4]; and \(\beta _{n}=\frac{n}{2n+1}\), \(\alpha _{n}=\frac{1}{2n+1}\) and \(u_{n}=0\) in the method of Alofi et al. [25]. In addition, we choose the stopping criterion \(\|x_{n+1}-x_{n}\|\leq \mathit{DOL}\) for all the algorithms. Furthermore, we take \(x_{0}=(13,-12,25)\) and compare the numbers of iterations and the CPU times. The experimental results are reported in Fig. 4 and Table 3.
Figure 4

Numerical results for \(K=20\)

Table 3

Comparison of Algorithm 3.2 with other algorithms

| DOL | Method | Step size | Iter (n) | CPU time (s) | \(\frac{\|z-x_{n}\|}{\|x_{0}-x_{n+1}\|}\) |
| --- | --- | --- | --- | --- | --- |
| \(10^{-6}\) | Algorithm 3.2 | \(\lambda _{n}=\frac{\rho _{n}f(w_{n})}{\|F(w_{n})\|^{2}+\|H(w_{n})\|^{2}}\) | 19 | 0.143539 | 2.8971e−08 |
|  | (VM) Suantai et al. [4] | 0.001 | 85 | 0.155777 | 1.9452e−07 |
|  | (HM) Alofi et al. [25] | 0.001 | 87 | 0.156777 | 1.9582e−07 |
| \(10^{-8}\) | Algorithm 3.2 | \(\lambda _{n}=\frac{\rho _{n}f(w_{n})}{\|F(w_{n})\|^{2}+\|H(w_{n})\|^{2}}\) | 27 | 0.157909 | 2.8306e−10 |
|  | (VM) Suantai et al. [4] | 0.001 | 114 | 0.916747 | 1.9458e−09 |
|  | (HM) Alofi et al. [25] | 0.001 | 116 | 0.168193 | 2.01508e−09 |
| \(10^{-10}\) | Algorithm 3.2 | \(\lambda _{n}=\frac{\rho _{n}f(w_{n})}{\|F(w_{n})\|^{2}+\|H(w_{n})\|^{2}}\) | 31 | 0.147318 | 2.7699e−12 |
|  | (VM) Suantai et al. [4] | 0.001 | 143 | 1.251064 | 1.997e−11 |
|  | (HM) Alofi et al. [25] | 0.001 | 145 | 1.207683 | 2.1133e−11 |

From Table 3, we can see that our Algorithm 3.2 requires the fewest iterations and the least CPU time for each tolerance, so it has a clear competitive advantage. However, as mentioned in the previous sections, the main advantage of our Algorithm 3.2 is that the inertial technique is combined with a self-adaptive step size, so the method can be employed without prior knowledge of the operator norm.

5.3 Compressed sensing

For the last example we choose a problem from the field of compressed sensing, namely the recovery of a sparse and noisy signal from a limited number of samples. Let \(x_{0}\in \mathbb{R}^{n}\) be a K-sparse signal, \(K\ll n\). The sampling matrix \(A\in \mathbb{R}^{m\times n}\), \(m< n\), is simulated from the standard Gaussian distribution, and the vector \(b=Ax+\epsilon \), where ϵ is additive noise. When \(\epsilon =0\), there is no noise in the observed data. Our task is to recover the signal \(x_{0}\) from the data b. For further explanations, one can consult Nguyen and Shin [52].

For solving the problem, we recall the LASSO problem of Tibshirani [53]:
$$\begin{aligned}& \min_{x\in \mathbb{R}^{n}} \frac{1}{2} \Vert Ax-b \Vert _{2}^{2}, \\& \text{s.t.} \quad \Vert x \Vert _{1}\leq t, \end{aligned}$$
where \(t>0\) is a given constant. So, in relation with the SCNPP (1.1)–(1.2), we consider \(B_{1}^{-1}(0)=C:=\{x: \|x\|_{1}\leq t\}\), \(B_{2}^{-1}(0)=\{b\}\) and define
$$\begin{aligned} B_{1}(x)= \textstyle\begin{cases} \{u|\sup_{ \Vert y \Vert _{1}\leq t}\langle y-x,u\rangle \leq 0\}, & x\in C, \\ \emptyset , & \text{else}, \end{cases}\displaystyle \end{aligned}$$
and define
$$\begin{aligned} B_{2}(y)= \textstyle\begin{cases} H_{2}, & y=b, \\ \emptyset , & \text{else}. \end{cases}\displaystyle \end{aligned}$$
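The resolvent associated with \(B_{1}\) is the metric projection onto the \(\ell _{1}\)-ball \(C=\{x:\|x\|_{1}\leq t\}\). A standard sorting-based sketch of this projection (shown for illustration; not the authors' implementation) is:

```python
import math

def project_l1_ball(x, t):
    # Euclidean projection of x onto {y : ||y||_1 <= t}
    if sum(abs(v) for v in x) <= t:
        return list(x)
    u = sorted((abs(v) for v in x), reverse=True)
    cum, rho, css = 0.0, 0, 0.0
    for k, uk in enumerate(u, start=1):
        cum += uk
        if uk - (cum - t) / k > 0:      # largest k keeping the multiplier valid
            rho, css = k, cum
    theta = (css - t) / rho             # soft-threshold level
    return [math.copysign(max(abs(v) - theta, 0.0), v) for v in x]

print(project_l1_ball([3.0, 0.0, 0.0], 1.0))   # lands on the ball, l1 norm t
print(project_l1_ball([1.0, 1.0], 1.0))
```

Points already inside C are left unchanged; otherwise the output lies on the boundary of C, as the normal cone characterization of \(B_{1}\) requires.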
We set the parameters \(\beta _{n}=\frac{2n-1}{2n+1}\), \(\rho _{n}=3- \frac{1}{n+1}\) and \(\gamma _{n}=\frac{1}{2n+1}\) in our Algorithm 3.2 and compare with the results of Sitthithakerngkiet et al. [23] and Kazmi et al. [29]. For the experiment we choose the following setting: \(A\in \mathbb{R}^{m\times n}\) is generated randomly with \(m=2^{10}\), \(n=2^{12}\), and \(x_{0}\in \mathbb{R}^{n}\) contains K spikes with amplitude ±1 distributed randomly over the whole domain. In addition, for simplicity, we take \(f(x)=\frac{x}{2}\), \(S=I\), \(\alpha _{i}= \frac{1}{i+1}\) in [29]; \(S_{i}=I\), \(\alpha _{i}=10^{-3}/(i+1)\), \(\beta _{i}=0.5-1/(10i+2)\) in [23]; and \(\alpha _{i}=\frac{i-1}{i+1}\), \(\gamma _{i}=\frac{1}{i+1}\) in our algorithms. Moreover, we take \(t=K\) in all the algorithms and use the stopping criterion \(\|x_{n+1}-x_{n}\|\leq \mathit{DOL}\) with \(\mathit{DOL}=10^{-6}\). All the numerical results are presented in Table 4 and Figs. 4 and 5.
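To reproduce a small-scale version of this experiment, the sketch below generates data as described above (a Gaussian sampling matrix and K spikes of amplitude ±1) and recovers the signal with plain ISTA applied to the penalized form \(\min_{x}\frac{1}{2}\|Ax-b\|_{2}^{2}+\mu \|x\|_{1}\). This is only a simplified stand-in to illustrate the recovery task, not Algorithm 3.2 or any of the compared methods; the reduced dimensions, the \(1/\sqrt{m}\) column scaling and the value of μ are our illustrative choices.

```python
import numpy as np

def ista_lasso(A, b, mu, n_iter=2000):
    """ISTA for the penalized LASSO: min_x 0.5*||Ax-b||_2^2 + mu*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L        # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - mu / L, 0.0)  # soft-thresholding
    return x

# Small-scale stand-in for the paper's setup (there m = 2^10, n = 2^12).
rng = np.random.default_rng(0)
m, n, K = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # scaling is our normalization choice
x0 = np.zeros(n)
support = rng.choice(n, size=K, replace=False)
x0[support] = rng.choice([-1.0, 1.0], size=K)  # K spikes of amplitude +-1
b = A @ x0                                     # noiseless case, epsilon = 0
x_hat = ista_lasso(A, b, mu=1e-3)
```

With a small μ and noiseless data, the residual \(\|A\hat{x}-b\|_{2}\) becomes small after a few thousand iterations, which is why penalized-form baselines of this kind need far more iterations than the self-adaptive methods compared in Table 4.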
Figure 5. Numerical results for \(K=40\)

Table 4

Comparison of Algorithm 3.2 with those of Sitthithakerngkiet et al. [23] and Kazmi et al. [29]

| K, m and n | DOL | Method | Step size | Iter (n) |
| --- | --- | --- | --- | --- |
| \(K=50\), \(m=2^{10}\), \(n=2^{12}\) | \(10^{-6}\) | Algorithm 3.2 | \(\lambda _{n}\) | 1881 |
|  |  | Sitthithakerngkiet et al. [23] | 0.05 | 3262 |
|  |  | Kazmi et al. [29] | 0.05 | 28,674 |
| \(K=40\), \(m=2^{10}\), \(n=2^{12}\) | \(10^{-6}\) | Algorithm 3.2 | \(\lambda _{n}\) | 1779 |
|  |  | Sitthithakerngkiet et al. [23] | 0.05 | 2942 |
|  |  | Kazmi et al. [29] | 0.05 | 26,488 |
| \(K=20\), \(m=2^{10}\), \(n=2^{12}\) | \(10^{-6}\) | Algorithm 3.2 | \(\lambda _{n}\) | 1496 |
|  |  | Sitthithakerngkiet et al. [23] | 0.05 | 2094 |
|  |  | Kazmi et al. [29] | 0.05 | 19,488 |

6 Conclusion

Many important problems in mathematics, science, engineering and other fields can be reformulated as finding zero points (null points) of nonlinear operators. The split null point problem has received much attention due to its wide real-world applications, such as signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy, as well as in approximation and control theory. For solving the split common null point problem, many authors have dedicated their efforts to the construction of iterative algorithms. However, these algorithms suffer from the drawback that either the step size depends on the norm of the bounded linear operator in Banach spaces, or the maximal monotone operators are restricted to Hilbert spaces.

This motivated the study of the split common null point problem without prior knowledge of the operator norms in Banach spaces. The main result of this paper is a new inertial algorithm which incorporates a self-adaptive step-size rule to solve split null point problems for set-valued maximal monotone operators in Banach spaces. To some extent, the weak and strong convergence theorems for the new inertial algorithm in this paper complement the approximating methods for the solution of the split common null point problem, and extend and unify some known results (see, e.g., Byrne et al. [6], Takahashi [3], Alofi et al. [25], Suantai et al. [4] and Promluang and Kumam [5]). In addition, numerical examples and comparisons are presented to illustrate the efficiency and reliability of our algorithms.

Declarations

Acknowledgements

The authors express their deep gratitude to the referee and the editor for their valuable comments and suggestions, which helped tremendously in improving the quality of this paper and made it suitable for publication.

Funding

This article was funded by the National Science Foundation of China (11471059) and Science and Technology Research Project of Chongqing Municipal Education Commission (KJ 1706154) and the Research Project of Chongqing Technology and Business University (KFJJ2017069).

Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1) College of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing, China

References

  1. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
  2. López, G., Martin-Marquez, V., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)
  3. Takahashi, W.: The split common null point problem in Banach spaces. Arch. Math. (Basel) 104(4), 357–365 (2015)
  4. Suantai, S., Srisap, K., Naprang, N., Mamat, M., Yundon, V., Cholamjiak, J.: Convergence theorems for finding split common null point problem in Banach spaces. Appl. Gen. Topol. 18(2), 345–360 (2017)
  5. Promluang, K., Kumam, P.: Viscosity approximation method for split common null point problems between Banach spaces and Hilbert spaces. J. Inform. Math. Sci. 9(1), 27–44 (2017)
  6. Byrne, C., Censor, Y., Gibali, A., Reich, S.: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759–775 (2012)
  7. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
  8. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
  9. Byrne, C.L.: Iterative projection onto convex sets using multiple Bregman distances. Inverse Probl. 15, 1295–1313 (1999)
  10. Rockafellar, R.T.: On maximal monotonicity of sums of nonlinear operators. Trans. Am. Math. Soc. 149, 75–88 (1970)
  11. Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
  12. Ansari, Q.H., Rehan, A., Wen, C.F.: Implicit and explicit algorithms for split common fixed point problems. J. Nonlinear Convex Anal. 17(7), 1381–1397 (2016)
  13. Ansari, Q.H., Rehan, A.: Split feasibility and fixed point problems. In: Ansari, Q.H. (ed.) Nonlinear Analysis: Approximation Theory, Optimization and Applications, pp. 281–322. Springer, New Delhi (2014)
  14. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
  15. Ceng, L.C., Ansari, Q.H., Yao, J.C.: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64, 633–642 (2012)
  16. Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
  17. Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
  18. Yang, Q.: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 20, 1261–1266 (2004)
  19. Moudafi, A., Thakur, B.S.: Solving proximal split feasibility problems without prior knowledge of matrix norms. Optim. Lett. 8(7), 2099–2110 (2013). https://doi.org/10.1007/s11590-013-0708-4
  20. Gibali, A., Mai, D.T., Nguyen, T.V.: A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2018, 1–25 (2018)
  21. Shehu, Y., Iyiola, O.S.: Convergence analysis for the proximal split feasibility problem using an inertial extrapolation term method. J. Fixed Point Theory Appl. 19, 2483–2510 (2017)
  22. Ceng, L.C., Ansari, Q.H., Yao, J.C.: Mann type iterative methods for finding a common solution of split feasibility and fixed point problems. Positivity 16(3), 471–495 (2012)
  23. Sitthithakerngkiet, K., Deepho, J., Kumam, P.: Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems. Numer. Algorithms (2018). https://doi.org/10.1007/s11075-017-0462-2
  24. Takahashi, W., Yao, J.C.: Strong convergence theorems by hybrid methods for the split common null point problem in Banach spaces. Fixed Point Theory Appl. 2015, Article ID 87 (2015)
  25. Alofi, A.S., Alsulami, M., Takahashi, W.: Strongly convergent iterative method for the split common null point problem in Banach spaces. J. Nonlinear Convex Anal. 2, 311–324 (2016)
  26. Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13, 938–945 (2002)
  27. Takahashi, W.: The split common null point problem for generalized resolvents in two Banach spaces. Numer. Algorithms 75(4), 1065–1078 (2017)
  28. Ansari, Q.H., Rehan, A.: Iterative methods for generalized split feasibility problems in Banach spaces. Carpath. J. Math. 33(1), 9–26 (2017)
  29. Kazmi, K.R., Rizvi, S.H.: An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 8, 1113–1124 (2014)
  30. Mainge, P.E.: Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223–236 (2008)
  31. Alvarez, F.: On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 38(4), 1102–1119 (2000)
  32. Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2003)
  33. Chang, S.S., Cho, Y.J., Zhou, H.: Iterative Methods for Nonlinear Operator Equations in Banach Spaces. Nova Science Publishers, Huntington (2002)
  34. Chidume, C.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Notes in Mathematics, vol. 1965. Springer, London (2009)
  35. Browder, F.E.: Nonlinear maximal monotone operators in Banach spaces. Math. Ann. 175, 89–113 (1968)
  36. Takahashi, W.: Convex Analysis and Approximation of Fixed Points. Yokohama Publishers, Yokohama (2009)
  37. Kohsaka, F., Takahashi, W.: Existence and approximation of fixed points of firmly nonexpansive-type mappings in Banach spaces. SIAM J. Optim. 19(2), 824–835 (2008)
  38. Kohsaka, F., Takahashi, W.: Fixed point theorems for a class of nonlinear mappings related to maximal monotone operators in Banach spaces. Arch. Math. 91(2), 166–177 (2008)
  39. Aoyama, K., Kohsaka, F., Takahashi, W.: Three generalizations of firmly nonexpansive mappings: their relations and continuity properties. J. Nonlinear Convex Anal. 10(1), 131–147 (2009)
  40. Bauschke, H.H., Wang, X.F., Yao, L.J.: General resolvents for monotone operators: characterization and extension (2008). arXiv:0810.3905v1
  41. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(2), 240–256 (2002)
  42. Mainge, P.E.: Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 325, 469–479 (2007)
  43. Mainge, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
  44. Halpern, B.: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957–961 (1967)
  45. Ishikawa, S.: Fixed points and iteration of a nonexpansive mapping in a Banach space. Proc. Am. Math. Soc. 59, 65–71 (1976)
  46. Browder, F.E., Petryshyn, W.V.: The solution by iteration of nonlinear functional equations in Banach spaces. Bull. Am. Math. Soc. 72, 571–575 (1966)
  47. Baillon, J.B., Bruck, R.E., Reich, S.: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 4, 1–9 (1978)
  48. Censor, Y., Segal, A.: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587–600 (2009)
  49. Zaknoon, M.: Algorithmic developments for the convex feasibility problem. PhD thesis, University of Haifa, Haifa, Israel (2003)
  50. Rockafellar, R.T.: Characterization of the subdifferentials of convex functions. Pac. J. Math. 17, 497–510 (1966)
  51. Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
  52. Nguyen, T.L.N., Shin, Y.: Deterministic sensing matrices in compressed sensing: a survey. Sci. World J. 2013, 1–6 (2013)
  53. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B, Methodol. 58, 267–288 (1996)
  54. Eslamian, M., Vahidi, J.: Split common fixed point problem of nonexpansive semigroup. Mediterr. J. Math. 13, 1177–1195 (2016)
  55. Eslamian, M., Zamani Eskandani, G., Raeisi, M.: Split common null point and common fixed point problems between Banach spaces and Hilbert spaces. Mediterr. J. Math. 14, 119 (2017)
  56. Gibali, A.: A new split inverse problem and application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2(2), 243–258 (2017)

Copyright

© The Author(s) 2019
