New inertial algorithm for solving split common null point problem in Banach spaces
Journal of Inequalities and Applications volume 2019, Article number: 17 (2019)
Abstract
Inspired by the works of Alvarez and Attouch (Set-Valued Anal. 9:3–11, 2001), López et al. (Inverse Probl. 28:ID085004, 2012), Takahashi (Arch. Math. (Basel) 104(4):357–365, 2015), Suantai et al. (Appl. Gen. Topol. 18(2):345–360, 2017), and Promluang and Kumam (J. Inform. Math. Sci. 9(1):27–44, 2017), we propose a new inertial algorithm for solving the split common null point problem in Banach spaces without prior knowledge of the operator norms. Under mild and standard conditions, weak and strong convergence theorems for the proposed algorithms are obtained. The split minimization problem is also considered as an application of our results. Finally, computational examples and a comparison with related algorithms are presented to illustrate the efficiency and applicability of our new algorithm.
1 Introduction
In an excellent paper [6], Byrne, Censor, Gibali and Reich introduced the following split common null point problem (SCNPP) for set-valued operators: find a point \(x^{*}\in H_{1}\) such that

\( 0\in \bigcap_{i=1}^{p}A_{i}\bigl(x^{*}\bigr) \)  (1.1)

and

\( 0\in \bigcap_{j=1}^{r}B_{j}\bigl(T_{j}x^{*}\bigr), \)  (1.2)

where \(H_{1}\) and \(H_{2}\) are two real Hilbert spaces, \(A_{i}:H_{1}\rightarrow 2^{H_{1}}\) (\(1\leq i\leq p\)) and \(B_{j}:H_{2}\rightarrow 2^{H_{2}}\) (\(1\leq j\leq r\)) are maximal monotone operators, and \(T_{j}:H_{1}\rightarrow H_{2}\) are bounded linear operators.
The split common null point problem is motivated by many related problems. The first is the split inverse problem (SIP) formulated by Censor, Gibali and Reich [7]. It concerns a model in which two vector spaces X and Y and a linear operator \(A:X\rightarrow Y\) are given. In addition, two inverse problems are involved. The first one, denoted by \(\mathit{IP}_{1}\), is formulated in the space X, and the second one, denoted by \(\mathit{IP}_{2}\), is formulated in the space Y. Given these data, the SIP is formulated as follows: find a point \(x^{*}\in X\) that solves \(\mathit{IP}_{1}\) and such that the point \(y^{*}=Ax^{*}\in Y\) solves \(\mathit{IP}_{2}\).
The first instance of the SIP is the split convex feasibility problem (SCFP) (see, e.g., Censor and Elfving [8]), in which \(\mathit{IP}_{1}\) and \(\mathit{IP}_{2}\) are convex feasibility problems (CFP). The SCFP has been well studied during the last two decades, both theoretically and practically. In particular, the CFP has been used to model significant real-world problems in sensor networks, radiation therapy treatment planning, resolution enhancement, and many other instances; see, e.g., Byrne [9] and the references therein.
Soon after, many authors asked whether inverse problems other than the CFP can serve as \(\mathit{IP}_{1}\) and \(\mathit{IP}_{2}\) and be embedded in the SIP methodology. For example, can the SIP be formulated with a null point problem in each of the two spaces?
In fact, the SCNPP can be put in the context of the SIP and related works. For instance, the split variational inequality problem (SVIP) is an SIP with a variational inequality problem (VIP) in each of the two spaces (see, e.g., Censor et al. [7]). The SVIP is formulated as follows: find \(x^{*}\in C\) such that

\( \bigl\langle f\bigl(x^{*}\bigr),x-x^{*}\bigr\rangle \geq 0 \quad \text{for all } x\in C, \)

and such that \(y^{*}=Tx^{*}\in Q\) solves

\( \bigl\langle g\bigl(y^{*}\bigr),y-y^{*}\bigr\rangle \geq 0 \quad \text{for all } y\in Q, \)
where C and Q are nonempty closed convex subsets of the Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(f:H_{1}\rightarrow H_{1}\) and \(g:H_{2}\rightarrow H_{2}\) are two given operators, and \(T:H_{1}\rightarrow H_{2}\) is a bounded linear operator. If we take \(C=H_{1}\), \(Q=H_{2}\) and \(x=x^{*}-f(x^{*})\), \(y=y^{*}-g(y^{*})\) in the SVIP, then we obtain the split zero problem (SZP) introduced in Censor et al. [7] (Sect. 7.3), which is formulated as follows: find \(x^{*}\in H_{1}\) such that

\( f\bigl(x^{*}\bigr)=0 \quad \text{and}\quad g\bigl(Tx^{*}\bigr)=0. \)
Following the idea in Censor et al. [7] and Rockafellar [10], Moudafi [11] introduced the split monotone variational inclusion (SMVI), which generalizes the SVIP. The SMVI is formulated as follows: find \(x^{*}\in H_{1}\) such that

\( 0\in f\bigl(x^{*}\bigr)+B_{1}\bigl(x^{*}\bigr) \)

and such that \(y^{*}=Tx^{*}\in H_{2}\) solves

\( 0\in g\bigl(y^{*}\bigr)+B_{2}\bigl(y^{*}\bigr), \)
where \(B_{1}:H_{1}\rightarrow 2^{H_{1}}\) and \(B_{2}:H_{2}\rightarrow 2^{H_{2}}\) are two set-valued mappings, \(f:H_{1}\rightarrow H_{1}\) and \(g:H_{2}\rightarrow H_{2}\) are two given operators, and \(T:H_{1}\rightarrow H_{2}\) is a bounded linear operator. It is easy to see that the SZP is obtained from the SMVI by letting \(B_{1}\) and \(B_{2}\) be zero operators. Since in Moudafi [11] all the applications of the SMVI were presented for \(f=g=0\), these applications are also covered by the SCNPP. In addition, the SCNPP is a generalization of the SZP.
Consequently, the SCNPP (1.1)–(1.2) has attracted wide attention thanks to the motivation of the above related problems and works. For its applications in signal processing and image reconstruction, the reader can refer to Ansari and Rehan [12, 13], Censor et al. [14], Ceng et al. [15] and the references therein.
Following the idea of the CQ algorithm in Byrne [16, 17], the relaxed CQ algorithm in Yang [18] and the extragradient method in Ceng et al. [15], many authors have studied the approximation of solutions of the SCNPP (1.1)–(1.2) for two set-valued mappings in Hilbert spaces in recent years. For instance, Byrne et al. [6] studied the following iterative method for two set-valued maximal monotone operators in Hilbert spaces:

\( x_{n+1}=J_{\lambda }^{A}\bigl(x_{n}+\gamma T^{*}\bigl(J_{\lambda }^{B}-I\bigr)Tx_{n}\bigr), \)  (1.3)
and obtained weak convergence of the sequence under suitable conditions. For more details on the methods of solving the SCNPP and the related issues in Hilbert spaces, the reader might refer to Moudafi and Thakur [19], Gibali et al. [20], Censor et al. [7], Shehu and Iyiola [21], Ceng et al. [22], Sitthithakerngkiet et al. [23].
Based on the above works, Takahashi [3, 24] extended such a problem in Hilbert spaces to Banach spaces and then obtained strong convergence theorems. Soon afterwards, Alofi, Alsulami and Takahashi [25] introduced the following Halpern’s iteration to find a common solution of split null point problem between Hilbert and Banach spaces:
where \(J_{E}\) is the duality mapping on a Banach space, \(\{u_{n}\}\) is a sequence in a Hilbert space such that \(u_{n}\rightarrow u\), and the step size \(\lambda _{n}\) satisfies \(0<\lambda _{n}\|T\|^{2}< 2\). Under suitable assumptions, they obtained a strong convergence theorem. Very recently, Suantai et al. [4] proposed the following scheme to approximate the solution of the SCNPP for two set-valued mappings in Banach spaces:
where the step size satisfies \(0<\lambda _{n}\|T\|^{2}< 2\). For more works on the solution of the SCNPP for two set-valued mappings in Banach spaces and related issues, the reader might refer to Promluang and Kumam [5], Kamimura and Takahashi [26], Takahashi [27], Ansari and Rehan [28], Kazmi and Rizvi [29], among others.
Although the algorithms mentioned above enjoy good theoretical properties, such as weak or strong convergence to a solution of the SCNPP in Hilbert or Banach spaces, they share a drawback: each of (1.3) in Hilbert spaces and (1.4) and (1.5) in Banach spaces requires prior knowledge of the bounded operator norm \(\|T\|\). In general, computing or even estimating \(\|T\|\) is not a simple task, and a poor estimate might affect the convergence of these algorithms.
On the other hand, López et al. [2] presented an algorithm in Hilbert spaces for solving the split feasibility problem whose step size is self-adaptive:

\( x_{k+1}=P_{C_{k}}\bigl(x_{k}-\tau _{k}\nabla f(x_{k})\bigr), \)

where the step size \(\tau _{k}=\frac{\rho _{k}f(x_{k})}{\|\nabla f(x_{k})\|^{2}}\) with \(f(x_{k})=\frac{1}{2}\|(I-P_{Q_{k}})Tx_{k}\|^{2}\) and \(\nabla f(x_{k})= T^{*}(I-P_{Q_{k}})Tx_{k}\).
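To make the self-adaptive rule concrete, the following sketch runs this iteration on a toy split feasibility problem in \(\mathbb{R}^{2}\) with fixed sets \(C=\{x:x\geq 0\}\) and \(Q=\{y:y\leq 1\}\) (rather than the relaxed sets \(C_{k},Q_{k}\)); the matrix T, the starting point and the tolerance are our illustrative choices, not taken from [2]. Note that no estimate of \(\|T\|\) is ever computed.

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 3.0]])             # bounded linear operator (its norm is never used)
P_C = lambda x: np.clip(x, 0.0, None)  # projection onto C = {x : x >= 0}
P_Q = lambda y: np.clip(y, None, 1.0)  # projection onto Q = {y : y <= 1}

x = np.array([5.0, 4.0])
rho = 2.0                              # relaxation parameter rho_k in (0, 4)
for _ in range(200):
    r = T @ x - P_Q(T @ x)             # residual (I - P_Q)Tx
    fx = 0.5 * (r @ r)                 # f(x) = (1/2)||(I - P_Q)Tx||^2
    if fx < 1e-14:
        break
    g = T.T @ r                        # gradient T*(I - P_Q)Tx
    tau = rho * fx / (g @ g)           # self-adaptive step size
    x = P_C(x - tau * g)

print(x, fx)
```

Because the step size is built from the current residual, the iteration is well defined for any nonzero T; for \(\rho _{k}\in (0,4)\) the theory in [2] gives Fejér monotonicity with respect to the solution set.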
In addition, for approximating the null point of a maximal monotone operator, Alvarez and Attouch [1] introduced the following inertial proximal algorithm:

\( x_{n+1}=(I+\lambda _{n}A)^{-1}\bigl(x_{n}+\alpha _{n}(x_{n}-x_{n-1})\bigr), \)
and obtained the weak convergence of the algorithm. Roughly speaking, the inertial technique may be exploited in some situations in order to “accelerate” the convergence. This point of view inspired various numerical methods related to the inertial terminology, all of which have nice convergence properties obtained by incorporating second-order information; see, e.g., Maingé [30], Alvarez [31, 32].
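A minimal one-dimensional illustration of the acceleration effect (our own sketch, not from [1]): take \(A=\nabla \varphi \) with \(\varphi (x)=\frac{1}{2}x^{2}\), whose unique zero is 0 and whose resolvent is \(J_{\lambda }^{A}x=x/(1+\lambda )\), and compare the proximal point iteration with and without the inertial term \(\alpha (x_{n}-x_{n-1})\); the values \(\lambda =1\), \(\alpha =0.3\) are illustrative.

```python
# Proximal point vs. inertial proximal point for A x = x (zero of A is 0).
# Resolvent: J_lam(x) = x / (1 + lam).
lam, alpha, tol = 1.0, 0.3, 1e-12
J = lambda v: v / (1.0 + lam)

# plain proximal point: x_{n+1} = J(x_n)
x, plain_iters = 1.0, 0
while abs(x) >= tol:
    x = J(x)
    plain_iters += 1

# inertial variant: x_{n+1} = J(x_n + alpha * (x_n - x_{n-1}))
x_prev, x, inertial_iters = 1.0, 1.0, 0
while abs(x) >= tol:
    x_prev, x = x, J(x + alpha * (x - x_prev))
    inertial_iters += 1

print(plain_iters, inertial_iters)
```

For these parameters the inertial recursion has a spectral radius of about 0.39 versus 0.5 for the plain one, so it reaches the tolerance in fewer iterations (the inertial iterates may oscillate in sign while decaying).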
So it is natural to ask the following question:
Question 1.1
Can we construct a new inertial algorithm for solving the SCNPP (1.1)–(1.2) for two set-valued mappings in Banach spaces without prior knowledge of the operator norm \(\|T\|\)?
Motivated and inspired by the works of Alvarez and Attouch [1], Gibali et al. [20], López et al. [2], Takahashi [3], Alofi et al. [25], Suantai et al. [4], and Promluang and Kumam [5], we wish to provide an affirmative answer to this question. Our contribution is a new inertial method, combining the inertial proximal technique with a self-adaptive step size rule, for solving the split common null point problem (SCNPP) (1.1)–(1.2) for two set-valued mappings in Banach spaces.
The outline of the paper is as follows. In Sect. 2, we collect definitions and results which are needed for our further analysis. In Sect. 3, our new inertial algorithms in Banach spaces are introduced and analyzed, and weak and strong convergence theorems are obtained. In addition, the split minimization problem is considered as an application of our results in Sect. 4. Finally, numerical experiments, including a compressed sensing example and a comparison with related algorithms, are provided to illustrate the performance of our new algorithms.
2 Preliminaries
Let E be a real Banach space with norm \(\|\cdot \|\) and let \(E^{*}\) be the dual space of E. The normalized duality mapping \(J:E\rightarrow 2^{E^{*}}\) is defined by

\( J(x)=\bigl\{x^{*}\in E^{*}: \langle x,x^{*}\rangle =\|x\|^{2}=\|x^{*}\|^{2}\bigr\},\quad x\in E, \)
where \(\langle \cdot ,\cdot \rangle \) denotes the generalized duality pairing between E and \(E^{*}\). Let \(U=\{x\in E:\|x\|=1\}\). The norm of E is said to be Gâteaux differentiable if, for each \(x,y\in U\), the limit

\( \lim_{t\rightarrow 0}\frac{\|x+ty\|-\|x\|}{t} \)
exists. In this case, E is called smooth. It is well known that E is smooth if and only if J is single-valued, and that if E is uniformly smooth, then J is uniformly continuous on bounded subsets of E. We note that in a Hilbert space, J is the identity operator.
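Outside the Hilbert setting J is genuinely nonlinear. For the finite-dimensional \(\ell ^{p}\) spaces (\(1<p<\infty \)) it has the explicit form \(J(x)_{i}=\|x\|_{p}^{2-p}|x_{i}|^{p-1}\operatorname{sign}(x_{i})\), which makes the defining identities \(\langle x,Jx\rangle =\|x\|^{2}\) and \(\|Jx\|_{q}=\|x\|_{p}\) (q the conjugate exponent) easy to verify numerically. The sketch below, with our illustrative choices \(p=1.5\) and a fixed test vector, is not part of the paper:

```python
import numpy as np

def duality_map(x, p):
    """Normalized duality mapping on (R^n, ||.||_p); single-valued for 1 < p < inf."""
    nx = np.linalg.norm(x, p)
    return nx ** (2 - p) * np.abs(x) ** (p - 1) * np.sign(x)

p = 1.5
q = p / (p - 1)                         # conjugate exponent: ||Jx||_q = ||x||_p
x = np.array([1.0, -2.0, 0.5, 3.0])
Jx = duality_map(x, p)
print(x @ Jx, np.linalg.norm(x, p) ** 2, np.linalg.norm(Jx, q))
```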
A Banach space E is said to be p-uniformly smooth if, for a fixed real number \(1< p\leq 2\), there exists a constant \(c>0\) such that \(\rho (t)\leq ct^{p}\) for all \(t>0\), where ρ denotes the modulus of smoothness of E. From Chang et al. [33] and Chidume [34], we know that if E is a 2-uniformly smooth Banach space, then there exists a constant \(c>0\) such that \(\|Jx-Jy\|\leq c\|x-y\|\) for all \(x,y\in E\).
A multi-valued mapping \(A: E\rightarrow 2^{E^{*}}\) with domain \(D(A)=\{x\in E: Ax\neq \emptyset \}\) is said to be monotone if

\( \bigl\langle x-y,x^{*}-y^{*}\bigr\rangle \geq 0 \)
for all \(x,y \in D(A)\), \(x^{*}\in Ax\) and \(y^{*}\in Ay\). A monotone operator A on E is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on E. The following theorem is due to Browder [35], see also Takahashi [36].
Theorem 2.1
(Browder [35])
Let E be a uniformly convex and smooth Banach space, and let J be the duality mapping of E into \(E^{*}\). Let A be a monotone operator of E into \(2^{E^{*}}\). Then A is maximal if and only if, for any \(r>0\),

\( \mathfrak{R}(J+rA)=E^{*}, \)
where \(\mathfrak{R}(J+rA)\) is the range of \(J+rA\).
Let E be a uniformly convex Banach space with a Gâteaux differentiable norm, and let \(A: E\rightarrow 2^{E^{*}}\) be a maximal monotone operator. Now we consider the metric resolvent of A:

\( Q^{A}_{\mu }=\bigl(I+\mu J^{-1}A\bigr)^{-1},\quad \mu >0. \)
It is well known that the operator \(Q^{A}_{\mu }\) is firmly nonexpansive and the fixed points of the operator \(Q^{A}_{\mu }\) are the null points of A; see, e.g., Kohsaka and Takahashi [37, 38]. The resolvent plays an essential role in the approximation theory for zero points of maximal monotone operators in Banach spaces. According to the work of Aoyama et al. [39], we have the following properties:
in particular, if E is a real Hilbert space, then
where \(J^{A}_{\mu }=(I+\mu A)^{-1}\) is the general resolvent, \(A^{-1}(0)=\{z\in E:0\in Az\}\). For more details on the properties of firmly nonexpansive mappings, one can see, e.g., Aoyama et al. [39], Bauschke et al. [40].
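As a concrete Hilbert-space illustration of these properties (our own toy example, not from the paper): for \(A=\partial \|\cdot \|_{1}\) on \(\mathbb{R}^{n}\), the general resolvent \(J^{A}_{\mu }=(I+\mu A)^{-1}\) is the familiar soft-thresholding map, and both firm nonexpansiveness, \(\|J^{A}_{\mu }x-J^{A}_{\mu }y\|^{2}\leq \langle J^{A}_{\mu }x-J^{A}_{\mu }y,x-y\rangle \), and the fixed-point characterization \(\operatorname{Fix}(J^{A}_{\mu })=A^{-1}(0)=\{0\}\) can be checked numerically:

```python
import numpy as np

mu = 0.7
# Resolvent (I + mu * d||.||_1)^{-1} on R^n: soft-thresholding at level mu.
resolvent = lambda x: np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

rng = np.random.default_rng(2)
worst = 0.0
for _ in range(500):
    x, y = rng.normal(size=4), rng.normal(size=4)
    d = resolvent(x) - resolvent(y)
    # firm nonexpansiveness says ||d||^2 - <d, x - y> <= 0
    worst = max(worst, d @ d - d @ (x - y))
print(worst, resolvent(np.zeros(4)))
```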
Let H be a Hilbert space with the inner product \(\langle \cdot , \cdot \rangle \), induced norm \(\|\cdot \|\) and identity operator I. The symbols “→” and “⇀” denote strong and weak convergence, respectively. For a given sequence \(\{x_{n}\}\subset H\), \(w_{w}(x_{n})\) denotes the weak w-limit set of \(\{x_{n}\}\), that is,

\( w_{w}(x_{n})=\bigl\{x\in H: x_{n_{k}}\rightharpoonup x \text{ for some subsequence } \{x_{n_{k}}\} \text{ of } \{x_{n}\}\bigr\}. \)
It is well known that

\( \|\alpha x+\beta y+\gamma z\|^{2}=\alpha \|x\|^{2}+\beta \|y\|^{2}+\gamma \|z\|^{2}-\alpha \beta \|x-y\|^{2}-\alpha \gamma \|x-z\|^{2}-\beta \gamma \|y-z\|^{2} \)  (2.3)
for any \(x,y,z \in H\) and for all α, β, γ with \(\alpha +\beta +\gamma =1\).
Moreover, the following inequality holds:

\( \|x+y\|^{2}\leq \|x\|^{2}+2\langle y,x+y\rangle \quad \text{for all } x,y\in H. \)  (2.4)
Let C be a closed convex subset of H. For every element \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}x\), such that

\( \|x-P_{C}x\|\leq \|x-y\| \quad \text{for all } y\in C. \)
The operator \(P_{C}\) is called the metric projection of H onto C and some of its properties are summarized as follows:
Moreover, for all \(x\in H\) and \(y\in C\), \(P_{C}x\) is characterized by

\( \langle x-P_{C}x,y-P_{C}x\rangle \leq 0. \)  (2.5)
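This variational characterization, namely that the residual \(x-P_{C}x\) makes an obtuse angle with every direction \(y-P_{C}x\) into C, is easy to check numerically. The sketch below uses the closed unit ball, whose projection has the closed form \(P_{C}x=x/\max \{1,\|x\|\}\); the set C and the sampling are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
P_C = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto the unit ball of R^3

violations = 0
for _ in range(1000):
    x = 3.0 * rng.normal(size=3)       # arbitrary point of H = R^3
    y = rng.normal(size=3)
    y = y / np.linalg.norm(y)          # a point of C (on the unit sphere)
    # characterization: <x - P_C x, y - P_C x> <= 0 for every y in C
    if (x - P_C(x)) @ (y - P_C(x)) > 1e-10:
        violations += 1
print(violations)
```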
Lemma 2.2
(see, e.g., Xu [41] and Maingé [42])
Assume that \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

\( a_{n+1}\leq (1-\theta _{n})a_{n}+\delta _{n},\quad n\geq 0, \)
where \(\{\theta _{n}\}\) is a sequence in \((0,1)\) and \(\{\delta _{n}\}\) is a sequence such that
-
(1)
\(\sum_{n=1}^{\infty }\theta _{n}=\infty \);
-
(2)
\(\lim \sup_{n\rightarrow \infty }\frac{\delta _{n}}{\theta _{n}}\leq 0\) or \(\sum_{n=1}^{\infty }|\delta _{n}|<\infty \).
Then \(\lim_{n\rightarrow \infty }a_{n}=0\).
Lemma 2.3
(see, e.g., Maingé [43])
Let \(\{\varGamma _{n}\}\) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{\varGamma _{n_{j}}\}\) of \(\{\varGamma _{n}\}\) such that \(\varGamma _{n_{j}}<\varGamma _{n_{j}+1}\) for all \(j\geq 0\). Also consider the sequence of integers \(\{\sigma (n)\}_{n \geq n_{0}}\) defined by

\( \sigma (n)=\max \{k\leq n: \varGamma _{k}<\varGamma _{k+1}\}. \)
Then \(\{\sigma (n)\}_{n\geq n_{0}}\) is a nondecreasing sequence verifying \(\lim_{n\rightarrow \infty }\sigma (n)=\infty \) and, for all \(n\geq n_{0}\),

\( \varGamma _{\sigma (n)}\leq \varGamma _{\sigma (n)+1} \quad \text{and}\quad \varGamma _{n}\leq \varGamma _{\sigma (n)+1}. \)
Lemma 2.4
(see, e.g., Halpern [44] and Suzuki [40])
Let H be a real Hilbert space and let \(\{x_{n}\}\subset H\) be a sequence such that there exists a nonempty closed convex subset \(C\subset H\) satisfying:
-
(i)
For every \(z\in C\), \(\lim_{n\rightarrow \infty }\|x_{n}-z \|\) exists;
-
(ii)
Any weak cluster point of \(\{x_{n}\}\) belongs to C.
Then there exists \(\bar{x}\in C\) such that \(\{x_{n}\}\) converges weakly to x̄.
Lemma 2.5
(see, e.g., Maingé [30])
Let \(\{\varGamma _{k}\}\) and \(\{\delta _{n}\}\) be sequences in \([0,+\infty )\) which satisfy:
-
(i)
\(\varGamma _{n+1}-\varGamma _{n}\leq \theta _{n}(\varGamma _{n}- \varGamma _{n-1})+\delta _{n}\);
-
(ii)
\(\sum_{n=1}^{\infty }\delta _{n}<\infty \);
-
(iii)
\(\theta _{n}\in [0,\theta ]\), where \(\theta \in [0,1)\).
Then \(\{\varGamma _{n}\}\) is a convergent sequence and \(\sum_{n=1}^{\infty }[\varGamma _{n+1}-\varGamma _{n}]_{+}<\infty \), where \([t]_{+}=\max \{t,0\}\) for any \(t\in \mathbb{R}\).
3 Main results
In this section, we introduce our algorithms and state our main results.
Throughout the rest of this paper, we always assume that H is a real Hilbert space and E is a uniformly convex and 2-uniformly smooth Banach space. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be two maximal monotone operators, and let \(T:H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*}\rightarrow H\).
Consider the following split common null point problem in Banach spaces: find \(x^{*}\in H\) such that

\( 0\in A\bigl(x^{*}\bigr) \quad \text{and}\quad 0\in B\bigl(Tx^{*}\bigr). \)
Now we define the functions

\( f(x)=\frac{1}{2}\bigl\| \bigl(I-Q_{\mu }^{B}\bigr)Tx\bigr\| ^{2}+\frac{1}{2}\bigl\| \bigl(I-J_{r}^{A}\bigr)x\bigr\| ^{2} \)

and

\( F(x)=T^{*}J\bigl(I-Q_{\mu }^{B}\bigr)Tx, \qquad H(x)=\bigl(I-J_{r}^{A}\bigr)x, \)

where J is the duality mapping on E.
In the rest of this paper, we denote \(\varOmega =A^{-1}(0)\cap T^{-1}(B ^{-1}0)\), which means \(\varOmega =\{x^{*}\in H: x^{*}\in A^{-1}(0), Tx ^{*}\in B^{-1}(0)\}\).
3.1 Algorithms
Algorithm 3.1
Choose two positive sequences \(\{\epsilon _{n} \}\), \(\{\rho _{n}\}\) satisfying \(\sum_{n=1}^{\infty }\epsilon _{n}< \infty \), \(0<\rho _{n}<4\).
Select arbitrary starting points \(x_{0}, x_{1}\in H\), a constant \(\alpha \in [0,1)\), and choose \(\alpha _{n}\) such that \(0<\alpha _{n}<\bar{\alpha }_{n}\), where

\( \bar{\alpha }_{n}=\min \biggl\{ \alpha ,\frac{\epsilon _{n}}{\max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x_{n-1}\|\}} \biggr\} \) if \(x_{n}\neq x_{n-1}\), and \(\bar{\alpha }_{n}=\alpha \) otherwise.
Iterative Step. Given the iterates \(x_{n-1}\) and \(x_{n}\) (\(n\geq 1\)), for \(r>0\), \(\mu >0\), compute

\( w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \)
and calculate the step size

\( \lambda _{n}=\frac{\rho _{n}f(w_{n})}{\|F(w_{n})\|^{2}+\|H(w_{n})\|^{2}}, \)
and the next iterate

\( x_{n+1}=J^{A}_{r}\bigl(I-\lambda _{n}T^{*}J\bigl(I-Q_{\mu }^{B}\bigr)T\bigr)w_{n}. \)  (3.3)
Stop Criterion. If \(x_{n+1}=w_{n}\) then stop. Otherwise, set \(n:=n+1\) and return to Iterative Step.
Algorithm 3.2
Choose positive sequences \(\{\epsilon _{n}\}\), \(\{\rho _{n}\}\), \(\{\beta _{n}\}\) and \(\{\gamma _{n}\}\) satisfying \(\sum_{n=1}^{\infty }\epsilon _{n}<\infty \), \(0<\rho _{n}<4\) and
Select arbitrary starting points \(x_{0}, x_{1}\in H\), a constant \(\alpha \in [0,1)\), and choose \(\alpha _{n}\) such that \(0<\alpha _{n}<\bar{\alpha }_{n}\), where

\( \bar{\alpha }_{n}=\min \biggl\{ \alpha ,\frac{\epsilon _{n}}{\max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x_{n-1}\|\}} \biggr\} \) if \(x_{n}\neq x_{n-1}\), and \(\bar{\alpha }_{n}=\alpha \) otherwise.
Iterative Step. Given the iterates \(x_{n-1}\) and \(x_{n}\) (\(n\geq 1\)), for \(r>0\), \(\mu >0\), compute

\( w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \)
and calculate the step size

\( \lambda _{n}=\frac{\rho _{n}f(w_{n})}{\|F(w_{n})\|^{2}+\|H(w_{n})\|^{2}}, \)
and the next iterate

\( x_{n+1}=(1-\beta _{n}-\gamma _{n})w_{n}+\beta _{n}J^{A}_{r}\bigl(I-\lambda _{n}T^{*}J\bigl(I-Q_{\mu }^{B}\bigr)T\bigr)w_{n}. \)
Stop Criterion. If \(x_{n+1}=w_{n}\) then stop. Otherwise, set \(n:=n+1\) and return to Iterative Step.
3.2 Weak convergence analysis for Algorithm 3.1
Lemma 3.1
Let H be a real Hilbert space, E a strictly convex, reflexive and smooth Banach space, and let J be the duality mapping on E. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be maximal monotone operators such that \(A^{-1}(0)\neq \emptyset \) and \(B^{-1}(0) \neq \emptyset \). Let \(T:H\rightarrow E\) be a nonzero bounded linear operator and let \(T^{*}\) be the adjoint operator of T. Suppose that \(\varOmega =A^{-1}(0)\cap T^{-1}(B^{-1}(0))\neq \emptyset \). Let \(\lambda ,\mu ,r>0\) and \(z\in H\). Then the following are equivalent:
-
(1)
\(z\in A^{-1}(0)\cap T^{-1} (B^{-1}(0))\);
-
(2)
\(z=J^{A}_{r}(I-\lambda T^{*}J(I-Q_{\mu }^{B})T)z\),
where \(J^{A}_{r}=(I+rA)^{-1}\), \(Q_{\mu }^{B}=(I+\mu J^{-1} B)^{-1}\).
Proof
Since \(A^{-1}(0)\cap T^{-1} (B^{-1}(0))\neq \emptyset \), there exists \(z_{0}\in A^{-1}(0)\) such that \(Tz_{0}\in B^{-1}(0)\).
\((2)\Rightarrow (1)\). Assuming \(z=J^{A}_{r}(I-\lambda T^{*}J(I-Q _{\mu }^{B})T)z\), it follows from property (2.2) of \(J^{A}_{r}\) that
which yields
and hence
Therefore
On the other hand, since \(Q_{\mu }^{B}\) is the resolvent of B for \(\mu >0\), we have
It follows from \(Tz_{0}\in B^{-1}(0)\) that
Combining with (3.4) and (3.5), we can get
that is,
which means that \(Q_{\mu }^{B}Tz=Tz\). Therefore we obtain \(z=J^{A}_{r}(I-\lambda T^{*}J(I-Q_{\mu }^{B})T)z=J^{A}_{r}z\). Consequently, \(z\in A^{-1}(0)\cap T^{-1}(B^{-1}(0))\).
\((1)\Rightarrow (2)\). Since \(z\in A^{-1}(0)\cap T^{-1} (B ^{-1}(0))\), we have that \(Tz\in B^{-1}(0)\) and \(z\in A^{-1}(0)\). It follows that \(z=J^{A}_{r}z\) and \(Tz=Q_{\mu }^{B}Tz\). Thus we get
This completes the proof. □
Lemma 3.2
Let H be a real Hilbert space, E a real 2-uniformly smooth Banach space, and let J be the duality mapping on E. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be maximal monotone operators such that \(A^{-1}(0)\neq \emptyset \) and \(B^{-1}(0)\neq \emptyset \). Let \(T: H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*}\rightarrow H\). Assume that \(T^{-1}(B^{-1}(0))\neq \emptyset \). Let \(F= T^{*}J(I-Q_{\mu }^{B})T\). Then F is Lipschitz continuous.
Proof
According to the work of Kohsaka and Takahashi [37, 38], \(Q_{\mu }^{B}\) is nonexpansive. Moreover, since E is a 2-uniformly smooth Banach space, there exists a constant \(c>0\) such that \(\|Jx-Jy\|\leq c\|x-y\|\) for all \(x,y\in E\), therefore we estimate
which implies that F is Lipschitz continuous. Similarly, \(I-J^{A} _{r}\) is Lipschitz continuous. This completes the proof. □
Lemma 3.3
Let us consider the split common null point problem with solution set \(\varOmega =A^{-1}(0)\cap T^{-1}(B^{-1}(0))\) in Banach spaces. If \(x_{n+1}=w_{n}\) in Algorithm 3.1, then \(w_{n}\in \varOmega \).
Proof
If \(x_{n+1}=w_{n}\), then we have \(w_{n}=J^{A}_{r}(I- \lambda _{n}T^{*}J(I-Q_{\mu }^{B})T)w_{n}\). According to Lemma 3.1, we conclude that \(w_{n}\in A^{-1}(0)\) and \(Tw_{n}\in B^{-1}(0)\), that is, \(w_{n}\in \varOmega \). The proof is complete. □
Theorem 3.4
Let H be a real Hilbert space and E be a uniformly convex and 2-uniformly smooth Banach space. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be two maximal monotone operators such that \(\varOmega =A^{-1}(0)\cap T^{-1}(B^{-1}(0))\neq \emptyset \). Let \(T: H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*}\rightarrow H\). Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges weakly to a point \(x^{*} \in \varOmega \).
Proof
Take \(z\in \varOmega \); then \(z=J^{A}_{r} z\), \(Tz=Q_{\mu }^{B}Tz\) and \(J_{r}^{A}(I-\lambda _{n} T^{*}J(I-Q_{\mu }^{B})T)z=z\). Therefore from (2.3) we obtain
and since \(J_{r}^{A}\) is nonexpansive,
It follows from property (2.1) of \(Q_{\mu }^{B}\) that
and then we have that
Therefore we have from (3.6) that
Thus we get
and
The fact that \(\alpha _{n}\|x_{n}-x_{n-1}\|^{2}<\epsilon _{n}\), by the choice of \(\alpha _{n}\),
implies \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|^{2}< \sum_{n=1}^{\infty }\epsilon _{n}<\infty \). Denoting \(\varGamma _{n}=\|x _{n}-z\|^{2}\) and using Lemma 2.5 in (3.8), we conclude that \(\|x_{n}-z\|^{2}\) is a converging sequence, which implies that \(\{x_{n}\}\) is bounded, and so is \(\{w_{n}\}\).
Moreover, we have \(\sum_{n=1}^{\infty }(\|x_{n}-z\|^{2}-\|x_{n-1}-z \|^{2})_{+}<\infty \), and it follows from (3.9) that
Since F and H are Lipschitz continuous by Lemma 3.2 and \(\{w_{n}\}\) is bounded, the sequences \(\{F(w_{n})\}\) and \(\{H(w_{n})\}\) are bounded; thus \(f(w_{n})\rightarrow 0\), and therefore
Next we show that \(w_{w}(x_{n})\subset \varOmega \). Let \(\bar{x} \in w_{w}(x_{n})\) be an arbitrary element. Since \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which converges weakly to x̄. Note again that
which implies that \(\|w_{n}-x_{n}\|=\alpha _{n}\|x_{n}-x_{n-1}\|\rightarrow 0\). Therefore, there exists a subsequence \(\{w_{n_{k}}\}\) of \(\{w_{n}\}\) which converges weakly to x̄. It follows from the lower semicontinuity of \((I-Q_{\mu }^{B})T\) and J that
which means that \(T\bar{x}\in B^{-1}(0)\).
On the other hand, according to (2.4), we have
According to property (2.2) of the resolvent, we have \(\langle J^{A} _{r}z_{n}-z,(I-\lambda _{n} T^{*}J(I-Q_{\mu }^{B})T)w_{n}-x_{n+1} \rangle \geq 0\), where \(z_{n}=(I-\lambda _{n} T^{*}J(I-Q_{\mu }^{B})T)w _{n}\), \(x_{n+1}= J^{A}_{r}z_{n} \) and \(z\in A^{-1}(0)\). Therefore, it follows from (3.10) that
Thus, it follows from (3.11) that \(\|x_{n+1}-w_{n}\|\rightarrow 0\), which yields \(\|J^{A}_{r}(I-\lambda _{n} T^{*}J(I-Q_{\mu }^{B})T)w_{n}-w _{n}\|\rightarrow 0\). Since recursion (3.3) can be rewritten as \(w_{n}-x_{n+1}-\lambda _{n}T^{*}J(I-Q^{B}_{\mu })Tw_{n}\in rA x_{n+1}\), we can conclude that
In addition, from (3.10) and (3.11), we get that
which means that \(0\in Ax_{n+1}\), therefore \(0\in A\bar{x}\) and \(\bar{x}\in A^{-1}(0)\). Consequently, \(\bar{x}\in \varOmega \). Since the choice of x̄ is arbitrary, we conclude that \(w_{w}(x_{n}) \subset \varOmega \). Hence it follows from Lemma 2.4 that the result holds, and the proof is complete. □
Remark 3.5
If the operators \(A:H\rightarrow 2^{H}\) and \(B:E\rightarrow 2^{E^{*}}\) are set-valued, odd and maximal monotone mappings, then the operator \(J_{r}^{A}(I-\lambda _{n}T^{*}J(I-Q_{\mu }^{B})T)\) is asymptotically regular (see Ishikawa [45, Theorem 4.1] and Browder and Petryshyn [46, Theorem 5]) and odd. Consequently, the strong convergence of Algorithm 3.1 is obtained (see Baillon et al. [47, Theorem 1.1], Byrne et al. [6, Theorem 4.3]).
Remark 3.6
If we take \(\lambda _{n}\equiv \gamma \) in Theorem 3.4, where \(\gamma \in (0,\frac{2}{L})\), \(L=\|T^{*}T\|\), the result holds.
3.3 Strong convergence analysis for Algorithm 3.2
For the strong convergence theorem of Algorithm 3.2, which we present next, we recall the minimum-norm element of Ω, which is the solution of the following problem: find \(x^{*}\in \varOmega \) such that

\( \|x^{*}\|=\min \bigl\{ \|x\|: x\in \varOmega \bigr\}. \)
Theorem 3.7
Let H be a real Hilbert space and E be a uniformly convex and 2-uniformly smooth Banach space. Let \(A:H\rightarrow 2^{H}\), \(B:E\rightarrow 2^{E^{*}}\) be two maximal monotone operators such that \(\varOmega \neq \emptyset \). Let \(T: H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*}\rightarrow H\). Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.2 converges strongly to \(z=P_{\varOmega }(0)\), the minimum-norm element of Ω.
Proof
We divide the proof into several steps. For simplicity, we denote \(u_{n}=J^{A}_{r}(I-\lambda _{n}T^{*}J(I-Q_{\mu }^{B})T)w_{n}\).
Step 1. We show that sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded. Since Ω is not empty, we take \(p\in \varOmega \), and then it follows from (3.7) that
which means that \(\|u_{n}-p\|\leq \|w_{n}-p\|\leq \|x_{n}-p\|+\alpha _{n}\|x_{n}-x_{n-1}\|\).
At the same time, we have that
Since
we have \(\frac{(1-\gamma _{n})\alpha _{n}}{\gamma _{n}}\|x_{n}-x_{n-1}\| \rightarrow 0\), so the sequence \(\{\frac{(1-\gamma _{n})\alpha _{n}}{ \gamma _{n}}\|x_{n}-x_{n-1}\|\}\) is bounded, and hence
where \(\sigma =\sup_{n\in \mathbb{N}}\{\frac{(1-\gamma _{n})\alpha _{n}}{ \gamma _{n}}\|x_{n}-x_{n-1}\|\}\). Therefore we conclude that the sequence \(\{\|x_{n}-z\|\}\) is bounded, which in turn means that \(\{x_{n}\}\) is bounded, and so are \(\{u_{n}\}\) and \(\{w_{n}\}\).
Step 2. We show that \(\|x_{n+1}-x_{n}\|\rightarrow 0\) and \(x_{n}\rightarrow z\), where \(z=P_{\varOmega }(0)\), the minimum-norm element of Ω. To this end, we set \(y_{n}=(1-\beta _{n})w_{n}+\beta _{n}u _{n}\), and then \(x_{n+1}=y_{n}-\gamma _{n}w_{n}=(1-\gamma _{n})y_{n}- \gamma _{n}\beta _{n}(w_{n}-u_{n})\). Therefore we have from (2.4) that
Note that
so
On the other hand, it follows from (3.6) that
hence we have from (2.3) that
which implies
Next we consider two possible cases for the convergence of the sequence \(\{\|x_{n}-z\|^{2}\}\).
Case I. Assume that \(\{\|x_{n}-z\|\}\) is eventually nonincreasing, that is, there exists \(n_{0}\geq 0\) such that \(\|x_{n+1}-z\|\leq \|x_{n}-z\|\) for each \(n\geq n_{0}\). Then the limit of \(\|x_{n}-z\|\) exists and \(\lim_{n\rightarrow \infty }(\|x_{n+1}-z\|-\|x_{n}-z\|)=0\). Since \(\lim_{n\rightarrow \infty }\gamma _{n}=0\) and \(\alpha _{n}\|x_{n}-x_{n-1}\|^{2}<\epsilon _{n}=o(\gamma _{n})\rightarrow 0\), it follows from formula (3.14) that
Note that since \(\inf (1-\beta _{n}-\gamma _{n})\beta _{n}>0\), \(\inf \rho _{n}(4-\rho _{n})>0\) and F and H are Lipschitz continuous, we obtain
so \(f(w_{n})\rightarrow 0\) and \(\|u_{n}-w_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \); furthermore, \(\|y_{n}-w_{n}\|=\beta _{n}\|u _{n}-w_{n}\|\rightarrow 0\). In view of the fact that \(\|x_{n+1}-y_{n} \|=\gamma _{n}\|w_{n}\|\rightarrow 0\) and \(\|w_{n}-x_{n}\|=\alpha _{n} \|x_{n}-x_{n-1}\|<\epsilon _{n}\rightarrow 0\), we have
Similarly as in the proof of Theorem 3.4, we conclude that \(w_{w}(x _{n})\subset \varOmega \). It follows from (3.6) that
where \(M=\sup_{n\in \mathbb{N}}\{\|x_{n}-z\|+\|x_{n-1}-z\|+2\|x_{n}-x _{n-1}\|\}\).
Combining with (3.12), we have
Due to property (2.5), we have that \(\langle 0-P_{\varOmega }(0),y-P_{ \varOmega }(0)\rangle \leq 0\), \(\forall y\in \varOmega \), and therefore
Applying Lemma 2.2 to (3.15), since \(\gamma _{n}\rightarrow 0\), \(w_{n}-u_{n}\rightarrow 0\) and \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|<\infty \), we conclude that \(\|x_{n}-z\|\rightarrow 0\), that is, the sequence \(\{x_{n}\}\) converges strongly to \(z=P_{\varOmega }(0)\). Furthermore, we have from the property of the metric projection that, for all \(p\in \varOmega \),
which implies that z is the minimum-norm solution of the split null point problem.
Case II. Suppose that \(\{\|x_{n}-z\|^{2}\}\) is not eventually nonincreasing, that is, there exists a subsequence \(\{\|x_{n_{k}}-z\|\}\) of \(\{\|x_{n}-z\|\}\) such that \(\|x_{n_{k}}-z\|\leq \|x_{n_{k}+1}-z\|\) for all \(k\in \mathbb{N}\). In this case, as in Lemma 2.3, we define the indicator

\( \sigma (n)=\max \bigl\{ k\leq n: \|x_{k}-z\|<\|x_{k+1}-z\| \bigr\} \)
such that \(\sigma (n)\rightarrow \infty \) as \(n\rightarrow \infty \) and \(\|x_{\sigma (n)}-z\|^{2}\leq \|x_{\sigma (n)+1}-z\|^{2}\), and so from (3.14)
since \(\gamma _{\sigma (n)}\rightarrow 0 \) as \(n\rightarrow \infty \). Similarly as in the proof in Case I, we get that
and
where \(M_{1}= \sup_{{\sigma (n)}\in \mathbb{N}}\{\|x_{\sigma (n)}-z\|+ \|x_{{\sigma (n)}-1}-z\|+2\|x_{\sigma (n)}-x_{{\sigma (n)}-1}\|\}\).
Therefore
Combining with the above formulas (3.16), (3.17) and (3.19) yields
and hence
From (3.18) we have
Thus \(\lim_{n\rightarrow \infty }\|x_{\sigma (n)+1}-z\|^{2}=0 \). Therefore, according to Lemma 2.3, we have
Consequently, sequence \(\{x_{n}\}\) converges strongly to \(z=P_{\varOmega }(0)\), which is the minimum-norm element of Ω. The proof is complete. □
Remark 3.8
If there are two firmly nonexpansive operators \(U:H\rightarrow H\) and \(W:E\rightarrow E\), where H and E are n- and m-dimensional Euclidean spaces, respectively, and T is a real \(m\times n\) matrix, we consider the split common fixed point problem as follows: find \(x^{*}\in \operatorname{Fix}(U)\) such that \(Tx^{*}\in \operatorname{Fix}(W)\), where \(\operatorname{Fix}(U)\) and \(\operatorname{Fix}(W)\) are the fixed point sets of U and W, respectively. Taking \(J_{r}^{A}= U\) and \(Q_{\mu }^{B}=W\), we can then approximate the solution of the split common fixed point problem by the above algorithms. For directed operators in Euclidean spaces, the above algorithms also work for the split common fixed point problem; one can refer to the work of Censor and Segal [48] and Kaznoon [49].
4 Applications
In this part, we apply our results to the split minimization problem. The split minimization problem in Banach spaces is formulated as follows: find \(x\in H\) such that

\( x\in \operatorname{argmin}_{u\in H}f(u) \quad \text{and}\quad Tx\in \operatorname{argmin}_{v\in E}g(v), \)
where H and E are real Hilbert and Banach spaces, respectively, \(f:H\rightarrow \mathbb{R}\), \(g:E\rightarrow \mathbb{R}\) are two proper convex lower semicontinuous functions and \(T:H\rightarrow E\) is a bounded linear operator.
Denote

\( \operatorname{Prox}_{\lambda g}(x)=\operatorname{argmin}_{u\in E} \biggl\{ g(u)+\frac{1}{2\lambda }\|u-x\|^{2} \biggr\} \)
and

\( \operatorname{Prox}_{r f}(x)=\operatorname{argmin}_{u\in H} \biggl\{ f(u)+\frac{1}{2r}\|u-x\|^{2} \biggr\}. \)
From Rockafellar [50, 51], one can see that \(\mathrm{Prox}_{\lambda g}(x)\) is the metric resolvent of ∂g, and \(\mathrm{Prox}_{r f}(x)\) is the general resolvent of ∂f, where

\( \partial g(x)=\bigl\{z\in E^{*}: g(y)\geq g(x)+\langle y-x,z\rangle \text{ for all } y\in E\bigr\} \)
and

\( \partial f(x)=\bigl\{w\in H: f(y)\geq f(x)+\langle y-x,w\rangle \text{ for all } y\in H\bigr\} \)
are subdifferential operators of g and f, respectively. It is clear that \(\partial g:E\rightarrow 2^{E^{*}}\) and \(\partial f:H\rightarrow 2^{H}\) are maximal monotone operators and \((\partial g)^{-1}(0)= \operatorname{argmin} \{ g(x):x\in E\}\), \((\partial f)^{-1}(0)= \operatorname{argmin} \{ f(x):x\in H\}\).
Now we take \(A=\partial f\), \(B=\partial g\) in our theorems, and then the following results hold:
Theorem 4.1
Let H be a real Hilbert space and E be a uniformly convex and 2-uniformly smooth Banach space. Let \(f:H\rightarrow \mathbb{R}\), \(g:E\rightarrow \mathbb{R}\) be two proper convex lower semicontinuous functions such that \(\varOmega =(\partial f)^{-1}(0) \cap T^{-1} ((\partial g)^{-1}(0))\neq \emptyset \). Let \(T: H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*} \rightarrow H\). For arbitrary \(x_{0}, x_{1} \in H\),
where the sequences \(\{\alpha _{n}\}\), \(\{\lambda _{n}\}\) are the same as in Algorithm 3.1, then sequence \(\{x_{n}\}\) converges weakly to \(x^{*}\in \varOmega \).
Theorem 4.2
Let H be a real Hilbert space and E be a uniformly convex and 2-uniformly smooth Banach space. Let \(f:H\rightarrow \mathbb{R}\), \(g:E\rightarrow \mathbb{R}\) be two proper convex lower semicontinuous functions such that \(\varOmega =(\partial f)^{-1}(0) \cap T^{-1} ((\partial g)^{-1}(0))\neq \emptyset \). Let \(T: H\rightarrow E\) be a nonzero bounded linear operator with adjoint operator \(T^{*}:E^{*} \rightarrow H\). For arbitrary \(x_{0}, x_{1} \in H\),
where the sequences \(\{\alpha _{n}\}\), \(\{\lambda _{n}\}\), \(\{\beta _{n} \}\) and \(\{\gamma _{n}\}\) are the same as in Algorithm 3.2, then sequence \(\{x_{n}\}\) converges strongly to \(z=P_{\varOmega }(0)\).
5 Numerical examples
In this section, we present some examples to illustrate the applicability, efficiency and stability of our inertial self-adaptive step size iterative algorithms. We have written all the codes in Matlab R2016b and ran them on an LG dual core personal computer.
5.1 Numerical behavior of Algorithm 3.1
Example 5.1
Let \(H=E=\mathbb{R}\). Define the operators A, B and T by \(Ax=3x\), \(Bx=2x\), \(Tx=x\). In this example, we choose \(\epsilon _{n}=\frac{1}{(n+1)^{2}}\) and \(\alpha \in (0,1)\). If \(\alpha <\epsilon _{n}(\max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x_{n-1}\| \})^{-1}\), then we take \(\alpha _{n}=\frac{\alpha }{2}\); otherwise, we take \(\alpha _{n}= \frac{1}{(n+2)^{2}}(\max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x _{n-1}\|\})^{-1}\). We set \(\rho _{n}=3-\frac{1}{n+1}\) for all \(n\in \mathbb{N}\) in Algorithm 3.1. We first test different values of α for given initial points \(x_{0}\) and \(x_{1}\), and then test different initial points for \(r=1\). We aim to find the common null point of A and B. According to Algorithm 3.1, the numerical results are reported in Fig. 1.
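This one-dimensional example can be reproduced in a few lines. The sketch below uses the inertial step \(w_{n}=x_{n}+\alpha _{n}(x_{n}-x_{n-1})\) together with the constant step size of Remark 3.6 (\(\lambda _{n}\equiv \gamma \in (0,2/L)\) with \(L=\|T^{*}T\|=1\)) instead of the self-adaptive rule; the resolvents are \(J^{A}_{r}v=v/(1+3r)\) and \(Q^{B}_{\mu }v=v/(1+2\mu )\), and J is the identity on \(\mathbb{R}\). This is our reading of the scheme under the stated simplifications, not the authors' exact code; the starting points and iteration count are illustrative.

```python
# Example 5.1: A x = 3x, B x = 2x, T x = x on R; the unique common null point is 0.
r = mu = 1.0
lam = 1.0                             # constant step (Remark 3.6): lam in (0, 2/||T*T||) = (0, 2)
JA = lambda v: v / (1.0 + 3.0 * r)    # general resolvent (I + rA)^{-1}
QB = lambda v: v / (1.0 + 2.0 * mu)   # metric resolvent of B (J = I on R)

x_prev, x = 2.0, 1.0                  # starting points x_0, x_1
for n in range(1, 60):
    eps = 1.0 / (n + 1) ** 2
    d = abs(x - x_prev)
    # inertial weight kept below eps / max(d^2, d), so the inertial series stays summable
    alpha = min(0.25, eps / max(d * d, d)) if d > 0 else 0.25
    w = x + alpha * (x - x_prev)
    x_prev, x = x, JA(w - lam * (w - QB(w)))
print(x)
```

The iterates contract toward 0, the common null point of A and B, which matches the behavior reported in Fig. 1.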
Example 5.2
Let \(H=E=\mathbb{R}^{3}\). Define the operators A, B and T as follows:
In this example, we set the parameters of Algorithm 3.1 to \(\rho _{n}=3- \frac{1}{n+1}\) and \(\epsilon _{n}=\frac{1}{(n+1)^{2}}\) for all \(n\in \mathbb{N}\). As in Example 5.1, if \(\alpha <\epsilon _{n}( \max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x_{n-1}\|\})^{-1}\), then \(\alpha _{n}=\frac{\alpha }{2}\); otherwise, we take \(\alpha _{n}= \frac{1}{(n+2)^{2}}\max \{\|x_{n}-x_{n-1}\|^{2},\|x_{n}-x_{n-1}\|\} ^{-1}\). At the same time, we set the parameters \(\beta _{n}= \frac{n-1}{n+1}\) and \(\gamma _{n}=\frac{1}{n+1}\) in Algorithm 3.2. The experimental results for Algorithms 3.1 and 3.2 are reported in Figs. 2–3 and Tables 1 and 2.
At this stage we would like to emphasize that the step sizes of Algorithms 3.1 and 3.2 are self-adaptive rather than given beforehand, so there is no need to know the norm of the operator T. The figures and tables above show that the sequences \(\{x_{n}\}\) generated by Algorithms 3.1 and 3.2 approximate the null point \(x^{*}\in A^{-1}(0)\cap T^{-1}(B^{-1}(0))\).
5.2 Comparison of Algorithm 3.2 with other algorithms
In this part, we present several experiments comparing Algorithm 3.2 with other algorithms. The two algorithms used for comparison are the viscosity method (VM) of Suantai et al. [4] and the Halpern-type method (HM) of Alofi et al. [25], in both of which the step size depends on the norm of the operator T. For all three algorithms, the operators A, B, T are defined as in Example 5.2. Since the norm \(\|T\|\simeq 14.87\), we take the step size \(\lambda _{n}=0.001\) in the algorithms of Suantai et al. [4] and Alofi et al. [25].
We set the parameters \(\beta _{n}=\frac{2n-1}{2n+1}\), \(\rho _{n}=3- \frac{1}{n+1}\) and \(\gamma _{n}=\frac{1}{2n+1}\) in our Algorithm 3.2; \(\alpha _{n}=\frac{1}{2n+1}\), \(\beta _{n}=\gamma _{n}=\frac{n}{2n+1}\) and the contraction \(f(x)=\frac{x}{5}\) in Suantai et al. [4]; and \(\beta _{n}=\frac{n}{2n+1}\), \(\alpha _{n}=\frac{1}{2n+1}\) and \(u_{n}=0\) in Alofi et al. [25]. In addition, we choose the stopping criterion \(\|x_{n+1}-x_{n}\|\leq \mathit{DOL}\) for all the algorithms. Furthermore, we take \(x_{0}=(13,-12,25)\) and compare the numbers of iterations and CPU times. The experimental results are reported in Fig. 4 and Table 3.
From Table 3, we can see that our Algorithm 3.2 performs best and has a clear competitive advantage. Moreover, as mentioned in the previous sections, the main advantage of our Algorithm 3.2 is that the inertial technique is combined with a self-adaptive step size, so no prior knowledge of the operator norm is required.
5.3 Compressed sensing
For the last example we choose a problem from the field of compressed sensing, namely the recovery of a sparse and noisy signal from a limited number of samples. Let \(x_{0}\in \mathbb{R}^{n}\) be a K-sparse signal, \(K\ll n\). The sampling matrix \(A\in \mathbb{R}^{m\times n}\), \(m< n\), is simulated from the standard Gaussian distribution, and the vector \(b=Ax_{0}+\epsilon \), where ϵ is additive noise. When \(\epsilon =0\), there is no noise in the observed data. Our task is to recover the signal \(x_{0}\) from the data b. For further explanations, one can consult Nguyen and Shin [52].
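The sampling setup just described can be sketched as follows. The function name `make_instance` and the small default dimensions are ours (chosen for illustration; the experiments in this section use \(m=2^{10}\), \(n=2^{12}\)).

```python
import numpy as np

def make_instance(m=64, n=256, K=5, noise=0.0, seed=0):
    """Generate a K-sparse signal x0 with +-1 spikes at random positions,
    a standard Gaussian sampling matrix A in R^{m x n} (m < n), and
    measurements b = A x0 + eps, as in the compressed-sensing setting."""
    rng = np.random.default_rng(seed)
    x0 = np.zeros(n)
    support = rng.choice(n, size=K, replace=False)
    x0[support] = rng.choice([-1.0, 1.0], size=K)   # +-1 spikes
    A = rng.standard_normal((m, n))
    b = A @ x0 + noise * rng.standard_normal(m)     # noise = 0: exact data
    return A, b, x0
```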
For solving the problem, we recall the LASSO problem of Tibshirani [53]:

\(\min_{x\in \mathbb{R}^{n}} \frac{1}{2}\|Ax-b\|_{2}^{2}\) subject to \(\|x\|_{1}\leq t\),
where \(t>0\) is a given constant. Thus, in relation to the SVIP (1.1)–(1.2), we consider \(B_{1}^{-1}(0)=\{x \mid \|x\|_{1}\leq t\}\) and \(B_{2}^{-1}(0)=\{b\}\), and define \(B_{1}\), \(B_{2}\) accordingly (e.g., as the subdifferentials of the indicator functions of these sets).
We set the parameters \(\beta _{n}=\frac{2n-1}{2n+1}\), \(\rho _{n}=3- \frac{1}{n+1}\) and \(\gamma _{n}=\frac{1}{2n+1}\) in our Algorithm 3.2 and compare it with the algorithms of Sitthithakerngkiet et al. [23] and Kazmi et al. [29]. For the experimental setting we choose the following parameters: \(A\in \mathbb{R}^{m\times n}\) is generated randomly with \(m=2^{10}\), \(n=2^{12}\), and \(x_{0}\in \mathbb{R}^{n}\) contains K spikes with amplitude ±1 randomly distributed over the whole domain. In addition, for simplicity, we take \(f(x)=\frac{x}{2}\), \(S=I\), \(\alpha _{i}=\frac{1}{i+1}\) in [29]; \(S_{i}=I\), \(\alpha _{i}=10^{-3}/(i+1)\), \(\beta _{i}=0.5-1/(10i+2)\) in [23]; and \(\alpha _{i}=\frac{i-1}{i+1}\), \(\gamma _{i}=\frac{1}{i+1}\) in our algorithms. Finally, we take \(t=K\) in all the algorithms and use the stopping criterion \(\|x_{n+1}-x_{n}\|\leq \mathit{DOL}\) with \(\mathit{DOL}=10^{-6}\). All the numerical results are presented in Table 4 and Figs. 4–5.
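The paper's Algorithm 3.2 attacks this formulation through resolvents; as a minimal self-contained illustration of the same ℓ1-ball constraint, the sketch below solves the constrained LASSO by projected gradient, using the standard sort-based Euclidean projection onto \(\{x : \|x\|_{1}\leq t\}\). The names `project_l1` and `lasso_pg` are ours, and this is a baseline sketch, not the algorithm of the paper.

```python
import numpy as np

def project_l1(v, t):
    """Euclidean projection onto the l1 ball {x : ||x||_1 <= t}
    (standard sort-based algorithm)."""
    a = np.abs(v)
    if a.sum() <= t:
        return v.copy()
    u = np.sort(a)[::-1]                 # magnitudes in decreasing order
    css = np.cumsum(u)
    # largest (1-based) k with u_k > (css_k - t) / k
    k = np.nonzero(u * np.arange(1, v.size + 1) > css - t)[0][-1]
    theta = (css[k] - t) / (k + 1.0)     # soft-threshold level
    return np.sign(v) * np.maximum(a - theta, 0.0)

def lasso_pg(A, b, t, iters=300):
    """Projected-gradient sketch for min ||Ax - b||^2 s.t. ||x||_1 <= t,
    i.e. the constrained LASSO with B_1^{-1}(0) = {x : ||x||_1 <= t}."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = ||A||^2 (spectral)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_l1(x - step * (A.T @ (A @ x - b)), t)
    return x
```

Each iterate stays feasible (the last operation is always the projection), and the objective is nonincreasing for the step size \(1/\|A\|^{2}\).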
6 Conclusion
Many important problems in mathematics, science, engineering and other fields can be reformulated as finding zero points (null points) of nonlinear operators. The split null point problem has received much attention due to its wide real-world applications, such as signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy, as well as in approximation and control theory. For solving the split common null point problem, many authors have dedicated their efforts to the construction of iterative algorithms. However, these algorithms suffer from a drawback: either the step size depends on the norm of the bounded linear operator in Banach spaces, or the maximal monotone operators are restricted to Hilbert spaces.
This motivated us to study the split common null point problem without prior knowledge of the operator norms in Banach spaces. The main result of this paper is a new inertial algorithm which incorporates a self-adaptive step size rule to solve split null point problems for multi-valued maximal monotone operators in Banach spaces. To some extent, the weak and strong convergence theorems for the new inertial algorithm complement the approximation methods for the solution of the split common null point problem, and they extend and unify some known results (see, e.g., Byrne et al. [6], Takahashi [23], Alofi [25], Suantai et al. [4] and Promluang and Kumam [5]). In addition, numerical examples and comparisons are presented to illustrate the efficiency and reliability of our algorithms.
References
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
López, G., Martin-Marquez, V., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, ID085004 (2012)
Takahashi, W.: The split common null point problem in Banach spaces. Arch. Math. (Basel) 104(4), 357–365 (2015)
Suantai, S., Srisap, K., Naprang, N., Mamat, M., Yundon, V., Cholamjiak, J.: Convergence theorems for finding split common null point problem in Banach spaces. Appl. Gen. Topol. 18(2), 345–360 (2017)
Promluang, K., Kumam, P.: Viscosity approximation method for split common null point problems between Banach spaces and Hilbert spaces. J. Inform. Math. Sci. 9(1), 27–44 (2017)
Byrne, C., Censor, Y., Gibali, A., Reich, S.: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759–775 (2012)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Censor, Y., Elfving, T.: A multi-projection algorithm using Bregman projections in product space. Numer. Algorithms 8, 221–239 (1994)
Byrne, C.L.: Iterative projection onto convex sets using multiple Bregman distances. Inverse Probl. 15, 1295–1313 (1999)
Rockafellar, R.T.: On maximal monotonicity of sums of nonlinear operators. Trans. Am. Math. Soc. 149, 75–88 (1970)
Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
Ansari, Q.H., Rehan, A., Wen, C.F.: Implicit and explicit algorithms for split common fixed point problems. J. Nonlinear Convex Anal. 17(7), 1381–1397 (2016)
Ansari, Q.H., Rehan, A.: Split feasibility and fixed point problems. In: Ansari, Q.H. (ed.) Nonlinear Analysis: Approximation Theory, Optimization and Applications, pp. 281–322. Springer, New Delhi (2014)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64, 633–642 (2012)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Yang, Q.: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 20, 1261–1266 (2004)
Moudafi, A., Thakur, B.S.: Solving proximal split feasibility problem without prior knowledge of matrix norms. Optim. Lett. 8(7), 2099–2110 (2013). https://doi.org/10.1007/s11590-013-0708-4
Gibali, A., Mai, D.T., Nguyen, T.V.: A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2018, 1–25 (2018)
Shehu, Y., Iyiola, O.S.: Convergence analysis for the proximal split feasibility problem using an inertial extrapolation term method. J. Fixed Point Theory Appl. 19, 2483–2510 (2017)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: Mann type iterative methods for finding a common solution of split feasibility and fixed point problems. Positivity 16(3), 471–495 (2012)
Sitthithakerngkiet, K., Deepho, J., Kumam, P.: Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems. Numer. Algorithms (2018). https://doi.org/10.1007/s11075-017-0462-2
Takahashi, W., Yao, J.C.: Strong convergence theorems by hybrid methods for the split common null point problem in Banach spaces. Fixed Point Theory Appl. 2015, Article ID 87 (2015)
Alofi, A.S., Alsulami, M., Takahashi, W.: Strongly convergent iterative method for the split common null point problem in Banach spaces. J. Nonlinear Convex Anal. 2, 311–324 (2016)
Kamimura, S., Takahashi, W.: Strong convergence of proximal-type algorithm in Banach spaces. SIAM J. Optim. 13, 938–945 (2002)
Takahashi, W.: The split common null point problem for generalized resolvents in two Banach spaces. Numer. Algorithms 75(4), 1065–1078 (2017)
Ansari, Q.H., Rehan, A.: Iterative methods for generalized split feasibility problems in Banach spaces. Carpath. J. Math. 33(1), 9–26 (2017)
Kazmi, K.R., Rizvi, S.H.: An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 8, 1113–1124 (2014)
Mainge, P.E.: Convergence theorem for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223–236 (2008)
Alvarez, F.: On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 38(4), 1102–1119 (2000)
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2003)
Chang, S.S., Cho, Y.J., Zhou, H.Y.: Iterative Methods for Nonlinear Operator Equations in Banach Spaces. Nova Science Publishers, Huntington (2002)
Chidume, C.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Note in Mathematics, vol. 1965. Springer, London (2009)
Browder, F.E.: Nonlinear maximal monotone operators in Banach spaces. Math. Ann. 175, 89–113 (1968)
Takahashi, W.: Convex Analysis and Approximation of Fixed Points. Yokohama Publishers, Yokohama (2009)
Kohsaka, F., Takahashi, W.: Existence and approximation of fixed points of firmly nonexpansive-type mappings in Banach spaces. SIAM J. Optim. 19(2), 824–835 (2008)
Kohsaka, F., Takahashi, W.: Fixed point theorems for a class of nonlinear mappings related to maximal monotone operators in Banach spaces. Arch. Math. 91(2), 166–177 (2008)
Aoyama, K., Kohsaka, F., Takahashi, W.: Three generalizations of firmly nonexpansive mappings: their relations and continuity properties. J. Nonlinear Convex Anal. 10(1), 131–147 (2009)
Bauschke, H.H., Wang, X.F., Yao, L.J.: General resolvents for monotone operators: characterization and extension (2008). arXiv:0810.3905v1
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(2), 240–256 (2002)
Mainge, P.E.: Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 325, 469–479 (2007)
Mainge, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Halpern, B.: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957–961 (1967)
Ishikawa, S.: Fixed points and iteration of a nonexpansive mapping in Banach space. Proc. Am. Math. Soc. 59, 65–71 (1976)
Browder, F.E., Petryshyn, W.V.: The solution by iteration of nonlinear functional equations in Banach spaces. Bull. Am. Math. Soc. 72, 571–575 (1966)
Baillon, J.B., Bruck, R.E., Reich, S.: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 4, 1–9 (1978)
Censor, Y., Segal, A.: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587–600 (2009)
Zaknoon, M.: Algorithmic developments for the convex feasibility problem. PhD Thesis, University of Haifa, Haifa, Israel (Apr. 2003)
Rockafellar, R.T.: Characterization of subdifferentials of convex functions. Pac. J. Math. 17, 497–510 (1966)
Rockafellar, R.T.: On maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Nguyen, T.L.N., Shin, Y.: Deterministic sensing matrices in compressed sensing: a survey. Sci. World J. 2013, 1–6 (2013)
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B, Methodol. 58, 267–288 (1996)
Eslamian, M., Vahidi, J.: Split common fixed point problem of non-expansive semigroup. Mediterr. J. Math. 13, 1177–1195 (2016)
Eslamian, M., Zamani Eskandani, G., Raeisi, M.: Split common null point and common fixed point problems between Banach spaces and Hilbert spaces. Mediterr. J. Math. 14, 119 (2017)
Gibali, A.: A new split inverse problem and application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2(2), 243–258 (2017)
Acknowledgements
The authors express their deep gratitude to the referee and the editor for their valuable comments and suggestions, which helped tremendously in improving the quality of this paper and made it suitable for publication.
Funding
This article was funded by the National Science Foundation of China (11471059) and Science and Technology Research Project of Chongqing Municipal Education Commission (KJ 1706154) and the Research Project of Chongqing Technology and Business University (KFJJ2017069).
Author information
Authors and Affiliations
Contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Tang, Y. New inertial algorithm for solving split common null point problem in Banach spaces. J Inequal Appl 2019, 17 (2019). https://doi.org/10.1186/s13660-019-1971-4