Strong convergence of a relaxed CQ algorithm for the split feasibility problem
Songnian He^{1, 2} and
 Ziyi Zhao^{1}
https://doi.org/10.1186/1029-242X-2013-197
© He and Zhao; licensee Springer 2013
Received: 15 November 2012
Accepted: 9 April 2013
Published: 22 April 2013
Abstract
The split feasibility problem (SFP) is the problem of finding a point in a given closed convex subset of a Hilbert space whose image under a bounded linear operator belongs to a given closed convex subset of another Hilbert space. The most popular iterative method for solving it is Byrne’s CQ algorithm. López et al. proposed a relaxed CQ algorithm for solving the SFP in the case where the two closed convex sets are level sets of convex functions. Their algorithm can be implemented easily, since it computes projections onto half-spaces and does not require a priori knowledge of the norm of the bounded linear operator. However, it enjoys only weak convergence in the setting of infinite-dimensional Hilbert spaces. In this paper, we introduce a new relaxed CQ algorithm for which strong convergence is guaranteed. Our result extends and improves the corresponding results of López et al. and of some others.
MSC: 90C25, 90C30, 47J25.
Keywords
split feasibility problem, relaxed CQ algorithm, Hilbert space, strong convergence, bounded linear operator

1 Introduction
The SFP, introduced by Censor and Elfving [1], is formulated as finding a point ${x}^{\ast}$ with the property

${x}^{\ast}\in C\quad \text{and}\quad A{x}^{\ast}\in Q$, (1.1)

where A is a given $M\times N$ real matrix (${A}^{\ast}$ denotes the transpose of A), and C and Q are nonempty, closed and convex subsets of ${\mathbb{R}}^{N}$ and ${\mathbb{R}}^{M}$, respectively. This problem has received much attention [2] due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy [3–5], and in many other applied fields.
For solving the SFP (1.1), Byrne [6] proposed the so-called CQ algorithm, which generates a sequence $({x}_{n})$ by

${x}_{n+1}={P}_{C}({x}_{n}-{\tau}_{n}{A}^{\ast}(I-{P}_{Q})A{x}_{n})$,

where ${\tau}_{n}\in (0,\frac{2}{{\parallel A\parallel}^{2}})$, and ${P}_{C}$ and ${P}_{Q}$ are the orthogonal projections onto the sets C and Q, respectively. Compared with Censor and Elfving’s algorithm [1], Byrne’s algorithm is easily executed, since it deals only with orthogonal projections and has no need to compute matrix inverses.
The CQ algorithm can also be viewed as a gradient-projection method applied to the convex minimization problem ${min}_{x\in C}f(x):=\frac{1}{2}{\parallel (I-{P}_{Q})Ax\parallel}^{2}$, that is, ${x}_{n+1}={P}_{C}({x}_{n}-{\tau}_{n}\mathrm{\nabla}f({x}_{n}))$, where the stepsize ${\tau}_{n}$ is chosen in the interval $(0,\frac{2}{L})$, and L is the Lipschitz constant of ∇f.
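As a concrete finite-dimensional illustration of the CQ iteration, the sketch below runs it with hypothetical sets C = [0, 1]^2 (a box) and Q the unit ball; the matrix A, the starting point, and the stepsize are illustrative choices, not taken from this paper.

```python
import numpy as np

# Byrne's CQ iteration x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n),
# sketched with illustrative sets: C = box [0,1]^2 and Q = unit ball.

def proj_box(x, lo=0.0, hi=1.0):
    # orthogonal projection onto the box [lo, hi]^N
    return np.clip(x, lo, hi)

def proj_ball(y, r=1.0):
    # orthogonal projection onto the ball of radius r centered at 0
    nrm = np.linalg.norm(y)
    return y if nrm <= r else (r / nrm) * y

def cq_algorithm(A, x0, tau, iters=2000):
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        x = proj_box(x - tau * (A.T @ (Ax - proj_ball(Ax))))
    return x

A = np.array([[1.0, 2.0], [0.5, -1.0]])
tau = 1.0 / np.linalg.norm(A, 2) ** 2      # tau in (0, 2/||A||^2)
x = cq_algorithm(A, np.array([5.0, 5.0]), tau)
# on exit, x lies in C and Ax lies (approximately) in Q
```

Since the SFP here is consistent (the origin solves it), the iterates settle at a feasible point; only the two projections and matrix-vector products are needed, as the text emphasizes.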
The computation of a projection onto a general closed convex subset is generally difficult. To overcome this difficulty, Fukushima [20] suggested a so-called relaxed projection method that calculates the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. In the setting of finite-dimensional Hilbert spaces, this idea was followed by Yang [13], who introduced a relaxed CQ algorithm for solving the SFP (1.1) in the case where the closed convex subsets C and Q are level sets of convex functions.
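The appeal of the relaxation is that, unlike a projection onto a general convex set, a projection onto a half-space has a closed form. A minimal sketch (the half-space and the projected point are illustrative):

```python
import numpy as np

# Projection onto the half-space {x : <a, x> <= b} (a != 0):
#   P(x) = x - max(0, (<a, x> - b) / ||a||^2) * a.
def proj_halfspace(x, a, b):
    excess = np.dot(a, x) - b
    if excess <= 0:                  # x already lies in the half-space
        return x.copy()
    return x - (excess / np.dot(a, a)) * a

a, b = np.array([1.0, 1.0]), 1.0
p = proj_halfspace(np.array([2.0, 2.0]), a, b)  # lands on the bounding hyperplane
```

When the point violates the constraint, the formula simply moves it along the normal a back onto the hyperplane <a, x> = b.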
Recently, for the purpose of generality, the SFP (1.1) has been studied in more general settings. For instance, Xu [12] and López et al. [19] considered the SFP (1.1) in infinite-dimensional Hilbert spaces (i.e., the finite-dimensional Euclidean spaces ${\mathbb{R}}^{N}$ and ${\mathbb{R}}^{M}$ are replaced with general Hilbert spaces). Very recently, López et al. [19] proposed a relaxed CQ algorithm with a new, adaptive way of determining the stepsize sequence $({\tau}_{n})$ for solving the SFP (1.1) in the case where the closed convex subsets C and Q are level sets of convex functions. Their algorithm can be implemented easily, since it computes projections onto half-spaces and does not require a priori knowledge of the norm of the bounded linear operator. However, it has only weak convergence in the setting of infinite-dimensional Hilbert spaces. In this paper, we introduce a new relaxed CQ algorithm for which strong convergence in infinite-dimensional Hilbert spaces is guaranteed. Our result extends and improves the corresponding results of López et al. [19] and of some others.
The rest of this paper is organized as follows. Some useful lemmas are listed in Section 2. In Section 3, the strong convergence of the new relaxed CQ algorithm of this paper is proved.
2 Preliminaries
Throughout the rest of this paper, we denote by H and K Hilbert spaces, by A a bounded linear operator from H to K, and by I the identity operator on H or K. If $f:H\to \mathbb{R}$ is a differentiable function, then we denote by ∇f the gradient of f. We will also use the following notations:

→ denotes strong convergence.

⇀ denotes weak convergence.

${\omega}_{w}({x}_{n})=\{x\mid \mathrm{\exists}\{{x}_{{n}_{k}}\}\subset \{{x}_{n}\}\text{ such that }{x}_{{n}_{k}}\rightharpoonup x\}$ denotes the weak ω-limit set of $\{{x}_{n}\}$.
Recall that an element $\xi \in H$ is said to be a subgradient of a convex function $f:H\to \mathbb{R}$ at x if

$f(y)\ge f(x)+\langle \xi ,y-x\rangle \quad \text{for all } y\in H$.

This relation is called the subdifferential inequality.
A function $f:H\to \mathbb{R}$ is said to be subdifferentiable at x if it has at least one subgradient at x. The set of subgradients of f at the point x is called the subdifferential of f at x, and it is denoted by $\partial f(x)$. A function f is called subdifferentiable if it is subdifferentiable at all $x\in H$. If a convex function f is differentiable at x, then $\partial f(x)=\{\mathrm{\nabla}f(x)\}$; that is, its gradient and its (unique) subgradient coincide.
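For instance, for the absolute-value function $f(x)=|x|$ on ℝ, the subdifferential at 0 is the whole interval $[-1,1]$. The quick check below verifies the subdifferential inequality for a few sample subgradients (the grid of test points is arbitrary):

```python
import numpy as np

# Check f(y) >= f(0) + g * (y - 0) for f = |.| at x = 0:
# every g in [-1, 1] is a subgradient, since |y| >= g * y for all y.
ys = np.linspace(-2.0, 2.0, 101)
subgradient_ok = all(
    abs(y) >= g * y for g in (-1.0, -0.3, 0.0, 0.8, 1.0) for y in ys
)
```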
The following lemma is not hard to prove (see [17, 21]).

Lemma 2.1 Let $f(x)=\frac{1}{2}{\parallel (I-{P}_{Q})Ax\parallel}^{2}$, $x\in H$. Then:
 (i)
f is convex and differentiable.
 (ii)
$\mathrm{\nabla}f(x)={A}^{\ast}(I-{P}_{Q})Ax$, $x\in H$.
 (iii)
f is weakly lower semicontinuous (w-lsc) on H.
 (iv)
∇f is ${\parallel A\parallel}^{2}$-Lipschitz: $\parallel \mathrm{\nabla}f(x)-\mathrm{\nabla}f(y)\parallel \le {\parallel A\parallel}^{2}\parallel x-y\parallel$, $x,y\in H$.
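Items (ii) and (iv) can be sanity-checked numerically in finite dimensions. Below, Q is taken to be the unit ball (an illustrative choice, as are the random matrix and test points), and the closed-form gradient is compared against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

def proj_ball(y, r=1.0):
    n = np.linalg.norm(y)
    return y if n <= r else (r / n) * y

def f(x):
    # f(x) = (1/2) ||(I - P_Q) A x||^2 with Q the unit ball
    res = A @ x - proj_ball(A @ x)
    return 0.5 * np.dot(res, res)

def grad_f(x):
    # closed form from item (ii): A^T (I - P_Q) A x
    return A.T @ (A @ x - proj_ball(A @ x))

x, y = rng.standard_normal(4), rng.standard_normal(4)
# central finite differences should agree with the closed-form gradient
numeric = np.array([(f(x + 1e-6 * e) - f(x - 1e-6 * e)) / 2e-6 for e in np.eye(4)])
# item (iv): the gradient is ||A||^2-Lipschitz
lipschitz_ok = (np.linalg.norm(grad_f(x) - grad_f(y))
                <= np.linalg.norm(A, 2) ** 2 * np.linalg.norm(x - y) + 1e-9)
```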
The following are characterizations of firmly nonexpansive mappings (see [22]).

Lemma 2.2 Let $T:H\to H$ be an operator. Then the following statements are equivalent:
 (i)
T is firmly nonexpansive.
 (ii)
${\parallel Tx-Ty\parallel}^{2}\le \langle x-y,Tx-Ty\rangle$, $x,y\in H$.
 (iii)
$I-T$ is firmly nonexpansive.
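Metric projections onto closed convex sets are the prototypical firmly nonexpansive mappings, and characterization (ii) can be spot-checked numerically; the projection onto the unit ball and the random test pairs below are an illustrative setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_ball(y, r=1.0):
    n = np.linalg.norm(y)
    return y if n <= r else (r / n) * y

# verify ||Tx - Ty||^2 <= <x - y, Tx - Ty> for T = P_Q on random pairs
firmly_ok = True
for _ in range(100):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    d = proj_ball(x) - proj_ball(y)
    firmly_ok = firmly_ok and (np.dot(d, d) <= np.dot(x - y, d) + 1e-12)
```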
Lemma 2.3 [23] Assume that $({\alpha}_{n})$ is a sequence of nonnegative real numbers satisfying ${\alpha}_{n+1}\le (1-{\gamma}_{n}){\alpha}_{n}+{\gamma}_{n}{\sigma}_{n}$, $n\ge 0$, where $({\gamma}_{n})$ is a sequence in $(0,1)$ and $({\sigma}_{n})$ is a sequence of real numbers such that:
 (i)
${\sum}_{n=0}^{\mathrm{\infty}}{\gamma}_{n}=\mathrm{\infty}$.
 (ii)
${lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\sigma}_{n}\le 0$, or ${\sum}_{n=0}^{\mathrm{\infty}}{\gamma}_{n}{\sigma}_{n}<\mathrm{\infty}$.
Then ${lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0$.
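A quick numeric illustration of Lemma 2.3, with the hypothetical choices ${\gamma}_{n}=1/(n+2)$ (so ${\sum}_{n}{\gamma}_{n}=\mathrm{\infty}$) and ${\sigma}_{n}=1/(n+1)$ (so ${lim\hspace{0.17em}sup}_{n}{\sigma}_{n}\le 0$), applied with equality in the recursion:

```python
# alpha_{n+1} = (1 - gamma_n) * alpha_n + gamma_n * sigma_n drives alpha_n to 0
alpha = 1.0
for n in range(100000):
    gamma, sigma = 1.0 / (n + 2), 1.0 / (n + 1)
    alpha = (1 - gamma) * alpha + gamma * sigma
# alpha is now close to 0, as the lemma predicts
```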
3 Iterative Algorithm
In this section, we consider the SFP (1.1) in the case where the sets C and Q are level sets of convex functions:

$C=\{x\in H\mid c(x)\le 0\}\quad \text{and}\quad Q=\{y\in K\mid q(y)\le 0\}$, (3.1)

where $c:H\to \mathbb{R}$ and $q:K\to \mathbb{R}$ are convex functions. We assume that c and q are subdifferentiable on H and K, respectively, and that ∂c and ∂q are bounded operators (i.e., bounded on bounded sets). We mention in passing that every convex function defined on a finite-dimensional Hilbert space is subdifferentiable and its subdifferential operator is a bounded operator (see [24]).
At the n th iteration, the sets C and Q are relaxed to the half-spaces

${C}_{n}=\{x\in H\mid c({x}_{n})+\langle {\xi}_{n},x-{x}_{n}\rangle \le 0\}$, (3.2)

where ${\xi}_{n}\in \partial c({x}_{n})$, and

${Q}_{n}=\{y\in K\mid q(A{x}_{n})+\langle {\zeta}_{n},y-A{x}_{n}\rangle \le 0\}$, (3.3)

where ${\zeta}_{n}\in \partial q(A{x}_{n})$. By the subdifferential inequality, $C\subset {C}_{n}$ and $Q\subset {Q}_{n}$ for all n.
Firstly, we recall the relaxed CQ algorithm of López et al. [19] for solving the SFP (1.1) where C and Q are given in (3.1) as follows.
Algorithm 3.1 Choose an arbitrary initial guess ${x}_{0}\in H$ and compute

${x}_{n+1}={P}_{{C}_{n}}({x}_{n}-{\tau}_{n}\mathrm{\nabla}{f}_{n}({x}_{n}))$,

where ${f}_{n}(x)=\frac{1}{2}{\parallel (I-{P}_{{Q}_{n}})Ax\parallel}^{2}$, $\mathrm{\nabla}{f}_{n}(x)={A}^{\ast}(I-{P}_{{Q}_{n}})Ax$, and the stepsize is determined adaptively by

${\tau}_{n}={\rho}_{n}\frac{{f}_{n}({x}_{n})}{{\parallel \mathrm{\nabla}{f}_{n}({x}_{n})\parallel}^{2}}$, (3.4)

with $0<{\rho}_{n}<4$.

López et al. proved that under certain conditions the sequence $({x}_{n})$ generated by Algorithm 3.1 converges weakly to a solution of the SFP (1.1). Since the projections onto the half-spaces ${C}_{n}$ and ${Q}_{n}$ have closed forms and ${\tau}_{n}$ is obtained adaptively via the formula (3.4) (there is no need to know a priori the norm of the operator A), Algorithm 3.1 is easily implementable. However, weak convergence is its weakness. To overcome this weakness, and inspired by Algorithm 3.1, we introduce a new relaxed CQ algorithm for solving the SFP (1.1), with C and Q given in (3.1), for which strong convergence is guaranteed.
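The following finite-dimensional sketch of the relaxed CQ iteration uses the illustrative level sets $c(x)={\parallel x\parallel}^{2}-1$ (so C is the unit ball) and $q(y)={\parallel y\parallel}^{2}-4$ (so Q is the ball of radius 2); the matrix, the starting point, and ${\rho}_{n}\equiv 1$ are hypothetical choices made for the demo only.

```python
import numpy as np

def proj_halfspace(x, a, b):
    # projection onto {z : <a, z> <= b}
    excess = np.dot(a, x) - b
    return x if excess <= 0 else x - (excess / np.dot(a, a)) * a

def relaxed_cq(A, x0, rho=1.0, iters=5000, eps=1e-12):
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        # Q_n = {y : q(Ax_n) + <zeta_n, y - Ax_n> <= 0}, zeta_n = grad q(Ax_n)
        q_val, zeta = np.dot(Ax, Ax) - 4.0, 2.0 * Ax
        PQn_Ax = proj_halfspace(Ax, zeta, np.dot(zeta, Ax) - q_val)
        res = Ax - PQn_Ax
        grad = A.T @ res                           # grad f_n(x_n)
        g2 = np.dot(grad, grad)
        if g2 < eps:
            y = x                                  # gradient vanishes; no step
        else:
            y = x - (rho * 0.5 * np.dot(res, res) / g2) * grad  # stepsize (3.4)
        # C_n = {z : c(x_n) + <xi_n, z - x_n> <= 0}, xi_n = grad c(x_n)
        c_val, xi = np.dot(x, x) - 1.0, 2.0 * x
        x = proj_halfspace(y, xi, np.dot(xi, x) - c_val)
    return x

A = np.array([[2.0, 0.0], [0.0, 3.0]])
x = relaxed_cq(A, np.array([3.0, 3.0]))
# on exit, c(x) <= 0 and q(Ax) <= 0 up to a small tolerance
```

Note that every step costs only two matrix-vector products and two closed-form half-space projections, and no spectral information about A is used.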
It is well known that Halpern’s algorithm converges strongly to a fixed point of a nonexpansive mapping [25, 26]. We are now in a position to give our algorithm; the algorithm below is referred to as a Halpern-type algorithm [27].
Algorithm 3.2 Let $u\in H$ be arbitrary. Choose an arbitrary initial guess ${x}_{0}\in H$ and compute

${x}_{n+1}={\alpha}_{n}u+(1-{\alpha}_{n}){P}_{{C}_{n}}({x}_{n}-{\tau}_{n}\mathrm{\nabla}{f}_{n}({x}_{n}))$,

where the sequence $({\alpha}_{n})\subset (0,1)$, and $({\tau}_{n})$ and $({\rho}_{n})$ are given as in (3.4).
The convergence result of Algorithm 3.2 is stated in the next theorem.
Theorem 3.3 Assume that $({\alpha}_{n})$ and $({\rho}_{n})$ satisfy the assumptions:
(a1) ${lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0$ and ${\sum}_{n=0}^{\mathrm{\infty}}{\alpha}_{n}=\mathrm{\infty}$.
(a2) ${inf}_{n}{\rho}_{n}(4-{\rho}_{n})>0$.
Then the sequence $({x}_{n})$ generated by Algorithm 3.2 converges in norm to ${P}_{S}u$, where S denotes the solution set of the SFP (1.1).
Now, following an idea in [28], we prove ${s}_{n}\to 0$ by distinguishing two cases.
This implies that $\parallel \mathrm{\nabla}{f}_{n}({x}_{n})\parallel$ is bounded, which yields ${f}_{n}({x}_{n})\to 0$, namely $\parallel (I-{P}_{{Q}_{n}})A{x}_{n}\parallel \to 0$.
Applying Lemma 2.3 to (3.16), we obtain ${s}_{n}\to 0$.
which, together with (3.17), in turn implies that ${s}_{n}\to 0$, that is, ${x}_{n}\to z$. □
Remark 3.4 Since u can be chosen arbitrarily in H, one can compute the minimum-norm solution of the SFP (1.1), with C and Q given in (3.1), by taking $u=0$ in Algorithm 3.2, regardless of whether $0\in C$.
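The Halpern-type iteration of Algorithm 3.2 can be sketched in finite dimensions as follows. The level sets $c(x)={\parallel x\parallel}^{2}-1$ and $q(y)={\parallel y\parallel}^{2}-4$, the matrix, ${\alpha}_{n}=1/(n+2)$ and ${\rho}_{n}\equiv 1$ are all hypothetical choices for the demo; with these sets the solution set contains the origin, so taking $u=0$, as in Remark 3.4, the minimum-norm solution is $x^{\ast}=0$.

```python
import numpy as np

def proj_halfspace(x, a, b):
    excess = np.dot(a, x) - b
    return x if excess <= 0 else x - (excess / np.dot(a, a)) * a

def halpern_relaxed_cq(A, x0, u, rho=1.0, iters=20000, eps=1e-12):
    # x_{n+1} = alpha_n u + (1 - alpha_n) P_{C_n}(x_n - tau_n grad f_n(x_n))
    x = x0.astype(float)
    for n in range(iters):
        alpha = 1.0 / (n + 2)              # alpha_n -> 0, sum alpha_n = infinity
        Ax = A @ x
        q_val, zeta = np.dot(Ax, Ax) - 4.0, 2.0 * Ax
        PQn_Ax = proj_halfspace(Ax, zeta, np.dot(zeta, Ax) - q_val)
        res = Ax - PQn_Ax
        grad = A.T @ res
        g2 = np.dot(grad, grad)
        y = x if g2 < eps else x - (rho * 0.5 * np.dot(res, res) / g2) * grad
        c_val, xi = np.dot(x, x) - 1.0, 2.0 * x
        z = proj_halfspace(y, xi, np.dot(xi, x) - c_val)
        x = alpha * u + (1 - alpha) * z    # Halpern anchor step
    return x

A = np.array([[2.0, 0.0], [0.0, 3.0]])
x = halpern_relaxed_cq(A, np.array([3.0, 3.0]), u=np.zeros(2))
# with u = 0 the iterates approach the minimum-norm solution (here the origin)
```

The anchor term ${\alpha}_{n}u$ is the only difference from the weakly convergent relaxed CQ iteration; it is what pulls the whole sequence in norm toward ${P}_{S}u$.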
Declarations
Acknowledgements
The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (ZXH2012K001) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.
References
1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8(2–4):221–239.
2. López G, Martín-Márquez V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison, WI; 2010:243–279.
3. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51:2353–2365.
4. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21:2071–2084. 10.1088/0266-5611/21/6/017
5. López G, Martín-Márquez V, Xu HK: Perturbation techniques for nonexpansive mappings with applications. Nonlinear Anal., Real World Appl. 2009, 10:2369–2383. 10.1016/j.nonrwa.2008.04.020
6. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18:441–453. 10.1088/0266-5611/18/2/310
7. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007.
8. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21:1655–1665. 10.1088/0266-5611/21/5/009
9. Schöpfer F, Schuster T, Louis AK: An iterative regularization method for the solution of the split feasibility problem in Banach spaces. Inverse Probl. 2008, 24: Article ID 055008.
10. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010: Article ID 102085.
11. Xu HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22:2021–2034. 10.1088/0266-5611/22/6/007
12. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
13. Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20:1261–1266. 10.1088/0266-5611/20/4/014
14. Yang Q: On variable-set relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302:166–179. 10.1016/j.jmaa.2004.07.048
15. Zhao J, Yang Q: Generalized KM theorems and their applications. Inverse Probl. 2006, 22(3):833–844. 10.1088/0266-5611/22/3/006
16. Zhao J, Yang Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011, 27: Article ID 035009.
17. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20(1):103–120. 10.1088/0266-5611/20/1/006
18. Figueiredo MA, Nowak RD, Wright SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1:586–598.
19. López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004. doi:10.1088/0266-5611/28/8/085004
20. Fukushima M: A relaxed projection method for variational inequalities. Math. Program. 1986, 35:58–70. 10.1007/BF01589441
21. Aubin JP: Optima and Equilibria: An Introduction to Nonlinear Analysis. Springer, Berlin; 1993.
22. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
23. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66:240–256. 10.1112/S0024610702003332
24. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38:367–426. 10.1137/S0036144593251710
25. Suzuki T: A sufficient and necessary condition for Halpern-type strong convergence to fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 2007, 135:99–106.
26. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298:279–291. 10.1016/j.jmaa.2004.04.059
27. Halpern B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73:957–961. 10.1090/S0002-9904-1967-11864-0
28. Maingé PE: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47:1499–1515. 10.1137/060675319
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.