- Open Access
Adaptively relaxed algorithms for solving the split feasibility problem with a new step size
© Zhou and Wang; licensee Springer. 2014
- Received: 26 February 2014
- Accepted: 20 October 2014
- Published: 5 November 2014
In the present paper, we propose several kinds of adaptively relaxed iterative algorithms with a new step size for solving the split feasibility problem in real Hilbert spaces. The proposed algorithms never terminate, whereas the known algorithms in the literature may. Several weak and strong convergence theorems for the proposed algorithms are established. Some numerical experiments are also included to illustrate the effectiveness of the proposed algorithms.
MSC: 46E20, 47J20, 47J25.
- split feasibility problem
- adaptive step size
- adaptively relaxed iterative algorithm
Since its inception in 1994 [1], the split feasibility problem (SFP) has been attracting researchers’ interest [2, 3] due to its extensive applications in signal processing and image reconstruction [4], with particular progress in intensity-modulated radiation therapy [5, 6].
The set of solutions of SFP (1.1) is denoted by .
where the step size is chosen in the open interval , while and are the orthogonal projections onto C and Q, respectively.
where , while L is the Lipschitz constant of ∇f. Noting that , we see immediately that (1.8) is exactly CQ algorithm (1.2).
We note that, in algorithms (1.2) and (1.8) mentioned above, the choice of the step size depends heavily on the operator (matrix) norm . This means that in order to implement CQ algorithm (1.2), one first has to know at least an upper bound of the operator (matrix) norm , which is in general difficult to compute. To overcome this difficulty, several authors proposed various adaptive methods, which permit the step size to be selected self-adaptively; see [7–9].
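To make the norm dependence concrete, the following is a minimal sketch of a classical CQ iteration for a toy SFP in which C and Q are boxes; the matrix A, the box constraints, the iteration count, and the Frobenius-norm bound used for the step size are all illustrative assumptions of ours, not data from the paper.

```python
# Illustrative CQ sketch: all data below (A, the boxes C = Q = [0,1]^2,
# the Frobenius bound, the iteration count) are hypothetical choices.

A = [[2.0, 1.0], [1.0, 3.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def proj_box(v, lo=0.0, hi=1.0):
    # orthogonal projection onto the box [lo, hi]^n
    return [min(max(t, lo), hi) for t in v]

At = [[A[j][i] for j in range(2)] for i in range(2)]   # transpose of A

# The step size must lie in (0, 2/||A||^2); here we bound ||A||^2 by the
# squared Frobenius norm, illustrating that some norm estimate is needed.
fro2 = sum(A[i][j] ** 2 for i in range(2) for j in range(2))
gamma = 1.0 / fro2

x = [5.0, -3.0]                        # arbitrary starting point
for _ in range(500):
    Ax = matvec(A, x)
    PAx = proj_box(Ax)
    g = matvec(At, [Ax[i] - PAx[i] for i in range(2)])    # A^T (I - P_Q) A x
    x = proj_box([x[i] - gamma * g[i] for i in range(2)])  # P_C step
```

After enough iterations the iterate lies in C while Ax lies (up to a small tolerance) in Q, which is all the SFP asks for.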
where is chosen in the open interval . By virtue of the step size (1.11), López et al.  introduced four kinds of algorithms for solving SFP (1.1).
where will be defined in Section 3.
The purpose of this paper is to introduce a new choice of the step size sequence that makes the associated algorithms never terminate. A new stop rule is also given, which ensures that when it is triggered the current iterate is a solution of SFP (1.1) and the iterative process stops. Several weak and strong convergence results are presented. Numerical experiments are included to illustrate the effectiveness of the proposed algorithms and the applications in signal processing of the CQ algorithm with the step size selected in this paper.
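The point of the new step size can be sketched numerically. With f(x) = ½‖(I − P_Q)Ax‖² and ∇f(x) = Aᵀ(I − P_Q)Ax, a López-type step τ = ρ·f(x)/‖∇f(x)‖² is undefined exactly when ∇f(x) = 0, i.e., when x already solves the SFP; adding a positive term to the denominator keeps τ well defined. The concrete data below, and the denominator "σ + ‖∇f‖²", are an assumed reading for illustration; the paper's exact formula (1.12) should be consulted.

```python
# f(x) = (1/2)||(I - P_Q)Ax||^2 and its gradient A^T(I - P_Q)Ax.
# The data (A, Q = [0,1]^2, rho, sigma) are hypothetical, and the
# denominator "sigma + ||grad f||^2" is an assumed reading of (1.12).

def proj_box(v, lo=0.0, hi=1.0):
    return [min(max(t, lo), hi) for t in v]

def f_and_grad(A, At, x):
    Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
    PAx = proj_box(Ax)
    r = [Ax[i] - PAx[i] for i in range(len(Ax))]          # (I - P_Q)Ax
    fval = 0.5 * sum(t * t for t in r)
    grad = [sum(At[i][j] * r[j] for j in range(len(r))) for i in range(len(At))]
    return fval, grad

A = [[1.0, 0.0], [0.0, 1.0]]
At = A                                  # A is symmetric here
x = [0.5, 0.5]                          # Ax already lies in Q: a solution
fval, grad = f_and_grad(A, At, x)
g2 = sum(t * t for t in grad)

rho, sigma = 1.0, 1e-3
# tau = rho * fval / g2                 # (1.11)-style: divides by zero here
tau = rho * fval / (sigma + g2)         # stays well defined (and equals 0)
```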
The rest of this paper is organized as follows. In the next section, some necessary concepts and important facts are collected. The weak and strong convergence theorems of the proposed algorithms with step size (1.12) are established in Section 3. Finally in Section 4, we provide some numerical experiments to illustrate the effectiveness and applications of the proposed algorithms with step size (1.12) to inverse problems arising from signal processing.
Throughout this paper, we assume that SFP (1.1) is consistent, i.e., . We denote by ℝ the set of real numbers. Let and be real Hilbert spaces, and let I denote the identity mapping on or . If is a differentiable (subdifferentiable) functional, then we denote by ∇f (∂f) the gradient (subdifferential) of f. Given a sequence in H, (resp. ‘’) denotes the strong (resp. weak) convergence of to x. The symbols and denote the inner product and the norm of the Hilbert spaces and , respectively. Let be a mapping. We use to denote the set of fixed points of T. We also denote by the domain of T.
Some identities in Hilbert spaces play very important roles in solving linear and nonlinear problems arising from the real world.
for all and .
- (i) nonexpansive if (2.3)
- (ii) firmly nonexpansive if (2.4)
- (iii) λ-averaged if there exist some and another nonexpansive mapping such that (2.5)
The following proposition describes the characterizations of firmly nonexpansive mappings (see ).
T is firmly nonexpansive;
is firmly nonexpansive;
for all ;
T is -averaged;
Now we list some basic properties of below; see  for details.
for all .
(p4) is nonexpansive;
From (p2), (p3), and (p4), we see immediately that both and are firmly nonexpansive and -averaged.
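As a quick numerical sanity check of firm nonexpansiveness (an illustrative experiment of ours, not part of the paper), the one-dimensional projection onto [0, 1] satisfies the firmly nonexpansive inequality |Px − Py|² ≤ (Px − Py)(x − y) on randomly sampled pairs:

```python
# Randomized check that the projection onto [0, 1] is firmly nonexpansive:
# |Px - Py|^2 <= (Px - Py)(x - y) for all sampled pairs.
import random

def P(t):
    return min(max(t, 0.0), 1.0)       # projection onto [0, 1]

random.seed(0)
violations = 0
for _ in range(10000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    d = P(x) - P(y)
    if d * d > d * (x - y) + 1e-12:    # small tolerance for rounding
        violations += 1
```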
f is said to be w-lsc on if it is w-lsc at every point .
It is well known that for a convex function , it is w-lsc on if and only if it is lsc on .
f is convex and differentiable;
f is w-lsc on ;
- (iv) ∇f is -Lipschitz:
if and only if ;
the sequence converges strongly;
if , then .
Proposition 2.5 (see )
or . Then ().
where . Clearly, and for all .
More precisely, we introduce the following relaxed CQ algorithm in an adaptive way.
with and . If for some , then is a solution of the SFP (1.1) and the iterative process stops; otherwise, we set and go on to (3.7) to compute the next iteration .
We remark in passing that if for some , then for all , consequently, is a solution of SFP (1.1). Thus, we may assume that the sequence generated by Algorithm 3.1 is infinite.
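Algorithm 3.1's relaxed projections are specified by (3.7) in the paper; the standard device behind relaxed CQ methods is to replace the (possibly expensive) projection onto a level set C = {x : c(x) ≤ 0} by the closed-form projection onto the halfspace C_k = {y : c(x_k) + ⟨ξ_k, y − x_k⟩ ≤ 0} with ξ_k ∈ ∂c(x_k), which contains C by the subgradient inequality. A sketch with a unit-ball constraint (our illustrative choice, not the paper's):

```python
# Projection onto the halfspace C_k = {y : c(x_k) + <xi, y - x_k> <= 0},
# where xi lies in the subdifferential of the convex function c at x_k.

def proj_halfspace(x, xk, c_xk, xi):
    """Project x onto {y : c_xk + <xi, y - xk> <= 0}; assumes xi != 0."""
    val = c_xk + sum(xi[i] * (x[i] - xk[i]) for i in range(len(x)))
    if val <= 0.0:
        return list(x)                             # already in the halfspace
    n2 = sum(t * t for t in xi)
    return [x[i] - (val / n2) * xi[i] for i in range(len(x))]

# Example: c(x) = ||x||^2 - 1 (unit ball), with gradient 2x.
xk = [2.0, 0.0]
c_xk = sum(t * t for t in xk) - 1.0                # c(xk) = 3
xi = [2.0 * t for t in xk]
y = proj_halfspace(xk, xk, c_xk, xi)               # project xk itself
```

The projected point lands on the boundary of the halfspace, which is the cheap surrogate for C that the relaxed iteration uses.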
Theorem 3.2 Assume that . Then the sequence generated by Algorithm 3.1 converges weakly to a solution of SFP (1.1), where .
is Fejér monotone w.r.t. Γ; in particular,
is a bounded sequence;
Note that for . This, together with (3.13), implies that , that is, . By our assumption that ∂q is a bounded mapping, we see that there exists a constant such that , .
Then the w-lsc of C implies that ; thus and , completing the proof. □
We introduce a little more general algorithm as follows.
where the step size is as before and is a sequence in satisfying . If for some , then is a solution of the SFP (1.1) and the iterative process stops; otherwise, we set and go on to (3.15) to compute the next iteration .
We have the following weak convergence theorem.
Theorem 3.4 Assume that . Then the sequence generated by Algorithm 3.3 converges weakly to a solution of the SFP (1.1) where .
is Fejér monotone w.r.t. Γ; in particular,
is a bounded sequence;
By our assumptions on and , we have and ; the rest of the argument follows exactly from the corresponding parts of the proof of Theorem 3.2, so we omit the details. This completes the proof. □
We remark that Theorem 3.4 generalizes Theorem 3.2: if we take in Theorem 3.4, then we obtain Theorem 3.2. It would be an interesting problem to compare the convergence rates of Algorithms 3.1 and 3.3.
Generally speaking, Algorithms 3.1 and 3.3 converge only weakly in the framework of infinite-dimensional spaces, and therefore modifications of Algorithms 3.1 and 3.3 are needed in order to achieve strong convergence. Considerable efforts have been made in this direction, and several interesting results have been reported recently; see [17–20]. Below is our modification of Algorithms 3.1 and 3.3.
If for some , then is a solution of SFP (1.1) and the iterative process stops; otherwise, we set and go on to (3.17)-(3.20) to compute the next iteration .
Theorem 3.6 Assume that . Then the sequence generated by Algorithm 3.5 converges strongly to a solution of SFP (1.1), where .
This implies that and hence . Consequently, for all , and thus (3.21) holds true.
It follows that exists; we denote it by d.
From this one derives that ().
which implies that .
consequently, , since .
This implies that by Proposition 2.2(p1). Therefore converges strongly to because of the uniqueness of . This completes the proof. □
If for some , then is a solution of SFP (1.1) and the iterative process stops; otherwise, we set and go on to (3.39)-(3.41) to compute the next iteration .
Along the lines of the proof of Theorem 3.6, we can prove the following.
Theorem 3.8 Assume that ; then the sequence generated by Algorithm 3.7 converges strongly to a solution of SFP (1.1), where .
The proof of Theorem 3.8 is similar to that of Theorem 3.6, and therefore we omit its details.
We next turn our attention to another kind of algorithm.
where the step size is given by (1.12), is a contraction with contractive coefficient , and is a real sequence in . If for some , then is an approximate solution of SFP (1.1) (the approximation rule will be given below) and the iterative process stops; otherwise, we set and go on to (3.43) to compute the next iteration .
Such an is called an approximate solution of SFP (1.1). If , then is a solution of SFP (1.1).
for all .
for all , therefore is bounded; so are and .
Finally, we show that ().
We consider two possible cases.
and thus .
Applying Proposition 2.5 to (3.58), we derive that as , i.e., as .
Then as , for all and for all ; see  for details.
and as .
At this point, by virtue of a similar reasoning to the corresponding parts in case 1, we can deduce that .
from which one derives that , and hence as . From this it turns out that as , since as . Consequently, , as , since as . This completes the proof. □
By an argument similar to that used for Theorem 3.10, we have the following more general algorithm and convergence theorem.
where is a real sequence in satisfying , is a real sequence in satisfying conditions (C1) and (C2) , is a contraction with contractive coefficient and is given by (1.12). If for some , then is an approximate solution of SFP (1.1) and the iterative process stops; otherwise, we set and go on to (3.59) to compute the next iteration .
for any real number .
When we set and , it is a particular case of SFP (1.1); see . Therefore, we continue by applying the CQ algorithm to solve (4.2). We compute the projection onto C through a soft thresholding method; see [11, 21–23].
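The soft-thresholding step used for the projection onto C has a simple componentwise closed form (see [11, 21–23]); the following sketch, with an arbitrary illustrative threshold, is one standard implementation:

```python
# Componentwise soft thresholding; the threshold t = 1.0 in the example
# below is an arbitrary illustrative value.

def soft(v, t):
    """Shrink each component toward 0 by t, zeroing anything within t."""
    return [(abs(s) - t) * (1.0 if s > 0 else -1.0) if abs(s) > t else 0.0
            for s in v]

shrunk = soft([3.0, -0.5, 1.5, 0.0], 1.0)   # -> [2.0, 0.0, 0.5, 0.0]
```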
Next, following the examples in [11, 22], we also choose two similar particular problems, compressed sensing and image deconvolution, both of which are covered by (4.1). The experiments compare the performance of the proposed step size (1.12) with that of the step size in , and analyze some properties of (1.12).
4.1 Compressed sensing
where is an estimated signal of x.
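A minimal compressed-sensing experiment in the spirit of this subsection can be sketched as follows; the data sizes, sparsity level, and regularization weight are our own toy choices, and the iteration is the soft-thresholded gradient (CQ/ISTA-type) step rather than the paper's exact algorithm.

```python
# Toy compressed-sensing recovery (synthetic data): decrease
# (1/2)||Ax - b||^2 + lam*||x||_1 via the soft-thresholded gradient step
# x <- soft(x - gamma * A^T(Ax - b), gamma * lam).
import random

random.seed(1)
m, n, lam = 8, 20, 0.01
x_true = [0.0] * n
x_true[3], x_true[11] = 1.0, -1.0          # a 2-sparse signal
A = [[random.gauss(0.0, 1.0) / m ** 0.5 for _ in range(n)] for _ in range(m)]
b = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def soft(v, t):
    return [max(abs(s) - t, 0.0) * (1.0 if s >= 0 else -1.0) for s in v]

def objective(x):
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
    return 0.5 * sum(t * t for t in r) + lam * sum(abs(t) for t in x)

# Step size from the squared Frobenius norm, an upper bound on ||A||^2.
gamma = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))

x = [0.0] * n
f0 = objective(x)
for _ in range(300):
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
    g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
    x = soft([x[j] - gamma * g[j] for j in range(n)], gamma * lam)
f1 = objective(x)                          # strictly smaller than f0
```

With a step size below 1/‖A‖², each iteration is guaranteed not to increase the objective, so the final value f1 is well below the starting value f0.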
The second and third plots in Figure 1 show the results of Algorithm 3.1 with step sizes (1.11) and (1.12), respectively. The signal recovered by Algorithm 3.3 with step size (1.12) is shown in the fourth plot. For the fifth plot, where we set , , the number of iterations of Algorithm 3.3 approaches that of the second plot, while the restored precision is slightly poorer than in the others.
4.2 Image deconvolution
In this subsection, we apply Algorithms 3.1 and 3.3 to recover the blurred Cameraman image. In the experiments, following [22, 24], we employ Haar wavelets and the blur point spread function , for ; the noise variance is . The size of the image is . The threshold value is hand-tuned for the best SNR improvement; t is the sum of all the original pixel values.
In this paper we have proposed several kinds of adaptively relaxed iterative algorithms with a new variable step size for solving SFP (1.1). The key feature is that the new variable step size contains a sequence of positive numbers in its denominator. Because of this, the proposed algorithms with relaxed iterations never terminate at any iteration step. On the other hand, unlike previously known algorithms, our stop rule is that the iteration process stops if for some .
By means of new analysis techniques, we have proved several kinds of weak and strong convergence theorems of the proposed algorithms for solving SFP (1.1), which improved, extended, and complemented those existing in the literature. We remark that all convergence results in this paper still hold true if we use the step size given by (1.11) to replace the step size given by (1.12). In such a case, the stop rules should be modified. We would like to point out that our Theorems 3.10 and 3.12 are closely related to a sort of variational inequalities.
Finally, numerical experiments have been presented to illustrate the effectiveness of the proposed algorithms and their applications in signal processing with the step size selected in this paper. The numerical results indicate that the choice of the step size given by (1.12) may affect the convergence rate of the iterative algorithms, and that the sequence in its denominator should be chosen as small as possible; for instance, we can choose such that as .
This research was supported by the National Natural Science Foundation of China (11071053).
- Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. doi:10.1007/BF02142692
- López G, Martín-Márquez V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2009:243–279.
- Chang SS, Kim JK, Cho YJ, Sim JY: Weak and strong convergence theorems of solutions to split feasibility problem for nonspreading type mapping in Hilbert spaces. Fixed Point Theory Appl. 2014, 2014: Article ID 11
- Stark H, Yang Y: Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets and Optics. Wiley, New York; 1998.
- Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365.
- Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. doi:10.1088/0266-5611/21/6/017
- Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. doi:10.1088/0266-5611/21/5/009
- Li M: Improved relaxed CQ methods for solving the split feasibility problem. Adv. Model. Optim. 2011, 13: 305–318.
- Abdellah B, Muhammad AN, Mohamed K, Sheng ZH: On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 2012, 54: 627–639. doi:10.1007/s10898-011-9782-2
- Yang Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. doi:10.1016/j.jmaa.2004.07.048
- López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004
- Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
- Aubin JP: Optima and Equilibria: An Introduction to Nonlinear Analysis. Springer, Berlin; 1993.
- Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. doi:10.1088/0266-5611/20/1/006
- Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. doi:10.1137/S0036144593251710
- Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256.
- Wang FH, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010. doi:10.1155/2010/102085
- Dang YZ, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007
- Yu X, Shahzad N, Yao YH: Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 2012, 6: 1447–1462. doi:10.1007/s11590-011-0340-0
- Yao YH, Postolache M, Liou YC: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013: Article ID 201. doi:10.1186/1687-1812-2013-201
- Daubechies I, Fornasier M, Loris I: Accelerated projected gradient method for linear inverse problems with sparsity constraints. J. Fourier Anal. Appl. 2008, 14: 764–792. doi:10.1007/s00041-008-9039-8
- Figueiredo MA, Nowak RD, Wright SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1: 586–598.
- Starck JL, Murtagh F, Fadili JM: Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity. Cambridge University Press, Cambridge; 2010:166.
- Figueiredo MAT: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12: 906–917. doi:10.1109/TIP.2003.814255
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.