Adaptively relaxed algorithms for solving the split feasibility problem with a new step size
Journal of Inequalities and Applications volume 2014, Article number: 448 (2014)
Abstract
In the present paper, we propose several adaptively relaxed iterative algorithms with a new step size for solving the split feasibility problem in real Hilbert spaces. The proposed algorithms never terminate prematurely, whereas the known algorithms in the literature may. Several weak and strong convergence theorems for the proposed algorithms are established. Some numerical experiments are also included to illustrate the effectiveness of the proposed algorithms.
MSC: 46E20, 47J20, 47J25.
1 Introduction
Since its inception in 1994, the split feasibility problem (SFP) [1] has attracted researchers' interest [2, 3] due to its extensive applications in signal processing and image reconstruction [4], with particular progress in intensity-modulated radiation therapy [5, 6].
Let $H_1$ and $H_2$ be real Hilbert spaces, let C and Q be nonempty closed convex subsets of $H_1$ and $H_2$, respectively, and let $A: H_1 \to H_2$ be a bounded linear operator. The SFP can then be formulated as finding a point $x^*$ with the property
$$x^* \in C, \qquad Ax^* \in Q. \qquad (1.1)$$
The set of solutions of SFP (1.1) is denoted by $\Gamma = \{x \in C : Ax \in Q\}$.
Over the past two decades or so, researchers have designed various iterative algorithms for solving SFP (1.1); see [6–13]. The most popular among them is Byrne's CQ algorithm, which generates a sequence $\{x_n\}$ by the recursive procedure
$$x_{n+1} = P_C\big(x_n - \tau A^*(I - P_Q)Ax_n\big), \qquad (1.2)$$
where the step size $\tau$ is chosen in the open interval $(0, 2/\|A\|^2)$, while $P_C$ and $P_Q$ are the orthogonal projections onto C and Q, respectively.
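For illustration, the following is a minimal NumPy sketch of iteration (1.2) in the finite-dimensional case; the names cq_algorithm, proj_C, and proj_Q are ours, and the projections are assumed to be supplied by the caller.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=1000, tol=1e-8):
    """Iterate x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant ||A||^2
    tau = 1.0 / L                          # any tau in (0, 2/||A||^2) works
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))     # gradient of f, cf. (1.4)
        x_new = proj_C(x - tau * grad)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```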
We remark in passing that Byrne's CQ algorithm (1.2) is indeed a special case of the classical gradient projection method (GPM). To see this, let us define $f: H_1 \to \mathbb{R}$ by
$$f(x) = \tfrac{1}{2}\big\|(I - P_Q)Ax\big\|^2; \qquad (1.3)$$
then the convex objective f is differentiable and has a Lipschitz gradient given by
$$\nabla f(x) = A^*(I - P_Q)Ax. \qquad (1.4)$$
We consider the following convex minimization problem:
$$\min_{x \in C} f(x). \qquad (1.5)$$
It is well known that $x^* \in C$ is a solution of problem (1.5) if and only if
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in C. \qquad (1.6)$$
Also, we know that (1.6) holds true if and only if, for any $\tau > 0$,
$$x^* = P_C\big(x^* - \tau \nabla f(x^*)\big). \qquad (1.7)$$
Note that if $\Gamma \ne \emptyset$, then the solution set of (1.5) coincides with Γ. Consequently, we can utilize the classical gradient projection method (GPM) below to solve SFP (1.1):
$$x_{n+1} = P_C\big(x_n - \tau_n \nabla f(x_n)\big), \qquad (1.8)$$
where $\tau_n \in (0, 2/L)$, while L is the Lipschitz constant of ∇f. Noting that $L = \|A\|^2$, we see immediately that (1.8) is exactly the CQ algorithm (1.2).
We note that, in algorithms (1.2) and (1.8) above, the choice of the step size depends heavily on the operator (matrix) norm $\|A\|$. This means that, to actually implement the CQ algorithm (1.2), one first has to know at least an upper bound of $\|A\|$, which is in general difficult to obtain. To overcome this difficulty, several authors have proposed various adaptive methods, which permit the step size to be selected self-adaptively; see [7–9].
Yang [10] considered the following step size:
$$\tau_n = \frac{\rho_n}{\|\nabla f(x_n)\|}, \qquad (1.9)$$
where $\{\rho_n\}$ is a sequence of positive real numbers such that
$$\sum_{n=0}^{\infty} \rho_n = \infty, \qquad \sum_{n=0}^{\infty} \rho_n^2 < \infty. \qquad (1.10)$$
Very recently, López et al. [11] introduced another choice of the step size sequence as follows:
$$\tau_n = \frac{\rho_n f(x_n)}{\|\nabla f(x_n)\|^2}, \qquad (1.11)$$
where $\rho_n$ is chosen in the open interval $(0, 4)$. By virtue of the step size (1.11), López et al. [11] introduced four kinds of algorithms for solving SFP (1.1).
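In code, the step size (1.11) is a one-line computation; the sketch below (with illustrative names) also makes the division-by-zero hazard visible.

```python
import numpy as np

def lopez_step(f_val, grad, rho=2.0):
    # rho must lie in (0, 4); the division is undefined when grad = 0,
    # which is exactly the termination issue discussed next.
    return rho * f_val / np.linalg.norm(grad) ** 2
```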
We observe that if $\nabla f_n(x_n) = 0$ for some n, then the algorithms introduced by López et al. [11] have to terminate at the nth iteration. In this case $x_n$ is not necessarily a solution of SFP (1.1), since $x_n$ may not belong to C; Algorithm 4.1 in [11] is such a case. To remedy this flaw, we introduce a new choice of the step size sequence as follows:
$$\tau_n = \frac{\rho_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2 + \delta_n}, \qquad (1.12)$$
where $\rho_n$ is chosen in the open interval $(0, 4)$ and $\{\delta_n\}$ is a sequence of positive numbers, while $f_n$ and $\nabla f_n$ are given by, respectively,
$$f_n(x) = \tfrac{1}{2}\big\|(I - P_{Q_n})Ax\big\|^2 \quad \text{and} \quad \nabla f_n(x) = A^*(I - P_{Q_n})Ax,$$
where $Q_n$ will be defined in Section 3.
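Assuming the form of (1.12) reconstructed above, the guarded step differs from (1.11) only in the denominator; delta is an illustrative stand-in for the positive sequence.

```python
def guarded_step(f_val, grad, rho=2.0, delta=1e-3):
    # The positive term delta keeps the denominator nonzero even when
    # grad = 0, so the iteration never stalls on an undefined step.
    return rho * f_val / (np.linalg.norm(grad) ** 2 + delta)
```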
The purpose of this paper is to introduce a new choice of the step size sequence that makes the associated algorithms never terminate prematurely. A new stop rule is also given, which ensures that when the iterative process stops, the final iterate is a solution of SFP (1.1). Several weak and strong convergence results are presented. Numerical experiments are included to illustrate the effectiveness of the proposed algorithms and the applications, in signal processing, of the CQ algorithm with the step size selected in this paper.
The rest of this paper is organized as follows. In the next section, some necessary concepts and important facts are collected. The weak and strong convergence theorems of the proposed algorithms with step size (1.12) are established in Section 3. Finally in Section 4, we provide some numerical experiments to illustrate the effectiveness and applications of the proposed algorithms with step size (1.12) to inverse problems arising from signal processing.
2 Preliminaries
Throughout this paper, we assume that SFP (1.1) is consistent, i.e., $\Gamma \ne \emptyset$. We denote by ℝ the set of real numbers. Let $H$, $H_1$, and $H_2$ be real Hilbert spaces, and let the letter I denote the identity mapping on $H$, $H_1$, or $H_2$. If $f: H \to \mathbb{R}$ is a differentiable (subdifferentiable) functional, then we denote by ∇f (∂f) the gradient (subdifferential) of f. Given a sequence $\{x_n\}$ in H, ‘$x_n \to x$’ (resp. ‘$x_n \rightharpoonup x$’) denotes the strong (resp. weak) convergence of $\{x_n\}$ to x. The symbols $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ denote the inner product and the norm of a Hilbert space, respectively. Let $T: H \to H$ be a mapping. We use Fix(T) to denote the set of fixed points of T and D(T) to denote the domain of T.
Some identities in Hilbert spaces play very important roles in solving linear and nonlinear problems arising from the real world. It is well known that in a real Hilbert space H the following two equalities hold:
$$\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2 \qquad (2.1)$$
for all $x, y \in H$, and
$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2 \qquad (2.2)$$
for all $x, y \in H$ and $\lambda \in [0, 1]$.
Recall that a mapping $T: H \to H$ is said to be
- (i) nonexpansive if
$$\|Tx - Ty\| \le \|x - y\| \qquad (2.3)$$
for all $x, y \in H$;
- (ii) firmly nonexpansive if
$$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle \qquad (2.4)$$
for all $x, y \in H$;
- (iii) λ-averaged if there exist some $\lambda \in (0, 1)$ and another nonexpansive mapping $S: H \to H$ such that
$$T = (1 - \lambda)I + \lambda S. \qquad (2.5)$$
The following proposition describes the characterizations of firmly nonexpansive mappings (see [12]).
Proposition 2.1 Let $T: H \to H$ be a mapping. Then the following statements are equivalent:
- (i) T is firmly nonexpansive;
- (ii) $I - T$ is firmly nonexpansive;
- (iii) $\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2$ for all $x, y \in H$;
- (iv) T is $\frac{1}{2}$-averaged;
- (v) $2T - I$ is nonexpansive.
Recall that the metric (nearest point) projection $P_C$ from H onto a nonempty closed convex subset C of H is defined as follows: for each $x \in H$, there exists a unique point $P_C x \in C$ with the property
$$\|x - P_C x\| \le \|x - y\| \quad \text{for all } y \in C.$$
Now we list some basic properties of $P_C$ below; see [12] for details.
Proposition 2.2
(p1) Given $x \in H$ and $z \in C$. Then $z = P_C x$ if and only if we have the inequality
$$\langle x - z, y - z \rangle \le 0 \quad \text{for all } y \in C;$$
(p2) $\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$;
(p3) $\langle (I - P_C)x - (I - P_C)y, x - y \rangle \ge \|(I - P_C)x - (I - P_C)y\|^2$ for all $x, y \in H$;
(p4) $P_C$ is nonexpansive;
(p5) $\langle x - P_C x, z - P_C x \rangle \le 0$ for all $z \in C$; in particular,
$$\langle x - P_C x, x - z \rangle \ge \|x - P_C x\|^2 \quad \text{for all } z \in C;$$
(p6) $\|P_C x - z\|^2 \le \|x - z\|^2 - \|x - P_C x\|^2$ for all $x \in H$ and $z \in C$.
From (p2), (p3), and (p4), we see immediately that both $P_C$ and $I - P_C$ are firmly nonexpansive and $\frac{1}{2}$-averaged.
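For concreteness, projections onto some simple sets have closed forms; two common examples, sketched in NumPy with illustrative names, are given below.

```python
import numpy as np

def proj_box(x, lo, hi):
    # Projection onto the box [lo, hi]^n: clip coordinate-wise.
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    # Projection onto the closed ball B(center, radius).
    d = np.linalg.norm(x - center)
    return x if d <= radius else center + (radius / d) * (x - center)
```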
Recall that a function $f: H \to \mathbb{R}$ is called convex if
$$f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda)f(y) \quad \text{for all } \lambda \in [0, 1] \text{ and } x, y \in H.$$
It is well known that a differentiable function f is convex if and only if we have the relation
$$f(z) \ge f(x) + \langle \nabla f(x), z - x \rangle \quad \text{for all } x, z \in H.$$
Recall that an element $g \in H$ is said to be a subgradient of $f: H \to \mathbb{R}$ at x if
$$f(z) \ge f(x) + \langle g, z - x \rangle \quad \text{for all } z \in H.$$
If the function f has at least one subgradient at x, it is said to be subdifferentiable at x. The set of subgradients of f at the point x is called the subdifferential of f at x, and is denoted by $\partial f(x)$. A function f is called subdifferentiable if it is subdifferentiable at every $x \in H$. If f is convex and differentiable, then $\partial f(x) = \{\nabla f(x)\}$ for every $x \in H$. A function $f: H \to \mathbb{R}$ is said to be weakly lower semi-continuous (w-lsc) at x if $x_n \rightharpoonup x$ implies
$$f(x) \le \liminf_{n \to \infty} f(x_n).$$
f is said to be w-lsc on H if it is w-lsc at every point $x \in H$. It is well known that a convex function $f: H \to \mathbb{R}$ is w-lsc on H if and only if it is lsc on H.
It is an easy exercise to prove the following conclusions (see [13, 14]).
Proposition 2.3 Let f be given as in (1.3). Then the following conclusions hold:
- (i) f is convex and differentiable;
- (ii) $\nabla f(x) = A^*(I - P_Q)Ax$, $x \in H_1$;
- (iii) f is w-lsc on $H_1$;
- (iv) ∇f is $\|A\|^2$-Lipschitz:
$$\|\nabla f(x) - \nabla f(y)\| \le \|A\|^2 \|x - y\| \quad \text{for all } x, y \in H_1.$$
The concept of Fejér monotonicity plays a key role in establishing weak convergence theorems. Recall that a sequence $\{x_n\}$ in H is said to be Fejér monotone with respect to (w.r.t.) a nonempty closed convex subset C of H if
$$\|x_{n+1} - z\| \le \|x_n - z\| \quad \text{for all } z \in C \text{ and } n \ge 0.$$
Proposition 2.4 (see [11, 15])
Let C be a nonempty closed convex subset of H. If the sequence $\{x_n\}$ is Fejér monotone w.r.t. C, then the following hold:
- (i) $\{x_n\}$ converges weakly to a point of C if and only if every weak cluster point of $\{x_n\}$ belongs to C;
- (ii) the sequence $\{P_C x_n\}$ converges strongly;
- (iii) if $x_n \rightharpoonup x^* \in C$, then $x^* = \lim_{n \to \infty} P_C x_n$.
Proposition 2.5 (see [16])
Let $\{a_n\}$ be a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in ℝ such that
- (i) $\sum_{n=0}^{\infty} \gamma_n = \infty$;
- (ii) $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} |\gamma_n \delta_n| < \infty$.
Then $a_n \to 0$ (as $n \to \infty$).
3 Main results
Let $c: H_1 \to \mathbb{R}$ and $q: H_2 \to \mathbb{R}$ be convex functions and define the level sets of c and q as follows:
$$C = \{x \in H_1 : c(x) \le 0\} \quad \text{and} \quad Q = \{y \in H_2 : q(y) \le 0\}.$$
Assume that both c and q are subdifferentiable on $H_1$ and $H_2$, respectively, and that ∂c and ∂q are bounded mappings. Given an arbitrary initial datum $x_0 \in H_1$, assume that $x_n$ is the current iterate. We introduce two sequences of half-spaces as follows:
$$C_n = \{x \in H_1 : c(x_n) + \langle \xi_n, x - x_n \rangle \le 0\},$$
where $\xi_n \in \partial c(x_n)$, and
$$Q_n = \{y \in H_2 : q(Ax_n) + \langle \eta_n, y - Ax_n \rangle \le 0\},$$
where $\eta_n \in \partial q(Ax_n)$. Clearly, $C \subseteq C_n$ and $Q \subseteq Q_n$ for all $n \ge 0$.
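The practical advantage of the half-space relaxation is that the projections onto $C_n$ and $Q_n$ have closed forms; a sketch (with illustrative names) follows.

```python
import numpy as np

def proj_halfspace(x, g, c_val, x_ref):
    """Project x onto {z : c_val + <g, z - x_ref> <= 0}, the half-space
    built from the value c_val and a subgradient g of c at x_ref."""
    viol = c_val + g @ (x - x_ref)
    if viol <= 0:
        return x                      # x already lies in the half-space
    return x - (viol / (g @ g)) * g   # move along -g by the exact amount
```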
Construct $x_{n+1}$ via the formula
$$x_{n+1} = P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big),$$
where $\tau_n$ is given as in (1.12),
$$f_n(x) = \tfrac{1}{2}\big\|(I - P_{Q_n})Ax\big\|^2$$
and
$$\nabla f_n(x) = A^*(I - P_{Q_n})Ax.$$
More precisely, we introduce the following relaxed CQ algorithm in an adaptive way.
Algorithm 3.1 Choose an initial datum $x_0 \in H_1$ arbitrarily. Assume that the nth iterate $x_n$ has been constructed; then we compute the (n+1)th iterate $x_{n+1}$ via the formula
$$x_{n+1} = P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big), \qquad (3.7)$$
where the step size $\tau_n$ is chosen according to (1.12), with $\rho_n \in (0, 4)$ and $\delta_n > 0$. If $x_{n+1} = x_n$ for some $n \ge 0$, then $x_n$ is a solution of SFP (1.1) and the iterative process stops; otherwise, we set $n := n + 1$ and go on to (3.7) to compute the next iterate.
We remark in passing that if $x_{n+1} = x_n$ for some $n \ge 0$, then $x_m = x_n$ for all $m \ge n$; consequently, $x_n$ is a solution of SFP (1.1). Thus, we may assume that the sequence $\{x_n\}$ generated by Algorithm 3.1 is infinite.
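Combining the pieces, a sketch of Algorithm 3.1 under the level-set assumptions of this section might read as follows; it reuses the proj_halfspace helper above, and c, q, subgrad_c, subgrad_q, and delta_seq are illustrative stand-ins.

```python
import numpy as np

def relaxed_cq(A, c, subgrad_c, q, subgrad_q, x0,
               rho=2.0, delta_seq=lambda n: 1.0 / (n + 1) ** 2,
               n_iter=500):
    """Sketch of Algorithm 3.1 for level sets C = {c <= 0}, Q = {q <= 0}."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        Ax = A @ x
        eta = subgrad_q(Ax)                            # eta_n in dq(Ax_n)
        PQn_Ax = proj_halfspace(Ax, eta, q(Ax), Ax)    # P_{Q_n}(Ax_n)
        residual = Ax - PQn_Ax                         # (I - P_{Q_n})Ax_n
        grad = A.T @ residual                          # grad f_n(x_n)
        f_val = 0.5 * residual @ residual              # f_n(x_n)
        tau = rho * f_val / (grad @ grad + delta_seq(n))   # step (1.12)
        xi = subgrad_c(x)                              # xi_n in dc(x_n)
        x_new = proj_halfspace(x - tau * grad, xi, c(x), x)  # P_{C_n} step
        if np.allclose(x_new, x):          # stop rule x_{n+1} = x_n (up to tol)
            return x_new
        x = x_new
    return x
```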
Theorem 3.2 Assume that $\Gamma \ne \emptyset$. Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges weakly to a solution $x^*$ of SFP (1.1), where $x^* = \lim_{n \to \infty} P_\Gamma x_n$.
Proof Let $z \in \Gamma$ be fixed. By virtue of (2.1), (3.7), and Proposition 2.2(p6), we have
In view of Proposition 2.2, we know that $P_{C_n}$ and $I - P_{Q_n}$ are firmly nonexpansive for all $n \ge 0$, and from this one derives
from which it turns out that
which in turn allows us to deduce the following conclusions:
- (i) $\{x_n\}$ is Fejér monotone w.r.t. Γ; in particular, $\|x_{n+1} - z\| \le \|x_n - z\|$ for all $z \in \Gamma$;
- (ii) $\{x_n\}$ is a bounded sequence;
- (iii) $f_n(x_n) \to 0$ as $n \to \infty$; and
- (iv) $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$. (3.12)
By Proposition 2.4(i), to show that $\{x_n\}$ converges weakly to a point of Γ, it suffices to show that every weak cluster point of $\{x_n\}$ belongs to Γ. To see this, take a weak cluster point $\hat{x}$ of $\{x_n\}$ and let $\{x_{n_k}\}$ be a subsequence of $\{x_n\}$ weakly converging to $\hat{x}$. It follows from (3.12)(iii) that
$$f_{n_k}(x_{n_k}) = \tfrac{1}{2}\big\|(I - P_{Q_{n_k}})Ax_{n_k}\big\|^2 \to 0. \qquad (3.13)$$
By our assumption that ∂q is a bounded mapping, there exists a constant $M > 0$ such that $\|\eta_{n_k}\| \le M$ for all k. Since $P_{Q_{n_k}}Ax_{n_k} \in Q_{n_k}$, by the definition of $Q_{n_k}$, we have
$$q(Ax_{n_k}) \le \langle \eta_{n_k}, Ax_{n_k} - P_{Q_{n_k}}Ax_{n_k} \rangle \le M\big\|(I - P_{Q_{n_k}})Ax_{n_k}\big\| \to 0.$$
Noting that $Ax_{n_k} \rightharpoonup A\hat{x}$ and using the w-lsc of q, we have $q(A\hat{x}) \le \liminf_{k \to \infty} q(Ax_{n_k}) \le 0$, which implies that $A\hat{x} \in Q$. We next prove $\hat{x} \in C$. Firstly, from (3.12)(iv), we know that $\|x_{n_k+1} - x_{n_k}\| \to 0$. Since ∂c is a bounded mapping, there exists a constant $N > 0$ such that $\|\xi_{n_k}\| \le N$ for all k. Since $x_{n_k+1} \in C_{n_k}$, by the definition of $C_{n_k}$, we have
$$c(x_{n_k}) \le \langle \xi_{n_k}, x_{n_k} - x_{n_k+1} \rangle \le N\|x_{n_k} - x_{n_k+1}\| \to 0.$$
Then the w-lsc of c implies that $c(\hat{x}) \le \liminf_{k \to \infty} c(x_{n_k}) \le 0$; thus $\hat{x} \in C$ and hence $\hat{x} \in \Gamma$, completing the proof. □
We next introduce a slightly more general algorithm.
Algorithm 3.3 Choose an initial datum $x_0 \in H_1$ arbitrarily. Assume that the nth iterate $x_n$ has been constructed; then we compute the (n+1)th iterate $x_{n+1}$ via the formula
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big), \qquad (3.15)$$
where the step size $\tau_n$ is as before and $\{\alpha_n\}$ is a sequence in $(0, 1)$ satisfying $\liminf_{n \to \infty} \alpha_n(1 - \alpha_n) > 0$. If $x_{n+1} = x_n$ for some $n \ge 0$, then $x_n$ is a solution of SFP (1.1) and the iterative process stops; otherwise, we set $n := n + 1$ and go on to (3.15) to compute the next iterate.
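In the Algorithm 3.1 sketch above, the relaxed variant (3.15) changes only the update line; a fixed alpha in (0, 1) is assumed for illustration.

```python
# Relaxed (Krasnosel'skii-Mann style) update of Algorithm 3.3;
# alpha plays the role of alpha_n and is kept away from 0 and 1.
alpha = 0.5
y = proj_halfspace(x - tau * grad, xi, c(x), x)   # P_{C_n} step
x_new = (1 - alpha) * x + alpha * y
```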
We have the following weak convergence theorem.
Theorem 3.4 Assume that $\Gamma \ne \emptyset$. Then the sequence $\{x_n\}$ generated by Algorithm 3.3 converges weakly to a solution $x^*$ of SFP (1.1), where $x^* = \lim_{n \to \infty} P_\Gamma x_n$.
Proof Let $z \in \Gamma$ be fixed. By virtue of (2.1), (2.2), (3.15), (3.10), and Proposition 2.2(p6), we have
which implies that
- (i) $\{x_n\}$ is Fejér monotone w.r.t. Γ; in particular, $\|x_{n+1} - z\| \le \|x_n - z\|$ for all $z \in \Gamma$;
- (ii) $\{x_n\}$ is a bounded sequence;
- (iii) $\alpha_n f_n(x_n) \to 0$; and
- (iv) $\|x_{n+1} - x_n\| \to 0$.
By our assumptions on $\{\alpha_n\}$ and $\{\delta_n\}$, we have $f_n(x_n) \to 0$ and $\|x_{n+1} - x_n\| \to 0$; the rest of the argument follows exactly from the corresponding parts of the proof of Theorem 3.2, so we omit the details. This completes the proof. □
We remark that Theorem 3.4 generalizes Theorem 3.2: if we take $\alpha_n \equiv 1$ in Theorem 3.4, then we obtain Theorem 3.2. It would be interesting to compare the convergence rates of Algorithms 3.1 and 3.3.
Generally speaking, Algorithms 3.1 and 3.3 enjoy only weak convergence in the framework of infinite-dimensional spaces, and therefore modifications of Algorithms 3.1 and 3.3 are needed in order to achieve strong convergence. Considerable efforts have been made in this direction, and several interesting results have been reported recently; see [17–20]. Below is our modification of Algorithms 3.1 and 3.3.
Algorithm 3.5 Choose an arbitrary initial datum $x_0 \in H_1$. Assume that the nth iterate $x_n$ has been constructed. Set
$$y_n = P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big), \qquad (3.17)$$
with the step size $\tau_n$ given by (1.12), and define two half-spaces $U_n$ and $V_n$ by
$$U_n = \{z \in H_1 : \|y_n - z\| \le \|x_n - z\|\} \quad \text{and} \quad V_n = \{z \in H_1 : \langle x_n - z, x_0 - x_n \rangle \ge 0\}.$$
The (n+1)th iterate is then constructed by the formula
$$x_{n+1} = P_{U_n \cap V_n} x_0. \qquad (3.20)$$
If $x_{n+1} = x_n$ for some $n \ge 0$, then $x_n$ is a solution of SFP (1.1) and the iterative process stops; otherwise, we set $n := n + 1$ and go on to (3.17)-(3.20) to compute the next iterate.
Theorem 3.6 Assume that $\Gamma \ne \emptyset$. Then the sequence $\{x_n\}$ generated by Algorithm 3.5 converges strongly to a solution $x^*$ of SFP (1.1), where $x^* = P_\Gamma x_0$.
Proof Firstly, we show that
$$\Gamma \subseteq U_n \cap V_n \qquad (3.21)$$
for all $n \ge 0$. Indeed, in view of (3.11), we have $\Gamma \subseteq U_n$ for all $n \ge 0$. To show that (3.21) holds, it suffices to show that $\Gamma \subseteq V_n$ for all $n \ge 0$. We proceed by induction. Since $V_0 = H_1$, we have $\Gamma \subseteq V_0$. Assume that $\Gamma \subseteq V_n$ for some $n \ge 0$; we plan to show $\Gamma \subseteq V_{n+1}$. Since $\Gamma \subseteq U_n \cap V_n$, and $U_n \cap V_n$ is closed and convex, $x_{n+1} = P_{U_n \cap V_n} x_0$ is well defined. It follows from Proposition 2.2(p1) that, for any $z \in \Gamma$,
$$\langle x_0 - x_{n+1}, z - x_{n+1} \rangle \le 0.$$
This implies that $z \in V_{n+1}$ and hence $\Gamma \subseteq V_{n+1}$. Consequently, $\Gamma \subseteq V_n$ for all $n \ge 0$, and thus (3.21) holds true.
From the definition of $V_n$ and Proposition 2.2(p1), we see that $x_n = P_{V_n} x_0$. It then follows from (3.20) that
$$\|x_n - x_0\| \le \|x_{n+1} - x_0\| \le \|z - x_0\| \quad \text{for all } z \in \Gamma.$$
It follows that $\lim_{n \to \infty} \|x_n - x_0\|$ exists; denote it by d.
Noting that $x_{n+1} \in V_n$ and $x_n = P_{V_n} x_0$, we have
$$\langle x_0 - x_n, x_{n+1} - x_n \rangle \le 0. \qquad (3.24)$$
By virtue of (2.1) and (3.24), we obtain
$$\|x_{n+1} - x_n\|^2 \le \|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2.$$
From this one derives that $\|x_{n+1} - x_n\| \to 0$ (as $n \to \infty$).
Since $x_{n+1} \in U_n$, we have
$$\|y_n - x_{n+1}\| \le \|x_n - x_{n+1}\|,$$
from which it turns out that
$$\|y_n - x_n\| \le \|y_n - x_{n+1}\| + \|x_{n+1} - x_n\| \le 2\|x_{n+1} - x_n\| \to 0 \qquad (3.27)$$
and, in view of the definition of $y_n$,
$$\tau_n f_n(x_n) \to 0. \qquad (3.28)$$
At this point, we show $\omega_w(x_n) \subseteq \Gamma$. To this end, take $\hat{x} \in \omega_w(x_n)$. Then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \hat{x}$. By our assumption on $\{\delta_n\}$, from (3.28) we conclude that
$$f_{n_k}(x_{n_k}) \to 0, \qquad (3.29)$$
which implies that $\|(I - P_{Q_{n_k}})Ax_{n_k}\| \to 0$, since $\{\nabla f_{n_k}(x_{n_k})\}$ is bounded. Noticing that $P_{Q_{n_k}}Ax_{n_k} \in Q_{n_k}$ and that ∂q is a bounded mapping, we have
$$q(Ax_{n_k}) \le \langle \eta_{n_k}, Ax_{n_k} - P_{Q_{n_k}}Ax_{n_k} \rangle \le M\big\|(I - P_{Q_{n_k}})Ax_{n_k}\big\| \to 0.$$
Since $Ax_{n_k} \rightharpoonup A\hat{x}$ and q is w-lsc on $H_2$, we derive
$$q(A\hat{x}) \le \liminf_{k \to \infty} q(Ax_{n_k}) \le 0,$$
which implies that $A\hat{x} \in Q$.
We next show $\hat{x} \in C$. Indeed, from (3.27) we have
$$\|y_{n_k} - x_{n_k}\| \to 0. \qquad (3.31)$$
From (3.29), we also have
$$\tau_{n_k}\big\|\nabla f_{n_k}(x_{n_k})\big\| \to 0. \qquad (3.32)$$
Consequently, it follows from (3.31) and (3.32) that
$$\big\|x_{n_k} - \tau_{n_k}\nabla f_{n_k}(x_{n_k}) - y_{n_k}\big\| \to 0.$$
Since $y_{n_k} \in C_{n_k}$ and ∂c is a bounded mapping, we immediately obtain
$$c(x_{n_k}) \le \langle \xi_{n_k}, x_{n_k} - y_{n_k} \rangle \le N\|x_{n_k} - y_{n_k}\| \to 0. \qquad (3.34)$$
Then the w-lsc of c ensures that
$$c(\hat{x}) \le \liminf_{k \to \infty} c(x_{n_k}) \le 0,$$
from which it turns out that $\hat{x} \in C$, and thus $\hat{x} \in \Gamma$. It follows from (3.21) that $x_n = P_{V_n} x_0$ satisfies
$$\|x_n - x_0\| \le \|z - x_0\|$$
for all $z \in \Gamma$; in particular, we have
$$\|x_n - x_0\| \le \|P_\Gamma x_0 - x_0\|;$$
consequently, $\|\hat{x} - x_0\| \le \|P_\Gamma x_0 - x_0\|$, since the norm is w-lsc.
At this point, by virtue of (3.19), we have
$$\langle x_0 - x_{n+1}, z - x_{n+1} \rangle \le 0 \quad \text{for all } z \in U_n \cap V_n;$$
in particular, we have
$$\langle x_0 - x_{n_k}, \hat{x} - x_{n_k} \rangle \le 0. \qquad (3.37)$$
Thus, upon taking the limit as $k \to \infty$ in (3.37), we obtain
$$\langle x_0 - \hat{x}, z - \hat{x} \rangle \le 0 \quad \text{for all } z \in \Gamma.$$
This implies that $\hat{x} = P_\Gamma x_0$ by Proposition 2.2(p1). Therefore $\{x_n\}$ converges strongly to $P_\Gamma x_0$ because of the uniqueness of $P_\Gamma x_0$. This completes the proof. □
Algorithm 3.7 Choose an arbitrary initial datum $x_0 \in H_1$. Assume that the nth iterate $x_n$ has been constructed. Set
$$y_n = (1 - \alpha_n)x_n + \alpha_n P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big), \qquad (3.39)$$
with the step size $\tau_n$ given by (1.12) and the relaxation factor $\alpha_n$ in $(0, 1)$ satisfying $\liminf_{n \to \infty} \alpha_n(1 - \alpha_n) > 0$. Define two half-spaces $U_n$ and $V_n$ by
$$U_n = \{z \in H_1 : \|y_n - z\| \le \|x_n - z\|\} \quad \text{and} \quad V_n = \{z \in H_1 : \langle x_n - z, x_0 - x_n \rangle \ge 0\}.$$
The (n+1)th iterate is then constructed by the formula
$$x_{n+1} = P_{U_n \cap V_n} x_0. \qquad (3.41)$$
If $x_{n+1} = x_n$ for some $n \ge 0$, then $x_n$ is a solution of SFP (1.1) and the iterative process stops; otherwise, we set $n := n + 1$ and go on to (3.39)-(3.41) to compute the next iterate.
Along the proof lines of Theorem 3.6 we can prove the following.
Theorem 3.8 Assume that $\Gamma \ne \emptyset$; then the sequence $\{x_n\}$ generated by Algorithm 3.7 converges strongly to a solution $x^*$ of SFP (1.1), where $x^* = P_\Gamma x_0$.
The proof of Theorem 3.8 is similar to that of Theorem 3.6, and therefore we omit its details.
We next turn our attention to another kind of algorithm.
Algorithm 3.9 Choose an arbitrary initial datum $x_0 \in H_1$. Assume that the nth iterate $x_n$ has been constructed; then we compute the (n+1)th iterate $x_{n+1}$ via the recursion
$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big), \qquad (3.43)$$
where the step size $\tau_n$ is given by (1.12), $h: H_1 \to H_1$ is a contraction with contractive coefficient $\beta \in (0, 1)$, and $\{\alpha_n\}$ is a real sequence in $(0, 1)$. If $x_{n+1} = x_n$ for some $n \ge 0$, then $x_n$ is an approximate solution of SFP (1.1) (the approximation rule will be given below) and the iterative process stops; otherwise, we set $n := n + 1$ and go on to (3.43) to compute the next iterate.
We point out that if $x_{n+1} = x_n$ for some $n \ge 0$, then (3.43) reduces to
$$x_n = \alpha_n h(x_n) + (1 - \alpha_n) P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big). \qquad (3.44)$$
This implies that $x_n - P_{C_n}(x_n - \tau_n \nabla f_n(x_n)) = \alpha_n\big(h(x_n) - P_{C_n}(x_n - \tau_n \nabla f_n(x_n))\big)$. Write $y_n = P_{C_n}(x_n - \tau_n \nabla f_n(x_n))$. Then it follows from (3.44) that
$$\|x_n - y_n\| = \alpha_n \|h(x_n) - y_n\|.$$
Such an $x_n$ is called an approximate solution of SFP (1.1). If $\alpha_n \|h(x_n) - y_n\| = 0$, then $x_n$ is a solution of SFP (1.1).
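Under the reconstruction of (3.43) above, one pass of the viscosity-type recursion can be sketched as follows; h and the choice alpha_n = 1/(n+1) are illustrative, and proj_halfspace is the helper sketched earlier in this section.

```python
def viscosity_step(x, n, tau, grad, xi, c_val, h):
    # One step of x_{n+1} = alpha_n h(x_n)
    #                      + (1 - alpha_n) P_{C_n}(x_n - tau_n grad f_n(x_n)).
    alpha_n = 1.0 / (n + 1)        # alpha_n -> 0 and sum alpha_n = infinity
    y = proj_halfspace(x - tau * grad, xi, c_val, x)   # P_{C_n} step
    return alpha_n * h(x) + (1 - alpha_n) * y
```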
Theorem 3.10 Assume that $\{\alpha_n\}$ and $\{\delta_n\}$ satisfy conditions (C1) $\alpha_n \to 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$, and (C2) $\delta_n \to 0$, respectively. Then the sequence $\{x_n\}$ generated by Algorithm 3.9 converges strongly to a solution $x^*$ of SFP (1.1), where $x^* = P_\Gamma h(x^*)$; equivalently, $x^*$ solves the following variational inequality:
$$\langle (I - h)x^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in \Gamma.$$
Proof First of all, we show there exists a unique $x^* \in \Gamma$ such that $x^* = P_\Gamma h(x^*)$. Indeed, since h is a contraction with contractive coefficient $\beta \in (0, 1)$, so is $P_\Gamma h$; by the Banach contraction mapping principle, we conclude that there exists a unique $x^* \in \Gamma$ such that $x^* = P_\Gamma h(x^*)$; equivalently, $x^*$ solves the variational inequality above.
Write $u_n = x_n - \tau_n \nabla f_n(x_n)$ and $z_n = P_{C_n} u_n$. Then (3.43) can be rewritten as
$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) z_n.$$
Noting that $x^* \in C \subseteq C_n$ and $Ax^* \in Q \subseteq Q_n$ for all $n \ge 0$, we have $P_{C_n} x^* = x^*$ for all $n \ge 0$, and hence
$$\|z_n - x^*\| \le \|u_n - x^*\|.$$
Since $I - P_{Q_n}$ is firmly nonexpansive, we have
$$\langle \nabla f_n(x_n), x_n - x^* \rangle \ge 2 f_n(x_n). \qquad (3.47)$$
By virtue of (2.1) and (3.47), we obtain
$$\|u_n - x^*\|^2 \le \|x_n - x^*\|^2 - \tau_n\big(4 f_n(x_n) - \tau_n \|\nabla f_n(x_n)\|^2\big); \qquad (3.48)$$
in particular, we have
$$\|z_n - x^*\| \le \|x_n - x^*\| \qquad (3.49)$$
for all $n \ge 0$.
We now estimate $\|x_{n+1} - x^*\|^2$. By virtue of the definition of the norm and the Schwarz inequality, we obtain an upper bound for $\langle h(x_n) - x^*, x_{n+1} - x^* \rangle$, from which it turns out that (3.50) holds. Substituting (3.48) into (3.50) yields (3.51). By virtue of Proposition 2.2(p6), (3.48), and (3.51), noting that $P_{C_n} x^* = x^*$ for all $n \ge 0$, we arrive at the key estimate (3.52).
We next show that $\{x_n\}$ is bounded. Using (3.43) and (3.49), we have
$$\|x_{n+1} - x^*\| \le \max\Big\{\|x_n - x^*\|, \tfrac{1}{1 - \beta}\|h(x^*) - x^*\|\Big\}$$
for all $n \ge 0$; therefore $\{x_n\}$ is bounded, and so are $\{h(x_n)\}$ and $\{z_n\}$.
Finally, we show that $x_n \to x^*$ (as $n \to \infty$).
Set $a_n = \|x_n - x^*\|^2$ and assume that $x_{n+1} \ne x_n$ for all $n \ge 0$. Then (3.52) reduces to
We consider two possible cases.
Case 1. $\{a_n\}$ is eventually decreasing, i.e., there exists some integer $n_0$ such that $\{a_n\}_{n \ge n_0}$ is decreasing,
which means that $\lim_{n \to \infty} a_n$ exists. Note that $\{x_n\}$ is bounded and $\alpha_n \to 0$. Letting $n \to \infty$ in (3.53) yields $f_n(x_n) \to 0$ and $\|z_n - x_n\| \to 0$. Since $\{x_n\}$ is a bounded sequence, we conclude that $\omega_w(x_n) \ne \emptyset$, and hence
Observe that
$$f_{n_k}(x_{n_k}) \to 0 \qquad (3.54)$$
and
$$\|z_{n_k} - x_{n_k}\| \to 0. \qquad (3.55)$$
Without loss of generality, we may assume that $x_{n_k} \rightharpoonup \hat{x}$ (as $k \to \infty$); then $Ax_{n_k} \rightharpoonup A\hat{x}$ (as $k \to \infty$). Since $\{x_{n_k}\}$ is a bounded sequence and $f_{n_k}(x_{n_k}) \to 0$ by (3.54), we deduce that
$$q(Ax_{n_k}) \le M\big\|(I - P_{Q_{n_k}})Ax_{n_k}\big\| \to 0$$
as $k \to \infty$; then the w-lsc of q implies that $q(A\hat{x}) \le 0$,
and thus $A\hat{x} \in Q$.
On the other hand, since $\{x_{n_k}\}$ is a bounded sequence and $\|z_{n_k} - x_{n_k}\| \to 0$ by (3.55), we derive
$$c(x_{n_k}) \le N\|x_{n_k} - z_{n_k}\| \to 0$$
as $k \to \infty$; then the w-lsc of c implies that $c(\hat{x}) \le 0$,
and thus $\hat{x} \in C$. Consequently, $\hat{x} \in \Gamma$. It follows from (3.56) and Proposition 2.2(p1) that
Taking (3.53) into account, we have
Applying Proposition 2.5 to (3.58), we derive that $a_n \to 0$ as $n \to \infty$, i.e., $x_n \to x^*$ as $n \to \infty$.
Case 2. $\{a_n\}$ is not eventually decreasing. In this case, we can find an integer $n_0$ such that $a_{n_0} \le a_{n_0+1}$. Define, for $n \ge n_0$,
$$\sigma(n) = \max\{k \in \mathbb{N} : n_0 \le k \le n, \; a_k \le a_{k+1}\}.$$
Then $\sigma(n) \to \infty$ as $n \to \infty$, $a_{\sigma(n)} \le a_{\sigma(n)+1}$ for all $n \ge n_0$, and $a_n \le a_{\sigma(n)+1}$ for all $n \ge n_0$; see [20] for details.
Since $a_{\sigma(n)} \le a_{\sigma(n)+1}$ for all $n \ge n_0$, it follows from (3.53) that
$$f_{\sigma(n)}(x_{\sigma(n)}) \to 0 \quad \text{and} \quad \|z_{\sigma(n)} - x_{\sigma(n)}\| \to 0$$
as $n \to \infty$.
At this point, by virtue of reasoning similar to the corresponding parts of Case 1, we can deduce that $\limsup_{n \to \infty}\langle h(x^*) - x^*, x_{\sigma(n)} - x^* \rangle \le 0$.
Noting that $a_{\sigma(n)} \le a_{\sigma(n)+1}$, it follows from (3.53) that
from which one derives that $a_{\sigma(n)} \to 0$, and hence $a_{\sigma(n)+1} \to 0$ as $n \to \infty$. From this it turns out that $a_n \to 0$ as $n \to \infty$, since $a_n \le a_{\sigma(n)+1}$. Consequently, $x_n \to x^*$ as $n \to \infty$. This completes the proof. □
By an argument similar to that used for Theorem 3.10, we obtain the following more general algorithm and convergence theorem.
Algorithm 3.11 Choose an arbitrary initial datum $x_0 \in H_1$. Assume that the nth iterate $x_n$ has been constructed; then we compute the (n+1)th iterate $x_{n+1}$ via the recursion
$$x_{n+1} = \beta_n x_n + (1 - \beta_n)\big[\alpha_n h(x_n) + (1 - \alpha_n) P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big)\big], \qquad (3.59)$$
where $\{\beta_n\}$ is a real sequence in $(0, 1)$ satisfying $0 < \liminf_{n} \beta_n \le \limsup_{n} \beta_n < 1$, $\{\alpha_n\}$ is a real sequence in $(0, 1)$ satisfying conditions (C1) and (C2), h is a contraction with contractive coefficient $\beta \in (0, 1)$, and $\tau_n$ is given by (1.12). If $x_{n+1} = x_n$ for some $n \ge 0$, then $x_n$ is an approximate solution of SFP (1.1) and the iterative process stops; otherwise, we set $n := n + 1$ and go on to (3.59) to compute the next iterate.
Theorem 3.12 Assume that $\Gamma \ne \emptyset$; then the sequence $\{x_n\}$ generated by algorithm (3.59) converges strongly to a solution $x^*$ of SFP (1.1), where $x^* = P_\Gamma h(x^*)$; equivalently, $x^*$ is a solution of the following variational inequality:
$$\langle (I - h)x^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in \Gamma.$$
4 Numerical experiments
In this section, we consider two typical numerical experiments to illustrate the performance of step size (1.12) within CQ-like algorithms. Firstly, we introduce a linear observation model, which covers many problems in signal and image processing:
$$b = Ax + \varepsilon, \qquad (4.1)$$
where b is the observed or measured data with noise ε, and A denotes the bounded linear observation operator. A is sparse and its range is not closed in most inverse problems; thus A is often ill-conditioned and the problem is ill-posed. When x has a sparse expansion, finding the solutions of (4.1) can be viewed as solving the least-squares problem
$$\min_x \tfrac{1}{2}\|Ax - b\|^2 \quad \text{subject to} \quad \|x\|_1 \le t \qquad (4.2)$$
for any real number $t \ge 0$.
When we set $C = \{x : \|x\|_1 \le t\}$ and $Q = \{b\}$, this is a particular case of SFP (1.1); see [11]. Therefore, we apply the CQ algorithm to solve (4.2). We compute the projection onto C through a soft-thresholding method; see [11, 21–23].
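One standard way to compute the projection onto $C = \{x : \|x\|_1 \le t\}$ is soft thresholding with a data-dependent threshold; the following sketch (not the authors' code) implements it.

```python
import numpy as np

def proj_l1_ball(x, t):
    """Euclidean projection of x onto {z : ||z||_1 <= t} via soft
    thresholding with a data-dependent threshold (cf. [11, 21-23])."""
    u = np.abs(x)
    if u.sum() <= t:
        return x                               # already inside the ball
    s = np.sort(u)[::-1]                       # magnitudes, descending
    cumsum = np.cumsum(s)
    ks = np.arange(1, len(s) + 1)
    rho = np.nonzero(s - (cumsum - t) / ks > 0)[0][-1]
    theta = (cumsum[rho] - t) / (rho + 1)      # threshold level
    return np.sign(x) * np.maximum(u - theta, 0.0)
```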
Next, following the examples in [11, 22], we choose two similar particular problems, compressed sensing and image deconvolution, both of which are covered by (4.1). The experiments compare the performance of the proposed step size (1.12) with the step size (1.11) in [11], and analyze some properties of (1.12).
4.1 Compressed sensing
In a general compressed sensing model, the length of the signal is fixed. There exist m = 50 spikes with amplitude ±1 distributed randomly over the whole domain. The plot can be seen at the top of Figure 1. Then we set the observation dimension, and a matrix A of the corresponding order is generated randomly. Standard Gaussian noise with a given variance is added. Let t = 50 in (4.2).
For the step sizes (1.11) and (1.12), we always use the same constant $\rho_n$. For Algorithm 3.3, we fix the relaxation parameter $\alpha_n$. All the processes are started with the same initial signal and finished with the stop rule
We calculated the mean squared error (MSE) of the results:
$$\text{MSE} = \frac{1}{N}\|\hat{x} - x\|^2,$$
where $\hat{x}$ is an estimated signal of x and N is the signal length.
The second and third plots in Figure 1 correspond to the results of Algorithm 3.1 with step sizes (1.11) and (1.12), respectively. The result recovered by Algorithm 3.3 with step size (1.12) is shown in the fourth plot. For the fifth plot, with a larger relaxation parameter, the iteration count of Algorithm 3.3 starts to approach that of the second plot, and the restored precision is a little poorer than the others.
For (1.12) we first fix $\delta_n$; then, in order to study its effect on the convergence speed of the CQ algorithm, we let it range over a family of values indexed by an integer exponent. In Figure 2 we find that the best MSE curves are obtained for the smallest values, beyond which the curves change little. Therefore, $\delta_n$ should be as small as possible.
4.2 Image deconvolution
In this subsection, we apply Algorithms 3.1 and 3.3 to recover the blurred Cameraman image. In the experiments, following [22, 24], we employ Haar wavelets and a standard blur point spread function; the noise variance is fixed. The size of the image is 256 × 256. The threshold value is hand-tuned for the best SNR improvement. t is the sum of all the original pixel values.
We observe the performance of $\delta_n$ in (1.12); see Figure 3. We find that during the first several steps the SNR curves are similar; however, after 19 iterations, the curve for the smallest $\delta_n$ is similar to that of (1.11). When $\delta_n$ is large, the curves are worse than the others, while for $\delta_n$ small enough the curve becomes consistent with the curve of (1.11). Therefore, we again conclude that $\delta_n$ should be as small as possible.
5 Concluding remarks
In this paper we have proposed several kinds of adaptively relaxed iterative algorithms with a new variable step size for solving SFP (1.1). The key feature is that the new variable step size contains a sequence of positive numbers in its denominator. Because of this, the proposed algorithms with relaxed iterations never terminate prematurely at any iteration step. On the other hand, unlike the previously known algorithms, our stop rule is that the iteration process stops if $x_{n+1} = x_n$ for some $n \ge 0$.
By means of new analysis techniques, we have proved several weak and strong convergence theorems for the proposed algorithms for solving SFP (1.1), which improve, extend, and complement those existing in the literature. We remark that all convergence results in this paper still hold true if the step size given by (1.11) is used in place of the step size given by (1.12); in that case, the stop rules should be modified. We would like to point out that our Theorems 3.10 and 3.12 are closely related to a class of variational inequalities.
Finally, numerical experiments have been presented to illustrate the effectiveness of the proposed algorithms and their applications in signal processing with the step size selected in this paper. The numerical results tell us that the choice of $\{\delta_n\}$ in the step size (1.12) may affect the convergence rate of the iterative algorithms, and $\delta_n$ should be chosen as small as possible; for instance, we can choose $\{\delta_n\}$ such that $\delta_n \to 0$ as $n \to \infty$.
References
Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
López G, Martín-Márquez V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2009:243–279.
Chang SS, Kim JK, Cho YJ, Sim JY: Weak and strong convergence theorems of solutions to split feasibility problem for nonspreading type mapping in Hilbert spaces. Fixed Point Theory Appl. 2014., 2014: Article ID 11
Stark H, Yang Y: Vector Space Projections: a Numerical Approach to Signal and Image Processing, Neural Nets and Optics. Wiley, New York; 1998.
Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365.
Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017
Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. 10.1088/0266-5611/21/5/009
Li M: Improved relaxed CQ methods for solving the split feasibility problem. Adv. Model. Optim. 2011, 13: 305–318.
Abdellah B, Muhammad AN, Mohamed K, Sheng ZH: On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 2012, 54: 627–639. 10.1007/s10898-011-9782-2
Yang Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. 10.1016/j.jmaa.2004.07.048
López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012., 28: Article ID 085004
Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
Aubin JP: Optima and Equilibria: An Introduction to Nonlinear Analysis. Springer, Berlin; 1993.
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710
Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256.
Wang FH, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010. 10.1155/2010/102085
Dang YZ, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011., 27: Article ID 015007
Yu X, Shahzad N, Yao YH: Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 2012, 6: 1447–1462. 10.1007/s11590-011-0340-0
Yao YH, Postolache M, Liou YC: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013., 2013: Article ID 201 10.1186/1687-1812-2013-201
Daubechies I, Fornasier M, Loris I: Accelerated projected gradient method for linear inverse problems with sparsity constraints. J. Fourier Anal. Appl. 2008, 14: 764–792. 10.1007/s00041-008-9039-8
Figueiredo MA, Nowak RD, Wright SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1: 586–598.
Starck JL, Murtagh F, Fadili JM: Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity. Cambridge University Press, Cambridge; 2010.
Figueiredo MAT: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12: 906–917. 10.1109/TIP.2003.814255
Acknowledgements
This research was supported by the National Natural Science Foundation of China (11071053).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All the authors contributed equally. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhou, H., Wang, P. Adaptively relaxed algorithms for solving the split feasibility problem with a new step size. J Inequal Appl 2014, 448 (2014). https://doi.org/10.1186/1029-242X-2014-448