
Adaptively relaxed algorithms for solving the split feasibility problem with a new step size

Abstract

In the present paper, we propose several kinds of adaptively relaxed iterative algorithms with a new step size for solving the split feasibility problem in real Hilbert spaces. The proposed algorithms never terminate, while the known algorithms existing in the literature may terminate. Several weak and strong convergence theorems of the proposed algorithms have been established. Some numerical experiments are also included to illustrate the effectiveness of the proposed algorithms.

MSC:46E20, 47J20, 47J25.

1 Introduction

Since its inception in 1994, the split feasibility problem (SFP) [1] has been attracting researchers’ interest [2, 3] due to its extensive applications in signal processing and image reconstruction [4], with particular progress in intensity-modulated radiation therapy [5, 6].

Let $H_1$ and $H_2$ be real Hilbert spaces, let $C$ and $Q$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively, and let $A: H_1 \to H_2$ be a bounded linear operator. Then the SFP can be formulated as finding a point $\hat{x}$ with the property

$$\hat{x} \in C \quad \text{and} \quad A\hat{x} \in Q. \tag{1.1}$$

The set of solutions of SFP (1.1) is denoted by $\Gamma = C \cap A^{-1}(Q)$.

Over the past two decades or so, researchers have designed various iterative algorithms for solving SFP (1.1); see [6–13]. The most popular among them is Byrne’s CQ algorithm, which generates a sequence $\{x_n\}$ by the recursive procedure

$$x_1 \in H_1, \qquad x_{n+1} = P_C\bigl(x_n - \tau_n A^*(I - P_Q)Ax_n\bigr), \quad n \ge 1, \tag{1.2}$$

where the step size $\tau_n$ is chosen in the open interval $(0, 2/\|A\|^2)$, while $P_C$ and $P_Q$ are the orthogonal projections onto $C$ and $Q$, respectively.

We remark in passing that Byrne’s CQ algorithm (1.2) is indeed a special case of the classical gradient projection method (GPM). To see this, let us define $f: H_1 \to \mathbb{R}$ by

$$f(x) = \tfrac{1}{2}\bigl\|(I - P_Q)Ax\bigr\|^2; \tag{1.3}$$

then the convex objective f is differentiable and has a Lipschitz gradient given by

$$\nabla f(x) = A^*(I - P_Q)Ax. \tag{1.4}$$

We consider the following convex minimization problem:

$$\min_{x \in C} f(x). \tag{1.5}$$

It is well known that $\hat{x} \in C$ is a solution of problem (1.5) if and only if

$$\bigl\langle \nabla f(\hat{x}),\, x - \hat{x} \bigr\rangle \ge 0, \quad \forall x \in C. \tag{1.6}$$

Also, we know that (1.6) holds true if and only if

$$\hat{x} = P_C(I - \tau \nabla f)\hat{x}, \quad \forall \tau > 0. \tag{1.7}$$

Note that if $\Gamma \neq \emptyset$, then $\hat{x} \in \Gamma \Leftrightarrow f(\hat{x}) = \min_{x \in C} f(x) \Leftrightarrow$ (1.6) holds $\Leftrightarrow$ (1.7) holds. Consequently, we can utilize the classical gradient projection method (GPM) below to solve SFP (1.1):

$$x_1 \in C, \qquad x_{n+1} = P_C\bigl(x_n - \tau_n \nabla f(x_n)\bigr), \quad n \ge 1, \tag{1.8}$$

where $\tau_n \in (0, 2/L)$ and $L$ is the Lipschitz constant of $\nabla f$. Noting that $L = \|A\|^2$, we see immediately that (1.8) is exactly the CQ algorithm (1.2).
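For readers who want to experiment, here is a minimal NumPy sketch of the CQ algorithm (1.2); the box-shaped sets $C$ and $Q$ (whose projections are coordinatewise clipping), the dimensions, and the tolerance are our illustrative assumptions rather than anything prescribed by the paper.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, max_iter=1000, tol=1e-8):
    """CQ algorithm (1.2): x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n),
    with a constant step size tau chosen inside (0, 2/||A||^2)."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # requires knowing ||A||
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))         # gradient (1.4) of f in (1.3)
        x_new = proj_C(x - tau * grad)
        if np.linalg.norm(x_new - x) <= tol:   # iterates have stabilized
            return x_new
        x = x_new
    return x

# Toy instance: C = [0,1]^3 and Q = [0,1]^2, so both projections are clipping.
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
x_hat = cq_algorithm(A,
                     proj_C=lambda v: np.clip(v, 0.0, 1.0),
                     proj_Q=lambda w: np.clip(w, 0.0, 1.0),
                     x0=np.array([5.0, -3.0, 2.0]))
```

Note how the very first line already needs $\|A\|$; this is precisely the dependence that the adaptive step sizes discussed next are designed to remove.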

We note that, in algorithms (1.2) and (1.8) mentioned above, the choice of the step size $\tau_n$ depends heavily on the operator (matrix) norm $\|A\|$. This means that, to actually implement the CQ algorithm (1.2), one first has to know at least an upper bound of $\|A\|$, which is in general difficult to obtain. To overcome this difficulty, several authors have proposed various adaptive methods that permit the step size $\tau_n$ to be selected self-adaptively; see [7–9].

Yang [10] considered the following step size:

$$\tau_n := \frac{\rho_n}{\|\nabla f(x_n)\|}, \tag{1.9}$$

where $\{\rho_n\}$ is a sequence of positive real numbers such that

$$\sum_{n=0}^{\infty} \rho_n = \infty, \qquad \sum_{n=0}^{\infty} \rho_n^2 < \infty. \tag{1.10}$$

Very recently, López et al. [11] introduced another choice of the step size sequence $\{\tau_n\}$ as follows:

$$\tau_n := \frac{\rho_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2}, \tag{1.11}$$

where $\{\rho_n\}$ is chosen in the open interval $(0,4)$. By virtue of the step size (1.11), López et al. [11] introduced four kinds of algorithms for solving SFP (1.1).

We observe that if $\nabla f_n(x_n) = 0$ for some $n \ge 1$, then the algorithms introduced by López et al. [11] must terminate at the $n$th iteration step. In this case $x_n$ is not necessarily a solution of SFP (1.1), since $x_n$ may fail to lie in $C$; Algorithm 4.1 in [11] is such a case. To remedy this flaw, we introduce a new choice of the step size sequence $\{\tau_n\}$ as follows:

$$\tau_n := \frac{\rho_n f_n(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2}, \tag{1.12}$$

where $\{\rho_n\}$ is chosen in the open interval $(0,4)$ and $\{\sigma_n\}$ is a sequence of positive numbers in $(0,1)$, while $f_n$ and $\nabla f_n$ are given, respectively, by

$$f_n(x) = \tfrac{1}{2}\bigl\|(I - P_{Q_n})Ax\bigr\|^2, \tag{1.13}$$

$$\nabla f_n(x) = A^*(I - P_{Q_n})Ax, \tag{1.14}$$

where $\{Q_n\}$ will be defined in Section 3.
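To see the difference between (1.11) and (1.12) concretely, compare the following two helper functions (a sketch; the variable names are ours): with (1.11) the iteration becomes undefined as soon as $\nabla f_n(x_n) = 0$, whereas the positive $\sigma_n$ in (1.12) keeps the denominator strictly positive at every step.

```python
import numpy as np

def step_1_11(f_n, grad_n, rho_n):
    """Step size (1.11) of Lopez et al.: division by zero when grad_n = 0,
    which is exactly the termination scenario described above."""
    return rho_n * f_n / np.linalg.norm(grad_n) ** 2

def step_1_12(f_n, grad_n, rho_n, sigma_n):
    """Step size (1.12): sigma_n in (0,1) keeps tau_n well defined always."""
    return rho_n * f_n / (np.linalg.norm(grad_n) + sigma_n) ** 2
```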

The purpose of this paper is to introduce a new choice of the step size sequence $\{\tau_n\}$ that makes the associated algorithms never terminate prematurely. A new stop rule is also given, which ensures that the $(n+1)$th iterate $x_{n+1}$ is a solution of SFP (1.1) when the iterative process stops. Several weak and strong convergence results are presented. Numerical experiments are included to illustrate the effectiveness of the proposed algorithms and the applications in signal processing of the CQ algorithm with the step size selected in this paper.

The rest of this paper is organized as follows. In the next section, some necessary concepts and important facts are collected. The weak and strong convergence theorems of the proposed algorithms with step size (1.12) are established in Section 3. Finally in Section 4, we provide some numerical experiments to illustrate the effectiveness and applications of the proposed algorithms with step size (1.12) to inverse problems arising from signal processing.

2 Preliminaries

Throughout this paper, we assume that SFP (1.1) is consistent, i.e., $\Gamma \neq \emptyset$. We denote by $\mathbb{R}$ the set of real numbers. Let $H_1$ and $H_2$ be real Hilbert spaces, and let $I$ denote the identity mapping on $H_1$ or $H_2$. If $f: H_1 \to \mathbb{R}$ is a differentiable (subdifferentiable) functional, then we denote by $\nabla f$ ($\partial f$) the gradient (subdifferential) of $f$. Given a sequence $\{x_n\}$ in $H_1$, $\omega_w(x_n)$ denotes the set of weak cluster points of $\{x_n\}$, while ‘$x_n \to x$’ (resp. ‘$x_n \rightharpoonup x$’) denotes the strong (resp. weak) convergence of $\{x_n\}$ to $x$. The symbols $\langle \cdot,\cdot \rangle$ and $\|\cdot\|$ denote the inner product and norm of the Hilbert spaces $H_1$ and $H_2$. Let $T: H_1 \to H_1$ be a mapping. We use $\operatorname{Fix}(T)$ to denote the set of fixed points of $T$, and $\operatorname{dom}(T)$ the domain of $T$.

Some identities in the Hilbert space $H_1$ play very important roles in solving linear and nonlinear problems arising from the real world. It is well known that in a real Hilbert space $H_1$ the following two equalities hold:

$$\|x \pm y\|^2 = \|x\|^2 \pm 2\langle x, y\rangle + \|y\|^2, \tag{2.1}$$

for all $x, y \in H_1$, and

$$\|tx + (1-t)y\|^2 = t\|x\|^2 + (1-t)\|y\|^2 - t(1-t)\|x - y\|^2, \tag{2.2}$$

for all $x, y \in H_1$ and $t \in \mathbb{R}$.

Recall that a mapping $T: \operatorname{dom}(T) \subset H_1 \to H_1$ is said to be

  (i) nonexpansive if

$$\|Tx - Ty\| \le \|x - y\|, \tag{2.3}$$

for all $x, y \in \operatorname{dom}(T)$;

  (ii) firmly nonexpansive if

$$\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2, \tag{2.4}$$

for all $x, y \in \operatorname{dom}(T)$;

  (iii) $\lambda$-averaged if there exist some $\lambda \in (0,1)$ and another nonexpansive mapping $S: H_1 \to H_1$ such that

$$T = (1 - \lambda)I + \lambda S. \tag{2.5}$$

The following proposition describes the characterizations of firmly nonexpansive mappings (see [12]).

Proposition 2.1 Let $T: \operatorname{dom}(T) \subset H_1 \to H_1$ be a mapping. Then the following statements are equivalent:

  (i) $T$ is firmly nonexpansive;

  (ii) $I - T$ is firmly nonexpansive;

  (iii) $\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty\rangle$ for all $x, y \in \operatorname{dom}(T)$;

  (iv) $T$ is $\tfrac{1}{2}$-averaged;

  (v) $2T - I$ is nonexpansive.

Recall that the metric (nearest point) projection from $H_1$ onto a nonempty closed convex subset $C$ of $H_1$ is defined as follows: for each $x \in H_1$, there exists a unique point $P_C x \in C$ with the property

$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C. \tag{2.6}$$

Now we list some basic properties of P C below; see [12] for details.

Proposition 2.2

(p1) Given $x \in H_1$ and $z \in C$, $z = P_C x$ if and only if we have the inequality

$$\langle x - z, y - z \rangle \le 0, \quad \forall y \in C; \tag{2.7}$$

(p2)

$$\|P_C x - P_C y\|^2 \le \langle x - y, P_C x - P_C y \rangle, \quad \text{for all } x, y \in H_1; \tag{2.8}$$

(p3)

$$\|(I - P_C)x - (I - P_C)y\|^2 \le \bigl\langle (I - P_C)x - (I - P_C)y,\, x - y \bigr\rangle, \tag{2.9}$$

for all $x, y \in H_1$;

(p4) $2P_C - I$ is nonexpansive;

(p5)

$$\|P_C x - P_C y\|^2 \le \|x - y\|^2 - \|(I - P_C)x - (I - P_C)y\|^2, \quad \forall x, y \in H_1; \tag{2.10}$$

in particular,

(p6)

$$\|P_C x - y\|^2 \le \|x - y\|^2 - \|(I - P_C)x\|^2, \quad \text{for all } x \in H_1 \text{ and } y \in C. \tag{2.11}$$

From (p2), (p3), and (p4), we see immediately that both $P_C$ and $I - P_C$ are firmly nonexpansive and $\tfrac{1}{2}$-averaged.

Recall that a function $f: H_1 \to \mathbb{R}$ is called convex if

$$f\bigl(\lambda x + (1 - \lambda)y\bigr) \le \lambda f(x) + (1 - \lambda)f(y), \quad \forall \lambda \in (0,1),\ \forall x, y \in H_1.$$

It is well known that a differentiable function $f$ is convex if and only if we have the relation

$$f(z) \ge f(x) + \bigl\langle \nabla f(x),\, z - x \bigr\rangle, \quad \forall x, z \in H_1.$$

Recall that an element $\xi \in H_1$ is said to be a subgradient of $f: H_1 \to \mathbb{R}$ at $x$ if

$$f(z) \ge f(x) + \langle \xi, z - x \rangle, \quad \forall z \in H_1.$$

If the function $f: H_1 \to \mathbb{R}$ has at least one subgradient at $x$, it is said to be subdifferentiable at $x$. The set of subgradients of $f$ at the point $x$ is called the subdifferential of $f$ at $x$, and is denoted by $\partial f(x)$. A function $f$ is called subdifferentiable if it is subdifferentiable at every $x \in H_1$. If $f$ is convex and differentiable, then $\partial f(x) = \{\nabla f(x)\}$ for every $x \in H_1$. A function $f: H_1 \to \mathbb{R}$ is said to be weakly lower semi-continuous (w-lsc) at $x$ if $x_n \rightharpoonup x$ implies

$$f(x) \le \liminf_{n \to \infty} f(x_n).$$

$f$ is said to be w-lsc on $H_1$ if it is w-lsc at every point $x \in H_1$.

It is well known that a convex function $f: H_1 \to \mathbb{R}$ is w-lsc on $H_1$ if and only if it is lsc on $H_1$.

It is an easy exercise to prove the following conclusions (see [13, 14]).

Proposition 2.3 Let $f$ be given as in (1.3). Then the following conclusions hold:

  (i) $f$ is convex and differentiable;

  (ii) $\nabla f(x) = A^*(I - P_Q)Ax$, $\forall x \in H_1$;

  (iii) $f$ is w-lsc on $H_1$;

  (iv) $\nabla f$ is $\|A\|^2$-Lipschitz:

$$\|\nabla f(x) - \nabla f(y)\| \le \|A\|^2 \|x - y\|, \quad \forall x, y \in H_1.$$

The concept of Fejér monotonicity plays a key role in establishing weak convergence theorems. Recall that a sequence $\{x_n\}$ in $H_1$ is said to be Fejér monotone with respect to (w.r.t.) a nonempty closed convex subset $C$ of $H_1$ if

$$\|x_{n+1} - z\| \le \|x_n - z\|, \quad \forall n \ge 1,\ \forall z \in C.$$

Proposition 2.4 (see [11, 15])

Let $C$ be a nonempty closed convex subset of $H_1$. If the sequence $\{x_n\}$ is Fejér monotone w.r.t. $C$, then the following hold:

  (i) $x_n \rightharpoonup \hat{x} \in C$ if and only if $\omega_w(x_n) \subset C$;

  (ii) the sequence $\{P_C x_n\}$ converges strongly;

  (iii) if $x_n \rightharpoonup \hat{x} \in C$, then $\hat{x} = \lim_{n \to \infty} P_C x_n$.

Proposition 2.5 (see [16])

Let $\{\alpha_n\}$ be a sequence of nonnegative real numbers such that

$$\alpha_{n+1} \le (1 - t_n)\alpha_n + t_n b_n, \quad n \ge 1,$$

where $\{t_n\}$ is a sequence in $(0,1)$ and $\{b_n\}$ is a sequence in $\mathbb{R}$ such that

  (i) $\sum_{n=1}^{\infty} t_n = \infty$;

  (ii) $\limsup_{n \to \infty} b_n \le 0$ or $\sum_{n=1}^{\infty} |t_n b_n| < \infty$.

Then $\alpha_n \to 0$ as $n \to \infty$.

3 Main results

Let $c: H_1 \to \mathbb{R}$ and $q: H_2 \to \mathbb{R}$ be convex functions and define the level sets of $c$ and $q$ as follows:

$$C = \{x \in H_1 \mid c(x) \le 0\} \quad \text{and} \quad Q = \{y \in H_2 \mid q(y) \le 0\}. \tag{3.1}$$

Assume that both $c$ and $q$ are subdifferentiable on $H_1$ and $H_2$, respectively, and that $\partial c$ and $\partial q$ are bounded mappings. Given arbitrary initial data $x_1 \in H_1$, assume that $x_n$ is the current iterate for $n \ge 1$. We introduce two sequences of half-spaces as follows:

$$C_n = \bigl\{x \in H_1 \mid c(x_n) \le \langle \xi_n, x_n - x \rangle \bigr\}, \tag{3.2}$$

where $\xi_n \in \partial c(x_n)$, and

$$Q_n = \bigl\{y \in H_2 \mid q(Ax_n) \le \langle \eta_n, Ax_n - y \rangle \bigr\}, \tag{3.3}$$

where $\eta_n \in \partial q(Ax_n)$. Clearly, $C \subset C_n$ and $Q \subset Q_n$ for all $n \ge 1$.
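Both $C_n$ and $Q_n$ are half-spaces, so their projections are available in closed form; this is what makes the relaxed algorithms below easy to implement. For a generic half-space $H = \{z : \langle a, z \rangle \le b\}$ with $a \neq 0$, the projection is the standard formula

```latex
P_H(x) =
\begin{cases}
x, & \langle a, x \rangle \le b,\\[4pt]
x - \dfrac{\langle a, x \rangle - b}{\|a\|^{2}}\, a, & \langle a, x \rangle > b.
\end{cases}
```

For $C_n$ in (3.2) one takes $a = \xi_n$ and $b = \langle \xi_n, x_n \rangle - c(x_n)$ (if $\xi_n = 0$, then $x_n$ minimizes $c$, and consistency of SFP forces $c(x_n) \le 0$, so $C_n = H_1$); $Q_n$ in (3.3) is handled analogously with $a = \eta_n$ and $b = \langle \eta_n, Ax_n \rangle - q(Ax_n)$.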

Construct $x_{n+1}$ via the formula

$$x_{n+1} = P_{C_n}\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr), \tag{3.4}$$

where $\{\tau_n\}$ is given by (1.12),

$$f_n(x) = \tfrac{1}{2}\bigl\|(I - P_{Q_n})Ax\bigr\|^2 \tag{3.5}$$

and

$$\nabla f_n(x) = A^*(I - P_{Q_n})Ax. \tag{3.6}$$

More precisely, we introduce the following relaxed CQ algorithm in an adaptive way.

Algorithm 3.1 Choose initial data $x_1 \in H_1$ arbitrarily. Assume that the $n$th iterate $x_n$ has been constructed; then compute the $(n+1)$th iterate $x_{n+1}$ via the formula

$$x_{n+1} = P_{C_n}\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr), \quad n \ge 1, \tag{3.7}$$

where the step size $\tau_n$ is chosen in such a way that

$$\tau_n := \frac{\rho_n f_n(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2}, \tag{3.8}$$

with $0 < \rho_n < 4$ and $0 < \sigma_n < 1$. If $x_{n+1} = x_n$ for some $n \ge 1$, then $x_n$ is a solution of SFP (1.1) and the iterative process stops; otherwise, set $n := n + 1$ and return to (3.7) to compute the next iterate $x_{n+2}$.

We remark in passing that if $x_{n+1} = x_n$ for some $n \ge 1$, then $x_m = x_n$ for all $m \ge n + 1$; consequently, $\lim_{m \to \infty} x_m = x_n$ is a solution of SFP (1.1). Thus, we may assume that the sequence $\{x_n\}$ generated by Algorithm 3.1 is infinite.
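The following is a minimal NumPy sketch of Algorithm 3.1 in $\mathbb{R}^N$. It assumes the user supplies the level-set functions $c$, $q$ of (3.1) together with (sub)gradient oracles; the half-space projections use the closed form displayed after (3.3), and the concrete choices $\rho_n \equiv 2$ and $\sigma_n = (n+1)^{-4}$ are illustrative (they match the numerical section below), not forced by the theory.

```python
import numpy as np

def halfspace_proj(x, a, b):
    """Project x onto the half-space {z : <a, z> <= b} (whole space if a = 0)."""
    viol = a @ x - b
    if viol <= 0 or not np.any(a):
        return x
    return x - (viol / (a @ a)) * a

def algorithm_31(A, c, grad_c, q, grad_q, x, rho=2.0,
                 sigma=lambda n: (n + 1.0) ** -4, max_iter=500):
    """Adaptively relaxed CQ algorithm 3.1 with step size (1.12)."""
    for n in range(1, max_iter + 1):
        Ax = A @ x
        eta = grad_q(Ax)                          # eta_n in dq(Ax_n)
        PQ_Ax = halfspace_proj(Ax, eta, eta @ Ax - q(Ax))   # P_{Q_n}(Ax_n)
        resid = Ax - PQ_Ax                        # (I - P_{Q_n}) A x_n
        f_n = 0.5 * resid @ resid                 # f_n(x_n), cf. (3.5)
        grad_f = A.T @ resid                      # grad f_n(x_n), cf. (3.6)
        tau = rho * f_n / (np.linalg.norm(grad_f) + sigma(n)) ** 2   # (3.8)
        xi = grad_c(x)                            # xi_n in dc(x_n)
        x_new = halfspace_proj(x - tau * grad_f, xi, xi @ x - c(x))  # (3.7)
        if np.allclose(x_new, x):                 # stop rule: x_{n+1} = x_n
            return x_new
        x = x_new
    return x
```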

Theorem 3.2 Assume that $\liminf_{n \to \infty} \rho_n(4 - \rho_n) \ge \rho > 0$. Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges weakly to a solution $\hat{x}$ of SFP (1.1), where $\hat{x} = \lim_{n \to \infty} P_\Gamma x_n$.

Proof Let $z \in \Gamma$ be fixed, and set $y_n = x_n - \tau_n \nabla f_n(x_n)$. By virtue of (2.1), (3.7), and Proposition 2.2(p6), we have

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \|P_{C_n} y_n - z\|^2 \le \|y_n - z\|^2 - \|y_n - P_{C_n} y_n\|^2 \\ &= \|x_n - z - \tau_n \nabla f_n(x_n)\|^2 - \|y_n - P_{C_n} y_n\|^2 \\ &= \|x_n - z\|^2 - 2\tau_n \bigl\langle \nabla f_n(x_n), x_n - z \bigr\rangle + \tau_n^2 \|\nabla f_n(x_n)\|^2 - \|y_n - P_{C_n} y_n\|^2. \end{aligned} \tag{3.9}$$

In view of Proposition 2.2, we know that $I - P_{Q_n}$ is firmly nonexpansive for every $n \ge 1$, and from this one derives

$$\begin{aligned} \bigl\langle \nabla f_n(x_n), x_n - z \bigr\rangle &= \bigl\langle (I - P_{Q_n})Ax_n, Ax_n - Az \bigr\rangle \\ &= \bigl\langle (I - P_{Q_n})Ax_n - (I - P_{Q_n})Az, Ax_n - Az \bigr\rangle \\ &\ge \bigl\|(I - P_{Q_n})Ax_n\bigr\|^2 = 2 f_n(x_n), \end{aligned} \tag{3.10}$$

from which it turns out that

$$\begin{aligned} \|x_{n+1} - z\|^2 &\le \|x_n - z\|^2 - 4\tau_n f_n(x_n) + \tau_n^2 \|\nabla f_n(x_n)\|^2 - \|y_n - P_{C_n} y_n\|^2 \\ &= \|x_n - z\|^2 - \frac{4\rho_n f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2} + \frac{\rho_n^2 f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^4}\,\|\nabla f_n(x_n)\|^2 - \|y_n - P_{C_n} y_n\|^2 \\ &\le \|x_n - z\|^2 - \rho_n(4 - \rho_n)\,\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2} - \|y_n - P_{C_n} y_n\|^2, \end{aligned} \tag{3.11}$$

which in turn allows us to deduce the following conclusions:

  (i) $\{x_n\}$ is Fejér monotone w.r.t. $\Gamma$; in particular,

  (ii) $\{x_n\}$ is a bounded sequence;

  (iii) $\sum_{n=1}^{\infty} \rho_n(4 - \rho_n) f_n^2(x_n)/(\|\nabla f_n(x_n)\| + \sigma_n)^2 < \infty$; and

  (iv) $\sum_{n=1}^{\infty} \|y_n - P_{C_n} y_n\|^2 < \infty$. (3.12)

By Proposition 2.4(i), to show that $\{x_n\}$ converges weakly, it suffices to show that $\omega_w(x_n) \subset \Gamma$. To this end, take $x^* \in \omega_w(x_n)$ and let $\{x_{n_k}\}$ be a subsequence of $\{x_n\}$ converging weakly to $x^*$. By our assumption that $\liminf_{n \to \infty} \rho_n(4 - \rho_n) \ge \rho > 0$, without loss of generality we may assume that $\rho_n(4 - \rho_n) \ge \frac{\rho}{2}$ for all $n \ge 1$. It then follows from (3.12)(iii) that

$$\frac{f_n(x_n)}{\|\nabla f_n(x_n)\| + \sigma_n} \to 0 \quad (n \to \infty). \tag{3.13}$$

Note that $\|\nabla f_n(x_n)\| + \sigma_n \le \|A\|^2 \|x_n - z\| + 1$ for $z \in \Gamma$. This, together with (3.13), implies that $f_n(x_n) \to 0$, that is, $\|(I - P_{Q_n})Ax_n\| \to 0$. By our assumption that $\partial q$ is a bounded mapping, there exists a constant $M > 0$ such that $\|\eta_n\| \le M$ for all $\eta_n \in \partial q(Ax_n)$.

Since $P_{Q_n}(Ax_n) \in Q_n$, by the definition of $Q_n$ we have

$$q(Ax_n) \le \bigl\langle \eta_n, Ax_n - P_{Q_n}(Ax_n) \bigr\rangle \le M \bigl\|(I - P_{Q_n})Ax_n\bigr\| \to 0. \tag{3.14}$$

Noting that $Ax_{n_k} \rightharpoonup Ax^*$ and using the w-lsc of $q$, we have $q(Ax^*) \le \liminf_{k \to \infty} q(Ax_{n_k}) \le 0$, which implies that $Ax^* \in Q$. We next prove $x^* \in C$. Firstly, from (3.12)(iv), we know that $\|y_n - P_{C_n} y_n\| \to 0$. Notice that

$$\|y_n - x_n\| = \tau_n \|\nabla f_n(x_n)\| \le \frac{4 f_n(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2}\,\|\nabla f_n(x_n)\| \le \frac{4 f_n(x_n)}{\|\nabla f_n(x_n)\| + \sigma_n} \to 0;$$

hence

$$\|x_n - P_{C_n} x_n\| \le \|x_n - y_n\| + \|y_n - P_{C_n} y_n\| + \|P_{C_n} y_n - P_{C_n} x_n\| \le 2\|x_n - y_n\| + \|y_n - P_{C_n} y_n\| \to 0.$$

Since $\partial c$ is a bounded mapping, there exists $M_1 > 0$ such that

$$\|\xi_n\| \le M_1, \quad \forall \xi_n \in \partial c(x_n).$$

Since $P_{C_n}(x_n) \in C_n$, by the definition of $C_n$ we have

$$c(x_n) \le \bigl\langle \xi_n, x_n - P_{C_n} x_n \bigr\rangle \le M_1 \bigl\|(I - P_{C_n})x_n\bigr\| \to 0.$$

Then the w-lsc of $c$ implies that $c(x^*) \le \liminf_{k \to \infty} c(x_{n_k}) \le 0$; thus $x^* \in C$ and $\omega_w(x_n) \subset \Gamma$, completing the proof. □

We next introduce a slightly more general algorithm.

Algorithm 3.3 Choose initial data $x_1 \in H_1$ arbitrarily. Assume that the $n$th iterate $x_n$ has been constructed; then compute the $(n+1)$th iterate $x_{n+1}$ via the formula

$$x_{n+1} = \beta_n x_n + (1 - \beta_n) P_{C_n}\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr), \quad n \ge 1, \tag{3.15}$$

where the step size $\tau_n$ is as before and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying $\limsup_{n \to \infty} \beta_n < 1$. If $x_{n+1} = x_n$ for some $n \ge 1$, then $x_n$ is a solution of SFP (1.1) and the iterative process stops; otherwise, set $n := n + 1$ and return to (3.15) to compute the next iterate $x_{n+2}$.
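Relative to the sketch of Algorithm 3.1 above, only the update line changes. With the illustrative constant choice $\beta_n \equiv 0.5$ (which clearly satisfies $\limsup_{n \to \infty} \beta_n < 1$), the body of the loop becomes:

```python
# Inside the loop of the algorithm_31 sketch, replace the update by (3.15):
beta = 0.5   # illustrative beta_n in (0,1) with limsup < 1
x_new = beta * x + (1 - beta) * halfspace_proj(x - tau * grad_f, xi, xi @ x - c(x))
```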

We have the following weak convergence theorem.

Theorem 3.4 Assume that $\liminf_{n \to \infty} \rho_n(4 - \rho_n) \ge \rho > 0$. Then the sequence $\{x_n\}$ generated by Algorithm 3.3 converges weakly to a solution $\hat{x}$ of SFP (1.1), where $\hat{x} = \lim_{n \to \infty} P_\Gamma x_n$.

Proof Let $z \in \Gamma$ be fixed and set $y_n = x_n - \tau_n \nabla f_n(x_n)$. By virtue of (2.1), (2.2), (3.15), (3.10), and Proposition 2.2(p6), we have

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \bigl\|\beta_n (x_n - z) + (1 - \beta_n)(P_{C_n} y_n - z)\bigr\|^2 \\ &= \beta_n \|x_n - z\|^2 + (1 - \beta_n)\|P_{C_n} y_n - z\|^2 - \beta_n(1 - \beta_n)\|x_n - P_{C_n} y_n\|^2 \\ &\le \beta_n \|x_n - z\|^2 + (1 - \beta_n)\|y_n - z\|^2 - (1 - \beta_n)\|y_n - P_{C_n} y_n\|^2 - \beta_n(1 - \beta_n)\|x_n - P_{C_n} y_n\|^2 \\ &= \|x_n - z\|^2 - 2(1 - \beta_n)\tau_n \bigl\langle \nabla f_n(x_n), x_n - z \bigr\rangle + (1 - \beta_n)\tau_n^2 \|\nabla f_n(x_n)\|^2 \\ &\qquad - (1 - \beta_n)\|y_n - P_{C_n} y_n\|^2 - \beta_n(1 - \beta_n)\|x_n - P_{C_n} y_n\|^2 \\ &\le \|x_n - z\|^2 - (1 - \beta_n)\rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2} - (1 - \beta_n)\|y_n - P_{C_n} y_n\|^2, \end{aligned} \tag{3.16}$$

which implies that

  (i) $\{x_n\}$ is Fejér monotone w.r.t. $\Gamma$; in particular,

  (ii) $\{x_n\}$ is a bounded sequence;

  (iii) $\sum_{n=1}^{\infty} (1 - \beta_n)\rho_n(4 - \rho_n) f_n^2(x_n)/(\|\nabla f_n(x_n)\| + \sigma_n)^2 < \infty$; and

  (iv) $\sum_{n=1}^{\infty} (1 - \beta_n)\|y_n - P_{C_n} y_n\|^2 < \infty$.

By our assumptions on $\{\beta_n\}$ and $\{\rho_n\}$, we have $\frac{f_n(x_n)}{\|\nabla f_n(x_n)\| + \sigma_n} \to 0$ and $\|y_n - P_{C_n} y_n\| \to 0$; the rest of the argument follows exactly from the corresponding parts of the proof of Theorem 3.2, so we omit the details. This completes the proof. □

We remark that Theorem 3.4 generalizes Theorem 3.2: if we take $\beta_n \equiv 0$ in Theorem 3.4, then we obtain Theorem 3.2. It would be interesting to compare the convergence rates of Algorithms 3.1 and 3.3.

Generally speaking, Algorithms 3.1 and 3.3 enjoy only weak convergence in the framework of infinite-dimensional spaces, and therefore modifications of Algorithms 3.1 and 3.3 are needed in order to achieve strong convergence. Considerable efforts have been made and several interesting results have been reported recently; see [17–20]. Below is our modification of Algorithms 3.1 and 3.3.

Algorithm 3.5 Choose arbitrary initial data $x_1 \in H_1$. Assume that the $n$th iterate $x_n \in H_1$ has been constructed. Set

$$y_n = P_{C_n}\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr), \quad n \ge 1, \tag{3.17}$$

with the step size $\tau_n$ given by (1.12), and define two half-spaces $Y_n$ and $Z_n$ by

$$Y_n = \Bigl\{z \in H_1 : \|y_n - z\|^2 \le \|x_n - z\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2} \Bigr\}, \tag{3.18}$$

$$Z_n = \bigl\{z \in H_1 : \langle x_1 - x_n, z - x_n \rangle \le 0 \bigr\}. \tag{3.19}$$

The $(n+1)$th iterate $x_{n+1}$ is then constructed by the formula

$$x_{n+1} = P_{Y_n \cap Z_n}(x_1). \tag{3.20}$$

If $x_{n+1} = x_n$ for some $n \ge 1$, then $x_n$ is a solution of SFP (1.1) and the iterative process stops; otherwise, set $n := n + 1$ and return to (3.17)-(3.20) to compute the next iterate $x_{n+2}$.
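The only step of Algorithm 3.5 not already covered above is the projection onto $Y_n \cap Z_n$. Expanding the squares in (3.18) shows that $Y_n$ is itself a half-space, $\{z : 2\langle x_n - y_n, z \rangle \le \|x_n\|^2 - \|y_n\|^2 - \rho_n(4 - \rho_n) f_n^2(x_n)/(\|\nabla f_n(x_n)\| + \sigma_n)^2\}$, so $x_{n+1}$ is a projection onto the intersection of two half-spaces. One way to compute it numerically (our suggestion, not the paper's) is Dykstra's algorithm, sketched below with the half-space projector from the Algorithm 3.1 sketch:

```python
import numpy as np

def dykstra_intersection(x0, projections, n_iter=200):
    """Dykstra's algorithm for projecting x0 onto an intersection of convex
    sets, each given by its projector; unlike plain alternating projections,
    it converges to the exact projection of x0."""
    x = np.asarray(x0, dtype=float).copy()
    increments = [np.zeros_like(x) for _ in projections]
    for _ in range(n_iter):
        for i, proj in enumerate(projections):
            y = proj(x + increments[i])
            increments[i] = x + increments[i] - y
            x = y
    return x

# Usage for (3.20), with Y_n and Z_n written as half-spaces {z : <a,z> <= b}:
# proj_Y = lambda v: halfspace_proj(v, 2 * (x_n - y_n), b_Y)  # b_Y as above
# proj_Z = lambda v: halfspace_proj(v, x1 - x_n, (x1 - x_n) @ x_n)
# x_next = dykstra_intersection(x1, [proj_Y, proj_Z])
```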

Theorem 3.6 Assume that $\liminf_{n \to \infty} \rho_n(4 - \rho_n) \ge \rho > 0$. Then the sequence $\{x_n\}$ generated by Algorithm 3.5 converges strongly to a solution $x^*$ of SFP (1.1), where $x^* = P_\Gamma(x_1)$.

Proof Firstly, we show that

$$\Gamma \subset Y_n \cap Z_n, \tag{3.21}$$

for all $n \ge 1$. Indeed, in view of (3.11), we have $\Gamma \subset Y_n$ for all $n \ge 1$. To show that (3.21) holds, it therefore suffices to show that $\Gamma \subset Z_n$ for all $n \ge 1$. We proceed by induction. Since $Z_1 = H_1$, we have $\Gamma \subset Z_1$. Assume that $\Gamma \subset Z_k$ for some $k \ge 1$; we show that $\Gamma \subset Z_{k+1}$. Since $\Gamma \subset Y_k \cap Z_k$ and $Y_k \cap Z_k$ is closed and convex, $x_{k+1} = P_{Y_k \cap Z_k}(x_1)$ is well defined. It follows from Proposition 2.2(p1) that

$$\langle x_1 - x_{k+1}, z - x_{k+1} \rangle \le 0, \quad \forall z \in \Gamma. \tag{3.22}$$

This implies that $z \in Z_{k+1}$ for every $z \in \Gamma$, and hence $\Gamma \subset Z_{k+1}$. Consequently, $\Gamma \subset Z_n$ for all $n \ge 1$, and thus (3.21) holds true.

From the definition of $Z_n$ and Proposition 2.2(p1), we see that $x_n = P_{Z_n} x_1$. It then follows from (3.20) that

$$\|x_n - x_1\| = \|P_{Z_n} x_1 - x_1\| \le \|x_{n+1} - x_1\| = \|P_{Z_{n+1}} x_1 - x_1\| \le \|P_\Gamma x_1 - x_1\|. \tag{3.23}$$

This shows that $\lim_{n \to \infty} \|x_n - x_1\|$ exists; denote it by $d$.

Noting that $x_{n+1} \in Z_n$, we have

$$\langle x_{n+1} - x_n, x_n - x_1 \rangle \ge 0. \tag{3.24}$$

By virtue of (2.1) and (3.24), we obtain

$$\|x_{n+1} - x_1\|^2 - \|x_n - x_1\|^2 = \|x_{n+1} - x_n\|^2 + 2\langle x_{n+1} - x_n, x_n - x_1 \rangle \ge \|x_{n+1} - x_n\|^2. \tag{3.25}$$

From this one derives that $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$.

Since $x_{n+1} \in Y_n$, we have

$$\|y_n - x_{n+1}\|^2 \le \|x_n - x_{n+1}\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2}, \tag{3.26}$$

from which it turns out that

$$\|y_n - x_{n+1}\| \le \|x_n - x_{n+1}\| \to 0 \tag{3.27}$$

and

$$\rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2} \le \|x_n - x_{n+1}\|^2 \to 0. \tag{3.28}$$

At this point, we show that $\omega_w(x_n) \subset \Gamma$. To this end, take $\hat{x} \in \omega_w(x_n)$. Then there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j} \rightharpoonup \hat{x}$. By our assumption that $\liminf_{n \to \infty} \rho_n(4 - \rho_n) \ge \rho > 0$, from (3.28) we conclude that

$$\frac{f_n(x_n)}{\|\nabla f_n(x_n)\| + \sigma_n} \to 0, \tag{3.29}$$

which implies that $f_n(x_n) \to 0$, since $\{\|\nabla f_n(x_n)\| + \sigma_n\}$ is bounded. Noting that $P_{Q_n}(Ax_n) \in Q_n$ and that $\partial q$ is a bounded mapping (with $\|\eta_n\| \le M$), we have

$$q(Ax_n) \le \bigl\langle \eta_n, Ax_n - P_{Q_n}(Ax_n) \bigr\rangle \le M \bigl\|(I - P_{Q_n})Ax_n\bigr\| \to 0. \tag{3.30}$$

Since $Ax_{n_j} \rightharpoonup A\hat{x}$ and $q$ is w-lsc on $H_2$, we derive

$$q(A\hat{x}) \le \liminf_{j \to \infty} q(Ax_{n_j}) \le 0,$$

which implies that $A\hat{x} \in Q$.

We next show $\hat{x} \in C$. Indeed, from (3.27) we have

$$\|y_n - x_n\| \le \|y_n - x_{n+1}\| + \|x_{n+1} - x_n\| \to 0. \tag{3.31}$$

From (3.29), we also have

$$\tau_n \|\nabla f_n(x_n)\| = \frac{\rho_n f_n(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2}\,\|\nabla f_n(x_n)\| \le \frac{4 f_n(x_n)}{\|\nabla f_n(x_n)\| + \sigma_n} \to 0. \tag{3.32}$$

Consequently, it follows from (3.31) and (3.32) that

$$\|x_n - P_{C_n} x_n\| \le \|x_n - y_n\| + \|y_n - P_{C_n} x_n\| \le \|x_n - y_n\| + \tau_n \|\nabla f_n(x_n)\| \to 0. \tag{3.33}$$

Since $P_{C_n}(x_n) \in C_n$ and $\partial c$ is a bounded mapping, say $\|\xi_n\| \le M_2$ for all $\xi_n \in \partial c(x_n)$, we immediately obtain

$$c(x_n) \le \bigl\langle \xi_n, x_n - P_{C_n} x_n \bigr\rangle \le M_2 \bigl\|(I - P_{C_n})x_n\bigr\| \to 0.$$

Then the w-lsc of $c$ ensures that

$$c(\hat{x}) \le \liminf_{j \to \infty} c(x_{n_j}) \le 0,$$

from which it turns out that $\hat{x} \in C$, and thus $\hat{x} \in \Gamma$. It follows from (3.21) that $\hat{x} \in Z_n$, which implies that

$$\langle x_1 - x_n, \hat{x} - x_n \rangle \le 0, \tag{3.34}$$

for all $n \ge 1$. Thus, from (3.34) we obtain

$$\|x_n - \hat{x}\|^2 \le \langle \hat{x} - x_1, \hat{x} - x_n \rangle, \tag{3.35}$$

in particular,

$$\|x_{n_j} - \hat{x}\|^2 \le \langle \hat{x} - x_1, \hat{x} - x_{n_j} \rangle;$$

consequently, $x_{n_j} \to \hat{x}$, since $x_{n_j} \rightharpoonup \hat{x}$.

At this point, by virtue of (3.19) and (3.21), we have

$$\langle x_1 - x_n, z - x_n \rangle \le 0, \quad \forall z \in \Gamma, \tag{3.36}$$

in particular,

$$\langle x_1 - x_{n_j}, z - x_{n_j} \rangle \le 0, \quad \forall z \in \Gamma. \tag{3.37}$$

Thus, upon taking the limit as $j \to \infty$ in (3.37), we obtain

$$\langle x_1 - \hat{x}, z - \hat{x} \rangle \le 0, \quad \forall z \in \Gamma. \tag{3.38}$$

This implies that $\hat{x} = P_\Gamma x_1$ by Proposition 2.2(p1). Therefore, by the uniqueness of $P_\Gamma x_1$, the whole sequence $\{x_n\}$ converges strongly to $\hat{x} = P_\Gamma x_1$. This completes the proof. □

Algorithm 3.7 Choose arbitrary initial data $x_1 \in H_1$. Assume that the $n$th iterate $x_n \in H_1$ has been constructed. Set

$$y_n = \beta_n x_n + (1 - \beta_n) P_{C_n}\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr), \tag{3.39}$$

with the step size $\tau_n$ given by (1.12) and the relaxation factor $\beta_n$ in $[0,1)$ satisfying $\limsup_{n \to \infty} \beta_n < 1$. Define two half-spaces $Y_n$ and $Z_n$ by

$$Y_n = \Bigl\{z \in H_1 : \|y_n - z\|^2 \le \|x_n - z\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2} - (1 - \beta_n)\bigl\|x_n - \tau_n \nabla f_n(x_n) - P_{C_n}\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr)\bigr\|^2 \Bigr\}, \tag{3.40}$$

$$Z_n = \bigl\{z \in H_1 : \langle x_1 - x_n, z - x_n \rangle \le 0 \bigr\}. \tag{3.41}$$

The $(n+1)$th iterate $x_{n+1}$ is then constructed by the formula

$$x_{n+1} = P_{Y_n \cap Z_n}(x_1). \tag{3.42}$$

If $x_{n+1} = x_n$ for some $n \ge 1$, then $x_n$ is a solution of SFP (1.1) and the iterative process stops; otherwise, set $n := n + 1$ and return to (3.39)-(3.42) to compute the next iterate $x_{n+2}$.

Along the lines of the proof of Theorem 3.6, we can prove the following.

Theorem 3.8 Assume that $\liminf_{n \to \infty} \rho_n(4 - \rho_n) \ge \rho > 0$; then the sequence $\{x_n\}$ generated by Algorithm 3.7 converges strongly to a solution $x^*$ of SFP (1.1), where $x^* = P_\Gamma(x_1)$.

The proof of Theorem 3.8 is similar to that of Theorem 3.6, and therefore we omit its details.

We next turn our attention to another kind of algorithm.

Algorithm 3.9 Choose arbitrary initial data $x_1 \in H_1$. Assume that the $n$th iterate $x_n \in H_1$ has been constructed; then compute the $(n+1)$th iterate $x_{n+1}$ via the recursion

$$x_{n+1} = P_{C_n}\bigl(\alpha_n g(x_n) + (1 - \alpha_n)\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr)\bigr), \quad n \ge 1, \tag{3.43}$$

where the step size $\tau_n$ is given by (1.12), $g: H_1 \to H_1$ is a contraction with contraction coefficient $\delta \in (0,1)$, and $\{\alpha_n\}$ is a real sequence in $(0,1)$. If $x_{n+1} = x_n$ for some $n \ge 1$, then $x_n$ is an approximate solution of SFP (1.1) (the approximation rule is given below) and the iterative process stops; otherwise, set $n := n + 1$ and return to (3.43) to compute the next iterate $x_{n+2}$.

We point out that if $x_{n+1} = x_n$ for some $n \ge 1$, then (3.43) reduces to

$$x_n = P_{C_n}\bigl(\alpha_n g(x_n) + (1 - \alpha_n)\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr)\bigr), \quad n \ge 1. \tag{3.44}$$

This implies that $x_n \in C_n$ and hence $x_n \in C$. Write

$$e(x_n, \tau_n) = \bigl\|x_n - P_{C_n}\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr)\bigr\|.$$

Then it follows from (3.44) and the nonexpansivity of $P_{C_n}$ that

$$e(x_n, \tau_n) \le \alpha_n \bigl\|g(x_n) - x_n + \tau_n \nabla f_n(x_n)\bigr\|, \quad n \ge 1. \tag{3.45}$$

Such an $x_n$ is called an approximate solution of SFP (1.1). If $e(x_n, \tau_n) = 0$, then $x_n$ is a solution of SFP (1.1).
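A sketch of one pass of the viscosity-type recursion (3.43), reusing tau, grad_f, xi, c, and halfspace_proj from the Algorithm 3.1 sketch; the contraction g and the choice $\alpha_n = 1/(n+1)$ (which satisfies condition (C1) of Theorem 3.10 below) are illustrative assumptions.

```python
# One iteration of (3.43); g is any contraction, here a simple scaling.
g = lambda v: 0.5 * v                         # contraction with coefficient 0.5
alpha = 1.0 / (n + 1)                         # alpha_n -> 0, sum alpha_n = inf
z = alpha * g(x) + (1 - alpha) * (x - tau * grad_f)
x_new = halfspace_proj(z, xi, xi @ x - c(x))  # x_{n+1} = P_{C_n}(z_n), cf. (3.46)
```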

Theorem 3.10 Assume that $\{\alpha_n\}$ and $\{\rho_n\}$ satisfy the conditions (C1) $\alpha_n \to 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$ and (C2) $\liminf_{n \to \infty} \rho_n(4 - \rho_n) > 0$, respectively. Then the sequence $\{x_n\}$ generated by Algorithm 3.9 converges strongly to a solution $x^*$ of SFP (1.1), where $x^* = P_\Gamma g(x^*)$; equivalently, $x^*$ solves the following variational inequality:

$$\bigl\langle (I - g)x^*, x - x^* \bigr\rangle \ge 0, \quad \forall x \in \Gamma. \tag{VI}$$

Proof First of all, we show that there exists a unique $x^* \in \Gamma$ such that $x^* = P_\Gamma g(x^*)$. Indeed, since $P_\Gamma g: H_1 \to H_1$ is a contraction with contraction coefficient $\delta \in (0,1)$, by the Banach contraction mapping principle there exists a unique $x^* \in H_1$ such that $x^* = P_\Gamma g(x^*) \in \Gamma$; equivalently, $x^*$ solves the variational inequality (VI).

Write $y_n = x_n - \tau_n \nabla f_n(x_n)$ and $z_n = \alpha_n g(x_n) + (1 - \alpha_n) y_n$. Then (3.43) can be rewritten as

$$x_{n+1} = P_{C_n} z_n. \tag{3.46}$$

Noting that $x^* \in \Gamma$ and $Q \subset Q_n$ for all $n \ge 1$, we have $Ax^* \in Q_n$ for all $n \ge 1$, and hence

$$(I - P_{Q_n})Ax^* = 0.$$

Since $I - P_{Q_n}$ is firmly nonexpansive, we have

$$\begin{aligned} \bigl\langle \nabla f_n(x_n), x_n - x^* \bigr\rangle &= \bigl\langle (I - P_{Q_n})Ax_n, Ax_n - Ax^* \bigr\rangle \\ &= \bigl\langle (I - P_{Q_n})Ax_n - (I - P_{Q_n})Ax^*, Ax_n - Ax^* \bigr\rangle \\ &\ge \bigl\|(I - P_{Q_n})Ax_n\bigr\|^2 = 2 f_n(x_n). \end{aligned} \tag{3.47}$$

By virtue of (2.1) and (3.47), we obtain

$$\begin{aligned} \|y_n - x^*\|^2 &= \|x_n - x^* - \tau_n \nabla f_n(x_n)\|^2 \\ &= \|x_n - x^*\|^2 - 2\tau_n \bigl\langle \nabla f_n(x_n), x_n - x^* \bigr\rangle + \tau_n^2 \|\nabla f_n(x_n)\|^2 \\ &\le \|x_n - x^*\|^2 - 4\tau_n f_n(x_n) + \tau_n^2 \|\nabla f_n(x_n)\|^2 \\ &\le \|x_n - x^*\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2}, \end{aligned} \tag{3.48}$$

in particular, we have

$$\|y_n - x^*\| \le \|x_n - x^*\|, \tag{3.49}$$

for all $n \ge 1$.

We now estimate $\|z_n - x^*\|^2$. By the definition of the norm and the Cauchy-Schwarz inequality, we obtain

$$\begin{aligned} \|z_n - x^*\|^2 &= \langle z_n - x^*, z_n - x^* \rangle \\ &= \alpha_n \bigl\langle g(x_n) - g(x^*), z_n - x^* \bigr\rangle + \alpha_n \bigl\langle g(x^*) - x^*, z_n - x^* \bigr\rangle + (1 - \alpha_n)\bigl\langle y_n - x^*, z_n - x^* \bigr\rangle \\ &\le \frac{\delta^2 \alpha_n}{2}\|x_n - x^*\|^2 + \frac{\alpha_n}{2}\|z_n - x^*\|^2 + \alpha_n \bigl\langle g(x^*) - x^*, z_n - x^* \bigr\rangle \\ &\qquad + \frac{1 - \alpha_n}{2}\|y_n - x^*\|^2 + \frac{1 - \alpha_n}{2}\|z_n - x^*\|^2, \end{aligned}$$

from which it turns out that

$$\|z_n - x^*\|^2 \le \delta^2 \alpha_n \|x_n - x^*\|^2 + 2\alpha_n \bigl\langle g(x^*) - x^*, z_n - x^* \bigr\rangle + (1 - \alpha_n)\|y_n - x^*\|^2. \tag{3.50}$$

Substituting (3.48) into (3.50) yields

$$\|z_n - x^*\|^2 \le \bigl[1 - (1 - \delta^2)\alpha_n\bigr]\|x_n - x^*\|^2 + 2\alpha_n \bigl\langle g(x^*) - x^*, z_n - x^* \bigr\rangle - (1 - \alpha_n)\rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2}. \tag{3.51}$$

By virtue of Proposition 2.2(p6), (3.48), and (3.51), noting that $x^* \in C \subset C_n$ for all $n \ge 1$, we have

$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \|P_{C_n} z_n - x^*\|^2 \le \|z_n - x^*\|^2 - \|z_n - P_{C_n} z_n\|^2 \\ &\le \bigl[1 - (1 - \delta^2)\alpha_n\bigr]\|x_n - x^*\|^2 + 2\alpha_n \bigl\langle (g - I)x^*, z_n - x^* \bigr\rangle \\ &\qquad - (1 - \alpha_n)\rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2} - \bigl\|(I - P_{C_n})z_n\bigr\|^2. \end{aligned} \tag{3.52}$$

We next show that $\{x_n\}$ is bounded. Using (3.43) and (3.49), we have

$$\begin{aligned} \|x_{n+1} - x^*\| &\le \alpha_n \|g(x_n) - x^*\| + (1 - \alpha_n)\|y_n - x^*\| \\ &\le \delta\alpha_n \|x_n - x^*\| + \alpha_n \|g(x^*) - x^*\| + (1 - \alpha_n)\|x_n - x^*\| \\ &= \bigl[1 - (1 - \delta)\alpha_n\bigr]\|x_n - x^*\| + (1 - \delta)\alpha_n \frac{\|g(x^*) - x^*\|}{1 - \delta} \\ &\le \max\Bigl\{\|x_n - x^*\|,\ \frac{\|g(x^*) - x^*\|}{1 - \delta}\Bigr\}, \end{aligned}$$

so, by induction, $\|x_n - x^*\| \le \max\{\|x_1 - x^*\|, \|g(x^*) - x^*\|/(1 - \delta)\} =: M$ for all $n \ge 1$; therefore $\{x_n\}$ is bounded, and so are $\{y_n\}$ and $\{z_n\}$.

Finally, we show that $x_n \to x^*$ as $n \to \infty$.

Set $s_n = \|x_n - x^*\|^2$ and assume that $\rho_n(4 - \rho_n) \ge \rho$ for all $n \ge 1$. Then (3.52) reduces to

$$s_{n+1} - s_n + (1 - \delta^2)\alpha_n s_n + (1 - \alpha_n)\rho \frac{f_n^2(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2} + \bigl\|(I - P_{C_n})z_n\bigr\|^2 \le 2\alpha_n \bigl\langle (g - I)x^*, z_n - x^* \bigr\rangle. \tag{3.53}$$

We consider two possible cases.

Case 1. $\{s_n\}$ is eventually decreasing, i.e., there exists some integer $n_0 \ge 1$ such that

$$s_{n+1} \le s_n \quad \text{for all } n \ge n_0,$$

which means that $\lim_{n \to \infty} s_n$ exists. Note that $\{z_n\}$ is bounded and $\alpha_n \to 0$. Letting $n \to \infty$ in (3.53) yields $\frac{f_n(x_n)}{\|\nabla f_n(x_n)\| + \sigma_n} \to 0$ and $\|(I - P_{C_n})z_n\| \to 0$. Since $\{\|\nabla f_n(x_n)\| + \sigma_n\}$ is a bounded sequence, we conclude that $f_n(x_n) \to 0$ and hence

$$\bigl\|(I - P_{Q_n})Ax_n\bigr\| \to 0. \tag{3.54}$$

Observe that $\|z_n - y_n\| \le \alpha_n \|g(x_n) - y_n\| \le \alpha_n M_1 \to 0$,

$$\|y_n - x_n\| = \tau_n \|\nabla f_n(x_n)\| = \frac{\rho_n f_n(x_n)}{(\|\nabla f_n(x_n)\| + \sigma_n)^2}\,\|\nabla f_n(x_n)\| \le \frac{4 f_n(x_n)}{\|\nabla f_n(x_n)\| + \sigma_n} \to 0,$$

and

$$\|x_n - P_{C_n} x_n\| \le \|x_n - z_n\| + \|z_n - P_{C_n} z_n\| + \|P_{C_n} z_n - P_{C_n} x_n\| \le 2\|x_n - z_n\| + \bigl\|(I - P_{C_n})z_n\bigr\| \to 0. \tag{3.55}$$

We may assume that

$$\limsup_{n \to \infty} \bigl\langle (g - I)x^*, z_n - x^* \bigr\rangle = \limsup_{n \to \infty} \bigl\langle (g - I)x^*, x_n - x^* \bigr\rangle = \lim_{k \to \infty} \bigl\langle (g - I)x^*, x_{n_k} - x^* \bigr\rangle. \tag{3.56}$$

Without loss of generality, we assume that $x_{n_k} \rightharpoonup \hat{x}$ as $k \to \infty$; then $Ax_{n_k} \rightharpoonup A\hat{x}$ as $k \to \infty$. Since $P_{Q_{n_k}}(Ax_{n_k}) \in Q_{n_k}$, $\{\eta_{n_k}\} \subset \partial q(Ax_{n_k})$ is a bounded sequence, and $\|(I - P_{Q_{n_k}})Ax_{n_k}\| \to 0$ as $k \to \infty$ by (3.54), we deduce that

$$q(Ax_{n_k}) \le \bigl\langle \eta_{n_k}, Ax_{n_k} - P_{Q_{n_k}}(Ax_{n_k}) \bigr\rangle \le \|\eta_{n_k}\| \bigl\|(I - P_{Q_{n_k}})Ax_{n_k}\bigr\| \to 0$$

as $k \to \infty$; then the w-lsc of $q$ implies that

$$q(A\hat{x}) \le \liminf_{k \to \infty} q(Ax_{n_k}) \le 0,$$

and thus $A\hat{x} \in Q$.

On the other hand, since $P_{C_{n_k}}(x_{n_k}) \in C_{n_k}$, $\{\xi_{n_k}\} \subset \partial c(x_{n_k})$ is a bounded sequence, and $\|(I - P_{C_{n_k}})x_{n_k}\| \to 0$ by (3.55), we derive

$$c(x_{n_k}) \le \bigl\langle \xi_{n_k}, x_{n_k} - P_{C_{n_k}} x_{n_k} \bigr\rangle \le \|\xi_{n_k}\| \bigl\|(I - P_{C_{n_k}})x_{n_k}\bigr\| \to 0$$

as $k \to \infty$; then the w-lsc of $c$ implies that

$$c(\hat{x}) \le \liminf_{k \to \infty} c(x_{n_k}) \le 0,$$

and thus $\hat{x} \in C$. Consequently, $\hat{x} \in C \cap A^{-1}(Q) = \Gamma$. It follows from (3.56) and Proposition 2.2(p1) that

$$\limsup_{n \to \infty} \bigl\langle (g - I)x^*, z_n - x^* \bigr\rangle = \bigl\langle (g - I)x^*, \hat{x} - x^* \bigr\rangle \le 0. \tag{3.57}$$

Taking (3.53) into account, we have

$$s_{n+1} \le \bigl[1 - (1 - \delta^2)\alpha_n\bigr] s_n + 2\alpha_n \bigl\langle (g - I)x^*, z_n - x^* \bigr\rangle. \tag{3.58}$$

Applying Proposition 2.5 to (3.58), we derive that $s_n \to 0$ as $n \to \infty$, i.e., $x_n \to x^*$ as $n \to \infty$.

Case 2. $\{s_n\}$ is not eventually decreasing. In this case, we can find an integer $n_0 \ge 1$ such that $s_{n_0} < s_{n_0+1}$. Define $J(n) := \{n_0 \le k \le n : s_k < s_{k+1}\}$ for $n > n_0$. Then $J(n) \neq \emptyset$ and $J(n) \subset J(n+1)$. Define $\tau: \mathbb{N} \to \mathbb{N}$ by

$$\tau(n) := \max J(n), \quad n > n_0.$$

Then $\tau(n) \to \infty$ as $n \to \infty$, $s_{\tau(n)} \le s_{\tau(n)+1}$ for all $n > n_0$, and $s_n \le s_{\tau(n)+1}$ for all $n > n_0$; see [20] for details.

Since $s_{\tau(n)} \le s_{\tau(n)+1}$ for all $n > n_0$, it follows from (3.53) that

$$s_{\tau(n)+1} - s_{\tau(n)} \to 0, \qquad \rho \frac{f_{\tau(n)}^2(x_{\tau(n)})}{\bigl(\|\nabla f_{\tau(n)}(x_{\tau(n)})\| + \sigma_{\tau(n)}\bigr)^2} \le M_1 \alpha_{\tau(n)} \to 0,$$

and $\|(I - P_{C_{\tau(n)}})z_{\tau(n)}\| \to 0$ as $n \to \infty$.

At this point, by a reasoning similar to the corresponding parts of Case 1, we can deduce that $\limsup_{n \to \infty} \langle (g - I)x^*, z_{\tau(n)} - x^* \rangle = \limsup_{n \to \infty} \langle (g - I)x^*, x_{\tau(n)} - x^* \rangle \le 0$.

Noting that $s_{\tau(n)} \le s_{\tau(n)+1}$, it follows from (3.53) that

$$s_{\tau(n)} \le \frac{2}{1 - \delta^2} \bigl\langle (g - I)x^*, z_{\tau(n)} - x^* \bigr\rangle,$$

from which one derives that $\limsup_{n \to \infty} s_{\tau(n)} \le 0$, and hence $s_{\tau(n)} \to 0$ as $n \to \infty$. From this it turns out that $s_{\tau(n)+1} \to 0$ as $n \to \infty$, since $s_{\tau(n)+1} - s_{\tau(n)} \to 0$ as $n \to \infty$. Consequently, $s_n \to 0$ as $n \to \infty$, since $0 \le s_n \le s_{\tau(n)+1} \to 0$. This completes the proof. □

By using an argument like that in Theorem 3.10, we obtain the following more general algorithm and convergence theorem.

Algorithm 3.11 Choose arbitrary initial data $x_1 \in H_1$. Assume that the $n$th iterate $x_n \in H_1$ has been constructed; then compute the $(n+1)$th iterate $x_{n+1}$ via the recursion

$$x_{n+1} = \beta_n x_n + (1 - \beta_n) P_{C_n}\bigl[\alpha_n g(x_n) + (1 - \alpha_n)\bigl(x_n - \tau_n \nabla f_n(x_n)\bigr)\bigr], \quad n \ge 1, \tag{3.59}$$

where $\{\beta_n\}$ is a real sequence in $[0,1)$ satisfying $\limsup_{n \to \infty} \beta_n < 1$, $\{\alpha_n\}$ is a real sequence in $(0,1)$ satisfying the conditions (C1) $\alpha_n \to 0$ and (C2) $\sum_{n=1}^{\infty} \alpha_n = \infty$, $g: H_1 \to H_1$ is a contraction with contraction coefficient $\delta \in (0,1)$, and $\tau_n$ is given by (1.12). If $x_{n+1} = x_n$ for some $n \ge 1$, then $x_n$ is an approximate solution of SFP (1.1) and the iterative process stops; otherwise, set $n := n + 1$ and return to (3.59) to compute the next iterate $x_{n+2}$.

Theorem 3.12 Assume that $\liminf_{n \to \infty} \rho_n(4 - \rho_n) > 0$. Then the sequence generated by Algorithm 3.11 converges strongly to a solution $x^*$ of SFP (1.1), where $x^* = P_\Gamma g(x^*)$; equivalently, $x^*$ is a solution of the variational inequality (VI).

4 Numerical experiments

In this section, we consider two typical numerical experiments to illustrate the performance of step size (1.12) with CQ-like algorithms. Firstly, we introduce a linear observation model, which covers many problems in signal and image processing:

$$y = Ax + \varepsilon, \quad x \in \mathbb{R}^N, \tag{4.1}$$

where $y \in \mathbb{R}^M$ is the observed or measured data with noise $\varepsilon$, and $A: \mathbb{R}^N \to \mathbb{R}^M$ denotes the bounded linear observation operator. In most inverse problems $A$ is sparse and its range is not closed, so $A$ is often ill-conditioned and the problem is ill-posed. When $x$ has a sparse expansion, finding the solutions of (4.1) can be cast as the least-squares problem

$$\min_{x \in \mathbb{R}^N} \frac{1}{2}\|y - Ax\|^2 \quad \text{subject to} \quad \|x\|_1 \le t, \tag{4.2}$$

for any real number $t > 0$.

When we set $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$ and $Q = \{y\}$, this is a particular case of SFP (1.1); see [11]. Therefore, we proceed by applying the CQ algorithm to solve (4.2). We compute the projection onto $C$ through a soft thresholding method; see [11, 21–23].
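For concreteness, one standard way to realize this projection for $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$ is the sort-and-scan soft-thresholding routine sketched below; [11, 21–23] describe several variants, so this particular implementation is our assumption rather than the paper's exact code.

```python
import numpy as np

def project_l1_ball(x, t):
    """Euclidean projection of x onto {z : ||z||_1 <= t} by soft thresholding:
    returns sign(x) * max(|x| - lam, 0) with lam chosen so the result has
    l1-norm exactly t (or x itself when it is already feasible)."""
    u = np.abs(x)
    if u.sum() <= t:
        return x.copy()
    s = np.sort(u)[::-1]                         # magnitudes, descending
    cssv = np.cumsum(s) - t                      # cumulative sums shifted by t
    k = np.nonzero(s > cssv / np.arange(1, len(s) + 1))[0][-1]
    lam = cssv[k] / (k + 1.0)                    # the soft-threshold level
    return np.sign(x) * np.maximum(u - lam, 0.0)
```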

Next, following the examples in [11, 22], we choose two similar particular problems covered by (4.1): compressed sensing and image deconvolution. The experiments compare the performance of the proposed step size (1.12) with the step size (1.11) of [11], and analyze some properties of (1.12).

4.1 Compressed sensing

In a general compressed sensing model, we set the length of the signal $x \in \mathbb{R}^N$ to $N = 2^{12}$. There are $m = 50$ spikes with amplitude $\pm 1$ distributed randomly over the whole domain. The plot can be seen at the top of Figure 1. We set the observation dimension to $M = 2^{10}$, and an $M \times N$ matrix $A$ is generated randomly. Standard Gaussian noise with variance $\sigma_\varepsilon^2 = 10^{-4}$ is added. Let $t = 50$ in (4.2).

Figure 1. Compressed sensing problem, from top to bottom: original signal, results of the CQ algorithm with step sizes (1.11), (1.12), and Algorithm 3.3.

For the step sizes (1.11) and (1.12), we always set the constant $\rho = 2$. For Algorithm 3.3, we set $\beta_n = 0.5$. All processes start from the initial signal $x_0 = 0$ and finish with the stop rule

$$\|x_{n+1} - x_n\| / \|x_n\| < 10^{-3}.$$

We calculate the mean squared error (MSE) of the results,

$$\mathrm{MSE} = \frac{1}{N}\|x^* - x\|^2,$$

where $x^*$ is an estimate of the original signal $x$.
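Putting the pieces together, a hedged sketch of this compressed-sensing run is given below; A, y, x_true, and N are assumed to be generated as described above, project_l1_ball is the routine sketched earlier in this section, and $\rho = 2$ with $\sigma_n = (n+1)^{-4}$ follows choices reported here.

```python
import numpy as np

x = np.zeros(N)                                  # initial signal x_0 = 0
for n in range(1, 10 ** 4):
    resid = A @ x - y                            # Q = {y}, so P_Q(Ax) = y
    f_n = 0.5 * resid @ resid
    grad_f = A.T @ resid
    tau = 2.0 * f_n / (np.linalg.norm(grad_f) + (n + 1.0) ** -4) ** 2  # (1.12)
    x_new = project_l1_ball(x - tau * grad_f, t=50.0)
    if np.linalg.norm(x_new - x) < 1e-3 * np.linalg.norm(x):  # stop rule
        x = x_new
        break
    x = x_new
mse = np.linalg.norm(x - x_true) ** 2 / N        # MSE of the recovered signal
```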

The second and third plots in Figure 1 correspond to the results of Algorithm 3.1 with step sizes (1.11) and (1.12), respectively. The result recovered by Algorithm 3.3 with step size (1.12) is shown in the fourth plot. For the fifth, we set $\beta_n = (n+1)^{-k}$, $k = 1, 2, 3, \ldots$; once $k \ge 3$, the iteration count of Algorithm 3.3 starts to approach the number in the second plot, and the restored precision is a little poorer than the others.

For (1.12) we first set $\sigma_n = \sigma = 0.5$; then, in order to study its effect on the convergence speed of the CQ algorithm, we let $\sigma_n = (n+1)^{-l}$, where $l \ge 1$ is an integer. In Figure 2 we find that when $l \ge 4$ the best MSE curves are obtained, and further increases change little. Therefore, $\sigma_n$ should be as small as possible.

Figure 2. The performance curves of MSE for different $\sigma_n$ over 100 iterations. The line with squares corresponds to $\sigma_n = 0.5$, x to $l = 1$, circles to $l = 4$, points to $l = 16$.

4.2 Image deconvolution

In this subsection, we apply Algorithms 3.1 and 3.3 to recover the blurred Cameraman image. In the experiments, following [22, 24], we employ Haar wavelets and the blur point spread function $h_{ij} = (1 + i^2 + j^2)^{-1}$ for $i, j = -4, \ldots, 4$; the noise variance is $\sigma^2 = 2$. The size of the image is $N = M = 256^2$. The threshold value is hand-tuned for the best SNR improvement. $t$ is the sum of all the original pixel values.

We observe the effect of $\sigma_n$ in (1.12); see Figure 3. We find that for the first several steps the SNR curves are similar; after 19 iterations, however, $\sigma = 0.5$ behaves similarly to (1.11). When $1 \le l < 6$, the curves are worse than the others, while for $l \ge 6$ the curve becomes consistent with that of (1.11). Therefore, we again conclude that $\sigma_n$ should be as small as possible.

Figure 3. The performance curves of SNR for different $\sigma_n$ over 70 iterations. The line with squares corresponds to (1.11), x to $l = 1$, triangles to $l = 3$, circles to $l = 16$.

5 Concluding remarks

In this paper we have proposed several kinds of adaptively relaxed iterative algorithms with a new variable step size $\tau_n$ for solving SFP (1.1). The key feature is that the new variable step size $\tau_n$ contains a sequence of positive numbers $\{\sigma_n\}$ in its denominator. Because of this, the proposed relaxed algorithms never terminate at any iteration step. On the other hand, unlike the previously known algorithms, our stop rule is that the iteration process stops if $x_{n+1} = x_n$ for some $n \ge 1$.

By means of new analysis techniques, we have proved several weak and strong convergence theorems for the proposed algorithms for solving SFP (1.1), which improve, extend, and complement those existing in the literature. We remark that all convergence results in this paper still hold true if the step size (1.12) is replaced by the step size (1.11); in that case, the stop rules should be modified. We would like to point out that our Theorems 3.10 and 3.12 are closely related to a class of variational inequalities.

Finally, numerical experiments have been presented to illustrate the effectiveness of the proposed algorithms and their applications in signal processing with the step size selected in this paper. The numerical results tell us that the choice of the sequence $\{\sigma_n\}$ in (1.12) may affect the convergence rate of the iterative algorithms, and $\sigma_n$ should be chosen as small as possible; for instance, we can choose $\sigma_n$ such that $\sigma_n \to 0$ as $n \to \infty$.

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221-239. 10.1007/BF02142692

  2. López G, Martín-Márquez V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2009:243-279.

  3. Chang SS, Kim JK, Cho YJ, Sim JY: Weak and strong convergence theorems of solutions to split feasibility problem for nonspreading type mapping in Hilbert spaces. Fixed Point Theory Appl. 2014, 2014: Article ID 11.

  4. Stark H, Yang Y: Vector Space Projections: A Numerical Approach to Signal and Image Processing, Neural Nets and Optics. Wiley, New York; 1998.

  5. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353-2365.

  6. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071-2084. 10.1088/0266-5611/21/6/017

  7. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655-1665. 10.1088/0266-5611/21/5/009

  8. Li M: Improved relaxed CQ methods for solving the split feasibility problem. Adv. Model. Optim. 2011, 13: 305-318.

  9. Abdellah B, Muhammad AN, Mohamed K, Sheng ZH: On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 2012, 54: 627-639. 10.1007/s10898-011-9782-2

  10. Yang Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166-179. 10.1016/j.jmaa.2004.07.048

  11. López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004.

  12. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

  13. Aubin JP: Optima and Equilibria: An Introduction to Nonlinear Analysis. Springer, Berlin; 1993.

  14. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103-120. 10.1088/0266-5611/20/1/006

  15. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367-426. 10.1137/S0036144593251710

  16. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240-256.

  17. Wang FH, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010: Article ID 102085. 10.1155/2010/102085

  18. Dang YZ, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007.

  19. Yu X, Shahzad N, Yao YH: Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 2012, 6: 1447-1462. 10.1007/s11590-011-0340-0

  20. Yao YH, Postolache M, Liou YC: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013: Article ID 201. 10.1186/1687-1812-2013-201

  21. Daubechies I, Fornasier M, Loris I: Accelerated projected gradient method for linear inverse problems with sparsity constraints. J. Fourier Anal. Appl. 2008, 14: 764-792. 10.1007/s00041-008-9039-8

  22. Figueiredo MAT, Nowak RD, Wright SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1: 586-598.

  23. Starck JL, Murtagh F, Fadili JM: Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity. Cambridge University Press, Cambridge; 2010.

  24. Figueiredo MAT, Nowak RD: An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12: 906-917. 10.1109/TIP.2003.814255


Acknowledgements

This research was supported by the National Natural Science Foundation of China (11071053).

Author information

Correspondence to Peiyuan Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors contributed equally. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhou, H., Wang, P. Adaptively relaxed algorithms for solving the split feasibility problem with a new step size. J Inequal Appl 2014, 448 (2014). https://doi.org/10.1186/1029-242X-2014-448


  • DOI: https://doi.org/10.1186/1029-242X-2014-448
