Open Access

Strong convergence theorem of two-step iterative algorithm for split feasibility problems

Journal of Inequalities and Applications 2014, 2014:280

https://doi.org/10.1186/1029-242X-2014-280

Received: 20 February 2014

Accepted: 11 July 2014

Published: 15 August 2014

Abstract

The main purpose of this paper is to introduce a two-step iterative algorithm for split feasibility problems such that the strong convergence is guaranteed. Our result extends and improves the corresponding results of He et al. and some others.

MSC: 90C25, 90C30, 47J25.

Keywords

split feasibility problem; two-step iterative algorithm; strong convergence; bounded linear operator

1 Introduction

The split feasibility problem (SFP) was proposed by Censor and Elfving in [1]. It can be formulated as the problem of finding a point x satisfying the property
x ∈ C,   Ax ∈ Q,
(1.1)

where A is a given M × N real matrix, and C and Q are nonempty, closed and convex subsets of ℝ^N and ℝ^M, respectively.

Due to its extraordinary utility and broad applicability in many areas of applied mathematics (most notably, fully-discretized models of problems in image reconstruction from projections, in image processing, and in intensity-modulated radiation therapy), algorithms for solving convex feasibility problems continue to receive great attention (see, for instance, [2–5] and also [6–10]).

We assume that SFP (1.1) is consistent, and let Γ be its solution set, i.e.,
Γ = {x ∈ C : Ax ∈ Q}.
It is not hard to see that Γ is closed and convex, and that x ∈ Γ if and only if it solves the fixed-point equation
x = P_C(I − γA*(I − P_Q)A)x,
(1.2)

where P_C and P_Q are the orthogonal projections onto C and Q, respectively, γ > 0 is any positive constant, and A* denotes the adjoint of A.

To solve (1.2), Byrne [11] proposed his CQ algorithm which generates a sequence { x n } by
x_{n+1} = P_C(x_n − τ_nA*(I − P_Q)Ax_n),
(1.3)

where τ_n ∈ (0, 2/‖A‖²).
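To make the iteration concrete, here is a small numerical sketch of the CQ algorithm (1.3). It is an illustration only, not part of the paper: for the example we assume C is a box and Q a Euclidean ball, so that both projections have closed forms; all function names and set choices are ours.

```python
import numpy as np

def proj_box(x, lo, hi):
    # P_C for the box C = {x : lo <= x <= hi}: componentwise clipping
    return np.clip(x, lo, hi)

def proj_ball(y, center, r):
    # P_Q for the ball Q = {y : ||y - center|| <= r}
    d = y - center
    dist = np.linalg.norm(d)
    return y if dist <= r else center + (r / dist) * d

def cq_algorithm(A, x0, lo, hi, center, r, n_iter=500):
    # CQ iteration (1.3): x_{n+1} = P_C(x_n - tau A^T (I - P_Q) A x_n),
    # with a fixed stepsize tau in (0, 2/||A||^2).
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_ball(Ax, center, r))  # A^T (I - P_Q) A x
        x = proj_box(x - tau * grad, lo, hi)
    return x
```

For a consistent instance (say A = I, C = [0, 1]², Q the ball of radius 2 about (2, 2)), the iterates settle at a point of C whose image under A lies in Q.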

The CQ algorithm (1.3) can be obtained from optimization. In fact, if we introduce the convex objective function
f(x) = ½‖(I − P_Q)Ax‖²,
(1.4)
and analyze the minimization problem
min_{x∈C} f(x),
(1.5)
then the CQ algorithm (1.3) comes immediately as a special case of the gradient projection algorithm (GPA). Since the convex objective function f is differentiable and has a Lipschitz-continuous gradient, which is given by
∇f(x) = A*(I − P_Q)Ax,
(1.6)
the GPA for solving the minimization problem (1.5) generates a sequence {x_n} recursively as
x_{n+1} = P_C(x_n − τ_n∇f(x_n)),
(1.7)

where τ_n is chosen in the interval (0, 2/L), and L is the Lipschitz constant of ∇f.

Observe that in algorithms (1.3) and (1.7) mentioned above, in order to implement the CQ algorithm, one has to compute the operator norm ‖A‖, which is in general not easy in practice. To overcome this difficulty, some authors have proposed different adaptive choices of the stepsize τ_n (see [11–13]). For instance, López et al. [12] introduced the following new way of selecting the stepsize:
τ_n := ρ_nf(x_n)/‖∇f(x_n)‖²,   0 < ρ_n < 4.
(1.8)
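As an illustration (not from the paper), the stepsize rule (1.8) can be dropped into the CQ iteration so that ‖A‖ is never computed; here `proj_C` and `proj_Q` stand for any closed-form projections supplied by the caller, and all names are ours:

```python
import numpy as np

def cq_adaptive_step(A, x0, proj_C, proj_Q, rho=1.0, n_iter=500):
    # CQ iteration with the self-adaptive stepsize (1.8):
    #   tau_n = rho_n * f(x_n) / ||grad f(x_n)||^2,  0 < rho_n < 4,
    # which avoids computing the operator norm ||A||.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        resid = Ax - proj_Q(Ax)           # (I - P_Q) A x
        f = 0.5 * np.dot(resid, resid)    # f(x) = 1/2 ||(I - P_Q) A x||^2
        grad = A.T @ resid                # grad f(x) = A^T (I - P_Q) A x, cf. (1.6)
        g2 = np.dot(grad, grad)
        if g2 == 0.0:                     # grad f(x) = 0: x already minimizes f
            return proj_C(x)
        x = proj_C(x - rho * f / g2 * grad)
    return x
```

The only per-iteration quantities are f(x_n) and ∇f(x_n), both already needed by the gradient step itself.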

The computation of a projection onto a general closed convex subset is generally difficult. To overcome this difficulty, Fukushima [14] suggested the so-called relaxed projection method to calculate the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. In the setting of finite-dimensional Hilbert spaces, this idea was followed by Yang [9], who introduced the relaxed CQ algorithms for solving SFP (1.1) where the closed convex subsets C and Q are level sets of convex functions.
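The half-space projections underlying such relaxed schemes do have a closed form: for H = {x : ⟨a, x⟩ ≤ b}, one has P_H(x) = x − max(0, (⟨a, x⟩ − b)/‖a‖²)a. A minimal sketch of this formula (an illustration of the general idea, not code from [14] or [9]):

```python
import numpy as np

def proj_halfspace(x, a, b):
    # Projection onto the half-space H = {x : <a, x> <= b}:
    #   P_H(x) = x - max(0, (<a, x> - b) / ||a||^2) * a
    excess = np.dot(a, x) - b
    if excess <= 0.0:
        return x  # x already lies in H
    return x - (excess / np.dot(a, a)) * a
```

Because a level set {x : c(x) ≤ 0} of a convex function c is contained in the half-space cut by any subgradient of c, each relaxed step projects onto such a half-space instead of the original set.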

Recently, for the purpose of generality, SFP (1.1) has been studied in a more general setting; see, for instance, [8, 12]. However, these algorithms enjoy only weak convergence in the setting of infinite-dimensional Hilbert spaces. Very recently, He and Zhao [15] introduced a new relaxed CQ algorithm (1.9) for which strong convergence is guaranteed in infinite-dimensional Hilbert spaces:
x_{n+1} = P_{C_n}(α_n u + (1 − α_n)(x_n − τ_n∇f_n(x_n))).
(1.9)

Motivated and inspired by the research mentioned above, the purpose of this article is to study a two-step iterative algorithm for split feasibility problems such that strong convergence is guaranteed in infinite-dimensional Hilbert spaces. Our result extends and improves the corresponding results of He and Zhao [15] and some others.

2 Preliminaries and lemmas

Throughout the rest of this paper, we assume that H, H₁ and H₂ are all Hilbert spaces, A is a bounded linear operator from H₁ to H₂, and I is the identity operator on H, H₁ or H₂. If f : H → ℝ is a differentiable function, then we denote by ∇f the gradient of f. We will also use the following notation: → denotes strong convergence, ⇀ denotes weak convergence, and
w_ω(x_n) = {x : ∃ {x_{n_k}} ⊂ {x_n} such that x_{n_k} ⇀ x}

denotes the weak ω-limit set of {x_n}.

Recall that a mapping T : H → H is said to be nonexpansive if
‖Tx − Ty‖ ≤ ‖x − y‖,   ∀x, y ∈ H.
T : H → H is said to be firmly nonexpansive if
‖Tx − Ty‖² ≤ ‖x − y‖² − ‖(I − T)x − (I − T)y‖²,   ∀x, y ∈ H.

A mapping T : H → H is said to be demiclosed at origin if, for any sequence {x_n} ⊂ H with x_n ⇀ x and ‖(I − T)x_n‖ → 0, we have x = Tx.

It is easy to prove that if T : H → H is a firmly nonexpansive mapping, then T is demiclosed at origin.

A function f : H → ℝ is called convex if
f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y),   ∀λ ∈ (0, 1), ∀x, y ∈ H.
A function f : H → ℝ is said to be weakly lower semi-continuous (w-lsc) at x if x_n ⇀ x implies
f(x) ≤ lim inf_{n→∞} f(x_n).
Lemma 2.1 Let T : H₂ → H₂ be a firmly nonexpansive mapping such that ‖(I − T)x‖ is a convex function from H₂ to ℝ, let A : H₁ → H₂ be a bounded linear operator, and let
f(x) = ½‖(I − T)Ax‖²,   x ∈ H₁.
Then

(i) ∇f(x) = A*(I − T)Ax, ∀x ∈ H₁;

(ii) ∇f is ‖A‖²-Lipschitz: ‖∇f(x) − ∇f(y)‖ ≤ ‖A‖²‖x − y‖, ∀x, y ∈ H₁.
Proof (i) From the definition of f, we know that f is convex. For any given x ∈ H₁ and any v ∈ H₁, we first prove that the limit
⟨∇f(x), v⟩ = lim_{h→0⁺} [f(x + hv) − f(x)]/h
exists in ℝ̄ := {−∞} ∪ ℝ ∪ {+∞} and satisfies
⟨∇f(x), v⟩ ≤ f(x + v) − f(x),   ∀v ∈ H₁.
In fact, if 0 < h₁ ≤ h₂, then
f(x + h₁v) − f(x) = f((h₁/h₂)(x + h₂v) + (1 − h₁/h₂)x) − f(x).
Since f is convex and h₁/h₂ ≤ 1, it follows that
f(x + h₁v) − f(x) ≤ (h₁/h₂)f(x + h₂v) + (1 − h₁/h₂)f(x) − f(x),
and hence
[f(x + h₁v) − f(x)]/h₁ ≤ [f(x + h₂v) − f(x)]/h₂.
This shows that the difference quotient is increasing in h; therefore it has a limit in ℝ̄ as h → 0⁺:
⟨∇f(x), v⟩ = inf_{h>0} [f(x + hv) − f(x)]/h = lim_{h→0⁺} [f(x + hv) − f(x)]/h.
(2.1)
This implies that f is differentiable. Taking h = 1 in (2.1), we have
⟨∇f(x), v⟩ ≤ f(x + v) − f(x).
Next we prove that
∇f(x) = A*(I − T)Ax,   ∀x ∈ H₁.
In fact, we have
lim_{h→0⁺} [f(x + hv) − f(x)]/h = lim_{h→0⁺} [‖Ax + hAv − TA(x + hv)‖² − ‖(I − T)Ax‖²]/(2h)
(2.2)
and
‖Ax + hAv − TA(x + hv)‖² − ‖(I − T)Ax‖² = h²‖Av‖² + 2h⟨A*Ax, v⟩ + ‖TA(x + hv)‖² − ‖TAx‖² − 2⟨Ax, TA(x + hv) − TAx⟩ − 2h⟨A*TA(x + hv), v⟩.
(2.3)
Substituting (2.3) into (2.2), simplifying, and then letting h → 0⁺, we have
lim_{h→0⁺} [f(x + hv) − f(x)]/h = lim_{h→0⁺} 2h{⟨A*Ax, v⟩ − ⟨A*TA(x + hv), v⟩}/(2h) = ⟨A*(I − T)Ax, v⟩,   ∀v ∈ H₁.
It follows from (2.1) that
∇f(x) = A*(I − T)Ax,   ∀x ∈ H₁.
Now we prove conclusion (ii). Indeed, since I − T is nonexpansive, it follows from (i) that, for all x, y ∈ H₁,
‖∇f(x) − ∇f(y)‖ = ‖A*(I − T)Ax − A*(I − T)Ay‖ = ‖A*[(I − T)Ax − (I − T)Ay]‖ ≤ ‖A‖‖Ax − Ay‖ ≤ ‖A‖²‖x − y‖.

Lemma 2.1 is proved. □

Lemma 2.2 Let T : H → H be an operator. The following statements are equivalent:

(i) T is firmly nonexpansive;

(ii) ‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩, ∀x, y ∈ H;

(iii) I − T is firmly nonexpansive.
Proof (i) ⇒ (ii): Since T is firmly nonexpansive, we have, for all x, y ∈ H,
‖Tx − Ty‖² ≤ ‖x − y‖² − ‖(I − T)x − (I − T)y‖² = ‖x − y‖² − ‖(x − y) − (Tx − Ty)‖² = ‖x − y‖² − ‖x − y‖² − ‖Tx − Ty‖² + 2⟨x − y, Tx − Ty⟩ = 2⟨x − y, Tx − Ty⟩ − ‖Tx − Ty‖²,
hence
‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩,   ∀x, y ∈ H.

(ii) ⇒ (iii): From (ii) we know that, for all x, y ∈ H,
‖(I − T)x − (I − T)y‖² = ‖(x − y) − (Tx − Ty)‖² = ‖x − y‖² − 2⟨x − y, Tx − Ty⟩ + ‖Tx − Ty‖² ≤ ‖x − y‖² − ⟨x − y, Tx − Ty⟩ = ⟨x − y, (I − T)x − (I − T)y⟩.
This implies that I − T is firmly nonexpansive.

(iii) ⇒ (i): From (iii) and the identity T = I − (I − T), we immediately know that T is firmly nonexpansive. □

Lemma 2.3 [16]

Assume that {a_n} is a sequence of nonnegative real numbers such that
a_{n+1} ≤ (1 − γ_n)a_n + γ_nσ_n,   n = 0, 1, 2, …,
where {γ_n} is a sequence in (0, 1) and {σ_n} is a sequence in ℝ such that

(i) Σ_{n=0}^∞ γ_n = ∞;

(ii) lim sup_{n→∞} σ_n ≤ 0, or Σ_{n=0}^∞ |γ_nσ_n| < ∞.

Then lim_{n→∞} a_n = 0.
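A quick numerical illustration of Lemma 2.3 (ours, not part of the lemma): take γ_n = 1/(n + 2), so that Σγ_n = ∞, and σ_n = 1/(n + 1) → 0; the recursion then drives a_n to 0, as the lemma predicts.

```python
# Illustrating Lemma 2.3: a_{n+1} <= (1 - gamma_n) a_n + gamma_n sigma_n,
# with sum(gamma_n) divergent and limsup sigma_n <= 0, forces a_n -> 0.
a = 1.0
for n in range(200000):
    gamma = 1.0 / (n + 2)  # gamma_n in (0, 1), divergent series
    sigma = 1.0 / (n + 1)  # sigma_n -> 0
    a = (1 - gamma) * a + gamma * sigma
```

After 2·10⁵ steps the value of a has fallen well below 10⁻³, consistent with a_n → 0.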

Lemma 2.4 [17]

Let X be a real Banach space and let J : X → 2^{X*} be the normalized duality mapping. Then, for any x, y ∈ X, the following inequality holds:
‖x + y‖² ≤ ‖x‖² + 2⟨y, j(x + y)⟩,   ∀j(x + y) ∈ J(x + y).
In particular, if X is a real Hilbert space, then we have
‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩,   ∀x, y ∈ X.

3 Main results

In this section, we shall prove our main theorem.

Theorem 3.1 Let H₁, H₂ be two real Hilbert spaces and let A : H₁ → H₂ be a bounded linear operator. Let S : H₁ → H₁ be a firmly nonexpansive mapping and let T₁, T₂ : H₂ → H₂ be two firmly nonexpansive mappings such that ‖(I − T_i)x‖ (i = 1, 2) is a convex function from H₂ to ℝ, with C := F(S) and Q := F(T₁) ∩ F(T₂). Let u ∈ H₁ and let {x_n} be the sequence generated by
x₀ ∈ H₁ chosen arbitrarily,
y_n = β_n u + (1 − β_n)(x_n − τ_n∇f(x_n)),
x_{n+1} = S(α_n u + (1 − α_n)(y_n − ξ_n∇g(y_n))),
(3.1)
where
f(x_n) = ½‖(I − T₁)Ax_n‖²,   ∇f(x_n) = A*(I − T₁)Ax_n,   τ_n = ρ_nf(x_n)/‖∇f(x_n)‖²,
and
g(y_n) = ½‖(I − T₂)Ay_n‖²,   ∇g(y_n) = A*(I − T₂)Ay_n,   ξ_n = ρ_ng(y_n)/‖∇g(y_n)‖².
If the solution set Γ of SFP (1.1) is nonempty, and the sequences {ρ_n} ⊂ (0, 4) and {α_n}, {β_n} ⊂ (0, 1) satisfy the following conditions:

(i) lim_{n→∞} α_n = 0 and lim_{n→∞} β_n = 0;

(ii) the sequence {α_n/β_n} is bounded and Σ_{n=1}^∞ β_n = ∞;

then the sequence {x_n} converges strongly to P_Γu.
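For intuition, the two-step iteration (3.1) can be sketched numerically. The snippet below is an illustration only (not from the paper): it takes S, T₁ and T₂ to be metric projections (which are firmly nonexpansive) onto a box and a ball, A = I, and the parameter choices α_n = 1/(n + 2)², β_n = 1/(n + 2), which satisfy conditions (i)-(ii); all names and set choices are ours.

```python
import numpy as np

def proj_ball(y, c, r):
    # Metric projection onto {y : ||y - c|| <= r}; firmly nonexpansive
    d = y - c
    dist = np.linalg.norm(d)
    return y if dist <= r else c + (r / dist) * d

def two_step(A, u, x0, S, T1, T2, rho=1.0, n_iter=2000):
    # Iteration (3.1):
    #   y_n     = beta_n u + (1 - beta_n)(x_n - tau_n grad_f(x_n)),
    #   x_{n+1} = S(alpha_n u + (1 - alpha_n)(y_n - xi_n grad_g(y_n))),
    # with tau_n = rho f(x_n)/||grad f(x_n)||^2 and xi_n analogous for g.
    def grad_step(z, T):
        Az = A @ z
        resid = Az - T(Az)                # (I - T) A z
        val = 0.5 * np.dot(resid, resid)  # f (resp. g) at z
        grad = A.T @ resid                # gradient, by Lemma 2.1(i)
        g2 = np.dot(grad, grad)
        return z if g2 == 0.0 else z - rho * val / g2 * grad
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        alpha, beta = 1.0 / (n + 2) ** 2, 1.0 / (n + 2)  # conditions (i)-(ii)
        y = beta * u + (1 - beta) * grad_step(x, T1)
        x = S(alpha * u + (1 - alpha) * grad_step(y, T2))
    return x
```

With C = [0, 1]², Q the ball of radius 2 about (2, 2) and u = 0, the iterates approach P_Γu = (2 − √2, 2 − √2), matching the strong limit asserted by the theorem.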

Proof Since Γ (≠ ∅) is the solution set of SFP (1.1), Γ is closed and convex, so the metric projection P_Γ is well defined. Letting p = P_Γu, it follows from Lemma 2.4 that
‖y_n − p‖² = ‖β_n u + (1 − β_n)(x_n − τ_n∇f(x_n)) − p‖² ≤ (1 − β_n)‖x_n − τ_n∇f(x_n) − p‖² + 2β_n⟨u − p, y_n − p⟩.
(3.2)
Since p ∈ Γ, we have p ∈ C and Ap ∈ Q, so f(p) = 0. Observe that I − T₁ is firmly nonexpansive; from Lemma 2.2(ii) we have that
⟨∇f(x_n), x_n − p⟩ = ⟨(I − T₁)Ax_n, Ax_n − Ap⟩ ≥ ‖(I − T₁)Ax_n‖² = 2f(x_n),
which implies that
‖x_n − τ_n∇f(x_n) − p‖² = ‖x_n − p‖² + τ_n²‖∇f(x_n)‖² − 2τ_n⟨∇f(x_n), x_n − p⟩ ≤ ‖x_n − p‖² + τ_n²‖∇f(x_n)‖² − 4τ_nf(x_n) = ‖x_n − p‖² − ρ_n(4 − ρ_n)f²(x_n)/‖∇f(x_n)‖².
Thus, we have
‖y_n − p‖² ≤ (1 − β_n)‖x_n − τ_n∇f(x_n) − p‖² + 2β_n⟨u − p, y_n − p⟩ ≤ (1 − β_n)‖x_n − p‖² + 2β_n⟨u − p, y_n − p⟩ − (1 − β_n)ρ_n(4 − ρ_n)f²(x_n)/‖∇f(x_n)‖²,
(3.3)
and so we have
‖y_n − p‖² ≤ (1 − β_n)‖x_n − p‖² + 2β_n⟨2(u − p), ½(y_n − p)⟩ ≤ (1 − β_n)‖x_n − p‖² + ¼β_n‖y_n − p‖² + 4β_n‖u − p‖².
(3.4)
Consequently, we have
‖y_n − p‖² ≤ [(1 − β_n)/(1 − ¼β_n)]‖x_n − p‖² + [¾β_n/(1 − ¼β_n)]·(16/3)‖u − p‖².
It turns out that
‖y_n − p‖ ≤ max{‖x_n − p‖, √(16/3)‖u − p‖},
and inductively
‖y_n − p‖ ≤ max{‖x₀ − p‖, √(16/3)‖u − p‖}.
This implies that the sequence {y_n} is bounded. Since the mapping S is firmly nonexpansive, it follows from Lemma 2.4 that
‖x_{n+1} − p‖² = ‖S(α_n u + (1 − α_n)(y_n − ξ_n∇g(y_n))) − p‖² ≤ ‖α_n(u − p) + (1 − α_n)(y_n − ξ_n∇g(y_n) − p)‖² ≤ (1 − α_n)‖y_n − ξ_n∇g(y_n) − p‖² + 2α_n⟨u − p, x_{n+1} − p⟩.
Since p ∈ Γ, we also have g(p) = 0. Observe that I − T₂ is firmly nonexpansive; it is deduced from Lemma 2.2(ii) that
⟨∇g(y_n), y_n − p⟩ = ⟨(I − T₂)Ay_n, Ay_n − Ap⟩ ≥ ‖(I − T₂)Ay_n‖² = 2g(y_n),
which implies that
‖y_n − ξ_n∇g(y_n) − p‖² = ‖y_n − p‖² + ξ_n²‖∇g(y_n)‖² − 2ξ_n⟨∇g(y_n), y_n − p⟩ ≤ ‖y_n − p‖² + ξ_n²‖∇g(y_n)‖² − 4ξ_ng(y_n) = ‖y_n − p‖² − ρ_n(4 − ρ_n)g²(y_n)/‖∇g(y_n)‖².
Thus, we have
‖x_{n+1} − p‖² ≤ (1 − α_n)‖y_n − ξ_n∇g(y_n) − p‖² + 2α_n⟨u − p, x_{n+1} − p⟩ ≤ (1 − α_n)‖y_n − p‖² + 2α_n⟨u − p, x_{n+1} − p⟩ − (1 − α_n)ρ_n(4 − ρ_n)g²(y_n)/‖∇g(y_n)‖²
(3.5)
and
‖x_{n+1} − p‖² ≤ (1 − α_n)‖y_n − p‖² + 2α_n⟨u − p, x_{n+1} − p⟩ ≤ (1 − α_n)‖y_n − p‖² + ¼α_n‖x_{n+1} − p‖² + 4α_n‖u − p‖².
Consequently, we have
‖x_{n+1} − p‖² ≤ [(1 − α_n)/(1 − ¼α_n)]‖y_n − p‖² + [¾α_n/(1 − ¼α_n)]·(16/3)‖u − p‖².
It turns out that
‖x_{n+1} − p‖ ≤ max{‖y_n − p‖, √(16/3)‖u − p‖}.
Since {y_n} is bounded, so is {x_n}. Since lim_{n→∞} α_n = 0 and lim_{n→∞} β_n = 0, without loss of generality, we may assume that there is σ > 0 such that
ρ_n(4 − ρ_n)(1 − α_n)(1 − β_n) > σ,   ∀n ≥ 1.
Substituting (3.3) into (3.5), we have
‖x_{n+1} − p‖² ≤ (1 − β_n)‖x_n − p‖² + 2β_n⟨u − p, y_n − p⟩ + 2α_n⟨u − p, x_{n+1} − p⟩ − σf²(x_n)/‖∇f(x_n)‖² − σg²(y_n)/‖∇g(y_n)‖².
(3.6)
Setting s_n = ‖x_n − p‖², we get the following inequality:
s_{n+1} − s_n + β_ns_n + σf²(x_n)/‖∇f(x_n)‖² + σg²(y_n)/‖∇g(y_n)‖² ≤ 2α_n⟨u − p, x_{n+1} − p⟩ + 2β_n⟨u − p, y_n − p⟩.
(3.7)

Now we prove that s_n → 0. For this purpose, we consider two cases.

Case 1: {s_n} is eventually decreasing, i.e., there exists a sufficiently large positive integer k such that s_n > s_{n+1} holds for all n ≥ k. In this case, {s_n} must be convergent, and from (3.7) it follows that
σf²(x_n)/‖∇f(x_n)‖² + σg²(y_n)/‖∇g(y_n)‖² ≤ (s_n − s_{n+1}) + (α_n + β_n)M,
(3.8)
where M is a constant such that M ≥ 2(‖x_{n+1} − p‖ + ‖y_n − p‖)‖u − p‖ for all n ∈ ℕ. Using condition (i) and (3.8), we have that
f²(x_n)/‖∇f(x_n)‖² + g²(y_n)/‖∇g(y_n)‖² → 0   (n → ∞).
To verify that f(x_n) → 0 and g(y_n) → 0, it suffices to show that {‖∇f(x_n)‖} and {‖∇g(y_n)‖} are bounded. In fact, it follows from Lemma 2.1(ii) that, for all n ∈ ℕ,
‖∇f(x_n)‖ = ‖∇f(x_n) − ∇f(p)‖ ≤ ‖A‖²‖x_n − p‖
and
‖∇g(y_n)‖ = ‖∇g(y_n) − ∇g(p)‖ ≤ ‖A‖²‖y_n − p‖.
These imply that {‖∇f(x_n)‖} and {‖∇g(y_n)‖} are bounded. It yields f(x_n) → 0 and g(y_n) → 0, namely
‖(I − T₁)Ax_n‖ → 0,   ‖(I − T₂)Ay_n‖ → 0.
(3.9)
From (3.1) we have
‖x_n − y_n‖ = ‖β_n u + (1 − β_n)(x_n − τ_n∇f(x_n)) − x_n‖ = ‖β_n(u − x_n) − (1 − β_n)τ_n∇f(x_n)‖ ≤ β_n‖u − x_n‖ + τ_n‖∇f(x_n)‖.
Noting that {x_n} is bounded, β_n → 0 and τ_n‖∇f(x_n)‖ = ρ_nf(x_n)/‖∇f(x_n)‖ → 0, we get
‖x_n − y_n‖ → 0   (n → ∞).
(3.10)
For any x* ∈ w_ω(x_n), take a subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ x* ∈ H₁; then it follows from (3.10) that y_{n_k} ⇀ x*. Thus we have
Ax_{n_k} ⇀ Ax*,   Ay_{n_k} ⇀ Ax*.
(3.11)
On the other hand, from (3.9) we have
(I − T₁)Ax_{n_k} → 0,   (I − T₂)Ay_{n_k} → 0.
(3.12)

Since T₁, T₂ are demiclosed at origin, from (3.11) and (3.12) we have that Ax* ∈ F(T₁) ∩ F(T₂), i.e., Ax* ∈ Q.

Next, we turn to prove x* ∈ C. For convenience, we set v_n := α_n u + (1 − α_n)(y_n − ξ_n∇g(y_n)), so that x_{n+1} = Sv_n. Since S is firmly nonexpansive, it follows from Lemma 2.4 that
‖x_{n+1} − p‖² = ‖Sv_n − Sp‖² = ‖Sv_n − Sy_n + Sy_n − Sp‖² ≤ ‖Sy_n − Sp‖² + 2⟨Sv_n − Sy_n, x_{n+1} − p⟩ ≤ ‖Sy_n − p‖² + 2‖Sv_n − Sy_n‖‖x_{n+1} − p‖ ≤ ‖y_n − p‖² − ‖(I − S)y_n‖² + 2‖v_n − y_n‖‖x_{n+1} − p‖.
(3.13)
In view of the definition of v n , we have
‖v_n − y_n‖ ≤ α_n‖y_n − u‖ + ξ_n‖∇g(y_n)‖ = α_n‖y_n − u‖ + ρ_ng(y_n)/‖∇g(y_n)‖.
(3.14)
From (3.4), (3.13) and (3.14), we have
‖x_{n+1} − p‖² ≤ (1 − β_n)‖x_n − p‖² + ¼β_n‖y_n − p‖² + 4β_n‖u − p‖² − ‖(I − S)y_n‖² + (α_n + g(y_n)/‖∇g(y_n)‖)N,
(3.15)
where N ≥ 2‖x_{n+1} − p‖(‖y_n − u‖ + 2ρ_n) is a suitable constant. Clearly, from (3.15) we have
‖(I − S)y_n‖² ≤ s_n − s_{n+1} + ¼β_n‖y_n − p‖² + 4β_n‖u − p‖² + (α_n + g(y_n)/‖∇g(y_n)‖)N.
(3.16)
Thus, we assert that ‖(I − S)y_n‖ → 0. In view of y_{n_k} ⇀ x* and the fact that S is demiclosed at origin, we get x* ∈ F(S), i.e., x* ∈ C. Consequently, w_ω(x_n) ⊂ Γ. Furthermore, by the characterization of the metric projection P_Γ, we have
lim sup_{n→∞}⟨u − p, x_n − p⟩ = max_{w∈w_ω(x_n)}⟨u − P_Γu, w − P_Γu⟩ ≤ 0
and
lim sup_{n→∞}⟨u − p, y_n − p⟩ = max_{w∈w_ω(y_n)}⟨u − P_Γu, w − P_Γu⟩ ≤ 0.
From (3.7) we have
s_{n+1} ≤ (1 − β_n)s_n + 2α_n⟨u − p, x_{n+1} − p⟩ + 2β_n⟨u − p, y_n − p⟩ = (1 − β_n)s_n + β_n(2(α_n/β_n)⟨u − p, x_{n+1} − p⟩ + 2⟨u − p, y_n − p⟩).
(3.17)

From condition (ii) and Lemma 2.3, we obtain s n 0 .

Case 2: {s_n} is not eventually decreasing; that is, we can find a positive integer n₀ such that s_{n₀} ≤ s_{n₀+1}. Now we define
U_n := {n₀ ≤ k ≤ n : s_k ≤ s_{k+1}},   n > n₀.
It is easy to see that U_n is nonempty and satisfies U_n ⊂ U_{n+1}. Let
ψ(n) := max U_n,   n > n₀.
It is clear that ψ(n) → ∞ as n → ∞ (otherwise, {s_n} would be eventually decreasing). It is also clear that s_{ψ(n)} ≤ s_{ψ(n)+1} for all n > n₀. Moreover, we prove that
s_n ≤ s_{ψ(n)+1},   ∀n > n₀.
(3.18)
In fact, if ψ(n) = n, then inequality (3.18) is trivial; if ψ(n) < n, then from the definition of ψ(n) there exists i ∈ ℕ such that ψ(n) + i = n, and we deduce that
s_{ψ(n)+1} > s_{ψ(n)+2} > ⋯ > s_{ψ(n)+i} = s_n,
so inequality (3.18) holds again. Since s_{ψ(n)} ≤ s_{ψ(n)+1} for all n > n₀, it follows from (3.8) that
σf²(x_{ψ(n)})/‖∇f(x_{ψ(n)})‖² + σg²(y_{ψ(n)})/‖∇g(y_{ψ(n)})‖² ≤ (α_{ψ(n)} + β_{ψ(n)})M → 0.
Noting that {‖∇f(x_{ψ(n)})‖} and {‖∇g(y_{ψ(n)})‖} are both bounded, we get
f(x_{ψ(n)}) → 0,   g(y_{ψ(n)}) → 0.
By the same argument as in the proof of Case 1, we have w_ω(x_{ψ(n)}) ⊂ Γ and
‖x_{ψ(n)} − y_{ψ(n)}‖ → 0.
(3.19)
On the other hand, noting s_{ψ(n)} ≤ s_{ψ(n)+1} again, we have from (3.14) and (3.16) that
‖x_{ψ(n)+1} − y_{ψ(n)}‖ = ‖Sv_{ψ(n)} − Sy_{ψ(n)} + Sy_{ψ(n)} − y_{ψ(n)}‖ ≤ ‖Sv_{ψ(n)} − Sy_{ψ(n)}‖ + ‖Sy_{ψ(n)} − y_{ψ(n)}‖ ≤ ‖v_{ψ(n)} − y_{ψ(n)}‖ + ‖(I − S)y_{ψ(n)}‖ ≤ α_{ψ(n)}‖y_{ψ(n)} − u‖ + ρ_{ψ(n)}g(y_{ψ(n)})/‖∇g(y_{ψ(n)})‖ + [¼β_{ψ(n)}‖y_{ψ(n)} − p‖² + 4β_{ψ(n)}‖u − p‖² + (α_{ψ(n)} + g(y_{ψ(n)})/‖∇g(y_{ψ(n)})‖)N]^{1/2}.
(3.20)
Letting n → ∞, we get
‖x_{ψ(n)+1} − y_{ψ(n)}‖ → 0.
(3.21)
From (3.19) and (3.21), we have
‖x_{ψ(n)} − x_{ψ(n)+1}‖ ≤ ‖x_{ψ(n)} − y_{ψ(n)}‖ + ‖y_{ψ(n)} − x_{ψ(n)+1}‖ → 0   (n → ∞).
(3.22)
Furthermore, we can deduce that
lim sup_{n→∞}⟨u − p, x_{ψ(n)+1} − p⟩ = lim sup_{n→∞}⟨u − p, x_{ψ(n)} − p⟩ = max_{w∈w_ω(x_{ψ(n)})}⟨u − P_Γu, w − P_Γu⟩ ≤ 0
(3.23)
and
lim sup_{n→∞}⟨u − p, y_{ψ(n)} − p⟩ = max_{w∈w_ω(y_{ψ(n)})}⟨u − P_Γu, w − P_Γu⟩ ≤ 0.
(3.24)
Since s_{ψ(n)} ≤ s_{ψ(n)+1}, it follows from (3.7) that
s_{ψ(n)} ≤ 2⟨u − p, y_{ψ(n)} − p⟩ + 2(α_{ψ(n)}/β_{ψ(n)})⟨u − p, x_{ψ(n)+1} − p⟩,   ∀n > n₀.
(3.25)
Combining (3.23), (3.24) and (3.25), we have
lim sup_{n→∞} s_{ψ(n)} ≤ 0,
(3.26)
and hence s_{ψ(n)} → 0, which together with (3.22) implies that
√(s_{ψ(n)+1}) = ‖(x_{ψ(n)} − p) + (x_{ψ(n)+1} − x_{ψ(n)})‖ ≤ √(s_{ψ(n)}) + ‖x_{ψ(n)+1} − x_{ψ(n)}‖ → 0.
(3.27)

Noting inequality (3.18), this shows that s n 0 , that is, x n p . This completes the proof of Theorem 3.1. □

Declarations

Acknowledgements

This study was supported by the Scientific Research Fund of Sichuan Provincial Education Department (13ZA0199), the Scientific Research Fund of Sichuan Provincial Department of Science and Technology (2012JYZ011) and the Scientific Research Project of Yibin University (No. 2013YY06) and partially supported by the National Natural Science Foundation of China (Grant No. 11361070).

Authors’ Affiliations

(1)
Department of Mathematics, Yibin University
(2)
College of Statistics and Mathematics, Yunnan University of Finance and Economics

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221-239. doi:10.1007/BF02142692
  2. Aleyner A, Reich S: Block-iterative algorithms for solving convex feasibility problems in Hilbert and in Banach spaces. J. Math. Anal. Appl. 2008, 343(1): 427-435. doi:10.1016/j.jmaa.2008.01.087
  3. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38(3): 367-426. doi:10.1137/S0036144593251710
  4. Moudafi A: A relaxed alternating CQ-algorithm for convex feasibility problems. Nonlinear Anal. 2013, 79: 117-121.
  5. Masad E, Reich S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2007, 8: 367-371.
  6. Yao Y, Chen R, Marino G, Liou YC: Applications of fixed point and optimization methods to the multiple-sets split feasibility problem. J. Appl. Math. 2012, Article ID 927530.
  7. Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021-2034. doi:10.1088/0266-5611/22/6/007
  8. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26(10): Article ID 105018.
  9. Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261-1266. doi:10.1088/0266-5611/20/4/014
  10. Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791-1799. doi:10.1088/0266-5611/21/5/017
  11. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441-453. doi:10.1088/0266-5611/18/2/310
  12. López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28(8): Article ID 085004. doi:10.1088/0266-5611/28/8/085004
  13. Yang Q: On variable-set relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166-179. doi:10.1016/j.jmaa.2004.07.048
  14. Fukushima M: A relaxed projection method for variational inequalities. Math. Program. 1986, 35: 58-70. doi:10.1007/BF01589441
  15. He S, Zhao Z: Strong convergence of a relaxed CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2013, Article ID 197. doi:10.1186/1029-242X-2013-197
  16. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240-256. doi:10.1112/S0024610702003332
  17. Chang SS: On Chidume’s open questions and approximate solutions for multi-valued strongly accretive mapping equations in Banach spaces. J. Math. Anal. Appl. 1997, 216: 94-111. doi:10.1006/jmaa.1997.5661

Copyright

© Tang and Chang; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.