Open Access

The constrained multiple-sets split feasibility problem and its projection algorithms

Journal of Inequalities and Applications 2013, 2013:272

https://doi.org/10.1186/1029-242X-2013-272

Received: 30 January 2013

Accepted: 14 May 2013

Published: 31 May 2013

Abstract

Projection algorithms for solving the constrained multiple-sets split feasibility problem are presented. Strong convergence of the algorithms is established under mild conditions. In particular, the minimum-norm solution of the constrained multiple-sets split feasibility problem can be found.

1 Introduction

Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $C_1, C_2, \ldots, C_N$ be $N$ nonempty closed convex subsets of $H_1$ and let $Q_1, Q_2, \ldots, Q_M$ be $M$ nonempty closed convex subsets of $H_2$. Let $A : H_1 \to H_2$ be a bounded linear operator. The multiple-sets split feasibility problem is formulated as follows:
$$\text{Find an } x \in \bigcap_{i=1}^{N} C_i \text{ such that } Ax \in \bigcap_{j=1}^{M} Q_j.$$
(1.1)
A special case: if $N = M = 1$, then the multiple-sets split feasibility problem reduces to the split feasibility problem, which is formulated as finding a point $x$ with the property
$$x \in C \quad \text{and} \quad Ax \in Q.$$

The split feasibility problem in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrieval and medical image reconstruction [2]. It has been found that the multiple-sets split feasibility problem and the split feasibility problem can be used to model intensity-modulated radiation therapy [3–6]. Various algorithms have been devised to solve the multiple-sets split feasibility problem and the split feasibility problem; see, e.g., [7–24] and the references therein.

A popular algorithm for solving the multiple-sets split feasibility problem and the split feasibility problem is Byrne’s CQ algorithm [11], which can be viewed as a gradient-projection method in convex minimization. Motivated by this idea, in this paper we present composite projection algorithms for solving the constrained multiple-sets split feasibility problem. Strong convergence of the algorithms is established under mild conditions. In particular, the minimum-norm solution of the constrained multiple-sets split feasibility problem can be found.

2 Preliminaries

2.1 Concepts

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, and let $\Omega$ be a nonempty closed convex subset of $H$. Recall that the (nearest point or metric) projection from $H$ onto $\Omega$, denoted by $P_{\Omega}$, is defined in such a way that, for each $x \in H$, $P_{\Omega}(x)$ is the unique point in $\Omega$ with the property
$$\|x - P_{\Omega}(x)\| = \min\{\|x - y\| : y \in \Omega\}.$$
It is known that $P_{\Omega}$ satisfies
$$\langle x - y, P_{\Omega}(x) - P_{\Omega}(y) \rangle \ge \|P_{\Omega}(x) - P_{\Omega}(y)\|^2, \quad \forall x, y \in H.$$
Moreover, $P_{\Omega}$ is characterized by the following property:
$$\langle x - P_{\Omega}(x), y - P_{\Omega}(x) \rangle \le 0$$

for all $x \in H$ and $y \in \Omega$.
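In a finite-dimensional space this characterization is easy to verify numerically. The sketch below is illustrative only: it assumes $\Omega$ is the closed unit ball in $\mathbb{R}^3$ (a set whose projection has a closed form) and checks the inequality $\langle x - P_{\Omega}(x), y - P_{\Omega}(x) \rangle \le 0$ at sampled points $y \in \Omega$:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    nd = np.linalg.norm(d)
    if nd <= radius:
        return x.copy()
    return center + (radius / nd) * d

rng = np.random.default_rng(0)
x = 5.0 * rng.normal(size=3)            # a point outside the unit ball
px = project_ball(x, np.zeros(3), 1.0)  # P_Omega(x)
for _ in range(100):
    # sample y in Omega by projecting a random point onto the ball
    y = project_ball(rng.normal(size=3), np.zeros(3), 1.0)
    # the variational characterization of the metric projection
    assert np.dot(x - px, y - px) <= 1e-10
```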

We also recall that a mapping $f : \Omega \to H$ is said to be $\rho$-contractive if $\|f(x) - f(y)\| \le \rho \|x - y\|$ for some constant $\rho \in [0, 1)$ and all $x, y \in \Omega$. A mapping $T : \Omega \to \Omega$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in \Omega$. A mapping $T$ is called averaged if $T = (1 - \delta)I + \delta U$, where $\delta \in (0, 1)$ and $U : \Omega \to \Omega$ is nonexpansive; in this case, we also say that $T$ is $\delta$-averaged. A bounded linear operator $B$ is said to be strongly positive on $H$ if there exists a constant $\alpha > 0$ such that
$$\langle Bx, x \rangle \ge \alpha \|x\|^2, \quad \forall x \in H.$$
Let $A$ be an operator with domain $D(A)$ and range $R(A)$ in $H$.
  (i) $A$ is monotone if, for all $x, y \in D(A)$,
$$\langle Ax - Ay, x - y \rangle \ge 0.$$
  (ii) Given a number $\nu > 0$, $A$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) (or co-coercive) if
$$\langle Ax - Ay, x - y \rangle \ge \nu \|Ax - Ay\|^2, \quad \forall x, y \in H.$$
It is easily seen that a projection $P_{\Omega}$ is a 1-ism and hence $P_{\Omega}$ is $\frac{1}{2}$-averaged.

We will need to use the following notation:

  • $\operatorname{Fix}(T)$ stands for the set of fixed points of $T$;

  • $x_n \rightharpoonup x$ stands for the weak convergence of $\{x_n\}$ to $x$;

  • $x_n \to x$ stands for the strong convergence of $\{x_n\}$ to $x$.

2.2 Mathematical model

Now, we consider the mathematical model of the multiple-sets split feasibility problem. Let $x \in C_1$ and assume that $Ax \in Q_1$. Then $(I - P_{Q_1})Ax = 0$, which implies $\gamma A^*(I - P_{Q_1})Ax = 0$ for any $\gamma > 0$; hence $x$ satisfies the fixed point equation $x = (I - \gamma A^*(I - P_{Q_1})A)x$. At the same time, note that $x \in C_1$. Thus,
$$x = P_{C_1}\big(I - \gamma A^*(I - P_{Q_1})A\big)x.$$
Therefore, $x$ solves the split feasibility problem if and only if $x$ solves the above fixed point equation. This observation suggests that the multiple-sets split feasibility problem is equivalent to a common fixed point problem of finitely many nonexpansive mappings. On the other hand, if $x$ solves the multiple-sets split feasibility problem, then $x$ satisfies two properties:
  (i) the distance from $x$ to each $C_i$ is zero, and
  (ii) the distance from $Ax$ to each $Q_j$ is also zero.
First, we consider the following proximity function:
$$g(x) = \frac{1}{2} \sum_{i=1}^{N} \alpha_i \|x - P_{C_i} x\|^2 + \frac{1}{2} \sum_{j=1}^{M} \beta_j \|Ax - P_{Q_j} Ax\|^2,$$
where $\{\alpha_i\}$ and $\{\beta_j\}$ are positive real numbers, and $P_{C_i}$ and $P_{Q_j}$ are the metric projections onto $C_i$ and $Q_j$, respectively. It is clear that the proximity function $g$ is convex and differentiable with gradient
$$\nabla g(x) = \sum_{i=1}^{N} \alpha_i (I - P_{C_i})x + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})Ax.$$
One can check that the gradient $\nabla g$ is Lipschitz continuous with constant
$$L = \sum_{i=1}^{N} \alpha_i + \sum_{j=1}^{M} \beta_j \|A\|^2.$$
Note that $x$ is a solution of the multiple-sets split feasibility problem (1.1) if and only if $\nabla g(x) = 0$. Since $g(x) \ge 0$ for all $x \in H_1$, a solution of (1.1) is a minimizer of $g$ over any closed convex subset, with minimum value zero. This motivates us to consider the following minimization problem:
$$\min_{x \in \Omega} g(x),$$
(2.1)
where $\Omega$ is a closed convex subset of $H_1$ whose intersection with the solution set of the multiple-sets split feasibility problem is nonempty, and to seek a solution of the so-called constrained multiple-sets split feasibility problem
$$\text{find } x \in \Omega \text{ such that } x \text{ solves (1.1)}.$$
(2.2)
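In a finite-dimensional setting, $g$ and $\nabla g$ are straightforward to implement once the projections $P_{C_i}$ and $P_{Q_j}$ are available. The following is a minimal sketch under the illustrative assumption that the sets are boxes (so each projection is a coordinatewise clipping); the finite-difference check at the end confirms numerically that the stated gradient formula is consistent with $g$:

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto the box [lo, hi] (coordinatewise clipping)."""
    return np.clip(x, lo, hi)

def g_and_grad(x, A, C, Q, alphas, betas):
    """Proximity function g(x) and its gradient; C, Q are lists of (lo, hi) boxes."""
    val = 0.0
    grad = np.zeros_like(x)
    for a, (lo, hi) in zip(alphas, C):
        r = x - proj_box(x, lo, hi)        # residual (I - P_{C_i}) x
        val += 0.5 * a * (r @ r)
        grad += a * r
    Ax = A @ x
    for b, (lo, hi) in zip(betas, Q):
        r = Ax - proj_box(Ax, lo, hi)      # residual (I - P_{Q_j}) A x
        val += 0.5 * b * (r @ r)
        grad += b * (A.T @ r)              # A.T plays the role of A* here
    return val, grad

# Finite-difference check of the gradient formula at a random point
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 2))
C = [(-np.ones(2), np.ones(2))]
Q = [(np.zeros(3), 2 * np.ones(3))]
x = 3.0 * rng.normal(size=2)
val, grad = g_and_grad(x, A, C, Q, [1.0], [1.0])
eps = 1e-6
fd = np.array([(g_and_grad(x + eps * e, A, C, Q, [1.0], [1.0])[0] - val) / eps
               for e in np.eye(2)])
```

Since the squared distance to a closed convex set is continuously differentiable, the forward difference `fd` should agree with `grad` to within the discretization error.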

2.3 The well-known lemmas

The following lemmas will be helpful for our main results in the next section.

Lemma 2.1 [25]

Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n) z_n + \beta_n x_n$ for all integers $n \ge 0$ and $\limsup_{n \to \infty} (\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0$. Then $\lim_{n \to \infty} \|z_n - x_n\| = 0$.

Lemma 2.2 [26]

Let $K$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T : K \to K$ be a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$. Then $I - T$ is demiclosed on $K$; i.e., if $x_n \rightharpoonup x \in K$ and $x_n - Tx_n \to 0$, then $x = Tx$.

Lemma 2.3 [27]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that $a_{n+1} \le (1 - \gamma_n) a_n + \delta_n$, where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that
  (1) $\sum_{n=1}^{\infty} \gamma_n = \infty$;
  (2) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.
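Lemma 2.3 is easy to observe numerically. The snippet below runs the recursion with equality and the hypothetical sample choices $\gamma_n = 1/n$ and $\delta_n = 1/n^2$, so that $\sum \gamma_n = \infty$ and $\delta_n/\gamma_n = 1/n \to 0$:

```python
# Recursion a_{n+1} <= (1 - gamma_n) a_n + delta_n from Lemma 2.3,
# run with equality and the illustrative choices gamma_n = 1/n, delta_n = 1/n^2.
a = 1.0
for n in range(1, 200_001):
    gamma_n = 1.0 / n          # sum of gamma_n diverges
    delta_n = 1.0 / n ** 2     # delta_n / gamma_n = 1/n -> 0
    a = (1 - gamma_n) * a + delta_n
# a is now of order (log n)/n and tends to 0, as the lemma predicts
```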

3 Main results

Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $C_1, C_2, \ldots, C_N$ be $N$ nonempty closed convex subsets of $H_1$ and let $Q_1, Q_2, \ldots, Q_M$ be $M$ nonempty closed convex subsets of $H_2$. Let $A : H_1 \to H_2$ be a bounded linear operator. Assume that the multiple-sets split feasibility problem is consistent, i.e., that it is solvable. We now turn to solving the constrained multiple-sets split feasibility problem (2.2).

For solving (2.2), we introduce the following iterative algorithm.

Algorithm 3.1 Let $f : H_1 \to H_1$ be a $\rho$-contraction. Let $B : H_1 \to H_1$ be a self-adjoint, strongly positive bounded linear operator with coefficient $\alpha > 0$. Let $\sigma$ and $\gamma$ be two constants such that $0 < \gamma < \frac{2}{L}$ and $0 < \sigma \rho < \alpha$. For an arbitrary initial point $x_0 \in H_1$, we define a sequence $\{x_n\}$ iteratively by
$$x_{n+1} = P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big) P_{\Omega}\big(\xi_n \sigma f + (I - \xi_n B)\big) x_n,$$
(3.1)

for all $n \ge 0$, where $\{\xi_n\}$ is a real sequence in $(0, 1)$.
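Iteration (3.1) can be sketched in a finite-dimensional setting. All concrete choices below (box-shaped sets, $B = I$, the particular contraction $f$, and the parameter values) are illustrative assumptions, not prescribed by the paper:

```python
import numpy as np

def algorithm_3_1(x0, A, projs_C, projs_Q, alphas, betas,
                  proj_Omega, f, B, sigma, gamma, xi, n_iter):
    """One possible realization of iteration (3.1) in R^n."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        # inner step: P_Omega(xi_n * sigma * f(x_n) + (I - xi_n B) x_n)
        y = proj_Omega(xi(n) * sigma * f(x) + x - xi(n) * (B @ x))
        # gradient-projection step with the proximity-function gradient
        grad = sum(a * (y - pC(y)) for a, pC in zip(alphas, projs_C))
        Ay = A @ y
        grad = grad + A.T @ sum(b * (Ay - pQ(Ay)) for b, pQ in zip(betas, projs_Q))
        x = proj_Omega(y - gamma * grad)
    return x

# Toy consistent problem in R^2: C = [-1,1]^2, Q = [0.5,2]^2, A = I, Omega = [-3,3]^2
A = np.eye(2)
x = algorithm_3_1(
    x0=np.array([3.0, -3.0]), A=A,
    projs_C=[lambda z: np.clip(z, -1.0, 1.0)],
    projs_Q=[lambda z: np.clip(z, 0.5, 2.0)],
    alphas=[1.0], betas=[1.0],
    proj_Omega=lambda z: np.clip(z, -3.0, 3.0),
    f=lambda z: 0.4 * z,           # rho = 0.4 contraction
    B=np.eye(2), sigma=1.0,        # B = I, so alpha = 1 and sigma*rho < alpha
    gamma=0.5,                     # L = 1 + ||A||^2 = 2, so gamma < 2/L
    xi=lambda n: 1.0 / (n + 2), n_iter=4000)
```

Here the solution set of (2.2) is the box $[0.5, 1]^2$; with $f(z) = 0.4z$ and $B = I$, the variational inequality (3.2) singles out the element of that set closest to the origin, namely $(0.5, 0.5)$, and the iterates approach it.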

Fact 3.2 The mapping $I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)$ is $\frac{\gamma L}{2}$-averaged.

In order to check Fact 3.2, we need the following lemmas.

Lemma 3.3 (Baillon-Haddad) [28]

If $h : H \to \mathbb{R}$ has an $L$-Lipschitz continuous gradient $\nabla h$, then $\nabla h$ is $\frac{1}{L}$-ism.

Lemma 3.4 Let $T : H \to H$ be given and let $V = I - T$ be the complement of $T$. Let also $S : H \to H$ be given.
  (i) $T$ is nonexpansive if and only if $V$ is $\frac{1}{2}$-inverse strongly monotone (in short, $\frac{1}{2}$-ism).
  (ii) If $S$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma S$ is $\frac{\nu}{\gamma}$-ism.
  (iii) $S$ is averaged if and only if the complement $I - S$ is $\nu$-ism for some $\nu > \frac{1}{2}$.
     
Lemma 3.5 Let $S, T, V : H \to H$ be given operators.
  (i) If $S = (1 - \alpha)T + \alpha V$ for some $\alpha \in (0, 1)$ and if $T$ is averaged and $V$ is nonexpansive, then $S$ is averaged.
  (ii) $S$ is firmly nonexpansive if and only if the complement $I - S$ is firmly nonexpansive. If $S$ is firmly nonexpansive, then $S$ is averaged.
  (iii) If $S = (1 - \alpha)T + \alpha V$ for some $\alpha \in (0, 1)$, $T$ is firmly nonexpansive and $V$ is nonexpansive, then $S$ is averaged.
  (iv) If $S$ and $T$ are both averaged, then the product (composite) $ST$ is averaged.

Proof of Fact 3.2 Since the gradient $\nabla g = \sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A$ is Lipschitz continuous with constant $L = \sum_{i=1}^{N} \alpha_i + \sum_{j=1}^{M} \beta_j \|A\|^2$, Lemma 3.3 implies that $\nabla g$ is $\frac{1}{L}$-ism, and then, by Lemma 3.4(ii), $\gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)$ is $\frac{1}{\gamma L}$-ism. Since $\gamma < \frac{2}{L}$ gives $\frac{1}{\gamma L} > \frac{1}{2}$, Lemma 3.4(iii) shows that $I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)$ is $\frac{\gamma L}{2}$-averaged. □

Now, we prove the convergence of the sequence { x n } .

Theorem 3.6 Suppose that $S \ne \emptyset$. Assume that the sequence $\{\xi_n\}$ satisfies the control conditions:
  (i) $\lim_{n \to \infty} \xi_n = 0$;
  (ii) $\sum_{n=0}^{\infty} \xi_n = \infty$.

Then the sequence $\{x_n\}$ generated by (3.1) converges strongly to a solution $x^*$ of (2.2), where $x^*$ also solves the following variational inequality:
$$x^* \in S \quad \text{such that} \quad \langle \sigma f(x^*) - Bx^*, \tilde{x} - x^* \rangle \le 0 \quad \text{for all } \tilde{x} \in S,$$
(3.2)

where $S$ is the set of solutions of (2.2).

Proof Let $x^* \in S$. Since $B$ is a strongly positive bounded linear operator with coefficient $\alpha > 0$, we have $\|I - \xi_n B\| \le 1 - \xi_n \alpha$ (without loss of generality, we may assume $\xi_n \le \|B\|^{-1}$). Thus, by (3.1), we have
$$\begin{aligned}
\|x_{n+1} - x^*\| &= \Big\|P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big) P_{\Omega}\big(\xi_n \sigma f + (I - \xi_n B)\big)x_n - x^*\Big\| \\
&\le \|\xi_n \sigma f(x_n) + (I - \xi_n B)x_n - x^*\| \\
&\le \xi_n \sigma \|f(x_n) - f(x^*)\| + \|I - \xi_n B\|\,\|x_n - x^*\| + \xi_n \|\sigma f(x^*) - Bx^*\| \\
&\le \xi_n \sigma \rho \|x_n - x^*\| + (1 - \xi_n \alpha)\|x_n - x^*\| + \xi_n \|\sigma f(x^*) - Bx^*\| \\
&= [1 - (\alpha - \sigma\rho)\xi_n]\|x_n - x^*\| + (\alpha - \sigma\rho)\xi_n \frac{\|\sigma f(x^*) - Bx^*\|}{\alpha - \sigma\rho}.
\end{aligned}$$
An induction yields
$$\|x_{n+1} - x^*\| \le \max\Big\{\|x_n - x^*\|, \frac{\|\sigma f(x^*) - Bx^*\|}{\alpha - \sigma\rho}\Big\} \le \max\Big\{\|x_0 - x^*\|, \frac{\|\sigma f(x^*) - Bx^*\|}{\alpha - \sigma\rho}\Big\}.$$

Hence, $\{x_n\}$ is bounded.

It is well known that the metric projection $P_{\Omega}$ is firmly nonexpansive, hence averaged. By Fact 3.2, $I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)$ is $\frac{\gamma L}{2}$-averaged. From Lemma 3.5(iv), the composite of three averaged mappings is averaged. So, $P_{\Omega}(I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A))P_{\Omega}$ is an averaged mapping. Thus, there exists a constant $\delta \in (0, 1)$ such that
$$P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big)P_{\Omega} = (1 - \delta)I + \delta U,$$
where $U$ is a nonexpansive mapping. Set $y_n = \xi_n \sigma f(x_n) + (I - \xi_n B)x_n$ for all $n \ge 0$. Then we have
$$\begin{aligned}
x_{n+1} &= \big((1 - \delta)I + \delta U\big)\big(\xi_n \sigma f(x_n) + (I - \xi_n B)x_n\big) \\
&= (1 - \delta)x_n + \xi_n (1 - \delta)\big(\sigma f(x_n) - Bx_n\big) + \delta U y_n \\
&= (1 - \delta)x_n + \delta\Big(\frac{1 - \delta}{\delta}\xi_n\big(\sigma f(x_n) - Bx_n\big) + U y_n\Big) \\
&= (1 - \delta)x_n + \delta z_n,
\end{aligned}$$
where
$$z_n = \frac{(1 - \delta)\xi_n}{\delta}\big(\sigma f(x_n) - Bx_n\big) + U y_n.$$
Since $\xi_n \to 0$ (as $n \to \infty$) and the sequences $\{f(x_n)\}$ and $\{Bx_n\}$ are bounded, we first observe that
$$\lim_{n \to \infty} \|y_n - x_n\| = \lim_{n \to \infty} \xi_n \|\sigma f(x_n) - Bx_n\| = 0$$
and
$$\lim_{n \to \infty} \|z_n - U y_n\| = \lim_{n \to \infty} \frac{(1 - \delta)\xi_n}{\delta}\|\sigma f(x_n) - Bx_n\| = 0.$$
Next, we estimate $\|z_{n+1} - z_n\|$. Note that
$$z_{n+1} - z_n = \frac{(1 - \delta)\xi_{n+1}}{\delta}\big(\sigma f(x_{n+1}) - Bx_{n+1}\big) + U y_{n+1} - \frac{(1 - \delta)\xi_n}{\delta}\big(\sigma f(x_n) - Bx_n\big) - U y_n.$$
It follows that
$$\begin{aligned}
\|z_{n+1} - z_n\| &\le \frac{1 - \delta}{\delta}\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big) + \|U y_{n+1} - U y_n\| \\
&\le \frac{1 - \delta}{\delta}\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big) + \|y_{n+1} - y_n\|.
\end{aligned}$$
Since $y_{n+1} - y_n = \xi_{n+1}\sigma f(x_{n+1}) + (I - \xi_{n+1}B)x_{n+1} - \xi_n \sigma f(x_n) - (I - \xi_n B)x_n$, we get
$$\begin{aligned}
\|z_{n+1} - z_n\| &\le \|x_{n+1} - x_n\| + \xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\| \\
&\quad + \frac{1 - \delta}{\delta}\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big).
\end{aligned}$$
It follows that
$$\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \le \Big(1 + \frac{1 - \delta}{\delta}\Big)\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big).$$
Since $\lim_{n \to \infty} \xi_n = 0$ and the sequences $\{f(x_n)\}$, $\{Bx_n\}$ are bounded, we deduce
$$\limsup_{n \to \infty}\big(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\big) \le 0.$$
By Lemma 2.1, we get
$$\lim_{n \to \infty} \|z_n - x_n\| = 0.$$
Therefore,
$$\lim_{n \to \infty} \|U x_n - x_n\| = \lim_{n \to \infty}\Big\|P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big)P_{\Omega}(x_n) - x_n\Big\| = 0.$$
By the definition of the sequence $\{x_n\}$, we know that $x_n \in \Omega$; hence $P_{\Omega}(x_n) = x_n$. So,
$$\lim_{n \to \infty}\Big\|P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big)x_n - x_n\Big\| = 0.$$
Next we prove
$$\limsup_{n \to \infty}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \le 0.$$
In order to get this inequality, we first prove
$$\limsup_{n \to \infty}\langle \sigma f(x^*) - Bx^*, x_n - x^* \rangle \le 0,$$
where $x^*$ is the unique solution of VI (3.2). For this purpose, we choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that
$$\limsup_{n \to \infty}\langle \sigma f(x^*) - Bx^*, x_n - x^* \rangle = \lim_{i \to \infty}\langle \sigma f(x^*) - Bx^*, x_{n_i} - x^* \rangle.$$
Since $\{x_{n_i}\}$ is bounded, there exists a subsequence of $\{x_{n_i}\}$ which converges weakly to a point $\tilde{x}$. Without loss of generality, we may assume that $x_{n_i} \rightharpoonup \tilde{x}$. Since $P_{\Omega}(I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A))$ is nonexpansive, by Lemma 2.2 we have $\tilde{x} \in \operatorname{Fix}(P_{\Omega}(I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)))$. Therefore,
$$\limsup_{n \to \infty}\langle \sigma f(x^*) - Bx^*, x_n - x^* \rangle = \lim_{i \to \infty}\langle \sigma f(x^*) - Bx^*, x_{n_i} - x^* \rangle = \langle \sigma f(x^*) - Bx^*, \tilde{x} - x^* \rangle \le 0.$$
Since $\|x_n - P_{\Omega}(y_n)\| = \|P_{\Omega}(x_n) - P_{\Omega}(y_n)\| \le \|x_n - y_n\| \to 0$, we obtain
$$\limsup_{n \to \infty}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \le 0.$$
Note that
$$\|P_{\Omega}(y_n) - x^*\|^2 = \langle P_{\Omega}(y_n) - y_n, P_{\Omega}(y_n) - x^* \rangle + \langle y_n - x^*, P_{\Omega}(y_n) - x^* \rangle.$$
From the characterizing property of the metric projection $P_{\Omega}$, we have $\langle P_{\Omega}(y_n) - y_n, P_{\Omega}(y_n) - x^* \rangle \le 0$. Hence,
$$\begin{aligned}
\|P_{\Omega}(y_n) - x^*\|^2 &\le \langle y_n - x^*, P_{\Omega}(y_n) - x^* \rangle \\
&= \langle \xi_n \sigma (f(x_n) - f(x^*)) + (I - \xi_n B)(x_n - x^*), P_{\Omega}(y_n) - x^* \rangle + \xi_n \langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \\
&\le \big(\xi_n \sigma \|f(x_n) - f(x^*)\| + \|I - \xi_n B\|\,\|x_n - x^*\|\big)\|P_{\Omega}(y_n) - x^*\| + \xi_n \langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \\
&\le \big(1 - \xi_n(\alpha - \sigma\rho)\big)\|x_n - x^*\|\,\|P_{\Omega}(y_n) - x^*\| + \xi_n \langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \\
&\le \frac{1 - \xi_n(\alpha - \sigma\rho)}{2}\|x_n - x^*\|^2 + \frac{1}{2}\|P_{\Omega}(y_n) - x^*\|^2 + \xi_n \langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle.
\end{aligned}$$
It follows that
$$\|P_{\Omega}(y_n) - x^*\|^2 \le [1 - (\alpha - \sigma\rho)\xi_n]\|x_n - x^*\|^2 + 2\xi_n \langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle.$$
Finally, we show that $x_n \to x^*$. From (3.1), we have
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \Big\|P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big)P_{\Omega}(y_n) - x^*\Big\|^2 \\
&\le \|P_{\Omega}(y_n) - x^*\|^2 \\
&\le [1 - (\alpha - \sigma\rho)\xi_n]\|x_n - x^*\|^2 + (\alpha - \sigma\rho)\xi_n \cdot \frac{2}{\alpha - \sigma\rho}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \\
&= (1 - \gamma_n)\|x_n - x^*\|^2 + \delta_n,
\end{aligned}$$

where $\gamma_n = (\alpha - \sigma\rho)\xi_n$ and $\delta_n = (\alpha - \sigma\rho)\xi_n \cdot \frac{2}{\alpha - \sigma\rho}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle$. Since $\sum_{n=1}^{\infty} \gamma_n = \infty$ and $\limsup_{n \to \infty} \delta_n/\gamma_n = \limsup_{n \to \infty} \frac{2}{\alpha - \sigma\rho}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \le 0$, all conditions of Lemma 2.3 are satisfied. Therefore, we immediately deduce that $x_n \to x^*$. This completes the proof. □

From (3.1) and Theorem 3.6, we can easily deduce the following results.

Algorithm 3.7 For an arbitrary initial point $x_0 \in H_1$, we define a sequence $\{x_n\}$ iteratively by
$$x_{n+1} = P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big) P_{\Omega}\big(\xi_n \sigma f(x_n) + (1 - \xi_n)x_n\big),$$
(3.3)

for all $n \ge 0$, where $\{\xi_n\}$ is a real sequence in $(0, 1)$.

Corollary 3.8 Suppose that $S \ne \emptyset$. Assume that the sequence $\{\xi_n\}$ satisfies the conditions
  (i) $\lim_{n \to \infty} \xi_n = 0$ and
  (ii) $\sum_{n=0}^{\infty} \xi_n = \infty$.

Then the sequence $\{x_n\}$ generated by (3.3) converges strongly to a point $x^*$, which solves the following variational inequality:
$$x^* \in S \quad \text{such that} \quad \langle \sigma f(x^*) - x^*, \tilde{x} - x^* \rangle \le 0 \quad \text{for all } \tilde{x} \in S.$$
Algorithm 3.9 For an arbitrary initial point $x_0$, we define a sequence $\{x_n\}$ iteratively by
$$x_{n+1} = P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big) P_{\Omega}\big((1 - \xi_n)x_n\big),$$
(3.4)

for all $n \ge 0$, where $\{\xi_n\}$ is a real sequence in $(0, 1)$.

Corollary 3.10 Suppose that $S \ne \emptyset$. Assume that the sequence $\{\xi_n\}$ satisfies the conditions
  (i) $\lim_{n \to \infty} \xi_n = 0$ and
  (ii) $\sum_{n=0}^{\infty} \xi_n = \infty$.

Then the sequence $\{x_n\}$ generated by (3.4) converges strongly to the point $x^* \in S$ which is the minimum-norm element of $S$.
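Corollary 3.10 can be observed numerically. The following minimal sketch of iteration (3.4) uses illustrative assumptions (boxes $C = [-1,1]^2$, $Q = [0.5,2]^2$, $\Omega = [-3,3]^2$, and $A = I$), for which the solution set of (2.2) is $[0.5, 1]^2$ and its minimum-norm element is $(0.5, 0.5)$:

```python
import numpy as np

proj_C = lambda z: np.clip(z, -1.0, 1.0)      # P_C for C = [-1,1]^2
proj_Q = lambda z: np.clip(z, 0.5, 2.0)       # P_Q for Q = [0.5,2]^2
proj_Omega = lambda z: np.clip(z, -3.0, 3.0)  # P_Omega for Omega = [-3,3]^2
A = np.eye(2)
gamma = 0.5                                   # L = 1 + ||A||^2 = 2, gamma < 2/L

x = np.array([2.5, -2.0])
for n in range(4000):
    y = proj_Omega((1.0 - 1.0 / (n + 2)) * x)              # anchor step, xi_n = 1/(n+2)
    grad = (y - proj_C(y)) + A.T @ (A @ y - proj_Q(A @ y))  # gradient of g at y
    x = proj_Omega(y - gamma * grad)
# x tends to the minimum-norm solution (0.5, 0.5)
```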

Declarations

Acknowledgements

Jinwei Shi was supported in part by the scientific research fund of the Educational Commission of Hebei Province of China (No. 936101101) and the National Natural Science Foundation of China (No. 51077053). Yeong-Cheng Liou was supported in part by NSC 101-2628-E-230-001-MY3 and NSC 101-2622-E-230-005-CC3.

Authors’ Affiliations

(1)
College of Science, Agricultural University of Hebei
(2)
North China Electric Power University
(3)
Department of Information Management, Cheng Shiu University

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239.
  2. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453.
  3. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084.
  4. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256.
  5. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365.
  6. Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034.
  7. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
  8. Yao Y, Chen R, Marino G, Liou YC: Applications of fixed point and optimization methods to the multiple-sets split feasibility problem. J. Appl. Math. 2012, 2012: Article ID 927530.
  9. Yao Y, Wu J, Liou YC: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012, 2012: Article ID 140679.
  10. Yao Y, Yang PX, Kang SM: Composite projection algorithms for the split feasibility problem. Math. Comput. Model. 2013, 57: 693–700.
  11. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120.
  12. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665.
  13. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266.
  14. Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791–1799.
  15. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010. doi:10.1155/2010/102085.
  16. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007.
  17. Yu X, Shahzad N, Yao Y: Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 2012, 6: 1447–1462.
  18. Dang Y, Gao Y, Han Y: A perturbed projection algorithm with inertial technique for split feasibility problem. J. Appl. Math. 2012, 2012: Article ID 207323.
  19. Dang Y, Gao Y: An extrapolated iterative algorithm for multiple-set split feasibility problem. Abstr. Appl. Anal. 2012, 2012: Article ID 149508.
  20. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27(1): Article ID 015007.
  21. Yao Y, Kim TH, Chebbi S, Xu HK: A modified extragradient method for the split feasibility and fixed point problems. J. Nonlinear Convex Anal. 2012, 13(3): 383–396.
  22. Wang Z, Yang Q, Yang Y: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2010. doi:10.1016/j.amc.2010.11.058.
  23. Ceng LC, Yao JC: Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint. Fixed Point Theory Appl. 2013, 2013: Article ID 43.
  24. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal., Theory Methods Appl. 2012, 75: 2116–2125.
  25. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005: 103–123.
  26. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
  27. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256.
  28. Baillon JB, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150.

Copyright

© Zheng et al.; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.