The constrained multiple-sets split feasibility problem and its projection algorithms

Abstract

Projection algorithms for solving the constrained multiple-sets split feasibility problem are presented, and strong convergence of the algorithms is established under mild conditions. In particular, the minimum-norm solution of the constrained multiple-sets split feasibility problem can be found.

1 Introduction

Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $C_1, C_2, \ldots, C_N$ be $N$ nonempty closed convex subsets of $H_1$ and let $Q_1, Q_2, \ldots, Q_M$ be $M$ nonempty closed convex subsets of $H_2$. Let $A : H_1 \to H_2$ be a bounded linear operator. The multiple-sets split feasibility problem is formulated as follows:

Find an $x \in \bigcap_{i=1}^{N} C_i$ such that $Ax \in \bigcap_{j=1}^{M} Q_j$.
(1.1)

A special case: if $N = M = 1$, then the multiple-sets split feasibility problem reduces to the split feasibility problem, which is formulated as finding a point $x$ with the property

$x \in C$ and $Ax \in Q$.

The split feasibility problem in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise in phase retrieval and medical image reconstruction [2]. It has been found that the multiple-sets split feasibility problem and the split feasibility problem can be used to model intensity-modulated radiation therapy [3–6]. Various algorithms have been devised to solve the multiple-sets split feasibility problem and the split feasibility problem; see, e.g., [7–24] and the references therein.

A popular algorithm for solving the multiple-sets split feasibility problem and the split feasibility problem is Byrne's CQ algorithm [11], which can be viewed as a gradient-projection method in convex minimization. Motivated by this idea, in this paper we present composite projection algorithms for solving the constrained multiple-sets split feasibility problem. Strong convergence of the algorithms is established under mild conditions. In particular, the minimum-norm solution of the constrained multiple-sets split feasibility problem can be found.

2 Preliminaries

2.1 Concepts

Let $H$ be a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and norm $\| \cdot \|$, and let $\Omega$ be a nonempty closed convex subset of $H$. Recall that the (nearest point or metric) projection from $H$ onto $\Omega$, denoted by $P_{\Omega}$, is defined in such a way that, for each $x \in H$, $P_{\Omega}(x)$ is the unique point in $\Omega$ with the property

$\|x - P_{\Omega}(x)\| = \min \{ \|x - y\| : y \in \Omega \}$.

It is known that $P_{\Omega}$ satisfies

$\langle x - y, P_{\Omega}(x) - P_{\Omega}(y) \rangle \ge \|P_{\Omega}(x) - P_{\Omega}(y)\|^2$, for all $x, y \in H$.

Moreover, $P_{\Omega}$ is characterized by the following property:

$\langle x - P_{\Omega}(x), y - P_{\Omega}(x) \rangle \le 0$

for all $x \in H$ and $y \in \Omega$.
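As a concrete illustration (not from the paper; the ball-shaped set and all names below are hypothetical), the projection onto a closed Euclidean ball has a closed form, and the characterizing inequality $\langle x - P_{\Omega}(x), y - P_{\Omega}(x)\rangle \le 0$ can be checked numerically:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection P_Omega onto the closed ball {y : ||y - center|| <= radius}."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x.copy()
    return center + (radius / dist) * d

rng = np.random.default_rng(0)
center, radius = np.zeros(3), 1.0
x = 5.0 * rng.normal(size=3)                  # an arbitrary point of H = R^3
px = project_ball(x, center, radius)
y = project_ball(5.0 * rng.normal(size=3), center, radius)  # some point of Omega
# characterizing inequality: <x - P(x), y - P(x)> <= 0 for every y in Omega
assert np.dot(x - px, y - px) <= 1e-9
```

The same check can be repeated with any convex set whose projection is computable (boxes, half-spaces, affine subspaces).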

We also recall that a mapping $f : \Omega \to H$ is said to be $\rho$-contractive if $\|f(x) - f(y)\| \le \rho \|x - y\|$ for some constant $\rho \in [0,1)$ and for all $x, y \in \Omega$. A mapping $T : \Omega \to \Omega$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in \Omega$. A mapping $T$ is called averaged if $T = (1-\delta)I + \delta U$, where $\delta \in (0,1)$ and $U : \Omega \to \Omega$ is nonexpansive. In this case, we also say that $T$ is $\delta$-averaged. A bounded linear operator $B$ is said to be strongly positive on $H$ if there exists a constant $\alpha > 0$ such that

$\langle Bx, x \rangle \ge \alpha \|x\|^2$, for all $x \in H$.

Let $A$ be an operator with domain $D(A)$ and range $R(A)$ in $H$.

  (i) $A$ is monotone if, for all $x, y \in D(A)$,

    $\langle Ax - Ay, x - y \rangle \ge 0$.

  (ii) Given a number $\nu > 0$, $A$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) (or co-coercive) if

    $\langle Ax - Ay, x - y \rangle \ge \nu \|Ax - Ay\|^2$, for all $x, y \in H$.

It is easily seen that a projection $P_{\Omega}$ is a 1-ism and hence $P_{\Omega}$ is $\frac{1}{2}$-averaged.

We will need the following notation:

  • $\mathrm{Fix}(T)$ stands for the set of fixed points of $T$;

  • $x_n \rightharpoonup x$ stands for the weak convergence of $\{x_n\}$ to $x$;

  • $x_n \to x$ stands for the strong convergence of $\{x_n\}$ to $x$.

2.2 Mathematical model

Now, we consider the mathematical model of the multiple-sets split feasibility problem. Let $x \in C_1$ and assume that $Ax \in Q_1$. Then $(I - P_{Q_1})Ax = 0$, which implies $\gamma A^*(I - P_{Q_1})Ax = 0$; hence $x$ satisfies the fixed point equation $x = (I - \gamma A^*(I - P_{Q_1})A)x$. At the same time, note that $x \in C_1$. Thus,

$x = P_{C_1}\big(I - \gamma A^*(I - P_{Q_1})A\big)x$.

Hence, $x$ solves the split feasibility problem if and only if $x$ solves the above fixed point equation. This observation suggests that the multiple-sets split feasibility problem is equivalent to a common fixed point problem of finitely many nonexpansive mappings. On the other hand, if $x$ solves the multiple-sets split feasibility problem, then $x$ satisfies two properties:

  (i) the distance from $x$ to each $C_i$ is zero, and

  (ii) the distance from $Ax$ to each $Q_j$ is also zero.
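The fixed point characterization above is easy to verify numerically. The sketch below (a hypothetical toy instance with box-shaped sets, invented for illustration) checks that a solution of the split feasibility problem is a fixed point of $P_{C}(I - \gamma A^*(I - P_{Q})A)$:

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box [lo, hi] (componentwise clipping)."""
    return np.clip(x, lo, hi)

# Hypothetical instance: C = [0,1]^2, Q = [0,2]^2, A = 2I, so x = (0.5, 0.5)
# solves the split feasibility problem (Ax = (1,1) lies in Q).
A = 2.0 * np.eye(2)
lo_C, hi_C = np.zeros(2), np.ones(2)
lo_Q, hi_Q = np.zeros(2), 2.0 * np.ones(2)
x = np.array([0.5, 0.5])
gamma = 0.1                          # any gamma in (0, 2/||A||^2)

residual = A.T @ (A @ x - project_box(A @ x, lo_Q, hi_Q))
x_new = project_box(x - gamma * residual, lo_C, hi_C)
assert np.allclose(x_new, x)         # x is a fixed point of P_C(I - gamma A^T(I - P_Q)A)
```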

First, we consider the following proximity function:

$g(x) = \frac{1}{2}\sum_{i=1}^{N} \alpha_i \|x - P_{C_i}x\|^2 + \frac{1}{2}\sum_{j=1}^{M} \beta_j \|Ax - P_{Q_j}Ax\|^2$,

where $\{\alpha_i\}$ and $\{\beta_j\}$ are positive real numbers, and $P_{C_i}$ and $P_{Q_j}$ are the metric projections onto $C_i$ and $Q_j$, respectively. It is clear that the proximity function $g$ is convex and differentiable with gradient

$\nabla g(x) = \sum_{i=1}^{N} \alpha_i (I - P_{C_i})x + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})Ax$.

We can check that the gradient $\nabla g$ is Lipschitz continuous with constant

$L = \sum_{i=1}^{N} \alpha_i + \sum_{j=1}^{M} \beta_j \|A\|^2$.

Note that $x$ is a solution of the multiple-sets split feasibility problem (1.1) if and only if $g(x) = 0$. Since $g(x) \ge 0$ for all $x \in H_1$, a solution of the multiple-sets split feasibility problem (1.1) is a minimizer of $g$ over any closed convex subset, with minimum value zero. This motivates us to consider the following minimization problem:

$\min_{x \in \Omega} g(x)$,
(2.1)

where $\Omega$ is a closed convex subset of $H_1$ whose intersection with the solution set of the multiple-sets split feasibility problem is nonempty, and to get a solution of the so-called constrained multiple-sets split feasibility problem:

Find $x^* \in \Omega$ such that $x^*$ solves (1.1).
(2.2)
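To make the proximity function concrete, the following sketch (a hypothetical finite-dimensional instance; the sets and weights are made up for illustration) evaluates $g$ and $\nabla g$ for box-shaped sets and checks that both vanish at a solution of (1.1):

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

# Hypothetical data: two boxes C_i in R^2, one box Q_1 in R^2, A = I.
A = np.eye(2)
C = [(np.zeros(2), np.ones(2)), (np.full(2, 0.25), np.full(2, 2.0))]
Q = [(np.zeros(2), np.full(2, 3.0))]
alpha = [1.0, 1.0]   # positive weights alpha_i
beta = [1.0]         # positive weights beta_j

def g(x):
    """Proximity function g(x)."""
    val = sum(0.5 * a * np.sum((x - project_box(x, lo, hi)) ** 2)
              for a, (lo, hi) in zip(alpha, C))
    Ax = A @ x
    val += sum(0.5 * b * np.sum((Ax - project_box(Ax, lo, hi)) ** 2)
               for b, (lo, hi) in zip(beta, Q))
    return val

def grad_g(x):
    """Gradient of g: sum alpha_i (I - P_Ci)x + sum beta_j A^T (I - P_Qj) A x."""
    grad = sum(a * (x - project_box(x, lo, hi)) for a, (lo, hi) in zip(alpha, C))
    Ax = A @ x
    grad = grad + sum(b * (A.T @ (Ax - project_box(Ax, lo, hi)))
                      for b, (lo, hi) in zip(beta, Q))
    return grad

x_sol = np.array([0.5, 0.5])   # lies in every C_i and maps into Q_1
assert g(x_sol) == 0.0
assert np.allclose(grad_g(x_sol), 0.0)
```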

2.3 The well-known lemmas

The following lemmas will be helpful for our main results in the next section.

Lemma 2.1 [25]

Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0,1]$ with $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$. Suppose that $x_{n+1} = (1-\beta_n)z_n + \beta_n x_n$ for all integers $n \ge 0$ and $\limsup_{n\to\infty} (\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0$. Then $\lim_{n\to\infty} \|z_n - x_n\| = 0$.

Lemma 2.2 [26]

Let $K$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T : K \to K$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. Then $I - T$ is demiclosed at zero on $K$, i.e., if $x_n \rightharpoonup x \in K$ and $x_n - Tx_n \to 0$, then $x = Tx$.

Lemma 2.3 [27]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that $a_{n+1} \le (1-\gamma_n)a_n + \delta_n$, where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

  (1) $\sum_{n=1}^{\infty} \gamma_n = \infty$;

  (2) $\limsup_{n\to\infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n\to\infty} a_n = 0$.
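Lemma 2.3 can be sanity-checked with a toy numerical simulation (illustrative only; the particular sequences $\gamma_n = 1/(n+1)$ and $\delta_n = \gamma_n/(n+1)$ are made up and satisfy conditions (1) and (2)):

```python
# Toy illustration of Lemma 2.3: gamma_n = 1/(n+1) has divergent sum, and
# delta_n = gamma_n/(n+1) satisfies delta_n/gamma_n -> 0, so a_n should tend to 0.
a = 1.0
for n in range(1, 20000):
    gamma_n = 1.0 / (n + 1)
    delta_n = gamma_n / (n + 1)
    a = (1.0 - gamma_n) * a + delta_n   # the recursion of Lemma 2.3 (with equality)
assert a < 1e-3
```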

3 Main results

Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $C_1, C_2, \ldots, C_N$ be $N$ nonempty closed convex subsets of $H_1$ and let $Q_1, Q_2, \ldots, Q_M$ be $M$ nonempty closed convex subsets of $H_2$. Let $A : H_1 \to H_2$ be a bounded linear operator. Assume that the multiple-sets split feasibility problem is consistent, i.e., it is solvable. We are now devoted to solving the constrained multiple-sets split feasibility problem (2.2).

For solving (2.2), we introduce the following iterative algorithm.

Algorithm 3.1 Let $f : H_1 \to H_1$ be a $\rho$-contraction. Let $B : H_1 \to H_1$ be a self-adjoint, strongly positive bounded linear operator with coefficient $\alpha > 0$. Let $\sigma$ and $\gamma$ be two constants such that $0 < \gamma < \frac{2}{L}$ and $0 < \sigma\rho < \alpha$. For an arbitrary initial point $x_0 \in H_1$, we define a sequence $\{x_n\}$ iteratively by

$x_{n+1} = P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big) P_{\Omega}\big(\xi_n \sigma f + (I - \xi_n B)\big)x_n$,
(3.1)

for all $n \ge 0$, where $\{\xi_n\}$ is a real sequence in $(0,1)$.
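To illustrate how iteration (3.1) runs in practice, here is a small sketch in $\mathbb{R}^2$ (a hypothetical instance invented for illustration: box-shaped $C_1$, $Q_1$, $\Omega$, with $A = I$, $f(x) = x/2$ so $\rho = \frac12$, $B = I$ so $\alpha = 1$, and $\sigma = 1$, giving $\sigma\rho < \alpha$):

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

# Hypothetical instance: N = M = 1, A = I on R^2,
# C_1 = [0,1]^2, Q_1 = [0,3]^2, Omega = [-1,1]^2.
A = np.eye(2)
loC, hiC = np.zeros(2), np.ones(2)
loQ, hiQ = np.zeros(2), np.full(2, 3.0)
loO, hiO = -np.ones(2), np.ones(2)
alpha1, beta1, sigma = 1.0, 1.0, 1.0
L = alpha1 + beta1 * np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad g
gamma = 1.0 / L                                 # 0 < gamma < 2/L

def grad_g(x):
    """Gradient of the proximity function g for this instance."""
    return alpha1 * (x - project_box(x, loC, hiC)) \
        + beta1 * (A.T @ (A @ x - project_box(A @ x, loQ, hiQ)))

x = np.array([5.0, -4.0])
for n in range(2000):
    xi = 1.0 / (n + 2)                          # xi_n -> 0 and sum xi_n = infinity
    y = project_box(xi * sigma * (x / 2) + (1.0 - xi) * x, loO, hiO)  # viscosity step
    x = project_box(y - gamma * grad_g(y), loO, hiO)                  # gradient-projection step

# for this instance the limit prescribed by the VI is the minimum-norm solution 0
assert np.allclose(grad_g(x), 0.0)
assert np.linalg.norm(x) < 0.1
```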

Fact 3.2 The mapping $I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)$ is $\frac{\gamma L}{2}$-averaged.

In order to check Fact 3.2, we need the following lemmas.

Lemma 3.3 (Baillon-Haddad) [28]

If $h : H \to \mathbb{R}$ has an $L$-Lipschitz continuous gradient $\nabla h$, then $\nabla h$ is $\frac{1}{L}$-ism.

Lemma 3.4 Let $T : H \to H$ be given and let $V = I - T$ be the complement of $T$. Let also $S : H \to H$ be given.

  (i) $T$ is nonexpansive if and only if $V$ is $\frac{1}{2}$-inverse strongly monotone (in short, $\frac{1}{2}$-ism).

  (ii) If $S$ is $\nu$-ism, then for $\gamma > 0$, $\gamma S$ is $\frac{\nu}{\gamma}$-ism.

  (iii) $S$ is averaged if and only if the complement $I - S$ is $\nu$-ism for some $\nu > \frac{1}{2}$.

Lemma 3.5 Let $S, T, V : H \to H$ be given operators.

  (i) If $S = (1-\alpha)T + \alpha V$ for some $\alpha \in (0,1)$, $T$ is averaged and $V$ is nonexpansive, then $S$ is averaged.

  (ii) $S$ is firmly nonexpansive if and only if the complement $I - S$ is firmly nonexpansive. If $S$ is firmly nonexpansive, then $S$ is averaged.

  (iii) If $S = (1-\alpha)T + \alpha V$ for some $\alpha \in (0,1)$, $T$ is firmly nonexpansive and $V$ is nonexpansive, then $S$ is averaged.

  (iv) If $S$ and $T$ are both averaged, then the product (composite) $ST$ is averaged.

Proof of Fact 3.2 Since the gradient $\nabla g(x) = \sum_{i=1}^{N} \alpha_i (I - P_{C_i})x + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})Ax$ is Lipschitz continuous with constant $L = \sum_{i=1}^{N} \alpha_i + \sum_{j=1}^{M} \beta_j \|A\|^2$, Lemma 3.3 implies that $\nabla g$ is $\frac{1}{L}$-ism, and hence, by Lemma 3.4(ii), $\gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)$ is $\frac{1}{\gamma L}$-ism. Again, from Lemma 3.4(iii), we deduce that $I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)$ is $\frac{\gamma L}{2}$-averaged. □

Now, we prove the convergence of the sequence { x n }.

Theorem 3.6 Suppose that $S \ne \emptyset$. Assume that the sequence $\{\xi_n\}$ satisfies the control conditions:

  (i) $\lim_{n\to\infty} \xi_n = 0$ and

  (ii) $\sum_{n=0}^{\infty} \xi_n = \infty$.

Then the sequence $\{x_n\}$ generated by (3.1) converges strongly to a solution $x^*$ of (2.2), where $x^*$ also solves the following variational inequality (VI):

$x^* \in S$ such that $\langle \sigma f(x^*) - Bx^*, \tilde{x} - x^* \rangle \le 0$ for all $\tilde{x} \in S$,
(3.2)

where $S$ is the set of solutions of (2.2).

Proof Let $x^* \in S$. Since $B$ is a strongly positive bounded linear operator with coefficient $\alpha > 0$, we have $\|I - \xi_n B\| \le 1 - \alpha\xi_n$ (without loss of generality, we may assume $\xi_n \le \|B\|^{-1}$). Thus, by (3.1), we have

$\begin{aligned} \|x_{n+1} - x^*\| &= \Big\| P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big) P_{\Omega}\big(\xi_n \sigma f + (I - \xi_n B)\big)x_n - x^* \Big\| \\ &\le \|\xi_n \sigma f(x_n) + (I - \xi_n B)x_n - x^*\| \\ &\le \xi_n \sigma \|f(x_n) - f(x^*)\| + \|I - \xi_n B\| \|x_n - x^*\| + \xi_n \|\sigma f(x^*) - Bx^*\| \\ &\le \xi_n \sigma\rho \|x_n - x^*\| + (1 - \xi_n\alpha)\|x_n - x^*\| + \xi_n \|\sigma f(x^*) - Bx^*\| \\ &= \big[1 - (\alpha - \sigma\rho)\xi_n\big]\|x_n - x^*\| + (\alpha - \sigma\rho)\xi_n \frac{\|\sigma f(x^*) - Bx^*\|}{\alpha - \sigma\rho}. \end{aligned}$

An induction yields

$\|x_{n+1} - x^*\| \le \max\Big\{\|x_n - x^*\|, \frac{\|\sigma f(x^*) - Bx^*\|}{\alpha - \sigma\rho}\Big\} \le \max\Big\{\|x_0 - x^*\|, \frac{\|\sigma f(x^*) - Bx^*\|}{\alpha - \sigma\rho}\Big\}$.

Hence, { x n } is bounded.

It is well known that the metric projection $P_{\Omega}$ is firmly nonexpansive, hence averaged. By Fact 3.2, $I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)$ is $\frac{\gamma L}{2}$-averaged. From Lemma 3.5, the composite of three averaged mappings is averaged. So, $P_{\Omega}(I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A))P_{\Omega}$ is an averaged mapping. Thus, there must exist a constant $\delta \in (0,1)$ such that

$P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big)P_{\Omega} = (1-\delta)I + \delta U$,

where $U$ is a nonexpansive mapping. Set $y_n = \xi_n \sigma f(x_n) + (I - \xi_n B)x_n$ for all $n \ge 0$. Then we have

$\begin{aligned} x_{n+1} &= \big((1-\delta)I + \delta U\big)\big(\xi_n \sigma f(x_n) + (I - \xi_n B)x_n\big) \\ &= (1-\delta)x_n + \xi_n(1-\delta)\big(\sigma f(x_n) - Bx_n\big) + \delta U y_n \\ &= (1-\delta)x_n + \delta\Big(\frac{(1-\delta)\xi_n}{\delta}\big(\sigma f(x_n) - Bx_n\big) + U y_n\Big) \\ &= (1-\delta)x_n + \delta z_n, \end{aligned}$

where

$z_n = \frac{(1-\delta)\xi_n}{\delta}\big(\sigma f(x_n) - Bx_n\big) + U y_n$.

By virtue of $\xi_n \to 0$ (as $n \to \infty$) and the boundedness of the sequences $\{f(x_n)\}$ and $\{Bx_n\}$, we first observe that

$\lim_{n\to\infty} \|y_n - x_n\| = \lim_{n\to\infty} \xi_n \|\sigma f(x_n) - Bx_n\| = 0$

and

$\lim_{n\to\infty} \|z_n - U y_n\| = \lim_{n\to\infty} \frac{(1-\delta)\xi_n}{\delta} \|\sigma f(x_n) - Bx_n\| = 0$.

Next, we estimate $\|z_{n+1} - z_n\|$. Note that

$z_{n+1} - z_n = \frac{(1-\delta)\xi_{n+1}}{\delta}\big(\sigma f(x_{n+1}) - Bx_{n+1}\big) + U y_{n+1} - \frac{(1-\delta)\xi_n}{\delta}\big(\sigma f(x_n) - Bx_n\big) - U y_n$.

It follows that

$\begin{aligned} \|z_{n+1} - z_n\| &\le \frac{1-\delta}{\delta}\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big) + \|U y_{n+1} - U y_n\| \\ &\le \frac{1-\delta}{\delta}\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big) + \|y_{n+1} - y_n\|. \end{aligned}$

Since $y_{n+1} - y_n = \xi_{n+1}\sigma f(x_{n+1}) + (I - \xi_{n+1}B)x_{n+1} - \xi_n\sigma f(x_n) - (I - \xi_n B)x_n$, we get

$\begin{aligned} \|z_{n+1} - z_n\| &\le \big\|\xi_{n+1}\sigma f(x_{n+1}) + (I - \xi_{n+1}B)x_{n+1} - \xi_n\sigma f(x_n) - (I - \xi_n B)x_n\big\| \\ &\quad + \frac{1-\delta}{\delta}\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big) \\ &\le \|x_{n+1} - x_n\| + \xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\| \\ &\quad + \frac{1-\delta}{\delta}\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big). \end{aligned}$

It follows that

$\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \le \xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\| + \frac{1-\delta}{\delta}\big(\xi_{n+1}\|\sigma f(x_{n+1}) - Bx_{n+1}\| + \xi_n\|\sigma f(x_n) - Bx_n\|\big)$.

Since $\lim_{n\to\infty}\xi_n = 0$ and the sequences $\{f(x_n)\}$ and $\{Bx_n\}$ are bounded, we deduce

$\limsup_{n\to\infty}\big(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\big) \le 0$.

By Lemma 2.1, we get

$\lim_{n\to\infty}\|z_n - x_n\| = 0$.

Therefore, since $\|U x_n - x_n\| \le \|U x_n - U y_n\| + \|U y_n - z_n\| + \|z_n - x_n\| \le \|x_n - y_n\| + \|U y_n - z_n\| + \|z_n - x_n\|$, we obtain $\lim_{n\to\infty}\|U x_n - x_n\| = 0$, and hence

$\lim_{n\to\infty}\Big\|P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big)P_{\Omega}(x_n) - x_n\Big\| = 0$.

By the definition of the sequence $\{x_n\}$, we know that $x_n \in \Omega$. Hence, $P_{\Omega}(x_n) = x_n$. So,

$\lim_{n\to\infty}\Big\|P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big)x_n - x_n\Big\| = 0$.

Next, we prove

$\limsup_{n\to\infty}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \le 0$.

In order to get this inequality, we need to prove that

$\limsup_{n\to\infty}\langle \sigma f(x^*) - Bx^*, x_n - x^* \rangle \le 0$,

where $x^*$ is the unique solution of VI (3.2). For this purpose, we choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

$\limsup_{n\to\infty}\langle \sigma f(x^*) - Bx^*, x_n - x^* \rangle = \lim_{i\to\infty}\langle \sigma f(x^*) - Bx^*, x_{n_i} - x^* \rangle$.

Since $\{x_{n_i}\}$ is bounded, there exists a subsequence of $\{x_{n_i}\}$ which converges weakly to a point $\tilde{x}$. Without loss of generality, we may assume that $x_{n_i} \rightharpoonup \tilde{x}$. Since $P_{\Omega}(I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A))$ is nonexpansive, by Lemma 2.2 we have $\tilde{x} \in \mathrm{Fix}(P_{\Omega}(I - \gamma(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A)))$, that is, $\tilde{x} \in S$. Therefore,

$\limsup_{n\to\infty}\langle \sigma f(x^*) - Bx^*, x_n - x^* \rangle = \lim_{i\to\infty}\langle \sigma f(x^*) - Bx^*, x_{n_i} - x^* \rangle = \langle \sigma f(x^*) - Bx^*, \tilde{x} - x^* \rangle \le 0$.

Since $\|x_n - P_{\Omega}(y_n)\| = \|P_{\Omega}(x_n) - P_{\Omega}(y_n)\| \le \|x_n - y_n\| \to 0$, we obtain

$\limsup_{n\to\infty}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \le 0$.

Note that

$\|P_{\Omega}(y_n) - x^*\|^2 = \langle P_{\Omega}(y_n) - y_n, P_{\Omega}(y_n) - x^* \rangle + \langle y_n - x^*, P_{\Omega}(y_n) - x^* \rangle$.

From the characterizing property of the metric projection $P_{\Omega}$, we have $\langle P_{\Omega}(y_n) - y_n, P_{\Omega}(y_n) - x^* \rangle \le 0$. Hence,

$\begin{aligned} \|P_{\Omega}(y_n) - x^*\|^2 &\le \langle y_n - x^*, P_{\Omega}(y_n) - x^* \rangle \\ &= \big\langle \xi_n\sigma\big(f(x_n) - f(x^*)\big) + (I - \xi_n B)(x_n - x^*), P_{\Omega}(y_n) - x^* \big\rangle + \xi_n\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \\ &\le \big(\xi_n\sigma\|f(x_n) - f(x^*)\| + \|I - \xi_n B\|\|x_n - x^*\|\big)\|P_{\Omega}(y_n) - x^*\| + \xi_n\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \\ &\le \big(1 - \xi_n(\alpha - \sigma\rho)\big)\|x_n - x^*\|\|P_{\Omega}(y_n) - x^*\| + \xi_n\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \\ &\le \frac{1 - \xi_n(\alpha - \sigma\rho)}{2}\|x_n - x^*\|^2 + \frac{1}{2}\|P_{\Omega}(y_n) - x^*\|^2 + \xi_n\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle. \end{aligned}$

It follows that

$\|P_{\Omega}(y_n) - x^*\|^2 \le \big[1 - (\alpha - \sigma\rho)\xi_n\big]\|x_n - x^*\|^2 + 2\xi_n\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle$.

Finally, we show that $x_n \to x^*$. From (3.1), we have

$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \Big\|P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big)P_{\Omega}(y_n) - x^*\Big\|^2 \\ &\le \|P_{\Omega}(y_n) - x^*\|^2 \\ &\le \big[1 - (\alpha - \sigma\rho)\xi_n\big]\|x_n - x^*\|^2 + (\alpha - \sigma\rho)\xi_n \cdot \frac{2}{\alpha - \sigma\rho}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \\ &= (1 - \gamma_n)\|x_n - x^*\|^2 + \delta_n, \end{aligned}$

where $\gamma_n = (\alpha - \sigma\rho)\xi_n$ and $\delta_n = (\alpha - \sigma\rho)\xi_n \cdot \frac{2}{\alpha - \sigma\rho}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle$. Since $\sum_{n=1}^{\infty}\gamma_n = \infty$ and $\limsup_{n\to\infty}\frac{\delta_n}{\gamma_n} = \limsup_{n\to\infty}\frac{2}{\alpha - \sigma\rho}\langle \sigma f(x^*) - Bx^*, P_{\Omega}(y_n) - x^* \rangle \le 0$, all conditions of Lemma 2.3 are satisfied. Therefore, we immediately deduce that $x_n \to x^*$. This completes the proof. □

From (3.1) and Theorem 3.6, we can easily deduce the following results.

Algorithm 3.7 For an arbitrary initial point $x_0 \in H_1$, we define a sequence $\{x_n\}$ iteratively by

$x_{n+1} = P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big) P_{\Omega}\big(\xi_n \sigma f(x_n) + (1 - \xi_n)x_n\big)$,
(3.3)

for all $n \ge 0$, where $\{\xi_n\}$ is a real sequence in $(0,1)$.

Corollary 3.8 Suppose that $S \ne \emptyset$. Assume that the sequence $\{\xi_n\}$ satisfies the conditions

  (i) $\lim_{n\to\infty}\xi_n = 0$ and

  (ii) $\sum_{n=0}^{\infty}\xi_n = \infty$.

Then the sequence $\{x_n\}$ generated by (3.3) converges strongly to a point $x^*$, which solves the following variational inequality:

$x^* \in S$ such that $\langle \sigma f(x^*) - x^*, \tilde{x} - x^* \rangle \le 0$ for all $\tilde{x} \in S$.

Algorithm 3.9 For an arbitrary initial point $x_0 \in H_1$, we define a sequence $\{x_n\}$ iteratively by

$x_{n+1} = P_{\Omega}\Big(I - \gamma\Big(\sum_{i=1}^{N} \alpha_i (I - P_{C_i}) + \sum_{j=1}^{M} \beta_j A^*(I - P_{Q_j})A\Big)\Big) P_{\Omega}\big((1 - \xi_n)x_n\big)$,
(3.4)

for all $n \ge 0$, where $\{\xi_n\}$ is a real sequence in $(0,1)$.

Corollary 3.10 Suppose that $S \ne \emptyset$. Assume that the sequence $\{\xi_n\}$ satisfies the conditions

  (i) $\lim_{n\to\infty}\xi_n = 0$ and

  (ii) $\sum_{n=0}^{\infty}\xi_n = \infty$.

Then the sequence $\{x_n\}$ generated by (3.4) converges strongly to a point $x^* \in S$ which is the minimum-norm element of $S$.
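The minimum-norm behavior of Algorithm 3.9 can be observed numerically. The sketch below (a hypothetical instance with box-shaped sets, invented for illustration) has solution set $S = [1,2]^2$, whose minimum-norm element is $(1,1)$:

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

# Hypothetical instance: N = M = 1, A = I on R^2,
# C_1 = [1,2]^2, Q_1 = [0,5]^2, Omega = [-3,3]^2, alpha_1 = beta_1 = 1.
# The solution set is S = [1,2]^2; its minimum-norm element is (1,1).
A = np.eye(2)
loC, hiC = np.ones(2), np.full(2, 2.0)
loQ, hiQ = np.zeros(2), np.full(2, 5.0)
loO, hiO = np.full(2, -3.0), np.full(2, 3.0)
L = 1.0 + np.linalg.norm(A, 2) ** 2    # Lipschitz constant of grad g
gamma = 1.0 / L                        # 0 < gamma < 2/L

def grad_g(x):
    """Gradient of the proximity function g for this instance."""
    return (x - project_box(x, loC, hiC)) \
        + A.T @ (A @ x - project_box(A @ x, loQ, hiQ))

x = np.array([3.0, 3.0])
for n in range(4000):
    xi = 1.0 / (n + 2)
    y = project_box((1.0 - xi) * x, loO, hiO)   # the shrinking step of (3.4)
    x = project_box(y - gamma * grad_g(y), loO, hiO)

assert np.allclose(x, [1.0, 1.0], atol=1e-2)    # converges to the minimum-norm solution
```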

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692

  2. Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310

  3. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017

  4. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256. 10.1016/j.jmaa.2006.05.010

  5. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001

  6. Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007

  7. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018

  8. Yao Y, Chen R, Marino G, Liou YC: Applications of fixed point and optimization methods to the multiple-sets split feasibility problem. J. Appl. Math. 2012., 2012: Article ID 927530

  9. Yao Y, Wu J, Liou YC: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012., 2012: Article ID 140679. doi:10.1155/2012/140679

  10. Yao Y, Yang PX, Kang SM: Composite projection algorithms for the split feasibility problem. Math. Comput. Model. 2013, 57: 693–700. 10.1016/j.mcm.2012.07.026

11. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

  12. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. 10.1088/0266-5611/21/5/009

  13. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014

  14. Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791–1799. 10.1088/0266-5611/21/5/017

  15. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010. doi:10.1155/2010/102085

  16. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011., 27: Article ID 015007

  17. Yu X, Shahzad N, Yao Y: Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 2012, 6: 1447–1462. 10.1007/s11590-011-0340-0

  18. Dang Y, Gao Y, Han Y: A perturbed projection algorithm with inertial technique for split feasibility problem. J. Appl. Math. 2012., 2012: Article ID 207323

  19. Dang Y, Gao Y: An extrapolated iterative algorithm for multiple-set split feasibility problem. Abstr. Appl. Anal. 2012., 2012: Article ID 149508

  20. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011., 27(1): Article ID 015007

  21. Yao Y, Kim TH, Chebbi S, Xu HK: A modified extragradient method for the split feasibility and fixed point problems. J. Nonlinear Convex Anal. 2012, 13(3):383–396.

  22. Wang Z, Yang Q, Yang Y: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2010. doi:10.1016/j.amc.2010.11.058

  23. Ceng LC, Yao JC: Relaxed and hybrid viscosity methods for general system of variational inequalities with split feasibility problem constraint. Fixed Point Theory Appl. 2013., 2013: Article ID 43

  24. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal., Theory Methods Appl. 2012, 75: 2116–2125. 10.1016/j.na.2011.10.012

  25. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005: 103–123.

26. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.

  27. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

  28. Baillon JB, Haddad G: Quelques proprietes des operateurs anglebornes et n -cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664

Acknowledgements

Jinwei Shi was supported in part by the scientific research fund of the Educational Commission of Hebei Province of China (No. 936101101) and the National Natural Science Foundation of China (No. 51077053). Yeong-Cheng Liou was supported in part by NSC 101-2628-E-230-001-MY3 and NSC 101-2622-E-230-005-CC3.

Author information

Corresponding author

Correspondence to Jinwei Shi.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Zheng, Y., Shi, J. & Liou, YC. The constrained multiple-sets split feasibility problem and its projection algorithms. J Inequal Appl 2013, 272 (2013). https://doi.org/10.1186/1029-242X-2013-272

Keywords

  • Nonexpansive Mapping
  • Real Hilbert Space
  • Common Fixed Point
  • Nonempty Closed Convex Subset
  • Projection Algorithm