
Preconditioning methods for solving a general split feasibility problem

Abstract

We introduce and study a new general split feasibility problem (GSFP) in a Hilbert space, which generalizes the split feasibility problem (SFP) by composing it with a nonlinear continuous operator. To increase the efficiency of the CQ algorithm, we apply preconditioning techniques and present two general preconditioning CQ algorithms for solving the GSFP. We also propose a new inexact method to approximate the preconditioner. Convergence theorems are established for projections taken with respect to special norms. Numerical results illustrate the efficiency of the proposed methods.

MSC: 46E20, 47J20, 47J25, 47L50.

1 Introduction

Since preconditioning methods can improve the condition number of an ill-posed system matrix, they can also improve the convergence rate of an iterative algorithm [1]. In [2, 3], a preconditioning method is applied to modify the projected Landweber algorithm for solving a linear feasibility problem (LFP). The modified algorithm is

$$x_{n+1} = P_C\bigl[x_n - \tau D A^{*}(Ax_n - b)\bigr], \quad n \ge 0,$$

where $\tau \in (0, 2/\|DA^{*}A\|)$, $A: X \to Y$ is a linear continuous operator, $\|\cdot\|$ denotes the 2-norm, $X$ and $Y$ are Hilbert spaces, and $b \in Y$ is the datum of the problem, corrupted by noise or experimental errors.
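As an illustration, the following minimal numpy sketch runs this preconditioned projected Landweber iteration, taking C to be the nonnegative orthant and D a simple Jacobi-type diagonal preconditioner; these concrete choices are ours, not from [2, 3].

```python
import numpy as np

def projected_landweber(A, b, D, tau, n_iter=500):
    """Preconditioned projected Landweber: x_{n+1} = P_C[x_n - tau*D*A^T(A x_n - b)].

    P_C is taken here as the projection onto the nonnegative orthant
    (an illustrative choice of C); D is symmetric positive definite.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # A^T(A x_n - b)
        x = np.maximum(x - tau * D @ grad, 0.0)  # P_C for C = nonnegative orthant
    return x

A = np.random.rand(20, 10)
b = np.random.rand(20)
D = np.diag(1.0 / np.diag(A.T @ A))              # Jacobi-type diagonal preconditioner
tau = 1.0 / np.linalg.norm(D @ A.T @ A, 2)       # tau in (0, 2/||D A^T A||)
x = projected_landweber(A, b, D, tau)
```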

In the nonlinear setting, Auslender [4] and Dafermos [5] proposed an algorithm to solve variational inequalities (VI),

$$x_{n+1} = P_S\bigl[x_n - \tau_n G^{-1} F(x_n)\bigr], \quad n \ge 0, \tag{1.1}$$

where $P_S$ is the projection operator onto S with respect to the norm $\|\cdot\|_G$. Bertsekas and Gafni [6] and Marcotte and Wu [7] improved it with variable symmetric positive definite matrices $G_n$; Fukushima [8] modified it by a relaxed projection method using half-spaces; and in [9], Yang established the convergence of Auslender's algorithm under the weak co-coercivity of F.

Further, the general variational inequality problem (GVIP) has been investigated by many authors (see [10–13]). It is to find $u^* \in \mathbb{R}^n$ such that $g(u^*) \in K$ and

$$\bigl\langle F(u^*),\, g(u) - g(u^*) \bigr\rangle \ge 0, \quad \forall g(u) \in K,$$

where K is a nonempty closed convex set in $\mathbb{R}^n$ and $F, g: \mathbb{R}^n \to \mathbb{R}^n$ are nonlinear operators. In [12], Santos and Scheimberg extended (1.1) and applied it to solve the GVIP.

However, although a general split feasibility problem (GSFP) is equivalent to a GVIP, preconditioning methods for solving the GSFP have not yet been studied. By introducing a convex minimization problem, the split feasibility problem (SFP) can be shown to be equivalent to a variational inequality problem (VIP) involving a Lipschitz continuous and inverse strongly monotone (ism) operator; see [14–22]. In the same way, in this paper we show that a new GSFP is equivalent to a GVI involving a Lipschitz continuous and co-coercive operator [9, 17–19].

In another direction, Eslamian and Latif [23] considered a general split feasibility problem in infinite-dimensional real Hilbert spaces: find $x^*$ such that

$$x^* \in \bigcap_{i=1}^{\infty} C_i, \qquad A x^* \in \bigcap_{j=1}^{\infty} Q_j,$$

where $A: H_1 \to H_2$ is a bounded linear operator, and $\{C_i\}_{i=1}^{\infty}$ and $\{Q_j\}_{j=1}^{\infty}$ are families of nonempty closed convex subsets of $H_1$ and $H_2$, respectively.

Let C and Q be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. We consider a general split feasibility problem which differs from the one in [23]. Our GSFP is to find

$$x^* \in H_1,\ g(x^*) \in C \quad\text{such that}\quad A g(x^*) \in Q, \tag{1.2}$$

where $A: H_1 \to H_2$ is a bounded linear operator and $g: H_1 \to C$ is a continuous operator. The SFP in [24] and the GSFP in [23] are particular cases of GSFP (1.2). The problem has applications in many special fields such as signal decryption, digital signal demodulation, and noise processing. In order to solve GSFP (1.2), two preconditioning algorithms are developed in this paper, following the iterative scheme

$$g(x_{n+1}) = P_C\bigl[ g(x_n) - \gamma D A^T (I - P_Q) A g(x_n) \bigr], \quad n \ge 0,$$

where the projections onto the two general constraint sets C and Q are taken with respect to norms induced by symmetric positive definite matrices. Define the solution set of (1.2) as $\Gamma = \{x^* \in H_1 \mid g(x^*) \in C,\ Ag(x^*) \in Q\}$. When Γ is nonempty, by virtue of the related G-co-coercive operator, we can establish the convergence of the proposed algorithms.

The paper is organized as follows. Section 2 presents two useful propositions. In Section 3, we define the algorithms with fixed preconditioner, variable preconditioner, relaxed projection and preconditioner approximation and analyze the convergence. Numerical results are reported in Section 4. Finally, Section 5 gives some concluding remarks.

2 Preliminaries

In what follows, we state some concepts and propositions.

The SFP is to find a point $x^* \in C$ such that $A x^* \in Q$, where $A: H_1 \to H_2$ is a bounded linear operator [24].

Let G be a symmetric positive definite matrix, and set $D = G^{-1}$. Then the norm $\|\cdot\|_G$ is defined by $\|x\|_G^2 = \langle x, Gx \rangle$ for $x \in H$. We denote by $P_C$ the projection operator onto C with respect to the norm $\|\cdot\|_G$ [6], i.e.,

$$P_C(x) = \arg\min_{y \in C} \|x - y\|_G.$$

Let $\lambda_{\min}$ and $\lambda_{\max}$ be the smallest and largest eigenvalues of G, respectively. Then, in terms of the 2-norm [18, 19], we have

$$\lambda_{\min} \|x\|^2 \le \|x\|_G^2 \le \lambda_{\max} \|x\|^2, \quad \forall x \in H. \tag{2.1}$$
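As a quick numerical check of the G-norm definition and the bounds in (2.1), consider the following sketch with an arbitrary positive definite G of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
G = B @ B.T + 5.0 * np.eye(5)       # a symmetric positive definite G
x = rng.standard_normal(5)

norm_G_sq = x @ G @ x               # ||x||_G^2 = <x, Gx>
lam = np.linalg.eigvalsh(G)         # eigenvalues in ascending order
# inequality (2.1): lambda_min ||x||^2 <= ||x||_G^2 <= lambda_max ||x||^2
assert lam[0] * (x @ x) <= norm_G_sq <= lam[-1] * (x @ x)
```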

Proposition 2.1 [7, 10]

Let C be a nonempty closed convex subset of H. For $x, y \in H$ and $z \in C$, the G-projection operator onto C has the following properties:

(i) $\langle G(x - P_C x),\, z - P_C(x) \rangle \le 0$;

(ii) $\|x \pm y\|_G^2 = \|x\|_G^2 \pm 2\langle x, Gy \rangle + \|y\|_G^2$;

(iii) $\|P_C(x) - P_C(y)\|_G^2 \le \|x - y\|_G^2 - \|(P_C(x) - x) - (P_C(y) - y)\|_G^2$.

Let $\tilde D$ be a symmetric positive definite matrix with $A^T \tilde D = D A^T$. Then the norm $\|\cdot\|_{\tilde D}$ is defined by $\|y\|_{\tilde D}^2 = \langle y, \tilde D y \rangle$ for $y \in H$. We denote by $P_Q$ the projection operator onto Q with respect to the norm $\|\cdot\|_{\tilde D}$. As with the SFP, even when GSFP (1.2) has no solution (refer to [2, 14, 15]), we can define

$$f^g(x) = \frac{1}{2} \bigl\| (I - P_Q) A g(x) \bigr\|^2$$

and

$$f_{\tilde D}^g(x) = \frac{1}{2} \bigl\| A g(x) - P_Q A g(x) \bigr\|_{\tilde D}^2 = \frac{1}{2} \bigl\langle \tilde D \bigl( A g(x) - P_Q A g(x) \bigr),\, A g(x) - P_Q A g(x) \bigr\rangle.$$

The function $f_{\tilde D}^g$ is also convex and continuously differentiable on H. Its gradient operator is

$$\nabla f_D^g(x) = D A^T (I - P_Q) A g(x).$$

When $D = I$, we write

$$\nabla f^g(x) = A^T (I - P_Q) A g(x);$$

the operator $\nabla f^g$ is also Lipschitz continuous.

Proposition 2.2 Consider the constrained minimization problem

$$\min \bigl\{ f_{\tilde D}^g(x) \mid x \in H_1 \text{ s.t. } g(x) \in C \bigr\}.$$

Its stationary point $x^*$ satisfies

$$x^* \in H_1,\ g(x^*) \in C \quad\text{such that}\quad \bigl\langle \nabla f_D^g(x^*),\, g(x) - g(x^*) \bigr\rangle \ge 0, \quad \forall g(x) \in C,$$

which is a general variational inequality involving a Lipschitz continuous and G-co-coercive operator.

Proof For $x, y \in H$, from (2.1) and Lemma 8.1 in [15], we have

$$\begin{aligned} \bigl\| \nabla f_D^g(x) - \nabla f_D^g(y) \bigr\|_G^2 &\le \lambda_{\max}(G) \bigl\| \nabla f_D^g(x) - \nabla f_D^g(y) \bigr\|^2 \\ &\le \frac{\lambda_{\max}^2(D)}{\lambda_{\min}(D)} \bigl\| \nabla f^g(x) - \nabla f^g(y) \bigr\|^2 \\ &\le \frac{\lambda_{\max}^2(D)}{\lambda_{\min}(D)} L^2 \bigl\| g(x) - g(y) \bigr\|^2 \\ &\le \frac{\lambda_{\max}^3(D)}{\lambda_{\min}(D)} L^2 \bigl\| g(x) - g(y) \bigr\|_G^2, \end{aligned}$$

where L is the largest eigenvalue of $A^T A$; therefore, the operator $\nabla f_D^g$ is Lipschitz continuous. Moreover,

$$\begin{aligned} \bigl\langle \nabla f_D^g(x) - \nabla f_D^g(y),\, g(x) - g(y) \bigr\rangle &\ge \lambda_{\min}(D) \bigl\langle \nabla f^g(x) - \nabla f^g(y),\, g(x) - g(y) \bigr\rangle \\ &\ge \frac{\lambda_{\min}(D)}{L} \bigl\| \nabla f^g(x) - \nabla f^g(y) \bigr\|^2 \\ &\ge \frac{\lambda_{\min}(D)}{L \lambda_{\max}(D)} \bigl\| \nabla f^g(x) - \nabla f^g(y) \bigr\|_D^2 \\ &= \frac{\lambda_{\min}(D)}{L \lambda_{\max}(D)} \bigl\langle D \bigl( \nabla f^g(x) - \nabla f^g(y) \bigr),\, G D \bigl( \nabla f^g(x) - \nabla f^g(y) \bigr) \bigr\rangle \\ &= \frac{\lambda_{\min}(D)}{L \lambda_{\max}(D)} \bigl\| \nabla f_D^g(x) - \nabla f_D^g(y) \bigr\|_G^2. \end{aligned}$$

Thus, the operator $\nabla f_D^g$ is G-co-coercive. □

3 Main results

In this section, we propose several modified CQ algorithms with preconditioning techniques and prove their convergence.

3.1 General preconditioning CQ algorithm

In this part, we give our first algorithm, with fixed stepsize and fixed preconditioner, for solving GSFP (1.2). The algorithm is as follows.

Algorithm 3.1 Choose $x_0 \in H_1$ such that $g(x_0) \in C$. Given $x_n \in H_1$ with $g(x_n) \in C$, compute $x_{n+1}$ such that

$$g(x_{n+1}) = P_C\bigl[ g(x_n) - \gamma \nabla f_D^g(x_n) \bigr], \quad n \ge 0, \tag{3.1}$$

where $\gamma \in (0, \frac{2}{L L_D})$, and L and $L_D$ are the largest eigenvalues of $A^T A$ and D, respectively.
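A minimal sketch of Algorithm 3.1 in finite dimensions might look as follows; it iterates directly on $u_n = g(x_n)$, and the caller supplies g and the two projections proj_C, proj_Q (which must be taken with respect to the G- and $\tilde D$-norms, as above). All names and the specific choice $\gamma = 1/(L L_D)$ are illustrative.

```python
import numpy as np

def gsfp_pcq(g, proj_C, proj_Q, A, D, x0, n_iter=1000, tol=1e-8):
    """Preconditioned CQ iteration (3.1):
    g(x_{n+1}) = P_C[g(x_n) - gamma * D A^T (I - P_Q) A g(x_n)]."""
    L = np.linalg.eigvalsh(A.T @ A).max()     # largest eigenvalue of A^T A
    L_D = np.linalg.eigvalsh(D).max()         # largest eigenvalue of D
    gamma = 1.0 / (L * L_D)                   # any value in (0, 2/(L*L_D)) works
    u = g(x0)                                 # u_n = g(x_n)
    for _ in range(n_iter):
        Au = A @ u
        grad = D @ (A.T @ (Au - proj_Q(Au)))  # grad f_D^g at the current point
        u_next = proj_C(u - gamma * grad)
        if np.linalg.norm(u_next - u) <= tol:
            return u_next
        u = u_next
    return u                                  # approximates g(x*)
```

When g is invertible, $x_n$ itself can be recovered as $g^{-1}(u_n)$, which is where the continuity of $g^{-1}$ in Theorem 3.1 below comes in.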

Now we establish the weak convergence of Algorithm 3.1.

Theorem 3.1 Suppose that the operators $g: H_1 \to C$ and $g^{-1}: C \to H_1$ are continuous. If $\Gamma \ne \emptyset$, then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges to a solution of GSFP (1.2).

Proof Firstly, for $x^* \in \Gamma$, we have

$$g(x^*) = P_C\bigl[ g(x^*) - \gamma \nabla f_D^g(x^*) \bigr].$$

From (3.1), properties (iii) and (ii), and the definition of ism, we have

$$\begin{aligned} \bigl\| g(x_{n+1}) - g(x^*) \bigr\|_G^2 &= \bigl\| P_C\bigl[ g(x_n) - \gamma \nabla f_D^g(x_n) \bigr] - P_C\bigl[ g(x^*) - \gamma \nabla f_D^g(x^*) \bigr] \bigr\|_G^2 \\ &\le \bigl\| g(x_n) - g(x^*) - \gamma \bigl[ \nabla f_D^g(x_n) - \nabla f_D^g(x^*) \bigr] \bigr\|_G^2 \\ &= \bigl\| g(x_n) - g(x^*) \bigr\|_G^2 - 2\gamma \bigl\langle g(x_n) - g(x^*),\, \nabla f^g(x_n) - \nabla f^g(x^*) \bigr\rangle \\ &\quad + \gamma^2 \bigl\langle \nabla f_D^g(x_n) - \nabla f_D^g(x^*),\, \nabla f^g(x_n) - \nabla f^g(x^*) \bigr\rangle \\ &\le \bigl\| g(x_n) - g(x^*) \bigr\|_G^2 - \Bigl( \frac{2\gamma}{L} - \gamma^2 L_D \Bigr) \bigl\| \nabla f^g(x_n) - \nabla f^g(x^*) \bigr\|^2, \tag{3.2} \end{aligned}$$

Since $\frac{2\gamma}{L} - \gamma^2 L_D > 0$, the sequence $\{\|g(x_n) - g(x^*)\|_G\}_{n \in \mathbb{N}}$ is monotonically decreasing and hence convergent; in particular, the sequence $\{g(x_n)\}_{n \in \mathbb{N}}$ is bounded. Consequently, from (3.2) we get

$$\lim_{n\to\infty} \bigl\| \nabla f^g(x_n) - \nabla f^g(x^*) \bigr\| = \lim_{n\to\infty} \bigl\| \nabla f^g(x_n) \bigr\| = 0. \tag{3.3}$$

Moreover, since each $g(x_n) \in C$, from (iii) and (2.1) we have

$$\bigl\| g(x_{n+1}) - g(x_n) \bigr\|_G \le \gamma \bigl\| \nabla f_D^g(x_n) \bigr\|_G \le \gamma L_D \bigl\| \nabla f^g(x_n) \bigr\|_G \le \gamma L_D \sqrt{\lambda_{\max}(G)}\, \bigl\| \nabla f^g(x_n) \bigr\|.$$

Then by virtue of (3.3) we have

$$\lim_{n\to\infty} \bigl\| g(x_{n+1}) - g(x_n) \bigr\|_G = 0. \tag{3.4}$$

Hence, there exists a subsequence $\{g(x_j)\}_{j \in \bar{\mathbb{N}}}$ of $\{g(x_n)\}_{n \in \mathbb{N}}$ such that

$$\lim_{j\to\infty} \bigl\| g(x_{j+1}) - g(x_j) \bigr\|_G = 0.$$

Thus, $\{g(x_j)\}_{j \in \bar{\mathbb{N}}}$ is also bounded.

Let $\bar x$ be an accumulation point of $\{x_n\}$; then there is a subsequence $\{x_j\}_{j \in \bar{\mathbb{N}}}$ of $\{x_n\}$ with $x_j \to \bar x$ as $j \to \infty$. By the continuity of g, the sequence $\{g(x_n)\}_{n \in \mathbb{N}}$ has the accumulation point $g(\bar x) \in C$, and $g(x_j) \to g(\bar x)$ as $j \to \infty$. Then, from (3.3) we obtain

$$\lim_{j\to\infty} \bigl\| \nabla f^g(x_j) \bigr\| = \bigl\| \nabla f^g(\bar x) \bigr\| = \bigl\| A^T A g(\bar x) - A^T P_Q A g(\bar x) \bigr\| = 0,$$

that is, $A g(\bar x) \in Q$.

We use $\bar x$ in place of $x^*$ in (3.2) and obtain that $\{\|g(x_n) - g(\bar x)\|_G\}$ is convergent. Since its subsequence $\{\|g(x_j) - g(\bar x)\|_G\}$ tends to 0, the whole sequence $\{g(x_n)\}_{n \in \mathbb{N}}$ converges to $g(\bar x)$. Since $g^{-1}$ is continuous, we finally have

$$\lim_{n\to\infty} x_n = \bar x \in \Gamma.$$

Therefore, $\bar x$ is a solution of GSFP (1.2). □

From Algorithm 3.1 and Theorem 3.1 we can deduce the following results easily.

Corollary 3.1 If g = I, then GSFP (1.2) reduces to the SFP, and Algorithm 3.1 reduces to a preconditioned CQ (PCQ) algorithm: for $x_0 \in H_1$,

$$x_{n+1} = P_C\bigl[ x_n - \gamma \nabla f_D(x_n) \bigr], \quad n \ge 0, \tag{3.5}$$

where $\gamma \in (0, \frac{2}{L L_D})$, and L and $L_D$ are the largest eigenvalues of $A^T A$ and D, respectively; $P_C$ and $P_Q$ are still the projection operators onto C and Q with respect to the norms $\|\cdot\|_G$ and $\|\cdot\|_{\tilde D}$, respectively.

Corollary 3.2 If g = I, D = I, and $\tilde D = I$, then the GSFP reduces to the SFP, and Algorithm 3.1 reduces to the CQ algorithm proposed in [25].

Corollary 3.3 If g = I, $\tilde D = I$, and $P_C$ and $P_Q$ are the projections onto C and Q with respect to the norm $\|\cdot\|$, then, setting $F(x_n) = D A^T (I - P_Q) A D x_n$, (3.5) transforms into the algorithm in [26]:

$$x_{n+1} = P_C\bigl[ x_n - \gamma F(x_n) \bigr], \quad n \ge 0,$$

where $\gamma \in (0, \frac{2}{L})$ and $L = \|D A^T\|^2$. The GSFP then reduces to the extended split feasibility problem (ESFP) in [26].

3.2 An algorithm with variable projection metric

The algorithms above can speed up the convergence of the CQ algorithm, but their stepsize and preconditioner are fixed. In this subsection, we extend the results in [6] and construct an iterative scheme whose stepsize and preconditioner $D_n$ vary from one iteration to the next. Playing a key role, $D_n$ may change arbitrarily or according to certain rules so as to drive the convergence process and achieve better results.

Let $D_n$ and $\tilde D_n$ be symmetric positive definite matrices for $n = 0, 1, 2, \ldots$, and set $G_n = D_n^{-1}$. Denote by $P_C$ and $P_Q$ the projections onto C and Q with respect to the norms $\|\cdot\|_{G_n}$ and $\|\cdot\|_{\tilde D_n}$, respectively. Let χ be a set of symmetric positive definite matrices. We have the following algorithm.

Algorithm 3.2 Choose $x_0 \in H_1$ such that $g(x_0) \in C$. Given $x_n \in H_1$ with $g(x_n) \in C$ and $D_n \in \chi$, compute $x_{n+1}$ such that

$$g(x_{n+1}) = P_C\bigl[ g(x_n) - \gamma_n D_n \nabla f^g(x_n) \bigr], \quad n \ge 0, \tag{3.6}$$

where $\gamma_n \in (0, \frac{2}{L M_D})$, L is the largest eigenvalue of $A^T A$, and $M_D$ is the minimum of the largest eigenvalues $L_{D_n}$ of the matrices $D_n$.

Remark 3.1 Define $d_n = \|g(x_{n+1}) - g(x_n)\|_{G_n}$. For the next iteration, $D_{n+1}$ is either chosen arbitrarily from χ or kept equal to $D_n$, depending on whether $d_n$ has decreased or not. More precisely, we define a scalar $\bar d_n$ with initial value $\bar d_0 = \infty$. Having chosen a scalar $\alpha \in (0, 1)$, at the n-th iteration $\bar d_{n+1}$ is calculated by

$$\bar d_{n+1} = \begin{cases} \alpha \bar d_n, & \text{if } d_n \le \bar d_n; \\ \bar d_n, & \text{if } d_n > \bar d_n, \end{cases}$$

then we select

$$D_{n+1} = \begin{cases} D \in \chi, & \text{if } \bar d_{n+1} < \bar d_n; \\ D_n, & \text{if } \bar d_{n+1} = \bar d_n. \end{cases}$$
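This selection rule is straightforward to code. A sketch follows, where draw_from_chi is a hypothetical user-supplied sampler over χ; in floating-point arithmetic $\alpha \cdot \infty$ remains infinite, so in practice $\bar d_0$ should be initialized to a large finite value rather than a literal infinity.

```python
def update_preconditioner(D_n, d_n, d_bar, alpha, draw_from_chi):
    """One step of the rule in Remark 3.1.

    d_n   : ||g(x_{n+1}) - g(x_n)||_{G_n} at the current iteration
    d_bar : the running scalar bar{d}_n (use a large finite initial value)
    alpha : scalar in (0, 1)
    Returns (D_{n+1}, bar{d}_{n+1}).
    """
    d_bar_next = alpha * d_bar if d_n <= d_bar else d_bar
    if d_bar_next < d_bar:               # bar{d} decreased: pick a new D from chi
        return draw_from_chi(), d_bar_next
    return D_n, d_bar_next               # otherwise keep the current D_n
```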

Theorem 3.2 If $\Gamma \ne \emptyset$, then the sequence $\{x_n\}$ generated by Algorithm 3.2 converges to a solution of GSFP (1.2).

Proof To allow a variable $D_n$ at each iteration while preserving the convergence of Algorithm 3.2, $d_n$ must exhibit descending behavior for $n = 0, 1, 2, \ldots$. We first show that

$$\liminf_{n\to\infty} d_n = 0. \tag{3.7}$$

Indeed, if (3.7) were not true, we would have $\liminf_{n\to\infty} d_n > 0$; then $D_n$ can have changed only a finite number of times, say $\kappa \in \mathbb{N}$ times. Therefore, let $x^* \in \Gamma$ be a solution of the GSFP; referring to (3.2), for $n > \kappa$ we have

$$\bigl\| g(x_{n+1}) - g(x^*) \bigr\|_{G_\kappa}^2 \le \bigl\| g(x_n) - g(x^*) \bigr\|_{G_\kappa}^2 - \Bigl( \frac{2\gamma_\kappa}{L} - \gamma_\kappa^2 L_{D_\kappa} \Bigr) \bigl\| \nabla f^g(x_n) - \nabla f^g(x^*) \bigr\|^2,$$

and since $M_D = \min\{L_{D_n} \mid n = 0, 1, \ldots, \kappa\}$, we have $\frac{2\gamma_\kappa}{L} - \gamma_\kappa^2 L_{D_\kappa} > 0$. Then, following the proof of Theorem 3.1, we also get

$$\lim_{n\to\infty} \bigl\| \nabla f^g(x_n) \bigr\| = 0 \tag{3.8}$$

and

$$\lim_{n\to\infty} d_n = \lim_{n\to\infty} \bigl\| g(x_{n+1}) - g(x_n) \bigr\|_{G_\kappa} = 0. \tag{3.9}$$

Equation (3.9) contradicts the hypothesis above, so (3.7) holds.

By using (3.6) and (iii), at the n-th iteration we have

$$\begin{aligned} d_n^2 = \bigl\| g(x_{n+1}) - g(x_n) \bigr\|_{G_n}^2 &\ge \frac{1}{2} \bigl\| g(x_n) - P_C\bigl[ g(x_n) \bigr] \bigr\|_{G_n}^2 - \bigl\| g(x_{n+1}) - P_C\bigl[ g(x_n) \bigr] \bigr\|_{G_n}^2 \\ &= \frac{1}{2} \bigl\| g(x_n) - P_C\bigl[ g(x_n) \bigr] \bigr\|_{G_n}^2 - \bigl\| P_C\bigl[ g(x_n) - \gamma_n D_n \nabla f^g(x_n) \bigr] - P_C\bigl[ g(x_n) \bigr] \bigr\|_{G_n}^2 \\ &\ge \frac{1}{2} \bigl\| g(x_n) - P_C\bigl[ g(x_n) \bigr] \bigr\|_{G_n}^2 - \gamma_n L_{D_n} \bigl\| \nabla f^g(x_n) \bigr\|_{G_n}^2, \end{aligned}$$

where $\gamma_n L_{D_n} > 0$. Then, from (2.1) and (3.8), we know that

$$\lim_{n\to\infty} \bigl\| \nabla f^g(x_n) \bigr\|_{G_n} = \lim_{n\to\infty} \bigl\| A^T (I - P_Q) A g(x_n) \bigr\|_{G_n} = 0,$$

so $A g(x_n)$ approaches Q. By virtue of (3.7), we also have

$$\liminf_{n\to\infty} \bigl\| g(x_n) - P_C\bigl[ g(x_n) \bigr] \bigr\| = 0.$$

This means that at least a subsequence of $\{g(x_n)\}_{n \in \mathbb{N}}$ converges to $g(\bar x)$ for some $\bar x \in \Gamma$. Arguing on accumulation points as in the proof of Theorem 3.1, we conclude that $\{x_n\}$ converges to a solution of the GSFP. □

3.3 Some methods for execution

In Algorithms 3.1 and 3.2, the projections $P_C$ and $P_Q$ with respect to the defined norms may still be difficult to implement, especially when C and Q are general closed convex sets. Following the relaxed method in [8, 27, 28], we consider the above algorithms when the closed convex subsets C and Q have the following particular form:

$$C = \bigl\{ g(x) \in H_1 \mid c\bigl(g(x)\bigr) \le 0 \bigr\} \quad\text{and}\quad Q = \bigl\{ A g(x) \in H_2 \mid q\bigl(A g(x)\bigr) \le 0 \bigr\},$$

where $c: H_1 \to \mathbb{R}$ and $q: H_2 \to \mathbb{R}$ are convex functions. The half-spaces $C_n$ and $Q_n$ are given by

$$\begin{aligned} C_n &= \bigl\{ g(x) \in H_1 \mid c\bigl(g(x_n)\bigr) + \bigl\langle \xi_n,\, g(x) - g(x_n) \bigr\rangle \le 0 \bigr\}, \\ Q_n &= \bigl\{ A g(x) \in H_2 \mid q\bigl(A g(x_n)\bigr) + \bigl\langle \eta_n,\, A g(x) - A g(x_n) \bigr\rangle \le 0 \bigr\}, \end{aligned}$$

where $\xi_n \in \partial c(g(x_n))$ and $\eta_n \in \partial q(A g(x_n))$.

Here we replace $P_C$ and $P_Q$ by $P_{C_n}$ and $P_{Q_n}$. However, since in this paper (taking Algorithm 3.2 for example) the projections are with respect to the norms corresponding to $G_n$ and $\tilde D_n$, we use the following formulas to compute them. For $z \in H_1$ and $y \in H_2$,

$$P_{C_n}(z) = \begin{cases} z - \dfrac{c[g(x_n)] + \langle D_n \xi_n,\, z - g(x_n) \rangle}{\|\xi_n\|_{D_n}^2}\, D_n \xi_n, & \text{if } c[g(x_n)] + \langle D_n \xi_n,\, z - g(x_n) \rangle > 0; \\ z, & \text{otherwise}, \end{cases}$$

and

$$P_{Q_n}(y) = \begin{cases} y - \dfrac{q[A g(x_n)] + \langle \tilde D_n \eta_n,\, y - A g(x_n) \rangle}{\|\eta_n\|_{\tilde D_n}^2}\, \tilde D_n \eta_n, & \text{if } q[A g(x_n)] + \langle \tilde D_n \eta_n,\, y - A g(x_n) \rangle > 0; \\ y, & \text{otherwise}. \end{cases}$$

Set $z = g(x_n) - \gamma_n D_n \nabla f^g(x_n)$ and $y = A g(x_n)$, and let $\bar x \in H_1$ be an accumulation point of $\{x_n\}_{n \in \mathbb{N}}$. From the proofs above, it is easy to deduce that

$$\begin{aligned} \lim_{n\to\infty} \bigl( c\bigl[g(x_n)\bigr] + \bigl\langle D_n \xi_n,\, z - g(x_n) \bigr\rangle \bigr) &= c\bigl[g(\bar x)\bigr] \le 0, \\ \lim_{n\to\infty} \bigl( q\bigl[A g(x_n)\bigr] + \bigl\langle \tilde D_n \eta_n,\, y - A g(x_n) \bigr\rangle \bigr) &= q\bigl[A g(\bar x)\bigr] \le 0. \end{aligned}$$

Therefore, $g(\bar x) \in C \subset C_n$ and $A g(\bar x) \in Q \subset Q_n$, so with the projections $P_{C_n}$ and $P_{Q_n}$, $\bar x$ is still a solution of the GSFP.
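In finite dimensions, the relaxed projection $P_{C_n}$ above reduces to a few lines of numpy; the sketch below uses illustrative names, and $P_{Q_n}$ is obtained identically by substituting q, $\eta_n$, and $\tilde D_n$.

```python
import numpy as np

def proj_Cn(z, g_xn, c_val, xi, D_n):
    """Projection of z onto the half-space C_n w.r.t. the G_n = D_n^{-1} norm.

    g_xn  : g(x_n)
    c_val : c(g(x_n))
    xi    : a subgradient of c at g(x_n)
    """
    D_xi = D_n @ xi
    viol = c_val + D_xi @ (z - g_xn)            # linearized constraint value at z
    if viol > 0:
        return z - (viol / (xi @ D_xi)) * D_xi  # ||xi||_{D_n}^2 = <xi, D_n xi>
    return z                                    # z already satisfies the constraint
```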

Next, we present a new approximation method to estimate γ n and D n in Algorithm 3.2.

If $\Gamma \ne \emptyset$, then for $x^* \in \Gamma$ and $n \ge 0$ we have

$$D_n \nabla f^g(x^*) = D_n A^T A g(x^*) - D_n A^T P_{Q_n} A g(x^*) = 0.$$

Under the ideal condition $D_n A^T A \approx I$ the solution would be immediate, but unfortunately $(A^T A)^{-1}$ cannot be computed directly when A is a large matrix in practice. Since

$$A^T A g(x^*) = A^T P_{Q_n} A g(x^*) = \lambda g(x^*),$$

where λ is an eigenvalue of $A^T A$, we let $D_0 = I$ and, at the n-th iteration, approximate the (j, j) entry of $D_{n+1}$ by

$$D_{n+1}^{jj} = \begin{cases} \dfrac{[g(x_n)]_j}{[A^T P_{Q_n} A g(x_n)]_j}, & \text{if } [g(x_n)]_j \ne 0 \text{ and } [A^T P_{Q_n} A g(x_n)]_j \ne 0; \\ D_n^{jj}, & \text{otherwise}, \end{cases}$$

where $j = 1, 2, \ldots$. Then, at the n-th iteration, let $l_{D_n}$ be the minimum eigenvalue of $D_n$, and take $M_{D_n} = \min\{L_{D_k} \mid k = 0, 1, \ldots, n\}$ and $L_n = \max\{(l_{D_k})^{-1} \mid k = 0, 1, \ldots, n\}$. The variable stepsize is then approximated by

$$\gamma_n = \frac{\rho_n}{L_n M_{D_n}}, \quad \rho_n \in (0, 2),\ n = 1, 2, \ldots.$$
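A sketch of this inexact preconditioner and stepsize approximation follows (numpy, illustrative names). Since the updated diagonal entries are quotients that may turn out negative or tiny, the sketch adds a safeguard, not in the paper, to keep $D_{n+1}$ positive definite.

```python
import numpy as np

def approx_preconditioner(D_n, g_xn, w_n, eps=1e-12):
    """Diagonal update D_{n+1}^{jj} = [g(x_n)]_j / [A^T P_{Q_n} A g(x_n)]_j,
    keeping the old entry where either component vanishes.

    w_n = A^T P_{Q_n} A g(x_n).
    """
    d = np.diag(D_n).copy()
    ok = (np.abs(g_xn) > eps) & (np.abs(w_n) > eps)
    d[ok] = g_xn[ok] / w_n[ok]
    d[d <= eps] = 1.0                    # safeguard (not in the paper)
    return np.diag(d)

def approx_stepsize(diag_history, rho_n):
    """gamma_n = rho_n / (L_n * M_{D_n}), with l_{D_k} and L_{D_k} read off the
    diagonals (diag_history is the list of diagonal arrays of D_0, ..., D_n)."""
    L_n = max(1.0 / d.min() for d in diag_history)   # max of (l_{D_k})^{-1}
    M_Dn = min(d.max() for d in diag_history)        # min of L_{D_k}
    return rho_n / (L_n * M_Dn)
```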

4 Numerical results

We consider the following problem from [29] in a finite-dimensional Hilbert space.

Let $C = \{x \in H_1 \mid c(x) \le 0\}$, where $c(x) = x_1 + x_2^2 + \cdots + x_N^2$, and $Q = \{y \in H_2 \mid q(y) \le 0\}$, where $q(y) = y_1 + y_2^2 + \cdots + y_M^2 - 1$. $A \in \mathbb{R}^{M \times N}$ is a random matrix, each element of which lies in (0, 1), chosen so that $\Gamma \ne \emptyset$. Let $x_0$ be a random vector in $H_1$, each element of which lies in (0, 1).

We take $\|x_{n+1} - x_n\| \le \varepsilon$ as the stopping rule and let N = 10, M = 20, g = I, and $\tilde D_n = I$ for $n \ge 0$. Using the methods in Section 3.3, we compare Algorithm 3.2 with the relaxed CQ algorithm (RCQ) in [30] for different values of ε and different initial points. The results are shown in Table 1; the methods proposed in this paper perform better.
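For reproducibility, the test problem can be set up as follows (a sketch; the gradient formulas follow directly from the definitions of c and q, and the seed and tolerance are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 20
A = rng.random((M, N))     # entries in (0, 1)
x0 = rng.random(N)         # random initial point, entries in (0, 1)

def c(x):                  # c(x) = x_1 + x_2^2 + ... + x_N^2
    return x[0] + np.sum(x[1:] ** 2)

def q(y):                  # q(y) = y_1 + y_2^2 + ... + y_M^2 - 1
    return y[0] + np.sum(y[1:] ** 2) - 1.0

def grad_c(x):             # gradient of c: (1, 2x_2, ..., 2x_N)
    g = 2.0 * x
    g[0] = 1.0
    return g

def grad_q(y):             # gradient of q: (1, 2y_2, ..., 2y_M)
    g = 2.0 * y
    g[0] = 1.0
    return g

eps = 1e-6                 # stopping rule: ||x_{n+1} - x_n|| <= eps
```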

Table 1 Comparison of the results of the preconditioning and relaxed CQ algorithms

5 Concluding remarks

In this paper, we have discussed a new general split feasibility problem, which is related to general variational inequalities involving a co-coercive operator. By using the G-norm method, the variable modulus method, and the relaxed method, we have presented two modified projection algorithms for solving the GSFP, together with some approximate methods for executing them. The numerical results show that preconditioning can improve the convergence speed of the CQ algorithm, but the way variable stepsizes are obtained in this paper is inexact. Improving it further, or combining it with the methods in [14] and [28], is another interesting subject.

References

1. Chen K: Matrix Preconditioning Techniques and Applications. Cambridge University Press, New York; 2005.

2. Piana M, Bertero M: Projected Landweber method and preconditioning. Inverse Probl. 1997, 13: 441–463.

3. Strand ON: Theory and methods related to the singular-function expansion and Landweber's iteration for integral equations of the first kind. SIAM J. Numer. Anal. 1974, 11: 798–824.

4. Auslender A: Optimisation: Méthodes Numériques. Masson, Paris; 1976.

5. Dafermos S: Traffic equilibrium and variational inequalities. Transp. Sci. 1980, 14: 42–54.

6. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with application to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159.

7. Marcotte P, Wu JH: On the convergence of projection methods: application to the decomposition of affine variational inequalities. J. Optim. Theory Appl. 1995, 85: 347–362.

8. Fukushima M: A relaxed projection method for variational inequalities. Math. Program. 1986, 35: 58–70.

9. Yang Q: The revisit of a projection algorithm with variable steps for variational inequalities. J. Ind. Manag. Optim. 2005, 1: 211–217.

10. He BS: Inexact implicit methods for monotone general variational inequalities. Math. Program. 1999, 86: 199–217.

11. Noor MA, Wang YJ, Xiu N: Projection iterative schemes for general variational inequalities. J. Inequal. Pure Appl. Math. 2002, 3: Article ID 34.

12. Santos PSM, Scheimberg S: A projection algorithm for general variational inequalities with perturbed constraint sets. Appl. Math. Comput. 2006, 181: 649–661.

13. Noor MA, Bnouhachem A, Ullah S: Self-adaptive methods for general variational inequalities. Nonlinear Anal. 2009, 71: 3728–3738.

14. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665.

15. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120.

16. Dolidze Z: Solution of variational inequalities associated with a class of monotone maps. Èkon. Mat. Metody 1982, 18: 925–927.

17. He B, He X, Liu H, Wu T: Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 2009, 196: 43–48.

18. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. I. Springer, New York; 2003.

19. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer, New York; 2003.

20. Yao YH, Postolache M, Liou YC: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013: Article ID 201.

21. Yao YH, Yang PX, Kang SM: Composite projection algorithms for the split feasibility problem. Math. Comput. Model. 2013, 57: 693–700.

22. Yao YH, Liou YC, Shahzad N: A strongly convergent method for the split feasibility problem. Abstr. Appl. Anal. 2012, 2012: Article ID 125046.

23. Eslamian M, Latif A: General split feasibility problems in Hilbert spaces. Abstr. Appl. Anal. 2013, 2013: Article ID 805104.

24. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239.

25. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453.

26. Wang PY, Zhou HY: A preconditioning method of the CQ algorithm for solving the extended split feasibility problem. J. Inequal. Appl. 2014, 2014: Article ID 163.

27. Yang Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179.

28. López G, Martín-Márquez V, Wang F, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004.

29. Wang Z, Yang Q, Yang Y: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2011, 217: 5347–5359.

30. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266.


Acknowledgements

The authors would like to thank the associate editor and the referees for their comments and suggestions. This research was supported by the National Natural Science Foundation of China (11071053).

Author information

Correspondence to Peiyuan Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors take equal roles in deriving results and writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Wang, P., Zhou, H. & Zhou, Y. Preconditioning methods for solving a general split feasibility problem. J Inequal Appl 2014, 435 (2014). https://doi.org/10.1186/1029-242X-2014-435