A preconditioning method of the CQ algorithm for solving an extended split feasibility problem

Abstract

By virtue of preconditioning technology, we propose a preconditioning CQ algorithm for an extended split feasibility problem (ESFP). Compared with existing methods, the proposed algorithm achieves faster convergence without having to adjust the stepsize. Its convergence is established under mild conditions. Several extensions of the preconditioning CQ algorithm are presented. Moreover, we present an approximate variable preconditioner that does not require computing matrix inverses. Finally, numerical experiments show the improved behavior of the proposed methods.

MSC: 47J10, 47J20, 65B05.

1 Introduction

The problem of finding $x \in C$ with $Ax \in Q$, if such $x$ exists, was called the split feasibility problem (SFP) by Censor and Elfving [1], where $C \subseteq \mathbb{R}^N$ and $Q \subseteq \mathbb{R}^M$ are nonempty closed convex sets and $A$ is an $M \times N$ matrix. This problem plays an important role in the study of signal processing, image reconstruction, and so on [2, 3]. Censor and Elfving's algorithm in [1], as well as others obtained later [4, 5], involves matrix inverses at each step. Byrne [6] presented a method called the CQ algorithm for solving the SFP that does not involve matrix inverses.

The CQ algorithm Let $x_0$ be arbitrary. For $k = 0, 1, \ldots$, calculate

$$x_{k+1} = P_C\bigl(x_k - \gamma A^T(I - P_Q)Ax_k\bigr), \tag{1}$$

where $\gamma \in (0, 2/L)$, $L$ denotes the largest eigenvalue of the matrix $A^TA$, $I$ is the identity matrix, and $P_C$ and $P_Q$ are the orthogonal projections onto $C$ and $Q$, respectively.
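To make iteration (1) concrete, here is a minimal NumPy sketch; the projections `proj_C` and `proj_Q` are assumed to be supplied by the user (they depend on the particular sets $C$ and $Q$), and the stepsize is taken inside the admissible interval $(0, 2/L)$.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, tol=1e-10, max_iter=10000):
    """Byrne's CQ iteration x_{k+1} = P_C(x_k - gamma A^T (I - P_Q) A x_k)."""
    L = np.linalg.norm(A, 2) ** 2        # largest eigenvalue of A^T A
    gamma = 1.0 / L                      # any value in (0, 2/L) is admissible
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))   # A^T (I - P_Q) A x_k
        x_new = proj_C(x - gamma * grad)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```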

In recent years, how to modify the CQ algorithm so that it can be implemented easily and converge faster has been a hot topic. Typical modifications are as follows. Yang [7] presented a relaxed CQ algorithm for solving the SFP, in which the orthogonal projections onto the halfspaces $C_k$ and $Q_k$ can be computed exactly. Qu and Xiu [8] proposed a modified relaxed algorithm that does not need to compute the largest eigenvalue of the matrix $A^TA$ and obtains an adaptive stepsize by adopting an Armijo-like search. The paper [9] extended the algorithm in [10] and proposed a relaxed inexact projection method for the SFP. Xu [11] extended the problem to infinite-dimensional Hilbert spaces and modified the CQ algorithm with Mann's iteration. In [12], López et al. presented a variable stepsize and improved the algorithm with a Halpern-type iteration.

However, preconditioning techniques have not yet been applied to accelerate the CQ algorithm, although one can obtain a notable effect from them. In this paper, we modify the CQ algorithm from the viewpoints of fixed points and variational inequalities. Combined with an appropriate preconditioner, the SFP can be transformed into an extended split feasibility problem (ESFP). Naturally, a preconditioning CQ algorithm for solving the ESFP can also solve the SFP indirectly.

The rest of the paper is organized as follows. In Section 2, we review some concepts and existing results. In Section 3, we propose a preconditioning CQ algorithm for solving the ESFP and establish its convergence. Several extensions are presented in Section 4. In Section 5, we discuss how to estimate the approximate inverse preconditioner. In Section 6, we report some computational results for the proposed algorithms and methods. Finally, Section 7 gives some concluding remarks.

2 Preliminaries

Our argument mainly depends on monotone operators, nonexpansive mappings, and metric projections.

Definition 2.1 [13]

Let $T$ be a mapping from a set $C \subseteq \mathbb{R}^N$ into itself. Then

(i) $T$ is said to be monotone on $C$ if

$$\langle Tx - Ty,\, x - y\rangle \ge 0 \quad \text{for all } x, y \in C;$$

(ii) a mapping $T : C \to C$ is nonexpansive if

$$\|Tx - Ty\| \le \|x - y\| \quad \text{for all } x, y \in C.$$

We denote by $\mathrm{Fix}(T)$ the set of fixed points of $T$; that is, $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$. Note that $\mathrm{Fix}(T)$ is always closed and convex (but maybe empty).

The metric projection from $\mathbb{R}^N$ onto $C$ is the mapping $P_C : \mathbb{R}^N \to C$ which assigns to each point $x \in \mathbb{R}^N$ the unique point $P_C x \in C$ satisfying

$$\|x - P_C x\| = \inf_{y \in C}\|x - y\| =: d(x, C),$$

where $\|\cdot\|$ is the 2-norm.

The following properties of projections are useful and pertinent to our purpose.

Lemma 2.1 [13]

Given $x \in \mathbb{R}^N$:

$$\mathrm{(i)}\quad \langle x - P_C x,\, P_C x - y\rangle \ge 0 \quad \text{for all } y \in C, \tag{2}$$

$$\mathrm{(ii)}\quad \|x - P_C x\|^2 \le \|x - y\|^2 - \|y - P_C x\|^2 \quad \text{for all } y \in C, \tag{3}$$

$$\mathrm{(iii)}\quad \langle P_C x - P_C y,\, x - y\rangle \ge \|P_C x - P_C y\|^2 \quad \text{for all } y \in \mathbb{R}^N. \tag{4}$$

Consequently, $P_C$ is nonexpansive and monotone, and $I - P_C$ is also nonexpansive; then

$$\mathrm{(iv)}\quad \bigl\langle (I - P_C)x - (I - P_C)y,\, x - y\bigr\rangle \ge \bigl\|(I - P_C)x - (I - P_C)y\bigr\|^2 \quad \text{for all } y \in \mathbb{R}^N. \tag{5}$$

Lemma 2.2 For $x, y \in \mathbb{R}^N$:

$$\mathrm{(i)}\quad \|x \pm y\|^2 = \|x\|^2 \pm 2\langle x, y\rangle + \|y\|^2, \tag{6}$$

$$\mathrm{(ii)}\quad \|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2, \quad t \in \mathbb{R}. \tag{7}$$

Lemma 2.3 [14]

Let $U = I - \gamma A^T(I - P_Q)A$, where $\gamma \in (0, 2/L)$.

(i) $U$ is an averaged operator; i.e., there exist some $\beta \in (0, 1)$ and a nonexpansive operator $V$ such that $U = (1 - \beta)I + \beta V$.

(ii) $\mathrm{Fix}(U) = A^{-1}(Q)$; then $\mathrm{Fix}(P_C U) = C \cap A^{-1}(Q)$.

Proposition 2.1 [15]

For every $k \ge 0$, let $x_k \in \mathbb{R}^N$, and let $C_k$ and $Q_k$ be defined as in [7], with $\xi_k \in \partial c(x_k)$ and $\eta_k \in \partial q(Ax_k)$ the corresponding subgradients. Then for any $x \in \mathbb{R}^N$ and $y \in \mathbb{R}^M$ we have

$$P_{C_k}(x) = \begin{cases} x - \dfrac{c(x_k) + \langle \xi_k, x - x_k\rangle}{\|\xi_k\|^2}\,\xi_k, & \text{if } c(x_k) + \langle \xi_k, x - x_k\rangle > 0;\\[2mm] x, & \text{otherwise} \end{cases}$$

and

$$P_{Q_k}(y) = \begin{cases} y - \dfrac{q(Ax_k) + \langle \eta_k, y - Ax_k\rangle}{\|\eta_k\|^2}\,\eta_k, & \text{if } q(Ax_k) + \langle \eta_k, y - Ax_k\rangle > 0;\\[2mm] y, & \text{otherwise.} \end{cases}$$
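Both formulas are at most one halfspace correction; as a hedged illustration (the helper name and arguments are ours, not from [7]), a single relaxed projection might be coded as:

```python
import numpy as np

def project_halfspace(x, g, grad, x_ref):
    """Projection onto the halfspace {z : g + <grad, z - x_ref> <= 0}.

    With g = c(x_k), grad = xi_k, x_ref = x_k this is P_{C_k} from
    Proposition 2.1; with g = q(A x_k), grad = eta_k, x_ref = A x_k
    it is P_{Q_k}.
    """
    viol = g + grad @ (x - x_ref)
    if viol > 0:
        return x - (viol / (grad @ grad)) * grad
    return x
```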

3 The preconditioning CQ algorithm

Strand [16] and Piana and Bertero [17] applied preconditioning matrix techniques to improve the Landweber and projected Landweber algorithms. Their analyses deal with the operators $A$ and $A^TA$ and are based on the singular value decomposition and a more general spectrum, respectively. We can extend these techniques to improve the CQ algorithm as well.

The SFP is to find a point $x^* \in C$ with $Ax^* \in Q$. Firstly, we set $\Omega = C \cap A^{-1}(Q)$, where $A^{-1}(Q) = \{x^* \in \mathbb{R}^N \mid Ax^* \in Q\}$. From Lemma 2.3, (1) can be viewed from the fixed point perspective as

$$x^* = P_C(Ux^*). \tag{8}$$

Assume that $\Omega \neq \emptyset$, i.e., the SFP has a nonempty solution set, and let $x^*$ be a solution of the SFP. Thus, we have

$$x^* = Ux^*,$$

so

$$A^T(I - P_Q)Ax^* = 0. \tag{9}$$

Then let $D : C \to C$ be an $N \times N$ symmetric positive definite matrix with $(AD)x^* \in Q$. Referring to (9), we can deduce that

$$\begin{aligned} DA^T(I - P_Q)ADx^* &= (AD)^T(I - P_Q)(AD)x^* \\ &= (AD)^T(AD)x^* - (AD)^TP_Q(AD)x^* \\ &= (AD)^T(AD)x^* - (AD)^T(AD)x^* = 0 \end{aligned} \tag{10}$$

or

$$x^* = x^* - \gamma DA^T(I - P_Q)ADx^* = U_D x^*.$$

Then $U_D$ has the same properties as $U$ in Lemma 2.3, and we obtain

$$x^* = P_C(U_D x^*). \tag{11}$$

Now we present a new algorithm, which we name the preconditioning CQ algorithm (PCQ).

Algorithm 3.1 Let $D : C \to C$ be an $N \times N$ symmetric positive definite matrix, and let $x_0 \in C$ be arbitrary. For $k = 0, 1, \ldots$, calculate

$$x_{k+1} = P_C\bigl(x_k - \gamma DA^T(I - P_Q)ADx_k\bigr), \tag{12}$$

where $\gamma \in (0, 2/L)$ and $L = \|DA^T\|^2$.
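Algorithm 3.1 changes the CQ iteration only by the substitution $A \mapsto AD$ and by the constant $L = \|DA^T\|^2$; a minimal sketch under the same assumptions as before (user-supplied projections) is:

```python
import numpy as np

def pcq_algorithm(A, D, proj_C, proj_Q, x0, tol=1e-10, max_iter=10000):
    """Preconditioning CQ iteration (12):
    x_{k+1} = P_C(x_k - gamma D A^T (I - P_Q) A D x_k)."""
    AD = A @ D                            # D is symmetric, so (AD)^T = D A^T
    L = np.linalg.norm(D @ A.T, 2) ** 2   # L = ||D A^T||^2
    gamma = 1.0 / L                       # inside (0, 2/L)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        ADx = AD @ x
        x_new = proj_C(x - gamma * (AD.T @ (ADx - proj_Q(ADx))))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```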

Algorithm 3.1 solves an extended SFP (ESFP), which can be stated as follows.

Definition 3.1 Let $C$ and $Q$ be nonempty closed convex sets in $\mathbb{R}^N$ and $\mathbb{R}^M$, respectively, let $A$ be an $M \times N$ matrix, and let $D$ be an $N \times N$ symmetric positive definite matrix. The ESFP is to find $x \in C$ with $ADx \in Q$. We denote the solution set of the ESFP by $G$.

Remark 3.1 If we set $\tilde{x} = Dx \in D(C) =: \tilde{C}$, then $A\tilde{x} \in Q$, and the problem in Definition 3.1 is transformed into the SFP.

Remark 3.2 If $D$ is the identity matrix, then the problem in Definition 3.1 is to find $x \in C$ with $Ax \in Q$, i.e., it reduces to the SFP.

From Remark 3.1 we know that the SFP is to minimize the function

$$f(\tilde{x}) = \frac{1}{2}\bigl\|(I - P_Q)A\tilde{x}\bigr\|^2, \quad \tilde{x} \in \tilde{C}. \tag{13}$$

Substituting $\tilde{x} = Dx$ into (13), its gradient operator is

$$\nabla f(x) = DA^T(I - P_Q)ADx, \quad x \in C.$$

When $C = \mathbb{R}^N$ and $C \cap (AD)^{-1}(Q) \neq \emptyset$, we also have

$$\nabla f(x^*) = 0,$$

where $x^* \in \mathbb{R}^N$ is a solution of the extended SFP. We can obtain the following variational inequality:

$$\bigl\langle DA^T(I - P_Q)ADx^*,\, x - x^*\bigr\rangle \ge 0, \quad \forall x \in C.$$

Therefore, we have the following constrained least-squares problem:

$$\min\bigl\{f(x) : x \in C\bigr\}.$$

The following result is immediate.

Theorem 3.1 Assume $G \neq \emptyset$. Then $x^* \in G$ if and only if $x^* = \arg\min\{f(x) \mid x \in C\}$, if and only if $\langle \nabla f(x^*), x - x^*\rangle \ge 0$ for all $x \in C$, where

$$f(x) = \frac{1}{2}\bigl\|(I - P_Q)ADx\bigr\|^2, \quad x \in C. \tag{14}$$

As $U_D = I - \gamma DA^T(I - P_Q)AD$, from (12) we have

$$x_0 \in C, \qquad x_{k+1} = P_C(U_D x_k), \quad k = 0, 1, \ldots. \tag{15}$$

In order to establish the convergence of Algorithm 3.1, we need the following theorem.

Theorem 3.2 Assume that $G = C \cap (AD)^{-1}(Q) \neq \emptyset$. Then

(i) $\mathrm{Fix}(U_D) = (AD)^{-1}(Q) = \{x \in \mathbb{R}^N \mid ADx \in Q\}$;

(ii) $\mathrm{Fix}(P_C U_D) = G$.

Proof As $D^T = D$ and $(AD)^T = DA^T$, we have $U_D = I - \gamma(AD)^T(I - P_Q)(AD)$.

Firstly, we prove $(AD)^{-1}(Q) \subseteq \mathrm{Fix}(U_D)$. For $x \in (AD)^{-1}(Q)$, we have $x \in \mathbb{R}^N$ and $ADx \in Q$, so $P_QADx = ADx$. Thus

$$U_Dx = x - \gamma(AD)^T\bigl((AD)x - P_Q(AD)x\bigr) = x - 0 = x.$$

Therefore, $x \in \mathrm{Fix}(U_D)$.

Secondly, we prove $\mathrm{Fix}(U_D) \subseteq (AD)^{-1}(Q)$.

As $G = C \cap (AD)^{-1}(Q) \neq \emptyset$, we choose $z \in G$; then $z \in C$ and $z \in (AD)^{-1}(Q)$, so $(AD)z \in Q$.

For $x \in \mathrm{Fix}(U_D)$, we have $A^T(I - P_Q)ADx = 0$. From the properties of a projection, we can deduce

$$\bigl\langle (I - P_Q)ADx,\, (AD)z - P_Q(AD)x\bigr\rangle \le 0;$$

therefore,

$$\begin{aligned} \bigl\|(I - P_Q)ADx\bigr\|^2 &= \bigl\langle (I - P_Q)ADx,\, (I - P_Q)ADx\bigr\rangle \\ &= \bigl\langle (I - P_Q)ADx,\, (AD)x - (AD)z\bigr\rangle + \bigl\langle (I - P_Q)ADx,\, (AD)z - P_Q(AD)x\bigr\rangle \\ &\le \bigl\langle (I - P_Q)ADx,\, AD(x - z)\bigr\rangle \\ &= \bigl\langle A^T(I - P_Q)ADx,\, D(x - z)\bigr\rangle = 0; \end{aligned}$$

then $(I - P_Q)ADx = 0$, and $ADx = P_Q(AD)x \in Q$.

We obtain $x \in (AD)^{-1}(Q)$; thus (i) is proved.

We can also deduce that $\mathrm{Fix}(P_CU_D) = \mathrm{Fix}(P_C) \cap \mathrm{Fix}(U_D) = G = C \cap (AD)^{-1}(Q)$. □

Theorem 3.3 Assume $G \neq \emptyset$, $0 < \gamma < 2/L$, $L = \|DA^T\|^2$, and let the sequence $\{x_k\}$ be generated by (15). Then $\lim_{k\to\infty}x_k = x^* \in G$.

Proof Firstly, we show that for $\gamma = 2/L$ the operator

$$V = I - \frac{2}{L}DA^T(I - P_Q)AD \tag{16}$$

is nonexpansive.

For $x, y \in C$, from (4) and (6) we have

$$\begin{aligned} \|Vx - Vy\|^2 &= \Bigl\|x - y - \frac{2}{L}\bigl(DA^T(I - P_Q)ADx - DA^T(I - P_Q)ADy\bigr)\Bigr\|^2 \\ &= \|x - y\|^2 - \frac{4}{L}\bigl\langle DA^T(I - P_Q)ADx - DA^T(I - P_Q)ADy,\, x - y\bigr\rangle \\ &\quad + \frac{4}{L^2}\bigl\|DA^T(I - P_Q)ADx - DA^T(I - P_Q)ADy\bigr\|^2 \\ &= \|x - y\|^2 - \frac{4}{L}\bigl\langle (I - P_Q)ADx - (I - P_Q)ADy,\, ADx - ADy\bigr\rangle \\ &\quad + \frac{4}{L^2}\bigl\|DA^T(I - P_Q)ADx - DA^T(I - P_Q)ADy\bigr\|^2 \\ &\le \|x - y\|^2 - \frac{4}{L}\bigl\|(I - P_Q)ADx - (I - P_Q)ADy\bigr\|^2 + \frac{4}{L}\bigl\|(I - P_Q)ADx - (I - P_Q)ADy\bigr\|^2 \\ &= \|x - y\|^2; \end{aligned}$$

therefore,

$$\|Vx - Vy\|^2 \le \|x - y\|^2. \tag{17}$$

Next, since $0 < \gamma L/2 < 1$, we set

$$\beta = \frac{\gamma L}{2} \in (0, 1).$$

From (16) we deduce that

$$U_D = I - \gamma DA^T(I - P_Q)AD = \Bigl(1 - \frac{\gamma L}{2}\Bigr)I + \frac{\gamma L}{2}\Bigl(I - \frac{2}{L}DA^T(I - P_Q)AD\Bigr) = (1 - \beta)I + \beta V \tag{18}$$

and $V$ is nonexpansive; hence, for $0 < \gamma < 2/L$, $U_D$ is an averaged nonexpansive operator.

Finally, we choose $p \in G$, so that $p \in C$ and $p = U_Dp$. We have $p = P_CU_Dp = P_Cp$. From (7), we have

$$\begin{aligned} \|x_{k+1} - p\|^2 &= \|P_CU_Dx_k - P_CU_Dp\|^2 \le \|U_Dx_k - p\|^2 \\ &= \bigl\|(1 - \beta)x_k + \beta Vx_k - p\bigr\|^2 = \bigl\|(1 - \beta)(x_k - p) + \beta(Vx_k - p)\bigr\|^2 \\ &= (1 - \beta)\|x_k - p\|^2 + \beta\|Vx_k - p\|^2 - \beta(1 - \beta)\|x_k - Vx_k\|^2 \\ &\le \|x_k - p\|^2 - \beta(1 - \beta)\|x_k - Vx_k\|^2, \end{aligned} \tag{19}$$

which implies that $\{\|x_k - p\|^2\}$ is monotonically decreasing; hence $\lim_{k\to\infty}\|x_k - p\|^2 = d \ge 0$ exists. In particular, $\{x_k\}$ is bounded.

From (19) we can deduce that

$$\beta(1 - \beta)\|x_k - Vx_k\|^2 \le \|x_k - p\|^2 - \|x_{k+1} - p\|^2;$$

as $\|x_k - p\|^2 - \|x_{k+1} - p\|^2 \to 0$, we have $\|x_k - Vx_k\| \to 0$.

Let $x^*$ be an arbitrary cluster point of the sequence $\{x_k\}$. Then there exists a subsequence $\{x_{k_j}\} \subseteq \{x_k\}$ such that $x_{k_j} \to x^*$ ($j \to \infty$). As $\{x_k\} \subseteq C$, we have $x^* \in C$ and $x^* = P_Cx^*$. Because $V$ is nonexpansive and hence continuous, $Vx_{k_j} \to Vx^*$ ($j \to \infty$).

As $\|x^* - Vx^*\| \le \|x^* - x_{k_j}\| + \|x_{k_j} - Vx_{k_j}\| + \|Vx_{k_j} - Vx^*\| \to 0$, we have $x^* = Vx^*$; then $U_Dx^* = (1 - \beta)x^* + \beta Vx^* = x^*$. Therefore, $x^* = P_Cx^* = P_CU_Dx^*$, and from Theorem 3.2 we have $x^* \in G$. Moreover, $\lim_{k\to\infty}\|x_k - x^*\| = d \ge 0$ exists, and there is a subsequence $\{x_{k_j}\}$ of $\{x_k\}$ with $x_{k_j} \to x^*$ ($j \to \infty$); therefore $x_k \to x^*$ ($k \to \infty$). □

4 Several extensions of the preconditioning CQ algorithm

In view of the various CQ-like algorithms for solving the SFP, we can also deduce the following meaningful results for solving the ESFP, stated without proof.

Following the relaxed CQ algorithm [7], we first obtain the relaxed projection method.

Algorithm 4.1 Let $D : C_k \to C_k$ be an $N \times N$ symmetric positive definite matrix, and let $x_0$ be arbitrary. For $k = 0, 1, \ldots$, calculate

$$x_{k+1} = P_{C_k}\bigl(x_k - \gamma DA^T(I - P_{Q_k})ADx_k\bigr), \tag{20}$$

where $\gamma \in (0, 2/L)$ and $L = \|DA^T\|^2$.

Theorem 4.1 Let $\{x_k\}$ be a sequence generated by the relaxed preconditioning CQ algorithm. Then $\{x_k\}$ converges to a solution of the ESFP.

Next, following the papers [8] and [14], define $f_k : \mathbb{R}^N \to \mathbb{R}^N$ by

$$f_k(x) = DA^T(I - P_{Q_k})ADx,$$

and we can obtain an adaptive algorithm with strong convergence.

Algorithm 4.2 Let $D : C_k \to C_k$ be an $N \times N$ symmetric positive definite matrix, and let constants $\lambda > 0$, $l \in (0, 1)$, $\mu \in (0, 1)$ be given. Let $x_0$ be arbitrary; for $k = 0, 1, \ldots$, let

$$\bar{x}_k = P_{C_k}\bigl(x_k - \rho_kDA^T(I - P_{Q_k})ADx_k\bigr), \tag{21}$$

where $\rho_k = \lambda l^{m_k}$ and $m_k$ is the smallest nonnegative integer $m$ such that

$$\rho_k\bigl\|f_k(x_k) - f_k(\bar{x}_k)\bigr\| \le \mu\|x_k - \bar{x}_k\|. \tag{22}$$

Set

$$x_{k+1} = P_{C_k}\bigl[(1 - \alpha_k)\bigl(x_k - \rho_kDA^T(I - P_{Q_k})AD\bar{x}_k\bigr)\bigr], \tag{23}$$

where $\{\alpha_k\}$ is a real sequence in $(0, 1)$ that satisfies the conditions (C1) $\lim_{k\to\infty}\alpha_k = 0$ and (C2) $\sum_{k=1}^{\infty}\alpha_k = \infty$.
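The search (21)-(22) is plain backtracking; here is a sketch of one stepsize determination, where `f_k` implements $x \mapsto DA^T(I - P_{Q_k})ADx$ and `proj_Ck` the relaxed projection (both assumed given, and the cap on backtracking steps is ours):

```python
import numpy as np

def armijo_step(x, f_k, proj_Ck, lam=1.0, l=0.5, mu=0.5, max_back=50):
    """Find rho_k = lam * l**m_k with m_k the smallest integer such that
    rho_k ||f_k(x_k) - f_k(xbar_k)|| <= mu ||x_k - xbar_k||   (cf. (22))."""
    fx = f_k(x)
    rho = lam
    for _ in range(max_back):
        x_bar = proj_Ck(x - rho * fx)      # trial point, cf. (21)
        if rho * np.linalg.norm(fx - f_k(x_bar)) <= mu * np.linalg.norm(x - x_bar):
            break
        rho *= l                           # backtrack: m_k -> m_k + 1
    return rho, x_bar
```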

Lemma 4.1 For all $k = 0, 1, \ldots$, $f_k$ is Lipschitz continuous on $\mathbb{R}^N$ with constant $L$ and co-coercive on $\mathbb{R}^N$ with modulus $1/L$, where $L$ is the largest eigenvalue of the matrix $A^TA$. Therefore, the Armijo-like search rule (22) is well defined.

Lemma 4.2 For all $k = 0, 1, \ldots$, $\mu l/L < \rho_k \le \gamma$.

Theorem 4.2 Let $\{x_k\}$ be a sequence generated by Algorithm 4.2. If the solution set of the SFP is nonempty, then $\{x_k\}$ converges strongly to a solution of the ESFP.

As Algorithm 4.2 contains an Armijo-like search step, the complexity of its implementation is increased. Next, we propose a new variable stepsize to improve Algorithm 3.1.

Algorithm 4.3 Let $D_k : C \to C$ be a variable $N \times N$ symmetric positive definite matrix, $k = 0, 1, \ldots$. For $x_0 \in C$, calculate

$$x_{k+1} = P_C\bigl(x_k - \gamma_kD_kA^T(I - P_Q)AD_kx_k\bigr), \tag{24}$$

where $\gamma_k \in (0, 2/(LM_D))$, $L$ is the largest eigenvalue of $A^TA$, and $M_D$ is the minimum of the largest eigenvalues of the $D_k$, $k = 0, 1, \ldots$. In particular, set $\gamma_k = 1/(LM_D)$.

5 Approximating a variable preconditioner

In the above algorithms, the preconditioner $D$ is continuous, positive definite, and bounded, so that it has a continuous inverse [17]. According to the preconditioning CQ algorithm, we require $D$ to commute with the operator $A^TA$. Therefore, we take a positive-valued matrix function $F$ whose dimension is consistent with $\dim(A^TA)$. Moreover, $F$ should have a positive lower bound so that the inverse exists. Strand [16] also assumes that $F$ is a polynomial or a rational function. Thus, the operator $D$ is given by

$$D = F(A^TA). \tag{25}$$

Assuming that $(A^TA)^{-1}$ exists and that the product $DA^TAD$ in (10) is unrestricted, we can deduce that $D$ should sometimes be chosen as close to $(A^TA)^{-1/2}$ as possible.

The best case would be to calculate $(A^TA)^{-1}$ exactly, but this is usually hard in signal and image reconstruction problems. If $F$ is a polynomial function, Strand [16] provided a seventh-order example, and a Neumann series approach can also express $(A^TA)^{-1}$. However, the polynomial method needs high-order matrix multiplications and therefore cannot be implemented easily.

If we choose $F$ to be a rational function, a simple example, closely related to the Tikhonov regularization method, has been used in [17]. According to that example, the approximate inverse preconditioner $D$ can be given by

$$D = \mathrm{Re}\bigl[(A^TA + \alpha I)^{-1/2}\bigr], \tag{26}$$

where Re denotes the real part and $\alpha$ is a positive real parameter; good choices of $\alpha$ may be much smaller than the values provided by the methods used for estimating optimal Tikhonov regularization parameters.
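Assuming $A^TA$ is small enough to factor, (26) can be realized through an eigendecomposition; the following is only an illustrative sketch, not the procedure of [17]:

```python
import numpy as np

def tikhonov_preconditioner(A, alpha=0.1):
    """D = Re[(A^T A + alpha I)^{-1/2}] via an eigendecomposition.

    For alpha > 0 the matrix A^T A + alpha I is symmetric positive
    definite, so its inverse square root is already real."""
    M = A.T @ A + alpha * np.eye(A.shape[1])
    w, V = np.linalg.eigh(M)                 # eigenvalues w > 0
    return (V * (1.0 / np.sqrt(w))) @ V.T    # V diag(w^{-1/2}) V^T
```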

As (26) involves a matrix inverse, we next propose a diagonal form of $D$ that avoids calculating matrix inverses. Furthermore, the choice of $D$ is related to the convergence properties of the algorithm: if $D$ evolves with the iterations, the convergence rate of the algorithm is also accelerated.

From (10), we can deduce that

$$A^TA\tilde{x} = A^TP_{Q_k}(A\tilde{x}), \quad \tilde{x} \in \Omega. \tag{27}$$

As $D$ should be chosen close to $(A^TA)^{-1/2}$, we assume $\lambda$ is an approximate eigenvalue matrix of $A^TA$, and then we set, componentwise,

$$\lambda_{jj}\tilde{x}_j \approx \bigl(A^TP_{Q_k}(A\tilde{x})\bigr)_j, \quad j = 1, 2, \ldots, N. \tag{28}$$

Therefore, we can obtain the approximate variable preconditioner with respect to $(A^TA)^{-1}$ at the $(k+1)$th iteration:

$$\bar{D}^{k+1}_{jj} = \begin{cases} x^k_j / \bigl(A^TP_{Q_k}Ax^k\bigr)_j, & \text{if } x^k_j \neq 0 \text{ and } \bigl(A^TP_{Q_k}Ax^k\bigr)_j \neq 0;\\[2mm] \bar{D}^k_{jj}, & \text{otherwise.} \end{cases} \tag{29}$$

Then we can also get the approximate variable preconditioner with respect to $(A^TA)^{-1/2}$ at the $(k+1)$th iteration:

$$D^{k+1}_{jj} = \mathrm{Re}\bigl[\bigl(\bar{D}^{k+1}_{jj}\bigr)^{1/2}\bigr]. \tag{30}$$

Moreover, the variable stepsize in Algorithm 4.3 can be estimated. Let $L_{D_k}$ be the largest eigenvalue of $D_k$ at the $k$th iteration, $k = 0, 1, \ldots$, and set $M_{D_k} = \min\{L_{D_n} \mid n = 0, 1, \ldots, k\}$. Then let $l_{\bar{D}_k}$ be the minimum eigenvalue of $\bar{D}_k$ at the $k$th iteration, $k = 0, 1, \ldots$, and set $L_k = \max\{1/l_{\bar{D}_n} \mid n = 0, 1, \ldots, k\}$. Therefore, a variable stepsize for Algorithm 4.3 can be approximated by

$$\gamma_k = 1/(L_kM_{D_k}). \tag{31}$$
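Since $D_k$ and $\bar{D}_k$ are diagonal here, their extreme eigenvalues are just extreme diagonal entries, so (31) reduces to two running extrema; a sketch (assuming positive diagonals):

```python
def variable_stepsize(D_diag, D_bar_diag, M_D=float('inf'), L_run=0.0):
    """One update of the running quantities in (31):
    M_D   = min over iterations of the largest eigenvalue of D_k,
    L_run = max over iterations of 1/(smallest eigenvalue of D_bar_k).
    Returns gamma_k = 1/(L_run * M_D) and the updated extrema."""
    M_D = min(M_D, max(D_diag))
    L_run = max(L_run, 1.0 / min(D_bar_diag))
    return 1.0 / (L_run * M_D), M_D, L_run
```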

6 Numerical results

In this section, we present some numerical results for the proposed methods. The following three examples are taken from the test problems in [15]. Examples 6.2 and 6.3 are first transformed into the ESFP. The stopping criterion is $\|x_{k+1} - x_k\| < \varepsilon$, and we took $\varepsilon = 10^{-10}$, $\gamma = 1/\|DA^T\|^2$, $\alpha = 0.1$. The projections are computed by Proposition 2.1.

Algorithm 4.1 was implemented in the Matlab R2011b (Windows version) programming environment. The codes were run on a PC with 1.98 GB of memory and an Intel(R) Pentium(R) dual-core G630 CPU running at 2.69 GHz. The iteration numbers and the computational times for the methods in Section 5 with different starting points are given in Tables 1, 2, and 3; all CPU times reported are in seconds.

Table 1 Numerical results for Example 6.1
Table 2 Numerical results for Example 6.2
Table 3 Numerical results for Example 6.3

Example 6.1 (A convex feasibility problem, CFP)

Let $C = \{x \in \mathbb{R}^3 \mid x_2^2 + x_3^2 \le 40\}$ and $Q = \{y \in \mathbb{R}^3 \mid y_3 - 1 - y_1^2 \ge 0\}$. Find some point $x$ in $C \cap Q$.

Example 6.2 (A split feasibility problem)

Let $A = \begin{pmatrix} 2 & 1 & 3 \\ 4 & 2 & 5 \\ 2 & 0 & 2 \end{pmatrix}$, $C = \{x \in \mathbb{R}^3 \mid x_1 + x_2^2 + 2x_3 \le 0\}$, and $Q = \{y \in \mathbb{R}^3 \mid y_1^2 + y_2 - y_3 \le 0\}$. Find $x \in C$ with $Ax \in Q$.
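To make the relaxed projections of Proposition 2.1 applicable here, one can take $c(x) = x_1 + x_2^2 + 2x_3$ and $q(y) = y_1^2 + y_2 - y_3$ (our reading of the sets above), whose gradients serve as the subgradients $\xi_k$, $\eta_k$:

```python
import numpy as np

# Example 6.2 data: C = {x : c(x) <= 0}, Q = {y : q(y) <= 0}.
A = np.array([[2.0, 1.0, 3.0],
              [4.0, 2.0, 5.0],
              [2.0, 0.0, 2.0]])

def c(x):        # level-set function of C
    return x[0] + x[1] ** 2 + 2.0 * x[2]

def grad_c(x):   # gradient of c, a valid subgradient xi_k
    return np.array([1.0, 2.0 * x[1], 2.0])

def q(y):        # level-set function of Q
    return y[0] ** 2 + y[1] - y[2]

def grad_q(y):   # gradient of q, a valid subgradient eta_k
    return np.array([2.0 * y[0], 1.0, -1.0])
```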

Example 6.3 (A split feasibility problem)

Let $A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 2 & 0 & 1 \end{pmatrix}$, $C = \{x \in \mathbb{R}^3 \mid x_1 + x_2^2 + 2x_3 \le 0\}$, and $Q = \{y \in \mathbb{R}^3 \mid y_1^2 + y_2 - y_3 \le 0\}$. Find $x \in C$ with $Ax \in Q$.

Compared with the results in [15], we can conclude:

(1) For the CFP, since $A = I$, the relaxed preconditioning method plays only an inconspicuous role.

(2) For the SFP, the results are better than those in [15], and when $A$ is not sparse the effect is obvious.

7 Conclusions

In this paper, by adopting preconditioning techniques, a modified CQ algorithm, named the preconditioning CQ algorithm, and its extensions for solving the ESFP have been presented. Approximate methods for estimating the preconditioner $D$ are also discussed; the approximate diagonal preconditioner method needs neither matrix inverses nor the largest eigenvalue of the matrix $A^TA$, so the algorithm can be implemented easily. Moreover, the corresponding convergence property has been established in the feasible case of the ESFP. The numerical results showed that the proposed algorithms and methods are effective for solving some problems.

References

1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221-239. 10.1007/BF02142692

2. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367-426. 10.1137/S0036144593251710

3. Byrne CL: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103-120. 10.1088/0266-5611/20/1/006

4. Byrne CL: Iterative projection onto convex sets using multiple Bregman distances. Inverse Probl. 1999, 15: 1295-1313. 10.1088/0266-5611/15/5/313

5. Byrne CL: Bregman-Legendre multidistances projection algorithm for convex feasibility and optimization. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, Amsterdam; 2001:87-100.

6. Byrne CL: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441-453. 10.1088/0266-5611/18/2/310

7. Yang QZ: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261-1266. 10.1088/0266-5611/20/4/014

8. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655-1665. 10.1088/0266-5611/21/5/009

9. Wang Z, Yang Q: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2011, 217: 5347-5359. 10.1016/j.amc.2010.11.058

10. Yang Q, Zhao J: The projection-type methods for solving the split feasibility problem. Math. Numer. Sin. 2006, 28: 121-132. (in Chinese)

11. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: 1-17.

12. López G, Martín-Márquez V, Wang F, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: 1-18.

13. Zarantonello EH: Projections on convex sets in Hilbert space and spectral theory. In Contributions to Nonlinear Functional Analysis. Edited by: Zarantonello EH. Academic Press, New York; 1971.

14. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010: Article ID 102085.

15. Bnouhachem A, Noor MA: On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 2012, 54: 627-639. 10.1007/s10898-011-9782-2

16. Strand ON: Theory and methods related to the singular-function expansion and Landweber's iteration for integral equations of the first kind. SIAM J. Numer. Anal. 1974, 11: 798-825. 10.1137/0711066

17. Piana M, Bertero M: Projected Landweber method and preconditioning. Inverse Probl. 1997, 13: 441-463. 10.1088/0266-5611/13/2/016

Acknowledgements

The authors would like to thank the associate editor and the referees for their comments and suggestions. This research was supported by the National Natural Science Foundation of China (11071053).

Author information

Correspondence to Peiyuan Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Both authors take equal roles in deriving results and writing of this paper. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Wang, P., Zhou, H. A preconditioning method of the CQ algorithm for solving an extended split feasibility problem. J Inequal Appl 2014, 163 (2014). https://doi.org/10.1186/1029-242X-2014-163
