Preconditioning methods for solving a general split feasibility problem
Journal of Inequalities and Applications volume 2014, Article number: 435 (2014)
Abstract
We introduce and study a new general split feasibility problem (GSFP) in a Hilbert space, which generalizes the split feasibility problem (SFP) by composing it with a nonlinear continuous operator. To increase the efficiency of the CQ algorithm, we apply preconditioning methods and present two general preconditioning CQ algorithms for solving the GSFP. We also propose a new inexact method to approximate the preconditioner. Convergence theorems are established for projections with respect to suitable weighted norms. Some numerical results illustrate the efficiency of the proposed methods.
MSC:46E20, 47J20, 47J25, 47L50.
1 Introduction
Since preconditioning methods can improve the condition number of an ill-posed system matrix, they can also improve the convergence rate of iterative algorithms [1]. In [2, 3], a preconditioning method is applied to modify the projected Landweber algorithm for solving a linear feasibility problem (LFP). The modified algorithm is
where $\tau \in (0,2/\|DA^{\ast}A\|)$, $A:X\to Y$ is a linear continuous operator, $\|\cdot\|$ denotes the 2-norm, X and Y are Hilbert spaces, and $b\in Y$ is the datum of the problem, corrupted by noise or experimental errors.
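The displayed iteration is not reproduced in this copy. As a minimal numerical sketch, assuming the standard preconditioned projected Landweber form $x_{n+1}=P_S(x_n+\tau DA^{\ast}(b-Ax_n))$ with a box constraint set S (so the projection reduces to clipping) and a small illustrative system:

```python
import numpy as np

def projected_landweber(A, b, D, tau, lo, hi, x0, iters=200):
    """Sketch of the preconditioned projected Landweber iteration
    (assumed form): x_{n+1} = P_S(x_n + tau * D A^T (b - A x_n)),
    with S = [lo, hi]^n a box so that P_S is a componentwise clip."""
    x = x0.copy()
    for _ in range(iters):
        x = np.clip(x + tau * D @ (A.T @ (b - A @ x)), lo, hi)
    return x
```

Here `D` plays the role of the preconditioner and `tau` must satisfy $\tau\in(0,2/\|DA^{\ast}A\|)$; for $D=I$ the sketch reduces to the classical projected Landweber method.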
In the nonlinear setting, Auslender [4] and Dafermos [5] proposed an algorithm to solve variational inequalities (VI),
where ${P}_{S}$ is the projection operator onto S with respect to the norm ${\|\cdot\|}_{G}$. Bertsekas and Gafni [6] and Marcotte and Wu [7] improved it with variable symmetric positive definite matrices ${G}_{n}$, and Fukushima [8] modified it with a relaxed projection method using half-spaces; later, in [9], Yang established the convergence of Auslender's algorithm under the weak cocoercivity of F.
Further, the general variational inequality problem (GVIP) has been investigated by many authors (see [10–13]). It is to find ${u}^{\ast}\in {\mathbb{R}}^{n}$ such that $g({u}^{\ast})\in K$ and
where K is a nonempty closed convex set in ${\mathbb{R}}^{n}$ and $F,g:{\mathbb{R}}^{n}\to {\mathbb{R}}^{n}$ are nonlinear operators. In [12], Santos and Scheimberg extended and applied (1.1) to solve the GVI.
A general split feasibility problem (GSFP), however, is equivalent to a GVI, and preconditioning methods for solving the GSFP have not yet been studied. By introducing a convex minimization problem, the split feasibility problem (SFP) can be shown to be equivalent to a variational inequality problem (VIP) involving a Lipschitz continuous and inverse strongly monotone (ism) operator; see [14–22]. In the same way, in this paper we show that a new GSFP is equivalent to a GVI involving a Lipschitz continuous and cocoercive operator [9, 17–19].
On the other hand, Mohammad and Abdul [23] considered a general split feasibility problem in infinite-dimensional real Hilbert spaces: find ${x}^{\ast}$ such that
where $A:{H}_{1}\to {H}_{2}$ is a bounded linear operator, ${\{{C}_{i}\}}_{i=1}^{\mathrm{\infty}}$ and ${\{{Q}_{j}\}}_{j=1}^{\mathrm{\infty}}$ are the families of nonempty closed convex subsets of ${H}_{1}$ and ${H}_{2}$, respectively.
Let C and Q be nonempty closed convex subsets in real Hilbert spaces ${H}_{1}$ and ${H}_{2}$, respectively. We consider a general split feasibility problem which is different from the one in [23]. Our GSFP is to find
where $A:{H}_{1}\to {H}_{2}$ is a bounded linear operator and $g:{H}_{1}\to C$ is a continuous operator. The SFP in [24] and the GSFP in [23] are particular cases of GSFP (1.2). It has applications in fields such as signal decryption, digital signal demodulation, and noise processing. To solve GSFP (1.2), two preconditioning algorithms are developed in this paper following the iterative scheme
where the projections onto the two general constraint sets C and Q are taken with respect to norms induced by symmetric positive definite matrices. Define the solution set of (1.2) as $\Gamma =\{{x}^{\ast}\in {H}_{1}\mid Ag({x}^{\ast})\in Q\}$. Assuming Γ is nonempty, and by virtue of the related G-cocoercive operator, we establish the convergence of the proposed algorithms.
The paper is organized as follows. Section 2 presents two useful propositions. In Section 3, we define the algorithms with fixed preconditioner, variable preconditioner, relaxed projection and preconditioner approximation and analyze the convergence. Numerical results are reported in Section 4. Finally, Section 5 gives some concluding remarks.
2 Preliminaries
In what follows, we state some concepts and propositions.
The SFP is to find a point ${x}^{\ast}\in C$ such that $A{x}^{\ast}\in Q$, where $A:{H}_{1}\to {H}_{2}$ is a bounded linear operator [24].
Let G be a symmetric positive definite matrix, and set $D={G}^{-1}$. Then the norm ${\|\cdot\|}_{G}$ is defined by ${\|x\|}_{G}^{2}=\langle x,Gx\rangle $ for all $x\in H$. We denote by ${P}_{C}$ the projection operator onto C with respect to the norm ${\|\cdot\|}_{G}$ [6], i.e.,
Let ${\lambda}_{\mathrm{min}}$ and ${\lambda}_{\mathrm{max}}$ be the smallest and largest eigenvalues of G, respectively. Then, in terms of the 2-norm $\|\cdot\|$ [18, 19], we have
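The displayed bounds are not reproduced in this copy; the standard eigenvalue bounds, which we assume were intended, are:

```latex
\lambda_{\min}\,\lVert x\rVert^{2}\;\le\;\lVert x\rVert_{G}^{2}\;\le\;\lambda_{\max}\,\lVert x\rVert^{2},
\qquad\text{so}\qquad
\sqrt{\lambda_{\min}}\,\lVert x\rVert\;\le\;\lVert x\rVert_{G}\;\le\;\sqrt{\lambda_{\max}}\,\lVert x\rVert .
```

In particular, convergence in the G-norm and convergence in the 2-norm are equivalent, which is used implicitly in the convergence proofs below.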
Let C be a nonempty closed convex subset in H. For all $x,y\in H$ and all $z\in C$, the G-projection operator onto C has the following properties:

(i) $\langle G(x-{P}_{C}x),z-{P}_{C}(x)\rangle \le 0$;

(ii) ${\|x\pm y\|}_{G}^{2}={\|x\|}_{G}^{2}\pm 2\langle x,Gy\rangle +{\|y\|}_{G}^{2}$;

(iii) ${\|{P}_{C}(x)-{P}_{C}(y)\|}_{G}^{2}\le {\|x-y\|}_{G}^{2}-{\|({P}_{C}(x)-x)-({P}_{C}(y)-y)\|}_{G}^{2}$.
Let $\tilde{D}$ be a symmetric positive definite matrix with ${A}^{T}\tilde{D}=D{A}^{T}$. Then the norm ${\|\cdot\|}_{\tilde{D}}$ is defined by ${\|y\|}_{\tilde{D}}^{2}=\langle y,\tilde{D}y\rangle $ for all $y\in H$. We denote by ${P}_{Q}$ the projection operator onto Q with respect to the norm ${\|\cdot\|}_{\tilde{D}}$. As in the SFP setting, even when GSFP (1.2) has no solution (refer to [2, 14, 15]), we can define
and
${f}_{\tilde{D}}^{g}(x)$ is also convex and continuously differentiable in H. Its gradient operator is
When $D=I$, we define
$\mathrm{\nabla}{f}_{g}(x)$ is also Lipschitz continuous.
Proposition 2.2 If we consider the constrained minimization problem
its stationary point ${x}^{\ast}\in {H}_{1}$ satisfies
which is a general variational inequality involving a Lipschitz continuous and G-cocoercive operator.
Proof For all $x,y\in H$, from (2.1) and Lemma 8.1 in [15], we have
where L is the largest eigenvalue of ${A}^{T}A$; therefore, the operator $\mathrm{\nabla}{f}_{D}^{g}$ is Lipschitz continuous,
Thus, the operator $\mathrm{\nabla}{f}_{D}^{g}$ is cocoercive. □
3 Main results
In this section, we propose several modified CQ algorithms with preconditioning techniques and prove the convergence.
3.1 General preconditioning CQ algorithm
In this part, we present our first algorithm, with fixed stepsize and preconditioner, to solve GSFP (1.2). The algorithm is as follows.
Algorithm 3.1 Choose any ${x}_{0}\in {H}_{1}$ such that $g({x}_{0})\in C$. Given ${x}_{n}\in {H}_{1}$ with $g({x}_{n})\in C$, calculate ${x}_{n+1}$ such that
where $\gamma \in (0,\frac{2}{L\cdot {L}_{D}})$, L and ${L}_{D}$ are the largest eigenvalues of ${A}^{T}A$ and D, respectively.
Now we establish the weak convergence of Algorithm 3.1.
Theorem 3.1 Suppose that the operators $g:{H}_{1}\to C$ and ${g}^{-1}:C\to {H}_{1}$ are continuous. If $\mathrm{\Gamma}\ne \mathrm{\varnothing}$, then the sequence $\{{x}_{n}\}$ generated by Algorithm 3.1 converges to a solution of GSFP (1.2).
Proof Firstly, for any ${x}^{\ast}\in \mathrm{\Gamma}$, we have
From (3.1), (iii), (ii) and the definition of ism, we have
since $\frac{2\gamma}{L}-{\gamma}^{2}{L}_{D}>0$, which implies that the sequence ${\{{\|g({x}_{n})-g({x}^{\ast})\|}_{G}\}}_{n\in \mathbb{N}}$ is monotonically decreasing; hence it converges and, in particular, the sequence ${\{g({x}_{n})\}}_{n\in \mathbb{N}}$ is bounded. Consequently, we get from (3.2)
Moreover, for each $g({x}_{n})\in C$, from (iii) and (2.1), we have
Then by virtue of (3.3) we have
Hence, there exists a subsequence ${\{g({x}_{j})\}}_{j\in \overline{N}}$ of ${\{g({x}_{n})\}}_{n\in \mathbb{N}}$ such that
Thus, ${\{g({x}_{j})\}}_{j\in \overline{N}}$ is also bounded.
Let $\overline{x}$ be an accumulation point of $\{{x}_{n}\}$, so that a subsequence ${\{{x}_{j}\}}_{j\in \overline{N}}\to \overline{x}$ as $j\to \mathrm{\infty}$. By the continuity of g, $g(\overline{x})\in C$ is an accumulation point of the sequence ${\{g({x}_{n})\}}_{n\in \mathbb{N}}$, and for the subsequence we have ${\{g({x}_{j})\}}_{j\in \overline{N}}\to g(\overline{x})$ as $j\to \mathrm{\infty}$. Then from (3.3) we obtain
that is, $Ag(\overline{x})\in Q$.
We use $\overline{x}$ in place of ${x}^{\ast}$ in (3.2) and obtain that $\{{\|g({x}_{n})-g(\overline{x})\|}_{G}\}$ is convergent. Since its subsequence $\{{\|g({x}_{{n}_{j}})-g(\overline{x})\|}_{G}\}\to 0$, the whole sequence ${\{g({x}_{n})\}}_{n\in \mathbb{N}}$ converges to $g(\overline{x})$. Since ${g}^{-1}$ is continuous, we finally have
Therefore, $\overline{x}$ is a solution of GSFP (1.2). □
From Algorithm 3.1 and Theorem 3.1 we can deduce the following results easily.
Corollary 3.1 If $g=I$, then GSFP (1.2) reduces to the SFP, and Algorithm 3.1 reduces to a preconditioning CQ (PCQ) algorithm: for any ${x}_{0}\in {H}_{1}$,
where $\gamma \in (0,\frac{2}{L\cdot {L}_{D}})$, and L and ${L}_{D}$ are the largest eigenvalues of ${A}^{T}A$ and D, respectively. ${P}_{C}$ and ${P}_{Q}$ are still the projection operators onto C and Q with respect to the norms ${\|\cdot\|}_{D}$ and ${\|\cdot\|}_{\tilde{D}}$, respectively.
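To make the PCQ iteration of Corollary 3.1 concrete, here is a minimal numerical sketch with a diagonal preconditioner D and box constraints C and Q; for boxes and diagonal weights, the weighted-norm projections reduce to componentwise clipping, so the illustration stays simple. The matrices and bounds below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def pcq(A, D, proj_C, proj_Q, gamma, x0, iters=500):
    """Sketch of the preconditioned CQ iteration (Corollary 3.1, g = I):
    x_{n+1} = P_C(x_n - gamma * D A^T (A x_n - P_Q(A x_n))).
    proj_C / proj_Q stand for projections in the appropriate weighted
    norms; for boxes and a diagonal D they are plain clipping."""
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * D @ (A.T @ (Ax - proj_Q(Ax))))
    return x
```

The stepsize must satisfy $\gamma\in(0,\frac{2}{L\cdot L_D})$; in the usage below $L=4$ and $L_D=1$, so $\gamma=0.3$ is admissible.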
Corollary 3.2 If $g=I$, $D=I$ and $\tilde{D}=I$, then the GSFP reduces to the SFP, and Algorithm 3.1 reduces to the CQ algorithm proposed in [25].
Corollary 3.3 If $g=I$, $\tilde{D}=I$, and ${P}_{C}$ and ${P}_{Q}$ are the projections onto C and Q with respect to the norm $\|\cdot\|$, set $F({x}_{n})=D{A}^{T}(I-{P}_{Q})AD{x}_{n}$; then (3.5) transforms into the algorithm in [26]
where $\gamma \in (0,\frac{2}{L})$, $L={\parallel D{A}^{T}\parallel}^{2}$. Then the GSFP reduces to the extended split feasibility problem (ESFP) in [26].
3.2 An algorithm with variable projection metric
The algorithms above can speed up the convergence of the CQ algorithm, but the stepsize and preconditioner are fixed. In this subsection, we extend the results in [6] and construct an iterative scheme whose stepsize and preconditioner ${D}_{n}$ vary from one iteration to the next. The choice of ${D}_{n}$, which may change arbitrarily or follow some rule, plays a key role in the convergence behavior and the quality of the results.
Let ${D}_{n}$ and ${\tilde{D}}_{n}$ be symmetric positive definite matrices for $n=0,1,2,\dots$ . Denote by ${P}_{C}$ and ${P}_{Q}$ the projections onto C and Q with respect to the norms ${\|\cdot\|}_{{D}_{n}}$ and ${\|\cdot\|}_{{\tilde{D}}_{n}}$, respectively. Let χ be a set of symmetric positive definite matrices. We have the following algorithm.
Algorithm 3.2 Choose any ${x}_{0}\in {H}_{1}$ such that $g({x}_{0})\in C$. Given ${x}_{n}\in {H}_{1}$ with $g({x}_{n})\in C$ and any ${D}_{n}\in \chi $, compute ${x}_{n+1}$ such that
where ${\gamma}_{n}\in (0,\frac{2}{L\cdot {M}_{D}})$, L is the largest eigenvalue of ${A}^{T}A$, and ${M}_{D}$ is the minimum over n of the largest eigenvalues ${L}_{{D}_{n}}$ of the matrices ${D}_{n}$.
Remark 3.1 Define ${d}_{n}={\|g({x}_{n+1})-g({x}_{n})\|}_{{G}_{n}}$. For the next iteration, ${D}_{n+1}$ is either chosen arbitrarily from χ or kept equal to ${D}_{n}$, depending on whether ${d}_{n}$ has decreased or not. More precisely, we define a scalar ${\overline{d}}_{n}$ with initial value ${\overline{d}}_{0}=\mathrm{\infty}$. Having chosen a scalar $\alpha \in (0,1)$, at the nth iteration ${\overline{d}}_{n+1}$ is calculated by
then we select
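The selection formulas of Remark 3.1 are not displayed in this copy. The following sketch encodes the rule as described in the surrounding text (the function name, the tuple return, and the exact comparison are assumptions): when the progress measure $d_n$ has dropped below $\alpha\,\overline{d}_n$, the recorded level is updated and a fresh preconditioner may be drawn from χ; otherwise both are kept.

```python
def update_metric(d_n, dbar_n, D_n, alpha, pick_new_D):
    """Hedged sketch of the Remark 3.1 selection rule.
    Returns (dbar_{n+1}, D_{n+1})."""
    if d_n <= alpha * dbar_n:
        return d_n, pick_new_D()   # sufficient decrease: reset level, new D from chi
    return dbar_n, D_n             # otherwise keep both unchanged
```

This is the metric-switching device of Bertsekas and Gafni [6]: the preconditioner is only allowed to change when the iteration has made verifiable progress, which is what the proof of Theorem 3.2 exploits.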
Theorem 3.2 If $\mathrm{\Gamma}\ne \mathrm{\varnothing}$, then the sequence $\{{x}_{n}\}$ generated by Algorithm 3.2 converges to a solution of GSFP (1.2).
Proof To allow ${D}_{n}$ to vary at each iteration while keeping Algorithm 3.2 convergent, ${d}_{n}$ must exhibit descending behavior for $n=0,1,2,\dots$ . We first show that
Indeed, if (3.7) is not true, then ${liminf}_{n\to \mathrm{\infty}}{d}_{n}>0$, so ${D}_{n}$ can have changed only a finite number of times; let this number be $\kappa \in \mathbb{N}$. Therefore, let ${x}^{\ast}\in \mathrm{\Gamma}$ be a solution of the GSFP; referring to (3.2), for $n>\kappa $ we have
since ${M}_{D}=min\{{L}_{{D}_{n}}\mid n=0,1,\dots ,\kappa \}$ and $\frac{2{\gamma}_{\kappa}}{L}-{\gamma}_{\kappa}^{2}{L}_{{D}_{\kappa}}>0$. Then, following the proof of Theorem 3.1, we also get
and
Equation (3.9) contradicts the hypothesis above, so (3.7) holds.
By using (3.6) and (iii), for the n th iteration, we have
where ${\gamma}_{n}{L}_{{D}_{n}}>0$. Then from (2.1) and (3.8) we know that
so $Ag({x}_{n})\in Q$. By virtue of (3.7), we also have
This means that at least a subsequence of ${\{g({x}_{n})\}}_{n\in \mathbb{N}}$ converges to $g(\overline{x})$ with $\overline{x}\in \mathrm{\Gamma}$. By an argument similar to the accumulation-point argument in the proof of Theorem 3.1, $\{{x}_{n}\}$ converges to a solution of the GSFP. □
3.3 Some methods for execution
In Algorithms 3.1 and 3.2, it is still difficult to implement the projections ${P}_{C}$ and ${P}_{Q}$ with respect to the defined norms, especially when C and Q are general closed convex sets. Following the relaxed method in [8, 27, 28], we consider the above algorithms with closed convex subsets C and Q of the following particular form:
where $c:{H}_{1}\to \mathbb{R}$ and $q:{H}_{2}\to \mathbb{R}$ are convex functions. ${C}_{n}$ and ${Q}_{n}$ are given as
where ${\xi}_{n}\in \partial c(g({x}_{n}))$, ${\eta}_{n}\in \partial q(Ag({x}_{n}))$.
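Since ${C}_{n}$ and ${Q}_{n}$ are half-spaces, the weighted-norm projections have a closed form: minimizing ${\|z-x\|}_{G}^{2}$ over $\{z:\langle \xi ,z\rangle \le \beta \}$ with $G={D}^{-1}$ gives $z=x-max(0,\langle \xi ,x\rangle -\beta )\,D\xi /\langle \xi ,D\xi \rangle $ (a standard KKT computation; the concrete vectors in the usage below are illustrative):

```python
import numpy as np

def halfspace_proj_G(x, xi, beta, D):
    """G-norm projection of x onto the half-space {z : <xi, z> <= beta},
    where G = D^{-1}. From G(z - x) + lam*xi = 0 and the active
    constraint: z = x - lam * D xi with lam = (<xi,x> - beta)/<xi, D xi>."""
    viol = xi @ x - beta
    if viol <= 0:
        return x.copy()            # already feasible: the projection is x itself
    return x - (viol / (xi @ (D @ xi))) * (D @ xi)
```

With $D=I$ this reduces to the usual Euclidean half-space projection used by the relaxed CQ method; a non-identity D tilts the projection direction along $D\xi$.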
Here, we replace ${P}_{C}$ and ${P}_{Q}$ by ${P}_{{C}_{n}}$ and ${P}_{{Q}_{n}}$. In this paper, however, taking Algorithm 3.2 as an example, the projections are with respect to the norms corresponding to ${G}_{n}$ and ${\tilde{D}}_{n}$, so we use the following methods to calculate them. For all $z\in {H}_{1}$ and all $y\in {H}_{2}$,
and
Set $z=g({x}_{n})-{\gamma}_{n}{D}_{n}\mathrm{\nabla}{f}_{g}({x}_{n})$ and $y=Ag({x}_{n})$, and let $\overline{x}\in {H}_{1}$ be an accumulation point of ${\{{x}_{n}\}}_{n\in \mathbb{N}}$. From the proof above, it is easy to deduce that
Therefore, $g(\overline{x})\in C\subseteq {C}_{n}$ and $Ag(\overline{x})\in Q\subseteq {Q}_{n}$; with the projections ${P}_{{C}_{n}}$ and ${P}_{{Q}_{n}}$, $\overline{x}$ is a solution of the GSFP.
Next, we present a new approximation method to estimate ${\gamma}_{n}$ and ${D}_{n}$ in Algorithm 3.2.
If $\mathrm{\Gamma}\ne \mathrm{\varnothing}$, then for any ${x}^{\ast}\in \mathrm{\Gamma}$ and $n\ge 0$ we have
Under ideal conditions, if ${D}_{n}{A}^{T}A\approx I$, the problem is solved; unfortunately, ${({A}^{T}A)}^{-1}$ cannot be computed directly when A is a large matrix in practice. Note that
where λ is an eigenvalue of ${A}^{T}A$. Let ${D}_{0}=I$; for the nth iteration, we have the following $j\times j$ approximation of ${D}_{n+1}$,
where $j=1,2,\dots$ . Thus, at the nth iteration, let ${l}_{{D}_{n}}$ be the smallest eigenvalue of ${D}_{n}$, and take ${M}_{{D}_{n}}\approx min\{{L}_{{D}_{k}}\mid k=0,1,\dots ,n\}$ and ${L}_{n}\approx max\{{({l}_{{D}_{k}})}^{-1}\mid k=0,1,\dots ,n\}$; the variable stepsize is then approximated by
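The displayed approximation formulas are not reproduced in this copy. One standard inexact construction, offered here only as an assumed illustration, truncates the Neumann series $({A}^{T}A)^{-1}=\frac{1}{c}\sum_{k\ge 0}(I-{A}^{T}A/c)^{k}$, valid when ${A}^{T}A$ is positive definite and $c\ge {\lambda}_{\mathrm{max}}({A}^{T}A)$:

```python
import numpy as np

def neumann_precond(A, terms, c=None):
    """Truncated Neumann-series approximation to (A^T A)^{-1}:
    (A^T A)^{-1} = (1/c) * sum_{k=0}^{terms} (I - A^T A / c)^k,
    assumed stand-in for the paper's inexact preconditioner D_n.
    Requires A^T A positive definite and c >= lambda_max(A^T A)."""
    B = A.T @ A
    if c is None:
        c = np.linalg.norm(B, 2)          # spectral norm = largest eigenvalue here
    M = np.eye(B.shape[0]) - B / c        # spectral radius of M is < 1
    D = np.eye(B.shape[0]) / c
    P = np.eye(B.shape[0])
    for _ in range(terms):
        P = P @ M
        D = D + P / c
    return D
```

Each extra term shrinks the error by the factor $1-{\lambda}_{\mathrm{min}}/c$, so a few terms already yield a useful approximate preconditioner without ever forming an explicit inverse.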
4 Numerical results
We consider the following problem from [29] in a finite dimensional Hilbert space:
Let $C=\{x\in {H}_{1}\mid c(x)\le 0\}$, where $c(x)={x}_{1}+{x}_{2}^{2}+\cdots +{x}_{N}^{2}$, and $Q=\{y\in {H}_{2}\mid q(y)\le 0\}$, where $q(y)={y}_{1}+{y}_{2}^{2}+\cdots +{y}_{M}^{2}-1$. ${A}_{M\times N}$ is a random matrix with every element in $(0,1)$, chosen so that $\mathrm{\Gamma}\ne \mathrm{\varnothing}$. Let ${x}_{0}$ be a random vector in ${H}_{1}$ with every element in $(0,1)$.
We set $\|{x}_{n+1}-{x}_{n}\|\le \epsilon $ as the stopping rule, and let $N=10$, $M=20$, $g=I$, ${\tilde{D}}_{n}=I$ for $n\ge 0$. Using the methods in Section 3.3, we compare Algorithm 3.2 with the relaxed CQ algorithm (RCQ) in [30] for different ε and initial values. The results are shown in Table 1; the proposed methods perform better.
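As a rough illustration of this test setup, the sketch below runs the plain relaxed CQ baseline (not Algorithm 3.2 itself) on the problem above with small assumed dimensions; the stepsize, iteration count, and random data are illustrative assumptions. The sets $C_n$ and $Q_n$ are the subgradient half-spaces of Section 3.3, projected in the Euclidean norm:

```python
import numpy as np

def relaxed_cq(A, x0, gamma, iters=2000):
    """Relaxed CQ sketch for C = {x : x_1 + sum_{i>=2} x_i^2 <= 0} and
    Q = {y : y_1 + sum_{j>=2} y_j^2 - 1 <= 0}, using half-space
    relaxations C_n, Q_n built from subgradients at the current iterate."""
    def hs_proj(x, xi, beta):              # Euclidean half-space projection
        v = xi @ x - beta
        return x if v <= 0 else x - (v / (xi @ xi)) * xi
    c = lambda x: x[0] + np.sum(x[1:] ** 2)
    q = lambda y: y[0] + np.sum(y[1:] ** 2) - 1.0
    x = x0.copy()
    for _ in range(iters):
        y = A @ x
        eta = np.r_[1.0, 2.0 * y[1:]]              # gradient of q at A x_n
        Py = hs_proj(y, eta, eta @ y - q(y))       # projection onto Q_n
        z = x - gamma * (A.T @ (y - Py))
        xi = np.r_[1.0, 2.0 * x[1:]]               # gradient of c at x_n
        x = hs_proj(z, xi, xi @ x - c(x))          # projection onto C_n
    return x
```

Note that $x=0$ is always feasible here ($c(0)=0$, $q(0)=-1$), so $\mathrm{\Gamma}\ne \mathrm{\varnothing}$ for any A, and the iteration drives both constraint violations toward zero.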
5 Concluding remarks
In this paper, we have discussed a new general split feasibility problem, which is related to general variational inequalities involving a cocoercive operator. By using the G-norm method, the variable modulus method and the relaxed method, two modified projection algorithms for solving the GSFP, together with some approximate methods for their execution, have been presented. The numerical results show that preconditioning can improve the convergence speed of the CQ algorithm, but the way variable stepsizes are obtained in this paper is inexact. Improving it, or combining it with the methods in [14] and [28], is an interesting subject for future work.
References
 1.
Chen K: Matrix Preconditioning Techniques and Applications. Cambridge University Press, New York; 2005.
 2.
Piana M, Bertero M: Projected Landweber method and preconditioning. Inverse Probl. 1997, 13: 441–463. 10.1088/0266-5611/13/2/016
 3.
Strand ON: Theory and methods related to the singular-function expansion and Landweber’s iteration for integral equations of the first kind. SIAM J. Numer. Anal. 1974, 11: 798–824. 10.1137/0711066
 4.
Auslender A: Optimisation: Méthodes Numérique. Masson, Paris; 1976.
 5.
Dafermos S: Traffic equilibrium and variational inequalities. Transp. Sci. 1980, 14: 42–54. 10.1287/trsc.14.1.42
 6.
Bertsekas DP, Gafni EM: Projection methods for variational inequalities with application to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965
 7.
Marcotte P, Wu JH: On the convergence of projection methods: application to the decomposition of affine variational inequalities. J. Optim. Theory Appl. 1995, 85: 347–362. 10.1007/BF02192231
 8.
Fukushima M: A relaxed projection method for variational inequalities. Math. Program. 1986, 35: 58–70. 10.1007/BF01589441
 9.
Yang Q: The revisit of a projection algorithm with variable steps for variational inequalities. J. Ind. Manag. Optim. 2005, 1: 211–217.
 10.
He BS: Inexact implicit methods for monotone general variational inequalities. Math. Program. 1999, 86: 199–217. 10.1007/s101070050086
 11.
Noor MA, Wang YJ, Xiu N: Projection iterative schemes for general variational inequalities. J. Inequal. Pure Appl. Math. 2002., 3: Article ID 34
 12.
Santos PSM, Scheimberg S: A projection algorithm for general variational inequalities with perturbed constraint sets. Appl. Math. Comput. 2006, 181: 649–661. 10.1016/j.amc.2006.01.050
 13.
Muhammad AN, Abdellah B, Saleem U: Self-adaptive methods for general variational inequalities. Nonlinear Anal. 2009, 71: 3728–3738. 10.1016/j.na.2009.02.033
 14.
Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. 10.1088/0266-5611/21/5/009
 15.
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
 16.
Dolidze Z: Solution of variational inequalities associated with a class of monotone maps. Èkon. Mat. Metody 1982, 18: 925–927.
 17.
He B, He X, Liu H, Wu T: Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 2009, 196: 43–48. 10.1016/j.ejor.2008.03.004
 18.
Facchinei F, Pang JS: Finite-Dimensional Variational Inequality and Complementarity Problems, Vol. I. Springer, New York; 2003.
 19.
Facchinei F, Pang JS: Finite-Dimensional Variational Inequality and Complementarity Problems, Vol. II. Springer, New York; 2003.
 20.
Yao YH, Postolache M, Liou YC: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013., 2013: Article ID 201
 21.
Yao YH, Yang PX, Kang SM: Composite projection algorithms for the split feasibility problem. Math. Comput. Model. 2013, 57: 693–700. 10.1016/j.mcm.2012.07.026
 22.
Yao YH, Liou YC, Shahzad N: A strongly convergent method for the split feasibility problem. Abstr. Appl. Anal. 2012., 2012: Article ID 125046
 23.
Mohammad E, Abdul L: General split feasibility problems in Hilbert spaces. Abstr. Appl. Anal. 2013., 2013: Article ID 805104
 24.
Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
 25.
Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310
 26.
Wang PY, Zhou HY: A preconditioning method of the CQ algorithm for solving the extended split feasibility problem. J. Inequal. Appl. 2014., 2014: Article ID 163 10.1186/1029-242X-2014-163
 27.
Yang Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. 10.1016/j.jmaa.2004.07.048
 28.
López G, Martín-Márquez V, Wang F, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012., 28: Article ID 085004
 29.
Wang Z, Yang Q, Yang Y: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2011, 217: 5347–5359. 10.1016/j.amc.2010.11.058
 30.
Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014
Acknowledgements
The authors would like to thank the associate editor and the referees for their comments and suggestions. This research was supported by the National Natural Science Foundation of China (11071053).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The authors take equal roles in deriving results and writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Wang, P., Zhou, H. & Zhou, Y. Preconditioning methods for solving a general split feasibility problem. J Inequal Appl 2014, 435 (2014). https://doi.org/10.1186/1029-242X-2014-435
Keywords
 general split feasibility problem
 general variational inequality
 preconditioning method
 projection method