Open Access

Hybrid CQ projection algorithm with line-search process for the split feasibility problem

Journal of Inequalities and Applications 2016, 2016:106

Received: 8 August 2015

Accepted: 15 March 2016

Published: 31 March 2016


In this paper, we propose a hybrid CQ projection algorithm with two projection steps and one Armijo-type line-search step for the split feasibility problem. The line-search technique constructs a hyperplane that strictly separates the current point from the solution set. The next iterate is obtained by projecting the initial point onto the intersection of three sets. Hence, the algorithm converges faster than some existing algorithms. Under mild conditions, we prove its convergence. Preliminary numerical experiments show that our algorithm is efficient.


Keywords: split feasibility problem; Armijo-type line-search technique; projection algorithm; convergence


MSC: 47H05; 47J05; 47J25

1 Introduction

The split feasibility problem (SFP) is to find a point x satisfying
$$ x\in C, \qquad Ax\in Q, $$
where C and Q are nonempty closed convex sets in \(\Re^{N}\) and \(\Re^{M}\), respectively, and A is an M by N real matrix. The SFP was originally introduced in [1] and has broad applications in many fields, such as image reconstruction, signal processing, and radiation therapy [2-4]. Various algorithms have been invented to solve it (see [5-16] and the references therein). The well-known CQ algorithm presented in [1] is defined as follows. Denote by \(P_{C}\) the orthogonal projection onto C, that is, \(P_{C}(x)=\arg\min_{y\in C}\Vert x-y\Vert \) for \(x\in\Re^{N}\); then take an arbitrary initial point \(x^{0}\) and define the iterative step by
$$ x^{k+1}=P_{C}\bigl(I-\gamma A^{T}(I-P_{Q})A \bigr) \bigl(x^{k}\bigr), $$
where \(0<\gamma<2/\rho(A^{T}A)\), and \(\rho(A^{T}A)\) is the spectral radius of \(A^{T}A\).
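For concreteness, the CQ iteration can be sketched in a few lines of Python. This is only an illustrative sketch, not the paper's code: the matrix A, the box-shaped sets C and Q, and the starting points are assumptions chosen so that \(P_{C}\) and \(P_{Q}\) reduce to componentwise clipping.

```python
import numpy as np

# Illustrative data (assumptions): C and Q are boxes, so the
# projections P_C and P_Q reduce to componentwise clipping.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
P_C = lambda x: np.clip(x, -1.0, 1.0)      # C = [-1, 1]^2
P_Q = lambda y: np.clip(y, 0.0, 0.5)       # Q = [0, 0.5]^2

L = max(np.linalg.eigvalsh(A.T @ A))       # spectral radius of A^T A
gamma = 1.0 / L                            # any 0 < gamma < 2/L is admissible

def cq(x, iters=5000):
    """Byrne's CQ iteration: x <- P_C((I - gamma * A^T (I - P_Q) A) x)."""
    for _ in range(iters):
        x = P_C(x - gamma * A.T @ (A @ x - P_Q(A @ x)))
    return x

x = cq(np.array([0.9, -0.9]))
# x lies in C by construction; the residual ||Ax - P_Q(Ax)|| measures
# how far Ax is from Q.
residual = np.linalg.norm(A @ x - P_Q(A @ x))
```

Since C and Q are boxes here, both projections are exact; for general closed convex sets, the same loop applies with the corresponding projection operators.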

The algorithms mentioned above use a fixed stepsize restricted by the Lipschitz constant L, which depends on the largest eigenvalue (spectral radius) of the matrix \(A^{T}A\). Computing the largest eigenvalue may be very hard, and a conservative estimate of the constant L usually results in slow convergence. To overcome the difficulty of estimating the Lipschitz constant, He et al. [17] developed a self-adaptive method for solving variational inequality problems. The numerical results reported in [17] show that the self-adaptive strategy is valid and robust. Subsequently, many self-adaptive projection methods were presented to solve the split feasibility problem [18, 19]. On the other hand, the hybrid projection method was developed by Nakajo and Takahashi [20], Kamimura and Takahashi [21], and Martinez-Yanes and Xu [22] for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of an equilibrium problem. Many modified hybrid projection methods were presented to solve related problems [23, 24].

In this paper, motivated by the self-adaptive and hybrid projection methods for solving variational inequality problems and based on the CQ algorithm for the SFP, we propose a hybrid CQ projection algorithm for the split feasibility problem that uses different variable stepsizes in its two projection steps. The algorithm performs a computationally inexpensive Armijo-type line search along the search direction in order to generate a separating hyperplane, which differs from the general self-adaptive Armijo-type procedures [18, 19]. In the second projection step, we project onto the intersection of the set C with two halfspaces, which makes an optimal stepsize available at each iteration and hence guarantees that the next iterate is the ‘closest’ to the solution set. Therefore, the iterative sequence generated by the algorithm converges more quickly. The algorithm is shown to converge to a point in the solution set under mild assumptions.

The paper is organized as follows. In Section 2, we recall some preliminaries. In Section 3, we propose the hybrid CQ projection algorithm for the split feasibility problem and prove its convergence. In Section 4, we give a numerical example to test its efficiency. In Section 5, we give some concluding remarks.

2 Preliminaries

We denote by I the identity operator and by \(\operatorname {Fix}(T)\) the set of fixed points of an operator T, that is, \(\operatorname {Fix}(T):=\{x | x=Tx\}\).

Recall that a mapping \(T: \Re^{n}\rightarrow\Re^{n}\) is said to be monotone if
$$\bigl\langle T(x)-T(y), x-y\bigr\rangle \geq0,\quad \forall x,y \in \Re^{n}. $$
A monotone mapping T is said to be strictly monotone if \(\langle T(x)-T(y), x-y\rangle= 0\) implies \(x=y\).
A mapping \(T: \Re^{n}\rightarrow\Re^{n}\) is called nonexpansive if
$$\bigl\Vert T(x)-T(y)\bigr\Vert \leq \Vert x-y\Vert ,\quad \forall x,y \in \Re^{n}. $$

Lemma 2.1

Let Ω be a nonempty closed convex subset of a Hilbert space H. Then, for any \(x,y\in H\) and \(z\in\Omega\), the following well-known statements hold:
  1. (1)

    \(\langle P_{\Omega}(x)-x, z-P_{\Omega}(x)\rangle\geq0\).

  2. (2)

    \(\langle P_{\Omega}(x)-P_{\Omega}(y), x-y\rangle\geq0\).

  3. (3)
    \(\Vert P_{\Omega}(x)-P_{\Omega}(y)\Vert \leq \Vert x-y\Vert \) for all \(x,y\in H\), or more precisely,
    $$\bigl\Vert P_{\Omega}(x)-P_{\Omega}(y)\bigr\Vert ^{2} \leq \Vert x-y\Vert ^{2}-\bigl\Vert P_{\Omega}(x)-x+y-P_{\Omega}(y) \bigr\Vert ^{2}. $$
  4. (4)

    \(\Vert P_{\Omega}(x)-z\Vert ^{2}\leq \Vert x-z\Vert ^{2}-\Vert P_{\Omega}(x)-x\Vert ^{2}\).


Remark 2.1

In fact, the projection property (1) also provides a necessary and sufficient condition for a vector \(u\in\Omega\) to be the projection of the vector x; that is, \(u=P_{\Omega}(x)\) if and only if
$$\langle u-x, z-u\rangle\geq0,\quad \forall z\in\Omega. $$
Throughout the paper, we denote by Γ the solution set of the split feasibility problem, that is,
$$ \Gamma:=\{y\in C | Ay\in Q\}. $$

3 Algorithm and convergence analysis

Define
$$F(x):=\bigl(A^{T}(I-P_{Q})A\bigr) (x). $$
From [10] we know that F is Lipschitz continuous with constant \(L=\rho(A^{T}A)\) and is \(\frac{1}{\rho(A^{T}A)}\)-inverse strongly monotone. We first note that the solution set coincides with the set of zeros of the following projected residual function:
$$e(x):=x-P_{C}\bigl(x-F(x)\bigr), \qquad e(x,\mu):=x-P_{C} \bigl(x-\mu F(x)\bigr); $$
with this definition, we have \(e(x,1)=e(x)\), and \(x\in\Gamma\) if and only if \(e(x,\mu)=0\) for any \(\mu>0\). For any \(x\in\Re^{N}\) and \(\alpha\geq0\), define
$$x(\alpha)=P_{C}\bigl(x-\alpha F(x)\bigr),\qquad e(x,\alpha)=x-x( \alpha). $$
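In code, the mapping F and the projected residual \(e(x,\mu)\) can be written as follows. The box-shaped C and Q and the matrix A are illustrative assumptions used only to make the projections explicit; the key point is that \(e(x,\mu)=0\) detects membership in Γ.

```python
import numpy as np

# Illustrative SFP data (assumptions): C = [-1,1]^2, Q = [0,0.5]^2.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
P_C = lambda x: np.clip(x, -1.0, 1.0)
P_Q = lambda y: np.clip(y, 0.0, 0.5)

def F(x):
    # F(x) = A^T (I - P_Q) A x; F(x) = 0 exactly when A x lies in Q
    return A.T @ (A @ x - P_Q(A @ x))

def e(x, mu=1.0):
    # projected residual; for x in C, e(x, mu) = 0 iff x solves the SFP
    return x - P_C(x - mu * F(x))

x_feas = np.array([0.1, 0.1])      # A @ x_feas = (0.2, 0.2) lies in Q
x_infeas = np.array([0.9, -0.9])   # A @ x_infeas = (1.8, 0.0) does not
```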
The following lemma is useful for the convergence analysis.

Lemma 3.1


Let F be a mapping from \(\Re^{N}\) into \(\Re^{N}\). For any \(x\in\Re^{N}\) and \(\alpha\geq0\), we have
$$\min\{1,\alpha\}\bigl\Vert e(x,1)\bigr\Vert \leq\bigl\Vert e(x,\alpha) \bigr\Vert \leq\max\{1,\alpha\}\bigl\Vert e(x,1)\bigr\Vert . $$
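The two-sided bound of Lemma 3.1 is easy to check numerically. The data below (a box C, a box Q, a fixed matrix A, and a fixed point x) are illustrative assumptions, not the paper's test problem:

```python
import numpy as np

# Illustrative data (assumptions): C and Q are boxes, so P_C and P_Q
# are componentwise clips.
A = np.array([[2.0, 1.0], [0.0, 1.0]])
P_C = lambda x: np.clip(x, -1.0, 1.0)
P_Q = lambda y: np.clip(y, 0.0, 0.5)

F = lambda x: A.T @ (A @ x - P_Q(A @ x))    # F = A^T (I - P_Q) A
e = lambda x, a: x - P_C(x - a * F(x))      # projected residual e(x, alpha)

x = np.array([0.9, -0.4])
base = np.linalg.norm(e(x, 1.0))            # ||e(x, 1)||
vals = {a: np.linalg.norm(e(x, a)) for a in (0.25, 0.5, 2.0, 4.0)}
# Lemma 3.1: min{1, a} * base <= ||e(x, a)|| <= max{1, a} * base
```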

The detailed iterative process is as follows.

Algorithm 3.1

Step 0. Choose an arbitrary initial point \(x^{0}\in C\), \(\eta_{0}>0\), and three parameters \(\gamma\in(0,1)\), \(\sigma\in(0,1)\), and \(\theta >1\), and set \(k=0\).

Step 1. Given the current iterative point \(x^{k}\), compute
$$ z^{k}=P_{C}\bigl[x^{k}-\mu_{k}F \bigl(x^{k}\bigr)\bigr], $$
where \(\mu_{k}:=\min\{\theta\eta_{k-1},1\}\). Obviously, \(e(x^{k},\mu_{k})=x^{k}-z^{k}\). If \(e(x^{k}, \mu_{k})=0\), then stop; otherwise, go to Step 2.
Step 2. Compute
$$y^{k}=x^{k}-\eta_{k}e\bigl(x^{k}, \mu_{k}\bigr), $$
where \(\eta_{k}=\gamma^{m_{k}}\mu_{k}\) with \(m_{k}\) being the smallest nonnegative integer m satisfying
$$ \bigl\langle F\bigl(x^{k}-\gamma^{m}\mu_{k}e \bigl(x^{k},\mu_{k}\bigr)\bigr), e\bigl(x^{k},\mu _{k}\bigr)\bigr\rangle \geq\frac{\sigma}{\mu_{k}}\bigl\Vert e \bigl(x^{k},\mu_{k}\bigr)\bigr\Vert ^{2}. $$
Step 3. Compute
$$ x^{k+1}=P_{C\cap H_{k}^{1}\cap H_{k}^{2}}\bigl(x^{0}\bigr), $$
where
$$\begin{aligned}& H_{k}^{1}=\bigl\{ x\in\Re^{N}| \bigl\langle x-y^{k}, F\bigl(y^{k}\bigr)\bigr\rangle \leq0\bigr\} , \\& H_{k}^{2}=\bigl\{ x\in\Re^{N}| \bigl\langle x-x^{k}, x^{0}-x^{k}\bigr\rangle \leq0\bigr\} . \end{aligned}$$
Set \(k:=k+1\) and go to Step 1.
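Steps 0-3 can be assembled into a short Python sketch. Everything problem-specific here is an assumption for illustration, not the paper's implementation: C is a Euclidean ball and Q a box (so \(P_{C}\) and \(P_{Q}\) are explicit), and the projection onto \(C\cap H_{k}^{1}\cap H_{k}^{2}\) in Step 3 is approximated by Dykstra's alternating-projection scheme rather than computed exactly.

```python
import numpy as np

def proj_ball(x, r=2.0):
    # projection onto the Euclidean ball of radius r centered at 0
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_halfspace(x, a, b):
    # projection onto the halfspace {z : <a, z> <= b}
    v = a @ x - b
    return x if v <= 0 else x - (v / (a @ a)) * a

def dykstra(x0, projs, iters=200):
    # Dykstra's algorithm: approximates the projection of x0 onto the
    # intersection of the sets whose projectors are listed in projs
    x, incs = x0.copy(), [np.zeros_like(x0) for _ in projs]
    for _ in range(iters):
        for i, P in enumerate(projs):
            y = P(x + incs[i])
            incs[i] = x + incs[i] - y
            x = y
    return x

def hybrid_cq(x0, A, P_C, P_Q, gamma=0.6, sigma=0.8, theta=1.5,
              eta=0.3, tol=1e-5, maxit=200):
    F = lambda x: A.T @ (A @ x - P_Q(A @ x))
    x = P_C(x0)                                    # Step 0: start in C
    for _ in range(maxit):
        mu = min(theta * eta, 1.0)                 # Step 1
        e = x - P_C(x - mu * F(x))                 # e(x^k, mu_k)
        if np.linalg.norm(e) <= tol:
            return x
        for m in range(60):                        # Step 2: Armijo search
            eta = gamma**m * mu
            y = x - eta * e
            if F(y) @ e >= (sigma / mu) * (e @ e):
                break
        Fy = F(y)                                  # Step 3: project x^0
        H1 = lambda z, a=Fy, b=Fy @ y: proj_halfspace(z, a, b)
        H2 = lambda z, a=x0 - x, b=(x0 - x) @ x: proj_halfspace(z, a, b)
        x = dykstra(x0, [H1, H2, P_C])
    return x
```

With an exact projector for \(C\cap H_{k}^{1}\cap H_{k}^{2}\) (for instance, a QP solver when C is polyhedral), the loop matches Algorithm 3.1 step by step; Dykstra is used here only because each of the three sets has a cheap individual projection.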

Remark 3.1

(1) In the algorithm, a projection from \(\Re^{N}\) onto the intersection \(C\cap H_{k}^{1}\cap H_{k}^{2}\) needs to be computed at each iteration, that is, procedure (3.3). If the domain set C has a special structure, such as a box or a ball, then the next iterate \(x^{k+1}\) can easily be computed. If the domain set C is defined by a set of linear equalities or inequalities, then computing the projection is equivalent to solving a strictly convex quadratic optimization problem.

(2) It can readily be verified that the hyperplane \(H_{k}^{1}\) strictly separates the current point \(x^{k}\) from the solution set Γ if \(x^{k}\) is not a solution of the problem, that is, \(\Gamma\subset H_{k}^{1}\); similarly, the hyperplane \(H_{k}^{2}\) strictly separates the initial point \(x^{0}\) from the solution set Γ.

(3) Compared with the general hybrid projection method in [21, 22], besides the major modification made in the projection domain in the last projection step, the values of some parameters involved in the algorithm are also adjusted.

Before establishing the global convergence of Algorithm 3.1, we first give some lemmas.

Lemma 3.2

For all \(k\geq0\), there exists a nonnegative integer m satisfying (3.2).


Proof

Suppose that, for some k, (3.2) is not satisfied for any integer m, that is,
$$ \bigl\langle F\bigl(x^{k}-\gamma^{m}\mu_{k}e \bigl(x^{k},\mu_{k}\bigr)\bigr), e\bigl(x^{k},\mu _{k}\bigr)\bigr\rangle < \frac{\sigma}{\mu_{k}}\bigl\Vert e \bigl(x^{k},\mu_{k}\bigr)\bigr\Vert ^{2},\quad \forall m\geq0. $$
By the definition of \(e (x^{k},\mu_{k})\) and Lemma 2.1 we know that
$$\bigl\langle P_{C}\bigl(x^{k}-\mu_{k}F \bigl(x^{k}\bigr)\bigr)-\bigl(x^{k}-\mu _{k}F \bigl(x^{k}\bigr)\bigr),x^{k}-P_{C} \bigl(x^{k}-\mu_{k}F\bigl(x^{k}\bigr)\bigr)\bigr\rangle \geq 0. $$
It follows that
$$ \bigl\langle F\bigl(x^{k}\bigr), e\bigl(x^{k}, \mu_{k}\bigr)\bigr\rangle \geq\frac{1}{\mu_{k}}\bigl\Vert e \bigl(x^{k},\mu_{k}\bigr)\bigr\Vert ^{2}>0. $$
Since \(\gamma\in(0,1)\) and \(\mu_{k}:=\min\{\theta\eta_{k-1},1\}\), from (3.4) we get
$$\lim_{m\rightarrow\infty}\bigl(x^{k}-\gamma^{m} \mu_{k}e\bigl(x^{k},\mu_{k}\bigr) \bigr)=x^{k}. $$
Since F is continuous, letting \(m\rightarrow\infty\) in (3.4) yields
$$ \bigl\langle F\bigl(x^{k}\bigr), e\bigl(x^{k}, \mu_{k}\bigr)\bigr\rangle \leq \frac{\sigma}{\mu_{k}}\bigl\Vert e \bigl(x^{k},\mu_{k}\bigr)\bigr\Vert ^{2}. $$
But (3.6) contradicts (3.5) because \(\sigma<1\) and \(\Vert e(x^{k},\mu_{k})\Vert >0\). Hence, (3.2) is satisfied for some integer m. □

The following lemma shows that the halfspace \(H_{k}^{1}\) in Algorithm 3.1 strictly separates \(x^{k}\) from the solution set Γ if Γ is nonempty.

Lemma 3.3

If \(\Gamma\neq\emptyset\), then the halfspace \(H_{k}^{1}\) in Algorithm 3.1 separates the point \(x^{k}\) from the set Γ. Moreover,
$$\Gamma\subset H_{k}^{1}\cap C,\quad \forall k\geq0. $$


Proof

By the definition of \(e(x^{k},\mu_{k})\) and Algorithm 3.1 we have
$$y^{k}=(1-\eta_{k})x^{k}+\eta_{k}z^{k}=x^{k}- \eta_{k}e\bigl(x^{k},\mu_{k}\bigr), $$
which can be rewritten as
$$\eta_{k}e\bigl(x^{k},\mu_{k}\bigr)=x^{k}-y^{k}. $$
Then, by this and by (3.2) we get
$$ \bigl\langle F\bigl(y^{k}\bigr), x^{k}-y^{k}\bigr\rangle >0. $$
Hence, by the definition of \(H_{k}^{1}\) and by (3.7) we get \(x^{k}\notin H_{k}^{1}\).
On the other hand, for any \(x^{\ast} \in\Gamma\) and \(x\in C\), since \(F(x^{\ast})=0\), the monotonicity of F gives
$$ \bigl\langle F(x), x-x^{\ast}\bigr\rangle \geq 0. $$
By the convexity of C it is easy to see that \(y^{k}\in C\). Letting \(x=y^{k}\) in (3.9), we have
$$\bigl\langle F\bigl(y^{k}\bigr), y^{k}-x^{\ast}\bigr\rangle \geq0, $$
which implies \(x^{\ast}\in H_{k}^{1}\). Moreover, it is easy to see that \(\Gamma\subseteq H_{k}^{1}\cap C\), \(\forall k\geq0\). □

The following lemma says that if the solution set is nonempty, then \(\Gamma\subset H_{k}^{1}\cap H_{k}^{2}\cap C\) and thus \(H_{k}^{1}\cap H_{k}^{2}\cap C\) is a nonempty set.

Lemma 3.4

If the solution set \(\Gamma\neq\emptyset\), then \(\Gamma\subset H_{k}^{1}\cap H_{k}^{2}\cap C\) for all \(k\geq 0\).


Proof

From the previous analysis it suffices to prove that \(\Gamma\subset H_{k}^{2}\) for all \(k\geq0\). The proof proceeds by induction. Obviously, if \(k=0\), then
$$\Gamma\subseteq H_{0}^{2}=\Re^{N}. $$
Now, suppose that
$$\Gamma\subset H_{k}^{2} $$
for \(k=l\geq0\). Then
$$\Gamma\subset H_{l}^{1}\cap H_{l}^{2} \cap C. $$
For any \(x^{\ast}\in\Gamma\), by Lemma 2.1 and the fact that
$$x^{l+1}=P_{H_{l}^{1}\cap H_{l}^{2}\cap C}\bigl(x^{0}\bigr) $$
we have that
$$\bigl\langle x^{\ast}-x^{l+1}, x^{0}-x^{l+1} \bigr\rangle \leq0. $$
Thus, \(\Gamma\subset H_{l+1}^{2}\). This shows that \(\Gamma\subset H_{k}^{2}\) for all \(k\geq0\), and the desired result follows. □

For the case where the solution set is empty, the following lemma shows that \(H_{k}^{1}\cap H_{k}^{2}\cap C\) is still nonempty, which guarantees that Algorithm 3.1 is well defined.

Lemma 3.5

Suppose that \(\Gamma=\emptyset\). Then \(H_{k}^{1}\cap H_{k}^{2}\cap C\neq\emptyset\) for all \(k\geq 0\).

We next prove our main convergence result.

Theorem 3.1

Suppose the solution set Γ is nonempty. Then the sequence \(\{x^{k}\}\) generated by Algorithm 3.1 is bounded, and all its cluster points belong to the solution set. Moreover, the sequence \(\{x^{k}\}\) globally converges to a solution \(x^{\ast}\) such that \(x^{\ast}=P_{\Gamma}(x^{0})\).


Proof

Take an arbitrary point \(x^{\ast}\in\Gamma\). Then \(x^{\ast }\in H_{k}^{1}\cap H_{k}^{2}\cap C\). Since
$$x^{k+1}=P_{H_{k}^{1}\cap H_{k}^{2}\cap C}\bigl(x^{0}\bigr), $$
by the definition of the projection we have that
$$\bigl\Vert x^{k+1}-x^{0}\bigr\Vert \leq\bigl\Vert x^{\ast}-x^{0}\bigr\Vert . $$
So, \(\{x^{k}\}\) is a bounded sequence, and so is \(\{y^{k}\}\).
Since \(x^{k+1}\in H_{k}^{2}\), from the definition of the projection operator it is obvious that
$$P_{H_{k}^{2}}\bigl(x^{k+1}\bigr)=x^{k+1}. $$
For \(x^{k}\), by the definition of \(H_{k}^{2}\), for all \(z\in H_{k}^{2}\), we have
$$\bigl\langle z-x^{k}, x^{0}-x^{k}\bigr\rangle \leq0. $$
Obviously, \(x^{k}=P_{H_{k}^{2}}(x^{0})\). Thus, using Lemma 2.1, we have
$$\bigl\Vert P_{H_{k}^{2}}\bigl(x^{k+1}\bigr)-P_{H_{k}^{2}} \bigl(x^{0}\bigr)\bigr\Vert ^{2}\leq\bigl\Vert x^{k+1}-x^{0}\bigr\Vert ^{2}-\bigl\Vert P_{H_{k}^{2}}\bigl(x^{k+1}\bigr)-x^{k+1}+x^{0}-P_{H_{k}^{2}} \bigl(x^{0}\bigr)\bigr\Vert ^{2}, $$
that is,
$$\bigl\Vert x^{k+1}-x^{k}\bigr\Vert ^{2}\leq\bigl\Vert x^{k+1}-x^{0}\bigr\Vert ^{2}-\bigl\Vert x^{k}-x^{0}\bigr\Vert ^{2}, $$
which can be written as
$$\bigl\Vert x^{k+1}-x^{k}\bigr\Vert ^{2}+\bigl\Vert x^{k}-x^{0}\bigr\Vert ^{2}\leq\bigl\Vert x^{k+1}-x^{0}\bigr\Vert ^{2}. $$
Thus, the sequence \(\{\Vert x^{k}-x^{0}\Vert \}\) is nondecreasing and bounded and hence convergent, which implies that
$$ \lim_{k\rightarrow\infty}\bigl\Vert x^{k+1}-x^{k}\bigr\Vert ^{2}=0. $$
On the other hand, by \(x^{k+1}\in H_{k}^{1}\) we get
$$ \bigl\langle x^{k+1}-y^{k}, F\bigl(y^{k}\bigr)\bigr\rangle \leq0. $$
Since
$$y^{k}=(1-\eta_{k})x^{k}+\eta_{k}z^{k}=x^{k}- \eta_{k}e\bigl(x^{k},\mu_{k}\bigr), $$
from (3.10) we have
$$\bigl\langle x^{k+1}-y^{k}, F\bigl(y^{k}\bigr)\bigr\rangle =\bigl\langle x^{k+1}-x^{k}+\eta _{k}e \bigl(x^{k},\mu_{k}\bigr), F\bigl(y^{k}\bigr)\bigr\rangle \leq0, $$
which implies
$$\eta_{k}\bigl\langle e\bigl(x^{k},\mu_{k}\bigr), F \bigl(y^{k}\bigr)\bigr\rangle \leq\bigl\langle x^{k}-x^{k+1}, F\bigl(y^{k}\bigr)\bigr\rangle . $$
Using the Cauchy-Schwarz inequality and (3.2), we obtain
$$ \eta_{k}\frac{\sigma}{\mu_{k}}\bigl\Vert e\bigl(x^{k}, \mu_{k}\bigr)\bigr\Vert ^{2}\leq\eta _{k}\bigl\langle e\bigl(x^{k},\mu_{k}\bigr), F\bigl(y^{k} \bigr)\bigr\rangle \leq\bigl\Vert x^{k}-x^{k+1}\bigr\Vert \cdot\bigl\Vert F\bigl(y^{k}\bigr)\bigr\Vert . $$
By Lemma 3.1,
$$\bigl\Vert e\bigl(x^{k},\mu_{k}\bigr)\bigr\Vert \geq\min \{1,\mu_{k}\}\bigl\Vert e\bigl(x^{k}\bigr)\bigr\Vert . $$
Since \(\mu_{k}=\min\{\theta\eta_{k-1},1\}\), we have \(\mu_{k}\leq1\). Hence,
$$\bigl\Vert e\bigl(x^{k},\mu_{k}\bigr)\bigr\Vert \geq\min \{1,\mu_{k}\}\bigl\Vert e\bigl(x^{k}\bigr)\bigr\Vert \geq \mu_{k}\bigl\Vert e\bigl(x^{k}\bigr)\bigr\Vert . $$
Therefore,
$$\eta_{k} \mu_{k}\sigma\bigl\Vert e\bigl(x^{k} \bigr)\bigr\Vert ^{2}\leq\eta_{k}\frac{\sigma}{\mu_{k}}\bigl\Vert e\bigl(x^{k},\mu_{k}\bigr)\bigr\Vert ^{2} \leq\bigl\Vert x^{k}-x^{k+1}\bigr\Vert \cdot\bigl\Vert F \bigl(y^{k}\bigr)\bigr\Vert . $$
Taking into account that \(\eta_{k}=\gamma ^{m_{k}}\mu_{k}\leq\mu_{k}\), we further obtain
$$ \eta_{k}^{2} \sigma\bigl\Vert e\bigl(x^{k}\bigr) \bigr\Vert ^{2}\leq\eta_{k}\frac{\sigma}{\mu_{k}}\bigl\Vert e \bigl(x^{k},\mu _{k}\bigr)\bigr\Vert ^{2}\leq\bigl\Vert x^{k}-x^{k+1}\bigr\Vert \cdot\bigl\Vert F \bigl(y^{k}\bigr)\bigr\Vert . $$
Since F is continuous and \(\{y^{k}\}\) is bounded, there exists a constant M such that \(\Vert F(y^{k})\Vert \leq M\). By (3.9) and (3.12) it follows that
$$ \lim_{k\rightarrow\infty}\eta_{k}\bigl\Vert e \bigl(x^{k}\bigr)\bigr\Vert =0. $$
For any convergent subsequence \(\{x^{k_{j}}\}\) of \(\{x^{k}\}\), denote its limit by \(\bar{x}\), that is,
$$\lim_{j\rightarrow\infty}x^{k_{j}}=\bar{x}. $$

Now, we consider the two possible cases for (3.13).

Suppose first that
$$\lim_{j\rightarrow\infty}\eta_{k_{j}}=0. $$
By the choice of \(\eta_{k_{j}}\) in Algorithm 3.1 we know that
$$ \biggl\langle F\biggl(x^{k_{j}}-\frac{\eta_{k_{j}}}{\gamma}e\bigl(x^{k_{j}}, \mu _{k_{j}}\bigr)\biggr),e\bigl(x^{k_{j}},\mu_{k_{j}}\bigr) \biggr\rangle \leq\frac{\sigma}{\mu_{k_{j}}}\bigl\Vert e\bigl(x^{k_{j}}, \mu_{k_{j}}\bigr)\bigr\Vert ^{2}. $$
Hence,
$$\lim_{j\rightarrow\infty}\biggl(x^{k_{j}}-\frac{\eta_{k_{j}}}{\gamma }e \bigl(x^{k_{j}},\mu_{k_{j}}\bigr)\biggr)=\bar{x}, $$
and, since F is continuous,
$$\lim_{j\rightarrow\infty}F\biggl(x^{k_{j}}-\frac{\eta_{k_{j}}}{\gamma }e \bigl(x^{k_{j}},\mu_{k_{j}}\bigr)\biggr)=F(\bar{x}). $$
So, by (3.14) we obtain
$$ \begin{aligned}[b] &\lim_{j\rightarrow\infty}\biggl\langle F\biggl(x^{k_{j}}- \frac{\eta_{k_{j}}}{\gamma }e\bigl(x^{k_{j}},\mu_{k_{j}}\bigr)\biggr),e \bigl(x^{k_{j}},\mu_{k_{j}}\bigr)\biggr\rangle \\ &\quad =\bigl\langle F(\bar{x}),e(\bar{x},\mu_{k_{j}})\bigr\rangle \\ &\quad \leq\lim_{j\rightarrow\infty}\frac{\sigma}{\mu_{k}}\bigl\Vert e \bigl(x^{k_{j}},\mu _{k_{j}}\bigr)\bigr\Vert ^{2}= \frac{\sigma}{\mu_{k}}\bigl\Vert e(\bar{x},\mu_{k})\bigr\Vert ^{2}. \end{aligned} $$
Using arguments similar to those for (3.2), we have
$$\bigl\langle F(\bar{x}),e(\bar{x},\mu_{k_{j}})\bigr\rangle \geq \frac{1}{\mu_{k}}\bigl\Vert e(\bar{x},\mu_{k})\bigr\Vert ^{2}. $$
Since
$$\bigl\Vert e\bigl(x^{k},\mu_{k}\bigr)\bigr\Vert \geq\min \{1,\mu_{k}\}\bigl\Vert e\bigl(x^{k}\bigr)\bigr\Vert , $$
from (3.15) we get that \(e(\bar{x})=0\), and thus \(\bar{x}\) is a solution of problem (1.1).

Suppose now that \(\limsup_{k\rightarrow\infty}\eta_{k}>0\). Because of (3.13), it must be that \(\liminf_{k\rightarrow\infty} \Vert e(x^{k})\Vert =0\). Since \(e(\cdot)\) is continuous, we get \(e(\bar{x})=0\), and thus \(\bar{x}\) is a solution of problem (1.1).

Now, we prove that the sequence \(\{x^{k}\}\) converges to a point contained in Γ.

Let \(x^{\ast}=P_{\Gamma}(x^{0})\). Since \(x^{\ast}\in\Gamma\), by Lemma 3.4 we have
$$x^{\ast}\in H_{k_{j}-1}^{1}\cap H_{k_{j}-1}^{2} \cap C $$
for all j. So, by the iterative sequence of Algorithm 3.1 we have
$$\bigl\Vert x^{k_{j}}-x^{0}\bigr\Vert \leq\bigl\Vert x^{\ast}-x^{0}\bigr\Vert . $$
$$\begin{aligned} \bigl\Vert x^{k_{j}}-x^{\ast}\bigr\Vert ^{2}&=\bigl\Vert x^{k_{j}}-x^{0}+x^{0}-x^{\ast}\bigr\Vert ^{2} \\ &=\bigl\Vert x^{k_{j}}-x^{0}\bigr\Vert ^{2}+\bigl\Vert x^{0}-x^{\ast}\bigr\Vert ^{2}+2\bigl\langle x^{k_{j}}-x^{0},x^{0}-x^{\ast}\bigr\rangle \\ &\leq\bigl\Vert x^{\ast}-x^{0}\bigr\Vert ^{2}+ \bigl\Vert x^{0}-x^{\ast}\bigr\Vert ^{2}+2\bigl\langle x^{k_{j}}-x^{0},x^{0}-x^{\ast}\bigr\rangle . \end{aligned}$$
Letting \(j\rightarrow\infty\), we have
$$\begin{aligned} \bigl\Vert \bar{x}-x^{\ast}\bigr\Vert ^{2}&\leq2\bigl\Vert x^{0}-x^{\ast}\bigr\Vert ^{2}+2\bigl\langle \bar{x}-x^{0},x^{0}-x^{\ast}\bigr\rangle \\ &=2\bigl\langle \bar{x}-x^{\ast},x^{0}-x^{\ast}\bigr\rangle \leq0, \end{aligned}$$
where the last inequality is due to Lemma 2.1 and the fact that \(x^{\ast}=P_{\Gamma}(x^{0})\) and \(\bar{x}\in\Gamma\). So,
$$\bar{x}=x^{\ast}=P_{\Gamma}\bigl(x^{0}\bigr). $$
Thus, the sequence \(\{x^{k}\}\) has a unique cluster point \(P_{\Gamma}(x^{0})\), which shows the global convergence of \(\{x^{k}\}\). □

4 Numerical experiments

To test the effectiveness of the proposed algorithm, we implemented it in MATLAB and applied it to the following example. We use \(\Vert e(x^{k},\mu _{k})\Vert \leq\varepsilon=10^{-5}\) as the stopping criterion and denote Algorithm 3.1 of [14] by Algorithm 3.1\(^{\ast}\). Throughout the computational experiment, the parameters used in Algorithm 3.1 were set to \(\gamma=0.6\), \(\sigma=0.8\), \(\theta=1.5\), \(\eta_{0}=0.3\), and \(\beta=1\) for Algorithm 3.1\(^{\ast}\). The numerical results of the example are given in Table 1, where Iter denotes the number of iterations, CPU denotes the computing time, and \(x^{\ast}\) denotes the approximate solution.
Table 1

Results for Example

Initial point | Algorithm 3.1\(^{\ast}\) with β = 1 | Algorithm 3.1
 | Iter = 78; CPU (s) = 0.113 | Iter = 43; CPU (s) = 0.098
 | Iter = 39; CPU (s) = 0.096 | Iter = 20; CPU (s) = 0.053

Approximate solution: \(x^{\ast}=(1.0512;-2.3679; 1.0613)\)


Example
Let \(C = \{x \in\Re^{3} | x_{1}^{2}+x_{2}^{2} \leq 40\}\) and \(Q=\{x\in\Re^{3}| x_{2}-x_{3}^{2}\leq1\}\), and let \(A=I\). Find \(x\in C \) with \(Ax\in Q\).

From the numerical experiments on this simple example we can see that the proposed method has good convergence properties.

5 Some concluding remarks

This paper presented a hybrid CQ projection algorithm with two projection steps and one Armijo line-search step for solving the split feasibility problem (SFP). Differently from the self-adaptive projection methods proposed by Zhang et al. [18], we use a new line-search rule, which ensures that the hyperplane \(H_{k}^{1}\) separates the current iterate \(x^{k}\) from the solution set Γ. The next iterate is generated by projecting the starting point onto a shrinking region (the intersection of three sets). Preliminary numerical experiments demonstrate good behavior of the algorithm. Whether the idea can be extended to the multiple-set SFP deserves further research.



This work was supported by Natural Science Foundation of Shanghai (14ZR1429200) and Innovation Program of Shanghai Municipal Education Commission (15ZZ074).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

School of Management, University of Shanghai for Science and Technology
Henan Polytechnic University


  1. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
  2. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
  3. Censor, Y, Motova, A, Segal, A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327, 1244-1256 (2007)
  4. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071-2084 (2005)
  5. Dang, Y, Gao, Y: Bi-extrapolated subgradient projection algorithm for solving multiple-sets split feasibility problem. Appl. Math. J. Chin. Univ. Ser. B 29(3), 283-294 (2014)
  6. Qu, B, Xiu, N: A new halfspace-relaxation projection method for the split feasibility problem. Linear Algebra Appl. 428, 1218-1229 (2008)
  7. Dang, Y, Xue, ZH: Iterative process for solving a multiple-set split feasibility problem. J. Inequal. Appl. (2015). doi:10.1186/s13660-015-0576-9
  8. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
  9. Xu, H: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021-2034 (2006)
  10. Dang, Y, Gao, Y: The strong convergence of a KM-CQ-like algorithm for split feasibility problem. Inverse Probl. (2011). doi:10.1088/0266-5611/27/1/015007
  11. Yan, AL, Wang, GY, Xiu, NH: Robust solutions of split feasibility problem with uncertain linear operator. J. Ind. Manag. Optim. 3, 749-761 (2007)
  12. Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20, 1261-1266 (2004)
  13. Zhao, J, Yang, Q: Several solution methods for the split feasibility problem. Inverse Probl. 21, 1791-1799 (2005)
  14. Qu, B, Xiu, NH: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655-1665 (2005)
  15. Ansari, QH, Rehan, A: Split feasibility and fixed point problems. In: Nonlinear Analysis: Approximation Theory, Optimization and Applications, pp. 281-322. Springer, New York (2014)
  16. Latif, A, Sahu, DR, Ansari, QH: Variable KM-like algorithms for fixed point problems and split feasibility problems. Fixed Point Theory Appl. 2014, Article ID 211 (2014)
  17. He, B, Yang, H, Meng, Q, Han, D: Modified Goldstein-Levitin-Polyak projection method for asymmetric strongly monotone variational inequalities. J. Optim. Theory Appl. 112, 129-143 (2002)
  18. Zhang, WX, Han, D, Li, ZB: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 25, 115001 (2009)
  19. Zhao, JL, Yang, Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 27, 035009 (2011)
  20. Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279(2), 372-379 (2003)
  21. Kamimura, S, Takahashi, W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938-945 (2002)
  22. Martinez-Yanes, C, Xu, H-K: Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal., Theory Methods Appl. 64(11), 2400-2411 (2006)
  23. Takahashi, W, Takeuchi, Y, Kubota, R: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276-286 (2008)
  24. Kumam, P: A hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping. Nonlinear Anal. Hybrid Syst. 2, 1245-1255 (2008)


© Dang et al. 2016