Open Access

Iterative process for solving a multiple-set split feasibility problem

Journal of Inequalities and Applications 2015, 2015:47

https://doi.org/10.1186/s13660-015-0576-9

Received: 2 November 2014

Accepted: 26 January 2015

Published: 5 February 2015

Abstract

This paper presents a variant relaxed CQ algorithm that uses a new searching direction, which is not the gradient of the corresponding proximity function. The aim of this strategy is to improve convergence. Convergence of the algorithm is proved under suitable conditions. Numerical results illustrate that the variant relaxed CQ algorithm converges more quickly than existing algorithms.

Keywords

multiple-set split feasibility problem; subgradient; accelerated iterative algorithm; convergence

MSC

47H05; 47J05; 47J25

1 Introduction

The multiple-set split feasibility problem (MSSFP) is to find a point contained in the intersection of a family of closed convex sets in one space such that its image under a linear transformation is contained in the intersection of another family of closed convex sets in the image space. Formally, given nonempty closed convex sets \(C_{i}\subseteq\Re^{N}\), \(i=1,2,\ldots,t\), in the N-dimensional Euclidean space \(\Re^{N}\) and nonempty closed convex sets \(Q_{j}\subseteq\Re^{M}\), \(j=1,2,\ldots,r\), and an \(M\times N\) real matrix A, the MSSFP is to find a point x such that
$$ x\in C=\bigcap_{i=1}^{t}C_{i}, \qquad Ax\in Q=\bigcap_{j=1}^{r}Q_{j}. $$
(1.1)
The MSSFP, formulated in [1], arises in the field of intensity-modulated radiation therapy (IMRT) when one attempts to describe physical dose constraints and equivalent uniform dose (EUD) constraints within a single model; see [2, 3]. In particular, when \(t=r=1\), the problem reduces to the two-set split feasibility problem (abbreviated as SFP), which is to find a point \(x\in C\) such that \(Ax\in Q\) (see [4-6]).
For solving the MSSFP, Censor et al. in [1] introduced a proximity function \(p(x)\) to measure the aggregate distance of a point to all sets. The function \(p(x)\) is defined as
$$p(x) :=\frac{1}{2}\sum_{i=1}^{t} \alpha_{i}\bigl\| x-P_{C_{i}}(x)\bigr\| ^{2}+\frac {1}{2}\sum _{j=1}^{r}\beta_{j} \bigl\| Ax-P_{Q_{j}}(Ax)\bigr\| ^{2}, $$
where \(\alpha_{i}>0\), \(\beta_{j}>0\) for all i and j, respectively, and \(\sum_{i=1}^{t}\alpha_{i}+\sum_{j=1}^{r}\beta_{j}=1\). Then they proposed a projection algorithm as follows:
$$x^{k+1}=P_{\Omega}\bigl(x^{k}-\gamma\nabla p \bigl(x^{k}\bigr)\bigr), $$
where \(\Omega\subset\Re^{N}\) is an auxiliary set and \(x^{k}\) is the current iterate, \(0<\gamma<2/L\) with \(L=\sum_{i=1}^{t}\alpha_{i}+\rho(A^{T}A)\sum_{j=1}^{r}\beta_{j}\), and \(\rho(A^{T}A)\) is the spectral radius of \(A^{T}A\). Subsequently, many methods have been developed for solving the MSSFP [7-14]; most of these algorithms aim at minimizing the proximity function \(p(x)\) and use its gradient \(\nabla p\).
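Censor et al.'s gradient-projection iteration can be sketched as follows. This is a minimal illustration, not the authors' code; it assumes \(\Omega=\Re^{N}\) so that \(P_{\Omega}\) is the identity, and the names `proximity_gradient`, `censor_step`, `proj_C`, `proj_Q` are hypothetical (lists of projection callables onto the sets \(C_i\) and \(Q_j\)):

```python
import numpy as np

def proximity_gradient(x, A, proj_C, proj_Q, alpha, beta):
    """Gradient of p(x) = 1/2 sum_i alpha_i ||x - P_Ci x||^2
                        + 1/2 sum_j beta_j ||Ax - P_Qj Ax||^2."""
    g = sum(a * (x - PC(x)) for a, PC in zip(alpha, proj_C))
    g = g + A.T @ sum(b * (A @ x - PQ(A @ x)) for b, PQ in zip(beta, proj_Q))
    return g

def censor_step(x, A, proj_C, proj_Q, alpha, beta, gamma):
    # One iteration x^{k+1} = P_Omega(x^k - gamma * grad p(x^k)), Omega = R^N here.
    return x - gamma * proximity_gradient(x, A, proj_C, proj_Q, alpha, beta)
```

At any feasible point the gradient vanishes, so the iteration stops there; infeasible points are moved toward the sets.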

Different from most of the existing methods, in this paper we construct a new searching direction, which is not the gradient \(\nabla p\). This difference leads to a very different way of analysis. Moreover, preliminary numerical experiments show that our new method converges faster than most existing methods.

The paper is organized as follows. Section 2 reviews some preliminaries. Section 3 gives a variant relaxed projection algorithm and shows its convergence. Section 4 gives some numerical experiments. Some conclusions are drawn in Section 5.

2 Preliminaries

Throughout the rest of the paper, I denotes the identity operator, \(\operatorname{Fix}(T)\) denotes the set of the fixed points of an operator T, i.e., \(\operatorname{Fix}(T):=\{x \mid x=T(x)\}\).

Let T be a mapping from \(\aleph\subseteq \Re^{N}\) into \(\Re^{N}\). T is called co-coercive on \(\aleph\) with modulus \(\mu>0\) if
$$\bigl\langle T(x)-T(y), x-y\bigr\rangle \geq\mu\bigl\| T(x)-T(y)\bigr\| ^{2},\quad \forall x, y\in \aleph; $$
it is called Lipschitz continuous on \(\aleph\) with constant \(L>0\) if
$$\bigl\| T(x)-T(y)\bigr\| \leq L \|x-y\|, \quad \forall x,y\in\aleph; $$
it is called monotone on \(\aleph\) if
$$\bigl\langle T(x)-T(y), x-y\bigr\rangle \geq0,\quad \forall x,y\in\aleph. $$

It is obvious that co-coercivity (with modulus μ) implies Lipschitz continuity (with constant \(1/\mu\)) and monotonicity.

Let S be a nonempty closed convex subset of \(\Re^{N}\). Denote by \(P_{S}\) the orthogonal projection onto S; that is,
$$P_{S}(x)=\arg\min_{y\in S}\|x-y\|, \quad x\in\Re^{N}. $$
It is well known that the orthogonal projection operator \(P_{S}\), for any \(x,y\in\Re^{N}\) and any \(z\in S\), is characterized by the inequalities [15]
$$ \bigl\langle x-P_{S}(x), z-P_{S}(x)\bigr\rangle \leq0 $$
(2.1)
and
$$ \bigl\| P_{S}(x)-z\bigr\| ^{2}\leq\|x-z\|^{2}- \bigl\| P_{S}(x)-x\bigr\| ^{2}. $$
(2.2)
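Inequalities (2.1) and (2.2) are straightforward to check numerically. The sketch below (not from the paper; `proj_box` is an illustrative name) does so for the box \(S=[-1,1]^{3}\), whose orthogonal projection is componentwise clipping:

```python
import numpy as np

def proj_box(x, lo, hi):
    # Euclidean projection onto the box S = {y : lo <= y <= hi}.
    return np.clip(x, lo, hi)

# Numerically check (2.1) and (2.2) for random x and z in S = [-1, 1]^3.
rng = np.random.default_rng(0)
lo, hi = -np.ones(3), np.ones(3)
for _ in range(100):
    x = 5.0 * rng.normal(size=3)
    z = proj_box(5.0 * rng.normal(size=3), lo, hi)  # an arbitrary point of S
    Px = proj_box(x, lo, hi)
    assert (x - Px) @ (z - Px) <= 1e-12  # inequality (2.1)
    assert (Px - z) @ (Px - z) <= (x - z) @ (x - z) - (Px - x) @ (Px - x) + 1e-12  # (2.2)
```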

Recall the notion of the subdifferential of a convex function.

Definition 2.1

Let \(f : \Re^{N}\rightarrow\Re\) be convex. The subdifferential of f at x is defined as
$$ \partial f(x)=\bigl\{ \xi\in\Re^{N}\mid f(y)\geq f(x)+ \langle\xi, y-x \rangle, \forall y\in\Re^{N}\bigr\} . $$
(2.3)
An element of \(\partial f(x)\) is called a subgradient of f at x.
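As a concrete illustration (not from the paper), for the convex function \(f(x)=\max_{i}(\langle a_{i},x\rangle+b_{i})\), any vector \(a_{j}\) attaining the maximum is a subgradient, i.e., it satisfies the inequality in (2.3); the function name below is hypothetical:

```python
import numpy as np

def subgrad_max_affine(x, a, b):
    """A subgradient of f(x) = max_i (<a_i, x> + b_i): pick a row a_j
    attaining the maximum (any active index works)."""
    j = int(np.argmax(a @ x + b))
    return a[j]
```

One can verify the defining inequality \(f(y)\geq f(x)+\langle\xi, y-x\rangle\) at random test points.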

Lemma 2.1

[16]

An operator T is co-coercive with modulus 1 if and only if the operator \(I-T\) is co-coercive with modulus 1, where I denotes the identity operator.

It is easy to see from the above results that orthogonal projection operators are monotone and co-coercive with modulus 1, and that the operator \(I-P_{Q}\) is also co-coercive with modulus 1.

3 Algorithm and its convergence

3.1 The variant relaxed-CQ algorithm

As in [12], we suppose that the following conditions are satisfied:
  1. The solution set of the MSSFP is nonempty.
  2. The sets \(C_{i}\), \(i=1,2,\ldots,t\), are given as
$$ C_{i}=\bigl\{ x\in\Re^{N} \mid c_{i}(x)\leq0\bigr\} , $$
(3.1)
where \(c_{i}:\Re^{N}\rightarrow\Re\), \(i=1,2,\ldots,t\), are convex functions and \(C_{i}\), \(i=1,2,\ldots,t\), are nonempty. The sets \(Q_{j}\), \(j=1,2,\ldots,r\), are given as
$$ Q_{j}=\bigl\{ y\in\Re^{M} \mid q_{j}(y)\leq0\bigr\} , $$
(3.2)
where \(q_{j}:\Re^{M}\rightarrow\Re\), \(j=1,2,\ldots,r\), are convex functions and \(Q_{j}\), \(j=1,2,\ldots,r\), are nonempty.
  3. For any \(x\in\Re^{N}\), at least one subgradient \(\xi_{i}\in\partial c_{i}(x)\) can be calculated, and for any \(y\in \Re^{M}\), at least one subgradient \(\eta_{j}\in\partial q_{j}(y)\) can be computed.

Now, we define the following half-spaces at point \(x^{k}\):
$$ C_{i}^{k}=\bigl\{ x\in \Re^{N} \mid c_{i}\bigl(x^{k}\bigr)+\bigl\langle \xi_{i}^{k}, x-x^{k}\bigr\rangle \leq0\bigr\} , $$
(3.3)
where \(\xi_{i}^{k}\) is an element in \(\partial c_{i}(x^{k})\) for \(i=1,2,\ldots,t\), and
$$ Q_{j}^{k}=\bigl\{ y\in \Re^{M} \mid q_{j}\bigl(Ax^{k}\bigr)+\bigl\langle \eta_{j}^{k}, y-Ax^{k}\bigr\rangle \leq0\bigr\} , $$
(3.4)
where \(\eta_{j}^{k}\) is an element in \(\partial q_{j}(Ax^{k})\) for \(j=1,2,\ldots,r\).

By the definition of the subgradient, it is clear that the half-spaces \(C_{i}^{k}\) and \(Q_{j}^{k}\) contain \(C_{i}\) and \(Q_{j}\), \(i=1,2,\ldots,t\); \(j=1,2,\ldots,r\), respectively. Due to the specific form of \(C_{i}^{k}\) and \(Q_{j}^{k}\), the orthogonal projections onto them may be computed directly; see [15].
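Indeed, the projection onto such a half-space has a simple closed form: with \(g=c_{i}(x^{k})+\langle\xi_{i}^{k}, x-x^{k}\rangle\), the projection of x is x itself if \(g\leq0\), and \(x-(g/\|\xi\|^{2})\,\xi\) otherwise. A sketch (function name hypothetical, \(\xi\neq0\) assumed):

```python
import numpy as np

def proj_relaxed_halfspace(x, xk, c_xk, xi):
    """Projection onto C^k = {u : c(x^k) + <xi, u - x^k> <= 0}, where
    xi is a nonzero subgradient of c at x^k and c_xk = c(x^k)."""
    g = c_xk + xi @ (x - xk)
    if g <= 0:  # x already satisfies the linearized constraint
        return np.asarray(x, dtype=float)
    return x - (g / (xi @ xi)) * xi  # shift x along xi onto the boundary
```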

Now, we give the variant relaxed CQ algorithm.

Algorithm 3.1

Given \(\alpha_{i}>0\) and \(\beta_{j}>0\) such that \(\sum_{i=1}^{t}\alpha_{i}=1\), \(\sum_{j=1}^{r}\beta_{j}=1\), \(\gamma\in (0,\frac{1}{\rho(A^{T}A)})\), and \(t_{k}\in(0,2)\).

Take an arbitrary initial point \(x^{0}\in\Re^{N}\). Define a mapping \(F_{k}: \Re^{N}\rightarrow\Re^{N}\) as
$$ F_{k}(x)=\sum_{j=1}^{r} \beta_{j}A^{T}(I-P_{Q_{j}^{k}})Ax. $$
(3.5)
For \(k=0,1,2,\ldots\) , compute
$$ y^{k}=\sum_{i=1}^{t} \alpha_{i}P_{C_{i}^{k}}\bigl(x^{k}-\gamma F_{k} \bigl(x^{k}\bigr)\bigr). $$
(3.6)
Let
$$ d^{k}=x^{k}-y^{k}+\gamma\bigl(F_{k} \bigl(y^{k}\bigr)- F_{k}\bigl(x^{k}\bigr) \bigr). $$
(3.7)
Set
$$ x^{k+1}=x^{k}-t_{k} d^{k}. $$
(3.8)

In this algorithm, we take \(\|d^{k}\|<\varepsilon\) for some given precision ε as the stopping criterion, and we apply \(y^{k}\) and \(F_{k}\) to construct the searching direction \(d^{k}\). This choice of a new searching direction leads to a quite different way of establishing the convergence result of Algorithm 3.1.
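The whole iteration (3.5)-(3.8) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code; the callables `proj_Ck(i, xk, u)` and `proj_Qk(j, xk, v)` (hypothetical interface) are assumed to return the projections of u and v onto the relaxed half-spaces \(C_{i}^{k}\) and \(Q_{j}^{k}\) built at the current iterate:

```python
import numpy as np

def algorithm_31(x0, A, proj_Ck, proj_Qk, alpha, beta, gamma, tk,
                 eps=1e-5, max_iter=10000):
    """Sketch of Algorithm 3.1 with stopping rule ||d^k|| < eps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        xk = x.copy()
        Fk = lambda u: A.T @ sum(b * (A @ u - proj_Qk(j, xk, A @ u))
                                 for j, b in enumerate(beta))   # (3.5)
        y = sum(a * proj_Ck(i, xk, xk - gamma * Fk(xk))
                for i, a in enumerate(alpha))                   # (3.6)
        d = xk - y + gamma * (Fk(y) - Fk(xk))                   # (3.7)
        if np.linalg.norm(d) < eps:                             # stopping criterion
            break
        x = xk - tk * d                                         # (3.8)
    return x
```

For instance, with \(N=M=1\), \(A=[1]\), and \(C=Q=(-\infty,0]\) (so the half-space projections reduce to clipping at 0), the iterates decrease geometrically toward the solution set.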

By Lemma 8.1 in [17], the operator \(A^{T} (I - P_{Q_{j}^{k}})A\) is \(1/\rho(A^{T}A)\)-inverse strongly monotone (\(1/\rho(A^{T}A)\)-ism), that is, co-coercive with modulus \(1/\rho(A^{T}A)\), and Lipschitz continuous with constant \(\rho(A^{T}A)\).

3.2 Convergence of the variant relaxed-CQ algorithm

In this subsection, we establish the convergence of Algorithm 3.1.

The following results will be needed in the convergence analysis of the proposed algorithm.

Lemma 3.1

[18, 19]

Suppose that \(f: \Re^{N}\rightarrow\Re\) is convex. Then its subdifferential is uniformly bounded on any bounded subset of \(\Re^{N}\).

Lemma 3.2

Let z be an arbitrary solution of the MSSFP (i.e., \(z\in SOL(MSSFP)\)). Then for any \(u\in\Re^{N}\) it holds that
$$ \bigl\langle F_{k}(u), u-z\bigr\rangle \geq\sum _{j=1}^{r}\beta_{j}\bigl\| (I-P_{Q_{j}^{k}}) (Au)\bigr\| ^{2}\geq 0. $$
(3.9)

Proof

If \(z\in SOL(MSSFP)\), then \(Az\in Q_{j}\subset Q_{j}^{k}\) for all \(j=1,\ldots,r\); thus \(F_{k}(z)=0\). Since the mappings \(I-P_{Q_{j}^{k}}\) are co-coercive with modulus 1, it follows that
$$\begin{aligned} \bigl\langle F_{k}(u), u-z\bigr\rangle =&\bigl\langle F_{k}(u)-F_{k}(z), u-z\bigr\rangle \\ =&\sum_{j=1}^{r}\beta_{j}\bigl\langle A^{T}(I-P_{Q_{j}^{k}})Au-A^{T}(I-P_{Q_{j}^{k}})Az, u-z\bigr\rangle \\ =&\sum_{j=1}^{r}\beta_{j}\bigl\langle (I-P_{Q_{j}^{k}})Au-(I-P_{Q_{j}^{k}})Az, Au-Az\bigr\rangle \\ \geq& \sum_{j=1}^{r}\beta_{j} \bigl\| (I-P_{Q_{j}^{k}})Au-(I-P_{Q_{j}^{k}})Az\bigr\| ^{2} \\ =&\sum_{j=1}^{r}\beta_{j} \bigl\| (I-P_{Q_{j}^{k}})Au\bigr\| ^{2}. \end{aligned}$$
 □

Now, we state the convergence of Algorithm 3.1.

Theorem 3.1

Assume that the set of solutions of the multiple-set split feasibility problem is nonempty. Then any sequence \(\{x^{k}\}_{k=0}^{\infty}\) generated by Algorithm 3.1 converges to a solution of the multiple-set split feasibility problem.

Proof

Let z be a solution of the MSSFP. Since \(C_{i}\subset C_{i}^{k}\) and \(Q_{j}\subset Q_{j}^{k}\), we have \(z=P_{C_{i}}z=P_{C_{i}^{k}}z \) and \(Az=P_{Q_{j}}Az=P_{Q_{j}^{k}}Az\) for all i and j, and therefore \(F_{k}(z)=0\). By Algorithm 3.1, we have
$$\begin{aligned} \bigl\| x^{k+1}-z\bigr\| ^{2}=\bigl\| x^{k}-t_{k} d^{k}-z\bigr\| ^{2} =\bigl\| x^{k}-z\bigr\| ^{2}-2t_{k}\bigl\langle d^{k}, x^{k}-z\bigr\rangle +t_{k}^{2} \bigl\| d^{k}\bigr\| ^{2}, \end{aligned}$$
hence
$$ \bigl\| x^{k+1}-z\bigr\| ^{2}=\bigl\| x^{k}-z\bigr\| ^{2}-2t_{k} \bigl\langle d^{k}, y^{k}-z\bigr\rangle -2t_{k}\bigl\langle d^{k}, x^{k}-y^{k}\bigr\rangle +t_{k}^{2} \bigl\| d^{k}\bigr\| ^{2}. $$
(3.10)
By (3.7) we have
$$ \bigl\langle d^{k}, y^{k}-z\bigr\rangle =\bigl\langle x^{k}-\gamma F_{k}\bigl(x^{k}\bigr)-y^{k}, y^{k}-z\bigr\rangle +\gamma\bigl\langle F_{k} \bigl(y^{k}\bigr), y^{k}-z\bigr\rangle . $$
(3.11)
From Lemma 3.2, we obtain
$$ \bigl\langle F_{k}\bigl(y^{k}\bigr), y^{k}-z\bigr\rangle \geq \sum_{j=1}^{r} \beta_{j}\bigl\| (I-P_{Q_{j}^{k}})Ay^{k}\bigr\| ^{2} \geq0. $$
(3.12)
Let \(z^{k}=x^{k}-\gamma F_{k}(x^{k})\). Since \(\sum_{i=1}^{t}\alpha _{i}=1\), we obtain from (3.6) that
$$\begin{aligned}[b] \bigl\langle x^{k}-\gamma F_{k}\bigl(x^{k} \bigr)-y^{k}, y^{k}-z\bigr\rangle &=\bigl\langle z^{k}-y^{k},y^{k}-z\bigr\rangle \\ &=\Biggl\langle \sum_{i=1}^{t} \alpha_{i}\bigl(z^{k}-P_{C_{i}^{k}}\bigl(z^{k} \bigr)\bigr),\sum_{i=1}^{t} \alpha_{i}P_{C_{i}^{k}}\bigl(z^{k}\bigr)-z\Biggr\rangle \\ &=\sum_{i=1}^{t}\sum _{h=1}^{t}\alpha_{i}\alpha_{h} \bigl\langle z^{k}-P_{C_{i}^{k}}\bigl(z^{k} \bigr),P_{C_{h}^{k}}\bigl(z^{k}\bigr)-z\bigr\rangle . \end{aligned} $$
If \(i=h\), then \(\langle z^{k}-P_{C_{i}^{k}}(z^{k}), P_{C_{h}^{k}}(z^{k})-z\rangle\geq0\) by (2.1), since \(z\in C_{i}\subset C_{i}^{k}\). Otherwise, if \(i\neq h\), we have
$$\begin{aligned} &\alpha_{i}\alpha_{h}\bigl\langle z^{k}-P_{C_{i}^{k}} \bigl(z^{k}\bigr),P_{C_{h}^{k}}\bigl(z^{k}\bigr)-z\bigr\rangle +\alpha_{h}\alpha_{i}\bigl\langle z^{k}-P_{C_{h}^{k}} \bigl(z^{k}\bigr),P_{C_{i}^{k}}\bigl(z^{k}\bigr)-z\bigr\rangle \\ &\quad=\alpha_{i}\alpha_{h}\bigl[\bigl\langle z^{k}-P_{C_{i}^{k}}\bigl(z^{k}\bigr),P_{C_{i}^{k}} \bigl(z^{k}\bigr)-z\bigr\rangle +\bigl\langle z^{k}-P_{C_{i}^{k}}\bigl(z^{k} \bigr),P_{C_{h}^{k}}\bigl(z^{k}\bigr)-P_{C_{i}^{k}} \bigl(z^{k}\bigr)\bigr\rangle \bigr] \\ &\qquad{}+\alpha_{h}\alpha_{i}\bigl[\bigl\langle z^{k}-P_{C_{h}^{k}}\bigl(z^{k}\bigr),P_{C_{h}^{k}} \bigl(z^{k}\bigr)-z\bigr\rangle +\bigl\langle z^{k}-P_{C_{h}^{k}}\bigl(z^{k} \bigr),P_{C_{i}^{k}}\bigl(z^{k}\bigr)-P_{C_{h}^{k}} \bigl(z^{k}\bigr)\bigr\rangle \bigr] \\ &\quad\geq\alpha_{i}\alpha_{h}\bigl\| P_{C_{i}^{k}} \bigl(z^{k}\bigr)-P_{C_{h}^{k}}\bigl(z^{k}\bigr) \bigr\| ^{2}. \end{aligned}$$
It means
$$ \bigl\langle x^{k}-\gamma F_{k} \bigl(x^{k}\bigr)-y^{k}, y^{k}-z\bigr\rangle \geq \sum_{i< h}\alpha_{i}\alpha_{h}\bigl\| P_{C_{i}^{k}}\bigl(z^{k}\bigr)-P_{C_{h}^{k}}\bigl(z^{k} \bigr)\bigr\| ^{2}\geq 0. $$
(3.13)
By combining (3.12) and (3.13) with (3.11), we obtain
$$ \bigl\langle d^{k}, y^{k}-z\bigr\rangle \geq\sum _{i< h}\alpha_{i}\alpha_{h}\bigl\| P_{C_{i}^{k}}\bigl(z^{k}\bigr)-P_{C_{h}^{k}}\bigl(z^{k} \bigr)\bigr\| ^{2}+\gamma \sum_{j=1}^{r} \beta_{j}\bigl\| (I-P_{Q_{j}^{k}})Ay^{k}\bigr\| ^{2} \geq0. $$
(3.14)
On the other hand, by definition of \(d^{k}\) in (3.7), we have
$$\begin{aligned} \bigl\langle d^{k}, x^{k}-y^{k}\bigr\rangle =& \bigl\langle d^{k}, x^{k}-y^{k}+\gamma F_{k}\bigl(y^{k}\bigr)-\gamma F_{k} \bigl(x^{k}\bigr)\bigr\rangle +\gamma \bigl\langle d^{k},F_{k} \bigl(x^{k}\bigr)-F_{k}\bigl(y^{k}\bigr) \bigr\rangle \\ =&\bigl\| d^{k}\bigr\| ^{2}+\gamma\bigl\langle x^{k}-y^{k}+ \gamma F_{k}\bigl(y^{k}\bigr)-\gamma F_{k} \bigl(x^{k}\bigr),F_{k}\bigl(x^{k} \bigr)-F_{k}\bigl(y^{k}\bigr)\bigr\rangle \\ =&\bigl\| d^{k}\bigr\| ^{2}+\gamma\bigl\langle x^{k}-y^{k}, F_{k}\bigl(x^{k}\bigr)-F_{k}\bigl(y^{k} \bigr)\bigr\rangle -\gamma ^{2}\bigl\| F_{k}\bigl(x^{k} \bigr)-F_{k}\bigl(y^{k}\bigr)\bigr\| ^{2}. \end{aligned}$$
Since \(F_{k}\) is co-coercive with modulus \(1/\rho(A^{T}A)\), we have \(\langle x^{k}-y^{k}, F_{k}(x^{k})-F_{k}(y^{k})\rangle\geq1/\rho(A^{T}A)\| F_{k}(x^{k})-F_{k}(y^{k})\|^{2}\) for all k, hence
$$\begin{aligned} \bigl\langle d^{k}, x^{k}-y^{k}\bigr\rangle \geq& \bigl\| d^{k}\bigr\| ^{2} +\bigl(\gamma-\gamma^{2}\rho\bigl(A^{T}A\bigr)\bigr) \bigl\langle x^{k}-y^{k}, F_{k}\bigl(x^{k} \bigr)-F_{k}\bigl(y^{k}\bigr)\bigr\rangle \\ =&\bigl\| d^{k}\bigr\| ^{2}+\gamma\bigl(1-\gamma \rho \bigl(A^{T}A\bigr)\bigr)\sum_{j=1}^{r} \beta_{j}\bigl\langle Ax^{k}-Ay^{k}, \\ &{}(I-P_{Q_{j}^{k}})Ax^{k}-(I-P_{Q_{j}^{k}})Ay^{k} \bigr\rangle . \end{aligned}$$
Furthermore, from the 1-co-coercivity of \(I-P_{Q_{j}^{k}}\), we have
$$ \bigl\langle d^{k}, x^{k}-y^{k}\bigr\rangle \geq \bigl\| d^{k}\bigr\| ^{2}+\gamma\bigl(1-\gamma\rho \bigl(A^{T}A \bigr)\bigr)\sum_{j=1}^{r} \beta_{j}\bigl\| (I-P_{Q_{j}^{k}})Ax^{k}-(I-P_{Q_{j}^{k}})Ay^{k} \bigr\| ^{2}. $$
(3.15)
From (3.14), (3.15) and (3.10), we have
$$\begin{aligned} \bigl\| x^{k+1}-z\bigr\| ^{2} =&\bigl\| x^{k}-z \bigr\| ^{2}-2t_{k}\bigl\langle d^{k},y^{k}-z \bigr\rangle -2t_{k}\bigl\langle d^{k},x^{k}-y^{k} \bigr\rangle +t_{k}^{2}\bigl\| d^{k}\bigr\| ^{2} \\ \leq&\bigl\| x^{k}-z\bigr\| ^{2}-t_{k}(2-t_{k}) \bigl\| d^{k}\bigr\| ^{2} \\ &{}-2t_{k}\gamma\bigl(1-\gamma\rho\bigl(A^{T}A\bigr)\bigr)\sum _{j=1}^{r}\beta_{j}\bigl\| ( I-P_{Q_{j}^{k}})Ax^{k} -( I-P_{Q_{j}^{k}})Ay^{k}\bigr\| ^{2} \\ &{}-2t_{k} \Biggl[\sum_{i< h}\alpha_{i} \alpha_{h}\bigl\| P_{C_{i}^{k}}\bigl(z^{k}\bigr)-P_{C_{h}^{k}} \bigl(z^{k}\bigr)\bigr\| ^{2} +\gamma \sum_{j=1}^{r}\beta_{j} \bigl\| (I-P_{Q_{j}^{k}})Ay^{k}\bigr\| ^{2}\Biggr]. \end{aligned}$$
(3.16)
Since \(t_{k}\in(0,2)\) and \(\gamma\in(0,\frac{1}{\rho(A^{T}A)})\) in the algorithm, we conclude that the sequence \(\{\|x^{k}-z\|\}\) is monotonically nonincreasing, hence convergent, and \(\{x^{k}\}\) is bounded. Therefore there exists the limit
$$ \lim_{k\rightarrow\infty}\bigl\| x^{k}-z\bigr\| =d, $$
(3.17)
which combined with (3.9)-(3.10), (3.16) implies
$$\begin{aligned}& \lim_{k\rightarrow\infty}\bigl\| d^{k}\bigr\| =\lim_{k\rightarrow\infty}\bigl\| x^{k}-y^{k}+\gamma\bigl(F_{k}\bigl(y^{k} \bigr)-F_{k}\bigl(x^{k}\bigr)\bigr)\bigr\| =0, \end{aligned}$$
(3.18)
$$\begin{aligned}& \lim_{k\rightarrow\infty}\bigl\| ( I-P_{Q_{j}^{k}})Ax^{k}-( I-P_{Q_{j}^{k}})Ay^{k}\bigr\| =0,\quad\forall j, \end{aligned}$$
(3.19)
$$\begin{aligned}& \lim_{k\rightarrow\infty}\bigl\| P_{C_{i}^{k}}\bigl(z^{k} \bigr)-P_{C_{h}^{k}}\bigl(z^{k}\bigr)\bigr\| =0, \quad\forall i\neq h, \end{aligned}$$
(3.20)
$$\begin{aligned}& \lim_{k\rightarrow\infty}\bigl\| (I-P_{Q_{j}^{k}})Ay^{k}\bigr\| =0,\quad \forall j. \end{aligned}$$
(3.21)

Since the sequence \(\{x^{k}\}\) is bounded, there exist a subsequence \(\{x^{k_{l}}\}\) of \(\{x^{k}\}\) converging to a point \(x^{\ast}\) and a corresponding subsequence \(\{Ax^{k_{l}}\}\) of \(\{Ax^{k}\}\) converging to \(Ax^{\ast}\). Now we show that \(x^{\ast}\in SOL(MSSFP)\); namely, we show \(\lim_{l\rightarrow\infty} c_{i}(x^{k_{l}})\leq0\) and \(\lim_{l\rightarrow\infty} q_{j}(Ax^{k_{l}})\leq 0\) for all i and j.

First, since \(P_{Q_{j}^{k_{l}}}(Ax^{k_{l}})\in Q_{j}^{k_{l}}\), we have
$$q_{j}\bigl(Ax^{k_{l}}\bigr)+\bigl\langle \eta_{j}^{k_{l}}, P_{Q_{j}^{k_{l}}}\bigl(A x^{k_{l}}\bigr)-A x^{k_{l}}\bigr\rangle \leq0. $$
We know from Lemma 3.1 that the subgradient sequence \(\{\eta_{j}^{k}\}\) is bounded. By (3.19) and (3.21) we get \(P_{Q_{j}^{k_{l}}}(A x^{k_{l}})-A x^{k_{l}}\rightarrow0\). Thus, we have \(\lim_{l\rightarrow\infty} q_{j}(Ax^{k_{l}})\leq 0\) for all j.
Second, noting that \(P_{C_{i}^{k_{l}}}(x^{k_{l}})\in C_{i}^{k_{l}}\), we have
$$c_{i}\bigl(x^{k_{l}}\bigr)+\bigl\langle \xi_{i}^{k_{l}}, P_{C_{i}^{k_{l}}}\bigl( x^{k_{l}}\bigr)- x^{k_{l}}\bigr\rangle \leq0. $$
Since \(\{x^{k}\}\) is bounded, by Lemma 3.1 the sequence \(\{\xi_{i}^{k}\}\) is also bounded. Then it suffices to show that \(P_{C_{i}^{k_{l}}}(x^{k_{l}})-x^{k_{l}}\rightarrow0\). We know from (3.19) and (3.21) that \(F_{k_{l}}(y^{k_{l}})\rightarrow0\) and \(F_{k_{l}}(x^{k_{l}})\rightarrow 0\). It follows that \(z^{k_{l}}=x^{k_{l}}-\gamma F_{k_{l}}(x^{k_{l}})\rightarrow x^{\ast}\), and then by (3.18), \(y^{k_{l}}\rightarrow x^{\ast}\). Combining \(y^{k_{l}}=\sum_{i=1}^{t}\alpha_{i}P_{C_{i}^{k_{l}}}(x^{k_{l}}-\gamma F_{k_{l}}(x^{k_{l}}))\) with \(F_{k_{l}}(x^{k_{l}})\rightarrow0\) and \(\|P_{C_{i}^{k_{l}}}(x^{k_{l}})-P_{C_{h}^{k_{l}}}(x^{k_{l}})\| \rightarrow0\), \(\forall i\neq h\), by (3.20), we conclude that \(y^{k_{l}}-P_{C_{i}^{k_{l}}}(x^{k_{l}})\rightarrow 0\), since \(\sum_{i=1}^{t}\alpha_{i}=1\). This leads to \(P_{C_{i}^{k_{l}}}(x^{k_{l}})-x^{k_{l}}\rightarrow 0\), and thereby \(\lim_{l\rightarrow\infty}c_{i}(x^{k_{l}})\leq 0\) for \(i=1,2,\ldots,t\).
Replacing z by \(x^{\ast}\) in (3.17), which is valid since \(x^{\ast}\in SOL(MSSFP)\), we see that the limit
$$\lim_{k\rightarrow\infty}\bigl\| x^{k}-x^{\ast}\bigr\| $$
exists. On the other hand,
$$\lim_{l\rightarrow\infty}\bigl\| x^{k_{l}}-x^{\ast}\bigr\| =0. $$
Thus, \(\lim_{k\rightarrow\infty}\|x^{k}-x^{\ast}\|=0\), that is, the whole sequence \(\{x^{k}\}\) converges to \(x^{\ast}\). The proof of Theorem 3.1 is complete. □

4 Numerical experiments

In the numerical results listed in Tables 1 and 2, ‘Iter.’ and ‘Sec.’ denote the number of iterations and the CPU time in seconds, respectively. We denote \(e_{0}=(0,0,\ldots,0)\in\Re^{N}\) and \(e_{1}=(1,1,\ldots,1)\in\Re^{N}\). In both numerical experiments, we take the weights \(1/(r+t)\) for both Algorithm 3.1 and Censor’s algorithm. The stopping criterion is \(\|d^{k}\|<\varepsilon=10^{-5}\).
Table 1 The numerical results of Example 4.1

| Case | Censor, γ = 1 | Algo. 3.1, γ = 1, t_k = 0.1 | Censor, γ = 0.6 | Algo. 3.1, γ = 0.6, t_k = 0.1 | Censor, γ = 1.8 | Algo. 3.1, γ = 1.8, t_k = 0.1 |
|---|---|---|---|---|---|---|
| I | Iter. = 1,051; Sec. = 1.043 | Iter. = 146; Sec. = 0.401 | Iter. = 1,867; Sec. = 1.480 | Iter. = 224; Sec. = 0.334 | Iter. = 832; Sec. = 0.700 | Iter. = 89; Sec. = 0.062 |
| II | Iter. = 197; Sec. = 0.320 | Iter. = 28; Sec. = 0.017 | Iter. = 289; Sec. = 0.466 | Iter. = 62; Sec. = 0.0751 | Iter. = 87; Sec. = 0.068 | Iter. = 9; Sec. = 0.010 |
| III | Iter. = 207; Sec. = 0.360 | Iter. = 62; Sec. = 0.049 | Iter. = 362; Sec. = 0.551 | Iter. = 67; Sec. = 0.0728 | Iter. = 139; Sec. = 0.217 | Iter. = 17; Sec. = 0.020 |

Table 2 The numerical results of Example 4.2

| N | t, r | Censor, γ = 1 | Algo. 3.1, γ = 1, t_k = 0.01 | Censor, γ = 0.8 | Algo. 3.1, γ = 0.8, t_k = 0.01 | Censor, γ = 1.6 | Algo. 3.1, γ = 1.6, t_k = 0.01 |
|---|---|---|---|---|---|---|---|
| 20 | t = 5, r = 5 | Iter. = 181; Sec. = 0.268 | Iter. = 16; Sec. = 0.021 | Iter. = 288; Sec. = 0.499 | Iter. = 20; Sec. = 0.022 | Iter. = 147; Sec. = 0.213 | Iter. = 9; Sec. = 0.017 |
| 40 | t = 10, r = 15 | Iter. = 1,012; Sec. = 1.032 | Iter. = 39; Sec. = 0.048 | Iter. = 2,320; Sec. = 2.122 | Iter. = 57; Sec. = 0.059 | Iter. = 893; Sec. = 0.795 | Iter. = 19; Sec. = 0.031 |

Example 4.1

The MSSFP with
$$\begin{aligned}& A= \begin{bmatrix} 2 & -1& 3 &2&3\\ 1 & 2& 5&2&1 \\ 2 & 0& 2&1&-2\\ 2&-1&0&-3&5 \end{bmatrix}; \\& C_{1}=\bigl\{ x\in\Re^{5}\mid x_{1}+2x_{2}+x_{3}+x_{4} \leq5\bigr\} ; \\& C_{2}=\bigl\{ x\in\Re^{5}\mid x_{2}+ 4x_{4}+4x_{5}\leq1\bigr\} \end{aligned}$$
and
$$\begin{aligned}& Q_{1}=\bigl\{ y\in\Re^{4} \mid y_{1}+y_{4} \leq1\bigr\} ; \\& Q_{2}=\bigl\{ y\in\Re^{4} \mid 2y_{2}+3y_{3} \leq6\bigr\} ; \\& Q_{3}=\bigl\{ y\in\Re^{4} \mid y_{3}+2y_{4} \leq10\bigr\} . \end{aligned}$$
Consider the following three cases:
  • Case 1: \(x^{0}=(1,-1,1,-1,1)\);

  • Case 2: \(x^{0}=(1,1,1,1,1)\);

  • Case 3: \(x^{0}=(5,0,5,0,5)\).
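For illustration (not the authors' code), the data of Example 4.1 can be encoded as half-spaces \(\{u : \langle n, u\rangle \leq c\}\) stored as (normal, offset) pairs, together with a feasibility check; the names `C`, `Q`, and `is_feasible` are hypothetical:

```python
import numpy as np

A = np.array([[2, -1, 3, 2, 3],
              [1, 2, 5, 2, 1],
              [2, 0, 2, 1, -2],
              [2, -1, 0, -3, 5]], dtype=float)

# Each constraint set is a half-space {u : <n, u> <= c}, stored as (n, c).
C = [(np.array([1., 2., 1., 1., 0.]), 5.0),
     (np.array([0., 1., 0., 4., 4.]), 1.0)]
Q = [(np.array([1., 0., 0., 1.]), 1.0),
     (np.array([0., 2., 3., 0.]), 6.0),
     (np.array([0., 0., 1., 2.]), 10.0)]

def is_feasible(x, tol=1e-9):
    """True if x solves the MSSFP instance: x in every C_i and Ax in every Q_j."""
    ok_C = all(n @ x <= c + tol for n, c in C)
    ok_Q = all(n @ (A @ x) <= c + tol for n, c in Q)
    return ok_C and ok_Q
```

For example, the origin is feasible, while the Case 3 starting point \(x^{0}=(5,0,5,0,5)\) violates \(C_{1}\).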

Example 4.2

[19]

In this example, because the step size is related to \(\rho(A^{T}A)\), for easy control of the spectral radius we take A to be a diagonal matrix with diagonal entries \(a_{ii}\in(0,1)\) generated randomly, and
$$\begin{aligned}& C_{i}=\bigl\{ x\in\Re^{N} \mid \|x-d_{i} \|^{2}\leq r_{i}\bigr\} ,\quad i=1,2,\ldots,t; \\& Q_{j}=\bigl\{ y\in\Re^{N} \mid L_{j}\leq y\leq U_{j}\bigr\} ,\quad j=1,2,\ldots,r; \end{aligned}$$
where \(d_{i}\) is the center of the ball \(C_{i}\), \(e_{0}\leq d_{i}\leq10e_{1}\), and \(r_{i}\in(40,50)\) is the radius; \(d_{i}\) and \(r_{i}\) are all generated randomly. \(L_{j}\) and \(U_{j}\) are the lower and upper bounds of the box \(Q_{j}\) and are also generated randomly, satisfying \(20e_{1}\leq L_{j}\leq30e_{1}\), \(40e_{1}\leq U_{j}\leq 80e_{1}\). In this test, we take \(e_{0}\) as the initial point.
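Both set families in this example admit closed-form projections, which is what makes the test convenient: balls project radially and boxes project by componentwise clipping. A sketch (function names hypothetical; the ball radius is treated generically):

```python
import numpy as np

def proj_ball(x, center, radius):
    # Euclidean projection onto the ball {u : ||u - center|| <= radius}.
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= radius else center + (radius / nd) * d

def proj_box(y, L, U):
    # Euclidean projection onto the box {v : L <= v <= U}.
    return np.clip(y, L, U)
```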

The results in Tables 1 and 2 show that, for most initial points, the number of iterations and the CPU time of Algorithm 3.1 are clearly smaller than those of Censor et al.’s algorithm. Moreover, when we take \(N=1{,}000\), Algorithm 3.1 needs only a few hundred iterations. The numerical results also show that for large-scale problems Algorithm 3.1 converges faster than Censor’s algorithm.

5 Conclusion

The multiple-set split feasibility problem arises in many practical applications. This paper constructed a new searching direction, which is not the gradient of the corresponding proximity function. This different direction results in a very different way of analysis. Preliminary numerical results show that our new method converges faster, and the advantage becomes more obvious as the dimension increases. Finally, the theoretical analysis is based on the assumption that the solution set of the MSSFP is nonempty.

Acknowledgements

This work was supported by Natural Science Foundation of Shanghai (14ZR1429200), National Science Foundation of China (11171221), National Natural Science Foundation of China (61403255), Shanghai Leading Academic Discipline Project under Grant XTKX2012, Innovation Program of Shanghai Municipal Education Commission under Grant 14YZ094, Doctoral Program Foundation of Institutions of Higher Education of China under Grant 20123120110004, Doctoral Starting Projection of the University of Shanghai for Science and Technology under Grant ID-10-303-002, and Young Teacher Training Projection Program of Shanghai for Science and Technology.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
School of Management, University of Shanghai for Science and Technology
(2)
College of Computer Science and Technology, Henan Polytechnic University
(3)
School of Physics and Chemistry, Henan Polytechnic University

References

  1. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems 21, 2071-2084 (2005)
  2. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
  3. Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)
  4. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
  5. Dang, Y, Gao, Y: The strong convergence of a KM-CQ-like algorithm for split feasibility problem. Inverse Problems 27, 015007 (2011)
  6. Dang, Y, Gao, Y: A perturbed projection algorithm with inertial technique for split feasibility problem. J. Appl. Math. (2012). doi:10.1155/2012/207323
  7. Dang, Y, Gao, Y: An extrapolated iterative algorithm for multiple-set split feasibility problem. Abstr. Appl. Anal. 2012, Article ID 149508 (2012)
  8. Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Problems 20, 1261-1266 (2004)
  9. Xu, H: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Problems 22, 2021-2034 (2006)
  10. Masad, E, Reich, S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367-371 (2007)
  11. Censor, Y, Segal, A: On string-averaging for sparse problems and on the split common fixed point problem. Contemp. Math. 513, 125-142 (2010)
  12. Censor, Y, Segal, A: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587-600 (2009)
  13. Censor, Y, Motova, A, Segal, A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327, 1244-1256 (2007)
  14. Gao, Y: Piecewise smooth Lyapunov function for a nonlinear dynamical system. J. Convex Anal. 19, 1009-1016 (2012)
  15. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367-426 (1996)
  16. Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
  17. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
  18. Gao, Y: Nonsmooth Optimization. Science Press, Beijing (2008) (in Chinese)
  19. Rockafellar, RT: Convex Analysis. Princeton University Press, Princeton (1971)

Copyright

© Dang and Xue; licensee Springer. 2015