Iterative process for solving a multiple-set split feasibility problem
Journal of Inequalities and Applications volume 2015, Article number: 47 (2015)
Abstract
This paper presents a variant relaxed CQ algorithm that uses a new search direction, which is not the gradient of the corresponding proximity function; the aim of this strategy is to improve convergence. Convergence of the algorithm is proved under suitable conditions. Numerical results illustrate that the variant relaxed CQ algorithm converges more quickly than existing algorithms.
Introduction
The multiple-set split feasibility problem (MSSFP) is to find a point contained in the intersection of a family of closed convex sets in one space such that its image under a linear transformation is contained in the intersection of another family of closed convex sets in the image space. Formally, given nonempty closed convex sets \(C_{i}\subseteq\Re^{N}\), \(i=1,2,\ldots,t\), in the N-dimensional Euclidean space \(\Re^{N}\), nonempty closed convex sets \(Q_{j}\subseteq\Re^{M}\), \(j=1,2,\ldots,r\), and an \(M\times N\) real matrix A, the MSSFP is to find a point x such that
$$ x\in C:=\bigcap_{i=1}^{t}C_{i} \quad\text{and}\quad Ax\in Q:=\bigcap_{j=1}^{r}Q_{j}. $$
This MSSFP, formulated in [1], arises in the field of intensity-modulated radiation therapy (IMRT) when one attempts to describe physical dose constraints and equivalent uniform dose (EUD) constraints within a single model; see [2, 3]. In particular, when \(t=r=1\), the problem reduces to the two-set split feasibility problem (abbreviated as SFP), which is to find a point \(x\in C\) such that \(Ax\in Q\) (see [4–6]).
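The feasibility requirement above is a plain membership test. The following sketch (illustrative sets and helper names, not from the paper) checks whether a candidate point solves a small MSSFP instance whose sets are given as sublevel sets of convex functions:

```python
import numpy as np

def is_mssfp_solution(x, A, C_tests, Q_tests, tol=1e-8):
    """Check that x lies in every C_i and Ax lies in every Q_j, where each
    set is described by a convex inequality c_i(x) <= 0 resp. q_j(y) <= 0."""
    return (all(c(x) <= tol for c in C_tests)
            and all(q(A @ x) <= tol for q in Q_tests))

# Two balls in R^2 and one box-type image constraint under A = I.
A = np.eye(2)
C = [lambda x: np.linalg.norm(x) - 2.0,
     lambda x: np.linalg.norm(x - np.array([1.0, 0.0])) - 2.0]
Q = [lambda y: np.max(np.abs(y)) - 1.5]
assert is_mssfp_solution(np.array([0.5, 0.0]), A, C, Q)
assert not is_mssfp_solution(np.array([3.0, 0.0]), A, C, Q)
```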
For solving the MSSFP, Censor et al. in [1] introduced a proximity function \(p(x)\) to measure the aggregate distance of a point to all sets. The function \(p(x)\) is defined as
$$ p(x)=\frac{1}{2}\sum_{i=1}^{t}\alpha_{i}\bigl\Vert x-P_{C_{i}}(x)\bigr\Vert ^{2}+\frac{1}{2}\sum_{j=1}^{r}\beta_{j}\bigl\Vert Ax-P_{Q_{j}}(Ax)\bigr\Vert ^{2}, $$
where \(\alpha_{i}>0\), \(\beta_{j}>0\) for all i and j, respectively, and \(\sum_{i=1}^{t}\alpha_{i}+\sum_{j=1}^{r}\beta_{j}=1\). Then they proposed a projection algorithm as follows:
$$ x^{k+1}=P_{\Omega}\bigl(x^{k}-\gamma\nabla p\bigl(x^{k}\bigr)\bigr), $$
where \(\Omega\subset\Re^{N}\) is an auxiliary set and \(x^{k}\) is the current iterate, \(0<\gamma<2/L\) with \(L=\sum_{i=1}^{t}\alpha_{i}+\rho(A^{T}A)\sum_{j=1}^{r}\beta_{j}\), and \(\rho(A^{T}A)\) is the spectral radius of \(A^{T}A\). Subsequently, many methods have been developed for solving the MSSFP [7–14]; most of these algorithms aim at minimizing the proximity function \(p(x)\) and use its gradient ∇p.
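To make this classical scheme concrete, the sketch below (hypothetical helper names; projections are passed in as callables) performs one gradient-projection step on a tiny SFP instance. It is one reading of the update above under the stated step-size rule, not the authors' code:

```python
import numpy as np

def censor_step(x, A, proj_C, proj_Q, alpha, beta, gamma,
                proj_Omega=lambda u: u):
    """One step x+ = P_Omega(x - gamma * grad p(x)), where
    grad p(x) = sum_i alpha_i (x - P_{C_i} x)
              + sum_j beta_j A^T (Ax - P_{Q_j} Ax)."""
    grad = sum(a * (x - pc(x)) for a, pc in zip(alpha, proj_C))
    grad = grad + sum(b * (A.T @ (A @ x - pq(A @ x)))
                      for b, pq in zip(beta, proj_Q))
    return proj_Omega(x - gamma * grad)

# SFP step with C = unit ball, Q = nonnegative orthant, A = I,
# weights alpha + beta summing to 1 and gamma < 2/L with L = 1.
A = np.eye(2)
x = np.array([2.0, -2.0])
x1 = censor_step(x, A,
                 [lambda u: u / max(1.0, np.linalg.norm(u))],
                 [lambda v: np.maximum(v, 0.0)],
                 alpha=[0.5], beta=[0.5], gamma=0.9)
```

Since \(\gamma<2/L\), one such step decreases the proximity function unless the current point already minimizes it.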
Different from most of the existing methods, in this paper we construct a new search direction, which is not the gradient ∇p. This difference leads to a very different convergence analysis. Moreover, preliminary numerical experiments show that our new method converges faster than most existing methods.
The paper is organized as follows. Section 2 reviews some preliminaries. Section 3 gives a variant relaxed projection algorithm and shows its convergence. Section 4 gives some numerical experiments. Some conclusions are drawn in Section 5.
Preliminaries
Throughout the rest of the paper, I denotes the identity operator, \(\operatorname{Fix}(T)\) denotes the set of the fixed points of an operator T, i.e., \(\operatorname{Fix}(T):=\{x \mid x=T(x)\}\).
Let T be a mapping from \(\aleph\subseteq \Re^{N}\) into \(\Re^{N}\). T is called cocoercive on ℵ with modulus \(\mu>0\) if
$$ \bigl\langle T(x)-T(y), x-y\bigr\rangle\geq\mu\bigl\Vert T(x)-T(y)\bigr\Vert ^{2},\quad \forall x,y\in\aleph; $$
it is called Lipschitz continuous on ℵ with constant \(L>0\) if
$$ \bigl\Vert T(x)-T(y)\bigr\Vert \leq L\Vert x-y\Vert ,\quad \forall x,y\in\aleph; $$
it is called monotone on ℵ if
$$ \bigl\langle T(x)-T(y), x-y\bigr\rangle\geq0,\quad \forall x,y\in\aleph. $$
It is obvious that the cocoercivity (with modulus μ) implies the Lipschitz continuity (with constant \(1/\mu\)) and monotonicity.
Let S be a nonempty closed convex subset of \(\Re^{N}\). Denote by \(P_{S}\) the orthogonal projection onto S; that is,
$$ P_{S}(x):=\operatorname{argmin}\Vert x-y\Vert $$
over all \(y\in S\).
It is well known that the orthogonal projection operator \(P_{S}\), for any \(x,y\in\Re^{N}\) and any \(z\in S\), is characterized by the inequalities [15]
$$ \bigl\langle x-P_{S}(x), z-P_{S}(x)\bigr\rangle\leq0 $$
and
$$ \bigl\Vert P_{S}(x)-P_{S}(y)\bigr\Vert ^{2}\leq\bigl\langle P_{S}(x)-P_{S}(y), x-y\bigr\rangle. $$
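The first characterizing inequality can be checked numerically. A minimal sketch (illustrative, not from the paper), taking S to be a box so that the projection is a componentwise clip:

```python
import numpy as np

def project_box(x, lo, hi):
    """Orthogonal projection onto the box [lo, hi] (componentwise clipping)."""
    return np.clip(x, lo, hi)

# Check the inequality <x - P_S(x), z - P_S(x)> <= 0 for sampled z in S.
rng = np.random.default_rng(0)
lo, hi = -np.ones(4), np.ones(4)
x = rng.normal(scale=3.0, size=4)   # arbitrary point, possibly outside S
px = project_box(x, lo, hi)         # its projection onto S
for _ in range(100):
    z = rng.uniform(lo, hi)         # random point of S
    assert np.dot(x - px, z - px) <= 1e-12
```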
Recall the notion of the subdifferential for an appropriate convex function.
Definition 2.1
Let \(f : \Re^{N}\rightarrow\Re\) be convex. The subdifferential of f at x is defined as
$$ \partial f(x):=\bigl\{ \xi\in\Re^{N} \mid f(y)\geq f(x)+\langle\xi, y-x\rangle, \forall y\in\Re^{N}\bigr\}. $$
An element of \(\partial f(x)\) is called a subgradient of f at x.
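For instance, for the ball constraint \(c(x)=\Vert x-d\Vert -r\) a subgradient is available in closed form. The sketch below (illustrative names; at the kink \(x=d\) we take the valid subgradient 0) verifies the subgradient inequality at random points:

```python
import numpy as np

def c_ball(x, d, r):
    """Convex function whose 0-sublevel set is the ball {x : ||x - d|| <= r}."""
    return np.linalg.norm(x - d) - r

def subgrad_c_ball(x, d):
    """A subgradient of c_ball at x; at x = d the zero vector is valid."""
    g = x - d
    n = np.linalg.norm(g)
    return g / n if n > 0 else np.zeros_like(g)

# Verify f(y) >= f(x) + <xi, y - x> at random point pairs.
rng = np.random.default_rng(1)
d, r = np.zeros(3), 2.0
for _ in range(200):
    x, y = rng.normal(size=3), rng.normal(size=3)
    xi = subgrad_c_ball(x, d)
    assert c_ball(y, d, r) >= c_ball(x, d, r) + xi @ (y - x) - 1e-12
```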
Lemma 2.1
[16]
An operator T is cocoercive with modulus 1 if and only if the operator \(I-T\) is cocoercive with modulus 1, where I denotes the identity operator.
It is easy to see from the above lemma that orthogonal projection operators are monotone and cocoercive with modulus 1, and that the operator \(I-P_{Q}\) is also cocoercive with modulus 1.
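This modulus-1 cocoercivity (firm nonexpansiveness) of both \(P_{S}\) and \(I-P_{S}\) can be checked numerically; a small sketch (illustrative, with S the unit ball):

```python
import numpy as np

# Projection onto the closed unit ball in R^3.
proj = lambda u: u / max(1.0, np.linalg.norm(u))

# Check <x - y, T(x) - T(y)> >= ||T(x) - T(y)||^2 for T = P_S and T = I - P_S.
rng = np.random.default_rng(2)
for _ in range(200):
    x, y = rng.normal(size=3, scale=3.0), rng.normal(size=3, scale=3.0)
    px, py = proj(x), proj(y)
    assert (x - y) @ (px - py) >= np.linalg.norm(px - py) ** 2 - 1e-10
    rx, ry = x - px, y - py          # residuals of I - P_S
    assert (x - y) @ (rx - ry) >= np.linalg.norm(rx - ry) ** 2 - 1e-10
```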
Algorithm and its convergence
The variant relaxed CQ algorithm
As in [12], we suppose that the following conditions are satisfied:

(1)
The solution set of the MSSFP is nonempty.

(2)
The sets \(C_{i}\), \(i=1,2,\ldots,t\), are denoted as
$$ C_{i}=\bigl\{ x\in\Re^{N} \mid c_{i}(x)\leq0\bigr\} , $$
(3.1)
where \(c_{i}:\Re^{N}\rightarrow\Re\), \(i=1,2,\ldots,t\), are convex functions and \(C_{i}\), \(i=1,2,\ldots,t\), are nonempty.
The sets \(Q_{j}\), \(j=1,2,\ldots,r\), are denoted as
$$ Q_{j}=\bigl\{ y\in\Re^{M} \mid q_{j}(y)\leq0\bigr\} , $$
(3.2)
where \(q_{j}:\Re^{M}\rightarrow\Re\), \(j=1,2,\ldots,r\), are convex functions and \(Q_{j}\), \(j=1,2,\ldots,r\), are nonempty.

(3)
For any \(x\in\Re^{N} \), at least one subgradient \(\xi_{i}\in\partial c_{i}(x)\) can be calculated.
For any \(y\in \Re^{M}\), at least one subgradient \(\eta_{j}\in\partial q_{j}(y)\) can be computed.
Now, we define the following half-spaces at the point \(x^{k}\):
$$ C_{i}^{k}=\bigl\{ x\in\Re^{N} \mid c_{i}\bigl(x^{k}\bigr)+\bigl\langle\xi_{i}^{k}, x-x^{k}\bigr\rangle\leq0\bigr\} , $$
where \(\xi_{i}^{k}\) is an element in \(\partial c_{i}(x^{k})\) for \(i=1,2,\ldots,t\), and
$$ Q_{j}^{k}=\bigl\{ y\in\Re^{M} \mid q_{j}\bigl(Ax^{k}\bigr)+\bigl\langle\eta_{j}^{k}, y-Ax^{k}\bigr\rangle\leq0\bigr\} , $$
where \(\eta_{j}^{k}\) is an element in \(\partial q_{j}(Ax^{k})\) for \(j=1,2,\ldots,r\).
By the definition of the subgradient, it is clear that the half-spaces \(C_{i}^{k}\) and \(Q_{j}^{k}\) contain \(C_{i}\) and \(Q_{j}\), \(i=1,2,\ldots,t\); \(j=1,2,\ldots,r\), respectively. Due to the specific form of \(C_{i}^{k}\) and \(Q_{j}^{k}\), the orthogonal projections onto \(C_{i}^{k}\) and \(Q_{j}^{k}\), \(i=1,2,\ldots,t\); \(j=1,2,\ldots,r\), may be computed directly; see [15].
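Indeed, projection onto a half-space \(H=\{x \mid \langle a,x\rangle\leq b\}\) has the well-known closed form \(P_{H}(x)=x-\max(0,\langle a,x\rangle-b)a/\Vert a\Vert ^{2}\), which is what makes the relaxation computationally attractive. A sketch:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection onto H = {x : <a, x> <= b}, in closed form."""
    viol = a @ x - b
    if viol <= 0:
        return x.copy()                 # x already lies in H
    return x - (viol / (a @ a)) * a     # move to the boundary hyperplane

# When x violates the constraint, the projection lands on the boundary.
a, b = np.array([1.0, 2.0]), 1.0
x = np.array([3.0, 3.0])
p = project_halfspace(x, a, b)
assert abs(a @ p - b) < 1e-12
```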
Now, we give the variant relaxed CQ algorithm.
Algorithm 3.1
Given \(\alpha_{i}>0\) and \(\beta_{j}>0\) such that \(\sum_{i=1}^{t}\alpha_{i}=1\) and \(\sum_{j=1}^{r}\beta_{j}=1\), \(\gamma\in (0,\frac{1}{\rho(A^{T}A)})\), and \(t_{k}\in(0,2)\).
Choose an arbitrary initial point \(x^{0}\in\Re^{N}\). Define a mapping \(F_{k}: \Re^{N}\rightarrow\Re^{N}\) as
$$ F_{k}(x)=\sum_{j=1}^{r}\beta_{j}A^{T}\bigl(I-P_{Q_{j}^{k}}\bigr)Ax. $$
(3.4)
For \(k=0,1,2,\ldots\) , compute
$$ y^{k}=\sum_{i=1}^{t}\alpha_{i}P_{C_{i}^{k}}\bigl(x^{k}-\gamma F_{k}\bigl(x^{k}\bigr)\bigr), $$
(3.5)
that is, \(y^{k}=\sum_{i=1}^{t}\alpha_{i}P_{C_{i}^{k}}(z^{k})\) with \(z^{k}=x^{k}-\gamma F_{k}(x^{k})\). (3.6)
Let
$$ d^{k}=\bigl(x^{k}-y^{k}\bigr)-\gamma\bigl(F_{k}\bigl(x^{k}\bigr)-F_{k}\bigl(y^{k}\bigr)\bigr). $$
(3.7)
Set
$$ x^{k+1}=x^{k}-t_{k}d^{k}. $$
(3.8)
In this algorithm, we can take \(\Vert d^{k}\Vert <\varepsilon\) for some given precision ε as the stopping criterion. We apply \(y^{k}\) and \(F_{k}\) to construct the search direction \(d^{k}\). This choice of a new search direction leads to a quite different way of establishing the convergence result of Algorithm 3.1.
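To make the structure of the iteration concrete, here is a minimal sketch under explicit assumptions: the update formulas are one plausible reading of the algorithm (a projection-contraction style step), the relaxed half-space projections are replaced for simplicity by projections onto the original sets (passed in as callables), and all names are illustrative:

```python
import numpy as np

def mssfp_iterate(x0, A, proj_C, proj_Q, alpha, beta, gamma,
                  t=1.0, eps=1e-5, max_iter=10000):
    """Sketch of a relaxed-CQ-style iteration for the MSSFP (illustrative).

    F(x)  = sum_j beta_j A^T (I - P_{Q_j}) A x  (not the gradient of p in general),
    y     = sum_i alpha_i P_{C_i}(x - gamma F(x)),
    d     = (x - y) - gamma (F(x) - F(y)),      x_new = x - t d.
    Stop when ||d|| < eps. Here P_{C_i}, P_{Q_j} stand in for the
    half-space projections used by the relaxed algorithm.
    """
    def F(u):
        return sum(b * (A.T @ (A @ u - pq(A @ u)))
                   for b, pq in zip(beta, proj_Q))

    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        y = sum(a * pc(x - gamma * Fx) for a, pc in zip(alpha, proj_C))
        d = (x - y) - gamma * (Fx - F(y))
        if np.linalg.norm(d) < eps:
            break
        x = x - t * d
    return x

# Tiny SFP instance (t = r = 1): C = unit ball, Q = box [0, 2]^2, A = I.
A = np.eye(2)
proj_C = [lambda u: u / max(1.0, np.linalg.norm(u))]
proj_Q = [lambda v: np.clip(v, 0.0, 2.0)]
x = mssfp_iterate(np.array([3.0, -2.0]), A, proj_C, proj_Q,
                  alpha=[1.0], beta=[1.0], gamma=0.9)
assert np.linalg.norm(x) <= 1.0 + 1e-3                  # x approximately in C
assert np.all(x >= -1e-3) and np.all(x <= 2.0 + 1e-3)   # Ax approximately in Q
```

Note \(\gamma=0.9<1/\rho(A^{T}A)=1\) here, matching the step-size condition of the algorithm.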
By Lemma 8.1 in [17], the operator \(A^{T}(I-P_{Q_{j}^{k}})A\) is \(1/\rho(A^{T}A)\)-inverse strongly monotone (\(1/\rho(A^{T}A)\)-ism), i.e., cocoercive with modulus \(1/\rho(A^{T}A)\), and Lipschitz continuous with constant \(\rho(A^{T}A)\).
Convergence of the variant relaxed CQ algorithm
In this subsection, we establish the convergence of Algorithm 3.1.
The following results will be needed in convergence analysis of the proposed algorithm.
Lemma 3.1
Suppose that \(f: \Re^{N}\rightarrow\Re\) is convex. Then its subdifferential is uniformly bounded on any bounded subsets of \(\Re^{N}\).
Lemma 3.2
Assume that z is an arbitrary solution of the MSSFP (i.e., \(z\in \mathit{SOL}(\mathit{MSSFP})\)). Then, for any \(u\in\Re^{N}\), it holds that
$$ \bigl\langle u-z, F_{k}(u)\bigr\rangle\geq\sum_{j=1}^{r}\beta_{j}\bigl\Vert Au-P_{Q_{j}^{k}}(Au)\bigr\Vert ^{2}. $$
Proof
If \(z\in \mathit{SOL}(\mathit{MSSFP})\), then \(Az\in Q_{j}\subset Q_{j}^{k}\) for all \(j=1,\ldots,r\), and thus \(F_{k}(z)=0\). Since the mappings \(I-P_{Q_{j}^{k}}\) are cocoercive with modulus 1, it follows that
□
Now, we state the convergence of Algorithm 3.1.
Theorem 3.1
Assume that the set of solutions of the multiple-set split feasibility problem is nonempty. Then any sequence \(\{x^{k}\}_{k=0}^{\infty}\) generated by Algorithm 3.1 converges to a solution of the multiple-set split feasibility problem.
Proof
Let z be a solution of the MSSFP. Since \(C_{i}\subset C_{i}^{k}\) and \(Q_{j}\subset Q_{j}^{k}\), we have \(z=P_{C_{i}}z=P_{C_{i}^{k}}z\) and \(Az=P_{Q_{j}}Az=P_{Q_{j}^{k}}Az\) for all i and j, and therefore \(F_{k}(z)=0\). By Algorithm 3.1, we have
hence
By (3.7) we have
From Lemma 3.2, we obtain
Let \(z^{k}=x^{k}-\gamma F_{k}(x^{k})\). Since \(\sum_{i=1}^{t}\alpha_{i}=1\), we obtain from (3.5) that
If \(i=h\), then \(\langle z^{k}-P_{C_{i}^{k}}(z^{k}), P_{C_{h}^{k}}(z^{k})-z\rangle\geq0\), since \(z\in C_{i}\subset C_{i}^{k}\) by Lemma 2.1. Otherwise, if \(i\neq h\), we have
It means
By combining (3.12) and (3.13) with (3.11), we obtain
On the other hand, by definition of \(d^{k}\) in (3.7), we have
By the cocoercivity of \(F_{k}\) with modulus \(1/\rho(A^{T}A)\), we arrive at \(\langle x^{k}-y^{k}, F_{k}(x^{k})-F_{k}(y^{k})\rangle\geq (1/\rho(A^{T}A))\Vert F_{k}(x^{k})-F_{k}(y^{k})\Vert ^{2}\) for all k, hence
Furthermore, from the cocoercivity with modulus 1 of \(I-P_{Q_{j}^{k}}\), we have
From (3.14), (3.15) and (3.10), we have
Since \(t_{k}\in(0,2)\) and \(\gamma\in(0,\frac{1}{\rho(A^{T}A)})\) in the algorithm, we conclude that the sequence \(\{\Vert x^{k}-z\Vert \}\) is monotonically nonincreasing, hence convergent, and \(\{x^{k}\}\) is bounded. Therefore there exists the limit
which, combined with (3.9), (3.10) and (3.16), implies
Since the sequence \(\{x^{k}\}\) is bounded, there exist a subsequence \(\{x^{k_{l}}\}\) of \(\{x^{k}\}\) converging to a point \(x^{\ast}\) and a corresponding subsequence \(\{Ax^{k_{l}}\}\) of \(\{Ax^{k}\}\) converging to \(Ax^{\ast}\). Now we show that \(x^{\ast}\in \mathit{SOL}(\mathit{MSSFP})\); namely, we show that \(\lim_{l\rightarrow\infty} c_{i}(x^{k_{l}})\leq0\) and \(\lim_{l\rightarrow\infty} q_{j}(Ax^{k_{l}})\leq0\) for all i and j.
First, since \(P_{Q_{j}^{k_{l}}}(Ax^{k_{l}})\in Q_{j}^{k_{l}}\), we have
$$ q_{j}\bigl(Ax^{k_{l}}\bigr)\leq\bigl\langle\eta_{j}^{k_{l}}, Ax^{k_{l}}-P_{Q_{j}^{k_{l}}}\bigl(Ax^{k_{l}}\bigr)\bigr\rangle\leq\bigl\Vert \eta_{j}^{k_{l}}\bigr\Vert \bigl\Vert Ax^{k_{l}}-P_{Q_{j}^{k_{l}}}\bigl(Ax^{k_{l}}\bigr)\bigr\Vert . $$
We know from Lemma 3.1 that the subgradient sequence \(\{\eta_{j}^{k}\}\) is bounded. By (3.16) we get \(P_{Q_{j}^{k_{l}}}(Ax^{k_{l}})-Ax^{k_{l}}\rightarrow0\). Thus we have \(\lim_{l\rightarrow\infty} q_{j}(Ax^{k_{l}})\leq0\) for all j.
Second, noting that \(P_{C_{i}^{k_{l}}}(x^{k_{l}})\in C_{i}^{k_{l}}\), we have
$$ c_{i}\bigl(x^{k_{l}}\bigr)\leq\bigl\langle\xi_{i}^{k_{l}}, x^{k_{l}}-P_{C_{i}^{k_{l}}}\bigl(x^{k_{l}}\bigr)\bigr\rangle\leq\bigl\Vert \xi_{i}^{k_{l}}\bigr\Vert \bigl\Vert x^{k_{l}}-P_{C_{i}^{k_{l}}}\bigl(x^{k_{l}}\bigr)\bigr\Vert . $$
Since \(\{x^{k}\}\) is bounded, by Lemma 3.1 the sequence \(\{\xi_{i}^{k}\}\) is also bounded. Then all we need is to show that \(P_{C_{i}^{k_{l}}}(x^{k_{l}})-x^{k_{l}}\rightarrow0\). We know from (3.19) and (3.21) that \(F_{k_{l}}(y^{k_{l}})\rightarrow0\) and \(F_{k_{l}}(x^{k_{l}})\rightarrow0\). It follows that \(z^{k_{l}}=x^{k_{l}}-\gamma F_{k_{l}}(x^{k_{l}})\rightarrow x^{\ast}\), and then, by (3.6), \(y^{k_{l}}\rightarrow x^{\ast}\). Combining \(y^{k_{l}}=\sum_{i=1}^{t}\alpha_{i}P_{C_{i}^{k_{l}}}(x^{k_{l}}-\gamma F_{k_{l}}(x^{k_{l}}))\) with \(F_{k_{l}}(x^{k_{l}})\rightarrow0\) and \(\Vert P_{C_{i}^{k_{l}}}(x^{k_{l}})-P_{C_{h}^{k_{l}}}(x^{k_{l}})\Vert \rightarrow0\), \(\forall i\neq h\), by (3.20), we conclude that \(y^{k_{l}}-P_{C_{i}^{k_{l}}}(x^{k_{l}})\rightarrow0\) since \(\sum_{i=1}^{t}\alpha_{i}=1\). This leads to \(P_{C_{i}^{k_{l}}}(x^{k_{l}})-x^{k_{l}}\rightarrow0\), and thereby \(\lim_{l\rightarrow\infty}c_{i}(x^{k_{l}})\leq0\) for \(i=1,2,\ldots,t\).
Replacing z by \(x^{\ast}\) in (3.17), we have
furthermore
on the other hand,
Thus \(\lim_{k\rightarrow\infty}\Vert x^{k}-x^{\ast}\Vert =\lim_{k\rightarrow\infty}\Vert Ax^{k}-Ax^{\ast}\Vert =0\). The proof of Theorem 3.1 is complete. □
Numerical experiments
In the numerical results listed in Tables 1 and 2, ‘Iter.’ and ‘Sec.’ denote the number of iterations and the CPU time in seconds, respectively. We denote \(e_{0}=(0,0,\ldots,0)\in\Re^{N}\) and \(e_{1}=(1,1,\ldots,1)\in\Re^{N}\). In both numerical experiments, we take the weights \(1/(r+t)\) for both Algorithm 3.1 and Censor et al.'s algorithm. The stopping criterion is \(\Vert d^{k}\Vert <\varepsilon=10^{-5}\).
Example 4.1
The MSSFP with
and
Consider the following three cases:

Case 1: \(x^{0}=(1,1,1,1,1)\);

Case 2: \(x^{0}=(1,1,1,1,1)\);

Case 3: \(x^{0}=(5,0,5,0,5)\).
Example 4.2
[19]
In this example, because the step size is related to \(\rho(A^{T}A)\), for easy control of the spectral radius we take a diagonal matrix A with diagonal entries \(a_{ii}\in(0,1)\) generated randomly,
where \(d_{i}\) is the center of the ball \(C_{i}\), \(e_{0}\leq d_{i}\leq10e_{1}\), and \(r_{i}\in(40,50)\) is the radius; \(d_{i}\) and \(r_{i}\) are all generated randomly. \(L_{j}\) and \(U_{j}\) are the bounds of the box \(Q_{j}\) and are also generated randomly, satisfying \(20e_{1}\leq L_{j}\leq30e_{1}\) and \(40e_{1}\leq U_{j}\leq 80e_{1}\). In this test, we take \(e_{0}\) as the initial point.
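A sketch of how such a random instance could be generated (the function name and exact sampling conventions are illustrative, following the bounds described above):

```python
import numpy as np

def make_example_instance(N, t, r, seed=0):
    """Random instance in the spirit of Example 4.2: diagonal A with entries
    in (0, 1), ball sets C_i, box sets Q_j; bounds follow the text above."""
    rng = np.random.default_rng(seed)
    A = np.diag(rng.uniform(0.0, 1.0, size=N))    # spectral radius < 1
    balls = [(rng.uniform(0.0, 10.0, size=N),     # center d_i in [e_0, 10 e_1]
              rng.uniform(40.0, 50.0))            # radius r_i in (40, 50)
             for _ in range(t)]
    boxes = [(rng.uniform(20.0, 30.0, size=N),    # lower bound L_j
              rng.uniform(40.0, 80.0, size=N))    # upper bound U_j
             for _ in range(r)]
    return A, balls, boxes

A, balls, boxes = make_example_instance(N=10, t=3, r=3)
assert np.all(np.diag(A) < 1.0) and len(balls) == 3 and len(boxes) == 3
```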
In Tables 1 and 2, the results show that for most initial points, the number of iteration steps and the CPU time of Algorithm 3.1 are clearly smaller than those of Censor et al.'s algorithm. Moreover, when we take \(N=1{,}000\), Algorithm 3.1 needs only a few hundred iteration steps. The numerical results also show that for large-scale problems Algorithm 3.1 converges faster than Censor's algorithm.
Conclusion
The multiple-set split feasibility problem arises in many practical applications in the real world. This paper constructed a new search direction, which is not the gradient of the corresponding proximity function; this different direction results in a very different convergence analysis. Preliminary numerical results show that our new method converges faster, and the advantage becomes more obvious as the dimension increases. Finally, the theoretical analysis is based on the assumption that the solution set of the MSSFP is nonempty.
References
 1.
Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071-2084 (2005)
 2.
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
 3.
Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)
 4.
Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
 5.
Dang, Y, Gao, Y: The strong convergence of a KM-CQ-like algorithm for split feasibility problem. Inverse Probl. 27, 015007 (2011)
 6.
Dang, Y, Gao, Y: A perturbed projection algorithm with inertial technique for split feasibility problem. J. Appl. Math. (2012). doi:10.1155/2012/207323
 7.
Dang, Y, Gao, Y: An extrapolated iterative algorithm for multiple-set split feasibility problem. Abstr. Appl. Anal. 2012, Article ID 149508 (2012)
 8.
Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20, 1261-1266 (2004)
 9.
Xu, H: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021-2034 (2006)
 10.
Masad, E, Reich, S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367-371 (2007)
 11.
Censor, Y, Segal, A: On string-averaging for sparse problems and on the split common fixed point problem. Contemp. Math. 513, 125-142 (2010)
 12.
Censor, Y, Segal, A: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587-600 (2009)
 13.
Censor, Y, Motova, A, Segal, A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327, 1244-1256 (2007)
 14.
Gao, Y: Piecewise smooth Lyapunov function for a nonlinear dynamical system. J. Convex Anal. 19, 1009-1016 (2012)
 15.
Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367-426 (1996)
 16.
Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
 17.
Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
 18.
Gao, Y: Nonsmooth Optimization. Science Press, Beijing (2008) (in Chinese)
 19.
Rockafellar, RT: Convex Analysis. Princeton University Press, Princeton (1970)
Acknowledgements
This work was supported by Natural Science Foundation of Shanghai (14ZR1429200), National Science Foundation of China (11171221), National Natural Science Foundation of China (61403255), Shanghai Leading Academic Discipline Project under Grant XTKX2012, Innovation Program of Shanghai Municipal Education Commission under Grant 14YZ094, Doctoral Program Foundation of Institutions of Higher Education of China under Grant 20123120110004, Doctoral Starting Project of the University of Shanghai for Science and Technology under Grant ID10303002, and Young Teacher Training Project Program of Shanghai for Science and Technology.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Yazheng Dang and Zhonghui Xue contributed equally to this work.
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
About this article
Cite this article
Dang, Y., Xue, Z. Iterative process for solving a multiple-set split feasibility problem. J Inequal Appl 2015, 47 (2015). https://doi.org/10.1186/s13660-015-0576-9
MSC
 47H05
 47J05
 47J25
Keywords
 multipleset split feasibility problem
 subgradient
 accelerated iterative algorithm
 convergence