Iterative process for solving a multiple-set split feasibility problem
Yazheng Dang and Zhonghui Xue
https://doi.org/10.1186/s13660-015-0576-9
© Dang and Xue; licensee Springer. 2015
Received: 2 November 2014
Accepted: 26 January 2015
Published: 5 February 2015
Abstract
This paper deals with a variant relaxed CQ algorithm that uses a new searching direction, which is not the gradient of the corresponding proximity function; the strategy is intended to accelerate convergence. Convergence is proved under suitable conditions. Numerical results illustrate that the variant relaxed CQ algorithm converges more quickly than existing algorithms.
Keywords: multiple-set split feasibility problem; subgradient; accelerated iterative algorithm; convergence

MSC: 47H05; 47J05; 47J25

1 Introduction
Different from most existing methods, in this paper we construct a new searching direction, which is not the gradient ∇p, and this difference requires a very different style of analysis. Moreover, preliminary numerical experiments show that the new method converges faster than most existing methods.
The paper is organized as follows. Section 2 reviews some preliminaries. Section 3 gives a variant relaxed projection algorithm and shows its convergence. Section 4 gives some numerical experiments. Some conclusions are drawn in Section 5.
2 Preliminaries
Throughout the rest of the paper, I denotes the identity operator, \(\operatorname{Fix}(T)\) denotes the set of the fixed points of an operator T, i.e., \(\operatorname{Fix}(T):=\{x \mid x=T(x)\}\).
It is obvious that cocoercivity with modulus μ implies Lipschitz continuity with constant \(1/\mu\), as well as monotonicity.
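Both implications follow in one line from the Cauchy-Schwarz inequality applied to the cocoercivity inequality:

```latex
\mu\|Tx - Ty\|^{2} \;\le\; \langle Tx - Ty,\, x - y\rangle
\;\le\; \|Tx - Ty\|\,\|x - y\|
\quad\Longrightarrow\quad
\|Tx - Ty\| \;\le\; \frac{1}{\mu}\,\|x - y\|,
```

and monotonicity is immediate since \(\langle Tx - Ty, x - y\rangle \geq \mu\|Tx - Ty\|^{2}\geq 0\).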
Recall the notion of the subdifferential for an appropriate convex function.
Definition 2.1
Lemma 2.1
[16]
An operator T is cocoercive with modulus 1 if and only if the operator \(I-T\) is cocoercive with modulus 1, where I denotes the identity operator.
It is easy to see from the above results that orthogonal projection operators are monotone and cocoercive with modulus 1, and that the operator \(I-P_{Q}\) is also cocoercive with modulus 1.
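These properties can be checked numerically. The sketch below samples random point pairs and verifies that both \(P_{Q}\) and \(I-P_{Q}\) satisfy the modulus-1 cocoercivity inequality for a box set Q; the box and the function names are illustrative assumptions, not objects from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo=-1.0, hi=1.0):
    """Orthogonal projection onto the box Q = [lo, hi]^n (componentwise clip)."""
    return np.clip(x, lo, hi)

def cocoercive_mod1(T, x, y):
    """Check <Tx - Ty, x - y> >= ||Tx - Ty||^2, i.e., cocoercivity with modulus 1."""
    d = T(x) - T(y)
    return d @ (x - y) >= d @ d - 1e-12  # small tolerance for round-off

# Verify the inequality for P_Q and I - P_Q on random pairs.
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert cocoercive_mod1(proj_box, x, y)
    assert cocoercive_mod1(lambda z: z - proj_box(z), x, y)
```

Every sampled pair satisfies both inequalities, consistent with the fact that projections onto closed convex sets are firmly nonexpansive.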
3 Algorithm and its convergence
3.1 The variant relaxed CQ algorithm
 (1)
The solution set of the MSSFP is nonempty.
 (2)
The sets \(C_{i}\), \(i=1,2,\ldots,t\), are given by
$$ C_{i}=\bigl\{ x\in\Re^{N} \mid c_{i}(x)\leq0\bigr\} , $$
(3.1)
where the functions \(c_{i}:\Re^{N}\rightarrow\Re\), \(i=1,2,\ldots,t\), are convex and the sets \(C_{i}\) are nonempty.
 (3)
For any \(x\in\Re^{N} \), at least one subgradient \(\xi_{i}\in\partial c_{i}(x)\) can be calculated.
For any \(y\in \Re^{M}\), at least one subgradient \(\eta_{j}\in\partial q_{j}(y)\) can be computed.
By the definition of the subgradient, it is clear that the half-spaces \(C_{i}^{k}\) and \(Q_{j}^{k}\) contain \(C_{i}\) and \(Q_{j}\), \(i=1,2,\ldots,t\); \(j=1,2,\ldots,r\), respectively. Due to the specific form of \(C_{i}^{k}\) and \(Q_{j}^{k}\), the orthogonal projections onto them can be computed directly; see [15].
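Concretely, the projection onto a half-space of the form \(\{z \mid c_{i}(x^{k})+\langle\xi_{i}, z-x^{k}\rangle\leq0\}\) has a closed form. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def project_halfspace(x, xk, c_val, xi):
    """Project x onto the half-space {z : c(x^k) + <xi, z - x^k> <= 0},
    where c_val = c(x^k) and xi is a subgradient of c at x^k."""
    # Rewrite the constraint as <xi, z> <= b with b = <xi, x^k> - c(x^k).
    b = xi @ xk - c_val
    viol = xi @ x - b
    if viol <= 0:
        return x.copy()  # x already lies in the half-space
    # Standard closed-form projection onto a half-space.
    return x - (viol / (xi @ xi)) * xi
```

If \(x^{k}\) is already feasible and \(\xi_{i}=0\), the violation is nonpositive and the point is returned unchanged, so no division by zero occurs.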
Now, we give the variant relaxed CQ algorithm.
Algorithm 3.1
Given \(\alpha_{i}>0\) and \(\beta_{j}\geq0\) such that \(\sum_{i=1}^{t}\alpha_{i}=1\), \(\sum_{j=1}^{r}\beta_{j}=1\), \(\gamma\in (0,\frac{1}{\rho(A^{T}A)})\), \(t_{k}\in(0,2)\).
In this algorithm, we can take \(\|d^{k}\|<\varepsilon\) for a given precision ε as the stopping criterion, and we apply \(y^{k}\) and \(F_{k}\) to construct the searching direction \(d^{k}\). This choice of a new searching direction leads to a quite different argument in establishing the convergence of Algorithm 3.1.
By Lemma 8.1 in [17], the operator \(A^{T}(I-P_{Q_{j}^{k}})A\) is \(1/\rho(A^{T}A)\)-inverse strongly monotone (\(1/\rho(A^{T}A)\)-ism), i.e., cocoercive with modulus \(1/\rho(A^{T}A)\), and Lipschitz continuous with constant \(\rho(A^{T}A)\).
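For orientation, the classical relaxed CQ iteration of Censor et al. [1], the baseline against which Algorithm 3.1 is compared in Section 4, moves along the gradient of the proximity function. The NumPy sketch below implements that baseline step, not Algorithm 3.1's direction \(d^{k}\); the test sets in the usage note are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def relaxed_cq(A, proj_C, proj_Q, alphas, betas, x0,
               gamma, tol=1e-6, max_iter=10000):
    """Classical relaxed CQ iteration for the MSSFP:
    x^{k+1} = x^k - gamma * ( sum_i alpha_i (x^k - P_{C_i^k} x^k)
                            + sum_j beta_j  A^T (A x^k - P_{Q_j^k} A x^k) ),
    with gamma in (0, 1/rho(A^T A)) as in Algorithm 3.1's parameter range.
    proj_C[i] / proj_Q[j] are the half-space projections built from
    subgradients of c_i / q_j at the current iterate."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Ax = A @ x
        grad = sum(a * (x - pC(x)) for a, pC in zip(alphas, proj_C))
        grad += A.T @ sum(b * (Ax - pQ(Ax)) for b, pQ in zip(betas, proj_Q))
        if np.linalg.norm(grad) < tol:  # proximity-function gradient small
            break
        x = x - gamma * grad
    return x
```

As a toy usage example with \(A=I\), \(C_{1}=\{x \mid x_{1}\leq0\}\) and \(Q_{1}=\{y \mid y_{2}\leq0\}\), the iterates shrink geometrically toward the feasible set.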
3.2 Convergence of the variant relaxed CQ algorithm
In this subsection, we establish the convergence of Algorithm 3.1.
The following results will be needed in the convergence analysis of the proposed algorithm.
Lemma 3.1
Suppose that \(f: \Re^{N}\rightarrow\Re\) is convex. Then its subdifferential is uniformly bounded on any bounded subset of \(\Re^{N}\).
Lemma 3.2
Proof
Now, we state the convergence of Algorithm 3.1.
Theorem 3.1
Assume that the set of solutions of the constrained multiple-set split feasibility problem is nonempty. Then any sequence \(\{x^{k}\}_{k=0}^{\infty}\) generated by Algorithm 3.1 converges to a solution of the multiple-set split feasibility problem.
Proof
Since the sequence \(\{x^{k}\}\) is bounded, there exist a subsequence \(\{x^{k_{l}}\}\) of \(\{x^{k}\}\) converging to a point \(x^{\ast}\) and a corresponding subsequence \(\{Ax^{k_{l}}\}\) of \(\{Ax^{k}\}\) converging to \(Ax^{\ast}\). Now we show that \(x^{\ast}\in \operatorname{SOL}(\mathrm{MSSFP})\); namely, we show that \(\lim_{l\rightarrow\infty} c_{i}(x^{k_{l}})\leq0\) and \(\lim_{l\rightarrow\infty} q_{j}(Ax^{k_{l}})\leq 0\) for all i and j.
4 Numerical experiments
Table 1 The numerical results of Example 4.1 (each cell reports iterations / CPU seconds)

| Case | Censor γ = 1 | Algo. 3.1 γ = 1, \(t_{k}=0.1\) | Censor γ = 0.6 | Algo. 3.1 γ = 0.6, \(t_{k}=0.1\) | Censor γ = 1.8 | Algo. 3.1 γ = 1.8, \(t_{k}=0.1\) |
|---|---|---|---|---|---|---|
| I | 1,051 / 1.043 | 146 / 0.401 | 1,867 / 1.480 | 224 / 0.334 | 832 / 0.700 | 89 / 0.062 |
| II | 197 / 0.320 | 28 / 0.017 | 289 / 0.466 | 62 / 0.0751 | 87 / 0.068 | 9 / 0.010 |
| III | 207 / 0.360 | 62 / 0.049 | 362 / 0.551 | 67 / 0.0728 | 139 / 0.217 | 17 / 0.020 |
Table 2 The numerical results of Example 4.2 (each cell reports iterations / CPU seconds)

| N | t, r | Censor γ = 1 | Algo. 3.1 γ = 1, \(t_{k}=0.01\) | Censor γ = 0.8 | Algo. 3.1 γ = 0.8, \(t_{k}=0.01\) | Censor γ = 1.6 | Algo. 3.1 γ = 1.6, \(t_{k}=0.01\) |
|---|---|---|---|---|---|---|---|
| 20 | t = 5, r = 5 | 181 / 0.268 | 16 / 0.021 | 288 / 0.499 | 20 / 0.022 | 147 / 0.213 | 9 / 0.017 |
| 40 | t = 10, r = 15 | 1,012 / 1.032 | 39 / 0.048 | 2,320 / 2.122 | 57 / 0.059 | 893 / 0.795 | 19 / 0.031 |
Example 4.1

Case 1: \(x^{0}=(1,1,1,1,1)\);

Case 2: \(x^{0}=(1,1,1,1,1)\);

Case 3: \(x^{0}=(5,0,5,0,5)\).
Example 4.2
[19]
The results in Tables 1 and 2 show that, for most initial points, both the number of iteration steps and the CPU time of Algorithm 3.1 are clearly smaller than those of Censor et al.'s algorithm. Moreover, when we take \(N=1{,}000\), the number of iteration steps of Algorithm 3.1 is only a few hundred. The numerical results also show that for large-scale problems Algorithm 3.1 converges faster than Censor's algorithm.
5 Conclusion
The multiple-set split feasibility problem arises in many practical applications. This paper constructed a new searching direction, which is not the gradient of the corresponding proximity function; this different direction leads to a very different style of analysis. Preliminary numerical results show that the new method converges faster, and the advantage becomes more pronounced as the dimension increases. Finally, the theoretical analysis is based on the assumption that the solution set of the MSSFP is nonempty.
Acknowledgements
This work was supported by Natural Science Foundation of Shanghai (14ZR1429200), National Science Foundation of China (11171221), National Natural Science Foundation of China (61403255), Shanghai Leading Academic Discipline Project under Grant XTKX2012, Innovation Program of Shanghai Municipal Education Commission under Grant 14YZ094, Doctoral Program Foundation of Institutions of Higher Education of China under Grant 20123120110004, Doctoral Starting Projection of the University of Shanghai for Science and Technology under Grant ID10303002, and Young Teacher Training Projection Program of Shanghai for Science and Technology.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
References
 Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems 21, 2071-2084 (2005)
 Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
 Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)
 Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
 Dang, Y, Gao, Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Problems 27, 015007 (2011)
 Dang, Y, Gao, Y: A perturbed projection algorithm with inertial technique for split feasibility problem. J. Appl. Math. 2012, Article ID 207323 (2012). doi:10.1155/2012/207323
 Dang, Y, Gao, Y: An extrapolated iterative algorithm for multiple-set split feasibility problem. Abstr. Appl. Anal. 2012, Article ID 149508 (2012)
 Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Problems 20, 1261-1266 (2004)
 Xu, H: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Problems 22, 2021-2034 (2006)
 Masad, E, Reich, S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367-371 (2007)
 Censor, Y, Segal, A: On string-averaging for sparse problems and on the split common fixed point problem. Contemp. Math. 513, 125-142 (2010)
 Censor, Y, Segal, A: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587-600 (2009)
 Censor, Y, Motova, A, Segal, A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327, 1244-1256 (2007)
 Gao, Y: Piecewise smooth Lyapunov function for a nonlinear dynamical system. J. Convex Anal. 19, 1009-1016 (2012)
 Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367-426 (1996)
 Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
 Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
 Gao, Y: Nonsmooth Optimization. Science Press, Beijing (2008) (in Chinese)
 Rockafellar, RT: Convex Analysis. Princeton University Press, Princeton (1970)