A simple algorithm for computing projection onto intersection of finite level sets
Songnian He^{1}, Ziyi Zhao^{1} and Biao Luo^{1}
https://doi.org/10.1186/1029-242X-2014-307
© He et al.; licensee Springer. 2014
Received: 21 August 2013
Accepted: 30 July 2014
Published: 21 August 2014
Abstract
We consider the problem of computing the projection ${P}_{C}u$, where u is chosen in a real Hilbert space H arbitrarily and the closed convex subset C of H is the intersection of finite level sets of convex functions given as follows: $C={\bigcap}_{i=1}^{m}{C}^{i}\triangleq {\bigcap}_{i=1}^{m}\{x\in H:{c}_{i}(x)\le 0\}$, where m is a positive integer and ${c}_{i}:H\to \mathbb{R}$ is a convex function for $i=1,\dots ,m$. A relaxed Halpern-type algorithm is proposed in this paper for computing the projection ${P}_{C}u$, defined by ${x}^{n+1}={\lambda}_{n}u+(1-{\lambda}_{n}){P}_{{C}_{n}^{m}}\cdots {P}_{{C}_{n}^{2}}{P}_{{C}_{n}^{1}}{x}^{n}$, $n\ge 0$, where the initial guess ${x}^{0}\in H$ is chosen arbitrarily, the sequence $({\lambda}_{n})$ is in $(0,1)$ and $({C}_{n}^{i})$ is a sequence of half-spaces containing ${C}^{i}$ for $i=1,\dots ,m$. Since the projections onto the half-spaces ${C}_{n}^{i}$ ($i=1,\dots ,m$; $n=1,2,\dots $) are easy to calculate in practice, this algorithm is quite implementable. Strong convergence of our algorithm is proved under some ordinary conditions. Some numerical experiments are provided which show the advantages of our algorithm.
MSC: 58E35, 47H09, 65J15.
1 Introduction
converges in norm to ${P}_{{C}^{1}\cap {C}^{2}}u$ when ${C}^{1}$ and ${C}^{2}$ are two closed subspaces of H. In 1965, Bregman [8] showed that the iterates generated by (1.2) converge weakly to ${P}_{{C}^{1}\cap {C}^{2}}u$ for any pair of closed convex subsets ${C}^{1}$ and ${C}^{2}$. Gubin et al. [9] proved that the iterates converge linearly to ${P}_{{C}^{1}\cap {C}^{2}}u$ if ${C}^{1}$ and ${C}^{2}$ are ‘boundedly regular’; in fact, they proved this result for alternating projections between any finite collection of closed convex sets. Strong convergence also holds when the sets are symmetric ([10], Theorem 2.2; [11], Corollary 2.6). However, in 2004, Hundal [12] provided an explicit counterexample showing that the sequence of iterates generated by (1.2) does not always converge in norm to ${P}_{{C}^{1}\cap {C}^{2}}u$.
Since the computation of the projection onto a general closed convex subset is difficult, Fukushima [13] suggested calculating the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. This idea was followed by Yang [3] and López et al. [14], who introduced relaxed CQ algorithms for solving the split feasibility problem in the settings of finite-dimensional and infinite-dimensional Hilbert spaces, respectively. The idea was also used by Gibali et al. [11] and He and Yang [15] for solving variational inequalities in a Hilbert space.
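Fukushima's relaxation is practical because projecting onto a half-space has a closed form. As a minimal sketch (in Python with NumPy rather than the paper's Matlab; the function name is ours), the projection of a point x onto $\{y : \langle a,y\rangle \le b\}$ is x itself if the constraint holds, and otherwise x shifted along a onto the bounding hyperplane:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {y : <a, y> <= b}.

    If x already satisfies the constraint it is returned unchanged;
    otherwise x is moved along a onto the bounding hyperplane.
    """
    violation = np.dot(a, x) - b
    if violation <= 0:
        return x
    return x - (violation / np.dot(a, a)) * a
```

For example, projecting $(2,3)$ onto $\{y : y_1 \le 0.5\}$ gives $(0.5, 3)$.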
where m is a positive integer and ${c}_{i}:H\to \mathbb{R}$ is a convex function for $i=1,\dots ,m$.
where the initial guess ${x}^{0}\in H$ is chosen arbitrarily, the sequence $({\lambda}_{n})$ is in $(0,1)$ and $({C}_{n}^{i})$ is a sequence of half-spaces containing ${C}^{i}$ for $i=1,\dots ,m$ (the specific structure of the half-spaces ${C}_{n}^{i}$ will be described in Section 3). Since the projections onto the half-spaces ${C}_{n}^{i}$ ($i=1,\dots ,m$; $n=1,2,\dots $) are easy to calculate in practice, this algorithm is quite implementable. Moreover, strong convergence of our algorithm can be proved under some ordinary conditions.
The rest of this paper is organized as follows. Some useful lemmas are given in Section 2. In Section 3, the strong convergence of our algorithm is proved. Some numerical experiments are given in Section 4 which show advantages of our algorithm.
2 Preliminaries
Throughout the rest of this paper, we denote by H a real Hilbert space. We will use the following notation:

→ denotes strong convergence.

⇀ denotes weak convergence.

${\omega}_{w}({x}^{n})=\{x \mid \mathrm{\exists}\{{x}^{{n}_{k}}\}\subset \{{x}^{n}\} \text{ such that } {x}^{{n}_{k}}\rightharpoonup x\}$ denotes the weak ω-limit set of $\{{x}^{n}\}$.
Recall that $\xi \in H$ is called a subgradient of a function $f:H\to \mathbb{R}$ at x if $f(y)\ge f(x)+\langle \xi ,y-x\rangle$ for all $y\in H$; this relation is called the subdifferential inequality of f at x. A function f is said to be subdifferentiable at x if it has at least one subgradient at x. The set of subgradients of f at the point x, denoted by $\partial f(x)$, is called the subdifferential of f at x. A function f is called subdifferentiable if it is subdifferentiable at all $x\in H$.
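For a differentiable convex function, the gradient is the unique subgradient, and the subdifferential inequality can be checked numerically. A small illustration in Python/NumPy (the quadratic $c(x)=\parallel x\parallel^{2}-1$ is our own choice of test function, not one from the paper):

```python
import numpy as np

def c(x):
    # convex level-set function c(x) = ||x||^2 - 1
    return np.dot(x, x) - 1.0

def subgrad_c(x):
    # gradient of c, which is its unique subgradient: 2x
    return 2.0 * x

# check the subdifferential inequality c(y) >= c(x) + <subgrad, y - x>
# at randomly sampled pairs of points
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert c(y) >= c(x) + np.dot(subgrad_c(x), y - x) - 1e-12
```

For this quadratic the gap $c(y)-c(x)-\langle 2x, y-x\rangle$ equals $\parallel y-x\parallel^{2}$, so the inequality holds exactly.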
This inequality is trivial but in common use.
The following lemma is the key to the proofs of strong convergence of our algorithms. In fact, it can be used as a new fundamental tool for solving some nonlinear problems, particularly some problems related to the projection operator.
Lemma 2.2 [15]
 (i)
${\sum}_{n=0}^{\mathrm{\infty}}{\gamma}_{n}=\mathrm{\infty}$,
 (ii)
$\lim_{n\to \mathrm{\infty}}{\alpha}_{n}=0$,
 (iii)
$\lim_{k\to \mathrm{\infty}}{\eta}_{{n}_{k}}=0$ implies $\limsup_{k\to \mathrm{\infty}}{\delta}_{{n}_{k}}\le 0$ for any subsequence $({n}_{k})\subset (n)$.
Then $\lim_{n\to \mathrm{\infty}}{s}_{n}=0$.
3 Iterative algorithms
where ${C}_{n}^{1}$ and ${C}_{n}^{2}$ are given by (3.2) and the sequence $({\lambda}_{n})$ is in $(0,1)$.
Theorem 3.2 Assume that ${\lambda}_{n}\to 0$ ($n\to \mathrm{\infty}$) and ${\sum}_{n=1}^{+\mathrm{\infty}}{\lambda}_{n}=+\mathrm{\infty}$. Then the sequence $({x}^{n})$ generated by Algorithm 3.1 converges strongly to the point ${P}_{{C}^{1}\cap {C}^{2}}u$.
which means that $({x}^{n})$ is bounded.
where M is some positive constant such that $2\parallel u-{x}^{\ast}\parallel \cdot \parallel {x}^{n+1}-{x}^{\ast}\parallel \le M$ (noting that $({x}^{n})$ is bounded).
for any subsequence $({n}_{k})\subset (n)$. From Lemma 2.2, we get $\lim_{n\to \mathrm{\infty}}{s}_{n}=0$, which means ${x}^{n}\to {x}^{\ast}={P}_{{C}^{1}\cap {C}^{2}}u$. □
where ${\xi}_{n}^{1}\in \partial {c}_{1}({x}^{n}),{\xi}_{n}^{2}\in \partial {c}_{2}({P}_{{C}_{n}^{1}}{x}^{n}),\dots ,{\xi}_{n}^{m}\in \partial {c}_{m}({P}_{{C}_{n}^{m-1}}\cdots {P}_{{C}_{n}^{2}}{P}_{{C}_{n}^{1}}{x}^{n})$. By the subdifferential inequality, it is easy to see that ${C}_{n}^{i}\supset {C}^{i}$ holds for all $n\ge 0$ and $i=1,\dots ,m$.
where ${C}_{n}^{1},{C}_{n}^{2},\dots ,{C}_{n}^{m}$ are given by (3.15) and the sequence $({\lambda}_{n})$ is in $(0,1)$.
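Under the extra assumption that each ${c}_{i}$ is differentiable (so its gradient serves as the subgradient), the relaxed iteration with this half-space construction can be sketched in Python/NumPy as follows; the helper name and the guard against a vanishing subgradient are ours, not the paper's:

```python
import numpy as np

def relaxed_halpern(u, funcs, grads, n_iter=2000):
    """Relaxed Halpern-type iteration
    x^{n+1} = l_n u + (1 - l_n) P_{C_n^m} ... P_{C_n^1} x^n.

    Each C_n^i = {x : c_i(y) + <xi, x - y> <= 0} is a half-space
    containing the level set {x : c_i(x) <= 0}, built at the running
    point y with xi a subgradient of c_i at y.
    """
    x = u.copy()
    for n in range(n_iter):
        y = x
        for c, g in zip(funcs, grads):
            xi = g(y)
            viol = c(y)  # constraint value of C_n^i at y itself
            if viol > 0 and np.dot(xi, xi) > 1e-15:
                # closed-form projection of y onto the half-space C_n^i
                y = y - (viol / np.dot(xi, xi)) * xi
        lam = 1.0 / (n + 2)  # l_n -> 0 and sum of l_n diverges
        x = lam * u + (1.0 - lam) * y
    return x
```

With the illustrative choices ${c}_{1}(x)=\parallel x\parallel^{2}-1$ and ${c}_{2}(x)={x}_{1}-0.5$ and $u=(2,0)$, the iterates approach ${P}_{{C}^{1}\cap {C}^{2}}u=(0.5,0)$.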
By an argument very similar to the proof of Theorem 3.2, the result of Theorem 3.2 extends easily to this general case.
Theorem 3.4 Assume that ${\lambda}_{n}\to 0$ ($n\to \mathrm{\infty}$) and ${\sum}_{n=1}^{+\mathrm{\infty}}{\lambda}_{n}=+\mathrm{\infty}$. Then the sequence $({x}^{n})$ generated by Algorithm 3.3 converges strongly to the point ${P}_{{C}^{1}\cap {C}^{2}\cap \cdots \cap {C}^{m}}u$.
Finally, we point out that if the projection operator ${P}_{{C}^{i}}$ is easy to compute for all $i=1,\dots ,m$ (for example, if each ${C}^{i}$ is a closed ball or a half-space), then there is no need to adopt the relaxation technique in the algorithm design; that is, one can use the following algorithm to compute the projection ${P}_{{C}^{1}\cap {C}^{2}\cap \cdots \cap {C}^{m}}u$ for a given point $u\in H$. Moreover, the strong convergence of this algorithm can be proved by an argument similar to, and in fact much simpler than, the proof of Theorem 3.4.
where the sequence $({\lambda}_{n})$ is in $(0,1)$.
Theorem 3.6 Assume that ${\lambda}_{n}\to 0$ ($n\to \mathrm{\infty}$) and ${\sum}_{n=1}^{+\mathrm{\infty}}{\lambda}_{n}=+\mathrm{\infty}$. Then the sequence $({x}^{n})$ generated by Algorithm 3.5 converges strongly to the point ${P}_{{C}^{1}\cap {C}^{2}\cap \cdots \cap {C}^{m}}u$.
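For instance, when ${C}^{1}$ is a closed ball and ${C}^{2}$ is a half-space, both projections have closed forms and the unrelaxed Halpern-type iteration can be run directly. A Python/NumPy sketch (the function names and the test case are our own illustration):

```python
import numpy as np

def proj_ball(x, center, r):
    # exact projection onto the closed ball B(center, r)
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= r else center + (r / nd) * d

def proj_halfspace(x, a, b):
    # exact projection onto {y : <a, y> <= b}
    v = np.dot(a, x) - b
    return x if v <= 0 else x - (v / np.dot(a, a)) * a

def halpern(u, projections, n_iter=4000):
    """x^{n+1} = l_n u + (1 - l_n) P_{C^m} ... P_{C^1} x^n."""
    x = u.copy()
    for n in range(n_iter):
        y = x
        for P in projections:  # compose the exact projections
            y = P(y)
        lam = 1.0 / (n + 2)
        x = lam * u + (1.0 - lam) * y
    return x
```

For $u=(2,0)$, ${C}^{1}$ the unit ball and ${C}^{2}=\{x:{x}_{1}\le 0.5\}$, the iterates approach ${P}_{{C}^{1}\cap {C}^{2}}u=(0.5,0)$.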
4 Numerical experiments
In this section, in order to show the advantages of our algorithms, we present some numerical results obtained by implementing Algorithm 3.1 and Algorithm 3.5 on two examples, respectively, in the setting of a finite-dimensional Hilbert space. The codes were written in Matlab 2013a and run on an AMD Llano A4-3300M APU (1.9 GHz) personal computer. In the following two examples, we always take $H={\mathbb{R}}^{3}$ and ${\lambda}_{n}=\frac{1}{n}$ for $n\ge 1$. The nth iterate is denoted by ${x}^{n}={({x}_{1}^{n},{x}_{2}^{n},{x}_{3}^{n})}^{\mathrm{\top}}$. Since we do not know the exact projection ${P}_{{C}^{1}\cap {C}^{2}}u$, we use ${E}_{n}\triangleq \frac{\parallel {x}^{n+1}-{x}^{n}\parallel }{\parallel {x}^{n}\parallel }$ to measure the error of the nth iteration.
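The error measure ${E}_{n}$ is simply the relative change between consecutive iterates; as a one-line sketch in Python/NumPy (the function name is ours):

```python
import numpy as np

def relative_error(x_next, x):
    # E_n = ||x^{n+1} - x^n|| / ||x^n||
    return np.linalg.norm(x_next - x) / np.linalg.norm(x)
```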
Use Algorithm 3.1 to calculate the projection ${P}_{{C}^{1}\cap {C}^{2}}u$.
Use Algorithm 3.5 to calculate the projection ${P}_{{C}^{1}\cap {C}^{2}}u$.
Numerical results for Example 4.1
n  ${x}_{1}^{n}$  ${x}_{2}^{n}$  ${x}_{3}^{n}$  ${E}_{n}$

1  4.269110  1.992852  0.999360  1.33E-01
110  3.272412  1.954756  0.995899  4.00E-03
262  3.165479  1.951198  0.995565  1.82E-03
300  3.397080  1.957359  0.996125  1.32E-03
398  3.080705  1.948784  0.995342  1.28E-03
466  3.360174  1.956241  0.996020  8.78E-04
509  3.498434  1.959931  0.996356  7.15E-04
671  3.187363  1.951509  0.995587  7.01E-04
891  2.840956  1.942152  0.994733  6.77E-04
969  3.010468  1.946694  0.995147  5.53E-04
987  3.050742  1.947773  0.995245  5.27E-04
1,000  3.076057  1.948451  0.995307  5.10E-04
Numerical results for Example 4.2
n  ${x}_{1}^{n}$  ${x}_{2}^{n}$  ${x}_{3}^{n}$  ${E}_{n}$

1  2.298142  4.596285  5.745356  1.16E-01
110  0.626949  1.253898  1.567372  9.69E-03
262  0.609227  1.218453  1.523067  3.94E-03
311  0.607194  1.214388  1.517985  3.45E-03
371  0.605435  1.210869  1.513586  2.98E-03
422  0.604331  1.208663  1.510829  2.63E-03
504  0.603025  1.206050  1.507562  2.02E-03
575  0.602194  1.204388  1.505485  1.77E-03
611  0.601846  1.203693  1.504616  1.67E-03
687  0.601232  1.202464  1.503080  1.52E-03
742  0.600980  1.201959  1.502449  1.47E-03
808  0.600492  1.200984  1.501230  1.37E-03
919  0.599984  1.199969  1.499961  1.20E-03
1,000  0.599685  1.199370  1.499213  1.07E-03
Declarations
Acknowledgements
This work was supported by National Natural Science Foundation of China (Grant No. 11201476) and the Fundamental Research Funds for the Central Universities (3122014K010).
References
 Yang Q: On variable-set relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166-179. 10.1016/j.jmaa.2004.07.048
 Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8(2-4): 221-239.
 Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261-1266. 10.1088/0266-5611/20/4/014
 Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
 Zhang W, Han D, Li Z: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 2009, 25: Article ID 115001.
 Zhao J, Yang Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011, 27: Article ID 035009.
 Von Neumann J: Functional Operators, Vol. II: The Geometry of Orthogonal Spaces. Annals of Mathematical Studies 22. Princeton University Press, Princeton; 1950.
 Bregman LM: The method of successive projections for finding a common point of convex sets. Sov. Math. Dokl. 1965, 6: 688-692.
 Gubin LG, Polyak BT, Raik EV: The method of projections for finding the common point of convex sets. USSR Comput. Math. Math. Phys. 1967, 7: 1-24.
 Bruck RE, Reich S: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3: 459-470.
 Gibali A, Censor Y, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318-335. 10.1007/s10957-010-9757-3
 Hundal HS: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35-61. 10.1016/j.na.2003.11.004
 Fukushima M: A relaxed projection method for variational inequalities. Math. Program. 1986, 35: 58-70. 10.1007/BF01589441
 López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004. 10.1088/0266-5611/28/8/085004
 He S, Yang C: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315. 10.1155/2013/942315
 Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367-426. 10.1137/S0036144593251710
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.