A simple algorithm for computing projection onto intersection of finite level sets
Journal of Inequalities and Applications volume 2014, Article number: 307 (2014)
We consider the problem of computing the projection $P_C u$, where $u$ is chosen in a real Hilbert space $H$ arbitrarily and the closed convex subset $C$ of $H$ is the intersection of finitely many level sets of convex functions, $C := \bigcap_{i=1}^{m} \{x \in H : c_i(x) \le 0\}$, where $m$ is a positive integer and $c_i : H \to \mathbb{R}$ is a convex function for $i = 1, \dots, m$. A relaxed Halpern-type algorithm is proposed for computing this projection, defined by $x_{n+1} = \lambda_n u + (1 - \lambda_n) P_{C_m^n} P_{C_{m-1}^n} \cdots P_{C_1^n} x_n$, $n \ge 0$, where the initial guess $x_0 \in H$ is chosen arbitrarily, the sequence $\{\lambda_n\}$ is in $(0,1)$, and $\{C_i^n\}$ is a sequence of half-spaces containing $C_i$ for $i = 1, \dots, m$. Since the projections onto half-spaces $P_{C_i^n}$ ($i = 1, \dots, m$; $n \ge 0$) are easy to calculate in practice, this algorithm is quite implementable. Strong convergence of our algorithm is proved under some ordinary conditions. Some numerical experiments are provided which show advantages of our algorithm.
MSC:58E35, 47H09, 65J15.
Let $H$ be a real Hilbert space with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\| \cdot \|$, and let $C$ be a nonempty closed convex subset of $H$. The projection $P_C$ from $H$ onto $C$ is defined by
$$P_C u := \arg\min_{x \in C} \|u - x\|, \quad u \in H.$$
It is well known that $P_C u$ is characterized by the inequality
$$\langle u - P_C u, x - P_C u \rangle \le 0, \quad x \in C. \qquad (1.1)$$
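Since $C$ is a general closed convex set, (1.1) is the main computational handle on $P_C u$. The following small NumPy sketch (not part of the original paper; the ball data are hypothetical) checks (1.1) numerically for a closed ball, one of the few sets whose projection has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

center, radius = np.zeros(3), 1.0        # hypothetical closed ball C in R^3
u = np.array([2.0, -1.0, 0.5])           # point to be projected (lies outside C)

d = u - center
p = center + radius * d / np.linalg.norm(d)   # explicit formula for P_C u

# Characterization (1.1): <u - P_C u, x - P_C u> <= 0 for every x in C.
for _ in range(1000):
    v = rng.normal(size=3)
    x = center + radius * rng.uniform() * v / np.linalg.norm(v)  # random point of C
    assert np.dot(u - p, x - p) <= 1e-12
```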
The projection operator has a variety of applications in different areas, such as the fixed point problem, the convex optimization problem, the variational inequality problem, the split feasibility problem [2–6], and many other applied fields. However, the projection onto a general closed convex subset has no explicit expression (unless C is a closed ball, a half-space, and so on), so the computation of a projection is generally difficult. The method of alternating projections (MAP), also known as successive orthogonal projections (SOP), has been thoroughly studied. The simplest and earliest convergence results concerning alternating projections between two sets were obtained by von Neumann, Bregman, and Gubin et al. We briefly list some results on MAP. In 1933, von Neumann proved that the sequence generated by the scheme
$$x_0 = u, \qquad x_{n+1} = P_{C_1} P_{C_2} x_n, \quad n \ge 0, \qquad (1.2)$$
converges in norm to $P_{C_1 \cap C_2} u$ when $C_1$ and $C_2$ are two closed subspaces of $H$. In 1965, Bregman showed that the iterates generated by (1.2) converge weakly to a point of $C_1 \cap C_2$ for any pair of closed convex subsets $C_1$ and $C_2$ with nonempty intersection. Gubin et al. proved that the iterates converge linearly to a point of $C_1 \cap C_2$ if $C_1$ and $C_2$ are 'boundedly regular'; in fact, they proved this result for alternating projections between any finite collection of closed convex sets. Strong convergence also holds when the sets are symmetric (see Theorem 2.2 and Corollary 2.6 of the cited works). However, in 2004, Hundal proved, by providing an explicit counterexample, that the sequence of iterates generated by (1.2) does not always converge in norm.
Since the computation of a projection onto a general closed convex subset is difficult, Fukushima suggested a way to overcome this difficulty: calculate the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. This idea was followed by Yang and by López et al., who introduced relaxed CQ algorithms for solving the split feasibility problem in the settings of finite-dimensional and infinite-dimensional Hilbert spaces, respectively. The idea was also used by Gibali et al. and by He et al. for solving variational inequalities in a Hilbert space.
The main purpose of this paper is to consider the problem of computing the projection $P_C u$, where $u$ is chosen in $H$ arbitrarily and the closed convex subset $C$ has the particular structure of an intersection of finitely many level sets of convex functions:
$$C := \bigcap_{i=1}^{m} C_i, \qquad C_i := \{x \in H : c_i(x) \le 0\},$$
where $m$ is a positive integer and $c_i : H \to \mathbb{R}$ is a convex function for $i = 1, \dots, m$.
The relaxed Halpern-type algorithm proposed in this paper for computing the projection is defined by
$$x_{n+1} = \lambda_n u + (1 - \lambda_n) P_{C_m^n} P_{C_{m-1}^n} \cdots P_{C_1^n} x_n, \quad n \ge 0, \qquad (1.3)$$
where the initial guess $x_0 \in H$ is chosen arbitrarily, the sequence $\{\lambda_n\}$ is in $(0,1)$, and $\{C_i^n\}$ is a sequence of half-spaces containing $C_i$ for $i = 1, \dots, m$ (the specific structure of the half-spaces will be described in Section 3). Since the projections onto half-spaces $P_{C_i^n}$ ($i = 1, \dots, m$; $n \ge 0$) are easy to calculate in practice, this algorithm is quite implementable. Moreover, strong convergence of our algorithm can be proved under some ordinary conditions.
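The practicality claim above rests on the closed-form projection onto a half-space: for $C = \{x : \langle a, x \rangle \le b\}$ with $a \ne 0$, one has $P_C x = x - \frac{[\langle a, x \rangle - b]_+}{\|a\|^2} a$. A minimal sketch follows (the function name and arguments are our own, not the paper's notation).

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {y : <a, y> <= b}; a is assumed nonzero."""
    violation = np.dot(a, x) - b
    if violation <= 0.0:
        return np.array(x, dtype=float)      # x already lies in the half-space
    return x - (violation / np.dot(a, a)) * a

# Example: project (2, 2) onto {y : y_1 + y_2 <= 1}.
print(project_halfspace(np.array([2.0, 2.0]), np.array([1.0, 1.0]), 1.0))  # -> [0.5 0.5]
```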
The rest of this paper is organized as follows. Some useful lemmas are given in Section 2. In Section 3, the strong convergence of our algorithm is proved. Some numerical experiments are given in Section 4 which show advantages of our algorithm.
Throughout the rest of this paper, we denote by H a real Hilbert space. We will use the notations:
→ denotes strong convergence.
⇀ denotes weak convergence.
$\omega_w(x_n)$ denotes the weak ω-limit set of $\{x_n\}$.
Recall that a mapping $T : H \to H$ is said to be nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad x, y \in H;$$
$T$ is said to be firmly nonexpansive if, for all $x, y \in H$,
$$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle.$$
Recall some definitions which are also useful for proving the main results. An element $\xi \in H$ is said to be a subgradient of $f : H \to \mathbb{R}$ at $x$ if
$$f(z) \ge f(x) + \langle \xi, z - x \rangle, \quad z \in H.$$
A function $f : H \to \mathbb{R}$ is said to be subdifferentiable at $x$ if it has at least one subgradient at $x$. The set of subgradients of $f$ at the point $x$, denoted by $\partial f(x)$, is called the subdifferential of $f$ at $x$. The relation above is called the subdifferential inequality of $f$ at $x$. A function $f$ is called subdifferentiable if it is subdifferentiable at all $x \in H$.
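As a concrete illustration of the subdifferential inequality (our own example, not taken from the paper): for $f(x) = \|x\|_1$ on $\mathbb{R}^d$ the vector $\operatorname{sign}(x)$ is always a subgradient at $x$, and the inequality can be checked numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)
xi = np.sign(x)                 # an element of the subdifferential of ||.||_1 at x

# Subdifferential inequality: f(z) >= f(x) + <xi, z - x> for all z.
for _ in range(1000):
    z = rng.normal(size=5)
    assert np.abs(z).sum() >= np.abs(x).sum() + np.dot(xi, z - x) - 1e-12
```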
Recall that a function $f : H \to \mathbb{R}$ is said to be weakly lower semi-continuous (w-lsc) at $x$ if $x_n \rightharpoonup x$ implies
$$f(x) \le \liminf_{n \to \infty} f(x_n).$$
Lemma 2.1 For all $u, v \in H$, we have the relation
$$\|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v \rangle.$$
This inequality is trivial but in common use.
The following lemma is the key to the proofs of strong convergence of our algorithms. In fact, it can be used as a new fundamental tool for solving some nonlinear problems, particularly, some problems related to projection operator.
Lemma 2.2
Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n, \quad n \ge 0,$$
$$a_{n+1} \le a_n - \eta_n + \alpha_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers, and $\{\delta_n\}$ and $\{\alpha_n\}$ are two sequences in ℝ such that
(i) $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(ii) $\lim_{n \to \infty} \alpha_n = 0$;
(iii) $\lim_{k \to \infty} \eta_{n_k} = 0$ implies $\limsup_{k \to \infty} \delta_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$.
Then $\lim_{n \to \infty} a_n = 0$.
3 Iterative algorithms
In this section, we precisely introduce algorithm (1.3) and analyze its strong convergence. For the sake of simplicity, it suffices, without loss of generality, to consider the case $m = 2$, that is, $C = C_1 \cap C_2$, where
$$C_1 = \{x \in H : c_1(x) \le 0\}, \qquad C_2 = \{x \in H : c_2(x) \le 0\}, \qquad (3.1)$$
and $c_1, c_2 : H \to \mathbb{R}$ are two convex functions. We always assume that $c_1$ and $c_2$ are subdifferentiable on $H$ and that $\partial c_1$ and $\partial c_2$ are bounded operators (i.e., bounded on bounded sets). It is worth noting that every convex function defined on a finite-dimensional Hilbert space is subdifferentiable and its subdifferential operator is a bounded operator (see ). Suppose the $n$-th iterate $x_n$ has been constructed. Using the subdifferential inequality, we construct the two half-spaces
$$C_1^n = \{x \in H : c_1(x_n) \le \langle \xi_1^n, x_n - x \rangle\}, \qquad C_2^n = \{x \in H : c_2(P_{C_1^n} x_n) \le \langle \xi_2^n, P_{C_1^n} x_n - x \rangle\}, \qquad (3.2)$$
where $\xi_1^n \in \partial c_1(x_n)$ and $\xi_2^n \in \partial c_2(P_{C_1^n} x_n)$. By the subdifferential inequality, it is easy to see that
$$C_1 \subseteq C_1^n, \qquad C_2 \subseteq C_2^n, \quad n \ge 0. \qquad (3.3)$$
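Combining (3.2) with the explicit half-space projection gives a one-line relaxed projection step: if $c(y) \le 0$ the point $y$ already lies in the constructed half-space, and otherwise the projection of $y$ onto it is $y - \frac{c(y)}{\|\xi\|^2}\xi$ with $\xi \in \partial c(y)$. The following sketch is ours (names are illustrative; it assumes the subgradient is nonzero whenever $c(y) > 0$).

```python
import numpy as np

def relaxed_project(c, g, y):
    """Project y onto the half-space {x : c(y) <= <g(y), y - x>} built at y, cf. (3.2)."""
    val = c(y)
    if val <= 0.0:
        return np.array(y, dtype=float)      # y already belongs to the half-space
    xi = g(y)                                # a subgradient of c at y, assumed nonzero here
    return y - (val / np.dot(xi, xi)) * xi
```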
Algorithm 3.1 For a given $u \in H$, take an initial guess $x_0 \in H$ arbitrarily and construct the sequence $\{x_n\}$ via the formula
$$x_{n+1} = \lambda_n u + (1 - \lambda_n) P_{C_2^n} P_{C_1^n} x_n, \quad n \ge 0, \qquad (3.4)$$
where $C_1^n$ and $C_2^n$ are given by (3.2) and the sequence $\{\lambda_n\}$ is in $(0,1)$.
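A minimal NumPy sketch of Algorithm 3.1 (case $m = 2$), reusing `relaxed_project` from the sketch above; the iteration count and the choice $\lambda_n = 1/(n+2)$ are illustrative assumptions on our part, not prescriptions of the paper (they do satisfy the hypotheses of Theorem 3.2 below).

```python
import numpy as np

def algorithm_3_1(u, x0, c1, g1, c2, g2, n_iters=500):
    """g1 and g2 return subgradients of the convex functions c1 and c2."""
    x = np.array(x0, dtype=float)
    for n in range(n_iters):
        lam = 1.0 / (n + 2)                  # lambda_n in (0,1), lambda_n -> 0, sum diverges
        y = relaxed_project(c1, g1, x)       # P_{C_1^n} x_n
        z = relaxed_project(c2, g2, y)       # P_{C_2^n} P_{C_1^n} x_n
        x = lam * u + (1.0 - lam) * z        # Halpern-type step (3.4)
    return x
```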
Theorem 3.2 Assume that $\lambda_n \to 0$ ($n \to \infty$) and $\sum_{n=0}^{\infty} \lambda_n = \infty$. Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to the point $P_C u$.
Proof Firstly, we verify that $\{x_n\}$ is bounded. Setting $z := P_C u$, since $z \in C \subseteq C_1^n \cap C_2^n$ and projections are nonexpansive, we obtain from (3.4) that
$$\|x_{n+1} - z\| \le \lambda_n \|u - z\| + (1 - \lambda_n) \|P_{C_2^n} P_{C_1^n} x_n - z\| \le \lambda_n \|u - z\| + (1 - \lambda_n) \|x_n - z\|;$$
it turns out that
$$\|x_{n+1} - z\| \le \max\{\|u - z\|, \|x_n - z\|\} \le \cdots \le \max\{\|u - z\|, \|x_0 - z\|\},$$
which means that $\{x_n\}$ is bounded.
Secondly, we use Lemma 2.2 to prove the strong convergence of Algorithm 3.1. Since a projection is firmly nonexpansive, we obtain inequality (3.6). Using (3.6) and the first inequality of (3.5), we arrive at the estimate (3.7), where $M$ is some positive constant (its existence is guaranteed by the boundedness of $\{x_n\}$). In the notation of Lemma 2.2, (3.7) is rewritten as (3.8).
From the first inequality of (3.5), we also have the estimate (3.9); in the same notation, (3.9) is rewritten as (3.10).
Observing that the condition $\lambda_n \to 0$ implies $\alpha_n \to 0$ and that the condition $\sum_{n=0}^{\infty} \lambda_n = \infty$ holds, we assert from (3.8) and (3.10) that, in order to complete the proof by using Lemma 2.2, it suffices to verify that
$$\lim_{k \to \infty} \eta_{n_k} = 0 \quad \text{implies} \quad \limsup_{k \to \infty} \delta_{n_k} \le 0$$
for any subsequence $\{n_k\}$ of $\{n\}$. In fact, if $\eta_{n_k} \to 0$ as $k \to \infty$, then $\|P_{C_1^{n_k}} x_{n_k} - x_{n_k}\| \to 0$ and $\|P_{C_2^{n_k}} P_{C_1^{n_k}} x_{n_k} - P_{C_1^{n_k}} x_{n_k}\| \to 0$ hold. Since $\partial c_1$ and $\partial c_2$ are bounded on bounded sets, there are two positive constants $M_1$ and $M_2$ such that $\|\xi_1^{n_k}\| \le M_1$ and $\|\xi_2^{n_k}\| \le M_2$ for all $k$ (noting that $\{P_{C_1^n} x_n\}$ is also bounded due to the fact that $\|P_{C_1^n} x_n - z\| \le \|x_n - z\|$). From the trivial fact that $P_{C_1^{n_k}} x_{n_k} \in C_1^{n_k}$ and $P_{C_2^{n_k}} P_{C_1^{n_k}} x_{n_k} \in C_2^{n_k}$, it follows from (3.2) that
$$c_1(x_{n_k}) \le \langle \xi_1^{n_k}, x_{n_k} - P_{C_1^{n_k}} x_{n_k} \rangle \le M_1 \|x_{n_k} - P_{C_1^{n_k}} x_{n_k}\| \to 0 \qquad (3.11)$$
and
$$c_2(P_{C_1^{n_k}} x_{n_k}) \le \langle \xi_2^{n_k}, P_{C_1^{n_k}} x_{n_k} - P_{C_2^{n_k}} P_{C_1^{n_k}} x_{n_k} \rangle \le M_2 \|P_{C_1^{n_k}} x_{n_k} - P_{C_2^{n_k}} P_{C_1^{n_k}} x_{n_k}\| \to 0. \qquad (3.12)$$
Take $\hat{x} \in \omega_w(x_n)$ arbitrarily and assume that $x_{n_k} \rightharpoonup \hat{x}$ holds without loss of generality; then the w-lsc of $c_1$ and (3.11) imply that
$$c_1(\hat{x}) \le \liminf_{k \to \infty} c_1(x_{n_k}) \le 0,$$
which means that $\hat{x} \in C_1$ holds. Noting that $\|P_{C_1^{n_k}} x_{n_k} - x_{n_k}\| \to 0$ implies $P_{C_1^{n_k}} x_{n_k} \rightharpoonup \hat{x}$, this together with (3.12) and the w-lsc of $c_2$ leads to the fact that
$$c_2(\hat{x}) \le \liminf_{k \to \infty} c_2(P_{C_1^{n_k}} x_{n_k}) \le 0,$$
which implies that $\hat{x} \in C_2$. Consequently, we obtain $\hat{x} \in C_1 \cap C_2 = C$, and hence
$$\omega_w(x_n) \subseteq C. \qquad (3.13)$$
On the other hand, since $\{x_{n_k}\}$ is bounded, we can choose a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ such that $x_{n_{k_j}} \rightharpoonup \tilde{x}$ for some $\tilde{x} \in \omega_w(x_n)$ and
$$\lim_{j \to \infty} \langle u - z, x_{n_{k_j}} - z \rangle = \limsup_{k \to \infty} \langle u - z, x_{n_k} - z \rangle;$$
we can then deduce from (3.13) and (1.1) that
$$\limsup_{k \to \infty} \langle u - z, x_{n_k} - z \rangle = \langle u - z, \tilde{x} - z \rangle \le 0,$$
which implies that
$$\limsup_{k \to \infty} \delta_{n_k} \le 0$$
for any subsequence $\{n_k\}$ satisfying $\lim_{k \to \infty} \eta_{n_k} = 0$. From Lemma 2.2, we get $\|x_n - z\| \to 0$, which means $x_n \to P_C u$. □
Now we turn to a sketch of the general case. Let $m$ be a positive integer and let $C_i = \{x \in H : c_i(x) \le 0\}$ be a level set of a convex function $c_i : H \to \mathbb{R}$ for $i = 1, \dots, m$. We always assume that $c_i$ is subdifferentiable on $H$ and that $\partial c_i$ is a bounded operator for all $i$. Suppose that the $n$-th iterate $x_n$ has been obtained. Similar to (3.2), we construct $m$ half-spaces from the subdifferential inequality, successively: with $y_0^n := x_n$,
$$C_i^n = \{x \in H : c_i(y_{i-1}^n) \le \langle \xi_i^n, y_{i-1}^n - x \rangle\}, \qquad y_i^n := P_{C_i^n} y_{i-1}^n, \quad i = 1, \dots, m, \qquad (3.15)$$
where $\xi_i^n \in \partial c_i(y_{i-1}^n)$. By the subdifferential inequality, it is easy to see that $C_i \subseteq C_i^n$ holds for all $i = 1, \dots, m$ and $n \ge 0$.
Algorithm 3.3 For a given $u \in H$, take an initial guess $x_0 \in H$ arbitrarily and construct the sequence $\{x_n\}$ via the formula
$$x_{n+1} = \lambda_n u + (1 - \lambda_n) P_{C_m^n} P_{C_{m-1}^n} \cdots P_{C_1^n} x_n, \quad n \ge 0, \qquad (3.16)$$
where $C_1^n, \dots, C_m^n$ are given by (3.15) and the sequence $\{\lambda_n\}$ is in $(0,1)$.
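The sketch of Algorithm 3.1 extends to Algorithm 3.3 for general $m$ by applying the relaxed projections successively. Below, `cs` and `gs` are lists of the convex functions $c_i$ and of maps returning subgradients of $c_i$; it reuses `relaxed_project` from Section 3, and the parameter choices remain the illustrative ones assumed earlier.

```python
import numpy as np

def algorithm_3_3(u, x0, cs, gs, n_iters=500):
    x = np.array(x0, dtype=float)
    for n in range(n_iters):
        lam = 1.0 / (n + 2)
        y = x
        for c, g in zip(cs, gs):             # build C_i^n at the current point and project
            y = relaxed_project(c, g, y)
        x = lam * u + (1.0 - lam) * y        # x_{n+1} by the Halpern-type step (3.16)
    return x
```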
By an argument very similar to the proof of Theorem 3.2, the result of Theorem 3.2 extends easily to the general case.
Theorem 3.4 Assume that $\lambda_n \to 0$ ($n \to \infty$) and $\sum_{n=0}^{\infty} \lambda_n = \infty$. Then the sequence $\{x_n\}$ generated by Algorithm 3.3 converges strongly to the point $P_C u$.
Finally, we point out that if the computation of the projection operator $P_{C_i}$ is easy for all $i$ (for example, if every $C_i$ is a closed ball or a half-space), then there is no need to adopt the relaxation technique in the algorithm design; that is, one can use the following algorithm to compute the projection $P_C u$ for a given point $u \in H$. Moreover, the strong convergence of this algorithm can be proved by an argument similar to, and in fact much simpler than, the proof of Theorem 3.4.
Algorithm 3.5 Let $u \in H$ and take an initial guess $x_0 \in H$ arbitrarily. The sequence $\{x_n\}$ is constructed via the formula
$$x_{n+1} = \lambda_n u + (1 - \lambda_n) P_{C_m} P_{C_{m-1}} \cdots P_{C_1} x_n, \quad n \ge 0,$$
where the sequence $\{\lambda_n\}$ is in $(0,1)$.
Theorem 3.6 Assume that $\lambda_n \to 0$ ($n \to \infty$) and $\sum_{n=0}^{\infty} \lambda_n = \infty$. Then the sequence $\{x_n\}$ generated by Algorithm 3.5 converges strongly to the point $P_C u$.
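When every $P_{C_i}$ is explicit, Algorithm 3.5 needs no relaxation. The sketch below is our own illustration with hypothetical data (two overlapping unit balls in $\mathbb{R}^2$ and $\lambda_n = 1/(n+2)$); closed balls are used because their projections have an obvious closed form.

```python
import numpy as np

def project_ball(x, center, radius):
    d = x - center
    dist = np.linalg.norm(d)
    return np.array(x, dtype=float) if dist <= radius else center + radius * d / dist

def algorithm_3_5(u, x0, balls, n_iters=2000):
    x = np.array(x0, dtype=float)
    for n in range(n_iters):
        lam = 1.0 / (n + 2)
        y = x
        for center, radius in balls:         # exact projections P_{C_1}, ..., P_{C_m}
            y = project_ball(y, center, radius)
        x = lam * u + (1.0 - lam) * y        # Halpern-type step of Algorithm 3.5
    return x

# Hypothetical usage: C is the intersection of two overlapping unit balls.
u = np.array([3.0, 0.0])
balls = [(np.array([0.0, 0.0]), 1.0), (np.array([1.0, 0.0]), 1.0)]
print(algorithm_3_5(u, np.zeros(2), balls))  # approximately P_C u = (1, 0)
```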
4 Numerical experiments
In this section, in order to show the advantages of our algorithms, we present some numerical results obtained by implementing Algorithm 3.1 and Algorithm 3.5 for two examples, respectively, in the setting of a finite-dimensional Hilbert space. The codes were written in Matlab 2013a and run on an AMD Llano APU A4-3300M (1.9 GHz) personal computer. In the following two examples, we always use the same point $u$ and the same parameter sequence $\{\lambda_n\}$ satisfying the conditions of Theorems 3.2 and 3.6. The $n$-th iterate is denoted by $x_n$. Since we do not know the exact projection $P_C u$, we use the norm of the difference between successive iterates to measure the error of the $n$-th iteration.
Example 4.1 Take , , and
Use Algorithm 3.1 to calculate the projection $P_C u$.
Example 4.2 Take , , and
Use Algorithm 3.5 to calculate the projection $P_C u$.
Yang Q: On variable-set relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166-179. 10.1016/j.jmaa.2004.07.048
Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8(2-4): 221-239.
Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261-1266. 10.1088/0266-5611/20/4/014
Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
Zhang W, Han D, Li Z: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 2009, 25: Article ID 115001.
Zhao J, Yang Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011, 27: Article ID 035009.
von Neumann J: Functional Operators, Vol. II: The Geometry of Orthogonal Spaces. Annals of Mathematics Studies 22. Princeton University Press, Princeton; 1950.
Bregman LM: The method of successive projections for finding a common point of convex sets. Sov. Math. Dokl. 1965, 6: 688-692.
Gubin LG, Polyak BT, Raik EV: The method of projections for finding the common point of convex sets. USSR Comput. Math. Math. Phys. 1967, 7: 1-24.
Bruck RE, Reich S: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3: 459-470.
Gibali A, Censor Y, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318-335. 10.1007/s10957-010-9757-3
Hundal HS: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35-61. 10.1016/j.na.2003.11.004
Fukushima M: A relaxed projection method for variational inequalities. Math. Program. 1986, 35: 58-70. 10.1007/BF01589441
López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004. 10.1088/0266-5611/28/8/085004
He S, Yang C: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315. 10.1155/2013/942315
Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367-426. 10.1137/S0036144593251710
This work was supported by National Natural Science Foundation of China (Grant No. 11201476) and the Fundamental Research Funds for the Central Universities (3122014K010).
The authors declare that they have no competing interests.
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Cite this article
He, S., Zhao, Z. & Luo, B. A simple algorithm for computing projection onto intersection of finite level sets. J Inequal Appl 2014, 307 (2014). https://doi.org/10.1186/1029-242X-2014-307
Keywords: Hilbert space; strong convergence