A customized proximal point algorithm for stable principal component pursuit with nonnegative constraint
Kaizhan Huai, Mingfang Ni, Feng Ma and Zhanke Yu
https://doi.org/10.1186/s13660-015-0668-6
© Huai et al.; licensee Springer. 2015
Received: 3 December 2014
Accepted: 17 April 2015
Published: 29 April 2015
Abstract
The stable principal component pursuit (SPCP) problem represents a large class of mathematical models appearing in sparse optimization-related applications such as image restoration and web data ranking. In this paper, we focus on designing a new primal-dual algorithm for the SPCP problem with nonnegative constraint. Our method is based on the framework of the proximal point algorithm. By fully exploiting the special structure of the SPCP problem, the method enjoys the advantage of being easily implementable. A global convergence result is established for the proposed method. Preliminary numerical results demonstrate that the method is efficient.
Keywords
proximal point method; customized; stable principal component pursuit; primal-dual algorithm
1 Introduction
Recently, many algorithms using only first-order information for solving the SPCP problem (1.2) have been proposed. Aybat and Iyengar proposed a first-order augmented Lagrangian algorithm (FALC), the first algorithm with a known complexity bound that solves the SPCP problem, in [5]. Tao and Yuan developed the alternating splitting augmented Lagrangian method (ASALM) and its variant (VASALM) for solving (1.2) in [6]. Aybat et al. advanced a new first-order algorithm, NSA, based on partial variable splitting in [7]. Nevertheless, how to solve (1.3) has not received enough attention. We can only find that Ma proposed an alternating proximal gradient method (APGM), based on the framework of the alternating direction method of multipliers, for solving (1.3) in [4]. In this paper, we propose a customized proximal point algorithm with a special proximal regularization parameter to solve the SPCP problem. Note that (1.3) is well-structured in the sense that the separable structure emerges in both the objective function and the constraints. A natural idea is to develop a customized algorithm that takes advantage of the favorable structure of (1.3). This is the main motivation of our paper.
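The separable structure is attractive computationally because, in SPCP-type models, the subproblems typically reduce to two well-known proximal mappings: entrywise soft-thresholding for the \(\ell_{1}\) (sparse) term, and singular value thresholding for the nuclear-norm (low-rank) term. The following is a minimal sketch of these two operators; the function names are illustrative, not from any particular implementation:

```python
import numpy as np

def soft_threshold(X, tau):
    # Proximal map of tau*||X||_1: entrywise shrinkage toward zero.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Proximal map of tau*||X||_* (singular value thresholding):
    # shrink the singular values, then rebuild the matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Both maps are separable across entries (respectively, singular values), which is what makes each subproblem cheap relative to a full matrix factorization of the original model.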
The rest of this paper is organized as follows. In Section 2, we give some useful preliminaries. In Section 3, we present the customized PPA for solving (1.3) and the convergence analysis is shown in Section 4. In Section 5, we compare our algorithm with APGM to illustrate the efficiency by performing numerical experiments. Finally, some conclusions are drawn in Section 6.
2 Preliminaries
3 The new algorithm
In this section, we present our new algorithm for solving VI (2.9). Before doing so, we first review the classical PPA.
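The classical PPA iteration for minimizing a convex function \(f\) is \(x^{k+1}=\operatorname{argmin}_{x}\{f(x)+\frac{r}{2}\|x-x^{k}\|^{2}\}\), i.e., one proximal step per iteration. A minimal one-dimensional sketch, assuming the toy choice \(f(x)=|x|\) (whose proximal map is soft-thresholding); all names are ours, for illustration only:

```python
def classical_ppa(prox, x0, r, iters):
    # Classical PPA: x^{k+1} = argmin_x { f(x) + (r/2)*(x - x^k)^2 },
    # i.e., x^{k+1} = prox_{f/r}(x^k).
    x = x0
    for _ in range(iters):
        x = prox(x, 1.0 / r)
    return x

def prox_abs(v, tau):
    # Proximal map of tau*|.| in one dimension (soft-thresholding).
    return max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0)

x_star = classical_ppa(prox_abs, x0=5.0, r=1.0, iters=10)  # -> 0.0
```

Starting from \(x^{0}=5\) with \(r=1\), each step shrinks the iterate by 1, so the minimizer \(x^{*}=0\) is reached after five steps.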
Algorithm 1
(The main algorithm for (2.1))

Let \(r>0\) and \(s>\frac{1}{r}\|B^{T}B\|\), and take \((x^{0},y^{0};\lambda^{0})\in\mathcal{X}\times\mathcal{Y}\times\mathbb{R}^{m}\) as the initial point.
Step 1. Update y, x and λ:
$$\begin{aligned}& y^{k+1}=\operatorname{argmin} \biggl\{ g(y)+\frac{r}{2} \biggl\| y-y^{k}-\frac{1}{r}B^{T}\lambda^{k} \biggr\|^{2} \Bigm| y\in\mathcal{Y} \biggr\} , \\& x^{k+1}=\operatorname{argmin} \biggl\{ f(x)-\biggl(\lambda^{k}-\frac{1}{s}\bigl(Ax^{k}+B\bigl(2y^{k+1}-y^{k}\bigr)-b\bigr)\biggr)^{T}Ax \\& \hphantom{x^{k+1}=\operatorname{argmin} \biggl\{}{}+\frac{1}{2s}\bigl\| A\bigl(x-x^{k}\bigr)\bigr\|^{2}+\frac{1}{2}\bigl\| x-x^{k}\bigr\|^{2} \Bigm| x\in\mathcal{X} \biggr\} , \\& \lambda^{k+1}=\lambda^{k}-\frac{1}{s}\bigl(Ax^{k+1}+B\bigl(2y^{k+1}-y^{k}\bigr)-b\bigr). \end{aligned}$$
Step 2. If the termination criterion is met, stop the algorithm; otherwise, go to Step 1.
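To illustrate the shape of one sweep of these updates, here is a toy scalar instance, not the SPCP model itself: \(\min |x|+\frac{1}{2}y^{2}\) s.t. \(x+y=b\), with \(A=B=I\), \(\mathcal{X}=\mathcal{Y}=\mathbb{R}\) and \(b=3\) (analytic solution \(x^{*}=2\), \(y^{*}=1\)). In this setting both subproblems have closed forms obtained by completing the square; all code names are our own:

```python
def soft(v, tau):
    # Proximal map of tau*|.| (soft-thresholding).
    return max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0)

b, r, s = 3.0, 1.0, 2.0          # s > ||B^T B|| / r = 1/r holds
x = y = lam = 0.0                # initial point
for _ in range(2000):
    # y-update: argmin { y^2/2 + (r/2)*(y - y^k - lam^k/r)^2 }
    y_new = (r * y + lam) / (1.0 + r)
    # x-update: the linear term plus the two quadratic terms collapse
    # into a single soft-thresholding step with weight t = 1 + 1/s
    mu = lam - (x + 2.0 * y_new - y - b) / s
    t = 1.0 + 1.0 / s
    x_new = soft(x + mu / t, 1.0 / t)
    # multiplier update
    lam = lam - (x_new + 2.0 * y_new - y - b) / s
    x, y = x_new, y_new
# x -> 2, y -> 1
```

The iterates oscillate toward \((x^{*},y^{*})\) with damped error, consistent with the contractive property established in Section 4.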
4 Convergence analysis
In this section, we show the global convergence result of the algorithm proposed for solving (2.1) or (3.2). First, we prove the following lemma.
Lemma 1
Proof
Next, we establish the contractive property of the proposed algorithm.
Lemma 2
Proof
5 Numerical results
Comparison of the CPU times between APGM and Algorithm 1
n    Algorithm     \(R_r=0.01, C_r=0.01\)   \(R_r=0.02, C_r=0.02\)   \(R_r=0.03, C_r=0.03\)
                   min/avg/max              min/avg/max              min/avg/max
100  APGM          0.4/0.5/0.9              0.9/1.1/1.7              0.9/1.1/1.4
100  Algorithm 1   0.7/0.9/1.6              1.0/1.4/2.3              1.1/1.1/1.2
150  APGM          1.5/1.8/2.0              2.2/2.4/2.6              2.1/2.2/2.4
150  Algorithm 1   1.8/2.4/2.9              2.0/2.2/2.4              2.2/2.3/2.5
200  APGM          3.3/4.0/6.6              4.1/4.7/6.5              3.4/4.0/5.0
200  Algorithm 1   3.6/4.2/5.4              3.8/4.0/4.9              4.0/4.5/5.3
250  APGM          6.3/8.9/18.5             6.2/6.9/8.5              5.4/6.9/9.7
250  Algorithm 1   6.3/8.0/12.5             6.5/6.7/7.3              7.3/9.6/16.4
300  APGM          10.8/11.7/12.5           9.1/10.5/15.4            8.0/9.8/12.6
300  Algorithm 1   10.5/11.7/14.0           10.5/11.8/15.6           11.5/14.1/16.1
400  APGM          33.6/36.1/38.6           28.9/30.1/31.2           29.7/30.2/33.5
400  Algorithm 1   33.6/36.0/38.0           37.8/39.7/41.4           30.9/45.1/47.2
500  APGM          69.6/74.9/83.0           63.1/66.0/67.3           62.1/64.7/66.5
500  Algorithm 1   79.8/83.7/93.9           86.8/89.5/91.6           90.7/94.5/97.6
Comparison of the iteration numbers between APGM and Algorithm 1
n    Algorithm     \(R_r=0.01, C_r=0.01\)   \(R_r=0.02, C_r=0.02\)   \(R_r=0.03, C_r=0.03\)
                   min/avg/max              min/avg/max              min/avg/max
100  APGM          30/43/70                 62/68/74                 67/72/77
100  Algorithm 1   49/60/72                 56/62/73                 60/63/66
150  APGM          43/53/59                 60/63/64                 53/55/57
150  Algorithm 1   47/57/71                 44/48/51                 46/49/51
200  APGM          45/50/54                 51/53/57                 43/45/47
200  Algorithm 1   42/47/59                 42/43/44                 44/44/45
250  APGM          47/51/53                 43/45/46                 36/37/38
250  Algorithm 1   41/44/50                 41/41/42                 43/44/44
300  APGM          46/47/48                 39/40/41                 34/34/34
300  Algorithm 1   39/40/40                 41/41/42                 44/44/44
400  APGM          41/43/43                 33/33/33                 32/33/35
400  Algorithm 1   39/40/40                 42/42/42                 45/45/46
500  APGM          36/37/38                 33/33/33                 33/33/33
500  Algorithm 1   40/40/40                 43/43/43                 45/45/46
6 Conclusions
For solving the SPCP problem (1.3), we proposed a new algorithm based on the PPA. The global convergence of the algorithm is established. The computational results indicate that our algorithm achieves performance comparable to APGM, and in certain settings it obtains better results.
Declarations
Acknowledgements
The work is supported in part by the Natural Science Foundation of China Grant 71401176 and the Natural Science Foundation of Jiangsu Province Grant BK20141071.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. Aybat, NS, Goldfarb, D, Ma, S: Efficient algorithms for robust and stable principal component pursuit problems. Comput. Optim. Appl. 58(1), 1-29 (2014)
2. Candès, EJ, Li, X, Ma, Y, Wright, J: Robust principal component analysis? J. ACM 58(3), 11 (2011)
3. Zhou, Z, Li, X, Wright, J, Candès, E, Ma, Y: Stable principal component pursuit. In: 2010 IEEE International Symposium on Information Theory Proceedings (ISIT), pp. 1518-1522. IEEE Press, New York (2010)
4. Ma, S: Alternating proximal gradient method for convex minimization. Preprint (2012)
5. Aybat, NS, Iyengar, G: A unified approach for minimizing composite norms. Math. Program. 144(1-2), 181-226 (2014)
6. Tao, M, Yuan, X: Recovering low-rank and sparse components of matrices from incomplete and noisy observations. SIAM J. Optim. 21(1), 57-81 (2011)
7. Aybat, NS, Goldfarb, D, Iyengar, G: Fast first-order methods for stable principal component pursuit (2011). arXiv:1105.2126
8. He, B, Yuan, X: On the direct extension of ADMM for multi-block separable convex programming and beyond: from variational inequality perspective
9. He, B, Yuan, X, Zhang, W: A customized proximal point algorithm for convex minimization with linear constraints. Comput. Optim. Appl. 56(3), 559-572 (2013)
10. Gu, G, He, B, Yuan, X: Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: a unified approach. Comput. Optim. Appl. 59(1-2), 135-161 (2014)
11. Boyd, S: EE364b Course Notes: Subgradient Methods. Stanford University, Stanford, CA (2010)
12. Martinet, B: Brève communication. Régularisation d'inéquations variationnelles par approximations successives. ESAIM: Math. Model. Numer. Anal. 4(R3), 154-158 (1970)
13. Rockafellar, RT: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97-116 (1976)
14. Ma, F, Ni, M, Zhu, L, Yu, Z: An implementable first-order primal-dual algorithm for structured convex optimization. Abstr. Appl. Anal. 2014, Article ID 396753 (2014)
15. Boyd, S, Parikh, N, Chu, E, Peleato, B, Eckstein, J: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1-122 (2011)
16. Chen, C, He, B, Ye, Y, Yuan, X: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Math. Program., 1-23 (2014)
17. Ma, S, Goldfarb, D, Chen, L: Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 128(1-2), 321-353 (2011)
18. Cai, JF, Candès, EJ, Shen, Z: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956-1982 (2010)
19. Parikh, N, Boyd, S: Proximal algorithms. Found. Trends Optim. 1(3), 123-231 (2013)
20. Ma, S, Xue, L, Zou, H: Alternating direction methods for latent variable Gaussian graphical model selection. Neural Comput. 25(8), 2172-2198 (2013)