 Research
 Open Access
Proximal iteratively reweighted algorithm for low-rank matrix recovery
Chao-Qun Ma^{1} and
 Yi-Shuai Ren^{1}
https://doi.org/10.1186/s13660-017-1602-x
© The Author(s) 2018
 Received: 4 November 2017
 Accepted: 28 December 2017
 Published: 8 January 2018
Abstract
This paper proposes a proximal iteratively reweighted algorithm, based on the weighted fixed-point method, to recover a low-rank matrix. Owing to the special properties of the nonconvex surrogate functions, the weighted singular value thresholding subproblem admits a closed-form solution. Moreover, this study shows that the proximal iteratively reweighted algorithm decreases the objective function value monotonically and that any limit point of the iterates is a stationary point.
Keywords
 compressed sensing
 matrix rank minimization
 reweighted nuclear norm minimization
 Schatten-p quasi-norm minimization
1 Introduction
Different from previous studies, this paper puts forward a proximal iteratively reweighted algorithm, based on the weighted fixed-point method, to recover a low-rank matrix. Owing to the special properties of nonconvex surrogate functions, each iteration reduces to a weighted singular value thresholding problem that admits a closed-form solution. In addition, this study proves that the proximal iteratively reweighted algorithm decreases the objective function value monotonically and that any limit point is a stationary point.
The remainder of this paper is organized as follows. Section 2 introduces some notations and preliminary lemmas, Section 3 presents the main results, and Section 4 concludes the paper.
2 Preliminaries
Lemma 2.1
([16])
3 Main results
In order to introduce the following lemma, we first recall the definition of Lipschitz continuity and of the norm \(\Vert \cdot \Vert_{F}\): a function f is Lipschitz continuous with constant L if, for any x, y, \(\vert f ( x ) - f ( y ) \vert \le L \Vert x - y \Vert\); and the norm \(\Vert \cdot \Vert_{F}\) of a matrix X (the Frobenius norm) is defined as \(\Vert X \Vert _{F} := \sqrt{\sum_{i = 1}^{m} \sum_{j = 1}^{n} x_{ij}^{2}} \).
Lemma 3.1
([17])
Now let \(f ( X ) = \frac{1}{2} \Vert \mathrm{A} ( X ) - b \Vert _{2}^{2}\); thus the Lipschitz constant of the gradient \(\nabla f ( X ) = \mathrm{A}^{\bigstar} ( \mathrm{A} ( X ) - b )\) is \(L ( f ) = \lambda ( \mathrm{A}^{\bigstar} \mathrm{A} )\), where \(\lambda ( \mathrm{A}^{\bigstar} \mathrm{A} )\) is the maximum eigenvalue of \(\mathrm{A}^{\bigstar} \mathrm{A}\).
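As a numerical illustration (the matrix representation of A below is an assumption, not from the paper): if the linear map A acts as \(\mathrm{A}(X) = M\,\mathrm{vec}(X)\) for some matrix M, then the adjoint \(\mathrm{A}^{\bigstar}\) is multiplication by \(M^{T}\) and \(L(f) = \lambda(\mathrm{A}^{\bigstar}\mathrm{A}) = \sigma_{\max}(M)^{2}\). A minimal check of the resulting Lipschitz bound on \(\nabla f\):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 5, 4, 12
M = rng.standard_normal((p, m * n))   # hypothetical matrix representation of A

def A(X):
    # A(X) = M @ vec(X)
    return M @ X.ravel()

def A_star(y):
    # adjoint of A: reshape M^T y back to an m x n matrix
    return (M.T @ y).reshape(m, n)

# Lemma 3.1 in this setting: L(f) = lambda_max(A* A) = sigma_max(M)^2
L = np.linalg.norm(M, 2) ** 2

# check ||grad f(X) - grad f(Y)||_F <= L * ||X - Y||_F
b = rng.standard_normal(p)
X = rng.standard_normal((m, n))
Y = rng.standard_normal((m, n))
gX = A_star(A(X) - b)                 # grad f(X) = A*(A(X) - b)
gY = A_star(A(Y) - b)
lip_ok = np.linalg.norm(gX - gY) <= L * np.linalg.norm(X - Y) + 1e-9
```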
Lemma 3.2
If the function \(g ( X ) = \langle Q,X^{T}X \rangle\) with \(X \in\mathbb{R}^{m \times n}\) and symmetric \(Q \in\mathbb{R}^{n \times n}\), then the gradient of \(g ( X )\) is \(\nabla g ( X ) = 2XQ\).
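A quick finite-difference check of Lemma 3.2 (illustrative only; the symmetric Q mirrors the symmetric weight matrices used later, since for a general Q the gradient would instead be \(X(Q + Q^{T})\)):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 4
X = rng.standard_normal((m, n))
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                     # symmetric, like the weight matrices W_k

def g(X):
    return float(np.sum(Q * (X.T @ X)))   # <Q, X^T X>

grad = 2 * X @ Q                      # gradient claimed by Lemma 3.2

# central finite differences (exact for this quadratic up to roundoff)
h = 1e-6
num = np.zeros_like(X)
for i in range(m):
    for j in range(n):
        E = np.zeros_like(X)
        E[i, j] = h
        num[i, j] = (g(X + E) - g(X - E)) / (2 * h)
```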
Proof
Based on the above analysis, this paper proposes the following algorithm.
Algorithm 1
(Proximal iteratively reweighted algorithm to solve problem (2.1))
1. Input: \(\rho\ge\frac{L ( f )}{2}\), where \(L ( f )\) is the Lipschitz constant of \(\nabla f\).
2. Initialization: \(k = 0\), \(W_{0}\).
3. Update \(X_{k + 1}\) by solving the following problem:$$X_{k + 1} = \arg\min_{X} \tau \bigl\langle X^{T}X,W_{k} \bigr\rangle + \frac{\rho}{2} \biggl\Vert X - \biggl( X_{k} - \frac{1}{\rho} \mathrm {A}^{\bigstar} \bigl( \mathrm{A} ( X_{k} ) - b \bigr) \biggr) \biggr\Vert _{F}^{2}.$$
4. Update the weight \(W_{k + 1}\) by$$W_{k + 1} \in \partial \bigl( - \varphi ( Y_{k + 1} ) \bigr),\quad\text{where } Y_{k + 1} = X_{k + 1}^{T}X_{k + 1}.$$
5. Output: low-rank matrix \(X_{k}\).
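As a concrete (hypothetical) rendering of Algorithm 1: with a symmetric weight \(W_{k}\), the subproblem in step 3 is a convex quadratic whose stationarity condition \(2\tau X W_{k} + \rho ( X - Z_{k} ) = 0\), with \(Z_{k} = X_{k} - \frac{1}{\rho}\mathrm{A}^{\bigstar}(\mathrm{A}(X_{k}) - b)\), yields the closed-form update \(X_{k+1} = \rho Z_{k} ( 2\tau W_{k} + \rho I )^{-1}\). The sketch below instantiates this with the assumed concave surrogate \(\varphi(Y) = \operatorname{tr}((Y + \varepsilon I)^{1/2})\), whose supergradient supplies the weights; the surrogate, the operator representation, and all parameters are illustrative choices, not the paper's:

```python
import numpy as np

def weight(Y, eps):
    # supergradient of the assumed surrogate phi(Y) = tr((Y + eps*I)^{1/2}):
    # W = 0.5 * (Y + eps*I)^{-1/2}, symmetric positive definite
    w, V = np.linalg.eigh(Y + eps * np.eye(Y.shape[0]))
    return V @ np.diag(0.5 / np.sqrt(np.maximum(w, 1e-12))) @ V.T

def phi(Y, eps):
    # phi(Y) = tr((Y + eps*I)^{1/2}) = sum of square roots of eigenvalues
    w = np.linalg.eigvalsh(Y + eps * np.eye(Y.shape[0]))
    return float(np.sum(np.sqrt(np.maximum(w, 0.0))))

def pirw(A, A_star, b, shape, L, tau=0.1, eps=1e-2, iters=100):
    # sketch of Algorithm 1 under the assumptions stated above
    m, n = shape
    rho = L                                   # any rho >= L(f)/2 works
    X = np.zeros((m, n))
    W = weight(X.T @ X, eps)
    hist = []
    for _ in range(iters):
        hist.append(tau * phi(X.T @ X, eps)
                    + 0.5 * float(np.sum((A(X) - b) ** 2)))
        Z = X - A_star(A(X) - b) / rho        # gradient step on f
        # closed form for min tau*<X^T X, W> + rho/2 * ||X - Z||_F^2:
        #   X @ (2*tau*W + rho*I) = rho*Z
        B = 2 * tau * W + rho * np.eye(n)
        X = np.linalg.solve(B, rho * Z.T).T   # B symmetric: X = rho*Z @ inv(B)
        W = weight(X.T @ X, eps)              # step 4: reweight at the new iterate
    return X, hist

# usage on synthetic data: recover a rank-1 matrix from random measurements
rng = np.random.default_rng(0)
m, n, r, p = 8, 6, 1, 40
M = rng.standard_normal((p, m * n))
X0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = lambda X: M @ X.ravel()
A_star = lambda y: (M.T @ y).reshape(m, n)
b = A(X0)
L = np.linalg.norm(M, 2) ** 2                 # L(f) = lambda_max(A* A)
X_hat, hist = pirw(A, A_star, b, (m, n), L, tau=0.1, iters=300)
```

Because the surrogate is concave and the subproblem is solved exactly, the recorded objective values decrease monotonically, in line with Theorem 3.3.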
Theorem 3.3
 (1)$$F ( X_{k} ) - F ( X_{k + 1} ) \ge \biggl( \rho - \frac{L ( f )}{2} \biggr) \Vert X_{k} - X_{k + 1} \Vert _{F}^{2}. $$
 (2)
The sequence \(\{ X_{k} \}\) is bounded.
 (3)
\(\sum_{k = 1}^{\infty} \Vert X_{k} - X_{k + 1} \Vert _{F}^{2} < \frac{2\theta}{2\rho - L ( f )}\). In particular, \(\lim_{k \to\infty} \Vert X_{k} - X_{k + 1} \Vert _{F} = 0\).
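As a sanity check of inequality (1) (illustrative, not the paper's experiment), take the simplest admissible surrogate \(\varphi(Y) = \operatorname{tr}(Y)\), so that \(W_{k} \equiv I\), \(\tau\langle X^{T}X, W_{k}\rangle = \tau\Vert X\Vert_{F}^{2}\), and step 3 has the scalar closed form \(X_{k+1} = \rho Z_{k}/(2\tau + \rho)\); the per-iteration gap \(F(X_{k}) - F(X_{k+1}) - (\rho - \frac{L(f)}{2})\Vert X_{k} - X_{k+1}\Vert_{F}^{2}\) should then be nonnegative:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 6, 5, 40
M = rng.standard_normal((p, m * n))        # assumed matrix form of A
b = rng.standard_normal(p)
tau = 0.5
L = np.linalg.norm(M, 2) ** 2              # L(f) = lambda_max(A* A)
rho = L                                    # satisfies rho >= L(f)/2

def f(X):
    return 0.5 * float(np.sum((M @ X.ravel() - b) ** 2))

def F(X):
    # with phi(Y) = tr(Y): tau*phi(X^T X) = tau * ||X||_F^2
    return tau * float(np.sum(X * X)) + f(X)

X = rng.standard_normal((m, n))
gaps = []
for _ in range(20):
    Z = X - (M.T @ (M @ X.ravel() - b)).reshape(m, n) / rho
    X_new = rho * Z / (2 * tau + rho)      # closed-form step for W = I
    lhs = F(X) - F(X_new)
    rhs = (rho - L / 2) * float(np.sum((X - X_new) ** 2))
    gaps.append(lhs - rhs)                 # Theorem 3.3(1) predicts gaps >= 0
    X = X_new
```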
Proof
Therefore, the proof has been completed. □
Theorem 3.4
Proof
4 Conclusion
This paper has presented a proximal iteratively reweighted algorithm, based on the weighted fixed-point method, for recovering a low-rank matrix. Owing to the special properties of the nonconvex surrogate function, each iteration of the algorithm reduces to a weighted singular value thresholding problem that admits a closed-form solution. Finally, it has been proved that the algorithm decreases the objective function value monotonically and that any limit point is a stationary point.
Declarations
Acknowledgements
We gratefully acknowledge the financial support from the National Natural Science Foundation of China (No. 71431008).
Authors’ contributions
The authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
1. Lu, C, Lin, Z, Yan, S: Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization. IEEE Trans. Image Process. 24, 646-654 (2015)
2. Fornasier, M, Rauhut, H, Ward, R: Low-rank matrix recovery via iteratively reweighted least squares minimization. SIAM J. Optim. 24, 1614-1640 (2011)
3. Akcay, H: Subspace-based spectrum estimation in frequency-domain by regularized nuclear norm minimization. Signal Process. 99, 69-85 (2014)
4. Takahashi, T, Konishi, K, Furukawa, T: Rank minimization approach to image inpainting using null space based alternating optimization. In: Proceedings of IEEE International Conference on Image Processing, pp. 1717-1720 (2012)
5. Liu, G, Lin, Z, Yu, Y: Robust subspace segmentation by low-rank representation. In: Proceedings of the 27th International Conference on Machine Learning, pp. 663-670 (2010)
6. Candes, EJ, Recht, B: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717-772 (2009)
7. Liu, Z, Vandenberghe, L: Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31(3), 1235-1256 (2009)
8. Recht, B, Fazel, M, Parrilo, P: Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Rev. 52, 471-501 (2010)
9. Recht, B, Xu, W, Hassibi, B: Null space conditions and thresholds for rank minimization. Math. Program. 127, 175-202 (2011)
10. Baraniuk, R: Compressive sensing. IEEE Signal Process. Mag. 24(4), 118-124 (2007)
11. Zhang, M, Huang, ZH, Zhang, Y: Restricted p-isometry properties of nonconvex matrix recovery. IEEE Trans. Inf. Theory 59(7), 4316-4323 (2013)
12. Liu, L, Huang, W, Chen, DR: Exact minimum rank approximation via Schatten p-norm minimization. J. Comput. Appl. Math. 267, 218-227 (2014)
13. Lai, M, Xu, Y, Yin, W: Improved iteratively reweighted least squares for unconstrained smoothed \(\ell_{q}\) minimization. SIAM J. Numer. Anal. 51(2), 927-957 (2014)
14. Konishi, K, Uruma, K, Takahashi, T, Furukawa, T: Iterative partial matrix shrinkage algorithm for matrix rank minimization. Signal Process. 100, 124-131 (2014)
15. Mohan, K, Fazel, M: Iterative reweighted algorithms for matrix rank minimization. J. Mach. Learn. Res. 13(1), 3441-3473 (2012)
16. Li, YF, Zhang, YJ, Huang, ZH: A reweighted nuclear norm minimization algorithm for low rank matrix recovery. J. Comput. Appl. Math. 263, 338-350 (2014)
17. Toh, KC, Yun, S: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 6, 615-640 (2010)