Proximal iteratively reweighted algorithm for low-rank matrix recovery
Journal of Inequalities and Applications, volume 2018, Article number: 12 (2018)
Abstract
This paper proposes a proximal iteratively reweighted algorithm, based on the weighted fixed point method, to recover a low-rank matrix. The weighted singular value thresholding subproblem admits a closed-form solution owing to the special properties of the nonconvex surrogate functions. Moreover, we show that the proximal iteratively reweighted algorithm decreases the objective function value monotonically and that any limit point is a stationary point.
1 Introduction
The low-rank matrix recovery problem has become a research hot topic recently [1, 2], and it has a range of applications in many fields such as signal or image processing [3, 4], subspace segmentation [5], collaborative filtering [6], and system identification [7]. Matrix rank minimization under affine equality constraints is generally formulated as follows:
where the linear map \(\mathrm{A}:R^{m \times n} \to R^{P}\) and the vector b are known.
Unfortunately, solving the rank minimization problem (1.1) directly is NP-hard [8] and hence computationally infeasible. Therefore, convex relaxations of this problem have been proposed and studied in the literature. For example, Recht et al. [8] proposed a nuclear norm minimization method for matrix reconstruction. The tightest convex relaxation of problem (1.1) is the following nuclear norm minimization problem:
where \(\Vert X \Vert _{\bigstar} = \sum_{i = 1}^{r} \sigma_{i} ( X )\) is the sum of all the singular values of \(X \in R^{m \times n}\) with \(\operatorname{rank} ( X ) = r\) (without loss of generality, \(n \le m\)). It has been shown that problem (1.2) shares common solutions with problem (1.1) under some sufficient conditions (see, e.g., [8, 9]).
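To make the definition concrete, here is a short NumPy check (an illustrative sketch, not part of the paper) that the nuclear norm is the sum of the singular values:

```python
import numpy as np

def nuclear_norm(X):
    # Sum of all singular values of X -- the tightest convex surrogate of rank.
    return np.linalg.svd(X, compute_uv=False).sum()

# For a diagonal matrix the singular values are the absolute diagonal entries,
# so the nuclear norm of diag(3, 2) is 3 + 2 = 5.
X = np.diag([3.0, 2.0])
print(nuclear_norm(X))  # 5.0
```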
However, exact recovery of the low-rank matrix via nuclear norm minimization requires a relatively large number of measurements. Recently, experimental observations and theoretical guarantees have shown the superiority of \(\ell_{p}\) quasi-norm minimization over \(\ell_{1}\) minimization in compressive sampling [10]. Hence, \(\ell_{p}\) quasi-norm minimization [11–13] was introduced in place of nuclear norm minimization in order to better approximate the original problem (1.1). The \(\ell_{p}\) quasi-norm minimization problem can be formulated as
where \(\Vert X \Vert _{p} = ( \sum_{i = 1}^{r} \sigma_{i}^{p} ( X ) )^{1/p}\) for some \(p \in ( 0,1 )\).
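The Schatten-\(p\) quasi-norm can be computed directly from the singular values; the sketch below (illustrative, using hypothetical values) also shows that it reduces to the nuclear norm at \(p = 1\):

```python
import numpy as np

def schatten_p(X, p):
    # ||X||_p = ( sum_i sigma_i(X)^p )^(1/p) for 0 < p <= 1
    s = np.linalg.svd(X, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

# diag(9, 4) has singular values 9 and 4:
# p = 0.5  ->  (sqrt(9) + sqrt(4))^2 = 5^2 = 25
X = np.diag([9.0, 4.0])
print(schatten_p(X, 0.5))  # 25.0
print(schatten_p(X, 1.0))  # 13.0, the nuclear norm
```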
However, in practice, the observed data in the low-rank matrix recovery problem may be contaminated with noise, namely \(b = \mathrm{A}X + e\), where e is a vector of measurement errors following a certain normal distribution. In order to recover the low-rank matrix robustly, problem (1.3) can be modified to
where \(\Vert \cdot \Vert _{2}\) is the \(\ell_{2}\) norm of vector and \(\varepsilon\ge \Vert e \Vert _{2}\) is some constant.
Under some conditions, problems (1.3) and (1.4) can be rewritten as the following unconstrained model:
where \(\tau> 0\) is a given parameter. Since problem (1.5) is nonconvex and NP-hard, several iteratively reweighted algorithms have been proposed and analyzed [13–15]. The key idea of the iteratively reweighted technique is to solve a convex problem with a given weight at each iteration and then update the weight.
Different from previous studies, this paper puts forward a proximal iteratively reweighted algorithm, based on the weighted fixed point method, to recover a low-rank matrix. Owing to the special properties of the nonconvex surrogate functions, each iteration of the algorithm solves a weighted singular value thresholding problem in closed form. In addition, this study proves that the proximal iteratively reweighted algorithm decreases the objective function value monotonically and that any limit point is a stationary point.
The remainder of this paper is organized as follows. Section 2 introduces some notation and preliminary lemmas, and Section 3 describes the main results. Section 4 concludes the paper.
2 Preliminaries
Recently, Lai et al. [13] considered the following unconstrained problem:
where I is the \(n \times n\) identity matrix and \(\varepsilon> 0\) is a smoothing parameter. By the definition in [13], we have
Lemma 2.1
([16])
Let \(\varphi ( X ) = \psi \circ\sigma ( X ) = \sum_{i = 1}^{n} ( \vert \sigma_{i} ( X ) \vert + \varepsilon )^{p}\), where the function \(\varphi: \mathbb{R}^{m \times n} \to [ -\infty, +\infty ]\) with \(n \le m\) is orthogonally invariant, \(\psi: \mathbb{R}^{n} \to [ -\infty, +\infty ]\) is an absolutely symmetric function, and \(p \in ( 0,1 )\). Then \(\varphi= \psi\circ\sigma\) is subdifferentiable at any matrix \(X \in\mathbb{R}^{m \times n}\) and
with \(X = U\Sigma V^{T}\) being the SVD of X, and
is a constant depending only on the value of \(\sigma_{i} ( X )\) for each \(i \in\Omega\).
In Lemma 2.1, let \(m = n\) and let Y be a positive semidefinite matrix; then \(Y = Y^{T}\), and the subdifferential of the function
is
with \(Y = U_{1}\Sigma_{1}U_{1}^{T}\) being the SVD of Y, and \(\Omega= \{ 1,2, \ldots,n \}\).
From (2.3), it is easy to see that \(\varphi ( Y )\) is concave, and thus \(-\varphi ( Y )\) is convex. Recall that \(Y^{*}\) is a subgradient of a convex function f at a point Y if \(f(Z) \ge f(Y) + \langle Y^{*},Z - Y \rangle\) for any Z. Therefore, based on the definition of the subgradient of a convex function, we have
where \(-G_{k}\) is a subgradient of \(-\varphi ( Y )\) at \(Y_{k}\), i.e., \(-G_{k} \in\partial ( -\varphi ( Y_{k} ) )\). Inequality (2.4) is equivalent to
Then \(\varphi ( Y_{k} ) + \langle G_{k},Y  Y_{k} \rangle\) is used as a surrogate function of \(\varphi ( Y )\).
3 Main results
Let \(Y = X^{T}X\), then \(Y = V\Sigma^{2}V^{T}\) can be obtained, where \(X = U\Sigma V^{T}\) with \(U \in\mathbb{R}^{m \times n}\), \(V \in\mathbb{R}^{n \times n}\), and \(\Sigma= \operatorname{Diag} \{ \sigma_{i} ( X ) \} \in\mathbb{R}^{n \times n}\), then \(\sigma_{i} ( Y ) = ( \sigma_{i} ( X ) )^{2}\). From (2.2), (2.3), and (2.5),
can be obtained, where \(W_{k} \in\frac{p}{2}V\operatorname{Diag} \{ \frac{c_{i}}{ ( ( \sigma_{i} ( X_{k} ) )^{2} + \varepsilon )^{1 - \frac{p}{2}}}:i \in\Omega \}V^{T}\).
In order to introduce the following lemma, we recall the definitions of a Lipschitz continuous function and of the norm \(\Vert \cdot \Vert _{F}\): a function f is Lipschitz continuous with constant L if, for any x, y, \(\vert f(x) - f(y) \vert \le L \Vert x - y \Vert\); and the Frobenius norm \(\Vert \cdot \Vert _{F}\) of a matrix X is defined as \(\Vert X \Vert _{F}: = \sqrt{\sum_{i = 1}^{m} \sum_{j = 1}^{n} x_{ij}^{2}} \).
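As a quick numeric sanity check (a hypothetical example, not from the paper), the Frobenius norm defined entrywise above also equals the \(\ell_{2}\) norm of the singular values:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Entrywise definition: sqrt(sum of squared entries) = sqrt(1 + 4 + 9 + 16)
fro = np.sqrt((X ** 2).sum())

# Equivalent spectral form: sqrt(sum of squared singular values)
s = np.linalg.svd(X, compute_uv=False)
print(np.isclose(fro, np.sqrt((s ** 2).sum())))  # True
```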
Lemma 3.1
([17])
Let \(f:\mathbb{R}^{m \times n} \to \mathbb{R}\) be a continuously differentiable function with Lipschitz continuous gradient and the Lipschitz constant \(L ( f )\). Then, for any \(L \ge L ( f )\),
Now let \(f ( X ) = \frac{1}{2} \Vert \mathrm{A} ( X )  b \Vert _{2}^{2}\), thus the Lipschitz constant of the gradient \(\nabla f ( X ) = \mathrm{A}^{\bigstar} ( \mathrm{A} ( X )  b )\) is \(L ( f ) = \lambda ( \mathrm{A}^{\bigstar} \mathrm{A} )\), where \(\lambda ( \mathrm{A}^{\bigstar} \mathrm{A} )\) is the maximum eigenvalue of \(\mathrm{A}^{\bigstar} \mathrm{A}\).
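As an illustration (an assumption-laden sketch with hypothetical sizes, not the paper's code), one can represent the linear map A by a matrix M acting on the vectorization of X; then the gradient and the Lipschitz constant above take the following form:

```python
import numpy as np

# A(X) = M @ vec(X): a random map with P measurements (illustrative only)
rng = np.random.default_rng(0)
m, n, P = 4, 3, 6
M = rng.standard_normal((P, m * n))
b = rng.standard_normal(P)

def f(X):
    # f(X) = (1/2) || A(X) - b ||_2^2
    return 0.5 * np.sum((M @ X.ravel() - b) ** 2)

def grad_f(X):
    # grad f(X) = A*( A(X) - b ), pulled back to matrix shape
    return (M.T @ (M @ X.ravel() - b)).reshape(m, n)

# L(f) is the largest eigenvalue of A*A
L = np.linalg.eigvalsh(M.T @ M).max()
```

A finite-difference check confirms that `grad_f` matches the directional derivatives of `f`.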
By using (2.1), (2.3), (3.1), and (3.2), we update \(X_{k + 1}\) by minimizing the sum of these two surrogate functions
where \(\rho\ge\frac{L ( f )}{2}\).
Lemma 3.2
If \(g ( X ) = \langle Q,X^{T}X \rangle\) with \(X \in\mathbb{R}^{m \times n}\) and a symmetric matrix \(Q \in\mathbb{R}^{n \times n}\), then the gradient of \(g ( X )\) is \(\nabla g ( X ) = 2XQ\).
Proof
Consider the auxiliary function \(\theta:\mathbb{R} \to \mathbb{R}\) given by \(\theta ( t ) = g ( X + tY )\) for an arbitrary matrix \(Y \in\mathbb{R}^{m \times n}\). From basic calculus, \(\theta' ( 0 ) = \langle \nabla g ( X ),Y \rangle\). By the definition of the derivative, it follows that
thus the gradient of \(g ( X )\) is \(\nabla g ( X ) = 2XQ\). □
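Lemma 3.2 can be verified numerically; the check below (a hypothetical example) uses a symmetric Q, as the weight matrices \(W_{k}\) are, since for a general Q the gradient would be \(X ( Q + Q^{T} )\):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3
X = rng.standard_normal((m, n))
Q0 = rng.standard_normal((n, n))
Q = (Q0 + Q0.T) / 2            # symmetric, as the weight matrices here are

def g(X):
    # g(X) = <Q, X^T X> = tr(Q X^T X) for symmetric Q
    return float(np.trace(Q @ X.T @ X))

G = 2 * X @ Q                  # gradient claimed by Lemma 3.2

# central finite-difference approximation of every gradient entry
h, num = 1e-6, np.zeros_like(X)
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n)); E[i, j] = h
        num[i, j] = (g(X + E) - g(X - E)) / (2 * h)
print(np.allclose(G, num, atol=1e-4))  # True
```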
Based on the above analysis, this paper proposes the following algorithm.
Algorithm 1
(Proximal iteratively reweighted algorithm to solve problem (2.1))
1. Input: \(\rho\ge\frac{L ( f )}{2}\), where \(L ( f )\) is the Lipschitz constant of \(\nabla f\).
2. Initialization: \(k = 0\), \(W_{k}\).
3. Update \(X_{k + 1}\) by solving the following problem:
$$X_{k + 1} = \arg\min_{X}\tau \bigl\langle X^{T}X,W_{k} \bigr\rangle + \frac{\rho}{2} \bigg\Vert X - \biggl( X_{k} - \frac{1}{\rho} \mathrm {A}^{\bigstar} \bigl( \mathrm{A} ( X_{k} ) - b \bigr) \biggr) \bigg\Vert _{F}^{2}.$$
4. Update the weight \(W_{k + 1}\) by
$$W_{k + 1} \in \partial \bigl( -\varphi ( Y_{k + 1} ) \bigr),\quad\text{where } Y_{k + 1} = X_{k + 1}^{T}X_{k + 1}.$$
5. Output: the low-rank matrix \(X_{k}\).
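The following is a minimal NumPy sketch of this iteration, not the authors' implementation, under two simplifying assumptions: the map A is the identity (pure denoising, so \(L ( f ) = 1\)), and \(c_{i} = 1\) in the weight. Setting the gradient of the step-3 subproblem to zero gives the closed form \(X_{k+1} = \rho Z_{k} ( 2\tau W_{k} + \rho I )^{-1}\) with \(Z_{k}\) the gradient step on f:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, tau, eps = 8, 6, 0.5, 0.1, 1e-3
# Observed matrix: low rank plus small noise (A = identity is an assumption)
B = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n)) \
    + 0.01 * rng.standard_normal((m, n))
rho = 1.0                      # satisfies rho >= L(f)/2 since L(f) = 1 here

def objective(X):
    # F(X) = tau * tr((X^T X + eps I)^(p/2)) + (1/2)||X - B||_F^2
    s2 = np.linalg.svd(X, compute_uv=False) ** 2
    return tau * ((s2 + eps) ** (p / 2)).sum() + 0.5 * np.sum((X - B) ** 2)

def weight(X):
    # W = (p/2) V Diag{ 1 / (sigma_i(X)^2 + eps)^(1 - p/2) } V^T, with c_i = 1
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = (p / 2) / (s ** 2 + eps) ** (1 - p / 2)
    return Vt.T @ np.diag(d) @ Vt

X = B.copy()
vals = [objective(X)]
for _ in range(50):
    W = weight(X)                    # step 4: weight from current iterate
    Z = X - (X - B) / rho            # gradient step on f (grad f = X - B)
    # step 3 in closed form: gradient 2*tau*X*W + rho*(X - Z) = 0
    X = rho * Z @ np.linalg.inv(2 * tau * W + rho * np.eye(n))
    vals.append(objective(X))
```

By Theorem 3.3 the recorded objective values `vals` should decrease monotonically, which the run confirms.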
Theorem 3.3
Let \(\rho\ge\frac{L ( f )}{2}\), where \(L ( f )\) is the Lipschitz constant of \(\nabla f\). The sequence \(\{ X_{k} \}\) generated by Algorithm 1 satisfies:

(1)
$$F ( X_{k} ) - F ( X_{k + 1} ) \ge \biggl( \rho - \frac{L ( f )}{2} \biggr) \Vert X_{k} - X_{k + 1} \Vert _{F}^{2}. $$

(2)
The sequence \(\{ X_{k} \}\) is bounded.

(3)
\(\sum_{k = 1}^{\infty} \Vert X_{k} - X_{k + 1} \Vert _{F}^{2} < \frac{2\theta}{2\rho - L ( f )}\). In particular, \(\lim_{k \to\infty} \Vert X_{k} - X_{k + 1} \Vert _{F} = 0\).
Proof
Since \(X_{k + 1}\) is the globally optimal solution of problem (3.3), the zero matrix is contained in the subdifferential of its objective with respect to X at \(X_{k + 1}\); that is, \(X_{k + 1}\) satisfies
Combining the equalities (3.4) and (3.5), we get
Since \(\langle W_{k},X^{T}X \rangle\) is a convex function of X, we have
and the above inequality can also be rewritten as
Then it follows from (3.6) and (3.7) that
Let \(f ( X ) = \frac{1}{2} \Vert \mathrm{A} ( X )  b \Vert _{2}^{2}\), and according to LemmaÂ 3.1,
can be obtained. Since the function \(\operatorname{tr} ( ( X^{T}X + \varepsilon I )^{p/2} )\) is concave, similarly to (3.1) we obtain
Now, combining (3.8), (3.9), and (3.10), we get
Thus, \(F ( X_{k} )\) is monotonically decreasing. Summing all the above inequalities over \(k \ge1\) yields
and from (3.11) it follows that
Then, for \(k \to\infty\), (3.12) implies that
Since the objective function \(F ( X )\) in problem (2.1) is nonnegative and satisfies
then \(X_{k} \in \{ X:0 \le F ( X ) \le F ( X_{1} ) \}\) and the sequence \(\{ X_{k} \}\) is bounded.
This completes the proof. □
Theorem 3.4
Let \(\{ X_{k} \}\) be the sequence generated by Algorithm 1. Then any accumulation point of \(\{ X_{k} \}\) is a stationary point \(X^{\bigstar} \) of the problem. Moreover, for \(k = 1,2, \ldots,N\), we have
Proof
Since the sequence \(\{ X_{k} \}\) generated by Algorithm 1 is bounded, there exist an accumulation point \(X^{\bigstar} \) and a subsequence \(\{ X_{k_{j}} \}\) such that \(\lim_{j \to\infty} X_{k_{j}} = X^{\bigstar} \). Since \(X_{k_{j}}\) is the solution of problem (3.3), it can be obtained that
Letting \(j \to\infty\), according to Theorem 3.3 we have \(\lim_{j \to\infty} \Vert X_{k_{j} + 1} - X_{k_{j}} \Vert _{F} = 0\). Hence, there exists the matrix
where \(X^{\bigstar} = U_{2}\Sigma_{2}V_{2}^{T}\) with \(U_{2} \in\mathbb{R}^{m \times n}\), \(V_{2} \in\mathbb{R}^{n \times n}\), and \(\Sigma = \operatorname{Diag} \{ \frac{1}{ ( ( \sigma_{i} ( X^{\bigstar} ) )^{2} + \varepsilon )^{1 - \frac{p}{2}}} \}\).
From the above analysis, it follows that
then \(X^{\bigstar} \) is a stationary point of problem (2.1).
Moreover, summing (3.11) over \(k = 1,2, \ldots,N\), we obtain
Thus
can be obtained, which completes the proof. □
4 Conclusion
A proximal iteratively reweighted algorithm, based on the weighted fixed point method, for recovering a low-rank matrix has been presented in this paper. Owing to the special properties of the nonconvex surrogate function, each iteration of the algorithm solves a weighted singular value thresholding problem in closed form. Finally, it has been proved that the algorithm decreases the objective function value monotonically and that any limit point is a stationary point.
References
Lu, C, Lin, Z, Yan, S: Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization. IEEE Trans. Image Process. 24, 646–654 (2015)
Fornasier, M, Rauhut, H, Ward, R: Low-rank matrix recovery via iteratively reweighted least squares minimization. SIAM J. Optim. 24, 1614–1640 (2011)
Akcay, H: Subspace-based spectrum estimation in frequency domain by regularized nuclear norm minimization. Signal Process. 99, 69–85 (2014)
Takahashi, T, Konishi, K, Furukawa, T: Rank minimization approach to image inpainting using null space based alternating optimization. In: Proceedings of IEEE International Conference on Image Processing, pp. 1717–1720 (2012)
Liu, G, Lin, Z, Yu, Y: Robust subspace segmentation by low-rank representation. In: Proceedings of the 27th International Conference on Machine Learning, pp. 663–670 (2010)
Candes, EJ, Recht, B: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)
Liu, Z, Vandenberghe, L: Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31(3), 1235–1256 (2009)
Recht, B, Fazel, M, Parrilo, P: Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Rev. 52, 471–501 (2010)
Recht, B, Xu, W, Hassibi, B: Null space conditions and thresholds for rank minimization. Math. Program. 127, 175–202 (2011)
Baraniuk, R: Compressive sensing. IEEE Signal Process. Mag. 24(4), 118–124 (2007)
Zhang, M, Huang, ZH, Zhang, Y: Restricted p-isometry properties of nonconvex matrix recovery. IEEE Trans. Inf. Theory 59(7), 4316–4323 (2013)
Liu, L, Huang, W, Chen, DR: Exact minimum rank approximation via Schatten p-norm minimization. J. Comput. Appl. Math. 267, 218–227 (2014)
Lai, M, Xu, Y, Yin, W: Improved iteratively reweighted least squares for unconstrained smoothed \(\ell_{q}\) minimization. SIAM J. Numer. Anal. 51(2), 927–957 (2014)
Konishi, K, Uruma, K, Takahashi, T, Furukawa, T: Iterative partial matrix shrinkage algorithm for matrix rank minimization. Signal Process. 100, 124–131 (2014)
Mohan, K, Fazel, M: Iterative reweighted algorithms for matrix rank minimization. J. Mach. Learn. Res. 13(1), 3441–3473 (2012)
Li, YF, Zhang, YJ, Huang, ZH: A reweighted nuclear norm minimization algorithm for low rank matrix recovery. J. Comput. Appl. Math. 263, 338–350 (2014)
Toh, KC, Yun, S: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 6, 615–640 (2010)
Acknowledgements
We gratefully acknowledge the financial support from the National Natural Science Foundation of China (No. 71431008).
Contributions
The authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Ma, C.-Q., Ren, Y.-S.: Proximal iteratively reweighted algorithm for low-rank matrix recovery. J. Inequal. Appl. 2018, 12 (2018). https://doi.org/10.1186/s13660-017-1602-x