 Research
 Open Access
A new method based on the manifold-alternative approximating for low-rank matrix completion
 Fujiao Ren^{1} and
 Ruiping Wen^{2}
https://doi.org/10.1186/s13660-018-1931-4
© The Author(s) 2018
 Received: 18 September 2018
 Accepted: 3 December 2018
 Published: 11 December 2018
Abstract
In this paper, a new method is proposed for low-rank matrix completion, based on least squares approximation to the known elements in the manifolds formed by the singular vectors of a partial singular value decomposition, carried out alternately. The method reduces the rank of the manifold by gradually decreasing the number of thresholded singular values and thus obtains an optimal low-rank matrix. It is proven that the manifold-alternative approximating method converges under certain conditions. Furthermore, random experiments show that, compared with the augmented Lagrange multiplier and the orthogonal rank-one matrix pursuit algorithms, it is more effective as regards CPU time and the low-rank property.
Keywords
 Manifold-alternative approximating
 Low rank
 Matrix completion
 Convergence
1 Introduction
Many algorithms have been designed to attempt to find the global minimum of (1.2) directly, for example, the hard thresholding algorithms [4, 15, 17, 26], the singular value thresholding (SVT) method [6], the accelerated singular value thresholding (ASVT) method [14], the proximal forward–backward splitting [9], the augmented Lagrange multiplier (ALM) method [19], the interior point methods [7, 28], and the new gradient projection (NGP) method [34].
Based on the bilinear decomposition of a rank-r matrix, some algorithms have been proposed to solve (1.1) when the rank r is known or can be estimated [20, 21]; we mention the Riemannian geometry method [30], the Riemannian trust-region method [5, 23], the alternating minimization method [16], and the alternating steepest descent method [26]. The rank of many completion matrices, however, is unknown, so one has to estimate it ahead of time or approach it from a lower rank, which makes the matrix completion problem harder to solve. Wen et al. [33] presented two-stage iteration algorithms for the unknown-rank problem. To decrease the computational cost, Wang et al. [31], extending the orthogonal matching pursuit (OMP) procedure from the vector to the matrix level, presented an orthogonal rank-one matrix pursuit (OR1MP) method, in which only the top singular vector pair is calculated at each iteration step and an ϵ-feasible solution is obtained in only \(O(\log(\frac{1}{\epsilon}))\) iterations. However, that method converges to a feasible point rather than to an optimal one of minimum rank, so the accuracy is limited and cannot be improved once the rank is reached. Motivated by the above, in this study we propose a manifold-alternative approximating method for solving problem (1.2). In an outer iteration the approximation is carried out in the left-singular-vector subspace, and in an inner iteration it is alternately carried out in the right-singular-vector subspace. Over a whole iteration the reduction of the rank results in an alternating optimization, while the completed matrix satisfies \(M_{ij}=(UV^{T})_{ij}\) for \((i,j)\in\varOmega\).
Here are some notations and preliminaries. Let \(\varOmega\subset\{1,2,\ldots,n\}\times\{1,2,\ldots,n\}\) denote the indices of the observed entries of the matrix \(X\in\mathbb{R}^{n\times n}\), and Ω̄ denote the indices of the missing entries. \(\|X\|_{*}\) represents the nuclear norm (also called the Schatten 1-norm) of X, that is, the sum of the singular values of X; \(\|X\|_{2}\) and \(\|X\|_{F}\) denote the 2-norm and F-norm of X, respectively. We denote by \(\langle X,Y\rangle=\operatorname{trace}(X^{T}Y)\) the inner product between two matrices (so \(\|X\|_{F}^{2}=\langle X,X\rangle\)). The Cauchy–Schwarz inequality gives \(\langle X,Y\rangle\leq\|X\|_{F}\cdot\|Y\|_{F}\), and it is well known that \(\langle X,Y\rangle\leq\|X\|_{2}\cdot\|Y\|_{*}\) [7, 32].
For a matrix \(A\in\mathbb{R}^{n\times n}\), \(\operatorname{vec}(A)=(a_{1}^{T},a_{2}^{T},\ldots,a_{n}^{T})^{T}\) denotes the vector obtained by stacking the columns of A, \(\dim(A)\) is always used to represent the dimensions of A, and \(r(A)\) stands for the rank of A.
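The norms and inequalities above can be checked numerically. The following NumPy snippet (a small random example, not part of the original text) verifies the stated relations between the nuclear, spectral and Frobenius norms, the trace inner product, and the vec operator.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
Y = rng.standard_normal((5, 5))

inner = np.trace(X.T @ Y)              # <X, Y> = trace(X^T Y)
vecX = X.reshape(-1, order="F")        # vec(X): columns stacked into one vector
froX = np.linalg.norm(X, "fro")        # ||X||_F
froY = np.linalg.norm(Y, "fro")

# ||X||_F^2 = <X, X>
assert np.isclose(froX**2, np.trace(X.T @ X))
# <X, Y> = vec(X)^T vec(Y)
assert np.isclose(inner, vecX @ Y.reshape(-1, order="F"))
# Cauchy–Schwarz: <X, Y> <= ||X||_F ||Y||_F
assert inner <= froX * froY + 1e-12
# Duality of spectral and nuclear norms: <X, Y> <= ||X||_2 ||Y||_*
assert inner <= np.linalg.norm(X, 2) * np.linalg.norm(Y, "nuc") + 1e-12
```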
The rest of the paper is organized as follows. After we provide a brief review of the ALM and the OR1MP methods, a manifoldalternative approximating method is proposed in Sect. 2. The convergence results of the new method are discussed in Sect. 3. Finally, numerical experiments are shown with comparison to other methods in Sect. 4. We end the paper with a concluding remark in Sect. 5.
2 Methods
2.1 The method of augmented Lagrange multipliers
The method of augmented Lagrange multipliers (ALM) was proposed in [19] for solving the convex optimization problem (1.2). It is described below.
Since the matrix completion problem is closely connected to the robust principal component analysis (RPCA) problem, it can be formulated in the same way as RPCA; an equivalent form of (1.2) can then be considered.
Method 2.1
(Algorithm 6 of [19])
1. \(Y_{0}=0\); \(E_{0}=0\); \(\mu_{0}>0\); \(\rho>1\); \(k=0\).
2. while not converged do
3. // Lines 4–5 solve \(A_{k+1}=\arg\min_{X} L(X,E_{k},Y_{k},\mu_{k})\).
4. \((U,S,V)=\operatorname{svd}(M-E_{k}+\mu_{k}^{-1}Y_{k})\);
5. \(A_{k+1}=US_{\mu_{k}^{-1}}[S]V^{T}\).
6. // Line 7 solves \(E_{k+1}=\arg\min_{\pi_{\varOmega}(E)=0}L(A_{k+1},E,Y_{k},\mu_{k})\).
7. \(E_{k+1}=\pi_{\overline{\varOmega}}(M-A_{k+1}+\mu_{k}^{-1}Y_{k})\).
8. \(Y_{k+1}=Y_{k}+\mu_{k}(M-A_{k+1}-E_{k+1})\).
9. Update \(\mu_{k}\) to \(\mu_{k+1}\).
10. \(k\leftarrow k+1\).
11. end while
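Method 2.1 can be sketched in Python with NumPy as follows. This is a minimal illustration, not the authors' implementation: `alm_complete` is a hypothetical name, the shrinkage \(S_{\mu^{-1}}[\cdot]\) is applied entrywise to the singular values, and the default parameters and the cap on μ are assumed choices.

```python
import numpy as np

def alm_complete(D, mask, mu=1.0, rho=1.5, mu_max=1e8, iters=300, tol=1e-7):
    """Sketch of the ALM iteration (Method 2.1). D holds the observed entries
    (zeros elsewhere); mask is the Boolean observation set Omega."""
    Y = np.zeros_like(D)   # Lagrange multiplier
    E = np.zeros_like(D)   # error term, supported on the complement of Omega
    A = np.zeros_like(D)
    normD = np.linalg.norm(D)
    for _ in range(iters):
        # A_{k+1}: singular value shrinkage of D - E_k + mu_k^{-1} Y_k
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # E_{k+1}: free on the unobserved set, zero on Omega
        E = np.where(mask, 0.0, D - A + Y / mu)
        # Multiplier and penalty updates (lines 8-9)
        Y = Y + mu * (D - A - E)
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(D - A - E) <= tol * normD:
            break
    return A
```

On a small random low-rank problem the iterates fit the observed entries closely, consistent with the remark below that the method shows good numerical behavior while the iterates themselves are not feasible.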
Remark
The method of augmented Lagrange multipliers has been applied to problem (1.2); it shows much better numerical behavior and much higher accuracy. However, it inherits the disadvantage of penalty methods: the matrix sequence generated by the method is not feasible, hence the accepted solutions are not feasible either.
2.2 The method of the orthogonal rankone matrix pursuit (OR1MP)
Method 2.2
(Algorithm 1 of [31])
Input: \(Y_{\varOmega}\) and a stopping criterion.
Initialize: Set \(X_{0}=0\), \(\theta^{0}=0\) and \(k=1\).
Step 1: Find the pair of top left- and right-singular vectors \((u_{k},v_{k})\) of the observed residual matrix \(R_{k}=Y_{\varOmega}-X_{k-1}\) and set \(M_{k}=u_{k}v_{k}^{T}\).
Step 2: Compute the weight vector \(\theta^{k}\) using the closed-form least squares solution \(\theta^{k}=(\bar{M}_{k}^{T}\bar{M}_{k})^{-1}\bar{M}_{k}^{T}\dot{y}\).
Step 3: Set \(X_{k}=\sum_{i=1}^{k}\theta_{i}^{k}(M_{i})_{\varOmega}\) and \(k\leftarrow k+1\);
until the stopping criterion is satisfied.
Output: Constructed matrix \(\hat{Y}=\sum_{i=1}^{k}\theta_{i}^{k}M_{i}\).
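The greedy structure of Method 2.2 can be sketched as follows. This is an illustrative simplification: `or1mp` is a hypothetical name, a dense SVD stands in for the top-singular-pair computation, and a fixed iteration count replaces the stopping criterion.

```python
import numpy as np

def or1mp(D, mask, max_rank=10):
    """Sketch of OR1MP (Method 2.2): greedily add the top singular pair of the
    observed residual, then refit all weights by least squares on Omega."""
    obs = mask.ravel()
    y = D.ravel()[obs]                       # observed entries as a vector
    bases, cols = [], []
    X = np.zeros_like(D)
    for _ in range(max_rank):
        # Step 1: top singular pair of the observed residual R_k
        R = np.where(mask, D - X, 0.0)
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        M_k = np.outer(U[:, 0], Vt[0])       # rank-one basis u_k v_k^T
        bases.append(M_k)
        cols.append(M_k.ravel()[obs])
        # Step 2: closed-form least squares for the weight vector theta^k
        theta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        # Step 3: current estimate as a weighted sum of the rank-one bases
        X = sum(t * B for t, B in zip(theta, bases))
    return X
```

With a fully observed rank-one matrix, one pursuit step already reproduces the data exactly, which reflects why each iteration is so cheap; the remark below explains the price paid in accuracy on genuinely incomplete data.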
Remark
To decrease the computational cost, Wang et al. [31], extending the orthogonal matching pursuit (OMP) procedure from the vector to the matrix level, presented the orthogonal rank-one matrix pursuit (OR1MP) method, in which only the top singular vector pair is calculated at each iteration step and an ϵ-feasible solution is obtained in only \(O(\log(\frac{1}{\epsilon}))\) iterations. However, the method converges to a feasible point rather than to an optimal one of minimum rank, so the accuracy is limited and cannot be improved once the rank is reached.
2.3 The method of a manifoldalternative approximating (MAA)
For convenience, \([U_{k},\varSigma_{k},V_{k}]_{\tau_{k}}=\operatorname{lansvd}(Y_{k})\) denotes the top \(\tau_{k}\) singular pairs of the matrix \(Y_{k}\), computed by the Lanczos method, where \(U_{k}=(u_{1},u_{2},\ldots,u_{\tau_{k}})\), \(V_{k}=(v_{1},v_{2},\ldots,v_{\tau_{k}})\) and \(\varSigma_{k}=\operatorname{diag}(\sigma_{1k},\sigma_{2k},\ldots,\sigma_{\tau_{k},k})\) with \(\sigma_{1k}\geq\sigma_{2k}\geq\cdots\geq\sigma_{\tau_{k},k}>0\).
Method 2.3
(MAA)
Input: \(D=P_{\varOmega}(M)\), the vector \(\operatorname{vec}(D)\) of the entries \(D(i,j)\), \((i,j)\in\varOmega\), \(\tau_{0}>0\) (\(\tau_{k}\in\mathbb{N}^{+}\)), \(0< c_{1},c_{2}<1\), and a tolerance \(\epsilon>0\).
Initialize: Set \(Y_{0}=D\) and \(k=0\).
Step 1: Compute the partial SVD of the matrix \(Y_{k}\): \([U_{k},\varSigma_{k},V_{k}]_{\tau_{k}}=\operatorname{lansvd}(Y_{k})\).
Step 2: Solve the optimization model \(\min_{X_{k}}\|\operatorname{vec}(D)-\operatorname{vec}(P_{\varOmega}(U_{k}X_{k}))\|_{F}\) and set \(Y_{k+1}=U_{k}X_{k}\).
Step 3: When \(\frac{\|Y_{k+1}-Y_{k}\|_{F}}{\|D\|_{F}}<\epsilon\), stop; otherwise, go to the next step.
Step 4: For \(k>0\), if \(\|\operatorname{vec}(D)-\operatorname{vec}(P_{\varOmega}(Y_{k+1}))\|_{F}< c_{2}\|\operatorname{vec}(D)-\operatorname{vec}(P_{\varOmega}(Y_{k}))\|_{F}\), set \(\tau_{k+1}=[c_{1}\tau_{k}]\) and go to the next step; otherwise, do
(1) Set \(Z_{k}=D+P_{\overline{\varOmega}}(Y_{k+1})\) and compute the partial SVD of the matrix \(Z_{k}\): \([U_{k},\varSigma_{k},V_{k}]_{\tau_{k}}=\operatorname{lansvd}(Z_{k})\). Let \(W_{k}=U_{k}\varSigma_{k}V_{k}^{T}\) and \(\alpha_{k}=\|\operatorname{vec}(D)-\operatorname{vec}(P_{\varOmega}(W_{k}))\|_{F}\). Set \(Z_{k+\frac{1}{2}}=D+P_{\overline{\varOmega}}(W_{k})\).
(2) Do the partial SVD \([U_{k+\frac{1}{2}},\varSigma_{k+\frac{1}{2}},V_{k+\frac{1}{2}}]_{\tau_{k}}=\operatorname{lansvd}(Z_{k+\frac{1}{2}})\); then \(W_{k+\frac{1}{2}}=U_{k+\frac{1}{2}}\varSigma_{k+\frac{1}{2}}V_{k+\frac{1}{2}}^{T}\).
(3) Solve the minimum problem \(\min_{X_{k+\frac{1}{2}}}\|\operatorname{vec}(D)-\operatorname{vec}(P_{\varOmega}(X_{k+\frac{1}{2}}V_{k+\frac{1}{2}}^{T}))\|_{F}\), yielding \(Y_{k+\frac{1}{2}}=X_{k+\frac{1}{2}}V_{k+\frac{1}{2}}^{T}\) and \(\alpha_{k+\frac{1}{2}}=\|\operatorname{vec}(D)-\operatorname{vec}(P_{\varOmega}(Y_{k+\frac{1}{2}}))\|_{F}\). Set \(Z_{k+1}=D+P_{\overline{\varOmega}}(Y_{k+\frac{1}{2}})\).
(4) If \(\alpha_{k+\frac{1}{2}}\leq c_{2}\alpha_{k}\), set \(\tau_{k+1}=\tau_{k}-1\); if \(\alpha_{k+\frac{1}{2}}\geq\alpha_{k}\), set \(\tau_{k+1}=\tau_{k}+1\) and go to Step 1. Otherwise, if \(c_{2}\alpha_{k}\leq\alpha_{k+\frac{1}{2}}<\alpha_{k}\), set \(\tau_{k+1}=\tau_{k}\) and go to the next step.
Step 5: \(k:=k+1\), go to Step 2.
Output: Constructed matrix \(Y_{k}\).
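A minimal numerical sketch of the left-subspace approximation of Step 2, followed by the refilling of the unobserved entries, is given below. Everything here is an assumption-laden simplification: `maa_step` is a hypothetical helper name, a dense SVD stands in for `lansvd`, and the inner right-subspace sweep and the rank-update rule of Step 4 are omitted.

```python
import numpy as np

def maa_step(D, mask, Y, tau):
    """One simplified outer step in the spirit of Method 2.3: fit the observed
    data in the span of the top-tau left singular vectors of the current
    iterate, then refill the unobserved entries with the fitted values."""
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :tau]
    n = D.shape[1]
    X = np.empty((tau, n))
    for j in range(n):                    # column-wise least squares on Omega
        rows = mask[:, j]                 # observed rows of column j
        X[:, j] = np.linalg.lstsq(U[rows], D[rows, j], rcond=None)[0]
    UX = U @ X
    # relative residual on the observed set, ||vec(D) - vec(P_Omega(U X))||
    err = np.linalg.norm((UX - D)[mask]) / np.linalg.norm(D[mask])
    return np.where(mask, D, UX), err     # keep the known entries fixed
```

Iterating this step on a well-sampled low-rank problem drives the observed-set residual down, which is the quantity the stopping and rank-update tests of Steps 3 and 4 monitor.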
3 Convergence analysis
We now discuss the convergence theory of the method.
Lemma 3.1
Proof
This contradicts \(r(Y^{*})\leqslant r(Y^{*})-1\). □
Lemma 3.2
Proof
Lemma 3.3
Proof
Theorem 3.1
Proof
From Method 2.3, we can see the following:
Therefore, there exists an index \(k_{0}\) such that \(r(W_{k_{0}})< r(Y^{*})\).
From Lemma 3.1, the inequality (3.1) holds true.
At that point, the procedure transfers to Step 4 of Method 2.3, and then \(\tau_{k_{0}+1}=\tau_{k_{0}}+1\); repeating this, there exists an index \(k_{1}\) such that \(r(W_{k_{1}})=r(Y^{*})\).
4 Numerical experiments
It is well known that the OR1MP method is the simplest and most efficient for solving problem (1.1), and the ALM method is one of the most popular and efficient methods for solving problem (1.2). In this section we carry out several experiments to analyze the performance of our Method 2.3 and compare it with the ALM and OR1MP methods.
We compare the methods on general matrix completion problems. In the experiments, \(p=m/n^{2}\) denotes the observation ratio, where m is the number of observed entries; \(p=0.1,0.2,0.3,0.5\) are the different choices of this ratio. The relative error is \(\mathrm{RES}=\frac{\|Y_{k}-D\|_{F}}{\|D\|_{F}}\). The values of the parameters are: \(\tau_{0}=100\), \(c_{1}=0.8\), \(c_{2}=0.9\) and \(\epsilon=5\times10^{-6}\).
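The experimental setup above can be reproduced as follows. `make_problem` and `res` are hypothetical helper names, and drawing the rank-r factors as standard Gaussians is an assumption the paper does not spell out.

```python
import numpy as np

def make_problem(n, r, p, seed=0):
    """Random n-by-n rank-r test matrix and an observation mask with
    expected observation ratio p = m / n^2."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    mask = rng.random((n, n)) < p          # each entry observed with prob. p
    D = np.where(mask, M, 0.0)             # D = P_Omega(M)
    return M, mask, D

def res(Y, D):
    """Relative residual RES = ||Y - D||_F / ||D||_F, as defined in Sect. 4."""
    return np.linalg.norm(Y - D) / np.linalg.norm(D)
```

For example, `make_problem(2000, 20, 0.1)` mimics the first row of Table 1, and `res` is the quantity reported in the RES columns.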
Table 1 Comparison results of the three methods for \(p=0.1\)

Size         \(r(Y_{0})\)  Method  RES         IT   CPU
2000 × 2000  20            MAA     6.3989e−05   19      61.0538
                           ALM     1.3289e−05  147     814.4656
                           OR1MP   1.5920e−02  100      80.2022
3000 × 3000  30            MAA     2.4436e−04   15     113.8439
                           ALM     1.2810e−05  155    3448.8579
                           OR1MP   1.5641e−02  100     177.3123
4000 × 4000  40            MAA     1.2204e−04   13     163.4071
                           ALM     1.1951e−05  166    9876.8939
                           OR1MP   1.8042e−02  100     318.5400
5000 × 5000  50            MAA     5.3731e−05   11     210.7015
                           ALM     9.6254e−06  173  22,641.7724
                           OR1MP   2.0254e−02  100     505.3112
Table 2 Comparison results of the three methods for \(p=0.2\)

Size         \(r(Y_{0})\)  Method  RES         IT   CPU
2000 × 2000  20            MAA     2.4308e−04   10     54.8639
                           ALM     9.2238e−06   70    237.4327
                           OR1MP   4.3432e−03  100     95.9820
3000 × 3000  30            MAA     5.0593e−05    8     94.3904
                           ALM     5.6067e−05   72    863.4068
                           OR1MP   5.9270e−02  100    213.9196
4000 × 4000  40            MAA     1.3172e−04    8    166.8769
                           ALM     5.4632e−06   72   2336.2629
                           OR1MP   8.5351e−03  100    382.8628
5000 × 5000  50            MAA     9.1096e−06    8    248.6944
                           ALM     1.0802e−05   64   5141.8507
                           OR1MP   1.1188e−02  100    603.1532
Table 3 Comparison results of the three methods for \(p=0.3\)

Size         \(r(Y_{0})\)  Method  RES         IT   CPU
2000 × 2000  20            MAA     6.3561e−05    7     53.9095
                           ALM     6.8401e−06   44     74.5572
                           OR1MP   2.0841e−03  100    112.3723
3000 × 3000  30            MAA     6.8760e−06    7    112.3313
                           ALM     7.8787e−06   46    157.1458
                           OR1MP   3.3314e−03  100    251.2893
4000 × 4000  40            MAA     1.4268e−05    6    170.5440
                           ALM     8.7585e−06   45    258.4726
                           OR1MP   5.5726e−03  100    447.9783
5000 × 5000  50            MAA     1.2876e−05    6    279.8556
                           ALM     8.7935e−06   43    420.4459
                           OR1MP   8.0070e−03  100    724.6392
Table 4 Comparison results of the three methods for \(p=0.5\)

Size         \(r(Y_{0})\)  Method  RES         IT   CPU
2000 × 2000  20            MAA     1.2636e−05    5     65.7244
                           ALM     8.7149e−06   25     44.0239
                           OR1MP   7.3980e−04  100    163.9605
3000 × 3000  30            MAA     7.1438e−06    5    149.9019
                           ALM     2.7670e−06   24     95.7395
                           OR1MP   1.4161e−03  100    338.6274
4000 × 4000  40            MAA     5.7499e−06    5    285.8564
                           ALM     3.6098e−06   25    205.6450
                           OR1MP   3.1864e−03  100    601.8650
5000 × 5000  50            MAA     5.1190e−06    5    443.8769
                           ALM     8.8482e−06   25    245.6650
                           OR1MP   4.9806e−03  100    938.3361
5 Concluding remark
Based on least squares approximation to the known elements, we proposed a manifold-alternative approximating method for the low-rank matrix completion problem. Compared with the ALM and OR1MP methods, as shown in Tables 1–4, our method performs better as regards computing time and the low-rank property. The method reduces the rank of the manifold by gradually decreasing the number of thresholded singular values, obtaining an optimal low-rank matrix at each iteration step.
Declarations
Acknowledgements
The authors are very much indebted to the editor and anonymous referees for their helpful comments and suggestions. The authors are thankful for the support from the NSF of Shanxi Province (201601D011004).
Availability of data and materials
Please contact the corresponding author for data requests.
Funding
Not applicable.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 1. Amit, Y., Fink, M., Srebro, N., Ullman, S.: Uncovering shared structures in multiclass classification. In: Proceedings of the 24th International Conference on Machine Learning, pp. 17–24. ACM, New York (2007)
 2. Argyriou, A., Evgeniou, T., Pontil, M.: Multi-task feature learning. Adv. Neural Inf. Process. Syst. 19, 41–48 (2007)
 3. Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. Comput. Graph. 34, 417–424 (2000)
 4. Blanchard, J., Tanner, J., Wei, K.: CGIHT: conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. Numerical Analysis Group, Preprint 14/08 (2014)
 5. Boumal, N., Absil, P.-A.: RTRMC: a Riemannian trust-region method for low-rank matrix completion. In: Shawe-Taylor, J., Zemel, R.S., Bartlett, P., Pereira, F.C.N., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, NIPS, vol. 24, pp. 406–414 (2011)
 6. Cai, J.-F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)
 7. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)
 8. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)
 9. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
 10. Eldén, L.: Matrix Methods in Data Mining and Pattern Recognition. Society for Industrial and Applied Mathematics, Philadelphia (2007)
 11. Fazel, M.: Matrix rank minimization with applications. Ph.D. Dissertation, Stanford University (2002)
 12. Haldar, J.P., Hernando, D.: Rank-constrained solutions to linear matrix equations using PowerFactorization. IEEE Signal Process. Lett. 16(7), 584–587 (2009)
 13. Harvey, N.J., Karger, D.R., Yekhanin, S.: The complexity of matrix completion. In: Proceedings of the Seventeenth Annual ACM–SIAM Symposium on Discrete Algorithms, SODA, pp. 1103–1111 (2006)
 14. Hu, Y., Zhang, D.B., Liu, J., Ye, J.P., He, X.F.: Accelerated singular value thresholding for matrix completion. In: KDD’12, Beijing, China, August 12–16, 2012 (2012)
 15. Jain, P., Meka, R., Dhillon, I.: Guaranteed rank minimization via singular value projection. In: Proceedings of the Neural Information Processing Systems Conference, NIPS, pp. 937–945 (2010)
 16. Jain, P., Netrapalli, P., Sanghavi, S.: Low-rank matrix completion using alternating minimization. In: Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), pp. 665–674 (2013)
 17. Kyrillidis, A., Cevher, V.: Matrix recipes for hard thresholding methods. J. Math. Imaging Vis. 48(2), 235–265 (2014)
 18. Lai, M.J., Xu, Y., Yin, W.: Improved iteratively reweighted least squares for unconstrained smoothed \(l_{q}\) minimization. SIAM J. Numer. Anal. 51, 927–957 (2013)
 19. Lin, Z.C., Chen, M.M., Ma, Y.: A fast augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. In: Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel (2010)
 20. Liu, Z., Vandenberghe, L.: Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31, 1235–1256 (2009)
 21. Lu, Z., Zhang, Y.: Penalty decomposition methods for rank minimization (2010). https://arxiv.org/abs/1008.5373
 22. Mesbahi, M., Papavassilopoulos, G.P.: On the rank minimization problem over a positive semidefinite linear matrix inequality. IEEE Trans. Autom. Control 42, 239–243 (1997)
 23. Mishra, B., Apuroop, K.A., Sepulchre, R.: A Riemannian geometry for low-rank matrix completion (2013). arXiv:1306.2672
 24. Ngo, T., Saad, Y.: Scaled gradients on Grassmann manifolds for matrix completion. In: Advances in Neural Information Processing Systems, NIPS (2012)
 25. Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010)
 26. Tanner, J., Wei, K.: Low-rank matrix completion by alternating steepest descent methods. Appl. Comput. Harmon. Anal. 40, 417–429 (2016)
 27. Toh, K.C., Todd, M.J., Tutuncu, R.H.: SDPT3: a Matlab software package for semidefinite-quadratic-linear programming, version 3.0 (2001). http://www.math.nus.edu.sg/mattohkc/sdpt3.html
 28. Toh, K.C., Yun, S.: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 6, 615–640 (2010)
 29. Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography: a factorization method. Int. J. Comput. Vis. 9, 137–154 (1992)
 30. Vandereycken, B.: Low-rank matrix completion by Riemannian optimization. SIAM J. Optim. 23(2), 1214–1236 (2013)
 31. Wang, Z., Lai, M.J., Lu, Z.S., Fan, W., Hasan, D., Ye, J.P.: Orthogonal rank-one matrix pursuit for low rank matrix completion. SIAM J. Sci. Comput. 37, 488–514 (2015)
 32. Watson, G.A.: Characterization of the subdifferential of some matrix norms. Linear Algebra Appl. 170, 33–45 (1992)
 33. Wen, R.P., Liu, L.X.: The two-stage iteration algorithms based on the shortest distance for low rank matrix completion. Appl. Math. Comput. 314, 133–141 (2017)
 34. Wen, R.P., Yan, X.H.: A new gradient projection method for matrix completion. Appl. Math. Comput. 258, 537–544 (2015)
 35. Wen, Z., Yin, W., Zhang, Y.: Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 4, 333–361 (2012)
 36. Xu, Y., Yin, W.: A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imaging Sci. 6(3), 1758–1789 (2013)