Lower bounds for the low-rank matrix approximation
Journal of Inequalities and Applications volume 2017, Article number: 288 (2017)
Abstract
Low-rank matrix recovery is an active topic that has drawn the attention of many researchers. It addresses the problem of approximating an observed data matrix by an unknown low-rank matrix. Suppose that A is a low-rank matrix approximation of D, where D and A are \(m \times n\) matrices. Based on a useful decomposition of \(D^{\dagger} - A^{\dagger}\), for a unitarily invariant norm \(\|\cdot\|\), two sharp lower bounds of \(\|D - A\|\) are derived for the cases \(\|D\|\geq\|A\|\) and \(\|D\|\leq\|A\|\), respectively. The presented simulations and applications demonstrate our results when the approximation matrix A is low-rank and the perturbation matrix E is sparse.
1 Introduction
In mathematics, low-rank approximation is a minimization problem in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to the constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data.
Low-rank approximation of a linear operator is ubiquitous in applied mathematics, scientific computing, numerical analysis, and a number of other areas. For example, a low-rank matrix could correspond to a low-degree statistical model for a random process (e.g., factor analysis), a low-order realization of a linear system [1], or a low-dimensional embedding of data in Euclidean space [2]; further applications arise in image processing and computer vision [3–5], bioinformatics, background modeling and face recognition [6], latent semantic indexing [7, 8], machine learning [9–12], and control [13]. Such data may have thousands or even billions of dimensions, and a large number of samples may share the same or a similar structure. As is well known, the important information often lies in a low-dimensional subspace or low-dimensional manifold, but is contaminated by perturbative components (sometimes by a sparse component).
Let \(D\in\mathbb{R}^{m\times n}\) be an observed data matrix of the form
where \(A\in\mathbb{R}^{m\times n}\) is the low-rank component and \(E\in \mathbb{R}^{m\times n}\) is the perturbation component of D. The singular value decomposition (SVD [14]) is a method for dealing with such high-dimensional data. If the matrix E is small, the classical principal component analysis (PCA [15–17]) can seek the best rank-r estimation of A by solving the following constrained optimization via the SVD of D and then projecting the columns of D onto the subspace spanned by the r principal left singular vectors of D:
where \(r\ll\min(m,n)\) is the target dimension of the subspace, ϵ is an upper bound on the perturbative component \(\|E\|_{F}\), and \(\|\cdot\|_{F}\) is the Frobenius norm.
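To make this construction concrete, the following sketch (Python/NumPy; the function name `best_rank_r` and the test sizes are illustrative, not from the paper) forms the best rank-r approximation of D in the Frobenius norm by truncating its SVD:

```python
import numpy as np

def best_rank_r(D, r):
    """Best rank-r approximation of D in the Frobenius norm (Eckart-Young),
    obtained by keeping the r largest singular triplets of D."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# small illustration: a rank-5 matrix plus a small dense perturbation
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
D = A + 1e-3 * rng.standard_normal((50, 40))
A_hat = best_rank_r(D, 5)
print(np.linalg.norm(D - A_hat, "fro"))  # close to the perturbation level
```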
Despite its many advantages, traditional PCA suffers from the fact that the estimate Â obtained by classical PCA can be arbitrarily far from the true A when E is sparse (relative to the rank of A) but its entries are large. The reason for this poor performance is precisely that traditional PCA is designed for Gaussian noise, not for sparse gross noise. Recently, robust PCA (RPCA [6]) has emerged as a family of methods that aims to make PCA robust to large errors and outliers; in this sense, RPCA is an upgrade of PCA.
There are several reasons to study lower bounds for the low-rank matrix approximation problem. Firstly, as far as we know, no previous work has considered a lower bound for this problem; in this paper we put one forward for the first time. Secondly, when a perturbation E is present, an approximation error is unavoidable; it cannot equal 0, but only tend to 0. Thirdly, our main results clearly show the influence of the spectral norm \(\|\cdot\|_{2}\) on the low-rank matrix approximation: for example, in our main result for Case II, the larger the largest singular value of D, the smaller the lower bound on the approximation error \(\|D-A\|\). In addition, a lower bound can be used to verify whether the solution obtained by an algorithm is optimal; for details, see the experiments in Section 4. Therefore it is necessary and significant to study lower bounds for the low-rank matrix approximation problem.
Remark 1.1
PCA and RPCA are methods for the low-rank approximation problem when a perturbation term exists. Our aim is to prove that, no matter which method is used, a lower bound on the error always exists and cannot be avoided as long as the perturbation term E is present. Given that the error exists, this paper focuses on the specific form of this lower bound.
1.1 Notations
For a matrix \(A\in\mathbb{R}^{m\times n}\), let \(\|A\|_{2}\) and \(\|A\|_{\ast}\) denote the spectral norm and the nuclear norm (i.e., the sum of its singular values), respectively. Let \(\|\cdot\|\) be a unitarily invariant norm. The pseudoinverse and the conjugate transpose of A are denoted by \(A^{\dagger}\) and \(A^{H}\), respectively. We consider the singular value decomposition (SVD) of a matrix A of rank r
where U and V are \(m\times r\) and \(n\times r\) matrices with orthonormal columns, respectively, and the \(\sigma_{i}\) are the positive singular values. We always assume that the SVD of a matrix is given in the reduced form above. Furthermore, \(\langle A,B \rangle=\operatorname{trace}(A^{H} B)\) denotes the standard inner product; the Frobenius norm is then
1.2 Organization
In this paper, we study a perturbation theory for low-rank matrix approximation. When \(\|D\|\geq\|A\|\) or \(\|D\|\leq\|A\|\), two sharp lower bounds of \(\|D - A\|\) are derived for a unitarily invariant norm. This work is organized as follows. In Section 2, we provide a review of relevant linear algebra and some preliminary results. In Section 3, under different norm conditions, two sharp lower bounds of \(\|D - A\|\) are given for the low-rank approximation problem, and the proofs of Theorem 3.5 are presented. In Section 4, examples and applications are given to verify the provided lower bounds. Finally, we conclude the paper with a short discussion.
2 Preliminaries
In order to prove our main results, we collect the following facts for later use.
2.1 Unitarily invariant norm
An important property of a Euclidean space is that shapes and distances do not change under rotation. In particular, for any vector x and any unitary matrix U, we have
An analogous property is shared by the spectral and Frobenius norms: namely, for any unitary matrices U and V, the product \(UAV^{H}\) satisfies
These examples suggest the following definition.
Definition 2.1
([18])
A norm \(\|\cdot\|\) on \(\mathbb{C}^{m\times n}\) is unitarily invariant if it satisfies
for any unitary matrices U and V. It is normalized if
whenever A is of rank one.
Remark 2.2
Let \(\Sigma= UAV^{H} \) be the singular value decomposition of a matrix A of order n. Let \(\|\cdot\|\) be a unitarily invariant norm. Since U and V are unitary,
Thus \(\|A\|\) is a function of the singular values of A.
The 2-norm plays a special role in the theory of unitarily invariant norms, as the following theorem shows.
Theorem 2.3
([18])
Let \(\|\cdot\|\) be a unitarily invariant norm. Then
and
Moreover, if \(\operatorname{Rank}(A)=1\), then
We have observed that the spectral and Frobenius norms are unitarily invariant. However, not all norms are unitarily invariant as the following example shows.
Example 2.4
Let
obviously, \(\|A\|_{\infty} =2\), but for a unitary matrix
we have
Remark 2.5
It is easy to verify that the nuclear norm \(\|\cdot\|_{\ast}\) is a unitarily invariant norm.
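As a quick numerical illustration of Definition 2.1 and Remark 2.5 (a sketch only; the matrix sizes and random seed are arbitrary choices of ours), one can check that the spectral, Frobenius, and nuclear norms are unchanged by unitary transformations, whereas the max-row-sum ∞-norm, one common reading of Example 2.4, is not:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))   # random orthogonal (unitary) U
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal (unitary) V
B = U @ A @ V.T                                     # U A V^H

for name, ord_ in [("spectral", 2), ("Frobenius", "fro"), ("nuclear", "nuc")]:
    print(name, np.linalg.norm(A, ord_), np.linalg.norm(B, ord_))  # equal pairs

# the max-row-sum infinity norm is generally NOT unitarily invariant
print(np.linalg.norm(A, np.inf), np.linalg.norm(B, np.inf))        # usually differ
```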
2.2 Projection
Let \(\mathbb{C}^{m}\) and \(\mathbb{C}^{n}\) be m- and n-dimensional inner product spaces over the complex field, respectively, and let \(A\in\mathbb{C}^{m\times n}\) be a linear transformation from \(\mathbb{C}^{n}\) into \(\mathbb{C}^{m}\).
Definition 2.6
([18])
The column space (range) of A is denoted by
and the null space of A by
Further, we let ⊥ denote the orthogonal complement and obtain \(\mathcal{R}(A) = \mathcal{N}(A^{H})^{\perp}\) and \(\mathcal{N}(A) = \mathcal{R}(A^{H})^{\perp}\).
The following properties [18] of the pseudoinverse are easily established.
Theorem 2.7
([18])
For any matrix A, the following hold.

1.
If \(A\in\mathbb{C}^{m\times n}\) has rank n, then \(A^{\dagger}=(A^{H} A)^{-1}A^{H}\) and \(A^{\dagger}A=I^{(n)}\).

2.
If \(A\in\mathbb{C}^{m\times n}\) has rank m, then \(A^{\dagger}=A^{H}(A A^{H} )^{-1}\) and \(A A^{\dagger}=I^{(m)}\).
Here \(I^{(n)}\in\mathbb{R}^{n\times n}\) is the identity matrix.
Theorem 2.8
([18])
For any matrix A, \(P_{A} = AA^{\dagger}\) is the orthogonal projector onto \(\mathcal{R}(A)\), \(P_{A^{H}}= A^{\dagger}A\) is the orthogonal projector onto \(\mathcal{R}(A^{H})\), and \(I - P_{A^{H}}\) is the orthogonal projector onto \(\mathcal{N}(A)\).
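The following short sketch (NumPy; the test sizes are hypothetical) verifies numerically that \(P_{A}=AA^{\dagger}\) behaves as the orthogonal projector onto \(\mathcal{R}(A)\): it is Hermitian, idempotent, and leaves the columns of A fixed.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 5))  # a rank-3 matrix
P_A = A @ np.linalg.pinv(A)          # orthogonal projector onto R(A)
P_AH = np.linalg.pinv(A) @ A         # orthogonal projector onto R(A^H)

assert np.allclose(P_A, P_A.T)       # Hermitian
assert np.allclose(P_A @ P_A, P_A)   # idempotent
assert np.allclose(P_A @ A, A)       # fixes the column space of A
assert np.allclose(A @ P_AH, A)      # P_{A^H} fixes the row space of A
```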
2.3 The decomposition of \(D^{\dagger} - A^{\dagger}\)
In this section, we focus on the decomposition of \(D^{\dagger} - A^{\dagger}\) and a general bound from the perturbation theory for pseudoinverses. Firstly, from the properties of orthogonal projections, we can deduce the following lemma.
Lemma 2.9
For any matrix A, let \(P_{A} = AA^{\dagger}\) and \(P_{A^{H}}= A^{\dagger}A\). Then we have
Proof
Since \(P_{A}^{\bot}= I-P_{A}\) and \(P_{A^{H}}^{\bot}= I-P_{A^{H}}\), we have that
The proof is completed. □
Using Lemma 2.9, the following decompositions of \(D^{\dagger} - A^{\dagger}\) were developed by Wedin [19].
Theorem 2.10
([19])
Let \(D = A +E\). Then the difference \(D^{\dagger} - A^{\dagger}\) is given by the expressions
By Lemma 2.9, using \(P_{A} = AA^{\dagger}\), \(P_{A^{H}}= A^{\dagger}A\), \(P_{A}^{\perp}=I-P_{A}\), \(P_{A^{H}}^{\perp}=I-P_{A^{H}}\), these expressions can be verified.
In previous work [19], Wedin developed a general bound from the perturbation theory for pseudoinverses. Theorem 2.11 is based on a useful decomposition of \(D^{\dagger} - A^{\dagger}\), where D and A are \(m \times n\) matrices. Sharp estimates of \(\|D^{\dagger} - A^{\dagger}\|\) are derived for a unitarily invariant norm. In [20], Chen et al. presented some new perturbation bounds for the orthogonal projections \(\|P_{D} - P_{A}\|\).
Theorem 2.11
([19])
Suppose \(D = A + E\). Then the error \(D^{\dagger} - A^{\dagger}\) satisfies the following bound:
where γ is given in Table 1.
Remark 2.12
For the spectral norm, by formula (11) we can achieve \(\gamma= \frac{1+\sqrt{5}}{2}\). When \(\|\cdot\|\) is the Frobenius norm, by formula (12), we have \(\gamma= \sqrt{2}\). Similarly, for an arbitrary unitarily invariant norm, according to formula (13), we can deduce \(\gamma=3\).
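As a numerical sanity check of Theorem 2.11 in the Frobenius case, the sketch below assumes the classical Wedin form \(\|D^{\dagger}-A^{\dagger}\|_{F}\leq\gamma\,\|A^{\dagger}\|_{2}\,\|D^{\dagger}\|_{2}\,\|E\|_{F}\) with \(\gamma=\sqrt{2}\) (the displayed formula is not reproduced above, so this form is our assumption); the test matrices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 15))  # low-rank A
E = 1e-2 * rng.standard_normal((20, 15))                          # perturbation
D = A + E

lhs = np.linalg.norm(np.linalg.pinv(D) - np.linalg.pinv(A), "fro")
rhs = np.sqrt(2) * np.linalg.norm(np.linalg.pinv(A), 2) \
      * np.linalg.norm(np.linalg.pinv(D), 2) * np.linalg.norm(E, "fro")
print(lhs <= rhs, lhs, rhs)   # expected: True
```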
Remark 2.13
From Theorem 2.11, since \(E= D-A\), if \(\operatorname{Rank}(A)\leq \operatorname{Rank}(D)\), then (11) gives a lower bound for the low-rank matrix approximation:
In the following section, based on Theorem 2.11, we provide two lower error bounds of \(\|D - A\|\) for a unitarily invariant norm.
3 Our main results
In this section, we consider the lower bound theory for the low-rank matrix approximation based on a useful decomposition of \(D^{\dagger} - A^{\dagger}\). When \(\operatorname{Rank}(A)\leq \operatorname{Rank}(D)\), some sharp lower bounds of \(\|D - A\|\) are derived in terms of a unitarily invariant norm. In order to prove our result, some lemmas are listed below.
Lemma 3.1
([18])
Let \(D = A + E\). The projections \(P_{D}\) and \(P_{A}\) satisfy
therefore
If \(\operatorname{Rank}(A)\leq \operatorname{Rank}(D)\), then
Lemma 3.2
([21])
Let \(A, D\in\mathbb{C}^{m\times n}\), \(\operatorname{Rank}(A)=r\), \(\operatorname{Rank}(D)=s\), \(r\leq s\), then there exists a unitary matrix \(Q\in\mathbb{C}^{m\times m}\) such that
where
\(\Gamma_{1}=\operatorname{diag} (\gamma_{1},\ldots, \gamma_{r_{1}})\), \(0 \leq\gamma_{1} \leq \cdots\leq\gamma_{r_{1}}\) and \(\Sigma_{1}=\operatorname{diag} (\sigma_{1},\ldots, \sigma_{r_{1}})\), \(0 \leq\sigma_{1} \leq \cdots\leq\sigma_{r_{1}}\). Moreover, \(\gamma_{i}\) and \(\sigma_{i}\) satisfy \(\gamma_{i}^{2} + \sigma_{i}^{2} =1\), \(i=1, \ldots, r_{1}\).
According to Lemma 3.2, we can easily get the following result.
Lemma 3.3
Let \(A, D\in\mathbb{C}^{m\times n}\), \(\operatorname{Rank}(A)=r\), \(\operatorname{Rank}(D)=s\), \(r\leq s\), then we have
Proof
Since
and
then
and
Therefore, they have the same singular values, which yields \(\|P_{A}^{\bot} P_{D}\| = \|P_{D} P_{A}^{\bot}\|\). □
This is a useful lemma that we will use in the proof of the main result. In order to prove our main theorem, two lower bounds of \(\|D - A\|\) are established in the following lemma.
Lemma 3.4
For a unitarily invariant norm, if \(\operatorname{Rank}(A)\leq \operatorname{Rank}(D)\), then the lower bound of \(\|D-A\|\) satisfies:
Case I: For \(\|D\|\geq\|A\|\), we have
Case II: For \(\|D\|\leq\|A\|\), we have
Proof
Case I: Since \(\|D\|\geq\|A\|\), we have \(\|D - A\| \geq \|D\| - \|A\|\). Using Theorem 2.3 and Lemma 3.1, we have \(\|AB\|\leq\|A\|_{2}\|B\|\) and \(\|P_{D}^{\bot}P_{A}\|\leq\|P_{D} P_{A}^{\bot}\|\), respectively. By Lemma 2.9, we have \(P_{D}^{\bot}D=0\), \(AP_{A^{H}}^{\bot}=0\) and \(A^{\dagger}P_{A}^{\bot}=0\), which yields
Case II: Since \(\|D\|\leq\|A\|\), we have \(\|D - A\| \geq \|A\| - \|D\|\). Similarly, by Lemma 3.3, using \(\|P_{A}^{\bot} P_{D}\| = \|P_{D} P_{A}^{\bot}\|\), we have
We complete the proof of Lemma 3.4. □
Our main results can be described as the following theorem.
Theorem 3.5
Suppose that \(D = A + E\) and \(\operatorname{Rank}(A)\leq \operatorname{Rank}(D)\). For a unitarily invariant norm \(\|\cdot\|\), the error \(\|D - A\|\) has the following bounds.
Case I: For \(\|D\|\geq\|A\|\), we have
Case II: For \(\|D\|\leq\|A\|\), we have
where the value options for γ are the same as in Table 1.
Proof
Case I: For \(\|D\|\geq\|A\|\), by Theorem 2.11 and Lemma 3.4 (22), we can deduce
this yields
Case II: Similarly, for \(\|D\|\leq\|A\|\), by Theorem 2.11 and Lemma 3.4 (23), we can deduce
this yields
where the value options for γ are the same as in Table 1. This completes the proof of the lower bounds in Theorem 3.5. □
Remark 3.6
From the main theorem, we can see that if \(\|D\|=\|A\|\), then the lower bound on \(\|D - A\|\) is 0. However, in the problem of low-rank matrix approximation, \(\|D\|\) is not necessarily equal to \(\|A\|\), so an approximation error is present. Furthermore, when \(\|D\|\) is close to \(\|A\|\), simulations demonstrate that the error has a very small magnitude (see Section 4).
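Since the displayed bounds (26) and (27) are not reproduced here, the sketch below only checks the elementary ingredient of Lemma 3.4 that drives Remark 3.6, namely \(\|D-A\|\geq\bigl|\|D\|-\|A\|\bigr|\), shown for the Frobenius norm; the matrix sizes and sparsity level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))       # low-rank A
E = np.zeros((30, 30))
E.flat[rng.choice(900, 45, replace=False)] = rng.standard_normal(45)  # sparse E
D = A + E

err = np.linalg.norm(D - A, "fro")
trivial_lb = abs(np.linalg.norm(D, "fro") - np.linalg.norm(A, "fro"))
print(trivial_lb <= err)      # True: the lower bound never exceeds the true error
print(trivial_lb, err)        # the bound vanishes when ||D|| is close to ||A||
```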
In this section, we discussed the error bounds under different conditions for a unitarily invariant norm. Based on a useful decomposition of \(D^{\dagger} - A^{\dagger}\), for \(\|D\|\geq\|A\|\) and \(\|D\|\leq\|A\|\), we have bounds (26) and (27), respectively. The two error bounds are useful in low-rank matrix approximation. The following experiments illustrate our results when the approximation matrix A is low-rank and the perturbation matrix E is sparse.
4 Experiments
4.1 The singular value thresholding algorithm
Our results are obtained by a singular value thresholding (SVT [22]) algorithm. This algorithm is easy to implement and surprisingly effective in terms of both computational cost and storage requirement when the minimum nuclear norm solution is also the lowest-rank solution. The specific algorithm is described as follows.
For the low-rank matrix approximation problem in which the data are contaminated by a perturbation term E, we observe the data matrix \(D = A + E\). To approximate D by a low-rank matrix, we can solve the convex optimization problem
where \(\|\cdot\|_{\ast}\) denotes the nuclear norm of a matrix (i.e., the sum of its singular values).
For solving (28), we introduce the soft-thresholding operator \(\mathcal{D}_{\tau}\) [22], which is defined as
where \((\sigma_{i}-\tau)_{+}=\max\{0,\sigma_{i}-\tau\}\). In general, this operator effectively shrinks some singular values toward zero. The following theorem concerns the shrinkage operator [22–24], which will be used at each iteration of the proposed algorithm.
Theorem 4.1
([22])
For each \(\tau> 0\) and \(W\in\mathbb{R}^{m\times n}\), the singular value shrinkage operator \(\mathcal{D}_{\tau}(\cdot)\) obeys
where \(\mathcal{D}_{\tau}(W):=U\mathcal{D}_{\tau}(S)V^{\ast}\).
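A minimal implementation of the singular value shrinkage operator \(\mathcal{D}_{\tau}\) might look as follows (a NumPy sketch; the helper name `svt_shrink` is ours, not from [22]):

```python
import numpy as np

def svt_shrink(W, tau):
    """Singular value shrinkage: U diag((sigma_i - tau)_+) V^H."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

When τ exceeds all but a few singular values of W, the result is low-rank, which is what makes the operator useful for nuclear norm minimization.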
By introducing a Lagrange multiplier Y to remove the inequality constraint, one has the augmented Lagrangian function of (28)
The iterative scheme of the classical augmented Lagrangian multipliers method is
Based on the optimality conditions, (29) is equivalent to
where \(\partial(\cdot)\) denotes the subgradient operator of a convex function. Then, by Theorem 4.1 above, we have the iterative solution
The SVT approach works as described in Algorithm 1.
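Algorithm 1 itself is not reproduced above, so the following is only a generic SVT-style sketch for the fully observed case, iterating \(A^{k} = \mathcal{D}_{\tau}(Y^{k-1})\), \(Y^{k} = Y^{k-1} + \delta(D - A^{k})\); it is not necessarily the authors' exact Algorithm 1, and the step size δ, threshold τ, and iteration count are illustrative choices.

```python
import numpy as np

def svt_shrink(W, tau):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def svt_approximate(D, tau, delta, n_iter=200):
    """Full-observation SVT-style iteration: A^k = D_tau(Y^{k-1}),
    Y^k = Y^{k-1} + delta * (D - A^k)."""
    Y = np.zeros_like(D)
    A = np.zeros_like(D)
    for _ in range(n_iter):
        A = svt_shrink(Y, tau)
        Y = Y + delta * (D - A)
    return A

# usage sketch: D is the observed matrix; tau and delta are tuning parameters
# A_k = svt_approximate(D, tau=5.0, delta=1.2, n_iter=200)
```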
4.2 Simulations
In this section, we use the SVT algorithm for the low-rank matrix approximation problem. Let \(D = A + E\in\mathbb{R}^{m\times n}\) be the available data. For simplicity, we restrict our examples to square matrices (\(m=n\)). We draw A as a product of independent random matrices and generate the perturbation matrix E to be sparse, with nonzero entries following an i.i.d. Gaussian distribution. Specifically, the rank of the matrix A and the number of nonzero entries of the perturbation matrix E are set to \(5\% m\) and \(5\% m^{2}\), respectively.
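For reference, test data matching this description can be generated as in the sketch below (the function name and random seed are ours; the rank and sparsity levels follow the 5% choices stated above):

```python
import numpy as np

def make_test_problem(m, rank_frac=0.05, sparsity_frac=0.05, seed=0):
    """Square low-rank A plus a sparse i.i.d. Gaussian perturbation E."""
    rng = np.random.default_rng(seed)
    r = max(1, int(rank_frac * m))
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, m))   # rank r
    E = np.zeros((m, m))
    k = int(sparsity_frac * m * m)                                   # 5% of entries
    idx = rng.choice(m * m, size=k, replace=False)
    E.flat[idx] = rng.standard_normal(k)
    return A + E, A, E

D, A, E = make_test_problem(200)
```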
Table 2 reports the results obtained from lower bounds (24), (25) and (12), respectively. Bounds (24) and (25) are our new results, while bound (12) is the previous result. Comparing the bounds with each other in numerical experiments, we find that lower bounds (24) and (25) are smaller than lower bound (12).
4.3 Applications
In this section, we use the SVT algorithm for low-rank image approximation. In Figures 1 and 2, compared with the original image (a), the low-rank image (b) loses some details. We can hardly extract any detailed information from the incomplete image (c). However, the output image (d) \(=A^{k}\), obtained by the SVT algorithm, recovers the details of the low-rank image (b). If we denote image (b) by the low-rank matrix A, then image (c) is the observed data matrix D perturbed by a sparse matrix E, that is,
Using the SVT algorithm for the low-rank image approximation problem, the lower bound comparison results are shown in Table 3. The computed values of \(\|E\|_{F}=\|D - A \|_{F}\) are 8.71e-2 and 7.23e-2 for the images Cameraman and Barbara, respectively, whereas the F-norm version of our lower bound (25) gives 2.59e-5 and 1.09e-5 for Cameraman and Barbara, respectively. That is to say, our error bounds indicate that the SVT solution can still be improved.
5 Conclusion
The low-rank matrix approximation problem arises in a number of applications, including model selection, system identification, complexity theory, and optics. Based on a useful decomposition of \(D^{\dagger} - A^{\dagger}\), this paper reviewed previous work and provided two sharp lower bounds for the low-rank matrix recovery problem under a unitarily invariant norm.
From our main Theorem 3.5, we can see that if \(\|D\|=\|A\|\), then the lower bound on \(\|D - A \|\) is 0. However, in the problem of low-rank matrix approximation, \(\|D\|\) is not necessarily equal to \(\|A\|\), so an approximation error is present. Furthermore, from the main results we can clearly see the influence of the spectral norm \(\|\cdot\|_{2}\) on the low-rank matrix approximation: for example, in Case II, the larger the largest singular value of D, the smaller the lower bound on the error \(\|D-A\|\).
Finally, we applied the SVT algorithm to the low-rank matrix approximation problem. Table 2 shows that our lower bounds (24) and (25) are smaller than lower bound (12), and the simulation results demonstrate that the lower bounds have a very small magnitude. In the applications section, we used the SVT algorithm for the low-rank image approximation problem; the lower bound comparison results are shown in Table 3. From these comparisons, we find that our lower bounds can verify whether the SVT algorithm can still be improved.
References
Fazel, M, Hindi, H, Boyd, S: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference, vol. 6, pp. 4734-4739 (2002)
Linial, N, London, E, Rabinovich, Y: The geometry of graphs and some of its algorithmic applications. Combinatorica 15, 215-245 (1995)
Tomasi, C, Kanade, T: Shape and motion from image streams under orthography: a factorization method. Int. J. Comput. Vis. 9, 137-154 (1992)
Chen, P, Suter, D: Recovering the missing components in a large noisy low-rank matrix: application to SFM. IEEE Trans. Pattern Anal. Mach. Intell. 26(8), 1051-1063 (2004)
Liu, ZS, Li, JC, Li, G, Bai, JC, Liu, XN: A new model for sparse and low-rank matrix decomposition. J. Appl. Anal. Comput. 2, 600-617 (2017)
Wright, J, Ganesh, A, Shankar, R, Yigang, P, Ma, Y: Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization. In: Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS 2009) (2009)
Deerwester, S, Dumains, ST, Landauer, T, Furnas, G, Harshman, R: Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. Technol. 41(6), 391-407 (1990)
Papadimitriou, C, Raghavan, P, Tamaki, H, Vempala, S: Latent semantic indexing, a probabilistic analysis. J. Comput. Syst. Sci. 61(2), 217-235 (2000)
Argyriou, A, Evgeniou, T, Pontil, M: Multi-task feature learning. Adv. Neural Inf. Process. Syst. 19, 41-48 (2007)
Abernethy, J, Bach, F, Evgeniou, T, Vert, JP: Low-rank matrix factorization with attributes. arXiv:cs/0611124 (2006)
Amit, Y, Fink, M, Srebro, N, Ullman, S: Uncovering shared structures in multiclass classification. In: Proceedings of the 24th International Conference on Machine Learning, pp. 17-24. ACM, New York (2007)
Zhang, HY, Lin, ZC, Zhang, C, Gao, J: Robust latent low rank representation for subspace clustering. Neurocomputing 145, 369-373 (2014)
Mesbahi, M, Papavassilopoulos, GP: On the rank minimization problem over a positive semidefinite linear matrix inequality. IEEE Trans. Autom. Control 42, 239-243 (1997)
Golub, GH, Van Loan, CF: Matrix Computations, 4th edn. Johns Hopkins University Press, Baltimore (2013)
Eckart, C, Young, G: The approximation of one matrix by another of lower rank. Psychometrika 1(3), 211-218 (1936)
Hotelling, H: Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 24(6), 417-520 (1932)
Jolliffe, I: Principal Component Analysis. Springer, Berlin (1986)
Stewart, GW, Sun, JG: Matrix Perturbation Theory. Academic Press, New York (1990)
Wedin, PÅ: Perturbation theory for pseudoinverses. BIT Numer. Math. 13(2), 217-232 (1973)
Chen, YM, Chen, XS, Li, W: On perturbation bounds for orthogonal projections. Numer. Algorithms 73, 433-444 (2016)
Sun, JG: Matrix Perturbation Analysis, 2nd edn. Science Press, Beijing (2001)
Cai, JF, Candès, EJ, Shen, ZW: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956-1982 (2010)
Candès, EJ, Li, X, Ma, Y, Wright, J: Robust principal component analysis? J. ACM 58(3), 1-37 (2011)
Tao, M, Yuan, XM: Recovering low-rank and sparse components of matrices from incomplete and noisy observations. SIAM J. Optim. 20(1), 57-81 (2011)
Acknowledgements
This work is partially supported by the National Natural Science Foundation of China under grant No. 11671318, and the Fundamental Research Funds for the Central Universities (Xi’an Jiaotong University, Grant No. xkjc2014008).
Author information
Contributions
All authors worked in coordination. All authors carried out the proof, read and approved the final version of the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Li, J., Liu, Z. & Li, G. Lower bounds for the low-rank matrix approximation. J Inequal Appl 2017, 288 (2017). https://doi.org/10.1186/s13660-017-1564-z