- Research
- Open Access

# Matrix spectral norm Wielandt inequalities with statistical applications

- Jibo Wu^{1, 2} and Wende Yi^{1, 2}

*Journal of Inequalities and Applications* **2014**:110

https://doi.org/10.1186/1029-242X-2014-110

© Wu and Yi; licensee Springer. 2014

**Received:** 12 October 2013 **Accepted:** 25 February 2014 **Published:** 4 March 2014

## Abstract

In this article, we construct a new matrix spectral norm Wielandt inequality. We then apply it to derive an upper bound for a new measure of association. Finally, a new alternative, based on the spectral norm, of the relative gain of the covariance adjusted estimator of a parameter vector is given.

## Keywords

- Kantorovich inequality
- Wielandt inequality
- spectral norm

## 1 Introduction

Suppose that *A* is an $n\times n$ positive definite symmetric matrix and that *x* and *y* are two nonnull real vectors satisfying ${x}^{\prime}y=0$. The classical Wielandt inequality states that

$$\frac{{({x}^{\prime}Ay)}^{2}}{({x}^{\prime}Ax)({y}^{\prime}Ay)}\le {\left(\frac{{\lambda}_{1}-{\lambda}_{n}}{{\lambda}_{1}+{\lambda}_{n}}\right)}^{2},\phantom{\rule{1em}{0ex}}(1)$$

where ${\lambda}_{1}\ge \cdots \ge {\lambda}_{n}>0$ are the ordered eigenvalues of *A*. Inequality (1) is usually called the Wielandt inequality in the literature; see Drury *et al.* [1]. Gustafson [2] gave a geometrical interpretation of this inequality.
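As a quick illustration (our own sketch, not part of the paper), the following Python snippet draws random orthogonal pairs and checks the classical form of inequality (1) numerically; the construction of `A` and the number of trials are arbitrary choices.

```python
# Numerical sanity check of the scalar Wielandt inequality (1), assuming
# its classical form: for A > 0 and x'y = 0,
#   (x'Ay)^2 / ((x'Ax)(y'Ay)) <= ((l1 - ln)/(l1 + ln))^2.
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build a random symmetric positive definite matrix A.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
bound = ((lam[-1] - lam[0]) / (lam[-1] + lam[0])) ** 2

for _ in range(1000):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    y -= (x @ y) / (x @ x) * x       # enforce x'y = 0 by projection
    lhs = (x @ A @ y) ** 2 / ((x @ A @ x) * (y @ A @ y))
    assert lhs <= bound + 1e-12

print("Wielandt bound:", bound)
```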

If a random vector *h* has the covariance matrix *A*, then the maximum of the squared correlation between ${x}^{\prime}h$ and ${y}^{\prime}h$ subject to ${x}^{\prime}y=0$ is given as follows:

$$\underset{{x}^{\prime}y=0}{max}\phantom{\rule{0.2em}{0ex}}{corr}^{2}({x}^{\prime}h,{y}^{\prime}h)={\left(\frac{{\lambda}_{1}-{\lambda}_{n}}{{\lambda}_{1}+{\lambda}_{n}}\right)}^{2}.$$

Many authors have studied the Kantorovich inequality; for more details, see Liu [3, 4], Rao and Rao [5], and Liu and Heyde [6].

Wang and Ip [7] presented a matrix version of the Wielandt inequality: let *A* be an $n\times n$ positive definite matrix, and let *X* and *Y* be $n\times p$ and $n\times q$ matrices satisfying ${X}^{\prime}Y=0$; then

$${X}^{\prime}AY{({Y}^{\prime}AY)}^{-1}{Y}^{\prime}AX\le {\left(\frac{{\lambda}_{1}-{\lambda}_{n}}{{\lambda}_{1}+{\lambda}_{n}}\right)}^{2}{X}^{\prime}AX,\phantom{\rule{1em}{0ex}}(5)$$

where inequality (5) refers to the Löwner partial ordering.

In inequality (5), *A* is assumed to be positive definite; Lu [8] extended the result to nonnegative definite *A*. Drury *et al.* [1] introduced matrix, determinant, and trace versions of the Wielandt inequality. Liu *et al.* [9] improved two matrix trace Wielandt inequalities and proposed statistical applications. Wang and Yang [10] presented a Euclidean norm matrix Wielandt inequality and showed its statistical applications. In this article, we provide a matrix spectral norm Wielandt inequality and give its applications to statistics.
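The following hedged Python sketch checks inequality (5) numerically in the Löwner order, assuming the Wang-Ip form reconstructed above; the dimensions and the constructions of `A`, `X`, `Y` are illustrative choices only.

```python
# Check of the matrix Wielandt inequality (5) in the Loewner order, assuming
#   X'AY (Y'AY)^{-1} Y'AX  <=_L  ((l1 - ln)/(l1 + ln))^2  X'AX.
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 8, 2, 3

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                  # A > 0
lam = np.linalg.eigvalsh(A)
c = ((lam[-1] - lam[0]) / (lam[-1] + lam[0])) ** 2

X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))
Y -= X @ np.linalg.solve(X.T @ X, X.T @ Y)   # enforce X'Y = 0

lhs = X.T @ A @ Y @ np.linalg.solve(Y.T @ A @ Y, Y.T @ A @ X)
rhs = c * (X.T @ A @ X)
# Loewner order: rhs - lhs should be nonnegative definite.
print(np.linalg.eigvalsh(rhs - lhs))         # all >= 0 up to rounding
```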

The rest of the article is organized as follows. In Section 2, we present a matrix spectral norm version of the Wielandt inequality. In Section 3, a new measure of association based on the spectral norm is proposed and its upper bound is obtained using the results of the previous section; we then propose an alternative, based on the spectral norm, of the relative gain of the covariance adjusted estimator of a parameter vector, together with its upper bound. Finally, some concluding remarks are given in Section 4.

## 2 Matrix spectral norm Wielandt inequality

Throughout this section, *A* is an $n\times n$ nonnegative definite matrix of rank *a* with $a\le n$; ${A}^{1/2}$ is the nonnegative definite square root of *A*; *X* is an $n\times p$ matrix of rank *k* with $k\le p\le a$; ${(\cdot )}^{-}$ stands for a generalized inverse of a matrix; ${(\cdot )}^{+}$ represents the Moore-Penrose inverse of a matrix; $rank(\cdot )$ denotes the rank of a matrix; ${(\cdot )}^{\prime}$ denotes the transpose of a matrix; and $\mathrm{\Re}(\cdot )$ stands for the column space of a matrix. Suppose that ${P}_{A}=A{A}^{+}$ stands for the orthogonal projector onto the column space of the matrix *A*, and use the notation

$$H={P}_{X}=X{X}^{+}$$

for the orthogonal projector onto $\mathrm{\Re}(X)$.
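As a small illustration of this notation (our own sketch), the projector $H=X{X}^{+}$ can be computed and checked with NumPy:

```python
# H = X X^+ is the orthogonal projector onto the column space of X:
# it is symmetric, idempotent, and fixes R(X).
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 3
X = rng.standard_normal((n, p))

H = X @ np.linalg.pinv(X)                 # H = X X^+
print(np.allclose(H, H.T))                # symmetric
print(np.allclose(H @ H, H))              # idempotent
print(np.allclose(H @ X, X))              # acts as identity on R(X)
```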

In order to prove the main results it is necessary to introduce some lemmas.

**Lemma 2.1** [4] *Let* $A\ge 0$ *be an* $n\times n$ *matrix of rank* *a*, *and let* *X* *be an* $n\times p$ *matrix of rank* *k* *satisfying* $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$, *with* $k\le p\le a\le n$. *Then*

*where* ${\lambda}_{1}\ge \cdots \ge {\lambda}_{a}>0$ *are the nonzero eigenvalues of* *A*.

**Lemma 2.2** [9] *If* $A\ge 0$, $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$, *and* ${X}^{\prime}Y=0$, *then*

**Lemma 2.3** *Let* $A\ge 0$ *be an* $n\times n$ *matrix of rank* *a*, *and let* *X* *be an* $n\times p$ *matrix of rank* *k* *satisfying* $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$, *with* $k\le p\le a\le n$. *Then* $rank(HAH)=k$.

*Proof* As *A* is a nonnegative definite matrix, we can easily get $rank(HAH)=rank(HA)$. By the rank results of Marsaglia and Styan [11], together with $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$, we have $rank(HA)=rank(H)=rank(X)=k$. So we get $rank(HAH)=k$. □
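A numerical sanity check of Lemma 2.3 (an illustrative sketch; the dimensions `n`, `a`, `p` are arbitrary): construct $A\ge 0$ of rank *a* and *X* with $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$, and verify $rank(HAH)=k$.

```python
# With A >= 0 of rank a, and X of rank k satisfying R(X) in R(A),
# Lemma 2.3 predicts rank(HAH) = k.
import numpy as np

rng = np.random.default_rng(3)
n, a, p = 6, 4, 2

B = rng.standard_normal((n, a))
A = B @ B.T                                # A >= 0 with rank a
X = B @ rng.standard_normal((a, p))        # R(X) in R(A), rank k = p here
H = X @ np.linalg.pinv(X)                  # orthogonal projector onto R(X)

print(np.linalg.matrix_rank(H @ A @ H))    # expected: 2
```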

**Lemma 2.4** *Let* $A\ge 0$ *be an* $n\times n$ *matrix of rank* *a*, *and let* *X* *be an* $n\times p$ *matrix of rank* *k* *satisfying* $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$, *with* $k\le p\le a\le n$. *Then*

*where* ${\parallel G\parallel}_{2}={\lambda}_{1}(G)$ *denotes the spectral norm of a nonnegative definite matrix* *G*, ${\lambda}_{1}(G)$ *stands for the largest eigenvalue of the matrix* *G*, ${\lambda}_{1}\ge \cdots \ge {\lambda}_{a}>0$ *are the nonzero eigenvalues of* *A*, *and* $\tau =\frac{{\lambda}_{1}(HAH)}{{\lambda}_{k}(HAH)}$ *is the condition number of the matrix* $HAH$.

*Proof* By the definition of the spectral norm, we obtain

□

Now we present the first theorem of this article.

**Theorem 2.1** *Suppose* $A\ge 0$ *is an* $n\times n$ *matrix of rank* *a*, *suppose* *X* *is an* $n\times p$ *matrix of rank* *k*, *and suppose* *Y* *is an* $n\times q$ *matrix such that* $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$ *and* ${X}^{\prime}{P}_{A}Y={X}^{\prime}Y=0$, *with* $k\le p\le q\le a\le n$. *Then*

*where* ${\parallel G\parallel}_{2}={\lambda}_{1}(G)$ *denotes the spectral norm of a nonnegative definite matrix* *G*, ${\lambda}_{1}(G)$ *stands for the largest eigenvalue of the matrix* *G*, ${\lambda}_{1}\ge \cdots \ge {\lambda}_{a}>0$ *are the nonzero eigenvalues of* *A*, *and* $\tau =\frac{{\lambda}_{1}(HAH)}{{\lambda}_{k}(HAH)}$ *is the condition number of the matrix* $HAH$.

*Proof* (1) For (19), using Lemma 2.2 and Lemma 2.4, we obtain

Inequality (19) is proved.

(2) For (20), from Lemma 2.2 and Lemma 2.4, we obtain

The proof of inequality (20) is completed. □

Consider now the partitioned matrix

$$A=\left(\begin{array}{cc}{A}_{11}& {A}_{12}\\ {A}_{21}& {A}_{22}\end{array}\right),\phantom{\rule{1em}{0ex}}(24)$$

where $A\ge 0$ is of rank *a*, ${A}_{11}\ge 0$ is of rank *k*, ${A}_{11}$ is $p\times p$, ${A}_{22}$ is $q\times q$, and $p+q=n$.

Now we give another theorem.

**Theorem 2.2** *Suppose* *A* *is an* $n\times n$ *nonnegative definite matrix of rank* *a*, *partitioned as in* (24), *and suppose that*

*then*

*where* ${\lambda}_{1}\ge \cdots \ge {\lambda}_{a}>0$ *are the nonzero eigenvalues of* *A*, *and* $\tau =\frac{{\lambda}_{1}({A}_{11})}{{\lambda}_{k}({A}_{11})}$ *is the condition number of the matrix* ${A}_{11}$.

*Proof* (1) Since $A>0$, let the $n\times p$ matrix *X* be $\left(\begin{array}{c}{I}_{p}\\ 0\end{array}\right)$ and the $n\times q$ matrix *Y* be $\left(\begin{array}{c}0\\ {I}_{q}\end{array}\right)$; then we obtain ${X}^{\prime}AY={A}_{12}$, ${Y}^{\prime}AY={A}_{22}$, ${Y}^{\prime}AX={A}_{21}$, ${X}^{\prime}AX={A}_{11}$, ${X}^{\prime}X={I}_{p}$, $H=X{X}^{\prime}$, $\tau =\frac{{\lambda}_{1}({A}_{11})}{{\lambda}_{k}({A}_{11})}$, $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$, and ${X}^{\prime}{P}_{A}Y=0$. Substituting these into Theorem 2.1, we get the two inequalities involving ${A}_{22}^{-1}$.

(2) As $A\ge 0$ is partitioned as in (24) with ${A}_{11}\ge 0$ and ${A}_{22}\ge 0$, we have $\mathrm{\Re}({A}_{12})\subset \mathrm{\Re}({A}_{11})$ and $\mathrm{\Re}({A}_{21})\subset \mathrm{\Re}({A}_{22})$. On the other hand, using $rank(A)=rank({A}_{11})+rank({A}_{22})$, we get $\mathrm{\Re}(X)\subset \mathrm{\Re}(A)$, which is needed in Theorem 2.1. □
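The column-space inclusion $\mathrm{\Re}({A}_{12})\subset \mathrm{\Re}({A}_{11})$ used in part (2) can be verified numerically; the following sketch (our illustration, with an arbitrary low-rank construction of `A`) tests it via a rank comparison.

```python
# For a partitioned nonnegative definite A, R(A12) lies in R(A11):
# appending A12 to A11 should not increase the rank.
import numpy as np

rng = np.random.default_rng(4)
n, p = 7, 3                                # q = n - p
B = rng.standard_normal((n, 2))
A = B @ B.T                                # A >= 0 with rank 2 < p

A11, A12 = A[:p, :p], A[:p, p:]
print(np.linalg.matrix_rank(np.hstack([A11, A12])) ==
      np.linalg.matrix_rank(A11))          # expected: True
```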

## 3 Applications to statistics

In this section, we give several inequalities involving covariance matrices, an alternative based on the spectral norm of the relative gain of the covariance adjusted estimator and its upper bound by using the inequalities in Section 2.

### 3.1 New measure of association

Suppose that *μ* and *ν* are $p\times 1$ and $q\times 1$ random vectors and that we have the covariance matrix

$$\mathrm{\Sigma}=\left(\begin{array}{cc}{\mathrm{\Sigma}}_{11}& {\mathrm{\Sigma}}_{12}\\ {\mathrm{\Sigma}}_{21}& {\mathrm{\Sigma}}_{22}\end{array}\right),$$

where $n=p+q$.

Liu *et al.* [9] introduced a new measure of association:

$${\rho}_{2}=\frac{tr({\mathrm{\Sigma}}_{12}{\mathrm{\Sigma}}_{22}^{-1}{\mathrm{\Sigma}}_{21})}{tr({\mathrm{\Sigma}}_{11})}.$$

They also gave an upper bound of ${\rho}_{2}$, and they pointed out that ${\rho}_{2}$ is useful in canonical correlation and regression analysis, as discussed by Lu [8], Wang and Ip [7], and Anderson [13]. Wang and Yang [10] proposed the Euclidean norm measure

$${\rho}_{3}=\frac{{\parallel {\mathrm{\Sigma}}_{12}{\mathrm{\Sigma}}_{22}^{-1}{\mathrm{\Sigma}}_{21}\parallel}_{E}}{{\parallel {\mathrm{\Sigma}}_{11}\parallel}_{E}},$$

where ${\parallel \cdot \parallel}_{E}$ stands for the Euclidean norm of the matrix concerned, and they also gave an upper bound of ${\rho}_{3}$. Analogously, we define a new measure of association based on the spectral norm:

$${\rho}_{4}=\frac{{\parallel {\mathrm{\Sigma}}_{12}{\mathrm{\Sigma}}_{22}^{-1}{\mathrm{\Sigma}}_{21}\parallel}_{2}}{{\parallel {\mathrm{\Sigma}}_{11}\parallel}_{2}}.\phantom{\rule{1em}{0ex}}(31)$$

**Theorem 3.1** *The upper bound of* ${\rho}_{4}$ *is given as follows*:

*where* ${\lambda}_{1}\ge \cdots \ge {\lambda}_{n}>0$ *are the ordered eigenvalues of* Σ, *and* $\tau =\frac{{\lambda}_{1}(\mathrm{\Sigma})}{{\lambda}_{p}(\mathrm{\Sigma})}$ *is the condition number of the matrix* Σ.

*Proof* It is easy to prove inequality (32) by using Theorem 2.2 and (31). □
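To see ${\rho}_{4}$ on a concrete example, the following sketch (assuming the spectral norm definition of ${\rho}_{4}$ reconstructed in (31) above) computes ${\rho}_{4}$ and the condition number $\tau$ for a randomly generated covariance matrix.

```python
# Compute rho_4 = ||Sig12 Sig22^{-1} Sig21||_2 / ||Sig11||_2 and
# tau = lambda_1(Sigma)/lambda_p(Sigma) for a random covariance Sigma.
import numpy as np

rng = np.random.default_rng(5)
p, q = 3, 4
n = p + q

M = rng.standard_normal((n, n))
Sigma = M @ M.T + n * np.eye(n)            # a positive definite covariance

Sig11, Sig12 = Sigma[:p, :p], Sigma[:p, p:]
Sig21, Sig22 = Sigma[p:, :p], Sigma[p:, p:]

num = np.linalg.norm(Sig12 @ np.linalg.solve(Sig22, Sig21), 2)
rho4 = num / np.linalg.norm(Sig11, 2)      # spectral norm ratio

lam = np.linalg.eigvalsh(Sigma)            # ascending order
tau = lam[-1] / lam[-p]                    # lambda_1 / lambda_p
print(rho4, tau)
```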

### 3.2 Wishart matrices

Let *S* be an estimator of Σ, partitioned as follows:

$$S=\left(\begin{array}{cc}{S}_{11}& {S}_{12}\\ {S}_{21}& {S}_{22}\end{array}\right),$$

where ${S}_{11}$ is a $p\times p$ matrix, conformably with the partition of Σ. The concept of the relative gain of the covariance adjusted estimator of a parameter vector was discussed by Rao [14] and Wang and Yang [15]: $\frac{|{\mathrm{\Sigma}}_{12}{\mathrm{\Sigma}}_{22}^{-1}{\mathrm{\Sigma}}_{21}|}{|{\mathrm{\Sigma}}_{11}|}$ can be regarded as the relative gain, and it can be estimated by $\frac{|{S}_{12}{S}_{22}^{-1}{S}_{21}|}{|{S}_{11}|}$. Liu *et al.* [9] used $\frac{tr({S}_{12}{S}_{22}^{-1}{S}_{21})}{tr({S}_{11})}$ to estimate $\frac{tr({\mathrm{\Sigma}}_{12}{\mathrm{\Sigma}}_{22}^{-1}{\mathrm{\Sigma}}_{21})}{tr({\mathrm{\Sigma}}_{11})}$, and they also showed that

where ${\lambda}_{1}\ge \cdots \ge {\lambda}_{n}>0$ are the ordered eigenvalues of *S*.

Wang and Yang [10] gave a corresponding Euclidean norm bound, where $rank({S}_{12}{S}_{22}^{-1}{S}_{21})=h$ and $l(h,p)=\frac{{max}_{h}{\sum}_{i=1}^{h}{\lambda}_{i}^{2}}{{max}_{p}{\sum}_{i=1}^{p}{\lambda}_{i}^{2}}$.

In this article, we consider a spectral norm version of the relative gain,

$$\omega =\frac{{\parallel {\mathrm{\Sigma}}_{12}{\mathrm{\Sigma}}_{22}^{-1}{\mathrm{\Sigma}}_{21}\parallel}_{2}}{{\parallel {\mathrm{\Sigma}}_{11}\parallel}_{2}},$$

and *ω* is estimated by

$$\hat{\omega}=\frac{{\parallel {S}_{12}{S}_{22}^{-1}{S}_{21}\parallel}_{2}}{{\parallel {S}_{11}\parallel}_{2}}.$$

Now we give the upper bound of $\hat{\omega}$.

**Theorem 3.2** *The relative gain* $\hat{\omega}$ *is bounded as follows*:

*where* ${\lambda}_{1}\ge \cdots \ge {\lambda}_{n}>0$ *are the ordered eigenvalues of* *S*, *and* $\tau =\frac{{\lambda}_{1}(S)}{{\lambda}_{p}(S)}$ *is the condition number of the matrix* *S*.

*Proof* Using Theorem 2.2, we can easily get the proof of Theorem 3.2. □
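A small simulation sketch (assuming the estimator $\hat{\omega}$ reconstructed above; the dimensions and sample size are arbitrary) computes $\hat{\omega}$ from the sample covariance matrix of normal draws.

```python
# Estimate omega_hat = ||S12 S22^{-1} S21||_2 / ||S11||_2 from a
# Wishart-type sample covariance S of m draws from N(0, Sigma).
import numpy as np

rng = np.random.default_rng(6)
p, q, m = 2, 3, 500                        # block sizes and sample size
n = p + q

M = rng.standard_normal((n, n))
Sigma = M @ M.T + n * np.eye(n)
L = np.linalg.cholesky(Sigma)
data = rng.standard_normal((m, n)) @ L.T   # rows ~ N(0, Sigma)
S = data.T @ data / m                      # estimator of Sigma

S11, S12 = S[:p, :p], S[:p, p:]
S21, S22 = S[p:, :p], S[p:, p:]
omega_hat = (np.linalg.norm(S12 @ np.linalg.solve(S22, S21), 2)
             / np.linalg.norm(S11, 2))
print(omega_hat)
```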

**Remark 3.1** The result in Theorem 3.2 can be extended to a nonnegative definite matrix $S\ge 0$, provided ${S}_{11}\ne 0$.

## 4 Concluding remarks

In this article, we have presented two matrix spectral norm Wielandt inequalities together with several of their applications; these applications show that the inequalities are meaningful, useful, and practical in statistics.

## Declarations

### Acknowledgements

The authors are grateful to the editor and the two anonymous referees for their valuable comments which improved the quality of the paper. This work was supported by the Scientific Research Foundation of Chongqing University of Arts and Sciences (Grant No: R2013SC12), the National Natural Science Foundation of China (Grant Nos: 71271227, 11201505), and Program for Innovation Team Building at Institutions of Higher Education in Chongqing (Grant No: KJTD201321).

## References

1. Drury SW, Liu S, Lu CY, Puntanen S, Styan GPH: Some comments on several matrix inequalities with applications to canonical correlations: historical background and recent developments. *Sankhya, Ser. A* 2002, **64:** 453-507.
2. Gustafson K: The geometrical meaning of the Kantorovich-Wielandt inequalities. *Linear Algebra Appl.* 1999, **296:** 143-151. 10.1016/S0024-3795(99)00106-8
3. Liu SZ: *Contributions to Matrix Calculus and Applications in Econometrics*. Tinbergen Institute Research Series 106. Thesis Publishers, Amsterdam; 1995.
4. Liu SZ: Efficiency comparisons between the OLSE and the BLUE in a singular linear model. *J. Stat. Plan. Inference* 2000, **84:** 191-200. 10.1016/S0378-3758(99)00149-4
5. Rao CR, Rao MB: *Matrix Algebra and Its Applications to Statistics and Econometrics*. World Scientific, Singapore; 1998.
6. Liu SZ, Heyde CC: Some efficiency comparisons for estimators from quasi-likelihood and generalized estimating equations. In *Mathematical Statistics and Applications: Festschrift for Constance van Eeden*. Lecture Notes-Monograph Series 42. Edited by: Moore M, Froda S, Léger C. Inst. Math. Statist., Beachwood; 2003:357-371.
7. Wang SG, Ip WC: A matrix version of the Wielandt inequality and its applications to statistics. *Linear Algebra Appl.* 1999, **296:** 171-181. 10.1016/S0024-3795(99)00117-2
8. Lu CY: A generalized matrix version of the Wielandt inequality with some applications. Research Report, Department of Mathematics, Northeast Normal University, Changchun, China, 8 pp. (1999)
9. Liu SZ, Lu CY, Puntanen S: Matrix trace Wielandt inequalities with statistical applications. *J. Stat. Plan. Inference* 2009, **139:** 2254-2260. 10.1016/j.jspi.2008.10.026
10. Wang LT, Yang H: Matrix Euclidean norm Wielandt inequalities and their applications to statistics. *Stat. Pap.* 2012, **53:** 521-530. 10.1007/s00362-010-0357-y
11. Marsaglia G, Styan GPH: Equalities and inequalities of ranks of matrices. *Linear Multilinear Algebra* 1974, **2:** 269-292. 10.1080/03081087408817070
12. Groß J: The general Gauss-Markov model with possibly singular dispersion matrix. *Stat. Pap.* 2004, **45:** 311-336. 10.1007/BF02777575
13. Anderson TW: *An Introduction to Multivariate Statistical Analysis*. 3rd edition. Wiley, New York; 2003.
14. Rao CR: Least squares theory using an estimated dispersion matrix and its application to measurement of signals. In *Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (Berkeley, CA, 1965-66), vol. I: Statistics*. Edited by: Le Cam LM, Neyman J. University of California Press, Berkeley; 1967:355-372.
15. Wang SG, Yang ZH: Pitman optimality of covariance-improved estimators. *Chin. Sci. Bull.* 1995, **40:** 1150-1154.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.