
Several Matrix Euclidean Norm Inequalities Involving Kantorovich Inequality

Abstract

The Kantorovich inequality is a very useful tool for studying the inefficiency of the ordinary least-squares estimate with one regressor. When there is more than one regressor, statisticians have to extend it. Matrix, determinant, and trace versions of the inequality have been presented in the literature. In this paper, we provide matrix Euclidean norm versions of the Kantorovich inequality.

1. Introduction

Suppose that $A$ is an $n \times n$ positive definite matrix and $x$ is an $n \times 1$ real vector; then the well-known Kantorovich inequality can be expressed as

$$\frac{(x'Ax)(x'A^{-1}x)}{(x'x)^2} \le \frac{(\lambda_1 + \lambda_n)^2}{4\lambda_1\lambda_n}, \tag{1.1}$$

where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$ are the eigenvalues of $A$. It is a very useful tool for studying the inefficiency of the ordinary least-squares estimator with one regressor in the linear model. Watson [1] introduced the ratio of the variance of the best linear unbiased estimator to the variance of the ordinary least-squares estimator; a lower bound for this ratio is provided by the Kantorovich inequality (1.1); see, for example, [2, 3]. When there is more than one regressor, statisticians have to extend the inequality. Marshall and Olkin [4] were the first to generalize the Kantorovich inequality to matrices (see, e.g., [5]):

$$X'A^{-1}X \le \frac{(\lambda_1 + \lambda_n)^2}{4\lambda_1\lambda_n}\, X'X (X'AX)^{-1} X'X, \tag{1.2}$$

where $X$ is an $n \times p$ real matrix of full column rank. If $p = 1$, then (1.2) becomes

$$x'A^{-1}x \le \frac{(\lambda_1 + \lambda_n)^2}{4\lambda_1\lambda_n}\, \frac{(x'x)^2}{x'Ax}. \tag{1.3}$$
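As a quick numerical sanity check of (1.1) (our own sketch, not part of the original paper), one can sample random positive definite matrices and vectors and compare both sides:

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    n = int(rng.integers(2, 8))
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)          # random positive definite matrix
    x = rng.standard_normal(n)
    eig = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
    l_n, l_1 = eig[0], eig[-1]           # smallest and largest eigenvalues
    lhs = (x @ A @ x) * (x @ np.linalg.inv(A) @ x) / (x @ x) ** 2
    rhs = (l_1 + l_n) ** 2 / (4 * l_1 * l_n)
    assert lhs <= rhs + 1e-10
print("Kantorovich inequality (1.1) held in all trials")
```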

Bloomfield and Watson [6] and Knott [7] simultaneously established the inequality

$$|U'AU|\,|U'A^{-1}U| \le \prod_{i=1}^{p} \frac{(\lambda_i + \lambda_{n-i+1})^2}{4\lambda_i\lambda_{n-i+1}}, \tag{1.4}$$

where $U$ is an $n \times p$ real matrix such that $U'U = I_p$ and $p \le n/2$. Yang [8] presented its trace version

(1.5)

where $U$ is an $n \times p$ real matrix such that $U'U = I_p$.
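The following sketch (ours, assuming the determinant form of (1.4) stated above) checks the Bloomfield-Watson-Knott bound on random column-orthonormal $U$:

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 6, 3                                           # p <= n/2
for _ in range(200):
    lam = -np.sort(-rng.uniform(0.5, 10.0, n))        # lambda_1 >= ... >= lambda_n > 0
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
    A = Q @ np.diag(lam) @ Q.T
    U, _ = np.linalg.qr(rng.standard_normal((n, p)))  # U'U = I_p
    lhs = np.linalg.det(U.T @ A @ U) * np.linalg.det(U.T @ np.linalg.inv(A) @ U)
    rhs = np.prod([(lam[i] + lam[n - 1 - i]) ** 2 / (4 * lam[i] * lam[n - 1 - i])
                   for i in range(p)])
    assert lhs <= rhs + 1e-9
print("Bloomfield-Watson-Knott inequality (1.4) held in all trials")
```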

To the best of our knowledge, there has not yet been any matrix Euclidean norm version of the Kantorovich inequality. Our goal is to present such a version.

This paper is arranged as follows. In Section 2, we give some lemmas that will be useful in the sequel. In Section 3, some matrix inequalities are established by means of the Kantorovich inequality or the Pólya-Szegö inequality; these may also be regarded as extensions of the Kantorovich inequality. Conclusions are given in Section 4.

2. Some Lemmas

We start with some lemmas that will be very useful in what follows.

Definition 2.1.

Let $A$ be an $n \times n$ complex square matrix. $A$ is called a normal matrix if $A^*A = AA^*$.

Lemma 2.2.

Let $A$ be an $n \times n$ complex square matrix and let $\lambda_1, \lambda_2, \dots, \lambda_n$ be the eigenvalues of $A$; then

$$\sum_{i=1}^{n} |\lambda_i|^2 \le \|A\|^2, \tag{2.1}$$

where $\|A\|^2 = \operatorname{tr}(A^*A)$ denotes the squared Euclidean norm of $A$. The equality in (2.1) holds if and only if $A$ is a normal matrix.

Proof.

See [5].
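A small numerical illustration of Lemma 2.2 (our own sketch): the eigenvalue sum never exceeds the squared Euclidean (Frobenius) norm, with equality for a normal matrix such as a Hermitian one.

```python
import numpy as np

rng = np.random.default_rng(2)

def schur_gap(A):
    """||A||^2 - sum |lambda_i|^2; nonnegative by (2.1), zero iff A is normal."""
    return np.linalg.norm(A, 'fro') ** 2 - np.sum(np.abs(np.linalg.eigvals(A)) ** 2)

A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
print(schur_gap(A) >= -1e-9)        # True: inequality (2.1)

N = A + A.conj().T                  # Hermitian, hence normal
print(abs(schur_gap(N)) < 1e-9)     # True: equality for a normal matrix
```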

Lemma 2.3 (Pólya-Szegö inequality).

Let $a_i$ and $b_i$, $i = 1, 2, \dots, n$, be positive real numbers such that $0 < m_1 \le a_i \le M_1$ and $0 < m_2 \le b_i \le M_2$. Then

$$\frac{\left(\sum_{i=1}^{n} a_i^2\right)\left(\sum_{i=1}^{n} b_i^2\right)}{\left(\sum_{i=1}^{n} a_i b_i\right)^2} \le \frac{(m_1 m_2 + M_1 M_2)^2}{4 m_1 m_2 M_1 M_2}. \tag{2.2}$$
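A minimal numerical check of (2.2) (ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

m1, M1, m2, M2 = 0.5, 2.0, 1.0, 3.0      # bounds on the a_i and b_i
a = rng.uniform(m1, M1, 100)
b = rng.uniform(m2, M2, 100)

lhs = np.sum(a ** 2) * np.sum(b ** 2) / np.sum(a * b) ** 2
rhs = (m1 * m2 + M1 * M2) ** 2 / (4 * m1 * m2 * M1 * M2)
print(lhs <= rhs)                        # True
```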

Moreover, Greub and Rheinboldt [9] generalized the Pólya-Szegö inequality to matrices.

Lemma 2.4 (Poincaré).

Let $A$ be an $n \times n$ Hermitian matrix with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$, and let $U$ be an $n \times p$ column orthogonal and full rank matrix, that is, $U^*U = I_p$; then one has

$$\lambda_{n-p+i}(A) \le \lambda_i(U^*AU) \le \lambda_i(A), \quad i = 1, 2, \dots, p. \tag{2.3}$$

Let $W = (w_1, w_2, \dots, w_n)$ be a unitary matrix whose column vectors are eigenvectors corresponding to $\lambda_1, \lambda_2, \dots, \lambda_n$, respectively. Write $W_1 = (w_1, \dots, w_p)$ and $W_2 = (w_{n-p+1}, \dots, w_n)$; then $\lambda_i(U^*AU) = \lambda_i(A)$ for all $i$ if and only if $U = W_1 Q$, while $\lambda_i(U^*AU) = \lambda_{n-p+i}(A)$ for all $i$ if and only if $U = W_2 Q$, where $Q$ is a $p \times p$ unitary matrix.

Proof.

See [5].
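The interlacing bounds (2.3) are easy to check numerically; the sketch below (ours) uses a random real symmetric $A$ and a random column-orthogonal $U$:

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 7, 3
H = rng.standard_normal((n, n))
A = H + H.T                                         # real symmetric (Hermitian)
U, _ = np.linalg.qr(rng.standard_normal((n, p)))    # U'U = I_p

lam = -np.sort(-np.linalg.eigvalsh(A))              # lambda_1 >= ... >= lambda_n
mu = -np.sort(-np.linalg.eigvalsh(U.T @ A @ U))     # eigenvalues of U'AU, descending

for i in range(p):
    assert lam[n - p + i] - 1e-10 <= mu[i] <= lam[i] + 1e-10
print("Poincaré separation (2.3) verified")
```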

3. Main Results

Theorem 3.1.

Let $A$ and $B$ be $n \times n$ nonnegative definite Hermitian matrices with $AB = BA$, and let $U$ be an $n \times p$ complex matrix satisfying $U^*U = I_p$. Then one has

(3.1)

where $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_n \ge 0$ and $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_n \ge 0$ are the eigenvalues of the matrices $A$ and $B$, respectively.

Proof.

We easily get that $U^*ABU$ is a Hermitian matrix, since $AB$ is a Hermitian matrix ($A$ and $B$ are Hermitian and commute). Hence $U^*ABU$ is a normal matrix, and then we can derive from Lemma 2.2 that

$$\|U^*ABU\|^2 = \sum_{i=1}^{p} \lambda_i^2(U^*ABU). \tag{3.2}$$

By Lemma 2.4 we get that

(3.3)

Similarly,

(3.4)

Note that

(3.5)

The latter expression in (3.5) may be expressed as

(3.6)

where $(j_1, j_2, \dots, j_p)$ is an arbitrary permutation of $(1, 2, \dots, p)$. Clearly, $0 < \alpha_n \le \alpha_i \le \alpha_1$ and $0 < \beta_n \le \beta_{j_i} \le \beta_1$. Therefore, let $a_i = \alpha_i$ and $b_i = \beta_{j_i}$; then we can derive from the Pólya-Szegö inequality that

(3.7)

Since inequality (3.7) holds for any permutation of $(1, 2, \dots, p)$, we thus find

(3.8)

In the following, the remaining problem is to choose a proper permutation of $(1, 2, \dots, p)$ to minimize

$$\sum_{i=1}^{p} \alpha_i \beta_{j_i}. \tag{3.9}$$

This may be solved by a nontrivial but elementary combinatorial argument; thus we find

(3.10)

Then

(3.11)
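Under our reading of (3.9), the combinatorial step minimizes $\sum_{i=1}^{p} \alpha_i \beta_{j_i}$ over permutations, and the rearrangement inequality suggests that pairing the $\alpha$'s with the $\beta$'s in opposite order is optimal. A brute-force check of this reading (our own sketch):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(5)

p = 5
alpha = -np.sort(-rng.uniform(0.1, 5.0, p))   # alpha_1 >= ... >= alpha_p > 0
beta = -np.sort(-rng.uniform(0.1, 5.0, p))    # beta_1 >= ... >= beta_p > 0

best = min(permutations(range(p)),
           key=lambda j: sum(alpha[i] * beta[j[i]] for i in range(p)))
print(best == (4, 3, 2, 1, 0))  # True: opposite ordering minimizes the sum
```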

Remark 3.2.

When $A$ is a positive definite Hermitian matrix and $B = A^{-1}$, inequality (3.1) plays an important role in the linear model $y = X\beta + e$ with $E(e) = 0$ and $\operatorname{Cov}(e) = \sigma^2 A$. The covariance matrices of the ordinary least-squares estimator and the best linear unbiased estimator in this model are given by

$$\operatorname{Cov}(\hat{\beta}_{\mathrm{LS}}) = \sigma^2 (X'X)^{-1} X'AX (X'X)^{-1}, \qquad \operatorname{Cov}(\hat{\beta}_{\mathrm{BLUE}}) = \sigma^2 (X'A^{-1}X)^{-1}. \tag{3.12}$$

Applying inequality (3.1), we can establish a lower bound on the inefficiency of the least-squares estimator:

(3.13)

See also [10].
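To illustrate Remark 3.2 (a sketch under our reading of the model, with $A$ the error covariance), one can compute both covariance matrices in (3.12) and the Euclidean norm ratio that (3.13) bounds:

```python
import numpy as np

rng = np.random.default_rng(6)

n, p, sigma2 = 20, 3, 1.0
X = rng.standard_normal((n, p))
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                  # error covariance (positive definite)

XtX_inv = np.linalg.inv(X.T @ X)
cov_ls = sigma2 * XtX_inv @ X.T @ A @ X @ XtX_inv              # OLS covariance
cov_blue = sigma2 * np.linalg.inv(X.T @ np.linalg.inv(A) @ X)  # BLUE covariance

# Euclidean norm ratio that, on our reading, (3.13) bounds from below
print(np.linalg.norm(cov_blue, 'fro') / np.linalg.norm(cov_ls, 'fro'))
```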

In Theorem 3.1, we need the assumption that $U^*U = I_p$. However, we should also point out that the matrix $U$ may not meet such an assumption in practice. Therefore, we relax this assumption in the following, but the results are weaker.

Theorem 3.3.

Let $A$ and $B$ be $n \times n$ nonnegative definite Hermitian matrices with $AB = BA$, and let $U$ be an $n \times p$ complex matrix; then one has

(3.14)

where $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_n \ge 0$ and $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_n \ge 0$ are the eigenvalues of the matrices $A$ and $B$, respectively.

Proof.

If $U = 0$, the result obviously holds. Next set $U \ne 0$. Let the spectral decomposition of $U^*U$ be $U^*U = Q\Lambda Q^*$, where $Q$ is an orthogonal matrix and $\Lambda = \operatorname{diag}(\mu_1, \mu_2, \dots, \mu_p)$. Let $Z = UQ\Lambda^{-1/2}$; then

$$Z^*Z = I_p. \tag{3.15}$$

We can derive from (3.15) that

(3.16)

Similarly,

(3.17)

We thus have

(3.18)

According to the proof of Theorem 3.1 we can get that

(3.19)

The proof is completed.
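The normalization step in the proof admits a simple numerical illustration (our reconstruction; the paper's exact construction may differ): for a full-column-rank $U$, the matrix $Z = UQ\Lambda^{-1/2}$ built from the spectral decomposition of $U^*U$ has orthonormal columns.

```python
import numpy as np

rng = np.random.default_rng(7)

n, p = 6, 3
U = rng.standard_normal((n, p))          # full column rank almost surely
mu, Q = np.linalg.eigh(U.T @ U)          # spectral decomposition U*U = Q diag(mu) Q*
Z = U @ Q @ np.diag(mu ** -0.5)          # Z = U Q Lambda^{-1/2}
print(np.allclose(Z.T @ Z, np.eye(p)))   # True: Z*Z = I_p, as in (3.15)
```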

Corollary 3.4.

Let $A$ be an $n \times n$ positive definite Hermitian matrix with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n > 0$, and let $U$ be an arbitrary $n \times p$ complex matrix; then one has

(3.20)

Proof.

The proof is straightforward, and therefore we omit it.

Theorem 3.5.

Let $A$ and $B$ be $n \times n$ positive definite Hermitian matrices, let $\lambda_1 \ge \cdots \ge \lambda_n > 0$ and $\mu_1 \ge \cdots \ge \mu_n > 0$ be the eigenvalues of $A$ and $B$, respectively, and let $U$ be an arbitrary $n \times p$ complex matrix. Then,

(3.21)

Proof.

If $U = 0$, the result obviously holds. Next set $U \ne 0$. Since $A$ is positive definite Hermitian, there exists a unitary matrix $P$ such that $A = P\Lambda P^*$, where $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$.

By Corollary 3.4, we can get that

(3.22)

The right-hand side of (3.22) may be denoted by $f(t)$; then

(3.23)

It is easy to prove that $f(t)$ is a monotone increasing function of $t$ on the interval in question. From the definitions above, we thus have

(3.24)

This completes the proof.

Corollary 3.6.

If $A$ and $B$ are positive semidefinite Hermitian matrices with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_n \ge 0$ and $\mu_1 \ge \cdots \ge \mu_n \ge 0$, respectively, inequality (3.21) becomes

(3.25)

Theorem 3.7.

Let $A$ and $B$ be $n \times n$ positive semidefinite Hermitian matrices, let $\lambda_1 \ge \cdots \ge \lambda_n \ge 0$ and $\mu_1 \ge \cdots \ge \mu_n \ge 0$ be the eigenvalues of $A$ and $B$, respectively, and let $U$ and $V$ be complex matrices of suitable dimensions. Then

(3.26)

Proof.

Note that

(3.27)

Similarly,

(3.28)

Abbreviate the matrices in (3.27) and (3.28), and let their eigenvalues be ordered decreasingly. Applying the Hölder inequality, we can derive that

(3.29)

Thus

(3.30)

This completes the proof.
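The key analytic tool in the proof above is the Hölder inequality; a minimal scalar check (ours, with conjugate exponents $p = 3$ and $q = 3/2$):

```python
import numpy as np

rng = np.random.default_rng(8)

a = rng.uniform(0.0, 2.0, 50)
b = rng.uniform(0.0, 2.0, 50)
p, q = 3.0, 1.5                                  # conjugate exponents: 1/p + 1/q = 1

lhs = np.sum(a * b)
rhs = np.sum(a ** p) ** (1 / p) * np.sum(b ** q) ** (1 / q)
print(lhs <= rhs + 1e-12)                        # True: Hölder's inequality
```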

Corollary 3.8.

When , inequality (3.26) becomes

(3.31)

4. Conclusions

The study of the inefficiency of the ordinary least-squares estimator in the linear model requires a lower bound for the efficiency, defined as the ratio of the variance or covariance of the best linear unbiased estimator to that of the ordinary least-squares estimator. Such a bound can be given by the Kantorovich inequality or its extensions. Matrix, determinant, and trace versions have been presented in the literature. In this paper, we have presented a matrix Euclidean norm version.

References

  1. Watson GS: Serial correlation in regression analysis, Ph.D. thesis. Department of Experimental Statistics, North Carolina State College, Raleigh, NC, USA; 1951.


  2. Watson GS, Alpargu G, Styan GPH: Some comments on six inequalities associated with the inefficiency of ordinary least squares with one regressor. Linear Algebra and Its Applications 1997, 264: 13–54.


  3. Drury SW, Liu S, Lu C-Y, Puntanen S, Styan GPH: Some comments on several matrix inequalities with applications to canonical correlations: historical background and recent developments. Sankhyā 2002, 64(2): 453–507.


  4. Marshall AW, Olkin I: Multivariate distributions generated from mixtures of convolution and product families. In Topics in Statistical Dependence (Somerset, PA, 1987), Ims Lecture Notes-Monograph Series. Volume 16. Institute of Mathematical Statistics, Hayward, Calif, USA; 1990:371–393.


  5. Wang SG, Jia ZZ: Matrix Inequalities. Science Press, Beijing, China; 1994.


  6. Bloomfield P, Watson GS: The inefficiency of least squares. Biometrika 1975, 62: 121–128. doi:10.1093/biomet/62.1.121.


  7. Knott M: On the minimum efficiency of least squares. Biometrika 1975, 62: 129–132. doi:10.1093/biomet/62.1.129.


  8. Yang H: Extensions of the Kantorovich inequality and the error ratio efficiency of the mean square. Mathematica Applicata 1988, 1(4): 85–90.


  9. Greub W, Rheinboldt W: On a generalization of an inequality of L. V. Kantorovich. Proceedings of the American Mathematical Society 1959, 10: 407–415. doi:10.1090/S0002-9939-1959-0105028-3.


  10. Yang H, Wang L: An alternative form of the Watson efficiency. Journal of Statistical Planning and Inference 2009, 139(8): 2767–2774. doi:10.1016/j.jspi.2009.01.002.



Acknowledgment

The authors thank the associate editors and reviewers for their insightful comments and kind suggestions, which led to an improved presentation.


Correspondence to Litong Wang.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Wang, L., Yang, H. Several Matrix Euclidean Norm Inequalities Involving Kantorovich Inequality. J Inequal Appl 2009, 291984 (2009). https://doi.org/10.1155/2009/291984
