# Several Matrix Euclidean Norm Inequalities Involving Kantorovich Inequality

Litong Wang^{1} and Hu Yang^{1}

*Journal of Inequalities and Applications* **2009**:291984

https://doi.org/10.1155/2009/291984

© L. Wang and H. Yang 2009

**Received: **21 April 2009

**Accepted: **4 August 2009

**Published: **26 August 2009

## Abstract

The Kantorovich inequality is a very useful tool for studying the inefficiency of the ordinary least-squares estimator with one regressor. When there is more than one regressor, statisticians have to extend it. Matrix, determinant, and trace versions of the inequality have been presented in the literature. In this paper, we provide matrix Euclidean norm versions of the Kantorovich inequality.

## 1. Introduction

Let $A$ be an $n\times n$ real symmetric positive definite matrix with eigenvalues $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n>0$. The classical Kantorovich inequality states that

$$1\le\frac{(x'Ax)(x'A^{-1}x)}{(x'x)^2}\le\frac{(\lambda_1+\lambda_n)^2}{4\lambda_1\lambda_n}$$

for every nonzero $x\in\mathbb{R}^n$. Its matrix versions replace the vector $x$ by $X$, where $X$ is an $n\times p$ real matrix such that $X'X=I_p$.
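As a quick numerical sanity check of the scalar Kantorovich inequality (a sketch, not part of the original argument; NumPy and the helper names `kantorovich_ratio` and `kantorovich_bound` are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def kantorovich_ratio(A, x):
    """Left-hand side (x'Ax)(x'A^{-1}x)/(x'x)^2 for SPD A and nonzero x."""
    return (x @ A @ x) * (x @ np.linalg.inv(A) @ x) / (x @ x) ** 2

def kantorovich_bound(A):
    """Upper bound (l1 + ln)^2 / (4 l1 ln) from the extreme eigenvalues of A."""
    lam = np.linalg.eigvalsh(A)          # ascending order
    l1, ln = lam[-1], lam[0]
    return (l1 + ln) ** 2 / (4 * l1 * ln)

# Random symmetric positive definite matrix and random direction
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)
x = rng.standard_normal(5)

assert 1.0 - 1e-9 <= kantorovich_ratio(A, x) <= kantorovich_bound(A) + 1e-9
```

The lower bound $1$ follows from the Cauchy-Schwarz inequality, the upper bound from Kantorovich.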

To the best of our knowledge, there has not yet been any matrix Euclidean norm version of the Kantorovich inequality. Our goal is to present such a version.

This paper is arranged as follows. In Section 2, we give some lemmas that are useful in the sequel. In Section 3, some matrix inequalities are established via the Kantorovich inequality or the Pólya-Szegö inequality; these may also be regarded as extensions of the Kantorovich inequality. Conclusions are given in Section 4.

## 2. Some Lemmas

We start with some lemmas that will be very useful in what follows.

Definition 2.1.

Let $A$ be an $n\times n$ complex square matrix. $A$ is called a normal matrix if $A^{*}A=AA^{*}$, where $A^{*}$ denotes the conjugate transpose of $A$.

Lemma 2.2.

Let $A$ be an $n\times n$ complex matrix with eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$. Then

$$\sum_{i=1}^{n}|\lambda_i|^2\le\|A\|^2, \tag{2.1}$$

where $\|A\|^2=\operatorname{tr}(A^{*}A)$ denotes the squared Euclidean norm of $A$. The equality in (2.1) holds if and only if $A$ is a normal matrix.

Proof.

See [5].
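The lemma can be checked numerically: for any complex square matrix, the sum of squared eigenvalue moduli is at most the squared Euclidean (Frobenius) norm, with equality for normal matrices (this is Schur's inequality). A small NumPy sketch, with illustrative helper names:

```python
import numpy as np

rng = np.random.default_rng(1)

def eig_sum_sq(A):
    """Sum of |lambda_i|^2 over the eigenvalues of A."""
    return np.sum(np.abs(np.linalg.eigvals(A)) ** 2)

def frob_sq(A):
    """Squared Euclidean (Frobenius) norm, tr(A* A)."""
    return np.sum(np.abs(A) ** 2)

A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
assert eig_sum_sq(A) <= frob_sq(A) + 1e-7     # inequality (2.1)

N = A + A.conj().T                            # Hermitian, hence normal
assert np.isclose(eig_sum_sq(N), frob_sq(N), rtol=1e-7)  # equality case
```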

Lemma 2.3 (Pólya-Szegö inequality).

Let $a_i$ and $b_i$, $i=1,2,\dots,n$, be positive real numbers satisfying $0<m_1\le a_i\le M_1$ and $0<m_2\le b_i\le M_2$. Then

$$\frac{\left(\sum_{i=1}^{n}a_i^2\right)\left(\sum_{i=1}^{n}b_i^2\right)}{\left(\sum_{i=1}^{n}a_ib_i\right)^2}\le\frac{(M_1M_2+m_1m_2)^2}{4m_1m_2M_1M_2}.$$

Moreover, Greub and Rheinboldt [9] generalized the Pólya-Szegö inequality to matrices.
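The scalar Pólya-Szegö inequality can likewise be verified numerically. A NumPy sketch (the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def polya_szego_lhs(a, b):
    """(sum a_i^2)(sum b_i^2) / (sum a_i b_i)^2."""
    return (np.sum(a ** 2) * np.sum(b ** 2)) / np.sum(a * b) ** 2

def polya_szego_bound(a, b):
    """(M1 M2 + m1 m2)^2 / (4 m1 m2 M1 M2) from the extreme terms."""
    m1, M1 = a.min(), a.max()
    m2, M2 = b.min(), b.max()
    return (M1 * M2 + m1 * m2) ** 2 / (4 * m1 * m2 * M1 * M2)

a = rng.uniform(1.0, 3.0, size=50)   # bounded positive sequences
b = rng.uniform(0.5, 2.0, size=50)

assert 1.0 <= polya_szego_lhs(a, b) <= polya_szego_bound(a, b)
```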

Lemma 2.4 (Poincare).

Let $A$ be an $n\times n$ Hermitian matrix with eigenvalues $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n$, and let $V$ be an $n\times k$ matrix satisfying $V^{*}V=I_k$. Then

$$\lambda_{n-k+i}\le\lambda_i(V^{*}AV)\le\lambda_i,\quad i=1,2,\dots,k.$$

Let $U=(u_1,u_2,\dots,u_n)$ be a unitary matrix whose column vectors are eigenvectors corresponding to $\lambda_1,\lambda_2,\dots,\lambda_n$, respectively. Assume $U_1=(u_1,\dots,u_k)$ and $U_2=(u_{n-k+1},\dots,u_n)$; then $\lambda_i(V^{*}AV)=\lambda_i$ for all $i$ if and only if $V=U_1Q$, while $\lambda_i(V^{*}AV)=\lambda_{n-k+i}$ for all $i$ if and only if $V=U_2Q$, where $Q$ is a $k\times k$ unitary matrix.

Proof.

See [5].
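Lemma 2.4 can be illustrated via the classical Poincaré separation theorem: for a Hermitian matrix $A$ and an $n\times k$ matrix $V$ with orthonormal columns, each eigenvalue of $V^{*}AV$ is sandwiched between eigenvalues of $A$. A NumPy sketch under that reading:

```python
import numpy as np

rng = np.random.default_rng(3)

n, k = 6, 3
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                                 # real symmetric (Hermitian)
lam = np.sort(np.linalg.eigvalsh(A))[::-1]        # eigenvalues, descending

# Random n x k matrix with orthonormal columns, so that V* V = I_k
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
mu = np.sort(np.linalg.eigvalsh(V.T @ A @ V))[::-1]

# Poincare separation: lam[n-k+i] <= mu[i] <= lam[i] (0-based indices)
for i in range(k):
    assert lam[n - k + i] - 1e-9 <= mu[i] <= lam[i] + 1e-9
```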

## 3. Main Results

Theorem 3.1.

where and are eigenvalues of matrices and , respectively.

Proof.

Remark 3.2.

See also [10].

In Theorem 3.1, we need the assumption that . However, we should also point out that the matrix may not satisfy such an assumption in practice. Therefore, we relax this assumption in the following results, although the bounds obtained are weaker.

Theorem 3.3.

where and are eigenvalues of matrices and , respectively.

Proof.

The proof is completed.

Corollary 3.4.

Proof.

The proof is straightforward, and we therefore omit it.

Theorem 3.5.

Proof.

If , the result obviously holds. Next, set . Since , there exists a unitary matrix such that and , where .

This completes the proof.

Corollary 3.6.

Theorem 3.7.

Proof.

This completes the proof.

Corollary 3.8.

## 4. Conclusions

The study of the inefficiency of the ordinary least-squares estimator in the linear model requires a lower bound for the efficiency, defined as the ratio of the variance or covariance of the best linear unbiased estimator to the variance or covariance of the ordinary least-squares estimator. Such a bound can be given by the Kantorovich inequality or its extensions. Matrix, determinant, and trace versions of the inequality have been presented in the literature. In this paper, we have presented its matrix Euclidean norm version.
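To make the statistical application concrete: with one regressor $x$ and error covariance matrix $V$, the Watson efficiency of ordinary least squares is $(x'x)^2/\big((x'Vx)(x'V^{-1}x)\big)$, and the Kantorovich inequality bounds it below by $4\lambda_1\lambda_n/(\lambda_1+\lambda_n)^2$. A NumPy sketch (the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def watson_efficiency(V, x):
    """Efficiency of OLS relative to the BLUE, one regressor x, covariance V."""
    return (x @ x) ** 2 / ((x @ V @ x) * (x @ np.linalg.inv(V) @ x))

B = rng.standard_normal((8, 8))
V = B @ B.T + np.eye(8)                      # positive definite covariance
lam = np.linalg.eigvalsh(V)                  # ascending order
lower = 4 * lam[0] * lam[-1] / (lam[0] + lam[-1]) ** 2

x = rng.standard_normal(8)
assert lower - 1e-9 <= watson_efficiency(V, x) <= 1.0 + 1e-9
```

The efficiency never exceeds 1 (Cauchy-Schwarz) and never falls below the Kantorovich bound, whatever the regressor.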

## Declarations

### Acknowledgment

The authors thank the associate editors and reviewers very much for their insightful comments and kind suggestions, which led to an improved presentation.

## References

1. Watson GS: *Serial Correlation in Regression Analysis*, Ph.D. thesis. Department of Experimental Statistics, North Carolina State College, Raleigh, NC, USA; 1951.
2. Watson GS, Alpargu G, Styan GPH: Some comments on six inequalities associated with the inefficiency of ordinary least squares with one regressor. *Linear Algebra and Its Applications* 1997, **264:** 13–54.
3. Drury SW, Liu S, Lu C-Y, Puntanen S, Styan GPH: Some comments on several matrix inequalities with applications to canonical correlations: historical background and recent developments. *Sankhyā* 2002, **64**(2): 453–507.
4. Marshall AW, Olkin I: Multivariate distributions generated from mixtures of convolution and product families. In *Topics in Statistical Dependence (Somerset, PA, 1987)*, IMS Lecture Notes-Monograph Series, Volume 16. Institute of Mathematical Statistics, Hayward, Calif, USA; 1990: 371–393.
5. Wang SG, Jia ZZ: *Matrix Inequalities*. Science Press, Beijing, China; 1994.
6. Bloomfield P, Watson GS: The inefficiency of least squares. *Biometrika* 1975, **62:** 121–128. doi:10.1093/biomet/62.1.121
7. Knott M: On the minimum efficiency of least squares. *Biometrika* 1975, **62:** 129–132. doi:10.1093/biomet/62.1.129
8. Yang H: Extensions of the Kantorovich inequality and the error ratio efficiency of the mean square. *Mathematica Applicata* 1988, **1**(4): 85–90.
9. Greub W, Rheinboldt W: On a generalization of an inequality of L. V. Kantorovich. *Proceedings of the American Mathematical Society* 1959, **10:** 407–415. doi:10.1090/S0002-9939-1959-0105028-3
10. Yang H, Wang L: An alternative form of the Watson efficiency. *Journal of Statistical Planning and Inference* 2009, **139**(8): 2767–2774. doi:10.1016/j.jspi.2009.01.002

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.