- Research Article · Open Access
Several Matrix Euclidean Norm Inequalities Involving Kantorovich Inequality
Journal of Inequalities and Applications volume 2009, Article number: 291984 (2009)
Abstract
The Kantorovich inequality is a very useful tool for studying the inefficiency of the ordinary least-squares estimator with one regressor. When there is more than one regressor, statisticians have to extend the inequality. Matrix, determinant, and trace versions of it have been presented in the literature. In this paper, we provide matrix Euclidean norm versions of the Kantorovich inequality.
1. Introduction
Suppose that $A$ is an $n \times n$ positive definite matrix and $x$ is an $n \times 1$ real vector; then the well-known Kantorovich inequality can be expressed as

$$\frac{(x^{T}x)^{2}}{(x^{T}Ax)(x^{T}A^{-1}x)} \ge \frac{4\lambda_{1}\lambda_{n}}{(\lambda_{1} + \lambda_{n})^{2}}, \tag{1.1}$$

where $\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{n} > 0$ are the eigenvalues of $A$. It is a very useful tool for studying the inefficiency of the ordinary least-squares estimator with one regressor in the linear model. Watson [1] introduced the ratio of the variance of the best linear unbiased estimator to the variance of the ordinary least-squares estimator; a lower bound for this ratio is provided by the Kantorovich inequality (1.1); see, for example, [2, 3]. When there is more than one regressor, statisticians have to extend the inequality. Marshall and Olkin [4] were the first to generalize the Kantorovich inequality to matrices (see, e.g., [5]):
$$X^{T}A^{-1}X \le \frac{(\lambda_{1} + \lambda_{n})^{2}}{4\lambda_{1}\lambda_{n}}\, X^{T}X (X^{T}AX)^{-1} X^{T}X, \tag{1.2}$$

where $X$ is an $n \times p$ real matrix of full column rank and the inequality is in the Löwner ordering. If $X^{T}X = I_{p}$, then (1.2) becomes

$$X^{T}A^{-1}X \le \frac{(\lambda_{1} + \lambda_{n})^{2}}{4\lambda_{1}\lambda_{n}}\, (X^{T}AX)^{-1}. \tag{1.3}$$
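As a quick numerical sanity check (ours, not part of the original paper; it assumes numpy and the standard forms of the scalar and matrix Kantorovich inequalities, namely $(x^{T}x)^{2}/\big((x^{T}Ax)(x^{T}A^{-1}x)\big) \ge 4\lambda_{1}\lambda_{n}/(\lambda_{1}+\lambda_{n})^{2}$ and, for $X^{T}X=I_{p}$, $X^{T}A^{-1}X \preceq \frac{(\lambda_{1}+\lambda_{n})^{2}}{4\lambda_{1}\lambda_{n}}(X^{T}AX)^{-1}$), both inequalities can be tested on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2

# Random symmetric positive definite A.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
A_inv = np.linalg.inv(A)

lam = np.linalg.eigvalsh(A)            # eigenvalues, ascending
l_min, l_max = lam[0], lam[-1]
kappa = (l_min + l_max) ** 2 / (4 * l_min * l_max)   # Kantorovich constant

# Scalar version: (x'x)^2 / ((x'Ax)(x'A^{-1}x)) >= 1/kappa.
x = rng.standard_normal(n)
ratio = (x @ x) ** 2 / ((x @ A @ x) * (x @ A_inv @ x))
scalar_ok = ratio >= 1 / kappa - 1e-12

# Matrix version: kappa * (X'AX)^{-1} - X'A^{-1}X is nonnegative
# definite whenever X'X = I_p (X built via a QR factorization).
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
D = kappa * np.linalg.inv(X.T @ A @ X) - X.T @ A_inv @ X
matrix_ok = np.linalg.eigvalsh(D).min() >= -1e-9

print(scalar_ok and matrix_ok)
```

Any random positive definite $A$, vector $x$, and column-orthonormal $X$ should satisfy both bounds up to floating-point tolerance.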
Bloomfield and Watson [6] and Knott [7] simultaneously established the inequality

$$\det(X^{T}AX)\, \det(X^{T}A^{-1}X) \le \prod_{i=1}^{p} \frac{(\lambda_{i} + \lambda_{n-i+1})^{2}}{4\lambda_{i}\lambda_{n-i+1}}, \tag{1.4}$$

where $X$ is an $n \times p$ real matrix such that $X^{T}X = I_{p}$ and $2p \le n$.
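The determinant version can also be checked numerically (our sketch, assuming numpy and the Bloomfield–Watson–Knott form $\det(X^{T}AX)\det(X^{T}A^{-1}X) \le \prod_{i=1}^{p}(\lambda_{i}+\lambda_{n-i+1})^{2}/(4\lambda_{i}\lambda_{n-i+1})$ with $X^{T}X = I_{p}$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3                                 # 2p <= n

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # symmetric positive definite
lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # lam[0] >= ... >= lam[n-1]

X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # X'X = I_p

# Left-hand side: product of the two Gram determinants.
lhs = np.linalg.det(X.T @ A @ X) * np.linalg.det(X.T @ np.linalg.inv(A) @ X)

# Right-hand side: Kantorovich-type constants pairing lambda_i
# with lambda_{n-i+1} (0-based: lam[i] with lam[n-1-i]).
rhs = np.prod([(lam[i] + lam[n - 1 - i]) ** 2 / (4 * lam[i] * lam[n - 1 - i])
               for i in range(p)])

print(lhs <= rhs + 1e-9)
```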
. Yang [8] presented its trace version
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ5_HTML.gif)
where $X$ is an $n \times p$ real matrix such that $X^{T}X = I_{p}$.
To the best of our knowledge, there has not yet been any matrix Euclidean norm version of the Kantorovich inequality. Our goal is to present such a version.
This paper is arranged as follows. In Section 2, we give some lemmas that are useful in the sequel. In Section 3, some matrix inequalities are established by means of the Kantorovich inequality or the Pólya–Szegő inequality; these can be regarded as extensions of the Kantorovich inequality as well. Conclusions are given in Section 4.
2. Some Lemmas
We start with some lemmas that will be very useful in what follows.
Definition 2.1.
Let $A$ be an $n \times n$ complex square matrix. $A$ is called a normal matrix if $A^{*}A = AA^{*}$.
Lemma 2.2.
Let $A$ be an $n \times n$ complex square matrix and let $\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}$ be the eigenvalues of $A$; then

$$\sum_{i=1}^{n} |\lambda_{i}|^{2} \le \|A\|^{2}, \tag{2.1}$$

where $\|A\|^{2} = \operatorname{tr}(A^{*}A)$ denotes the squared Euclidean norm of $A$. The equality in (2.1) holds if and only if $A$ is a normal matrix.
Proof.
See [5].
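Lemma 2.2 (Schur's inequality) is easy to probe numerically; the sketch below is ours and assumes numpy. A random complex matrix gives strict inequality almost surely, while a Hermitian matrix, being normal, attains equality:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Schur's inequality: sum of |lambda_i|^2 is at most the squared
# Euclidean (Frobenius) norm tr(A*A).
eig_sum = np.sum(np.abs(np.linalg.eigvals(A)) ** 2)
norm_sq = np.linalg.norm(A, "fro") ** 2
strict_ok = eig_sum <= norm_sq + 1e-9

# Equality holds iff A is normal; a Hermitian matrix is normal.
H = A + A.conj().T
equal_ok = np.isclose(np.sum(np.abs(np.linalg.eigvals(H)) ** 2),
                      np.linalg.norm(H, "fro") ** 2)

print(strict_ok and equal_ok)
```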
Lemma 2.3 (Pólya–Szegő inequality).
Let $0 < m_{1} \le a_{i} \le M_{1}$ and $0 < m_{2} \le b_{i} \le M_{2}$ for $i = 1, 2, \ldots, n$. Then

$$\frac{\sum_{i=1}^{n} a_{i}^{2} \sum_{i=1}^{n} b_{i}^{2}}{\left(\sum_{i=1}^{n} a_{i} b_{i}\right)^{2}} \le \frac{(m_{1}m_{2} + M_{1}M_{2})^{2}}{4 m_{1} m_{2} M_{1} M_{2}}. \tag{2.2}$$
Moreover, Greub and Rheinboldt [9] generalized the Pólya–Szegő inequality to matrices.
Lemma 2.4 (Poincaré).
Let $A$ be an $n \times n$ Hermitian matrix with eigenvalues $\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{n}$, and let $U$ be an $n \times p$ column orthonormal matrix of full column rank, that is, $U^{*}U = I_{p}$. Then one has

$$\lambda_{n-p+i} \le \lambda_{i}(U^{*}AU) \le \lambda_{i}, \quad i = 1, 2, \ldots, p. \tag{2.3}$$
Let $Q = (q_{1}, q_{2}, \ldots, q_{n})$ be a unitary matrix whose column vectors are eigenvectors corresponding to $\lambda_{1}, \ldots, \lambda_{n}$, respectively. Assume $Q_{1} = (q_{1}, \ldots, q_{p})$ and $Q_{2} = (q_{n-p+1}, \ldots, q_{n})$; then $\lambda_{i}(U^{*}AU) = \lambda_{i}$ for all $i$ if and only if $U = Q_{1}W$, while $\lambda_{i}(U^{*}AU) = \lambda_{n-p+i}$ for all $i$ if and only if $U = Q_{2}W$, where $W$ is a $p \times p$ unitary matrix.
Proof.
See [5].
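The Poincaré separation bounds are easy to illustrate numerically (our sketch, assuming numpy; $U$ is obtained from a QR factorization so that $U^{*}U = I_{p}$):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 7, 3
S = rng.standard_normal((n, n))
A = S + S.T                                  # Hermitian (real symmetric)

# Column-orthonormal U with U'U = I_p.
U, _ = np.linalg.qr(rng.standard_normal((n, p)))

# Eigenvalues in decreasing order: lam[0] >= ... >= lam[n-1].
lam = np.sort(np.linalg.eigvalsh(A))[::-1]
mu = np.sort(np.linalg.eigvalsh(U.T @ A @ U))[::-1]

# Poincare separation: lam[n-p+i] <= mu[i] <= lam[i]
# (0-based indices i = 0, ..., p-1).
ok = all(lam[n - p + i] - 1e-9 <= mu[i] <= lam[i] + 1e-9 for i in range(p))
print(ok)
```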
3. Main Results
Theorem 3.1.
Let $A$ and $B$ be $n \times n$ nonnegative definite Hermitian matrices with $AB = BA$, and let $X$ be an $n \times p$ complex matrix satisfying $X^{*}X = I_{p}$. Then one has
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ9_HTML.gif)
where $\lambda_{1} \ge \cdots \ge \lambda_{n}$ and $\mu_{1} \ge \cdots \ge \mu_{n}$ are the eigenvalues of the matrices $A$ and $B$, respectively.
Proof.
We easily get that $X^{*}ABX$ is a Hermitian matrix since $AB$ is a Hermitian matrix. Hence $X^{*}ABX$ is a normal matrix, and we can then derive from Lemma 2.2 that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ10_HTML.gif)
By Lemma 2.4 we get that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ11_HTML.gif)
Similarly,
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ12_HTML.gif)
Note that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ13_HTML.gif)
The latter expression of (3.5) may be expressed as
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ14_HTML.gif)
where $(j_{1}, j_{2}, \ldots, j_{p})$ is an arbitrary permutation of $(1, 2, \ldots, p)$. Clearly, each of the quantities involved is bounded below and above by the corresponding extreme eigenvalues of $A$ and $B$. Therefore, choosing the two sequences in the Pólya–Szegő inequality accordingly, we can derive that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ15_HTML.gif)
Since inequality (3.7) holds for any permutation of $(1, 2, \ldots, p)$, we thus find
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ16_HTML.gif)
In the following, the remaining problem is to choose a proper permutation of $(1, 2, \ldots, p)$ to minimize
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ17_HTML.gif)
This may be solved by a nontrivial but elementary combinatorial argument; thus we find
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ18_HTML.gif)
Then
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ19_HTML.gif)
Remark 3.2.
When $A$ is a positive definite Hermitian matrix and $B = A^{-1}$, inequality (3.1) plays an important role in the linear model $y = X\beta + \varepsilon$ with $\operatorname{Cov}(\varepsilon) = \sigma^{2}A$. The covariance matrices of the ordinary least-squares estimator and the best linear unbiased estimator in this model are
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ20_HTML.gif)
Applying inequality (3.1), we can establish a lower bound for the inefficiency of the least-squares estimator:
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ21_HTML.gif)
See also [10].
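The covariance comparison behind this remark can be illustrated directly (our sketch, assuming numpy and the standard linear-model formulas $\operatorname{Cov}(\hat\beta_{\mathrm{OLS}}) = (X^{T}X)^{-1}X^{T}\Sigma X(X^{T}X)^{-1}$ and $\operatorname{Cov}(\hat\beta_{\mathrm{BLUE}}) = (X^{T}\Sigma^{-1}X)^{-1}$; the Euclidean-norm efficiency ratio at the end is our choice of scalar summary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 8, 2
X = rng.standard_normal((n, p))

# A positive definite error covariance Sigma (sigma^2 absorbed).
S = rng.standard_normal((n, n))
Sigma = S @ S.T + n * np.eye(n)

Sigma_inv = np.linalg.inv(Sigma)
XtX_inv = np.linalg.inv(X.T @ X)

cov_ols = XtX_inv @ X.T @ Sigma @ X @ XtX_inv     # covariance of OLS
cov_blue = np.linalg.inv(X.T @ Sigma_inv @ X)     # covariance of BLUE (GLS)

# Gauss-Markov: cov_ols - cov_blue is nonnegative definite.
diff_eigs = np.linalg.eigvalsh(cov_ols - cov_blue)
print(diff_eigs.min() >= -1e-9)

# A Euclidean-norm efficiency measure, which lies in (0, 1] and is the
# kind of ratio bounded below by Kantorovich-type inequalities.
eff = np.linalg.norm(cov_blue) ** 2 / np.linalg.norm(cov_ols) ** 2
print(0 < eff <= 1 + 1e-12)
```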
In Theorem 3.1, we need the assumption that $X^{*}X = I_{p}$. However, we should also point out that the matrix $X$ may not satisfy this assumption in practice. Therefore, in the following we relax this assumption, although the results become weaker.
Theorem 3.3.
Let $A$ and $B$ be $n \times n$ nonnegative definite Hermitian matrices with $AB = BA$, and let $X$ be an $n \times p$ complex matrix. Then one has
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ22_HTML.gif)
where $\lambda_{1} \ge \cdots \ge \lambda_{n}$ and $\mu_{1} \ge \cdots \ge \mu_{n}$ are the eigenvalues of the matrices $A$ and $B$, respectively.
Proof.
If $X = 0$, the result obviously holds, so assume $X \ne 0$. Let the spectral decomposition of $X^{*}X$ be $X^{*}X = P \Lambda P^{*}$, where $P$ is a unitary matrix and $\Lambda$ is the diagonal matrix of the nonnegative eigenvalues of $X^{*}X$. Then
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ23_HTML.gif)
We can derive from (3.15) that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ24_HTML.gif)
Similarly,
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ25_HTML.gif)
We thus have
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ26_HTML.gif)
According to the proof of Theorem 3.1, we can get that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ27_HTML.gif)
The proof is completed.
Corollary 3.4.
Let $A$ be an $n \times n$ positive definite Hermitian matrix with eigenvalues $\lambda_{1} \ge \lambda_{2} \ge \cdots \ge \lambda_{n} > 0$, and let $X$ be an arbitrary $n \times p$ complex matrix. Then one has
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ28_HTML.gif)
Proof.
The proof is straightforward, so we omit it.
Theorem 3.5.
Let $A$ and $B$ be $n \times n$ positive definite Hermitian matrices with $AB = BA$, let $\lambda_{1} \ge \cdots \ge \lambda_{n} > 0$ and $\mu_{1} \ge \cdots \ge \mu_{n} > 0$ be the eigenvalues of $A$ and $B$, respectively, and let $X$ be an arbitrary $n \times p$ complex matrix. Then,
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ29_HTML.gif)
Proof.
If $X = 0$, the result obviously holds, so assume $X \ne 0$. Since $AB = BA$, there exists a unitary matrix $U$ such that $U^{*}AU = \Lambda_{1}$ and $U^{*}BU = \Lambda_{2}$, where $\Lambda_{1} = \operatorname{diag}(\lambda_{1}, \ldots, \lambda_{n})$ and $\Lambda_{2} = \operatorname{diag}(\mu_{1}, \ldots, \mu_{n})$.
Define $Y = U^{*}X$. By Corollary 3.4, we can get that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ30_HTML.gif)
The right-hand side of (3.22) may be denoted by $f$; then
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ31_HTML.gif)
It is easy to prove that $f$ is a monotone increasing function on the interval in question. From the definitions of $\Lambda_{1}$ and $\Lambda_{2}$, we thus have
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ32_HTML.gif)
This completes the proof.
Corollary 3.6.
If $A$ and $B$ are positive semidefinite Hermitian matrices with eigenvalues $\lambda_{1} \ge \cdots \ge \lambda_{n}$ and $\mu_{1} \ge \cdots \ge \mu_{n}$, respectively, inequality (3.21) becomes
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ33_HTML.gif)
Theorem 3.7.
Let $A$ and $B$ be $n \times n$ positive semidefinite Hermitian matrices with $AB = BA$, let $\lambda_{1} \ge \cdots \ge \lambda_{n}$ and $\mu_{1} \ge \cdots \ge \mu_{n}$ be the eigenvalues of $A$ and $B$, respectively, and let $X$ and $Y$ be $n \times p$ complex matrices. Then
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ34_HTML.gif)
Proof.
Note that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ35_HTML.gif)
Similarly,
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ36_HTML.gif)
Abbreviate these quantities and let their eigenvalues be arranged in decreasing order. Applying the Hölder inequality, we can derive that
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ37_HTML.gif)
Thus
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ38_HTML.gif)
This completes the proof.
Corollary 3.8.
When $X = Y$, inequality (3.26) becomes
![](http://media.springernature.com/full/springer-static/image/art%3A10.1155%2F2009%2F291984/MediaObjects/13660_2009_Article_1932_Equ39_HTML.gif)
4. Conclusions
The study of the inefficiency of the ordinary least-squares estimator in the linear model requires a lower bound for the efficiency, defined as the ratio of the variance (or covariance) of the best linear unbiased estimator to that of the ordinary least-squares estimator. Such a bound can be given by the Kantorovich inequality or its extensions. Matrix, determinant, and trace versions of the inequality have been presented in the literature; in this paper, we have presented its matrix Euclidean norm version.
References
Watson GS: Serial correlation in regression analysis, Ph.D. thesis. Department of Experimental Statistics, North Carolina State College, Raleigh, NC, USA; 1951.
Watson GS, Alpargu G, Styan GPH: Some comments on six inequalities associated with the inefficiency of ordinary least squares with one regressor. Linear Algebra and Its Applications 1997, 264: 13–54.
Drury SW, Liu S, Lu C-Y, Puntanen S, Styan GPH: Some comments on several matrix inequalities with applications to canonical correlations: historical background and recent developments. Sankhyā 2002,64(2):453–507.
Marshall AW, Olkin I: Multivariate distributions generated from mixtures of convolution and product families. In Topics in Statistical Dependence (Somerset, PA, 1987), Ims Lecture Notes-Monograph Series. Volume 16. Institute of Mathematical Statistics, Hayward, Calif, USA; 1990:371–393.
Wang SG, Jia ZZ: Matrix Inequalities. Science Press, Beijing, China; 1994.
Bloomfield P, Watson GS: The inefficiency of least squares. Biometrika 1975, 62: 121–128. 10.1093/biomet/62.1.121
Knott M: On the minimum efficiency of least squares. Biometrika 1975, 62: 129–132. 10.1093/biomet/62.1.129
Yang H: Extensions of the Kantorovich inequality and the error ratio efficiency of the mean square. Mathematica Applicata 1988,1(4):85–90.
Greub W, Rheinboldt W: On a generalization of an inequality of L. V. Kantorovich. Proceedings of the American Mathematical Society 1959, 10: 407–415. 10.1090/S0002-9939-1959-0105028-3
Yang H, Wang L: An alternative form of the Watson efficiency. Journal of Statistical Planning and Inference 2009,139(8):2767–2774. 10.1016/j.jspi.2009.01.002
Acknowledgment
The authors thank the associate editors and reviewers very much for their insightful comments and kind suggestions, which led to an improved presentation.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Wang, L., Yang, H. Several Matrix Euclidean Norm Inequalities Involving Kantorovich Inequality. J Inequal Appl 2009, 291984 (2009). https://doi.org/10.1155/2009/291984