
The efficiency comparisons between OLSE and BLUE in a singular linear model

Journal of Inequalities and Applications 2013, 2013:17

https://doi.org/10.1186/1029-242X-2013-17

Received: 12 August 2012

Accepted: 21 December 2012

Published: 14 January 2013

Abstract

This paper is mainly concerned with the efficiency comparison between the OLSE and the BLUE in a singular linear model. We define the efficiencies of the OLSE relative to the BLUE by means of the matrix Euclidean norm and prove a matrix Euclidean norm version of the Kantorovich inequality in order to derive upper or lower bounds for these efficiencies. The results relax the assumptions that the covariance matrix is positive definite and that the design matrix has full column rank.

MSC: 62J05, 62H05, 62H20.

Keywords

Euclidean norm; least squares; singular linear model; Watson efficiency

1 Introduction

Inequalities are studied and used widely in many fields, such as matrix theory and statistics. In statistics, they are often used to compare the efficiency of two estimators. For example, Wang and Shao [1] discussed efficiency comparisons between the ordinary least squares estimator (OLSE) and the best linear unbiased estimator (BLUE) in linear models. In this paper, our goal is to compare the efficiencies of the OLSE and the BLUE in a singular linear model by using matrix norm versions of the Kantorovich inequality involving a nonnegative definite matrix.

Consider the following linear regression model:
y = X β + ε ,
(1.1)

where y ∈ R^n is the vector of n observations, X ∈ R^{n×p} is the known design matrix, β ∈ R^p is the unknown vector of regression coefficients and ε ∈ R^n is the error vector with mean vector zero and covariance matrix Σ.

When X has full column rank and Σ is assumed to be positive definite, it is well known that the best linear unbiased estimator (BLUE) of β can be expressed as
β̃ = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y
(1.2)
and the ordinary least squares estimator (OLSE) of β is given by
β̂ = (X′X)⁻¹X′y .
(1.3)
According to the Löwner ordering, we can easily compute from (1.2) and (1.3) that cov(β̂) − cov(β̃) ≥ 0, i.e., the difference is nonnegative definite. Since there is no unique way to measure how ‘bad’ the OLSE can be with respect to the BLUE, various criteria have been considered in the literature; see, e.g., [2–6]. Among these criteria, the most frequently used measure is the Watson efficiency [2], defined as follows:
ϕ_1 = |cov(β̃)| / |cov(β̂)| = |X′X|² / ( |X′ΣX| · |X′Σ⁻¹X| ) ,
(1.4)
where |·| indicates the determinant of the matrix concerned. The lower bound is provided by the Bloomfield–Watson–Knott inequality; see, e.g., [7, 8]. However, Yang and Wang [9] have shown that such a criterion is not always satisfactory and provided an alternative form defined as the ratio of the Euclidean norms (or Frobenius norms) of the corresponding covariance matrices:
ϕ_2 = ‖cov(β̃)‖ / ‖cov(β̂)‖ = ‖(X′Σ⁻¹X)⁻¹‖ / ‖(X′X)⁻¹X′ΣX(X′X)⁻¹‖ .
(1.5)
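
To make these definitions concrete, the following is a minimal numerical sketch (not part of the original paper) that evaluates (1.2)–(1.5) in the nonsingular, full-column-rank case. The data, the seed and all variable names are hypothetical and chosen only for illustration.

```python
# Illustrative sketch of (1.2)-(1.5) with simulated data; all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 3
X = rng.normal(size=(n, p))                 # design matrix, full column rank (a.s.)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + np.eye(n)                 # positive definite covariance matrix

Sigma_inv = np.linalg.inv(Sigma)
XtX_inv = np.linalg.inv(X.T @ X)

cov_blue = np.linalg.inv(X.T @ Sigma_inv @ X)            # cov(beta_tilde), from (1.2)
cov_olse = XtX_inv @ X.T @ Sigma @ X @ XtX_inv           # cov(beta_hat), from (1.3)

phi1 = np.linalg.det(cov_blue) / np.linalg.det(cov_olse)     # Watson efficiency (1.4)
phi2 = np.linalg.norm(cov_blue) / np.linalg.norm(cov_olse)   # Euclidean-norm efficiency (1.5)
print(phi1, phi2)   # both lie in (0, 1]
```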

Many authors assume that the covariance matrix is nonsingular in their analysis of this classical linear model. However, this nonsingularity assumption clearly limits the number of characteristics that can be included in the model. A few authors relax the nonsingularity condition and consider a singular linear model. For example, Liski et al. [10] and Liu [4] make efficiency comparisons between the OLSE and BLUE in a singular linear model. In the present paper, the singular linear model is studied further.

The Watson efficiency ϕ_1 has been generalized to a weakly singular model; see, e.g., [11]. In the general case of the underlying singular linear model, however, it is not useful because its denominator reduces to zero. In order to relax the assumptions on the ranks of X and Σ, we mainly discuss its alternative form based on the Euclidean norm [9, 12].

We hereinafter introduce some useful notation. Let the symbols A′, A⁻, A⁺, R(A), R(A)^⊥ and rk(A) stand for the transpose, a generalized inverse, the Moore–Penrose inverse, the column space, the orthogonal complement of the column space and the rank of the matrix A, respectively. Moreover, write P_A = AA⁺ = A(A′A)⁺A′ and M_A = I − P_A; in particular, H = P_X and M = I − H. λ_i(A) denotes the i-th largest eigenvalue of the matrix A.
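
As a quick illustration of this notation (a sketch with random matrices, not from the paper), the two expressions for the orthogonal projector P_A coincide and P_A is idempotent even when A is rank deficient:

```python
# Check P_A = A A^+ = A (A'A)^+ A' and M_A = I - P_A on a rank-deficient A.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4))   # 6x4 matrix of rank 2

P1 = A @ np.linalg.pinv(A)                  # A A^+
P2 = A @ np.linalg.pinv(A.T @ A) @ A.T      # A (A'A)^+ A'
M_A = np.eye(6) - P1                        # projector onto the orthogonal complement of R(A)
print(np.allclose(P1, P2), np.allclose(P1 @ P1, P1), np.allclose(P1 @ A, A))
```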

2 A new Kantorovich-type inequality

We start with some lemmas which are very useful in the following.

Lemma 2.1 Let A be an n × n complex matrix and λ_1, …, λ_n be the eigenvalues of A. Then we have

∑_{i=1}^{n} |λ_i|² ≤ ‖A‖²

and the equality holds if and only if A is a normal matrix.

Proof The proof is very easy; we therefore omit it here. □

Lemma 2.2 Let A be an n × n positive semidefinite Hermitian matrix and U be an orthogonal projection matrix with rk(U) = k. Then we have

λ_{n−k+i}(A) ≤ λ_i(AU) ≤ λ_i(A),  i = 1, …, k.

The proof can be found in [13]. See also [14].
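
A small numerical check of Lemma 2.2 may help the reader; this sketch is illustrative only and assumes a random positive semidefinite A and a random orthogonal projector U of rank k.

```python
# Verify lambda_{n-k+i}(A) <= lambda_i(AU) <= lambda_i(A), i = 1, ..., k.
import numpy as np

rng = np.random.default_rng(2)
n, k = 7, 3
B = rng.normal(size=(n, n))
A = B @ B.T                                          # positive semidefinite A
Q, _ = np.linalg.qr(rng.normal(size=(n, k)))
U = Q @ Q.T                                          # orthogonal projector of rank k

lam_A = np.sort(np.linalg.eigvalsh(A))[::-1]         # eigenvalues of A, decreasing
lam_AU = np.sort(np.real(np.linalg.eigvals(A @ U)))[::-1][:k]   # k nonzero eigenvalues of AU
print(all(lam_A[n - k + i] - 1e-9 <= lam_AU[i] <= lam_A[i] + 1e-9 for i in range(k)))
```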

Lemma 2.3 (The Pólya–Szegő inequality)

( ∑_{i=1}^{n} a_i² )( ∑_{i=1}^{n} b_i² ) ≤ ( (M_1 M_2 + m_1 m_2)² / (4 m_1 m_2 M_1 M_2) ) ( ∑_{i=1}^{n} a_i b_i )² ,

where 0 < m_1 ≤ a_i ≤ M_1, 0 < m_2 ≤ b_i ≤ M_2, i = 1, …, n.
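
The direction of the Pólya–Szegő bound can be confirmed numerically; the following sketch (illustrative only) uses arbitrary positive sequences and takes m_1, M_1, m_2, M_2 as their observed extremes.

```python
# Numerical check of Lemma 2.3 with random positive sequences.
import numpy as np

rng = np.random.default_rng(3)
a = rng.uniform(0.5, 4.0, size=12)
b = rng.uniform(0.2, 3.0, size=12)
m1, M1, m2, M2 = a.min(), a.max(), b.min(), b.max()

lhs = (a**2).sum() * (b**2).sum()
rhs = (M1 * M2 + m1 * m2) ** 2 / (4 * m1 * m2 * M1 * M2) * (a * b).sum() ** 2
print(lhs <= rhs)   # expected: True
```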

Lemma 2.4 Let A and B be two n × n positive semidefinite Hermitian matrices, and let U be an n × k matrix with R(A) ⊆ R(B), rk(B) = q and rk(BU) = t. Then

λ_{q−t+i}(B⁻A) ≤ λ_i((U′BU)⁻U′AU) ≤ λ_i(B⁻A),  i = 1, …, t.

The proof can be found in [15]. See also [14].

Theorem 2.5 Let A be an n × n positive semidefinite Hermitian matrix with ordered nonzero eigenvalues λ_1 ≥ ⋯ ≥ λ_s > 0 (s ≤ n), and let U be an n × p complex matrix such that U′U = I_p. If p ≤ s, we then have

Proof The proof is similar to that of Theorem 1 in [9]; we therefore omit it here. □

3 The comparison of efficiencies

The Watson efficiency [2, 16] and its decompositions [11] are usually used to measure the efficiency of the ordinary least squares. However, Yang and Wang [9] show that such a criterion does not always work well in some cases and propose an alternative form
ρ = ‖cov(Xβ̃)‖ / ‖cov(Xβ̂)‖ = ‖X(X′Σ⁻¹X)⁻¹X′‖ / ‖X(X′X)⁻¹X′ΣX(X′X)⁻¹X′‖ .
(3.1)

The above formula and its lower bound both require the covariance matrix Σ to be positive definite and the design matrix to have full column rank. This assumption clearly limits the number of characteristics that may be included in the model. We here generalize this formula to the situation where the matrices X and Σ may have arbitrary rank.

In the following, we divide singular linear models into three categories in accordance with the assumptions on X and Σ. These categories are as follows:

(1) R(X) ⊆ R(Σ), rk(X) = p, Σ is possibly singular;

(2) R(X) ⊆ R(Σ), rk(X) < p, Σ is possibly singular;

(3) Σ is possibly singular.
From now on, we always assume rk(Σ) = s (s < n). Then any given singular linear model can be classified into one of the categories i (i = 1, 2, 3). Many authors have contributed to the theory in the literature; see, e.g., [17, 18]. The general representations for the BLUE of Xβ and its covariance matrix can be given respectively by
Xβ̃ = X(X′W⁻X)⁻X′W⁻y ,
(3.2)
cov(Xβ̃) = X(X′W⁻X)⁻X′ − XUX′ ,
(3.3)
where W = Σ + XUX′ and U ≥ 0 is an arbitrary matrix such that R(W) = R(X : Σ). In particular, Σ⁻ can play the same role as Σ⁻¹ does when Σ is nonsingular as long as R(X) ⊆ R(Σ). That is to say, W can be replaced by Σ in this case. We then have
cov(Xβ̃) = X(X′Σ⁻X)⁻X′ .
(3.4)
The covariance matrix of the well-known OLSE of Xβ is given by
cov(Xβ̂) = X(X′X)⁻X′ΣX(X′X)⁻X′ = HΣH .
(3.5)
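
The two covariance matrices in (3.4) and (3.5) can be evaluated numerically; the following hedged sketch (not from the paper) constructs a weakly singular model in which R(X) ⊆ R(Σ) holds by construction, takes all generalized inverses to be Moore–Penrose inverses, and checks the Löwner ordering between the two covariances.

```python
# cov(X beta_tilde) from (3.4) and cov(X beta_hat) = H Sigma H from (3.5),
# plus a check that cov(OLSE) - cov(BLUE) is nonnegative definite.
import numpy as np

rng = np.random.default_rng(4)
n, s, p = 9, 6, 3
F = rng.normal(size=(n, s))
Sigma = F @ F.T                              # psd covariance of rank s < n
X = F @ rng.normal(size=(s, p))              # R(X) subset of R(Sigma), rank p

H = X @ np.linalg.pinv(X.T @ X) @ X.T        # H = P_X
cov_olse = H @ Sigma @ H                                              # (3.5)
cov_blue = X @ np.linalg.pinv(X.T @ np.linalg.pinv(Sigma) @ X) @ X.T  # (3.4)
print(np.all(np.linalg.eigvalsh(cov_olse - cov_blue) > -1e-8))        # Loewner ordering
```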

In the following, we make efficiency comparisons between the OLSE and the BLUE in a singular model according to the above categories.

Firstly, we discuss category (1). The matrix product X′Σ⁻X is invariant with respect to the choice of the generalized inverse Σ⁻ because of the column space inclusion R(X) ⊆ R(Σ). Applying the rank rule of the matrix product [19], we can get that

rk(X′Σ⁻X) = rk(Σ⁻X) = rk(X) − dim(R(X) ∩ R(Σ)^⊥) = p .
(3.6)
Note that R(Σ) = R(Σ⁺), and then we can conclude that X′Σ⁻X and X′Σ⁺X are both nonsingular. In the literature, such a model is often regarded as a weakly singular model or the Zyskind–Martin model [20]. In this model, the relative efficiency ρ becomes

ρ_1 = ‖cov(Xβ̃)‖ / ‖cov(Xβ̂)‖ = ‖X(X′Σ⁺X)⁻¹X′‖ / ‖X(X′X)⁻¹X′ΣX(X′X)⁻¹X′‖ .
(3.7)
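
A short sketch (illustrative, with simulated data and Moore–Penrose inverses) that evaluates ρ_1 from (3.7) for such a weakly singular model; the computed value should not exceed 1.

```python
# rho_1 from (3.7) for a weakly singular model with rk(X) = p and R(X) in R(Sigma).
import numpy as np

rng = np.random.default_rng(5)
n, s, p = 9, 6, 3
F = rng.normal(size=(n, s))
Sigma = F @ F.T
X = F @ rng.normal(size=(s, p))

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T
num = X @ np.linalg.inv(X.T @ np.linalg.pinv(Sigma) @ X) @ X.T   # cov(X beta_tilde)
den = H @ Sigma @ H                                              # cov(X beta_hat)
rho1 = np.linalg.norm(num) / np.linalg.norm(den)                 # ratio of Frobenius norms
print(rho1, rho1 <= 1 + 1e-12)
```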

It is easy to prove that ρ_1 ≤ 1. The following theorem gives its lower bound.

Theorem 3.1 In the linear regression model (1.1), let λ_1 ≥ ⋯ ≥ λ_s > 0 (s < n) be the ordered nonzero eigenvalues of Σ and let X be an n × p design matrix with rk(X) = p and R(X) ⊆ R(Σ). Then we have

ρ_1 ≥ 2p √(λ_1 λ_p λ_{s−p+1} λ_s) / [ (λ_1 λ_{s−p+1} + λ_s λ_p) ∑_{i=1}^{p} λ_i/λ_{s−p+i} ] .
Proof We can firstly compute that

ρ_1 = ‖X(X′Σ⁺X)⁻¹X′‖ / ‖HΣH‖ .

There exists some orthogonal matrix P such that Σ = PΛP′, so Σ⁺ = PΛ⁺P′, where Λ = diag(λ_1, …, λ_s, 0, …, 0). Let U = P′X(X′X)^{−1/2}, and then we have that U′U = I_p and

ρ_1 = ‖(U′Λ⁺U)⁻¹‖ / ‖U′ΛU‖ .

Using Theorem 2.5, the result in Theorem 3.1 can be established. □

Secondly, we consider category (2). Let rk(X) = r (r < p). Using equation (3.6), we can get that rk(X′Σ⁻X) = rk(X′Σ⁺X) = r. Analogously, the relative efficiency ρ becomes
ρ_2 = ‖cov(Xβ̃)‖ / ‖cov(Xβ̂)‖ = ‖X(X′Σ⁺X)⁺X′‖ / ‖X(X′X)⁺X′ΣX(X′X)⁺X′‖ .
(3.8)

It is easy to prove that ρ_2 ≤ 1. The following theorem gives its lower bound.

Theorem 3.2 In the linear regression model (1.1), let λ_1 ≥ ⋯ ≥ λ_s > 0 (s < n) be the ordered nonzero eigenvalues of Σ and let X be an n × p design matrix with rk(X) = r (r < p) and R(X) ⊆ R(Σ). We then have

ρ_2 ≥ 2r √(λ_1 λ_r λ_{s−r+1} λ_s) / [ (λ_1 λ_{s−r+1} + λ_s λ_r) ∑_{i=1}^{r} λ_i/λ_{s−r+i} ] .
Proof It is easy to prove that
X(X′Σ⁺X)⁺X′ = (HΣ⁺H)⁺ .

Then the proof is similar to that of Theorem 3.1; we therefore omit it here. □
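
The identity used above can also be checked numerically. This is an illustrative sketch with a rank-deficient X whose column space lies in R(Σ); the generalized inverses are taken as Moore–Penrose inverses and all data are simulated.

```python
# Check X (X' Sigma^+ X)^+ X' = (H Sigma^+ H)^+ for rank-deficient X with R(X) in R(Sigma).
import numpy as np

rng = np.random.default_rng(6)
n, s, p, r = 9, 6, 4, 2
F = rng.normal(size=(n, s))
Sigma = F @ F.T                                              # rank s
X = F @ rng.normal(size=(s, r)) @ rng.normal(size=(r, p))    # rank r < p, R(X) in R(Sigma)

H = X @ np.linalg.pinv(X.T @ X) @ X.T
Sp = np.linalg.pinv(Sigma)
lhs = X @ np.linalg.pinv(X.T @ Sp @ X) @ X.T
rhs = np.linalg.pinv(H @ Sp @ H)
print(np.allclose(lhs, rhs))
```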

Finally, we consider category (3). Owing to (3.3), we may write
cov(Xβ̃) = cov(Xβ̂) − HΣM(MΣM)⁻MΣH .
Therefore, we define
ρ_3 = ‖cov(Xβ̂) − cov(Xβ̃)‖ / ‖cov(Xβ̂)‖ = ‖HΣM(MΣM)⁻MΣH‖ / ‖HΣH‖ .
(3.9)
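
A final sketch (illustrative only; Moore–Penrose inverses, a random singular Σ, and a generic X that need not lie in R(Σ)) evaluating ρ_3 from (3.9):

```python
# rho_3 = ||H Sigma M (M Sigma M)^- M Sigma H|| / ||H Sigma H|| in a general singular model.
import numpy as np

rng = np.random.default_rng(7)
n, s, p = 9, 6, 3
F = rng.normal(size=(n, s))
Sigma = F @ F.T                          # singular covariance, rank s < n
X = rng.normal(size=(n, p))              # generic design; R(X) need not lie in R(Sigma)

H = X @ np.linalg.pinv(X.T @ X) @ X.T
M = np.eye(n) - H
D = H @ Sigma @ M @ np.linalg.pinv(M @ Sigma @ M) @ M @ Sigma @ H
rho3 = np.linalg.norm(D) / np.linalg.norm(H @ Sigma @ H)
print(rho3)   # lies in [0, 1]
```
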
Let dim(R(X) ∩ R(Σ)) = g (g ≥ 0) and rk(X) = r (r ≤ p). Note that

rk(X′ΣX) = rk(X′X(X′X)⁺X′ΣX(X′X)⁺X′X) ≤ rk(X(X′X)⁺X′ΣX(X′X)⁺X′) ≤ rk(X′ΣX) .
Due to equation (3.6), we have
rk(HΣH) = rk(X) − dim(R(X) ∩ R(Σ)^⊥) .
(3.10)
As a result of

R(X) = R(X) ∩ (R(Σ) ⊕ R(Σ)^⊥) = (R(X) ∩ R(Σ)) ⊕ (R(X) ∩ R(Σ)^⊥) ,
we then have
rk(HΣH) = r − (r − g) = g .
(3.11)
Similarly, we can obtain that
rk(MΣM) = rk(ΣM) = rk(M) − dim(R(M) ∩ R(Σ)^⊥) = dim(R(M) ∩ R(Σ)) .
(3.12)
In view of R(M) = R(X)^⊥ and

R(Σ) = R(Σ) ∩ (R(X) ⊕ R(X)^⊥) = (R(Σ) ∩ R(X)) ⊕ (R(Σ) ∩ R(X)^⊥) ,
we can get that
rk(MΣM) = rk(ΣM) = s − g .
(3.13)
Theorem 3.3 In the linear regression model (1.1), let λ_1 ≥ ⋯ ≥ λ_s > 0 (s < n) be the ordered nonzero eigenvalues of Σ and let X be an n × p design matrix with rk(X) = r (r ≤ p), dim(R(X) ∩ R(Σ)) = g and rk(HΣM(MΣM)⁻MΣH) = h. We then have

ρ_3 ≤ (1 / (2(s+r−n))) ( √(λ_1 λ_{n−r+1} / (λ_s λ_{s+r−n})) + √(λ_s λ_{s+r−n} / (λ_1 λ_{n−r+1})) ) ∑_{i=1}^{s+r−n} λ_i/λ_{n−r+i} ,  if h ≤ s + r − n,

ρ_3 ≤ (1 / (2(s+r−n))) ( √(λ_1 λ_{s−h+1} / (λ_s λ_h)) + √(λ_s λ_h / (λ_1 λ_{s−h+1})) ) ∑_{i=1}^{h} λ_i/λ_{s−h+i} ,  if h > s + r − n.
Proof For convenience, let a = HΣM(MΣM)⁻MΣH and b = HΣH. Note that HΣM(MΣM)⁻MΣH is invariant for all the choices of generalized inverses (MΣM)⁻. From Lemma 2.1, we can easily get that

‖a‖² = ∑_{i=1}^{h} λ_i²(HΣM(MΣM)⁻MΣH) = ∑_{i=1}^{h} λ_i²((MΣM)⁻MΣHΣM) .
Obviously, h = rk(HΣM(MΣM)⁻MΣH) ≤ rk(ΣH) = g and h ≤ rk(ΣM) = s − g. Since Σ and ΣHΣ are positive semidefinite matrices and R(ΣHΣ) ⊆ R(Σ), we can derive from Lemma 2.4 that

‖a‖² ≤ ∑_{i=1}^{h} λ_i²(Σ⁻ΣHΣ) = ∑_{i=1}^{h} λ_i²(ΣΣ⁻ΣH) = ∑_{i=1}^{h} λ_i²(ΣH) .
Here H is an orthogonal projection matrix, and then we obtain from Lemma 2.2 that
‖a‖² ≤ ∑_{i=1}^{h} λ_i²(Σ) = ∑_{i=1}^{h} λ_i² .
Furthermore, since H Σ H is a Hermitian matrix, by Lemma 2.1, we can get that
‖b‖² = ∑_{i=1}^{g} λ_i²(HΣH) = ∑_{i=1}^{g} λ_i²(ΣH) .
The Sylvester theorem (see, e.g., [14]) shows that n − r + g > s. Analogously, we can get that

‖b‖² ≥ ∑_{i=1}^{s+r−n} λ_{n−r+i}² .
Applying the well-known arithmetic-harmonic mean inequality, we have
1/‖b‖² ≤ (1 / (s+r−n)²) ∑_{i=1}^{s+r−n} 1/λ_{n−r+i}² .
Firstly, we suppose that h ≤ s + r − n. We can then compute that
ρ_3² = ‖a‖² / ‖b‖² ≤ (1 / (s+r−n)²) ( ∑_{i=1}^{s+r−n} λ_i² ) ( ∑_{i=1}^{s+r−n} 1/λ_{n−r+i}² ) .

By the Pólya–Szegő inequality and a nontrivial but elementary combinatorial argument, we can establish the first inequality. The second inequality is obtained similarly. □

4 Conclusions

In this article, we use several new matrix norm versions of the Kantorovich inequality involving a nonnegative definite matrix to compare the efficiencies of the OLSE and the BLUE in a singular linear model. The singular linear model is divided into three categories in accordance with the assumptions on the ranks of X and Σ. We introduce some new relative efficiency criteria, and their lower or upper bounds are given based on matrix norm inequalities in Theorems 3.1, 3.2 and 3.3.

Declarations

Acknowledgements

The authors thank the associate editors and the referees very much for their insightful comments, which led to an improved presentation. This work is partly supported by the National Natural Science Foundation of China (No. 11126211; No. 61201398) and the Natural Science Foundation of Zhejiang Province (No. LQ12A01021).

Authors’ Affiliations

(1)
Department of Applied Mathematics, Zhejiang University of Technology, Hangzhou, China
(2)
College of Mechanical Electrical Engineering, Zhejiang University of Technology, Hangzhou, China

References

  1. Wang SG, Shao J: Constrained Kantorovich inequality and relative efficiency of least squares. J. Multivar. Anal. 1992, 42: 284–298. doi:10.1016/0047-259X(92)90048-K
  2. Watson GS: Serial correlation in regression analysis. PhD thesis, Department of Experimental Statistics, North Carolina State College, Raleigh (1951)
  3. Khatri CG, Rao CR: Some generalizations of the Kantorovich inequality. Sankhyā 1982, 44: 91–102.
  4. Liu SZ: Efficiency comparisons between the OLSE and the BLUE in a singular linear model. J. Stat. Plan. Inference 2000, 84: 191–200. doi:10.1016/S0378-3758(99)00149-4
  5. Rao CR: The inefficiency of least squares: extension of the Kantorovich inequality. Linear Algebra Appl. 1985, 70: 249–255.
  6. Yang H: Extensions of the Kantorovich inequality and the error ratio efficiency of the mean square. Math. Appl. 1988, 4: 85–90.
  7. Bloomfield P, Watson GS: The inefficiency of least squares. Biometrika 1975, 62: 121–128. doi:10.1093/biomet/62.1.121
  8. Knott M: On the minimum efficiency of the least square. Biometrika 1975, 62: 129–132. doi:10.1093/biomet/62.1.129
  9. Yang H, Wang LT: An alternative form of the Watson efficiency. J. Stat. Plan. Inference 2009, 139: 2767–2774. doi:10.1016/j.jspi.2009.01.002
  10. Liski EP, Puntanen S, Wang SG: Bounds for the trace of the difference of the covariance matrices of the OLSE and BLUE. Linear Algebra Appl. 1992, 176: 121–130.
  11. Chu K, Isotalo J, Puntanen S, Styan GPH: On decomposing the Watson efficiency of ordinary least squares in a partitioned weakly singular linear model. Sankhyā 2004, 66: 634–651.
  12. Wang LT, Yang H: Several matrix Euclidean norm inequalities involving Kantorovich inequality. J. Inequal. Appl. 2009, 2009: Article ID 291984
  13. Poincaré H: Sur les équations aux dérivées partielles de la physique mathématique. Am. J. Math. 1890, 12: 211–294. doi:10.2307/2369620
  14. Wang SG, Jia ZZ: Matrix Inequality. Anhui Education Press, Hefei; 1994.
  15. Scott AJ, Styan GPH: On a separation theorem for generalized eigenvalues and a problem in the analysis of sample surveys. Linear Algebra Appl. 1985, 70: 209–224.
  16. Chu K, Isotalo J, Puntanen S, Styan GPH: The efficiency factorization multiplier for the Watson efficiency in partitioned linear models: some examples and a literature review. J. Stat. Plan. Inference 2007, 137: 3336–3351. doi:10.1016/j.jspi.2007.03.015
  17. Christensen R: Plane Answers to Complex Questions: The Theory of Linear Models. Springer, New York; 1987.
  18. Groß J: The general Gauss–Markov model with possibly singular dispersion matrix. Stat. Pap. 2004, 45: 311–336. doi:10.1007/BF02777575
  19. Meyer CD: Matrix Analysis and Applied Linear Algebra. Cambridge University Press, Cambridge; 2001.
  20. Zyskind G, Martin FB: On best linear estimation and a general Gauss–Markov theorem in linear models with arbitrary nonnegative covariance structure. SIAM J. Appl. Math. 1969, 17: 1190–1202. doi:10.1137/0117110

Copyright

© Wang and Pan; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
