
Some properties of relative efficiency of estimators in a two linear regression equations system with identical parameter vectors

Abstract

Two normal linear models sharing some identical parameters are discussed in this article. We introduce four relative efficiencies to measure the efficiency of an estimator in a system of two linear regression equations with identical parameter vectors, and we give lower and upper bounds for each of the four relative efficiencies.

1 Introduction

Consider a system (H) formed by two linear models:

$$y_1 = X_1\beta + Z_1\beta_1 + \varepsilon_1, \tag{1}$$
$$y_2 = X_2\beta + Z_2\beta_2 + \varepsilon_2, \tag{2}$$

where, for $i=1,2$, $y_i$ is an $n_i\times 1$ vector of observations, $X_i$ and $Z_i$ are $n_i\times p$ and $n_i\times t_i$ full column rank matrices satisfying $\operatorname{rank}(X_i,Z_i)=\operatorname{rank}(X_i)+\operatorname{rank}(Z_i)$, with $\operatorname{rank}(\cdot)$ denoting the rank of a matrix, $\beta$ and $\beta_i$ are $p\times 1$ and $t_i\times 1$ unknown parameter vectors, and $\varepsilon_i$ is an $n_i\times 1$ random vector assumed to follow a multivariate normal distribution with mean $0$ and variance-covariance matrix $\sigma_i I$, where $\sigma_i$ is a known parameter; $\varepsilon_1$ and $\varepsilon_2$ are independent.

Define $Q_i = I - Z_i(Z_i'Z_i)^{-1}Z_i'$, $T_i = (Z_i'Z_i)^{-1}Z_i'X_i$ and $r = \sigma_1/\sigma_2$. Then by Liu [1] we have the following:

  1. In the single equation (1), the best linear unbiased estimators (BLUEs) of $\beta$ and $\beta_1$ are given respectively by
     $$\hat\beta = (X_1'Q_1X_1)^{-1}X_1'Q_1y_1, \tag{3}$$
     $$\hat\beta_1 = (Z_1'Z_1)^{-1}Z_1'y_1 - T_1\hat\beta. \tag{4}$$
  2. In the single equation (2), the best linear unbiased estimators (BLUEs) of $\beta$ and $\beta_2$ are given respectively by
     $$\tilde\beta = (X_2'Q_2X_2)^{-1}X_2'Q_2y_2, \tag{5}$$
     $$\tilde\beta_2 = (Z_2'Z_2)^{-1}Z_2'y_2 - T_2\tilde\beta. \tag{6}$$
  3. For the system (H), the BLUEs of $\beta$, $\beta_1$ and $\beta_2$ are given respectively by
     $$\beta^*(r) = (X_1'Q_1X_1 + rX_2'Q_2X_2)^{-1}(X_1'Q_1y_1 + rX_2'Q_2y_2), \tag{7}$$
     $$\beta_1^* = (Z_1'Z_1)^{-1}Z_1'y_1 - T_1\beta^*(r), \tag{8}$$
     $$\beta_2^* = (Z_2'Z_2)^{-1}Z_2'y_2 - T_2\beta^*(r). \tag{9}$$
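As a concrete illustration, the three BLUEs of $\beta$ above can be computed directly. The sketch below is a minimal NumPy example with simulated data — all dimensions, variances, and parameter values are illustrative, not taken from the article — implementing Equations (3), (5) and (7):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for the two-equation system (dimensions are illustrative).
n1, n2, p, t1, t2 = 40, 50, 3, 2, 2
X1, Z1 = rng.standard_normal((n1, p)), rng.standard_normal((n1, t1))
X2, Z2 = rng.standard_normal((n2, p)), rng.standard_normal((n2, t2))
beta, b1, b2 = np.array([1.0, -2.0, 0.5]), np.ones(t1), np.ones(t2)
sigma1, sigma2 = 1.0, 2.0            # "known" error variances
y1 = X1 @ beta + Z1 @ b1 + np.sqrt(sigma1) * rng.standard_normal(n1)
y2 = X2 @ beta + Z2 @ b2 + np.sqrt(sigma2) * rng.standard_normal(n2)

def Q(Z):
    """Projector onto the orthogonal complement of col(Z): Q = I - Z(Z'Z)^{-1}Z'."""
    return np.eye(Z.shape[0]) - Z @ np.linalg.solve(Z.T @ Z, Z.T)

Q1, Q2 = Q(Z1), Q(Z2)
r = sigma1 / sigma2

# Single-equation BLUEs, Equations (3) and (5).
beta_hat = np.linalg.solve(X1.T @ Q1 @ X1, X1.T @ Q1 @ y1)
beta_til = np.linalg.solve(X2.T @ Q2 @ X2, X2.T @ Q2 @ y2)

# System BLUE, Equation (7).
beta_star = np.linalg.solve(X1.T @ Q1 @ X1 + r * X2.T @ Q2 @ X2,
                            X1.T @ Q1 @ y1 + r * X2.T @ Q2 @ y2)
```

A quick consistency check: by the Frisch–Waugh–Lovell theorem, $\hat\beta$ coincides with the coefficient of $X_1$ in an ordinary least squares regression of $y_1$ on $(X_1, Z_1)$.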

In this article we discuss only the estimation of the common parameter $\beta$. Liu [1] compared the estimators $\hat\beta$, $\tilde\beta$ and $\beta^*(r)$ under the mean squared error criterion when the $\sigma_i$ are known. He also proposed an estimator for the case when the $\sigma_i$ are unknown and discussed the statistical properties of $\hat\beta$, $\tilde\beta$ and $\beta^*(r)$. Ma and Wang [2] also studied the estimators $\hat\beta$, $\tilde\beta$ and $\beta^*(r)$ under the mean squared error criterion.

It is easy to compute that

$$\operatorname{Cov}(\hat\beta) = \sigma_1(X_1'Q_1X_1)^{-1}, \tag{10}$$
$$\operatorname{Cov}(\tilde\beta) = \sigma_2(X_2'Q_2X_2)^{-1}, \tag{11}$$
$$\operatorname{Cov}(\beta^*(r)) = \sigma_1(X_1'Q_1X_1 + rX_2'Q_2X_2)^{-1}. \tag{12}$$

From Equations (10)-(12) we can see that, in the Löwner partial ordering,

$$\operatorname{Cov}(\beta^*(r)) \le \operatorname{Cov}(\hat\beta), \qquad \operatorname{Cov}(\beta^*(r)) \le \operatorname{Cov}(\tilde\beta). \tag{13}$$
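The ordering in (13) is a matrix (Löwner) ordering: both differences of covariance matrices are nonnegative definite. A small numerical sketch, with $X_i'Q_iX_i$ replaced by arbitrary positive definite stand-in matrices for illustration, checks this directly:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
sigma1, sigma2 = 1.0, 2.0
r = sigma1 / sigma2

def random_pd(p, rng):
    """Random symmetric positive definite matrix, standing in for X_i'Q_iX_i."""
    B = rng.standard_normal((p, p))
    return B @ B.T + p * np.eye(p)

S1, S2 = random_pd(p, rng), random_pd(p, rng)   # stand-ins for X1'Q1X1, X2'Q2X2

cov_hat = sigma1 * np.linalg.inv(S1)            # Equation (10)
cov_til = sigma2 * np.linalg.inv(S2)            # Equation (11)
cov_star = sigma1 * np.linalg.inv(S1 + r * S2)  # Equation (12)

# Smallest eigenvalues of the differences; both should be >= 0, as in (13).
d1 = np.linalg.eigvalsh(cov_hat - cov_star).min()
d2 = np.linalg.eigvalsh(cov_til - cov_star).min()
```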

In practice the $\sigma_i$ may be unknown, in which case we may use $\hat\beta$ or $\tilde\beta$ in place of $\beta^*(r)$. This substitution, however, incurs a loss of efficiency, and we introduce relative efficiencies to quantify this loss. Relative efficiency has been studied by many researchers, such as Yang [3], Wang and Ip [4], Liu et al. [5, 6], Yang and Wang [7], Wang and Yang [8, 9] and Yang and Wu [10].

In this article we introduce four relative efficiencies for system (H) and give lower and upper bounds for each of them.

The rest of the article is organized as follows. In Section 2, we propose the new relative efficiency. Sections 3 and 4 give the lower and upper bounds of the relative efficiencies proposed in Section 2. Some concluding remarks are given in Section 5.

2 New relative efficiency

In order to quantify the loss incurred when $\hat\beta$ or $\tilde\beta$ is used in place of $\beta^*(r)$, we introduce four relative efficiencies as follows:

$$e_1(\beta^*(r)\mid\hat\beta) = \frac{|\operatorname{Cov}(\beta^*(r))|}{|\operatorname{Cov}(\hat\beta)|}, \tag{14}$$
$$e_2(\beta^*(r)\mid\tilde\beta) = \frac{|\operatorname{Cov}(\beta^*(r))|}{|\operatorname{Cov}(\tilde\beta)|}, \tag{15}$$
$$e_3(\beta^*(r)\mid\hat\beta) = \frac{\operatorname{tr}(\operatorname{Cov}(\beta^*(r)))}{\operatorname{tr}(\operatorname{Cov}(\hat\beta))}, \tag{16}$$
$$e_4(\beta^*(r)\mid\tilde\beta) = \frac{\operatorname{tr}(\operatorname{Cov}(\beta^*(r)))}{\operatorname{tr}(\operatorname{Cov}(\tilde\beta))}, \tag{17}$$

where $|A|$ and $\operatorname{tr}(A)$ denote the determinant and trace of a matrix $A$, respectively. By Equation (13) we have $0 < e_i(\cdot\mid\cdot) \le 1$, $i=1,2,3,4$. In the next section we give the lower and upper bounds of $e_1(\beta^*(r)\mid\hat\beta)$ and $e_2(\beta^*(r)\mid\tilde\beta)$.
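Continuing the numerical sketch, with arbitrary positive definite stand-ins for $X_i'Q_iX_i$ (illustrative only), all four efficiencies can be computed from Equations (10)-(12) and checked to lie in $(0,1]$:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
sigma1, sigma2 = 1.0, 2.0
r = sigma1 / sigma2

B1, B2 = rng.standard_normal((p, p)), rng.standard_normal((p, p))
S1 = B1 @ B1.T + p * np.eye(p)   # stand-in for X1'Q1X1
S2 = B2 @ B2.T + p * np.eye(p)   # stand-in for X2'Q2X2

cov_hat = sigma1 * np.linalg.inv(S1)
cov_til = sigma2 * np.linalg.inv(S2)
cov_star = sigma1 * np.linalg.inv(S1 + r * S2)

e1 = np.linalg.det(cov_star) / np.linalg.det(cov_hat)   # Equation (14)
e2 = np.linalg.det(cov_star) / np.linalg.det(cov_til)   # Equation (15)
e3 = np.trace(cov_star) / np.trace(cov_hat)             # Equation (16)
e4 = np.trace(cov_star) / np.trace(cov_til)             # Equation (17)
```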

3 The lower and upper bounds of $e_1(\beta^*(r)\mid\hat\beta)$ and $e_2(\beta^*(r)\mid\tilde\beta)$

In this section we give the lower and upper bounds of $e_1(\beta^*(r)\mid\hat\beta)$ and $e_2(\beta^*(r)\mid\tilde\beta)$. First, we give some lemmas and notation needed in the following discussion. For an $n\times n$ nonnegative definite matrix $A$, let $\lambda_1(A)\ge\lambda_2(A)\ge\cdots\ge\lambda_n(A)$ denote the ordered eigenvalues of $A$.

Lemma 3.1 [11]

Let $A$ and $B$ be $n\times n$ nonnegative definite matrices. Then

$$\lambda_n(A)\lambda_i(B) \le \lambda_i(AB) \le \lambda_1(A)\lambda_i(B), \quad i=1,2,\dots,n. \tag{18}$$

Lemma 3.2 [12]

Let $\Delta_1=\operatorname{diag}(\tau_1,\tau_2,\dots,\tau_p)$ with $\tau_1\ge\tau_2\ge\cdots\ge\tau_p>0$, let $\Delta_2=\operatorname{diag}(\mu_1,\mu_2,\dots,\mu_p)$ with $\mu_1\ge\mu_2\ge\cdots\ge\mu_p\ge 0$, and let $A$ be a $p\times p$ orthogonal matrix. Then

$$\sum_{i=1}^p \tau_i\mu_{p+1-i} \le \operatorname{tr}(\Delta_1 A\Delta_2 A') \le \sum_{i=1}^p \tau_i\mu_i. \tag{19}$$
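Both lemmas are easy to probe numerically. The sketch below draws random matrices of the required types (all names are illustrative) and checks the two chains of inequalities:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# Lemma 3.1: A, B nonnegative definite.
RA, RB = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = RA @ RA.T, RB @ RB.T
lamA = np.sort(np.linalg.eigvalsh(A))[::-1]          # λ1(A) >= ... >= λn(A)
lamB = np.sort(np.linalg.eigvalsh(B))[::-1]
# AB is similar to the symmetric matrix B^{1/2} A B^{1/2}, so its eigenvalues are real.
lamAB = np.sort(np.linalg.eigvals(A @ B).real)[::-1]

ok_31 = all(
    lamA[-1] * lamB[i] - 1e-6 <= lamAB[i] <= lamA[0] * lamB[i] + 1e-6
    for i in range(n)
)

# Lemma 3.2: diagonal matrices with ordered entries and an orthogonal matrix.
tau = np.sort(rng.uniform(0.5, 3.0, n))[::-1]        # τ1 >= ... >= τn > 0
mu = np.sort(rng.uniform(0.0, 3.0, n))[::-1]         # μ1 >= ... >= μn >= 0
O, _ = np.linalg.qr(rng.standard_normal((n, n)))     # random orthogonal matrix
t = np.trace(np.diag(tau) @ O @ np.diag(mu) @ O.T)
lower, upper = np.sum(tau * mu[::-1]), np.sum(tau * mu)
```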

We now give the lower and upper bounds of $e_1(\beta^*(r)\mid\hat\beta)$.

Theorem 3.1 Let $\beta^*(r)$ and $\hat\beta$ be given by Equations (7) and (3), and let $e_1(\beta^*(r)\mid\hat\beta)$ be defined by Equation (14). Then

$$\frac{1}{\prod_{i=1}^p (1+r\theta_p^{-1}\eta_i)} \le e_1(\beta^*(r)\mid\hat\beta) \le \frac{1}{\prod_{i=1}^p (1+r\theta_1^{-1}\eta_i)}, \tag{20}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$ and $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$.

Proof By the definition of $e_1(\beta^*(r)\mid\hat\beta)$, we have

$$e_1(\beta^*(r)\mid\hat\beta) = \frac{|\operatorname{Cov}(\beta^*(r))|}{|\operatorname{Cov}(\hat\beta)|} = \frac{|\sigma_1(X_1'Q_1X_1+rX_2'Q_2X_2)^{-1}|}{|\sigma_1(X_1'Q_1X_1)^{-1}|} = \frac{|X_1'Q_1X_1|}{|X_1'Q_1X_1+rX_2'Q_2X_2|}. \tag{21}$$

It is easy to see that $X_1'Q_1X_1>0$ and $X_2'Q_2X_2>0$. Define

$$A = (X_1'Q_1X_1)^{-1/2}(X_2'Q_2X_2)(X_1'Q_1X_1)^{-1/2};$$

then $A>0$, so there exists an orthogonal matrix $N$ such that

$$NAN' = \operatorname{diag}(\zeta_1,\dots,\zeta_p) \equiv \Delta, \tag{22}$$

where $\zeta_1\ge\cdots\ge\zeta_p$ are the eigenvalues of $A$. Now define $M = N(X_1'Q_1X_1)^{-1/2}$; then we have

$$M(X_1'Q_1X_1)M' = NN' = I_p, \tag{23}$$
$$M(X_2'Q_2X_2)M' = N(X_1'Q_1X_1)^{-1/2}(X_2'Q_2X_2)(X_1'Q_1X_1)^{-1/2}N' = NAN' = \Delta. \tag{24}$$

Thus

$$X_1'Q_1X_1 = M^{-1}(M')^{-1}, \tag{25}$$
$$X_2'Q_2X_2 = M^{-1}\Delta (M')^{-1}. \tag{26}$$

Substituting Equations (25) and (26) into Equation (21), we obtain

$$e_1(\beta^*(r)\mid\hat\beta) = \frac{|M^{-1}(M')^{-1}|}{|M^{-1}(M')^{-1}+rM^{-1}\Delta(M')^{-1}|} = \frac{|M^{-1}||(M')^{-1}|}{|M^{-1}||I_p+r\Delta||(M')^{-1}|} = \frac{1}{|I_p+r\Delta|}. \tag{27}$$

Since $A = (X_1'Q_1X_1)^{-1/2}(X_2'Q_2X_2)(X_1'Q_1X_1)^{-1/2}$ has the same eigenvalues as $(X_2'Q_2X_2)(X_1'Q_1X_1)^{-1}$, we have $\lambda_i(A) = \lambda_i((X_2'Q_2X_2)(X_1'Q_1X_1)^{-1})$, $i=1,2,\dots,p$. Then by Lemma 3.1 we have

$$\lambda_p\bigl((X_1'Q_1X_1)^{-1}\bigr)\lambda_i(X_2'Q_2X_2) \le \lambda_i\bigl((X_2'Q_2X_2)(X_1'Q_1X_1)^{-1}\bigr) \le \lambda_1\bigl((X_1'Q_1X_1)^{-1}\bigr)\lambda_i(X_2'Q_2X_2). \tag{28}$$

On the other hand,

$$\lambda_p\bigl((X_1'Q_1X_1)^{-1}\bigr) = \lambda_1^{-1}(X_1'Q_1X_1) = \theta_1^{-1}, \tag{29}$$
$$\lambda_1\bigl((X_1'Q_1X_1)^{-1}\bigr) = \lambda_p^{-1}(X_1'Q_1X_1) = \theta_p^{-1}, \tag{30}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$. By Equations (28)-(30) we obtain

$$\theta_1^{-1}\eta_i \le \lambda_i\bigl((X_2'Q_2X_2)(X_1'Q_1X_1)^{-1}\bigr) \le \theta_p^{-1}\eta_i, \quad i=1,\dots,p, \tag{31}$$

where $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$. Thus by Equations (27) and (31) we have

$$\frac{1}{\prod_{i=1}^p (1+r\theta_p^{-1}\eta_i)} \le e_1(\beta^*(r)\mid\hat\beta) \le \frac{1}{\prod_{i=1}^p (1+r\theta_1^{-1}\eta_i)}. \tag{32}$$

 □
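The bound (20) can be probed numerically. The sketch below uses random positive definite stand-ins for the two matrices (illustrative only) and evaluates $e_1$ via Equation (21), using `slogdet` for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(4)
p, r = 4, 0.5

B1, B2 = rng.standard_normal((p, p)), rng.standard_normal((p, p))
S1 = B1 @ B1.T + p * np.eye(p)   # stand-in for X1'Q1X1
S2 = B2 @ B2.T + p * np.eye(p)   # stand-in for X2'Q2X2

theta = np.sort(np.linalg.eigvalsh(S1))[::-1]   # θ1 >= ... >= θp
eta = np.sort(np.linalg.eigvalsh(S2))[::-1]     # η1 >= ... >= ηp

# e1 = |S1| / |S1 + r S2|, Equation (21).
e1 = np.exp(np.linalg.slogdet(S1)[1] - np.linalg.slogdet(S1 + r * S2)[1])

lower = 1.0 / np.prod(1.0 + r * eta / theta[-1])   # Equation (20), lower bound
upper = 1.0 / np.prod(1.0 + r * eta / theta[0])    # Equation (20), upper bound
```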

Corollary 3.1 Let $\beta^*(r)$ and $\hat\beta$ be given by Equations (7) and (3), let $e_1(\beta^*(r)\mid\hat\beta)$ be defined by Equation (14), and suppose that $X_1'Q_1X_1$ and $X_2'Q_2X_2$ commute. Then

$$\frac{\theta_p^p}{(\theta_1+r\eta_1)^p} \le e_1(\beta^*(r)\mid\hat\beta) \le \frac{\theta_1^p}{(\theta_p+r\eta_p)^p}, \tag{33}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$ and $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$.

Proof Since $X_1'Q_1X_1$ and $X_2'Q_2X_2$ commute, there exists an orthogonal matrix $G$ such that

$$G'(X_1'Q_1X_1)G = \operatorname{diag}(\theta_1,\dots,\theta_p) \equiv \Sigma, \tag{34}$$
$$G'(X_2'Q_2X_2)G = \operatorname{diag}(\eta_1,\dots,\eta_p) \equiv \Omega, \tag{35}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$ and $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$.

By the definition of $e_1(\beta^*(r)\mid\hat\beta)$, we have

$$e_1(\beta^*(r)\mid\hat\beta) = \frac{|\operatorname{Cov}(\beta^*(r))|}{|\operatorname{Cov}(\hat\beta)|} = \frac{|X_1'Q_1X_1|}{|X_1'Q_1X_1+rX_2'Q_2X_2|} = \frac{|G\Sigma G'|}{|G\Sigma G'+rG\Omega G'|} = \frac{\prod_{i=1}^p\theta_i}{\prod_{i=1}^p(\theta_i+r\eta_i)}. \tag{36}$$

Thus we have

$$\frac{\theta_p^p}{(\theta_1+r\eta_1)^p} \le e_1(\beta^*(r)\mid\hat\beta) \le \frac{\theta_1^p}{(\theta_p+r\eta_p)^p}. \tag{37}$$

 □
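In the commuting case the bound can be checked on simultaneously diagonal matrices. The sketch below uses diagonal stand-ins (illustrative only) whose sorted entries play the roles of $\theta_i$ and $\eta_i$:

```python
import numpy as np

rng = np.random.default_rng(5)
p, r = 4, 0.5

# Diagonal (hence commuting) stand-ins; entries sorted so position i carries θi, ηi.
theta = np.sort(rng.uniform(1.0, 5.0, p))[::-1]
eta = np.sort(rng.uniform(1.0, 5.0, p))[::-1]
S1, S2 = np.diag(theta), np.diag(eta)

# Equation (36): e1 reduces to a ratio of products of eigenvalues.
e1 = np.linalg.det(S1) / np.linalg.det(S1 + r * S2)
e1_closed = np.prod(theta) / np.prod(theta + r * eta)

lower = theta[-1] ** p / (theta[0] + r * eta[0]) ** p   # Equation (33), lower bound
upper = theta[0] ** p / (theta[-1] + r * eta[-1]) ** p  # Equation (33), upper bound
```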

In the same way, we can give the lower and upper bounds of $e_2(\beta^*(r)\mid\tilde\beta)$.

Theorem 3.2 Let $\beta^*(r)$ and $\tilde\beta$ be given by Equations (7) and (5), and let $e_2(\beta^*(r)\mid\tilde\beta)$ be defined by Equation (15). Then

$$\frac{1}{\prod_{i=1}^p (r+\eta_p^{-1}\theta_i)} \le e_2(\beta^*(r)\mid\tilde\beta) \le \frac{1}{\prod_{i=1}^p (r+\eta_1^{-1}\theta_i)}, \tag{38}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$ and $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$.

Corollary 3.2 Let $\beta^*(r)$ and $\tilde\beta$ be given by Equations (7) and (5), let $e_2(\beta^*(r)\mid\tilde\beta)$ be defined by Equation (15), and suppose that $X_1'Q_1X_1$ and $X_2'Q_2X_2$ commute. Then

$$\frac{\eta_p^p}{(\theta_1+r\eta_1)^p} \le e_2(\beta^*(r)\mid\tilde\beta) \le \frac{\eta_1^p}{(\theta_p+r\eta_p)^p}. \tag{39}$$

4 The lower and upper bounds of $e_3(\beta^*(r)\mid\hat\beta)$ and $e_4(\beta^*(r)\mid\tilde\beta)$

In this section we give the lower and upper bounds of $e_3(\beta^*(r)\mid\hat\beta)$ and $e_4(\beta^*(r)\mid\tilde\beta)$. First we give the lower and upper bounds of $e_3(\beta^*(r)\mid\hat\beta)$.

Theorem 4.1 Let $\beta^*(r)$ and $\hat\beta$ be given by Equations (7) and (3), and let $e_3(\beta^*(r)\mid\hat\beta)$ be defined by Equation (16). Then

$$\frac{\sum_{i=1}^p \theta_{p+1-i}^{-1}(1+r\zeta_i)^{-1}}{\sum_{i=1}^p \theta_i^{-1}} \le e_3(\beta^*(r)\mid\hat\beta) \le \frac{\sum_{i=1}^p \theta_{p+1-i}^{-1}(1+r\zeta_{p+1-i})^{-1}}{\sum_{i=1}^p \theta_i^{-1}}, \tag{40}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$ and $\zeta_1\ge\cdots\ge\zeta_p$ are the ordered eigenvalues of $(X_1'Q_1X_1)^{-1/2}(X_2'Q_2X_2)(X_1'Q_1X_1)^{-1/2}$.

Proof Since $X_1'Q_1X_1>0$, there exists an orthogonal matrix $K_1$ such that

$$X_1'Q_1X_1 = K_1\Sigma K_1', \qquad \Sigma = \operatorname{diag}(\theta_1,\dots,\theta_p), \tag{41}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$. As in Theorem 3.1, define

$$A = (X_1'Q_1X_1)^{-1/2}(X_2'Q_2X_2)(X_1'Q_1X_1)^{-1/2}.$$

Since $A>0$, there exists an orthogonal matrix $K_2$ such that

$$A = K_2\Delta K_2', \qquad \Delta = \operatorname{diag}(\zeta_1,\dots,\zeta_p), \tag{42}$$

where $\zeta_1\ge\cdots\ge\zeta_p$ are the ordered eigenvalues of $A$.

We can easily compute that

$$\operatorname{tr}\bigl(\operatorname{Cov}(\hat\beta)\bigr) = \sigma_1\operatorname{tr}\bigl((X_1'Q_1X_1)^{-1}\bigr) = \sigma_1\sum_{i=1}^p\theta_i^{-1} \tag{43}$$

and

$$\begin{aligned}
\operatorname{tr}\bigl(\operatorname{Cov}(\beta^*(r))\bigr) &= \sigma_1\operatorname{tr}\bigl((X_1'Q_1X_1+rX_2'Q_2X_2)^{-1}\bigr) \\
&= \sigma_1\operatorname{tr}\bigl((X_1'Q_1X_1)^{-1/2}(I_p+rA)^{-1}(X_1'Q_1X_1)^{-1/2}\bigr) \\
&= \sigma_1\operatorname{tr}\bigl((I_p+rA)^{-1}(X_1'Q_1X_1)^{-1}\bigr) \\
&= \sigma_1\operatorname{tr}\bigl((I_p+r\Delta)^{-1}K_2'K_1\Sigma^{-1}K_1'K_2\bigr) \\
&= \sigma_1\operatorname{tr}\bigl((I_p+r\Delta)^{-1}K'\Sigma^{-1}K\bigr),
\end{aligned} \tag{44}$$

where $K = K_1'K_2$ is an orthogonal matrix. Thus we have

$$e_3(\beta^*(r)\mid\hat\beta) = \frac{\operatorname{tr}(\operatorname{Cov}(\beta^*(r)))}{\operatorname{tr}(\operatorname{Cov}(\hat\beta))} = \frac{\operatorname{tr}\bigl((I_p+r\Delta)^{-1}K'\Sigma^{-1}K\bigr)}{\sum_{i=1}^p\theta_i^{-1}}. \tag{45}$$

Using Lemma 3.2, we have

$$\sum_{i=1}^p\theta_{p+1-i}^{-1}(1+r\zeta_i)^{-1} \le \operatorname{tr}\bigl((I_p+r\Delta)^{-1}K'\Sigma^{-1}K\bigr) \le \sum_{i=1}^p\theta_{p+1-i}^{-1}(1+r\zeta_{p+1-i})^{-1}. \tag{46}$$

Thus

$$\frac{\sum_{i=1}^p\theta_{p+1-i}^{-1}(1+r\zeta_i)^{-1}}{\sum_{i=1}^p\theta_i^{-1}} \le e_3(\beta^*(r)\mid\hat\beta) \le \frac{\sum_{i=1}^p\theta_{p+1-i}^{-1}(1+r\zeta_{p+1-i})^{-1}}{\sum_{i=1}^p\theta_i^{-1}}. \tag{47}$$

 □
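Theorem 4.1 can likewise be probed numerically with random positive definite stand-ins (illustrative only); the matrix square root is taken via the symmetric eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(6)
p, r = 4, 0.5

B1, B2 = rng.standard_normal((p, p)), rng.standard_normal((p, p))
S1 = B1 @ B1.T + p * np.eye(p)   # stand-in for X1'Q1X1
S2 = B2 @ B2.T + p * np.eye(p)   # stand-in for X2'Q2X2

theta = np.sort(np.linalg.eigvalsh(S1))[::-1]   # θ1 >= ... >= θp

# S1^{-1/2} via the eigendecomposition of S1.
w, V = np.linalg.eigh(S1)
S1_inv_half = V @ np.diag(w ** -0.5) @ V.T
zeta = np.sort(np.linalg.eigvalsh(S1_inv_half @ S2 @ S1_inv_half))[::-1]  # ζ1 >= ... >= ζp

# e3 = tr((S1 + r S2)^{-1}) / tr(S1^{-1}); the common factor σ1 cancels.
e3 = np.trace(np.linalg.inv(S1 + r * S2)) / np.trace(np.linalg.inv(S1))

inv_theta_rev = 1.0 / theta[::-1]               # θ_{p+1-i}^{-1}, i = 1, ..., p
lower = np.sum(inv_theta_rev / (1.0 + r * zeta)) / np.sum(1.0 / theta)
upper = np.sum(inv_theta_rev / (1.0 + r * zeta[::-1])) / np.sum(1.0 / theta)
```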

Corollary 4.1 Let $\beta^*(r)$ and $\hat\beta$ be given by Equations (7) and (3), let $e_3(\beta^*(r)\mid\hat\beta)$ be defined by Equation (16), and suppose that $X_1'Q_1X_1$ and $X_2'Q_2X_2$ commute. Then

$$\frac{\theta_p}{\theta_1+r\eta_1} \le e_3(\beta^*(r)\mid\hat\beta) \le \frac{\theta_1}{\theta_p+r\eta_p}, \tag{48}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$ and $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$.

Proof Since $X_1'Q_1X_1$ and $X_2'Q_2X_2$ commute, there exists an orthogonal matrix $G$ such that

$$G'(X_1'Q_1X_1)G = \operatorname{diag}(\theta_1,\dots,\theta_p) = \Sigma, \tag{49}$$
$$G'(X_2'Q_2X_2)G = \operatorname{diag}(\eta_1,\dots,\eta_p) = \Omega, \tag{50}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$ and $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$.

By the definition of $e_3(\beta^*(r)\mid\hat\beta)$, we have

$$e_3(\beta^*(r)\mid\hat\beta) = \frac{\operatorname{tr}(\operatorname{Cov}(\beta^*(r)))}{\operatorname{tr}(\operatorname{Cov}(\hat\beta))} = \frac{\sum_{i=1}^p(\theta_i+r\eta_i)^{-1}}{\sum_{i=1}^p\theta_i^{-1}}. \tag{51}$$

Thus we have

$$\frac{\theta_p}{\theta_1+r\eta_1} \le e_3(\beta^*(r)\mid\hat\beta) \le \frac{\theta_1}{\theta_p+r\eta_p}. \tag{52}$$

 □

Similarly, we can give the lower and upper bounds of $e_4(\beta^*(r)\mid\tilde\beta)$.

Theorem 4.2 Let $\beta^*(r)$ and $\tilde\beta$ be given by Equations (7) and (5), and let $e_4(\beta^*(r)\mid\tilde\beta)$ be defined by Equation (17). Then

$$\frac{\sum_{i=1}^p \eta_{p+1-i}^{-1}(r+\iota_i)^{-1}}{\sum_{i=1}^p \eta_i^{-1}} \le e_4(\beta^*(r)\mid\tilde\beta) \le \frac{\sum_{i=1}^p \eta_{p+1-i}^{-1}(r+\iota_{p+1-i})^{-1}}{\sum_{i=1}^p \eta_i^{-1}}, \tag{53}$$

where $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$ and $\iota_1\ge\cdots\ge\iota_p$ are the ordered eigenvalues of $(X_2'Q_2X_2)^{-1/2}(X_1'Q_1X_1)(X_2'Q_2X_2)^{-1/2}$.

Corollary 4.2 Let $\beta^*(r)$ and $\tilde\beta$ be given by Equations (7) and (5), let $e_4(\beta^*(r)\mid\tilde\beta)$ be defined by Equation (17), and suppose that $X_1'Q_1X_1$ and $X_2'Q_2X_2$ commute. Then

$$\frac{\eta_p}{\theta_1+r\eta_1} \le e_4(\beta^*(r)\mid\tilde\beta) \le \frac{\eta_1}{\theta_p+r\eta_p}, \tag{54}$$

where $\theta_1\ge\cdots\ge\theta_p$ are the ordered eigenvalues of $X_1'Q_1X_1$ and $\eta_1\ge\cdots\ge\eta_p$ are the ordered eigenvalues of $X_2'Q_2X_2$.

5 Concluding remarks

In this article we have introduced four relative efficiencies for a system of two linear regression equations with identical parameter vectors, and we have given lower and upper bounds for each of them.

References

  1. Liu AY: Estimation of the parameters in two linear models with only some of the parameter vectors identical. Stat. Probab. Lett. 1996, 29: 369–375. 10.1016/0167-7152(95)00193-X


  2. Ma TF, Wang SG: Estimation of the parameters in a two linear regression equations system with identical parameter vectors. Stat. Probab. Lett. 2009, 79: 1135–1140. 10.1016/j.spl.2008.10.023


  3. Yang H: Extensions of the Kantorovich inequality and the error ratio efficiency of the mean square. Math. Appl. 1988, 4: 85–90.


  4. Wang SG, Ip WC: A matrix version of the Wielandt inequality and its applications to statistics. Linear Algebra Appl. 1999, 296: 171–181. 10.1016/S0024-3795(99)00117-2


  5. Liu SZ: Efficiency comparisons between the OLSE and the BLUE in a singular linear model. J. Stat. Plan. Inference 2000, 84: 191–200. 10.1016/S0378-3758(99)00149-4


  6. Liu SZ, Lu CY, Puntanen S: Matrix trace Wielandt inequalities with statistical applications. J. Stat. Plan. Inference 2009, 139: 2254–2260. 10.1016/j.jspi.2008.10.026


  7. Yang H, Wang LT: An alternative form of the Watson efficiency. J. Stat. Plan. Inference 2009, 139: 2767–2774. 10.1016/j.jspi.2009.01.002


  8. Wang LT, Yang H: Several matrix Euclidean norm inequalities involving Kantorovich inequality. J. Inequal. Appl. 2009., 2009: Article ID 291984


  9. Wang LT, Yang H: Matrix Euclidean norm Wielandt inequalities and their applications to statistics. Stat. Pap. 2012, 53: 521–530. 10.1007/s00362-010-0357-y


  10. Yang H, Wu JB: Some matrix norm Kantorovich inequalities and their applications. Commun. Stat., Theory Methods 2011, 22: 4078–4085.


  11. Wang SG, Jia ZZ: Matrix Inequality. Anhui Education Press, Hefei; 1994.


  12. Yuan JC: Relative efficiencies of mixed estimator with respect to LS estimator in linear regression model. J. China Univ. Sci. Technol. 2000, 30: 285–291.



Acknowledgements

This work was supported by the Scientific Research Foundation of Chongqing University of Arts and Sciences (Grant No. R2013SC12), Program for Innovation Team Building at Institutions of Higher Education in Chongqing (Grant No. KJTD201321), and the National Natural Science Foundation of China (Grant Nos. 71271227, 11201505).

Author information

Correspondence to Jibo Wu.

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Wu, J. Some properties of relative efficiency of estimators in a two linear regression equations system with identical parameter vectors. J Inequal Appl 2014, 279 (2014). https://doi.org/10.1186/1029-242X-2014-279


Keywords

  • best linear unbiased estimator
  • common parameter
  • relative efficiency