Open Access

Admissible Estimators in the General Multivariate Linear Model with Respect to Inequality Restricted Parameter Set

Journal of Inequalities and Applications 2009, 2009:718927

DOI: 10.1155/2009/718927

Received: 28 May 2009

Accepted: 11 August 2009

Published: 27 August 2009

Abstract

By using methods of linear algebra and matrix inequality theory, we characterize the admissible estimators in the general multivariate linear model with respect to an inequality restricted parameter set. In the classes of homogeneous and general linear estimators, we establish necessary and sufficient conditions for estimators of the regression coefficient function to be admissible.

1. Introduction

Throughout this paper, , and denote the set of real matrices, the subset of consisting of symmetric matrices, and the subset of consisting of nonnegative definite matrices, respectively. The symbols , and stand for the transpose, the range, Moore-Penrose inverse, generalized inverse, and trace of , respectively. For any , means .
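The matrix operations this notation refers to (transpose, range, Moore-Penrose inverse, generalized inverse, trace, and the Löwner ordering used throughout) can be sketched numerically. The following NumPy snippet is an illustration added by the editor, not part of the paper; the matrix `A` is invented for the example.

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # a 3x2 real matrix

At = A.T                       # transpose A'
Ap = np.linalg.pinv(A)         # Moore-Penrose inverse A^+
r = np.linalg.matrix_rank(A)   # dimension of the range (column space) of A

# The four Penrose conditions characterize A^+; a generalized inverse A^-
# only needs the first one: A A^- A = A. The Moore-Penrose inverse
# satisfies this and more:
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)

# The trace is defined for square matrices, e.g. tr(A'A):
tr = np.trace(At @ A)

# Löwner ordering: B >= C means B - C is nonnegative definite,
# i.e. every eigenvalue of the symmetric difference is >= 0.
B = At @ A
C = np.eye(2)
loewner_ge = bool(np.all(np.linalg.eigvalsh(B - C) >= -1e-12))
print(r, tr, loewner_ge)
```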

Consider the general multivariate linear model with respect to inequality restricted parameter set:

(1.1)

where is an observable random matrix; , , and are known matrices; , ( ) are unknown matrices; and is the error matrix. denotes the vector formed by stacking the columns of , and denotes the Kronecker product.
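The vec operator and Kronecker product mentioned above satisfy the standard identity vec(AXB) = (B' ⊗ A) vec(X), which is what links the multivariate model to its vectorized form. The following NumPy check is an editorial illustration (the matrices are random, not from the paper); note that `order="F"` gives the column-stacking convention that defines vec.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 5))

def vec(M):
    # Stack the columns of M into one long vector (column-major order).
    return M.flatten(order="F")

# vec(A X B) = (B' kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
print("identity holds")
```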

Let . For the linear function , we use the following matrix loss function:

(1.2)

where is a linear estimator of . The risk function is the expected value of loss function:

(1.3)

Suppose and are two estimators of . If for any we have

(1.4)

and there exists , such that , then is said to be better than . If no estimator in the set is better than , where the parameters , then is called an admissible estimator of in the set , denoted by .
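As a concrete (entirely hypothetical) illustration of this comparison: in a toy linear model with identity error covariance, the matrix risk of a linear estimator LY of Kβ is (LX − K)ββ'(LX − K)' + σ²LL', and two estimators can be compared at a fixed parameter point via the Löwner order. The model, numbers, and σ² below are invented for the example and are not the paper's setting.

```python
import numpy as np

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
beta = np.array([[1.0], [-2.0]])   # one fixed parameter point
K = np.eye(2)                      # target: K beta = beta itself
sigma2 = 1.0

def risk(L):
    # Exact matrix risk of the linear estimator LY when cov(eps) = sigma2*I:
    # R(LY, K beta) = (LX - K) beta beta' (LX - K)' + sigma2 * L L'
    bias = L @ X - K
    return bias @ beta @ beta.T @ bias.T + sigma2 * (L @ L.T)

ols = np.linalg.pinv(X)            # least-squares coefficient matrix
shrunk = 0.9 * ols                 # a ridge-type shrinkage of it

diff = risk(ols) - risk(shrunk)
# Löwner comparison: the shrunken estimator dominates at this beta
# iff diff is nonnegative definite.
print(bool(np.all(np.linalg.eigvalsh(diff) >= -1e-12)))
```

Dominance at one parameter point, as checked here, is only necessary, not sufficient, for one estimator to be better in the sense above: inequality (1.4) must hold at every parameter value in the restricted set.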

In the case of , model (1.1) degenerates to the general multivariate linear model without restrictions. Under the quadratic loss function, many articles have discussed the admissibility of linear estimators, for example Cohen [1], Rao [2], and LaMotte [3]. Under the matrix loss function, Zhu and Lu [4] and Baksalary and Markiewicz [5] studied the admissibility of linear estimators when , respectively. Deng et al. [6] discussed admissibility under the matrix loss in the multivariate model, and Markiewicz [7] discussed admissibility in the general multivariate linear model. Marquardt [8] and Perlman [9] pointed out that the least squares estimator is no longer admissible when the parameters are restricted. Further, Groß and Markiewicz [10] pointed out that the admissible linear estimator has the form of a ridge estimator when the parameters are unrestricted. It is therefore useful and important to discuss the admissibility of linear estimators when the parameters are subject to restrictions.

Zhu and Zhang [11], Lu [12], and Deng and Chen [13] studied the admissibility of linear estimators under the quadratic loss and the matrix loss when . Qin et al. [14] studied the admissibility of estimators of an estimable function under the loss function in the multivariate linear model with respect to a restricted parameter set when . In their setting, whether one estimator is better than another does not depend on the regression parameters, so it is easy to generalize the conclusions from the univariate linear model to the multivariate linear model. Under the matrix loss (1.2), however, the situation is more complicated: whether one estimator is better than another does depend on the regression parameters.

In this paper, using methods of linear algebra and matrix theory, we discuss the admissibility of linear estimators in model (1.1) under the matrix loss (1.2). We prove that, in the class of homogeneous linear estimators, admissibility of estimators of an estimable function under the univariate linear model and under the multivariate linear model are equivalent, and we obtain necessary and sufficient conditions for estimators in the general multivariate linear model with respect to a restricted parameter set to be admissible, whether or not the parametric function is estimable. These results enrich the theory of admissibility in the multivariate linear model.

2. Main Results

Let denote the class of homogeneous linear estimators, and let denote the class of general linear estimators.

Lemma 2.1.

Under model (1.1) with the loss function (1.2), suppose is an estimator of . Then one has
(2.1)
The equality holds if and only if
(2.2)

where .

Proof.

Since
(2.3)
it is easy to verify that (2.1) holds, and equality holds if and only if
(2.4)
Expanding it, we have
(2.5)

Thus , that is .

Lemma 2.2.

Under model (1.1) with the loss function (1.2), if , suppose and are estimators of ; then is better than if and only if
(2.6)
(2.7)

and the two equalities above cannot hold simultaneously.

Proof.

Since , , , (2.3) implies sufficiency. Conversely, suppose is better than ; then for any , we have
(2.8)
and there exists some for which equality in (2.8) fails. Taking in (2.8), (2.6) follows. Let , where is the identity matrix; then for any , , and by (2.8) we have
(2.9)

Therefore, (2.7) holds. It is obvious that the two equalities in (2.6) and (2.7) cannot hold simultaneously.

Consider univariate linear model with respect to restricted parameter set:

(2.10)
and the loss function
(2.11)

where , , and are as defined in (1.1) and (1.2), and are unknown parameters. Set . If is an admissible estimator of , we denote it by .

Similarly to Lemma 2.2, we have the following lemma.

Lemma 2.3.

Under model (2.10) with the loss function (2.11), suppose and are estimators of , then is better than if and only if
(2.12)
(2.13)

and the two equalities above cannot hold simultaneously.

Theorem 2.4.

Consider the model (1.1) with the loss function (1.2), if and only if in model (2.10) with the loss function (2.11).

Proof.

From Lemmas 2.2 and 2.3, we need only to prove the equivalence of (2.7) and (2.13).

Suppose (2.7) holds. Taking , , and substituting into (2.7), (2.13) follows.

Conversely, suppose (2.13) holds. Letting , we have

(2.14)

The claim follows.

Remark 2.5.

By this theorem, results for the univariate linear model carry over directly to the multivariate linear model in the class of homogeneous linear estimators.

Theorem 2.6.

Consider the model (1.1) with the loss function (1.2), if is estimable, then if and only if:

(1) ,

(2) if there exists , such that
(2.15)

then , , where .

Proof.

By the corresponding theorem of Deng and Chen [13], under model (2.10) with the loss function (2.11), if is estimable, then if and only if conditions (1) and (2) of Theorem 2.6 are satisfied. Theorem 2.6 now follows from Theorem 2.4.

Lemma 2.7.

Consider the model (1.1) with the loss function (1.2), suppose is an estimator of . One has
(2.16)

and the equality holds if and only if .

Proof.

The proof follows from the following equalities:
(2.17)

Lemma 2.8.

Assume , one has

(1) if and , then there exists , for every , and .

(2) if and only if for any vector , implies .

Proof.
(1) If , the claim is trivial. If , , where is an orthogonal matrix, , . From , we have ; noticing that , we get , where . Clearly, there exists such that . Let ; then , and for every , , thus and .

(2) The claim is easy to verify.

Theorem 2.9.

Consider the model (1.1) with the loss function (1.2), if is estimable, then if and only if:

(1) ,

(2) if there exists such that
(2.18)

then , and , where .

Proof.

If , by (2.17) we obtain . Then implies . The claim is true by Theorem 2.6. Now we assume .

Necessity

Assume . By Lemma 2.7, (1) holds. We now prove (2). Denote , . Since , rewrite (2.18) as follows:
(2.19)
If there exists such that (2.19) holds, then for sufficiently small , take . Since
(2.20)
Thus
(2.21)
(2.22)
(2.23)
(2.24)
In the above, is sufficiently small and , so (2.23) follows; moreover , , , so (2.24) follows:
(2.25)
For any compatible vector , assume
(2.26)
By (2.19) we obtain , that is, . Substituting this into (2.26) gives , , thus
(2.27)
From Lemma 2.8, we have
(2.28)

Therefore, there exists such that, for , the right-hand side of (2.25) is nonnegative definite with rank . If is small enough, then for every we have , and equality cannot always hold if (2) fails. This contradicts .

Sufficiency

Assume (1) and (2) hold. Since , Theorem 2.6 gives . Suppose some estimator is better than ; then for every ,
(2.29)
Note that for any , if , then . Replacing and in (2.29) with and , respectively, dividing both sides by , and letting , we get
(2.30)
Since , we have and (otherwise would be better than ). Substituting these into (2.29), for every ,
(2.31)

Thus and ; implies , so equality in (2.29) always holds. This contradicts the assumption that is better than .

Theorem 2.10.

Under model (1.1) and the loss function (1.2), if is estimable, then if and only if .

Proof.

Denote , ; then model (1.1) is transformed into
(2.32)
Since
(2.33)

it follows that , which, combined with Theorem 2.4 and the fact that "if , then ", yields .

Corollary 2.11.

Under model (1.1) and the loss function (1.2), if is estimable, then if and only if .

Lemma 2.12.

Consider model (1.1) with the loss function (1.2) and suppose . If , then
(2.34)

Proof.

(2.35)

where refers to the orthogonal projection onto .
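The orthogonal projection onto the range of a matrix, used in the proof above, can be formed from the Moore-Penrose inverse as P = X X⁺. The following NumPy check is an editorial illustration with an invented matrix, verifying the defining properties of such a projector:

```python
import numpy as np

X = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 3.0]])
P = X @ np.linalg.pinv(X)   # orthogonal projector onto range(X)

assert np.allclose(P, P.T)     # symmetric
assert np.allclose(P @ P, P)   # idempotent
assert np.allclose(P @ X, X)   # acts as the identity on range(X)
print("projector verified")
```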

Lemma 2.13.

Suppose and are and real matrices, respectively. Then there exists a matrix such that if and only if and .

Proof.

For the proof of sufficiency, we need only show that there exists a such that is not a skew-symmetric matrix.

Since , .

(1) If there is such that , take ; then
(2.36)

where is the column vector whose only nonzero entry is a 1 in the th position.

(2) If there does not exist such that , then there must exist such that and ; take , then
(2.37)

That is, .

The proof is complete.

Theorem 2.14.

Consider the model (1.1) with the loss function (1.2), if is inestimable, then if and only if .

Proof.

Lemma 2.1 implies necessity. Conversely, assume there exists such that, for any , we have
(2.38)
Since
(2.39)
where , thus
(2.40)
where is a known function. If there exists such that
(2.41)
note that is inestimable, so ; by Lemma 2.13, there exists such that
(2.42)

Take , ; since , we have .

According to (2.40), we have for any real ,

(2.43)
This is a contradiction; therefore . Since , by Lemma 2.12, we obtain
(2.44)
Taking in (2.38), we have
(2.45)

Thus , . There is no estimator that is better than in .

Similarly to Theorem 2.14, we have the following theorem.

Theorem 2.15.

Under model (1.1) and the loss function (1.2), if is inestimable, then if and only if .

Remark 2.16.

This theorem indicates that if is inestimable, then the admissibility of does not depend on the choice of , since .

Declarations

Acknowledgments

The authors would like to thank the Editor Dr. Kunquan Lan and the anonymous referees, whose work and comments made the paper more readable. The research was supported by the National Natural Science Foundation of China (60736047, 60772036, 10671007) and the Foundation of BJTU (2006XM037).

Authors’ Affiliations

(1)
School of Science, Beijing Jiaotong University
(2)
School of Information, Renmin University of China
(3)
Department of Statistics, Florida State University

References

  1. Cohen A: All admissible linear estimates of the mean vector. Annals of Mathematical Statistics 1966, 37: 458–463. doi:10.1214/aoms/1177699528
  2. Rao CR: Estimation of parameters in a linear model. The Annals of Statistics 1976, 4(6): 1023–1037. doi:10.1214/aos/1176343639
  3. LaMotte LR: Admissibility in linear estimation. The Annals of Statistics 1982, 10(1): 245–255. doi:10.1214/aos/1176345707
  4. Zhu XH, Lu CY: Admissibility of linear estimates of parameters in a linear model. Chinese Annals of Mathematics 1987, 8(2): 220–226.
  5. Baksalary JK, Markiewicz A: A matrix inequality and admissibility of linear estimators with respect to the mean square error matrix criterion. Linear Algebra and Its Applications 1989, 112: 9–18. doi:10.1016/0024-3795(89)90584-3
  6. Deng QR, Chen JB, Chen XZ: All admissible linear estimators of functions of the mean matrix in multivariate linear models. Acta Mathematica Scientia 1998, 18(supplement): 16–24.
  7. Markiewicz A: Estimation and experiments comparison with respect to the matrix risk. Linear Algebra and Its Applications 2002, 354: 213–222. doi:10.1016/S0024-3795(01)00375-5
  8. Marquardt DW: Generalized inverses, ridge regression, biased linear estimation and nonlinear estimation. Technometrics 1970, 12: 591–612. doi:10.2307/1267205
  9. Perlman MD: Reduced mean square error estimation for several parameters. Sankhyā B 1972, 34: 89–92.
  10. Groß J, Markiewicz A: Characterizations of admissible linear estimators in the linear model. Linear Algebra and Its Applications 2004, 388: 239–248.
  11. Zhu XH, Zhang SL: Admissible linear estimators in linear models with constraints. Kexue Tongbao 1989, 34(11): 805–808.
  12. Lu CY: Admissibility of inhomogeneous linear estimators in linear models with respect to incomplete ellipsoidal restrictions. Communications in Statistics A 1995, 24(7): 1737–1742. doi:10.1080/03610929508831582
  13. Deng QR, Chen JB: Admissibility of general linear estimators for incomplete restricted elliptic models under matrix loss. Chinese Annals of Mathematics 1997, 18(1): 33–40.
  14. Qin H, Wu M, Peng JH: Universal admissibility of linear estimators in multivariate linear models with respect to a restricted parameter set. Acta Mathematica Scientia 2002, 22(3): 427–432.

Copyright

© Shangli Zhang et al. 2009

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.