# Admissible Estimators in the General Multivariate Linear Model with Respect to Inequality Restricted Parameter Set

Shangli Zhang^{1}, Gang Liu^{2}, and Wenhao Gui^{3} (Email author)

**2009**:718927

https://doi.org/10.1155/2009/718927

© Shangli Zhang et al. 2009

**Received: **28 May 2009

**Accepted: **11 August 2009

**Published: **27 August 2009

## Abstract

Using the methods of linear algebra and matrix inequality theory, we characterize the admissible estimators in the general multivariate linear model with respect to an inequality restricted parameter set. In the classes of homogeneous and general linear estimators, we establish necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible.


## 1. Introduction

Throughout this paper, , and denote the set of real matrices, the subset of consisting of symmetric matrices, and the subset of consisting of nonnegative definite matrices, respectively. The symbols , and stand for the transpose, the range, the Moore-Penrose inverse, a generalized inverse, and the trace of , respectively. For any , means .
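To make the notational distinction concrete: a generalized inverse of a matrix need only satisfy one Penrose condition, while the Moore-Penrose inverse satisfies all four. The following Python sketch (illustrative dimensions and names, not from the paper) checks the four conditions numerically with NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
# A rank-deficient 4x5 matrix of rank 2 (product of random factors).
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))
Ap = np.linalg.pinv(A)                 # Moore-Penrose inverse

# A generalized inverse G only needs A G A = A; the Moore-Penrose
# inverse additionally satisfies the three remaining Penrose conditions.
assert np.allclose(A @ Ap @ A, A)          # A G A = A
assert np.allclose(Ap @ A @ Ap, Ap)        # G A G = G
assert np.allclose((A @ Ap).T, A @ Ap)     # A G symmetric
assert np.allclose((Ap @ A).T, Ap @ A)     # G A symmetric
print(np.linalg.matrix_rank(A))            # 2
```

Any matrix satisfying only the first condition would already serve as a generalized inverse in the rank arguments below.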

Consider the general multivariate linear model with respect to inequality restricted parameter set:

where is an observable random matrix; , , and are known matrices; ( ) are unknown matrices; and is the error matrix. denotes the vector formed by stacking the columns of , and denotes the Kronecker product.
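Since the model is written through the vec operator and the Kronecker product, it may help to recall the standard identity vec(XΘ) = (I ⊗ X) vec(Θ), with vec stacking columns. A minimal numerical check in Python (the dimensions and names here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 5, 3, 2                       # illustrative dimensions
X = rng.standard_normal((n, m))         # known design matrix
Theta = rng.standard_normal((m, p))     # unknown coefficient matrix
Y = X @ Theta                           # noiseless part of the model

vec = lambda A: A.reshape(-1, order="F")    # column-stacking vec

# vec(X Theta) = (I_p kron X) vec(Theta)
lhs = vec(Y)
rhs = np.kron(np.eye(p), X) @ vec(Theta)
print(np.allclose(lhs, rhs))            # True
```

This identity is what lets the multivariate model be rewritten as a univariate linear model in vec(Y), which is exactly the reduction exploited in Section 2.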

Let . For the linear function , we use the following matrix loss function:

where is a linear estimator of . The risk function is the expected value of loss function:

Suppose and are two estimators of . If for any , we have

and there exists such that , then is said to be better than . If no estimator in the set is better than , where the parameters , then is called an admissible estimator of in the set , and we denote this by .
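The "better than" relation compares risk matrices in the Loewner (nonnegative definite) order. For a homogeneous linear estimator in a univariate model with identity error covariance, the risk matrix has a closed form, which a few lines of Python can evaluate (all names, dimensions, and the shrinkage competitor below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 3
X = rng.standard_normal((n, m))         # illustrative design matrix
theta = np.array([1.0, -2.0, 0.5])      # one fixed parameter point

def risk(A):
    """Risk matrix E[(Ay - theta)(Ay - theta)'] for y = X theta + e, e ~ N(0, I)."""
    bias = (A @ X - np.eye(m)) @ theta
    return A @ A.T + np.outer(bias, bias)

A_ls = np.linalg.solve(X.T @ X, X.T)    # least squares; unbiased since A_ls X = I
A_sh = 0.9 * A_ls                       # a shrinkage competitor

# A_sh is "better" at this theta iff risk(A_ls) - risk(A_sh) is nonnegative
# definite; admissibility requires that no competitor win at every theta.
diff = risk(A_ls) - risk(A_sh)
print(np.linalg.eigvalsh(diff).min())
```

Note that the sign of the smallest eigenvalue depends on the chosen parameter point, which is precisely why the restricted parameter set matters in the admissibility analysis.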

In the case of , model (1.1) degenerates to the general multivariate linear model without restrictions. Under the quadratic loss function, many articles have discussed the admissibility of linear estimators, such as Cohen [1], Rao [2], and LaMotte [3]. Under the matrix loss function, Zhu and Lu [4] and Baksalary and Markiewicz [5] studied the admissibility of linear estimators when , respectively. Deng et al. [6] discussed admissibility under the matrix loss in the multivariate model, and Markiewicz [7] discussed admissibility in the general multivariate linear model. Marquardt [8] and Perlman [9] pointed out that the least squares estimator is no longer admissible when the parameters are restricted. Further, Groß and Markiewicz [10] pointed out that an admissible linear estimator has the form of a ridge estimator when the parameters are unrestricted. It is therefore useful and important to discuss the admissibility of linear estimators when the parameters are subject to restrictions.
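The ridge-type form noted by Groß and Markiewicz [10] can be illustrated numerically. The sketch below (an assumed setup, not the paper's) computes a ridge estimator and verifies the standard fact that it shrinks the least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 30, 4
X = rng.standard_normal((n, m))
y = X @ np.ones(m) + rng.standard_normal(n)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 recovers least squares."""
    m = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(m), X.T @ y)

beta_ls = ridge(X, y, 0.0)
beta_r = ridge(X, y, 5.0)
# The ridge solution norm is strictly decreasing in k (for nonzero beta_ls).
print(np.linalg.norm(beta_r) < np.linalg.norm(beta_ls))   # True
```

The shrinkage factor plays the role of the restriction-dependent component in the characterizations of Section 2.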

Zhu and Zhang [11], Lu [12], and Deng and Chen [13] studied the admissibility of linear estimators under the quadratic loss and the matrix loss when . Qin et al. [14] studied the admissibility of estimators of an estimable function under the loss function in the multivariate linear model with respect to a restricted parameter set when . In their setting, whether one estimator is better than another does not depend on the regression parameters, so it is easy to generalize the conclusions from the univariate linear model to the multivariate linear model. Under the matrix loss (1.2), however, the situation is more complicated: whether one estimator is better than another does depend on the regression parameters.

In this paper, using the methods of linear algebra and matrix theory, we discuss the admissibility of linear estimators in model (1.1) under the matrix loss (1.2). We prove that, in the class of homogeneous linear estimators, the admissibility of estimators of an estimable function under the univariate linear model and under the multivariate linear model are equivalent. We also obtain necessary and sufficient conditions for estimators in the general multivariate linear model with respect to a restricted parameter set to be admissible, whether or not the parametric function is estimable, which enriches the theory of admissibility in the multivariate linear model.

## 2. Main Results

Let denote the class of homogeneous linear estimators, and let denote the class of general linear estimators.

Lemma 2.1.

Proof.

Lemma 2.2.

and the two equalities above cannot hold simultaneously.

Proof.

Therefore, (2.7) holds. It is obvious that the two equalities in (2.6) and (2.7) cannot hold simultaneously.

Consider univariate linear model with respect to restricted parameter set:

where , , and are as defined in (1.1) and (1.2), and are unknown parameters. Set . If is an admissible estimator of , we denote it by .

Similarly to Lemma 2.2, we have the following lemma.

Lemma 2.3.

and the two equalities above cannot hold simultaneously.

Theorem 2.4.

Consider model (1.1) with the loss function (1.2). Then if and only if in model (2.10) with the loss function (2.11).

Proof.

From Lemmas 2.2 and 2.3, we need only to prove the equivalence of (2.7) and (2.13).

Suppose (2.7) is true. Take , , and substitute into (2.7); then (2.13) follows.

Conversely, suppose (2.13) is true. Letting , we have

The claim follows.

Remark 2.5.

From this theorem, we can easily generalize results under the univariate linear model to the multivariate linear model in the class of homogeneous linear estimators.

Theorem 2.6.

Consider model (1.1) with the loss function (1.2). If is estimable, then if and only if:

Proof.

By the corresponding theorem in Deng and Chen [13], under model (2.10) with the loss function (2.11), if is estimable, then if and only if conditions (1) and (2) of Theorem 2.6 are satisfied. Theorem 2.6 now follows from Theorem 2.4.

Lemma 2.7.

and the equality holds if and only if .

Proof.

Lemma 2.8.

(1) If and , then there exists such that, for every , and .

(2) if and only if for any vector , implies .

Theorem 2.9.

Consider model (1.1) with the loss function (1.2). If is estimable, then if and only if:

Proof.

If , by (2.17) we obtain . Then implies . The claim is true by Theorem 2.6. Now we assume .

Necessity

Therefore, there exists such that, for , the right-hand side of (2.25) is nonnegative definite and its rank is . If is small enough, then for every we have , and the equality cannot always hold if (2) fails. This contradicts .

Sufficiency

Thus and . Since implies , the equality in (2.29) always holds, which contradicts the assumption that is better than .

Theorem 2.10.

Under model (1.1) and the loss function (1.2), if is estimable, then if and only if .

Proof.

then (2.33) implies that , which, combined with Theorem 2.4 and the fact that "if , then ", yields .

Corollary 2.11.

Under model (1.1) and the loss function (1.2), if is estimable, then if and only if .

Lemma 2.12.

Proof.

where refers to the orthogonal projection onto .
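The orthogonal projector onto the range of a matrix, used in the proof above, can be computed from a Moore-Penrose inverse as M M⁺. A small sanity check in Python (illustrative names and dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 3))
P = M @ np.linalg.pinv(M)               # orthogonal projector onto range(M)

assert np.allclose(P @ P, P)            # idempotent
assert np.allclose(P, P.T)              # symmetric
assert np.allclose(P @ M, M)            # fixes every column of M
```

Symmetry plus idempotence is exactly what characterizes an orthogonal (rather than oblique) projection.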

Lemma 2.13.

Suppose and are and real matrices, respectively, there exists a matrix such that if and only if and .

Proof.

For the proof of sufficiency, we need only prove that there exists such that is not a skew-symmetric matrix.

where is the column vector whose only nonzero entry is a 1 in the th position.

The proof is complete.

Theorem 2.14.

Consider model (1.1) with the loss function (1.2). If is inestimable, then if and only if .

Proof.

According to (2.40), we have for any real ,

Thus , . There is no estimator in that is better than .

Similarly to Theorem 2.14, we have the following theorem.

Theorem 2.15.

Under model (1.1) and the loss function (1.2), if is inestimable, then if and only if .

Remark 2.16.

This theorem indicates that if is inestimable, then the admissibility of does not depend on the choice of , owing to .

## Declarations

### Acknowledgments

The authors would like to thank the Editor, Dr. Kunquan Lan, and the anonymous referees, whose work and comments made the paper more readable. This research was supported by the National Natural Science Foundation of China (60736047, 60772036, 10671007) and the Foundation of BJTU (2006XM037).


## References

- Cohen A: **All admissible linear estimates of the mean vector.** *Annals of Mathematical Statistics* 1966, **37:** 458–463. doi:10.1214/aoms/1177699528
- Rao CR: **Estimation of parameters in a linear model.** *The Annals of Statistics* 1976, **4**(6): 1023–1037. doi:10.1214/aos/1176343639
- LaMotte LR: **Admissibility in linear estimation.** *The Annals of Statistics* 1982, **10**(1): 245–255. doi:10.1214/aos/1176345707
- Zhu XH, Lu CY: **Admissibility of linear estimates of parameters in a linear model.** *Chinese Annals of Mathematics* 1987, **8**(2): 220–226.
- Baksalary JK, Markiewicz A: **A matrix inequality and admissibility of linear estimators with respect to the mean square error matrix criterion.** *Linear Algebra and Its Applications* 1989, **112:** 9–18. doi:10.1016/0024-3795(89)90584-3
- Deng QR, Chen JB, Chen XZ: **All admissible linear estimators of functions of the mean matrix in multivariate linear models.** *Acta Mathematica Scientia* 1998, **18**(supplement): 16–24.
- Markiewicz A: **Estimation and experiments comparison with respect to the matrix risk.** *Linear Algebra and Its Applications* 2002, **354:** 213–222. doi:10.1016/S0024-3795(01)00375-5
- Marquardt DW: **Generalized inverses, ridge regression, biased linear estimation and nonlinear estimation.** *Technometrics* 1970, **12:** 591–612. doi:10.2307/1267205
- Perlman MD: **Reduced mean square error estimation for several parameters.** *Sankhyā B* 1972, **34:** 89–92.
- Groß J, Markiewicz A: **Characterizations of admissible linear estimators in the linear model.** *Linear Algebra and Its Applications* 2004, **388:** 239–248.
- Zhu XH, Zhang SL: **Admissible linear estimators in linear models with constraints.** *Kexue Tongbao* 1989, **34**(11): 805–808.
- Lu CY: **Admissibility of inhomogeneous linear estimators in linear models with respect to incomplete ellipsoidal restrictions.** *Communications in Statistics. A* 1995, **24**(7): 1737–1742. doi:10.1080/03610929508831582
- Deng QR, Chen JB: **Admissibility of general linear estimators for incomplete restricted elliptic models under matrix loss.** *Chinese Annals of Mathematics* 1997, **18**(1): 33–40.
- Qin H, Wu M, Peng JH: **Universal admissibility of linear estimators in multivariate linear models with respect to a restricted parameter set.** *Acta Mathematica Scientia* 2002, **22**(3): 427–432.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.