Admissibility in general linear model with respect to an inequality constraint under balanced loss
Journal of Inequalities and Applications volume 2014, Article number: 70 (2014)
Since Zellner (Bayesian and Non-Bayesian Estimation Using Balanced Loss Functions, pp. 377-390, 1994) proposed the balanced loss function, it has attracted considerable research interest. In this paper, under a generalized balanced loss function, we investigate the admissibility of linear estimators of the regression coefficient in the general Gauss-Markov model (GGM) with respect to an inequality constraint. Necessary and sufficient conditions for a linear estimator of the regression coefficient function to be admissible are established in the classes of homogeneous and inhomogeneous linear estimators, respectively.
Throughout this paper, the symbols $A'$, $\mathscr{M}(A)$, $A^{+}$, $A^{-}$, $\operatorname{rk}(A)$, and $\operatorname{tr}(A)$ stand for the transpose, the range (column space), the Moore-Penrose inverse, a generalized inverse, the rank, and the trace of a matrix $A$, respectively.
Consider the following Gauss-Markov model:

$$y = X\beta + \varepsilon, \qquad E(\varepsilon) = 0, \qquad \operatorname{Cov}(\varepsilon) = \sigma^{2} I_{n}, \qquad (1.1)$$

where $y$ is an $n \times 1$ observable random vector, $X$ is an $n \times p$ known design matrix, $\varepsilon$ is a random error vector, and $\beta$ and $\sigma^{2}$ are unknown parameters.
Since $\operatorname{rk}(X) = p$, $\beta$ in model (1.1) is estimable, i.e., there exists a matrix $A$ such that $AX = I_{p}$. The classic estimator of the regression coefficient is the least squares estimator $\hat{\beta}_{LS} = (X'X)^{-1}X'y$, which is the value of $d$ that minimizes the following expression:

$$(y - Xd)'(y - Xd). \qquad (1.2)$$
It is also the best linear unbiased estimator (BLUE) of $\beta$, and the minimized criterion (1.2) indicates the goodness of fit of the model. For any estimator of $\beta$, the precision of the estimator is widely used to determine whether it is good or not. That is, under the quadratic loss function

$$(d - \beta)'(d - \beta), \qquad (1.3)$$
we select the estimate that achieves the minimum risk. Zellner [1] embraced the two criteria above and proposed the concept of balanced loss. The balanced loss function is defined as

$$L(d, \beta) = \omega\,(y - Xd)'(y - Xd) + (1 - \omega)\,(d - \beta)'S(d - \beta), \qquad (1.4)$$

where $0 \le \omega \le 1$ and $S$ is a known positive definite matrix. The balanced loss function takes both the precision of the estimator and the goodness of fit of the model into account. Compared with the criteria in (1.2) and (1.3), it is a more comprehensive measure of an estimate.
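As a numerical illustration (a minimal sketch with simulated data of our own choosing, not taken from the paper), the balanced loss (1.4) can be evaluated directly, and one can see that it penalizes both a poor fit and a poor estimate:

```python
import numpy as np

def balanced_loss(d, beta, y, X, S, omega):
    """Zellner's balanced loss (1.4): a weighted sum of the residual
    sum of squares at d (goodness of fit) and the weighted squared
    estimation error of d (precision)."""
    fit = y - X @ d          # goodness-of-fit term
    err = d - beta           # precision term
    return omega * (fit @ fit) + (1 - omega) * (err @ S @ err)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(scale=0.1, size=50)

# least squares estimator minimizing (1.2); for full-column-rank X,
# lstsq returns (X'X)^{-1} X' y
d_ls = np.linalg.lstsq(X, y, rcond=None)[0]
S = np.eye(3)

loss_ls = balanced_loss(d_ls, beta, y, X, S, omega=0.5)
loss_zero = balanced_loss(np.zeros(3), beta, y, X, S, omega=0.5)
```

Here the least squares estimate incurs a much smaller balanced loss than the trivial estimate $d = 0$, since both its fit and its precision terms are small.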
Much work has been done on parameter estimation under the balanced loss function. [2–5] studied the risk functions of some specific estimators, [6–8] did some work on applications of the balanced loss function, and [9–13] investigated the goodness of estimators under the balanced loss function.
In model (1.1) the errors are homoscedastic and uncorrelated, but in most real problems this condition is not always satisfied. In this case, model (1.1) is generalized to the following one:

$$y = X\beta + \varepsilon, \qquad E(\varepsilon) = 0, \qquad \operatorname{Cov}(\varepsilon) = \sigma^{2} V, \qquad (1.5)$$

where $V$ is a known nonnegative definite matrix. For model (1.5), the BLUE of $\beta$ is $\hat{\beta} = (X'T^{-}X)^{-1}X'T^{-}y$. According to Rao's unified theory of least squares, it is the value of $d$ that minimizes the following expression:

$$(y - Xd)'T^{-}(y - Xd),$$

where $T = V + XX'$. We can prove that when $V$ is nonsingular, $\hat{\beta}$ is a generalized least squares estimate. The unweighted residual term in (1.4) is thus no longer the appropriate goodness-of-fit measure for model (1.5), so the balanced loss function (1.4) cannot be applied to this model. Based on Zellner's idea of balanced loss, we propose a general balanced loss

$$L(d, \beta) = \omega\,(y - Xd)'T^{-}(y - Xd) + (1 - \omega)\,(d - \beta)'S(d - \beta), \qquad (1.6)$$

where $0 \le \omega \le 1$ and $S$ is a known matrix.
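The unified-theory estimator can be checked numerically. The following sketch (with arbitrary simulated data, not from the paper) computes the BLUE through $T = V + XX'$ and verifies that, for a nonsingular $V$, it agrees with the generalized least squares estimator:

```python
import numpy as np

def blue_rao(y, X, V):
    """BLUE of beta in y = X beta + eps, Cov(eps) = sigma^2 V, via
    Rao's unified theory: minimize (y - Xd)' T^- (y - Xd) with
    T = V + X X'.  The Moore-Penrose inverse is used so that a
    singular V is also allowed."""
    T = V + X @ X.T
    Tm = np.linalg.pinv(T)
    return np.linalg.solve(X.T @ Tm @ X, X.T @ Tm @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
beta = np.array([2.0, 0.0, -1.0])
V = np.diag(rng.uniform(0.5, 2.0, size=40))   # nonsingular, heteroscedastic
y = X @ beta + rng.multivariate_normal(np.zeros(40), 0.01 * V)

b_blue = blue_rao(y, X, V)
# for nonsingular V this coincides with generalized least squares
Vinv = np.linalg.inv(V)
b_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
```

The agreement of `b_blue` and `b_gls` is exactly the claim that $\hat{\beta}$ reduces to the generalized least squares estimate when $V$ is nonsingular.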
In most cases, we have some prior information in model (1.5); for example, the parameters may be constrained to some subset, such as by an inequality or an ellipsoidal constraint. In this paper, considering model (1.5) with the balanced loss (1.6), we investigate the admissibility of linear estimators of the regression coefficient in the linear model with an inequality constraint. The inequality constraint we will discuss is
where $r$ is a known vector. For a suitable choice of the known quantities, the constraint condition always holds, so this model embraces the unconstrained case.
Definition 1.1 Suppose and are two estimators of β. If, for any , we have
and there exists , such that , where the risk function is , then is said to be better than . If there does not exist any estimator in the set Ξ that is better than , where the parameters β and σ take values in T, then is called an admissible estimator of Kβ in the set Ξ. We denote this by .
We use the following notations in this paper.
where HL is the class of homogeneous linear estimators and L is the class of inhomogeneous linear estimators.
Admissibility is among the most basic and influential rationality requirements of classical statistical decision theory. When the parameters are unconstrained, comprehensive results have been obtained; for instance, [14–17], among others, studied admissibility in the univariate linear model. As [18, 19] pointed out, when the parameters are constrained, the least squares estimator may not be admissible, so it is significant to discuss the admissibility of linear estimators in linear models with constraints. For the Gauss-Markov model with constraints, admissible estimators were developed, and other researchers have dedicated themselves to this study. [21–24] studied admissibility in the linear model with an ellipsoidal constraint. For the linear model with an inequality constraint, [25–28] studied the admissibility of linear estimators of the parameters in univariate and multivariate linear models under the quadratic and matrix loss, respectively. However, under the balanced loss, the model with an inequality constraint has not been considered.
2 Admissibility in the class of homogeneous linear estimators
In this section, we study the admissibility in the class of homogeneous linear estimators. Let the quadratic loss in model (1.5) be
where is an estimator of .
Lemma 2.1 Consider the model (1.5) with the loss function (2.1), if and only if
where , .
Lemma 2.2 Under model (1.5) with the loss function (1.6), suppose is an estimator of β; then we have
and the equality holds if and only if
where . Notice that , we have .
and the equality holds if and only if . □
Remark 2.1 This lemma indicates that the class of estimators is a complete class in HL. That is, for any estimator δ not in it, there exists an estimator in the class that is better than δ.
Consider the following linear model:
Let . Clearly, Cβ is estimable in model (2.6). We take the loss function
Lemma 2.3 Under model (2.6) with the loss function (2.7), suppose ; then for any , is better than AY if and only if
Proof The lemma can easily be verified from (2.4). □
Lemma 2.4 Consider the model (1.5) with the loss function (1.6) and suppose ; then holds if and only if holds in model (2.6) with the loss function (2.7), where .
Proof Since , we have
Equations (2.10), (2.11) and Lemma 2.3 indicate that if there exists an estimator of β that is better than , then , the estimator of Cβ, is better than . □
Lemma 2.5 Consider the model (2.6) with the loss function (2.7), holds if and only if under the quadratic loss, holds.
Proof The proof is straightforward. We omit the details. □
Theorem 2.1 Consider the model (2.6) with the loss function (2.7), if and only if .
Proof The necessity is trivial, so we only need to prove the sufficiency. For any , if there exists that is better than AY, then by Lemma 2.3, for any , (2.8) and (2.9) hold. Notice that (2.9) still holds if β is replaced by −β; in other words, (2.9) holds for any . Since , it follows that (2.8) and (2.9) hold for any . This contradicts . □
Theorem 2.2 Consider the model (1.5) with the loss function (1.6), if and only if
Proof By Lemma 2.2, (1) holds. Further, is equivalent to . By Lemma 2.4, in model (1.5) with the loss (1.6), holds if and only if it holds in model (2.6) with the loss (2.7), where . By Lemma 2.5, this is also equivalent to admissibility in model (2.6) with the loss (2.1). Therefore, when condition (1) is satisfied, it follows from Lemma 2.1 and simple computations that holds if and only if (2) and (3) are satisfied. □
Remark 2.2 The following example indicates that the conditions in the above theorem can be satisfied.
Consider the following example: take and ; then and . Also let ; then the loss function is
For the diagonal matrix , we consider the admissibility of Ay. Condition (1) in Theorem 2.2 is satisfied; Theorem 2.2(3) implies that , and Theorem 2.2(2) implies that . Thus, Ay is an admissible estimator of β only if and hold.
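The flavor of such admissibility conditions can be checked numerically. In the simplest location model $y = \beta + \varepsilon$, $\varepsilon \sim N(0, \sigma^{2})$, with balanced loss $\omega(y - d)^{2} + (1 - \omega)(d - \beta)^{2}$, the risk of the homogeneous estimator $ay$ works out to $R(a) = (1-a)^{2}\beta^{2} + \omega(1-a)^{2}\sigma^{2} + (1-\omega)a^{2}\sigma^{2}$. The sketch below (our own illustrative computation, not part of the paper) verifies on a grid that any $a > 1$ is dominated by $a = 1$ and any $a < \omega$ is dominated by $a = \omega$, so that admissible shrinkage factors must lie in $[\omega, 1]$:

```python
import numpy as np

def risk(a, beta, sigma2, omega):
    """Exact balanced-loss risk of the estimator a*y in the scalar model
    y = beta + eps, eps ~ N(0, sigma2):
    R(a) = E[omega*(y - a*y)^2 + (1 - omega)*(a*y - beta)^2]."""
    return ((1 - a) ** 2 * beta ** 2
            + omega * (1 - a) ** 2 * sigma2
            + (1 - omega) * a ** 2 * sigma2)

omega = 0.3
betas = np.linspace(-5.0, 5.0, 101)          # grid over the parameter space
sigma2s = np.array([0.1, 1.0, 10.0])

def dominated(a, a_ref):
    """True if a_ref*y has risk <= that of a*y everywhere on the grid,
    with strict inequality somewhere."""
    r_a = np.array([[risk(a, b, s2, omega) for b in betas] for s2 in sigma2s])
    r_ref = np.array([[risk(a_ref, b, s2, omega) for b in betas] for s2 in sigma2s])
    return bool((r_ref <= r_a).all() and (r_ref < r_a).any())

out_high = all(dominated(a, 1.0) for a in [1.1, 1.5, 2.0])    # a > 1: dominated by 1
out_low = all(dominated(a, omega) for a in [-0.5, 0.0, 0.2])  # a < omega: dominated by omega
inside_ok = not dominated(0.5, 1.0)                           # a in [omega, 1]: not dominated
```

This mirrors the shape of the conditions in Theorem 2.2: shrinkage factors outside the interval determined by the loss weight are inadmissible.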
3 Admissibility in the class of inhomogeneous linear estimators
In this section, we study the admissibility in the class of inhomogeneous linear estimators.
Lemma 3.1 Let C be a cone in . For any vector b and real number d,
if and only if and , where is the dual cone of C.
Proof This lemma can be found in . □
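To make Lemma 3.1 concrete, consider the special case where C is the nonnegative orthant; its dual cone is the orthant itself, so the lemma predicts that $b'x + d \ge 0$ holds on all of C exactly when $b \ge 0$ componentwise and $d \ge 0$. The following sketch (an illustration of ours, not part of the paper) checks this by sampling from the cone:

```python
import numpy as np

def holds_on_orthant(b, d, trials=10000, scale=100.0, seed=0):
    """Check b'x + d >= 0 over random samples from C = {x : x >= 0}."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, scale, size=(trials, b.size))
    return bool((X @ b + d >= 0).all())

# For C the nonnegative orthant, the dual cone C* is again the orthant,
# so the inequality should hold iff b >= 0 (i.e., b in C*) and d >= 0.
ok = holds_on_orthant(np.array([1.0, 0.5]), 0.0)      # b in C*, d >= 0
bad_b = holds_on_orthant(np.array([1.0, -0.5]), 1.0)  # b not in C*: fails
```

A vector $b$ outside the dual cone is caught by some sample far along the offending direction, since the cone is unbounded.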
Theorem 3.1 Consider the model (1.5) with the loss function (1.6), if , then
Proof (1) Let P be the orthogonal projection matrix onto . Take ; then . Since
and the equality holds if and only if , . This means that if , then is better than , which is a contradiction.
(2) Assume there exists , such that . Then there exists , such that . Take , where . For any , we have
According to Lemma 3.1, for any λ small enough and any ,
is better than , which contradicts .
(3) By (1), there exists such that . Suppose is as good as AY; then, for any ,
By Lemma 2.3, (2.8) and (2.9) hold. Notice that for any , (2.9) still holds and , and therefore (2.9) is equivalent to
We obtain, from (2.8) and (3.4), for any ,
Since , the equality in (3.6) holds if and only if the equality in (3.5) holds. Notice that for and any , we have . Therefore,
This implies that no estimator is better than AY. Thus, . □
In fact, the converse part of Theorem 3.1 is also true. We present this in the following theorem.
Theorem 3.2 Consider the model (1.5) with the loss function (1.6), holds if and only if
Proof By the proof of (1) in Theorem 3.1, we only need to prove that there do not exist a matrix and a vector such that is better than , where .
Suppose is as good as , then for any ,
Notice that for any , ; plugging this into (3.7) and letting k go to ∞ and to 0, respectively, we have
Similarly, replacing β with λβ in (3.8) and letting λ go to ∞, we have
Therefore, . Since , we get
Using the same technique, for any , we have
From (3.7), (3.9), and (3.10), we get, for any ,
From Lemma 3.1,
This together with the condition (2) implies that
From (3.11) and (3.13), we have . Thus, .
Plugging (3.9), (3.10), and (3.14) into (3.7), we find that the equality in (3.7) holds. This means that there does not exist an estimator better than . Therefore, holds. □
We summarize Theorem 2.2 and Theorem 3.2 in the following theorem.
Theorem 3.3 Consider the model (1.5) with the loss function (1.6), holds if and only if
In this paper, under a generalized balanced loss function, we have studied the admissibility of linear estimators of the regression coefficient in the general Gauss-Markov model with respect to an inequality constraint. Necessary and sufficient conditions for a linear estimator of the regression coefficient function to be admissible have been obtained in the classes of homogeneous and inhomogeneous linear estimators, respectively.
Zellner A: Bayesian and Non-Bayesian Estimation Using Balanced Loss Functions. Springer, Berlin; 1994:377–390.
Rodrigues J, Zellner A: Weighted balanced loss function and estimation of the mean time to failure. Commun. Stat., Theory Methods 1994,23(12):3609–3616. 10.1080/03610929408831468
Wan AT: Risk comparison of the inequality constrained least squares and other related estimators under balanced loss. Econ. Lett. 1994,46(3):203–210. 10.1016/0165-1765(94)00485-4
Giles JA, Giles DE, Ohtani K: The exact risks of some pre-test and Stein-type regression estimators under balanced loss. Commun. Stat., Theory Methods 1996,25(12):2901–2924. 10.1080/03610929608831878
Ohtani K: The exact risk of a weighted average estimator of the OLS and Stein-rule estimators in regression under balanced loss. Stat. Risk Model. 1998,16(1):35–46.
Shalabh : Least squares estimators in measurement error models under the balanced loss function. Test 2001,10(2):301–308. 10.1007/BF02595699
Gruber DMH: The efficiency of shrinkage estimators with respect to Zellner’s balanced loss function. Commun. Stat., Theory Methods 2004,33(2):235–249. 10.1081/STA-120028372
Akdeniz F, Wan AT, Akdeniz E: Generalized Liu type estimators under Zellner’s balanced loss function. Commun. Stat., Theory Methods 2005,34(8):1725–1736. 10.1081/STA-200066357
Dey DK, Ghosh M, Strawderman WE: On estimation with balanced loss functions. Stat. Probab. Lett. 1999,45(2):97–101. 10.1016/S0167-7152(99)00047-4
Ohtani K: Inadmissibility of the Stein-rule estimator under the balanced loss function. J. Econom. 1999,88(1):193–201. 10.1016/S0304-4076(98)00030-X
Xu X, Wu Q: Linear admissible estimators of regression coefficient under balanced loss. Acta Math. Sci. 2000, 4: 468–473.
Jozani JM, Marchand É, Parsian A: On estimation with weighted balanced-type loss function. Stat. Probab. Lett. 2006,76(8):773–780. 10.1016/j.spl.2005.10.026
Cao M: ϕ admissibility for linear estimators on regression coefficients in a general multivariate linear model under balanced loss function. J. Stat. Plan. Inference 2009,139(9):3354–3360. 10.1016/j.jspi.2009.03.013
Rao CR: Estimation of parameters in a linear model. Ann. Stat. 1976,4(6):1023–1037. 10.1214/aos/1176343639
LaMotte LR: Admissibility in linear estimation. Ann. Stat. 1982,10(1):245–255. 10.1214/aos/1176345707
Wu Q: Admissibility of linear estimators of regression coefficient in a general Gauss-Markoff model. Acta Math. Appl. Sin. 1986, 2: 251–256.
Dong L, Wu Q: The sufficient and necessary conditions of admissible linear estimates for random regression coefficients and parameters under the quadratic loss function. Acta Math. Sin. 1988,31(2):145–157.
Marquardt DW: Generalized inverses, ridge regression, biased linear estimation, and nonlinear estimation. Technometrics 1970,12(3):591–612.
Perlman MD: Reduced mean square error estimation for several parameters. Sankhyā, Ser. B 1972,34(1):89–92.
Hoffmann K: Admissibility of linear estimators with respect to restricted parameter sets. Stat.: J. Theor. Appl. Stat. 1977,8(4):425–438.
Mathew T: Admissible linear estimation in singular linear models with respect to a restricted parameter set. Commun. Stat., Theory Methods 1985,14(2):491–498. 10.1080/03610928508828927
Lu C: Admissibility of inhomogeneous linear estimators in linear models with respect to incomplete ellipsoidal restrictions. Commun. Stat., Theory Methods 1995,24(7):1737–1742. 10.1080/03610929508831582
Zhang S, Gui W: Admissibility of linear estimators in a growth curve model subject to an incomplete ellipsoidal restriction. Acta Math. Sci. 2008,28(1):194–200. 10.1016/S0252-9602(08)60020-X
Zhang S, Gui W, Liu G: Characterization of admissible linear estimators in the general growth curve model with respect to an incomplete ellipsoidal restriction. Linear Algebra Appl. 2009,431(1):120–131.
Zhang S, Liu G, Gui W: Admissible estimators in the general multivariate linear model with respect to inequality restricted parameter set. J. Inequal. Appl. 2009., 2009: Article ID 718927
Lu C, Shi N: Admissible linear estimators in linear models with respect to inequality constraints. Linear Algebra Appl. 2002,354(1):187–194.
Zhang S, Fang Z, Qin H, Han L: Characterization of admissible linear estimators in the growth curve model with respect to inequality constraints. J. Korean Stat. Soc. 2011,40(2):173–179. 10.1016/j.jkss.2010.09.002
Zhang S, Fang Z, Liu G: Characterization of admissible linear estimators in multivariate linear model with respect to inequality constraints under matrix loss function. Commun. Stat., Theory Methods 2013,42(15):2837–2850. 10.1080/03610926.2011.615441
This work was partially supported by National Natural Science Foundation of China (61070236, U1334211, 11371051) and the Project of State Key Laboratory of Rail Traffic Control and Safety (RCS2012ZT004), Beijing Jiaotong University.
The authors declare that they have no competing interests.
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Zhang, S., Gui, W. Admissibility in general linear model with respect to an inequality constraint under balanced loss. J Inequal Appl 2014, 70 (2014). https://doi.org/10.1186/1029-242X-2014-70
- general Gauss-Markov model
- homogeneous/inhomogeneous linear estimation
- inequality constraint
- balanced risk function