Let
denote the class of homogeneous linear estimators, and let
denote the class of general linear estimators.
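(The displayed definitions are not reproduced above. As a hedged sketch in generic notation, where \(L\), \(C\), and \(Y\) are placeholder symbols rather than the paper's own, such classes usually read
\[
\mathcal{LH}=\{LY:\ L\ \text{a real matrix of suitable order}\},\qquad
\mathcal{L}=\{LY+C:\ L,\ C\ \text{of suitable order}\},
\]
so a general linear estimator differs from a homogeneous one only by a constant shift \(C\).)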
Lemma 2.1.
Under model (1.1) with the loss function (1.2), suppose
is an estimator of
. One has
The equality holds if and only if
where
.
Proof.
Since
It is easy to verify that (2.1) holds, and the equality holds if and only if
Expanding it, we have
Thus
, that is
.
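The expansion used in this proof is the familiar variance-bias split. As a sketch, assuming the loss (1.2) has the common trace form \(\operatorname{tr}[(d-S\Theta)'(d-S\Theta)D]\) with \(D\ge 0\) and \(E(Y)=X\Theta\) (assumed forms, not quoted from the paper), one has for \(d=LY+C\)
\[
E\operatorname{tr}[(d-S\Theta)'(d-S\Theta)D]
=E\operatorname{tr}[(LY-LX\Theta)'(LY-LX\Theta)D]
+\operatorname{tr}[(LX\Theta+C-S\Theta)'(LX\Theta+C-S\Theta)D],
\]
since the cross term has expectation zero; the second summand isolates the contribution of the constant part \(C\).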
Lemma 2.2.
Under model (1.1) with the loss function (1.2), if
, suppose
are estimators of
, then
is better than
if and only if
and the two equalities above cannot hold simultaneously.
Proof.
Since
,
,
, (2.3) shows that the sufficiency holds. Suppose
is better than
, then for any
, we have
and there exists some
such that the equality in (2.8) does not hold. Taking
in (2.8), (2.6) follows. Let
, where
is the identity matrix. Then for any
,
, by (2.8) we have
Therefore, (2.7) holds. It is obvious that the two equalities in (2.6) and (2.7) cannot hold simultaneously.
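The shape of the pair of conditions (2.6) and (2.7) is what the standard risk identity suggests. As a sketch in the univariate case, assuming \(E(y)=X\beta\), \(\operatorname{Cov}(y)=\sigma^{2}V\), and squared-error loss \(\lVert d-S\beta\rVert^{2}\) (assumed forms), the risk of a homogeneous estimator \(Ly\) splits as
\[
R(Ly;\beta,\sigma^{2})=\sigma^{2}\operatorname{tr}(LVL')+\beta'(LX-S)'(LX-S)\beta,
\]
so \(Ly\) is at least as good as \(My\) at every parameter point exactly when \(\operatorname{tr}(LVL')\le\operatorname{tr}(MVM')\) and \((LX-S)'(LX-S)\le(MX-S)'(MX-S)\) in the Löwner order.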
Consider the univariate linear model with a restricted parameter set:
and the loss function
where
,
,
and
are as defined in (1.1) and (1.2), while
and
are unknown parameters. Set
. If
is an admissible estimator of
, we denote it by
.
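Since the displays (2.10) and (2.11) are not reproduced above, the following is only one common reading of such a restricted univariate setup, with all symbols hypothetical:
\[
y=X\beta+e,\quad E(e)=0,\quad \operatorname{Cov}(e)=\sigma^{2}V,\qquad (\beta,\sigma^{2})\in T,
\]
with \(T\) the restricted parameter set and the loss a quadratic function of \(d-S\beta\) as in (1.2).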
Similarly to Lemma 2.2, we have the following lemma.
Lemma 2.3.
Under model (2.10) with the loss function (2.11), suppose
and
are estimators of
, then
is better than
if and only if
and the two equalities above cannot hold simultaneously.
Theorem 2.4.
Consider the model (1.1) with the loss function (1.2). Then
if and only if
in model (2.10) with the loss function (2.11).
Proof.
By Lemmas 2.2 and 2.3, it suffices to prove the equivalence of (2.7) and (2.13).
Suppose (2.7) is true. Take
,
, and plug it into (2.7). Then (2.13) follows.
For the converse, suppose (2.13) is true and let
. We have
The claim follows.
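The passage between the multivariate model (1.1) and a univariate model such as (2.10) is typically made through the \(\operatorname{vec}\) operator. As a sketch, assuming (1.1) has the common form \(Y=X\Theta+E\) (an assumed form),
\[
\operatorname{vec}(Y)=(I\otimes X)\operatorname{vec}(\Theta)+\operatorname{vec}(E),
\]
which is a univariate linear model in the long vector \(\operatorname{vec}(\Theta)\); statements about homogeneous linear estimators then transfer between the two settings.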
Remark 2.5.
By this theorem, results established under the univariate linear model extend directly to the multivariate linear model within the class of homogeneous linear estimators.
Theorem 2.6.
Consider the model (1.1) with the loss function (1.2). If
is estimable, then
if and only if:
(1)
,
(2) if there exists
, such that
then
,
, where
.
Proof.
By the corresponding theorem of Deng and Chen [13], under model (2.10) with the loss function (2.11), if
is estimable, then
if and only if (1) and (2) in Theorem 2.6 are satisfied. Now Theorem 2.6 follows from Theorem 2.4.
Lemma 2.7.
Consider the model (1.1) with the loss function (1.2). Suppose
is an estimator of
. One has
and the equality holds if and only if
.
Proof.
The proof follows from the following equalities:
Lemma 2.8.
Assume
. One has:
(1) If
and
, then there exists
such that, for every
,
and
.
(2)
if and only if for any vector
,
implies
.
Proof.
(1)
If
, the claim is trivial. If
,
, where
is an orthogonal matrix,
,
. From
, we have
; noticing that
, we get
, where
. Clearly, there exists
, such that
. Let
, then
, and for every
,
, thus
and
.
(2)
The claim is easy to verify.
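A concrete \(2\times 2\) illustration of part (1), as a toy example under one common reading of the lemma (perturbing a nonnegative definite matrix without losing definiteness or rank): take
\[
A=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad
B=\begin{pmatrix}1&0\\0&0\end{pmatrix};\qquad
A-tB=\begin{pmatrix}1-t&0\\0&0\end{pmatrix}\ge 0,\quad
\operatorname{rank}(A-tB)=\operatorname{rank}(A)\ \text{for }0<t<1,
\]
whereas replacing \(B\) by \(\operatorname{diag}(0,1)\), whose range is not contained in that of \(A\), makes \(A-tB\) indefinite for every \(t>0\).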
Theorem 2.9.
Consider the model (1.1) with the loss function (1.2). If
is estimable, then
if and only if:
(1)
,
(2) if there exists
such that
then
,
and
, where
.
Proof.
If
, by (2.17) we obtain
. Then
implies
. The claim then follows from Theorem 2.6. Now assume
.
Necessity
Assume
. Then, by Lemma 2.7, (1) holds. We now prove (2). Denote
,
. Since
, we can rewrite (2.18) as follows:
If there exists
such that (2.19) holds, then for sufficiently small
, take
. Since
Thus
In the above,
is sufficiently small and
, so (2.23) follows.
,
,
, thus (2.24) follows.
For any compatible vector
, assume
By (2.19) we obtain
, that is,
, and plugging it into (2.26) gives
,
, thus
From Lemma 2.8, we have
Therefore, there exists
such that, for
, the right side of (2.25) is nonnegative definite and its rank is
. If
is small enough, for every
, we have
, and equality cannot always hold when (2) fails. This contradicts
.
Sufficiency
Assume (1) and (2) are true. Since
, by Theorem 2.6,
. If there exists an estimator
that is better than
, then for every
,
Note that for any
, if
, then
. Replace
and
in (2.29) with
and
, respectively, divide by
on both sides, and let
; we get
Since
, we have
and
(otherwise,
is better than
). Plugging them into (2.29), for every
,
Thus
and
.
implies
, and the equality in (2.29) always holds. This contradicts the assumption that
is better than
.
Theorem 2.10.
Under model (1.1) and the loss function (1.2), if
is estimable, then
if and only if
.
Proof.
Denote
,
; then model (1.1) is transformed into
Since
, (2.33) implies that
, which, combined with Theorem 2.4 and the fact that "if
, then
" yields
.
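The transformation used here is typically a whitening of the model. As a hedged sketch, assuming the dispersion in (1.1) has the Kronecker form \(\operatorname{Cov}(\operatorname{vec}E)=\Sigma\otimes V\) with \(V\) positive definite (assumed forms, since the displays are not reproduced), one may set
\[
\tilde Y=V^{-1/2}Y,\qquad \tilde X=V^{-1/2}X,\qquad
\tilde Y=\tilde X\Theta+\tilde E,\qquad
\operatorname{Cov}(\operatorname{vec}\tilde E)=\Sigma\otimes I,
\]
after which the admissibility statement can be read off in the transformed model.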
Corollary 2.11.
Under model (1.1) and the loss function (1.2), if
is estimable, then
if and only if
.
Lemma 2.12.
Consider model (1.1) with the loss function (1.2). Suppose
. If
, then
Proof.
where
refers to the orthogonal projection onto
.
Lemma 2.13.
Suppose
and
are
and
real matrices, respectively. Then there exists a
matrix
such that
if and only if
and
.
Proof.
To prove the sufficiency, it suffices to show that there exists a
such that
is not a skew-symmetric matrix.
Since
,
.
(1) If there is
such that
, take
, then
where
is the column vector whose only nonzero entry is a 1 in the
th position.
(2) If there does not exist
such that
, then there must exist
such that
and
. Take
, then
That is,
.
The proof is complete.
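Both cases rest on the usual unit-vector device: for any matrix \(M\) and standard basis vectors \(e_i\), \(e_j\) as in the proof, \(e_i'Me_j=m_{ij}\), so a rank-one choice of \(Q\) isolates a single entry of the product in question. As a generic illustration (not the paper's exact computation), if \(m_{ij}\neq 0\), taking \(Q=e_je_i'\) gives
\[
\operatorname{tr}(MQ)=\operatorname{tr}(Me_je_i')=e_i'Me_j=m_{ij}\neq 0 .
\]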
Theorem 2.14.
Consider the model (1.1) with the loss function (1.2). If
is inestimable, then
if and only if
.
Proof.
Lemma 2.1 implies the necessity. For the converse, assume there exists
such that, for any
, we have
Since
where
, we have
where
is a known function. If there exists
such that
then, since
is inestimable, we have
. By Lemma 2.13, there exists
such that
Take
,
. Since
, we have
.
According to (2.40), we have for any real
,
This is a contradiction. Therefore
. Since
, by Lemma 2.12, we obtain
Taking
in (2.38), we have
Thus
,
Hence there is no estimator better than
in
.
Similarly to Theorem 2.14, we have the following theorem.
Theorem 2.15.
Under model (1.1) and the loss function (1.2), if
is inestimable, then
if and only if
.
Remark 2.16.
This theorem indicates that if
is inestimable, then the admissibility of
does not depend on the choice of
owing to
.