# Some properties of relative efficiency of estimators in a two linear regression equations system with identical parameter vectors

## Abstract

This article discusses two normal linear models in which some of the parameters are identical. We introduce four relative efficiencies to measure the efficiency of an estimator in a two linear regression equations system with identical parameter vectors, and we give lower and upper bounds for each of the four relative efficiencies.

## 1 Introduction

Consider a system (H) formed by two linear models:

${y}_{1}={X}_{1}\beta +{Z}_{1}{\beta }_{1}+{\epsilon }_{1},$
(1)
${y}_{2}={X}_{2}\beta +{Z}_{2}{\beta }_{2}+{\epsilon }_{2},$
(2)

where, for $i=1,2$, ${y}_{i}$ is an ${n}_{i}×1$ vector of observations, ${X}_{i}$ and ${Z}_{i}$ are ${n}_{i}×p$ and ${n}_{i}×{t}_{i}$ full column rank matrices satisfying $rank\left({X}_{i},{Z}_{i}\right)=rank\left({X}_{i}\right)+rank\left({Z}_{i}\right)$, with $rank\left(\cdot \right)$ denoting the rank of a matrix, β and ${\beta }_{i}$ are $p×1$ and ${t}_{i}×1$ vectors of unknown parameters, ${\epsilon }_{i}$ is an ${n}_{i}×1$ random vector assumed to follow a multivariate normal distribution with mean 0 and variance-covariance matrix ${\sigma }_{i}I$ with ${\sigma }_{i}>0$, and ${\epsilon }_{1}$ and ${\epsilon }_{2}$ are independent.

Define ${Q}_{i}=I-{Z}_{i}{\left({Z}_{i}^{\prime }{Z}_{i}\right)}^{-1}{Z}_{i}^{\prime }$, ${T}_{i}={\left({Z}_{i}^{\prime }{Z}_{i}\right)}^{-1}{Z}_{i}^{\prime }{X}_{i}$ and $r=\frac{{\sigma }_{1}}{{\sigma }_{2}}$. Then by Liu [1] we have the following:

1. In the single equation (1), the best linear unbiased estimators (BLUEs) of β and ${\beta }_{1}$ are given respectively by

$\stackrel{ˆ}{\beta }={\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}{X}_{1}^{\prime }{Q}_{1}{y}_{1},$
(3)
${\stackrel{ˆ}{\beta }}_{1}={\left({Z}_{1}^{\prime }{Z}_{1}\right)}^{-1}{Z}_{1}^{\prime }{y}_{1}-{T}_{1}\stackrel{ˆ}{\beta }.$
(4)
2. In the single equation (2), the best linear unbiased estimators (BLUEs) of β and ${\beta }_{2}$ are given respectively by

$\stackrel{˜}{\beta }={\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)}^{-1}{X}_{2}^{\prime }{Q}_{2}{y}_{2},$
(5)
${\stackrel{˜}{\beta }}_{2}={\left({Z}_{2}^{\prime }{Z}_{2}\right)}^{-1}{Z}_{2}^{\prime }{y}_{2}-{T}_{2}\stackrel{˜}{\beta }.$
(6)
3. For the system (H), the BLUEs of β, ${\beta }_{1}$ and ${\beta }_{2}$ are given respectively by

${\beta }^{\ast }\left(r\right)={\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}+r{X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)}^{-1}\left({X}_{1}^{\prime }{Q}_{1}{y}_{1}+r{X}_{2}^{\prime }{Q}_{2}{y}_{2}\right),$
(7)
${\beta }_{1}^{\ast }={\left({Z}_{1}^{\prime }{Z}_{1}\right)}^{-1}{Z}_{1}^{\prime }{y}_{1}-{T}_{1}{\beta }^{\ast }\left(r\right),$
(8)
${\beta }_{2}^{\ast }={\left({Z}_{2}^{\prime }{Z}_{2}\right)}^{-1}{Z}_{2}^{\prime }{y}_{2}-{T}_{2}{\beta }^{\ast }\left(r\right).$
(9)
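
The three estimators above can be illustrated numerically. The following sketch (dimensions, seed, and variable names are our illustrative choices, not from the paper) builds a small instance of system (H) and evaluates Equations (3), (5) and (7):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, p, t1, t2 = 50, 60, 3, 2, 2
sigma1, sigma2 = 2.0, 1.5            # variance parameters of eps_1, eps_2
r = sigma1 / sigma2

X1, Z1 = rng.normal(size=(n1, p)), rng.normal(size=(n1, t1))
X2, Z2 = rng.normal(size=(n2, p)), rng.normal(size=(n2, t2))
beta = np.array([1.0, -2.0, 0.5])    # common parameter vector
b1, b2 = rng.normal(size=t1), rng.normal(size=t2)

y1 = X1 @ beta + Z1 @ b1 + rng.normal(scale=np.sqrt(sigma1), size=n1)
y2 = X2 @ beta + Z2 @ b2 + rng.normal(scale=np.sqrt(sigma2), size=n2)

def Q(Z):
    # Q_i = I - Z_i (Z_i' Z_i)^{-1} Z_i', the orthogonal projector onto
    # the complement of col(Z_i)
    return np.eye(Z.shape[0]) - Z @ np.linalg.solve(Z.T @ Z, Z.T)

Q1, Q2 = Q(Z1), Q(Z2)
S1, S2 = X1.T @ Q1 @ X1, X2.T @ Q2 @ X2            # X_i' Q_i X_i

beta_hat = np.linalg.solve(S1, X1.T @ Q1 @ y1)     # Eq. (3), equation (1) alone
beta_tilde = np.linalg.solve(S2, X2.T @ Q2 @ y2)   # Eq. (5), equation (2) alone
beta_star = np.linalg.solve(S1 + r * S2,
                            X1.T @ Q1 @ y1 + r * (X2.T @ Q2 @ y2))  # Eq. (7)
```

Using `np.linalg.solve` avoids forming the inverses in (3), (5) and (7) explicitly, which is numerically preferable.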

In this article, we only discuss the estimation of the parameter β. Liu [1] compared the estimators $\stackrel{ˆ}{\beta }$, $\stackrel{˜}{\beta }$ and ${\beta }^{\ast }\left(r\right)$ under the mean squared error criterion when the ${\sigma }_{i}$ are known. He also proposed an estimator for the case where the ${\sigma }_{i}$ are unknown and discussed the statistical properties of $\stackrel{ˆ}{\beta }$, $\stackrel{˜}{\beta }$ and ${\beta }^{\ast }\left(r\right)$. Ma and Wang [2] also studied these estimators under the mean squared error criterion.

It is easy to compute that

$Cov\left(\stackrel{ˆ}{\beta }\right)={\sigma }_{1}{\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1},$
(10)
$Cov\left(\stackrel{˜}{\beta }\right)={\sigma }_{2}{\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)}^{-1},$
(11)
$Cov\left({\beta }^{\ast }\left(r\right)\right)={\sigma }_{1}{\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}+r{X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)}^{-1}.$
(12)

From Equations (10)-(12), we can see that

$Cov\left({\beta }^{\ast }\left(r\right)\right)\le Cov\left(\stackrel{ˆ}{\beta }\right),\phantom{\rule{2em}{0ex}}Cov\left({\beta }^{\ast }\left(r\right)\right)\le Cov\left(\stackrel{˜}{\beta }\right).$
(13)
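
Inequality (13) is meant in the Loewner (nonnegative definite) ordering. A minimal numerical check, using arbitrary positive definite stand-ins for ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ (the construction is ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
sigma1, sigma2 = 2.0, 0.5
r = sigma1 / sigma2

def spd(k):
    # random symmetric positive definite matrix
    A = rng.normal(size=(k, k))
    return A @ A.T + k * np.eye(k)

S1, S2 = spd(p), spd(p)                        # stand-ins for X1'Q1X1 and X2'Q2X2

cov_hat = sigma1 * np.linalg.inv(S1)           # Eq. (10)
cov_tilde = sigma2 * np.linalg.inv(S2)         # Eq. (11)
cov_star = sigma1 * np.linalg.inv(S1 + r * S2) # Eq. (12)

# Eq. (13): both differences should be nonnegative definite
d1 = np.linalg.eigvalsh(cov_hat - cov_star).min()
d2 = np.linalg.eigvalsh(cov_tilde - cov_star).min()
```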

In practice, the ${\sigma }_{i}$ may be unknown; in this case we can use $\stackrel{ˆ}{\beta }$ or $\stackrel{˜}{\beta }$ in place of ${\beta }^{\ast }\left(r\right)$. However, this substitution incurs a loss of efficiency, and we introduce relative efficiencies to quantify this loss. Relative efficiency has been studied by many researchers, such as Yang [3], Wang and Ip [4], Liu et al. [5, 6], Yang and Wang [7], Wang and Yang [8, 9] and Yang and Wu [10].

In this article, we introduce four relative efficiencies for the system (H) and give their lower and upper bounds.

The rest of the article is organized as follows. In Section 2, we propose the new relative efficiency. Sections 3 and 4 give the lower and upper bounds of the relative efficiencies proposed in Section 2. Some concluding remarks are given in Section 5.

## 2 New relative efficiency

To quantify the loss incurred when $\stackrel{ˆ}{\beta }$ or $\stackrel{˜}{\beta }$ is used in place of ${\beta }^{\ast }\left(r\right)$, we introduce four relative efficiencies as follows:

${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)=\frac{|Cov\left({\beta }^{\ast }\left(r\right)\right)|}{|Cov\left(\stackrel{ˆ}{\beta }\right)|},$
(14)
${e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)=\frac{|Cov\left({\beta }^{\ast }\left(r\right)\right)|}{|Cov\left(\stackrel{˜}{\beta }\right)|},$
(15)
${e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)=\frac{tr\left(Cov\left({\beta }^{\ast }\left(r\right)\right)\right)}{tr\left(Cov\left(\stackrel{ˆ}{\beta }\right)\right)},$
(16)
${e}_{4}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)=\frac{tr\left(Cov\left({\beta }^{\ast }\left(r\right)\right)\right)}{tr\left(Cov\left(\stackrel{˜}{\beta }\right)\right)},$
(17)

where $|A|$ and $tr\left(A\right)$ denote the determinant and trace of matrix A, respectively. By Equation (13), we have $0<{e}_{i}\left(\cdot |\cdot \right)\le 1$, $i=1,2,3,4$. In the next section we will give the lower and upper bounds of ${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ and ${e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$.
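
For a quick sanity check, the four ratios can be evaluated directly from Equations (10)-(12); the matrices below are arbitrary positive definite stand-ins (our choice, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
sigma1, sigma2 = 2.0, 0.5
r = sigma1 / sigma2

def spd(k):
    # random symmetric positive definite matrix
    A = rng.normal(size=(k, k))
    return A @ A.T + k * np.eye(k)

S1, S2 = spd(p), spd(p)   # stand-ins for X1'Q1X1 and X2'Q2X2
cov_hat = sigma1 * np.linalg.inv(S1)
cov_tilde = sigma2 * np.linalg.inv(S2)
cov_star = sigma1 * np.linalg.inv(S1 + r * S2)

e1 = np.linalg.det(cov_star) / np.linalg.det(cov_hat)    # Eq. (14)
e2 = np.linalg.det(cov_star) / np.linalg.det(cov_tilde)  # Eq. (15)
e3 = np.trace(cov_star) / np.trace(cov_hat)              # Eq. (16)
e4 = np.trace(cov_star) / np.trace(cov_tilde)            # Eq. (17)
```

Because the Loewner ordering (13) implies both the determinant ratio and the trace ratio are at most 1, all four values should lie in $(0,1]$.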

## 3 The lower and upper bounds of ${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ and ${e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$

In this section we give the lower and upper bounds of ${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ and ${e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$. Firstly, we give some lemmas and notation needed in the following discussion. For an $n×n$ nonnegative definite matrix A, let ${\lambda }_{1}\left(A\right)\ge {\lambda }_{2}\left(A\right)\ge \cdots \ge {\lambda }_{n}\left(A\right)$ stand for the ordered eigenvalues of A.

Lemma 3.1 

Let A and B be $n×n$ nonnegative definite matrices. Then we have

${\lambda }_{n}\left(A\right){\lambda }_{i}\left(B\right)\le {\lambda }_{i}\left(AB\right)\le {\lambda }_{1}\left(A\right){\lambda }_{i}\left(B\right),\phantom{\rule{1em}{0ex}}i=1,2,\dots ,n.$
(18)

Lemma 3.2 

Let ${\mathrm{\Delta }}_{1}=diag\left({\tau }_{1},{\tau }_{2},\dots ,{\tau }_{p}\right)$ with ${\tau }_{1}\ge {\tau }_{2}\ge \cdots \ge {\tau }_{p}>0$, let ${\mathrm{\Delta }}_{2}=diag\left({\mu }_{1},{\mu }_{2},\dots ,{\mu }_{p}\right)$ with ${\mu }_{1}\ge {\mu }_{2}\ge \cdots \ge {\mu }_{p}\ge 0$, and let A be a $p×p$ orthogonal matrix. Then we have

$\sum _{i=1}^{p}{\tau }_{i}{\mu }_{p+1-i}\le tr\left({\mathrm{\Delta }}_{1}{A}^{\prime }{\mathrm{\Delta }}_{2}A\right)\le \sum _{i=1}^{p}{\tau }_{i}{\mu }_{i}.$
(19)
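
A quick numerical check of Lemma 3.2 on a random instance (the QR-based construction of the random orthogonal matrix is ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 5
tau = np.sort(rng.uniform(0.5, 3.0, p))[::-1]   # tau_1 >= ... >= tau_p > 0
mu = np.sort(rng.uniform(0.0, 2.0, p))[::-1]    # mu_1 >= ... >= mu_p >= 0
D1, D2 = np.diag(tau), np.diag(mu)

A, _ = np.linalg.qr(rng.normal(size=(p, p)))    # random orthogonal matrix

middle = np.trace(D1 @ A.T @ D2 @ A)
lower = np.sum(tau * mu[::-1])                  # sum_i tau_i mu_{p+1-i}
upper = np.sum(tau * mu)                        # sum_i tau_i mu_i
```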

Now we will give the lower and upper bounds of ${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$.

Theorem 3.1 Let ${\beta }^{\ast }\left(r\right)$ and $\stackrel{ˆ}{\beta }$ be given in Equations (7) and (3), and let ${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ be defined in Equation (14). Then we have

$\frac{1}{{\prod }_{i=1}^{p}\left(1+r{\theta }_{p}^{-1}{\eta }_{i}\right)}\le {e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)\le \frac{1}{{\prod }_{i=1}^{p}\left(1+r{\theta }_{1}^{-1}{\eta }_{i}\right)},$
(20)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$.

Proof By the definition of ${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$, we have

$\begin{array}{rcl}{e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)& =& \frac{|Cov\left({\beta }^{\ast }\left(r\right)\right)|}{|Cov\left(\stackrel{ˆ}{\beta }\right)|}\\ =& \frac{|{\sigma }_{1}{\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}+r{X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)}^{-1}|}{|{\sigma }_{1}{\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}|}\\ =& \frac{|{X}_{1}^{\prime }{Q}_{1}{X}_{1}|}{|{X}_{1}^{\prime }{Q}_{1}{X}_{1}+r{X}_{2}^{\prime }{Q}_{2}{X}_{2}|}.\end{array}$
(21)

It is easy to see that ${X}_{1}^{\prime }{Q}_{1}{X}_{1}>0$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}>0$. Define

$A={\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2},$

then $A>0$, and there exists an orthogonal matrix N such that

$NA{N}^{\prime }=diag\left({\zeta }_{1},\dots ,{\zeta }_{p}\right)\triangleq \mathrm{\Delta },$
(22)

where ${\zeta }_{1}\ge \cdots \ge {\zeta }_{p}$ are the ordered eigenvalues of A. Now we define $M=N{\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}$; then we have

$M\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right){M}^{\prime }=N{N}^{\prime }={I}_{p},$
(23)
$\begin{array}{c}M\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){M}^{\prime }=N{\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}{N}^{\prime }\hfill \\ \phantom{M\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){M}^{\prime }}=NA{N}^{\prime }=\mathrm{\Delta }.\hfill \end{array}$
(24)

Thus

${X}_{1}^{\prime }{Q}_{1}{X}_{1}={M}^{-1}{M}^{\prime -1},$
(25)
${X}_{2}^{\prime }{Q}_{2}{X}_{2}={M}^{-1}\mathrm{\Delta }{M}^{\prime -1}.$
(26)

Substituting Equations (25) and (26) into Equation (21), we have

$\begin{array}{rcl}{e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)& =& \frac{|{X}_{1}^{\prime }{Q}_{1}{X}_{1}|}{|{X}_{1}^{\prime }{Q}_{1}{X}_{1}+r{X}_{2}^{\prime }{Q}_{2}{X}_{2}|}\\ =& \frac{|{M}^{-1}{M}^{\prime -1}|}{|{M}^{-1}{M}^{\prime -1}+r{M}^{-1}\mathrm{\Delta }{M}^{\prime -1}|}\\ =& \frac{|{M}^{-1}||{M}^{\prime -1}|}{|{M}^{-1}||{I}_{p}+r\mathrm{\Delta }||{M}^{\prime -1}|}=\frac{1}{|{I}_{p}+r\mathrm{\Delta }|}.\end{array}$
(27)

Since $A={\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}$ has the same eigenvalues as $\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}$, we have ${\lambda }_{i}\left(A\right)={\lambda }_{i}\left(\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right)$, $i=1,2,\dots ,p$. Then by Lemma 3.1 we have

$\begin{array}{rcl}{\lambda }_{p}\left({\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right){\lambda }_{i}\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)& \le & {\lambda }_{i}\left(\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right)\\ \le & {\lambda }_{1}\left({\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right){\lambda }_{i}\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right).\end{array}$
(28)

On the other hand,

${\lambda }_{p}\left({\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right)={\lambda }_{1}^{-1}\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)={\theta }_{1}^{-1},$
(29)
${\lambda }_{1}\left({\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right)={\lambda }_{p}^{-1}\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)={\theta }_{p}^{-1},$
(30)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$. By Equations (28)-(30), we obtain

${\theta }_{1}^{-1}{\eta }_{i}\le {\lambda }_{i}\left(\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right)\le {\theta }_{p}^{-1}{\eta }_{i},\phantom{\rule{1em}{0ex}}i=1,\dots ,p,$
(31)

where ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$. Thus, by Equations (27) and (31), we have

$\frac{1}{{\prod }_{i=1}^{p}\left(1+r{\theta }_{p}^{-1}{\eta }_{i}\right)}\le {e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)\le \frac{1}{{\prod }_{i=1}^{p}\left(1+r{\theta }_{1}^{-1}{\eta }_{i}\right)}.$
(32)

□
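
The bounds (20) can be checked numerically. Since ${\sigma }_{1}$ cancels in ${e}_{1}$, it suffices to work with positive definite stand-ins for ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ (the construction is ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)
p, r = 4, 1.7

def spd(k):
    # random symmetric positive definite matrix
    A = rng.normal(size=(k, k))
    return A @ A.T + k * np.eye(k)

S1, S2 = spd(p), spd(p)                        # stand-ins for X1'Q1X1, X2'Q2X2
theta = np.sort(np.linalg.eigvalsh(S1))[::-1]  # theta_1 >= ... >= theta_p
eta = np.sort(np.linalg.eigvalsh(S2))[::-1]    # eta_1 >= ... >= eta_p

e1 = np.linalg.det(S1) / np.linalg.det(S1 + r * S2)   # Eq. (21)

lower = 1.0 / np.prod(1.0 + r * eta / theta[-1])  # 1 / prod(1 + r theta_p^{-1} eta_i)
upper = 1.0 / np.prod(1.0 + r * eta / theta[0])   # 1 / prod(1 + r theta_1^{-1} eta_i)
```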

Corollary 3.1 Let ${\beta }^{\ast }\left(r\right)$ and $\stackrel{ˆ}{\beta }$ be given in Equations (7) and (3), let ${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ be defined in Equation (14), and suppose that ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ commute. Then we have

$\frac{{\theta }_{p}^{p}}{{\left({\theta }_{1}+r{\eta }_{1}\right)}^{p}}\le {e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)\le \frac{{\theta }_{1}^{p}}{{\left({\theta }_{p}+r{\eta }_{p}\right)}^{p}},$
(33)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$.

Proof Since ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ commute, there exists an orthogonal matrix G such that

${G}^{\prime }{X}_{1}^{\prime }{Q}_{1}{X}_{1}G=diag\left({\theta }_{1},\dots ,{\theta }_{p}\right)\triangleq \mathrm{\Sigma },$
(34)
${G}^{\prime }{X}_{2}^{\prime }{Q}_{2}{X}_{2}G=diag\left({\eta }_{1},\dots ,{\eta }_{p}\right)\triangleq \mathrm{\Omega },$
(35)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$.

By the definition of ${e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$, we have

$\begin{array}{rcl}{e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)& =& \frac{|Cov\left({\beta }^{\ast }\left(r\right)\right)|}{|Cov\left(\stackrel{ˆ}{\beta }\right)|}\\ =& \frac{|{X}_{1}^{\prime }{Q}_{1}{X}_{1}|}{|{X}_{1}^{\prime }{Q}_{1}{X}_{1}+r{X}_{2}^{\prime }{Q}_{2}{X}_{2}|}\\ =& \frac{|G\mathrm{\Sigma }{G}^{\prime }|}{|G\mathrm{\Sigma }{G}^{\prime }+rG\mathrm{\Omega }{G}^{\prime }|}\\ =& \frac{{\prod }_{i=1}^{p}{\theta }_{i}}{{\prod }_{i=1}^{p}\left({\theta }_{i}+r{\eta }_{i}\right)}.\end{array}$
(36)

Thus we have

$\frac{{\theta }_{p}^{p}}{{\left({\theta }_{1}+r{\eta }_{1}\right)}^{p}}\le {e}_{1}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)\le \frac{{\theta }_{1}^{p}}{{\left({\theta }_{p}+r{\eta }_{p}\right)}^{p}}.$
(37)

□
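
Diagonal matrices commute, so they provide a convenient test case for Corollary 3.1; the sketch below (our construction, for illustration only) also confirms the exact expression (36):

```python
import numpy as np

rng = np.random.default_rng(5)
p, r = 4, 0.8
theta = np.sort(rng.uniform(1.0, 5.0, p))[::-1]  # theta_1 >= ... >= theta_p
eta = np.sort(rng.uniform(1.0, 5.0, p))[::-1]    # eta_1 >= ... >= eta_p
S1, S2 = np.diag(theta), np.diag(eta)            # commuting stand-ins

e1 = np.linalg.det(S1) / np.linalg.det(S1 + r * S2)

exact = np.prod(theta / (theta + r * eta))             # Eq. (36)
lower = theta[-1] ** p / (theta[0] + r * eta[0]) ** p  # theta_p^p / (theta_1 + r eta_1)^p
upper = theta[0] ** p / (theta[-1] + r * eta[-1]) ** p # theta_1^p / (theta_p + r eta_p)^p
```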

In the same way, we can give the lower and upper bounds of ${e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$.

Theorem 3.2 Let ${\beta }^{\ast }\left(r\right)$ and $\stackrel{˜}{\beta }$ be given in Equations (7) and (5), and let ${e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$ be defined in Equation (15). Then we have

$\frac{{r}^{p}}{{\prod }_{i=1}^{p}\left(r+{\eta }_{p}^{-1}{\theta }_{i}\right)}\le {e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)\le \frac{{r}^{p}}{{\prod }_{i=1}^{p}\left(r+{\eta }_{1}^{-1}{\theta }_{i}\right)},$
(38)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$.

Corollary 3.2 Let ${\beta }^{\ast }\left(r\right)$ and $\stackrel{˜}{\beta }$ be given in Equations (7) and (5), let ${e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$ be defined in Equation (15), and suppose that ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ commute. Then we have

$\frac{{\left(r{\eta }_{p}\right)}^{p}}{{\left({\theta }_{1}+r{\eta }_{1}\right)}^{p}}\le {e}_{2}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)\le \frac{{\left(r{\eta }_{1}\right)}^{p}}{{\left({\theta }_{p}+r{\eta }_{p}\right)}^{p}}.$
(39)

## 4 The lower and upper bounds of ${e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ and ${e}_{4}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$

In this section we give the lower and upper bounds of ${e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ and ${e}_{4}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$. Firstly we give the lower and upper bounds of ${e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$.

Theorem 4.1 Let ${\beta }^{\ast }\left(r\right)$ and $\stackrel{ˆ}{\beta }$ be given in Equations (7) and (3), and let ${e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ be defined in Equation (16). Then we have

$\frac{{\sum }_{i=1}^{p}{\theta }_{p+1-i}^{-1}{\left(1+r{\zeta }_{i}\right)}^{-1}}{{\sum }_{i=1}^{p}{\theta }_{i}^{-1}}\le {e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)\le \frac{{\sum }_{i=1}^{p}{\theta }_{p+1-i}^{-1}{\left(1+r{\zeta }_{p+1-i}\right)}^{-1}}{{\sum }_{i=1}^{p}{\theta }_{i}^{-1}},$
(40)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${\zeta }_{1}\ge \cdots \ge {\zeta }_{p}$ are the ordered eigenvalues of ${\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}$.

Proof Since ${X}_{1}^{\prime }{Q}_{1}{X}_{1}>0$, there exists an orthogonal matrix ${K}_{1}$ such that

${X}_{1}^{\prime }{Q}_{1}{X}_{1}={K}_{1}^{\prime }\mathrm{\Sigma }{K}_{1},\phantom{\rule{1em}{0ex}}\mathrm{\Sigma }=diag\left({\theta }_{1},\dots ,{\theta }_{p}\right),$
(41)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$. As in Theorem 3.1, we define

$A={\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right){\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}.$

Since $A>0$, there exists an orthogonal matrix ${K}_{2}$ such that

$A={K}_{2}^{\prime }\mathrm{\Delta }{K}_{2},\phantom{\rule{1em}{0ex}}\mathrm{\Delta }=diag\left({\zeta }_{1},\dots ,{\zeta }_{p}\right),$
(42)

where ${\zeta }_{1}\ge \cdots \ge {\zeta }_{p}$ are the ordered eigenvalues of A.

We can easily compute that

$tr\left(Cov\left(\stackrel{ˆ}{\beta }\right)\right)={\sigma }_{1}tr\left({\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right)={\sigma }_{1}\sum _{i=1}^{p}{\theta }_{i}^{-1}$
(43)

and

$\begin{array}{rcl}tr\left(Cov\left({\beta }^{\ast }\left(r\right)\right)\right)& =& {\sigma }_{1}tr\left({\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}+r{X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)}^{-1}\right)\\ =& {\sigma }_{1}tr\left({\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}{\left({I}_{p}+rA\right)}^{-1}{\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1/2}\right)\\ =& {\sigma }_{1}tr\left({\left({I}_{p}+rA\right)}^{-1}{\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right)}^{-1}\right)\\ =& {\sigma }_{1}tr\left({\left({I}_{p}+r\mathrm{\Delta }\right)}^{-1}{K}_{2}{K}_{1}^{\prime }{\mathrm{\Sigma }}^{-1}{K}_{1}{K}_{2}^{\prime }\right)\\ =& {\sigma }_{1}tr\left({\left({I}_{p}+r\mathrm{\Delta }\right)}^{-1}{K}^{\prime }{\mathrm{\Sigma }}^{-1}K\right),\end{array}$
(44)

where $K={K}_{1}{K}_{2}^{\prime }$ is an orthogonal matrix. Thus we have

$\begin{array}{rcl}{e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)& =& \frac{tr\left(Cov\left({\beta }^{\ast }\left(r\right)\right)\right)}{tr\left(Cov\left(\stackrel{ˆ}{\beta }\right)\right)}\\ =& \frac{tr\left({\left({I}_{p}+r\mathrm{\Delta }\right)}^{-1}{K}^{\prime }{\mathrm{\Sigma }}^{-1}K\right)}{{\sum }_{i=1}^{p}{\theta }_{i}^{-1}}.\end{array}$
(45)

Using Lemma 3.2, we have

$\begin{array}{rcl}\sum _{i=1}^{p}{\theta }_{p+1-i}^{-1}{\left(1+r{\zeta }_{i}\right)}^{-1}& \le & tr\left({\left({I}_{p}+r\mathrm{\Delta }\right)}^{-1}{K}^{\prime }{\mathrm{\Sigma }}^{-1}K\right)\\ \le & \sum _{i=1}^{p}{\theta }_{p+1-i}^{-1}{\left(1+r{\zeta }_{p+1-i}\right)}^{-1}.\end{array}$
(46)

Thus

$\frac{{\sum }_{i=1}^{p}{\theta }_{p+1-i}^{-1}{\left(1+r{\zeta }_{i}\right)}^{-1}}{{\sum }_{i=1}^{p}{\theta }_{i}^{-1}}\le {e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)\le \frac{{\sum }_{i=1}^{p}{\theta }_{p+1-i}^{-1}{\left(1+r{\zeta }_{p+1-i}\right)}^{-1}}{{\sum }_{i=1}^{p}{\theta }_{i}^{-1}}.$
(47)

□
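
The bounds (40) can likewise be checked numerically; the inverse square root is computed from an eigendecomposition (the construction is ours, for illustration only), and ${\sigma }_{1}$ again cancels in the ratio:

```python
import numpy as np

rng = np.random.default_rng(6)
p, r = 4, 1.3

def spd(k):
    # random symmetric positive definite matrix
    A = rng.normal(size=(k, k))
    return A @ A.T + k * np.eye(k)

S1, S2 = spd(p), spd(p)                      # stand-ins for X1'Q1X1, X2'Q2X2
w, V = np.linalg.eigh(S1)                    # w ascending
S1_inv_half = V @ np.diag(w ** -0.5) @ V.T   # (X1'Q1X1)^{-1/2}

theta = w[::-1]                              # theta_1 >= ... >= theta_p
A_mat = S1_inv_half @ S2 @ S1_inv_half
zeta = np.sort(np.linalg.eigvalsh(A_mat))[::-1]  # zeta_1 >= ... >= zeta_p

# sigma_1 cancels in e3, so the ratio reduces to a trace ratio
e3 = np.trace(np.linalg.inv(S1 + r * S2)) / np.trace(np.linalg.inv(S1))

denom = np.sum(1.0 / theta)
# pair theta_{p+1-i}^{-1} with (1 + r zeta_i)^{-1} (lower) or with
# (1 + r zeta_{p+1-i})^{-1} (upper), as in Eq. (40)
lower = np.sum((1.0 / theta[::-1]) / (1.0 + r * zeta)) / denom
upper = np.sum((1.0 / theta[::-1]) / (1.0 + r * zeta[::-1])) / denom
```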

Corollary 4.1 Let ${\beta }^{\ast }\left(r\right)$ and $\stackrel{ˆ}{\beta }$ be given in Equations (7) and (3), let ${e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$ be defined in Equation (16), and suppose that ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ commute. Then we have

$\frac{{\theta }_{p}}{{\theta }_{1}+r{\eta }_{1}}\le {e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)\le \frac{{\theta }_{1}}{{\theta }_{p}+r{\eta }_{p}},$
(48)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$.

Proof Since ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ commute, there exists an orthogonal matrix G such that

${G}^{\prime }{X}_{1}^{\prime }{Q}_{1}{X}_{1}G=diag\left({\theta }_{1},\dots ,{\theta }_{p}\right)=\mathrm{\Sigma },$
(49)
${G}^{\prime }{X}_{2}^{\prime }{Q}_{2}{X}_{2}G=diag\left({\eta }_{1},\dots ,{\eta }_{p}\right)=\mathrm{\Omega },$
(50)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$.

By the definition of ${e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)$, we have

$\begin{array}{rcl}{e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)& =& \frac{tr\left(Cov\left({\beta }^{\ast }\left(r\right)\right)\right)}{tr\left(Cov\left(\stackrel{ˆ}{\beta }\right)\right)}\\ =& \frac{{\sum }_{i=1}^{p}{\left({\theta }_{i}+r{\eta }_{i}\right)}^{-1}}{{\sum }_{i=1}^{p}{\theta }_{i}^{-1}}.\end{array}$
(51)

Thus we have

$\frac{{\theta }_{p}}{{\theta }_{1}+r{\eta }_{1}}\le {e}_{3}\left({\beta }^{\ast }\left(r\right)|\stackrel{ˆ}{\beta }\right)\le \frac{{\theta }_{1}}{{\theta }_{p}+r{\eta }_{p}}.$
(52)

□

We can now give the lower and upper bounds of ${e}_{4}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$.

Theorem 4.2 Let ${\beta }^{\ast }\left(r\right)$ and $\stackrel{˜}{\beta }$ be given in Equations (7) and (5), and let ${e}_{4}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$ be defined in Equation (17). Then we have

$\frac{r{\sum }_{i=1}^{p}{\eta }_{p+1-i}^{-1}{\left(r+{\iota }_{i}\right)}^{-1}}{{\sum }_{i=1}^{p}{\eta }_{i}^{-1}}\le {e}_{4}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)\le \frac{r{\sum }_{i=1}^{p}{\eta }_{p+1-i}^{-1}{\left(r+{\iota }_{p+1-i}\right)}^{-1}}{{\sum }_{i=1}^{p}{\eta }_{i}^{-1}},$
(53)

where ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ and ${\iota }_{1}\ge \cdots \ge {\iota }_{p}$ are the ordered eigenvalues of ${\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)}^{-1/2}\left({X}_{1}^{\prime }{Q}_{1}{X}_{1}\right){\left({X}_{2}^{\prime }{Q}_{2}{X}_{2}\right)}^{-1/2}$.

Corollary 4.2 Let ${\beta }^{\ast }\left(r\right)$ and $\stackrel{˜}{\beta }$ be given in Equations (7) and (5), let ${e}_{4}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)$ be defined in Equation (17), and suppose that ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$ commute. Then we have

$\frac{r{\eta }_{p}}{{\theta }_{1}+r{\eta }_{1}}\le {e}_{4}\left({\beta }^{\ast }\left(r\right)|\stackrel{˜}{\beta }\right)\le \frac{r{\eta }_{1}}{{\theta }_{p}+r{\eta }_{p}},$
(54)

where ${\theta }_{1}\ge \cdots \ge {\theta }_{p}$ are the ordered eigenvalues of ${X}_{1}^{\prime }{Q}_{1}{X}_{1}$ and ${\eta }_{1}\ge \cdots \ge {\eta }_{p}$ are the ordered eigenvalues of ${X}_{2}^{\prime }{Q}_{2}{X}_{2}$.

## 5 Concluding remarks

In this article, we have introduced four relative efficiencies in a two linear regression equations system with identical parameter vectors, and we have given lower and upper bounds for each of them.

## References

1. Liu AY: Estimation of the parameters in two linear models with only some of the parameter vectors identical. Stat. Probab. Lett. 1996, 29: 369–375. 10.1016/0167-7152(95)00193-X

2. Ma TF, Wang SG: Estimation of the parameters in a two linear regression equations system with identical parameter vectors. Stat. Probab. Lett. 2009, 79: 1135–1140. 10.1016/j.spl.2008.10.023

3. Yang H: Extensions of the Kantorovich inequality and the error ratio efficiency of the mean square. Math. Appl. 1988, 4: 85–90.

4. Wang SG, Ip WC: A matrix version of the Wielandt inequality and its applications to statistics. Linear Algebra Appl. 1999, 296: 171–181. 10.1016/S0024-3795(99)00117-2

5. Liu SZ: Efficiency comparisons between the OLSE and the BLUE in a singular linear model. J. Stat. Plan. Inference 2000, 84: 191–200. 10.1016/S0378-3758(99)00149-4

6. Liu SZ, Lu CY, Puntanen S: Matrix trace Wielandt inequalities with statistical applications. J. Stat. Plan. Inference 2009, 139: 2254–2260. 10.1016/j.jspi.2008.10.026

7. Yang H, Wang LT: An alternative form of the Watson efficiency. J. Stat. Plan. Inference 2009, 139: 2767–2774. 10.1016/j.jspi.2009.01.002

8. Wang LT, Yang H: Several matrix Euclidean norm inequalities involving Kantorovich inequality. J. Inequal. Appl. 2009., 2009: Article ID 291984

9. Wang LT, Yang H: Matrix Euclidean norm Wielandt inequalities and their applications to statistics. Stat. Pap. 2012, 53: 521–530. 10.1007/s00362-010-0357-y

10. Yang H, Wu JB: Some matrix norm Kantorovich inequalities and their applications. Commun. Stat., Theory Methods 2011, 22: 4078–4085.

11. Wang SG, Jia ZZ: Matrix Inequality. Anhui Education Press, Hefei; 1994.

12. Yuan JC: Relative efficiencies of mixed estimator with respect to LS estimator in linear regression model. J. China Univ. Sci. Technol. 2000, 30: 285–291.

## Acknowledgements

This work was supported by the Scientific Research Foundation of Chongqing University of Arts and Sciences (Grant No. R2013SC12), Program for Innovation Team Building at Institutions of Higher Education in Chongqing (Grant No. KJTD201321), and the National Natural Science Foundation of China (Grant Nos. 71271227, 11201505).

## Author information

Authors

### Corresponding author

Correspondence to Jibo Wu. 