# Estimation in a partially linear single-index model with missing response variables and error-prone covariates

## Abstract

In this paper, we study the partially linear single-index model when the covariate X is measured with additive error and the response variable Y is sometimes missing. Based on the least-squares technique, an imputation method is proposed to estimate the regression coefficients, the single-index coefficients, and the nonparametric function. The asymptotic normality of the corresponding estimators is then established. A simulation experiment and an application to a diabetes study illustrate the proposed method.

## 1 Introduction

We study the partially linear single-index model

$$Y=g\bigl(Z^{T}\alpha\bigr)+X^{T}\beta+ \varepsilon,$$
(1.1)

where Y is a response variable, $$(Z,X)\in R^{p} \times R^{q}$$ is a covariate vector, $$g(\cdot)$$ is an unknown univariate measurable function, ε is a random error with $$E(\varepsilon|Z,X)=0$$, $$\operatorname {Var}(\varepsilon|Z,X)=\sigma^{2}<\infty$$, and $$(\alpha,\beta)$$ is an unknown vector in $$R^{p} \times R^{q}$$ with $$\Vert \alpha \Vert =1$$. The restriction $$\Vert \alpha \Vert =1$$ ensures identifiability.

In recent years, model (1.1) has attracted broad attention because it includes two important semi-parametric models as special cases: the single-index model (Ichimura [1]) and the partially linear model (Engle et al. [2]). Relevant studies of model (1.1) have been carried out by Carroll et al. [3], Yu et al. [4], Liang et al. [5], Xia et al. [6], and Xue et al. [7], all of which are based on the complete data set.

In practice, missing data often arise by design or by accident, and statisticians such as Liu et al. [8] and Lai et al. [9] have paid considerable attention to such problems. Most research on missing data has been carried out under the assumption that the covariates can be observed exactly. In practice, however, observations are often measured with error, as in the papers of Liang et al. [5] and Chen et al. [10]; those measurement-error studies, in turn, are based on the complete data set. It is therefore necessary to study errors-in-variables models with missing responses. Taking both measurement errors in the covariates and missing response variables into account, Liang et al. [11], Wei et al. [12], and Wei [13] have studied the partially linear model, the partially linear additive model, and the partially linear varying-coefficient model, respectively.

A common method of dealing with missing data is the imputation method, which was developed by Wang et al. [14] for the partially linear model. Motivated by Lai et al. [15], this paper focuses on estimating β, α, and the nonparametric function $$g(\cdot)$$ by the imputation method when, in model (1.1), the covariate X is measured with additive error and the response variable Y is sometimes missing. It is assumed that the observation V is a surrogate for X:

$$V=X+U.$$
(1.2)

Here $$\delta=0$$ indicates that Y is missing, and $$\delta=1$$ otherwise. We assume that the measurement error U is independent of $$(Y,Z,X,\delta)$$ with $$E(U)=0$$ and $$\operatorname {cov}(U)=\Sigma_{uu}$$. At first, $$\Sigma_{uu}$$ is assumed known; if it is unknown, it can be estimated by partial replication (Liang et al. [16]). Throughout this paper, we assume the following missing-data mechanism:

$$p(\delta=1|Y, Z, X) = p(\delta=1|Z, X)=\pi(Z, X)$$
(1.3)

for some unknown $$\pi(Z, X)$$. In addition, $$p(\delta=1|Y, Z, X, V)=\pi(Z, X)$$, because the measurement error U is independent of $$(Y, Z, X, \delta)$$. As pointed out by Liang et al. [11], since X is observed with measurement error, Y is not missing at random unless further assumptions are made.
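When replicated surrogates are available, the partial-replication estimate of $$\Sigma_{uu}$$ mentioned above has a simple form: for two replicates $$W_{1}=X+U_{1}$$ and $$W_{2}=X+U_{2}$$ with independent errors, the difference $$W_{1}-W_{2}$$ has variance $$2\Sigma_{uu}$$. A minimal sketch for scalar X (the function name is ours, not from Liang et al. [16]):

```python
import numpy as np

def sigma_uu_from_replicates(w1, w2):
    """Estimate the measurement-error variance from two replicates
    w_j = x + u_j: the differences w1 - w2 have variance 2 * sigma_uu."""
    d = w1 - w2
    return np.var(d, ddof=1) / 2.0
```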

The rest of this paper is organized as follows. In Section 2, the imputation method is used to estimate the parameters and the nonparametric function. In Section 3, the asymptotic results are presented. In Section 4, simulations are conducted to illustrate the proposed approach, and we apply our method to a diabetes data set. All proofs are given in Section 5.

## 2 Methodology

In the following, let $$\{(Y_{i}, Z_{i}, X_{i}, V_{i}, U_{i}, \delta_{i}), i=1,2,\ldots,n\}$$ be independent and identically distributed, and write $$A^{\otimes2} = A\cdot A^{T}$$.

### 2.1 Complete method

In order to derive the imputation estimators, we first define the complete estimators of β, α, and the nonparametric function $$g(\cdot)$$. Note that $$\delta_{i}Y_{i} = \delta_{i}g(Z_{i}^{T}\alpha) + \delta_{i} X_{i}^{T}\beta+ \delta_{i}\varepsilon_{i}$$. Taking conditional expectations given $$Z_{i}^{T}\alpha$$, from the assumptions we have

$$E\bigl(\delta_{i}Y_{i}|Z_{i}^{T} \alpha\bigr) = E\bigl(\delta_{i}|Z_{i}^{T}\alpha \bigr)g\bigl(Z_{i}^{T}\alpha\bigr) + E\bigl( \delta_{i}X_{i}|Z_{i}^{T}\alpha \bigr)^{T}\beta.$$

Multiplying both sides of model (1.1) by $$E(\delta_{i}|Z_{i}^{T}\alpha)$$, we obtain

$$E\bigl(\delta_{i}|Z_{i}^{T}\alpha \bigr)Y_{i} = E\bigl(\delta_{i}|Z_{i}^{T} \alpha\bigr)g\bigl(Z_{i}^{T}\alpha\bigr) + E\bigl( \delta_{i}|Z_{i}^{T}\alpha\bigr)X_{i}^{T} \beta+ E\bigl(\delta_{i}|Z_{i}^{T}\alpha\bigr) \varepsilon_{i}.$$

Then, after some straightforward calculations, we get

$$\delta_{i}\bigl[Y_{i}-m_{2} \bigl(Z_{i}^{T}\alpha\bigr)\bigr] = \delta_{i} \bigl[X_{i}-m_{1}\bigl(Z_{i}^{T}\alpha \bigr)\bigr]^{T}\beta+ \delta_{i}\varepsilon_{i},$$
(2.1)

where $$m_{1}(t)=\frac{E(\delta X|Z^{T}\alpha=t)}{E(\delta|Z^{T}\alpha=t)}$$ and $$m_{2}(t)=\frac{E(\delta Y|Z^{T}\alpha=t)}{E(\delta|Z^{T}\alpha=t)}$$. If $$m_{1}(t)$$ and $$m_{2}(t)$$ were known and the $$X_{i}$$ observed, then according to (2.1) the least-squares estimator of β could be defined as

\begin{aligned} \hat{\beta} =& \Biggl\{ {\frac{1}{n}\sum_{i=1}^{n} \delta_{i}\bigl[X_{i}-m_{1}\bigl(Z_{i}^{T} \alpha \bigr)\bigr]^{\otimes2}}\Biggr\} ^{-1} \\ &{}\cdot\Biggl\{ {\frac{1}{n}\sum_{i=1}^{n} \delta_{i}\bigl[X_{i}-m_{1}\bigl(Z_{i}^{T} \alpha \bigr)\bigr] \bigl[Y_{i}-m_{2}\bigl(Z_{i}^{T} \alpha\bigr)\bigr]}\Biggr\} . \end{aligned}

However, the $$X_{i}$$ are measured with error, and $$m_{1}(Z_{i}^{T}\alpha)$$, $$m_{2}(Z_{i}^{T}\alpha)$$ are unknown. From our assumptions it follows that $$E(\delta V|Z^{T}\alpha)=E(\delta X|Z^{T}\alpha)$$. Therefore, the estimator of β obtained by the correction-for-attenuation technique can be defined as

\begin{aligned} \hat{\beta}_{n} =& \Biggl\{ {\frac{1}{n}\sum _{i=1}^{n} \delta_{i} \bigl[V_{i}-\hat {m}_{3}\bigl(Z_{i}^{T} \alpha\bigr)\bigr]^{\otimes2}}-\Sigma_{uu}\Biggr\} ^{-1} \\ &{}\cdot\Biggl\{ {\frac{1}{n}\sum_{i=1}^{n} \delta_{i}\bigl[V_{i}-\hat{m}_{3} \bigl(Z_{i}^{T}\alpha \bigr)\bigr] \bigl[Y_{i}- \hat{m}_{2}\bigl(Z_{i}^{T}\alpha\bigr)\bigr]}\Biggr\} , \end{aligned}
(2.2)

where $$\hat{m}_{2}(Z_{i}^{T}\alpha)$$ and $$\hat{m}_{3}(Z_{i}^{T}\alpha)$$ are the estimators of $$m_{2}(Z_{i}^{T}\alpha)$$ and $$m_{3}(Z_{i}^{T}\alpha)$$, respectively, and $$m_{3}(t)=\frac{E(\delta V|Z^{T}\alpha=t)}{E(\delta|Z^{T}\alpha=t)}$$. Let $$K_{h_{1}}(t)=\frac{K_{1}(\frac{t}{h_{1}})}{h_{1}}$$, with $$K_{1}(\cdot)$$ being a kernel function and $$h_{1}$$ being a suitable bandwidth. These estimators are defined as

$$\hat{m}_{2}(t) = \sum_{i=1}^{n} \frac{\delta_{i} K_{h_{1}}(Z_{i}^{T}\alpha-t)}{\sum_{i=1}^{n} \delta_{i} K_{h_{1}}(Z_{i}^{T}\alpha-t)} Y_{i}, \qquad \hat{m}_{3}(t) = \sum _{i=1}^{n} \frac{\delta_{i} K_{h_{1}}(Z_{i}^{T}\alpha-t)}{\sum_{i=1}^{n} \delta_{i} K_{h_{1}}(Z_{i}^{T}\alpha-t)} V_{i}.$$
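For concreteness, the complete-case smoothers and the corrected estimator (2.2) can be sketched as follows in the scalar-X case ($$q=1$$), using the quartic kernel later adopted in Section 4; all function names are illustrative:

```python
import numpy as np

def quartic_kernel(u):
    """K(u) = (15/16)(1 - u^2)^2 on [-1, 1]; the kernel used in Section 4."""
    return np.where(np.abs(u) <= 1, 15.0 / 16.0 * (1.0 - u**2) ** 2, 0.0)

def m_hat(t, index, resp, delta, h):
    """Complete-case Nadaraya-Watson smoother: estimates m_2 (resp = Y)
    or m_3 (resp = V) at the point t; index holds the Z_i^T alpha values."""
    w = delta * quartic_kernel((index - t) / h)
    return np.dot(w, resp) / np.sum(w)

def beta_hat_n(Y, V, index, delta, h, sigma_uu):
    """Correction-for-attenuation estimator (2.2) in the scalar-X case:
    subtracting sigma_uu in the denominator removes the attenuation bias."""
    m3 = np.array([m_hat(t, index, V, delta, h) for t in index])
    m2 = np.array([m_hat(t, index, Y, delta, h) for t in index])
    num = np.mean(delta * (V - m3) * (Y - m2))
    den = np.mean(delta * (V - m3) ** 2) - sigma_uu
    return num / den
```

Without the $$\Sigma_{uu}$$ correction, the denominator would converge to $$\Gamma_{\widetilde{X}}$$ plus an extra measurement-error term, and the naive estimator would be biased toward zero.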

After obtaining the estimator of β, we estimate $$g(\cdot)$$ and $$g'(\cdot)$$ for any fixed α, based on $$\hat{\beta}_{n}$$. The model then reduces to the single-index model $$Y-X^{T}\beta=g(Z^{T}\alpha)+\varepsilon$$. Taking conditional expectations given $$Z^{T}\alpha$$, under the previous assumptions we have $$g(t, \alpha, \beta)= E(Y-X^{T} \beta|Z^{T} \alpha=t)= E(Y-V^{T} \beta|Z^{T} \alpha=t)$$. Thus no correction is needed when estimating $$g(\cdot)$$. By a local linear method, we approximate $$g(t)$$ within a neighborhood of $$t_{0}$$ by $$g(t)\approx g(t_{0})+g'(t_{0})(t-t_{0})$$. We then obtain the estimators of $$g(\cdot)$$ and $$g'(\cdot)$$ by minimizing

$$\min_{g(t_{0}), g'(t_{0})} \sum_{i=1}^{n} \bigl[Y_{i}-V_{i}^{T}\hat{\beta }_{n}-g(t_{0})-g'(t_{0}) (t_{i}-t_{0})\bigr]^{2} K_{h_{2}}(t_{i}-t_{0}) \delta_{i},$$

where $$K_{h_{2}}(t)=\frac{K_{2}(\frac{t}{h_{2}})}{h_{2}}$$, with $$K_{2}(\cdot)$$ being a kernel function and $$h_{2}$$ being a suitable bandwidth. Through a direct calculation, we have

$$\left ( \textstyle\begin{array}{@{}c@{}} \hat{g}_{n}(t_{0}) \\ h_{2}\hat{g}_{n}'(t_{0}) \end{array}\displaystyle \right ) = \frac{1}{n} \sum_{i=1}^{n} \biggl( \frac{1}{n}B_{0}^{T} S_{1} B_{0} \biggr)^{-1} B_{i0}\delta _{i} K_{h_{2}}(t_{i}-t_{0}) \bigl(Y_{i}-V_{i}^{T}\hat{\beta}_{n} \bigr),$$
(2.3)

where

\begin{aligned}& B_{i0}=\left ( \textstyle\begin{array}{@{}c@{}} 1 \\ \frac{t_{i}-t_{0}}{h_{2}} \end{array}\displaystyle \right ),\quad i=1, 2, \ldots, n, \qquad B_{0}=\left ( \textstyle\begin{array}{@{}c@{}}B_{10}^{T}\\ \vdots\\ B_{n0}^{T} \end{array}\displaystyle \right ),\\& S_{1}=\left ( \textstyle\begin{array}{@{}ccc@{}}\delta_{1} K_{h_{2}}(t_{1}-t_{0})&& \\ & \ddots& \\ &&\delta _{n} K_{h_{2}}(t_{n}-t_{0}) \end{array}\displaystyle \right ). \end{aligned}
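The solution (2.3) is an ordinary weighted least-squares fit. A minimal sketch, with `resid` standing for $$Y_{i}-V_{i}^{T}\hat{\beta}_{n}$$ and names of our choosing:

```python
import numpy as np

def local_linear(t0, t, resid, delta, h):
    """Solve (2.3): weighted LS fit of resid on (1, (t_i - t0)/h) with
    weights delta_i * K_h(t_i - t0); returns (g_hat(t0), g_hat'(t0))."""
    u = (t - t0) / h
    k = np.where(np.abs(u) <= 1, 15.0 / 16.0 * (1.0 - u**2) ** 2, 0.0) / h
    w = delta * k                                  # diagonal of S_1
    B = np.column_stack([np.ones_like(u), u])      # rows B_{i0}
    A = B.T @ (w[:, None] * B)                     # B_0^T S_1 B_0
    b = B.T @ (w * resid)
    g0, hg1 = np.linalg.solve(A, b)                # (g(t0), h * g'(t0))
    return g0, hg1 / h
```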

In order to apply the above formulas, we need an estimate of α, which can be obtained from

$$\hat{\alpha}_{n}=\min_{\alpha} \sum _{i=1}^{n} \delta_{i} \bigl[Y_{i}-V_{i}^{T}\hat{\beta }_{n}- \hat{g}_{n}\bigl(Z_{i}^{T}\alpha\bigr) \bigr]^{2}.$$
(2.4)

The complete estimation procedure consists of the following steps:

Step 0.:

Select an initial value $$\hat{\alpha}_{0}$$, for example by an available method such as the complete-data estimation method proposed by Xia et al. [6], and set $$\hat{\alpha}_{n} = \frac{\hat{\alpha}_{0}}{\Vert \hat{\alpha}_{0} \Vert }$$.

Step 1.:

Based on (2.2) and (2.3), obtain $$\hat{\beta}_{nk}$$ and $$\hat{g}_{nk}(\cdot)$$ with $$\alpha=\hat{\alpha}_{n}$$.

Step 2.:

Denote the solution of (2.4) by $$\hat{\alpha}_{n(k+1)}$$, and set $$\hat{\alpha}_{n}= \frac{\hat{\alpha}_{n(k+1)}}{\Vert \hat{\alpha}_{n(k+1)} \Vert }$$.

Step 3.:

Iterate Steps 1 and 2 until convergence is achieved.
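Step 2 is a minimization over unit vectors. For $$p=2$$ the constraint $$\Vert \alpha \Vert =1$$ can be absorbed by writing $$\alpha=(\cos a, \sin a)^{T}$$; the sketch below minimizes (2.4) by a coarse grid search over the angle, assuming for illustration that the current estimate of $$g(\cdot)$$ and the residuals $$Y_{i}-V_{i}^{T}\hat{\beta}_{n}$$ are already in hand (names are ours):

```python
import numpy as np

def update_alpha(resid, Z, g_hat, delta, n_grid=720):
    """One pass of Step 2 for p = 2: minimize (2.4) over unit vectors
    alpha = (cos a, sin a) by grid search; resid = Y - V^T beta_hat,
    g_hat is the current estimate of g (a callable)."""
    best_alpha, best_loss = None, np.inf
    for a in np.linspace(0.0, np.pi, n_grid):  # alpha and -alpha give the same index line
        alpha = np.array([np.cos(a), np.sin(a)])
        r = resid - g_hat(Z @ alpha)
        loss = np.sum(delta * r**2)            # objective (2.4), complete cases only
        if loss < best_loss:
            best_alpha, best_loss = alpha, loss
    return best_alpha
```

In practice a derivative-based optimizer on the sphere would replace the grid search; the sketch only illustrates the objective being minimized.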

### 2.2 Imputation method

In this part, we use the imputation technique to estimate β, α, and the nonparametric function $$g(\cdot)$$. The advantage of this method is that all of the data are used. First, we obtain $$\hat{\beta}_{n}$$, $$\hat{\alpha}_{n}$$, and $$\hat{g}_{n}(\cdot)$$ by the complete method. Let $$Y_{i}^{\circ}= \delta_{i} Y_{i} + (1-\delta_{i})[g(Z_{i}^{T} \alpha) + V_{i}^{T} \beta]$$; that is, $$Y_{i}^{\circ}= Y_{i}$$ if $$\delta_{i}=1$$, and $$Y_{i}^{\circ}= g(Z_{i}^{T} \alpha) + V_{i}^{T} \beta$$ otherwise. From (1.3), we have $$E(Y^{\circ}|Z,X) = g(Z^{T} \alpha) + X^{T} \beta$$. This implies

$$Y_{i}^{\circ}= g\bigl(Z_{i}^{T} \alpha\bigr) + X_{i}^{T} \beta+ e_{i},$$
(2.5)

where $$E(e_{i}|Z_{i},X_{i})=0$$. This is exactly the form of the partially linear single-index model. Therefore, the least-squares estimator of β can be defined as

\begin{aligned} \breve{\beta} =& \Biggl\{ {\frac{1}{n}\sum_{i=1}^{n} \bigl[X_{i}-E\bigl(X|Z_{i}^{T}\alpha \bigr) \bigr]^{\otimes2}}\Biggr\} ^{-1} \\ &{}\cdot\Biggl\{ {\frac{1}{n}\sum_{i=1}^{n} \bigl[X_{i}-E\bigl(X|Z_{i}^{T}\alpha\bigr)\bigr] \bigl[Y_{i}^{\circ}- E\bigl(Y^{\circ}|Z_{i}^{T} \alpha\bigr)\bigr]}\Biggr\} . \end{aligned}

However, since the $$X_{i}$$ are measured with error, the $$Y_{i}^{\circ}$$ cannot be computed exactly. Let $$Y_{i}^{\ast}= \delta_{i} Y_{i} + (1-\delta_{i})[\hat{g}_{n}(Z_{i}^{T} \hat{\alpha}_{n}) + V_{i}^{T} \hat{\beta}_{n}]$$, which serves as an estimate of $$Y_{i}^{\circ}$$. Based on the correction-for-attenuation technique, the imputation estimator of β can be defined as

\begin{aligned}[b] \breve{\beta}_{n} ={}& \Biggl\{ {\frac{1}{n}\sum _{i=1}^{n} \bigl[V_{i}-\hat{E} \bigl(V|Z_{i}^{T}\alpha \bigr)\bigr]^{\otimes2}}- \Sigma_{uu}\Biggr\} ^{-1} \\ &{}\cdot\Biggl\{ {\frac{1}{n}\sum_{i=1}^{n} \bigl[V_{i}-\hat{E}\bigl(V|Z_{i}^{T}\alpha\bigr) \bigr] \bigl[Y_{i}^{\ast}-\hat{E}\bigl(Y^{\ast}|Z_{i}^{T} \alpha\bigr)\bigr] -\frac{1}{n}\sum_{i=1}^{n} (1-\delta_{i})\Sigma_{uu}\hat{\beta}_{n}}\Biggr\} , \end{aligned}
(2.6)

where $$\hat{E}(V|Z_{i}^{T}\alpha)$$ and $$\hat{E}(Y^{\ast}|Z_{i}^{T}\alpha)$$ are the estimators of $$E(V|Z_{i}^{T}\alpha)$$ and $$E(Y^{\ast}|Z_{i}^{T}\alpha)$$, respectively. Let $$K_{h_{3}}(t)=\frac{K_{3}(\frac{t}{h_{3}})}{h_{3}}$$, with $$K_{3}(\cdot)$$ being a kernel function and $$h_{3}$$ being a suitable bandwidth. These estimators are defined as

$$\hat{E}(V|t) = \sum_{i=1}^{n} \frac{K_{h_{3}}(Z_{i}^{T}\alpha-t)}{\sum_{i=1}^{n} K_{h_{3}}(Z_{i}^{T}\alpha-t)} V_{i}, \qquad \hat{E}\bigl(Y^{\ast}|t\bigr) = \sum _{i=1}^{n} \frac{ K_{h_{3}}(Z_{i}^{T}\alpha-t)}{\sum_{i=1}^{n} K_{h_{3}}(Z_{i}^{T}\alpha-t)} Y_{i}^{\ast}.$$
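For the scalar-X case, the imputation estimator (2.6) can be sketched as follows; the smoothers now run over all observations, not only the complete cases, and the correction term $$\frac{1}{n}\sum_{i}(1-\delta_{i})\Sigma_{uu}\hat{\beta}_{n}$$ is subtracted in the numerator. Names are illustrative:

```python
import numpy as np

def nw_all(t, index, resp, h):
    """Nadaraya-Watson smoother over ALL observations,
    used for E(V | t) and E(Y* | t) with the quartic kernel."""
    u = (index - t) / h
    k = np.where(np.abs(u) <= 1, 15.0 / 16.0 * (1.0 - u**2) ** 2, 0.0)
    return np.dot(k, resp) / np.sum(k)

def beta_breve_n(Y, V, index, delta, g_hat_vals, beta_hat, h, sigma_uu):
    """Imputation estimator (2.6), scalar-X case; g_hat_vals are the fitted
    values g_hat_n(Z_i^T alpha_hat_n) from the complete-case step."""
    y_star = delta * Y + (1 - delta) * (g_hat_vals + V * beta_hat)  # imputed responses
    ev = np.array([nw_all(t, index, V, h) for t in index])
    ey = np.array([nw_all(t, index, y_star, h) for t in index])
    num = np.mean((V - ev) * (y_star - ey)) - np.mean((1 - delta) * sigma_uu * beta_hat)
    den = np.mean((V - ev) ** 2) - sigma_uu
    return num / den
```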

Similarly, we obtain the imputation estimators of $$g(t)$$ and $$g'(t)$$ by

$$\min_{g(t_{0}), g'(t_{0})} \sum_{i=1}^{n} \bigl[Y_{i}^{\ast}-V_{i}^{T} \breve{\beta }_{n}-g(t_{0})-g'(t_{0}) (t_{i}-t_{0})\bigr]^{2} K_{h_{4}}(t_{i}-t_{0}),$$
(2.7)

where $$K_{h_{4}}(t)=\frac{K_{4}(\frac{t}{h_{4}})}{h_{4}}$$, with $$K_{4}(\cdot)$$ being a kernel function and $$h_{4}$$ being a suitable bandwidth. Through a direct calculation, we have

$$\left ( \textstyle\begin{array}{@{}c@{}}\breve{g}_{n}(t_{0}) \\ h_{4}\breve{g}_{n}'(t_{0}) \end{array}\displaystyle \right ) = \frac{1}{n} \sum_{i=1}^{n} \biggl( \frac{1}{n}B_{2}^{T} S_{2} B_{2} \biggr)^{-1} B_{i2} K_{h_{4}}(t_{i}-t_{0}) \bigl(Y_{i}^{\ast}-V_{i}^{T}\breve{ \beta}_{n}\bigr),$$
(2.8)

where

\begin{aligned}& B_{i2}=\left ( \textstyle\begin{array}{@{}c@{}}1 \\ \frac{t_{i}-t_{0}}{h_{4}} \end{array}\displaystyle \right ), \quad i=1, 2, \ldots, n,\qquad B_{2}=\left ( \textstyle\begin{array}{@{}c@{}}B_{12}^{T}\\ \vdots\\ B_{n2}^{T} \end{array}\displaystyle \right ),\\& S_{2}=\left ( \textstyle\begin{array}{@{}ccc@{}} K_{h_{4}}(t_{1}-t_{0})&& \\ & \ddots& \\ && K_{h_{4}}(t_{n}-t_{0}) \end{array}\displaystyle \right ). \end{aligned}

As in the complete case, in order to use (2.6) and (2.8) we must first estimate α by minimizing the sum of squared errors

$$\min_{\alpha} \sum_{i=1}^{n} \bigl[Y_{i}^{\ast}-V_{i}^{T}\breve{ \beta}_{n}-\breve {g}_{n}\bigl(Z_{i}^{T} \alpha\bigr)\bigr]^{2},$$
(2.9)

say $$\breve{\alpha}_{n}$$. The iteration then proceeds as in the complete case.

## 3 Asymptotic results

In this section, the main results of this paper are summarized. For a concise representation, let $$\widetilde{\mathcal{S}}=\mathcal{S}- \frac{E(\delta\mathcal {S}|Z^{T}\alpha=t)}{E(\delta|Z^{T}\alpha=t)}$$ and $$\widetilde{\widetilde{\mathcal{S}}}=\mathcal{S}- E(\mathcal {S}|Z^{T}\alpha=t)$$, for example, $$\widetilde{X}=X-\frac{E(\delta X|Z^{T}\alpha= t)}{E(\delta |Z^{T}\alpha=t)}=X-m_{1}(t)$$, $$\widetilde{\widetilde{X}}=X-E( X|Z^{T}\alpha=t)$$. Moreover, in order to state the asymptotic results, the following assumptions will be used.

(C1):

The matrix $$\Gamma_{X|Z}=E\{\delta[X-m_{1}(t)]^{\otimes2}\}$$ is positive definite.

(C2):

Each entry of the Hessian matrices of $$m_{1}(t)$$ and $$m_{2}(t)$$ is continuous and squared integrable, where the $$(i, j)$$ entry of a Hessian matrix of $$g(z)$$ is defined as $$\frac{\partial^{2} g(z)}{\partial z_{i}\, \partial z_{j}}$$.

(C3):

The bandwidths are of order $$n^{-\frac{1}{p+4}}$$, where p is the dimension of Z.

(C4):

The kernels $$K_{i}(\cdot)$$, $$i=1,2,3,4$$, are bounded symmetric density functions with compact support $$[-1,1]$$ satisfying $$\int uK_{i}(u)\,du=0$$ and $$\int u^{2}K_{i}(u)\,du\neq0$$.

(C5):

The density function $$f(t)$$ of $$Z^{T}\alpha$$ is bounded away from 0 and has two bounded derivatives on its support.

(C6):

$$g(\cdot)$$, $$m_{2}(\cdot)$$, $$m_{3}(\cdot)$$, $$E(V|\cdot)$$, $$E(Y^{*}|\cdot)$$ have two bounded, continuous derivatives on their supports.

(C7):

The probability function $$\pi(Z, X)$$ has bounded continuous second partial derivatives, and is bounded away from zero on the support of $$(Z,X)$$.

(C8):

$$E(\vert \varepsilon \vert ^{4})<\infty$$, $$E(\vert U\vert ^{3})<\infty$$.

We now state the asymptotic results.

### Theorem 3.1

Assume that the conditions (C1)-(C8) are satisfied; then we obtain

$$\sqrt{n}(\breve{\beta}_{n}-\beta) \rightarrow N\bigl(0, \Sigma_{\widetilde {\widetilde{X}}}^{-1} \Sigma_{\beta^{*}} \Sigma_{\widetilde{\widetilde{X}}}^{-1} \bigr),$$

where $$\Sigma_{\widetilde{\widetilde{X}}}=E\{ \widetilde{\widetilde {X}}^{\otimes2}\}$$, $$\Sigma_{\beta^{*}}=E[\{(\Gamma_{\widetilde{X}}+\Sigma_{1}-\Sigma_{2}\Gamma _{\widetilde{Z}}^{-1}\Gamma_{\widetilde{Z}\widetilde{X}})\Gamma _{\widetilde{X}}^{-1} \cdot\delta(\widetilde{X}(\varepsilon-U^{T}\beta)+ \varepsilon U-(UU^{T}-\Sigma_{uu})\beta)-\Sigma_{2} \Gamma_{\widetilde {Z}}^{-1}\delta\widetilde{Z}g'(Z^{T}\alpha)(\varepsilon-U^{T}\beta) \}^{\otimes2}]$$, with $$\Sigma_{1}=E\{(1-\delta)\widetilde{\widetilde {X}}\widetilde{X}^{T}\}$$ and $$\Sigma_{2}=E\{(1-\delta)\widetilde{\widetilde{X}} [\widetilde{Z }g'(Z^{T}\alpha)]^{T}\}$$.

### Theorem 3.2

Suppose that the conditions (C1)-(C8) are satisfied; then we have

\begin{aligned} \sqrt{n}(\breve{\alpha}_{n}-\alpha) \rightarrow N \bigl(0,\Sigma_{\widetilde {\widetilde{Z}}\widetilde{\widetilde{X}}}^{-1} \Sigma_{\alpha^{*}} \Sigma_{\widetilde{\widetilde{Z}}\widetilde{\widetilde {X}}}^{-1}\bigr), \end{aligned}

where $$\Sigma_{\widetilde{\widetilde{Z}}\widetilde{\widetilde{X}}}=E\{ \widetilde{\widetilde{Z}}{\widetilde{\widetilde{X}}}^{T} g'(t_{0})\}$$, $$\Sigma_{\alpha^{*}}=E[(Q+P)^{\otimes2}]$$, with Q and P given in (5.30) and (5.31) of Section 5, respectively.

### Theorem 3.3

Suppose that the conditions (C1)-(C8) hold; then we have

$$\sqrt{nh_{4}}\bigl(\breve{g}_{n}(t_{0}; \breve{\alpha}_{n},\breve{\beta}_{n})-g(t_{0}) \bigr) \rightarrow N\biggl(0,\frac{\mu(t_{0})\gamma_{2}(K_{4})\Sigma_{g}}{f(t_{0})}\biggr),$$

where $$\gamma_{2}(K_{4})=\int K_{4}^{2}(u)\,du$$, $$\mu(t_{0})=E(\delta|Z^{T}\alpha=t_{0})$$, and $$\Sigma_{g}=\sigma^{2}+\beta^{T}\Sigma_{uu}\beta$$.

## 4 Numerical examples

### 4.1 Simulation

In this subsection, we carry out some Monte Carlo experiments to show the finite sample performance of the proposed method. The set of data is generated from the following model:

$$Y_{i}=\sin\bigl(\pi\cdot Z^{T}_{i}\alpha \bigr)+X_{i}\beta+\varepsilon_{i},\qquad V_{i}=X_{i}+U_{i},\quad 1\leq i\leq n,$$

where $$\alpha=\frac{1}{\sqrt{3}}(1,1,1)^{T}$$, $$\beta=1$$, $$X_{i}\sim N(0,1)$$, $$\varepsilon_{i} \sim N(0,0.01)$$, $$U_{i} \sim N(0,0.04)$$, and the $$Z_{i}$$ are trivariate with independent $$U(0,1)$$ components. Throughout this section, the kernel $$K_{i}(t)=\frac{15}{16}(1-t^{2})^{2}$$ for $$\vert t\vert \leq1$$ ($$i=1,2,3,4$$) is used, with corresponding bandwidths $$h_{i}$$ ($$i=1,2,3,4$$).
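The data-generating mechanism can be sketched as follows, shown with a constant observation probability of 0.9 (Case 3 below); function and variable names are ours:

```python
import numpy as np

def simulate(n, p_obs=0.9, seed=0):
    """One sample from the simulation model: Y = sin(pi * Z^T alpha) + X + eps,
    V = X + U, with alpha = (1,1,1)/sqrt(3) and beta = 1."""
    rng = np.random.default_rng(seed)
    alpha = np.ones(3) / np.sqrt(3.0)
    Z = rng.uniform(0.0, 1.0, (n, 3))
    X = rng.normal(0.0, 1.0, n)
    eps = rng.normal(0.0, 0.1, n)    # Var(eps) = 0.01
    U = rng.normal(0.0, 0.2, n)      # Var(U)   = 0.04
    Y = np.sin(np.pi * (Z @ alpha)) + X + eps
    V = X + U
    delta = (rng.uniform(size=n) < p_obs).astype(int)  # Case 3: P(delta = 1) = 0.9
    return Y, Z, V, delta
```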

Based on this model, we consider the following three missing-data mechanisms for the response:

Case 1.:

$$P(\delta=1|Z=z, X=x)=0.7+0.1(\vert z^{T}\alpha-0.5\vert +\vert x-1\vert )$$ if $$\vert z^{T}\alpha-0.5\vert +\vert x-1\vert \leq1$$, and 0.68 elsewhere;

Case 2.:

$$P(\delta=1|Z=z, X=x)=0.9-0.2(\vert z^{T}\alpha-0.5\vert +\vert x-1\vert )$$ if $$\vert z^{T}\alpha-0.5\vert +\vert x-1\vert \leq1$$, and 0.81 elsewhere;

Case 3.:

$$P(\delta=1|Z=z, X=x)=0.9$$ for all z and x.

The average missing rates are 0.30, 0.20, and 0.10, respectively. For each case, we generated 1000 random samples of sizes $$n=50, 100, 150$$. The estimators of α and β with their standard errors (SE), obtained by the two methods under the different missing mechanisms, are reported in Tables 1 and 2. The mean integrated squared errors (MISE) of $$g(\cdot)$$ under the different missing mechanisms, obtained by the two methods, are reported in Table 3.

As expected, the results fit the theory fairly well. Tables 1 and 2 show that, for each case, the estimators from both the complete method and the imputation method are close to the true values, with small standard errors. Furthermore, the imputation estimators of α and β have smaller bias and SE than the complete estimators. For any fixed missing rate, the bias and SE of the estimators decrease as the sample size increases; for any fixed sample size, they decrease as the missing rate decreases. Table 3 shows that the imputation estimator $$\breve{g}(\cdot)$$ performs better than the complete estimator $$\hat{g}(\cdot)$$ in terms of MISE.

### 4.2 Application to diabetes data

In this part, we illustrate the proposed method through an analysis of a data set from a diabetes study. Using a partially linear additive model, Gai et al. [17] analyzed this data set, which contains 442 observations on diabetes patients. The response variable Y is a quantitative measure of disease progression one year after baseline. The covariates include age, body mass index (BMI), average blood pressure (BP), and glucose concentration. In our notation, $$Z=(\mathit{age}, \mathit{BMI}, \mathit{BP})^{T}$$ and X is the glucose concentration, which is measured with error. We have two replicates of W, the error-prone measurement of the glucose concentration, and we use them to estimate the measurement-error variance. The precise procedures, including the modified asymptotic variances for α and β, are described in Section 5 of Liang et al. [16]. We carry out a sensitivity analysis with $$\sigma_{uu}=0.0161$$. To demonstrate our method on this data set, we artificially treat 20% of the Y values as missing.

The parameter estimates obtained by the complete method and the imputation method are presented in Table 4. The imputation estimators have smaller standard errors than the complete estimators.

## 5 Proofs of the main results

In order to prove the main results, we first give some lemmas.

### Lemma 5.1

Assume that the conditions (C1)-(C8) hold; then we have

$$E\bigl\{ \hat{\varphi}\bigl(Z^{T}\alpha\bigr)-\varphi \bigl(Z^{T}\alpha\bigr)\bigr\} ^{2}=O\bigl((nh_{1})^{-1}+h_{1}^{4} \bigr),$$

where $$\varphi(\cdot)$$ denotes any one of $$m_{1}(\cdot)$$, $$m_{2}(\cdot)$$, $$m_{3}(\cdot)$$, and $$\hat{\varphi}(\cdot)$$ denotes its estimator.

The proof of Lemma 5.1 follows from the work of Mark et al. [18] and Theorems 1 and 2 of Einmahl et al. [19].

### Lemma 5.2

Assume that the conditions (C1)-(C8) hold; then we have

$$\sqrt{n}(\hat{\beta}_{n}-\beta) \rightarrow N\bigl(0, \Gamma^{-1}_{\widetilde {X}} \Sigma_{\beta} \Gamma^{-1}_{\widetilde{X}} \bigr),$$

where $$\Gamma_{\widetilde{X}}=E\{\delta\widetilde{X}^{\otimes2}\}$$, $$\Sigma_{\beta}=E\{\delta[(\varepsilon-U^{T} \beta)\widetilde {X}]^{\otimes2}\}+ E\{\delta[(UU^{T}-\Sigma_{uu})\beta]^{\otimes2}\}+ E[\delta(UU^{T} \varepsilon^{2})]$$.

The proof of Lemma 5.2 is similar to that of Theorem 1 of Liang et al. [11], so the details are omitted here.

### Lemma 5.3

Assume that the conditions (C1)-(C8) hold; then we have

$$\sqrt{n}(\hat{\alpha}_{n}-\alpha) \rightarrow N\bigl(0, \Gamma^{-1}_{\widetilde {Z}} \Sigma_{\alpha} \Gamma^{-1}_{\widetilde{Z}} \bigr),$$

where $$\Gamma_{\widetilde{Z}}=E\{\delta[\widetilde{Z}g'(t_{0})]^{\otimes2}\}$$, $$\Sigma_{\alpha}=E\{\delta\{[\widetilde{Z}g'(t_{0})-\Gamma_{\widetilde{Z}\widetilde{X}}\Gamma^{-1}_{\widetilde{X}}\widetilde{X}](\varepsilon-U^{T}\beta)+\Gamma_{\widetilde{Z}\widetilde{X}}\Gamma^{-1}_{\widetilde{X}}[(UU^{T}-\Sigma_{uu})\beta-U\varepsilon]\}^{\otimes2}\}$$, with $$\Gamma_{\widetilde{Z}\widetilde{X}}=E[\delta\widetilde{Z}\widetilde{X}^{T} g'(t_{0})]$$.

The proof of Lemma 5.3 uses a method similar to that of Theorem 2.2 of Liang et al. [5]. Here we only give some key steps. First, we derive the following expansion:

\begin{aligned} &\hat{g}_{n}(t_{0},\hat{ \alpha}_{n},\hat{\beta}_{n})-g(t_{0}) \\ &\quad =\frac{1}{n}\cdot\frac{1}{f(t_{0})\mu(t_{0})} \sum_{i=1}^{n} \delta_{i} K_{h_{2}}\bigl(Z_{i}^{T} \alpha-t_{0}\bigr) \bigl(\varepsilon_{i}-U_{i}^{T} \beta \bigr) \\ &\qquad {}-(\hat{\beta}_{n}-\beta)^{T}\frac{E(\delta X|Z^{T} \alpha=t_{0})}{E(\delta |Z^{T} \alpha=t_{0})} -(\hat{ \alpha}_{n}-\alpha)^{T}\frac{E(\delta Z g'(Z^{T}\alpha)|Z^{T} \alpha =t_{0})}{E(\delta|Z^{T} \alpha=t_{0})} \\ &\qquad {}+o_{p}\biggl(\frac{1}{\sqrt{n}}\biggr)+O_{p} \bigl(h_{2}^{2}\bigr), \end{aligned}
(5.1)

where $$\mu(t_{0})=E(\delta|Z^{T}\alpha=t_{0})$$. Then we can obtain

\begin{aligned} \sqrt{n}\Gamma_{\widetilde{Z}}(\hat{\alpha}_{n}-\alpha)= \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i} g'\bigl(Z_{i}^{T}\alpha \bigr)\widetilde{Z}_{i} \bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr) -\sqrt{n}\Gamma_{\widetilde{Z}\widetilde{X}}(\hat{\beta}_{n}- \beta)+o_{p}(1). \end{aligned}

Combining Lemma 5.2 and the central limit theorem, we can complete the proof of Lemma 5.3.

### Lemma 5.4

Suppose that the conditions (C1)-(C8) hold; then we have

$$\sqrt{nh_{2}}\biggl(\hat{g}_{n}(t_{0}; \hat{\alpha}_{n},\hat{\beta}_{n})-g(t_{0}) - \frac {1}{2}\mu_{2}(K_{2})g''(t_{0})h_{2}^{2} \biggr) \rightarrow N\biggl(0,\frac{\gamma_{2}(K_{2})\Sigma_{g}}{f(t_{0})}\biggr),$$

where $$\mu_{2}(K_{2})=\int u^{2} K_{2}(u)\,du$$, $$\gamma_{2}(K_{2})=\int K_{2}^{2}(u)\,du$$, and $$\Sigma_{g}=\sigma^{2}+\beta^{T}\Sigma_{uu}\beta$$.

### Proof

Note that $$\hat{\alpha}_{n}-\alpha=O_{p}(n^{-\frac{1}{2}})$$, so $$\hat{g}_{n}(t_{0};\hat{\alpha}_{n},\hat{\beta}_{n})-\hat{g}_{n}(t_{0};\alpha,\hat{\beta}_{n})=O_{p}(n^{-\frac{1}{2}})$$. Then we only need to obtain the asymptotic expansion of $$\hat {g}_{n}(t_{0};\alpha,\hat{\beta}_{n})$$.

From (2.3), we have

\begin{aligned} &\left ( \textstyle\begin{array}{@{}c@{}}\hat{g}_{n}(t_{0};\alpha,\hat{\beta}_{n}) \\ h_{2}\hat {g}_{n}'(t_{0};\alpha,\hat{\beta}_{n}) \end{array}\displaystyle \right ) - \left ( \textstyle\begin{array}{@{}c@{}}g(t_{0}) \\ h_{2}g'(t_{0}) \end{array}\displaystyle \right ) \\ &\quad =\biggl(\frac{1}{n}B_{0}^{T} S_{1} B_{0}\biggr)^{-1} \frac{1}{n} \sum _{i=1}^{n} B_{i0}\delta _{i} K_{h_{2}}(t_{i}-t_{0}) \\ &\qquad {}\times\biggl\{ \frac{1}{2} \biggl(\frac{t_{i}-t_{0}}{h_{2}} \biggr)^{2} g''(t_{0})h_{2}^{2}+ \bigl(\varepsilon _{i}-U_{i}^{T} \beta \bigr)-V_{i}^{T}(\hat{\beta}_{n}- \beta)+o_{p}\bigl(h_{2}^{2}\bigr)\biggr\} . \end{aligned}
(5.2)

As pointed out by Lai et al. [15],

\begin{aligned}& \frac{1}{n}B_{0}^{T} S_{1} B_{0} = \mu(t_{0})f(t_{0}) \left ( \textstyle\begin{array}{@{}cc@{}}1&0 \\ 0&\mu_{2}(K_{2}) \end{array}\displaystyle \right ) \bigl(1+o_{p}(1)\bigr), \end{aligned}
(5.3)
\begin{aligned}& \begin{aligned}[b] & \frac{1}{n} \sum_{i=1}^{n} B_{i0}\delta_{i} K_{h_{2}}(t_{i}-t_{0}) \biggl\{ \frac{1}{2} \biggl(\frac{t_{i}-t_{0}}{h_{2}}\biggr)^{2} g''(t_{0})h_{2}^{2} \biggr\} \\ &\quad =\left ( \textstyle\begin{array}{@{}c@{}}f(t_{0})\mu(t_{0})\frac{1}{2}\mu_{2}(K_{2})g''(t_{0})h_{2}^{2} \\ 0 \end{array}\displaystyle \right ) +o_{p}\biggl(\frac{1}{\sqrt{nh_{2}}}\biggr). \end{aligned} \end{aligned}
(5.4)

Combining (5.2), (5.3), and (5.4) and focusing on the first component, it follows that

\begin{aligned} &\hat{g}_{n}(t_{0};\alpha,\hat{ \beta}_{n})-g(t_{0}) \\ &\quad = \frac{1}{2}\mu _{2}(K_{2})g{''}(t_{0})h_{2}^{2} \\ & \qquad {}+\frac{1}{n}\sum_{i=1}^{n} \frac{1}{\mu(t_{0})f(t_{0})}\delta_{i} K_{h_{2}}(t_{i}-t_{0}) \bigl[\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr) -V_{i}^{T}(\hat{\beta}_{n}-\beta) \bigr]+o_{p}\biggl(\frac{1}{\sqrt{nh_{2}}}\biggr). \end{aligned}

By Lemma 5.2, it is easy to obtain

$$\frac{1}{n}\sum_{i=1}^{n} \frac{1}{\mu(t_{0})f(t_{0})}\delta_{i} K_{h_{2}}(t_{i}-t_{0})V_{i}^{T}( \hat{\beta}_{n}-\beta)=o_{p}\biggl(\frac{1}{\sqrt{nh_{2}}}\biggr),$$

then we know that

\begin{aligned} \hat{g}_{n}(t_{0};\hat{ \alpha}_{n},\hat{\beta}_{n})-g(t_{0}) =& \frac{1}{2}\mu _{2}(K_{2})g{''}(t_{0})h_{2}^{2} \\ &{}+\frac{1}{n}\sum_{i=1}^{n} \frac{1}{\mu(t_{0})f(t_{0})}\delta_{i} K_{h_{2}}(t_{i}-t_{0}) \bigl(\varepsilon_{i}-U_{i}^{T} \beta \bigr)+o_{p}\biggl(\frac{1}{\sqrt{nh_{2}}}\biggr). \end{aligned}

Applying the central limit theorem, we obtain Lemma 5.4. □

### Proof of Theorem 3.1

Let

$$\triangle_{n} = \frac{1}{n}\sum_{i=1}^{n} \bigl\{ \bigl[V_{i}-\hat{E}\bigl(V|Z_{i}^{T}\alpha \bigr)\bigr]^{\otimes2}-\Sigma_{uu}\bigr\} .$$

Then

$$\triangle_{n} =E\bigl\{ \bigl[X-E\bigl(X|Z^{T}\alpha\bigr) \bigr]^{\otimes2}\bigr\} +o_{p}(1) =E\bigl\{ \widetilde{ \widetilde{X}}^{\otimes2}\bigr\} +o_{p}(1)=\Sigma_{\widetilde {\widetilde{X}}}+o_{p}(1).$$

By Lemmas 5.1-5.4, it is easy to show that

\begin{aligned} \sqrt{n}(\breve{\beta}_{n} -\beta) =& \triangle_{n}^{-1} \Biggl\{ \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \bigl[\widetilde{\widetilde{V}_{i}}\bigl(\widetilde{\widetilde{Y^{\ast}}}_{i}-\widetilde{\widetilde{V}_{i}}^{T} \beta\bigr)\bigr]\Biggr\} \\ & {}+\triangle_{n}^{-1}\Biggl\{ \sqrt{n} \Sigma_{uu}\beta-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(1- \delta_{i})\Sigma_{uu}\hat{\beta}_{n}\Biggr\} +o_{p}(1). \end{aligned}

By a Taylor expansion and the continuity of $$g'(\cdot)$$, we obtain

\begin{aligned} &\hat{g}_{n}\bigl(Z_{i}^{T}\hat{ \alpha}_{n}\bigr)-g\bigl(Z_{i}^{T}\alpha\bigr) \\ &\quad = \hat{g}_{n}\bigl(Z_{i}^{T}\alpha \bigr)+g'\bigl(Z_{i}^{T}\alpha\bigr) \bigl(Z_{i}^{T}\hat{\alpha }_{n}-Z_{i}^{T} \alpha\bigr) -g\bigl(Z_{i}^{T}\alpha\bigr)+o_{p} \biggl(\frac{1}{\sqrt{n}}\biggr). \end{aligned}
(5.5)

Note that $$E(Y^{\ast}|Z_{i}^{T}\alpha)=g(Z_{i}^{T}\alpha)+E(X|Z_{i}^{T}\alpha)^{T}\beta$$. Using (5.5) yields

\begin{aligned} \bigl(\widetilde{\widetilde{Y^{\ast}}}_{i}- \widetilde{\widetilde{V}_{i}}^{T}\beta \bigr) =& (1- \delta_{i})\bigl[\hat{g}_{n}\bigl(Z_{i}^{T} \alpha\bigr)-g\bigl(Z_{i}^{T}\alpha\bigr)\bigr]+(1-\delta _{i})g'\bigl(Z_{i}^{T}\alpha \bigr)Z_{i}^{T}(\hat{\alpha}_{n}-\alpha) \\ &{} +(1-\delta_{i})V_{i}^{T}(\hat{ \beta}_{n}-\beta)+\delta_{i}\bigl(\varepsilon _{i}-U_{i}^{T}\beta\bigr)+o_{p}\biggl( \frac{1}{\sqrt{n}}\biggr). \end{aligned}
(5.6)

Combining (5.1) and (5.6), and calculating directly, we have

\begin{aligned} \sqrt{n}(\breve{\beta}_{n} -\beta) =& \triangle_{n}^{-1} \Biggl\{ \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \widetilde{\widetilde{V_{i}}}\delta_{i}\bigl( \varepsilon_{i}-U_{i}^{T}\beta\bigr)\Biggr\} \\ &{}+\triangle_{n}^{-1}\Biggl\{ \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \widetilde{\widetilde{V}_{i}}(1- \delta_{i}) \widetilde{V}_{i}^{T}(\hat{\beta}_{n}-\beta)\Biggr\} \\ & {}+\triangle_{n}^{-1}\Biggl\{ \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \widetilde{\widetilde{V}_{i}}(1- \delta_{i}) \bigl[\widetilde{Z}_{i} g' \bigl(Z_{i}^{T}\alpha\bigr)\bigr]^{T}(\hat{\alpha}_{n}-\alpha)\Biggr\} \\ & {}+\triangle_{n}^{-1}\Biggl\{ \sqrt{n} \Sigma_{uu}\beta-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(1- \delta_{i})\Sigma_{uu}\hat{\beta}_{n}\Biggr\} +o_{p}(1) \\ =& \triangle_{n}^{-1}(I_{1}+I_{2}+I_{3}+I_{4})+o_{p}(1). \end{aligned}

By a straightforward calculation,

\begin{aligned} I_{1}=\frac{1}{\sqrt{n}}\sum _{i=1}^{n}\bigl\{ \widetilde{\widetilde{X}}_{i} \delta _{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr)+\delta_{i}\bigl(\varepsilon_{i} U_{i}-U_{i} U_{i}^{T}\beta \bigr)\bigr\} +o_{p}(1). \end{aligned}
(5.7)

From Lemma 5.2 and the law of large numbers, it follows that

\begin{aligned} I_{2} =& \frac{1}{\sqrt{n}}\sum _{i=1}^{n} \widetilde{\widetilde {X}}_{i}(1- \delta_{i}) \widetilde{X}_{i}^{T}(\hat{ \beta}_{n}-\beta)+\frac{1}{\sqrt{n}}\sum_{i=1}^{n} (1-\delta_{i})\Sigma_{uu}(\hat{\beta}_{n}- \beta)+o_{p}(1) \\ =& \sqrt{n} \Sigma_{1}\cdot\Gamma_{\widetilde{X}}^{-1}\cdot \frac {1}{n}\sum_{i=1}^{n} \bigl\{ \delta_{i} \bigl[\widetilde{X}_{i} \bigl( \varepsilon_{i}-U_{i}^{T} \beta\bigr) +U_{i} \varepsilon_{i}-\bigl(U_{i} U_{i}^{T}-\Sigma_{uu}\bigr)\beta\bigr]\bigr\} \\ & {}+\frac{1}{\sqrt{n}} \sum_{i=1}^{n} (1- \delta_{i}) \Sigma_{uu}(\hat{\beta }_{n}- \beta)+o_{p}(1) \\ =& I_{21}+I_{22}+o_{p}(1), \end{aligned}
(5.8)

where

\begin{aligned} \Sigma_{1}=E\bigl\{ (1-\delta)\widetilde{\widetilde{X}} \widetilde{X}^{T}\bigr\} . \end{aligned}
(5.9)

Using Lemma 5.3, we decompose $$I_{3}$$ as

\begin{aligned} I_{3} =& \frac{1}{\sqrt{n}}\sum _{i=1}^{n} \widetilde{\widetilde {X}}_{i}(1- \delta_{i}) \bigl[\widetilde{Z}_{i} g' \bigl(Z_{i}^{T}\alpha\bigr)\bigr]^{T} (\hat{ \alpha}_{n}-\alpha) \\ &{}+\frac{1}{\sqrt{n}}\sum_{i=1}^{n} U_{i}(1-\delta_{i}) \bigl[\widetilde{Z}_{i} g'\bigl(Z_{i}^{T}\alpha\bigr) \bigr]^{T}(\hat{\alpha}_{n}-\alpha)+o_{p}(1) \\ =& \sqrt{n} \Sigma_{2}\cdot\Gamma_{\widetilde{Z}}^{-1}\cdot \frac {1}{n}\sum_{i=1}^{n} \delta_{i}\widetilde{Z}_{i} g' \bigl(Z_{i}^{T}\alpha\bigr) \bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr) -\sqrt{n} \Sigma_{2}\cdot\Gamma_{\widetilde{Z}}^{-1} \Gamma_{\widetilde {Z}\widetilde{X}}\Gamma_{\widetilde{X}}^{-1} \\ &{}\cdot\frac{1}{n}\sum_{i=1}^{n} \bigl\{ \delta_{i}\bigl[\widetilde{X}_{i}\bigl(\varepsilon _{i}-U_{i}^{T} \beta\bigr)+U_{i} \varepsilon_{i} -\bigl(U_{i} U_{i}^{T}- \Sigma_{uu}\bigr)\beta\bigr]\bigr\} +o_{p}(1) \\ =& I_{31}-I_{32}+o_{p}(1), \end{aligned}
(5.10)

where

\begin{aligned} \Sigma_{2}=E\bigl\{ (1-\delta)\widetilde{\widetilde{X}} \bigl[\widetilde{Z }g'\bigl(Z^{T}\alpha\bigr) \bigr]^{T}\bigr\} . \end{aligned}
(5.11)

Also, we have

\begin{aligned} I_{4}=\sqrt{n}\Biggl[\frac{1}{n}\sum _{i=1}^{n} \delta_{i}\Sigma_{uu} \beta-\frac {1}{n}\sum_{i=1}^{n} (1- \delta_{i})\Sigma_{uu}(\hat{\beta}_{n}-\beta) \Biggr]. \end{aligned}
(5.12)

Combining (5.7), (5.8), and (5.12), we get

\begin{aligned} &I_{1}+I_{22}+I_{4} \\ &\quad = \sqrt{n}\Biggl[ \frac{1}{n}\sum_{i=1}^{n} \delta_{i}\bigl\{ \widetilde{\widetilde{X}}_{i}\bigl( \varepsilon_{i}-U_{i}^{T}\beta\bigr)+ \varepsilon_{i} U_{i}-\bigl(U_{i} U_{i}^{T}-\Sigma_{uu}\bigr)\beta\bigr\} \Biggr] \\ &\quad = \sqrt{n}\Biggl[ \frac{1}{n}\sum_{i=1}^{n} \delta_{i}\bigl\{ \widetilde{X}_{i}\bigl( \varepsilon_{i}-U_{i}^{T}\beta\bigr)+ \varepsilon_{i} U_{i}-\bigl(U_{i} U_{i}^{T}-\Sigma_{uu}\bigr)\beta\bigr\} \Biggr]+o_{p}(1) \\ &\quad = \sqrt{n} \Gamma_{\widetilde{X}}(\hat{\beta}_{n}- \beta)+o_{p}(1). \end{aligned}

Similarly, we obtain

\begin{aligned} I_{21}-I_{32} = \bigl(\Sigma_{1}- \Sigma_{2}\Gamma_{\widetilde{Z}}^{-1}\Gamma _{\widetilde{Z}\widetilde{X}} \bigr)\Gamma_{\widetilde{X}}^{-1} \cdot\sqrt{n} \Gamma_{\widetilde{X}}( \hat{\beta}_{n}-\beta)+o_{p}(1). \end{aligned}

To sum up,

\begin{aligned} \sqrt{n}(\check{\beta}_{n} -\beta) =& \triangle_{n}^{-1}\bigl(\Gamma_{\widetilde{X}}+ \Sigma_{1}-\Sigma_{2}\Gamma _{\widetilde{Z}}^{-1} \Gamma_{\widetilde{Z}\widetilde{X}}\bigr)\Gamma _{\widetilde{X}}^{-1} \\ &{} \cdot\sqrt{n}\Biggl[\frac{1}{n}\sum_{i=1}^{n} \delta_{i}\bigl\{ \widetilde{X}_{i}\bigl( \varepsilon_{i}-U_{i}^{T}\beta\bigr)+ \varepsilon_{i} U_{i}-\bigl(U_{i} U_{i}^{T}-\Sigma_{uu}\bigr)\beta\bigr\} \Biggr] \\ &{} -\triangle_{n}^{-1}\Sigma_{2} \Gamma_{\widetilde{Z}}^{-1} \cdot\sqrt{n}\Biggl[\frac{1}{n}\sum _{i=1}^{n} \delta_{i} g' \bigl(Z_{i}^{T}\alpha\bigr)\widetilde{Z}_{i}\bigl( \varepsilon_{i}-U_{i}^{T}\beta \bigr)\Biggr]+o_{p}(1). \end{aligned}
(5.13)

Applying the central limit theorem completes the proof of Theorem 3.1. □
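The summands appearing in (5.13) are i.i.d. with mean zero, so the central limit theorem yields the asymptotic normality stated in Theorem 3.1. As a toy numerical sketch of this step (a scalar analogue with hypothetical distributions for δ, ε, and U chosen only for illustration, not taken from the paper), one can check that √n times the sample mean of such summands has the predicted limiting variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1_000, 2_000
beta = 0.5

# Scalar toy version of the i.i.d. mean-zero summands delta*(eps - u*beta):
delta = rng.binomial(1, 0.7, (reps, n))   # missingness indicator, P(delta = 1) = 0.7
eps = rng.normal(0.0, 1.0, (reps, n))     # model error, mean zero
u = rng.normal(0.0, 0.3, (reps, n))       # measurement error, mean zero
s = delta * (eps - u * beta)              # mean zero since E(eps) = E(u) = 0

root_n_means = np.sqrt(n) * s.mean(axis=1)    # sqrt(n) * sample mean, one per replication
sigma2 = 0.7 * (1.0 + 0.3 ** 2 * beta ** 2)   # Var of a single summand
print(root_n_means.var(), sigma2)             # empirical vs. predicted asymptotic variance
```

Across replications the standardized means are approximately normal with the variance predicted by the summand's second moment, which is exactly how the covariance matrix in Theorem 3.1 arises.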

### Proof of Theorem 3.2

We first derive the following expression:

\begin{aligned} &\breve{g}_{n}(t_{0},\breve{ \alpha}_{n},\breve{\beta}_{n})-g(t_{0}) \\ &\quad =\frac{\frac{1}{n}\sum_{i=1}^{n} \delta_{i} K_{h_{4}}(Z_{i}^{T} \alpha -t_{0})(\varepsilon_{i}-U_{i}^{T}\beta)}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T} \alpha-t_{0})} +(\hat{\beta}_{n}-\beta)^{T} E \bigl[(1-\delta) X|Z^{T} \alpha=t_{0}\bigr] \\ &\qquad {}+\bigl[\hat{g}_{n}(t_{0})-g(t_{0})\bigr]\cdot \bigl[1-E\bigl(\delta|Z^{T}\alpha=t_{0}\bigr)\bigr] -(\breve{ \beta}_{n}-\beta)^{T} E\bigl[ X|Z^{T} \alpha=t_{0}\bigr] \\ &\qquad {}-(\breve{\alpha}_{n}-\alpha)^{T} E\bigl( Z g'\bigl(Z^{T}\alpha\bigr)|Z^{T} \alpha=t_{0}\bigr) +o_{p}\biggl(\frac{1}{\sqrt{n}} \biggr)+O_{p}\bigl(h_{4}^{2}\bigr). \end{aligned}
(5.14)

Based on (2.7), we have

\begin{aligned} 0 =&\frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}\bigl(Z_{i}^{T}\breve{\alpha}_{n}-t_{0} \bigr) \left ( \textstyle\begin{array}{@{}c@{}}1\\ Z_{i}^{T}\breve{\alpha}_{n}-t_{0} \end{array}\displaystyle \right ) \\ &{}\cdot\bigl[Y_{i}^{*}-V_{i}^{T}\breve{ \beta}_{n}-\breve{g}_{n}(t_{0})-\breve {g}'_{n}(t_{0}) \bigl(Z_{i}^{T} \breve{\alpha}_{n}-t_{0}\bigr)\bigr]. \end{aligned}

Taking only the top equation into account, using a Taylor expansion, and calculating directly, we obtain

\begin{aligned} &\frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}\bigl(Z_{i}^{T}\alpha-t_{0}\bigr) \bigl[\breve {g}_{n}(t_{0})-g(t_{0})\bigr] \\ &\quad =\frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}\bigl(Z_{i}^{T}\alpha-t_{0}\bigr) \bigl[(1-\delta_{i})V_{i}^{T}(\hat{ \beta}_{n}-\beta)+(1-\delta_{i}) \bigl(\hat {g}_{n} \bigl(Z_{i}^{T}\alpha\bigr)-g\bigl(Z_{i}^{T} \alpha\bigr)\bigr) \\ &\qquad {}+\delta_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr)\bigr]-(\breve{\beta}_{n}-\beta)^{T} \frac {1}{n}\sum_{i=1}^{n} K_{h_{4}}\bigl(Z_{i}^{T}\alpha-t_{0} \bigr)V_{i} \\ &\qquad {}-(\breve{\alpha}_{n}-\alpha)^{T} \frac{1}{n}\sum _{i=1}^{n} K_{h_{4}} \bigl(Z_{i}^{T}\alpha-t_{0}\bigr)Z_{i} g'(t_{0}) +o_{p}\biggl(\frac{1}{\sqrt{n}} \biggr)+O_{p}\bigl(h_{4}^{2}\bigr). \end{aligned}
(5.15)

Dividing all terms in (5.15) by $$\frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})$$, we have

\begin{aligned} \breve{g}_{n}(t_{0})-g(t_{0}) =& \frac{\frac{1}{n}\sum_{i=1}^{n} \delta_{i} K_{h_{4}}(Z_{i}^{T}\alpha -t_{0})(\varepsilon_{i}-U_{i}^{T}\beta)}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} \\ &{}+(\hat{\beta}_{n}-\beta)^{T} \frac{\frac{1}{n}\sum_{i=1}^{n} (1-\delta_{i}) K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})V_{i}}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} \\ &{}+\bigl(\hat{g}_{n}(t_{0})-g(t_{0})\bigr) \frac{\frac{1}{n}\sum_{i=1}^{n} (1-\delta_{i}) K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} \\ &{}-(\breve{\beta}_{n}-\beta)^{T} \frac{\frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})V_{i}}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} \\ &{}-(\breve{\alpha}_{n}-\alpha)^{T} \frac{\frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})Z_{i} g'(t_{0})}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} +o_{p}\biggl(\frac{1}{\sqrt{n}}\biggr)+O_{p} \bigl(h_{4}^{2}\bigr). \end{aligned}

Note that

\begin{aligned}& \frac{\frac{1}{n}\sum_{i=1}^{n} (1-\delta_{i}) K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})V_{i}}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} = E\bigl[(1-\delta) X|Z^{T} \alpha=t_{0} \bigr]\bigl(1+o_{p}(1)\bigr), \\& \frac{\frac{1}{n}\sum_{i=1}^{n} (1-\delta_{i}) K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} = 1-E\bigl(\delta|Z^{T} \alpha=t_{0} \bigr) \bigl(1+o_{p}(1)\bigr), \\& \frac{\frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})V_{i}}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} = E\bigl(X|Z^{T} \alpha=t_{0}\bigr) \bigl(1+o_{p}(1)\bigr), \end{aligned}

and

\begin{aligned} \frac{\frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})Z_{i} g'(t_{0})}{ \frac{1}{n}\sum_{i=1}^{n} K_{h_{4}}(Z_{i}^{T}\alpha-t_{0})} =E\bigl( Z g'\bigl(Z^{T}\alpha \bigr)|Z^{T} \alpha=t_{0}\bigr) \bigl(1+o_{p}(1) \bigr). \end{aligned}

Thus, equation (5.14) follows.
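Each of the ratios above is a kernel-weighted (Nadaraya-Watson type) average, which converges to the corresponding conditional expectation given the index. A small simulation sketch of this standard fact (an illustrative model with $$E[W|T=t]=\sin t$$; the bandwidth and sample size are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
T = rng.uniform(-1.0, 1.0, n)               # index variable, playing the role of Z^T alpha
W = np.sin(T) + rng.normal(0.0, 0.5, n)     # response with E[W | T = t] = sin(t)

t0, h = 0.3, 0.05                           # evaluation point and bandwidth
# Gaussian kernel K_h(.) = K(./h) / h:
K = np.exp(-0.5 * ((T - t0) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

nw = np.sum(K * W) / np.sum(K)              # the kernel-weighted ratio from the proof
print(nw, np.sin(t0))                       # the two values should nearly coincide
```

The same mechanism gives all four displayed limits: replacing W by $$(1-\delta)V$$, $$(1-\delta)$$, V, or $$Z g'(t_{0})$$ produces the conditional expectations on the right-hand sides, each up to a $$1+o_{p}(1)$$ factor.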

Second, we prove Theorem 3.2. From (2.9), $$\breve{\alpha}_{n}$$ is the solution of

$$\frac{1}{n}\sum_{i=1}^{n} \bigl[Y_{i}^{*}-V_{i}^{T}\breve{ \beta}_{n}-\breve {g}_{n}\bigl(Z_{i}^{T} \breve{\alpha}_{n}\bigr)\bigr] \cdot\breve{g}'_{n} \bigl(Z_{i}^{T}\breve{\alpha}_{n} \bigr)Z_{i}=0,$$

which can be rewritten as

\begin{aligned} &\frac{1}{n}\sum_{i=1}^{n} g'\bigl(Z_{i}^{T}\alpha\bigr)Z_{i} \bigl\{ \bigl[Y_{i}^{*}-V_{i}^{T}\beta-g \bigl(Z_{i}^{T}\alpha\bigr)\bigr]-\bigl[\breve{g}_{n} \bigl(Z_{i}^{T}\breve{\alpha }_{n}\bigr)-g \bigl(Z_{i}^{T}\alpha\bigr)\bigr] \\ &\quad {}-V_{i}^{T}(\breve{\beta}_{n}-\beta)\bigr\} \cdot \bigl(1+o_{p}(1)\bigr)=0. \end{aligned}
(5.16)

By a Taylor expansion and the continuity of $$g'(\cdot)$$, we obtain

\begin{aligned} &\breve{g}_{n}\bigl(Z_{i}^{T} \breve{\alpha}_{n}\bigr)-g\bigl(Z_{i}^{T}\alpha \bigr) \\ &\quad = \breve{g}_{n}\bigl(Z_{i}^{T}\alpha \bigr)+g'\bigl(Z_{i}^{T}\alpha\bigr) \bigl(Z_{i}^{T}\breve{\alpha }_{n}-Z_{i}^{T} \alpha\bigr) -g\bigl(Z_{i}^{T}\alpha\bigr)+o_{p} \biggl(\frac{1}{\sqrt{n}}\biggr). \end{aligned}
(5.17)

By (5.17), (5.16) can be written as

\begin{aligned} &\frac{1}{n}\sum_{i=1}^{n} g'\bigl(Z_{i}^{T}\alpha\bigr)Z_{i} \bigl\{ \delta_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr)+(1-\delta_{i})V_{i}^{T}(\hat{\beta }_{n}-\beta) \\ &\quad {}+(1-\delta_{i})\bigl[\hat{g}_{n}\bigl(Z_{i}^{T} \alpha\bigr)-g\bigl(Z_{i}^{T}\alpha\bigr)\bigr] -\bigl[ \breve{g}_{n}\bigl(Z_{i}^{T}\alpha\bigr)-g \bigl(Z_{i}^{T}\alpha\bigr)\bigr] -V_{i}^{T}( \breve{\beta}_{n}-\beta) \\ &\quad {}-g'\bigl(Z_{i}^{T}\alpha \bigr)Z_{i}^{T}(\breve{\alpha}_{n}-\alpha)\bigr\} \bigl(1+o_{p}(1)\bigr)=0. \end{aligned}

Applying (5.14) to this equation, we obtain

\begin{aligned} &\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i} g'\bigl(Z_{i}^{T}\alpha \bigr)Z_{i}\bigl(\varepsilon _{i}-U_{i}^{T} \beta\bigr) \\ &\qquad {}-\frac{1}{\sqrt{n}}\sum_{i=1}^{n} g'\bigl(Z_{i}^{T}\alpha\bigr)Z_{i} \cdot\frac{\frac{1}{n}\sum_{j=1}^{n} \delta_{j} K_{h_{4}}(Z_{j}^{T} \alpha -Z_{i}^{T}\alpha)(\varepsilon_{j}-U_{j}^{T}\beta)}{ \frac{1}{n}\sum_{j=1}^{n} K_{h_{4}}(Z_{j}^{T} \alpha-Z_{i}^{T}\alpha)} \\ &\qquad {}-\frac{1}{\sqrt{n}}\sum_{i=1}^{n} g'\bigl(Z_{i}^{T}\alpha\bigr)Z_{i} \bigl[\hat {g}_{n}\bigl(Z_{i}^{T}\alpha\bigr)-g \bigl(Z_{i}^{T}\alpha\bigr)\bigr] \cdot\bigl[ \delta_{i}-E\bigl(\delta|Z^{T} \alpha=Z_{i}^{T} \alpha\bigr)\bigr] \\ &\qquad {}+(\hat{\beta}_{n}-\beta)^{T}\frac{1}{\sqrt{n}}\sum _{i=1}^{n} g' \bigl(Z_{i}^{T}\alpha\bigr)Z_{i} \bigl[(1- \delta_{i})V_{i}-E\bigl((1-\delta)V|Z^{T} \alpha=Z_{i}^{T}\alpha\bigr)\bigr] \\ &\quad =\frac{1}{\sqrt{n}}\sum_{i=1}^{n} g'\bigl(Z_{i}^{T}\alpha\bigr)Z_{i} \left ( \textstyle\begin{array}{@{}c@{}} \widetilde{\widetilde{Z}}_{i} g'(Z_{i}^{T}\alpha)\\ \widetilde{\widetilde{X}}_{i}+U_{i} \end{array}\displaystyle \right )^{T} \left ( \textstyle\begin{array}{@{}c@{}} \breve{\alpha}_{n}-\alpha\\ \breve{\beta}_{n}-\beta \end{array}\displaystyle \right )+o_{p}(1). \end{aligned}
(5.18)

Note that the second term of the left-hand side of (5.18) is

\begin{aligned} \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr)E\bigl[Z g'\bigl(Z^{T}\alpha \bigr)|Z^{T} \alpha=Z_{i}^{T}\alpha \bigr]+o_{p}(1). \end{aligned}

Then the first two terms of the left-hand side of (5.18) combine as follows:

\begin{aligned} &\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i} g'\bigl(Z_{i}^{T}\alpha\bigr)Z_{i}\bigl(\varepsilon _{i}-U_{i}^{T}\beta\bigr) \\ &\qquad {}-\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i}\bigl(\varepsilon _{i}-U_{i}^{T}\beta\bigr)E\bigl[Z g'\bigl(Z^{T}\alpha\bigr)|Z^{T} \alpha=Z_{i}^{T}\alpha\bigr] \\ &\quad =\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i}\bigl(\varepsilon_{i}-U_{i}^{T}\beta \bigr)\widetilde{\widetilde{Z}}_{i} g' \bigl(Z_{i}^{T}\alpha\bigr). \end{aligned}
(5.19)
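The cancellation in (5.19) rests on the centering built into the double-tilde residuals: as used there, $$\widetilde{\widetilde{Z}}_{i}=Z_{i}-E(Z|Z^{T}\alpha=Z_{i}^{T}\alpha)$$, and such a residual is orthogonal to every function of the index. A minimal Monte Carlo check of this orthogonality (a hypothetical scalar setup with a known conditional mean, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
index = rng.normal(0.0, 1.0, n)          # plays the role of Z^T alpha
z = index + rng.normal(0.0, 1.0, n)      # one coordinate of Z, with E[z | index] = index
z_cc = z - index                         # the "double-tilde" residual z - E[z | index]

h_vals = np.cos(index)                   # an arbitrary bounded function of the index
orth = np.mean(z_cc * h_vals)            # sample version of E[(z - E[z|index]) h(index)]
print(orth)                              # should be close to zero
```

The sample covariance between the residual and any function of the index vanishes as n grows, which is the mechanism behind (5.19) and the later simplifications of (5.18).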

Applying (5.1) to the third term of the left-hand side of (5.18), we find that

\begin{aligned} &\frac{1}{\sqrt{n}}\sum_{i=1}^{n} g'\bigl(Z_{i}^{T}\alpha\bigr)Z_{i} \bigl[\delta_{i}-E\bigl(\delta |Z^{T} \alpha=Z_{i}^{T} \alpha\bigr)\bigr] \cdot\bigl[\hat{g}_{n}\bigl(Z_{i}^{T} \alpha\bigr)-g\bigl(Z_{i}^{T}\alpha\bigr)\bigr] \\ &\quad =\frac{1}{\sqrt{n}}\sum_{i=1}^{n} g'\bigl(Z_{i}^{T}\alpha\bigr)Z_{i} \bigl[\delta_{i}-E\bigl(\delta |Z^{T} \alpha=Z_{i}^{T} \alpha\bigr)\bigr] \\ &\qquad {}\cdot\Biggl\{ \frac{1}{n} \frac{1}{f(Z_{i}^{T}\alpha)\mu(Z_{i}^{T}\alpha)} \sum _{j=1}^{n} \delta_{j} K_{h_{2}} \bigl(Z_{j}^{T} \alpha-Z_{i}^{T}\alpha \bigr) \bigl(\varepsilon _{j}-U_{j}^{T}\beta\bigr) \Biggr\} \\ &\qquad {}-(\hat{\beta}_{n}-\beta)^{T} \frac{1}{\sqrt{n}}\sum _{i=1}^{n} g' \bigl(Z_{i}^{T}\alpha \bigr)Z_{i}\bigl[ \delta_{i}-E\bigl(\delta|Z^{T} \alpha=Z_{i}^{T} \alpha\bigr)\bigr] \frac{E(\delta X|Z^{T} \alpha=Z_{i}^{T}\alpha)}{E(\delta|Z^{T} \alpha =Z_{i}^{T}\alpha)} \\ &\qquad {}-(\hat{\alpha}_{n}-\alpha)^{T} \frac{1}{\sqrt{n}}\sum _{i=1}^{n} g' \bigl(Z_{i}^{T}\alpha\bigr)Z_{i}\bigl[ \delta_{i}-E\bigl(\delta|Z^{T} \alpha=Z_{i}^{T} \alpha\bigr)\bigr] \\ &\qquad {}\times \frac{E(\delta Z g'(Z^{T}\alpha)|Z^{T} \alpha=Z_{i}^{T}\alpha)}{E(\delta|Z^{T} \alpha=Z_{i}^{T}\alpha)}+o_{p}(1) =J_{1}-J_{2}-J_{3}+o_{p}(1). \end{aligned}
(5.20)

By the same argument as for the second term of the left-hand side of (5.18),

\begin{aligned} J_{1} =&\frac{1}{\sqrt{n}}\sum _{i=1}^{n} \delta_{i}\bigl( \varepsilon_{i}-U_{i}^{T}\beta \bigr) \\ &{}\times \biggl\{ \frac{E[\delta Z g'(Z^{T}\alpha)|Z_{i}^{T}\alpha]}{E(\delta|Z_{i}^{T}\alpha)} -\frac{E[E(\delta|Z^{T}\alpha)Z g'(Z^{T}\alpha)|Z_{i}^{T}\alpha]}{E(\delta |Z_{i}^{T}\alpha)}\biggr\} . \end{aligned}
(5.21)

Also, we have

\begin{aligned} J_{2}=\sqrt{n}(\hat{\beta}_{n}- \beta)^{T} E\biggl\{ \bigl[\delta-E\bigl(\delta|Z^{T} \alpha \bigr)\bigr] \frac{E(\delta X|Z^{T} \alpha)}{E(\delta|Z^{T} \alpha)}g'\bigl(Z^{T}\alpha\bigr)Z \biggr\} +o_{p}(1). \end{aligned}
(5.22)

Combining this with Lemma 5.3, we have

\begin{aligned} J_{3} =&\Biggl[\Gamma_{\widetilde{Z}}^{-1} \Biggl\{ \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i} g'\bigl(Z_{i}^{T}\alpha \bigr)\widetilde {Z}_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr) -\sqrt{n}\Gamma_{\widetilde{Z}\widetilde{X}}(\hat{\beta}_{n}-\beta) \Biggr\} \Biggr]^{T} \\ &{}\cdot E\biggl\{ \bigl[\delta-E\bigl(\delta|Z^{T} \alpha\bigr)\bigr] \frac{E[\delta Z g'(Z^{T}\alpha)|Z^{T} \alpha]}{E(\delta|Z^{T} \alpha )}g'\bigl(Z^{T}\alpha\bigr)Z\biggr\} +o_{p}(1). \end{aligned}
(5.23)

The last term of the left-hand side of (5.18) is

\begin{aligned} \sqrt{n}(\hat{\beta}_{n}-\beta)^{T} E\bigl\{ \bigl[(1-\delta)X-E\bigl((1-\delta)X|Z^{T}\alpha \bigr) \bigr]g'\bigl(Z^{T}\alpha\bigr)Z\bigr\} +o_{p}(1). \end{aligned}
(5.24)

Through a direct calculation, the first term of the right-hand side of (5.18) is

\begin{aligned} \sqrt{n}\Sigma_{\widetilde{\widetilde{Z}}}(\breve{\alpha}_{n}- \alpha)+o_{p}(1), \end{aligned}
(5.25)

where

\begin{aligned} \Sigma_{\widetilde{\widetilde{Z}}}=E\bigl\{ \bigl[\widetilde{\widetilde{Z}} g'\bigl(Z^{T}\alpha\bigr)\bigr]^{\otimes2}\bigr\} . \end{aligned}
(5.26)

The last term of the right-hand side of (5.18) is

\begin{aligned} \sqrt{n}\Sigma_{\widetilde{\widetilde{Z}}\widetilde{\widetilde {X}}}(\breve{\beta}_{n}- \beta)+o_{p}(1), \end{aligned}
(5.27)

where

\begin{aligned} \Sigma_{\widetilde{\widetilde{Z}}\widetilde{\widetilde{X}}}=E\bigl\{ \widetilde{\widetilde{Z}}g' \bigl(Z^{T}\alpha\bigr)\widetilde{\widetilde{X}}^{T}\bigr\} . \end{aligned}
(5.28)

Combining (5.19)-(5.25) and (5.27), and using Theorem 3.1, (5.18) becomes

\begin{aligned} \sqrt{n}\Sigma_{\widetilde{\widetilde{Z}}}(\breve{\alpha}_{n}- \alpha) =& \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr) g'\bigl(Z_{i}^{T}\alpha\bigr) \widetilde{\widetilde{Z}_{i}} \\ &{}-\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta \bigr)g'\bigl(Z^{T}\alpha\bigr) \frac{E[(\delta-E(\delta|Z^{T}\alpha) )Z|Z_{i}^{T}\alpha]}{E(\delta |Z_{i}^{T}\alpha)} \\ &{}+\sqrt{n}(\hat{\beta}_{n}-\beta)^{T} E\biggl\{ \bigl[ \delta-E\bigl(\delta|Z^{T} \alpha\bigr)\bigr] \frac{E(\delta X|Z^{T} \alpha)}{E(\delta|Z^{T} \alpha)}g' \bigl(Z^{T}\alpha\bigr)Z\biggr\} \\ &{}+ E\biggl\{ \bigl[\delta-E\bigl(\delta|Z^{T} \alpha\bigr)\bigr] \frac{E[\delta Z g'(Z^{T}\alpha)|Z^{T} \alpha]}{E(\delta|Z^{T} \alpha )}g'\bigl(Z^{T}\alpha\bigr)Z\biggr\} ^{T} \\ & {}\cdot \Biggl[\Gamma_{\widetilde{Z}}^{-1}\frac{1}{\sqrt{n}}\sum _{i=1}^{n} \delta_{i} g'\bigl(Z_{i}^{T}\alpha\bigr) \widetilde{Z}_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr)\Biggr] \\ &{}- E\biggl\{ \bigl[\delta-E\bigl(\delta|Z^{T} \alpha\bigr)\bigr] \frac{E[\delta Z g'(Z^{T}\alpha)|Z^{T} \alpha]}{E(\delta|Z^{T} \alpha )}g'\bigl(Z^{T}\alpha\bigr)Z\biggr\} ^{T} \\ &{} \cdot \bigl[\Gamma_{\widetilde{Z}}^{-1}\sqrt{n}\Gamma_{\widetilde{Z}\widetilde {X}}( \hat{\beta}_{n}-\beta)\bigr] \\ & {}+\sqrt{n}(\hat{\beta}_{n}-\beta)^{T} E\bigl\{ \bigl[(1- \delta)X-E\bigl((1-\delta)X|Z^{T}\alpha\bigr)\bigr]Z g' \bigl(Z^{T}\alpha\bigr)\bigr\} \\ & {}-\Sigma_{\widetilde{\widetilde{Z}}\widetilde{\widetilde{X}}}\Sigma _{\widetilde{\widetilde{X}}}^{-1} \bigl( \Gamma_{\widetilde{X}}+\Sigma_{1}-\Sigma_{2} \Gamma_{\widetilde {Z}}^{-1}\Gamma_{\widetilde{Z}\widetilde{X}}\bigr)\sqrt{n}(\hat{\beta }_{n}-\beta) \\ &{} +\Sigma_{\widetilde{\widetilde{Z}}\widetilde{\widetilde{X}}}\Sigma _{\widetilde{\widetilde{X}}}^{-1} \Sigma_{2}\Gamma_{\widetilde{Z}}^{-1} \cdot\sqrt{n} \frac{1}{n}\sum_{i=1}^{n} \delta_{i} g'\bigl(Z_{i}^{T}\alpha 
\bigr)\widetilde{Z}_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta \bigr)+o_{p}(1) \\ =&F_{1}-F_{2}+F_{3}+F_{4}-F_{5}+F_{6}-F_{7}+F_{8}+o_{p}(1). \end{aligned}
(5.29)

Through a direct calculation,

\begin{aligned} Q =&F_{1}-F_{2}+F_{4}+F_{8} \\ =& \biggl\{ 1+ E\biggl\{ \bigl[\delta-E\bigl(\delta|Z^{T} \alpha\bigr) \bigr] \frac{E[\delta Z g'(Z^{T}\alpha)|Z^{T} \alpha]}{E(\delta|Z^{T} \alpha )}g'\bigl(Z^{T}\alpha\bigr)Z\biggr\} ^{T}\Gamma_{\widetilde{Z}}^{-1} \\ &{}+\Sigma_{\widetilde{\widetilde{Z}}\widetilde{\widetilde{X}}}\Sigma _{\widetilde{\widetilde{X}}}^{-1} \Sigma_{2}\Gamma_{\widetilde{Z}}^{-1}\biggr\} \cdot \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \delta_{i} g'\bigl(Z_{i}^{T}\alpha \bigr)\widetilde{Z}_{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr). \end{aligned}
(5.30)

Combining this with Lemma 5.2, we have

\begin{aligned} P =&F_{3}-F_{5}+F_{6}-F_{7} \\ =&\biggl[ E\biggl\{ \bigl[\delta-E\bigl(\delta|Z^{T} \alpha\bigr) \bigr] \frac{E(\delta X|Z^{T} \alpha)}{E(\delta|Z^{T} \alpha)}g'\bigl(Z^{T}\alpha\bigr)Z\biggr\} ^{T} \\ &{}-E\biggl\{ \bigl[\delta-E\bigl(\delta|Z^{T} \alpha\bigr)\bigr] \frac{E[\delta Z g'(Z^{T}\alpha)|Z^{T} \alpha]}{E(\delta|Z^{T} \alpha )}g'\bigl(Z^{T}\alpha\bigr)Z\biggr\} ^{T} \Gamma_{\widetilde{Z}}^{-1}\Gamma_{\widetilde{Z}\widetilde{X}} \\ &{}+E\bigl\{ \bigl[(1-\delta)X-E\bigl((1-\delta)X|Z^{T}\alpha\bigr) \bigr]Z g'\bigl(Z^{T}\alpha\bigr)\bigr\} ^{T} \\ &{}-\Sigma_{\widetilde{\widetilde{Z}}\widetilde{\widetilde{X}}}\Sigma _{\widetilde{\widetilde{X}}}^{-1} \bigl( \Gamma_{\widetilde{X}}+\Sigma_{1}-\Sigma_{2} \Gamma_{\widetilde {Z}}^{-1}\Gamma_{\widetilde{Z}\widetilde{X}}\bigr)\biggr] \Gamma_{\widetilde {X}}^{-1} \\ &{}\cdot\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \bigl\{ \delta_{i} \bigl[\widetilde{X}_{i} \bigl( \varepsilon_{i}-U_{i}^{T} \beta\bigr) +U_{i} \varepsilon_{i}-\bigl(U_{i} U_{i}^{T}-\Sigma_{uu}\bigr)\beta\bigr]\bigr\} +o_{p}(1). \end{aligned}
(5.31)

Then, applying the central limit theorem, Theorem 3.2 follows. □

### Proof of Theorem 3.3

As in the proof of Lemma 5.4, we first derive the asymptotic expansion of $$\check{g}_{n}(t_{0};\alpha,\check{\beta}_{n})$$.

From (2.8), we have

\begin{aligned} &\left ( \textstyle\begin{array}{@{}c@{}}\check{g}_{n}(t_{0};\alpha,\check{\beta}_{n}) \\ h_{4}\check {g}_{n}'(t_{0};\alpha,\check{\beta}_{n}) \end{array}\displaystyle \right ) - \left ( \textstyle\begin{array}{@{}c@{}}g(t_{0}) \\ h_{4}g'(t_{0}) \end{array}\displaystyle \right ) \\ &\quad =\biggl(\frac{1}{n}B_{2}^{T} S_{2} B_{2}\biggr)^{-1} \frac{1}{n} \sum _{i=1}^{n} B_{i2} K_{h_{4}}(t_{i}-t_{0}) \\ &\qquad {}\times\biggl\{ (1-\delta_{i}) (\hat{\beta}_{n}- \beta)^{T}V_{i}+(1-\delta_{i})\bigl[\hat {g}_{n}\bigl(Z_{i}^{T}\hat{\alpha}_{n} \bigr)-g\bigl(Z_{i}^{T}\alpha\bigr)\bigr] \\ &\qquad {}+\frac{1}{2} \biggl(\frac{t_{i}-t_{0}}{h_{4}}\biggr)^{2} g''(t_{0})h_{4}^{2}+ \delta _{i}\bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr)-(\check{\beta}_{n}-\beta)^{T} V_{i} \biggr\} +o_{p}\biggl(\frac {1}{\sqrt{nh_{4}}}\biggr). \end{aligned}
(5.32)

Since $$\frac{h_{4}}{h_{2}}\rightarrow0$$ as $$n\rightarrow\infty$$, by Lemmas 5.2-5.4 and focusing on the top equation, we get

\begin{aligned} \check{g}_{n}(t_{0};\alpha,\check{ \beta}_{n})-g(t_{0}) = \frac{1}{n}\sum _{i=1}^{n} \frac{1}{f(t_{0})}\delta_{i} K_{h_{4}}(t_{i}-t_{0}) \bigl(\varepsilon_{i}-U_{i}^{T} \beta\bigr)+o_{p}\biggl(\frac{1}{\sqrt{nh_{4}}}\biggr). \end{aligned}

Applying the central limit theorem, we complete the proof of Theorem 3.3. □
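The limit in Theorem 3.3 is driven by the kernel average in the last display, whose standard deviation shrinks at the nonparametric rate $$1/\sqrt{nh_{4}}$$ rather than the parametric $$1/\sqrt{n}$$. As a hedged numerical sketch (a toy design with a uniform index, Gaussian kernel, and an arbitrary missingness rate, none taken from the paper), quadrupling n at a fixed bandwidth should roughly halve the standard deviation:

```python
import numpy as np

rng = np.random.default_rng(2)
reps = 800

def kernel_avg_sd(n, h, t0=0.5):
    # Standard deviation over replications of (1/n) sum_i delta_i K_h(t_i - t0) eps_i,
    # the leading term in the expansion of g-check minus g (index uniform on [0, 1],
    # so f(t0) = 1 and no extra normalization is needed).
    t = rng.uniform(0.0, 1.0, (reps, n))
    delta = rng.binomial(1, 0.8, (reps, n))     # missingness indicator
    eps = rng.normal(0.0, 1.0, (reps, n))       # model error
    K = np.exp(-0.5 * ((t - t0) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return (delta * K * eps).mean(axis=1).std()

sd1 = kernel_avg_sd(n=1_000, h=0.1)
sd2 = kernel_avg_sd(n=4_000, h=0.1)   # 4x the sample size, same bandwidth
ratio = sd2 / sd1
print(ratio)                          # should be near 1/sqrt(4) = 0.5
```

The observed halving of the standard deviation matches the $$\sqrt{nh_{4}}$$ scaling that the central limit theorem converts into the limiting normal distribution.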

## References

1. Ichimura, H: Estimation of single index models. Ph.D. dissertation, Department of Economics, MIT (1987)

2. Engle, RF, Granger, CWJ, Rice, J, Weiss, A: Semiparametric estimates of the relation between weather and electricity sales. J. Am. Stat. Assoc. 81, 310-320 (1986)

3. Carroll, RJ, Fan, JQ, Gijbels, I, Wand, MP: Generalized partially linear single-index models. J. Am. Stat. Assoc. 92, 477-489 (1997)

4. Yu, Y, Ruppert, D: Penalized spline estimation for partially linear single-index models. J. Am. Stat. Assoc. 97, 1042-1054 (2002)

5. Liang, H, Wang, N: Partially linear single-index measurement error models. Stat. Sin. 15, 99-116 (2005)

6. Xia, YC, Härdle, W: Semi-parametric estimation of partially linear single-index models. J. Multivar. Anal. 97, 1162-1184 (2006)

7. Xue, L, Zhu, LX: Empirical likelihood confidence regions of the parameters in a partially linear single-index model. J. R. Stat. Soc. B 68, 549-570 (2006)

8. Liu, XH, Wang, ZZ, Hu, XM: Estimation in partially linear single-index models with missing covariates. Commun. Stat., Theory Methods 41, 3428-3447 (2012)

9. Lai, P, Wang, QH: Semiparametric efficient estimation for partially linear single-index model with responses missing at random. J. Multivar. Anal. 128, 33-50 (2014)

10. Chen, X, Cui, HJ: Empirical likelihood for partially linear single-index errors-in-variables model. Commun. Stat., Theory Methods 38, 2498-2514 (2009)

11. Liang, H, Wang, SJ, Carroll, RJ: Partially linear models with missing response variables and error-prone covariates. Biometrika 94, 185-198 (2007)

12. Wei, CH, Jia, XJ, Hu, HS: Statistical inference on partially linear additive models with missing response variables and error-prone covariates. Commun. Stat., Theory Methods 44, 872-883 (2015)

13. Wei, CH: Estimation of partially linear varying-coefficient errors-in-variables model with missing responses. Acta Math. Sci. 30, 1042-1054 (2010) (inÂ Chinese)

14. Wang, QH, Sun, ZH: Estimation in partially linear models with missing responses at random. J. Multivar. Anal. 98, 1470-1497 (2007)

15. Lai, P, Wang, QH: Partially linear single-index model with missing responses at random. J. Stat. Plan. Inference 141, 1047-1058 (2011)

16. Liang, H, Härdle, W, Carroll, RJ: Estimation in a semiparametric partially linear errors-in-variables model. Ann. Stat. 27, 1519-1535 (1999)

17. Gai, YJ, Zhang, J, Li, GR, Luo, XC: Statistical inference on partial linear additive models with distortion measurement errors. Stat. Methodol. 27, 20-38 (2015)

18. Mack, YP, Silverman, BW: Weak and strong uniform consistency of kernel regression estimates. Z. Wahrscheinlichkeitstheor. Verw. Geb. 61, 405-415 (1982)

19. Einmahl, U, Mason, DM: Uniform in bandwidth consistency of kernel-type function estimators. Ann. Stat. 33, 1380-1403 (2005)

## Acknowledgements

The authors thank the two referees for carefully reading the paper and for their valuable suggestions and comments, which greatly improved the paper. This work is supported by National Natural Science Foundation of China (Nos. 11271155, 11371168, 11001105, 11071126, 11071269, 11501241), Science and Technology Research Program of Education Department in Jilin Province for the 12th Five-Year Plan (440020031139) and Jilin Province Natural Science Foundation (20130101066JC, 20130522102JH, 20150520053JH, 20101596).

## Author information


### Corresponding author

Correspondence to De-Hui Wang.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

The authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

## Rights and permissions


Qi, X., Wang, DH. Estimation in a partially linear single-index model with missing response variables and error-prone covariates. J Inequal Appl 2016, 11 (2016). https://doi.org/10.1186/s13660-015-0941-8