
Difference-based M-estimator of generalized semiparametric model with NSD errors


In this paper, we consider the generalized semiparametric model (GSPM)

$$ y_{i}=h\bigl(\mathbf{x}_{i}^{T}\beta \bigr)+f(t_{i})+e_{i} , \quad 1\leq i\leq n, $$

where \(h(\cdot )\) is a known function and the \(e_{i}\) are dependent errors. We obtain a difference-based M-estimator of the parametric component β of the model. In addition, we prove the asymptotic normality of the proposed estimator and investigate the weak convergence rate of the wavelet estimator of \(f(\cdot )\). Furthermore, we apply these results to a partially linear model with dependent errors.


Consider the generalized semiparametric model

$$ y_{i}=h \bigl(\mathbf{x}_{i}^{T} \beta \bigr)+f(t_{i})+e_{i} , \quad 1 \leq i\leq n, $$

where \(y_{i}\) are scalar response variables, \(h(\cdot )\) is a continuously differentiable known function, the superscript T denotes the transpose, \(\mathbf{x}_{i}=(x_{i1},\ldots ,x_{id})^{T}\) are explanatory variables, β is a d-dimensional unknown parameter, \(f(\cdot )\) is an unknown function, and \(0\leq t_{1}\leq t_{2}\leq \cdots \leq t_{n} \leq 1\). Several authors have noted that the assumption of independence is a serious restriction (see Huber [1] and Hampel [2]); so, for the errors \(e_{i}\), we confine ourselves to negatively superadditive dependent (NSD) errors. NSD random variables were introduced by Hu [3] and are widely used in statistics; see [4,5,6,7,8,9,10,11,12].

The theory of the GSPM extends the classical theory of partially linear models: the generalized parametric component \(h (\mathbf{x}_{i}^{T}\beta )\) of the GSPM includes the linear parametric component \(\mathbf{x}^{T}_{i}\beta \), the exponential parametric component \(e^{\mathbf{x}^{T}_{i}\beta }\), and so on.

As is well known, the generalized partially linear model and the partially linear single-index model (where \(h(\cdot )\) is an unknown link function) are also derived from the partially linear model. There is a substantial amount of work on the generalized partially linear model (see [13,14,15,16,17,18]) and on the partially linear single-index model (see [19,20,21,22,23,24]); this research presents various methods for obtaining estimators of β and \(f(\cdot )\) and investigates large-sample properties of these estimators.

In this paper, we consider a difference-based method to estimate the unknown parametric component β. This difference-based estimator is optimal in the sense that the estimator of the unknown parametric component is asymptotically efficient. For example, Tabakan and Akdeniz [25] studied a difference-based ridge estimator in a partially linear model. Wang et al. [26] developed a difference-based approach to the semiparametric partially linear model. Zhao and You [27] used a difference-based method to estimate the parametric component of partially linear regression models with measurement errors. Duran et al. [28] investigated difference-based ridge and Liu-type estimators in semiparametric regression models. Wu [29] discussed a restricted difference-based Liu estimator in partially linear models. Hu et al. [30] presented a difference-based Huber–Dutter (DHD) estimator of the root variance σ and the parameter β in a partially linear model. However, most of these results rely on independent errors. Wu [31] studied a difference-based ridge-type estimator of parameters in a restricted partially linear model with correlated errors, but that paper focuses only on estimating the linear component. Zeng and Liu [32] used a difference-based ordinary least-squares method to estimate the unknown parametric component, but that paper ignores the fact that a difference-based estimator may incur greater bias in moderately sized samples than other estimators. Inspired by these papers, we propose a difference-based M-estimation (DM) method for the generalized semiparametric model with NSD errors. The M-estimator, introduced by Huber [33], is one of the best-known robust estimators. In addition, once β is estimated, \(f(\cdot )\) can be estimated by a variety of nonparametric techniques; in this paper, the estimator of \(f(\cdot )\) is obtained by the wavelet method.

The paper has the following structure. In Sect. 2, we present the estimation procedure. In Sect. 3, we establish the main results. The proofs of the main results are provided in the Appendix.

Estimation method


Throughout the paper, Z is the set of integers, N is the set of natural numbers, and R is the set of real numbers. A sequence of random variables \(\eta _{n}\) is said to be of smaller order in probability than a sequence \(d_{n}\) (denoted by \(\eta _{n}=o_{P}(d_{n})\)) if \(\eta _{n}/d_{n}\) converges to 0 in probability, and \(\eta _{n}=O_{P}(d_{n})\) if \(\eta _{n}/d_{n}\) is bounded in probability. Convergence in distribution is denoted by \(H_{n}\stackrel{D}{ \rightarrow }H\). For a function \(h(\cdot )\), \(h'(\cdot )\), \(h''(\cdot )\), and \(h'''(\cdot )\) denote its first, second, and third derivatives, respectively. \(\|\mathbf{x}\|\) is the Euclidean norm of x, and \(\lfloor x\rfloor =\max \{k\in \mathbf{Z}:k\leq x\}\). Let \(C_{0}\), \(C_{1}\), \(C_{2}\), \(C_{3}\), \(C_{4}\) be positive constants, let \(\beta _{0}\) be the true parameter, and let \(\varTheta = \{\beta :\|\beta -\beta _{0}\|\leq C_{0} \}\).

Difference-based M-estimation

Let \(\tilde{y}_{i}=\sum_{q=0}^{m}d_{q}y_{i+q}\), \(\tilde{h}_{i}(\beta )=\sum_{q=0}^{m}d_{q}h (\mathbf{x}^{T}_{i+q}\beta )\), \(\tilde{f}(t_{i})=\sum_{q=0}^{m}d_{q}f(t_{i+q})\), and \(\tilde{e}_{i}= \sum_{q=0}^{m}d_{q}e_{i+q}\), where \(d_{0},d_{1},\ldots ,d _{m}\) satisfy the conditions

$$ \sum_{q=0}^{m} d_{q}=0, \qquad \sum_{q=0}^{m} d^{2}_{q}=1. $$
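As a quick numerical check of these conditions (a hedged sketch; the first-order weights and the sine trend below are illustrative choices, not taken from the paper), differencing with weights satisfying (2) nearly annihilates a smooth trend:

```python
import numpy as np

# First-order (m = 1) difference weights: sum d_q = 0 and sum d_q^2 = 1.
d = np.array([1.0, -1.0]) / np.sqrt(2.0)

def difference(v, d):
    """m-th order difference: v_tilde[i] = sum_q d[q] * v[i + q]."""
    m = len(d) - 1
    return np.array([np.dot(d, v[i:i + m + 1]) for i in range(len(v) - m)])

# A smooth trend f(t) is nearly removed by differencing (max |f| = 1 here),
# which is the effect exploited below to ignore the nonparametric component.
t = np.linspace(0.0, 1.0, 200)
f_diff = difference(np.sin(2 * np.pi * t), d)
print(round(np.max(np.abs(f_diff)), 4))
```

The printed value is small compared with the amplitude of the trend itself, illustrating why the differenced nonparametric term is negligible in large samples.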

Then \(\tilde{y}_{i}\), \(\tilde{h}_{i}(\beta )\), \(\tilde{f}(t_{i})\), and \(\tilde{e}_{i}\) can be seen as the mth-order differences of \(y_{i}\), \(h(\mathbf{x}_{i}^{T}\beta )\), \(f(t_{i})\), and \(e_{i}\), respectively. Hence, applying the differencing procedures, model (1) becomes

$$ \tilde{y}_{i}=\tilde{h}_{i}(\beta )+ \tilde{f}(t_{i})+\tilde{e}_{i}, \quad 1\leq i\leq n-m. $$

From Yatchew [34] we know that applying the differencing procedure to model (1) removes the nonparametric effect in large samples, so we may ignore the presence of \(\tilde{f}(\cdot )\). Thus (3) becomes

$$ \tilde{y}_{i}=\tilde{h}_{i}(\beta )+ \tilde{e}_{i}, \quad 1\leq i\leq n-m. $$

Let ρ be a convex function. Assume that ρ has a continuous derivative ψ and that there is a constant a such that \(\psi (a)=0\). We propose the difference-based M-estimator given by minimizing

$$ Q(\beta )=\sum_{i=1}^{n-m}\rho \bigl(\tilde{y_{i}}-\tilde{h}_{i}( \beta )+a \bigr). $$

Let the \(d\times 1\) vector \(\hat{\beta }_{n}\in \varTheta \) be the minimizer of (5). Write \(\tilde{\mathbf{h}}'_{i}(\beta )= \sum_{q=0}^{m}d_{q}h'(\mathbf{x}^{T}_{i+q}\beta )\mathbf{x}_{i+q}\), \(\tilde{h}'_{ik}(\beta )=\sum_{q=0}^{m}d_{q}h'(\mathbf{x}^{T}_{i+q} \beta )x_{(i+q)k}\) with \(1 \leq k \leq d\), \(\tilde{\mathbf{h}}''_{i}( \beta )=\sum_{q=0}^{m}d_{q}h''(\mathbf{x}^{T}_{i+q}\beta )\mathbf{x} _{i+q}\mathbf{x}^{T}_{i+q}\), and \(\tilde{\mathbf{h}}'_{i}(\beta ) \tilde{\mathbf{h}}^{\prime \,T}_{j}(\beta )=\sum_{q=0}^{m}d_{q}h'(\mathbf{x} ^{T}_{i+q}\beta )\mathbf{x}_{i+q}\sum_{q=0}^{m}d_{q}h'(\mathbf{x}^{T} _{j+q}\beta )\mathbf{x}^{T}_{j+q}\). Then the estimator satisfies

$$ \frac{\partial Q(\hat{\beta }_{n})}{\partial \beta }=-\sum_{i=1}^{n-m} \psi (\hat{\tilde{e_{i}}}+a)\tilde{\mathbf{h}}'_{i}( \hat{\beta }_{n})=0 $$

with \(\hat{\tilde{e}}_{i}=\tilde{y}_{i}-\tilde{h}_{i}( \hat{\beta }_{n})\). The convexity of ρ guarantees the equivalence of (5) and (6) and the asymptotic uniqueness of the solution; beyond this, convexity is not essential.
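To make the procedure concrete, the following minimal sketch simulates data from a special case of model (1) with \(h(u)=u\) and \(a=0\), applies first-order differencing, and solves the M-estimating equation (6) with the Huber ψ by iteratively reweighted least squares. All data-generating choices (sample size, error law, tuning constant \(k=1.345\)) are hypothetical illustrations, not prescriptions from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
d = np.array([1.0, -1.0]) / np.sqrt(2.0)     # m = 1 difference weights

# Simulated data from model (1) with h(u) = u: y = x^T beta + f(t) + e,
# with heavy-tailed (Student-t) errors to motivate a robust estimator.
beta_true = np.array([2.0, -1.0])
X = rng.normal(size=(n, 2))
t = np.linspace(0.0, 1.0, n)
y = X @ beta_true + np.sin(2 * np.pi * t) + rng.standard_t(df=3, size=n)

# Differencing (nearly) removes f(t), giving the differenced model (4).
y_t = d[0] * y[:-1] + d[1] * y[1:]
X_t = d[0] * X[:-1] + d[1] * X[1:]

def psi(r, k=1.345):
    """Huber psi; psi(0) = 0, so a = 0 here."""
    return np.clip(r, -k, k)

# Solve sum_i psi(y_t - X_t beta) X_t = 0 by iteratively reweighted
# least squares, started from the ordinary least-squares fit.
beta = np.linalg.lstsq(X_t, y_t, rcond=None)[0]
for _ in range(100):
    r = y_t - X_t @ beta
    r_safe = np.where(np.abs(r) < 1e-10, 1e-10, r)   # guard division by 0
    w = psi(r_safe) / r_safe
    beta = np.linalg.solve(X_t.T @ (w[:, None] * X_t), X_t.T @ (w * y_t))

print(beta)  # should lie near beta_true = (2, -1)
```

The reweighting uses the standard identity that the Huber weight is \(\psi (r)/r\); any other convex ρ with continuous ψ could be substituted in the same loop.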

We estimate the nonparametric function \(f(\cdot )\) by the wavelet method. The formal definition of the wavelet method is the following.

Suppose that there exist a scaling function \(\phi (\cdot )\) in the Schwartz space \(S_{l}\) and a multiresolution analysis \(\{V_{\tilde{m}} \}\) in the concomitant Hilbert space \(L^{2}(\mathbf{R})\) with the reproducing kernel \(E_{\tilde{m}}(t,s)\) given by

$$\begin{aligned} E_{\tilde{m}}(t,s)=2^{\tilde{m}}E_{0}\bigl(2^{\tilde{m}}t, 2^{\tilde{m}}s\bigr)=2^{ \tilde{m}}\sum_{k\in \mathbf{Z}}\phi \bigl(2^{\tilde{m}}t-k\bigr)\phi \bigl(2^{ \tilde{m}}s-k\bigr). \end{aligned}$$

Let \(A_{i}=[s_{i-1}, s_{i}]\), \(1\leq i\leq n\), denote intervals that partition \([0, 1]\) with \(t_{i} \in A_{i}\). Then the wavelet estimator of \(f(t)\) is given by

$$\begin{aligned} \hat{f}_{n}(t)=\sum_{i=1}^{n} \bigl(y_{i}-h\bigl(\mathbf{x}^{T}_{i}\hat{\beta } _{n}\bigr)\bigr) \int _{A_{i}}{E_{\tilde{m}}(t,s)}\,ds. \end{aligned}$$
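As a hedged numerical sketch of this construction, take the Haar scaling function \(\phi =\mathbf{1}_{[0,1)}\), for which the reproducing kernel has the closed form \(E_{\tilde{m}}(t,s)=2^{\tilde{m}}\mathbf{1}\{\lfloor 2^{\tilde{m}}t\rfloor =\lfloor 2^{\tilde{m}}s\rfloor \}\), so that the estimator reduces to a local average of partial residuals over dyadic bins. (Haar does not have the smoothness assumed in Condition (C8) below; it is used here only because its kernel is explicit.)

```python
import numpy as np

def fhat_haar(t_eval, t_obs, resid, m_tilde):
    """Wavelet estimator with the Haar scaling function: for Haar,
    E_m(t, s) = 2^m * 1{floor(2^m t) == floor(2^m s)}, and with
    A_i = [(i-1)/n, i/n] and an equispaced design the integral of E_m
    over A_i is approximately 2^m / n when t_i shares t's dyadic bin."""
    scale = 2 ** m_tilde
    n = len(t_obs)
    bins = np.floor(scale * t_obs).astype(int)
    out = np.empty(len(t_eval))
    for j, t in enumerate(t_eval):
        out[j] = scale / n * resid[bins == int(np.floor(scale * t))].sum()
    return out

n, m_tilde = 512, 3                  # 2^m_tilde = 8 = n^(1/3), cf. (C9)
t_obs = (np.arange(1, n + 1) - 0.5) / n
resid = np.sin(2 * np.pi * t_obs)    # noise-free partial residuals
fhat = fhat_haar(t_obs, t_obs, resid, m_tilde)
print(round(np.max(np.abs(fhat - resid)), 3))   # bin-width bias only
```

The remaining error is pure bin-width bias; with a smoother scaling function and noisy residuals the variance terms of Theorem 3.3 would also appear.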

Main results

We now list some conditions used to obtain the main results.

  1. (C1)

    \(\max_{1\leq i \leq n}\|\mathbf{x}_{i}\|=O(1) \), and the eigenvalues of \(n^{-1}\sum^{n}_{i=1}\mathbf{x}_{i}\mathbf{x}^{T}_{i}\) are bounded above and away from zero.

  2. (C2)

    \(b>0\) and \(bc-d^{2}>0\), where \(b=E\{\psi '(\eta )\}\), \(c=E\{\eta ^{2} \psi '(\eta )\}\), and \(d=E\{\eta \psi '(\eta )\}\) with \(\eta =\tilde{e}_{i}+a\).

  3. (C3)

    \(E\psi (\tilde{e}_{i}+a )=0\).

  4. (C4)

    The function ρ is convex, not monotone, and possesses bounded derivatives of sufficiently high order in a neighborhood of the point \(\mathbf{x}_{i}^{T}\beta _{0}\). In particular, \(\psi (t)\) should be continuous and bounded in a neighborhood of \(\mathbf{x}_{i}^{T}\beta _{0}\).

  5. (C5)

    \(h(\cdot )\) is assumed to possess bounded derivatives of sufficiently high order in a neighborhood of point \(\mathbf{x}_{i} ^{T}\beta _{0}\).

  6. (C6)

    \(f(\cdot )\in H^{\alpha }\) (Sobolev space) for some \(\alpha >1/2\).

  7. (C7)

    \(f(\cdot )\) is a Lipschitz function of order \(\gamma >0\).

  8. (C8)

    \(\phi (\cdot )\) belongs to \(S_{l}\), which is a Schwartz space for \(l\geq \alpha \), is a Lipschitz function of order 1, and has a compact support, in addition to \(|\hat{\phi }(\xi )-1|=O(\xi )\) as \(\xi \rightarrow 0\), where ϕ̂ denotes the Fourier transform of ϕ.

  9. (C9)

    \(s_{i}\), \(1\leq i\leq n\), satisfy \(\max_{1\leq i\leq n}(s_{i}-s _{i-1})=O(n^{-1})\), and \(2^{\tilde{m}}=O(n^{1/3})\).

Remark 1

Condition (C1) is often imposed in M-estimation theory of regression models. Condition (C2) is used by Silvapullé [35] for HD estimation. In this paper, this condition is also necessary for M-estimation. Condition (C3) is used by Wu [36] and Zeng and Hu [37] with \(a=0\). We require this in order that the expectation of (5) reaches its minimum at the true value \(\beta _{0}\). For Condition (C4), higher-order derivatives are technically convenient (Taylor expansions), but their existence is hardly essential for the results to hold; see Huber [1]. Condition (C5) is quite mild and can be easily satisfied. Conditions (C6)–(C9) are used by Hu et al. [38].

Remark 2

The assumption \(\psi (a)=0\) and Condition (C4) are serious restrictions, which show that the M-estimator in our paper is a particular case of the classical M-estimator. However, these conditions are necessary for our study.

Theorem 3.1

Let \(\{e_{n}, n\geq 1\}\) be a sequence of NSD random variables with \(Ee_{n}=0\), and suppose that, for some \(\delta >0\),

$$ \sup_{n\geq 1}E \vert e_{n} \vert ^{2+\delta }< \infty . $$

Suppose that

$$ \sup_{j\geq 1}\sum_{i: \vert i-j \vert \geq u} \bigl\vert \operatorname{cov}(e_{i},e_{j}) \bigr\vert \rightarrow 0 \quad \textit{as } u\rightarrow \infty . $$

Set \(\tilde{e}_{i}=\sum_{q=0}^{m}d_{q}e_{i+q}\), where \(\{d_{q}, 0 \leq q \leq m\}\) are defined in (2). Let \(\{c_{i}, 1\leq i \leq n-m\}\) be an array of constants satisfying \(\max_{1\leq i\leq n-m}|c_{i}|=O(1)\), and suppose that \(\psi (a)=0\) and Conditions (C3) and (C4) hold. Then

$$ (n-m)^{-1/2}\tau ^{-1}\sum _{i=1}^{n-m}c_{i}\psi (\tilde{e}_{i}+a) \stackrel{D}{ \rightarrow } N(0,1), $$

provided that

$$ \tau ^{2}=\lim_{n\rightarrow \infty }(n-m)^{-1} \Biggl\{ \sum_{i=1}^{n-m}c _{i}^{2} \operatorname{Var}\bigl(\psi (\tilde{e}_{i}+a)\bigr)+2\sum _{i=1}^{n-m}\sum_{j=i+1}^{n-m}c _{i}c_{j}\operatorname{Cov}\bigl(\psi (\tilde{e}_{i}+a),\psi ( \tilde{e}_{j}+a)\bigr) \Biggr\} >0. $$
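In practice, \(\tau ^{2}\) can be approximated by a plug-in version of this limit that truncates the covariance sum at a finite lag. The sketch below (the truncation lag and the Huber ψ are hypothetical choices, not part of the theorem) illustrates the computation in the simplest case of independent errors, where the lag terms are negligible:

```python
import numpy as np

def tau2_plugin(c, psi_vals, L):
    """Plug-in estimate of tau^2: the sample variance of c_i * psi(e_i + a)
    plus twice the sample autocovariances up to a truncation lag L."""
    x = c * psi_vals
    x = x - x.mean()
    n = len(x)
    tau2 = np.dot(x, x) / n
    for k in range(1, L + 1):
        tau2 += 2.0 * np.dot(x[:-k], x[k:]) / n
    return tau2

rng = np.random.default_rng(1)
e_tilde = rng.normal(size=20000)            # differenced errors (iid here)
psi_vals = np.clip(e_tilde, -1.345, 1.345)  # Huber psi with a = 0
tau2 = tau2_plugin(np.ones_like(e_tilde), psi_vals, L=3)
print(round(tau2, 3))  # near E[psi(e)^2] since the errors are independent
```

For genuinely dependent errors the truncation lag must grow with n; choosing it is a bandwidth-selection problem that this sketch does not address.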

Theorem 3.2

Let \(\{e_{n}, n\geq 1\}\) be a sequence of NSD random variables with \(Ee_{n}=0\) satisfying conditions (8) and (9). Assume that conditions (C1)–(C5) hold. Then

$$\begin{aligned} &(n-m)^{-1/2}\tau ^{-1}_{\beta }E \biggl( \frac{\partial ^{2}Q(\beta _{0})}{ \partial \beta \partial \beta ^{T}} \biggr) (\hat{\beta }_{n}-\beta _{0}) \stackrel{D}{ \rightarrow } N(0,I_{d}), \end{aligned}$$

provided that

$$\begin{aligned} \tau ^{2}_{\beta }={}& \lim_{n\rightarrow \infty } \frac{1}{n-m} \Biggl\{ \sum_{i=1}^{n-m} \tilde{\mathbf{h}}_{i}'(\beta _{0})\tilde{ \mathbf{h}}_{i}^{\prime \,T}( \beta _{0})\operatorname{Var} \bigl(\psi (\tilde{e}_{i}+a ) \bigr) \\ &{}+2\sum_{i=1}^{n-m}\sum _{j= i+1}^{n-m}\tilde{\mathbf{h}}_{i}'( \beta _{0})\tilde{\mathbf{h}}_{j}^{\prime \,T}(\beta _{0})\operatorname{Cov} \bigl(\psi (\tilde{e} _{i}+a ),\psi ( \tilde{e}_{j}+a ) \bigr) \Biggr\} \end{aligned}$$

is a positive definite matrix, where \(I_{d}\) is the identity matrix of order d.

Corollary 3.1

Let \(h(\mathbf{x}_{i}^{T}\beta )= \mathbf{x}_{i}^{T}\beta \), and let \(\{e_{n}, n\geq 1\}\) be a sequence of NSD random variables with \(Ee_{n}=0\) satisfying conditions (8) and (9). Assume that Conditions (C1)–(C4) hold. Then

$$\begin{aligned} &(n-m)^{-1/2}\tau ^{-1}_{\beta }E \biggl( \frac{\partial ^{2}Q(\beta _{0})}{ \partial \beta \partial \beta ^{T}} \biggr) (\hat{\beta }_{n}-\beta _{0}) \stackrel{D}{ \rightarrow } N(0,I_{d}), \end{aligned}$$

provided that

$$ \tau ^{2}_{\beta }=\lim_{n\rightarrow \infty }\frac{1}{n-m} \Biggl\{ \sum_{i=1}^{n-m}\tilde{ \mathbf{x}}_{i}\tilde{\mathbf{x}}_{i}^{T} \operatorname{Var} \bigl(\psi (\tilde{e}_{i}+a ) \bigr)+2\sum _{i=1}^{n-m}\sum_{j= i+1} ^{n-m}\tilde{\mathbf{x}}_{i}\tilde{\mathbf{x}}_{j}^{T}\operatorname{Cov} \bigl(\psi (\tilde{e}_{i}+a ),\psi (\tilde{e}_{j}+a ) \bigr) \Biggr\} $$

is a positive definite matrix.

Corollary 3.2

Let \(\{e_{n}, n\geq 1\}\) be a sequence of NSD random variables with \(Ee_{n}=0\) satisfying \(\operatorname{Cov}(e_{i},e_{j})=0\) for \(|i-j|> \bar{m}\), where \(\bar{m}<\infty \). Assume that Conditions (C1)–(C5) and (8) hold. Then

$$\begin{aligned} &(n-m)^{-1/2}\tau ^{-1}_{\beta }E \biggl( \frac{\partial ^{2}Q(\beta _{0})}{ \partial \beta \partial \beta ^{T}} \biggr) (\hat{\beta }_{n}-\beta _{0}) \stackrel{D}{ \rightarrow } N(0,I_{d}), \end{aligned}$$

provided that

$$\begin{aligned} \tau ^{2}_{\beta }={}&\lim_{n\rightarrow \infty } \frac{1}{n-m} \Biggl\{ \sum_{i=1}^{n-m} \tilde{\mathbf{h}}_{i}'(\beta _{0})\tilde{ \mathbf{h}}_{i}^{\prime \,T}( \beta _{0})\operatorname{Var} \bigl(\psi (\tilde{e}_{i}+a ) \bigr) \\ &{}+2\sum_{k=1}^{\bar{m}}\sum _{i=1}^{n-m-k}\tilde{\mathbf{h}} _{i+k}'( \beta _{0})\tilde{\mathbf{h}}_{i}^{\prime \,T}(\beta _{0})\operatorname{Cov} \bigl(\psi (\tilde{e}_{i+k}+a ),\psi ( \tilde{e}_{i}+a ) \bigr) \Biggr\} \end{aligned}$$

is a positive definite matrix.

By Theorem 3.2 we can also easily obtain the following results for \(\rho (t)=t^{2}\); we omit their proofs.

Corollary 3.3

(Zeng and Liu [32])

Let \(\rho (t)=t^{2}\), \(h(\mathbf{x}_{i}^{T}\beta )=\mathbf{x}_{i}^{T}\beta \), and let \(\{e_{n}, n\geq 1\}\) be a sequence of NSD random variables with \(Ee_{n}=0\) satisfying conditions (8) and (9). Assume that conditions (C1)–(C2) hold. Then

$$\begin{aligned} &(n-m)^{-1/2}\tau ^{-1}_{\beta }\sum ^{n-m}_{i=1}\tilde{\mathbf{x}}_{i} \tilde{ \mathbf{x}}_{i}^{T}(\hat{\beta }_{n}-\beta _{0})\stackrel{D}{ \rightarrow } N(0,I_{d}), \end{aligned}$$

provided that

$$ \tau ^{2}_{\beta }=\lim_{n\rightarrow \infty }(n-m)^{-1} \Biggl\{ \sum_{i=1}^{n-m}\tilde{\mathbf{x}}_{i} \tilde{\mathbf{x}}^{T}_{i}\operatorname{Var} (\tilde{e}_{i} )+2 \sum_{i=1}^{n-m}\sum _{j=i+1}^{n-m}\tilde{\mathbf{x}}_{i} \tilde{\mathbf{x}}^{T}_{j}\operatorname{Cov} (\tilde{e}_{i}, \tilde{e}_{j} ) \Biggr\} $$

is a positive definite matrix.

Corollary 3.4

Let \(\rho (t)=t^{2}\), \(h(\mathbf{x} _{i}^{T}\beta )=e^{\mathbf{x}_{i}^{T}\beta }\), and let \(\{e_{n}, n \geq 1\}\) be a sequence of NSD random variables with \(Ee_{n}=0\) satisfying conditions (8) and (9). Assume that conditions (C1)–(C2) hold. Then

$$\begin{aligned} &(n-m)^{-\frac{1}{2}}\tau ^{-1}_{\beta }\sum ^{n-m}_{i=1} \Biggl(\sum_{q=0}^{m}d_{q}e^{\mathbf{x}^{T}_{i+q}\beta _{0}} \mathbf{x}_{i+q} \Biggr) \Biggl(\sum_{q=0}^{m}d_{q}e^{\mathbf{x}^{T}_{i+q}\beta _{0}} \mathbf{x}_{i+q} \Biggr)^{T}(\hat{\beta }_{n}-\beta _{0})\stackrel{D}{\rightarrow } N(0,I_{d}), \end{aligned}$$

provided that \(\tau ^{2}_{\beta }=\lim_{n\rightarrow \infty }(n-m)^{-1}\operatorname{Var} (\sum_{i=1}^{n-m}\tilde{e}_{i}\sum_{q=0}^{m}d_{q}e^{\mathbf{x} ^{T}_{i+q}\beta _{0}}\mathbf{x}_{i+q} )\) is a positive definite matrix.

Theorem 3.3

Under the conditions of Theorem 3.2, assume that Conditions (C6)–(C9) hold. Then

$$\begin{aligned} \sup_{0\leq t \leq 1} \bigl\vert \hat{f}_{n}(t)-f(t) \bigr\vert =O_{P}\bigl(n^{- \gamma }\bigr)+O_{P}(\tau _{\tilde{m}})+O_{P}\bigl(n^{-1/3}M_{n}\bigr) \quad \textit{as } n\rightarrow \infty , \end{aligned}$$

where \(M_{n}\rightarrow \infty \) at an arbitrarily slow rate, and \(\tau _{\tilde{m}}=2^{-\tilde{m}(\alpha -1/2)}\) if \(1/2< \alpha <3/2\), \(\tau _{\tilde{m}}=\sqrt{\tilde{m}}2^{-\tilde{m}}\) if \(\alpha =3/2\), and \(\tau _{\tilde{m}}=2^{-\tilde{m}}\) if \(\alpha >3/2\).


  1. Huber, P.J.: Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Stat. 1(5), 799–821 (1973)

  2. Hampel, F.R., Ronchetti, E.M., Rousseeuw, P.J., Stahel, W.A.: Robust Statistics: The Approach Based on Influence Functions. Wiley, New York (1986)

  3. Hu, T.Z.: Negatively superadditive dependence of random variables with applications. Chinese J. Appl. Probab. Statist. 16(2), 133–144 (2000)

  4. Shen, Y., Wang, X.J., Yang, W.Z., Hu, S.H.: Almost sure convergence theorem and strong stability for weighted sums of NSD random variables. Acta Math. Sin. Engl. Ser. 29(4), 743–756 (2013)

  5. Xue, Z., Zhang, L.L., Lei, Y.J., Chen, Z.J.: Complete moment convergence for weighted sums of negatively superadditive dependent random variables. J. Inequal. Appl. 2015, Article ID 117 (2015)

  6. Wang, X.J., Deng, X., Zheng, L.L., Hu, S.H.: Complete convergence for arrays of rowwise negatively superadditive dependent random variables and its applications. Statistics 48(4), 834–850 (2014)

  7. Wang, X.J., Shen, A.T., Chen, Z.Y., Hu, S.H.: Complete convergence for weighted sums of NSD random variables and its application in the EV regression model. Test 24, 166–184 (2015)

  8. Wang, X.J., Wu, Y., Hu, S.H.: Complete moment convergence for double-indexed randomly weighted sums and its applications. Statistics 52(3), 503–518 (2018)

  9. Meng, B., Wang, D., Wu, Q.: On the strong convergence for weighted sums of negatively superadditive dependent random variables. J. Inequal. Appl. 2017, Article ID 269 (2017)

  10. Eghbal, N., Amini, M., Bozorgnia, A.: On the Kolmogorov inequalities for quadratic forms of dependent uniformly bounded random variables. Stat. Probab. Lett. 81, 1112–1120 (2011)

  11. Shen, A.T., Zhang, Y., Volodin, A.: Applications of the Rosenthal-type inequality for negatively superadditive dependent random variables. Metrika 78, 295–311 (2015)

  12. Shen, A.T., Xue, M.X., Volodin, A.: Complete moment convergence for arrays of rowwise NSD random variables. Stochastics 88(4), 606–621 (2016)

  13. Boente, G., He, X., Zhou, J.: Robust estimates in generalized partially linear models. Ann. Stat. 34(6), 2856–2878 (2006)

  14. Cheng, G., Zhou, L., Huang, J.Z.: Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data. Bernoulli 20(1), 141–163 (2014)

  15. He, X., Fung, W.K., Zhu, Z.: Robust estimation in generalized partial linear models for clustered data. J. Am. Stat. Assoc. 100, 1176–1184 (2005)

  16. Boente, G., Rodriguez, D.: Robust inference in generalized partially linear models. Comput. Stat. Data Anal. 54(12), 2942–2966 (2010)

  17. Qin, G., Zhu, Z., Fung, W.K.: Robust estimation of generalized partially linear model for longitudinal data with dropouts. Ann. Inst. Stat. Math. 68, 977–1000 (2016)

  18. Lin, H., Fu, B., Qin, G., Zhu, Z.: Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Biometrics 73(4), 1132–1139 (2017)

  19. Yu, Y., Ruppert, D.: Penalized spline estimation for partially linear single-index models. J. Am. Stat. Assoc. 97, 1042–1054 (2002)

  20. Xia, Y., Härdle, W.: Semi-parametric estimation of partially linear single-index models. J. Multivar. Anal. 97, 1162–1184 (2006)

  21. Wang, J.L., Xue, L.G., Zhu, L.X., Chong, Y.S.: Estimation for a partial-linear single-index model. Ann. Stat. 38(1), 246–274 (2010)

  22. Huang, Z.S.: Statistical inferences for partially linear single-index models with error-prone linear covariates. J. Stat. Plan. Inference 141(2), 899–909 (2011)

  23. Lian, H., Liang, H., Carroll, R.J.: Variance function partially linear single-index models. J. R. Stat. Soc. B 77(1), 171–194 (2015)

  24. Yang, J., Lu, F., Yang, H.: Statistical inference on asymptotic properties of two estimators for the partially linear single-index models. Statistics 52(6), 1193–1211 (2018)

  25. Tabakan, G., Akdeniz, F.: Difference-based ridge estimator of parameters in partial linear model. Stat. Pap. 51, 357–368 (2010)

  26. Wang, L., Brown, L.D., Cai, T.T.: A difference based approach to the semiparametric partial linear model. Electron. J. Stat. 5, 619–641 (2011)

  27. Zhao, H., You, J.: Difference based estimation for partially linear regression models with measurement errors. J. Multivar. Anal. 102, 1321–1338 (2011)

  28. Duran, E.A., Härdle, W.K., Osipenko, M.: Difference based ridge and Liu type estimators in semiparametric regression models. J. Multivar. Anal. 105(1), 164–175 (2012)

  29. Wu, J.: Restricted difference-based Liu estimator in partially linear model. J. Comput. Appl. Math. 300, 97–102 (2016)

  30. Hu, H.C., Yang, Y., Pan, X.: Asymptotic normality of DHD estimators in a partially linear model. Stat. Pap. 57(3), 567–587 (2016)

  31. Wu, J.: Difference based ridge type estimator of parameters in restricted partially linear model with correlated errors. SpringerPlus 5, 178 (2016)

  32. Zeng, Z., Liu, X.D.: Asymptotic normality of difference-based estimator in partially linear model with dependent errors. J. Inequal. Appl. 2018, Article ID 267 (2018)

  33. Huber, P.J.: Robust estimation of a location parameter. Ann. Math. Stat. 35, 73–101 (1964)

  34. Yatchew, A.: An elementary estimator of the partial linear model. Econ. Lett. 57, 135–143 (1997)

  35. Silvapullé, M.J.: Asymptotic behavior of robust estimators of regression and scale parameters with fixed carriers. Ann. Stat. 13(4), 1490–1497 (1985)

  36. Wu, W.B.: M-estimation of linear models with dependent errors. Ann. Stat. 35, 495–521 (2007)

  37. Zeng, Z., Hu, H.C.: Weak linear representation of M-estimation in GLMs with dependent errors. Stoch. Dyn. 17, 1750034 (2017)

  38. Hu, H.C., Cui, H.J., Li, K.C.: Asymptotic properties of wavelet estimators in partially linear errors-in-variables models with long-memory errors. Acta Math. Appl. Sin. Engl. Ser. 34(1), 77–96 (2018)

  39. Zhou, X., You, J.: Wavelet estimation in varying-coefficient partially linear regression models. Stat. Probab. Lett. 68, 91–104 (2004)



The authors would like to thank the referee and the Associate Editor for their comments and suggestions.

Availability of data and materials

Not applicable.


This research is supported by the National Natural Science Foundation of China [grant number 71471075].

Author information




All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xiangdong Liu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



1.1 Lemmas

In this section, we present the proofs of the main results. We first need some lemmas.

Lemma 1

Under Conditions (C1), (C4), and (C5), suppose that \(e_{i}\) satisfies (8). Then

$$\begin{aligned} \frac{\partial ^{2}Q(\beta _{0})}{\partial \beta \partial \beta ^{T}}-E\frac{ \partial ^{2}Q(\beta _{0})}{\partial \beta \partial \beta ^{T}} =O_{P} \bigl((n-m)^{\frac{1}{2}} \bigr) \end{aligned}$$

and

$$\begin{aligned} \sup \bigl\Vert n^{-3/2} R_{nl} (\tilde{\beta } ) \bigr\Vert = \sup \biggl\Vert n^{-3/2}\frac{\partial ^{3}}{\partial \beta \partial \beta ^{T}\partial \beta _{l}}Q (\tilde{\beta } ) \biggr\Vert \rightarrow 0 \quad \textit{as } n\rightarrow \infty , \end{aligned}$$

where \(\tilde{\beta }=s\hat{\beta }_{n}+(1-s)\beta _{0}\) for some \(s\in [0,1]\), \(l=1,2,\ldots,d\).


We have

$$\begin{aligned} &\frac{\partial ^{2}}{\partial \beta \partial \beta ^{T}}Q(\beta _{0})-E\frac{ \partial ^{2}}{\partial \beta \partial \beta ^{T}}Q(\beta _{0}) \\ &\quad = \Biggl(\sum_{i=1}^{n-m}\psi ' (\tilde{e}_{i}+a ){\tilde{\mathbf{h}}'_{i}( \beta _{0})} {\tilde{\mathbf{h}}^{\prime \,T}_{i}(\beta _{0})}-\sum_{i=1}^{n-m} \psi ( \tilde{e}_{i}+a )\tilde{\mathbf{h}}_{i}''( \beta _{0}) \Biggr) \\ &\qquad {}- \Biggl(E \Biggl(\sum_{i=1}^{n-m}\psi ' (\tilde{e}_{i}+a ) {\tilde{\mathbf{h}}'_{i}( \beta _{0})} {\tilde{\mathbf{h}}^{\prime \,T}_{i}(\beta _{0})} \Biggr)-E \Biggl(\sum_{i=1}^{n-m} \psi (\tilde{e}_{i}+a ) \tilde{\mathbf{h}}_{i}''( \beta _{0}) \Biggr) \Biggr) \\ &\quad = \Biggl(\sum_{i=1}^{n-m}\psi ' (\tilde{e}_{i}+a ){\tilde{\mathbf{h}}'_{i}( \beta _{0})} {\tilde{\mathbf{h}}^{\prime \,T}_{i}(\beta _{0})}-E \Biggl(\sum_{i=1} ^{n-m} \psi ' (\tilde{e}_{i}+a ){\tilde{\mathbf{h}}'_{i}( \beta _{0})} {\tilde{\mathbf{h}}^{\prime \,T}_{i}(\beta _{0})} \Biggr) \Biggr) \\ &\qquad {}- \Biggl(\sum_{i=1}^{n-m}\psi ( \tilde{e}_{i}+a ) \tilde{\mathbf{h}}_{i}''( \beta _{0})-\sum_{i=1}^{n-m}E \bigl( \psi (\tilde{e}_{i}+a )\tilde{\mathbf{h}}_{i}''( \beta _{0}) \bigr) \Biggr) \\ &\quad :=I_{1}-I_{2}. \end{aligned}$$

From (9) we have

$$ \sup_{i\geq 1}\sum_{j: \vert i-j \vert \geq u} \bigl\vert \operatorname{cov}\bigl(\psi '(e_{i}+a),\psi '(e_{j}+a)\bigr) \bigr\vert \rightarrow 0 \quad \mbox{as } u\rightarrow \infty . $$

Therefore, for a fixed small ε, there exists a positive integer \(\delta =\delta _{\varepsilon }\) such that

$$ \sup_{i\geq 1}\sum_{j: \vert i-j \vert \geq \delta } \bigl\vert \operatorname{cov}\bigl(\psi '(e_{i}+a),\psi '(e _{j}+a)\bigr) \bigr\vert < \varepsilon $$

for \(1\leq k_{1},k_{2}, l_{1},l_{2} \leq d\), and thus

$$\begin{aligned} &\sum_{i=1}^{n-m}\sum _{j: \vert i-j \vert \geq 1}{\tilde{h}'_{ik_{1}} (\beta _{0} )} {\tilde{h}'_{il_{1}} (\beta _{0} )} {\tilde{h}'_{jk _{2}} (\beta _{0} )} { \tilde{h}'_{jl_{2}} (\beta _{0} )}\\ &\qquad {}\times E \bigl\{ \bigl[ \psi ' (\tilde{e}_{i}+a )-E\psi ' (\tilde{e} _{i}+a ) \bigr] \bigl[\psi ' (\tilde{e}_{j}+a )-E \psi ' (\tilde{e}_{j}+a ) \bigr] \bigr\} \\ &\quad =\sum_{j:1\leq \vert i-j \vert < \delta }\sum_{i=1}^{n-m}{ \tilde{h}'_{ik_{1}} (\beta _{0} )} { \tilde{h}'_{il_{1}} (\beta _{0} )} { \tilde{h}'_{jk_{2}} (\beta _{0} )} { \tilde{h}'_{jl_{2}} (\beta _{0} )}\\ &\qquad {}\times E \bigl\{ \bigl[ \psi ' (\tilde{e}_{i}+a )-E \psi ' ( \tilde{e}_{i}+a ) \bigr] \bigl[\psi ' (\tilde{e} _{j}+a )-E\psi ' (\tilde{e}_{j}+a ) \bigr] \bigr\} \\ &\qquad {}+\sum_{i=1}^{n-m}\sum _{j: \vert i-j \vert \geq \delta }{\tilde{h}'_{ik_{1}} (\beta _{0} )} {\tilde{h}'_{il_{1}} (\beta _{0} )} {\tilde{h}'_{jk_{2}} (\beta _{0} )} { \tilde{h}'_{jl_{2}} (\beta _{0} )}\\ &\qquad {}\times E \bigl\{ \bigl[ \psi ' (\tilde{e}_{i}+a )-E \psi ' ( \tilde{e}_{i}+a ) \bigr] \bigl[\psi ' (\tilde{e} _{j}+a )-E\psi ' (\tilde{e}_{j}+a ) \bigr] \bigr\} \\ &\quad \leq 2\delta \max_{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2}\\ &\qquad {}\times \max_{1\leq i,j\leq n-m}E \bigl\{ \bigl[\psi ' ( \tilde{e}_{i}+a )-E\psi ' (\tilde{e}_{i}+a ) \bigr] \bigl[\psi ' (\tilde{e}_{j}+a )-E\psi ' (\tilde{e} _{j}+a ) \bigr] \bigr\} \sum _{i=1}^{n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2} \\ &\qquad {}+\max_{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{4}\\ &\qquad {}\times \sum _{i=1}^{n-m}\sum_{j: \vert i-j \vert \geq \delta }E \bigl\{ \bigl[\psi ' (\tilde{e}_{i}+a )-E\psi ' (\tilde{e} _{i}+a ) \bigr] \bigl[\psi ' ( \tilde{e}_{j}+a )-E \psi ' (\tilde{e}_{j}+a ) \bigr] \bigr\} \\ &\quad \leq 2\delta \max_{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2}\\ &\qquad {}\times \max_{1\leq i,j\leq n-m}E \bigl\{ \bigl[\psi ' ( \tilde{e}_{i}+a )-E\psi ' (\tilde{e}_{i}+a ) \bigr] \bigl[\psi ' (\tilde{e}_{j}+a )-E\psi ' (\tilde{e} _{j}+a ) \bigr] \bigr\} \sum _{i=1}^{n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2} \\ &\qquad {}+\max_{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{4}(n-m)\varepsilon . \end{aligned}$$

By Condition (C5) we have that \(\max_{1\leq i \leq n}\{\sum_{q=0}^{m} | d_{q}h'(\mathbf{x}^{T}_{i+q}\tilde{\beta })|\}\), \(\max_{1\leq i \leq n}\{\sum_{q=0}^{m} | d_{q}h''(\mathbf{x}^{T}_{i+q}\tilde{\beta })| \}\), and \(\max_{1\leq i \leq n}\{\sum_{q=0}^{m} | d_{q}h'''( \mathbf{x}^{T}_{i+q}\tilde{\beta })|\}\) are bounded by some constant \(C_{1}\). Then by (C4) it follows that, for some constant \(M>0\),

$$\begin{aligned} &(n-m)^{-2}E \Biggl\{ \sum_{i=1}^{n-m} \psi ' (\tilde{e}_{i}+a ) {\tilde{h}'_{ik} (\beta _{0} )} {\tilde{h}'_{il} (\beta _{0} )}-E \Biggl(\sum_{i=1}^{n-m} \psi ' (\tilde{e}_{i}+a ) {\tilde{h}'_{ik} (\beta _{0} ){\tilde{h}'_{il} (\beta _{0} )}} \Biggr) \Biggr\} ^{2} \\ &\quad =(n-m)^{-2} \Biggl\{ \sum_{i=1}^{n-m} \tilde{h}^{\prime \,2}_{ik} (\beta _{0} ) \tilde{h}^{\prime \,2}_{il} (\beta _{0} )E \bigl(\psi ' (\tilde{e}_{i}+a )-E\psi ' ( \tilde{e}_{i}+a ) \bigr) ^{2} \\ &\qquad {}+\sum_{i=1}^{n-m}\sum _{j: \vert j-i \vert \geq 1}\tilde{h}'_{ik_{1}} (\beta _{0} )\tilde{h}'_{il_{1}} (\beta _{0} ) \tilde{h}'_{jk_{2}} (\beta _{0} ) \tilde{h}'_{jl_{2}} (\beta _{0} )\\ &\qquad {}\times E \bigl[ \bigl(\psi ' (\tilde{e}_{i}+a )-E \psi ' ( \tilde{e}_{i}+a ) \bigr) \bigl(\psi ' (\tilde{e} _{j}+a )-E\psi ' (\tilde{e}_{j}+a ) \bigr) \bigr] \Biggr\} \\ &\quad \leq (n-m)^{-1}\max_{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2}(n-m)^{-1}\sum_{i=1}^{n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2}M \\ &\qquad {}+2\delta (n-m)^{-1}\max_{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2}(n-m)^{-1}\sum_{i=1}^{n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2}M\\ &\qquad {}+(n-m)^{-1}\max _{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{4} \varepsilon \\ &\quad \leq (2\delta +1) (n-m)^{-1}\max_{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2}(n-m)^{-1}\sum_{i=1}^{n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{2}M\\ &\qquad {}+(n-m)^{-1}\max _{1\leq i\leq n-m} \bigl\Vert \tilde{\mathbf{h}}'_{i} (\beta _{0} ) \bigr\Vert ^{4} \varepsilon \\ &\quad = (2\delta +1) (n-m)^{-1}\max_{1\leq i\leq n-m} \Biggl\Vert \sum_{q=0} ^{m}d_{q}h' \bigl(\mathbf{x}_{i+q}^{T}\beta _{0}\bigr) \mathbf{x}_{i+q} \Biggr\Vert ^{2}\\ &\qquad {}\times (n-m)^{-1}\sum _{i=1}^{n-m} \Biggl\Vert \sum _{q=0}^{m}d_{q}h'\bigl( \mathbf{x}_{i+q}^{T}\beta _{0}\bigr) \mathbf{x}_{i+q} \Biggr\Vert ^{2}M \\ &\qquad {}+(n-m)^{-1}\max_{1\leq i\leq n-m} \Biggl\Vert \sum _{q=0}^{m}d_{q}h'\bigl( \mathbf{x}_{i+q}^{T}\beta _{0}\bigr) \mathbf{x}_{i+q} \Biggr\Vert ^{4}\varepsilon \\ &\quad \leq (n-m)^{-1} \Biggl\{ (2\delta +1)C_{1}^{2} \max_{1\leq i\leq n-m} \sum_{q=0}^{m} \Vert \mathbf{x}_{i+q} \Vert ^{2}(n-m)^{-1}C_{1}^{2} \sum_{i=1} ^{n-m}\sum _{q=0}^{m} \Vert \mathbf{x}_{i+q} \Vert ^{2}M\\ &\qquad {}+C_{1}^{4} \Biggl(\max_{1 \leq i\leq n-m} \sum_{q=0}^{m} \Vert \mathbf{x}_{i+q} \Vert ^{2} \Biggr)^{2} \varepsilon \Biggr\} , \end{aligned}$$

and from (C1) it follows that

$$\begin{aligned} (n-m)^{-1}\operatorname{Var} (I_{1} )=O(1). \end{aligned}$$

Since \((n-m)^{-1}\operatorname{Var} (I_{1} )=O(1)\), the Chebyshev inequality yields \(I_{1}-EI_{1}=O_{P} ((n-m)^{\frac{1}{2}} )\). In the same way we obtain \(I_{2}-EI_{2}=O_{P} ((n-m)^{\frac{1}{2}} )\). Consequently,

$$\begin{aligned} \frac{\partial ^{2}}{\partial \beta \partial \beta ^{T}}Q(\beta _{0})-E\frac{ \partial ^{2}}{\partial \beta \partial \beta ^{T}}Q(\beta _{0}) =O_{P} \bigl((n-m)^{\frac{1}{2}} \bigr). \end{aligned}$$
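The mechanism behind this rate can be seen numerically: under the weak-dependence bound used above, the variance of a sum of n terms grows only linearly in n, so Chebyshev gives fluctuations of order \(\sqrt{n}\). A minimal numpy check with 1-dependent errors standing in for NSD errors (the moving-average construction and all names are our illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 4000
# 1-dependent errors e_i = (u_i + u_{i+1}) / sqrt(2); here Var(sum) = 2n - 1,
# so Var(S_n)/n stabilizes near 2 -- linear growth, not quadratic.
u = rng.normal(size=(reps, n + 1))
e = (u[:, :-1] + u[:, 1:]) / np.sqrt(2)
S = e.sum(axis=1)
ratio = S.var() / n  # empirical Var(S_n)/n across replications
```

Linear variance growth is exactly what makes the centered sums \(O_{P}(\sqrt{n-m})\) rather than \(O_{P}(n-m)\).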

Note that, for \(1 \leq l\leq d\),

$$\begin{aligned} &\frac{\partial ^{3}}{\partial \beta \partial \beta ^{T}\partial \beta _{l}}Q (\tilde{\beta } ) \\ &\quad =\sum^{n-m}_{i=1} \Biggl\{ 3\psi ' \Biggl(\tilde{y}_{i}-\sum _{q=0}^{m}d_{q}h(\mathbf{x}^{T}_{i+q} \tilde{\beta })+a \Biggr)\sum_{q=0}^{m}d_{q}h'( \mathbf{x}^{T}_{i+q} \tilde{\beta })\sum_{q=0}^{m}d_{q}h''( \mathbf{x}^{T}_{i+q}\tilde{\beta })x _{(i+q)l}\mathbf{x}_{i+q} \mathbf{x}^{T}_{i+q} \\ &\qquad {}-\psi '' \Biggl(\tilde{y}_{i}-\sum _{q=0}^{m}d_{q}h( \mathbf{x}^{T}_{i+q} \tilde{\beta })+a \Biggr) \Biggl(\sum _{q=0}^{m}d_{q}h'( \mathbf{x}^{T}_{i+q} \tilde{\beta }) \Biggr)^{3}x_{(i+q)l} \mathbf{x}_{i+q}\mathbf{x}^{T} _{i+q} \\ &\qquad {}-\psi \Biggl(\tilde{y}_{i}-\sum_{q=0}^{m}d_{q}h( \mathbf{x} ^{T}_{i+q}\tilde{\beta })+a \Biggr)\sum _{q=0}^{m}d_{q}h'''( \mathbf{x}^{T}_{i+q} \tilde{\beta })x_{(i+q)l}\mathbf{x}_{i+q} \mathbf{x}^{T}_{i+q} \Biggr\} . \end{aligned}$$

By Conditions (C1), (C4), and (C5), for \(1\leq k, l, s\leq d\) and some constant \(M>0\), we have

$$\begin{aligned} &\sup \Biggl\vert n^{-3/2}\sum^{n-m}_{i=1} \Biggl\{ 3\psi ' \Biggl(\tilde{y} _{i}-\sum _{q=0}^{m}d_{q}h(\mathbf{x}^{T}_{i+q} \tilde{\beta })+a \Biggr)\\ &\qquad {}\times \sum_{q=0}^{m}d_{q}h'( \mathbf{x}^{T}_{i+q}\tilde{\beta })\sum_{q=0}^{m}d _{q}h''(\mathbf{x}^{T}_{i+q}\tilde{ \beta })x_{(i+q)l}x_{(i+q)k}x_{(i+q)s} \\ &\qquad {}-\psi '' \Biggl(\tilde{y}_{i}-\sum _{q=0}^{m}d_{q}h( \mathbf{x}^{T}_{i+q} \tilde{\beta })+a \Biggr) \Biggl(\sum _{q=0}^{m}d_{q}h'( \mathbf{x}^{T}_{i+q} \tilde{\beta }) \Biggr)^{3}x_{(i+q)l}x_{(i+q)k}x_{(i+q)s} \\ &\qquad {}-\psi \Biggl(\tilde{y}_{i}-\sum _{q=0}^{m}d_{q}h( \mathbf{x}^{T}_{i+q} \tilde{\beta })+a \Biggr)\sum_{q=0}^{m}d_{q}h'''( \mathbf{x}^{T}_{i+q}\tilde{\beta })x_{(i+q)l}x_{(i+q)k}x_{(i+q)s} \Biggr\} \Biggr\vert \\ &\quad \leq (n-m)^{-1/2}M\bigl(C_{1}^{2}+C_{1}^{3}+C_{1} \bigr)\max_{1\leq i\leq n-m} \sum_{q=0}^{m} \Vert \mathbf{x}_{i+q} \Vert (n-m)^{-1}\sum _{i=1} ^{n-m}\sum_{q=0}^{m} \Vert \mathbf{x}_{i+q} \Vert ^{2} \\ &\quad \rightarrow 0,\quad n\rightarrow \infty . \end{aligned}$$

Hence (15) holds, and the proof is completed. □

Lemma 2

If (C1)–(C5) hold, then

$$ \sqrt{n}(\hat{\beta }_{n} -\beta _{0})=O_{p}(1). $$


We can prove Lemma 2 by an argument similar to that for Lemma 4 of Silvapullé [35], so we omit the details. □

Lemma 3

(Zhou and You [39])

If Condition (C8) holds, then

(a1) \(|E_{0}(t,s)|\leq \frac{C_{k}}{(1+|t-s|)^{k}}\) and \(|E_{\tilde{m}}(t,s)| \leq \frac{2^{\tilde{m}}C}{(1+2^{\tilde{m}}|t-s|)^{k}}\), where \(k \in \mathbf{N}\) and \(C=C(k)\) is a constant depending on k only;

(a2) \(\sup_{0\leq s\leq 1}|E_{\tilde{m}}(t,s)|=O(2^{\tilde{m}})\);

(a3) \(\sup_{t}\int ^{1}_{0}|E_{\tilde{m}}(t,s)|\,ds\leq C_{2}\);

(a4) \(\int ^{1}_{0}E_{\tilde{m}}(t,s)\,ds\rightarrow 1\) as \(n\rightarrow \infty \).
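Properties (a2)-(a4) can be checked exactly in the simplest concrete case. For the Haar scaling function the wavelet reproducing kernel reduces to \(E_{\tilde{m}}(t,s)=2^{\tilde{m}}\mathbf{1}\{\lfloor 2^{\tilde{m}}t\rfloor =\lfloor 2^{\tilde{m}}s\rfloor \}\); this choice is an illustrative assumption of ours, not the kernel class required by (C8):

```python
import numpy as np

def E(m, t, s):
    # Haar reproducing kernel on [0,1): E_m(t,s) = 2^m * 1{t, s share a dyadic cell}
    return np.where(np.floor(2**m * t) == np.floor(2**m * s), 2.0**m, 0.0)

m, t = 5, 0.3
s = (np.arange(2**m * 16) + 0.5) / (2**m * 16)  # midpoint grid aligned with the cells
sup = E(m, t, s).max()                           # (a2): sup_s |E_m(t,s)| = 2^m
integral = E(m, t, s).mean()                     # (a3)/(a4): int_0^1 E_m(t,s) ds = 1
```

Here the supremum is exactly \(2^{\tilde{m}}\) and the integral is exactly 1; (a1) holds trivially since this kernel vanishes off the diagonal cell.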

1.2 Proof of Theorem 3.1

By Condition (8) we have

$$\begin{aligned} \sup_{n\geq 1}Ee_{n}^{2}< \infty \quad \mbox{and} \quad \lim_{x\rightarrow \infty }\sup_{n\geq 1}Ee_{n}^{2}I \bigl\{ \vert e_{n} \vert >x\bigr\} =0, \end{aligned}$$

from which it follows that

$$\begin{aligned} C_{3}:=\sup_{n>m}{(n-m)}^{-1}\sum _{i=1}^{n-m} \sum_{q=0}^{m} \operatorname{Var} (d _{q}e_{i+q})< \infty , \end{aligned}$$

and for all \(\varepsilon >0\),

$$\begin{aligned} (n-m)^{-1}\sum_{i=1}^{n-m}\sum _{q=0}^{m}E(d_{q}e_{i+q})^{2}I \bigl\{ \vert d_{q}e _{i+q} \vert \geq \sqrt{n-m} \varepsilon \bigr\} \rightarrow 0 \quad \mbox{as } n\rightarrow \infty . \end{aligned}$$

Then we can find a positive number sequence \(\{\varepsilon _{n}, n \geq 1\}\) with \(\varepsilon _{n}\rightarrow 0\) such that

$$\begin{aligned} (n-m)^{-1}\sum_{i=1}^{n-m}\sum _{q=0}^{m}E(d_{q}e_{i+q})^{2}I \bigl\{ \vert d_{q}e _{i+q} \vert \geq \sqrt{n-m} \varepsilon _{n}\bigr\} \rightarrow 0 \quad \mbox{as } n\rightarrow \infty . \end{aligned}$$
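The existence of such a sequence \(\varepsilon _{n}\) is the standard diagonal argument; writing \(L_{n}(\varepsilon )\) for the left-hand side of the previous display (the notation is ours), the step can be made explicit:

```latex
% For each fixed k, L_n(1/k) -> 0 as n -> infinity, so there exist
% integers n_1 < n_2 < ... with L_n(1/k) <= 1/k for all n >= n_k.
L_{n}(\varepsilon ) := (n-m)^{-1}\sum_{i=1}^{n-m}\sum_{q=0}^{m}
  E(d_{q}e_{i+q})^{2}I\bigl\{ \vert d_{q}e_{i+q}\vert \ge \sqrt{n-m}\,\varepsilon \bigr\},
\qquad
\varepsilon _{n} := 1/k \quad \text{for } n_{k}\le n< n_{k+1}.
% Then eps_n -> 0 and L_n(eps_n) = L_n(1/k) <= 1/k -> 0.
```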

Now we define the integers: \(m_{0}=0\) and for \(j=0,1,2,\ldots \) ,

$$\begin{aligned} &m_{2j+1}=\min \Biggl\{ m':m'\geq m_{2j}, {(n-m)}^{-1}\sum_{i=m_{2j}+1} ^{m'}\sum_{q=0}^{m} \operatorname{Var}(d_{q}e_{i+q})> \sqrt{\varepsilon _{n}} \Biggr\} , \\ &m_{2j+2}=m_{2j+1}+ \biggl\lfloor \frac{1}{\varepsilon _{n}} \biggr\rfloor +m. \end{aligned}$$


and define the blocks of indices

$$\begin{aligned} &I_{j}=\{k:m_{2j}< k\leq m_{2j+1}, j=0, \ldots , l\} \quad \mbox{and} \\ &J_{j}=\{k:m_{2j+1}< k\leq m_{2(j+1)}, j=0, \ldots , l \}, \end{aligned}$$

where \(l=l(n)\) is the number of blocks of indices \(I_{j}\). Then

$$\begin{aligned} l\sqrt{\varepsilon _{n}}\leq {(n-m)}^{-1}\sum _{j=1}^{l}\sum_{i\in I _{j}} \sum_{q=0}^{m} \operatorname{Var} (d_{q}e_{i+q})\leq {(n-m)}^{-1}\sum _{i=1} ^{n-m} \sum_{q=0}^{m} E(d_{q}e_{i+q})^{2}\leq C_{3}, \end{aligned}$$

and hence \(l\leq C_{3}/\sqrt{\varepsilon _{n}}\). If any indices remain when the construction terminates, we collect them into a final block, again denoted by \(J_{l}\). Hence, by the Lagrange mean value theorem,

$$\begin{aligned} &\frac{1}{\sqrt{n-m}}\sum_{i=1}^{n-m}c_{i} \psi (\tilde{e}_{i}+a) \\ &\quad =\frac{1}{\sqrt{n-m}}\sum_{i=1}^{n-m}c_{i} \psi (\tilde{e}_{i}+a)-\frac{1}{ \sqrt{n-m}}\sum _{i=1}^{n-m}c_{i}\psi (a) \\ &\quad =\frac{1}{\sqrt{n-m}}\sum_{i=1}^{n-m}c_{i} \psi '(\xi _{i}) \tilde{e}_{i}, \end{aligned}$$

where \(\xi _{i}=t_{i}\tilde{e}_{i}+a\) for some \(t_{i}\in [0,1]\).

Moreover, setting \(a_{i}=\tau ^{-1}c_{i}\psi '(\xi _{i})\), we have

$$\begin{aligned} &\frac{1}{\sqrt{n-m}}\sum_{i=1}^{n-m}c_{i} \psi (\tilde{e}_{i}+a) \\ &\quad =\frac{1}{\sqrt{n-m}}\sum_{i=1}^{n-m}a_{i} \tilde{e}_{i} \\ &\quad =\frac{1}{\sqrt{n-m}}\sum_{j=1}^{l}\sum _{i\in I_{j}}a_{i} \tilde{e}_{i}+ \frac{1}{\sqrt{n-m}}\sum_{j=1}^{l}\sum _{i\in J_{j}}a _{i}\tilde{e}_{i} \\ &\quad :=I+J. \end{aligned}$$
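The partition into big blocks \(I_{j}\) and small blocks \(J_{j}\) used here can be written out as a short routine. This is a sketch under our own naming (`make_blocks`, `var`), with the small-block length \(\lfloor 1/\varepsilon _{n}\rfloor +m\) as in the definition of \(m_{2j+2}\):

```python
import math

def make_blocks(var, eps, m=1):
    """Big-block/small-block partition (sketch; all names are ours).

    var[i] approximates (n-m)^{-1} * Var(sum_q d_q e_{i+q}) for index i.
    A big block I_j grows until its variance mass exceeds sqrt(eps);
    a small block J_j of length floor(1/eps) + m is then skipped,
    mirroring m_{2j+2} = m_{2j+1} + floor(1/eps) + m.
    """
    n = len(var)
    gap = int(1.0 / eps) + m
    big, small = [], []
    start = 0
    while start < n:
        end, mass = start, 0.0
        while end < n and mass <= math.sqrt(eps):
            mass += var[end]
            end += 1
        big.append(range(start, end))
        small.append(range(end, min(end + gap, n)))
        start = end + gap
    return big, small

blocks_I, blocks_J = make_blocks([1.0 / 1000] * 1000, eps=0.01)
l = len(blocks_I)
```

With uniform variance contributions (total mass \(C_{3}=1\) here) the bound \(l\sqrt{\varepsilon _{n}}\leq C_{3}\) can be checked directly on the output, and the blocks tile the index range without gaps or overlaps.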

By the argument in the proof of Theorem 4.1 in Zeng and Liu [32] we have

$$ {(n-m)}^{-1/2}\sum_{i=1}^{n-m}a_{i} \tilde{e}_{i}\stackrel{D}{\rightarrow } N(0,1), $$

which implies

$$ {(n-m)}^{-1/2}\tau ^{-1}\sum_{i=1}^{n-m}c_{i} \psi (\tilde{e}_{i}+a)\stackrel{D}{ \rightarrow } N(0,1). $$

The proof is completed. □

1.3 Proof of Theorem 3.2

Now we will use Theorem 3.1 to prove Theorem 3.2. Expanding \(\frac{\partial }{\partial \beta }Q (\hat{\beta }_{n} )\) about \(\beta _{0}\), we have

$$ \frac{\partial }{\partial \beta }Q (\hat{\beta }_{n} )=\frac{ \partial }{\partial \beta }Q (\beta _{0} )+\frac{\partial ^{2}}{\partial \beta \partial \beta ^{T}}Q (\beta _{0} ) (\hat{\beta }_{n}-\beta _{0} )+ \frac{1}{2}\bigl[R_{nl}( \tilde{\beta },\hat{\beta }_{n},\beta _{0}) \bigr]_{1\leq l\leq d}, $$

where \(\tilde{\beta }=s\hat{\beta }_{n}+(1-s)\beta _{0}\) for some \(s\in [0,1]\), and

$$\begin{aligned} & \bigl[R_{nl}(\tilde{\beta },\hat{\beta }_{n},\beta _{0}) \bigr]_{1 \leq l\leq d} \\ &\quad = \bigl((\hat{\beta }_{n}-\beta _{0})^{T}R_{n1}( \tilde{\beta }) ( \hat{\beta }_{n}-\beta _{0}),(\hat{\beta }_{n}-\beta _{0})^{T}R_{n2}( \tilde{ \beta }) (\hat{\beta }_{n}-\beta _{0}), \ldots,\\ &\qquad (\hat{\beta }_{n}- \beta _{0})^{T}R_{nd}(\tilde{ \beta }) (\hat{\beta }_{n}-\beta _{0}) \bigr) ^{T}. \end{aligned}$$

From (6) we have

$$ \frac{\partial ^{2}}{\partial \beta \partial \beta ^{T}}Q (\beta _{0} ) (\hat{\beta }_{n}-\beta _{0} )=-\frac{\partial }{\partial \beta }Q (\beta _{0} )- \frac{1}{2}\bigl[R_{nl}( \tilde{\beta },\hat{\beta }_{n},\beta _{0})\bigr]_{1\leq l\leq d} $$

and, by Lemma 1 and Lemma 2,

$$\begin{aligned} &(n-m)^{-\frac{1}{2}} \biggl[E\frac{\partial ^{2}}{\partial \beta ^{T} \partial \beta }Q(\beta _{0})+O_{P} \bigl((n-m)^{\frac{1}{2}} \bigr) \biggr] (\hat{\beta }_{n}-\beta _{0} ) \\ &\quad =-(n-m)^{-\frac{1}{2}} \biggl\{ \frac{\partial }{\partial \beta }Q( \beta _{0})+ \frac{1}{2}\bigl[R_{nl}(\tilde{\beta },\hat{\beta }_{n},\beta _{0})\bigr]_{1\leq l\leq d} \biggr\} \\ &\quad =-(n-m)^{-\frac{1}{2}}\frac{\partial }{\partial \beta }Q(\beta _{0})+o _{P}(1) \\ &\quad =(n-m)^{-\frac{1}{2}} \Biggl\{ \sum^{n-m}_{i=1} \psi (\tilde{e} _{i}+a )\tilde{\mathbf{h}}_{i}'( \beta _{0}) \Biggr\} +o_{P}(1). \end{aligned}$$

We now show that

$$ \frac{1}{\sqrt{n-m}\tau _{\beta }}\sum_{i=1}^{n-m}\psi ( \tilde{e} _{i}+a )\tilde{\mathbf{h}}_{i}' ( \beta _{0} )\stackrel{D}{ \rightarrow } N(0,I_{d}). $$

Let u be a \(1 \times d\) vector such that \(\|u\|=1\). By the Cramér–Wold theorem it suffices to verify that

$$ \frac{1}{\sqrt{n-m}\tau }\sum_{i=1}^{n-m}\psi ( \tilde{e}_{i}+a )u \tilde{\mathbf{h}}_{i}'( \beta _{0})\stackrel{D}{\rightarrow } N(0,1), $$

where \(\tau ^{2}=\lim_{n\rightarrow \infty }\operatorname{Var} ((n-m)^{-1/2} \sum_{i=1}^{n-m}\psi (\tilde{e}_{i}+a )u \tilde{\mathbf{h}}_{i}'(\beta _{0}) )\), and \(\tau ^{2}>0\) by the definition of \(\tau ^{2}_{\beta }\). Since \(u\tilde{\mathbf{h}}_{i}'(\beta _{0})=O(1)\), (25) follows from Theorem 3.1. The proof is completed. □
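The estimating-equation mechanics behind Theorems 3.1 and 3.2 can be illustrated on simulated data. The sketch below uses \(h(u)=e^{u}\), first-order differencing weights \((d_{0},d_{1})=(1,-1)/\sqrt{2}\), the Huber score as ψ, centering constant \(a=0\) (the simulated errors are symmetric), and independent heavy-tailed errors as a stand-in for NSD errors; all names and tuning values are our assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.sort(rng.uniform(0, 1, n))
x = rng.normal(0, 1, n)
beta0 = 0.5
f = np.sin(2 * np.pi * t)                  # nuisance component, removed by differencing
e = 0.3 * rng.standard_t(df=3, size=n)     # heavy-tailed iid errors (stand-in for NSD)
h = np.exp                                 # known link h(u) = e^u
y = h(x * beta0) + f + e

d = np.array([1.0, -1.0]) / np.sqrt(2.0)   # difference weights: sum d_q = 0, sum d_q^2 = 1
yt = d[0] * y[:-1] + d[1] * y[1:]          # differenced responses; f almost cancels

def psi(r, c=1.345):                       # Huber score function
    return np.clip(r, -c, c)

b = 0.4                                    # start away from the truth
for _ in range(30):                        # Fisher-scoring / Newton steps on the robust objective
    hb = d[0] * h(x[:-1] * b) + d[1] * h(x[1:] * b)
    g = d[0] * h(x[:-1] * b) * x[:-1] + d[1] * h(x[1:] * b) * x[1:]  # d(hb)/db since h' = h
    r = yt - hb
    b += np.sum(psi(r) * g) / np.sum((np.abs(r) < 1.345) * g * g)
```

Differencing removes \(f\) up to \(O(\max_{i}(t_{i+1}-t_{i}))\), after which a few scoring steps drive the estimate close to \(\beta _{0}=0.5\).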

1.4 Proof of Theorem 3.3

By (7) we have

$$\begin{aligned} \hat{f}_{n}(t)-f(t) ={}&\sum^{n}_{i=1} \bigl(y_{i}-h\bigl(\mathbf{x}^{T}_{i} \hat{\beta }_{n}\bigr) \bigr) \int _{A_{i}}E_{\tilde{m}}(t,s)\,ds-f(t) \\ ={}&\sum^{n}_{i=1} \bigl(h\bigl( \mathbf{x}^{T}_{i}\beta \bigr)+f(t_{i})+e_{i}-h \bigl( \mathbf{x}^{T}_{i}\hat{\beta }_{n}\bigr) \bigr) \int _{A_{i}}E_{\tilde{m}}(t,s)\,ds-f(t) \\ ={}&\sum^{n}_{i=1} \bigl(h\bigl( \mathbf{x}^{T}_{i}\beta \bigr)-h\bigl(\mathbf{x}^{T} _{i}\hat{\beta }_{n}\bigr) \bigr) \int _{A_{i}}E_{\tilde{m}}(t,s)\,ds \\ &{}+ \Biggl\{ \sum^{n}_{i=1}f(t_{i}) \int _{A_{i}}E_{\tilde{m}}(t,s)\,ds-f(t) \Biggr\} + \sum ^{n}_{i=1}e_{i} \int _{A_{i}}E_{\tilde{m}}(t,s)\,ds \\ :={}&I_{1}+I_{2}+I_{3}. \end{aligned}$$

By the argument in the proof of Theorem 3.2 in Hu [30] we have

$$ I_{2}=O\bigl(n^{-\gamma }\bigr)+O(\tau _{\tilde{m}}) $$

and


$$ I_{3}=O_{P}\bigl(n^{-\frac{1}{3}}M_{n}\bigr). $$

By Lemma 3, (C1), and (C5) we have

$$ \max_{1\leq i\leq n} \bigl\Vert h'(\xi _{i}) \mathbf{x}^{T}_{i} \bigr\Vert \sup_{t} \sum_{i=1}^{n} \int _{A_{i}} \bigl\vert E_{\tilde{m}}(t,s) \bigr\vert \,ds \leq C_{4}, $$

where \(\xi _{i}=r_{i}\mathbf{x}^{T}_{i}\beta +(1-r_{i})\mathbf{x}^{T}_{i} \hat{\beta }_{n}\) for some \(r_{i}\in [0,1]\). It follows that

$$\begin{aligned} I_{1} &\leq \sup_{t} \Biggl\vert \sum ^{n}_{i=1} \bigl(h\bigl(\mathbf{x}^{T}_{i} \beta \bigr)-h\bigl(\mathbf{x}^{T}_{i}\hat{\beta }_{n}\bigr) \bigr) \int _{A_{i}}E _{\tilde{m}}(t,s)\,ds \Biggr\vert \\ &\leq \sup_{t} \Biggl\vert \sum ^{n}_{i=1} \bigl(h'(\xi _{i}) \mathbf{x}^{T} _{i}(\beta -\hat{\beta }_{n}) \bigr) \int _{A_{i}}E_{\tilde{m}}(t,s)\,ds \Biggr\vert \\ &\leq \max_{1\leq i\leq n} \bigl\Vert h'(\xi _{i})\mathbf{x}^{T}_{i} \bigr\Vert \sup _{t}\sum_{i=1}^{n} \int _{A_{i}} \bigl\vert E_{\tilde{m}}(t,s) \bigr\vert \,ds \Vert \beta -\hat{\beta }_{n} \Vert \\ &\leq C_{4} \Vert \beta -\hat{\beta }_{n} \Vert . \end{aligned}$$

By Lemma 2 we get

$$\begin{aligned} I_{1}=O_{p}\bigl(n^{-1/2}\bigr). \end{aligned}$$

Then Theorem 3.3 follows from (26), (27), and (28). □
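The decomposition \(I_{1}+I_{2}+I_{3}\) can also be watched numerically. With the Haar kernel (an illustrative choice of ours, as in the sketch after Lemma 3), estimator (7) reduces to averaging the partial residuals \(y_{i}-h(\mathbf{x}_{i}^{T}\hat{\beta }_{n})\) over the dyadic cell containing t; for simplicity we plug in the true β, so only the bias term \(I_{2}\) and the noise term \(I_{3}\) remain:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4096, 6                      # 2^m dyadic cells, n / 2^m observations per cell
t = (np.arange(n) + 0.5) / n        # equispaced design on [0, 1]
x = rng.normal(size=n)
beta0, h = 0.5, np.exp
f = np.sin(2 * np.pi * t)
y = h(x * beta0) + f + 0.3 * rng.normal(size=n)

# Haar-kernel version of the wavelet estimator: cell-wise mean of residuals
resid = y - h(x * beta0)            # true beta plugged in for illustration
cells = np.floor(2**m * t).astype(int)
cell_mean = np.bincount(cells, weights=resid) / np.bincount(cells)
f_hat = cell_mean[cells]            # estimate of f at each design point

err = np.abs(f_hat - f).max()
```

The maximal error combines the within-cell bias (\(I_{2}\), of order \(2^{-\tilde{m}}\)) with the averaged noise (\(I_{3}\), of order \(\sqrt{2^{\tilde{m}}/n}\)), both small here.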



Cite this article

Fu, F., Zeng, Z. & Liu, X. Difference-based M-estimator of generalized semiparametric model with NSD errors. J Inequal Appl 2019, 61 (2019).



MSC

  • 60F05
  • 62F12
  • 62G05


Keywords

  • Generalized semiparametric model
  • NSD random variables
  • M-estimator
  • Asymptotic normality
  • Weak convergence rate