
Asymptotic properties of wavelet estimators in heteroscedastic semiparametric model based on negatively associated innovations

Abstract

Consider the heteroscedastic semiparametric regression model \(y_{i}=x_{i}\beta+g(t_{i})+\varepsilon_{i}\), \(i=1, 2, \ldots, n\), where β is an unknown slope parameter, \(\varepsilon_{i}=\sigma_{i}e_{i}\), \(\sigma^{2}_{i}=f(u_{i})\), \((x_{i},t_{i},u_{i})\) are nonrandom design points, \(y_{i}\) are the response variables, f and g are unknown functions defined on the closed interval \([0,1]\), and the random errors \(\{e_{i}\}\) are negatively associated (NA) random variables with zero means. Whereas kernel estimators of β, g, and f have attracted much attention in the literature, in this paper we investigate wavelet estimators of these quantities and derive their strong consistency under the NA assumption on the errors. At the same time, we obtain Berry–Esséen-type bounds for the wavelet estimators of β and g.

1 Introduction

Consider the following partially linear or semiparametric regression model:

$$ y_{i}=x_{i}\beta+g(t_{i})+\varepsilon_{i},\quad1 \leq i\leq n, $$
(1.1)

where β is an unknown slope parameter, \(\varepsilon_{i}=\sigma _{i}e_{i}\), \(\sigma^{2}_{i}=f(u_{i})\), \((x_{i},t_{i},u_{i})\) are nonrandom design points, \(y_{i}\) are the response variables, f and g are unknown functions defined on the closed interval \([0,1]\), and \(\{e_{i} \}\) are random errors.

It is well known that model (1.1) and its particular cases have been widely studied when the errors \(e_{i}\) are independent and identically distributed (i.i.d.). For instance, when \(\sigma^{2}_{i}=\sigma^{2}\), model (1.1) reduces to the usual partial linear model, which was first introduced by Engle et al. [1]; various estimation methods have since been used to obtain estimators of the unknown quantities in (1.1) and their asymptotic properties (see [2–5]). Under mixing assumptions, the asymptotic normality of the estimators of β and g was derived in [6–9]. When \(\sigma^{2}_{i}=f(u_{i})\), model (1.1) becomes the heteroscedastic semiparametric model; Back and Liang [10], Zhang and Liang [11], and Wei and Li [12] established the strong consistency and the asymptotic normality, respectively, of the least-squares estimators (LSEs) and weighted least-squares estimators (WLSEs) of β, based on nonparametric estimators of f and g. If \(g(t)\equiv0\) and \(\sigma^{2}_{i}=f(u_{i})\), model (1.1) reduces to the heteroscedastic linear model; when \(\beta\equiv0\) and \(\sigma^{2}_{i}=f(u_{i})\), model (1.1) boils down to the heteroscedastic nonparametric regression model, for which the asymptotic properties of the unknown quantities were studied by Robinson [13], Carroll and Härdle [14], and Liang and Qi [15].

In recent years, wavelet techniques, owing to their ability to adapt to local features of curves, have been used extensively in statistics, engineering, and other technological fields. Many authors have employed wavelet methods to estimate nonparametric and semiparametric models; see Antoniadis et al. [16], Sun and Chai [17], Li et al. [18–20], Xue [21], Zhou et al. [22], among others.

In this paper, we assume that the model errors are NA. Let us recall the definition of NA random variables. A finite collection of random variables \(\{X_{i}, 1\leq i\leq n\}\) is said to be negatively associated (NA) if for all disjoint subsets \(A,B\subset\{1,2,\ldots,n\}\),

$$\operatorname{Cov}\bigl(f(X_{i},i\in A),g(X_{j},j\in B) \bigr)\leq0, $$

where f and g are real coordinatewise nondecreasing functions such that the covariance exists. An infinite sequence of random variables \(\{X_{n}, n\geq1\}\) is said to be NA if for every \(n\geq2\), \(X_{1},X_{2},\ldots, X_{n}\) are NA. The definition of NA random variables was introduced by Alam and Saxena [23] and carefully studied by Joag-Dev and Proschan [24]. Independent random variables are trivially NA, but by the definition NA variables need not be independent. Because of its wide applications in systems reliability and multivariate statistical analysis, the notion of NA has recently received a lot of attention; we refer to Matula [25] and Shao and Su [26], among others.
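As a concrete illustration (our addition, not part of the original definition), the coordinates of a multinomial random vector are a classical example of NA variables (Joag-Dev and Proschan [24]): a fixed number of trials is split among cells, so a large count in one cell forces smaller counts elsewhere. A minimal simulation sketch in Python:

```python
import numpy as np

# Multinomial counts are a standard NA example (Joag-Dev and Proschan [24]):
# 100 trials are split among three cells, so the counts compete.
rng = np.random.default_rng(0)
counts = rng.multinomial(100, [0.3, 0.3, 0.4], size=200_000)

# Empirical check of the defining inequality with f = g = identity on the
# disjoint index sets A = {0}, B = {1}; for the multinomial,
# Cov(X_0, X_1) = -n * p_0 * p_1 = -9 exactly, hence <= 0.
emp_cov = np.cov(counts[:, 0], counts[:, 1])[0, 1]
print(f"empirical Cov(X_0, X_1) = {emp_cov:.2f} (theory: -9)")
```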

In this paper, we derive the LSE and WLSEs of β together with wavelet estimators of f and g, and we establish their strong consistency. At the same time, Berry–Esséen-type bounds for the wavelet estimators of β and g are obtained for the heteroscedastic semiparametric model under NA random errors.

The rest of the paper is organized as follows. Basic assumptions and the estimators are listed in Sect. 2. Notations and the main results are given in Sect. 3. Proofs of the main results are provided in Sect. 4. Some preliminary lemmas are stated in the Appendix.

Throughout the paper, \(C, C_{1}, C_{2}, \ldots\) denote positive constants not depending on n, which may differ from place to place. By \(\lfloor x\rfloor\) we denote the largest integer not exceeding x; \((j_{1},j_{2},\ldots,j_{n})\) stands for any permutation of \((1,2,\ldots, n)\); \(a_{n}=O(b_{n})\) means \(|a_{n}|\leq C|b_{n}|\), and \(a_{n}=o(b_{n})\) means that \(a_{n}/b_{n}\rightarrow0\). By \(I(A)\) we denote the indicator function of a set A, \(\varPhi(x)\) is the standard normal distribution function, \(a^{+}=\max(0,a)\), and \(a^{-}=\max(0,-a)\). All limits are taken as the sample size n tends to ∞, unless specified otherwise.

2 Estimators and assumptions

In model (1.1), if β is known to be the true parameter, then since \(Ee_{i}=0\), we have \(g(t_{i})=E(y_{i}-x_{i}\beta)\), \(1\leq i\leq n\). Hence a natural wavelet estimator of g is

$${g}_{n}(t,\beta)=\sum_{i=1}^{n} (y_{i}-x_{i}\beta) \int_{A_{i}}E_{m}(t,s)\,ds, $$

where the wavelet kernel \(E_{m}(t,s)\) can be defined by \(E_{m}(t,s)=2^{m}\sum_{k\in Z}\phi(2^{m}t-k)\phi(2^{m}s-k)\), ϕ is a scaling function, \(m=m(n)>0\) is an integer depending only on n, and \(A_{i}=[s_{i-1}, s_{i})\) are intervals that partition \([0,1]\) with \(t_{i}\in A_{i}\), \(i=1,2,\ldots,n\), and \(0\leq t_{1}\leq t_{2}\leq\cdots \leq t_{n}=1\). To estimate β, we minimize

$$ S(\beta)=\sum_{i=1}^{n} \bigl[y_{i}-x_{i}\beta-g_{n}(t_{i},\beta) \bigr]^{2}=\sum_{i=1}^{n} ( \tilde{y}_{i}-\tilde{x}_{i}\beta)^{2}. $$
(2.1)

The minimizer of (2.1) is found to be

$$ \hat{\beta}_{L}=\sum_{i=1}^{n} \tilde{x}_{i}\tilde{y}_{i}/S_{n}^{2}, $$
(2.2)

where \(\tilde{x}_{i}=x_{i}-\sum_{j=1}^{n}x_{j}\int_{A_{j}}E_{m}(t_{i},s)\,ds\), \(\tilde {y}_{i}=y_{i}-\sum_{j=1}^{n}y_{j}\int_{A_{j}}E_{m}(t_{i},s)\,ds\), and \(S_{n}^{2}=\sum_{i=1}^{n} \tilde{x}^{2}_{i}\). The estimator \(\hat{\beta}_{L}\) is called the LSE of β.
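To make the construction concrete, the following sketch (our illustration, not part of the paper) uses the Haar scaling function \(\phi=I_{[0,1)}\), for which \(E_{m}(t,s)=2^{m}\) when t and s lie in the same dyadic cell \([k2^{-m},(k+1)2^{-m})\) and \(E_{m}(t,s)=0\) otherwise. Haar is too rough to satisfy the smoothness part of assumption (A3) below, but it makes the weights \(\int_{A_{j}}E_{m}(t_{i},s)\,ds\) and the LSE (2.2) fully explicit; the function names are ours.

```python
import numpy as np

def haar_weights(t, m):
    """Weights w[i, j] = integral of E_m(t_i, s) over A_j for the Haar
    scaling function: E_m(t, s) = 2^m if t and s share the dyadic cell
    [k 2^-m, (k+1) 2^-m), else 0.  The intervals A_j are taken of equal
    length 1/n, a simplifying assumption of this sketch."""
    n = len(t)
    cell = np.minimum((t * 2**m).astype(int), 2**m - 1)
    same_cell = (cell[:, None] == cell[None, :]).astype(float)
    return same_cell * 2**m / n

def beta_lse(x, y, t, m):
    """LSE (2.2): least squares on the wavelet-detrended variables."""
    w = haar_weights(t, m)
    x_tilde, y_tilde = x - w @ x, y - w @ y
    return np.sum(x_tilde * y_tilde) / np.sum(x_tilde**2)

# Toy check with i.i.d. errors (a special case of NA) and beta = 2:
rng = np.random.default_rng(1)
n, m = 1000, 4                                  # 2^m/n = 0.016, about n^{-1/2}
t = (np.arange(1, n + 1) - 0.5) / n
x = np.sin(2 * np.pi * t) + rng.normal(size=n)  # x_i = h(t_i) + v_i
y = 2.0 * x + np.cos(np.pi * t) + 0.5 * rng.normal(size=n)
print(beta_lse(x, y, t, m))                     # should be close to 2
```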

When the errors are heteroscedastic, we consider two different cases according to f. If \(\sigma^{2}_{i}=f(u_{i})\) are known, then \(\hat{\beta }_{L}\) is modified to be the WLSE

$$ \hat{\beta}_{W}=\sum_{i=1}^{n} a_{i}\tilde{x}_{i}\tilde {y}_{i}/T_{n}^{2}, $$
(2.3)

where \(a_{i}=\sigma_{i}^{-2}=1/f(u_{i})\) and \(T_{n}^{2}=\sum_{i=1}^{n}a_{i}\tilde {x}^{2}_{i}\). In practice, f is unknown and must be estimated. When \(Ee_{i}^{2}=1\), we have \(E(y_{i}-x_{i}\beta-g(t_{i}))^{2}=f(u_{i})\). Hence the estimator of f can be defined by

$$ \hat{f}_{n}(u_{i})=\sum_{j=1}^{n} (\tilde{y}_{j}-\tilde{x}_{j}\hat {\beta}_{L})^{2} \int_{B_{j}}E_{m}(u_{i},s) \,ds, $$
(2.4)

where \(B_{i}=[s'_{i-1}, s'_{i})\) are intervals that partition \([0,1]\) with \(u_{i}\in B_{i}\), \(i=1,2,\ldots,n\), and \(0\leq u_{1}\leq u_{2}\leq\cdots \leq u_{n}=1\).

For convenience, we assume that \(\min_{1\leq i\leq n} |\hat {f}_{n}(u_{i})|>0\). Consequently, the WLSE of β is

$$ \tilde{\beta}_{W}=\sum_{i=1}^{n} a_{ni}\tilde{x}_{i}\tilde {y}_{i}/W_{n}^{2}, $$
(2.5)

where \(a_{ni}=1/\hat{f}_{n}(u_{i})\), \(1\leq i\leq n\), and \(W_{n}^{2}=\sum_{i=1}^{n}a_{ni}\tilde{x}^{2}_{i}\).

We define the plug-in estimators of the nonparametric component g corresponding to \(\hat{\beta}_{L}\), \(\hat{\beta}_{W}\), and \(\tilde{\beta }_{W}\), respectively, by

$$\hat{g}_{L}(t)=\sum_{i=1}^{n} (y_{i}-x_{i}\hat{\beta}_{L}) \int_{A_{i}}E_{m}(t,s)\, ds,\quad\quad \hat{g}_{W}(t)=\sum_{i=1}^{n} (y_{i}-x_{i}\hat{\beta}_{W}) \int_{A_{i}}E_{m}(t,s)\,ds, $$

and

$$\tilde{g}_{W}(t)=\sum_{i=1}^{n} (y_{i}-x_{i}\tilde{\beta}_{W}) \int _{A_{i}}E_{m}(t,s)\,ds. $$
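Continuing the sketch above (again our illustration, under the same simplifying assumptions and with hypothetical function names), the two-stage procedure (2.4)–(2.5) and the plug-in estimation of g amount to:

```python
import numpy as np  # reuses haar_weights and beta_lse from the sketch above

def beta_wlse(x, y, t, u, m):
    """Two-stage WLSE (2.4)-(2.5): estimate f from squared first-stage
    residuals, then reweight by a_ni = 1 / f_hat(u_i)."""
    w_t, w_u = haar_weights(t, m), haar_weights(u, m)
    x_tilde, y_tilde = x - w_t @ x, y - w_t @ y
    b1 = np.sum(x_tilde * y_tilde) / np.sum(x_tilde**2)  # first-stage LSE (2.2)
    f_hat = w_u @ (y_tilde - x_tilde * b1)**2            # variance estimate (2.4)
    a_ni = 1.0 / np.maximum(f_hat, 1e-8)                 # guard: f_hat assumed > 0
    return np.sum(a_ni * x_tilde * y_tilde) / np.sum(a_ni * x_tilde**2)

def g_plugin(beta_hat, x, y, t, m):
    """Plug-in wavelet estimator of g at the design points t_i."""
    return haar_weights(t, m) @ (y - x * beta_hat)
```

The floor on f_hat mirrors the convention \(\min_{1\leq i\leq n}|\hat{f}_{n}(u_{i})|>0\) adopted before (2.5).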

Now we list some basic assumptions, which will be used in Sect. 3.

(A0):

Let \(\{e_{i},i\geq1\}\) be a sequence of negatively associated random variables with \(Ee_{i}=0\) and \(\sup_{i\geq 1}E|e_{i}|^{2+\delta}<\infty\) for some \(\delta>0\).

(A1):

There exists a function h on \([0,1]\) such that \(x_{i}=h(t_{i})+v_{i}\), and

  1. (i)

    \(\lim_{n\rightarrow\infty}n^{-1}\sum_{i=1}^{n}v_{i}^{2}=\varSigma _{0}\) (\(0<\varSigma_{0}<\infty\));

  2. (ii)

    \(\max_{1\leq i\leq n}|v_{i}|=O(1)\);

  3. (iii)

    \(\limsup_{n\rightarrow\infty}(\sqrt{n}\log n)^{-1}\max_{1\leq m\leq n}|\sum_{i=1}^{m}v_{j_{i}}|<\infty\).

(A2):
  1. (i)

    f, g, and h satisfy the Lipschitz condition of order 1 on \([0,1]\), \(h\in H^{v}\), \(v>3/2\), where \(H^{v}\) is the Sobolev space of order v;

  2. (ii)

    \(0< m_{0}\leq\min_{1\leq i\leq n}f(u_{i})\leq\max_{1\leq i\leq n}f(u_{i})\leq M_{0}<\infty\).

(A3):

The scaling function ϕ is r-regular (r is a positive integer), satisfies the Lipschitz condition of order 1, and has compact support. Furthermore, \(|\hat{\phi}(\xi)-1|=O(\xi)\) as \(\xi\rightarrow0\), where ϕ̂ is the Fourier transform of ϕ.

(A4):
  1. (i)

    \(\max_{1\leq i\leq n}|s_{i}-s_{i-1}|=O(n^{-1})\); \(\max_{1\leq i\leq n}|s'_{i}-s'_{i-1}|=O(n^{-1})\);

  2. (ii)

    There exist \(d_{1}>0\) and \(d_{2}>0\) such that \(\min_{1\leq i\leq n}|t_{i}-t_{i-1}|\geq d_{1}/n\) and \(\min_{1\leq i\leq n}|u_{i}-u_{i-1}|\geq d_{2}/n\).

(A5):

The spectral density function \(\psi(\omega)\) of \(\{e_{i}\}\) satisfies \(0< C_{1}\leq\psi(\omega)\leq C_{2}<\infty\) for \(\omega\in(-\pi , \pi]\).

(A6):

There exist positive integers \(p:=p(n)\) and \(q:=q(n)\) with \(p+q\leq n\) and \(qp^{-1}\leq C<\infty\); set \(k:=k_{n}=\lfloor\frac{n}{p+q}\rfloor\).

Remark 2.1

Assumptions (A1), (A2), and (A5) are standard regularity conditions commonly used in the recent literature, such as Härdle et al. [5], Liang et al. [6], Liang and Fan [7], and Zhang and Liang [11]. Assumption (A3) is a general condition for wavelet estimators. Assumption (A6), based on Bernstein's blockwise idea, is a technical condition that is easily satisfied when p and q are chosen suitably in the proof of Theorem 3.3 (see, e.g., Liang and Qi [15], Liang and Li [27], and Li et al. [18, 20]).

Remark 2.2

It can be deduced from (A1)(i), (iii), (A2), (A3), and (A4) that

$$ \begin{gathered}n^{-1}\sum_{i=1}^{n} \tilde{x}_{i}^{2}\rightarrow\varSigma_{0};\qquad S_{n}^{-2}\sum_{i=1}^{n} \vert \tilde{x}_{i} \vert \leq C; \\ C_{3}\leq n^{-1}\sum_{i=1}^{n} \sigma_{i}^{-2}\tilde{x}_{i}^{2}\leq C_{4};\qquad T_{n}^{-2}\sum_{i=1}^{n} \bigl\vert \sigma_{i}^{-2}\tilde{x}_{i} \bigr\vert \leq C. \end{gathered} $$
(2.6)

Remark 2.3

Let \(\tilde{\kappa}(t_{i})=\kappa(t_{i})-\sum_{j=1}^{n}\kappa(t_{j})\int _{A_{j}}E_{m}(t_{i},s)\,ds\), where \(\kappa=f\), g, or h. Under assumptions (A2)(i), (A3), and (A4), by Theorem 3.2 in [16] it follows that

$$ \sup_{t} \bigl\vert \tilde{\kappa}(t) \bigr\vert =O \bigl(n^{-1}+2^{-m}\bigr). $$
(2.7)

Remark 2.4

By (A1)(ii), (2.7), and Lemma A.6 in the Appendix it is easy to obtain that

$$ \max_{1\leq i\leq n} \vert \tilde{x}_{i} \vert \leq\max _{1\leq i\leq n} \vert \tilde{h}_{i} \vert +\max _{1\leq i\leq n} \vert v_{i} \vert +\max _{1\leq j\leq n} \vert v_{j} \vert \max _{1\leq i\leq n}\sum_{j=1}^{n} \biggl\vert \int _{A_{j}}E_{m}(t_{i},s)\,ds \biggr\vert =O(1). $$
(2.8)

3 Notations and main results

To state our main results, we introduce the following notations. Set

$$\begin{gathered}\sigma^{2}_{1n}= \operatorname{Var} \Biggl(\sum_{i=1}^{n} \tilde {x}_{i}\sigma_{i}e_{i} \Biggr),\qquad \sigma^{2}_{2n}=\operatorname{Var} \Biggl(\sum _{i=1}^{n}\tilde{x}_{i}\sigma ^{-1}_{i}e_{i} \Biggr), \\ \varGamma^{2}_{n}(t)= \operatorname{Var} \Biggl(\sum_{i=1}^{n} \sigma_{i}e_{i} \int _{A_{i}}E_{m}(t,s)\,ds \Biggr); \qquad u(q)=\sup_{i\geq1}\sum_{j: \vert j-i \vert \geq q} \bigl\vert \operatorname{Cov}(e_{i}, e_{j}) \bigr\vert ,\\ \lambda _{1n}=qp^{-1}, \qquad\lambda_{2n}=pn^{-1},\qquad \lambda_{3n}=\bigl(2^{-m}+n^{-1} \bigr)^{2}, \qquad\lambda_{4n}=\frac{2^{m}}{n}\log^{2}n ; \\ \lambda_{5n}=\bigl(2^{-m}+n^{-1}\bigr)\log n+ \sqrt{n}\bigl(2^{-m}+n^{-1}\bigr)^{2}, \\ \gamma _{1n}=qp^{-1}2^{m}, \qquad\gamma_{2n}=pn^{-1}2^{m},\qquad\gamma_{3n}=2^{-m/2}+\sqrt{2^{m}/n}\log n;\\ \mu_{1n}=\sum_{i=1}^{3} \lambda_{in}^{1/3}+ 2\lambda_{4n}^{1/3}+ \lambda _{5n}; \qquad \mu_{2n}=\sum_{i=1}^{2} \gamma_{in}^{1/3}+\gamma_{3n}^{(2+\delta )/(3+\delta)}; \\ \upsilon_{1n}=\lambda_{2n}^{\delta/2}+u^{1/3}(q);\qquad \upsilon_{2n}= \gamma_{2n}^{\delta/2}+u^{1/3}(q). \end{gathered} $$

Theorem 3.1

Suppose that (A0), (A1)(i), and (A2)–(A5) hold. If \(2^{m}/n=O(n^{-1/2})\), then

$$ {(\mathrm{i})} \quad\hat{\beta}_{L}\rightarrow\beta\quad \textit{a.s.}; \qquad{(\mathrm{ii})}\quad \hat{\beta}_{W}\rightarrow\beta \quad \textit{a.s.} $$
(3.1)

In addition, if \(\max_{1\leq j\leq n}|\sum_{i=1}^{n}x_{i}\int _{A_{i}}E_{m}(t_{j},s)\,ds|=O(1)\), then

$$ {(\mathrm{i})}\quad \max_{1\leq i\leq n} \bigl\vert \hat {g}_{L}(t_{i})-g(t_{i}) \bigr\vert \rightarrow0 \quad\textit{a.s.}; \qquad {(\mathrm{ii})}\quad\max_{1\leq i\leq n} \bigl\vert \hat{g}_{W}(t_{i})-g(t_{i}) \bigr\vert \rightarrow 0 \quad\textit{a.s.} $$
(3.2)

Theorem 3.2

Assume that (A0), (A1)(i), and (A2)–(A5) are satisfied. If \(Ee^{2}_{i}=1\), \(\sup_{i}E|e_{i}|^{p}< \infty\) for some \(p>4\), and \(2^{m}/n=O(n^{-1/2})\), then

$$\begin{gathered} {(\mathrm{i})}\quad \vert \hat{\beta}_{L}-\beta \vert =o \bigl(n^{-1/4}\bigr);\qquad{(\mathrm{ii})}\quad\max_{1\leq i\leq n} \bigl\vert \hat {f}_{n}(u_{i})-f(u_{i}) \bigr\vert \rightarrow0 \quad\textit{a.s.};\\ {(\mathrm{iii})} \quad\tilde{ \beta}_{W}\rightarrow\beta\quad\textit{a.s.}\end{gathered} $$
(3.3)

In addition, if \(\max_{1\leq j\leq n}|\sum_{i=1}^{n}x_{i}\int _{A_{i}}E_{m}(t_{j},s)\,ds|=O(1)\), then

$$ \max_{1\leq i\leq n} \bigl\vert \tilde {g}_{W}(t_{i})-g(t_{i}) \bigr\vert \rightarrow0 \quad\textit{a.s.} $$
(3.4)

Remark 3.1

When random errors \(\{e_{i}\}\) are i.i.d. random variables with zero means, Chen et al. [3] proved (3.1) and (3.3) under similar conditions. Since independent random samples are a particular case of NA random samples, Theorems 3.1 and 3.2 extend the results of Chen et al. [3]. Back and Liang [10] also obtained (3.1)–(3.4) for the weighted estimators of unknown quantities under NA samples.

Theorem 3.3

Suppose that (A0)–(A6) are satisfied. If \(\mu_{1n}\rightarrow 0\) and \(\upsilon_{1n}\rightarrow0\), then we have

$$\begin{gathered} {(\mathrm{i})}\quad\sup_{y} \biggl\vert P \biggl( \frac{S_{n}^{2}(\hat {\beta}_{L}-\beta)}{\sigma_{1n}}\leq y \biggr)-\varPhi(y) \biggr\vert =O(\mu _{1n}+ \upsilon_{1n});\\ {(\mathrm{ii})}\quad\sup _{y} \biggl\vert P \biggl(\frac{T_{n}^{2}(\hat{\beta}_{W}-\beta )}{\sigma_{2n}}\leq y \biggr)- \varPhi(y) \biggr\vert =O(\mu_{1n}+\upsilon _{1n}).\end{gathered} $$
(3.5)

Corollary 3.1

Suppose that the assumptions of Theorem 3.3 are fulfilled. If \(2^{m}/n=O(n^{-\theta})\), \(u(n)=O(n^{-(1-\theta)/(2\theta-1)})\), and \(\frac{1}{2}<\theta\leq\frac{7}{10}\), then

$$\begin{gathered} {(\mathrm{i})}\quad\sup_{y} \biggl\vert P \biggl( \frac{S_{n}^{2}(\hat {\beta}_{L}-\beta)}{\sigma_{1n}}\leq y \biggr)-\varPhi(y) \biggr\vert =O\bigl(n^{\frac {\theta-1}{3}} \bigr);\\ {(\mathrm{ii})}\quad\sup_{y} \biggl\vert P \biggl(\frac{T_{n}^{2}(\hat{\beta}_{W}-\beta )}{\sigma_{2n}}\leq y \biggr)-\varPhi(y) \biggr\vert =O \bigl(n^{\frac{\theta -1}{3}}\bigr).\end{gathered} $$
(3.6)

Theorem 3.4

Under the assumptions of Theorem 3.3, if \(\mu_{2n}\rightarrow 0\) and \(\upsilon_{2n}\rightarrow0\), then for each \(t\in[0,1]\) we have

$$ \sup_{y} \biggl\vert P \biggl(\frac{\hat{g}(t)-E\hat {g}(t)}{\varGamma_{n}(t)}\leq y \biggr)-\varPhi(y) \biggr\vert =O(\mu _{2n}+\upsilon_{2n}), $$
(3.7)

where \(\hat{g}(t)=\hat{g}_{L}(t)\) or \(\hat{g}_{W}(t)\).

Corollary 3.2

Under the assumptions of Theorem 3.4, if

$$2^{m}/n=O\bigl(n^{-\theta }\bigr),\qquad u(n)=O\bigl(n^{-(\theta-\rho)/(2\rho-1)}\bigr),\quad \frac{1}{2}< \rho< \theta< 1,$$

then

$$ \sup_{y} \biggl\vert P \biggl(\frac{\hat{g}(t)-E\hat {g}(t)}{\varGamma_{n}(t)}\leq y \biggr)-\varPhi(y) \biggr\vert =O \bigl(n^{-\min \{\frac{\theta-\rho}{3},\frac{3(1-\theta)}{8} \}} \bigr). $$
(3.8)

4 Proofs of the main theorems

Proof of Theorem 3.1

We prove only (3.1)(ii), as the proof of (3.1)(i) is analogous.

Step 1. Set \(\tilde{g}_{i}=g(t_{i})-\sum_{j=1}^{n}g(t_{j})\int _{A_{j}}E_{m}(t_{i},s)\,ds\), \(\tilde{\varepsilon}_{i}=\varepsilon_{i}-\sum_{j=1}^{n}\varepsilon_{j}\int_{A_{j}}E_{m}(t_{i},s)\,ds\). It easily follows that

$$ \begin{aligned}[b]\hat{\beta}_{W}-\beta&=T_{n}^{-2} \Biggl[\sum_{i=1}^{n}a_{i}\tilde {x}_{i}\varepsilon_{i}- \sum_{i=1}^{n}a_{i} \tilde{x}_{i} \Biggl(\sum_{j=1}^{n} \varepsilon_{j} \int _{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)+\sum _{i=1}^{n}a_{i} \tilde{x}_{i}\tilde{g}_{i} \Biggr] \\ &:=A_{1n}-A_{2n}+A_{3n}. \end{aligned} $$
(4.1)

Firstly, we prove \(A_{1n}\rightarrow0\) a.s. Note that \(A_{1n}=\sum_{i=1}^{n} (T_{n}^{-2}a_{i}\tilde{x}_{i}\sigma_{i}) e_{i}=:\sum_{i=1}^{n} c_{ni}e_{i}\). It follows from (A1)(i), (A2)(ii), (2.6), and \(2^{m}/n=O(n^{-1/2})\) that

$$\begin{gathered} \max_{1\leq i\leq n} \vert c_{ni} \vert \leq\max _{1\leq i\leq n}\frac{ \vert a_{i}\tilde {x}_{i} \vert }{T_{n}}\cdot\max_{1\leq i\leq n} \frac{\sigma_{i}}{T_{n}}=O\bigl(n^{-1/2}\bigr),\\ \sum _{i=1}^{n}c^{2}_{ni}=T_{n}^{-4} \sum_{i=1}^{n}a_{i} \tilde{x}^{2}_{i}\cdot a_{i}\sigma ^{2}_{i}=O\bigl(n^{-1}\bigr).\end{gathered} $$

Hence, according to Lemma A.1, we have \(A_{1n}\rightarrow 0\) a.s. Obviously,

$$A_{2n}=\sum_{j=1}^{n} \Biggl( \sum_{i=1}^{n} T_{n}^{-2}a_{i} \tilde{x}_{i}\sigma_{j} \int _{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)e_{j}=: \sum_{j=1}^{n}d_{nj}e_{j}. $$

By (A2)(ii), Lemma A.6, (2.6), and \(2^{m}/n=O(n^{-1/2})\) we can obtain

$$\begin{gathered}\max_{1\leq j\leq n} \vert d_{nj} \vert \leq \Bigl(\max_{1\leq j\leq n} \sigma_{j} \Bigr) \biggl(\max_{1\leq i,j\leq n} \int _{A_{j}} \bigl\vert E_{m}(t_{i},s) \bigr\vert \,ds \biggr) \Biggl(T_{n}^{-2}\sum _{i=1}^{n} \vert a_{i}\tilde {x}_{i} \vert \Biggr)=O\bigl(n^{-1/2}\bigr), \\ \begin{aligned}\sum_{j=1}^{n}d^{2}_{nj}& \leq CT_{n}^{-2}\sum_{j=1}^{n} \sum_{i=1}^{n} \biggl( \int _{A_{j}}E_{m}(t_{i},s)\,ds \biggr)^{2} \Biggl(T_{n}^{-2}\sum _{i=1}^{n}a_{i}\tilde {x}^{2}_{i} \Biggr) \\ &\leq C T_{n}^{-2}\sum_{i=1}^{n} \sum_{j=1}^{n} \biggl( \int_{A_{j}}E_{m}(t_{i},s)\, ds \biggr)^{2}=O\bigl(2^{m}/n\bigr)=O\bigl(n^{-1/2} \bigr).\end{aligned} \end{gathered} $$

Therefore \(A_{2n}\rightarrow0\) a.s. by Lemma A.1. Clearly, from (2.6) and (2.7) we obtain

$$\vert A_{3n} \vert \leq\Bigl(\max_{1\leq i\leq n} \vert \tilde{g}_{i} \vert \Bigr)\cdot\Biggl(T_{n}^{-2} \sum_{i=1}^{n} \vert a_{i} \tilde{x}_{i} \vert \Biggr)=O\bigl(2^{-m}+n^{-1} \bigr)\rightarrow0. $$

Step 2. We prove (3.2)(i), as the proof of (3.2)(ii) is analogous. We can see that

$$\begin{gathered}\max_{1\leq i\leq n} \bigl\vert \hat{g}_{L}(t_{i})-g(t_{i}) \bigr\vert \\ \quad\leq\max_{1\leq i\leq n} \Biggl\{ \vert \beta-\hat{ \beta}_{L} \vert \cdot \Biggl\vert \sum _{j=1}^{n}x_{j} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr\vert + \vert \tilde {g}_{i} \vert + \Biggl\vert \sum _{j=1}^{n}\sigma_{j}e_{j} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr\vert \Biggr\} \\ \quad:=B_{1n}+B_{2n}+B_{3n}. \end{gathered} $$

By (3.1)(i), together with the additional assumption, it follows that \(B_{1n}\rightarrow0\) a.s. From (A2)(ii) and Lemma A.6, applying Lemma A.2, we obtain \(B_{3n}\rightarrow0\) a.s. Since \(B_{2n}=\max_{1\leq i\leq n}|\tilde{g}_{i}|\rightarrow0\) by (2.7), the proof of (3.2)(i) is complete. □

Proof of Theorem 3.2

Step 1. Firstly, we prove (3.3)(i). We have

$$ \begin{aligned}[b] \hat{\beta}_{L}-\beta&=S_{n}^{-2} \Biggl[\sum_{i=1}^{n} \tilde {x}_{i}\varepsilon_{i}- \sum _{i=1}^{n}\tilde{x}_{i} \Biggl(\sum _{j=1}^{n}\varepsilon_{j} \int _{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)+\sum _{i=1}^{n}\tilde{x}_{i} \tilde{g}_{i} \Biggr] \\ &:=C_{1n}-C_{2n}+C_{3n}. \end{aligned} $$
(4.2)

Note that

$$C_{1n}=\sum_{i=1}^{n}\bigl(S_{n}^{-2}\tilde{x}_{i}\sigma_{i}\bigr)e_{i}:=\sum_{i=1}^{n}c'_{ni}e_{i}$$

and

$$C_{2n}=\sum_{j=1}^{n} \Biggl(\sum_{i=1}^{n} S_{n}^{-2}\tilde{x}_{i}\sigma_{j}\int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)e_{j}:=\sum_{j=1}^{n} d'_{nj}e_{j}.$$

Similar to the proof of (3.1)(ii), we have

$$\begin{gathered}\max_{1\leq i\leq n} \bigl\vert c'_{ni} \bigr\vert \leq \biggl(\max _{1\leq i\leq n}\frac{ \vert \tilde{x}_{i} \vert }{S_{n}} \biggr)\cdot\frac{1}{S_{n}}=O \bigl(n^{-1/2}\bigr),\qquad \sum_{i=1}^{n}c^{\prime2}_{ni} \leq C\sum_{i=1}^{n}\tilde{x}_{i}^{2}/S_{n}^{4}=O \bigl(n^{-1}\bigr), \\ \max_{1\leq j\leq n} \bigl\vert d'_{nj} \bigr\vert \leq \Bigl(\max_{1\leq j\leq n}\sigma _{j} \Bigr) \biggl(\max_{1\leq i,j\leq n} \int_{A_{j}} \bigl\vert E_{m}(t_{i},s) \bigr\vert \,ds \biggr) \Biggl(S_{n}^{-2}\sum _{i=1}^{n} \vert \tilde{x}_{i} \vert \Biggr)=O\bigl(2^{m}/n\bigr)=O\bigl(n^{-1/2}\bigr), \\ \begin{aligned}\sum_{j=1}^{n}d^{\prime2}_{nj}& \leq CS_{n}^{-2}\sum_{j=1}^{n} \sum_{i=1}^{n} \biggl( \int _{A_{j}}E_{m}(t_{i},s)\,ds \biggr)^{2} \Biggl(S_{n}^{-2}\sum _{i=1}^{n}\tilde {x}^{2}_{i} \Biggr) \\ &\leq C S_{n}^{-2}\sum_{i=1}^{n} \sum_{j=1}^{n} \biggl( \int_{A_{j}}E_{m}(t_{i},s)\, ds \biggr)^{2}=O\bigl(2^{m}/n\bigr)=O\bigl(n^{-1/2} \bigr).\end{aligned} \end{gathered} $$

Therefore, applying Lemma A.1 and taking \(\alpha=4\), we obtain that \(C_{in}=o(n^{-1/4})\) a.s., \(i=1,2\). As for \(C_{3n}\), by \(2^{m}/n=O(n^{-1/2})\), (2.6), and (2.7) we easily see that

$$\vert C_{3n} \vert \leq \Bigl(\max_{1\leq i\leq n} \vert \tilde{g}_{i} \vert \Bigr)\cdot\Biggl(S_{n}^{-2} \sum_{i=1}^{n} \vert \tilde{x}_{i} \vert \Biggr)=O\bigl(2^{-m}+n^{-1}\bigr)=o \bigl(n^{-1/4}\bigr). $$

Step 2. We prove (3.3)(ii). Noting that \(\hat {f}_{n}(u)=\sum_{i=1}^{n} [\tilde{x}_{i}(\beta-\hat{\beta}_{L})+\tilde {g}_{i}+\tilde{\varepsilon}_{i} ]^{2}\int_{B_{i}}E_{m}(u,s)\,ds\), we can see that

$$\begin{aligned}& \max_{1\leq j\leq n} \bigl\vert \hat{f}_{n}(u_{j})-f(u_{j}) \bigr\vert \\& \quad\leq\max _{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n} \varepsilon^{2}_{i} \int _{B_{i}}E_{m}(u_{j},s) \,ds-f(u_{j}) \Biggr\vert \\& \qquad{}+2\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{n}\varepsilon_{i} \int _{B_{i}}E_{m}(u_{j},s)\,ds \Biggl(\sum _{k=1}^{n}\varepsilon_{k} \int _{A_{k}}E_{m}(t_{i},s)\,ds \Biggr) \Biggr\vert \\& \qquad{}+\max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n} \int_{B_{i}}E_{m}(u_{j},s)\,ds \Biggl(\sum _{k=1}^{n}\varepsilon_{k} \int_{A_{k}}E_{m}(t_{i},s)\,ds \Biggr)^{2} \Biggr\vert \\& \qquad{}+2\max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}\tilde{\varepsilon}_{i} \tilde {g}_{i} \int_{B_{i}}E_{m}(u_{j},s)\,ds \Biggr\vert \\& \qquad{}+2 \vert \beta-\hat{\beta}_{L} \vert \max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}\tilde {x}_{i}\tilde{\varepsilon}_{i} \int_{B_{i}}E_{m}(u_{j},s)\,ds \Biggr\vert +(\beta-\hat{\beta}_{L})^{2}\max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}\tilde {x}_{i}^{2} \int_{B_{i}}E_{m}(u_{j},s)\,ds \Biggr\vert \\& \qquad{}+2 \vert \beta-\hat{\beta}_{L} \vert \max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}\tilde {x}_{i}\tilde{g}_{i} \int_{B_{i}}E_{m}(u_{j},s)\,ds \Biggr\vert + \max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n} \tilde{g}^{2}_{i} \int _{B_{i}}E_{m}(u_{j},s)\,ds \Biggr\vert \\& \quad:=\sum_{i=1}^{8}D_{in}. \end{aligned}$$

As for \(D_{1n}\), we have

$$\begin{aligned}D_{1n}\leq{}&\max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}f(u_{i}) \bigl(e_{i}^{2}-1\bigr) \int_{B_{i}}E_{m}(u_{j},s)\,ds \Biggr\vert \\ &+\max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}f(u_{i}) \int_{B_{i}}E_{m}(u_{j},s)\, ds-f(u_{j}) \Biggr\vert \\ =:{}&D_{11n}+D_{12n}. \end{aligned} $$

Note that \(Ee_{i}^{2}=1\), so \(e_{i}^{2}-1=[(e^{+}_{i})^{2}-E(e^{+}_{i})^{2}]+[(e^{-}_{i})^{2}-E(e^{-}_{i})^{2}]:=\xi_{i_{1}}+\xi _{i_{2}}\), and

$$D_{11n}\leq\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{n} \biggl(f(u_{i}) \int _{B_{i}}E_{m}(u_{j},s)\,ds \biggr) \xi_{i_{1}} \Biggr\vert +\max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n} \biggl(f(u_{i}) \int_{B_{i}}E_{m}(u_{j},s)\,ds \biggr) \xi_{i_{2}} \Biggr\vert . $$

Since \(\{\xi_{i_{1}}, i\geq1\}\) and \(\{\xi_{i_{2}}, i\geq1\}\) are NA random variables with zero means, \(\sup_{i}E|\xi_{i_{j}}|^{p/2}\leq C\sup_{i}E|e_{i}|^{p}<\infty\), \(j=1,2\). By (A2)(ii) and Lemma A.6 we have

$$\max_{1\leq i,j\leq n} \biggl\vert f(u_{i}) \int_{B_{i}}E_{m}(u_{j},s)\,ds \biggr\vert =O\bigl(2^{m}/n\bigr)=O\bigl(n^{-1/2}\bigr) $$

and

$$\max_{1\leq j\leq n}\sum_{i=1}^{n} \biggl\vert f(u_{i}) \int_{B_{i}}E_{m}(u_{j},s)\, ds \biggr\vert =O(1). $$

Therefore \(D_{11n}\rightarrow0\) a.s. by Lemma A.2. By (2.7) we have \(|D_{12n}|=O(2^{-m}+n^{-1})\), so \(D_{1n}\rightarrow0\) a.s.

Note that

$$\vert D_{2n} \vert \leq2 \Biggl(\max_{1\leq i\leq n} \Biggl\vert \sum_{j=1}^{n}\sigma _{j}e_{j} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr\vert \Biggr) \Biggl(\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{n}\sigma_{i}e_{i} \int_{B_{i}}E_{m}(u_{j},s)\,ds \Biggr\vert \Biggr). $$

Similar to the proof of \(D_{11n}\), we obtain \(D_{2n}\rightarrow0\) a.s. by Lemma A.2.

Applying the assumptions, from (2.6), (2.7), (3.3)(i), Lemma A.2, and Lemma A.6 it follows that

$$\begin{aligned}& \vert D_{3n} \vert \leq\max _{1\leq i\leq n} \Biggl(\sum_{j=1}^{n} \sigma _{j}e_{j} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)^{2}\max_{1\leq j\leq n}\sum _{i=1}^{n} \biggl\vert \int_{B_{i}}E_{m}(u_{j},s)\,ds \biggr\vert \rightarrow0 \quad\mbox{a.s.}, \\& \begin{aligned} \vert D_{4n} \vert \leq{}&2\Bigl(\max_{1\leq i\leq n} \vert \tilde{g}_{i} \vert \Bigr)\cdot \Biggl(\max _{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}{ \varepsilon}_{i} \int_{B_{i}}E_{m}(u_{j},s)\, ds \Biggr\vert \Biggr) +2\Bigl(\max_{1\leq i\leq n} \vert \tilde{g}_{i} \vert \Bigr) \\ &\cdot \Biggl(\max_{1\leq i\leq n} \Biggl\vert \sum _{k=1}^{n}\varepsilon_{k} \int _{A_{k}}E_{m}(t_{i},s)\,ds \Biggr\vert \Biggr) \Biggl(\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{n} \int_{B_{i}}E_{m}(u_{j},s)\, ds \Biggr\vert \Biggr)\\ \rightarrow{}&0 \quad\mbox{a.s.},\end{aligned} \\& \vert D_{6n} \vert \leq(\beta-\hat{\beta}_{L})^{2} \cdot \biggl(\max_{1\leq i,j\leq n} \int_{B_{i}} \bigl\vert E_{m}(u_{j},s) \bigr\vert \,ds \biggr) \Biggl(\sum_{i=1}^{n} \tilde{x}_{i}^{2} \Biggr)=o(1) \quad\mbox{a.s.}, \\& \vert D_{7n} \vert \leq2 \vert \beta-\hat{ \beta}_{L} \vert \cdot\Bigl(\max_{1\leq i\leq n} \vert \tilde {g}_{i} \vert \Bigr)\cdot \biggl(\max_{1\leq i,j\leq n} \int_{B_{i}} \bigl\vert E_{m}(u_{j},s) \bigr\vert \,ds \biggr) \Biggl(\sum_{i=1}^{n} \vert \tilde{x}_{i} \vert \Biggr)\rightarrow0 \quad\mbox{a.s.}, \\& \vert D_{8n} \vert \leq\Bigl(\max_{1\leq i\leq n} \vert \tilde{g}_{i} \vert \Bigr)^{2}\cdot \Biggl(\max _{1\leq j\leq n}\sum_{i=1}^{n} \biggl\vert \int_{B_{i}}E_{m}(u_{j},s)\,ds \biggr\vert \Biggr)\rightarrow0, \\& \vert D_{5n} \vert \leq2 \vert \beta-\hat{ \beta}_{L} \vert \cdot \Biggl(\max_{1\leq j\leq n}\sqrt { \Biggl(\sum_{i=1}^{n} \tilde{x}^{2}_{i} \Biggr)\sum_{i=1}^{n} \tilde {\varepsilon}^{2}_{i} \biggl( \int_{B_{i}}E_{m}(u_{j},s)\,ds \biggr)^{2}} \Biggr). \end{aligned}$$

To prove that \(D_{5n}\rightarrow0\) a.s., it suffices to show that

$$ \max_{1\leq j\leq n}\sum_{i=1}^{n} \tilde{\varepsilon }^{2}_{i} \biggl( \int_{B_{i}}E_{m}(u_{j},s)\,ds \biggr)^{2}=O\bigl(n^{-1/2}\bigr) \quad\mbox{a.s.} $$
(4.3)

As for (4.3), we can split

$$\begin{gathered}\max_{1\leq j\leq n}\sum _{i=1}^{n}\tilde{\varepsilon }^{2}_{i} \biggl( \int_{B_{i}}E_{m}(u_{j},s)\,ds \biggr)^{2} \\\quad\leq\max_{1\leq j\leq n}\sum _{i=1}^{n}{\varepsilon}^{2}_{i} \biggl( \int _{B_{i}}E_{m}(u_{j},s)\,ds \biggr)^{2} \\ \qquad{}+2\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{n}{\varepsilon}_{i} \biggl( \int _{B_{i}}E_{m}(u_{j},s)\,ds \biggr)^{2} \Biggl(\sum_{k=1}^{n}{ \varepsilon}_{k} \int _{A_{k}}E_{m}(t_{i},s)\,ds \Biggr) \Biggr\vert \\ \qquad{}+ \Biggl(\max_{1\leq j\leq n}\sum_{i=1}^{n} \biggl( \int_{B_{i}}E_{m}(u_{j},s)\, ds \biggr)^{2} \Biggr)\cdot\max_{1\leq i\leq n} \Biggl(\sum_{k=1}^{n}{ \varepsilon}_{k} \int _{A_{k}}E_{m}(t_{i},s)\,ds \Biggr)^{2} \\ \quad:=D_{51n}+D_{52n}+D_{53n}. \end{gathered} $$

By Lemmas A.2 and A.6, since \(2^{m}/n=O(n^{-1/2})\), we have

$$\begin{aligned}& \begin{aligned} \vert D_{52n} \vert \leq{}&2 \Biggl(\max _{1\leq i\leq n} \Biggl\vert \sum_{k=1}^{n}{ \varepsilon}_{k} \int_{A_{k}}E_{m}(t_{i},s)\,ds \Biggr\vert \Biggr) \cdot \biggl(\max_{1\leq i,j\leq n} \int_{B_{i}} \bigl\vert E_{m}(u_{j},s) \bigr\vert \,ds \biggr) \\ & \cdot\Biggl(\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{n}{\varepsilon}_{i} \int _{B_{i}}E_{m}(u_{j},s)\,ds \Biggr\vert \Biggr)\\={}&o\bigl(n^{-1/2}\bigr) \quad\mbox{a.s.},\end{aligned} \\& \begin{aligned} \vert D_{53n} \vert \leq{}& \Biggl(\max_{1\leq i\leq n} \Biggl\vert \sum_{k=1}^{n}{\varepsilon }_{k} \int_{A_{k}}E_{m}(t_{i},s)\,ds \Biggr\vert ^{2} \Biggr)\cdot \biggl(\max_{1\leq i,j\leq n} \int_{B_{i}} \bigl\vert E_{m}(u_{j},s) \bigr\vert \,ds \biggr) \\ & \cdot\Biggl(\max_{1\leq j\leq n}\sum_{i=1}^{n} \biggl\vert \int_{B_{i}}E_{m}(u_{j},s)\, ds \biggr\vert \Biggr)\\ ={}&o\bigl(n^{-1/2}\bigr) \quad\mbox{a.s.} \end{aligned} \end{aligned}$$

Note that

$$\begin{aligned} \vert D_{51n} \vert \leq{}&\max _{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n} \sigma^{2}_{i}\xi_{i_{1}} \biggl( \int_{B_{i}}E_{m}(u_{j},s)\,ds \biggr)^{2} \Biggr\vert + \max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}\sigma^{2}_{i} \xi_{i_{2}} \biggl( \int _{B_{i}}E_{m}(u_{j},s)\,ds \biggr)^{2} \Biggr\vert \\ &+\max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n} \sigma^{2}_{i}Ee^{2}_{i} \biggl( \int _{B_{i}}E_{m}(u_{j},s)\,ds \biggr)^{2} \Biggr\vert \\:={}& D'_{51n}+ D''_{51n}+ D'''_{51n}, \end{aligned} $$

where \(\xi_{i_{1}}=[(e^{+}_{i})^{2}-E(e^{+}_{i})^{2}]\) and \(\xi _{i_{2}}=[(e^{-}_{i})^{2}-E(e^{-}_{i})^{2}]\) are NA random variables with zero means. Similar to the proof of \(D_{11n}\), we obtain that \(D'_{51n}=o(n^{-1/2})\) a.s. and \(D''_{51n}=o(n^{-1/2}) \) a.s. by Lemma A.2. On the other hand,

$$\bigl\vert D'''_{51n} \bigr\vert \leq \biggl(\max_{1\leq i,j\leq n}\sigma^{2}_{i} \int _{B_{i}} \bigl\vert E_{m}(u_{j},s) \bigr\vert \,ds \biggr) \Biggl(\max_{1\leq j\leq n}\sum _{i=1}^{n} \biggl\vert \int_{B_{i}}E_{m}(u_{j},s)\, ds \biggr\vert \Biggr)=O\bigl(n^{-1/2}\bigr), $$

and thus we have proved that \(D_{51n}=O(n^{-1/2})\) a.s. This completes the proof of (3.3)(ii).

Step 3. Next, we prove (3.3)(iii) by means of (A2)(ii) and (3.3)(ii). When n is large enough, it easily follows that

$$ 0< m'_{0}\leq\min_{1\leq i\leq n} \hat{f}_{n}(u_{i})\leq \max_{1\leq i\leq n} \hat{f}_{n}(u_{i})\leq M'_{0}< \infty, $$
(4.4)

\(C_{5}\leq W_{n}^{2}/n\leq C_{6}\), and \(W^{-2}_{n}\sum_{i=1}^{n}|a_{ni}\tilde {x}_{i}|\leq C\). Hence we have

$$ \vert \tilde{\beta}_{W}-\beta \vert \leq\frac{n}{W_{n}^{2}} \Biggl\vert \frac {1}{n}\sum_{i=1}^{n} a_{ni}\tilde{x}_{i}\tilde{\varepsilon}_{i} \Biggr\vert +\frac {1}{W_{n}^{2}} \Biggl\vert \sum_{i=1}^{n} a_{ni}\tilde{x}_{i}\tilde{g}_{i} \Biggr\vert =: \frac {n}{W_{n}^{2}}E_{1n}+E_{2n}. $$
(4.5)

Together with (2.7) and (4.5), we get

$$\begin{gathered} \vert E_{2n} \vert \leq \Bigl(\max _{1\leq i\leq n} \vert \tilde {g}_{i} \vert \Bigr)\cdot \Biggl(W_{n}^{-2}\sum_{i=1}^{n} \vert a_{ni}\tilde{x}_{i} \vert \Biggr)\rightarrow0, \\ \begin{aligned} \vert E_{1n} \vert \leq{}& \Biggl\vert \frac{1}{n}\sum _{i=1}^{n} (a_{ni}-a_{i}) \tilde {x}_{i}\tilde{\varepsilon}_{i} \Biggr\vert + \Biggl\vert \frac{1}{n}\sum_{i=1}^{n} a_{i}\tilde{x}_{i}\tilde{\varepsilon}_{i} \Biggr\vert \\ \leq{}& \Biggl\vert \frac{1}{n}\sum_{i=1}^{n} (a_{ni}-a_{i})\tilde{x}_{i}{\varepsilon }_{i} \Biggr\vert + \Biggl\vert \frac{1}{n}\sum _{i=1}^{n} (a_{ni}-a_{i}) \tilde{x}_{i} \Biggl( \sum_{j=1}^{n} \varepsilon_{j} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr) \Biggr\vert \\ &+ \Biggl\vert \frac{1}{n}\sum_{i=1}^{n} a_{i}\tilde{x}_{i}\tilde{\varepsilon }_{i} \Biggr\vert \\ =:{}&E_{11n}+E_{12n}+E_{13n}.\end{aligned} \end{gathered} $$

We know from Lemma A.2 that

$$ \begin{aligned}[b]n^{-1}\sum_{i=1}^{n}e_{i}^{2}&=n^{-1} \sum_{i=1}^{n} \bigl[\bigl(e^{+}_{i} \bigr)^{2}-E\bigl(e^{+}_{i}\bigr)^{2} \bigr]+ n^{-1}\sum_{i=1}^{n} \bigl[ \bigl(e^{-}_{i}\bigr)^{2}-E\bigl(e^{-}_{i} \bigr)^{2} \bigr]+ n^{-1}\sum_{i=1}^{n}Ee^{2}_{i} \\ &=O(1)\quad \mbox{a.s.} \end{aligned} $$
(4.6)

Applying Lemma A.2 and combining (4.4)–(4.6) with (3.3)(ii), we obtain

$$\begin{gathered} \vert E_{11n} \vert \leq \biggl(\max _{1\leq i\leq n}\frac{ \vert \hat {f}_{n}(u_{i})-f(u_{i}) \vert }{\hat{f}_{n}(u_{i})f(u_{i})} \biggr) \sqrt{ \Biggl(\frac{1}{n}\sum_{i=1}^{n} \tilde{x}^{2}_{i} \Biggr) \Biggl(\frac {1}{n}\sum _{i=1}^{n}{e}^{2}_{i} \Biggr)}\rightarrow0 \quad\mbox{a.s.}, \\ \begin{aligned} \vert E_{12n} \vert &\leq \biggl(\max_{1\leq i\leq n} \frac{ \vert \hat {f}_{n}(u_{i})-f(u_{i}) \vert }{\hat{f}_{n}(u_{i})f(u_{i})} \biggr)\cdot \Biggl(\frac{1}{n}\sum _{i=1}^{n} \vert \tilde{x}_{i} \vert \Biggr)\cdot \Biggl(\max_{1\leq i\leq n} \Biggl\vert \sum _{j=1}^{n}\varepsilon_{j} \int _{A_{j}}E_{m}(t_{i},s)\,ds \Biggr\vert \Biggr)\\&\rightarrow0\quad \mbox{a.s.} \end{aligned}\end{gathered} $$

As for \(E_{13n}\), we have \(E_{13n}=\frac{T_{n}^{2}}{n} \vert A_{1n}-A_{2n} \vert \rightarrow0\) a.s., and therefore \(E_{1n}\rightarrow0\) a.s. □

The proof of (3.4) is similar to that of (3.2)(i), and hence we omit it.

Proof of Theorem 3.3

We prove only (3.5)(i), as the proof of (3.5)(ii) is analogous. From the definition of \(\hat{\beta}_{L}\) we have

$$ \begin{aligned}[b]S_{n}^{2}(\hat{ \beta}_{L}-\beta)&=\sum_{i=1}^{n} \tilde{x}_{i}\sigma_{i}e_{i}- \sum _{i=1}^{n}\tilde{x}_{i} \Biggl(\sum _{j=1}^{n}\sigma_{j}e_{j} \int _{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)+\sum _{i=1}^{n}\tilde{x}_{i} \tilde{g}_{i} \\ &:=L_{1n}-L_{2n}+L_{3n}. \end{aligned} $$
(4.7)

Setting \(Z_{ni}=\frac{\tilde{x}_{i}\sigma_{i}e_{i}}{\sigma_{1n}}\), we employ Bernstein’s big-block and small-block procedure. Let \(y_{nm}=\sum_{i=k_{m}}^{k_{m}+p-1}Z_{ni}\), \(y'_{nm}=\sum_{i=l_{m}}^{l_{m}+q-1}Z_{ni}\), \(y'_{nk+1}=\sum_{i=k(p+q)+1}^{n}Z_{ni}\), \(k_{m}=(m-1)(p+q)+1\), \(l_{m}= (m-1)(p+q)+p+1\), \(m=1,2,\ldots, k\). Then

$$\sigma^{-1}_{1n}L_{1n}:=\tilde{L}_{1n}= \sum_{i=1}^{n} Z_{ni}=\sum _{m=1}^{k}y_{nm}+\sum _{m=1}^{k}y'_{nm}+y'_{nk+1}=:L_{11n}+L_{12n} +L_{13n}. $$
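For readers less familiar with Bernstein's blocking device, the following small sketch (ours) records the index bookkeeping: the indices \(1,\ldots,n\) are cut into k alternating big blocks of length p (giving the \(y_{nm}\)) and small blocks of length q (giving the \(y'_{nm}\)), plus one remainder block (giving \(y'_{nk+1}\)):

```python
def bernstein_blocks(n, p, q):
    """Bernstein big-block/small-block partition of {1, ..., n}, returned as
    0-based index ranges: k = n // (p + q) big blocks of length p, k small
    blocks of length q, and one remainder block."""
    k = n // (p + q)
    big = [range((m - 1) * (p + q), (m - 1) * (p + q) + p) for m in range(1, k + 1)]
    small = [range((m - 1) * (p + q) + p, m * (p + q)) for m in range(1, k + 1)]
    remainder = range(k * (p + q), n)
    return big, small, remainder

# Example: n = 1000 with p = floor(n^0.6) = 63 and q = floor(n^0.2) = 3
# (theta = 0.6, as allowed in Corollary 3.1): 15 big blocks, 15 small
# blocks, and 10 leftover indices.
big, small, remainder = bernstein_blocks(1000, 63, 3)
```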

We observe that

$$\begin{aligned}L_{2n}&=\sum_{i=1}^{n} \Biggl[\tilde{h}_{i}+v_{i}- \Biggl(\sum _{k=1}^{n}v_{k} \int_{A_{k}}E_{m}(t_{i},s)\,ds \Biggr) \Biggr]\cdot \Biggl(\sum_{j=1}^{n} \sigma_{j}e_{j} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr) \\ &=\sum_{i=1}^{n} \tilde{h}_{i} \Biggl(\sum_{j=1}^{n}\sigma_{j}e_{j} \int _{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)+\sum _{i=1}^{n}v_{i} \Biggl(\sum _{j=1}^{n}\sigma _{j}e_{j} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr) \\ &\quad-\sum_{i=1}^{n} \Biggl(\sum _{k=1}^{n}v_{k} \int_{A_{k}}E_{m}(t_{i},s)\,ds \Biggr) \Biggl(\sum_{j=1}^{n}\sigma_{j}e_{j} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr) \\ &:=L_{21n}+L_{22n}-L_{23n}. \end{aligned} $$

So we can write

$$\frac{S_{n}^{2}(\hat{\beta}_{L}-\beta)}{\sigma_{1n}}=L_{11n}+L_{12n} +L_{13n}+ \sigma_{1n}^{-1}(L_{21n}+L_{22n}-L_{23n}+L_{3n}). $$

By applying Lemma A.4 we have

$$ \begin{aligned}[b]&\sup_{y} \biggl\vert P \biggl( \frac{S_{n}^{2}(\hat{\beta}_{L}-\beta )}{\sigma_{1n}}\leq y \biggr)-\varPhi(y) \biggr\vert \\ &\quad\leq\sup_{y} \bigl\vert P(L_{11n}\leq y)- \varPhi(y) \bigr\vert +P\bigl( \vert L_{12n} \vert > \lambda_{1n}^{1/3}\bigr) +P\bigl( \vert L_{13n} \vert >\lambda_{2n}^{1/3}\bigr) \\ &\qquad{}+P\bigl(\sigma_{1n}^{-1} \vert L_{21n} \vert >\lambda_{3n}^{1/3}\bigr) +P\bigl(\sigma_{1n}^{-1} \vert L_{22n} \vert >\lambda_{4n}^{1/3}\bigr)+P \bigl(\sigma _{1n}^{-1} \vert L_{23n} \vert > \lambda_{4n}^{1/3}\bigr) \\ &\qquad{}+\frac{1}{\sqrt{2\pi}} \Biggl(\sum_{k=1}^{3} \lambda_{kn}^{1/3} +2\lambda_{4n}^{1/3} \Biggr) +\frac{1}{\sqrt{2\pi}}\sigma_{1n}^{-1} \vert L_{3n} \vert \\ &\quad =\sum_{k=1}^{8}I_{kn}. \end{aligned} $$
(4.8)

Therefore, to prove (3.5)(i), it suffices to show that \(\sum_{k=2}^{8}I_{kn}=O(\mu_{1n})\) and \(I_{1n}=O(\upsilon_{1n}+\lambda ^{1/2}_{1n} +\lambda^{1/2}_{2n})\). Here we need the following Abel inequality (see Härdle et al. [5]). Let \(A_{1},\ldots,A_{n}\) and \(B_{1},\ldots, B_{n}\) (\(B_{1}\geq B_{2}\geq\cdots\geq B_{n}\geq0\)) be two sequences of real numbers, and let \(S_{k}=\sum_{i=1}^{k}A_{i}\), \(M_{1}=\min_{1\leq k\leq n}S_{k}\), and \(M_{2}=\max_{1\leq k\leq n}S_{k}\). Then

$$ B_{1}M_{1}\leq\sum_{k=1}^{n}A_{k}B_{k} \leq B_{1}M_{2}. $$
(4.9)
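For completeness (this short justification is our addition), (4.9) follows from summation by parts: with \(S_{0}=0\) and \(B_{n+1}:=0\),

$$\sum_{k=1}^{n}A_{k}B_{k}=\sum_{k=1}^{n}(S_{k}-S_{k-1})B_{k}=\sum_{k=1}^{n}S_{k}(B_{k}-B_{k+1}), $$

and since \(B_{k}-B_{k+1}\geq0\) with \(\sum_{k=1}^{n}(B_{k}-B_{k+1})=B_{1}\), the right-hand side is a nonnegative combination of the \(S_{k}\) with total weight \(B_{1}\), hence lies between \(B_{1}M_{1}\) and \(B_{1}M_{2}\).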

Note that

$$\sigma_{1n}^{2}=E \Biggl(\sum _{i=1}^{n}\tilde{x}_{i} \sigma_{i}e_{i} \Biggr)^{2}= \int _{-\pi}^{\pi}\psi(\omega) \Biggl\vert \sum _{k=1}^{n}\tilde{x}_{k}\sigma _{k}e^{-ik\omega} \Biggr\vert ^{2}\,d\omega $$

and

$$\varGamma_{n}^{2}(t)=E \Biggl(\sum _{i=1}^{n}\sigma_{i}e_{i} \int_{A_{i}}E_{m}(t,s)\, ds \Biggr)^{2}= \int_{-\pi}^{\pi}\psi(\omega) \Biggl\vert \sum _{k=1}^{n}\sigma_{k} \int_{A_{k}}E_{m}(t,s)\,dse^{-ik\omega} \Biggr\vert ^{2}\,d\omega. $$

By (A2)(ii), (A5), (2.6), and Lemma A.6 it follows that

$$\begin{aligned}& C_{7} n\leq C_{1} \sum_{i=1}^{n} \tilde{x}_{i}^{2}\leq\sigma _{1n}^{2} \leq C_{2} \sum_{i=1}^{n} \tilde{x}_{i}^{2}\leq C_{8} n, \end{aligned}$$
(4.10)
$$\begin{aligned}& C_{9} \sum_{i=1}^{n} \biggl( \int_{A_{i}}E_{m}(t,s)\,ds \biggr)^{2}\leq \varGamma_{n}^{2}(t)\leq C_{10}\sum _{i=1}^{n} \biggl( \int _{A_{i}}E_{m}(t,s)\,ds \biggr)^{2}=O \bigl(2^{m}/n\bigr). \end{aligned}$$
(4.11)

Step 1. We first prove \(\sum_{k=2}^{8}I_{kn}=O(\mu_{1n})\). Using Lemma A.3, (2.7), and (2.8), from (A0), (A1)–(A6), (4.10), and (4.11) it follows that

$$\begin{gathered}I_{2n}\leq\frac{E \vert L_{12n} \vert ^{2}}{\lambda_{1n}^{2/3}} \leq \frac{C}{n\lambda_{1n}^{2/3}}\sum_{m=1}^{k}\sum _{i=l_{m}}^{l_{m}+q-1}\tilde{x}_{i}^{2} \sigma^{2}_{i}\leq\frac{Ckq}{n\lambda _{1n}^{2/3}}\leq C \lambda_{1n}^{1/3}, \\ I_{3n}\leq\frac{E \vert L_{13n} \vert ^{2}}{\lambda_{2n}^{2/3}} \leq\frac{C}{n\lambda_{2n}^{2/3}}\sum _{i=k(p+q)+1}^{n}\tilde {x}_{i}^{2} \sigma^{2}_{i}\leq\frac{Cp}{n\lambda_{2n}^{2/3}}\leq C\lambda _{2n}^{1/3}, \\ \begin{aligned}I_{4n}&\leq\frac{\sigma_{1n}^{-2}E \vert L_{21n} \vert ^{2}}{\lambda_{3n}^{2/3}} \leq\frac{C}{n\lambda_{3n}^{2/3}}\sum _{j=1}^{n} \Biggl(\sum_{i=1}^{n} \tilde{h}_{i} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)^{2}\sigma^{2}_{j} \\ &\leq\frac{C}{n\lambda_{3n}^{2/3}}\Bigl(\max_{1\leq i\leq n} \vert \tilde{h}_{i} \vert ^{2}\Bigr) \cdot\sum _{j=1}^{n} \Biggl(\max_{1\leq j\leq n}\sum _{i=1}^{n} \int _{A_{j}} \bigl\vert E_{m}(t_{i},s) \bigr\vert \,ds \Biggr)^{2} \\ &\leq\frac{C(2^{-m}+n^{-1})^{2}}{\lambda_{3n}^{2/3}}\leq C\lambda _{3n}^{1/3},\end{aligned} \\ \begin{aligned}I_{5n}&\leq\frac{\sigma_{1n}^{-2}E \vert L_{22n} \vert ^{2}}{\lambda_{4n}^{2/3}} \leq\frac{C}{n\lambda_{4n}^{2/3}}\sum _{j=1}^{n} \Biggl(\sum_{i=1}^{n}v_{i} \int_{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)^{2}\sigma^{2}_{j} \\ &\leq\frac{C}{n\lambda_{4n}^{2/3}}\max_{1\leq i,j\leq n} \biggl( \int _{A_{j}} \bigl\vert E_{m}(t_{i},s) \bigr\vert \,ds \biggr)\cdot\max_{1\leq i\leq n}\sum _{j=1}^{n} \biggl\vert \int_{A_{j}} \bigl\vert E_{m}(t_{i},s) \bigr\vert \,ds \biggr\vert \cdot \Biggl(\max_{1\leq m\leq n} \Biggl\vert \sum_{i=1}^{m}v_{j_{i}} \Biggr\vert \Biggr)^{2} \\ &\leq\frac{C(2^{m}n^{-1}\log^{2}n)}{\lambda_{4n}^{2/3}}\leq C\lambda_{4n}^{1/3},\end{aligned} \\ \begin{aligned} I_{6n}&\leq\frac{\sigma_{1n}^{-2}E \vert L_{23n} \vert ^{2}}{\lambda _{4n}^{2/3}} \leq \frac{C}{n\lambda_{4n}^{2/3}}\sum_{j=1}^{n} \Biggl(\sum _{l=1}^{n}v_{l} \int_{A_{l}}E_{m}(t_{i},s)\,ds\cdot\sum _{i=1}^{n} \int _{A_{j}}E_{m}(t_{i},s)\,ds \Biggr)^{2}\sigma^{2}_{j} \\ &\leq\frac{C}{n\lambda_{4n}^{2/3}} \max_{1\leq i,j\leq n} \biggl( \int_{A_{j}} \bigl\vert E_{m}(t_{i},s) \bigr\vert \,ds \biggr)\cdot\max_{1\leq i\leq n}\sum _{j=1}^{n} \biggl\vert \int_{A_{j}}E_{m}(t_{i},s)\,ds \biggr\vert \\ &\quad\cdot \Biggl( \max_{1\leq l\leq n}\sum_{i=1}^{n} \biggl\vert \int_{A_{l}}E_{m}(t_{i},s)\, ds \biggr\vert \max_{1\leq m\leq n} \Biggl\vert \sum _{i=1}^{m}v_{j_{i}} \Biggr\vert \Biggr)^{2}\\ & \leq\frac{C(2^{m}n^{-1}\log^{2}n)}{\lambda_{4n}^{2/3}}\leq C\lambda_{4n}^{1/3}. \end{aligned}\end{gathered} $$

As for \(I_{8n}\), we have

$$\begin{aligned}I_{8n}&=\sigma_{1n}^{-1} \Biggl\vert \sum_{i=1}^{n}\tilde {x}_{i}\tilde{g}_{i} \Biggr\vert \leq\frac{C}{\sqrt{n}} \Biggl( \Biggl\vert \sum_{i=1}^{n}v_{i} \tilde{g}_{i} \Biggr\vert + \Biggl\vert \sum _{i=1}^{n}\tilde {h}_{i} \tilde{g}_{i} \Biggr\vert + \Biggl\vert \sum _{i=1}^{n}\tilde{g}_{i}\sum _{j=1}^{n}v_{j} \int _{A_{j}}E_{m}(t_{i},s)\,ds \Biggr\vert \Biggr) \\ &\leq\frac{C}{\sqrt{n}}\max_{1\leq i\leq n} \vert \tilde{g}_{i} \vert \max_{1\leq m\leq n} \Biggl\vert \sum _{i=1}^{m}v_{j_{i}} \Biggr\vert + \frac{C n}{\sqrt{n}}\max_{1\leq i\leq n} \vert \tilde{h}_{i} \vert \max_{1\leq i\leq n} \vert \tilde {g}_{i} \vert \\ &\quad+\frac{C}{\sqrt{n}}\max_{1\leq i\leq n} \vert \tilde{g}_{i} \vert \cdot \max_{1\leq j\leq n}\sum_{i=1}^{n} \int_{A_{j}} \bigl\vert E_{m}(t_{i},s) \bigr\vert \,ds\cdot \max_{1\leq m\leq n} \Biggl\vert \sum _{i=1}^{m}v_{j_{i}} \Biggr\vert \\ &\leq C\bigl(2^{-m}+n^{-1}\bigr) \bigl(\log n+\sqrt{n} \bigl(2^{-m}+n^{-1}\bigr)\bigr)=C\lambda_{5n}. \end{aligned} $$

Hence from the previous estimates we obtain that \(\sum_{k=2}^{8}I_{kn}=O(\mu_{1n})\).

Step 2. We verify \(I_{1n}=O(\lambda_{1n}^{1/2}+\lambda _{2n}^{1/2}+\upsilon_{1n})\). Let \(\{\eta_{nm}:m=1,2,\ldots,k\}\) be independent random variables with the same distributions as \(y_{nm}\), \(m=1,2,\ldots,k\). Set \(H_{n}=\sum_{m=1}^{k}\eta_{nm}\) and \(s_{n}^{2}=\sum_{m=1}^{k}\operatorname{Var}(y_{nm})\). Following the method of the proof of Theorem 2.1 in Liang and Li [27] and Li et al. [20], we easily see that

$$ \begin{aligned}[b]I_{1n}&=\sup_{y} \bigl\vert P(L_{11n}\leq y)-\varPhi(y) \bigr\vert \\ & \leq\sup _{y} \bigl\vert P(L_{11n}\leq y)-P(H_{n}\leq y) \bigr\vert \\&\quad+\sup_{y} \bigl\vert P(H_{n}\leq y)- \varPhi(y/s_{n}) \bigr\vert +\sup_{y} \bigl\vert \varPhi(y/s_{n})-\varPhi (y) \bigr\vert \\ &:=I_{11n}+I_{12n}+I_{13n}. \end{aligned} $$
(4.12)

(i) We evaluate \(s_{n}^{2}\). Noticing that \(s_{n}^{2}=E{L}^{2}_{11n}-2\sum_{1\leq i< j\leq k}\operatorname{Cov}(y_{ni},y_{nj})\) and \(E\tilde{L}_{1n}^{2}=1\), we can get

$$ \bigl\vert E(L_{11n})^{2}-1 \bigr\vert \leq C\bigl( \lambda_{1n}^{1/2}+\lambda _{2n}^{1/2} \bigr). $$
(4.13)

On the other hand, from (A1), (A2), (4.10), and (2.8) it follows that

$$ \begin{aligned}[b] \biggl\vert \sum_{1\leq i< j\leq k} \operatorname {Cov}(y_{ni},y_{nj}) \biggr\vert &\leq Cn^{-1} \sum_{i=1}^{k-1}\sum _{j=i+1}^{k}\sum _{s=k_{i}}^{k_{i}+p-1}\sum_{t=k_{j}}^{k_{j}+p-1} \vert \tilde{x}_{s}\tilde{x}_{t}\sigma_{s} \sigma_{t} \vert \cdot \bigl\vert \operatorname{Cov}(e_{s},e_{t}) \bigr\vert \\ &\leq Ckpn^{-1}u(q)\leq Cu(q). \end{aligned} $$
(4.14)

Thus, from (4.13) and (4.14) it follows that \(|s_{n}^{2}-1|\leq C(\lambda_{1n}^{1/2}+\lambda_{2n}^{1/2}+u(q))\).

(ii) Applying the Berry–Esséen inequality (see Petrov [28], Theorem 5.7), for \(\delta>0\), we get

$$ \sup_{y} \bigl\vert P(H_{n}/s_{n}\leq y)-\varPhi(y) \bigr\vert \leq C \sum_{m=1}^{k} \bigl(E \vert y_{nm} \vert ^{2+\delta}/s_{n}^{2+\delta} \bigr). $$
(4.15)

By Lemma A.3, from (A0), (A1), (A2), (4.10), and (2.8) we can deduce that

$$ \begin{aligned}[b]\sum_{m=1}^{k}E \vert y_{nm} \vert ^{2+\delta}&=\sum _{m=1}^{k}E \Biggl\vert \sum _{j=k_{m}}^{k_{m}+p-1} Z_{ni} \Biggr\vert ^{2+\delta} \\ &\leq C\sigma_{1n}^{-(2+\delta)}\sum_{m=1}^{k} \Biggl\{ \sum_{i=k_{m}}^{k_{m}+p-1} E \vert \tilde{x}_{i}\sigma_{i}e_{i} \vert ^{2+\delta}+ \Biggl[\sum_{i=k_{m}}^{k_{m}+p-1}E( \tilde{x}_{i}\sigma_{i}e_{i})^{2} \Biggr]^{1+\delta /2} \Biggr\} \\ &\leq C\bigl(kpn^{-1}\bigr) \bigl(n^{-\delta/2}+(p/n)^{\delta/2} \bigr)\leq C\lambda^{\delta/2}_{2n}. \end{aligned} $$
(4.16)

Since \(s_{n}\rightarrow1\) by (4.13) and (4.14), from (4.15) and (4.16) we easily see that \(I_{12n}\leq C\lambda^{\delta/2}_{2n}\). Note that \(I_{13n}=O(|s_{n}^{2}-1|)=O(\lambda_{1n}^{1/2}+\lambda_{2n}^{1/2}+u(q))\).

(iii) Next, we evaluate \(I_{11n}\). Let \(\varphi_{1}(t)\) and \(\varphi_{2}(t)\) be the characteristic functions of \(L_{11n}\) and \(H_{n}\), respectively. Then, applying the Esséen inequality (see Petrov [28], Theorem 5.3), for any \(T>0\), we have

$$ \begin{aligned}[b]&\sup_{t} \bigl\vert P(L_{11n}\leq t)-P(H_{n}\leq t) \bigr\vert \\&\quad\leq \int_{-T}^{T} \biggl\vert \frac {\varphi_{1}(t)-\varphi_{2}(t)}{t} \biggr\vert \,dt+T\sup_{t} \int_{ \vert u \vert \leq C/T} \bigl\vert P(H_{n}\leq u+t)-P(H_{n}\leq t) \bigr\vert \,du \\ &\quad :=I'_{11n}+I''_{11n}. \end{aligned} $$
(4.17)

From Lemma A.5 and (4.14) it follows that

$$\begin{aligned} \bigl\vert \varphi_{1}(t)- \varphi_{2}(t) \bigr\vert &= \Biggl\vert E\exp \Biggl(\mathrm {i}t \sum_{m=1}^{k} y_{nm} \Biggr)- \prod_{m=1}^{k} E\exp{(\mathrm{i}t y_{nm})} \Biggr\vert \\ &\leq4t^{2}\sum_{1\leq i< j\leq k}\sum _{s_{1}=k_{i}}^{k_{i}+p-1}\sum_{t_{1}=k_{j}}^{k_{j}+p-1} \bigl\vert \operatorname{Cov}(Z_{ns_{1}},Z_{nt_{1}}) \bigr\vert \\ &\leq4Ct^{2}u(q), \end{aligned} $$

which implies that

$$ I'_{11n}= \int_{-T}^{T} \biggl\vert \frac{\varphi_{1}(t)-\varphi _{2}(t)}{t} \biggr\vert \,dt\leq Cu(q)T^{2}. $$
(4.18)

Therefore by (4.15) and (4.16) we have

$$ \begin{aligned}[b]&\sup_{t} \bigl\vert P(H_{n}\leq t+u)-P(H_{n}\leq t) \bigr\vert \\&\quad\leq\sup _{t} \biggl\vert P \biggl(\frac{H_{n}}{s_{n}}\leq \frac{t+u}{s_{n}} \biggr)-\varPhi \biggl(\frac {t+u}{s_{n}} \biggr) \biggr\vert \\ &\qquad+\sup_{t} \biggl\vert P \biggl(\frac{H_{n}}{s_{n}}\leq \frac{t}{s_{n}} \biggr)-\varPhi \biggl(\frac{t}{s_{n}} \biggr) \biggr\vert +\sup_{t} \biggl\vert \varPhi \biggl(\frac{t+u}{s_{n}} \biggr)-\varPhi \biggl(\frac {t}{s_{n}} \biggr) \biggr\vert \\ &\quad\leq2\sup_{t} \biggl\vert P \biggl(\frac{H_{n}}{s_{n}} \leq t \biggr)-\varPhi(t) \biggr\vert +\sup_{t} \biggl\vert \varPhi \biggl(\frac{t+u}{s_{n}} \biggr)-\varPhi \biggl( \frac {t}{s_{n}} \biggr) \biggr\vert \\ &\quad\leq C \biggl(\lambda_{2n}^{\delta/2}+ \biggl\vert \frac{u}{s_{n}} \biggr\vert \biggr)\leq C \bigl(\lambda_{2n}^{\delta/2}+ \vert u \vert \bigr). \end{aligned} $$
(4.19)

From (4.19) it follows that

$$ I''_{11n}=T\sup_{t} \int_{ \vert u \vert \leq C/T} \bigl\vert P(H_{n}\leq t+u)-P(H_{n}\leq t) \bigr\vert \,du\leq C\bigl(\lambda_{2n}^{\delta/2}+1/T \bigr). $$
(4.20)

Combining (4.17) and (4.18) with (4.20) and choosing \(T=u^{-1/3}(q)\), we easily see that \(I_{11n}\leq C(u^{1/3}(q)+\lambda_{2n}^{\delta/2})\). So \(I_{1n}\leq C(\lambda_{1n}^{1/2}+\lambda_{2n}^{1/2}+\upsilon _{1n})\). Steps 1 and 2 together complete the proof of Theorem 3.3. □

Proof of Corollary 3.1

In Theorem 3.3, choosing \(p=\lfloor n^{\theta}\rfloor\), \(q=\lfloor n^{2\theta-1}\rfloor\), and \(\delta=1\), for \(1/2<\theta\leq7/10\) we have \(\mu_{1n}=O(n^{(\theta-1)/3})\) and \(\upsilon_{1n}=O(n^{(\theta-1)/3})\). Therefore (3.6) follows directly from Theorem 3.3. □
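For the reader's convenience, here is the rate bookkeeping behind this choice (our addition, taking \(2^{-m}\asymp n^{\theta-1}\), the boundary case of \(2^{m}/n=O(n^{-\theta})\)):

$$\begin{gathered}\lambda_{1n}=qp^{-1}\asymp n^{\theta-1},\qquad \lambda_{2n}=pn^{-1}\asymp n^{\theta-1},\qquad \lambda_{3n}\asymp n^{2(\theta-1)},\\ \lambda_{4n}\asymp n^{-\theta}\log^{2}n,\qquad \lambda_{5n}\asymp n^{\theta-1}\log n+n^{2\theta-3/2},\\ u(q)=O\bigl(q^{-(1-\theta)/(2\theta-1)}\bigr)=O\bigl(n^{-(1-\theta)}\bigr). \end{gathered} $$

Each of \(\lambda_{1n}^{1/3}\), \(\lambda_{2n}^{1/3}\), \(\lambda_{3n}^{1/3}\), \(\lambda_{4n}^{1/3}\), \(\lambda_{5n}\), \(\lambda_{2n}^{\delta/2}=\lambda_{2n}^{1/2}\), and \(u^{1/3}(q)\) is then \(O(n^{(\theta-1)/3})\); the restriction \(\theta\leq\frac{7}{10}\) is exactly what keeps the term \(n^{2\theta-3/2}\) in \(\lambda_{5n}\) of order at most \(n^{(\theta-1)/3}\).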

Proof of Theorem 3.4

We prove only the case of \(\hat{g}(t)=\hat{g}_{L}(t)\), as the proof of \(\hat{g}(t)=\hat{g}_{W}(t)\) is analogous.

By the definition of \(\hat{g}_{L}(t)\) we easily see that

$$\begin{aligned}\varGamma_{n}^{-1}(t) \bigl( \hat{g}_{L}(t)-E\hat {g}_{L}(t)\bigr)={}&\varGamma_{n}^{-1}(t) \Biggl(\sum_{i=1}^{n}\varepsilon_{i} \int _{A_{i}}E_{m}(t,s)\,ds \Biggr) \\ &+\varGamma_{n}^{-1}(t) \Biggl(\sum _{i=1}^{n}x_{i}(E\hat{\beta}_{L}- \beta) \int _{A_{i}}E_{m}(t,s)\,ds \Biggr) \\ &+\varGamma_{n}^{-1}(t) \Biggl(\sum _{i=1}^{n}x_{i}(\beta-\hat{ \beta}_{L}) \int _{A_{i}}E_{m}(t,s)\,ds \Biggr) \\ :={}&J_{1n}+J_{2n}+J_{3n}. \end{aligned} $$

Set \(Z'_{ni}=\frac{\sigma_{i}e_{i}\int_{A_{i}}E_{m}(t,s)\,ds}{\varGamma_{n}(t)}\). Similar to \(\tilde{L}_{1n}\), we can split \(J_{1n}\) as \(J_{1n}=\sum_{i=1}^{n} Z'_{ni}:= J_{11n}+J_{12n}+J_{13n}\), where \(J_{11n}=\sum_{m=1}^{k}\chi_{nm}\), \(J_{12n}=\sum_{m=1}^{k}\chi'_{nm}\), \(J_{13n}=\chi'_{nk+1}\), \(\chi_{nm}=\sum_{i=k_{m}}^{k_{m}+p-1}Z'_{ni}\), \(\chi '_{nm}=\sum_{i=l_{m}}^{l_{m}+q-1}Z'_{ni}\), \(\chi'_{nk+1}=\sum_{i=k(p+q)+1}^{n}Z'_{ni}\), \(k_{m}=(m-1)(p+q)+1\), \(l_{m}=(m-1)(p+q)+p+1\), \(m=1,2,\ldots,k\).

Applying Lemma A.4, we have

$$ \begin{aligned}[b]&\sup_{y} \biggl\vert P \biggl( \frac{\hat{g}_{L}(t)-E\hat {g}_{L}(t)}{\varGamma_{n}(t)}\leq y \biggr)-\varPhi(y) \biggr\vert \\ &\quad\leq\sup_{y} \bigl\vert P(J_{11n}\leq y)- \varPhi(y) \bigr\vert +P\bigl( \vert J_{12n} \vert > \gamma_{1n}^{1/3}\bigr) +P\bigl( \vert J_{13n} \vert >\gamma_{2n}^{1/3}\bigr) \\ &\qquad+\frac{ \vert J_{2n} \vert }{\sqrt{2\pi}}+P\bigl( \vert J_{3n} \vert > \gamma_{3n}^{(2+\delta )/(3+\delta)}\bigr) +\frac{1}{\sqrt{2\pi}} \Biggl(\sum _{k=1}^{2}\gamma_{kn}^{1/3} +\gamma_{3n}^{\frac{2+\delta}{3+\delta}} \Biggr)\\&\quad =\sum _{k=1}^{6}G_{kn}. \end{aligned} $$
(4.21)

Hence it suffices to show that \(\sum_{k=2}^{6}G_{kn}=O(\mu_{2n})\) and \(G_{1n}=O(\gamma^{1/2}_{1n}+\gamma^{1/2}_{2n}+\upsilon_{2n})\).

Step 1. We first prove \(\sum_{k=2}^{6}G_{kn}=O(\mu_{2n})\). Similar to the proof for \(I_{2n}-I_{8n}\) in Theorem 3.3, we have

$$\begin{gathered}G_{2n}\leq\frac{E \vert J_{12n} \vert ^{2}}{\gamma_{1n}^{2/3}} \leq \frac{C}{\varGamma_{n}^{2}(t)\gamma_{1n}^{2/3}}\sum_{m=1}^{k}\sum _{i=l_{m}}^{l_{m}+q-1} \biggl( \int_{A_{i}}E_{m}(t,s)\,ds \biggr)^{2} \sigma^{2}_{i}\leq \frac{Ckq2^{m}}{n\gamma_{1n}^{2/3}}\leq C \gamma_{1n}^{1/3}, \\ G_{3n}\leq\frac{E \vert J_{13n} \vert ^{2}}{\gamma_{2n}^{2/3}} \leq\frac{C}{\varGamma_{n}^{2}(t)\gamma_{2n}^{2/3}}\sum _{i=k(p+q)+1}^{n} \biggl( \int_{A_{i}}E_{m}(t,s)\,ds \biggr)^{2} \sigma^{2}_{i}\leq \frac{C2^{m} p}{n\gamma_{2n}^{2/3}}\leq C \gamma_{2n}^{1/3}. \end{gathered} $$

Note that if \(\xi_{n}\rightarrow\xi\sim N(0,1)\), then \(E|\xi _{n}|\rightarrow E|\xi|=\sqrt{2/\pi}\) and \(E|\xi_{n}|^{2+\delta}\rightarrow E|\xi|^{2+\delta}\). By Theorem 3.3(i) and (2.6) it follows that

$$\begin{aligned}& \vert \beta-E\hat{\beta}_{L} \vert \leq E \vert \beta-\hat{ \beta }_{L} \vert =O\bigl(\sigma_{1n}/S_{n}^{2} \bigr)=O\bigl(n^{-1/2}\bigr), \end{aligned}$$
(4.22)
$$\begin{aligned}& E \vert \beta-\hat{\beta}_{L} \vert ^{2+\delta}\leq O \bigl( \bigl(\sigma _{1n}/S_{n}^{2}\bigr)^{2+\delta} \bigr)=O \bigl(n^{-(1+\delta/2)} \bigr). \end{aligned}$$
(4.23)

Therefore, applying the Abel inequality (4.9) and combining (A1)(iii) and (A2)(i) with Lemma A.6, from (4.22) and (4.23) we get

$$ \begin{aligned}[b] \vert G_{4n} \vert &= \frac{1}{\sqrt{2\pi}\varGamma_{n}(t)}\cdot \vert \beta-E\hat{\beta }_{L} \vert \cdot \Biggl\vert \sum_{i=1}^{n}x_{i} \int_{A_{i}}E_{m}(t,s)\,ds \Biggr\vert \\ &\leq C\varGamma_{n}^{-1}(t)n^{-1/2} \Biggl(\sup _{0\leq t\leq1} \bigl\vert h(t) \bigr\vert +\max _{1\leq i\leq n} \int_{A_{i}} \bigl\vert E_{m}(t,s) \bigr\vert \,ds \cdot\max_{1\leq l\leq n} \Biggl\vert \sum _{i=1}^{l}v_{j_{i}} \Biggr\vert \Biggr) \\ &\leq C\bigl(2^{-m/2}+\sqrt{2^{m}/n}\log n\bigr)=C \gamma_{3n} \end{aligned} $$
(4.24)

and

$$ \begin{aligned}[b] E\vert J_{3n} \vert ^{2+\delta}&= \varGamma^{-(2+\delta)}_{n}(t)E \vert \beta-\hat{\beta }_{L} \vert ^{2+\delta} \Biggl\vert \sum _{i=1}^{n}x_{i} \int_{A_{i}}E_{m}(t,s)\,ds \Biggr\vert ^{2+\delta} \\ &\leq C\varGamma_{n}^{-(2+\delta)}(t)n^{-(2+\delta)/2} \Biggl(\sup _{0\leq t\leq1} \bigl\vert h(t) \bigr\vert +\max _{1\leq i\leq n} \int_{A_{i}} \bigl\vert E_{m}(t,s) \bigr\vert \,ds \cdot\max_{1\leq l\leq n} \Biggl\vert \sum _{i=1}^{l}v_{j_{i}} \Biggr\vert \Biggr)^{2+\delta} \\ &\leq C\gamma^{2+\delta}_{3n}, \end{aligned} $$
(4.25)

which implies that \(G_{5n}\leq C\gamma_{3n}^{(2+\delta)/(3+\delta)}\) by the Markov inequality. So we get \(\sum_{k=2}^{6}G_{kn}=O(\mu_{2n})\).

Step 2. We verify \(G_{1n}=O(\gamma_{1n}^{1/2}+\gamma _{2n}^{1/2}+\upsilon_{2n})\). Let \(\{\zeta_{nm}:m=1,2,\ldots,k\}\) be independent random variables such that \(\zeta_{nm}\) has the same distribution as \(\chi_{nm}\), \(m=1,2,\ldots,k\). Set \(T'_{n}=\sum_{m=1}^{k}\zeta _{nm}\) (not to be confused with \(T_{n}^{2}\) in (2.3)) and \(t_{n}^{2}=\sum_{m=1}^{k} \operatorname{Var}(\chi_{nm})\). Similar to the proof of (4.12), we easily see that

$$ \begin{aligned}[b]G_{1n}&=\sup_{y} \bigl\vert P(J_{11n}\leq y)-\varPhi(y) \bigr\vert \\& \leq\sup _{y} \bigl\vert P(J_{11n}\leq y)-P\bigl(T'_{n}\leq y\bigr) \bigr\vert \\ &\quad+\sup_{y} \bigl\vert P\bigl(T'_{n}\leq y\bigr)- \varPhi(y/t_{n}) \bigr\vert +\sup_{y} \bigl\vert \varPhi(y/t_{n})-\varPhi (y) \bigr\vert \\ &:=G_{11n}+G_{12n}+G_{13n}. \end{aligned} $$
(4.26)

Similar to the proof of (4.13)–(4.20), we can obtain \(|t_{n}^{2}-1|\leq C(\gamma_{1n}^{1/2}+\gamma_{2n}^{1/2}+u(q))\), \(|G_{12n}|\leq C\gamma_{2n}^{\delta/2}\), \(|G_{13n}|\leq C(\gamma _{1n}^{1/2}+\gamma_{2n}^{1/2}+u(q))\), and \(|G_{11n}|\leq C \upsilon _{2n}\). Thus it follows that \(G_{1n}=O(\gamma_{1n}^{1/2}+\gamma _{2n}^{1/2}+\upsilon_{2n})\). The proof of Theorem 3.4 is completed. □

Proof of Corollary 3.2

Letting \(p=\lfloor n^{\rho }\rfloor\), \(q=\lfloor n^{2\rho-1}\rfloor\), and \(\delta=1\), for \(1/2<\rho<\theta <1\) we have \(\gamma^{1/3}_{1n}=O(n^{-(\theta-\rho)/3})\), \(\gamma ^{1/3}_{2n}=O(n^{-(\theta-\rho)/3})\), \(\gamma^{3/4}_{3n}=O(n^{-3(1-\theta )/8})\), and \(u^{1/3}(q)=O(n^{-(\theta-\rho)/3})\). Therefore (3.8) follows directly from Theorem 3.4. □

References

  1. Engle, R., Granger, C., Rice, J., Weiss, A.: Semiparametric estimates of the relation between weather and electricity sales. J. Am. Stat. Assoc. 81, 310–320 (1986)

    Article  Google Scholar 

  2. Gao, J.T., Chen, X.R., Zhao, L.C.: Asymptotic normality of a class of estimators in partial linear models. Acta Math. Sin. 37(2), 256–268 (1994)

    MathSciNet  MATH  Google Scholar 

  3. Chen, M.H., Ren, Z., Hu, S.H.: Strong consistency of a class of estimators in partial linear model. Acta Math. Sin. 41(2), 429–439 (1998)

    MathSciNet  MATH  Google Scholar 

  4. Hamilton, S.A., Truong, Y.K.: Local linear estimation in partly linear models. J. Multivar. Anal. 60, 1–19 (1997)

    Article  MathSciNet  Google Scholar 

  5. Härdle, W., Liang, H.Y., Gao, J.L.: Partial Linear Models. Physica-Verlag, Heidelberg (2000)

    Book  Google Scholar 

  6. Liang, H.Y., Mammitzsch, V., Steinebach, J.: On a semiparametric regression model whose errors from a linear process with negatively associated innovations. Statistics 40(3), 207–226 (2006)

    Article  MathSciNet  Google Scholar 

  7. Liang, H.Y., Fan, G.L.: Berry–Esséen type bounds of estimators in a semi-parametric model with linear process errors. J. Multivar. Anal. 100, 1–15 (2009)

    Article  Google Scholar 

  8. Liang, H.Y., Jing, B.Y.: Asymptotic normality in partial linear models based on dependent errors. J. Stat. Plan. Inference 139, 1357–1374 (2009)

    Article  MathSciNet  Google Scholar 

  9. You, J., Chen, G.: Semiparametric generalized least squares estimation in partially linear regression models with correlated errors. Acta Math. Sin. Engl. Ser. 23(6), 1013–1024 (2007)

    Article  MathSciNet  Google Scholar 

  10. Back, J.I., Liang, H.Y.: Asymptotic of estimators in semi-parametric model under NA samples. J. Stat. Plan. Inference 136, 3362–3382 (2006)

    Article  MathSciNet  Google Scholar 

  11. Zhang, J.J., Liang, H.Y.: Berry–Esséen type bounds in heteroscedastic semi-parametric model. J. Stat. Plan. Inference 141, 3447–3462 (2011)

    Article  Google Scholar 

  12. Wei, C.D., Li, Y.M.: Berry–Esséen bounds for wavelet estimator in semiparametric regression model with linear process errors. J. Inequal. Appl. 2012, Article ID 44 (2012)

    Article  Google Scholar 

  13. Robinson, P.M.: Asymptotically efficient estimation in the presence of heteroscedasticity of unknown form. Econometrica 55, 875–891 (1987)

    Article  MathSciNet  Google Scholar 

  14. Caroll, R.J., Härdle, W.: Second order effects in semiparametric weighted least squares regression. Statistics 2, 179–186 (1989)

    MathSciNet  MATH  Google Scholar 

  15. Liang, H.Y., Qi, Y.Y.: Asymptotic normality of wavelet estimator of regression function under NA assumption. Bull. Korean Math. Soc. 44(2), 247–257 (2007)

    Article  MathSciNet  Google Scholar 

  16. Antoniadis, A., Gregoire, G., Mckeague, I.W.: Wavelet methods for curve estimation. J. Am. Stat. Assoc. 89, 1340–1352 (1994)

    Article  MathSciNet  Google Scholar 

  17. Sun, Y., Chai, G.X.: Nonparametric wavelet estimation of a fixed designed regression function. Acta Math. Sci. 24A(5), 597–606 (2004)

    MathSciNet  MATH  Google Scholar 

  18. Li, Y.M., Yang, S.C., Zhou, Y.: Consistency and uniformly asymptotic normality of wavelet estimator in regression model with associated samples. Stat. Probab. Lett. 78, 2947–2956 (2008)

    Article  MathSciNet  Google Scholar 

  19. Li, Y.M., Guo, J.H.: Asymptotic normality of wavelet estimator for strong mixing errors. J. Korean Stat. Soc. 38, 383–390 (2009)

    Article  MathSciNet  Google Scholar 

  20. Li, Y.M., Wei, C.D., Xing, G.D.: Berry–Esséen bounds for wavelet estimator in a regression model with linear process errors. Stat. Probab. Lett. 81(1), 103–110 (2011)

  21. Xue, L.G.: Uniform convergence rates of the wavelet estimator of regression function under mixing error. Acta Math. Sci. 22A(4), 528–535 (2002)

  22. Zhou, X.C., Lin, J.G., Yin, C.M.: Asymptotic properties of wavelet-based estimator in nonparametric regression model with weakly dependent process. J. Inequal. Appl. 2013, Article ID 261 (2013)

  23. Alam, K., Saxena, K.M.L.: Positive dependence in multivariate distributions. Commun. Stat., Theory Methods 10, 1183–1196 (1981)

  24. Joag-Dev, K., Proschan, F.: Negative association of random variables with applications. Ann. Stat. 11(1), 286–295 (1983)

  25. Matula, P.: A note on the almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 15, 209–213 (1992)

  26. Shao, Q.M., Su, C.: The law of the iterated logarithm for negatively associated random variables. Stoch. Process. Appl. 83, 139–148 (1999)

  27. Liang, H.Y., Li, Y.Y.: A Berry–Esséen type bound of regression estimator based on linear process errors. J. Korean Math. Soc. 45(6), 1753–1767 (2008)

  28. Petrov, V.V.: Limit Theorems of Probability Theory: Sequences of Independent Random Variables. Oxford University Press, New York (1995)

  29. Yang, S.C.: Uniformly asymptotic normality of the regression weighted estimator for negatively associated samples. Stat. Probab. Lett. 62(2), 101–110 (2003)

Acknowledgements

The authors would like to thank the editor and the referees for their valuable comments and suggestions that improved the quality of our paper.

Funding

This work was supported in part by the NNSF of China (No. 11626031) and Natural Science Foundation of Anhui Province (KJ2016A428).

Author information

Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xueping Hu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Lemma A.1

(Back and Liang [10])

Let \(\{X_{n}, n\geq1\}\) be a sequence of NA random variables with zero means, and let \(\alpha>2\). Assume that \(\{a_{ni},1\leq i\leq n, n\geq1\}\) is a triangular array of real numbers with \(\max_{1\leq i\leq n}|a_{ni}|=O(n^{-1/2})\) and \(\sum_{i=1}^{n}a_{ni}^{2}=o(n^{-2/\alpha}(\log n)^{-1})\). If \(\sup_{i}\mathrm{E}|X_{i}|^{p}<\infty\) for some \(p>2\alpha/(\alpha-2)\), then \(\sum_{i=1}^{n}a_{ni}X_{i}=o(n^{-1/\alpha})\) a.s.
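To see how these hypotheses can be verified in the wavelet setting, here is a hedged sketch combining Lemma A.6 below with a resolution level that is only assumed for illustration, not taken from the main text: suppose the weights are the wavelet weights \(a_{ni}=\int_{A_{i}}\mathrm{E}_{m}(t,s)\,ds\) and \(2^{m}=O(n^{1/2})\). Then parts (iv) and (v) of Lemma A.6 give

$$\max_{1\leq i\leq n} \vert a_{ni} \vert =O\biggl(\frac{2^{m}}{n}\biggr)=O\bigl(n^{-1/2}\bigr), \qquad \sum_{i=1}^{n}a_{ni}^{2}=O\biggl(\frac{2^{m}}{n}\biggr)=O\bigl(n^{-1/2}\bigr), $$

and since \(n^{-1/2}=o(n^{-2/\alpha}(\log n)^{-1})\) whenever \(\alpha>4\), Lemma A.1 then yields \(\sum_{i=1}^{n}a_{ni}X_{i}=o(n^{-1/\alpha})\) a.s. for any such α, provided \(\sup_{i}\mathrm{E}|X_{i}|^{p}<\infty\) for some \(p>2\alpha/(\alpha-2)\).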

Lemma A.2

(Back and Liang [10])

Let \(\{X_{n}, n\geq1\}\) be a sequence of NA random variables with zero means. Assume that \(\{a_{ni},1\leq i\leq n, n\geq1\}\) is an array of functions defined on a closed interval \(I\) of \(R\) satisfying \(\max_{1\leq i,j\leq n}|a_{ni}(u_{j})|=O(n^{-1/2})\) and \(\max_{1\leq j\leq n}\sum_{i=1}^{n}|a_{ni}(u_{j})|=O(1)\). If \(\sup_{i}\mathrm{E}|X_{i}|^{p}<\infty\) for some \(p>2\), then

$$\begin{gathered} {(\mathrm{i})}\quad\max_{1\leq j\leq n} \Biggl\vert \sum _{i=1}^{n}a_{ni}(u_{j})X_{i} \Biggr\vert =o\bigl(L(n)\bigr) \quad\textit{a.s.},\\ {(\mathrm{ii})}\quad\max _{1\leq j\leq n} \Biggl(\sum_{i=1}^{n} \bigl\vert a_{ni}(u_{j})X_{i} \bigr\vert \Biggr)=O(1) \quad\textit{a.s.},\end{gathered} $$

where \(L(x)>0\) is a slowly varying function as \(x\rightarrow\infty\), and \(\sqrt{x}L(x)\) is nondecreasing for \(x\geq x_{0}>0\).
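A concrete admissible rate function, given purely as an illustration: take \(L(x)=\log x\) with \(x_{0}=e\). Then L is slowly varying and \(\sqrt{x}\log x\) is nondecreasing on \([e,\infty)\), so conclusion (i) reads

$$\max_{1\leq j\leq n} \Biggl\vert \sum_{i=1}^{n}a_{ni}(u_{j})X_{i} \Biggr\vert =o(\log n) \quad\text{a.s.} $$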

Lemma A.3

(Liang and Li [27])

Let \(\{X_{n}, n\geq1\}\) be a sequence of NA random variables with zero means and \(\mathrm{E}|X_{n}|^{p}<\infty\) for some \(p>1\), and let \(\{b_{i}, i\geq1\}\) be a sequence of real numbers. Then there exists a positive constant \(C_{p}\) such that

$$\mathrm{E}\max_{1\leq m\leq n} \Biggl\vert \sum _{i=1}^{m}b_{i}X_{i} \Biggr\vert ^{p}\leq C_{p} \Biggl\{ \sum _{i=1}^{n}\mathrm{E} \vert b_{i}X_{i} \vert ^{p}+I(p>2) \Biggl(\sum_{i=1}^{n} \mathrm{E}(b_{i}X_{i})^{2} \Biggr)^{p/2} \Biggr\} . $$
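For \(1<p\leq2\) the indicator \(I(p>2)\) vanishes and the second term drops out, so the inequality reduces to a von Bahr–Esseen-type bound; for instance, with \(p=2\),

$$\mathrm{E}\max_{1\leq m\leq n} \Biggl\vert \sum_{i=1}^{m}b_{i}X_{i} \Biggr\vert ^{2}\leq C_{2}\sum_{i=1}^{n}b_{i}^{2}\mathrm{E}X_{i}^{2}. $$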

Lemma A.4

(Yang [29])

Suppose that \(\{\varsigma_{n},n\geq1\}\), \(\{\eta_{n},n\geq1\}\), and \(\{\xi_{n},n\geq1\}\) are three sequences of random variables and that \(\{\gamma_{n},n\geq1\}\) is a positive nonrandom sequence with \(\gamma_{n}\rightarrow0\). If \(\sup_{x}|F_{\varsigma_{n}}(x)-\varPhi(x)|\leq C\gamma_{n}\), then for any \(\varepsilon_{1}>0\) and \(\varepsilon_{2}>0\),

$$\sup_{x} \bigl\vert F_{\varsigma_{n}+\eta_{n}+\xi_{n}}(x)-\varPhi(x) \bigr\vert \leq C\bigl\{ \gamma _{n}+\varepsilon_{1}+ \varepsilon_{2}+P\bigl( \vert \eta_{n} \vert \geq \varepsilon_{1}\bigr)+P\bigl( \vert \xi _{n} \vert \geq \varepsilon_{2}\bigr)\bigr\} . $$
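A typical way to exploit this bound, sketched here only as an illustration and not taken from the particular proofs of this paper: if \(\mathrm{E}\eta_{n}^{2}=O(\mu_{n})\) with \(\mu_{n}\rightarrow0\), then Chebyshev's inequality gives \(P(|\eta_{n}|\geq\varepsilon_{1})\leq C\mu_{n}/\varepsilon_{1}^{2}\), and the choice \(\varepsilon_{1}=\mu_{n}^{1/3}\) balances the two terms:

$$\varepsilon_{1}+P\bigl( \vert \eta_{n} \vert \geq\varepsilon_{1}\bigr)\leq\mu_{n}^{1/3}+C\mu_{n}/\mu_{n}^{2/3}=O\bigl(\mu_{n}^{1/3}\bigr), $$

so the perturbation \(\eta_{n}\) costs only \(O(\mu_{n}^{1/3})\) in the resulting Berry–Esséen rate.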

Lemma A.5

(Liang and Fan [7])

Suppose that \(\{X_{n}, n\geq1\}\) is a sequence of NA random variables with finite second moments. Let \(\{a_{j},j\geq1\}\) be a real sequence, and let \(1=m_{0}<m_{1}<\cdots<m_{k}=n\). Define \(Y_{l}=\sum_{j=m_{l-1}+1}^{m_{l}}a_{j}X_{j}\) for \(1\leq l\leq k\). Then

$$\Biggl\vert \mathrm{E}\exp\Biggl\{ \mathrm{i}t\sum_{l=1}^{k}Y_{l} \Biggr\} -\prod_{l=1}^{k}\mathrm{E}\exp\{ \mathrm{i}tY_{l}\} \Biggr\vert \leq4t^{2}\sum_{1\leq s< j\leq k}\sum_{l_{1}=m_{s-1}+1}^{m_{s}} \sum_{l_{2}=m_{j-1}+1}^{m_{j}} \vert a_{l_{1}}a_{l_{2}} \vert \bigl\vert \operatorname{Cov}(X_{l_{1}}, X_{l_{2}}) \bigr\vert . $$
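In the simplest case \(k=2\) (two blocks), the bound specializes to a covariance-type estimate for the distance from block independence:

$$\bigl\vert \mathrm{E}e^{\mathrm{i}t(Y_{1}+Y_{2})}-\mathrm{E}e^{\mathrm{i}tY_{1}}\,\mathrm{E}e^{\mathrm{i}tY_{2}} \bigr\vert \leq4t^{2}\sum_{l_{1}=m_{0}+1}^{m_{1}}\sum_{l_{2}=m_{1}+1}^{n} \vert a_{l_{1}}a_{l_{2}} \vert \bigl\vert \operatorname{Cov}(X_{l_{1}},X_{l_{2}}) \bigr\vert , $$

which is how blocked partial sums of NA variables are compared with independent blocks in Berry–Esséen arguments.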

Lemma A.6

(Wei and Li [12])

Assume that Assumptions (A3) and (A4) hold. Then

(i) \(\sup_{m}\int_{0}^{1}|\mathrm{E}_{m}(t,s)|\,ds\leq C\);

(ii) \(\sum_{i=1}^{n}|\int_{A_{i}}\mathrm{E}_{m}(t,s)\,ds|\leq C\);

(iii) \(\sup_{0\leq s,t\leq1}|\mathrm{E}_{m}(t,s)|=O(2^{m})\);

(iv) \(|\int_{A_{i}}\mathrm{E}_{m}(t,s)\,ds|=O(\frac{2^{m}}{n})\), \(i=1,2,\ldots,n\);

(v) \(\sum_{i=1}^{n}(\int_{A_{i}}\mathrm{E}_{m}(t,s)\,ds)^{2}=O(\frac{2^{m}}{n})\);

(vi) \(\max_{1\leq i\leq n}\sum_{j=1}^{n}\int_{A_{j}}|\mathrm{E}_{m}(t_{i},s)|\,ds\leq C\);

(vii) \(\max_{1\leq i\leq n}\sum_{j=1}^{n}\int_{A_{i}}|\mathrm{E}_{m}(t_{j},s)|\,ds\leq C\).
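For orientation, \(\mathrm{E}_{m}(t,s)\) is the wavelet reproducing kernel in the sense of Antoniadis et al. [16]; since the kernel and the intervals \(A_{i}\) are defined in the main text rather than in this appendix, we restate the usual form only as a reminder under the conventions of that paper:

$$\mathrm{E}_{m}(t,s)=2^{m}\sum_{k\in\mathbb{Z}}\phi\bigl(2^{m}t-k\bigr)\phi\bigl(2^{m}s-k\bigr), $$

where ϕ is a scaling function and, as is standard in this literature, the \(A_{i}\) are intervals partitioning \([0,1]\) with \(t_{i}\in A_{i}\).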

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Hu, X., Zhong, J., Ren, J. et al. Asymptotic properties of wavelet estimators in heteroscedastic semiparametric model based on negatively associated innovations. J Inequal Appl 2019, 314 (2019). https://doi.org/10.1186/s13660-019-2267-4
