Uniform convergence of estimator for nonparametric regression with dependent data

Abstract

In this paper, we investigate the internal estimator of nonparametric regression with dependent data such as α-mixing samples. Under suitable conditions such as arithmetically α-mixing coefficients and \(E|Y_{1}|^{s}<\infty\) (\(s>2\)), the convergence rate \(|\widehat{m}_{n}(x)-m(x)|=O_{P}(a_{n})+O(h^{2})\) and the uniform convergence rate \(\sup_{x\in S_{f}^{\prime}}|\widehat{m}_{n}(x)-m(x)|=O_{p}(a_{n})+O(h^{2})\) are presented, provided that \(a_{n}=\sqrt{\frac{\ln n}{nh^{d}}}\rightarrow0\). We generalize some results in Shen and Xie (Stat. Probab. Lett. 83:1915-1925, 2013).

1 Introduction

Kernel-type estimators of the regression function are widely used in various situations because of their flexibility and efficiency, in the dependent data case as well as in the independent data case. This paper is concerned with the nonparametric regression model

$$ Y_{i}=m(X_{i})+U_{i},\quad 1\leq i\leq n, n \geq1, $$
(1.1)

where \((X_{i},Y_{i})\in R^{d}\times R\), \(1\leq i\leq n\), and \(U_{i}\) is a random variable such that \(E(U_{i}|X_{i})=0\), \(1\leq i\leq n\). Then one has

$$ E(Y_{i}|X_{i}=x)=m(x),\quad 1\leq i\leq n, n\geq1. $$

The most popular nonparametric estimators of the unknown function \(m(x)\) are the Nadaraya-Watson estimator \(\widehat{m}_{NW}(x)\) given below and the local polynomial fit. Let \(K(x)\) be a kernel function and define \(K_{h}(x)=h^{-d}K(x/h)\), where \(h=h_{n}\) is a sequence of positive bandwidths tending to zero as \(n\rightarrow\infty\). For independent data, Nadaraya [2] and Watson [3] introduced the estimator \(\widehat{m}_{NW}(x)\), i.e.

$$ \widehat{m}_{NW}(x)=\frac{\sum_{i=1}^{n} Y_{i}K_{h}(x-X_{i})}{\sum_{i=1}^{n} K_{h}(x-X_{i})}. $$
(1.2)

Jones et al. [4] considered various versions of kernel-type regression estimators such as the Nadaraya-Watson estimator (1.2) and the local linear estimator. They also investigated the following internal estimator:

$$ \widehat{m}_{n}(x)=\frac{1}{n}\sum_{i=1}^{n} \frac{Y_{i}K_{h}(x-X_{i})}{f(X_{i})} $$
(1.3)

for a known density \(f(\cdot)\). The term ‘internal’ refers to the fact that the factor \(\frac{1}{f(X_{i})}\) is internal to the summation, while the estimator \(\widehat{m}_{NW}(x)\) has the factor \(\frac{1}{\widehat{f}(x)}=\frac{1}{n^{-1}\sum_{i=1}^{n}K_{h}(x-X_{i})}\) external to the summation.
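To make the distinction between (1.2) and (1.3) concrete, here is a minimal numerical sketch. Python, the Gaussian kernel, the uniform design on \([0,1]\) (so that \(f\equiv1\) on the support), and the bandwidth \(h=n^{-1/5}\) are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def gaussian_kernel(u):
    # standard Gaussian kernel on R^1
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def nw_estimator(x, X, Y, h):
    # Nadaraya-Watson estimator (1.2): normalization by the kernel density estimate is external
    w = gaussian_kernel((x - X) / h) / h
    return np.sum(w * Y) / np.sum(w)

def internal_estimator(x, X, Y, h, f):
    # internal estimator (1.3): each summand is divided by the known density f(X_i)
    w = gaussian_kernel((x - X) / h) / h
    return np.mean(w * Y / f(X))

# illustrative data: d = 1, X_i uniform on [0, 1], m(x) = sin(2*pi*x), i.i.d. noise
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0.0, 1.0, n)
Y = np.sin(2 * np.pi * X) + 0.3 * rng.standard_normal(n)
h = n ** (-1 / 5)

x0 = 0.3
print(nw_estimator(x0, X, Y, h))
print(internal_estimator(x0, X, Y, h, lambda x: np.ones_like(x)))
```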

The internal estimator was first proposed by Mack and Müller [5]. Jones et al. [4] studied various kernel-type regression estimators, including the internal estimator (1.3). Linton and Nielsen [6] introduced the ‘integration method’, which is based on direct integration of the initial pilot estimator (1.3). Linton and Jacho-Chávez [7] studied two internal nonparametric estimators similar to (1.3) but with the unknown density \(f(\cdot)\) replaced by the classical kernel estimator \(\widehat{f}(x)=n^{-1}\sum_{i=1}^{n} K_{h}(x-X_{i})\). Much work has been done on kernel estimation. For example, Masry [8] gave a recursive probability density estimator for mixing dependent samples, Roussas et al. [9] and Tran et al. [10] investigated fixed design regression for dependent data, Liebscher [11] studied the strong convergence of sums of α-mixing random variables with an application to density estimation, and Hansen [12] obtained uniform convergence rates for kernel estimation with dependent data. For more work on kernel estimation, we refer to [13–30] and the references therein.

Let \((\Omega, \mathcal{F}, P)\) be a fixed probability space. Denote \(N=\{1,2,\ldots,n,\ldots\}\). Let \(\mathcal {F}_{m}^{n}=\sigma(X_{i}, m\leq i\leq n, i\in N)\) be the σ-field generated by random variables \(X_{m},X_{m+1},\ldots,X_{n}\), \(1\leq m\leq n\). For \(n\geq1\), we define

$$\alpha(n)=\sup_{m\in N}\sup_{A\in\mathcal {F}_{1}^{m},B\in\mathcal {F}_{m+n}^{\infty}}\bigl|P(AB)-P(A)P(B)\bigr|. $$

Definition 1.1

If \(\alpha(n)\downarrow0\) as \(n\rightarrow \infty\), then \(\{X_{n},n\geq1\}\) is called a strong mixing or α-mixing sequence.
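For instance, a stationary Gaussian AR(1) process with autoregressive coefficient \(|\rho|<1\) is a classical example of a geometrically (and hence arithmetically) α-mixing sequence. The following minimal sketch, written in Python (our choice, not the paper's), generates a dependent sample from model (1.1) with such a design; the regression function, noise level, and \(\rho=0.5\) are illustrative assumptions only.

```python
import numpy as np

# Illustrative only: stationary Gaussian AR(1) covariates form an alpha-mixing sequence.
rng = np.random.default_rng(1)
n, rho = 5000, 0.5

X = np.empty(n)
X[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - rho**2))  # draw X_1 from the stationary law
for i in range(1, n):
    X[i] = rho * X[i - 1] + rng.standard_normal()

# dependent regression sample from model (1.1): Y_i = m(X_i) + U_i with E(U_i | X_i) = 0
m = lambda x: np.sin(x)
Y = m(X) + 0.5 * rng.standard_normal(n)
```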

Recently, Shen and Xie [1] obtained the strong consistency of the internal estimator (1.3) under α-mixing data. In their paper, the process is assumed to be a geometrically α-mixing sequence, i.e. the mixing coefficients \(\alpha(n)\) satisfy \(\alpha(n)\leq\beta_{0}e^{-\beta_{1}n}\), where \(\beta_{0}>0\) and \(\beta_{1}>0\). They also supposed that the sequence \(\{Y_{n},n\geq 1\}\) is bounded, as is the density \(f(x)\) of \(X_{1}\). Inspired by Hansen [12], Shen and Xie [1], and the other papers above, we also investigate the convergence of the internal estimator (1.3) under α-mixing data. The process is supposed to be an arithmetically α-mixing sequence, i.e. the mixing coefficients \(\alpha(n)\) satisfy \(\alpha(n)\leq Cn^{-\beta}\) with \(C>0\) and \(\beta>0\). Without the boundedness conditions on \(\{Y_{n},n\geq 1\}\) and on the density \(f(x)\) of \(X_{1}\), we establish the convergence rate and the uniform convergence rate for the internal estimator (1.3). For the details, please see our results in Section 2. The conclusion and the lemmas and proofs of the main results are presented in Section 3 and Section 4, respectively.

Regarding notation, for \(x=(x_{1},\ldots,x_{d})\in R^{d}\), set \(\|x\|=\max(|x_{1}|,\ldots,|x_{d}|)\). Throughout the paper, \(C,C_{1},C_{2},C_{3},\ldots\) denote positive constants not depending on n, which may differ in various places. \(\lfloor x\rfloor\) denotes the largest integer not exceeding x. → denotes taking the limit as \(n\rightarrow\infty\), and \(\xrightarrow{P}\) denotes convergence in probability. \(X\stackrel{d}{=}Y\) means that the random variables X and Y have the same distribution. A sequence \(\{X_{n},n\geq1\}\) is said to be second-order stationary if \((X_{1},X_{1+k})\stackrel{d}{=} (X_{i},X_{i+k})\), \(i\geq1\), \(k\geq1\).

2 Results and discussion

2.1 Some assumptions

Assumption 2.1

We assume that the observed data \(\{(X_{n},Y_{n}),n\geq1\}\), valued in \(R^{d}\times R\), come from a second-order stationary stochastic sequence. The sequence \(\{(X_{n},Y_{n}),n\geq1\}\) is also assumed to be arithmetically α-mixing with mixing coefficients \(\alpha(n)\) such that

$$ \alpha(n)\leq An^{-\beta}, $$
(2.1)

where \(A<\infty\) and for some \(s>2\)

$$ E|Y_{1}|^{s}< \infty $$
(2.2)

and

$$ \beta\geq\frac{2s-2}{s-2}. $$
(2.3)

The known density \(f(\cdot)\) of \(X_{1}\) has compact support \(S_{f}\), and it is also assumed that \(\inf_{x\in S_{f}}f(x)>0\). Let \(B_{0}\) be a positive constant such that

$$ \sup_{x\in S_{f}} E\bigl(|Y_{1}|^{s}| X_{1}=x \bigr)f(x)\leq B_{0}. $$
(2.4)

Also, there is a \(j^{*}<\infty\) such that for all \(j\geq j^{*}\)

$$ \sup_{x_{1}\in S_{f},x_{j+1}\in S_{f}} E\bigl(|Y_{1}Y_{j+1}| | X_{1}=x_{1},X_{j+1}=x_{j+1}\bigr)f_{j}(x_{1},x_{j+1}) \leq B_{1}, $$
(2.5)

where \(B_{1}\) is a positive constant and \(f_{j}(x_{1},x_{j+1})\) denotes the joint density of \((X_{1},X_{j+1})\).

Assumption 2.2

There exist two positive constants \(\bar{K}>0\) and \(\mu>0\) such that

$$ \sup_{u\in R^{d}}\bigl|K(u)\bigr|\leq \bar{K} \quad\mbox{and}\quad \int _{R^{d}}\bigl|K(u)\bigr|\,du= \mu. $$
(2.6)

Assumption 2.3

Denote by \(S_{f}^{0}\) the interior of \(S_{f}\). For \(x\in S_{f}^{0}\), the function \(m(x)\) is twice differentiable and there exists a positive constant b such that

$$\biggl|\frac{\partial^{2} m(x)}{\partial x_{i}\,\partial x_{j}}\biggr|\leq b, \quad\forall i,j=1,2,\ldots,d. $$

The kernel density function \(K(\cdot)\) is symmetrical and satisfies

$$\int _{R^{d}}|v_{i}||v_{j}|K(v)\,dv< \infty,\quad \forall i,j=1,2,\ldots,d. $$

Assumption 2.4

The kernel function satisfies the Lipschitz condition, i.e.

$$\exists L>0,\quad \bigl|K(u)-K\bigl(u^{\prime}\bigr)\bigr|\leq L\bigl\| u-u^{\prime}\bigr\| ,\quad u,u^{\prime} \in R^{d}. $$

Remark 2.1

Similar to Assumption 2 of Hansen [12], Assumption 2.1 specifies that the serial dependence in the data is of strong mixing type, and equations (2.1)-(2.3) specify a required decay rate. Condition (2.4) controls the tail behavior of the conditional expectation \(E(|Y_{1}|^{s}|X_{1}=x)\), and condition (2.5) places a similar bound on the joint density and conditional expectation. Assumptions 2.2-2.4 concern the kernel function \(K(u)\): Assumption 2.2 is a general condition, Assumption 2.3 is used to estimate the convergence rate of \(|E\widehat{m}_{n}(x)-m(x)|\), and Assumption 2.4 is used to investigate the uniform convergence rate of the internal estimator \(\widehat{m}_{n}(x)\).
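For concreteness (this example is ours, not part of the paper), a standard kernel satisfying Assumptions 2.2-2.4 is the Gaussian kernel

$$K(u)=(2\pi)^{-d/2}\exp \biggl(-\frac{1}{2}\sum_{i=1}^{d}u_{i}^{2} \biggr),\quad u\in R^{d}, $$

for which \(\bar{K}=(2\pi)^{-d/2}\) and \(\mu=\int_{R^{d}}|K(u)|\,du=1\) in (2.6), the kernel is symmetric with \(\int_{R^{d}}|v_{i}||v_{j}|K(v)\,dv<\infty\), and it satisfies the Lipschitz condition of Assumption 2.4 because its gradient is bounded on \(R^{d}\).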

2.2 Main results

First, we investigate the variance bound of the estimator \(\widehat{m}_{n}(x)\). For \(1\leq r\leq s\) and \(s>2\), denote \(\bar{\mu}(r,s):=\frac{(B_{0})^{r/s}\bar{K}^{r-1}\mu}{(\inf_{x\in S_{f}} f(x))^{r-1+r/s}}\), where \(B_{0}\), \(\inf_{x\in S_{f}} f(x)\), \(\bar{K}\), and μ are defined in Assumptions 2.1 and 2.2.

Theorem 2.1

Let Assumption  2.1 and Assumption  2.2 be fulfilled. Then there exists a \(\Theta<\infty\) such that for n sufficiently large and \(x\in S_{f}\)

$$ \operatorname{Var}\bigl(\widehat{m}_{n}(x)\bigr)\leq\frac{\Theta}{nh^{d}}, $$
(2.7)

where \(\Theta:=\bar{\mu}(2,s)+2j^{*}\bar{\mu}(2,s)+2 (\bar{\mu}^{2}(1,s)+\frac {B_{1}\mu^{2}}{(\inf_{x\in S_{f}}f)^{2}} )+\frac{16A^{1-2/s}\bar{\mu}^{\frac{2}{s}}(s,s)}{(s-2)/s}\).

As an application of Theorem 2.1, we obtain the weak consistency of the estimator \(\widehat{m}_{n}(x)\).

Corollary 2.1

Let Assumption 2.1 and Assumption 2.2 be fulfilled and let \(K(\cdot)\) be a density function. Suppose that \(m(\cdot)\) is continuous at \(x\in S_{f}\). If \(nh^{d}\rightarrow \infty\) as \(n\rightarrow\infty\), then

$$ \widehat{m}_{n}(x)\xrightarrow{ P }m(x). $$
(2.8)

Next, the convergence rate of estimator \(\widehat{m}_{n}(x)\) is presented.

Theorem 2.2

For \(0<\theta<1\) and \(s>2\), let Assumptions 2.1-2.3 hold, where the mixing exponent β satisfies

$$ \beta> \max\biggl\{ \frac{2-4\theta+2\theta s}{(1-\theta)(s-2)},\frac{2s-2}{s-2}\biggr\} . $$
(2.9)

Denote \(a_{n}=\sqrt{\frac{\ln n}{nh^{d}}}\) and take \(h=n^{-\theta/d}\). Then for \(x\in S_{f}^{0}\), one has

$$ \bigl|\widehat{m}_{n}(x)-m(x)\bigr|= O_{P}(a_{n})+O \bigl(h^{2}\bigr). $$
(2.10)

Third, we investigate the uniform convergence rate of the estimator \(\widehat{m}_{n}(x)\) over a compact set. Let \(S_{f}^{\prime}\) be any compact set contained in \(S_{f}^{0}\).

Theorem 2.3

For \(0<\theta<1\) and \(s>2\), let Assumptions 2.1-2.3 be fulfilled, where the mixing exponent β satisfies

$$ \beta>\max\biggl\{ \frac{s\theta d+3\theta s+sd+s-2\theta d-4\theta}{(1-\theta)(s-2)},\frac{2s-2}{s-2}\biggr\} . $$
(2.11)

Suppose that Assumption  2.4 is also fulfilled. Denote \(a_{n}=\sqrt{\frac{\ln n}{nh^{d}}}\) and take \(h=n^{-\theta/d}\). Then

$$ \sup_{x\in S_{f}^{\prime}}\bigl|\widehat{m}_{n}(x)-m(x)\bigr|=O_{p}(a_{n})+O \bigl(h^{2}\bigr). $$
(2.12)

2.3 Discussion

The parameter θ in Theorem 2.2 and Theorem 2.3 plays the role of a bridge between the process (i.e. the mixing exponent) and the choice of positive bandwidth h. For example, if \(d=2\), \(\theta=\frac{1}{3}\), and \(\beta>\max\{\frac{s+1}{s-2},\frac{2s-2}{s-2}\}\), then we take \(h=n^{-1/6}\) in Theorem 2.2 and obtain the convergence rate \(|\widehat{m}_{n}(x)-m(x)|= O_{P}((\ln n)^{1/2}n^{-1/3})\). Similarly, if \(d=2\), \(\theta=\frac{1}{3}\), and \(\beta>\frac{2s-2}{s-2}\), then we choose \(h=n^{-1/6}\) in Theorem 2.3 and establish the uniform convergence rate \(\sup_{x\in S_{f}^{\prime}}|\widehat{m}_{n}(x)-m(x)|=O_{p}((\ln n)^{1/2}n^{-1/3})\).
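As a quick numerical illustration of this example (a sketch of ours, not from the paper), the following Python lines tabulate \(a_{n}=\sqrt{\frac{\ln n}{nh^{d}}}\) and \(h^{2}\) for \(d=2\), \(\theta=\frac{1}{3}\), \(h=n^{-1/6}\); both terms are of order \(n^{-1/3}\) up to the \(\sqrt{\ln n}\) factor, which is why the rate reduces to \(O_{P}((\ln n)^{1/2}n^{-1/3})\).

```python
import math

d, theta = 2, 1 / 3

for n in (10**3, 10**4, 10**5, 10**6):
    h = n ** (-theta / d)                       # bandwidth h = n^{-1/6}
    a_n = math.sqrt(math.log(n) / (n * h**d))   # a_n = sqrt(ln n) * n^{-1/3}
    print(f"n={n:>7d}   a_n={a_n:.4f}   h^2={h**2:.4f}")
```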

3 Conclusion

On the one hand, similar to Theorem 2.1, Hansen [12] investigated the kernel average estimator

$$\widehat{\Psi}(x)=\frac{1}{n}\sum_{i=1}^{n}Y_{i}K_{h}(x-X_{i}), $$

and obtained the variance bound \(\operatorname{Var}(\widehat{\Psi}(x))\leq \frac{\Theta}{nh^{d}}\), where Θ is a positive constant. For the details, see Theorem 1 of Hansen [12]. Under some other conditions, Hansen [12] also gave uniform convergence rates such as \(\sup_{\|x\|\leq c_{n}}|\widehat{\Psi}(x)-E\widehat{\Psi}(x)|=O_{P}(a_{n})\), where \(a_{n}=\sqrt{\frac{\ln n}{nh^{d}}}\rightarrow0\) and \(\{c_{n}\}\) is a sequence of positive constants (see Theorems 2-5 of Hansen [12]). On the other hand, under conditions such as geometrically α-mixing coefficients and the boundedness of \(\{Y_{n}, n\geq1\}\) as well as of the density function \(f(x)\) of \(X_{1}\), Shen and Xie [1] obtained the complete convergence \(|\widehat{m}_{n}(x)-m(x)|\xrightarrow{a.c.}0\), provided that \(\frac{\ln^{2}n}{nh^{d}}\rightarrow0\) (see Theorem 3.1 of Shen and Xie [1]), and the uniform complete convergence \(\sup_{x\in S_{f}^{\prime}}|\widehat{m}_{n}(x)-m(x)|\xrightarrow{a.c.}0\), provided that \(\frac{\ln^{2}n}{nh^{d}}\rightarrow0\) (see Theorem 4.1 of Shen and Xie [1]). In this paper, we do not need the boundedness conditions on \(\{Y_{n},n\geq1\}\) and on the density \(f(x)\) of \(X_{1}\), and we also investigate the convergence of the internal estimator \(\widehat{m}_{n}(x)\). Under some weak conditions such as arithmetically α-mixing coefficients and \(E|Y_{1}|^{s}<\infty\), \(s>2\), we establish the convergence rate in Theorem 2.2, namely \(|\widehat{m}_{n}(x)-m(x)|= O_{P}(a_{n})+O(h^{2})\) provided that \(a_{n}=\sqrt{\frac{\ln n}{nh^{d}}}\rightarrow0\), and the uniform convergence rate in Theorem 2.3, namely \(\sup_{x\in S_{f}^{\prime}}|\widehat{m}_{n}(x)-m(x)|=O_{p}(a_{n})+O(h^{2})\) provided that \(a_{n}=\sqrt{\frac{\ln n}{nh^{d}}}\rightarrow0\). In Theorem 2.2 and Theorem 2.3, we have \(|\widehat{m}_{n}(x)-E\widehat{m}_{n}(x)|=O_{P}(a_{n})\) and \(\sup_{x\in S_{f}^{\prime}}|\widehat{m}_{n}(x)-E\widehat{m}_{n}(x)|=O_{P}(a_{n})\), where the convergence rates are the same as those obtained by Hansen [12]. Hence we generalize the results in Shen and Xie [1].

4 Some lemmas and the proofs of the main results

Lemma 4.1

(Hall and Heyde, [31], Corollary A.2, i.e. Davydov’s lemma)

Suppose that X and Y are random variables which are \(\mathscr{G}\)-measurable and \(\mathscr{H}\)-measurable, respectively, and \(E|X|^{p}<\infty\), \(E|Y|^{q}<\infty\), where \(p,q>1\), \(p^{-1}+q^{-1}<1\). Then

$$\bigl|E(XY)-EX EY\bigr|\leq8\bigl(E|X|^{p}\bigr)^{1/p} \bigl(E|Y|^{q}\bigr)^{1/q}\bigl[\alpha(\mathscr {G}, \mathscr{H})\bigr]^{1-p^{-1}-q^{-1}}. $$

Lemma 4.2

(Liebscher [32], Proposition 5.1)

Let \(\{X_{n},n\geq1\}\) be a stationary α-mixing sequence with mixing coefficient \(\alpha(k)\). Assume that \(EX_{i}=0\) and \(|X_{i}|\leq S<\infty\), a.s., \(i=1,2,\ldots,n\). Then, for \(n,m\in N\), \(0< m\leq n/2\), and all \(\varepsilon>0\),

$$P \Biggl( \Biggl|\sum_{i=1}^{n} X_{i} \Biggr|>\varepsilon \Biggr)\leq4\exp \biggl\{ -\frac{\varepsilon^{2}}{16(\frac{n}{m}D_{m}+\frac{1}{3}\varepsilon Sm)} \biggr\} +32 \frac{S}{\varepsilon}n\alpha(m), $$

where \(D_{m}=\max_{1\leq j\leq2m}\operatorname{Var}(\sum_{i=1}^{j} X_{i})\).

Lemma 4.3

(Shen and Xie [1], Lemma 3.2)

Under Assumption  2.3, for \(x\in S_{f}^{0}\), one has

$$\bigl|E\widehat{m}_{n}(x)-m(x)\bigr|=O\bigl(h^{2}\bigr). $$

Proof of Theorem 2.1

For \(x\in S_{f}\), let \(Z_{i}:=\frac{Y_{i}K_{h}(x-X_{i})}{f(X_{i})}\), \(1\leq i\leq n\). Consider now

$$ \widehat{m}_{n}(x)=\frac{1}{n}\sum_{i=1}^{n} \frac{Y_{i}K_{h}(x-X_{i})}{f(X_{i})}=\frac{1}{n}\sum_{i=1}^{n} Z_{i}, \quad n\geq 1. $$

For any \(1\leq r\leq s\) and \(s>2\), it follows from (2.4) and (2.6) that

$$\begin{aligned} h^{d(r-1)}E|Z_{1}|^{r} =&h^{d(r-1)}E \biggl| \frac{K_{h}(x-X_{1})Y_{1}}{f(X_{1})} \biggr|^{r} \\ =&h^{d(r-1)}E \biggl(\frac{|K_{h}(x-X_{1})|^{r}}{f^{r}(X_{1})}E\bigl(|Y_{1}|^{r}| X_{1} \bigr) \biggr) \\ =& \int _{S_{f}}\biggl|K\biggl(\frac{x-u}{h}\biggr)\biggr|^{r}E \bigl(|Y_{1}|^{r}|X_{1}=u\bigr)\frac {1}{h^{d}} \frac{f(u)}{f^{r}(u)}\,du \\ \leq& \int _{S_{f}}\biggl|K\biggl(\frac {x-u}{h}\biggr)\biggr|^{r} \bigl(E\bigl(|Y_{1}|^{s}|X_{1}=u\bigr)f(u) \bigr)^{r/s}\frac{1}{h^{d}}\frac {1}{f^{r-1+r/s}(u)}\,du \\ \leq& \frac{(B_{0})^{r/s}\bar{K}^{r-1}\mu}{(\inf_{x\in S_{f}} f)^{r-1+r/s}}:=\bar{\mu}(r,s)< \infty. \end{aligned}$$
(4.1)

For \(j\geq j^{*}\), by (2.5), one has

$$\begin{aligned} E|Z_{1}Z_{j+1}| =&E \biggl|\frac {K_{h}(x-X_{1})K_{h}(x-X_{j+1})Y_{1}Y_{j+1}}{f(X_{1})f(X_{j+1})} \biggr| \\ =&E \biggl(\frac {|K_{h}(x-X_{1})K_{h}(x-X_{j+1})|}{f(X_{1})f(X_{j+1})}E\bigl(|Y_{1}Y_{j+1}||X_{1},X_{j+1}\bigr) \biggr) \\ =& \int _{S_{f}} \int _{S_{f}}\biggl|K\biggl(\frac{x-u_{1}}{h}\biggr)K\biggl( \frac {x-u_{j+1}}{h}\biggr)\biggr|E\bigl(|Y_{1}Y_{j+1}||X_{1}=u_{1},X_{j+1}=u_{j+1}\bigr) \\ &{}\times\frac{1}{h^{2d}}\frac {1}{f(u_{1})f(u_{j+1})}f_{j}(u_{1},u_{j+1})\,du_{1}\,du_{j+1} \\ \leq&\frac{B_{1}}{(\inf_{x\in S_{f}} f)^{2}} \int _{R^{d}} \int _{R^{d}}\bigl|K(u_{1})K(u_{j+1})\bigr|\,du_{1}\,du_{j+1} \leq \frac{B_{1}\mu^{2}}{(\inf_{x\in S_{f}} f)^{2}}< \infty. \end{aligned}$$
(4.2)

Define the covariances \(\gamma_{j}=\operatorname{Cov}(Z_{1},Z_{j+1})\), \(j\geq0\). Assume that n is sufficiently large so that \(h^{-d}\geq j^{*}\). We now bound the \(\gamma_{j}\) separately for \(j\leq j^{*}\), \(j^{*}< j\leq h^{-d}\), and \(h^{-d}< j<\infty\). First, for \(1\leq j\leq j^{*}\), by the Cauchy-Schwarz inequality and (4.1) with \(r=2\),

$$ |\gamma_{j}|\leq\sqrt{\operatorname{Var}(Z_{1})\cdot \operatorname{Var}(Z_{j+1})}= \operatorname{Var}(Z_{1}) \leq EZ_{1}^{2} \leq\bar{\mu}(2,s) h^{-d}. $$
(4.3)

Second, for \(j^{*}< j\leq h^{-d}\), in view of (4.1) (\(r=1\)) and (4.2), we establish that

$$ |\gamma_{j}|\leq E|Z_{1}Z_{j+1}|+\bigl(E|Z_{1}|\bigr)^{2} \leq \frac{B_{1}\mu^{2}}{(\inf_{x\in S_{f}} f)^{2}}+\bar{\mu}^{2}(1,s). $$
(4.4)

Third, for \(j>h^{-d}\), we apply Lemma 4.1, (2.1) and (4.1) with \(r=s\) (\(s>2\)) and we thus obtain

$$\begin{aligned} |\gamma_{j}| \leq& 8\bigl(\alpha({j})\bigr)^{1-2/s} \bigl(E|Z_{1}|^{s}\bigr)^{2/s}\leq 8A^{1-2/s}j^{-\beta(1-2/s)}\bigl(\bar{\mu}(s,s) h^{-d(s-1)} \bigr)^{2/s} \\ \leq& 8A^{1-2/s}\bar{\mu}^{\frac{2}{s}}(s,s) j^{-(2-2/s)}h^{-2d(s-1)/s}. \end{aligned}$$
(4.5)

Consequently, in view of the property of second-order stationarity and (4.3)-(4.5), for n sufficiently large, we establish

$$\begin{aligned} \operatorname{Var}\bigl(\widehat{m}_{n}(x)\bigr) =&\frac{1}{n^{2}}\operatorname{Var} \Biggl(\sum _{i=1}^{n} Z_{i} \Biggr)= \frac{1}{n^{2}} \Biggl(n\gamma_{0}+2n\sum _{j=1}^{n-1}\biggl(1-\frac {j}{n}\biggr) \gamma_{j} \Biggr) \\ \leq&\frac{1}{n^{2}} \biggl(nh^{-d}\bar{\mu}(2,s)+2n\sum _{1\leq j\leq j^{*}} |\gamma_{j}|+2n\sum _{j^{*}< j\leq h^{-d}} |\gamma_{j}|+2n\sum _{h^{-d}< j} |\gamma_{j}| \biggr) \\ \leq&\frac{1}{n}\bar{\mu}(2,s)h^{-d}+\frac{2}{n}j^{*} \bar{\mu }(2,s)h^{-d}+\frac{2}{n}\bigl(h^{-d}-j^{*} \bigr) \biggl(\bar{\mu}^{2}(1,s)+\frac{B_{1}\mu ^{2}}{(\inf_{x\in S_{f}}f)^{2}} \biggr) \\ &{}+\frac{2}{n}\sum_{h^{-d}< j< \infty}8A^{1-2/s} \bar{\mu}^{\frac{2}{s}}(s,s) j^{-(2-2/s)}h^{-2d(s-1)/s} \\ \leq& \biggl(\bar{\mu}(2,s)+2j^{*}\bar{\mu}(2,s)+2 \biggl(\bar{\mu}^{2}(1,s)+\frac{B_{1}\mu^{2}}{(\inf_{x\in S_{f}}f)^{2}} \biggr) \\ &{}+\frac{16A^{1-2/s}\bar{\mu}^{\frac {2}{s}}(s,s)}{(s-2)/s} \biggr) \frac{1}{nh^{d}} \\ :=&\frac{\Theta}{nh^{d}}, \end{aligned}$$
(4.6)

where the final inequality uses the fact that \(\sum_{j=k+1}^{\infty}j^{-\delta}\leq\int _{k}^{\infty}x^{-\delta}\,dx=\frac{k^{1-\delta}}{\delta-1}\) for \(\delta>1\) and \(k\geq1\).

Thus, (2.7) is completely proved. □

Proof of Corollary 2.1

It is easy to see that

$$\bigl|\widehat{m}_{n}(x)-m(x)\bigr|\leq \bigl|\widehat{m}_{n}(x)-E \widehat{m}_{n}(x)\bigr|+\bigl|E\widehat{m}_{n}(x)-m(x)\bigr|, $$

which can be treated as ‘variance’ part and ‘bias’ part, respectively.

On the one hand, by the proof of Theorem 3.1 of Shen and Xie [1], one has \(|E\widehat{m}_{n}(x)-m(x)|\rightarrow0\). On the other hand, we apply Theorem 2.1 and obtain \(|\widehat{m}_{n}(x)-E\widehat{m}_{n}(x)|\xrightarrow{ P } 0\). Thus, (2.8) is proved. □

Proof of Theorem 2.2

Let \(\tau_{n}=a_{n}^{-1/(s-1)}\) and define

$$ R_{n}=\widehat{m}_{n}(x)-\frac{1}{n}\sum _{i=1}^{n} \frac{Y_{i}K_{h}(x-X_{i})}{f(X_{i})}I\bigl(|Y_{i}|\leq \tau_{n}\bigr) =\frac{1}{n}\sum_{i=1}^{n} \frac{Y_{i}K_{h}(x-X_{i})}{f(X_{i})}I\bigl(|Y_{i}|>\tau_{n}\bigr). $$

Obviously, we have

$$\begin{aligned} E|R_{n}| \leq& E \biggl(\frac{|K_{h}(x-X_{1})|}{f(X_{1})}E\bigl(|Y_{1}|I\bigl(|Y_{1}|> \tau _{n}\bigr)|X_{1}\bigr) \biggr) \\ \leq&\frac{1}{(\inf_{S_{f}}f)} \int _{S_{f}}\biggl|K\biggl(\frac {x-u}{h}\biggr)\biggr|E \bigl(|Y_{1}|I\bigl(|Y_{1}|>\tau_{n}\bigr)|X_{1}=u \bigr)\frac{f(u)}{h^{d}}\,du \\ = &\frac{1}{(\inf_{S_{f}}f)} \int _{S_{f}}\bigl|K(u)\bigr|E\bigl(|Y_{1}|I\bigl(|Y_{1}|> \tau_{n}\bigr)|X_{1}=x-hu\bigr)f(x-hu)\,du \\ \leq&\frac{1}{(\inf_{S_{f}}f)}\frac{1}{\tau_{n}^{s-1}} \int _{S_{f}}\bigl|K(u)\bigr|E\bigl(|Y_{1}|^{s}|X_{1}=x-hu \bigr)f(x-hu)\,du \\ \leq&\frac{1}{(\inf_{S_{f}}f)}\frac{\mu B_{0}}{\tau_{n}^{s-1}}. \end{aligned}$$
(4.7)

Combining Markov’s inequality with (4.7), one has

$$ |R_{n}-ER_{n}|=O_{P}\bigl(\tau_{n}^{-(s-1)} \bigr)=O_{P}(a_{n}). $$
(4.8)

Denote

$$ \widetilde{m}_{n}(x)=\frac{1}{n}\sum _{i=1}^{n} \frac{Y_{i}K_{h}(x-X_{i})}{f(X_{i})}I\bigl(|Y_{i}|\leq \tau_{n}\bigr):=\frac{1}{n}\sum_{i=1}^{n} \tilde{Z}_{i},\quad n\geq1. $$
(4.9)

It can be seen that

$$\begin{aligned} \bigl|\widehat{m}_{n}(x)-m(x)\bigr| \leq& \bigl|\widehat{m}_{n}(x)-E \widehat{m}_{n}(x)\bigr|+\bigl|E\widehat{m}_{n}(x)-m(x)\bigr| \\ \leq&\bigl|\widetilde{m}_{n}(x)-E\widetilde{m}_{n}(x)\bigr|+|R_{n}-ER_{n}|+\bigl|E \widehat {m}_{n}(x)-m(x)\bigr|. \end{aligned}$$
(4.10)

Similar to the proof of (4.6), it can be argued that

$$\operatorname{Var} \Biggl(\sum_{i=1}^{j} \tilde{Z}_{i} \Biggr)\leq C_{2}jh^{-d}, $$

which implies

$$D_{m}=\max_{1\leq j\leq2m}\operatorname{Var} \Biggl(\sum _{i=1}^{j} \tilde {Z}_{i} \Biggr)\leq C_{3}mh^{-d}. $$

Meanwhile, one has \(|\tilde{Z}_{i}-E\tilde{Z}_{i}|\leq \frac{C_{1}\tau_{n}}{h^{d}}\), \(1\leq i\leq n\). Setting \(m=a_{n}^{-1}\tau_{n}^{-1}\) and using (2.9), \(h=n^{-\theta/d}\), and Lemma 4.2 with \(\varepsilon=a_{n}n\), we obtain for n sufficiently large

$$\begin{aligned} &P\bigl(\bigl|\widetilde{m}_{n}(x)-E\widetilde{m}_{n}(x)\bigr|>a_{n} \bigr) \\ &\quad=P \Biggl( \Biggl|\sum_{i=1}^{n} ( \tilde{Z}_{i}-E\tilde{Z}_{i}) \Biggr|>na_{n} \Biggr) \\ &\quad\leq4\exp \biggl\{ -\frac{na_{n}^{2}}{16(C_{3}h^{-d} +\frac{1}{3}C_{1}h^{-d})} \biggr\} +32\frac{C_{1}\tau_{n}}{a_{n}nh^{d}}nA(a_{n} \tau_{n})^{\beta} \\ &\quad\leq4\exp\biggl\{ -\frac{\ln n}{16(C_{3}+\frac{1}{3}C_{1})}\biggr\} +C_{4}h^{-d}a_{n}^{\frac{\beta(s-2)-s}{s-1}} \\ &\quad\leq o(1)+C_{5}n^{\theta}n^{(\theta-1)\frac{\beta(s-2)-s}{2(s-1)}}(\ln n)^{\frac{\beta(s-2)-s}{2(s-1)}} \\ &\quad=o(1)+C_{5}n^{\frac{\beta(\theta-1)(s-2)+\theta s+s-2\theta}{2(s-1)}}(\ln n)^{\frac{\beta(s-2)-s}{2(s-1)}} =o(1), \end{aligned}$$
(4.11)

in view of \(s>2\), \(0<\theta<1\), and \(\beta>\max\{\frac{\theta s+s-2\theta}{(1-\theta)(s-2)},\frac{2s-2}{s-2}\}\), which imply \(\frac{\beta(\theta-1)(s-2)+\theta s+s-2\theta}{2(s-1)}<0\).

Consequently, by (4.8), (4.10), (4.11), and Lemma 4.3, we establish the result of (2.10). □

Proof of Theorem 2.3

We use notation similar to that in the proof of Theorem 2.2. Obviously, one has

$$ \sup_{x\in S_{f}^{\prime}}\bigl|\widehat{m}_{n}(x)-m(x)\bigr|\leq \sup _{x\in S_{f}^{\prime}}\bigl|\widehat{m}_{n}(x)-E\widehat{m}_{n}(x)\bigr|+ \sup_{x\in S_{f}^{\prime}}\bigl|E\widehat{m}_{n}(x)-m(x)\bigr|. $$
(4.12)

By the proof of (3.21) of Shen and Xie [1], we establish that

$$ \bigl|E\widehat{m}_{n}(x)-m(x)\bigr|\leq h^{2}\frac{b}{2}\sum _{1\leq i,j\leq d} \int _{R^{d}}K(v)|v_{i}v_{j}|\,dv\leq C_{0}h^{2}, $$

which implies

$$ \sup_{x\in S_{f}^{\prime}}\bigl|E\widehat{m}_{n}(x)-m(x)\bigr|=O \bigl(h^{2}\bigr). $$
(4.13)

Since \(\widehat{m}_{n}(x)=R_{n}+\widetilde{m}_{n}(x)\),

$$ \sup_{x\in S_{f}^{\prime}}\bigl|\widehat{m}_{n}(x)-E\widehat{m}_{n}(x)\bigr| \leq \sup_{x\in S_{f}^{\prime}}\bigl|\widetilde{m}_{n}(x)-E \widetilde{m}_{n}(x)\bigr|+\sup_{x\in S_{f}^{\prime}}|R_{n}-ER_{n}|. $$
(4.14)

It follows from the proof of (4.8) that

$$ \sup_{x\in S_{f}^{\prime}}|R_{n}-ER_{n}|=O_{p}(a_{n}). $$
(4.15)

Since \(S_{f}^{\prime}\) is a compact set, there exists a \(\xi>0\) such that \(S_{f}^{\prime}\subset B:=\{x:\|x\|\leq\xi\}\). Let \(v_{n}\) be a positive integer. Take an open covering \(\bigcup_{j=1}^{v_{n}^{d}}B_{jn}\) of B, where \(B_{jn}\subset \{x:\|x-x_{jn}\|\leq\frac{\xi}{v_{n}}\}\), \(j=1,2,\ldots,v_{n}^{d}\), and their interiors are disjoint. So it follows that

$$\begin{aligned} &\sup_{x\in S_{f}^{\prime}}\bigl|\widetilde{m}_{n}(x)-E \widetilde{m}_{n}(x)\bigr| \\ &\quad\leq \max_{1\leq j\leq v_{n}^{d}}\sup _{x\in B_{jn}\cap S_{f}^{\prime}}\bigl|\widetilde{m}_{n}(x)-E\widetilde{m}_{n}(x)\bigr| \\ &\quad\leq\max_{1\leq j\leq v_{n}^{d}}\sup_{x\in B_{jn}\cap S_{f}^{\prime}}\bigl| \widetilde{m}_{n}(x)-\widetilde{m}_{n}(x_{jn})\bigr|+ \max_{1\leq j\leq v_{n}^{d}}\bigl|\widetilde{m}_{n}(x_{jn})-E \widetilde{m}_{n}(x_{jn})\bigr| \\ &\qquad{}+\max_{1\leq j\leq v_{n}^{d}}\sup_{x\in B_{jn}\cap S_{f}^{\prime}}\bigl|E \widetilde{m}_{n}(x_{jn})-E\widetilde{m}_{n}(x)\bigr| \\ &\quad:=I_{1}+I_{2}+I_{3}. \end{aligned}$$
(4.16)

By the definition of \(\widetilde{m}_{n}(x)\) in (4.9) and the Lipschitz condition of K,

$$\begin{aligned} \bigl|\widetilde{m}_{n}(x)-\widetilde{m}_{n}(x_{jn})\bigr| \leq& \Bigl(\inf_{S_{f}^{\prime}}f \Bigr)^{-1}\frac{\tau_{n}}{nh^{d}} \sum_{i=1}^{n} \biggl|K\biggl(\frac{x-X_{i}}{h} \biggr)-K\biggl(\frac{x_{jn}-X_{i}}{h}\biggr) \biggr| \\ \leq&\frac{L\tau_{n}}{h^{d+1}\inf_{S_{f}^{\prime}} f}\|x-x_{jn}\|,\quad x\in S_{f}^{\prime}. \end{aligned}$$

Taking \(v_{n}=\lfloor\frac{\tau_{n}}{h^{d+1}a_{n}}\rfloor+1\), we obtain

$$ \sup_{x\in B_{jn}\cap S_{f}^{\prime}}\bigl|\widetilde{m}_{n}(x)- \widetilde{m}_{n}(x_{jn})\bigr|\leq \frac{L\xi}{\inf_{S_{f}^{\prime}}f}a_{n},\quad 1\leq j\leq v_{n}^{d} , $$
(4.17)

and

$$ I_{1}=\max_{1\leq j\leq v_{n}^{d}}\sup_{x\in B_{jn}\cap S_{f}^{\prime}}\bigl| \widetilde{m}_{n}(x)-\widetilde{m}_{n}(x_{jn})\bigr|=O(a_{n}). $$
(4.18)

In view of \(|E\widetilde{m}_{n}(x_{jn})-E\widetilde{m}_{n}(x)|\leq E|\widetilde{m}_{n}(x_{jn})-\widetilde{m}_{n}(x)|\), we have by (4.17)

$$ I_{3}=\max_{1\leq j\leq v_{n}^{d}}\sup_{x\in B_{jn}\cap S_{f}^{\prime}}\bigl|E \widetilde{m}_{n}(x_{jn})-E\widetilde{m}_{n}(x)\bigr| \leq \frac{L\xi}{\inf_{S_{f}^{\prime}}f}a_{n}=O(a_{n}). $$
(4.19)

For \(1\leq i\leq n\) and \(1\leq j\leq v_{n}^{d}\), denote \(\tilde{Z}_{i}(j)=\frac{Y_{i}K_{h}(x_{jn}-X_{i})}{f(X_{i})}I(|Y_{i}|\leq \tau_{n})\). Then similar to the proof of (4.11), we obtain by Lemma 4.2 with \(m=a_{n}^{-1}\tau_{n}^{-1}\) and \(\varepsilon=Mna_{n}\) for n sufficiently large

$$\begin{aligned} P\bigl(|I_{2}|>Ma_{n}\bigr) =&P\Bigl(\max_{1\leq j\leq v_{n}^{d}}\bigl| \widetilde{m}_{n}(x_{jn})-E\widetilde{m}_{n}(x_{jn})\bigr|>Ma_{n} \Bigr) \\ \leq&\sum_{j=1}^{v_{n}^{d}}P \Biggl( \Biggl|\sum _{i=1}^{n} \bigl(\tilde{Z}_{i}(j)-E \tilde{Z}_{i}(j)\bigr) \Biggr|>Mna_{n} \Biggr) \\ \leq&4v_{n}^{d}\exp \biggl\{ -\frac{M^{2}na_{n}^{2}}{16(C_{3}h^{-d}+\frac {1}{3}C_{1}Mh^{-d})} \biggr\} +32v_{n}^{d}\frac{C_{1}\tau_{n}}{Ma_{n}h^{d}}A(a_{n}\tau _{n})^{\beta} \\ =&I_{21}+I_{22}, \end{aligned}$$
(4.20)

where the value of M will be given in (4.22).

In view of \(0<\theta<1\), \(s>2\), \(h=n^{-\theta/d}\), and \(a_{n}=(\frac{\ln n}{nh^{d}})^{1/2}\), one has \(h^{-d(d+1)}=n^{\theta(d+1)}\) and \(a_{n}^{-\frac{sd}{s-1}}=(\ln n)^{-\frac{sd}{2(s-1)}}n^{\frac{sd(1-\theta)}{2(s-1)}}\). Therefore, by \(v_{n}=\lfloor\frac{\tau_{n}}{h^{d+1}a_{n}}\rfloor+1\) and \(\tau_{n}=a_{n}^{-\frac{1}{s-1}}\), we obtain for n sufficiently large

$$\begin{aligned} I_{21} =&4v_{n}^{d}\exp \biggl\{ - \frac{M^{2}na_{n}^{2}}{16(C_{3}h^{-d}+\frac {1}{3}MC_{1}h^{-d})} \biggr\} \\ \leq& C_{4}h^{-d(d+1)}a_{n}^{-\frac{sd}{s-1}}\exp \biggl\{ -\frac{M^{2}\ln n}{16(C_{3}+\frac{1}{3}MC_{1})} \biggr\} \\ \leq&C_{5}(\ln n)^{-\frac{sd}{2(s-1)}}n^{\theta(d+1) +\frac{sd(1-\theta)}{2(s-1)}-\frac {M^{2}}{16(C_{3}+\frac{1}{3}MC_{1})}} =o(1), \end{aligned}$$
(4.21)

where M is sufficiently large such that

$$ \frac{M^{2}}{16(C_{3}+\frac{1}{3}MC_{1})}\geq \theta(d+1) +\frac{sd(1-\theta)}{2(s-1)}. $$
(4.22)

Meanwhile, by (2.11) and \(h=n^{-\theta/d}\), one has for n sufficiently large

$$\begin{aligned} I_{22} =&32v_{n}^{d}\frac{C_{1}\tau_{n}}{Ma_{n}h^{d}}A(a_{n} \tau_{n})^{\beta} \leq\frac{C_{6}}{M} \biggl( \frac{\tau_{n}}{h^{d+1}a_{n}} \biggr)^{d}a_{n}^{\frac{\beta (s-2)-s}{s-1}}h^{-d} =\frac{C_{6}}{M}a_{n}^{\frac{\beta(s-2)-s(d+1)}{s-1}}h^{-d(d+2)} \\ =&\frac{C_{6}}{M}(\ln n)^{\frac{\beta(s-2)-s(d+1)}{2(s-1)}}n^{\frac{\beta(\theta -1)(s-2)+s\theta d+3\theta s+sd+s-2\theta d-4\theta}{2(s-1)}} =o(1), \end{aligned}$$
(4.23)

where we have used the facts that \(s>2\), \(0<\theta<1\),

$$\beta>\max\biggl\{ \frac{s\theta d+3\theta s+sd+s-2\theta d-4\theta}{(1-\theta)(s-2)},\frac{2s-2}{s-2}\biggr\} , $$

and

$$\frac{\beta(\theta-1)(s-2)+s\theta d+3\theta s+sd+s-2\theta d-4\theta}{2(s-1)}< 0. $$

Thus, by (4.20)-(4.23), we establish that

$$ |I_{2}|=O_{p}(a_{n}). $$
(4.24)

Finally, the result of (2.12) follows from (4.12)-(4.16), (4.18), (4.19), and (4.24) immediately. □

References

  1. Shen, J, Xie, Y: Strong consistency of the internal estimator of nonparametric regression with dependent data. Stat. Probab. Lett. 83, 1915-1925 (2013)

  2. Nadaraya, EA: On estimating regression. Theory Probab. Appl. 9, 141-142 (1964)

  3. Watson, GS: Smooth regression analysis. Sankhya, Ser. A 26, 359-372 (1964)

  4. Jones, MC, Davies, SJ, Park, BU: Versions of kernel-type regression estimators. J. Am. Stat. Assoc. 89, 825-832 (1994)

  5. Mack, YP, Müller, HG: Derivative estimation in nonparametric regression with random predictor variable. Sankhya, Ser. A 51, 59-72 (1989)

  6. Linton, O, Nielsen, J: A kernel method of estimating structured nonparametric regression based on marginal integration. Biometrika 82, 93-100 (1995)

  7. Linton, O, Jacho-Chávez, D: On internally corrected and symmetrized kernel estimators for nonparametric regression. Test 19, 166-186 (2010)

  8. Masry, E: Recursive probability density estimation for weakly dependent stationary processes. IEEE Trans. Inf. Theory 32, 254-267 (1986)

  9. Roussas, GG, Tran, LT, Ioannides, DA: Fixed design regression for time series: asymptotic normality. J. Multivar. Anal. 40, 262-291 (1992)

  10. Tran, LT, Roussas, GG, Yakowitz, S, Van, BT: Fixed-design regression for linear time series. Ann. Stat. 24, 975-991 (1996)

  11. Liebscher, E: Strong convergence of sums of α-mixing random variables with applications to density estimation. Stoch. Process. Appl. 65, 69-80 (1996)

  12. Hansen, BE: Uniform convergence rates for kernel estimation with dependent data. Econom. Theory 24, 726-748 (2008)

  13. Andrews, DWK: Nonparametric kernel estimation for semiparametric models. Econom. Theory 11, 560-596 (1995)

  14. Bakouch, HS, Ristić, MM, Sandhya, E, Satheesh, S: Random products and product auto-regression. Filomat 27, 1197-1203 (2013)

  15. Bosq, D: Nonparametric Statistics for Stochastic Processes: Estimation and Prediction, 2nd edn. Lecture Notes in Statistics, vol. 110. Springer, Berlin (1998)

  16. Bosq, D, Blanke, D: Inference and Prediction in Large Dimensions. Wiley, Chichester (2007)

  17. Fan, JQ: Design-adaptive nonparametric regression. J. Am. Stat. Assoc. 87, 998-1004 (1992)

  18. Fan, JQ: Local linear regression smoothers and their minimax efficiency. Ann. Stat. 21, 196-216 (1993)

  19. Fan, JQ, Yao, QW: Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, New York (2003)

  20. Gao, JT, Lu, ZD, Tjostheim, D: Estimation in semiparametric spatial regression. Ann. Stat. 34, 1395-1435 (2006)

  21. Gao, JT, Tjostheim, D, Yin, JY: Estimation in threshold autoregressive models with a stationary and a unit root regime. J. Econom. 172, 1-13 (2013)

  22. Györfi, L, Kohler, M, Krzyżak, A, Walk, H: A Distribution-Free Theory of Nonparametric Regression. Springer, New York (2002)

  23. Ling, NX, Wu, YH: Consistency of modified regression estimation for functional data. Statistics 46, 149-158 (2012)

  24. Ling, NX, Xu, Q: Asymptotic normality of conditional density estimation in the single index model for functional time series data. Stat. Probab. Lett. 82, 2235-2243 (2012)

  25. Ling, NX, Li, ZH, Yang, WZ: Conditional density estimation in the single functional index model for α-mixing functional data. Commun. Stat., Theory Methods 43, 441-454 (2014)

  26. Masry, E: Multivariate local polynomial regression for time series: uniform strong consistency and rates. J. Time Ser. Anal. 17, 571-599 (1996)

  27. Newey, WK: Kernel estimation of partial means and a generalized variance estimator. Econom. Theory 10, 233-253 (1994)

  28. Peligrad, M: Properties of uniform consistency of the kernel estimators of density and of regression functions under dependence conditions. Stoch. Stoch. Rep. 40, 147-168 (1992)

  29. Rosenblatt, M: Remarks on some non-parametric estimates of a density function. Ann. Math. Stat. 27, 832-837 (1956)

  30. Stone, CJ: Consistent nonparametric regression. Ann. Stat. 5, 595-645 (1977)

  31. Hall, P, Heyde, CC: Martingale Limit Theory and Its Application. Academic Press, New York (1980)

  32. Liebscher, E: Estimation of the density and the regression function under mixing conditions. Stat. Decis. 19, 9-26 (2001)

Acknowledgements

The authors are deeply grateful to the editors and the anonymous referees for their careful reading and insightful comments, which helped in improving the earlier version of this paper. This work is supported by the National Natural Science Foundation of China (11171001, 11426032, 11501005), Natural Science Foundation of Anhui Province (1408085QA02, 1508085QA01, 1508085J06, 1608085QA02), the Provincial Natural Science Research Project of Anhui Colleges (KJ2014A010, KJ2014A020, KJ2015A065), Quality Engineering Project of Anhui Province (2015jyxm054), Applied Teaching Model Curriculum of Anhui University (XJYYKC1401, ZLTS2015053) and Doctoral Research Start-up Funds Projects of Anhui University.

Author information

Corresponding author

Correspondence to Wenzhi Yang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Li, X., Yang, W. & Hu, S. Uniform convergence of estimator for nonparametric regression with dependent data. J Inequal Appl 2016, 142 (2016). https://doi.org/10.1186/s13660-016-1087-z
