
Consistency of the Priestley–Chao estimator in nonparametric regression model with widely orthant dependent errors

Abstract

In this paper, we establish the strong consistency and complete consistency of the Priestley–Chao estimator in the nonparametric regression model with widely orthant dependent errors under some general conditions. We also obtain the rates of strong consistency and complete consistency and show that these rates can be arbitrarily close to \(O(n^{-1/3})\) under appropriate conditions. The results obtained in this paper improve or extend the corresponding ones in the literature to the widely orthant dependent setting.

1 Introduction

Consider the following nonparametric regression model:

$$ Y_{i}=f(x_{i})+\varepsilon_{i}, \quad 1\leq i \leq n, $$
(1)

where f is an unknown function defined on the interval \([0,1]\), \(\{ Y_{1},\ldots,Y_{n}\}\) are n observations at the fixed points \(\{ x_{1},\ldots,x_{n}\}\), and \(\{\varepsilon_{i},1\leq i\leq n\}\) are random errors. The design points \(x_{1},\ldots,x_{n}\) are assumed, without loss of generality, to satisfy \(0=x_{0}\leq x_{1}\leq\cdots\leq x_{n-1}\leq x_{n}=1\). Priestley and Chao [1] introduced the following nonparametric kernel estimator of \(f(x)\), which is usually called the Priestley–Chao (P–C) estimator:

$$ f_{n}(x)=\sum_{i=1}^{n}Y_{i} {{x_{i}-x_{i-1}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr), $$
(2)

where \(K(u)\) is a measurable kernel function, and \(\{h_{n}\}\) is a bandwidth sequence satisfying \(0< h_{n}\rightarrow0\) as \(n\rightarrow\infty\).
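To make the definition concrete, the following minimal Python sketch implements estimator (2) on simulated data (illustration only: the Gaussian kernel, the equidistant design, the regression function, and the noise level are our own choices, not part of the paper):

```python
import numpy as np

def priestley_chao(x, xs, ys, h):
    """Priestley-Chao estimate f_n(x) of (2) at a single point x.

    xs: sorted design points x_1 <= ... <= x_n in [0, 1]
    ys: observations Y_i from model (1)
    h : bandwidth h_n
    """
    # Gaussian kernel: bounded, Lipschitz, and integrates to one, as in (A2)
    K = lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
    deltas = np.diff(np.concatenate(([0.0], xs)))  # x_i - x_{i-1}, with x_0 = 0
    return np.sum(ys * deltas / h * K((x - xs) / h))

# simulate model (1): Y_i = f(x_i) + eps_i on the equidistant design x_i = i/n
rng = np.random.default_rng(0)
n = 500
xs = np.arange(1, n + 1) / n
f = lambda t: np.sin(2 * np.pi * t)
ys = f(xs) + rng.normal(scale=0.2, size=n)

est = priestley_chao(0.5, xs, ys, h=n ** (-1 / 3))
print(abs(est - f(0.5)))  # small for a sample of this size
```

The weights \(\tilde{\delta}_{i}h_{n}^{-1}K((x-x_{i})/h_{n})\) act as a Riemann sum approximating \(h_{n}^{-1}\int K((x-u)/h_{n})f(u)\,du\), which is exactly the quantity controlled in the bias analysis below.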

This estimator can capture the shape of the true curve better than many polynomial regression estimators, and many limit properties of the P–C estimator have therefore been established. For example, Priestley and Chao [1] established the weak consistency of the estimator with i.i.d. errors; Benedetti [2] further studied the strong convergence and asymptotic normality, also with i.i.d. errors; Yang and Wang [3] obtained the strong consistency and uniform strong consistency with negatively associated (NA) errors under some weaker assumptions, which improves and extends the corresponding results of Benedetti [2]; Yang [4] provided the Berry–Esseen bound of estimator (2) for NA random variables; Wu et al. [5] investigated the rates of strong consistency and complete consistency for extended negatively dependent random errors; and so on.

It is well known that the independence assumption is not always reasonable in statistical applications, which is why various dependence structures have been introduced over the past decades. Beyond the mixing structures, negative-dependence structures have attracted more and more attention in recent years. Many limit theorems have been successfully established for such structures, including the negatively associated (NA), negatively superadditive-dependent (NSD), negatively orthant dependent (NOD), and extended negatively dependent (END) structures. It has been pointed out in the literature that NA implies NSD, NSD implies NOD, and NOD implies END, and that the reverse implications are generally not true. For more details, we refer the reader to Joag-Dev and Proschan [6], Hu [7], Lehmann [8], and Liu [9].

In this work, we further study the consistency properties of estimator (2) under a much more general dependent structure, that is, a widely orthant dependent structure. The concept of widely orthant dependent random variables was introduced by Wang et al. [10] as follows.

Definition 1.1

A finite collection of random variables \((X_{1},X_{2},\ldots,X_{n})\) is said to be widely upper orthant dependent (WUOD) if there exists a finite real number \(g_{U}(n)\) such that, for all finite real numbers \(x_{i}\), \(1\leq i\leq n\),

$$ P(X_{1}>x_{1},X_{2}>x_{2}, \ldots,X_{n}>x_{n})\leq g_{U}(n)\prod _{i=1}^{n}P(X_{i}>x_{i}). $$

A finite collection of random variables \((X_{1},X_{2},\ldots,X_{n})\) is said to be widely lower orthant dependent (WLOD) if there exists a finite real number \(g_{L}(n)\) such that, for all finite real numbers \(x_{i}\), \(1\leq i\leq n\),

$$ P(X_{1}\leq x_{1}, X_{2}\leq x_{2}, \ldots,X_{n}\leq x_{n})\leq g_{L}(n)\prod _{i=1}^{n}P(X_{i}\leq x_{i}). $$

If \((X_{1},X_{2},\ldots,X_{n})\) is both WUOD and WLOD, then we say that \((X_{1},X_{2},\ldots,X_{n})\) is widely orthant dependent random variables (WOD), and \(g_{U}(n)\) and \(g_{L}(n)\) are called dominating coefficients.

An array \(\{X_{ni}, 1\leq i\leq n, n\geq1\}\) of random variables is said to be rowwise WOD random variables if for every \(n\geq1\), \(\{X_{ni}, 1\leq i\leq n\}\) are WOD random variables.

It is easily seen that \(g_{U}(n)\geq1\) and \(g_{L}(n)\geq1\). If \(g_{U}(n)=g_{L}(n)=M\) for all \(n\geq1\), where \(M\geq1\) is a constant, then the WOD structure reduces to the END structure. As pointed out before, END contains NA, NSD, and NOD as particular cases. Wang et al. [10] also presented some examples showing that WOD does not imply END. Therefore WOD is a rather weak and general dependence structure, and it is of great interest to study its limiting properties.
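For instance, an independent sequence is WUOD with dominating coefficient \(g_{U}(n)=1\), since the inequality in Definition 1.1 then holds with equality. The following Monte Carlo sketch, our own illustration, checks this numerically for three independent standard normal coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 3, 200_000
X = rng.normal(size=(reps, n))    # independent columns => WUOD with g_U(n) = 1
x = np.array([0.3, -0.5, 1.0])    # thresholds x_1, x_2, x_3

joint = np.mean(np.all(X > x, axis=1))      # P(X_1 > x_1, ..., X_n > x_n)
product = np.prod(np.mean(X > x, axis=0))   # prod_i P(X_i > x_i)

# up to Monte Carlo error the two sides coincide, so the WUOD bound
# P(X_1 > x_1, ..., X_n > x_n) <= g_U(n) * prod_i P(X_i > x_i) holds with g_U(n) = 1
print(joint, product)
```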

Since the concept of WOD random variables was introduced, many probabilistic limit results and applications have been established. For example, Wang et al. [10] obtained uniform asymptotic estimates of the finite-time ruin probability with WOD claim sizes; Wang and Cheng [11] investigated the basic renewal theorems and complete convergence for random walks with WOD increments; Liu et al. [12] and Chen et al. [13] improved and extended the preceding results; Wang et al. [14] gave a result on uniform asymptotics of the finite-time ruin probability for all times with WLOD interoccurrence times; Shen [15] obtained a Bernstein-type inequality for WOD random variables; Qiu and Chen [16] obtained the complete convergence and complete moment convergence for weighted sums of WOD random variables under some general conditions; Wang et al. [17] studied the complete convergence for arrays of WOD random variables and applied it to the complete consistency of the weighted estimator in a nonparametric regression model based on WOD errors; Yang et al. [18] presented the Bahadur representation of sample quantiles for WOD random variables; Wang and Hu [19] established the weak consistency, strong consistency, complete consistency, and the corresponding convergence rates for the nearest-neighbor estimator of the density function based on WOD samples; Wu et al. [20] investigated the complete moment convergence for arrays of WOD random variables; and so on. We point out that in most of the papers cited above, the moment conditions are related to the dominating coefficients: the faster the dominating coefficients tend to infinity, the stronger the moment conditions need to be.

In this paper, we further investigate the consistency results of estimator (2) in the nonparametric regression model based on WOD errors. We establish the strong consistency, complete consistency, the rate of strong consistency, and the rate of complete consistency under some general conditions. These results improve or extend the corresponding ones of Yang and Wang [3] and Wu et al. [5]. It is worth pointing out that in our main results the conditions on the dominating coefficients are very general and the moment conditions are unrelated to the dominating coefficients.

Throughout this paper, the symbols \(C,c_{1},c_{2},\ldots\) denote positive constants whose values may vary from place to place. We write \(\log x=\ln\max\{ e,x\}\), and \(I(A)\) stands for the indicator function of a set A. Without loss of generality, we assume that \(f(x)=0\) for \(x \notin[0,1]\), and we set \(g(n)=\max\{g_{U}(n),g_{L}(n)\}\).

The rest of the paper is organized as follows. We present the main results in Sect. 2, state some preliminary lemmas in Sect. 3, and give the proofs of the main results in Sect. 4.

2 Main results

In this section, we first introduce some notation. Let \(\tilde{\delta }_{i}=x_{i}-x_{i-1}\) for \(1\leq i\leq n\) and \(\delta_{n}=\max_{1\leq i\leq n}\tilde{\delta}_{i}\). The following assumptions are needed for the main results.

(\(A_{1}\)):

\(f(x)\) satisfies the Lipschitz condition of order α (>0);

(\(A_{2}\)):

(i) \(K(u)\) satisfies the Lipschitz condition of order β (>0); (ii) \(K(u)\) is bounded in \(\mathbb{R}^{1}\); (iii) \(\int_{-\infty}^{+\infty}K(u)\,du=1\); (iv) \(\int_{-\infty}^{+\infty }|K(u)|\,du<\infty\); (iv′) \(\int_{-\infty}^{+\infty}|u|^{\alpha }|K(u)|\,du<\infty\);

(\(A_{3}\)):

\(h_{n}\rightarrow0\), \(\delta_{n}\rightarrow0\), and \(h_{n}^{-1}\{(\delta_{n}/h_{n})^{\beta}+\delta_{n}^{\alpha}\} \rightarrow0\) as \(n\rightarrow\infty\);

(\(A_{4}\)):

There exist \(r>1\) and \(s>1\) such that \(\delta _{n}/h_{n}=O(n^{-1/r}(\log n)^{-s})\).

Remark 2.1

(\(A_{1}\))–(\(A_{4}\)) are the basic assumptions for the Priestley–Chao estimator. Yang and Wang [3] used conditions (\(A_{1}\)), (\(A_{2}\))(i)–(iv), and (\(A_{3}\)) and stated that they are weaker than those in Priestley and Chao [1] and Benedetti [2]; Wu et al. [5] adopted condition (\(A_{2}\))(iv′), which is stronger than (\(A_{2}\))(iv), to establish the rates of strong consistency and complete consistency; (\(A_{4}\)) has also been used by Wu et al. [5] and is a little weaker than the condition \(\delta_{n}/h_{n}=O(n^{-1/r}(\log n)^{-1-\rho})\) for some \(\rho>1\) in Yang and Wang [3].
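To see how (\(A_{4}\)) interacts with the choices \(\delta_{n}=O(n^{-1})\) and \(h_{n}=n^{-l}\) used later in this section, note that \(\delta_{n}/h_{n}=O(n^{l-1})\), which is dominated by \(n^{-1/r}(\log n)^{-s}\) whenever \(l<1-1/r\). The following small numerical sketch illustrates this (the particular values of r, s, and l are our own choices):

```python
import numpy as np

r, s, l = 2.0, 1.5, 0.2   # any l < 1 - 1/r = 0.5 works; these values are illustrative
n = np.array([1e6, 1e7, 1e8])

ratio = n ** (l - 1)                       # delta_n / h_n for delta_n = 1/n, h_n = n^{-l}
bound = n ** (-1 / r) * np.log(n) ** (-s)  # the (A4) envelope n^{-1/r} (log n)^{-s}

# ratio / bound = n^{l - 1 + 1/r} (log n)^s -> 0, so (A4) holds for large n
print(ratio / bound)  # decreasing and below 1 on this range
```

Because of the slowly varying factor \((\log n)^{s}\), the domination only becomes visible for fairly large n, which is why the sketch uses \(n\geq10^{6}\).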

Based on these assumptions, we now present our main results. The following two theorems state the strong consistency and complete consistency for estimator (2).

Theorem 2.1

Suppose (\(A_{1}\)), (\(A_{2}\))(i)–(iv), (\(A_{3}\)), and (\(A_{4}\)) hold. Let \(\{\varepsilon _{n},n\geq1\}\) be a sequence of WOD random errors with \(E\varepsilon_{n}=0\) and \(\sup_{n\geq1}E|\varepsilon _{n}|^{r}<\infty\). If \(g(n)=O(e^{(\log n)^{a}})\) for some \(0\leq a< s\), then for any \(x\in(0,1)\),

$$ f_{n}(x)-f(x)\rightarrow0\quad \textit{a.s.} $$
(3)

Theorem 2.2

Suppose (\(A_{1}\)), (\(A_{2}\))(i)–(iv), (\(A_{3}\)), and (\(A_{4}\)) hold. Let \(\{\varepsilon _{n},n\geq1\}\) be a sequence of WOD random errors with \(E\varepsilon_{n}=0\) and \(\sup_{n\geq1}E|\varepsilon _{n}|^{r+1}<\infty\). If \(g(n)=O(e^{(\log n)^{a}})\) for some \(0\leq a< s\), then for any \(x\in(0,1)\),

$$ f_{n}(x)-f(x)\rightarrow0\quad \textit{completely}. $$
(4)

Remark 2.2

Yang and Wang [3] obtained the results of strong consistency for the estimator \(f_{n}(x)\) with NA errors under the assumption \(\delta_{n}/h_{n}=O(n^{-1/r}(\log n)^{-1-\rho})\) for some \(\rho>1\). Note that WOD contains NA as a particular case and assumption (\(A_{4}\)) is slightly weaker than \(\delta_{n}/h_{n}=O(n^{-1/r}(\log n)^{-1-\rho})\). Therefore Theorem 2.1 improves and extends the result of Yang and Wang [3] to a much more general case. Moreover, under some stronger moment condition, we also obtain the complete consistency for estimator (2) in Theorem 2.2. It is worth pointing out that the restriction on the dominating coefficients is very general. For example, if \(1< a< s\), then \(n^{\tau}=o(e^{(\log n)^{a}})\) for any \(\tau \geq0\). Moreover, the moment conditions are unrelated to the dominating coefficients.

From Theorems 2.1 and 2.2 we easily obtain the following two corollaries.

Corollary 2.1

Suppose (\(A_{1}\)) and (\(A_{2}\))(i)–(iv) hold. Let \(\{\varepsilon_{n},n\geq1\}\) be a sequence of WOD random errors with \(E\varepsilon _{n}=0\) and \(\sup_{n\geq1}E|\varepsilon_{n}|^{r}<\infty\). Let \(\delta_{n}=O(n^{-1})\) and \(h_{n}=n^{-l}\) for some \(0< l<\min\{\alpha , \beta/(1+\beta), 1-1/r\}\). If \(g(n)=O(e^{(\log n)^{a}})\) for some \(0\leq a< s\), then for any \(x\in(0,1)\), (3) holds.

Corollary 2.2

Suppose (\(A_{1}\)) and (\(A_{2}\))(i)–(iv) hold. Let \(\{\varepsilon_{n},n\geq1\}\) be a sequence of WOD random errors with \(E\varepsilon _{n}=0\) and \(\sup_{n\geq1}E|\varepsilon_{n}|^{r+1}<\infty\). Let \(\delta_{n}=O(n^{-1})\) and \(h_{n}=n^{-l}\) for some \(0< l<\min\{\alpha , \beta/(1+\beta), 1-1/r\}\). If \(g(n)=O(e^{(\log n)^{a}})\) for some \(0\leq a< s\), then for any \(x\in(0,1)\), (4) holds.

Now we present the rates of strong consistency and complete consistency.

Theorem 2.3

Suppose (\(A_{1}\)), (\(A_{2}\))(i), (ii), (iii), (iv′), (\(A_{3}\)), and (\(A_{4}\)) hold. Let \(\{\varepsilon_{n},n\geq1\}\) be a sequence of WOD random errors with \(E\varepsilon_{n}=0\) and \(\sup_{n\geq 1}E|\varepsilon_{n}|^{2r}<\infty\). If \(g(n)=O(e^{(\log n)^{a}})\) for some \(0\leq a< s\), then for any \(x\in(0,1)\),

$$ \bigl\vert f_{n}(x)-f(x) \bigr\vert =O \bigl(h_{n}^{-1} \bigl\{ (\delta _{n}/h_{n})^{\beta}+ \delta_{n}^{\alpha}\bigr\} +h_{n}^{\alpha} \bigr)+o \bigl(n^{-1/2r}\bigr)\quad \textit{a.s.} $$

Theorem 2.4

Suppose (\(A_{1}\)), (\(A_{2}\))(i), (ii), (iii), (iv′), (\(A_{3}\)), and (\(A_{4}\)) hold. Let \(\{\varepsilon_{n},n\geq1\}\) be a sequence of WOD random errors with \(E\varepsilon_{n}=0\) and \(\sup_{n\geq 1}E|\varepsilon_{n}|^{2r+2}<\infty\). If \(g(n)=O(e^{(\log n)^{a}})\) for some \(0\leq a< s\), then for any \(x\in(0,1)\),

$$ \bigl\vert f_{n}(x)-f(x) \bigr\vert =O \bigl(h_{n}^{-1} \bigl\{ (\delta _{n}/h_{n})^{\beta}+ \delta_{n}^{\alpha}\bigr\} +h_{n}^{\alpha} \bigr)+o \bigl(n^{-1/2r}\bigr)\quad \textit{completely}. $$

Remark 2.3

If \(a=0\) (under which the WOD errors reduce to END errors) in Theorem 2.3, then the result is equivalent to that established by Wu et al. [5]. Therefore Theorem 2.3 extends the corresponding result of Wu et al. [5] from END errors to WOD errors. For Theorem 2.4, if \(a=0\), then the rate is the same as that of Wu et al. [5]. Moreover, to avoid the influence of the dominating coefficients on the moment condition, the method used in proving Theorem 2.4 is somewhat different from that of Wu et al. [5]. Therefore Theorem 2.4 extends the corresponding result of Wu et al. [5] from END random errors to WOD random errors.

By Theorems 2.3 and 2.4 we can also obtain the following two corollaries on the rates of strong consistency and complete consistency.

Corollary 2.3

Suppose (\(A_{1}\)) and (\(A_{2}\))(i), (ii), (iii), (iv′) hold with \(\alpha=\beta=1\). Let \(\delta_{n}=O(n^{-1})\), \(r>3/2\), and \(h_{n}=n^{-l}\) for some \(\frac {1}{2r}< l<\frac{1}{2}-\frac{1}{4r}\). Let \(\{\varepsilon_{n},n\geq1\} \) be a sequence of WOD random errors with \(E\varepsilon_{n}=0\) and \(\sup_{n\geq1}E|\varepsilon _{n}|^{2r}<\infty\). If \(g(n)=O(e^{(\log n)^{a}})\) for some \(0\leq a< s\), then for any \(x\in(0,1)\),

$$ \bigl\vert f_{n}(x)-f(x) \bigr\vert =o\bigl(n^{-1/2r}\bigr) \quad \textit{a.s.} $$

Corollary 2.4

Suppose (\(A_{1}\)) and (\(A_{2}\))(i), (ii), (iii), (iv′) hold with \(\alpha=\beta=1\). Let \(\delta_{n}=O(n^{-1})\), \(r>3/2\), and \(h_{n}=n^{-l}\) for some \(\frac {1}{2r}< l<\frac{1}{2}-\frac{1}{4r}\). Let \(\{\varepsilon_{n},n\geq1\} \) be a sequence of WOD random errors with \(E\varepsilon_{n}=0\) and \(\sup_{n\geq1}E|\varepsilon _{n}|^{2r+2}<\infty\). If \(g(n)=O(e^{(\log n)^{a}})\) for some \(0\leq a< s\), then for any \(x\in(0,1)\),

$$ \bigl\vert f_{n}(x)-f(x) \bigr\vert =o\bigl(n^{-1/2r}\bigr) \quad \textit{completely}. $$

Remark 2.4

In Corollaries 2.3 and 2.4, if r is close to 3/2, then the rates of strong consistency and complete consistency, respectively, can be made arbitrarily close to \(O(n^{-1/3})\).
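A brief simulation sketch illustrating the consistency at a fixed interior point (independent Gaussian errors, which form a WOD sequence with \(g(n)\equiv1\); the kernel, regression function, and bandwidth exponent are our own choices, not prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def pc_estimate(x, n, l=0.45):
    """One Priestley-Chao estimate at x from a simulated sample of size n."""
    xs = np.arange(1, n + 1) / n                 # equidistant design, delta_n = 1/n
    ys = np.sin(2 * np.pi * xs) + rng.normal(scale=0.2, size=n)
    h = n ** (-l)                                # bandwidth h_n = n^{-l}
    K = lambda u: np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
    return np.sum(ys * (1.0 / n) / h * K((x - xs) / h))

truth = np.sin(2 * np.pi * 0.3)
errs = [np.mean([abs(pc_estimate(0.3, n) - truth) for _ in range(50)])
        for n in (200, 2000, 20000)]
print(errs)  # the mean absolute error decreases as n grows
```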

3 Preliminary lemmas

In this section, we state some lemmas that will be used in the proofs of our main results. The first one is a basic property of WOD random variables; see Wang et al. [10].

Lemma 3.1

Let random variables \(\{X_{n}, n\geq1\}\) be WLOD (WUOD) with dominating coefficients \(g_{L}(n)\), \(n\geq1\) (\(g_{U}(n)\), \(n\geq1\)). If \(\{f_{n}(\cdot),n\geq1\}\) are all nondecreasing, then \(\{f_{n}(X_{n}), n\geq1\}\) are still WLOD (WUOD) with dominating coefficients \(g_{L}(n)\), \(n\geq1\) (\(g_{U}(n)\), \(n\geq1\)).

If \(\{f_{n}(\cdot),n\geq1\}\) are all nonincreasing, then \(\{f_{n}(X_{n}), n\geq1\}\) are WUOD (WLOD) with dominating coefficients \(g_{L}(n)\), \(n\geq1\) (\(g_{U}(n)\), \(n\geq1\)).

The following Bernstein-type inequality is important in proving our main results; see Shen [15].

Lemma 3.2

Let \(\{\varepsilon_{n}, n\geq1\}\) be a sequence of WOD random variables with \(E\varepsilon_{n}=0\) and \(|\varepsilon_{n}|\leq b\) a.s. for each \(n\geq 1\), where b is a positive constant. Denote \(B_{n}^{2}= \sum_{i=1}^{n}E\varepsilon_{i}^{2}\) for each \(n\geq 1\). Then for every \(\epsilon>0\),

$$ P \Biggl( \Biggl\vert \sum_{i=1}^{n} \varepsilon_{i} \Biggr\vert \geq \epsilon \Biggr)\leq2g(n) \exp \biggl\{ -\frac{\epsilon ^{2}}{2B_{n}^{2}+\frac{2}{3}b\epsilon} \biggr\} . $$
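As a numerical sanity check of Lemma 3.2: independent bounded variables form a WOD sequence with \(g(n)\equiv1\), so the empirical tail probability should stay below the Bernstein-type bound. A sketch under these assumptions (the uniform distribution and the constants are our own choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps, b, g_n = 50, 100_000, 1.0, 1.0   # independent => dominating coefficient g(n) = 1
eps = rng.uniform(-1, 1, size=(reps, n))  # E eps_i = 0 and |eps_i| <= b = 1 a.s.

B2 = n * (1.0 / 3.0)                      # B_n^2 = sum_i E eps_i^2; E U(-1,1)^2 = 1/3
t = 10.0                                  # the epsilon in Lemma 3.2
empirical = np.mean(np.abs(eps.sum(axis=1)) >= t)
bernstein = 2 * g_n * np.exp(-t ** 2 / (2 * B2 + 2 * b * t / 3))

print(empirical, bernstein)  # the empirical tail lies below the bound
```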

The next lemma has been proved by Wu et al. [5].

Lemma 3.3

Suppose (\(A_{1}\)), (\(A_{2}\))(i), (ii), (iii), (iv′), and (\(A_{3}\)) hold. Then for any \(x\in(0,1)\),

$$ \bigl\vert Ef_{n}(x)-f(x) \bigr\vert =O \bigl(h_{n}^{-1} \bigl\{ (\delta _{n}/h_{n})^{\beta}+ \delta_{n}^{\alpha}\bigr\} +h_{n}^{\alpha} \bigr). $$

Lemma 3.4

Suppose (\(A_{1}\)), (\(A_{2}\))(i)–(iv), and (\(A_{3}\)) hold. Then for any \(x\in(0,1)\),

$$ \lim_{n\rightarrow\infty}Ef_{n}(x)=f(x). $$

Proof

Noting that \(f(x)=0\) for \(x\notin[0,1]\), we have

$$\begin{aligned} \bigl\vert Ef_{n}(x)-f(x) \bigr\vert =& \Biggl\vert \sum _{i=1}^{n}{{\tilde {\delta}_{i}}\over {h_{n}}} K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr)f(x_{i})-f(x) \Biggr\vert \\ \leq& \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}} K \biggl({{x-x_{i}}\over {h_{n}}} \biggr)f(x_{i})-h_{n}^{-1} \int _{0}^{1}K \biggl({{x-u}\over {h_{n}}} \biggr)f(u)\,du \Biggr\vert \\ &{}+ \biggl\vert h_{n}^{-1} \int_{-\infty}^{+\infty}K \biggl({{x-u}\over {h_{n}}} \biggr)f(u)\,du- \int_{-\infty}^{+\infty}K(u)f(x)\,du \biggr\vert \\ \triangleq&I_{1}+I_{2}. \end{aligned}$$

Wu et al. [5] have already proved that \(I_{1}\leq Ch_{n}^{-1}\{ (\delta_{n}/h_{n})^{\beta}+\delta_{n}^{\alpha}\}\rightarrow0\). Therefore we only need to show that \(I_{2}\rightarrow0\). Noting that \(f(\cdot)\) is bounded and \(|K(\cdot)|\) is integrable, we have by the integral transformation that

$$\begin{aligned} I_{2} =& \biggl\vert \int_{-\infty}^{+\infty }\bigl(f(x-h_{n}u)-f(x) \bigr)K(u)\,du \biggr\vert \\ \leq& \int_{-\infty}^{+\infty} \bigl\vert f(x-h_{n}u)-f(x) \bigr\vert \bigl\vert K(u) \bigr\vert \,du \\ =& \int_{ \vert u \vert \leq h_{n}^{-1/2}} \bigl\vert f(x-h_{n}u)-f(x) \bigr\vert \bigl\vert K(u) \bigr\vert \,du+ \int _{ \vert u \vert >h_{n}^{-1/2}} \bigl\vert f(x-h_{n}u)-f(x) \bigr\vert \bigl\vert K(u) \bigr\vert \,du \\ \leq&Ch_{n}^{\alpha/2} \int_{ \vert u \vert \leq h_{n}^{-1/2}} \bigl\vert K(u) \bigr\vert \,du+C \int _{ \vert u \vert >h_{n}^{-1/2}} \bigl\vert K(u) \bigr\vert \,du \\ \rightarrow&0\quad \text{as }n\rightarrow\infty. \end{aligned}$$

The proof of the lemma is complete. □

The following lemma can be found in Yang and Wang [3] or Wu et al. [5].

Lemma 3.5

Suppose (\(A_{2}\))(i), (ii), (iii), (iv), and (\(A_{3}\)) hold. Then for any \(x\in(0,1)\),

$$ \lim_{n\rightarrow\infty}\sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}} \over {h_{n}}} \biggr) \biggr\vert = \int_{-\infty}^{+\infty} \bigl\vert K(u) \bigr\vert \,du. $$

4 Proofs of main results

In this section, we present the proofs of the main results stated in Sect. 2.

Proof of Theorem 2.1

It is easy to see that, for any \(x\in(0,1)\),

$$ \bigl\vert f_{n}(x)-f(x) \bigr\vert \leq \bigl\vert Ef_{n}(x)-f(x) \bigr\vert + \bigl\vert f_{n}(x)-Ef_{n}(x) \bigr\vert . $$

Therefore by Lemma 3.4 we only need to show

$$ f_{n}(x)-Ef_{n}(x)=\sum_{i=1}^{n} {{\tilde{\delta }_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \varepsilon_{i}\rightarrow0\quad \mbox{a.s.} $$
(5)

For \(1\leq i\leq n\), \(n\geq1\), denote

$$\begin{aligned}& \varepsilon_{ni}(1)=-n^{1/r}I\bigl(\varepsilon _{i}< -n^{1/r}\bigr)+\varepsilon_{i}I\bigl(| \varepsilon_{i}|\leq n^{1/r}\bigr)+n^{1/r}I\bigl( \varepsilon_{i}>n^{1/r}\bigr), \\& \varepsilon_{ni}(2)=\bigl(\varepsilon_{i}+n^{1/r} \bigr)I\bigl(\varepsilon_{i}< -n^{1/r}\bigr) +\bigl( \varepsilon_{i}-n^{1/r}\bigr)I\bigl(\varepsilon_{i}>n^{1/r} \bigr). \end{aligned}$$

By Lemma 3.1 we can easily check that \(\{\varepsilon _{ni}(1),1\leq i\leq n\}_{n\geq1}\) are still WOD random variables, and clearly \(\varepsilon_{ni}(1)+\varepsilon _{ni}(2)=\varepsilon_{i}\). Thus, to show (5), it suffices to prove that
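The truncation at level \(n^{1/r}\) splits each error into a bounded part \(\varepsilon_{ni}(1)\) and a tail part \(\varepsilon_{ni}(2)\); in code, \(\varepsilon_{ni}(1)\) is simply a clipping of \(\varepsilon_{i}\). A quick numerical sketch (the t-distributed errors are our own choice) confirming the three properties used in the proof:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 100, 2.0
eps = rng.standard_t(df=5, size=n)   # heavy-tailed errors, for illustration
c = n ** (1 / r)                     # truncation level n^{1/r}

eps1 = np.clip(eps, -c, c)           # eps_{ni}(1): the bounded part
eps2 = eps - eps1                    # eps_{ni}(2): the tail part

assert np.allclose(eps1 + eps2, eps)          # eps_{ni}(1) + eps_{ni}(2) = eps_i
assert np.all(np.abs(eps1) <= c)              # |eps_{ni}(1)| <= n^{1/r}
assert np.all(eps2[np.abs(eps) <= c] == 0.0)  # eps_{ni}(2) = 0 unless |eps_i| > n^{1/r}
print("truncation decomposition verified")
```

Since clipping is a nondecreasing transformation, Lemma 3.1 guarantees that the truncated sequence remains WOD with the same dominating coefficients.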

$$\begin{aligned}& \sum_{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(1)-E \varepsilon_{ni}(1)\bigr]\rightarrow0 \quad \mbox{a.s.}, \end{aligned}$$
(6)
$$\begin{aligned}& \sum_{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) E\varepsilon_{ni}(2) \rightarrow0, \end{aligned}$$
(7)

and

$$ \sum_{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \varepsilon_{ni}(2)\rightarrow0 \quad \mbox{a.s.} $$
(8)

We first utilize the Bernstein-type inequality for WOD random variables to prove (6). For all n large enough, we have

$$ \max_{1\leq i\leq n} \biggl\vert {{\tilde{\delta }_{i}}\over {h_{n}}}K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(1)-E \varepsilon_{ni}(1)\bigr] \biggr\vert \leq c_{1}(\log n)^{-s} $$

and, by Lemma 3.5 and the boundedness of \(K(\cdot)\),

$$\begin{aligned}& \sum_{i=1}^{n}E \biggl\vert {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \bigl[ \varepsilon_{ni}(1)-E\varepsilon_{ni}(1)\bigr] \biggr\vert ^{2} \\& \quad \leq C{{\delta_{n}}\over {h_{n}}}\sum_{i=1}^{n} {{\tilde{\delta }_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E\bigl(\varepsilon_{ni}(1)\bigr)^{2} \\& \quad \leq C(\log n)^{-s}\sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E \bigl\vert \varepsilon_{ni}(1) \bigr\vert \\& \quad \leq C(\log n)^{-s}\sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert \sup_{i\geq1}E|\varepsilon_{i}|\leq c_{2}(\log n)^{-s}. \end{aligned}$$

Since \(s>1\), we have that, for any \(\varrho>0\) and \(0\leq a< s\), \((\log n)^{a}-\varrho(\log n)^{s}\leq-\varrho(\log n)^{s}/2\leq-2\log n\) for all n large enough. Hence by Lemma 3.2 we obtain that, for any \(\epsilon>0\),

$$\begin{aligned}& \sum_{n=1}^{\infty}P \Biggl( \Biggl\vert \sum _{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(1)-E \varepsilon_{ni}(1)\bigr] \Biggr\vert \geq\epsilon \Biggr) \\& \quad \leq 2\sum_{n=1}^{\infty}g(n)\exp \biggl\{ -\frac{\epsilon ^{2}}{2c_{2}(\log n)^{-s}+\frac{2}{3}c_{1}\epsilon(\log n)^{-s}} \biggr\} \\& \quad \leq C\sum_{n=1}^{\infty}\exp \biggl\{ ( \log n)^{a}-\frac{\epsilon ^{2}}{2c_{2}+\frac{2}{3}c_{1}\epsilon}(\log n)^{s} \biggr\} \\& \quad \leq C\sum_{n=1}^{\infty}n^{-2}< \infty. \end{aligned}$$
(9)

By (9) and the Borel–Cantelli lemma we get (6).

Next, we deal with (7). By Lemma 3.5 it follows that

$$\begin{aligned} \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) E \varepsilon_{ni}(2) \Biggr\vert \leq&\sum _{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E \bigl\vert \varepsilon_{ni}(2) \bigr\vert \\ \leq&\sum_{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E| \varepsilon_{i}|I\bigl(|\varepsilon_{i}|>n^{1/r} \bigr) \\ \leq&n^{(1-r)/r}\sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E|\varepsilon_{i}|^{r}I\bigl(| \varepsilon_{i}|>n^{1/r}\bigr) \\ \leq&n^{(1-r)/r}\sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert \sup_{i\geq1}E|\varepsilon_{i}|^{r} \\ \leq&Cn^{(1-r)/r} \rightarrow0\quad \text{as }n\rightarrow\infty. \end{aligned}$$

Finally, we apply Kronecker’s lemma to prove (8). Note that

$$ \sum_{i=1}^{\infty}i^{-1/r}(\log i)^{-s}E|\varepsilon_{i}|I\bigl(|\varepsilon_{i}|>i^{1/r} \bigr)\leq \sum_{i=1}^{\infty}i^{-1}( \log i)^{-s}E|\varepsilon _{i}|^{r}I\bigl(| \varepsilon_{i}|>i^{1/r}\bigr)< \infty $$

and thus

$$\sum_{i=1}^{\infty}i^{-1/r}(\log i)^{-s}|\varepsilon _{i}|I\bigl(|\varepsilon_{i}|>i^{1/r} \bigr)< \infty\quad \mbox{a.s.} $$

Hence, by Kronecker’s lemma we obtain that

$$ n^{-1/r}(\log n)^{-s}\sum_{i=1}^{n}| \varepsilon _{i}|I\bigl(|\varepsilon_{i}|>i^{1/r} \bigr)\rightarrow0 \quad \mbox{a.s.} $$

Since \(K(\cdot)\) is bounded, we obtain that

$$\begin{aligned} \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \varepsilon_{ni}(2) \Biggr\vert \leq&C{{\delta_{n}}\over {h_{n}}}\sum _{i=1}^{n}|\varepsilon_{i}|I \bigl(|\varepsilon_{i}|>n^{1/r}\bigr) \\ \leq&Cn^{-1/r}(\log n)^{-s}\sum_{i=1}^{n}| \varepsilon _{i}|I\bigl(|\varepsilon_{i}|>i^{1/r} \bigr)\rightarrow0 \quad \mbox{a.s.}, \end{aligned}$$

which implies (8). The proof of Theorem 2.1 is complete. □

Proof of Theorem 2.2

We use the same notation as in the proof of Theorem 2.1. It is easy to see that we only need to prove that

$$ \sum_{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(1)-E \varepsilon_{ni}(1)\bigr]\rightarrow0 \quad \mbox{completely} $$
(10)

and

$$ \sum_{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(2)-E \varepsilon_{ni}(2)\bigr]\rightarrow0\quad \mbox{completely}. $$
(11)

Obviously, (10) follows directly from (9), so we only need to prove (11). Recall that \(\sup_{i\geq1}E|\varepsilon_{i}|^{r+1}<\infty\). By Markov’s inequality and Lemma 3.5 we have

$$\begin{aligned}& \sum_{n=1}^{\infty}P \Biggl( \Biggl\vert \sum _{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(2)-E \varepsilon_{ni}(2)\bigr] \Biggr\vert >\epsilon \Biggr) \\& \quad \leq \epsilon^{-1}\sum_{n=1}^{\infty}E \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \bigl[ \varepsilon_{ni}(2)-E\varepsilon_{ni}(2)\bigr] \Biggr\vert \\& \quad \leq C\sum_{n=1}^{\infty}\sum _{i=1}^{n}{{\tilde{\delta }_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E|\varepsilon_{i}|I \bigl(|\varepsilon_{i}|>n^{1/r}\bigr) \\& \quad \leq C\sum_{n=1}^{\infty}\sum _{i=1}^{n}{{\tilde{\delta }_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert \sup_{i\geq1}E| \varepsilon_{i}|I\bigl(|\varepsilon_{i}|>n^{1/r} \bigr) \\& \quad \leq C\sum_{n=1}^{\infty}\sup _{i\geq1}\sum_{j=n}^{\infty }E| \varepsilon_{i}|I\bigl(j< |\varepsilon_{i}|^{r} \leq j+1\bigr) \\& \quad \leq C\sup_{i\geq1}\sum_{j=1}^{\infty}jE| \varepsilon _{i}|I\bigl(j< |\varepsilon_{i}|^{r} \leq j+1\bigr) \\& \quad \leq C\sup_{i\geq1}\sum_{j=1}^{\infty}E| \varepsilon _{i}|^{r+1}I\bigl(j< |\varepsilon_{i}|^{r} \leq j+1\bigr)\leq C\sup_{i\geq 1}E|\varepsilon_{i}|^{r+1}< \infty. \end{aligned}$$

Hence (11) is proved, and thus the proof of Theorem 2.2 is complete. □

Proof of Theorem 2.3

Recall that for any \(x\in(0,1)\),

$$ \bigl\vert f_{n}(x)-f(x) \bigr\vert \leq \bigl\vert Ef_{n}(x)-f(x) \bigr\vert + \bigl\vert f_{n}(x)-Ef_{n}(x) \bigr\vert . $$

Since Lemma 3.3 has provided the convergence rate for \(|Ef_{n}(x)-f(x)|\), we only need to show that

$$ \bigl\vert f_{n}(x)-Ef_{n}(x) \bigr\vert = \Biggl\vert \sum_{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \varepsilon_{i} \Biggr\vert =o \bigl(n^{-1/2r}\bigr) \quad \mbox{a.s.} $$
(12)

For each \(1\leq i\leq n\), \(n\geq1\), define

$$\begin{aligned}& \varepsilon_{ni}(3)=-n^{1/2r}I\bigl(\varepsilon _{i}< -n^{1/2r}\bigr)+\varepsilon_{i}I\bigl(| \varepsilon_{i}|\leq n^{1/2r}\bigr)+n^{1/2r}I\bigl( \varepsilon_{i}>n^{1/2r}\bigr), \\& \varepsilon_{ni}(4)=\bigl(\varepsilon_{i}+n^{1/2r} \bigr)I\bigl(\varepsilon_{i}< -n^{1/2r}\bigr) +\bigl( \varepsilon_{i}-n^{1/2r}\bigr)I\bigl(\varepsilon_{i}>n^{1/2r} \bigr). \end{aligned}$$

It follows by Lemma 3.1 that \(\{\varepsilon_{ni}(3),1\leq i\leq n\} _{n\geq1}\) are still WOD random variables with \(\varepsilon_{ni}(3)+\varepsilon_{ni}(4)=\varepsilon_{i}\). Thus, to prove (12), it suffices to prove that

$$\begin{aligned}& \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \bigl[ \varepsilon_{ni}(3)-E\varepsilon_{ni}(3)\bigr] \Biggr\vert =o\bigl(n^{-1/2r}\bigr)\quad \mbox{a.s.}, \end{aligned}$$
(13)
$$\begin{aligned}& \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) E \varepsilon_{ni}(4) \Biggr\vert =o\bigl(n^{-1/2r} \bigr), \end{aligned}$$
(14)

and

$$ \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \varepsilon_{ni}(4) \Biggr\vert =o\bigl(n^{-1/2r}\bigr) \quad \mbox{a.s.} $$
(15)

We adopt the method used in the proof of Theorem 2.1. Note that, for all n large enough,

$$ \max_{1\leq i\leq n} \biggl\vert {{\tilde{\delta }_{i}}\over {h_{n}}}K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(3)-E \varepsilon_{ni}(3)\bigr] \biggr\vert \leq c_{3}n^{-1/2r}( \log n)^{-s}, $$

and, by Lemma 3.5 and the boundedness of \(K(\cdot)\),

$$\begin{aligned}& \sum_{i=1}^{n}E \biggl\vert {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \bigl[ \varepsilon_{ni}(3)-E\varepsilon_{ni}(3)\bigr] \biggr\vert ^{2} \\& \quad \leq C{{\delta_{n}}\over {h_{n}}}\sum_{i=1}^{n} {{\tilde {\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E\varepsilon_{i}^{2} \\& \quad \leq c_{4}n^{-1/r}(\log n)^{-s}, \end{aligned}$$

where the last inequality follows from \(\sup_{i\geq1}E\varepsilon _{i}^{2}\leq\sup_{i\geq1}[E|\varepsilon_{i}|^{2r}]^{1/r}<\infty\). As in the proof of Theorem 2.1, for any \(\varrho>0\), \((\log n)^{a}-\varrho(\log n)^{s}\leq-\varrho (\log n)^{s}/2\leq-2\log n\) for all n large enough. Hence we obtain by Lemma 3.2 that, for any \(\epsilon>0\),

$$\begin{aligned}& \sum_{n=1}^{\infty}P \Biggl( \Biggl\vert \sum _{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(3)-E \varepsilon_{ni}(3)\bigr] \Biggr\vert \geq n^{-1/2r}\epsilon \Biggr) \\& \quad \leq 2\sum_{n=1}^{\infty}g(n)\exp \biggl\{ -\frac{n^{-1/r}\epsilon ^{2}}{2c_{4}n^{-1/r}(\log n)^{-s}+\frac{2}{3}c_{3}\epsilon n^{-1/r}(\log n)^{-s}} \biggr\} \\& \quad \leq C\sum_{n=1}^{\infty}\exp \biggl\{ ( \log n)^{a}-\frac{\epsilon ^{2}}{2c_{4}+\frac{2}{3}c_{3}\epsilon}(\log n)^{s} \biggr\} \\& \quad \leq C\sum_{n=1}^{\infty}n^{-2}< \infty. \end{aligned}$$
(16)

By (16) and the Borel–Cantelli lemma we obtain (13).

On the other hand, by Lemma 3.5 it follows that

$$\begin{aligned} \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) E \varepsilon_{ni}(4) \Biggr\vert \leq&\sum _{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E \bigl\vert \varepsilon_{ni}(4) \bigr\vert \\ \leq&\sum_{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E| \varepsilon_{i}|I\bigl(|\varepsilon_{i}|>n^{1/2r} \bigr) \\ \leq&n^{(1-2r)/2r}\sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E|\varepsilon_{i}|^{2r}I\bigl(| \varepsilon_{i}|>n^{1/2r}\bigr) \\ \leq&Cn^{(1-2r)/2r}=o\bigl(n^{-1/2r}\bigr). \end{aligned}$$

Therefore (14) is proved. Finally, we prove (15). Note that

$$ \sum_{i=1}^{\infty}i^{-1/2r}(\log i)^{-s}E|\varepsilon_{i}|I\bigl(|\varepsilon_{i}|>i^{1/2r} \bigr)\leq \sum_{i=1}^{\infty}i^{-1}( \log i)^{-s}E|\varepsilon _{i}|^{2r}I\bigl(| \varepsilon_{i}|>i^{1/2r}\bigr)< \infty, $$

and thus

$$ \sum_{i=1}^{\infty}i^{-1/2r}(\log i)^{-s}|\varepsilon_{i}|I\bigl(|\varepsilon_{i}|>i^{1/2r} \bigr)< \infty\quad \mbox{a.s.} $$

By Kronecker’s lemma we have that

$$ n^{-1/2r}(\log n)^{-s}\sum_{i=1}^{n}| \varepsilon _{i}|I\bigl(|\varepsilon_{i}|>i^{1/2r} \bigr)\rightarrow0\quad \mbox{a.s.} $$
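For the reader's convenience, the form of Kronecker's lemma applied here (with labels \(a_{i}\), \(b_{i}\) introduced only for this illustration) is:

$$ \mbox{if } 0< b_{n}\uparrow\infty \mbox{ and } \sum_{i=1}^{\infty}\frac{a_{i}}{b_{i}}<\infty, \quad \mbox{then } \frac{1}{b_{n}}\sum_{i=1}^{n}a_{i}\rightarrow0; $$

we take \(a_{i}=|\varepsilon_{i}|I(|\varepsilon_{i}|>i^{1/2r})\) and \(b_{i}=i^{1/2r}(\log i)^{s}\), which is eventually increasing to infinity.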

Since \(K(\cdot)\) is bounded, we obtain

$$\begin{aligned} \Biggl\vert \sum_{i=1}^{n} {{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \varepsilon_{ni}^{(2)} \Biggr\vert \leq& C {{\delta_{n}}\over {h_{n}}}\sum_{i=1}^{n}| \varepsilon_{i}|I\bigl(|\varepsilon _{i}|>n^{1/2r} \bigr) \\ \leq&Cn^{-1/2r}\cdot n^{-1/2r}(\log n)^{-s}\sum _{i=1}^{n}|\varepsilon_{i}|I \bigl(|\varepsilon_{i}|>n^{1/2r}\bigr) \\ \leq&Cn^{-1/2r}\cdot n^{-1/2r}(\log n)^{-s}\sum _{i=1}^{n}|\varepsilon_{i}|I \bigl(|\varepsilon_{i}|>i^{1/2r}\bigr) \\ =&o\bigl(n^{-1/2r}\bigr) \quad \mbox{a.s.}, \end{aligned}$$

which gives (15). The proof of Theorem 2.3 is complete. □

Proof of Theorem 2.4

The notation is the same as in the proof of Theorem 2.3. To finish the proof, it suffices to verify that, for any \(\epsilon>0\),

$$ \sum_{n=1}^{\infty}P \Biggl( \Biggl\vert \sum _{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(3)-E \varepsilon_{ni}(3)\bigr] \Biggr\vert \geq n^{-1/2r}\epsilon \Biggr)< \infty $$
(17)

and

$$ \sum_{n=1}^{\infty}P \Biggl( \Biggl\vert \sum _{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(4)-E \varepsilon_{ni}(4)\bigr] \Biggr\vert \geq n^{-1/2r}\epsilon \Biggr)< \infty. $$
(18)

Similarly to the proof of (16), we can obtain (17), so it remains to prove (18). Note that \(\sup_{i\geq1} E|\varepsilon_{i}|^{2+2r}<\infty\). By Markov’s inequality and Lemma 3.5 we have that, for any \(\epsilon>0\),

$$\begin{aligned}& \sum_{n=1}^{\infty}P \Biggl( \Biggl\vert \sum _{i=1}^{n}{{\tilde{\delta}_{i}}\over {h_{n}}}K \biggl( {{x-x_{i}}\over {h_{n}}} \biggr) \bigl[\varepsilon_{ni}(4)-E \varepsilon_{ni}(4)\bigr] \Biggr\vert \geq n^{-1/2r}\epsilon \Biggr) \\& \quad \leq C\sum_{n=1}^{\infty}n^{1/2r}E \Biggl\vert \sum_{i=1}^{n} {{\tilde {\delta}_{i}}\over {h_{n}}}K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \varepsilon_{ni}^{(2)} \Biggr\vert \\& \quad \leq C\sum_{n=1}^{\infty}n^{1/2r} \sum_{i=1}^{n}{{\tilde{\delta }_{i}}\over {h_{n}}} \biggl\vert K \biggl({{x-x_{i}}\over {h_{n}}} \biggr) \biggr\vert E|\varepsilon _{i}|I\bigl(|\varepsilon_{i}|>n^{1/2r}\bigr) \\& \quad \leq C\sum_{n=1}^{\infty}n^{1/2r} \sup_{i\geq1}E|\varepsilon _{i}|I\bigl(| \varepsilon_{i}|>n^{1/2r}\bigr) \\& \quad = C\sup_{i\geq1}\sum_{j=1}^{\infty}E| \varepsilon _{i}|I\bigl(j< |\varepsilon_{i}|^{2r} \leq j+1\bigr)\sum_{n=1}^{j}n^{1/2r} \\& \quad \leq C\sup_{i\geq1}\sum_{j=1}^{\infty}j^{1+1/2r}E| \varepsilon _{i}|I\bigl(j< |\varepsilon_{i}|^{2r} \leq j+1\bigr) \\& \quad \leq C\sup_{i\geq1}\sum_{j=1}^{\infty}E| \varepsilon _{i}|^{2r+2}I\bigl(j< |\varepsilon_{i}|^{2r} \leq j+1\bigr)< \infty. \end{aligned}$$
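The last two inequalities in the chain above combine two elementary observations, spelled out here for clarity:

$$ \sum_{n=1}^{j}n^{1/2r}\leq\int_{0}^{j+1}t^{1/2r}\,dt\leq Cj^{1+1/2r}, \qquad j^{1+1/2r}|\varepsilon_{i}|I\bigl(j< |\varepsilon_{i}|^{2r}\leq j+1\bigr)\leq |\varepsilon_{i}|^{2r+2}I\bigl(j< |\varepsilon_{i}|^{2r}\leq j+1\bigr), $$

the second holding because \(j<|\varepsilon_{i}|^{2r}\) implies \(j^{1+1/2r}<|\varepsilon_{i}|^{2r+1}\) on the indicated event.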

Hence (18) is proved, and thus the proof of Theorem 2.4 is complete. □

References

  1. Priestley, M.B., Chao, M.T.: Non-parametric function fitting. J. R. Stat. Soc. B 34, 385–392 (1972)

  2. Benedetti, J.K.: On the nonparametric estimation of regression functions. J. R. Stat. Soc. B 39, 248–253 (1977)

  3. Yang, S.C., Wang, Y.B.: Strong consistency of regression function estimator for negatively associated samples. Acta Math. Appl. Sin. 22(4), 522–530 (1999)

  4. Yang, S.C.: Uniformly asymptotic normality of the regression weighted estimator for negatively associated samples. Stat. Probab. Lett. 62(2), 101–110 (2003)

  5. Wu, Y., Wang, X.J., Balakrishnan, N.: On the consistency of the P–C estimator in a nonparametric regression model. Stat. Pap. 2, 1–17 (2017). https://doi.org/10.1007/s00362-017-0966-9

  6. Joag-Dev, K., Proschan, F.: Negative association of random variables with applications. Ann. Stat. 11, 286–295 (1983)

  7. Hu, T.Z.: Negatively superadditive dependence of random variables with applications. Chinese J. Appl. Probab. Statist. 16, 133–144 (2000)

  8. Lehmann, E.L.: Some concepts of dependence. Ann. Math. Stat. 37, 1137–1153 (1966)

  9. Liu, L.: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 79, 1290–1298 (2009)

  10. Wang, K.Y., Wang, Y.B., Gao, Q.W.: Uniform asymptotics for the finite-time ruin probability of a new dependent risk model with a constant interest rate. Methodol. Comput. Appl. Probab. 15, 109–124 (2013)

  11. Wang, Y., Cheng, D.: Basic renewal theorems for random walks with widely dependent increments. J. Math. Anal. Appl. 384, 597–606 (2011)

  12. Liu, X.J., Gao, Q.W., Wang, Y.B.: A note on a dependent risk model with constant interest rate. Stat. Probab. Lett. 82, 707–712 (2012)

  13. Chen, W., Wang, Y.B., Cheng, D.Y.: An inequality of widely dependent random variables and its applications. Lith. Math. J. 56, 16–31 (2016)

  14. Wang, Y.B., Cui, Z.L., Wang, K.Y., Ma, X.L.: Uniform asymptotics of the finite-time ruin probability for all times. J. Math. Anal. Appl. 390, 208–223 (2012)

  15. Shen, A.T.: Bernstein-type inequality for widely dependent sequence and its application to nonparametric regression models. Abstr. Appl. Anal. 2013, Article ID 862602 (2013)

  16. Qiu, D.H., Chen, P.Y.: Complete and complete moment convergence for weighted sums of widely orthant dependent random variables. Acta Math. Sin. Engl. Ser. 30, 1539–1548 (2014)

  17. Wang, X.J., Xu, C., Hu, T.-C., Volodin, A., Hu, S.H.: On complete convergence for widely orthant-dependent random variables and its applications in nonparametric regression models. Test 23, 607–629 (2014)

  18. Yang, W.Z., Liu, T.T., Wang, X.J., Hu, S.H.: On the Bahadur representation of sample quantiles for widely orthant dependent sequences. Filomat 28, 1333–1343 (2014)

  19. Wang, X.J., Hu, H.S.: The consistency of the nearest neighbor estimator of the density function based on WOD samples. J. Math. Anal. Appl. 429(1), 497–512 (2015)

  20. Wu, Y., Wang, X.J., Rosalsky, A.: Complete moment convergence for arrays of rowwise widely orthant dependent random variables. Acta Math. Sin. Engl. Ser. 34(10), 1531–1548 (2018)


Acknowledgements

The author is very grateful to the referees for their many valuable comments and suggestions.

Availability of data and materials

Not applicable.

Funding

The paper is supported by the Key Research Projects of Natural Science in Colleges and Universities of Anhui Provincial Education Department (KJ2015A379).

Author information

Contributions

The author completed the paper and approved the final manuscript.

Corresponding author

Correspondence to Qihui He.

Ethics declarations

Competing interests

The author declares that there are no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

He, Q. Consistency of the Priestley–Chao estimator in nonparametric regression model with widely orthant dependent errors. J Inequal Appl 2019, 64 (2019). https://doi.org/10.1186/s13660-019-2016-8
