Consistency properties for the wavelet estimator in nonparametric regression model with dependent errors

Abstract

In this paper, we establish the pth mean consistency, complete consistency, and the rate of complete consistency for the wavelet estimator in a nonparametric regression model with m-extended negatively dependent random errors. We show that the best rate can be nearly \(O(n^{-1/3})\) under some general conditions. The results obtained in this paper markedly improve and extend the corresponding ones in the literature to a much more general setting.

1 Introduction

Consider the nonparametric regression model

$$ Y_{ni}= g(t_{ni}) + \varepsilon _{ni},\quad 1\leq i\leq n, n\geq 1, $$
(1)

where the regression function g is an unknown Borel-measurable function defined on \([0,1]\), \(\{t_{ni}\}\) are nonrandom design points such that \(0\leq t_{n1}\leq \cdots \leq t_{nn} \leq 1\), and \(\{\varepsilon _{ni}\}\) are random errors.

It is known that nonparametric regression model (1) has many applications in practical fields such as communications and control systems, classification, econometrics, and so on. Thus model (1) has been widely investigated by many scholars. For some classical estimators in the independent case, we refer the readers to Watson [1], Nadaraya [2], Priestley and Chao [3], Gasser and Müller [4], and Georgiev [5]. Due to its significant applications, this nonparametric regression model has also been investigated in various dependent cases. For example, Yang and Wang [6] investigated the strong consistency of the P-C estimator in (1) with negatively associated (NA) samples; Yang [7] studied the rate of asymptotic normality for the weighted estimator with NA samples; Liang and Jing [8] established the mean consistency, strong consistency, complete consistency, and asymptotic normality for the weighted estimator with NA samples; Wang et al. [9] investigated the complete consistency of the weighted estimator in (1) with extended negatively dependent (END) errors; Chen et al. [10] obtained the mean consistency, strong consistency, and complete consistency of the weighted estimator in model (1) with martingale difference errors.

Compared with these smoothing methods, the wavelet method has the advantage of being able to estimate nonsmooth functions. Therefore, in this paper, we concentrate on the wavelet estimation of the unknown function g in model (1). First, we recall two necessary concepts.

Definition 1.1

A scaling function ϕ is q-regular (\(q\in \mathbb{Z}\)) if for any \(l\leq q\) and any integer k we have \(|\frac{d^{l}\phi }{dx^{l}}|\leq C_{k}(1+|x|)^{-k}\), where \(C_{k}\) is a generic constant depending only on k.

Definition 1.2

The Sobolev space \(H^{\nu }\) of order ν (\(\nu \in \mathbb{R}\)) consists of all functions h such that \(\int |\hat{h}(\omega )|^{2}(1+\omega ^{2})^{\nu }\,d\omega <\infty \), where ĥ is the Fourier transform of h.

Define the wavelet kernel

$$ E_{k}(t,s)=2^{k} E_{0}\bigl(2^{k} t,2^{k} s\bigr), $$

where \(E_{0}(t,s)=\sum_{j\in \mathbb{Z}}\phi (t-j)\phi (s-j)\), \(k=k(n)> 0\) is an integer depending only on n, and ϕ is the scaling function in the Schwartz space.

Antoniadis et al. [11] proposed the following wavelet estimator of g:

$$ g_{n}(t)=\sum_{i=1}^{n}Y_{ni} \int _{A_{ni}}E_{k}(t,s)\,ds, $$
(2)

where \(A_{n1},A_{n2},\ldots ,A_{nn}\) form a partition of the interval \([0,1]\) with \(t_{ni}\in A_{ni}\), \(A_{ni}=[s_{n(i-1)},s_{ni})\) for \(i=1,2,\ldots , n-1\), and \(A_{nn}=[s_{n(n-1)},s_{nn}]\).
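To make the construction concrete, the following is a minimal numerical sketch of estimator (2). It uses the Haar scaling function \(\phi =I_{[0,1)}\) purely as an illustrative stand-in (the Haar function is not regular enough to satisfy \(\boldsymbol{(H_{2})}\)); for this choice, \(E_{k}(t,s)=2^{k}I(\lfloor 2^{k}t\rfloor =\lfloor 2^{k}s\rfloor )\), so each weight \(\int _{A_{ni}}E_{k}(t,s)\,ds\) is \(2^{k}\) times the overlap length of \(A_{ni}\) with the dyadic cell containing t. All function names below are ours, not from the paper.

```python
import numpy as np

def haar_weights(t, s_grid, k):
    # Weights int_{A_ni} E_k(t, s) ds for the Haar scaling function:
    # E_k(t, s) = 2^k * 1{floor(2^k t) == floor(2^k s)}, so each weight is
    # 2^k times the length of A_ni = [s_{n(i-1)}, s_{ni}) intersected with
    # the dyadic cell [j / 2^k, (j + 1) / 2^k), where j = floor(2^k t).
    j = np.floor(2.0**k * t)
    lo, hi = j / 2.0**k, (j + 1) / 2.0**k
    left = np.clip(s_grid[:-1], lo, hi)    # clipped left endpoints s_{n(i-1)}
    right = np.clip(s_grid[1:], lo, hi)    # clipped right endpoints s_{ni}
    return 2.0**k * (right - left)

def wavelet_estimator(t, y, s_grid, k):
    # Estimator (2): g_n(t) = sum_i Y_ni * int_{A_ni} E_k(t, s) ds.
    return float(np.sum(y * haar_weights(t, s_grid, k)))
```

With the equispaced partition `s_grid = np.linspace(0, 1, n + 1)`, assumption \(\boldsymbol{(H_{3})}\) holds with \(|s_{ni}-s_{n(i-1)}|=1/n\), the weights sum to one, and \(g_{n}(t)\) reduces to a local average of the \(Y_{ni}\) over the dyadic cell containing t.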

It is well known that the wavelet method is a powerful tool in many fields such as applied mathematics, physics, computer science, signal and information processing, image processing, and so on. Therefore, since Antoniadis et al. [11] introduced this method into the nonparametric regression model, many results have been established. For example, Xue [12] investigated the rates of strong convergence for the wavelet estimator under complete and censored data; Sun and Chai [13] established the weak consistency, strong consistency, and the convergence rate for the wavelet estimator under stationary α-mixing samples; Li et al. [14] obtained the weak consistency and the rate of uniformly asymptotic normality for wavelet estimator with associated samples; Liang [15] established the asymptotic normality for wavelet estimator in heteroscedastic model with α-mixing samples; Li et al. [16] obtained the Berry–Esseen bounds for wavelet estimator in a regression model with linear process errors generated by φ-mixing sequences; Tang et al. [17] studied the asymptotic normality for wavelet estimator with asymptotically negatively associated random errors; Ding et al. [18] investigated the mean consistency, complete consistency, and the rate of complete consistency for wavelet estimator with END random errors; Ding et al. [19] established the Berry–Esseen bound of wavelet estimators in nonparametric regression model with asymptotically negatively associated errors; Ding and Chen [20] studied the asymptotic normality of the wavelet estimators in heteroscedastic semiparametric regression model with φ-mixing errors, and so on.

In this work, we further study the consistency properties of wavelet estimator (2) under a more general dependence structure. Now we recall some concepts of dependent random variables.

Definition 1.3

A finite collection of random variables \(X_{1},X_{2},\ldots ,X_{n}\) is said to be END if there exists a constant \(M>0\) such that

$$ P(X_{1}>x_{1},X_{2}>x_{2}, \ldots ,X_{n}>x_{n})\leq M\prod _{i=1}^{n}P(X_{i}>x_{i}) $$

and

$$ P(X_{1}\leq x_{1}, X_{2}\leq x_{2},\ldots ,X_{n}\leq x_{n})\leq M \prod _{i=1}^{n}P(X_{i}\leq x_{i}) $$

for all real numbers \(x_{1}, x_{2}, \ldots , x_{n}\). An infinite sequence \(\{X_{n}, n\geq 1\}\) is said to be END if every finite subcollection of it is END.

An array \(\{X_{ni}, 1\leq i\leq n, n\geq 1\}\) of random variables is said to be rowwise END if for every \(n\geq 1\), \(\{X_{ni}, 1\leq i\leq n\}\) is END.
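As a quick numerical illustration of the two tail inequalities in Definition 1.3 (a sanity check, not a proof), consider a bivariate normal pair with negative correlation: such a pair is NA by Joag-Dev and Proschan [22], hence END with \(M=1\), so the empirical joint upper tail should not exceed the product of the marginal tails. The snippet below, with all parameter choices ours, estimates both sides by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Negatively correlated bivariate normal pairs are NA, hence END with
# M = 1: P(X1 > x1, X2 > x2) <= P(X1 > x1) * P(X2 > x2).
n, rho = 200_000, -0.5
x = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

for x1, x2 in [(0.0, 0.0), (0.5, 1.0), (1.0, 1.0)]:
    joint = np.mean((x[:, 0] > x1) & (x[:, 1] > x2))
    prod = np.mean(x[:, 0] > x1) * np.mean(x[:, 1] > x2)
    print(f"P(X1>{x1}, X2>{x2}) = {joint:.4f}  <=  {prod:.4f} = product")
```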

The concept of END random variables was introduced by Liu [21], who showed that the END structure can reflect not only negative dependence structures, but also some positive ones. It has been proved that the END structure contains NA, negatively superadditive dependent (NSD), and negatively orthant dependent (NOD) random variables, the concepts of which were introduced by Joag-Dev and Proschan [22], Hu [23], and Lehmann [24], respectively. Therefore this dependence structure has attracted increasing attention, and many results have been successfully established since the concept was raised. For more details, we refer the readers to Liu [25], Shen [26], Wang and Wang [27], Wu and Guan [28], Shen et al. [29], Yang et al. [30], and Wu et al. [31], among others.

Wang et al. [32] introduced the following concept of m-extended negatively dependent (m-END) random variables.

Definition 1.4

Let \(m\geq 1\) be a fixed integer. A sequence \(\{X_{n},n\geq 1\}\) of random variables is said to be m-END if for any \(n\geq 2\) and any \(i_{1},i_{2},\ldots ,i_{n}\) such that \(|i_{k}-i_{j}|\geq m\) for all \(1\leq k\neq j\leq n\), we have that \(X_{i_{1}},X_{i_{2}},\ldots ,X_{i_{n}}\) are END.

The concept of m-END random variables is a natural extension of END random variables. It degenerates to END if we take \(m=1\). Hence m-END is a more general structure, and it is of interest to investigate this dependence structure. There are already some papers investigating m-END random variables. For example, Xu et al. [33] studied the mean consistency of the weighted estimator in a nonparametric regression model based on m-END random errors; Wang et al. [34] obtained the complete and complete moment convergence for partial sums of m-END random variables and gave their applications to the EV regression models.

We will use the following concept of stochastic domination.

Definition 1.5

A sequence \(\{X_{n}, n\geq 1\}\) of random variables is said to be stochastically dominated by a random variable X if there exists a positive constant C such that

$$ P\bigl( \vert X_{n} \vert >x\bigr)\leq CP\bigl( \vert X \vert >x\bigr) $$

for all \(x\geq 0\) and \(n\geq 1\).

An array \(\{X_{ni}, 1\leq i\leq n, n\geq 1\}\) of rowwise random variables is said to be stochastically dominated by a random variable X if there exists a positive constant C such that

$$ P\bigl( \vert X_{ni} \vert >x\bigr)\leq CP\bigl( \vert X \vert >x\bigr) $$

for all \(x\geq 0\) and \(1\leq i\leq n\), \(n\geq 1\).

In this paper, we further investigate the consistency properties of estimator (2) in the nonparametric regression model (1) based on m-END random errors. We establish the pth mean consistency, complete consistency, and the rate of complete consistency under some general conditions. These results improve and extend the corresponding ones of Li et al. [14] and Ding et al. [18]. Moreover, the method used here is different from those of Li et al. [14] and Ding et al. [18].

In this paper, the symbols \(C,c_{1},c_{2},\dots \) represent generic positive constants whose values may vary in different places. Denote \(x^{+}=\max \{0,x\}\) and \(x^{-}=\max \{0,-x\}\). By \(I(A)\) we denote the indicator function of an event A.

The paper is organized as follows. The main results are stated in Sect. 2. Some important lemmas are presented in Sect. 3. The proofs of the main results are provided in Sect. 4.

2 Main results

The following assumptions are needed in the main results.

\(\boldsymbol{(H_{1})}\):

\(g\in H^{\nu }\) for \(\nu >1/2\), and g satisfies the Lipschitz condition of order \(\gamma >0\).

\(\boldsymbol{(H_{1})'}\):

\(g\in H^{\nu }\) for \(\nu \geq 3/2\), and g satisfies the Lipschitz condition of order \(\gamma >0\).

\(\boldsymbol{(H_{2})}\):

ϕ is regular of order \(q\geq \nu \), satisfies the Lipschitz condition of order 1, and \(|\hat{\phi }(x)-1|=O(x)\) as \(x\rightarrow 0\), where ϕ̂ is the Fourier transform of ϕ.

\(\boldsymbol{(H_{3})}\):

\(\max_{1\leq i\leq n}|s_{ni}-s_{n(i-1)}|=O(1/n)\).

Remark 2.1

These assumptions are general conditions for wavelet estimation. They are widely used in the literature, for example, in [12–18].

We now present our main results. The first one is the mean consistency of order p for estimator (2).

Theorem 2.1

Suppose \(\boldsymbol{(H_{1})}\)–\(\boldsymbol{(H_{3})}\) hold. Assume that \(\{\varepsilon _{ni},1\leq i\leq n,n\geq 1\}\) is an array of zero mean m-END random variables with \(\sup_{1\leq i\leq n,n\geq 1}E|\varepsilon _{ni}|^{p}<\infty \) for some \(p>1\). If \(2^{k}\rightarrow \infty \) and \(2^{k}/n\rightarrow 0\) as \(n\rightarrow \infty \), then for any \(t\in [0,1]\),

$$ g_{n}(t)\xrightarrow{L_{p}}g(t),\quad n\rightarrow \infty . $$

Remark 2.2

Li et al. [14] obtained the weak consistency for (2) with NA random errors. They required \(2^{k}=O(n^{1/3})\) and \(\sup_{1\leq i\leq n,n\geq 1}E|\varepsilon _{ni}|^{p}<\infty \) for some \(p>3/2\). Ding et al. [18] extended the result of Li et al. [14] to the pth mean consistency with END random errors under the moment condition \(\sup_{1\leq i\leq n,n\geq 1}E|\varepsilon _{ni}|^{p}<\infty \) for some \(p\geq 2\). It is obvious that Theorem 2.1 markedly relaxes the choice of \(2^{k}\) and weakens the moment condition. Hence Theorem 2.1 improves and extends the results of Ding et al. [18] and Li et al. [14] to m-END random errors.
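To see Theorem 2.1 at work, here is a small Monte Carlo sketch of the \(L_{2}\) case (\(p=2\)), reusing the Haar-based `wavelet_estimator` sketched in Sect. 1. The errors are one-dependent moving averages, so variables whose indices differ by at least two are independent; such an array is m-END with \(m=2\) (and \(M=1\)). The test function and all parameter choices are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda t: np.sin(2 * np.pi * t)   # a smooth test regression function
t0, reps = 0.5, 500

for n in [250, 1000, 4000]:
    k = max(1, round(np.log2(n) / 3))             # 2^k of order n^(1/3)
    s_grid = np.linspace(0.0, 1.0, n + 1)
    t_design = (s_grid[:-1] + s_grid[1:]) / 2.0   # design points t_ni in A_ni
    mse = 0.0
    for _ in range(reps):
        eta = rng.standard_normal(n + 1)
        eps = (eta[:-1] + eta[1:]) / np.sqrt(2.0)  # 1-dependent, hence 2-END
        y = g(t_design) + eps
        mse += (wavelet_estimator(t0, y, s_grid, k) - g(t0)) ** 2
    print(n, mse / reps)   # empirical E|g_n(t0) - g(t0)|^2, shrinking with n
```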

The next theorem is about the complete consistency of the wavelet estimator based on m-END random errors.

Theorem 2.2

Suppose \(\boldsymbol{(H_{1})}\)–\(\boldsymbol{(H_{3})}\) hold. Assume that \(2^{k}\rightarrow \infty \) and \(2^{k}/n=O(n^{-\alpha })\) for some \(0<\alpha <1\) and \(\{\varepsilon _{ni},1\leq i\leq n,n\geq 1\}\) is an array of zero mean m-END random variables stochastically dominated by a random variable ε with \(E|\varepsilon |^{1+1/\alpha }<\infty \). Then for any \(t\in [0,1]\),

$$ g_{n}(t)\rightarrow g(t)\quad \textit{completely}. $$

Remark 2.3

Ding et al. [18] obtained the complete consistency of wavelet estimator with END random errors, in which the conditions \(2^{k}=O(n^{1/3})\) and \(E|\varepsilon |^{4}<\infty \) are required. Note that even if we choose \(2^{k}=O(n^{1/3})\) in Theorem 2.2, the moment condition is only required to be \(E|\varepsilon |^{5/2}<\infty \). Therefore Theorem 2.2 improves and extends the corresponding result of Ding et al. [18] markedly from END random errors to m-END random errors.

We also provide the rate of complete consistency for the wavelet estimator.

Theorem 2.3

Suppose \(\boldsymbol{(H_{1})'}\), \(\boldsymbol{(H_{2})}\), and \(\boldsymbol{(H_{3})}\) hold. Assume that \(2^{k}\rightarrow \infty \) and \(2^{k}/n=O(n^{-\alpha })\) for some \(0<\alpha <1\) and \(\{\varepsilon _{ni},1\leq i\leq n,n\geq 1\}\) is an array of zero mean m-END random variables stochastically dominated by a random variable ε with \(E|\varepsilon |^{2+2/\alpha }<\infty \). Then for any \(0< r<\alpha \) and \(t\in [0,1]\),

$$ \bigl\vert g_{n}(t)-g(t) \bigr\vert =O \bigl( \sqrt{k}/2^{k}+n^{-\gamma } \bigr)+o\bigl(n^{-r/2} \bigr) \quad \textit{completely}. $$

By Theorem 2.3 we can obtain the following precise rate of complete consistency for the wavelet estimator.

Corollary 2.1

Suppose \(\boldsymbol{(H_{1})'}\), \(\boldsymbol{(H_{2})}\), and \(\boldsymbol{(H_{3})}\) hold with \(\gamma \geq 2/3\). Assume that \(c_{1}n^{1/3}\leq 2^{k}\leq c_{2}n^{1/3}\) and \(\{\varepsilon _{ni},1\leq i\leq n,n\geq 1\}\) is an array of zero mean m-END random variables stochastically dominated by a random variable ε with \(E|\varepsilon |^{5}<\infty \). Then for any \(0< r<2/3\) and \(t\in [0,1]\),

$$ \bigl\vert g_{n}(t)-g(t) \bigr\vert =o\bigl(n^{-r/2} \bigr)\quad \textit{completely}. $$

Remark 2.4

In Corollary 2.1, if r is close to 2/3, then the rate of complete consistency can approximate \(O(n^{-1/3})\). Ding et al. [18] also obtained the rate of complete consistency for the wavelet estimator, where \(E|\varepsilon |^{6}<\infty \) is required. Observe that Corollary 2.1 only requires \(E|\varepsilon |^{5}<\infty \), and the best rate can still be nearly \(O(n^{-1/3})\). Therefore Corollary 2.1 improves and extends the corresponding result of Ding et al. [18].
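For concreteness, one admissible resolution level in Corollary 2.1 is \(k=\lceil \log _{2}n/3\rceil \): then \(n^{1/3}\leq 2^{k}<2n^{1/3}\), that is, \(c_{1}=1\) and \(c_{2}=2\). A one-line sketch (the function name is ours):

```python
import math

def resolution_level(n: int) -> int:
    # k = ceil(log2(n) / 3) gives n^(1/3) <= 2^k < 2 * n^(1/3),
    # i.e. c1 = 1 and c2 = 2 in Corollary 2.1.
    return max(1, math.ceil(math.log2(n) / 3))
```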

3 Some important lemmas

In this section, we state some lemmas, which will be used in proving our main results. The first one is a basic property of m-END random variables, which can be found in Wang et al. [32].

Lemma 3.1

Let \(\{X_{n},n\geq 1\}\) be a sequence of m-END random variables. If \(\{f_{n},n\geq 1\}\) are all nondecreasing (or all nonincreasing) functions, then \(\{f_{n}(X_{n}),n\geq 1\}\) is still a sequence of m-END random variables.

The following lemma is about the Marcinkiewicz–Zygmund-type inequality and Rosenthal-type inequality for m-END random variables proved by Xu et al. [33].

Lemma 3.2

Let \(\{X_{n},n\geq 1\}\) be a sequence of m-END random variables with \(EX_{n}=0\) and \(E|X_{n}|^{p}<\infty \) for all \(n\geq 1\) and some \(p\geq 1\). Then

$$ E \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{p}\leq c_{m,p}\sum _{i=1}^{n}E \vert X_{i} \vert ^{p}, \quad 1 \leq p< 2, $$

and

$$ E \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{p}\leq d_{m,p} \Biggl\{ \sum _{i=1}^{n}E \vert X_{i} \vert ^{p}+ \Biggl(\sum_{i=1}^{n}EX_{i}^{2} \Biggr)^{p/2} \Biggr\} , \quad p\geq 2, $$

where \(c_{m,p}\) and \(d_{m,p}\) depend only on m and p.

The following lemma is due to Antoniadis et al. [11].

Lemma 3.3

Suppose that \(\boldsymbol{(H_{1})}\)–\(\boldsymbol{(H_{3})}\) hold. Then

$$ Eg_{n}(t)-g(t)=O\bigl(n^{-\gamma }\bigr)+O(\tau _{k}), $$

where

$$ \tau _{k}= \textstyle\begin{cases} (1/2^{k})^{\nu -1/2} &\textit{if } 1/2< \nu < 3/2, \\ \sqrt{k}/2^{k} &\textit{if } \nu =3/2, \\ 1/2^{k}& \textit{if } \nu >3/2. \end{cases} $$

The next lemma is due to Li et al. [14].

Lemma 3.4

Under Assumptions \(\boldsymbol{(H_{1})}\)–\(\boldsymbol{(H_{3})}\), we have

(i) \(\vert \int _{A_{ni}}E_{k}(t,s)\,ds \vert =O(\frac{2^{k}}{n})\), \(i=1,\ldots ,n\); (ii) \(\sum_{i=1}^{n} \vert \int _{A_{ni}}E_{k}(t,s)\,ds \vert \leq C\).

The following lemma can be proved by using Definition 1.5 and integration by parts; see Wu [35] or Shen et al. [36] for detailed proofs.

Lemma 3.5

Let \(\{X_{ni}, 1\leq i\leq n, n\geq 1\}\) be an array of random variables stochastically dominated by a random variable X. For any \(\alpha >0\) and \(b>0\), we have:

$$\begin{aligned}& E \vert X_{ni} \vert ^{\alpha }I \bigl( \vert X_{ni} \vert \leq b \bigr)\leq C_{1} \bigl[E \vert X \vert ^{\alpha }I \bigl( \vert X \vert \leq b \bigr)+b^{\alpha }P \bigl( \vert X \vert >b \bigr) \bigr], \\& E \vert X_{ni} \vert ^{\alpha }I \bigl( \vert X_{ni} \vert > b \bigr)\leq C_{2} E \vert X \vert ^{\alpha }I \bigl( \vert X \vert > b \bigr), \end{aligned}$$

where \(C_{1}\) and \(C_{2}\) are positive constants. Particularly, \(E|X_{ni}|^{\alpha }\leq CE|X|^{\alpha }\).
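For instance, the first inequality can be obtained from the tail-integration formula for nonnegative random variables; a short sketch, using only Definition 1.5:

$$\begin{aligned} E \vert X_{ni} \vert ^{\alpha }I \bigl( \vert X_{ni} \vert \leq b \bigr) =& \int _{0}^{b^{\alpha }}P \bigl( \vert X_{ni} \vert ^{\alpha }I \bigl( \vert X_{ni} \vert \leq b \bigr)>t \bigr)\,dt \leq \int _{0}^{b^{\alpha }}P \bigl( \vert X_{ni} \vert >t^{1/\alpha } \bigr)\,dt \\ \leq &C \int _{0}^{b^{\alpha }}P \bigl( \vert X \vert >t^{1/\alpha } \bigr)\,dt =CE\min \bigl( \vert X \vert ^{\alpha },b^{\alpha } \bigr) \\ =&C \bigl[E \vert X \vert ^{\alpha }I \bigl( \vert X \vert \leq b \bigr)+b^{\alpha }P \bigl( \vert X \vert >b \bigr) \bigr]. \end{aligned}$$

The second inequality follows in the same way by splitting the tail integral at \(t=b^{\alpha }\).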

4 Proofs of main results

Proof of Theorem 2.1

By (1) and (2) it follows that for any \(t\in [0,1]\),

$$ \bigl\vert g_{n}(t)-g(t) \bigr\vert \leq \bigl\vert Eg_{n}(t)-g(t) \bigr\vert + \Biggl\vert \sum _{i=1}^{n} \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni} \Biggr\vert . $$
(3)

Since \(\gamma >0\) and \(2^{k}\rightarrow \infty \) as \(n\rightarrow \infty \), by Lemma 3.3 it follows that

$$ \bigl\vert Eg_{n}(t)-g(t) \bigr\vert \rightarrow 0,\quad n\rightarrow \infty . $$
(4)

Hence we only need to prove

$$ E \Biggl\vert \sum_{i=1}^{n} \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni} \Biggr\vert ^{p}\rightarrow 0,\quad n\rightarrow \infty .$$

We will assume without loss of generality that \(\int _{A_{ni}}E_{k}(t,s)\,ds>0\) since \(\int _{A_{ni}}E_{k}(t,s)\,ds= (\int _{A_{ni}}E_{k}(t,s)\,ds )^{+}- (\int _{A_{ni}}E_{k}(t,s)\,ds )^{-}\). Thus \(\{\int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni},1\leq i\leq n\}\) are still m-END by Lemma 3.1. Hence, if \(1< p\leq 2\), then by Lemmas 3.2 and 3.4 we obtain that

$$\begin{aligned} E \Biggl\vert \sum_{i=1}^{n} \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni} \Biggr\vert ^{p} \leq &C\sum _{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{p}E \vert \varepsilon _{ni} \vert ^{p} \\ \leq &C\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert \cdot \biggl(\frac{2^{k}}{n} \biggr)^{p-1}\sup_{1\leq i\leq n,n\geq 1}E \vert \varepsilon _{ni} \vert ^{p} \\ \leq &C \biggl(\frac{2^{k}}{n} \biggr)^{p-1}\rightarrow 0, \quad n \rightarrow \infty . \end{aligned}$$

If \(p>2\), then, noting that \(\sup_{1\leq i\leq n,n\geq 1}E|\varepsilon _{ni}|^{p}<\infty \) implies \(\sup_{1\leq i\leq n,n\geq 1}E\varepsilon _{ni}^{2}<\infty \) by the Jensen inequality, we also obtain by Lemmas 3.2 and 3.4 that

$$\begin{aligned} E \Biggl\vert \sum_{i=1}^{n} \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni} \Biggr\vert ^{p} \leq &C\sum _{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{p}E \vert \varepsilon _{ni} \vert ^{p} \\ &{}+C \Biggl(\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{2}E\varepsilon _{ni}^{2} \Biggr)^{p/2} \\ \leq &C\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert \cdot \biggl(\frac{2^{k}}{n} \biggr)^{p-1}\sup_{1\leq i\leq n,n\geq 1}E \vert \varepsilon _{ni} \vert ^{p} \\ &+C \Biggl(\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert \cdot \frac{2^{k}}{n}\cdot \sup_{1\leq i\leq n,n\geq 1}E\varepsilon _{ni}^{2} \Biggr)^{p/2} \\ \leq &C \biggl(\frac{2^{k}}{n} \biggr)^{p-1}+C \biggl( \frac{2^{k}}{n} \biggr)^{p/2}\rightarrow 0,\quad n\rightarrow \infty . \end{aligned}$$

The proof is finished. □

Proof of Theorem 2.2

By (3) and (4) it follows that to complete the proof, we only need to show that for any \(\epsilon >0\),

$$ \sum_{n=1}^{\infty }P \Biggl( \Biggl\vert \sum_{i=1}^{n} \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni} \Biggr\vert >\epsilon \Biggr)< \infty . $$
(5)

In view of Lemma 3.4(i) and \(2^{k}/n=O(n^{-\alpha })\), we may assume without loss of generality that \(0<\int _{A_{ni}}E_{k}(t,s)\,ds\leq n^{-\alpha }\). For fixed \(1\leq i\leq n\), \(n\geq 1\), define

$$\begin{aligned} X_{ni} =&-I \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni}< -1 \biggr) + \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni}I \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni} \biggr\vert \leq 1 \biggr) \\ &{}+I \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\, \varepsilon _{ni}>1 \biggr) \end{aligned}$$

and

$$\begin{aligned} Y_{ni} =& \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}+1 \biggr)I \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}< -1 \biggr) \\ &{}+ \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}-1 \biggr)I \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}>1 \biggr). \end{aligned}$$

By Lemma 3.1 it follows that \(\{X_{ni},1\leq i\leq n\}\) are still m-END. We can easily check that

$$\begin{aligned}& \sum_{n=1}^{\infty }P \Biggl( \Biggl\vert \sum_{i=1}^{n} \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \Biggr\vert >\epsilon \Biggr) \\& \quad \leq \sum_{n=1}^{\infty }P \Biggl(\bigcup _{i=1}^{n} \biggl\{ \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert >1 \biggr\} \Biggr) +\sum _{n=1}^{\infty }P \Biggl( \Biggl\vert \sum _{i=1}^{n}X_{ni} \Biggr\vert > \epsilon \Biggr) \\& \quad \doteq I_{1}+I_{2}. \end{aligned}$$

From \(0<\int _{A_{ni}}E_{k}(t,s)\,ds\leq n^{-\alpha }\) and Lemmas 3.4 and 3.5 it follows that

$$\begin{aligned} I_{1} \leq &\sum_{n=1}^{\infty } \sum_{i=1}^{n}P \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert >1 \biggr) \\ \leq &\sum_{n=1}^{\infty }\sum _{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon _{ni} \vert I \bigl( \vert \varepsilon _{ni} \vert >n^{\alpha } \bigr) \\ \leq &C\sum_{n=1}^{\infty }\sum _{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon \vert I \bigl( \vert \varepsilon \vert >n^{\alpha } \bigr) \\ \leq &C\sum_{n=1}^{\infty }\sum _{j=n}^{\infty }E \vert \varepsilon \vert I \bigl(j< \vert \varepsilon \vert ^{1/\alpha }\leq j+1 \bigr) \\ \leq &C\sum_{j=1}^{\infty }jE \vert \varepsilon \vert I \bigl(j< \vert \varepsilon \vert ^{1/ \alpha }\leq j+1 \bigr)\leq CE \vert \varepsilon \vert ^{1+1/\alpha }< \infty . \end{aligned}$$

To prove that \(I_{2}<\infty \), we first show that \(\sum_{i=1}^{n}EX_{ni}\rightarrow 0\) as \(n\rightarrow \infty \). Indeed, from \(E\varepsilon _{ni}=0\), \(0<\int _{A_{ni}}E_{k}(t,s)\,ds\leq n^{-\alpha }\), and Lemmas 3.4 and 3.5 it follows that

$$\begin{aligned} \Biggl\vert \sum_{i=1}^{n}EX_{ni} \Biggr\vert =& \Biggl\vert \sum_{i=1}^{n}EY_{ni} \Biggr\vert \\ \leq &\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert \cdot E \vert \varepsilon _{ni} \vert I \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert >1 \biggr) \\ \leq &CE \vert \varepsilon \vert I\bigl( \vert \varepsilon \vert >n^{\alpha }\bigr)\leq Cn^{-1}E \vert \varepsilon \vert ^{1+1/\alpha }I\bigl( \vert \varepsilon \vert >n^{\alpha }\bigr) \rightarrow 0,\quad n \rightarrow \infty , \end{aligned}$$

which implies that \(\vert \sum_{i=1}^{n}EX_{ni} \vert <\epsilon /2\) for all n large enough. Hence by the Markov, \(C_{r}\), and Jensen inequalities and by Lemma 3.2 we obtain that for \(q>2/\alpha \),

$$\begin{aligned} I_{2} \leq &C\sum_{n=1}^{\infty }P \Biggl( \Biggl\vert \sum_{i=1}^{n}(X_{ni}-EX_{ni}) \Biggr\vert >\epsilon /2 \Biggr) \\ \leq &C\sum_{n=1}^{\infty }E \Biggl\vert \sum_{i=1}^{n}(X_{ni}-EX_{ni}) \Biggr\vert ^{q} \\ \leq &C\sum_{n=1}^{\infty }\sum _{i=1}^{n}E \vert X_{ni}-EX_{ni} \vert ^{q}+C \sum_{n=1}^{\infty } \Biggl(\sum_{i=1}^{n}E \vert X_{ni}-EX_{ni} \vert ^{2} \Biggr)^{q/2} \\ \leq &C\sum_{n=1}^{\infty }\sum _{i=1}^{n}E \vert X_{ni} \vert ^{q}+C\sum_{n=1}^{ \infty } \Biggl(\sum_{i=1}^{n}EX_{ni}^{2} \Biggr)^{q/2} \\ \doteq & I_{3}+I_{4}. \end{aligned}$$

By the \(C_{r}\) inequality, Definition 1.5, and Lemma 3.5 it follows that

$$\begin{aligned} I_{3} \leq &C\sum_{n=1}^{\infty } \sum_{i=1}^{n}E \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert ^{q} I \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert \leq 1 \biggr) \\ &{}+C\sum_{n=1}^{\infty }\sum _{i=1}^{n}P \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert >1 \biggr) \\ \leq &C\sum_{n=1}^{\infty }\sum _{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q}E \vert \varepsilon _{ni} \vert ^{q} I \biggl( \vert \varepsilon _{ni} \vert \leq \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1} \biggr) \\ &{}+C\sum_{n=1}^{\infty }\sum _{i=1}^{n}P \biggl( \vert \varepsilon _{ni} \vert > \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1} \biggr) \\ \leq &C\sum_{n=1}^{\infty }\sum _{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q}E \vert \varepsilon \vert ^{q} I \biggl( \vert \varepsilon \vert \leq \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1} \biggr) \\ &{}+C\sum_{n=1}^{\infty }\sum _{i=1}^{n}P \biggl( \vert \varepsilon \vert > \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1} \biggr) \\ \doteq & I_{31}+I_{32}. \end{aligned}$$

Similarly to the proof of \(I_{1}<\infty \), we easily obtain

$$ I_{32}\leq C\sum_{n=1}^{\infty } \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon \vert I \bigl( \vert \varepsilon \vert >n^{\alpha } \bigr)< \infty . $$

For \(I_{31}\), we easily check that

$$\begin{aligned} I_{31} =&C\sum_{n=1}^{\infty } \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q}E \vert \varepsilon \vert ^{q} I \biggl( \vert \varepsilon \vert \leq \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1}, \vert \varepsilon \vert \leq n^{\alpha } \biggr) \\ &{}+C\sum_{n=1}^{\infty }\sum _{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q}E \vert \varepsilon \vert ^{q} I \biggl( \vert \varepsilon \vert \leq \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1}, \vert \varepsilon \vert >n^{\alpha } \biggr) \\ \leq &C\sum_{n=1}^{\infty }\sum _{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q}E \vert \varepsilon \vert ^{q} I \bigl( \vert \varepsilon \vert \leq n^{\alpha } \bigr) \\ &{}+C\sum _{n=1}^{\infty }\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon \vert I \bigl( \vert \varepsilon \vert >n^{\alpha } \bigr) \\ \doteq & I_{311}+I_{312}. \end{aligned}$$

Similarly to \(I_{32}<\infty \), we have \(I_{312}<\infty \). Now we turn to prove \(I_{311}<\infty \). Indeed, noting that \(q>2/\alpha >1+1/\alpha \), by Lemma 3.4 we obtain that

$$\begin{aligned} I_{311} \leq &C\sum_{n=1}^{\infty } \max_{1\leq i\leq n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q-1}\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon \vert ^{q} I \bigl( \vert \varepsilon \vert \leq n^{\alpha } \bigr) \\ \leq &C\sum_{n=1}^{\infty }n^{-\alpha (q-1)}E \vert \varepsilon \vert ^{q}I \bigl( \vert \varepsilon \vert \leq n^{\alpha } \bigr) \\ =&C\sum_{j=1}^{\infty }E \vert \varepsilon \vert ^{q}I \bigl(j-1< \vert \varepsilon \vert ^{1/ \alpha }\leq j \bigr)\sum_{n=j}^{\infty }n^{-\alpha (q-1)} \\ \leq &C\sum_{j=1}^{\infty }j^{1-\alpha (q-1)}E \vert \varepsilon \vert ^{q}I \bigl(j-1< \vert \varepsilon \vert ^{1/\alpha }\leq j \bigr) \\ \leq &CE \vert \varepsilon \vert ^{1+1/\alpha }< \infty . \end{aligned}$$

Therefore we have proved \(I_{3}<\infty \). Finally, we show that \(I_{4}<\infty \). Observing that \(|X_{ni}|\leq |\int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}|\) and that \(E|\varepsilon |^{1+1/\alpha }<\infty \) implies \(E\varepsilon ^{2}<\infty \) (since \(1+1/\alpha >2\)), by Lemmas 3.4–3.5 and \(q>2/\alpha \) we obtain that

$$\begin{aligned} I_{4} \leq &C\sum_{n=1}^{\infty } \Biggl(\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{2}\cdot E \vert \varepsilon _{ni} \vert ^{2} \Biggr)^{q/2} \\ \leq &C\sum_{n=1}^{\infty } \Biggl(n^{-\alpha }\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert \cdot E \vert \varepsilon \vert ^{2} \Biggr)^{q/2} \\ \leq &C\sum_{n=1}^{\infty }n^{-\alpha q/2}< \infty . \end{aligned}$$

The proof is complete. □

Proof of Theorem 2.3

Note that by \(\boldsymbol{(H_{1})'}\) and Lemma 3.3 we have \(\tau _{k}=O(\sqrt{k}/2^{k})\) since \(\nu \geq 3/2\), and hence

$$ \bigl\vert Eg_{n}(t)-g(t) \bigr\vert =O \bigl( \sqrt{k}/2^{k}+n^{-\gamma } \bigr). $$

Hence, in view of (3), we only need to prove that for any \(\epsilon >0\),

$$ \sum_{n=1}^{\infty }P \Biggl( \Biggl\vert \sum_{i=1}^{n} \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \Biggr\vert >\epsilon n^{-r/2} \Biggr)< \infty . $$
(6)

We also assume without loss of generality that \(0<\int _{A_{ni}}E_{k}(t,s)\,ds\leq n^{-\alpha }\). Define for each \(1\leq i\leq n\), \(n\geq 1\),

$$\begin{aligned} Z_{ni} =&-n^{-\alpha /2}I \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}< -n^{-\alpha /2} \biggr) \\ &{}+ \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}I \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert \leq n^{-\alpha /2} \biggr) \\ &{}+n^{-\alpha /2}I \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}>n^{- \alpha /2} \biggr) \end{aligned}$$

and

$$\begin{aligned} U_{ni} =& \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}+n^{- \alpha /2} \biggr)I \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}< -n^{- \alpha /2} \biggr) \\ &{}+ \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}-n^{-\alpha /2} \biggr)I \biggl( \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}>n^{- \alpha /2} \biggr). \end{aligned}$$

Then \(\{Z_{ni},1\leq i\leq n\}\) are still m-END by Lemma 3.1. We can also decompose

$$\begin{aligned}& \sum_{n=1}^{\infty }P \Biggl( \Biggl\vert \sum_{i=1}^{n} \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \Biggr\vert >\epsilon n^{-r/2} \Biggr) \\& \quad \leq \sum_{n=1}^{\infty }P \Biggl(\bigcup _{i=1}^{n} \biggl\{ \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert >n^{-\alpha /2} \biggr\} \Biggr) +\sum _{n=1}^{\infty }P \Biggl( \Biggl\vert \sum _{i=1}^{n}Z_{ni} \Biggr\vert >\epsilon n^{-r/2} \Biggr) \\& \quad \doteq J_{1}+J_{2}. \end{aligned}$$

Similarly to the proof of \(I_{1}<\infty \), by \(0<\int _{A_{ni}}E_{k}(t,s)\,ds\leq n^{-\alpha }\), Lemmas 3.4–3.5, and \(E|\varepsilon |^{2+2/\alpha }<\infty \) we have that

$$\begin{aligned} J_{1} \leq &\sum_{n=1}^{\infty } \sum_{i=1}^{n}P \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert >n^{-\alpha /2} \biggr) \\ \leq &\sum_{n=1}^{\infty }n^{\alpha /2} \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon _{ni} \vert I \bigl( \vert \varepsilon _{ni} \vert >n^{\alpha /2} \bigr) \\ \leq &C\sum_{n=1}^{\infty }n^{\alpha /2} \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon \vert I \bigl( \vert \varepsilon \vert >n^{\alpha /2} \bigr) \\ \leq &C\sum_{j=1}^{\infty }E \vert \varepsilon \vert I \bigl(j< \vert \varepsilon \vert ^{2/ \alpha }\leq j+1 \bigr)\sum_{n=1}^{j}n^{\alpha /2} \\ \leq &C\sum_{j=1}^{\infty }j^{1+\alpha /2}E \vert \varepsilon \vert I \bigl(j< \vert \varepsilon \vert ^{2/\alpha }\leq j+1 \bigr)\leq CE \vert \varepsilon \vert ^{2+2/ \alpha }< \infty . \end{aligned}$$

By \(E\varepsilon _{ni}=0\), \(0<\int _{A_{ni}}E_{k}(t,s)\,ds\leq n^{-\alpha }\), and Lemmas 3.4–3.5 we have that

$$\begin{aligned} n^{r/2} \Biggl\vert \sum_{i=1}^{n}EZ_{ni} \Biggr\vert =&n^{r/2} \Biggl\vert \sum _{i=1}^{n}EU_{ni} \Biggr\vert \\ \leq &n^{r/2}\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert \cdot E \vert \varepsilon _{ni} \vert I \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert >n^{-\alpha /2} \biggr) \\ \leq &Cn^{r/2}E \vert \varepsilon \vert I\bigl( \vert \varepsilon \vert >n^{\alpha /2}\bigr)\leq Cn^{(r- \alpha )/2}E \vert \varepsilon \vert ^{2}I\bigl( \vert \varepsilon \vert >n^{\alpha /2}\bigr) \rightarrow 0,\quad n\rightarrow \infty . \end{aligned}$$

Hence by the Markov, \(C_{r}\), and Jensen inequalities and by Lemma 3.2 we obtain that for \(q>\max \{2+2/\alpha ,2/(\alpha -r)\}\),

$$\begin{aligned} J_{2} \leq &C\sum_{n=1}^{\infty }P \Biggl( \Biggl\vert \sum_{i=1}^{n}(Z_{ni}-EZ_{ni}) \Biggr\vert >\epsilon n^{-r/2}/2 \Biggr) \\ \leq &C\sum_{n=1}^{\infty }n^{rq/2}E \Biggl\vert \sum_{i=1}^{n}(Z_{ni}-EZ_{ni}) \Biggr\vert ^{q} \\ \leq &C\sum_{n=1}^{\infty }n^{rq/2} \sum_{i=1}^{n}E \vert Z_{ni} \vert ^{q}+C \sum_{n=1}^{\infty }n^{rq/2} \Biggl(\sum_{i=1}^{n}EZ_{ni}^{2} \Biggr)^{q/2} \\ \doteq & J_{3}+J_{4}. \end{aligned}$$

By the \(C_{r}\) inequality and Lemma 3.5 we have that

$$\begin{aligned} J_{3} \leq &C\sum_{n=1}^{\infty }n^{rq/2} \sum_{i=1}^{n}E \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert ^{q} I \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert \leq n^{-\alpha /2} \biggr) \\ &{}+C\sum_{n=1}^{\infty }n^{(r-\alpha )q/2} \sum_{i=1}^{n}P \biggl( \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni} \biggr\vert >n^{- \alpha /2} \biggr) \\ \leq &C\sum_{n=1}^{\infty }n^{rq/2} \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q}E \vert \varepsilon \vert ^{q} I \biggl( \vert \varepsilon \vert \leq \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1}\cdot n^{-\alpha /2} \biggr) \\ &{}+C\sum_{n=1}^{\infty }\sum _{i=1}^{n}P \biggl( \vert \varepsilon \vert > \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1}\cdot n^{-\alpha /2} \biggr) \\ \doteq & J_{31}+J_{32}. \end{aligned}$$

Similarly to \(J_{1}<\infty \), we have

$$ J_{32}\leq C\sum_{n=1}^{\infty }n^{\alpha /2} \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon \vert I \bigl( \vert \varepsilon \vert >n^{\alpha /2} \bigr)< \infty . $$

We can also easily obtain by \(r<\alpha \) that

$$\begin{aligned} J_{31} =&C\sum_{n=1}^{\infty } \sum_{i=1}^{n}n^{rq/2} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q}E \vert \varepsilon \vert ^{q} I \biggl( \vert \varepsilon \vert \leq \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1}\cdot n^{-\alpha /2}, \vert \varepsilon \vert \leq n^{\alpha /2} \biggr) \\ &{}+C\sum_{n=1}^{\infty }n^{rq/2} \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q} \\ &{}\times E \vert \varepsilon \vert ^{q} I \biggl( \vert \varepsilon \vert \leq \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{-1}\cdot n^{-\alpha /2}, \vert \varepsilon \vert >n^{\alpha /2} \biggr) \\ \leq &C\sum_{n=1}^{\infty }n^{\alpha q/2} \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q}E \vert \varepsilon \vert ^{q} I \bigl( \vert \varepsilon \vert \leq n^{\alpha /2} \bigr) \\ &{}+C\sum_{n=1}^{\infty }n^{\alpha /2} \sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon \vert I \bigl( \vert \varepsilon \vert >n^{\alpha /2} \bigr) \\ \doteq & J_{311}+J_{312}. \end{aligned}$$

Similarly to \(J_{32}<\infty \), we also have \(J_{312}<\infty \). For \(J_{311}\), noting that \(q>2+2/\alpha \), by Lemma 3.4 we obtain that

$$\begin{aligned} J_{311} \leq &C\sum_{n=1}^{\infty }n^{\alpha q/2} \max_{1\leq i\leq n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{q-1}\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert E \vert \varepsilon \vert ^{q} I \bigl( \vert \varepsilon \vert \leq n^{\alpha /2} \bigr) \\ \leq &C\sum_{n=1}^{\infty }n^{\alpha q/2-\alpha (q-1)}E \vert \varepsilon \vert ^{q}I \bigl( \vert \varepsilon \vert \leq n^{\alpha /2} \bigr) \\ =&C\sum_{j=1}^{\infty }E \vert \varepsilon \vert ^{q}I \bigl(j-1< \vert \varepsilon \vert ^{2/ \alpha }\leq j \bigr)\sum_{n=j}^{\infty }n^{\alpha -\alpha q/2} \\ \leq &C\sum_{j=1}^{\infty }j^{\alpha -\alpha q/2+1}E \vert \varepsilon \vert ^{q}I \bigl(j-1< \vert \varepsilon \vert ^{2/\alpha }\leq j \bigr) \\ \leq &CE \vert \varepsilon \vert ^{2+2/\alpha }< \infty . \end{aligned}$$

Hence we have shown that \(J_{3}<\infty \), and it remains to prove that \(J_{4}<\infty \). Observing that \(|Z_{ni}|\leq |\int _{A_{ni}}E_{k}(t,s)\,ds\,\varepsilon _{ni}|\) and \(E\varepsilon ^{2}<\infty \), by Lemmas 3.4–3.5 and \(q>2/(\alpha -r)\) we have that

$$\begin{aligned} J_{4} \leq &C\sum_{n=1}^{\infty }n^{rq/2} \Biggl(\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert ^{2}\cdot E \vert \varepsilon _{ni} \vert ^{2} \Biggr)^{q/2} \\ \leq &C\sum_{n=1}^{\infty }n^{rq/2} \Biggl(n^{-\alpha }\sum_{i=1}^{n} \biggl\vert \int _{A_{ni}}E_{k}(t,s)\,ds \biggr\vert \cdot E \vert \varepsilon \vert ^{2} \Biggr)^{q/2} \\ \leq &C\sum_{n=1}^{\infty }n^{-(\alpha -r)q/2}< \infty . \end{aligned}$$

The proof is complete. □

Availability of data and materials

Data sharing is not applicable to this paper as no datasets were generated or analyzed during the current study.

References

  1. Watson, G.S.: Smooth regression analysis. Sankhyā, Ser. A 26, 359–372 (1964)

  2. Nadaraya, E.A.: Remarks on non-parametric estimates for density functions and regression curves. Theory Probab. Appl. 15, 134–136 (1970)

  3. Priestley, M.B., Chao, M.T.: Non-parametric function fitting. J. R. Stat. Soc. Ser. B 34, 385–392 (1972)

  4. Gasser, T., Müller, H.G.: Smoothing Techniques for Curve Estimation, pp. 23–68. Springer, Heidelberg (1979)

  5. Georgiev, A.A.: Local properties of function fitting estimates with applications to system identification. In: Grossmann, W., et al. (eds.) Mathematical Statistics and Applications, Volume B, Proceedings 4th Pannonian Symposium on Mathematical Statistics, 4–10 September 1983, Bad Tatzmannsdorf, Austria, pp. 141–151. Reidel, Dordrecht (1985)

  6. Yang, S.C., Wang, Y.B.: Strong consistency of regression function estimator for negatively associated samples. Acta Math. Appl. Sin. 22(4), 522–530 (1999)

  7. Yang, S.C.: Uniformly asymptotic normality of the regression weighted estimator for negatively associated samples. Stat. Probab. Lett. 62(2), 101–110 (2003)

  8. Liang, H.Y., Jing, B.Y.: Asymptotic properties for estimates of nonparametric regression models based on negatively associated sequences. J. Multivar. Anal. 95(2), 227–245 (2005)

  9. Wang, X.J., Zheng, L.L., Xu, C., Hu, S.H.: Complete consistency for the estimator of nonparametric regression models based on extended negatively dependent errors. Statistics 49(2), 396–407 (2015)

  10. Chen, Z.Y., Wang, H.B., Wang, X.J.: The consistency for the estimator of nonparametric regression model based on martingale difference errors. Stat. Pap. 57(2), 451–469 (2016)

  11. Antoniadis, A., Gregoire, G., Mckeague, I.W.: Wavelet methods for curve estimation. J. Am. Stat. Assoc. 89, 1340–1353 (1994)

  12. Xue, L.G.: Strong uniform convergence rates of the wavelet estimator of regression function under complete and censored data. Acta Math. Appl. Sin. 25(3), 430–438 (2002) (in Chinese)

  13. Sun, Y., Chai, G.X.: Nonparametric wavelet estimation of a fixed designed regression function. Acta Math. Sci. 24A(5), 597–606 (2004) (in Chinese)

  14. Li, Y.M., Yang, S.C., Zhou, Y.: Consistency and uniformly asymptotic normality of wavelet estimator in regression model with associated samples. Stat. Probab. Lett. 78, 2947–2956 (2008)

  15. Liang, H.Y.: Asymptotic normality of wavelet estimator in heteroscedastic model with α-mixing errors. J. Syst. Sci. Complex. 24, 725–737 (2011)

  16. Li, Y.M., Wei, D.C., Xing, G.D.: Berry–Esseen bounds for wavelet estimator in a regression model with linear process errors. Stat. Probab. Lett. 81, 103–110 (2011)

  17. Tang, X.F., Xi, M.M., Wu, Y., Wang, X.J.: Asymptotic normality of a wavelet estimator for asymptotically negatively associated errors. Stat. Probab. Lett. 140, 191–201 (2018)

  18. Ding, L.W., Chen, P., Li, Y.M.: Consistency for wavelet estimator in nonparametric regression model with extended negatively dependent samples. Stat. Pap. 61, 2331–2349 (2020)

  19. Ding, L.W., Yao, S.W., Chen, P.: On Berry–Esseen bound of wavelet estimators in nonparametric regression model under asymptotically negatively associated assumptions. Commun. Stat., Simul. Comput. (2019, in press). https://doi.org/10.1080/03610918.2019.1659966

  20. Ding, L.W., Chen, P.: Statistical estimation for heteroscedastic semiparametric regression model with random errors. J. Nonparametr. Stat. 32(4), 940–969 (2020)

  21. Liu, L.: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 79, 1290–1298 (2009)

  22. Joag-Dev, K., Proschan, F.: Negative association of random variables with applications. Ann. Stat. 11, 286–295 (1983)

  23. Hu, T.Z.: Negatively superadditive dependence of random variables with applications. Chinese J. Appl. Probab. Statist. 16, 133–144 (2000)

  24. Lehmann, E.L.: Some concepts of dependence. Ann. Math. Stat. 37, 1137–1153 (1966)

  25. Liu, L.: Necessary and sufficient conditions for moderate deviations of dependent random variables with heavy tails. Sci. China Math. 53(6), 1421–1434 (2010)

  26. Shen, A.T.: Probability inequalities for END sequence and their applications. J. Inequal. Appl. 2011, 98 (2011)

  27. Wang, S.J., Wang, X.J.: Precise large deviations for random sums of END real-valued random variables with consistent variation. J. Math. Anal. Appl. 402(2), 660–667 (2013)

  28. Wu, Y.F., Guan, M.: Convergence properties of the partial sums for sequences of END random variables. J. Korean Math. Soc. 49(6), 1097–1110 (2012)

  29. Shen, A.T., Yao, M., Xiao, B.Q.: Equivalent conditions of complete convergence and complete moment convergence for END random variables. Chin. Ann. Math., Ser. B 39(1), 83–96 (2018)

  30. Yang, W.Z., Xu, H.Y., Chen, L., Hu, S.H.: Complete consistency of estimators for regression models based on extended negatively dependent errors. Stat. Pap. 59(2), 449–465 (2018)

  31. Wu, Y., Wang, X.J., Hu, T.C., Volodin, A.: Complete f-moment convergence for extended negatively dependent random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113, 333–351 (2019)

  32. Wang, X.J., Wu, Y., Hu, S.H.: Exponential probability inequality for m-END random variables and its applications. Metrika 79(2), 127–147 (2016)

  33. Xu, W.F., Wu, Y., Zhang, R., Jiang, H.L., Wang, X.J.: The mean consistency of the weighted estimator in the fixed design regression models based on m-END errors. J. Math. Inequal. 12(3), 765–775 (2018)

  34. Wang, Z.J., Wu, Y., Wang, M.G., Wang, X.J.: Complete and complete moment convergence with applications to the EV regression models. Statistics 53(2), 261–282 (2019)

  35. Wu, Q.Y.: Probability Limit Theory for Mixed Sequence. Science Press of China, Beijing (2006)

  36. Shen, A.T., Zhang, Y., Volodin, A.: Applications of the Rosenthal-type inequality for negatively superadditive dependent random variables. Metrika 78(3), 295–311 (2015)


Acknowledgements

The authors are very grateful to the anonymous referees and the editor for their many valuable comments and suggestions.

Funding

The paper is supported by the Key research topics of Chinese local Party schools (2019dfdxkt051), Natural Science Foundation of Anhui Province (190808MA20), and the Outstanding talent cultivation subsidy project of Anhui Provincial Education Department (gxgnfx2019163).

Author information


Contributions

Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Qihui He.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

He, Q., Chen, M. Consistency properties for the wavelet estimator in nonparametric regression model with dependent errors. J Inequal Appl 2021, 152 (2021). https://doi.org/10.1186/s13660-021-02603-0
