
The Berry-Esseen bounds of the wavelet estimator for a regression model whose errors form a linear process with ρ-mixing innovations

Abstract

In this paper we consider the nonparametric regression model \(Y_{ni}=g(t_{i})+\varepsilon_{i}\) (\(1\leq i\leq n\)), where \(\varepsilon_{i}=\sum_{j=-\infty}^{\infty}a_{j}e_{i-j}\) and the innovations \(\{e_{i}\}\) are identically distributed ρ-mixing random variables. We obtain Berry-Esseen bounds for the wavelet estimator of \(g(\cdot)\); under suitable conditions the rate of normal approximation is shown to be \(O(n^{-1/6})\).

1 Introduction

The Berry-Esseen theorem concerns the rate at which the distribution of a statistic converges to a limiting distribution: it bounds the supremum of the absolute distance between the two distribution functions, and obtaining the sharpest possible bound is an optimization problem.

In recent years, Berry-Esseen bounds have been investigated extensively. For instance, Xue [1] discussed the Berry-Esseen bound of an estimator of the variance in a semi-parametric regression model under mild conditions; Liang and Li [2] studied the asymptotic normality and a Berry-Esseen type bound of an estimator with linear process errors; Li et al. [3] derived Berry-Esseen bounds of the wavelet estimator for a nonparametric regression model with linear process errors generated by φ-mixing sequences; and Li et al. [4] investigated Berry-Esseen bounds of the wavelet estimator in a semi-parametric regression model with linear process errors.

We investigate estimation in the fixed-design nonparametric regression model with regression function \(g(\cdot)\) defined on \([0,1]\):

$$ Y_{i}=g(t_{i})+\varepsilon_{i} \quad (1\leq i\leq n), $$
(1.1)

where \(\{t_{i}\}\) are known fixed design points, assumed to be ordered \(0\leq t_{1}\leq\cdots\leq t_{n}\leq1\), and \(\{\varepsilon_{i}\}\) are random errors.

It is well known that regression function estimation is an important method in data analysis, with a wide range of applications in filtering and prediction for communication and control systems, pattern recognition and classification, and econometrics. Model (1.1) has therefore been studied widely: it has been applied to many practical problems, and various estimation methods have been used to construct estimators of \(g(\cdot)\).

For model (1.1), the following wavelet estimator of \(g(\cdot)\) will be considered:

$$ g_{n}(t)=\sum_{i=1} ^{n}Y_{i} \int_{A_{i}}E_{m}(t,s)\,ds. $$
(1.2)

The wavelet kernel \(E_{m}(t,s)\) is given by \(E_{m}(t,s)=2^{m}E_{0}(2^{m}t,2^{m}s)=2^{m}\sum_{k\in Z}\phi(2^{m}t-k)\phi(2^{m}s-k)\), where \(\phi(\cdot)\) is a scaling function, the smoothing parameter \(m=m(n)>0\) depends only on n, and \(\{A_{i}=[s_{i-1},s_{i}]\}\) is a partition of the interval \([0,1]\) with \(s_{i}=(1/2)(t_{i}+t_{i+1})\) and \(t_{i}\in A_{i}\), \(1\leq i\leq n\).
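As a concrete illustration of (1.2) (not part of the paper's argument), the following sketch implements the estimator with the Haar scaling function \(\phi=1_{[0,1)}\); this choice is for simplicity only, since the regularity conditions below require a smoother ϕ. With Haar, \(\int_{A_{i}}E_{m}(t,s)\,ds\) is just \(2^{m}\) times the length of \(A_{i}\cap I(t)\), where \(I(t)\) is the dyadic interval of length \(2^{-m}\) containing t, so \(g_{n}(t)\) is a local average of the \(Y_{i}\). The design, sample size, and test function are hypothetical choices.

```python
import numpy as np

def haar_integral(t, a, b, m):
    """int_a^b E_m(t,s) ds for the Haar scaling function: E_m(t,s) = 2^m exactly when
    t and s lie in the same dyadic cell I(t) = [k 2^-m, (k+1) 2^-m), so the integral
    equals 2^m * length([a,b] intersect I(t))."""
    k = np.floor(2.0 ** m * t)
    lo, hi = k * 2.0 ** -m, (k + 1) * 2.0 ** -m
    return (2.0 ** m) * max(0.0, min(b, hi) - max(a, lo))

def wavelet_estimator(t, y, t_design, m):
    """g_n(t) = sum_i Y_i int_{A_i} E_m(t,s) ds with A_i = [s_{i-1}, s_i],
    s_i = (t_i + t_{i+1})/2, s_0 = 0, s_n = 1."""
    edges = np.concatenate(([0.0], (t_design[:-1] + t_design[1:]) / 2.0, [1.0]))
    return sum(y[i] * haar_integral(t, edges[i], edges[i + 1], m)
               for i in range(len(y)))

rng = np.random.default_rng(0)
n, m = 400, 4                                  # 2^m/n -> 0, in the spirit of (A6)
t_design = (np.arange(1, n + 1) - 0.5) / n     # equispaced design, as in (A5)
g = lambda x: np.sin(2 * np.pi * x)            # hypothetical regression function
y = g(t_design) + 0.1 * rng.standard_normal(n)
print(abs(wavelet_estimator(0.37, y, t_design, m) - g(0.37)))
```

Since \(\int_{0}^{1}E_{m}(t,s)\,ds=1\) for the Haar kernel, the weights \(\int_{A_{i}}E_{m}(t,s)\,ds\) sum to 1 and the estimator is a genuine local average.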

Wavelets have been used widely in many engineering and technological fields, especially in image processing. Since the 1990s, in order to meet practical demands, some authors have applied wavelet methods in statistics.

Definition 1.1

Let \(\{X_{i}: i=1,2,\ldots\}\) be a sequence of random variables. Define the ρ-mixing coefficient

$$\rho(n)=\sup_{k\in N}\sup_{X\in L_{2}(\mathfrak {F}^{k}_{1}),Y\in L_{2}(\mathfrak{F}^{\infty}_{k+n})} \frac {|\mathrm{E}(X-\mathrm{E}X)(Y-\mathrm{E}Y)|}{\sqrt{\operatorname{Var} X \operatorname{Var} Y}}, $$

where \(\mathfrak{F}^{b}_{a}:=\sigma\{X_{i}:a\leq i\leq b\}\) and \(L_{2}(\mathfrak{F}^{b}_{a})\) is the set of \(\mathfrak{F}^{b}_{a}\)-measurable square integrable random variables.

Definition 1.2

A sequence of random variables \(\{X_{i}: i=1,2,\ldots\}\) is ρ-mixing if \(\rho(n)\to0\) as \(n\to\infty\).

Kolmogorov and Rozanov [5] introduced ρ-mixing sequences of random variables. Because ρ-mixing arises widely in science, technology, and economics, many scholars have investigated it and obtained fruitful results: the central limit theorem, the law of large numbers, the strong and weak invariance principles, and complete convergence theorems for ρ-mixing sequences, for which see Shao’s work [6, 7]. Recently, Jiang [8] discussed convergence rates in the law of the logarithm for ρ-mixing random variables, obtaining a sufficient condition for the law of the logarithm and convergence rates in the law of the iterated logarithm. Chen and Liu [9] obtained sufficient and necessary conditions for complete moment convergence of a sequence of identically distributed ρ-mixing random variables. Zhou and Lin [10] investigated estimation in partially linear models for longitudinal data with ρ-mixing error structures: they established the strong consistency of the least squares estimator of the parametric component, and the strong and uniform consistency of the estimator of the nonparametric function under mild conditions. Tan and Wang [11] studied complete convergence for weighted sums of non-identically distributed ρ-mixing random variables and gave a Marcinkiewicz-Zygmund type strong law of large numbers.

2 Assumptions and main results

First, we give some basic assumptions as follows:

  1. (A1)

\(\{\varepsilon_{j}\}_{j\in Z}\) has the linear representation \(\varepsilon_{j}=\sum_{k=-\infty}^{\infty}a_{k}e_{j-k}\), where \(\{a_{k}\}\) is a sequence of real numbers with \(\sum_{k=-\infty}^{\infty}|a_{k}|<\infty\), and \(\{e_{j}\}\) are identically distributed ρ-mixing random variables with \(\mathrm{E}e_{j}=0\), \(\mathrm{E}|e_{j}|^{r}<\infty\) for some \(r>2\), and \(\rho(n)=O(n^{-\lambda})\) for some \(\lambda>2\).

  2. (A2)

    The spectral density function \(f(\omega)\) of \(\{\varepsilon_{i}\}\) satisfies \(0< c_{1}\leq f(\omega)\leq c_{2}<\infty\) for all \(\omega\in(-\pi, \pi]\).

  3. (A3)
    1. (i)

\(\phi(\cdot)\) is σ-regular (\(\phi \in S_{\sigma}\), \(\sigma\in N\)); that is, for any \(\kappa\leq\sigma\) and any integer \(l\), one has \(|d^{\kappa}\phi/dx^{\kappa}|\leq C_{l}(1+|x|)^{-l}\), where \(C_{l}\) depends only on \(l\);

    2. (ii)

\(\phi(\cdot)\) satisfies the Lipschitz condition of order 1 and \(|\hat{\phi}(\xi)-1|=O(\xi)\) as \(\xi\to0\), where ϕ̂ is the Fourier transform of ϕ.

  4. (A4)
    1. (i)

      \(g(\cdot)\) satisfies the Lipschitz condition of order 1;

    2. (ii)

\(g(\cdot)\in H^{\mu}\), \(\mu>1/2\), where \(H^{\mu}\) (\(\mu \in R\)) is the Sobolev space of order μ; that is, \(h\in H^{\mu}\) if \(\int|\hat{h}(w)|^{2}(1+w^{2})^{\mu}\, dw<\infty\), where ĥ is the Fourier transform of h.

  5. (A5)

    \(\max_{1\leq i\leq n}|s_{i}-s_{i-1}-n^{-1}|=o(n^{-1})\).

  6. (A6)

Set \(p:=p(n)\) and \(q:=q(n)\), and write \(k:=[3n/(p+q)]\), where p and q are chosen so that \(p+q\leq 3n\), \(qp^{-1}\to0\), and \(\zeta_{in}\to0\), \(i=1,2,3,4\), with \(\zeta _{1n}=qp^{-1} 2^{m}\), \(\zeta_{2n}=p \frac{2^{m}}{n}\), \(\zeta_{3n}=n(\sum_{|j|>n}|a_{j}|)^{2}\), \(\zeta_{4n}= k\rho(q)\).
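To make (A1) concrete, the following sketch generates linear-process errors of the assumed form. The choices here are illustrative only and not from the paper: a stationary Gaussian AR(1) base sequence (exponentially mixing, hence ρ-mixing) and geometric weights \(a_{k}=0.4^{|k|}\), truncated at \(|k|\leq K\).

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_innovations(n_total, phi=0.5):
    """A stationary Gaussian AR(1) sequence: exponentially mixing, hence rho-mixing
    with polynomial rate rho(n) = O(n^-lambda) for every lambda, as (A1) requires."""
    e = np.empty(n_total)
    e[0] = rng.standard_normal() / np.sqrt(1 - phi ** 2)   # stationary start
    for j in range(1, n_total):
        e[j] = phi * e[j - 1] + rng.standard_normal()
    return e

def linear_process(n, a, K):
    """eps_j = sum_{|k|<=K} a(k) e_{j-k}: a truncated version of the two-sided
    linear representation in (A1); the weights a(k) must be absolutely summable."""
    e = ar1_innovations(n + 2 * K)                         # e[i] plays the role of e_{i-K}
    return np.array([sum(a(k) * e[j - k + K] for k in range(-K, K + 1))
                     for j in range(n)])

a = lambda k: 0.4 ** abs(k)        # sum_k |a_k| = 7/3 < infinity
eps = linear_process(500, a, K=30)
print(eps.shape, round(float(np.var(eps)), 3))
```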

Remark 2.1

(A1) contains standard conditions on ρ-mixing sequences, as in Shao [6, 7], and is weaker than the corresponding assumption of Li et al. [3]; (A2)-(A5) are mild regularity conditions for wavelet estimation used in the recent literature, such as Li et al. [3, 4, 12] and Liang and Qi [13]. In (A6), p, q, and \(2^{m}\) can be taken to be increasing sequences, and the requirements \(\zeta_{in}\to0\), \(i=1,2,3,4\), are easily satisfied if p, q, and \(2^{m}\) are chosen reasonably; see e.g. Liang and Li [2], Li et al. [3, 4, 12].

In order to facilitate the discussion, write \(\sigma_{n}^{2}:=\sigma _{n}^{2}(t)={\operatorname{Var}}(g_{n}(t))\), \(S_{n}:=S_{n}(t)=\sigma_{n}^{-1} \{g_{n}(t)-\mathrm{E}g_{n}(t)\}\), \(u(n)=\sum_{j=n}^{\infty} \rho(j)\), \(\| X \|_{\beta}= (\mathrm{E}|X|^{\beta})^{1/\beta} \), and \(a\wedge b=\min\{a, b\}\). Next, we give the main results.

Theorem 2.1

Suppose that (A1)-(A6) hold. Then for each \(t\in[0,1]\),

$$\sup_{u}\bigl\vert \mathrm{P}\bigl(S_{n}(t)\leq u\bigr) -\Phi(u)\bigr\vert \leq C \bigl\{ \zeta_{1n}^{1/3}+ \zeta_{2n}^{1/3}+\zeta _{2n}^{\delta/2}+ \zeta_{3n}^{1/3}+ \zeta_{4n}^{1/4} + u(q) \bigr\} . $$

Corollary 2.1

Suppose that (A1)-(A6) hold. Then for each \(t\in[0,1]\),

$$\sup_{u}\bigl\vert \mathrm{P}\bigl(S_{n}(t)\leq u\bigr) -\Phi(u)\bigr\vert =o(1). $$

Corollary 2.2

Suppose that (A1)-(A6) hold and assume that

$$\frac{2^{m}}{n}=O(n^{-\theta})\quad \textit{and}\quad \sup_{n\geq 1} \bigl(n^{\frac{6\lambda\theta+3\lambda+3\theta+4}{2(6\lambda +7)}}\bigr)\sum_{|j|>n}|a_{j}| < \infty $$

for some \(\frac{2}{9-6\lambda}<\theta\leq1\) and some \(\lambda> 2\). Then

$$\begin{aligned}& \sup_{u}\bigl\vert \mathrm{P}\bigl(S_{n}(t)\leq u\bigr) -\Phi(u)\bigr\vert \leq C \bigl\{ \zeta_{1n}^{1/3}+ \zeta_{2n}^{1/3}+ \zeta _{3n}^{1/3}+ \zeta_{4n}^{1/4} + u(q) \bigr\} , \\& \sup_{u}\bigl\vert \mathrm{P}\bigl(S_{n}(t)\leq u\bigr) -\Phi(u)\bigr\vert =O \bigl(n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda+7}} \bigr). \end{aligned}$$

Observe that, taking \(\theta\approx1\) and letting \(\lambda\to\infty\), it follows that \(\sup_{u}\vert \mathrm{P}(S_{n}(t)\leq u) -\Phi (u)\vert =O(n^{-1/6})\).
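This limiting rate can be checked by direct arithmetic on the exponent in Corollary 2.2. The sketch below (pure arithmetic, no new assumptions) evaluates \((\lambda(2\theta-1)+(\theta-1))/(6\lambda+7)\) at \(\theta=1\) for increasing λ and confirms the approach to 1/6.

```python
from fractions import Fraction

def rate_exponent(lam, theta):
    """Exponent of n in Corollary 2.2: (lambda(2 theta - 1) + (theta - 1)) / (6 lambda + 7),
    computed in exact rational arithmetic."""
    lam, theta = Fraction(lam), Fraction(theta)
    return (lam * (2 * theta - 1) + (theta - 1)) / (6 * lam + 7)

for lam in (3, 10, 100, 10 ** 6):
    print(lam, float(rate_exponent(lam, 1)))
# at theta = 1 the exponent is lambda/(6 lambda + 7), which increases to 1/6
```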

3 Some lemmas

From (1.2), we can see that

$$\begin{aligned} S_{n} = & \sigma_{n}^{-1}\sum _{i=1}^{n} \varepsilon_{i} \int_{A_{i}} E_{m}(t,s) \,ds \\ =&\sigma_{n}^{-1}\sum_{i=1}^{n} \int_{A_{i}} E_{m}(t,s) \,ds \sum _{j=-n}^{n}a_{j}e_{i-j} \\ &{}+ \sigma_{n}^{-1}\sum_{i=1}^{n} \int_{A_{i}} E_{m}(t,s) \,ds \sum _{|j|>n}a_{j}e_{i-j} \\ :=& S_{1n}+S_{2n}. \end{aligned}$$

Write

$$S_{1n}=\sum_{l=1-n}^{2n} \sigma_{n}^{-1} \Biggl(\sum_{i=\max\{1,l-n\}}^{\min\{n,l+n\}}a_{i-l} \int_{A_{i}} E_{m}(t,s) \,ds \Biggr)e_{l} := \sum_{l=1-n}^{2n}W_{nl}, $$

set \(S_{1n}=S_{1n}^{\prime}+S_{1n}^{\prime\prime}+S_{1n}^{\prime\prime \prime}\), where \(S_{1n}^{\prime}=\sum_{v=1}^{k}y_{nv}\), \(S_{1n}^{\prime\prime}=\sum_{v=1}^{k}y_{nv}^{\prime}\), \(S_{1n}^{\prime\prime\prime}=y^{\prime}_{n k+1}\),

$$\begin{aligned}& y_{nv}=\sum_{i=k_{v}}^{k_{v}+p-1}W_{ni}, \qquad y_{nv}^{\prime}=\sum_{i=l_{v}}^{l_{v}+q-1}W_{ni}, \qquad y_{nk+1}^{\prime}=\sum_{i=k(p+q)-n+1}^{2n}W_{ni}, \\& k_{v}=(v-1) (p+q)+1-n, \qquad l_{v}=(v-1) (p+q)+p+1-n, \qquad v=1,\ldots,k, \end{aligned}$$

then

$$S_{n}=S_{1n}^{\prime}+S_{1n}^{\prime\prime}+S_{1n}^{\prime\prime \prime}+S_{2n}. $$
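The decomposition above is the classical big-block/small-block scheme. As an illustrative sanity check (with hypothetical n, p, q), the code below builds the big blocks, the small blocks, and the remainder block, and verifies that they tile the index range \(l=1-n,\ldots,2n\).

```python
def block_indices(n, p, q):
    """Bernstein big/small block partition of l = 1-n, ..., 2n (3n indices):
    k big blocks of length p, k small blocks of length q, plus a remainder block,
    mirroring the definitions of k_v and l_v above."""
    k = (3 * n) // (p + q)
    big, small = [], []
    for v in range(1, k + 1):
        k_v = (v - 1) * (p + q) + 1 - n       # start of the v-th big block
        l_v = k_v + p                          # start of the v-th small block
        big.append(list(range(k_v, k_v + p)))
        small.append(list(range(l_v, l_v + q)))
    remainder = list(range(k * (p + q) - n + 1, 2 * n + 1))
    return big, small, remainder

big, small, rem = block_indices(n=100, p=12, q=3)
flat = sorted(sum(big, []) + sum(small, []) + rem)
print(flat == list(range(1 - 100, 2 * 100 + 1)))  # True: the blocks tile the index range
```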

Next, we give the main lemmas as follows.

Lemma 3.1

Let \(\{X_{i}: i=1,2,\ldots\}\) be a ρ-mixing sequence, and let \(p_{1}\), \(p_{2}\) be two positive integers. Set \(\eta_{l}:=\sum^{(l-1)(p_{1}+p_{2})+p_{1}}_{i=(l-1)(p_{1}+p_{2})+1}X_{i}\) for \(1\leq l\leq k\). If \(r>1\), \(s>1\), and \(1/r+1/s=1\), then

$$\Biggl\vert \mathrm{E}\exp \Biggl(it\sum_{l=1}^{k} \eta_{l} \Biggr)-\prod_{l=1} ^{k} \mathrm{E}\exp(it\eta_{l})\Biggr\vert \leq C|t|\rho^{1/s}(p_{2}) \sum_{l=1}^{k}\|\eta_{l} \|_{r}. $$

Proof of Lemma 3.1

We can easily see that

$$\begin{aligned} I_{0}: =&\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m}}(t_{1}, \ldots,t_{m})-\varphi_{\xi _{1}}(t_{1})\cdots \varphi_{\xi_{m}}(t_{m})\bigr\vert \\ \leq&\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m}}(t_{1},\ldots,t_{m})- \varphi_{\xi _{1},\ldots,\xi_{m-1}}(t_{1},\ldots,t_{m-1}) \varphi_{\xi_{m}}(t_{m})\bigr\vert \\ &{}+\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m-1}}(t_{1},\ldots ,t_{m-1})-\varphi_{\xi_{1}}(t_{1})\cdots \varphi_{\xi _{m-1}}(t_{m-1})\bigr\vert =:I_{1}+I_{2}. \end{aligned}$$
(3.1)

As \(\exp(ix) = \cos(x)+i \sin(x)\), \(\sin(x+y) = \sin(x) \cos(y)+\cos(x) \sin(y)\), \(\cos(x+y) = \cos(x) \cos(y)-\sin(x) \sin(y)\), we can get

$$\begin{aligned} I_{1} =&\Biggl\vert \mathrm{E} \exp \Biggl(i\sum _{l=1}^{m}t_{l}\xi_{l} \Biggr)-\mathrm{E}\exp \Biggl(i\sum_{l=1}^{m-1}t_{l} \xi_{l} \Biggr)\mathrm{E} \exp(it_{m}\xi_{m})\Biggr\vert \\ \leq&\Biggl\vert \operatorname{Cov} \Biggl(\cos \Biggl(\sum _{l=1}^{m-1}t_{l}\xi_{l} \Biggr), \cos(t_{m}\xi_{m}) \Biggr)\Biggr\vert +\Biggl\vert \operatorname{Cov} \Biggl(\sin \Biggl(\sum_{l=1}^{m-1}t_{l} \xi_{l} \Biggr), \sin(t_{m}\xi_{m}) \Biggr)\Biggr\vert \\ &{}+\Biggl\vert \operatorname{Cov} \Biggl(\sin \Biggl(\sum _{l=1}^{m-1}t_{l}\xi_{l} \Biggr), \cos(t_{m}\xi_{m}) \Biggr)\Biggr\vert +\Biggl\vert \operatorname{Cov} \Biggl(\cos \Biggl(\sum_{l=1}^{m-1}t_{l} \xi_{l} \Biggr), \sin(t_{m}\xi_{m}) \Biggr)\Biggr\vert \\ =:&I_{11}+I_{12}+I_{13}+I_{14}. \end{aligned}$$
(3.2)

It follows from Lemma A.1 and \(|\sin(x)|\leq|x|\) that

$$ \begin{aligned} &I_{12}\leq C\rho^{1/s}(1)\bigl\Vert \sin(t_{m} \xi_{m})\bigr\Vert _{r}\leq C\rho^{1/s}(1)|t_{m}| \Vert \xi_{m}\Vert _{r}, \\ &I_{14}\leq C\rho^{1/s}(1)|t_{m}|\Vert \xi_{m}\Vert _{r}. \end{aligned} $$
(3.3)

Notice that \(\cos(2x) = 1-2\sin^{2}(x)\). Then one has

$$\begin{aligned} I_{11} =&\Biggl\vert \operatorname{Cov} \Biggl(\cos \Biggl(\sum_{l=1}^{m-1}t_{l} \xi_{l} \Biggr), 1-2\sin^{2}(t_{m} \xi_{m}/2) \Biggr)\Biggr\vert \\ =&2\Biggl\vert \operatorname{Cov} \Biggl(\cos \Biggl(\sum _{l=1}^{m-1}t_{l}\xi_{l} \Biggr), \sin^{2}(t_{m}\xi_{m}/2) \Biggr)\Biggr\vert \leq C\rho^{1/s}(1)\mathrm{E}^{1/r}\bigl\vert \sin(t_{m}\xi_{m}/2)\bigr\vert ^{2r} \\ \leq&C\rho^{1/s}(1)\mathrm{E}^{1/r}\bigl\vert \sin(t_{m} \xi_{m}/2)\bigr\vert ^{r} \leq C\rho^{1/s}(1)|t_{m}| \| \xi_{m}\|_{r}. \end{aligned}$$
(3.4)

Similarly,

$$ I_{13}\leq C\rho^{1/s}(1)|t_{m}|\| \xi_{m}\|_{r}. $$
(3.5)

Therefore, we can obtain

$$ I_{1}\leq C\rho^{1/s}(1)|t_{m}|\| \xi_{m}\|_{r}. $$
(3.6)

Thus, from (3.1) and (3.6), we get

$$ I_{0}=\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m}}(t_{1}, \ldots,t_{m})-\varphi_{\xi _{1}}(t_{1})\cdots \varphi_{\xi_{m}}(t_{m})\bigr\vert \leq C\rho^{1/s}(1)|t_{m}| \| \xi _{m}\|_{r}+I_{2}. $$
(3.7)

For \(I_{2}\) in (3.7), using the same decomposition as in (3.1) above, it can be found that

$$\begin{aligned} I_{2} :=&\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m-1}}(t_{1}, \ldots,t_{m-1})-\varphi _{\xi_{1}}(t_{1})\cdots \varphi_{\xi_{m-1}}(t_{m-1})\bigr\vert \\ \leq&\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m-1}}(t_{1},\ldots,t_{m-1})- \varphi _{\xi_{1},\ldots,\xi_{m-2}}(t_{1},\ldots,t_{m-2}) \varphi_{\xi_{m-1}}(t_{m-1})\bigr\vert \\ &{}+\bigl\vert \varphi_{\xi_{1},\ldots,\xi_{m-2}}(t_{1},\ldots ,t_{m-2})-\varphi_{\xi_{1}}(t_{1})\cdots \varphi_{\xi _{m-2}}(t_{m-2})\bigr\vert =:I_{3}+I_{4}, \end{aligned}$$
(3.8)

and similarly to the proof of \(I_{1}\), we can get

$$I_{3}\leq C\rho^{1/s}(1)|t_{m-1}|\| \xi_{m-1}\|_{r}. $$

Then, we obtain

$$ I_{2}\leq C\rho^{1/s}(1)|t_{m-1}|\| \xi_{m-1}\|_{r}+I_{4}. $$
(3.9)

Combining (3.7)-(3.9) and iterating the argument, we complete the proof of Lemma 3.1. □

Lemma 3.2

Suppose that (A1)-(A5) hold. Then

$$\sigma_{n}^{2}(t)\geq C 2^{m}n^{-1} \quad \textit{and} \quad \sigma _{n}^{-2}(t)\biggl\vert \int_{A_{i}} E_{m}(t,s) \,ds\biggr\vert \leq C. $$

Proof of Lemma 3.2

From (A1) and (1.2), we can get

$$\begin{aligned} \sigma_{n}^{2} =&\sigma^{2}\sum _{i=1}^{n}\biggl( \int_{A_{i}} E_{m}(t,s) \,ds\biggr)^{2}+2\sum _{1\leq i< j\leq n}\operatorname{Cov}(\varepsilon_{i}, \varepsilon_{j}) \int _{A_{i}} E_{m}(t,s) \,ds \int_{A_{j}} E_{m}(t,s) \,ds \\ =& \sigma^{2}\sum_{i=1}^{n} \biggl( \int_{A_{i}} E_{m}(t,s) \,ds\biggr)^{2}+I_{3}. \end{aligned}$$

By Lemma A.1, we obtain

$$\begin{aligned} I_{3} \leq&20\sum_{1\leq i< j\leq n}\rho(j-i)\biggl\vert \int_{A_{i}} E_{m}(t,s) \,ds \int_{A_{j}} E_{m}(t,s) \,ds\biggr\vert \| \varepsilon_{i}\|_{2}\|\varepsilon_{j} \|_{2} \\ =& 20\sigma^{2}\sum_{k=1}^{n-1}\rho(k)\sum _{i=1}^{n-k}\biggl\vert \int_{A_{i}} E_{m}(t,s) \,ds \int_{A_{k+i}} E_{m}(t,s) \,ds\biggr\vert \\ \leq& 10\sigma^{2}\sum_{k=1}^{n-1} \rho(k)\sum_{i=1}^{n-k} \biggl[ \biggl( \int_{A_{i}} E_{m}(t,s) \,ds \biggr)^{2}+ \biggl( \int_{A_{k+i}} E_{m}(t,s) \,ds \biggr)^{2} \biggr] \\ \leq& 20\sigma^{2}\sum_{k=1}^{n} \rho(k)\sum_{i=1}^{n} \biggl( \int _{A_{i}} E_{m}(t,s) \,ds \biggr)^{2}. \end{aligned}$$

Therefore, applying (A1) and Lemma A.5, we obtain

$$\sigma_{n}^{2}\leq\sigma^{2} \Biggl(1+20\sum _{k=1}^{n}\rho(k) \Biggr)\sum _{i=1}^{n} \biggl( \int_{A_{i}} E_{m}(t,s) \,ds \biggr)^{2}\leq C2^{m}n^{-1}. $$

In addition, a lower bound of the same order can be deduced as in Liang and Qi [13]; thus, by (A2), (A4), and (A5), we have

$$\sigma_{n}^{2}(t)\geq C 2^{m}n^{-1} \quad \mbox{and} \quad \sigma _{n}^{-2}(t)\biggl\vert \int_{A_{i}} E_{m}(t,s) \,ds\biggr\vert \leq C. $$

 □
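The upper bound just obtained rests on \(\sum_{i=1}^{n}(\int_{A_{i}}E_{m}(t,s)\,ds)^{2}=O(2^{m}n^{-1})\). This order can be checked numerically in a simplified setting (Haar scaling function and an equispaced partition, an assumption for illustration only), where the sum is in fact exactly \(2^{m}/n\) whenever the dyadic cell boundaries fall on partition edges:

```python
import numpy as np

def sum_sq_integrals(n, m, t):
    """sum_i (int_{A_i} E_m(t,s) ds)^2 for the Haar scaling function and the
    equispaced partition A_i = [(i-1)/n, i/n]; the integral over A_i equals
    2^m times the overlap of A_i with the dyadic cell containing t."""
    edges = np.linspace(0.0, 1.0, n + 1)
    k = np.floor(2.0 ** m * t)
    lo, hi = k * 2.0 ** -m, (k + 1) * 2.0 ** -m
    overlap = np.clip(np.minimum(edges[1:], hi) - np.maximum(edges[:-1], lo), 0.0, None)
    return float(np.sum((2.0 ** m * overlap) ** 2))

for n, m in ((256, 4), (1024, 6)):
    print(sum_sq_integrals(n, m, t=0.3), 2.0 ** m / n)   # the two columns coincide
```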

Lemma 3.3

Assume that (A1)-(A6) hold. Then:

  1. (1)

    \(\mathrm{E}(S_{1n}^{\prime\prime})^{2}\leq C\zeta_{1n}\), \(\mathrm{E}(S_{1n}^{\prime\prime\prime})^{2}\leq C\zeta_{2n}\), \(\mathrm{E}(S_{2n})^{2}\leq C\zeta_{3n}\);

  2. (2)

    \(\mathrm{P}(|S''_{1n}|\geq\zeta_{1n}^{1/3}) \leq C \zeta_{1n}^{1/3}\), \(\mathrm{P}(|S'''_{1n}|\geq\zeta_{2n}^{1/3}) \leq C \zeta_{2n}^{1/3}\), \(\mathrm{P}(|S_{2n}|\geq\zeta_{3n}^{1/3}) \leq C \zeta_{3n}^{1/3}\).

Proof of Lemma 3.3

Under (A1)-(A6), applying Lemmas 3.2 and A.3(i) in the Appendix, the proof follows along the same lines as that of Lemma 3.1 in Li et al. [3]. □

Lemma 3.4

Assume that (A1)-(A6) hold and let \(s_{n}^{2}=\sum_{v=1}^{k}{ \operatorname{Var}(y_{nv})}\). Then

$$\bigl\vert s_{n}^{2}-1\bigr\vert \leq C\bigl( \zeta_{1n}^{1/2}+\zeta_{2n}^{1/2}+ \zeta_{3n}^{1/2}+ u(q)\bigr). $$

Let \(\{\eta_{nv}:v=1, \ldots, k\}\) be independent random variables with \(\eta_{nv}\stackrel{\mathcal{D}}{=}y_{nv}\), \(v=1, \ldots, k\), and set \(T_{n}=\sum_{v=1}^{k}{\eta_{nv}}\).

Proof of Lemma 3.4

Let \(\Delta _{n}=\sum_{1\leq i< j\leq k} {\operatorname{Cov}(y_{ni},y_{nj})}\), then \(s_{n}^{2}=\mathrm{E}(S_{1n}^{\prime})^{2}-2\Delta_{n}\). By \(\mathrm{E}(S_{n})^{2}=1\), Lemma 3.3(1), the \(C_{r}\)-inequality, and the Cauchy-Schwarz inequality, we have

$$\begin{aligned}& \mathrm{E}\bigl(S^{\prime}_{1n}\bigr)^{2}=\mathrm{E} \bigl[S_{n}-\bigl(S^{\prime\prime}_{1n} +S^{\prime\prime\prime}_{1n}+S_{2n} \bigr)\bigr]^{2}=1 +\mathrm{E}\bigl(S^{\prime\prime}_{1n}+S^{\prime\prime\prime }_{1n}+S_{2n} \bigr)^{2}-2\mathrm{E}\bigl[S_{n}\bigl(S^{\prime\prime}_{1n}+S^{\prime\prime\prime }_{1n}+S_{2n} \bigr)\bigr], \\& \mathrm{E}\bigl(S^{\prime\prime}_{1n}+S^{\prime\prime\prime}_{1n}+S_{2n} \bigr)^{2}\leq 2\bigl[\mathrm{E}\bigl(S^{\prime\prime}_{1n} \bigr)^{2}+\mathrm{E}\bigl(S^{\prime\prime\prime }_{1n} \bigr)^{2}+\mathrm{E}(S_{2n})^{2}\bigr]\leq C( \zeta_{1n}+\zeta_{2n}+\zeta_{3n}), \\& \mathrm{E}\bigl[S_{n}\bigl(S^{\prime\prime}_{1n}+S^{\prime\prime\prime }_{1n}+S_{2n} \bigr)\bigr]\leq \mathrm{E}^{{1}/{2}}\bigl(S^{2}_{n} \bigr)\mathrm{E}^{{1}/{2}}\bigl(S^{\prime\prime }_{1n}+S^{\prime\prime\prime}_{1n}+S_{2n} \bigr)^{2}\leq C\bigl(\zeta ^{{1}/{2}}_{1n}+ \zeta^{{1}/{2}}_{2n}+\zeta^{{1}/{2}}_{3n}\bigr). \end{aligned}$$

It follows that

$$\begin{aligned} \bigl\vert \mathrm{E}\bigl(S_{1n}^{\prime} \bigr)^{2}-1\bigr\vert =& \bigl\vert \mathrm{E} \bigl(S_{1n}^{\prime \prime} +S_{1n}^{\prime\prime\prime}+S_{2n} \bigr)^{2} - 2{\mathrm{E}}\bigl\{ S_{n} \bigl(S_{1n}^{\prime\prime}+S_{1n}^{\prime\prime\prime}+S_{2n} \bigr)\bigr\} \bigr\vert \\ \leq& C\bigl(\zeta_{1n}^{1/2}+\zeta_{2n}^{1/2}+ \zeta_{3n}^{1/2}\bigr). \end{aligned}$$
(3.10)

On the other hand, from the basic definition of ρ-mixing, Lemmas 3.2, A.5(iv), and (A1), we can prove that

$$\begin{aligned} |\Delta_{n}| \leq& \sum_{1\leq i< j\leq k} \sum_{s_{1}=k_{i}}^{k_{i}+p-1} \sum _{t_{1}=k_{j}}^{k_{j}+p-1}{\bigl|{\operatorname{Cov}}(W_{ns_{1}}, W_{nt_{1}})\bigr|} \\ \leq& \sum_{1\leq i< j\leq k}\sum_{s_{1}=k_{i}}^{k_{i}+p-1} \sum_{t_{1}=k_{j}}^{k_{j}+p-1} \sum _{u=\max\{1,s_{1}-n\}}^{\min\{n,s_{1}+n\}} \sum_{v=\max\{1,t_{1}-n\}}^{\min\{n,t_{1}+n\}} \sigma _{n}^{-2}\biggl\vert \int_{A_{u}} E_{m}(t,s) \,ds \int_{A_{v}} E_{m}(t,s) \,ds \biggr\vert \\ &{}\cdot|a_{u-s_{1}}a_{v-t_{1}}| \bigl\vert { \operatorname{Cov}}(e_{s_{1}},e_{t_{1}})\bigr\vert \\ \leq& C \sum_{1\leq i< j\leq k}\sum _{s_{1}=k_{i}}^{k_{i}+p-1}\sum_{t_{1}=k_{j}}^{k_{j}+p-1} \sum_{u=\max\{1,s_{1}-n\}}^{\min\{n,s_{1}+n\}} \sum _{v=\max\{1,t_{1}-n\}}^{\min\{n,t_{1}+n\}} \biggl\vert \int_{A_{u}} E_{m}(t,s) \,ds\biggr\vert |a_{u-s_{1}}a_{v-t_{1}}| \\ &{}\cdot\rho(t_{1}-s_{1})\sqrt{\operatorname{Var}(e_{s_{1}}) \operatorname{Var}(e_{t_{1}})} \\ \leq& C \sum_{i=1}^{k-1}\sum _{s_{1}=k_{i}}^{k_{i}+p-1} \sum_{u=\max\{1,s_{1}-n\}}^{\min\{n,s_{1}+n\}} \biggl\vert \int _{A_{u}} E_{m}(t,s) \,ds\biggr\vert |a_{u-s_{1}}| \sum_{j=i+1}^{k}\sum _{t_{1}=k_{j}}^{k_{j}+p-1}\rho (t_{1}-s_{1}) \\ &{}\cdot \sum_{v=\max\{1,t_{1}-n\}}^{\min\{n,t_{1}+n\}}|a_{v-t_{1}}| \\ \leq& C \sum_{i=1}^{k-1}\sum _{s_{1}=k_{i}}^{k_{i}+p-1} \sum_{u=\max\{1,s_{1}-n\}}^{\min\{n,s_{1}+n\}} \biggl\vert \int _{A_{u}} E_{m}(t,s) \,ds\biggr\vert |a_{u-s_{1}}|\sum_{j=q} ^{\infty}\rho(j) \\ \leq& C u(q)\sum_{i=1}^{k-1}\sum _{s_{1}=k_{i}}^{k_{i}+p-1} \sum_{u=1}^{n} \biggl\vert \int_{A_{u}} E_{m}(t,s) \,ds\biggr\vert |a_{u-s_{1}}| \\ \leq& C u(q)\sum_{u=1}^{n}\biggl\vert \int_{A_{u}} E_{m}(t,s) \,ds\biggr\vert \Biggl(\sum _{i=1}^{k-1}\sum_{s_{1}=k_{i}}^{k_{i}+p-1} |a_{u-s_{1}}| \Biggr) \leq C u(q). \end{aligned}$$
(3.11)

Hence, combining (3.10) with (3.11), we can see that

$$\bigl\vert s_{n}^{2}-1\bigr\vert \leq\bigl\vert \mathrm{E} \bigl(S_{1n}^{\prime}\bigr)^{2}-1\bigr\vert +2\vert \Delta_{n}\vert \leq C\bigl\{ \zeta_{1n}^{1/2} +\zeta_{2n}^{1/2}+\zeta_{3n}^{1/2}+ u(q) \bigr\} . $$

 □

Lemma 3.5

Assume that (A1)-(A6) hold and adopt the notation of Lemma 3.4. Then

$$\sup_{u}\bigl\vert \mathrm{P} (T_{n}/s_{n} \leq u )-\Phi(u)\bigr\vert \leq C\zeta _{2n}^{\delta/2}. $$

Proof of Lemma 3.5

By the Berry-Esseen inequality (Petrov [14]), we have

$$ \sup_{u}\bigl\vert \mathrm{P} (T_{n}/s_{n} \leq u )-\Phi(u)\bigr\vert \leq C \frac{\sum_{v=1}^{k} \mathrm{E}|y_{nv}|^{r}}{s_{n}^{r}} \quad \mbox{for } r\geq2. $$
(3.12)
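The left-hand side of (3.12) is a Kolmogorov distance, which can be estimated empirically. The sketch below is illustration only: it uses i.i.d. centered exponential innovations (a trivially ρ-mixing case, not the paper's setting) and shows the empirical sup-distance to Φ shrinking as n grows.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def Phi(u):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

def sup_distance(n, reps=4000):
    """Monte Carlo estimate of sup_u |P(S_n <= u) - Phi(u)| for standardized sums
    of centered Exp(1) variables (mean 0, variance 1), via the empirical CDF."""
    x = rng.exponential(size=(reps, n)) - 1.0
    s = np.sort(x.sum(axis=1) / np.sqrt(n))
    emp_hi = np.arange(1, reps + 1) / reps          # empirical CDF just after each point
    emp_lo = np.arange(0, reps) / reps              # ... and just before
    phis = np.array([Phi(v) for v in s])
    return float(max(np.max(emp_hi - phis), np.max(phis - emp_lo)))

for n in (4, 64, 1024):
    print(n, sup_distance(n))   # the distance decreases roughly like n^{-1/2}
```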

Since \(\rho(n)\to0\) (Definition 1.2), we have \(\sum_{j=1}^{[\log p]}\rho^{2/r}(2^{j})=o(\log p)\), and further \(\exp (C_{1}\sum_{j=1}^{[\log p]}\rho^{2/r}(2^{j}) )=o(p^{\iota})\) for any \(C_{1}>0\) and any \(\iota>0\) (ι small enough). According to Lemma A.5(i) and Lemma A.2, we can get

$$\begin{aligned} \sum_{v=1}^{k} \mathrm{E}|y_{nv}|^{r} =& \sum _{v=1}^{k} \mathrm{E}\Biggl\vert \sum _{j=k_{v}}^{k_{v}+p-1} \sum_{i=\max\{1,j-n\}}^{\min\{n,j+n\}} \sigma_{n}^{-1}a_{i-j} \int_{A_{i}}E_{m}(t,s) \,ds e_{j}\Biggr\vert ^{r} \\ \leq& C\sum_{v=1}^{k}p^{r/2}\exp \Biggl(C_{1}\sum_{j=1}^{[\log p]}\rho ^{2/r}\bigl(2^{j}\bigr) \Biggr) \\ &{}\cdot\max_{1\leq j\leq p} \Biggl(\mathrm{E} \Biggl\vert \sum_{i=\max\{1,j-n\}}^{\min\{n,j+n\}} \sigma_{n}^{-1}a_{i-j} \int_{A_{i}}E_{m}(t,s) \,ds e_{j}\Biggr\vert ^{2} \Biggr)^{r/2} \\ &{}+C\sum_{v=1}^{k}p\exp \Biggl(C_{1}\sum_{j=1}^{[\log p]}\rho ^{2/r}\bigl(2^{j}\bigr) \Biggr) \\ &{}\cdot\max_{1\leq j\leq p} \mathrm{E}\Biggl\vert \sum_{i=\max\{1,j-n\}}^{\min\{n,j+n\}} \sigma_{n}^{-1}a_{i-j} \int_{A_{i}}E_{m}(t,s) \,ds e_{j}\Biggr\vert ^{r} \\ \leq& C\sigma_{n}^{-r} \Biggl(\sum _{j=-\infty}^{\infty}|a_{j}| \Biggr)^{r} \sum_{v=1}^{k} \biggl(p^{r/2+\iota} \biggl(\frac {2^{m}}{n}\biggr)^{r}+p^{1+\iota}\biggl( \frac{2^{m}}{n}\biggr)^{r} \biggr) \\ \leq& Ckp^{r/2+\iota}\biggl(\frac{2^{m}}{n}\biggr)^{r/2}\leq Cnp^{r/2-1}\biggl(\frac {2^{m}}{n}\biggr)^{r/2}\leq Cn\biggl(p \frac{2^{m}}{n}\biggr)^{r/2}=Cn \zeta_{2n}^{r/2}. \end{aligned}$$
(3.13)

Hence, combining (3.12) and (3.13) with Lemma 3.4, we obtain the result. □

Lemma 3.6

Assume that (A1)-(A6) hold and adopt the notation of Lemma 3.4. Then

$$\sup_{u}\bigl\vert \mathrm{P}\bigl(S_{1n}^{\prime} \leq u\bigr)-\mathrm{P}(T_{n}\leq u)\bigr\vert \leq C \bigl\{ \zeta_{2n}^{\delta/2}+ \zeta_{4n}^{1/4} \bigr\} . $$

Proof of Lemma 3.6

Let \(\phi_{1} (t)\) and \(\psi_{1} (t)\) be the characteristic functions of \(S_{1n}^{\prime}\) and \(T_{n}\), respectively. It follows from Lemmas 3.1, 3.2, A.5, and (A1) that

$$\begin{aligned} \bigl\vert \phi_{1} (t)-\psi_{1} (t)\bigr\vert =& \Biggl\vert \mathrm{E}\exp\Biggl( {\mathbf{i}}t \sum _{v=1}^{k} y_{nv}\Biggr)-\prod _{v=1}^{k} \mathrm{E}\exp({\mathbf{i}}t y_{nv})\Biggr\vert \\ \leq& C |t| \rho^{1/2}(q) \sum _{v=1}^{k}\|y_{nv}\|_{2} \\ \leq& C |t| \rho^{1/2}(q) \sum_{v=1}^{k} \Biggl\{ \mathrm{E} \Biggl( \sum_{i=k_{v}}^{k_{v}+p-1} \sigma_{n}^{-1} \sum_{j=\max\{1,i-n\}}^{\min\{n,i+n\}} a_{j-i} \int_{A_{j}}E_{m}(t,s) \,ds |e_{i}| \Biggr)^{2} \Biggr\} ^{1/2} \\ \leq& C |t| \rho^{1/2}(q) \Biggl(\sum_{l=-\infty}^{\infty} |a_{l}| \Biggr) \Biggl\{ k\sum_{v=1}^{k} \sum_{i=k_{v}}^{k_{v}+p-1} \biggl\vert \int_{A_{j}}E_{m}(t,s) \,ds\biggr\vert \Biggr\} ^{1/2} \\ \leq& C |t| \bigl(k\rho(q)\bigr)^{1/2} \leq C |t| \zeta^{1/2}_{4n}, \end{aligned}$$

which implies that

$$ \int_{-T}^{T} \biggl\vert \frac{\phi_{1} (t)-\psi_{1} (t)}{t}\biggr\vert \, dt \leq C \zeta^{1/2}_{4n}T. $$
(3.14)

Note that

$$\mathrm{P}(T_{n}\leq u)=\mathrm{P}({T_{n}}/{s_{n}} \leq{u}/{s_{n}}). $$

Consequently, from Lemma 3.5, we find

$$\begin{aligned}& \sup_{u}\bigl\vert \mathrm{P}(T_{n}\leq u+y)- \mathrm{P}(T_{n}\leq u)\bigr\vert \\& \quad =\sup_{u}\bigl\vert \mathrm{P}\bigl({T_{n}}/{s_{n}} \leq(u+y)/{s_{n}}\bigr)-\mathrm{P}\bigl({T_{n}}/{s_{n}}\leq {u}/{s_{n}}\bigr)\bigr\vert \\& \quad \leq\sup_{u}\bigl\vert \mathrm{P}\bigl({T_{n}}/{s_{n}} \leq (u+y)/{s_{n}}\bigr)-\Phi\bigl((u+y)/{s_{n}}\bigr)\bigr\vert +\sup _{u}\bigl\vert \Phi \bigl((u+y)/{s_{n}}\bigr)- \Phi({u}/{s_{n}})\bigr\vert \\& \qquad {}+\sup_{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq {u}/{s_{n}})-\Phi({u}/{s_{n}})\bigr\vert \\& \quad \leq2\sup_{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq u)-\Phi(u)\bigr\vert +\sup_{u}\bigl\vert \Phi\bigl((u+y)/{s_{n}}\bigr)-\Phi({u}/{s_{n}})\bigr\vert \\& \quad \leq C\bigl\{ \zeta_{2n}^{\delta/2}+{\vert y\vert }/{s_{n}}\bigr\} \leq C\bigl\{ \zeta_{2n}^{\delta/2}+ \vert y\vert \bigr\} . \end{aligned}$$

Therefore

$$ T \sup_{u} \int_{|y|\leq c/T} \bigl\vert \mathrm{P}(T_{n} \leq u+y)- \mathrm{P}(T_{n} \leq u )\bigr\vert \, dy \leq C\bigl\{ \zeta_{2n}^{\delta/2}+1/T\bigr\} . $$
(3.15)

Thus, combining (3.14) with (3.15) and taking \(T= \zeta_{4n}^{-1/4} \), we obtain

$$\begin{aligned}& \sup_{u}\bigl\vert \mathrm{P}\bigl(S_{1n}^{\prime} \leq u\bigr)-\mathrm{P}(T_{n}\leq u)\bigr\vert \\& \quad \leq \int_{-T}^{T}{\biggl\vert \frac{\phi_{1} (t)-\psi_{1} (t)}{t} \biggr\vert \, dt} + T \sup_{u} \int_{|y|\leq c/T} \bigl\vert \mathrm{P}(T_{n}\leq u+y) - \mathrm{P}(T_{n}\leq u)\bigr\vert \, dy \\& \quad \leq C\bigl\{ \zeta^{1/2}_{4n} T + \zeta_{2n}^{\delta/2}+1/T\bigr\} = C\bigl\{ \zeta_{2n}^{\delta/2} + \zeta_{4n}^{1/4} \bigr\} . \end{aligned}$$

 □

4 Proofs of the main results

Proof of Theorem 2.1

$$\begin{aligned}& \sup_{u}\bigl\vert \mathrm{P} \bigl(S^{\prime}_{1n}\leq u\bigr)-\Phi(u)\bigr\vert \\& \quad \leq \sup_{u}\bigl\vert \mathrm{P} \bigl(S^{\prime}_{1n}\leq u\bigr)-\mathrm{P}(T_{n}\leq u)\bigr\vert +\sup_{u}\bigl\vert \mathrm{P}(T_{n} \leq u)-\Phi({u}/{s_{n}})\bigr\vert +\sup_{u}\bigl\vert \Phi({u}/{s_{n}})-\Phi(u)\bigr\vert \\& \quad := J_{1n}+J_{2n}+J_{3n}. \end{aligned}$$
(4.1)

According to Lemma 3.6, Lemma 3.5, and Lemma 3.4, it follows that

$$\begin{aligned}& J_{1n}\leq C\bigl\{ \zeta_{2n}^{\delta/2} + \zeta_{4n}^{1/4}\bigr\} , \end{aligned}$$
(4.2)
$$\begin{aligned}& J_{2n}=\sup_{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq {u}/{s_{n}})-\Phi({u}/{s_{n}})\bigr\vert =\sup _{u}\bigl\vert \mathrm{P}({T_{n}}/{s_{n}} \leq u)-\Phi(u)\bigr\vert \leq C\zeta_{2n}^{\delta/2}, \end{aligned}$$
(4.3)
$$\begin{aligned}& J_{3n}\leq C\bigl\vert s_{n}^{2}-1\bigr\vert \leq C\bigl\{ \zeta_{1n}^{1/2} +\zeta_{2n}^{1/2}+ \zeta_{3n}^{1/2}+ u(q)\bigr\} . \end{aligned}$$
(4.4)

Hence, by (4.2)-(4.4) and combining with (4.1), we have

$$ \sup_{u}\bigl\vert {\mathrm{P}}\bigl(S_{1n}^{\prime} \leq u\bigr)-\Phi(u)\bigr\vert \leq C \bigl\{ \zeta_{1n}^{1/2}+ \zeta_{2n}^{1/2}+\zeta _{2n}^{\delta/2}+ \zeta_{3n}^{1/2}+ \zeta_{4n}^{1/4} + u(q) \bigr\} . $$
(4.5)

Thus, by Lemma A.4, Lemma 3.3(2), and (4.5), we obtain

$$\begin{aligned}& \sup_{u}\bigl\vert {\mathrm{P}}(S_{n}\leq u) - \Phi(u)\bigr\vert \\& \quad \leq C \Biggl\{ \sup_{u}\bigl\vert {\mathrm{P}} \bigl(S_{1n}^{\prime}\leq u\bigr)-\Phi (u)\bigr\vert +\sum _{i=1}^{3}\zeta_{in}^{1/3}+{ \mathrm{P}}\bigl(\bigl\vert S_{1n}^{\prime\prime}\bigr\vert \geq \zeta_{1n}^{1/3}\bigr) \\& \qquad {}+{ \mathrm{P}}\bigl(\bigl\vert S_{1n}^{\prime\prime\prime}\bigr\vert \geq\zeta_{2n}^{1/3} \bigr) +{ \mathrm{P}}\bigl(\vert S_{2n}\vert \geq \zeta_{3n}^{1/3}\bigr) \Biggr\} \\& \quad \leq C \bigl\{ \zeta_{1n}^{1/3}+\zeta_{2n}^{1/3}+ \zeta _{2n}^{\delta/2}+ \zeta_{3n}^{1/3}+ \zeta_{4n}^{1/4} + u(q) \bigr\} . \end{aligned}$$

 □

Proof of Corollary 2.1

By (A1), \(\rho(n)=O(n^{-\lambda})\) with \(\lambda>2\), so \(\sum_{j=1}^{\infty} \rho(j)<\infty\) and hence \(u(q)\to0\); therefore Corollary 2.1 holds. □

Proof of Corollary 2.2

Let \(p=[n^{\tau}]\) and \(q=[n^{2\tau-1}]\) with \(\tau=\frac{1}{2}+\frac{8\theta-1}{2(6\lambda+7)}\); for \(\frac{2}{9-6\lambda}<\theta\leq1\) one has \(\tau<\theta\). Consequently,

$$\begin{aligned}& \zeta_{1n}^{1/3}=\zeta _{2n}^{1/3}=O \bigl(n^{-\frac{\theta-\tau}{3}} \bigr) =O \bigl(n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} \bigr), \\& \zeta^{1/3}_{3n}=n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} \biggl(n^{\frac{6\lambda\theta+3\lambda+3\theta+4}{2(6\lambda +7)}} \sum_{|j|>n}|a_{j}| \biggr)^{2/3} =O \bigl(n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} \bigr), \\& \zeta_{4n}^{1/4}=O \bigl(n^{-\frac{\tau+\lambda(2\tau -1)-1}{4}} \bigr) =O \bigl(n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} \bigr), \\& u(q)= O \Biggl(\sum_{i=q}^{\infty} i^{-\lambda} \Biggr) = O \bigl( q^{-\lambda+1} \bigr) = O \bigl( n^{-(2\tau-1)(\lambda-1)} \bigr)=O \bigl(n^{-\frac {(8\theta-1)(\lambda-1)}{6\lambda+7}} \bigr). \end{aligned}$$

Finally, since \(\frac{2}{9-6\lambda}<\theta\), we have \(\frac{(8\theta-1)(\lambda-1)}{6\lambda+7}>\frac{\lambda(2\theta -1)+(\theta-1)}{6\lambda+7}\), and hence \(u(q)=O (n^{-\frac{\lambda(2\theta-1)+(\theta-1)}{6\lambda +7}} )\). The desired result now follows immediately from Theorem 2.1. □
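The exponent bookkeeping in this proof can be verified in exact rational arithmetic. The sketch below (illustration only) recomputes the decay exponents of \(\zeta_{1n}^{1/3}=\zeta_{2n}^{1/3}\), \(\zeta_{4n}^{1/4}\), and \(u(q)\) from τ and checks them against the claimed rate.

```python
from fractions import Fraction as F

def corollary_exponents(lam, theta):
    """Exponents of n (as positive decay rates) for the terms in Corollary 2.2,
    with p = [n^tau], q = [n^(2 tau - 1)] and tau = 1/2 + (8 theta - 1)/(2(6 lam + 7))."""
    lam, theta = F(lam), F(theta)
    tau = F(1, 2) + (8 * theta - 1) / (2 * (6 * lam + 7))
    target = (lam * (2 * theta - 1) + (theta - 1)) / (6 * lam + 7)
    zeta12 = (theta - tau) / 3                       # from zeta_1n^{1/3} = zeta_2n^{1/3}
    zeta4 = (tau + lam * (2 * tau - 1) - 1) / 4      # from zeta_4n^{1/4}
    uq = (2 * tau - 1) * (lam - 1)                   # from u(q) = O(q^{1-lambda})
    return target, zeta12, zeta4, uq

target, z12, z4, uq = corollary_exponents(3, 1)
print(target, z12 == target, z4 == target, uq > target)  # 3/25 True True True
```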

References

  1. Xue, LG: Berry-Esseen bound of an estimate of error variance in a semiparametric regression model. Acta Math. Sin. 48(1), 157-170 (2005)


  2. Liang, HY, Li, YY: A Berry-Esseen type bound of regression estimator based on linear process errors. J. Korean Math. Soc. 45(6), 1753-1767 (2008)


  3. Li, YM, Wei, CD, Xin, GD: Berry-Esseen bounds of wavelet estimator in a regression with linear process errors. Stat. Probab. Lett. 81(1), 103-111 (2011)


  4. Li, YM, Guo, JH, Yang, SC: The Berry-Esseen bounds of wavelet estimators for a semiparametric regression model whose errors form a linear process with mixing innovations. Acta Math. Appl. Sin. 36(6), 1021-1036 (2013)


  5. Kolmogorov, AN, Rozanov, UA: On the strong mixing conditions for stationary Gaussian process. Theory Probab. Appl. 5(2), 204-208 (1960)


  6. Shao, QM: Almost sure convergence properties of ρ-mixing sequences. Acta Math. Sin. 32(3), 377-393 (1989)


  7. Shao, QM: Maximal inequalities for sums of ρ-mixing sequences. Ann. Probab. 23, 948-965 (1995)


  8. Jiang, DY: On the convergence rates in the law of iterated logarithm of ρ-mixing sequences. Math. Appl. 15(3), 32-37 (2002)


  9. Chen, PY, Liu, XD: Complete moment convergence for sequence of identically distributed ρ-mixing random variables. Acta Math. Sin. 51(2), 281-290 (2008)


  10. Zhou, XC, Lin, JG: Strong consistency of estimators in partially linear models for longitudinal data with mixing-dependent structure. J. Inequal. Appl. 2011, 112 (2011)


  11. Tan, XL, Wang, M: Strong convergence results for weighted sums of ρ-mixing random variables sequence. J. Jilin Univ. Sci. Ed. 52(5), 927-932 (2014)


  12. Li, YM, Yin, CM, Wei, CD: The asymptotic normality of the wavelet regression function estimator for φ-mixing dependent errors. Acta Math. Appl. Sin. 31(6), 1016-1055 (2008)


  13. Liang, HY, Qi, YY: Asymptotic normality of wavelet estimator of regression function under NA assumptions. Bull. Korean Math. Soc. 4(2), 247-257 (2007)


  14. Petrov, VV: Limit Theorems of Probability Theory. Oxford University Press, New York (1995)


  15. Yang, SC: Moment inequality for mixing sequences and nonparametric estimation. Acta Math. Sin. 40(2), 271-279 (1997)


  16. Yang, SC: Uniformly asymptotic normality of the regression weighted estimator for negatively associated samples. Stat. Probab. Lett. 62, 101-110 (2003)



Acknowledgements

This project is supported by the National Natural Science Foundation of China (11461057).

Author information


Corresponding author

Correspondence to Liwang Ding.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The two authors contributed equally and significantly in writing this article. Both authors read and approved the final manuscript.

Authors’ information

Liwang Ding (b. 1985), male, lecturer; his research interests are probability theory and mathematical statistics.

Appendix

Lemma A.1

(Shao [7])

Let \(\{X_{i}: i\geq1\}\) be a ρ-mixing sequence, \(s, t>1\), and \(1/s+1/t=1\). If \(X\in L_{s}(\mathcal{F}^{k}_{1})\), \(Y\in L_{t}(\mathcal{F}^{\infty}_{k+n})\), then

$$|\mathrm{E}XY-\mathrm{E}X\mathrm{E}Y|\leq10\rho^{2(\frac{1}{s}\wedge\frac{1}{t})}(n)\| X\| _{s}\| Y\|_{t}. $$
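In particular, taking \(s=t=2\) in Lemma A.1 gives \(2(\frac{1}{2}\wedge\frac{1}{2})=1\), and the bound reduces to the familiar covariance inequality

$$\bigl\vert \operatorname{Cov}(X,Y)\bigr\vert =|\mathrm{E}XY-\mathrm{E}X\mathrm{E}Y|\leq10\rho(n)\| X\|_{2}\| Y\|_{2}, $$

which is the form most often used when bounding variances of partial sums.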

Lemma A.2

(Shao [7])

Assume that \(\mathrm{E}X_{i}=0\) and \(\| X_{i}\|_{q}<\infty\) for some \(q\geq2\), and write \(S_{k}(i)=\sum_{j=k+1}^{k+i}X_{j}\). Then there exists a positive constant \(K=K(q,\rho(\cdot))\), depending only on q and \(\rho(\cdot)\), such that for any \(k\geq0\), \(n\geq1\),

$$\begin{aligned} \mathrm{E}\max_{1\leq i\leq n}\bigl\vert S_{k}(i)\bigr\vert ^{q} \leq& Kn^{q/2}\exp \Biggl(K\sum _{i=0}^{[\log n]}\rho\bigl(2^{i}\bigr) \Biggr) \max_{k\leq i\leq k+n}\| X_{i}\|_{2}^{q} \\ &{}+nK\exp \Biggl(K\sum_{i=0}^{[\log n]} \rho^{2/q}\bigl(2^{i}\bigr) \Biggr)\max_{k\leq i\leq k+n} \| X_{i}\|_{q}^{q}. \end{aligned}$$

Lemma A.3

(Yang [15])

Let \(\{X_{i}: i=1,2,\ldots\}\) be a ρ-mixing sequence with \(\rho(n)=O(n^{-\lambda})\) for some \(\lambda>0\), and suppose that \(\mathrm{E}X_{i}=0\) and \(\mathrm{E}|X_{i}|^{r}<\infty\) (\(r>1\)). Then for any integer \(m\geq1\) there exists a positive constant \(C(m)\) such that:

  1. (i)

    for \(1< r\leq2\), we have

    $$\mathrm{E}\Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{r}\leq C(m)n^{\beta(m)}\sum _{i=1}^{n}\mathrm{E}\vert X_{i}\vert ^{r}, $$
  2. (ii)

    for \(r>2\), we have

    $$\mathrm{E}\Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{r}\leq C(m)n^{\beta(m)} \Biggl\{ \sum _{i=1}^{n}\mathrm{E}\vert X_{i}\vert ^{r}+ \Biggl(\sum_{i=1}^{n} \mathrm{E}X_{i}^{2} \Biggr)^{r/2} \Biggr\} . $$

Here \(\beta(m)=(r-1)\omega^{m}\) with \(0<\omega<1\).
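Since an independent sequence is ρ-mixing with \(\rho(n)=0\), Lemma A.3(ii) can be sanity-checked in the i.i.d. Gaussian case, where both sides are available in closed form: for \(X_{i}\sim N(0,1)\) and \(r=4\), \(S_{n}\sim N(0,n)\) and \(\mathrm{E}S_{n}^{4}=3n^{2}\) exactly. The following minimal sketch uses the illustrative constant 3 in place of the unspecified \(C(m)\), and drops the factor \(n^{\beta(m)}\geq1\), which only enlarges the right-hand side.

```python
# Sanity check of the Rosenthal-type bound in Lemma A.3(ii) for the
# independent case (rho(n) = 0), with i.i.d. N(0,1) variables and r = 4.
# For S_n = X_1 + ... + X_n ~ N(0, n):  E S_n^4 = 3 n^2  (exact).
# The right-hand side (without the n^{beta(m)} factor, which only helps)
# is  sum_i E|X_i|^4 + (sum_i E X_i^2)^{r/2} = 3n + n^2.

def lhs(n: int) -> float:
    """Exact fourth moment of S_n for i.i.d. standard normals."""
    return 3.0 * n**2

def rhs(n: int) -> float:
    """sum_i E|X_i|^4 + (sum_i E X_i^2)^{2}, using E|X|^4 = 3, E X^2 = 1."""
    return 3.0 * n + n**2

# The inequality E|S_n|^4 <= C * rhs(n) holds here with, e.g., C = 3.
for n in (1, 10, 1000):
    assert lhs(n) <= 3.0 * rhs(n)
```

For large n both sides are of order \(n^{2}\), so the \((\sum\mathrm{E}X_{i}^{2})^{r/2}\) term is what makes the \(r>2\) bound sharp in this case.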

Lemma A.4

(Yang [16])

Suppose that \(\{\zeta_{n}: n\geq1\}\), \(\{\eta_{n}: n\geq1\}\), and \(\{\xi_{n}: n\geq 1\}\) are three sequences of random variables, and \(\{\gamma_{n}: n\geq1\}\) is a sequence of positive constants with \(\gamma_{n}\to0\). If \(\sup_{u}|F_{\zeta_{n}}(u)-\Phi(u)|\leq C\gamma_{n}\), then for any \(\varepsilon_{1}>0\) and \(\varepsilon_{2}>0\),

$$\sup_{u}\bigl\vert F_{\zeta_{n}+\eta_{n}+\xi_{n}}(u)-\Phi(u)\bigr\vert \leq C\bigl\{ \gamma _{n}+\varepsilon_{1}+ \varepsilon_{2}+\mathrm{P}\bigl(\vert \eta_{n}\vert \geq \varepsilon_{1}\bigr)+\mathrm{P}\bigl(\vert \xi _{n}\vert \geq\varepsilon_{2}\bigr)\bigr\} . $$
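Lemma A.4 is the standard device for converting a decomposition of the standardized estimator into a Berry-Esseen rate. As a sketch of how it is typically invoked (not the authors' exact choice of terms): if the statistic splits as \(\zeta_{n}+\eta_{n}+\xi_{n}\) with \(\sup_{u}|F_{\zeta_{n}}(u)-\Phi(u)|\leq C\gamma_{n}\), then Chebyshev's inequality suggests taking

$$\varepsilon_{1}=\bigl(\mathrm{E}\eta_{n}^{2}\bigr)^{1/3},\qquad \varepsilon_{1}+\mathrm{P}\bigl(\vert \eta_{n}\vert \geq\varepsilon_{1}\bigr)\leq \varepsilon_{1}+\varepsilon_{1}^{-2}\mathrm{E}\eta_{n}^{2}=2\bigl(\mathrm{E}\eta_{n}^{2}\bigr)^{1/3}, $$

and similarly for \(\varepsilon_{2}\), so the final rate is governed by \(\gamma_{n}\) together with the second moments of the remainder terms.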

Lemma A.5

(Li et al. [12])

Under assumptions (A2)-(A5), we have

  1. (i)

    \(|\int_{A_{i}} E_{m}(t,s) \,ds|=O(\frac{2^{m}}{n})\), \(i=1,2,\ldots,n\);

  2. (ii)

    \(\sum_{i=1}^{n}(\int_{A_{i}} E_{m}(t,s) \,ds)^{2}=O(\frac{2^{m}}{n})\);

  3. (iii)

\(\sup_{m}\int_{0}^{1} |E_{m}(t,s)| \,ds\leq C\);

  4. (iv)

    \(\sum_{i=1}^{n}|\int_{A_{i}} E_{m}(t,s) \,ds|\leq C\).
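The kernel bounds in Lemma A.5 can be checked numerically in the simplest wavelet case. The sketch below assumes the Haar scaling function \(\phi=1_{[0,1)}\) (chosen only for computability; it need not satisfy all the regularity assumed of the scaling function in the paper), for which \(E_{m}(t,s)=2^{m}\mathbf{1}\{\lfloor2^{m}t\rfloor=\lfloor2^{m}s\rfloor\}\), together with the evenly spaced design \(t_{i}=i/n\), \(A_{i}=((i-1)/n,i/n]\).

```python
# Numerical check of Lemma A.5 (i), (ii), (iv) for the Haar scaling
# function phi = 1_[0,1).  Then E_m(t,s) = 2^m on the dyadic cell of
# width 2^{-m} containing t, and 0 elsewhere.  (Haar is an illustrative
# choice; the paper's assumptions allow more general scaling functions.)
import math

def cell_integrals(t, m, n):
    """Return the n values  int_{A_i} E_m(t, s) ds  for the Haar kernel."""
    a = math.floor(2**m * t) / 2**m   # left end of the dyadic cell of t
    b = a + 2.0**(-m)                 # right end
    vals = []
    for i in range(1, n + 1):
        lo, hi = (i - 1) / n, i / n   # A_i = ((i-1)/n, i/n]
        overlap = max(0.0, min(hi, b) - max(lo, a))
        vals.append(2**m * overlap)
    return vals

n, m, t = 1000, 4, 0.3
vals = cell_integrals(t, m, n)
# (i): each integral is at most 2^m / n
assert max(vals) <= 2**m / n + 1e-12
# (ii): the sum of squares is of order 2^m / n
assert sum(v * v for v in vals) <= 2 * 2**m / n
# (iv): the sum of absolute values is bounded (here it equals 1)
assert abs(sum(vals) - 1.0) < 1e-9
```

With \(2^{m}\leq n\), roughly \(n2^{-m}\) of the intervals \(A_{i}\) lie inside the dyadic cell, each contributing \(2^{m}/n\), which is exactly how the orders \(O(2^{m}/n)\) in (i) and (ii) arise.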

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ding, L., Li, Y. The Berry-Esseen bounds of wavelet estimator for regression model whose errors form a linear process with a ρ-mixing. J Inequal Appl 2016, 107 (2016). https://doi.org/10.1186/s13660-016-1036-x
