Berry-Esseen bounds of the weighted kernel estimator for a nonparametric regression model based on linear process errors under an LNQD sequence

In this paper, the authors investigate the Berry-Esseen bounds of the weighted kernel estimator for a nonparametric regression model based on linear process errors under an LNQD random variable sequence. The rate of normal approximation is shown to be O(n^{-1/6}) under some appropriate conditions. The results obtained in the article generalize or improve the corresponding ones for mixing dependent sequences in some sense.


Introduction
We consider the estimation of the fixed design nonparametric regression model with a regression function g(·) defined on the closed interval [0, 1]:

Y_i = g(t_i) + ε_i, i = 1, 2, . . . , n, (1.1)

where {t_i} are known fixed design points, assumed to be ordered 0 ≤ t_1 ≤ · · · ≤ t_n ≤ 1, and {ε_i} are random errors. Model (1.1) has been considered extensively by many authors. For example, Schuster and Yakowitz [1] studied the nonparametric model (1.1) with i.i.d. errors and obtained the strong convergence and asymptotic normality of the estimator of g(·), while Qin [2] obtained the strong consistency of the estimator of g(·). Yang [3-5] studied the nonparametric model (1.1) with ϕ-mixing errors, censored data random errors, and negatively associated errors, and obtained the complete convergence, strong consistency, and uniformly asymptotic normality of the estimator of g(·), respectively. Zhou et al. [6] studied the nonparametric model (1.1) with weakly dependent processes and obtained the moment consistency, strong consistency, strong convergence rate, and asymptotic normality of the estimator of g(·). Inspired by the literature above, we devote this paper to investigating the Berry-Esseen bounds of the estimator for linear process errors in the nonparametric regression model (1.1).
In this article, we discuss the Berry-Esseen bounds of the estimator of g(·) in model (1.1) with repeated measurements. Here, we recall the weighted kernel estimator of a nonparametric regression function. A popular nonparametric estimate of g(·) is the Priestley-Chao type weighted kernel estimator

g_n(t) = h_n^{-1} ∑_{i=1}^{n} (t_i − t_{i−1}) K((t − t_i)/h_n) Y_i (with t_0 := 0), (1.2)

where K(u) is a Borel measurable kernel function and 0 < h_n → 0 as n → ∞ is a bandwidth. The weighted kernel estimator was first proposed by Priestley and Chao [7], who discussed conditions for the weak consistency of g_n(·), and it has subsequently been studied extensively by many authors. For instance, under the independence assumption, Benedetti [8] gave a sufficient condition for the strong consistency of g_n(·) under the moment condition Eε_1^4 < ∞, Schuster and Yakowitz [1] discussed its uniform strong consistency, and Qin [2] weakened the moment condition to E|ε_1|^{2+δ} < ∞ for some δ > 0. Under mixing dependence assumptions, Yang [3, 9] not only comprehensively improved these results for ϕ-mixing and ρ-mixing errors, but also reduced the moment condition to sup_i E|ε_i|^r < ∞ for some r > 1 and weakened the conditions imposed on the kernel function K(·). Pan and Sun [10] extended this discussion to censored data and gave sufficient conditions for strong consistency in the independent and ϕ-mixing cases. Yang [4] discussed the consistency of weighted kernel estimators of a nonparametric regression function with censored data and obtained strong consistency under weaker sufficient conditions. However, up to now there have been few results on the weighted kernel estimator for model (1.1) with linear process errors.
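For intuition, the Priestley-Chao type weighted kernel estimate (1.2) is straightforward to compute. The following Python sketch is illustrative only and is not part of the paper: the Epanechnikov kernel, the test function g(t) = sin(2πt), the bandwidth, and all variable names are our own choices, and this toy simulation uses i.i.d. errors rather than the paper's LNQD linear process errors.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on [-1, 1]."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def priestley_chao(t, y, x, h):
    """Priestley-Chao weighted kernel estimate of g at the points x.

    t : sorted design points in [0, 1] (with t_0 := 0), y : responses, h : bandwidth.
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.diff(np.concatenate(([0.0], t)))      # spacings t_i - t_{i-1}
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.array([np.sum(w * epanechnikov((xi - t) / h) * y) / h for xi in x])

# Simulate model (1.1) with i.i.d. errors on an equally spaced design.
rng = np.random.default_rng(0)
n = 2000
t = (np.arange(1, n + 1) - 0.5) / n
g = lambda s: np.sin(2.0 * np.pi * s)
y = g(t) + 0.2 * rng.standard_normal(n)
ghat = priestley_chao(t, y, [0.25, 0.5, 0.75], h=0.05)
```

At interior points the estimate tracks g(·) closely; the squared-bias (order h_n^4) and variance (order 1/(n h_n)) are traded off through the bandwidth h_n.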
The Berry-Esseen theorem quantifies the rate of convergence in the central limit theorem, and there is a large literature on Berry-Esseen type bounds. For example, Cheng [11] established a Berry-Esseen type theorem showing the near-optimal quality of the normal approximation to the distribution of smooth quantile density estimators. Wang and Zhang [12] obtained a Berry-Esseen type estimate for NA random variables with only a finite second moment; they also improved the convergence rate in the central limit theorem and the precise asymptotics in the law of the iterated logarithm for NA and linearly negative quadrant dependent sequences. Liang and Li [13] derived a Berry-Esseen type bound based on linear process errors under negatively associated random variables. Li et al. [14] established the Berry-Esseen bounds of the wavelet estimator for a nonparametric regression model with linear process errors generated by ϕ-mixing sequences. Yang et al. [15] investigated the Berry-Esseen bound of sample quantiles for NA random variables, where the rate of normal approximation is shown to be O(n^{-1/9}).
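To make the notion concrete, the following Python sketch estimates the Kolmogorov distance sup_y |P(S_n/√n ≤ y) − Φ(y)| by Monte Carlo for standardized sums of i.i.d. centered Exponential(1) variables. This illustrates the classical i.i.d. Berry-Esseen phenomenon only, not the paper's LNQD setting; the error distribution, sample sizes, and names are arbitrary choices. The distance shrinks as n grows, consistent with the classical O(n^{-1/2}) rate.

```python
import math
import numpy as np

def kolmogorov_distance_to_normal(samples):
    """sup_y |F_m(y) - Phi(y)| between the empirical CDF and the standard normal CDF."""
    s = np.sort(np.asarray(samples, dtype=float))
    phi = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in s])
    m = len(s)
    hi = np.max(np.arange(1, m + 1) / m - phi)   # deviation just after each jump
    lo = np.max(phi - np.arange(0, m) / m)       # deviation just before each jump
    return max(hi, lo)

rng = np.random.default_rng(1)
reps = 20000

def standardized_sum(n):
    """reps draws of (X_1 + ... + X_n)/sqrt(n), X_i = Exponential(1) - 1 (mean 0, var 1)."""
    x = rng.exponential(size=(reps, n)) - 1.0
    return x.sum(axis=1) / math.sqrt(n)

d5 = kolmogorov_distance_to_normal(standardized_sum(5))
d50 = kolmogorov_distance_to_normal(standardized_sum(50))
```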
In this paper, we shall study the above nonparametric regression problem with linear process errors generated by a linearly negative quadrant dependent sequence.

Definition 1.1 ([16]) Two random variables X and Y are said to be negative quadrant dependent (NQD in short) if, for any x, y ∈ R,

P(X ≤ x, Y ≤ y) ≤ P(X ≤ x)P(Y ≤ y).

Definition 1.2 ([17]) A sequence {X_n, n ≥ 1} of random variables is said to be linearly negative quadrant dependent (LNQD in short) if, for any disjoint subsets A, B ⊂ Z^+ and positive reals r_i, the sums ∑_{i∈A} r_i X_i and ∑_{j∈B} r_j X_j are NQD.
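As a quick sanity check of Definition 1.1 (an illustration we add here, not part of the paper), the antithetic pair X = U, Y = 1 − U with U uniform on (0, 1) is NQD, since P(X ≤ x, Y ≤ y) = max(0, x + y − 1) ≤ xy = P(X ≤ x)P(Y ≤ y) for all x, y ∈ [0, 1]. The following Python snippet verifies the inequality on a grid:

```python
import numpy as np

# Classic NQD pair: X = U and Y = 1 - U with U ~ Uniform(0, 1).
# Joint CDF: P(X <= x, Y <= y) = P(1 - y <= U <= x) = max(0, x + y - 1);
# product of marginals: P(X <= x) P(Y <= y) = x * y.
xs = np.linspace(0.0, 1.0, 101)
joint = np.maximum(0.0, xs[:, None] + xs[None, :] - 1.0)   # joint CDF on the grid
product = xs[:, None] * xs[None, :]                        # product of marginal CDFs
nqd_holds = bool(np.all(joint <= product + 1e-12))
```

The inequality max(0, x + y − 1) ≤ xy is equivalent to (1 − x)(1 − y) ≥ 0, so it holds identically on [0, 1]^2.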
The concept of an LNQD sequence was introduced by Newman [17], who investigated the central limit theorem for a strictly stationary LNQD process; it has subsequently been studied by many authors. Wang and Zhang [12] provided uniform rates of convergence in the central limit theorem for LNQD random variables. Ko et al. [18] established a Hoeffding-type inequality for an LNQD sequence. Ko et al. [19] discussed the strong convergence and central limit theorem for weighted sums of LNQD random variables. Wang et al. [20] presented some exponential inequalities and complete convergence results for an LNQD sequence. Wang and Wu [21] gave some strong laws of large numbers and strong convergence properties for arrays of rowwise NA and LNQD random variables. Li et al. [22] established some inequalities and the asymptotic normality of the weight function estimate of a regression function for an LNQD sequence. Shen et al. [23] investigated the complete convergence for weighted sums of LNQD random variables based on exponential bounds and obtained complete convergence results for arrays of rowwise LNQD random variables.
However, there is little literature on the Berry-Esseen bounds of the weighted kernel estimator for the nonparametric regression model (1.1) with linear process errors. The main purpose of this paper is therefore to investigate the Berry-Esseen bounds of the weighted kernel estimator for the nonparametric regression model with linear process errors generated by an LNQD sequence.
In what follows, let C denote a positive constant whose value may differ from place to place. All limits are taken as the sample size n tends to ∞ unless specified otherwise.
The structure of the rest of the paper is as follows. In Section 2, we give some basic assumptions and main results. Some preliminary lemmas are stated in Section 3. Proofs of the main results are provided in Section 4. Authors' declaration is given at the end of the paper.

Remark 2.1 (A1) is a basic condition on the LNQD sequence, and conditions (A2)-(A3) are general assumptions on the weighted kernel estimator which have been used by several authors, such as Yang [3, 4, 9] and Pan and Sun [10].
are easily satisfied if p and q are chosen reasonably, which is the same as in Yang [5] and Li et al. [14]. Likewise, (A5) is a standard regularity condition used commonly in the literature.

Remark 2.4
We develop weighted kernel estimator methods for the nonparametric regression model (1.1), which differ from the estimation methods of Liang and Li [13] and Li et al. [14]. Our theorem and corollaries improve Theorem 3.1 of Li et al. [22] for the case of linear process errors generated by LNQD sequences, and they also extend the results of Li et al. [14] from linear process errors generated by ϕ-mixing sequences to those generated by LNQD sequences. Thus, the results obtained in this paper generalize and improve some corresponding ones for ϕ-mixing random variables to the LNQD setting.

Some preliminary lemmas
First, by (1.1) and (1.2), the centered estimator g_n(t) − Eg_n(t) can be written as a weighted sum of the errors {ε_i}. Next, we give the following main lemmas.

Lemma 3.2 ([22])
Let {X_j, j ≥ 1} be an LNQD random variable sequence with zero mean and finite second moment, sup_{j≥1} E(X_j^2) < ∞. Assume that {a_j, j ≥ 1} is a real constant sequence satisfying a := sup_{j≥1} |a_j| < ∞. Then, for any r > 1,

E|∑_{j=1}^{n} a_j X_j|^r ≤ D a^r n^{r/2}.
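A Monte Carlo sanity check of the n^{r/2} scaling in Lemma 3.2 (illustrative only and not from the paper; we take an i.i.d. Rademacher sequence, which is trivially LNQD, with a_j ≡ 1 and r = 4 as arbitrary choices): the ratio E|∑_{j=1}^{n} X_j|^4 / n^2 stays bounded in n, as the lemma requires.

```python
import numpy as np

rng = np.random.default_rng(2)
reps = 10000

def fourth_moment_ratio(n):
    """Estimate E|sum_{j<=n} X_j|^4 / n^2 for i.i.d. Rademacher X_j (a_j = 1, r = 4)."""
    x = rng.choice([-1.0, 1.0], size=(reps, n))
    s = x.sum(axis=1)
    return float(np.mean(s**4)) / n**2

ratios = [fourth_moment_ratio(n) for n in (10, 100, 500)]
```

For these variables E(∑_{j=1}^{n} X_j)^4 = 3n^2 − 2n exactly, so the ratios approach 3 from below, matching the D n^{r/2} bound of the lemma with any D ≥ 3.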

Lemma 3.5 Assume that (A1)-(A5) hold true. Then we have the following.
Proof of Lemma 3.5 By Lemma 3.2 and assumptions (A4)-(A5), we obtain the stated bound. This completes the proof of Lemma 3.5(1). In addition, Lemma 3.5(2) can be derived immediately from the Markov inequality and Lemma 3.5(1).
Proof of Lemma 3.6 Let Γ_n := ∑_{1≤i<j≤k} Cov(z_{ni}, z_{nj}); then u_n^2 = E(U_{1n})^2 − 2Γ_n. By E(U_n)^2 = 1, Lemma 3.5(1), the C_r-inequality, and the Cauchy-Schwarz inequality, we obtain the bound (3.1). On the other hand, from the definition of an LNQD sequence, Lemma 3.1, (A1), and (A4), we obtain the bound (3.2) on Γ_n. Therefore, combining (3.1) and (3.2), we obtain the conclusion of Lemma 3.6.

Lemma 3.7 Assume that (A1)-(A5) hold true. Then, applying Lemma 3.6, we obtain

sup_y |P(H_n/u_n ≤ y) − Φ(y)| ≤ C ξ_{4n}.
Proof of Corollary 2.1 By (A1), we can easily see that V(q) → 0; therefore Corollary 2.1 holds.