The consistency for estimator of nonparametric regression model based on NOD errors

Abstract

By using some inequalities for NOD random variables, we investigate the nonparametric regression model with NOD errors. Some consistency results for the estimator of g(x) are presented, including the mean convergence, uniform convergence, almost sure convergence, and convergence rate. We generalize some related results, and as an example of the designed assumptions on the weight functions, we give the nearest neighbor weights.

AMS Mathematics Subject Classification 2000: 62G05; 62G08.

1 Introduction

Consider a fixed design regression model

$$Y_{ni} = g(x_{ni}) + \varepsilon_{ni}, \quad i = 1, 2, \ldots, n,$$
(1.1)

where $x_{ni}$ are design points on a set $A \subset \mathbb{R}^q$ for some $q \ge 1$, $g(\cdot)$ is an unknown function on $A$, and $\varepsilon_{ni}$ are random errors. Assume that for each $n$, $\{\varepsilon_{ni}, 1 \le i \le n\}$ has the same distribution as $\{\varepsilon_i, 1 \le i \le n\}$. As an estimator of $g(\cdot)$, the following weighted regression estimator is given:

$$g_n(x) = \sum_{i=1}^{n} W_{ni}(x) Y_{ni},$$

where $W_{ni}(x) = W_{ni}(x, x_{n1}, \ldots, x_{nn})$ are weight functions.
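
To fix ideas computationally, the following Python sketch (ours, not from the article) evaluates $g_n(x) = \sum_{i=1}^{n} W_{ni}(x) Y_{ni}$ for a generic family of weight functions; the function names and the normalized Gaussian kernel weights used for illustration are assumptions made here for concreteness, not part of the model above.

    import numpy as np

    def weighted_regression_estimate(x, design, y, weights):
        """Evaluate g_n(x) = sum_i W_ni(x) * Y_ni at a single point x."""
        w = weights(x, design)          # vector (W_n1(x), ..., W_nn(x))
        return np.dot(w, y)

    def kernel_weights(x, design, h=0.1):
        """Illustrative weight family (an assumption): normalized Gaussian kernel."""
        k = np.exp(-0.5 * ((design - x) / h) ** 2)
        return k / k.sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 200
        design = np.arange(1, n + 1) / n             # fixed design points x_ni = i/n
        g_true = np.sin(2 * np.pi * design)          # hypothetical regression function
        y = g_true + 0.3 * rng.standard_normal(n)    # independent errors (a special case of NOD)
        print(weighted_regression_estimate(0.5, design, y, kernel_weights))

Any other weight family, such as the nearest neighbor weights of Example 2.1 below, can be plugged in through the weights argument.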

The above estimator was first proposed by Georgiev [1] and subsequently has been studied by many authors. In the independent case, consistency and asymptotic normality have been investigated by Georgiev and Greblicki [2], Georgiev [3], Müller [4], and the references therein. Fan [5] extended the work of Georgiev [3] and Müller [4] on the estimation of the regression model to the case of $L_q$-mixingales for some $1 \le q \le 2$. Roussas [6] discussed strong consistency and quadratic mean consistency of $g_n(x)$, and Roussas et al. [7] established asymptotic normality of $g_n(x)$ assuming that the errors come from a strictly stationary stochastic process satisfying the strong mixing condition. Tran et al. [8] obtained the asymptotic normality of $g_n(x)$ assuming that the errors form a linear time series, more precisely, a weakly stationary linear process based on a martingale difference sequence. Hu et al. [9] generalized the main results of Tran et al. [8]. Liang and Jing [10] established the consistency, uniform consistency, and asymptotic normality of $g_n(x)$ under negatively associated (NA) samples. Meanwhile, for the semiparametric regression model, Ren and Chen [11] obtained the strong consistency of the least squares estimator of $\beta$ and the nonparametric estimator of $g(t)$ based on NA samples, Hu [12] obtained the consistency and complete consistency of these estimators based on linear time series, Baek and Liang [13] established some asymptotic results for these estimators under NA samples, and Liang et al. [14] established some asymptotic results for a linear process based on NA innovations, etc. For more details on the semiparametric regression model, one can refer to Härdle et al. [15] and the references therein.

In this article, we investigate the nonparametric regression model based on negatively orthant dependent (NOD) random variables, a dependence structure weaker than negative association. Some related definitions are given as follows:

Definition 1.1 Two random variables X and Y are said to be negatively quadrant dependent (NQD) if for all $x, y \in \mathbb{R}$,

$$P(X \le x, Y \le y) \le P(X \le x)P(Y \le y).$$

A sequence of random variables $\{X_n, n \ge 1\}$ is said to be pairwise NQD if for all $i, j \in \mathbb{N}$ with $i \ne j$, $X_i$ and $X_j$ are NQD.

The concept of NQD was introduced by Lehmann [16], who pointed out some useful properties of NQD random variables; for example, if X and Y are NQD, then

(i) $EXY \le EX\,EY$,

(ii) $P(X > x, Y > y) \le P(X > x)P(Y > y)$ for all $x, y \in \mathbb{R}$,

(iii) if f, g are both nondecreasing (or nonincreasing) functions, then f(X) and g(Y) are NQD.
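
As a quick numerical illustration of Definition 1.1 and property (i), the following sketch (ours, for illustration only) checks the NQD inequalities for the classical example X = U, Y = 1 - U with U uniform on [0, 1]; the grid and sample size are arbitrary choices.

    import numpy as np

    # Monte Carlo check of the NQD inequality for X = U, Y = 1 - U, U ~ Uniform[0, 1].
    rng = np.random.default_rng(2)
    u = rng.uniform(size=200_000)

    for xv in np.linspace(0.1, 0.9, 5):
        for yv in np.linspace(0.1, 0.9, 5):
            joint = np.mean((u <= xv) & (1 - u <= yv))        # P(X <= x, Y <= y)
            prod = np.mean(u <= xv) * np.mean(1 - u <= yv)    # P(X <= x) P(Y <= y)
            assert joint <= prod + 1e-3                       # Definition 1.1 holds

    # Property (i): E[XY] <= E[X] E[Y] (here 1/6 <= 1/4).
    print(np.mean(u * (1 - u)), np.mean(u) * np.mean(1 - u))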

Definition 1.2 A finite collection of random variables $X_1, X_2, \ldots, X_n$ is said to be NA if for every pair of disjoint subsets $A_1, A_2$ of $\{1, 2, \ldots, n\}$,

$$\operatorname{Cov}\bigl(f(X_i : i \in A_1),\, g(X_j : j \in A_2)\bigr) \le 0,$$

whenever f and g are coordinatewise nondecreasing such that this covariance exists.

An infinite sequence $\{X_n, n \ge 1\}$ is NA if every finite subcollection is NA.

Definition 1.3 A finite collection of random variables $X_1, X_2, \ldots, X_n$ is said to be negatively upper orthant dependent (NUOD) if for all real numbers $x_1, x_2, \ldots, x_n$,

$$P(X_i > x_i,\ i = 1, 2, \ldots, n) \le \prod_{i=1}^{n} P(X_i > x_i),$$

and negatively lower orthant dependent (NLOD) if for all real numbers $x_1, x_2, \ldots, x_n$,

$$P(X_i \le x_i,\ i = 1, 2, \ldots, n) \le \prod_{i=1}^{n} P(X_i \le x_i).$$

A finite collection of random variables $X_1, X_2, \ldots, X_n$ is said to be NOD if they are both NUOD and NLOD.

An infinite sequence $\{X_n, n \ge 1\}$ is said to be NOD (NUOD or NLOD) if every finite subcollection is NOD (NUOD or NLOD).

The concepts of NA and NOD sequences were introduced by Joag-Dev and Proschan [17]. They pointed out that NA random variables are NOD random variables, but neither NUOD nor NLOD implies NA. Various results and examples for NOD random variables can be found in Joag-Dev and Proschan [17], Bozorgnia et al. [18], Asadian et al. [19], Wang et al. [20], Wu [21, 22], Wang et al. [23, 24], Li et al. [25], Sung [26], etc. Obviously, by the definitions of NOD and pairwise NQD, NOD random variables are pairwise NQD random variables. For more results and examples on pairwise NQD random variables, one can refer to Lehmann [16], Matula [27], Wu [28], Gan and Chen [29], Li and Yang [30], etc. However, unlike NOD random variables, pairwise NQD random variables do not, as far as we know, satisfy such useful inequalities as the Bernstein-type inequality.

Inspired by Liang and Jing [10] and the other articles referred to above, we investigate the nonparametric regression model based on NOD random errors. By using the moment inequality, the Bernstein-type inequality, and a truncation method for NOD random variables, we obtain some consistency results for the estimator of g(x), such as the mean convergence, uniform convergence, almost sure convergence and convergence rate. We generalize some results of Liang and Jing [10] for NA random variables to the case of NOD random variables. Meanwhile, as an example of the designed assumptions on the weight functions, we give the nearest neighbor weights.

For any function g(x), we use c(g) to denote the set of all continuity points of g on the set $A \subset \mathbb{R}^q$ for some $q \ge 1$. Let $c, c_1, c_2, C, C_1, C_2, \ldots$ denote positive constants whose values may vary at each occurrence. $\lfloor x \rfloor$ denotes the largest integer not exceeding x, I(B) is the indicator function of the set B, $x^+ = xI(x \ge 0)$, $x^- = -xI(x < 0)$, and $\|x\|$ denotes the Euclidean norm of x. In this article, the main results are presented in Section 2, while some lemmas and the proofs of the main results are presented in Sections 3 and 4, respectively.

2 The main results

Under the nonparametric regression model (1.1), for any fixed point $x \in A$, some assumptions on the weight functions $W_{ni}(x) = W_{ni}(x, x_{n1}, \ldots, x_{nn})$ are given as follows:

(H1) $\sum_{i=1}^{n} W_{ni}(x) \to 1$ as $n \to \infty$;

(H2) $\sum_{i=1}^{n} |W_{ni}(x)| \le C$ for all $n$;

(H3) $\sum_{i=1}^{n} W_{ni}^2(x) \to 0$ as $n \to \infty$;

(H4) $\sum_{i=1}^{n} |W_{ni}(x)|\,|g(x_{ni}) - g(x)|\, I(\|x_{ni} - x\| > a) \to 0$ as $n \to \infty$ for all $a > 0$.

Theorem 2.1 Let $\{\varepsilon_n, n \ge 1\}$ be a mean zero NOD sequence. Assume that conditions (H1)-(H4) hold. If $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$, then for $x \in c(g)$ and some $p \in (0, 2]$,

$$E|g_n(x) - g(x)|^p \to 0, \quad \text{as } n \to \infty.$$
(2.1)

If $\sup_{n \ge 1} E|\varepsilon_n|^p < \infty$ for some $p > 2$, then (2.1) also holds true.

In order to obtain uniform convergence for the estimator of g(x), for any fixed point x on a compact set $A \subset \mathbb{R}^q$ for some $q \ge 1$, the assumptions on $W_{ni}(x) = W_{ni}(x, x_{n1}, \ldots, x_{nn})$ are replaced by the following uniform versions:

(H1′) $\sup_{x \in A} \left| \sum_{i=1}^{n} W_{ni}(x) - 1 \right| \to 0$ as $n \to \infty$;
(H2′) $\sup_{x \in A} \sum_{i=1}^{n} |W_{ni}(x)| \le C$ for all $n$;
(H3′) $\sup_{x \in A} \sum_{i=1}^{n} W_{ni}^2(x) \to 0$ as $n \to \infty$;
(H4′) $\sup_{x \in A} \sum_{i=1}^{n} |W_{ni}(x)|\, I(\|x_{ni} - x\| > a) \to 0$ as $n \to \infty$ for all $a > 0$.

Theorem 2.2 Let $\{\varepsilon_n, n \ge 1\}$ be a mean zero NOD sequence. Assume that conditions (H1′)-(H4′) hold and that g is continuous on the compact set A. If $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$, then for some $p \in (0, 2]$,

$$\sup_{x \in A} E|g_n(x) - g(x)|^p \to 0, \quad \text{as } n \to \infty.$$
(2.2)

If $\sup_{n \ge 1} E|\varepsilon_n|^p < \infty$ for some $p > 2$, then (2.2) also holds true.

Next, we study the almost sure convergence and the convergence rate of the estimator of g(x). Similarly, for any fixed point x on the compact set $A \subset \mathbb{R}^q$ for some $q \ge 1$, some assumptions on $W_{ni}(x) = W_{ni}(x, x_{n1}, \ldots, x_{nn})$ are given as follows:

(H5) $\left| \sum_{i=1}^{n} W_{ni}(x) - 1 \right| = O(n^{-1/4})$;

(H6) $\sum_{i=1}^{n} |W_{ni}(x)| \le C$ for all $n \ge 1$ and $\max_{1 \le i \le n} |W_{ni}(x)| = O(n^{-1/2} \log^{-3/2} n)$;

(H7) $\sum_{i=1}^{n} |W_{ni}(x)|\,|g(x_{ni}) - g(x)|\, I(\|x_{ni} - x\| > a n^{-1/4}) = O(n^{-1/4})$ for some $a > 0$.

Theorem 2.3 Let $\{\varepsilon_n, n \ge 1\}$ be a mean zero NOD sequence such that $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$. Suppose that conditions (H5)-(H7) hold and that g(x) satisfies a local Lipschitz condition around the point x. Then for $x \in A$,

$$g_n(x) \to g(x), \quad \text{as } n \to \infty, \ \text{a.s.}$$
(2.3)

Theorem 2.4 Let $\{\varepsilon_n, n \ge 1\}$ be a mean zero NOD sequence such that $\sup_{n \ge 1} E\varepsilon_n^4 < \infty$. Suppose that conditions (H5)-(H7) hold and that g(x) satisfies a local Lipschitz condition around the point x. Then for $x \in A$,

$$|g_n(x) - g(x)| = O(n^{-1/4}), \quad \text{a.s.}$$
(2.4)

Remark 2.1 Similar assumptions on the weight functions can be found in Ren and Chen [11], Hu et al. [31], Liang and Jing [10], etc. Under NA samples and other assumptions, Liang and Jing [10] obtained the result $E|g_n(x) - g(x)|^p \to 0$ as $n \to \infty$ for some $p > 1$ (see Liang and Jing [10, Theorem 2.1]). In our Theorem 2.1, we obtain $E|g_n(x) - g(x)|^p \to 0$ as $n \to \infty$ for some $p > 0$. Liang and Jing [10] also studied the strong consistency of the estimator of g(x). In our Theorems 2.3 and 2.4, the strong consistency and the convergence rate of the estimator of g(x) are presented. Since an NA sequence is an NOD sequence, we generalize some results of Liang and Jing [10] to the case of NOD sequences.

Example 2.1 Here, we give an example showing that the designed assumptions (H5)-(H7) are satisfied by the nearest neighbor weights. Without loss of generality, let A = [0, 1] and $x_{ni} = i/n$, $1 \le i \le n$. For $x \in A$, we rewrite $|x_{n1} - x|, |x_{n2} - x|, \ldots, |x_{nn} - x|$ as follows:

$$\left| x_{R_1(x)}^{(n)} - x \right| \le \left| x_{R_2(x)}^{(n)} - x \right| \le \cdots \le \left| x_{R_n(x)}^{(n)} - x \right|,$$
(2.5)

where, if $|x_{ni} - x| = |x_{nj} - x|$ for $i < j$, then $|x_{ni} - x|$ is placed in front of $|x_{nj} - x|$. Let $k_n = \lfloor n^{5/8} \rfloor$ and define the nearest neighbor weight functions as follows:

$$W_{ni}(x) = W_{ni}(x, x_{n1}, x_{n2}, \ldots, x_{nn}) = \begin{cases} \dfrac{1}{k_n}, & \text{if } |x_{ni} - x| \le \left| x_{R_{k_n}(x)}^{(n)} - x \right|, \\[2mm] 0, & \text{otherwise.} \end{cases}$$
(2.6)

Consequently, for every $x \in [0, 1]$, it follows from the definition of $R_i(x)$ and the choice of $x_{ni}$ that

$$\sum_{i=1}^{n} W_{ni}(x) = \sum_{i=1}^{n} W_{nR_i(x)}(x) = \sum_{i=1}^{k_n} \frac{1}{k_n} = 1,$$
(2.7)
$$\max_{1 \le i \le n} W_{ni}(x) = \frac{1}{k_n} \le c_1 n^{-5/8},$$
(2.8)
$$\sum_{i=1}^{n} W_{ni}(x) I\left( |x_{ni} - x| > a n^{-1/4} \right) \le \sum_{i=1}^{n} W_{ni}(x) \frac{|x_{ni} - x|^2}{a^2 n^{-1/2}} = \sum_{i=1}^{k_n} \frac{\left| x_{R_i(x)}^{(n)} - x \right|^2}{k_n a^2 n^{-1/2}} \le \sum_{i=1}^{k_n} \frac{(i/n)^2}{k_n a^2 n^{-1/2}} \le \frac{k_n^2}{n^2 a^2 n^{-1/2}} \le c_2 n^{-1/4}, \quad a > 0.$$
(2.9)

If g is continuous on [0, 1], then by (2.6)-(2.9) one can verify that the assumptions (H1)-(H7) and (H1′)-(H4′) are satisfied. A small numerical sketch of these weights is given below.
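
The following Python sketch (ours, for illustration only; the point x, the level a and the sample size are arbitrary choices) constructs the nearest neighbor weights (2.6), with ties broken as in (2.5), and numerically checks the quantities appearing in (2.7)-(2.9).

    import numpy as np

    def nearest_neighbor_weights(x, design, k_n):
        """Nearest neighbor weights of Example 2.1: W_ni(x) = 1/k_n for the k_n
        design points closest to x (ties broken by the smaller index), else 0."""
        order = np.argsort(np.abs(design - x), kind="stable")  # stable sort handles ties as in (2.5)
        w = np.zeros(len(design))
        w[order[:k_n]] = 1.0 / k_n
        return w

    if __name__ == "__main__":
        n = 1000
        design = np.arange(1, n + 1) / n       # x_ni = i/n on A = [0, 1]
        k_n = int(np.floor(n ** (5 / 8)))      # k_n = floor(n^(5/8))
        x, a = 0.37, 1.0
        w = nearest_neighbor_weights(x, design, k_n)
        print(w.sum())                         # (2.7): equals 1
        print(w.max(), n ** (-5 / 8))          # (2.8): 1/k_n is of order n^(-5/8)
        print(np.sum(w * (np.abs(design - x) > a * n ** -0.25)))  # (2.9): O(n^(-1/4))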

3 Some lemmas

Lemma 3.1 (cf. Bozorgnia et al. [18]). Let random variables $X_1, X_2, \ldots, X_n$ be NOD and let $f_1, f_2, \ldots, f_n$ all be nondecreasing (or all nonincreasing) functions; then the random variables $f_1(X_1), f_2(X_2), \ldots, f_n(X_n)$ are NOD.

Lemma 3.2 (cf. Asadian et al. [19]). Let $\{X_n, n \ge 1\}$ be an NOD sequence with $EX_n = 0$ and $E|X_n|^p < \infty$ for all $n \ge 1$ and some $p \ge 2$. Then

$$E\left| \sum_{i=1}^{n} X_i \right|^p \le c_p \left\{ \sum_{i=1}^{n} E|X_i|^p + \left( \sum_{i=1}^{n} E X_i^2 \right)^{p/2} \right\},$$

where c p depends only on p.

Lemma 3.3 (cf. Wang et al. [20]). Let $\{X_n, n \ge 1\}$ be a sequence of NOD random variables with $EX_n = 0$ and $|X_n| \le b$ for each $n \ge 1$, where b is a positive constant.

Denote $\Delta_n^2 = \sum_{i=1}^{n} E X_i^2$ for each $n \ge 1$. Then for every $\varepsilon > 0$,

$$P\left( \left| \sum_{i=1}^{n} X_i \right| \ge \varepsilon \right) \le 2 \exp\left\{ -\frac{\varepsilon^2}{2\left( 2\Delta_n^2 + b\varepsilon \right)} \right\}.$$
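
As a quick sanity check of the bound in Lemma 3.3 (as reconstructed above), the following sketch (ours; the distribution, sample size and threshold are arbitrary choices) compares the empirical tail probability with the right-hand side for independent bounded variables, which are trivially NOD.

    import numpy as np

    rng = np.random.default_rng(1)
    n, b, eps, reps = 200, 1.0, 30.0, 20000

    # X_i ~ Uniform(-b, b): mean zero, |X_i| <= b, E X_i^2 = b^2 / 3.
    x = rng.uniform(-b, b, size=(reps, n))
    sums = x.sum(axis=1)
    delta_sq = n * b ** 2 / 3.0                 # Delta_n^2 = sum_i E X_i^2

    empirical = np.mean(np.abs(sums) >= eps)    # estimated P(|sum X_i| >= eps)
    bound = 2 * np.exp(-eps ** 2 / (2 * (2 * delta_sq + b * eps)))
    print(empirical, bound)                     # the bound should dominate the estimate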

4 Proofs of the main results

Proof of Theorem 2.1: By the $C_r$ inequality, we have

$$E|g_n(x) - g(x)|^p \le c_p \left\{ E|g_n(x) - Eg_n(x)|^p + |Eg_n(x) - g(x)|^p \right\}.$$
(4.1)

For $x \in c(g)$ and $a > 0$,

$$|Eg_n(x) - g(x)| \le \sum_{i=1}^{n} |W_{ni}(x)|\,|g(x_{ni}) - g(x)|\, I(\|x_{ni} - x\| \le a) + \sum_{i=1}^{n} |W_{ni}(x)|\,|g(x_{ni}) - g(x)|\, I(\|x_{ni} - x\| > a) + |g(x)| \left| \sum_{i=1}^{n} W_{ni}(x) - 1 \right|.$$

So, similar to the proof of (2.10) in Hu et al. [31], by conditions (H1), (H2) and (H4), it is easy to see that

$$|Eg_n(x) - g(x)| \to 0, \quad x \in c(g).$$
(4.2)

On the other hand, by Lemma 3.1, for fixed x, we can see that $\{W_{ni}^+(x)\varepsilon_i, 1 \le i \le n\}$ and $\{W_{ni}^-(x)\varepsilon_i, 1 \le i \le n\}$ are also NOD sequences. Combining this with $W_{ni}(x)\varepsilon_i = W_{ni}^+(x)\varepsilon_i - W_{ni}^-(x)\varepsilon_i$, we may assume without loss of generality that $W_{ni}(x) \ge 0$ in the proof. If $0 < p \le 2$, then by Jensen's inequality, Lemma 3.2, (H3) and $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$, we have

$$E|g_n(x) - Eg_n(x)|^p = E\left| \sum_{i=1}^{n} W_{ni}(x)\varepsilon_{ni} \right|^p = E\left| \sum_{i=1}^{n} W_{ni}(x)\varepsilon_i \right|^p \le \left( E\left| \sum_{i=1}^{n} W_{ni}(x)\varepsilon_i \right|^2 \right)^{p/2} \le \left( C_1 \sum_{i=1}^{n} W_{ni}^2(x) E\varepsilon_i^2 \right)^{p/2} \le C_2 \left( \sum_{i=1}^{n} W_{ni}^2(x) \right)^{p/2} \to 0,$$
(4.3)

which follows from the fact that $\{\varepsilon_{ni}, 1 \le i \le n\}$ has the same distribution as $\{\varepsilon_i, 1 \le i \le n\}$ for each n. Otherwise, for $p > 2$, by Lemma 3.2, $\sup_{n \ge 1} E|\varepsilon_n|^p < \infty$ and (H3) again, we obtain

$$E|g_n(x) - Eg_n(x)|^p = E\left| \sum_{i=1}^{n} W_{ni}(x)\varepsilon_{ni} \right|^p = E\left| \sum_{i=1}^{n} W_{ni}(x)\varepsilon_i \right|^p \le C_3 \left\{ \sum_{i=1}^{n} |W_{ni}(x)|^p E|\varepsilon_i|^p + \left( \sum_{i=1}^{n} W_{ni}^2(x) E\varepsilon_i^2 \right)^{p/2} \right\} \le C_4 \left\{ \left( \sum_{i=1}^{n} W_{ni}^2(x) \right)^{p/2} + \left( \sum_{i=1}^{n} W_{ni}^2(x) \right)^{p/2} \right\} \to 0,$$
(4.4)

since $\left( \sum_{i=1}^{n} a_i^{\alpha} \right)^{1/\alpha} \ge \left( \sum_{i=1}^{n} a_i^{\beta} \right)^{1/\beta}$ for any positive numbers $\{a_i, 1 \le i \le n\}$ and $1 \le \alpha \le \beta$. Therefore, by (4.1)-(4.4), the desired result (2.1) is proved completely.

Proof of Theorem 2.2: Since g is continuous on the compact set A, g is uniformly continuous on A. Consequently, similar to the proof of Theorem 2.1, we can get that

$$\lim_{n \to \infty} \sup_{x \in A} E|g_n(x) - Eg_n(x)|^p = 0, \qquad \lim_{n \to \infty} \sup_{x \in A} |Eg_n(x) - g(x)| = 0.$$

Therefore,

$$\lim_{n \to \infty} \sup_{x \in A} E|g_n(x) - g(x)|^p \le c_p \left\{ \lim_{n \to \infty} \sup_{x \in A} E|g_n(x) - Eg_n(x)|^p + \lim_{n \to \infty} \sup_{x \in A} |Eg_n(x) - g(x)|^p \right\} = 0,$$

which implies the desired result (2.2).

Proof of Theorem 2.3: Combining the proof of (4.2) with the assumptions of (H5)-(H7) and g(x) satisfying a local Lipschitz condition around the point x, we can get that

$$|Eg_n(x) - g(x)| = O(n^{-1/4}).$$
(4.5)

Therefore, for $x \in A$, to prove (2.3), we only have to show that

$$g_n(x) - Eg_n(x) \to 0, \quad \text{as } n \to \infty, \ \text{a.s.}$$
(4.6)

Without loss of generality, we assume that $W_{ni}(x) \ge 0$ in the proof. Let

$$\begin{aligned} \varepsilon_{1,i}^{(n)} &= -i^{1/2} I\left( \varepsilon_{ni} < -i^{1/2} \right) + \varepsilon_{ni} I\left( |\varepsilon_{ni}| \le i^{1/2} \right) + i^{1/2} I\left( \varepsilon_{ni} > i^{1/2} \right), \\ \varepsilon_{2,i}^{(n)} &= \left( \varepsilon_{ni} - i^{1/2} \right) I\left( \varepsilon_{ni} > i^{1/2} \right), \qquad \varepsilon_{3,i}^{(n)} = \left( \varepsilon_{ni} + i^{1/2} \right) I\left( \varepsilon_{ni} < -i^{1/2} \right), \\ \varepsilon_{1,i} &= -i^{1/2} I\left( \varepsilon_i < -i^{1/2} \right) + \varepsilon_i I\left( |\varepsilon_i| \le i^{1/2} \right) + i^{1/2} I\left( \varepsilon_i > i^{1/2} \right), \\ \varepsilon_{2,i} &= \left( \varepsilon_i - i^{1/2} \right) I\left( \varepsilon_i > i^{1/2} \right), \qquad \varepsilon_{3,i} = \left( \varepsilon_i + i^{1/2} \right) I\left( \varepsilon_i < -i^{1/2} \right). \end{aligned}$$

Since $E\varepsilon_{ni} = E\varepsilon_i = 0$ for each n, it is easy to see that

$$g_n(x) - Eg_n(x) = \sum_{i=1}^{n} W_{ni}(x)\varepsilon_{ni} = \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{1,i}^{(n)} - E\varepsilon_{1,i}^{(n)} \right) + \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{2,i}^{(n)} - E\varepsilon_{2,i}^{(n)} \right) + \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{3,i}^{(n)} - E\varepsilon_{3,i}^{(n)} \right) =: T_{n1} + T_{n2} + T_{n3}.$$
(4.7)

Obviously, for fixed x and n, $\{W_{ni}(x)(\varepsilon_{1,i} - E\varepsilon_{1,i}), 1 \le i \le n\}$ is an NOD sequence with mean zero. Meanwhile, by condition (H6), we have

$$\max_{1 \le i \le n} \left| W_{ni}(x)\left( \varepsilon_{1,i} - E\varepsilon_{1,i} \right) \right| \le 2 n^{1/2} \max_{1 \le i \le n} W_{ni}(x) \le c_1 \log^{-3/2} n,$$
$$\sum_{i=1}^{n} \operatorname{Var}\left( W_{ni}(x)\left( \varepsilon_{1,i} - E\varepsilon_{1,i} \right) \right) \le \sum_{i=1}^{n} W_{ni}^2(x) E\varepsilon_i^2 \le c_2 \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} W_{ni}(x) \le c_3 n^{-1/2} \log^{-3/2} n.$$

Since $\{\varepsilon_{ni}, 1 \le i \le n\}$ has the same distribution as $\{\varepsilon_i, 1 \le i \le n\}$ for each n, we obtain by applying Lemma 3.3 that for every $\varepsilon > 0$,

$$P\left( |T_{n1}| \ge \varepsilon \right) = P\left( \left| \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{1,i}^{(n)} - E\varepsilon_{1,i}^{(n)} \right) \right| \ge \varepsilon \right) = P\left( \left| \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{1,i} - E\varepsilon_{1,i} \right) \right| \ge \varepsilon \right) \le 2 \exp\left\{ -\frac{\varepsilon^2}{2\left( 2 c_3 n^{-1/2} \log^{-3/2} n + c_1 \varepsilon \log^{-3/2} n \right)} \right\} \le 2 \exp\left\{ -c_4 \log^{3/2} n \right\} \le c_5 n^{-2}, \quad \text{for } n \text{ large enough},$$

which implies

$$T_{n1} = \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{1,i}^{(n)} - E\varepsilon_{1,i}^{(n)} \right) \to 0, \quad \text{as } n \to \infty, \ \text{a.s.},$$
(4.8)

by the Borel-Cantelli lemma.

Next, we turn to estimating $T_{n2}$ and $T_{n3}$. It can be checked from $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$ that

$$\sum_{i=1}^{\infty} \frac{E\left| \varepsilon_{2,i}^{(n)} \right|}{i^{1/2} \log^{5/4}(2i)} = \sum_{i=1}^{\infty} \frac{E|\varepsilon_{2,i}|}{i^{1/2} \log^{5/4}(2i)} \le \sum_{i=1}^{\infty} \frac{E|\varepsilon_i| I\left( |\varepsilon_i| > i^{1/2} \right)}{i^{1/2} \log^{5/4}(2i)} \le \sum_{i=1}^{\infty} \frac{E\varepsilon_i^2}{i \log^{5/4}(2i)} < \infty,$$
(4.9)

which implies

$$\sum_{i=1}^{\infty} \frac{\left| \varepsilon_{2,i}^{(n)} \right|}{i^{1/2} \log^{5/4}(2i)} < \infty, \quad \text{a.s.}$$

Consequently, by Kronecker's lemma, we have that

$$\frac{1}{n^{1/2} \log^{5/4}(2n)} \sum_{i=1}^{n} \left| \varepsilon_{2,i}^{(n)} \right| \to 0, \quad \text{a.s.}$$

Thus, by the condition (H6), it is easy to see that

$$\left| \sum_{i=1}^{n} W_{ni}(x) \varepsilon_{2,i}^{(n)} \right| \le \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} \left| \varepsilon_{2,i}^{(n)} \right| \le c_7 n^{-1/2} \log^{-3/2} n \sum_{i=1}^{n} \left| \varepsilon_{2,i}^{(n)} \right| = o\left( \log^{-1/4} n \right), \quad \text{a.s.}$$
(4.10)

Obviously, by $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$ and (H6) again,

$$\left| \sum_{i=1}^{n} W_{ni}(x) E\varepsilon_{2,i}^{(n)} \right| = \left| \sum_{i=1}^{n} W_{ni}(x) E\varepsilon_{2,i} \right| \le \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} E|\varepsilon_i| I\left( |\varepsilon_i| > i^{1/2} \right) \le c_8 n^{-1/2} \log^{-3/2} n \sum_{i=1}^{n} i^{-1/2} E\varepsilon_i^2 I\left( |\varepsilon_i| > i^{1/2} \right) = O\left( \log^{-3/2} n \right).$$
(4.11)

Combining (4.10) with (4.11), it follows that

$$T_{n2} = \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{2,i}^{(n)} - E\varepsilon_{2,i}^{(n)} \right) = o\left( \log^{-1/4} n \right), \quad \text{a.s.}$$
(4.12)

Likewise, by $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$, we find that

$$\sum_{i=1}^{\infty} \frac{E\left| \varepsilon_{3,i}^{(n)} \right|}{i^{1/2} \log^{5/4}(2i)} = \sum_{i=1}^{\infty} \frac{E|\varepsilon_{3,i}|}{i^{1/2} \log^{5/4}(2i)} \le \sum_{i=1}^{\infty} \frac{-E\varepsilon_i I\left( \varepsilon_i < -i^{1/2} \right)}{i^{1/2} \log^{5/4}(2i)} \le \sum_{i=1}^{\infty} \frac{E\varepsilon_i^2}{i \log^{5/4}(2i)} < \infty,$$

which implies

$$\sum_{i=1}^{\infty} \frac{\left| \varepsilon_{3,i}^{(n)} \right|}{i^{1/2} \log^{5/4}(2i)} < \infty, \quad \text{a.s.}$$

Then, by Kronecker's lemma

$$\frac{1}{n^{1/2} \log^{5/4}(2n)} \sum_{i=1}^{n} \left| \varepsilon_{3,i}^{(n)} \right| \to 0, \quad \text{a.s.}$$

Consequently, by (H6), we have

$$\left| \sum_{i=1}^{n} W_{ni}(x) \varepsilon_{3,i}^{(n)} \right| \le \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} \left| \varepsilon_{3,i}^{(n)} \right| = o\left( \log^{-1/4} n \right), \quad \text{a.s.}$$

On the other hand, by (H6) and $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$ again,

$$\left| \sum_{i=1}^{n} W_{ni}(x) E\varepsilon_{3,i}^{(n)} \right| = \left| \sum_{i=1}^{n} W_{ni}(x) E\varepsilon_{3,i} \right| \le \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} E|\varepsilon_i| I\left( |\varepsilon_i| > i^{1/2} \right) \le c\, n^{-1/2} \log^{-3/2} n \sum_{i=1}^{n} i^{-1/2} E\varepsilon_i^2 I\left( |\varepsilon_i| > i^{1/2} \right) = O\left( \log^{-3/2} n \right).$$

Finally,

$$T_{n3} = \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{3,i}^{(n)} - E\varepsilon_{3,i}^{(n)} \right) = o\left( \log^{-1/4} n \right), \quad \text{a.s.}$$
(4.13)

Therefore, by (4.7), (4.8), (4.12) and (4.13), (4.6) is completely proved. The desired result (2.3) follows from (4.5) and (4.6) immediately.

Proof of Theorem 2.4: By the estimate (4.5), to prove (2.4) we only need to show that $|g_n(x) - Eg_n(x)| = O(n^{-1/4})$, a.s. We may again assume that $W_{ni}(x) \ge 0$ in the proof. Similar to the proof of Theorem 2.3, we use the same notation $\varepsilon_{q,i}^{(n)}$, $\varepsilon_{q,i}$ and $T_{nq}$ for $q = 1, 2, 3$, but with $i^{1/2}$ replaced by $i^{1/4}$. Obviously $\sup_{n \ge 1} E\varepsilon_n^4 < \infty$ implies $\sup_{n \ge 1} E\varepsilon_n^2 < \infty$; hence, by (H6), we have

$$\max_{1 \le i \le n} \left| W_{ni}(x)\left( \varepsilon_{1,i} - E\varepsilon_{1,i} \right) \right| \le 2 n^{1/4} \max_{1 \le i \le n} W_{ni}(x) \le c_1 n^{-1/4} \log^{-3/2} n,$$
$$\sum_{i=1}^{n} \operatorname{Var}\left( W_{ni}(x)\left( \varepsilon_{1,i} - E\varepsilon_{1,i} \right) \right) \le \sum_{i=1}^{n} W_{ni}^2(x) E\varepsilon_i^2 \le c_2 n^{-1/2} \log^{-3/2} n.$$

Since $\{\varepsilon_{ni}, 1 \le i \le n\}$ has the same distribution as $\{\varepsilon_i, 1 \le i \le n\}$ for each n, we obtain by applying Lemma 3.3 that for every $\varepsilon > 0$,

$$P\left( |T_{n1}| \ge \varepsilon n^{-1/4} \right) = P\left( \left| \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{1,i}^{(n)} - E\varepsilon_{1,i}^{(n)} \right) \right| \ge \varepsilon n^{-1/4} \right) = P\left( \left| \sum_{i=1}^{n} W_{ni}(x)\left( \varepsilon_{1,i} - E\varepsilon_{1,i} \right) \right| \ge \varepsilon n^{-1/4} \right) \le 2 \exp\left\{ -\frac{\varepsilon^2 n^{-1/2}}{2\left( 2 c_2 n^{-1/2} \log^{-3/2} n + c_1 \varepsilon n^{-1/2} \log^{-3/2} n \right)} \right\} \le 2 \exp\left\{ -c_3 \log^{3/2} n \right\} \le c_4 n^{-2}, \quad \text{for } n \text{ large enough},$$

which implies by Borel-Cantelli lemma that

$$n^{1/4} T_{n1} \to 0, \quad \text{a.s.}$$
(4.14)

Meanwhile, it can be checked from $\sup_{n \ge 1} E\varepsilon_n^4 < \infty$ that

$$\sum_{i=1}^{\infty} \frac{E\left| \varepsilon_{2,i}^{(n)} \right|}{i^{1/4} \log^{3/2}(2i)} = \sum_{i=1}^{\infty} \frac{E|\varepsilon_{2,i}|}{i^{1/4} \log^{3/2}(2i)} \le \sum_{i=1}^{\infty} \frac{E|\varepsilon_i| I\left( |\varepsilon_i| > i^{1/4} \right)}{i^{1/4} \log^{3/2}(2i)} \le \sum_{i=1}^{\infty} \frac{E\varepsilon_i^4}{i \log^{3/2}(2i)} < \infty,$$

which implies

$$\sum_{i=1}^{\infty} \frac{\left| \varepsilon_{2,i}^{(n)} \right|}{i^{1/4} \log^{3/2}(2i)} < \infty, \quad \text{a.s.}$$

Then, we have by Kronecker's lemma that

$$\frac{1}{n^{1/4} \log^{3/2}(2n)} \sum_{i=1}^{n} \left| \varepsilon_{2,i}^{(n)} \right| \to 0, \quad \text{a.s.}$$

Consequently, by (H6), it follows that

$$\left| \sum_{i=1}^{n} W_{ni}(x) \varepsilon_{2,i}^{(n)} \right| \le \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} \left| \varepsilon_{2,i}^{(n)} \right| = o\left( n^{-1/4} \right), \quad \text{a.s.},$$
(4.15)

and

$$\left| \sum_{i=1}^{n} W_{ni}(x) E\varepsilon_{2,i}^{(n)} \right| = \left| \sum_{i=1}^{n} W_{ni}(x) E\varepsilon_{2,i} \right| \le \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} E|\varepsilon_i| I\left( |\varepsilon_i| > i^{1/4} \right) \le c_5 n^{-1/2} \log^{-3/2} n \sum_{i=1}^{n} i^{-3/4} E\varepsilon_i^4 I\left( |\varepsilon_i| > i^{1/4} \right) = O\left( n^{-1/4} \log^{-3/2} n \right).$$
(4.16)

On the other hand, it can be checked that

$$\sum_{i=1}^{\infty} \frac{E\left| \varepsilon_{3,i}^{(n)} \right|}{i^{1/4} \log^{3/2}(2i)} = \sum_{i=1}^{\infty} \frac{E|\varepsilon_{3,i}|}{i^{1/4} \log^{3/2}(2i)} \le \sum_{i=1}^{\infty} \frac{-E\varepsilon_i I\left( \varepsilon_i < -i^{1/4} \right)}{i^{1/4} \log^{3/2}(2i)} \le \sum_{i=1}^{\infty} \frac{E\varepsilon_i^4}{i \log^{3/2}(2i)} < \infty,$$

which implies

$$\sum_{i=1}^{\infty} \frac{\left| \varepsilon_{3,i}^{(n)} \right|}{i^{1/4} \log^{3/2}(2i)} < \infty, \quad \text{a.s.}$$

So, by Kronecker's lemma,

$$\frac{1}{n^{1/4} \log^{3/2}(2n)} \sum_{i=1}^{n} \left| \varepsilon_{3,i}^{(n)} \right| \to 0, \quad \text{a.s.}$$

Consequently, by (H6), we have

$$\left| \sum_{i=1}^{n} W_{ni}(x) \varepsilon_{3,i}^{(n)} \right| \le \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} \left| \varepsilon_{3,i}^{(n)} \right| = o\left( n^{-1/4} \right), \quad \text{a.s.},$$
(4.17)

and

$$\left| \sum_{i=1}^{n} W_{ni}(x) E\varepsilon_{3,i}^{(n)} \right| = \left| \sum_{i=1}^{n} W_{ni}(x) E\varepsilon_{3,i} \right| \le \max_{1 \le i \le n} W_{ni}(x) \sum_{i=1}^{n} E|\varepsilon_i| I\left( |\varepsilon_i| > i^{1/4} \right) \le c\, n^{-1/2} \log^{-3/2} n \sum_{i=1}^{n} i^{-3/4} E\varepsilon_i^4 I\left( |\varepsilon_i| > i^{1/4} \right) = O\left( n^{-1/4} \log^{-3/2} n \right).$$
(4.18)

Finally, similar to the proof of (2.3), it follows from (4.14)-(4.18) that $|g_n(x) - Eg_n(x)| = O(n^{-1/4})$, a.s.

References

  1. Georgiev AA: Local properties of function fitting estimates with applications to system identification. In Mathematical Statistics and Applications, Proceedings of the 4th Pannonian Symposium on Mathematical Statistics (Bad Tatzmannsdorf, Austria, 4–10 Sept 1983). Reidel, Dordrecht; 1985:141–151.

  2. Georgiev AA, Greblicki W: Nonparametric function recovering from noisy observations. J Stat Plan Infer 1986, 13: 1–14.

  3. Georgiev AA: Consistent nonparametric multiple regression: the fixed design case. J Multivar Anal 1988, 25(1):100–110. 10.1016/0047-259X(88)90155-8

  4. Müller HG: Weak and universal consistency of moving weighted averages. Period Math Hungar 1987, 18(3):241–250. 10.1007/BF01848087

  5. Fan Y: Consistent nonparametric multiple regression for dependent heterogeneous processes: the fixed design case. J Multivar Anal 1990, 33(1):72–88. 10.1016/0047-259X(90)90006-4

  6. Roussas GG: Consistent regression estimation with fixed design points under dependence conditions. Stat Probab Lett 1989, 8(1):41–50. 10.1016/0167-7152(89)90081-3

  7. Roussas GG, Tran LT, Ioannides DA: Fixed design regression for time series: Asymptotic normality. J Multivar Anal 1992, 40(2):262–291. 10.1016/0047-259X(92)90026-C

  8. Tran L, Roussas G, Yakowitz S, Van BT: Fixed-design regression for linear time series. Ann Stat 1996, 24(3):975–991.

  9. Hu SH, Zhu CH, Chen YB: Fixed-design regression for linear time series. Acta Math Sci Ser B Engl Ed 2002, 22(1):9–18.

  10. Liang HY, Jing BY: Asymptotic properties for estimates of nonparametric regression models based on negatively associated sequences. J Multivar Anal 2005, 95(2):227–245. 10.1016/j.jmva.2004.06.004

  11. Ren Z, Chen MH: Strong consistency of a class of estimators in partial linear model for negative associated samples. Chinese J Appl Prob Stat 2002, 18(1):60–66.

  12. Hu SH: Fixed-design semiparametric regression for linear time series. Acta Math Sci Ser B Engl Ed 2006, 26(1):74–82.

  13. Baek J-II, Liang HY: Asymptotics of estimators in semi-parametric model under NA samples. J Stat Plan Infer 2006, 136(10):3362–3382. 10.1016/j.jspi.2005.01.008

  14. Liang HY, Mammitzsch V, Steineback J: On a semiparametric regression model whose errors form a linear process with negatively associated innovations. Statistics 2006, 40(3):207–226. 10.1080/02331880600688163

  15. Härdle W, Liang H, Gao JT: Partially Linear Models, Springer Series in Economics and Statistics. Physica-Verlag, New York; 2000.

  16. Lehmann EL: Some concepts of dependence. Ann Math Stat 1966, 37(5):1137–1153. 10.1214/aoms/1177699260

  17. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann Stat 1983, 11(1):286–295. 10.1214/aos/1176346079

  18. Bozorgnia A, Patterson RF, Taylor RL: Limit Theorems for Dependent Random Variables. In World Congress Nonlinear Analysts'92. Volume I-IV. de Gruyter, Berlin; 1996:1639–1650. (Tampa, FL, 1992)

  19. Asadian N, Fakoor V, Bozorgnia A: Rosenthal's type inequalities for negatively orthant dependent random variables. J Iranian Stat Soc 2006, 5(1–2):69–75.

  20. Wang XJ, Hu SH, Yang WZ, Ling NX: Exponential inequalities and inverse moment for NOD sequence. Stat Probab Lett 2010, 80(5–6):452–461. 10.1016/j.spl.2009.11.023

  21. Wu QY: Complete convergence for negatively dependent sequences of random variables. J Inequal Appl 2010, 2010: 10. (Article ID 507293)

  22. Wu QY: A strong limit theorem for weighted sums of sequences of negatively dependent random variables. J Inequal Appl 2010, 2010: 8. (Article ID 383805)

  23. Wang XJ, Hu SH, Shen AT, Yang WZ: An exponential inequality for a NOD sequence and a strong law of large numbers. Appl Math Lett 2011, 24(2):219–223. 10.1016/j.aml.2010.09.007

  24. Wang XJ, Hu SH, Volodin AI: Strong limit theorems for weighted sums of NOD sequence and exponential inequalities. Bull Korean Math Soc 2011, 48(5):923–938. 10.4134/BKMS.2011.48.5.923

  25. Li XQ, Yang WZ, Hu SH, Wang XJ: The Bahadur representation for sample quantile under NOD sequence. J Nonparametr Stat 2011, 23(1):59–65. 10.1080/10485252.2010.486033

  26. Sung SH: On the exponential inequalities for negatively dependent random variables. J Math Anal Appl 2011, 381(2):538–545. 10.1016/j.jmaa.2011.02.058

  27. Matula P: A note on the almost sure convergence of sums of negatively dependent random variables. Stat Probab Lett 1992, 15(3):209–213. 10.1016/0167-7152(92)90191-7

  28. Wu QY: Probability Limit Theory of Mixing Sequences. Science Press, Beijing; 2006.

  29. Gan SX, Chen PY: Some limit theorems for sequences of pairwise NQD random variables. Acta Math Sci Ser B Engl 2008, 28(2):269–281.

  30. Li R, Yang WG: Strong convergence of pairwise NQD random sequences. J Math Anal Appl 2008, 344(2):741–747. 10.1016/j.jmaa.2008.02.053

  31. Hu SH, Pan GM, Gao QB: Estimate problem of regression models with linear process errors. Appl Math A J Chinese Univ 2003, 18A(1):81–90.

Acknowledgements

The authors are grateful to Associate Editor Prof. Andrei Volodin and two anonymous referees for their careful reading and insightful comments. This work was supported by the National Natural Science Foundation of China (11171001, 11126176), HSSPF of the Ministry of Education of China (10YJA910005), Natural Science Foundation of Anhui Province (1208085QA03) and Provincial Natural Science Research Project of Anhui Colleges (KJ2010A005).

Author information

Corresponding author

Correspondence to Shuhe Hu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Yang, W., Wang, X., Wang, X. et al. The consistency for estimator of nonparametric regression model based on NOD errors. J Inequal Appl 2012, 140 (2012). https://doi.org/10.1186/1029-242X-2012-140
