
Berry-Esseen bounds for wavelet estimator in semiparametric regression model with linear process errors

Abstract

Consider the semiparametric regression model $Y_i = x_i\beta + g(t_i) + \varepsilon_i$, $i = 1,\dots,n$, where the errors form the linear process $\varepsilon_i = \sum_{j=-\infty}^{\infty} a_j e_{i-j}$ with $\sum_{j=-\infty}^{\infty} |a_j| < \infty$, and $\{e_i\}$ are identically distributed, strong mixing innovations with zero mean. Under appropriate conditions, Berry-Esseen type bounds for the wavelet estimators of $\beta$ and $g(\cdot)$ are established. Our results generalize the results of Li et al. for the nonparametric regression model to the semiparametric regression model.

Mathematics Subject Classification: 62G05; 62G08.

1 Introduction

Regression analysis is one of the most mature and widely applied branches of statistics. For a long time, however, its main theory has concerned parametric and nonparametric regressions. Recently, semiparametric regressions have received more and more attention. This is mainly because semiparametric regression reduces the high risk of misspecification relating to a fully parametric model and avoids some serious drawbacks of fully nonparametric methods.

In 1986, Engle et al. [1] first introduced the following semiparametric regression model:

$$Y_i = x_i\beta + g(t_i) + \varepsilon_i,\qquad i = 1,\dots,n,$$
(1.1)

where $\beta$ is an unknown parameter of interest, $\{(x_i, t_i)\}$ are nonrandom design points, $\{Y_i\}$ are the response variables, $g(\cdot)$ is an unknown function defined on the closed interval $[0,1]$, and $\{\varepsilon_i\}$ are random errors.

The model (1.1) has been extensively studied. When the errors $\{\varepsilon_i\}$ are independent and identically distributed (i.i.d.) random variables, Chen and Shiah [2], Donald and Dewey [3], and Hamilton and Truong [4] used various estimation methods to obtain estimators of the unknown quantities in (1.1) and discussed the asymptotic properties of these estimators. When $\{\varepsilon_i\}$ are MA($\infty$) errors of the form $\varepsilon_i = \sum_{j=0}^{\infty} a_j e_{i-j}$, where $\{e_i\}$ are i.i.d. random variables and $\{a_j\}$ satisfy $\sum_{j=0}^{\infty} |a_j| < \infty$ and $\sup_n n \sum_{j=n}^{\infty} |a_j| < \infty$, the law of the iterated logarithm for the semiparametric least squares estimator (SLSE) of $\beta$ and strong convergence rates for the nonparametric estimator of $g(\cdot)$ were obtained by Sun et al. [5]. Berry-Esseen type bounds for estimators of $\beta$ and $g(\cdot)$ in model (1.1) under the linear process errors $\varepsilon_i = \sum_{j=-\infty}^{\infty} a_j e_{i-j}$ with identically distributed, negatively associated random variables $\{e_i\}$ were derived by Liang and Fan [6].

Let us now briefly recall the definition of strong mixing. A sequence $\{e_i, i \in \mathbb{Z}\}$ is said to be strong mixing (or $\alpha$-mixing) if $\alpha(n) \to 0$ as $n \to \infty$, where
$$\alpha(n) = \sup_{m \in \mathbb{Z}} \big\{ |P(A \cap B) - P(A)P(B)| : A \in \mathcal{F}_{-\infty}^{m},\ B \in \mathcal{F}_{m+n}^{\infty} \big\},$$
and $\mathcal{F}_{m}^{n}$ denotes the $\sigma$-field generated by $\{e_i : m \le i \le n\}$.

For the properties of strong mixing sequences, one may consult the book of Lin and Lu [7]. Recently, Yang and Li [8-10] and Xing et al. [11-13] established moment bounds and maximal moment inequalities for partial sums of strong mixing sequences, together with their applications. In this article, we study Berry-Esseen type bounds for wavelet estimators of $\beta$ and $g(\cdot)$ in model (1.1) based on linear process errors $\{\varepsilon_i\}$ satisfying the following basic assumption (A1). Our results generalize the results of [14] to the semiparametric regression model.

(A1) (i) Let $\varepsilon_i = \sum_{j=-\infty}^{\infty} a_j e_{i-j}$, where $\sum_{j=-\infty}^{\infty} |a_j| < \infty$ and $\{e_j, j = 0, \pm 1, \pm 2, \dots\}$ are identically distributed strong mixing random variables with zero mean.

(ii) For some $\delta > 0$, $E|e_0|^{2+\delta} < \infty$, and the mixing coefficients satisfy $\alpha(n) = O(n^{-\lambda})$ for some $\lambda > (2+\delta)/\delta$.
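To make assumption (A1) concrete, the following illustrative Python sketch simulates a linear process $\varepsilon_i = \sum_j a_j e_{i-j}$ with a truncated two-sided summable weight sequence and strong mixing innovations. The function names, the geometric weights, and the choice of a Gaussian AR(1) process for $\{e_i\}$ (which is strong mixing with geometrically decaying $\alpha(n)$) are our own assumptions for the example, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_innovations(n, phi=0.5):
    """Stationary Gaussian AR(1): zero mean, identically distributed,
    strong mixing with geometrically decaying alpha(n)."""
    e = np.empty(n)
    e[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi**2))
    for i in range(1, n):
        e[i] = phi * e[i - 1] + rng.normal()
    return e

def linear_process(n, a):
    """eps_i = sum_{j=-M..M} a_j e_{i-j}: a truncation of the two-sided
    series in (A1)(i); a has length 2M+1 and is indexed j = -M..M."""
    M = (len(a) - 1) // 2
    e = ar1_innovations(n + 2 * M)
    # full convolution trimmed to the n positions where all weights apply
    return np.convolve(e, a, mode="valid")

M = 20
a = 0.7 ** np.abs(np.arange(-M, M + 1))  # absolutely summable weights
eps = linear_process(500, a)
```

Because the weights here are symmetric in $|j|$, the convolution coincides with the defining sum; for asymmetric weights the sequence `a` would have to be reversed.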

Now, we introduce wavelet estimators of $\beta$ and $g(\cdot)$ for model (1.1). Suppose first that $\beta$ is given. Since $Ee_i = 0$, we have $g(t_i) = E(Y_i - x_i\beta)$, $i = 1,\dots,n$. Hence a natural estimator of $g(\cdot)$ is

$$g_n(t,\beta) = \sum_{j=1}^{n} (Y_j - x_j\beta) \int_{A_j} E_m(t,s)\,ds,$$

where $A_j = [s_{j-1}, s_j]$ are intervals that partition $[0,1]$ with $t_j \in A_j$ and $0 \le t_1 \le \cdots \le t_n \le 1$, and the wavelet kernel $E_m(t,s)$ is defined by

$$E_m(t,s) = 2^m E_0(2^m t, 2^m s),\qquad E_0(t,s) = \sum_{j \in \mathbb{Z}} \varphi(t-j)\varphi(s-j),$$

where $m = m(n) > 0$ is an integer depending only on $n$, and $\varphi(\cdot)$ is a father wavelet (scaling function) with compact support. Set

$$\tilde{x}_i = x_i - \sum_{j=1}^{n} x_j \int_{A_j} E_m(t_i,s)\,ds,\qquad \tilde{Y}_i = Y_i - \sum_{j=1}^{n} Y_j \int_{A_j} E_m(t_i,s)\,ds,\qquad S_n^2 = \sum_{i=1}^{n} \tilde{x}_i^2.$$
(1.2)

In order to estimate β, we seek to minimize

$$SS(\beta) = \sum_{i=1}^{n} \big[ Y_i - x_i\beta - g_n(t_i,\beta) \big]^2 = \sum_{i=1}^{n} (\tilde{Y}_i - \tilde{x}_i\beta)^2.$$
(1.3)

The minimizer of (1.3) is

$$\hat{\beta}_n = S_n^{-2} \sum_{i=1}^{n} \tilde{x}_i \tilde{Y}_i.$$
(1.4)

So, a plug-in estimator of the nonparametric component g(·), based on β ^ n , is given by

$$\hat{g}_n(t) \equiv \hat{g}_n(t, \hat{\beta}_n) = \sum_{i=1}^{n} (Y_i - x_i\hat{\beta}_n) \int_{A_i} E_m(t,s)\,ds.$$
(1.5)
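As a concrete numerical instance of (1.2)-(1.5), the sketch below implements the estimators with the Haar father wavelet $\varphi = 1_{[0,1)}$, for which the kernel simplifies to $E_m(t,s) = 2^m\,1\{\lfloor 2^m t\rfloor = \lfloor 2^m s\rfloor\}$, so the weights $\int_{A_j} E_m(t_i,s)\,ds$ are overlap lengths of dyadic cells with the intervals $A_j$. The Haar choice, the toy data, and all names are our own assumptions; Haar does not satisfy the smoothness requirements of (A5) below, so this is only an illustration of the algebra, not of the theorems.

```python
import numpy as np

def haar_weights(t, s_edges, m):
    """w[i, j] = integral over A_j = [s_edges[j], s_edges[j+1]] of
    E_m(t_i, s) ds for the Haar father wavelet, where
    E_m(t, s) = 2^m * 1{floor(2^m t) == floor(2^m s)}."""
    n = len(t)
    w = np.zeros((n, n))
    for i in range(n):
        k = min(int(2**m * t[i]), 2**m - 1)       # dyadic cell index of t_i
        lo, hi = k / 2**m, (k + 1) / 2**m
        for j in range(n):
            overlap = max(0.0, min(hi, s_edges[j + 1]) - max(lo, s_edges[j]))
            w[i, j] = 2**m * overlap
    return w

def semiparametric_fit(x, y, t, m):
    """Wavelet plug-in estimators (1.4)-(1.5) evaluated on the grid t."""
    s_edges = np.concatenate(([0.0], (t[:-1] + t[1:]) / 2, [1.0]))
    w = haar_weights(t, s_edges, m)
    x_t = x - w @ x                                # x tilde of (1.2)
    y_t = y - w @ y                                # Y tilde of (1.2)
    beta_hat = (x_t @ y_t) / (x_t @ x_t)           # (1.4)
    g_hat = w @ (y - x * beta_hat)                 # (1.5) at the grid points
    return beta_hat, g_hat, w

# toy data: beta = 2, g(t) = sin(2*pi*t), x_i = h(t_i) + u_i
rng = np.random.default_rng(1)
n, m = 200, 4
t = (np.arange(1, n + 1) - 0.5) / n
x = np.cos(3 * t) + rng.normal(0.0, 0.2, n)
y = 2.0 * x + np.sin(2 * np.pi * t) + rng.normal(0.0, 0.3, n)
beta_hat, g_hat, w = semiparametric_fit(x, y, t, m)
```

Since the $A_j$ partition $[0,1]$ and each dyadic cell lies in $[0,1]$, every row of the weight matrix sums to one, which is a convenient sanity check on the implementation.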

In the following, the symbols $c, C, C_1, C_2, \dots$ denote positive constants whose values may change from place to place; $b_n = O(a_n)$ means $b_n \le c\,a_n$; $[x]$ denotes the integer part of $x$; $\|e_i\|_r := (E|e_i|^r)^{1/r}$; and $\Phi(u)$ denotes the standard normal distribution function.

The article is organized as follows. In Section 2, we state the assumptions and main results. Sections 3 and 4 are devoted to the proofs of preliminary results, and the proofs of the theorems are given in Section 5. Some known results used in the proofs are collected in the Appendix.

2 Assumptions, notations and results

First, we list the assumptions used in this article.

(A2) There exists a function $h(\cdot)$ defined on $[0,1]$ such that $x_i = h(t_i) + u_i$, where the $u_i$ satisfy:

(i) $\lim_{n\to\infty} n^{-1} \sum_{i=1}^{n} u_i^2 = \Sigma_0$, where $0 < \Sigma_0 < \infty$;

(ii) $\max_{1 \le i \le n} |u_i| = O(1)$;

(iii) for any permutation $(j_1, \dots, j_n)$ of the integers $(1, \dots, n)$,

$$\limsup_{n\to\infty} \frac{1}{\sqrt{n}\,\log n} \max_{1 \le m \le n} \Big| \sum_{i=1}^{m} u_{j_i} \Big| < \infty.$$

(A3) The spectral density $f(\omega)$ of $\{\varepsilon_i\}$ satisfies $0 < c_1 \le f(\omega) \le c_2 < \infty$ for $\omega \in (-\pi, \pi]$.

(A4) $g(\cdot)$ and $h(\cdot)$ satisfy the Lipschitz condition of order 1 on $[0,1]$, and $h(\cdot) \in H^{\nu}$, $\nu > 3/2$, where $H^{\nu}$ is the Sobolev space of order $\nu$.

(A5) The scaling function $\varphi(\cdot)$ is $\gamma$-regular ($\gamma$ a positive integer), has compact support, satisfies the Lipschitz condition of order 1, and $|\hat{\varphi}(\xi) - 1| = O(\xi)$ as $\xi \to 0$, where $\hat{\varphi}$ denotes the Fourier transform of $\varphi$.

(A6) $\max_{1 \le i \le n} |s_i - s_{i-1}| = O(n^{-1})$.

(A7) There exists a positive constant $d_1$ such that $\min_{1 \le i \le n}(t_i - t_{i-1}) \ge d_1 n^{-1}$.

For convenience, we use the following notation. Let $p = p(n)$ and $q = q(n)$ denote positive integers such that $p + q \le 3n$ and $q p^{-1} \le c < \infty$. Set

$$\sigma_{n1}^2 = \mathrm{Var}\Big( \sum_{i=1}^{n} u_i \varepsilon_i \Big),\qquad \sigma_{n2}^2 = \mathrm{Var}\Big( \sum_{i=1}^{n} \varepsilon_i \int_{A_i} E_m(t,s)\,ds \Big),\qquad u(n) = \sum_{j=n}^{\infty} \alpha^{\delta/(2+\delta)}(j);$$
$$\gamma_{1n} = q p^{-1},\qquad \gamma_{2n} = p n^{-1},\qquad \gamma_{3n} = n \Big( \sum_{|j|>n} |a_j| \Big)^2,\qquad \gamma_{4n} = n p^{-1} \alpha(q);$$
$$\lambda_{1n} = q p^{-1} 2^m,\qquad \lambda_{2n} = p n^{-1} 2^m,\qquad \lambda_{3n} = \gamma_{3n},\qquad \lambda_{4n} = \gamma_{4n},\qquad \lambda_{5n} = 2^{-m/2} + \sqrt{2^m/n}\,\log n;$$
$$\mu_n(\rho, p) = \sum_{i=1}^{3} \gamma_{in}^{1/3} + u(q) + \gamma_{2n}^{\rho} + \gamma_{4n}^{1/4};\qquad v_n(m) = 2^{-2m/3} + (2^m/n)^{1/3} \log^{2/3} n + 2^{-m}\log n + n^{1/2} 2^{-2m}.$$

After these assumptions and notations we can formulate the main results as follows:

Theorem 2.1. Suppose that (A1)-(A7) hold. If ρ satisfies

$$0 < \rho \le 1/2,\qquad \rho < \min\left\{ \frac{\delta}{2},\ \frac{\delta\lambda - (2+\delta)}{2\lambda + (2+\delta)} \right\},$$
(2.1)

then

$$\sup_u \left| P\left( \frac{S_n^2(\hat{\beta}_n - \beta)}{\sigma_{n1}} \le u \right) - \Phi(u) \right| \le C_1 \big( \mu_n(\rho, p) + v_n(m) \big).$$

Corollary 2.1. Under the same conditions as in Theorem 2.1, if $\rho = 1/3$, $2^m = O(n^{2/5})$, $\sup_{n \ge 1} n^{7/8} (\log n)^{-9/8} \sum_{|j|>n} |a_j| < \infty$, $\delta > 1/3$, and $\lambda \ge \max\left\{ \frac{2+\delta}{\delta},\ \frac{7\delta+14}{6\delta-2} \right\}$, then

$$\sup_u \left| P\left( \frac{S_n^2(\hat{\beta}_n - \beta)}{\sigma_{n1}} \le u \right) - \Phi(u) \right| \le C_2\, n^{-\frac{\lambda}{6\lambda+7}}.$$
(2.2)
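The quantity bounded in Theorem 2.1 and Corollary 2.1 is a Kolmogorov distance $\sup_u |P(\cdot \le u) - \Phi(u)|$. The toy Monte Carlo below (our own construction, not the paper's estimator) measures this distance for standardized partial sums of a Gaussian AR(1) linear process, the simplest case covered by assumption (A1); all names and the long-run variance normalization are our assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def Phi(u):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def standardized_sum(n, phi=0.5):
    """(eps_1 + ... + eps_n) / sd for a stationary Gaussian AR(1),
    normalized by the long-run approximation sd ~ sqrt(n)/(1 - phi)."""
    eps = np.empty(n)
    eps[0] = rng.normal(scale=1.0 / math.sqrt(1.0 - phi**2))
    for i in range(1, n):
        eps[i] = phi * eps[i - 1] + rng.normal()
    return eps.sum() * (1.0 - phi) / math.sqrt(n)

def ks_to_normal(samples):
    """sup_u |F_hat(u) - Phi(u)| for the empirical distribution function."""
    s = np.sort(np.asarray(samples))
    N = len(s)
    return max(max((i + 1) / N - Phi(v), Phi(v) - i / N)
               for i, v in enumerate(s))

dist = {n: ks_to_normal([standardized_sum(n) for _ in range(2000)])
        for n in (50, 400)}
```

With 2000 replications, the measured distance also contains Monte Carlo noise of order $1/\sqrt{2000}$, so only its rough magnitude, not an exact rate, is visible in such an experiment.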

Theorem 2.2. Suppose that the conditions of Theorem 2.1 are satisfied and that $n^{-1} 2^m \to 0$. Then for each $t \in [0,1]$,

$$\sup_u \left| P\left( \frac{\hat{g}_n(t) - E\hat{g}_n(t)}{\sigma_{n2}} \le u \right) - \Phi(u) \right| \le C_3 \left\{ \sum_{i=1}^{3} \lambda_{in}^{1/3} + u(q) + \lambda_{2n}^{\rho} + \lambda_{4n}^{1/4} + \lambda_{5n}^{(2+\delta)/(3+\delta)} \right\}.$$

Corollary 2.2. Under the conditions of Theorem 2.2 with $\rho = 1/3$ and $\delta > 2/3$, if $n^{-1} 2^m = O(n^{-\theta})$ with $\frac{\lambda+1}{2\lambda+1} < \theta \le 3/4$ and $\lambda > \frac{(2+\delta)(9\theta-2)}{2\theta(3\delta-2)+2}$, then

$$\sup_u \left| P\left( \frac{\hat{g}_n(t) - E\hat{g}_n(t)}{\sigma_{n2}} \le u \right) - \Phi(u) \right| \le C_4\, n^{-\min\left\{ \frac{\lambda(2\theta-1)+\theta-1}{6\lambda+7},\ \frac{4\lambda+4}{22\lambda+11} \right\}}.$$
(2.3)

Remark 2.1. Let $\tilde{h}(t) = h(t) - \sum_{j=1}^{n} h(t_j) \int_{A_j} E_m(t,s)\,ds$. Under assumptions (A4)-(A7), by relation (11) in the proof of Theorem 3.2 of [15], we obtain $\sup_t |\tilde{h}(t)| = O(n^{-1} + 2^{-m})$. Similarly, with $\tilde{g}(t) = g(t) - \sum_{j=1}^{n} g(t_j) \int_{A_j} E_m(t,s)\,ds$, we have $\sup_t |\tilde{g}(t)| = O(n^{-1} + 2^{-m})$.

Remark 2.2. (i) By Corollary 2.1, the Berry-Esseen bound of the wavelet estimator $\hat{\beta}_n$ is close to $O(n^{-1/6})$ for sufficiently large $\lambda$. This is faster than the rate in [16], which gives $O(n^{-\delta/4}\log n)$ for $\delta \le 1/2$ or $O(n^{-1/8})$ for $\delta > 1/2$ for strong mixing sequences, but slower than the rate $O(n^{-1/4}(\log n)^{3/4})$ obtained in [6] for a weighted estimator.

(ii) From Corollary 2.2, the Berry-Esseen bound of the wavelet estimator $\hat{g}_n(\cdot)$ is close to $O(n^{-1/12})$ for sufficiently large $\lambda$ and $\theta = 3/4$.

3 Some preliminary lemmas for $\hat{\beta}_n$

From the definition of $\hat{\beta}_n$ in (1.4), we can write

$$S_{n\beta} := \frac{S_n^2(\hat{\beta}_n - \beta)}{\sigma_{n1}} = \sigma_{n1}^{-1}\left[ \sum_{i=1}^{n} \tilde{x}_i \varepsilon_i - \sum_{i=1}^{n} \tilde{x}_i \sum_{j=1}^{n} \varepsilon_j \int_{A_j} E_m(t_i,s)\,ds + \sum_{i=1}^{n} \tilde{x}_i \tilde{g}_i \right] := S_{n1} + S_{n2} + S_{n3},$$
(3.1)

where

$$S_{n1} = \sigma_{n1}^{-1} \sum_{i=1}^{n} \tilde{x}_i \varepsilon_i = \sigma_{n1}^{-1} \sum_{i=1}^{n} u_i \varepsilon_i + \sigma_{n1}^{-1} \sum_{i=1}^{n} \tilde{h}_i \varepsilon_i - \sigma_{n1}^{-1} \sum_{i=1}^{n} \varepsilon_i \sum_{j=1}^{n} u_j \int_{A_j} E_m(t_i,s)\,ds = S_{n11} + S_{n12} + S_{n13},$$
(3.2)
$$|S_{n2}| = \sigma_{n1}^{-1} \Big| \sum_{i=1}^{n} \tilde{x}_i \sum_{j=1}^{n} \varepsilon_j \int_{A_j} E_m(t_i,s)\,ds \Big| \le \sigma_{n1}^{-1} \Big| \sum_{i=1}^{n} u_i \sum_{j=1}^{n} \varepsilon_j \int_{A_j} E_m(t_i,s)\,ds \Big| + \sigma_{n1}^{-1} \Big| \sum_{i=1}^{n} \tilde{h}_i \sum_{j=1}^{n} \varepsilon_j \int_{A_j} E_m(t_i,s)\,ds \Big| + \sigma_{n1}^{-1} \Big| \sum_{i=1}^{n} \Big( \sum_{l=1}^{n} u_l \int_{A_l} E_m(t_i,s)\,ds \Big) \sum_{j=1}^{n} \varepsilon_j \int_{A_j} E_m(t_i,s)\,ds \Big| =: S_{n21} + S_{n22} + S_{n23},$$
(3.3)
$$S_{n3} = \sigma_{n1}^{-1} \sum_{i=1}^{n} \tilde{x}_i \tilde{g}_i.$$
(3.4)

For $S_{n11}$, we can write

$$S_{n11} = \sigma_{n1}^{-1} \sum_{i=1}^{n} u_i \varepsilon_i = \sigma_{n1}^{-1} \sum_{i=1}^{n} u_i \sum_{j=-n}^{n} a_j e_{i-j} + \sigma_{n1}^{-1} \sum_{i=1}^{n} u_i \sum_{|j|>n} a_j e_{i-j} := S_{n111} + S_{n112}.$$
(3.5)

It is not difficult to see that

$$S_{n111} = \sum_{l=1-n}^{2n} \sigma_{n1}^{-1} \sum_{i=\max\{1,\,l-n\}}^{\min\{n,\,l+n\}} u_i a_{i-l} e_l =: \sum_{l=1-n}^{2n} Z_{nl}.$$

Let $k = [3n/(p+q)]$; then $S_{n111}$ may be split as

$$S_{n111} = S'_{n111} + S''_{n111} + S'''_{n111},$$
(3.6)

where

$$S'_{n111} = \sum_{w=1}^{k} y'_{1nw},\qquad S''_{n111} = \sum_{w=1}^{k} y''_{1nw},\qquad S'''_{n111} = y'''_{1n,k+1},$$
$$y'_{1nw} = \sum_{i=k_w}^{k_w+p-1} Z_{ni},\qquad y''_{1nw} = \sum_{i=l_w}^{l_w+q-1} Z_{ni},\qquad y'''_{1n,k+1} = \sum_{i=k(p+q)-n+1}^{2n} Z_{ni},$$
$$k_w = (w-1)(p+q)+1-n,\qquad l_w = (w-1)(p+q)+p+1-n,\qquad w = 1,\dots,k.$$

From (3.1) to (3.6), we can write

$$S_{n\beta} = S'_{n111} + S''_{n111} + S'''_{n111} + S_{n112} + S_{n12} + S_{n13} + S_{n2} + S_{n3}.$$
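The big-block/small-block partition introduced below (3.6) can be checked mechanically. The following sketch (our own illustrative code) generates the index sets $\{k_w,\dots,k_w+p-1\}$ and $\{l_w,\dots,l_w+q-1\}$ together with the remainder block, so one can verify that they tile $\{1-n,\dots,2n\}$:

```python
def blocks(n, p, q):
    """Index sets of the partition below (3.6): k = [3n/(p+q)] big blocks
    of length p, k small blocks of length q, and one remainder block,
    jointly covering l = 1-n, ..., 2n."""
    k = (3 * n) // (p + q)
    big, small = [], []
    for w in range(1, k + 1):
        kw = (w - 1) * (p + q) + 1 - n          # start of w-th big block
        lw = kw + p                             # start of w-th small block
        big.append(list(range(kw, kw + p)))
        small.append(list(range(lw, lw + q)))
    rest = list(range(k * (p + q) - n + 1, 2 * n + 1))
    return big, small, rest

big, small, rest = blocks(n=30, p=5, q=2)
covered = sorted(i for blk in big + small + [rest] for i in blk)
```

The alternating layout (big block, then small block) is what allows the big-block sums to be compared with independent copies later, while the small blocks and the remainder are shown to be negligible.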

We now establish the following lemmas, together with their proofs.

Lemma 3.1. Suppose that (A1), (A2)(i), and (A3) hold. Then

$$c_1 \pi n \le \sigma_{n1}^2 \le c_2 \pi n,\qquad c_3 n^{-1} 2^m \le \sigma_{n2}^2 \le c_4 n^{-1} 2^m.$$

Proof. According to the proofs of (3.4) and Theorem 2.3 in [17], for any real sequence $\{\gamma_l\}_{l \in \mathbb{N}}$ we have

$$2 c_1 \pi \sum_{l=1}^{n} \gamma_l^2 \le E\Big( \sum_{l=1}^{n} \gamma_l \varepsilon_l \Big)^2 \le 2 c_2 \pi \sum_{l=1}^{n} \gamma_l^2,$$

which implies the desired results by Lemma A.4 and assumption (A2)(i).   ♣

Lemma 3.2. Let assumptions (A1)-(A3), (A5), and (A6) be satisfied. Then

$$E(S''_{n111})^2 \le C\gamma_{1n},\qquad E(S'''_{n111})^2 \le C\gamma_{2n},\qquad E(S_{n112})^2 \le C\gamma_{3n};$$
(3.7)
$$P\big(|S''_{n111}| \ge \gamma_{1n}^{1/3}\big) \le C\gamma_{1n}^{1/3},\qquad P\big(|S'''_{n111}| \ge \gamma_{2n}^{1/3}\big) \le C\gamma_{2n}^{1/3},\qquad P\big(|S_{n112}| \ge \gamma_{3n}^{1/3}\big) \le C\gamma_{3n}^{1/3}.$$
(3.8)

Proof. By Lemmas 3.1 and A.1(i) and assumptions (A1) and (A2), we have

$$E(S''_{n111})^2 \le C \sum_{w=1}^{k} \sum_{i=l_w}^{l_w+q-1} \sigma_{n1}^{-2} \Big( \sum_{j=\max\{1,i-n\}}^{\min\{n,i+n\}} u_j a_{j-i} \Big)^2 \|e_i\|_{2+\delta}^2 \le C \sum_{w=1}^{k} \sum_{i=l_w}^{l_w+q-1} n^{-1} \max_{1\le i\le n}|u_i|^2 \Big( \sum_{j=\max\{1,i-n\}}^{\min\{n,i+n\}} |a_{j-i}| \Big)^2 \le C k q n^{-1} \Big( \sum_{j=-\infty}^{\infty} |a_j| \Big)^2 \le C q p^{-1} = C\gamma_{1n},$$
(3.9)
$$E(S'''_{n111})^2 \le C \sum_{i=k(p+q)-n+1}^{2n} \sigma_{n1}^{-2} \Big( \sum_{j=\max\{1,i-n\}}^{\min\{n,i+n\}} u_j a_{j-i} \Big)^2 \|e_i\|_{2+\delta}^2 \le C \big( 3n - k(p+q) \big) n^{-1} \Big( \sum_{j=-\infty}^{\infty} |a_j| \Big)^2 \le C p n^{-1} = C\gamma_{2n},$$
(3.10)

and, by the Cauchy-Schwarz inequality,

$$E(S_{n112})^2 \le C \sigma_{n1}^{-2} \sum_{i=1}^{n} u_i^2 \sum_{i=1}^{n} E\Big( \sum_{|j|>n} a_j e_{i-j} \Big)^2 \le C \sigma_{n1}^{-2}\, n \sum_{i=1}^{n} \Big( \sum_{|j|>n} |a_j| \Big)^2 \|e_0\|_{2+\delta}^2 \le C n \Big( \sum_{|j|>n} |a_j| \Big)^2 = C\gamma_{3n}.$$
(3.11)

Then (3.9)-(3.11) complete the proof of (3.7), and (3.8) follows from (3.7) by the Markov inequality.   ♣

Lemma 3.3. Let assumptions (A1)-(A7) be satisfied. Then

(a) $P\big(|S_{n12}| \ge (n^{-1}+2^{-m})^{2/3}\big) \le C (n^{-1}+2^{-m})^{2/3}$;
(b) $P\big(|S_{n13}| \ge (2^m n^{-1} \log^2 n)^{1/3}\big) \le C (2^m n^{-1}\log^2 n)^{1/3}$;
(c) $P\big(|S_{n21}| \ge (2^m n^{-1} \log^2 n)^{1/3}\big) \le C (2^m n^{-1}\log^2 n)^{1/3}$;
(d) $P\big(|S_{n22}| \ge (n^{-1}+2^{-m})^{2/3}\big) \le C (n^{-1}+2^{-m})^{2/3}$;
(e) $P\big(|S_{n23}| \ge (2^m n^{-1} \log^2 n)^{1/3}\big) \le C (2^m n^{-1}\log^2 n)^{1/3}$;
(f) $|S_{n3}| \le C \big( 2^{-m}\log n + n^{1/2} 2^{-2m} \big)$.
(3.12)

Proof. (a) By assumption (A2), Remark 2.1, and Lemma 3.1, we get

$$E(S_{n12})^2 \le c_2 \pi \sigma_{n1}^{-2} \sum_{i=1}^{n} \tilde{h}_i^2 \le C (n^{-1}+2^{-m})^2.$$

By this and the Markov inequality, we have

$$P\big(|S_{n12}| \ge (n^{-1}+2^{-m})^{2/3}\big) \le C (n^{-1}+2^{-m})^{2/3}.$$
(3.13)
(b) Applying Lemmas 3.1, A.4, and A.5, we get

$$E(S_{n13})^2 \le c_2 \sigma_{n1}^{-2} \sum_{i=1}^{n} \Big( \sum_{j=1}^{n} u_j \int_{A_j} E_m(t_i,s)\,ds \Big)^2 \le c_2 \sigma_{n1}^{-2} \max_{1\le i,j\le n} \Big| \int_{A_j} E_m(t_i,s)\,ds \Big| \max_{1\le j\le n} \sum_{i=1}^{n} \Big| \int_{A_j} E_m(t_i,s)\,ds \Big| \Big( \max_{1\le m\le n} \Big| \sum_{i=1}^{m} u_{j_i} \Big| \Big)^2 \le C 2^m n^{-1} \log^2 n.$$

Therefore

$$P\big(|S_{n13}| \ge (2^m n^{-1}\log^2 n)^{1/3}\big) \le C (2^m n^{-1}\log^2 n)^{1/3}.$$
(3.14)
(c) Changing the order of summation in $S_{n21}$ and arguing as for $E(S_{n13})^2$,

$$E(S_{n21})^2 \le c_2 \sigma_{n1}^{-2} \sum_{j=1}^{n} \Big( \sum_{i=1}^{n} u_i \int_{A_j} E_m(t_i,s)\,ds \Big)^2 \le c_2 \sigma_{n1}^{-2} \max_{1\le i,j\le n} \Big| \int_{A_j} E_m(t_i,s)\,ds \Big| \max_{1\le i\le n} \sum_{j=1}^{n} \Big| \int_{A_j} E_m(t_i,s)\,ds \Big| \Big( \max_{1\le m\le n} \Big| \sum_{i=1}^{m} u_{j_i} \Big| \Big)^2 \le C 2^m n^{-1} \log^2 n.$$

Therefore, we obtain

$$P\big(|S_{n21}| \ge (2^m n^{-1}\log^2 n)^{1/3}\big) \le C (2^m n^{-1}\log^2 n)^{1/3}.$$
(3.15)
(d) Similarly, by Lemmas 3.1, A.4, A.5, and Remark 2.1, we get

$$E(S_{n22})^2 \le c_2 \sigma_{n1}^{-2} \sum_{i=1}^{n} \Big( \sum_{j=1}^{n} \tilde{h}_j \int_{A_i} E_m(t_j,s)\,ds \Big)^2 \le c_2 \sigma_{n1}^{-2} \sup_t |\tilde{h}(t)|^2 \sum_{i=1}^{n} \Big( \sum_{j=1}^{n} \Big| \int_{A_i} E_m(t_j,s)\,ds \Big| \Big)^2 \le c_2 \sigma_{n1}^{-2} \sup_t |\tilde{h}(t)|^2\, n \le C (n^{-1}+2^{-m})^2.$$

Thus, we have

$$P\big(|S_{n22}| \ge (n^{-1}+2^{-m})^{2/3}\big) \le C (n^{-1}+2^{-m})^{2/3}.$$
(3.16)
(e) We write

$$S_{n23} = \sigma_{n1}^{-1} \sum_{j=1}^{n} \Big[ \sum_{i=1}^{n} \int_{A_j} E_m(t_i,s)\,ds \Big( \sum_{l=1}^{n} u_l \int_{A_l} E_m(t_i,s)\,ds \Big) \Big] \varepsilon_j.$$

Arguing as for $E(S_{n13})^2$, by Lemmas 3.1, A.4, and A.5 we obtain

$$E(S_{n23})^2 \le c_2 \pi \sigma_{n1}^{-2} \sum_{j=1}^{n} \Big[ \sum_{i=1}^{n} \int_{A_j} E_m(t_i,s)\,ds \Big( \sum_{l=1}^{n} u_l \int_{A_l} E_m(t_i,s)\,ds \Big) \Big]^2 \le c_2 \pi \sigma_{n1}^{-2} \max_{1\le i,j\le n} \Big| \int_{A_i} E_m(t_j,s)\,ds \Big| \max_{1\le j\le n} \sum_{i=1}^{n} \Big| \int_{A_i} E_m(t_j,s)\,ds \Big| \max_{1\le l\le n} \sum_{j=1}^{n} \Big| \int_{A_l} E_m(t_j,s)\,ds \Big| \Big( \max_{1\le m\le n} \Big| \sum_{i=1}^{m} u_{j_i} \Big| \Big)^2 \le C 2^m n^{-1} \log^2 n.$$

Hence, we have

$$P\big(|S_{n23}| \ge (2^m n^{-1}\log^2 n)^{1/3}\big) \le C (2^m n^{-1}\log^2 n)^{1/3}.$$
(3.17)
(f) By assumption (A2), Remark 2.1, Lemma A.5, and the Abel inequality, we have

$$\sigma_{n1} |S_{n3}| \le \Big| \sum_{i=1}^{n} u_i \tilde{g}_i \Big| + \Big| \sum_{i=1}^{n} \tilde{h}_i \tilde{g}_i \Big| + \Big| \sum_{i=1}^{n} \Big( \sum_{j=1}^{n} u_j \int_{A_j} E_m(t_i,s)\,ds \Big) \tilde{g}_i \Big| \le c \max_{1\le i\le n} |\tilde{g}_i| \max_{1\le k\le n} \Big| \sum_{i=1}^{k} u_{j_i} \Big| + n \max_{1\le i\le n} |\tilde{h}_i| \max_{1\le i\le n} |\tilde{g}_i| + \max_{1\le i\le n} |\tilde{g}_i| \max_{1\le j\le n} \sum_{i=1}^{n} \Big| \int_{A_j} E_m(t_i,s)\,ds \Big| \max_{1\le k\le n} \Big| \sum_{i=1}^{k} u_{j_i} \Big| \le C_1 (n^{-1}+2^{-m}) \sqrt{n} \log n + C_2 n (n^{-1}+2^{-m})^2.$$

Thus, by Lemma 3.1 we obtain

$$|S_{n3}| \le C_1 (n^{-1}+2^{-m}) \log n + C_2 \sqrt{n}\,(n^{-1}+2^{-m})^2 \le C \big( 2^{-m}\log n + n^{1/2} 2^{-2m} \big).$$
(3.18)

Therefore, the desired result (3.12) follows from (3.13)-(3.18) immediately.   ♣

Lemma 3.4. Suppose that (A1)-(A3), (A5), and (A6) hold. Set $s_n^2 = \sum_{w=1}^{k} \mathrm{Var}(y'_{1nw})$. Then

$$\big| s_n^2 - 1 \big| \le C \big( \gamma_{1n}^{1/2} + \gamma_{2n}^{1/2} + \gamma_{3n}^{1/2} + u(q) \big).$$

Proof. Let $\Gamma_n = \sum_{1\le i<j\le k} \mathrm{Cov}(y'_{1ni}, y'_{1nj})$; then $s_n^2 = E(S'_{n111})^2 - 2\Gamma_n$. By (3.5) and (3.6), it is easy to verify that $E(S_{n11})^2 = 1$ and

$$E(S'_{n111})^2 = 1 + E\big( S''_{n111} + S'''_{n111} + S_{n112} \big)^2 - 2 E\big[ S_{n11} \big( S''_{n111} + S'''_{n111} + S_{n112} \big) \big].$$

According to Lemma 3.2, the $C_r$-inequality, and the Cauchy-Schwarz inequality,

$$E\big( S''_{n111} + S'''_{n111} + S_{n112} \big)^2 \le C \big( \gamma_{1n} + \gamma_{2n} + \gamma_{3n} \big)$$

and

$$\big| E\big[ S_{n11} \big( S''_{n111} + S'''_{n111} + S_{n112} \big) \big] \big| \le C \big( \gamma_{1n}^{1/2} + \gamma_{2n}^{1/2} + \gamma_{3n}^{1/2} \big).$$

Thus, we obtain

$$\big| E(S'_{n111})^2 - 1 \big| \le C \big( \gamma_{1n}^{1/2} + \gamma_{2n}^{1/2} + \gamma_{3n}^{1/2} \big).$$
(3.19)

On the other hand, from Lemma 1.2.4 in [7], Lemma 3.1, and Lemma A.4(iv), we can estimate

$$|\Gamma_n| \le \sum_{1\le i<j\le k} \sum_{s_1=k_i}^{k_i+p-1} \sum_{t_1=k_j}^{k_j+p-1} |\mathrm{Cov}(Z_{ns_1}, Z_{nt_1})| \le \frac{C}{n} \sum_{1\le i<j\le k} \sum_{s_1=k_i}^{k_i+p-1} \sum_{t_1=k_j}^{k_j+p-1} \sum_{u=\max\{1,s_1-n\}}^{\min\{n,s_1+n\}} \sum_{v=\max\{1,t_1-n\}}^{\min\{n,t_1+n\}} |u_u u_v|\,|a_{u-s_1} a_{v-t_1}|\,|\mathrm{Cov}(e_{s_1}, e_{t_1})| \le \frac{C}{n} \sum_{1\le i<j\le k} \sum_{s_1=k_i}^{k_i+p-1} \sum_{t_1=k_j}^{k_j+p-1} \sum_{u=\max\{1,s_1-n\}}^{\min\{n,s_1+n\}} \sum_{v=\max\{1,t_1-n\}}^{\min\{n,t_1+n\}} |a_{u-s_1} a_{v-t_1}|\, \alpha^{\delta/(2+\delta)}(t_1-s_1)\, \|e_{s_1}\|_{2+\delta} \|e_{t_1}\|_{2+\delta} \le \frac{C}{n} \sum_{i=1}^{k-1} \sum_{s_1=k_i}^{k_i+p-1} \sum_{t_1:\, t_1-s_1 \ge q} \alpha^{\delta/(2+\delta)}(t_1-s_1) \le C k p n^{-1} u(q) \le C u(q).$$
(3.20)

Therefore, by (3.19) and (3.20), it follows that

$$\big| s_n^2 - 1 \big| \le \big| E(S'_{n111})^2 - 1 \big| + 2|\Gamma_n| \le C \big( \gamma_{1n}^{1/2} + \gamma_{2n}^{1/2} + \gamma_{3n}^{1/2} + u(q) \big).$$

   ♣

Assume that $\{\eta_{1nw} : w = 1,\dots,k\}$ are independent random variables such that $\eta_{1nw}$ has the same distribution as $y'_{1nw}$, $w = 1,\dots,k$. Set $T_n = \sum_{w=1}^{k} \eta_{1nw}$ and $B_{n1}^2 = \sum_{w=1}^{k} \mathrm{Var}(\eta_{1nw})$. Clearly $B_{n1}^2 = s_n^2$. Then, we have the following lemmas:

Lemma 3.5. Let assumptions (A1)-(A3), (A5), (A6), and (2.1) hold. Then

$$\sup_u \big| P(T_n / B_{n1} \le u) - \Phi(u) \big| \le C \gamma_{2n}^{\rho}.$$

Proof. By the Berry-Esseen inequality (see [18, Theorem 5.7]), we have

$$\sup_u \big| P(T_n / B_{n1} \le u) - \Phi(u) \big| \le C \frac{\sum_{w=1}^{k} E|y'_{1nw}|^r}{B_{n1}^r},\qquad 2 < r \le 3.$$
(3.21)

From (2.1), we have $0 < 2\rho \le 1$, $0 < 2\rho < \delta$, and $(2+\delta)/\delta < (1+\rho)(2+\delta)/(\delta-2\rho) < \lambda$. Let $r = 2(1+\rho)$ and $\tau = \delta - 2\rho$; then $r + \tau = 2+\delta$ and $\frac{r(r+\tau)}{2\tau} = \frac{(1+\rho)(2+\delta)}{\delta-2\rho} < \lambda$. According to Lemmas 3.1 and A.1(ii) and the $C_r$-inequality, taking $\varepsilon = \rho$, we get

$$\sum_{w=1}^{k} E|y'_{1nw}|^r \le C \sum_{w=1}^{k} p^{\rho} \left[ \sum_{j=k_w}^{k_w+p-1} \Big| \sigma_{n1}^{-1} \sum_{i=\max\{1,j-n\}}^{\min\{n,j+n\}} u_i a_{i-j} \Big|^r E|e_j|^r + \Big( \sum_{j=k_w}^{k_w+p-1} \Big( \sigma_{n1}^{-1} \sum_{i=\max\{1,j-n\}}^{\min\{n,j+n\}} u_i a_{i-j} \Big)^2 \|e_j\|_{2+\delta}^2 \Big)^{r/2} \right] \le C \sigma_{n1}^{-r} k p^{1+\rho} \le C \gamma_{2n}^{\rho}.$$
(3.22)

Therefore, the result follows from Lemma 3.4 and relations (3.21) and (3.22).   ♣

Lemma 3.6. Suppose that the conditions of Lemma 3.5 are satisfied. Then

$$\sup_u \big| P(S'_{n111} \le u) - P(T_n \le u) \big| \le C \big( \gamma_{2n}^{\rho} + \gamma_{4n}^{1/4} \big).$$

Proof. Let $\phi_1(t)$ and $\psi_1(t)$ be the characteristic functions of $S'_{n111}$ and $T_n$, respectively.

Since

$$\psi_1(t) = E \exp(itT_n) = \prod_{w=1}^{k} E \exp(it\eta_{1nw}) = \prod_{w=1}^{k} E \exp(it y'_{1nw}),$$

it follows from Lemmas A.1(i), A.2, and 3.1 that

$$|\phi_1(t) - \psi_1(t)| \le C |t| \alpha^{1/2}(q) \sum_{w=1}^{k} \|y'_{1nw}\|_2 \le C |t| \alpha^{1/2}(q) \sum_{w=1}^{k} \Big[ E\Big( \sum_{i=k_w}^{k_w+p-1} \sigma_{n1}^{-1} \sum_{j=\max\{1,i-n\}}^{\min\{n,i+n\}} u_j a_{j-i}\, e_i \Big)^2 \Big]^{1/2} \le C |t| \alpha^{1/2}(q) \sqrt{k} \Big( \sum_{w=1}^{k} \sum_{i=k_w}^{k_w+p-1} \sigma_{n1}^{-2} \Big)^{1/2} \le C |t| \alpha^{1/2}(q) \sqrt{k}\, (kp)^{1/2} n^{-1/2} \le C |t| \big( k \alpha(q) \big)^{1/2} = C |t| \gamma_{4n}^{1/2}.$$

Therefore

$$\int_{-T}^{T} \left| \frac{\phi_1(t) - \psi_1(t)}{t} \right| dt \le C \gamma_{4n}^{1/2}\, T.$$
(3.23)

As in the calculation of (4.7) in [14], using Lemma 3.5, we have

$$T \sup_u \int_{|y| \le c/T} \big| P(T_n \le u+y) - P(T_n \le u) \big| \,dy \le C \big( \gamma_{2n}^{\rho} + 1/T \big).$$
(3.24)

Therefore, combining (3.23) and (3.24), choosing $T = \gamma_{4n}^{-1/4}$, and using the Esseen inequality (see [18, Theorem 5.3]), we conclude that

$$\sup_u \big| P(S'_{n111} \le u) - P(T_n \le u) \big| \le \int_{-T}^{T} \left| \frac{\phi_1(t) - \psi_1(t)}{t} \right| dt + T \sup_u \int_{|y| \le c/T} \big| P(T_n \le u+y) - P(T_n \le u) \big| \,dy \le C \big( \gamma_{2n}^{\rho} + \gamma_{4n}^{1/4} \big).$$

   ♣

4 Some preliminary lemmas for $\hat{g}_n(t)$

From the definition of $\hat{g}_n(t)$ in (1.5), we can decompose $\sigma_{n2}^{-1}(\hat{g}_n(t) - E\hat{g}_n(t))$ into three parts:

$$S_{ng} := \sigma_{n2}^{-1}\big( \hat{g}_n(t) - E\hat{g}_n(t) \big) = \sigma_{n2}^{-1} \sum_{i=1}^{n} \varepsilon_i \int_{A_i} E_m(t,s)\,ds + \sigma_{n2}^{-1} \sum_{i=1}^{n} x_i (\beta - \hat{\beta}_n) \int_{A_i} E_m(t,s)\,ds - \sigma_{n2}^{-1} \sum_{i=1}^{n} x_i (\beta - E\hat{\beta}_n) \int_{A_i} E_m(t,s)\,ds =: H_{1n} + H_{2n} + H_{3n}.$$

Let us decompose the vector H1ninto two parts:

H 1 n = σ n 2 - 1 i = 1 n A i E m ( t , s ) d s j = - n n a j e i - j + σ n 2 - 1 i = 1 n A i E m ( t , s ) d s | j | > n a j e i - j = : H 11 n + H 12 n ,

Where

H 11 n = σ n 2 - 1 l = 1 - n 2 n i = max 1 , l - n min n , n + l a i - l A i E m t , s d s e l = l = 1 - n 2 n M n l .

Similarly to $S_{n111}$ in (3.6), $H_{11n}$ can be split as $H_{11n} = H'_{11n} + H''_{11n} + H'''_{11n}$, where

$$H'_{11n} = \sum_{w=1}^{k} y'_{2nw},\qquad H''_{11n} = \sum_{w=1}^{k} y''_{2nw},\qquad H'''_{11n} = y'''_{2n,k+1},$$
$$y'_{2nw} = \sum_{i=k_w}^{k_w+p-1} M_{ni},\qquad y''_{2nw} = \sum_{i=l_w}^{l_w+q-1} M_{ni},\qquad y'''_{2n,k+1} = \sum_{i=k(p+q)-n+1}^{2n} M_{ni}.$$
(4.1)

Then

$$S_{ng} = H'_{11n} + H''_{11n} + H'''_{11n} + H_{12n} + H_{2n} + H_{3n}.$$
(4.2)

Assume that $\{\eta_{2nw} : w = 1,\dots,k\}$ are independent random variables with $\eta_{2nw}$ distributed as $y'_{2nw}$, and set $T_{n2} = \sum_{w=1}^{k} \eta_{2nw}$ and $B_{n2}^2 = \sum_{w=1}^{k} \mathrm{Var}(\eta_{2nw})$. Analogously to Lemmas 3.2-3.6, we have the following lemmas, stated without proofs except for Lemma 4.2.

Lemma 4.1. Suppose that the conditions of Theorem 2.2 are satisfied. Then

$$E(H''_{11n})^2 \le C\lambda_{1n},\qquad E(H'''_{11n})^2 \le C\lambda_{2n},\qquad E(H_{12n})^2 \le C\lambda_{3n};$$
$$P\big(|H''_{11n}| \ge \lambda_{1n}^{1/3}\big) \le C\lambda_{1n}^{1/3},\qquad P\big(|H'''_{11n}| \ge \lambda_{2n}^{1/3}\big) \le C\lambda_{2n}^{1/3},\qquad P\big(|H_{12n}| \ge \lambda_{3n}^{1/3}\big) \le C\lambda_{3n}^{1/3}.$$

Lemma 4.2. Let assumptions (A1)-(A7) be satisfied. Then

$$E|H_{2n}|^{2+\delta} \le c\lambda_{5n}^{2+\delta},\qquad P\big( |H_{2n}| > \lambda_{5n}^{(2+\delta)/(3+\delta)} \big) \le \lambda_{5n}^{(2+\delta)/(3+\delta)},\qquad |H_{3n}| \le c\lambda_{5n}.$$

Lemma 4.3. Under the conditions of Theorem 2.2, set $s_{n2}^2 = \sum_{w=1}^{k} \mathrm{Var}(y'_{2nw})$. Then

$$\big| s_{n2}^2 - 1 \big| \le C \big( \lambda_{1n}^{1/2} + \lambda_{2n}^{1/2} + \lambda_{3n}^{1/2} + u(q) \big).$$

Lemma 4.4. Suppose that the conditions of Theorem 2.2 are satisfied. Then

$$\sup_u \big| P(T_{n2}/B_{n2} \le u) - \Phi(u) \big| \le c\lambda_{2n}^{\rho}.$$

Lemma 4.5. Suppose that the conditions of Theorem 2.2 are satisfied. Then

$$\sup_u \big| P(H'_{11n} \le u) - P(T_{n2} \le u) \big| \le C \big( \lambda_{2n}^{\rho} + \gamma_{4n}^{1/4} \big).$$

Proof of Lemma 4.2. Similarly to the proof of (A.8) in [6], we first verify that

$$\lim_{n\to\infty} \frac{S_n^2}{n} = \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} \tilde{x}_i^2 = \Sigma_0,\qquad \text{where } 0 < \Sigma_0 < \infty.$$
(4.3)

From (1.2), we write

$$\frac{1}{n}\sum_{i=1}^{n} \tilde{x}_i^2 = \frac{1}{n}\sum_{i=1}^{n} u_i^2 + \frac{1}{n}\sum_{i=1}^{n} \tilde{h}_i^2 + \frac{1}{n}\sum_{i=1}^{n} \Big( \sum_{j=1}^{n} u_j \int_{A_j} E_m(t_i,s)\,ds \Big)^2 + \frac{2}{n}\sum_{i=1}^{n} u_i \tilde{h}_i - \frac{2}{n}\sum_{i=1}^{n} u_i \sum_{j=1}^{n} u_j \int_{A_j} E_m(t_i,s)\,ds - \frac{2}{n}\sum_{i=1}^{n} \tilde{h}_i \sum_{j=1}^{n} u_j \int_{A_j} E_m(t_i,s)\,ds = L_{1n} + L_{2n} + L_{3n} + 2L_{4n} - 2L_{5n} - 2L_{6n}.$$
(4.4)

By assumption (A2)(i), $L_{1n} \to \Sigma_0$, and by Remark 2.1,

$$L_{2n} \le \max_{1\le i\le n} \tilde{h}_i^2 = O\big( (n^{-1}+2^{-m})^2 \big) \to 0,$$
(4.5)

and by assumption (A2)(iii) and Lemmas A.4 and A.5, we get

$$L_{3n} \le \frac{c}{n} \max_{1\le i,j\le n}\Big|\int_{A_j} E_m(t_i,s)\,ds\Big| \max_{1\le j\le n}\sum_{i=1}^{n}\Big|\int_{A_j} E_m(t_i,s)\,ds\Big| \Big( \max_{1\le l\le n}\Big|\sum_{i=1}^{l} u_{j_i}\Big| \Big)^2 = O\Big( \frac{2^m \log^2 n}{n} \Big) \to 0,$$
$$L_{4n} \le \frac{c}{n} \max_{1\le i\le n}|\tilde{h}_i| \max_{1\le l\le n}\Big|\sum_{i=1}^{l} u_{j_i}\Big| = O\Big( \frac{\log n}{2^m \sqrt{n}} \Big) \to 0,$$
$$L_{5n} \le \frac{c}{n} \max_{1\le i,j\le n}\Big|\int_{A_j} E_m(t_i,s)\,ds\Big| \Big( \max_{1\le l\le n}\Big|\sum_{i=1}^{l} u_{j_i}\Big| \Big)^2 = O\Big( \frac{2^m \log^2 n}{n} \Big) \to 0,$$
$$L_{6n} \le \frac{c}{n} \max_{1\le i\le n}|\tilde{h}_i| \max_{1\le j\le n}\sum_{i=1}^{n}\Big|\int_{A_j} E_m(t_i,s)\,ds\Big| \max_{1\le l\le n}\Big|\sum_{i=1}^{l} u_{j_i}\Big| = O\Big( \frac{\log n}{2^m \sqrt{n}} \Big) \to 0.$$
(4.6)

Therefore, from (4.4) to (4.6), we complete the proof of (4.3).

Recalling that if $\xi_n \Rightarrow \xi \sim N(0,1)$ then $E|\xi_n| \to E|\xi| = \sqrt{2/\pi}$ and $E|\xi_n|^{2+\delta} \to E|\xi|^{2+\delta} < \infty$, by Theorem 2.1, Lemma 3.1, and relation (4.3) we deduce that

$$|\beta - E\hat{\beta}_n| \le E|\beta - \hat{\beta}_n| \le O\Big( \frac{\sigma_{n1}}{S_n^2} \Big) = O(n^{-1/2}),$$

and

$$E|\hat{\beta}_n - \beta|^{2+\delta} \le O\Big( \Big( \frac{\sigma_{n1}}{S_n^2} \Big)^{2+\delta} \Big) = O\big( n^{-(2+\delta)/2} \big).$$

Therefore, applying the Abel inequality, by (A2)(iii) and (A4) we get

$$|H_{3n}| = \sigma_{n2}^{-1} |\beta - E\hat{\beta}_n| \cdot \Big| \sum_{i=1}^{n} x_i \int_{A_i} E_m(t,s)\,ds \Big| \le C \sigma_{n2}^{-1} n^{-1/2} \Big( \sup_{0\le t\le 1}|h(t)| + \max_{1\le i\le n}\Big|\int_{A_i} E_m(t,s)\,ds\Big| \max_{1\le l\le n}\Big|\sum_{i=1}^{l} u_{j_i}\Big| \Big) \le c \big( 2^{-m/2} + \sqrt{2^m/n}\,\log n \big) = c\lambda_{5n},$$
(4.7)

and

$$E|H_{2n}|^{2+\delta} = \sigma_{n2}^{-(2+\delta)} E|\beta - \hat{\beta}_n|^{2+\delta} \Big| \sum_{i=1}^{n} x_i \int_{A_i} E_m(t,s)\,ds \Big|^{2+\delta} \le c \sigma_{n2}^{-(2+\delta)} n^{-(2+\delta)/2} \Big( \sup_{0\le t\le 1}|h(t)| + \max_{1\le i\le n}\Big|\int_{A_i} E_m(t,s)\,ds\Big| \max_{1\le l\le n}\Big|\sum_{i=1}^{l} u_{j_i}\Big| \Big)^{2+\delta} \le c \lambda_{5n}^{2+\delta}.$$
(4.8)

By (4.8) and the Markov inequality, it follows that

$$P\big( |H_{2n}| > \lambda_{5n}^{(2+\delta)/(3+\delta)} \big) \le \frac{E|H_{2n}|^{2+\delta}}{\lambda_{5n}^{(2+\delta)^2/(3+\delta)}} \le \lambda_{5n}^{(2+\delta)/(3+\delta)}.$$
(4.9)

Therefore, the proof is completed by (4.7)-(4.9).   ♣

5 Proofs of main results

Proof of Theorem 2.1. Note that

$$\sup_u \big| P(S'_{n111} \le u) - \Phi(u) \big| \le \sup_u \big| P(S'_{n111} \le u) - P(T_n \le u) \big| + \sup_u \big| P(T_n \le u) - \Phi(u/s_n) \big| + \sup_u \big| \Phi(u/s_n) - \Phi(u) \big| =: J_{1n} + J_{2n} + J_{3n}.$$
(5.1)

By Lemmas 3.6, 3.5, and 3.4, we obtain, respectively,

$$J_{1n} \le C \big( \gamma_{2n}^{\rho} + \gamma_{4n}^{1/4} \big),$$
(5.2)
$$J_{2n} = \sup_u \big| P(T_n/s_n \le u/s_n) - \Phi(u/s_n) \big| = \sup_u \big| P(T_n/s_n \le u) - \Phi(u) \big| \le C \gamma_{2n}^{\rho},$$
(5.3)

and

$$J_{3n} \le C \big| s_n^2 - 1 \big| \le C \big( \gamma_{1n}^{1/2} + \gamma_{2n}^{1/2} + \gamma_{3n}^{1/2} + u(q) \big).$$
(5.4)

Hence, from (5.1) to (5.4), we get

$$\sup_u \big| P(S'_{n111} \le u) - \Phi(u) \big| \le C \Big( \sum_{i=1}^{3} \gamma_{in}^{1/2} + u(q) + \gamma_{2n}^{\rho} + \gamma_{4n}^{1/4} \Big).$$
(5.5)

Therefore, according to Lemma A.3 and relations (3.8), (3.12), and (5.5), we obtain

$$\sup_u \big| P(S_{n\beta} \le u) - \Phi(u) \big| = \sup_u \big| P\big( S'_{n111} + S''_{n111} + S'''_{n111} + S_{n112} + S_{n12} + S_{n13} + S_{n2} + S_{n3} \le u \big) - \Phi(u) \big| \le C \Big\{ \sup_u \big| P(S'_{n111} \le u) - \Phi(u) \big| + \sum_{i=1}^{3} \gamma_{in}^{1/3} + P\big(|S''_{n111}| \ge \gamma_{1n}^{1/3}\big) + P\big(|S'''_{n111}| \ge \gamma_{2n}^{1/3}\big) + P\big(|S_{n112}| \ge \gamma_{3n}^{1/3}\big) + (n^{-1}+2^{-m})^{2/3} + (2^m n^{-1}\log^2 n)^{1/3} + 2^{-m}\log n + n^{1/2} 2^{-2m} + P\big(|S_{n12}| \ge (n^{-1}+2^{-m})^{2/3}\big) + P\big(|S_{n13}| \ge (2^m n^{-1}\log^2 n)^{1/3}\big) + P\big(|S_{n21}| \ge (2^m n^{-1}\log^2 n)^{1/3}\big) + P\big(|S_{n22}| \ge (n^{-1}+2^{-m})^{2/3}\big) + P\big(|S_{n23}| \ge (2^m n^{-1}\log^2 n)^{1/3}\big) \Big\} \le c \Big( \gamma_{1n}^{1/2} + \gamma_{2n}^{1/2} + \gamma_{3n}^{1/2} + u(q) + \gamma_{2n}^{\rho} + \gamma_{4n}^{1/4} + \sum_{i=1}^{3} \gamma_{in}^{1/3} \Big) + c \Big( 2^{-2m/3} + (2^m n^{-1}\log^2 n)^{1/3} + 2^{-m}\log n + n^{1/2} 2^{-2m} \Big) \le c \Big( \sum_{i=1}^{3} \gamma_{in}^{1/3} + u(q) + \gamma_{2n}^{\rho} + \gamma_{4n}^{1/4} \Big) + c \Big( 2^{-2m/3} + (2^m n^{-1}\log^2 n)^{1/3} + 2^{-m}\log n + n^{1/2} 2^{-2m} \Big) = O\big( \mu_n(\rho,p) \big) + O\big( v_n(m) \big).$$

This completes the proof of the theorem.   ♣

Proof of Corollary 2.1. Let $p = [n^{\tau}]$ and $q = [n^{2\tau-1}]$, where $\tau = \frac{3\lambda+7}{6\lambda+7}$. Then

$$\gamma_{1n}^{1/3} = \gamma_{2n}^{1/3} = O\big( n^{(\tau-1)/3} \big) = O\big( n^{-\frac{\lambda}{6\lambda+7}} \big),\qquad \gamma_{4n}^{1/4} = O\big( n^{-\frac{\lambda}{6\lambda+7}} \big),$$
$$\gamma_{3n}^{1/3} = n^{-1/4}(\log n)^{3/4} \Big( \sup_{n\ge 1} n^{7/8}(\log n)^{-9/8} \sum_{|j|>n} |a_j| \Big)^{2/3} = O\big( n^{-1/4}(\log n)^{3/4} \big).$$

Since $\delta > 1/3$ and $\lambda \ge \max\left\{ \frac{2+\delta}{\delta},\ \frac{7\delta+14}{6\delta-2} \right\}$, we get

$$u(q) = O\big( q^{-\lambda\delta/(2+\delta)+1} \big) = O\Big( n^{-\frac{7}{6\lambda+7}\left( \frac{\lambda\delta}{2+\delta} - 1 \right)} \Big) \le O\big( n^{-\frac{\lambda}{6\lambda+7}} \big).$$

Thus, we have

$$\mu_n(\rho,p) = O\big( n^{-\frac{\lambda}{6\lambda+7}} \big),\qquad v_n(m) = O\big( n^{-1/5} \log^{2/3} n \big) = o\big( n^{-\frac{\lambda}{6\lambda+7}} \big).$$

Therefore, (2.2) follows from Theorem 2.1 directly. ♣

Proof of Theorem 2.2. Analogously to the proof of Theorem 2.1, we write

$$\sup_u \big| P(H'_{11n} \le u) - \Phi(u) \big| \le \sup_u \big| P(H'_{11n} \le u) - P(T_{n2} \le u) \big| + \sup_u \big| P(T_{n2} \le u) - \Phi(u/s_{n2}) \big| + \sup_u \big| \Phi(u/s_{n2}) - \Phi(u) \big| =: J_{1n} + J_{2n} + J_{3n}.$$
(5.6)

By Lemmas 4.5, 4.4, and 4.3, we obtain, respectively,

$$J_{1n} \le C \big( \lambda_{2n}^{\rho} + \gamma_{4n}^{1/4} \big),\qquad J_{2n} = \sup_u \big| P(T_{n2}/s_{n2} \le u) - \Phi(u) \big| \le C \lambda_{2n}^{\rho},$$
(5.7)
$$J_{3n} \le C \big| s_{n2}^2 - 1 \big| \le C \big( \lambda_{1n}^{1/2} + \lambda_{2n}^{1/2} + \lambda_{3n}^{1/2} + u(q) \big).$$
(5.8)

Therefore, from (5.6) to (5.8), we have

$$\sup_u \big| P(H'_{11n} \le u) - \Phi(u) \big| \le C \Big( \sum_{i=1}^{3} \lambda_{in}^{1/2} + u(q) + \lambda_{2n}^{\rho} + \lambda_{4n}^{1/4} \Big).$$

Thus, according to Lemmas A.3, 4.1, and 4.2, we obtain

$$\sup_u \big| P(S_{ng} \le u) - \Phi(u) \big| = \sup_u \big| P\big( H'_{11n} + H''_{11n} + H'''_{11n} + H_{12n} + H_{2n} + H_{3n} \le u \big) - \Phi(u) \big| \le C \Big\{ \sup_u \big| P(H'_{11n} \le u) - \Phi(u) \big| + \sum_{i=1}^{3} \lambda_{in}^{1/3} + P\big(|H''_{11n}| \ge \lambda_{1n}^{1/3}\big) + P\big(|H'''_{11n}| \ge \lambda_{2n}^{1/3}\big) + P\big(|H_{12n}| \ge \lambda_{3n}^{1/3}\big) + \lambda_{5n} + \lambda_{5n}^{(2+\delta)/(3+\delta)} + P\big(|H_{2n}| > \lambda_{5n}^{(2+\delta)/(3+\delta)}\big) \Big\} \le c \Big( \sum_{i=1}^{3} \lambda_{in}^{1/2} + u(q) + \lambda_{2n}^{\rho} + \lambda_{4n}^{1/4} + \sum_{i=1}^{3} \lambda_{in}^{1/3} + \lambda_{5n}^{(2+\delta)/(3+\delta)} \Big) \le c \Big( \sum_{i=1}^{3} \lambda_{in}^{1/3} + u(q) + \lambda_{2n}^{\rho} + \lambda_{4n}^{1/4} + \lambda_{5n}^{(2+\delta)/(3+\delta)} \Big).$$

This completes the proof of Theorem 2.2. ♣

Proof of Corollary 2.2. Let $p = [n^{\tau}]$ and $q = [n^{2\tau-1}]$, where $\tau = \frac{1}{2} + \frac{8\theta-1}{2(6\lambda+7)}$. Since $\frac{\lambda+1}{2\lambda+1} < \theta$, we have $\tau < \theta$. From this we obtain

$$\lambda_{1n}^{1/3} = \lambda_{2n}^{1/3} = O\big( n^{-(\theta-\tau)/3} \big) = O\big( n^{-\frac{\lambda(2\theta-1)+\theta-1}{6\lambda+7}} \big),\qquad \lambda_{4n}^{1/4} = o\big( n^{-\frac{\lambda(2\theta-1)+\theta-1}{6\lambda+7}} \big),$$
$$\lambda_{3n}^{1/3} = n^{-1/4}(\log n)^{3/4} \Big( \sup_{n\ge 1} n^{7/8}(\log n)^{-9/8} \sum_{|j|>n} |a_j| \Big)^{2/3} = O\big( n^{-1/4}(\log n)^{3/4} \big).$$

And by $\delta > 2/3$ and $\theta \le 3/4$, we have

$$\lambda_{5n}^{(2+\delta)/(3+\delta)} = O\Big( n^{-\frac{\theta}{2} \cdot \frac{2+\delta}{3+\delta}} \Big) \le O\big( n^{-\frac{4\theta}{11}} \big) \le O\big( n^{-\frac{4\lambda+4}{22\lambda+11}} \big).$$

Since $\lambda > \frac{(2+\delta)(9\theta-2)}{2\theta(3\delta-2)+2}$, we can obtain

$$\frac{(8\theta-1)(\lambda\delta-\delta-2)}{(7+6\lambda)(2+\delta)} > \frac{\lambda(2\theta-1)+\theta-1}{6\lambda+7}.$$

From this we have

$$u(q) = O\big( q^{-\lambda\delta/(2+\delta)+1} \big) = O\Big( n^{-\frac{(8\theta-1)(\lambda\delta-\delta-2)}{(7+6\lambda)(2+\delta)}} \Big) = O\Big( n^{-\frac{\lambda(2\theta-1)+\theta-1}{6\lambda+7}} \Big).$$

Therefore, (2.3) follows from Theorem 2.2 directly. ♣

Appendix

Lemma A.1. [8, 9] Let $\{b_j : j \ge 1\}$ be a real sequence.

(i) Let $\{X_j : j \ge 1\}$ be an $\alpha$-mixing sequence of random variables with $E|X_i|^{2+\delta} < \infty$, $\delta > 0$. Then

$$E\Big( \sum_{j=1}^{n} b_j X_j \Big)^2 \le \Big( 1 + 20 \sum_{m=1}^{n} \alpha^{\delta/(2+\delta)}(m) \Big) \sum_{j=1}^{n} b_j^2 \|X_j\|_{2+\delta}^2,\qquad n \ge 1.$$

(ii) Let $r > 2$, $\delta > 0$, and let $\{X_i, i \ge 1\}$ be an $\alpha$-mixing sequence of random variables with $EX_i = 0$ and $E|X_i|^{r+\delta} < \infty$. Suppose that $\theta > r(r+\delta)/(2\delta)$ and $\alpha(n) \le Cn^{-\theta}$ for some $C > 0$. Then, for any $\varepsilon > 0$, there exists a positive constant $K = K(\varepsilon, r, \delta, \theta, C) < \infty$ such that

$$E \max_{1\le j\le n} \Big| \sum_{i=1}^{j} X_i \Big|^r \le K \Big[ n^{\varepsilon} \sum_{i=1}^{n} E|X_i|^r + \Big( \sum_{i=1}^{n} \|X_i\|_{r+\delta}^2 \Big)^{r/2} \Big].$$

Lemma A.2. [10] Let $\{X_j : j \ge 1\}$ be an $\alpha$-mixing sequence of random variables, and let $p$ and $q$ be two positive integers. Set $\eta_l := \sum_{j=(l-1)(p+q)+1}^{(l-1)(p+q)+p} X_j$ for $1 \le l \le k$. If $r > 0$, $s > 0$, and $\frac{1}{r} + \frac{1}{s} = 1$, then

$$\Big| E \exp\Big( it \sum_{l=1}^{k} \eta_l \Big) - \prod_{l=1}^{k} E \exp(it\eta_l) \Big| \le C |t| \alpha^{1/s}(q) \sum_{l=1}^{k} \|\eta_l\|_r.$$

Lemma A.3. [14] Suppose that $\{\zeta_n : n \ge 1\}$, $\{\eta_n : n \ge 1\}$, and $\{\xi_n : n \ge 1\}$ are sequences of random variables and $\{\gamma_n : n \ge 1\}$ is a sequence of positive constants with $\gamma_n \to 0$. If $\sup_u |F_{\zeta_n}(u) - \Phi(u)| \le C\gamma_n$, then for any $\varepsilon_1 > 0$ and $\varepsilon_2 > 0$,

$$\sup_u \big| F_{\zeta_n+\eta_n+\xi_n}(u) - \Phi(u) \big| \le C \gamma_n + \varepsilon_1 + \varepsilon_2 + P(|\eta_n| \ge \varepsilon_1) + P(|\xi_n| \ge \varepsilon_2).$$

Lemma A.4. [19] Under assumptions (A5)-(A6), we have

(i) $\big| \int_{A_i} E_m(t,s)\,ds \big| = O(2^m/n)$, $i = 1, 2, \dots, n$;

(ii) $\sum_{i=1}^{n} \big( \int_{A_i} E_m(t,s)\,ds \big)^2 = O(2^m/n)$;

(iii) $\sup_m \int_0^1 |E_m(t,s)|\,ds \le C$;

(iv) $\sum_{i=1}^{n} \big| \int_{A_i} E_m(t,s)\,ds \big| \le C$.

Lemma A.5. [20] Under assumptions (A5) and (A7), we have

(i) $\max_{1\le i\le n} \sum_{j=1}^{n} \int_{A_j} |E_m(t_i,s)|\,ds \le c$;

(ii) $\max_{1\le i\le n} \sum_{j=1}^{n} \int_{A_i} |E_m(t_j,s)|\,ds \le c$.

References

1. Engle R, Granger C, Rice J, Weiss A: Nonparametric estimates of the relation between weather and electricity sales. J Am Stat Assoc 1986, 81: 310-320. 10.2307/2289218

2. Chen H, Shiah J: Data-driven efficient estimation for a partially linear model. Ann Stat 1994, 22: 211-237. 10.1214/aos/1176325366

3. Donald G, Dewey K: Series estimation of semilinear models. J Multivar Anal 1994, 50: 30-40. 10.1006/jmva.1994.1032

4. Hamilton SA, Truong YK: Local linear estimation in partly linear models. J Multivar Anal 1997, 60: 1-19. 10.1006/jmva.1996.1642

5. Sun XQ, You JH, Chen GM, Zhou X: Convergence rates of estimators in partial linear regression models with MA(∞) error process. Commun Stat Theory Methods 2002, 31: 2251-2273. 10.1081/STA-120017224

6. Liang HY, Fan GL: Berry-Esseen type bounds of estimators in a semiparametric model with linear process errors. J Multivar Anal 2009, 100(1): 1-15. 10.1016/j.jmva.2008.03.006

7. Lin ZY, Lu CR: Limit Theory for Mixing Dependent Random Variables. Science Press, Beijing; 1996.

8. Yang SC: Moment bounds for strong mixing sequences and their application. J Math Res Expos 2000, 20(3): 349-359.

9. Yang SC: Maximal moment inequality for partial sums of strong mixing sequences and application. Acta Math Sin 2007, 26B(3): 1013-1024.

10. Yang SC, Li YM: Uniformly asymptotic normality of the regression weighted estimator for strong mixing samples. Acta Math Sin 2006, 49A(5): 1163-1170.

11. Xing GD, Yang SC: On the maximal inequalities for partial sums of strong mixing random variables with applications. Thai J Math 2011, 9(1): 11-19.

12. Xing GD, Yang SC, Chen A: A maximal moment inequality for α-mixing sequences and its applications. Stat Probab Lett 2009, 79: 1429-1437. 10.1016/j.spl.2009.02.016

13. Xing GD, Yang SC, Liu Y, Yu KM: A note on the Bahadur representation of sample quantiles for α-mixing random variables. Monatsh Math 2011. doi:10.1007/s00605-011-0334-0

14. Li YM, Wei CD, Xing GD: Berry-Esseen bounds for wavelet estimator in a regression model with linear process errors. Stat Probab Lett 2011, 81(1): 103-110. 10.1016/j.spl.2010.09.024

15. Antoniadis A, Gregoire G, McKeague IW: Wavelet methods for curve estimation. J Am Stat Assoc 1994, 89: 1340-1352. 10.2307/2290996

16. Negiahi H: The rate of convergence to normality for strong mixing sequences of random variables. Sci Rep Yokohama Natl Univ Sect 1 Math Phys Chem 1977, 24(11): 17-25.

17. You JH, Chen M, Chen G: Asymptotic normality of some estimators in a fixed-design semiparametric regression model with linear time series errors. J Syst Sci Complex 2004, 17(4): 511-522.

18. Petrov VV: Limit Theorems of Probability Theory. Oxford University Press, New York; 1995.

19. Li YM, Guo JH: Asymptotic normality of wavelet estimator for strong mixing errors. J Korean Stat Soc 2009, 38: 383-390. 10.1016/j.jkss.2009.03.002

20. Xue LG: Rates of random weighting approximation of wavelet estimates in semiparametric regression model. Acta Math Appl Sin 2003, 26: 11-25.


Acknowledgements

The authors are grateful to the two anonymous referees for providing detailed list of comments and suggestions which greatly improved an earlier version of this article. This work is partially supported by the National Science Foundation of China (No. 11061029) and the Natural Science Foundation of Jiangxi Province (No. 2008GZS0046).

Author information


Correspondence to Yongming Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

The two authors contributed equally to this work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Wei, C., Li, Y. Berry-Esseen bounds for wavelet estimator in semiparametric regression model with linear process errors. J Inequal Appl 2012, 44 (2012). https://doi.org/10.1186/1029-242X-2012-44
