Berry-Esseen bounds for wavelet estimator in semiparametric regression model with linear process errors
Journal of Inequalities and Applications volume 2012, Article number: 44 (2012)
Abstract
Consider the semiparametric regression model y_i = x_i β + g(t_i) + ε_i, i = 1, . . . , n, where the errors ε_i = Σ_{j=-∞}^{∞} a_j e_{i-j} are linear processes with Σ_{j=-∞}^{∞} |a_j| < ∞, and {e_i} are identically distributed and strong mixing innovations with zero mean. Under appropriate conditions, Berry-Esseen type bounds for the wavelet estimators of β and g(·) are established. Our results generalize the results of Li et al. for the nonparametric regression model to the semiparametric regression model.
Mathematics Subject Classification: 62G05; 62G08.
1 Introduction
Regression analysis is one of the most mature and widely applied branches of statistics. For a long time, however, its main theory has concerned parametric and nonparametric regressions. Recently, semiparametric regressions have received more and more attention. This is mainly because semiparametric regression reduces the high risk of misspecification relating to a fully parametric model and avoids some serious drawbacks of fully nonparametric methods.
In 1986, Engle et al. [1] first introduced the following semiparametric regression model:
y_i = x_i β + g(t_i) + ε_i, i = 1, . . . , n, (1.1)

where β is an unknown parameter of interest, {(x_i, t_i)} are nonrandom design points, {y_i} are the response variables, g(·) is an unknown function defined on the closed interval [0, 1], and {ε_i} are random errors.
The model (1.1) has been extensively studied. When the errors {ε_i} are independent and identically distributed (i.i.d.) random variables, Chen and Shiah [2], Donald and Newey [3], and Hamilton and Truong [4] used various estimation methods to obtain estimators of the unknown quantities in (1.1) and discussed the asymptotic properties of these estimators. When {ε_i} are MA(∞) errors of the form ε_i = Σ_{j=0}^{∞} a_j e_{i-j}, where {e_i} are i.i.d. random variables and {a_j} satisfy Σ_{j=0}^{∞} |a_j| < ∞, the law of the iterated logarithm for the semiparametric least squares estimator (SLSE) of β and strong convergence rates for the nonparametric estimator of g(·) were discussed by Sun et al. [5]. Berry-Esseen type bounds for the estimators of β and g(·) in model (1.1) under linear process errors with identically distributed and negatively associated innovations {e_i} were derived by Liang and Fan [6].
Let us now briefly recall the definition of strong mixing dependence. A sequence {e_i, i ∈ ℤ} is said to be strong mixing (or α-mixing) if α(n) → 0 as n → ∞, where

α(n) = sup{ |P(AB) − P(A)P(B)| : A ∈ F_{−∞}^{m}, B ∈ F_{m+n}^{∞}, m ∈ ℤ },

and F_m^n denotes the σ-field generated by {e_i : m ≤ i ≤ n}.
For the properties of strong mixing, one can consult the book of Lin and Liu [7]. Recently, Yang and Li [8–10] and Xing et al. [11–13] established moment bounds and maximal moment inequalities for partial sums of strong mixing sequences and their applications. In this article, we study Berry-Esseen type bounds for the wavelet estimators of β and g(·) in model (1.1) based on linear process errors {ε_i} satisfying the following basic assumption (A1). Our results generalize the results in [14] to the semiparametric regression model.
(A1) (i) Let ε_i = Σ_{j=-∞}^{∞} a_j e_{i-j}, where Σ_{j=-∞}^{∞} |a_j| < ∞, and {e_j, j = 0, ±1, ±2, . . .} are identically distributed and strong mixing random variables with zero mean.

(ii) For some δ > 0, E|e_0|^{2+δ} < ∞, and the mixing coefficients satisfy α(n) = O(n^{−λ}) for some λ > (2 + δ)/δ.
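For intuition, the error structure in (A1) can be simulated. The sketch below is illustrative only: it truncates the doubly infinite series at |j| ≤ J, takes geometrically decaying coefficients a_j = a^{|j|} (which are summable), and uses i.i.d. standard normal innovations (a trivially strong mixing choice); none of these specifics come from the paper.

```python
import numpy as np

# Illustrative simulation of the error structure in assumption (A1):
#   eps_i = sum_j a_j e_{i-j},  sum_j |a_j| < infinity,
# with identically distributed, zero-mean innovations e_j. All concrete
# choices (geometric coefficients a_j = a^{|j|}, i.i.d. N(0, 1)
# innovations, truncation of the series at |j| <= J) are hypothetical.

def linear_process_errors(n, a=0.5, J=50, seed=None):
    """Generate eps_1, ..., eps_n with eps_i = sum_{|j|<=J} a^{|j|} e_{i-j}."""
    rng = np.random.default_rng(seed)
    coefs = a ** np.abs(np.arange(-J, J + 1))  # summable: sum_j |a_j| < inf
    e = rng.standard_normal(n + 2 * J)         # innovations, zero mean
    # each eps_i is a sliding weighted sum of 2J + 1 innovations
    return np.convolve(e, coefs, mode="valid")  # length n

eps = linear_process_errors(1000, seed=0)
print(len(eps), float(eps.mean()))
```

The resulting {ε_i} inherits zero mean from the innovations, and its dependence range is controlled by the decay of the coefficients, which is the role Σ|a_j| < ∞ plays in the proofs.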
Now, we introduce the wavelet estimators of β and g for model (1.1). Let β be given. Since Ee_i = 0, we have g(t_i) = E(y_i − x_i β), i = 1, . . . , n. Hence a natural estimator of g(·) is

ĝ(t, β) = Σ_{i=1}^{n} (y_i − x_i β) ∫_{A_i} E_m(t, s) ds, (1.2)
where A_i = [s_{i−1}, s_i] are intervals that partition [0, 1] with t_i ∈ A_i and 0 ≤ t_1 ≤ · · · ≤ t_n ≤ 1, and the wavelet kernel E_m(t, s) is defined by

E_m(t, s) = 2^m Σ_{k∈ℤ} φ(2^m t − k) φ(2^m s − k),

where m = m(n) > 0 is an integer depending only on n and φ(·) is a father wavelet with compact support. Set

x̃_i = x_i − Σ_{j=1}^{n} x_j ∫_{A_j} E_m(t_i, s) ds, ỹ_i = y_i − Σ_{j=1}^{n} y_j ∫_{A_j} E_m(t_i, s) ds.
In order to estimate β, we seek to minimize

SS(β) = Σ_{i=1}^{n} (ỹ_i − x̃_i β)^2. (1.3)

The minimizer of (1.3) is found to be

β̂_n = ( Σ_{i=1}^{n} x̃_i ỹ_i ) / ( Σ_{i=1}^{n} x̃_i^2 ). (1.4)

So, a plug-in estimator of the nonparametric component g(·), based on β̂_n, is given by

ĝ_n(t) = ĝ(t, β̂_n) = Σ_{i=1}^{n} (y_i − x_i β̂_n) ∫_{A_i} E_m(t, s) ds. (1.5)
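To make the two-stage estimation concrete, here is a minimal numerical sketch of the estimators (1.3)-(1.5). It assumes the Haar father wavelet φ = 1_[0,1), for which E_m(t, s) equals 2^m when t and s lie in the same dyadic interval of length 2^{−m} and 0 otherwise, so the weight ∫_{A_j} E_m(t_i, s) ds reduces to local averaging. The equispaced design, the value of m, and the data-generating β, g, h are all illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch of (1.3)-(1.5) with the Haar father wavelet
# phi = 1_[0,1): E_m(t, s) = 2^m * 1{t, s in the same dyadic cell of
# length 2^{-m}}, so w_ij = int_{A_j} E_m(t_i, s) ds = (2^m / n) when
# t_i and t_j share a cell (equispaced design, |A_j| = 1/n).

def haar_weights(t, m):
    """w[i, j] = integral of E_m(t_i, s) over A_j, Haar case."""
    n = len(t)
    cell = np.floor((2 ** m) * t).astype(int)      # dyadic cell of each t_i
    same = cell[:, None] == cell[None, :]
    return (2 ** m / n) * same

def wavelet_semiparametric_fit(x, y, t, m):
    """Return (beta_hat as in (1.4), g_hat as in (1.5) at the design points)."""
    w = haar_weights(t, m)
    x_tilde = x - w @ x                            # tilde x_i
    y_tilde = y - w @ y                            # tilde y_i
    beta_hat = float(x_tilde @ y_tilde / (x_tilde @ x_tilde))   # (1.4)
    g_hat = w @ (y - x * beta_hat)                 # (1.5) evaluated at t_i
    return beta_hat, g_hat

rng = np.random.default_rng(1)
n, m = 512, 4
t = (np.arange(1, n + 1) - 0.5) / n                # equispaced design in (0, 1)
x = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)  # x_i = h(t_i) + u_i
g0 = np.cos(np.pi * t)
y = 2.0 * x + g0 + 0.2 * rng.standard_normal(n)    # model (1.1) with beta = 2
beta_hat, g_hat = wavelet_semiparametric_fit(x, y, t, m)
print(beta_hat)
```

With these choices β̂_n lands close to the true β = 2 and ĝ_n tracks g(t) = cos(πt) up to local-averaging bias; note (A6) holds here since s_i − s_{i−1} = 1/n, and 2^m = 16 keeps n^{−1}2^m small in the spirit of Theorem 2.2.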
In the following, the symbols c, C, C_1, C_2, . . . denote positive constants whose values may change from one place to another; b_n = O(a_n) means b_n ≤ c a_n; [x] denotes the integer part of x; ‖e_i‖_r := (E|e_i|^r)^{1/r}; and Φ(u) denotes the standard normal distribution function.
The article is organized as follows. In Section 2, we give some assumptions and the main results. Sections 3 and 4 are devoted to the proofs of preliminary results. Proofs of the theorems are provided in Section 5. Some known results used in the proofs of the preliminary and main results are collected in the Appendix.
2 Assumptions, notations and results
First, we list some assumptions used in this article.
(A2) There exists a function h(·) defined on [0, 1] such that x_i = h(t_i) + u_i and

(i)

(ii)

(iii) For any permutation (j_1, . . . , j_n) of the integers (1, . . . , n),
(A3) The spectral density f(ω) of {ε_i} satisfies 0 < c_1 ≤ f(ω) ≤ c_2 < ∞ for ω ∈ (−π, π].
(A4) g(·) and h(·) satisfy the Lipschitz condition of order 1 on [0, 1], and h(·) ∈ H^v, , where H^v is the Sobolev space of order v.
(A5) The scaling function φ(·) is γ-regular (γ is a positive integer), has compact support, satisfies the Lipschitz condition of order 1, and |φ̂(ξ) − 1| = O(ξ) as ξ → 0, where φ̂ denotes the Fourier transform of φ.
(A6) max_{1≤i≤n} |s_i − s_{i−1}| = O(n^{−1}).
(A7) There exists a positive constant d_1 such that .
For the sake of convenience, we use the following notation. Let p = p(n) and q = q(n) denote positive integers such that p + q ≤ 3n and qp^{−1} ≤ c < ∞. Set
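The integers p and q set up the classical big-block/small-block partition used in the proofs of Section 3, where k = [3n/(p + q)] alternating blocks of lengths p and q are cut from the index set, with a remainder block at the end. A minimal sketch follows; the block sizes are illustrative only.

```python
# Sketch of the big-block / small-block construction behind p, q, and
# the splitting of S_n111 in Section 3: the first k(p + q) indices of
# {1, ..., N} are cut into k alternating "big" blocks of length p and
# "small" blocks of length q, with a residual block at the end. Here N
# plays the role of 3n, so k = [N/(p + q)] = [3n/(p + q)]. The sizes
# below are illustrative; the proofs take p, q growing with n subject
# to q p^{-1} <= c.

def block_partition(N, p, q):
    k = N // (p + q)                                   # k = [N/(p+q)]
    big, small = [], []
    for w in range(k):
        start = w * (p + q)
        big.append(range(start + 1, start + p + 1))            # p indices
        small.append(range(start + p + 1, start + p + q + 1))  # q indices
    rest = range(k * (p + q) + 1, N + 1)               # leftover indices
    return big, small, rest

big, small, rest = block_partition(N=100, p=7, q=3)
covered = sum(len(b) for b in big) + sum(len(s) for s in small) + len(rest)
print(len(big), covered)  # prints: 10 100
```

The sums over big blocks carry the limit distribution, while the small blocks separate the big blocks far enough that strong mixing makes them nearly independent.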
After these assumptions and notations we can formulate the main results as follows:
Theorem 2.1. Suppose that (A1)-(A7) hold. If ρ satisfies
then
Corollary 2.1. Under the same conditions as in Theorem 2.1, if ρ = 1/3, 2^m = O(n^{2/5}), δ > 1/3, and , then for each t ∈ [0, 1], we have
Theorem 2.2. Suppose that the conditions in Theorem 2.1 are satisfied. If n^{−1}2^m → 0, then for each t ∈ [0, 1]
Corollary 2.2. Under the conditions of Theorem 2.2 with ρ = 1/3 and δ > 2/3, if n^{−1}2^m = O(n^{−θ}) with , and , then
Remark 2.1. Let . Under the assumptions (A4)-(A7) and by relation (11) in the proof of Theorem 3.2 in [15], we obtain . Similarly, let ; then .
Remark 2.2. (i) By Corollary 2.1, the Berry-Esseen bound of the wavelet estimator is near for sufficiently large λ; this is faster than the bound in [16], which attains O(n^{−δ/4} log n) for δ ≤ 1/2 or O(n^{−1/8}) for δ > 1/2 for strong mixing sequences, but slower than the bound in [6] for the weighted estimator, which attains O(n^{−1/4}(log n)^{3/4}).
(ii) From Corollary 2.2, the Berry-Esseen bound of the wavelet estimator ĝ_n(·) is near O(n^{−1/12}) for sufficiently large λ and θ = 3/4.
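The rate comparisons in Remark 2.2 are asymptotic statements. A quick numeric check makes them concrete; the sample sizes are arbitrary and all unknown multiplicative constants are set to 1.

```python
import math

# Evaluate three Berry-Esseen rates appearing in Remark 2.2:
# O(n^{-1/12}) (Corollary 2.2), O(n^{-1/8}) ([16], delta > 1/2), and
# O(n^{-1/4}(log n)^{3/4}) ([6], weighted estimate). Constants are
# normalized to 1, an arbitrary choice.

def rates(n):
    return {
        "n^(-1/12)": n ** (-1 / 12),
        "n^(-1/8)": n ** (-1 / 8),
        "n^(-1/4)(log n)^(3/4)": n ** (-1 / 4) * math.log(n) ** 0.75,
    }

for n in (10 ** 6, 10 ** 12):
    print(n, {name: round(v, 4) for name, v in rates(n).items()})
```

At n = 10^12 the ordering matches the remark (the weighted-estimate rate is smallest); at n = 10^6 the (log n)^{3/4} factor still outweighs the extra n^{−1/8} decay, a reminder that "faster" here is an asymptotic statement.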
3 Some preliminary lemmas for β̂_n
From the definition of β̂_n in (1.4), we write
where
For S_{n11}, we can write
It is not difficult to see that
Let k = [3n/(p + q)]; then S_{n111} may be split as
where
From (3.1) to (3.6), we can write that
Now, we establish the following lemmas with their proofs.
Lemma 3.1. Suppose that (A1), (A2)(i), and (A3) hold, then
Proof. According to the proofs of (3.4) and Theorem 2.3 in [17], for any sequence {γ_l}_{l∈ℕ}, we have
which implies the desired results by Lemma A.4 and assumption (A2)(i). ♣
Lemma 3.2. Let assumptions (A1)-(A3), (A5), and (A6) be satisfied, then
Proof. By Lemmas 3.1 and A.1(i), and assumptions (A1)(i) and (A2)(i), we have
and by the Cauchy-Schwarz inequality
Then, from (3.9) to (3.11), the proof of (3.7) is complete; (3.8) then follows from (3.7) by the Markov inequality. ♣
Lemma 3.3. Let assumptions (A1)-(A7) be satisfied, then
Proof. (a) By assumption (A2), Remark 2.1 and Lemma 3.1, we get
By this and the Markov inequality, we have
(b) Applying Lemmas 3.1, A.4, and A.5, we get that
Therefore
(c) Changing the order of summation in S_{n21}, similarly to the calculation for ,
Therefore, we obtain that
(d) Similarly, by Lemmas 3.1, A.4, A.5, and Remark 2.1, we get that
Thus, we have
(e) We write that
Similarly to the calculation for , by (3.13) and Lemmas 3.1, A.4, and A.5, we obtain that
Hence, we have
(f) By assumption (A2), Remark 2.1, Lemma A.5, and the Abel inequality, we have
Thus, by Lemma 3.1 we obtain
Therefore, the desired result (3.12) follows from (3.13)-(3.18) immediately. ♣
Lemma 3.4. Suppose that (A1)-(A3), (A5), and (A6) hold. Set ; then
Proof. Consider Cov(y_{1ni}, y_{1nj}). By (3.5) and (3.6), it is easy to verify that , and
According to Lemma 3.2, the C_r-inequality, and the Cauchy-Schwarz inequality,
and
Thus, we obtain
On the other hand, from Lemma 1.2.4 in [7] and Lemmas 3.1 and A.4(iv), we can estimate
Therefore, by (3.19) and (3.20), it follows that
♣
Assume that {η_{1nw} : w = 1, . . . , k} are independent random variables such that, for each w, η_{1nw} has the same distribution as y_{1nw}. Set , . Clearly . Then we have the following lemmas:
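The quantity controlled through this independent-copy construction is the Kolmogorov distance sup_u |P(T ≤ u) − Φ(u)| bounded in Lemmas 3.5 and 3.6. The sketch below estimates this distance empirically for the standardized sum of big-block sums of a dependent sequence; the AR(1) dependence, the centered exponential (hence skewed, non-Gaussian) innovations, and the sizes n, p, q are all illustrative choices, not from the paper.

```python
import math
import numpy as np

# Empirical look at sup_u |P(T <= u) - Phi(u)| for the standardized sum
# of big-block sums of a dependent sequence, mimicking the role of
# S_n111 and the independent copies eta_1nw. AR(1) with |phi| < 1 is a
# standard example of a strong mixing sequence; all sizes are
# illustrative.

def Phi(u):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def sup_distance_to_normal(reps=2000, n=600, p=20, q=5, phi=0.4, seed=0):
    rng = np.random.default_rng(seed)
    k = n // (p + q)                            # number of big blocks
    tvals = np.empty(reps)
    for r in range(reps):
        noise = rng.exponential(1.0, n) - 1.0   # zero-mean, skewed
        e = np.empty(n)                         # AR(1) innovations e_i
        e[0] = noise[0]
        for i in range(1, n):
            e[i] = phi * e[i - 1] + noise[i]
        # keep only the k big blocks; the small blocks are
        # asymptotically negligible in the proofs
        tvals[r] = sum(e[w * (p + q): w * (p + q) + p].sum()
                       for w in range(k))
    tvals = np.sort((tvals - tvals.mean()) / tvals.std())
    # Kolmogorov distance between the empirical CDF and Phi
    hi = np.arange(1, reps + 1) / reps
    lo = np.arange(0, reps) / reps
    vals = np.array([Phi(u) for u in tvals])
    return float(max(np.abs(hi - vals).max(), np.abs(lo - vals).max()))

d = sup_distance_to_normal()
print(d)
```

The distance comes out small because the k·p summands already average out the skewness of the innovations; the theorems quantify how fast such distances shrink as n grows.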
Lemma 3.5. Let assumptions (A1)-(A3), (A5), (A6), and (2.1) hold; then
Proof. By the Berry-Esseen inequality (see [18, Theorem 5.7]), we have
From (2.1), we have 0 < 2ρ ≤ 1, 0 < 2ρ < δ, and (2 + δ)/δ < (1 + ρ)(2 + δ)/(δ − 2ρ) < λ. Let r = 2(1 + ρ) and τ = δ − 2ρ; then r + τ = 2 + δ and . According to Lemmas 3.1 and A.1(ii) and the C_r-inequality, taking ε = ρ, we get that
Therefore, from Lemma 3.4, relations (3.21) and (3.22), we obtain the result. ♣
Lemma 3.6. Suppose that the conditions in Lemma 3.5 are satisfied, then
Proof. Let ϕ_1(t) and ψ_1(t) be the characteristic functions of and T_{n1}, respectively.
Since
then from Lemmas A.1(i), A.2, and 3.1, it follows that
Therefore
As in the calculation of (4.7) in [14], using Lemma 3.5, we have
Therefore, combining (3.23) and (3.24), choosing , and using the Esseen inequality (see [18, Theorem 5.3]), we conclude that
♣
4 Some preliminary lemmas for ĝ n (t)
From the definition of ĝ_n(t) in (1.5), we can decompose the sum into three parts:
Let us decompose the vector H_{1n} into two parts:
where
Similar to S_{n111} in (3.6), H_{11n} can be split as follows, where
Then
Set , . Similarly to Lemmas 3.2-3.6, we have the following lemmas, whose proofs are omitted except for that of Lemma 4.2.
Lemma 4.1. Suppose that the conditions in Theorem 2.2 are satisfied, then
Lemma 4.2. Let assumptions (A1)-(A7) be satisfied, then
Lemma 4.3. Under the conditions of Theorem 2.2, set ; then
Lemma 4.4. Suppose that the conditions in Theorem 2.2 are satisfied, then
Lemma 4.5. Suppose that the conditions in Theorem 2.2 are satisfied, then
Proof of Lemma 4.2. Similar to the proof of (A.8) in [6], we first verify that
From (1.2), we write