The asymptotic normality of the internal estimator for nonparametric regression

In this paper, we study the asymptotic properties of the internal estimator for nonparametric regression with independent and dependent data. Under some weak conditions, we present results on the asymptotic normality of the estimator. Our results extend some corresponding results in the literature.


Introduction
In this paper, we consider the nonparametric regression model
$$Y_i = m(X_i) + U_i, \quad 1 \le i \le n,$$
where $(X_i, Y_i) \in \mathbb{R}^d \times \mathbb{R}$, $d \ge 1$, and the $U_i$ are random variables satisfying $E(U_i \mid X_i) = 0$, $1 \le i \le n$, $n \ge 1$. So we have $m(x) = E(Y_1 \mid X_1 = x)$. Let $K(x)$ be a kernel function. Define $K_h(x) = h^{-d} K(x/h)$, where $h = h_n$ is a sequence of positive bandwidths tending to zero as $n \to \infty$. Kernel-type estimators of the regression function are widely used in various situations because of their flexibility and efficiency with both independent and dependent data. For independent data, Nadaraya [1] and Watson [2] gave the most popular nonparametric estimator of the unknown function $m(x)$, named the Nadaraya–Watson estimator $\hat{m}_{NW}(x)$:
$$\hat{m}_{NW}(x) = \frac{\sum_{i=1}^{n} K_h(x - X_i) Y_i}{\sum_{i=1}^{n} K_h(x - X_i)}. \quad (1.1)$$
Jones et al. [3] considered various versions of kernel-type regression estimators, such as the Nadaraya–Watson estimator (1.1) and the local linear estimator. They also investigated the internal estimator for a known density $f(\cdot)$:
$$\hat{m}_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{K_h(x - X_i) Y_i}{f(X_i)}. \quad (1.2)$$
Here the factor $\frac{1}{f(X_i)}$ is internal to the summation, whereas the estimator $\hat{m}_{NW}(x)$ has the factor $\frac{1}{\hat{f}(x)}$, with $\hat{f}(x) = \frac{1}{n-1} \sum_{i=1}^{n} K_h(x - X_i)$, external to the summation. The internal estimator was first proposed by Mack and Müller [4], and Jones et al. [3] studied various kernel-type regression estimators, including the internal estimator (1.2). Linton and Nielsen [5] introduced an integration method based on direct integration of the initial pilot estimator (1.2). Linton and Jacho-Chávez [6] studied the other internal estimator
$$\hat{m}_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{K_h(x - X_i) Y_i}{\hat{f}(X_i)}, \qquad \hat{f}(X_i) = \frac{1}{n-1} \sum_{j \ne i} L_b(X_i - X_j). \quad (1.3)$$
Here $L(\cdot)$ is a kernel function, $b$ is the bandwidth, and the density $f(\cdot)$ is unknown. For independent data, Linton and Jacho-Chávez [6] obtained the asymptotic normality of the internal estimator $\hat{m}_n(x)$ in (1.3). Shen and Xie [7] obtained the complete convergence and uniform complete convergence of the internal estimator $\hat{m}_n(x)$ in (1.2) under geometrically $\alpha$-mixing (or strong mixing) data. Li et al. [8] weakened the conditions of Shen and Xie [7] and obtained the convergence rate and uniform convergence rate in probability for the estimator $\hat{m}_n(x)$.
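To make the contrast between the external weighting in (1.1) and the internal weighting in (1.2) concrete, here is a minimal numerical sketch (our own illustration, not the paper's code: a Gaussian product kernel, a synthetic uniform design so that the known density is $f \equiv 1$, and ad hoc function names):

```python
import numpy as np

def gaussian_kernel(u):
    # Product Gaussian kernel on R^d; u has shape (n, d).
    d = u.shape[-1]
    return np.exp(-0.5 * np.sum(u**2, axis=-1)) / (2 * np.pi) ** (d / 2)

def K_h(x, X, h):
    # Scaled kernel K_h(x - X_i) = h^{-d} K((x - X_i) / h).
    d = X.shape[1]
    return gaussian_kernel((x - X) / h) / h**d

def m_nw(x, X, Y, h):
    # Nadaraya-Watson estimator (1.1): the density factor is the
    # kernel sum OUTSIDE, i.e. a ratio of two sums.
    w = K_h(x, X, h)
    return np.sum(w * Y) / np.sum(w)

def m_internal(x, X, Y, h, f):
    # Internal estimator (1.2): the weight 1/f(X_i) sits INSIDE the
    # sum, evaluated at each design point X_i (f known here).
    w = K_h(x, X, h)
    return np.mean(w * Y / f(X))

# Synthetic check: m(x) = x^2, X ~ Uniform[0,1] so f = 1, E(U|X) = 0.
rng = np.random.default_rng(0)
n = 2000
X = rng.random((n, 1))
Y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(n)
f_unif = lambda X: np.ones(len(X))
est = m_internal(np.array([0.5]), X, Y, h=0.1, f=f_unif)
```

At an interior point such as $x = 0.5$, both estimators should be close to $m(0.5) = 0.25$ up to the usual $O(h^2)$ smoothing bias.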
As far as we know, there are no results on the asymptotic normality of the internal estimator $\hat{m}_n(x)$ in (1.2). Similarly to Linton and Jacho-Chávez [6], we investigate the asymptotic normality of the internal estimator $\hat{m}_n(x)$ with independent data and $\varphi$-mixing data, respectively. Asymptotic normality results are presented in Sect. 3. Denote $\mathcal{F}_n^m = \sigma(X_i, n \le i \le m)$ and define the coefficients
$$\varphi(n) = \sup_{k \ge 1} \sup_{A \in \mathcal{F}_1^k,\, B \in \mathcal{F}_{k+n}^{\infty},\, P(A) > 0} \bigl| P(B \mid A) - P(B) \bigr|.$$
If $\varphi(n) \downarrow 0$ as $n \to \infty$, then $\{X_n\}_{n \ge 1}$ is said to be a $\varphi$-mixing sequence. The concept of $\varphi$-mixing was introduced by Dobrushin [9], and many properties of $\varphi$-mixing sequences are presented in Chap. 4 of Billingsley [10]. For example, an autoregressive moving average (ARMA) process whose mixing coefficients decrease geometrically yields a geometrically $\varphi$-mixing sequence. Györfi et al. [11, 12] gave more examples and applications to nonparametric estimation. We also refer to Fan and Yao [13] and Bosq and Blanke [14] for work on nonparametric regression under independent and dependent data.
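As a hedged illustration of the dependent-data setting (an AR(1) design standing in for the ARMA example above; whether a given ARMA process is $\varphi$-mixing depends on the innovation distribution, and all parameter choices below are ours), one can simulate such a sequence and apply the internal estimator (1.2) using the known stationary marginal density:

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1(n, rho=0.5, burn=500):
    # AR(1) design X_t = rho * X_{t-1} + e_t with standard normal
    # innovations; ARMA processes are the text's example of sequences
    # with geometrically decreasing mixing coefficients.
    e = rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = rho * x[t - 1] + e[t]
    return x[burn:]                 # drop burn-in: approx. stationary

n, h, rho = 5000, 0.2, 0.5
X = ar1(n, rho)
U = 0.1 * rng.standard_normal(n)    # E(U_i | X_i) = 0
Y = np.sin(X) + U                   # regression function m(x) = sin(x)

# Known stationary marginal density of the AR(1): N(0, 1 / (1 - rho^2)).
sd = np.sqrt(1.0 / (1.0 - rho**2))
def f(x):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def K_h(u, h):
    # Gaussian kernel scaled by the bandwidth (d = 1).
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

x0 = 0.5
m_hat = np.mean(K_h(x0 - X, h) * Y / f(X))   # internal estimator (1.2)
```

Despite the serial dependence in the design, the pointwise estimate at $x_0 = 0.5$ should remain close to $m(x_0) = \sin(0.5)$.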
Regarding notation, we write $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$. Throughout the paper, $c, c_1, c_2, c_3, \ldots, d, B_0, B_1$ denote positive constants not depending on $n$, which may be different in various places; $\lfloor x \rfloor$ denotes the largest integer not exceeding $x$; $\to$ means taking the limit as $n \to \infty$; $c_n \sim d_n$ means that $c_n / d_n \to 1$; $\xrightarrow{D}$ means convergence in distribution; and $X \stackrel{D}{=} Y$ means that the random variables $X$ and $Y$ have the same distribution. A sequence $\{X_i, i \ge 1\}$ is said to be second-order stationary if $(X_1, X_{1+k}) \stackrel{D}{=} (X_i, X_{i+k})$ for all $i \ge 1$, $k \ge 1$.

Some assumptions
In this section, we list some assumptions.
The kernel density function $K(\cdot)$ is symmetric and satisfies conditions (2.1)–(2.3). The known density $f(\cdot)$ of $X_1$ has compact support $S_f$ and satisfies $\inf_{x \in S_f} f(x) > 0$. Let (2.2) and (2.3) be fulfilled. Moreover, for all $j \ge 1$, a uniform bound holds for $f_j(x_1, x_{j+1})$, where $f_j(x_1, x_{j+1})$ denotes the joint density of $(X_1, X_{j+1})$.
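For concreteness (this is our illustration, not one of the paper's numbered conditions), a standard kernel that is symmetric, compactly supported, and integrates to one is the Epanechnikov kernel in dimension $d = 1$:

```latex
% Epanechnikov kernel (d = 1): a symmetric kernel density function
% with compact support [-1, 1].
K(u) = \frac{3}{4}\bigl(1 - u^{2}\bigr)\,\mathbf{1}\{|u| \le 1\},
\qquad \int_{-1}^{1} K(u)\,du = 1,
\qquad K(-u) = K(u).
```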

Asymptotic normality of internal estimator m n (x) with independent and dependent data
In this section, we present results on the asymptotic normality of the internal estimator of the nonparametric regression model with independent and dependent data. Theorem 3.1 covers independent data, and Theorem 3.2 covers $\varphi$-mixing data.
is positive and continuous at the point $x$.
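For orientation (this display is our paraphrase of the typical shape such pointwise results take under the assumptions of Sect. 2, not the paper's verbatim statement), the conclusion (3.1) of Theorem 3.1 can be read as a convergence statement of the form:

```latex
% Typical form of a pointwise asymptotic normality result for a
% kernel regression estimator: the locally standardized internal
% estimator converges in distribution to a centered normal law.
\sqrt{n h^{d}}\,\bigl(\hat{m}_n(x) - m(x)\bigr)
  \xrightarrow{\;D\;} N\bigl(0, \sigma^{2}(x)\bigr),
\qquad \sigma^{2}(x) > 0 .
```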

Conclusion
Linton and Jacho-Chávez [6] obtained some asymptotic normality results for the internal estimator $\hat{m}_n(x)$ under independent data. Compared with Theorem 1 and Corollary 1 of Linton and Jacho-Chávez [6], our asymptotic normality results for the internal estimator $\hat{m}_n(x)$ in Theorems 3.1 and 3.2 take a relatively simple form. Meanwhile, we use Bernstein's big-block and small-block method and inequalities for $\varphi$-mixing random variables to investigate the asymptotic normality of the internal estimator $\hat{m}_n(x)$ for $m(x)$, and we also obtain the asymptotic normality result of (3.1). Obviously, $\alpha$-mixing is weaker than $\varphi$-mixing, but some moment inequalities for $\alpha$-mixing are more complicated than those for $\varphi$-mixing [16, 17]. For simplicity, we study the asymptotic normality of the internal estimator $\hat{m}_n(x)$ under $\varphi$-mixing and obtain the asymptotic normality result of Theorem 3.2.

Proof of Theorem 3.1. It is easy to see that the decomposition (5.1)–(5.3) holds. To prove (3.1), we apply (5.1)–(5.3), and it remains to show that the standardized estimator converges in distribution to $N(0, \sigma^2(x))$, where $\sigma^2(x)$ is defined by (3.1).
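To illustrate the kind of convergence that (3.1) asserts, the following Monte Carlo sketch (our own toy model, not the paper's proof: uniform design so $f \equiv 1$, a linear regression function so the symmetric-kernel smoothing bias vanishes at interior points, and a conjectured limiting variance computed by us) checks that the standardized internal estimator is approximately centered with the predicted spread:

```python
import numpy as np

rng = np.random.default_rng(7)

def internal_estimator(x0, X, Y, h):
    # Internal estimator (1.2) with known design density f = 1 on [0, 1].
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return np.mean(w * Y)

# Monte Carlo replications of sqrt(n h) * (m_hat(x0) - m(x0)).
# Design: X ~ Uniform[0, 1], m(x) = x, sigma_u = 0.2.
R, n, h, x0 = 300, 2000, 0.05, 0.5
stats = np.empty(R)
for r in range(R):
    X = rng.random(n)
    Y = X + 0.2 * rng.standard_normal(n)
    stats[r] = np.sqrt(n * h) * (internal_estimator(x0, X, Y, h) - x0)

# Conjectured limiting variance for this toy model (our computation):
# sigma^2(x0) = E(Y^2 | X = x0) * int K^2(u) du / f(x0), with
# int K^2 = 1 / (2 sqrt(pi)) for the Gaussian kernel and f = 1.
sigma2 = (x0**2 + 0.2**2) / (2.0 * np.sqrt(np.pi))
```

Across replications, the empirical mean of the standardized statistic should be near zero and its standard deviation near $\sigma(x_0)$, consistent with a centered normal limit.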