Adaptive wavelet estimations for the derivative of a density in GARCH-type model
Journal of Inequalities and Applications volume 2019, Article number: 106 (2019)
Abstract
Recently, Rao investigated estimation of the derivative of a density in the GARCH-type model \(S=\sigma ^{2}Z\) under \(L^{2}\)-risk (Commun. Stat., Theory Methods 46:2396–2410, 2017). This paper extends that work to \(L^{p}\)-risk (\(1\leq p<\infty \)). In addition, we provide a lower bound for this model, which shows that one of our convergence rates is nearly optimal.
1 Introduction
The GARCH-type model \(S=\sigma ^{2}Z\)
is considered in this paper, where \(\sigma ^{2}\) and Z are independent random variables. In practice, we always assume that the density function \(f_{\sigma ^{2}}\) of \(\sigma ^{2}\) is unknown with \(\operatorname{supp} f_{\sigma ^{2}}\subseteq [0,1]\), while the density of Z is known. We want to estimate the first derivative of \(f_{\sigma ^{2}}\) from n independent and identically distributed (i.i.d.) observations \(S_{1},\ldots ,S_{n}\) of S by wavelet methods; accordingly, we also need to assume that \(f_{\sigma ^{2}}\) is differentiable and \(f'_{\sigma ^{2}}\in L^{p}([0,1])\).
Non-parametric estimation of a density and of a regression function is widely investigated in the literature [12, 14, 16]. It is well known that estimation of the derivatives of a density is also important and interesting, since derivatives reflect monotonicity, concavity or convexity properties of the density. Asymptotic properties of kernel estimators for a density derivative were considered earlier in [15], while a wavelet-type estimator was discussed in [17].
As usual, we consider the \(L^{p}\) minimax risk (\(L^{p}\)-risk) [13],
where the infimum runs over all possible estimators \(\hat{f}_{n}\) and Σ is a class of functions. Hereafter, EX stands for the mathematical expectation of a random variable X, and \(\|f\|_{p}\) denotes the ordinary \(L^{p}\) norm.
In 2012, Chesneau and Doosti [9] investigated wavelet density estimation for a GARCH model under various dependence structures. The following year, Chesneau [8] studied wavelet estimation of a density in a GARCH-type model, obtaining upper bounds under \(L^{2}\)-risk. In 2017, Rao [17] considered the \(L^{2}\)-risk of wavelet estimators for the derivative of a density in a GARCH-type model over a Besov ball.
In this paper, we aim to extend Rao’s work [17] to \(L^{p}\)-risk (\(1\leq p<\infty \)). Moreover, we show that one of our convergence rates is nearly optimal. On the other hand, this work can also be seen as a generalization of the multiplicative censoring model. Vardi [18, 19] introduced the multiplicative censoring model, which unifies several models including non-parametric inference for renewal processes, non-parametric deconvolution problems and estimation of decreasing density functions. Recently, Abbaszadeh et al. [1] considered wavelet estimation of a density and its derivatives under \(L^{p}\)-risk (\(1\leq p<\infty \)) in the multiplicative censoring model. Density estimation for the multiplicative censoring model can also be found in [2, 3] and [6, 7].
This paper is organized as follows. Section 2 briefly describes the Besov ball and wavelet estimators. The theoretical results are given in Sect. 3. Some lemmas are provided in Sect. 4. The proofs are gathered in Sect. 5.
2 Besov ball and estimators
This section describes the Besov ball and wavelet estimators. First, we introduce the Besov ball and its wavelet characterizations.
2.1 Besov ball
Let \(W_{r}^{n}(\mathbb{R})\) be the Sobolev space with a non-negative integer n,
and \(\|f\|_{W_{r}^{n}}:=\|f\|_{r}+\|f^{(n)}\|_{r}\). Then \(L^{r}( \mathbb{R})\) can be considered as \(W_{r}^{0}(\mathbb{R})\). For \(1\leq r,q\leq \infty \) and \(s=n+\alpha \) with \(\alpha \in (0,1]\), a Besov space \(B_{r,q}^{s}(\mathbb{R})\) is defined by
with the norm \(\|f\|_{B_{r,q}^{s}}:=\|f\|_{W_{r}^{n}}+\|t^{-\alpha } \omega _{r}^{2}(f^{(n)},t)\|_{q}^{\ast }\). Here, \(\omega _{r}^{2}(f,t):= \sup_{|h|\leq t}\|f(\cdot +2h)-2f(\cdot +h)+f(\cdot )\|_{r}\) denotes the smoothness modulus of f and
When \(s>0\) and \(1\leq r,q,r' \leq \infty \), it is well known that
- (i) \(B_{r,q}^{s}\hookrightarrow B_{r,\infty }^{s}\hookrightarrow B_{\infty ,\infty }^{s-\frac{1}{r}}\) for \(s>\frac{1}{r}\);
- (ii) \(B_{r,q}^{s}\hookrightarrow B_{r',q}^{s'}\) for \(r\leq r'\) and \(s-\frac{1}{r}=s'-\frac{1}{r'}\);
- (iii) \(B_{\infty ,\infty }^{s}(\mathbb{R})\) is the classical Hölder space \(H^{s}(\mathbb{R})\),
where \(A\hookrightarrow B\) means that the Banach space A is continuously embedded in the Banach space B; more precisely, \(\|u\|_{B} \leq c_{1}\|u\|_{A}\) (\(u\in A\)) holds for some constant \(c_{1}>0\). By (i), \(B_{r,q}^{s}(\mathbb{R})\hookrightarrow L^{\infty }(\mathbb{R})\) for \(s>\frac{1}{r}\). All these notations and claims can be found in [13].
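For instance (a worked case, not taken from the paper), combining (i) and (iii) with \(s=1\) and \(r=2\) (so that \(s>\frac{1}{r}\)) gives

$$\begin{aligned} B_{2,q}^{1}\hookrightarrow B_{2,\infty }^{1}\hookrightarrow B_{\infty ,\infty }^{\frac{1}{2}}=H^{\frac{1}{2}}, \end{aligned}$$

so every element of \(B_{2,q}^{1}(\mathbb{R})\) is \(\frac{1}{2}\)-Hölder continuous and, in particular, bounded.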
In this paper, a Besov ball
is considered.
Let ϕ be a scaling function and ψ be the corresponding wavelet function such that
constitutes an orthonormal basis of \(L^{2}(\mathbb{R})\), where τ is a positive integer and \(g_{j,k}(x)=2^{\frac{j}{2}}g(2^{j}x-k)\) for \(g=\phi \) or ψ. Then, for \(h\in L^{2}(\mathbb{R})\),
with \(\alpha _{j,k}=\langle h,\phi _{j,k}\rangle \), \(\beta _{j,k}=\langle h,\psi _{j,k}\rangle \) and
In particular, when \(\phi ,\psi \) and h have compact supports, the cardinality of \(\varOmega _{j}\) satisfies \(|\varOmega _{j}|\leq C2^{j}\), where \(C>0\) is a constant depending only on the support lengths of \(\phi , \psi \) and h.
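For illustration (using the Haar wavelet rather than the Daubechies \(D_{2N}\) basis employed later, and with an arbitrary test function standing in for the density derivative), one can count numerically which translates at level j yield nonzero coefficients for a function supported on \([0,1]\):

```python
import math

def haar_psi(x):
    # Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
    if 0.0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1.0:
        return -1.0
    return 0.0

def wavelet_coeff(h, j, k, grid=2048):
    # <h, psi_{j,k}> with psi_{j,k}(x) = 2^{j/2} psi(2^j x - k),
    # approximated by a midpoint Riemann sum over supp psi_{j,k} = [k/2^j, (k+1)/2^j]
    a, b = k / 2**j, (k + 1) / 2**j
    dx = (b - a) / grid
    return sum(h(a + (i + 0.5) * dx)
               * 2**(j / 2) * haar_psi(2**j * (a + (i + 0.5) * dx) - k)
               for i in range(grid)) * dx

# A smooth function supported on [0, 1], standing in for f'_{sigma^2}
h = lambda x: math.sin(math.pi * x) if 0.0 <= x <= 1.0 else 0.0

j = 4
active = [k for k in range(-8, 2**j + 8) if abs(wavelet_coeff(h, j, k)) > 1e-12]
print(len(active), 2**j)   # at most C * 2^j translates survive
```

Only the translates whose support meets \([0,1]\) contribute, which is the content of the bound \(|\varOmega _{j}|\leq C2^{j}\).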
As usual, the orthogonal projection operator \(P_{j}\) is given by
When \(\phi \in C^{m}\) (and hence ψ) is compactly supported, identities (1) and (2) hold in the \(L^{p}\) sense for \(p\geq 1\) [13]. Here and throughout, \(C^{m}\) stands for the set of all m times continuously differentiable functions.
The following wavelet characterization theorem of Besov space is needed in Sect. 5.
Lemma 2.1
([13])
Let a scaling function \(\phi \in C^{m}\) be compactly supported. Then, for \(r,q\in [1,+\infty ], 0< s< m\) and \(h\in L^{r}(\mathbb{R})\), the following assertions are equivalent:
In each case,
Here and afterwards, \(A\lesssim B\) means \(A\leq c_{2}B\) for some constant \(c_{2}>0\); \(A\gtrsim B\) denotes \(B\lesssim A\); we also use \(A\thicksim B\) to stand for both \(A\lesssim B\) and \(A\gtrsim B\).
2.2 Estimators
This part introduces our wavelet estimators for the GARCH-type model \(S=\sigma ^{2}Z\) described earlier. Suppose \(Z=U_{1}U_{2}\cdots U_{v}\),
where v is a known positive integer and \(U_{1},\ldots ,U_{v}\) are i.i.d. random variables with the standard uniform distribution. Clearly, the density function of Z satisfies \(f_{Z}(z)=\frac{(-\ln z)^{v-1}}{(v-1)!}\) for \(z\in (0,1)\).
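As a sanity check (not part of the paper), the law of \(Z=U_{1}\cdots U_{v}\) can be verified by Monte Carlo: since \(-\ln Z\) is a sum of v standard exponentials, a standard computation gives the density \((-\ln z)^{v-1}/(v-1)!\) on \((0,1)\). A minimal Python sketch, in which the values of v, the sample size and the interval endpoints are arbitrary choices:

```python
import math, random

random.seed(0)
v, n = 3, 200_000          # v and the sample size are arbitrary toy choices

# Z = U_1 * ... * U_v with U_i i.i.d. Uniform(0,1); since -ln Z is Gamma(v, 1),
# a standard computation gives f_Z(z) = (-ln z)^{v-1} / (v-1)! on (0, 1)
def f_Z(z):
    return (-math.log(z))**(v - 1) / math.factorial(v - 1)

samples = [math.prod(random.random() for _ in range(v)) for _ in range(n)]

# Compare P(a <= Z <= b) empirically against the integral of f_Z over [a, b]
a, b = 0.1, 0.3
emp = sum(a <= z <= b for z in samples) / n
grid = 10_000
theo = sum(f_Z(a + (i + 0.5) * (b - a) / grid) for i in range(grid)) * (b - a) / grid
print(emp, theo)
```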
As in [8, 17], we assume that there exists a known constant \(C_{\ast }\) such that
where \(f_{s}\) is the density function of S.
For any \(x\in [0,1]\), \(h\in C^{k}([0,1])\), we define
and
where k is a positive integer. Then the following lemma holds.
Lemma 2.2
([8])
Let G and T be defined as above. Then
- (i) \(f_{{\sigma }^{2}}(x)=G_{v}(f_{s})(x)\), \(x\in [0,1]\);
- (ii) for any \(h\in C^{v}([0,1])\),
$$\begin{aligned} \int _{0}^{1}f_{{\sigma }^{2}}(x)h(x)\,dx= \int _{0}^{1}f_{s}(x)T_{v}(h) (x)\,dx. \end{aligned}$$
Next, we will introduce wavelet estimators, which can be found in Ref. [17]. Define
Hereafter, let ϕ be the Daubechies scaling function \(D_{2N}\) with N large and ψ be the corresponding wavelet function. It is well known that \(\phi ,\psi \in C^{v+1}\) for N large enough. Furthermore, the linear wavelet estimator is given by
where \(j_{0}\) is a positive integer which will be chosen later.
In order to get adaptivity, we need the thresholding method [4, 14, 17]. As in [17], let
with the constants \(\varUpsilon =c\gamma \), \(c>\max \{8C_{\min },1\}\) and \(\gamma \geq p(2v+3)\). Here,
with \(C_{\ast }\) given in (3). This special choice of c is used in Lemma 4.3, while \(\gamma \geq p(2v+3)\) is needed in the estimations of \(Ee_{1}\) and \(Ee_{3}\) (see Sect. 5). Here, we replace \(\lambda _{j}=2^{(v+1)j}\sqrt{\frac{\ln n}{n}}\) (see [17]) by \(\lambda _{j}=2^{(v+1)j}\sqrt{\frac{j}{n}}\), which is used in the proof of Lemma 4.3. In fact, \(\sqrt{\frac{j}{n}}\) is the universal threshold of classical adaptive density estimation (see [11]), and the two forms do not affect the convergence rates of our results.
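The hard-thresholding rule behind the nonlinear estimator can be sketched as follows, assuming only the threshold form \(\lambda _{j}=2^{(v+1)j}\sqrt{j/n}\); the numerical values of n, v and Υ below are toy choices that do not satisfy the paper's constraints on c and γ:

```python
import math

def hard_threshold(beta_hat, j, n, v, upsilon):
    # Keep the empirical coefficient only when it exceeds Upsilon * lambda_j,
    # with the level-dependent threshold lambda_j = 2^{(v+1) j} * sqrt(j / n)
    lam = 2**((v + 1) * j) * math.sqrt(j / n)
    return beta_hat if abs(beta_hat) > upsilon * lam else 0.0

# Toy numbers (they do NOT satisfy the paper's constraints on c and gamma):
# the same coefficient survives at a coarse level but is killed at a finer one
n, v, upsilon = 10_000, 1, 1.0
kept = hard_threshold(0.5, 2, n, v, upsilon)     # lambda_2 = 16 * sqrt(2/10^4) ~ 0.23
killed = hard_threshold(0.5, 4, n, v, upsilon)   # lambda_4 = 256 * sqrt(4/10^4) ~ 5.1
print(kept, killed)
```

Because \(\lambda _{j}\) grows with j, fine-scale coefficients must be much larger to survive, which is what makes the estimator adaptive.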
The nonlinear wavelet estimator is given by
with some positive integer τ.
3 Results
This section describes the results in this paper.
Theorem 3.1
Assume \(r\in [1,+\infty )\), \(q\in [1,+\infty ]\) and \(s>\frac{1}{r}\). Then, for \(p\in [1,+\infty )\), the estimator \(\widehat{f'_{{\sigma }^{2}}} ^{\mathrm{lin}}\) in (7) with \(2^{j_{0}}\sim n^{ \frac{1}{2s'+2(v+1)+1}}\) satisfies
where \(s'=s-(\frac{1}{r}-\frac{1}{p})_{+}\) and \(a_{+}=\max \{a,0\}\).
Remark 1
When \(p=2\) and \(r\geq 2\), the above estimation shows
which coincides with Theorem 5.1 of Ref. [17].
Remark 2
The condition \(s>\frac{1}{r}\) can be replaced by \(s'=s-(\frac{1}{r}- \frac{1}{p})_{+}>0\), because the former condition is only used to conclude \(B_{r,q}^{s}\hookrightarrow B_{p,q}^{s'}\) in the proof of Theorem 3.1.
The next theorem gives an adaptive upper bound estimation by the nonlinear wavelet estimator \(\widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}\) in (9).
Theorem 3.2
Let \(r\in [1,+\infty ), q\in [1,+\infty ]\) and \(s>\frac{1}{r}\). Then, for \(p\in [1,+\infty )\),
with \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}\).
Remark 3
When \(sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p\geq 0\), \(\alpha = \frac{s}{2s+2(v+1)+1}\). In particular, the above result with \(p=2\) coincides with Theorem 5.2 in [17].
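The exponent α and the regime indicator of Remark 3 are simple arithmetic; a sketch for checking which rate is active, in exact rational arithmetic (the sample parameter values are arbitrary):

```python
from fractions import Fraction as F

def alpha(s, r, p, v):
    # alpha = min{ s/(2s + 2(v+1) + 1), (s - 1/r + 1/p)/(2(s - 1/r) + 2(v+1) + 1) }
    a1 = F(s) / (2 * F(s) + 2 * (v + 1) + 1)
    a2 = (F(s) - F(1, r) + F(1, p)) / (2 * (F(s) - F(1, r)) + 2 * (v + 1) + 1)
    return min(a1, a2)

def omega(s, r, p, v):
    # Regime indicator from Remark 3: omega = s*r + (v + 3/2)*(r - p)
    return F(s) * r + F(2 * v + 3, 2) * (r - p)

# Sample values (arbitrary): s = 2, r = p = 2, v = 1 lies in the omega >= 0 regime,
# where alpha should reduce to s / (2s + 2(v+1) + 1)
print(alpha(2, 2, 2, 1), omega(2, 2, 2, 1))
```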
Remark 4
The condition \(s>\frac{1}{r}\) in Theorem 3.2 cannot be replaced by \(s'=s- (\frac{1}{r}-\frac{1}{p})_{+}>0\) for \(r\leq p\), since we need \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}\leq \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\leq \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(v+1)+1}\) in the estimation of \(A_{3}\) in Sect. 5.
Remark 5
Let m be a constant such that \(m>s\), and \(2^{j_{0}}\thicksim n^{ \frac{1}{2m+2(v+1)+1}}\). Then the amount of computation can be reduced effectively when the level τ in \(\widehat{f'_{{\sigma }^{2}}} ^{\mathrm{non}}\) is replaced by \(j_{0}\).
The following theorem shows a lower bound estimation.
Theorem 3.3
Assume \(s>0\) and \(r,q\in [1,+\infty ]\). Then, for any \(p\in [1,+ \infty )\),
where \(\widehat{f}'_{{\sigma }^{2}}\) runs over all possible estimators of \(f'_{{\sigma }^{2}}\).
Remark 6
Combining Theorem 3.3 with Theorem 3.2, we find that the convergence rate with exponent \(\frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s- \frac{1}{r})+2(v+1)+1}\) is nearly optimal. The optimality of the other rate will be studied in future work.
4 Some lemmas
This section is devoted to providing some lemmas, which are needed for the proofs of our theorems.
Lemma 4.1
([13])
Let g be a scaling function or a wavelet function with
Then there exists \(C>0\) such that, for \(\lambda =\{\lambda _{k}\} \in l^{p}(\mathbb{Z})\) and \(1\leq p\leq \infty \),
We need the well-known Rosenthal inequality [13] in order to prove Lemma 4.2.
Rosenthal’s inequality. Let \(X_{1},\ldots ,X_{n}\) be independent random variables and \(EX_{i}=0\). Then
where \(C_{p}>0\) is a constant.
Lemma 4.2
Let \(\widehat{\alpha }_{j,k}\) and \(\widehat{\beta }_{j,k}\) be given by (6). Then, for \(p\in (0,+\infty )\),
- (i) \(E\widehat{\alpha }_{j,k}=\alpha _{j,k}\), \(E\widehat{\beta }_{j,k}= \beta _{j,k}\);
- (ii) \(E|\widehat{\alpha }_{j,k}- {\alpha }_{j,k}|^{p} \lesssim n^{- \frac{p}{2}} 2^{(v+1)jp}\), \(E|\widehat{\beta }_{j,k}- {\beta }_{j,k}|^{p} \lesssim n^{-\frac{p}{2}} 2^{(v+1)jp}\),
where \(\alpha _{j,k}=\langle f'_{\sigma ^{2}}, \phi _{j,k}\rangle \) and \(\beta _{j,k}=\langle f'_{\sigma ^{2}}, \psi _{j,k}\rangle \).
Proof
(i) One only needs to prove \(E\widehat{\alpha }_{j,k}={\alpha }_{j,k}\); the proof of the second identity is the same. According to the definition of \(\widehat{\alpha }_{j,k}\) in (6), one gets
thanks to \(S_{1},\ldots ,S_{n}\) being i.i.d. On the other hand, \(f_{{\sigma }^{2}}(0)=f_{{\sigma }^{2}}(1)=0\) follows from \(\operatorname{supp} f_{{\sigma }^{2}}\subseteq [0,1]\) and the continuity of \(f_{{\sigma }^{2}}\). These facts with Lemma 2.2 imply
(ii) One proves only the first inequality; the second one is similar. By (6) and the result of (i),
Let \(X_{i}:=E[T_{v} (({\phi }_{j,k})')(S_{i})]-T_{v} (({\phi }_{j,k})')(S _{i})\). Then \(X_{1},\ldots ,X_{n}\) are i.i.d., \(EX_{i}=0\) and
According to (4),
Hence,
Clearly,
where \(C_{1}=2(v+2)!\sum_{u=0}^{v}\|\phi ^{(u+1)}\|_{\infty }\). On the other hand, \(\sup_{x\in [0,1]}f_{s}(x)\leq C_{\ast }\) in (3) and \(S_{i}\in [0,1]\) show that
This with (11) and \(S_{i}\in [0,1]\) leads to
Furthermore,
where \(C_{2}=(v+1)[(v+2)!]^{2}C_{\ast }\sum_{u=0}^{v}\|\phi ^{(u+1)}\|_{2}^{2}\).
When \(0< p<2\), by using (10), Jensen’s inequality and (13),
For the case of \(2\leq p<\infty \), according to Rosenthal’s inequality,
because of (12) and (13). Moreover, \(n^{1- \frac{p}{2}} 2^{(\frac{p}{2}-1)j}\leq 1\) follows from \(2^{j}\leq n\) and \(p\geq 2\). Then
due to (10). This completes the proof. □
Bernstein’s inequality [13] is necessary in the proof of Lemma 4.3.
Bernstein’s inequality. Let \(X_{1},\ldots ,X_{n}\) be i.i.d. random variables, \(EX_{i}=0\) and \(|X_{i}|\leq \|X\|_{\infty }\) (\(i=1, \ldots ,n\)). Then, for each \(\gamma >0\),
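The displayed form of the bound is omitted above; one standard version states \(P(|\frac{1}{n}\sum_{i}X_{i}|>t)\leq 2\exp (-\frac{nt^{2}}{2(EX_{1} ^{2}+t\|X\|_{\infty }/3)})\). A seeded Monte Carlo sketch with Rademacher variables, offered as an illustration rather than the paper's exact statement (all numerical values are toy choices):

```python
import math, random

random.seed(1)
n, trials, t = 100, 20_000, 0.3   # all toy values

# X_i = +/-1 with probability 1/2: E X_i = 0, E X_i^2 = 1, |X_i| <= 1
exceed = sum(abs(sum(random.choice((-1, 1)) for _ in range(n)) / n) > t
             for _ in range(trials))
emp_tail = exceed / trials

# One standard form of the bound (the paper's exact display is omitted above):
# P(|mean| > t) <= 2 exp(-n t^2 / (2 (sigma^2 + t ||X||_inf / 3)))
bound = 2 * math.exp(-n * t**2 / (2 * (1 + t / 3)))
print(emp_tail, bound)
```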
Lemma 4.3
Let \(\beta _{j,k}\) be the wavelet coefficient of \(f'_{\sigma ^{2}}\), let \(\widehat{\beta }_{j,k}\) be defined in (6) and \(\varUpsilon =c\gamma \). Then, for any \(j>0\) with \(j2^{j}\leq n\) and any \(\gamma \geq 1\), there exists a constant \(c\geq \max \{8C_{\min },1\}\) such that
where \(C_{\min }\) is given by (8).
Proof
According to the definition of \(\widehat{\beta }_{j,k}\) in (6), one obtains
where \(Y_{i}:=E[T_{v}(({\psi }_{j,k})')(S_{i})]-T_{v} (({\psi }_{j,k})')(S _{i})\).
where
Then Bernstein’s inequality tells that
On the other hand, combining (14) with \(\lambda _{j}=2^{(v+1)j}\sqrt{ \frac{j}{n}}\) and \(j2^{j}\leq n\), one shows
This with (15) and \(c\geq \max \{8C_{\min },1\}\) implies that
thanks to \(j>0\) and \(\gamma >1\).
Hence, it follows from (16)–(17) that
which is the conclusion of Lemma 4.3. □
At the end of this section, we list two more lemmas which will play key roles in the proof of Theorem 3.3.
Lemma 4.4
([5])
Let \(g\in B_{r,q}^{s}(\mathbb{R})\) and \(f(x)=g(bx)\ (b\geq 1)\). Then
To state the last lemma, we need a concept: let P and Q be two probability measures on \((\varOmega ,\aleph )\) with P absolutely continuous with respect to Q (denoted by \(P\ll Q\)). Then the Kullback–Leibler divergence is defined by
where p and q are density functions of \(P,Q\), respectively.
Lemma 4.5
(Fano’s lemma, [10])
Let \(( \varOmega , \aleph , P_{k})\) be probability measure spaces and \(A_{k}\in \aleph \), \(k=0,1,\ldots ,m\). If \(A_{k}\cap A_{v}=\emptyset \) for \(k\neq v\), then
where \(A^{c}\) stands for the complement of A and \(\kappa _{m}= \inf_{0\leq v\leq m} \frac{1}{m} \sum_{k\neq v}K(P_{k}, P _{v})\).
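A small sketch of the Kullback–Leibler divergence and of \(\kappa _{m}\) in the two-hypothesis case \(m=1\); the Bernoulli laws below are arbitrary examples, not the measures used in the proof:

```python
import math

def kl(p_probs, q_probs):
    # K(P, Q) = sum_x p(x) ln(p(x)/q(x)); requires P << Q (q > 0 wherever p > 0)
    return sum(p * math.log(p / q) for p, q in zip(p_probs, q_probs) if p > 0)

# Two nearby Bernoulli laws as a two-hypothesis (m = 1) example for Fano's lemma
P0 = [0.5, 0.5]
P1 = [0.6, 0.4]
# kappa_m = inf_v (1/m) sum_{k != v} K(P_k, P_v); for m = 1 it is a minimum of two terms
kappa_1 = min(kl(P1, P0), kl(P0, P1))
print(kappa_1)
```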
5 Proofs of results
In this section, we will prove our main results.
5.1 Proofs of upper bounds
We rewrite Theorem 3.1 as follows before giving its proof.
Theorem 5.1
Assume \(r\in [1,+\infty )\), \(q\in [1,+\infty ]\) and \(s>\frac{1}{r}\). Then, for \(p\in [1,+\infty )\), the estimator \(\widehat{f'_{{\sigma }^{2}}} ^{\mathrm{lin}}\) in (7) with \(2^{j_{0}}\sim n^{ \frac{1}{2s'+2(v+1)+1}}\) satisfies
where \(s'=s-(\frac{1}{r}-\frac{1}{p})_{+}\) and \(a_{+}=\max \{a,0\}\).
Proof
When \(r>p\), \(s'=s-(\frac{1}{r}-\frac{1}{p})_{+}=s\). Denote \(\varOmega = \operatorname{supp} (\widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}})\). Then
due to the Hölder inequality. Furthermore, according to Jensen’s inequality and \(\frac{p}{r}<1\),
By \(s'=s-\frac{1}{r}+\frac{1}{p}\leq s\) and \(r\leq p\), one finds \(B_{r,q}^{s}\hookrightarrow {B_{p,q}^{s'}}\). Hence,
By (18) and (19), one only needs to estimate \(\sup_{f'_{{\sigma }^{2}}\in B_{p,q}^{s'}(M)} E\| \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}}\|_{p}^{p}\). Note that
Combining \(2^{j_{0}}\sim n^{\frac{1}{2s'+2(v+1)+1}}\), \(f'_{{\sigma } ^{2}}\in B_{p,q}^{s'}(M)\) with Lemma 2.1, one concludes
On the other hand,
thanks to Lemma 4.1 and Lemma 4.2. Then it follows
from \(2^{j_{0}}\sim n^{\frac{1}{2s'+2(v+1)+1}}\). This with (20) and (21) leads to
Combining (23) with (18) and (19), one finds that
The proof is done. □
Now, the upper bound of nonlinear wavelet estimator (Theorem 3.2) is restated below.
Theorem 5.2
Let \(r\in [1,+\infty ), q\in [1,+\infty ]\) and \(s>\frac{1}{r}\). Then, for any \(p\in [1,+\infty )\), the estimator \(\widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}\) in (9) satisfies
where \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}\).
Proof
When \(r>p\), similar to (18),
Hence, it suffices to establish the result for \(r\leq p\). According to (1), (2) and (9), \(E\| \widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}-f'_{{\sigma }^{2}}\|_{p}^{p}\lesssim A_{1}+A_{2}+A_{3}\), where
Next, one proves \(A_{1}+A_{2}+A_{3}\lesssim (\ln n)^{p}(n^{-1}\ln n)^{ \alpha p}\) for \(f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)\) and \(r\leq p\).
By the same arguments as (22),
thanks to \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}<\frac{1}{2}\).
Note that \(f'_{{\sigma }^{2}}\in B_{r,q}^{s}\hookrightarrow B_{p,q} ^{s-\frac{1}{r}+\frac{1}{p}}\) for \(r\leq p\). This with Lemma 2.1 and \(2^{j_{1}}\thicksim (\frac{n}{\ln n})^{ \frac{1}{2(v+1)+1}}\) shows
because \(s>\frac{1}{r}\) and \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}\leq \frac{s- \frac{1}{r}+\frac{1}{p}}{2(v+1)+1}\).
To estimate \(A_{2}\), define
Then \(E\|\sum_{j={\tau }}^{j_{1}} \sum_{k\in \varOmega _{j}}( \widetilde{\beta }_{j,k}-{\beta }_{j,k}) {\psi }_{j,k}\|_{p}^{p} \lesssim (\ln n)^{p-1}\sum_{i=1}^{4}Ee_{i}\) by Lemma 4.1, where
By the Hölder inequality and \(\{k\in \widehat{B}_{j}\cap {B} _{j}^{c}\}\subseteq \{|\widehat{\beta }_{j,k}- {\beta }_{j,k}|>\varUpsilon \lambda _{j}/2\}\),
This with Lemma 4.2 and Lemma 4.3 shows that
where one uses \(\gamma >p(2v+3)\) and \(\alpha =\min \{ \frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s- \frac{1}{r})+2(v+1)+1}\}<\frac{1}{2}\).
From \(k\in \widehat{B}_{j}^{c}\cap {C}_{j}\), one finds \(| \widehat{\beta }_{j,k}- {\beta }_{j,k}|>\varUpsilon \lambda _{j}\) and \(|\beta _{j,k}|\leq |\widehat{\beta }_{j,k}- {\beta }_{j,k}|+| \widehat{\beta }_{j,k}|\leq 2|\widehat{\beta }_{j,k}- {\beta }_{j,k}|\). On the other hand, \(\{k\in \widehat{B}_{j}^{c}\cap {C}_{j}\}\subseteq \{|\widehat{\beta }_{j,k}- {\beta }_{j,k}|>\varUpsilon \lambda _{j}\} \subseteq \{|\widehat{\beta }_{j,k}- {\beta }_{j,k}|>\varUpsilon \lambda _{j}/2\}\). Therefore, it follows from the same arguments as \(Ee_{1}\) that
Next, one estimates \(Ee_{2}\) and \(Ee_{4}\). Define
Then, by \(s>\frac{1}{r}\) and \(2^{j_{1}}\thicksim (\frac{n}{\ln n})^{ \frac{1}{2(v+1)+1}}\),
When \(\omega \geq 0\), one writes down
According to (22),
Note that \(\frac{2|\beta _{jk}|}{\varUpsilon \lambda _{j}}\geq 1\) and \(\sum_{k}|\beta _{jk}|^{r}\lesssim 2^{-j(s+\frac{1}{2}-\frac{1}{r})r}\) for \(k\in B_{j}\), thanks to \(f'_{\sigma ^{2}}\in B_{r,q}^{s}(M)\) and Lemma 2.1. On the other hand, Lemma 4.2 tells us that \(E|\widehat{\beta }_{j,k}-{\beta }_{j,k}|^{p}\lesssim 2^{(v+1)jp} n ^{-\frac{p}{2}}\). These with \(\lambda _{j}=2^{(v+1)j}\sqrt{ \frac{j}{n}}\) and \(\omega =sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p\geq 0\) lead to
Combining (25)–(27) with \(2^{j_{0}^{*}}\sim (\frac{n}{ \ln n})^{\frac{1}{2s+2(v+1)+1}}\), one obtains
because of \(\omega \geq 0\) and \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}= \frac{s}{2s+2(v+1)+1}\).
When \(\omega =sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p<0\), \(\alpha = \min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s- \frac{1}{r})+2(v+1)+1}\}=\frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s- \frac{1}{r})+2(v+1)+1}\). Define \(p_{1}=(1-2\alpha )p\). Then \(r\leq p_{1}\leq p\) follows from
Moreover, \(\sum_{k}|\beta _{jk}|^{p_{1}}\leq (\sum_{k}|\beta _{jk}|^{r})^{\frac{p _{1}}{r}} \lesssim 2^{-j(s+\frac{1}{2}-\frac{1}{r})p_{1}} \) thanks to \(r\leq p_{1}\), \(f'_{\sigma ^{2}}\in B_{r,q}^{s}(M)\) and Lemma 2.1. This with (27) and \(\lambda _{j}=2^{(v+1)j}\sqrt{ \frac{j}{n}}\) shows
due to \(\frac{p-p_{1}}{2}=\alpha p\) and \(\frac{p}{2}-1+(v+1)(p-p_{1})-(s+ \frac{1}{2}-\frac{1}{r})p_{1}=0\).
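The two exponent identities invoked above, \(\frac{p-p_{1}}{2}=\alpha p\) and \(\frac{p}{2}-1+(v+1)(p-p_{1})-(s+\frac{1}{2}-\frac{1}{r})p_{1}=0\), hold identically for \(p_{1}=(1-2\alpha )p\); this can be checked mechanically in exact arithmetic (the parameter grid below is arbitrary):

```python
from fractions import Fraction as F

def check(s, r, p, v):
    # With alpha = (s - 1/r + 1/p)/(2(s - 1/r) + 2(v+1) + 1) and p1 = (1 - 2 alpha) p,
    # verify (p - p1)/2 = alpha p and p/2 - 1 + (v+1)(p - p1) - (s + 1/2 - 1/r) p1 = 0
    t = F(s) - F(1, r)
    alpha = (t + F(1, p)) / (2 * t + 2 * (v + 1) + 1)
    p1 = (1 - 2 * alpha) * p
    id1 = (p - p1) / 2 == alpha * p
    id2 = F(p, 2) - 1 + (v + 1) * (p - p1) - (t + F(1, 2)) * p1 == 0
    return id1 and id2

ok = all(check(s, r, p, v)
         for s in (1, 2, 3) for r in (1, 2) for p in (2, 3, 4) for v in (1, 2))
print(ok)
```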
Finally, one estimates \(Ee_{4}\). When \(\omega =sr+(v+\frac{3}{2})r-(v+ \frac{3}{2})p\geq 0\), \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}= \frac{s}{2s+2(v+1)+1}\). Furthermore,
where \(j_{0}^{*}\) is given by (24).
Since \(|\beta _{j,k}|\leq 2\varUpsilon \lambda _{j}\lesssim 2^{(v+1)j}(jn ^{-1})^{\frac{1}{2}}\) holds by \(k\in C_{j}^{c}\) and \(\lambda _{j}=2^{(v+1)j}\sqrt{ \frac{j}{n}}\), one concludes that
On the other hand,
due to \(|\beta _{j,k}|\leq 2\varUpsilon \lambda _{j}\) and \(r\leq p\).
Clearly, \(\|{\beta }_{j,\cdot }\|_{l_{r}} \lesssim 2^{-j(s-\frac{1}{r}+ \frac{1}{2})}\) by \(f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)\) and Lemma 2.1. This with \(\lambda _{j}=2^{(v+1)j}\sqrt{\frac{j}{n}}\) implies that
because \(\omega =sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p\geq 0\).
According to (28)–(30) and \(2^{j_{0}^{*}}\sim (\frac{n}{ \ln n})^{\frac{1}{2s+2(v+1)+1}}\), one obtains
by \(\alpha =\frac{s}{2s+2(v+1)+1}\).
For the case \(\omega =sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p<0\), let
where \(j_{1}^{*}\) is given by (24). Similar to (30),
thanks to \(\omega <0\).
To estimate \(Ee_{42}'\), one observes that \(\|\beta _{j,\cdot }\|_{l_{p}} \lesssim \|\beta _{j,\cdot }\|_{l_{r}}\lesssim 2^{-j(s-\frac{1}{r}+ \frac{1}{2})}\) by \(r\leq p\), \(f'_{\sigma ^{2}}\in B_{r,q}^{s}(M)\) and Lemma 2.1. Hence,
because of \(s>\frac{1}{r}\).
Combining (31)–(33) with \(2^{j_{1}^{*}}\sim (\frac{n}{ \ln n})^{\frac{1}{2(s-\frac{1}{r})+2(v+1)+1}}\), one knows
thanks to \(\omega <0\) and \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}=\frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\). This completes the proof of Theorem 5.2. □
5.2 Proof of lower bound
Finally, we are in a position to state and prove the lower bound estimation.
Theorem 5.3
Assume \(s>0\) and \(r,q\in [1,+\infty ]\), then, for any \(p\in [1,+ \infty )\),
where \(\widehat{f}'_{{\sigma }^{2}}\) runs over all possible estimators of \(f'_{{\sigma }^{2}}\).
Proof
It is sufficient to construct density functions \(h_{k}\) such that \(h'_{k}\in B_{r,q}^{s}(M)\) and
Define \(g(x)=Cm(x)\), where \(m\in C_{0}^{\infty }\) with \(\operatorname{supp} m \subseteq [0,1]\), \(\int _{\mathbb{R}} m(x)\,dx=0\) and \(C>0\) is a constant; here \(C_{0}^{\infty }\) stands for the set of all infinitely differentiable and compactly supported functions. Furthermore, one chooses a density function \(h_{0}\) satisfying \(h_{0}\in B_{r,q}^{s+1}( \frac{M}{2})\), \(\operatorname{supp} h_{0}\subseteq [0,1]\) and \(h_{0}(x)\geq M _{1}>0\) for \(x\in [\frac{1}{2},\frac{3}{4}]\).
Take \(a_{j}=2^{-j(s-\frac{1}{r}+\frac{1}{2}+v+1)}\) and
where G is given by (5) and \(g_{j,l}(x)=2^{\frac{j}{2}}g(2^{j}x-l)\) with \(l=2^{j-1}\).
First, one checks that \(h_{1}\) is a density function. Since \(\operatorname{supp} g_{j,l}\subseteq [\frac{1}{2},\frac{3}{4}]\) by \(\operatorname{supp} m\subseteq [0,1]\) and j large enough, one finds \(h_{1}(x)\geq 0\) for \(x\notin [\frac{1}{2},\frac{3}{4}]\). It is easy to calculate that
where \(C_{u}>0\) is a constant. Then, for \(x\in [\frac{1}{2}, \frac{3}{4}]\) and large j,
thanks to \(h_{0}(x)|_{[\frac{1}{2},\frac{3}{4}]}\geq M_{1}\) and \(a_{j}=2^{-j(s-\frac{1}{r}+\frac{1}{2}+v+1)}\). On the other hand, \(\int _{\mathbb{R}} g(x)\,dx=\int _{\mathbb{R}} Cm(x)\,dx=0\) and \(\operatorname{supp} g_{j,l}\subseteq [\frac{1}{2},\frac{1}{2}+2^{-j}]\) by \(\operatorname{supp} m\subseteq [0,1]\) and \(l=2^{j-1}\). This with \(m\in C_{0}^{\infty }\) and \(g(x)=Cm(x)\) shows
for any \(u\in \{1,\ldots , v\}\). Therefore,
From this with (36) one concludes that \(h_{1}\) is a density function.
Next, one shows \(h'_{0}, h'_{1}\in B_{r,q}^{s}(M)\). Clearly, \(h'_{0}\in B_{r,q}^{s}(M)\) by \(h_{0}\in B_{r,q}^{s+1}(\frac{M}{2})\). Hence, one only needs to prove \(h'_{1}\in B_{r,q}^{s}(M)\).
On the other hand, for each \(\tau \in \{0,\ldots ,u\}\),
because of Lemma 4.4. Combining this with \(l=2^{j-1}\) and \([2^{j}(x-\frac{1}{2}+\frac{1}{2})]^{u}= \sum_{\tau =0}^{u}C ^{\tau }_{u}[2^{j}(x-\frac{1}{2})]^{u-\tau }2^{-\tau }2^{\tau j}\), one obtains
Denote \(M'=\max \{C_{u},C_{u}^{\tau }: \tau =0,\ldots ,u; u=1,\ldots ,v\}\). Then there exists a constant \(C>0\) such that
thanks to \(g(x)=Cm(x)\) and \(m\in C_{0}^{\infty }\subseteq B_{r,q}^{s+1}\). This with (37) and (38) leads to
because \(h_{0}\in B_{r,q}^{s+1}(\frac{M}{2})\) and \(a_{j}=2^{-j(s- \frac{1}{r}+\frac{1}{2}+v+1)}\). Therefore, \(h_{1}\in B_{r,q}^{s+1}(M)\) and \(h'_{1}\in B_{r,q}^{s}(M)\).
According to (35),
where \(C'_{u}>0\) is a constant and \(l=2^{j-1}\). Hence,
On the other hand, by using \([2^{j}(x-\frac{1}{2}+\frac{1}{2})]^{u}= \sum_{\tau =0}^{u} C^{\tau }_{u}[2^{j}(x-\frac{1}{2})]^{u- \tau }2^{-\tau }2^{\tau j}\), one concludes that
and
Let \(x'=2^{j}(x-\frac{1}{2})\). Then there exists a constant \(C'>0\) such that
thanks to (39)–(41), \(g\in C_{0}^{\infty }\) and \(a_{j}=2^{-j(s-\frac{1}{r}+\frac{1}{2}+v+1)}\).
Define \(A_{k}=\{ \Vert \widehat{f}'_{{\sigma }^{2}}-h'_{k} \Vert _{p}<\frac{\delta _{j}}{2}\}\ (k\in \{0,1\})\). Then \(A_{0}\cap A_{1}= \emptyset \). According to Lemma 4.5,
where \(P_{f}^{n}\) stands for the probability measure corresponding to the density function \(f^{n}(x):=f(x_{1})f(x_{2})\cdots f(x_{n})\). Hence,
This with (42) implies
Next, one shows \(\kappa _{1}\leq C_{0}na_{j}^{2}\). Recall that
Then
due to \(f_{s_{0}}^{n}(x)=\prod_{j=1}^{n}f_{s_{0}}(x_{j})\) and \(f_{s_{1}}^{n}(x)=\prod_{j=1}^{n}f_{s_{1}}(x_{j})\). Since \(\ln u \leq u-1\) holds for \(u>0\), one knows
According to Chesneau’s work in Ref. [8], \(f_{s_{k}}(x)=\frac{1}{(v-1)!}\int _{x}^{1}(\ln y-\ln x)^{v-1}h_{k}(y) \frac{1}{y}\,dy\). Then
because of (34) and \(G_{v}(g_{j,l})(x)=-x[G_{v-1}(g_{j,l})(x)]'\).
By the formula of integration by parts,
because \(l=2^{j-1}\) and \((\ln y-\ln x)^{v-m}G_{v-m}(g_{j,l})(y)| _{x}^{\frac{1}{2}+2^{-j}}=0\) for any \(m\in \{1,\ldots ,v-1\}\). On the other hand, for each \(x\in [\frac{1}{2}, \frac{1}{2}+2^{-j}]\) and large j,
thanks to \(f_{s_{0}}(x)=\frac{1}{(v-1)!}\int _{x}^{1}(\ln y-\ln x)^{v-1}h _{0}(y) \frac{1}{y}\,dy\) and \(h_{0}(x)|_{[\frac{1}{2},\frac{3}{4}]} \geq M_{1}\). Combining with (44)–(47), one obtains
where \(C_{0}>0\) is a constant.
Choose \(2^{j}\thicksim n^{\frac{1}{2(s-\frac{1}{r})+2(v+1)+1}}\) and recall \(a_{j}=2^{-j(s-\frac{1}{r}+\frac{1}{2}+v+1)}\). Then
Substituting \(\delta _{j}\thicksim 2^{-j(s-\frac{1}{r}+\frac{1}{p})}\), \(2^{j}\thicksim n^{\frac{1}{2(s-\frac{1}{r})+2(v+1)+1}}\) into (43), one obtains
which is the desired conclusion. □
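The exponent bookkeeping at the end of the proof can be verified mechanically: with \(D=2(s-\frac{1}{r})+2(v+1)+1\), the choice \(2^{j}\thicksim n^{1/D}\) makes \(na_{j}^{2}\) bounded (exponent 0 in n) and gives \(\delta _{j}\thicksim n^{-(s-\frac{1}{r}+\frac{1}{p})/D}\). A quick exact-arithmetic sketch (the sample parameter values are arbitrary):

```python
from fractions import Fraction as F

def exponents(s, r, p, v):
    # D = 2(s - 1/r) + 2(v+1) + 1, a_j = 2^{-j(s - 1/r + 1/2 + v + 1)}, 2^j ~ n^{1/D}
    t = F(s) - F(1, r)
    D = 2 * t + 2 * (v + 1) + 1
    n_aj2_exp = 1 - 2 * (t + F(1, 2) + v + 1) / D   # exponent of n in n * a_j^2
    rate_exp = -(t + F(1, p)) / D                   # exponent of n in delta_j
    return n_aj2_exp, rate_exp

e1, e2 = exponents(2, 2, 4, 1)   # sample values; any admissible s, r, p, v work
print(e1, e2)
```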
References
Abbaszadeh, M., Chesneau, C., Doosti, H.: Multiplicative censoring: estimation of a density and its derivatives under the \(L^{p}\)-risk. REVSTAT 11(3), 255–276 (2013)
Andersen, K., Hansen, M.: Multiplicative censoring: density estimation by a series expansion approach. J. Stat. Plan. Inference 98, 137–155 (2001)
Asgharian, M., Carone, M., Fakoor, V.: Large-sample study of the kernel density estimators under multiplicative censoring. Ann. Stat. 40, 159–187 (2012)
Blumensath, T., Davies, M.: Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27, 265–274 (2009)
Cai, T.T.: Rates of convergence and adaption over Besov spaces under pointwise risk. Stat. Sin. 13, 881–902 (2003)
Chaubey, Y.P., Chesneau, C., Doosti, H.: On linear wavelet density estimation: some recent developments. J. Indian Soc. Agric. Stat. 65, 169–179 (2011)
Chaubey, Y.P., Chesneau, C., Doosti, H.: Adaptive wavelet estimation of a density from mixtures under multiplicative censoring. Statistics 49(3), 638–659 (2015)
Chesneau, C.: Wavelet estimation of a density in a GARCH-type model. Commun. Stat., Theory Methods 42, 98–117 (2013)
Chesneau, C., Doosti, H.: Wavelet linear density estimation for a GARCH model under various dependence structures. J. Iran. Stat. Soc. 11, 1–21 (2012)
DeVore, R., Kerkyacharian, G., Picard, D., Temlyakov, V.: On mathematical methods of learning. Found. Comput. Math. 6, 3–58 (2006). Special issue for S. Smale
Donoho, D.L., Johnstone, M.I., Kerkyacharian, G., Picard, D.: Density estimation by wavelet thresholding. Ann. Stat. 24(2), 508–539 (1996)
Guo, H.J., Liu, Y.M.: Convergence rates of multivariate regression estimators with errors-in-variables. Numer. Funct. Anal. Optim. 38(12), 1564–1588 (2017)
Härdle, W., Kerkyacharian, G., Picard, D., Tsybakov, A.: Wavelet, Approximation and Statistical Applications. Springer, New York (1998)
Li, R., Liu, Y.M.: Wavelet optimal estimations for a density with some additive noises. Appl. Comput. Harmon. Anal. 36, 416–433 (2014)
Prakasa Rao, B.L.S.: Nonparametric Functional Estimation. Academic Press, Orlando (1983)
Prakasa Rao, B.L.S.: Nonparametric function estimation: an overview. In: Ghosh, S. (ed.) Asymptotics, Nonparametrics and Time Series, pp. 461–509. Marcel Dekker, New York (1999)
Prakasa Rao, B.L.S.: Wavelet estimation for derivative of a density in a GARCH-type model. Commun. Stat., Theory Methods 46, 2396–2410 (2017)
Vardi, Y.: Multiplicative censoring, renewal process, deconvolution and decreasing density: nonparametric estimation. Biometrika 76, 751–761 (1989)
Vardi, Y., Zhang, C.H.: Large sample study of empirical distributions in a random multiplicative censoring model. Ann. Stat. 20, 1022–1039 (1992)
Acknowledgements
The authors would like to thank Professor Youming Liu for his important comments and suggestions.
Availability of data and materials
Not applicable.
Funding
This paper is supported by the National Natural Science Foundation of China (No. 11771030) and the Beijing Natural Science Foundation (No. 1172001).
Contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Cao, K., Wei, J. Adaptive wavelet estimations for the derivative of a density in GARCH-type model. J Inequal Appl 2019, 106 (2019). https://doi.org/10.1186/s13660-019-2056-0