Adaptive wavelet estimations for the derivative of a density in GARCH-type model

Abstract

Recently, Rao investigated the estimation of the derivative of a density in the GARCH-type model \(S=\sigma ^{2}Z\) under \(L^{2}\)-risk (Commun. Stat., Theory Methods 46:2396–2410, 2017). This paper extends those estimations to \(L^{p}\)-risk (\(1\leq p<\infty \)). In addition, we provide a lower bound for this model, which shows that one of our convergence rates is nearly-optimal.

1 Introduction

The GARCH-type model

$$\begin{aligned} S=\sigma ^{2}Z \end{aligned}$$

is considered in this paper, where \(\sigma ^{2}\) and Z are independent random variables. In practice, we assume that the density function \(f_{\sigma ^{2}}\) of \(\sigma ^{2}\) is unknown with \(\operatorname{supp} f_{\sigma ^{2}}\subseteq [0,1]\), while the density of Z is known. We want to estimate the first derivative of \(f_{\sigma ^{2}}\) from n independent and identically distributed (i.i.d.) observed samples \(S_{1},\ldots ,S_{n}\) of S by wavelet methods; accordingly, we also need to assume that \(f_{\sigma ^{2}}\) is differentiable and \(f'_{\sigma ^{2}}\in L^{p}([0,1])\).

Non-parametric estimation of densities and regression functions has been widely investigated in the literature [12, 14, 16]. It is well known that estimating the derivatives of a density is also important and interesting, since derivatives reflect monotonicity, concavity or convexity properties of density functions. Asymptotic properties of kernel estimators for a density derivative were considered earlier in [15], while a wavelet-type estimator was discussed in [17].

As usual, we consider the \(L^{p}\) minimax risk (\(L^{p}\)-risk) [13],

$$\begin{aligned} \inf_{\hat{f}_{n}}\sup_{f_{\sigma ^{2}}\in \varSigma } E \Vert \hat{f_{n}}-f _{\sigma ^{2}} \Vert _{p}, \end{aligned}$$

where the infimum runs over all possible estimators \(\hat{f}_{n}\) and Σ is a class of functions. Here and after, EX stands for the mathematical expectation of a random variable X and \(\|f\|_{p}\) denotes the ordinary \(L^{p}\) norm.

In 2012, Chesneau and Doosti [9] investigated the wavelet estimation of density for GARCH model under various dependence structures. Next year, Chesneau [8] studied the wavelet estimation of a density in GARCH-type model leading to upper bounds under \(L^{2}\)-risk. In 2017, Rao [17] considered \(L^{2}\)-risk for the derivative of a density in GARCH-type model over a Besov ball by wavelets.

In this paper, we extend Rao’s work [17] to \(L^{p}\)-risk \((1\leq p<\infty )\). Moreover, we show that one of our convergence rates is nearly-optimal. On the other hand, this work can also be seen as a generalization of the multiplicative censoring model. Vardi [18, 19] introduced the multiplicative censoring model, which unifies several models including non-parametric inference for renewal processes, non-parametric deconvolution problems and estimation of decreasing density functions. Recently, Abbaszadeh et al. [1] considered the wavelet estimation of a density and its derivatives under \(L^{p}\)-risk (\(1\leq p<\infty \)) in the multiplicative censoring model. Density estimation for the multiplicative censoring model can also be found in [2, 3] and [6, 7].

This paper is organized as follows. Section 2 briefly describes the Besov ball and wavelet estimators. The theoretical results are given in Sect. 3. Some lemmas are provided in Sect. 4. The proofs are gathered in Sect. 5.

2 Besov ball and estimators

This section describes the Besov ball and wavelet estimators. First, we introduce the Besov ball and its wavelet characterizations.

2.1 Besov ball

Let \(W_{r}^{n}(\mathbb{R})\) be the Sobolev space with a non-negative integer n,

$$\begin{aligned} W_{r}^{n}(\mathbb{R}):=\bigl\{ f: f\in L^{r}( \mathbb{R}), f^{(n)}\in L^{r}( \mathbb{R})\bigr\} , \end{aligned}$$

and \(\|f\|_{W_{r}^{n}}:=\|f\|_{r}+\|f^{(n)}\|_{r}\). Then \(L^{r}( \mathbb{R})\) can be considered as \(W_{r}^{0}(\mathbb{R})\). For \(1\leq r,q\leq \infty \) and \(s=n+\alpha \) with \(\alpha \in (0,1]\), a Besov space \(B_{r,q}^{s}(\mathbb{R})\) is defined by

$$\begin{aligned} B_{r,q}^{s}(\mathbb{R}):=\bigl\{ f: f\in W_{r}^{n}( \mathbb{R}), \bigl\Vert t^{- \alpha }\omega _{r}^{2} \bigl(f^{(n)},t\bigr) \bigr\Vert _{q}^{\ast }< \infty \bigr\} \end{aligned}$$

with the norm \(\|f\|_{B_{r,q}^{s}}:=\|f\|_{W_{r}^{n}}+\|t^{-\alpha } \omega _{r}^{2}(f^{(n)},t)\|_{q}^{\ast }\). Here, \(\omega _{r}^{2}(f,t):= \sup_{|h|\leq t}\|f(\cdot +2h)-2f(\cdot +h)+f(\cdot )\|_{r}\) denotes the second-order smoothness modulus of f and

$$\begin{aligned} \Vert h \Vert _{q}^{\ast }:= \textstyle\begin{cases} (\int _{0}^{+\infty } \vert h(t) \vert ^{q}\frac{dt}{t})^{\frac{1}{q}} & \text{if $1\leq q< \infty $;} \\ \mathop{\mathrm{ess sup}} _{t} \vert h(t) \vert & \text{if $q=\infty $.} \end{cases}\displaystyle \end{aligned}$$

When \(s>0\) and \(1\leq r,q,r' \leq \infty \), it is well known that

  1. (i)

    \(B_{r,q}^{s}\hookrightarrow B_{r,\infty }^{s}\hookrightarrow B_{\infty ,\infty }^{s-\frac{1}{r}}\) for \(s>\frac{1}{r}\);

  2. (ii)

    \(B_{r,q}^{s}\hookrightarrow B_{r',q}^{s'}\) for \(r\leq r'\) and \(s-\frac{1}{r}=s'-\frac{1}{r'}\);

  3. (iii)

    \(B_{\infty ,\infty }^{s}(\mathbb{R})\) is the classical Hölder space \(H^{s} (\mathbb{R})\),

where \(A\hookrightarrow B\) stands for a Banach space A continuously embedded in another Banach space B. More precisely, \(\|u\|_{B} \leq c_{1}\|u\|_{A}\) (\(u\in A\)) holds for some constant \(c_{1}>0\). By (i), \(B_{r,q}^{s}(\mathbb{R})\hookrightarrow L^{\infty }(\mathbb{R})\) for \(s>\frac{1}{r}\). All these notations and claims can be found in [13].

In this paper, a Besov ball

$$\begin{aligned} B_{r,q}^{s}(M)=\bigl\{ f\in B_{r,q}^{s}( \mathbb{R}): \Vert f \Vert _{B_{r,q}^{s}} \leq M\bigr\} ,\quad M>0, \end{aligned}$$

is considered.

Let ϕ be a scaling function and ψ be the corresponding wavelet function such that

$$\begin{aligned} \{\phi _{\tau ,k}, \psi _{j,k}: j\geq \tau , k\in \mathbb{Z}\} \end{aligned}$$

constitutes an orthonormal basis of \(L^{2}(\mathbb{R})\), where τ is a positive integer and \(g_{j,k}(x)=2^{\frac{j}{2}}g(2^{j}x-k)\) for \(g=\phi \) or ψ. Then, for \(h\in L^{2}(\mathbb{R})\),

$$\begin{aligned} h=\sum_{k\in \varOmega _{\tau }}\alpha _{\tau ,k} \phi _{\tau ,k} +\sum_{j= \tau }^{\infty }\sum _{k\in \varOmega _{j}}\beta _{j,k}\psi _{j,k} \end{aligned}$$
(1)

with \(\alpha _{j,k}=\langle h,\phi _{j,k}\rangle \), \(\beta _{j,k}=\langle h,\psi _{j,k}\rangle \) and

$$\begin{aligned} \varOmega _{j}=\{k\in \mathbb{Z}:\operatorname{supp} h\cap \operatorname{supp} \phi _{j,k}\neq \emptyset \}\cup \{k\in \mathbb{Z}:\operatorname{supp} h\cap \operatorname{supp} \psi _{j,k}\neq \emptyset \}. \end{aligned}$$

In particular, when \(\phi ,\psi \) and h have compact supports, the cardinality of \(\varOmega _{j}\) satisfies \(|\varOmega _{j}|\leq C2^{j}\), where \(C>0\) is a constant depending only on the support lengths of \(\phi , \psi \) and h.
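As a sanity check of this cardinality bound (our own illustration, not part of the original argument), the index set \(\varOmega _{j}\) can be enumerated explicitly when ϕ is supported on a hypothetical interval \([0,L]\) and h on \([0,1]\):

```python
def omega_j(j: int, L: int) -> list:
    """Translates k for which supp phi_{j,k} = [k/2^j, (k+L)/2^j]
    intersects [0, 1]; this requires -L < k < 2^j."""
    return list(range(-L + 1, 2 ** j))

# |Omega_j| = 2^j + L - 1 <= L * 2^j, in line with |Omega_j| <= C 2^j.
```

For Daubechies’ \(D_{2N}\) one may take \(L=2N-1\), so the constant C indeed depends only on the support lengths, as claimed above.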

As usual, the orthogonal projection operator \(P_{j}\) is given by

$$\begin{aligned} P_{j}h=\sum_{k\in \varOmega _{j}}\alpha _{j,k}\phi _{j,k}. \end{aligned}$$
(2)

When \(\phi \in C^{m}\) is compactly supported (and hence so is ψ), identities (1) and (2) hold in the \(L^{p}\) sense for \(p\geq 1\) [13]. Here and throughout, \(C^{m}\) stands for the set consisting of all m times continuously differentiable functions.

The following wavelet characterization theorem of Besov space is needed in Sect. 5.

Lemma 2.1

([13])

Let a scaling function \(\phi \in C^{m}\) be compactly supported. Then, for \(r,q\in [1,+\infty ], 0< s< m\) and \(h\in L^{r}(\mathbb{R})\), the following assertions are equivalent:

$$\begin{aligned} &\mathrm{(i)}\quad h\in B_{r,q}^{s}(\mathbb{R}); \qquad \mathrm{(ii)}\quad 2^{js} \Vert P_{j}h-h \Vert _{r}\in l_{q}; \\ &\mathrm{(iii)} \quad \Vert \alpha _{j_{0},\cdot } \Vert _{l_{r}}+ \bigl\Vert \bigl\{ 2^{j(s+\frac{1}{2} - \frac{1}{r})} \Vert \beta _{j,\cdot } \Vert _{l_{r}} \bigr\} _{j\geq j_{0}} \bigr\Vert _{l _{q}}< \infty. \end{aligned}$$

In each case,

$$\begin{aligned} \Vert h \Vert _{B_{r,q}^{s}}\thicksim \Vert h \Vert _{s,r,q}:= \Vert \alpha _{j_{0},\cdot } \Vert _{l_{r}}+ \bigl\Vert \bigl\{ 2^{j(s+\frac{1}{2} -\frac{1}{r})} \Vert \beta _{j,\cdot } \Vert _{l_{r}}\bigr\} _{j\geq j_{0}} \bigr\Vert _{l_{q}}. \end{aligned}$$

Here and afterwards, \(A\lesssim B\) means \(A\leq c_{2}B\) for some constant \(c_{2}>0\); \(A\gtrsim B\) denotes \(B\lesssim A\); we also use \(A\thicksim B\) to stand for both \(A\lesssim B\) and \(A\gtrsim B\).

2.2 Estimators

This part introduces our wavelet estimators for the GARCH-type model \(S=\sigma ^{2}Z\) described earlier. Suppose

$$\begin{aligned} Z=\prod_{i=1}^{v}U_{i}, \end{aligned}$$

where v is a known positive integer and \(U_{1},\ldots ,U_{v}\) are i.i.d. random variables with standard uniform distribution. Clearly, the density function of Z satisfies

$$\begin{aligned} f_{Z}(z)=\frac{1}{(v-1)!}(-\ln z)^{v-1},\quad 0< z\leq 1. \end{aligned}$$
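This formula can be verified numerically; the following Python sketch (an illustration with the arbitrary choice \(v=3\)) checks that \(f_{Z}\) integrates to one and that simulated products of uniforms have mean \(EZ=2^{-v}\):

```python
import math
import random

def f_Z(z: float, v: int) -> float:
    """Density of Z = U_1 * ... * U_v for i.i.d. Uniform(0,1) factors."""
    return (-math.log(z)) ** (v - 1) / math.factorial(v - 1)

v = 3  # illustrative choice

# Midpoint Riemann sum of f_Z over (0, 1]: should be close to 1.
m = 100000
total = sum(f_Z((i + 0.5) / m, v) for i in range(m)) / m

# Monte Carlo mean of Z: should be close to E[Z] = (1/2)^v.
random.seed(0)
n = 200000
mean_Z = sum(math.prod(random.random() for _ in range(v)) for _ in range(n)) / n
```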

As in [8, 17], we assume that there exists a known constant \(C_{\ast }\) such that

$$\begin{aligned} \sup_{x\in [0,1]}f_{s}(x)\leq C_{\ast }, \end{aligned}$$
(3)

where \(f_{s}\) is the density function of S.

For any \(x\in [0,1]\) and \(h\in C^{k}([0,1])\), we define

$$\begin{aligned} T(h) (x)=\bigl(xh(x)\bigr)'=h(x)+xh'(x),\qquad T_{k}(h) (x)=T\bigl(T_{k-1}(h)\bigr) (x) \end{aligned}$$
(4)

and

$$\begin{aligned} G(h) (x)=-xh'(x),\qquad G_{k}(h) (x)=G\bigl(G_{k-1}(h)\bigr) (x), \end{aligned}$$
(5)

where \(k\geq 2\), \(T_{1}:=T\) and \(G_{1}:=G\). Then the following lemma holds.

Lemma 2.2

([8])

Let G and T be defined as above. Then

  1. (i)

    \(f_{{\sigma }^{2}}(x)=G_{v}(f_{s})(x), x\in [0,1]\);

  2. (ii)

    For any \(h\in C^{v}([0,1])\),

    $$\begin{aligned} \int _{0}^{1}f_{{\sigma }^{2}}(x)h(x)\,dx= \int _{0}^{1}f_{s}(x)T_{v}(h) (x)\,dx. \end{aligned}$$
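Part (ii) is the identity \(E[h(\sigma ^{2})]=E[T_{v}(h)(S)]\) in disguise, which can be checked by simulation. Below is a hedged Python sketch with our own illustrative choices \(\sigma ^{2}\sim U(0,1)\), \(v=2\) and \(h(x)=x^{2}\), for which \(T(x^{2})=3x^{2}\) and hence \(T_{2}(h)(x)=9x^{2}\):

```python
import random
random.seed(1)

v, n = 2, 200000
lhs = rhs = 0.0
for _ in range(n):
    sigma2 = random.random()                 # illustrative: sigma^2 ~ U(0,1)
    Z = random.random() * random.random()    # product of v = 2 uniforms
    S = sigma2 * Z
    lhs += sigma2 ** 2                       # h(sigma^2) with h(x) = x^2
    rhs += 9 * S ** 2                        # T_2(h)(S) = 9 S^2
lhs /= n
rhs /= n
# Both sides estimate E[(sigma^2)^2] = 1/3.
```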

Next, we will introduce wavelet estimators, which can be found in Ref. [17]. Define

$$\begin{aligned} \widehat{\alpha }_{j_{0},k}=-{\frac{1}{n}} {\sum _{i=1}^{n}} T _{v} \bigl(({\phi }_{j_{0},k})'\bigr) (S_{i}) \quad\text{and}\quad \widehat{\beta }_{j,k}=-{\frac{1}{n}} {\sum _{i=1}^{n}} T_{v} \bigl(( {\psi }_{j,k})'\bigr) (S_{i}). \end{aligned}$$
(6)

Here and after, let ϕ be Daubechies’ scaling function \(D_{2N}\) and ψ be the corresponding wavelet function. It is well known that \(\phi ,\psi \in C^{v+1}\) for N large enough. Furthermore, the linear wavelet estimator is given by

$$\begin{aligned} \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}= \sum _{k\in \varOmega _{j_{0}}} \widehat{\alpha }_{j_{0},k}{\phi }_{j_{0},k}, \end{aligned}$$
(7)

where \(j_{0}\) is a positive integer which will be chosen later.

In order to get adaptivity, we need the thresholding method [4, 14, 17]. As in [17], let

$$\begin{aligned} 2^{j_{1}}\thicksim \biggl(\frac{n}{\ln n}\biggr)^{\frac{1}{2(v+1)+1}},\qquad \lambda _{j}=2^{(v+1)j}\sqrt{\frac{j}{n}},\qquad \widetilde{\beta }_{j,k}= \widehat{\beta }_{j,k}I\bigl\{ \vert \widehat{ \beta }_{j,k} \vert \geq \varUpsilon \lambda _{j}\bigr\} \end{aligned}$$

with the constants \(\varUpsilon =c\gamma \), \(c\geq \max \{8C_{\min },1\}\) and \(\gamma \geq p(2v+3)\). Here,

$$\begin{aligned} C_{\min }=(v+2)!\sum_{u=0}^{v} \bigl[(v+1) (v+2)!C_{\ast } \bigl\Vert \psi ^{(u+1)} \bigr\Vert _{2}^{2}+2 \bigl\Vert \psi ^{(u+1)} \bigr\Vert _{\infty } \bigr] \end{aligned}$$
(8)

with \(C_{\ast }\) given in (3). This special choice of c is used in Lemma 4.3, while \(\gamma \geq p(2v+3)\) is needed in the estimations of \(Ee_{1}\) and \(Ee_{3}\) (see Sect. 5). Here, we replace \(\lambda _{j}=2^{(v+1)j}\sqrt{\frac{\ln n}{n}}\) (see [17]) by \(\lambda _{j}=2^{(v+1)j}\sqrt{\frac{j}{n}}\), which is used in the proof of Lemma 4.3. In fact, the universal threshold of classical adaptive density estimation is \(\sqrt{ \frac{j}{n}}\) (see [11]), and the two forms do not affect the convergence rates of our results.
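To make the rule concrete, here is a minimal Python sketch of the hard-thresholding step (our own illustration; Υ is passed as a plain number):

```python
import math

def hard_threshold(beta_hat: float, j: int, n: int, v: int, upsilon: float) -> float:
    """Return beta_hat if |beta_hat| >= Upsilon * lambda_j, else 0,
    with lambda_j = 2^{(v+1)j} * sqrt(j / n)."""
    lam = 2 ** ((v + 1) * j) * math.sqrt(j / n)
    return beta_hat if abs(beta_hat) >= upsilon * lam else 0.0
```

With \(v=1\), \(j=2\), \(n=10^{6}\) and \(\varUpsilon =1\), the threshold \(\varUpsilon \lambda _{j}=2^{4}\sqrt{2\cdot 10^{-6}}\approx 0.023\), so a coefficient of size 0.5 survives while one of size 0.01 is set to zero.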

The nonlinear wavelet estimator is given by

$$\begin{aligned} \widehat{f'_{\sigma ^{2}}}^{\mathrm{non}}=\sum _{k\in \varOmega _{\tau }} \widehat{\alpha }_{\tau ,k} {\phi }_{\tau ,k}+{\sum_{j={\tau }} ^{j_{1}}} \sum _{k\in \varOmega _{j}}\widetilde{\beta } _{j,k} {\psi }_{j,k} \end{aligned}$$
(9)

with some positive integer τ.

3 Results

This section states the main results of this paper.

Theorem 3.1

Assume \(r\in [1,+\infty )\), \(q\in [1,+\infty ]\) and \(s>\frac{1}{r}\). Then, for \(p\in [1,+\infty )\), the estimator \(\widehat{f'_{{\sigma }^{2}}} ^{\mathrm{lin}}\) in (7) with \(2^{j_{0}}\sim n^{ \frac{1}{2s'+2(v+1)+1}}\) satisfies

$$\begin{aligned} {\sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)}} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \lesssim n^{-\frac{s'p}{2s'+2(v+1)+1}}, \end{aligned}$$

where \(s'=s-(\frac{1}{r}-\frac{1}{p})_{+}\) and \(a_{+}=\max \{a,0\}\).

Remark 1

When \(p=2\) and \(r\geq 2\), the above estimation shows

$$\begin{aligned} {\sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)}} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{2}^{2} \lesssim n^{-\frac{2s}{2s+2(v+1)+1}}, \end{aligned}$$

which coincides with Theorem 5.1 of Ref. [17].

Remark 2

The condition \(s>\frac{1}{r}\) can be replaced by \(s'=s-(\frac{1}{r}- \frac{1}{p})_{+}>0\), because the former condition is only used to conclude \(B_{r,q}^{s}\hookrightarrow B_{p,q}^{s'}\) in the proof of Theorem 3.1.

The next theorem gives an adaptive upper bound estimation by the nonlinear wavelet estimator \(\widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}\) in (9).

Theorem 3.2

Let \(r\in [1,+\infty ), q\in [1,+\infty ]\) and \(s>\frac{1}{r}\). Then, for \(p\in [1,+\infty )\),

$$\begin{aligned} {\sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)}} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \lesssim (\ln n)^{p} \bigl(n^{-1}\ln n\bigr)^{\alpha p} \end{aligned}$$

with \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}\).

Remark 3

When \(sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p\geq 0\), \(\alpha = \frac{s}{2s+2(v+1)+1}\). In particular, the above result with \(p=2\) coincides with Theorem 5.2 in [17].
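As an aside (our own illustration), the active term of α is easy to evaluate numerically; the small helper below, with hypothetical parameter values, reproduces the boundary described in this remark:

```python
def rate_exponent(s: float, r: float, p: float, v: int) -> float:
    """alpha = min{ s / (2s + 2(v+1) + 1),
                    (s - 1/r + 1/p) / (2(s - 1/r) + 2(v+1) + 1) }."""
    a1 = s / (2 * s + 2 * (v + 1) + 1)
    a2 = (s - 1 / r + 1 / p) / (2 * (s - 1 / r) + 2 * (v + 1) + 1)
    return min(a1, a2)
```

For example, with \(s=2\), \(r=p=2\), \(v=1\) the first term \(\frac{2}{9}\) is active, while for \(s=1\), \(r=1\), \(p=4\), \(v=1\) the second term \(\frac{1}{20}\) is the smaller one.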

Remark 4

The condition \(s>\frac{1}{r}\) in Theorem 3.2 cannot be replaced by \(s'=s- (\frac{1}{r}-\frac{1}{p})_{+}>0\) for \(r\leq p\), since we need \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}\leq \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\leq \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(v+1)+1}\) for the estimation of \(A_{3}\) in Sect. 5.

Remark 5

Let m be a constant such that \(m>s\), and take \(2^{j_{0}}\thicksim n^{ \frac{1}{2m+2(v+1)+1}}\). Then the computational cost can be reduced effectively when the level τ in \(\widehat{f'_{{\sigma }^{2}}} ^{\mathrm{non}}\) is replaced by \(j_{0}\).

The following theorem shows a lower bound estimation.

Theorem 3.3

Assume \(s>0\) and \(r,q\in [1,+\infty ]\). Then, for any \(p\in [1,+ \infty )\),

$$\begin{aligned} \inf_{\widehat{f}'_{{\sigma }^{2}}}\sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)} E \bigl\Vert \widehat{f}'_{{\sigma } ^{2}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \gtrsim n^{-\frac{(s-\frac{1}{r}+ \frac{1}{p})p}{2(s-\frac{1}{r})+2(v+1)+1}}, \end{aligned}$$

where \(\widehat{f}'_{{\sigma }^{2}}\) runs over all possible estimators of \(f'_{{\sigma }^{2}}\).

Remark 6

Combining Theorem 3.3 with Theorem 3.2, we find that the convergence rate \(\frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s- \frac{1}{r})+2(v+1)+1}\) is nearly-optimal. As for the other rate, we will study it in the future.

4 Some lemmas

This section is devoted to providing some lemmas, which are needed for the proofs of our theorems.

Lemma 4.1

([13])

Let g be a scaling function or a wavelet function with

$$\begin{aligned} \sup_{x\in \mathbb{R}}\sum_{k} \bigl\vert g(x-k) \bigr\vert < +\infty. \end{aligned}$$

Then there exists \(C>0\) such that, for \(\lambda =\{\lambda _{k}\} \in l^{p}(\mathbb{Z})\) and \(1\leq p\leq \infty \),

$$\begin{aligned} \biggl\Vert \sum_{k\in {\mathbb{Z}}}\lambda _{k}g_{jk} \biggr\Vert _{p}\leq C2^{j( \frac{1}{2}-\frac{1}{p})} \Vert \lambda \Vert _{l_{p}}. \end{aligned}$$

We need the well-known Rosenthal’s inequality [13], in order to prove Lemma 4.2.

Rosenthal’s inequality. Let \(X_{1},\ldots ,X_{n}\) be independent random variables with \(EX_{i}=0\). Then

$$\begin{aligned} E \Biggl\vert \sum_{i=1}^{n}X_{i} \Biggr\vert ^{p}\leq \textstyle\begin{cases} C_{p} [ \sum_{i=1}^{n}E \vert X_{i} \vert ^{p}+ (\sum_{i=1}^{n}E \vert X_{i} \vert ^{2} )^{\frac{p}{2}} ], &2\leq p< \infty; \\ (\sum_{i=1}^{n}E \vert X_{i} \vert ^{2} )^{\frac{p}{2}}, &0< p< 2, \end{cases}\displaystyle \end{aligned}$$

where \(C_{p}>0\) is a constant.

Lemma 4.2

Let \(\widehat{\alpha }_{j,k}\) and \(\widehat{\beta }_{j,k}\) be given by (6). Then, for \(p\in (0,+\infty )\),

  1. (i)

    \(E\widehat{\alpha }_{j,k}=\alpha _{j,k}, E\widehat{\beta }_{j,k}= \beta _{j,k}\);

  2. (ii)

    \(E|\widehat{\alpha }_{j,k}- {\alpha }_{j,k}|^{p} \lesssim n^{- \frac{p}{2}} 2^{(v+1)jp}, E|\widehat{\beta }_{j,k}- {\beta }_{j,k}|^{p} \lesssim n^{-\frac{p}{2}} 2^{(v+1)jp}\),

where \(\alpha _{j,k}=\langle f'_{\sigma ^{2}}, \phi _{j,k}\rangle \) and \(\beta _{j,k}=\langle f'_{\sigma ^{2}}, \psi _{j,k}\rangle \).

Proof

(i) One only needs to prove \(E\widehat{\alpha }_{j,k}={\alpha }_{j,k}\); the second identity can be shown in the same way. According to the definition of \(\widehat{\alpha }_{j,k}\) in (6), one gets

$$\begin{aligned} E\widehat{\alpha }_{j,k}=-E\bigl[T_{v} \bigl(({\phi }_{j,k})'\bigr) (S_{1})\bigr]=- \int _{0}^{1} T_{v} \bigl(({\phi }_{j,k})'\bigr) (x)f_{s}(x)\,dx \end{aligned}$$

since \(S_{1},\ldots ,S_{n}\) are i.i.d. On the other hand, \(f_{{\sigma }^{2}}(0)=f_{{\sigma }^{2}}(1)=0\) follows from \(\operatorname{supp} f_{{\sigma }^{2}}\subseteq [0,1]\) and the continuity of \(f_{{\sigma }^{2}}\). These with Lemma 2.2 imply

$$ E\widehat{\alpha }_{j,k}=- \int _{0}^{1}f_{{\sigma }^{2}}(x) ({\phi } _{j,k})'(x)\,dx=-f_{{\sigma }^{2}}(x){\phi }_{j,k}(x)| _{0}^{1} + \int _{0}^{1}f'_{{\sigma }^{2}}(x){\phi }_{j,k}(x)\,dx={\alpha }_{j,k}. $$

(ii) We only prove the first inequality; the second one is similar. By (6) and the results of (i),

$$\begin{aligned} \widehat{\alpha }_{j,k}-{\alpha }_{j,k}= \widehat{\alpha }_{j,k}-E \widehat{\alpha }_{j,k}= {\frac{1}{n}} {\sum _{i=1}^{n}} \bigl\{ E\bigl[T _{v} \bigl(({\phi }_{j,k})'\bigr) (S_{i}) \bigr]-T_{v} \bigl(({\phi }_{j,k})'\bigr) (S_{i}) \bigr\} . \end{aligned}$$

Let \(X_{i}:=E[T_{v} (({\phi }_{j,k})')(S_{i})]-T_{v} (({\phi }_{j,k})')(S _{i})\). Then \(X_{1},\ldots ,X_{n}\) are i.i.d., \(EX_{i}=0\) and

$$\begin{aligned} E \vert \widehat{\alpha }_{j,k}-{\alpha }_{j,k} \vert ^{p}= E \Biggl\vert {\frac{1}{n}} { \sum_{i=1}^{n}}X_{i} \Biggr\vert ^{p}= n^{-p}E \Biggl\vert {\sum _{i=1} ^{n}}X_{i} \Biggr\vert ^{p}. \end{aligned}$$
(10)

According to (4),

$$\begin{aligned} \bigl|T_{v}\bigl(({\phi }_{j,k})' \bigr) (x)\bigr|\leq (v+2)!\sum_{u=0}^{v} \bigl\vert x^{u}(\phi _{j,k})^{(u+1)}(x) \bigr\vert . \end{aligned}$$
(11)

Hence,

$$\begin{aligned} \sup_{x\in [0,1]}\bigl|T_{v} \bigl(({\phi }_{j,k})' \bigr) (x)\bigr| &\leq (v+2)! \sup_{x\in [0,1]}\sum _{u=0}^{v} \bigl\vert x^{u} (\phi _{j,k})^{(u+1)}(x) \bigr\vert \\ &\leq (v+2)!\sum_{u=0}^{v}\sup _{x\in [0,1]} \bigl\vert (\phi _{j,k})^{(u+1)}(x) \bigr\vert \\ &\leq (v+2)! \sum_{u=0}^{v} \bigl\Vert \phi ^{(u+1)} \bigr\Vert _{\infty }2^{(v+ \frac{3}{2})j}. \end{aligned}$$

Clearly,

$$\begin{aligned} \vert X_{i} \vert \leq E\bigl|T_{v} \bigl(({\phi }_{j,k})'\bigr) (S_{i}) \bigr| +\bigl| T_{v}\bigl(({\phi }_{j,k})'\bigr) (S _{i})\bigr|\leq C_{1} 2^{(v+\frac{3}{2})j}, \end{aligned}$$
(12)

where \(C_{1}=2(v+2)!\sum_{u=0}^{v}\|\phi ^{(u+1)}\|_{\infty }\). On the other hand, \(\sup_{x\in [0,1]}f_{s}(x)\leq C_{\ast }\) in (3) and \(S_{i}\in [0,1]\) show that

$$\begin{aligned} E \bigl\vert (\phi _{j,k})^{(u+1)}(S_{i}) \bigr\vert ^{2}= \int _{0}^{1} \bigl\vert (\phi _{j,k})^{(u+1)}(x) \bigr\vert ^{2}f _{s}(x)\,dx\leq C_{\ast } \bigl\Vert \phi ^{(u+1)} \bigr\Vert _{2}^{2}2^{(2u+2)j}. \end{aligned}$$

This with (11) and \(S_{i}\in [0,1]\) leads to

$$\begin{aligned} E \bigl[T_{v} \bigl(({\phi }_{j,k})'\bigr) (S_{i}) \bigr]^{2} &\leq \bigl[(v+2)!\bigr]^{2}E \Biggl[\sum_{u=0}^{v} \bigl\vert S_{i}^{u}(\phi _{j,k})^{(u+1)}(S_{i}) \bigr\vert \Biggr]^{2} \\ &\leq (v+1)\bigl[(v+2)!\bigr]^{2}\sum_{u=0}^{v}E \bigl\vert (\phi _{j,k})^{(u+1)}(S _{i}) \bigr\vert ^{2} \\ &\leq (v+1)\bigl[(v+2)!\bigr]^{2}C_{\ast }\sum _{u=0}^{v} \bigl\Vert \phi ^{(u+1)} \bigr\Vert _{2}^{2}2^{(2v+2)j}. \end{aligned}$$

Furthermore,

$$\begin{aligned} E \vert X_{i} \vert ^{2}\leq E \bigl[ T_{v} \bigl(({\phi }_{j,k})'\bigr) (S_{i}) \bigr]^{2} \leq C_{2}2^{(2v+2)j}, \end{aligned}$$
(13)

where \(C_{2}=(v+1)[(v+2)!]^{2}C_{\ast }\sum_{u=0}^{v}\|\phi ^{(u+1)}\|_{2}^{2}\).

When \(0< p<2\), by using (10), Jensen’s inequality and (13),

$$\begin{aligned} E \vert \widehat{\alpha }_{j,k}-{\alpha }_{j,k} \vert ^{p}= n^{-p}E \Biggl\vert {\sum _{i=1}^{n}}X_{i} \Biggr\vert ^{p} \lesssim n^{-p} \Biggl[\sum _{i=1} ^{n}E \vert X_{i} \vert ^{2} \Biggr]^{\frac{p}{2}}\lesssim n^{-\frac{p}{2}}2^{(v+1)jp}. \end{aligned}$$

For the case of \(2\leq p<\infty \), according to Rosenthal’s inequality,

$$\begin{aligned} E \Biggl\vert \sum_{i=1}^{n} X_{i} \Biggr\vert ^{p} &\lesssim \sum _{i=1}^{n}E \vert X_{i} \vert ^{p}+ \Biggl(\sum_{i=1}^{n}E \vert X_{i} \vert ^{2} \Biggr)^{ \frac{p}{2}} \\ &\lesssim n2^{(v+\frac{3}{2})(p-2)j}2^{(2v+2)j}+\bigl(n2^{(2v+2)j} \bigr)^{ \frac{p}{2}} \\ &\lesssim n^{\frac{p}{2}}2^{(v+1)pj}\bigl[n^{1-\frac{p}{2}}2^{( \frac{p}{2}-1)j}+1 \bigr] \end{aligned}$$

because of (12) and (13). Moreover, \(n^{1- \frac{p}{2}} 2^{(\frac{p}{2}-1)j}\leq 1\) follows from \(2^{j}\leq n\) and \(p\geq 2\). Then

$$\begin{aligned} E \vert \widehat{\alpha }_{j,k}-{\alpha }_{j,k} \vert ^{p}= n^{-p}E \Biggl\vert {\sum _{i=1}^{n}}X_{i} \Biggr\vert ^{p}\lesssim n^{-\frac{p}{2}}2^{(v+1)pj} \end{aligned}$$

due to (10). This completes the proof. □

Bernstein’s inequality [13] is necessary in the proof of Lemma 4.3.

Bernstein’s inequality. Let \(X_{1},\ldots ,X_{n}\) be i.i.d. random variables, \(EX_{i}=0\) and \(|X_{i}|\leq \|X\|_{\infty }\) (\(i=1, \ldots ,n\)). Then, for each \(\gamma >0\),

$$\begin{aligned} P \Biggl\{ \Biggl\vert \frac{1}{n}\sum_{i=1}^{n}X_{i} \Biggr\vert >\gamma \Biggr\} \leq 2 \exp \biggl(-\frac{n\gamma ^{2}}{2(EX_{i}^{2}+ \Vert X \Vert _{\infty }\gamma /3)} \biggr). \end{aligned}$$
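As an illustration (not part of the original proof), Bernstein's bound can be compared with an empirical tail probability; a Python sketch with arbitrary bounded centered variables \(X_{i}=\pm \frac{1}{2}\):

```python
import math
import random

random.seed(1)
n, gamma, trials = 1000, 0.03, 2000
var_x, sup_x = 0.25, 0.5        # E X_i^2 and ||X||_inf for X_i = +-1/2

# Bernstein bound: 2 exp(-n gamma^2 / (2 (E X_i^2 + ||X||_inf gamma / 3)))
bound = 2 * math.exp(-n * gamma ** 2 / (2 * (var_x + sup_x * gamma / 3)))

# Empirical tail probability P{ |(1/n) sum X_i| > gamma } over many trials.
hits = sum(
    abs(sum(random.choice((-0.5, 0.5)) for _ in range(n)) / n) > gamma
    for _ in range(trials)
)
empirical = hits / trials       # should not exceed the bound
```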

Lemma 4.3

Let \(\beta _{j,k}\) be the wavelet coefficient of \(f'_{\sigma ^{2}}\), \(\widehat{\beta }_{j,k}\) be defined in (6) and \(\varUpsilon =c\gamma \). Then, for any \(j>0\) with \(j2^{j}\leq n\), \(\gamma \geq 1\) and \(c\geq \max \{8C_{\min },1\}\), one has

$$\begin{aligned} P\bigl\{ \vert \widehat{\beta }_{j,k}- {\beta }_{j,k} \vert >\varUpsilon \lambda _{j}/2\bigr\} \lesssim 2^{-\gamma j}, \end{aligned}$$

where \(C_{\min }\) is given by (8).

Proof

According to the definition of \(\widehat{\beta }_{j,k}\) in (6), one obtains

$$ \widehat{\beta }_{j,k}-{\beta }_{j,k} =\frac{1}{n}\sum _{i=1} ^{n} \bigl\{ E\bigl[T_{v} \bigl(({\psi }_{j,k})'\bigr) (S_{i}) \bigr]-T_{v} \bigl(({\psi }_{j,k})'\bigr) (S _{i}) \bigr\} =\frac{1}{n}\sum_{i=1}^{n}Y_{i}, $$

where \(Y_{i}:=E[T_{v}(({\psi }_{j,k})')(S_{i})]-T_{v} (({\psi }_{j,k})')(S _{i})\).

Similar to (12) and (13),

$$\begin{aligned} \vert Y_{i} \vert \leq C'_{1}2^{(v+\frac{3}{2})j}:=M \quad\text{and}\quad EY_{i}^{2}\leq C'_{2}2^{(2v+2)j}, \end{aligned}$$
(14)

where

$$\begin{aligned} C'_{1}=2(v+2)!\sum _{u=0}^{v} \bigl\Vert \psi ^{(u+1)} \bigr\Vert _{\infty } \quad\text{and} \quad C'_{2}=(v+1)\bigl[(v+2)! \bigr]^{2}C_{\ast }\sum_{u=0}^{v} \bigl\Vert \psi ^{(u+1)} \bigr\Vert _{2}^{2}. \end{aligned}$$
(15)

Then Bernstein’s inequality tells that

$$\begin{aligned} P\bigl\{ \vert \widehat{\beta }_{j,k}- {\beta }_{j,k} \vert >\varUpsilon \lambda _{j}/2\bigr\} =P \Biggl\{ \Biggl\vert \frac{1}{n} \sum_{i=1}^{n}Y_{i} \Biggr\vert >\varUpsilon \lambda _{j}/2\Biggr\} \leq 2\exp \biggl\{ -\frac{n(\varUpsilon \lambda _{j}/2)^{2}}{2(EY_{i} ^{2}+M\varUpsilon \lambda _{j}/6)} \biggr\} . \end{aligned}$$
(16)

On the other hand, combining with (14), \(\lambda _{j}=2^{(v+1)j}\sqrt{ \frac{j}{n}}\) and \(j2^{j}\leq n\), one shows

$$\begin{aligned} EY_{i}^{2}+M\varUpsilon \lambda _{j}/6 &\leq C'_{2}2^{(2v+2)j}+\frac{C'_{1} \varUpsilon }{6}2^{(v+\frac{3}{2})j} 2^{(v+1)j}\sqrt{\frac{j}{n}} \\ &=2^{(2v+2)j} \biggl(C'_{2}+\frac{C'_{1}\varUpsilon }{6} \sqrt{ \frac{ j2^{j}}{n}} \biggr) \leq \bigl(C'_{2}+C'_{1} \varUpsilon \bigr)2^{(2v+2)j}. \end{aligned}$$

This with (15), \(c\geq \max \{8C_{\min },1\}\) implies that

$$\begin{aligned} \frac{n(\varUpsilon \lambda _{j}/2)^{2}}{2(EY_{i}^{2}+M{\varUpsilon \lambda _{j}}/6)} \geq \frac{\varUpsilon ^{2}j}{8(C'_{2}+C'_{1}\varUpsilon )}=\frac{(c \gamma )^{2}j}{8(C'_{2}+C'_{1}c\gamma )} \geq \gamma j\ln 2 \end{aligned}$$
(17)

thanks to \(j>0\) and \(\gamma \geq 1\).

Hence, it follows from (16)–(17) that

$$\begin{aligned} P\bigl\{ | \widehat{\beta }_{j,k}- {\beta }_{j,k}| > \varUpsilon \lambda _{j}/2\bigr\} \leq 2\exp \biggl\{ -\frac{(c\gamma )^{2}j}{8(C'_{2}+C'_{1}c \gamma )} \biggr\} \lesssim 2^{-\gamma j}, \end{aligned}$$

which is the conclusion of Lemma 4.3. □

At the end of this section, we list two more lemmas which will play key roles in the proof of Theorem 3.3.

Lemma 4.4

([5])

Let \(g\in B_{r,q}^{s}(\mathbb{R})\) and \(f(x)=g(bx)\ (b\geq 1)\). Then

$$\begin{aligned} \Vert f \Vert _{B_{r,q}^{s}}\leq b^{s-\frac{1}{r}} \Vert g \Vert _{B_{r,q}^{s}}. \end{aligned}$$

To state the last lemma, we need a concept: let P and Q be two probability measures on \((\varOmega ,\aleph )\) with P absolutely continuous with respect to Q (denoted by \(P\ll Q\)). The Kullback–Leibler divergence is defined by

$$\begin{aligned} K(P,Q):= \int _{p\cdot q>0} p(x)\ln {\frac{p(x)}{q(x)}}\,dx, \end{aligned}$$

where p and q are density functions of \(P,Q\), respectively.
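For illustration (our own addition), \(K(P,Q)\) can be approximated on a grid; the sketch below uses the arbitrary example of a uniform P against the density \(q(x)=2x\) on \([0,1]\), for which \(K(P,Q)=1-\ln 2\):

```python
import math

def kl_divergence(p, q, dx):
    """Riemann-sum approximation of K(P,Q) = int_{p q > 0} p ln(p/q) dx."""
    return sum(pi * math.log(pi / qi) * dx
               for pi, qi in zip(p, q) if pi > 0 and qi > 0)

# Example: P uniform on [0,1], Q with density q(x) = 2x.
m = 100000
xs = [(i + 0.5) / m for i in range(m)]   # midpoint grid
p = [1.0] * m
q = [2 * x for x in xs]
kl = kl_divergence(p, q, 1 / m)          # approximates 1 - ln 2
```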

Lemma 4.5

(Fano’s lemma, [10])

Let \(( \varOmega , \aleph , P_{k})\) be probability spaces and \(A_{k}\in \aleph , k=0,1,\ldots ,m\). If \(A_{k}\cap A_{v}=\emptyset \) for \(k\neq v\), then

$$\begin{aligned} \sup_{0\leq k\leq m} P_{k}\bigl(A_{k}^{c} \bigr)\geq \min \biggl\{ \frac{1}{2}, \sqrt{m} \exp \bigl(-3e^{-1}- \kappa _{m}\bigr)\biggr\} , \end{aligned}$$

where \(A^{c}\) stands for the complement of A and \(\kappa _{m}= \inf_{0\leq v\leq m} \frac{1}{m} \sum_{k\neq v}K(P_{k}, P _{v})\).

5 Proofs of results

In this section, we will prove our main results.

5.1 Proofs of upper bounds

We rewrite Theorem 3.1 as follows before giving its proof.

Theorem 5.1

Assume \(r\in [1,+\infty )\), \(q\in [1,+\infty ]\) and \(s>\frac{1}{r}\). Then, for \(p\in [1,+\infty )\), the estimator \(\widehat{f'_{{\sigma }^{2}}} ^{\mathrm{lin}}\) in (7) with \(2^{j_{0}}\sim n^{ \frac{1}{2s'+2(v+1)+1}}\) satisfies

$$\begin{aligned} {\sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)}} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \lesssim n^{-\frac{s'p}{2s'+2(v+1)+1}}, \end{aligned}$$

where \(s'=s-(\frac{1}{r}-\frac{1}{p})_{+}\) and \(a_{+}=\max \{a,0\}\).

Proof

When \(r>p\), \(s'=s-(\frac{1}{r}-\frac{1}{p})_{+}=s\). Denote \(\varOmega = \operatorname{supp} (\widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}})\). Then

$$\begin{aligned} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} &=E \int \bigl\vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\vert ^{p}\,dx \\ &\leq E \biggl[ \int _{\varOmega }\bigl( \bigl\vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{ {\sigma }^{2}} \bigr\vert ^{p}\bigr) ^{\frac{r}{p}}\,dx \biggr]^{\frac{p}{r}} \biggl( \int _{\varOmega } 1\,dx\biggr)^{1-\frac{p}{r}}\lesssim E\bigl( \bigl\Vert \widehat{f'_{{\sigma }^{2}}} ^{\mathrm{lin}}-f'_{ {\sigma }^{2}} \bigr\Vert _{r}^{r}\bigr)^{\frac{p}{r}} \end{aligned}$$

due to the Hölder inequality. Furthermore, according to Jensen’s inequality and \(\frac{p}{r}<1\),

$$\begin{aligned} \sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p}\lesssim \sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)} \bigl(E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{r}^{r}\bigr)^{ \frac{p}{r}}. \end{aligned}$$
(18)

When \(r\leq p\), \(s'=s-\frac{1}{r}+\frac{1}{p}\leq s\) and one finds \(B_{r,q}^{s}\hookrightarrow {B_{p,q}^{s'}}\). Hence,

$$\begin{aligned} {\sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)}} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \lesssim \sup_{f'_{{\sigma }^{2}}\in B_{p,q}^{s'}(M)} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p}. \end{aligned}$$
(19)

By (18) and (19), it remains to estimate \(\sup_{f'_{{\sigma }^{2}}\in B_{p,q}^{s'}(M)} E\| \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}}\|_{p}^{p}\). Note that

$$\begin{aligned} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} & \leq E \bigl[ \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-P_{j_{0}}f'_{ {\sigma }^{2}} \bigr\Vert _{p}+ \bigl\Vert P_{j_{0}}f'_{{\sigma }^{2}}-f'_{ {\sigma }^{2}} \bigr\Vert _{p} \bigr]^{p} \\ &\lesssim E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-P_{j_{0}}f'_{ {\sigma }^{2}} \bigr\Vert _{p}^{p}+ \bigl\Vert P_{j_{0}}f'_{{\sigma }^{2}}-f'_{ {\sigma }^{2}} \bigr\Vert _{p}^{p}. \end{aligned}$$
(20)

Combining \(2^{j_{0}}\sim n^{\frac{1}{2s'+2(v+1)+1}}\), \(f'_{{\sigma } ^{2}}\in B_{p,q}^{s'}(M)\) with Lemma 2.1, one concludes

$$\begin{aligned} \bigl\Vert P_{j_{0}}f'_{{\sigma }^{2}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p}\lesssim 2^{-j _{0}s'p}\lesssim n^{-\frac{s'p}{2s'+2(v+1)+1}}. \end{aligned}$$
(21)

On the other hand,

$$\begin{aligned} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-P_{j_{0}}f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p}\lesssim 2^{j_{0}(\frac{p}{2}-1)}\sum _{k\in \varOmega _{j_{0}}} E \vert \widehat{\alpha }_{j_{0},k}-{\alpha } _{j_{0},k} \vert ^{p}\lesssim n^{-\frac{p}{2}}2^{(v+1+\frac{1}{2})j_{0}p} \end{aligned}$$
(22)

thanks to Lemma 4.1 and Lemma 4.2. Then it follows

$$\begin{aligned} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-P_{j_{0}}f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \lesssim n^{-\frac{s'p}{2s'+2(v+1)+1}} \end{aligned}$$

from \(2^{j_{0}}\sim n^{\frac{1}{2s'+2(v+1)+1}}\). This with (20) and (21) leads to

$$\begin{aligned} \sup_{f'_{{\sigma }^{2}}\in B_{p,q}^{s'}(M)} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p}\lesssim n^{-\frac{s'p}{2s'+2(v+1)+1}}. \end{aligned}$$
(23)

Combining (23) with (18) and (19), one finds that

$$\begin{aligned} \sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{lin}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p}\lesssim n^{-\frac{s'p}{2s'+2(v+1)+1}}. \end{aligned}$$

The proof is done. □

Now, the upper bound of nonlinear wavelet estimator (Theorem 3.2) is restated below.

Theorem 5.2

Let \(r\in [1,+\infty ), q\in [1,+\infty ]\) and \(s>\frac{1}{r}\). Then, for any \(p\in [1,+\infty )\), the estimator \(\widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}\) in (9) satisfies

$$\begin{aligned} {\sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)}} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \lesssim (\ln n)^{p} \bigl(n^{-1}\ln n\bigr)^{\alpha p}, \end{aligned}$$

where \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}\).

Proof

When \(r>p\), similar to (18),

$$\begin{aligned} \sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)} E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p}\lesssim \sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)} \bigl(E \bigl\Vert \widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}-f'_{{\sigma }^{2}} \bigr\Vert _{r}^{r}\bigr)^{ \frac{p}{r}}. \end{aligned}$$

Hence, it suffices to establish the result for \(r\leq p\). According to (1), (2) and (9), \(E\| \widehat{f'_{{\sigma }^{2}}}^{\mathrm{non}}-f'_{{\sigma }^{2}}\|_{p}^{p}\lesssim A_{1}+A_{2}+A_{3}\), where

$$\begin{aligned} A_{1}=E \biggl\Vert \sum_{k\in \varOmega _{\tau }}(\widehat{\alpha }_{\tau ,k}-{\alpha }_{\tau ,k})\phi _{\tau ,k} \biggr\Vert _{p}^{p};\qquad A_{2}=E \Biggl\Vert \sum_{j=\tau }^{j_{1}}\sum_{k\in \varOmega _{j}}(\widetilde{\beta }_{j,k}-{\beta }_{j,k})\psi _{j,k} \Biggr\Vert _{p}^{p} \quad\text{and}\quad A_{3}= \bigl\Vert P_{j_{1}+1}f'_{{\sigma }^{2}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p}. \end{aligned}$$

Next, one proves \(A_{1}+A_{2}+A_{3}\lesssim (\ln n)^{p}(n^{-1}\ln n)^{ \alpha p}\) for \(f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)\) and \(r\leq p\).

By the same arguments as (22),

$$\begin{aligned} A_{1}\lesssim 2^{\tau (\frac{p}{2}-1)}\sum_{k\in \varOmega _{ \tau }} E \vert \widehat{\alpha }_{\tau ,k}-{\alpha }_{\tau ,k} \vert ^{p}\lesssim n^{-\frac{p}{2}}2^{(v+1+\frac{1}{2})\tau p}\thicksim n^{-\frac{p}{2}} \lesssim \bigl(n^{-1}\ln n\bigr)^{\alpha p} \end{aligned}$$

thanks to \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}<\frac{1}{2}\).

Note that \(f'_{{\sigma }^{2}}\in B_{r,q}^{s}\hookrightarrow B_{p,q} ^{s-\frac{1}{r}+\frac{1}{p}}\) for \(r\leq p\). This with Lemma 2.1 and \(2^{j_{1}}\thicksim (\frac{n}{\ln n})^{ \frac{1}{2(v+1)+1}}\) shows

$$\begin{aligned} A_{3}= \bigl\Vert P_{j_{1}+1}f'_{{\sigma }^{2}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \lesssim 2^{-j_{1}(s-\frac{1}{r}+\frac{1}{p})p}\lesssim \biggl( \frac{\ln n}{n}\biggr)^{\frac{(s-\frac{1}{r}+\frac{1}{p})p}{2(v+1)+1}} \lesssim \bigl(n^{-1}\ln n\bigr)^{\alpha p}, \end{aligned}$$

because \(s>\frac{1}{r}\) and \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}\leq \frac{s- \frac{1}{r}+\frac{1}{p}}{2(v+1)+1}\).
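Since the argument repeatedly uses the two bounds \(\alpha <\frac{1}{2}\) and \(\alpha \leq \frac{s-\frac{1}{r}+\frac{1}{p}}{2(v+1)+1}\), it may help to verify them numerically. The following sketch (illustrative parameter values only, not part of the proof) computes α and checks both bounds:

```python
# Illustrative check (not part of the proof): for admissible parameters
# with s > 1/r and r <= p, the exponent alpha satisfies alpha < 1/2 and
# alpha <= (s - 1/r + 1/p) / (2(v + 1) + 1), as used for A_1 and A_3.

def alpha(s, r, p, v):
    """alpha = min{ s/(2s+2(v+1)+1), (s-1/r+1/p)/(2(s-1/r)+2(v+1)+1) }."""
    a1 = s / (2 * s + 2 * (v + 1) + 1)
    a2 = (s - 1 / r + 1 / p) / (2 * (s - 1 / r) + 2 * (v + 1) + 1)
    return min(a1, a2)

checks = []
for (s, r, p, v) in [(1.0, 2, 2, 1), (1.2, 1, 4, 1), (2.5, 2, 6, 2)]:
    a = alpha(s, r, p, v)
    checks.append(a < 0.5 and a <= (s - 1 / r + 1 / p) / (2 * (v + 1) + 1))

print(all(checks))  # True for these parameter choices
```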

To estimate \(A_{2}\), define

$$\begin{aligned} \widehat{B}_{j}=\bigl\{ k: \vert \widehat{\beta }_{j,k} \vert \geq \varUpsilon \lambda _{j} \bigr\} ;\qquad {B}_{j}= \biggl\{ k: \vert {\beta }_{j,k} \vert \geq \frac{1}{2} \varUpsilon \lambda _{j} \biggr\} \quad\text{and}\quad {C}_{j}=\bigl\{ k: \vert \beta _{j,k} \vert \geq 2\varUpsilon \lambda _{j}\bigr\} . \end{aligned}$$

Then \(E\|\sum_{j={\tau }}^{j_{1}} \sum_{k\in \varOmega _{j}}( \widetilde{\beta }_{j,k}-{\beta }_{j,k}) {\psi }_{j,k}\|_{p}^{p} \lesssim (\ln n)^{p-1}\sum_{i=1}^{4}Ee_{i}\) by Lemma 4.1, where

$$\begin{aligned} &e_{1}={\sum_{j={\tau }}^{j_{1}}} 2^{j(\frac{p}{2}-1)}\sum_{k\in \varOmega _{j}} \vert \widehat{\beta }_{j,k}-{\beta }_{j,k} \vert ^{p} I\bigl\{ k\in \widehat{B}_{j}\cap {B}_{j}^{c}\bigr\} ; \\ &e_{2}={\sum_{j={\tau }}^{j_{1}}} 2^{j(\frac{p}{2}-1)}\sum_{k\in \varOmega _{j}} \vert \widehat{\beta }_{j,k}-{\beta }_{j,k} \vert ^{p} I\{k\in \widehat{B}_{j}\cap {B}_{j}\}; \\ &e_{3}={\sum_{j={\tau }}^{j_{1}}} 2^{j(\frac{p}{2}-1)}\sum_{k\in \varOmega _{j}} \vert {\beta }_{j,k} \vert ^{p} I\bigl\{ k\in \widehat{B} _{j}^{c}\cap {C}_{j}\bigr\} ; \\ &e_{4}={\sum_{j={\tau }}^{j_{1}}} 2^{j(\frac{p}{2}-1)}\sum_{k\in \varOmega _{j}} \vert {\beta }_{j,k} \vert ^{p} I\bigl\{ k\in \widehat{B} _{j}^{c}\cap {C}_{j}^{c}\bigr\} . \end{aligned}$$

By the Hölder inequality and \(\{k\in \widehat{B}_{j}\cap {B} _{j}^{c}\}\subseteq \{|\widehat{\beta }_{j,k}- {\beta }_{j,k}|>\varUpsilon \lambda _{j}/2\}\),

$$\begin{aligned} E \vert \widehat{\beta }_{j,k}-{\beta }_{j,k} \vert ^{p}I\bigl\{ k\in \widehat{B}_{j} \cap {B}_{j}^{c} \bigr\} \leq \bigl(E \vert \widehat{\beta }_{j,k}-{\beta }_{j,k} \vert ^{2p}\bigr)^{ \frac{1}{2}} P^{\frac{1}{2}} \bigl\{ \vert \widehat{\beta }_{j,k}-{\beta }_{j,k} \vert > \varUpsilon \lambda _{j}/2\bigr\} . \end{aligned}$$

This with Lemma 4.2 and Lemma 4.3 shows that

$$\begin{aligned} Ee_{1}\lesssim \sum_{j={\tau }}^{j_{1}}2^{j(\frac{p}{2}-1)} \sum_{k\in \varOmega _{j}} n^{-\frac{p}{2}} 2^{j[(v+1)p-\frac{ \gamma }{2}]} \lesssim n^{-\frac{p}{2}}2^{\tau (vp+\frac{3}{2}p-\frac{ \gamma }{2})}\lesssim \biggl(\frac{\ln n}{n} \biggr)^{\alpha p}, \end{aligned}$$

where one uses \(\gamma >p(2v+3)\) and \(\alpha =\min \{ \frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s- \frac{1}{r})+2(v+1)+1}\}<\frac{1}{2}\).

From \(k\in \widehat{B}_{j}^{c}\cap {C}_{j}\), one finds \(| \widehat{\beta }_{j,k}- {\beta }_{j,k}|>\varUpsilon \lambda _{j}\) and \(|\beta _{j,k}|\leq |\widehat{\beta }_{j,k}- {\beta }_{j,k}|+| \widehat{\beta }_{j,k}|\leq 2|\widehat{\beta }_{j,k}- {\beta }_{j,k}|\). On the other hand, \(\{k\in \widehat{B}_{j}^{c}\cap {C}_{j}\}\subseteq \{|\widehat{\beta }_{j,k}- {\beta }_{j,k}|>\varUpsilon \lambda _{j}\} \subseteq \{|\widehat{\beta }_{j,k}- {\beta }_{j,k}|>\varUpsilon \lambda _{j}/2\}\). Therefore, it follows from the same arguments as \(Ee_{1}\) that

$$\begin{aligned} Ee_{3}\lesssim \sum_{j={\tau }}^{j_{1}} 2^{j(\frac{p}{2}-1)} \sum_{k\in \varOmega _{j}} \vert \widehat{\beta }_{j,k}-{\beta }_{j,k} \vert ^{p}I \bigl\{ k\in \widehat{B}_{j}^{c}\cap {C}_{j}\bigr\} \lesssim \biggl(\frac{\ln n}{n}\biggr)^{ \alpha p}. \end{aligned}$$

Next, one estimates \(Ee_{2}\) and \(Ee_{4}\). Define

$$\begin{aligned} \begin{aligned} &\omega =sr+\biggl(v+\frac{3}{2}\biggr)r-\biggl(v+ \frac{3}{2}\biggr)p,\qquad 2^{j_{0}^{*}}\sim \biggl(\frac{n}{ \ln n} \biggr)^{\frac{1}{2s+2(v+1)+1}},\\ & 2^{j_{1}^{*}}\sim \biggl(\frac{n}{\ln n} \biggr)^{\frac{1}{2(s- \frac{1}{r})+2(v+1)+1}}. \end{aligned} \end{aligned}$$
(24)

Then, by \(s>\frac{1}{r}\) and \(2^{j_{1}}\thicksim (\frac{n}{\ln n})^{ \frac{1}{2(v+1)+1}}\),

$$\begin{aligned} 0< \frac{1}{2s+2(v+1)+1},~\frac{1}{2(s-\frac{1}{r})+2(v+1)+1}< \frac{1}{2(v+1)+1} \quad\text{and}\quad \tau < j_{0}^{*}, j_{1}^{*}< j_{1}. \end{aligned}$$

When \(\omega \geq 0\), one writes down

$$\begin{aligned} e_{2}=\Biggl(\sum_{j={\tau }}^{j_{0}^{*}}+ \sum_{j=j_{0}^{*}} ^{j_{1}}\Biggr) 2^{j(\frac{p}{2}-1)}\sum _{k\in \varOmega _{j}} \vert \widehat{\beta }_{j,k}-{ \beta }_{j,k} \vert ^{p}I\{k\in \widehat{B}_{j} \cap {B}_{j}\} :=e_{21}+e_{22}. \end{aligned}$$
(25)

According to (22),

$$\begin{aligned} Ee_{21}\lesssim \sum_{j={\tau }}^{j_{0}^{*}}2^{j(\frac{p}{2}-1)} \sum_{k\in \varOmega _{j}} E \vert \widehat{\beta }_{j,k}-{\beta }_{j,k} \vert ^{p} \lesssim n^{-\frac{p}{2}}2^{(v+1+\frac{1}{2})j_{0}^{*}p}. \end{aligned}$$
(26)

Note that \(\frac{2|\beta _{jk}|}{\varUpsilon \lambda _{j}}\geq 1\) for \(k\in B_{j}\), and \(\sum_{k}|\beta _{jk}|^{r}\lesssim 2^{-j(s+\frac{1}{2}-\frac{1}{r})r}\) by \(f'_{\sigma ^{2}}\in B_{r,q}^{s}(M)\) and Lemma 2.1. On the other hand, Lemma 4.2 gives \(E|\widehat{\beta }_{j,k}-{\beta }_{j,k}|^{p}\lesssim 2^{(v+1)jp} n^{-\frac{p}{2}}\). These with \(\lambda _{j}=2^{(v+1)j}\sqrt{\frac{j}{n}}\) and \(\omega =sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p\geq 0\) lead to

$$\begin{aligned} Ee_{22} &\lesssim \sum_{j=j_{0}^{*}}^{j_{1}} 2^{j( \frac{p}{2}-1)}\sum_{k\in \varOmega _{j}} E \vert \widehat{\beta }_{j,k}- {\beta }_{j,k} \vert ^{p}\biggl( \frac{ \vert \beta _{jk} \vert }{\varUpsilon \lambda _{j}}\biggr)^{r} \\ &\lesssim \sum_{j=j_{0}^{*}}^{j_{1}}2^{j(\frac{p}{2}-1)}2^{(v+1)jp} n^{-\frac{p}{2}}\lambda _{j}^{-r}\sum _{k} \vert \beta _{jk} \vert ^{r} \lesssim 2^{-j _{0}^{*}\omega }n^{-\frac{p-r}{2}}. \end{aligned}$$
(27)

Combining (25)–(27) with \(2^{j_{0}^{*}}\sim (\frac{n}{ \ln n})^{\frac{1}{2s+2(v+1)+1}}\), one obtains

$$\begin{aligned} Ee_{2}=Ee_{21}+Ee_{22}\lesssim n^{-\frac{p}{2}}2^{(v+1+\frac{1}{2})j _{0}^{*}p}+2^{-j_{0}^{*}\omega }n^{-\frac{p-r}{2}} \lesssim \biggl(\frac{ \ln n}{n}\biggr)^{\frac{sp}{2s+2(v+1)+1}}=\biggl(\frac{\ln n}{n} \biggr)^{\alpha p} \end{aligned}$$

because of \(\omega \geq 0\) and \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}= \frac{s}{2s+2(v+1)+1}\).
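The choice of \(2^{j_{0}^{*}}\) balances the two terms above. As a sanity check (illustrative values only; the distinction between n and n/ln n inside logarithmic factors is ignored), one can confirm that both exponents equal \(-\alpha p\) when \(\omega \geq 0\):

```python
# Illustrative exponent check: with 2^{j0*} = (n/ln n)^{1/D}, where
# D = 2s + 2(v+1) + 1, the two bounds on Ee_2,
#   n^{-p/2} 2^{(v+3/2) j0* p}   and   2^{-j0* omega} n^{-(p-r)/2},
# both behave like (ln n / n)^{alpha p} when omega >= 0 (log factors ignored).

s, r, p, v = 1.0, 2, 2, 1            # a parameter choice with omega >= 0
D = 2 * s + 2 * (v + 1) + 1
omega = s * r + (v + 1.5) * r - (v + 1.5) * p
alpha = s / D                        # the minimum is attained at s/D here

exp1 = (v + 1.5) * p / D - p / 2     # exponent of (n/ln n) in the first term
exp2 = -omega / D - (p - r) / 2      # exponent of (n/ln n) in the second term

print(omega >= 0, abs(exp1 + alpha * p) < 1e-12, abs(exp2 + alpha * p) < 1e-12)
```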

When \(\omega =sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p<0\), \(\alpha = \min \{\frac{s}{2s+2(v+1)+1}, \frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s- \frac{1}{r})+2(v+1)+1}\}=\frac{s-\frac{1}{r}+ \frac{1}{p}}{2(s- \frac{1}{r})+2(v+1)+1}\). Define \(p_{1}=(1-2\alpha )p\). Then \(r\leq p_{1}\leq p\) follows from

$$\begin{aligned} \omega < 0 \quad\text{and}\quad r\leq p_{1}=(1-2\alpha )p= \frac{2(v+1)p+p-2}{2(s-\frac{1}{r})+2(v+1)+1}\leq p. \end{aligned}$$

Moreover, \(\sum_{k}|\beta _{jk}|^{p_{1}}\leq (\sum_{k}|\beta _{jk}|^{r})^{\frac{p _{1}}{r}} \lesssim 2^{-j(s+\frac{1}{2}-\frac{1}{r})p_{1}} \) thanks to \(r\leq p_{1}\), \(f'_{\sigma ^{2}}\in B_{r,q}^{s}(M)\) and Lemma 2.1. This with (27) and \(\lambda _{j}=2^{(v+1)j}\sqrt{ \frac{j}{n}}\) shows

$$\begin{aligned} Ee_{2} &\lesssim \sum_{j=\tau }^{j_{1}} 2^{j(\frac{p}{2}-1)} \sum_{k\in \varOmega _{j}} E \vert \widehat{\beta }_{j,k}-{\beta }_{j,k} \vert ^{p}\biggl( \frac{ \vert \beta _{jk} \vert }{\varUpsilon \lambda _{j}}\biggr)^{p_{1}} \\ &\lesssim \sum_{j=\tau }^{j_{1}}2^{j(\frac{p}{2}-1)}2^{(v+1)jp} n^{-\frac{p}{2}}\lambda _{j}^{-p_{1}}\sum _{k} \vert \beta _{jk} \vert ^{p_{1}} \\ &\lesssim n^{-\frac{p-p_{1}}{2}}\sum_{j=\tau }^{{j_{1}}} 2^{j[ \frac{p}{2}-1+(v+1)(p-p_{1})-(s+\frac{1}{2}-\frac{1}{r})p_{1}]} \lesssim \ln n \biggl(\frac{\ln n}{n}\biggr)^{\alpha p} \end{aligned}$$

due to \(\frac{p-p_{1}}{2}=\alpha p\) and \(\frac{p}{2}-1+(v+1)(p-p_{1})-(s+ \frac{1}{2}-\frac{1}{r})p_{1}=0\).
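The two identities invoked above are elementary algebra and can be verified directly; a small sketch (illustrative parameter values with ω<0, not part of the proof):

```python
# Illustrative check of the identities behind p1 (for a choice with omega < 0):
#   (p - p1)/2 = alpha*p   and
#   p/2 - 1 + (v+1)(p - p1) - (s + 1/2 - 1/r)*p1 = 0,
# where p1 = (1 - 2*alpha)*p and alpha = (s - 1/r + 1/p)/(2(s-1/r)+2(v+1)+1).

s, r, p, v = 1.2, 1.0, 4.0, 1.0     # omega = sr + (v+3/2)(r-p) < 0 here
omega = s * r + (v + 1.5) * (r - p)
alpha = (s - 1 / r + 1 / p) / (2 * (s - 1 / r) + 2 * (v + 1) + 1)
p1 = (1 - 2 * alpha) * p

# closed form of p1 stated in the text
closed_form = (2 * (v + 1) * p + p - 2) / (2 * (s - 1 / r) + 2 * (v + 1) + 1)
lhs_zero = p / 2 - 1 + (v + 1) * (p - p1) - (s + 0.5 - 1 / r) * p1

print(omega < 0,
      r <= p1 <= p,
      abs(p1 - closed_form) < 1e-12,
      abs((p - p1) / 2 - alpha * p) < 1e-12,
      abs(lhs_zero) < 1e-12)
```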

Finally, one estimates \(Ee_{4}\). When \(\omega =sr+(v+\frac{3}{2})r-(v+ \frac{3}{2})p\geq 0\), \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}= \frac{s}{2s+2(v+1)+1}\). Furthermore,

$$\begin{aligned} e_{4}=\Biggl(\sum_{j={\tau }}^{j_{0}^{*}}+ \sum_{j={j_{0}^{*}}+1} ^{j_{1}}\Biggr) 2^{j(\frac{p}{2}-1)}\sum _{k\in \varOmega _{j}} \vert {\beta } _{j,k} \vert ^{p} I\bigl\{ k\in \widehat{B}_{j}^{c}\cap {C}_{j}^{c}\bigr\} :=e_{41}+e _{42}, \end{aligned}$$
(28)

where \(j_{0}^{*}\) is given by (24).

Since \(|\beta _{j,k}|\leq 2\varUpsilon \lambda _{j}\lesssim 2^{(v+1)j}(jn ^{-1})^{\frac{1}{2}}\) holds by \(k\in C_{j}^{c}\) and \(\lambda _{j}=2^{(v+1)j}\sqrt{ \frac{j}{n}}\), one concludes that

$$\begin{aligned} Ee_{41} &= E\sum_{j={\tau }}^{j_{0}^{*}}2^{j(\frac{p}{2}-1)} \sum_{k\in \varOmega _{j}} \vert \beta _{j,k} \vert ^{p}I\bigl\{ k\in \widehat{B} _{j}^{c}\cap {C}_{j}^{c}\bigr\} \\ &\lesssim \sum_{j={\tau }}^{j_{0}^{*}}2^{j(\frac{p}{2}-1)}2^{j} 2^{(v+1)jp}\bigl(jn^{-1}\bigr)^{\frac{p}{2}}\lesssim 2^{(v+1+\frac{1}{2})j_{0} ^{*}p}\biggl(\frac{\ln n}{n}\biggr)^{\frac{p}{2}}. \end{aligned}$$
(29)

On the other hand,

$$\begin{aligned} Ee_{42} &= E\sum_{j={j_{0}^{*}+1}}^{j_{1}} 2^{j(\frac{p}{2}-1)} \sum_{k\in \varOmega _{j}} \vert \beta _{j,k} \vert ^{p} I\bigl\{ k\in \widehat{B} _{j}^{c}\cap {C}_{j}^{c}\bigr\} \\ &\lesssim \sum_{j=j_{0}^{*}+1}^{j_{1}}2^{j(\frac{p}{2}-1)} \sum_{k\in \varOmega _{j}} \vert \beta _{j,k} \vert ^{p}\bigl(\lambda _{j} \vert \beta _{j,k} \vert ^{-1}\bigr)^{p-r} \\ &\lesssim \sum_{j={j_{0}^{*}+1}}^{j_{1}}2^{j(\frac{p}{2}-1)} \lambda _{j}^{p-r}\sum_{k} \vert \beta _{j,k} \vert ^{r} \end{aligned}$$

due to \(|\beta _{j,k}|\leq 2\varUpsilon \lambda _{j}\) and \(r\leq p\).

Clearly, \(\|{\beta }_{j,\cdot }\|_{l_{r}} \lesssim 2^{-j(s-\frac{1}{r}+ \frac{1}{2})}\) by \(f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)\) and Lemma 2.1. This with \(\lambda _{j}=2^{(v+1)j}\sqrt{\frac{j}{n}}\) implies that

$$\begin{aligned} Ee_{42} \lesssim \sum_{j={j_{0}^{*}+1}}^{j_{1}}2^{j( \frac{p}{2}-1)} 2^{(v+1)(p-r)j}2^{-j(sr+\frac{r}{2}-1)}\biggl( \frac{\ln n}{n}\biggr)^{\frac{p-r}{2}} \lesssim \biggl(\frac{\ln n}{n}\biggr)^{ \frac{p-r}{2}}2^{-j_{0}^{\ast }\omega }, \end{aligned}$$
(30)

because \(\omega =sr+(v+\frac{3}{2})r-(v+\frac{3}{2})p\geq 0\).

According to (28)–(30) and \(2^{j_{0}^{*}}\sim (\frac{n}{ \ln n})^{\frac{1}{2s+2(v+1)+1}}\), one obtains

$$\begin{aligned} Ee_{4}=Ee_{41}+Ee_{42}\lesssim 2^{(v+1+\frac{1}{2})j_{0}^{*}p} \biggl(\frac{ \ln n}{n}\biggr)^{\frac{p}{2}}+\biggl(\frac{\ln n}{n} \biggr)^{\frac{p-r}{2}}2^{-j_{0} ^{\ast }\omega } \lesssim \biggl(\frac{\ln n}{n} \biggr)^{\alpha p} \end{aligned}$$

by \(\alpha =\frac{s}{2s+2(v+1)+1}\).

Consider now the case \(\omega =sr+(v+\frac{3}{2})r-(v+ \frac{3}{2})p<0\). Write

$$\begin{aligned} e_{4}=\Biggl(\sum_{j={\tau }}^{j_{1}^{*}}+ \sum_{j={j_{1}^{*}}+1} ^{j_{1}}\Biggr) 2^{j(\frac{p}{2}-1)}\sum _{k\in \varOmega _{j}} \vert {\beta } _{j,k} \vert ^{p} I\bigl\{ k\in \widehat{B}_{j}^{c}\cap {C}_{j}^{c}\bigr\} :=e_{41}'+e _{42}', \end{aligned}$$
(31)

where \(j_{1}^{*}\) is given by (24). Similar to (30),

$$\begin{aligned} Ee_{41}'=\sum _{j={\tau }}^{j_{1}^{*}} 2^{j(\frac{p}{2}-1)} \sum _{k\in \varOmega _{j}} \vert {\beta }_{j,k} \vert ^{p} I\bigl\{ k\in \widehat{B} _{j}^{c}\cap {C}_{j}^{c} \bigr\} \lesssim \biggl(\frac{\ln n}{n}\biggr)^{\frac{p-r}{2}}2^{-j _{1}^{\ast }\omega } \end{aligned}$$
(32)

thanks to \(\omega <0\).

To estimate \(Ee_{42}'\), one observes that \(\|\beta _{j,\cdot }\|_{l_{p}} \lesssim \|\beta _{j,\cdot }\|_{l_{r}}\lesssim 2^{-j(s-\frac{1}{r}+ \frac{1}{2})}\) by \(r\leq p\), \(f'_{\sigma ^{2}}\in B_{r,q}^{s}(M)\) and Lemma 2.1. Hence,

$$\begin{aligned} Ee_{42}' &=\sum _{j={j_{1}^{*}+1}}^{j_{1}}2^{j(\frac{p}{2}-1)} \sum _{k\in \varOmega _{j}} \vert \beta _{j,k} \vert ^{p}I \bigl\{ k\in \widehat{B}_{j} ^{c}\cap {C}_{j}^{c} \bigr\} \lesssim \sum_{j={j_{1}^{*}+1}}^{j_{1}}2^{j( \frac{p}{2}-1)} \Vert \beta _{j,\cdot } \Vert _{l_{r}}^{p} \\ &\lesssim \sum_{j={j_{1}^{*}+1}}^{j_{1}}2^{j(\frac{p}{2}-1)}2^{-j(s- \frac{1}{r}+\frac{1}{2})p} \lesssim 2^{-j_{1}^{*}(s-\frac{1}{r}+ \frac{1}{p})p} \end{aligned}$$
(33)

because of \(s>\frac{1}{r}\).

Combining (31)–(33) with \(2^{j_{1}^{*}}\sim (\frac{n}{ \ln n})^{\frac{1}{2(s-\frac{1}{r})+2(v+1)+1}}\), one knows

$$\begin{aligned} Ee_{4}=Ee_{41}'+Ee_{42}' \lesssim \biggl(\frac{\ln n}{n}\biggr)^{\frac{p-r}{2}}2^{-j _{1}^{\ast }\omega } +2^{-j_{1}^{*}(s-\frac{1}{r}+\frac{1}{p})p} \lesssim \biggl(\frac{\ln n}{n}\biggr)^{\alpha p} \end{aligned}$$

thanks to \(\omega <0\) and \(\alpha =\min \{\frac{s}{2s+2(v+1)+1}, \frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\}=\frac{s- \frac{1}{r}+ \frac{1}{p}}{2(s-\frac{1}{r})+2(v+1)+1}\). This completes the proof of Theorem 5.2. □
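The exponent balance behind the choice of \(2^{j_{1}^{*}}\) in the case ω<0 can likewise be double-checked numerically (illustrative values only, log factors ignored):

```python
# Illustrative exponent check: with 2^{j1*} = (n/ln n)^{1/D'}, where
# D' = 2(s - 1/r) + 2(v+1) + 1 and omega < 0, both terms in Ee_4,
#   (ln n/n)^{(p-r)/2} 2^{-j1* omega}   and   2^{-j1* (s - 1/r + 1/p) p},
# behave like (ln n/n)^{alpha p} with alpha = (s - 1/r + 1/p)/D'.

s, r, p, v = 1.2, 1.0, 4.0, 1.0      # a parameter choice with omega < 0
Dp = 2 * (s - 1 / r) + 2 * (v + 1) + 1
omega = s * r + (v + 1.5) * (r - p)
alpha = (s - 1 / r + 1 / p) / Dp

exp1 = -(p - r) / 2 - omega / Dp     # exponent of (n/ln n), first term
exp2 = -(s - 1 / r + 1 / p) * p / Dp # exponent of (n/ln n), second term

print(omega < 0, abs(exp1 + alpha * p) < 1e-12, abs(exp2 + alpha * p) < 1e-12)
```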

5.2 Proof of lower bound

Finally, we are in a position to state and prove the lower bound.

Theorem 5.3

Assume \(s>0\) and \(r,q\in [1,+\infty ]\), then, for any \(p\in [1,+ \infty )\),

$$\begin{aligned} \inf_{\widehat{f}'_{{\sigma }^{2}}}\sup_{f'_{{\sigma }^{2}}\in B_{r,q}^{s}(M)} E \bigl\Vert \widehat{f}'_{{\sigma } ^{2}}-f'_{{\sigma }^{2}} \bigr\Vert _{p}^{p} \gtrsim n^{-\frac{(s-\frac{1}{r}+ \frac{1}{p})p}{2(s-\frac{1}{r})+2(v+1)+1}}, \end{aligned}$$

where \(\widehat{f}'_{{\sigma }^{2}}\) runs over all possible estimators of \(f'_{{\sigma }^{2}}\).

Proof

It is sufficient to construct density functions \(h_{k}\) such that \(h'_{k}\in B_{r,q}^{s}(M)\) and

$$\begin{aligned} \sup_{k} E \bigl\Vert \widehat{f}'_{{\sigma }^{2}}-h'_{k} \bigr\Vert _{p}^{p} \gtrsim n^{-\frac{(s-\frac{1}{r}+ \frac{1}{p})p}{2(s-\frac{1}{r})+2(v+1)+1}}. \end{aligned}$$

Define \(g(x)=Cm(x)\), where \(m\in C_{0}^{\infty }\) satisfies \(\operatorname{supp} m \subseteq [0,1]\) and \(\int _{\mathbb{R}} m(x)\,dx=0\), and \(C>0\) is a constant. Here \(C_{0}^{\infty }\) stands for the set of all infinitely differentiable and compactly supported functions. Furthermore, one chooses a density function \(h_{0}\) satisfying \(h_{0}\in B_{r,q}^{s+1}( \frac{M}{2})\), \(\operatorname{supp} h_{0}\subseteq [0,1]\) and \(h_{0}(x)\geq M _{1}>0\) for \(x\in [\frac{1}{2},\frac{3}{4}]\).

Take \(a_{j}=2^{-j(s-\frac{1}{r}+\frac{1}{2}+v+1)}\) and

$$\begin{aligned} h_{1}(x)=h_{0}(x)+a_{j}G_{v}(g_{j,l}) (x), \end{aligned}$$
(34)

where G is given by (5) and \(g_{j,l}(x)=2^{\frac{j}{2}}g(2^{j}x-l)\) with \(l=2^{j-1}\).

First, one checks that \(h_{1}\) is a density function. Since \(\operatorname{supp} g_{j,l}\subseteq [\frac{1}{2},\frac{3}{4}]\) by \(\operatorname{supp} m\subseteq [0,1]\) and j large enough, one finds \(h_{1}(x)\geq 0\) for \(x\notin [\frac{1}{2},\frac{3}{4}]\). It is easy to calculate that

$$\begin{aligned} G_{v}(g_{j,l}) (x)=(-1)^{v}\sum _{u=1}^{v}C_{u}x^{u}(g_{j,l})^{(u)}(x), \end{aligned}$$
(35)

where \(C_{u}>0\) is a constant. Then, for \(x\in [\frac{1}{2}, \frac{3}{4}]\) and large j,

$$\begin{aligned} h_{1}(x) &\geq M_{1}- \Biggl\vert a_{j}\sum_{u=1}^{v}C_{u}x^{u}(g_{j,l})^{(u)}(x) \Biggr\vert \\ &\geq M_{1}-a_{j}2^{\frac{j}{2}}\sum _{u=1}^{v}C_{u}2^{uj} \bigl\Vert {g}^{(u)}\bigl(2^{j}\cdot -l\bigr) \bigr\Vert _{\infty } \\ &\geq M_{1}-2^{-j(s-\frac{1}{r}+1)}\sum_{u=1}^{v}C_{u} \bigl\Vert g ^{(u)} \bigr\Vert _{\infty }\geq 0 \end{aligned}$$
(36)

thanks to \(h_{0}(x)|_{[\frac{1}{2},\frac{3}{4}]}\geq M_{1}\) and \(a_{j}=2^{-j(s-\frac{1}{r}+\frac{1}{2}+v+1)}\). On the other hand, \(\int _{\mathbb{R}} g(x)\,dx=\int _{\mathbb{R}} Cm(x)\,dx=0\) and \(\operatorname{supp} g_{j,l}\subseteq [\frac{1}{2},\frac{1}{2}+2^{-j}]\) by \(\operatorname{supp} m\subseteq [0,1]\) and \(l=2^{j-1}\). This with \(m\in C_{0}^{\infty }\) and \(g(x)=Cm(x)\) shows

$$\begin{aligned} \int x^{u}(g_{j,l})^{(u)}(x) \,dx&=x^{u}(g_{j,l})^{(u-1)} (x) | _{\frac{1}{2}}^{\frac{1}{2}+2^{-j}}-u \int x^{u-1}(g_{j,l})^{(u-1)}(x)\,dx \\ &=\cdots =(-1)^{m}\frac{u!}{(u-m)!} \int x^{u-m}(g_{j,l})^{(u-m)} (x) \,dx\\ &=(-1)^{u}u! \int g_{j,l}(x)\,dx=0 \end{aligned}$$

for any \(u\in \{1,\ldots , v\}\). Therefore,

$$\begin{aligned} \int h_{1}(x)\,dx= \int h_{0}(x)\,dx+(-1)^{v}a_{j}\sum _{u=1}^{v}C _{u} \int x^{u}(g_{j,l})^{(u)}(x)\,dx=1. \end{aligned}$$

From this with (36) one concludes that \(h_{1}\) is a density function.
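The vanishing-moment computation above rests on repeated integration by parts with vanishing boundary terms. The sketch below illustrates the identity \(\int x^{u}\varphi ^{(u)}(x)\,dx=(-1)^{u}u!\int \varphi (x)\,dx\) numerically for \(u=2\), using the polynomial \(\varphi (x)=x^{2}(1-x)^{2}\) as a stand-in for \(g_{j,l}\) (it is not \(C_{0}^{\infty }\), but its relevant boundary derivatives vanish, which suffices here):

```python
# Numerical illustration of the moment identity
#   int x^u phi^(u)(x) dx = (-1)^u u! int phi(x) dx
# for u = 2 and phi(x) = x^2 (1-x)^2 on [0,1]; phi(0)=phi(1)=phi'(0)=phi'(1)=0,
# so the boundary terms in the two integrations by parts vanish.

def trapezoid(f, a, b, n=20000):
    """Composite trapezoid rule on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + (f(a) + f(b)) / 2)

phi = lambda x: x**2 * (1 - x)**2
phi2 = lambda x: 2 - 12 * x + 12 * x**2        # second derivative of phi

lhs = trapezoid(lambda x: x**2 * phi2(x), 0.0, 1.0)  # int x^2 phi''(x) dx
rhs = 2 * trapezoid(phi, 0.0, 1.0)                   # (-1)^2 * 2! * int phi dx

print(abs(lhs - 1 / 15) < 1e-6, abs(lhs - rhs) < 1e-6)
```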

Next, one shows \(h'_{0}, h'_{1}\in B_{r,q}^{s}(M)\). Clearly, \(h'_{0}\in B_{r,q}^{s}(M)\) by \(h_{0}\in B_{r,q}^{s+1}(\frac{M}{2})\). Hence, one only needs to prove \(h'_{1}\in B_{r,q}^{s}(M)\).

By (34) and (35),

$$\begin{aligned} \Vert h_{1} \Vert _{B_{r,q}^{s+1}}\leq \Vert h_{0} \Vert _{B_{r,q}^{s+1}}+a_{j}\sum _{u=1}^{v}C_{u} \bigl\Vert x^{u}(g_{j,l})^{(u)}(x) \bigr\Vert _{B_{r,q}^{s+1}}. \end{aligned}$$
(37)

On the other hand, for each \(\tau \in \{0,\ldots ,u\}\),

$$\begin{aligned} \biggl\Vert \biggl[2^{j}\biggl(x-\frac{1}{2}\biggr) \biggr]^{u-\tau }g^{(u)}\biggl[2^{j}\biggl(x- \frac{1}{2}\biggr)\biggr] \biggr\Vert _{B_{r,q}^{s+1}} \leq 2^{j(s+1-\frac{1}{r})} \bigl\Vert x^{u-\tau }g^{(u)}(x) \bigr\Vert _{B_{r,q}^{s+1}} \end{aligned}$$

because of Lemma 4.4. Combining this with \(l=2^{j-1}\) and \([2^{j}(x-\frac{1}{2}+\frac{1}{2})]^{u}= \sum_{\tau =0}^{u}C ^{\tau }_{u}[2^{j}(x-\frac{1}{2})]^{u-\tau }2^{-\tau }2^{\tau j}\), one obtains

$$\begin{aligned} \bigl\Vert x^{u}(g_{j,l})^{(u)}(x) \bigr\Vert _{B_{r,q}^{s+1}} &=2^{\frac{j}{2}} \biggl\Vert \biggl[2^{j}\biggl(x- \frac{1}{2}+\frac{1}{2}\biggr) \biggr]^{u}g^{(u)}\biggl[2^{j}\biggl(x- \frac{1}{2}\biggr)\biggr] \biggr\Vert _{B_{r,q} ^{s+1}} \\ &\leq 2^{(u+\frac{1}{2})j}\sum_{\tau =0}^{u}C^{\tau }_{u} \biggl\Vert \biggl[2^{j}\biggl(x- \frac{1}{2}\biggr) \biggr]^{u-\tau }g^{(u)}\biggl[2^{j}\biggl(x- \frac{1}{2}\biggr)\biggr] \biggr\Vert _{B_{r,q}^{s+1}} \\ &\leq 2^{j(s-\frac{1}{r}+\frac{1}{2}+u+1)}\sum_{\tau =0}^{u}C ^{\tau }_{u} \bigl\Vert x^{u-\tau }g^{(u)}(x) \bigr\Vert _{B_{r,q}^{s+1}}. \end{aligned}$$
(38)

Denote \(M'=\max \{C_{u},C_{u}^{\tau }: \tau =0,\ldots ,u; u=1,\ldots ,v\}\). Then the constant \(C>0\) in \(g(x)=Cm(x)\) can be chosen so small that

$$\begin{aligned} x^{u-\tau }g^{(u)}\in B_{r,q}^{s+1}\biggl( \frac{M}{2v^{2}M^{\prime 2}}\biggr) \end{aligned}$$

thanks to \(g(x)=Cm(x)\) and \(m\in C_{0}^{\infty }\subseteq B_{r,q}^{s+1}\). This with (37) and (38) leads to

$$\begin{aligned} \Vert h_{1} \Vert _{B_{r,q}^{s+1}}\leq \Vert h_{0} \Vert _{B_{r,q}^{s+1}}+a_{j}\sum_{u=1}^{v} uM^{\prime 2}2^{j(s-\frac{1}{r}+\frac{1}{2}+u+1)}\frac{M}{2v ^{2}M^{\prime 2}}\leq \frac{M}{2}+ \frac{M}{2}=M, \end{aligned}$$

because \(h_{0}\in B_{r,q}^{s+1}(\frac{M}{2})\) and \(a_{j}=2^{-j(s- \frac{1}{r}+\frac{1}{2}+v+1)}\). Therefore, \(h_{1}\in B_{r,q}^{s+1}(M)\) and \(h'_{1}\in B_{r,q}^{s}(M)\).

According to (35),

$$\begin{aligned} \bigl[G_{v}(g_{j,l})(x)\bigr] '=(-1)^{v} \sum_{u=0}^{v}C'_{u}x^{u}(g_{j,l})^{(u+1)}(x), \end{aligned}$$

where \(C'_{u}>0\) is a constant and \(l=2^{j-1}\). Hence,

$$\begin{aligned} \bigl\Vert h'_{1}-h'_{0} \bigr\Vert _{p} &= a_{j} \bigl\Vert \bigl[G_{v}(g_{j,l})\bigr]' \bigr\Vert _{p} \\ &= a_{j}2^{\frac{3j}{2}} \Biggl\Vert \sum _{u=0}^{v}C'_{u} \biggl[2^{j}\biggl(x- \frac{1}{2}+\frac{1}{2}\biggr) \biggr]^{u}g^{(u+1)} \biggl[2^{j}\biggl(x- \frac{1}{2}\biggr)\biggr] \Biggr\Vert _{p}. \end{aligned}$$
(39)

On the other hand, by using \([2^{j}(x-\frac{1}{2}+\frac{1}{2})]^{u}= \sum_{\tau =0}^{u} C^{\tau }_{u}[2^{j}(x-\frac{1}{2})]^{u- \tau }2^{-\tau }2^{\tau j}\), one concludes that

$$\begin{aligned} &\Biggl\Vert \sum_{u=0}^{v}C'_{u} \biggl[2^{j}\biggl(x-\frac{1}{2}+\frac{1}{2}\biggr) \biggr]^{u}g ^{(u+1)} \biggl[2^{j}\biggl(x- \frac{1}{2}\biggr)\biggr] \Biggr\Vert _{p} \\ &\quad\geq \Biggl\Vert \sum_{u=0}^{v}C'_{u} 2^{-u}2^{uj}g^{(u+1)}\biggl[2^{j} \biggl(x-\frac{1}{2}\biggr)\biggr] \Biggr\Vert _{p} \\ &\qquad{}- \Biggl\Vert \sum_{u=0}^{v}C'_{u} \Biggl\{ \sum_{\tau =0}^{u-1}C^{\tau } _{u}\biggl[2^{j} \biggl(x-\frac{1}{2}\biggr) \biggr]^{u-\tau }2^{-\tau } 2^{\tau j}\Biggr\} g^{(u+1)} \biggl[2^{j}\biggl(x- \frac{1}{2}\biggr)\biggr] \Biggr\Vert _{p} \end{aligned}$$
(40)

and

$$\begin{aligned} &\Biggl\Vert \sum_{u=0}^{v}C'_{u}2^{-u}2^{uj}g^{(u+1)} \biggl[2^{j}\biggl(x- \frac{1}{2}\biggr)\biggr] \Biggr\Vert _{p} \\ &\quad \geq \biggl\Vert C'_{v}2^{-v}2^{vj}g^{(v+1)} \biggl[2^{j}\biggl(x- \frac{1}{2}\biggr)\biggr] \biggr\Vert _{p} \\ &\qquad{}- \Biggl\Vert \sum_{u=0}^{v-1}C'_{u} 2^{-u}2^{uj}g^{(u+1)}\biggl[2^{j} \biggl(x- \frac{1}{2}\biggr)\biggr] \Biggr\Vert _{p}. \end{aligned}$$
(41)

Let \(x'=2^{j}(x-\frac{1}{2})\). Then there exists a constant \(C'>0\) such that

$$\begin{aligned} \bigl\Vert h'_{1}-h'_{0} \bigr\Vert _{p} \geq {}& a_{j}2^{\frac{3}{2}j}2^{-\frac{j}{p}} \Biggl\{ C'_{v}2^{-v}2^{vj} \bigl\Vert g^{(v+1)} \bigr\Vert _{p}-2^{(v-1)j}\sum _{u=0} ^{v-1}C'_{u}2^{-u} \bigl\Vert g^{(u+1)} \bigr\Vert _{p} \\ &{}-2^{(v-1)j}\sum_{u=0}^{v}C'_{u} \sum_{\tau =0}^{u-1}C ^{\tau }_{u}2^{-\tau } \bigl\Vert \bigl(x'\bigr)^{u-\tau }g^{(u+1)} \bigr\Vert _{p} \Biggr\} \\ \geq {}& C'a_{j}2^{\frac{3}{2}j}2^{-\frac{j}{p}}2^{jv} =C'2^{-j(s- \frac{1}{r}+\frac{1}{p})}:=\delta _{j} \end{aligned}$$

thanks to (39)–(41), \(g\in C_{0}^{\infty }\) and \(a_{j}=2^{-j(s-\frac{1}{r}+\frac{1}{2}+v+1)}\).

Define \(A_{k}=\{ \Vert \widehat{f}'_{{\sigma }^{2}}-h'_{k} \Vert _{p}<\frac{\delta _{j}}{2}\}\ (k\in \{0,1\})\). Then \(A_{0}\cap A_{1}= \emptyset \) thanks to \(\Vert h'_{1}-h'_{0} \Vert _{p}\geq \delta _{j}\). According to Lemma 4.5,

$$\begin{aligned} \sup_{k\in \{0,1\}} P_{f_{s_{k}}}^{n} \bigl(A_{k}^{c}\bigr)\geq \min \biggl\{ \frac{1}{2}, \exp \bigl(-3e^{-1}-\kappa _{1}\bigr)\biggr\} , \end{aligned}$$
(42)

where \(P_{f}^{n}\) stands for the probability measure corresponding to the density function \(f^{n}(x):=f(x_{1})f(x_{2})\cdots f(x_{n})\). Hence,

$$\begin{aligned} E \bigl\Vert \widehat{f}'_{\sigma ^{2}}-h'_{k} \bigr\Vert _{p}^{p} \geq \biggl( \frac{\delta _{j}}{2} \biggr)^{p}P_{f_{s_{k}}}^{n}\biggl( \bigl\Vert \widehat{f}'_{{\sigma } ^{2}}-h'_{k} \bigr\Vert _{p}\geq \frac{\delta _{j}}{2}\biggr) =\biggl(\frac{\delta _{j}}{2} \biggr)^{p}P _{f_{s_{k}}}^{n}\bigl(A_{k}^{c} \bigr). \end{aligned}$$

This with (42) implies

$$\begin{aligned} \sup_{k\in \{0,1\}} E \bigl\Vert \widehat{f}'_{\sigma ^{2}}-h'_{k} \bigr\Vert _{p} ^{p}\geq \sup_{k\in \{0,1\}}\biggl( \frac{\delta _{j}}{2}\biggr)^{p}P_{f_{s _{k}}}^{n} \bigl(A_{k}^{c}\bigr)\geq \biggl(\frac{\delta _{j}}{2} \biggr)^{p}\min \biggl\{ \frac{1}{2},\exp \bigl(-3e^{-1}- \kappa _{1}\bigr)\biggr\} . \end{aligned}$$
(43)

Next, one shows \(\kappa _{1}\leq C_{0}na_{j}^{2}\). Recall that

$$\begin{aligned} \kappa _{1}=\inf_{0\leq u\leq 1}\sum_{k\neq u} K\bigl(P_{f_{s_{k}}}^{n}, P_{f_{s_{u}}}^{n}\bigr)\leq K\bigl(P_{f_{s_{1}}}^{n}, P_{f_{s_{0}}}^{n}\bigr). \end{aligned}$$
(44)

Then

$$\begin{aligned} K\bigl(P_{f_{s_{1}}}^{n}, P_{f_{s_{0}}}^{n}\bigr) \leq \sum_{i=1}^{n} \int f_{s_{1}}(x_{i})\ln \frac{f_{s_{1}}(x_{i})}{f_{s_{0}}(x_{i})}\,dx _{i}=n \int f_{s_{1}}(x)\ln \frac{f_{s_{1}}(x)}{f_{s_{0}}(x)}\,dx \end{aligned}$$

due to \(f_{s_{0}}^{n}(x)=\prod_{j=1}^{n}f_{s_{0}}(x_{j})\) and \(f_{s_{1}}^{n}(x)=\prod_{j=1}^{n}f_{s_{1}}(x_{j})\). Since \(\ln u \leq u-1\) holds for \(u>0\), one knows

$$\begin{aligned} K\bigl(P_{f_{s_{1}}}^{n}, P_{f_{s_{0}}}^{n}\bigr) & \leq n \int f_{s_{1}}(x) \biggl( \frac{f _{s_{1}}(x)}{f_{s_{0}}(x)}-1\biggr)\,dx \\ &= n \int f^{-1}_{s_{0}}(x)\bigl| f_{s_{1}}(x)-f_{s_{0}}(x) \bigr| ^{2} \,dx. \end{aligned}$$
(45)

According to Chesneau [8], \(f_{s_{k}}(x)=\frac{1}{(v-1)!}\int _{x}^{1}(\ln y-\ln x)^{v-1}h_{k}(y) \frac{1}{y}\,dy\). Then

$$\begin{aligned} f_{s_{1}}(x)-f_{s_{0}}(x) &=\frac{a_{j}}{(v-1)!} \int _{x}^{\frac{1}{2}+2^{-j}}( \ln y-\ln x)^{v-1}G_{v}(g_{j,l}) (y)\frac{1}{y}\,dy \\ &=-\frac{a_{j}}{(v-1)!} \int _{x}^{\frac{1}{2}+2^{-j}}(\ln y-\ln x)^{v-1}\bigl[G _{v-1}(g_{j,l}) (y)\bigr]'\,dy \end{aligned}$$

because of (34) and \(G_{v}(g_{j,l})(x)=-x[G_{v-1}(g_{j,l})(x)]'\).

Integrating by parts repeatedly,

$$\begin{aligned} f_{s_{1}}(x)-f_{s_{0}}(x) &=-\frac{a_{j}}{(v-2)!} \int _{x}^{ \frac{1}{2}+2^{-j}} (\ln y-\ln x)^{v-2} \bigl[G_{v-2}(g_{j,l}) (y)\bigr]'\,dy= \cdots \\ &=-\frac{a_{j}}{(v-m)!} \int _{x}^{\frac{1}{2}+2^{-j}}(\ln y-\ln x)^{v-m} \bigl[G_{v-m}(g_{j,l}) (y)\bigr]'\,dy=\cdots \\ &=-a_{j} \int _{x}^{\frac{1}{2}+2^{-j}}(g_{j,l})'(y) \,dy=a_{j}g_{j,l}(x), \end{aligned}$$
(46)

because \(l=2^{j-1}\) and \((\ln y-\ln x)^{v-m}G_{v-m}(g_{j,l})(y)| _{x}^{\frac{1}{2}+2^{-j}}=0\) for any \(m\in \{1,\ldots ,v-1\}\). On the other hand, for each \(x\in [\frac{1}{2}, \frac{1}{2}+2^{-j}]\) and large j,

$$\begin{aligned} f_{s_{0}}(x) &\geq \frac{M_{1}}{(v-1)!} \int _{x}^{\frac{3}{4}}(\ln y- \ln x)^{v-1} \frac{1}{y}\,dy=\frac{M_{1}}{v!}\biggl(\ln \frac{3}{4}-\ln x \biggr)^{v} \\ &\geq \frac{M_{1}}{v!}\biggl[\ln \frac{3}{4}-\ln \biggl( \frac{1}{2}+2^{-j}\biggr)\biggr]^{v} \geq M_{2}>0 \end{aligned}$$
(47)

thanks to \(f_{s_{0}}(x)=\frac{1}{(v-1)!}\int _{x}^{1}(\ln y-\ln x)^{v-1}h _{0}(y) \frac{1}{y}\,dy\) and \(h_{0}(x)|_{[\frac{1}{2},\frac{3}{4}]} \geq M_{1}\). Combining with (44)–(47), one obtains

$$\begin{aligned} \kappa _{1}\leq M_{2}^{-1}n \int\bigl| a_{j}g_{j,l}(x)\bigr| ^{2} \,dx \leq C _{0}na_{j}^{2}, \end{aligned}$$

where \(C_{0}>0\) is a constant.

Choose \(2^{j}\thicksim n^{\frac{1}{2(s-\frac{1}{r})+2(v+1)+1}}\) and recall \(a_{j}=2^{-j(s-\frac{1}{r}+\frac{1}{2}+v+1)}\). Then

$$\begin{aligned} \kappa _{1}\lesssim na_{j}^{2}=n2^{-j[2(s-\frac{1}{r})+2(v+1)+1]} \thicksim 1 \quad \text{and} \quad e^{-\kappa _{1}}\gtrsim 1. \end{aligned}$$

Substituting \(\delta _{j}\thicksim 2^{-j(s-\frac{1}{r}+\frac{1}{p})}\), \(2^{j}\thicksim n^{\frac{1}{2(s-\frac{1}{r})+2(v+1)+1}}\) into (43), one obtains

$$\begin{aligned} \sup_{k\in \{0,1\}} E \bigl\Vert \widehat{f}'_{\sigma ^{2}}-h'_{k} \bigr\Vert _{p} ^{p}\gtrsim \delta _{j}^{p} \gtrsim n^{-\frac{(s-\frac{1}{r} + \frac{1}{p})p}{2(s-\frac{1}{r})+2(v+1)+1}}, \end{aligned}$$

which is the desired conclusion. □

References

  1. Abbaszadeh, M., Chesneau, C., Doosti, H.: Multiplicative censoring: estimation of a density and its derivatives under the \(L^{p}\)-risk. REVSTAT 11(3), 255–276 (2013)

  2. Andersen, K., Hansen, M.: Multiplicative censoring: density estimation by a series expansion approach. J. Stat. Plan. Inference 98, 137–155 (2001)

  3. Asgharian, M., Carone, M., Fakoor, V.: Large-sample study of the kernel density estimators under multiplicative censoring. Ann. Stat. 40, 159–187 (2012)

  4. Blumensath, T., Davies, M.: Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27, 265–274 (2009)

  5. Cai, T.T.: Rates of convergence and adaption over Besov spaces under pointwise risk. Stat. Sin. 13, 881–902 (2003)

  6. Chaubey, Y.P., Chesneau, C., Doosti, H.: On linear wavelet density estimation: some recent developments. J. Indian Soc. Agric. Stat. 65, 169–179 (2011)

  7. Chaubey, Y.P., Chesneau, C., Doosti, H.: Adaptive wavelet estimation of a density from mixtures under multiplicative censoring. Statistics 49(3), 638–659 (2015)

  8. Chesneau, C.: Wavelet estimation of a density in a GARCH-type model. Commun. Stat., Theory Methods 42, 98–117 (2013)

  9. Chesneau, C., Doosti, H.: Wavelet linear density estimation for a GARCH model under various dependence structures. J. Iran. Stat. Soc. 11, 1–21 (2012)

  10. DeVore, R., Kerkyacharian, G., Picard, D., Temlyakov, V.: On mathematical methods of learning. Found. Comput. Math. 6, 3–58 (2006). Special issue dedicated to S. Smale

  11. Donoho, D.L., Johnstone, M.I., Kerkyacharian, G., Picard, D.: Density estimation by wavelet thresholding. Ann. Stat. 24(2), 508–539 (1996)

  12. Guo, H.J., Liu, Y.M.: Convergence rates of multivariate regression estimators with errors-in-variables. Numer. Funct. Anal. Optim. 38(12), 1564–1588 (2017)

  13. Härdle, W., Kerkyacharian, G., Picard, D., Tsybakov, A.: Wavelet, Approximation and Statistical Applications. Springer, New York (1998)

  14. Li, R., Liu, Y.M.: Wavelet optimal estimations for a density with some additive noises. Appl. Comput. Harmon. Anal. 36, 416–433 (2014)

  15. Prakasa Rao, B.L.S.: Nonparametric Functional Estimation. Academic Press, Orlando (1983)

  16. Prakasa Rao, B.L.S.: Nonparametric function estimation: an overview. In: Ghosh, S. (ed.) Asymptotics, Nonparametrics and Time Series, pp. 461–509. Marcel Dekker, New York (1999)

  17. Prakasa Rao, B.L.S.: Wavelet estimation for derivative of a density in a GARCH-type model. Commun. Stat., Theory Methods 46, 2396–2410 (2017)

  18. Vardi, Y.: Multiplicative censoring, renewal process, deconvolution and decreasing density: nonparametric estimation. Biometrika 76, 751–761 (1989)

  19. Vardi, Y., Zhang, C.H.: Large sample study of empirical distributions in a random multiplicative censoring model. Ann. Stat. 20, 1022–1039 (1992)

Acknowledgements

The authors would like to thank Professor Youming Liu for his important comments and suggestions.

Availability of data and materials

Not applicable.

Funding

This paper is supported by the National Natural Science Foundation of China (No. 11771030) and the Beijing Natural Science Foundation (No. 1172001).

Author information

Contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kaikai Cao.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Cao, K., Wei, J. Adaptive wavelet estimations for the derivative of a density in GARCH-type model. J Inequal Appl 2019, 106 (2019). https://doi.org/10.1186/s13660-019-2056-0
