Wasserstein bounds in CLT of approximative MCE and MLE of the drift parameter for Ornstein-Uhlenbeck processes observed at high frequency
Journal of Inequalities and Applications volume 2023, Article number: 62 (2023)
Abstract
This paper deals with the rate of convergence for the central limit theorem of estimators of the drift coefficient, denoted θ, for the Ornstein-Uhlenbeck process \(X := \{X_{t},t\geq 0\}\) observed at high frequency. We provide an approximate minimum contrast estimator and an approximate maximum likelihood estimator of θ, namely \(\widetilde{\theta}_{n}:= {1}/{ (\frac{2}{n} \sum_{i=1}^{n}X_{t_{i}}^{2} )}\), and \(\widehat{\theta}_{n}:= -{\sum_{i=1}^{n} X_{t_{i-1}} (X_{t_{i}}-X_{t_{i-1}} )}/{ (\Delta _{n} \sum_{i=1}^{n} X_{t_{i-1}}^{2} )}\), respectively, where \(t_{i} = i \Delta _{n}\), \(i=0,1,\ldots , n \), \(\Delta _{n}\rightarrow 0\). We provide Wasserstein bounds in the central limit theorem for \(\widetilde{\theta}_{n}\) and \(\widehat{\theta}_{n}\).
1 Introduction
Let \(X:= \{X_{t}, t \geq 0 \}\) be the Ornstein-Uhlenbeck (OU) process driven by Brownian motion \(\{W_{t},t\geq 0 \} \). More precisely, X is the solution of the following linear stochastic differential equation
$$\begin{aligned} dX_{t}=-\theta X_{t}\,dt+dW_{t}, \quad X_{0}=0, t \geq 0, \end{aligned}$$(1.1)
where \(\theta >0\) is an unknown parameter.
The drift parametric estimation for the OU process (1.1) has been widely studied in the literature. Several methods can be used to estimate the parameter θ in (1.1), such as maximum likelihood estimation, least squares estimation, and minimum contrast estimation; we refer to the monographs [14, 15]. While there is extensive literature on the asymptotic distribution of estimators of θ based on discrete observations of X, only a few works have been dedicated to the rates of weak convergence of the distributions of the estimators to the standard normal distribution.
From a practical point of view, in parametric inference, it is more realistic and interesting to consider asymptotic estimation for (1.1) based on discrete observations. Thus, let us assume that the process X given in (1.1) is observed equidistantly in time with the step size \(\Delta _{n}\): \(t_{i}=i \Delta _{n}\), \(i=0, \ldots , n\), and \(T=n \Delta _{n}\) denotes the length of the “observation window”. Here we are concerned with the approximate minimum contrast estimator (AMCE)
$$\begin{aligned} \widetilde{\theta}_{n}:= \frac{1}{\frac{2}{n} \sum_{i=1}^{n}X_{t_{i}}^{2}}, \end{aligned}$$
and the approximate maximum likelihood estimator (AMLE)
$$\begin{aligned} \widehat{\theta}_{n}:= - \frac{\sum_{i=1}^{n} X_{t_{i-1}} (X_{t_{i}}-X_{t_{i-1}} )}{\Delta _{n} \sum_{i=1}^{n} X_{t_{i-1}}^{2}}, \end{aligned}$$
which are discrete versions of the minimum contrast estimator (MCE) and the maximum likelihood estimator (MLE) defined, respectively, as follows:
$$\begin{aligned} \bar{\theta}_{T}:= \frac{1}{\frac{2}{T} \int _{0}^{T}X_{s}^{2}\,ds} \quad \text{and} \quad \check{\theta}_{T}:= - \frac{\int _{0}^{T} X_{s}\,dX_{s}}{\int _{0}^{T} X_{s}^{2}\,ds}. \end{aligned}$$
Recall that, for two random variables X and Y, the Wasserstein metric is given by
$$\begin{aligned} d_{W} (X, Y ):= \sup_{f \in \operatorname{Lip}(1)} \bigl\vert E \bigl[f(X) \bigr]-E \bigl[f(Y) \bigr] \bigr\vert , \end{aligned}$$
where \(\operatorname{Lip}(1)\) is the set of all Lipschitz functions with Lipschitz constant ⩽1.
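As an aside (not part of the paper), for one-dimensional random variables the Wasserstein distance between two equal-size empirical samples reduces to the mean absolute difference of the sorted samples. The following illustrative Python sketch (the function name is ours) checks this on a location shift, for which the distance equals the shift exactly:

```python
import random

def wasserstein_1d(xs, ys):
    # 1-Wasserstein distance between two EQUAL-SIZE empirical samples:
    # by 1-D optimal transport it is the mean absolute difference of
    # the order statistics.
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(20000)]
y = [v + 0.5 for v in x]           # shift by 0.5: exact distance is 0.5
print(wasserstein_1d(x, y))        # ~0.5
```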
Rates of convergence in the central limit theorem of the MCE \(\bar{\theta}_{T}\) and MLE \(\check{\theta}_{T}\) under the Kolmogorov and Wasserstein distances have been studied as follows: There exist \(c, C>0\) depending only on θ such that
where \(\mathcal{N}\sim \mathcal{N} (0,1 )\) denotes a standard normal random variable.
The purpose of this manuscript is to derive upper bounds of the Wasserstein distance for the rates of convergence of the distribution of the AMCE \(\widetilde{\theta}_{n}\) and the AMLE \(\widehat{\theta}_{n}\). These estimators are unbiased, and we show that they are consistent and admit a central limit theorem as \(\Delta _{n}\rightarrow 0\) and \(T\rightarrow \infty \). Moreover, we bound the rate of convergence to the normal distribution in terms of the Wasserstein distance.
Note that the papers [2] and [4] provided explicit upper bounds for the Kolmogorov distance for the rates of convergence of the distribution of \(\widetilde{\theta}_{n}\) and \(\widehat{\theta}_{n}\), respectively. On the other hand, [7] provided Wasserstein bounds in the central limit theorem for \(\widetilde{\theta}_{n}\). Let us describe what is proved in this direction:
- Theorem 2.1 in [2] shows that there exists \(C>0\) depending on θ such that
$$\begin{aligned} \sup_{x \in \mathbb{R}} \biggl\vert P \biggl(\sqrt{ \frac{T}{2 \theta}} (\widetilde{\theta}_{n}-\theta ) \leqslant x \biggr)-P (\mathcal{N}\leqslant x ) \biggr\vert \leq C \max \biggl(\sqrt{ \frac{\log T}{T}},\frac{T^{4}}{n^{2}\log T} \biggr). \end{aligned}$$(1.2)
- Theorem 2.3 in [4] proves that there exists \(C>0\) depending on θ such that
$$\begin{aligned} \sup_{x \in \mathbb{R}} \biggl\vert P \biggl(\sqrt{ \frac{T}{2 \theta}} (\widehat{\theta}_{n}-\theta ) \leqslant x \biggr)-P (\mathcal{N}\leqslant x ) \biggr\vert \leq C \max \biggl(\sqrt{ \frac{\log T}{T}},\frac{T^{2}}{n\log T} \biggr). \end{aligned}$$(1.3)
- Theorem 5.4 in [7] establishes that there exists \(C>0\) depending on θ such that
$$\begin{aligned} d_{W} \biggl(\sqrt{\frac{T}{2\theta}} (\widetilde{ \theta}_{n}- \theta ), \mathcal{N} \biggr) \leq C\max \biggl( \frac{1}{\sqrt{T}},\sqrt{\frac{T^{2}}{n}} \biggr). \end{aligned}$$(1.4)
Remark 1.1
Note that in [2, Theorem 2.1], [4, Theorem 2.3], and [7, Theorem 5.4], the asymptotic normality of the distribution of \(\widetilde{\theta}_{n}\) and \(\widehat{\theta}_{n}\) requires \(n\Delta _{n}^{2}=\frac{T^{2}}{n} \rightarrow 0\) and \(T \rightarrow \infty \). However, Theorem 3.6 and Theorem 4.1, which are stated and proved below, show that the asymptotic normality of the distribution of \(\widetilde{\theta}_{n}\) and \(\widehat{\theta}_{n}\), respectively, only requires \(\Delta _{n}=\frac{T}{n} \rightarrow 0\) and \(T \rightarrow \infty \).
The aim of the present paper is to provide new explicit bounds for the rate of convergence in the CLT of the estimators \(\widetilde{\theta}_{n}\) and \(\widehat{\theta}_{n}\) under the Wasserstein metric as follows: There exists a constant \(C>0\) such that, for all \(n\geq 1\), \(T>0\),
see Theorem 3.6, and
see Theorem 4.1.
Remark 1.2
The estimates (1.5) and (1.6) show that we have improved the bounds on the error of normal approximation for \(\widetilde{\theta}_{n}\) and \(\widehat{\theta}_{n}\). In other words, the bounds obtained in (1.5) and (1.6) are sharper than those in (1.2), (1.3), and (1.4).
To finish this introduction, we note the general structure of this paper. Section 2 contains some preliminaries presenting the tools needed for the analysis of the Wiener space, including Wiener chaos calculus and Malliavin calculus. Upper bounds for the rates of convergence of the distribution of the AMCE \(\widetilde{\theta}_{n}\) and the AMLE \(\widehat{\theta}_{n}\) are provided in Sect. 3 and Sect. 4, respectively.
2 Preliminaries
This section gives a brief overview of some useful facts from the Malliavin calculus on Wiener space. Some of the results presented here are essential for the proofs in the present paper. For our purposes, we focus on special cases that are relevant to our setting and omit the general high-level theory. We direct the interested reader to [18, Chap. 1] and [16, Chap. 2].
The first step is to identify the general centered Gaussian process \((Z_{t})_{t\geq 0}\) with an isonormal Gaussian process \(X = \{ X(h), h \in \mathcal{H}\}\) for some Hilbert space \(\mathcal{H}\), that is, X is a centered Gaussian family defined on a common probability space \((\Omega , \mathcal{F}, P)\) satisfying, for every \(h_{1}, h_{2} \in \mathcal{H}\), \(E [ X(h_{1}) X(h_{2}) ] = \langle h_{1}, h_{2} \rangle _{\mathcal{H}}\).
One can define \(\mathcal{H}\) as the closure of real-valued step functions on \([0, \infty )\) with respect to the inner product \(\langle \mathbf{1}_{[0, t]}, \mathbf{1}_{[0, s]} \rangle _{ \mathcal{H}}= E[ Z_{t} Z_{s}]\). Note that \(X(\mathbf{1}_{[0, t]}) \overset{d}{=} Z_{t}\).
The next step involves the multiple Wiener-Itô integrals. The formal definition involves the concepts of the Malliavin derivative and divergence. We refer the reader to [18, Chap. 1] and [16, Chap. 2]. For our purposes, we define the multiple Wiener-Itô integral \(I_{p}\) via the Hermite polynomials \(H_{p}\). In particular, for \(h \in \mathcal{H}\) with \(\lVert h \rVert _{\mathcal{H}}= 1\), and any \(p \geq 1\),
$$\begin{aligned} I_{p} \bigl(h^{\otimes p} \bigr) = H_{p} \bigl(X(h) \bigr). \end{aligned}$$(2.1)
For \(p = 1\) and \(p = 2\), we have the following:
$$\begin{aligned} I_{1}(h) = X(h), \qquad I_{2} \bigl(h^{\otimes 2} \bigr) = X(h)^{2} - \lVert h \rVert _{\mathcal{H}}^{2}. \end{aligned}$$(2.2)
Note also that \(I_{0}\) can be taken to be the identity operator.
• Some notation for Hilbert spaces. Let \(\mathcal{H}\) be a Hilbert space. Given an integer \(q \geq 2\), the Hilbert spaces \(\mathcal{H}^{\otimes q}\) and \(\mathcal{H}^{\odot q}\) denote the qth tensor product and the qth symmetric tensor product of \(\mathcal{H}\), respectively. If \(f \in \mathcal{H}^{\otimes q}\) is given by \(f = \sum_{j_{1}, \ldots , j_{q}} a(j_{1}, \ldots , j_{q}) e_{j_{1}} \otimes \cdots \otimes e_{j_{q}}\), where \((e_{j})_{j \geq 1}\) is an orthonormal basis of \(\mathcal{H}\), then the symmetrization f̃ is given by
$$\begin{aligned} \tilde{f} = \frac{1}{q!} \sum_{\sigma} \sum_{j_{1}, \ldots , j_{q}} a \bigl(j_{\sigma (1)}, \ldots , j_{\sigma (q)} \bigr) e_{j_{1}} \otimes \cdots \otimes e_{j_{q}}, \end{aligned}$$
where the first sum runs over all permutations σ of \(\{1, \ldots , q\}\). Then f̃ is an element of \(\mathcal{H}^{\odot q}\). We also make use of the concept of contraction. For \(1 \leq r \leq p \wedge q\), the rth contraction of two tensor products \(e_{j_{1}} \otimes \cdots \otimes e_{j_{p}}\) and \(e_{k_{1}} \otimes \cdots \otimes e_{k_{q}}\) is an element of \(\mathcal{H}^{\otimes (p + q - 2r)}\) given by
$$\begin{aligned} (e_{j_{1}} \otimes \cdots \otimes e_{j_{p}}) \otimes _{r} (e_{k_{1}} \otimes \cdots \otimes e_{k_{q}}) = \Biggl[ \prod_{i=1}^{r} \langle e_{j_{p-r+i}}, e_{k_{i}} \rangle _{\mathcal{H}} \Biggr] e_{j_{1}} \otimes \cdots \otimes e_{j_{p-r}} \otimes e_{k_{r+1}} \otimes \cdots \otimes e_{k_{q}}. \end{aligned}$$
• Isometry property of integrals [16, Proposition 2.7.5]. Fix integers \(p, q \geq 1\) as well as \(f \in \mathcal{H}^{\odot p}\) and \(g \in \mathcal{H}^{\odot q}\). Then
$$\begin{aligned} E \bigl[I_{p}(f) I_{q}(g) \bigr] = \textstyle\begin{cases} p! \langle f, g \rangle _{\mathcal{H}^{\otimes p}} & \text{if } p=q, \\ 0 & \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
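The Hermite description of \(I_{p}\) makes the isometry concrete: for \(\lVert h \rVert _{\mathcal{H}}=1\), \(E [I_{p}(h^{\otimes p}) I_{q}(h^{\otimes q}) ] = E [H_{p}(N)H_{q}(N) ] = p!\,\delta _{pq}\) with \(N\sim \mathcal{N}(0,1)\). As an illustration (ours, not from the paper), the following Python sketch verifies this orthogonality exactly from the Gaussian moments:

```python
from math import factorial

def hermite(p):
    # Probabilists' Hermite polynomials via H_{k+1}(x) = x*H_k(x) - k*H_{k-1}(x),
    # stored as coefficient lists [c0, c1, ...] in increasing powers of x.
    H = [[1.0], [0.0, 1.0]]
    for k in range(1, p):
        x_times = [0.0] + H[k]                 # multiply H_k by x
        lower = [-k * c for c in H[k - 1]]     # -k * H_{k-1}
        size = max(len(x_times), len(lower))
        H.append([(x_times[i] if i < len(x_times) else 0.0)
                  + (lower[i] if i < len(lower) else 0.0) for i in range(size)])
    return H[p]

def gauss_moment(k):
    # E[N^k] for N ~ N(0,1): 0 for odd k, (k-1)!! for even k.
    if k % 2:
        return 0.0
    m = 1.0
    for j in range(1, k, 2):
        m *= j
    return m

def inner(p, q):
    # E[H_p(N) H_q(N)], computed exactly from Gaussian moments; equals p! if p == q, else 0.
    a, b = hermite(p), hermite(q)
    return sum(a[i] * b[j] * gauss_moment(i + j)
               for i in range(len(a)) for j in range(len(b)))

print(inner(3, 3), inner(2, 3), factorial(3))  # → 6.0 0.0 6
```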
• Product formula [16, Proposition 2.7.10]. Let \(p,q \geq 1\). If \(f \in \mathcal{H}^{\odot p}\) and \(g \in \mathcal{H}^{\odot q}\), then
$$\begin{aligned} I_{p}(f) I_{q}(g) = \sum_{r=0}^{p \wedge q} r! \binom{p}{r} \binom{q}{r} I_{p+q-2 r} (f \widetilde{\otimes}_{r} g ). \end{aligned}$$
• Hypercontractivity in Wiener Chaos. For every \(q\geq 1\), \({\mathcal{H}}_{q}\) denotes the qth Wiener chaos of W, defined as the closed linear subspace of \(L^{2}(\Omega )\) generated by the random variables \(\{H_{q}(W(h)),h\in {{\mathcal{H}}},\Vert h\Vert _{{\mathcal{H}}}=1\}\), where \(H_{q}\) is the qth Hermite polynomial. For any \(F \in \oplus _{l=1}^{q}{\mathcal{H}}_{l}\) (i.e., in a fixed sum of Wiener chaoses), we have
$$\begin{aligned} E \bigl[ \vert F \vert ^{p} \bigr] ^{1/p} \leq c_{p,q} E \bigl[ F^{2} \bigr] ^{1/2} \quad \text{for every } p\geq 2. \end{aligned}$$(2.6)
It should be noted that the constants \(c_{p,q}\) above are known with some precision when F is a single chaos term: indeed, by [16, Corollary 2.8.14], \(c_{p,q}= ( p-1 ) ^{q/2}\).
• Optimal fourth moment theorem. Let \(\mathcal{N}\) denote the standard normal law. Consider a sequence \(X:X_{n}\in {\mathcal{H}}_{q}\) such that \(EX_{n}=0\) and \(\operatorname{Var} [ X_{n} ] =1\), and assume that \(X_{n}\) converges in distribution to a normal law, which is equivalent to \(\lim_{n}E [ X_{n}^{4} ] =3\). Then we have the optimal estimate for the total variation distance \(d_{\mathrm{TV}} ( X_{n},\mathcal{N} ) \), known as the optimal 4th moment theorem, proved in [17]. This optimal estimate also holds for the Wasserstein distance \(d_{W} ( X_{n},\mathcal{N} ) \), see [7, Remark 2.2], as follows: there exist two constants \(c,C>0\) depending only on the sequence X but not on n, such that
$$\begin{aligned} c \max \bigl( \bigl\vert E \bigl[ X_{n}^{3} \bigr] \bigr\vert , E \bigl[ X_{n}^{4} \bigr] -3 \bigr) \leq d_{W} ( X_{n},\mathcal{N} ) \leq C \max \bigl( \bigl\vert E \bigl[ X_{n}^{3} \bigr] \bigr\vert , E \bigl[ X_{n}^{4} \bigr] -3 \bigr). \end{aligned}$$(2.7)
Moreover, we recall that the third and fourth cumulants are, respectively,
$$\begin{aligned} \kappa _{3}(X) &=E \bigl[X^{3} \bigr]-3E \bigl[X^{2} \bigr]E [X ]+2 \bigl(E [X ] \bigr)^{3}, \\ \kappa _{4}(X) &=E \bigl[X^{4} \bigr]-4E \bigl[X^{3} \bigr]E [X ]-3 \bigl(E \bigl[X^{2} \bigr] \bigr)^{2}+12E \bigl[X^{2} \bigr] \bigl(E [X ] \bigr)^{2}-6 \bigl(E [X ] \bigr)^{4}. \end{aligned}$$
In particular, when \(E[X]=0\), we have that
$$\begin{aligned} \kappa _{3}(X)=E \bigl[X^{3} \bigr] \quad \text{and} \quad \kappa _{4}(X)=E \bigl[X^{4} \bigr]-3 \bigl(E \bigl[X^{2} \bigr] \bigr)^{2}. \end{aligned}$$
If \({g\in \mathcal{H}^{\otimes 2}}\), then the third and fourth cumulants of \(I_{2}(g)\) satisfy the following (see (6.2) and (6.6) in [1], respectively):
$$\begin{aligned} \kappa _{3} \bigl(I_{2}(g) \bigr)=8 \langle g, g \otimes _{1} g \rangle _{\mathcal{H}^{\otimes 2}} \end{aligned}$$
and
$$\begin{aligned} \kappa _{4} \bigl(I_{2}(g) \bigr)=48 \Vert g \otimes _{1} g \Vert _{\mathcal{H}^{\otimes 2}}^{2}. \end{aligned}$$(2.8)
Lemma 2.1
([19])
Fix an integer \(M \geq 2 \). We have
where \(\mathbf{k}= (k_{1}, \ldots , k_{M} )\), and \(\mathbf{v} \in \mathbb{R}^{M}\) is a fixed vector whose components are 1 or −1.
Throughout the paper \(\mathcal{N}\) denotes a standard normal random variable. Also, C denotes a generic positive constant (perhaps depending on θ but not on anything else), which may change from line to line.
3 Approximate minimum contrast estimator
In this section, we prove the consistency and provide upper bounds in the Wasserstein distance for the rate of normal convergence of an approximate minimum contrast estimator of the drift parameter θ of the Ornstein-Uhlenbeck process \(X :=\{X_{t},t\geq 0 \} \) driven by Brownian motion \(\{W_{t},t\geq 0 \} \), defined as the solution of the following linear stochastic differential equation
$$\begin{aligned} dX_{t}=-\theta X_{t}\,dt+dW_{t}, \quad X_{0}=0, t \geq 0, \end{aligned}$$(3.1)
where \(\theta >0\) is an unknown parameter. Since (3.1) is linear, it is immediate to see that its solution can be expressed explicitly as
$$\begin{aligned} X_{t}=e^{-\theta t} \int _{0}^{t} e^{\theta s}\,dW_{s}, \quad t \geq 0. \end{aligned}$$(3.2)
Moreover,
$$\begin{aligned} Z_{t}:= e^{-\theta t} \int _{-\infty}^{t} e^{\theta s}\,dW_{s}, \quad t \geq 0, \end{aligned}$$(3.3)
where W is extended to a two-sided Brownian motion, is a stationary Gaussian process, see [5, 9].
Furthermore,
$$\begin{aligned} X_{t}=Z_{t}-e^{-\theta t} Z_{0}, \quad t \geq 0. \end{aligned}$$(3.4)
Since \(Z :=\{Z_{t}, t\geq 0 \}\) is a continuous centered stationary Gaussian process, it can be represented as a Wiener-Itô (multiple) integral \(Z_{t} \overset{d}{=} I_{1}(\mathbf{1}_{[0, t]})\) for every \(t\geq 0\), according to (2.1). Let \(\rho (r)=E(Z_{r}Z_{0})\) denote the covariance of Z for every \(r\geq 0\). It is easy to show that
$$\begin{aligned} \rho (r)=\frac{e^{-\theta r}}{2\theta}, \quad r \geq 0. \end{aligned}$$
In particular, \(\rho (0)=\frac{1}{2\theta}\). Moreover, we extend ρ to negative lags by setting \(\rho (r):=\rho (-r)\) for all \(r<0\).
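For the reader's convenience, here is the one-line computation behind the covariance of Z (a sketch, assuming the stationary representation \(Z_{t}=e^{-\theta t}\int _{-\infty}^{t} e^{\theta s}\,dW_{s}\)): for \(r\geq 0\), by the Itô isometry,
$$\begin{aligned} \rho (r)=E [Z_{r}Z_{0} ]=e^{-\theta r} \int _{-\infty}^{0} e^{\theta s} \cdot e^{\theta s}\,ds=e^{-\theta r} \int _{-\infty}^{0} e^{2\theta s}\,ds=\frac{e^{-\theta r}}{2\theta}, \end{aligned}$$
since the two stochastic integrals share the increments of W only on \((-\infty ,0]\).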
Our goal is to estimate θ based on the discrete observations of X, using the approximate minimum contrast estimator
$$\begin{aligned} \widetilde{\theta}_{n}:= g \bigl(f_{n} (X ) \bigr), \end{aligned}$$(3.5)
where \(g(x):=\frac{1}{2x}\), \(t_{i}=i \Delta _{n}\), \(i=0, \ldots , n\), \(\Delta _{n} \rightarrow 0\) and \(T=n \Delta _{n}\), whereas \(f_{n} (X ),\ n\geq 1\), are given by
$$\begin{aligned} f_{n} (X ):=\frac{1}{n} \sum_{i=1}^{n} X_{t_{i}}^{2}. \end{aligned}$$(3.6)
To analyze the estimator \(\widetilde{\theta}_{n}\) of θ based on discrete high-frequency data in time of X, we first estimate the limiting variance \(\rho (0)=\frac{1}{2\theta}\) by the estimator \(f_{n} (X )\), given by (3.6).
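As a numerical illustration (ours, not part of the paper; the parameter values are arbitrary), one can simulate X exactly on the grid, since the OU transition is a Gaussian AR(1), and observe that \(f_{n}(X)\) settles near \(\rho (0)=\frac{1}{2\theta}\) while \(\widetilde{\theta}_{n}=1/(2f_{n}(X))\) settles near θ:

```python
import math
import random

# Exact simulation of dX = -theta*X dt + dW on t_i = i*dt:
# X_{t_i} = e^{-theta*dt} X_{t_{i-1}} + sqrt((1 - e^{-2*theta*dt})/(2*theta)) * xi_i.
theta, dt, n = 1.0, 0.01, 200_000       # T = n*dt = 2000
rng = random.Random(7)
a = math.exp(-theta * dt)
b = math.sqrt((1.0 - a * a) / (2.0 * theta))
x, sq_sum = 0.0, 0.0
for _ in range(n):
    x = a * x + b * rng.gauss(0.0, 1.0)
    sq_sum += x * x
f_n = sq_sum / n                         # estimates rho(0) = 1/(2*theta) = 0.5
theta_tilde = 1.0 / (2.0 * f_n)          # AMCE, close to theta = 1
print(f_n, theta_tilde)
```

The fluctuation of \(\widetilde{\theta}_{n}\) around θ is of order \(\sqrt{2\theta /T}\approx 0.03\) here, consistent with the CLT normalization \(\sqrt{T/(2\theta )}\).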
Let us introduce
According to (2.2), \(F_{n}(Z)\) can be written as
We will make use of the following technical lemmas.
Lemma 3.1
Let X and Z be the processes given in (3.2) and (3.3), respectively. Then there exists \(C>0\) depending only on θ such that for every \(p \geqslant 1\) and for all \(n \in \mathbb{N}\),
Proof
By (3.4), we can write
Combining this and the fact that Z is a stationary Gaussian process, we deduce
where we used \(\frac{\Delta _{n}}{1-e^{-\theta \Delta _{n}}}\rightarrow \frac{1}{\theta}\) as \(n\rightarrow \infty \). Thus, the desired result is obtained. □
Lemma 3.2
There exists \(C>0\) depending only on θ such that for large n
Consequently, using (3.8), for large n
Proof
Using the well-known Wick formula, we have
This implies
Further,
Moreover,
and as \(n\rightarrow \infty \)
Combining (3.12), (3.13), and (3.14) and \(\frac{\Delta _{n}}{1-e^{-2\theta \Delta _{n}}}\rightarrow \frac{1}{2\theta}\), there exists \(C>0\) depending only on θ such that for large n
Therefore, the desired result is obtained. □
Lemma 3.3
There exists \(C>0\) depending only on θ such that for large n,
Consequently,
Proof
Using \(\mathbf{1}_{[0, s]}^{\otimes 2} \otimes _{1} \mathbf{1}_{[0, t]}^{ \otimes 2}= \langle \mathbf{1}_{[0, s]}, \mathbf{1}_{[0, t]} \rangle _{\mathcal{H}} \mathbf{1}_{[0, s]} \otimes \mathbf{1}_{[0, t]}=\rho (t-s) \mathbf{1}_{[0, s]} \otimes \mathbf{1}_{[0, t]}\), we can write
Combining this with (2.8) and (3.7), we get
On the other hand,
Combining (3.18) and (3.19) yields
which implies (3.15).
where we used
Furthermore,
where we used the change of variables \(k_{1}-k_{2}=j_{1}\), \(k_{2}-k_{4}=j_{2}\), and \(k_{3}-k_{4}=j_{3}\), and then applied the Brascamp-Lieb inequality given by Lemma 2.1. Therefore, the proof of (3.16) is complete. □
Theorem 3.4
There exists \(C>0\) depending only on θ such that for all \(n\geq 1\),
Proof
Using (3.8) and (3.9), we obtain
where the latter inequality comes from (2.7) and (3.17). □
Theorem 3.5
Suppose \(\Delta _{n}\rightarrow 0\) and \(T\rightarrow \infty \). Then, the estimator \(\widetilde{\theta}_{n}\) of θ is weakly consistent, that is, \(\widetilde{\theta}_{n}\rightarrow \theta \) in probability, as \(\Delta _{n}\rightarrow 0\) and \(T\rightarrow \infty \).
If, moreover, \(n \Delta _{n}^{\eta }\rightarrow 0\) for some \(1<\eta <2\) or \(n \Delta _{n}^{\eta }\rightarrow \infty \) for some \(\eta >1\), then \(\widetilde{\theta}_{n}\) is strongly consistent, that is, \(\widetilde{\theta}_{n}\rightarrow \theta \) almost surely.
Proof
Using (3.5), it is sufficient to prove that the results of the theorem are satisfied for the estimator \(f_{n}(X)\) of \(\frac{1}{2\theta}\).
The weak consistency of \(f_{n}(X)\) is an immediate consequence from (3.10).
If \(n \Delta _{n}^{\eta }\rightarrow 0\) for some \(1<\eta <2\), the strong consistency of \(f_{n}(X)\) has been proved by [10, Theorem 11].
Now, suppose that \(n \Delta _{n}^{\eta }\rightarrow \infty \) for some \(\eta >1\). It follows from (3.10) that
Combining this with the hypercontractivity property (2.6) and [13, Lemma 2.1], which is a well-known direct consequence of the Borel-Cantelli Lemma, we obtain \(f_{n}(X)\rightarrow \frac{1}{2\theta}\) almost surely. □
Theorem 3.6
There exists \(C>0\) depending only on θ such that for all \(n\geq 1\),
Proof
Recall that by definition \(\theta =g (\frac{1}{2\theta} )\). By Taylor's formula, we have
$$\begin{aligned} \widetilde{\theta}_{n}-\theta =g \bigl(f_{n} (X ) \bigr)-g \biggl(\frac{1}{2\theta} \biggr)=g' \biggl(\frac{1}{2\theta} \biggr) \biggl(f_{n} (X )-\frac{1}{2\theta} \biggr)+ \frac{g'' (\zeta _{n} )}{2} \biggl(f_{n} (X )-\frac{1}{2\theta} \biggr)^{2} \end{aligned}$$
for some random point \(\zeta _{n}\) between \(f_{n} (X )\) and \(\frac{1}{2\theta}\), where \(g'(x)=-\frac{1}{2x^{2}}\) and \(g''(x)=\frac{1}{x^{3}}\).
Thus, we can write
Therefore,
where we have used that \(d_{W} (x_{1}+x_{2}, y ) \leq {E} [ \vert x_{2} \vert ]+d_{W} (x_{1}, y )\) for any random variables \(x_{1}\), \(x_{2}\), y.
The second term in the inequality above is bounded in Theorem 3.4. By Hölder’s inequality, and the hypercontractivity property (2.6), for \(p, q>1\) with \(1 / p+\) \(1 / q=1\)
for some constant \(C>0\) depending on p.
Consequently, using (3.23), (3.24) and Theorem 3.4, we deduce that for every \(p\geq 1\)
To establish (3.22), it is left to show that \({E} \vert \zeta _{n} \vert ^{-3p} < \infty \) for some \(p \geq 1\). Using the monotonicity of \(x^{-3}\) and the fact that \(\zeta _{n}\) lies between \(f_{n} (X )\) and \(\frac{1}{2\theta}\), it is enough to show that \(E |f_{n} (X )|^{-3p} < \infty \) for some \(p \geq 1\). This follows as an application of the technical result [7, Proposition 6.3]. □
4 Approximate maximum likelihood estimator
In this section, we study an approximate maximum likelihood estimator of θ based on discrete observations of X.
The maximum likelihood estimator for θ based on continuous observations of the process X given by (3.1) is defined by
$$\begin{aligned} \check{\theta}_{T}:= -\frac{\int _{0}^{T} X_{t}\,dX_{t}}{\int _{0}^{T} X_{t}^{2}\,dt}. \end{aligned}$$(4.1)
Here we want to study the asymptotic distribution of a discrete version of (4.1). Thus, we assume that the process X given in (3.1) is observed equidistantly in time with the step size \(\Delta _{n}\): \(t_{i}=i \Delta _{n}\), \(i=0, \ldots , n\), and \(T=n \Delta _{n}\) denotes the length of the “observation window”. Let us consider the following discrete version of \(\check{\theta}_{T}\):
$$\begin{aligned} \widehat{\theta}_{n}:= - \frac{\sum_{i=1}^{n} X_{t_{i-1}} (X_{t_{i}}-X_{t_{i-1}} )}{\Delta _{n} \sum_{i=1}^{n} X_{t_{i-1}}^{2}}. \end{aligned}$$
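To see the AMLE in action, here is an illustrative Python sketch (ours, not from the paper; function names and parameter values are our choices). It simulates X exactly on the grid via the Gaussian AR(1) transition and evaluates \(\widehat{\theta}_{n}\); with \(\theta =1\), \(\Delta _{n}=0.01\), and \(n=200000\) the estimate lands close to 1:

```python
import math
import random

def simulate_ou(theta, dt, n, seed=1):
    # Exact simulation of dX = -theta*X dt + dW on the grid t_i = i*dt:
    # X_{t_i} = e^{-theta*dt} X_{t_{i-1}} + sqrt((1 - e^{-2*theta*dt})/(2*theta)) * xi_i.
    rng = random.Random(seed)
    a = math.exp(-theta * dt)
    b = math.sqrt((1.0 - a * a) / (2.0 * theta))
    x, path = 0.0, [0.0]
    for _ in range(n):
        x = a * x + b * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def amle(path, dt):
    # Approximate MLE:
    # -sum X_{t_{i-1}} (X_{t_i} - X_{t_{i-1}}) / (dt * sum X_{t_{i-1}}^2).
    num = sum(path[i - 1] * (path[i] - path[i - 1]) for i in range(1, len(path)))
    den = dt * sum(path[i - 1] ** 2 for i in range(1, len(path)))
    return -num / den

x = simulate_ou(theta=1.0, dt=0.01, n=200_000)   # T = 2000
print(amle(x, 0.01))                              # close to 1.0
```

The discretization bias of \(\widehat{\theta}_{n}\) is of order \(\Delta _{n}\) and the stochastic fluctuation of order \(\sqrt{2\theta /T}\), so both are small for this choice of grid.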
Note that [6] and [11], respectively, proved the weak and strong consistency of the estimator \(\widehat{\theta}_{n}\) as \(T \rightarrow \infty \) and \(\Delta _{n} \rightarrow 0\).
Let X be the process given by (3.1), and let us introduce the following sequences
and
where
Thus,
Therefore,
where \(f_{n}(X)\) is given by (3.6).
Next, since \(\zeta _{t_{i-1}}\) and \(\zeta _{t_{i}}-\zeta _{t_{i-1}}\) are independent, we have
Moreover, since
there exists \(C>0\) depending only on θ such that for large n
Using \(E[\Lambda _{n}]=0\) and the fact that \(\zeta _{t_{i-1}}\) and \(\zeta _{t_{i}}-\zeta _{t_{i-1}}\) are independent, we get
On the other hand,
This implies
where the latter inequality comes from the fact that \({\frac{1-e^{-2\theta \Delta _{n}}}{\Delta _{n}}} \rightarrow 2\theta \) as \(n\rightarrow \infty \).
Theorem 4.1
There exists a constant \(C>0\) such that, for all \(n\geq 1\),
Proof
Define \(G_{n}:=\frac{1}{\sqrt{T}} \Lambda _{n}\). Using (2.7), (4.4), and (4.5), we have
Combining (4.7) with (4.2), (4.3), and (3.10), we obtain
where we used the fact that \(E |f_{n} (X )|^{-4} < \infty \), which is a direct application of the technical result [7, Proposition 6.3]. The proof of (4.6) is thus complete. □
Availability of data and materials
Not applicable.
References
Biermé, H., Bonami, A., Nourdin, I., Peccati, G.: Optimal Berry-Esseen rates on the Wiener space: the barrier of third and fourth cumulants. ALEA Lat. Am. J. Probab. Math. Stat. 9(2), 473–500 (2012)
Bishwal, J.P.: Rates of weak convergence of approximate minimum contrast estimators for the discretely observed Ornstein-Uhlenbeck process. Stat. Probab. Lett. 76(13), 1397–1409 (2006)
Bishwal, J.P.: Uniform rate of weak convergence of the minimum contrast estimator in the Ornstein-Uhlenbeck process. Methodol. Comput. Appl. Probab. 12(3), 323–334 (2010)
Bishwal, J.P.N., Bose, A.: Rates of convergence of approximate maximum likelihood estimators in the Ornstein-Uhlenbeck process. Comput. Math. Appl. 42(1–2), 23–38 (2001)
Cheridito, P., Kawaguchi, H., Maejima, M.: Fractional Ornstein-Uhlenbeck processes. Electron. J. Probab. 8, 1–14 (2003)
Dorogovcev, A.Ja.: The consistency of an estimate of a parameter of stochastic differential equation. Theory Probab. Math. Stat. 10, 73–82 (1976)
Douissi, S., Es-Sebaiy, K., Kerchev, G., Nourdin, I.: Berry-Esseen bounds of second moment estimators for Gaussian processes observed at high frequency. Electron. J. Stat. 16(1), 636–670 (2022). https://doi.org/10.1214/21-EJS1967
Es-Sebaiy, K., Al-Foraih, M., Alazemi, F.: Wasserstein bounds in the CLT of the MLE for the drift coefficient of a stochastic partial differential equation. Fractal Fract. 5, 187 (2021)
Es-Sebaiy, K., Viens, F.: Optimal rates for parameter estimation of stationary Gaussian processes. Stoch. Process. Appl. 129(9), 3018–3054 (2019)
Hu, Y., Nualart, D., Zhou, H.: Parameter estimation for fractional Ornstein-Uhlenbeck processes of general Hurst parameter. Stat. Inference Stoch. Process. 22(1), 111–142 (2019)
Kasonga, R.A.: The consistency of a nonlinear least squares estimator from diffusion processes. Stoch. Process. Appl. 30, 263–275 (1988)
Kim, Y.T., Park, H.S.: Optimal Berry-Esseen bound for an estimator of parameter in the Ornstein-Uhlenbeck process. J. Korean Stat. Soc. 46(3), 413–425 (2017)
Kloeden, P., Neuenkirch, A.: The pathwise convergence of approximation schemes for stochastic differential equations. LMS J. Comput. Math. 10, 235–253 (2007)
Kutoyants, Y.A.: Statistical Inference for Ergodic Diffusion Processes. Springer, Berlin (2004)
Liptser, R.S., Shiryaev, A.N.: Statistics of Random Processes: II Applications, 2nd edn. Applications of Mathematics. Springer, Berlin, Heidelberg, New York (2001)
Nourdin, I., Peccati, G.: Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality. Cambridge Tracts in Mathematics, vol. 192. Cambridge University Press, Cambridge (2012)
Nourdin, I., Peccati, G.: The optimal fourth moment theorem. Proc. Am. Math. Soc. 143, 3123–3133 (2015)
Nualart, D.: The Malliavin Calculus and Related Topics. Springer, Berlin (2006)
Nualart, D., Zhou, H.: Total variation estimates in the Breuer-Major theorem. Ann. Inst. Henri Poincaré Probab. Stat. 57(2), 740–777 (2021)
Funding
This project was funded by Kuwait Foundation for the Advancement of Sciences (KFAS) under project code: PR18-16SM-04.
Author information
Contributions
Investigation, K.E., F.A. and M.A.; Methodology, K.E., F.A. and M.A.; Writing—review and editing, K.E., F.A. and M.A.. All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Es-Sebaiy, K., Alazemi, F. & Al-Foraih, M. Wasserstein bounds in CLT of approximative MCE and MLE of the drift parameter for Ornstein-Uhlenbeck processes observed at high frequency. J Inequal Appl 2023, 62 (2023). https://doi.org/10.1186/s13660-023-02976-4
MSC
- 60F05
- 60G15
- 60G10
- 62F12
- 60H07
Keywords
- Parameter estimation
- Ornstein-Uhlenbeck process
- Rate of normal convergence of the estimators
- High frequency data