# Moderate deviations and central limit theorem for positive diffusions

## Abstract

In this paper, we establish a central limit theorem and a moderate deviation principle for positive diffusions, including the CEV and CIR models. The proofs are based on an exponential approximation theorem and the Burkholder-Davis-Gundy inequality.

## Introduction

Consider the following stochastic differential equation:

$$dX_{t}^{\varepsilon}= b\bigl(X_{t}^{\varepsilon}\bigr)\,dt+\sqrt{\varepsilon}\sigma \bigl(X_{t}^{\varepsilon}\bigr) \,dB_{t}, \qquad X_{0}^{\varepsilon}=x_{0}>0,$$
(1.1)

where B is a standard one-dimensional Brownian motion on some filtered probability space $$(\Omega, \mathcal{F}, (\mathcal {F}_{t}),\mathbb{P})$$, and the diffusion coefficient $$\sigma: \mathbb{R}\rightarrow\mathbb{R}$$ and the drift coefficient $$b:\mathbb{R}\rightarrow\mathbb{R}$$ satisfy the following assumption:

1. (H):

σ is Hölder continuous with exponent $$\gamma\in[\frac{1}{2}, 1]$$ and $$\sigma(0)=0$$; b is continuously differentiable, its derivative $$b'$$ is locally Lipschitz continuous, and $$b(0)>0$$. Moreover, there exists a positive constant L such that

$$\bigl\vert \sigma(x)-\sigma(y)\bigr\vert \le L \vert x-y\vert ^{\gamma}, \qquad xb(x) \le L \bigl(1+\vert x\vert ^{2}\bigr), \quad \forall x,y \in\mathbb{R}^{+}.$$

Under assumption (H), equation (1.1) admits a unique strong solution $$X_{t}^{\varepsilon}$$ and, furthermore, $$X_{t}^{\varepsilon}\ge0$$; see [1], Chapter IX, Theorem 3.5, and [2].

Letting $$b(x)=\alpha(\beta-x)$$ and $$\sigma(x)=\rho x^{\gamma}$$ for some constants $$\alpha>0$$, $$\beta>0$$, and $$\gamma\in[1/2,1]$$, equation (1.1) becomes a constant elasticity of variance (CEV) model, which contains the Cox-Ingersoll-Ross (CIR) model as the special case $$\gamma=1/2$$. CEV and CIR models are widely used in mathematical finance; see [3] or [4]. The CIR process can also be defined as a sum of squared Ornstein-Uhlenbeck processes; see [5].
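For intuition, the small-noise CEV dynamics (1.1) can be simulated with a plain Euler-Maruyama scheme. The sketch below is our own illustration, not part of the paper: the parameter values are arbitrary, and negative excursions of the discretized path (an artifact of the scheme, since the exact solution is nonnegative) are clipped to zero so that $$x^{\gamma}$$ stays well defined.

```python
import numpy as np

def simulate_cev(eps, alpha=1.0, beta=1.0, rho=0.5, gamma=0.5,
                 x0=1.0, T=1.0, n_steps=1000, rng=None):
    """Euler-Maruyama path of dX = alpha*(beta - X) dt + sqrt(eps)*rho*X^gamma dB.

    Negative excursions (a discretization artifact) are clipped to 0 so that
    X**gamma stays well defined, as is common for CIR/CEV schemes.
    """
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        drift = alpha * (beta - x[k])
        x[k + 1] = max(x[k] + drift * dt
                       + np.sqrt(eps) * rho * x[k] ** gamma * dB, 0.0)
    return x
```

With $$\varepsilon=0$$ the scheme reduces to an Euler method for the limit ODE (1.2).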

Given $$T>0$$, let $$C([0,T]; \mathbb{R})$$ be the Banach space of continuous functions $$g:[0,T]\to\mathbb{R}$$ equipped with the sup-norm $$\|g\|:=\sup_{t\in [0,T]}|g(t)|$$.

When $$\varepsilon \rightarrow0^{+}$$, we expect that $$\|X^{\varepsilon}-X^{0}\|\rightarrow0$$ in probability, where $$X^{0}$$ is the solution of the unperturbed dynamical system

$$dX^{0}_{t}=b\bigl(X_{t}^{0} \bigr)\,dt, \qquad X^{0}_{0}=x_{0}.$$
(1.2)

The purpose of this paper is to consider the moderate deviations and the central limit theorem for $$\{X_{t}^{\varepsilon}: t\in[0,T]\}$$. More precisely, we study the asymptotic behavior of

$$Z^{\varepsilon}_{t}=\frac{1}{\sqrt{\varepsilon}h(\varepsilon )} \bigl(X^{\varepsilon}_{t} -X^{0}_{t} \bigr),\quad t\in[0,T],$$

where $$h(\varepsilon )$$ is some deviation scale which strongly influences the asymptotic behavior of $$Z^{\varepsilon}$$.

1. (1)

The case $$h(\varepsilon )=1/\sqrt{\varepsilon}$$ yields large deviation estimates, which have been studied extensively in recent years.

2. (2)

If $$h(\varepsilon )$$ is identically equal to 1, we are in the domain of the central limit theorem (CLT for short). We will show that $$(X^{\varepsilon}-X^{0})/\sqrt{\varepsilon}$$ converges to the solution of a stochastic differential equation as ε decreases to 0.

3. (3)

To fill the gap between the CLT scale $$[h(\varepsilon )=1]$$ and the large deviations scale $$[h(\varepsilon )=1/\sqrt{\varepsilon}]$$, we study the so-called moderate deviation principle (MDP for short; cf. [6]), that is, the case where the deviation scale satisfies

$$h(\varepsilon )\to+\infty, \qquad \sqrt{\varepsilon}h(\varepsilon ) \to 0 \quad \text{as } \varepsilon \to0.$$
(1.3)

Throughout this paper, we assume that (1.3) holds.
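A standard family of scales satisfying (1.3) is $$h(\varepsilon)=\varepsilon^{-a}$$ with $$a\in(0,1/2)$$, since then $$h(\varepsilon)\to+\infty$$ while $$\sqrt{\varepsilon}\,h(\varepsilon)=\varepsilon^{1/2-a}\to0$$. The following tiny check is our own illustration of this:

```python
# Sketch (our own illustration): h(eps) = eps**(-a) with 0 < a < 1/2
# satisfies (1.3): h(eps) -> +infinity while sqrt(eps)*h(eps) = eps**(1/2-a) -> 0.
def check_scale(h, eps_seq):
    """Evaluate (h(eps), sqrt(eps) * h(eps)) along a sequence eps -> 0."""
    return [(h(e), e ** 0.5 * h(e)) for e in eps_seq]

eps_seq = [10.0 ** (-k) for k in range(1, 7)]
vals = check_scale(lambda e: e ** -0.25, eps_seq)  # a = 1/4 as an example
```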

Since the original work of Freidlin and Wentzell [7], large deviation principles for small-noise diffusion equations have been studied extensively in the literature; see [6] and the references therein. Large deviations for diffusions of the form (1.1) have also been studied: the case of diffusion coefficient $$\sigma(x)=\rho\sqrt {x}$$ in [5] and the case of a general drift in [2].

Like large deviations, moderate deviation problems arise naturally in the theory of statistical inference. Moderate deviation estimates provide the rate of convergence and a useful method for constructing asymptotic confidence intervals; see [8, 10] and the references therein.

Moderate deviations of this type for stochastic (partial) differential equations with Lipschitz coefficients have been studied; see [11, 12] and the references therein. Wang and Zhang [13] established the moderate deviation principle for stochastic reaction-diffusion equations. Budhiraja et al. [14] studied moderate deviation principles for stochastic differential equations driven by a Poisson random measure in finite and infinite dimensions.

For stochastic differential equations with non-Lipschitz diffusion coefficients, Chen et al. [15] established moderate deviation principles for small-perturbation Wishart processes. Compared with the setting of [15], where the drift term of the Wishart process is the identity matrix, the drift term of equation (1.1) is only locally Lipschitz continuous. Chen et al. [15] also proved that the eigenvalue process satisfies a moderate deviation principle by using the powerful delta method in large deviation theory proposed by Gao and Zhao [9]. A key element of the delta method is establishing Hadamard differentiability, which seems difficult to verify for the model (1.1). Here we prove the results directly by the exponential approximation theorem ([6], Theorem 4.2.13).

The main result of this paper is the following theorem.

### Theorem 1.1

Assume the condition (H). Then as $$\varepsilon \to 0$$:

1. (1)

(CLT) $$(X^{\varepsilon}-X^{0})/\sqrt{\varepsilon}$$ converges in probability on $$C([0,T];\mathbb{R})$$ to the stochastic process $$Y^{0}$$, determined by

$$dY^{0}_{t}=b'\bigl(X^{0}_{t}\bigr)Y_{t}^{0}\,dt+\sigma\bigl(X_{t}^{0}\bigr)\,dB_{t}, \qquad Y^{0}_{0}=0,$$
(1.4)

where $$b'$$ is the derivative of b.

2. (2)

(MDP) $$Z^{\varepsilon}:= (X^{\varepsilon}-X^{0} )/(\sqrt {\varepsilon}h(\varepsilon ))$$ obeys the LDP on the space $$C([0,T];\mathbb{R})$$ with speed $$h^{2}(\varepsilon )$$ and with the rate function

$$I(\varphi)=\frac{1}{2} \int_{0}^{T} \bigl\vert \sigma\bigl(X^{0}_{t} \bigr)^{-1}\bigl(\dot{\varphi }_{t}-b' \bigl(X_{t}^{0}\bigr)\varphi_{t}\bigr)\bigr\vert ^{2} \,dt,$$
(1.5)

if φ is absolutely continuous with $$\varphi_{0}=0$$ and $$I(\varphi)=+\infty$$ otherwise. More precisely, for any Borel measurable subset A of $$C([0,T];\mathbb{R})$$,

\begin{aligned} -\inf_{\varphi\in A^{o}} I(\varphi) &\le\liminf_{\varepsilon \to0} h^{-2}(\varepsilon ) \log \mathbb{P} \bigl(Z^{\varepsilon}\in A \bigr) \\ &\le\limsup_{\varepsilon \to0} h^{-2}(\varepsilon ) \log\mathbb {P} \bigl(Z^{\varepsilon}\in A \bigr)\le-\inf_{\varphi\in\bar{A}} I( \varphi), \end{aligned}

where $$A^{o}$$ and Ā denote the interior and the closure of A, respectively.

The rest of this paper is organized as follows. In Section 2, we prove that, under assumption (H), $$X^{\varepsilon}$$ is bounded in the sense of Freidlin-Wentzell’s LDP, which reduces our study to the case where b and $$b'$$ are globally Lipschitz. We establish the CLT and the MDP in Sections 3 and 4, respectively.
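Part (1) of Theorem 1.1 can be illustrated numerically by driving $$X^{\varepsilon}$$, the limit ODE $$X^{0}$$, and the Gaussian limit $$Y^{0}$$ with one shared Brownian path and measuring $$\sup_{t}|Y^{\varepsilon}_{t}-Y^{0}_{t}|$$. The sketch below assumes the CIR case ($$\gamma=1/2$$, so $$b'(x)=-\alpha$$); the function name and all parameter values are our own choices, not the paper's.

```python
import numpy as np

def clt_discrepancy(eps, alpha=1.0, beta=1.0, rho=0.3, x0=1.0,
                    T=1.0, n_steps=2000, seed=0):
    """Couple X^eps, the limit ODE X^0 of (1.2), and Y^0 of (1.4) on one
    Brownian path; return sup_t |(X^eps_t - X^0_t)/sqrt(eps) - Y^0_t|."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x_eps, x_det, y0 = x0, x0, 0.0
    gap = 0.0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        # CIR case: b(x) = alpha*(beta - x), sigma(x) = rho*sqrt(x), b'(x) = -alpha
        x_eps = max(x_eps + alpha * (beta - x_eps) * dt
                    + np.sqrt(eps) * rho * np.sqrt(x_eps) * dB, 0.0)
        y0 = y0 - alpha * y0 * dt + rho * np.sqrt(x_det) * dB
        x_det = x_det + alpha * (beta - x_det) * dt
        gap = max(gap, abs((x_eps - x_det) / np.sqrt(eps) - y0))
    return gap
```

Up to discretization error, the discrepancy shrinks as ε decreases, in line with Theorem 3.1 below.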

## Reduction to the bounded case

First, we prove that, under assumption (H), $$X^{\varepsilon}$$ is bounded in the sense of the LDP. For any $$R\ge0$$, let

$$\tau_{R}^{\varepsilon}:=\inf\bigl\{ t;\bigl\vert X^{\varepsilon}_{t}\bigr\vert \ge R\bigr\} .$$

### Lemma 2.1

Under the assumption (H),

$$\lim_{R\to+\infty}\limsup_{\varepsilon \rightarrow 0} \varepsilon \log\mathbb{P} \bigl(\tau^{\varepsilon}_{R}\le T \bigr)=- \infty.$$
(2.1)

### Proof

One can obtain (2.1) by an argument similar to that of Lemma 5.6.18 in [6]. For the convenience of the reader, we give a short proof.

Let $$f(x):=\log(1+|x|^{2})$$. By Itô’s formula,

\begin{aligned} df\bigl(X^{\varepsilon}_{t}\bigr) =&\sqrt{\varepsilon }f'\bigl(X^{\varepsilon}_{t}\bigr)\sigma \bigl(X_{t}^{\varepsilon}\bigr)\,dB_{t}+ b \bigl(X^{\varepsilon }_{t}\bigr)f'\bigl(X^{\varepsilon }_{t} \bigr)\,dt+\frac{\varepsilon }{2}\sigma ^{2}\bigl(X^{\varepsilon }_{t} \bigr)f''\bigl(X_{t}^{\varepsilon}\bigr)\,dt \\ =:&\sqrt{\varepsilon } f'\bigl(X^{\varepsilon}_{t} \bigr)\sigma\bigl(X_{t}^{\varepsilon}\bigr)\,dB_{t}+ \mathscr{L}^{\varepsilon}f\bigl(X^{\varepsilon}_{t}\bigr)\,dt, \end{aligned}

where $$\mathscr{L}^{\varepsilon}$$ is the generator of $$X^{\varepsilon}$$.

Consider the local martingale

$$M^{\varepsilon}_{t}:=\sqrt{\varepsilon } \int_{0}^{t} f'\bigl(X^{\varepsilon}_{s}\bigr)\sigma\bigl(X_{s}^{\varepsilon}\bigr) \,dB_{s}.$$

By the Hölder continuity of σ in (H), its quadratic variation process $$\langle M^{\varepsilon}\rangle$$ satisfies

$$\bigl\langle M^{\varepsilon}\bigr\rangle _{t}=\varepsilon \int_{0}^{t} \bigl\vert \sigma \bigl(X_{s}^{\varepsilon}\bigr)f'\bigl(X^{\varepsilon}_{s}\bigr)\bigr\vert ^{2}\, ds \le4\varepsilon L^{2}t.$$

Notice that, for all $$\varepsilon \in(0,1]$$,

$$\mathscr{L}^{\varepsilon}f(x)= \frac{\varepsilon \sigma ^{2}(x)+2b(x)x}{1+\vert x\vert ^{2}} - \frac{2 \varepsilon \vert \sigma(x)x\vert ^{2}}{(1+\vert x\vert ^{2})^{2}} \le2L^{2}+L,$$

where L is the constant in (H). Consequently, for all $$t\ge0$$ and $$\varepsilon \in(0,1]$$,

$$f\bigl(X^{\varepsilon}_{t}\bigr)\le f(x_{0})+ M^{\varepsilon}_{t}+ (2L+1)Lt.$$
(2.2)

For any $$R>0$$ large enough that $$c(R,T):=\log(1+R^{2})-[\log(1+|x_{0}|^{2})+(2L+1)LT]>0$$, we have, by Bernstein’s inequality for continuous local martingales and (2.2),

\begin{aligned} \mathbb{P}\bigl(\tau^{\varepsilon}_{R}\le T\bigr)&=\mathbb{P} \Bigl(\sup_{t\in [0,T]}\bigl\vert X^{\varepsilon}_{t} \bigr\vert \ge R \Bigr)=\mathbb{P} \Bigl(\sup_{t\in[0,T]}f \bigl(X^{\varepsilon}_{t}\bigr)\ge \log\bigl(1+R^{2}\bigr) \Bigr) \\ &\le\mathbb{P} \Bigl(\sup_{t\in[0,T]}\bigl\vert M^{\varepsilon}_{t}\bigr\vert \ge c(R,T) \Bigr) \\ &\le 2\exp \biggl\{ -\frac{c(R,T)^{2}}{8\varepsilon L^{2}T} \biggr\} , \end{aligned}

whence the desired result follows. □

Now, for any $$R>0$$ large enough that $$c(R,T)>0$$, choose $$\sigma^{(R)}$$ and $$b^{(R)}$$ with $$\sigma^{(R)}(x)=\sigma(x)$$ and $$b^{(R)}(x)=b(x)$$ for $$|x|\le R$$, such that $$\sigma^{(R)}$$ is bounded, $$b^{(R)}$$ is $$C^{1}$$, and $$(b^{(R)})'$$ is Lipschitz continuous and bounded. Let $$X_{t}^{\varepsilon , R}$$ denote the solution of the stochastic differential equation (1.1) with $$(\sigma, b)$$ replaced by $$(\sigma^{(R)}, b^{(R)})$$. By Lemma 2.1 and the proof above,

\begin{aligned}& \limsup_{\varepsilon \rightarrow0}\varepsilon \log\mathbb {P} \bigl(X_{t}^{\varepsilon}\ne X_{t}^{\varepsilon ,R} \text{ for some }t\in[0,T] \bigr) \\& \quad \le\limsup_{\varepsilon \rightarrow0}\varepsilon \log\mathbb {P} \bigl( \tau^{\varepsilon}_{R}\le T \bigr)\le-\frac{c(R,T)^{2}}{8L^{2}T}. \end{aligned}

Hence, by the approximation lemma ([6], Theorem 4.2.13), $$(X^{\varepsilon}-X^{0})/(\sqrt{\varepsilon}h(\varepsilon ))$$ and $$(X^{\varepsilon ,R}-X^{0})/(\sqrt{\varepsilon}h(\varepsilon ))$$ obey the same LDP.

In the rest of the paper, replacing $$(\sigma, b)$$ by $$(\sigma^{(R)}, b^{(R)})$$ if necessary, we may and will suppose that

1. (L):

b and $$b'$$ are globally Lipschitz continuous, i.e., there exists a constant L such that

$$\max\bigl\{ \bigl\vert b(x)-b(y)\bigr\vert , \bigl\vert b'(x)-b'(y)\bigr\vert \bigr\} \le L\vert x-y\vert , \quad \forall x,y\in \mathbb{R}^{+}.$$

## Central limit theorem

In this section, we will establish the central limit theorem.

### Lemma 3.1

There exists some constant $$C(x_{0}, T, L)>0$$ such that, for any $$\varepsilon \in(0,1]$$,

$$\mathbb{E} \bigl[ \bigl\Vert X^{\varepsilon}\bigr\Vert ^{2} \bigr]\le C(x_{0},T, L).$$

### Proof

Notice that

$$X_{t}^{\varepsilon}=x_{0}+ \int_{0}^{t}b\bigl(X_{s}^{\varepsilon}\bigr) \,ds+\sqrt {\varepsilon}\int_{0}^{t} \sigma\bigl(X_{s}^{\varepsilon}\bigr)\,dB_{s}.$$

Thus

$$\bigl|X_{t}^{\varepsilon}\bigr|^{2}\le3 x_{0}^{2}+3 \biggl( \int_{0}^{t} b\bigl(X_{s}^{\varepsilon}\bigr)\, ds \biggr)^{2}+3\varepsilon \biggl\vert \int_{0}^{t} \sigma\bigl(X_{s}^{\varepsilon}\bigr)\,dB_{s}\biggr\vert ^{2}.$$
(3.1)

Taking the supremum up to time $$s\in[0, t]$$ in (3.1), and then taking the expectation, by the Burkholder-Davis-Gundy inequality, we have

\begin{aligned} \mathbb{E} \Bigl[ \sup_{0\le s\le t}\bigl\vert X_{s}^{\varepsilon}\bigr\vert ^{2} \Bigr]&\le 3 x_{0}^{2}+3 \mathbb{E} \biggl[ \biggl( \int_{0}^{t} \bigl\vert b\bigl(X_{s}^{\varepsilon}\bigr)\bigr\vert \,ds \biggr)^{2} \biggr]+3\varepsilon \mathbb{E} \biggl[\sup_{0\le s\le t} \biggl\vert \int_{0}^{s} \sigma\bigl(X_{u}^{\varepsilon}\bigr)\,dB_{u}\biggr\vert ^{2} \biggr] \\ &\le3 x_{0}^{2}+3\mathbb{E} \biggl[t \int_{0}^{t}\bigl\vert b\bigl(X_{s}^{\varepsilon}\bigr)\bigr\vert ^{2}\,ds \biggr]+12\varepsilon \mathbb{E} \biggl[ \int_{0}^{t} \bigl\vert \sigma\bigl(X_{s}^{\varepsilon}\bigr)\bigr\vert ^{2}\,ds \biggr] \\ &\le3 x_{0}^{2}+3 tL\mathbb{E} \biggl[ \int_{0}^{t}\bigl(1+\bigl\vert X_{s}^{\varepsilon}\bigr\vert ^{2}\bigr)\,ds \biggr]+12L^{2}\varepsilon \mathbb{E} \biggl[ \int_{0}^{t}\bigl(1+\bigl\vert X_{s}^{\varepsilon}\bigr\vert ^{2}\bigr)\,ds \biggr]. \end{aligned}

By Gronwall’s inequality, we have

$$\mathbb{E} \bigl[ \bigl\Vert X^{\varepsilon}\bigr\Vert ^{2} \bigr]\le\bigl(3 x_{0}^{2}+12\varepsilon TL^{2}+3T^{2}L \bigr)\cdot\exp \bigl(12\varepsilon TL^{2}+3T^{2}L\bigr).$$

The proof is complete. □

### Lemma 3.2

There exists some constant $$C(T, L)>0$$ such that

$$\mathbb{E} \bigl[ \bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert ^{2} \bigr]\le\varepsilon C(T, L).$$

### Proof

Notice that

$$X^{\varepsilon}_{t}-X^{0}_{t}= \int_{0}^{t} \bigl[b\bigl(X^{\varepsilon}_{s}\bigr)-b\bigl(X^{0}_{s}\bigr) \bigr]\,ds+\sqrt {\varepsilon}\int_{0}^{t} \sigma\bigl(X_{s}^{\varepsilon}\bigr)\,dB_{s}.$$
(3.2)

Taking the supremum up to time $$s\in[0,t]$$ in (3.2), and then taking the expectation, by the Burkholder-Davis-Gundy inequality, we have

\begin{aligned} \mathbb{E} \Bigl[ \sup_{0\le s\le t}\bigl\vert X^{\varepsilon}_{s}-X^{0}_{s}\bigr\vert ^{2} \Bigr] \le&2\mathbb{E} \biggl[ \int_{0}^{t}\bigl\vert b\bigl(X^{\varepsilon}_{s}\bigr)-b\bigl(X^{0}_{s}\bigr)\bigr\vert \,ds \biggr]^{2} \\ &{}+2\varepsilon \mathbb{E} \biggl[ \biggl(\sup _{0\le s\le t} \int_{0}^{s} \sigma\bigl(X_{u}^{\varepsilon } \bigr) \,dB_{u} \biggr)^{2} \biggr] \\ \le&2L^{2} t\mathbb{E} \biggl[ \int_{0}^{t}\bigl\vert X^{\varepsilon}_{s}-X^{0}_{s}\bigr\vert ^{2}\,ds \biggr] +8\varepsilon \mathbb{E} \biggl[ \int_{0}^{t}\bigl\vert \sigma\bigl(X_{s}^{\varepsilon}\bigr)\bigr\vert ^{2}\,ds \biggr] \\ \le&2L^{2} t\mathbb{E} \biggl[ \int_{0}^{t}\bigl\vert X^{\varepsilon}_{s}-X^{0}_{s}\bigr\vert ^{2}\,ds \biggr] + 8\varepsilon L^{2}\mathbb{E} \biggl[ \int_{0}^{t}\bigl(1+\bigl\vert X_{s}^{\varepsilon}\bigr\vert ^{2}\bigr)\,ds \biggr]. \end{aligned}

By Gronwall’s inequality and Lemma 3.1, we have

$$\mathbb{E} \bigl[\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert ^{2} \bigr]\le\varepsilon C(T, L).$$

The proof is complete. □
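Lemma 3.2 predicts that $$\mathbb{E}[\|X^{\varepsilon}-X^{0}\|^{2}]/\varepsilon$$ stays bounded as ε shrinks, which is easy to probe by Monte Carlo. The following sketch is our own illustration for the CIR case, with arbitrary parameters and the usual clipping-at-zero discretization:

```python
import numpy as np

def sup_sq_error(eps, n_paths=200, alpha=1.0, beta=1.0, rho=0.3,
                 x0=1.0, T=1.0, n_steps=500, seed=0):
    """Monte Carlo estimate of E[ sup_{t<=T} |X^eps_t - X^0_t|^2 ] in the
    CIR case (gamma = 1/2), with negative excursions clipped to zero."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    est = 0.0
    for _ in range(n_paths):
        x_eps, x_det, sup_sq = x0, x0, 0.0
        for _ in range(n_steps):
            dB = rng.normal(0.0, np.sqrt(dt))
            x_eps = max(x_eps + alpha * (beta - x_eps) * dt
                        + np.sqrt(eps) * rho * np.sqrt(x_eps) * dB, 0.0)
            x_det += alpha * (beta - x_det) * dt
            sup_sq = max(sup_sq, (x_eps - x_det) ** 2)
        est += sup_sq / n_paths
    return est
```

The ratio `sup_sq_error(eps) / eps` should remain of order one as ε decreases, in line with Lemma 3.2.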

For any $$\varepsilon >0$$, let

$$Y^{\varepsilon }:=\frac{1}{\sqrt{\varepsilon}}\bigl(X^{\varepsilon}-X^{0} \bigr).$$

The proof of the CLT in Theorem 1.1 relies on the following theorem.

### Theorem 3.1

Under the assumptions (H) and (L), there exists a constant $$C(T, L)$$ depending on T, L such that

$$\mathbb{E} \bigl[ \bigl\Vert Y^{\varepsilon}-Y^{0}\bigr\Vert \bigr]\le\varepsilon ^{\frac{\gamma}{2}} C(T, L)\rightarrow0 \quad \textit{as } \varepsilon \rightarrow0.$$

### Proof

Notice that

\begin{aligned} Y^{\varepsilon}_{t}-Y^{0}_{t}&= \int_{0}^{t} \biggl(\frac{b(X^{\varepsilon}_{s})-b(X^{0}_{s})}{\sqrt{\varepsilon}}-b' \bigl(X_{s}^{0}\bigr)Y^{0}_{s} \biggr) \,ds+ \int_{0}^{t}\bigl(\sigma\bigl(X_{s}^{\varepsilon}\bigr)-\sigma \bigl(X_{s}^{0}\bigr)\bigr)\,dB_{s} \\ &= I_{1}^{\varepsilon}(t)+I_{2}^{\varepsilon}(t)+I_{3}^{\varepsilon}(t), \end{aligned}
(3.3)

where

\begin{aligned}& I_{1}^{\varepsilon}(t) := \int_{0}^{t} \biggl(\frac{b(X^{\varepsilon}_{s})-b(X^{0}_{s})}{\sqrt{\varepsilon}}-b' \bigl(X_{s}^{0}\bigr)Y^{\varepsilon}_{s} \biggr) \,ds, \\& I_{2}^{\varepsilon}(t) := \int_{0}^{t} \bigl( b' \bigl(X_{s}^{0}\bigr) \bigl(Y^{\varepsilon}_{s}-Y^{0}_{s}\bigr) \bigr)\,ds, \\& I_{3}^{\varepsilon}(t) := \int_{0}^{t}\bigl(\sigma\bigl(X_{s}^{\varepsilon}\bigr)-\sigma\bigl(X_{s}^{0}\bigr)\bigr)\,dB_{s}. \end{aligned}

By Taylor’s formula, there exists a random variable $$\eta^{\varepsilon}(t)$$ taking values in $$(0,1)$$ such that

$$b\bigl(X^{\varepsilon}_{t}\bigr)-b\bigl(X^{0}_{t} \bigr)=b'\bigl(X^{0}_{t}+\eta^{\varepsilon}(t) \bigl(X^{\varepsilon}_{t}-X^{0}_{t}\bigr)\bigr) \times\bigl(X^{\varepsilon}_{t}-X^{0}_{t}\bigr).$$

Since $$b'$$ is Lipschitz continuous, we have

$$\bigl\vert b'\bigl(X^{0}_{t}+ \eta^{\varepsilon}(t) \bigl(X^{\varepsilon}_{t}-X^{0}_{t} \bigr)\bigr)-b'\bigl(X^{0}_{t}\bigr)\bigr\vert \le L \eta^{\varepsilon}(t)\bigl\vert X^{\varepsilon}_{t}-X^{0}_{t} \bigr\vert \le L \bigl\vert X^{\varepsilon}_{t}-X^{0}_{t} \bigr\vert .$$

Hence,

$$\sup_{0\le s\le t}\bigl\vert I_{1}^{\varepsilon}(s)\bigr\vert \le\frac{L}{\sqrt {\varepsilon}} \int_{0}^{t} \bigl\vert X^{\varepsilon}_{s}-X^{0}_{s}\bigr\vert ^{2} \,ds.$$
(3.4)

Taking the expectation, by Lemma 3.2, we have, for any $$t\in [0, T]$$,

$$\mathbb{E} \Bigl[\sup_{0\le s\le t}\bigl\vert I_{1}^{\varepsilon}(s)\bigr\vert \Bigr]\le \frac{L}{\sqrt {\varepsilon}} \mathbb{E} \biggl[ \int_{0}^{t} \bigl\vert X^{\varepsilon}_{s}-X^{0}_{s}\bigr\vert ^{2} \,ds \biggr] \le\sqrt{\varepsilon}C_{1}(T, L).$$
(3.5)

Since $$|b'|\le L$$, for any $$t\in[0,T]$$,

$$\mathbb{E} \Bigl[ \sup_{0\le s\le t}\bigl\vert I_{2}^{\varepsilon}(s)\bigr\vert \Bigr]\le L \mathbb{E} \biggl[ \int_{0}^{t}\bigl\vert Y^{\varepsilon}_{s}-Y^{0}_{s} \bigr\vert \,ds \biggr].$$
(3.6)

By Lemma 3.2, the Burkholder-Davis-Gundy inequality, Hölder’s inequality and Fubini’s theorem, we have, for any $$t\in[0,T]$$,

\begin{aligned} \mathbb{E} \Bigl[\sup_{0\le s\le t}\bigl\vert I_{3}^{\varepsilon}(s)\bigr\vert \Bigr]&\le c_{1}L \mathbb{E} \biggl[ \biggl( \int_{0}^{t}\bigl\vert X_{s}^{\varepsilon}-X_{s}^{0}\bigr\vert ^{2\gamma}\,ds \biggr)^{\frac{1}{2}} \biggr] \\ &\le c_{1}L \biggl( \int_{0}^{t}\mathbb{E} \bigl[\bigl\vert X_{s}^{\varepsilon}-X_{s}^{0}\bigr\vert ^{2\gamma} \bigr]\,ds \biggr)^{\frac{1}{2}} \\ &\le c_{1}L \biggl( \int_{0}^{t} \bigl(\mathbb{E} \bigl[\bigl\vert X_{s}^{\varepsilon}-X_{s}^{0}\bigr\vert ^{2 } \bigr] \bigr)^{\gamma}\,ds \biggr)^{\frac{1}{2}} \\ &\le\varepsilon ^{\frac{\gamma}{2}} C_{2}(T, L). \end{aligned}
(3.7)

Putting (3.3), (3.5)-(3.7) together, by Gronwall’s inequality, we have

$$\mathbb{E} \bigl[\bigl\Vert Y^{\varepsilon}-Y^{0}\bigr\Vert \bigr] \le\varepsilon ^{\frac{\gamma}{2}} C_{3}(T, L).$$

The proof is complete. □

## Moderate deviation principle

Notice that

$$d \biggl(\frac{Y^{0}_{t}}{h(\varepsilon )} \biggr)= b'\bigl(X^{0}_{t} \bigr) \biggl( \frac {Y^{0}_{t}}{h(\varepsilon )} \biggr) \,dt+ \frac{1}{h(\varepsilon )} \sigma \bigl(X^{0}_{t}\bigr)\,dB_{t},\qquad Y^{0}_{0}=0.$$

By Freidlin-Wentzell’s theorem [7], $$\frac{Y^{0}}{h(\varepsilon )}$$ satisfies the LDP on $$C([0,T];\mathbb{R})$$ with speed $$h^{2}(\varepsilon )$$ and with the rate function

$$I(\varphi):= \begin{cases} \frac{1}{2}\int_{0}^{T} \bigl\vert \frac{\dot{\varphi}(t)- b'(X^{0}_{t})\varphi (t)}{\sigma(X^{0}_{t}) }\bigr\vert ^{2}\,dt, & \text{if } \varphi \text{ is absolutely continuous with } \varphi(0)=0, \\ +\infty, & \text{otherwise}. \end{cases}$$
(4.1)
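For concreteness, the rate function (4.1) can be evaluated by quadrature in the simplest setting. The sketch below assumes the CIR case started at $$x_{0}=\beta$$, so that $$X^{0}\equiv\beta$$, $$\sigma(X^{0}_{t})=\rho\sqrt{\beta}$$, and $$b'(X^{0}_{t})=-\alpha$$; the helper and its default parameters are hypothetical illustrations.

```python
import numpy as np

def rate_function(phi, dphi, alpha=1.0, beta=1.0, rho=1.0, T=1.0, n=10_000):
    """Trapezoidal quadrature for the rate function (4.1) in the CIR case
    started at x0 = beta, so that X^0 is constant (equal to beta),
    sigma(X^0_t) = rho*sqrt(beta) and b'(X^0_t) = -alpha."""
    t = np.linspace(0.0, T, n + 1)
    sigma0 = rho * np.sqrt(beta)
    # integrand |sigma(X^0)^{-1} (phi_dot - b'(X^0) phi)|^2 with b' = -alpha
    g = ((dphi(t) + alpha * phi(t)) / sigma0) ** 2
    return 0.5 * float(np.sum((g[:-1] + g[1:]) * np.diff(t)) / 2.0)
```

For example, with $$\alpha=\beta=\rho=T=1$$ and $$\varphi_{t}=t$$, the rate is $$\frac{1}{2}\int_{0}^{1}(1+t)^{2}\,dt=7/6$$.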

Now, let us prove the MDP in Theorem 1.1.

### Proof of MDP

By Theorem 4.2.13 of [6], to prove the LDP for $$(X^{\varepsilon}-X^{0} )/(\sqrt{\varepsilon}h(\varepsilon ))$$, it is enough to show that $$Y^{\varepsilon}/h(\varepsilon )$$ is $$h^{2}(\varepsilon )$$-exponentially equivalent to $$Y^{0}/h(\varepsilon )$$, i.e., for any $$\delta>0$$,

$$\limsup_{\varepsilon \rightarrow0}\frac{1}{h^{2}(\varepsilon )}\log \mathbb{P} \biggl(\frac{\| Y^{\varepsilon}-Y^{0}\|}{h(\varepsilon )}>\delta \biggr)=-\infty.$$
(4.2)

Recalling (3.3), we have

$$\sup_{0\le s\le t}\bigl\vert Y^{\varepsilon}_{s}-Y^{0}_{s}\bigr\vert \le\sup _{0\le s\le t}\bigl\vert I_{1}^{\varepsilon}(s)\bigr\vert + L \int_{0}^{t} \Bigl( \sup_{0\le u\le s}\bigl\vert Y^{\varepsilon}_{u}-Y^{0}_{u}\bigr\vert \Bigr)\,ds+ \sup_{0\le s\le t}\bigl\vert I_{3}^{\varepsilon}(s)\bigr\vert .$$

By Gronwall’s inequality, we have

$$\bigl\Vert Y^{\varepsilon}-Y^{0}\bigr\Vert \le \bigl(\bigl\Vert I_{1}^{\varepsilon}\bigr\Vert +\bigl\Vert I_{3}^{\varepsilon}\bigr\Vert \bigr)e^{LT}.$$
(4.3)

By (4.3), to prove (4.2), it is sufficient to prove that, for any $$\delta>0$$,

$$\limsup_{\varepsilon \rightarrow0}\frac{1}{h^{2}(\varepsilon )}\log \mathbb{P} \biggl(\frac{ \| I_{i}^{\varepsilon}\|}{h(\varepsilon )}>\delta \biggr)=-\infty, \quad i=1, 3.$$
(4.4)

Now we estimate those two terms.

Step 1. For the continuous martingale $$I^{\varepsilon}_{3}$$, let $$\langle I^{\varepsilon}_{3}\rangle_{t}$$ be its quadratic variation process. For any $$\eta>0$$, by Bernstein’s inequality, we have

\begin{aligned}& \mathbb{P} \bigl( \bigl\Vert I_{3}^{\varepsilon}\bigr\Vert >h(\varepsilon )\delta \bigr) \\& \quad \le \mathbb{P} \bigl( \bigl\Vert I_{3}^{\varepsilon}\bigr\Vert >h(\varepsilon )\delta, \bigl\Vert X^{\varepsilon}-X^{0} \bigr\Vert < \eta \bigr)+\mathbb{P}\bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta \bigr) \\& \quad \le \mathbb{P} \bigl( \bigl\Vert I_{3}^{\varepsilon}\bigr\Vert >h(\varepsilon )\delta, \bigl\langle I_{3}^{\varepsilon}\bigr\rangle _{T}\le TL^{2}\eta^{2\gamma} \bigr)+\mathbb {P} \bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta\bigr) \\& \quad \le \exp \biggl\{ -\frac{h^{2}(\varepsilon ) \delta^{2}}{2TL^{2}\eta ^{2\gamma}} \biggr\} +\mathbb{P}\bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta\bigr), \end{aligned}
(4.5)

where we have used the Hölder continuity of σ.

By Theorem 1.2 in [2], $$X^{\varepsilon}$$ satisfies the LDP on $$C([0,T];\mathbb{R})$$ with a good rate function Ĩ. Hence, for any $$\eta>0$$,

$$\limsup_{\varepsilon \rightarrow0} \varepsilon \log\mathbb {P} \bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta \bigr) \le-\inf\bigl\{ \tilde{I}(f): \bigl\Vert f-X^{0}\bigr\Vert \ge\eta\bigr\} .$$

Since the good rate function Ĩ has compact level sets, the infimum $$\inf\{\tilde{I}(f): \|f-X^{0}\|\ge\eta\}$$ is attained at some function $$f_{0}$$. Because $$\tilde{I}(f)=0$$ if and only if $$f=X^{0}$$, we conclude that

$$-\inf\bigl\{ \tilde{I}(f): \bigl\Vert f-X^{0}\bigr\Vert \ge\eta \bigr\} < 0.$$

By (1.3), we have

$$\limsup_{\varepsilon \rightarrow0} \frac{1}{h^{2}(\varepsilon )}\log \mathbb{P} \bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta \bigr)=- \infty.$$
(4.6)

Since $$\eta>0$$ is arbitrary, putting together (4.5) and (4.6), we obtain

$$\limsup_{\varepsilon \rightarrow0}\frac{1}{h^{2}(\varepsilon )}\log \mathbb{P} \biggl(\frac{ \| I_{3}^{\varepsilon}\|}{h(\varepsilon )}>\delta \biggr)=-\infty.$$
(4.7)

Step 2. For the first term $$I_{1}^{\varepsilon}$$, by (3.4), we have

$$\bigl\Vert I_{1}^{\varepsilon}\bigr\Vert \le\frac{C(T,L)}{\sqrt{\varepsilon}} \bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert ^{2}.$$

By (3.2) and Gronwall’s inequality, we have

$$\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \le\sqrt{\varepsilon}C(T,L) \sup_{0\le t\le T}\biggl\vert \int_{0}^{t} \sigma \bigl(X^{\varepsilon}_{s}\bigr)\,dB_{s}\biggr\vert .$$

For any $$\eta>0$$, by Bernstein’s inequality and the Hölder continuity of σ, we have

\begin{aligned}& \mathbb{P} \bigl(\bigl\Vert I_{1}^{\varepsilon}\bigr\Vert >h(\varepsilon )\delta \bigr) \\& \quad \le \mathbb{P} \bigl(\bigl\Vert I_{1}^{\varepsilon}\bigr\Vert >h(\varepsilon )\delta, \bigl\Vert X^{\varepsilon}-X^{0} \bigr\Vert < \eta \bigr)+\mathbb{P}\bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta\bigr) \\& \quad \le \mathbb{P} \biggl( \biggl(\sup_{0\le t\le T} \int_{0}^{t} \sigma \bigl(X^{\varepsilon}_{s}\bigr)\,dB_{s} \biggr)^{2}\ge \frac{h(\varepsilon )\delta}{\sqrt{\varepsilon}C(L,T)}, \bigl\Vert X^{\varepsilon}\bigr\Vert < \bigl\Vert X^{0}\bigr\Vert +\eta \biggr) \\& \qquad {}+\mathbb{P}\bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta \bigr) \\& \quad \le \mathbb{P} \biggl( \biggl(\sup_{0\le t\le T} \int_{0}^{t} \sigma \bigl(X^{\varepsilon}_{s}\bigr)\,dB_{s} \biggr)^{2}\ge \frac{h(\varepsilon )\delta}{\sqrt{\varepsilon}C(L,T)}, \biggl\langle \int_{0}^{\cdot} \sigma\bigl(X^{\varepsilon}_{s}\bigr)\,dB_{s} \biggr\rangle _{T} \le TL^{2} \bigl(\bigl\Vert X^{0}\bigr\Vert +\eta \bigr)^{2\gamma} \biggr) \\& \qquad {} + \mathbb{P}\bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta\bigr) \\& \quad \le \exp \biggl\{ -\frac{h(\varepsilon )\delta}{2\sqrt{\varepsilon}C(L,T) T L^{2} (\Vert X^{0}\Vert +\eta )^{2\gamma}} \biggr\} +\mathbb{P}\bigl(\bigl\Vert X^{\varepsilon}-X^{0}\bigr\Vert \ge\eta\bigr). \end{aligned}
(4.8)

By (1.3), (4.6), and (4.8), we have

$$\limsup_{\varepsilon \rightarrow0}\frac{1}{h^{2}(\varepsilon )}\log \mathbb{P} \biggl(\frac{ \| I_{1}^{\varepsilon}\|}{h(\varepsilon )}>\delta \biggr)=-\infty.$$
(4.9)

The proof is complete. □

## References

1. Revuz, D, Yor, M: Continuous Martingales and Brownian Motion, 3rd edn. Springer, Berlin (1999)

2. Baldi, P, Caramellino, L: General Freidlin-Wentzell large deviations and positive diffusions. Stat. Probab. Lett. 81(8), 1218-1229 (2011)

3. Aït-Sahalia, Y, Hansen, L: Handbook of Financial Econometrics: Tools and Techniques, vol. 1. North-Holland, Amsterdam (2009)

4. Mackevičius, V: Verhulst versus CIR. Lith. Math. J. 55, 119-133 (2015)

5. Donati-Martin, C, Rouault, A, Yor, M, Zani, M: Large deviations for squares of Bessel and Ornstein-Uhlenbeck processes. Probab. Theory Relat. Fields 129(2), 261-289 (2004)

6. Dembo, A, Zeitouni, O: Large Deviations Techniques and Applications, 2nd edn. Applications of Mathematics, vol. 38. Springer, Berlin (1998)

7. Freidlin, MI, Wentzell, AD: Random Perturbations of Dynamical Systems. Springer, Berlin (1984). Translated by J. Szücs

8. Ermakov, M: The sharp lower bound of asymptotic efficiency of estimators in the zone of moderate deviation probabilities. Electron. J. Stat. 6, 2150-2184 (2012)

9. Gao, F, Zhao, X: Delta method in large deviations and moderate deviations for estimators. Ann. Stat. 39, 1211-1240 (2011)

10. Miao, Y, Shen, S: Moderate deviation principle for autoregressive processes. J. Multivar. Anal. 100, 1952-1961 (2009)

11. Guillin, A: Averaging principle of SDE with small diffusion: moderate deviations. Ann. Probab. 31, 413-443 (2003)

12. Ma, Y, Wang, R, Wu, L: Moderate deviation principle for dynamical systems with small random perturbation. arXiv:1107.3432

13. Wang, R, Zhang, TS: Moderate deviations for stochastic reaction-diffusion equations with multiplicative noise. Potential Anal. 42(1), 99-113 (2015)

14. Budhiraja, A, Dupuis, P, Ganguly, A: Moderate deviation principles for stochastic differential equations with jumps. Ann. Probab. (to appear)

15. Chen, L, Gao, F, Wang, S: Moderate deviations and central limit theorem for small perturbation Wishart processes. Front. Math. China 9(1), 1-15 (2014)

## Acknowledgements

The authors are grateful to the anonymous referees for conscientious comments and corrections. This work was supported by National Natural Science Foundation of China (11471304, 11401556).

## Author information


### Corresponding author

Correspondence to Yumeng Li.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.
