Large deviation principle for the mean reflected stochastic differential equation with jumps
Journal of Inequalities and Applications volume 2018, Article number: 295 (2018)
Abstract
In this paper, we establish a large deviation principle for a mean reflected stochastic differential equation driven by both Brownian motion and Poisson random measure. The weak convergence method plays an important role.
1 Introduction
Consider the mean reflected stochastic differential equation (MR-SDE for short) described by the following system:
$$ \textstyle\begin{cases} X_{t}=X_{0}+\int_{0}^{t}b(X_{s})\,ds+\int_{0}^{t}\sigma (X_{s})\,dB_{s}+\int_{0}^{t}\int_{E}F(X_{s^{-}},z)\widetilde{N}(ds,dz)+K_{t}, & t\ge 0, \\ \mathbb {E}[h(X_{t})]\ge 0, \qquad \int_{0}^{t}\mathbb {E}[h(X_{s})]\,dK_{s}=0, & t\ge 0, \end{cases} $$(1)
where \(E=\mathbb {R}\setminus \{0\}\), b and σ are Lipschitz functions from \(\mathbb {R}\) to \(\mathbb {R}\), F is a Lipschitz function on \(\mathbb {R}\times E\), h is bi-Lipschitz continuous, Ñ is the compensated Poisson measure \(\widetilde{N}(ds,dz)= N(ds,dz)-\vartheta (dz)\,ds\), and \(\{B_{t}\}_{t\ge 0}\) is a standard Brownian motion independent of N. The integral of the function h with respect to the law of the solution is required to be nonnegative. The solution to (1) is the couple of processes \((X,K)\), where K is needed to ensure that the constraint is satisfied in a minimal way according to the last condition, namely the Skorokhod condition.
MR-SDE is a very special type of reflected stochastic differential equation (SDE) in which the constraint is not imposed directly on the paths of the solution, as in the usual case, but on its law. This kind of process was introduced recently by Briand, Elie, and Hu [4] in backward form, in the context of risk measures. Briand et al. [3] studied the MR-SDE in forward form and provided an approximation of the solution with the help of interacting particle systems.
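The approximation by interacting particle systems in [3] replaces the theoretical mean \(\mathbb {E}[h(X_{t})]\) by an empirical mean over particles. The following toy sketch (with the hypothetical choices \(h(x)=x\), a constant drift, and additive noise, none of which come from the paper) illustrates the mechanism: K increases exactly when the empirical mean would violate the constraint.

```python
import numpy as np

def simulate_mr_sde(n_particles=10_000, n_steps=200, T=1.0, x0=0.0, seed=0):
    """Toy interacting-particle Euler scheme for a mean reflected SDE with
    h(x) = x, i.e. the constraint E[X_t] >= 0 is enforced on the empirical
    mean.  The deterministic process K increases only when the constraint
    would otherwise be violated (a discrete Skorokhod condition).
    Illustrative coefficients, not taken from the paper."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    b = lambda x: -1.0 + 0.0 * x       # constant downward drift: pushes the mean down
    sigma = lambda x: 1.0 + 0.0 * x    # additive noise
    X = np.full(n_particles, x0)
    K = 0.0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_particles)
        # small compensated jumps: Poisson(dt) counts with centered N(0, 0.1) marks
        dJ = rng.poisson(dt, n_particles) * rng.normal(0.0, 0.1, n_particles)
        X = X + b(X) * dt + sigma(X) * dB + dJ
        dK = max(0.0, -X.mean())       # minimal push restoring E[h(X_t)] >= 0
        X = X + dK
        K += dK
    return X, K
```

With this downward drift the constraint stays active, so the scheme accumulates a strictly positive K of order T.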
Since the original work of Freidlin and Wentzell [11], small noise large deviation principles for stochastic (partial) differential equations have been extensively studied in the literature. In this setting, one considers a small parameter multiplying the noise term and studies the asymptotic behavior of the corresponding probabilities as the parameter tends to zero. Earlier works on this family of problems relied on approximations and exponential probability estimates; see [2, 9]. Later, Dupuis and Ellis [10] developed a weak convergence approach to the theory of large deviations. This approach is based mainly on a variational representation formula for the Laplace transform of bounded continuous functionals. The weak convergence approach has now been adopted for the study of large deviation problems for stochastic partial differential equations, see [8, 13, 14, 16, 17], etc. It has also been used to study moderate deviation problems for stochastic partial differential equations, see [7, 12, 15].
We will use the weak convergence approach to study the large deviation principle for the MR-SDE. Here, a representation formula for K plays an important role in overcoming the difficulty arising from the fact that the reflection process K depends on the law of the solution.
The rest of this paper is organized as follows. In Sect. 2, we first give the definition of the solution to Eq. (1), and then we state the main results of this paper. The weak convergence criterion for the large deviation principle is recalled in Sect. 3. In Sect. 4, we shall prove the main result.
2 Framework and main results
We consider the following conditions.
Condition 2.1
-
(i)
Lipschitz assumption: For any \(p>0\), there exists a constant \(C_{p}>0\) such that, for all \(x,x'\in \mathbb {R}\), we have
$$ \bigl\vert b(x)-b\bigl(x'\bigr) \bigr\vert ^{p}+ \bigl\vert \sigma (x)-\sigma \bigl(x'\bigr) \bigr\vert ^{p}+ \int_{E} \bigl\vert F(x,z)-F\bigl(x',z\bigr) \bigr\vert ^{p} \vartheta (dz)\le C_{p} \bigl\vert x-x' \bigr\vert ^{p}. $$ -
(ii)
The random variable \(X_{0}\) is square integrable and independent of B and N.
Condition 2.2
-
(i)
The function \(h:\mathbb {R}\rightarrow \mathbb {R}\) is an increasing function, and there exist \(0< m< M\) such that
$$ m \vert x-y \vert \le \bigl\vert h(x)-h(y) \bigr\vert \le M \vert x-y \vert \quad \textit{for all } x,y\in \mathbb {R}. $$ -
(ii)
The initial condition \(X_{0}\) satisfies \(\mathbb {E}[h(X_{0})] \ge 0\).
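Condition 2.2(i) can be checked numerically for a concrete choice of h. A minimal sketch with the hypothetical \(h(x)=2x+\sin x\), whose derivative \(2+\cos x\) lies in \([1,3]\), so that one may take \(m=1\) and \(M=3\):

```python
import numpy as np

# Hypothetical example: h(x) = 2x + sin(x); h'(x) = 2 + cos(x) lies in [1, 3],
# so h is increasing and bi-Lipschitz with m = 1 and M = 3.
h = lambda x: 2.0 * x + np.sin(x)

x = np.linspace(-5.0, 5.0, 401)
assert np.all(np.diff(h(x)) > 0.0)     # h is increasing on the grid

for shift in (0.1, 1.0, 3.0):          # check several gaps |x - y|
    y = x + shift
    ratio = np.abs(h(x) - h(y)) / shift
    # bi-Lipschitz sandwich: m <= |h(x)-h(y)| / |x-y| <= M
    assert np.all(ratio >= 1.0 - 1e-12) and np.all(ratio <= 3.0 + 1e-12)
```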
Definition 2.1
A couple of processes \((X,K)\) is said to be a flat deterministic solution to Eq. (1) if \((X,K)\) satisfies (1) with K being a nondecreasing deterministic function such that \(K_{0}=0\).
Theorem 2.3
([5, Theorem 1, Proposition 1])
Under Conditions 2.1 and 2.2, the mean reflected SDE (1) has a unique deterministic flat solution \((X, K)\), and
where \((U_{t})_{0\le t\le T}\) is the process defined by
Moreover, for any \(p\ge 2\), there exists a positive constant \(K_{p}\), depending on T, b, σ, F, h, such that
In this paper, we are concerned with the large deviation principle (LDP for short) for MR-SDEs of jump type on \(\mathbb {R}\):
Condition 2.4
The function F satisfies the following:
-
(1)
There exists a function \(M_{F}\in L^{1}(\vartheta )\cap L^{2}(\vartheta )\) such that, for any \((x,z)\in \mathbb {R}\times E\),
$$ \bigl\vert F(x, z) \bigr\vert ^{2}\le M_{F}(z) \bigl(1+ \vert x \vert ^{2}\bigr); $$ -
(2)
There exists a function \(L_{F}\in L^{1}(\vartheta )\cap L^{2}(\vartheta )\) such that, for any \((x_{i},z)\in \mathbb {R}\times E\), \(i=1,2\),
$$\begin{aligned} \bigl\vert F(x_{1},z)-F(x_{2},z) \bigr\vert ^{2} \leq L_{F}(z) \vert x_{1}-x_{2} \vert ^{2}. \end{aligned}$$
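Conditions 2.4(1)-(2) are easy to verify for concrete data. A sketch with the hypothetical choices \(E=(0,\infty )\), \(\vartheta (dz)=e^{-z}\,dz\), and \(F(x,z)=zx/(1+ \vert x \vert )\), for which one may take \(M_{F}(z)=L_{F}(z)=z^{2}\), a function in \(L^{1}(\vartheta )\cap L^{2}(\vartheta )\):

```python
import numpy as np
from scipy import integrate

# Hypothetical data (not from the paper): E = (0, inf), vartheta(dz) = exp(-z) dz,
# F(x, z) = z * x / (1 + |x|).  Since |x/(1+|x|)| <= 1 and x -> x/(1+|x|) is
# 1-Lipschitz, we get
#   |F(x,z)|^2 <= z^2 (1 + |x|^2)   and   |F(x1,z) - F(x2,z)|^2 <= z^2 |x1 - x2|^2,
# so M_F(z) = L_F(z) = z^2.  Check that z^2 is in L^1 and L^2 of vartheta:
m1, _ = integrate.quad(lambda z: z**2 * np.exp(-z), 0, np.inf)       # Gamma(3) = 2
m2, _ = integrate.quad(lambda z: (z**2)**2 * np.exp(-z), 0, np.inf)  # Gamma(5) = 24
```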
For any \(\delta >0\), define a class of functions
Condition 2.5
The functions \(M_{F}\) and \(L_{F}\) are in the class \(\mathcal {H}^{\delta }\) for some \(\delta >0\).
Remark 2.6
Condition 2.5 implies that, for all \(\delta \in (0,\infty )\) and \(\varGamma \in \mathcal{B}([0,T]\times {E})\) satisfying \(\vartheta_{T}(\varGamma )<\infty \),
The main result of this paper is the following theorem.
Theorem 2.7
Suppose that Conditions 2.1, 2.2, 2.4, and 2.5 hold. Then the family \(\{X^{\varepsilon }\}_{\varepsilon >0}\) satisfies a large deviation principle on \(D([0,T];\mathbb {R})\) in the topology of uniform convergence with the rate function I defined as in (11) with \(\mathcal{G}^{0}\) given by (22). More precisely, for any Borel set \(\varGamma \subset D([0,T];\mathbb {R})\), we have
3 Poisson random measure and the weak convergence criterion
3.1 Poisson random measure and Brownian motion
Let E be a locally compact Polish space. Denote by \(C([0,T],{E})\) and \(D([0,T],{E})\) the spaces of continuous functions and right continuous functions with left limits from \([0,T]\) into E, respectively; \(C_{c}({E})\) is the space of continuous functions with compact supports, \(\mathcal{M}_{FC}({E})\) is the space of all measures ϑ on \(({E},\mathcal{B}({E}))\) such that \(\vartheta (K)<\infty \) for every compact K in E. Endow \(\mathcal{M}_{FC}({E})\) with the weakest topology such that, for every \(f\in C_{c}({E})\), the function \(\vartheta \rightarrow \langle f,\vartheta \rangle =\int_{{E}}f(u)\,d \vartheta (u)\) is continuous. For any \(T\in (0,\infty )\), let \({E}_{T}:=[0,T]\times {E}\). For the measure \(\vartheta \in \mathcal{M}_{FC}({E})\), let \(\vartheta_{T}:=\lambda_{T}\otimes \vartheta \), where \(\lambda_{T}\) is the Lebesgue measure on \([0,T]\).
Recall that a Poisson random measure n on \({E}_{T}\) with intensity measure \(\vartheta_{T}\) is an \(\mathcal{M}_{FC}({E}_{T})\)-valued random variable such that, for each \(B\in \mathcal{B}({E}_{T})\) with \(\vartheta_{T}(B)<\infty \), \(\mathbf{n}(B)\) is Poisson distributed with mean \(\vartheta_{T}(B)\), and for disjoint \(B_{1},\ldots,B_{k}\in \mathcal{B}({E}_{T})\), \(\mathbf{n}(B_{1}),\ldots,\mathbf{n}(B_{k})\) are mutually independent random variables. Denote by \(\mathbb{P}\) the measure induced by n on \((\mathcal{M}_{FC}({E}_{T}),\mathcal{B}(\mathcal{M} _{FC}({E}_{T})))\). Let \(\mathbb{M}:=\mathcal{M}_{FC}({E}_{T})\). Then \(\mathbb{P}\) is the unique probability measure on \((\mathbb{M}, \mathcal{B}(\mathbb{M}))\) under which the canonical map \(N:\mathbb{M} \rightarrow \mathbb{M}\), \(N(m):= m\) is a Poisson random measure with intensity measure \(\vartheta_{T}\). For each \(\theta >0\), let \(\mathbb{P}_{\theta }\) be the probability measure on \((\mathbb{M}, \mathcal{B}(\mathbb{M}))\) under which N is a Poisson random measure with intensity \(\theta \vartheta_{T}\).
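The defining property can be seen directly in simulation: counts of a sampled Poisson random measure on Borel sets are Poisson distributed with mean \(\vartheta_{T}(B)\). A sketch with the illustrative choice \(E=(0,1]\) and ϑ a finite measure of total mass 5 (not from the paper):

```python
import numpy as np

# Sampling a Poisson random measure n on E_T = [0,T] x E with intensity
# vartheta_T = lambda_T (x) vartheta.  Illustrative choice: E = (0,1],
# vartheta = 5 * Uniform(0,1], so vartheta_T(E_T) = 5T.
rng = np.random.default_rng(42)
T, total_mass = 1.0, 5.0

def sample_prm():
    n = rng.poisson(total_mass * T)       # total number of points
    times = rng.uniform(0.0, T, n)        # locations in [0,T]
    marks = rng.uniform(0.0, 1.0, n)      # locations in E
    return times, marks

# For B = [0, T/2] x (0, 1/2], vartheta_T(B) = 5T/4, so n(B) ~ Poisson(1.25):
counts = np.array([
    np.sum((t <= T / 2) & (m <= 0.5))
    for t, m in (sample_prm() for _ in range(20_000))
])
```

The empirical mean and variance of `counts` both approach 1.25, as expected for a Poisson law.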
Set \(\mathbb{Y}:={E}\times [0,\infty )\) and \(\mathbb{Y}_{T}:=[0,T] \times \mathbb{Y}\). Similarly, let \(\overline{\mathbb{M}}:= \mathcal{M}_{FC}(\mathbb{Y}_{T})\) and \(\overline{\mathbb{P}}\) be the unique probability measure on \((\overline{\mathbb{M}},\mathcal{B}(\overline{ \mathbb{M}}))\) under which the canonical map \(\overline{N}:\overline{ \mathbb{M}}\rightarrow \overline{\mathbb{M}}\), \(\overline{N}(m):= m\) is a Poisson random measure with intensity measure \(\overline{\vartheta } _{T}:=\lambda_{T}\otimes \vartheta \otimes \lambda_{\infty }\), with \(\lambda_{\infty }\) being Lebesgue measure on \([0,\infty )\). Let \(\mathcal{F}_{t}:=\sigma \{\overline{N}((0,s]\times O):0\leq s\leq t, O\in \mathcal{B}(\mathbb{Y})\}\), and denote by \(\overline{\mathcal{F}}_{t}\) the completion under \(\overline{ \mathbb{P}}\). Set \(\overline{\mathcal{P}}\) to be the predictable σ-field on \([0,T]\times \overline{\mathbb{M}}\) with the filtration \(\{\overline{\mathcal{F}}_{t}:0\leq t\leq T\}\) on \((\overline{\mathbb{M}},\mathcal{B}(\overline{\mathbb{M}}))\). Let \(\overline{\mathcal{A}}\) be the class of all \((\overline{\mathcal{P}} \otimes \mathcal{B}({E}))/\mathcal{B}[0,\infty )\)-measurable maps \(\varphi :{E}_{T}\times \overline{\mathbb{M}}\rightarrow [0,\infty )\). For \(\varphi \in \overline{\mathcal{A}}\), define a counting process \(N^{\varphi }\) on \({E}_{T}\) by
\(N^{\varphi }\) is the controlled random measure with φ selecting the intensity for the points at location x and time s.
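The controlled measure can be realized by thinning \(\overline{N}\): a point \((s,x,r)\) of \(\overline{N}\) contributes to \(N^{\varphi }\) iff \(r\le \varphi (s,x)\). A sketch under illustrative assumptions (finite ϑ, bounded constant control, neither taken from the paper):

```python
import numpy as np

# Thinning construction of N^phi from the augmented PRM bar-N on
# [0,T] x E x [0,inf): keep a point (s, x, r) iff r <= phi(s, x), so phi
# modulates the intensity at (s, x).  Illustrative choice: E = (0,1] with
# vartheta = Uniform(0,1], T = 1, and r truncated to [0, R) with R
# dominating phi.
rng = np.random.default_rng(7)
T, R = 1.0, 5.0

def n_phi_total(phi, n_rep=20_000):
    """Empirical distribution of N^phi([0,T] x E) over n_rep replications."""
    out = np.empty(n_rep)
    for i in range(n_rep):
        n = rng.poisson(T * 1.0 * R)      # bar-N has intensity ds vartheta(dx) dr
        s = rng.uniform(0.0, T, n)
        x = rng.uniform(0.0, 1.0, n)
        r = rng.uniform(0.0, R, n)
        out[i] = np.sum(r <= phi(s, x))   # thinning step
    return out

counts = n_phi_total(lambda s, x: 2.0)    # constant control phi = 2
```

For the constant control \(\varphi \equiv 2\), the total mass of \(N^{\varphi }\) is Poisson with mean \(\int \varphi \,d\vartheta_{T}=2\), which the sample mean and variance of `counts` reproduce.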
Set \(\mathbb{W}:=C([0,T],\mathbb{R} )\), \(\mathbb{V}:=\mathbb{W}\times \mathbb{M}\), and \(\overline{\mathbb{V}}:=\mathbb{W}\times \overline{ \mathbb{M}}\). Define \(N^{\mathbb{V}}: \mathbb{V}\rightarrow \mathbb{M}\) by \(N^{\mathbb{V}}(w,m)=m\) and \(B^{\mathbb{V}}:\mathbb{V}\rightarrow \mathbb{W}\) by \(B^{\mathbb{V}}(w,m)=w\) for \((w,m)\in \mathbb{V}\). The maps \(\overline{N}^{\overline{\mathbb{V}}}: \overline{ \mathbb{V}}\rightarrow \overline{\mathbb{M}}\) and \(B^{\overline{ \mathbb{V}}}:\overline{\mathbb{V}}\rightarrow \mathbb{W}\) are defined analogously. Define the filtration \(\mathcal{G}^{\mathbb{V}}_{t}:=\sigma \{N ^{\mathbb{V}}((0,s]\times O),B^{\mathbb{V}}(s): 0\leq s\leq t, O \in \mathcal{B}({E})\}\). For every \(\theta >0\), \(\mathbb{P}^{ \mathbb{V}}_{\theta }\) denotes the unique probability measure on \((\mathbb{V},\mathcal{B}(\mathbb{V}))\) such that:
-
(1)
\(B^{\mathbb{V}}\) is a standard Brownian motion,
-
(2)
\(N^{\mathbb{V}}\) is a Poisson random measure with intensity measure \(\theta \vartheta_{T}\) independent of \(B^{\mathbb{V}}\).
Analogously, define \(( \overline{\mathbb{P}}^{\overline{ \mathbb{V}}}_{\theta },\overline{\mathcal{G}}^{\overline{\mathbb{V}}} _{t} ) \) and denote \(\overline{\mathbb{P}}^{\overline{\mathbb{V}}} _{\theta =1}\) by \(\overline{\mathbb{P}}^{\overline{\mathbb{V}}}\). Denote by \(\{ \overline{\mathcal{F}}^{\overline{\mathbb{V}}}_{t} \} \) the \(\overline{\mathbb{P}}^{\overline{\mathbb{V}}}\)-completion of \(\{ \overline{\mathcal{G}}^{\overline{\mathbb{V}}}_{t} \} \) and by \(\overline{\mathcal{P}}^{\overline{\mathbb{V}}}\) the predictable σ-field on \([0,T]\times \overline{\mathbb{V}}\) with the filtration \(\{\overline{\mathcal{F}}^{\overline{\mathbb{V}}}_{t}\}\) on \((\overline{\mathbb{V}},\mathcal{B}(\overline{\mathbb{V}}))\). Let \(\overline{\mathbb{A}}\) be the class of all \(( \overline{ \mathcal{P}}^{\overline{\mathbb{V}}}\otimes \mathcal{B}({E}) ) / \mathcal{B}[0,\infty )\)-measurable maps \(\varphi :{E}_{T}\times \overline{ \mathbb{V}}\rightarrow [0,\infty )\). Define \(l:[0,\infty )\rightarrow [0,\infty )\) by
$$ l(r):= r\log r-r+1,\quad r\in [0,\infty ). $$
For any \(\varphi \in \overline{\mathbb{A}}\), the quantity
$$ L_{T}(\varphi ):= \int_{{E}_{T}}l\bigl(\varphi (s,v)\bigr)\vartheta (dv)\,ds $$
is well defined as a \([0,\infty ]\)-valued random variable.
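In the weak convergence literature [8], the cost function is \(l(r)=r\log r-r+1\). A small sketch of l and of the resulting cost of a constant control when ϑ is finite (an illustrative simplification):

```python
import math

def l(r):
    """Standard cost in the variational representation: l(r) = r log r - r + 1,
    with the convention 0 * log 0 = 0.  Convex, nonnegative, and l(1) = 0,
    so the uncontrolled intensity (phi = 1) carries zero cost."""
    if r == 0.0:
        return 1.0
    return r * math.log(r) - r + 1.0

def cost_constant_control(phi, T=1.0, theta_mass=1.0):
    """L_T(phi) = integral of l(phi) over E_T for a constant control phi,
    when vartheta(E) = theta_mass (an illustrative finite choice)."""
    return l(phi) * T * theta_mass
```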
Let
Set \(\mathcal{U}:=\mathcal{L}_{2}\times \overline{\mathbb{A}}\). Define
and
3.2 The weak convergence criterion
In this subsection, we recall a general criterion for a large deviation principle established in [8]. Let \(\{\mathcal{G}^{\varepsilon }\}_{\varepsilon >0}\) be a family of measurable maps from \(\overline{\mathbb{V}}\) to \(\mathbb{U}\), where \(\overline{\mathbb{V}}\) is introduced in Sect. 3.1 and \(\mathbb{U}\) is a Polish space. We present a sufficient condition for a large deviation principle to hold for the family \(Z^{\varepsilon }:=\mathcal{G}^{\varepsilon } ( \sqrt{\varepsilon }B, \varepsilon N^{\varepsilon^{-1}} ) \) as \(\varepsilon \rightarrow 0\).
Define
$$ S^{\varUpsilon }:=\bigl\{ g:{E}_{T}\rightarrow [0,\infty ): L_{T}(g)\le {\varUpsilon }\bigr\} $$
and
$$ \tilde{S}^{\varUpsilon }:=\biggl\{ f\in L^{2}\bigl([0,T];\mathbb {R}\bigr): \frac{1}{2} \int_{0}^{T} \bigl\vert f(s) \bigr\vert ^{2}\,ds\le {\varUpsilon }\biggr\} . $$
A function \(g\in S^{\varUpsilon }\) can be identified with a measure \(\vartheta_{T}^{g}\in \mathbb{M}\) defined by
$$ \vartheta_{T}^{g}(A):= \int_{A}g(s,v)\vartheta (dv)\,ds,\quad A\in \mathcal{B}({E}_{T}). $$
This identification induces a topology on \(S^{\varUpsilon }\) under which \(S^{\varUpsilon }\) is a compact space, see the Appendix of [6]. Throughout we use this topology on \(S^{\varUpsilon }\). We also use the weak topology on \(\tilde{S}^{\varUpsilon }\). Set \(\overline{S}^{{\varUpsilon }}:=\tilde{S}^{{\varUpsilon }}\times S ^{{\varUpsilon }}\), \(\mathbb{S}:=\bigcup_{{\varUpsilon }\geq 1}\overline{S} ^{\varUpsilon }\), and
The following condition is sufficient for establishing an LDP for a family \(\{Z^{\varepsilon }\}_{\varepsilon >0}\).
Condition 3.1
There exists a measurable map \(\mathcal{G}^{0}:\overline{\mathbb{V}} \rightarrow \mathbb{U}\) such that the following conditions hold:
-
(a)
For any \({\varUpsilon }\in \mathbb{N}\), let \((f_{n},g_{n}), (f,g) \in \overline{S}^{\varUpsilon }\) be such that \((f_{n},g_{n})\rightarrow (f,g)\) as \(n\rightarrow \infty \). Then
$$ \mathcal{G}^{0} \biggl( \int_{0}^{\cdot }f_{n}(s)\,ds, \vartheta_{T}^{g _{n}} \biggr) \longrightarrow \mathcal{G}^{0} \biggl( \int_{0}^{\cdot }f(s)\,ds, \vartheta_{T}^{g} \biggr) \quad \textit{in } \mathbb{U}. $$ -
(b)
For any \({\varUpsilon }\in \mathbb{N}\), let \(\phi_{\varepsilon }= ( \psi_{\varepsilon },\varphi_{\varepsilon } ) \), \(\phi =( \psi ,\varphi )\in \mathcal{U}^{\varUpsilon }\) be such that \(\phi_{\varepsilon }\) converges in distribution to Ï• as \(\varepsilon \rightarrow 0\). Then
$$ \mathcal{G}^{\varepsilon } \biggl( \sqrt{\varepsilon }B+ \int_{0}^{ \cdot }\psi_{\varepsilon }(s)\,ds, \varepsilon N^{\varepsilon^{-1} \varphi_{\varepsilon }} \biggr) \longrightarrow \mathcal{G}^{0} \biggl( \int _{0}^{\cdot }\psi (s)\,ds,\vartheta_{T}^{\varphi } \biggr) \quad \textit{in distribution}. $$
For \(h\in \mathbb{U}\), define \(\mathbb{S}_{h}:= \{ (f,g)\in \mathbb{S}:h=\mathcal{G}^{0} ( \int_{0}^{\cdot }f(s)\,ds, \vartheta ^{g}_{T} ) \} \). Let \(I:\mathbb{U}\rightarrow [0,\infty ]\) be defined by
$$ I(h):=\inf_{q=(f,g)\in \mathbb{S}_{h}} \overline{L}_{T}(q), $$(11)
where \(\overline{L}_{T}(q)\) is given by (10). By convention, \(I(h)=\infty \) if \(\mathbb{S}_{h}=\emptyset \).
Recall the following criterion from [8].
Theorem 3.2
([8])
Suppose that Condition 3.1 holds. Then the family \(\{ \mathcal{G} ^{\varepsilon } ( \sqrt{\varepsilon }B,\varepsilon N^{ \varepsilon^{-1}} ) \} _{\varepsilon >0}\) satisfies a large deviation principle with the rate function I given by (11).
For applications, the following strengthened form of Theorem 3.2 is useful. Let \(\{K_{n}\}_{n\ge 1}\) be an increasing sequence of compact sets in E such that \(\bigcup_{n=1}^{ \infty }K_{n}={E}\). For each n, let
and let \(\overline{\mathbb{A}}_{b}:=\bigcup_{n=1}^{\infty }\overline{ \mathbb{A}}_{b,n}\), \(\tilde{\mathcal{U}}^{\varUpsilon }:=\mathcal{U}^{ \varUpsilon }\cap \{ (\psi ,\phi ): \phi \in \overline{\mathbb{A}}_{b} \} \).
Theorem 3.3
([8])
Suppose Condition 3.1 holds with \(\mathcal{U}^{\varUpsilon }\) replaced by \(\tilde{\mathcal{U}}^{\varUpsilon }\). Then the conclusions of Theorem 3.2 also hold.
4 Proof of Theorem 2.7
Proof of Theorem 2.7
According to Theorem 3.3, we need to prove that Condition 3.1 is fulfilled. The verification of Conditions (3.1.a) and (3.1.b) will be given by Proposition 4.4 and Proposition 4.8, respectively. □
To use the representation formula (2) of the process K, we recall a result from [3]. Define the function
and
With these notations, denoting by \((\mu_{t})_{t\in [0,T]}\) the family of marginal laws of \((U_{t})_{t\in [0,T]}\), we have
For any two measures \(\nu ,\nu '\in \mathcal{P}(\mathbb {R})\), the Wasserstein-1 distance between ν and \(\nu '\) is defined by
$$ W_{1}\bigl(\nu ,\nu '\bigr):=\inf \bigl\{ \mathbb {E}\bigl[ \vert X-Y \vert \bigr] : \mathcal{L}(X)=\nu , \mathcal{L}(Y)=\nu ' \bigr\} . $$
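For one-dimensional laws, the Wasserstein-1 distance between empirical measures can be computed by sorting; `scipy.stats.wasserstein_distance` does exactly this. The sketch below also checks the Lipschitz-type bound \(\vert \int h\,d\nu -\int h\,d\nu ' \vert \le M W_{1}(\nu ,\nu ')\) that underlies the use of Lemma 4.1, for the hypothetical 1-Lipschitz choice \(h=\tanh \):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Empirical measures nu = (1/3)(d_0 + d_1 + d_3) and nu' = (1/3)(d_5 + d_6 + d_8):
# nu' is nu shifted by 5, so W_1(nu, nu') = 5.
u = np.array([0.0, 1.0, 3.0])
v = np.array([5.0, 6.0, 8.0])
d = wasserstein_distance(u, v)

# For an M-Lipschitz h, |E_nu[h] - E_nu'[h]| <= M * W_1(nu, nu').
h = np.tanh                      # hypothetical 1-Lipschitz test function
gap = abs(np.mean(h(u)) - np.mean(h(v)))
```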
Lemma 4.1
([3, Theorem 2.5])
Under Condition 2.2, for any \(\nu , \nu '\in \mathcal{P}(\mathbb {R})\),
From Remark 1 in [5], we have
By the definition of \(G_{0}\), for \(s< t\), using Lemma 4.1, we have
The following lemma can be proved by using the arguments in [6, Lemma 3.4] and [7, Lemma 4.3]. We omit its proof.
Lemma 4.2
Under Conditions 2.4 and 2.5, for the function \(G=M_{F}\) or \(L_{F}\), we have
-
(i)
For every \({\varUpsilon }\in \mathbb{N}\),
$$\begin{aligned} C^{\varUpsilon }_{1}:=\sup_{g\in S^{\varUpsilon }} \int_{{E}_{T}} \bigl\vert G(v) \bigr\vert \cdot \bigl\vert g(s,v)-1 \bigr\vert \vartheta (dv)\,ds< \infty \end{aligned}$$ (16)

and

$$\begin{aligned} C^{\varUpsilon }_{2}:=\sup_{g\in S^{\varUpsilon }} \int_{{E}_{T}} \bigl\vert G(v) \bigr\vert ^{2} \cdot \bigl(g(s,v)+1\bigr)\vartheta (dv)\,ds< \infty . \end{aligned}$$ (17) -
(ii)
For every \(\eta >0\), there exists \(\delta >0\) such that, for any \(A\subset [0,T]\) satisfying \(\lambda_{T}(A)<\delta \),
$$\begin{aligned} \sup_{g\in S^{\varUpsilon }} \int_{A} \int_{{E}} \bigl\vert G(v) \bigr\vert \cdot \bigl\vert g(s,v)-1 \bigr\vert \vartheta (dv)\,ds\leq \eta . \end{aligned}$$(18)
The following lemma is from [6, Lemma 3.11].
Lemma 4.3
Let \(k:[0,T]\times E\rightarrow \mathbb{R}\) be a measurable function such that
and for all \(\delta \in (0,\infty )\) and \(\varGamma \in \mathcal{B}([0,T] \times E)\) satisfying \(\vartheta_{T}(\varGamma )<\infty \),
For any \({\varUpsilon }\in \mathbb{N}\), let \(g_{n},g\in S^{\varUpsilon }\) be such that \(g_{n}\rightarrow g\) as \(n\rightarrow \infty \). Then
Under Conditions 2.1 and 2.2, Eq. (5) has a unique strong solution \(X^{\varepsilon }\). Therefore, there exists a Borel-measurable function \(\mathcal{G}^{\varepsilon }: \bar{\mathbb{V}}\rightarrow D([0,T]; \mathbb {R})\) such that, for any Poisson random measure \(N^{\varepsilon^{-1}}\) on \([0,T]\times E\) with intensity measure \(\varepsilon^{-1} \lambda_{T} \otimes \vartheta \), \(\mathcal{G}^{\varepsilon } ( \sqrt{\varepsilon }B, \varepsilon N^{\varepsilon^{-1}} ) \) is the unique solution of Eq. (5).
Next we introduce the map \(\mathcal{G}^{0}\) which will be used to define the rate function and also used for verification of Condition 3.1. Recall \(\mathbb{S}\) defined in the last section. Under Condition 2.4, for every \(q=(f,g)\in \mathbb{S}\), the deterministic integral equation
has a unique continuous solution. Here
where \((U_{t})_{0\le t\le T}\) is the process defined by
Define
Let \(I:D([0,T];\mathbb {R})\rightarrow [0,\infty ]\) be defined as in (11) with \(\mathcal{G}^{0}\) given by (22).
We first verify Condition (3.1.a).
Proposition 4.4
Let \({\varUpsilon }\in \mathbb{N}\) and let \(q_{n}:=(f_{n},g_{n})\), \(q:=(f,g) \in \bar{S}^{\varUpsilon }\) be such that \(q_{n}\rightarrow q\) as \(n\rightarrow \infty \) in \(\bar{S}^{\varUpsilon }\). Then
Proof
Recall that \(\mathcal{G}^{0} ( \int_{0}^{\cdot }f_{n}(s)\,ds, \vartheta_{T}^{g_{n}} ) ={X}^{q_{n}}\). For simplicity, we denote \(X^{n}:={X}^{q_{n}}\), \(X:={X}^{q}\). Since \(X\in C([0, T];\mathbb {R})\), we know that \(M:=\sup_{t\in [0,T]}\vert X(t)\vert <+\infty \). Notice that
Set \(\kappa^{n}(t):=\sup_{u\in [0, t]}\vert X^{n}(u)-X(u)\vert \). By the Lipschitz condition of b, we have
By the Lipschitz condition of σ, we have
By the linear growth of σ, we know that
Since \(f_{n}\rightarrow f\) weakly in \(L^{2}([0,T];\mathbb {R})\), there exists a constant \(C(f)\) such that
Thus, by the Cauchy–Schwarz inequality, we have
Similarly, we have, for any \(0\le t_{1}< t_{2}\le T\),
which means that the sequence \(\{I_{3}^{n}:n\ge 1\}\) is equicontinuous. By the Arzelà–Ascoli theorem, we know that \(\{I_{3}^{n}:n\ge 1\}\) is relatively compact in \(C([0,T];\mathbb {R})\).
By using (26) and the fact that \(f_{n}\rightarrow f\) weakly in \(L^{2}([0,T];\mathbb {R})\), we obtain that, for any \(t\in [0,T]\),
This, together with the relative compactness of \(\{I_{3}^{n}:n\ge 1\}\) in \(C([0,T];\mathbb {R})\), implies that
By Lemma 4.2, we have
By Condition 2.4, Remark 2.6, and Lemma 4.3, we know that as \(n\rightarrow \infty \),
By Lemma 4.2, we know that the sequence \(\{I^{n}_{5}:n\ge 1\}\) is uniformly bounded and equicontinuous, which implies that \(\{I^{n}_{5}:n\ge 1\}\) is relatively compact in \(C([0,T];\mathbb {R})\) by the Arzelà–Ascoli theorem. Consequently, by (29), we know
Recalling the definition of \(K_{t}^{q}\) given by (21), we have
According to Lemma 4.1, we know that
Putting (23), (24), (25), (28), (32) together, we have
Then, by Lemma 4.2, (27), (30), and Gronwall’s lemma, we have
The proof is complete. □
We now verify the second part of Condition 3.1.
Let \(\phi_{\varepsilon }=(\psi_{\varepsilon }, \varphi_{\varepsilon })\in \widetilde{\mathcal{U}}^{\varUpsilon }\) and \(\vartheta_{\varepsilon }=\frac{1}{ \varphi_{\varepsilon }}\). The following lemma follows from [8, Lemma 2.3].
Lemma 4.5
([8, Lemma 2.3])
The processes
and
are \(\{\bar{\mathcal{F}}_{t}^{\bar{\mathbb{V}}}\}\)-martingales. Set
Then
defines a probability measure on \(\bar{\mathbb{V}}\).
Since \(( \sqrt{\varepsilon }B+\int_{0}^{\cdot }\psi_{\varepsilon }(s)\,ds,\varepsilon N^{\varepsilon ^{-1}\varphi_{\varepsilon }} ) \) under \(\mathbb{Q}^{\varepsilon }_{T}\) has the same law as that of \(( \sqrt{\varepsilon }B,\varepsilon N^{\varepsilon ^{-1}} ) \) under \(\overline{\mathbb{P}}^{\overline{\mathbb{V}}}\), there exists a unique solution to the following controlled stochastic evolution equation \(\widetilde{X}^{\varepsilon }\):
Here \(\widetilde{K}_{t}^{\varepsilon }\) is given by
with the process \((\widetilde{U}_{t}^{\varepsilon })_{0\le t\le T}\) defined by
Then
The following estimate can be proved in a similar way to (4); the proof is omitted.
Lemma 4.6
There exists some constant \(\varepsilon _{0}>0\) such that
Let \(\{Y_{\varepsilon }\}_{\varepsilon \in (0,1)}\) be a sequence of random elements of \(D([0,T];\mathbb{R})\), and \(\{\tau_{\varepsilon },\delta_{\varepsilon }\}\) be such that:
-
(a)
For each ε, \(\tau_{\varepsilon }\) is a stopping time with respect to the natural filtration and takes only finitely many values.
-
(b)
The constant \(\delta_{\varepsilon }\in [0,T]\) satisfies that \(\delta_{\varepsilon } \rightarrow 0\) as \(\varepsilon \rightarrow 0\).
We introduce the following condition on \(\{Y_{\varepsilon }\}\):
Condition (A)
For each sequence \(\{\tau_{\varepsilon },\delta_{\varepsilon }\}\) satisfying (a) and (b), \(Y_{\varepsilon }(\tau_{\varepsilon }+\delta_{\varepsilon })-Y_{\varepsilon }( \tau_{\varepsilon })\rightarrow 0\) in probability as \(\varepsilon \rightarrow 0\).

Recall the following lemma from Aldous [1].
Lemma 4.7
([1])
Suppose that \(\{Y_{\varepsilon }\}_{\varepsilon \in (0,1)}\) satisfies Condition (A) and that \(\{Y_{\varepsilon }(t)\}_{\varepsilon \in (0,1)}\) is tight on \(\mathbb {R}\) for each \(t\in [0,T]\). Then \(\{Y_{\varepsilon }\}_{\varepsilon \in (0,1)}\) is tight in \(D([0,T];\mathbb{R})\).
Proposition 4.8
Fix \({\varUpsilon }\in \mathbb{N}\), and let \(\phi_{\varepsilon }=( \psi_{\varepsilon },\varphi_{\varepsilon })\), \(\phi =(\psi ,\varphi ) \in \widetilde{\mathcal{U}}^{\varUpsilon }\) be such that \(\phi_{\varepsilon }\) converges in distribution to \(\phi =(\psi , \varphi )\) as \(\varepsilon \rightarrow 0\). Then
Proof
First, we prove that \(\widetilde{X}^{\varepsilon }=\mathcal{G}^{\varepsilon } ( \sqrt{\varepsilon }B+\int_{0}^{\cdot }\psi_{\varepsilon }(s)\,ds, \varepsilon N^{\varepsilon^{-1}\varphi_{\varepsilon }} ) \) is tight in \(D([0,T];\mathbb {R})\). With the help of Aldous' criterion, we only need to verify the conditions of Lemma 4.7.
By Lemma 4.6,
Hence \(\{\widetilde{X}^{\varepsilon }_{t}\}\) is tight on \(\mathbb {R}\) for each \(t\in [0,T]\). Thus it remains to prove that \(\{ \widetilde{X}^{\varepsilon } \} \) satisfies Condition (A). For any sequences \(\{\tau_{\varepsilon }, \delta_{\varepsilon }\}\) satisfying (a) and (b),
By the linear growth of b and σ, we have
and
For any \(\eta >0\), when \(\delta_{\varepsilon }<\delta \), where δ is the constant in Lemma 4.2(ii), we have, by Lemma 4.2,
For the fifth term,
According to Lemma 4.1, we know that
By (37)–(43), Lemma 4.2, and Chebyshev’s inequality, we obtain Condition (A). Thus we have proved that \(\widetilde{X}^{\varepsilon }= \mathcal{G}^{\varepsilon } ( \sqrt{\varepsilon }B+\int_{0}^{ \cdot }\psi_{\varepsilon }(s)\,ds, \varepsilon N^{\varepsilon^{-1} \varphi_{\varepsilon }} ) \) is tight in \(D([0,T];\mathbb {R})\).
Finally, we prove that \(\mathcal{G}^{0} ( \int_{0}^{\cdot }\psi (s)\,ds, \vartheta_{T}^{\varphi } ) \) is the unique limit of \(\mathcal{G}^{\varepsilon } (\sqrt{ \varepsilon }B+\int_{0}^{\cdot }\psi_{\varepsilon }(s)\,ds, \varepsilon N^{\varepsilon^{-1}\varphi_{\varepsilon }} )\).
Recall (33). Denote
and
Similar to the proof of (39), we know that \(\overline{M} ^{\varepsilon }\Longrightarrow 0\) and \(M^{\varepsilon }\Longrightarrow 0\) as \(\varepsilon \rightarrow 0\).
Choose a subsequence along which \((\widetilde{X}^{\varepsilon }, \phi_{\varepsilon }, \overline{M}^{\varepsilon }, M^{\varepsilon })\) converges to \((\widetilde{X}, \phi , 0,0)\) in distribution. By the Skorokhod representation theorem, we may assume that \((\widetilde{X}^{\varepsilon }, \phi_{\varepsilon }, \overline{M}^{\varepsilon }, M^{\varepsilon })\) converges to \((\widetilde{X}, \phi , 0,0)\) almost surely.
Note that convergence in the Skorokhod topology to a continuous limit is equivalent to uniform convergence, and \(C([0,T];\mathbb {R})\) is a closed subset of \(D([0,T];\mathbb {R})\). Hence
Since \(\widetilde{X}^{\varepsilon }-M^{\varepsilon }\in C([0,T];\mathbb {R})\) and \(\widetilde{X}^{\varepsilon }-M^{\varepsilon }\rightarrow \widetilde{X}\) almost surely in \(D([0,T];\mathbb {R})\), we have \(\widetilde{X}\in C([0,T];\mathbb {R})\), and
Letting \(\varepsilon \rightarrow 0\), along the lines of the proof of Proposition 4.4, we see that X̃ must solve
By uniqueness, this gives \(\widetilde{X}=\mathcal{G}^{0} ( \int_{0}^{\cdot }\psi (s)\,ds, \vartheta_{T}^{\varphi } ) \).
The proof is complete. □
References
Aldous, D.: Stopping times and tightness. Ann. Probab. 6, 335–340 (1978)
Azencott, R.: Grandes déviations et applications. In: Saint Flour Probability Summer School-1978, Saint Flour, 1978. Lecture Notes in Math., vol. 774, pp. 1–176. Springer, Berlin (1980)
Briand, P., de Raynal, P.-É.C., Guillin, A., Labart, C.: Particles systems and numerical schemes for mean reflected stochastic differential equations, pp. 1–25. arXiv:1612.06886 (2016)
Briand, P., Elie, R., Hu, Y.: BSDEs with mean reflection. Ann. Appl. Probab. 28(1), 482–510 (2018)
Briand, P., Ghannoum, A., Labart, C.: Mean reflected stochastic differential equations with jumps, pp. 1–37. arXiv:1803.10165 (2018)
Budhiraja, A., Chen, J., Dupuis, P.: Large deviations for stochastic partial differential equations driven by a Poisson random measure. Stoch. Process. Appl. 123, 523–560 (2013)
Budhiraja, A., Dupuis, P., Ganguly, A.: Moderate deviation principles for stochastic differential equations with jumps. Ann. Probab. 44, 1723–1775 (2016)
Budhiraja, A., Dupuis, P., Maroulas, V.: Variational representations for continuous time processes. Ann. Inst. Henri Poincaré B, Probab. Stat. 47(3), 725–747 (2011)
Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications. Springer, New York (2000)
Dupuis, P., Ellis, R.: A Weak Convergence Approach to the Theory of Large Deviations. Wiley, New York (1997)
Freidlin, M.I., Wentzell, A.D.: Random Perturbation of Dynamical Systems, 3rd edn. Grundlehren der Mathematischen Wissenschaften, vol. 260. Springer, Heidelberg (2012). Translated from the 1979 Russian original by J. Szücs
Li, Y., Wang, R., Yao, N., Zhang, S.: A moderate deviation principle for stochastic Volterra equation. Stat. Probab. Lett. 122(10), 79–85 (2017)
Liu, W.: Large deviations for stochastic evolution equations with small multiplicative noise. Appl. Math. Optim. 61, 27–56 (2010)
Ren, J., Zhang, X.: Freidlin-Wentzell’s large deviations for stochastic evolution equations. J. Funct. Anal. 254, 3148–3172 (2008)
Wang, R., Zhai, J., Zhang, T.: A moderate deviation principle for 2-D stochastic Navier–Stokes equations. J. Differ. Equ. 258(10), 3363–3390 (2015)
Xu, T., Zhang, T.: White noise driven SPDEs with reflection: existence, uniqueness and large deviation principles. Stoch. Process. Appl. 119(10), 3453–3470 (2009)
Zhai, J.L., Zhang, T.: Large deviations for 2-D stochastic Navier–Stokes equations with multiplicative Lévy noises. Bernoulli 21, 2351–2392 (2015)
Funding
This work was supported by the National Natural Science Foundation of China (11471304, 11401556).
Author information
Contributions
The author read and approved the final manuscript.
Ethics declarations
Competing interests
The author declares that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Li, Y. Large deviation principle for the mean reflected stochastic differential equation with jumps. J Inequal Appl 2018, 295 (2018). https://doi.org/10.1186/s13660-018-1889-2