
Convergence rate of Euler–Maruyama scheme for SDDEs of neutral type

Abstract

In this paper, we are concerned with the convergence rate of the Euler–Maruyama (EM) scheme for stochastic differential delay equations (SDDEs) of neutral type, where the neutral, drift, and diffusion terms are allowed to be of polynomial growth. More precisely, for SDDEs of neutral type driven by Brownian motions, we reveal that the convergence rate of the corresponding EM scheme is one-half; whereas for SDDEs of neutral type driven by pure jump processes, we show that the best convergence rate of the associated EM scheme is slower than one-half. As a consequence, the convergence rate for general SDDEs of neutral type, which is dominated by the pure jump part, is slower than one-half.

Introduction

There is an extensive literature on the convergence rate of numerical schemes for stochastic differential equations (SDEs). It is well known that the convergence rate of the EM scheme for SDEs under global Lipschitz and linear growth conditions is one-half (see, e.g., [12]). Under different conditions, the convergence rate of the EM scheme for SDEs varies. For example, under the Khasminskii-type condition, Mao [11] revealed that the convergence rate of the truncated EM method is close to one-half; under the Hölder condition, the convergence rate of the EM scheme for SDEs has been studied by many scholars (see, e.g., [7, 16, 17]); Sabanis [19] recovered the classical rate of convergence (i.e., one-half) for tamed EM schemes, where, for the SDE involved, the drift coefficient satisfies a one-sided Lipschitz condition and a polynomial Lipschitz condition, and the diffusion term is Lipschitzian. In [2], Bao et al. investigated the convergence rate of the EM scheme for SDEs with Hölder–Dini continuous drifts.

There is also some literature on the convergence rate of numerical schemes for stochastic functional differential equations (SFDEs). For example, under a log-Lipschitz condition, Bao et al. [5] studied the convergence rate of the EM approximation for a range of SFDEs driven by jump processes; Bao and Yuan [4] investigated the convergence rate of the EM approach for a class of SDDEs whose drift and diffusion coefficients are allowed to be of polynomial growth with respect to the delay variables; Gyöngy and Sabanis [8] discussed the rate of almost sure convergence of Euler approximations for SDDEs under monotonicity conditions. In [31], Zhang et al. established the convergence of the EM scheme for a class of highly nonlinear SDDEs in which the linear growth condition is replaced by a Khasminskii-type condition.

Increasingly, real-world systems are modeled by SFDEs of neutral type, as such equations represent systems evolving in a random environment whose evolution depends on the past states, and on the derivatives of past states, through either memory or time delay. Over the last decade, a large number of papers have been devoted to SFDEs of neutral type, covering, e.g., stochastic stability (see, e.g., [12, 13, 26]), large fluctuations (see, e.g., [1]), the large deviation principle (see, e.g., [6]), and transportation inequalities (see, e.g., [3]), to name a few.

Since most SFDEs of neutral type cannot be solved explicitly, numerical approximations for SFDEs of neutral type have also been investigated considerably. For instance, under a global Lipschitz condition, Wu and Mao [21] revealed that the convergence rate of the EM scheme constructed is close to one-half; under a log-Lipschitz condition, Jiang et al. [9] generalized [24] by Yuan and Mao to the neutral case; under the Khasminskii-type condition, following the line of Yuan and Glover [25], Milosevic [15] and Zhou [27] studied the convergence in probability of the associated EM scheme; while in [22], Yan et al. proved the strong convergence of the split-step theta method for SFDEs of neutral type with a convergence rate of one-half. In [28], Zhou and Jin investigated the strong convergence of implicit numerical approximations for SFDEs of neutral type with superlinearly growing coefficients. For numerical schemes preserving the stochastic stability of the exact solutions, we refer to, e.g., [10, 23, 29, 30] and the references therein.

We remark that most of the existing literature on the convergence rate of the explicit EM scheme for SFDEs of neutral type deals with Lipschitz-type conditions, where, in particular, the neutral term is contractive. For example, Obradović et al. [18] discussed the convergence in probability of the explicit EM method for neutral stochastic systems with unbounded delay and Markovian switching under local Lipschitz conditions. To the best of our knowledge, there are few results on the convergence rate of the explicit EM scheme for SFDEs of neutral type under non-Lipschitz (hence nonlinear) conditions. Consider the following SDDE of neutral type:

$$ \mathrm{d}\bigl\{ X(t)-X^{2}(t-\tau ) \bigr\} = \bigl\{ aX(t)+bX^{3}(t-\tau ) \bigr\} \,\mathrm{d}t+cX^{2}(t- \tau ) \, \mathrm{d}B(t),\quad t\ge 0, $$
(1.1)

in which \(a,b,c \in \mathbb{R}\) and \(\tau >0\) are constants, and \(B(t)\) is a scalar Brownian motion. Observe that the neutral, drift, and diffusion coefficients in (1.1) are all highly nonlinear with respect to the delay variable, so the existing results on the convergence rate of EM schemes for SFDEs of neutral type cannot be applied to this example. In this paper we intend to establish the convergence rate of the EM scheme for a class of SDDEs of neutral type in which, in particular, the neutral term is of polynomial growth, so as to cover more interesting models.

Throughout the paper, the shorthand notation \(a\lesssim b\) is used to express that there exists a positive constant c such that \(a\le cb\), where c is a generic constant whose value may change from line to line. Let \((\Omega,\mathcal{F},\mathbb{P})\) be a complete probability space with a filtration \((\mathcal{F}_{t})_{t\geq 0}\) satisfying the usual conditions (i.e., it is right continuous and \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets). For each integer \(n\ge 1\), let \((\mathbb{R}^{n}, \langle \cdot,\cdot \rangle, |\cdot |)\) be the n-dimensional Euclidean space. For \(A\in \mathbb{R}^{n}\otimes \mathbb{R}^{m}\), the collection of all \(n\times m\) matrices, \(\|A\|\) stands for the Hilbert–Schmidt norm, i.e., \(\|A\|=(\sum_{i=1}^{m}|Ae_{i}|^{2})^{1/2}\), where \((e_{i})_{1\le i\le m}\) is an orthonormal basis of \(\mathbb{R}^{m}\). For \(\tau >0\), which is referred to as the delay or memory, \(\mathcal{C}:=C([-\tau,0];\mathbb{R}^{n})\) denotes the space of all continuous functions \(\phi:[-\tau,0]\mapsto \mathbb{R}^{n}\) with the uniform norm \(\|\phi \|_{\infty }:=\sup_{-\tau \leq \theta \leq 0}|\phi (\theta )|\). Let \((B(t))_{t\ge 0}\) be a standard m-dimensional Brownian motion defined on the probability space \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\ge 0},\mathbb{P})\).

The rest of the paper is organized as follows: the convergence rates for two special classes of SDDEs of neutral type, one driven by Brownian motions and the other by pure jump processes, are discussed in Sects. 2 and 3, respectively. The convergence result for general SDDEs of neutral type is demonstrated in Sect. 4, numerical examples are presented in Sect. 5, and a conclusion is given in Sect. 6.

The Brownian motion case

To begin, we focus on an SDDE of neutral type on \((\mathbb{R}^{n}, \langle \cdot,\cdot \rangle, |\cdot |)\) in the form

$$ \mathrm{d}\bigl\{ X(t)-G \bigl(X(t-\tau ) \bigr) \bigr\} = b \bigl(X(t),X(t-\tau ) \bigr)\,\mathrm{d}t+\sigma \bigl(X(t),X(t- \tau ) \bigr)\, \mathrm{d}B(t),\quad t>0, $$
(2.1)

with the initial value \(X(\theta )=\xi (\theta )\) for \(\theta \in [-\tau,0]\), where \(G:\mathbb{R}^{n}\mapsto \mathbb{R}^{n}\), \(b:\mathbb{R}^{n}\times \mathbb{R}^{n}\mapsto \mathbb{R}^{n}\), \(\sigma:\mathbb{R}^{n}\times \mathbb{R}^{n}\mapsto \mathbb{R}^{n\times m}\).

We assume that there exist constants \(L>0\) and \(q\ge 1\) such that, for any \(x,y,\overline{x},\overline{y}\in \mathbb{R}^{n}\),

  1. (A1)

    \(|G(y)-G(\overline{y})|\leq L(1+|y|^{q}+|\overline{y}|^{q})|y- \overline{y}|\);

  2. (A2)

    \(|b(x,y)-b(\overline{x},\overline{y})|+\|\sigma (x,y)-\sigma ( \overline{x},\overline{y})\| \leq L|x-\overline{x}|+L(1+|y|^{q}+| \overline{y}|^{q})|y-\overline{y}|\), where \(\|\cdot \|\) stands for the Hilbert–Schmidt norm;

  3. (A3)

    \(|\xi (t)-\xi (s)|\leq L|t-s|\) for any \(s,t\in [-\tau,0]\).

Remark 2.1

There are some examples such that (A1) and (A2) hold. For instance, if \(G(y)=y^{2}, b(x,y)=\sigma (x,y)=ax+y^{3}\) for any \(x,y\in \mathbb{R}\) and some \(a\in \mathbb{R}\), then both (A1) and (A2) hold.
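As a quick sanity check, the polynomial Lipschitz bounds in (A1) and (A2) for this example can be verified numerically. The sketch below (plain Python; the constants L and q are illustrative choices, not from the paper) samples random points and tests both inequalities for \(G(y)=y^{2}\) and \(b(x,y)=\sigma (x,y)=ax+y^{3}\).

```python
import random

# Example from Remark 2.1: G(y) = y^2, b(x, y) = sigma(x, y) = a*x + y^3.
# (A1) holds since |y^2 - v^2| = |y + v| |y - v|, and (A2) holds since
# |y^3 - v^3| <= 1.5 (1 + y^2 + v^2) |y - v|; both fit q = 2, L = max(|a|, 3).
a, L, q = 2.0, 3.0, 2.0  # illustrative constants

def G(y): return y * y
def b(x, y): return a * x + y ** 3

random.seed(0)
for _ in range(10_000):
    x, y, xb, yb = (random.uniform(-5, 5) for _ in range(4))
    lhs_G = abs(G(y) - G(yb))
    lhs_b = abs(b(x, y) - b(xb, yb))
    bound = L * (1 + abs(y) ** q + abs(yb) ** q) * abs(y - yb)
    assert lhs_G <= bound + 1e-9                    # (A1)
    assert lhs_b <= L * abs(x - xb) + bound + 1e-9  # (A2), with sigma = b
print("Assumptions (A1)-(A2) verified on random samples")
```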

By an argument similar to [12, Theorem 3.1, p. 210], (2.1) has a unique solution \(\{X(t)\}\) under (A1) and (A2). In the sequel, we introduce the EM scheme associated with (2.1). Without loss of generality, we assume that \(h=T/M=\tau /m\in (0,1)\) for some integers \(M,m>1\). For every integer \(k = -m,\dots,0\), set \(Y_{h}^{(k)}:=\xi (kh)\), and for each integer \(k = 0, 1, \dots, M-1\), we define

$$ Y_{h}^{(k+1)}-G \bigl(Y_{h}^{(k+1-m)} \bigr)= Y_{h}^{(k)}-G \bigl(Y_{h}^{(k-m)} \bigr)+b \bigl(Y_{h}^{(k)},Y_{h}^{(k-m)} \bigr)h+ \sigma \bigl(Y_{h}^{(k)},Y_{h}^{(k-m)} \bigr)\Delta B^{(k)}_{h}, $$
(2.2)

where \(\Delta B^{(k)}_{h}:= B((k+1)h)-B(kh)\). For any \(t\in [kh,(k+1)h)\), set \(\overline{Y}(t): = Y_{h}^{(k)}\). To avoid cumbersome calculations, we define the continuous-time EM approximate solution \(Y(t)\) as follows: for any \(\theta \in [-\tau,0]\), \(Y(\theta ) = \xi (\theta ) \), and

$$ \begin{aligned} Y(t)={}& G \bigl(\overline{Y}(t-\tau ) \bigr)+\xi (0)-G \bigl(\xi (-\tau ) \bigr)+ \int _{0}^{t}b \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr)\,\mathrm{d}s \\ &{} + \int _{0}^{t}\sigma \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr) \,\mathrm{d}B(s),\quad t\in [0,T]. \end{aligned} $$
(2.3)

A straightforward calculation shows that the continuous-time EM approximate solution \(Y(t)\) coincides with the discrete-time approximate solution \(\overline{Y}(t)\) at the grid points \(t = kh\).
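To make the recursion concrete, here is a minimal sketch of the discrete scheme (2.2) in plain Python for the scalar case (coefficient functions and parameter values are illustrative, not from the paper). Since \(k+1-m\le k\), the value \(Y_{h}^{(k+1-m)}\) is already known at step k, so the scheme stays fully explicit: one simply adds \(G(Y_{h}^{(k+1-m)})\) back to the right-hand side.

```python
import math, random

def neutral_em(G, b, sigma, xi, tau, T, m, seed=0):
    """Discrete EM scheme (2.2) for d{X(t) - G(X(t - tau))} = b dt + sigma dB, scalar case."""
    h = tau / m                               # step size; assumes T is a multiple of h
    M = int(round(T / h))
    rng = random.Random(seed)
    # Y[k + m] stores Y_h^{(k)}; initial segment comes from xi on [-tau, 0].
    Y = [xi(k * h) for k in range(-m, 1)]
    for k in range(M):
        dB = rng.gauss(0.0, math.sqrt(h))     # Brownian increment Delta B_h^{(k)}
        yk, yk_del = Y[k + m], Y[k]           # Y_h^{(k)} and delayed value Y_h^{(k-m)}
        rhs = yk - G(Y[k]) + b(yk, yk_del) * h + sigma(yk, yk_del) * dB
        Y.append(rhs + G(Y[k + 1]))           # add back G(Y_h^{(k+1-m)}), already known
    return Y[m:]                              # Y_h^{(0)}, ..., Y_h^{(M)}

# usage with simple globally Lipschitz illustrative coefficients
path = neutral_em(G=lambda y: 0.1 * y,
                  b=lambda x, y: -x + 0.1 * y,
                  sigma=lambda x, y: 0.1 * y,
                  xi=lambda t: 1.0 + t, tau=1.0, T=2.0, m=10)
print(len(path))  # M + 1 = 21 grid values on {0, h, ..., T}
```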

pth moment bound

The lemma below provides pth moment estimates for the solution to (2.1) and for the corresponding EM scheme, together with a pth moment bound on the one-step displacement.

Lemma 2.1

Under (A1) and (A2), for any \(p\ge 2\) there exists a constant \(C_{T}>0\) such that

$$ \mathbb{E} \Bigl(\sup_{0\le t\le T} \bigl\vert X(t) \bigr\vert ^{p} \Bigr)\vee \mathbb{E} \Bigl( \sup _{0\leq t\leq T} \bigl\vert Y(t) \bigr\vert ^{p} \Bigr) \leq C_{T} $$
(2.4)

and

$$ \mathbb{E} \Bigl(\sup_{0\le t\le T} \bigl\vert \Gamma (t) \bigr\vert ^{p} \Bigr)\lesssim h^{p/2}, $$
(2.5)

where \(\Gamma (t):=Y(t)-\overline{Y}(t)\).

Proof

We focus only on the following estimate:

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert Y(t) \bigr\vert ^{p} \Bigr)\leq C_{T} $$
(2.6)

for some constant \(C_{T}>0\), since the uniform pth moment of \(X(t)\) on a finite time interval can be handled similarly. From (A1) and (A2), one has

$$ \bigl\vert G(y) \bigr\vert \lesssim 1+ \vert y \vert ^{1+q} $$
(2.7)

and

$$ \bigl\vert b(x,y) \bigr\vert + \bigl\Vert \sigma (x,y) \bigr\Vert \lesssim 1+ \vert x \vert + \vert y \vert ^{1+q} $$
(2.8)

for any \(x,y\in \mathbb{R}^{n}\). By the Hölder and Burkholder–Davis–Gundy (BDG) inequalities (see, e.g., [12, Theorem 7.3, p. 40]), we derive from (2.7) and (2.8) that

$$ \begin{aligned} \mathbb{E} \Bigl(\sup_{-\tau \le s\le t} \bigl\vert Y(s) \bigr\vert ^{p} \Bigr) \lesssim{}& 1+ \Vert \xi \Vert _{\infty }^{p(1+q)}+\mathbb{E} \Bigl(\sup_{-\tau \le s \le t-\tau } \bigl\vert \overline{Y}(s) \bigr\vert ^{p(1+q)} \Bigr) \\ &{} + \int _{0}^{t} \bigl\{ \mathbb{E} \bigl\vert \overline{Y}(s) \bigr\vert ^{p}+\mathbb{E} \bigl\vert \overline{Y}(s-\tau ) \bigr\vert ^{p(1+q)} \bigr\} \,\mathrm{d}s \\ \lesssim{} &1+ \Vert \xi \Vert _{\infty }^{p(1+q)}+\mathbb{E} \Bigl(\sup_{-\tau \le s \le t-\tau } \bigl\vert Y(s) \bigr\vert ^{p(1+q)} \Bigr) \\ &{} + \int _{0}^{T}\mathbb{E} \Bigl(\sup _{-\tau \le r\le s} \bigl\vert Y(r) \bigr\vert ^{p} \Bigr) \, \mathrm{d}s, \end{aligned} $$

where we have used \(Y(kh)=\overline{Y}(kh)\) in the last display. This, together with Gronwall’s inequality, yields that

$$ \mathbb{E} \Bigl(\sup_{0\le s\le t} \bigl\vert Y(s) \bigr\vert ^{p} \Bigr)\lesssim 1+ \Vert \xi \Vert ^{p(1+q)}_{\infty }+ \mathbb{E} \Bigl(\sup_{0\le s\le (t-\tau )\vee 0} \bigl\vert Y(s) \bigr\vert ^{p(1+q)} \Bigr), $$

which further implies that

$$ \mathbb{E} \Bigl(\sup_{0\le t\le \tau } \bigl\vert Y(t) \bigr\vert ^{p} \Bigr)\lesssim 1+ \Vert \xi \Vert ^{p(1+q)}_{\infty } $$

and

$$ \begin{aligned} \mathbb{E} \Bigl(\sup_{0\le t\le 2\tau } \bigl\vert Y(t) \bigr\vert ^{p} \Bigr)& \lesssim 1+ \Vert \xi \Vert ^{p(1+q)}_{\infty }+ \mathbb{E} \Bigl(\sup_{0\le t\le \tau } \bigl\vert Y(t) \bigr\vert ^{p(1+q)} \Bigr)\lesssim 1+ \Vert \xi \Vert ^{p(1+q)^{2}}_{\infty }, \end{aligned} $$

where we use the fact that \(p_{1} =p(1+q)> 2\) and

$$ \mathbb{E} \Bigl(\sup_{0\le t\le \tau } \bigl\vert Y(t) \bigr\vert ^{p_{1}} \Bigr)\lesssim 1+ \Vert \xi \Vert ^{p_{1}(1+q)}_{\infty }\lesssim 1+ \Vert \xi \Vert ^{p(1+q)^{2}}_{\infty }. $$

Thus (2.6) follows from an inductive argument.

Employing Hölder’s and BDG inequalities, we deduce from (2.3) and (2.8) that

$$ \begin{aligned} \mathbb{E} \Bigl(\sup_{0\le t\le T} \bigl\vert \Gamma (t) \bigr\vert ^{p} \Bigr) \lesssim{}& \sup _{0\le k\le M-1} \biggl\{ \mathbb{E} \biggl(\sup_{kh\le t \leq (k+1)h} \biggl\vert \int _{kh}^{t}b \bigl(\overline{Y}(s), \overline{Y}(s- \tau ) \bigr)\,\mathrm{d}s \biggr\vert ^{p} \biggr) \\ &{} +\mathbb{E} \biggl(\sup_{kh\le t \leq (k+1)h} \biggl\vert \int _{kh}^{t} \sigma \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr)\,\mathrm{d}B(s) \biggr\vert ^{p} \biggr) \biggr\} \\ \lesssim {}&\sup_{0\le k\le M-1} \biggl\{ h^{p-1}\mathbb{E} \int _{kh}^{(k+1)h} \bigl\vert b \bigl( \overline{Y}(s),\overline{Y}(s-\tau ) \bigr) \bigr\vert ^{p}\,\mathrm{d}s \\ & {}+h^{\frac{p}{2}-1}\mathbb{E} \int _{kh}^{(k+1)h} \bigl\Vert \sigma \bigl( \overline{Y}(s),\overline{Y}(s-\tau ) \bigr) \bigr\Vert ^{p}\,\mathrm{d}s \biggr\} \\ \lesssim{}& h^{\frac{p}{2}-1}\sup_{0\le k\le M-1} \biggl\{ \int _{kh}^{(k+1)h} \bigl(1+\mathbb{E} \bigl\vert \overline{Y}(s) \bigr\vert ^{p}+\mathbb{E} \bigl\vert \overline{Y}(s- \tau ) \bigr\vert ^{p(q+1)} \bigr)\,\mathrm{d}s \biggr\} \\ \lesssim{}& h^{\frac{p}{2}}, \end{aligned} $$

where in the last step we have used (2.6). The desired assertion is therefore proved. □

Convergence result

The first main result in this paper is stated as follows.

Theorem 2.2

Under Assumptions (A1)–(A3),

$$ \mathbb{E} \Bigl(\sup_{0\le t\le T} \bigl\vert X(t)-Y(t) \bigr\vert ^{p} \Bigr)\lesssim h^{p/2},\quad p \geq 2. $$
(2.9)

So the convergence rate of the EM scheme (i.e., (2.3)) associated with (2.1) is one-half.
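The order-one-half claim can be probed empirically: simulate a fine reference path, rerun the scheme on coarser grids driven by the same Brownian increments, and watch the strong error shrink as h does. The sketch below does this for a toy scalar neutral SDDE with globally Lipschitz coefficients; all coefficients and parameters are illustrative choices, not taken from the paper.

```python
import math, random

def em_path(G, b, sig, xi, tau, T, m, dW):
    """EM scheme (2.2) on step h = tau/m, driven by the given per-step increments dW."""
    h = tau / m
    M = int(round(T / h))
    Y = [xi(k * h) for k in range(-m, 1)]
    for k in range(M):
        yk, yd = Y[k + m], Y[k]
        Y.append(yk - G(Y[k]) + G(Y[k + 1]) + b(yk, yd) * h + sig(yk, yd) * dW[k])
    return Y[m:]

G = lambda y: 0.1 * y
b = lambda x, y: -x + 0.1 * y
sig = lambda x, y: 0.2 + 0.1 * y
xi = lambda t: 1.0 + t
tau = T = 1.0
m_ref, paths = 256, 200
rng = random.Random(42)
errs = {m: 0.0 for m in (8, 32)}
for _ in range(paths):
    fine = [rng.gauss(0.0, math.sqrt(tau / m_ref)) for _ in range(m_ref)]
    ref = em_path(G, b, sig, xi, tau, T, m_ref, fine)[-1]
    for m in errs:
        r = m_ref // m
        dW = [sum(fine[i * r:(i + 1) * r]) for i in range(m)]  # coupled coarse increments
        errs[m] += abs(em_path(G, b, sig, xi, tau, T, m, dW)[-1] - ref) / paths
print(errs)  # the error for m = 32 should be noticeably smaller than for m = 8
```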

With Lemma 2.1 in hand, we are now in a position to complete the proof of Theorem 2.2.

Proof of Theorem 2.2

We follow the Yamada–Watanabe approach (see, e.g., [4]) to complete the proof of Theorem 2.2. For fixed \(\kappa >1\) and arbitrary \(\varepsilon \in (0,1)\), there exists a continuous nonnegative function \(\varphi _{\kappa \varepsilon }(\cdot )\) with support \([\varepsilon /\kappa,\varepsilon ]\) such that

$$ \int _{\varepsilon /\kappa }^{\varepsilon }\varphi _{\kappa \varepsilon }(x) \, \mathrm{d}x = 1\quad \text{and}\quad \varphi _{\kappa \varepsilon }(x) \leq \frac{2}{x\ln \kappa },\quad x> 0. $$

Set

$$ \phi _{\kappa \varepsilon }(x):= \int _{0}^{x} \int _{0}^{y}\varphi _{ \kappa \varepsilon }(z)\, \mathrm{d}z \,\mathrm{d}y,\quad x>0. $$

We can see that \(\phi _{\kappa \varepsilon }(\cdot )\) is such that

$$ x-\varepsilon \leq \phi _{\kappa \varepsilon }(x)\leq x,\quad x > 0. $$
(2.10)
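The Yamada–Watanabe pair is easy to instantiate: a standard concrete choice (an assumption here, since the proof only needs existence) is \(\varphi _{\kappa \varepsilon }(x)=1/(x\ln \kappa )\) on \([\varepsilon /\kappa,\varepsilon ]\), which integrates to one over its support and sits under the bound \(2/(x\ln \kappa )\). The sketch below checks the normalization and the sandwich \(x-\varepsilon \le \phi _{\kappa \varepsilon }(x)\le x\) by crude quadrature.

```python
import math

kappa, eps = 4.0, 0.1          # illustrative parameters: kappa > 1, eps in (0, 1)
a, c = eps / kappa, eps        # support [eps/kappa, eps]

def phi_small(x):              # varphi_{kappa eps}: one concrete mollifier density
    return 1.0 / (x * math.log(kappa)) if a <= x <= c else 0.0

def Phi(x, n=2000):            # phi_{kappa eps}(x) = int_0^x int_0^y varphi(z) dz dy
    h = x / n
    inner = 0.0                # running value of the inner primitive int_0^y varphi
    total = 0.0
    for i in range(n):
        inner += phi_small((i + 0.5) * h) * h
        total += inner * h
    return total

# normalization: integral of varphi over its support equals 1
mass = sum(phi_small(a + (i + 0.5) * (c - a) / 5000) * (c - a) / 5000
           for i in range(5000))
assert abs(mass - 1.0) < 1e-3
# pointwise bound varphi(x) <= 2/(x ln kappa) and sandwich x - eps <= Phi(x) <= x
for x in (0.05, 0.2, 0.5, 1.0):
    assert phi_small(x) <= 2.0 / (x * math.log(kappa)) + 1e-12
    assert x - eps - 1e-3 <= Phi(x) <= x + 1e-3
print("Yamada-Watanabe properties hold numerically")
```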

Let

$$ V_{\kappa \varepsilon }(x)= \phi _{\kappa \varepsilon } \bigl( \vert x \vert \bigr),\quad x \in \mathbb{R}^{n}. $$
(2.11)

By a straightforward calculation, it holds that

$$ (\nabla V_{\kappa \varepsilon }) (x)= \phi '_{\kappa \varepsilon } \bigl( \vert x \vert \bigr) \vert x \vert ^{-1}x, \quad x \in \mathbb{R}^{n} $$

and

$$ \bigl(\nabla ^{2} V_{\kappa \varepsilon } \bigr) (x)=\phi '_{\kappa \varepsilon } \bigl( \vert x \vert \bigr) \bigl( \vert x \vert ^{2} \mathbf{I}-x\otimes x \bigr) \vert x \vert ^{-3}+ \vert x \vert ^{-2}\phi ''_{\kappa \varepsilon } \bigl( \vert x \vert \bigr)x \otimes x,\quad x\in \mathbb{R}^{n}, $$

where \(\nabla \) and \(\nabla ^{2}\) stand for the gradient and Hessian operators, respectively, I denotes the identity matrix, and \(x\otimes x=xx^{*}\) with \(x^{*}\) being the transpose of \(x\in \mathbb{R}^{n}\). Moreover, we have

$$ \bigl\vert (\nabla V_{\kappa \varepsilon }) (x) \bigr\vert \leq 1 \quad\text{and}\quad \bigl\Vert \bigl( \nabla ^{2}V_{\kappa \varepsilon } \bigr) (x) \bigr\Vert \leq 2n \biggl(1+ \frac{1}{\ln \kappa } \biggr) \frac{1}{ \vert x \vert } { \mathbf{{1}}}_{[\varepsilon / \kappa,\varepsilon ]} \bigl( \vert x \vert \bigr), $$
(2.12)

where \(\mathbf{{1}}_{A}(\cdot )\) is the indicator function of the subset \(A\subset \mathbb{R}_{+}\).

For notational simplicity, set

$$ Z(t):=X(t)-Y(t) \quad\text{and}\quad \Lambda (t):= Z(t)-G \bigl(X(t-\tau ) \bigr)+G \bigl( \overline{Y}(t-\tau ) \bigr). $$
(2.13)

In the sequel, let \(t\in [0,T]\) be arbitrary and fix \(p\geq 2\). Due to \(\Lambda (0)={\mathbf{0}}\in \mathbb{R}^{n}\) and \(V_{\kappa \varepsilon }({\mathbf{0}})=0\), an application of Itô’s formula gives

$$ \begin{aligned} V_{\kappa \varepsilon } \bigl(\Lambda (t) \bigr)={}& \int _{0}^{t} \bigl\langle (\nabla V_{\kappa \varepsilon }) \bigl(\Lambda (s) \bigr),\Gamma _{1}(s) \bigr\rangle \,\mathrm{d}s\\ &{}+\frac{1}{2} \int _{0}^{t}\text{trace} \bigl\{ \bigl( \Gamma _{2}(s) \bigr)^{*} \bigl( \nabla ^{2} V_{\kappa \varepsilon } \bigr) \bigl(\Lambda (s) \bigr)\Gamma _{2}(s) \bigr\} \,\mathrm{d}s \\ &{} + \int _{0}^{t} \bigl\langle \nabla (V_{\kappa \varepsilon }) \bigl( \Lambda (s) \bigr),\Gamma _{2}(s)\,\mathrm{d}B(s) \bigr\rangle \\ =: {}& I_{1}(t)+I_{2}(t)+I_{3}(t), \end{aligned} $$

where

$$ \Gamma _{1}(t):=b \bigl(X(t),X(t-\tau ) \bigr)-b \bigl(\overline{Y}(t),\overline{Y}(t- \tau ) \bigr) $$
(2.14)

and

$$ \Gamma _{2}(t):=\sigma \bigl(X(t),X(t-\tau ) \bigr)-\sigma \bigl( \overline{Y}(t), \overline{Y}(t-\tau ) \bigr). $$

Set

$$ V(x,y):=1+ \vert x \vert ^{q}+ \vert y \vert ^{q},\quad x,y\in \mathbb{R}^{n}. $$
(2.15)

According to (2.4), for any \(\rho \ge 2\) there exists a constant \(C_{T}>0\) such that

$$ \mathbb{E} \Bigl(\sup_{0\le t\le T}V \bigl(X(t-\tau ), \overline{Y}(t-\tau ) \bigr)^{\rho } \Bigr)\le C_{T}. $$
(2.16)

Noting that

$$ X(t)-\overline{Y}(t) =\Lambda (t)+\Gamma (t)+ G \bigl(X(t-\tau ) \bigr)-G \bigl( \overline{Y}(t-\tau ) \bigr), $$
(2.17)

and using Hölder’s and BDG inequalities, we get from (2.12) and (A1)–(A2) that

$$ \begin{aligned} \Theta (t):&=\mathbb{E} \Bigl(\sup _{0\leq s\leq t} \bigl\vert I_{1}(s) \bigr\vert ^{p} \Bigr)+\mathbb{E} \Bigl(\sup_{0\le s\le t} \bigl\vert I_{3}(s) \bigr\vert ^{p} \Bigr) \\ &\lesssim \int _{0}^{t} \bigl\{ \mathbb{E} \bigl\vert \Gamma _{1}(s) \bigr\vert ^{p}+\mathbb{E} \bigl\Vert \Gamma _{2}(s) \bigr\Vert ^{p} \bigr\} \,\mathrm{d}s \\ &\lesssim \int _{0}^{t}\mathbb{E} \bigl\vert X(s)- \overline{Y}(s) \bigr\vert ^{p}\,\mathrm{d}s+ \int _{-\tau }^{t-\tau }\mathbb{E} \bigl(V \bigl(X(s), \overline{Y}(s) \bigr)^{p} \bigl\vert X(s) - \overline{Y}(s) \bigr\vert ^{p} \bigr)\,\mathrm{d}s \\ &\lesssim \int _{0}^{t}\mathbb{E} \bigl\{ \bigl\vert \Lambda (s) \bigr\vert ^{p}+ \bigl\vert \Gamma (s) \bigr\vert ^{p} \bigr\} \,\mathrm{d}s + \int _{-\tau }^{t-\tau }\mathbb{E} \bigl(V \bigl(X(s), \overline{Y}(s) \bigr)^{p} \bigl\vert X(s) -\overline{Y}(s) \bigr\vert ^{p} \bigr)\,\mathrm{d}s. \end{aligned} $$
(2.18)

Also, by Hölder’s inequality, it follows from (2.5), (A3), and (2.16) that

$$ \begin{aligned} \Theta (t)\lesssim{}& \int _{0}^{t} \bigl\{ \mathbb{E} \bigl\vert \Lambda (s) \bigr\vert ^{p}+ \mathbb{E} \bigl\vert \Gamma (s) \bigr\vert ^{p}+ \bigl(\mathbb{E}V \bigl(X(s-\tau ),\overline{Y}(s- \tau ) \bigr)^{2p} \bigr)^{1/2} \\ &{} \times \bigl( \bigl(\mathbb{E} \bigl\vert Z(s-\tau ) \bigr\vert ^{2p} \bigr)^{1/2}+ \bigl(\mathbb{E} \bigl\vert \Gamma (s- \tau ) \bigr\vert ^{2p} \bigr)^{1/2} \bigr) \bigr\} \,\mathrm{d}s \\ \lesssim{}& \int _{0}^{t} \bigl\{ \mathbb{E} \bigl\vert \Lambda (s) \bigr\vert ^{p}+\mathbb{E} \bigl\vert \Gamma (s) \bigr\vert ^{p}+ \bigl( \mathbb{E} \bigl\vert Z(s-\tau ) \bigr\vert ^{2p} \bigr)^{1/2}+ \bigl(\mathbb{E} \bigl\vert \Gamma (s- \tau ) \bigr\vert ^{2p} \bigr)^{1/2} \bigr\} \, \mathrm{d}s \\ \lesssim{}& \int _{0}^{t} \bigl\{ \mathbb{E} \bigl\vert \Lambda (s) \bigr\vert ^{p}+ \bigl(\mathbb{E} \bigl\vert Z(s- \tau ) \bigr\vert ^{2p} \bigr)^{1/2}+h^{p/2} \bigr\} \, \mathrm{d}s. \end{aligned} $$
(2.19)

Using (2.12), we derive

$$ \begin{aligned}\mathbb{E} \Bigl(\sup_{0\leq s\leq t} \bigl\vert I_{2}(s) \bigr\vert ^{p} \Bigr) \lesssim {}&\mathbb{E} \int _{0}^{t} \bigl\Vert \bigl(\nabla ^{2} V_{\kappa \varepsilon } \bigr) \bigl( \Lambda (s) \bigr) \bigr\Vert ^{p} \bigl\Vert \Gamma _{2}(s) \bigr\Vert ^{2p}\,\mathrm{d}s \\ \lesssim {}&\mathbb{E} \int _{0}^{t}\frac{1}{ \vert \Lambda (s) \vert ^{p}} \bigl\{ \bigl\vert X(s)- \overline{Y}(s) \bigr\vert ^{2p}+V \bigl(X(s-\tau ), \overline{Y}(s-\tau ) \bigr)^{2p} \\ &{} \times \bigl( \bigl\vert X(s-\tau )-\overline{Y}(s-\tau ) \bigr\vert ^{2p} \bigr) \bigr\} \mathbf{I}_{[\varepsilon / \kappa,\varepsilon ]} \bigl( \bigl\vert \Lambda (s) \bigr\vert \bigr) \,\mathrm{d}s. \end{aligned} $$

In light of (A1) and (2.13)–(2.17), we have

$$\begin{aligned} \begin{aligned} &\mathbb{E} \Bigl(\sup _{0\leq s\leq t} \bigl\vert I_{2}(s) \bigr\vert ^{p} \Bigr) \\ &\quad\lesssim \mathbb{E} \int _{0}^{t}\frac{1}{ \vert \Lambda (s) \vert ^{p}} \bigl\{ \bigl\vert \Lambda (s) \bigr\vert ^{2p}+ \bigl\vert \Gamma (s) \bigr\vert ^{2p}+ \bigl\vert G \bigl(X(s-\tau ) \bigr)-G \bigl( \overline{Y}(s- \tau ) \bigr) \bigr\vert ^{2p} \\ &\qquad{} +V \bigl(X(s-\tau ),\overline{Y}(s-\tau ) \bigr)^{2p} \bigl( \bigl\vert X(s-\tau )- \overline{Y}(s-\tau ) \bigr\vert ^{2p} \bigr) \bigr\} {\mathbf{{I}}_{[\varepsilon /\kappa, \varepsilon ]}} \bigl( \bigl\vert \Lambda (s) \bigr\vert \bigr)\,\mathrm{d}s \\ &\quad\lesssim \mathbb{E} \int _{0}^{t}\frac{1}{ \vert \Lambda (s) \vert ^{p}} \bigl\{ \bigl\vert \Lambda (s) \bigr\vert ^{2p}+ \bigl\vert \Gamma (s) \bigr\vert ^{2p} \\ &\qquad{} +V \bigl(X(s-\tau ),\overline{Y}(s-\tau ) \bigr)^{2p} \bigl( \bigl\vert X(s-\tau )- \overline{Y}(s-\tau ) \bigr\vert ^{2p} \bigr) \bigr\} {\mathbf{{I}}_{[\varepsilon /\kappa, \varepsilon ]}} \bigl( \bigl\vert \Lambda (s) \bigr\vert \bigr)\,\mathrm{d}s \\ &\quad\lesssim \mathbb{E} \int _{0}^{t} \bigl\{ \bigl\vert \Lambda (s) \bigr\vert ^{p}+\varepsilon ^{-p} \bigl\vert \Gamma (s) \bigr\vert ^{2p} \\ &\qquad{} +\varepsilon ^{-p}V^{2p} \bigl(X(s-\tau ), \overline{Y}(s- \tau ) \bigr) \bigl( \bigl\vert X(s- \tau )-\overline{Y}(s-\tau ) \bigr\vert ^{2p} \bigr) \bigr\} \,\mathrm{d}s \\ &\quad\lesssim \int _{0}^{t} \bigl\{ \varepsilon ^{-p}h^{p}+\mathbb{E} \bigl\vert \Lambda (s) \bigr\vert ^{p}+ \varepsilon ^{-p} \bigl(\mathbb{E} \bigl( \bigl\vert Z(s-\tau ) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \bigr\} \, \mathrm{d}s, \end{aligned} \end{aligned}$$
(2.20)

where in the last step we have used Hölder’s inequality. Now, according to (2.10), (2.19), and (2.20), one has

$$ \begin{aligned} \mathbb{E} \Bigl(\sup_{0\leq s\leq t} \bigl\vert \Lambda (s) \bigr\vert ^{p} \Bigr)\lesssim{}& \varepsilon ^{p}+\mathbb{E} \Bigl(\sup_{0\leq s\leq t}V_{ \kappa \varepsilon } \bigl(\Lambda (s) \bigr) \Bigr) \\ \lesssim {}&\varepsilon ^{p}+\Theta (t)+\mathbb{E} \Bigl(\sup _{0\leq s\leq t} \bigl\vert I_{2}(s) \bigr\vert ^{p} \Bigr) \\ \lesssim{}& \varepsilon ^{p}+ \int _{0}^{t} \Bigl\{ h^{p/2}+ \varepsilon ^{-p}h^{p}+ \mathbb{E} \Bigl(\sup_{0\leq r\leq s} \bigl\vert \Lambda (r) \bigr\vert ^{p} \Bigr) \\ & {}+ \bigl(\mathbb{E} \bigl( \bigl\vert Z(s-\tau ) \bigr\vert ^{2p} \bigr) \bigr)^{1/2}+\varepsilon ^{-p} \bigl( \mathbb{E} \bigl( \bigl\vert Z(s-\tau ) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \Bigr\} \,\mathrm{d}s. \end{aligned} $$

Thus, Gronwall’s inequality gives

$$ \begin{aligned} \mathbb{E} \Bigl(\sup _{0\le s\le t} \bigl\vert \Lambda (s) \bigr\vert ^{p} \Bigr) \lesssim{}& \varepsilon ^{p}+h^{p/2}+\varepsilon ^{-p}h^{p} \\ & {}+ \int _{0}^{(t-\tau )\vee 0} \bigl\{ \bigl(\mathbb{E} \bigl( \bigl\vert Z(s) \bigr\vert ^{2p} \bigr) \bigr)^{1/2}+ \varepsilon ^{-p} \bigl(\mathbb{E} \bigl( \bigl\vert Z(s) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \bigr\} \,\mathrm{d}s \\ \lesssim{}& h^{p/2}+ \int _{0}^{(t-\tau )\vee 0} \bigl\{ \bigl(\mathbb{E} \bigl( \bigl\vert Z(s) \bigr\vert ^{2p} \bigr) \bigr)^{1/2}+ \varepsilon ^{-p} \bigl(\mathbb{E} \bigl( \bigl\vert Z(s) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \bigr\} \,\mathrm{d}s, \end{aligned} $$
(2.21)

by choosing \(\varepsilon =h^{1/2}\) and taking \(|Z(t)|\equiv 0\) for \(t\in [-\tau,0]\) into account. Next, by (A1) and (2.16), it follows from Hölder’s inequality that

$$\begin{aligned} & \mathbb{E} \Bigl(\sup _{0\leq t\leq T} \bigl\vert Z(t) \bigr\vert ^{p} \Bigr) \\ \begin{aligned} &\quad\lesssim \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert \Lambda (t) \bigr\vert ^{p} \Bigr)+ \mathbb{E} \Bigl(\sup_{-\tau \leq t\leq T-\tau } \bigl\vert G \bigl(X(t) \bigr) -G \bigl( \overline{Y}(t) \bigr) \bigr\vert ^{p} \Bigr) \\ &\quad\lesssim \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert \Lambda (t) \bigr\vert ^{p} \Bigr)+ \mathbb{E} \Bigl(\sup_{-\tau \leq t\leq T-\tau } \bigl(V \bigl(X(t),\overline{Y}(t) \bigr)^{p} \bigl\vert X(t)- \overline{Y}(t) \bigr\vert ^{p} \bigr) \Bigr) \end{aligned} \\ &\quad\lesssim \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert \Lambda (t) \bigr\vert ^{p} \Bigr)+h^{p/2}+ \Bigl(\mathbb{E} \Bigl(\sup _{0\leq t\leq (T-\tau )\vee 0} \bigl\vert Z(t) \bigr\vert ^{2p} \Bigr) \Bigr)^{1/2}. \end{aligned}$$
(2.22)

Substituting (2.21) into (2.22) yields

$$ \begin{aligned} \mathbb{E} \Bigl(\sup _{0\le t\le T} \bigl\vert Z(t) \bigr\vert ^{p} \Bigr) \lesssim{}& h^{p/2}+ \Bigl(\mathbb{E} \Bigl(\sup_{0\leq t\leq (T-\tau ) \vee 0} \bigl\vert Z(t) \bigr\vert ^{2p} \Bigr) \Bigr)^{1/2} \\ &{}+ \int _{0}^{(T-\tau )\vee 0} \bigl\{ \bigl(\mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{2p} \bigr) \bigr)^{1/2}+ \varepsilon ^{-p} \bigl(\mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \bigr\} \,\mathrm{d}t. \end{aligned} $$
(2.23)

Hence, we have

$$ \mathbb{E} \Bigl(\sup_{0\le t\le \tau } \bigl\vert Z(t) \bigr\vert ^{p} \Bigr) \lesssim h^{p/2} $$

and

$$ \begin{aligned} \mathbb{E} \Bigl(\sup_{0\le t\le 2\tau } \bigl\vert Z(t) \bigr\vert ^{p} \Bigr) \lesssim{}& h^{p/2}+ \Bigl(\mathbb{E} \Bigl(\sup_{0\leq t\leq \tau } \bigl\vert Z(t) \bigr\vert ^{2p} \Bigr) \Bigr)^{1/2} \\ &{}+ \int _{0}^{\tau } \bigl\{ \bigl(\mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{2p} \bigr) \bigr)^{1/2}+ \varepsilon ^{-p} \bigl( \mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \bigr\} \,\mathrm{d}t \\ \lesssim{}& h^{p/2} \end{aligned} $$

by taking \(\varepsilon =h^{1/2}\). Thus, the desired assertion (2.9) follows from an inductive argument. □

The NSDDE driven by pure jump processes

Next, we consider the convergence rate of the EM scheme corresponding to a class of SDDEs of neutral type driven by pure jump processes. More precisely, we consider an SDDE of neutral type

$$ \mathrm{d}\bigl\{ X(t)-G \bigl(X(t-\tau ) \bigr) \bigr\} =b \bigl(X(t),X(t-\tau ) \bigr)\,\mathrm{d}t+ \int _{U}g \bigl(X(t-),X \bigl((t- \tau )- \bigr),u \bigr) \widetilde{N}(\mathrm{d}u,\mathrm{d}t) $$
(3.1)

with the initial data \(X(\theta )=\xi (\theta ), \theta \in [-\tau,0]\). Herein, G and b are given as in (2.1), and \(g:\mathbb{R}^{n}\times \mathbb{R}^{n}\times U\mapsto \mathbb{R}^{n}\), where \(U\in \mathcal{B}(\mathbb{R})\); \(\widetilde{N}(\mathrm{d}t,\mathrm{d}u):= N(\mathrm{d}t,\mathrm{d}u)-\mathrm{d}t \lambda (\mathrm{d}u)\) is the compensated Poisson measure associated with the Poisson counting measure \(N(\mathrm{d}t,\mathrm{d}u)\) generated by a stationary \(\mathcal{F}_{t}\)-Poisson point process \(\{p(t)\}_{t\ge 0}\) on \(\mathbb{R}\) with characteristic measure \(\lambda (\cdot )\), i.e., \(N(t,U)= \sum_{s\in D(p),s\leq t}{\mathbf{1}}_{U}(p(s))\) for \(U\in \mathcal{B}(\mathbb{R})\); and \(X(t-):=\lim_{s \uparrow t} X(s)\).

We assume that b and G are such that (A1) and (A2) hold with \(\sigma \equiv {\mathbf{0}}_{n\times m}\) therein. We further suppose that there exist \(L_{0},r>0\) such that for any \(x,y,\overline{x},\overline{y}\in \mathbb{R}^{n}\) and \(u\in U\),

  1. (A4)

    \(|g(x,y,u)-g(\overline{x},\overline{y},u)|\leq L_{0} (|x- \overline{x}|+(1+|y|^{q}+|\overline{y}|^{q})|y-\overline{y}|)|u|^{r} \) and \(|g(0,0,u)|\leq |u|^{r}\), where \(q\ge 1\) is the same as that in (A1).

  2. (A5)

    \(\int _{U}|u|^{p}\lambda (du) <\infty \) for any \(p\geq 2\).

Remark 3.1

The jump coefficient may also be highly nonlinear with respect to the delay variable. For example, for \(x,y\in \mathbb{R}\), \(u\in U\), and \(q\geq 1\), the function \(g(x,y,u)=(x+y^{q})u\) satisfies (A4) with \(r=1\).
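The claim in the remark is easy to test numerically. The sketch below (illustrative constants, with \(q=2\), \(r=1\), and \(L_{0}=1\), which works because \(|y^{2}-\overline{y}^{2}|\le (|y|+|\overline{y}|)|y-\overline{y}|\le (1+|y|^{2}+|\overline{y}|^{2})|y-\overline{y}|\)) checks (A4) on random samples.

```python
import random

# Example from Remark 3.1: g(x, y, u) = (x + y^q) u with r = 1.
q, r, L0 = 2, 1, 1.0   # illustrative constants

def g(x, y, u): return (x + y ** q) * u

random.seed(1)
for _ in range(10_000):
    x, y, xb, yb = (random.uniform(-5, 5) for _ in range(4))
    u = random.uniform(-2, 2)
    lhs = abs(g(x, y, u) - g(xb, yb, u))
    rhs = L0 * (abs(x - xb) + (1 + abs(y) ** q + abs(yb) ** q) * abs(y - yb)) * abs(u) ** r
    assert lhs <= rhs + 1e-9       # the Lipschitz-type bound in (A4)
assert g(0, 0, 1.5) == 0.0         # so |g(0, 0, u)| <= |u|^r trivially
print("(A4) verified on random samples")
```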

By carrying out an argument similar to that of [12, Theorem 3.1, p. 210], together with [20, Theorem 117, p. 79], (3.1) admits a unique strong solution \(\{X(t)\}\).

By following the procedures of (2.2) and (2.3), the discrete-time EM scheme and the continuous-time EM approximation associated with (3.1) are defined respectively as follows:

$$ \begin{aligned}& Y_{h}^{(n+1)}-G \bigl(Y_{h}^{(n+1-m)} \bigr)\\ &\quad= Y_{h}^{(n)}-G \bigl(Y_{h}^{(n-m)} \bigr)+b \bigl(Y_{h}^{(n)},Y_{h}^{(n-m)} \bigr)h+g \bigl(Y_{h}^{(n)},Y_{h}^{(n-m)},u \bigr) \Delta \widetilde{N}_{nh}, \end{aligned} $$
(3.2)

where \(\Delta \widetilde{N}_{nh}:= \widetilde{N}((n+1)h,U)-\widetilde{N}(nh,U)\), and

$$ \begin{aligned} Y(t)={}& G \bigl(\overline{Y}(t-\tau ) \bigr)+\xi (0)-G \bigl(\xi (-\tau ) \bigr)+ \int _{0}^{t}b \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr)\,\mathrm{d}s \\ &{}+ \int _{0}^{t} \int _{U}g \bigl(\overline{Y}(s-),\overline{Y} \bigl((s-\tau )- \bigr),u \bigr) \widetilde{N}(\mathrm{d}u,\mathrm{d}s), \end{aligned} $$
(3.3)

where \(\overline{Y}\) is defined similarly as in (2.3).
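A minimal simulation sketch of the jump scheme (3.2)–(3.3): per step, the increment of the compensated measure is realized by drawing a Poisson number of jumps, summing g over their marks, and subtracting the compensator \(h\int _{U}g\,\lambda (\mathrm{d}u)\). Everything below (mark distribution, intensity, coefficients) is an illustrative choice, not the paper's setting.

```python
import random

def rng_poisson(rng, mu):
    """Poisson sampler via exponential waiting times (fine for small mu)."""
    n, acc = 0, rng.expovariate(1.0)
    while acc < mu:
        n, acc = n + 1, acc + rng.expovariate(1.0)
    return n

def neutral_em_jump(G, b, g, xi, tau, T, m, lam, draw_u, mean_g, seed=0):
    """EM scheme (3.2) for the pure-jump neutral SDDE (3.1), scalar case.

    lam    : total jump intensity lambda(U)
    draw_u : sampler for the normalized mark law lambda(du)/lam
    mean_g : (x, y) -> E_u[g(x, y, u)] under that mark law (for the compensator)
    """
    h = tau / m
    M = int(round(T / h))
    rng = random.Random(seed)
    Y = [xi(k * h) for k in range(-m, 1)]      # Y[k + m] stores Y_h^{(k)}
    for k in range(M):
        yk, yd = Y[k + m], Y[k]
        # compensated Poisson increment of int_U g dN~ over (kh, (k+1)h]
        jumps = sum(g(yk, yd, draw_u(rng)) for _ in range(rng_poisson(rng, lam * h)))
        comp = lam * h * mean_g(yk, yd)
        Y.append(yk - G(Y[k]) + G(Y[k + 1]) + b(yk, yd) * h + jumps - comp)
    return Y[m:]

path = neutral_em_jump(G=lambda y: 0.1 * y,
                       b=lambda x, y: -x + 0.1 * y,
                       g=lambda x, y, u: 0.1 * (x + y) * u,
                       xi=lambda t: 1.0, tau=1.0, T=2.0, m=20,
                       lam=5.0,
                       draw_u=lambda rng: rng.uniform(-1.0, 1.0),
                       mean_g=lambda x, y: 0.0)  # E[u] = 0 for symmetric marks
print(len(path))  # 41 grid values
```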

pth moment bound

Hereinafter, \((X(t))\) is the strong solution to (3.1) and \((Y(t))\) is the continuous-time EM scheme (i.e., (3.3)) associated with (3.1).

The lemma below plays a crucial role in revealing the convergence rate of the EM scheme.

Lemma 3.1

Under (A1)–(A5) with \(\sigma \equiv {\mathbf{0}}_{n\times m}\), for any \(p\geq 2\), there exists a constant \(C_{T}>0\) such that

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert X(t) \bigr\vert ^{p} \Bigr) \vee \mathbb{E} \Bigl(\sup _{0\leq t\leq T} \bigl\vert Y(t) \bigr\vert ^{p} \Bigr) \leq C_{T} $$
(3.4)

and

$$ \mathbb{E} \Bigl(\sup_{0\le t\le T} \bigl\vert \Gamma (t) \bigr\vert ^{p} \Bigr)\lesssim h, $$
(3.5)

where \(\Gamma (t):=Y(t)-\overline{Y}(t)\).

Proof

The proof of (3.4) is similar to that of (2.4) except for some technical details. To make this paper self-contained, the key steps will be sketched below.

Again, we only focus on the pth moment estimation of \(Y(t)\),

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert Y(t) \bigr\vert ^{p} \Bigr) \leq C_{T}, $$
(3.6)

since the uniform pth moment of \(X(t)\) in a finite time interval can be obtained similarly.

According to (A1), (A2), (A4), and (A5), one has

$$\begin{aligned} &\bigl\vert G(y) \bigr\vert \lesssim 1+ \vert y \vert ^{1+q}, \end{aligned}$$
(3.7)
$$\begin{aligned} &\bigl\vert b(x,y) \bigr\vert \lesssim 1+ \vert x \vert + \vert y \vert ^{1+q}, \end{aligned}$$
(3.8)

and

$$ \bigl\vert g(x,y,u) \bigr\vert \lesssim \bigl(1+ \vert x \vert + \vert y \vert ^{1+q} \bigr) \vert u \vert ^{r}, $$
(3.9)

where \(x,y\in \mathbb{R}^{n}, u\in U\).

Then, by applying the BDG and Hölder inequalities, one can derive from (3.7)–(3.9) that

$$ \begin{aligned} &\mathbb{E} \Bigl(\sup_{-\tau \le s\le t} \bigl\vert Y(s) \bigr\vert ^{p} \Bigr)\\ &\quad \lesssim 1+ \Vert \xi \Vert _{\infty }^{p(1+q)}+\mathbb{E} \Bigl(\sup_{-\tau \le s \le t-\tau } \bigl\vert \overline{Y}(s) \bigr\vert ^{p(1+q)} \Bigr) \\ &\qquad{}+ \int _{0}^{t} \bigl\{ \mathbb{E} \bigl\vert \overline{Y}(s) \bigr\vert ^{p}+\mathbb{E} \bigl\vert \overline{Y}(s-\tau ) \bigr\vert ^{p(1+q)} \bigr\} \,\mathrm{d}s\\ &\qquad{} + \int _{0}^{t} \int _{U} \bigl\{ \bigl[1+ \mathbb{E} \bigl\vert \overline{Y}(s) \bigr\vert ^{p}+\mathbb{E} \bigl\vert \overline{Y}(s-\tau ) \bigr\vert ^{p(1+q)} \bigr] \vert u \vert ^{rp} \bigr\} \lambda (\mathrm{d}u)\,\mathrm{d}s \\ &\quad\lesssim 1+ \Vert \xi \Vert _{\infty }^{p(1+q)}+\mathbb{E} \Bigl(\sup_{-\tau \le s \le t-\tau } \bigl\vert Y(s) \bigr\vert ^{p(1+q)} \Bigr)+ \int _{0}^{T}\mathbb{E} \Bigl(\sup _{- \tau \le r\le s} \bigl\vert Y(r) \bigr\vert ^{p} \Bigr)\, \mathrm{d}s, \end{aligned} $$

where we have used \(Y(kh)=\overline{Y}(k h)\) in the last display. This, together with Gronwall’s inequality, yields

$$ \mathbb{E} \Bigl(\sup_{0\le s\le t} \bigl\vert Y(s) \bigr\vert ^{p} \Bigr)\lesssim 1+ \Vert \xi \Vert ^{p(1+q)}_{\infty }+ \mathbb{E} \Bigl(\sup_{0\le s\le (t-\tau )\vee 0} \bigl\vert Y(s) \bigr\vert ^{p(1+q)} \Bigr). $$

The rest of the proof leading to (3.6) can be done in an identical way as for its Brownian motion counterpart, so we omit the details here.

In the sequel, we aim to show (3.5). From (A4), by applying BDG (see, e.g., [14, Theorem 1]) and Hölder’s inequalities, we derive that

$$ \begin{aligned} &\mathbb{E} \Bigl(\sup_{0\le t\le T} \bigl\vert \Gamma (t) \bigr\vert ^{p} \Bigr) \\ &\quad \lesssim \sup _{0\le k\le M-1} \biggl\{ \mathbb{E} \biggl(\sup_{kh\le t\le (k+1)h} \biggl\vert \int _{kh}^{t}b \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr)\,\mathrm{d}s \biggr\vert ^{p} \biggr) \\ &\qquad{} +\mathbb{E} \biggl(\sup_{kh\le t\le (k+1)h} \biggl\vert \int _{kh}^{t} \int _{U}g \bigl(\overline{Y}(s-),\overline{Y} \bigl((s-\tau )- \bigr),u \bigr) \widetilde{N}(\mathrm{d}s,\mathrm{d}u) \biggr\vert ^{p} \biggr) \biggr\} \\ &\quad \lesssim \sup_{0\le k\le M-1} \biggl\{ \int _{kh}^{(k+1)h} \biggl(h^{p-1} \mathbb{E} \bigl\vert b \bigl(\overline{Y}(s),\overline{Y}(s-\tau ) \bigr) \bigr\vert ^{p} \\ & \qquad{}+ \int _{U}\mathbb{E} \bigl\vert g \bigl(\overline{Y}(s), \overline{Y}(s-\tau ),u \bigr) \bigr\vert ^{p} \lambda (\mathrm{d}u) \biggr)\,\mathrm{d}s \biggr\} \\ & \quad\lesssim \sup_{0\le k\le M-1} \biggl\{ \int _{kh}^{(k+1)h} \Bigl(1+ \mathbb{E} \Bigl(\sup _{-\tau \le r\le s} \bigl\vert Y(r) \bigr\vert ^{p(1+q)} \Bigr) \Bigr) \\ &\qquad{} \times \biggl(h^{p-1}+ \int _{U} \vert u \vert ^{pr}\lambda ( \mathrm{d}u) \biggr) \,\mathrm{d}s \biggr\} \\ &\quad\lesssim h^{p}+h \\ &\quad\lesssim h, \end{aligned} $$

where we have used (A2) with \(\sigma \equiv {\mathbf{0}}_{n\times m}\) and (3.9) in the third step, and (3.4) and (A5) in the last two steps, respectively. So (3.5) follows as required. □

Convergence results

Our second main result in this paper is presented as follows.

Theorem 3.2

Under (A1)(A5) with \(\sigma \equiv {\mathbf{0}}_{n\times m}\) therein, for any \(p\geq 2\) and \(\theta \in (0,1)\),

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert X(t)-Y(t) \bigr\vert ^{p} \Bigr)\lesssim h^{ \frac{1}{(1+\theta )^{[T/\tau ]}}}. $$
(3.10)

So the best convergence rate of the EM scheme (i.e., (3.3)) associated with (3.1) is slower than one-half.

Remark 3.2

By a close inspection of the proof of Theorem 3.2, conditions (A4) and (A5) can be replaced by the following: for any \(p>2\) there exist constants \(K_{p},K_{0}>0\) and \(q>1\) such that

$$ \begin{aligned} & \int _{U} \bigl\vert g(x,y,u) \bigr\vert ^{p} \lambda (\mathrm{d}u)\le K_{p} \bigl(1+ \vert x \vert ^{p}+ \vert y \vert ^{q} \bigr), \\ & \int _{U} \bigl\vert g(x,y,u)-g(\overline{x}, \overline{y},u) \bigr\vert ^{p}\lambda (\mathrm{d}u) \le K_{p} \bigl[ \vert x-\overline{x} \vert ^{p}+ \bigl(1+ \vert y \vert ^{q}+ \vert \overline{y} \vert ^{q} \bigr) \vert y- \overline{y} \vert ^{p} \bigr], \\ & \int _{U} \bigl\vert g(x,y,u) \bigr\vert ^{2} \lambda (\mathrm{d}u)\le K_{0} \bigl(1+ \vert x \vert ^{2}+ \vert y \vert ^{q} \bigr), \\ & \int _{U} \bigl\vert g(x,y,u)-g(\overline{x}, \overline{y},u) \bigr\vert ^{2}\lambda (\mathrm{d}u) \le K_{0} \bigl[ \vert x-\overline{x} \vert ^{2}+ \bigl(1+ \vert y \vert ^{q}+ \vert \overline{y} \vert ^{q} \bigr) \vert y- \overline{y} \vert ^{2} \bigr] \end{aligned} $$

for any \(x,y,\overline{x},\overline{y}\in \mathbb{R}^{n}\).

Next, we go back to finish the proof of Theorem 3.2.

Proof of Theorem 3.2

We follow the idea of the proof for Theorem 2.2 to complete the proof. Set

$$ \Gamma _{3}(t,u):= g \bigl(X(t),X(t-\tau ),u \bigr)-g \bigl( \overline{Y}(t), \overline{Y}(t-\tau ),u \bigr). $$

Applying Itô’s formula, as well as the Lagrange mean value theorem to \(V_{\kappa \varepsilon }(\cdot )\), defined by (2.11), gives

$$\begin{aligned} V_{\kappa \varepsilon } \bigl(\Lambda (t) \bigr) ={}& \int _{0}^{t} \bigl\langle (\nabla V_{\kappa \varepsilon }) \bigl(\Lambda (s) \bigr),\Gamma _{1}(s) \bigr\rangle \,\mathrm{d}s \\ &{} + \int _{0}^{t} \int _{U} \bigl\{ V_{\kappa \varepsilon } \bigl(\Lambda (s)+ \Gamma _{3}(s) \bigr)-V_{\kappa \varepsilon } \bigl(\Lambda (s) \bigr)- \bigl\langle (\nabla V_{ \kappa \varepsilon }) \bigl(\Lambda (s) \bigr),\Gamma _{3}(s) \bigr\rangle \bigr\} \lambda ( \,\mathrm{d}u)\,\mathrm{d}s \\ & {}+ \int _{0}^{t} \int _{U} \bigl\{ V_{\kappa \varepsilon } \bigl(\Lambda (s-)+ \Gamma _{3}(s-) \bigr)-V_{\kappa \varepsilon } \bigl(\Lambda (s-) \bigr) \bigr\} \widetilde{N}(\mathrm{d}u,\mathrm{d}s) \\ ={}&V_{\kappa \varepsilon } \bigl(\Lambda (0) \bigr)+ \int _{0}^{t} \bigl\langle (\nabla V_{ \kappa \varepsilon }) \bigl(\Lambda (s) \bigr),\Gamma _{1}(s) \bigr\rangle \,\mathrm{d}s \\ &{} + \int _{0}^{t} \int _{U} \biggl\{ \int _{0}^{1} \bigl\langle \nabla V_{ \kappa \varepsilon } \bigl(\Lambda (s)+r\Gamma _{3}(s) \bigr) - \nabla V_{\kappa \varepsilon } \bigl(\Lambda (s) \bigr),\Gamma _{3}(s) \bigr\rangle \,\mathrm{d}r \biggr\} \lambda (\mathrm{d}u)\,\mathrm{d}s \\ &{} + \int _{0}^{t} \int _{U} \biggl\{ \int _{0}^{1} \bigl\langle \nabla V_{ \kappa \varepsilon } \bigl(\Lambda (s-)+r\Gamma _{3}(s-) \bigr),\Gamma _{3}(s-) \bigr\rangle \,\mathrm{d}r \biggr\} \widetilde{N}(\mathrm{d}u, \mathrm{d}s) \\ =:{}& J_{1}(t)+J_{2}(t)+J_{3}(t), \end{aligned}$$

in which \(\Gamma _{1}\) is defined as in (2.14). By the BDG inequality (see, e.g., [14, Theorem 1]), we obtain from (2.12), (2.18) with \(\sigma \equiv {\mathbf{0}}_{n\times m}\) therein, (A4), and (A5) that

$$ \begin{aligned} \Upsilon (t):={}&\sum_{i=1}^{3} \mathbb{E} \Bigl(\sup_{0\leq s \leq t} \bigl\vert J_{i}(s) \bigr\vert ^{p} \Bigr)\\ \lesssim{}& \int _{0}^{t}\mathbb{E} \bigl\{ \bigl\vert \Lambda (s) \bigr\vert ^{p}+ \bigl\vert \Gamma (s) \bigr\vert ^{p} \bigr\} \,\mathrm{d}s+ \int _{-\tau }^{t-\tau }\mathbb{E} \bigl(V \bigl(X(s), \overline{Y}(s) \bigr)^{p} \bigl\vert X(s) -\overline{Y}(s) \bigr\vert ^{p} \bigr)\,\mathrm{d}s, \end{aligned} $$

where \(V(\cdot,\cdot )\) is introduced in (2.15). Observe from Hölder’s inequality that

$$ \begin{aligned} &\mathbb{E} \bigl(V \bigl(X(s),\overline{Y}(s) \bigr)^{p} \bigl\vert X(s)- \overline{Y}(s) \bigr\vert ^{p} \bigr) \\ &\quad\lesssim \bigl(\mathbb{E}V \bigl(X(s),\overline{Y}(s) \bigr)^{ \frac{p(1+\theta )}{\theta }} \bigr)^{\frac{\theta }{1+\theta }} \bigl(\mathbb{E} \bigl\vert X(s)- \overline{Y}(s) \bigr\vert ^{p(1+\theta )} \bigr)^{\frac{1}{1+\theta }} \\ &\quad\lesssim \bigl(\mathbb{E}V \bigl(X(s),\overline{Y}(s) \bigr)^{ \frac{p(1+\theta )}{\theta }} \bigr)^{\frac{\theta }{1+\theta }} \bigl(\mathbb{E} \bigl\vert Z(s) \bigr\vert ^{p(1+ \theta )}+\mathbb{E} \bigl\vert \Gamma (s) \bigr\vert ^{p(1+\theta )} \bigr)^{\frac{1}{1+\theta }} \\ &\quad\lesssim \bigl(\mathbb{E} \bigl(1+ \bigl\vert X(s) \bigr\vert ^{\frac{pq(1+\theta )}{\theta }}+ \bigl\vert \overline{Y}(s) \bigr\vert ^{\frac{pq(1+\theta )}{\theta }} \bigr) \bigr)^{ \frac{\theta }{1+\theta }} \bigl(\mathbb{E} \bigl\vert Z(s) \bigr\vert ^{p(1+\theta )} +\mathbb{E} \bigl\vert \Gamma (s) \bigr\vert ^{p(1+\theta )} \bigr)^{\frac{1}{1+\theta }} \\ &\quad\lesssim \bigl( \mathbb{E} \bigl\vert Z(s) \bigr\vert ^{p(1+\theta )} \bigr)^{\frac{1}{1+\theta }}+ \bigl( \mathbb{E} \bigl\vert \Gamma (s) \bigr\vert ^{p(1+\theta )} \bigr)^{\frac{1}{1+\theta }} \\ &\quad\lesssim h^{\frac{1}{1+\theta }}+ \bigl( \mathbb{E} \bigl\vert Z(s) \bigr\vert ^{p(1+\theta )} \bigr)^{ \frac{1}{1+\theta }},\quad \theta >0, \end{aligned} $$
(3.11)

in which we have used (3.4) in the penultimate display and (3.5) in the last display, respectively. So we arrive at

$$ \begin{aligned} \Upsilon (t) &\lesssim h^{\frac{1}{1+\theta }}+ \int _{0}^{t} \mathbb{E} \bigl\{ \bigl\vert \Lambda (s) \bigr\vert ^{p}+ \bigl\vert \Gamma (s) \bigr\vert ^{p} \bigr\} \,\mathrm{d}s+ \int _{-\tau }^{t- \tau } \bigl( \mathbb{E} \bigl\vert Z(s) \bigr\vert ^{p(1+\theta )} \bigr)^{\frac{1}{1+\theta }}\,\mathrm{d}s. \end{aligned} $$

This, together with (2.10) and (3.5), implies

$$ \begin{aligned} \mathbb{E} \Bigl(\sup_{0\leq s\leq t} \bigl\vert \Lambda (s) \bigr\vert ^{p} \Bigr)&\lesssim \varepsilon ^{p}+\mathbb{E} \Bigl(\sup_{0\leq s\leq t}V_{ \kappa \varepsilon } \bigl(\Lambda (s) \bigr) \Bigr) \\ &\lesssim \varepsilon ^{p}+ h^{\frac{1}{1+\theta }}+ \int _{0}^{t} \mathbb{E} \bigl\{ \bigl\vert \Lambda (s) \bigr\vert ^{p}+ \bigl\vert \Gamma (s) \bigr\vert ^{p} \bigr\} \,\mathrm{d}s+ \int _{-\tau }^{t- \tau } \bigl( \mathbb{E} \bigl\vert Z(s) \bigr\vert ^{p(1+\theta )} \bigr)^{\frac{1}{1+\theta }}\,\mathrm{d}s \\ &\lesssim h^{\frac{1}{1+\theta }}+ \int _{0}^{t}\mathbb{E} \bigl\vert \Lambda (s) \bigr\vert ^{p} \,\mathrm{d}s+ \int _{-\tau }^{t-\tau } \bigl( \mathbb{E} \bigl\vert Z(s) \bigr\vert ^{p(1+\theta )} \bigr)^{ \frac{1}{1+\theta }}\,\mathrm{d}s \end{aligned} $$

by taking \(\varepsilon =h^{\frac{1}{p(1+\theta )}}\) in the last display. Using Gronwall’s inequality, due to \(Z(\theta )=0\) for \(\theta \in [-\tau,0]\), one has

$$ \begin{aligned} \mathbb{E} \Bigl(\sup _{0\leq t\leq T} \bigl\vert \Lambda (t) \bigr\vert ^{p} \Bigr)\lesssim h^{\frac{1}{1+\theta }}+ \int _{0}^{(T-\tau )\vee 0} \bigl( \mathbb{E} \bigl\vert Z(t) \bigr\vert ^{p(1+\theta )} \bigr)^{\frac{1}{1+\theta }}\,\mathrm{d}t. \end{aligned} $$
(3.12)

Next, observe from (A1) and Hölder’s inequality that

$$ \begin{aligned} &\mathbb{E} \Bigl(\sup _{0\leq t\leq T} \bigl\vert Z(t) \bigr\vert ^{p} \Bigr)\\ &\quad\lesssim \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert \Lambda (t) \bigr\vert ^{p} \Bigr)+ \mathbb{E} \Bigl(\sup_{-\tau \leq t\leq T-\tau } \bigl\vert G \bigl(X(t) \bigr) -G \bigl( \overline{Y}(t) \bigr) \bigr\vert ^{p} \Bigr) \\ &\quad\lesssim \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert \Lambda (t) \bigr\vert ^{p} \Bigr)+ \mathbb{E} \Bigl(\sup_{-\tau \leq t\leq T-\tau } \bigl(V \bigl(X(t),\overline{Y}(t) \bigr)^{p} \bigl\vert X(t)- \overline{Y}(t) \bigr\vert ^{p} \bigr) \Bigr) \\ &\quad\lesssim \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert \Lambda (t) \bigr\vert ^{p} \Bigr) \\ & \qquad{}+ \Bigl\{ 1+\mathbb{E} \Bigl(\sup_{-\tau \leq t\leq T} \bigl\vert X(t) \bigr\vert ^{ \frac{pq(1+\theta )}{\theta }} \Bigr)+\mathbb{E} \Bigl(\sup_{-\tau \leq t\leq T} \bigl\vert Y(t) \bigr\vert ^{ \frac{pq(1+\theta )}{\theta }} \Bigr) \Bigr\} ^{\frac{\theta }{1+\theta }} \\ &\qquad{} \times \Bigl\{ \mathbb{E} \Bigl(\sup_{-\tau \leq t\leq T-\tau } \bigl\vert Z(t) \bigr\vert ^{p(1+ \theta )} \Bigr) +\mathbb{E} \Bigl(\sup_{-\tau \leq t\leq T} \bigl\vert \Gamma (t) \bigr\vert ^{p(1+ \theta )} \Bigr) \Bigr\} ^{\frac{1}{1+\theta }} \\ &\quad\lesssim h^{\frac{1}{1+\theta }}+ \Bigl(\mathbb{E} \Bigl(\sup_{0\leq t \leq (T-\tau )\vee 0} \bigl\vert Z(t) \bigr\vert ^{p(1+\theta )} \Bigr) \Bigr)^{ \frac{1}{1+\theta }}, \end{aligned} $$
(3.13)

where in the last step we have utilized (3.4) and (3.5). So we find that

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq \tau } \bigl\vert Z(t) \bigr\vert ^{p} \Bigr)\lesssim h^{ \frac{1}{1+\theta }}, $$

which, in addition to (3.13), further yields that

$$ \begin{aligned} \mathbb{E} \Bigl(\sup_{0\leq t\leq 2\tau } \bigl\vert Z(t) \bigr\vert ^{p} \Bigr)& \lesssim h^{\frac{1}{1+\theta }}+ \Bigl(\mathbb{E} \Bigl(\sup_{0\leq t \leq \tau } \bigl\vert Z(t) \bigr\vert ^{p(1+\theta )} \Bigr) \Bigr)^{\frac{1}{1+\theta }} \\ &\lesssim h^{\frac{1}{(1+\theta )^{2}}}+h^{\frac{1}{1+\theta }} \\ &\lesssim h^{\frac{1}{(1+\theta )^{2}}}. \end{aligned} $$

Thus, the desired assertion follows from an inductive argument. □
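To make the inductive argument explicit, here is a brief sketch. Suppose that, for some integer \(k\geq 1\) and every \(p\geq 2\),

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq k\tau } \bigl\vert Z(t) \bigr\vert ^{p} \Bigr)\lesssim h^{\frac{1}{(1+\theta )^{k}}}. $$

Applying (3.13) with \(T=(k+1)\tau \), together with the hypothesis for the exponent \(p(1+\theta )\) in place of p, yields

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq (k+1)\tau } \bigl\vert Z(t) \bigr\vert ^{p} \Bigr)\lesssim h^{\frac{1}{1+\theta }}+ \bigl(h^{\frac{1}{(1+\theta )^{k}}} \bigr)^{\frac{1}{1+\theta }}\lesssim h^{\frac{1}{(1+\theta )^{k+1}}}, $$

since \(h^{\frac{1}{1+\theta }}\leq h^{\frac{1}{(1+\theta )^{k+1}}}\) for \(h\in (0,1)\). After at most \([T/\tau ]\) such steps one arrives at (3.10).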

Main results

In this section, we investigate general SDDEs of neutral type by considering the following equation:

$$ \begin{aligned} \mathrm{d}\bigl\{ X(t)-G \bigl(X(t-\tau ) \bigr) \bigr\} ={}&b \bigl(X(t),X(t-\tau ) \bigr)\,\mathrm{d}t+ \sigma \bigl(X(t),X(t- \tau ) \bigr)\,\mathrm{d}B(t) \\ &{}+ \int _{U}g \bigl(X(t-),X \bigl((t-\tau )- \bigr),u \bigr) \widetilde{N}(\mathrm{d}u,\mathrm{d}t) \end{aligned} $$
(4.1)

with the initial data \(X(\theta )=\xi (\theta ), \theta \in [-\tau,0]\). Herein, G, b and σ are given as in (2.1), while g is given as in (3.1).

By generalizing the procedures of (2.2), (2.3), (3.2), and (3.3), the discrete-time EM scheme and the continuous-time EM approximation associated with (4.1) are defined as follows. Without loss of generality, we assume that \(h=T/M=\tau /m\in (0,1)\) for some integers \(M,m>1\). For every integer \(k = -m,\dots,0\), set \(Y_{h}^{(k)}:=\xi (kh)\), and for each integer \(k = 0, \dots, M-1\),

$$ \begin{aligned} Y_{h}^{(k+1)}-G \bigl(Y_{h}^{(k+1-m)} \bigr)={}& Y_{h}^{(k)}-G \bigl(Y_{h}^{(k-m)} \bigr)+b \bigl(Y_{h}^{(k)},Y_{h}^{(k-m)} \bigr)h \\ &{}+\sigma \bigl(Y_{h}^{(k)},Y_{h}^{(k-m)} \bigr)\Delta B^{(k)}_{h}+g \bigl(Y_{h}^{(k)},Y_{h}^{(k-m)},u \bigr) \Delta \widetilde{N}_{kh}, \end{aligned} $$
(4.2)

where \(\Delta B^{(k)}_{h}:= B((k+1)h)-B(kh)\) while \(\Delta \widetilde{N}_{kh}:= \widetilde{N}((k+1)h,U)-\widetilde{N}(kh,U)\), and for any \(t\in [kh,(k+1)h)\), set \(\overline{Y}(t): = Y_{h}^{(k)}\), and

$$ \begin{aligned} Y(t)={}& G \bigl(\overline{Y}(t-\tau ) \bigr)+\xi (0)-G \bigl(\xi (-\tau ) \bigr)+ \int _{0}^{t}b \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr)\,\mathrm{d}s \\ &{}+ \int _{0}^{t}\sigma \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr) \,\mathrm{d}B(s)+ \int _{0}^{t} \int _{U}g \bigl(\overline{Y}(s-),\overline{Y} \bigl((s- \tau )- \bigr),u \bigr)\widetilde{N}(\mathrm{d}u,\mathrm{d}s). \end{aligned} $$
(4.3)

Note that, in the rest of this paper, we denote by \(X(t)\) the strong solution to (4.1), while \(Y(t)\), defined in (4.3), is the continuous-time EM scheme associated with (4.1).
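As a concrete illustration of the scheme (4.2), the following is a minimal scalar implementation. The jump integral is specialised, as an assumption on our part, to a compensated Poisson process with intensity `lam` and a mark-free coefficient \(g(x,y)\), matching the examples in Sect. 5; the function name and parameters are ours, not the paper's.

```python
import numpy as np

def neutral_em(G, b, sigma, g, xi, tau, T, h, lam=1.0, rng=None):
    """One path of the discrete-time EM scheme (4.2), specialised to a
    scalar equation whose jump term is a compensated Poisson process with
    intensity `lam`. Returns the array Y^{(0)}, ..., Y^{(M)}."""
    rng = np.random.default_rng() if rng is None else rng
    m = int(round(tau / h))              # delay in steps: tau = m*h
    M = int(round(T / h))                # horizon in steps: T = M*h
    Y = np.empty(M + m + 1)              # array index j holds Y^{(j-m)}
    for k in range(-m, 1):               # initial segment Y^{(k)} = xi(kh)
        Y[k + m] = xi(k * h)
    for k in range(M):
        i = k + m
        dB = np.sqrt(h) * rng.standard_normal()
        dN = rng.poisson(lam * h) - lam * h   # compensated Poisson increment
        # scheme (4.2) with the neutral term G moved to the right-hand side
        Y[i + 1] = (G(Y[i + 1 - m]) + Y[i] - G(Y[i - m])
                    + b(Y[i], Y[i - m]) * h
                    + sigma(Y[i], Y[i - m]) * dB
                    + g(Y[i], Y[i - m]) * dN)
    return Y[m:]
```

For instance, the coefficients of Example 5.3 can be passed as lambdas; a nonzero initial segment is used below because \(\xi \equiv 0\) yields the trivial zero solution.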

pth moment bound

The following lemma is a generalization of Lemmas 2.1 and 3.1.

Lemma 4.1

Under (A1)(A5), for any \(p\geq 2\), there exists a constant \(C_{T}\) such that

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert X(t) \bigr\vert ^{p} \Bigr) \vee \mathbb{E} \Bigl(\sup _{0\leq t\leq T} \bigl\vert Y(t) \bigr\vert ^{p} \Bigr) \leq C_{T} $$
(4.4)

and

$$ \mathbb{E} \Bigl(\sup_{0\le t\le T} \bigl\vert \Gamma (t) \bigr\vert ^{p} \Bigr)\lesssim h^{ \frac{p}{2}\wedge 1}, $$
(4.5)

where \(\Gamma (t):=Y(t)-\overline{Y}(t)\).

Proof

Here, only the key steps are outlined and redundant calculations are omitted. To estimate a bound of \(Y(t)\), the continuous-time EM scheme associated with (4.1), a simple generalization of the two special cases is sufficient.

For (4.5), an application of BDG and Hölder’s inequalities yields

$$\begin{aligned} &\mathbb{E} \Bigl(\sup_{0\le t\le T} \bigl\vert \Gamma (t) \bigr\vert ^{p} \Bigr)\\ &\quad \lesssim \sup _{0\le k\le M-1} \biggl\{ \mathbb{E} \biggl(\sup_{kh\le t\le (k+1)h} \biggl\vert \int _{kh}^{t}b \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr)\,\mathrm{d}s \biggr\vert ^{p} \biggr) \\ &\qquad{} +\mathbb{E} \biggl(\sup_{kh\le t \leq (k+1)h} \biggl\vert \int _{kh}^{t} \sigma \bigl(\overline{Y}(s), \overline{Y}(s-\tau ) \bigr)\,\mathrm{d}B(s) \biggr\vert ^{p} \biggr) \\ &\qquad{} +\mathbb{E} \biggl(\sup_{kh\le t\le (k+1)h} \biggl\vert \int _{kh}^{t} \int _{U}g \bigl(\overline{Y}(s-),\overline{Y} \bigl((s-\tau )- \bigr),u \bigr) \widetilde{N}(\mathrm{d}s,\mathrm{d}u) \biggr\vert ^{p} \biggr) \biggr\} \\ &\quad \lesssim \sup_{0\le k\le M-1} \biggl\{ \int _{kh}^{(k+1)h} \biggl(h^{p-1} \mathbb{E} \bigl\vert b \bigl(\overline{Y}(s),\overline{Y}(s-\tau ) \bigr) \bigr\vert ^{p} +h^{\frac{p}{2}-1}\mathbb{E} \bigl\Vert \sigma \bigl( \overline{Y}(s),\overline{Y}(s-\tau ) \bigr) \bigr\Vert ^{p} \\ &\qquad{} + \int _{U}\mathbb{E} \bigl\vert g \bigl(\overline{Y}(s), \overline{Y}(s-\tau ),u \bigr) \bigr\vert ^{p} \lambda (\mathrm{d}u) \biggr)\,\mathrm{d}s \biggr\} \\ &\quad \lesssim \sup_{0\le k\le M-1} \biggl\{ \int _{kh}^{(k+1)h} \Bigl(1+ \mathbb{E} \Bigl(\sup _{-\tau \le r\le s} \bigl\vert Y(r) \bigr\vert ^{p(1+q)} \Bigr) \Bigr) \biggl(h^{\frac{p}{2}-1}+ \int _{U} \vert u \vert ^{pr}\lambda ( \mathrm{d}u) \biggr)\,\mathrm{d}s \biggr\} \\ &\quad\lesssim h^{\frac{p}{2}}+h \\ &\quad\lesssim h^{\frac{p}{2}\wedge 1}. \end{aligned}$$

 □

Convergence results

The convergence rate of the general SDDEs of neutral type is given as follows.

Theorem 4.2

Under (A1)(A5), for any \(p\geq 2\) and \(\theta \in (0,1)\),

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq T} \bigl\vert X(t)-Y(t) \bigr\vert ^{p} \Bigr)\lesssim h^{ \frac{1}{(1+\theta )^{[T/\tau ]}}}. $$
(4.6)

So the best convergence rate of the EM scheme (i.e., (4.3)) associated with (4.1) is slower than the classical rate of one-half.

Remark 4.1

The proof of Theorem 4.2 does not follow directly by combining the two special cases; it requires a more technical approach. Therefore, some key steps will be highlighted in the proof.

Proof of Theorem 4.2

Define

$$ Z(t):=X(t)-Y(t) \quad\text{and}\quad \Lambda (t):= Z(t)-G \bigl(X(t-\tau ) \bigr)+G \bigl( \overline{Y}(t-\tau ) \bigr). $$

Then, an application of the Yamada–Watanabe approach yields

$$\begin{aligned} &V_{\kappa \varepsilon } \bigl(\Lambda (t) \bigr) \\ &\quad= \int _{0}^{t} \bigl\langle (\nabla V_{\kappa \varepsilon }) \bigl(\Lambda (s) \bigr),\Gamma _{1}(s) \bigr\rangle \,\mathrm{d}s+\frac{1}{2} \int _{0}^{t}\text{trace} \bigl\{ \bigl( \Gamma _{2}(s) \bigr)^{*} \bigl( \nabla ^{2} V_{\kappa \varepsilon } \bigr) \bigl(\Lambda (s) \bigr)\Gamma _{2}(s) \bigr\} \,\mathrm{d}s \\ &\qquad{} + \int _{0}^{t} \bigl\langle \nabla (V_{\kappa \varepsilon }) \bigl( \Lambda (s) \bigr),\Gamma _{2}(s)\,\mathrm{d}B(s) \bigr\rangle \\ &\qquad{} + \int _{0}^{t} \int _{U} \bigl\{ V_{\kappa \varepsilon } \bigl(\Lambda (s)+ \Gamma _{3}(s) \bigr)-V_{\kappa \varepsilon } \bigl(\Lambda (s) \bigr)- \bigl\langle (\nabla V_{ \kappa \varepsilon }) \bigl(\Lambda (s) \bigr),\Gamma _{3}(s) \bigr\rangle \bigr\} \lambda ( \mathrm{d}u)\,\mathrm{d}s \\ &\qquad{} + \int _{0}^{t} \int _{U} \bigl\{ V_{\kappa \varepsilon } \bigl(\Lambda (s-)+ \Gamma _{3}(s-) \bigr)-V_{\kappa \varepsilon } \bigl(\Lambda (s-) \bigr) \bigr\} \widetilde{N}(\mathrm{d}u,\mathrm{d}s) \\ &\quad=V_{\kappa \varepsilon } \bigl(\Lambda (0) \bigr)+ \int _{0}^{t} \bigl\langle (\nabla V_{ \kappa \varepsilon }) \bigl(\Lambda (s) \bigr),\Gamma _{1}(s) \bigr\rangle \,\mathrm{d}s \\ &\qquad{} +\frac{1}{2} \int _{0}^{t}\text{trace} \bigl\{ \bigl( \Gamma _{2}(s) \bigr)^{*} \bigl( \nabla ^{2} V_{\kappa \varepsilon } \bigr) \bigl(\Lambda (s) \bigr)\Gamma _{2}(s) \bigr\} \,\mathrm{d}s \\ &\qquad{} + \int _{0}^{t} \bigl\langle \nabla (V_{\kappa \varepsilon }) \bigl( \Lambda (s) \bigr),\Gamma _{2}(s)\,\mathrm{d}B(s) \bigr\rangle \\ &\qquad{} + \int _{0}^{t} \int _{U} \biggl\{ \int _{0}^{1} \bigl\langle \nabla V_{ \kappa \varepsilon } \bigl(\Lambda (s)+r\Gamma _{3}(s) \bigr) - \nabla V_{\kappa \varepsilon } \bigl(\Lambda (s) \bigr),\Gamma _{3}(s) \bigr\rangle \,\mathrm{d}r \biggr\} \lambda (\mathrm{d}u)\,\mathrm{d}s \\ &\qquad{} + \int _{0}^{t} \int _{U} \biggl\{ \int _{0}^{1} \bigl\langle \nabla V_{ \kappa \varepsilon } 
\bigl(\Lambda (s-)+r\Gamma _{3}(s-) \bigr),\Gamma _{3}(s-) \bigr\rangle \,\mathrm{d}r \biggr\} \widetilde{N}(\mathrm{d}u, \mathrm{d}s), \end{aligned}$$

where

$$\begin{aligned} &\Gamma _{1}(t):=b \bigl(X(t),X(t-\tau ) \bigr)-b \bigl( \overline{Y}(t),\overline{Y}(t- \tau ) \bigr), \\ &\Gamma _{2}(t):=\sigma \bigl(X(t),X(t-\tau ) \bigr)-\sigma \bigl( \overline{Y}(t), \overline{Y}(t-\tau ) \bigr), \end{aligned}$$

and

$$ \Gamma _{3}(t,u):= g \bigl(X(t),X(t-\tau ),u \bigr)-g \bigl( \overline{Y}(t), \overline{Y}(t-\tau ),u \bigr). $$

Recalling (2.21) and (3.12), we have

$$ \begin{aligned} \mathbb{E} \Bigl(\sup _{0\le t\le T} \bigl\vert \Lambda (t) \bigr\vert ^{p} \Bigr) \lesssim {}&h^{p/2}+ \int _{0}^{(T-\tau )\vee 0} \bigl\{ \bigl(\mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{2p} \bigr) \bigr)^{1/2}+ \varepsilon ^{-p} \bigl(\mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \bigr\} \,\mathrm{d}t \\ &{}+h^{\frac{1}{1+\theta }}+ \int _{0}^{(T-\tau )\vee 0} \bigl( \mathbb{E} \bigl\vert Z(t) \bigr\vert ^{p(1+ \theta )} \bigr)^{\frac{1}{1+\theta }}\,\mathrm{d}t. \end{aligned} $$
(4.7)

Replicating the procedure in (2.23) and (3.13) yields

$$ \begin{aligned} \mathbb{E} \Bigl(\sup _{0\le t\le T} \bigl\vert Z(t) \bigr\vert ^{p} \Bigr) \lesssim{}& h^{p/2}+ \Bigl(\mathbb{E} \Bigl(\sup_{0\leq t\leq (T-\tau ) \vee 0} \bigl\vert Z(t) \bigr\vert ^{2p} \Bigr) \Bigr)^{1/2} \\ &{}+ \int _{0}^{(T-\tau )\vee 0} \bigl\{ \bigl(\mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{2p} \bigr) \bigr)^{1/2}+ \varepsilon ^{-p} \bigl(\mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \bigr\} \,\mathrm{d}t \\ &{}+h^{\frac{1}{1+\theta }}+ \Bigl(\mathbb{E} \Bigl(\sup_{0\leq t\leq (T- \tau )\vee 0} \bigl\vert Z(t) \bigr\vert ^{p(1+\theta )} \Bigr) \Bigr)^{\frac{1}{1+\theta }}, \end{aligned} $$
(4.8)

which implies that

$$ \mathbb{E} \Bigl(\sup_{0\leq t\leq \tau } \bigl\vert Z(t) \bigr\vert ^{p} \Bigr)\lesssim h^{ \frac{1}{1+\theta }} $$

and

$$ \begin{aligned} \mathbb{E} \Bigl(\sup_{0\leq t\leq 2\tau } \bigl\vert Z(t) \bigr\vert ^{p} \Bigr) \lesssim{}& h^{p/2}+ \Bigl(\mathbb{E} \Bigl(\sup_{0\leq t\leq \tau } \bigl\vert Z(t) \bigr\vert ^{2p} \Bigr) \Bigr)^{1/2} \\ &{}+ \int _{0}^{\tau } \bigl\{ \bigl(\mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{2p} \bigr) \bigr)^{1/2}+ \varepsilon ^{-p} \bigl( \mathbb{E} \bigl( \bigl\vert Z(t) \bigr\vert ^{4p} \bigr) \bigr)^{1/2} \bigr\} \,\mathrm{d}t \\ &{}+h^{\frac{1}{1+\theta }}+ \Bigl(\mathbb{E} \Bigl(\sup_{0\leq t\leq \tau } \bigl\vert Z(t) \bigr\vert ^{p(1+ \theta )} \Bigr) \Bigr)^{\frac{1}{1+\theta }} \\ \lesssim{} &h^{p/2}+h^{\frac{1}{(1+\theta )^{2}}}+h^{\frac{1}{1+\theta }} \\ \lesssim{}& h^{\frac{1}{(1+\theta )^{2}}}, \end{aligned} $$

where we take \(\varepsilon =h^{1/2}\) and use the fact that \(h^{\frac{1}{1+\theta }}\geq h^{p/2}\) for any \(p\geq 2, \theta >0\). Therefore, the desired assertion follows from an inductive argument. □

Examples

In this section, three numerical examples are discussed to demonstrate the convergence results established in the previous sections; the simulations show that the theoretical convergence rates agree well with the numerical results.

Example 5.1

Let \(B(t)\) be a scalar Brownian motion, and consider the following one-dimensional nonlinear SDDE of neutral type driven by Brownian motion:

$$ \mathrm{d}\bigl\{ X(t)-X^{2}(t-1) \bigr\} = \bigl\{ X(t)+0.5X^{3}(t-1) \bigr\} \,\mathrm{d}t+X^{2}(t-1) \,\mathrm{d}B(t),\quad t\ge 0, $$

with the initial data \(X = 0\), for \(t\in [-1,0]\).

In Fig. 1, the EM approximations with stepsizes \(h=1/256\), \(h=1/512\), and \(h=1/1024\) are plotted. The figure shows that the convergence rate is consistent with the result obtained in Sect. 2.

Figure 1

Brownian motion case
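The empirical order reported for the Brownian case can be reproduced in a few lines. The sketch below is ours, not the paper's: it uses a nonzero constant initial segment (0.1) instead of the example's zero initial data, since \(\xi \equiv 0\) makes the exact solution identically zero, and it measures the terminal strong error of coarse EM paths against a fine EM reference driven by the same Brownian increments.

```python
import numpy as np

def em_brownian(dB, h, m, xi):
    """EM path for Example 5.1 (G(y)=y^2, b(x,y)=x+0.5*y^3, sigma(x,y)=y^2)
    on the grid of step h, driven by the Brownian increments dB."""
    M = len(dB)
    Y = np.empty(M + m + 1)
    Y[:m + 1] = xi                       # constant initial segment (illustrative)
    for k in range(M):
        i = k + m
        Y[i + 1] = (Y[i + 1 - m] ** 2 + Y[i] - Y[i - m] ** 2
                    + (Y[i] + 0.5 * Y[i - m] ** 3) * h
                    + Y[i - m] ** 2 * dB[k])
    return Y[m:]

def strong_errors(n_paths=200, T=2.0, tau=1.0, xi=0.1, seed=0):
    """Mean terminal error of coarse EM paths against a fine EM reference."""
    h_ref = 1 / 1024
    M_ref = int(T / h_ref)
    errs = {}
    for h in (1 / 64, 1 / 256):
        r = int(h / h_ref)               # coarsening factor
        rng = np.random.default_rng(seed)    # same noise for every h
        acc = 0.0
        for _ in range(n_paths):
            dB = np.sqrt(h_ref) * rng.standard_normal(M_ref)
            ref = em_brownian(dB, h_ref, int(tau / h_ref), xi)
            dB_c = dB.reshape(-1, r).sum(axis=1)   # aggregated increments
            acc += abs(ref[-1] - em_brownian(dB_c, h, int(tau / h), xi)[-1])
        errs[h] = acc / n_paths
    return errs

errs = strong_errors()
# empirical order between the two stepsizes; expected to be near one-half here
rate = np.log(errs[1 / 64] / errs[1 / 256]) / np.log((1 / 64) / (1 / 256))
```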

Example 5.2

Let \(\widetilde{N}(t)\) be a pure jump process with intensity \(\lambda =1\). Consider a one-dimensional nonlinear SDDE of neutral type driven by pure jump process

$$ \mathrm{d}\bigl\{ X(t)-X^{2}(t-1) \bigr\} = \bigl\{ X(t)+0.5X^{3}(t-1) \bigr\} \,\mathrm{d}t+0.8X^{2}(t-1) \,\mathrm{d}\widetilde{N}(t),\quad t\ge 0, $$

with the initial data \(X = 0\), for \(t\in [-1,0]\).

In Fig. 2, the EM approximations with stepsizes \(h=1/256\), \(h=1/512\), and \(h=1/1024\) are plotted. The figure shows that the convergence rate obtained in Sect. 3 is much slower than its counterpart obtained in Sect. 2.

Figure 2

Pure jump case
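An analogous experiment can be run for the pure-jump example, sharing the Poisson jumps between the fine reference grid and the coarse grids; the resulting empirical order can then be compared with the Brownian case. As before, the nonzero constant initial segment is our illustrative assumption.

```python
import numpy as np

def em_jump(dNc, h, m, xi):
    """EM path for Example 5.2 (G(y)=y^2, b(x,y)=x+0.5*y^3, jump coefficient
    0.8*y^2), driven by compensated Poisson increments dNc on the step-h grid."""
    M = len(dNc)
    Y = np.empty(M + m + 1)
    Y[:m + 1] = xi                       # constant initial segment (illustrative)
    for k in range(M):
        i = k + m
        Y[i + 1] = (Y[i + 1 - m] ** 2 + Y[i] - Y[i - m] ** 2
                    + (Y[i] + 0.5 * Y[i - m] ** 3) * h
                    + 0.8 * Y[i - m] ** 2 * dNc[k])
    return Y[m:]

def jump_errors(n_paths=200, T=2.0, tau=1.0, xi=0.1, lam=1.0, seed=0):
    """Mean terminal error of coarse EM paths against a fine EM reference
    driven by the same Poisson jumps."""
    h_ref = 1 / 1024
    M_ref = int(T / h_ref)
    errs = {}
    for h in (1 / 64, 1 / 256):
        r = int(h / h_ref)
        rng = np.random.default_rng(seed)    # same jumps for every h
        acc = 0.0
        for _ in range(n_paths):
            dN = rng.poisson(lam * h_ref, M_ref)         # raw jump counts
            ref = em_jump(dN - lam * h_ref, h_ref, int(tau / h_ref), xi)
            dNc = dN.reshape(-1, r).sum(axis=1) - lam * h  # aggregated, compensated
            acc += abs(ref[-1] - em_jump(dNc, h, int(tau / h), xi)[-1])
        errs[h] = acc / n_paths
    return errs
```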

Example 5.3

Let \(B(t)\) be a scalar Brownian motion, and consider a one-dimensional nonlinear SDDE of neutral type driven by both Brownian motion and a pure jump process:

$$\begin{aligned} &\mathrm{d}\bigl\{ X(t)-X^{2}(t-1) \bigr\} \\ &\quad= \bigl\{ X(t)+0.5X^{3}(t-1) \bigr\} \,\mathrm{d}t+X^{2}(t-1) \,\mathrm{d}B(t)+0.8X^{2}(t-1)\, \mathrm{d}\widetilde{N}(t),\quad t\ge 0, \end{aligned}$$

with the initial data \(X = 0\), for \(t\in [-1,0]\) and intensity \(\lambda =1\).

In Fig. 3, the EM approximations with stepsizes \(h=1/256\), \(h=1/512\), and \(h=1/1024\) are plotted. The figure shows that the convergence rate is dominated by the jump process, which verifies our theoretical result obtained in Sect. 4.

Figure 3

Joint case

Conclusion

In this paper, the convergence rate of the EM scheme for SDDEs of neutral type is studied under a general polynomial growth condition. In the Brownian motion case, the convergence rate is consistent with the classical rate of one-half, while in the pure jump case it is slower than one-half. Consequently, for general SDDEs of neutral type, the convergence rate is dominated by the slower jump component.

Availability of data and materials

Not applicable.

References

  1. Appleby, J.A.D., Mao, X., Wu, H.: On the almost sure running maxima of solutions of affine stochastic functional differential equations. SIAM J. Math. Anal. 42(2), 646–678 (2010)

  2. Bao, J., Huang, X., Yuan, C.: Convergence rate of Euler–Maruyama scheme for SDEs with Hölder–Dini continuous drifts. J. Theor. Probab. 32(2), 848–871 (2019)

  3. Bao, J., Wang, F.-Y., Yuan, C.: Transportation cost inequalities for neutral functional stochastic equations. Z. Anal. Anwend. 32(4), 457–475 (2013)

  4. Bao, J., Yuan, C.: Convergence rate of EM scheme for SDDEs. Proc. Am. Math. Soc. 141, 3231–3243 (2013)

  5. Bao, J., Böttcher, B., Mao, X., Yuan, C.: Convergence rate of numerical solutions to SFDEs with jumps. J. Comput. Appl. Math. 236, 119–131 (2011)

  6. Bao, J., Yuan, C.: Large deviations for neutral functional SDEs with jumps. Stochastics 87, 48–70 (2015)

  7. Gyöngy, I., Rásonyi, M.: A note on Euler approximations for SDEs with Hölder continuous diffusion coefficients. Stoch. Process. Appl. 121, 2189–2200 (2011)

  8. Gyöngy, I., Sabanis, S.: A note on Euler approximations for stochastic differential equations with delay. Appl. Math. Optim. 68, 391–412 (2013)

  9. Jiang, F., Shen, Y., Wu, F.: A note on order of convergence of numerical method for neutral stochastic functional differential equations. Commun. Nonlinear Sci. Numer. Simul. 17, 1194–1200 (2012)

  10. Li, X., Cao, W.: On mean-square stability of two-step Maruyama methods for nonlinear neutral stochastic delay differential equations. Appl. Math. Comput. 261, 373–381 (2015)

  11. Mao, X.: Convergence rates of the truncated Euler–Maruyama method for stochastic differential equations. J. Comput. Appl. Math. 296, 362–375 (2016)

  12. Mao, X.: Stochastic Differential Equations and Applications, 2nd edn. Horwood, Chichester (2008)

  13. Mao, X., Shen, Y., Yuan, C.: Almost surely asymptotic stability of neutral stochastic differential delay equations with Markovian switching. Stoch. Process. Appl. 118, 1385–1406 (2008)

  14. Marinelli, C., Röckner, M.: On maximal inequalities for purely discontinuous martingales in infinite dimensions. In: Séminaire de Probabilités XLVI. Lecture Notes in Mathematics, vol. 2123, pp. 293–316 (2014)

  15. Milosevic, M.: Highly nonlinear neutral stochastic differential equations with time-dependent delay and the Euler–Maruyama method. Math. Comput. Model. 54, 2235–2251 (2011)

  16. Ngo, H.L., Taguchi, D.: On the Euler–Maruyama scheme for SDEs with bounded variation and Hölder continuous coefficients. Math. Comput. Simul. 161, 102–112 (2019)

  17. Pamen, O.M., Taguchi, D.: Strong rate of convergence for the Euler–Maruyama approximation of SDEs with Hölder continuous drift coefficient. Stoch. Process. Appl. 127(8), 2542–2559 (2017)

  18. Obradović, M., Milošević, M.: Stability of a class of neutral stochastic differential equations with unbounded delay and Markovian switching and the Euler–Maruyama method. J. Comput. Appl. Math. 309, 244–266 (2017)

  19. Sabanis, S.: A note on tamed Euler approximations. Electron. Commun. Probab. 18, 1–10 (2013)

  20. Situ, R.: Theory of Stochastic Differential Equations with Jumps and Applications. Mathematical and Analytical Techniques with Applications to Engineering. Springer, New York (2005)

  21. Wu, F., Mao, X.: Numerical solutions of neutral stochastic functional differential equations. SIAM J. Numer. Anal. 46, 1821–1841 (2008)

  22. Yan, Z., Xiao, A., Tang, X.: Strong convergence of the split-step theta method for neutral stochastic delay differential equations. Appl. Numer. Math. 120, 215–232 (2017)

  23. Yu, Z.: Almost sure and mean square exponential stability of numerical solutions for neutral stochastic functional differential equations. Int. J. Comput. Math. 92, 132–150 (2015)

  24. Yuan, C., Mao, X.: A note on the rate of convergence of the Euler–Maruyama method for stochastic differential equations. Stoch. Anal. Appl. 26, 325–333 (2008)

  25. Yuan, C., Glover, W.: Approximate solutions of stochastic differential delay equations with Markovian switching. J. Comput. Appl. Math. 194, 207–226 (2006)

  26. Zhou, S.: Exponential stability of numerical solution to neutral stochastic functional differential equation. Appl. Math. Comput. 266, 441–461 (2015)

  27. Zhou, S., Fang, Z.: Numerical approximation of nonlinear neutral stochastic functional differential equations. J. Appl. Math. Comput. 41, 427–445 (2013)

  28. Zhou, S., Jin, H.: Numerical solution to highly nonlinear neutral-type stochastic differential equation. Appl. Numer. Math. 140, 48–75 (2019)

  29. Zong, X., Wu, F., Huang, C.: Exponential mean square stability of the theta approximations for neutral stochastic differential delay equations. J. Comput. Appl. Math. 286, 172–185 (2015)

  30. Zong, X., Wu, F.: Exponential stability of the exact and numerical solutions for neutral stochastic delay differential equations. Appl. Math. Model. 40, 19–30 (2016)

  31. Zhang, W., Song, M.H., Liu, M.Z.: Strong convergence of the partially truncated Euler–Maruyama method for a class of stochastic differential delay equations. J. Comput. Appl. Math. 335, 114–128 (2018)


Acknowledgements

This work was supported by the Zhejiang Province’s First-Class Discipline “Applied Economics” Platform. The author also thanks Ms Ruoyu Zhang and Mr Yi Yang for their advice on the numerical simulation.

Funding

Not applicable.

Author information


Contributions

The author read and approved the final manuscript.

Corresponding author

Correspondence to Yanting Ji.

Ethics declarations

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ji, Y. Convergence rate of Euler–Maruyama scheme for SDDEs of neutral type. J Inequal Appl 2021, 5 (2021). https://doi.org/10.1186/s13660-020-02533-3


MSC

  • 65C30
  • 60H10

Keywords

  • Stochastic differential delay equation of neutral type
  • Polynomial condition
  • Euler scheme
  • Convergence rate
  • Jump processes