# Exponential mean-square stability of numerical solutions for stochastic delay integro-differential equations with Poisson jump

## Abstract

In this paper, we investigate the exponential mean-square stability of the solution of n-dimensional stochastic delay integro-differential equations (SDIDEs) with Poisson jump, as well as of the split-step θ-Milstein (SSTM) scheme applied to the proposed model. First, by virtue of a Lyapunov function and the continuous semi-martingale convergence theorem, we prove that the considered model has the property of exponential mean-square stability. Moreover, it is shown that the SSTM scheme inherits the exponential mean-square stability, by means of the delayed difference inequality established in this paper. Eventually, three numerical examples are provided to show the effectiveness of the theoretical results.

## Introduction

Stochastic delay differential equations (SDDEs) and stochastic delay integro-differential equations (SDIDEs) are special classes of stochastic differential equations (SDEs), which have been applied in a variety of sciences such as mathematical modeling [1], economics [2], infectious diseases [3], and population dynamics [4]. With the development of science and technology, it has been found that Markov chains and jump-diffusion systems are more suitable for describing sudden disturbances in many physical, financial and dynamical systems, such as sudden fluctuations in the financial markets (see [5–8] and [9]). Actually, the stochastic integral with respect to the Wiener process and the one with respect to the Poisson random measure differ greatly. Nearly all sample paths of the Wiener process are continuous, whereas the Poisson random measure $${N(dt, d\nu )}$$ is a jump process whose sample paths are only right-continuous with left limits. Hence, it is all the more significant to consider SDIDEs with Poisson jump.

Since most of these equations cannot be solved explicitly, the stability theory of numerical solutions is one of the central problems in numerical analysis. The stability of numerical schemes has therefore become one of the main tools for examining the stability of solutions of these equations (see [10–15] and [16]). For SDIDEs, most of the related existing literature has focused on linear models. Ding et al. [17] studied the stability of the semi-implicit Euler method for linear SDIDEs. Rathinasamy and Balachandran [18] studied the mean-square stability of the Milstein method for linear SDIDEs. Liu et al. [19] proposed the split-step theta method for SDIDEs based on the Lagrange interpolation technique and investigated the exponential mean-square stability of the proposed method. Meanwhile, Li and Gan [20] investigated the exponential mean-square stability of the theta method for nonlinear SDIDEs by a technique based on the Barbalat lemma. Moreover, the convergence and mean-square stability analysis of the Euler, Milstein, as well as higher-order stochastic Runge–Kutta methods has been carried out for stochastic ordinary differential equations (SODEs) (see [21, 22] and [23]).

For SDDEs with jumps, the existing literature concerns mainly the stability analysis of numerical schemes. For example, Mo et al. [24] discussed the exponential mean-square stability of the θ-method for neutral stochastic delay differential equations with jumps. Tan and Wang [25] investigated the mean-square stability of the explicit Euler method for linear SDDEs with jumps. Li and Gan [26] discussed the almost sure exponential stability of numerical solutions for SDDEs with jumps. Zhang et al. [27] derived some criteria on pth moment stability and almost sure stability with general decay rates of stochastic differential delay equations with Poisson jumps and Markovian switching. Li and Zhu [28] investigated the pth moment exponential stability and almost sure exponential stability of stochastic delay differential equations with Poisson jump. Zhao and Liu [29] modified the split-step backward Euler method for nonlinear stochastic delay differential equations with jumps, while Jiang et al. [30] considered the stability of the split-step backward Euler method for linear SDIDEs with Markovian switching.

The Lyapunov method has been applied by many authors to deal with stochastic properties. For example, Zhu [31] extended the well-known Razumikhin-type theorem to a class of stochastic functional differential equations with Lévy noise and Markov switching, and in [32] the pth moment exponential stability of stochastic delay differential equations with Lévy noise was discussed. Deng et al. [33] investigated the truncated EM method for stochastic differential equations with Poisson jumps. In [34], Ren and Tian investigated the convergence and stability region properties of the θ-Milstein method for stochastic differential equations with Poisson jump.

Nevertheless, to the best of our knowledge, the SSTM scheme has never been applied to n-dimensional SDIDEs with Poisson jump. In the present paper, in order to fill this gap, we introduce the SSTM scheme for n-dimensional SDIDEs with Poisson jump via a numerical integration technique and perform a stability analysis of the proposed scheme.

The remainder of the paper is organized as follows. Section 2 presents some necessary notations and preliminary results. Section 3 investigates the exponential mean-square stability of the continuous model by defining the appropriate Lyapunov function. Section 4 introduces the SSTM scheme for SDIDEs with Poisson jump and establishes a delayed difference inequality to discuss its exponential mean-square stability. Section 5 performs the theoretical analysis about the mean-square stability of the SSTM scheme. Finally, three numerical experiments are reported to illustrate the stability results of the proposed scheme in Sect. 6.

## Preliminary results

Throughout this paper, unless otherwise specified, we use $$|\cdot |$$ to denote the Euclidean norm in $$\mathbb{R}^{n}$$, and $${ \langle x, y \rangle =x^{T}y}$$ for all $${x, y \in \mathbb{R}^{n}}$$. $${a \vee b}$$ denotes the maximum of a and b, while $${a \wedge b}$$ denotes the minimum. The analysis is carried out on a complete filtered probability space $$(\varOmega , \mathcal{F}, \mathcal{F}_{t}, \mathbf{P})$$ with a filtration $$\{\mathcal{F}_{t}\}_{t \geq 0}$$ satisfying the usual conditions, that is, it is right-continuous and increasing while $${\mathcal{F}_{0}}$$ contains all P-null sets. Let $${W(t)= (W_{1}(t), W_{2}(t), \ldots , W_{d}(t) )^{T}}$$ be a d-dimensional Brownian motion defined on this probability space. For a given delay $${\tau >0}$$, denote by $${L^{2}_{\mathcal{F}_{0}}([-\tau ,0];\mathbb{R}^{n})}$$ the family of all $${\mathcal{F}_{0}}$$-measurable, $${C([-\tau ,0];\mathbb{R}^{n})}$$-valued random variables $${\xi (t)}$$, $${t\in [-\tau , 0]}$$, satisfying

$$\mathbb{E} \Bigl(\sup_{-\tau \leq t \leq 0} \bigl\vert \xi ^{T}(t)\xi (t) \bigr\vert \Bigr)< + \infty ,$$
(1)

and $${\mathcal{B}(\mathbb{R}^{n})}$$ denotes the Borel σ-algebra in $${\mathbb{R}^{n}}$$. Let $${\overline{p}=\{\overline{p}(t), t\geq 0\}}$$ be a stationary $${\mathcal{F}_{t}}$$-adapted and $${\mathbb{R}^{n}}$$-valued Poisson point process. For $${A\in \mathcal{B}(\mathbb{R}^{n}-\{0\})}$$, we define the Poisson counting measure N associated with $${\overline{p}}$$ by

$$N \bigl((0,t]\times A \bigr):= \# \bigl\{ 0< s\leq t, \overline{p}(s)\in A \bigr\} =\sum_{t_{0}< s\leq t}I_{A} \bigl( \overline{p}(s) \bigr),$$
(2)

where # denotes the cardinality of the set $${\{\cdot \}}$$. For simplicity, we denote $$N(t, A):= N ( (0, t]\times A )$$. It is well known that there exists a σ-finite Lévy measure π such that

\begin{aligned} &P \bigl(N(t, A)=n \bigr)=\frac{\exp (-t\pi (A))(\pi (A)t)^{n}}{n!}, \end{aligned}
(3)
\begin{aligned} &\mathbb{E} \bigl(N(t, A) \bigr)=\pi (A)t. \end{aligned}
(4)
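For intuition, identity (4) can be checked by direct simulation. The following Python sketch (with a hypothetical intensity value `pi_A` standing in for $${\pi (A)}$$; all numbers here are illustrative assumptions) builds the counting process from exponential inter-arrival times and compares the sample mean of $${N(t, A)}$$ with $${\pi (A)t}$$:

```python
import numpy as np

rng = np.random.default_rng(42)

def poisson_count(t, pi_A):
    """Number of points of a Poisson process with intensity pi_A in (0, t]."""
    count, s = 0, 0.0
    while True:
        s += rng.exponential(1.0 / pi_A)  # inter-arrival times ~ Exp(pi_A)
        if s > t:
            return count
        count += 1

t, pi_A, n_paths = 5.0, 2.0, 20000
samples = np.array([poisson_count(t, pi_A) for _ in range(n_paths)])
print(samples.mean())  # should be close to pi_A * t = 10, cf. Eq. (4)
```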

Let $${N(t, \nu )}$$ be an $${\mathcal{F}_{t}}$$-adapted Poisson random measure on $${[0,+\infty )\times \mathbb{R}^{n}}$$ with a σ-finite intensity measure $${\pi (d\nu )}$$; then the compensated martingale measure $${\tilde{N}(t, A)}$$ satisfies

$$N(t, A)=\tilde{N}(t, A)+\hat{N}(t, A),\quad t>0.$$
(5)

Here $${\tilde{N}(t, A)}$$ is called the compensated Poisson random measure and $${\hat{N}(t, A)=\pi (A)t}$$ is called the compensator (see [35] and [36]). We also assume that $${N_{\lambda }(t, \nu )}$$ is a Poisson random measure (independent of the Brownian motion $${W(t)}$$) with jump intensity λ. Let $$f:\mathbb{R}_{+}\times \mathbb{R}^{n}\times \mathbb{R}^{n} \times \mathbb{R}^{n}\mapsto \mathbb{R}^{n}$$, $${g:\mathbb{R}_{+}\times \mathbb{R}^{n}\times \mathbb{R}^{n} \times \mathbb{R}^{n}\mapsto \mathbb{R}^{n\times d}}$$, $${h:\mathbb{R}_{+}\times \mathbb{R}^{n}\times \mathbb{R}^{n} \times \mathbb{R}^{n}\times Z\mapsto \mathbb{R}^{n}}$$ and $${K:\mathbb{R}_{+}\times \mathbb{R}^{n}\mapsto \mathbb{R}^{n}}$$ be locally Lipschitz continuous with $${f (t, 0, 0, 0)=0}$$, $${g(t, 0, 0, 0)=0}$$ and $${K(t,0)=0}$$. In this paper, for $${Z\in \mathcal{B} (\mathbb{R}^{n}-\{0\} )}$$ we consider n-dimensional SDIDEs with Poisson jump of the form

\begin{aligned} dy(t)={}&f \biggl(t, y(t), y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds \biggr)\,dt \\ &{}+g \biggl(t,y(t),y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds \biggr) \,dW(t) \\ &{}+ \int _{Z}h \biggl(t,y(t),y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds, \nu \biggr)N_{\lambda }(dt, d\nu ),\quad t>0, \end{aligned}
(6)

with initial data

$$y(t)=\xi (t), \quad -\tau \leq t\leq 0,$$
(7)

where ξ is a $${C ([-\tau ,0]; \mathbb{R}^{n} )}$$-valued random variable and the delay $${\tau >0}$$ is a constant. An important contribution of this paper is to avoid the use of non-anticipative stochastic calculus for the diffusion function (see [37] and [38]):

\begin{aligned} g \biggl(t, y(t), y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds \biggr)={}& g_{1} \biggl(t, y(t), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds \biggr) \\ &{}+g_{2} \biggl(t,y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds \biggr). \end{aligned}
(8)

### Lemma 2.1

(Continuous semi-martingale convergence theorem, see [39])

Let $${A(t)}$$ and $${U(t)}$$ be two $${\mathcal{F}_{t}}$$-adapted increasing processes on $${t\geq 0}$$ with $${A(0)=U(0)=0}$$ a.s. Let $${M(t)}$$ be a real-valued local martingale with $${M(0)=0}$$ a.s. Let ζ be a nonnegative $${\mathcal{F}_{0}}$$-measurable random variable such that $${\mathbb{E}(\zeta )<\infty }$$. Assume that $${y(t)}$$ is nonnegative and define

$$y(t)=\zeta +A(t)-U(t)+M(t),\quad t\geq 0.$$
(9)

If $${\lim_{t\rightarrow \infty }A(t) <\infty }$$ a.s., then

$$\lim_{t\rightarrow \infty }y(t)< \infty \quad \textit{and}\quad \lim_{t \rightarrow \infty }U(t)< \infty \quad \textit{a.s.},$$
(10)

that is, both $${y(t)}$$ and $${U(t)}$$ converge to finite random variables.

### Assumption 2.1

The coefficient functions f, g and h satisfy the local Lipschitz condition: for each integer $${j\geq 1}$$, there exists a positive constant $${C_{j}}$$ such that

\begin{aligned} & \bigl\vert f(t, y_{2}, \overline{y}_{2}, \hat{y}_{2})-f(t, y_{1}, \overline{y}_{1}, \hat{y}_{1}) \bigr\vert ^{2} \vee \bigl\vert g(t,y_{2},\overline{y}_{2}, \hat{y}_{2})-g(t, y_{1}, \overline{y}_{1}, \hat{y}_{1}) \bigr\vert ^{2} \\ &\quad \leq C_{j} \bigl( \vert y_{2}-y_{1} \vert ^{2} + \vert \overline{y}_{2}- \overline{y}_{1} \vert ^{2} + \vert \hat{y}_{2}-\hat{y}_{1} \vert ^{2} \bigr), \end{aligned}
(11)
\begin{aligned} &\int _{Z} \bigl\vert h(t, y_{2}, \overline{y}_{2}, \hat{y}_{2}, \nu ) -h(t, y_{1}, \overline{y}_{1}, \hat{y}_{1}, \nu ) \bigr\vert ^{2}\pi (d\nu ) \\ &\quad \leq C_{j} \bigl( \vert y_{2}-y_{1} \vert ^{2} + \vert \overline{y}_{2}- \overline{y}_{1} \vert ^{2} + \vert \hat{y}_{2}-\hat{y}_{1} \vert ^{2} \bigr), \end{aligned}
(12)

for all $$(t, y_{i}, \overline{y}_{i}, \hat{y}_{i}) \in \mathbb{R}_{+}\times \mathbb{R}^{n}\times \mathbb{R}^{n}\times \mathbb{R}^{n}$$ with $$|y_{i}|\vee |\overline{y}_{i}|\vee |\hat{y}_{i}|\leq j$$ ($$i=1, 2$$).

Obviously, it follows from Assumption 2.1 that there exists a unique maximal local solution to SDIDEs with jump (6).

### Assumption 2.2

There exist a symmetric, positive definite $${n\times n}$$ matrix Q and nonnegative constants L, $${\overline{k}}$$, $${\alpha _{i}}$$, $${\beta _{i}}$$, $${\eta _{i}}$$, $${\tilde{\eta }_{i}}$$, $${\sigma _{i}}$$, $${\tilde{\sigma }_{i}}$$ and $${\gamma _{i}}$$, $${i=1,2,3}$$, such that

\begin{aligned}& y^{T}Qf(t, y, 0, 0) \leq -\alpha _{1} y^{T}Qy, \end{aligned}
(13)
\begin{aligned}& \bigl\vert f(t, y, \overline{y}, \hat{y})- f(t, y, 0, 0) \bigr\vert \leq \alpha _{2} \vert \overline{y} \vert +\alpha _{3} \vert \hat{y} \vert , \end{aligned}
(14)
\begin{aligned}& f^{T}(t, y, \overline{y}, \hat{y})Qf(t, y, \overline{y}, \hat{y}) \leq L \bigl(y^{T}Qy+\overline{y}^{T}Q \overline{y}+\hat{y}^{T}Q\hat{y} \bigr), \end{aligned}
(15)
\begin{aligned}& g^{T}(t, y, \overline{y}, \hat{y})Qg(t, y, \overline{y}, \hat{y}) \leq \beta _{1}y^{T}Qy+\beta _{2}\overline{y}^{T}Q\overline{y} + \beta _{3}\hat{y}^{T}Q\hat{y}, \end{aligned}
(16)
\begin{aligned}& L^{1}g^{T}(t, y, \overline{y}, \hat{y})QL^{1}g(t, y, \overline{y}, \hat{y}) \leq \eta _{1}y^{T}Qy+\eta _{2}\overline{y}^{T}Q \overline{y} + \eta _{3}\hat{y}^{T}Q\hat{y}, \end{aligned}
(17)
\begin{aligned}& L^{-1}g^{T}(t, y, \overline{y}, \hat{y})QL^{-1}g(t, y, \overline{y}, \hat{y}) \leq \tilde{\eta }_{1}y^{T}Qy+\tilde{\eta }_{2} \overline{y}^{T}Q \overline{y} +\tilde{\eta }_{3} \hat{y}^{T}Q\hat{y}, \end{aligned}
(18)
\begin{aligned}& L^{1} \biggl( \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )\pi (d \nu ) \biggr)Q L^{1} \biggl( \int _{Z}h(t, y, \overline{y}, \hat{y}, \nu ) \pi (d\nu ) \biggr) \\& \quad \leq \sigma _{1}y^{T}Qy+\sigma _{2} \overline{y}^{T}Q \overline{y} +\sigma _{3} \hat{y}^{T}Q\hat{y}, \end{aligned}
(19)
\begin{aligned}& L^{-1} \biggl( \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )\pi (d \nu ) \biggr)Q L^{-1} \biggl( \int _{Z}h(t, y, \overline{y}, \hat{y}, \nu ) \pi (d\nu ) \biggr) \\& \quad \leq \tilde{\sigma }_{1}y^{T}Qy+\tilde{\sigma }_{2}\overline{y}^{T}Q \overline{y} +\tilde{\sigma }_{3}\hat{y}^{T}Q\hat{y}, \end{aligned}
(20)
\begin{aligned}& \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )Qh(t, y, \overline{y}, \hat{y}, \nu )\pi (d\nu ) \leq \gamma _{1}y^{T}Qy+ \gamma _{2}\overline{y}^{T}Q \overline{y} +\gamma _{3}\hat{y}^{T}Q \hat{y}, \end{aligned}
(21)
\begin{aligned}& \bigl\vert K(t, y) \bigr\vert \leq \overline{k} \vert y \vert , \end{aligned}
(22)

for all $${(y, \overline{y}, \hat{y}) \in \mathbb{R}^{n}\times \mathbb{R}^{n} \times \mathbb{R}^{n}}$$ and $${t>0}$$ with the property:

$$\alpha _{1}>\alpha _{2}+\alpha _{3}\overline{k}\tau +\frac{1}{2} \bigl(\beta _{1}+\beta _{2}+\beta _{3} \overline{k}^{2}\tau ^{2} \bigr) + \frac{\lambda }{2} \bigl(\gamma _{1}+\gamma _{2} +\gamma _{3} \overline{k}^{2}\tau ^{2} \bigr),$$
(23)

where λ is the intensity of the Poisson process $${N_{\lambda }(t)}$$, and

$$L^{j}V (t_{n},x_{n},x_{n-m}, \overline{x}_{n} )= \textstyle\begin{cases} g (t_{n},x_{n},x_{n-m},\overline{x}_{n} )V'_{x} (t_{n},x_{n},x_{n-m}, \overline{x}_{n} ), & j=1, \\ V (t_{n},x_{n}+h(t_{n},x_{n},x_{n-m},\overline{x}_{n}),x_{n-m}, \overline{x}_{n} ) \\ \quad {}-V (t_{n},x_{n},x_{n-m},\overline{x}_{n} ),& j=-1. \end{cases}$$
(24)

## Exponential stability of the global solution of SFDEs with jump

In this section, we discuss the exponential stability of the global solution of stochastic functional differential equations (SFDEs) with Poisson jump. Liu et al. [19] discussed the exponential stability of the solution of general SFDEs. Let us consider the following SFDEs with jump [40]:

$$dy(t)=F(t,y_{t})\,dt+G(t,y_{t})\,dW(t)+ \int _{Z}H(t,y_{t},\nu )N_{\lambda }(dt,d \nu ),\quad t>0,$$
(25)

with initial data

$$y(t)=\xi (t)\in L^{p}_{\mathcal{F}_{0}}\bigl([-\tau , 0]; \mathbb{R}^{n}\bigr),\quad p>0,$$
(26)

where $${y(t)\in \mathbb{R}^{n}}$$, and the segment $${y_{t}}$$ is defined as follows:

$$y_{t}= \bigl\{ y(t+\theta ): -\tau \leq \theta \leq 0 \bigr\} ,$$
(27)

which is regarded as a $${C([-\tau , 0]; \mathbb{R}^{n})}$$-valued stochastic process. Besides, $$F:\mathbb{R}_{+}\times C([-\tau , 0]; \mathbb{R}^{n})\rightarrow \mathbb{R}^{n}$$, $${G:\mathbb{R}_{+}\times C([-\tau , 0]; \mathbb{R}^{n})\rightarrow \mathbb{R}^{n\times d}}$$ and $$H:\mathbb{R}_{+}\times C([-\tau , 0]; \mathbb{R}^{n})\times Z \rightarrow \mathbb{R}^{n}$$ are locally Lipschitz continuous functionals. We require that $${F(t, 0)=0 \in \mathbb{R}^{n}}$$, $${G(t, 0)=0 \in \mathbb{R}^{n\times d}}$$ and $${H(t, 0)=0 \in \mathbb{R}^{n}}$$, implying that the SFDEs with jump (25) admit the trivial (null) solution $${y(t)=0}$$. Furthermore, we assume that $$V(t, y(t))\in C^{1,2}(\mathbb{R}_{+}, \mathbb{R}_{+}\times \mathbb{R}^{n})$$ is a Lyapunov function.

Define the differential operators $$\mathcal{L}V:\mathbb{R}_{+}\times \mathbb{R}^{n}\rightarrow \mathbb{R}$$ and $${\mathcal{H}V:\mathbb{R}_{+}\times \mathbb{R}^{n}\rightarrow \mathbb{R}}$$ associated with SFDEs with jump (25) as follows:

\begin{aligned}& \begin{aligned}[b] \mathcal{L}V \bigl(t, y(t), \varphi \bigr)={}&V_{t} \bigl(t, y(t)\bigr)+V_{y}\bigl(t, y(t)\bigr)F(t, \varphi )+ \frac{1}{2}\operatorname{trace} \bigl[G^{T}(t, \varphi )V_{yy}\bigl(t, y(t)\bigr)G(t, \varphi ) \bigr] \\ &{}+ \int _{Z} \bigl[V \bigl(t, y(t)+H(t, \varphi , \nu ) \bigr)-V\bigl(t, y(t)\bigr) \bigr]\pi (d\nu ), \end{aligned} \end{aligned}
(28)
\begin{aligned}& \mathcal{H}V \bigl(t, y(t), \varphi \bigr)=V_{y} \bigl(t, y(t)\bigr)G(t, \varphi )+ \int _{Z} \bigl[V \bigl(t, y(t)+H(t, \varphi , \nu ) \bigr)-V\bigl(t, y(t)\bigr) \bigr] \tilde{N}_{\lambda }(d\nu ), \end{aligned}
(29)

where

\begin{aligned} &V_{t}\bigl(t, y(t)\bigr)=\frac{\partial V(t, y(t))}{\partial t},\qquad V_{y} \bigl(t, y(t)\bigr)= \biggl(\frac{\partial V(t, y(t))}{\partial y_{1}}, \ldots , \frac{\partial V(t, y(t))}{\partial y_{n}} \biggr), \\ &V_{yy}\bigl(t, y(t)\bigr)= \biggl( \frac{\partial ^{2}V(t, y(t))}{\partial y_{i}\partial y_{j}} \biggr)_{n \times n}. \end{aligned}

Then we have the Itô formula as follows (see [40]):

$$dV \bigl(t, y(t) \bigr)=\mathcal{L}V \bigl(t, y(t), y_{t} \bigr)\,dt + \mathcal{H}V \bigl(t, y(t), y_{t} \bigr)\,dW(t).$$
(30)

The following lemma ensures that the global solution of SFDEs with jump (25) is exponentially mean-square stable.

### Lemma 3.1

Assume that there exists a Lyapunov function $${V(t, y(t))\in C^{1,2}(\mathbb{R}_{+}, \mathbb{R}_{+}\times \mathbb{R}^{n})}$$ such that

$$c_{1}\lambda _{\min }^{p}(Q) \bigl\vert y(t) \bigr\vert ^{p}\leq V\bigl(t, y(t)\bigr) \leq c_{2}\psi \bigl( \bigl\vert y(t) \bigr\vert \bigr) \quad \textit{and}\quad \psi \bigl( \bigl\vert y(t) \bigr\vert \bigr)\leq c_{3}\lambda _{\max }^{p}(Q) \bigl\vert y(t) \bigr\vert ^{p},$$
(31)

where $${0< c_{1}\leq c_{2}}$$, $${c_{3}\geq 0}$$, and $${\lambda _{\min }(Q)}$$ and $${\lambda _{\max }(Q)}$$ denote the smallest and largest eigenvalues of Q, respectively. For $${t>0}$$, the Lyapunov function $${V(t, y(t))}$$ satisfies

\begin{aligned} \mathcal{L}V \bigl(t, \varphi (0), \varphi \bigr) \leq{}& {-}\mu \psi \bigl( \bigl\vert \varphi (0) \bigr\vert \bigr)+ \sum _{i=1}^{m} \biggl(\mu _{i}\psi \bigl( \bigl\vert \varphi (-\tau _{i}) \bigr\vert \bigr) +\hat{ \mu }_{i} \int _{-\tau _{i}}^{0}\psi \bigl( \bigl\vert \varphi ( \theta ) \bigr\vert \bigr)\,d\theta \\ &{}+\tilde{\mu }_{i} \int _{Z}\psi \bigl( \bigl\vert \varphi (\theta ) \bigr\vert \bigr)\pi (d\nu ) \biggr), \end{aligned}
(32)

where $${\psi (u)\geq 0}$$ is an arbitrary function with $${0\leq \tau _{i}\leq \tau }$$, and $${\mu , \mu _{i}, \hat{\mu }_{i}, \tilde{\mu }_{i}}$$ are nonnegative constants with the property

$$\sum_{i=1}^{m} \biggl(\mu _{i}+\hat{\mu }_{i}\tau _{i} + \tilde{\mu }_{i} \int _{Z}\pi (d\nu ) \biggr)< \mu .$$
(33)

Then there exists a global solution $${y(t)}$$ to SFDEs with jump (25) for any initial data $${\xi (t)}$$. Moreover, we have the exponential estimate

$$\mathbb{E} \bigl( \bigl\vert y(t) \bigr\vert ^{p} \bigr)\leq C\bigl(\xi ( t )\bigr)e^{-rt},\quad \forall t \geq 0,$$
(34)

where $${C(\xi ( t ))}$$ is a positive constant depending on the initial data $${\xi (t)}$$, and $${r>0}$$ is the unique positive solution of the following equation:

$$c_{2}r+\sum_{i=1}^{m} \biggl(\mu _{i}+\hat{\mu }_{i}\tau _{i} + \tilde{\mu }_{i} \int _{Z}\pi (d\nu ) \biggr)e^{r\tau _{i}}=\mu .$$
(35)

### Proof

Let us define the time-varying Lyapunov function

$$\delta (t)=e^{rt}V\bigl(t, y(t)\bigr),$$
(36)

then we can easily get

\begin{aligned} \mathcal{L}\delta (t)={}&e^{rt} \bigl(rV\bigl(t, y(t) \bigr) +\mathcal{L}V\bigl(t, y(t), y_{t}\bigr) \bigr) \\ \leq{}& e^{rt} \bigl(c_{2}r \psi \bigl( \bigl\vert y(t) \bigr\vert \bigr)+\mathcal{L}V\bigl(t, y(t), y_{t}\bigr) \bigr) \\ \leq{}& {-}c_{3}\lambda _{\max }^{p}(Q)e^{rt}( \mu -c_{2}r) \bigl\vert y(t) \bigr\vert ^{p} +c_{3} \lambda _{\max }^{p}(Q)\sum _{i=1}^{m} \biggl(\mu _{i}e^{r\tau _{i}} \bigl\vert y(t- \tau _{i}) \bigr\vert ^{p} \\ &{}+\hat{\mu }_{i}e^{r\tau _{i}} \int _{-\tau _{i}}^{0} \bigl\vert y_{t}( \theta ) \bigr\vert ^{p}\,d\theta +\tilde{\mu }_{i}e^{r\tau _{i}} \int _{Z} \bigl\vert y(t-\tau _{i}) \bigr\vert ^{p} \pi (d\nu ) \biggr). \end{aligned}
(37)

Let us also define

$$M(t)= \int _{0}^{t}e^{rs}\mathcal{H}V \bigl(s, y(s), y_{s}\bigr)\,dW(s),$$
(38)

which one can verify to be a local martingale under the given conditions. If we define the stopping time $${\overline{\tau }_{l}=\inf \{t\geq 0: \vert y(t) \vert \geq l \}}$$ for $$l=1, 2, \ldots$$ , then we have $${\mathbb{E} (M(t\wedge \overline{\tau }_{l}) )=0}$$ for each l. Now, define the Lyapunov functional:

\begin{aligned} U\bigl(t, \delta (t)\bigr)={}& \delta (t)+c_{3} \lambda _{\max }^{p}(Q)\sum_{i=1}^{m} \biggl(\mu _{i}e^{r\tau _{i}} \int _{t-\tau _{i}}^{t} \bigl\vert y(s) \bigr\vert ^{p}\,ds \\ &{}+\hat{\mu }_{i}e^{r\tau _{i}} \int _{t-\tau _{i}}^{t} \int _{s}^{t} \bigl\vert y(u) \bigr\vert ^{p}\,du\,ds +\tilde{\mu }_{i}e^{r\tau _{i}} \int _{t-\tau _{i}}^{t} \int _{Z} \bigl\vert y(s) \bigr\vert ^{p} \pi (d\nu )\,ds \biggr), \end{aligned}
(39)

which implies $${\delta (t)\leq U(t, \delta (t))}$$. Moreover, since $${e^{rt}\geq 1}$$ for $${t\geq 0}$$, Eq. (35) guarantees the condition

$$c_{3}\lambda _{\max }^{p}(Q) \Biggl( (\mu -c_{2}r )e^{rt} -\sum _{i=1}^{m} \biggl(\mu _{i} +\hat{\mu }_{i}\tau _{i}+\tilde{\mu }_{i} \int _{Z}\pi (d \nu ) \biggr)e^{r\tau _{i}} \Biggr)\geq 0.$$
(40)

Applying the operator defined in (28) to Eq. (39), using integration by parts as well as the notation defined in (27), we derive that

\begin{aligned}& \mathcal{L}U\bigl(t, \delta (t)\bigr) \\& \quad = \mathcal{L}\delta (t)+\mathcal{L} \Biggl(c_{3}\lambda _{\max }^{p}(Q) \sum_{i=1}^{m} \biggl(\mu _{i}e^{r\tau _{i}} \int _{t-\tau _{i}}^{t} \bigl\vert y(s) \bigr\vert ^{p}\,ds \\& \qquad {}+\hat{\mu }_{i}e^{r\tau _{i}} \int _{t-\tau _{i}}^{t} \int _{s}^{t} \bigl\vert y(u) \bigr\vert ^{p}\,du\,ds +\tilde{\mu }_{i}e^{r\tau _{i}} \int _{t-\tau _{i}}^{t} \int _{Z} \bigl\vert y(s) \bigr\vert ^{p} \pi (d\nu )\,ds \biggr) \Biggr) \\& \quad = \mathcal{L}\delta (t)+c_{3}\lambda _{\max }^{p}(Q) \sum_{i=1}^{m} \biggl(\mu _{i}e^{r\tau _{i}} \bigl( \bigl\vert y(t) \bigr\vert ^{p}- \bigl\vert y(t-\tau _{i}) \bigr\vert ^{p} \bigr) \\& \qquad {}+\hat{\mu }_{i}e^{r\tau _{i}} \biggl(\tau _{i} \bigl\vert y(t) \bigr\vert ^{p}- \int _{-\tau _{i}}^{0} \bigl\vert y_{t}( \theta ) \bigr\vert ^{p}\,d\theta \biggr) +\tilde{\mu }_{i}e^{r\tau _{i}} \bigl( \bigl\vert y(t) \bigr\vert ^{p}- \bigl\vert y(t- \tau _{i}) \bigr\vert ^{p} \bigr) \int _{Z}\pi (d\nu ) \biggr) \\& \quad \leq - \Biggl(c_{3}\lambda _{\max }^{p}(Q) \Biggl( (\mu -c_{2}r )e^{rt} -\sum _{i=1}^{m} \biggl(\mu _{i}+\hat{\mu }_{i}\tau _{i} +\tilde{\mu }_{i} \int _{Z}\pi (d\nu ) \biggr)e^{r\tau _{i}} \Biggr) \Biggr) \bigl\vert y(t) \bigr\vert ^{p} \\& \quad \leq 0, \end{aligned}
(41)

where the first and second inequalities follow from Eqs. (37) and (40), respectively. At the same time, we have

$$\mathcal{H}U\bigl(t, \delta (t)\bigr)=\mathcal{H}\delta (t)=e^{rt} \mathcal{H}V\bigl(t, y(t), y_{t}\bigr).$$
(42)

Based on Eq. (42), by the Itô formula we can obtain

$$dU\bigl(t, \delta (t)\bigr)=\mathcal{L}U\bigl(t, \delta (t) \bigr)\,dt+\mathcal{H}U\bigl(t, \delta (t)\bigr)\,dW(t),$$
(43)

and $${\mathbb{E} (M(t\wedge \overline{\tau }_{l}) )=0}$$, where $${M(t)}$$ is defined in (38). Furthermore, for some positive constant $${\overline{C} (\xi (t) )}$$ we obtain

$$\mathbb{E} \bigl(U\bigl(t\wedge \overline{\tau }_{l}, \delta (t\wedge \overline{\tau }_{l})\bigr) \bigr) \leq \mathbb{E} \bigl(U\bigl(0, \delta (0)\bigr) \bigr) \leq \overline{C} \bigl(\xi (t) \bigr).$$
(44)

We now assert that the solution exists globally. Suppose this assertion were false; then there would be a finite explosion time. According to Eq. (44), we have

$$c_{1}\mathbb{E} \bigl(y^{T}(t\wedge \overline{\tau }_{l})y(t\wedge \overline{\tau }_{l}) \bigr) \leq \overline{C} \bigl(\xi (t) \bigr).$$
(45)

By using the familiar Fatou lemma, we can obtain

$$c_{1}\mathbb{E} \bigl(y^{T}(\overline{ \tau }_{l})y(\overline{\tau }_{l}) \bigr) =\liminf_{t\rightarrow \infty } c_{1}\mathbb{E} \bigl(y^{T}(t \wedge \overline{\tau }_{l})y(t\wedge \overline{\tau }_{l}) \bigr) \leq \overline{C} \bigl(\xi (t) \bigr),$$
(46)

that is, $${c_{1}l^{2}\leq \overline{C} (\xi (t) )}$$, which leads to a contradiction due to the arbitrariness of the integer l. Hence the assertion is correct, namely the solution $${y(t)}$$ exists globally. Further, by using the Fatou lemma again, one can derive that

$$\mathbb{E} \bigl(U\bigl(t,\delta (t)\bigr) \bigr) \leq \overline{C} \bigl(\xi (t) \bigr).$$
(47)

By the definitions of $${\delta (t)}$$ and $${U(t,\delta (t))}$$ in Eqs. (36) and (39), we finally get

$$e^{rt}\mathbb{E} \bigl(V\bigl(t, y(t)\bigr) \bigr) \leq \mathbb{E} \bigl(U\bigl(t,\delta (t)\bigr) \bigr) \leq \overline{C} \bigl(\xi (t) \bigr),$$
(48)

namely,

$$\mathbb{E} \bigl(V\bigl(t, y(t)\bigr) \bigr)\leq \overline{C} \bigl(\xi (t) \bigr)e^{-rt}, \quad \text{or}\quad \mathbb{E} \bigl( \bigl\vert y(t) \bigr\vert ^{p} \bigr)\leq C \bigl(\xi (t) \bigr)e^{-rt},$$
(49)

where $${C (\xi (t) )=c_{1}^{-1}\overline{C} (\xi (t) )}$$ as required. □

Consequently, applying Lemma 3.1 to SDIDEs with jump (6) with $${m=1}$$ and $${\psi (u)= u^{T}u}$$, we directly obtain the following stability criterion.

### Theorem 3.1

Under Assumption 2.2, the global solution $${y(t)}$$ to SDIDEs with jump (6) is exponentially mean-square stable; that is, there exist a positive constant r and a positive constant $${C(\xi ( t ))}$$, depending on the initial data $${\xi ( t )}$$, such that

$$\mathbb{E} \bigl(y^{T}(t)y(t) \bigr)\leq C\bigl(\xi ( t )\bigr)e^{-rt}, \quad \forall t\geq 0,$$
(50)

where r is the unique positive solution of the following equation:

$$r + \bigl(\alpha _{2}+\beta _{2}+ \lambda \gamma _{2}+ \bigl(\alpha _{3} \overline{k}+\beta _{3}\overline{k}^{2}\tau +\lambda \gamma _{3} \overline{k}^{2}\tau \bigr)\tau \bigr)e^{r \tau } =2\alpha _{1}-\alpha _{2}- \alpha _{3}\overline{k}\tau -\beta _{1}-\lambda \gamma _{1}.$$
(51)
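Since the left-hand side of (51) is strictly increasing in r while the right-hand side is constant, the decay rate can be computed by simple bisection; condition (23) guarantees a sign change. The following Python sketch uses hypothetical constants (chosen only so that (23) holds, not taken from the paper's numerical examples):

```python
import math

# Hypothetical model constants (illustrative only, chosen so that (23) holds).
a1, a2, a3 = 3.0, 0.5, 0.2   # alpha_1, alpha_2, alpha_3
b1, b2, b3 = 0.4, 0.3, 0.1   # beta_1, beta_2, beta_3
g1, g2, g3 = 0.2, 0.1, 0.1   # gamma_1, gamma_2, gamma_3
lam, kbar, tau = 1.0, 1.0, 1.0

# Condition (23): only then does Eq. (51) have a unique positive root.
assert a1 > a2 + a3*kbar*tau + 0.5*(b1 + b2 + b3*kbar**2*tau**2) \
            + 0.5*lam*(g1 + g2 + g3*kbar**2*tau**2)

B = a2 + b2 + lam*g2 + (a3*kbar + b3*kbar**2*tau + lam*g3*kbar**2*tau)*tau
A = 2*a1 - a2 - a3*kbar*tau - b1 - lam*g1

def F(r):
    """F(r) = 0 is Eq. (51); F is strictly increasing in r."""
    return r + B*math.exp(r*tau) - A

lo, hi = 0.0, 1.0
while F(hi) < 0:          # bracket the root
    hi *= 2
for _ in range(100):      # bisection
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
r = 0.5*(lo + hi)
print(r)
```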

## The SSTM scheme with Poisson jump

In this section, we present the split-step θ-Milstein (SSTM) approximation with jump. Choose a discretization time-step $${\Delta t>0}$$ such that $${\Delta t=\frac{\tau }{m}}$$ for an integer m, and consider the time discretization levels $$t_{n}=n\Delta t$$, $$n=-m, -m+1,\ldots , 0, 1, \ldots$$ .

A discrete approximate solution $${\{x_{n}\}_{n\geq 0}}$$ can be obtained as follows: set

$$x_{n}=y_{n}=\xi (n \Delta t),\quad n=-m, \ldots , -1,0, \qquad x_{0}= \xi (0).$$
(52)

Then, compute the approximation $${\{y_{n}\}_{n\geq 0}}$$ according to the following scheme:

$$\textstyle\begin{cases} x_{n}=y_{n}+\theta f(t_{n},x_{n},x_{n-m},\overline{x}_{n})\Delta t +(1- \theta )f(t_{n},y_{n},y_{n-m},\overline{y}_{n})\Delta t, \\ y_{n+1}=x_{n}+g(t_{n},x_{n},x_{n-m},\overline{x}_{n})\Delta W_{n} + {\int _{Z}}h(t_{n},x_{n},x_{n-m},\overline{x}_{n},\nu ) \Delta \tilde{N}_{\lambda , n}(d\nu ) \\ \hphantom{y_{n+1}={}}{}+\frac{1}{2}L^{1}g(t_{n},x_{n},x_{n-m},\overline{x}_{n}) (S_{n}- \Delta t ) +L^{-1}g(t_{n},x_{n},x_{n-m},\overline{x}_{n})D_{n} \\ \hphantom{y_{n+1}={}}{} +L^{1} {\int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) ( \Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \\ \hphantom{y_{n+1}={}}{} +\frac{1}{2}L^{-1} {\int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}- \Delta N_{\lambda , n} ), \quad n=0,1,2,\ldots , \end{cases}$$
(53)

where $${\theta \in [0,1]}$$ is a parameter, $${y_{n}}$$ is an approximation to the state $${y(t_{n})}$$, and

\begin{aligned}& \Delta W_{n}:=W(t_{n+1})-W(t_{n}), \qquad S_{n}:= (\Delta W_{n} )^{2}, \end{aligned}
(54)
\begin{aligned}& \Delta \tilde{N}_{\lambda , n}(d\nu ):=\tilde{N}_{\lambda } \bigl((0,t_{n+1}],d \nu \bigr) -\tilde{N}_{\lambda } \bigl((0,t_{n}],d \nu \bigr), \qquad P_{n}:= \bigl(\Delta \tilde{N}_{\lambda , n}(d\nu ) \bigr)^{2}, \end{aligned}
(55)
\begin{aligned}& D_{n}:=\sum_{i=N_{\lambda }(t_{n})+1}^{N_{\lambda }(t_{n+1})} \bigl[W\bigl( \varGamma (i)\bigr)-W(t_{n}) \bigr] -\lambda \int _{t_{n}}^{t_{n+1}}W(s)\,ds+ \lambda W(t_{n})\Delta t, \end{aligned}
(56)

where $${\varGamma (i)}$$ is the instant of the ith jump of the Poisson process.
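The random quantities (54)–(56) can be generated per step by direct simulation; the only delicate ingredient is $${D_{n}}$$, which requires the Brownian path inside the step and the jump instants $${\varGamma (i)}$$. A minimal Python sketch, where the path is resolved on a fine auxiliary grid (`n_sub` is a discretization parameter of this sketch, not of the scheme, and the step length and intensity below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def step_increments(dt, lam, n_sub=200):
    """Simulate (Delta W_n, S_n, Delta N~_n, P_n, D_n) for one step of length dt."""
    h = dt / n_sub
    # Increment path B(s) = W(t_n + s) - W(t_n) on a fine grid over [0, dt].
    B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(h), n_sub))))
    grid = np.linspace(0.0, dt, n_sub + 1)
    dW = B[-1]                 # Delta W_n, Eq. (54)
    S = dW ** 2                # S_n = (Delta W_n)^2
    N = rng.poisson(lam * dt)  # number of jumps in (t_n, t_{n+1}]
    dNt = N - lam * dt         # compensated increment, Eq. (55)
    P = dNt ** 2               # P_n, Eq. (55)
    # Given N jumps, the instants Gamma(i) are i.i.d. uniform on the step.
    jumps = np.sort(rng.uniform(0.0, dt, N))
    # Eq. (56): D_n = sum_i [W(Gamma(i)) - W(t_n)] - lam*int W ds + lam*W(t_n)*dt
    #               = sum_i B(Gamma(i) - t_n) - lam * int_0^dt B(s) ds.
    int_B = 0.5 * h * (B[:-1] + B[1:]).sum()   # trapezoidal rule on the grid
    D = np.interp(jumps, grid, B).sum() - lam * int_B
    return dW, S, dNt, P, D

dW, S, dNt, P, D = step_increments(dt=0.01, lam=5.0)
```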

In addition, $${\overline{x}_{n}}$$ and $${\overline{y}_{n}}$$ approximate the integral term. In this paper, we choose the composite trapezoidal rule to discretize the integral. Therefore, we have

\begin{aligned}& \overline{x}_{n}=\frac{\Delta t}{2} \bigl(K(x_{n-m})+K(x_{n}) \bigr) + \Delta t \sum _{j=1}^{m-1}K(x_{n-m+j}), \end{aligned}
(57)
\begin{aligned}& \overline{y}_{n}=\frac{\Delta t}{2} \bigl(K(y_{n-m})+K(y_{n}) \bigr) + \Delta t \sum _{j=1}^{m-1}K(y_{n-m+j}), \end{aligned}
(58)

that is, the integral term in SDIDEs with jump (6) is approximated by the composite trapezoidal rule, which makes use of the piecewise Lagrange interpolation technique. Finally, we assume that the derivatives of the function g needed in (24) are well defined.
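To make the structure of the first relation of (53) and the quadrature (57) concrete, the following Python sketch solves the implicit split step by fixed-point iteration for a hypothetical scalar linear test problem (all coefficients below are illustrative assumptions, not taken from the paper). Note that $${\overline{x}_{n}}$$ itself depends on the unknown $${x_{n}}$$ through the trapezoidal rule:

```python
import numpy as np

# Hypothetical scalar test problem (illustrative coefficients only):
#   f(t, y, ybar, yhat) = a*y + b*ybar + c*yhat,  K(t, y) = y  (kbar = 1).
a, b, c = -4.0, 0.5, 0.2
tau, m, theta = 1.0, 10, 0.5
dt = tau / m

def f(y, y_del, y_int):
    return a*y + b*y_del + c*y_int

def trapezoid(z):
    """Composite trapezoidal rule (57)-(58); z holds the m+1 nodes on [t-tau, t]."""
    return dt*(0.5*z[0] + z[1:-1].sum() + 0.5*z[-1])

def split_step(x_hist, y_hist):
    """Solve the implicit first relation of (53) for x_n by fixed-point iteration.

    x_hist = [x_{n-m}, ..., x_{n-1}] and y_hist = [y_{n-m}, ..., y_n].
    """
    y_n = y_hist[-1]
    fy = f(y_n, y_hist[0], trapezoid(y_hist))   # explicit part, evaluated once
    x = y_n                                     # initial guess
    for _ in range(100):
        xbar = trapezoid(np.append(x_hist, x))  # xbar_n depends on the unknown x_n
        x_new = y_n + theta*dt*f(x, x_hist[0], xbar) + (1 - theta)*dt*fy
        if abs(x_new - x) < 1e-14:
            return x_new
        x = x_new
    return x

x_hist = np.ones(m)          # constant history, for illustration
y_hist = np.ones(m + 1)
x_n = split_step(x_hist, y_hist)
```

For linear f the relation could equally be solved in closed form; fixed-point iteration is used here because it also covers nonlinear drift, converging whenever $${\theta \Delta t}$$ times the relevant Lipschitz constant is below one.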

### Lemma 4.1

(Delayed difference inequality, see [19])

Let $${m\geq 1}$$ be an integer, $${m_{0}=-1}$$ or $${m_{0}=0}$$. Denote

$$DV_{n}=V_{n+1}-V_{n},\quad n\in \mathbb{N}.$$
(59)

Assume that $${\{V_{n} \}_{n \in \mathbb{N}}}$$, $${\{x_{n} \}_{n \in \mathbb{N}}}$$ and $${\{y_{n} \}_{n \in \mathbb{N}}}$$ are nonnegative sequences with $$c_{1}x_{n}\leq V_{n}\leq c_{2}x_{n}$$, $${0< c_{1}\leq c_{2}}$$. If $${\{V_{n} \}_{n \in \mathbb{N}}}$$ satisfies the delayed difference inequality

$$DV_{n}\leq -ax_{n-m_{0}}+\sum _{i=m_{0}+1}^{m}a_{i}x_{n-i}-by_{n+1} + \sum_{i=0}^{m}b_{i}y_{n-i},\quad n\in \mathbb{N},$$
(60)

where a, b, $$a_{i}$$ and $${b_{i}}$$ are nonnegative constants with $${{\sum_{i=m_{0}+1}^{m}}a_{i}< a}$$ and $${{\sum_{i=0}^{m}}b_{i}< b}$$, then we have the estimate

$$x_{n}\leq M \Vert x_{0} \Vert e^{-rn},$$
(61)

where $${M\geq 1}$$ is a constant, $${\Vert x_{0}\Vert ={\max_{-m\leq l\leq 0}}|x_{l}|}$$, $${r=\ln C}$$, and $${C>1}$$ is the largest positive number satisfying the algebraic inequality system

$${\sum_{i=m_{0}+1}^{m}}a_{i}C^{i+1} \leq \bigl(a-c_{2}(C-1) \bigr)C^{m_{0}+1},\quad \textit{and}\quad { \sum_{i=0}^{m}}b_{i}C^{i+1} \leq b.$$
(62)
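The decay exponent $${r=\ln C}$$ in Lemma 4.1 can be computed numerically by bisecting on the feasibility of the system (62). A Python sketch with hypothetical coefficients (chosen so that $${\sum a_{i}< a}$$ and $${\sum b_{i}< b}$$, and assuming, as the lemma suggests, that the feasible set of (62) is an interval starting at 1):

```python
import math

# Hypothetical coefficients (illustrative only).
m0, m, c2 = 0, 2, 1.0
a, a_coef = 1.0, [0.1, 0.1]        # a_i for i = m0+1, ..., m
b, b_coef = 1.0, [0.1, 0.1, 0.1]   # b_i for i = 0, ..., m

def feasible(C):
    """Check the algebraic inequality system (62) at a candidate C."""
    lhs1 = sum(ai * C**(i + 1) for i, ai in zip(range(m0 + 1, m + 1), a_coef))
    rhs1 = (a - c2*(C - 1)) * C**(m0 + 1)
    lhs2 = sum(bi * C**(i + 1) for i, bi in zip(range(0, m + 1), b_coef))
    return lhs1 <= rhs1 and lhs2 <= b

lo, hi = 1.0, 2.0
while feasible(hi):        # find an infeasible upper bracket
    hi *= 2
for _ in range(60):        # bisect for the largest feasible C
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
C = lo
r = math.log(C)            # decay rate in (61)
```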

## Exponential mean-square stability of the numerical scheme

This section establishes some criteria for the exponential mean-square stability of the SSTM approximation $$\{y_{n}\}_{n \geq 0}$$ with jump.

### Definition 5.1

The numerical scheme (53) is said to be exponentially mean-square stable if there exist two positive constants r and C such that for any initial data $${\xi (t)}$$ the following relation holds:

$$\mathbb{E} \bigl(x_{n}^{T}x_{n} \bigr)\vee \mathbb{E} \bigl(y_{n}^{T}y_{n} \bigr) \leq Ce^{-rt_{n}}\cdot \mathbb{E} \Bigl(\sup_{-\tau \leq t \leq 0} \bigl\vert \xi ^{T}(t)\xi (t) \bigr\vert \Bigr),\quad \forall n\geq 0.$$
(63)

### Remark 5.1

Define the Lyapunov function $${V (t, y(t) )=y^{T}(t)Qy(t)}$$. By using (22) and the Cauchy–Schwarz inequality, we have

$$\biggl\vert \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds \biggr\vert ^{2}\leq \overline{k}^{2} \tau \int _{t-\tau }^{t}y^{T}(s)Qy(s)\,ds,\quad t>0.$$
(64)
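The bound (64) is the Cauchy–Schwarz inequality applied over the delay window $${[t-\tau ,t]}$$. Purely as an illustration, and with hypothetical scalar choices that are not the paper's model, one can take $${Q=1}$$, a kernel $${K(s,y)=\overline{k}\sin (s)y}$$ satisfying a bound of type (22), $${|K(s,y)|^{2}\leq \overline{k}^{2}y^{T}Qy}$$, and $${y(s)=\cos (s)}$$, and compare both sides by midpoint quadrature:

```python
import math

# Hypothetical scalar data (illustration only): Q = 1 with kbar, tau, t fixed.
kbar, tau, t = 0.7, 1.0, 2.0
y = math.cos
K = lambda s: kbar * math.sin(s) * y(s)   # |K(s,y)|^2 <= kbar^2 * y^2, as in (22)

# Midpoint quadrature on [t - tau, t].
N = 10_000
h = tau / N
grid = [t - tau + (j + 0.5) * h for j in range(N)]

lhs = (sum(K(s) for s in grid) * h) ** 2                 # |int K(s, y(s)) ds|^2
rhs = kbar**2 * tau * sum(y(s) ** 2 for s in grid) * h   # kbar^2 * tau * int y^T Q y ds
print(f"lhs = {lhs:.6f} <= rhs = {rhs:.6f}")
```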

To establish the results on exponential mean-square stability, we present the following theorem.

### Theorem 5.1

Let $${\theta \in [0, 1]}$$, and let Assumption 2.2 hold. Then, for any initial data $${\xi (t)}$$, there exists an upper stepsize bound $${\Delta t^{*}}$$, determined by

\begin{aligned}& \bigl( 2(1-\theta )L + \bigl(\eta _{1}+\eta _{2}+\eta _{3}\overline{k}^{2} \tau ^{2} \bigr) + \bigl(\tilde{\eta }_{1}+\tilde{\eta }_{2}+\tilde{\eta }_{3} \overline{k}^{2}\tau ^{2} \bigr) \bigr) \bigl(\overline{k}\tau ^{2} \Delta t^{\ast 2}+\Delta t^{\ast }\bigr) \\& \quad = \alpha _{1}- \biggl(\alpha _{2}+\alpha _{3}\overline{k}\tau + \frac{1}{2} \bigl(\beta _{1}+\beta _{2}+\beta _{3} \overline{k}^{2} \tau ^{2} \bigr) +\frac{\lambda }{2} \bigl(\gamma _{1}+\gamma _{2}+ \gamma _{3} \overline{k}^{2}\tau ^{2} \bigr) \\& \qquad {}-\frac{\lambda }{2} \bigl(\sigma _{1}+\sigma _{2}+ \sigma _{3} \overline{k}^{2}\tau ^{2} + \tilde{\sigma }_{1}+\tilde{\sigma }_{2}+ \tilde{\sigma }_{3}\overline{k}^{2}\tau ^{2} \bigr) \biggr), \end{aligned}
(65)

depending on θ, such that for any $${\Delta t\in (0,\Delta t^{*})}$$ we have the exponential estimate

$$\mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr)\vee \mathbb{E} \bigl(y_{n}^{T}Qy_{n} \bigr) \leq C\bigl(\xi (t)\bigr)e^{-r_{\Delta }(\theta )n\Delta t},\quad n=1,2, \ldots ,$$
(66)

where $${C(\xi ( t ))}$$ is a positive constant, $${r_{\Delta }(\theta )=\ln P_{\Delta }}$$, and $${P_{\Delta }>1}$$ satisfies the algebraic inequality system:

\begin{aligned} &P_{\Delta }-1+\theta \Delta t \biggl(\alpha _{2}+\beta _{1}+\lambda (1+2 \gamma _{1}) + \beta _{2}+2\lambda \gamma _{2}+\overline{k}\alpha _{3} \tau \\ &\qquad {}+(\beta _{3}+2\lambda \gamma _{3}) \overline{k}^{2}\tau ^{2} - \overline{k}\alpha _{3}\frac{\Delta t}{2} \biggr)P_{\Delta }^{m+1} \\ &\quad \leq \theta \Delta t \biggl(2\alpha _{1}-\alpha _{2}- \overline{k} \alpha _{3}\tau -\overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr). \end{aligned}
(67)
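In practice the decay rate $${r_{\Delta }(\theta )=\ln P_{\Delta }}$$ is obtained by solving the scalar inequality (67) for its largest admissible root. Since the left-hand side is increasing in $${P_{\Delta }}$$ on $${[1,\infty )}$$, a bisection sketch suffices. The constants A and B below stand in for the two bracketed coefficient groups in (67) and are hypothetical placeholders, not values taken from the paper:

```python
import math

# Hypothetical placeholders for the two coefficient brackets in (67).
theta, dt, m = 0.5, 0.1, 10
A, B = 0.3, 1.2

def f(P):
    # (67) in the schematic form (P - 1 + theta*dt*A) * P^(m+1) <= theta*dt*B;
    # f(P) <= 0 means P is admissible.
    return (P - 1.0 + theta * dt * A) * P ** (m + 1) - theta * dt * B

# f is increasing on P >= 1 and f(1) = theta*dt*(A - B) < 0 when A < B, so the
# largest admissible P_Delta is the unique root; locate it by bisection.
lo, hi = 1.0, 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) <= 0.0 else (lo, mid)

P_delta = lo
r_delta = math.log(P_delta)          # r_Delta(theta) = ln P_Delta
print(f"P_Delta = {P_delta:.6f}, r_Delta = {r_delta:.6f}")
```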

### Proof

According to the first equation of the scheme (53), we have

$$x_{n}-\theta f(t_{n},x_{n},x_{n-m}, \overline{x}_{n})\Delta t =y_{n}+(1- \theta )f(t_{n},y_{n},y_{n-m},\overline{y}_{n}) \Delta t.$$
(68)

Combining this with the elementary inequality $${a^{2}\leq 2ab+ (a-b)^{2}}$$, we can obtain

\begin{aligned} x_{n}^{T}Qx_{n} \leq{}& y_{n}^{T}Qy_{n} +2(1-\theta )y_{n}^{T}Qf(t_{n},y_{n},y_{n-m}, \overline{y}_{n})\Delta t \\ &{}+2\theta x_{n}^{T}Qf(t_{n},x_{n},x_{n-m}, \overline{x}_{n})\Delta t \\ &{}+(1-\theta )^{2}f^{T}(t_{n},y_{n},y_{n-m}, \overline{y}_{n}) Qf(t_{n},y_{n},y_{n-m}, \overline{y}_{n}) (\Delta t)^{2}. \end{aligned}
(69)

Now, by using (22) and (57), we can derive that

$$\overline{x}_{n}^{T}Q \overline{x}_{n} \leq \overline{k}^{2}\tau \Biggl( \frac{\Delta t}{2}x_{n-m}^{T}Qx_{n-m} +\Delta t\sum_{j=1}^{m-1}x_{n-m+j}^{T}Qx_{n-m+j} +\frac{\Delta t}{2}x_{n}^{T}Qx_{n} \Biggr),$$
(70)

and also by using the inequality $${ ({\sum_{i=1}^{m}}a_{i} )^{2}\leq m {\sum_{i=1}^{m}}a_{i}^{2}}$$, we can obtain

\begin{aligned} 2x_{n}^{T}Q\overline{x}_{n} \leq{}& \overline{k} \Biggl( \frac{\Delta t}{2} \bigl(x_{n-m}^{T}Qx_{n-m} +x_{n}^{T}Qx_{n} \bigr) \\ &{}+\Delta \Biggl(\sum_{j=1}^{m-1}x_{n-m+j}^{T}Qx_{n-m+j} +x_{n}^{T}Qx_{n} \Biggr)+ \frac{\Delta t}{2}x_{n}^{T}Qx_{n} \Biggr) \\ \leq{}& \overline{k} \Biggl( \biggl(m\Delta t+\frac{\Delta t}{2} \biggr)x_{n}^{T}Qx_{n} +\Delta t \sum _{j=1}^{m-1}x_{n-m+j}^{T}Qx_{n-m+j} + \frac{\Delta t}{2}x_{n-m}^{T}Qx_{n-m} \Biggr) \\ \leq{}& \overline{k} \Biggl( \biggl(\tau +\frac{\Delta t}{2} \biggr)x_{n}^{T}Qx_{n}+ \Delta t \sum _{j=1}^{m-1}x_{n-m+j}^{T}Qx_{n-m+j} + \frac{\Delta t}{2}x_{n-m}^{T}Qx_{n-m} \Biggr), \end{aligned}
(71)

where $${\tau =m\Delta t}$$. Therefore, by using conditions (13), (14) and Eq. (71), we get

\begin{aligned}& 2x_{n}^{T} Qf(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \\& \quad = 2x_{n}^{T}Qf(t_{n},x_{n},0,0) +2x_{n}^{T}Q \bigl(f(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) -f(t_{n},x_{n},0,0) \bigr) \\& \quad \leq -2\alpha _{1}x_{n}^{T}Qx_{n}+2x_{n}^{T}Q \bigl(\alpha _{2} \vert x_{n-m} \vert +\alpha _{3} \vert \overline{x}_{n} \vert \bigr) \\& \quad \leq -2\alpha _{1}x_{n}^{T}Qx_{n}+ \alpha _{2} \bigl(x_{n}^{T}Qx_{n} +x_{n-m}^{T}Qx_{n-m} \bigr)+2\alpha _{3}x_{n}^{T}Q\overline{x}_{n} \\& \quad \leq \biggl(-2\alpha _{1}+\alpha _{2}+\overline{k} \alpha _{3}\tau + \overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr)x_{n}^{T}Qx_{n} + \biggl( \alpha _{2}+\overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr)x_{n-m}^{T}Qx_{n-m} \\& \qquad {}+\overline{k}\alpha _{3}\Delta t \sum _{j=1}^{m-1}x_{n-m+j}^{T}Qx_{n-m+j}. \end{aligned}
(72)
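The bounds (70)–(71) exploit the fact that the trapezoidal weights $${(\Delta t/2, \Delta t, \ldots , \Delta t, \Delta t/2)}$$ over the nodes $${x_{n-m},\ldots ,x_{n}}$$ sum to $${\tau =m\Delta t}$$, so the discrete Cauchy–Schwarz inequality distributes the squared sum over the nodes. A scalar sketch with hypothetical data ($${Q=1}$$ and a kernel bounded as in (22)):

```python
import math
import random

rng = random.Random(42)
kbar, m, dt = 0.8, 20, 0.05
tau = m * dt

# Trapezoidal weights over x_{n-m}, ..., x_n; they sum to tau = m*dt.
w = [dt / 2] + [dt] * (m - 1) + [dt / 2]

x = [rng.uniform(-1.0, 1.0) for _ in range(m + 1)]     # hypothetical iterates
K = [kbar * math.sin(j) * x[j] for j in range(m + 1)]  # |K_j| <= kbar * |x_j|

# xbar_n is the trapezoidal approximation (57) of the integral term.
xbar = sum(wj * Kj for wj, Kj in zip(w, K))

# Discrete Cauchy-Schwarz, as used in (70):
# xbar^2 = (sum w_j K_j)^2 <= (sum w_j)(sum w_j K_j^2) <= kbar^2 * tau * sum w_j x_j^2
bound = kbar**2 * tau * sum(wj * xj**2 for wj, xj in zip(w, x))
print(f"xbar^2 = {xbar**2:.6f} <= bound = {bound:.6f}")
```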

For evaluating $${y_{n}^{T}Qf(t_{n},y_{n},y_{n-m},\overline{y}_{n})}$$, the procedure is the same as in Eq. (72). Taking the expectation on both sides of (69) and using (15), (71) and (72) gives

\begin{aligned}& \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) \\& \quad \leq \mathbb{E}\bigl(y_{n}^{T}Qy_{n} \bigr)+(1-\theta ) \Biggl( \biggl(-2\alpha _{1}+ \alpha _{2}+\overline{k}\alpha _{3}\tau +\overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr)\mathbb{E}\bigl(y_{n}^{T}Qy_{n} \bigr) \\& \qquad {}+ \biggl(\alpha _{2}+\overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr) \mathbb{E}\bigl(y_{n-m}^{T}Qy_{n-m} \bigr) +\overline{k}\alpha _{3}\Delta t \sum _{j=1}^{m-1} \mathbb{E}\bigl(y_{n-m+j}^{T}Qy_{n-m+j} \bigr) \Biggr)\Delta t \\& \qquad {}+\theta \Biggl( \biggl(-2\alpha _{1}+\alpha _{2}+ \overline{k}\alpha _{3} \tau +\overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr)\mathbb{E}\bigl(x_{n}^{T}Qx_{n} \bigr) + \biggl(\alpha _{2}+\overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr) \mathbb{E}\bigl(x_{n-m}^{T}Qx_{n-m} \bigr) \\& \qquad {}+\overline{k}\alpha _{3}\Delta t \sum _{j=1}^{m-1} \mathbb{E}\bigl(x_{n-m+j}^{T}Qx_{n-m+j} \bigr) \Biggr)\Delta t +(1-\theta )^{2}L \Biggl( \biggl(1+ \overline{k}^{2}\tau \frac{\Delta t}{2} \biggr) \mathbb{E} \bigl(y_{n}^{T}Qy_{n}\bigr) \\& \qquad {}+ \biggl(1+\overline{k}^{2}\tau \frac{\Delta t}{2} \biggr) \mathbb{E}\bigl(y_{n-m}^{T}Qy_{n-m}\bigr) + \overline{k}^{2}\tau \Delta t \sum_{j=1}^{m-1} \mathbb{E}\bigl(y_{n-m+j}^{T}Qy_{n-m+j}\bigr) \Biggr) (\Delta t)^{2}. \end{aligned}
(73)

On the other hand, it is easy to deduce from the second equation of the scheme (53) that

\begin{aligned} y_{n+1}^{T}Qy_{n+1} ={}&x_{n}^{T}Qx_{n} +S_{n}^{T}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})Q g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \\ &{}+\frac{1}{4} (S_{n}- \Delta t )^{T}L^{1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) QL^{1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) (S_{n}- \Delta t ) \\ &{}+ { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )Q h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )P_{n} \\ &{}+D_{n}^{T}L^{-1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})QL^{-1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n})D_{n} \\ &{}+ (\Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} )^{T} L^{1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\ &{} \times L^{1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) ( \Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \\ &{}+\frac{1}{4} (P_{n}- \Delta N_{\lambda , n} )^{T} L^{-1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\ &{} \times L^{-1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}- \Delta N_{\lambda , n} ) \\ &{}+2\Delta W_{n}^{T}g^{T}(t_{n}, x_{n}, x_{n-m},\overline{x}_{n})Q L^{-1}g(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})D_{n}+m_{n}^{\Delta }, \end{aligned}
(74)

where

\begin{aligned} m_{n}^{\Delta }={}&2x_{n}^{T}Qg(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \Delta W_{n} +x_{n}^{T}QL^{1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) (S_{n}-\Delta t ) \\ &{}+2x_{n}^{T}Q \int _{Z}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu ) \Delta \tilde{N}_{\lambda , n}(d\nu ) +2x_{n}^{T}QL^{-1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n})D_{n} \\ &{}+2x_{n}^{T}QL^{1} \int _{Z}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) ( \Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \\ &{}+x_{n}^{T}QL^{-1} \int _{Z}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}-\Delta N_{ \lambda , n} ) \\ &{}+\Delta W_{n}^{T}g^{T}(t_{n}, x_{n}, x_{n-m},\overline{x}_{n})Q L^{1}g(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}) (S_{n}-\Delta t ) \\ &{}+2\Delta W_{n}^{T}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) \\ &{}+2\Delta W_{n}^{T}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q \\ &{} \times L^{1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (\Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \\ &{}+\Delta W_{n}^{T}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q \\ &{} \times L^{-1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}-\Delta N_{ \lambda , n} ) \\ &{}+ (S_{n}-\Delta t )^{T}L^{1}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) \\ &{}+ (S_{n}-\Delta t )^{T}L^{1}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q L^{-1}g(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})D_{n} \\ &{}+ (S_{n}-\Delta t )^{T}L^{1}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q \\ &{} \times L^{1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (\Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \\ 
&{}+\frac{1}{2} (S_{n}-\Delta t )^{T}L^{1}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q \\ &{} \times L^{-1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}-\Delta N_{ \lambda , n} ) \\ &{}+2 \int _{Z}h^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu ) \Delta \tilde{N}_{\lambda , n}(d\nu ) QL^{-1}g(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})D_{n} \\ &{}+2 \int _{Z}h^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu ) \Delta \tilde{N}_{\lambda , n}(d\nu )Q \\ &{} \times L^{1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (\Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \\ &{}+ \int _{Z}h^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu ) \Delta \tilde{N}_{\lambda , n}(d\nu )Q \\ &{} \times L^{-1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}-\Delta N_{ \lambda , n} ) \\ &{}+ 2D_{n}^{T}L^{-1}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q \\ &{} \times L^{1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (\Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \\ &{}+ D_{n}^{T}L^{-1}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q \\ &{} \times L^{-1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}-\Delta N_{ \lambda , n} ) \\ &{}+ (\Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} )^{T} L^{1} \int _{Z}h^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu )Q \\ &{} \times L^{-1} \int _{Z}h(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}-\Delta N_{ \lambda , n} ). \end{aligned}
(75)

Note that

\begin{aligned} \begin{aligned} &\mathbb{E} (\Delta W_{n} )=0, \qquad \mathbb{E} (S_{n} )=\Delta t, \qquad \mathbb{E} \bigl((S_{n})^{2} \bigr)=3(\Delta t)^{2}, \\ & \mathbb{E} \bigl(\Delta \tilde{N}_{\lambda , n}(d\nu ) \bigr)=0, \qquad \mathbb{E} (P_{n} )=\lambda \pi (d\nu )\Delta t, \\ &\mathbb{E} \bigl((P_{n})^{2} \bigr)=\lambda \Delta t(1+3\lambda \Delta t) \pi (d\nu ). \end{aligned} \end{aligned}
(76)
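The identities in (76) are the standard moments of the driving increments; in particular they coincide with the moments of $${S_{n}=(\Delta W_{n})^{2}}$$ and of the squared compensated Poisson increment, since for $${X=N-\mu }$$ with $${N\sim \operatorname{Poisson}(\mu )}$$ one has $${\mathbb{E}(X^{2})=\mu }$$ and $${\mathbb{E}(X^{4})=\mu +3\mu ^{2}=\mu (1+3\mu )}$$. As a hedged Monte Carlo sanity check (this scalar reconstruction, with the mark measure normalized to one, is an assumption made only for illustration; the run is seeded for reproducibility):

```python
import math
import random

rng = random.Random(0)
dt, lam, N = 0.01, 2.0, 200_000
mu = lam * dt                        # mean of the Poisson increment per step

def poisson(mu):
    # Knuth's method: count uniform draws until their product falls below e^{-mu}.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mean(v):
    return sum(v) / len(v)

S = [rng.gauss(0.0, math.sqrt(dt)) ** 2 for _ in range(N)]  # S_n = (Delta W_n)^2
P = [(poisson(mu) - mu) ** 2 for _ in range(N)]             # squared compensated increment

mS, mS2 = mean(S), mean([x * x for x in S])
mP, mP2 = mean(P), mean([x * x for x in P])
print(f"E(S_n)   ~ {mS:.5f}   (exact {dt})")
print(f"E(S_n^2) ~ {mS2:.2e}  (exact {3 * dt**2:.2e})")
print(f"E(P_n)   ~ {mP:.5f}   (exact {mu})")
print(f"E(P_n^2) ~ {mP2:.4f}  (exact {mu * (1 + 3 * mu):.4f})")
```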

Taking the expectation on both sides of (75), using (76) and the fact that $${\Delta W_{n}}$$, $${\Delta \tilde{N}_{\lambda , n}}$$ and $${\Delta N_{\lambda , n}}$$ are independent of $${x_{n}}$$, $${x_{n-m}}$$ and $${\overline{x}_{n}}$$, we obtain

$$\mathbb{E} \bigl(m_{n}^{\Delta } \bigr)=0.$$
(77)

Therefore we have

\begin{aligned}& \mathbb{E} \bigl(y_{n+1}^{T}Qy_{n+1} \bigr) \\& \quad =\mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \mathbb{E} \bigl(S_{n}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})Q g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \bigr) \\& \qquad {}+\frac{1}{4}\mathbb{E} \bigl( (S_{n}- \Delta t ) L^{1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})Q L^{1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) (S_{n}- \Delta t ) \bigr) \\& \qquad {}+\mathbb{E} \biggl( { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu ) Qh(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )P_{n} \biggr) \\& \qquad {}+\mathbb{E} \bigl(D_{n}L^{-1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})QL^{-1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n})D_{n} \bigr) \\& \qquad {}+\mathbb{E} \biggl( (\Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} )^{T} L^{1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\& \qquad {} \times L^{1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) ( \Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \biggr) \\& \qquad {}+\frac{1}{4}\mathbb{E} \biggl( (P_{n}- \Delta N_{\lambda , n} ) L^{-1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\& \qquad {} \times L^{-1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}- \Delta N_{\lambda , n} ) \biggr) \\& \qquad {}+2\mathbb{E} \bigl(\Delta W_{n}^{T}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q L^{-1}g(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})D_{n} \bigr). \end{aligned}
(78)

Since $${x_{n}}$$, $${x_{n-m}}$$ and $${\overline{x}_{n}}$$ are all $${\mathcal{F}_{t_{n}}}$$-measurable, we can easily obtain

\begin{aligned}& \mathbb{E} \bigl(S_{n}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})Q g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \bigr) \\& \quad =\Delta t\mathbb{E} \bigl(g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})Q g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \bigr), \end{aligned}
(79)
\begin{aligned}& \mathbb{E} \bigl( (S_{n}- \Delta t )^{T} L^{1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})Q L^{1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) (S_{n}- \Delta t ) \bigr) \\& \quad =2(\Delta t)^{2}\mathbb{E} \bigl(L^{1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})Q L^{1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \bigr), \end{aligned}
(80)
\begin{aligned}& \mathbb{E} \biggl( { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu ) Qh(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )P_{n} \biggr) \\& \quad =\lambda \Delta t \int _{Z}\mathbb{E} \bigl(h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu ) Qh(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu ) \bigr)\pi (d\nu ), \end{aligned}
(81)
\begin{aligned}& \mathbb{E} \bigl(D_{n}^{T}L^{-1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) QL^{-1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n})D_{n} \bigr) \\& \quad \leq C (\Delta t)^{2}\mathbb{E} \bigl( L^{-1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) QL^{-1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \bigr), \end{aligned}
(82)
\begin{aligned}& \mathbb{E} \biggl( (\Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} )^{T} L^{1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\& \qquad {} \times L^{1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) ( \Delta W_{n}\Delta \tilde{N}_{\lambda , n}-D_{n} ) \biggr) \\& \quad \leq C (\Delta t)^{2}\mathbb{E} \biggl(L^{1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\& \qquad {} \times L^{1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) \biggr), \end{aligned}
(83)
\begin{aligned}& \mathbb{E} \biggl( (P_{n}- \Delta N_{\lambda , n} )^{T} L^{-1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\& \qquad {} \times L^{-1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) (P_{n}- \Delta N_{\lambda , n} ) \biggr) \\& \quad =2\lambda \Delta t(1+\lambda \Delta t)\mathbb{E} \biggl( L^{-1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\& \qquad {} \times L^{-1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) \biggr), \end{aligned}
(84)
\begin{aligned}& \mathbb{E} \bigl(\Delta W_{n}^{T}g^{T}(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})Q L^{-1}g(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n})D_{n} \bigr) \\& \quad = \Biggl(\sum_{i=N_{\lambda }(t_{n})+1}^{N_{\lambda }(t_{n+1})} \bigl( \varGamma (i)-t_{n}\bigr) +\frac{1}{2}(\Delta t)^{2} \Biggr) \\& \qquad {} \times \mathbb{E} \bigl(g^{T}(t_{n}, x_{n}, x_{n-m},\overline{x}_{n})Q L^{-1}g(t_{n}, x_{n}, x_{n-m}, \overline{x}_{n}) \bigr). \end{aligned}
(85)

Now, by using the elementary inequality $${2a^{T}b\leq a^{2}+b^{2}}$$ and substituting (79)–(85) into (78), we obtain

\begin{aligned}& \mathbb{E} \bigl(y_{n+1}^{T}Qy_{n+1} \bigr) \\& \quad \leq \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) +\Delta t\mathbb{E} \bigl(g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) Qg(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \bigr) \\& \qquad {}+\frac{1}{2}(\Delta t)^{2}\mathbb{E} \bigl(L^{1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n})Q L^{1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \bigr) \\& \qquad {}+\lambda \Delta t \int _{Z}\mathbb{E} \bigl(h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu ) Qh(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu ) \bigr)\pi (d\nu ) \\& \qquad {}+C (\Delta t)^{2}\mathbb{E} \bigl( L^{-1}g^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) QL^{-1}g(t_{n},x_{n},x_{n-m}, \overline{x}_{n}) \bigr) \\& \qquad {}+C (\Delta t)^{2}\mathbb{E} \biggl(L^{1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\& \qquad {} \times L^{1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) \biggr) \\& \qquad {}+2\lambda \Delta t(1+\lambda \Delta t)\mathbb{E} \biggl( L^{-1} { \int _{Z}}h^{T}(t_{n},x_{n},x_{n-m}, \overline{x}_{n}, \nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) Q \\& \qquad {} \times L^{-1} { \int _{Z}}h(t_{n},x_{n},x_{n-m}, \overline{x}_{n},\nu )\Delta \tilde{N}_{\lambda , n}(d\nu ) \biggr) \\& \qquad {}+ \Biggl(\sum_{i=N_{\lambda }(t_{n})+1}^{N_{\lambda }(t_{n+1})} \bigl( \varGamma (i)-t_{n}\bigr) +\frac{1}{2}(\Delta t)^{2} \Biggr) \mathbb{E} \bigl(g^{T}(t_{n}, x_{n}, x_{n-m},\overline{x}_{n})Qg(t_{n}, x_{n}, x_{n-m},\overline{x}_{n}) \\& \qquad {}+L^{-1}g^{T}(t_{n}, x_{n}, x_{n-m},\overline{x}_{n})QL^{-1}g(t_{n}, x_{n}, x_{n-m},\overline{x}_{n}) \bigr). \end{aligned}
(86)

So we have $${\mathbb{E} (x_{n}^{T}Qx_{n} )\leq \mathbb{E} (y_{n+1}^{T}Qy_{n+1} )}$$. Subsequently, by using (16)–(21), we can obtain

\begin{aligned}& \mathbb{E} \bigl(y_{n+1}^{T}Qy_{n+1} \bigr) \\& \quad \leq \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \Biggl(\Delta t+\sum_{i=N_{ \lambda }(t_{n})+1}^{N_{\lambda }(t_{n+1})} \bigl(\varGamma (i)-t_{n}\bigr) + \frac{1}{2}(\Delta t)^{2} \Biggr) \\& \qquad {} \times \bigl(\beta _{1}\mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) +\beta _{2} \mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) +\beta _{3}\mathbb{E} \bigl( \overline{x}_{n}^{T}Q \overline{x}_{n} \bigr) \bigr) \\& \qquad {}+(\Delta t)^{2} \biggl(\frac{1}{2}\eta _{1} \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \frac{1}{2}\eta _{2}\mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) + \frac{1}{2}\eta _{3}\mathbb{E} \bigl( \overline{x}_{n}^{T}Q\overline{x}_{n} \bigr) \biggr) \\& \qquad {}+\lambda \Delta t \bigl(\gamma _{1}\mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) +\gamma _{2}\mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) +\gamma _{3} \mathbb{E} \bigl(\overline{x}_{n}^{T}Q \overline{x}_{n} \bigr) \bigr) \\& \qquad {}+ \Biggl(C(\Delta t)^{2}+\sum_{i=N_{\lambda }(t_{n})+1}^{N_{\lambda }(t_{n+1})} \bigl(\varGamma (i)-t_{n}\bigr) +\frac{1}{2}(\Delta t)^{2} \Biggr) \\& \qquad {} \times \bigl(\tilde{\eta }_{1}\mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \tilde{\eta }_{2}\mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) +\tilde{\eta }_{3} \mathbb{E} \bigl(\overline{x}_{n}^{T}Q \overline{x}_{n} \bigr) \bigr) \\& \qquad {}+C(\Delta t)^{2} \bigl(\sigma _{1}\mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \sigma _{2}\mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) +\sigma _{3} \mathbb{E} \bigl(\overline{x}_{n}^{T}Q \overline{x}_{n} \bigr) \bigr) \\& \qquad {}+2\lambda \Delta t(1+\lambda \Delta t) \bigl(\tilde{\sigma }_{1} \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \tilde{\sigma }_{2}\mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) +\tilde{\sigma }_{3}\mathbb{E} \bigl( \overline{x}_{n}^{T}Q \overline{x}_{n} \bigr) \bigr). \end{aligned}
(87)

By Eq. (70), it follows that

\begin{aligned}& \mathbb{E} \bigl(y_{n+1}^{T}Qy_{n+1} \bigr) \\& \quad \leq \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) +\Delta t \Biggl[ \biggl((1+C) \beta _{1}+\lambda \gamma _{1}+C\tilde{\eta }_{1}+2\lambda \tilde{\sigma }_{1} +\frac{\Delta t}{2} \bigl((1+C )\beta _{3}+ \lambda \gamma _{3} \\& \qquad {}+C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2} \tau \biggr) \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \biggl((1+C)\beta _{2}+ \lambda \gamma _{2}+C \tilde{\eta }_{2}+2\lambda \tilde{\sigma }_{2} \\& \qquad {}+\frac{\Delta t}{2} \bigl((1+C)\beta _{3}+\lambda \gamma _{3} +C \tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2} \tau \biggr) \mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) \\& \qquad {}+\Delta t \bigl((1+C)\beta _{3}+\lambda \gamma _{3}+C \tilde{\eta }_{3}+2 \lambda \tilde{\sigma }_{3} \bigr) \overline{k}^{2}\tau \sum_{j=1}^{m-1} \mathbb{E} \bigl(x_{n-m+j}^{T}Qx_{n-m+j} \bigr) \Biggr] \\& \qquad {}+(\Delta t)^{2} \Biggl[ \biggl(\frac{1}{2}\beta _{1}+\frac{1}{2}\eta _{1}+(1+C) \tilde{\eta }_{1} +C\sigma _{1}+2\lambda ^{2}\tilde{ \sigma }_{1}+ \frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} +\frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr)\tilde{\eta }_{3} \\& \qquad {}+C\sigma _{3}+2\lambda ^{2}\tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr)\mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \biggl( \frac{1}{2} \beta _{2}+\frac{1}{2}\eta _{2}+(1+C)\tilde{\eta }_{2} +C\sigma _{2}+2 \lambda ^{2}\tilde{\sigma }_{2} \\& \qquad {}+\frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr) \mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) \\& \qquad {}+\Delta t \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr)\overline{k}^{2} \tau \\& \qquad {}\times\sum_{j=1}^{m-1} \mathbb{E} 
\bigl(x_{n-m+j}^{T}Qx_{n-m+j} \bigr) \Biggr]. \end{aligned}
(88)

Replacing n by $${n-1}$$ in (88), we conclude

\begin{aligned}& \mathbb{E} \bigl(y_{n}^{T}Qy_{n} \bigr) \\& \quad \leq \mathbb{E} \bigl(x_{n-1}^{T}Qx_{n-1} \bigr) +\Delta t \Biggl[ \biggl((1+C) \beta _{1}+\lambda \gamma _{1}+C\tilde{\eta }_{1}+2\lambda \tilde{\sigma }_{1} +\frac{\Delta t}{2} \bigl((1+C )\beta _{3}+ \lambda \gamma _{3} \\& \qquad {}+C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2} \tau \biggr) \mathbb{E} \bigl(x_{n-1}^{T}Qx_{n-1} \bigr) + \biggl((1+C)\beta _{2}+ \lambda \gamma _{2}+C \tilde{\eta }_{2}+2\lambda \tilde{\sigma }_{2} \\& \qquad {}+\frac{\Delta t}{2} \bigl((1+C)\beta _{3}+\lambda \gamma _{3} +C \tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2} \tau \biggr) \mathbb{E} \bigl(x_{n-m-1}^{T}Qx_{n-m-1} \bigr) \\& \qquad {}+\Delta t \bigl((1+C)\beta _{3}+\lambda \gamma _{3}+C \tilde{\eta }_{3}+2 \lambda \tilde{\sigma }_{3} \bigr) \overline{k}^{2}\tau \sum_{j=1}^{m-1} \mathbb{E} \bigl(x_{n-m+j-1}^{T}Qx_{n-m+j-1} \bigr) \Biggr] \\& \qquad {}+(\Delta t)^{2} \Biggl[ \biggl(\frac{1}{2}\beta _{1}+\frac{1}{2}\eta _{1}+(1+C) \tilde{\eta }_{1} +C\sigma _{1}+2\lambda ^{2}\tilde{ \sigma }_{1}+ \frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} +\frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr)\tilde{\eta }_{3} \\& \qquad {}+C\sigma _{3}+2\lambda ^{2}\tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr)\mathbb{E} \bigl(x_{n-1}^{T}Qx_{n-1} \bigr) + \biggl( \frac{1}{2} \beta _{2}+\frac{1}{2}\eta _{2}+(1+C)\tilde{\eta }_{2} +C\sigma _{2}+2 \lambda ^{2}\tilde{\sigma }_{2} \\& \qquad {}+\frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr) \mathbb{E} \bigl(x_{n-m-1}^{T}Qx_{n-m-1} \bigr) \\& \qquad {}+\Delta t \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr)\overline{k}^{2} \tau \\& \qquad {}\times\sum_{j=1}^{m-1} 
\mathbb{E} \bigl(x_{n-m+j-1}^{T}Qx_{n-m+j-1} \bigr) \Biggr]. \end{aligned}
(89)

Now, applying (89) to the first term of the right side of (73), we can obtain

\begin{aligned}& \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr)-\mathbb{E} \bigl(x_{n-1}^{T}Qx_{n-1} \bigr) \\& \quad \leq (1-\theta ) \Biggl[ \biggl(-2\alpha _{1}+\alpha _{2}+\overline{k} \alpha _{3}\tau +\overline{k}\alpha _{3}\frac{\Delta t}{2}+(1+C) \beta _{1}+\lambda \gamma _{1} +C\tilde{\eta }_{1}+2\lambda \tilde{\sigma }_{1} \\& \qquad {}+\frac{\Delta t}{2} \bigl((1+C)\beta _{3}+\lambda \gamma _{3} +C \tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2} \tau +\Delta t \biggl( \frac{1}{2}\beta _{1}+\frac{1}{2}\eta _{1}+(1+C) \tilde{\eta }_{1} +C\sigma _{1} \\& \qquad {}+2\lambda ^{2}\tilde{\sigma }_{1}+\frac{\Delta t}{2} \biggl( \frac{1}{2}\beta _{3} +\frac{1}{2}\eta _{3}+\biggl(\frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2}\tilde{ \sigma }_{3} \biggr) \overline{k}^{2}\tau \biggr) \biggr) \mathbb{E} \bigl(y_{n}^{T}Qy_{n} \bigr) \\& \qquad {}+ \biggl(\alpha _{2}+\overline{k}\alpha _{3} \frac{\Delta t}{2} +(1+C) \beta _{2}+\lambda \gamma _{2}+C\tilde{\eta }_{2}+2\lambda \tilde{\sigma }_{2} +\frac{\Delta t}{2} \bigl((1+C)\beta _{3}+ \lambda \gamma _{3} \\& \qquad {}+C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2} \tau +\Delta t \biggl(\frac{1}{2} \beta _{2}+\frac{1}{2}\eta _{2}+(1+C) \tilde{ \eta }_{2} +C\sigma _{2}+2\lambda ^{2}\tilde{ \sigma }_{2} \\& \qquad {}+\frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr) \biggr)\mathbb{E} \bigl(y_{n-m}^{T}Qy_{n-m} \bigr) \\& \qquad {}+ \biggl(\overline{k}\alpha _{3}\Delta t +\Delta t \bigl((1+C) \beta _{3}+ \lambda \gamma _{3}+C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr) \overline{k}^{2}\tau +\Delta t \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3} \\& \qquad {}+\biggl(\frac{1}{2}+C\biggr)\tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr)\overline{k}^{2}\tau \biggr) 
\sum_{j=1}^{m-1} \mathbb{E} \bigl(y_{n-m+j}^{T}Qy_{n-m+j} \bigr) \\& \qquad {}+(1-\theta )L \Delta t \Biggl( \biggl(1+\overline{k}^{2}\tau \frac{\Delta t}{2} \biggr) \mathbb{E} \bigl(y_{n}^{T}Qy_{n} \bigr) + \biggl(1+ \overline{k}^{2}\tau \frac{\Delta t}{2} \biggr) \mathbb{E} \bigl(y_{n-m}^{T}Qy_{n-m} \bigr) \\& \qquad {}+\Delta t\overline{k}^{2}\tau \sum_{j=1}^{m-1} \mathbb{E} \bigl(y_{n-m+j}^{T}Qy_{n-m+j} \bigr) \Biggr) \Biggr]\Delta t +\theta \Biggl[ \biggl(-2\alpha _{1}+\alpha _{2} \\& \qquad {}+\overline{k}\alpha _{3}\tau +\overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr) \mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) + \biggl( \alpha _{2}+\overline{k}\alpha _{3} \frac{\Delta t}{2} \biggr) \mathbb{E} \bigl(x_{n-m}^{T}Qx_{n-m} \bigr) \\& \qquad {}+\overline{k}\alpha _{3}\Delta t \sum _{j=1}^{m-1} \mathbb{E} \bigl(x_{n-m+j}^{T}Qx_{n-m+j} \bigr) + \biggl((1+C)\beta _{1}+\lambda \gamma _{1} +C \tilde{\eta }_{1}+2 \lambda \tilde{\sigma }_{1} \\& \qquad {}+\frac{\Delta t}{2} \bigl((1+C)\beta _{3}+\lambda \gamma _{3} +C \tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2} \tau \biggr) \mathbb{E} \bigl(x_{n-1}^{T}Qx_{n-1} \bigr)+ \biggl((1+C) \beta _{2}+ \lambda \gamma _{2} \\& \qquad {}+C\tilde{\eta }_{2}+2\lambda \tilde{\sigma }_{2} + \frac{\Delta t}{2} \bigl((1+C)\beta _{3}+\lambda \gamma _{3} +C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2}\tau \biggr) \mathbb{E} \bigl(x_{n-m-1}^{T}Qx_{n-m-1} \bigr) \\& \qquad {}+\Delta t \bigl((1+C)\beta _{3}+\lambda \gamma _{3} +C\tilde{\eta }_{3}+2 \lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2}\tau \sum_{j=1}^{m-1} \mathbb{E} \bigl(x_{n-m+j-1}^{T}Qx_{n-m+j-1} \bigr) \\& \qquad {}+\Delta t \Biggl( \biggl(\frac{1}{2}\beta _{1}+ \frac{1}{2}\eta _{1}+(1+C) \tilde{\eta }_{1} +C \sigma _{1}+2\lambda ^{2}\tilde{\sigma }_{1}+ \frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr) \tilde{\eta }_{3} \\& \qquad {}+C\sigma _{3}+2\lambda ^{2}\tilde{\sigma 
}_{3} \biggr) \overline{k}^{2} \tau \biggr)\mathbb{E} \bigl(x_{n-1}^{T}Qx_{n-1} \bigr) + \biggl( \frac{1}{2} \beta _{2}+\frac{1}{2}\eta _{2}+(1+C)\tilde{\eta }_{2} +C\sigma _{2}+2 \lambda ^{2}\tilde{\sigma }_{2} \\& \qquad {}+\frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr)\mathbb{E} \bigl(x_{n-m-1}^{T}Qx_{n-m-1} \bigr) \\& \qquad {}+\Delta t \biggl(\frac{1}{2}\beta _{3}+\frac{1}{2}\eta _{3}+\biggl(\frac{1}{2}+C \biggr)\tilde{\eta }_{3} +C\sigma _{3}+2 \lambda ^{2}\tilde{\sigma }_{3} \biggr)\overline{k}^{2} \tau \\& \qquad {}\times\sum_{j=1}^{m-1} \mathbb{E} \bigl(x_{n-m+j-1}^{T}Qx_{n-m+j-1} \bigr) \Biggr) \Biggr] \Delta t, \end{aligned}
(90)

where, in the notation of Lemma 4.1,

\begin{aligned} &a=a_{0}=\theta \biggl(-2\alpha _{1}+\alpha _{2}+\overline{k}\alpha _{3} \biggl(\tau + \frac{3}{2}\Delta t \biggr) \biggr), \end{aligned}
(91)
\begin{aligned} &a_{1}=\theta \biggl(\overline{k}\alpha _{3}\Delta t+(1+C)\beta _{1}+ \lambda \gamma _{1} +C\tilde{\eta }_{1}+2\lambda \tilde{\sigma }_{1} + \frac{\Delta t}{2} \bigl((1+C)\beta _{3}+\lambda \gamma _{3} \\ &\hphantom{a_{1}={}}{}+C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr) \overline{k}^{2}\tau +\Delta t \biggl(\frac{1}{2} \beta _{1}+ \frac{1}{2}\eta _{1}+(1+C)\tilde{ \eta }_{1} +C\sigma _{1}+2\lambda ^{2} \tilde{ \sigma }_{1} \\ &\hphantom{a_{1}={}}{}+\frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2} \eta _{3}+\biggl(\frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr) \biggr), \end{aligned}
(92)
\begin{aligned} &a_{m-1}=\theta \biggl((1+C)\beta _{2}+\lambda \gamma _{2}+C \tilde{\eta }_{2}+2\lambda \tilde{\sigma }_{2} +\frac{\Delta t}{2} \bigl((1+C)\beta _{3}+ \lambda \gamma _{3} \\ &\hphantom{a_{m-1}={}}{}+C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr) \overline{k}^{2}\tau + \biggl(\frac{1}{2}\beta _{2}+\frac{1}{2}\eta _{2}+(1+C) \tilde{\eta }_{2} +C\sigma _{2}+2\lambda ^{2}\tilde{ \sigma }_{2} \\ &\hphantom{a_{m-1}={}}{}+\frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3}+\biggl(\frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2 \lambda ^{2}\tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr) \biggr), \\ &a_{m}=\theta \biggl(\alpha _{2}+\frac{1}{2} \overline{k}\alpha _{3} \Delta t + \bigl((1+C)\beta _{3}+\lambda \gamma _{3} +C\tilde{\eta }_{3}+2 \lambda \tilde{\sigma }_{3} \bigr) \overline{k}^{2}\tau \\ &\hphantom{a_{m}={}}{}+\Delta t \biggl(\frac{1}{2}\beta _{3}+ \frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr)\overline{k}^{2} \tau \biggr), \end{aligned}
(93)
\begin{aligned} &b=0, \end{aligned}
(94)
\begin{aligned} &b_{0}= (1-\theta ) \biggl(-2\alpha _{1}+\alpha _{2}+\overline{k} \alpha _{3} \biggl(\tau + \frac{1}{2}\Delta t \biggr)+(1+C)\beta _{1}+ \lambda \gamma _{1} +C\tilde{\eta }_{1}+2\lambda \tilde{\sigma }_{1} \\ &\hphantom{b_{0}={}}{}+\frac{\Delta t}{2} \bigl((1+C)\beta _{3}+\lambda \gamma _{3} +C \tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2} \tau +\Delta t \biggl( \frac{1}{2}\beta _{1}+\frac{1}{2}\eta _{1}+(1+C) \tilde{\eta }_{1} \\ &\hphantom{b_{0}={}}{}+C\sigma _{1}+2\lambda ^{2}\tilde{\sigma }_{1}+ \frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} +\frac{1}{2}\eta _{3}+\biggl( \frac{1}{2}+C\biggr)\tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr) \overline{k}^{2}\tau \biggr) \\ &\hphantom{b_{0}={}}{}+\overline{k}\alpha _{3}\Delta t+\Delta t \bigl((1+C)\beta _{3}+ \lambda \gamma _{3}+C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr) \overline{k}^{2}\tau +\Delta t \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2}\eta _{3} \\ &\hphantom{b_{0}={}}{}+\biggl(\frac{1}{2}+C\biggr)\tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr)\overline{k}^{2}\tau +\Delta t\overline{k}^{2} \tau \biggr), \end{aligned}
(95)
\begin{aligned} &b_{1}= (1-\theta ) \biggl(\overline{k}\alpha _{3} \Delta t+ \Delta t \bigl((1+C)\beta _{3}+\lambda \gamma _{3}+C\tilde{\eta }_{3}+2 \lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2}\tau +\Delta t \biggl( \frac{1}{2}\beta _{3} +\frac{1}{2}\eta _{3} \\ &\hphantom{b_{1}={}}{}+\biggl(\frac{1}{2}+C\biggr)\tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr)\overline{k}^{2}\tau +\Delta t\overline{k}^{2} \tau \biggr), \end{aligned}
(96)
\begin{aligned} &b_{m}= (1-\theta ) \biggl(\alpha _{2}+\overline{k} \alpha _{3} \frac{\Delta t}{2} +(1+C)\beta _{2}+ \lambda \gamma _{2}+C \tilde{\eta }_{2}+2\lambda \tilde{ \sigma }_{2} +\frac{\Delta t}{2} \bigl((1+C)\beta _{3} \\ &\hphantom{b_{m}={}}{}+\lambda \gamma _{3}+C\tilde{\eta }_{3}+2\lambda \tilde{\sigma }_{3} \bigr)\overline{k}^{2}\tau +\Delta t \biggl(\frac{1}{2}\beta _{2}+ \frac{1}{2}\eta _{2}+(1+C)\tilde{\eta }_{2} +C\sigma _{2}+2 \lambda ^{2} \tilde{\sigma }_{2} \\ &\hphantom{b_{m}={}}{}+\frac{\Delta t}{2} \biggl(\frac{1}{2}\beta _{3} + \frac{1}{2} \eta _{3}+\biggl(\frac{1}{2}+C\biggr) \tilde{\eta }_{3} +C\sigma _{3}+2\lambda ^{2} \tilde{\sigma }_{3} \biggr) \overline{k}^{2} \tau \biggr) \\ &\hphantom{b_{m}={}}{}+(1-\theta )L \Delta t \biggl(1+\overline{k}^{2}\tau \frac{\Delta t}{2} \biggr) \biggr). \end{aligned}
(97)

Applying Lemma 4.1 to (90), we obtain the estimate

$$\mathbb{E} \bigl(x_{n}^{T}Qx_{n} \bigr) \leq C\bigl(\xi (t)\bigr)e^{-r_{\Delta }( \theta )n\Delta t}.$$
(98)

Finally, combining this with (89), we obtain

$$\mathbb{E} \bigl(y_{n}^{T}Qy_{n} \bigr) \leq C\bigl(\xi (t)\bigr)e^{-r_{\Delta }( \theta )n\Delta t}.$$
(99)

This completes the proof of Theorem 5.1. □

## Numerical illustrations

This section presents examples that illustrate our stability results for the SSTM approximation $$\{y_{n}\}_{n\geq 0}$$ with jump. We also compare the stability behavior of the proposed scheme with that of the split-step θ-Euler scheme.

### Example 6.1

Let us consider the following nonlinear SDIDE with jump:

\begin{aligned} dy(t)={}& \biggl(-8y(t)+\sin \bigl(y(t-1)\bigr)- \int _{t-1}^{t}\sin ^{3}y(s)\,ds \biggr)\,dt \\ &{}+ \biggl(\frac{y(t)}{1+y^{2}(t)}+ \sin \bigl(y(t-1)\bigr)+ \int _{t-1}^{t}\sin ^{3}y(s)\,ds \biggr)\,dW(t) \\ &{}+ \int _{Z} \biggl(0.01\nu y(t)+0.3\sqrt{\nu }y(t-1)-0.2 \int _{t-1}^{t} \sin ^{3}y(s)\,ds \biggr)N_{\lambda }(dt,d\nu ) , \\ & t>0, \end{aligned}
(100)

with initial data $${\xi (t)=1}$$ for $${t \in [-1,0]}$$. For any $${y, \overline{y}, \hat{y} \in \mathbb{R}}$$, any $$t>0$$, and any positive number Q, we have

\begin{aligned}& y^{T}Qf(t,y,0,0)\leq -8y^{T}Qy, \end{aligned}
(101)
\begin{aligned}& \bigl\vert f(t,y,\overline{y},\hat{y})-f(t,y,0,0) \bigr\vert \leq \vert \overline{y} \vert + \vert \hat{y} \vert , \end{aligned}
(102)
\begin{aligned}& f^{T}(t,y,\overline{y},\hat{y})Qf(t,y, \overline{y},\hat{y}) \leq 80 \bigl(y^{T}Qy+\overline{y}^{T}Q \overline{y} +\hat{y}^{T}Q\hat{y} \bigr). \end{aligned}
(103)

Moreover, similar arguments yield

\begin{aligned}& g^{T} (t,y,\overline{y},\hat{y})Qg(t,y, \overline{y},\hat{y}) \\& \quad = \biggl(\frac{y}{1+y^{2}} \biggr)^{T}Q \biggl( \frac{y}{1+y^{2}} \biggr) +2 \biggl( \frac{y}{1+y^{2}} \biggr)^{T}Q\sin (\overline{y}) \\& \qquad {}+ \bigl(\sin (\overline{y}) \bigr)^{T}Q\sin (\overline{y}) + \hat{y}^{T}Q \hat{y}+2 \biggl(\frac{y}{1+y^{2}} \biggr)^{T}Q\hat{y} +2 \bigl(\sin ( \overline{y}) \bigr)^{T}Q\hat{y} \\& \quad \leq 3 \bigl(y^{T}Qy+\overline{y}^{T}Q\overline{y} + \hat{y}^{T}Q \hat{y} \bigr), \end{aligned}
(104)

where we used the facts that $${ |\frac{y}{1+y^{2}} | \leq |y|}$$ and $${|\sin (\overline{y})| \leq |\overline{y}|}$$ together with the Cauchy–Schwarz inequality. In addition, according to (24) and using the fact that $${|\cos (\overline{y})| \leq 1}$$, we have

\begin{aligned}& L^{1}g(t,y,\overline{y},\hat{y}) \\& \quad = \biggl(\frac{y}{1+y^{2}}+\sin (\overline{y})+\hat{y} \biggr) \biggl( \frac{1-y^{2}}{(1+y^{2})^{2}}+\cos (\overline{y})+1 \biggr) \\& \quad \leq 3 (y+\overline{y}+\hat{y} ), \end{aligned}
(105)

so that

$$L^{1}g^{T}(t,y, \overline{y},\hat{y})QL^{1}g(t,y,\overline{y},\hat{y}) \leq 48 \bigl(y^{T}Qy+\overline{y}^{T}Q\overline{y} + \hat{y}^{T}Q \hat{y} \bigr).$$
(106)

Also we have

\begin{aligned}& \begin{aligned}[b] L^{-1}g(t,y,\overline{y},\hat{y})={}& \frac{y+\int _{Z}h\pi (d\nu )}{1+ (y +\int _{Z}h\pi (d\nu ) )^{2}}+\sin \biggl(\overline{y}+ \int _{Z}h \pi (d\nu ) \biggr) \\ &{}+\hat{y}+ \int _{Z}h\pi (d\nu ) - \biggl(\frac{y}{1+y^{2}}+\sin ( \overline{y})+\hat{y} \biggr) \\ \leq{}& 1.09y +1.47\overline{y} +1.69\hat{y}, \end{aligned} \end{aligned}
(107)
\begin{aligned}& L^{-1}g^{T}(t,y,\overline{y}, \hat{y})QL^{-1}g(t,y,\overline{y}, \hat{y}) \leq 4.6325y^{T}Qy+6.2482 \overline{y}^{T}Q\overline{y} +7.1832 \hat{y}^{T}Q\hat{y}. \end{aligned}
(108)

Let $${N_{\lambda }(dt,d\nu )}$$ be a Poisson random measure given by $${\pi (d\nu )\,dt=\lambda \tilde{h}(\nu )\, d\nu \,dt}$$, with $${\lambda =2,}$$ and let

$$\tilde{h}(\nu )=\frac{1}{\sqrt{2\pi \nu }}e^{- \frac{(\ln \nu )^{2}}{2}},\quad 0\leq \nu < \infty ,$$
(109)

be the density function of a log-normal random variable. Moreover, by the properties of the log-normal density $${\tilde{h}(\nu )}$$,

\begin{aligned} \int _{Z} h(t,y,\overline{y},\hat{y}, \nu )\pi (d\nu )&= \int _{0}^{ \infty } (0.01\nu y+0.3\sqrt{\nu } \overline{y} -0.2\hat{y} ) \frac{1}{\sqrt{2\pi \nu }}e^{-\frac{(\ln \nu )^{2}}{2}}\, d\nu \\ & =0.03 y+0.49 \overline{y}-0.23 \hat{y} \end{aligned}
(110)

and

\begin{aligned} &\int _{Z} h^{T}(t,y,\overline{y},\hat{y}, \nu )Qh(t,y,\overline{y}, \hat{y}, \nu )\pi (d\nu ) \\ &\quad = \int _{0}^{\infty } (0.01\nu y+0.3\sqrt{\nu } \overline{y} -0.2 \hat{y} )^{T}Q (0.01\nu y+0.3\sqrt{\nu }\overline{y} -0.2\hat{y} ) \frac{1}{\sqrt{2\pi \nu }}e^{-\frac{(\ln \nu )^{2}}{2}}\, d\nu \\ &\quad \leq 0.0614 y^{T}Qy+0.7966 \overline{y}^{T}Q \overline{y} +0.3008 \hat{y}^{T}Q\hat{y}. \end{aligned}
(111)
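The jump integrals above all reduce to moments of the log-normal density (109): for $$\nu \sim \operatorname{LogNormal}(0,1)$$ one has $$\int _{0}^{\infty }\nu ^{s}\tilde{h}(\nu )\,d\nu =e^{s^{2}/2}$$. A minimal stdlib-only sketch (not part of the paper) that checks this identity numerically via the substitution $$u=\ln \nu $$:

```python
import math

def lognormal_moment(s, n=100_000, lo=-10.0, hi=10.0):
    """Numerically compute E[nu**s] for nu ~ LogNormal(0, 1).

    Substituting u = ln(nu) turns the moment integral into
    int exp(s*u - u**2/2) / sqrt(2*pi) du, evaluated by the midpoint rule.
    """
    du = (hi - lo) / n
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * du
        total += math.exp(s * u - u * u / 2.0) * du
    return total / math.sqrt(2.0 * math.pi)

# Closed form E[nu**s] = exp(s**2 / 2):
# s = 1/2 gives about 1.1331 and s = 1 gives about 1.6487.
for s in (0.5, 1.0):
    print(s, lognormal_moment(s), math.exp(s * s / 2.0))
```

Recall that the measure in (110)–(111) is $${\pi (d\nu )=\lambda \tilde{h}(\nu )\,d\nu }$$, so each moment is additionally scaled by λ.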

Also based on Eq. (24), we have

\begin{aligned}& \begin{aligned}[b] L^{1} \biggl( \int _{Z} h(t,y,\overline{y},\hat{y}, \nu )\pi (d\nu ) \biggr)&= 0.29 \biggl(\frac{y}{1+y^{2}}+\sin (\overline{y})+\hat{y} \biggr) \\ &\leq 0.29 (y+\overline{y}+\hat{y} ), \end{aligned} \end{aligned}
(112)
\begin{aligned}& L^{1} \biggl( \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )\pi (d \nu ) \biggr)Q L^{1} \biggl( \int _{Z}h(t, y, \overline{y}, \hat{y}, \nu ) \pi (d\nu ) \biggr) \\& \quad \leq 0.87 \bigl(y^{T}Qy+\overline{y}^{T}Q\overline{y} +\hat{y}^{T}Q \hat{y} \bigr), \end{aligned}
(113)
\begin{aligned}& L^{-1} \biggl( \int _{Z} h(t,y,\overline{y},\hat{y}, \nu )\pi (d\nu ) \biggr) \leq 0.0097y+0.151\overline{y}-0.0653\hat{y}, \end{aligned}
(114)
\begin{aligned}& L^{-1} \biggl( \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )\pi (d \nu ) \biggr)Q L^{-1} \biggl( \int _{Z}h(t, y, \overline{y}, \hat{y}, \nu ) \pi (d\nu ) \biggr) \\& \quad \leq 0.002y^{T}Qy+0.034\overline{y}^{T}Q\overline{y} +0.0148 \hat{y}^{T}Q\hat{y}. \end{aligned}
(115)

Thus, Assumption 2.2 holds with $${Q=1}$$, $${\alpha _{1}=8}$$, $${\alpha _{2}=\alpha _{3}=1}$$, $${\overline{k}=1}$$, $${\beta _{1}=\beta _{2}=\beta _{3}=3}$$, $${\eta _{1}=\eta _{2}=\eta _{3}=48}$$, $${\tilde{\eta }_{1}=4.6325}$$, $${\tilde{\eta }_{2}=6.2482}$$, $${\tilde{\eta }_{3}=7.1832}$$, $${\sigma _{1}=\sigma _{2}=\sigma _{3}=0.87}$$, $${\tilde{\sigma }_{1}=0.002}$$, $${\tilde{\sigma }_{2}=0.034}$$, $${\tilde{\sigma }_{3}=0.0148}$$, $${L=80}$$, $${\gamma _{1}=0.0614}$$, $${\gamma _{2}=0.7966}$$ and $${\gamma _{3}=0.3008}$$. Obviously these parameters satisfy inequality (23):

$$8=\alpha _{1}>\alpha _{2}+\alpha _{3}\overline{k}\tau +\frac{1}{2} \bigl(\beta _{1}+\beta _{2}+\beta _{3} \overline{k}^{2}\tau ^{2} \bigr) + \frac{1}{2} \lambda \bigl(\gamma _{1}+\gamma _{2} +\gamma _{3} \overline{k}^{2}\tau ^{2} \bigr)=7.6588.$$
(116)
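As a quick arithmetic check (a sketch, not part of the paper), the right-hand side of (23) can be reproduced from the parameter values listed above:

```python
# Parameters of Example 6.1: tau = k_bar = 1, lambda = 2.
alpha2, alpha3, k_bar, tau, lam = 1.0, 1.0, 1.0, 1.0, 2.0
beta1 = beta2 = beta3 = 3.0
gamma1, gamma2, gamma3 = 0.0614, 0.7966, 0.3008

# Right-hand side of inequality (23).
rhs = (alpha2 + alpha3 * k_bar * tau
       + 0.5 * (beta1 + beta2 + beta3 * k_bar**2 * tau**2)
       + 0.5 * lam * (gamma1 + gamma2 + gamma3 * k_bar**2 * tau**2))
print(rhs)  # approximately 7.6588, strictly less than alpha1 = 8
```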

Substituting the proposed values in (65) yields

$$\bigl(160(1-\theta )+162.0639 \bigr)\Delta t^{\ast 2}+ \bigl(160(1-\theta )+162.0639 \bigr)\Delta t^{\ast }-3.002=0.$$
(117)
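The admissible stepsize threshold is the positive root of this quadratic. Since (117) has the form $$a\,\Delta t^{\ast 2}+a\,\Delta t^{\ast }-c=0$$ with $$a=160(1-\theta )+162.0639$$ and $$c=3.002$$, the positive root is $$\Delta t^{\ast }=\frac{1}{2} (\sqrt{1+4c/a}-1 )$$. A short sketch (not part of the paper) evaluating it for both values of θ:

```python
import math

def dt_star(a, c):
    """Positive root of a*x**2 + a*x - c = 0 with a, c > 0."""
    # Dividing by a gives x**2 + x - c/a = 0.
    return 0.5 * (math.sqrt(1.0 + 4.0 * c / a) - 1.0)

for theta in (0.1, 0.8):
    a = 160.0 * (1.0 - theta) + 162.0639  # coefficient in (117)
    print(theta, dt_star(a, 3.002))
```

For $$\theta =0.1$$ the root is about 0.0097, which lies between $$2^{-7}\approx 0.0078$$ and $$2^{-6}\approx 0.0156$$, consistent with the stability window reported below.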

To empirically check these theoretical findings, the SSTM approximation of the SDIDE with jump (100) is performed using both $${\theta =0.1}$$ and $${\theta = 0.8}$$.

Therefore, according to Theorem 5.1, if $$\theta =0.1$$, the SSTM scheme is stable for time-steps $${\Delta t\leq 2^{-7}}$$. However, the mean-square stability is lost if $${\Delta t\geq 2^{-6}}$$. If $$\theta =0.8$$, the SSTM scheme is stable for time-steps $${\Delta t\leq 2^{-6}}$$, but it is not if $${\Delta t\geq 2^{-5}}$$.

The results obtained are reported in Fig. 1. As we see in Fig. 2, if $$\theta =0.1$$, the split-step θ-Euler scheme is stable for time-steps $${\Delta t\leq 2^{-9}}$$. However, the mean-square stability is lost if $${\Delta t\geq 2^{-8}}$$. If $$\theta =0.8$$, the split-step θ-Euler scheme is stable for time-steps $${\Delta t\leq 2^{-8}}$$, but it is not if $${\Delta t\geq 2^{-7}}$$.

### Example 6.2

Let us consider the following two-dimensional SDIDE with jump:

\begin{aligned} dy(t)={}&Vy(t)\,dt+g \biggl(t,y(t),y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds \biggr) \,dW(t) \\ &{}+ \int _{Z}h \biggl(t,y(t),y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds, \nu \biggr)N_{\lambda }(dt, d\nu ), \quad t>0, \end{aligned}
(118)

with initial data

$$\xi (t)= \bigl(\xi _{1}(t),\xi _{2}(t) \bigr)^{T}= (1, 1 )^{T} \in C \bigl([-1,0]; \mathbb{R}^{2} \bigr),$$
(119)

where V is the following matrix:

$$V= \begin{pmatrix} -3 & 1 \\ -1 & -2 \end{pmatrix}.$$
(120)

Let $${y(t)= (y_{1}(t), y_{2}(t) )^{T}}$$, $${\overline{y}(t)= (y_{1}(t-1), y_{2}(t-1) )^{T}}$$, $${\hat{y}(t)= (\hat{y}_{1}(t), \hat{y}_{2}(t) )^{T}}$$ and

\begin{aligned} &f(t,y,\overline{y},\hat{y})=Vy(t)= \bigl(-3y_{1}(t)+y_{2}(t),-y_{1}(t)-2y_{2}(t) \bigr)^{T}, \\ &g(t,y,\overline{y},\hat{y})= \biggl(\frac{1}{2}\ln \bigl(1+ \overline{y}_{1}^{2}(t)\bigr) + \int _{t-1}^{t}\sin \hat{y}_{1}(s)\cos \hat{y}_{1}(s)\,ds, \\ &\hphantom{g(t,y,\overline{y},\hat{y})={}} \frac{1}{2}\sin ^{2}\overline{y}_{2}(t) - \int _{t-1}^{t} \sin \hat{y}_{1}(s) \cos \hat{y}_{1}(s)\,ds \biggr)^{T}, \\ &h(t,y,\overline{y},\hat{y}, \nu )= \biggl(0.01\nu y_{1}(t) -0.4 \int _{t-1}^{t} \sin \hat{y}_{1}(s) \cos \hat{y}_{1}(s)\,ds, \\ &\hphantom{h(t,y,\overline{y},\hat{y}, \nu )={}} 0.3\sqrt{\nu }\overline{y}_{2}(t) +0.1 \int _{t-1}^{t} \sin \hat{y}_{1}(s) \cos \hat{y}_{1}(s)\,ds \biggr)^{T}. \end{aligned}

We are going to show that Assumption 2.2 holds if we choose, for example, the following symmetric positive definite matrix Q:

$$Q= \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.$$
(121)

In fact, for any $${y, \overline{y}, \hat{y} \in \mathbb{R}^{2}}$$ and $$t>0$$, we can obtain

\begin{aligned}& \begin{aligned}[b] y^{T}Qf(t,y,0,0)&= \begin{pmatrix} y_{1} & y_{2} \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} -3 & 1 \\ -1 & -2 \end{pmatrix} \begin{pmatrix} y_{1} \\ y_{2} \end{pmatrix} \\ &= \begin{pmatrix} -7y_{1}-5y_{2} & -3y_{2} \end{pmatrix} \begin{pmatrix} y_{1} \\ y_{2} \end{pmatrix} \\ &\leq -\frac{19}{2}y_{1}^{2}- \frac{11}{2}y_{2}^{2} \leq -\frac{11}{2} \vert y \vert ^{2}, \end{aligned} \end{aligned}
(122)
\begin{aligned}& \begin{aligned}[b] f^{T}(t,y,\overline{y},\hat{y})Qf(t,y, \overline{y},\hat{y})&= \begin{pmatrix} -5y_{1}-4y_{2} & -y_{1}-6y_{2} \end{pmatrix} \begin{pmatrix} -3y_{1}+y_{2} \\ -y_{1}-2y_{2} \end{pmatrix} \\ &\leq \frac{47}{2}y_{1}^{2}+ \frac{31}{2}y_{2}^{2} \leq \frac{47}{2} \vert y \vert ^{2}. \end{aligned} \end{aligned}
(123)

Moreover, similar arguments yield

\begin{aligned} &g^{T}(t,y,\overline{y},\hat{y})Qg(t,y, \overline{y},\hat{y}) \\ &\quad = \begin{pmatrix} \frac{1}{2}\ln (1+\overline{y}_{1}^{2})+\frac{1}{2}\hat{y}_{1} & \frac{1}{2}\sin ^{2}\overline{y}_{2}-\frac{1}{2}\hat{y}_{2} \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} \frac{1}{2}\ln (1+\overline{y}_{1}^{2})+\frac{1}{2}\hat{y}_{1} \\ \frac{1}{2}\sin ^{2}\overline{y}_{2}-\frac{1}{2}\hat{y}_{2} \end{pmatrix} \\ &\quad =\ln ^{2}\bigl(1+\overline{y}_{1}^{2} \bigr)+\hat{y}_{1}^{2} +\frac{1}{2}\sin ^{4}( \overline{y}_{2})-\frac{1}{2} \hat{y}_{2}^{2} \leq 2 \vert \hat{y} \vert ^{2}, \end{aligned}
(124)

where we used the fact that $${\ln ^{2}(1+\overline{y}^{2}) \leq |\overline{y}|^{2}}$$, $${|\sin (\overline{y})|^{4} \leq |\overline{y}|^{2}}$$ and the Cauchy–Schwarz inequality. In addition, according to (24), we have

\begin{aligned} L^{1}g^{T}(t,y,\overline{y},\hat{y}) &=\frac{1}{2} \begin{pmatrix} \frac{\overline{y}_{1}}{1+\overline{y}_{1}^{2}}\ln (1+\overline{y}_{1}^{2})+ \frac{1}{2}\hat{y}_{1} & \sin ^{3}\overline{y}_{2}\cos \overline{y}_{2}- \frac{1}{2}\hat{y}_{2} \end{pmatrix} \\ &\leq \frac{1}{4} \begin{pmatrix} \overline{y}_{1}+\hat{y}_{1} & 2\overline{y}_{2}-\hat{y}_{2} \end{pmatrix}, \end{aligned}
(125)

so that

$$L^{1}g^{T}(t,y, \overline{y},\hat{y})QL^{1}g(t,y,\overline{y},\hat{y}) \leq \frac{7}{16} \vert \overline{y} \vert ^{2}+ \frac{6}{16} \vert \hat{y} \vert ^{2}.$$
(126)

Also we have

\begin{aligned} &L^{-1}g(t,y,\overline{y},\hat{y}) \\ &\quad = \begin{pmatrix} \frac{1}{2}\ln (1+(\overline{y}_{1}+ {\int _{Z}}h \pi (d\nu ))^{2} ) +\int _{t-1}^{t}\sin (\hat{y}_{1}+\int _{Z}h \pi (d\nu ) ) \cos (\hat{y}_{1}+\int _{Z}h\pi (d\nu ) )\,ds \\ \frac{1}{2}\sin ^{2} (\overline{y}_{2}+ {\int _{Z}}h \pi (d\nu ) ) -\int _{t-1}^{t}\sin (\hat{y}_{1}+\int _{Z}h\pi (d \nu ) ) \cos (\hat{y}_{1}+\int _{Z}h\pi (d\nu ) )\,ds \end{pmatrix} \\ &\qquad {}- \begin{pmatrix} \frac{1}{2}\ln (1+\overline{y}_{1}^{2} ) + {\int _{t-1}^{t}} \sin \hat{y}_{1}(s)\cos \hat{y}_{1}(s)\,ds \\ \frac{1}{2}\sin ^{2}\overline{y}_{2}- {\int _{t-1}^{t}} \sin \hat{y}_{1}(s)\cos \hat{y}_{1}(s)\,ds \end{pmatrix} \\ &\quad \leq \begin{pmatrix} 0.462y_{1}+0.2266\hat{y}_{1} &{}-0.2473\overline{y}_{2}-0.0566\hat{y}_{1} \end{pmatrix}^{T}, \end{aligned}
(127)
\begin{aligned} &L^{-1}g^{T}(t,y,\overline{y}, \hat{y})QL^{-1}g(t,y,\overline{y}, \hat{y}) \leq 0.4957 \vert y \vert ^{2}+0.0199 \vert \overline{y} \vert ^{2}+0.2385 \vert \hat{y} \vert ^{2}. \end{aligned}
(128)

A simple calculation shows that

\begin{aligned} & \int _{Z}h^{T}(t,y,\overline{y},\hat{y}, \nu ) \pi (d\nu ) \\ &\quad \leq \begin{pmatrix} {\int _{Z}} (0.01\nu y_{1}-0.4\hat{y}_{1} )\pi (d \nu ) & {\int _{Z}} (0.3\sqrt{\nu }\overline{y}_{2}+0.1 \hat{y}_{2} )\pi (d\nu ) \end{pmatrix} \\ &\quad = \begin{pmatrix} 0.0308y_{1}-0.4532\hat{y}_{1} & 0.4946\overline{y}_{2}+0.1133\hat{y}_{1} \end{pmatrix} \end{aligned}
(129)

and

\begin{aligned} &h^{T}(t,y,\overline{y},\hat{y}, \nu )Q h(t,y, \overline{y},\hat{y}, \nu ) \\ &\quad = \begin{pmatrix} 0.01\nu y_{1}-0.4\hat{y}_{1} & 0.3\sqrt{\nu }\overline{y}_{2}+0.1 \hat{y}_{2} \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} 0.01\nu y_{1}-0.4\hat{y}_{1} \\ 0.3\sqrt{\nu }\overline{y}_{2}+0.1\hat{y}_{2} \end{pmatrix} \\ &\quad =0.0001\nu ^{2} \vert y_{1} \vert ^{2}+0.004\nu \vert y_{1} \vert \vert \hat{y}_{1} \vert +0.04 \vert \hat{y}_{1} \vert ^{2}+0.09\nu \vert \overline{y}_{2} \vert ^{2} \\ &\qquad {}-0.03\sqrt{\nu } \vert \overline{y}_{2} \vert \vert \hat{y}_{2} \vert +0.0025 \vert \hat{y}_{2} \vert ^{2}, \end{aligned}
(130)

therefore, by using the properties of the log-normal density $${\tilde{h}(\nu )}$$ defined in Eq. (109), we can obtain

\begin{aligned} & \int _{Z}h^{T}(t,y,\overline{y},\hat{y}, \nu )Q h(t,y,\overline{y}, \hat{y}, \nu )\pi (d\nu ) \\ &\quad = \int _{0}^{\infty } \bigl(0.0001\nu ^{2} \vert y_{1} \vert ^{2}+0.004\nu \vert y_{1} \vert \vert \hat{y}_{1} \vert +0.04 \vert \hat{y}_{1} \vert ^{2}+0.09\nu \vert \overline{y}_{2} \vert ^{2} \\ &\qquad {}-0.03\sqrt{\nu } \vert \overline{y}_{2} \vert \vert \hat{y}_{2} \vert +0.0025 \vert \hat{y}_{2} \vert ^{2} \bigr) \frac{1}{\sqrt{2\pi \nu }}e^{-\frac{(\ln \nu )^{2}}{2}}\, d\nu \\ &\quad \leq 0.0085 \vert y \vert ^{2}+0.3019 \vert \overline{y} \vert ^{2} +0.0515 \vert \hat{y} \vert ^{2}. \end{aligned}
(131)

Also based on Eq. (24), we have

\begin{aligned} &L^{1} \biggl( \int _{Z} h^{T}(t,y,\overline{y},\hat{y}, \nu ) \pi (d\nu ) \biggr) \\ &\quad \leq \begin{pmatrix} 0.2112\overline{y}_{1}-0.4224\hat{y}_{1} & -0.2112\overline{y}_{2}+0.4224 \hat{y}_{1} \end{pmatrix}, \end{aligned}
(132)
\begin{aligned} &L^{1} \biggl( \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )\pi (d \nu ) \biggr)Q L^{1} \biggl( \int _{Z}h(t, y, \overline{y}, \hat{y}, \nu ) \pi (d\nu ) \biggr) \\ &\quad \leq 0.223 \vert \overline{y} \vert ^{2}+0.357 \vert \hat{y} \vert ^{2}, \end{aligned}
(133)
\begin{aligned} &L^{-1} \biggl( \int _{Z} h^{T}(t,y,\overline{y},\hat{y}, \nu ) \pi (d\nu ) \biggr) \\ &\quad \leq \begin{pmatrix} 0.0178y_{1}-0.2617\hat{y}_{1} & 0.4946\overline{y}_{2}+0.0035y_{1}-0.162 \hat{y}_{1} \end{pmatrix} \\ &\qquad {}- \begin{pmatrix} 0.3080y_{1}-0.4532\hat{y}_{1} & 0.4946\overline{y}_{2}+0.1133\hat{y}_{1} \end{pmatrix} \\ &\quad = \begin{pmatrix} -0.2902y_{1}+0.1915\hat{y}_{1} & 0.0035y_{1}-0.2753\hat{y}_{1} \end{pmatrix}, \end{aligned}
(134)
\begin{aligned} &L^{-1} \biggl( \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )\pi (d \nu ) \biggr)Q L^{-1} \biggl( \int _{Z}h(t, y, \overline{y}, \hat{y}, \nu ) \pi (d\nu ) \biggr) \\ &\quad \leq 0.134 \vert y \vert ^{2}+0.087 \vert \hat{y} \vert ^{2}. \end{aligned}
(135)

Thus, Assumption 2.2 holds with Q as in (121), $${\lambda =2}$$, $${\alpha _{1}=5.5}$$, $${\alpha _{2}=\alpha _{3}=0}$$, $${\overline{k}=1}$$, $$\beta _{1}=\beta _{2}=0$$, $${\beta _{3}=2}$$, $${L=23.5}$$, $${\eta _{1}=0}$$, $${\eta _{2}=0.4375}$$, $$\eta _{3}=0.375$$, $${\tilde{\eta }_{1}=0.4957}$$, $${\tilde{\eta }_{2}=0.0199}$$, $${\tilde{\eta }_{3}=0.2385}$$, $${\sigma _{1}=0}$$, $${\sigma _{2}=0.223}$$, $${\sigma _{3}=0.357}$$, $${\tilde{\sigma }_{1}=0.134}$$, $${\tilde{\sigma }_{2}=0}$$, $${\tilde{\sigma }_{3}=0.087}$$, $${\gamma _{1}=0.0085}$$, $$\gamma _{2}=0.3019$$ and $${\gamma _{3}=0.0515}$$. Obviously these parameters satisfy inequality (23):

$$5.5=\alpha _{1}>\alpha _{2}+\alpha _{3}\overline{k}\tau +\frac{1}{2} \bigl(\beta _{1}+\beta _{2}+\beta _{3} \overline{k}^{2}\tau ^{2} \bigr) + \frac{1}{2} \lambda \bigl(\gamma _{1}+\gamma _{2} +\gamma _{3} \overline{k}^{2}\tau ^{2} \bigr)=1.3619.$$
(136)

Substituting the proposed values in (65) yields

$$\bigl(47(1-\theta )+1.567 \bigr)\Delta t^{\ast 2}+ \bigl(47(1-\theta )+1.567 \bigr)\Delta t^{\ast }-4.9388=0.$$
(137)
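The same checks as in Example 6.1 apply here. A brief sketch (not part of the paper, parameter values as listed above) reproduces the right-hand side of (136) and the positive root of (137):

```python
import math

# Example 6.2: tau = k_bar = 1, lambda = 2, alpha2 = alpha3 = 0,
# beta1 = beta2 = 0, so only beta3 and the gammas contribute in (23).
beta3 = 2.0
gamma1, gamma2, gamma3 = 0.0085, 0.3019, 0.0515

rhs = 0.5 * beta3 + 0.5 * 2.0 * (gamma1 + gamma2 + gamma3)
print(rhs)  # approximately 1.3619, strictly less than alpha1 = 5.5

def dt_star(a, c):
    """Positive root of a*x**2 + a*x - c = 0 with a, c > 0."""
    return 0.5 * (math.sqrt(1.0 + 4.0 * c / a) - 1.0)

for theta in (0.1, 0.8):
    a = 47.0 * (1.0 - theta) + 1.567  # coefficient in (137)
    print(theta, dt_star(a, 4.9388))
```

For $$\theta =0.1$$ the root is about 0.102, which lies between $$2^{-4}$$ and $$2^{-3}$$, and for $$\theta =0.8$$ it is about 0.337, above $$2^{-2}$$, matching the stability windows reported below.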

To empirically check these theoretical findings, the SSTM approximation of the SDIDE with jump (118) is performed using both $${\theta =0.1}$$ and $${\theta = 0.8}$$. Therefore, according to Theorem 5.1, if $$\theta =0.1$$, the SSTM scheme is stable for time-steps $${\Delta t\leq 2^{-4}}$$. However, the mean-square stability is lost if $${\Delta t\geq 2^{-3}}$$. If $$\theta =0.8$$, the SSTM scheme is stable for all time-steps $${\Delta t\leq 2^{-2}}$$. The results obtained are reported in Fig. 3.

As we see in Fig. 4, if $$\theta =0.1$$, the split-step θ-Euler scheme is stable for time-steps $${\Delta t\leq 2^{-5}}$$. However, the mean-square stability is lost if $${\Delta t\geq 2^{-4}}$$. If $$\theta =0.8$$, the split-step θ-Euler scheme is stable for time-steps $${\Delta t\leq 2^{-3}}$$, but it is not if $${\Delta t\geq 2^{-2}}$$.

### Example 6.3

Let us consider the following three-dimensional SDIDE with jump:

\begin{aligned} dy(t)={}&Uy(t)\,dt+g \biggl(t,y(t),y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds \biggr) \,dW(t) \\ &{}+ \int _{Z}h \biggl(t,y(t),y(t-\tau ), \int _{t-\tau }^{t}K\bigl(s,y(s)\bigr)\,ds, \nu \biggr)N_{\lambda }(dt, d\nu ),\quad t>0, \end{aligned}
(138)

with the initial data

$$\xi (t)= \bigl(\xi _{1}(t),\xi _{2}(t),\xi _{3}(t) \bigr)^{T}= (1, 1, 1 )^{T} \in C \bigl([-1,0]; \mathbb{R}^{3} \bigr),$$
(139)

where U is the following matrix:

$$U= \begin{pmatrix} -7 & 1 & -3 \\ -3 & -2 & -1 \\ -2 & -1 & -2 \end{pmatrix}.$$
(140)

Let $${y(t)= (y_{1}(t), y_{2}(t), y_{3}(t) )^{T}}$$, $${\overline{y}(t)= (y_{1}(t-1), y_{2}(t-1), y_{3}(t-1) )^{T}}$$, $${\hat{y}(t)= (\hat{y}_{1}(t), \hat{y}_{2}(t), \hat{y}_{3}(t) )^{T}}$$ and

\begin{aligned} &f(t,y,\overline{y},\hat{y})=Uy(t) \\ &\hphantom{f(t,y,\overline{y},\hat{y})}= \bigl(-7y_{1}(t)+y_{2}(t)-3y_{3}(t), -3y_{1}(t)-2y_{2}(t)-y_{3}(t),-2y_{1}(t)-y_{2}(t)-2y_{3}(t) \bigr)^{T}, \\ &g(t,y,\overline{y},\hat{y})= \biggl(\frac{1}{2}\sin ^{2} \overline{y}_{1}(t)- \frac{1}{2}\hat{y}_{1}(t), \frac{1}{2}\ln \bigl(1+y_{2}^{2}(t)\bigr), \\ &\hphantom{g(t,y,\overline{y},\hat{y})={}}{} \ln \bigl(1+\overline{y}_{3}^{2}(t)\bigr) - \int _{t-1}^{t} \sin ^{3} \bigl( \overline{y}_{3}(s)\bigr)\,ds \biggr)^{T}, \\ &h(t,y,\overline{y},\hat{y}, \nu )= \biggl(\frac{1}{\sqrt{\nu }} \overline{y}_{1}(t) +0.1 \int _{t-1}^{t} \frac{\hat{y}_{1}(s)}{1+\hat{y}_{1}^{2}(s)}\,ds, 0.01\nu y_{2}(t)-0.4 \int _{t-1}^{t}\frac{\hat{y}_{2}(s)}{1+\hat{y}_{2}^{2}(s)}\,ds, \\ &\hphantom{h(t,y,\overline{y},\hat{y}, \nu )={}} 0.2\sqrt{\nu }\overline{y}_{3}(t) +0.5 \int _{t-1}^{t} \frac{\hat{y}_{3}(s)}{1+\hat{y}_{3}^{2}(s)}\,ds \biggr)^{T}. \end{aligned}

We are going to show that Assumption 2.2 holds if we choose, for example, the following symmetric positive definite matrix Q:

$$Q= \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}.$$
(141)

In fact, for any $${y, \overline{y}, \hat{y} \in \mathbb{R}^{3}}$$ and $$t>0$$, we can obtain

\begin{aligned}& \begin{aligned}[b] y^{T}Qf(t,y,0,0)&= \begin{pmatrix} y_{1} & y_{2} & y_{3} \end{pmatrix} \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} -7 & 1 & -3 \\ -3 & -2 & -1 \\ -2 & -1 & -2 \end{pmatrix} \begin{pmatrix} y_{1} \\ y_{2} \\ y_{3} \end{pmatrix} \\ &= \begin{pmatrix} 2y_{1}+2y_{2} & y_{1}+2y_{2}+y_{3} & y_{2}+2y_{3} \end{pmatrix} \begin{pmatrix} -7y_{1}+y_{2}-3y_{3} \\ -3y_{1}-2y_{2}-y_{3} \\ -2y_{1}-y_{2}-2y_{3} \end{pmatrix} \\ &\leq -31y_{1}^{2}-17y_{2}^{2}- \frac{35}{2}y_{3}^{2} \leq -17 \vert y \vert ^{2}, \end{aligned} \end{aligned}
(142)
\begin{aligned}& f^{T}(t,y,\overline{y},\hat{y})Qf(t,y, \overline{y},\hat{y}) \\& \quad = \begin{pmatrix} -13y_{1}-8y_{2}-5y_{3} & -8y_{1}-8y_{2}-6y_{3} & -5y_{1}-4y_{2}-5y_{3} \end{pmatrix} \\& \qquad {}\times \begin{pmatrix} -7y_{1}+y_{2}-3y_{3} \\ -3y_{1}-2y_{2}-y_{3} \\ -2y_{1}-y_{2}-2y_{3} \end{pmatrix} \leq \frac{151}{2}y_{1}^{2}+21y_{2}^{2}+ \frac{79}{2}y_{3}^{2} \\& \quad \leq \frac{151}{2} \vert y \vert ^{2}. \end{aligned}
(143)

Moreover, similar arguments yield

\begin{aligned} &g^{T}(t,y,\overline{y},\hat{y})Qg(t,y, \overline{y},\hat{y}) \\ &\quad = \begin{pmatrix} \frac{1}{2}\sin ^{2}\overline{y}_{1}(t)-\frac{1}{2}\hat{y}_{1}(t) & \frac{1}{2}\ln (1+y_{2}^{2}(t)) & \ln (1+\overline{y}_{3}^{2}(t))- {\int _{t-1}^{t}}\sin ^{3} (\overline{y}_{3}(s))\,ds \end{pmatrix} \end{aligned}
(144)
\begin{aligned} &\qquad {} \times \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} \frac{1}{2}\sin ^{2} (\overline{y}_{1}(t) )-\frac{1}{2} \hat{y}_{1}(t) \\ \frac{1}{2}\ln (1+y_{2}^{2}(t) ) \\ \ln (1+\overline{y}_{3}^{2}(t) )- {\int _{t-1}^{t}} \sin ^{3} (\overline{y}_{3}(s))\,ds \end{pmatrix} \\ &\quad =\frac{1}{4}\sin ^{4}(\overline{y}_{1})- \frac{3}{4}\hat{y}_{1}^{2} + \frac{1}{2}\ln ^{2}\bigl(1+y_{2}^{2} \bigr) +\frac{3}{2}\ln ^{2}\bigl(1+ \overline{y}_{3}^{2} \bigr) +\frac{1}{2} \biggl( \int _{t-1}^{t}\sin ^{3}\bigl( \overline{y}_{3}(s)\bigr)\,ds \biggr)^{2} \\ &\quad \leq \frac{1}{2} \vert y \vert ^{2}- \frac{3}{4} \vert \hat{y} \vert ^{2}+2 \vert \overline{y} \vert ^{2}, \end{aligned}
(145)

where we used the fact that $${\ln ^{2}(1+y^{2}) \leq |y|^{2}}$$, $${|\sin (\overline{y})|^{4} \leq |\overline{y}|^{2}}$$ and the Cauchy–Schwarz inequality. In addition, according to (24), we have

\begin{aligned} &L^{1}g(t,y,\overline{y},\hat{y}) \\ &\quad = \begin{pmatrix} \frac{1}{2}\sin ^{3}(\overline{y}_{1})\cos (\overline{y}_{1})- \frac{1}{4}\sin ^{2}(\overline{y}_{1}) -\frac{1}{2}\sin ( \overline{y}_{1})\cos (\overline{y}_{1})\hat{y}_{1}+\frac{1}{4} \hat{y}_{1} \\ \frac{y_{2}}{2 (1+y_{2}^{2} )}\ln (1+y_{2}^{2}) \\ \frac{2\overline{y}_{3}}{1+\overline{y}_{3}^{2}}\ln (1+\overline{y}_{3}^{2}) -\ln (1+\overline{y}_{3}^{2})- \frac{2\overline{y}_{3}}{1+\overline{y}_{3}^{2}} {\int _{t-1}^{t}} \sin ^{3}(\overline{y}_{3}(s))\,ds+ {\int _{t-1}^{t}}\sin ^{3}( \overline{y}_{3}(s))\,ds \end{pmatrix} \\ &\quad \leq \begin{pmatrix} \frac{1}{4}\overline{y}_{1}+\frac{1}{4}\hat{y}_{1} \\ \frac{1}{2}y_{2} \\ -\overline{y}_{3} \end{pmatrix}, \end{aligned}
(146)

so that

$$L^{1}g^{T}(t,y, \overline{y},\hat{y})QL^{1}g(t,y,\overline{y},\hat{y}) \leq \frac{3}{2} \vert \overline{y} \vert ^{2}- \frac{1}{8} \vert \hat{y} \vert ^{2}.$$
(147)

Also we have

\begin{aligned}& L^{-1}g(t,y,\overline{y},\hat{y}) \\& \quad = \begin{pmatrix} \frac{1}{2}\sin ^{2} (\overline{y}_{1}+ {\int _{Z}}h \pi (d\nu ) ) -\frac{1}{2} (\hat{y}_{1}+\int _{Z}h\pi (d\nu ) ) \\ \frac{1}{2}\ln (1+(y_{2}+ {\int _{Z}}h\pi (d\nu ))^{2} ) \\ \ln (1+(\overline{y}_{3}+ {\int _{Z}}h\pi (d\nu ))^{2} ) - {\int _{t-1}^{t}}\sin ^{3} (\overline{y}_{3}(s)+ {\int _{Z}}h\pi (d\nu ) )\,ds \end{pmatrix} \\& \qquad {}- \begin{pmatrix} \frac{1}{2}\sin ^{2} (\overline{y}_{1} )-\frac{1}{2}\hat{y}_{1} \\ \frac{1}{2}\ln (1+y_{2}^{2} ) \\ \ln (1+\overline{y}_{3}^{2} ) - {\int _{t-1}^{t}} \sin ^{3} (\overline{y}_{3}(s) )\,ds \end{pmatrix} \\& \quad \leq \begin{pmatrix} -0.0003\hat{y}_{1} \\ 0.0154y_{2}-0.2266\hat{y}_{2} \\ 0 \end{pmatrix}, \end{aligned}
(148)
\begin{aligned}& L^{-1}g^{T}(t,y,\overline{y}, \hat{y})QL^{-1}g(t,y,\overline{y}, \hat{y}) \leq 0.0065 \vert y \vert ^{2}+0.0957 \vert \hat{y} \vert ^{2}. \end{aligned}
(149)

A simple calculation shows that

\begin{aligned}& \int _{Z}h^{T}(t,y,\overline{y},\hat{y}, \nu ) \pi (d\nu ) \\& \quad \leq \begin{pmatrix} {\int _{Z}} (\frac{1}{\sqrt{\nu }}\overline{y}_{1}+0.1 \hat{y}_{1} )\pi (d\nu ) & {\int _{Z}} (0.01\nu y_{2}-0.4 \hat{y}_{2} )\pi (d\nu ) & {\int _{Z}} (0.2\sqrt{ \nu }\overline{y}_{3}+0.5\hat{y}_{3} )\pi (d\nu ) \end{pmatrix} \\& \quad = \begin{pmatrix} \overline{y}_{1}+0.1133\hat{y}_{1} & 0.0308y_{2}-0.4532\hat{y}_{2} & 0.3298 \overline{y}_{3}+0.5665\hat{y}_{3} \end{pmatrix} \end{aligned}
(150)

and

\begin{aligned} &h^{T}(t,y,\overline{y},\hat{y}, \nu )Q h(t,y, \overline{y},\hat{y}, \nu ) \\ &\quad = \begin{pmatrix} \frac{1}{\sqrt{\nu }}\overline{y}_{1}+0.1\hat{y}_{1} & 0.01\nu y_{2}-0.4 \hat{y}_{2} & 0.2\sqrt{\nu }\overline{y}_{3}+0.5\hat{y}_{3} \end{pmatrix} \\ &\qquad {}\times \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{\nu }}\overline{y}_{1}+0.1\hat{y}_{1} \\ 0.01\nu y_{2}-0.4\hat{y}_{2} \\ 0.2\sqrt{\nu }\overline{y}_{3}+0.5\hat{y}_{3} \end{pmatrix} \\ &\quad = \bigl(0.01\sqrt{\nu }-0.0025\nu +0.002\nu \sqrt{\nu } +0.0002\nu ^{2}+0.0005 \bigr) \vert y_{2} \vert ^{2} \\ &\qquad {}+ \biggl(-\frac{0.2}{\sqrt{\nu }}+0.01\sqrt{\nu }+\frac{2}{\nu } \biggr) \vert \overline{y}_{1} \vert ^{2} + (-0.04\sqrt{\nu }+0.04\nu ) \vert \overline{y}_{2} \vert ^{2} \\ &\qquad {}+ (0.261\sqrt{\nu }+0.001\nu \sqrt{\nu }+0.04\nu +0.25 ) \vert \overline{y}_{3} \vert ^{2} \\ &\qquad {}+ \biggl(\frac{0.2}{\sqrt{\nu }}+0.0005\nu -0.0195 \biggr) \vert \hat{y}_{1} \vert ^{2} + \biggl(-\frac{0.4}{\sqrt{\nu }}-0.008 \nu -0.08\sqrt{\nu } +0.08 \biggr) \vert \hat{y}_{2} \vert ^{2} \\ &\qquad {}+ (0.005\nu +0.1\sqrt{\nu } +0.05 ) \vert \hat{y}_{3} \vert ^{2}, \end{aligned}
(151)

therefore, by using the properties of the log-normal density $${\tilde{h}(\nu )}$$ defined in Eq. (109), we can obtain

\begin{aligned} &\int _{Z}h^{T}(t,y,\overline{y},\hat{y}, \nu )Q h(t,y,\overline{y}, \hat{y}, \nu )\pi (d\nu ) \\ &\quad = \int _{0}^{\infty } \bigl(0.01\sqrt{\nu }-0.0025\nu +0.002\nu \sqrt{\nu } +0.0002\nu ^{2}+0.0005 \bigr) \vert y_{2} \vert ^{2} \\ &\qquad {}\times \frac{1}{\sqrt{2\pi \nu }}e^{-\frac{(\ln \nu )^{2}}{2}} \, d\nu \\ &\qquad {}+ \int _{0}^{\infty } \biggl(-\frac{0.2}{\sqrt{\nu }}+0.01 \sqrt{\nu }+ \frac{2}{\nu } \biggr) \vert \overline{y}_{1} \vert ^{2} \times \frac{1}{\sqrt{2\pi \nu }}e^{-\frac{(\ln \nu )^{2}}{2}}\, d\nu \\ &\qquad {}+ \int _{0}^{\infty } (-0.04\sqrt{\nu }+0.04\nu ) \vert \overline{y}_{2} \vert ^{2} \times \frac{1}{\sqrt{2\pi \nu }}e^{-\frac{(\ln \nu )^{2}}{2}}\, d\nu \\ &\qquad {}+ \int _{0}^{\infty } (0.261\sqrt{\nu }+0.001\nu \sqrt{ \nu }+0.04\nu +0.25 ) \vert \overline{y}_{3} \vert ^{2} \times \frac{1}{\sqrt{2\pi \nu }}e^{- \frac{(\ln \nu )^{2}}{2}}\, d\nu \\ &\qquad {}+ \int _{0}^{\infty } \biggl(\frac{0.2}{\sqrt{\nu }}+0.0005 \nu -0.0195 \biggr) \vert \hat{y}_{1} \vert ^{2}\times \frac{1}{\sqrt{2\pi \nu }}e^{- \frac{(\ln \nu )^{2}}{2}}\, d\nu \\ &\qquad {}+ \int _{0}^{\infty } \biggl(-\frac{0.4}{\sqrt{\nu }}-0.008 \nu -0.08\sqrt{ \nu } +0.08 \biggr) \vert \hat{y}_{2} \vert ^{2}\times \frac{1}{\sqrt{2\pi \nu }}e^{- \frac{(\ln \nu )^{2}}{2}}\, d\nu \\ &\qquad {}+ \int _{0}^{\infty } (0.005\nu +0.1\sqrt{\nu } +0.05 ) \vert \hat{y}_{3} \vert ^{2} \times \frac{1}{\sqrt{2\pi \nu }}e^{-\frac{(\ln \nu )^{2}}{2}}\, d\nu \\ &\quad \leq 0.0286 \vert y \vert ^{2}+3.3246 \vert \overline{y} \vert ^{2}-0.0642 \vert \hat{y} \vert ^{2}. \end{aligned}
(152)

Also based on Eq. (24), we have

\begin{aligned}& L^{1} \biggl( \int _{Z} h^{T}(t,y,\overline{y},\hat{y}, \nu ) \pi (d\nu ) \biggr) \leq \begin{pmatrix} 0.5566\overline{y}_{1}-0.5566\hat{y}_{1} & -0.2112y_{2} & 0 \end{pmatrix}, \end{aligned}
(153)
\begin{aligned}& L^{1} \biggl( \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )\pi (d \nu ) \biggr)Q L^{1} \biggl( \int _{Z}h(t, y, \overline{y}, \hat{y}, \nu ) \pi (d\nu ) \biggr) \\& \quad \leq 0.0892 \vert y \vert ^{2}-0.1175 \vert \overline{y} \vert ^{2}+0.1175 \vert \hat{y} \vert ^{2}, \end{aligned}
(154)
\begin{aligned}& L^{-1} \biggl( \int _{Z}h(t,y,\overline{y},\hat{y}, \nu )\pi (d\nu ) \biggr) \\& \quad \leq \begin{pmatrix} 2.1133\overline{y}_{1}+0.2416\hat{y}_{1} \\ 0.0178y_{2}+0.0035y_{1}-0.2617\hat{y}_{2} \\ 0.6252\overline{y}_{3}+1.0743\hat{y}_{3} \end{pmatrix} - \begin{pmatrix} \overline{y}_{1}+0.1133\hat{y}_{1} \\ 0.0308y_{2}-0.4532\hat{y}_{2} \\ 0.3298\overline{y}_{3}+0.5665\hat{y}_{3} \end{pmatrix} \\& \quad = \begin{pmatrix} 1.1133\overline{y}_{1}+0.1283\hat{y}_{1} \\ -0.013y_{2}+0.1915\hat{y}_{2} \\ 0.2954\overline{y}_{3}+0.5078\hat{y}_{3} \end{pmatrix}, \end{aligned}
(155)
\begin{aligned}& L^{-1} \biggl( \int _{Z}h^{T}(t, y, \overline{y}, \hat{y}, \nu )\pi (d \nu ) \biggr)Q L^{-1} \biggl( \int _{Z}h(t, y, \overline{y}, \hat{y}, \nu ) \pi (d\nu ) \biggr) \\& \quad \leq -0.031 \vert y \vert ^{2}+2.9753 \vert \overline{y} \vert ^{2}+0.8577 \vert \hat{y} \vert ^{2}. \end{aligned}
(156)

Thus, Assumption 2.2 holds with Q as in (141), $${\lambda =3}$$, $${\alpha _{1}=17}$$, $${\alpha _{2}=\alpha _{3}=0}$$, $${\overline{k}=1}$$, $${\beta _{1}=0.5}$$, $${\beta _{2}=2}$$, $${\beta _{3}=-0.75}$$, $${L=75.5}$$, $${\eta _{1}=0}$$, $${\eta _{2}=1.5}$$, $${\eta _{3}=-0.125}$$, $${\tilde{\eta }_{1}=-0.0065}$$, $${\tilde{\eta }_{2}=0}$$, $${\tilde{\eta }_{3}=0.0957}$$, $${\sigma _{1}=0.0892}$$, $${\sigma _{2}=-0.1175}$$, $${\sigma _{3}=0.1175}$$, $${\tilde{\sigma }_{1}=-0.031}$$, $${\tilde{\sigma }_{2}=2.9753}$$, $${\tilde{\sigma }_{3}=0.8577}$$, $${\gamma _{1}=0.0286}$$, $${\gamma _{2}=3.3246}$$ and $${\gamma _{3}=-0.0642}$$. Obviously these parameters satisfy inequality (23):

$$17=\alpha _{1}>\alpha _{2}+\alpha _{3}\overline{k}\tau +\frac{1}{2} \bigl(\beta _{1}+\beta _{2}+\beta _{3} \overline{k}^{2}\tau ^{2} \bigr) + \frac{1}{2} \lambda \bigl(\gamma _{1}+\gamma _{2} +\gamma _{3} \overline{k}^{2}\tau ^{2} \bigr)=5.3085.$$
(157)

Substituting these values into (65) yields

$$\bigl(75.5(1-\theta )+1.4642 \bigr)\Delta t^{\ast 2}+ \bigl(75.5(1-\theta )+1.4642 \bigr)\Delta t^{\ast }-17.5283=0.$$
(158)

To verify these theoretical findings empirically, the SSTM approximation of the SDIDE with jump (118) is computed for both $${\theta =0.1}$$ and $${\theta = 0.8}$$. According to Theorem 5.1, for $$\theta =0.1$$ the SSTM scheme is stable for time-steps $${\Delta t\leq 2^{-3}}$$, whereas mean-square stability is lost for $${\Delta t\geq 2^{-2}}$$. For $$\theta =0.8$$, the scheme is stable for time-steps $${\Delta t\leq 2^{-1}}$$, but not for $${\Delta t\geq 1}$$. The results obtained are reported in Fig. 5.
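These step-size thresholds can be checked numerically by solving the quadratic equation (158) for its positive root $$\Delta t^{\ast }$$ at each value of θ. The sketch below (function and variable names are ours; the coefficients are taken directly from (158)) confirms that the dyadic step sizes used above lie on the correct side of the threshold:

```python
import math

def dt_star(theta):
    """Positive root of the quadratic (158):
    (75.5(1-theta)+1.4642) x^2 + (75.5(1-theta)+1.4642) x - 17.5283 = 0,
    i.e. the mean-square stability threshold Delta t*."""
    a = 75.5 * (1.0 - theta) + 1.4642
    b = a                      # in (158) the linear coefficient equals the quadratic one
    c = -17.5283
    # Quadratic formula; the discriminant is positive since a > 0 and c < 0.
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

for theta in (0.1, 0.8):
    print(f"theta = {theta}: Delta t* ~= {dt_star(theta):.4f}")
```

For $$\theta =0.1$$ the root lies between $$2^{-3}=0.125$$ (stable) and $$2^{-2}=0.25$$ (unstable), and for $$\theta =0.8$$ it lies between $$2^{-1}=0.5$$ (stable) and 1 (unstable), in agreement with the behavior observed in Fig. 5.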

## Conclusion

In this paper, we considered the general case of n-dimensional SDIDEs with Poisson jump. We defined suitable Lyapunov differential operators and established the exponential mean-square stability of the solution of the proposed model. Moreover, by introducing the SSTM scheme, using the delayed difference inequality established in the paper, and approximating the integral part of the model by the simple trapezoidal rule, we obtained the same exponential mean-square stability property under a restriction on the stepsize Δt. Finally, Figs. 1, 3 and 5 show that the exponential mean-square stability behavior of the numerical solution is consistent with the predicted Δt bounds for the different values of θ.

## References

1. Appleby, J.A.D., Riedle, M.: Almost sure asymptotic stability of stochastic Volterra integro-differential equations with fading perturbations. Stoch. Anal. Appl. 24, 813–826 (2006)

2. Hobson, D.G., Rogers, L.C.G.: Complete models with stochastic volatility. Math. Finance 8, 27–48 (1998)

3. Beretta, E., Kolmanovskii, V.B.: Stability of epidemic model with time delays influenced by stochastic perturbations. Math. Comput. Simul. 45, 269–277 (1998)

4. Kuang, Y.: Delay Differential Equations with Applications in Population Dynamics. Academic Press, San Diego (1993)

5. Bo, L., Tang, D., Wang, Y.: Optimal investment of variance-swaps in jump-diffusion market with regime-switching. J. Econ. Dyn. Control 83, 175–197 (2017)

6. Cont, R., Tankov, P.: Financial Modelling with Jump Processes. Chapman & Hall/CRC Financial Mathematics Series. Chapman & Hall/CRC, Boca Raton (2003)

7. Dempster, M.A.H., Medova, E., Tang, K.: Latent jump diffusion factor estimation for commodity futures. J. Commod. Mark. 9, 35–54 (2018)

8. Li, X., Mao, X., Shen, Y.: Approximate solutions of stochastic differential delay equations with Markovian switching. J. Differ. Equ. Appl. 16, 195–210 (2010)

9. Zhang, Q.M.: Convergence of numerical solutions for a class of stochastic age-dependent capital system with Markovian switching. Econ. Model. 28, 1195–1201 (2011)

10. Ahmadian, D., Farkhondeh Rouz, O., Ballestra, L.V.: Stability analysis of split-step θ-Milstein method for a class of n-dimensional stochastic differential equations. Appl. Math. Comput. 348, 413–424 (2019)

11. Baker, C.T.H., Buckwar, E.: Exponential stability in p-th mean of solutions and of convergent Euler-type solutions of stochastic delay differential equations. J. Comput. Appl. Math. 184, 404–427 (2005)

12. Farkhondeh Rouz, O., Ahmadian, D.: Stability analysis of two classes of improved backward Euler methods for stochastic delay differential equations of neutral type. Comput. Methods Differ. Equ. 3, 201–213 (2017)

13. Farkhondeh Rouz, O., Ahmadian, D., Milev, M.: Exponential mean-square stability of two classes of theta Milstein methods for stochastic delay differential equations. AIP Conf. Proc. 1910, 060015 (2017)

14. Huang, C., Gan, S., Wang, D.: Delay-dependent stability analysis of numerical methods for stochastic delay differential equations. J. Comput. Appl. Math. 236, 3514–3527 (2012)

15. Mao, X., Wu, F., Kloeden, P.E.: Discrete Razumikhin-type technique and stability of the Euler–Maruyama method to stochastic functional differential equations. Discrete Contin. Dyn. Syst. 33, 885–903 (2013)

16. Song, M., Mao, X.: Almost sure exponential stability of hybrid stochastic functional differential equations. J. Math. Anal. Appl. 458, 1390–1408 (2018)

17. Ding, X., Wu, K.: Convergence and stability of the semi-implicit Euler method for linear stochastic delay integro-differential equations. Int. J. Comput. Math. 83, 753–763 (2006)

18. Rathinasamy, A., Balachandran, K.: Mean-square stability of Milstein method for linear hybrid stochastic delay integro-differential equations. Nonlinear Anal. Hybrid Syst. 2, 1256–1263 (2008)

19. Liu, L., Mo, H., Deng, F.: Split-step theta method for stochastic delay integro-differential equations with mean square exponential stability. Appl. Math. Comput. 353, 320–328 (2019)

20. Li, Q., Gan, S.: Mean-square exponential stability of stochastic theta methods for nonlinear stochastic delay integro-differential equations. J. Appl. Math. Comput. 36, 69–87 (2012)

21. Baccouch, M., Johnson, B.: A high-order discontinuous Galerkin method for Itô stochastic ordinary differential equations. J. Comput. Appl. Math. 308, 138–165 (2016)

22. Buckwar, E., Winkler, R.: Improved linear multi-step methods for stochastic ordinary differential equations. J. Comput. Appl. Math. 205, 912–922 (2007)

23. Bokor, R.H.: Stochastically stable one-step approximations of solutions of stochastic ordinary differential equations. Appl. Numer. Math. 44, 299–312 (2003)

24. Mo, H., Zhao, X., Deng, F.: Exponential mean-square stability of the θ-method for neutral stochastic delay differential equations with jumps. Int. J. Syst. Sci. 48, 462–470 (2016)

25. Tan, J., Wang, H.: Mean-square stability of the Euler–Maruyama method for stochastic differential delay equations with jumps. Int. J. Comput. Math. 88, 421–429 (2011)

26. Li, Q., Gan, S.: Almost sure exponential stability of numerical solutions for stochastic delay differential equations with jumps. J. Appl. Math. Comput. 37, 541–557 (2011)

27. Zhang, W., Ye, J., Li, H.: Stability with general decay rates of stochastic differential delay equations with Poisson jumps and Markovian switching. Stat. Probab. Lett. 92, 1–11 (2014)

28. Li, H., Zhu, Q.: The pth moment exponential stability and almost surely exponential stability of stochastic differential delay equations with Poisson jump. J. Math. Anal. Appl. 471, 197–210 (2019)

29. Zhao, G., Liu, M., Kloeden, P.E.: Numerical methods for nonlinear stochastic delay differential equations with jumps. Appl. Math. Comput. 233, 222–231 (2014)

30. Jiang, F., Shen, Y., Hu, J.: Stability of the split-step backward Euler scheme for stochastic delay integro-differential equations with Markovian switching. Math. Comput. Simul. 16, 814–821 (2011)

31. Zhu, Q.: Razumikhin-type theorem for stochastic functional differential equations with Lévy noise and Markov switching. Int. J. Control 90, 1703–1712 (2017)

32. Zhu, Q.: Stability analysis of stochastic delay differential equations with Lévy noise. Syst. Control Lett. 118, 62–68 (2018)

33. Deng, S., Fei, W., Liu, W., Mao, X.: The truncated EM method for stochastic differential equations with Poisson jumps. J. Comput. Appl. Math. 355, 232–257 (2019)

34. Ren, Q., Tian, H.: Compensated θ-Milstein methods for stochastic differential equations with Poisson jumps. Appl. Numer. Math. 150, 27–37 (2020)

35. Applebaum, D.: Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge (2004)

36. Rong, S.T.: Theory of Stochastic Differential Equations with Jumps and Applications. Springer, Berlin (2005)

37. Kloeden, P.E., Platen, E.: Numerical Solution of Stochastic Differential Equations. Applications of Mathematics (New York). Springer, Berlin (1992)

38. Kloeden, P.E., Shardlow, T.: The Milstein scheme for stochastic delay differential equations without using anticipative calculus. Stoch. Anal. Appl. 30, 181–202 (2012)

39. Liptser, R.S., Shiryaev, A.N.: Theory of Martingales. Kluwer Academic, Dordrecht (1989)

40. Mao, X.: Stochastic Differential Equations and Their Applications. Horwood, Chichester (1997)

### Acknowledgements

The authors would like to thank the reviewers for their very valuable comments and helpful suggestions which improved the paper significantly.

### Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

## Funding

No funding was received. Omid Farkhondeh Rouz and Davood Ahmadian prepared the manuscript jointly.

## Author information


### Contributions

All authors read and approved the final manuscript.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.


Ahmadian, D., Farkhondeh Rouz, O. Exponential mean-square stability of numerical solutions for stochastic delay integro-differential equations with Poisson jump. J Inequal Appl 2020, 186 (2020). https://doi.org/10.1186/s13660-020-02452-3


### Keywords

• Split-step θ-Milstein scheme
• Exponential mean-square stability
• Stochastic delay integro-differential equations
• Poisson jump
• Lyapunov function