
A maximal inequality for pth power of stochastic convolution integrals

Abstract

We prove an inequality for the pth power of the norm of a stochastic convolution integral in a Hilbert space. The inequality is stronger than analogous inequalities in the literature in the sense that it is pathwise and not in expectation.

1 Introduction

Stochastic convolution integrals appear in many fields of stochastic analysis. They are integrals of the form

$$X_{t}= \int_{0}^{t} S_{t-s}\,dM_{s}, $$

where \(M_{t}\) is a martingale with values in a Hilbert space. Although they generalize stochastic integrals, they are different in many ways. For example, they are not semimartingales in general, and hence the usual results on semimartingales, such as maximal inequalities (i.e., inequalities for \(\sup_{0\le s\le t} \|X_{s}\|\)) and the existence of càdlàg versions, cannot be applied directly to them.

Among the first studies in this field, Kotelenez [1] and Ichikawa [2] considered stochastic convolution integrals with respect to general martingales. They proved a maximal inequality in \(L^{2}\) for stochastic convolution integrals (Theorem 1).

Stochastic convolution integrals arise naturally in proving the existence, uniqueness, and regularity of the solutions of semilinear stochastic evolution equations

$$dX_{t} = A X_{t} \,dt + f(t,X_{t})\,dt + g(t,X_{t})\,dM_{t}, $$

where A is the generator of a \(C_{0}\) semigroup of linear operators on a Hilbert space, and \(M_{t}\) is a martingale. The case where the coefficients are Lipschitz operators is studied well, and theorems on the existence, uniqueness, and continuity with respect to initial data for the solutions in \(L^{2}\) are proved; see, for example, Kotelenez [3]. The proofs are based on the maximal inequality for stochastic convolution integrals, that is, Theorem 1.

These results have been generalized in several directions. One is the maximal inequality for the pth power of the norm of stochastic convolution integrals. Tubaro [4] proved an upper estimate for \(E[\sup_{0\le s\le t}\vert x(s)\vert^{p}]\) with \(p\ge2\) in the case where \(M_{t}\) is a real Wiener process. Ichikawa [2] proved a maximal inequality for the pth power, \(p\ge2\), in the particular case where \(M_{t}\) is a Hilbert-space-valued continuous martingale. The case of a general martingale was treated by Zangeneh [5] for \(p\ge2\) (see Theorem 5). Hamedani and Zangeneh [6] generalized the maximal inequality to \(0< p <\infty\).

Brzezniak et al. [7] have derived a maximal inequality for the pth power of the norm of stochastic convolutions driven by Poisson random measures.

As far as we know, the maximal inequalities proved for stochastic convolution integrals in the literature all involve expectations. The only result that provides a pathwise (almost sure) bound is Zangeneh [5], who has proved Theorem 2, called an Itô-type inequality. This inequality provides a pathwise estimate for the square of the norm of stochastic convolution integrals and is a generalization of the Itô formula to stochastic convolution integrals.

In Section 2, we define and state some results about stochastic convolution integrals that will be used in the sequel. In Section 3, we state and prove the main result of this article, Theorem 6, which provides a pathwise bound for the pth power of stochastic convolution integrals with respect to general martingales. The special case where the martingale is an Itô integral with respect to a Wiener process has been proved by Jahanipour and Zangeneh [8].

As an example, we apply Theorem 6 to a semilinear stochastic evolution equation with non-Lipschitz coefficients. We consider the drift term to be a monotone nonlinear operator and the noise term to be a Wiener process and provide a sufficient condition for stability of mild solutions in \(L^{p}\) in Theorem 13. The precise assumptions on coefficients will be stated in Section 4.

2 Stochastic convolution integrals

Let H be a separable Hilbert space with inner product \(\langle\cdot ,\cdot \rangle\). Let \(S_{t}\) be a \(C_{0}\) semigroup on H with infinitesimal generator \(A:D(A)\to H\). Furthermore, we assume the exponential growth condition on \(S_{t}\), that is, there exists a constant α such that \(\| S_{t} \| \le e^{\alpha t}\). If \(\alpha =0\), then \(S_{t}\) is called a contraction semigroup.
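As a concrete illustration of the exponential growth condition (our own standard example, not taken from the paper), consider the heat semigroup generated by the Laplacian on \(L^{2}(\mathbb{R}^{d})\):

```latex
% A = \Delta generates S_t = e^{t\Delta}; by the Fourier transform,
% \widehat{S_t u}(\xi) = e^{-t|\xi|^2}\,\hat{u}(\xi), so
\| S_{t} u \|_{L^{2}} \le \| u \|_{L^{2}}
\quad\Longrightarrow\quad
\| S_{t} \| \le 1 = e^{0 \cdot t},
% i.e. the growth condition holds with \alpha = 0 (a contraction semigroup).
% More generally, A + \beta I generates e^{\beta t} S_t, which satisfies
% \| e^{\beta t} S_t \| \le e^{\beta t}, i.e. the condition with \alpha = \beta.
```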

In this section, we review some properties and results about convolution integrals of type \(X_{t}=\int_{0}^{t} S_{t-s}\,dM_{s}\) where \(M_{t}\) is a martingale. These are called stochastic convolution integrals. Kotelenez [3] gives a maximal inequality for stochastic convolution integrals.

Theorem 1

(Kotelenez [3])

Let \(\alpha\ge0\). There exists a constant C such that for any H-valued càdlàg locally square-integrable martingale \(M_{t}\), we have

$$\mathbb{E} \sup_{0\le t\le T} \biggl\| \int_{0}^{t} S_{t-s}\,dM_{s} \biggr\| ^{2} \le\mathbf{C} e^{4\alpha T} \mathbb{E}[M]_{T}. $$

Remark

Hamedani and Zangeneh [6] generalized this inequality to a stopped maximal inequality for pth moments (\(0< p<\infty\)) of stochastic convolution integrals.

Because of the presence of a monotone nonlinearity in our equation, we need a pathwise bound for stochastic convolution integrals. For this reason, the following pathwise inequality for the norm of stochastic convolution integrals was proved in Zangeneh [5].

Theorem 2

(Itô-type inequality, Zangeneh [5])

Let \(Z_{t}\) be an H-valued càdlàg locally square-integrable semimartingale. If

$$X_{t}=S_{t} X_{0} + \int_{0}^{t} S_{t-s}\,dZ_{s}, $$

then

$$ \lVert X_{t} \rVert^{2} \le e^{2\alpha t}\lVert X_{0} \rVert^{2} + 2 \int _{0}^{t} {e^{2\alpha(t-s)}\langle X_{s-} , d Z_{s} \rangle}+ \int_{0}^{t} {e^{2\alpha(t-s)}d[Z]_{s}}, $$

where \([Z]_{t}\) is the quadratic-variation process of \(Z_{t}\).

We state here the Burkholder-Davis-Gundy (BDG) inequality and its corollary for future reference.

Theorem 3

(Burkholder-Davis-Gundy (BDG) inequality)

For every \(p \ge1\), there exists a constant \(\mathcal{C}_{p} >0\) such that, for any real-valued square-integrable càdlàg martingale \(M_{t}\) with \(M_{0}=0\) and for any \(T\ge0\),

$$\mathbb{E}\mathop{\sup} _{0\le t\le T} |M_{t}|^{p} \le \mathcal{C}_{p} \mathbb{E} [M]_{T}^{\frac{p}{2}}. $$

Proof

See [9], p.37, and the reference there. □
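As an informal numerical sanity check (our own illustration, not part of the paper): for the simple ±1 random walk \(M_{t}\), the quadratic variation is \([M]_{T}=T\) deterministically, and for \(p=2\) Doob's maximal inequality gives the admissible constant \(\mathcal{C}_{2}=4\). A short simulation confirms the bound:

```python
import random

def simulate_sup_squared(T=100, n_paths=2000, seed=0):
    """Empirical mean of sup_{t<=T} |M_t|^2 for a +-1 random walk M."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        m, sup_abs = 0, 0
        for _ in range(T):
            m += rng.choice((-1, 1))
            sup_abs = max(sup_abs, abs(m))
        total += sup_abs ** 2
    return total / n_paths

# For the +-1 walk, [M]_T = T, so BDG with p = 2 (Doob) predicts
# E sup_{t<=T} |M_t|^2 <= 4 * E[M]_T = 4 * T.
lhs = simulate_sup_squared(T=100)
assert lhs <= 4 * 100
```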

Corollary 4

Let \(p\ge1\), let \(\mathcal{C}_{p}\) be the constant in the BDG inequality, let \(M_{t}\) be an H-valued square-integrable càdlàg martingale, \(X_{t}\) an H-valued adapted process, and \(T\ge0\). Then, for every \(K>0\),

$$\begin{aligned} \mathbb{E}\sup_{0\le t\le T} \biggl\vert \int_{0}^{t} \langle X_{s},dM_{s} \rangle\biggr\vert ^{p} \le& \mathcal{C}_{p}\mathbb{E} \bigl(\bigl(X_{T}^{*}\bigr)^{p}[M]_{T}^{\frac{p}{2}} \bigr) \\ \le& \frac{\mathcal{C}_{p}}{2K} \mathbb{E}\bigl(X_{T}^{*}\bigr)^{2p} + \frac{\mathcal {C}_{p} K}{2} \mathbb{E}[M]_{T}^{p}, \end{aligned}$$

where \(X_{T}^{*}=\sup_{0\le t\le T} \|X_{t}\|\).

Proof

See [5], Lemma 4, p.147. □

We will also need the following inequality, which is an analogue of the Burkholder-Davis-Gundy inequality for stochastic convolution integrals.

Theorem 5

(Burkholder-type inequality, Zangeneh [5], Theorem 2, p.147)

Let \(p\ge2\) and \(T>0\). Let \(S_{t}\) be a contraction semigroup on H, and \(M_{t}\) be an H-valued square-integrable càdlàg martingale for \(t\in[0,T]\). Then

$$ \mathbb{E}\sup_{0 \le t \le T}\biggl\Vert { \int_{0}^{t} S_{t-s}\,dM_{s}} \biggr\Vert ^{p} \le K_{p} \mathbb{E}\bigl([M]_{T}^{\frac{p}{2}}\bigr), $$

where \(K_{p}\) is a constant depending only on p.

3 Itô-type inequality for pth power

We use the notion of semimartingale and Itô’s formula as described by Métivier [10].

Theorem 6

(Itô-type inequality for pth power)

Let \(p\ge2\). Let \(Z(t)=V(t) + M(t)\) be a semimartingale, where \(V(t)\) is an H-valued process with finite variation \(|V|(t)\), and \(M(t)\) is an H-valued square-integrable martingale with quadratic variation \([M](t)\). Assume that

$$\mathbb{E} [M](T)^{\frac{p}{2}} < \infty \quad\textit{and}\quad \mathbb {E}|V|(T)^{p} < \infty. $$

Let \(X_{0}(\omega)\) be \(\mathcal{F}_{0}\)-measurable and square-integrable. Define \(X(t)=S(t) X_{0} + \int_{0}^{t} S(t-s)\,dZ(s)\). Then we have

$$\begin{aligned} \bigl\| X(t) \bigr\| ^{p} \le& e^{p\alpha t}\|X_{0}\|^{p} + p \int_{0}^{t} e^{p\alpha(t-s)} \bigl\Vert X \bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X\bigl(s^{-}\bigr) , dZ(s) \bigr\rangle \\ &{} + \frac{1}{2}p(p-1) \int_{0}^{t} e^{p\alpha(t-s)} \bigl\Vert X \bigl(s^{-}\bigr)\bigr\Vert ^{p-2} d[M]^{c}(s) \\ &{} + \sum_{0\le s\le t} e^{p\alpha(t-s)} \bigl( \bigl\Vert X(s)\bigr\Vert ^{p} -\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p} - p \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X\bigl(s^{-}\bigr) , \Delta Z(s) \bigr\rangle \bigr). \end{aligned}$$

Remark

  1. For \(p=2\), the theorem implies the Itô-type inequality, Theorem 2.

  2. If M is a continuous martingale, then the inequality takes the simpler form

    $$\begin{aligned} \bigl\Vert X(t)\bigr\Vert ^{p} \le& e^{p\alpha t} \|X_{0}\|^{p} + p \int_{0}^{t} e^{p\alpha(t-s)} \bigl\Vert X \bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X\bigl(s^{-}\bigr) , dZ(s) \bigr\rangle \\ &{} + \frac{1}{2}p(p-1) \int_{0}^{t} e^{p\alpha(t-s)} \bigl\Vert X \bigl(s^{-}\bigr)\bigr\Vert ^{p-2} d[M](s). \end{aligned}$$

Before proceeding to the proof of the theorem, we state and prove some lemmas.

Lemma 7

It suffices to prove Theorem 6 for the case \(\alpha=0\).

Proof

Define

$$\begin{aligned}& \tilde{S}(t)= e^{-\alpha t} S(t) ,\qquad \tilde{X}(t)=e^{-\alpha t}X(t) , \qquad d\tilde{Z}(t)=e^{-\alpha t}\,dZ(t). \end{aligned}$$

Now we have \(\tilde{X}(t)=\tilde{S}(t) X_{0}+\int_{0}^{t} \tilde{S}(t-s)\,d\tilde{Z}(s)\). Note that \(\tilde{S}(t)\) is a contraction semigroup. It is easy to see that the statement for \(\tilde{X}(t)\) implies the statement for \(X(t)\). □
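To spell out the "easy to see" step (a short computation, written schematically for the leading terms of Theorem 6): assume the \(\alpha=0\) inequality for \(\tilde{X}\) and undo the scaling.

```latex
% The \alpha = 0 inequality for \tilde{X}(t) = e^{-\alpha t} X(t) reads
\| \tilde{X}(t) \|^{p}
  \le \| X_{0} \|^{p}
  + p \int_{0}^{t} \| \tilde{X}(s^{-}) \|^{p-2}
      \langle \tilde{X}(s^{-}) , d\tilde{Z}(s) \rangle + \cdots
% Substituting \tilde{X}(s) = e^{-\alpha s} X(s) and d\tilde{Z}(s) = e^{-\alpha s}\,dZ(s),
% the integrand carries e^{-(p-2)\alpha s} \cdot e^{-\alpha s} \cdot e^{-\alpha s} = e^{-p\alpha s}.
% Multiplying both sides by e^{p\alpha t} gives
\| X(t) \|^{p}
  \le e^{p\alpha t} \| X_{0} \|^{p}
  + p \int_{0}^{t} e^{p\alpha(t-s)} \| X(s^{-}) \|^{p-2}
      \langle X(s^{-}) , dZ(s) \rangle + \cdots
% and the quadratic-variation and jump terms transform in the same way.
```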

Hence, from now on, we assume that \(\alpha=0\).

Lemma 8

(Ordinary Itô’s formula for pth power)

Let \(p\ge2\), and let \(Z(t)\) be an H-valued semimartingale. Then

$$\begin{aligned}[b] \bigl\Vert Z(t)\bigr\Vert ^{p} \le{}&\bigl\Vert Z(0)\bigr\Vert ^{p} + p \int_{0}^{t} \bigl\Vert Z\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle Z\bigl(s^{-}\bigr) , dZ(s) \bigr\rangle \\ &{}+ \frac{p(p-1)}{2} \int_{0}^{t} \bigl\Vert Z\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} d[Z]^{c}(s) \\ &{}+ \sum_{0\le s\le t} \bigl( \bigl\Vert Z(s)\bigr\Vert ^{p} - \bigl\Vert Z\bigl(s^{-}\bigr)\bigr\Vert ^{p} - p \bigl\Vert Z\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle Z\bigl(s^{-} \bigr) , \Delta Z(s) \bigr\rangle \bigr), \end{aligned} $$

where \([Z]^{c}\) is the continuous part of the quadratic variation of \(Z\); when \(Z=V+M\) with \(V\) of finite variation, \([Z]^{c}=[M]^{c}\).

Proof

Use Itô’s formula (Métivier [10], Theorem 27.2, p.190) for \(\varphi(x)=\|x\|^{p}\) and note that

$$\begin{aligned}& \varphi'(x) (h)=p\|x\|^{p-2}\langle x , h\rangle, \\& \begin{aligned}[b] \varphi''(x) (h\otimes h)&=p(p-2) \|x\|^{p-4}\langle x , h\rangle^{2}+ p\|x \|^{p-2}\langle h , h \rangle \\ &\le p(p-1) \|x\| ^{p-2} \|h\|^{2}, \end{aligned} \end{aligned}$$

where the last step uses the Cauchy-Schwarz inequality \(\langle x , h\rangle^{2}\le\|x\|^{2}\|h\|^{2}\).

 □

Lemma 9

Let \(v:[0,T]\to D(A)\) be a function with finite variation (with respect to the norm of \(D(A)\)), denoted by \(|v|(t)\). For \(u_{0} \in D(A)\), let \(u(t)=S(t) u_{0} + \int_{0}^{t} S(t-s)\,dv(s)\). Then \(u(t)\) is \(D(A)\)-valued and satisfies

$$u(t)=u_{0}+ \int_{0}^{t} A u(s)\,ds + v(t). $$

Proof

(See also [11], p.30, Theorem 2.22 for the particular case \(dv(t)=f(t)\,dt\).) Let \(q(t)\) be the Radon-Nikodym derivative of \(v(t)\) with respect to \(|v|(t)\), that is, \(q(t)\) is a \(D(A)\)-valued function that is Bochner measurable with respect to \(d|v|(t)\), and \(v(t)=\int_{0}^{t} q(s)\,d|v|(s)\). We know that, for every \(t\in[0,T]\), \(\|q(t)\|\le1\).

Recall from semigroup theory that we can equip \(D(A)\) with an inner product by defining \(\langle x , y \rangle_{D(A)} := \langle x , y \rangle+ \langle Ax , Ay \rangle\). By closedness of A it follows that, under this inner product, \(D(A)\) is a Hilbert space, and \(A:D(A)\to H\) is a bounded linear map. Note that \(S(t)\) is also a semigroup on \(D(A)\). Hence, \(u(t)\) is a convolution integral in \(D(A)\) and hence has its value in \(D(A)\). We use the following two simple identities that hold in \(D(A)\):

$$S(t)x = x + \int_{0}^{t} A S(r) x \,dr, \qquad S(t-s)x = x + \int_{s}^{t} S(r-s)Ax \,dr. $$

We have

$$\begin{aligned} u(t) & = S(t) u_{0} + \int_{0}^{t} S(t-s)\,dv(s) \\ & = S(t) u_{0} + \int_{0}^{t} S(t-s) q(s) d|v|(s) \\ & = u_{0} + \int_{0}^{t} A S(r) u_{0} \,dr + \int_{0}^{t} \biggl( q(s) + A \int_{s}^{t} S(r-s)q(s)\,dr \biggr) d|v|(s). \end{aligned}$$

Now, using Fubini’s theorem, we find

$$\begin{aligned} & = u_{0} + \int_{0}^{t} q(s) d|v|(s) + \int_{0}^{t} A S(r) u_{0} \,dr + \int_{0}^{t} A \int_{0}^{r} S(r-s)q(s)\,d|v|(s)\,dr \\ & = u_{0} + v(t) + \int_{0}^{t} A \biggl( S(r) u_{0} + \int_{0}^{r} S(r-s)\,dv(s) \biggr)\,dr \\ & = u_{0} + v(t) + \int_{0}^{t} A u(r)\,dr. \end{aligned}$$

 □

Lemma 10

Let \(V(t)\) be a \(D(A)\)-valued process with finite variation in \(D(A)\), \(M(t)\) be a \(D(A)\)-valued square-integrable martingale, and \(V(0)=M(0)=0\). Let \(Z(t)=V(t) + M(t)\), and let \(X_{0}\) be \(D(A)\)-valued and \(\mathcal{F}_{0}\)-measurable. Define \(X(t)=S(t) X_{0} + \int_{0}^{t} S(t-s)\,dZ(s)\). Then \(X(t)\) is \(D(A)\)-valued and satisfies the following stochastic integral equation in H:

$$X(t)= X_{0} + \int_{0}^{t} A X(s)\,ds + Z(t). $$

Proof

Note that \(S(t)\) is also a semigroup on \(D(A)\). Hence, \(X(t)\) is a stochastic convolution integral in \(D(A)\), and hence has its value in \(D(A)\). Write \(\overline{Y}(t)=S(t)X_{0}+\int_{0}^{t} S(t-s)\,dV(s)\) and \(Y(t)=\int_{0}^{t} S(t-s)\,dM(s)\). Hence, \(X(t)=\overline{Y}(t)+Y(t)\). We can apply Lemma 9 to \(\overline{Y}(t)\) and deduce that \(\overline{Y}(t)=X_{0}+\int_{0}^{t} A \overline{Y}(s)\,ds + V(t)\). Hence, it suffices to prove that \(Y(t)=\int _{0}^{t} A Y(s)\,ds + M(t)\).

Let \(\{e_{1},e_{2},e_{3},\ldots\}\) be an orthonormal basis for the Hilbert space \(D(A)\). Define \(\overline{M}^{j}(t)=\langle M(t),e_{j} \rangle_{D(A)}\) and \(M^{k}(t)=\sum_{j=1}^{k} \overline{M}^{j}(t) e_{j}\). Let \(Y^{k}(t)=\int_{0}^{t} S(t-s)\,dM^{k}(s)\). We use the following two simple identities that hold in \(D(A)\):

$$S(t)x = x + \int_{0}^{t} A S(r) x \,dr,\qquad S(t-s)x = x + A \int_{s}^{t} S(r-s)x \,dr. $$

We have

$$\begin{aligned} Y^{k}(t) & = \int_{0}^{t} S(t-s)\,dM^{k}(s) \\ & = \sum_{j=1}^{k} \int_{0}^{t} S(t-s) e_{j} \,d \overline{M}^{j}(s) \\ & = \sum_{j=1}^{k} \int_{0}^{t} \biggl( e_{j} + \int_{s}^{t} S(r-s)A e_{j} \,dr \biggr)\,d \overline{M}^{j}(s) \\ & = M^{k}(t) + \sum_{j=1}^{k} \int_{0}^{t} \int_{s}^{t} S(r-s)A e_{j} \,dr \,d \overline{M}^{j}(s). \end{aligned}$$

Now, using the stochastic Fubini theorem (see [9], Theorem 8.14, p.119), we find

$$\begin{aligned} & = M^{k}(t) + \sum_{j=1}^{k} \int_{0}^{t} \int_{0}^{r} S(r-s)A e_{j} \,d \overline{M}^{j}(s)\,dr \\ & = M^{k}(t) + \int_{0}^{t} A \biggl( \sum_{j=1}^{k} \int_{0}^{r} S(r-s) e_{j}\,d\overline{M}^{j}(s) \biggr)\,dr \\ & = M^{k}(t) + \int_{0}^{t} A Y^{k}(s)\,ds. \end{aligned}$$

Hence,

$$ Y^{k}(t)=M^{k}(t) + \int_{0}^{t} A Y^{k}(s)\,ds. $$
(1)

We have \(\mathbb{E}\|M(T)-M^{k}(T)\|_{D(A)}^{2}\to0\) and, by Theorem 1,

$$\mathbb{E} \sup_{0\le t\le T} \bigl\Vert Y(t)-Y^{k}(t) \bigr\Vert _{D(A)}^{2} \le\mathbf{C} \mathbb{E}\bigl\Vert M(T)-M^{k}(T)\bigr\Vert _{D(A)}^{2} \to0. $$

Since \(A:D(A)\to H\) is continuous, \(\mathbb{E} \sup_{0\le t\le T} \| AY(t)-AY^{k}(t)\|_{H}^{2}\to0\), and hence \(\mathbb{E}\|\int_{0}^{t} A Y(s)\,ds - \int_{0}^{t} A Y^{k}(s)\,ds\| \to0\). Taking the limits of both sides of (1), we get

$$Y(t)=M(t) + \int_{0}^{t} A Y(s)\,ds. $$

 □

Proof of Theorem 6

By Lemma 7 we need only to prove the theorem in the case \(\alpha=0\). In this case, we have to prove that

$$\begin{aligned} \bigl\Vert X(t)\bigr\Vert ^{p} \le& \Vert X_{0}\Vert ^{p} + p \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X\bigl(s^{-}\bigr), dZ(s) \bigr\rangle \\ &{} + \frac{1}{2}p(p-1) \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} d[M]^{c}(s) \\ &{} + \sum_{0\le s\le t} \bigl( \bigl\Vert X(s)\bigr\Vert ^{p} - \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p} - p \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X \bigl(s^{-}\bigr) , \Delta Z(s) \bigr\rangle \bigr). \end{aligned}$$
(2)

The main idea is to approximate \(M(t)\) and \(V(t)\) by \(D(A)\)-valued processes, to which the ordinary Itô formula applies. This is done by the Yosida approximations. We recall some facts from semigroup theory in the following lemma. For proofs, see Pazy [12].

Lemma 11

For \(\lambda> 0\), \(\lambda I - A\) is invertible. Let \(R(\lambda )=\lambda(\lambda I - A)^{-1}\) and \(A(\lambda)=A R(\lambda)\). We have:

  (a) \(R(\lambda): H \to D(A)\) and \(A(\lambda):H \to H\) are bounded linear maps;

  (b) for every \(x\in H\), \(\|R(\lambda)x\|_{H} \le\|x\|_{H}\) and \(\langle x , A(\lambda) x \rangle\le0\);

  (c) \(R(\lambda)S(t)=S(t)R(\lambda)\), and for \(x\in D(A)\), \(R(\lambda) A x = A R(\lambda) x\);

  (d) for every \(x\in H\), \(\lim_{\lambda\to\infty} R(\lambda) x = x\) in H;

  (e) for every \(x\in D(A)\), \(\lim_{\lambda\to\infty} A(\lambda) x = Ax\).
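A minimal finite-dimensional sketch of properties (b), (d), and (e) (our own illustration: a scalar generator \(Ax=-ax\) with \(a>0\), for which \(R(\lambda)=\lambda/(\lambda+a)\) and \(A(\lambda)=-a\lambda/(\lambda+a)\)):

```python
# Yosida approximation for the scalar generator A x = -a x (a > 0),
# which generates the contraction semigroup S_t x = e^{-a t} x.
a = 2.0

def R(lam, x):
    # R(lambda) = lambda (lambda I - A)^{-1} = lambda / (lambda + a)
    return lam / (lam + a) * x

def A_lam(lam, x):
    # A(lambda) = A R(lambda) = -a lambda / (lambda + a)
    return -a * lam / (lam + a) * x

x = 3.0
# (b): R(lambda) is a contraction and <x, A(lambda) x> <= 0
assert abs(R(5.0, x)) <= abs(x)
assert x * A_lam(5.0, x) <= 0
# (d): R(lambda) x -> x as lambda -> infinity
assert abs(R(1e8, x) - x) < 1e-6
# (e): A(lambda) x -> A x = -a x
assert abs(A_lam(1e8, x) - (-a * x)) < 1e-6
```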

Now, for \(n=1,2,3,\ldots\) , define:

$$\begin{aligned}& V^{n}(t)=R(n)V(t),\qquad M^{n}(t)=R(n)M(t), \qquad Z^{n}(t)=V^{n}(t)+M^{n}(t)=R(n)Z(t), \\& X^{n}_{0}= R(n) X_{0}, \qquad X^{n}(t)=S(t) X^{n}_{0} + \int_{0}^{t} S(t-s)\,dZ^{n}(s). \end{aligned}$$

According to Lemma 11, \(V^{n}(t)\) is a \(D(A)\)-valued finite-variation process, \(M^{n}(t)\) is a \(D(A)\)-valued martingale, and \(Z^{n}(t)\) is a \(D(A)\)-valued semimartingale. Hence, by Lemma 10, \(X^{n}(t)\) satisfies an ordinary stochastic integral equation, so we can apply Lemma 8 to it and find

$$\begin{aligned} \bigl\Vert X^{n}(t)\bigr\Vert ^{p} \le{}&\bigl\Vert X^{n}_{0}\bigr\Vert ^{p} + p \int_{0}^{t} \bigl\Vert X^{n}\bigl(s^{-} \bigr)\bigr\Vert ^{p-2} \bigl\langle X^{n}\bigl(s^{-}\bigr) , A X^{n}(s)\,ds + dV^{n}(s) + dM^{n}(s) \bigr\rangle \\ &{}+ \frac{p(p-1)}{2} \int_{0}^{t} \bigl\Vert X^{n}\bigl(s^{-} \bigr)\bigr\Vert ^{p-2} d\bigl[M^{n}\bigr]^{c}(s)+F^{n}, \end{aligned}$$

where

$$F^{n} = \sum_{0\le s\le t} \bigl( \bigl\Vert X^{n}(s)\bigr\Vert ^{p} - \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p} - p \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X^{n}\bigl(s^{-}\bigr) , \Delta Z^{n}(s) \bigr\rangle \bigr). $$

Since A is the generator of a contraction semigroup, we have \(\langle A x , x \rangle\le0\), and hence,

$$\begin{aligned} \underbrace{\bigl\Vert X^{n}(t)\bigr\Vert ^{p}}_{\mathbf{A^{n}}} \le& \underbrace{\bigl\Vert X^{n}_{0}\bigr\Vert ^{p}}_{\mathbf{B^{n}}} + p \underbrace{ \int_{0}^{t} \bigl\Vert X^{n}\bigl(s^{-} \bigr)\bigr\Vert ^{p-2} \bigl\langle X^{n}\bigl(s^{-}\bigr) , dV^{n}(s)\bigr\rangle }_{\mathbf{C^{n}}} \\ &{} + p \underbrace{ \int_{0}^{t} \bigl\Vert X^{n}\bigl(s^{-} \bigr)\bigr\Vert ^{p-2} \bigl\langle X^{n}\bigl(s^{-}\bigr) , dM^{n}(s) \bigr\rangle }_{\mathbf{D^{n}}} \\ &{} + \frac{p(p-1)}{2} \underbrace{ \int_{0}^{t} \bigl\Vert X^{n}\bigl(s^{-} \bigr)\bigr\Vert ^{p-2} d\bigl[M^{n}\bigr]^{c}(s)}_{\mathbf{E^{n}}}+F^{n}. \end{aligned}$$
(3)

We claim that inequality (3) (after choosing a suitable subsequence) converges term by term to the following inequality:

$$\begin{aligned} \underbrace{\bigl\Vert X(t)\bigr\Vert ^{p}}_{\mathbf{A}} \le& \underbrace{\Vert X_{0}\Vert ^{p}}_{\mathbf{B}} + p \underbrace{ \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X\bigl(s^{-}\bigr) , dV(s)\bigr\rangle }_{\mathbf{C}} \\ &{} + p \underbrace{ \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X\bigl(s^{-}\bigr) , dM(s) \bigr\rangle }_{\mathbf{D}} \\ &{} + \frac{p(p-1)}{2} \underbrace{ \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} d[M]^{c}(s)}_{\mathbf{E}}+F, \end{aligned}$$

where

$$F = \sum_{0\le s\le t} \bigl( \bigl\Vert X(s)\bigr\Vert ^{p} - \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p} - p \bigl\Vert X\bigl(s^{-}\bigr) \bigr\Vert ^{p-2} \bigl\langle X\bigl(s^{-}\bigr) , \Delta Z(s) \bigr\rangle \bigr). $$

We prove this claim in several steps.

(Step 1) We claim that \(\mathbb{E} |V^{n}-V|(t)^{p} \to0\). Let \(q(t)\) be the Radon-Nikodym derivative of \(V(t)\) with respect to \(|V|(t)\). We know that, for every t, \(\|q(t)\|\le1\). We have

$$\mathbb{E} \bigl\vert V^{n}-V\bigr\vert (t)^{p} = \mathbb{E} \biggl( \int_{0}^{t} \bigl\Vert \bigl(R(n)-I\bigr) q(s) \bigr\Vert \, d|V|(s) \biggr)^{p}. $$

Note that, for all s and ω, \(\|(R(n)-I) q(s)\| \le2 \) and tends to zero, and since \(|V|(t)<\infty\) a.s., by the Lebesgue dominated convergence theorem, \(\int_{0}^{t} \|(R(n)-I) q(s)\| d|V|(s)\to 0\) a.s. and is dominated by \(2|V|(t)\). Now since \(\mathbb{E} |V|(t)^{p} < \infty\), using the Lebesgue dominated convergence theorem, we find that \(\mathbb{E} (\int_{0}^{t} \|(R(n)-I) q(s)\|\,d|V|(s) )^{p}\to 0\), and the claim is proved.

(Step 2) We claim that \(\mathbb{E} [M^{n}-M](t)^{\frac{p}{2}} \to0\). Note that \([M^{n}-M](t)\le2[M^{n}](t)+2[M](t)\le4[M](t)\), and hence \([M^{n}-M](t)^{\frac{p}{2}}\) is dominated by \(4^{\frac{p}{2}}[M](t)^{\frac{p}{2}}\). On the other hand, \(\mathbb {E}[M^{n}-M](t) = \mathbb{E}\|M^{n}(t)-M(t)\|^{2}\to0\). Hence, \([M^{n}-M](t)\), and consequently \([M^{n}-M](t)^{\frac{p}{2}}\), tends to 0 in probability, and therefore by the Lebesgue dominated convergence theorem its expectation also tends to 0.

(Step 3) We claim that

$$ \mathbb{E}\sup_{0\le s \le t} \bigl\Vert X^{n}(s)-X(s)\bigr\Vert ^{p} \to0. $$
(4)

We have

$$\begin{aligned} \bigl\Vert X^{n}(s)-X(s)\bigr\Vert ^{p} \le&3^{p} \underbrace{\bigl\Vert S(s) \bigl( X^{n}_{0}-X_{0} \bigr)\bigr\Vert ^{p}}_{\mathbf{A_{1}}} \\ &{} + 3^{p} \underbrace{\biggl\Vert \int_{0}^{s} S(s-r) d\bigl(V^{n}(r)-V(r) \bigr)\biggr\Vert ^{p}}_{\mathbf{A_{2}}} \\ &{} + 3^{p} \underbrace{\biggl\Vert \int_{0}^{s} S(s-r) d\bigl(M^{n}(r)-M(r) \bigr)\biggr\Vert ^{p}}_{\mathbf{A_{3}}}. \end{aligned}$$

For \(\mathbf{A_{1}}\), we have

$$\mathbb{E}\sup_{0\le s \le t} \mathbf{A_{1}} \le\mathbb{E} \bigl\Vert X_{0}^{n}-X_{0}\bigr\Vert ^{p} \to0. $$

For \(\mathbf{A_{2}}\), we have

$$\mathbb{E}\sup_{0\le s \le t} \mathbf{A_{2}} \le \mathbb{E} \bigl\vert V^{n}-V\bigr\vert (t)^{p} \to0, $$

where we have used Step 1. For \(\mathbf{A_{3}}\), we use the Burkholder-type inequality (Theorem 5) for \(\alpha=0\) and find

$$\mathbb{E}\sup_{0\le s \le t} \mathbf{A_{3}} \le K_{p} \mathbb{E} \bigl( \bigl[M^{n}-M\bigr](t)^{\frac{p}{2}} \bigr)\to0, $$

where we have used Step 2. Hence, (4) is proved.

(Step 4) We claim that

$$ \mathbb{E}\sup_{0\le s \le t} \bigl\Vert X^{n}(s)\bigr\Vert ^{p} \to\mathbb{E}\sup _{0\le s \le t} \bigl\Vert X(s)\bigr\Vert ^{p}. $$
(5)

By the triangle inequality,

$$\begin{aligned} &\Bigl\vert \Bigl(\mathbb{E}\sup_{0\le s \le t} \bigl\Vert X^{n}(s)\bigr\Vert ^{p} \Bigr)^{\frac{1}{p}} - \Bigl( \mathbb{E}\sup_{0\le s \le t} \bigl\Vert X(s)\bigr\Vert ^{p} \Bigr)^{\frac{1}{p}}\Bigr\vert \\ &\quad\le \Bigl( \mathbb{E} \Bigl\vert \sup_{0\le s \le t} \bigl\Vert X^{n}(s)\bigr\Vert - \sup_{0\le s \le t} \bigl\Vert X(s)\bigr\Vert \Bigr\vert ^{p} \Bigr)^{\frac{1}{p}} \\ &\quad\le \Bigl( \mathbb{E} \sup_{0\le s \le t} \bigl\vert \bigl\Vert X^{n}(s)\bigr\Vert -\bigl\Vert X(s)\bigr\Vert \bigr\vert ^{p} \Bigr)^{\frac{1}{p}} \\ &\quad\le \Bigl( \mathbb{E} \sup_{0\le s \le t} \bigl\Vert X^{n}(s)-X(s)\bigr\Vert ^{p} \Bigr)^{\frac {1}{p}} \to0, \end{aligned}$$

where in the last line, we have used Step 3. Hence, (5) is proved, and in particular the sequence \(\mathbb{E}\sup_{0\le s \le t} \| X^{n}(s)\|^{p}\) is bounded for each t.

(Step 5) We claim that \(\mathbb{E}|\mathbf{C^{n}}-\mathbf{C}| \to 0\). We have

$$\begin{aligned} \mathbb{E}\bigl|\mathbf{C^{n}}-\mathbf{C}\bigr| \le{}&\underbrace{\mathbb{E}\biggl| \int _{0}^{t} \bigl(\bigl\Vert X^{n} \bigl(s^{-}\bigr)\bigr\Vert ^{p-2}-\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigr)\bigl\langle X^{n}\bigl(s^{-}\bigr) , dV^{n}(s)\bigr\rangle \biggr|}_{\mathbf{C^{n}_{1}}} \\ &{}+ \underbrace{\mathbb{E}\biggl| \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigl\langle X^{n}\bigl(s^{-}\bigr)-X\bigl(s^{-} \bigr) , dV^{n}(s)\bigr\rangle \biggr|}_{\mathbf{C^{n}_{2}}} \\ &{}+\underbrace{\mathbb{E}\biggl| \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigl\langle X\bigl(s^{-}\bigr) , d\bigl(V^{n}(s)-V(s) \bigr)\bigr\rangle \biggr|}_{\mathbf{C^{n}_{3}}}. \end{aligned}$$

For the term \(\mathbf{C^{n}_{1}}\), we have

$$ \mathbf{C^{n}_{1}} \le\mathbb{E} \bigl( \bigl(\sup\bigl\vert \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} - \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigr\vert \bigr) \bigl(\sup\bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert \bigr) \bigl\vert V^{n}\bigr\vert (t) \bigr). $$

Now, using the simple inequality \(|a-b|^{r} \le|a^{r} - b^{r}|\) for \(r\ge 1\) and \(a,b\in\mathbb{R}^{+}\), we have \(|\|X^{n}(s^{-})\|^{p-2} - \|X(s^{-})\| ^{p-2}| \le|\|X^{n}(s^{-})\|^{p}-\|X(s^{-})\|^{p}|^{\frac{p-2}{p}}\). Substituting and using the Hölder inequality, we find

$$\le \bigl( \mathbb{E} \sup\bigl\vert \bigl\Vert X^{n}\bigl(s^{-} \bigr)\bigr\Vert ^{p} -\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p}\bigr\vert \bigr)^{\frac {p-2}{p}} \bigl( \mathbb{E} \sup \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p} \bigr)^{\frac{1}{p}} \bigl( \mathbb{E}\bigl\vert V^{n}\bigr\vert (t)^{p} \bigr)^{\frac{1}{p}}. $$

The second term above is bounded (according to Step 4), and the third term is bounded by \((\mathbb{E}|V|(t)^{p} )^{\frac{1}{p}}\) since \(|V^{n}|(t)\le|V|(t)\). We claim that the first term, after choosing a subsequence, converges to zero. We know from Step 3 that \(\mathbb{E}\sup_{0\le s \le t} \| X^{n}(s)-X(s)\|^{p} \to0\). Hence, we can choose a subsequence \(n_{k}\) for which \(\sup_{0\le s \le t} \| X^{n_{k}}(s)-X(s)\|^{p} \to0\), a.s. We have also \(\sup_{0\le s \le t} \|X(s)\|<\infty\) a.s., and hence

$$\sup\bigl\vert \bigl\Vert X^{n_{k}}\bigl(s^{-}\bigr)\bigr\Vert ^{p} -\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p}\bigr\vert \to0 \quad \mbox{a.s.} $$

On the other hand,

$$\begin{aligned} &\sup_{0\le s\le t} \bigl\vert \bigl\Vert X^{n_{k}} \bigl(s^{-}\bigr)\bigr\Vert ^{p} -\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p}\bigr\vert \\ &\quad\le2^{p} \sup_{0\le s\le t} \bigl\Vert X^{n_{k}}\bigl(s^{-}\bigr)-X\bigl(s^{-}\bigr)\bigr\Vert ^{p} + \bigl(2^{p}+1\bigr) \sup_{0\le s\le t} \bigl\Vert X\bigl(s^{-} \bigr)\bigr\Vert ^{p}. \end{aligned}$$

Hence, by the dominated convergence theorem we have

$$\mathbb{E} \sup_{0\le s\le t} \bigl\vert \bigl\Vert X^{n_{k}}\bigl(s^{-}\bigr)\bigr\Vert ^{p} -\bigl\Vert X \bigl(s^{-}\bigr)\bigr\Vert ^{p}\bigr\vert \to0, $$

and therefore, for the same subsequence, \(\mathbf{C^{n}_{1}}\to0\).
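The elementary inequality invoked above admits a one-line proof (included here for completeness): for \(r\ge1\) and \(a\ge b\ge0\),

```latex
a^{r} = \bigl( (a-b) + b \bigr)^{r} \ge (a-b)^{r} + b^{r}
\quad\Longrightarrow\quad
(a-b)^{r} \le a^{r} - b^{r},
% where the first step uses the superadditivity of t \mapsto t^{r}
% on [0,\infty) for r \ge 1, i.e. (u+v)^{r} \ge u^{r} + v^{r}.
```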

For the term \(\mathbf{C^{n}_{2}}\), we have

$$\mathbf{C^{n}_{2}} \le\mathbb{E} \Bigl( \Bigl(\sup _{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \Bigr) \Bigl(\sup_{0\le s\le t}\bigl\Vert X^{n}\bigl(s^{-} \bigr)-X\bigl(s^{-}\bigr)\bigr\Vert \Bigr)\bigl\vert V^{n}\bigr\vert (t) \Bigr). $$

By the Hölder inequality we have

$$\le \Bigl( \mathbb{E} \sup_{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr) \bigr\Vert ^{p} \Bigr)^{\frac {p-2}{p}} \Bigl( \mathbb{E} \sup _{0\le s\le t}\bigl\Vert X^{n}\bigl(s^{-}\bigr)-X\bigl(s^{-} \bigr)\bigr\Vert ^{p} \Bigr)^{\frac{1}{p}} \bigl(\mathbb{E} \bigl\vert V^{n}\bigr\vert (t)^{p} \bigr)^{\frac{1}{p}}. $$

The first and third terms are bounded, and the second term tends to zero by Step 3. Hence, \(\mathbf{C^{n}_{2}}\to0\).

For the term \(\mathbf{C^{n}_{3}}\), we have

$$\mathbf{C^{n}_{3}} \le\mathbb{E} \Bigl( \Bigl(\sup _{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-1} \Bigr) \bigl\vert V^{n}-V\bigr\vert (t) \Bigr). $$

By the Hölder inequality we have

$$\le \Bigl( \mathbb{E}\sup_{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr) \bigr\Vert ^{p} \Bigr)^{\frac {p-1}{p}} \bigl(\mathbb{E} \bigl(\bigl\vert V^{n}-V\bigr\vert (t)^{p}\bigr) \bigr)^{\frac{1}{p}}, $$

which tends to 0 by Step 1. Hence, \(\mathbf{C^{n}_{3}}\to0\).

(Step 6) We claim that \(\mathbb{E}|\mathbf{D^{n}}-\mathbf{D}| \to 0\). We have

$$\begin{aligned} \mathbb{E}\bigl\vert \mathbf{D^{n}}-\mathbf{D}\bigr\vert \le{}& \underbrace{\mathbb{E}\biggl\vert \int _{0}^{t} \bigl(\bigl\Vert X^{n} \bigl(s^{-}\bigr)\bigr\Vert ^{p-2}-\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigr)\bigl\langle X^{n}\bigl(s^{-}\bigr) , dM^{n}(s)\bigr\rangle \biggr\vert }_{\mathbf{D^{n}_{1}}} \\ &{}+ \underbrace{\mathbb{E}\biggl\vert \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigl\langle X^{n}\bigl(s^{-}\bigr)-X\bigl(s^{-} \bigr) , dM^{n}(s)\bigr\rangle \biggr\vert }_{\mathbf{D^{n}_{2}}} \\ &{}+\underbrace{\mathbb{E}\biggl\vert \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigl\langle X\bigl(s^{-}\bigr) , d\bigl(M^{n}(s)-M(s) \bigr)\bigr\rangle \biggr\vert }_{\mathbf{D^{n}_{3}}}. \end{aligned}$$

For the term \(\mathbf{D^{n}_{1}}\), we use Corollary 4 for \(p=1\) and find

$$\mathbf{D^{n}_{1}} \le\mathcal{C}_{p} \mathbb{E} \bigl( \bigl(\sup\bigl\vert \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} - \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigr\vert \bigr) \bigl(\sup\bigl\Vert X^{n} \bigl(s^{-}\bigr)\bigr\Vert \bigr) \bigl[M^{n}\bigr](t)^{\frac {1}{2}} \bigr). $$

Now using the simple inequality \(|a-b|^{r} \le|a^{r} - b^{r}|\) for \(r\ge1\) and \(a,b\in\mathbb{R}^{+}\), we have \(|\|X^{n}(s^{-})\|^{p-2} - \|X(s^{-})\| ^{p-2}| \le|\|X^{n}(s^{-})\|^{p}-\|X(s^{-})\|^{p}|^{\frac{p-2}{p}}\). Substituting and using the Hölder inequality, we find

$$\le\mathcal{C}_{p} \bigl( \mathbb{E} \sup\bigl\vert \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p} -\bigl\Vert X \bigl(s^{-}\bigr)\bigr\Vert ^{p}\bigr\vert \bigr)^{\frac{p-2}{p}} \bigl( \mathbb{E} \sup\bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p} \bigr)^{\frac{1}{p}} \bigl( \mathbb{E} \bigl[M^{n}\bigr](t)^{\frac{p}{2}} \bigr)^{\frac{1}{p}}. $$

The second term above is bounded (according to Step 4), and the third term is bounded by \(( \mathbb{E}[M](t)^{\frac{p}{2}} )^{\frac{1}{p}}\) since \([M^{n}](t)\le[M](t)\). The first term, by the same arguments as in Step 5, after choosing a subsequence, converges to zero.

For the term \(\mathbf{D^{n}_{2}}\), we use Corollary 4 for \(p=1\) and find

$$\mathbf{D^{n}_{2}} \le\mathcal{C}_{p} \mathbb{E} \Bigl( \Bigl(\sup_{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\Bigr) \Bigl(\sup_{0\le s\le t}\bigl\Vert X^{n}\bigl(s^{-}\bigr)-X\bigl(s^{-}\bigr)\bigr\Vert \Bigr) \bigl[M^{n}\bigr](t)^{\frac{1}{2}} \Bigr). $$

By the Hölder inequality we have

$$\le\mathcal{C}_{p} \Bigl( \mathbb{E} \sup_{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p} \Bigr)^{\frac{p-2}{p}} \Bigl( \mathbb{E} \sup_{0\le s\le t}\bigl\Vert X^{n} \bigl(s^{-}\bigr)-X\bigl(s^{-}\bigr)\bigr\Vert ^{p} \Bigr)^{\frac{1}{p}} \bigl(\mathbb{E} \bigl[M^{n}\bigr](t)^{\frac{p}{2}} \bigr)^{\frac{1}{p}}. $$

The first and third terms are bounded, and the second term tends to zero by Step 3. Hence, \(\mathbf{D^{n}_{2}}\to0\).

For the term \(\mathbf{D^{n}_{3}}\), we use Corollary 4 for \(p=1\) and find

$$\mathbf{D^{n}_{3}} \le\mathcal{C}_{p} \mathbb{E} \Bigl( \Bigl(\sup_{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-1}\Bigr) \bigl[M^{n}-M\bigr](t)^{\frac{1}{2}} \Bigr). $$

By the Hölder inequality we have

$$\le\mathcal{C}_{p} \Bigl( \mathbb{E}\sup_{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p} \Bigr)^{\frac{p-1}{p}} \bigl( \mathbb{E} \bigl(\bigl[M^{n}-M\bigr](t)^{\frac{p}{2}}\bigr) \bigr)^{\frac{1}{p}}, $$

which tends to 0 by Step 2. Hence, \(\mathbf{D^{n}_{3}}\to0\).

(Step 7) We claim that \(\mathbb{E}|\mathbf{E^{n}}-\mathbf{E}| \to 0\). We have

$$\begin{aligned} \mathbb{E}\bigl\vert \mathbf{E^{n}}-\mathbf{E}\bigr\vert \le{}& \underbrace{\mathbb{E}\biggl\vert \int _{0}^{t} \bigl(\bigl\Vert X^{n} \bigl(s^{-}\bigr)\bigr\Vert ^{p-2}-\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigr) d\bigl[M^{n}\bigr]^{c}(s)\biggr\vert }_{\mathbf{E^{n}_{1}}} \\ &{}+ \underbrace{\mathbb{E}\biggl\vert \int_{0}^{t} \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} d\bigl(\bigl[M^{n}\bigr]^{c}(s)-[M]^{c}(s) \bigr)\biggr\vert }_{\mathbf{E^{n}_{2}}}. \end{aligned}$$

For the term \(\mathbf{E^{n}_{1}}\), we have

$$\mathbf{E^{n}_{1}} \le\mathbb{E} \Bigl( \Bigl(\sup _{0\le s\le t}\bigl\vert \bigl\Vert X^{n}\bigl(s^{-}\bigr) \bigr\Vert ^{p-2}-\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigr\vert \Bigr) \bigl[M^{n}\bigr]^{c}(t) \Bigr). $$

Now, using the simple inequality \(|a-b|^{r} \le|a^{r} - b^{r}|\) for \(r\ge 1\) and \(a,b\in\mathbb{R}^{+}\), we have

$$\bigl\vert \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} - \bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2}\bigr\vert \le\bigl\vert \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p}-\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p}\bigr\vert ^{\frac{p-2}{p}}. $$

Substituting and using the Hölder inequality, we find

$$\le \Bigl( \mathbb{E} \sup_{0\le s\le t}\bigl\vert \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p} -\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p}\bigr\vert \Bigr)^{\frac{p-2}{p}} \bigl( \mathbb{E} \bigl[M^{n}\bigr]^{c}(t)^{\frac{p}{2}} \bigr)^{\frac{2}{p}}. $$

The second term above is bounded by \(( \mathbb{E}[M](t)^{\frac{p}{2}} )^{\frac{2}{p}}\) since \([M^{n}]^{c}(t)\le[M](t)\). The first term, by the same arguments as in Step 5, after choosing a subsequence, converges to zero.
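The elementary inequality \(|a-b|^{r} \le |a^{r}-b^{r}|\) for \(r\ge1\) and \(a,b\ge0\) invoked in this estimate admits a quick numerical check (ours, illustrative only; it follows from the superadditivity of \(t\mapsto t^{r}\) on \(\mathbb{R}^{+}\)):

```python
# Check the elementary inequality |a - b|^r <= |a^r - b^r|
# for r >= 1 and a, b >= 0, on random inputs.
import random

random.seed(1)
for _ in range(10000):
    a = random.uniform(0, 10)
    b = random.uniform(0, 10)
    r = random.uniform(1, 5)
    # superadditivity of t -> t^r gives max(a,b)^r >= min(a,b)^r + |a-b|^r
    assert abs(a - b) ** r <= abs(a ** r - b ** r) + 1e-9
```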

For the term \(\mathbf{E^{n}_{2}}\), we have

$$\mathbf{E^{n}_{2}} \le\mathbb{E} \Bigl( \Bigl(\sup _{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \Bigr) \bigl([M]^{c}(t)-\bigl[M^{n}\bigr]^{c}(t) \bigr) \Bigr). $$

By the Hölder inequality we have

$$\le \Bigl( \mathbb{E} \sup_{0\le s\le t}\bigl\Vert X\bigl(s^{-}\bigr) \bigr\Vert ^{p} \Bigr)^{\frac{p-2}{p}} \bigl( \mathbb{E} \bigl([M]^{c}(t)-\bigl[M^{n}\bigr]^{c}(t) \bigr)^{\frac{p}{2}} \bigr)^{\frac{2}{p}}. $$

The first term is a constant. For the second term, we have \(0\le [M]^{c}(t)-[M^{n}]^{c}(t)\le[M](t)-[M^{n}](t) \le[M](t)\), and hence \(([M]^{c}(t)-[M^{n}]^{c}(t))^{\frac{p}{2}}\) is dominated by \([M](t)^{\frac{p}{2}}\). On the other hand, \(\mathbb{E} ([M]^{c}(t)-[M^{n}]^{c}(t)) \le \mathbb{E} (\|M(t)\|^{2} - \|M^{n}(t)\|^{2}) \to0\). Hence, \([M]^{c}(t)-[M^{n}]^{c}(t)\) tends to 0 in probability; consequently, \(([M]^{c}(t)-[M^{n}]^{c}(t))^{\frac{p}{2}}\) also tends to 0 in probability, and therefore by the Lebesgue dominated convergence theorem its expectation tends to 0. Hence, \(\mathbf{E^{n}_{2}}\to0\).

(Step 8) We claim that \(\mathbf{F^{n}}\to\mathbf{F}\) a.s. We use the following lemma, which is proved later.

Lemma 12

For \(x,y\in H\), we have

$$\|x+y\|^{p}-\|x\|^{p} -p\|x\|^{p-2}\langle x , y \rangle\le\frac {1}{2}p(p-1) \bigl(\Vert x\Vert ^{p-2} +\Vert x+y\Vert ^{p-2}\bigr)\|y\|^{2}. $$

Note that the semimartingale \(Z(s)\) is càdlàg and hence continuous except at a countable set of points \(0\le s\le t\); these are the only points at which the terms in the sums \(\mathbf{F}\) and \(\mathbf{F^{n}}\) are nonzero.

By Lemma 12,

$$\begin{aligned} &\bigl\vert \bigl\Vert X^{n}(s)\bigr\Vert ^{p} - \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p} - p \bigl\Vert X^{n}\bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X^{n}\bigl(s^{-}\bigr) , \Delta Z^{n}(s) \bigr\rangle \bigr\vert \\ &\quad\le\frac{1}{2}p(p-1) \bigl(\bigl\Vert X^{n}\bigl(s^{-} \bigr)\bigr\Vert ^{p-2} +\bigl\Vert X^{n}(s)\bigr\Vert ^{p-2}\bigr)\bigl\Vert \Delta Z^{n}(s)\bigr\Vert ^{2} \\ &\quad\le p(p-1) \Bigl(\sup_{0\le s\le t} \bigl\Vert X^{n}(s)\bigr\Vert ^{p-2}\Bigr)\bigl\Vert \Delta Z^{n}(s)\bigr\Vert ^{2}. \end{aligned}$$
(6)

As in Step 5, we choose a subsequence \(n_{k}\) for which there exists \(\Omega_{0}\subset\Omega\) with \(\mathbb{P}(\Omega_{0})=1\) such that \(\sup_{0\le s \le t} \| X^{n_{k}}(s)-X(s)\|^{p} \to0\) for \(\omega\in \Omega_{0}\). Hence, for \(\omega\in\Omega_{0}\), \(\|X^{n_{k}}(s)\| \to\|X(s)\|\) uniformly in s, and in particular \(\sup_{k} \sup_{0\le s\le t} \|X^{n_{k}}(s)\|^{p-2} <\infty\). Note also that \(\|\Delta Z^{n}(s)\|^{2}\le\|\Delta Z(s)\|^{2}\) and that \(\sum\|\Delta Z(s)\|^{2} < \infty\). Hence, by (6), for \(\omega\in\Omega_{0}\), \(\mathbf{F^{n_{k}}}\) is dominated by an absolutely convergent series. On the other hand, since for \(\omega\in \Omega_{0}\), \(\|X^{n_{k}}(s)\| \to\|X(s)\|\), the terms of \(\mathbf{F^{n_{k}}}\) converge to the terms of \(\mathbf{F}\). Hence, by the dominated convergence theorem for series, we have \(\mathbf{F^{n_{k}}}\to\mathbf{F}\) for \(\omega\in\Omega_{0}\).

Proof of Lemma 12

Define \(f(t)= \|x+ty\|^{p}\). Then

$$f'(t)= p\|x+ty\|^{p-2}\langle x+ty , y \rangle $$

and

$$f''(t)= p\|x+ty\|^{p-2}\|y\|^{2} +p(p-2)\|x+ty\|^{p-4} \langle x+ty , y \rangle^{2} \le p(p-1) \|x+ty\|^{p-2}\|y\|^{2}. $$

By Taylor’s remainder theorem, we have that, for some \(\tau\in[0,1]\),

$$f(1)-f(0)-f'(0)=\frac{1}{2}f''( \tau) \le\frac{1}{2} p(p-1)\|x+\tau y\| ^{p-2}\|y\|^{2}. $$

Since \(\|x+\tau y\|\le\max(\|x\|,\|x+y\|)\), we have

$$f(1)-f(0)-f'(0) \le\frac{1}{2}p(p-1) \bigl(\Vert x\Vert ^{p-2} +\|x+y\|^{p-2}\bigr)\|y\| ^{2}, $$

which completes the proof. □
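Lemma 12 can also be sanity-checked numerically. The following sketch (ours, illustrative only) verifies the inequality in \(H=\mathbb{R}^{d}\) with the Euclidean inner product for random vectors and an exponent \(p\ge2\):

```python
# Numerical check of Lemma 12 in H = R^d:
#   ||x+y||^p - ||x||^p - p ||x||^{p-2} <x, y>
#     <= (1/2) p (p-1) (||x||^{p-2} + ||x+y||^{p-2}) ||y||^2.
import math
import random

random.seed(2)

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

p = 3.5  # the lemma is used for p >= 2
d = 5
for _ in range(5000):
    x = [random.uniform(-1, 1) for _ in range(d)]
    y = [random.uniform(-1, 1) for _ in range(d)]
    nx = norm(x)
    nxy = norm([a + b for a, b in zip(x, y)])
    ny = norm(y)
    lhs = nxy ** p - nx ** p - p * nx ** (p - 2) * dot(x, y)
    rhs = 0.5 * p * (p - 1) * (nx ** (p - 2) + nxy ** (p - 2)) * ny ** 2
    assert lhs <= rhs + 1e-9
```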

4 An application to the stability of the semilinear stochastic evolution equations

In this section, we consider a semilinear stochastic evolution equation with monotone nonlinearity and derive a sufficient condition for the stability of its solutions in the pth moment.

Let \((\Omega,\mathcal{F},\mathcal{F}_{t},\mathbb{P})\) be a filtered probability space, and let \(W_{t}\) be a cylindrical Wiener process on it with values in a separable Hilbert space K. Our goal is to study the following equation in H:

$$ dX_{t}=AX_{t} \,dt+f(t,X_{t})\,dt + g(t,X_{t})\,dW_{t}. $$
(7)

We assume the following:

Hypothesis 1

  1. (a)

    \(f(t,x,\omega):\mathbb{R}^{+}\times H\times\Omega\to H\) is measurable, \(\mathcal{F}_{t}\)-adapted, demicontinuous with respect to x, and there exists a constant M such that

    $$\bigl\langle f(t,x,\omega)-f(t,y,\omega),x-y \bigr\rangle \le M \|x-y \|^{2}; $$
  2. (b)

    \(g(t,x,\omega):\mathbb{R}^{+}\times H\times\Omega\to L_{HS}(K,H)\) is predictable, and there exists a constant C such that

    $$\bigl\Vert g(t,x,\omega)-g(t,y,\omega)\bigr\Vert _{L_{HS}(K,H)}^{2} \le C \|x-y \|^{2}, $$

where \(L_{HS}(K,H)\) denotes the space of Hilbert-Schmidt operators from K to H and demicontinuity means the following:

Definition 1

\(f:H\to H\) is called demicontinuous if whenever \(x_{n} \to x\) strongly in H, we have \(f(x_{n})\rightharpoonup f(x)\) weakly in H.

Remark

It has been shown in [13] and [14] that, under the above assumptions and an additional linear growth condition on f and g, equation (7) has a unique mild solution for any square-integrable initial condition, and it has been shown in [15] that the solution depends continuously on the initial conditions and coefficients.

Theorem 13

(Exponential stability in the pth moment)

Let \(X_{t}\) and \(Y_{t}\) be mild solutions of (7) with initial conditions \(X_{0}\) and \(Y_{0}\). Then

$$\begin{aligned} \mathbb{E} \| X_{t}-Y_{t} \| ^{p} \le& e^{\gamma t} \mathbb{E}\|X_{0}-Y_{0}\|^{p} \end{aligned}$$

for \(\gamma= p \alpha+ p M+\frac{1}{2}p(p-1) C\). In particular, if \(\gamma< 0\), then all mild solutions are exponentially stable in the \(L^{p}\) norm.

Proof

First, we consider the case \(\alpha=0\). Subtracting the equations for \(X_{t}\) and \(Y_{t}\), we obtain

$$X_{t}-Y_{t}=S_{t} (X_{0}-Y_{0}) + \int_{0}^{t} S_{t-s}\,dZ_{s}, $$

where

$$d Z_{t} = \bigl(f(t,X_{t})-f(t,Y_{t})\bigr)\,dt + d M_{t} $$

and

$$M_{t}= \int_{0}^{t} \bigl(g(s,X_{s})-g(s,Y_{s}) \bigr)\,dW_{s}. $$

Applying the Itô-type inequality (Theorem 6) for \(\alpha=0\) to \(X_{t}-Y_{t}\) and noting that \(M_{t}\) is a continuous martingale, we find

$$\begin{aligned} \Vert X_{t}-Y_{t} \Vert ^{p} \le{}&\Vert X_{0}-Y_{0} \Vert ^{p} \\ &{}+ p \underbrace{ \int_{0}^{t} {\Vert X_{s}-Y_{s} \Vert ^{p-2} \bigl\langle X_{s-}-Y_{s-} , \bigl(f(s,X_{s})-f(s,Y_{s})\bigr)\bigr\rangle \,ds}}_{\mathbf{A_{t}}} \\ &{}+ p \underbrace{ \int_{0}^{t} {\Vert X_{s}-Y_{s} \Vert ^{p-2} \langle X_{s-}-Y_{s-} , d M_{s} \rangle}}_{\mathbf{B_{t}}} \\ &{}+\frac{1}{2}p(p-1) \underbrace{ \int_{0}^{t} \Vert X_{s}-Y_{s} \Vert ^{p-2}d[M]_{s}}_{\mathbf{C_{t}}}. \end{aligned}$$
(8)

Using Hypothesis 1(a) for the term At, we find

$$ \mathbb{E}\mathbf{A_{t}} \le M \int_{0}^{t} \mathbb{E} \Vert X_{s}-Y_{s} \Vert ^{p} \,ds. $$
(9)

Using Hypothesis 1(b) for the term Ct, we find

$$ \mathbb{E}\mathbf{C_{t}} \le C \int_{0}^{t} \mathbb{E} \Vert X_{s}-Y_{s} \Vert ^{p} \,ds. $$
(10)

Taking the expectations of both sides of (8), noting that Bt is a martingale, and substituting (9) and (10) into (8), we find

$$ \mathbb{E} \Vert X_{t}-Y_{t}\Vert ^{p} \le \mathbb{E} \Vert X_{0}-Y_{0}\Vert ^{p} + \gamma \int _{0}^{t} \mathbb{E} \Vert X_{s}-Y_{s}\Vert ^{p} \,ds, $$

where \(\gamma=p M+\frac{1}{2}p(p-1)C\). Now the statement follows by Gronwall’s inequality. Hence, the proof for the case \(\alpha=0\) is complete. Now, for the general case, apply the change of variables used in Lemma 7. □
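As an illustration of Theorem 13 (our example, not from the paper), take \(H=\mathbb{R}\), \(A=0\) (so \(\alpha=0\)), \(f(t,x)=ax\), and \(g(t,x)=cx\), so that \(M=a\) and \(C=c^{2}\). The difference \(D_{t}=X_{t}-Y_{t}\) is then a geometric Brownian motion, whose pth moment is known in closed form and coincides exactly with the bound \(e^{\gamma t}\,\mathbb{E}|D_{0}|^{p}\), suggesting that the constant \(\gamma\) cannot be improved in general:

```python
# With A = 0 (alpha = 0), f(t,x) = a*x (so M = a), g(t,x) = c*x (so C = c^2)
# in H = R, the difference D_t = X_t - Y_t is a geometric Brownian motion with
#   E|D_t|^p = |D_0|^p * exp(p(a - c^2/2) t + p^2 c^2 t / 2)
#            = |D_0|^p * exp((p a + 0.5 p (p-1) c^2) t),
# which equals the bound e^{gamma t} E|D_0|^p of Theorem 13 exactly.
import math

a, c, p, t, D0 = -1.0, 0.5, 4.0, 2.0, 3.0
M, C = a, c ** 2
gamma = p * M + 0.5 * p * (p - 1) * C  # here alpha = 0

exact = abs(D0) ** p * math.exp(p * (a - c ** 2 / 2) * t + p ** 2 * c ** 2 * t / 2)
bound = abs(D0) ** p * math.exp(gamma * t)
assert abs(exact - bound) <= 1e-9 * bound
```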

References

  1. Kotelenez, P: A submartingale type inequality with applications to stochastic evolution equations. Stochastics 8, 139-151 (1982)

  2. Ichikawa, A: Some inequalities for martingales and stochastic convolutions. Stoch. Anal. Appl. 4(3), 329-339 (1986)

  3. Kotelenez, P: A stopped Doob inequality for stochastic convolution integrals and stochastic evolution equations. Stoch. Anal. Appl. 2(3), 245-265 (1984)

  4. Tubaro, L: An estimate of Burkholder type for stochastic processes defined by the stochastic integral. Stoch. Anal. Appl. 2(2), 187-192 (1984)

  5. Zangeneh, BZ: Semilinear stochastic evolution equations with monotone nonlinearities. Stoch. Stoch. Rep. 53, 129-174 (1995)

  6. Hamedani, HD, Zangeneh, BZ: Stopped Doob inequality for p-th moment, \(0< p<\infty\), stochastic convolution integrals. Stoch. Anal. Appl. 19(5), 771-798 (2001)

  7. Brzeźniak, Z, Hausenblas, E, Zhu, J: Maximal inequality of stochastic convolution driven by compensated Poisson random measures in Banach spaces. arXiv preprint arXiv:1005.1600 (2010)

  8. Jahanipur, R, Zangeneh, BZ: Stability of semilinear stochastic evolution equations with monotone nonlinearity. Math. Inequal. Appl. 3, 593-614 (2000)

  9. Peszat, S, Zabczyk, J: Stochastic Partial Differential Equations with Lévy Noise. Cambridge University Press, Cambridge (2007)

  10. Métivier, M: Semimartingales: A Course on Stochastic Processes, vol. 2. de Gruyter, Berlin (1982)

  11. Curtain, RF, Pritchard, AJ: Infinite Dimensional Linear Systems Theory. Springer, Berlin (1978)

  12. Pazy, A: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, Berlin (1983)

  13. Salavati, E: Semilinear stochastic evolution equations with Lévy noise and monotone nonlinearity. arXiv:1304.2122v1

  14. Salavati, E, Zangeneh, BZ: Semilinear stochastic evolution equations of monotone type with Lévy noise. In: Proceedings of Dynamic Systems and Applications, vol. 6, pp. 380-387 (2012)

  15. Salavati, E, Zangeneh, BZ: Continuous dependence on coefficients for stochastic evolution equations with multiplicative Lévy noise and monotone nonlinearity. Bull. Iran. Math. Soc. 42(1), 175-194 (2016)


Acknowledgements

This research was supported by a grant from IPM.

Author information

Correspondence to Erfan Salavati.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Salavati, E., Zangeneh, B.Z. A maximal inequality for pth power of stochastic convolution integrals. J Inequal Appl 2016, 155 (2016). https://doi.org/10.1186/s13660-016-1094-0