A maximal inequality for pth power of stochastic convolution integrals
Journal of Inequalities and Applications volume 2016, Article number: 155 (2016)
Abstract
We prove an inequality for the pth power of the norm of a stochastic convolution integral in a Hilbert space. The inequality is stronger than analogous inequalities in the literature in the sense that it is pathwise and not in expectation.
1 Introduction
Stochastic convolution integrals appear in many fields of stochastic analysis. They are integrals of the form
$$X_{t}=\int_{0}^{t} S_{t-s}\,dM_{s}, $$
where \(M_{t}\) is a martingale with values in a Hilbert space. Although they generalize stochastic integrals, they are different in many ways. For example, they are not semimartingales in general, and hence the usual results on semimartingales, such as maximal inequalities (i.e., inequalities for \(\sup_{0\le s\le t} \|X_{s}\|\)) and the existence of càdlàg versions, cannot be applied directly to them.
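Although the results below are infinite-dimensional, the object itself is easy to visualize in the scalar case \(H=\mathbb{R}\), \(S_{t}=e^{-at}\) with \(a\ge0\), and \(M_{t}=W_{t}\) a Brownian motion. The following left-point Euler discretization is a toy illustration of ours (all names and parameters are illustrative, not from the paper):

```python
import numpy as np

def stochastic_convolution(dW, a, dt):
    """Left-point Euler approximation of X_t = int_0^t e^{-a(t-s)} dW_s:
    X_{t_i} = sum_{j<=i} e^{-a(t_i - t_j)} dW_j."""
    n = len(dW)
    t = dt * np.arange(1, n + 1)
    X = np.array([np.sum(np.exp(-a * (t[i] - t[:i + 1])) * dW[:i + 1])
                  for i in range(n)])
    return t, X

rng = np.random.default_rng(0)
n, T = 1000, 1.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
t, X = stochastic_convolution(dW, a=1.0, dt=dt)
```

For \(a=0\) the semigroup is the identity and the convolution reduces to the Brownian path itself, which is a convenient consistency check of the discretization.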
Among the first studies in this field, Kotelenez [1] and Ichikawa [2] considered stochastic convolution integrals with respect to general martingales. They proved a maximal inequality in \(L^{2}\) for stochastic convolution integrals (Theorem 1).
Stochastic convolution integrals arise naturally in proving the existence, uniqueness, and regularity of the solutions of semilinear stochastic evolution equations
$$dX_{t}=\bigl(AX_{t}+f(t,X_{t})\bigr)\,dt+g(t,X_{t})\,dM_{t}, $$
where A is the generator of a \(C_{0}\) semigroup of linear operators on a Hilbert space, and \(M_{t}\) is a martingale. The case where the coefficients are Lipschitz operators is studied well, and theorems on the existence, uniqueness, and continuity with respect to initial data for the solutions in \(L^{2}\) are proved; see, for example, Kotelenez [3]. The proofs are based on the maximal inequality for stochastic convolution integrals, that is, Theorem 1.
These results have been generalized in several directions. One is the maximal inequality for the pth power of the norm of stochastic convolution integrals. Tubaro [4] proved an upper estimate for \(E[\sup_{0\le s\le t}\vert x(s)\vert^{p}]\) with \(p\ge2\) in the case where \(M_{t}\) is a real Wiener process. Ichikawa [2] proved a maximal inequality for the pth power, \(p\ge2\), in the particular case where \(M_{t}\) is a Hilbert-space-valued continuous martingale. The case of a general martingale was treated by Zangeneh [5] for \(p\ge2\) (see Theorem 5). Hamedani and Zangeneh [6] generalized the maximal inequality to \(0< p <\infty\).
Brzeźniak et al. [7] derived a maximal inequality for the pth power of the norm of stochastic convolutions driven by Poisson random measures.
As far as we know, the maximal inequalities proved for stochastic convolution integrals in the literature all involve expectations. The only exception is the result of Zangeneh [5], who proved Theorem 2, called an Itô-type inequality. This inequality provides a pathwise (almost sure) estimate for the square of the norm of stochastic convolution integrals and is a generalization of the Itô formula to stochastic convolution integrals.
In Section 2, we define and state some results about stochastic convolution integrals that will be used in the sequel. In Section 3, we state and prove the main result of this article, Theorem 6, which provides a pathwise bound for the pth power of stochastic convolution integrals with respect to general martingales. The special case where the martingale is an Itô integral with respect to a Wiener process has been proved by Jahanipur and Zangeneh [8].
As an example, we apply Theorem 6 to a semilinear stochastic evolution equation with non-Lipschitz coefficients. We consider the drift term to be a monotone nonlinear operator and the noise term to be a Wiener process and provide a sufficient condition for stability of mild solutions in \(L^{p}\) in Theorem 13. The precise assumptions on coefficients will be stated in Section 4.
2 Stochastic convolution integrals
Let H be a separable Hilbert space with inner product \(\langle\cdot ,\cdot \rangle\). Let \(S_{t}\) be a \(C_{0}\) semigroup on H with infinitesimal generator \(A:D(A)\to H\). Furthermore, we assume the exponential growth condition on \(S_{t}\), that is, there exists a constant α such that \(\| S_{t} \| \le e^{\alpha t}\). If \(\alpha =0\), then \(S_{t}\) is called a contraction semigroup.
In this section, we review some properties and results about convolution integrals of type \(X_{t}=\int_{0}^{t} S_{t-s}\,dM_{s}\) where \(M_{t}\) is a martingale. These are called stochastic convolution integrals. Kotelenez [3] gives a maximal inequality for stochastic convolution integrals.
Theorem 1
(Kotelenez [3])
Let \(\alpha\ge0\). There exists a constant C such that for any H-valued càdlàg locally square-integrable martingale \(M_{t}\), we have
Remark
Hamedani and Zangeneh [6] generalized this inequality to a stopped maximal inequality for pth moments (\(0< p<\infty\)) of stochastic convolution integrals.
Because of the presence of monotone nonlinearity in our equation, we need a pathwise bound for stochastic convolution integrals. For this reason, the following pathwise inequality for the norm of stochastic convolution integrals has been proved in Zangeneh [5].
Theorem 2
(Itô-type inequality, Zangeneh [5])
Let \(Z_{t}\) be an H-valued càdlàg locally square-integrable semimartingale. If
$$X_{t}=S_{t}X_{0}+\int_{0}^{t} S_{t-s}\,dZ_{s}, $$
then
$$\Vert X_{t}\Vert^{2}\le e^{2\alpha t}\Vert X_{0}\Vert^{2}+2\int_{0}^{t} e^{2\alpha(t-s)}\bigl\langle X_{s^{-}} , dZ_{s}\bigr\rangle+\int_{0}^{t} e^{2\alpha(t-s)}\,d[Z]_{s}, $$
where \([Z]_{t}\) is the quadratic-variation process of \(Z_{t}\).
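A sanity check in the simplest setting: for \(H=\mathbb{R}\), \(\alpha=0\), consider the discrete contraction recursion \(X_{i+1}=e^{-a\,\Delta t}(X_{i}+\Delta Z_{i})\) with \(a\ge0\). Since \(e^{-2a\,\Delta t}\le1\), the discrete analogue \(X_{n}^{2}\le X_{0}^{2}+2\sum_{i} X_{i}\,\Delta Z_{i}+\sum_{i}(\Delta Z_{i})^{2}\) holds exactly for every single path, mirroring the pathwise character of Theorem 2. The following sketch (our own toy discretization, not from the paper) verifies this:

```python
import numpy as np

def discrete_ito_inequality(X0, dZ, a, dt):
    """For the discrete contraction recursion X_{i+1} = e^{-a dt} (X_i + dZ_i),
    a >= 0 (alpha = 0), return (X_n^2, X_0^2 + 2 sum X_i dZ_i + sum dZ_i^2);
    the left side never exceeds the right side, pathwise."""
    c = np.exp(-a * dt)
    X, rhs = X0, X0 ** 2
    for dz in dZ:
        rhs += 2 * X * dz + dz ** 2     # stochastic-integral and [Z] terms
        X = c * (X + dz)                # contraction applied after the jump
    return X ** 2, rhs

rng = np.random.default_rng(2)
dt = 1e-3
dZ = rng.normal(0.0, np.sqrt(dt), 1000)
lhs, rhs = discrete_ito_inequality(1.0, dZ, a=1.0, dt=dt)
```

With \(a=0\) the recursion has no contraction and the discrete inequality becomes an identity, which is a useful check of the bookkeeping.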
We state here the Burkholder-Davis-Gundy (BDG) inequality and its corollary for future reference.
Theorem 3
(Burkholder-Davis-Gundy (BDG) inequality)
For every \(p \ge1\), there exists a constant \(\mathcal{C}_{p} >0\) such that, for any real-valued square-integrable càdlàg martingale \(M_{t}\) with \(M_{0}=0\) and for any \(T\ge0\),
$$\mathbb{E}\Bigl[\sup_{0\le t\le T}\vert M_{t}\vert^{p}\Bigr]\le\mathcal{C}_{p}\,\mathbb{E}\bigl[[M]_{T}^{\frac{p}{2}}\bigr]. $$
Proof
See [9], p.37, and the reference there. □
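As a numerical illustration of the BDG inequality, one can estimate the ratio \(\mathbb{E}[\sup_{t\le T}|W_{t}|^{p}]/\mathbb{E}[[W]_{T}^{p/2}]\) for a Brownian motion \(W\), for which \([W]_{T}=T\) deterministically. This is our own Monte Carlo sketch, not a computation from the paper:

```python
import numpy as np

def bdg_ratio(p, T=1.0, n=500, paths=2000, seed=1):
    """Monte Carlo estimate of E[sup_{t<=T} |W_t|^p] / E[[W]_T^{p/2}] for a
    Brownian motion W, whose quadratic variation is [W]_T = T."""
    rng = np.random.default_rng(seed)
    W = np.cumsum(rng.normal(0.0, np.sqrt(T / n), (paths, n)), axis=1)
    return np.mean(np.max(np.abs(W), axis=1) ** p) / T ** (p / 2)

r = bdg_ratio(p=2.0)
```

For \(p=2\), Doob's maximal inequality already gives \(\mathcal{C}_{2}\le4\), so the estimated ratio should land between 1 and 4.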
Corollary 4
Let \(p\ge1\), and let \(\mathcal{C}_{p}\) be the constant in the BDG inequality, \(M_{t}\) be an H-valued square integrable càdlàg martingale, \(X_{t}\) be an H-valued adapted process, and \(T\ge0\). Then, for \(K>0\),
where \(X_{t}^{*}=\sup_{0\le t\le T} \|X_{t}\|\).
Proof
See [5], Lemma 4, p.147. □
We will also need the following inequality, which is an analogue of the Burkholder-Davis-Gundy inequality for stochastic convolution integrals.
Theorem 5
(Burkholder-type inequality, Zangeneh [5], Theorem 2, p.147)
Let \(p\ge2\) and \(T>0\). Let \(S_{t}\) be a contraction semigroup on H, and \(M_{t}\) be an H-valued square-integrable càdlàg martingale for \(t\in[0,T]\). Then
$$\mathbb{E}\biggl[\sup_{0\le t\le T}\biggl\Vert \int_{0}^{t} S_{t-s}\,dM_{s}\biggr\Vert^{p}\biggr]\le K_{p}\,\mathbb{E}\bigl[[M]_{T}^{\frac{p}{2}}\bigr], $$
where \(K_{p}\) is a constant depending only on p.
3 Itô-type inequality for pth power
We use the notion of a semimartingale and Itô's formula as described by Métivier [10].
Theorem 6
(Itô-type inequality for pth power)
Let \(p\ge2\). Let \(Z(t)=V(t) + M(t)\) be a semimartingale, where \(V(t)\) is an H-valued process with finite variation \(|V|(t)\), and \(M(t)\) is an H-valued square-integrable martingale with quadratic variation \([M](t)\). Assume that
Let \(X_{0}(\omega)\) be \(\mathcal{F}_{0}\)-measurable and square-integrable. Define \(X(t)=S(t) X_{0} + \int_{0}^{t} S(t-s)\,dZ(s)\). Then we have
Remark
1. For \(p=2\), the theorem implies the Itô-type inequality, Theorem 2.
2. If M is a continuous martingale, then the inequality takes the simpler form
$$\begin{aligned} \bigl\Vert X(t)\bigr\Vert ^{p} \le& e^{p\alpha t} \|X_{0}\|^{p} + p \int_{0}^{t} e^{p\alpha(t-s)} \bigl\Vert X \bigl(s^{-}\bigr)\bigr\Vert ^{p-2} \bigl\langle X\bigl(s^{-}\bigr) , dZ(s) \bigr\rangle \\ &{} + \frac{1}{2}p(p-1) \int_{0}^{t} e^{p\alpha(t-s)} \bigl\Vert X \bigl(s^{-}\bigr)\bigr\Vert ^{p-2} d[M](s). \end{aligned}$$
Before proceeding to the proof of the theorem, we state and prove some lemmas.
Lemma 7
It suffices to prove Theorem 6 for the case \(\alpha=0\).
Proof
Define
$$\tilde{S}(t)=e^{-\alpha t}S(t),\qquad \tilde{X}(t)=e^{-\alpha t}X(t),\qquad \tilde{Z}(t)=\int_{0}^{t} e^{-\alpha s}\,dZ(s). $$
Now we have \(\tilde{X}(t)=\tilde{S}(t) X_{0}+\int_{0}^{t} \tilde{S}(t-s)\,d\tilde{Z}(s)\). Note that \(\tilde{S}(t)\) is a contraction semigroup. It is easy to see that the statement for \(\tilde{X}(t)\) implies the statement for \(X(t)\). □
Hence, from now on, we assume that \(\alpha=0\).
Lemma 8
(Ordinary Itô’s formula for pth power)
Let \(p\ge2\), and let \(Z(t)\) be an H-valued semimartingale. Then
Proof
Use Itô’s formula (Métivier [10], Theorem 27.2, p.190) for \(\varphi(x)=\|x\|^{p}\) and note that
$$\varphi'(x)h=p\|x\|^{p-2}\langle x , h \rangle,\qquad \varphi''(x) (h,k)=p\|x\|^{p-2}\langle h , k \rangle+p(p-2)\|x\|^{p-4}\langle x , h \rangle \langle x , k \rangle. $$
□
Lemma 9
Let \(v:[0,T]\to D(A)\) be a function with finite variation (with respect to the norm of \(D(A)\)), denoted by \(|v|(t)\). For \(u_{0} \in D(A)\), let \(u(t)=S(t) u_{0} + \int_{0}^{t} S(t-s)\,dv(s)\). Then \(u(t)\) is \(D(A)\)-valued and satisfies
Proof
(See also [11], p.30, Theorem 2.22 for the particular case \(dv(t)=f(t)\,dt\).) Let \(q(t)\) be the Radon-Nikodym derivative of \(v(t)\) with respect to \(|v|(t)\), that is, \(q(t)\) is a \(D(A)\)-valued function that is Bochner measurable with respect to \(d|v|(t)\), and \(v(t)=\int_{0}^{t} q(s)\,d|v|(s)\). We know that, for every \(t\in[0,T]\), \(\|q(t)\|\le1\).
Recall from semigroup theory that we can equip \(D(A)\) with an inner product by defining \(\langle x , y \rangle_{D(A)} := \langle x , y \rangle+ \langle Ax , Ay \rangle\). By closedness of A it follows that, under this inner product, \(D(A)\) is a Hilbert space, and \(A:D(A)\to H\) is a bounded linear map. Note that \(S(t)\) is also a semigroup on \(D(A)\). Hence, \(u(t)\) is a convolution integral in \(D(A)\) and hence has its value in \(D(A)\). We use the following two simple identities that hold in \(D(A)\):
We have
Now, using Fubini’s theorem, we find
□
Lemma 10
Let \(V(t)\) be a \(D(A)\)-valued process with finite variation in \(D(A)\), \(M(t)\) be a \(D(A)\)-valued square-integrable martingale, and \(V(0)=M(0)=0\). Let \(Z(t)=V(t) + M(t)\), and let \(X_{0}\) be \(D(A)\)-valued and \(\mathcal{F}_{0}\)-measurable. Define \(X(t)=S(t) X_{0} + \int_{0}^{t} S(t-s)\,dZ(s)\). Then \(X(t)\) is \(D(A)\)-valued and satisfies the following stochastic integral equation in H:
Proof
Note that \(S(t)\) is also a semigroup on \(D(A)\). Hence, \(X(t)\) is a stochastic convolution integral in \(D(A)\), and hence has its value in \(D(A)\). Write \(\overline{Y}(t)=S(t)X_{0}+\int_{0}^{t} S(t-s)\,dV(s)\) and \(Y(t)=\int_{0}^{t} S(t-s)\,dM(s)\). Hence, \(X(t)=\overline{Y}(t)+Y(t)\). We can apply Lemma 9 to \(\overline{Y}(t)\) and deduce that \(\overline{Y}(t)=X_{0}+\int_{0}^{t} A \overline{Y}(s)\,ds + V(t)\). Hence, it suffices to prove that \(Y(t)=\int _{0}^{t} A Y(s)\,ds + M(t)\).
Let \(\{e_{1},e_{2},e_{3},\ldots\}\) be an orthonormal basis for the Hilbert space \(D(A)\). Define \(\overline{M}^{j}(t)=\langle M(t),e_{j} \rangle\) and \(M^{k}(t)=\sum_{j=1}^{k} \overline{M}^{j}(t) e_{j}\). Let \(Y^{k}(t)=\int_{0}^{t} S(t-s)\,dM^{k}(s)\). We use the following two simple identities that hold in \(D(A)\):
We have
Now, using the stochastic Fubini theorem (see [9], Theorem 8.14, p.119), we find
Hence,
We have \(\mathbb{E}\|M(T)-M^{k}(T)\|_{D(A)}^{2}\to0\) and, by Theorem 1,
Since \(A:D(A)\to H\) is continuous, \(\mathbb{E} \sup_{0\le t\le T} \| AY(t)-AY^{k}(t)\|_{H}^{2}\to0\), and hence \(\mathbb{E}\|\int_{0}^{t} A Y(s)\,ds - \int_{0}^{t} A Y^{k}(s)\,ds\| \to0\). Taking the limits of both sides of (1), we get
□
Proof of Theorem 6
By Lemma 7 we need only to prove the theorem in the case \(\alpha=0\). In this case, we have to prove that
The main idea is that we approximate \(M(t)\) and \(V(t)\) by \(D(A)\)-valued processes, and for \(D(A)\)-valued processes, we use the ordinary Itô formula. This is done by the Yosida approximations. We recall some facts from semigroup theory in the following lemma. For proofs, see Pazy [12].
Lemma 11
For \(\lambda> 0\), \(\lambda I - A\) is invertible. Let \(R(\lambda )=\lambda(\lambda I - A)^{-1}\) and \(A(\lambda)=A R(\lambda)\). We have:
(a) \(R(\lambda): H \to D(A)\) and \(A(\lambda):H \to H\) are bounded linear maps;
(b) for every \(x\in H\), \(\|R(\lambda)x\|_{H} \le\|x\|_{H}\) and \(\langle x , A(\lambda) x \rangle\le0\);
(c) \(R(\lambda)S(t)=S(t)R(\lambda)\), and for \(x\in D(A)\), \(R(\lambda) A x = A R(\lambda) x\);
(d) for every \(x\in H\), \(\lim_{\lambda\to\infty} R(\lambda) x = x\) in H;
(e) for every \(x\in D(A)\), \(\lim_{\lambda\to\infty} A(\lambda) x = Ax\).
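The properties of Lemma 11 are easy to see numerically in finite dimensions, where every generator is bounded. The sketch below (our own illustration, with an arbitrarily chosen dissipative matrix \(A=-B^{\top}B\)) checks the contraction property (b) and the convergences (d) and (e):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(5, 5))
A = -B.T @ B                 # symmetric negative semidefinite: <Ax, x> <= 0
x = rng.normal(size=5)
I = np.eye(5)

def R(lam):
    """Yosida resolvent R(lambda) = lambda (lambda I - A)^{-1}."""
    return lam * np.linalg.inv(lam * I - A)

# (b): R(lambda) is a contraction for a dissipative generator
ratios = [np.linalg.norm(R(lam) @ x) / np.linalg.norm(x)
          for lam in (1e0, 1e2, 1e4)]
# (d), (e): R(lambda) x -> x and A(lambda) x = A R(lambda) x -> A x
err_d = np.linalg.norm(R(1e6) @ x - x)
err_e = np.linalg.norm(A @ (R(1e6) @ x) - A @ x)
```

The errors decay like \(1/\lambda\), since \(R(\lambda)x-x=(\lambda I-A)^{-1}Ax\) and \(\|(\lambda I-A)^{-1}\|\le1/\lambda\) for a dissipative generator.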
Now, for \(n=1,2,3,\ldots\) , define
$$V^{n}(t)=R(n)V(t),\qquad M^{n}(t)=R(n)M(t),\qquad Z^{n}(t)=V^{n}(t)+M^{n}(t), $$
and \(X^{n}(t)=S(t)R(n)X_{0}+\int_{0}^{t} S(t-s)\,dZ^{n}(s)\).
According to Lemma 11, \(V^{n}(t)\) is a \(D(A)\)-valued finite–variation process, \(M^{n}(t)\) is a \(D(A)\)-valued martingale, and \(Z^{n}(t)\) is a \(D(A)\)-valued semimartingale. Hence, by Lemma 10, \(X^{n}(t)\) is an ordinary stochastic integral, and hence we can apply Lemma 8 to it and find
where
Since A is the generator of a contraction semigroup, we have \(\langle A x , x \rangle\le0\), and hence,
We claim that inequality (3) (after choosing a suitable subsequence) converges term by term to the following inequality:
where
We prove this claim in several steps.
(Step 1) We claim that \(\mathbb{E} |V^{n}-V|(t)^{p} \to0\). Let \(q(t)\) be the Radon-Nikodym derivative of \(V(t)\) with respect to \(|V|(t)\). We know that, for every t, \(\|q(t)\|\le1\). We have
Note that, for all s and ω, \(\|(R(n)-I) q(s)\| \le2 \) and tends to zero, and since \(|V|(t)<\infty\) a.s., by the Lebesgue dominated convergence theorem, \(\int_{0}^{t} \|(R(n)-I) q(s)\|\,d|V|(s)\to 0\) a.s., and this integral is dominated by \(2|V|(t)\). Now, since \(\mathbb{E} |V|(t)^{p} < \infty\), using the Lebesgue dominated convergence theorem again, we find that \(\mathbb{E} (\int_{0}^{t} \|(R(n)-I) q(s)\|\,d|V|(s) )^{p}\to 0\), and the claim is proved.
(Step 2) We claim that \(\mathbb{E} [M^{n}-M](t)^{\frac{p}{2}} \to0\). Note that \([M^{n}-M](t)\le2[M^{n}](t)+2[M](t)\le4[M](t)\), and hence \([M^{n}-M](t)^{\frac{p}{2}}\) is dominated by \(4^{\frac{p}{2}}[M](t)^{\frac{p}{2}}\). On the other hand, \(\mathbb {E}[M^{n}-M](t) = \mathbb{E}\|M^{n}(t)-M(t)\|^{2}\to0\). Hence, \([M^{n}-M](t)\), and consequently \([M^{n}-M](t)^{\frac{p}{2}}\) tend to 0 in probability, and therefore by the Lebesgue dominated convergence theorem its expectation also tends to 0.
(Step 3) We claim that
We have
For \(\mathbf{A_{1}}\), we have
For \(\mathbf{A_{2}}\), we have
where we have used Step 1. For \(\mathbf{A_{3}}\), we use the Burkholder-type inequality (Theorem 5) for \(\alpha=0\) and find
where we have used Step 2. Hence, (4) is proved.
(Step 4) We claim that
By the triangle inequality,
where in the last line, we have used Step 3. Hence, (5) is proved, and in particular the sequence \(\mathbb{E}\sup_{0\le s \le t} \| X^{n}(s)\|^{p}\) is bounded for each t.
(Step 5) We claim that \(\mathbb{E}|\mathbf{C^{n}}-\mathbf{C}| \to 0\). We have
For the term \(\mathbf{C^{n}_{1}}\), we have
Now, using the simple inequality \(|a-b|^{r} \le|a^{r} - b^{r}|\) for \(r\ge 1\) and \(a,b\in\mathbb{R}^{+}\), we have \(|\|X^{n}(s^{-})\|^{p-2} - \|X(s^{-})\| ^{p-2}| \le|\|X^{n}(s^{-})\|^{p}-\|X(s^{-})\|^{p}|^{\frac{p-2}{p}}\). Substituting and using the Hölder inequality, we find
The second term above is bounded (according to Step 4), and the third term is bounded by \((\mathbb{E}|V|(t)^{p} )^{\frac{1}{p}}\) since \(|V^{n}|(t)\le|V|(t)\). We claim that the first term, after choosing a subsequence, converges to zero. We know from Step 3 that \(\mathbb{E}\sup_{0\le s \le t} \| X^{n}(s)-X(s)\|^{p} \to0\). Hence, we can choose a subsequence \(n_{k}\) for which \(\sup_{0\le s \le t} \| X^{n_{k}}(s)-X(s)\|^{p} \to0\), a.s. We have also \(\sup_{0\le s \le t} \|X(s)\|<\infty\) a.s., and hence
On the other hand,
Hence, by the dominated convergence theorem we have
and therefore, for the same subsequence, \(\mathbf{C^{n}_{1}}\to0\).
For the term \(\mathbf{C^{n}_{2}}\), we have
By the Hölder inequality we have
The first and third terms are bounded, and the second term tends to zero by Step 3. Hence, \(\mathbf{C^{n}_{2}}\to0\).
For the term \(\mathbf{C^{n}_{3}}\), we have
By the Hölder inequality we have
which tends to 0 by Step 1. Hence, \(\mathbf{C^{n}_{3}}\to0\).
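The elementary inequality \(|a-b|^{r} \le|a^{r}-b^{r}|\) for \(r\ge1\) and \(a,b\in\mathbb{R}^{+}\), used repeatedly in this step and the next ones, can be sanity-checked on random samples (a small illustration of ours):

```python
import numpy as np

# Sample check of |a - b|^r <= |a^r - b^r| for r >= 1 and a, b >= 0.
rng = np.random.default_rng(5)
a = rng.uniform(0.0, 10.0, 10_000)
b = rng.uniform(0.0, 10.0, 10_000)
holds = all(np.all(np.abs(a - b) ** r <= np.abs(a ** r - b ** r) + 1e-9)
            for r in (1.0, 1.5, 2.0, 3.7))
```

The inequality follows from the superadditivity of \(x\mapsto x^{r}\) on \(\mathbb{R}^{+}\) for \(r\ge1\): assuming \(a\ge b\), \((b+(a-b))^{r}\ge b^{r}+(a-b)^{r}\).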
(Step 6) We claim that \(\mathbb{E}|\mathbf{D^{n}}-\mathbf{D}| \to 0\). We have
For the term \(\mathbf{D^{n}_{1}}\), we use Corollary 4 for \(p=1\) and find
Now using the simple inequality \(|a-b|^{r} \le|a^{r} - b^{r}|\) for \(r\ge1\) and \(a,b\in\mathbb{R}^{+}\), we have \(|\|X^{n}(s^{-})\|^{p-2} - \|X(s^{-})\| ^{p-2}| \le|\|X^{n}(s^{-})\|^{p}-\|X(s^{-})\|^{p}|^{\frac{p-2}{p}}\). Substituting and using the Hölder inequality, we find
The second term above is bounded (according to Step 4), and the third term is bounded by \(( \mathbb{E}[M](t)^{\frac{p}{2}} )^{\frac{1}{p}}\) since \([M^{n}](t)\le[M](t)\). The first term, by the same arguments as in Step 5, after choosing a subsequence, converges to zero.
For the term \(\mathbf{D^{n}_{2}}\), we use Corollary 4 for \(p=1\) and find
By the Hölder inequality we have
The first and third terms are bounded, and the second term tends to zero by Step 3. Hence, \(\mathbf{D^{n}_{2}}\to0\).
For the term \(\mathbf{D^{n}_{3}}\), we use Corollary 4 for \(p=1\) and find
By the Hölder inequality we have
which tends to 0 by Step 2. Hence, \(\mathbf{D^{n}_{3}}\to0\).
(Step 7) We claim that \(\mathbb{E}|\mathbf{E^{n}}-\mathbf{E}| \to 0\). We have
For the term \(\mathbf{E^{n}_{1}}\), we have
Now, using the simple inequality \(|a-b|^{r} \le|a^{r} - b^{r}|\) for \(r\ge 1\) and \(a,b\in\mathbb{R}^{+}\), we have
Substituting and using the Hölder inequality, we find
The second term above is bounded by \(( \mathbb{E}[M](t)^{\frac{p}{2}} )^{\frac{2}{p}}\) since \([M^{n}]^{c}(t)\le[M](t)\). The first term, by the same arguments as in Step 5, after choosing a subsequence, converges to zero.
For the term \(\mathbf{E^{n}_{2}}\), we have
By the Hölder inequality we have
The first term is a constant. For the second term, we have \(0\le [M]^{c}(t)-[M^{n}]^{c}(t)\le[M](t)-[M^{n}](t) \le[M](t)\), and hence \(([M]^{c}(t)-[M^{n}]^{c}(t))^{\frac{p}{2}}\) is dominated by \([M](t)^{\frac{p}{2}}\). On the other hand, \(\mathbb{E} ([M]^{c}(t)-[M^{n}]^{c}(t)) \le \mathbb{E} (\Vert M(t)\Vert^{2} - \Vert M^{n}(t)\Vert^{2}) \to0\). Hence, \([M]^{c}(t)-[M^{n}]^{c}(t)\), and consequently \(([M]^{c}(t)-[M^{n}]^{c}(t))^{\frac{p}{2}}\), tends to 0 in probability, and therefore by the Lebesgue dominated convergence theorem its expectation also tends to 0. Hence, \(\mathbf{E^{n}_{2}}\to0\).
(Step 8) We claim that \(\mathbf{F^{n}}\to\mathbf{F}\) a.s. We use the following lemma, which is proved later.
Lemma 12
For \(x,y\in H\), we have
$$\|x+y\|^{p}\le\|x\|^{p}+p\|x\|^{p-2}\langle x , y \rangle+\frac{1}{2}p(p-1)\max\bigl(\|x\|,\|x+y\|\bigr)^{p-2}\|y\|^{2}. $$
Note that the semimartingale \(Z(s)\) is càdlàg and hence is continuous except at a countable set of points \(0\le s\le t\), and these are the only points at which the terms in the sums \(\mathbf{F}\) and \(\mathbf{F^{n}}\) are nonzero.
By Lemma 12,
As in Step 5, we choose a subsequence \(n_{k}\) for which there exists \(\Omega_{0}\subset\Omega\) with \(\mathbb{P}(\Omega_{0})=1\) such that \(\sup_{0\le s \le t} \| X^{n_{k}}(s)-X(s)\|^{p} \to0\) for \(\omega\in \Omega_{0}\). Hence, for \(\omega\in\Omega_{0}\), \(\|X^{n}(s)\| \to\|X(s)\|\), and in particular \(\sup_{n} \sup_{s} \|X^{n}(s)\|^{p-2} <\infty\). Note also that \(\|\Delta Z^{n}(s)\|^{2}\le\|\Delta Z(s)\|^{2}\) and that \(\sum\|\Delta Z(s)\|^{2} < \infty\). Hence, by (6), for \(\omega\in\Omega_{0}\), \(\mathbf{F^{n}}\) is dominated by an absolutely convergent series. On the other hand, since for \(\omega\in \Omega_{0}\), \(\|X^{n}(s)\| \to\|X(s)\|\), the terms of \(\mathbf{F^{n}}\) converge to the terms of \(\mathbf{F}\). Hence, by the dominated convergence theorem for series we have \(\mathbf{F^{n}}\to\mathbf{F}\) for \(\omega\in\Omega_{0}\). This completes the proof of Theorem 6. □
Proof of Lemma 12
Define \(f(t)= \|x+ty\|^{p}\). Then
$$f'(t)=p\|x+ty\|^{p-2}\langle x+ty , y \rangle $$
and
$$f''(t)=p\|x+ty\|^{p-2}\|y\|^{2}+p(p-2)\|x+ty\|^{p-4}\langle x+ty , y \rangle^{2}\le p(p-1)\|x+ty\|^{p-2}\|y\|^{2}, $$
where the last inequality follows from the Cauchy-Schwarz inequality.
By Taylor’s remainder theorem, we have that, for some \(\tau\in[0,1]\),
$$f(1)=f(0)+f'(0)+\frac{1}{2}f''(\tau). $$
Since \(\|x+\tau y\|\le\max(\|x\|,\|x+y\|)\), we have
which completes the proof. □
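The second-order bound that this Taylor argument yields, namely \(\|x+y\|^{p}\le\|x\|^{p}+p\|x\|^{p-2}\langle x,y\rangle+\frac{1}{2}p(p-1)\max(\|x\|,\|x+y\|)^{p-2}\|y\|^{2}\), can be checked numerically on random vectors; this sketch is our own illustration (for \(p=2\) the bound holds with equality):

```python
import numpy as np

def taylor_gap(x, y, p):
    """Gap (rhs - lhs) of the second-order bound from the proof of Lemma 12:
    ||x+y||^p <= ||x||^p + p ||x||^{p-2} <x, y>
               + (p (p-1) / 2) max(||x||, ||x+y||)^{p-2} ||y||^2."""
    nx, nxy = np.linalg.norm(x), np.linalg.norm(x + y)
    rhs = (nx ** p + p * nx ** (p - 2) * np.dot(x, y)
           + 0.5 * p * (p - 1) * max(nx, nxy) ** (p - 2) * np.dot(y, y))
    return rhs - nxy ** p

rng = np.random.default_rng(3)
min_gap = min(taylor_gap(rng.normal(size=5), rng.normal(size=5), p)
              for p in (2.0, 3.0, 4.5) for _ in range(200))
```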
4 An application to the stability of semilinear stochastic evolution equations
In this section, we consider a semilinear stochastic evolution equation with monotone nonlinearity and derive a sufficient condition for the stability of its mild solutions in the \(L^{p}\) norm.
Let \((\Omega,\mathcal{F},\mathcal{F}_{t},\mathbb{P})\) be a filtered probability space, and let \(W_{t}\) be a cylindrical Wiener process on it with values in a separable Hilbert space K. Our goal is to study the following equation in H:
We assume the following:
Hypothesis 1
(a) \(f(t,x,\omega):\mathbb{R}^{+}\times H\times\Omega\to H\) is measurable, \(\mathcal{F}_{t}\)-adapted, demicontinuous with respect to x, and there exists a constant M such that
$$\bigl\langle f(t,x,\omega)-f(t,y,\omega),x-y \bigr\rangle \le M \|x-y \|^{2}; $$
(b) \(g(t,x,\omega):\mathbb{R}^{+}\times H\times\Omega\to L_{HS}(K,H)\) is predictable, and there exists a constant C such that
$$\bigl\Vert g(t,x,\omega)-g(t,y,\omega)\bigr\Vert _{L_{HS}(K,H)}^{2} \le C \|x-y \|^{2}, $$
where \(L_{HS}(K,H)\) denotes the space of Hilbert-Schmidt operators from K to H and demicontinuity means the following:
Definition 1
\(f:H\to H\) is called demicontinuous if whenever \(x_{n} \to x\), strongly in H, then \(f(x_{n})\rightharpoonup f(x)\) weakly in H.
Remark
It has been shown in [13] and [14] that, under the above assumptions and an additional linear growth condition on f and g, equation (7) has a unique mild solution for any square-integrable initial condition, and it has been shown in [15] that the solution depends continuously on the initial conditions and coefficients.
Theorem 13
(Exponential stability in the pth moment)
Let \(X_{t}\) and \(Y_{t}\) be mild solutions of (7) with initial conditions \(X_{0}\) and \(Y_{0}\). Then
$$\mathbb{E}\Vert X_{t}-Y_{t}\Vert^{p}\le e^{\gamma t}\,\mathbb{E}\Vert X_{0}-Y_{0}\Vert^{p} $$
for \(\gamma= p \alpha+ p M+\frac{1}{2}p(p-1) C\). In particular, if \(\gamma< 0\), then all mild solutions are exponentially stable in the \(L^{p}\) norm.
Proof
First, we consider the case \(\alpha=0\). Subtracting \(Y_{t}\) from \(X_{t}\), we get
where
and
Applying the Itô-type inequality (Theorem 6) for \(\alpha=0\) to \(X_{t}-Y_{t}\) and noting that \(M_{t}\) is a continuous martingale, we find
Using Hypothesis 1(a) for the term \(A_{t}\), we find
Using Hypothesis 1(b) for the term \(C_{t}\), we find
Taking the expectations of both sides of (8), noting that \(B_{t}\) is a martingale, and substituting (9) and (10) into (8), we find
where \(\gamma=p M+\frac{1}{2}p(p-1)C\). Now the statement follows by Gronwall’s inequality. Hence, the proof for the case \(\alpha=0\) is complete. Now, for the general case, apply the change of variables used in Lemma 7. □
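The decay rate of Theorem 13 can be illustrated on a scalar example of our own choosing: \(H=\mathbb{R}\), \(A=0\) (so \(\alpha=0\)), \(f(x)=Mx-x^{3}\), and \(g(x)=\sqrt{C}\sin x\). These coefficients satisfy Hypothesis 1 with constants \(M\) and \(C\) (the cubic term only strengthens the monotonicity), so with \(M=-1\), \(C=0.5\), \(p=2\), we have \(\gamma=-1.5<0\) and two solutions driven by the same noise should satisfy the stated moment bound:

```python
import numpy as np

def moment_gap(p=2.0, M=-1.0, C=0.5, T=1.0, n=1000, paths=500, seed=6):
    """Euler-Maruyama check of E|X_t - Y_t|^p <= e^{gamma t} |X_0 - Y_0|^p for
    dX = (M X - X^3) dt + sqrt(C) sin(X) dW   (H = R, A = 0, alpha = 0),
    whose coefficients satisfy Hypothesis 1 with constants M and C."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.full(paths, 1.0)    # X_0 = 1
    Y = np.zeros(paths)        # Y_0 = 0, same driving noise
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), paths)
        X += (M * X - X ** 3) * dt + np.sqrt(C) * np.sin(X) * dW
        Y += (M * Y - Y ** 3) * dt + np.sqrt(C) * np.sin(Y) * dW
    gamma = p * M + 0.5 * p * (p - 1) * C
    est = np.mean(np.abs(X - Y) ** p)       # Monte Carlo E|X_T - Y_T|^p
    return est, np.exp(gamma * T)           # bound, since |X_0 - Y_0| = 1

est, bound = moment_gap()
```

Because the cubic drift and the sub-Lipschitz diffusion give extra contraction, the Monte Carlo estimate typically sits well below the theoretical bound \(e^{\gamma T}\).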
References
Kotelenez, P: A submartingale type inequality with applications to stochastic evolution equations. Stochastics 8, 139-151 (1982)
Ichikawa, A: Some inequalities for martingales and stochastic convolutions. Stoch. Anal. Appl. 4(3), 329-339 (1986)
Kotelenez, P: A stopped Doob inequality for stochastic convolution integrals and stochastic evolution equations. Stoch. Anal. Appl. 2(3), 245-265 (1984)
Tubaro, L: An estimate of Burkholder type for stochastic processes defined by the stochastic integral. Stoch. Anal. Appl. 2(2), 187-192 (1984)
Zangeneh, BZ: Semilinear stochastic evolution equations with monotone nonlinearities. Stoch. Stoch. Rep. 53, 129-174 (1995)
Hamedani, HD, Zangeneh, BZ: Stopped Doob inequality for p-th moment, \(0< p<\infty\), stochastic convolution integrals. Stoch. Anal. Appl. 19(5), 771-798 (2001)
Brzeźniak, Z, Hausenblas, E, Zhu, J: Maximal inequality of stochastic convolution driven by compensated Poisson random measures in Banach spaces. arXiv preprint arXiv:1005.1600 (2010)
Jahanipur, R, Zangeneh, BZ: Stability of semilinear stochastic evolution equations with monotone nonlinearity. Math. Inequal. Appl. 3, 593-614 (2000)
Peszat, S, Zabczyk, J: Stochastic Partial Differential Equations with Lévy Noise. Cambridge University Press, Cambridge (2007)
Métivier, M: Semimartingales: A Course on Stochastic Processes, vol. 2. de Gruyter, Berlin (1982)
Curtain, RF, Pritchard, AJ: Infinite Dimensional Linear Systems Theory. Springer, Berlin (1978)
Pazy, A: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, Berlin (1983)
Salavati, E: Semilinear stochastic evolution equations with Lévy noise and monotone nonlinearity. arXiv preprint arXiv:1304.2122 (2013)
Salavati, E, Zangeneh, BZ: Semilinear stochastic evolution equations of monotone type with Lévy noise. In: Proceedings of Dynamic Systems and Applications, vol. 6, pp. 380-387 (2012)
Salavati, E, Zangeneh, BZ: Continuous dependence on coefficients for stochastic evolution equations with multiplicative Lévy noise and monotone nonlinearity. Bull. Iran. Math. Soc. 42(1), 175-194 (2016)
Acknowledgements
This research was supported by a grant from IPM.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Salavati, E., Zangeneh, B.Z. A maximal inequality for pth power of stochastic convolution integrals. J Inequal Appl 2016, 155 (2016). https://doi.org/10.1186/s13660-016-1094-0