A Maximal Inequality for $p$th Power of Stochastic Convolution Integrals

An inequality for the $p$th power of the norm of a stochastic convolution integral in a Hilbert space is proved. The inequality is stronger than analogues inequalities in the Literature in the sense that it is pathwise and not in expectation. An application of this inequality is provided for the semilinear stochastic evolution equations with L\'evy noise and monotone nonlinear drift. The existence and uniqueness of the mild solutions in $L^p$ for these equations is proved and a sufficient condition for exponential asymptotic stability of the solutions is derived.


Introduction
Stochastic convolution integrals appear in many fields of stochastic analysis. They are integrals of the form where M t is a martingale with values in a Hilbert space. Although they are generalization of stochastic integrals but they are different in many ways. For example they are not semimartingales in general and hence the usual results on semimartingales, such as maximal inequalities (i.e. inequalities for sup 0≤s≤t X s ) and existence of càdlàg versions could not be applied directly to them. Among first studies in this field one can note the works of Kotelenez [7] and Ichikawa [5] where they consider stochastic convolution integrals with respect to general martingales. They prove a maximal inequality in L 2 for stochastic convolution integrals (Theorem 1). Stochastic convolution integrals arise naturally in proving existence, uniqueness and regularity of the solutions of semilinear stochastic evolution equations, dX t = AX t dt + f (t, X t )dt + g(t, X t )dM t where A is the generator of a C 0 semigroup of linear operators on a Hilbert space and M t is a martingale. The case that the coefficients are Lipschitz operators is studied well and the theorems of existence, uniqueness and continuity with respect to initial data for the solutions in L 2 is proved, see e.g Kotelenez [8]. The proofs are based on the maximal inequality for stochastic convolution integrals, that is Theorem 1. These results have been generalized in several directions. One is the maximal inequality for pth power of the norm of stochastic convolution integrals. Tubaro has proved an upper estimate for E[sup 0≤s≤t |x(s)| p ] with p ≥ 2 in the case that M t is a real Wiener process. Ichikawa [5] has proved maximal inequality for pth power, p ≥ 2 in the special case that M t is a Hilbert space valued continuous martingale. The case of general martingale is proved by Zangeneh [18] for p ≥ 2 (see Theorem 5). Hamedani and Zangeneh [4] have generalized the maximal inequality to 0 < p < ∞. Brzezniak, Hausenblas and Zu [2] have derived a maximal inequality for pth power of the norm of stochastic convolutions driven by Poisson random measures. As far as we know, the maximal inequalities proved for stochastic convolution integrals in the literature all involve expectations. The only result that provides a pathwise (almost sure) bound is Zangeneh [?] in which is proved Theorem 2 called Itö type inequality. This inequality provides a pathwise estimate for the square of the norm of stochastic convolution integrals and is the generalization of the Itö formula to stochastic convolution integrals. In Section 2 we define and state some results about stochastic convolution integrals that will be used in the sequel. In Section 3 we state and prove the main result of this article, i.e. Theorem 6, which provides a pathwise bound for the pth power of stochastic convolution integrals with respect to general martingales. The special case that the martingale is an Itö integral with respect to a Wiener process has been proved by Jahanipour and Zangeneh [6]. The pathwise nature of Theorem 6 enables one to apply it to semilinear stochastic evolution equations with non Lipschitz coefficients. We consider the drift term to be a monotone nonlinear operator and the noise term to be a compensated Poisson random measure and prove the exitence of the mild solution in L p in Theorem 15. The precise assumptions on coefficients wll be stated in Section 4. An auxiliary result is a Bichteler-Jacod inequality in Hilbert spaces proved in Theorem 9. This result has been stated and proved before in the literature, for example in [10], but we give a new proof for it. We also show the exponential stability of the mild solutions under certain conditions in Theorem 19.

Stochastic Convolution Integrals
Let H be a separable Hilbert space with inner product , . Let S t be a C 0 semigroup on H with infinitesimal generator A : D(A) → H. Furthermore we assume the exponential growth condition on S t , i.e. there exists a constant α such that S t ≤ e αt . If α = 0, S t is called a contraction semigroup. In this section we review some properties and results about convolution integrals of type X t = t 0 S t−s dM s where M t is a martingale. These are called stochastic convolution integrals. Kotelenez [8] gives a maximal inequality for stochastic convolution integrals.
Theorem 1 (Kotelenez,[8]). Assume α ≥ 0. There exists a constant C such that for any H-valued càdlàg locally square integrable martingale M t we have Remark. Hamedani and Zangeneh [4] generalized this inequality to a stopped maximal inequality for p-th moment (0 < p < ∞) of stochastic convolution integrals.
Because of the presence of monotone nonlinearity in our equation, we need a pathwise bound for stochastic convolution integrals. For this reason the following pathwise inequality for the norm of stochastic convolution integrals has been proved in Zangeneh [18].
Theorem 2 (Itö type inequality, Zangeneh [18]). Let Z t be an H-valued càdlàg locally square integrable semimartingale. If where [Z] t is the quadratic variation process of Z t .
We state here the Burkholder-Davis-Gundy (BDG) inequality and a corollary to it, for future reference.
Theorem 3 (Burkholder-Davis-Gundy (BDG) inequality). For every p ≥ 1 there exists a constant C p > 0 such that for any real valued square integrable cadlag martingale M t with M 0 = 0 and for any T ≥ 0, Proof. See [14], page 37, and the reference there.
Corollary 4. Let p ≥ 1 and C p be the constant in the BDG inequality and M t be an H-valued square integrable cadlag martingale and X t an H-valued adapted process and T ≥ 0. Then for K > 0, where Proof. See [18], Lemma 4, page 147.
We will need also the following inequality which is an analogous of Burkholder-Davies-Gundy inequality for stochastic convolution integrals.
Theorem 5 (Burkholder Type Inequality, Zangeneh [18], Theorem 2, page 147). Let p ≥ 2 and T > 0. Let S t be a contraction semigroup on H and M t be an H-valued square integrable càdlàg martingale for t ∈ [0, T ]. Then where K p is a constant depending only on p.

Itö Type Inequality for pth Power
We use the notion of semimartingale and Itö's formula as is described in Metivier [?].
is an H-valued process with finite variation |V |(t) and M (t) is an H-valued square integrable martingale with quadratic variation [M ](t). Assume that
2. If M is a continuous martingale then the inequality takes the simpler form Before proceeding to the proof of theorem we state and prove some lemmas.
Lemma 7. It suffices to prove theorem 2 for the case that α = 0.
Proof. DefineS Note thatS t is a contraction semigroup. It is easy to see that the statement forX t implies the statement for X(t).
Hence from now on we assume α = 0.
Lemma 8 (Ordinary Itö's formula for pth power). Let p ≥ 2 and assume that Z(t) is an H-valued semimartingale. Then Proof. Use Itö's formula (Metivier [?], Theorem 27.2, Page 190) for ϕ(x) = x p and note that Proof. (see also Curtain and Pritchard page 30 Theorem 2.22 for the special case dv(t) = f (t)dt.) Let q(t) be the Radon-Nikodym derivative of v(t) with respect to |v|(t), i.e, q(t) is a D(A)-valued function which is Bochner measurable with respect to d|v|(t) and v(t) = t 0 q(s)d|v|(s). We know that for every t ∈ [0, T ], q(t) ≤ 1.
Recall from semigrop theory that one can equip D(A) with an inner product by defining x, y D(A) := x, y + Ax, Ay . By closedness of A it follows that under this inner product D(A) is a Hilbert space and A : D(A) → H is a bounded linear map. Note that S(t) is also a semigroup on D(A). Hence u(t) is a convolution integral in D(A) and hence has it's value in D(A). We use the following two simple identities that hold in D(A): We have Proof of Lemma. Note that S(t) is also a semigroup on D(A). Hence X(t) is a stochastic convolution integral in D(A) and hence has it's value in D(A).
We can apply lemma 9 to term Y (t) and deduce We use the following two simple identities that hold in D(A): Now using stochastic Fubini theorem (see [14] Theorem 8.14 page 119) we find Hence we find Hence by taking limits from both sides of (1) we get Proof of Theorem 6. By using Lemma 7, we need only to prove for the case α = 0. In this case we have to prove (2) The main idea is that we approximate M (t) and V (t) by some D(A) valued processes, and for D(A) valued processes we use ordinary Itö's formula. This is done by Yosida approximations. We recall some facts from semigroup theory in the following lemma. For proofs see Pazy [12].
We have: Hence by lemma 10, X n (t) is an ordinary stochastic integral and hence we can apply Lemma 8 to it and find Since A is the generator of a contraction semigroup, we have Ax, x ≤ 0, hence we find We claim that the inequality (3) (after choosing a suitable subsequence) converges term by term in to the following inequality and hence the following will be proved: We prove this claim in several steps.
We know that for every t, q(t) ≤ 1. We have Note that for every s and ω, (R(n) − I)q(s) ≤ 2 and tends to zero and since |V |(t) < ∞, a.s. by the Lebesgue's dominated convergence theorem, We have where we have used Step 1. For A 3 , we use Burkholder type inequality (Theorem 5) for α = 0 and find where we have used Step 2. Hence (4) is proved.
By triangle inequality, where in the last line we have used Step 3. Hence (5) is proved and in particular the sequence E sup 0≤s≤t X n (s) p is bounded for each t. ( Step 5) We claim that E|C n − C| → 0. We have For the term C n 1 we have, Now using the simple inequality |a − b| r ≤ |a r − b r | for r ≥ 1 and a, b p . Substituting and using the Holder inequality we find The second term above is bounded (according to step 4) and the third term is bounded by (E|V |(t) p ) 1 p since |V n |(t) ≤ |V |(t). We claim that the first term, after choosing a subsequence, converges to zero. We know from Step 3 that E sup 0≤s≤t X n (s) − X(s) p → 0. Hence we can choose a subsequence n k for which sup 0≤s≤t X n k (s) − X(s) p → 0, a.s. We have also sup 0≤s≤t X(s) < ∞, a.s, hence On the other hand Hence by dominated convergence theorem we have and therefore for the same subsequence C n 1 → 0. For the term C n 2 we have, By Holder inequality we have The first and third terms are bounded and the second term tends to zero by Step 3. Hence C n 2 → 0. For the term C n 3 we have, By Holder inequality we have where tends to 0 by Step 1. Hence C n 3 → 0.
(Step 6) We claim that E|D n − D| → 0. We have For the term D n 1 we use Corollary 4 for p = 1 and find Now using the simple inequality |a − b| r ≤ |a r − b r | for r ≥ 1 and a, b p . Substituting and using the Holder inequality we find The second term above is bounded (according to step 4) and the third term For the term D n 2 we use Corollary 4 for p = 1 and find By Holder inequality we have The first and third terms are bounded and the second term tends to zero by Step 3. Hence D n 2 → 0. For the term D n 3 we use Corollary 4 for p = 1 and find By Holder inequality we have where tends to 0 by Step 2. Hence C n 3 → 0. (Step 7) We claim that E|E n − E| → 0. We have For the term E n 1 we have Now using the simple inequality |a − b| r ≤ |a r − b r | for r ≥ 1 and a, b ∈ R + we have Substituting and using the Holder inequality we find For the term E n 2 we have By Holder inequality we have The first term is a constant. for the second term we have 0 p 2 tends to 0 in probability and therefore by Lebesgue's dominated convergence theorem it's expectation also tends to 0. Hence E n 2 → 0.

(Step 8)
We claim that F n → F a.s. We use the following lemma that is proved later.
Lemma 12. For x, y ∈ H we have Note that the semimartingale Z(s) is cadlag and hence is continuous except at a countable set of points 0 ≤ s ≤ t, and these are the only points in which the terms in the sums F and F n are nonzero.
By Lemma 12, As in Step 5 we choose a subsequence n k for which there exists Ω 0 ⊂ Ω with P(Ω 0 ) = 1 such that sup 0≤s≤t X n k (s) − X(s) p → 0, for ω ∈ Ω 0 . Hence for ω ∈ Ω 0 , X n (s) → X(s) and in particular sup n sup s X n (s) p−2 < ∞. Note also that ∆Z n (s) 2 ≤ ∆Z(s) 2 and that ∆Z(s) 2 < ∞. Hence by (6), for ω ∈ Ω 0 , F n is dominated by an absolutely convergent series. On the other hand since for ω ∈ Ω 0 , X n (s) → X(s) hence the terms of F n converge to terms of F. Hence by the dominated convergence theorem for series, we have F n → F for ω ∈ Ω 0 .

Semilinear Stochastic Evolution Equations with Lévy Noise and Monotone Nonlinear Drift
In this section we will apply the theory developed in the last section to stochastic evolution equations. The noise term comes from a general Lévy process and has Lipschitz coefficients, but the drift term is a non linear monotone operator. The existence and uniqueness of the mild solutions of these equations in L 2 has been proved in [15]. In this section we prove the existence and uniqueness of the solution in L p for p ≥ 2 in Theorem 15. We also provide sufficient conditions under which the solutions are exponentially asymptotically stable. Let (Ω, F , F t , P) be a filtered probability space. Let (E, E) be a measurable space and N (dt, dξ) a Poisson random measure on R + ×E with intensity measure dtν(dξ). Our goal is to study the following equation in H, whereÑ (dt, dξ) = N (dt, dξ) − dtν(dξ) is the compensated Poisson random measure corresponding to N . We will use the notion of stochastic integration with respect to compensated Poisson random measure. For the definition and properties see [14] and [1].
We assume the following, demicontinuous with respect to x and there exists a constant M such that ξ, x, ω) : R + ×E ×H ×Ω → H is predictable and there exists a constant C such that E k(t, ξ, x) − k(t, ξ, y) 2 ν(dξ) ≤ C x − y 2 , (c) There exists a constant D such that (e) X 0 (ω) is F 0 measurable and E X 0 Definition 2. By a mild solution of equation (7) with initial condition X 0 we mean an adapted càdlàg process X t that satisfies We will need an estimate for the L p norm of stochastic integrals with respect to compensated Poisson random measures. For this reason we state and prove the following theorem which is a Bichteler-Jacod inequality for Poisson integrals in infinite dimensions. This theorem is essentially the Lemma 4 of [11] with an extension to 1 ≤ p ≤ 2. We provide a new proof for this theorem based on the Burkholder-Davies -Gundy inequality.
Theorem 13 (An L p bound for Stochastic Integrals with Respect to Compensated Poisson Random Measures). Let p ≥ 1. There exists a real constant denoted by C p such that if k(t, ξ, ω) is an H-valued predictable process for which the right hand side of (9) is finite then Proof. Assume that 2 n ≤ p < 2 n+1 . We prove by induction on n. Basis of induction: n = 0. In this case we have 1 ≤ p < 2 and the statement follows from Theorem 8.23 of [14]. In fact, in this case we have |k(s, ξ, ω)| p ν(dξ)ds

Induction
Step: Now assume n ≥ 1 and we have proved the statement for n − 1.
Hence p ≥ 2. Applying Burkholder-Davies-Gundy inequality we find Subtracting a compensator from the right hand side we get Note that 2 n−1 ≤ p 2 < 2 n hence we can apply the induction hypothesis to the first term on the right hand side and find By the interpolation inequality for a suitbale θ such that θ + 1−θ By the arithmetic-geometric mean inequality taking expectations and substituting in (10) the statement is proved.
Let (Ω, F , F t , P) be a filtered probability space and assume f satisfies Hypothesis 1-(a) and there exists a constant D such that f (t, x, ω) 2 ≤ D(1 + x 2 ) and assume V (t, ω) is an adapted process with càdlàg trajectories and X 0 (ω) is F 0 measurable. We will need the following theorem, Theorem 14 (Zangeneh, [?] and [17]). With assumptions made above, the equation has a unique measurable adapted càdlàg solution X t (ω). Furtheremore The main theorem of this section, Theorem 15 (Existence of the Solution in L p ). Let p ≥ 2. Then under assumptions of Hypothesis 1, equation (7) has a unique square integrable càdlàg mild solution X(t) such that E sup 0≤s≤t X(s) p < ∞.
Lemma 16. It suffices to prove theorem 15 for the case that α = 0.
Note thatS t is a contraction semigroup. It is easy to see that X t is a mild solution of equation (7) if and only ifX t = e −αt X t is a mild solution of equation with coefficientsS,f ,k.
Proof of Theorem 15. Existence and uniqueness of the mild solution in L 2 has been proved in [15], Theorem 4. Uniqueness in L 2 implies the uniqueness in L p for p ≥ 2. It remains to prove the existence in L p .
Existence. It suffices to prove the existence of a solution on a finite interval [0, T ]. Then one can show easily that these solutions are consistent and give a global solution. We define adapted càdlàg processes X n t recursively as follows.
is defined. Theorem 14 implies that there exists an adapted càdlàg solution X n t of where It is proved in [15] that {X n } converge to some adapted càdlàg process X t in the sense that and that X t is the mild solution of equation (7). We wish to show that {X n } converge to X t in L p with the supremum norm. This is done by the following two lemmas.
Proof. We prove by induction on n. By Theorem 14 we have the following estimate, Hence, Taking supremum and using Cauchy-Schwartz inequality we find Using Hypothesis 1-(c) and Holder's inequality we find Since p 2 ≥ 1, we can apply Theorem 13 to second term and find Combining (??) and (12) we find by Hypothesis 1 (c) we have, where C 1 = 2 p 2 D(1 + C p 2 ) and C 2 = 2 p 2 C p 2 D, now by Holder inequality we find, which is finite by induction. The basis of induction follows directly from Hypothesis 1-(e).
where C 0 and C 1 are constants that are introduced below.
Applying Theorem 6, for α = 0, we have Note that for a càdlàg function the set of discontinuity points is countable, hence when integrating with respect to Lebesgue measure, they can be neglected. Therefore from now on, we neglect the left limits in integrals with respect to Lebesgue measure. So, for the term A t , the semimonotonicity assumption on f implies Now, taking expectations on both sides of (17) and substituting (18) and (??) and (19) and noting that B t is a martingale we find, Theorem 19 (Exponential Stability in the pth Moment). Let X t and Y t be mild solutions of (7) with initial conditions X 0 and Y 0 . Then for γ = pα + pM + 1 2 p(p − 1)C + 1 2 p(p − 1)((2 p−2 + 1)C + 2 p−2 F ). In particular, if γ < 0 then all mild solutions are exponentially stable in the L p norm.
Proof. First we consider the case that α = 0. Subtract X t and Y t , Applying Itö type inequality (Theorem 2), for α = 0, to X t − Y t and rewriting it with respect to random Poisson measure, we find Using Hypothesis 1 (a) for term A t we find Using Hypothesis 1 (b) for term C t we find For term D s we have by Lemma 12, Taking expectations on both sides of (20) and noting that B t is a martingale and substituting (21), (22) and (23) we find where γ = pM + 1 2 p(p − 1)C + 1 2 p(p − 1)((2 p−2 + 1)C + 2 p−2 F ). Now applying Gronwall's inequality the statement follows. Hence the proof for the case α = 0 is complete. Now for the general case, apply the change of variables used in Lemma 16.
Remark. The results of this section remain valid by adding a Wiener noise term to equation (7). i.e. for the equation dX t = AX t dt + f (t, X t )dt + g(t, X t− )dW t + E k(t, ξ, X t− )Ñ (dt, dξ), (24) where W t is a cylindrical Wiener process on a Hilbert space K, independent of N and g(t, x, ω) : R + × H × Ω → L HS (K, H) (Space of Hilbert-Schmidt operators from K to H) is Lipschitz and has linear growth. The proofs are straightforward generalizations of the proofs of this section.