Estimates of the approximation of weighted sums of conditionally independent random variables by the normal law

Abstract

A Berry-Esseen-like inequality is provided for the difference between the characteristic function of a weighted sum of conditionally independent random variables and the characteristic function of the standard normal law. Some applications and possible extensions of the main result are also illustrated.

MSC: 60F05, 60G50.

1 Introduction and main result

Berry-Esseen inequalities are currently considered, within the realm of the central limit problem of probability theory, as a powerful tool to evaluate the error in approximating the law of a standardized sum of independent random variables (r.v.’s) by the normal distribution. The classical version of the statement at issue is due, independently, to Berry [1] and to Esseen [2], and it is condensed into the well-known inequality

$$\sup_{x\in\mathbb{R}}\big|F_n(x)-G(x)\big| \le C\,\frac{m_3}{\sigma^3}\,\frac{1}{\sqrt{n}}$$
(1)

valid for a given sequence $\{\xi_n\}_{n\ge1}$ of non-degenerate, independent and identically distributed (i.i.d.) real-valued r.v.'s such that $m_3 := E[|\xi_1|^3] < +\infty$. Here, $C$ is a universal constant, $\sigma^2$ stands for the variance $\operatorname{Var}\xi_1$, and $F_n$, $G$ denote the distribution functions

$$F_n(x) := P\left[\frac{\sum_{j=1}^n(\xi_j - E\xi_j)}{\sqrt{n\sigma^2}} \le x\right] \quad\text{and}\quad G(x) := \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,du,$$

respectively. The proof of (1) is based on the evaluation of an upper bound for the modulus of the difference between the characteristic function (c.f.) $\varphi_n(t) := \int_{\mathbb{R}} e^{itx}\,dF_n(x)$ and the c.f. $e^{-t^2/2}$ of the standard normal distribution. See, for example, Theorems 1-2 in Chapter 5 of [3]. In fact, under the above hypotheses on $\{\xi_n\}_{n\ge1}$, one can prove that

$$\big|\varphi_n(t) - e^{-t^2/2}\big| \le 16\,\frac{m_3}{\sigma^3}\,\frac{1}{\sqrt{n}}\,|t|^3 e^{-t^2/3}$$
(2)

holds for every $t\in[-\sigma^3\sqrt{n}/(4m_3),\ \sigma^3\sqrt{n}/(4m_3)]$, while an analogous estimate for the first two derivatives can also be obtained with a bit of work, namely

$$\left|\frac{d^l}{dt^l}\big(\varphi_n(t) - e^{-t^2/2}\big)\right| \le C\,\frac{m_3}{\sigma^3}\,\frac{1}{\sqrt{n}}\,\big(|t|^{3-l}+|t|^{6+l}\big)\,e^{-t^2/12}$$
(3)

for all $t\in[-\sigma^3\sqrt{n}/m_3,\ \sigma^3\sqrt{n}/m_3]$ and $l=1,2$, $C$ being another suitable constant. As a reference for (2)-(3) see, e.g., Lemma 1 in Chapter 5 and Lemma 4 in Chapter 6 of [3], respectively.
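Before moving on, a purely numerical illustration of the $1/\sqrt{n}$ rate in (1) may be useful (an addition for the reader, not part of the original argument): for i.i.d. Rademacher variables $\xi_j = \pm 1$, where $\sigma = m_3 = 1$, the distance $\sup_x|F_n(x)-G(x)|$ is computable exactly from the binomial law, and $\sqrt{n}$ times this distance stays bounded, as (1) predicts.

```python
from math import comb, erf, sqrt

def G(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def kolmogorov_distance(n):
    """sup_x |F_n(x) - G(x)| for S_n = (xi_1 + ... + xi_n)/sqrt(n), xi_j = +/-1."""
    pmf = [comb(n, k) / 2.0 ** n for k in range(n + 1)]
    acc, best = 0.0, 0.0
    for k in range(n + 1):
        x = (2 * k - n) / sqrt(n)          # jump point of F_n
        best = max(best, abs(acc - G(x)))  # left limit of F_n at the jump
        acc += pmf[k]
        best = max(best, abs(acc - G(x)))  # value of F_n at the jump
    return best

for n in (10, 40, 160, 640):
    print(n, sqrt(n) * kolmogorov_distance(n))
```

The printed values of $\sqrt{n}\cdot\sup_x|F_n(x)-G(x)|$ remain of the same order as $n$ grows, consistent with (1).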

Due to the significant and constantly growing employment of Berry-Esseen-like inequalities in different areas of pure and applied mathematics - such as stochastic processes, mathematical statistics, Markov chain Monte Carlo, random graphs, combinatorics, coding theory and the kinetic theory of gases - researchers have continually generalized this kind of estimate. Confining ourselves to the case of a limiting distribution coinciding with the standard normal law, the main lines of research can be summarized as follows: (1) Taking account of different kinds of stochastic dependence - typically less restrictive than the i.i.d. setting - such as sequences of independent, non-identically distributed r.v.'s (see Chapters 5-6 of [3] and the recent papers [4-6]), martingales [7], exchangeable r.v.'s [8] and Markov processes [9]. (2) Evaluating the discrepancy between $F_n$ and $G$ by means of probability metrics different from the Kolmogorov distance, i.e., the LHS of (1). Classical references are Chapters 5-6 of [10], Chapters 5-6 of [3] and Chapters 3-5 of [11], while a more recent treatment is contained in [12]. Strong metrics, such as the total variation or entropy metrics, are dealt with in [13-17]. (3) Formulating different hypotheses about the moments, both in the direction of weakening (i.e., considering moments of order $2+\delta$, with $\delta\in(0,1)$) and of sharpening (i.e., considering $k$th moments with $k>3$) the initial assumption $m_3<+\infty$. The above-mentioned references also cover variants of this kind.

The present paper contains some generalizations of the Berry-Esseen estimate, which fall within the three aforementioned lines of research, starting from a main statement (Theorem 1 below) reminiscent of inequalities (2)-(3). To present the main result in a framework as general as possible, we consider a weighted sum of the form

$$S_n := \sum_{j=1}^n c_j\xi_j,$$
(4)

where the c j ’s are constants satisfying

$$\sum_{j=1}^n c_j^2 = 1$$
(5)

and the $\xi_j$'s are conditionally independent r.v.'s, possibly non-identically distributed. To formalize this point, we assume that there exist a probability space $(\Omega,\mathcal{F},P)$ and a sub-$\sigma$-algebra $\mathcal{H}$ of $\mathcal{F}$ such that

$$P[\xi_1\in A_1,\dots,\xi_n\in A_n \mid \mathcal{H}] = \prod_{j=1}^n P[\xi_j\in A_j \mid \mathcal{H}]$$
(6)

holds almost surely (a.s.) for any $A_1,\dots,A_n$ in the Borel class $\mathcal{B}(\mathbb{R})$. We also assume that

$$E[\xi_j \mid \mathcal{H}] = 0$$
(7)

holds a.s. for $j=1,\dots,n$, and that

$$\max_{j=1,\dots,n}\ \operatorname*{ess\,sup}_{\omega\in\Omega}\ E\big[|\xi_j|^4 \mid \mathcal{H}\big](\omega) =: m_4 < +\infty$$
(8)

is in force. Then, after putting $\varphi_n(t) := E[e^{itS_n}]$ and defining the entities

$$\begin{aligned}
X &:= \sum_{j=1}^n c_j^2\,\big|E[\xi_j^2]-1\big|, \qquad & Y &:= \Big|\sum_{j=1}^n c_j^3\,E[\xi_j^3]\Big|, \\
W &:= m_4\sum_{j=1}^n c_j^4, & Z &:= E\Big[\Big(\sum_{j=1}^n c_j^2\,E[\xi_j^2\mid\mathcal{H}]-1\Big)^{2}\Big], \\
\tau &:= \frac{1}{2}W^{-1/4}, & T &:= \Big\{\omega\in\Omega \ \Big|\ \sum_{j=1}^n c_j^2\,E[\xi_j^2\mid\mathcal{H}](\omega) \le 1/3\Big\},
\end{aligned}$$
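As a concrete sanity check on these definitions, consider the following toy model (an assumption for illustration, not taken from the text): $\xi_j = \tilde\sigma\eta_j$, with $\eta_j$ i.i.d. Rademacher variables and $\tilde\sigma$ a random scale taking the values $a$, $b$ with probability $1/2$ each, conditioning on $\mathcal{H} = \sigma(\tilde\sigma)$. Then $E[\xi_j^2\mid\mathcal{H}] = \tilde\sigma^2$, $E[\xi_j^3\mid\mathcal{H}] = 0$, $E[\xi_j^4\mid\mathcal{H}] = \tilde\sigma^4$, and the entities above admit closed forms:

```python
import numpy as np

def theorem1_entities(c, a, b):
    """(X, Y, W, Z, tau) for the scale-mixture model sketched in the lead-in."""
    c = np.asarray(c, dtype=float)
    assert abs((c ** 2).sum() - 1.0) < 1e-12        # normalization (5)
    m2 = 0.5 * (a ** 2 + b ** 2)                    # E[xi_j^2], unconditional
    m4 = max(a ** 4, b ** 4)                        # ess sup of E[xi_j^4 | H]
    X = (c ** 2).sum() * abs(m2 - 1.0)
    Y = 0.0                                         # odd conditional moments vanish
    W = m4 * (c ** 4).sum()
    Z = 0.5 * ((a ** 2 - 1.0) ** 2 + (b ** 2 - 1.0) ** 2)  # E[(sum c_j^2 s^2 - 1)^2]
    tau = 0.5 * W ** -0.25
    return X, Y, W, Z, tau

n = 100
c = np.full(n, n ** -0.5)                           # equal weights, sum c_j^2 = 1
print(theorem1_entities(c, a=0.8, b=1.2))
```

Note that $Z$ does not vanish here, so the conditional-variance terms of Theorem 1 genuinely contribute.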

we state our main result.

Theorem 1 Under assumptions (5)-(8) one has

$$\left|\frac{d^l}{dt^l}\big(\varphi_n(t)-e^{-t^2/2}\big)\right| \le E\left[\left|\frac{d^l}{dt^l}E\big[e^{itS_n}\mid\mathcal{H}\big]\right|\mathbb{1}_T\right] + u_{2,l}(t)\,X + u_{3,l}(t)\,Y + u_{4,l}(t)\,W + v_l(t)\,Z$$
(9)

for every $t$ in $[-\tau,\tau]$ and $l=0,1,2$, where $u_{2,l}$, $u_{3,l}$, $u_{4,l}$, $v_l$ are rapidly decreasing continuous functions, which can be put explicitly into the form $c_1|t|^{r}(1+|t|^{m})\,e^{-t^2/c_2}$ with $r,m\in\mathbb{N}_0$ and $c_1,c_2>0$.

The general setting of this theorem is suitable to treat a vast number of standard cases. For example, in the simplest situation of a sequence $\{\xi_n\}_{n\ge1}$ of i.i.d. r.v.'s with $E[\xi_1]=0$, $E[\xi_1^2]=1$ and $E[\xi_1^4]=m_4<+\infty$, one can take $\mathcal{H}$ to be the trivial $\sigma$-algebra to get $E[\xi_1^2\mid\mathcal{H}]=1$ a.s. and, consequently, $X=Z=P(T)=0$. Whence,

$$\left|\frac{d^l}{dt^l}\big(\varphi_n(t)-e^{-t^2/2}\big)\right| \le u_{3,l}(t)\,\big|E[\xi_1^3]\big|\,\Big|\sum_{j=1}^n c_j^3\Big| + u_{4,l}(t)\,E[\xi_1^4]\,\sum_{j=1}^n c_j^4$$
(10)

holds for all $t$ such that $|t| \le \frac{1}{2}m_4^{-1/4}\big(\sum_{j=1}^n c_j^4\big)^{-1/4}$ and $l=0,1,2$. Therefore, after noting that $u_{3,0}(t)$ and $u_{4,0}(t)$ have a standard form with $r\ge1$, a plain application of Theorems 1-2 in Chapter 5 of [3] leads to an inequality of the type of (1), with a rate of convergence to zero proportional to $|\sum_{j=1}^n c_j^3|$ and $\sum_{j=1}^n c_j^4$, provided that these two quantities go to zero when $n$ diverges. Moreover, when the additional hypothesis $|E[e^{it\xi_1}]| = o(|t|^{-p})$, as $|t|\to+\infty$, is in force with some $p>0$ (to be compared with (14) in [5]), it is possible to deduce a bound for the normal approximation w.r.t. the total variation distance, by following the argument developed in [18]. Indeed, starting from Proposition 4.1 therein (Beurling's inequality), one can write

$$\sup_{A\in\mathcal{B}(\mathbb{R})}\left|P\Big[\sum_{j=1}^n c_j\xi_j\in A\Big] - \frac{1}{\sqrt{2\pi}}\int_A e^{-x^2/2}\,dx\right| \le C\sum_{l=0}^1\left(\int_{\mathbb{R}}\left|\frac{d^l}{dt^l}\big(\varphi_n(t)-e^{-t^2/2}\big)\right|^2 dt\right)^{1/2}$$
(11)

with a suitable constant $C$. Then, one can bound the RHS from above exactly as in Section 4 of the quoted paper. In general, it should be noticed that (9) reduces to a more standard inequality involving $Y$ and $W$ only whenever the $\xi_j$'s are properly scaled w.r.t. conditional variances (i.e., when $E[\xi_j^2\mid\mathcal{H}]=1$ holds a.s. for $j=1,\dots,n$). In the less trivial case of independent, non-identically distributed r.v.'s, with $E[\xi_j]=0$ for $j=1,\dots,n$ and $\max_{j=1,\dots,n}E[\xi_j^4]=:m_4<+\infty$, one can again take $\mathcal{H}$ to be the trivial $\sigma$-algebra to show that the normalization of $S_n$ w.r.t. the variance reduces to $\sum_{j=1}^n c_j^2\,E[\xi_j^2\mid\mathcal{H}]=1$ a.s. Since $Z=P(T)=0$ ensues from this condition, the RHS of (9) again assumes a nice form involving $X$, $Y$ and $W$ only. Now, we do not linger further on these standard applications of (9), since we deem it more convenient to stress the role of the unusual terms like $E[|\frac{d^l}{dt^l}E[e^{itS_n}\mid\mathcal{H}]|\mathbb{1}_T]$, $X$ and $Z$, and to comment on the utility of formulating Theorem 1 in the general framework of conditionally independent r.v.'s, which, being the novelty of this study, has motivated the drafting of this paper. As an illustrative example, we consider, on $(\Omega,\mathcal{F},P)$, a sequence $X=\{X_j\}_{j\ge1}$ of independent r.v.'s and a random function $\tilde f$, taking values in some subset $\mathbb{F}$ of the space of all measurable, uniformly bounded functions $f:\mathbb{R}\to\mathbb{R}$ (i.e., for which there exists $M>0$ such that $|f(x)|\le M$ for all $x\in\mathbb{R}$ and $f\in\mathbb{F}$), which is stochastically independent of $X$. Putting $\xi_j := \tilde f(X_j)$ for all $j\in\mathbb{N}$ and $\mathcal{H} := \sigma(\tilde f)$, and assuming that $E[f(X_j)]=0$ for all $j\in\mathbb{N}$ and $f\in\mathbb{F}$, we have that (6)-(8) are fulfilled. At this stage, it is worth recalling that a number of problems in pure and applied statistics, such as filtering and reproducing kernel Hilbert space estimates, can be formulated within this framework and would be greatly enhanced by the knowledge of the error in approximating the law of the sum $\frac{1}{\sqrt{n}}\sum_{j=1}^n \xi_j$ by the normal distribution. See [19], and also [20] for a Bayesian viewpoint.
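A small simulation sketch of this random-function construction (the specific choices of $f_1$, $f_2$ and of the uniform law of the $X_j$ are assumptions for illustration): $\tilde f$ is drawn between $f_1(x) = \sqrt{3}\,x$ and $f_2(x) = \operatorname{sign}(x)$, both bounded on $[-1,1]$, with $E[f(X_j)]=0$ and $E[f(X_j)^2]=1$ under $X_j\sim\mathrm{Uniform}[-1,1]$; conditionally on $\tilde f$, the $\xi_j = \tilde f(X_j)$ are i.i.d., and $n^{-1/2}\sum_j\xi_j$ should be close to standard normal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 400, 5000

X = rng.uniform(-1.0, 1.0, size=(reps, n))
coin = rng.integers(0, 2, size=(reps, 1))               # which function f_tilde is, per replicate
xi = np.where(coin == 0, np.sqrt(3.0) * X, np.sign(X))  # xi_j = f_tilde(X_j)
S = xi.sum(axis=1) / np.sqrt(n)                         # n^{-1/2} sum_j xi_j

print("mean:", S.mean(), "variance:", S.var())          # close to 0 and 1
```

Since both candidate functions have conditional variance one, $Z = P(T) = 0$ in this instance, in line with the discussion above.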
Then Theorem 1 entails the following corollary.

Corollary 2 Suppose that $\sup_{A\in\mathcal{B}(\mathbb{R})}|\frac{1}{n}\sum_{j=1}^n \mu_j(A) - \bar\mu(A)| =: \alpha_n$ converges to zero as $n$ diverges, where $\mu_j(\cdot) := P[X_j\in\cdot]$ and $\bar\mu$ is a suitable probability measure satisfying $E[\int_{\mathbb{R}}\tilde f^2(x)\,\bar\mu(dx)]=1$. Suppose further that $E[|\frac{1}{n}\sum_{j=1}^n\int_{\mathbb{R}}\tilde f_1^2(x)\,\mu_j(dx) - \int_{\mathbb{R}}\tilde f_2^2(x)\,\bar\mu(dx)|] =: \delta_n$ converges to zero as $n$ diverges, where $\tilde f_1$ and $\tilde f_2$ are two independent copies of $\tilde f$. If

$$\sup_{f\in\mathbb{F}}\ \int_{|t|\ge n^{1/4}/(2M)}\left|\frac{d^l}{dt^l}\prod_{j=1}^n E\big[e^{itf(X_j)/\sqrt{n}}\big]\right|^2 dt =: \beta_{n,l}$$
(12)

satisfies $\lim_{n\to+\infty}\beta_{n,l}=0$ for $l=0,1$, then there is a constant $C$ such that

$$\sup_{A\in\mathcal{B}(\mathbb{R})}\left|P\Big[\frac{1}{\sqrt{n}}\sum_{j=1}^n \xi_j\in A\Big] - \frac{1}{\sqrt{2\pi}}\int_A e^{-x^2/2}\,dx\right| \le C\max\Big\{\frac{1}{\sqrt{n}},\ \alpha_n,\ \delta_n,\ \sqrt{\beta_{n,0}},\ \sqrt{\beta_{n,1}}\Big\}$$

is valid for all $n\in\mathbb{N}$.

To conclude the presentation of the main result, we deal with three other applications, specifically to exchangeable sequences of r.v.'s, to mixtures of Markov chains and to the homogeneous Boltzmann equation. Due to the technical nature of these applications, we have deemed it more convenient to isolate each of them in its own subsection (Subsections 1.1, 1.2 and 1.3). Section 2 is dedicated to the proofs of Theorem 1 - including an explicit characterization of the functions $u_{h,l}$ and $v_l$ - and of Corollary 2.

1.1 Application to exchangeable sequences

Here, we consider a sequence $\{\xi_n\}_{n\ge1}$ of exchangeable r.v.'s such that $E[\xi_1]=0$, $E[\xi_1^2]=1$, $\operatorname{Cov}(\xi_1,\xi_2)=\operatorname{Cov}(\xi_1^2,\xi_2^2)=0$, as in [8]. Taking $\mathcal{H}$ to be the $\sigma$-algebra of the permutable events, we can invoke the celebrated de Finetti representation theorem to show that (6) is fulfilled. Moreover, from the arguments developed in the above-mentioned paper, we obtain that the assumption on the covariances entails (7) and $E[\xi_j^2\mid\mathcal{H}]=1$ a.s. for all $j\in\mathbb{N}$. Finally, we notice that there are many simple cases (for example, when $|\xi_j|\le M$ a.s. for a suitable constant $M$ and all $j\in\mathbb{N}$) in which (8) is easily verified. Hence, we conclude that $X=Z=P(T)=0$, so that the bound (10) is in force, and an estimate of the type of (1) will follow from the application of Theorems 1-2 in Chapter 5 of [3]. To condense these facts into a unitary statement, we denote by $\tilde p$ the random probability measure which meets $P[\xi_1\in A_1,\dots,\xi_n\in A_n\mid\tilde p] = \prod_{i=1}^n \tilde p(A_i)$ for all $n\in\mathbb{N}$ and $A_1,\dots,A_n\in\mathcal{B}(\mathbb{R})$, according to de Finetti's theorem, and we state the following.

Proposition 3 Let $\{\xi_n\}_{n\ge1}$ be an exchangeable sequence of r.v.'s such that $E[\xi_1]=0$, $E[\xi_1^2]=1$, $\operatorname{Cov}(\xi_1,\xi_2)=\operatorname{Cov}(\xi_1^2,\xi_2^2)=0$. If $\int_{\mathbb{R}} x^4\,\tilde p(dx) \le m_4$ holds a.s. with some positive constant $m_4$, then one gets

$$\sup_{x\in\mathbb{R}}\left|P[S_n\le x] - \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-y^2/2}\,dy\right| \le C\left[m_4^{3/4}\int_0^{+\infty}\frac{u_{3,0}(t)}{t}\,dt\,\Big|\sum_{j=1}^n c_j^3\Big| + m_4\int_0^{+\infty}\frac{u_{4,0}(t)}{t}\,dt\,\sum_{j=1}^n c_j^4\right],$$
(13)

where C is an absolute constant.

The bound (13) represents an obvious generalization of (2.10) in [8] because of the arbitrariness, in the former inequality, of the weights $c_1,\dots,c_n$.
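With $u_{3,0}(t) = \frac{1}{6}|t|^3 e^{-t^2/2}$ (see Section 2), the first integral in (13) has the closed form $\int_0^{+\infty} u_{3,0}(t)/t\,dt = \frac{1}{6}\sqrt{\pi/2} \approx 0.2089$. A quick quadrature sketch confirming this value (an illustration added here, not part of the original text):

```python
import math

def u30_over_t(t):
    """u_{3,0}(t)/t = t^2 e^{-t^2/2} / 6 for t > 0 (and 0 at t = 0)."""
    return (t ** 2) * math.exp(-t ** 2 / 2.0) / 6.0

# composite Simpson rule on [0, 10]; the integrand is negligible beyond
a, b, m = 0.0, 10.0, 2000
h = (b - a) / m
s = u30_over_t(a) + u30_over_t(b)
for k in range(1, m):
    s += (4 if k % 2 else 2) * u30_over_t(a + k * h)
numeric = s * h / 3.0
exact = math.sqrt(math.pi / 2.0) / 6.0
print(numeric, exact)   # both approximately 0.2089
```

The second integral in (13) can be treated the same way once $m_4$ (and hence $\kappa$) is fixed.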

1.2 Application to mixtures of Markov chains

The papers [21, 22] deal with sequences $\{\xi_n\}_{n\ge0}$, where each $\xi_n$ takes values in a discrete state space $I$, whose law is a mixture of laws of Markov chains. From a Bayesian standpoint, one could think of $\{\xi_n\}_{n\ge0}$ as a Markov chain with a random transition matrix, to which a prior distribution is assigned on the space of all transition matrices. The work [22] proves that, under the assumption of recurrence, this condition on the law of the sequence is equivalent to the partial exchangeability of the random matrix $V = (V_{i,n})_{i\in I, n\ge1}$, where $V_{i,n}$ denotes the position of the process immediately after the $n$th visit to the state $i$. We also recall that partial exchangeability (in the sense of de Finetti) means that the distribution of $V$ is invariant under finite permutations within rows. An equivalent condition to partial exchangeability is the existence of a $\sigma$-algebra $\mathcal{H}$ such that the $V_{i,n}$'s are independent conditionally on it, that is,

$$P\left[\bigcap_{r=1}^k\bigcap_{m=1}^{n_r}\{V_{j_r,m} = v_{r,m}\}\ \Big|\ \mathcal{H}\right] = \prod_{r=1}^k\prod_{m=1}^{n_r}P[V_{j_r,m} = v_{r,m}\mid\mathcal{H}]$$

holds for every $k\in\mathbb{N}$, $j_1,\dots,j_k\in I$, $n_1,\dots,n_k\in\mathbb{N}$ and $v_{r,m}\in I$, and, in addition, each of the sequences $(V_{i,n})_{n\ge1}$ is exchangeable. Therefore, upon assuming, for simplicity, that $I$ is finite and that $E[f(V_{i,n})\mid\mathcal{H}]=0$ holds a.s. with a suitable function $f:I\to\mathbb{R}$, for all $i\in I$ and $n\in\mathbb{N}$, we have that the family of r.v.'s $\{f(V_{i,n})\}_{i\in I, n\in\{1,\dots,N\}}$ meets conditions (6)-(8) and fits the general setting of the present paper. The ultimate motivation of this application is, in fact, to provide a Berry-Esseen inequality quantifying the error in approximating the law of $\sqrt{n/\sigma_f^2}\,(\frac{1}{n}\sum_{j=1}^n f(\xi_j) - \bar f)$ by the standard normal distribution, where $\bar f := E[\mathbb{E}(f(\xi_0))]$, $\sigma_f^2 := E[\mathbb{V}\mathrm{ar}(f(\xi_0)) + 2\sum_{j=1}^{+\infty}\mathbb{C}\mathrm{ov}(f(\xi_0),f(\xi_j))]$ and $\mathbb{E}$, $\mathbb{V}\mathrm{ar}$, $\mathbb{C}\mathrm{ov}$ represent expectation, variance and covariance, respectively, w.r.t. the (random) stationary distribution of $\{\xi_n\}_{n\ge0}$, given $\mathcal{H}$. Such a result could prove an extremely concrete achievement in Markov chain Monte Carlo settings, where the existence of a Berry-Esseen inequality allows one to estimate $\sigma_f^2$ in order to decide whether $\frac{1}{n}\sum_{j=1}^n f(\xi_j)$ (which is a quantity that can be simulated) is a good estimate of $\bar f$ or not. At this stage, the well-known relation between the $V_{i,n}$'s and the $\xi_n$'s, contained in [22] or in Sections 9-10 of Chapter II of [23], comes in useful to establish a link between two Berry-Esseen-like inequalities: the former relative to the family of r.v.'s $\{f(V_{i,n})\}_{i\in I, n\in\{1,\dots,N\}}$, which follows from a direct application of Theorem 1, the latter relative to the sequence $\sqrt{n/\sigma_f^2}\,(\frac{1}{n}\sum_{j=1}^n f(\xi_j) - \bar f)$. Due to the mathematical complexity of this task, this topic will be carefully developed in a forthcoming paper [24].
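A minimal numerical sketch of the successor construction just described (the two candidate transition matrices are assumptions for illustration): draw a random transition matrix, run the chain, and record $V_{i,n}$, the state visited immediately after the $n$th visit to $i$. Given the chosen matrix, the entries of row $i$ are i.i.d. draws from that matrix's $i$th row, which is precisely the conditional independence exploited in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
P0 = np.array([[0.9, 0.1], [0.2, 0.8]])
P1 = np.array([[0.5, 0.5], [0.5, 0.5]])
P = P0 if rng.random() < 0.5 else P1        # randomly chosen transition matrix

steps = 200_000
path = np.empty(steps, dtype=np.int64)
path[0] = 0
u = rng.random(steps)                       # pre-drawn uniforms, two-state chain
for t in range(1, steps):
    path[t] = 1 if u[t] < P[path[t - 1], 1] else 0

# V[i] = successive states observed right after each visit to state i
V = {i: path[1:][path[:-1] == i] for i in (0, 1)}
for i in (0, 1):
    print(i, V[i].mean(), P[i, 1])          # empirical vs. true transition probability
```

Conditionally on the chosen matrix, each row $(V_{i,n})_{n\ge1}$ is in particular exchangeable, as required by the partial-exchangeability characterization of [22].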

1.3 Application to the Boltzmann equation

In [25], the study of the rate of relaxation to equilibrium of solutions to the spatially homogeneous Boltzmann equation for Maxwellian molecules is conducted by resorting to a new probabilistic representation, set forth in Section 1.5 of that paper. The key ingredient of such a representation is the random sum $S(u) := \sum_{j=1}^{\nu}\pi_{j,\nu}\,\psi_{j,\nu}(u)\cdot V_j$, where:

  (i) $u$ is a fixed point of the unit sphere $S^2\subset\mathbb{R}^3$;

  (ii) $(\Omega,\mathcal{F},P_t)$ is a probability space with a probability measure depending on the parameter $t\ge0$, and $\mathcal{G}\subset\mathcal{H}$ are suitable sub-$\sigma$-algebras of $\mathcal{F}$;

  (iii) $\nu$ is a r.v. such that $P_t[\nu=n] = e^{-t}(1-e^{-t})^{n-1}$ for all $n\in\mathbb{N}$, and such that $\nu$ is $\mathcal{G}$-measurable;

  (iv) for any $n\in\mathbb{N}$ and $j=1,\dots,n$, $\pi_{j,n}$ is a $\mathcal{G}$-measurable r.v., with the property that $\sum_{j=1}^n \pi_{j,n}^2 = 1$ for all $n\in\mathbb{N}$;

  (v) for any $n\in\mathbb{N}$ and $j=1,\dots,n$, $\psi_{j,n}(u)$ is an $\mathcal{H}$-measurable random vector, taking values in $S^2$;

  (vi) $\{V_j\}_{j\ge1}$ is a sequence of i.i.d. random vectors taking values in $\mathbb{R}^3$, independent of $\mathcal{H}$ and such that $E_t[V_1]=0$, $E_t[V_{1,i}V_{1,j}] = \delta_{i,j}\sigma_i^2$ for any $i,j\in\{1,2,3\}$, with $\sum_{i=1}^3\sigma_i^2 = 3$, and $E_t[|V_1|^4] = m_4 < +\infty$. Here, $E_t$ stands for the expectation w.r.t. $P_t$ and $\delta_{i,j}$ is the Kronecker symbol.

After these preliminaries - whose detailed explanation the reader will find in the quoted section of [25] - it is clear that each realization of the random measure $A\mapsto P_t[S(u)\in A\mid\mathcal{G}]$, with $A\in\mathcal{B}(\mathbb{R})$, has the same structure as the probability law of the sum $S_n$ given by (4) in the present paper. In [18, 26-28] the reader will find some analogous representations, set forth in connection with allied new forms of the Berry-Esseen inequality. Thanks to the above-mentioned link, it is important to note that Theorem 1 of the present paper, along with its proof, represents the key result needed to complete the argument developed in Section 2.2.3 of [25]. On the other hand, the application of Theorem 1 to the context of the Boltzmann equation appears as a significant use of this abstract result in its full generality, for conditional independence is the form of stochastic dependence actually involved and the normalization to conditional variances does not necessarily occur, so that all the terms in the RHS of (9) play an active role. The successful utilization of Theorem 1 lies in the fact that the quantities $P(T)$, $X$, $Y$, $W$ and $Z$ are now easily tractable, thanks to the computations developed in Appendices A.1 and A.13 of [25].

2 Proofs

2.1 Proof of Theorem 1

Start by putting $\tilde\psi_j(t) := E[e^{itc_j\xi_j}\mid\mathcal{H}]$ for $j=1,\dots,n$, and recall the standard expansion for c.f.'s to write

$$\tilde\psi_j(t) = 1 - \frac{1}{2}c_j^2\,E[\xi_j^2\mid\mathcal{H}]\,t^2 - \frac{i}{3!}c_j^3\,E[\xi_j^3\mid\mathcal{H}]\,t^3 + \tilde R_j(t) = 1 + \tilde q_j(t)$$
(14)

with a suitable expression of the remainder $\tilde R_j$. The superscript $\tilde{\ }$ will be used throughout this section to remark that a certain quantity is random. Now, a plain application of the Taylor formula with the Bernstein form of the remainder gives

$$e^x = \sum_{k=0}^3\frac{x^k}{k!} + \frac{x^4}{3!}\int_0^1 e^{xu}(1-u)^3\,du = \sum_{k=0}^3\frac{x^k}{k!} + \frac{x^3}{2}\int_0^1\big(e^{xu}-1\big)(1-u)^2\,du = \sum_{k=0}^3\frac{x^k}{k!} + x^2\int_0^1\big(e^{xu}-xu-1\big)(1-u)\,du,$$

so that $\tilde R_j$ can assume one of the following forms:

$$\tilde R_j(t) = \frac{1}{3!}c_j^4 t^4\,E\Big[\xi_j^4\int_0^1 e^{iutc_j\xi_j}(1-u)^3\,du\ \Big|\ \mathcal{H}\Big]$$
(15)
$$= -\frac{i}{2}c_j^3 t^3\,E\Big[\xi_j^3\int_0^1\big(e^{iutc_j\xi_j}-1\big)(1-u)^2\,du\ \Big|\ \mathcal{H}\Big]$$
(16)
$$= -c_j^2 t^2\,E\Big[\xi_j^2\int_0^1\big(e^{iutc_j\xi_j}-iutc_j\xi_j-1\big)(1-u)\,du\ \Big|\ \mathcal{H}\Big].$$
(17)
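A numerical check of the representation (15) and of the bound $|\tilde R_j(t)|\le\frac{1}{24}m_4 c_j^4 t^4$ derived right below, in the simple case $c_j = 1$ with $\xi_j$ a Rademacher variable (an assumption for illustration): then $\tilde\psi(t) = \cos t$, the relevant conditional moments are $E[\xi^2]=E[\xi^4]=1$, $E[\xi^3]=0$, and $R(t) = \cos t - 1 + t^2/2$.

```python
import math

def R_direct(t):
    """R(t) = cos t - 1 + t^2/2 for xi Rademacher and c_j = 1."""
    return math.cos(t) - 1.0 + 0.5 * t * t

def R_via_15(t, m=2000):
    """Representation (15): (t^4/3!) int_0^1 cos(ut)(1-u)^3 du, by Simpson's rule."""
    f = lambda u: math.cos(u * t) * (1.0 - u) ** 3
    h = 1.0 / m
    s = f(0.0) + f(1.0)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * f(k * h)
    return (t ** 4 / 6.0) * (s * h / 3.0)

for t in (0.3, 1.0, 2.0):
    assert abs(R_direct(t) - R_via_15(t)) < 1e-9   # the two expressions agree
    assert abs(R_direct(t)) <= t ** 4 / 24.0       # the bound used right below
print("remainder representation and bound check out")
```

The bound is nearly attained for small $t$, since $R(t) = t^4/24 - t^6/720 + \cdots$ here.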

Combining (8) and (15) yields $|\tilde R_j(t)| \le \frac{1}{24}m_4 c_j^4 t^4$ a.s., for every $t$ and $j=1,\dots,n$. Consequently, the definition of $W$ entails

$$\sum_{j=1}^n|\tilde R_j(t)| \le \frac{1}{24}W t^4 \quad \text{a.s.}$$
(18)

for every $t$ and, if $|t|\le\tau$,

$$\sum_{j=1}^n|\tilde R_j(t)| \le \frac{1}{384} \quad \text{a.s.}$$
(19)

Now, to obtain an upper bound for $\tilde q_j(t)$ in (14), observe the following facts. First, the Lyapunov inequality for moments yields $E[\xi_j^2\mid\mathcal{H}] \le m_4^{1/2}$ and $E[|\xi_j|^3\mid\mathcal{H}] \le m_4^{3/4}$ a.s. Second, $|t|\le\tau$ implies $|c_j t| \le \frac{1}{2}m_4^{-1/4}$. Hence, for every $t$ in $[-\tau,\tau]$, one has

$$|\tilde q_j(t)| \le \frac{19}{128} \quad \text{a.s.}$$
(20)
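The constant $19/128$ can be verified by exact arithmetic. The decomposition below - $1/8$ from the quadratic term, $1/48$ from the cubic term, $1/384$ from the remainder via (19), all under $|c_jt|\le\frac{1}{2}m_4^{-1/4}$ - is our reading of the estimate, spelled out here as an assumption:

```python
from fractions import Fraction

# bounds on the three pieces of q_j(t) when |c_j t| <= m_4^{-1/4} / 2
quadratic = Fraction(1, 2) * Fraction(1, 4)    # (1/2) m4^{1/2} (c_j t)^2 <= 1/8
cubic     = Fraction(1, 6) * Fraction(1, 8)    # (1/6) m4^{3/4} |c_j t|^3 <= 1/48
remainder = Fraction(1, 24) * Fraction(1, 16)  # (1/24) m4 c_j^4 t^4 <= 1/384
print(quadratic + cubic + remainder)           # -> 19/128
```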

so that the quantity $\operatorname{Log}\tilde\psi_j(t)$ makes sense when Log is meant to be the principal branch of the logarithm. Then put

$$\Phi(z) := \frac{z - \operatorname{Log}(1+z)}{z^2} = \int_0^{+\infty}\!\!\int_0^s e^{-zx}\,\frac{s-x}{s}\,e^{-s}\,dx\,ds$$

for all complex $z$ such that $\Re z > -1$ (with the proviso that $\Phi(0)=1/2$), and note that the restriction of $\Phi$ to the interval $]-1,+\infty[$ is a completely monotone function. See Chapter 13 of [29]. Equation (14) now gives
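A quick check on $\Phi$ (an illustration, not part of the proof): the closed form extends continuously with $\Phi(0)=1/2$, it is positive and decreasing on $(-1,+\infty)$ as complete monotonicity requires, and the integral representation can be confirmed numerically at $z=1$, where $\Phi(1)=1-\ln 2$. Below, the inner $x$-integral is computed in closed form and the outer $s$-integral by Simpson's rule (both steps are our own, assumed, computations).

```python
import math

def phi(z):
    """Phi(z) = (z - Log(1+z)) / z^2, with the removable singularity at z = 0."""
    if abs(z) < 1e-6:
        return 0.5 - z / 3.0              # Taylor: 1/2 - z/3 + z^2/4 - ...
    return (z - math.log1p(z)) / (z * z)

grid = [-0.9 + 0.05 * k for k in range(120)]
vals = [phi(z) for z in grid]
assert all(v > 0 for v in vals)                       # positive on (-1, +inf)
assert all(a > b for a, b in zip(vals, vals[1:]))     # strictly decreasing

def inner(s):
    """int_0^s e^{-x} (s - x)/s dx, computed in closed form (z = 1)."""
    if s < 1e-6:
        return 0.5 * s
    return 1.0 - math.exp(-s) - (1.0 - (1.0 + s) * math.exp(-s)) / s

m, b = 4000, 40.0
h = b / m
acc = inner(0.0) + math.exp(-b) * inner(b)
for k in range(1, m):
    x = k * h
    acc += (4 if k % 2 else 2) * math.exp(-x) * inner(x)
integral = acc * h / 3.0
print(integral, phi(1.0))                 # both approximately 1 - ln 2 = 0.30685
```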

$$\operatorname{Log}\tilde\psi_j(t) = -\frac{1}{2}c_j^2 t^2 - \frac{1}{2}c_j^2\big(E[\xi_j^2\mid\mathcal{H}]-1\big)t^2 - \frac{i}{3!}c_j^3\,E[\xi_j^3\mid\mathcal{H}]\,t^3 + \tilde R_j(t) - \Phi\big(\tilde q_j(t)\big)\tilde q_j^2(t)$$
(21)

and, taking account of (5),

$$\sum_{j=1}^n\operatorname{Log}\tilde\psi_j(t) = -\frac{1}{2}t^2 - \frac{1}{2}t^2\sum_{j=1}^n c_j^2\big(E[\xi_j^2\mid\mathcal{H}]-1\big) - \frac{i}{3!}t^3\sum_{j=1}^n c_j^3\,E[\xi_j^3\mid\mathcal{H}] + \sum_{j=1}^n\tilde R_j(t) - \sum_{j=1}^n\Phi\big(\tilde q_j(t)\big)\tilde q_j^2(t).$$
(22)

Now, put

$$\tilde A_n := -\frac{1}{2}\sum_{j=1}^n c_j^2\big(E[\xi_j^2\mid\mathcal{H}]-1\big), \qquad \tilde B_n := -\frac{1}{3!}\sum_{j=1}^n c_j^3\,E[\xi_j^3\mid\mathcal{H}], \qquad \tilde H_n(t) := \sum_{j=1}^n\tilde R_j(t) - \sum_{j=1}^n\Phi\big(\tilde q_j(t)\big)\tilde q_j^2(t)$$

and exploit the conditional independence in (6) to obtain

$$E\big[e^{itS_n}\mid\mathcal{H}\big] = \exp\Big\{\operatorname{Log}\Big[\prod_{j=1}^n\tilde\psi_j(t)\Big]\Big\} = \exp\Big\{\sum_{j=1}^n\operatorname{Log}\tilde\psi_j(t)\Big\} = e^{-t^2/2}\,e^{\tilde A_n t^2 + i\tilde B_n t^3}\,e^{\tilde H_n(t)}.$$
(23)

At this stage, it remains to provide upper bounds for $|\tilde q_j(t)|^2$ and $|\tilde H_n(t)|$, to be used throughout this section. From the definition of $\tilde q_j$, one has

$$|\tilde q_j(t)|^2 \le \frac{3}{4}c_j^4\big|E[\xi_j^2\mid\mathcal{H}]\big|^2 t^4 + \frac{3}{36}c_j^6\big|E[\xi_j^3\mid\mathcal{H}]\big|^2 t^6 + 3|\tilde R_j(t)|^2 \quad \text{a.s.},$$

which implies that

$$\sum_{j=1}^n|\tilde q_j(t)|^2 \le \frac{2369}{3072}\,W t^4 \quad \text{a.s.}$$
(24)
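The constant in (24) can also be checked by exact arithmetic; the decomposition below ($3/4$ from the squared quadratic term, $1/48$ from the squared cubic term under $|c_jt|\le\frac{1}{2}m_4^{-1/4}$, and $1/3072$ from $3\sum_j|\tilde R_j|^2$ via (18)-(19)) is our reading of how it arises, stated as an assumption:

```python
from fractions import Fraction

total = Fraction(3, 4) + Fraction(1, 48) + Fraction(1, 3072)
print(total)   # -> 2369/3072
```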

for every t in [τ,τ], thanks to (18)-(19) and the Lyapunov inequality for moments. The monotonicity of Φ yields

$$|\tilde H_n(t)| \le \Big(\frac{1}{24} + \frac{2369}{3072}\,\Phi(-19/128)\Big) W t^4 \quad \text{a.s.}$$
(25)

for every $t$, and

$$|\tilde H_n(t)| \le \frac{1}{16}\Big(\frac{1}{24} + \frac{2369}{3072}\,\Phi(-19/128)\Big) =: H \quad \text{a.s.}$$
(26)

for every $t$ in $[-\tau,\tau]$.

Then the validity of (9) with l=0 can be derived from a combination of the above arguments starting from

$$\begin{aligned}
\big|\varphi_n(t) - e^{-t^2/2}\big| \le{}& E\big[\big|E[e^{itS_n}\mid\mathcal{H}]\big|\mathbb{1}_T\big] + P(T)\,e^{-t^2/2} \\
&+ e^{-t^2/2}\Big|E\Big[\Big(e^{\tilde A_n t^2+i\tilde B_n t^3}\big(e^{\tilde H_n(t)}-1\big)\Big)\mathbb{1}_{T^c}\Big]\Big| + e^{-t^2/2}\Big|E\Big[\big(e^{\tilde A_n t^2+i\tilde B_n t^3}-1\big)\mathbb{1}_{T^c}\Big]\Big|.
\end{aligned}$$
(27)

First, apply the Markov inequality to conclude that

$$P(T) \le P\Big\{\Big|\sum_{j=1}^n c_j^2\,E[\xi_j^2\mid\mathcal{H}]-1\Big| \ge 1/3\Big\} \le 9Z.$$
(28)

Second, after noting that

$$\tilde A_n\mathbb{1}_{T^c} \le 1/3 \quad \text{a.s.},$$
(29)

combine the elementary inequality $|e^z - 1| \le |z|e^{|z|}$ with (25)-(26) to obtain

$$e^{-t^2/2}\Big|E\Big[\Big(e^{\tilde A_n t^2+i\tilde B_n t^3}\big(e^{\tilde H_n(t)}-1\big)\Big)\mathbb{1}_{T^c}\Big]\Big| \le \kappa\,W t^4 e^{-t^2/6}$$
(30)

for every $t$ in $[-\tau,\tau]$, with $\kappa := e^H\big(\frac{1}{24} + \frac{2369}{3072}\Phi(-19/128)\big)$. As far as the fourth summand on the RHS of (27) is concerned, write

$$\begin{aligned}
\Big|E\Big[\big(e^{\tilde A_n t^2+i\tilde B_n t^3}-1\big)\mathbb{1}_{T^c}\Big]\Big| \le{}& \Big|E\Big[\big(e^{\tilde A_n t^2}\cos(\tilde B_n t^3)-1\big)\mathbb{1}_{T^c}\Big]\Big| + \Big|E\big[e^{\tilde A_n t^2}\sin(\tilde B_n t^3)\mathbb{1}_{T^c}\big]\Big| \\
\le{}& \Big|E\Big[e^{\tilde A_n t^2}\big(\cos(\tilde B_n t^3)-1\big)\mathbb{1}_{T^c}\Big]\Big| + \Big|E\Big[\big(e^{\tilde A_n t^2}-1\big)\mathbb{1}_{T^c}\Big]\Big| \\
&+ \Big|E\Big[\big(e^{\tilde A_n t^2}-1\big)\sin(\tilde B_n t^3)\mathbb{1}_{T^c}\Big]\Big| + \Big|E\big[\sin(\tilde B_n t^3)\mathbb{1}_{T^c}\big]\Big|
\end{aligned}$$
(31)

and proceed by analyzing each summand in the last bound separately. As to the first term, invoke (29) and use the elementary inequality $|1-\cos x| \le \frac{1}{2}x^2$ to obtain

$$\Big|E\Big[e^{\tilde A_n t^2}\big(\cos(\tilde B_n t^3)-1\big)\mathbb{1}_{T^c}\Big]\Big| \le \frac{1}{2}t^6 e^{t^2/3}\,E[\tilde B_n^2].$$

Since the Lyapunov inequality for moments entails

$$\tilde B_n^2 = \frac{1}{36}\Big(\sum_{j=1}^n c_j^3\,E[\xi_j^3\mid\mathcal{H}]\Big)^2 \le \frac{1}{36}m_4^{1/2} W \quad \text{a.s.},$$
(32)

one gets

$$\Big|E\Big[e^{\tilde A_n t^2}\big(\cos(\tilde B_n t^3)-1\big)\mathbb{1}_{T^c}\Big]\Big| \le \frac{1}{72}m_4^{1/2} W t^6 e^{t^2/3}.$$
(33)

To continue, put $G_r(x) := \sum_{h=0}^r x^h/h!$ for $r$ in $\mathbb{N}_0$ and note that the Lagrange form of the remainder in the Taylor formula gives

$$\big|e^x - G_r(x)\big| \le \frac{|x|^{r+1}}{(r+1)!}\big(1+e^x\big)$$
(34)

for every $x$ in $\mathbb{R}$. Moreover, the Lyapunov inequality for moments shows that

$$|\tilde A_n| \le \frac{1}{2}\big(m_4^{1/2}+1\big) \quad \text{a.s.}$$
(35)

Concerning the second summand in the last member of (31), take account of (34) with r=1 to write

$$\Big|E\Big[\big(e^{\tilde A_n t^2}-1\big)\mathbb{1}_{T^c}\Big]\Big| \le \big|E[\tilde A_n\mathbb{1}_{T^c}]\big|\,t^2 + \frac{1}{2}E\Big[\tilde A_n^2\big(1+e^{\tilde A_n t^2}\big)\mathbb{1}_{T^c}\Big]\,t^4$$

and, by means of (35) and the definitions of T, X and Z, conclude that

$$\Big|E\Big[\big(e^{\tilde A_n t^2}-1\big)\mathbb{1}_{T^c}\Big]\Big| \le \big|E[\tilde A_n]\big|\,t^2 + \frac{1}{2}\big(m_4^{1/2}+1\big)P(T)\,t^2 + \frac{1}{8}t^4\big(1+e^{t^2/3}\big)Z \le \frac{1}{2}t^2 X + \frac{1}{2}\big(m_4^{1/2}+1\big)t^2\,P(T) + \frac{1}{8}t^4\big(1+e^{t^2/3}\big)Z.$$
(36)

For the third summand, the combination of the inequality $|\sin x|\le|x|$ with (34) with $r=0$ yields

$$\Big|E\Big[\big(e^{\tilde A_n t^2}-1\big)\sin(\tilde B_n t^3)\mathbb{1}_{T^c}\Big]\Big| \le E\Big[|\tilde A_n\tilde B_n|\big(1+e^{\tilde A_n t^2}\big)\mathbb{1}_{T^c}\Big]\,|t|^5$$

which, by means of the inequalities $2xy \le x^2+y^2$, (29) and (32), becomes

$$\Big|E\Big[\big(e^{\tilde A_n t^2}-1\big)\sin(\tilde B_n t^3)\mathbb{1}_{T^c}\Big]\Big| \le \frac{1}{2}E\big[\tilde A_n^2+\tilde B_n^2\big]\big(1+e^{t^2/3}\big)|t|^5 \le \Big(\frac{1}{8}Z + \frac{1}{72}m_4^{1/2}W\Big)\big(1+e^{t^2/3}\big)|t|^5.$$
(37)

Finally, the elementary inequality $|x-\sin x| \le \frac{1}{6}|x|^3$ entails

$$\big|E[\sin(\tilde B_n t^3)\mathbb{1}_{T^c}]\big| \le \big|E[\tilde B_n\mathbb{1}_{T^c}]\big|\,|t|^3 + \frac{1}{6}E\big[|\tilde B_n|^3\big]\,|t|^9.$$

After using again the Lyapunov inequality to write

$$|\tilde B_n| \le \frac{1}{6}m_4^{3/4} \quad \text{a.s.},$$
(38)

note that (32) and (38) lead to

$$\big|E[\sin(\tilde B_n t^3)\mathbb{1}_{T^c}]\big| \le \frac{1}{6}Y|t|^3 + \frac{1}{6}m_4^{3/4}\,P(T)\,|t|^3 + \frac{1}{1296}m_4^{5/4} W |t|^9.$$
(39)

The combination of (28), (31), (33), (36), (37) and (39) gives

$$\begin{aligned}
e^{-t^2/2}\Big|E\Big[\big(e^{\tilde A_n t^2+i\tilde B_n t^3}-1\big)\mathbb{1}_{T^c}\Big]\Big| \le{}& \frac{1}{2}X t^2 e^{-t^2/2} + \frac{1}{6}Y|t|^3 e^{-t^2/2} \\
&+ W\Big[\frac{1}{72}m_4^{1/2}t^6 e^{-t^2/6} + \frac{1}{72}m_4^{1/2}\big(1+e^{t^2/3}\big)|t|^5 e^{-t^2/2} + \frac{1}{1296}m_4^{5/4}|t|^9 e^{-t^2/2}\Big] \\
&+ Z\Big[\frac{9}{2}\big(m_4^{1/2}+1\big)t^2 e^{-t^2/2} + \frac{1}{8}\big(1+e^{t^2/3}\big)\big(t^4+|t|^5\big)e^{-t^2/2} + \frac{3}{2}m_4^{3/4}|t|^3 e^{-t^2/2}\Big].
\end{aligned}$$
(40)

At this stage, the upper bound (9) with l=0 follows from (27), (28), (30) and (40) by putting

$$\begin{aligned}
u_{2,0}(t) &:= \frac{1}{2}t^2 e^{-t^2/2}, \qquad u_{3,0}(t) := \frac{1}{6}|t|^3 e^{-t^2/2}, \\
u_{4,0}(t) &:= \frac{1}{72}m_4^{1/2}t^6 e^{-t^2/6} + \frac{1}{72}m_4^{1/2}\big(1+e^{t^2/3}\big)|t|^5 e^{-t^2/2} + \frac{1}{1296}m_4^{5/4}|t|^9 e^{-t^2/2} + \kappa t^4 e^{-t^2/6}, \\
v_0(t) &:= \frac{9}{2}\big(m_4^{1/2}+1\big)t^2 e^{-t^2/2} + \frac{1}{8}\big(1+e^{t^2/3}\big)\big(t^4+|t|^5\big)e^{-t^2/2} + \frac{3}{2}m_4^{3/4}|t|^3 e^{-t^2/2} + 9e^{-t^2/2}.
\end{aligned}$$

To prove (9) when l=1, differentiate (23) with respect to t to obtain

$$\frac{d}{dt}E\big[e^{itS_n}\mid\mathcal{H}\big] = -t\,e^{-t^2/2}e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)} + e^{-t^2/2}\big(2\tilde A_n t + 3i\tilde B_n t^2\big)e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)} + e^{-t^2/2}e^{\tilde A_n t^2+i\tilde B_n t^3}\,\tilde H_n'(t)\,e^{\tilde H_n(t)}.$$
(41)

As the first step, write

$$\begin{aligned}
\Big|\frac{d}{dt}\big[\varphi_n(t)-e^{-t^2/2}\big]\Big| \le{}& E\Big[\Big|\frac{d}{dt}E\big[e^{itS_n}\mid\mathcal{H}\big]\Big|\mathbb{1}_T\Big] + P(T)\,|t|e^{-t^2/2} \\
&+ |t|e^{-t^2/2}\Big|E\Big[\big(e^{\tilde A_n t^2+i\tilde B_n t^3}-1\big)\mathbb{1}_{T^c}\Big]\Big| + |t|e^{-t^2/2}\Big|E\Big[e^{\tilde A_n t^2+i\tilde B_n t^3}\big(e^{\tilde H_n(t)}-1\big)\mathbb{1}_{T^c}\Big]\Big| \\
&+ e^{-t^2/2}\Big|E\big[\big(2\tilde A_n t + 3i\tilde B_n t^2\big)\mathbb{1}_{T^c}\big]\Big| + e^{-t^2/2}\Big|E\Big[\big(2\tilde A_n t + 3i\tilde B_n t^2\big)\big(e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)}-1\big)\mathbb{1}_{T^c}\Big]\Big| \\
&+ e^{-t^2/2}\,E\Big[e^{\tilde A_n t^2}\big|\tilde H_n'(t)\big|e^{|\tilde H_n(t)|}\mathbb{1}_{T^c}\Big]
\end{aligned}$$
(42)

and then proceed to study each summand in a separate way. All of these terms, except the last one, can be bounded by using inequalities already proved. First of all, the first summand coincides with the first term of (9) with l=1, and for the second summand it suffices to recall (28). The bound for the third summand is given by (40) while, for the fourth one, use (30). Next, thanks to (35) and (38), write

$$\Big|E\big[\big(2\tilde A_n t + 3i\tilde B_n t^2\big)\mathbb{1}_{T^c}\big]\Big| \le P(T)\Big[m_4^{1/2}+1+\frac{1}{2}m_4^{3/4}\Big] + |t|\,X + \frac{1}{2}t^2\,Y.$$
(43)

As for the sixth summand, start from

$$\begin{aligned}
\Big|E\Big[\big(2\tilde A_n t + 3i\tilde B_n t^2\big)\big(e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)}-1\big)\mathbb{1}_{T^c}\Big]\Big| \le{}& E\Big[\big|2\tilde A_n t + 3i\tilde B_n t^2\big|\,\big|e^{\tilde H_n(t)}-1\big|\,e^{\tilde A_n t^2}\mathbb{1}_{T^c}\Big] \\
&+ \Big|E\Big[\big(2\tilde A_n t + 3i\tilde B_n t^2\big)\big(e^{\tilde A_n t^2+i\tilde B_n t^3}-1\big)\mathbb{1}_{T^c}\Big]\Big|.
\end{aligned}$$
(44)

Then recall (29), (35) and (38) and combine the elementary inequality | e z 1||z| e | z | with (25) and (26) to conclude that

$$E\Big[\big|2\tilde A_n t + 3i\tilde B_n t^2\big|\,\big|e^{\tilde H_n(t)}-1\big|\,e^{\tilde A_n t^2}\mathbb{1}_{T^c}\Big] \le \kappa W\Big[\big(m_4^{1/2}+1\big)|t| + \frac{1}{2}m_4^{3/4}t^2\Big]e^{t^2/3}\,t^4.$$
(45)

As for the latter term in the RHS of (44), note that

$$\begin{aligned}
\Big|E\Big[\big(2\tilde A_n t + 3i\tilde B_n t^2\big)\big(e^{\tilde A_n t^2+i\tilde B_n t^3}-1\big)\mathbb{1}_{T^c}\Big]\Big| \le{}& E\Big[\big(2|\tilde A_n t| + 3|\tilde B_n t^2|\big)\big|e^{\tilde A_n t^2}\cos(\tilde B_n t^3)-1\big|\mathbb{1}_{T^c}\Big] \\
&+ E\Big[\big(2|\tilde A_n t| + 3|\tilde B_n t^2|\big)\big|\sin(\tilde B_n t^3)\big|e^{\tilde A_n t^2}\mathbb{1}_{T^c}\Big].
\end{aligned}$$

Now, combining the elementary inequality 1 cos x x 2 1 2 with (29) and (34) entails

$$\big|e^{\tilde A_n t^2}\cos(\tilde B_n t^3)-1\big|\mathbb{1}_{T^c} \le \big|e^{\tilde A_n t^2}\big[\cos(\tilde B_n t^3)-1\big]\big|\mathbb{1}_{T^c} + \big|e^{\tilde A_n t^2}-1\big|\mathbb{1}_{T^c} \le \frac{1}{2}\tilde B_n^2 t^6 e^{t^2/3} + |\tilde A_n|\,t^2\big(1+e^{t^2/3}\big) \quad \text{a.s.}$$

for every $t$ in $\mathbb{R}$ and hence

$$\begin{aligned}
E\Big[\big(2|\tilde A_n t| + 3|\tilde B_n t^2|\big)\big|e^{\tilde A_n t^2}\cos(\tilde B_n t^3)-1\big|\mathbb{1}_{T^c}\Big] \le{}& \frac{1}{72}m_4^{1/2}\big(m_4^{1/2}+1\big)W|t|^7 e^{t^2/3} + \frac{1}{144}m_4^{5/4}W t^8 e^{t^2/3} \\
&+ Z\big(1+e^{t^2/3}\big)\Big(\frac{1}{2}|t|^3 + \frac{3}{8}t^4\Big) + \frac{1}{24}m_4^{1/2}W t^4\big(1+e^{t^2/3}\big)
\end{aligned}$$
(46)

holds, along with

$$E\Big[\big(2|\tilde A_n t| + 3|\tilde B_n t^2|\big)\big|e^{\tilde A_n t^2}\sin(\tilde B_n t^3)\big|\mathbb{1}_{T^c}\Big] \le \frac{1}{4}Z t^4 e^{t^2/3} + W m_4^{1/2}\Big(\frac{1}{36}t^4 + \frac{1}{12}|t|^5\Big)e^{t^2/3}.$$
(47)

The study of the last summand in (42) reduces to the analysis of the first derivative of H ˜ n , which is equal to

$$\tilde H_n'(t) = \sum_{j=1}^n\tilde R_j'(t) - 2\sum_{j=1}^n\Phi\big(\tilde q_j(t)\big)\tilde q_j'(t)\tilde q_j(t) - \sum_{j=1}^n\Phi'\big(\tilde q_j(t)\big)\tilde q_j'(t)\tilde q_j^2(t).$$
(48)

Now, recall (16) and note that a dominated convergence argument yields

$$\tilde R_j'(t) = -\frac{3i}{2}c_j^3 t^2\,E\Big[\xi_j^3\int_0^1(1-u)^2\big(e^{iutc_j\xi_j}-1\big)\,du\ \Big|\ \mathcal{H}\Big] + \frac{1}{2}c_j^4 t^3\,E\Big[\xi_j^4\int_0^1 u(1-u)^2 e^{iutc_j\xi_j}\,du\ \Big|\ \mathcal{H}\Big],$$

from which

$$|\tilde R_j'(t)| \le \frac{1}{6}m_4 c_j^4 |t|^3 \quad \text{a.s.}$$
(49)

for every $t$. Then, if $t$ is in $[-\tau,\tau]$, (49) gives

$$\sum_{j=1}^n|\tilde R_j'(t)| \le \frac{1}{48}m_4^{1/4} \quad \text{a.s.}$$
(50)

From the equality $\tilde q_j'(t) = -c_j^2\,E[\xi_j^2\mid\mathcal{H}]\,t - \frac{i}{2}c_j^3\,E[\xi_j^3\mid\mathcal{H}]\,t^2 + \tilde R_j'(t)$ and the fact that $|c_j t| \le \frac{1}{2}m_4^{-1/4}$ when $|t|\le\tau$, it follows that

$$|\tilde q_j'(t)|^2 \le 3 m_4 c_j^4 t^2 + \frac{3}{16} m_4 c_j^4 t^2 + 3|\tilde R_j'(t)|^2 \quad \text{a.s.}$$

for every t in [τ,τ]. This, combined with (49)-(50), yields

$$\sum_{j=1}^n|\tilde q_j'(t)|^2 \le W\Big(\frac{51}{16}t^2 + \frac{1}{96}m_4^{1/4}|t|^3\Big) \quad \text{a.s.}$$
(51)

Moreover, for every $t$ in $[-\tau,\tau]$, one has

$$\sum_{j=1}^n|\tilde q_j'(t)|^2 \le \frac{613}{768}m_4^{1/2} \quad \text{a.s.}$$
(52)

The complete monotonicity of Φ entails

$$2\sum_{j=1}^n\big|\Phi\big(\tilde q_j(t)\big)\tilde q_j'(t)\tilde q_j(t)\big| + \sum_{j=1}^n\big|\Phi'\big(\tilde q_j(t)\big)\tilde q_j'(t)\tilde q_j^2(t)\big| \le \Phi\Big(-\frac{19}{128}\Big)\Big\{\sum_{j=1}^n|\tilde q_j'(t)|^2 + \sum_{j=1}^n|\tilde q_j(t)|^2\Big\} + \Big|\Phi'\Big(-\frac{19}{128}\Big)\Big|\sqrt{\frac{613}{768}}\,m_4^{1/4}\sum_{j=1}^n|\tilde q_j(t)|^2$$

for every $t$ in $[-\tau,\tau]$ and, in view of (24) and (51),

$$2\sum_{j=1}^n\big|\Phi\big(\tilde q_j(t)\big)\tilde q_j'(t)\tilde q_j(t)\big| + \sum_{j=1}^n\big|\Phi'\big(\tilde q_j(t)\big)\tilde q_j'(t)\tilde q_j^2(t)\big| \le W\Big[\Phi\Big(-\frac{19}{128}\Big)\frac{2369}{3072}t^4 + \Phi\Big(-\frac{19}{128}\Big)\frac{1}{96}m_4^{1/4}|t|^3 + \Phi\Big(-\frac{19}{128}\Big)\frac{51}{16}t^2 + \Big|\Phi'\Big(-\frac{19}{128}\Big)\Big|\frac{2369}{3072}\sqrt{\frac{613}{768}}\,m_4^{1/4}t^4\Big].$$

The combination of this last inequality with (49) yields

$$|\tilde H_n'(t)| \le W\Big[\Phi\Big(-\frac{19}{128}\Big)\frac{2369}{3072}t^4 + \Phi\Big(-\frac{19}{128}\Big)\frac{1}{96}m_4^{1/4}|t|^3 + \Phi\Big(-\frac{19}{128}\Big)\frac{51}{16}t^2 + \Big|\Phi'\Big(-\frac{19}{128}\Big)\Big|\frac{2369}{3072}\sqrt{\frac{613}{768}}\,m_4^{1/4}t^4 + \frac{1}{6}|t|^3\Big].$$
(53)

Taking account of (26), (29) and (53), the last summand in (42) can be bounded as follows:

$$e^{-t^2/2}\,E\Big[e^{\tilde A_n t^2}\big|\tilde H_n'(t)\big|e^{|\tilde H_n(t)|}\mathbb{1}_{T^c}\Big] \le e^{H} W\Big[\Phi\Big(-\frac{19}{128}\Big)\frac{2369}{3072}t^4 + \Phi\Big(-\frac{19}{128}\Big)\frac{1}{96}m_4^{1/4}|t|^3 + \Phi\Big(-\frac{19}{128}\Big)\frac{51}{16}t^2 + \Big|\Phi'\Big(-\frac{19}{128}\Big)\Big|\frac{2369}{3072}\sqrt{\frac{613}{768}}\,m_4^{1/4}t^4 + \frac{1}{6}|t|^3\Big]e^{-t^2/6}.$$
(54)

At this point, use (42) along with (28), (30), (40), (43), (45)-(47) and (54), to obtain (9) in the case l=1, with the following functions:

$$\begin{aligned}
u_{2,1}(t) :={}& \Big(\frac{1}{2}t^2+1\Big)|t|e^{-t^2/2}, \qquad u_{3,1}(t) := \Big(\frac{1}{2}t^2+\frac{1}{6}t^4\Big)e^{-t^2/2}, \\
u_{4,1}(t) :={}& \frac{1}{72}m_4^{1/2}|t|^7 e^{-t^2/6} + \frac{1}{72}m_4^{1/2}\big(1+e^{t^2/3}\big)t^6 e^{-t^2/2} + \frac{1}{1296}m_4^{5/4}t^{10} e^{-t^2/2} + \kappa|t|^5 e^{-t^2/6} \\
&+ e^{H}\Big[\Phi\Big(-\frac{19}{128}\Big)\frac{2369}{3072}t^4 + \Phi\Big(-\frac{19}{128}\Big)\frac{1}{96}m_4^{1/4}|t|^3 + \Phi\Big(-\frac{19}{128}\Big)\frac{51}{16}t^2 + \Big|\Phi'\Big(-\frac{19}{128}\Big)\Big|\frac{2369}{3072}\sqrt{\frac{613}{768}}\,m_4^{1/4}t^4 + \frac{1}{6}|t|^3\Big]e^{-t^2/6} \\
&+ \kappa\Big[\big(m_4^{1/2}+1\big)|t| + \frac{1}{2}m_4^{3/4}t^2\Big]t^4 e^{-t^2/6} + \frac{1}{72}m_4^{1/2}\big(m_4^{1/2}+1\big)|t|^7 e^{-t^2/6} + \frac{1}{144}m_4^{5/4}t^8 e^{-t^2/6} \\
&+ \frac{1}{24}m_4^{1/2}t^4\big(1+e^{t^2/3}\big)e^{-t^2/2} + m_4^{1/2}\Big(\frac{1}{36}t^4+\frac{1}{12}|t|^5\Big)e^{-t^2/6}, \\
v_1(t) :={}& \frac{9}{2}\big(m_4^{1/2}+1\big)|t|^3 e^{-t^2/2} + \frac{1}{8}\big(1+e^{t^2/3}\big)\big(|t|^5+t^6\big)e^{-t^2/2} + \frac{3}{2}m_4^{3/4}t^4 e^{-t^2/2} + 9|t|e^{-t^2/2} \\
&+ 9\Big[m_4^{1/2}+1+\frac{1}{2}m_4^{3/4}\Big]e^{-t^2/2} + |t|^3\Big(\frac{1}{2}+\frac{3}{8}|t|\Big)\big(1+e^{t^2/3}\big)e^{-t^2/2} + \frac{1}{4}t^4 e^{-t^2/6}.
\end{aligned}$$

To complete the proof of the theorem, it remains to study the second derivative. Therefore, differentiate (41) with respect to $t$ to obtain

$$\begin{aligned}
\frac{d^2}{dt^2}E\big[e^{itS_n}\mid\mathcal{H}\big] ={}& \big(t^2-1\big)e^{-t^2/2}e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)} + \big[2\tilde A_n + 6i\tilde B_n t\big]e^{-t^2/2}e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)} \\
&+ \big[4\tilde A_n^2 t^2 - 9\tilde B_n^2 t^4 + 12i\tilde A_n\tilde B_n t^3\big]e^{-t^2/2}e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)} + \big[\tilde H_n''(t) + \big(\tilde H_n'(t)\big)^2\big]e^{-t^2/2}e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)} \\
&+ 2\tilde H_n'(t)\big[\big(2\tilde A_n-1\big)t + 3i\tilde B_n t^2\big]e^{-t^2/2}e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)} - 2t\big[2\tilde A_n t + 3i\tilde B_n t^2\big]e^{-t^2/2}e^{\tilde A_n t^2+i\tilde B_n t^3}e^{\tilde H_n(t)}.
\end{aligned}$$
(55)

The first step consists in splitting the expectation $E[\frac{d^2}{dt^2}E[e^{itS_n}\mid\mathcal{H}]]$ on $T$ and $T^c$, which produces

\[
\Big|\frac{d^2}{dt^2}\big[\varphi_n(t)-e^{-t^2/2}\big]\Big| \le E\Big[\Big|\frac{d^2}{dt^2}E\big[e^{itS_n}\mid H\big]\Big|1_T\Big]+P(T)\,\big|t^2-1\big|e^{-t^2/2}+\sum_{r=1}^{6}E_r,
\]
(56)

where the terms $E_r$ will be defined and studied separately. First,

\[
\begin{aligned}
E_1 :={}& \Big|E\Big[\big(t^2-1\big)e^{-t^2/2}\big(e^{\tilde A_nt^2+i\tilde B_nt^3}e^{\tilde H_n(t)}-1\big)1_{T^c}\Big]\Big|\\
\le{}& \big|t^2-1\big|e^{-t^2/2}\,\Big|E\Big[e^{\tilde A_nt^2+i\tilde B_nt^3}\big(e^{\tilde H_n(t)}-1\big)1_{T^c}\Big]\Big|+\big|t^2-1\big|e^{-t^2/2}\,\Big|E\Big[\big(e^{\tilde A_nt^2+i\tilde B_nt^3}-1\big)1_{T^c}\Big]\Big|
\end{aligned}
\]

and a bound for this quantity is obtained by multiplying the sum of the RHSs in (30) and (40) by $|t^2-1|$. Second,

\[
\begin{aligned}
E_2 :={}& \Big|E\Big[\big[2\tilde A_n\big(1-2t^2\big)+6i\tilde B_nt(1-t)\big]e^{-t^2/2}e^{\tilde A_nt^2+i\tilde B_nt^3}e^{\tilde H_n(t)}1_{T^c}\Big]\Big|\\
\le{}& e^{-t^2/6}E\Big[\big[2|\tilde A_n|\,\big|1-2t^2\big|+6|\tilde B_n|\,|t(1-t)|\big]\,\big|e^{\tilde H_n(t)}-1\big|\Big]\\
&+e^{-t^2/2}\Big|E\Big[\big[2\tilde A_n\big(1-2t^2\big)+6i\tilde B_nt(1-t)\big]\big(e^{\tilde A_nt^2+i\tilde B_nt^3}-1\big)1_{T^c}\Big]\Big|\\
&+e^{-t^2/2}\Big|E\Big[\big(2\tilde A_n\big(1-2t^2\big)+6i\tilde B_nt(1-t)\big)1_{T^c}\Big]\Big|
\end{aligned}
\]
(57)

and, according to the same line of reasoning used to get (45),

\[
E\Big[\big[2|\tilde A_n|\,\big|1-2t^2\big|+6|\tilde B_n|\,|t(1-t)|\big]\,\big|e^{\tilde H_n(t)}-1\big|\Big] \le \kappa W\big[\big(m_4^{1/2}+1\big)\big|1-2t^2\big|+m_4^{3/4}|t(1-t)|\big]t^4.
\]

Next, as far as the second summand in the RHS of (57) is concerned, the same argument used to obtain (46)-(47) leads to

\[
\begin{aligned}
&e^{-t^2/2}\Big|E\Big[\big[2\tilde A_n\big(1-2t^2\big)+6i\tilde B_nt(1-t)\big]\big(e^{\tilde A_nt^2+i\tilde B_nt^3}-1\big)1_{T^c}\Big]\Big|\\
&\quad\le \frac{1}{72}m_4^{1/2}W\Big[e^{-t^2/6}t^6\big(\big(m_4^{1/2}+1\big)\big|1-2t^2\big|+m_4^{3/4}|t(1-t)|\big)\\
&\qquad+2\big(3e^{-t^2/2}t^2\big(1+e^{t^2/3}\big)|t(2-t)|+e^{-t^2/6}|t|^3\big|1-2t^2\big|+6e^{-t^2/6}t^4|1-t|\big)\Big]\\
&\qquad+\frac{1}{4}Z\Big[2e^{-t^2/2}t^2\big(1+e^{t^2/3}\big)\big|1-2t^2\big|+3e^{-t^2/2}t^2\big(1+e^{t^2/3}\big)|t(1-t)|+e^{-t^2/6}|t|^3\big|1-2t^2\big|\Big].
\end{aligned}
\]

For the third summand in the RHS of (57), it is enough to exploit the definitions of X, Y, Z and W, along with (35) and (38), to have

\[
\Big|E\Big[\big(2\tilde A_n\big(1-2t^2\big)+6i\tilde B_nt(1-t)\big)1_{T^c}\Big]\Big| \le X\big|1-2t^2\big|+Y|t(1-t)|+9Z\big[\big(m_4^{1/2}+1\big)\big|1-2t^2\big|+m_4^{3/4}|t(1-t)|\big].
\]

Then $E_3:=E[[4\tilde A_n^2t^2+9\tilde B_n^2t^4+12|\tilde A_n\tilde B_n|\,|t|^3]e^{-t^2/2+\tilde A_nt^2+|\tilde H_n(t)|}1_{T^c}]$ can be bounded by resorting to (26) and (29). Whence,

\[
E_3 \le e^{-t^2/6}e^{H}\big\{\big(4t^2+6|t|^3\big)E\big[\tilde A_n^2\big]+\big(9t^4+6|t|^3\big)E\big[\tilde B_n^2\big]\big\}
\]

and, from (32) and the definition of Z, one gets

\[
E_3 \le e^{-t^2/6}e^{H}\Big\{\frac{1}{4}\big(4t^2+6|t|^3\big)Z+\frac{1}{36}m_4^{1/2}\big(9t^4+6|t|^3\big)W\Big\}.
\]
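The passage from the definition of $E_3$ to the display above rests on the elementary bound $12|\tilde A_n\tilde B_n|\,|t|^3 \le 6|t|^3(\tilde A_n^2+\tilde B_n^2)$, i.e. $2|xy|\le x^2+y^2$. As a quick sanity check (with arbitrary, hypothetical values; not part of the proof):

```python
import itertools
import random

random.seed(0)

# 2|xy| <= x^2 + y^2, equivalently (|x| - |y|)^2 >= 0
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert 2 * abs(x * y) <= x * x + y * y + 1e-12

# hence 12|a*b|*|t|^3 <= 6*|t|^3*(a^2 + b^2), the step used for E_3
for a, b, t in itertools.product([-2.0, -0.5, 0.3, 1.7], repeat=3):
    assert 12 * abs(a * b) * abs(t) ** 3 <= 6 * abs(t) ** 3 * (a * a + b * b) + 1e-12
```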

The next term is $E_4:=e^{-t^2/2}E[(\tilde H_n'(t))^2e^{\tilde A_nt^2+|\tilde H_n(t)|}1_{T^c}]$, whose upper bound is given immediately by resorting to (26), (29) and (53). Therefore, since $W\le m_4$,

\[
\begin{aligned}
E_4 \le{}& W e^{-t^2/6}e^{H}m_4\Big[\Phi\Big(\frac{19}{128}\Big)\frac{2369}{3072}t^4+\Phi\Big(\frac{19}{128}\Big)\frac{1}{96}m_4^{1/4}|t|^3+\Phi\Big(\frac{19}{128}\Big)\frac{51}{16}t^2\\
&+\Big|\Phi'\Big(\frac{19}{128}\Big)\Big|\frac{2369}{3072}\cdot\frac{613}{768}m_4^{1/4}t^4+\frac{1}{6}|t|^3\Big]^2.
\end{aligned}
\]

To analyze $E_5:=e^{-t^2/2}E[|\tilde H_n''(t)|e^{\tilde A_nt^2+|\tilde H_n(t)|}1_{T^c}]$, it is necessary to study the second derivative of $\tilde H_n$, that is,

\[
\begin{aligned}
\tilde H_n''(t) ={}& \sum_{j=1}^n\tilde R_j''(t)-4\sum_{j=1}^n\Phi'\big(\tilde q_j(t)\big)\tilde q_j(t)\big[\tilde q_j'(t)\big]^2-2\sum_{j=1}^n\Phi\big(\tilde q_j(t)\big)\big[\tilde q_j'(t)\big]^2\\
&-2\sum_{j=1}^n\Phi\big(\tilde q_j(t)\big)\tilde q_j(t)\tilde q_j''(t)-\sum_{j=1}^n\Phi''\big(\tilde q_j(t)\big)\tilde q_j^2(t)\big[\tilde q_j'(t)\big]^2-\sum_{j=1}^n\Phi'\big(\tilde q_j(t)\big)\tilde q_j''(t)\tilde q_j^2(t).
\end{aligned}
\]
(58)

To bound this quantity, first recall (17) and exchange derivatives with integrals to obtain

\[
\begin{aligned}
\tilde R_j''(t) ={}& 2c_j^2E\Big[\xi_j^2\int_0^1(1-u)\big(e^{iutc_j\xi_j}-1-iutc_j\xi_j\big)\,du\,\Big|\,H\Big]\\
&-c_j^2t^2E\Big[\xi_j^2\int_0^1(1-u)(iuc_j\xi_j)^2e^{iutc_j\xi_j}\,du\,\Big|\,H\Big]\\
&-2c_j^2tE\Big[\xi_j^2\int_0^1(1-u)(iuc_j\xi_j)\big(e^{iutc_j\xi_j}-1\big)\,du\,\Big|\,H\Big],
\end{aligned}
\]

which, after applications of the elementary inequality $|e^{ix}-\sum_{k=0}^{r-1}(ix)^k/k!|\le|x|^r/r!$, gives

\[
\big|\tilde R_j''(t)\big| \le \frac{1}{3}m_4c_j^4t^2 \quad \text{a.s.}
\]
(59)
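The elementary Taylor-remainder inequality $|e^{ix}-\sum_{k=0}^{r-1}(ix)^k/k!|\le|x|^r/r!$ invoked to obtain (59) can be sanity-checked numerically; a minimal sketch (the sample points are arbitrary):

```python
import cmath
import math

def remainder(x: float, r: int) -> float:
    """|e^{ix} - sum_{k=0}^{r-1} (ix)^k / k!| for real x."""
    partial = sum((1j * x) ** k / math.factorial(k) for k in range(r))
    return abs(cmath.exp(1j * x) - partial)

# the remainder is dominated by |x|^r / r! for every real x and r >= 1
for x in (-7.3, -1.0, -0.2, 0.5, 2.0, 10.0):
    for r in range(1, 7):
        assert remainder(x, r) <= abs(x) ** r / math.factorial(r) + 1e-9
```

The cases $r=1$ and $r=2$, i.e. $|e^{ix}-1|\le|x|$ and $|e^{ix}-1-ix|\le x^2/2$, are exactly the ones used on the three integrands above.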

Whence,

\[
\sum_{j=1}^n\big|\tilde R_j''(t)\big| \le \frac{1}{3}Wt^2 \quad \text{a.s.}
\]
(60)

is valid for every t, and

\[
\sum_{j=1}^n\big|\tilde R_j''(t)\big| \le \frac{1}{12}m_4^{1/2} \quad \text{a.s.}
\]
(61)

holds for each $t$ in $[-\tau,\tau]$. From $\tilde q_j''(t)=-c_j^2E[\xi_j^2\mid H]-ic_j^3E[\xi_j^3\mid H]\,t+\tilde R_j''(t)$ and the fact that $|c_jt|\le\frac{1}{2}m_4^{-1/4}$ for each $t$ in $[-\tau,\tau]$, one deduces $|\tilde q_j''(t)|^2\le 3c_j^4m_4+\frac{3}{4}c_j^4m_4+3|\tilde R_j''(t)|^2$. This inequality, combined with (59)-(61), yields

\[
\sum_{j=1}^n\big|\tilde q_j''(t)\big|^2 \le W\Big(\frac{15}{4}+\frac{1}{12}m_4^{1/2}t^2\Big) \quad \text{a.s.}
\]
(62)

and, for $t$ in $[-\tau,\tau]$,

\[
\sum_{j=1}^n\big|\tilde q_j''(t)\big|^2 \le \frac{181}{48}m_4 \quad \text{a.s.}
\]
(63)
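The pointwise bound on $|\tilde q_j''(t)|^2$ used to derive (62)-(63) is an instance of the elementary inequality $|a+b+c|^2\le 3(|a|^2+|b|^2+|c|^2)$, valid for complex $a$, $b$, $c$. A quick numeric sanity check with arbitrary (hypothetical) complex entries:

```python
import random

random.seed(1)

# |a + b + c|^2 <= 3(|a|^2 + |b|^2 + |c|^2), by Cauchy-Schwarz on three terms
for _ in range(1000):
    a, b, c = (complex(random.uniform(-5, 5), random.uniform(-5, 5))
               for _ in range(3))
    assert abs(a + b + c) ** 2 <= 3 * (abs(a) ** 2 + abs(b) ** 2 + abs(c) ** 2) + 1e-9
```

Here the three terms play the roles of $-c_j^2E[\xi_j^2\mid H]$, $-ic_j^3E[\xi_j^3\mid H]\,t$ and $\tilde R_j''(t)$.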

At this stage, invoke (58) and the complete monotonicity of $\Phi$, and combine the inequality $2|xy|\le x^2+y^2$ with (20), (24), (51)-(52) and (62)-(63) to get

\[
\begin{aligned}
\big|\tilde H_n''(t)\big| \le{}& W\Big\{\frac{1}{3}t^2+4\Big|\Phi'\Big(\frac{19}{128}\Big)\Big|\frac{19}{128}\Big(\frac{51}{16}t^2+\frac{1}{96}m_4^{1/4}|t|^3\Big)\\
&+2\Phi\Big(\frac{19}{128}\Big)\Big(\frac{51}{16}t^2+\frac{1}{96}m_4^{1/4}|t|^3\Big)+\Phi''\Big(\frac{19}{128}\Big)\frac{613}{768}\cdot\frac{2369}{3072}m_4^{1/2}t^4\\
&+\Big|\Phi'\Big(\frac{19}{128}\Big)\Big|\frac{181}{48}\cdot\frac{2369}{3072}m_4^{1/2}t^4+\Phi\Big(\frac{19}{128}\Big)\Big[\Big(\frac{15}{4}+\frac{1}{12}m_4^{1/2}t^2\Big)+\frac{2369}{3072}t^4\Big]\Big\}.
\end{aligned}
\]

Therefore, an upper bound for $E_5$ is given by multiplying the above RHS by $e^{-t^2/6}e^{H}$. Finally, the last term

\[
E_6 := e^{-t^2/2}E\Big[2\big|\tilde H_n'(t)\big|\,\big[|(2\tilde A_n-1)t|+3|\tilde B_n|t^2\big]e^{\tilde A_nt^2+|\tilde H_n(t)|}1_{T^c}\Big]
\]

can be handled without further effort by resorting to (26), (29), (35), (38) and (53). Therefore, an upper bound is obtained immediately by multiplying the RHS of (53) by $2((m_4^{1/2}+2)|t|+\frac{1}{2}m_4^{3/4}t^2)e^{-t^2/6}e^{H}$.

A combination of all the previous inequalities, starting from (56), leads to the proof of (9) for $l=2$, with the coefficients specified as follows:

\[
\begin{aligned}
u_{2,2}(t) :={}& \Big(\frac{1}{2}t^2\big|t^2-1\big|+\big|1-2t^2\big|\Big)e^{-t^2/2},\\
u_{3,2}(t) :={}& \Big(|t(1-t)|+\frac{1}{6}|t|^3\big|t^2-1\big|\Big)e^{-t^2/2},\\
u_{4,2}(t) :={}& \Big[\frac{1}{72}m_4^{1/2}t^6e^{-t^2/6}+\frac{1}{72}m_4^{1/2}\big(1+e^{t^2/3}\big)|t|^5e^{-t^2/2}+\frac{1}{1296}m_4^{5/4}|t|^9e^{-t^2/2}\Big]\times\big|t^2-1\big|\\
&+\kappa\big|t^2-1\big|t^4e^{-t^2/6}+\kappa\Big[\big(m_4^{1/2}+1\big)\big|1-2t^2\big|+m_4^{3/4}|t(1-t)|\Big]t^4e^{-t^2/6}\\
&+\frac{1}{72}m_4^{1/2}\Big[e^{-t^2/6}t^6\big(\big(m_4^{1/2}+1\big)\big|1-2t^2\big|+m_4^{3/4}|t(1-t)|\big)\\
&\qquad+6e^{-t^2/2}t^2\big(1+e^{t^2/3}\big)|t(1-t)|+2e^{-t^2/6}\big|t^3\big(1-2t^2\big)\big|+12e^{-t^2/6}t^4|1-t|\Big]\\
&+\frac{1}{36}e^{H}m_4^{1/2}\big(9t^4+6|t|^3\big)e^{-t^2/6}\\
&+m_4e^{H}\Big[\Phi\Big(\frac{19}{128}\Big)\frac{2369}{3072}t^4+\Phi\Big(\frac{19}{128}\Big)\frac{1}{96}m_4^{1/4}|t|^3+\Phi\Big(\frac{19}{128}\Big)\frac{51}{16}t^2\\
&\qquad+\Big|\Phi'\Big(\frac{19}{128}\Big)\Big|\frac{2369}{3072}\cdot\frac{613}{768}m_4^{1/4}t^4+\frac{1}{6}|t|^3\Big]^2e^{-t^2/6}\\
&+e^{H}\Big\{\frac{1}{3}t^2+4\Big|\Phi'\Big(\frac{19}{128}\Big)\Big|\frac{19}{128}\Big(\frac{51}{16}t^2+\frac{1}{96}m_4^{1/4}|t|^3\Big)\\
&\qquad+2\Phi\Big(\frac{19}{128}\Big)\Big(\frac{51}{16}t^2+\frac{1}{96}m_4^{1/4}|t|^3\Big)\\
&\qquad+\Phi\Big(\frac{19}{128}\Big)\Big[\Big(\frac{15}{4}+\frac{1}{12}m_4^{1/2}
\end{aligned}
\]