
Precise asymptotics in the law of the iterated logarithm for R/S statistic

Abstract

Let $\{X, X_n; n\ge 1\}$ be a sequence of i.i.d. random variables in the domain of attraction of the normal law with zero mean and possibly infinite variance, and let $Q(n)=R(n)/S(n)$ be the rescaled range statistic, where $R(n)=\max_{1\le k\le n}\{\sum_{j=1}^{k}(X_j-\bar X_n)\}-\min_{1\le k\le n}\{\sum_{j=1}^{k}(X_j-\bar X_n)\}$, $S^2(n)=\sum_{j=1}^{n}(X_j-\bar X_n)^2/n$ and $\bar X_n=\sum_{j=1}^{n}X_j/n$. Two precise asymptotics related to convergence in probability for the $Q(n)$ statistic are established in this paper under some mild conditions. Moreover, a precise asymptotic related to almost sure convergence for the $Q(n)$ statistic is also considered under some mild conditions.

MSC:60F15, 60G50.

1 Introduction and main results

Let $\{X, X_n; n\ge 1\}$ be a sequence of i.i.d. random variables, and set $S_n=\sum_{j=1}^{n}X_j$ for $n\ge 1$, $\log x=\ln(x\vee e)$ and $\log\log x=\log(\log x)$. Hsu and Robbins [1] and Erdős [2] established the well-known complete convergence result: for any $\varepsilon>0$, $\sum_{n=1}^{\infty}P(|S_n|\ge\varepsilon n)<\infty$ if and only if $EX=0$ and $EX^2<\infty$. Baum and Katz [3] extended this result and proved that, for $1\le p<2$, $\varepsilon>0$ and $r\ge p$, $\sum_{n=1}^{\infty}n^{r-2}P(|S_n|\ge\varepsilon n^{1/p})<\infty$ holds if and only if $EX=0$ and $E|X|^{rp}<\infty$. Since then, many authors have considered various extensions of the results of Hsu–Robbins–Erdős and Baum–Katz. Some of them studied the precise asymptotics of the infinite sums as $\varepsilon\searrow 0$ (cf. Heyde [4], Chen [5] and Spătaru [6]). We note that the above results do not hold for $p=2$; this is due to the fact that $P(|S_n|\ge\varepsilon n^{1/2})\to P(|N(0,1)|\ge\varepsilon/\sqrt{EX^2})$ by the central limit theorem when $EX=0$, where $N(0,1)$ denotes a standard normal random variable. It should be noted that $P(|N(0,1)|\ge\varepsilon/\sqrt{EX^2})$ does not depend on $n$. However, if $n^{1/2}$ is replaced by some other function of $n$, precise asymptotics may still hold. For example, replacing $n^{1/2}$ by $\sqrt{n\log\log n}$, Gut and Spătaru [7] established the following results, called precise asymptotics in the law of the iterated logarithm.

Theorem A Suppose $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables with $EX=0$, $EX^2=\sigma^2$ and $EX^2(\log\log|X|)^{1+\delta}<\infty$ for some $\delta>0$, and let $a_n=O(\sqrt{n}/(\log\log n)^{\gamma})$ for some $\gamma>1/2$. Then

$$\lim_{\varepsilon\searrow 1}\sqrt{\varepsilon^{2}-1}\sum_{n=1}^{\infty}\frac{1}{n}P\bigl(|S_n|\ge\varepsilon\sigma\sqrt{2n\log\log n}+a_n\bigr)=1.$$

Theorem B Suppose $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables with $EX=0$ and $EX^2=\sigma^2<\infty$. Then

$$\lim_{\varepsilon\searrow 0}\varepsilon^{2}\sum_{n=1}^{\infty}\frac{1}{n\log n}P\bigl(|S_n|\ge\varepsilon\sigma\sqrt{n\log\log n}\bigr)=1.$$

Recently, by applying a strong approximation method different from that of Gut and Spătaru, Zhang [8] gave necessary and sufficient conditions for this kind of result to hold. One of his results is stated as follows.

Theorem C Let $a>-1$ and $b>-1/2$, and let $a_n(\varepsilon)$ be a function of $\varepsilon$ such that

$$a_n(\varepsilon)\log\log n\to\tau\quad\text{as }n\to\infty\text{ and }\varepsilon\searrow\sqrt{a+1}.$$

Suppose that

$$EX=0,\qquad EX^{2}=\sigma^{2}<\infty\qquad\text{and}\qquad EX^{2}(\log|X|)^{a}(\log\log|X|)^{b-1}<\infty$$
(1.1)

and

$$EX^{2}I\{|X|\ge t\}=o\bigl((\log\log t)^{-1}\bigr)\quad\text{as }t\to\infty.$$
(1.2)

Then

$$\lim_{\varepsilon\searrow\sqrt{a+1}}\bigl(\varepsilon^{2}-(a+1)\bigr)^{b+1/2}\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(M_n\ge\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{2\sigma^{2}n\log\log n}\Bigr)=\frac{2}{\sqrt{\pi(a+1)}}\exp\bigl(-2\tau\sqrt{a+1}\bigr)\Gamma(b+1/2)$$
(1.3)

and

$$\lim_{\varepsilon\searrow\sqrt{a+1}}\bigl(\varepsilon^{2}-(a+1)\bigr)^{b+1/2}\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(|S_n|\ge\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{2\sigma^{2}n\log\log n}\Bigr)=\frac{1}{\sqrt{\pi(a+1)}}\exp\bigl(-2\tau\sqrt{a+1}\bigr)\Gamma(b+1/2).$$
(1.4)

Here $M_n=\max_{k\le n}|S_k|$, and here and in what follows $\Gamma(\cdot)$ denotes the gamma function. Conversely, if either (1.3) or (1.4) holds for some $a>-1$, $b>-1/2$ and some $0<\sigma<\infty$, then (1.1) holds and

$$\liminf_{t\to\infty}(\log\log t)\,EX^{2}I\{|X|\ge t\}=0.$$

It is worth mentioning that precise asymptotics in a Chung-type law of the iterated logarithm, in the law of the logarithm and in a Chung-type law of the logarithm were also considered by Zhang [9], Zhang and Lin [10] and Zhang [11], respectively.

The above-mentioned results are all related to partial sums. This paper is devoted to the study of some precise asymptotics for the rescaled range statistic (or R/S statistic), defined by $Q(n)=R(n)/S(n)$, where

$$\begin{cases}R(n)=\max\limits_{1\le k\le n}\Bigl\{\sum\limits_{j=1}^{k}(X_j-\bar X_n)\Bigr\}-\min\limits_{1\le k\le n}\Bigl\{\sum\limits_{j=1}^{k}(X_j-\bar X_n)\Bigr\},\\[2mm] S^{2}(n)=\dfrac{1}{n}\sum\limits_{j=1}^{n}(X_j-\bar X_n)^{2},\qquad \bar X_n=\dfrac{1}{n}\sum\limits_{j=1}^{n}X_j.\end{cases}$$
(1.5)

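For readers who wish to experiment numerically, the quantities in (1.5) can be computed directly from a sample. The following Python sketch (function names are ours, not from the paper) evaluates $R(n)$, $S(n)$ and $Q(n)$:

```python
import math
import random

def rs_statistic(xs):
    """Compute R(n), S(n) and Q(n) = R(n)/S(n) as defined in (1.5)."""
    n = len(xs)
    mean = sum(xs) / n
    partial, s = [], 0.0
    for x in xs:                 # partial sums of the centered observations
        s += x - mean
        partial.append(s)
    r = max(partial) - min(partial)            # R(n)
    s2 = sum((x - mean) ** 2 for x in xs) / n  # S^2(n)
    return r, math.sqrt(s2), r / math.sqrt(s2)

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(10000)]
r, s, q = rs_statistic(xs)
# Q(n) fluctuates on the scale sqrt(n log log n), in line with the LIL below
print(q / math.sqrt(len(xs) * math.log(math.log(len(xs)))))
```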
This statistic, introduced by Hurst [12] in his study of hydrology data of the Nile river and reservoir design, plays an important role in testing statistical dependence of a sequence of random variables and has been used in many applied fields such as hydrology, geophysics and economics. Because of its importance, several authors have studied limit theorems for the R/S statistic. Among them, Feller [13] established the limit distribution of $R(n)/\sqrt{n}$ in the i.i.d. case, Mandelbrot [14] studied weak convergence of $Q(n)$ in a more general setting, while Lin [15–17] and Lin and Lee [18] established laws of the iterated logarithm for $Q(n)$ under various assumptions. Among Lin's results, we notice that Lin [15] proved that

$$\limsup_{n\to\infty}\sqrt{\frac{2}{n\log\log n}}\,Q(n)=1\quad\text{a.s.}$$
(1.6)

holds if $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables in the domain of attraction of the normal law with zero mean.

Recently, applying a method similar to the one employed by Gut and Spătaru [7], Wu and Wen [19] established a result on precise asymptotics in the law of the iterated logarithm for the R/S statistic, namely the following.

Theorem D Suppose $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables with $EX=0$ and $EX^{2}<\infty$. Then for $b>-1$,

$$\lim_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}P\bigl(Q(n)\ge\varepsilon\sqrt{2n\log\log n}\bigr)=\frac{EY^{2(b+1)}}{2^{b+1}(b+1)}.$$
(1.7)

Here and in what follows, $Y=\sup_{0\le t\le 1}B(t)-\inf_{0\le t\le 1}B(t)$, where $B(t)$ is a standard Brownian bridge.

It is natural to ask whether there is a similar result for the R/S statistic when $\varepsilon$ tends to a nonzero constant. In the present paper, a partial positive answer is given under some mild conditions with the help of the strong approximation method; moreover, since the R/S statistic is defined in a self-normalized form, we do not require the second moment of $\{X, X_n; n\ge 1\}$ to be finite. We also establish a result stronger than Wu and Wen's, based on which a precise asymptotic related to a.s. convergence of the $Q(n)$ statistic is derived under some mild conditions. Throughout the paper, $C$ denotes a positive constant whose value may differ from place to place. The following are our main results.

Theorem 1.1 Suppose $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables in the domain of attraction of the normal law with $EX=0$, and the truncated second moment $l(x)=EX^{2}I\{|X|\le x\}$ satisfies $l(x)\le c_1\exp(c_2(\log x)^{\beta})$ for some $c_1>0$, $c_2>0$ and $0\le\beta<1$. Let $-1<a<0$, $b>-2$ and let $a_n(\varepsilon)$ be a function of $\varepsilon$ such that

$$a_n(\varepsilon)\log\log n\to\tau\quad\text{as }n\to\infty\text{ and }\varepsilon\searrow\sqrt{a+1}/2.$$
(1.8)

Then we have

$$\lim_{\varepsilon\searrow\sqrt{a+1}/2}\bigl(4\varepsilon^{2}-(a+1)\bigr)^{b+2}\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(Q(n)\ge\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{2n\log\log n}\Bigr)=4(a+1)\Gamma(b+2)\exp\bigl(-4\tau\sqrt{a+1}\bigr).$$
(1.9)

Theorem 1.2 Suppose $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables in the domain of attraction of the normal law with $EX=0$, and the truncated second moment $l(x)=EX^{2}I\{|X|\le x\}$ satisfies $l(x)\le c_1\exp(c_2(\log x)^{\beta})$ for some $c_1>0$, $c_2>0$ and $0\le\beta<1$. Then for $b>-1$, (1.7) is true.

Theorem 1.3 Suppose $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables in the domain of attraction of the normal law with $EX=0$, and $l(x)$ satisfies $l(x)\le c_1\exp(c_2(\log x)^{\beta})$ for some $c_1>0$, $c_2>0$ and $0\le\beta<1$. Then for any $b>-1$, we have

$$\lim_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}I\bigl\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\bigr\}=\frac{EY^{2(b+1)}}{2^{b+1}(b+1)}\quad\text{a.s.}$$

Remark 1.1 Note that $X$ belonging to the domain of attraction of the normal law is equivalent to $l(x)$ being a slowly varying function at ∞. We note also that $l(x)\le c_1\exp(c_2(\log x)^{\beta})$ is a rather weak assumption, satisfied by a large class of slowly varying functions such as $(\log\log x)^{\alpha}$ and $(\log x)^{\alpha}$ for $0<\alpha<\infty$.

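As a concrete illustration of Remark 1.1 (our example, not from the paper), take the symmetric density $f(u)=|u|^{-3}$ for $|u|\ge 1$: then $EX^{2}=\infty$ while $l(x)=2\log x$ is slowly varying, so $X$ lies in the domain of attraction of the normal law with infinite variance. A short Monte Carlo sketch checks $l(x)\approx 2\log x$:

```python
import math
import random

random.seed(0)

# X = sign * U^{-1/2} with U uniform on (0, 1]: P(|X| > x) = x^{-2} for x >= 1,
# i.e. density |u|^{-3}, so l(x) = E X^2 I{|X| <= x} = 2 log x.
N = 200000
sample = [random.choice((-1.0, 1.0)) / math.sqrt(1.0 - random.random())
          for _ in range(N)]

def l_hat(x):
    """Monte Carlo estimate of the truncated second moment l(x)."""
    return sum(v * v for v in sample if abs(v) <= x) / N

for x in (10.0, 100.0):
    print(x, l_hat(x), 2.0 * math.log(x))
```

The estimate grows like $2\log x$, while $x^{2}P(|X|>x)=1$ stays bounded, matching statement (b) of Lemma 3.5 below.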
Remark 1.2 When $EX^{2}=\sigma^{2}<\infty$, the truncated second moment $l(x)$ is bounded and hence automatically satisfies $l(x)\le c_1\exp(c_2(\log x)^{\beta})$ for some $c_1>0$, $c_2>0$ and $0\le\beta<1$. Hence Theorems 1.1–1.3 hold not only for random variables with finite second moments, but also for a class of random variables with infinite second moments. In particular, Theorem 1.2 includes Theorem D as a special case.

Remark 1.3 From Theorem C, one can see that finiteness of the second moment does not guarantee results on precise asymptotics in the LIL for partial sums when $a>0$. Moreover, the R/S statistic is clearly more complicated than partial sums. Hence, it seems impossible, or at least not easy, to prove (1.9) for $a>0$ under the conditions of Theorem 1.1 alone. However, if we impose stronger moment conditions, similar to (1.1) and (1.2), on $\{X, X_n; n\ge 1\}$, it should be possible to prove (1.9) for $a>0$ by following the ideas in Zhang [8].

Remark 1.4 Checking the proof of Theorem 1.1, one can find that

$$\lim_{\varepsilon\searrow\sqrt{a+1}}\bigl(\varepsilon^{2}-(a+1)\bigr)^{b+2}\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(Q(n)\ge\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{n\log\log n/2}\Bigr)=4(a+1)\Gamma(b+2)\exp\bigl(-4\tau\sqrt{a+1}\bigr)$$

holds if $a_n(\varepsilon)\log\log n\to 2\tau$ as $n\to\infty$ and $\varepsilon\searrow\sqrt{a+1}$, which perhaps seems more natural in view of (1.6).

The remainder of this paper is organized as follows. In Section 2, Theorem 1.1 is proved in the case that $\{X, X_n; n\ge 1\}$ is a sequence of normal random variables with zero mean. In Section 3, truncation and strong approximation methods are employed to approximate the probability related to the $R(n)$ statistic. In Section 4, Theorems 1.1 and 1.2 are proved, while Section 5 gives the proof of Theorem 1.3 after some preliminaries.

2 Normal case

In this section, Theorem 1.1 is proved in the case that $\{X, X_n; n\ge 1\}$ is a sequence of normal random variables with zero mean. To do so, we first recall that $B(t)$ is a standard Brownian bridge and $Y=\sup_{0\le t\le 1}B(t)-\inf_{0\le t\le 1}B(t)$. The distribution of $Y$ plays an important role in our first result; fortunately, it was given by Kennedy [20]:

$$P(Y\le x)=1-2\sum_{n=1}^{\infty}\bigl(4x^{2}n^{2}-1\bigr)\exp\bigl(-2x^{2}n^{2}\bigr).$$
(2.1)

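The series in (2.1) converges very fast and is easy to evaluate; the following Python sketch (function name and truncation level are ours) checks that it behaves like a distribution function:

```python
import math

def kennedy_cdf(x, terms=50):
    """P(Y <= x) for the range Y of a Brownian bridge, via the series (2.1)."""
    if x <= 0:
        return 0.0
    s = sum((4.0 * x * x * n * n - 1.0) * math.exp(-2.0 * x * x * n * n)
            for n in range(1, terms + 1))
    return 1.0 - 2.0 * s

# increases from ~0 to 1 across the bulk of the distribution of Y
for x in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(x, kennedy_cdf(x))
```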
Now, the main results in this section are stated as follows.

Proposition 2.1 Let $a>-1$, $b>-2$ and let $a_n(\varepsilon)$ be a function of $\varepsilon$ such that

$$a_n(\varepsilon)\log\log n\to\tau\quad\text{as }n\to\infty\text{ and }\varepsilon\searrow\sqrt{a+1}/2.$$
(2.2)

Then we have

$$\lim_{\varepsilon\searrow\sqrt{a+1}/2}\bigl(4\varepsilon^{2}-(a+1)\bigr)^{b+2}\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(Y\ge\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{2\log\log n}\Bigr)=4(a+1)\Gamma(b+2)\exp\bigl(-4\tau\sqrt{a+1}\bigr).$$

Proof Firstly, it follows easily from (2.1) that

$$P(Y\ge x)\sim 8x^{2}\exp\bigl(-2x^{2}\bigr)$$

as $x\to+\infty$. Then, by condition (2.2), one has

$$P\bigl(Y\ge(\varepsilon+a_n(\varepsilon))\sqrt{2\log\log n}\bigr)\sim 16\bigl(\varepsilon+a_n(\varepsilon)\bigr)^{2}\log\log n\exp\bigl(-4(\varepsilon+a_n(\varepsilon))^{2}\log\log n\bigr)\sim 16\varepsilon^{2}\log\log n\exp\bigl(-4\varepsilon^{2}\log\log n\bigr)\exp\bigl(-8\varepsilon a_n(\varepsilon)\log\log n\bigr)$$

as $n\to\infty$, uniformly in $\varepsilon\in(\sqrt{a+1}/2,\sqrt{a+1}/2+\delta)$ for some $\delta>0$. Hence, for the above $\delta>0$ and any $0<\theta<1$, there exists an integer $n_0$ such that, for all $n\ge n_0$ and $\varepsilon\in(\sqrt{a+1}/2,\sqrt{a+1}/2+\delta)$,

$$4(a+1)\log\log n\exp\bigl(-4\varepsilon^{2}\log\log n\bigr)\exp\bigl(-4\tau\sqrt{a+1}-\theta\bigr)\le P\bigl(Y\ge(\varepsilon+a_n(\varepsilon))\sqrt{2\log\log n}\bigr)\le 4(a+1)\log\log n\exp\bigl(-4\varepsilon^{2}\log\log n\bigr)\exp\bigl(-4\tau\sqrt{a+1}+\theta\bigr).$$

Obviously, it suffices to show

$$\lim_{\varepsilon\searrow\sqrt{a+1}/2}\bigl(4\varepsilon^{2}-(a+1)\bigr)^{b+2}\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b+1}}{n}\exp\bigl(-4\varepsilon^{2}\log\log n\bigr)=\Gamma(b+2)$$
(2.3)

in order to prove Proposition 2.1, by the arbitrariness of $\theta$. To this end, noting that the limit in (2.3) does not depend on any finite number of terms of the infinite series, we have

$$\begin{aligned}
&\lim_{\varepsilon\searrow\sqrt{a+1}/2}\bigl(4\varepsilon^{2}-(a+1)\bigr)^{b+2}\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b+1}}{n}\exp\bigl(-4\varepsilon^{2}\log\log n\bigr)\\
&\quad=\lim_{\varepsilon\searrow\sqrt{a+1}/2}\bigl(4\varepsilon^{2}-(a+1)\bigr)^{b+2}\int_{e^{e}}^{\infty}\frac{(\log x)^{a-4\varepsilon^{2}}(\log\log x)^{b+1}}{x}\,dx\\
&\quad=\lim_{\varepsilon\searrow\sqrt{a+1}/2}\bigl(4\varepsilon^{2}-(a+1)\bigr)^{b+2}\int_{1}^{\infty}\exp\bigl(-y(4\varepsilon^{2}-a-1)\bigr)y^{b+1}\,dy\quad(\text{by letting }y=\log\log x)\\
&\quad=\lim_{\varepsilon\searrow\sqrt{a+1}/2}\int_{4\varepsilon^{2}-(a+1)}^{\infty}e^{-u}u^{b+1}\,du\quad(\text{by letting }u=y(4\varepsilon^{2}-(a+1)))\\
&\quad=\Gamma(b+2).
\end{aligned}$$

The proposition is proved now. □
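The limit computed in the last step, $\int_{s}^{\infty}e^{-u}u^{b+1}\,du\to\Gamma(b+2)$ as $s\searrow 0$, is easy to check numerically; a small Python sketch (a crude trapezoidal rule; the cutoffs and tolerances are ours):

```python
import math

def incomplete_gamma_tail(s, b, upper=50.0, steps=200000):
    """Trapezoidal approximation of the integral of e^{-u} u^{b+1} over [s, upper]."""
    h = (upper - s) / steps
    f = lambda u: math.exp(-u) * u ** (b + 1.0)
    total = 0.5 * (f(s) + f(upper))
    total += sum(f(s + i * h) for i in range(1, steps))
    return total * h

# as s -> 0+, the tail integral converges to Gamma(b + 2)
for b in (0.0, 0.5, 1.0):
    print(b, incomplete_gamma_tail(1e-4, b), math.gamma(b + 2.0))
```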

Proposition 2.2 For any $b>-1$,

$$\lim_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}P\bigl(Y\ge\varepsilon\sqrt{2\log\log n}\bigr)=\frac{EY^{2(b+1)}}{2^{b+1}(b+1)}.$$

Proof The proof can be found in Wu and Wen [19]. □

3 Truncation and approximation

In this section, we use the truncation and strong approximation methods to show that the probability related to $R(n)$, with suitable normalization, can be approximated by that for $Y$. To this end, we first introduce some notation. Put $c=\inf\{x\ge 1: l(x)>0\}$ and

$$\eta_n=\inf\Bigl\{s: s\ge c+1,\ \frac{l(s)}{s^{2}}\le\frac{(\log\log n)^{4}}{n}\Bigr\}.$$
(3.1)

For each $n$ and $1\le i\le n$, we let

$$\begin{cases}X_{ni}'=X_iI\{|X_i|\le\eta_n\},\qquad X_{ni}^{*}=X_{ni}'-EX_{ni}',\\[1mm] S_{ni}'=\sum_{j=1}^{i}X_{nj}',\qquad S_{ni}^{*}=\sum_{j=1}^{i}X_{nj}^{*},\qquad \bar X_n'=\dfrac{1}{n}S_{nn}',\qquad D_n^{2}=\sum_{j=1}^{n}\operatorname{Var}(X_{nj}').\end{cases}$$
(3.2)

It follows easily that

$$D_n^{2}\le\sum_{j=1}^{n}E(X_{nj}')^{2}\le n\,l(\eta_n)\le\eta_n^{2}(\log\log n)^{4}.$$

Furthermore, we denote by $R'(n)$ the truncated R statistic, defined by the first expression in (1.5) with each $X_i$ replaced by $X_{ni}'$, $i=1,\ldots,n$. In addition, for any $0\le\beta<1$, all $j\ge k$ and $k$ large enough, following the lines of the proof of (2.4) in Pang, Zhang and Wang [21], we easily have

$$\frac{C\exp\bigl(-c_2(\log k)^{\beta}\bigr)}{l(\eta_k)(\log k)^{\beta}(\log\log k)^{2}}\le\frac{1}{2l(\eta_k)}\sum_{j=k}^{\infty}\frac{\exp\bigl(-c_2(\log j)^{\beta}\bigr)}{j\log j(\log\log j)^{2}}\le\sum_{j=k}^{\infty}\frac{1}{j\,l(\eta_j)\log j(\log\log j)^{2}},$$
(3.3)

despite a slight difference between the definitions of $\eta_n$ in Pang, Zhang and Wang [21] and in this paper.

We can now state the main result of this section.

Proposition 3.1 For any $a<0$, $b\in\mathbb{R}$ and $1/2<p<2$, there exists a sequence of positive numbers $\{p_n, n\ge 1\}$ such that, for any $x>0$,

$$P\bigl(Y\ge x+2/(\log\log n)^{p}\bigr)-p_n\le P\bigl(R(n)\ge xD_n\bigr)\le P\bigl(Y\ge x-2/(\log\log n)^{p}\bigr)+p_n,$$

where $p_n\to 0$ satisfies

$$\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}p_n<\infty.$$
(3.4)

To prove this proposition, the following lemmas will be useful.

Lemma 3.1 For any sequence of independent random variables $\{\xi_n, n\ge 1\}$ with zero mean and finite variance, there exists a sequence of independent normal variables $\{Y_n, n\ge 1\}$ with $EY_n=0$ and $EY_n^{2}=E\xi_n^{2}$ such that, for all $q>2$ and $y>0$,

$$P\Bigl(\max_{k\le n}\Bigl|\sum_{i=1}^{k}\xi_i-\sum_{i=1}^{k}Y_i\Bigr|\ge y\Bigr)\le(Aq)^{q}y^{-q}\sum_{i=1}^{n}E|\xi_i|^{q},$$

whenever $E|\xi_i|^{q}<\infty$, $i=1,\ldots,n$. Here $A$ is a universal constant.

Proof See Sakhanenko [22, 23]. □

Lemma 3.2 Let $\{W(t); t\ge 0\}$ be a standard Wiener process. For any $\varepsilon>0$ there exists a constant $C=C(\varepsilon)>0$ such that

$$P\Bigl(\sup_{0\le s\le 1-h}\sup_{0<t\le h}\bigl|W(s+t)-W(s)\bigr|\ge x\sqrt{h}\Bigr)\le\frac{C}{h}e^{-\frac{x^{2}}{2+\varepsilon}}$$

for every positive x and 0<h<1.

Proof It is Lemma 1.1.1 of Csörgő and Révész [24]. □

Lemma 3.3 For any $a<0$, $b\in\mathbb{R}$ and $1/2<p<2$, there exists a sequence of positive numbers $\{q_n, n\ge 1\}$ such that, for any $x>0$,

$$P\bigl(Y\ge x+1/(\log\log n)^{p}\bigr)-q_n\le P\bigl(R'(n)\ge xD_n\bigr)\le P\bigl(Y\ge x-1/(\log\log n)^{p}\bigr)+q_n,$$
(3.5)

where $q_n\to 0$ satisfies

$$\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}q_n<\infty.$$
(3.6)

Proof Let $q_n=P\bigl(|R'(n)/D_n-Y|>1/(\log\log n)^{p}\bigr)$; then obviously $q_n$ satisfies (3.5). For each $n$, let $\{W_n(t), t\ge 0\}$ be a standard Wiener process; then $\{W_n(tD_n^{2})/D_n, t\ge 0\}\stackrel{\mathcal{D}}{=}\{W_n(t), t\ge 0\}$ and

$$\begin{aligned}
q_n\le{}&2P\Bigl(\sup_{0\le s\le 1}\Bigl|\frac{\sum_{j=1}^{[ns]}\bigl(X_{nj}^{*}-\frac{1}{n}S_{nn}^{*}\bigr)}{D_n}-\frac{W_n(sD_n^{2})-sD_nW_n(1)}{D_n}\Bigr|\ge\frac{1}{2(\log\log n)^{p}}\Bigr)\\
\le{}&2P\Bigl(\max_{k\le n}\Bigl|\sum_{j=1}^{k}\bigl(X_{nj}^{*}-\tfrac{1}{n}S_{nn}^{*}\bigr)-\Bigl(W_n\Bigl(\frac{k}{n}D_n^{2}\Bigr)-\frac{k}{n}D_nW_n(1)\Bigr)\Bigr|\ge\frac{D_n}{4(\log\log n)^{p}}\Bigr)\\
&+2P\Bigl(\sup_{0\le s\le 1}\Bigl|\Bigl(W_n\Bigl(\frac{[ns]}{n}D_n^{2}\Bigr)-\frac{[ns]}{n}D_nW_n(1)\Bigr)-\bigl(W_n(sD_n^{2})-sD_nW_n(1)\bigr)\Bigr|\ge\frac{D_n}{4(\log\log n)^{p}}\Bigr)\\
:={}&I_n+II_n.
\end{aligned}$$
(3.7)

We consider $I_n$ first. Clearly,

$$\begin{aligned}
I_n\le{}&2P\Bigl(\max_{k\le n}\Bigl|\sum_{j=1}^{k}X_{nj}^{*}-W_n\Bigl(\frac{k}{n}D_n^{2}\Bigr)\Bigr|\ge\frac{D_n}{8(\log\log n)^{p}}\Bigr)+2P\Bigl(\max_{k\le n}\Bigl|\frac{k}{n}S_{nn}^{*}-\frac{k}{n}D_nW_n(1)\Bigr|\ge\frac{D_n}{8(\log\log n)^{p}}\Bigr)\\
\le{}&4P\Bigl(\max_{k\le n}\Bigl|\sum_{j=1}^{k}X_{nj}^{*}-W_n\Bigl(\frac{k}{n}D_n^{2}\Bigr)\Bigr|\ge\frac{D_n}{8(\log\log n)^{p}}\Bigr).
\end{aligned}$$

It follows from Lemma 3.1 and (3.3) that, for all q>2,

$$\begin{aligned}
\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}I_n
&\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}\Bigl(\frac{(\log\log n)^{p}}{D_n}\Bigr)^{q}\sum_{j=1}^{n}E|X_{nj}^{*}|^{q}\\
&\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b+pq}}{(n\,l(\eta_n))^{q/2}}E|X|^{q}I\{|X|\le\eta_n\}\\
&\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b+pq}}{(n\,l(\eta_n))^{q/2}}\sum_{k=1}^{n}E|X|^{q}I\{\eta_{k-1}<|X|\le\eta_k\}\\
&=C\sum_{k=1}^{\infty}E|X|^{q}I\{\eta_{k-1}<|X|\le\eta_k\}\sum_{n=k}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b+pq}}{(n\,l(\eta_n))^{q/2}}\\
&\le C\sum_{k=1}^{\infty}\eta_k^{q-2}EX^{2}I\{\eta_{k-1}<|X|\le\eta_k\}\frac{(\log k)^{a}(\log\log k)^{b+pq}}{k^{q/2-1}(l(\eta_k))^{q/2}}\\
&\le C\sum_{k=1}^{\infty}\frac{(\log k)^{a}(\log\log k)^{b+pq-2q+4}}{l(\eta_k)}EX^{2}I\{\eta_{k-1}<|X|\le\eta_k\}\\
&\le C\sum_{k=1}^{\infty}\sum_{j=k}^{\infty}\frac{1}{j\,l(\eta_j)\log j(\log\log j)^{2}}EX^{2}I\{\eta_{k-1}<|X|\le\eta_k\}\\
&=C\sum_{j=1}^{\infty}\frac{1}{j\,l(\eta_j)\log j(\log\log j)^{2}}\sum_{k=1}^{j}EX^{2}I\{\eta_{k-1}<|X|\le\eta_k\}\\
&\le C\sum_{j=1}^{\infty}\frac{1}{j\log j(\log\log j)^{2}}<\infty.
\end{aligned}$$
(3.8)

Next, we deal with $II_n$. Clearly, one has

$$II_n\le 2P\Bigl(\sup_{0\le s\le 1}\Bigl|W_n\Bigl(\frac{[ns]}{n}D_n^{2}\Bigr)-W_n(sD_n^{2})\Bigr|\ge\frac{D_n}{8(\log\log n)^{p}}\Bigr)+2P\Bigl(\sup_{0\le s\le 1}\Bigl|\frac{[ns]}{n}D_nW_n(1)-sD_nW_n(1)\Bigr|\ge\frac{D_n}{8(\log\log n)^{p}}\Bigr):=II_n(1)+II_n(2).$$
(3.9)

It follows from Lemma 3.2 that

$$\begin{aligned}
II_n(1)&=2P\Bigl(\sup_{0\le s\le 1}\Bigl|W_n\Bigl(\frac{[ns]}{n}\Bigr)-W_n(s)\Bigr|\ge\frac{1}{8(\log\log n)^{p}}\Bigr)\\
&=2P\Bigl(\sup_{0\le s\le 1}\Bigl|W_n\Bigl(\frac{[ns]}{n}\Bigr)-W_n(s)\Bigr|\ge\frac{1}{\sqrt{n}}\cdot\frac{\sqrt{n}}{8(\log\log n)^{p}}\Bigr)\\
&\le Cn\exp\Bigl(-\frac{n}{192(\log\log n)^{2p}}\Bigr),
\end{aligned}$$

which obviously leads to

$$\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}II_n(1)<\infty.$$
(3.10)

On the other hand,

$$\begin{aligned}
II_n(2)&=2P\Bigl(\sup_{0\le s\le 1}\Bigl|\frac{[ns]}{n}-s\Bigr|\,|W_n(1)|\ge\frac{1}{8(\log\log n)^{p}}\Bigr)\le 2P\Bigl(|W_n(1)|\ge\frac{n}{8(\log\log n)^{p}}\Bigr)\\
&\le\frac{C(\log\log n)^{p}\sqrt{(1+o(1))l(\eta_n)}}{n}\exp\Bigl(-\frac{n^{2}}{128(1+o(1))l(\eta_n)(\log\log n)^{2p}}\Bigr),
\end{aligned}$$

which also obviously leads to

$$\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}II_n(2)<\infty.$$
(3.11)

Combining (3.7)–(3.11) yields (3.6). The lemma is proved now. □

Lemma 3.4 For any $a<0$ and $b\in\mathbb{R}$, one has

$$\sum_{n=1}^{\infty}(\log n)^{a}(\log\log n)^{b}P(|X|>\eta_n)<\infty.$$

Proof It follows from (3.3) that

$$\begin{aligned}
\sum_{n=1}^{\infty}(\log n)^{a}(\log\log n)^{b}P(|X|>\eta_n)
&\le C\sum_{n=1}^{\infty}(\log n)^{a}(\log\log n)^{b}\sum_{k=n}^{\infty}P(\eta_k<|X|\le\eta_{k+1})\\
&\le C\sum_{k=1}^{\infty}\frac{1}{\eta_k^{2}}EX^{2}I\{\eta_k<|X|\le\eta_{k+1}\}\sum_{n=1}^{k}(\log n)^{a}(\log\log n)^{b}\\
&\le C\sum_{k=1}^{\infty}\frac{(\log k)^{a}(\log\log k)^{b+4}}{l(\eta_k)}EX^{2}I\{\eta_k<|X|\le\eta_{k+1}\}\\
&\le C\sum_{k=1}^{\infty}\sum_{j=k}^{\infty}\frac{1}{j\,l(\eta_j)\log j(\log\log j)^{2}}EX^{2}I\{\eta_k<|X|\le\eta_{k+1}\}\\
&\le C\sum_{j=1}^{\infty}\frac{1}{j\log j(\log\log j)^{2}}<\infty.
\end{aligned}$$

 □

Lemma 3.5 Let $X$ be a random variable. Then the following statements are equivalent:

(a) $X$ is in the domain of attraction of the normal law;

(b) $x^{2}P(|X|>x)=o(l(x))$;

(c) $xE\bigl(|X|I\{|X|>x\}\bigr)=o(l(x))$;

(d) $E\bigl(|X|^{n}I\{|X|\le x\}\bigr)=o\bigl(x^{n-2}l(x)\bigr)$ for $n>2$.

Proof It is Lemma 1 in Csörgő, Szyszkowicz and Wang [25]. □

Lemma 3.6 For any $a<0$ and $b\in\mathbb{R}$, one has, with $\delta(n)=1/(\log\log n\cdot\log\log\log n)$,

$$\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)<\infty.$$

Proof It is easy to see that, for large n,

$$\begin{aligned}
&P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\Bigl(\Bigl|\frac{1}{n}\sum_{i=1}^{n}X_i^{2}-l(\eta_n)\Bigr|>\delta(n)l(\eta_n)/2\Bigr)+P\bigl(\bar X_n^{2}>\delta(n)l(\eta_n)/2\bigr)\\
&\quad\le P\Bigl(\sum_{i=1}^{n}X_i^{2}>(1+\delta(n)/2)n\,l(\eta_n)\Bigr)+P\Bigl(\sum_{i=1}^{n}X_i^{2}<(1-\delta(n)/2)n\,l(\eta_n)\Bigr)+nP(|X|>\eta_n)+P\Bigl(\Bigl|\sum_{i=1}^{n}X_{ni}'\Bigr|>n\sqrt{\delta(n)l(\eta_n)/2}\Bigr)\\
&\quad\le P\Bigl(\sum_{i=1}^{n}(X_{ni}')^{2}>(1+\delta(n)/2)n\,l(\eta_n)\Bigr)+P\Bigl(\sum_{i=1}^{n}(X_{ni}')^{2}<(1-\delta(n)/2)n\,l(\eta_n)\Bigr)+2nP(|X|>\eta_n)+P\Bigl(\Bigl|\sum_{i=1}^{n}X_{ni}^{*}\Bigr|>\frac{1}{2}n\sqrt{\delta(n)l(\eta_n)/2}\Bigr),
\end{aligned}$$
(3.12)

since

$$\Bigl|E\Bigl(\sum_{i=1}^{n}X_{ni}'\Bigr)\Bigr|\le nE|X|I\{|X|>\eta_n\}=o\bigl(n\,l(\eta_n)/\eta_n\bigr)=o\bigl(n\sqrt{\delta(n)l(\eta_n)}\bigr)$$

by Lemma 3.5. Applying Lemma 3.4, we only need to show

$$\begin{cases}\displaystyle\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\sum_{i=1}^{n}(X_{ni}')^{2}>(1+\delta(n)/2)n\,l(\eta_n)\Bigr)<\infty,\\[3mm]
\displaystyle\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\sum_{i=1}^{n}(X_{ni}')^{2}<(1-\delta(n)/2)n\,l(\eta_n)\Bigr)<\infty,\\[3mm]
\displaystyle\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\Bigl|\sum_{i=1}^{n}X_{ni}^{*}\Bigr|>\frac{1}{2}n\sqrt{\delta(n)l(\eta_n)/2}\Bigr)<\infty
\end{cases}$$
(3.13)

in order to prove Lemma 3.6. Consider the first part of (3.13) first. By employing Lemma 3.5 and Bernstein's inequality (cf. Lin and Bai [26]), we have, for any fixed $\nu>1$,

$$\begin{aligned}
&\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\sum_{i=1}^{n}(X_{ni}')^{2}>(1+\delta(n)/2)n\,l(\eta_n)\Bigr)\\
&\quad=\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\sum_{i=1}^{n}(X_{ni}')^{2}-n\,l(\eta_n)>\delta(n)n\,l(\eta_n)/2\Bigr)\\
&\quad\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}\exp\Bigl(-\frac{\delta^{2}(n)n^{2}l^{2}(\eta_n)/4}{2\bigl(nEX^{4}I\{|X|\le\eta_n\}+\delta(n)\eta_n^{2}n\,l(\eta_n)/2\bigr)}\Bigr)\\
&\quad\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}\exp\Bigl(-\frac{\delta^{2}(n)n^{2}l^{2}(\eta_n)/4}{o(1)\eta_n^{2}n\,l(\eta_n)}\Bigr)\\
&\quad\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}\exp(-\nu\log\log n)<\infty.
\end{aligned}$$
(3.14)

The second part of (3.13) can be proved by similar arguments. Now, let us consider the third part of (3.13). It follows from Markov’s inequality that

$$\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\Bigl|\sum_{i=1}^{n}X_{ni}^{*}\Bigr|>\frac{1}{2}n\sqrt{\delta(n)l(\eta_n)/2}\Bigr)\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b+1}}{n}\cdot\frac{n\,l(\eta_n)}{n^{2}\delta(n)l(\eta_n)}<\infty.$$

The proof is completed now. □

Lemma 3.7 Define $\Delta_n=|R'(n)-R(n)|$. Then for any $a<0$ and $b\in\mathbb{R}$, one has

$$\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr)<\infty.$$

Proof Firstly, notice that the $R(n)$ statistic has the equivalent expression

$$R(n)=\max_{1\le i<j\le n}\Bigl|S_j-S_i-\frac{j-i}{n}S_n\Bigr|$$
(3.15)

and so does $R'(n)$, with each $X_i$ replaced by $X_{ni}'$ in (3.15), $i=1,\ldots,n$. That is,

$$R'(n)=\max_{1\le i<j\le n}\Bigl|\Bigl(S_{nj}'-S_{ni}'-\frac{j-i}{n}S_{nn}'\Bigr)-\Bigl(ES_{nj}'-ES_{ni}'-\frac{j-i}{n}ES_{nn}'\Bigr)\Bigr|.$$

Let $\beta_n=2nE|X|I\{|X|>\eta_n\}$; then

$$\max_{1\le i<j\le n}\Bigl|ES_{nj}'-ES_{ni}'-\frac{j-i}{n}ES_{nn}'\Bigr|\le\beta_n.$$

Setting

$$L=\Bigl\{n:\beta_n\le\frac{\eta_n}{(\log\log n)^{2}}\Bigr\},$$

then it is easily seen that, for $n\in L$,

$$\Bigl\{\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr\}\subset\bigcup_{j=1}^{n}\{X_j\ne X_{nj}'\},$$

since $\Delta_n\le\beta_n\le\eta_n/(\log\log n)^{2}\le D_n/(\log\log n)^{2}$ on the event $\bigcap_{j=1}^{n}\{X_j=X_{nj}'\}$, $D_n\ge\eta_n$ being valid for large $n$. Hence, it follows from Lemma 3.4 that

$$\sum_{n\in L}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr)\le\sum_{n=1}^{\infty}(\log n)^{a}(\log\log n)^{b}P(|X|>\eta_n)<\infty.$$

When $n\notin L$, applying (3.3) yields

$$\begin{aligned}
\sum_{n\notin L}\frac{(\log n)^{a}(\log\log n)^{b}}{n}P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr)
&\le\sum_{n\notin L}\frac{(\log n)^{a}(\log\log n)^{b}}{n}\le\sum_{n\notin L}\frac{(\log n)^{a}(\log\log n)^{b}}{n}\cdot\frac{\beta_n(\log\log n)^{2}}{\eta_n}\\
&\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b+4}}{\sqrt{n\,l(\eta_n)}}E|X|I\{|X|>\eta_n\}\\
&\le C\sum_{n=1}^{\infty}\frac{(\log n)^{a}(\log\log n)^{b+4}}{\sqrt{n\,l(\eta_n)}}\sum_{k=n}^{\infty}E|X|I\{\eta_k<|X|\le\eta_{k+1}\}\\
&\le C\sum_{k=1}^{\infty}\frac{\sqrt{k}\,(\log k)^{a}(\log\log k)^{b+4}}{\sqrt{l(\eta_k)}}\cdot\frac{EX^{2}I\{\eta_k<|X|\le\eta_{k+1}\}}{\eta_k}\\
&\le C\sum_{k=1}^{\infty}\frac{(\log k)^{a}(\log\log k)^{b+6}}{l(\eta_k)}EX^{2}I\{\eta_k<|X|\le\eta_{k+1}\}\\
&\le C\sum_{j=1}^{\infty}\frac{1}{j\log j(\log\log j)^{2}}<\infty.
\end{aligned}$$

 □
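The algebraic identity (3.15) used at the start of this proof is easy to confirm numerically; the following Python sketch (ours, not from the paper) compares both expressions for $R(n)$ on random data:

```python
import random

def r_via_range(xs):
    """R(n) as in (1.5): range of the partial sums of the centered sample."""
    n = len(xs)
    mean = sum(xs) / n
    partial, s = [], 0.0
    for x in xs:
        s += x - mean
        partial.append(s)
    return max(partial) - min(partial)

def r_via_pairs(xs):
    """R(n) as in (3.15): max over 1 <= i < j <= n of |S_j - S_i - (j-i)/n * S_n|."""
    n = len(xs)
    s = [0.0]
    for x in xs:
        s.append(s[-1] + x)
    return max(abs(s[j] - s[i] - (j - i) / n * s[n])
               for i in range(1, n) for j in range(i + 1, n + 1))

random.seed(2)
xs = [random.gauss(0.0, 1.0) for _ in range(200)]
print(abs(r_via_range(xs) - r_via_pairs(xs)))  # agreement up to rounding
```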

Now, we turn to the proof of Proposition 3.1.

Proof of Proposition 3.1 Applying Lemma 3.3, one easily has

$$\begin{aligned}
P\bigl(R(n)\ge xD_n\bigr)&\le P\Bigl(R(n)\ge xD_n,\ \Delta_n\le\frac{D_n}{(\log\log n)^{2}}\Bigr)+P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr)\\
&\le P\Bigl(R'(n)\ge xD_n-\frac{D_n}{(\log\log n)^{2}}\Bigr)+P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr)\\
&\le P\Bigl(Y\ge x-\frac{1}{(\log\log n)^{2}}-\frac{1}{(\log\log n)^{p}}\Bigr)+q_n+P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr)\\
&\le P\Bigl(Y\ge x-\frac{2}{(\log\log n)^{p}}\Bigr)+q_n+P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr).
\end{aligned}$$

Also, one has

$$\begin{aligned}
P\bigl(R(n)\ge xD_n\bigr)&\ge P\Bigl(R(n)\ge xD_n,\ \Delta_n\le\frac{D_n}{(\log\log n)^{2}}\Bigr)\\
&\ge P\Bigl(R'(n)\ge xD_n+\frac{D_n}{(\log\log n)^{2}}\Bigr)-P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr)\\
&\ge P\Bigl(Y\ge x+\frac{1}{(\log\log n)^{2}}+\frac{1}{(\log\log n)^{p}}\Bigr)-q_n-P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr)\\
&\ge P\Bigl(Y\ge x+\frac{2}{(\log\log n)^{p}}\Bigr)-q_n-P\Bigl(\Delta_n>\frac{D_n}{(\log\log n)^{2}}\Bigr).
\end{aligned}$$

Letting $p_n=q_n+P\bigl(\Delta_n>D_n/(\log\log n)^{2}\bigr)$ completes the proof, by Lemmas 3.3 and 3.7. □

4 Proofs of Theorems 1.1 and 1.2

Proof of Theorem 1.1 For any $0<\delta<\sqrt{a+1}/4$ and $\sqrt{a+1}/2-\delta<\varepsilon<\sqrt{a+1}/2+\delta$, we have

$$\begin{aligned}
&P\bigl(Y\ge(\varepsilon+a_n''(\varepsilon))\sqrt{2\log\log n}\bigr)-p_n-P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad=P\bigl(Y\ge(\varepsilon+a_n'(\varepsilon))\sqrt{2\log\log n}+2/(\log\log n)^{p}\bigr)-p_n-P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\bigl(R(n)\ge(\varepsilon+a_n'(\varepsilon))\sqrt{2\log\log n}\,D_n\bigr)-P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\Bigl(R(n)\ge(\varepsilon+a_n(\varepsilon))\sqrt{2(1+\delta(n))n\,l(\eta_n)\log\log n}\Bigr)-P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\bigl(Q(n)\ge(\varepsilon+a_n(\varepsilon))\sqrt{2n\log\log n}\bigr)\\
&\quad\le P\Bigl(R(n)\ge(\varepsilon+a_n(\varepsilon))\sqrt{2(1-\delta(n))n\,l(\eta_n)\log\log n}\Bigr)+P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\bigl(R(n)\ge(\varepsilon+a_n'''(\varepsilon))\sqrt{2\log\log n}\,D_n\bigr)+P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\bigl(Y\ge(\varepsilon+a_n'''(\varepsilon))\sqrt{2\log\log n}-2/(\log\log n)^{p}\bigr)+p_n+P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad=P\bigl(Y\ge(\varepsilon+a_n''''(\varepsilon))\sqrt{2\log\log n}\bigr)+p_n+P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr),
\end{aligned}$$
(4.1)

where

$$\begin{cases}
a_n'(\varepsilon)=\dfrac{\sqrt{n\,l(\eta_n)}}{D_n}\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{1+\delta(n)}-\varepsilon,\\[2mm]
a_n''(\varepsilon)=\dfrac{\sqrt{n\,l(\eta_n)}}{D_n}\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{1+\delta(n)}-\varepsilon+\dfrac{\sqrt{2}}{(\log\log n)^{p+1/2}},\\[2mm]
a_n'''(\varepsilon)=\dfrac{\sqrt{n\,l(\eta_n)}}{D_n}\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{1-\delta(n)}-\varepsilon,\\[2mm]
a_n''''(\varepsilon)=\dfrac{\sqrt{n\,l(\eta_n)}}{D_n}\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{1-\delta(n)}-\varepsilon-\dfrac{\sqrt{2}}{(\log\log n)^{p+1/2}}.
\end{cases}$$

Noting that $D_n^{2}\le n\,l(\eta_n)$ and $D_n^{2}=(1+o(1))n\,l(\eta_n)$, one easily has

$$\Bigl(\frac{\sqrt{n\,l(\eta_n)}}{D_n}\bigl(\varepsilon+a_n(\varepsilon)\bigr)\sqrt{1\pm\delta(n)}-\varepsilon\Bigr)\log\log n=\frac{\sqrt{n\,l(\eta_n)}}{D_n}\sqrt{1\pm\delta(n)}\,a_n(\varepsilon)\log\log n+\Bigl(\frac{\sqrt{n\,l(\eta_n)}}{D_n}\sqrt{1\pm\delta(n)}-1\Bigr)\varepsilon\log\log n$$
(4.2)

and, for large $n$,

$$\Bigl|\Bigl(\frac{\sqrt{n\,l(\eta_n)}}{D_n}\sqrt{1\pm\delta(n)}-1\Bigr)\varepsilon\log\log n\Bigr|\le\bigl|\bigl(\sqrt{1\pm2\delta(n)}-1\bigr)\varepsilon\log\log n\bigr|\le 2\varepsilon\delta(n)\log\log n=2\varepsilon/\log\log\log n,$$
(4.3)

which tends to zero as $n\to\infty$ and $\varepsilon\searrow\sqrt{a+1}/2$. Hence, we have

$$a_n''(\varepsilon)\log\log n\to\tau\quad\text{and}\quad a_n''''(\varepsilon)\log\log n\to\tau\quad\text{as }n\to\infty\text{ and }\varepsilon\searrow\sqrt{a+1}/2,$$

since $p>1/2$ and $a_n(\varepsilon)$ satisfies (1.8). Now, it follows from Proposition 2.1, (3.4) and Lemma 3.6 that Theorem 1.1 is true. □

Proof of Theorem 1.2 For any $0<\gamma<1$, applying arguments similar to those used in (4.1), we have, for large $n$,

$$\begin{aligned}
&P\bigl(Y\ge\varepsilon'\sqrt{2(1+\gamma)\log\log n}\bigr)-p_n-P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad=P\bigl(Y\ge\varepsilon\sqrt{2(1+\gamma)\log\log n}+2/(\log\log n)^{p}\bigr)-p_n-P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\bigl(R(n)\ge\varepsilon\sqrt{2(1+\gamma)\log\log n}\,D_n\bigr)-P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\Bigl(R(n)\ge\varepsilon\sqrt{2(1+\delta(n))n\,l(\eta_n)\log\log n}\Bigr)-P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\bigl(Q(n)\ge\varepsilon\sqrt{2n\log\log n}\bigr)\\
&\quad\le P\Bigl(R(n)\ge\varepsilon\sqrt{2(1-\delta(n))n\,l(\eta_n)\log\log n}\Bigr)+P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\bigl(R(n)\ge\varepsilon\sqrt{2(1-\gamma)\log\log n}\,D_n\bigr)+P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad\le P\bigl(Y\ge\varepsilon\sqrt{2(1-\gamma)\log\log n}-2/(\log\log n)^{p}\bigr)+p_n+P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr)\\
&\quad=P\bigl(Y\ge\varepsilon''\sqrt{2(1-\gamma)\log\log n}\bigr)+p_n+P\bigl(|S^{2}(n)-l(\eta_n)|>\delta(n)l(\eta_n)\bigr),
\end{aligned}$$

where

$$\varepsilon'=\varepsilon+\frac{\sqrt{2}}{\sqrt{1+\gamma}\,(\log\log n)^{p+1/2}}\to\varepsilon,\qquad\varepsilon''=\varepsilon-\frac{\sqrt{2}}{\sqrt{1-\gamma}\,(\log\log n)^{p+1/2}}\to\varepsilon$$

as $n\to\infty$. Hence, Proposition 2.2, (3.4) and Lemma 3.6 guarantee that

$$\begin{aligned}
\frac{EY^{2(b+1)}}{(1+\gamma)^{b+1}2^{b+1}(b+1)}&\le\liminf_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}P\bigl(Q(n)\ge\varepsilon\sqrt{2n\log\log n}\bigr)\\
&\le\limsup_{\varepsilon\searrow 0}\varepsilon^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}P\bigl(Q(n)\ge\varepsilon\sqrt{2n\log\log n}\bigr)\le\frac{EY^{2(b+1)}}{(1-\gamma)^{b+1}2^{b+1}(b+1)}.
\end{aligned}$$

Letting $\gamma\searrow 0$ completes the proof. □

5 Proof of Theorem 1.3

In this section, we first modify the definition (3.1) as follows:

$$\tilde\eta_n=\inf\Bigl\{s: s\ge c+1,\ \frac{l(s)}{s^{2}}\le\frac{\log\log n}{n}\Bigr\}.$$
(5.1)

Then one easily has $n\,l(\tilde\eta_n)\le\tilde\eta_n^{2}\log\log n$. Moreover, we define, for each $n$ and $1\le i\le n$,

$$\begin{cases}\tilde X_{ni}'=X_iI\{|X_i|\le\tilde\eta_n\},\qquad \tilde X_{ni}^{*}=\tilde X_{ni}'-E\tilde X_{ni}',\\[1mm]\tilde S_{ni}'=\sum_{j=1}^{i}\tilde X_{nj}',\qquad \tilde S_{ni}^{*}=\sum_{j=1}^{i}\tilde X_{nj}^{*},\qquad D_n^{2}=\operatorname{Var}(\tilde S_{nn}').\end{cases}$$

Secondly, we introduce two versions of the truncated $R(n)$ statistic. That is,

$$\tilde R(n):=\max_{1\le k\le n}\Bigl\{\sum_{j=1}^{k}\Bigl(\tilde X_{nj}'-\frac{1}{n}\sum_{j=1}^{n}\tilde X_{nj}'\Bigr)\Bigr\}-\min_{1\le k\le n}\Bigl\{\sum_{j=1}^{k}\Bigl(\tilde X_{nj}'-\frac{1}{n}\sum_{j=1}^{n}\tilde X_{nj}'\Bigr)\Bigr\}$$

and

$$\tilde R'(n):=\max_{1\le k\le n}\Bigl\{\sum_{j=1}^{k}\Bigl(\tilde X_{nj}^{*}-\frac{1}{n}\sum_{j=1}^{n}\tilde X_{nj}^{*}\Bigr)\Bigr\}-\min_{1\le k\le n}\Bigl\{\sum_{j=1}^{k}\Bigl(\tilde X_{nj}^{*}-\frac{1}{n}\sum_{j=1}^{n}\tilde X_{nj}^{*}\Bigr)\Bigr\}.$$

We then give two lemmas which play key roles in the proof of Theorem 1.3, after which we finish the proof of Theorem 1.3.

Lemma 5.1 Suppose $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables in the domain of attraction of the normal law with $EX=0$, and $l(x)$ satisfies $l(x)\le c_1\exp(c_2(\log x)^{\beta})$ for some $c_1>0$, $c_2>0$ and $0\le\beta<1$. Then, for any $b\in\mathbb{R}$ and $1/2<p<2$, there exists a sequence of positive numbers $\{q_n, n\ge 1\}$ such that, for any $x>0$,

$$P\bigl(Y\ge x+1/(\log\log n)^{p}\bigr)-q_n\le P\bigl(\tilde R'(n)\ge xD_n\bigr)\le P\bigl(Y\ge x-1/(\log\log n)^{p}\bigr)+q_n,$$

where $q_n\to 0$ satisfies

$$\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}q_n<\infty.$$

Proof The essential difference between this lemma and Lemma 3.3 is that different truncation levels are imposed on the random variables $\{X_n, n\ge 1\}$ in the two lemmas. However, by checking the proof of Lemma 3.3 carefully, one can find that it is not sensitive to the powers of $\log\log n$. Hence, one can easily finish the proof by arguments similar to those used in Lemma 3.3. We omit the details here. □

Lemma 5.2 Suppose $\{X, X_n; n\ge 1\}$ is a sequence of i.i.d. random variables in the domain of attraction of the normal law with $EX=0$, and let $f(\cdot)$ be a real function such that $\sup_{x\in\mathbb{R}}|f(x)|\le C$ and $\sup_{x\in\mathbb{R}}|f'(x)|\le C$. Then for any $b\in\mathbb{R}$, $0<\varepsilon<1/4$ and $l>m\ge 1$, we have

$$\begin{cases}
\operatorname{Var}\Bigl(\sum\limits_{n=m}^{l}\frac{(\log\log n)^{b}}{n\log n}f\Bigl(\frac{\tilde R'(n)}{\rho(n,\varepsilon)}\Bigr)\Bigr)=O\Bigl(\frac{(\log\log m)^{2b-1/2}}{\varepsilon\log m}\Bigr),\\[3mm]
\operatorname{Var}\Bigl(\sum\limits_{n=m}^{l}\frac{(\log\log n)^{b}}{n\log n}f\Bigl(\frac{\sum_{i=1}^{n}(\tilde X_{ni}')^{2}}{(1\pm\gamma/2)n\,l(\tilde\eta_n)}\Bigr)\Bigr)=O\Bigl(\frac{(\log\log m)^{2b}}{\log m}\Bigr),\\[3mm]
\operatorname{Var}\Bigl(\sum\limits_{n=m}^{l}\frac{(\log\log n)^{b}}{n\log n}f\Bigl(\frac{\tilde S_{nn}^{*}}{n\sqrt{\gamma l(\tilde\eta_n)/2}}\Bigr)\Bigr)=O\Bigl(\frac{(\log\log m)^{2b}}{\sqrt{m}\,(\log m)^{2}}\Bigr),\\[3mm]
\operatorname{Var}\Bigl(\sum\limits_{n=m}^{l}\frac{(\log\log n)^{b}}{n\log n}\sum\limits_{i=1}^{n}I\{|X_i|>\tilde\eta_n\}\Bigr)=O\Bigl(\frac{(\log\log m)^{2b+1}}{\log m}\Bigr),
\end{cases}$$
(5.2)

where $\rho(n,\varepsilon)=\varepsilon\sqrt{2n\,l(\tilde\eta_n)\log\log n}$.

Proof Firstly, we consider the first part of (5.2). For $j>i$, note that $\tilde R'(i)$ is independent of

$$\tilde R'(i+1,j):=\max_{i<k\le j}\Bigl\{\sum_{l=i+1}^{k}\Bigl(\tilde X_{jl}^{*}-\frac{1}{j}\sum_{l=i+1}^{j}\tilde X_{jl}^{*}\Bigr)\Bigr\}-\min_{i<k\le j}\Bigl\{\sum_{l=i+1}^{k}\Bigl(\tilde X_{jl}^{*}-\frac{1}{j}\sum_{l=i+1}^{j}\tilde X_{jl}^{*}\Bigr)\Bigr\}.$$

It follows that

$$\begin{aligned}
\operatorname{Cov}\Bigl(f\Bigl(\frac{\tilde R'(i)}{\rho(i,\varepsilon)}\Bigr),f\Bigl(\frac{\tilde R'(j)}{\rho(j,\varepsilon)}\Bigr)\Bigr)&=\operatorname{Cov}\Bigl(f\Bigl(\frac{\tilde R'(i)}{\rho(i,\varepsilon)}\Bigr),f\Bigl(\frac{\tilde R'(j)}{\rho(j,\varepsilon)}\Bigr)-f\Bigl(\frac{\tilde R'(i+1,j)}{\rho(j,\varepsilon)}\Bigr)\Bigr)\\
&\le CE\Bigl|f\Bigl(\frac{\tilde R'(j)}{\rho(j,\varepsilon)}\Bigr)-f\Bigl(\frac{\tilde R'(i+1,j)}{\rho(j,\varepsilon)}\Bigr)\Bigr|\le\frac{CE\bigl|\sum_{l=1}^{i}\tilde X_{jl}^{*}\bigr|}{\varepsilon\sqrt{2j\,l(\tilde\eta_j)\log\log j}}\\
&\le\frac{C\sqrt{i\,l(\tilde\eta_j)}}{\varepsilon\sqrt{2j\,l(\tilde\eta_j)\log\log j}}=O\Bigl(\frac{1}{\varepsilon}\sqrt{\frac{i}{j\log\log j}}\Bigr).
\end{aligned}$$
(5.3)

Hence, for any $0<\varepsilon<1/4$ and $l>m\ge 1$, we have

$$\begin{aligned}
\operatorname{Var}\Bigl(\sum_{n=m}^{l}\frac{(\log\log n)^{b}}{n\log n}f\Bigl(\frac{\tilde R'(n)}{\rho(n,\varepsilon)}\Bigr)\Bigr)&\le C\sum_{n=m}^{l}\frac{(\log\log n)^{2b}}{n^{2}(\log n)^{2}}+2\sum_{j=m+1}^{l}\sum_{i=m}^{j-1}\frac{(\log\log i)^{b}}{i\log i}\cdot\frac{(\log\log j)^{b}}{j\log j}\,O\Bigl(\frac{1}{\varepsilon}\sqrt{\frac{i}{j\log\log j}}\Bigr)\\
&\le\frac{C(\log\log m)^{2b}}{m(\log m)^{2}}+O\Bigl(\frac{1}{\varepsilon}\Bigr)\sum_{j=m+1}^{l}\frac{(\log\log j)^{2b-1/2}}{j(\log j)^{2}}=O\Bigl(\frac{(\log\log m)^{2b-1/2}}{\varepsilon\log m}\Bigr).
\end{aligned}$$

Consider the second part of (5.2). Arguments similar to those used in (5.3) lead easily to

$$\operatorname{Cov}\Bigl(f\Bigl(\frac{\sum_{k=1}^{i}(\tilde X_{ik}')^{2}}{(1\pm\gamma/2)i\,l(\tilde\eta_i)}\Bigr),f\Bigl(\frac{\sum_{k=1}^{j}(\tilde X_{jk}')^{2}}{(1\pm\gamma/2)j\,l(\tilde\eta_j)}\Bigr)\Bigr)=\operatorname{Cov}\Bigl(f\Bigl(\frac{\sum_{k=1}^{i}(\tilde X_{ik}')^{2}}{(1\pm\gamma/2)i\,l(\tilde\eta_i)}\Bigr),f\Bigl(\frac{\sum_{k=1}^{j}(\tilde X_{jk}')^{2}}{(1\pm\gamma/2)j\,l(\tilde\eta_j)}\Bigr)-f\Bigl(\frac{\sum_{k=i+1}^{j}(\tilde X_{jk}')^{2}}{(1\pm\gamma/2)j\,l(\tilde\eta_j)}\Bigr)\Bigr)\le\frac{Ci}{j}.$$

It follows that

$$\operatorname{Var}\Bigl(\sum_{n=m}^{l}\frac{(\log\log n)^{b}}{n\log n}f\Bigl(\frac{\sum_{i=1}^{n}(\tilde X_{ni}')^{2}}{(1\pm\gamma/2)n\,l(\tilde\eta_n)}\Bigr)\Bigr)\le\frac{C(\log\log m)^{2b}}{m(\log m)^{2}}+2\sum_{j=m+1}^{l}\sum_{i=m}^{j-1}\frac{(\log\log i)^{b}}{i\log i}\cdot\frac{(\log\log j)^{b}}{j\log j}\cdot\frac{Ci}{j}=O\Bigl(\frac{(\log\log m)^{2b}}{\log m}\Bigr).$$

Consider the third part of (5.2). Similar arguments to those used in (5.3) also lead easily to

$$\operatorname{Cov}\Bigl(f\Bigl(\frac{\tilde S_{ii}^{*}}{i\sqrt{\gamma l(\tilde\eta_i)/2}}\Bigr),f\Bigl(\frac{\tilde S_{jj}^{*}}{j\sqrt{\gamma l(\tilde\eta_j)/2}}\Bigr)\Bigr)=O\Bigl(\frac{\sqrt{i}}{j}\Bigr),$$

which implies that

$$\operatorname{Var}\Bigl(\sum_{n=m}^{l}\frac{(\log\log n)^{b}}{n\log n}f\Bigl(\frac{\tilde S_{nn}^{*}}{n\sqrt{\gamma l(\tilde\eta_n)/2}}\Bigr)\Bigr)\le\frac{C(\log\log m)^{2b}}{m(\log m)^{2}}+2\sum_{j=m+1}^{l}\sum_{i=m}^{j-1}\frac{(\log\log i)^{b}}{i\log i}\cdot\frac{(\log\log j)^{b}}{j\log j}\,O\Bigl(\frac{\sqrt{i}}{j}\Bigr)=O\Bigl(\frac{(\log\log m)^{2b}}{\sqrt{m}\,(\log m)^{2}}\Bigr).$$

Finally, we handle the fourth part of (5.2). By employing Lemma 3.5, one has

$$\begin{aligned}
&\operatorname{Var}\Bigl(\sum_{n=m}^{l}\frac{(\log\log n)^{b}}{n\log n}\sum_{i=1}^{n}I\{|X_i|>\tilde\eta_n\}\Bigr)\\
&\quad\le C\sum_{n=m}^{l}\frac{(\log\log n)^{2b}}{n^{2}(\log n)^{2}}\,nP(|X|>\tilde\eta_n)+2\sum_{j=m+1}^{l}\sum_{i=m}^{j-1}\frac{(\log\log i)^{b}}{i\log i}\cdot\frac{(\log\log j)^{b}}{j\log j}\operatorname{Cov}\Bigl(\sum_{k=1}^{i}I\{|X_k|>\tilde\eta_i\},\sum_{k=1}^{j}I\{|X_k|>\tilde\eta_j\}\Bigr)\\
&\quad\le o(1)\sum_{n=m}^{l}\frac{(\log\log n)^{2b+1}}{n^{2}(\log n)^{2}}+2\sum_{j=m+1}^{l}\sum_{i=m}^{j-1}\frac{(\log\log i)^{b}}{i\log i}\cdot\frac{(\log\log j)^{b}}{j\log j}\,iP(|X|>\tilde\eta_j)\\
&\quad=o(1)\frac{(\log\log m)^{2b+1}}{m(\log m)^{2}}+C\sum_{j=m+1}^{l}\frac{(\log\log j)^{2b+1}}{j(\log j)^{2}}=O\Bigl(\frac{(\log\log m)^{2b+1}}{\log m}\Bigr).
\end{aligned}$$

The proof is completed. □

Proof of Theorem 1.3 At the beginning of the proof, we first give an upper and a lower bound for the indicator function of the R/S statistic. For any $x\ge\lambda\sqrt{n\log\log n}$ with $\lambda>0$, $0<\gamma<1/2$ and large $n$, one has the following fact:

$$\begin{aligned}
I\Bigl\{\frac{R(n)}{S(n)}\ge x\Bigr\}&\le I\bigl\{R(n)\ge\sqrt{(1-\gamma)l(\tilde\eta_n)}\,x\bigr\}+I\bigl\{|S^{2}(n)-l(\tilde\eta_n)|>\gamma l(\tilde\eta_n)\bigr\}\\
&\le I\bigl\{\tilde R(n)\ge\sqrt{(1-\gamma)l(\tilde\eta_n)}\,x\bigr\}+I\Bigl\{\bigcup_{i=1}^{n}\{|X_i|>\tilde\eta_n\}\Bigr\}+I\Bigl\{\sum_{i=1}^{n}X_i^{2}>(1+\gamma/2)n\,l(\tilde\eta_n)\Bigr\}\\
&\qquad+I\Bigl\{\sum_{i=1}^{n}X_i^{2}<(1-\gamma/2)n\,l(\tilde\eta_n)\Bigr\}+I\bigl\{|S_n|>n\sqrt{\gamma l(\tilde\eta_n)/2}\bigr\}\\
&\le I\bigl\{\tilde R'(n)\ge\sqrt{(1-\gamma)l(\tilde\eta_n)}\,x+o\bigl(\sqrt{n\,l(\tilde\eta_n)\log\log n}\bigr)\bigr\}+3I\Bigl\{\bigcup_{i=1}^{n}\{|X_i|>\tilde\eta_n\}\Bigr\}+I\Bigl\{\sum_{i=1}^{n}(\tilde X_{ni}')^{2}>(1+\gamma/2)n\,l(\tilde\eta_n)\Bigr\}\\
&\qquad+I\Bigl\{\sum_{i=1}^{n}(\tilde X_{ni}')^{2}<(1-\gamma/2)n\,l(\tilde\eta_n)\Bigr\}+I\bigl\{|\tilde S_{nn}^{*}|>n\sqrt{\gamma l(\tilde\eta_n)/2}\bigr\}\\
&\le I\bigl\{\tilde R'(n)\ge(1-2\gamma)\sqrt{l(\tilde\eta_n)}\,x\bigr\}+3I\Bigl\{\bigcup_{i=1}^{n}\{|X_i|>\tilde\eta_n\}\Bigr\}+I\Bigl\{\sum_{i=1}^{n}(\tilde X_{ni}')^{2}>(1+\gamma/2)n\,l(\tilde\eta_n)\Bigr\}\\
&\qquad+I\Bigl\{\sum_{i=1}^{n}(\tilde X_{ni}')^{2}<(1-\gamma/2)n\,l(\tilde\eta_n)\Bigr\}+I\bigl\{|\tilde S_{nn}^{*}|>n\sqrt{\gamma l(\tilde\eta_n)/2}\bigr\},
\end{aligned}$$
(5.4)

since one easily has

$$|E\tilde R(n)|=o\big(\sqrt{nl(\tilde\eta_n)\log\log n}\big).$$

Also, one has, for any $x\ge\lambda\sqrt{n\log\log n}$ with $\lambda>0$, $0<\gamma<1/2$ and large $n$,

$$\begin{aligned}
I\Big\{\frac{R(n)}{S(n)}\ge x\Big\}
&\ge I\big\{R(n)\ge\sqrt{(1+\gamma)l(\tilde\eta_n)}\,x\big\}-I\big\{|S^{2}(n)-l(\tilde\eta_n)|>\gamma l(\tilde\eta_n)\big\}\\
&\ge I\big\{\tilde R(n)\ge\sqrt{(1+2\gamma)l(\tilde\eta_n)}\,x\big\}-3I\Big\{\bigcup_{i=1}^{n}\{|X_i|>\tilde\eta_n\}\Big\}-I\Big\{\sum_{i=1}^{n}\tilde X_{ni}^{2}>(1+\gamma/2)nl(\tilde\eta_n)\Big\}\\
&\quad-I\Big\{\sum_{i=1}^{n}\tilde X_{ni}^{2}<(1-\gamma/2)nl(\tilde\eta_n)\Big\}-I\big\{|\tilde S_{nn}|>n\sqrt{\gamma l(\tilde\eta_n)/2}\big\}.
\end{aligned}$$
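As a computational aside (ours, not part of the proof), the statistic $Q(n)=R(n)/S(n)$ being bounded here can be evaluated in one pass over the centered partial sums, directly from the definitions in the introduction; a minimal Python sketch (the function name `rs_statistic` is ours):

```python
import math

def rs_statistic(x):
    """Rescaled range statistic Q(n) = R(n)/S(n).

    R(n) is the range of the centered partial sums
    sum_{j<=k} (X_j - mean), k = 1..n, and S(n) is the sample
    standard deviation with divisor n, as defined in the paper.
    """
    n = len(x)
    mean = sum(x) / n
    partial = 0.0
    hi = lo = 0.0  # the n-th centered partial sum is 0, so 0 is always attained
    for xi in x:
        partial += xi - mean
        hi = max(hi, partial)
        lo = min(lo, partial)
    s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / n)
    return (hi - lo) / s

# Example: for x = [1, 2, 3, 4] the centered partial sums are
# -1.5, -2.0, -1.5, 0.0, so R(4) = 2.0 and S(4) = sqrt(1.25).
print(rs_statistic([1, 2, 3, 4]))  # 2.0 / sqrt(1.25) = 4 / sqrt(5)
```

Note that $Q(n)$ is invariant under shifts and positive rescalings of the sample; this self-normalization is what allows the possibly infinite variance permitted by the theorems.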

Denote $K(\varepsilon)=\exp\{\exp(1/(\varepsilon^{2}M))\}$ for any $0<\varepsilon<1/4$ and fixed $M>0$. Let $\{f_i(\cdot),i=1,\dots,5\}$ be real functions such that $\sup_x|f_i(x)|<\infty$ for $i=1,\dots,5$ and

$$\left\{\begin{array}{l}
I\{|x|\ge\sqrt{1-2\gamma}\}\le f_{1}(x)\le I\{|x|\ge 1-2\gamma\},\\[2pt]
I\{|x|\ge 1+\gamma/2\}\le f_{2}(x)\le I\{|x|\ge 1+\gamma/4\},\\[2pt]
I\{|x|\le 1-\gamma/2\}\le f_{3}(x)\le I\{|x|\le 1-\gamma/4\},\\[2pt]
I\{|x|>1\}\le f_{4}(x)\le I\{|x|>1/2\},\\[2pt]
I\{|x|\ge\gamma\}\le f_{5}(x)\le I\{|x|\ge\gamma/2\}.
\end{array}\right.$$
(5.5)

Define $\varepsilon_k=1/k$ for $k\ge M$. Then it follows from Lemma 5.2 that

$$\operatorname{Var}\bigg(\sum_{n>B(\varepsilon_k)}\frac{(\log\log n)^{b}}{n\log n}\,f\Big(\frac{\tilde R(n)}{\rho(n,\varepsilon_k)}\Big)\bigg)=O\Big(k\big(k^{2}/M\big)^{2b-1/2}\exp\big(-k^{2}/M\big)\Big),$$

which, together with Chebyshev's inequality and the Borel-Cantelli lemma, easily yields

$$\sum_{n>B(\varepsilon_k)}\frac{(\log\log n)^{b}}{n\log n}\bigg(f\Big(\frac{\tilde R(n)}{\rho(n,\varepsilon_k)}\Big)-Ef\Big(\frac{\tilde R(n)}{\rho(n,\varepsilon_k)}\Big)\bigg)\to0\quad\text{a.s.}$$
(5.6)

as $k\to\infty$. Similar arguments also yield

$$\left\{\begin{aligned}
&\sum_{n>B(\varepsilon_k)}\frac{(\log\log n)^{b}}{n\log n}\bigg(f\Big(\frac{\sum_{i=1}^{n}\tilde X_{ni}^{2}}{(1\pm\gamma/2)nl(\tilde\eta_n)}\Big)-Ef\Big(\frac{\sum_{i=1}^{n}\tilde X_{ni}^{2}}{(1\pm\gamma/2)nl(\tilde\eta_n)}\Big)\bigg)\to0\quad\text{a.s.},\\
&\sum_{n>B(\varepsilon_k)}\frac{(\log\log n)^{b}}{n\log n}\bigg(f\Big(\frac{\tilde S_{nn}}{n\sqrt{\gamma l(\tilde\eta_n)/2}}\Big)-Ef\Big(\frac{\tilde S_{nn}}{n\sqrt{\gamma l(\tilde\eta_n)/2}}\Big)\bigg)\to0\quad\text{a.s.},\\
&\sum_{n>B(\varepsilon_k)}\frac{(\log\log n)^{b}}{n\log n}\bigg(\sum_{i=1}^{n}I\{|X_i|>\tilde\eta_n\}-nP(|X|>\tilde\eta_n)\bigg)\to0\quad\text{a.s.}
\end{aligned}\right.$$
(5.7)

as $k\to\infty$. Denote $\beta(n,\varepsilon)=\varepsilon\sqrt{2n\log\log n}$. Using inequality (5.4), one has

$$\begin{aligned}
&\limsup_{\varepsilon\to0}\,\varepsilon^{2(b+1)}\sum_{n>K(\varepsilon)}\frac{(\log\log n)^{b}}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\\
&\quad\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n>B(\varepsilon_{k-1})}\frac{(\log\log n)^{b}}{n\log n}I\big\{Q(n)\ge\beta(n,\varepsilon_k)\big\}\\
&\quad\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n>B(\varepsilon_{k-1})}\frac{(\log\log n)^{b}}{n\log n}\Big(I\big\{\tilde R(n)\ge\sqrt{(1-2\gamma)l(\tilde\eta_n)}\,\beta(n,\varepsilon_k)\big\}\\
&\qquad+I\Big\{\sum_{i=1}^{n}\tilde X_{ni}^{2}>(1+\gamma/2)nl(\tilde\eta_n)\Big\}+I\Big\{\sum_{i=1}^{n}\tilde X_{ni}^{2}<(1-\gamma/2)nl(\tilde\eta_n)\Big\}\\
&\qquad+3I\Big\{\bigcup_{i=1}^{n}\{|X_i|>\tilde\eta_n\}\Big\}+I\big\{|\tilde S_{nn}|>n\sqrt{\gamma l(\tilde\eta_n)/2}\big\}\Big)\\
&\quad:=III+IV+V+VI+VII.
\end{aligned}$$
(5.8)

We now treat these terms in turn. In view of (5.5), (5.6), Lemma 5.1 and Proposition 2.2, one has

$$\begin{aligned}
III&\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n>B(\varepsilon_{k-1})}\frac{(\log\log n)^{b}}{n\log n}\,f_{1}\Big(\frac{\tilde R(n)}{\sqrt{l(\tilde\eta_n)}\,\beta(n,\varepsilon_k)}\Big)\\
&\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n>B(\varepsilon_{k-1})}\frac{(\log\log n)^{b}}{n\log n}\,Ef_{1}\Big(\frac{\tilde R(n)}{\sqrt{l(\tilde\eta_n)}\,\beta(n,\varepsilon_k)}\Big)\\
&\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n>B(\varepsilon_{k-1})}\frac{(\log\log n)^{b}}{n\log n}\,P\Big(\frac{\tilde R(n)}{\sqrt{l(\tilde\eta_n)}\,\beta(n,\varepsilon_k)}\ge1-2\gamma\Big)\\
&\le\limsup_{k\to\infty}\,\varepsilon_{k}^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}\,P\Big(Y\ge\varepsilon_k(1-2\gamma)\sqrt{2\log\log n}-1/(\log\log n)^{p}\Big)\\
&\le\frac{EY^{2(b+1)}}{2^{b+1}(b+1)(1-2\gamma)^{2(b+1)}},\quad\text{a.s.}
\end{aligned}$$
(5.9)

since

$$\varepsilon_{k-1}\sqrt{2}\,(1-2\gamma)(\log\log n)^{p+1/2}\to\infty\quad\text{as }n\to\infty,$$

so that the correction term $1/(\log\log n)^{p}$ is negligible relative to the threshold $\varepsilon_k(1-2\gamma)\sqrt{2\log\log n}$.

Applying (5.5), (5.7) and Bernstein's inequality, one has, for any $\nu>1$,

$$\begin{aligned}
0\le IV&\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n>B(\varepsilon_{k-1})}\frac{(\log\log n)^{b}}{n\log n}\,P\Big(\sum_{i=1}^{n}\tilde X_{ni}^{2}>(1+\gamma/4)nl(\tilde\eta_n)\Big)\\
&\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}\,P\Big(\sum_{i=1}^{n}\tilde X_{ni}^{2}>(1+\gamma/4)nl(\tilde\eta_n)\Big)\\
&\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n=1}^{\infty}\frac{C}{n\log n(\log\log n)^{1+\nu}}=0,\quad\text{a.s.}
\end{aligned}$$
(5.10)
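Here Bernstein's inequality refers to the classical exponential bound for sums of bounded independent variables, which we recall in generic form for the reader's convenience (the application is to the centered truncated squares $\tilde X_{ni}^{2}-E\tilde X_{ni}^{2}$): if $\xi_1,\dots,\xi_n$ are independent with $E\xi_i=0$ and $|\xi_i|\le B$, then for all $t>0$,

$$P\Big(\sum_{i=1}^{n}\xi_i>t\Big)\le\exp\bigg(-\frac{t^{2}}{2\big(\sum_{i=1}^{n}E\xi_i^{2}+Bt/3\big)}\bigg).$$

Applied with $t$ of order $\gamma nl(\tilde\eta_n)$ and the paper's truncation level $\tilde\eta_n$, this produces tail probabilities smaller than any fixed power of $(\log\log n)^{-1}$, which is what the last step of (5.10) uses.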

Similarly, one can prove

$$V=0,\quad\text{a.s.}$$
(5.11)

For the fourth part of (5.8), by arguments similar to those used in (5.9) together with Lemma 3.4, we have

$$0\le VI\le3\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{\log n}\,P\big(|X|>\tilde\eta_n/2\big)=0,\quad\text{a.s.},$$

and the details are omitted here. As for the fifth part of (5.8), one can easily show that, for any fixed γ>0,

$$\begin{aligned}
0\le VII&\le\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}\,P\Big(|\tilde S_{nn}|>n\sqrt{\gamma l(\tilde\eta_n)/4}\Big)\\
&\le C\limsup_{k\to\infty}\,\varepsilon_{k-1}^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}\,\frac{nl(\tilde\eta_n)}{n^{2}l(\tilde\eta_n)}=0,\quad\text{a.s.}
\end{aligned}$$
(5.12)

Hence, it follows from (5.8)-(5.12) that

$$\limsup_{\varepsilon\to0}\,\varepsilon^{2(b+1)}\sum_{n>K(\varepsilon)}\frac{(\log\log n)^{b}}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\le\frac{EY^{2(b+1)}}{2^{b+1}(b+1)(1-2\gamma)^{2(b+1)}},\quad\text{a.s.}$$
(5.13)

On the other hand,

$$\limsup_{\varepsilon\to0}\,\varepsilon^{2(b+1)}\sum_{n\le K(\varepsilon)}\frac{(\log\log n)^{b}}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\le\limsup_{\varepsilon\to0}\,\varepsilon^{2(b+1)}\sum_{n\le K(\varepsilon)}\frac{(\log\log n)^{b}}{n\log n}\le\frac{1}{M^{b+1}}.$$
(5.14)
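The last inequality in (5.14) is a standard integral comparison; a short sketch of ours, using $\log\log K(\varepsilon)=1/(\varepsilon^{2}M)$:

$$\sum_{n\le K(\varepsilon)}\frac{(\log\log n)^{b}}{n\log n}\sim\int^{K(\varepsilon)}\frac{(\log\log x)^{b}}{x\log x}\,dx=\frac{(\log\log K(\varepsilon))^{b+1}}{b+1}=\frac{1}{(b+1)(\varepsilon^{2}M)^{b+1}},$$

so multiplying by $\varepsilon^{2(b+1)}$ gives the bound $1/((b+1)M^{b+1})\le 1/M^{b+1}$.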

By (5.13), (5.14) and the arbitrariness of $M$ and $\gamma$, one has

$$\limsup_{\varepsilon\to0}\,\varepsilon^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\le\frac{EY^{2(b+1)}}{2^{b+1}(b+1)},\quad\text{a.s.}$$

Similarly, one has

$$\liminf_{\varepsilon\to0}\,\varepsilon^{2(b+1)}\sum_{n=1}^{\infty}\frac{(\log\log n)^{b}}{n\log n}I\big\{Q(n)\ge\varepsilon\sqrt{2n\log\log n}\big\}\ge\frac{EY^{2(b+1)}}{2^{b+1}(b+1)},\quad\text{a.s.}$$

The proof is now complete. □

References

  1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33: 25–31. doi:10.1073/pnas.33.2.25

  2. Erdős P: On a theorem of Hsu and Robbins. Ann. Math. Stat. 1949, 20: 286–291. doi:10.1214/aoms/1177730037

  3. Baum LE, Katz M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 1965, 120: 108–123. doi:10.1090/S0002-9947-1965-0198524-1

  4. Heyde CC: A supplement to the strong law of large numbers. J. Appl. Probab. 1975, 12: 173–175. doi:10.2307/3212424

  5. Chen R: A remark on the tail probability of a distribution. J. Multivar. Anal. 1978, 8: 328–333. doi:10.1016/0047-259X(78)90084-2

  6. Spătaru A: Precise asymptotics in Spitzer’s law of large numbers. J. Theor. Probab. 1999, 12: 811–819. doi:10.1023/A:1021636117551

  7. Gut A, Spătaru A: Precise asymptotics in the law of the iterated logarithm. Ann. Probab. 2000, 28: 1870–1883. doi:10.1214/aop/1019160511

  8. arXiv: http://arxiv.org/abs/math.PR/0610519

  9. Zhang LX: Precise asymptotics in Chung’s law of the iterated logarithm. Acta Math. Sin. Engl. Ser. 2008, 24(4): 631–646. doi:10.1007/s10114-007-1033-6

  10. Zhang LX, Lin ZY: Precise rates in the law of the logarithm under minimal conditions. Chinese J. Appl. Probab. Statist. 2006, 22(3): 311–320.

  11. arXiv: http://arxiv.org/abs/math.PR/0610521

  12. Hurst HE: Long-term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116: 770–808.

  13. Feller W: The asymptotic distribution of the range of sums of independent random variables. Ann. Math. Stat. 1951, 22: 427–432. doi:10.1214/aoms/1177729589

  14. Mandelbrot BB: Limit theorems on the self-normalized range for weakly and strongly dependent processes. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1975, 31: 271–285. doi:10.1007/BF00532867

  15. Lin ZY: The law of the iterated logarithm for the rescaled R/S statistics without the second moment. Comput. Math. Appl. 2004, 47(8–9): 1389–1396. doi:10.1016/S0898-1221(04)90131-9

  16. Lin ZY: The law of iterated logarithm for R/S statistics. Acta Math. Sci. 2005, 25(2): 326–330.

  17. Lin ZY: Strong laws of R/S statistics with a long-range memory sample. Stat. Sin. 2005, 15(3): 819–829.

  18. Lin ZY, Lee SC: The law of iterated logarithm of rescaled range statistics for AR(1) model. Acta Math. Sin. Engl. Ser. 2006, 22(2): 535–544. doi:10.1007/s10114-005-0553-1

  19. Wu HM, Wen JW: Precise rates in the law of the iterated logarithm for R/S statistics. Appl. Math. J. Chin. Univ. Ser. B 2006, 21(4): 461–466. doi:10.1007/s11766-006-0010-7

  20. Kennedy DP: The distribution of the maximum Brownian excursion. J. Appl. Probab. 1976, 13(2): 371–376. doi:10.2307/3212843

  21. Pang TX, Zhang LX, Wang JF: Precise asymptotics in the self-normalized law of the iterated logarithm. J. Math. Anal. Appl. 2008, 340(2): 1249–1262. doi:10.1016/j.jmaa.2007.09.054

  22. Sakhanenko AI: On estimates of the rate of convergence in the invariance principle. In Advances in Probab. Theory: Limit Theorems and Related Problems. Edited by Borovkov AA. Springer, New York; 1984: 124–135.

  23. Sakhanenko AI: Convergence rate in the invariance principle for non-identically distributed variables with exponential moments. In Advances in Probab. Theory: Limit Theorems and Related Problems. Edited by Borovkov AA. Springer, New York; 1985: 2–73.

  24. Csörgő M, Révész P: Strong Approximations in Probability and Statistics. Academic Press, New York; 1981.

  25. Csörgő M, Szyszkowicz B, Wang QY: Donsker’s theorem for self-normalized partial sums processes. Ann. Probab. 2003, 31: 1228–1240. doi:10.1214/aop/1055425777

  26. Lin ZY, Bai ZD: Probability Inequalities. Science Press, Beijing; Springer, New York; 2010.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grants No. J1210038, 11171303 and 61272300) and the Fundamental Research Funds for the Central Universities.

Author information


Corresponding author

Correspondence to Kyo-Shin Hwang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Pang, TX., Lin, ZY. & Hwang, KS. Precise asymptotics in the law of the iterated logarithm for R/S statistic. J Inequal Appl 2014, 137 (2014). https://doi.org/10.1186/1029-242X-2014-137
