Open Access

An improved result in almost sure central limit theorem for self-normalized products of partial sums

Journal of Inequalities and Applications 2013, 2013:129

https://doi.org/10.1186/1029-242X-2013-129

Received: 16 November 2012

Accepted: 5 March 2013

Published: 27 March 2013

Abstract

Let $X, X_1, X_2, \ldots$ be a sequence of independent and identically distributed random variables in the domain of attraction of the normal law. A universal result in the almost sure central limit theorem for self-normalized products of partial sums is established.

MSC: 60F15.

Keywords

domain of attraction of the normal law; self-normalized partial sums; almost sure central limit theorem

1 Introduction

Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of independent and identically distributed (i.i.d.) positive random variables with a non-degenerate distribution function and $EX=\mu>0$. For each $n\ge 1$, the symbol $S_n/V_n$ denotes the self-normalized partial sum, where $S_n=\sum_{i=1}^{n}X_i$ and $V_n^2=\sum_{i=1}^{n}(X_i-\mu)^2$. We say that the random variable $X$ belongs to the domain of attraction of the normal law if there exist constants $a_n>0$, $b_n\in\mathbb{R}$ such that
$$\frac{S_n-b_n}{a_n}\xrightarrow{d}N.$$
(1)

Here and in the sequel, $N$ is a standard normal random variable, and $\xrightarrow{d}$ denotes convergence in distribution. In this case, we say that $\{X_n\}_{n\in\mathbb{N}}$ satisfies the central limit theorem (CLT).

It is known that (1) holds if and only if
$$\lim_{x\to\infty}\frac{x^2P(|X|>x)}{EX^2I(|X|\le x)}=0.$$
(2)

In contrast to the well-known classical central limit theorem, Giné et al. [7] obtained the following self-normalized version of the central limit theorem: $(S_n-ES_n)/V_n\xrightarrow{d}N$ as $n\to\infty$ if and only if (2) holds.

The limit theorem for products $\prod_{j=1}^{n}S_j$ was initiated by Arnold and Villaseñor [1]. Their result was generalized by Wu [20], Ye and Wu [22], and Rempala and Wesolowski [16], who proved that if $\{X_n; n\ge 1\}$ is a sequence of i.i.d. positive random variables with finite second moment, $EX_1=\mu$, $\operatorname{Var}X_1=\sigma^2>0$ and coefficient of variation $\gamma=\sigma/\mu$, then
$$\left(\prod_{i=1}^{n}\frac{S_i}{n!\mu^n}\right)^{1/(\gamma\sqrt{n})}\xrightarrow{d}e^{\sqrt{2}N}\quad\text{as }n\to\infty.$$
(3)
Recently, Pang et al. [14] obtained the following self-normalized version for products of sums of i.i.d. sequences: Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. positive random variables with $EX=\mu>0$, and assume that $X$ is in the domain of attraction of the normal law. Then
$$\left(\prod_{i=1}^{n}\frac{S_i}{n!\mu^n}\right)^{\mu/V_n}\xrightarrow{d}e^{\sqrt{2}N}\quad\text{as }n\to\infty.$$
(4)
Brosamler [4] and Schatte [17] obtained the following almost sure central limit theorem (ASCLT): Let $\{X_n\}_{n\in\mathbb{N}}$ be i.i.d. random variables with mean 0, variance $\sigma^2>0$, and partial sums $S_n$. Then
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left\{\frac{S_k}{\sigma\sqrt{k}}<x\right\}=\Phi(x)\quad\text{a.s. for all }x\in\mathbb{R},$$
(5)

with $d_k=1/k$ and $D_n=\sum_{k=1}^{n}d_k$; here and in the sequel, $I$ denotes an indicator function, and $\Phi(x)$ is the standard normal distribution function. Some ASCLT results for partial sums were obtained by Lacey and Philipp [12], Ibragimov and Lifshits [11], Miao [13], Berkes and Csáki [2], Hörmann [9], and Wu [18, 19]. Gonchigdanzan and Rempala [8] gave an ASCLT for products of partial sums. Huang and Pang [10], Wu [21], and Zhang and Yang [23] obtained ASCLT results for the self-normalized version.
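The logarithmic averaging in (5) is easy to watch along a single simulated path. The following sketch (not part of the paper's argument; the standard normal summands, the sample size, and the evaluation point $x=0$ are arbitrary illustrative choices) computes the weighted empirical frequency with the classical weights $d_k=1/k$.

```python
import numpy as np

# One sample path of the ASCLT (5) with logarithmic weights d_k = 1/k.
# Distribution, n, and x = 0 are illustrative choices only.
rng = np.random.default_rng(0)
n = 20000
X = rng.standard_normal(n)      # i.i.d., mean 0, variance sigma^2 = 1
S = np.cumsum(X)                # partial sums S_k
k = np.arange(1, n + 1)
d = 1.0 / k                     # weights d_k = 1/k
D = d.sum()                     # D_n

# Weighted frequency of the event {S_k / (sigma sqrt(k)) < 0}
avg = np.sum(d * (S / np.sqrt(k) < 0.0)) / D
print(avg)  # should be roughly Phi(0) = 0.5; logarithmic averaging converges slowly
```

Even for fairly large $n$ the average fluctuates noticeably around $\Phi(0)=1/2$, which is one concrete way to see why the choice of weights in (5) matters.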

Under mild moment conditions, the ASCLT follows from the ordinary CLT, but in general the validity of the ASCLT is a delicate question of a totally different character than the CLT. The difference between the CLT and the ASCLT lies in the weight sequence used in the ASCLT.

The terminology of summation procedures (see, e.g., Chandrasekharan and Minakshisundaram [5], p. 35) shows that the larger the weight sequence $\{d_k; k\ge 1\}$ in (5) is, the stronger the relation becomes. By this argument, one should expect stronger results from larger weights, and it would be of considerable interest to determine the optimal weights.

On the other hand, by Theorem 1 of Schatte [17], (5) fails for the weight $d_k=1$. The optimal weight sequence remains unknown.

The purpose of this paper is to establish the ASCLT for self-normalized products of partial sums of random variables in the domain of attraction of the normal law. We show that the ASCLT holds under the fairly general weight sequence $d_k=k^{-1}\exp(\ln^{\alpha}k)$, $0\le\alpha<1/2$.

In the following, we assume that $\{X, X_n\}_{n\in\mathbb{N}}$ is a sequence of i.i.d. positive random variables in the domain of attraction of the normal law with $EX=\mu>0$. Let $b_{k,n}=\sum_{j=k}^{n}1/j$, $S_k=\sum_{i=1}^{k}X_i$, $V_k^2=\sum_{i=1}^{k}(X_i-\mu)^2$, and $S_{k,k}=\sum_{i=1}^{k}b_{i,k}(X_i-\mu)$ for $1\le k\le n$. The notation $a_n\sim b_n$ means $\lim_{n\to\infty}a_n/b_n=1$. The symbol $c$ stands for a generic positive constant which may differ from one place to another.

Our theorem is formulated in a general setting.

Theorem 1.1 Let $\{X, X_n\}_{n\in\mathbb{N}}$ be a sequence of i.i.d. positive random variables in the domain of attraction of the normal law with mean $\mu>0$. Suppose $0\le\alpha<1/2$ and set
$$d_k=\frac{\exp(\ln^{\alpha}k)}{k},\qquad D_n=\sum_{k=1}^{n}d_k.$$
(6)
Then
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\left(\prod_{i=1}^{k}\frac{S_i}{k!\mu^k}\right)^{\mu/V_k}\le x\right)=F(x)\quad\text{a.s. for any }x\in\mathbb{R}.$$
(7)

Here and in the sequel, $F$ is the distribution function of the random variable $e^{\sqrt{2}N}$.
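As a purely illustrative sanity check on Theorem 1.1 (our sketch, not part of the paper), one can simulate the weighted empirical frequency in (7) along one path. Here the exponential summands, $\mu=2$, $\alpha=1/4$, the sample size, and the evaluation point $x=1$ (where $F(1)=\Phi(0)=1/2$, since $F(x)=\Phi(\ln x/\sqrt{2})$ for $x>0$) are all arbitrary choices.

```python
import numpy as np

# One-path Monte Carlo sketch of (7): weights d_k = exp(ln^alpha k)/k with
# alpha = 1/4, exponential summands with mean mu = 2 (positive, finite variance),
# evaluated at x = 1, where F(1) = Phi(0) = 1/2.
rng = np.random.default_rng(1)
n = 20000
mu = 2.0
X = rng.exponential(scale=mu, size=n)

k = np.arange(1, n + 1)
S = np.cumsum(X)
V = np.sqrt(np.cumsum((X - mu) ** 2))            # V_k
# log of the statistic in (7): (mu / V_k) * sum_{i<=k} ln(S_i / (i mu))
log_stat = (mu / V) * np.cumsum(np.log(S / (k * mu)))

d = np.exp(np.log(k) ** 0.25) / k                # weights (6) with alpha = 1/4
avg = np.sum(d * (log_stat < np.log(1.0))) / d.sum()
print(avg)  # should drift toward F(1) = 0.5 as n grows, though slowly
```

As with any almost sure logarithmic-type average, convergence is slow, so one should expect only rough agreement with $F(1)=1/2$ at moderate sample sizes.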

By the terminology of summation procedures, we have the following corollary.

Corollary 1.2 Theorem 1.1 remains valid if we replace the weight sequence $\{d_k\}_{k\in\mathbb{N}}$ by any sequence $\{d_k^{*}\}_{k\in\mathbb{N}}$ such that $0\le d_k^{*}\le d_k$ and $\sum_{k=1}^{\infty}d_k^{*}=\infty$.

Remark 1.3 Our result substantially improves the weight sequence allowed in the corresponding theorem of Zhang and Yang [23].

Remark 1.4 If $X$ is in the domain of attraction of the normal law, then $E|X|^p<\infty$ for every $0<p<2$. Conversely, if $EX^2<\infty$, then $X$ is in the domain of attraction of the normal law. Therefore, the class of random variables in Theorem 1.1 is very broad.

Remark 1.5 Whether Theorem 1.1 holds for $1/2\le\alpha<1$ remains an open problem.

2 Proofs

The following three lemmas will be useful in the proof; the first is due to Csörgő et al. [6].

Lemma 2.1 Let $X$ be a random variable with $EX=\mu$, and denote $l(x)=E(X-\mu)^2I\{|X-\mu|\le x\}$. The following statements are equivalent.
  1. (i) $X$ is in the domain of attraction of the normal law.
  2. (ii) $x^2P(|X-\mu|>x)=o(l(x))$.
  3. (iii) $xE(|X-\mu|I(|X-\mu|>x))=o(l(x))$.
  4. (iv) $E(|X-\mu|^{\alpha}I(|X-\mu|\le x))=o(x^{\alpha-2}l(x))$ for $\alpha>2$.
  5. (v) $l(x)$ is a slowly varying function at ∞.
Lemma 2.2 Let $\{\xi,\xi_n\}_{n\in\mathbb{N}}$ be a sequence of uniformly bounded random variables. If there exist constants $c>0$ and $\delta>0$ such that
$$|E\xi_k\xi_j|\le c\left(\frac{k}{j}\right)^{\delta}\quad\text{for }1\le k<j,$$
(8)
then
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\xi_k=0\quad\text{a.s.},$$
(9)

where $d_k$ and $D_n$ are defined by (6).

Proof

Since
$$\begin{aligned}
E\left(\sum_{k=1}^{n}d_k\xi_k\right)^2
&\le\sum_{k=1}^{n}d_k^2E\xi_k^2+2\sum_{1\le k<j\le n}d_kd_j|E\xi_k\xi_j|\\
&=\sum_{k=1}^{n}d_k^2E\xi_k^2
+2\sum_{1\le k<j\le n;\,j/k\ge\ln^{2/\delta}D_n}d_kd_j|E\xi_k\xi_j|
+2\sum_{1\le k<j\le n;\,j/k<\ln^{2/\delta}D_n}d_kd_j|E\xi_k\xi_j|\\
&=:T_{n1}+2(T_{n2}+T_{n3}).
\end{aligned}$$
(10)
By the assumption of Lemma 2.2, there exists a constant $c>0$ such that $|\xi_k|\le c$ for any $k$. Noting that $\exp(\ln^{\alpha}x)=\exp\left(\int_1^x\frac{\alpha(\ln u)^{\alpha-1}}{u}\,du\right)$, we see that $\exp(\ln^{\alpha}x)$, $\alpha<1$, is a slowly varying function at infinity. Hence,
$$T_{n1}\le c\sum_{k=1}^{n}\frac{\exp(2\ln^{\alpha}k)}{k^2}\le c\sum_{k=1}^{\infty}\frac{\exp(2\ln^{\alpha}k)}{k^2}<\infty.$$
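Since $\exp(2\ln^{\alpha}k)$ grows slower than any power of $k$ for $\alpha<1$, the terms of the bounding series decay essentially like $k^{-2}$, so convergence is unsurprising; a quick numerical check (our illustration, with the arbitrary choice $\alpha=1/4$) confirms how small the tail already is:

```python
import math

# Partial sums of sum_k exp(2 * ln^alpha(k)) / k^2 for alpha = 1/4.
# The slowly varying numerator barely perturbs the k^{-2} decay.
alpha = 0.25

def term(k):
    return math.exp(2.0 * math.log(k) ** alpha) / k**2

partial_10k = sum(term(k) for k in range(1, 10_001))
partial_100k = sum(term(k) for k in range(1, 100_001))
tail = partial_100k - partial_10k
print(partial_100k, tail)  # the contribution from k in (10^4, 10^5] is already tiny
```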
By (8),
$$T_{n2}\le c\sum_{1\le k<j\le n;\,j/k\ge\ln^{2/\delta}D_n}d_kd_j\left(\frac{k}{j}\right)^{\delta}
\le\frac{c}{\ln^{2}D_n}\sum_{1\le k<j\le n}d_kd_j
\le\frac{cD_n^2}{\ln^{2}D_n}.$$
(11)
On the other hand, if $\alpha=0$, we have $d_k=e/k$, $D_n\sim e\ln n$, and hence, for sufficiently large $n$,
$$T_{n3}\le c\sum_{k=1}^{n}\frac{1}{k}\sum_{j=k}^{k\ln^{2/\delta}D_n}\frac{1}{j}
\le cD_n\ln\ln D_n\le\frac{cD_n^2}{\ln^{2}D_n}.$$
(12)
If $0<\alpha<1/2$, then since $y^{-\alpha}\to 0$ as $y\to\infty$, for arbitrarily small $\varepsilon>0$ there exists $n_0$ such that $\frac{1-\alpha}{\alpha}y^{-\alpha}<\varepsilon$ for $y\ge\ln n_0$. Therefore
$$1\le\frac{\int_0^{\ln n}\left(\exp(y^{\alpha})+\frac{1-\alpha}{\alpha}y^{-\alpha}\exp(y^{\alpha})\right)dy}{\int_0^{\ln n}\exp(y^{\alpha})\,dy}
\le\frac{\int_0^{\ln n_0}\exp(y^{\alpha})\left(1+\frac{1-\alpha}{\alpha}y^{-\alpha}\right)dy+(1+\varepsilon)\int_{\ln n_0}^{\ln n}\exp(y^{\alpha})\,dy}{\int_0^{\ln n}\exp(y^{\alpha})\,dy}\to 1+\varepsilon.$$
This implies
$$\int_0^{\ln n}\exp(y^{\alpha})\,dy\sim\int_0^{\ln n}\left(\exp(y^{\alpha})+\frac{1-\alpha}{\alpha}y^{-\alpha}\exp(y^{\alpha})\right)dy$$

from the arbitrariness of ε.

Hence,
$$D_n\sim\int_1^n\frac{\exp(\ln^{\alpha}x)}{x}\,dx=\int_0^{\ln n}\exp(y^{\alpha})\,dy
\sim\int_0^{\ln n}\left(\exp(y^{\alpha})+\frac{1-\alpha}{\alpha}y^{-\alpha}\exp(y^{\alpha})\right)dy
=\int_0^{\ln n}\frac{1}{\alpha}\left(y^{1-\alpha}\exp(y^{\alpha})\right)'dy
=\frac{1}{\alpha}\ln^{1-\alpha}n\exp(\ln^{\alpha}n),\quad n\to\infty.$$
(13)
This implies
$$\ln D_n\sim\ln^{\alpha}n,\qquad \exp(\ln^{\alpha}n)\sim\frac{\alpha D_n}{(\ln D_n)^{(1-\alpha)/\alpha}},\qquad \ln\ln D_n\sim\alpha\ln\ln n.$$
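For $\alpha=1/4$ the integral in (13) can be evaluated in closed form (substitute $t=y^{1/4}$, so $dy=4t^3\,dt$ and $\int t^3e^t\,dt=e^t(t^3-3t^2+6t-6)+C$), which makes the asymptotic rate easy to check numerically; the choice $\alpha=1/4$ and the checkpoints below are our illustrative choices:

```python
import math

# For alpha = 1/4, substituting t = y^(1/4) in (13) gives
#   int_0^{ln n} exp(y^alpha) dy = 4 * int_0^T t^3 e^t dt,  T = (ln n)^(1/4).
def D_exact(T):
    # antiderivative of t^3 e^t is e^t (t^3 - 3t^2 + 6t - 6); value at 0 is -6
    return 4.0 * (math.exp(T) * (T**3 - 3*T**2 + 6*T - 6) + 6.0)

def D_asymptotic(T):
    # (1/alpha) * (ln n)^(1-alpha) * exp((ln n)^alpha) with alpha = 1/4
    return 4.0 * T**3 * math.exp(T)

# T = 31.6 corresponds to the astronomically large n = exp(10^6):
# the convergence in (13) is real but extremely slow.
ratios = [D_exact(T) / D_asymptotic(T) for T in (2.0, 10.0, 31.6)]
print(ratios)  # increases toward 1 as T grows
```

The very slow approach of the ratio to 1 is consistent with the purely logarithmic scales ($\ln D_n$, $\ln\ln D_n$) that drive the rest of the proof.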
Thus, combining $|\xi_k|\le c$ for any $k$,
$$T_{n3}\le c\sum_{1\le k<j\le n;\,j/k<(\ln D_n)^{2/\delta}}d_kd_j
\le c\sum_{k=1}^{n}d_k\sum_{k<j\le k(\ln D_n)^{2/\delta}}\frac{\exp(\ln^{\alpha}n)}{j}
\le c\exp(\ln^{\alpha}n)\ln\ln D_n\sum_{k=1}^{n}d_k
\le\frac{cD_n^2\ln\ln D_n}{(\ln D_n)^{(1-\alpha)/\alpha}}.$$
Since $\alpha<1/2$ implies $(1-2\alpha)/(2\alpha)>0$ and $\varepsilon_1:=1/(2\alpha)-1>0$, for sufficiently large $n$ we get
$$T_{n3}\le\frac{cD_n^2}{(\ln D_n)^{1/(2\alpha)}}\cdot\frac{\ln\ln D_n}{(\ln D_n)^{(1-2\alpha)/(2\alpha)}}
\le\frac{D_n^2}{(\ln D_n)^{1/(2\alpha)}}=\frac{D_n^2}{(\ln D_n)^{1+\varepsilon_1}}.$$
(14)
Let $T_n:=\frac{1}{D_n}\sum_{k=1}^{n}d_k\xi_k$ and $\varepsilon_2:=\min(1,\varepsilon_1)$. Combining (10)-(12) and (14), for sufficiently large $n$ we get
$$ET_n^2\le\frac{c}{(\ln D_n)^{1+\varepsilon_2}}.$$
By (13), we have $D_{n+1}\sim D_n$. Let $0<\eta<\frac{\varepsilon_2}{1+\varepsilon_2}$ and $n_k=\inf\{n;\,D_n\ge\exp(k^{1-\eta})\}$; then $D_{n_k}\ge\exp(k^{1-\eta})$ and $D_{n_k-1}<\exp(k^{1-\eta})$. Therefore
$$1\le\frac{D_{n_k}}{\exp(k^{1-\eta})}\le\frac{D_{n_k}}{D_{n_k-1}}\to 1,$$
that is,
$$D_{n_k}\sim\exp(k^{1-\eta}).$$
Since $(1-\eta)(1+\varepsilon_2)>1$ from the definition of η, for any $\varepsilon>0$ we have
$$\sum_{k=1}^{\infty}P(|T_{n_k}|>\varepsilon)\le c\sum_{k=1}^{\infty}ET_{n_k}^2\le c\sum_{k=1}^{\infty}\frac{1}{k^{(1-\eta)(1+\varepsilon_2)}}<\infty.$$
By the Borel-Cantelli lemma,
$$T_{n_k}\to 0\quad\text{a.s.}$$
Now, for $n_k<n\le n_{k+1}$, by $|\xi_i|\le c$ for any $i$,
$$|T_n|\le|T_{n_k}|+\frac{c}{D_{n_k}}\sum_{i=n_k+1}^{n_{k+1}}d_i=|T_{n_k}|+c\left(\frac{D_{n_{k+1}}}{D_{n_k}}-1\right)\to 0\quad\text{a.s.}$$

from
$$\frac{D_{n_{k+1}}}{D_{n_k}}\sim\frac{\exp((k+1)^{1-\eta})}{\exp(k^{1-\eta})}=\exp\left(k^{1-\eta}\left((1+1/k)^{1-\eta}-1\right)\right)\le\exp\left((1-\eta)k^{-\eta}\right)\to 1,$$
i.e., (9) holds. This completes the proof of Lemma 2.2. □

Let $l(x)=E(X-\mu)^2I\{|X-\mu|\le x\}$, $b=\inf\{x\ge 1;\,l(x)>0\}$ and
$$\eta_j=\inf\left\{s;\,s\ge b+1,\,\frac{l(s)}{s^2}\le\frac{1}{j}\right\}\quad\text{for }j\ge 1.$$
By the definition of $\eta_j$, we have $jl(\eta_j)\le\eta_j^2$ and $jl(\eta_j-\varepsilon)>(\eta_j-\varepsilon)^2$ for any $\varepsilon>0$. It implies that
$$nl(\eta_n)\sim\eta_n^2\quad\text{as }n\to\infty.$$
(15)
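To see what (15) says in the simplest situation (an illustrative special case, not needed for the proof): if $EX^2<\infty$ with $\operatorname{Var}X=\sigma^2>0$, the truncation levels grow like $\sigma\sqrt{j}$.

```latex
% Finite-variance special case of (15): l(x) increases to \sigma^2,
% so the defining inequality l(s)/s^2 \le 1/j is first satisfied
% near s = \sigma\sqrt{j}. Hence
\[
  \eta_j \sim \sigma\sqrt{j}, \qquad
  j\,l(\eta_j) \sim j\sigma^2 \sim \eta_j^2 \quad (j \to \infty),
\]
% which recovers (15). In the infinite-variance case, \eta_j exceeds
% \sqrt{j} by a slowly varying factor, and (15) still holds.
```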
For every $1\le i\le k\le n$, let
$$\bar X_{ki}=(X_i-\mu)I(|X_i-\mu|\le\eta_k),\qquad \bar V_k^2=\sum_{i=1}^{k}\bar X_{ki}^2,\qquad \bar S_{k,k}=\sum_{i=1}^{k}b_{i,k}\bar X_{ki}.$$
Lemma 2.3 Suppose that the assumptions of Theorem 1.1 hold. Then
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\le x\right)=\Phi(x)\quad\text{a.s. for any }x\in\mathbb{R},$$
(16)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\left(I\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)-EI\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)\right)=0\quad\text{a.s.},$$
(17)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\left(f\left(\frac{\bar V_k^2}{kl(\eta_k)}\right)-Ef\left(\frac{\bar V_k^2}{kl(\eta_k)}\right)\right)=0\quad\text{a.s.},$$
(18)

where $d_k$ and $D_n$ are defined by (6) and $f$ is a non-negative, bounded Lipschitz function.

Proof By the central limit theorem for i.i.d. random variables and $\operatorname{Var}\bar S_{n,n}\sim 2nl(\eta_n)$ as $n\to\infty$, which follows from $\sum_{k=1}^{n}b_{k,n}^2\sim 2n$, it follows that
$$\frac{\bar S_{n,n}-E\bar S_{n,n}}{\sqrt{2nl(\eta_n)}}\xrightarrow{d}N\quad\text{as }n\to\infty,$$
where $N$ denotes a standard normal random variable. This implies that for any non-negative, bounded Lipschitz function $g(x)$,
$$Eg\left(\frac{\bar S_{n,n}-E\bar S_{n,n}}{\sqrt{2nl(\eta_n)}}\right)\to Eg(N)\quad\text{as }n\to\infty.$$
Hence, we obtain
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kEg\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right)=Eg(N)$$

from the Toeplitz lemma.

On the other hand, note that (16) is equivalent to
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kg\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right)=Eg(N)\quad\text{a.s.}$$
from Theorem 7.1 of Billingsley [3] and Section 2 of Peligrad and Shao [15]. Hence, to prove (16), it suffices to prove
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_k\left(g\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right)-Eg\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right)\right)=0\quad\text{a.s.}$$
(19)

for any non-negative, bounded Lipschitz function $g(x)$.

For any $k\ge 1$, let
$$\xi_k=g\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right)-Eg\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right).$$
For any $1\le k<j$, note that $g\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right)$ and $g\left(\frac{\bar S_{j,j}-E\bar S_{j,j}-\sum_{i=1}^{k}b_{i,j}(\bar X_{ji}-E\bar X_{ji})}{\sqrt{2jl(\eta_j)}}\right)$ are independent and $g(x)$ is a non-negative, bounded Lipschitz function. By the definition of $\eta_j$, we get
$$\begin{aligned}
|E\xi_k\xi_j|&=\left|\operatorname{Cov}\left(g\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right),g\left(\frac{\bar S_{j,j}-E\bar S_{j,j}}{\sqrt{2jl(\eta_j)}}\right)\right)\right|\\
&=\left|\operatorname{Cov}\left(g\left(\frac{\bar S_{k,k}-E\bar S_{k,k}}{\sqrt{2kl(\eta_k)}}\right),g\left(\frac{\bar S_{j,j}-E\bar S_{j,j}}{\sqrt{2jl(\eta_j)}}\right)-g\left(\frac{\bar S_{j,j}-E\bar S_{j,j}-\sum_{i=1}^{k}b_{i,j}(\bar X_{ji}-E\bar X_{ji})}{\sqrt{2jl(\eta_j)}}\right)\right)\right|\\
&\le\frac{cE\left|\sum_{i=1}^{k}b_{i,j}(\bar X_{ji}-E\bar X_{ji})\right|}{\sqrt{jl(\eta_j)}}
\le\frac{c\left(E\left(\sum_{i=1}^{k}b_{i,j}(\bar X_{ji}-E\bar X_{ji})\right)^2\right)^{1/2}}{\sqrt{jl(\eta_j)}}
\le\frac{c\left(\sum_{i=1}^{k}b_{i,j}^2E\bar X_{ji}^2\right)^{1/2}}{\sqrt{jl(\eta_j)}}\\
&\le\frac{c\left(\sum_{i=1}^{k}(b_{i,k}+b_{k+1,j})^2l(\eta_j)\right)^{1/2}}{\sqrt{jl(\eta_j)}}
\le c\left(\frac{\sum_{i=1}^{k}b_{i,k}^2+kb_{k+1,j}^2}{j}\right)^{1/2}
\le c\left(\frac{k+k\ln^2(j/k)}{j}\right)^{1/2}
\le c\left(\frac{k}{j}\right)^{1/4}.
\end{aligned}$$

By Lemma 2.2, (19) holds.

Now we prove (17). Let
$$Z_k=I\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)-EI\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)\quad\text{for any }k\ge 1.$$
It is known that $|I(A\cup B)-I(B)|\le I(A)$ for any sets $A$ and $B$. For $1\le k<j$, by Lemma 2.1(ii) and (15), we get
$$P(|X-\mu|>\eta_j)=o(1)\frac{l(\eta_j)}{\eta_j^2}=\frac{o(1)}{j}.$$
(20)
Hence
$$\begin{aligned}
|EZ_kZ_j|&=\left|\operatorname{Cov}\left(I\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right),I\left(\bigcup_{i=1}^{j}(|X_i-\mu|>\eta_j)\right)\right)\right|\\
&=\left|\operatorname{Cov}\left(I\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right),I\left(\bigcup_{i=1}^{j}(|X_i-\mu|>\eta_j)\right)-I\left(\bigcup_{i=k+1}^{j}(|X_i-\mu|>\eta_j)\right)\right)\right|\\
&\le E\left|I\left(\bigcup_{i=1}^{j}(|X_i-\mu|>\eta_j)\right)-I\left(\bigcup_{i=k+1}^{j}(|X_i-\mu|>\eta_j)\right)\right|
\le EI\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_j)\right)\\
&\le kP(|X-\mu|>\eta_j)\le\frac{ck}{j}.
\end{aligned}$$

By Lemma 2.2, (17) holds.

Finally, we prove (18). Let
$$\zeta_k=f\left(\frac{\bar V_k^2}{kl(\eta_k)}\right)-Ef\left(\frac{\bar V_k^2}{kl(\eta_k)}\right)\quad\text{for any }k\ge 1.$$
For $1\le k<j$,
$$\begin{aligned}
|E\zeta_k\zeta_j|&=\left|\operatorname{Cov}\left(f\left(\frac{\bar V_k^2}{kl(\eta_k)}\right),f\left(\frac{\bar V_j^2}{jl(\eta_j)}\right)\right)\right|\\
&=\left|\operatorname{Cov}\left(f\left(\frac{\bar V_k^2}{kl(\eta_k)}\right),f\left(\frac{\bar V_j^2}{jl(\eta_j)}\right)-f\left(\frac{\bar V_j^2-\sum_{i=1}^{k}(X_i-\mu)^2I(|X_i-\mu|\le\eta_j)}{jl(\eta_j)}\right)\right)\right|\\
&\le\frac{cE\sum_{i=1}^{k}(X_i-\mu)^2I(|X_i-\mu|\le\eta_j)}{jl(\eta_j)}
=\frac{ckE(X-\mu)^2I(|X-\mu|\le\eta_j)}{jl(\eta_j)}
=\frac{ckl(\eta_j)}{jl(\eta_j)}=\frac{ck}{j}.
\end{aligned}$$

By Lemma 2.2, (18) holds. This completes the proof of Lemma 2.3. □

Proof of Theorem 1.1 Let $U_i=S_i/(i\mu)$; then (7) is equivalent to
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\frac{\mu}{\sqrt{2}V_k}\sum_{i=1}^{k}\ln U_i\le x\right)=\Phi(x)\quad\text{a.s. for any }x\in\mathbb{R}.$$
(21)
Let $q\in(4/3,2)$; then $E|X|<\infty$ and $E|X|^q<\infty$ from Remark 1.4. Using the Marcinkiewicz-Zygmund strong law of large numbers, we have
$$U_k-1=\frac{S_k-\mu k}{k\mu}\to 0\quad\text{a.s.},\qquad S_k-\mu k=o(k^{1/q})\quad\text{a.s.}$$
Hence, letting $a_k=\sqrt{2(1\pm\varepsilon)kl(\eta_k)}$ for any given $0<\varepsilon<1$, by $|\ln(1+x)-x|=O(x^2)$ for $|x|<1/2$,
$$\left|\frac{\mu}{a_k}\sum_{i=1}^{k}\ln U_i-\frac{\mu}{a_k}\sum_{i=1}^{k}(U_i-1)\right|
\le\frac{c}{\sqrt{kl(\eta_k)}}\sum_{i=1}^{k}(U_i-1)^2
\le\frac{c}{\sqrt{kl(\eta_k)}}\sum_{i=1}^{k}i^{2(1/q-1)}
\le\frac{c}{k^{3/2-2/q}\sqrt{l(\eta_k)}}\to 0\quad\text{a.s., }k\to\infty,$$

from $3/2-2/q>0$, the fact that $l(x)$ is a slowly varying function at ∞, and $\eta_k\le k+1$.

Therefore, for any $\delta>0$ and almost every event ω, there exists $k_0=k_0(\omega,\delta,x)$ such that for $k>k_0$,
$$I\left(\frac{\mu}{a_k}\sum_{i=1}^{k}(U_i-1)\le x-\delta\right)\le I\left(\frac{\mu}{a_k}\sum_{i=1}^{k}\ln U_i\le x\right)\le I\left(\frac{\mu}{a_k}\sum_{i=1}^{k}(U_i-1)\le x+\delta\right).$$
(22)
Note that under the condition $|X_j-\mu|\le\eta_k$, $1\le j\le k$,
$$\mu\sum_{i=1}^{k}(U_i-1)=\sum_{i=1}^{k}\frac{S_i-i\mu}{i}=\sum_{i=1}^{k}\frac{1}{i}\sum_{j=1}^{i}(X_j-\mu)=\sum_{j=1}^{k}\left(\sum_{i=j}^{k}\frac{1}{i}\right)\bar X_{kj}=\sum_{j=1}^{k}b_{j,k}\bar X_{kj}=\bar S_{k,k}.$$
(23)
Thus, by (22) and (23), for any given $0<\varepsilon<1$, $\delta>0$, we have for $k>k_0$,
$$I\left(\frac{\mu}{\sqrt{2}V_k}\sum_{i=1}^{k}\ln U_i\le x\right)\le I\left(\frac{\bar S_{k,k}}{a_k}\le x+\delta\right)+I\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)+I\left(\bar V_k^2>(1+\varepsilon)kl(\eta_k)\right)+I\left(\bar V_k^2<(1-\varepsilon)kl(\eta_k)\right)$$
and
$$I\left(\frac{\mu}{\sqrt{2}V_k}\sum_{i=1}^{k}\ln U_i\le x\right)\ge I\left(\frac{\bar S_{k,k}}{a_k}\le x-\delta\right)-I\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)-I\left(\bar V_k^2>(1+\varepsilon)kl(\eta_k)\right)-I\left(\bar V_k^2<(1-\varepsilon)kl(\eta_k)\right).$$
Hence, to prove (21), it suffices to prove
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\frac{\bar S_{k,k}}{a_k}\le x\pm\delta_1\right)=\Phi\left(\sqrt{1\pm\varepsilon}\,(x\pm\delta_1)\right)\quad\text{a.s.},$$
(24)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)=0\quad\text{a.s.},$$
(25)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\bar V_k^2>(1+\varepsilon)kl(\eta_k)\right)=0\quad\text{a.s.},$$
(26)
$$\lim_{n\to\infty}\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\bar V_k^2<(1-\varepsilon)kl(\eta_k)\right)=0\quad\text{a.s.}$$
(27)

for any $0<\varepsilon<1$ and $\delta_1>0$.

Firstly, we prove (24). Let $0<\beta<1/2$, and let $h(\cdot)$ be a real function such that for any given $x\in\mathbb{R}$,
$$I\left(\frac{y}{\sqrt{1\pm\varepsilon}}\le x\pm\delta_1-\beta\right)\le h(y)\le I\left(\frac{y}{\sqrt{1\pm\varepsilon}}\le x\pm\delta_1+\beta\right).$$
(28)
By $E(X_i-\mu)=0$, Lemma 2.1(iii) and (15), we have
$$\begin{aligned}
|E\bar S_{k,k}|&=\left|E\sum_{i=1}^{k}b_{i,k}(X_i-\mu)I(|X_i-\mu|\le\eta_k)\right|
\le\sum_{i=1}^{k}b_{i,k}E|X_i-\mu|I(|X_i-\mu|>\eta_k)\\
&=\sum_{i=1}^{k}\sum_{j=i}^{k}\frac{1}{j}\,E|X-\mu|I(|X-\mu|>\eta_k)
=\sum_{j=1}^{k}\sum_{i=1}^{j}\frac{1}{j}\cdot\frac{o(l(\eta_k))}{\eta_k}
=\frac{o(kl(\eta_k))}{\eta_k}=o\left(\sqrt{kl(\eta_k)}\right).
\end{aligned}$$

Combining this with (16), (28) and the arbitrariness of β in (28), (24) holds.

By (17), (20) and the Toeplitz lemma,
$$0\le\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)
=\frac{1}{D_n}\sum_{k=1}^{n}d_kEI\left(\bigcup_{i=1}^{k}(|X_i-\mu|>\eta_k)\right)+o(1)
\le\frac{1}{D_n}\sum_{k=1}^{n}d_k\,kP(|X-\mu|>\eta_k)+o(1)\to 0\quad\text{a.s.}$$

Hence, (25) holds.

Now we prove (26). For any $\lambda>0$, let $f$ be a non-negative, bounded Lipschitz function such that
$$I(x>1+\lambda)\le f(x)\le I(x>1+\lambda/2).$$
From $E\bar V_k^2=kl(\eta_k)$, the independence of the $\bar X_{ki}$, Lemma 2.1(iv) and (15),
$$P\left(\bar V_k^2>\left(1+\frac{\lambda}{2}\right)kl(\eta_k)\right)
=P\left(\bar V_k^2-E\bar V_k^2>\frac{\lambda}{2}kl(\eta_k)\right)
\le\frac{cE(\bar V_k^2-E\bar V_k^2)^2}{k^2l^2(\eta_k)}
\le\frac{cE(X-\mu)^4I(|X-\mu|\le\eta_k)}{kl^2(\eta_k)}
=\frac{o(1)\eta_k^2}{kl(\eta_k)}=o(1)\to 0.$$
Therefore, from (18) and the Toeplitz lemma,
$$\begin{aligned}
0&\le\frac{1}{D_n}\sum_{k=1}^{n}d_kI\left(\bar V_k^2>(1+\lambda)kl(\eta_k)\right)
\le\frac{1}{D_n}\sum_{k=1}^{n}d_kf\left(\frac{\bar V_k^2}{kl(\eta_k)}\right)
=\frac{1}{D_n}\sum_{k=1}^{n}d_kEf\left(\frac{\bar V_k^2}{kl(\eta_k)}\right)+o(1)\\
&\le\frac{1}{D_n}\sum_{k=1}^{n}d_kP\left(\bar V_k^2>(1+\lambda/2)kl(\eta_k)\right)+o(1)\to 0\quad\text{a.s.}
\end{aligned}$$

Hence, (26) holds, and (27) can be proved by a similar argument. This completes the proof of Theorem 1.1. □

Authors’ information

Qunying Wu, Professor, Doctor, working in the field of probability and statistics.

Declarations

Acknowledgements

The authors are very grateful to the referees and the editors for their valuable comments and some helpful suggestions that improved the clarity and readability of the paper. Supported by the National Natural Science Foundation of China (11061012), and the support Program of the Guangxi China Science Foundation (2012GXNSFAA053010).

Authors’ Affiliations

(1)
College of Science, Guilin University of Technology
(2)
Department of Mathematics, Ji’nan University

References

  1. Arnold BC, Villaseñor JA: The asymptotic distribution of sums of records. Extremes 1998, 1(3):351–363.
  2. Berkes I, Csáki E: A universal result in almost sure central limit theory. Stoch. Process. Appl. 2001, 94:105–134. doi:10.1016/S0304-4149(01)00078-3
  3. Billingsley P: Convergence of Probability Measures. Wiley, New York; 1968.
  4. Brosamler GA: An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 1988, 104:561–574. doi:10.1017/S0305004100065750
  5. Chandrasekharan K, Minakshisundaram S: Typical Means. Oxford University Press, Oxford; 1952.
  6. Csörgő M, Szyszkowicz B, Wang QY: Donsker's theorem for self-normalized partial sums processes. Ann. Probab. 2003, 31(3):1228–1240. doi:10.1214/aop/1055425777
  7. Giné E, Götze F, Mason DM: When is the Student t-statistic asymptotically standard normal? Ann. Probab. 1997, 25:1514–1531.
  8. Gonchigdanzan K, Rempala G: A note on the almost sure central limit theorem for the product of partial sums. Appl. Math. Lett. 2006, 19:191–196. doi:10.1016/j.aml.2005.06.002
  9. Hörmann S: Critical behavior in almost sure central limit theory. J. Theor. Probab. 2007, 20:613–636. doi:10.1007/s10959-007-0080-3
  10. Huang SH, Pang TX: An almost sure central limit theorem for self-normalized partial sums. Comput. Math. Appl. 2010, 60:2639–2644. doi:10.1016/j.camwa.2010.08.093
  11. Ibragimov IA, Lifshits M: On the convergence of generalized moments in almost sure central limit theorem. Stat. Probab. Lett. 1998, 40:343–351. doi:10.1016/S0167-7152(98)00134-5
  12. Lacey MT, Philipp W: A note on the almost sure central limit theorem. Stat. Probab. Lett. 1990, 9:201–205. doi:10.1016/0167-7152(90)90056-D
  13. Miao Y: Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc. Indian Acad. Sci. Math. Sci. 2008, 118(2):289–294. doi:10.1007/s12044-008-0021-9
  14. Pang TX, Lin ZY, Hwang KS: Asymptotics for self-normalized random products of sums of i.i.d. random variables. J. Math. Anal. Appl. 2007, 334:1246–1259. doi:10.1016/j.jmaa.2006.12.085
  15. Peligrad M, Shao QM: A note on the almost sure central limit theorem for weakly dependent random variables. Stat. Probab. Lett. 1995, 22:131–136. doi:10.1016/0167-7152(94)00059-H
  16. Rempala G, Wesolowski J: Asymptotics for products of sums and U-statistics. Electron. Commun. Probab. 2002, 7:47–54.
  17. Schatte P: On strong versions of the central limit theorem. Math. Nachr. 1988, 137:249–256. doi:10.1002/mana.19881370117
  18. Wu QY: Almost sure limit theorems for stable distribution. Stat. Probab. Lett. 2011, 81(6):662–672. doi:10.1016/j.spl.2011.02.003
  19. Wu QY: An almost sure central limit theorem for the weight function sequences of NA random variables. Proc. Indian Acad. Sci. Math. Sci. 2011, 121(3):369–377. doi:10.1007/s12044-011-0036-5
  20. Wu QY: Almost sure central limit theory for products of sums of partial sums. Appl. Math. J. Chin. Univ. Ser. B 2012, 27(2):169–180.
  21. Wu QY: A note on the almost sure limit theorem for self-normalized partial sums of random variables in the domain of attraction of the normal law. J. Inequal. Appl. 2012, 2012: Article ID 17. doi:10.1186/1029-242X-2012-17
  22. Ye DX, Wu QY: Almost sure central limit theorem for product of partial sums of strongly mixing random variables. J. Inequal. Appl. 2011, 2011: Article ID 576301.
  23. Zhang Y, Yang XY: An almost sure central limit theorem for self-normalized products of sums of i.i.d. random variables. J. Math. Anal. Appl. 2011, 376:29–41. doi:10.1016/j.jmaa.2010.10.021

Copyright

© Wu and Chen; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.