
A note on the almost sure central limit theorem for the product of some partial sums

Abstract

Let $(X_n)$ be a sequence of i.i.d., positive, square integrable random variables with $E(X_1)=\mu>0$ and $\operatorname{Var}(X_1)=\sigma^2$. Denote $S_{n,k}=\sum_{i=1}^{n}X_i-X_k$ and let $\gamma=\sigma/\mu$ be the coefficient of variation. Our goal is to identify the unbounded, measurable functions $g$ which satisfy the almost sure central limit theorem, i.e.,

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,g\Biggl(\Biggl(\frac{\prod_{k=1}^{n}S_{n,k}}{(n-1)^{n}\mu^{n}}\Biggr)^{\frac{1}{\gamma\sqrt{n}}}\Biggr)=\int_{0}^{\infty}g(x)\,dF(x)\quad\text{a.s.},$$

where $F(\cdot)$ is the distribution function of the random variable $e^{\mathcal N}$ and $\mathcal N$ is a standard normal random variable.

MSC:60F15, 60F05.

1 Introduction

The almost sure central limit theorem (ASCLT) was first introduced independently by Schatte [1] and Brosamler [2]. Since then, many studies have established the ASCLT in different situations, for example, the function-typed almost sure central limit theorem (FASCLT); see Berkes et al. [3] and Ibragimov and Lifshits [4]. The purpose of this paper is to investigate the FASCLT for the product of some partial sums.

Let $(X_n)$ be a sequence of i.i.d. random variables and define the partial sum $S_n=\sum_{k=1}^{n}X_k$ for $n\ge1$. In a recent paper, Rempala and Wesolowski [5] showed, under the assumptions $E(X^2)<\infty$ and $X>0$, that

$$\Biggl(\frac{\prod_{k=1}^{n}S_{k}}{n!\,\mu^{n}}\Biggr)^{\frac{1}{\gamma\sqrt{n}}}\xrightarrow{d}e^{\sqrt{2}\mathcal N},$$
(1)

where $\mathcal N$ is a standard normal random variable, $\mu=E(X)$ and $\gamma=\sigma/\mu$ with $\sigma^{2}=\operatorname{Var}(X)$. For further results in this field, we refer to Qi [6], Lu and Qi [7] and Rempala and Wesolowski [8].

Recently, Gonchigdanzan and Rempala [9] obtained the following almost sure limit theorem related to (1).

Theorem A Let $(X_n)$ be a sequence of i.i.d., positive random variables with $E(X_1)=\mu>0$ and $\operatorname{Var}(X_1)=\sigma^{2}$. Denote by $\gamma=\sigma/\mu$ the coefficient of variation. Then, for any real $x$,

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,I\Biggl(\Biggl(\frac{\prod_{k=1}^{n}S_{k}}{n!\,\mu^{n}}\Biggr)^{\frac{1}{\gamma\sqrt{n}}}\le x\Biggr)=G(x)\quad\text{a.s.},$$
(2)

where $G(x)$ is the distribution function of $e^{\sqrt{2}\mathcal N}$ and $\mathcal N$ is a standard normal random variable. Some extensions of this result can be found in Ye and Wu [10] and the references therein.

A similar result on the product of partial sums was obtained by Miao [11]; it reads as follows.

Theorem B Let $(X_n)$ be a sequence of i.i.d., positive, square integrable random variables with $E(X_1)=\mu>0$ and $\operatorname{Var}(X_1)=\sigma^{2}$. Denote $S_{n,k}=\sum_{i=1}^{n}X_i-X_k$ and let $\gamma=\sigma/\mu$ be the coefficient of variation. Then

$$\Biggl(\frac{\prod_{k=1}^{n}S_{n,k}}{(n-1)^{n}\mu^{n}}\Biggr)^{\frac{1}{\gamma\sqrt{n}}}\xrightarrow{d}e^{\mathcal N},$$
(3)

and for any real x,

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,I\Biggl(\Biggl(\frac{\prod_{k=1}^{n}S_{n,k}}{(n-1)^{n}\mu^{n}}\Biggr)^{\frac{1}{\gamma\sqrt{n}}}\le x\Biggr)=F(x)\quad\text{a.s.},$$
(4)

where $F(\cdot)$ is the distribution function of the random variable $e^{\mathcal N}$ and $\mathcal N$ is a standard normal random variable.
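
Theorem B can be checked numerically. The following minimal Monte Carlo sketch (not part of the original paper) assumes i.i.d. Exp(1) variables, so that $\mu=\sigma=1$ and hence $\gamma=1$; the sample size and number of replications are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo sketch of the weak convergence (3); the Exp(1) choice and the
# sample sizes are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n, reps = 2000, 5000
mu = gamma = 1.0                  # Exp(1): mean = std = 1, so gamma = sigma/mu = 1

exponents = np.empty(reps)
for r in range(reps):
    x = rng.exponential(scale=1.0, size=n)
    s_nk = x.sum() - x            # S_{n,k} = sum_{i<=n} X_i - X_k, k = 1,...,n
    # (prod_k S_{n,k} / ((n-1)^n mu^n))^(1/(gamma sqrt(n))) = exp(exponent)
    exponents[r] = np.log(s_nk / ((n - 1) * mu)).sum() / (gamma * np.sqrt(n))

# If (3) holds, `exponents` is approximately N(0,1), i.e. the product itself is
# approximately lognormal, distributed as e^N.
print("sample mean (should be close to 0):", exponents.mean())
print("sample std  (should be close to 1):", exponents.std())
```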

The purpose of this paper is to extend (4) to a class of unbounded measurable functions $g$ in place of indicator functions.

Throughout this article, $(X_n)$ is a sequence of i.i.d., positive, square integrable random variables with $E(X_1)=\mu>0$ and $\operatorname{Var}(X_1)=\sigma^{2}$. We denote $S_{n,k}=\sum_{i=1}^{n}X_i-X_k$ and let $\gamma=\sigma/\mu$ be the coefficient of variation. Furthermore, $\mathcal N$ is a standard normal random variable, $\Phi$ is the standard normal distribution function, $\phi$ is its density function, and $a_{n}\ll b_{n}$ stands for $\limsup_{n\to\infty}|a_{n}/b_{n}|<\infty$.

2 Main result

We state our main result as follows.

Theorem 1 Let $g(x)$ be a real-valued, almost everywhere continuous function on $\mathbb{R}$ such that $|g(e^{x})|\,\phi(x)\le c(1+|x|)^{-\alpha}$ with some $c>0$ and $\alpha>5$. Then

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,g\Biggl(\Biggl(\frac{\prod_{k=1}^{n}S_{n,k}}{(n-1)^{n}\mu^{n}}\Biggr)^{\frac{1}{\gamma\sqrt{n}}}\Biggr)=\int_{0}^{\infty}g(x)\,dF(x)\quad\text{a.s.},$$
(5)

where $F(\cdot)$ is the distribution function of the random variable $e^{\mathcal N}$.
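
The almost sure statement (5) can be illustrated along a single trajectory. The sketch below is again an illustration under assumptions not made in the paper: Exp(1) increments and the particular unbounded choice $g(x)=(\log x)^{2}$, for which $f(x)=g(e^{x})=x^{2}$ satisfies the growth condition of Theorem 1 and $\int_{0}^{\infty}g(x)\,dF(x)=E\mathcal N^{2}=1$. Since the averaging is logarithmic, convergence is slow and the agreement is only rough.

```python
import numpy as np

# Single-trajectory sketch of the logarithmic average in (5).  Assumptions made
# for illustration only: X_i ~ Exp(1) (so mu = sigma = gamma = 1) and
# g(x) = (log x)^2, whose limit in (5) is E[N^2] = 1.
rng = np.random.default_rng(1)
N = 5000
x = rng.exponential(size=N)
csum = np.cumsum(x)               # csum[n-1] = X_1 + ... + X_n

def g(t):
    return np.log(t) ** 2

acc = 0.0
for n in range(2, N + 1):         # n = 1 is skipped (S_{1,1} = 0); finitely many
                                  # terms do not affect the limit
    s_nk = csum[n - 1] - x[:n]    # S_{n,k}, k = 1,...,n
    u_n = np.log(s_nk / (n - 1)).sum() / np.sqrt(n)
    acc += g(np.exp(u_n)) / n     # g of the normalized product, weight 1/n

print("log-average:", acc / np.log(N), "(limit is 1; convergence is slow)")
```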

Let $f(x)=g(e^{x})$. By a simple calculation, we get the following result.

Remark 1 Let $f(x)$ be a real-valued, almost everywhere continuous function on $\mathbb{R}$ such that $|f(x)|\,\phi(x)\le c(1+|x|)^{-\alpha}$ with some $c>0$ and $\alpha>5$. Then (5) is equivalent to

$$\lim_{N\to\infty}\frac{1}{\log N}\sum_{n=1}^{N}\frac{1}{n}\,f\Biggl(\frac{1}{\gamma\sqrt{n}}\sum_{k=1}^{n}\log\frac{S_{n,k}}{(n-1)\mu}\Biggr)=\int_{-\infty}^{\infty}f(x)\phi(x)\,dx\quad\text{a.s.}$$
(6)
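
For the reader's convenience, the change of variables behind Remark 1 can be spelled out (this verification is not written out in the original): with $f(x)=g(e^{x})$,

$$\Biggl(\frac{\prod_{k=1}^{n}S_{n,k}}{(n-1)^{n}\mu^{n}}\Biggr)^{\frac{1}{\gamma\sqrt{n}}}=\exp\Biggl(\frac{1}{\gamma\sqrt{n}}\sum_{k=1}^{n}\log\frac{S_{n,k}}{(n-1)\mu}\Biggr),\qquad\int_{0}^{\infty}g(x)\,dF(x)=E\,g\bigl(e^{\mathcal N}\bigr)=\int_{-\infty}^{\infty}g(e^{y})\phi(y)\,dy=\int_{-\infty}^{\infty}f(y)\phi(y)\,dy,$$

so the summands and the limits in (5) and (6) coincide term by term, and the growth condition on $g$ in Theorem 1 is exactly the growth condition on $f$ in Remark 1.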

Remark 2 Lu et al. [12] proved a function-typed almost sure central limit theorem for a class of random functions, which includes U-statistics, von Mises statistics, linear processes and some other types of statistics, but their results do not imply Theorem 1.

3 Auxiliary results

In this section, we state and prove several auxiliary results, which will be useful in the proof of Theorem 1.

Let $\tilde S_{n}=\sum_{i=1}^{n}\frac{X_{i}-\mu}{\sigma}$ and $U_{i}=\frac{1}{\gamma\sqrt{i}}\sum_{k=1}^{i}\log\frac{S_{i,k}}{(i-1)\mu}$. Observe that for $|x|<1$ we have

$$\log(1+x)=x+\frac{\theta}{2}x^{2},$$

where $\theta\in(-1,0)$. Thus

$$\begin{aligned}
U_{i}&=\frac{1}{\gamma\sqrt{i}}\sum_{k=1}^{i}\log\frac{S_{i,k}}{(i-1)\mu}
=\frac{1}{\gamma\sqrt{i}}\sum_{k=1}^{i}\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)+\frac{1}{\gamma\sqrt{i}}\sum_{k=1}^{i}\frac{\theta_{k}}{2}\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}\\
&=\frac{1}{\sqrt{i}}\sum_{k=1}^{i}\Biggl(\sum_{j\neq k,\,j\le i}\frac{X_{j}-\mu}{(i-1)\sigma}\Biggr)+\frac{1}{\gamma\sqrt{i}}\sum_{k=1}^{i}\frac{\theta_{k}}{2}\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}\\
&=\frac{1}{\sqrt{i}}\sum_{k=1}^{i}\frac{X_{k}-\mu}{\sigma}+\frac{1}{\gamma\sqrt{i}}\sum_{k=1}^{i}\frac{\theta_{k}}{2}\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}
=:\frac{\tilde S_{i}}{\sqrt{i}}+R_{i}.
\end{aligned}$$
(7)

By the law of the iterated logarithm, we have

$$\max_{1\le k\le i}\Bigl|\frac{S_{i,k}}{(i-1)\mu}-1\Bigr|=O\bigl((\log\log i/i)^{1/2}\bigr)\quad\text{a.s.}$$

Therefore,

$$|R_{i}|=\Biggl|\frac{1}{\gamma\sqrt{i}}\sum_{k=1}^{i}\frac{\theta_{k}}{2}\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}\Biggr|\ll\frac{1}{\sqrt{i}}\sum_{k=1}^{i}\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}\ll\frac{\log\log i}{i^{1/2}}\quad\text{a.s.}$$
(8)

Obviously,

$$E|R_{i}|=E\Biggl|\frac{1}{\gamma\sqrt{i}}\sum_{k=1}^{i}\frac{\theta_{k}}{2}\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}\Biggr|\ll\frac{1}{\sqrt{i}}\sum_{k=1}^{i}E\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}\ll\frac{1}{\sqrt{i}}\sum_{k=1}^{i}\frac{1}{i-1}\ll\frac{1}{i^{1/2}}.$$
(9)

Our proof relies mainly on the decomposition (7). Properties (8) and (9) will be used extensively in the rest of this section.
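
For completeness, the elementary second-moment computation behind (9) is the following (it is only implicit in the original): since $S_{i,k}-(i-1)\mu=\sum_{j\le i,\,j\neq k}(X_{j}-\mu)$ is a sum of $i-1$ independent, centered random variables,

$$E\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}=\frac{(i-1)\sigma^{2}}{(i-1)^{2}\mu^{2}}=\frac{\gamma^{2}}{i-1},\qquad\text{so that}\qquad\frac{1}{\sqrt{i}}\sum_{k=1}^{i}E\Bigl(\frac{S_{i,k}}{(i-1)\mu}-1\Bigr)^{2}=\frac{\gamma^{2}\,i}{\sqrt{i}\,(i-1)}\ll\frac{1}{i^{1/2}}.$$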

Lemma 1 Let $X$ and $Y$ be random variables. Write $F(x)=P(X<x)$, $G(x)=P(X+Y<x)$. Then

$$F(x-\varepsilon)-P\bigl(|Y|\ge\varepsilon\bigr)\le G(x)\le F(x+\varepsilon)+P\bigl(|Y|\ge\varepsilon\bigr)$$

for every $\varepsilon>0$ and every $x$.

Proof This is Lemma 1.3 of Petrov [13]. □

Lemma 2 Let $(X_n)$ be a sequence of i.i.d. random variables. Let $S_{n}=\sum_{k\le n}X_{k}$, let $F^{s}$ denote the distribution function obtained from $F$ (the common distribution function of the $X_{n}$) by symmetrization, and choose $L>0$ so large that $\int_{|x|\le L}x^{2}\,dF^{s}\ge1$. Then, for any $n\ge1$ and $\lambda>0$,

$$\sup_{a}P\Bigl(a\le\frac{S_{n}}{\sqrt{n}}\le a+\lambda\Bigr)\le A\lambda$$

with some absolute constant $A$, provided $\lambda\sqrt{n}\ge L$.

Proof It can be obtained from Berkes et al. [3]. □

Lemma 3 Assume that (6) is true for all indicator functions of intervals and for a fixed a.e. continuous function $f(x)=f_{0}(x)$. Then (6) is also true for all a.e. continuous functions $f$ such that $|f(x)|\le|f_{0}(x)|$, $x\in\mathbb{R}$, and, moreover, the exceptional set of probability 0 can be chosen universally for all such $f$.

Proof See Berkes et al. [3]. □

In view of Lemma 3 and Remark 1, in order to prove Theorem 1, it suffices to prove (6) in the case $f(x)\phi(x)=(1+|x|)^{-\alpha}$, $\alpha>5$. Thus, in what follows, we put $f(x)\phi(x)=(1+|x|)^{-\alpha}$, $\alpha>5$ (that is, $f(x)=\sqrt{2\pi}\,e^{x^{2}/2}(1+|x|)^{-\alpha}$) and

$$\xi_{k}=\sum_{i=2^{k}+1}^{2^{k+1}}\frac{1}{i}f(U_{i}),\qquad\xi_{k}'=\sum_{i=2^{k}+1}^{2^{k+1}}\frac{1}{i}f(U_{i})\,I\bigl\{f(U_{i})\le k(\log k)^{-\beta}\bigr\},$$

where $1<\beta<\frac{1}{2}(\alpha-3)$ (such a $\beta$ exists since $\alpha>5$).

Lemma 4 Under the conditions of Theorem 1, we have $P(\xi_{k}\neq\xi_{k}'\text{ i.o.})=0$.

Proof Let $f^{-1}$ denote an inverse function of $f$ in some interval, and let $\alpha$, $\beta$ satisfy $1<\beta<\frac{1}{2}(\alpha-3)$. It is easy to check that

$$\{\xi_{k}\neq\xi_{k}'\}\subset\bigl\{|U_{i}|\ge f^{-1}\bigl(k(\log k)^{-\beta}\bigr)\text{ for some }2^{k}<i\le2^{k+1}\bigr\}$$

and

$$f\bigl(\bigl(2\log k+(\alpha-2\beta)\log\log k\bigr)^{1/2}\bigr)=k(\log k)^{-\beta}\,\frac{\sqrt{2\pi}\,(\log k)^{\alpha/2}}{\bigl\{1+\bigl(2\log k+(\alpha-2\beta)\log\log k\bigr)^{1/2}\bigr\}^{\alpha}}\le k(\log k)^{-\beta}.$$
(10)

Note that the function $f$ is even and strictly increasing for $x\ge x_{0}$. We have

$$f^{-1}\bigl(k(\log k)^{-\beta}\bigr)\ge\bigl(2\log k+(\alpha-2\beta)\log\log k\bigr)^{1/2}.$$
(11)
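
To make the calculation in (10) explicit (a spelled-out version added for the reader, under the normalization $f(x)=\sqrt{2\pi}\,e^{x^{2}/2}(1+|x|)^{-\alpha}$ fixed above): with $x_{k}=(2\log k+(\alpha-2\beta)\log\log k)^{1/2}$ we have $e^{x_{k}^{2}/2}=k(\log k)^{(\alpha-2\beta)/2}$, hence

$$f(x_{k})=\frac{\sqrt{2\pi}\,k(\log k)^{(\alpha-2\beta)/2}}{(1+x_{k})^{\alpha}}\le\frac{\sqrt{2\pi}\,k(\log k)^{\alpha/2-\beta}}{(2\log k)^{\alpha/2}}\le k(\log k)^{-\beta}$$

for all large $k$, since $x_{k}^{2}\ge2\log k$ and $2^{\alpha/2}>\sqrt{2\pi}$ for $\alpha>5$; applying the increasing function $f^{-1}$ yields (11).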

Observing that $2^{k}<i\le2^{k+1}$ implies $k\ge\frac{1}{2}\log i$, in view of (8) we get

$$\begin{aligned}
P\bigl(\xi_{k}\neq\xi_{k}'\text{ i.o.}\bigr)&\le P\Bigl(|U_{i}|\ge\bigl(2\log\log i+(\alpha-2\beta)\log\log\log i-O(1)\bigr)^{1/2}\text{ i.o.}\Bigr)\\
&=P\Bigl(\Bigl|\frac{\tilde S_{i}}{\sqrt{i}}+R_{i}\Bigr|\ge\bigl(2\log\log i+(\alpha-2\beta)\log\log\log i-O(1)\bigr)^{1/2}\text{ i.o.}\Bigr)\\
&\le P\Bigl(\Bigl|\frac{\tilde S_{i}}{\sqrt{i}}\Bigr|\ge\bigl(2\log\log i+(\alpha-2\beta)\log\log\log i-O(1)\bigr)^{1/2}\text{ i.o.}\Bigr)=0,
\end{aligned}$$

where in the last step we use the assumption $\alpha-2\beta>3$ and a version of the Kolmogorov-Erdös-Feller-Petrovski test (see Feller [14], Theorem 2). This completes the proof of Lemma 4. □

Let $a_{k}=f^{-1}\bigl(k(\log k)^{-\beta}\bigr)$ and let $G_{i}$ and $F_{i}$ denote, respectively, the distribution functions of $U_{i}$ and $\tilde S_{i}/\sqrt{i}$. Set

$$\sigma_{i}^{2}=\int_{-\sqrt{i}}^{\sqrt{i}}x^{2}\,dF_{i}(x)-\Bigl(\int_{-\sqrt{i}}^{\sqrt{i}}x\,dF_{i}(x)\Bigr)^{2},\qquad\eta_{i}=\sup_{x}\Bigl|G_{i}(x)-\Phi\Bigl(\frac{x}{\sigma_{i}}\Bigr)\Bigr|,\qquad\varepsilon_{i}=\sup_{x}\Bigl|F_{i}(x)-\Phi\Bigl(\frac{x}{\sigma_{i}}\Bigr)\Bigr|.$$

Clearly, $\sigma_{i}\le1$ and $\lim_{i\to\infty}\sigma_{i}=1$.

Lemma 5 Under the conditions of Theorem  1, we have

$$\sum_{k\le N}E(\xi_{k}')^{2}\ll\frac{N^{2}}{(\log N)^{2\beta}}.$$

Proof Observe now that the relation

$$\Biggl|\int_{-a}^{a}\psi(x)\,d\bigl(G_{1}(x)-G_{2}(x)\bigr)\Biggr|\ll\sup_{-a\le x\le a}|\psi(x)|\,\sup_{-a\le x\le a}\bigl|G_{1}(x)-G_{2}(x)\bigr|$$
(12)

is valid for any bounded, measurable function $\psi$ and any distribution functions $G_{1}$, $G_{2}$. Let, as previously, $a_{k}=f^{-1}\bigl(k(\log k)^{-\beta}\bigr)$. Thus, for any $2^{k}<i\le2^{k+1}$, we obtain

$$E f^{2}(U_{i})\,I\bigl\{f(U_{i})\le k(\log k)^{-\beta}\bigr\}=\int_{|x|\le a_{k}}f^{2}(x)\,dG_{i}(x)\ll\int_{|x|\le a_{k}}f^{2}(x)\,d\Phi\Bigl(\frac{x}{\sigma_{i}}\Bigr)+\eta_{i}\,\frac{k^{2}}{(\log k)^{2\beta}}\ll\int_{|x|\le a_{k}}f^{2}(x)\,d\Phi(x)+\eta_{i}\,\frac{k^{2}}{(\log k)^{2\beta}},$$

where in the last step we have used the fact that $\sigma_{i}\le1$ and $\lim_{i\to\infty}\sigma_{i}=1$. Hence, by the Cauchy-Schwarz inequality, we have

$$\begin{aligned}
E(\xi_{k}')^{2}&\le E\Biggl[\Biggl(\sum_{i=2^{k}+1}^{2^{k+1}}\frac{1}{i^{2}}\Biggr)^{1/2}\Biggl(\sum_{i=2^{k}+1}^{2^{k+1}}f^{2}(U_{i})\,I\bigl\{f(U_{i})\le k(\log k)^{-\beta}\bigr\}\Biggr)^{1/2}\Biggr]^{2}\\
&\le\Biggl(\sum_{i=2^{k}+1}^{2^{k+1}}\frac{1}{i^{2}}\Biggr)\Biggl(\sum_{i=2^{k}+1}^{2^{k+1}}\Bigl(\int_{|x|\le a_{k}}f^{2}(x)\,d\Phi(x)+\eta_{i}\,\frac{k^{2}}{(\log k)^{2\beta}}\Bigr)\Biggr)\\
&\ll\frac{1}{2^{k}}\Biggl(2^{k}\int_{|x|\le a_{k}}f^{2}(x)\,d\Phi(x)+\frac{k^{2}}{(\log k)^{2\beta}}\sum_{i=2^{k}+1}^{2^{k+1}}\eta_{i}\Biggr)\\
&\ll\int_{|x|\le a_{k}}\frac{e^{x^{2}/2}}{(1+|x|)^{2\alpha}}\,dx+\frac{k^{2}}{(\log k)^{2\beta}}\sum_{i=2^{k}+1}^{2^{k+1}}\frac{\eta_{i}}{i}.
\end{aligned}$$
(13)

Note that

$$\int_{0}^{t}\frac{e^{x^{2}/2}}{(1+|x|)^{2\alpha}}\,dx=\int_{0}^{t/2}+\int_{t/2}^{t}\ll t\,e^{t^{2}/8}+\frac{1}{t^{2\alpha+1}}\int_{t/2}^{t}x\,e^{x^{2}/2}\,dx\ll\frac{e^{t^{2}/2}}{t^{2\alpha+1}},$$

and thus by (10) and (11), we have

$$\int_{|x|\le a_{k}}\frac{e^{x^{2}/2}}{(1+|x|)^{2\alpha}}\,dx\ll\frac{e^{a_{k}^{2}/2}}{a_{k}^{2\alpha+1}}\ll f(a_{k})\,\frac{1}{a_{k}^{\alpha+1}}\ll\frac{k}{(\log k)^{\beta+(\alpha+1)/2}}.$$
(14)
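
The last two steps of (14) can be traced as follows (a detail added for the reader): by the definition of $a_{k}$ we have $f(a_{k})=k(\log k)^{-\beta}$, so $e^{a_{k}^{2}/2}=\frac{1}{\sqrt{2\pi}}f(a_{k})(1+a_{k})^{\alpha}\asymp f(a_{k})\,a_{k}^{\alpha}$, and taking logarithms in $f(a_{k})=k(\log k)^{-\beta}$ shows that $a_{k}\asymp(\log k)^{1/2}$; consequently

$$\frac{e^{a_{k}^{2}/2}}{a_{k}^{2\alpha+1}}\asymp\frac{f(a_{k})}{a_{k}^{\alpha+1}}\asymp\frac{k(\log k)^{-\beta}}{(\log k)^{(\alpha+1)/2}}=\frac{k}{(\log k)^{\beta+(\alpha+1)/2}}.$$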

Now we estimate $\eta_{i}$. By Lemma 1, we have, for any $\varepsilon>0$,

$$\begin{aligned}
\eta_{i}&=\sup_{x}\Bigl|G_{i}(x)-\Phi\Bigl(\frac{x}{\sigma_{i}}\Bigr)\Bigr|\le\sup_{x}\bigl|G_{i}(x)-F_{i}(x)\bigr|+\sup_{x}\Bigl|F_{i}(x)-\Phi\Bigl(\frac{x}{\sigma_{i}}\Bigr)\Bigr|\\
&=\sup_{x}\Bigl|P(U_{i}\le x)-P\Bigl(\frac{\tilde S_{i}}{\sqrt{i}}\le x\Bigr)\Bigr|+\varepsilon_{i}=\sup_{x}\Bigl|P\Bigl(\frac{\tilde S_{i}}{\sqrt{i}}+R_{i}\le x\Bigr)-P\Bigl(\frac{\tilde S_{i}}{\sqrt{i}}\le x\Bigr)\Bigr|+\varepsilon_{i}\\
&\le P\bigl(|R_{i}|\ge\varepsilon\bigr)+\sup_{x}\Bigl\{P\Bigl(\frac{\tilde S_{i}}{\sqrt{i}}\le x+\varepsilon\Bigr)-P\Bigl(\frac{\tilde S_{i}}{\sqrt{i}}\le x\Bigr)\Bigr\}+\varepsilon_{i}.
\end{aligned}$$

The Markov inequality and (9) imply that

$$P\bigl(|R_{i}|\ge\varepsilon\bigr)\le\frac{E|R_{i}|}{\varepsilon}\ll\frac{1}{i^{1/2}\varepsilon}.$$

In addition, Lemma 2 yields

$$\sup_{x}\Bigl\{P\Bigl(\frac{\tilde S_{i}}{\sqrt{i}}\le x+\varepsilon\Bigr)-P\Bigl(\frac{\tilde S_{i}}{\sqrt{i}}\le x\Bigr)\Bigr\}\ll\varepsilon.$$

Setting $\varepsilon=i^{-1/3}$, we have

$$\eta_{i}\ll\frac{1}{i^{1/6}}+\frac{1}{i^{1/3}}+\varepsilon_{i}.$$

Using Theorem 1 of Friedman et al. [15], we get

$$\sum_{i=1}^{\infty}\frac{\varepsilon_{i}}{i}<\infty.$$

Hence,

$$\sum_{i=1}^{\infty}\frac{\eta_{i}}{i}\ll\sum_{i=1}^{\infty}\frac{i^{-1/6}+\varepsilon_{i}}{i}<\infty,$$
(15)

which, coupled with (13), (14) and the fact that $\frac{1}{2}(\alpha+1)>\beta$, yields

$$\sum_{k\le N}E(\xi_{k}')^{2}\ll\sum_{k\le N}\frac{k}{(\log k)^{\beta+(\alpha+1)/2}}+\sum_{k\le N}\frac{k^{2}}{(\log k)^{2\beta}}\sum_{i=2^{k}+1}^{2^{k+1}}\frac{\eta_{i}}{i}\ll\frac{N^{2}}{(\log N)^{2\beta}},$$

which completes the proof. □

Lemma 6 Let $\xi_{k}'=\sum_{i=2^{k}+1}^{2^{k+1}}\frac{1}{i}f(U_{i})I\{f(U_{i})\le k(\log k)^{-\beta}\}$ and $\xi_{l}'=\sum_{j=2^{l}+1}^{2^{l+1}}\frac{1}{j}f(U_{j})I\{f(U_{j})\le l(\log l)^{-\beta}\}$. Under the conditions of Theorem 1, we have, for $l\ge l_{0}$,

$$\bigl|\operatorname{cov}(\xi_{k}',\xi_{l}')\bigr|\ll\frac{kl}{(\log k)^{\beta}(\log l)^{\beta}}\,2^{-(l-k-1)/4}.$$

Proof We first show that, for any $1\le i\le\frac{j}{2}$ and all real $x$, $y$,

$$\bigl|P(U_{i}\le x,U_{j}\le y)-P(U_{i}\le x)P(U_{j}\le y)\bigr|\ll\Bigl(\frac{i}{j}\Bigr)^{1/4}.$$
(16)

Letting $\rho=\frac{i}{j}$, the Chebyshev inequality yields

$$P\Bigl(\Bigl|\frac{\tilde S_{i}}{\sqrt{j}}\Bigr|\ge\rho^{1/4}\Bigr)\le\frac{1}{j\rho^{1/2}}E|\tilde S_{i}|^{2}=\rho^{1/2}.$$
(17)

Using the Markov inequality and (9), we have

$$P\bigl(|R_{j}|\ge\rho^{1/4}\bigr)\le\frac{E|R_{j}|}{\rho^{1/4}}\ll\frac{1}{j^{1/2}\rho^{1/4}}=\frac{1}{j^{1/4}i^{1/4}}\le\rho^{1/4}.$$
(18)

It follows from Lemma 1, Lemma 2, (17), (18) and the positivity and independence of $(X_{n})$ that

$$\begin{aligned}
P(U_{i}\le x,U_{j}\le y)&=P\Bigl(U_{i}\le x,\frac{\tilde S_{j}}{\sqrt{j}}+R_{j}\le y\Bigr)=P\Bigl(U_{i}\le x,\frac{\tilde S_{i}}{\sqrt{j}}+\sqrt{1-\rho}\,\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j-i}}+R_{j}\le y\Bigr)\\
&\ge P\Bigl(U_{i}\le x,\sqrt{1-\rho}\,\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j-i}}\le y\Bigr)-P\Bigl(y-2\rho^{1/4}\le\sqrt{1-\rho}\,\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j-i}}\le y\Bigr)\\
&\quad-P\Bigl(\Bigl|\frac{\tilde S_{i}}{\sqrt{j}}\Bigr|\ge\rho^{1/4}\Bigr)-P\bigl(|R_{j}|\ge\rho^{1/4}\bigr)\\
&\ge P\Bigl(U_{i}\le x,\sqrt{1-\rho}\,\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j-i}}\le y\Bigr)-\bigl(4A+O(1)+1\bigr)\rho^{1/4}\\
&=P(U_{i}\le x)\,P\Bigl(\sqrt{1-\rho}\,\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j-i}}\le y\Bigr)-\bigl(4A+O(1)+1\bigr)\rho^{1/4}.
\end{aligned}$$
(19)
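
The decomposition used in the second equality of (19) is simply the identity (an added remark): with $\rho=i/j$,

$$\frac{\tilde S_{j}}{\sqrt{j}}=\frac{\tilde S_{i}}{\sqrt{j}}+\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j}}=\frac{\tilde S_{i}}{\sqrt{j}}+\sqrt{1-\rho}\,\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j-i}},$$

where $(\tilde S_{j}-\tilde S_{i})/\sqrt{j-i}$ depends only on $X_{i+1},\ldots,X_{j}$ and is therefore independent of $(U_{i},\tilde S_{i})$; this independence gives the factorization in the last line of (19), while $\tilde S_{i}/\sqrt{j}$ and $R_{j}$ are the small terms controlled by (17) and (18).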

We can obtain an analogous upper estimate for the first probability in (19) in the same way. Thus

$$P(U_{i}\le x,U_{j}\le y)=P(U_{i}\le x)\,P\Bigl(\sqrt{1-\rho}\,\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j-i}}\le y\Bigr)-\theta\bigl(4A+O(1)+1\bigr)\rho^{1/4},$$

where $|\theta|\le1$. A similar argument yields

$$P(U_{i}\le x)P(U_{j}\le y)=P(U_{i}\le x)\,P\Bigl(\sqrt{1-\rho}\,\frac{\tilde S_{j}-\tilde S_{i}}{\sqrt{j-i}}\le y\Bigr)-\theta'\bigl(4A+O(1)+1\bigr)\rho^{1/4},$$

where $|\theta'|\le1$, and (16) follows. Letting $G_{i,j}(x,y)$ denote the joint distribution function of $U_{i}$ and $U_{j}$, in view of (12) and (16), we get, for $l\ge l_{0}$,

$$\bigl|\operatorname{cov}\bigl(f(U_{i})I\{f(U_{i})\le k(\log k)^{-\beta}\},\,f(U_{j})I\{f(U_{j})\le l(\log l)^{-\beta}\}\bigr)\bigr|=\Biggl|\int_{|x|\le a_{k}}\int_{|y|\le a_{l}}f(x)f(y)\,d\bigl(G_{i,j}(x,y)-G_{i}(x)G_{j}(y)\bigr)\Biggr|\ll\frac{kl}{(\log k)^{\beta}(\log l)^{\beta}}\,2^{-(l-k-1)/4},$$

where the last relation follows from the facts that $f$ is strictly increasing for $x\ge x_{0}$, $f(a_{k})=k(\log k)^{-\beta}$, $f(a_{l})=l(\log l)^{-\beta}$, and $2^{k}<i\le2^{k+1}$, $2^{l}<j\le2^{l+1}$. Thus

$$\bigl|\operatorname{cov}(\xi_{k}',\xi_{l}')\bigr|\ll\frac{kl}{(\log k)^{\beta}(\log l)^{\beta}}\,2^{-(l-k-1)/4}.$$

 □

Lemma 7 Under the conditions of Theorem 1, letting $\zeta_{k}=\xi_{k}'-E\xi_{k}'$, we have

$$E(\zeta_{1}+\cdots+\zeta_{N})^{2}=O\bigl(N^{2}(\log N)^{1-2\beta}\bigr),\quad N\to\infty.$$

Proof By Lemma 6, we have

$$\Biggl|\sum_{\substack{1\le k\le l\le N\\ l-k>40\log N}}E(\zeta_{k}\zeta_{l})\Biggr|\ll\frac{N^{2}}{(\log N)^{2\beta}}\,N^{2}\,2^{-10\log N}=o(1).$$

On the other hand, letting $\|\cdot\|$ denote the $L_{2}$ norm, Lemma 5 and the Cauchy-Schwarz inequality imply

$$\begin{aligned}
\Biggl|\sum_{\substack{1\le k\le l\le N\\ l-k\le40\log N}}E(\zeta_{k}\zeta_{l})\Biggr|&\le\sum_{\substack{1\le k\le l\le N\\ l-k\le40\log N}}\|\zeta_{k}\|\,\|\zeta_{l}\|\le\sum_{\substack{1\le k\le l\le N\\ l-k\le40\log N}}\|\xi_{k}'\|\,\|\xi_{l}'\|=\sum_{0\le j\le40\log N}\sum_{k=1}^{N-j}\|\xi_{k}'\|\,\|\xi_{k+j}'\|\\
&\le\Biggl(\sum_{k=1}^{N}\|\xi_{k}'\|^{2}\Biggr)^{1/2}\Biggl(\sum_{l=1}^{N}\|\xi_{l}'\|^{2}\Biggr)^{1/2}\cdot40\log N=O\bigl(N^{2}(\log N)^{1-2\beta}\bigr),
\end{aligned}$$

and Lemma 7 is proved. □

4 Proof of the main result

We only prove (6) since, in view of Remark 1, this suffices for the proof of Theorem 1.

Proof of Theorem 1 By Lemma 7 we have

$$E\Bigl(\frac{\zeta_{1}+\cdots+\zeta_{N}}{N}\Bigr)^{2}=O\bigl((\log N)^{1-2\beta}\bigr),$$

and thus, setting $N_{k}=[\exp(k^{\lambda})]$ with $(2\beta-1)^{-1}<\lambda<1$ (such a $\lambda$ exists since $\beta>1$), we get, in view of $(\log N_{k})^{1-2\beta}\asymp k^{-\lambda(2\beta-1)}$ and $\lambda(2\beta-1)>1$,

$$\sum_{k=1}^{\infty}E\Bigl(\frac{\zeta_{1}+\cdots+\zeta_{N_{k}}}{N_{k}}\Bigr)^{2}<\infty,$$

and therefore

$$\lim_{k\to\infty}\frac{\zeta_{1}+\cdots+\zeta_{N_{k}}}{N_{k}}=0\quad\text{a.s.}$$
(20)

Observe now that, for $2^{k}<i\le2^{k+1}$, we have

$$E f(U_{i})\,I\bigl\{f(U_{i})\le k(\log k)^{-\beta}\bigr\}=\int_{|x|\le a_{k}}f(x)\,dG_{i}(x)=\int_{|x|\le a_{k}}f(x)\,d\Phi\Bigl(\frac{x}{\sigma_{i}}\Bigr)+\int_{|x|\le a_{k}}f(x)\,d\Bigl(G_{i}(x)-\Phi\Bigl(\frac{x}{\sigma_{i}}\Bigr)\Bigr).$$

Put $m=\int_{-\infty}^{\infty}f(x)\,d\Phi(x)$. Since $\sigma_{i}\le1$, $\lim_{i\to\infty}\sigma_{i}=1$ and $a_{k}\to\infty$ as $k\to\infty$, we have

$$\lim_{k\to\infty}\sup_{2^{k}<i\le2^{k+1}}\Biggl|\int_{|x|\le a_{k}}f(x)\,d\Phi\Bigl(\frac{x}{\sigma_{i}}\Bigr)-m\Biggr|=0,$$

and thus, using (12), we get

$$\Bigl|E f(U_{i})\,I\bigl\{f(U_{i})\le k(\log k)^{-\beta}\bigr\}-m\Bigr|\ll\frac{k\,\eta_{i}}{(\log k)^{\beta}}+o_{k}(1).$$

Thus we have

$$E\xi_{k}'=m\sum_{i=2^{k}+1}^{2^{k+1}}\frac{1}{i}+\vartheta_{k}\,\frac{k}{(\log k)^{\beta}}\sum_{i=2^{k}+1}^{2^{k+1}}\frac{\eta_{i}}{i}+o_{k}(1),\qquad|\vartheta_{k}|\ll1.$$

Consequently, using the relation $\sum_{i\le L}\frac{1}{i}=\log L+O(1)$ and (15), we conclude

$$\Biggl|\frac{E(\xi_{1}'+\cdots+\xi_{N}')}{\log2^{N+1}}-m\Biggr|\ll\frac{1}{N}\sum_{k\le N}\frac{k}{(\log k)^{\beta}}\sum_{i=2^{k}+1}^{2^{k+1}}\frac{\eta_{i}}{i}+o_{N}(1)=O\bigl((\log N)^{-\beta}\bigr)+o_{N}(1)=o_{N}(1),$$

and thus (20) gives

$$\lim_{k\to\infty}\frac{\xi_{1}'+\cdots+\xi_{N_{k}}'}{\log2^{N_{k}+1}}=m\quad\text{a.s.}$$

By Lemma 4 this implies

$$\lim_{k\to\infty}\frac{\xi_{1}+\cdots+\xi_{N_{k}}}{\log2^{N_{k}+1}}=m\quad\text{a.s.}$$
(21)

The relation $\lambda<1$ implies $\lim_{k\to\infty}N_{k+1}/N_{k}=1$, and thus (21) and the positivity of $\xi_{k}$ yield

$$\lim_{N\to\infty}\frac{\xi_{1}+\cdots+\xi_{N}}{\log2^{N+1}}=m\quad\text{a.s.},$$
(22)

i.e., (6) holds along the subsequence $\{2^{N+1}\}$. Now, for each $N\ge4$, there exists $n$, depending on $N$, such that $2^{n+1}\le N\le2^{n+2}$. Then

$$\frac{\xi_{1}+\xi_{2}+\cdots+\xi_{n}}{\log2^{n+1}}\le\frac{\sum_{i=1}^{N}\frac{1}{i}f(U_{i})}{\log N}\cdot\frac{\log N}{\log2^{n+1}}\le\frac{\xi_{1}+\xi_{2}+\cdots+\xi_{n+2}}{\log2^{n+2}}\cdot\frac{\log2^{n+2}}{\log2^{n+1}}$$
(23)

by the positivity of each term of $(\xi_{k})$. Noting that $(n+1)\log2\le\log N\le(n+2)\log2$, we get (6) from (22) and (23) by letting $N\to\infty$. □

References

  1. Schatte P: On strong versions of the central limit theorem. Math. Nachr. 1988, 137: 249–256. 10.1002/mana.19881370117

  2. Brosamler GA: An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 1988, 104: 561–574. 10.1017/S0305004100065750

  3. Berkes I, Csáki E, Horváth L: Almost sure limit theorems under minimal conditions. Stat. Probab. Lett. 1998, 37: 67–76. 10.1016/S0167-7152(97)00101-6

  4. Ibragimov I, Lifshits M: On the convergence of generalized moments in almost sure central limit theorem. Stat. Probab. Lett. 1998, 40: 343–351. 10.1016/S0167-7152(98)00134-5

  5. Rempala G, Wesolowski J: Asymptotics for products of sums and U-statistics. Electron. Commun. Probab. 2002, 7: 47–54.

  6. Qi Y: Limit distributions for products of sums. Stat. Probab. Lett. 2003, 62: 93–100. 10.1016/S0167-7152(02)00438-8

  7. Lu X, Qi Y: A note on asymptotic distribution of products of sums. Stat. Probab. Lett. 2004, 68: 407–413. 10.1016/j.spl.2004.04.009

  8. Rempala G, Wesolowski J: Asymptotics for products of independent sums with an application to Wishart determinants. Stat. Probab. Lett. 2005, 74: 129–138. 10.1016/j.spl.2005.04.034

  9. Gonchigdanzan K, Rempala G: A note on the almost sure limit theorem for the product of partial sums. Appl. Math. Lett. 2006, 19: 191–196. 10.1016/j.aml.2005.06.002

  10. Ye D, Wu Q: Almost sure central limit theorem of product of partial sums for strongly mixing. J. Inequal. Appl. 2011, 2011: Article ID 576301

  11. Miao Y: Central limit theorem and almost sure central limit theorem for the product of some partial sums. Proc. Indian Acad. Sci. Math. Sci. 2008, 118: 289–294. 10.1007/s12044-008-0021-9

  12. Lu C, Qiu J, Xu J: Almost sure central limit theorems for random functions. Sci. China Ser. A 2006, 49: 1788–1799. 10.1007/s11425-006-2021-5

  13. Petrov V: Sums of Independent Random Variables. Springer, New York; 1975.

  14. Feller W: The law of the iterated logarithm for identically distributed random variables. Ann. Math. 1946, 47: 631–638. 10.2307/1969225

  15. Friedman N, Katz M, Koopmans LH: Convergence rates for the central limit theorem. Proc. Natl. Acad. Sci. USA 1966, 56: 1062–1065. 10.1073/pnas.56.4.1062


Acknowledgements

The authors wish to thank the editor and the referees for their very valuable comments by which the quality of the paper has been improved. The authors would also like to thank Professor Zuoxiang Peng for several discussions and suggestions. Research supported by the National Science Foundation of China (No. 11326175), the Natural Science Foundation of Zhejiang Province of China (No. LQ14A010012) and the Research Start-up Foundation of Jiaxing University (No. 70512021).

Author information


Correspondence to Zhongquan Tan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Chen, Y., Tan, Z. & Wang, K. A note on the almost sure central limit theorem for the product of some partial sums. J Inequal Appl 2014, 243 (2014). https://doi.org/10.1186/1029-242X-2014-243
