Open Access

Note on the strong convergence of a weighted sum

Journal of Inequalities and Applications 2014, 2014:179

https://doi.org/10.1186/1029-242X-2014-179

Received: 11 January 2014

Accepted: 21 April 2014

Published: 12 May 2014

Abstract

In this paper, we consider an interesting family of weighted sums
$$\mu_n := \frac{1}{(\log n)^{1-p}}\sum_{j=2}^{n}\frac{X_{n+2-j}}{j(\log j)^{p}}, \qquad 0 < p < 1,$$
where $\{X, X_n, n \ge 1\}$ is a sequence of independent and identically distributed random variables with $EX = 0$, and the equivalence of the almost sure and complete convergence of $\mu_n$ is proved.

Keywords

strong convergence; complete convergence; weighted sums

1 Introduction

Hsu and Robbins [1] introduced the concept of complete convergence as follows: a sequence $\{Y_n, n \ge 1\}$ converges completely to the constant $c$ if
$$\sum_{n=1}^{\infty} P(|Y_n - c| > \varepsilon) < \infty \quad \text{for all } \varepsilon > 0.$$

They also showed that the sequence of arithmetic means of i.i.d. random variables converges completely to the expected value if the variance of the summands is finite. The converse was proved by Erdös [2, 3]. Furthermore, if $Y_n \to c$ completely, then the Borel-Cantelli lemma trivially implies $Y_n \xrightarrow{\text{a.s.}} c$ as $n \to \infty$. The converse statement is generally, or even ‘typically’, not true.
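For the reader's convenience, this implication can be spelled out (a standard argument, recorded here only as a reading aid): by the first Borel-Cantelli lemma, for every $\varepsilon > 0$,
$$\sum_{n=1}^{\infty} P(|Y_n - c| > \varepsilon) < \infty \quad \Longrightarrow \quad P\bigl(|Y_n - c| > \varepsilon \text{ for infinitely many } n\bigr) = 0,$$
and letting $\varepsilon$ run through a sequence decreasing to $0$ gives $Y_n \to c$ almost surely.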

There are numerous publications in the literature studying the almost sure convergence and the complete convergence of weighted sums of sequences of random variables. Gut [4] provided necessary and sufficient conditions for the complete convergence of the Cesàro means of i.i.d. random variables. Li et al. [5] obtained some results on complete convergence for weighted sums of independent random variables. Cuzick [6] proved a strong law for weighted sums of i.i.d. random variables. Miao and Xu [7] established a general result for the weighted sums of a stationary sequence.

Throughout this paper, assume that $\{X, X_n, n \ge 1\}$ is a sequence of independent and identically distributed random variables with $EX = 0$. Chow and Lai [8] established the following result.

Theorem 1.1 ([8], Theorem 3)

Let $X, X_1, X_2, \ldots$ be i.i.d. random variables such that $EX_1 = 0$. For $\alpha > 1$, the following statements are equivalent:

  1. $E\exp(t|X|^{\alpha}) < \infty$ for all $t > 0$;

  2. $\lim_{n\to\infty}(\log n)^{-1/\alpha} X_n = 0$ a.e.;

  3. $\lim_{n\to\infty}(\log n)^{-1/\alpha}\sum_{i=1}^{n} c_{n-i} X_i = 0$ a.e. for some (or equivalently for every) nonvoid sequence of real numbers $(c_n, n \ge 0)$ such that $c_n = O(n^{-\beta})$, where $\beta = \alpha/(\alpha+1)$.

As an interesting particular case of Theorem 1.1, we consider the following weighted sums:
$$\mu_n := \frac{1}{(\log n)^{1-p}}\sum_{j=2}^{n}\frac{X_{n+2-j}}{j(\log j)^{p}}, \qquad 0 < p < 1.$$
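As a purely illustrative aside (not part of the argument), the behaviour of $\mu_n$ can be examined numerically. The sketch below simulates $\mu_n$ for i.i.d. Rademacher variables $X_j = \pm 1$ (a bounded, mean-zero choice for which the moment condition of Theorem 1.1 holds for every $\alpha$); the value $p = 0.5$ and the sample sizes are illustrative assumptions.

```python
# Illustrative Monte Carlo sketch of mu_n (assumed choices: Rademacher X, p = 0.5).
import numpy as np

rng = np.random.default_rng(0)
p = 0.5

def mu(n, x):
    """mu_n = (log n)^(-(1-p)) * sum_{j=2}^{n} X_{n+2-j} / (j (log j)^p)."""
    j = np.arange(2, n + 1)
    weights = 1.0 / (j * np.log(j) ** p)
    # X_{n+2-j} corresponds to x[n + 1 - j] when x[k - 1] plays the role of X_k.
    return (weights * x[n + 1 - j]).sum() / np.log(n) ** (1 - p)

x = rng.choice([-1.0, 1.0], size=10 ** 5)  # x[k - 1] plays the role of X_k
for n in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
    print(n, mu(n, x))
```

In such experiments $\mu_n$ is typically observed to be small and to shrink slowly as $n$ grows.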

The aim of this paper is to prove the following.

Theorem 1.2 We have $\mu_n \xrightarrow{\text{a.s.}} 0$ if and only if $\mu_n \to 0$ completely.

Remark 1.1 For the case $p = 1$, Wu [9] discussed the weighted sums
$$\nu_n := \frac{1}{\log\log n}\sum_{j=2}^{n}\frac{X_{n+2-j}}{j\log j}$$

and proved the equivalence of the almost sure and complete convergence of the sequence $\nu_n$. On the one hand, because of the restriction on $\alpha$ in Theorem 1.1, we only discuss the case $p > 0$ here. On the other hand, in order to prove the equivalence of the almost sure and complete convergence of $\mu_n$ in the case $p = 0$, we would need the exponential integrability of $X$, which Theorem 1.1 does not provide.
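To make the connection with Theorem 1.1 explicit (this identification is implicit in the text and is recorded here only as a reading aid), substituting $i = n+2-j$ shows that $\mu_n$ is of the form treated there with
$$\alpha = \frac{1}{1-p} > 1, \qquad \frac{1}{(\log n)^{1-p}} = \frac{1}{(\log n)^{1/\alpha}}, \qquad c_m = \frac{1}{(m+2)(\log(m+2))^{p}} = O\bigl(m^{-\beta}\bigr), \quad \beta = \frac{\alpha}{\alpha+1}.$$
This is why $\alpha > 1$ forces $p > 0$, and why the almost sure convergence $\mu_n \xrightarrow{\text{a.s.}} 0$ yields the exponential moment $Ee^{|X|^{\frac{1}{1-p}}} < \infty$ used in the proofs below.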

Let $S_j = \sum_{k=2}^{j}\frac{X_k}{k(\log k)^{p}}$; then we have the following.

Corollary 1.1 If $\mu_n \xrightarrow{\text{a.s.}} 0$, then
$$\frac{1}{(\log n)^{1-p}}\max_{2\le j\le n} S_j \to 0 \quad \text{completely}.$$

2 Proofs of main results

Let $[x]$ denote the integer part of $x$ and assume that $n > n_0 = [e^{20}]$. The constant $C$ in the proofs below depends only on the distribution of the underlying random variable $X$ and may denote different quantities at different appearances.

2.1 Proof of Theorem 1.2

Theorem 1.2 follows by combining Lemma 2.2 and Lemma 2.3 below.

Lemma 2.1 The following estimate holds:
$$\sum_{j=[\frac{1}{2}\log n]}^{n}\log\left(1 + C\left(\frac{\log n}{j(\log j)^{p}}\right)^{2}\right) < C\log n.$$
Proof Since $g(j) = \log\bigl(1 + C\bigl(\frac{\log n}{j(\log j)^{p}}\bigr)^{2}\bigr)$ decreases in $j$, comparing the sum with the integral and substituting $x = t\log n$,
$$\begin{aligned}
\sum_{j=[\frac{1}{2}\log n]}^{n}\log\left(1 + C\left(\frac{\log n}{j(\log j)^{p}}\right)^{2}\right)
&\le \int_{\frac{1}{2}\log n}^{n}\log\left(1 + C\left(\frac{\log n}{x(\log x)^{p}}\right)^{2}\right)\mathrm{d}x + O(1)\\
&\le \log n\int_{\frac{1}{2}}^{\frac{n}{\log n}}\log\left(1 + C\left(\frac{1}{t(\log t + \log\log n)^{p}}\right)^{2}\right)\mathrm{d}t + O(1)\\
&\le \log n\int_{\frac{1}{2}}^{\infty}\log\left(1 + C\left(\frac{1}{t(\log t + \log\log n)^{p}}\right)^{2}\right)\mathrm{d}t + O(1)\\
&\le C\log n,
\end{aligned}$$
which yields the desired result. □
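As a quick numerical sanity check (not needed for the argument), the sum in Lemma 2.1 can be evaluated directly and compared with $\log n$; the constant $C = 1$ inside the logarithm and the value $p = 0.5$ below are illustrative assumptions.

```python
# Numerical check of Lemma 2.1: the sum stays within a constant multiple of log n
# (illustrative assumptions: C = 1 inside the logarithm, p = 0.5).
import numpy as np

C = 1.0
p = 0.5
for n in (10 ** 4, 10 ** 5, 10 ** 6):
    j = np.arange(int(0.5 * np.log(n)), n + 1)
    lhs = np.log(1.0 + C * (np.log(n) / (j * np.log(j) ** p)) ** 2).sum()
    print(n, lhs, lhs / np.log(n))
```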

Lemma 2.2 If $\mu_n \xrightarrow{\text{a.s.}} 0$, then
$$\mu_n^{(1)} := \frac{1}{(\log n)^{1-p}}\sum_{j=[\frac{1}{2}\log n]}^{n}\frac{X_{n+2-j}}{j(\log j)^{p}} \to 0$$
completely and
$$\frac{1}{(\log n)^{1-p}}\sum_{j=[\frac{1}{2}\log n]+1}^{[\frac{1}{2}\log[n\log n]]}\frac{X_{n+2-j}}{j(\log j)^{p}} \to 0$$

completely.

Proof From Theorem 1.1, we know $Ee^{|X|^{\frac{1}{1-p}}} < \infty$, which implies $Ee^{|X|} < \infty$. Hence, by the elementary inequality
$$|e^{\eta x} - 1 - \eta x| \le \eta^{2} e^{|x|} \quad \text{for all } \eta \in [0, 1],$$
we have
$$Ee^{\eta X} \le 1 + \eta^{2} Ee^{|X|} = 1 + \eta^{2} C.$$
Notice that if $n > n_0$ and $n \ge j \ge [\frac{1}{2}\log n]$, then $\frac{\log n}{j(\log j)^{p}} < 1$. Hence, by Lemma 2.1, for any $\varepsilon > 0$,
$$\begin{aligned}
\sum_{n=n_0}^{\infty} P\bigl(\mu_n^{(1)} > \varepsilon\bigr)
&\le \sum_{n=n_0}^{\infty} n^{-\varepsilon(\log n)^{1-p}}\, E\exp\left(\log n\sum_{j=[\frac{1}{2}\log n]}^{n}\frac{X_{n+2-j}}{j(\log j)^{p}}\right)\\
&= \sum_{n=n_0}^{\infty} n^{-\varepsilon(\log n)^{1-p}} \prod_{j=[\frac{1}{2}\log n]}^{n} E\exp\left(\frac{\log n}{j(\log j)^{p}}\,X_{n+2-j}\right)\\
&\le \sum_{n=n_0}^{\infty} n^{-\varepsilon(\log n)^{1-p}} \prod_{j=[\frac{1}{2}\log n]}^{n}\left(1 + C\left(\frac{\log n}{j(\log j)^{p}}\right)^{2}\right)\\
&\le \sum_{n=n_0}^{\infty} n^{-\varepsilon(\log n)^{1-p}} e^{C\log n} < \infty.
\end{aligned}$$

Similarly, we can obtain $\sum_{n=n_0}^{\infty} P(\mu_n^{(1)} < -\varepsilon) < \infty$; combining the two bounds gives $\sum_{n=n_0}^{\infty} P(|\mu_n^{(1)}| > \varepsilon) < \infty$, which is the first statement. The same technique yields the second statement. □
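For readability, the first inequality in the chain displayed in the proof above is the exponential Markov bound (recorded here as a reading aid):
$$P\bigl(\mu_n^{(1)} > \varepsilon\bigr) = P\left(\log n\sum_{j=[\frac{1}{2}\log n]}^{n}\frac{X_{n+2-j}}{j(\log j)^{p}} > \varepsilon(\log n)^{2-p}\right) \le e^{-\varepsilon(\log n)^{2-p}}\, E\exp\left(\log n\sum_{j=[\frac{1}{2}\log n]}^{n}\frac{X_{n+2-j}}{j(\log j)^{p}}\right),$$
with $e^{-\varepsilon(\log n)^{2-p}} = n^{-\varepsilon(\log n)^{1-p}}$; the last inequality in the chain follows from $\prod_{j}(1 + a_j) = \exp(\sum_{j}\log(1 + a_j))$ together with Lemma 2.1.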

Lemma 2.3 If
$$\mu_n^{(2)} := \frac{1}{(\log n)^{1-p}}\sum_{j=2}^{[\frac{1}{2}\log n]}\frac{X_{n+2-j}}{j(\log j)^{p}} \xrightarrow{\text{a.s.}} 0,$$
then $\mu_n^{(2)} \to 0$ completely.

Proof For any $\varepsilon > 0$ we have
$$0 = P\Bigl(\limsup_{n\to\infty}|\mu_n^{(2)}| > \varepsilon\Bigr) \ge P\Bigl(\limsup_{m\to\infty}|\mu_{n(m)}^{(2)}| > \varepsilon\Bigr),$$
where $n(m) = [m\log m]$, $m \in \mathbb{N}$. Since
$$n(m+1) + 2 - \tfrac{1}{2}\log n(m+1) > n(m)$$
for $m > 3$, the random variables $\mu_{n(m)}^{(2)}$, $m = 4, 5, \ldots$, are independent. Notice also that $(\log n(m))^{1-p} < (2\log m)^{1-p}$. So by the Borel-Cantelli lemma and the second statement of Lemma 2.2, we have $\sum_{m=n_0}^{\infty} P(|\mu_{n(m)}^{(2)}| > \varepsilon) < \infty$, proving the lemma. □
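The Borel-Cantelli step used here is the second (independence) part of the lemma; as a reading aid: if one had $\sum_{m} P(|\mu_{n(m)}^{(2)}| > \varepsilon) = \infty$, then, by the independence of the $\mu_{n(m)}^{(2)}$ and the second Borel-Cantelli lemma,
$$P\bigl(|\mu_{n(m)}^{(2)}| > \varepsilon \text{ for infinitely many } m\bigr) = 1,$$
which is incompatible with $\mu_n^{(2)} \xrightarrow{\text{a.s.}} 0$; hence the series must converge.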

2.2 Proof of Corollary 1.1

We easily see that
$$ES_n^{2} = EX^{2}\sum_{j=2}^{n}\frac{1}{(j(\log j)^{p})^{2}} < C < \infty.$$
Now we apply the inequality
$$P\Bigl(\max_{2\le j\le n} S_j > \eta\Bigr) \le 2P\Bigl(S_n > \eta - \sqrt{2ES_n^{2}}\Bigr)$$
(cf. Chow and Teicher [10], p.111) with $\eta = \varepsilon(\log n)^{1-p}$ for $\varepsilon > 0$. Since $\eta - \sqrt{2ES_n^{2}} > \frac{1}{2}\eta$ for all $n$ sufficiently large, and since, by the i.i.d. assumption, $S_n$ and $\sum_{j=2}^{n}\frac{X_{n+2-j}}{j(\log j)^{p}}$ have the same distribution,
$$P\Bigl(\frac{\max_{2\le j\le n} S_j}{(\log n)^{1-p}} > \varepsilon\Bigr) \le 2P\left(\frac{1}{(\log n)^{1-p}}\sum_{j=2}^{n}\frac{X_{n+2-j}}{j(\log j)^{p}} \ge \frac{\varepsilon}{2}\right).$$
Therefore,
$$\sum_{n=n_0}^{\infty} P\Bigl(\frac{\max_{2\le j\le n} S_j}{(\log n)^{1-p}} > \varepsilon\Bigr) \le 2\sum_{n=n_0}^{\infty} P\Bigl(\mu_n > \frac{\varepsilon}{2}\Bigr) < \infty,$$
where the last series converges by Theorem 1.2, since $\mu_n \xrightarrow{\text{a.s.}} 0$ by assumption.

Replacing $X_j$ by $-X_j$, the corollary follows.
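As a small numerical aside (not part of the proof), the uniform bound on $ES_n^{2}$ rests on the convergence of $\sum_{j\ge 2}(j(\log j)^{p})^{-2}$; the sketch below, with the illustrative choice $p = 0.5$, shows the partial sums stabilising.

```python
# Partial sums of sum_{j>=2} 1/(j (log j)^p)^2 stabilise quickly,
# which keeps E S_n^2 bounded uniformly in n (illustrative choice p = 0.5).
import numpy as np

p = 0.5
j = np.arange(2, 10 ** 6)
terms = 1.0 / (j * np.log(j) ** p) ** 2
partial = np.cumsum(terms)
for n in (10, 10 ** 2, 10 ** 4, 10 ** 5):
    print(n, partial[n - 2])  # partial sum over j = 2, ..., n
```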

Declarations

Acknowledgements

This work is supported by HASTIT (No. 2011HASTIT011), NSFC (No. 11001077), NCET (NCET-11-0945), and Plan For Scientific Innovation Talent of Henan Province (124100510014).

Authors’ Affiliations

(1) College of Mathematics and Information Science, Henan Normal University

References

  1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33: 25–31. doi:10.1073/pnas.33.2.25
  2. Erdös P: On a theorem of Hsu and Robbins. Ann. Math. Stat. 1949, 20: 286–291. doi:10.1214/aoms/1177730037
  3. Erdös P: Remark on my paper ‘On a theorem of Hsu and Robbins’. Ann. Math. Stat. 1950, 21: 138. doi:10.1214/aoms/1177729897
  4. Gut A: Complete convergence and Cesàro summation for i.i.d. random variables. Probab. Theory Relat. Fields 1993, 97: 169–178. doi:10.1007/BF01199318
  5. Li DL, Rao MB, Jiang TF, Wang XC: Complete convergence and almost sure convergence of weighted sums of random variables. J. Theor. Probab. 1995, 8: 49–76. doi:10.1007/BF02213454
  6. Cuzick J: A strong law for weighted sums of i.i.d. random variables. J. Theor. Probab. 1995, 8: 625–641. doi:10.1007/BF02218047
  7. Miao Y, Xu SF: Almost sure convergence of weighted sum. Miskolc Math. Notes 2013, 14: 173–181.
  8. Chow YS, Lai TL: Limiting behavior of weighted sums of independent random variables. Ann. Probab. 1973, 1: 810–824. doi:10.1214/aop/1176996847
  9. Wu WB: On the strong convergence of a weighted sum. Stat. Probab. Lett. 1999, 1: 19–22.
  10. Chow YS, Teicher H: Probability Theory. 3rd edition. Springer, New York; 1978.

Copyright

© Li et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.