
Complete convergence and complete moment convergence for a class of random variables

Abstract

In this paper, we establish complete convergence and complete moment convergence for a class of random variables satisfying a Rosenthal-type maximal inequality, and we show that these two modes of convergence are equivalent for this class. The Baum-Katz-type theorem and the Hsu-Robbins-type theorem are extended to this class of random variables.

MSC:60F15.

1 Introduction

The concept of complete convergence was introduced by Hsu and Robbins [1] as follows. A sequence of random variables $\{U_n, n\ge 1\}$ is said to converge completely to a constant $C$ if $\sum_{n=1}^\infty P(|U_n - C| > \varepsilon) < \infty$ for all $\varepsilon > 0$. In view of the Borel-Cantelli lemma, this implies that $U_n \to C$ almost surely (a.s.). The converse is true if the $\{U_n, n\ge 1\}$ are independent. Hsu and Robbins [1] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Erdős [2] proved the converse. The Hsu-Robbins-Erdős result is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. One of the most important generalizations, due to Baum and Katz [3], concerns the rate of convergence in the strong law of large numbers and reads as follows.
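As a purely illustrative sanity check (not part of the proofs), the following Python sketch estimates the tail probabilities $P(|U_n - C| > \varepsilon)$ for sample means of bounded i.i.d. summands with $C = 0$; by the Hsu-Robbins theorem their sum over $n$ is finite, and the Monte Carlo partial sums indeed flatten quickly. The uniform distribution, $\varepsilon$, and trial counts are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_prob_of_mean(n, eps, trials=2000):
    """Monte Carlo estimate of P(|U_n| > eps), where U_n is the mean of n
    i.i.d. Uniform(-1, 1) variables (so C = EX = 0 and the variance is finite)."""
    means = rng.uniform(-1.0, 1.0, size=(trials, n)).mean(axis=1)
    return float(np.mean(np.abs(means) > eps))

# Partial sums of P(|U_n| > eps) over n: for finite-variance i.i.d. summands
# the Hsu-Robbins theorem says the full series converges.  The tail
# probabilities decay exponentially in n, so the partial sums flatten fast.
eps = 0.5
probs = [tail_prob_of_mean(n, eps) for n in range(1, 60)]
partial = np.cumsum(probs)
print(partial[-1], probs[-1])
```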

Theorem A (Baum and Katz [3])

Let $\alpha > 1/2$ and $\alpha p > 1$. Let $\{X_n, n\ge 1\}$ be a sequence of i.i.d. random variables. Assume further that $EX_1 = 0$ if $\alpha \le 1$. Then the following statements are equivalent:

(i) $E|X_1|^p < \infty$;

(ii) $\sum_{n=1}^\infty n^{\alpha p - 2} P(\max_{1\le j\le n} |S_j| > \varepsilon n^\alpha) < \infty$ for all $\varepsilon > 0$.

Many authors have studied the Baum-Katz-type theorem for dependent random variables; see, for example, Peligrad [4] for a stationary ρ-mixing sequence, Peligrad and Gut [5] for a ρ*-mixing sequence, Stoica [6, 7] for a martingale difference sequence, Stoica [8] for bounded subsequences, Wang and Hu [9] for φ-mixing random variables, and so forth.

One of the most useful inequalities in probability theory is the Rosenthal-type maximal inequality. For a sequence $\{X_i, 1\le i\le n\}$ of i.i.d. random variables with $E|X_1|^q < \infty$ for some $q \ge 2$, there exists a positive constant $C_q$ depending only on $q$ such that

$$E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j (X_i - EX_i)\Big|\Big)^q \le C_q\bigg\{\sum_{i=1}^n E|X_i|^q + \Big(\sum_{i=1}^n EX_i^2\Big)^{q/2}\bigg\}.$$

The inequality above has been obtained for many classes of dependent random variables. See, for example, Shao [10] for negatively associated random variables, Utev and Peligrad [11] for ρ*-mixing random variables, Wang et al. [12] for φ-mixing random variables with mixing coefficients satisfying certain conditions, and so forth.
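For intuition, the Rosenthal-type maximal inequality can be probed numerically. The sketch below compares a Monte Carlo estimate of the left-hand side $E(\max_{1\le j\le n}|S_j|)^q$ with the right-hand bracket for i.i.d. standard normal summands (for which $E|X|^4 = 3$ and $EX^2 = 1$); the empirical ratio stays bounded, consistent with a finite constant $C_q$. The sample size, $q$, and trial count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

n, q, trials = 50, 4, 4000
# i.i.d. centered summands with E|X|^q < infinity.
X = rng.normal(0.0, 1.0, size=(trials, n))

# Left-hand side: E (max_{1<=j<=n} |S_j|)^q, estimated empirically.
S = np.cumsum(X, axis=1)
lhs = float(np.mean(np.max(np.abs(S), axis=1) ** q))

# Right-hand bracket: sum_i E|X_i|^q + (sum_i E X_i^2)^{q/2}.
# For N(0,1): E|X|^4 = 3 and E X^2 = 1.
rhs_bracket = n * 3.0 + (n * 1.0) ** (q / 2)

# The inequality asserts lhs <= C_q * rhs_bracket for a constant C_q
# depending only on q; the empirical ratio below stays modest.
ratio = lhs / rhs_bracket
print(lhs, rhs_bracket, ratio)
```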

The purpose of this work is to obtain complete convergence and complete moment convergence for a sequence of random variables satisfying a Rosenthal-type maximal inequality.

Throughout the paper, let $\{X_n, n\ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{A}, P)$. Let $I(A)$ be the indicator function of the set $A$. Denote $S_n = \sum_{i=1}^n X_i$, $S_0 = 0$, $\ln^+ x = \ln\max(x, e)$ and $x^+ = xI(x \ge 0)$. The symbol $C$ denotes a positive constant which may differ from one appearance to the next.
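The notation $\ln^+$ and $x^+$ can be transcribed directly; a minimal Python rendering of these two conventions:

```python
import math

def ln_plus(x):
    """ln+ x = ln(max(x, e)); always >= 1 and equal to ln x for x >= e."""
    return math.log(max(x, math.e))

def pos_part(x):
    """x^+ = x * I(x >= 0), the positive part of x."""
    return x if x >= 0 else 0.0

print(ln_plus(1.0), ln_plus(math.e ** 2), pos_part(-3.0), pos_part(2.5))
```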

The following definition will be used frequently in the paper.

Definition 1.1 A sequence of random variables $\{X_n, n\ge 1\}$ is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that

$$P\big(|X_n| > x\big) \le C\,P\big(|X| > x\big)$$

for all $x \ge 0$ and $n \ge 1$.
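A concrete instance of the definition, checked numerically: if $X_n \sim \mathrm{Uniform}(-1/n, 1/n)$ and $X \sim \mathrm{Uniform}(-1, 1)$, then $P(|X_n| > x) = \max(0, 1 - nx) \le \max(0, 1 - x) = P(|X| > x)$, so the sequence is stochastically dominated by $X$ with $C = 1$. The distributions here are illustrative choices, and the snippet merely verifies the hand computation on a grid:

```python
import numpy as np

# Exact tail functions for the two families above.
def tail_Xn(n, x):
    """P(|X_n| > x) for X_n ~ Uniform(-1/n, 1/n)."""
    return max(0.0, 1.0 - n * x)

def tail_X(x):
    """P(|X| > x) for X ~ Uniform(-1, 1)."""
    return max(0.0, 1.0 - x)

# Check the domination inequality with C = 1 on a grid of x >= 0.
xs = np.linspace(0.0, 2.0, 201)
ok = all(tail_Xn(n, x) <= tail_X(x) for n in range(1, 6) for x in xs)
print(ok)
```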

The aim of this paper is to establish complete convergence and complete moment convergence for the class of random variables satisfying a Rosenthal-type maximal inequality and to prove the equivalence of these two modes of convergence. The Baum-Katz-type theorem and the Hsu-Robbins-type theorem are extended to this class of random variables. As a consequence, the Marcinkiewicz-Zygmund strong law of large numbers is obtained for this class.

2 Main results

Theorem 2.1 Let $\alpha > 1/2$, $\alpha p \ge 1$ and $p > 1$. Suppose that $\{X_n, n\ge 1\}$ is a sequence of mean-zero random variables which is stochastically dominated by a random variable $X$ with $E|X|^p < \infty$. Assume that for any $q \ge 2$ there exists a positive constant $C_q$, depending only on $q$, such that

$$E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j (Y_{ti} - EY_{ti})\Big|^q\Big) \le C_q\bigg\{\sum_{i=1}^n E|Y_{ti}|^q + \Big(\sum_{i=1}^n EY_{ti}^2\Big)^{q/2}\bigg\},$$
(2.1)

where $Y_{ti} = -tI(X_i < -t) + X_i I(|X_i| \le t) + tI(X_i > t)$ for all $t > 0$. Then

$$\sum_{n=1}^\infty n^{\alpha p - 2} P\Big(\max_{1\le j\le n} |S_j| > \varepsilon n^\alpha\Big) < \infty \quad \text{for all } \varepsilon > 0$$
(2.2)

and

$$\sum_{n=1}^\infty n^{\alpha p - 2 - \alpha} E\Big(\max_{1\le j\le n} |S_j| - \varepsilon n^\alpha\Big)^+ < \infty \quad \text{for all } \varepsilon > 0.$$
(2.3)

Furthermore, (2.2) is equivalent to (2.3).
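The truncation $Y_{ti}$ appearing in (2.1) is simply $X_i$ clipped to the interval $[-t, t]$, which is why it is a monotone transformation of $X_i$. A one-line Python rendering (using `np.clip`, which realizes exactly this three-case definition):

```python
import numpy as np

def truncate(x, t):
    """The truncation used in (2.1):
    Y = -t*I(X < -t) + X*I(|X| <= t) + t*I(X > t),
    i.e. X clipped to [-t, t], a monotone map of X."""
    return np.clip(x, -t, t)

x = np.array([-5.0, -0.3, 0.0, 0.7, 9.0])
print(truncate(x, 1.0))  # values outside [-1, 1] are replaced by the endpoints
```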

Corollary 2.1 Let $1 < p < 2$. Suppose that $\{X_n, n\ge 1\}$ is a sequence of mean-zero random variables which is stochastically dominated by a random variable $X$ with $E|X|^p < \infty$. Assume further that (2.1) holds. Then

$$\frac{S_n}{n^{1/p}} \to 0 \quad \text{a.s.}$$
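A quick numerical sketch of the corollary's conclusion $S_n/n^{1/p} \to 0$ a.s., using mean-zero $\mathrm{Uniform}(-1,1)$ summands (which certainly satisfy $E|X|^p < \infty$) and the illustrative choice $p = 1.5$; one long sample path shows the normalized sums shrinking toward zero:

```python
import numpy as np

rng = np.random.default_rng(2)

# Marcinkiewicz-Zygmund-type normalization S_n / n^{1/p} with p = 1.5.
p = 1.5
X = rng.uniform(-1.0, 1.0, size=200_000)   # mean zero, all moments finite
S = np.cumsum(X)
n = np.arange(1, X.size + 1)
normalized = np.abs(S) / n ** (1.0 / p)
print(normalized[-1])   # small for large n, illustrating the a.s. limit 0
```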

For p=1, we have the following theorem.

Theorem 2.2 Let $\alpha > 0$. Suppose that $\{X_n, n\ge 1\}$ is a sequence of mean-zero random variables which is stochastically dominated by a random variable $X$ with $E|X|\ln^+|X| < \infty$. Assume further that (2.1) holds. Then for all $\varepsilon > 0$,

$$\sum_{n=1}^\infty n^{-2} E\Big(\max_{1\le j\le n}|S_j| - \varepsilon n^\alpha\Big)^+ < \infty.$$
(2.4)

Theorem 2.3 Let $\alpha > 1/2$, $p > 1$ and $\alpha p > 1$. Suppose that $\{X_n, n\ge 1\}$ is a sequence of mean-zero random variables which is stochastically dominated by a random variable $X$ with $E|X|^p < \infty$. Assume further that (2.1) holds. Then for all $\varepsilon > 0$,

$$\sum_{n=1}^\infty n^{\alpha p - 2} P\Big(\sup_{j\ge n}\Big|\frac{S_j}{j^\alpha}\Big| > \varepsilon\Big) < \infty.$$
(2.5)

Remark 2.1 In (2.1), $\{Y_{ti}, i\ge 1\}$ is a monotone transformation of $\{X_i, i\ge 1\}$. If $\{X_i, i\ge 1\}$ is a sequence of independent random variables, then (2.1) is clearly satisfied. There are many sequences of dependent random variables satisfying (2.1) for all $q \ge 2$. Examples include sequences of NA random variables (see Shao [10]), ρ*-mixing random variables (see Utev and Peligrad [11]), φ-mixing random variables with mixing coefficients satisfying certain conditions (see Wang et al. [12]), and ρ⁻-mixing random variables with mixing coefficients satisfying certain conditions (see Wang and Lu [13]).

Remark 2.2 In Theorem 2.1 we not only generalize the Baum-Katz-type theorem to the class of random variables satisfying (2.1) but also cover the case $\alpha p = 1$. Furthermore, taking $\alpha = 1$ and $p = 2$ yields the Hsu-Robbins-type theorem (see Hsu and Robbins [1]) for this class of random variables.

3 Lemmas

This section collects the lemmas used to prove the main results of the paper.

Lemma 3.1 (cf. Wu [14])

Let $\{X_n, n\ge 1\}$ be a sequence of random variables which is stochastically dominated by a random variable $X$. Then for any $a > 0$ and $b > 0$, the following two statements hold:

$$E|X_n|^a I\big(|X_n| \le b\big) \le C_1\big\{E|X|^a I\big(|X| \le b\big) + b^a P\big(|X| > b\big)\big\}$$

and

$$E|X_n|^a I\big(|X_n| > b\big) \le C_2\, E|X|^a I\big(|X| > b\big),$$

where $C_1$ and $C_2$ are positive constants.

Lemma 3.2 Under the conditions of Theorem  2.1,

$$\sum_{n=1}^\infty n^{\alpha p - 2 - \alpha}\int_{n^\alpha}^\infty P\Big(\max_{1\le j\le n}|S_j| > t\Big)\,dt < \infty.$$
(3.1)

Proof For fixed $n\ge 1$ and $t > 0$, denote $\tilde Y_{ti} = X_i - Y_{ti}$, $i\ge 1$. Since $EX_i = 0$, we have $S_j = \sum_{i=1}^j (Y_{ti} - EY_{ti}) + \sum_{i=1}^j (\tilde Y_{ti} - E\tilde Y_{ti})$, and hence

$$\begin{aligned}
\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty P\Big(\max_{1\le j\le n}|S_j|>t\Big)dt
&\le \sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j (Y_{ti}-EY_{ti})\Big|>\frac t2\Big)dt \\
&\quad + \sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j (\tilde Y_{ti}-E\tilde Y_{ti})\Big|>\frac t2\Big)dt := I + J.
\end{aligned}$$

For $J$, noting that $|\tilde Y_{ti}| \le |X_i| I(|X_i| > t)$, we have by Markov's inequality, Lemma 3.1 and $E|X|^p < \infty$ that

$$\begin{aligned}
J &\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-1}\sum_{i=1}^n E|\tilde Y_{ti}|\,dt \\
&\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-1}\sum_{i=1}^n E|X_i|I\big(|X_i|>t\big)\,dt \\
&\le C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}\int_{n^\alpha}^\infty t^{-1}E|X|I\big(|X|>t\big)\,dt \\
&= C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}\sum_{m=n}^\infty\int_{m^\alpha}^{(m+1)^\alpha} t^{-1}E|X|I\big(|X|>t\big)\,dt \\
&\le C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}\sum_{m=n}^\infty m^{-1}E|X|I\big(|X|>m^\alpha\big) \\
&= C\sum_{m=1}^\infty m^{-1}E|X|I\big(|X|>m^\alpha\big)\sum_{n=1}^m n^{\alpha p-1-\alpha} \\
&\le C\sum_{m=1}^\infty m^{-1}E|X|I\big(|X|>m^\alpha\big)\,m^{\alpha p-\alpha} \\
&= C\sum_{m=1}^\infty m^{\alpha p-1-\alpha}E|X|I\big(|X|>m^\alpha\big) \\
&= C\sum_{m=1}^\infty m^{\alpha p-1-\alpha}\sum_{n=m}^\infty E|X|I\big(n<|X|^{1/\alpha}\le n+1\big) \\
&\le C\sum_{n=1}^\infty E|X|I\big(n<|X|^{1/\alpha}\le n+1\big)\sum_{m=1}^n m^{\alpha p-1-\alpha} \\
&\le C\sum_{n=1}^\infty n^{\alpha p-\alpha}E|X|I\big(n<|X|^{1/\alpha}\le n+1\big) \\
&\le C\,E|X|^p < \infty.
\end{aligned}$$
(3.2)

For $I$, by Markov's inequality and (2.1), we have that for any $q \ge 2$,

$$\begin{aligned}
I &\le C_q\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-q}E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j(Y_{ti}-EY_{ti})\Big|^q\Big)dt \\
&\le C_q\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-q}\sum_{i=1}^n E|Y_{ti}|^q\,dt + C_q\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-q}\Big(\sum_{i=1}^n EY_{ti}^2\Big)^{q/2}dt \\
&:= I_1 + I_2.
\end{aligned}$$
(3.3)

We will consider the following three cases:

Case 1. $\alpha > 1/2$, $\alpha p > 1$ and $p \ge 2$.

Take $q > \max\big(p, \frac{\alpha p - 1}{\alpha - 1/2}\big)$, which implies that $\alpha p - 2 - \alpha q + q/2 < -1$. We have by Lemma 3.1 and the proof of (3.2) that

$$\begin{aligned}
I_1 &\le C_q\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-q}\sum_{i=1}^n\big(E|X_i|^qI(|X_i|\le t)+t^qP(|X_i|>t)\big)dt \\
&\le C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}\int_{n^\alpha}^\infty t^{-q}E|X|^qI\big(|X|\le t\big)\,dt + C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}\int_{n^\alpha}^\infty t^{-1}E|X|I\big(|X|>t\big)\,dt \\
&\le C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}\sum_{m=n}^\infty\int_{m^\alpha}^{(m+1)^\alpha} t^{-q}E|X|^qI\big(|X|\le t\big)\,dt + C \\
&\le C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}\sum_{m=n}^\infty m^{\alpha-1-\alpha q}E|X|^qI\big(|X|\le (m+1)^\alpha\big) + C \\
&= C\sum_{m=1}^\infty m^{\alpha-1-\alpha q}E|X|^qI\big(|X|\le (m+1)^\alpha\big)\sum_{n=1}^m n^{\alpha p-1-\alpha} + C \\
&\le C\sum_{m=1}^\infty m^{\alpha p-1-\alpha q}E|X|^qI\big(m^\alpha<|X|\le (m+1)^\alpha\big) + C\sum_{m=1}^\infty m^{\alpha p-1-\alpha q}E|X|^qI\big(|X|\le m^\alpha\big) + C \\
&\le C\sum_{m=1}^\infty m^{-1}E|X|^pI\big(m^\alpha<|X|\le (m+1)^\alpha\big) + C\sum_{m=1}^\infty m^{\alpha(p-q)-1}\sum_{j=1}^m j^{\alpha q}P\big(j-1<|X|^{1/\alpha}\le j\big) + C \\
&\le C\,E|X|^p + C\sum_{j=1}^\infty j^{\alpha q}P\big(j-1<|X|^{1/\alpha}\le j\big)\sum_{m=j}^\infty m^{\alpha(p-q)-1} + C \\
&\le C\sum_{j=1}^\infty j^{\alpha p}P\big(j-1<|X|^{1/\alpha}\le j\big) + C \\
&\le C\,E|X|^p + C < \infty.
\end{aligned}$$
(3.4)

Note that $EX^2 < \infty$, since $E|X|^p < \infty$ and $p \ge 2$. We have that

$$\begin{aligned}
I_2 &\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-q}\Big(\sum_{i=1}^n EX_i^2\Big)^{q/2}dt \le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-q}\big(nEX^2\big)^{q/2}dt \\
&\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha+q/2}\int_{n^\alpha}^\infty t^{-q}\,dt \le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q+q/2} < \infty.
\end{aligned}$$

Case 2. α>1/2, αp>1 and 1<p<2.

Take $q = 2$. Similarly to the proofs of (3.3) and (3.4), we have that

$$\begin{aligned}
I &\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty t^{-2}\sum_{i=1}^n\big(EX_i^2I(|X_i|\le t)+t^2P(|X_i|>t)\big)dt \\
&\le C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}\sum_{m=n}^\infty m^{-\alpha-1}EX^2I\big(|X|\le (m+1)^\alpha\big) + C \\
&= C\sum_{m=1}^\infty m^{-\alpha-1}EX^2I\big(|X|\le (m+1)^\alpha\big)\sum_{n=1}^m n^{\alpha p-1-\alpha} + C \\
&\le C\sum_{m=1}^\infty m^{\alpha p-1-2\alpha}EX^2I\big(m^\alpha<|X|\le (m+1)^\alpha\big) + C\sum_{m=1}^\infty m^{\alpha p-1-2\alpha}EX^2I\big(|X|\le m^\alpha\big) + C \\
&\le C\sum_{m=1}^\infty m^{-1}E|X|^pI\big(m^\alpha<|X|\le (m+1)^\alpha\big) + C\sum_{m=1}^\infty m^{\alpha(p-2)-1}\sum_{j=1}^m j^{2\alpha}P\big(j-1<|X|^{1/\alpha}\le j\big) + C \\
&\le C\,E|X|^p + C\sum_{j=1}^\infty j^{2\alpha}P\big(j-1<|X|^{1/\alpha}\le j\big)\sum_{m=j}^\infty m^{\alpha(p-2)-1} + C \\
&\le C\sum_{j=1}^\infty j^{\alpha p}P\big(j-1<|X|^{1/\alpha}\le j\big) + C \\
&\le C\,E|X|^p + C < \infty.
\end{aligned}$$
(3.5)

Case 3. α>1/2, αp=1 and p>1.

Take $q = 2$. Note that $1/2 < \alpha < 1$ if $\alpha p = 1$ and $p > 1$. Similarly to the proof of (3.5), it follows that $I < \infty$. Combining the three cases establishes (3.1). The proof of the lemma is completed. □

Lemma 3.3 (cf. Sung [15])

Let $\{Y_n, n\ge 1\}$ and $\{Z_n, n\ge 1\}$ be sequences of random variables. Then for any $q > 1$, $\varepsilon > 0$ and $a > 0$,

$$E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j (Y_i+Z_i)\Big|-\varepsilon a\Big)^+ \le \Big(\frac{1}{\varepsilon^q}+\frac{1}{q-1}\Big)\frac{1}{a^{q-1}}\,E\max_{1\le j\le n}\Big|\sum_{i=1}^j Y_i\Big|^q + E\max_{1\le j\le n}\Big|\sum_{i=1}^j Z_i\Big|.$$
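Lemma 3.3 can also be probed numerically. The sketch below estimates both sides by Monte Carlo for independent normal $Y_i$ and $Z_i$ with the illustrative choices $q = 2$, $\varepsilon = 1$, $a = 5$; the estimated left-hand side stays below the right-hand side, as the lemma requires:

```python
import numpy as np

rng = np.random.default_rng(3)

n, trials = 20, 5000
q, eps, a = 2.0, 1.0, 5.0
Y = rng.normal(0.0, 1.0, size=(trials, n))
Z = rng.normal(0.0, 0.5, size=(trials, n))

# Partial-sum paths for Y, Z, and their sum.
SY = np.cumsum(Y, axis=1)
SZ = np.cumsum(Z, axis=1)
S = SY + SZ

# Left-hand side: E (max_j |sum (Y_i + Z_i)| - eps*a)^+, estimated empirically.
lhs = float(np.mean(np.maximum(np.max(np.abs(S), axis=1) - eps * a, 0.0)))

# Right-hand side of Lemma 3.3 with the same Monte Carlo sample.
rhs = float((1.0 / eps ** q + 1.0 / (q - 1.0)) / a ** (q - 1.0)
            * np.mean(np.max(np.abs(SY), axis=1) ** q)
            + np.mean(np.max(np.abs(SZ), axis=1)))
print(lhs, rhs, lhs <= rhs)
```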

4 The proofs of main results

Proof of Theorem 2.1 First, we prove (2.2). For fixed $n\ge 1$, let $X_{ni} = -n^\alpha I(X_i < -n^\alpha) + X_i I(|X_i| \le n^\alpha) + n^\alpha I(X_i > n^\alpha)$ and $\tilde X_{ni} = X_i - X_{ni}$, $i\ge 1$. Since $EX_i = 0$, it is easy to see that

$$\begin{aligned}
\sum_{n=1}^\infty n^{\alpha p-2}P\Big(\max_{1\le j\le n}|S_j|>\varepsilon n^\alpha\Big)
&\le \sum_{n=1}^\infty n^{\alpha p-2}P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j(X_{ni}-EX_{ni})\Big|>\frac{\varepsilon n^\alpha}{2}\Big) \\
&\quad + \sum_{n=1}^\infty n^{\alpha p-2}P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j(\tilde X_{ni}-E\tilde X_{ni})\Big|>\frac{\varepsilon n^\alpha}{2}\Big) := I' + J'.
\end{aligned}$$

For $J'$, noting that $|\tilde X_{ni}| \le |X_i| I(|X_i| > n^\alpha)$, we have by Markov's inequality, Lemma 3.1 and the proof of (3.2) that

$$J' \le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\sum_{i=1}^n E|\tilde X_{ni}| \le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha}\sum_{i=1}^n E|X_i|I\big(|X_i|>n^\alpha\big) \le C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}E|X|I\big(|X|>n^\alpha\big) \le C\,E|X|^p < \infty.$$
(4.1)

For $I'$, by Markov's inequality and (2.1), we have that for any $q \ge 2$,

$$I' \le C_q\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j(X_{ni}-EX_{ni})\Big|^q\Big) \le C_q\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\sum_{i=1}^n E|X_{ni}|^q + C_q\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\Big(\sum_{i=1}^n EX_{ni}^2\Big)^{q/2} := I_1' + I_2'.$$
(4.2)

We consider the following three cases:

Case 1. $\alpha > 1/2$, $\alpha p > 1$ and $p \ge 2$.

Take $q > \max\big(p, \frac{\alpha p - 1}{\alpha - 1/2}\big)$, which implies that $\alpha p - 2 - \alpha q + q/2 < -1$.

For $I_1'$, we have by the $C_r$ inequality and the proofs of (3.2) and (3.4) that

$$\begin{aligned}
I_1' &\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\sum_{i=1}^n\big(E|X_i|^qI(|X_i|\le n^\alpha)+n^{\alpha q}P(|X_i|>n^\alpha)\big) \\
&\le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\sum_{i=1}^n\big(E|X|^qI(|X|\le n^\alpha)+n^{\alpha q}P(|X|>n^\alpha)\big) \\
&\le C\sum_{n=1}^\infty n^{\alpha p-1-\alpha q}E|X|^qI\big(|X|\le n^\alpha\big) + C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}E|X|I\big(|X|>n^\alpha\big) < \infty.
\end{aligned}$$
(4.3)

For $I_2'$, note that $EX^2 < \infty$, since $E|X|^p < \infty$ and $p \ge 2$. We have that

$$I_2' \le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\Big(\sum_{i=1}^n EX_i^2\Big)^{q/2} \le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q}\big(nEX^2\big)^{q/2} \le C\sum_{n=1}^\infty n^{\alpha p-2-\alpha q+q/2} < \infty.$$

Case 2. α>1/2, αp>1 and 1<p<2.

Take $q = 2$. Similarly to the proofs of (4.2), (4.3), (3.5) and (4.1), we have that

$$I' \le C\sum_{n=1}^\infty n^{\alpha p-2-2\alpha}\sum_{i=1}^n\big(EX_i^2I(|X_i|\le n^\alpha)+n^{2\alpha}P(|X_i|>n^\alpha)\big) \le C\sum_{n=1}^\infty n^{\alpha p-1-2\alpha}EX^2I\big(|X|\le n^\alpha\big) + C\sum_{n=1}^\infty n^{\alpha p-1-\alpha}E|X|I\big(|X|>n^\alpha\big) < \infty.$$
(4.4)

Case 3. α>1/2, αp=1 and p>1.

Take $q = 2$. Note that $1/2 < \alpha < 1$ if $\alpha p = 1$ and $p > 1$. Similarly to the proof of (4.4), it follows that $I' < \infty$. From all the statements above, we have proved (2.2).

Next, we prove (2.3). Note that $S_j = \sum_{i=1}^j X_i$ and $X_i = (X_{ni} - EX_{ni}) + (\tilde X_{ni} - E\tilde X_{ni})$ for $i = 1, 2, \ldots, n$. Applying Lemma 3.3 with $a = n^\alpha$, together with the proof of (4.1) and the bound $I' < \infty$ established above, yields (2.3).

We now prove the equivalence of (2.2) and (2.3). First, we show that (2.2) implies (2.3). In fact, for all $\varepsilon > 0$, we have by Lemma 3.2 that

$$\begin{aligned}
\sum_{n=1}^\infty n^{\alpha p-2-\alpha}E\Big(\max_{1\le j\le n}|S_j|-\varepsilon n^\alpha\Big)^+
&= \sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_0^\infty P\Big(\max_{1\le j\le n}|S_j|>\varepsilon n^\alpha+t\Big)dt \\
&\le \sum_{n=1}^\infty n^{\alpha p-2}P\Big(\max_{1\le j\le n}|S_j|>\varepsilon n^\alpha\Big) + \sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_{n^\alpha}^\infty P\Big(\max_{1\le j\le n}|S_j|>t\Big)dt < \infty.
\end{aligned}$$

Next, we show that (2.3) implies (2.2). It is easy to see that

$$\sum_{n=1}^\infty n^{\alpha p-2-\alpha}E\Big(\max_{1\le j\le n}|S_j|-\varepsilon n^\alpha\Big)^+ \ge \sum_{n=1}^\infty n^{\alpha p-2-\alpha}\int_0^{\varepsilon n^\alpha}P\Big(\max_{1\le j\le n}|S_j|-\varepsilon n^\alpha>t\Big)dt \ge \varepsilon\sum_{n=1}^\infty n^{\alpha p-2}P\Big(\max_{1\le j\le n}|S_j|>2\varepsilon n^\alpha\Big).$$
(4.5)

Hence, by (4.5) and the arbitrariness of $\varepsilon > 0$, (2.3) implies (2.2). The proof of the theorem is completed. □

Proof of Corollary 2.1 Taking αp=1 in Theorem 2.1, we have that for all ε>0,

$$\sum_{n=1}^\infty n^{-1}P\Big(\max_{1\le j\le n}|S_j|>\varepsilon n^{1/p}\Big) < \infty.$$

The rest of the proof is similar to that of Theorem 3.1 in Dung and Tien [16] and is omitted. □

Proof of Theorem 2.2 We use the same notation as in the proof of Theorem 2.1. Taking $q = 2$ and $a = n^\alpha$ in Lemma 3.3 and applying (2.1) and Lemma 3.1 together with the condition $E|X|\ln^+|X| < \infty$, it follows that (2.4) holds. □

Proof of Theorem 2.3 Inspired by the proof of Theorem 12.1 of Gut [17], we have that

$$\begin{aligned}
\sum_{n=1}^\infty n^{\alpha p-2}P\Big(\sup_{j\ge n}\Big|\frac{S_j}{j^\alpha}\Big|>\varepsilon\Big)
&= \sum_{m=1}^\infty\sum_{n=2^{m-1}}^{2^m-1} n^{\alpha p-2}P\Big(\sup_{j\ge n}\Big|\frac{S_j}{j^\alpha}\Big|>\varepsilon\Big) \\
&\le C\sum_{m=1}^\infty P\Big(\sup_{j\ge 2^{m-1}}\Big|\frac{S_j}{j^\alpha}\Big|>\varepsilon\Big)\sum_{n=2^{m-1}}^{2^m-1}2^{m(\alpha p-2)} \\
&\le C\sum_{m=1}^\infty 2^{m(\alpha p-1)}P\Big(\sup_{j\ge 2^{m-1}}\Big|\frac{S_j}{j^\alpha}\Big|>\varepsilon\Big) \\
&= C\sum_{m=1}^\infty 2^{m(\alpha p-1)}P\Big(\sup_{k\ge m}\max_{2^{k-1}\le j<2^k}\Big|\frac{S_j}{j^\alpha}\Big|>\varepsilon\Big) \\
&\le C\sum_{m=1}^\infty 2^{m(\alpha p-1)}\sum_{k=m}^\infty P\Big(\max_{1\le j\le 2^k}|S_j|>\varepsilon 2^{\alpha(k-1)}\Big) \\
&= C\sum_{k=1}^\infty P\Big(\max_{1\le j\le 2^k}|S_j|>\varepsilon 2^{\alpha(k-1)}\Big)\sum_{m=1}^k 2^{m(\alpha p-1)} \\
&\le C\sum_{k=1}^\infty 2^{k(\alpha p-1)}P\Big(\max_{1\le j\le 2^k}|S_j|>\varepsilon 2^{\alpha(k-1)}\Big) \\
&\le C\sum_{k=1}^\infty\sum_{n=2^k}^{2^{k+1}-1} n^{\alpha p-2}P\Big(\max_{1\le j\le n}|S_j|>\frac{\varepsilon}{4^\alpha}n^\alpha\Big) \\
&\le C\sum_{n=1}^\infty n^{\alpha p-2}P\Big(\max_{1\le j\le n}|S_j|>\frac{\varepsilon}{4^\alpha}n^\alpha\Big).
\end{aligned}$$
(4.6)

The desired result (2.5) follows from (2.2) and (4.6) immediately. □

References

  1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33(2):25–31. 10.1073/pnas.33.2.25

  2. Erdős P: On a theorem of Hsu and Robbins. Ann. Math. Stat. 1949, 20(2):286–291. 10.1214/aoms/1177730037

  3. Baum LE, Katz M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 1965, 120(1):108–123. 10.1090/S0002-9947-1965-0198524-1

  4. Peligrad M: Convergence rates of the strong law for stationary mixing sequences. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1985, 70:307–314. 10.1007/BF02451434

  5. Peligrad M, Gut A: Almost-sure results for a class of dependent random variables. J. Theor. Probab. 1999, 12(1):87–104. 10.1023/A:1021744626773

  6. Stoica G: Baum-Katz-Nagaev type results for martingales. J. Math. Anal. Appl. 2007, 336(2):1489–1492. 10.1016/j.jmaa.2007.03.012

  7. Stoica G: A note on the rate of convergence in the strong law of large numbers for martingales. J. Math. Anal. Appl. 2011, 381(2):910–913. 10.1016/j.jmaa.2011.04.008

  8. Stoica G: The Baum-Katz theorem for bounded subsequences. Stat. Probab. Lett. 2008, 78(7):924–926. 10.1016/j.spl.2007.10.001

  9. Wang XJ, Hu SH: Some Baum-Katz type results for φ-mixing random variables with different distributions. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 2012, 106(2):321–331.

  10. Shao QM: A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theor. Probab. 2000, 13(2):343–356. 10.1023/A:1007849609234

  11. Utev S, Peligrad M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 2003, 16(1):101–115. 10.1023/A:1022278404634

  12. Wang XJ, Hu SH, Yang WZ, Shen Y: On complete convergence for weighted sums of φ-mixing random variables. J. Inequal. Appl. 2010, 2010: Article ID 372390

  13. Wang JF, Lu FB: Inequalities of maximum of partial sums and weak convergence for a class of weak dependent random variables. Acta Math. Sin. 2006, 22(3):693–700. 10.1007/s10114-005-0601-x

  14. Wu QY: Probability Limit Theory for Mixed Sequence. China Science Press, Beijing; 2006.

  15. Sung SH: Moment inequalities and complete moment convergence. J. Inequal. Appl. 2009, 2009: Article ID 271265

  16. Dung LV, Tien ND: Strong law of large numbers for random fields in martingale type p Banach spaces. Stat. Probab. Lett. 2010, 80(9–10):756–763. 10.1016/j.spl.2010.01.007

  17. Gut A: Probability: A Graduate Course. Springer, New York; 2005.


Acknowledgements

The authors are most grateful to the editor Andrei Volodin and anonymous referees for careful reading of the manuscript and valuable suggestions which helped in significantly improving an earlier version of this paper. The research was supported by the National Natural Science Foundation of China (11171001, 11201001, 11126176), Natural Science Foundation of Anhui Province (1208085QA03), Provincial Natural Science Research Project of Anhui Colleges (KJ2010A005), Academic Innovation Team of Anhui University (KJTD001B), Doctoral Research Start-up Funds Projects of Anhui University, and the Talents Youth Fund of Anhui Province Universities (2011SQRL012ZD).

Author information

Correspondence to Shuhe Hu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Wang, X., Hu, S. Complete convergence and complete moment convergence for a class of random variables. J Inequal Appl 2012, 229 (2012). https://doi.org/10.1186/1029-242X-2012-229