
A note on the complete convergence for weighted sums of negatively dependent random variables

Abstract

The complete convergence theorems for weighted sums of arrays of rowwise negatively dependent random variables were obtained by Wu (Wu, Q: Complete convergence for weighted sums of sequences of negatively dependent random variables. J. Probab. Stat. 2011, Article ID 202015, 16 pages) and Wu (Wu, Q: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J. Inequal. Appl. 2012, 50). In this paper, we complement the results of Wu.

MSC: 60F15.

1 Introduction

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [1]. A sequence $\{X_n, n \ge 1\}$ of random variables converges completely to the constant $\theta$ if

$$\sum_{n=1}^{\infty} P(|X_n - \theta| > \epsilon) < \infty \quad \text{for all } \epsilon > 0.$$

By the Borel-Cantelli lemma, this implies that $X_n \to \theta$ almost surely (a.s.). The converse is true if $\{X_n, n \ge 1\}$ are independent random variables. Therefore, complete convergence is a very important tool in establishing almost sure convergence. There are many complete convergence theorems for sums and weighted sums of independent random variables.
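As a concrete numerical illustration (ours, not part of the paper), consider the sample means of i.i.d. standard normal variables: the tail probabilities are exactly computable via the complementary error function, and their series over $n$ converges, which is precisely the Hsu-Robbins notion of complete convergence of the sample means to 0. A minimal Python sketch:

```python
import math

# Illustration (not from the paper): for the mean of n i.i.d. standard
# normals, P(|S_n / n| > eps) = erfc(eps * sqrt(n / 2)).  This decays
# fast enough that sum_n P(|S_n / n| > eps) < infinity, i.e. the sample
# means converge completely to 0.
def tail_prob(n: int, eps: float) -> float:
    """Exact value of P(|mean of n standard normals| > eps)."""
    return math.erfc(eps * math.sqrt(n / 2.0))

eps = 0.5
head = sum(tail_prob(n, eps) for n in range(1, 2001))
tail = sum(tail_prob(n, eps) for n in range(2001, 4001))

# 'head' is already essentially the full sum; the next 2000 terms
# contribute a negligible amount, so the series is (numerically) finite.
print(head, tail)
```

Here complete convergence is much stronger than convergence in probability: the individual tail probabilities not only vanish but are summable.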

Volodin et al. [2] and Chen et al. [3] ($\beta > -1$ and $\beta = -1$, respectively) proved the following complete convergence theorem for weighted sums of arrays of rowwise independent random elements in a real separable Banach space.

We recall that an array $\{X_{ni}, i \ge 1, n \ge 1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if

$$P(|X_{ni}| > x) \le C P(|X| > x) \quad \text{for all } x > 0 \text{ and for all } i \ge 1 \text{ and } n \ge 1,$$

where $C$ is a positive constant.

Theorem 1.1 ([2, 3])

Suppose that $\beta \ge -1$. Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise independent random elements in a real separable Banach space which are stochastically dominated by a random variable $X$. Let $\{a_{ni}, i \ge 1, n \ge 1\}$ be an array of constants satisfying

$$\sup_{i \ge 1} |a_{ni}| = O(n^{-\gamma}) \quad \text{for some } \gamma > 0 \tag{1.1}$$

and

$$\sum_{i=1}^{\infty} |a_{ni}|^{\theta} = O(n^{\mu}) \tag{1.2}$$

for some $0 < \theta \le 2$ and $\mu$ such that $\theta + \mu/\gamma < 2$ and $1 + \mu + \beta > 0$. If $E|X|^{\theta + (1+\mu+\beta)/\gamma} < \infty$ and $\sum_{i=1}^{\infty} a_{ni} X_{ni} \to 0$ in probability, then

$$\sum_{n=1}^{\infty} n^{\beta} P\left( \left\| \sum_{i=1}^{\infty} a_{ni} X_{ni} \right\| > \epsilon \right) < \infty \quad \text{for all } \epsilon > 0. \tag{1.3}$$

If $\beta < -1$, then (1.3) is immediate. Hence Theorem 1.1 is of interest only for $\beta \ge -1$.

Recently, Wu [4] extended Theorem 1.1 to negatively dependent random variables when $\beta > -1$. Wu [4] also considered the case $1 + \mu + \beta = 0$ ($\beta > -1$). However, the proof in Wu [4] does not work for the case $\beta = -1$.

The concept of negatively dependent random variables was introduced by Lehmann [5]. A finite family of random variables $\{X_1, \ldots, X_n\}$ is said to be negatively dependent (or negatively orthant dependent) if for each $n \ge 2$, the following two inequalities hold:

$$P(X_1 \le x_1, \ldots, X_n \le x_n) \le \prod_{i=1}^{n} P(X_i \le x_i)$$

and

$$P(X_1 > x_1, \ldots, X_n > x_n) \le \prod_{i=1}^{n} P(X_i > x_i)$$

for all real numbers $x_1, \ldots, x_n$. An infinite family of random variables is negatively dependent if every finite subfamily is negatively dependent.
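To make the definition concrete, both inequalities can be verified directly on a toy example (ours, not from the paper): the exchangeable pair taking the values $(0,1)$ and $(1,0)$ with probability $1/2$ each is negatively dependent, since one coordinate being large forces the other to be small.

```python
from itertools import product

# Illustration (not from the paper): verify both defining inequalities
# of negative dependence for the pair uniform on {(0, 1), (1, 0)}.
support = {(0, 1): 0.5, (1, 0): 0.5}

def joint_le(x1, x2):
    """P(X1 <= x1, X2 <= x2)."""
    return sum(p for (a, b), p in support.items() if a <= x1 and b <= x2)

def joint_gt(x1, x2):
    """P(X1 > x1, X2 > x2)."""
    return sum(p for (a, b), p in support.items() if a > x1 and b > x2)

def marg_le(coord, x):
    """Marginal P(X_coord <= x)."""
    return sum(p for pt, p in support.items() if pt[coord] <= x)

grid = [-0.5, 0.0, 0.5, 1.0, 1.5]
ok = all(
    joint_le(x1, x2) <= marg_le(0, x1) * marg_le(1, x2) + 1e-12
    and joint_gt(x1, x2) <= (1 - marg_le(0, x1)) * (1 - marg_le(1, x2)) + 1e-12
    for x1, x2 in product(grid, grid)
)
print(ok)  # True: both inequalities hold at every grid point
```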

Theorem 1.2 (Wu [4])

Suppose that $\beta > -1$. Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of negatively dependent random variables which are stochastically dominated by a random variable $X$. Let $\{a_{ni}, i \ge 1, n \ge 1\}$ be an array of constants satisfying (1.1) for some $\gamma > 0$ and (1.2) for some $\theta$ and $\mu$ such that $\mu < 2\gamma$ and $0 < \theta < \min\{2, 2 - \mu/\gamma\}$. Furthermore, assume that $E X_{ni} = 0$ for all $i \ge 1$ and $n \ge 1$ if $\theta + (1+\mu+\beta)/\gamma \ge 1$.

(i) If $1 + \mu + \beta > 0$ and $E|X|^{\theta + (1+\mu+\beta)/\gamma} < \infty$, then

$$\sum_{n=1}^{\infty} n^{\beta} P\left( \left| \sum_{i=1}^{\infty} a_{ni} X_{ni} \right| > \epsilon \right) < \infty \quad \text{for all } \epsilon > 0. \tag{1.4}$$

(ii) If $1 + \mu + \beta = 0$ and $E|X|^{\theta} \log|X| < \infty$, then (1.4) holds.

Using the moment inequality of negatively dependent random variables, Wu [6] obtained a complete convergence result for weighted sums of identically distributed negatively dependent random variables.

Theorem 1.3 (Wu [6])

Suppose that $r > 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed negatively dependent random variables. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying

$$N(n, m+1) := \#\{1 \le i \le n : |a_{ni}| \ge (m+1)^{-1/2}\} \asymp m^{r-1} \quad \text{for all } n, m \ge 1 \tag{1.5}$$

and

$$\sum_{i=1}^{n} |a_{ni}|^{2(r-1)} = O(1). \tag{1.6}$$

Furthermore, assume that $EX = 0$ if $2(r-1) \ge 1$. Then, for $r \ge 2$,

$$E|X|^{2(r-1)} \log|X| < \infty \tag{1.7}$$

if and only if

$$\sum_{n=1}^{\infty} n^{r-2} P\left( \max_{1 \le k \le n} \left| \sum_{i=1}^{k} a_{ni} X_i \right| > \epsilon n^{1/2} \right) < \infty \quad \text{for all } \epsilon > 0. \tag{1.8}$$

For $1 < r < 2$, (1.7) implies (1.8).

In (1.5), $a \asymp b$ means that $a = O(b)$ and $b = O(a)$. Theorem 1.3 extends the result of Liang and Su [7] for negatively associated random variables to the negatively dependent case. The proof of the sufficiency part of Liang and Su [7] is mistakenly based on the fact that (1.8) implies that

$$P\left( \max_{1 \le k \le n} \left| \sum_{i=1}^{k} a_{ni} X_i \right| > n^{1/2} \right) \to 0 \quad \text{as } n \to \infty.$$

The proof of the sufficiency is correct when $r \ge 2$. However, condition (1.5) can never hold: as $m \to \infty$, the left-hand side of (1.5) converges to $\#\{1 \le i \le n : a_{ni} \ne 0\}$, which is finite, while the right-hand side diverges. Hence there are no arrays satisfying (1.5).

In this paper, we obtain complete convergence results for weighted sums of arrays of rowwise negatively dependent random variables. Our results complement the results of Wu [4, 6].

Throughout this paper, the symbol $C$ denotes a positive constant which is not necessarily the same in each appearance. It proves convenient to define $\log x = \max\{1, \ln x\}$, where $\ln x$ denotes the natural logarithm.

2 Preliminary lemmas

In this section, we present some lemmas which will be used to prove our main results.

The following two lemmas are well known and their proofs are standard.

Lemma 2.1 Let $\{X_n, n \ge 1\}$ be a sequence of random variables which are stochastically dominated by a random variable $X$. For any $\alpha > 0$ and $b > 0$, the following statements hold:

(i) $E|X_n|^{\alpha} I(|X_n| \le b) \le C \{ E|X|^{\alpha} I(|X| \le b) + b^{\alpha} P(|X| > b) \}$;

(ii) $E|X_n|^{\alpha} I(|X_n| > b) \le C E|X|^{\alpha} I(|X| > b)$.

The following Lemma 2.2(i)-(iii) can be found in Sung [8].

Lemma 2.2 Let $X$ be a random variable with $E|X|^r < \infty$ for some $r > 0$. For any $t > 0$, the following statements hold:

(i) $\sum_{n=1}^{\infty} n^{-1-t\delta} E|X|^{r+\delta} I(|X| \le n^t) \le C E|X|^r$ for any $\delta > 0$;

(ii) $\sum_{n=1}^{\infty} n^{-1+t\delta} E|X|^{r-\delta} I(|X| > n^t) \le C E|X|^r$ for any $\delta > 0$ such that $r - \delta > 0$;

(iii) $\sum_{n=1}^{\infty} n^{-1+tr} P(|X| > n^t) \le C E|X|^r$;

(iv) $\sum_{n=1}^{\infty} n^{-1} E|X|^r I(|X| > n^t) \le C E|X|^r \log|X|$.
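Part (iii), for instance, can be sanity-checked numerically on a concrete distribution (our own illustration, not from the paper; here the constant $C = 1$ happens to suffice):

```python
import math

# Illustration of Lemma 2.2(iii): take X Pareto with
#   P(|X| > x) = x^(-3) for x >= 1,   and r = 2, t = 1.
# Then sum_n n^(-1 + t*r) * P(|X| > n^t) = sum_n n^(-2) = pi^2 / 6,
# which is bounded by E|X|^r = 3 (so C = 1 works in this example).
a = 3.0                      # Pareto tail index
r, t = 2.0, 1.0              # lemma parameters, with r < a so E|X|^r < inf

def tail(x: float) -> float:
    return min(1.0, x ** (-a))

lhs = sum(n ** (-1.0 + t * r) * tail(n ** t) for n in range(1, 200001))
EX_r = a / (a - r)           # E|X|^r for this Pareto distribution
print(lhs, EX_r)             # lhs ≈ pi^2/6 ≈ 1.645, E|X|^2 = 3
```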

The Marcinkiewicz-Zygmund and Rosenthal type inequalities play an important role in establishing complete convergence. Asadian et al. [9] proved the Marcinkiewicz-Zygmund and Rosenthal inequalities for negatively dependent random variables.

Lemma 2.3 (Asadian et al. [9])

Let $\{X_n, n \ge 1\}$ be a sequence of negatively dependent random variables with $E X_n = 0$ and $E|X_n|^p < \infty$ for some $p \ge 1$ and all $n \ge 1$. Then there exist constants $C_p > 0$ and $D_p > 0$ depending only on $p$ such that

$$E\left| \sum_{i=1}^{n} X_i \right|^p \le C_p \sum_{i=1}^{n} E|X_i|^p \quad \text{for } 1 \le p \le 2,$$

$$E\left| \sum_{i=1}^{n} X_i \right|^p \le D_p \left\{ \sum_{i=1}^{n} E|X_i|^p + \left( \sum_{i=1}^{n} E X_i^2 \right)^{p/2} \right\} \quad \text{for } p > 2.$$
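For $p > 2$, the second inequality can be checked exactly on a toy case by enumeration (our own illustration; independent variables are in particular negatively dependent):

```python
from itertools import product

# Illustration: for S_n = X_1 + ... + X_n with independent signs
# P(X_i = +1) = P(X_i = -1) = 1/2, exact enumeration gives
#   E|S_n|^4 = 3n^2 - 2n,
# while the Rosenthal-type bound with D_4 = 3 reads
#   3 * (sum_i E|X_i|^4 + (sum_i E X_i^2)^2) = 3 * (n + n^2).
n, p = 10, 4
moment = sum(abs(sum(signs)) ** p
             for signs in product((-1, 1), repeat=n)) / 2 ** n
rosenthal_bound = 3 * (n + n ** 2)   # a valid D_p for this toy case
print(moment, rosenthal_bound)       # 280.0 vs 330
```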

The last lemma is a complete convergence theorem for an array of rowwise negatively dependent mean zero random variables.

Lemma 2.4 ([10, 11])

Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise negatively dependent random variables with $E X_{ni} = 0$ and $E X_{ni}^2 < \infty$ for all $i \ge 1$ and $n \ge 1$. Let $\{b_n, n \ge 1\}$ be a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} b_n \sum_{i=1}^{\infty} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$;

(ii) there exists $J \ge 1$ such that

$$\sum_{n=1}^{\infty} b_n \left( \sum_{i=1}^{\infty} E X_{ni}^2 \right)^J < \infty.$$

Then $\sum_{n=1}^{\infty} b_n P(|\sum_{i=1}^{\infty} X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$.

3 Main results

In this section, we obtain two complete convergence results for weighted sums of arrays of rowwise negatively dependent random variables.

Theorem 3.1 Suppose that $\beta \ge -1$. Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise negatively dependent mean zero random variables which are stochastically dominated by a random variable $X$ satisfying $E|X|^p < \infty$ for some $p \ge 1$. Let $\{a_{ni}, i \ge 1, n \ge 1\}$ be an array of constants satisfying (1.1) for some $\gamma > 0$ and

$$\sum_{i=1}^{\infty} |a_{ni}|^q = O\left( n^{-1-\beta+\gamma(p-q)} \right) \quad \text{for some } q < p. \tag{3.1}$$

Furthermore, assume that

$$\sum_{i=1}^{\infty} a_{ni}^2 = O(n^{-\alpha}) \quad \text{for some } \alpha > 0 \tag{3.2}$$

if $p \ge 2$. Then

$$\sum_{n=1}^{\infty} n^{\beta} P\left( \left| \sum_{i=1}^{\infty} a_{ni} X_{ni} \right| > \epsilon \right) < \infty \quad \text{for all } \epsilon > 0.$$

Proof Since $a_{ni} = a_{ni}^+ - a_{ni}^-$, we may assume that $a_{ni} \ge 0$. For $i \ge 1$ and $n \ge 1$, define

$$X'_{ni} = X_{ni} I(|X_{ni}| \le n^{\gamma}) + n^{\gamma} I(X_{ni} > n^{\gamma}) - n^{\gamma} I(X_{ni} < -n^{\gamma}), \qquad X''_{ni} = X_{ni} - X'_{ni}.$$

Then $\{X'_{ni}, i \ge 1, n \ge 1\}$ and $\{X''_{ni}, i \ge 1, n \ge 1\}$ are still arrays of rowwise negatively dependent random variables,

$$|X'_{ni}| = |X_{ni}| I(|X_{ni}| \le n^{\gamma}) + n^{\gamma} I(|X_{ni}| > n^{\gamma}),$$

and

$$|X''_{ni}| = (X_{ni} - n^{\gamma}) I(X_{ni} > n^{\gamma}) - (X_{ni} + n^{\gamma}) I(X_{ni} < -n^{\gamma}) \le |X_{ni}| I(|X_{ni}| > n^{\gamma}).$$

Since $a_{ni} \ge 0$, $\{a_{ni} X'_{ni}, i \ge 1, n \ge 1\}$ and $\{a_{ni} X''_{ni}, i \ge 1, n \ge 1\}$ are also arrays of rowwise negatively dependent random variables. In view of $E X_{ni} = 0$ for all $i \ge 1$ and $n \ge 1$, it suffices to show that

$$I_1 := \sum_{n=1}^{\infty} n^{\beta} P\left( \left| \sum_{i=1}^{\infty} a_{ni} (X'_{ni} - E X'_{ni}) \right| > \epsilon \right) < \infty \tag{3.3}$$

and

$$I_2 := \sum_{n=1}^{\infty} n^{\beta} P\left( \left| \sum_{i=1}^{\infty} a_{ni} (X''_{ni} - E X''_{ni}) \right| > \epsilon \right) < \infty. \tag{3.4}$$

We will prove (3.3) and (3.4) by considering three cases.

Case 1 ($p = 1$).

For $I_1$, we get by Markov's inequality, Lemmas 2.1-2.3, (1.1), and (3.1) that

$$\begin{aligned}
I_1 &\le \epsilon^{-2} \sum_{n=1}^{\infty} n^{\beta} E\left| \sum_{i=1}^{\infty} a_{ni} (X'_{ni} - E X'_{ni}) \right|^2 \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} a_{ni}^2 E|X'_{ni}|^2 \quad \text{(by Lemma 2.3)} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} a_{ni}^2 \left\{ E|X|^2 I(|X| \le n^{\gamma}) + n^{2\gamma} P(|X| > n^{\gamma}) \right\} \quad \text{(by Lemma 2.1)} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \left( \sup_{i \ge 1} |a_{ni}| \right)^{2-q} \sum_{i=1}^{\infty} |a_{ni}|^q \left\{ E|X|^2 I(|X| \le n^{\gamma}) + n^{2\gamma} P(|X| > n^{\gamma}) \right\} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} n^{-\gamma(2-q)} n^{-1-\beta+\gamma(p-q)} \left\{ E|X|^2 I(|X| \le n^{\gamma}) + n^{2\gamma} P(|X| > n^{\gamma}) \right\} \\
&\le C E|X|^p < \infty.
\end{aligned}$$

The last inequality follows from Lemma 2.2.

For $I_2$, we first prove that

$$I_3 := \sum_{i=1}^{\infty} |a_{ni}| E|X''_{ni}| \to 0. \tag{3.5}$$

By Lemma 2.1, (1.1), and (3.1), $I_3$ is dominated by

$$\begin{aligned}
\sum_{i=1}^{\infty} |a_{ni}| E|X_{ni}| I(|X_{ni}| > n^{\gamma})
&\le C \sum_{i=1}^{\infty} |a_{ni}| E|X| I(|X| > n^{\gamma}) \\
&\le C \left( \sup_{i \ge 1} |a_{ni}| \right)^{1-q} \sum_{i=1}^{\infty} |a_{ni}|^q E|X| I(|X| > n^{\gamma}) \\
&\le C n^{-1-\beta} E|X| I(|X| > n^{\gamma}).
\end{aligned}$$

Since $\beta \ge -1$ and $E|X| I(|X| > n^{\gamma}) \to 0$ as $n \to \infty$, (3.5) holds.

Hence, to prove (3.4), it suffices to show that

$$I'_2 := \sum_{n=1}^{\infty} n^{\beta} P\left( \left| \sum_{i=1}^{\infty} a_{ni} X''_{ni} \right| > \epsilon \right) < \infty. \tag{3.6}$$

Take $\delta > 0$ such that $p - \delta > \max\{0, q\}$. Since $0 < p - \delta = 1 - \delta < 1$, we get by Markov's inequality, Lemmas 2.1-2.2, (1.1), and (3.1) that

$$\begin{aligned}
I'_2 &\le \epsilon^{-(p-\delta)} \sum_{n=1}^{\infty} n^{\beta} E\left| \sum_{i=1}^{\infty} a_{ni} X''_{ni} \right|^{p-\delta} \\
&\le \epsilon^{-(p-\delta)} \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^{p-\delta} E|X''_{ni}|^{p-\delta} \quad \text{(since } 0 < p - \delta < 1\text{)} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^{p-\delta} E|X|^{p-\delta} I(|X| > n^{\gamma}) \quad \text{(by Lemma 2.1)} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \left( \sup_{i \ge 1} |a_{ni}| \right)^{p-\delta-q} \sum_{i=1}^{\infty} |a_{ni}|^q E|X|^{p-\delta} I(|X| > n^{\gamma}) \\
&\le C \sum_{n=1}^{\infty} n^{\beta} n^{-\gamma(p-\delta-q)} n^{-1-\beta+\gamma(p-q)} E|X|^{p-\delta} I(|X| > n^{\gamma}) \\
&= C \sum_{n=1}^{\infty} n^{-1+\gamma\delta} E|X|^{p-\delta} I(|X| > n^{\gamma}) \le C E|X|^p < \infty.
\end{aligned}$$

Case 2 ($1 < p < 2$).

As in Case 1, we have $I_1 \le C E|X|^p < \infty$.

For $I_2$, we take $\delta > 0$ such that $p - \delta \ge \max\{1, q\}$. Then we have by Markov's inequality, Lemmas 2.1-2.3, (1.1), and (3.1) that

$$\begin{aligned}
I_2 &\le \epsilon^{-(p-\delta)} \sum_{n=1}^{\infty} n^{\beta} E\left| \sum_{i=1}^{\infty} a_{ni} (X''_{ni} - E X''_{ni}) \right|^{p-\delta} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^{p-\delta} E|X''_{ni}|^{p-\delta} \quad \text{(by Lemma 2.3)} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^{p-\delta} E|X|^{p-\delta} I(|X| > n^{\gamma}) \quad \text{(by Lemma 2.1)} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \left( \sup_{i \ge 1} |a_{ni}| \right)^{p-\delta-q} \sum_{i=1}^{\infty} |a_{ni}|^q E|X|^{p-\delta} I(|X| > n^{\gamma}) \\
&\le C \sum_{n=1}^{\infty} n^{-1+\gamma\delta} E|X|^{p-\delta} I(|X| > n^{\gamma}) \le C E|X|^p < \infty.
\end{aligned}$$

Case 3 ($p \ge 2$).

In this case, we prove (3.3) and (3.4) by using Lemma 2.4. To prove (3.3), take $\delta > 0$. Then we obtain by Markov's inequality, Lemmas 2.1-2.2, (1.1), and (3.1) that

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} P\left( |a_{ni} (X'_{ni} - E X'_{ni})| > \epsilon \right)
&\le \epsilon^{-(p+\delta)} \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} E|a_{ni} (X'_{ni} - E X'_{ni})|^{p+\delta} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^{p+\delta} E|X'_{ni}|^{p+\delta} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^{p+\delta} \left\{ E|X|^{p+\delta} I(|X| \le n^{\gamma}) + n^{\gamma(p+\delta)} P(|X| > n^{\gamma}) \right\} \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \left( \sup_{i \ge 1} |a_{ni}| \right)^{p+\delta-q} \sum_{i=1}^{\infty} |a_{ni}|^q \left\{ E|X|^{p+\delta} I(|X| \le n^{\gamma}) + n^{\gamma(p+\delta)} P(|X| > n^{\gamma}) \right\} \\
&\le C \sum_{n=1}^{\infty} n^{-1-\gamma\delta} \left\{ E|X|^{p+\delta} I(|X| \le n^{\gamma}) + n^{\gamma(p+\delta)} P(|X| > n^{\gamma}) \right\} \\
&\le C E|X|^p < \infty.
\end{aligned}$$

We also obtain that, for $J \ge 1$ such that $\alpha J - \beta > 1$,

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\beta} \left( \sum_{i=1}^{\infty} E|a_{ni} (X'_{ni} - E X'_{ni})|^2 \right)^J
&\le \sum_{n=1}^{\infty} n^{\beta} \left( \sum_{i=1}^{\infty} a_{ni}^2 E|X'_{ni}|^2 \right)^J \\
&\le \sum_{n=1}^{\infty} n^{\beta} \left( \sum_{i=1}^{\infty} a_{ni}^2 \cdot C E|X|^2 \right)^J \quad \text{(since } E|X|^2 < \infty\text{)} \\
&\le \sum_{n=1}^{\infty} n^{\beta} \left( C n^{-\alpha} E|X|^2 \right)^J < \infty.
\end{aligned}$$

Hence (3.3) holds by Lemma 2.4.

To prove (3.4), take $\delta > 0$ such that $p - \delta \ge \max\{1, q\}$. The rest of the proof is similar to that of (3.3) and is omitted. □

Remark 3.1 When $0 < p < 1$, Theorem 3.1 holds without the condition of negative dependence (see Theorem 2(i) in Sung [8]). Theorem 3.1 extends the result of Sung [8] for independent random variables to the negatively dependent case.

Remark 3.2 Theorem 1.2(i) follows from Theorem 3.1 by taking $p = \theta + (1+\mu+\beta)/\gamma$ and $q = \theta$, since

$$\sum_{i=1}^{\infty} a_{ni}^2 \le \left( \sup_{i \ge 1} |a_{ni}| \right)^{2-\theta} \sum_{i=1}^{\infty} |a_{ni}|^{\theta} = O\left( n^{-(\gamma(2-\theta)-\mu)} \right).$$

But Theorem 1.2(i) does not deal with the case of $\beta = -1$.

Note that conditions (1.1) and (3.1) together imply

$$\sum_{i=1}^{\infty} |a_{ni}|^p = O(n^{-1-\beta}). \tag{3.7}$$

The following theorem shows that if the moment condition of Theorem 3.1 is strengthened to $E|X|^p \log|X| < \infty$, then condition (3.1) can be weakened to (3.7).

Theorem 3.2 Suppose that $\beta \ge -1$. Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise negatively dependent mean zero random variables which are stochastically dominated by a random variable $X$ satisfying $E|X|^p \log|X| < \infty$ for some $p \ge 1$. Let $\{a_{ni}, i \ge 1, n \ge 1\}$ be an array of constants satisfying (1.1) and (3.7). Furthermore, assume that (3.2) holds for some $\alpha > 0$ if $p \ge 2$. Then

$$\sum_{n=1}^{\infty} n^{\beta} P\left( \left| \sum_{i=1}^{\infty} a_{ni} X_{ni} \right| > \epsilon \right) < \infty \quad \text{for all } \epsilon > 0.$$

Proof As in the proof of Theorem 3.1, it suffices to prove (3.3) and (3.4). The proof of (3.3) is the same as that of Theorem 3.1 with $q$ replaced by $p$.

We now prove (3.4). When $1 \le p < 2$, we have by Markov's inequality, Lemmas 2.1-2.3, and (3.7) that

$$\begin{aligned}
I_2 &\le \epsilon^{-p} \sum_{n=1}^{\infty} n^{\beta} E\left| \sum_{i=1}^{\infty} a_{ni} (X''_{ni} - E X''_{ni}) \right|^p \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^p E|X''_{ni} - E X''_{ni}|^p \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^p E|X|^p I(|X| > n^{\gamma}) \\
&\le C \sum_{n=1}^{\infty} n^{-1} E|X|^p I(|X| > n^{\gamma}) \le C E|X|^p \log|X| < \infty.
\end{aligned}$$

When $p \ge 2$, we prove (3.4) by using Lemma 2.4. We have by Markov's inequality, Lemmas 2.1-2.2, and (3.7) that

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} P\left( |a_{ni} (X''_{ni} - E X''_{ni})| > \epsilon \right)
&\le \epsilon^{-p} \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} E|a_{ni} (X''_{ni} - E X''_{ni})|^p \\
&\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} |a_{ni}|^p E|X|^p I(|X| > n^{\gamma}) \\
&\le C \sum_{n=1}^{\infty} n^{-1} E|X|^p I(|X| > n^{\gamma}) \le C E|X|^p \log|X| < \infty.
\end{aligned}$$

We also have that, for $J \ge 1$ such that $\alpha J - \beta > 1$,

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\beta} \left( \sum_{i=1}^{\infty} E|a_{ni} (X''_{ni} - E X''_{ni})|^2 \right)^J
&\le \sum_{n=1}^{\infty} n^{\beta} \left( \sum_{i=1}^{\infty} a_{ni}^2 E|X''_{ni}|^2 \right)^J \\
&\le \sum_{n=1}^{\infty} n^{\beta} \left( \sum_{i=1}^{\infty} a_{ni}^2 \cdot C E|X|^2 \right)^J \quad \text{(since } E|X|^2 < \infty\text{)} \\
&\le \sum_{n=1}^{\infty} n^{\beta} \left( C n^{-\alpha} E|X|^2 \right)^J < \infty.
\end{aligned}$$

Hence (3.4) holds by Lemma 2.4. □

Remark 3.3 If $1 + \mu + \beta = 0$, then $\mu = -1 - \beta$. Hence Theorem 1.2(ii) follows from Theorem 3.2 by taking $p = \theta$. But Theorem 1.2(ii) does not deal with the case of $\beta = -1$.

As mentioned in the Introduction, (1.5) does not hold. Hence it is of interest to find a complete convergence result similar to Theorem 1.3 without condition (1.5). The following corollary does not assume condition (1.5).

Corollary 3.1 Suppose that $r \ge 3/2$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed negatively dependent mean zero random variables. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying (1.6) and $\sup_{1 \le i \le n} |a_{ni}| = O(1)$. If (1.7) holds, then

$$\sum_{n=1}^{\infty} n^{r-2} P\left( \left| \sum_{i=1}^{n} a_{ni} X_i \right| > \epsilon n^{1/2} \right) < \infty \quad \text{for all } \epsilon > 0. \tag{3.8}$$

Proof Let $c_{ni} = a_{ni}/n^{1/2}$ for $1 \le i \le n$ and $c_{ni} = 0$ for $i > n$. We apply Theorem 3.2 with $p = 2(r-1)$, $\beta = r-2$, $X_{ni} = X_i$, and $a_{ni}$ replaced by $c_{ni}$. Then

$$\sup_{i \ge 1} |c_{ni}| = O(n^{-1/2}), \qquad \sum_{i=1}^{\infty} |c_{ni}|^p = n^{-(r-1)} \sum_{i=1}^{n} |a_{ni}|^{2(r-1)} = O(n^{1-r}) = O(n^{-1-\beta}),$$

so (1.1) holds with $\gamma = 1/2$ and (3.7) holds. Furthermore, if $p = 2(r-1) \ge 2$, then, by Hölder's inequality and (1.6),

$$\sum_{i=1}^{\infty} c_{ni}^2 = n^{-1} \sum_{i=1}^{n} a_{ni}^2 \le n^{-1} \left( \sum_{i=1}^{n} |a_{ni}|^{2(r-1)} \right)^{1/(r-1)} n^{1-1/(r-1)} = O\left( n^{-1/(r-1)} \right),$$

so (3.2) holds with $\alpha = 1/(r-1) > 0$. Hence the result follows from Theorem 3.2. □

Remark 3.4 When $1 < r < 3/2$, Corollary 3.1 holds without the condition of negative dependence. Although (3.8) is weaker than (1.8), (3.8) can be strengthened to (1.8) if negative dependence is replaced by the stronger condition of negative association.

References

  1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33: 25–31. 10.1073/pnas.33.2.25


  2. Volodin A, Giuliano Antonini R, Hu TC: A note on the rate of complete convergence for weighted sums of arrays of Banach space valued random elements. Lobachevskii J. Math. 2004, 15: 21–33.


  3. Chen P, Sung SH, Volodin AI: Rate of complete convergence for arrays of Banach space valued random elements. Sib. Adv. Math. 2006, 16: 1–14.


  4. Wu Q: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J. Inequal. Appl. 2012, 2012: Article ID 50.

  5. Lehmann EL: Some concepts of dependence. Ann. Math. Stat. 1966, 37: 1137–1153. 10.1214/aoms/1177699260


  6. Wu Q: Complete convergence for weighted sums of sequences of negatively dependent random variables. J. Probab. Stat. 2011, 2011: Article ID 202015.

  7. Liang HY, Su C: Complete convergence for weighted sums of NA sequences. Stat. Probab. Lett. 1999, 45: 85–95. 10.1016/S0167-7152(99)00046-2


  8. Sung SH: Complete convergence for weighted sums of random variables. Stat. Probab. Lett. 2007, 77: 303–311. 10.1016/j.spl.2006.07.010


  9. Asadian N, Fakoor V, Bozorgnia A: Rosenthal’s type inequalities for negatively orthant dependent random variables. J. Iran. Stat. Soc. 2006, 5: 69–75.


  10. Dehua Q, Chang KC, Giuliano Antonini R, Volodin A: On the strong rates of convergence for arrays of rowwise negatively dependent random variables. Stoch. Anal. Appl. 2011, 29: 375–385. 10.1080/07362994.2011.548683


  11. Sung SH: A note on the complete convergence for arrays of dependent random variables. J. Inequal. Appl. 2011, 2011: Article ID 76.


Acknowledgement

The author would like to thank the referees for helpful comments and suggestions. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0013131).

Author information


Correspondence to Soo Hak Sung.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Sung, S.H. A note on the complete convergence for weighted sums of negatively dependent random variables. J Inequal Appl 2012, 158 (2012). https://doi.org/10.1186/1029-242X-2012-158

