
A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables

Abstract

In this article, applying the moment inequality for negatively dependent (ND) random variables obtained by Asadian et al., we discuss complete convergence for weighted sums of arrays of rowwise ND random variables. As a result, the complete convergence theorem for arrays of ND random variables is extended. Our results generalize and improve the complete convergence theorems previously obtained by Hu et al., Ahmed et al., Volodin, and Sung from the independent and identically distributed case to ND sequences.

Mathematics Subject Classification: 62F12.

1 Introduction

Random variables X and Y are said to be negatively dependent (ND) if

\[
P(X \le x,\ Y \le y) \le P(X \le x)\, P(Y \le y)
\tag{1.1}
\]

for all x, y ∈ ℝ. A collection of random variables is said to be pairwise negatively dependent (PND) if every pair of random variables in the collection satisfies (1.1).

It is important to note that (1.1) implies

\[
P(X > x,\ Y > y) \le P(X > x)\, P(Y > y)
\tag{1.2}
\]

for all x, y ∈ ℝ. Moreover, it follows that (1.2) implies (1.1), and hence (1.1) and (1.2) are equivalent. However, (1.1) and (1.2) are not equivalent for a collection of three or more random variables. Consequently, the following definition is needed to define sequences of ND random variables.

Definition 1. Random variables X_1, ..., X_n are said to be ND if, for all real x_1, ..., x_n,

\[
P\Bigl(\bigcap_{j=1}^{n} (X_j \le x_j)\Bigr) \le \prod_{j=1}^{n} P(X_j \le x_j),
\qquad
P\Bigl(\bigcap_{j=1}^{n} (X_j > x_j)\Bigr) \le \prod_{j=1}^{n} P(X_j > x_j).
\]

An infinite sequence of random variables {X_n; n ≥ 1} is said to be ND if every finite subcollection X_1, ..., X_n is ND.
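To make Definition 1 concrete, here is a small numerical sketch (the example and helper names are ours, not from the paper): the coordinates of a uniformly random permutation of {0, 1}, i.e. (X, Y) equal to (0, 1) or (1, 0) with probability 1/2 each, form an ND pair, and both inequalities of Definition 1 can be checked exactly by enumeration.

```python
from itertools import product

# Joint law of (X, Y): a random permutation of {0, 1}.
joint = {(0, 1): 0.5, (1, 0): 0.5}

def prob(event):
    """Total probability of the outcomes satisfying `event`."""
    return sum(p for xy, p in joint.items() if event(xy))

def nd_holds(x, y):
    """Check both inequalities of Definition 1 at thresholds (x, y)."""
    lower = prob(lambda v: v[0] <= x and v[1] <= y) <= \
            prob(lambda v: v[0] <= x) * prob(lambda v: v[1] <= y)
    upper = prob(lambda v: v[0] > x and v[1] > y) <= \
            prob(lambda v: v[0] > x) * prob(lambda v: v[1] > y)
    return lower and upper

# The ND inequalities hold at every pair of thresholds on a grid
# covering all the distinct cases.
checks = [nd_holds(x, y) for x, y in product([-1, 0, 0.5, 1, 2], repeat=2)]
print(all(checks))  # True
```

The same enumeration with a positively dependent pair, e.g. X = Y, would fail the check, which is one quick way to see that the definition has real content.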

Definition 2. Random variables X_1, X_2, ..., X_n, n ≥ 2, are said to be negatively associated (NA) if for every pair of disjoint subsets A_1 and A_2 of {1, 2, ..., n},

\[
\operatorname{cov}\bigl(f_1(X_i;\ i \in A_1),\ f_2(X_j;\ j \in A_2)\bigr) \le 0,
\]

where f_1 and f_2 are coordinatewise increasing (or both coordinatewise decreasing) functions for which this covariance exists. An infinite sequence of random variables {X_n; n ≥ 1} is said to be NA if every finite subfamily is NA.

The definition of PND was given by Lehmann [1]; the concepts of ND and NA were introduced by Joag-Dev and Proschan [2]. These notions of dependence are very useful in reliability theory and its applications.

Obviously, NA implies ND by the definitions of NA and ND, but ND does not imply NA, so ND is strictly weaker than NA. Because of their wide applications, ND random variables have received more and more attention recently, and a series of useful results have been established [3–12]. Hence, extending the limit properties of independent random variables to the ND case is highly desirable and of considerable significance in both theory and applications.

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [13] as follows. A sequence {X_n; n ≥ 1} of random variables converges completely to the constant a if \(\sum_{n=1}^{\infty} P(|X_n - a| > \varepsilon) < \infty\) for all ε > 0. In view of the Borel-Cantelli lemma, this implies that X_n → a almost surely; the converse is true if {X_n; n ≥ 1} are independent random variables. Complete convergence is thus one of the most important problems in probability theory. Hsu and Robbins [13] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Baum and Katz [14] proved that if {X, X_n; n ≥ 1} is a sequence of i.i.d. random variables with mean zero, then E|X|^{p(t+2)} < ∞ (1 ≤ p < 2, t ≥ 1) is equivalent to the condition that \(\sum_{n=1}^{\infty} n^{t} P\bigl(\bigl|\sum_{i=1}^{n} X_i\bigr| / n^{1/p} > \varepsilon\bigr) < \infty\) for all ε > 0. Some recent results can be found in [12, 15–17].
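As an illustration of the definition (our own sketch, not part of the paper): for i.i.d. symmetric ±1 signs the tail probabilities P(|S_n/n| > ε) can be computed exactly from the binomial distribution, and their partial sums stabilize quickly, which is the complete convergence of the sample mean to 0 asserted by Hsu and Robbins.

```python
from math import comb

def tail_prob(n, eps):
    """Exact P(|S_n| > eps * n) for S_n a sum of n fair +/-1 signs.

    S_n = 2*B - n with B ~ Binomial(n, 1/2), so |S_n| > eps*n iff
    B > n*(1+eps)/2 or B < n*(1-eps)/2.
    """
    hi = n * (1 + eps) / 2
    lo = n * (1 - eps) / 2
    return sum(comb(n, b) for b in range(n + 1) if b > hi or b < lo) / 2**n

eps = 0.5
# Partial sums of the series sum_n P(|S_n/n| > eps) at a few cutoffs:
partial = [sum(tail_prob(n, eps) for n in range(1, N + 1)) for N in (10, 50, 100)]
# The increments shrink rapidly (the tail probabilities decay
# exponentially), so the series is numerically convergent.
print(partial)
```

With heavier-tailed summands (infinite variance) the same partial sums would keep growing, which is exactly the content of the Hsu-Robbins moment condition.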

In this article we study complete convergence for ND random variables. Our results generalize and improve the complete convergence theorems previously obtained by Hu et al. [16], Ahmed et al. [15], Volodin [17], and Sung [18] from the i.i.d. case to ND sequences.

2 Main results

Theorem 1. Let {X_nk; k, n ≥ 1} be an array of rowwise ND random variables for which there exist a random variable X and a positive constant c satisfying

\[
P(|X_{nk}| \ge x) \le c\, P(|X| \ge x) \quad \text{for all } n, k \ge 1,\ x > 0.
\tag{2.1}
\]

Suppose that β > -1, and that {a nk ; k, n ≥ 1} is an array of constants such that

\[
\sup_{k \ge 1} |a_{nk}| = O(n^{-\gamma}) \quad \text{for some } \gamma > 0,
\tag{2.2}
\]

and

\[
\sum_{k=1}^{\infty} |a_{nk}|^{\theta} = O(n^{\alpha}) \quad \text{for some } \alpha < 2\gamma \text{ and some } 0 < \theta < \min(2,\ 2 - \alpha/\gamma).
\tag{2.3}
\]
(i) If 1 + α + β > 0 and

\[
E|X|^{v} < \infty, \qquad v = \theta + \frac{1+\alpha+\beta}{\gamma}.
\tag{2.4}
\]

When v ≥ 1, assume further that EX_nk = 0 for all n, k ≥ 1. Then

\[
\sum_{n=1}^{\infty} n^{\beta}\, P\Bigl(\Bigl|\sum_{k=1}^{\infty} a_{nk} X_{nk}\Bigr| > \varepsilon\Bigr) < \infty, \qquad \forall\, \varepsilon > 0.
\tag{2.5}
\]
(ii) If 1 + α + β = 0 and

\[
E\bigl(|X|^{\theta} \ln(1 + |X|)\bigr) < \infty.
\tag{2.6}
\]

When v = θ ≥ 1, assume further that EX_nk = 0 for all n, k ≥ 1. Then (2.5) holds.

Remark 2. Theorem 1 generalizes and improves the complete convergence theorems previously obtained by Hu et al. [16], Ahmed et al. [15], Volodin [17], and Sung [18] from the i.i.d. case to ND arrays.

By using Theorem 1, we can extend the well-known complete convergence theorem of Baum and Katz [14] from the i.i.d. case to ND random variables.

Corollary 3. Let {X_n; n ∈ ℕ} be a sequence of ND random variables for which there exist a random variable X and a positive constant c satisfying P(|X_n| ≥ x) ≤ cP(|X| ≥ x) for all n ≥ 1, x > 0. Suppose γ > 1/2 and γp > 1; if p ≥ 1, assume also that EX_n = 0 for all n ≥ 1. If E|X|^p < ∞, then

\[
\sum_{n=1}^{\infty} n^{\gamma p - 2}\, P\bigl(|S_n| > \varepsilon n^{\gamma}\bigr) < \infty, \qquad \forall\, \varepsilon > 0,
\]

where \(S_n = \sum_{k=1}^{n} X_k\).
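A Monte Carlo sketch of Corollary 3 (illustrative only; the sampling scheme and function names are ours): draws without replacement from a balanced ±1 population are negatively associated, hence ND, and the tail probabilities P(|S_n| > εn^γ) entering the series above fall off rapidly as n grows. Here γ = 1 and p = 2, so the weight n^{γp-2} equals 1 and the series is of Hsu-Robbins type.

```python
import random

rng = random.Random(0)  # fixed seed for reproducibility

def sample_nd_sums(n, trials, eps, gamma=1.0):
    """Fraction of trials with |S_n| > eps * n**gamma, where S_n is a sum
    of n draws without replacement from a balanced population of n copies
    of +1 and n copies of -1 (an NA, hence ND, sequence with mean 0)."""
    population = [1] * n + [-1] * n
    hits = 0
    for _ in range(trials):
        s = sum(rng.sample(population, n))
        if abs(s) > eps * n**gamma:
            hits += 1
    return hits / trials

eps = 0.5
# Estimated tail probabilities shrink quickly along n, consistent with
# the summability asserted by Corollary 3.
p_hat = [sample_nd_sums(n, trials=1000, eps=eps) for n in (10, 40, 160)]
print(p_hat)
```

The negative dependence here actually helps: sampling without replacement concentrates S_n more tightly than i.i.d. sampling would, so the estimated tails decay at least as fast as in the independent case.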

3 Proofs

In the following, let \(a_n \ll b_n\) denote that there exists a constant c > 0 such that \(a_n \le c b_n\) for sufficiently large n. The symbol c stands for a generic positive constant, which may differ from one place to another.

Lemma 1. [3] Let X_1, ..., X_n be ND random variables and let {f_n; n ≥ 1} be a sequence of Borel functions, all of which are monotone increasing (or all monotone decreasing). Then {f_n(X_n); n ≥ 1} is still a sequence of ND random variables.

Lemma 2. [9] Let {X_n; n ≥ 1} be an ND sequence with EX_n = 0 and E|X_n|^p < ∞, p ≥ 2. Then

\[
E|S_n|^{p} \le c_p \Bigl\{ \sum_{i=1}^{n} E|X_i|^{p} + \Bigl( \sum_{i=1}^{n} E X_i^{2} \Bigr)^{p/2} \Bigr\},
\]

where \(S_n = \sum_{i=1}^{n} X_i\) and c_p > 0 depends only on p.

Lemma 3. [19] Let {X_n; n ≥ 1} be an arbitrary sequence of random variables for which there exist a random variable X and a positive constant c such that P(|X_n| ≥ x) ≤ cP(|X| ≥ x) for all n ≥ 1 and x > 0. Then for any u > 0, t > 0, and n ≥ 1,

\[
E|X_n|^{u} I\bigl(|X_n| \le t\bigr) \le c\,\bigl\{ E|X|^{u} I\bigl(|X| \le t\bigr) + t^{u} P\bigl(|X| > t\bigr) \bigr\},
\]

and

\[
E|X_n|^{u} I\bigl(|X_n| > t\bigr) \le c\, E|X|^{u} I\bigl(|X| > t\bigr).
\]

Proof of Theorem 1. Let \(a_{nk}^{+} = \max(a_{nk}, 0) \ge 0\) and \(a_{nk}^{-} = \max(-a_{nk}, 0) \ge 0\). From (2.2), (2.5), and

\[
\sum_{n=1}^{\infty} n^{\beta} P\Bigl(\Bigl|\sum_{k=1}^{\infty} a_{nk} X_{nk}\Bigr| > \varepsilon\Bigr)
\le \sum_{n=1}^{\infty} n^{\beta} P\Bigl(\Bigl|\sum_{k=1}^{\infty} a_{nk}^{+} X_{nk}\Bigr| > \varepsilon/2\Bigr)
+ \sum_{n=1}^{\infty} n^{\beta} P\Bigl(\Bigl|\sum_{k=1}^{\infty} a_{nk}^{-} X_{nk}\Bigr| > \varepsilon/2\Bigr),
\]

without loss of generality we can assume that a_ni > 0 for all i, n ≥ 1 and

\[
\sup_{k \ge 1} a_{nk} = n^{-\gamma}.
\tag{3.1}
\]

For any k, n ≥ 1, let

\[
Y_{nk} = -a_{nk}^{-1} I\bigl(a_{nk} X_{nk} < -1\bigr) + X_{nk} I\bigl(|a_{nk} X_{nk}| \le 1\bigr) + a_{nk}^{-1} I\bigl(a_{nk} X_{nk} > 1\bigr).
\]

Then for any n ≥ 1,

\[
\Bigl(\Bigl|\sum_{k=1}^{\infty} a_{nk} X_{nk}\Bigr| > \varepsilon\Bigr)
\subseteq \Bigl(\Bigl|\sum_{k=1}^{\infty} a_{nk} Y_{nk}\Bigr| > \varepsilon,\ |a_{nk} X_{nk}| \le 1 \text{ for all } k \ge 1\Bigr)
\cup \bigl\{ |a_{nk} X_{nk}| > 1 \text{ for some } k \ge 1 \bigr\}.
\]

Hence

\[
\sum_{n=1}^{\infty} n^{\beta} P\Bigl(\Bigl|\sum_{k=1}^{\infty} a_{nk} X_{nk}\Bigr| > \varepsilon\Bigr)
\le \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} P\bigl(|a_{nk} X_{nk}| > 1\bigr)
+ \sum_{n=1}^{\infty} n^{\beta} P\Bigl(\Bigl|\sum_{k=1}^{\infty} a_{nk} Y_{nk}\Bigr| > \varepsilon\Bigr)
=: J_1 + J_2.
\tag{3.2}
\]

Therefore, in order to prove (2.5), it suffices to prove that J1 < ∞ and J2 < ∞.
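The three-piece truncation Y_nk defined above is easy to sanity-check numerically; a minimal sketch (the helper name y_trunc is ours, not the paper's):

```python
def y_trunc(x, a):
    """The proof's Y_nk for a realization x of X_nk and weight a = a_nk > 0."""
    if a * x < -1:
        return -1.0 / a   # left tail clipped at -1/a
    if a * x > 1:
        return 1.0 / a    # right tail clipped at +1/a
    return x              # |a * x| <= 1: left unchanged

# On the weighted scale, a * y_trunc(x, a) is exactly a*x clipped to [-1, 1]:
a = 0.25
print([a * y_trunc(x, a) for x in (-10, -2, 0, 2, 10)])
# [-1.0, -0.5, 0.0, 0.5, 1.0]
```

This is why the summands a_nk Y_nk are uniformly bounded by 1, which is what makes the moment bounds for J_2 below tractable.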

(i) If 1 + α + β > 0, then by Lemma 3, (2.1), (2.3), (2.4), (3.1), and the Markov inequality we have

\[
\begin{aligned}
J_1 &\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} P\bigl(|a_{nk} X| > 1\bigr)
\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} E|a_{nk} X|^{\theta} I\bigl(|X| > a_{nk}^{-1}\bigr) \\
&\ll \sum_{n=1}^{\infty} n^{\beta} E|X|^{\theta} I\bigl(|X| > n^{\gamma}\bigr) \sum_{k=1}^{\infty} a_{nk}^{\theta}
\ll \sum_{n=1}^{\infty} n^{\beta+\alpha} E|X|^{\theta} I\bigl(|X| > n^{\gamma}\bigr) \\
&= \sum_{n=1}^{\infty} n^{\beta+\alpha} \sum_{j=n}^{\infty} E|X|^{\theta} I\bigl(j^{\gamma} < |X| \le (j+1)^{\gamma}\bigr)
= \sum_{j=1}^{\infty} E|X|^{\theta} I\bigl(j^{\gamma} < |X| \le (j+1)^{\gamma}\bigr) \sum_{n=1}^{j} n^{\beta+\alpha} \\
&\ll \sum_{j=1}^{\infty} j^{1+\alpha+\beta} E|X|^{\theta} I\bigl(j^{\gamma} < |X| \le (j+1)^{\gamma}\bigr)
\ll \sum_{j=1}^{\infty} E|X|^{\theta+(1+\alpha+\beta)/\gamma} I\bigl(j^{\gamma} < |X| \le (j+1)^{\gamma}\bigr) < \infty.
\end{aligned}
\tag{3.3}
\]

Next we prove that J_2 < ∞ for v < 1 and v ≥ 1, respectively. Put \(N(nk) = \#\{\, i:\ (nk)^{\gamma} \le a_{ni}^{-1} < (n(k+1))^{\gamma} \,\}\), the number of such indices i, for k, n ≥ 1.

(a) If v < 1, choose t such that v < t < 1. By the Markov inequality, the c_r inequality, Lemma 3, and the proof of (3.3), we have

\[
\begin{aligned}
J_2 &\ll \sum_{n=1}^{\infty} n^{\beta} E\Bigl|\sum_{k=1}^{\infty} a_{nk} Y_{nk}\Bigr|^{t}
\le \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} E|a_{nk} Y_{nk}|^{t} \\
&\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} \bigl\{ E|a_{nk} X_{nk}|^{t} I\bigl(|a_{nk} X_{nk}| \le 1\bigr) + P\bigl(|a_{nk} X_{nk}| > 1\bigr) \bigr\} \\
&\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} \bigl\{ E|a_{nk} X|^{t} I\bigl(|a_{nk} X| \le 1\bigr) + P\bigl(|a_{nk} X| > 1\bigr) \bigr\} \\
&= \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} \sum_{k:\, (nj)^{\gamma} \le a_{nk}^{-1} < (n(j+1))^{\gamma}} \bigl\{ a_{nk}^{t} E|X|^{t} I\bigl(|X| \le a_{nk}^{-1}\bigr) + P\bigl(|a_{nk} X| > 1\bigr) \bigr\} \\
&\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} N(nj) (nj)^{-\gamma t} E|X|^{t} I\bigl(|X| < (n(j+1))^{\gamma}\bigr) \\
&= \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} N(nj) (nj)^{-\gamma t} \sum_{i=1}^{n(j+1)} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \\
&\le \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} N(nj) (nj)^{-\gamma t} \sum_{i=1}^{2n} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \\
&\quad + \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} N(nj) (nj)^{-\gamma t} \sum_{i=2n+1}^{n(j+1)} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr)
=: J_{21} + J_{22}.
\end{aligned}
\tag{3.4}
\]

Since t > v ≥ θ and γ > 0, we have t > θ, \((1 + 1/k)^{-\gamma\theta} \ge 2^{-\gamma\theta}\), and \((nk)^{\gamma(t-\theta)} \ge (nj)^{\gamma(t-\theta)}\) for k ≥ j.

Therefore, by (2.3) we have

\[
n^{\alpha} \gg \sum_{i=1}^{\infty} a_{ni}^{\theta}
= \sum_{k=1}^{\infty} \sum_{i:\, (nk)^{\gamma} \le a_{ni}^{-1} < (n(k+1))^{\gamma}} a_{ni}^{\theta}
\ge \sum_{k=1}^{\infty} N(nk) \bigl(n(k+1)\bigr)^{-\gamma\theta}
\ge 2^{-\gamma\theta} \sum_{k=1}^{\infty} N(nk) (nk)^{-\gamma\theta}
\ge 2^{-\gamma\theta} (nj)^{\gamma(t-\theta)} \sum_{k=j}^{\infty} N(nk) (nk)^{-\gamma t}.
\]

Hence,

\[
\sum_{k=j}^{\infty} N(nk) (nk)^{-\gamma t} \ll n^{\alpha-\gamma(t-\theta)} j^{-\gamma(t-\theta)}, \qquad \forall\, j \in \mathbb{N}.
\tag{3.5}
\]

Combining this with (2.4) and \(t > v = \theta + \frac{1+\alpha+\beta}{\gamma}\), i.e., \(\alpha + \beta - \gamma(t-\theta) < -1\), we obtain

\[
\begin{aligned}
J_{21} &\ll \sum_{n=1}^{\infty} n^{\beta+\alpha-\gamma(t-\theta)} \sum_{i=1}^{2n} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \\
&\ll \sum_{i=2}^{\infty} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \sum_{n=[i/2]}^{\infty} n^{\beta+\alpha-\gamma(t-\theta)} \\
&\ll \sum_{i=2}^{\infty} i^{1+\beta+\alpha-\gamma(t-\theta)} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr)
\ll \sum_{i=2}^{\infty} E|X|^{\theta+(1+\alpha+\beta)/\gamma} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) < \infty.
\end{aligned}
\tag{3.6}
\]

By (3.5),

\[
\begin{aligned}
J_{22} &= \sum_{n=1}^{\infty} n^{\beta} \sum_{i=2n+1}^{\infty} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \sum_{j \ge i/n-1} N(nj) (nj)^{-\gamma t} \\
&\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{i=2n+1}^{\infty} n^{\alpha-\gamma(t-\theta)} \Bigl(\frac{i}{n}\Bigr)^{-\gamma(t-\theta)} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \\
&\ll \sum_{i=2}^{\infty} i^{-\gamma(t-\theta)} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \sum_{n=1}^{[i/2]} n^{\alpha+\beta} \\
&\ll \sum_{i=2}^{\infty} i^{1+\alpha+\beta-\gamma(t-\theta)} E|X|^{t} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr)
\ll \sum_{i=2}^{\infty} E|X|^{\theta+(1+\alpha+\beta)/\gamma} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) < \infty.
\end{aligned}
\tag{3.7}
\]

By (3.2), (3.3), (3.4), (3.6), and (3.7), (2.5) holds.

(b) If v ≥ 1. Since EX_nk = 0, E|X|^v < ∞, v ≥ 1, v > θ, β + 1 > 0, and γ > 0, by (2.3), (2.4), (3.1), and Lemma 3 we have

\[
\begin{aligned}
\Bigl|\sum_{k=1}^{\infty} E a_{nk} Y_{nk}\Bigr|
&\le \sum_{k=1}^{\infty} \bigl|E a_{nk} X_{nk} I\bigl(|a_{nk} X_{nk}| \le 1\bigr)\bigr| + \sum_{k=1}^{\infty} P\bigl(|a_{nk} X_{nk}| > 1\bigr) \\
&= \sum_{k=1}^{\infty} \bigl|E a_{nk} X_{nk} I\bigl(|a_{nk} X_{nk}| > 1\bigr)\bigr| + \sum_{k=1}^{\infty} E I\bigl(|a_{nk} X_{nk}| > 1\bigr) \\
&\ll \sum_{k=1}^{\infty} E|a_{nk} X_{nk}|^{v} I\bigl(|a_{nk} X_{nk}| > 1\bigr)
\ll \Bigl(\sup_{k \ge 1} a_{nk}\Bigr)^{v-\theta} \sum_{k=1}^{\infty} a_{nk}^{\theta}\, E|X|^{v} I\bigl(|X| > a_{nk}^{-1}\bigr) \\
&\ll n^{-\gamma(v-\theta)+\alpha}\, E|X|^{v} I\bigl(|X| > n^{\gamma}\bigr)
= n^{-(1+\beta)} E|X|^{v} I\bigl(|X| > n^{\gamma}\bigr) \to 0, \qquad n \to \infty.
\end{aligned}
\tag{3.8}
\]

Thus, in order to prove J_2 < ∞, it suffices to prove that for all ε > 0,

\[
J_2^{*} = \sum_{n=1}^{\infty} n^{\beta} P\Bigl(\Bigl|\sum_{k=1}^{\infty} \bigl(a_{nk} Y_{nk} - E a_{nk} Y_{nk}\bigr)\Bigr| > \varepsilon\Bigr) < \infty.
\tag{3.9}
\]

Obviously, Y_nk is a monotone function of X_nk. By Lemma 1, {a_nk Y_nk - E a_nk Y_nk; k, n ≥ 1} is also an array of rowwise ND random variables, with E(a_nk Y_nk - E a_nk Y_nk) = 0. Note that γ(2 - θ) - α = γ(2 - α/γ - θ) > 0 since θ < 2 - α/γ, and 1 + β > 0 since β > -1. Taking \(t > \max\Bigl(2,\ \frac{2(1+\beta)}{\gamma(2-\theta)-\alpha}\Bigr)\) in Lemma 2, by the Markov inequality and the c_r inequality we have

\[
J_2^{*} \ll \sum_{n=1}^{\infty} n^{\beta} E\Bigl|\sum_{k=1}^{\infty} \bigl(a_{nk} Y_{nk} - E a_{nk} Y_{nk}\bigr)\Bigr|^{t}
\ll \sum_{n=1}^{\infty} n^{\beta} \Bigl\{ \sum_{k=1}^{\infty} E|a_{nk} Y_{nk}|^{t} + \Bigl( \sum_{k=1}^{\infty} E|a_{nk} Y_{nk}|^{2} \Bigr)^{t/2} \Bigr\}
=: J_{21}^{*} + J_{22}^{*}.
\tag{3.10}
\]

From the proof of (3.4)-(3.7), we know that

\[
J_{21}^{*} < \infty.
\tag{3.11}
\]

Since θ < min(2, v) and E|X|v< ∞, by Lemma 3, (2.1), (2.3), and (3.1), we have

\[
\sum_{k=1}^{\infty} E|a_{nk} Y_{nk}|^{2}
\ll \sum_{k=1}^{\infty} E|a_{nk} X|^{2} I\bigl(|a_{nk} X| \le 1\bigr) + \sum_{k=1}^{\infty} P\bigl(|a_{nk} X| > 1\bigr)
\ll
\begin{cases}
\displaystyle \sum_{k=1}^{\infty} E|a_{nk} X|^{2} \le \sum_{k=1}^{\infty} a_{nk}^{\theta} \Bigl(\sup_{k \ge 1} a_{nk}\Bigr)^{2-\theta} E X^{2} \ll n^{\alpha - \gamma(2-\theta)}, & v \ge 2, \\[2ex]
\displaystyle \sum_{k=1}^{\infty} E|a_{nk} X|^{v} \le \sum_{k=1}^{\infty} a_{nk}^{\theta} \Bigl(\sup_{k \ge 1} a_{nk}\Bigr)^{v-\theta} E|X|^{v} \ll n^{\alpha - \gamma(v-\theta)}, & v < 2.
\end{cases}
\]
By the choice of t, \(t(\gamma(2-\theta)-\alpha)/2 - \beta > 1\) and \(t(1+\beta)/2 - \beta > 1\); hence

\[
J_{22}^{*} \ll
\begin{cases}
\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^{t(\gamma(2-\theta)-\alpha)/2 - \beta}}, & v \ge 2, \\[2ex]
\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^{t(1+\beta)/2 - \beta}}, & v < 2,
\end{cases}
\; < \infty.
\tag{3.12}
\]

By (3.10)-(3.12), we obtain (3.9); therefore, (2.5) holds.

(ii) If 1 + α + β = 0, then \(\sum_{n=1}^{j} n^{\alpha+\beta} \ll \ln j\), and similarly to the proof of (3.3) we have

\[
J_1 = \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} P\bigl(|a_{nk} X_{nk}| > 1\bigr) \ll E\bigl(|X|^{\theta} \ln(1 + |X|)\bigr) < \infty
\]

from (2.6).

(a) When v = θ < 1, similarly to the corresponding parts of the proofs of (3.6) and (3.7), we obtain

\[
J_{21} \ll E|X|^{\theta} < \infty
\]

and

\[
J_{22} \ll E\bigl(|X|^{\theta} \ln(1 + |X|)\bigr) < \infty
\]

from (2.6). Therefore, (2.5) holds.

(b) When v = θ ≥ 1. Since EX_nk = 0, E|X|^v < ∞, 1 + α + β = 0, θ < 2, β > -1, and v = θ, (3.8) remains true. Therefore, we only need to prove (3.9). By Lemmas 2 and 3 and J_1 < ∞, noting that v = θ and α + β = -1, we have

\[
\begin{aligned}
J_2^{*} &\ll \sum_{n=1}^{\infty} n^{\beta} E\Bigl|\sum_{k=1}^{\infty} \bigl(a_{nk} Y_{nk} - E a_{nk} Y_{nk}\bigr)\Bigr|^{2}
\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} E|a_{nk} Y_{nk}|^{2} \\
&\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} P\bigl(|a_{nk} X_{nk}| > 1\bigr)
+ \sum_{n=1}^{\infty} n^{\beta} \sum_{k=1}^{\infty} E|a_{nk} X|^{2} I\bigl(|a_{nk} X| \le 1\bigr) \\
&= J_1 + \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} \sum_{k:\, (nj)^{\gamma} \le a_{nk}^{-1} < (n(j+1))^{\gamma}} a_{nk}^{2}\, E X^{2} I\bigl(|X| \le a_{nk}^{-1}\bigr) \\
&\ll J_1 + \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} N(nj) (nj)^{-2\gamma} E X^{2} I\bigl(|X| < (n(j+1))^{\gamma}\bigr) \\
&= J_1 + \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} N(nj) (nj)^{-2\gamma} \sum_{i=1}^{n(j+1)} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \\
&\le J_1 + \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} N(nj) (nj)^{-2\gamma} \sum_{i=1}^{2n} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \\
&\quad + \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} N(nj) (nj)^{-2\gamma} \sum_{i=2n+1}^{n(j+1)} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr)
=: J_1 + J_{21}^{*} + J_{22}^{*}.
\end{aligned}
\tag{3.13}
\]

Since v = θ < 2, (3.5) also holds for t = 2. Combining this with (2.6) and \(\alpha + \beta = -1\), we obtain

\[
\begin{aligned}
J_{21}^{*} &\ll \sum_{n=1}^{\infty} n^{\beta+\alpha-\gamma(2-\theta)} \sum_{i=1}^{2n} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \\
&\ll \sum_{i=2}^{\infty} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \sum_{n=[i/2]}^{\infty} n^{-1-\gamma(2-\theta)} \\
&\ll \sum_{i=2}^{\infty} i^{-\gamma(2-\theta)} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr)
\ll \sum_{i=2}^{\infty} E|X|^{\theta} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) < \infty.
\end{aligned}
\tag{3.14}
\]

By (3.5),

\[
\begin{aligned}
J_{22}^{*} &= \sum_{n=1}^{\infty} n^{\beta} \sum_{i=2n+1}^{\infty} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \sum_{j \ge i/n-1} N(nj) (nj)^{-2\gamma} \\
&\ll \sum_{n=1}^{\infty} n^{\beta} \sum_{i=2n+1}^{\infty} n^{\alpha-\gamma(2-\theta)} \Bigl(\frac{i}{n}\Bigr)^{-\gamma(2-\theta)} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \\
&\ll \sum_{i=2}^{\infty} i^{-\gamma(2-\theta)} E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) \sum_{n=1}^{[i/2]} n^{-1} \\
&\ll \sum_{i=2}^{\infty} i^{-\gamma(2-\theta)} \ln i\; E X^{2} I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr)
\ll \sum_{i=2}^{\infty} E\bigl(|X|^{\theta} \ln(1+|X|)\bigr) I\bigl((i-1)^{\gamma} \le |X| < i^{\gamma}\bigr) < \infty.
\end{aligned}
\tag{3.15}
\]

By (3.13)-(3.15), (3.9) holds.

Proof of Corollary 3. Let

\[
a_{nk} =
\begin{cases}
n^{-\gamma}, & k \le n, \\
0, & k > n,
\end{cases}
\qquad 0 \le \alpha < 1, \quad \theta = \frac{1-\alpha}{\gamma}, \quad \beta = \gamma p - 2.
\]

Then β > -1, α < 2γ, 0 < θ < min(2, 2 - α/γ), 1 + α + β > 0, θ + (1 + α + β)/γ = p, and \(\sum_{k=1}^{\infty} |a_{nk}|^{\theta} = n^{\alpha}\). Thus the conditions of Theorem 1 hold, and by Theorem 1,

\[
\sum_{n=1}^{\infty} n^{\gamma p - 2}\, P\bigl(|S_n| > \varepsilon n^{\gamma}\bigr) < \infty, \qquad \forall\, \varepsilon > 0.
\]
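The parameter bookkeeping in this proof can be checked mechanically; the following sketch (illustrative values and helper name are ours) verifies that the choices above satisfy Theorem 1's hypotheses and yield the moment index v = p.

```python
def corollary3_params(gamma, p, alpha):
    """Check Theorem 1's hypotheses for the Corollary 3 parameter choices
    and return the resulting moment index v = theta + (1+alpha+beta)/gamma."""
    theta = (1 - alpha) / gamma
    beta = gamma * p - 2
    assert beta > -1                                  # since gamma * p > 1
    assert 0 < theta < min(2, 2 - alpha / gamma)      # since gamma > 1/2
    assert alpha < 2 * gamma                          # since alpha < 1 < 2*gamma
    assert 1 + alpha + beta > 0                       # since gamma * p > 1
    return theta + (1 + alpha + beta) / gamma

# For gamma > 1/2, gamma*p > 1 and 0 <= alpha < 1, v always comes out to p.
print([corollary3_params(g, p, a) for g, p, a in
       ((1.0, 2.0, 0.0), (0.75, 2.0, 0.5), (2.0, 1.0, 0.25))])
```

This makes explicit why case (i) of Theorem 1, with E|X|^v = E|X|^p < ∞, is exactly the hypothesis of the corollary.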

References

  1. Lehmann EL: Some concepts of dependence. Ann Math Stat 1966, 43: 1137–1153.


  2. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann Stat 1983, 11(1):286–295. 10.1214/aos/1176346079


  3. Bozorgnia A, Patterson RF, Taylor RL: Limit theorems for ND r.v.'s. University of Georgia, Atlanta; 1993.


  4. Bozorgnia A, Patterson RF, Taylor RL: Weak laws of large numbers for negatively dependent random variables in Banach spaces. In Madan Puri Festschrift Brunner. Edited by: E, Denker, M. VSP International Science Publishers, Vilnius; 1996:11–22.


  5. Amini M: Some contribution to limit theorems for negatively dependent random variable. Ph.D. thesis 2000.


  6. Fakoor V, Azarnoosh HA: Probability inequalities for sums of negatively dependent random variables. Pak J Stat 2005, 21(3):257–264.


  7. Sani Nili HR, Amini M, Bozorgnia A: Strong laws for weighted sums of negative dependent random variables. J Sci Islam Repub Iran 2005, 16(3):261–265.


  8. Klesov O, Rosalsky A, Volodin A: On the almost sure growth rate of sums of lower negatively dependent nonnegative random variables. Stat Probab Lett 2005, 71: 193–202. 10.1016/j.spl.2004.10.027


  9. Asadian N, Fakoor V, Bozorgnia A: Rosen-thal's type inequalities for negatively orthant dependent random variables. J Iran Stat Soc 2006, 5(1–2):66–75.


  10. Wu QY, Jiang YY: Strong consistency of M estimator in linear model for negatively dependent random samples. Commun Stat Theory Methods 2011, 40(3):467–491. 10.1080/03610920903427792


  11. Wu QY: A strong limit theorem for weighted sums of sequences of negatively dependent random variables. J Inequal Appl 2010, 383805: 8.


  12. Wu QY: Complete convergence for negatively dependent sequences of random variables. J Inequal Appl 2010, 507293: 10.


  13. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc Nat Acad Sci USA 1947, 33: 25–31. 10.1073/pnas.33.2.25


  14. Baum LE, Katz M: Convergence rates in the law of large numbers. Trans Am Math Soc 1965, 120: 108–123. 10.1090/S0002-9947-1965-0198524-1


  15. Ahmed SE, Antonini Giuliano R, Volodin A: On the rate of complete convergence for weighted sums of arrays of Banach space valued random elements with application to moving average processes. Stat Probab Lett 2002, 58: 185–194. 10.1016/S0167-7152(02)00126-8


  16. Hu TC, Li D, Rosalsky A, Volodin AI: On the rate of complete convergence for weighted sums of arrays of Banach space valued random elements. Theory Probab Appl 2002, 47(3):455–468.


  17. Volodin A, Antonini Giuliano R, Hu TC: A note on the rate of complete convergence for weighted sums of arrays of Banach space valued random elements. Lobachevskiŏ J Math (Electronic) 2004, 15: 21–33.


  18. Sung SH: Complete convergence for weighted sums of random variables. Stat Probab Lett 2007, 77: 303–311. 10.1016/j.spl.2006.07.010


  19. Wu QY: Probability Limit Theory for Mixed Sequence. 2006.



Acknowledgements

The author is very grateful to the referees and the editors for their valuable comments and some helpful suggestions that improved the clarity and readability of the paper. Supported by the National Natural Science Foundation of China (11061012), and project supported by Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in the Guangxi Institutions of Higher Learning ([2011] 47).

Author information


Corresponding author

Correspondence to Qunying Wu.

Additional information

Competing interests

The author declares that she has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Wu, Q. A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J Inequal Appl 2012, 50 (2012). https://doi.org/10.1186/1029-242X-2012-50
