A note on the complete convergence for arrays of dependent random variables

Abstract

A complete convergence result for an array of rowwise independent mean zero random variables was established by Kruglov et al. (2006). This result was partially extended to negatively associated and negatively dependent mean zero random variables by Chen et al. (2007) and Dehua et al. (2011), respectively. In this paper, we obtain fully extended versions of the Kruglov et al. result for both negatively associated and negatively dependent mean zero random variables.

Mathematics Subject Classification 60F15

1 Introduction

The concept of complete convergence was introduced by Hsu and Robbins [1]. A sequence $\{X_n, n \ge 1\}$ of random variables is said to converge completely to the constant $\theta$ if

$\sum_{n=1}^{\infty} P(|X_n - \theta| > \epsilon) < \infty \quad \text{for all } \epsilon > 0.$
(1.1)

Hsu and Robbins [1] proved that the sequence of arithmetic means of i.i.d. random variables converges completely to the expected value if the variance of the summands is finite. Erdös [2] proved the converse.
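For intuition, the summability in (1.1) can be checked in a toy case. The sketch below is an illustrative numerical check, not part of the paper's argument: it computes the exact tail probabilities $P(|S_n/n| > \epsilon)$ for sums of i.i.d. Rademacher ($\pm 1$) variables (all function names are our own hypothetical choices) and shows that the terms of the series decay.

```python
from math import comb

def tail_prob(n, eps):
    """Exact P(|S_n / n| > eps) for S_n a sum of n i.i.d. Rademacher (+1/-1) signs.

    With H ~ Binomial(n, 1/2) the number of +1's, S_n = 2H - n, and by
    symmetry P(|S_n| > eps*n) = 2 * P(S_n > eps*n) for eps > 0.
    """
    return 2 * sum(comb(n, h) for h in range(n + 1) if 2 * h - n > eps * n) / 2 ** n

# Terms of the series in (1.1) with theta = 0 and eps = 1/2, along n = 10, 20, 40;
# they shrink rapidly (geometrically, by Hoeffding's inequality), consistent
# with complete convergence of the arithmetic means to 0.
probs = [tail_prob(n, 0.5) for n in (10, 20, 40)]
```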

The result of Hsu-Robbins-Erdös has been generalized and extended in several directions by many authors. Sung et al. [3] (see also Hu et al. [4]) obtained the following complete convergence theorem for arrays of rowwise independent random variables $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$, where $\{k_n, n \ge 1\}$ is a sequence of positive integers.

Theorem 1.1. Let $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$ be an array of rowwise independent random variables and $\{a_n, n \ge 1\}$ a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$,

(ii) there exist $J \ge 2$ and $\delta > 0$ such that

$\sum_{n=1}^{\infty} a_n \left( \sum_{i=1}^{k_n} E X_{ni}^2 I(|X_{ni}| \le \delta) \right)^J < \infty,$
(1.2)

(iii) $\sum_{i=1}^{k_n} E X_{ni} I(|X_{ni}| \le \delta) \to 0$ as $n \to \infty$.

Then $\sum_{n=1}^{\infty} a_n P\left( \left| \sum_{i=1}^{k_n} X_{ni} \right| > \epsilon \right) < \infty$ for all $\epsilon > 0$.

Kruglov et al. [5] improved Theorem 1.1 as follows.

Theorem 1.2. Let $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$ be an array of rowwise independent random variables and $\{a_n, n \ge 1\}$ a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$,

(ii) there exist $J \ge 1$ and $\delta > 0$ such that

$\sum_{n=1}^{\infty} a_n \left( \sum_{i=1}^{k_n} Var(X_{ni} I(|X_{ni}| \le \delta)) \right)^J < \infty.$
(1.3)

Then $\sum_{n=1}^{\infty} a_n P\left( \max_{1 \le m \le k_n} \left| \sum_{i=1}^{m} (X_{ni} - E X_{ni} I(|X_{ni}| \le \delta)) \right| > \epsilon \right) < \infty$ for all $\epsilon > 0$.

When the mean zero condition is imposed in Theorem 1.2, Kruglov et al. [5] established the following result.

Theorem 1.3. Let $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$ be an array of rowwise independent mean zero random variables and $\{a_n, n \ge 1\}$ a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$,

(ii) there exists $J \ge 1$ such that

$\sum_{n=1}^{\infty} a_n \left( \sum_{i=1}^{k_n} E X_{ni}^2 \right)^J < \infty.$
(1.4)

Then $\sum_{n=1}^{\infty} a_n P\left( \max_{1 \le m \le k_n} \left| \sum_{i=1}^{m} X_{ni} \right| > \epsilon \right) < \infty$ for all $\epsilon > 0$.

The above complete convergence results for independent random variables have been extended to dependent random variables by many authors.

The concept of negatively associated random variables was introduced by Alam and Saxena [6] and carefully studied by Joag-Dev and Proschan [7]. A finite family of random variables $\{X_i, 1 \le i \le n\}$ is said to be negatively associated if for every pair of disjoint subsets $A$ and $B$ of $\{1, 2, \ldots, n\}$,

$Cov(f_1(X_i, i \in A), f_2(X_j, j \in B)) \le 0$
(1.5)

whenever $f_1$ and $f_2$ are both coordinatewise increasing (or both coordinatewise decreasing) and the covariance exists. An infinite family of random variables is negatively associated if every finite subfamily is negatively associated.

The concept of negatively dependent random variables was introduced by Lehmann [8]. A finite family of random variables $\{X_i, 1 \le i \le n\}$ is said to be negatively dependent (or negatively orthant dependent) if the following two inequalities hold:

$P(X_1 \le x_1, \ldots, X_n \le x_n) \le \prod_{i=1}^{n} P(X_i \le x_i), \qquad P(X_1 > x_1, \ldots, X_n > x_n) \le \prod_{i=1}^{n} P(X_i > x_i)$
(1.6)

for all real numbers $x_1, \ldots, x_n$. An infinite family of random variables is negatively dependent if every finite subfamily is negatively dependent.
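As a concrete sanity check of (1.6), not taken from the paper, consider the toy pair of indicators obtained by placing one ball into one of two boxes: the joint law puts mass $1/2$ on each of $(1,0)$ and $(0,1)$. The sketch below (all names hypothetical) verifies both negative orthant inequalities exhaustively on a grid of thresholds.

```python
from itertools import product

# Joint law of (X1, X2): one ball placed in one of two boxes with probability 1/2 each.
# Outcomes (x1, x2) = (1, 0) or (0, 1); the indicators are negatively dependent.
outcomes = {(1, 0): 0.5, (0, 1): 0.5}

def P(event):
    """Probability of an event given as a predicate on the pair (x1, x2)."""
    return sum(p for xs, p in outcomes.items() if event(*xs))

# Check both inequalities of (1.6) on a grid of thresholds covering all cases.
grid = [-0.5, 0.0, 0.5, 1.0]
nd_holds = all(
    P(lambda a, b: a <= x1 and b <= x2) <= P(lambda a, b: a <= x1) * P(lambda a, b: b <= x2)
    and P(lambda a, b: a > x1 and b > x2) <= P(lambda a, b: a > x1) * P(lambda a, b: b > x2)
    for x1, x2 in product(grid, grid)
)
```

For example, at $x_1 = x_2 = 0$ the joint lower-orthant probability is $0$ while the product of marginals is $1/4$, so the first inequality is strict there.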

Chen et al. [9] extended Theorem 1.2 to negatively associated random variables.

Theorem 1.4. Let $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$ be an array of rowwise negatively associated random variables and $\{a_n, n \ge 1\}$ a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$,

(ii) there exist $J \ge 1$ and $\delta > 0$ such that

$\sum_{n=1}^{\infty} a_n \left( \sum_{i=1}^{k_n} Var(X_{ni} I(|X_{ni}| \le \delta)) \right)^J < \infty.$
(1.7)

Then $\sum_{n=1}^{\infty} a_n P\left( \max_{1 \le m \le k_n} \left| \sum_{i=1}^{m} (X_{ni} - E X_{ni} I(|X_{ni}| \le \delta)) \right| > \epsilon \right) < \infty$ for all $\epsilon > 0$.

When the mean zero condition is imposed in Theorem 1.4, Chen et al. [9] obtained a partially extended version of Theorem 1.3 for negatively associated random variables; "partially extended" means that the additional condition (iii) of Theorem 1.5 is required.

Theorem 1.5. Let $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$ be an array of rowwise negatively associated mean zero random variables and $\{a_n, n \ge 1\}$ a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$,

(ii) there exists $J \ge 1$ such that

$\sum_{n=1}^{\infty} a_n \left( \sum_{i=1}^{k_n} E X_{ni}^2 \right)^J < \infty,$
(1.8)

(iii) $\sum_{i=1}^{k_n} E X_{ni}^2 \to 0$ as $n \to \infty$.

Then $\sum_{n=1}^{\infty} a_n P\left( \max_{1 \le m \le k_n} \left| \sum_{i=1}^{m} X_{ni} \right| > \epsilon \right) < \infty$ for all $\epsilon > 0$.

Recently, Dehua et al. [10] obtained a version of Theorem 1.2 for negatively dependent random variables.

Theorem 1.6. Let $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$ be an array of rowwise negatively dependent random variables and $\{a_n, n \ge 1\}$ a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$,

(ii) there exist $J \ge 1$ and $\delta > 0$ such that

$\sum_{n=1}^{\infty} a_n \left( \sum_{i=1}^{k_n} Var(X_{ni} I(|X_{ni}| \le \delta)) \right)^J < \infty.$
(1.9)

Then $\sum_{n=1}^{\infty} a_n P\left( \left| \sum_{i=1}^{k_n} (X_{ni} - E X_{ni} I(|X_{ni}| \le \delta)) \right| > \epsilon \right) < \infty$ for all $\epsilon > 0$.

When the mean zero condition is imposed in Theorem 1.6, Dehua et al. [10] established a complete convergence result. However, the proof of Theorem 2 in Dehua et al. [10] is mistakenly based on the following relation:

$\left\{ \left| \sum_{i=1}^{k_n} (X_{ni} I(|X_{ni}| > \delta) - E X_{ni} I(|X_{ni}| > \delta)) \right| > \epsilon/2 \right\} \subset \bigcup_{i=1}^{k_n} \{ |X_{ni}| > \delta \}.$
(1.10)

Note that the above relation is true only if $\left| \sum_{i=1}^{k_n} E X_{ni} I(|X_{ni}| > \delta) \right| \le \epsilon/2.$

Chen et al. [9] and Dehua et al. [10] obtained complete convergence results (Theorems 1.4 and 1.6, respectively) for negatively associated and negatively dependent random variables and then treated the mean zero case by using these results. However, this approach has drawbacks: in the negatively associated mean zero case, an additional condition is assumed in Theorem 1.5, and in the negatively dependent mean zero case, the proof is not correct.

In this paper, we obtain complete convergence results for negatively associated and negatively dependent mean zero random variables. As corollaries of these results, we can obtain Theorems 1.4 and 1.6.

2 Main results

In this section, we will establish complete convergence theorems for negatively associated and negatively dependent mean zero random variables.

The following lemma is an exponential inequality for negatively dependent random variables which was proved by Dehua et al. [10] (see also Fakoor and Azarnoosh [11]).

Lemma 2.1. Let $\{X_i, 1 \le i \le n\}$ be a sequence of negatively dependent random variables with $E X_i = 0$ and $E X_i^2 < \infty$ for $1 \le i \le n$. Then, for any $x > 0$ and $y > 0$,

$P\left( \left| \sum_{i=1}^{n} X_i \right| > x \right) \le 2 P\left( \max_{1 \le i \le n} |X_i| > y \right) + 2 e^{x/y} \left( 1 + \frac{xy}{\sum_{i=1}^{n} E X_i^2} \right)^{-x/y}.$
(2.1)

The following lemma is an exponential inequality for negatively associated random variables which was proved by Shao [12].

Lemma 2.2. Let $\{X_i, 1 \le i \le n\}$ be a sequence of negatively associated random variables with $E X_i = 0$ and $E X_i^2 < \infty$ for $1 \le i \le n$, and let $B_n = \sum_{i=1}^{n} E X_i^2.$ Then, for any $x > 0$ and $y > 0$,

$P\left( \max_{1 \le m \le n} \left| \sum_{i=1}^{m} X_i \right| > x \right) \le 2 P\left( \max_{1 \le i \le n} |X_i| > y \right) + 4 \exp\left( -\frac{x^2}{8 B_n} \right) + 4 \left( \frac{B_n}{4(xy + B_n)} \right)^{x/(12y)}.$
(2.2)

Lemma 2.3. Let $\{X_i, 1 \le i \le n\}$ be a sequence of negatively associated random variables with $E X_i = 0$ and $E X_i^2 < \infty$ for $1 \le i \le n$, and let $B_n = \sum_{i=1}^{n} E X_i^2.$ Then, for any $x > 0$ and $y > 0$,

$P\left( \max_{1 \le m \le n} \left| \sum_{i=1}^{m} X_i \right| > x \right) \le 2 P\left( \max_{1 \le i \le n} |X_i| > y \right) + 8 \left( \frac{2 B_n}{3xy} \right)^{x/(12y)}.$
(2.3)

Proof. Since $e^{-t} \le \frac{1}{1+t} \le \frac{1}{t}$ for $t > 0$, we get that for $x > 0$ and $y > 0$,

$\exp\left( -\frac{x^2}{8 B_n} \right) = \left[ \exp\left( -\frac{3xy}{2 B_n} \right) \right]^{x/(12y)} \le \left( \frac{2 B_n}{3xy} \right)^{x/(12y)}.$
(2.4)

We also get that

$\left( \frac{B_n}{4(xy + B_n)} \right)^{x/(12y)} \le \left( \frac{B_n}{4xy} \right)^{x/(12y)} \le \left( \frac{2 B_n}{3xy} \right)^{x/(12y)}.$
(2.5)

Hence, the result follows from Lemma 2.2.   ■
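The elementary bounds in (2.4) and (2.5) can be spot-checked numerically. The sketch below is an illustrative check with arbitrarily chosen positive values of $x$, $y$ and $B_n$ (not from the paper): it confirms that the sum of the two non-maximal terms of Lemma 2.2 is dominated by the single term of Lemma 2.3.

```python
from math import exp

def lemma_22_terms(x, y, B):
    """The second and third terms of the bound in Lemma 2.2."""
    t2 = 4 * exp(-x * x / (8 * B))
    t3 = 4 * (B / (4 * (x * y + B))) ** (x / (12 * y))
    return t2, t3

def lemma_23_term(x, y, B):
    """The single dominating term of Lemma 2.3."""
    return 8 * (2 * B / (3 * x * y)) ** (x / (12 * y))

# By (2.4) and (2.5) each Lemma 2.2 term is at most half the Lemma 2.3 term,
# so their sum is dominated; check at a few arbitrary (x, y, B_n) triples.
checks = []
for x, y, B in [(1.0, 0.1, 0.5), (2.0, 0.5, 1.0), (5.0, 0.2, 3.0)]:
    t2, t3 = lemma_22_terms(x, y, B)
    checks.append(t2 + t3 <= lemma_23_term(x, y, B))
```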

Now, we state and prove one of our main results.

Theorem 2.4. Let $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$ be an array of rowwise negatively dependent random variables with $E X_{ni} = 0$ and $E X_{ni}^2 < \infty$ for $1 \le i \le k_n$, $n \ge 1$. Let $\{a_n, n \ge 1\}$ be a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$,

(ii) there exists $J \ge 1$ such that

$\sum_{n=1}^{\infty} a_n \left( \sum_{i=1}^{k_n} E X_{ni}^2 \right)^J < \infty.$
(2.6)

Then $\sum_{n=1}^{\infty} a_n P\left( \left| \sum_{i=1}^{k_n} X_{ni} \right| > \epsilon \right) < \infty$ for all $\epsilon > 0$.

Proof. By Lemma 2.1 with $x = \epsilon$ and $y = \epsilon/J$, we have that

$P\left( \left| \sum_{i=1}^{k_n} X_{ni} \right| > \epsilon \right) \le 2 P\left( \max_{1 \le i \le k_n} |X_{ni}| > \epsilon/J \right) + 2 e^{J} \left( 1 + \frac{\epsilon^2 / J}{\sum_{i=1}^{k_n} E X_{ni}^2} \right)^{-J} \le 2 \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon/J) + 2 e^{J} J^{J} \epsilon^{-2J} \left( \sum_{i=1}^{k_n} E X_{ni}^2 \right)^{J}.$
(2.7)

Hence, the result follows by conditions (i) and (ii).   ■
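The elementary step in (2.7), dropping the 1 in $(1 + a)^{-J} \le a^{-J}$ with $a = (\epsilon^2/J)/\sum E X_{ni}^2$, can likewise be spot-checked. The sketch below uses arbitrary illustrative values (not from the paper) for $\epsilon$, $J$ and the second-moment sum.

```python
from math import exp

def bound_exact(eps, J, S):
    """Second term of Lemma 2.1 with x = eps, y = eps/J; S stands for sum of E X_ni^2."""
    return 2 * exp(J) * (1 + (eps ** 2 / J) / S) ** (-J)

def bound_relaxed(eps, J, S):
    """The relaxed bound 2 e^J J^J eps^{-2J} S^J appearing in (2.7)."""
    return 2 * exp(J) * (J ** J) * eps ** (-2 * J) * S ** J

# (1 + a)^{-J} <= a^{-J} for any a > 0 and J >= 1, so the relaxed
# bound always dominates; verify at a few arbitrary parameter triples.
cases = [(0.5, 1, 0.1), (1.0, 2, 0.3), (2.0, 3, 1.5)]
ok = all(bound_exact(e, J, S) <= bound_relaxed(e, J, S) for e, J, S in cases)
```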

Remark 2.5. As noted in the Introduction, Dehua et al. [10] have proved Theorem 2.4, but their proof is not correct.

Theorem 2.6. Let $\{X_{ni}, 1 \le i \le k_n, n \ge 1\}$ be an array of rowwise negatively associated random variables with $E X_{ni} = 0$ and $E X_{ni}^2 < \infty$ for $1 \le i \le k_n$, $n \ge 1$. Let $\{a_n, n \ge 1\}$ be a sequence of nonnegative constants. Suppose that the following conditions hold:

(i) $\sum_{n=1}^{\infty} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon) < \infty$ for all $\epsilon > 0$,

(ii) there exists $J \ge 1$ such that

$\sum_{n=1}^{\infty} a_n \left( \sum_{i=1}^{k_n} E X_{ni}^2 \right)^J < \infty.$
(2.8)

Then $\sum_{n=1}^{\infty} a_n P\left( \max_{1 \le m \le k_n} \left| \sum_{i=1}^{m} X_{ni} \right| > \epsilon \right) < \infty$ for all $\epsilon > 0$.

Proof. By Lemma 2.3 with $x = \epsilon$ and $y = \epsilon/(12J)$, we have that

$P\left( \max_{1 \le m \le k_n} \left| \sum_{i=1}^{m} X_{ni} \right| > \epsilon \right) \le 2 P\left( \max_{1 \le i \le k_n} |X_{ni}| > \epsilon/(12J) \right) + 8 \left( \frac{24 J}{3 \epsilon^2} \sum_{i=1}^{k_n} E X_{ni}^2 \right)^{J} \le 2 \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon/(12J)) + 8^{J+1} J^{J} \epsilon^{-2J} \left( \sum_{i=1}^{k_n} E X_{ni}^2 \right)^{J}.$
(2.9)

Hence, the result follows by conditions (i) and (ii).   ■

Remark 2.7. As noted in the Introduction, Chen et al. [9] have proved Theorem 2.6 under an additional condition (see also Theorem 1.5).

Theorem 1.6 can be proved by using Theorem 2.4, but it does not seem easy to derive Theorem 2.4 from Theorem 1.6. In this sense, Theorem 2.4 is more general than Theorem 1.6.

Proof of Theorem 1.6. Partition the set of all natural numbers into two subsets

$A' = \left\{ n : \sum_{i=1}^{k_n} P(|X_{ni}| > \delta) \le 1 \right\}, \qquad A'' = \left\{ n : \sum_{i=1}^{k_n} P(|X_{ni}| > \delta) > 1 \right\}.$
(2.10)

Applying (i), we obtain

$\sum_{n \in A''} a_n \le \sum_{n \in A''} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \delta) < \infty.$
(2.11)

Observing that

$P\left( \left| \sum_{i=1}^{k_n} (X_{ni} - E X_{ni} I(|X_{ni}| \le \delta)) \right| > \epsilon \right) \le \sum_{i=1}^{k_n} P(|X_{ni}| > \delta) + P\left( \left| \sum_{i=1}^{k_n} (X_{ni} I(|X_{ni}| \le \delta) - E X_{ni} I(|X_{ni}| \le \delta)) \right| > \epsilon \right),$
(2.12)

it is enough to show that

$I_1 := \sum_{n \in A'} a_n P\left( \left| \sum_{i=1}^{k_n} (X_{ni} I(|X_{ni}| \le \delta) - E X_{ni} I(|X_{ni}| \le \delta)) \right| > \epsilon \right) < \infty.$
(2.13)

For 1 ≤ ik n and n ≥ 1, define

$U_{ni} = \delta I(X_{ni} > \delta) + X_{ni} I(|X_{ni}| \le \delta) - \delta I(X_{ni} < -\delta), \qquad U'_{ni} = \delta I(X_{ni} > \delta) - \delta I(X_{ni} < -\delta).$
(2.14)

Then

$I_1 \le \sum_{n \in A'} a_n P\left( \left| \sum_{i=1}^{k_n} (U'_{ni} - E U'_{ni}) \right| > \epsilon/2 \right) + \sum_{n \in A'} a_n P\left( \left| \sum_{i=1}^{k_n} (U_{ni} - E U_{ni}) \right| > \epsilon/2 \right) =: I_2 + I_3.$
(2.15)

For $I_2$, we have by Markov's inequality and (i) that

$I_2 \le \frac{4}{\epsilon} \sum_{n \in A'} a_n \sum_{i=1}^{k_n} E |U'_{ni}| = \frac{4\delta}{\epsilon} \sum_{n \in A'} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \delta) < \infty.$
(2.16)

For $I_3$, we will apply Theorem 2.4 to the random variables $U_{ni} - E U_{ni}$. Note that $\{U_{ni} - E U_{ni}, 1 \le i \le k_n, n \ge 1\}$ is an array of rowwise negatively dependent random variables with mean zero and finite second moments, since each $U_{ni}$ is a nondecreasing function of $X_{ni}$ alone. By the $c_r$-inequality, we obtain that

$\sum_{n \in A'} a_n \left( \sum_{i=1}^{k_n} E (U_{ni} - E U_{ni})^2 \right)^J = \sum_{n \in A'} a_n \left( \sum_{i=1}^{k_n} E (X_{ni} I(|X_{ni}| \le \delta) - E X_{ni} I(|X_{ni}| \le \delta) + U'_{ni} - E U'_{ni})^2 \right)^J \le 2^{2J-1} \sum_{n \in A'} a_n \left( \sum_{i=1}^{k_n} Var(X_{ni} I(|X_{ni}| \le \delta)) \right)^J + 2^{2J-1} \sum_{n \in A'} a_n \left( \sum_{i=1}^{k_n} Var(U'_{ni}) \right)^J \le 2^{2J-1} \sum_{n \in A'} a_n \left( \sum_{i=1}^{k_n} Var(X_{ni} I(|X_{ni}| \le \delta)) \right)^J + 2^{2J-1} \delta^{2J} \sum_{n \in A'} a_n \left( \sum_{i=1}^{k_n} P(|X_{ni}| > \delta) \right)^J \le 2^{2J-1} \sum_{n \in A'} a_n \left( \sum_{i=1}^{k_n} Var(X_{ni} I(|X_{ni}| \le \delta)) \right)^J + 2^{2J-1} \delta^{2J} \sum_{n \in A'} a_n \sum_{i=1}^{k_n} P(|X_{ni}| > \delta).$
(2.17)

The last inequality follows by J ≥ 1 and the definition of A'. Hence, condition (ii) of Theorem 2.4 holds.
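The decomposition behind (2.15) and (2.17) rests on the pointwise identity $X I(|X| \le \delta) = U - U'$, where $U$ is the truncation (clipping) of $X$ at level $\delta$. The sketch below, with illustrative values of our own choosing, verifies this identity.

```python
def U(x, delta):
    """U = delta*I(x > delta) + x*I(|x| <= delta) - delta*I(x < -delta),
    i.e. x clipped to the interval [-delta, delta]."""
    return max(-delta, min(x, delta))

def U_prime(x, delta):
    """U' = delta*I(x > delta) - delta*I(x < -delta)."""
    return delta * (x > delta) - delta * (x < -delta)

# X*I(|X| <= delta) should equal U - U' for every x; check at sample points
# below, above, and inside the truncation level delta = 1.
delta = 1.0
xs = [-3.0, -1.0, -0.4, 0.0, 0.7, 1.0, 2.5]
identity_holds = all(
    abs((x if abs(x) <= delta else 0.0) - (U(x, delta) - U_prime(x, delta))) < 1e-12
    for x in xs
)
```

Centering the identity at the expectations gives exactly the splitting $X I(|X| \le \delta) - E X I(|X| \le \delta) = (U - EU) - (U' - EU')$ used in the proof.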

Finally, it remains to show that condition (i) of Theorem 2.4 holds, that is,

$\sum_{n \in A'} a_n \sum_{i=1}^{k_n} P(|U_{ni} - E U_{ni}| > \epsilon) < \infty \quad \text{for all } \epsilon > 0.$
(2.18)

Let $\epsilon > 0$ be given. Without loss of generality, we may assume that $0 < \epsilon < 4\delta$. For $1 \le i \le k_n$ and $n \ge 1$, define

$X'_{ni} = X_{ni} I(|X_{ni}| \le \epsilon/4), \qquad X''_{ni} = \delta I(X_{ni} > \delta) + X_{ni} I(\epsilon/4 < |X_{ni}| \le \delta) - \delta I(X_{ni} < -\delta).$
(2.19)

It follows that

$\sum_{i=1}^{k_n} P(|U_{ni} - E U_{ni}| > \epsilon) \le \sum_{i=1}^{k_n} P(|X'_{ni} - E X'_{ni}| > \epsilon/2) + \sum_{i=1}^{k_n} P(|X''_{ni} - E X''_{ni}| > \epsilon/2) = \sum_{i=1}^{k_n} P(|X''_{ni} - E X''_{ni}| > \epsilon/2) \le \frac{4}{\epsilon} \sum_{i=1}^{k_n} E |X''_{ni}| \le \frac{4\delta}{\epsilon} \sum_{i=1}^{k_n} P(|X_{ni}| > \delta) + \frac{4\delta}{\epsilon} \sum_{i=1}^{k_n} P(|X_{ni}| > \epsilon/4),$
(2.20)

which entails that (2.18) holds by (i). Hence, the result is proved.   ■

Theorem 1.4 can be proved by using Theorem 2.6, but it does not seem easy to derive Theorem 2.6 from Theorem 1.4. In this sense, Theorem 2.6 is more general than Theorem 1.4.

Proof of Theorem 1.4. Note that

$P\left( \max_{1 \le m \le k_n} \left| \sum_{i=1}^{m} (X_{ni} - E X_{ni} I(|X_{ni}| \le \delta)) \right| > \epsilon \right) \le \sum_{i=1}^{k_n} P(|X_{ni}| > \delta) + P\left( \max_{1 \le m \le k_n} \left| \sum_{i=1}^{m} (X_{ni} I(|X_{ni}| \le \delta) - E X_{ni} I(|X_{ni}| \le \delta)) \right| > \epsilon \right).$
(2.21)

The rest of the proof is the same as that of Theorem 1.6, except that Theorem 2.6 is used instead of Theorem 2.4, and is omitted.   ■

References

1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc Nat Acad Sci USA 1947, 33: 25–31. 10.1073/pnas.33.2.25

2. Erdös P: On a theorem of Hsu and Robbins. Ann Math Stat 1949, 20: 286–291. 10.1214/aoms/1177730037

3. Sung SH, Volodin AI, Hu TC: More on complete convergence for arrays. Stat Probab Lett 2005, 71: 303–311. 10.1016/j.spl.2004.11.006

4. Hu TC, Szynal D, Volodin A: A note on complete convergence for arrays. Stat Probab Lett 1998, 38: 27–31. 10.1016/S0167-7152(98)00150-3

5. Kruglov VM, Volodin AI, Hu TC: On complete convergence for arrays. Stat Probab Lett 2006, 76: 1631–1640. 10.1016/j.spl.2006.04.006

6. Alam K, Saxena KML: Positive dependence in multivariate distributions. Commun Stat Theor Meth 1981, 10: 1183–1196. 10.1080/03610928108828102

7. Joag-Dev K, Proschan F: Negative association of random variables, with applications. Ann Stat 1983, 11: 286–295. 10.1214/aos/1176346079

8. Lehmann EL: Some concepts of dependence. Ann Math Stat 1966, 37: 1137–1153. 10.1214/aoms/1177699260

9. Chen P, Hu TC, Liu X, Volodin A: On complete convergence for arrays of rowwise negatively associated random variables. Theor Probab Appl 2007, 52: 393–397.

10. Dehua Q, Chang KC, Giuliano Antonini R, Volodin A: On the strong rates of convergence for arrays of rowwise negatively dependent random variables. Stoch Anal Appl 2011, 29: 375–385. 10.1080/07362994.2011.548683

11. Fakoor V, Azarnoosh HA: Probability inequalities for sums of negatively dependent random variables. Pak J Stat 2005, 21: 257–264.

12. Shao QM: A comparison theorem on moment inequalities between negatively associated and independent random variables. J Theor Probab 2000, 13: 343–356. 10.1023/A:1007849609234

Acknowledgments

The author would like to thank the referees for the helpful comments and suggestions. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0013131).

Author information

Correspondence to Soo Hak Sung.

Competing interests

The author declares that they have no competing interests.


Sung, S.H. A note on the complete convergence for arrays of dependent random variables. J Inequal Appl 2011, 76 (2011). https://doi.org/10.1186/1029-242X-2011-76