
Complete convergence for Sung’s type weighted sums of END random variables

Journal of Inequalities and Applications 2014, 2014:353

https://doi.org/10.1186/1029-242X-2014-353

Received: 14 May 2014

Accepted: 11 September 2014

Published: 24 September 2014

Abstract

In this paper, the author studies complete convergence for Sung's type weighted sums of sequences of END random variables and obtains some new results. These results extend and improve the corresponding theorems of Sung (Discrete Dyn. Nat. Soc. 2010:630608, 2010, doi:10.1155/2010/630608).

MSC: 60F15.

Keywords

END random variables; complete convergence; weighted sums

1 Introduction and main results

The concept of complete convergence was introduced by Hsu and Robbins [1] as follows. A sequence of random variables $\{X_n, n \ge 1\}$ is said to converge completely to a constant $c$ if
$$\sum_{n=1}^{\infty} P(|X_n - c| > \varepsilon) < \infty, \quad \forall \varepsilon > 0.$$
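As a quick numerical illustration of this definition (our own sketch, not part of the paper's argument), take $X_n = \bar{Z}_n$, the sample mean of $n$ i.i.d. standard normal variables. Then $P(|\bar{Z}_n| > \varepsilon) = \operatorname{erfc}(\varepsilon\sqrt{n/2})$ in closed form, and the partial sums of these tail probabilities stabilize rapidly, so the sample means converge completely to $0$. The normal distribution and the value $\varepsilon = 0.5$ are our choices.

```python
import math

def tail_prob(n: int, eps: float) -> float:
    # The mean of n i.i.d. N(0,1) variables is N(0, 1/n), so
    # P(|mean| > eps) = P(|Z| > eps * sqrt(n)) = erfc(eps * sqrt(n / 2)).
    return math.erfc(eps * math.sqrt(n / 2.0))

def partial_series(N: int, eps: float) -> float:
    # Partial sum of the series appearing in the definition of
    # complete convergence.
    return sum(tail_prob(n, eps) for n in range(1, N + 1))

eps = 0.5
# Extending the sum from 200 to 1000 terms changes it only negligibly,
# which reflects the summability required by complete convergence.
print(partial_series(200, eps), partial_series(1000, eps))
```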

From then on, many authors have devoted their study to complete convergence; see [2–6], and so on.

Recently, Sung [5] obtained a complete convergence result for weighted sums of identically distributed $\rho^*$-mixing random variables (we call these Sung's type weighted sums).

Theorem A Let $p > 1/\alpha$ and $1/2 < \alpha \le 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed $\rho^*$-mixing random variables with $EX = 0$ and $E|X|^p < \infty$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying
$$\sum_{i=1}^{n} |a_{ni}|^q = O(n)$$
(1.1)
for some $q > p$. Then
$$\sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni} X_i\Big| > \varepsilon n^{\alpha}\Big) < \infty, \quad \forall \varepsilon > 0.$$
(1.2)

Conversely, (1.2) implies $EX = 0$ and $E|X|^p < \infty$ if (1.2) holds for any array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying (1.1) for some $q > p$.

In this paper, we extend Theorem A to the END setting. We first introduce the concept of END random variables.

Definition 1.1 Random variables $Y_1, Y_2, \ldots$ are said to be extended negatively dependent (END) if there exists a constant $M > 0$ such that, for each $n \ge 2$,
$$P(Y_1 \le y_1, \ldots, Y_n \le y_n) \le M \prod_{i=1}^{n} P(Y_i \le y_i)$$
and
$$P(Y_1 > y_1, \ldots, Y_n > y_n) \le M \prod_{i=1}^{n} P(Y_i > y_i)$$

hold for every sequence $\{y_1, \ldots, y_n\}$ of real numbers.
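For a concrete instance of the definition (our own illustration, not taken from the paper), consider the pair $(Y_1, Y_2)$ uniform on $\{(0,1), (1,0)\}$: the coordinates are negatively dependent, and both inequalities hold with $M = 1$. The sketch below enumerates a grid of thresholds and checks the two product bounds.

```python
from itertools import product

# Joint distribution of (Y1, Y2): uniform on {(0, 1), (1, 0)}.
prob = {(0, 1): 0.5, (1, 0): 0.5}

def joint_le(y1, y2):
    # P(Y1 <= y1, Y2 <= y2)
    return sum(p for (a, b), p in prob.items() if a <= y1 and b <= y2)

def joint_gt(y1, y2):
    # P(Y1 > y1, Y2 > y2)
    return sum(p for (a, b), p in prob.items() if a > y1 and b > y2)

def marg_le(i, y):
    # Marginal P(Y_i <= y)
    return sum(p for pt, p in prob.items() if pt[i] <= y)

M = 1.0  # both END inequalities hold here with M = 1, i.e. the pair is ND
for y1, y2 in product([-0.5, 0.0, 0.5, 1.0, 1.5], repeat=2):
    assert joint_le(y1, y2) <= M * marg_le(0, y1) * marg_le(1, y2) + 1e-12
    assert joint_gt(y1, y2) <= M * (1 - marg_le(0, y1)) * (1 - marg_le(1, y2)) + 1e-12
print("END inequalities hold with M = 1")
```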

The concept was introduced by Liu [7]. When $M = 1$, the notion of END random variables reduces to the well-known notion of negatively dependent (ND) random variables, which was first introduced by Ebrahimi and Ghosh [8]; some properties and limit results can be found in Alam and Saxena [9], Block et al. [10], Joag-Dev and Proschan [11], and Wu and Zhu [12]. As mentioned in Liu [7], the END structure is substantially more comprehensive than the ND structure in that it can reflect not only a negative dependence structure but also, to some extent, a positive one. Liu [7] pointed out that END random variables can be negatively or positively dependent and provided some interesting examples to support this idea. Joag-Dev and Proschan [11] also pointed out that negatively associated (NA) random variables must be ND, while ND random variables are not necessarily NA; thus NA random variables are END. A great number of articles on NA random variables have appeared in the literature, but only a few papers deal with END random variables. For example, Liu [7] obtained precise large deviations for END random variables with heavy tails, Liu [13] studied sufficient and necessary conditions for moderate deviations, and Qiu et al. [3] and Wu and Guan [14] studied complete convergence for weighted sums and for arrays of rowwise END random variables.

Now we state the main results; the lemmas and the proofs will be given in the next section.

Theorem 1.1 Let $p > 1/\alpha$ and $1/2 < \alpha \le 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables with $EX = 0$ and $E|X|^p < \infty$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying (1.1) for some $q > p$. Then (1.2) holds. Conversely, (1.2) implies $EX = 0$ and $E|X|^p < \infty$ if (1.2) holds for any array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying (1.1) for some $q > p$.

Remark 1.1 The main tool in the proof of Theorem A is the maximal Rosenthal moment inequality. However, we do not know whether the maximal Rosenthal moment inequality holds for an END sequence, so the proof of Theorem 1.1 is different from that of Theorem A.

Remark 1.2 Theorem 1.1 does not discuss the very interesting case $p\alpha = 1$. In fact, it is still an open problem whether (1.2) holds when $p\alpha = 1$, even for the partial sums of an END sequence. But we have the following partial result.

Theorem 1.2 Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables with $EX = 0$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying (1.1) for some $q > 1$. Then
$$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni} X_i\Big| > \varepsilon n\Big) < \infty, \quad \forall \varepsilon > 0.$$
(1.3)

Conversely, (1.3) implies $EX = 0$ if (1.3) holds for any array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying (1.1) for some $q > 1$.

Throughout this paper, $C$ always stands for a positive constant which may differ from one place to another.

2 Lemmas and proofs of main results

To prove the main results, we need the following lemmas.

Lemma 2.1 ([7])

Let $X_1, X_2, \ldots, X_n$ be END random variables. Assume that $f_1, f_2, \ldots, f_n$ are Borel functions, all of which are monotone increasing (or all monotone decreasing). Then $f_1(X_1), f_2(X_2), \ldots, f_n(X_n)$ are END random variables.

The following lemma is due to Chen et al. [15] when $1 < r < 2$ and to Shen [16] when $r \ge 2$.

Lemma 2.2 For any $r > 1$, there is a positive constant $C_r$ depending only on $r$ such that if $\{X_n, n \ge 1\}$ is a sequence of END random variables with $EX_n = 0$ for every $n \ge 1$, then, for all $n \ge 1$,
$$E\Big|\sum_{i=1}^{n} X_i\Big|^r \le C_r \sum_{i=1}^{n} E|X_i|^r$$
holds when $1 < r < 2$, and
$$E\Big|\sum_{i=1}^{n} X_i\Big|^r \le C_r \Big\{\sum_{i=1}^{n} E|X_i|^r + \Big(\sum_{i=1}^{n} E|X_i|^2\Big)^{r/2}\Big\}$$

holds when $r \ge 2$.

By Lemma 2.2 and the same argument as in Theorem 2.3.1 of Stout [17], the following lemma holds.

Lemma 2.3 For any $r > 1$, there is a positive constant $C_r$ depending only on $r$ such that if $\{X_n, n \ge 1\}$ is a sequence of END random variables with $EX_n = 0$ for every $n \ge 1$, then, for all $n \ge 1$,
$$E\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} X_i\Big|^r \le C_r (\log n)^r \sum_{i=1}^{n} E|X_i|^r$$
holds when $1 < r < 2$, and
$$E\max_{1 \le k \le n}\Big|\sum_{i=1}^{k} X_i\Big|^r \le C_r (\log n)^r \Big\{\sum_{i=1}^{n} E|X_i|^r + \Big(\sum_{i=1}^{n} E|X_i|^2\Big)^{r/2}\Big\}$$

holds when $r \ge 2$.

Lemma 2.4 Let $p > 1/\alpha$ and $1/2 < \alpha \le 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables with $EX = 0$ and $E|X|^p < \infty$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying $|a_{ni}| \le 1$ for $1 \le i \le n$ and $n \ge 1$. Then (1.2) holds.

Proof Without loss of generality, we can assume that
$$a_{ni} \ge 0, \quad 1 \le i \le n,\ n \ge 1,$$
(2.1)
from which it follows that
$$\sum_{i=1}^{n} a_{ni}^{\tau} \le n, \quad \forall \tau \ge 1.$$
(2.2)

Since $p > 1/\alpha$ and $1/2 < \alpha \le 1$, we have $0 \le (1-\alpha)/(p\alpha-\alpha) < 1$. We take $t$ such that $(1-\alpha)/(p\alpha-\alpha) < t < 1$.

For $1 \le i \le n$, $n \ge 1$, set
$$\begin{aligned}
X_{ni}^{(1)} &= -n^{t\alpha} I(X_i < -n^{t\alpha}) + X_i I(|X_i| \le n^{t\alpha}) + n^{t\alpha} I(X_i > n^{t\alpha}), \\
X_{ni}^{(2)} &= (X_i - n^{t\alpha}) I(n^{t\alpha} < X_i \le n^{\alpha}) + n^{\alpha} I(X_i > n^{\alpha}), \\
X_{ni}^{(3)} &= (X_i - n^{t\alpha} - n^{\alpha}) I(X_i > n^{\alpha}), \\
X_{ni}^{(4)} &= (X_i + n^{t\alpha}) I(-n^{\alpha} \le X_i < -n^{t\alpha}) - n^{\alpha} I(X_i < -n^{\alpha}), \\
X_{ni}^{(5)} &= (X_i + n^{t\alpha} + n^{\alpha}) I(X_i < -n^{\alpha}).
\end{aligned}$$
Therefore
$$\sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni} X_i\Big| > \varepsilon n^{\alpha}\Big) \le \sum_{l=1}^{5} \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni} X_{ni}^{(l)}\Big| > \varepsilon n^{\alpha}/5\Big) := \sum_{l=1}^{5} I_l.$$
For $I_3$,
$$I_3 \le \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\bigcup_{i=1}^{n}\big(X_{ni}^{(3)} \ne 0\big)\Big) \le \sum_{n=1}^{\infty} n^{p\alpha-2} \sum_{i=1}^{n} P(|X_i| > n^{\alpha}) = \sum_{n=1}^{\infty} n^{p\alpha-1} P(|X| > n^{\alpha}) \le C E|X|^p < \infty.$$
(2.3)

By the same argument as in (2.3), we also have $I_5 < \infty$.

For $I_1$, by $EX = 0$, Markov's inequality, (2.1), (2.2), and $(1-\alpha)/(p\alpha-\alpha) < t < 1$,
$$n^{-\alpha} \max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni} E X_{ni}^{(1)}\Big| \le 2 n^{-\alpha} \sum_{i=1}^{n} a_{ni} E|X_i| I(|X_i| > n^{t\alpha}) \le 2 n^{-\alpha} E|X| I(|X| > n^{t\alpha}) \sum_{i=1}^{n} a_{ni} \le 2 n^{1-\alpha-(p\alpha-\alpha)t} E|X|^p I(|X| > n^{t\alpha}) \to 0, \quad n \to \infty.$$
(2.4)
Hence, to prove $I_1 < \infty$, it is enough to show that
$$I_1^* := \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{ni}^{(1)} - E X_{ni}^{(1)}\big)\Big| > \varepsilon n^{\alpha}/10\Big) < \infty.$$
By the Markov inequality, Lemma 2.1, Lemma 2.3, the $C_r$-inequality, (2.1), and (2.2), for any $r \ge 2$,
$$\begin{aligned}
I_1^* &\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} E\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{ni}^{(1)} - E X_{ni}^{(1)}\big)\Big|^r \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \Big\{\sum_{i=1}^{n} a_{ni}^r E\big|X_{ni}^{(1)}\big|^r + \Big(\sum_{i=1}^{n} a_{ni}^2 E\big|X_{ni}^{(1)}\big|^2\Big)^{r/2}\Big\} \\
&\le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-1} (\log n)^r E\big|X_{n1}^{(1)}\big|^r + C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} (\log n)^r \big(E\big|X_{n1}^{(1)}\big|^2\big)^{r/2} := C I_{11} + C I_{12}.
\end{aligned}$$
(2.5)
If $p \ge 2$, then $(p\alpha-1)/(\alpha-1/2) \ge p$. Taking $r$ such that $r > (p\alpha-1)/(\alpha-1/2)$,
$$I_{12} \le \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} (\log n)^r \big(E|X|^2\big)^{r/2} < \infty.$$
Since $r > p$ and $t < 1$, by the $C_r$-inequality we get
$$I_{11} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-1} (\log n)^r \big\{E|X|^r I(|X| \le n^{t\alpha}) + n^{tr\alpha} P(|X| > n^{t\alpha})\big\} \le C \sum_{n=1}^{\infty} n^{(p-r)(1-t)\alpha-1} (\log n)^r E|X|^p < \infty.$$
(2.6)

If $p < 2$, then we can take $r = 2$; in this case $I_{11} = I_{12}$ in (2.5). Since $r > p$ and $t < 1$, (2.6) still holds. Therefore $I_1 < \infty$.

For $I_2$, note that $I_2 = \sum_{n=1}^{\infty} n^{p\alpha-2} P\big(\sum_{i=1}^{n} a_{ni} X_{ni}^{(2)} > \varepsilon n^{\alpha}/5\big)$, since $a_{ni} X_{ni}^{(2)} \ge 0$. By (2.1) and (2.2),
$$0 \le n^{-\alpha} \sum_{i=1}^{n} E\big(a_{ni} X_{ni}^{(2)}\big) \le n^{1-\alpha}\big\{E X I(n^{t\alpha} < X \le n^{\alpha}) + n^{\alpha} P(X > n^{\alpha})\big\} \le n^{1-\alpha} E|X| I(|X| > n^{t\alpha}) + n P(|X| > n^{\alpha}) \le n^{1-\alpha-(p\alpha-\alpha)t} E|X|^p I(|X| > n^{t\alpha}) + n^{1-p\alpha} E|X|^p I(|X| > n^{\alpha}) \to 0, \quad n \to \infty.$$
(2.7)
Hence, in order to prove $I_2 < \infty$, it is enough to show that
$$I_2^* := \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\sum_{i=1}^{n} a_{ni}\big(X_{ni}^{(2)} - E X_{ni}^{(2)}\big) > \varepsilon n^{\alpha}/10\Big) < \infty.$$
By the Markov inequality, Lemma 2.1, Lemma 2.2, the $C_r$-inequality, (2.1), and (2.2), we have, for any $r \ge 2$,
$$I_2^* \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2}\Big\{\sum_{i=1}^{n} a_{ni}^r E\big|X_{ni}^{(2)}\big|^r + \Big(\sum_{i=1}^{n} a_{ni}^2 E\big|X_{ni}^{(2)}\big|^2\Big)^{r/2}\Big\} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-1} E\big|X_{n1}^{(2)}\big|^r + C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} \big(E\big|X_{n1}^{(2)}\big|^2\big)^{r/2} := C I_{21} + C I_{22}.$$
(2.8)
If $p \ge 2$, we take $r$ such that $r > (p\alpha-1)/(\alpha-1/2)$. It follows that
$$I_{22} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} \big(E|X|^2\big)^{r/2} < \infty.$$
Since $r > p$, we get by (2.9) of Sung [4]
$$I_{21} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-1} E|X|^r I(|X| \le n^{\alpha}) + C \sum_{n=1}^{\infty} n^{p\alpha-1} P(|X| > n^{\alpha}) \le C E|X|^p < \infty.$$
(2.9)

If $p < 2$, then we take $r = 2$; in this case $I_{21} = I_{22}$. Since $r > p$, (2.9) still holds. Therefore, $I_2 < \infty$. Similarly to the proof of $I_2 < \infty$, we also have $I_4 < \infty$. Thus, (1.2) holds. □
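The five truncated parts defined in the proof above reconstruct each $X_i$ exactly, $X_{ni}^{(1)} + \cdots + X_{ni}^{(5)} = X_i$, which is what justifies splitting the probability into $I_1, \ldots, I_5$. A small sketch (our own, with arbitrary sample values and truncation levels standing in for $n^{t\alpha} < n^{\alpha}$) confirms the identity:

```python
def truncation_parts(x: float, low: float, high: float):
    """Five-part truncation at levels 0 < low < high, mirroring the proof
    (low plays the role of n**(t*alpha), high of n**alpha)."""
    ind = lambda c: 1.0 if c else 0.0  # indicator function I(.)
    x1 = -low * ind(x < -low) + x * ind(abs(x) <= low) + low * ind(x > low)
    x2 = (x - low) * ind(low < x <= high) + high * ind(x > high)
    x3 = (x - low - high) * ind(x > high)
    x4 = (x + low) * ind(-high <= x < -low) - high * ind(x < -high)
    x5 = (x + low + high) * ind(x < -high)
    return x1, x2, x3, x4, x5

low, high = 2.0, 5.0
for x in [-9.0, -5.0, -3.5, -2.0, 0.0, 1.5, 2.0, 4.0, 5.0, 7.5]:
    parts = truncation_parts(x, low, high)
    assert abs(sum(parts) - x) < 1e-12  # the parts always sum back to x
print("decomposition verified")
```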

Lemma 2.5 Let $p > 1/\alpha$ and $1/2 < \alpha \le 1$. Let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables with $EX = 0$ and $E|X|^p < \infty$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying (1.1) for some $q > p$ and either $a_{ni} = 0$ or $|a_{ni}| > 1$. Then (1.2) holds.

Proof Let $t$ be as in Lemma 2.4. Without loss of generality, we may assume that $a_{ni} \ge 0$ and $\sum_{i=1}^{n} a_{ni}^q \le n$ for some $q > p$; thus we have
$$\sum_{i=1}^{n} a_{ni}^{\tau} \le n, \quad \forall\, 0 < \tau \le q.$$
(2.10)
Similarly to the proof of Lemma 2.4 of Sung [5], we may assume that (2.10) holds for some $p < q < 2$ when $p < 2$. For $1 \le i \le n$, $n \ge 1$, set
$$\begin{aligned}
X_{ni}^{(1)} &= -n^{t\alpha} I(a_{ni} X_i < -n^{t\alpha}) + a_{ni} X_i I(|a_{ni} X_i| \le n^{t\alpha}) + n^{t\alpha} I(a_{ni} X_i > n^{t\alpha}), \\
X_{ni}^{(2)} &= (a_{ni} X_i - n^{t\alpha}) I(n^{t\alpha} < a_{ni} X_i \le n^{\alpha}) + n^{\alpha} I(a_{ni} X_i > n^{\alpha}), \\
X_{ni}^{(3)} &= (a_{ni} X_i - n^{t\alpha} - n^{\alpha}) I(a_{ni} X_i > n^{\alpha}), \\
X_{ni}^{(4)} &= (a_{ni} X_i + n^{t\alpha}) I(-n^{\alpha} \le a_{ni} X_i < -n^{t\alpha}) - n^{\alpha} I(a_{ni} X_i < -n^{\alpha}), \\
X_{ni}^{(5)} &= (a_{ni} X_i + n^{t\alpha} + n^{\alpha}) I(a_{ni} X_i < -n^{\alpha}).
\end{aligned}$$
Therefore
$$\sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni} X_i\Big| > \varepsilon n^{\alpha}\Big) \le \sum_{l=1}^{5} \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} X_{ni}^{(l)}\Big| > \varepsilon n^{\alpha}/5\Big) := \sum_{l=1}^{5} I_l.$$
By the proof of Lemma 2.4 in Sung [5], we have
$$I_3 \le \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\bigcup_{i=1}^{n}\big(X_{ni}^{(3)} \ne 0\big)\Big) \le \sum_{n=1}^{\infty} n^{p\alpha-2} \sum_{i=1}^{n} P(|a_{ni} X_i| > n^{\alpha}) \le C E|X|^p < \infty.$$
(2.11)

Similarly, we have $I_5 < \infty$.

For $I_1$, since $EX_i = 0$, $p > 1/\alpha$, $1/2 < \alpha \le 1$, $(1-\alpha)/(p\alpha-\alpha) < t < 1$, and (2.10), we get
$$n^{-\alpha} \max_{1 \le j \le n}\Big|\sum_{i=1}^{j} E X_{ni}^{(1)}\Big| \le 2 n^{-\alpha} \sum_{i=1}^{n} E|a_{ni} X_i| I(|a_{ni} X_i| > n^{t\alpha}) \le 2 n^{-\alpha-(p-1)t\alpha} \sum_{i=1}^{n} a_{ni}^p E|X_i|^p I(|a_{ni} X_i| > n^{t\alpha}) \le C n^{-\alpha-(p-1)t\alpha} \sum_{i=1}^{n} a_{ni}^p \le C n^{1-\alpha-(p-1)t\alpha} \to 0, \quad n \to \infty.$$
(2.12)
Hence, in order to prove $I_1 < \infty$, it is enough to show that
$$I_1^* := \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j}\big(X_{ni}^{(1)} - E X_{ni}^{(1)}\big)\Big| > \varepsilon n^{\alpha}/10\Big) < \infty.$$
Similarly to the proof of (2.5), we have, for any $r \ge 2$,
$$I_1^* \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \sum_{i=1}^{n} E\big|X_{ni}^{(1)}\big|^r + C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \Big(\sum_{i=1}^{n} E\big|X_{ni}^{(1)}\big|^2\Big)^{r/2} := C I_{11} + C I_{12}.$$
(2.13)
If $p \ge 2$, we take $r$ such that $r > (p\alpha-1)/(\alpha-1/2)$. By (2.10),
$$I_{12} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \Big(\sum_{i=1}^{n} a_{ni}^2 E|X_1|^2\Big)^{r/2} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} (\log n)^r < \infty.$$
Since $r > p$ and $0 < t < 1$, we get by (2.10)
$$I_{11} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \Big\{\sum_{i=1}^{n}\big(E|a_{ni} X_i|^r I(|a_{ni} X_i| \le n^{t\alpha}) + n^{tr\alpha} P(|a_{ni} X_i| > n^{t\alpha})\big)\Big\} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} (\log n)^r \Big\{\sum_{i=1}^{n} n^{(r-p)t\alpha} a_{ni}^p E|X_i|^p\Big\} \le C \sum_{n=1}^{\infty} n^{(p-r)(1-t)\alpha-1} (\log n)^r E|X|^p < \infty.$$
(2.14)

If $p < 2$, then we take $r = 2$; in this case $I_{11} = I_{12}$ in (2.13). Since $r > p$ and $t < 1$, (2.14) still holds. Therefore $I_1 < \infty$.

For $I_2$, since $(1-\alpha)/(p\alpha-\alpha) < t < 1$, we have by (2.10)
$$0 \le n^{-\alpha} \sum_{i=1}^{n} E\big(X_{ni}^{(2)}\big) \le n^{-\alpha} \sum_{i=1}^{n}\big\{E a_{ni} X_i I(n^{t\alpha} < a_{ni} X_i \le n^{\alpha}) + n^{\alpha} P(a_{ni} X_i > n^{\alpha})\big\} \le \sum_{i=1}^{n}\big\{n^{-\alpha} E a_{ni} X_i I(a_{ni} X_i > n^{t\alpha}) + P(a_{ni} X_i > n^{\alpha})\big\} \le \sum_{i=1}^{n}\big\{n^{-(p-1)t\alpha-\alpha} E|a_{ni} X_i|^p I(|a_{ni} X_i| > n^{t\alpha}) + n^{-p\alpha} E|a_{ni} X_i|^p I(|a_{ni} X_i| > n^{\alpha})\big\} \le C \sum_{i=1}^{n} a_{ni}^p\big(n^{-(p-1)t\alpha-\alpha} + n^{-p\alpha}\big) \le C n^{1-\alpha-(p-1)t\alpha} + C n^{1-p\alpha} \to 0, \quad n \to \infty.$$
(2.15)
Hence, in order to prove $I_2 < \infty$, it is enough to show that
$$I_2^* := \sum_{n=1}^{\infty} n^{p\alpha-2} P\Big(\sum_{i=1}^{n}\big(X_{ni}^{(2)} - E X_{ni}^{(2)}\big) > \varepsilon n^{\alpha}/10\Big) < \infty.$$
Similarly to the proof of (2.8), we have, for any $r \ge 2$,
$$I_2^* \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \sum_{i=1}^{n} E\big|X_{ni}^{(2)}\big|^r + C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \Big(\sum_{i=1}^{n} E\big|X_{ni}^{(2)}\big|^2\Big)^{r/2} := C I_{21} + C I_{22}.$$
(2.16)
If $p \ge 2$, we take $r$ such that $r > \max\{(p\alpha-1)/(\alpha-1/2), q\}$. By (2.10), we have
$$I_{22} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \Big(\sum_{i=1}^{n} E\big\{|a_{ni} X_i|^2 I(|a_{ni} X_i| \le n^{\alpha}) + n^{2\alpha} P(|a_{ni} X_i| > n^{\alpha})\big\}\Big)^{r/2} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \Big(\sum_{i=1}^{n} a_{ni}^2 E|X_i|^2\Big)^{r/2} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2+r/2} < \infty,$$
and we get by the $C_r$-inequality, (2.21)-(2.23) of Sung [5], and (2.16)
$$I_{21} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \sum_{i=1}^{n} E\big\{(a_{ni} X_i)^r I(n^{t\alpha} < a_{ni} X_i \le n^{\alpha}) + n^{r\alpha} I(a_{ni} X_i > n^{\alpha})\big\} \le C \sum_{n=1}^{\infty} n^{(p-r)\alpha-2} \sum_{i=1}^{n} E|a_{ni} X_i|^r I(|a_{ni} X_i| \le n^{\alpha}) + C \sum_{n=1}^{\infty} n^{p\alpha-2} \sum_{i=1}^{n} P(|a_{ni} X_i| > n^{\alpha}) \le C E|X|^p < \infty.$$
(2.17)

If $p < 2$, then we take $r = 2$; in this case $I_{21} = I_{22}$ in (2.16). Similarly to the proof of Lemma 2.4 of Sung [5], (2.17) still holds. Therefore, $I_2 < \infty$. Similarly to the proof of $I_2 < \infty$, we have $I_4 < \infty$. Thus, (1.2) holds. □

Proof of Theorem 1.1 By Lemmas 2.4 and 2.5, the proof is similar to that in Sung [4], so we omit the details. □

Proof of Theorem 1.2 Sufficiency. Without loss of generality, we can assume that $a_{ni} \ge 0$ and that, by the Hölder inequality, (1.1) holds for some $1 < q \le 2$. We first prove that
$$\sum_{n=1}^{\infty} n^{-1} P\Big(\Big|\sum_{i=1}^{n} a_{ni} X_i\Big| > \varepsilon n\Big) < \infty, \quad \forall \varepsilon > 0.$$
(2.18)
For $1 \le i \le n$, $n \ge 1$, set
$$X_{ni}^{(1)} = -n I(X_i < -n) + X_i I(|X_i| \le n) + n I(X_i > n), \qquad X_{ni}^{(2)} = X_i - X_{ni}^{(1)}.$$
Note that $EX = 0$; by the Hölder inequality,
$$n^{-1}\Big|E\sum_{i=1}^{n} a_{ni} X_{ni}^{(1)}\Big| \le C E|X| I(|X| > n) \to 0.$$
Hence, to prove (2.18), it is enough to show that, for any $\varepsilon > 0$,
$$I_1 := \sum_{n=1}^{\infty} n^{-1} P\Big(\Big|\sum_{i=1}^{n} a_{ni}\big(X_{ni}^{(1)} - E X_{ni}^{(1)}\big)\Big| > \varepsilon n\Big) < \infty$$
and
$$I_2 := \sum_{n=1}^{\infty} n^{-1} P\Big(\Big|\sum_{i=1}^{n} a_{ni} X_{ni}^{(2)}\Big| > \varepsilon n\Big) < \infty.$$
By the Markov inequality, Lemma 2.2, the $C_r$-inequality, (1.1), and a standard computation,
$$I_1 \le C \sum_{n=1}^{\infty} n^{-1-q} E\Big|\sum_{i=1}^{n} a_{ni}\big(X_{ni}^{(1)} - E X_{ni}^{(1)}\big)\Big|^q \le C \sum_{n=1}^{\infty} n^{-1-q}\Big(\sum_{i=1}^{n} |a_{ni}|^q\Big)\big\{E|X|^q I(|X| \le n) + n^q P(|X| > n)\big\} \le C \sum_{n=1}^{\infty} n^{-q} E|X|^q I(|X| \le n) + C \sum_{n=1}^{\infty} P(|X| > n) \le C E|X| < \infty.$$
Obviously,
$$I_2 \le \sum_{n=1}^{\infty} P(|X| > n) \le C E|X| < \infty.$$
To prove (1.3), it is enough to prove that
$$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_i^+ - E X_i^+\big)\Big| > \varepsilon n\Big) < \infty, \quad \forall \varepsilon > 0$$
(2.19)
and
$$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_i^- - E X_i^-\big)\Big| > \varepsilon n\Big) < \infty, \quad \forall \varepsilon > 0,$$
(2.20)

where $x^+ = \max\{0, x\}$ and $x^- = (-x)^+$.
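The reduction of (1.3) to the two centered conditions (2.19) and (2.20) rests on the elementary identities $x = x^+ - x^-$ and $|x| = x^+ + x^-$, together with the triangle inequality applied to the weighted sums. A tiny sketch (our own, with arbitrary numbers and the empirical mean standing in for $EX$) verifies the splitting:

```python
def pos(x):
    return max(0.0, x)       # x^+ = max{0, x}

def neg(x):
    return max(0.0, -x)      # x^- = (-x)^+

xs = [-3.2, -1.0, 0.0, 0.7, 2.5]
for x in xs:
    assert abs((pos(x) - neg(x)) - x) < 1e-12       # x = x^+ - x^-
    assert abs((pos(x) + neg(x)) - abs(x)) < 1e-12  # |x| = x^+ + x^-

# Splitting a centered weighted sum via the triangle inequality:
a = [0.5, 1.2, 0.3, 2.0, 0.9]
m = sum(xs) / len(xs)                        # stand-in for EX
mp = sum(pos(x) for x in xs) / len(xs)       # stand-in for EX^+
mn = sum(neg(x) for x in xs) / len(xs)       # stand-in for EX^-
lhs = abs(sum(ai * (x - m) for ai, x in zip(a, xs)))
rhs = abs(sum(ai * (pos(x) - mp) for ai, x in zip(a, xs))) + \
      abs(sum(ai * (neg(x) - mn) for ai, x in zip(a, xs)))
assert lhs <= rhs + 1e-12
print("positive/negative-part splitting verified")
```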

Let $\varepsilon > 0$ be given. Since $EX^+ \le E|X| < \infty$, by (1.1) and the Hölder inequality there exists a constant $x = x(\varepsilon) > 0$ such that
$$n^{-1} \sum_{i=1}^{n} a_{ni} E\big(X_i^+ - x\big) I\big(X_i^+ > x\big) = n^{-1}\Big(\sum_{i=1}^{n} a_{ni}\Big) E\big\{(X^+ - x) I(X^+ > x)\big\} \le \varepsilon/6.$$
(2.21)
Set
$$X_{i,x}^{(1)} = X_i^+ I\big(X_i^+ \le x\big) + x I\big(X_i^+ > x\big), \qquad X_{i,x}^{(2)} = X_i^+ - X_{i,x}^{(1)}.$$
Note that by (2.21)
$$\max_{1 \le j \le n} n^{-1}\Big|\sum_{i=1}^{j} a_{ni}\big(X_i^+ - E X_i^+\big)\Big| \le \max_{1 \le j \le n} n^{-1}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(1)} - E X_{i,x}^{(1)}\big)\Big| + \max_{1 \le j \le n} n^{-1}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(2)} - E X_{i,x}^{(2)}\big)\Big| \le \max_{1 \le j \le n} n^{-1}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(1)} - E X_{i,x}^{(1)}\big)\Big| + n^{-1}\Big|\sum_{i=1}^{n} a_{ni}\big(X_{i,x}^{(2)} - E X_{i,x}^{(2)}\big)\Big| + \varepsilon/3.$$
Therefore, to prove (2.19), it is enough to prove that
$$I_3 := \sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(1)} - E X_{i,x}^{(1)}\big)\Big| > \varepsilon n/3\Big) < \infty$$
and
$$I_4 := \sum_{n=1}^{\infty} n^{-1} P\Big(\Big|\sum_{i=1}^{n} a_{ni}\big(X_{i,x}^{(2)} - E X_{i,x}^{(2)}\big)\Big| > \varepsilon n/3\Big) < \infty.$$
By the Markov inequality, Lemma 2.3, and (1.1),
$$I_3 \le C \sum_{n=1}^{\infty} n^{-1-q} E\max_{1 \le j \le n}\Big|\sum_{i=1}^{j} a_{ni}\big(X_{i,x}^{(1)} - E X_{i,x}^{(1)}\big)\Big|^q \le C \sum_{n=1}^{\infty} n^{-q} (\log n)^q < \infty.$$

By Lemma 2.1, $\{X_{i,x}^{(2)} - E X_{i,x}^{(2)}, i \ge 1\}$ is a sequence of identically distributed END random variables with zero mean. Then $I_4 < \infty$ follows by taking $\{X_{i,x}^{(2)} - E X_{i,x}^{(2)}, i \ge 1\}$ instead of $\{X_i, i \ge 1\}$ in (2.18). Hence (2.19) holds.

The proof of (2.20) is the same as that of (2.19).

Necessity. The proof is similar to that of Theorem 2.2 in Sung [5], so we omit the details. This completes the proof. □

Declarations

Acknowledgements

The author would like to thank the referees and the editors for their helpful comments and suggestions. The research is supported by the General Project of the Science and Technology Department of Hunan Province (2013NK3017) and by the Agriculture Science and Technology Supporting Programme of the Hengyang Municipal Bureau of Science and Technology (2013KN36).

Authors’ Affiliations

Department of Mathematics and Computational Science, Hengyang Normal University

References

1. Hsu P, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33: 25–31. doi:10.1073/pnas.33.2.25
2. Chen P, Hu T-C, Volodin A: Limiting behavior of moving average processes under negative association. Teor. Imovir. Mat. Stat. 2007, 7: 154–166.
3. Qiu D, Chen P, Antonini RG, Volodin A: On the complete convergence for arrays of rowwise extended negatively dependent random variables. J. Korean Math. Soc. 2013, 50(2): 379–392. doi:10.4134/JKMS.2013.50.2.379
4. Sung SH: Complete convergence for weighted sums of random variables. Stat. Probab. Lett. 2007, 77(3): 303–311. doi:10.1016/j.spl.2006.07.010
5. Sung SH: Complete convergence for weighted sums of $\rho^*$-mixing random variables. Discrete Dyn. Nat. Soc. 2010, 2010: Article ID 630608. doi:10.1155/2010/630608
6. Zhang L, Wang J: A note on complete convergence of pairwise NQD random sequences. Appl. Math. J. Chin. Univ. Ser. A 2004, 19(2): 203–208. doi:10.1007/s11766-004-0055-4
7. Liu L: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 2009, 79(9): 1290–1298. doi:10.1016/j.spl.2009.02.001
8. Ebrahimi N, Ghosh M: Multivariate negative dependence. Commun. Stat., Theory Methods 1981, 10: 307–336. doi:10.1080/03610928108828041
9. Alam K, Saxena KML: Positive dependence in multivariate distributions. Commun. Stat., Theory Methods 1981, 10(12): 1183–1196. doi:10.1080/03610928108828102
10. Block HW, Savits TH, Shaked M: Some concepts of negative dependence. Ann. Probab. 1982, 10(3): 765–772. doi:10.1214/aop/1176993784
11. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann. Stat. 1983, 11(1): 286–295. doi:10.1214/aos/1176346079
12. Wu YF, Zhu DJ: Convergence properties of partial sums for arrays of rowwise negatively orthant dependent random variables. J. Korean Stat. Soc. 2010, 39: 189–197. doi:10.1016/j.jkss.2009.05.003
13. Liu L: Necessary and sufficient conditions for moderate deviations of dependent random variables with heavy tails. Sci. China Math. 2010, 53(6): 1421–1434. doi:10.1007/s11425-010-4012-9
14. Wu YF, Guan M: Convergence properties of the partial sums for sequences of END random variables. J. Korean Math. Soc. 2012, 49: 1097–1110. doi:10.4134/JKMS.2012.49.6.1097
15. Chen P, Bai P, Sung SH: The von Bahr-Esseen moment inequality for pairwise independent random variables and applications. J. Math. Anal. Appl. 2014, 419: 1290–1302. doi:10.1016/j.jmaa.2014.05.067
16. Shen A: Probability inequalities for END sequence and their applications. J. Inequal. Appl. 2011, 2011: Article ID 98. doi:10.1186/1029-242X-2011-98
17. Stout WF: Almost Sure Convergence. Academic Press, New York; 1974.

Copyright

© Zhang; licensee Springer 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.