Open Access

Complete convergence for negatively orthant dependent random variables

Journal of Inequalities and Applications 2014, 2014:145

https://doi.org/10.1186/1029-242X-2014-145

Received: 14 November 2013

Accepted: 26 March 2014

Published: 9 April 2014

Abstract

In this paper, necessary and sufficient conditions for complete convergence are obtained for maximum partial sums of negatively orthant dependent (NOD) random variables. The results extend and improve those of Kuczmaszewska (Acta Math. Hung. 128(1-2):116-130, 2010) for negatively associated (NA) random variables.

MSC: 60F15; 60G50

Keywords

NOD; complete convergence

1 Introduction

The concept of complete convergence for a sequence of random variables was introduced by Hsu and Robbins [1] as follows. A sequence $\{U_n, n\ge 1\}$ of random variables converges completely to the constant $\theta$ if
$$
\sum_{n=1}^{\infty}P\bigl(|U_n-\theta|>\varepsilon\bigr)<\infty\quad\text{for all }\varepsilon>0.
$$
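To make the definition concrete, the following Monte Carlo sketch (our own illustration, not part of the original paper; the standard normal summands, the level $\varepsilon=0.5$, and the sample sizes are arbitrary choices) estimates $P(|S_n/n|>\varepsilon)$ for i.i.d. means and shows the rapid decay that makes the series above converge:

```python
import numpy as np

# Illustrative sketch (not from the paper): estimate P(|S_n/n| > eps) for
# i.i.d. standard normal summands; the Hsu-Robbins theorem predicts these
# probabilities are summable in n when the variance is finite.
rng = np.random.default_rng(0)
eps, reps = 0.5, 20_000
total = 0.0
for n in [10, 20, 40, 80, 160, 320]:
    means = rng.standard_normal((reps, n)).mean(axis=1)
    p_hat = np.mean(np.abs(means) > eps)  # estimate of P(|S_n/n| > eps)
    total += p_hat
    print(f"n={n:4d}  P(|S_n/n| > {eps}) ~ {p_hat:.5f}")
print("partial sum of the estimated probabilities:", total)
```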

Moreover, they proved that the sequence of arithmetic means of independent identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. This result has been generalized and extended in several directions by many authors; one can refer to [2-16], and so forth. Kuczmaszewska [8] proved the following result.

Theorem A Let $\{X_n, n\ge1\}$ be a sequence of negatively associated (NA) random variables and let $X$ be a random variable, possibly defined on a different space, satisfying the condition
$$
\frac1n\sum_{i=1}^{n}P\bigl(|X_i|>x\bigr)\le D\,P\bigl(|X|>x\bigr)
$$
for all $x>0$, all $n\ge1$, and some positive constant $D$. Let $\alpha p>1$ and $\alpha>1/2$. Moreover, additionally assume that $EX_n=0$ for all $n\ge1$ if $p\ge1$. Then the following statements are equivalent:

(i) $E|X|^{p}<\infty$;

(ii) $\sum_{n=1}^{\infty}n^{\alpha p-2}P\bigl(\max_{1\le j\le n}\bigl|\sum_{i=1}^{j}X_i\bigr|>\varepsilon n^{\alpha}\bigr)<\infty$, $\forall\varepsilon>0$.

The aim of this paper is to extend and improve Theorem A to negatively orthant dependent (NOD) random variables. The key tool in the proof of Theorem A is the Rosenthal maximal inequality for NA sequences (cf. [17]), but no maximal inequality of this kind has been established for NOD sequences. Hence the truncation method used here is different, and the proofs of our main results are more complicated and difficult.

The concepts of negative association (NA) and negative orthant dependence (NOD) were introduced by Joag-Dev and Proschan [18] in the following way.

Definition 1.1 A finite family of random variables $\{X_i, 1\le i\le n\}$ is said to be negatively associated (NA) if for every pair of disjoint nonempty subsets $A_1$, $A_2$ of $\{1,2,\dots,n\}$,
$$
\operatorname{Cov}\bigl(f_1(X_i, i\in A_1),\,f_2(X_j, j\in A_2)\bigr)\le0,
$$
where $f_1$ and $f_2$ are coordinatewise nondecreasing functions such that the covariance exists. An infinite sequence $\{X_n, n\ge1\}$ is NA if every finite subfamily is NA.

Definition 1.2 A finite family of random variables $\{X_i, 1\le i\le n\}$ is said to be

(a) negatively upper orthant dependent (NUOD) if
$$
P\bigl(X_i>x_i,\ i=1,2,\dots,n\bigr)\le\prod_{i=1}^{n}P\bigl(X_i>x_i\bigr)
$$
for all $x_1,x_2,\dots,x_n\in\mathbb{R}$;

(b) negatively lower orthant dependent (NLOD) if
$$
P\bigl(X_i\le x_i,\ i=1,2,\dots,n\bigr)\le\prod_{i=1}^{n}P\bigl(X_i\le x_i\bigr)
$$
for all $x_1,x_2,\dots,x_n\in\mathbb{R}$;

(c) negatively orthant dependent (NOD) if they are both NUOD and NLOD.

A sequence of random variables $\{X_n, n\ge1\}$ is said to be NOD if for each $n$, $X_1,X_2,\dots,X_n$ are NOD.

Obviously, every sequence of independent random variables is NOD. Joag-Dev and Proschan [18] pointed out that NA implies NOD, while neither NUOD nor NLOD alone implies NA. They also gave an example of an NOD sequence that is not NA, which shows that NOD is strictly wider than NA. For more details on NOD random variables, one can refer to [3, 6, 11, 14, 19-21], and so forth.
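As a quick empirical sanity check (our own illustration, not from [18]; the multinomial parameters and thresholds are arbitrary), the NUOD inequality of Definition 1.2(a) can be tested by simulation on multinomial cell counts, a standard example of an NA (hence NOD) family:

```python
import numpy as np

# Empirical check of the NUOD inequality for two cells of a multinomial
# vector; multinomial counts are NA (Joag-Dev and Proschan [18]), hence NOD,
# so the joint exceedance probability should not exceed the product.
rng = np.random.default_rng(1)
counts = rng.multinomial(30, [0.3, 0.3, 0.4], size=200_000)
X1, X2 = counts[:, 0], counts[:, 1]
for x1, x2 in [(9, 9), (10, 11), (12, 8)]:
    joint = np.mean((X1 > x1) & (X2 > x2))
    prod = np.mean(X1 > x1) * np.mean(X2 > x2)
    print(f"P(X1>{x1}, X2>{x2}) = {joint:.4f} <= {prod:.4f} = product")
```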

In order to prove our main results, we need the following lemmas.

Lemma 1.1 (Bozorgnia et al. [19])

Let $X_1,X_2,\dots,X_n$ be NOD random variables.

(i) If $f_1,f_2,\dots,f_n$ are Borel functions all of which are monotone increasing (or all monotone decreasing), then $f_1(X_1),f_2(X_2),\dots,f_n(X_n)$ are NOD random variables.

(ii) $E\prod_{i=1}^{n}X_i^{+}\le\prod_{i=1}^{n}EX_i^{+}$, $n\ge2$.

Lemma 1.2 (Asadian et al. [22])

For any $q\ge2$, there is a positive constant $C(q)$ depending only on $q$ such that if $\{X_n, n\ge1\}$ is a sequence of NOD random variables with $EX_n=0$ for every $n\ge1$, then for all $n\ge1$,
$$
E\Bigl|\sum_{i=1}^{n}X_i\Bigr|^{q}\le C(q)\Bigl\{\sum_{i=1}^{n}E|X_i|^{q}+\Bigl(\sum_{i=1}^{n}EX_i^{2}\Bigr)^{q/2}\Bigr\}.
$$
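Since independent random variables are NOD, Lemma 1.2 can be illustrated numerically in the simplest nontrivial case $q=4$, where for i.i.d. centered summands one has exactly $ES_n^{4}=nEX^{4}+3n(n-1)(EX^{2})^{2}$, so the inequality holds with $C(4)=3$. The following sketch (our own illustration; the uniform summands and all numerical choices are arbitrary) checks this:

```python
import numpy as np

# Monte Carlo illustration of the Rosenthal-type bound of Lemma 1.2 for
# q = 4 (our own sketch).  For i.i.d. centered X_i,
#   E S_n^4 = n E X^4 + 3 n (n-1) (E X^2)^2,
# which is dominated by C(4) { n E X^4 + (n E X^2)^2 } with C(4) = 3.
rng = np.random.default_rng(3)
n, reps = 50, 200_000
x = rng.uniform(-1, 1, size=(reps, n))  # centered i.i.d. summands
lhs = np.mean(x.sum(axis=1) ** 4)       # estimate of E|S_n|^4
ex2, ex4 = 1 / 3, 1 / 5                 # exact moments of U(-1, 1)
rhs = 3 * (n * ex4 + (n * ex2) ** 2)    # right-hand side with C(4) = 3
print(f"E|S_n|^4 ~ {lhs:.1f} <= {rhs:.1f}")
```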
Lemma 1.3 For any $q\ge2$, there is a positive constant $C(q)$ depending only on $q$ such that if $\{X_n, n\ge1\}$ is a sequence of NOD random variables with $EX_n=0$ for every $n\ge1$, then for all $n\ge1$,
$$
E\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}X_i\Bigr|^{q}\le C(q)\bigl(\log(4n)\bigr)^{q}\Bigl\{\sum_{i=1}^{n}E|X_i|^{q}+\Bigl(\sum_{i=1}^{n}EX_i^{2}\Bigr)^{q/2}\Bigr\}.
$$

Proof Given Lemma 1.2, the proof is similar to that of Theorem 2.3.1 in Stout [23], so it is omitted here. □

Lemma 1.4 (Kuczmaszewska [8])

Let $\beta$, $\gamma$ be positive constants. Suppose that $\{X_n, n\ge1\}$ is a sequence of random variables and $X$ is a random variable for which there exists a constant $D>0$ such that
$$
\sum_{i=1}^{n}P\bigl(|X_i|>x\bigr)\le D\,n\,P\bigl(|X|>x\bigr),\quad x>0,\ n\ge1. \tag{1.1}
$$
Then:

(i) if $E|X|^{\beta}<\infty$, then $\frac1n\sum_{j=1}^{n}E|X_j|^{\beta}\le C\,E|X|^{\beta}$;

(ii) $\frac1n\sum_{j=1}^{n}E|X_j|^{\beta}I(|X_j|\le\gamma)\le C\bigl\{E|X|^{\beta}I(|X|\le\gamma)+\gamma^{\beta}P(|X|>\gamma)\bigr\}$;

(iii) $\frac1n\sum_{j=1}^{n}E|X_j|^{\beta}I(|X_j|>\gamma)\le C\,E|X|^{\beta}I(|X|>\gamma)$.
Recall that a function $h(x)$ is said to be slowly varying at infinity if it is real valued, positive, and measurable on $[0,\infty)$, and if for each $\lambda>0$
$$
\lim_{x\to\infty}\frac{h(\lambda x)}{h(x)}=1.
$$
Typical examples are positive constants and $h(x)=\log(e+x)$.

We refer to Seneta [24] for other equivalent definitions and for a detailed and comprehensive study of properties of slowly varying functions.

We frequently use the following properties of slowly varying functions (cf. Seneta [24]).

Lemma 1.5 If $h(x)$ is a function slowly varying at infinity, then for any $s>0$
$$
C_1 n^{-s}h(n)\le\sum_{i=n}^{\infty}i^{-1-s}h(i)\le C_2 n^{-s}h(n)
$$
and
$$
C_3 n^{s}h(n)\le\sum_{i=1}^{n}i^{-1+s}h(i)\le C_4 n^{s}h(n),
$$
where $C_1,C_2,C_3,C_4>0$ depend only on $s$.
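The following numerical sketch (our own illustration; the choice $h(x)=\log(e+x)$, $s=1$, and the truncation of the infinite tail sum at $10^6$ are arbitrary) shows both ratios in Lemma 1.5 staying within constant bounds:

```python
import numpy as np

# Numerical illustration of Lemma 1.5 with the slowly varying function
# h(x) = log(e + x) and s = 1: the tail sum over i >= n is compared with
# n^{-s} h(n), and the head sum over i <= n with n^{s} h(n).
h = lambda x: np.log(np.e + x)
s = 1.0
for n in [10, 100, 1000]:
    i_tail = np.arange(n, 10**6)  # truncated proxy for the infinite tail
    tail = np.sum(i_tail ** (-1.0 - s) * h(i_tail))
    i_head = np.arange(1, n + 1)
    head = np.sum(i_head ** (-1.0 + s) * h(i_head))
    print(f"n={n:5d}  tail/(n^-s h(n)) = {tail / (n**-s * h(n)):.3f}  "
          f"head/(n^s h(n)) = {head / (n**s * h(n)):.3f}")
```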

Throughout this paper, $C$ denotes a positive constant whose value may change from one appearance to another.

2 Main results and proofs

Theorem 2.1 Let $\alpha>1/2$, $p>0$, $\alpha p>1$, and let $h(x)$ be a function slowly varying at infinity. Let $\{X_n, n\ge1\}$ be a sequence of NOD random variables and let $X$ be a random variable, possibly defined on a different space, satisfying condition (1.1). Moreover, additionally assume that $EX_n=0$ for all $n\ge1$ if $\alpha\le1$. If
$$
E|X|^{p}h\bigl(|X|^{1/\alpha}\bigr)<\infty, \tag{2.1}
$$
then the following statements hold:
$$
\text{(i)}\quad\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\max_{1\le j\le n}|S_j|>\varepsilon n^{\alpha}\Bigr)<\infty,\quad\forall\varepsilon>0; \tag{2.2}
$$
$$
\text{(ii)}\quad\sum_{n=2}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\max_{1\le k\le n}\bigl|S_n^{(k)}\bigr|>\varepsilon n^{\alpha}\Bigr)<\infty,\quad\forall\varepsilon>0; \tag{2.3}
$$
$$
\text{(iii)}\quad\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\max_{1\le j\le n}|X_j|>\varepsilon n^{\alpha}\Bigr)<\infty,\quad\forall\varepsilon>0; \tag{2.4}
$$
$$
\text{(iv)}\quad\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sup_{j\ge n}j^{-\alpha}|S_j|>\varepsilon\Bigr)<\infty,\quad\forall\varepsilon>0; \tag{2.5}
$$
$$
\text{(v)}\quad\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sup_{j\ge n}j^{-\alpha}|X_j|>\varepsilon\Bigr)<\infty,\quad\forall\varepsilon>0. \tag{2.6}
$$

Here $S_n=\sum_{i=1}^{n}X_i$ and $S_n^{(k)}=S_n-X_k$, $k=1,2,\dots,n$.

Proof First, we prove (2.2). Choose $q$ such that $1/(\alpha p)<q<1$. For $n\ge1$ and $1\le i\le n$, let
$$
X_i^{(n,1)}=-n^{\alpha q}I\bigl(X_i<-n^{\alpha q}\bigr)+X_iI\bigl(|X_i|\le n^{\alpha q}\bigr)+n^{\alpha q}I\bigl(X_i>n^{\alpha q}\bigr),
$$
$$
X_i^{(n,2)}=\bigl(X_i-n^{\alpha q}\bigr)I\bigl(X_i>n^{\alpha q}\bigr),\qquad X_i^{(n,3)}=\bigl(X_i+n^{\alpha q}\bigr)I\bigl(X_i<-n^{\alpha q}\bigr).
$$
Note that
$$
X_i=X_i^{(n,1)}+X_i^{(n,2)}+X_i^{(n,3)}
$$
and
$$
\begin{aligned}
\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\max_{1\le j\le n}|S_j|>\varepsilon n^{\alpha}\Bigr)&\le\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}X_i^{(n,1)}\Bigr|>\varepsilon n^{\alpha}/3\Bigr)\\
&\quad+\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sum_{i=1}^{n}X_i^{(n,2)}>\varepsilon n^{\alpha}/3\Bigr)\\
&\quad+\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(-\sum_{i=1}^{n}X_i^{(n,3)}>\varepsilon n^{\alpha}/3\Bigr)\\
&\stackrel{\mathrm{def}}{=}I_1+I_2+I_3. \tag{2.7}
\end{aligned}
$$
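The three-part truncation above can be checked mechanically; the following sketch (our own illustration, with an arbitrary level $a$ standing in for $n^{\alpha q}$ and arbitrary normal data) verifies the identity $X_i=X_i^{(n,1)}+X_i^{(n,2)}+X_i^{(n,3)}$ and the sign pattern used in (2.7):

```python
import numpy as np

# Check of the truncation identity used in the proof (our own sketch):
# with level a (standing in for n**(alpha*q)), X splits into a bounded
# middle part X1, a nonnegative upper overshoot X2, and a nonpositive
# lower overshoot X3, with X = X1 + X2 + X3 exactly.
rng = np.random.default_rng(2)
a = 2.0
x = 3.0 * rng.standard_normal(1_000_000)
x1 = -a * (x < -a) + x * (np.abs(x) <= a) + a * (x > a)
x2 = (x - a) * (x > a)
x3 = (x + a) * (x < -a)
assert np.allclose(x, x1 + x2 + x3)
print("max |X1| =", np.abs(x1).max(), " min X2 =", x2.min(), " max X3 =", x3.max())
```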
In order to prove (2.2), it suffices to show that $I_l<\infty$ for $l=1,2,3$. Obviously, for $0<\eta<p$, condition (2.1) implies $E|X|^{p-\eta}<\infty$. We therefore fix $0<\eta<p$ such that $\alpha(p-\eta)>\alpha(p-\eta)q>1$ and, if $p>1$, also $p-\eta-1>0$. In order to prove $I_1<\infty$, we first prove that
$$
\lim_{n\to\infty}n^{-\alpha}\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}EX_i^{(n,1)}\Bigr|=0. \tag{2.8}
$$
Consider first the case $\alpha\le1$; since $\alpha p>1$, we then have $p>1$. By $EX_i=0$, $i\ge1$, and Lemma 1.4, we have
$$
\begin{aligned}
n^{-\alpha}\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}EX_i^{(n,1)}\Bigr|&\le n^{-\alpha}\max_{1\le j\le n}\sum_{i=1}^{j}\bigl\{E|X_i|I\bigl(|X_i|>n^{\alpha q}\bigr)+n^{\alpha q}P\bigl(|X_i|>n^{\alpha q}\bigr)\bigr\}\\
&\le2n^{-\alpha}\sum_{i=1}^{n}E|X_i|I\bigl(|X_i|>n^{\alpha q}\bigr)\le Cn^{1-\alpha}E|X|I\bigl(|X|>n^{\alpha q}\bigr)\\
&\le Cn^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)}E|X|^{p-\eta}\to0,\quad n\to\infty.
\end{aligned}
$$
When $\alpha>1$ and $p>1$,
$$
\begin{aligned}
n^{-\alpha}\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}EX_i^{(n,1)}\Bigr|&\le n^{-\alpha}\max_{1\le j\le n}\sum_{i=1}^{j}\bigl\{E|X_i|I\bigl(|X_i|\le n^{\alpha q}\bigr)+n^{\alpha q}P\bigl(|X_i|>n^{\alpha q}\bigr)\bigr\}\\
&\le n^{-\alpha}\sum_{i=1}^{n}E|X_i|\le Cn^{1-\alpha}E|X|\to0,\quad n\to\infty.
\end{aligned}
$$
When $\alpha>1$ and $p\le1$,
$$
\begin{aligned}
n^{-\alpha}\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}EX_i^{(n,1)}\Bigr|&\le n^{-\alpha}\sum_{i=1}^{n}\bigl\{E|X_i|I\bigl(|X_i|\le n^{\alpha q}\bigr)+n^{\alpha q}P\bigl(|X_i|>n^{\alpha q}\bigr)\bigr\}\\
&\le Cn^{-\alpha}\sum_{i=1}^{n}n^{\alpha(1-p+\eta)q}E|X_i|^{p-\eta}\\
&\le Cn^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)}E|X|^{p-\eta}\to0,\quad n\to\infty.
\end{aligned}
$$
Therefore, (2.8) holds. So, in order to prove $I_1<\infty$, it is enough to prove that
$$
I_1^{*}:=\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}\bigl(X_i^{(n,1)}-EX_i^{(n,1)}\bigr)\Bigr|>\varepsilon n^{\alpha}/6\Bigr)<\infty. \tag{2.9}
$$
By Lemma 1.1, for each $n\ge1$, $\{X_i^{(n,1)}-EX_i^{(n,1)}, 1\le i\le n\}$ is a sequence of NOD random variables. When $0<p\le2$, by $\alpha(p-\eta)>1$ and $0<q<1$, we have
$$
\alpha-\tfrac12-\alpha\Bigl(1-\tfrac{p-\eta}{2}\Bigr)q>\alpha-\tfrac12-\alpha\Bigl(1-\tfrac{p-\eta}{2}\Bigr)=\tfrac{\alpha(p-\eta)-1}{2}>0.
$$
Taking $v$ such that
$$
v>\max\Bigl\{2,\ p,\ \frac{\alpha p-1}{\alpha-1/2},\ \frac{\alpha p-1}{\alpha-\frac12-\alpha(1-\frac{p-\eta}{2})q},\ \frac{p-(p-\eta)q}{1-q}\Bigr\},
$$
we get, by the Markov inequality, the $C_r$ inequality, the Hölder inequality, and Lemma 1.3,
$$
I_1^{*}\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\bigl(\log(4n)\bigr)^{v}\sum_{i=1}^{n}E\bigl|X_i^{(n,1)}\bigr|^{v}+C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\bigl(\log(4n)\bigr)^{v}\Bigl(\sum_{i=1}^{n}E\bigl|X_i^{(n,1)}\bigr|^{2}\Bigr)^{v/2}\stackrel{\mathrm{def}}{=}I_{11}+I_{12}.
$$
By the $C_r$ inequality, Lemma 1.4, and Lemma 1.5, we have
$$
\begin{aligned}
I_{11}&\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\bigl(\log(4n)\bigr)^{v}\sum_{i=1}^{n}E\bigl\{|X_i|^{v}I\bigl(|X_i|\le n^{\alpha q}\bigr)+n^{\alpha qv}P\bigl(|X_i|>n^{\alpha q}\bigr)\bigr\}\\
&\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-1}h(n)\bigl(\log(4n)\bigr)^{v}E\bigl\{|X|^{v}I\bigl(|X|\le n^{\alpha q}\bigr)+n^{\alpha qv}P\bigl(|X|>n^{\alpha q}\bigr)\bigr\}\\
&\le C\sum_{n=1}^{\infty}n^{-\alpha\{(1-q)v-p+q(p-\eta)\}-1}h(n)\bigl(\log(4n)\bigr)^{v}E|X|^{p-\eta}<\infty.
\end{aligned}
$$
By the $C_r$ inequality and Lemma 1.4,
$$
\begin{aligned}
I_{12}&\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\bigl(\log(4n)\bigr)^{v}\Bigl\{\sum_{i=1}^{n}\bigl(E|X_i|^{2}I\bigl(|X_i|\le n^{\alpha q}\bigr)+n^{2\alpha q}P\bigl(|X_i|>n^{\alpha q}\bigr)\bigr)\Bigr\}^{v/2}\\
&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-(\alpha-1/2)v}h(n)\bigl(\log(4n)\bigr)^{v}\bigl\{E|X|^{2}I\bigl(|X|\le n^{\alpha q}\bigr)+n^{2\alpha q}P\bigl(|X|>n^{\alpha q}\bigr)\bigr\}^{v/2}.
\end{aligned}
$$
When $p>2$,
$$
I_{12}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-(\alpha-1/2)v}h(n)\bigl(\log(4n)\bigr)^{v}\bigl(EX^{2}\bigr)^{v/2}<\infty.
$$
When $0<p\le2$,
$$
\begin{aligned}
I_{12}&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-(\alpha-1/2)v}h(n)\bigl(\log(4n)\bigr)^{v}\bigl(E|X|^{p-\eta}\bigr)^{v/2}n^{\alpha q\{2-(p-\eta)\}v/2}\\
&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\{\alpha-\frac12-\alpha(1-\frac{p-\eta}{2})q\}v}h(n)\bigl(\log(4n)\bigr)^{v}<\infty.
\end{aligned}
$$
Therefore, (2.9) holds, and hence $I_1<\infty$. Next we prove $I_2<\infty$. Define
$$
Y_i^{(n,2)}=\bigl(X_i-n^{\alpha q}\bigr)I\bigl(n^{\alpha q}<X_i\le n^{\alpha}+n^{\alpha q}\bigr)+n^{\alpha}I\bigl(X_i>n^{\alpha}+n^{\alpha q}\bigr),\quad 1\le i\le n,\ n\ge1.
$$
Since $X_i^{(n,2)}=Y_i^{(n,2)}+\bigl(X_i-n^{\alpha q}-n^{\alpha}\bigr)I\bigl(X_i>n^{\alpha}+n^{\alpha q}\bigr)$, we have
$$
I_2\le\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sum_{i=1}^{n}Y_i^{(n,2)}>\varepsilon n^{\alpha}/6\Bigr)+\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sum_{i=1}^{n}\bigl(X_i-n^{\alpha q}-n^{\alpha}\bigr)I\bigl(X_i>n^{\alpha}+n^{\alpha q}\bigr)>\varepsilon n^{\alpha}/6\Bigr)\stackrel{\mathrm{def}}{=}I_{21}+I_{22}. \tag{2.10}
$$
Since the sum in $I_{22}$ is positive only if $X_i>n^{\alpha}+n^{\alpha q}$ for some $i$, by Lemma 1.5, (2.1), and a standard computation we have
$$
I_{22}\le\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)\sum_{i=1}^{n}P\bigl(X_i>n^{\alpha}+n^{\alpha q}\bigr)\le\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)\sum_{i=1}^{n}P\bigl(|X_i|>n^{\alpha}\bigr)\le C\sum_{n=1}^{\infty}n^{\alpha p-1}h(n)P\bigl(|X|>n^{\alpha}\bigr)\le C+CE|X|^{p}h\bigl(|X|^{1/\alpha}\bigr)<\infty. \tag{2.11}
$$
Now we prove $I_{21}<\infty$. By (2.1) and Lemma 1.4, we have
$$
0\le n^{-\alpha}\sum_{i=1}^{n}EY_i^{(n,2)}\le\begin{cases}n^{-\alpha}\sum_{i=1}^{n}EX_iI\bigl(X_i>n^{\alpha q}\bigr), & \text{if }p>1,\\[3pt] n^{-\alpha}\sum_{i=1}^{n}\bigl\{E|X_i|I\bigl(|X_i|\le2n^{\alpha}\bigr)+n^{\alpha}P\bigl(|X_i|>2n^{\alpha q}\bigr)\bigr\}, & \text{if }0<p\le1\end{cases}
$$
$$
\le\begin{cases}Cn^{-\{\alpha(p-\eta)q-1\}-\alpha(1-q)}E|X|^{p-\eta}, & \text{if }p>1,\\[3pt] Cn^{1-\alpha(p-\eta)q}E|X|^{p-\eta}, & \text{if }0<p\le1\end{cases}\quad\to0,\quad n\to\infty.
$$
Therefore, in order to prove $I_{21}<\infty$, it is enough to prove that
$$
I_{21}^{*}:=\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sum_{i=1}^{n}\bigl(Y_i^{(n,2)}-EY_i^{(n,2)}\bigr)>\varepsilon n^{\alpha}/12\Bigr)<\infty. \tag{2.12}
$$
Taking $v$ such that $v>\max\bigl\{2,\ \frac{\alpha p-1}{\alpha-1/2},\ \frac{2(\alpha p-1)}{\alpha(p-\eta)-1}\bigr\}$, we get, by Lemma 1.1, the Markov inequality, the $C_r$ inequality, the Hölder inequality, and Lemma 1.2,
$$
I_{21}^{*}\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)E\Bigl|\sum_{i=1}^{n}\bigl(Y_i^{(n,2)}-EY_i^{(n,2)}\bigr)\Bigr|^{v}\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\sum_{i=1}^{n}E\bigl|Y_i^{(n,2)}\bigr|^{v}+C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\Bigl(\sum_{i=1}^{n}E\bigl(Y_i^{(n,2)}\bigr)^{2}\Bigr)^{v/2}\stackrel{\mathrm{def}}{=}I_{211}+I_{212}.
$$
By the $C_r$ inequality, Lemma 1.4, Lemma 1.5, (2.1), and a standard computation, we have
$$
\begin{aligned}
I_{211}&=C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\sum_{i=1}^{n}E\bigl|Y_i^{(n,2)}\bigr|^{v}\\
&\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\sum_{i=1}^{n}\bigl\{EX_i^{v}I\bigl(n^{\alpha q}<X_i\le n^{\alpha q}+n^{\alpha}\bigr)+n^{\alpha v}P\bigl(X_i>n^{\alpha q}+n^{\alpha}\bigr)\bigr\}\\
&\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\sum_{i=1}^{n}\bigl\{E|X_i|^{v}I\bigl(|X_i|\le2n^{\alpha}\bigr)+n^{\alpha v}P\bigl(|X_i|>n^{\alpha}\bigr)\bigr\}\\
&\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-1}h(n)\bigl\{E|X|^{v}I\bigl(|X|\le2n^{\alpha}\bigr)+n^{\alpha v}P\bigl(|X|>n^{\alpha}\bigr)\bigr\}\\
&\le C+CE|X|^{p}h\bigl(|X|^{1/\alpha}\bigr)<\infty
\end{aligned}
$$
and
$$
\begin{aligned}
I_{212}&\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v-2}h(n)\Bigl\{\sum_{i=1}^{n}\bigl(EX_i^{2}I\bigl(n^{\alpha q}<X_i\le n^{\alpha q}+n^{\alpha}\bigr)+n^{2\alpha}P\bigl(X_i>n^{\alpha q}+n^{\alpha}\bigr)\bigr)\Bigr\}^{v/2}\\
&\le C\sum_{n=1}^{\infty}n^{\alpha p-\alpha v+v/2-2}h(n)\bigl\{EX^{2}I\bigl(|X|\le2n^{\alpha}\bigr)+n^{2\alpha}P\bigl(|X|>n^{\alpha}\bigr)\bigr\}^{v/2}\\
&\le\begin{cases}C\sum_{n=1}^{\infty}n^{\alpha p-(\alpha-1/2)v-2}h(n)\bigl(EX^{2}\bigr)^{v/2}, & \text{if }p>2,\\[3pt] C\sum_{n=1}^{\infty}n^{\alpha p-2-\{\alpha(p-\eta)-1\}v/2}h(n)\bigl(E|X|^{p-\eta}\bigr)^{v/2}, & \text{if }p\le2\end{cases}\\
&\le\begin{cases}C\sum_{n=1}^{\infty}n^{\alpha p-(\alpha-1/2)v-2}h(n), & \text{if }p>2,\\[3pt] C\sum_{n=1}^{\infty}n^{\alpha p-2-\{\alpha(p-\eta)-1\}v/2}h(n), & \text{if }p\le2\end{cases}\quad<\infty.
\end{aligned}
$$

Therefore, (2.12) holds. By (2.10)-(2.12) we get $I_2<\infty$. In a similar way we can obtain $I_3<\infty$. Thus, (2.2) holds.

(2.2) $\Rightarrow$ (2.3). Note that
$$
\bigl|S_n^{(k)}\bigr|=|S_n-X_k|\le|S_n|+|X_k|=|S_n|+|S_k-S_{k-1}|\le|S_n|+|S_k|+|S_{k-1}|\le3\max_{1\le j\le n}|S_j|,
$$
so $\bigl(\max_{1\le k\le n}|S_n^{(k)}|>\varepsilon n^{\alpha}\bigr)\subset\bigl(\max_{1\le j\le n}|S_j|>\varepsilon n^{\alpha}/3\bigr)$; hence (2.3) follows from (2.2).

(2.3) $\Rightarrow$ (2.4). Since, for $n\ge2$,
$$
\tfrac12|S_n|\le\tfrac{n-1}{n}|S_n|=\Bigl|\tfrac1n\sum_{k=1}^{n}S_n^{(k)}\Bigr|\le\max_{1\le k\le n}\bigl|S_n^{(k)}\bigr|
$$
and $|X_k|=|S_n-S_n^{(k)}|\le|S_n|+|S_n^{(k)}|$, we have
$$
\Bigl(\max_{1\le k\le n}|X_k|>\varepsilon n^{\alpha}\Bigr)\subset\Bigl(|S_n|>\varepsilon n^{\alpha}/2\Bigr)\cup\Bigl(\max_{1\le k\le n}\bigl|S_n^{(k)}\bigr|>\varepsilon n^{\alpha}/2\Bigr)\subset\Bigl(\max_{1\le k\le n}\bigl|S_n^{(k)}\bigr|>\varepsilon n^{\alpha}/4\Bigr),\quad n\ge2;
$$
hence (2.4) follows from (2.3).

(2.2) $\Rightarrow$ (2.5). By Lemma 1.5 and (2.2), we have
$$
\begin{aligned}
\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sup_{j\ge n}j^{-\alpha}|S_j|>\varepsilon\Bigr)&=\sum_{i=1}^{\infty}\sum_{2^{i-1}\le n<2^{i}}n^{\alpha p-2}h(n)P\Bigl(\sup_{j\ge n}j^{-\alpha}|S_j|>\varepsilon\Bigr)\\
&\le C\sum_{i=1}^{\infty}2^{i(\alpha p-1)}h\bigl(2^{i}\bigr)P\Bigl(\sup_{j\ge2^{i-1}}j^{-\alpha}|S_j|>\varepsilon\Bigr)\\
&\le C\sum_{i=1}^{\infty}2^{i(\alpha p-1)}h\bigl(2^{i}\bigr)\sum_{k=i}^{\infty}P\Bigl(\max_{2^{k-1}\le j<2^{k}}|S_j|>\varepsilon2^{\alpha(k-1)}\Bigr)\\
&\le C\sum_{k=1}^{\infty}P\Bigl(\max_{2^{k-1}\le j<2^{k}}|S_j|>\varepsilon2^{\alpha(k-1)}\Bigr)\sum_{i=1}^{k}2^{i(\alpha p-1)}h\bigl(2^{i}\bigr)\\
&\le C\sum_{k=1}^{\infty}2^{k(\alpha p-1)}h\bigl(2^{k}\bigr)P\Bigl(\max_{1\le j<2^{k}}|S_j|>\varepsilon2^{\alpha(k-1)}\Bigr)<\infty.
\end{aligned}
$$

(2.5) $\Rightarrow$ (2.6). The proof is similar to that of (2.2) $\Rightarrow$ (2.4), so it is omitted. □

Theorem 2.2 Let $\alpha>1/2$, $p>0$, $\alpha p>1$, and let $h(x)$ be a function slowly varying at infinity. Let $\{X_n, n\ge1\}$ be a sequence of NOD random variables and let $X$ be a random variable, possibly defined on a different space. Moreover, additionally assume that $EX_n=0$ for all $n\ge1$ if $\alpha\le1$. If there exist constants $D_1>0$ and $D_2>0$ such that
$$
\frac{D_1}{n}\sum_{i=n}^{2n-1}P\bigl(|X_i|>x\bigr)\le P\bigl(|X|>x\bigr)\le\frac{D_2}{n}\sum_{i=n}^{2n-1}P\bigl(|X_i|>x\bigr),\quad x>0,\ n\ge1,
$$
then (2.1)-(2.6) are equivalent.

Proof From the proof of Theorem 2.1, in order to prove Theorem 2.2 it is enough to show that (2.4) $\Rightarrow$ (2.6) and (2.6) $\Rightarrow$ (2.1). The proof of (2.4) $\Rightarrow$ (2.6) is similar to that of (2.2) $\Rightarrow$ (2.5). Now we prove (2.6) $\Rightarrow$ (2.1). First, we prove that
$$
\lim_{n\to\infty}P\Bigl(\sup_{j\ge n}j^{-\alpha}|X_j|>\varepsilon\Bigr)=0,\quad\forall\varepsilon>0. \tag{2.13}
$$
Otherwise, there exist $\varepsilon_0>0$, $\delta>0$, and a sequence of positive integers $\{n_k, k\ge1\}$ with $n_k\uparrow\infty$ such that $P\bigl(\sup_{j\ge n_k}j^{-\alpha}|X_j|>\varepsilon_0\bigr)\ge\delta$, $k\ge1$. Without loss of generality, we can assume that $n_{k+1}\ge2n_k$, $k\ge1$. Therefore, we have
$$
P\Bigl(\sup_{j\ge2n_k}j^{-\alpha}|X_j|>\varepsilon_0\Bigr)\ge\delta,\quad k\ge1.
$$
Since $\alpha p>1$, we have
$$
\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sup_{j\ge n}j^{-\alpha}|X_j|>\varepsilon_0\Bigr)\ge\sum_{k=1}^{\infty}\sum_{n=n_k+1}^{2n_k}n^{\alpha p-2}h(n)P\Bigl(\sup_{j\ge n}j^{-\alpha}|X_j|>\varepsilon_0\Bigr)\ge C\sum_{k=1}^{\infty}n_k^{\alpha p-1}h(n_k)P\Bigl(\sup_{j\ge2n_k}j^{-\alpha}|X_j|>\varepsilon_0\Bigr)=\infty,
$$
which contradicts (2.6); thus (2.13) holds. By Lemma 1.1, we get
$$
\begin{aligned}
P\Bigl(\sup_{j\ge n}j^{-\alpha}|X_j|>\varepsilon\Bigr)&\ge P\Bigl(\max_{n\le j<2n}j^{-\alpha}|X_j|>\varepsilon\Bigr)\ge P\Bigl(\max_{n\le j<2n}|X_j|>(2n)^{\alpha}\varepsilon\Bigr)\\
&\ge1-P\Bigl(\max_{n\le j<2n}X_j\le(2n)^{\alpha}\varepsilon\Bigr)=1-E\Bigl(\prod_{j=n}^{2n-1}I\bigl(X_j\le(2n)^{\alpha}\varepsilon\bigr)\Bigr)\\
&\ge1-\prod_{j=n}^{2n-1}P\bigl(X_j\le(2n)^{\alpha}\varepsilon\bigr)=1-\prod_{j=n}^{2n-1}\bigl(1-P\bigl(X_j>(2n)^{\alpha}\varepsilon\bigr)\bigr)\\
&\ge1-\exp\Bigl(-\sum_{j=n}^{2n-1}P\bigl(X_j>(2n)^{\alpha}\varepsilon\bigr)\Bigr).
\end{aligned}
$$
By (2.13) and the preceding inequality, we have $\lim_{n\to\infty}\sum_{j=n}^{2n-1}P\bigl(X_j>(2n)^{\alpha}\varepsilon\bigr)=0$, $\forall\varepsilon>0$. Therefore, since $1-e^{-x}\ge x-x^{2}/2$ for $x\ge0$, when $n$ is large enough we have
$$
P\Bigl(\max_{n\le j<2n}j^{-\alpha}|X_j|>\varepsilon\Bigr)\ge1-\Bigl\{1-\sum_{j=n}^{2n-1}P\bigl(X_j>(2n)^{\alpha}\varepsilon\bigr)+\frac12\Bigl(\sum_{j=n}^{2n-1}P\bigl(X_j>(2n)^{\alpha}\varepsilon\bigr)\Bigr)^{2}\Bigr\}\ge C\sum_{j=n}^{2n-1}P\bigl(X_j>(2n)^{\alpha}\varepsilon\bigr),\quad\forall\varepsilon>0.
$$
In a similar way, when $n$ is large enough,
$$
P\Bigl(\max_{n\le j<2n}j^{-\alpha}|X_j|>\varepsilon\Bigr)\ge C\sum_{j=n}^{2n-1}P\bigl(X_j<-(2n)^{\alpha}\varepsilon\bigr),\quad\forall\varepsilon>0.
$$
Thus, when $n$ is large enough, we have
$$
P\Bigl(\max_{n\le j<2n}j^{-\alpha}|X_j|>\varepsilon\Bigr)\ge C\sum_{j=n}^{2n-1}P\bigl(|X_j|>(2n)^{\alpha}\varepsilon\bigr)\ge CnP\bigl(|X|>(2n)^{\alpha}\varepsilon\bigr),\quad\forall\varepsilon>0. \tag{2.14}
$$
Taking $\varepsilon=2^{-\alpha}$, by (2.6), (2.14), Lemma 1.5, and a standard computation, we have
$$
\infty>\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\sup_{j\ge n}j^{-\alpha}|X_j|>2^{-\alpha}\Bigr)\ge\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\Bigl(\max_{n\le j<2n}j^{-\alpha}|X_j|>2^{-\alpha}\Bigr)\ge C\sum_{n=1}^{\infty}n^{\alpha p-1}h(n)P\bigl(|X|>n^{\alpha}\bigr)\ge CE|X|^{p}h\bigl(|X|^{1/\alpha}\bigr)-C.
$$

Thus, (2.1) holds. □

In the following, let $\{\tau_n, n\ge1\}$ be a sequence of non-negative, integer-valued random variables and let $\tau$ be a positive random variable, all defined on the same probability space.

Theorem 2.3 Let $\alpha>1/2$, $p>0$, $\alpha p>1$, and let $h(x)>0$ be a slowly varying function as $x\to+\infty$. Let $\{X_n, n\ge1\}$ be a sequence of NOD random variables and let $X$ be a random variable, possibly defined on a different space, satisfying conditions (1.1) and (2.1). Moreover, additionally assume that $EX_n=0$ for all $n\ge1$ if $\alpha\le1$. If there exists $\lambda>0$ such that $\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\bigl(\tau_n/n<\lambda\bigr)<\infty$, then
$$
\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\bigl(|S_{\tau_n}|>\varepsilon\tau_n^{\alpha}\bigr)<\infty,\quad\forall\varepsilon>0. \tag{2.15}
$$
Proof Note that
$$
\bigl(|S_{\tau_n}|>\varepsilon\tau_n^{\alpha}\bigr)\subset\bigl(\tau_n/n<\lambda\bigr)\cup\bigl(|S_{\tau_n}|>\varepsilon\tau_n^{\alpha},\ \tau_n\ge\lambda n\bigr)\subset\bigl(\tau_n/n<\lambda\bigr)\cup\Bigl(\sup_{j\ge\lambda n}j^{-\alpha}|S_j|>\varepsilon\Bigr).
$$

Thus, by (2.5) of Theorem 2.1, we have (2.15). □

Theorem 2.4 Let $\alpha>1/2$, $p>0$, $\alpha p>1$, and let $h(x)$ be a function slowly varying at infinity. Let $\{X_n, n\ge1\}$ be a sequence of NOD random variables and let $X$ be a random variable, possibly defined on a different space, satisfying conditions (1.1) and (2.1). Moreover, additionally assume that $EX_n=0$ for all $n\ge1$ if $\alpha\le1$. If there exists $\theta>0$ such that $\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\bigl(\bigl|\frac{\tau_n}{n}-\tau\bigr|>\theta\bigr)<\infty$ with $P(\tau\le B)=1$ for some $B>0$, then
$$
\sum_{n=1}^{\infty}n^{\alpha p-2}h(n)P\bigl(|S_{\tau_n}|>\varepsilon n^{\alpha}\bigr)<\infty,\quad\forall\varepsilon>0. \tag{2.16}
$$
Proof Note that
$$
\begin{aligned}
\bigl(|S_{\tau_n}|>\varepsilon n^{\alpha}\bigr)&\subset\Bigl(\Bigl|\frac{\tau_n}{n}-\tau\Bigr|>\theta\Bigr)\cup\Bigl(|S_{\tau_n}|>\varepsilon n^{\alpha},\ \Bigl|\frac{\tau_n}{n}-\tau\Bigr|\le\theta\Bigr)\\
&\subset\Bigl(\Bigl|\frac{\tau_n}{n}-\tau\Bigr|>\theta\Bigr)\cup\bigl(|S_{\tau_n}|>\varepsilon n^{\alpha},\ \tau_n\le(\tau+\theta)n\bigr)\\
&\subset\Bigl(\Bigl|\frac{\tau_n}{n}-\tau\Bigr|>\theta\Bigr)\cup\bigl(|S_{\tau_n}|>\varepsilon n^{\alpha},\ \tau_n\le(B+\theta)n\bigr)\\
&\subset\Bigl(\Bigl|\frac{\tau_n}{n}-\tau\Bigr|>\theta\Bigr)\cup\Bigl(\max_{1\le j\le(B+\theta)n}|S_j|>\varepsilon n^{\alpha}\Bigr).
\end{aligned}
$$

Thus, by (2.2) of Theorem 2.1, we have (2.16). □

Declarations

Acknowledgements

The authors would like to thank the referees and the editors for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. 11271161).

Authors’ Affiliations

(1)
School of Mathematics and Statistics, Guangdong University of Finance and Economics, Guangzhou, P.R. China
(2)
College of Science, Guilin University of Technology, Guilin, P.R. China
(3)
Department of Mathematics, Jinan University, Guangzhou, P.R. China

References

1. Hsu P, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33: 25-31. doi:10.1073/pnas.33.2.25
2. Baum LE, Katz M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 1965, 120: 108-123. doi:10.1090/S0002-9947-1965-0198524-1
3. Baek J, Park ST: Convergence of weighted sums for arrays of negatively dependent random variables and its applications. J. Stat. Plan. Inference 2010, 140: 2461-2469. doi:10.1016/j.jspi.2010.02.021
4. Bai ZD, Su C: The complete convergence for partial sums of i.i.d. random variables. Sci. China Ser. A 1985, 5: 399-412.
5. Chen P, Hu TC, Liu X, Volodin A: On complete convergence for arrays of rowwise negatively associated random variables. Theory Probab. Appl. 2007, 52: 323-328.
6. Gan S, Chen P: Strong convergence rate of weighted sums for negatively dependent sequences. Acta Math. Sci. Ser. A 2008, 28: 283-290 (in Chinese).
7. Gut A: Complete convergence for arrays. Period. Math. Hung. 1992, 25: 51-75. doi:10.1007/BF02454383
8. Kuczmaszewska A: On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables. Acta Math. Hung. 2010, 128(1-2): 116-130. doi:10.1007/s10474-009-9166-y
9. Liang HY, Wang L: Convergence rates in the law of large numbers for B-valued random elements. Acta Math. Sci. Ser. B 2001, 21: 229-236.
10. Peligrad M, Gut A: Almost-sure results for a class of dependent random variables. J. Theor. Probab. 1999, 12: 87-104. doi:10.1023/A:1021744626773
11. Qiu DH, Chang KC, Antonini RG, Volodin A: On the strong rates of convergence for arrays of rowwise negatively dependent random variables. Stoch. Anal. Appl. 2011, 29: 375-385. doi:10.1080/07362994.2011.548683
12. Sung SH: Complete convergence for weighted sums of random variables. Stat. Probab. Lett. 2007, 77: 303-311. doi:10.1016/j.spl.2006.07.010
13. Sung SH: A note on the complete convergence for arrays of rowwise independent random elements. Stat. Probab. Lett. 2008, 78: 1283-1289. doi:10.1016/j.spl.2007.11.018
14. Taylor RL, Patterson R, Bozorgnia A: A strong law of large numbers for arrays of rowwise negatively dependent random variables. Stoch. Anal. Appl. 2002, 20: 643-656. doi:10.1081/SAP-120004118
15. Wang XM: Complete convergence for sums of NA sequences. Acta Math. Appl. Sin. 1999, 22: 407-412.
16. Zhang LX, Wang JF: A note on complete convergence of pairwise NQD random sequences. Appl. Math. J. Chin. Univ. Ser. A 2004, 19: 203-208. doi:10.1007/s11766-004-0055-4
17. Shao QM: A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theor. Probab. 2000, 13: 343-356. doi:10.1023/A:1007849609234
18. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann. Stat. 1983, 11: 286-295. doi:10.1214/aos/1176346079
19. Bozorgnia A, Patterson RF, Taylor RL: Limit theorems for dependent random variables II. In Proc. of the First World Congress of Nonlinear Analysts '92. de Gruyter, Berlin; 1996: 1639-1650.
20. Ko MH, Han KH, Kim TS: Strong laws of large numbers for weighted sums of negatively dependent random variables. J. Korean Math. Soc. 2006, 43: 1325-1338.
21. Ko MH, Kim TS: Almost sure convergence for weighted sums of negatively dependent random variables. J. Korean Math. Soc. 2005, 42: 949-957.
22. Asadian N, Fakoor V, Bozorgnia A: Rosenthal's type inequalities for negatively orthant dependent random variables. J. Iran. Stat. Soc. 2006, 5(1-2): 69-75.
23. Stout WF: Almost Sure Convergence. Academic Press, New York; 1974.
24. Seneta E: Regularly Varying Functions. Lecture Notes in Math. 508. Springer, Berlin; 1976.

Copyright

© Qiu et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.