On the strong convergence and some inequalities for negatively superadditive dependent sequences

Abstract

In this paper, we study the Marcinkiewicz-type strong law of large numbers, Hajek-Renyi-type inequality and other inequalities for negatively superadditive dependent (NSD) sequences. As an application, the integrability of supremum for NSD random variables is obtained. Our results extend the corresponding ones of Christofides and Vaggelatou (J. Multivar. Anal. 88:138-151, 2004) and Liu et al. (Stat. Probab. Lett. 43:99-105, 1999).

MSC: 60F15.

1 Introduction

Let $\{X_n, n\geq 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega,\mathcal{F},P)$, and write $S_n=\sum_{i=1}^{n}X_i$ for $n\geq 1$, $S_0=0$. The concept of negatively associated (NA) random variables was introduced by Joag-Dev and Proschan [1]. A finite family of random variables $\{X_i, 1\leq i\leq n\}$ is said to be negatively associated if for every pair of disjoint subsets $A,B\subset\{1,2,\ldots,n\}$,

$$\operatorname{Cov}\bigl(f(X_i, i\in A),\,g(X_j, j\in B)\bigr)\leq 0,$$

whenever f and g are coordinatewise nondecreasing such that this covariance exists. An infinite family of random variables is NA if every finite subfamily is NA.

The concept of negatively superadditive dependent (NSD) random variables was introduced by Hu [2] based on the class of superadditive functions. Superadditive structure functions have important reliability interpretations, which describe whether a system is more series-like or more parallel-like [3].

Definition 1.1 (Kemperman [4])

A function $\phi:\mathbb{R}^{n}\to\mathbb{R}$ is called superadditive if $\phi(x\vee y)+\phi(x\wedge y)\geq\phi(x)+\phi(y)$ for all $x,y\in\mathbb{R}^{n}$, where $\vee$ denotes the componentwise maximum and $\wedge$ denotes the componentwise minimum.
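As an illustration (added here; it is not part of the original text), the product $\phi(x_1,x_2)=x_1x_2$ is superadditive on $\mathbb{R}^{2}$: a direct case check gives

$$\phi(x\vee y)+\phi(x\wedge y)-\phi(x)-\phi(y)=\bigl[(x_1-y_1)(x_2-y_2)\bigr]^{-}\geq 0,$$

where $u^{-}=\max(0,-u)$. For instance, $x=(2,0)$ and $y=(0,3)$ give $\phi(x\vee y)+\phi(x\wedge y)=6\geq 0=\phi(x)+\phi(y)$. More generally, for twice continuously differentiable $\phi$, superadditivity is equivalent to $\partial^{2}\phi/\partial x_i\partial x_j\geq 0$ for all $i\neq j$, a standard characterization of supermodular functions.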

Definition 1.2 (Hu [2])

A random vector $X=(X_1,X_2,\ldots,X_n)$ is said to be negatively superadditive dependent (NSD) if

$$E\phi(X_1,X_2,\ldots,X_n)\leq E\phi\bigl(X_1^{*},X_2^{*},\ldots,X_n^{*}\bigr),$$
(1.1)

where $X_1^{*},X_2^{*},\ldots,X_n^{*}$ are independent such that $X_i^{*}$ and $X_i$ have the same distribution for each $i$, and $\phi$ is a superadditive function such that the expectations in (1.1) exist.
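For instance (an observation added here for illustration; it is not stated in the original text), applying (1.1) to the superadditive function $\phi(x_1,\ldots,x_n)=x_ix_j$ for fixed $i\neq j$ shows that NSD random variables are pairwise negatively correlated:

$$E(X_iX_j)\leq E\bigl(X_i^{*}X_j^{*}\bigr)=EX_i^{*}EX_j^{*}=EX_iEX_j,\quad\text{i.e.}\quad\operatorname{Cov}(X_i,X_j)\leq 0,$$

provided the relevant expectations exist.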

Hu [2] gave an example illustrating that NSD does not imply NA and posed the open problem of whether NA implies NSD. Christofides and Vaggelatou [5] solved this problem and showed that NA does imply NSD. The negatively superadditive dependent structure is thus an extension of the negatively associated structure and is sometimes more useful. Moreover, many important probability inequalities can be obtained for NSD random variables. For example, the structure function of a monotone coherent system can be superadditive [3], so inequalities derived under NSD can give one-sided or two-sided bounds on the system reliability. The notion of NSD random variables has wide applications in multivariate statistical analysis and reliability theory.

Eghbal et al. [6] derived two maximal inequalities and a strong law of large numbers for quadratic forms of NSD random variables under the assumption that $\{X_i, i\geq 1\}$ is a sequence of nonnegative NSD random variables with $EX_i^{r}<\infty$ for all $i\geq 1$ and some $r>1$. Eghbal et al. [7] provided Kolmogorov-type inequalities for the quadratic forms $T_n=\sum_{1\leq i<j\leq n}X_iX_j$ and the weighted quadratic forms $Q_n=\sum_{1\leq i<j\leq n}a_{ij}X_iX_j$, where $\{X_i, i\geq 1\}$ is a sequence of nonnegative, uniformly bounded NSD random variables. Shen et al. [8] obtained the Khintchine-Kolmogorov convergence theorem and strong stability for NSD random variables. Strong convergence for NA sequences and other dependent sequences has been investigated extensively. For example, Sung [9, 10] obtained complete convergence results for identically distributed NA random variables and $\rho^{*}$-mixing random variables, respectively; Zhou et al. [11] studied complete convergence for identically distributed $\rho^{*}$-mixing random variables under a suitable moment condition; and Zhou [12] discussed complete moment convergence of moving average processes under $\varphi$-mixing assumptions.

This paper is organized as follows. In Section 2, some preliminary lemmas and inequalities for NSD random variables are provided. In Section 3, the Marcinkiewicz-type strong law of large numbers, Hajek-Renyi-type inequalities and the integrability of the supremum for NSD random variables are presented. These results extend the corresponding results of Christofides and Vaggelatou [5] and Liu et al. [13].

Throughout the paper, $X_1^{*},X_2^{*},\ldots,X_n^{*}$ denote independent random variables such that $X_i^{*}$ and $X_i$ have the same distribution for each $i$. $C$ denotes a positive constant not depending on $n$, which may differ from place to place. $a_n\lesssim b_n$ means that there exists a constant $C>0$ such that $a_n\leq Cb_n$ for all sufficiently large $n$.

2 Preliminaries

Lemma 2.1 (Hu [2])

If $(X_1,X_2,\ldots,X_n)$ is NSD, then $(X_{i_1},X_{i_2},\ldots,X_{i_m})$ is NSD for any $1\leq i_1<i_2<\cdots<i_m$, $2\leq m<n$.

Lemma 2.2 (Hu [2])

Let $X=(X_1,X_2,\ldots,X_n)$ be an NSD random vector, and let $X^{*}=(X_1^{*},X_2^{*},\ldots,X_n^{*})$ be a vector of independent random variables such that $X_i^{*}$ and $X_i$ have the same distribution for each $i$. Then, for any nondecreasing convex function $f$ and $n\geq 1$,

$$Ef\Bigl(\max_{1\leq k\leq n}\sum_{i=1}^{k}X_i\Bigr)\leq Ef\Bigl(\max_{1\leq k\leq n}\sum_{i=1}^{k}X_i^{*}\Bigr).$$
(2.1)

Lemma 2.3 (Toeplitz lemma)

Let $\{a_n, n\geq 1\}$ be a sequence of nonnegative real numbers with $b_n=\sum_{k=1}^{n}a_k\to\infty$ as $n\to\infty$. If $\lim_{n\to\infty}x_n=x$ (finite), then $\lim_{n\to\infty}\frac{1}{b_n}\sum_{k=1}^{n}a_kx_k=x$.
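To make the connection with the proof of Theorem 3.1 explicit (this remark is added here for illustration), taking $a_k=1$ and $b_n=n$ reduces Lemma 2.3 to the Cesàro-mean statement

$$x_n\to x\quad\Longrightarrow\quad\frac{1}{n}\sum_{k=1}^{n}x_k\to x,$$

which is exactly the form used in case (i) of the proof of Theorem 3.1 with $x_k=EY_k$.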

Shen et al. [8] obtained the following two results. Lemma 2.4 comes from the proof of Theorem 2.1 of [8]. Lemma 2.5 is the Khintchine-Kolmogorov-type convergence theorem for NSD random variables.

Lemma 2.4 (Shen et al. [8])

Let $X_1,X_2,\ldots,X_n$ be NSD random variables with mean zero and finite second moments. Then, for $n\geq 1$,

$$E\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{i=1}^{k}X_i\Bigr|^{2}\Bigr)\leq 2\sum_{i=1}^{n}EX_i^{2}.$$
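As an illustrative consequence (not stated in the original text), combining Lemma 2.4 with Markov's inequality yields a Kolmogorov-type maximal inequality for NSD random variables: for any $\varepsilon>0$,

$$P\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{i=1}^{k}X_i\Bigr|>\varepsilon\Bigr)\leq\frac{1}{\varepsilon^{2}}E\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{i=1}^{k}X_i\Bigr|^{2}\Bigr)\leq\frac{2}{\varepsilon^{2}}\sum_{i=1}^{n}EX_i^{2}.$$

This is precisely the step used in the chain of inequalities (3.6) in the proof of Theorem 3.2.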

Lemma 2.5 (Shen et al. [8])

Let $\{X_n, n\geq 1\}$ be a sequence of NSD random variables. Assume that

$$\sum_{n=1}^{\infty}\operatorname{Var}(X_n)<\infty.$$
(2.2)

Then $\sum_{n=1}^{\infty}(X_n-EX_n)$ converges almost surely.

3 Main results

Theorem 3.1 (Marcinkiewicz-type strong law of large numbers for NSD)

Let $\{X_n, n\geq 1\}$ be a sequence of identically distributed NSD random variables, and suppose that there exists some $0<p<2$ such that

$$E|X_1|^{p}<\infty.$$
(3.1)

If $1\leq p<2$, assume further that $EX_1=0$. Then

$$\lim_{n\to\infty}n^{-1/p}\sum_{k=1}^{n}X_k=0\quad\text{a.s.}$$
(3.2)

Proof Let $Y_n=-n^{1/p}I(X_n<-n^{1/p})+X_nI(|X_n|\leq n^{1/p})+n^{1/p}I(X_n>n^{1/p})$. By (3.1),

$$\sum_{n=1}^{\infty}P(X_n\neq Y_n)=\sum_{n=1}^{\infty}P\bigl(|X_n|>n^{1/p}\bigr)=\sum_{n=1}^{\infty}P\bigl(|X_1|>n^{1/p}\bigr)\leq E|X_1|^{p}<\infty.$$
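For completeness, the last series bound can be justified by the standard tail-sum estimate (this short derivation is added here and is not part of the original proof):

$$\sum_{n=1}^{\infty}P\bigl(|X_1|>n^{1/p}\bigr)=\sum_{n=1}^{\infty}P\bigl(|X_1|^{p}>n\bigr)\leq\int_{0}^{\infty}P\bigl(|X_1|^{p}>t\bigr)\,dt=E|X_1|^{p}.$$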

Therefore $P(X_n\neq Y_n,\ \text{i.o.})=0$ follows from the Borel-Cantelli lemma. Thus (3.2) is equivalent to the following:

$$\lim_{n\to\infty}n^{-1/p}\sum_{k=1}^{n}Y_k=0\quad\text{a.s.}$$

So, in order to prove (3.2), we only need to prove

$$\lim_{n\to\infty}n^{-1/p}\sum_{k=1}^{n}(Y_k-EY_k)=0\quad\text{a.s.},$$
(3.3)

$$\lim_{n\to\infty}n^{-1/p}\sum_{k=1}^{n}EY_k=0.$$
(3.4)

Firstly, we prove (3.3). Note that $\{Y_n, n\geq 1\}$ is still NSD, since each $Y_n$ is a nondecreasing function of $X_n$. Hence, by Lemma 2.5 and Kronecker's lemma, it suffices to prove

$$\sum_{n=1}^{\infty}\operatorname{Var}\Bigl(\frac{Y_n}{n^{1/p}}\Bigr)<\infty.$$

By $0<p<2$ and (3.1),

$$\begin{aligned}
\sum_{n=1}^{\infty}\operatorname{Var}\Bigl(\frac{Y_n}{n^{1/p}}\Bigr)
&\leq\sum_{n=1}^{\infty}\frac{EY_n^{2}}{n^{2/p}}
=\sum_{n=1}^{\infty}n^{-2/p}\bigl[E\bigl(X_1^{2}I\bigl(|X_1|\leq n^{1/p}\bigr)\bigr)+E\bigl(n^{2/p}I\bigl(|X_1|>n^{1/p}\bigr)\bigr)\bigr]\\
&=\sum_{n=1}^{\infty}\bigl[n^{-2/p}E\bigl(X_1^{2}I\bigl(|X_1|\leq n^{1/p}\bigr)\bigr)+P\bigl(|X_1|>n^{1/p}\bigr)\bigr].
\end{aligned}$$

Notice that

$$\sum_{n=1}^{\infty}P\bigl(|X_1|>n^{1/p}\bigr)\leq E|X_1|^{p}<\infty$$

and

$$\begin{aligned}
\sum_{n=1}^{\infty}n^{-2/p}EX_1^{2}I\bigl(|X_1|\leq n^{1/p}\bigr)
&=\sum_{n=1}^{\infty}n^{-2/p}\sum_{k=1}^{n}EX_1^{2}I\bigl(k-1<|X_1|^{p}\leq k\bigr)\\
&=\sum_{k=1}^{\infty}\sum_{n=k}^{\infty}n^{-2/p}EX_1^{2}I\bigl(k-1<|X_1|^{p}\leq k\bigr)\\
&=\sum_{k=1}^{\infty}k^{-2/p}EX_1^{2}I\bigl(k-1<|X_1|^{p}\leq k\bigr)+\sum_{k=1}^{\infty}\sum_{n=k+1}^{\infty}n^{-2/p}EX_1^{2}I\bigl(k-1<|X_1|^{p}\leq k\bigr)\\
&\leq\sum_{k=1}^{\infty}k^{-2/p}E\,k^{2/p}I\bigl(k-1<|X_1|^{p}\leq k\bigr)+\sum_{k=1}^{\infty}EX_1^{2}I\bigl(k-1<|X_1|^{p}\leq k\bigr)\int_{k}^{\infty}x^{-2/p}\,dx\\
&\leq\sum_{k=1}^{\infty}P\bigl(k-1<|X_1|^{p}\leq k\bigr)+\frac{p}{2-p}\sum_{k=1}^{\infty}k^{-\frac{2-p}{p}}E|X_1|^{p}k^{\frac{2-p}{p}}I\bigl(k-1<|X_1|^{p}\leq k\bigr)\\
&\leq 1+\frac{p}{2-p}E|X_1|^{p}<\infty.
\end{aligned}$$

Thus

$$\sum_{n=1}^{\infty}\operatorname{Var}\Bigl(\frac{Y_n}{n^{1/p}}\Bigr)<\infty.$$

Secondly, we prove (3.4).

(i) If $p=1$, then

$$|EY_n|\leq\bigl|EX_1I\bigl(|X_1|\leq n\bigr)\bigr|+E\bigl(nI\bigl(|X_1|>n\bigr)\bigr).$$

It is easy to see that

$$\lim_{n\to\infty}EX_1I\bigl(|X_1|\leq n\bigr)=EX_1=0,\qquad
\lim_{n\to\infty}E\bigl(nI\bigl(|X_1|>n\bigr)\bigr)\leq\lim_{n\to\infty}E|X_1|I\bigl(|X_1|>n\bigr)=0.$$

Therefore $\lim_{n\to\infty}EY_n=0$, and by Lemma 2.3 we get (3.4) immediately.

(ii) If $0<p<1$, then by (3.1),

$$\sum_{n=1}^{\infty}\frac{|EY_n|}{n^{1/p}}\leq\sum_{n=1}^{\infty}n^{-1/p}\bigl[E|X_n|I\bigl(|X_n|\leq n^{1/p}\bigr)+E\bigl(n^{1/p}I\bigl(|X_n|>n^{1/p}\bigr)\bigr)\bigr]
=\sum_{n=1}^{\infty}\bigl[n^{-1/p}E|X_1|I\bigl(|X_1|\leq n^{1/p}\bigr)+P\bigl(|X_1|>n^{1/p}\bigr)\bigr].$$

Notice that

$$\sum_{n=1}^{\infty}P\bigl(|X_1|>n^{1/p}\bigr)\leq E|X_1|^{p}<\infty$$

and

$$\begin{aligned}
\sum_{n=1}^{\infty}n^{-1/p}E|X_1|I\bigl(|X_1|\leq n^{1/p}\bigr)
&=\sum_{n=1}^{\infty}n^{-1/p}\sum_{j=1}^{n}E|X_1|I\bigl(j-1<|X_1|^{p}\leq j\bigr)\\
&=\sum_{j=1}^{\infty}\sum_{n=j}^{\infty}n^{-1/p}E|X_1|I\bigl(j-1<|X_1|^{p}\leq j\bigr)\\
&=\sum_{j=1}^{\infty}j^{-1/p}E|X_1|I\bigl(j-1<|X_1|^{p}\leq j\bigr)+\sum_{j=1}^{\infty}\sum_{n=j+1}^{\infty}n^{-1/p}E|X_1|I\bigl(j-1<|X_1|^{p}\leq j\bigr)\\
&\leq\sum_{j=1}^{\infty}j^{-1/p}E\,j^{1/p}I\bigl(j-1<|X_1|^{p}\leq j\bigr)+\sum_{j=1}^{\infty}E|X_1|I\bigl(j-1<|X_1|^{p}\leq j\bigr)\int_{j}^{\infty}x^{-1/p}\,dx\\
&\leq\sum_{j=1}^{\infty}P\bigl(j-1<|X_1|^{p}\leq j\bigr)+\frac{p}{1-p}\sum_{j=1}^{\infty}E|X_1|^{p}I\bigl(j-1<|X_1|^{p}\leq j\bigr)\\
&\leq 1+\frac{p}{1-p}E|X_1|^{p}<\infty,
\end{aligned}$$

so we can get (3.4) from Kronecker's lemma.

(iii) If $1<p<2$, then by $EX_1=0$ and (3.1),

$$\sum_{n=1}^{\infty}\frac{|EY_n|}{n^{1/p}}\leq\sum_{n=1}^{\infty}n^{-1/p}\bigl[\bigl|EX_1I\bigl(|X_1|>n^{1/p}\bigr)\bigr|+E\bigl(n^{1/p}I\bigl(|X_1|>n^{1/p}\bigr)\bigr)\bigr]
\leq\sum_{n=1}^{\infty}\bigl[n^{-1/p}E|X_1|I\bigl(|X_1|>n^{1/p}\bigr)+P\bigl(|X_1|>n^{1/p}\bigr)\bigr].$$

Notice that

$$\sum_{n=1}^{\infty}P\bigl(|X_1|>n^{1/p}\bigr)\leq E|X_1|^{p}<\infty$$

and

$$\begin{aligned}
\sum_{n=1}^{\infty}n^{-1/p}E|X_1|I\bigl(|X_1|>n^{1/p}\bigr)
&=\sum_{n=1}^{\infty}\sum_{j=n}^{\infty}n^{-1/p}E|X_1|I\bigl(j<|X_1|^{p}\leq j+1\bigr)\\
&=\sum_{j=1}^{\infty}\sum_{n=1}^{j}n^{-1/p}E|X_1|I\bigl(j<|X_1|^{p}\leq j+1\bigr)\\
&\leq\sum_{j=1}^{\infty}E|X_1|^{p}|X_1|^{1-p}I\bigl(j<|X_1|^{p}\leq j+1\bigr)\int_{0}^{j}x^{-1/p}\,dx\\
&\leq\frac{p}{p-1}\sum_{j=1}^{\infty}E|X_1|^{p}I\bigl(j<|X_1|^{p}\leq j+1\bigr)\\
&\leq\frac{p}{p-1}\sum_{j=0}^{\infty}E|X_1|^{p}I\bigl(j<|X_1|^{p}\leq j+1\bigr)=\frac{p}{p-1}E|X_1|^{p}<\infty.
\end{aligned}$$

Thus we can also get (3.4) from Kronecker's lemma. □

Theorem 3.2 Let $\{X_n, n\geq 1\}$ be a sequence of NSD random variables with mean zero and finite second moments, and let $\{b_n, n\geq 1\}$ be a sequence of positive nondecreasing real numbers. Then, for any $\varepsilon>0$ and $n\geq 1$,

$$P\Bigl(\max_{1\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr)\leq\frac{8}{\varepsilon^{2}}\sum_{j=1}^{n}\frac{EX_j^{2}}{b_j^{2}}.$$
(3.5)

Proof Without loss of generality, we may assume that $b_n\geq 1$ for all $n\geq 1$. Let $\alpha=\sqrt{2}$. For $i\geq 0$, define $A_i=\{1\leq k\leq n:\alpha^{i}\leq b_k<\alpha^{i+1}\}$. Then $A_i$ may be an empty set. For $A_i\neq\emptyset$, let $v(i)=\max\{k:k\in A_i\}$, and let $t_n$ be the index of the last nonempty set $A_i$. Obviously, $A_i\cap A_j=\emptyset$ if $i\neq j$ and $\bigcup_{i=0}^{t_n}A_i=\{1,2,\ldots,n\}$. It is easy to see that $\alpha^{i}\leq b_k\leq b_{v(i)}<\alpha^{i+1}$ if $k\in A_i$. By Markov's inequality and Lemma 2.4, we have

$$\begin{aligned}
P\Bigl\{\max_{1\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr\}
&=P\Bigl\{\max_{0\leq i\leq t_n,\,A_i\neq\emptyset}\max_{k\in A_i}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr\}
=P\Bigl\{\bigcup_{0\leq i\leq t_n,\,A_i\neq\emptyset}\Bigl(\max_{k\in A_i}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr)\Bigr\}\\
&\leq\sum_{i=0,\,A_i\neq\emptyset}^{t_n}P\Bigl\{\max_{k\in A_i}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr\}
\leq\sum_{i=0,\,A_i\neq\emptyset}^{t_n}P\Bigl\{\frac{1}{\alpha^{i}}\max_{1\leq k\leq v(i)}\Bigl|\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr\}\\
&\leq\frac{1}{\varepsilon^{2}}\sum_{i=0,\,A_i\neq\emptyset}^{t_n}\frac{1}{\alpha^{2i}}E\Bigl(\max_{1\leq k\leq v(i)}\Bigl|\sum_{j=1}^{k}X_j\Bigr|^{2}\Bigr)
\leq\frac{2}{\varepsilon^{2}}\sum_{i=0,\,A_i\neq\emptyset}^{t_n}\frac{1}{\alpha^{2i}}\sum_{j=1}^{v(i)}EX_j^{2}\\
&=\frac{2}{\varepsilon^{2}}\sum_{j=1}^{n}EX_j^{2}\sum_{i=0,\,A_i\neq\emptyset,\,v(i)\geq j}^{t_n}\frac{1}{\alpha^{2i}}.
\end{aligned}$$
(3.6)

Now we estimate $\sum_{i=0,\,A_i\neq\emptyset,\,v(i)\geq j}^{t_n}\alpha^{-2i}$. Let $i_0=\min\{i:A_i\neq\emptyset,\,v(i)\geq j\}$. Then $b_j\leq b_{v(i_0)}<\alpha^{i_0+1}$ follows from the definition of $v(i)$. Therefore

$$\sum_{i=0,\,A_i\neq\emptyset,\,v(i)\geq j}^{t_n}\frac{1}{\alpha^{2i}}
\leq\sum_{i=i_0}^{\infty}\frac{1}{\alpha^{2i}}
=\frac{1}{1-\alpha^{-2}}\cdot\frac{1}{\alpha^{2i_0}}
=\frac{\alpha^{2}}{1-\alpha^{-2}}\cdot\frac{1}{\alpha^{2(i_0+1)}}
<\frac{\alpha^{2}}{1-\alpha^{-2}}\cdot\frac{1}{b_j^{2}}
=\frac{4}{b_j^{2}}.$$
(3.7)

Combining (3.6) with (3.7), we obtain (3.5) immediately. □
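As an illustration of how Theorem 3.2 is typically applied (this special case is added here and is not stated in the original text), taking $b_k=k$ gives a Hajek-Renyi-type bound for normalized partial sums:

$$P\Bigl(\max_{1\leq k\leq n}\Bigl|\frac{S_k}{k}\Bigr|>\varepsilon\Bigr)\leq\frac{8}{\varepsilon^{2}}\sum_{j=1}^{n}\frac{EX_j^{2}}{j^{2}},$$

which is bounded uniformly in $n$ whenever $\sum_{j=1}^{\infty}EX_j^{2}/j^{2}<\infty$.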

Theorem 3.3 Let $\{X_n, n\geq 1\}$ be a sequence of NSD random variables with mean zero and finite second moments, and let $\{b_n, n\geq 1\}$ be a sequence of positive nondecreasing real numbers. Then, for any $\varepsilon>0$ and any positive integers $m<n$,

$$P\Bigl(\max_{m\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr)
\leq\frac{4}{\varepsilon^{2}b_m^{2}}\Bigl\{\sum_{j=1}^{m}EX_j^{2}+2\sum_{1\leq k<j\leq m}\operatorname{Cov}(X_k,X_j)\Bigr\}
+\frac{32}{\varepsilon^{2}}\sum_{j=m+1}^{n}\frac{EX_j^{2}}{b_j^{2}}.$$
(3.8)

Proof Observe that

$$\max_{m\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|\leq\Bigl|\frac{1}{b_m}\sum_{j=1}^{m}X_j\Bigr|+\max_{m+1\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=m+1}^{k}X_j\Bigr|.$$

Then

$$P\Bigl\{\max_{m\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr\}
\leq P\Bigl\{\Bigl|\frac{1}{b_m}\sum_{j=1}^{m}X_j\Bigr|>\frac{\varepsilon}{2}\Bigr\}
+P\Bigl\{\max_{m+1\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=m+1}^{k}X_j\Bigr|>\frac{\varepsilon}{2}\Bigr\}
=:I+II.$$
(3.9)

For $I$, by Markov's inequality, we have

$$I\leq\frac{4}{\varepsilon^{2}b_m^{2}}E\Bigl|\sum_{j=1}^{m}X_j\Bigr|^{2}
=\frac{4}{\varepsilon^{2}b_m^{2}}\Bigl[\sum_{j=1}^{m}EX_j^{2}+2\sum_{1\leq k<j\leq m}EX_kX_j\Bigr]
=\frac{4}{\varepsilon^{2}b_m^{2}}\Bigl[\sum_{j=1}^{m}EX_j^{2}+2\sum_{1\leq k<j\leq m}\operatorname{Cov}(X_k,X_j)\Bigr].$$
(3.10)

For $II$, we apply Theorem 3.2 to $\{X_{m+i},1\leq i\leq n-m\}$ and $\{b_{m+i},1\leq i\leq n-m\}$. According to Lemma 2.1, $\{X_{m+i},1\leq i\leq n-m\}$ is NSD. Noting that

$$\max_{m+1\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=m+1}^{k}X_j\Bigr|=\max_{1\leq k\leq n-m}\Bigl|\frac{1}{b_{m+k}}\sum_{j=1}^{k}X_{m+j}\Bigr|,$$

we obtain from Theorem 3.2 that

$$II=P\Bigl\{\max_{m+1\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=m+1}^{k}X_j\Bigr|>\frac{\varepsilon}{2}\Bigr\}
=P\Bigl\{\max_{1\leq k\leq n-m}\Bigl|\frac{1}{b_{m+k}}\sum_{j=1}^{k}X_{m+j}\Bigr|>\frac{\varepsilon}{2}\Bigr\}
\leq\frac{8}{(\varepsilon/2)^{2}}\sum_{j=1}^{n-m}\frac{EX_{m+j}^{2}}{b_{m+j}^{2}}
=\frac{32}{\varepsilon^{2}}\sum_{j=m+1}^{n}\frac{EX_j^{2}}{b_j^{2}}.$$
(3.11)

Therefore the desired result (3.8) follows from (3.9)-(3.11) immediately. □

Theorem 3.4 Let $\{b_n, n\geq 1\}$ be a sequence of positive nondecreasing real numbers, and let $\{X_n, n\geq 1\}$ be a sequence of NSD random variables with mean zero and $\sum_{j=1}^{\infty}EX_j^{2}/b_j^{2}<\infty$. If $0<r<2$, then

$$E\Bigl(\sup_{n\geq 1}\Bigl|\frac{S_n}{b_n}\Bigr|^{r}\Bigr)\leq 1+\frac{8r}{2-r}\sum_{j=1}^{\infty}\frac{EX_j^{2}}{b_j^{2}}<\infty.$$
(3.12)

Proof For $0<r<2$ and $t>0$, it follows from Theorem 3.2 that

$$P\Bigl(\sup_{n\geq 1}\Bigl|\frac{S_n}{b_n}\Bigr|^{r}>t\Bigr)
=P\Bigl[\bigcup_{N=1}^{\infty}\Bigl(\max_{1\leq n\leq N}\Bigl|\frac{S_n}{b_n}\Bigr|>t^{1/r}\Bigr)\Bigr]
=\lim_{N\to\infty}P\Bigl(\max_{1\leq n\leq N}\Bigl|\frac{S_n}{b_n}\Bigr|>t^{1/r}\Bigr)
\leq\lim_{N\to\infty}\frac{8}{t^{2/r}}\sum_{j=1}^{N}\frac{EX_j^{2}}{b_j^{2}}
=\frac{8}{t^{2/r}}\sum_{j=1}^{\infty}\frac{EX_j^{2}}{b_j^{2}}.$$

Thus

$$\begin{aligned}
E\Bigl(\sup_{n\geq 1}\Bigl|\frac{S_n}{b_n}\Bigr|^{r}\Bigr)
&=\int_{0}^{\infty}P\Bigl(\sup_{n\geq 1}\Bigl|\frac{S_n}{b_n}\Bigr|^{r}>t\Bigr)\,dt
=\int_{0}^{1}P\Bigl(\sup_{n\geq 1}\Bigl|\frac{S_n}{b_n}\Bigr|^{r}>t\Bigr)\,dt
+\int_{1}^{\infty}P\Bigl(\sup_{n\geq 1}\Bigl|\frac{S_n}{b_n}\Bigr|^{r}>t\Bigr)\,dt\\
&\leq 1+8\sum_{j=1}^{\infty}\frac{EX_j^{2}}{b_j^{2}}\int_{1}^{\infty}t^{-2/r}\,dt
=1+\frac{8r}{2-r}\sum_{j=1}^{\infty}\frac{EX_j^{2}}{b_j^{2}}<\infty.
\end{aligned}$$

 □
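For example (an illustration added here, not part of the original text), if $b_n=n$ and the second moments are uniformly bounded, say $\sup_{j\geq 1}EX_j^{2}\leq\sigma^{2}<\infty$, then $\sum_{j=1}^{\infty}EX_j^{2}/j^{2}\leq\sigma^{2}\pi^{2}/6<\infty$, and Theorem 3.4 gives

$$E\Bigl(\sup_{n\geq 1}\Bigl|\frac{S_n}{n}\Bigr|^{r}\Bigr)\leq 1+\frac{8r}{2-r}\cdot\frac{\sigma^{2}\pi^{2}}{6}<\infty\quad\text{for every }0<r<2.$$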

Example 3.5 Similar to the proof of Theorem 2.1 of Shen et al. [8], we can get the following inequalities for NSD random variables.

Let $p\geq 1$. Suppose that $\{X_n, n\geq 1\}$ is a sequence of NSD random variables with mean zero and $E|X_n|^{p}<\infty$ for each $n\geq 1$. Then, for $n\geq 1$,

$$E\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{i=1}^{k}X_i\Bigr|^{p}\Bigr)\leq 2E\Bigl|\sum_{i=1}^{n}X_i^{*}\Bigr|^{p},$$
(3.13)

$$E\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{i=1}^{k}X_i\Bigr|^{p}\Bigr)\leq 2^{3-p}\sum_{i=1}^{n}E|X_i|^{p}\quad\text{for }1\leq p\leq 2,$$
(3.14)

$$E\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{i=1}^{k}X_i\Bigr|^{p}\Bigr)\leq C_p\Bigl\{\Bigl(\sum_{i=1}^{n}EX_i^{2}\Bigr)^{p/2}+\sum_{i=1}^{n}E|X_i|^{p}\Bigr\}\quad\text{for }p>2,$$
(3.15)

where $C_p$ is a positive constant depending only on $p$.

In fact, taking $f(x)=[\max(0,x)]^{p}$ in (2.1), we can get (3.13) similarly to the proof of Theorem 2.1 in [8]. Following the same line of arguments as in the proof of Theorem 2 of Shao [14], we can obtain (3.14) and (3.15) immediately.

Inequality (3.15) is the Rosenthal-type inequality for NSD random variables, one of the most useful inequalities in probability theory. Hu [2] pointed out that the Rosenthal-type inequality remains true for NSD random variables; here we indicate how this inequality can be proved and provide two other inequalities.
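As a consistency check (an observation added here, not stated in the original text), inequality (3.14) with $p=2$ reads

$$E\Bigl(\max_{1\leq k\leq n}\Bigl|\sum_{i=1}^{k}X_i\Bigr|^{2}\Bigr)\leq 2^{3-2}\sum_{i=1}^{n}EX_i^{2}=2\sum_{i=1}^{n}EX_i^{2},$$

which coincides with the bound in Lemma 2.4.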

Remark 3.6 Theorem 3.1 provides the Marcinkiewicz-type strong law of large numbers for NSD random variables. The Marcinkiewicz strong law of large numbers for independent sequences and other dependent sequences has been studied by many authors; see, for example, Lin et al. [15] for independent sequences, Wu and Jiang [16] for $\tilde{\rho}$-mixing sequences, Wu [17] for PNQD sequences, and Wang et al. [18] for martingale difference sequences.

Remark 3.7 If $\{X_n, n\geq 1\}$ is a sequence of NA random variables with finite second moments, Christofides and Vaggelatou [5] obtained the following result: for any $\varepsilon>0$,

$$P\Bigl(\max_{m\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}(X_j-EX_j)\Bigr|>\varepsilon\Bigr)
\leq\frac{32}{\varepsilon^{2}b_m^{2}}\sum_{j=1}^{m}\operatorname{Var}X_j+\frac{32}{\varepsilon^{2}}\sum_{j=m+1}^{n}\frac{\operatorname{Var}X_j}{b_j^{2}}.$$

So, if we further assume that $EX_n=0$, $n\geq 1$, then

$$P\Bigl(\max_{m\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr)
\leq\frac{32}{\varepsilon^{2}b_m^{2}}\sum_{j=1}^{m}EX_j^{2}+\frac{32}{\varepsilon^{2}}\sum_{j=m+1}^{n}\frac{EX_j^{2}}{b_j^{2}}.$$

If $\{X_n, n\geq 1\}$ is a sequence of NA random variables, then it is also a sequence of NSD random variables, and since $\operatorname{Cov}(X_k,X_j)\leq 0$, (3.8) yields

$$P\Bigl(\max_{m\leq k\leq n}\Bigl|\frac{1}{b_k}\sum_{j=1}^{k}X_j\Bigr|>\varepsilon\Bigr)
\leq\frac{4}{\varepsilon^{2}b_m^{2}}\sum_{j=1}^{m}EX_j^{2}+\frac{32}{\varepsilon^{2}}\sum_{j=m+1}^{n}\frac{EX_j^{2}}{b_j^{2}}.$$

Hence Theorem 3.2 and Theorem 3.3 extend the Hajek-Renyi-type inequalities for NA random variables (Christofides and Vaggelatou [5]), and the constant factors are improved. As an application of Theorem 3.2, Theorem 3.4 extends the result of Liu et al. [13] for NA random variables to NSD random variables.

References

  1. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann. Stat. 1983, 11: 286-295. 10.1214/aos/1176346079

  2. Hu TZ: Negatively superadditive dependence of random variables with applications. Chinese J. Appl. Probab. Statist. 2000, 16: 133-144.

  3. Block HW, Griffith WS, Savits TH: L-superadditive structure functions. Adv. Appl. Probab. 1989, 21: 919-929. 10.2307/1427774

  4. Kemperman JHB: On the FKG-inequalities for measures on a partially ordered space. Indag. Math. 1977, 80: 313-331. 10.1016/1385-7258(77)90027-0

  5. Christofides TC, Vaggelatou E: A connection between supermodular ordering and positive/negative association. J. Multivar. Anal. 2004, 88: 138-151. 10.1016/S0047-259X(03)00064-2

  6. Eghbal N, Amini M, Bozorgnia A: Some maximal inequalities for quadratic forms of negative superadditive dependence random variables. Stat. Probab. Lett. 2010, 80: 587-591. 10.1016/j.spl.2009.12.014

  7. Eghbal N, Amini M, Bozorgnia A: On the Kolmogorov inequalities for quadratic forms of dependent uniformly bounded random variables. Stat. Probab. Lett. 2011, 81: 1112-1120. 10.1016/j.spl.2011.03.005

  8. Shen Y, Wang XJ, Yang WZ, Hu SH: Almost sure convergence theorem and strong stability for weighted sums of NSD random variables. Acta Math. Sin. Engl. Ser. 2013, 29: 743-756. 10.1007/s10114-012-1723-6

  9. Sung SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 2011, 52: 447-454. 10.1007/s00362-009-0241-9

  10. Sung SH: On the strong convergence for weighted sums of ρ*-mixing random variables. Stat. Pap. 2013, 54: 773-781. 10.1007/s00362-012-0461-2

  11. Zhou XC, Tan CC, Lin JG: On the strong laws for weighted sums of ρ*-mixing random variables. J. Inequal. Appl. 2011, 2011: Article ID 157816

  12. Zhou XC: Complete moment convergence of moving average processes under φ-mixing assumptions. Stat. Probab. Lett. 2010, 80: 285-292. 10.1016/j.spl.2009.10.018

  13. Liu JJ, Gan SX, Chen PY: The Hajek-Renyi inequality for the NA random variables and its application. Stat. Probab. Lett. 1999, 43: 99-105. 10.1016/S0167-7152(98)00251-X

  14. Shao QM: A comparison theorem on moment inequalities between negatively associated and independent random variables. J. Theor. Probab. 2000, 13: 343-355. 10.1023/A:1007849609234

  15. Lin ZY, Lu CR, Su ZG: Probability Limit Theory. Higher Education Press, Beijing; 1999.

  16. Wu QY, Jiang YY: Some strong limit theorems for ρ̃-mixing sequences of random variables. Stat. Probab. Lett. 2008, 78: 1017-1023. 10.1016/j.spl.2007.09.061

  17. Wu QY: Convergence properties of pairwise NQD random sequences. Acta Math. Sin. New Ser. 2002, 45(3):617-624.

  18. Wang XJ, Hu SH, Yang WZ, Wang XH: Convergence rate in the strong law of large numbers for martingale difference sequences. Abstr. Appl. Anal. 2012., 2012: Article ID 572493


Acknowledgements

This work is supported by the National Natural Science Foundation of China (11171001, 11201001), the Doctoral Research Start-up Funds Project of Anhui University, the Key Program of the Research and Development Foundation of Hefei University (13KY05ZD) and the Natural Science Foundation of Anhui Province (1208085QA03).

Author information

Corresponding author

Correspondence to Shuhe Hu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Shen, Y., Wang, X. & Hu, S. On the strong convergence and some inequalities for negatively superadditive dependent sequences. J Inequal Appl 2013, 448 (2013). https://doi.org/10.1186/1029-242X-2013-448
