
Probability inequalities for END sequence and their applications

Abstract

Some probability inequalities for extended negatively dependent (END) sequences are provided. Using these probability inequalities, we present some moment inequalities, especially a Rosenthal-type inequality, for END sequences. Finally, we study the asymptotic approximation of inverse moments for nonnegative END sequences with finite first moments, which generalizes and improves the corresponding results of Wu et al. [Stat. Probab. Lett. 79, 1366-1371 (2009)], Wang et al. [Stat. Probab. Lett. 80, 452-461 (2010)], and Sung [J. Inequal. Appl. 2010, Article ID 823767, 13 pp. (2010). doi:10.1155/2010/823767].

MSC(2000): 60E15; 62G20.

1 Introduction

It is well known that probability inequalities play an important role in various proofs of limit theorems; in particular, they provide a measure of the convergence rate for the strong law of large numbers. The main purpose of this article is to provide some probability inequalities for extended negatively dependent (END) sequences, a class that contains independent sequences, NA sequences, and NOD sequences as special cases. These probability inequalities for END random variables are mainly inspired by Fakoor and Azarnoosh [1] and Asadian et al. [2]. Using them, we can further study moment inequalities and the asymptotic approximation of inverse moments for END sequences.

First, we will recall the definitions of NOD and END sequences.

Definition 1.1 (cf. Joag-Dev and Proschan [3]). A finite collection of random variables X_1, X_2, \dots, X_n is said to be negatively upper orthant dependent (NUOD) if for all real numbers x_1, x_2, \dots, x_n,

P(X_i > x_i,\ i = 1, 2, \dots, n) \le \prod_{i=1}^{n} P(X_i > x_i),
(1.1)

and negatively lower orthant dependent (NLOD) if for all real numbers x_1, x_2, \dots, x_n,

P(X_i \le x_i,\ i = 1, 2, \dots, n) \le \prod_{i=1}^{n} P(X_i \le x_i).
(1.2)

A finite collection of random variables X_1, X_2, \dots, X_n is said to be negatively orthant dependent (NOD) if they are both NUOD and NLOD.

An infinite sequence \{X_n, n \ge 1\} is said to be NOD if every finite subcollection is NOD.

Definition 1.2 (cf. Liu [4]). We call random variables \{X_n, n \ge 1\} END if there exists a constant M > 0 such that both

P(X_1 > x_1, X_2 > x_2, \dots, X_n > x_n) \le M \prod_{i=1}^{n} P(X_i > x_i)
(1.3)

and

P(X_1 \le x_1, X_2 \le x_2, \dots, X_n \le x_n) \le M \prod_{i=1}^{n} P(X_i \le x_i)
(1.4)

hold for each n \ge 1 and all real numbers x_1, x_2, \dots, x_n.

The concept of an END sequence was introduced by Liu [4]. Some applications of END sequences have already been found: Liu [4] obtained precise large deviations for dependent random variables with heavy tails, and Liu [5] studied the sufficient and necessary conditions of moderate deviations for dependent random variables with heavy tails. It is easily seen that independent random variables and NOD random variables are END. Joag-Dev and Proschan [3] pointed out that NA random variables are NOD; thus, NA random variables are END. Since the END condition is much weaker than independence, negative association, and negative orthant dependence, studying the limit behavior of END sequences is of interest.

Throughout the article, let \{X_n, n \ge 1\} be a sequence of END random variables defined on a fixed probability space (\Omega, \mathcal{F}, P) with respective distribution functions F_1, F_2, \dots. Denote X^+ = \max\{0, X\}. We write c_n \sim d_n if c_n d_n^{-1} \to 1 as n \to \infty, and c_n = o(d_n) if c_n d_n^{-1} \to 0 as n \to \infty. Let M and C be positive constants which may differ in various places. Set

M_{t,n} = \sum_{i=1}^{n} E|X_i|^t, \qquad S_n = \sum_{i=1}^{n} X_i, \qquad n \ge 1.

The following lemma is useful.

Lemma 1.1 (cf. Liu [5]). Let random variables X_1, X_2, \dots, X_n be END.

(i) If f_1, f_2, \dots, f_n are all nondecreasing (or all nonincreasing) functions, then the random variables f_1(X_1), f_2(X_2), \dots, f_n(X_n) are END.

(ii) For each n ≥ 1, there exists a constant M > 0 such that

E \prod_{j=1}^{n} X_j^+ \le M \prod_{j=1}^{n} E X_j^+.
(1.5)

Lemma 1.2. Let \{X_n, n \ge 1\} be a sequence of END random variables and \{t_n, n \ge 1\} a sequence of nonnegative numbers (or nonpositive numbers). Then for each n \ge 1, there exists a constant M > 0 such that

E \prod_{i=1}^{n} e^{t_i X_i} \le M \prod_{i=1}^{n} E e^{t_i X_i}.
(1.6)

As a byproduct, for any t \in \mathbb{R},

E \prod_{i=1}^{n} e^{t X_i} \le M \prod_{i=1}^{n} E e^{t X_i}.
(1.7)

Proof. Since the t_i are all nonnegative (or all nonpositive), the functions u \mapsto e^{t_i u} are all nondecreasing (or all nonincreasing), so e^{t_1 X_1}, e^{t_2 X_2}, \dots, e^{t_n X_n} are END by Lemma 1.1(i). These random variables are nonnegative and hence coincide with their positive parts, so (1.6) follows from Lemma 1.1(ii); taking t_i \equiv t yields (1.7).   □

The organization of this article is as follows: The probability inequalities for END sequence are provided in Section 2, the moment inequalities for END sequence are presented in Section 3, and the asymptotic approximation of inverse moment for nonnegative END sequence is studied in Section 4.

2 Probability inequalities for sums of END sequence

In this section, we give some probability inequalities for END random variables, which can be applied to obtain moment inequalities and the strong law of large numbers. The proofs are mainly inspired by Fakoor and Azarnoosh [1] and Asadian et al. [2]. Throughout, x and y denote arbitrary positive numbers.

Theorem 2.1. Let 0 < t ≤ 1. Then, there exists a positive constant M such that

P(S_n \ge x) \le \sum_{i=1}^{n} P(X_i \ge y) + M \exp\left\{ \frac{x}{y} - \frac{x}{y} \log\left(1 + \frac{x y^{t-1}}{M_{t,n}}\right) \right\}.
(2.1)

If x y^{t-1} > M_{t,n}, then

P(S_n \ge x) \le \sum_{i=1}^{n} P(X_i \ge y) + M \exp\left\{ \frac{x}{y} - \frac{M_{t,n}}{y^t} - \frac{x}{y} \log\left(\frac{x y^{t-1}}{M_{t,n}}\right) \right\}.
(2.2)

Proof. For y > 0, denote Y_i = \min(X_i, y), i = 1, 2, \dots, n, and T_n = \sum_{i=1}^{n} Y_i, n \ge 1. It is easy to check that

\{S_n \ge x\} \subset \{T_n \ne S_n\} \cup \{T_n \ge x\},

which implies that for any positive number h,

P(S_n \ge x) \le P(T_n \ne S_n) + P(T_n \ge x) \le \sum_{i=1}^{n} P(X_i \ge y) + e^{-hx} E e^{h T_n}.
(2.3)

Lemma 1.1(i) implies that Y_1, Y_2, \dots, Y_n are still END random variables. It follows from (2.3) and Lemma 1.2 that

P(S_n \ge x) \le \sum_{i=1}^{n} P(X_i \ge y) + M e^{-hx} \prod_{i=1}^{n} E e^{h Y_i},
(2.4)

where M is a positive constant. For 0 < t \le 1, the function (e^{hu} - 1)/u^t is increasing in u on (0, \infty). Thus,

E e^{h Y_i} = \int_{-\infty}^{y} (e^{hu} - 1)\, dF_i(u) + \int_{y}^{\infty} (e^{hy} - 1)\, dF_i(u) + 1
\le \int_{0}^{y} (e^{hu} - 1)\, dF_i(u) + \int_{y}^{\infty} (e^{hy} - 1)\, dF_i(u) + 1
\le \frac{e^{hy} - 1}{y^t} \int_{0}^{y} u^t\, dF_i(u) + \frac{e^{hy} - 1}{y^t} \int_{y}^{\infty} u^t\, dF_i(u) + 1
\le 1 + \frac{e^{hy} - 1}{y^t} E|X_i|^t
\le \exp\left\{ \frac{e^{hy} - 1}{y^t} E|X_i|^t \right\}.

Combining the inequality above and (2.4), we can get that

P(S_n \ge x) \le \sum_{i=1}^{n} P(X_i \ge y) + M \exp\left\{ \frac{e^{hy} - 1}{y^t} M_{t,n} - hx \right\}.
(2.5)

Taking h = \frac{1}{y} \log\left(1 + \frac{x y^{t-1}}{M_{t,n}}\right) in the right-hand side of (2.5), we get (2.1) immediately. If x y^{t-1} > M_{t,n}, then the right-hand side of (2.5) attains its minimum at h = \frac{1}{y} \log\left(\frac{x y^{t-1}}{M_{t,n}}\right). Substituting this value of h into the right-hand side of (2.5), we get (2.2) immediately. This completes the proof of the theorem.   □
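For the reader's convenience, here is the substitution check behind (2.1), a routine computation under the notation above: with h = \frac{1}{y} \log\left(1 + \frac{x y^{t-1}}{M_{t,n}}\right) we have e^{hy} - 1 = \frac{x y^{t-1}}{M_{t,n}}, so

\frac{e^{hy} - 1}{y^t} M_{t,n} - hx = \frac{x y^{t-1}}{M_{t,n}} \cdot \frac{M_{t,n}}{y^t} - \frac{x}{y} \log\left(1 + \frac{x y^{t-1}}{M_{t,n}}\right) = \frac{x}{y} - \frac{x}{y} \log\left(1 + \frac{x y^{t-1}}{M_{t,n}}\right),

which is exactly the exponent in (2.1).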

By Theorem 2.1, we can get the following Theorem 2.2 immediately.

Theorem 2.2. Let 0 < t ≤ 1. Then, there exists a positive constant M such that

P(|S_n| \ge x) \le \sum_{i=1}^{n} P(|X_i| \ge y) + 2M \exp\left\{ \frac{x}{y} - \frac{x}{y} \log\left(1 + \frac{x y^{t-1}}{M_{t,n}}\right) \right\}.
(2.6)

If x y^{t-1} > M_{t,n}, then

P(|S_n| \ge x) \le \sum_{i=1}^{n} P(|X_i| \ge y) + 2M \exp\left\{ \frac{x}{y} - \frac{M_{t,n}}{y^t} - \frac{x}{y} \log\left(\frac{x y^{t-1}}{M_{t,n}}\right) \right\}.
(2.7)

Theorem 2.3. Assume that EX_i = 0 for each i \ge 1. Then for any h, x, y > 0, there exists a positive constant M such that

P(|S_n| \ge x) \le \sum_{i=1}^{n} P(|X_i| \ge y) + 2M \exp\left\{ \frac{e^{hy} - 1 - hy}{y^2} M_{2,n} - hx \right\}.
(2.8)

If we take h = \frac{1}{y} \log\left(1 + \frac{xy}{M_{2,n}}\right), then

P(|S_n| \ge x) \le \sum_{i=1}^{n} P(|X_i| \ge y) + 2M \exp\left\{ \frac{x}{y} - \frac{x}{y} \log\left(1 + \frac{xy}{M_{2,n}}\right) \right\}.
(2.9)

Proof. We use the same notation as in the proof of Theorem 2.1. It is easy to see that (e^{hu} - 1 - hu)/u^2 is nondecreasing on the real line. Therefore,

E e^{h Y_i} \le 1 + h E X_i + \int_{-\infty}^{y} (e^{hu} - 1 - hu)\, dF_i(u) + \int_{y}^{\infty} (e^{hy} - 1 - hy)\, dF_i(u)
= 1 + \int_{-\infty}^{y} \frac{e^{hu} - 1 - hu}{u^2}\, u^2\, dF_i(u) + \int_{y}^{\infty} (e^{hy} - 1 - hy)\, dF_i(u)
\le 1 + \frac{e^{hy} - 1 - hy}{y^2} \left[ \int_{-\infty}^{y} u^2\, dF_i(u) + \int_{y}^{\infty} y^2\, dF_i(u) \right]
\le 1 + \frac{e^{hy} - 1 - hy}{y^2} E X_i^2
\le \exp\left\{ \frac{e^{hy} - 1 - hy}{y^2} E X_i^2 \right\},

which implies that

P(S_n \ge x) \le \sum_{i=1}^{n} P(X_i \ge y) + M \exp\left\{ \frac{e^{hy} - 1 - hy}{y^2} M_{2,n} - hx \right\}.

Replacing X_i by -X_i, we have

P(-S_n \ge x) \le \sum_{i=1}^{n} P(-X_i \ge y) + M \exp\left\{ \frac{e^{hy} - 1 - hy}{y^2} M_{2,n} - hx \right\}.

Therefore, (2.8) follows from the two inequalities above immediately, and taking h = \frac{1}{y} \log\left(1 + \frac{xy}{M_{2,n}}\right) in (2.8) yields the desired result (2.9). The proof is completed.   □
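As in Theorem 2.1, the passage from (2.8) to (2.9) is a substitution check: with h = \frac{1}{y} \log\left(1 + \frac{xy}{M_{2,n}}\right) we have e^{hy} - 1 = \frac{xy}{M_{2,n}}, so

\frac{e^{hy} - 1 - hy}{y^2} M_{2,n} - hx = \frac{x}{y} - h\left(\frac{M_{2,n}}{y} + x\right) \le \frac{x}{y} - hx = \frac{x}{y} - \frac{x}{y} \log\left(1 + \frac{xy}{M_{2,n}}\right),

which is the exponent in (2.9).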

Theorem 2.4. Assume that EX_i = 0 and |X_i| \le C for each i \ge 1, where C is a positive constant. Denote B_n = \sum_{i=1}^{n} E X_i^2 for each n \ge 1. Then for any x > 0, there exists a positive constant M such that

P(S_n \ge x) \le M \exp\left\{ -\frac{x}{2C} \operatorname{arcsinh}\left(\frac{Cx}{2B_n}\right) \right\}
(2.10)

and

P(|S_n| \ge x) \le 2M \exp\left\{ -\frac{x}{2C} \operatorname{arcsinh}\left(\frac{Cx}{2B_n}\right) \right\}.
(2.11)

Proof. It is easily seen that

e^x - x - 1 \le e^x + e^{-x} - 2 = 2(\cosh x - 1) = 2(\cosh|x| - 1), \qquad x \in \mathbb{R},

and

2(\cosh x - 1) \le x \sinh x, \qquad x \ge 0.

Thus, for all α > 0 and i = 1, 2, ..., n, we can get that

E(e^{\alpha X_i} - 1) = E(e^{\alpha X_i} - \alpha X_i - 1) \le 2 E(\cosh \alpha X_i - 1) = 2 E(\cosh \alpha |X_i| - 1)
\le E(\alpha |X_i| \sinh \alpha |X_i|) = E\left[ \alpha^2 X_i^2\, \frac{\sinh \alpha |X_i|}{\alpha |X_i|} \right] \le \frac{\alpha E X_i^2}{C} \sinh \alpha C.

The last inequality above follows from the fact that the function \frac{\sinh x}{x} is nondecreasing on the half-line (0, \infty).

Since x = (x - 1) + 1 \le e^{x-1} for all x \in \mathbb{R}, we have by Lemma 1.2 that

E \prod_{i=1}^{n} e^{\alpha X_i} \le M \prod_{i=1}^{n} E e^{\alpha X_i} \le M \prod_{i=1}^{n} \exp\left\{ E e^{\alpha X_i} - 1 \right\} \le M \exp\left\{ \frac{\alpha B_n \sinh \alpha C}{C} \right\},

where M is a positive constant. Therefore, for all \alpha > 0 and x > 0, we have

P(S_n \ge x) \le e^{-\alpha x} E e^{\alpha S_n} \le M \exp\left\{ \frac{\alpha B_n \sinh \alpha C}{C} - \alpha x \right\}.
(2.12)

Taking \alpha = \frac{1}{C} \operatorname{arcsinh}\left(\frac{Cx}{2B_n}\right) in the right-hand side of (2.12), we see that \frac{B_n \sinh \alpha C}{C} = \frac{x}{2}, and (2.10) follows.
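Explicitly, since \sinh and \operatorname{arcsinh} are mutually inverse, this choice of \alpha gives \sinh \alpha C = \frac{Cx}{2B_n}, so the exponent in (2.12) becomes

\frac{\alpha B_n \sinh \alpha C}{C} - \alpha x = \frac{\alpha x}{2} - \alpha x = -\frac{x}{2C} \operatorname{arcsinh}\left(\frac{Cx}{2B_n}\right),

which gives (2.10).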

Since \{-X_n, n \ge 1\} is still a sequence of END random variables by Lemma 1.1, we have by (2.10) that

P(-S_n \ge x) \le M \exp\left\{ -\frac{x}{2C} \operatorname{arcsinh}\left(\frac{Cx}{2B_n}\right) \right\}.
(2.13)

Hence, (2.11) follows from (2.10) and (2.13) immediately. This completes the proof of the theorem.

Theorem 2.5. Assume that EX_i = 0 and |X_i| \le C for each i \ge 1, where C is a positive constant. If B_n = \sum_{i=1}^{n} E X_i^2 = O(n), then n^{-1} S_n \to 0 completely and, in consequence, n^{-1} S_n \to 0 a.s.

Proof. For any ε > 0, we have by Theorem 2.4 that

P(|S_n| \ge n\varepsilon) \le 2M \exp\left\{ -\frac{n\varepsilon}{2C} \operatorname{arcsinh}\left(\frac{Cn\varepsilon}{2B_n}\right) \right\} \le 2M \exp\{-nD\},

where D is a positive constant. Therefore,

\sum_{n=1}^{\infty} P(|S_n| \ge n\varepsilon) < \infty,

which implies that n^{-1} S_n \to 0 completely and, in consequence, n^{-1} S_n \to 0 a.s. by the Borel-Cantelli lemma. The proof is completed.   □
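The exponential rate in this proof is easy to observe numerically. The following is a minimal Monte Carlo sketch (not part of the paper): it assumes independent uniform variables, which are END with M = 1, and the tolerance \varepsilon, the bound C, and the sample sizes are arbitrary illustrative choices. The empirical tail probability should stay below the bound from Theorem 2.4, and both decay geometrically in n.

import numpy as np

rng = np.random.default_rng(0)
eps, C = 0.25, 1.0             # tolerance and the a.s. bound |X_i| <= C (arbitrary)
n_rep = 20000                  # Monte Carlo replications

for n in [50, 100, 200, 400]:
    # X_i uniform on [-C, C]: centered, bounded, independent (hence END with M = 1)
    S_n = rng.uniform(-C, C, size=(n_rep, n)).sum(axis=1)
    p_hat = np.mean(np.abs(S_n) >= n * eps)        # empirical P(|S_n| >= n*eps)
    B_n = n * C**2 / 3.0                           # B_n = sum of Var X_i = n/3
    exponent = -(n * eps) / (2 * C) * np.arcsinh(C * n * eps / (2 * B_n))
    bound = 2 * np.exp(exponent)                   # bound (2.11) with M = 1
    print(f"n={n:4d}  empirical={p_hat:.2e}  bound={bound:.2e}")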

Theorem 2.6. Assume that E X_n^2 < \infty and E S_n \le 0 for each n \ge 1. Denote s_n = E S_n^2. If there exists a nondecreasing sequence of positive numbers \{c_n, n \ge 1\} such that P(S_n \le c_n) = 1, then for any x > 0,

P(S_n \ge x) \le \exp\left\{ -\frac{x^2}{2(s_n + x c_n)} \left[ 1 + \frac{2}{3} \log\left(1 + \frac{x c_n}{s_n}\right) \right] \right\}.
(2.14)

In order to prove Theorem 2.6, the following lemma is useful.

Lemma 2.1 (cf. Shao [6]). For any x \ge 0,

\log(1 + x) \ge \frac{x}{1+x} + \frac{x^2}{2(1+x)^2} \left[ 1 + \frac{2}{3} \log(1 + x) \right].

Proof of Theorem 2.6. Noting that (e^x - 1 - x)/x^2 is nondecreasing on the real line, for any h > 0 and n \ge 1, we have

E e^{h S_n} = 1 + h E S_n + E\left[ \frac{e^{h S_n} - 1 - h S_n}{(h S_n)^2}\, (h S_n)^2 \right]
\le 1 + E\left[ \frac{e^{h c_n} - 1 - h c_n}{(h c_n)^2}\, (h S_n)^2 \right]
= 1 + \frac{e^{h c_n} - 1 - h c_n}{c_n^2}\, s_n
\le \exp\left\{ \frac{e^{h c_n} - 1 - h c_n}{c_n^2}\, s_n \right\}.

Hence,

P(S_n \ge x) \le e^{-hx} E e^{h S_n} \le \exp\left\{ \frac{e^{h c_n} - 1 - h c_n}{c_n^2}\, s_n - hx \right\}.
(2.15)

Taking h = \frac{1}{c_n} \log\left(1 + \frac{x c_n}{s_n}\right) in the right-hand side of (2.15), we can obtain that

P(S_n \ge x) \le \exp\left\{ \frac{x}{c_n} - \frac{x}{c_n}\left(1 + \frac{s_n}{x c_n}\right) \log\left(1 + \frac{x c_n}{s_n}\right) \right\}.
(2.16)

By Lemma 2.1, we can get that

\frac{x}{c_n}\left(1 + \frac{s_n}{x c_n}\right) \log\left(1 + \frac{x c_n}{s_n}\right)
\ge \frac{x}{c_n}\left(1 + \frac{s_n}{x c_n}\right) \left\{ \frac{x c_n}{s_n + x c_n} + \frac{1}{2}\left(\frac{x c_n}{s_n + x c_n}\right)^2 \left[ 1 + \frac{2}{3} \log\left(1 + \frac{x c_n}{s_n}\right) \right] \right\}
= \frac{x}{c_n} + \frac{x^2}{2(s_n + x c_n)} \left[ 1 + \frac{2}{3} \log\left(1 + \frac{x c_n}{s_n}\right) \right].

The desired result (2.14) follows from the above inequality and (2.16) immediately.

3 Moment inequalities for END sequence

In this section, we present some moment inequalities, in particular the Rosenthal-type inequality, for END sequences by means of the probability inequalities obtained in Section 2. The proofs are also inspired by Asadian et al. [2]. The Rosenthal-type inequality will be applied to prove the asymptotic approximation of inverse moments for nonnegative END random variables in Section 4.

Theorem 3.1. Let 0 < t \le 1 and let g(x) be a nonnegative, even function that is nondecreasing on the half-line [0, \infty). Assume that g(0) = 0 and E g(X_i) < \infty for each i \ge 1. Then for every r > 0, there exists a positive constant M such that

E g(S_n) \le \sum_{i=1}^{n} E g(r X_i) + 2M e^r \int_{0}^{\infty} \left(1 + \frac{x^t}{r^{t-1} M_{t,n}}\right)^{-r} dg(x).
(3.1)

Proof. Taking y = x/r in Theorem 2.2, we have

P(|S_n| \ge x) \le \sum_{i=1}^{n} P\left(|X_i| \ge \frac{x}{r}\right) + 2M e^r \left(1 + \frac{x^t}{r^{t-1} M_{t,n}}\right)^{-r},

which implies that

\int_{0}^{\infty} P(|S_n| \ge x)\, dg(x) \le \sum_{i=1}^{n} \int_{0}^{\infty} P(r|X_i| \ge x)\, dg(x) + 2M e^r \int_{0}^{\infty} \left(1 + \frac{x^t}{r^{t-1} M_{t,n}}\right)^{-r} dg(x).

Therefore, the desired result (3.1) follows from the inequality above and Lemma 2.4 in Petrov [7] immediately. This completes the proof of the theorem.

Corollary 3.1. Let 0 < t \le 1, p \ge t, and E|X_i|^p < \infty for each i \ge 1. Then there exists a positive constant C(p, t) depending only on p and t such that

E|S_n|^p \le C(p, t) \left( M_{p,n} + M_{t,n}^{p/t} \right).
(3.2)

Proof. Taking g(x) = |x|^p, p \ge t, in Theorem 3.1, we can get that

E|S_n|^p \le r^p \sum_{i=1}^{n} E|X_i|^p + 2pM e^r \int_{0}^{\infty} x^{p-1} \left(1 + \frac{x^t}{r^{t-1} M_{t,n}}\right)^{-r} dx.
(3.3)

It is easy to check that

I \equiv \int_{0}^{\infty} x^{p-1} \left(1 + \frac{x^t}{r^{t-1} M_{t,n}}\right)^{-r} dx
= \int_{0}^{\infty} x^{p-1} \left(\frac{r^{t-1} M_{t,n}}{r^{t-1} M_{t,n} + x^t}\right)^{r} dx
= \int_{0}^{\infty} x^{p-1} \left(1 - \frac{x^t}{r^{t-1} M_{t,n} + x^t}\right)^{r} dx.

If we set y = \frac{x^t}{r^{t-1} M_{t,n} + x^t} in the last equality above, then we have for r > p/t that

I = \frac{r^{p - p/t} M_{t,n}^{p/t}}{t} \int_{0}^{1} y^{p/t - 1} (1 - y)^{r - p/t - 1}\, dy = \frac{r^{p - p/t} M_{t,n}^{p/t}}{t}\, B\left(\frac{p}{t}, r - \frac{p}{t}\right),

where

B(\alpha, \beta) = \int_{0}^{1} x^{\alpha - 1} (1 - x)^{\beta - 1}\, dx, \qquad \alpha, \beta > 0,

is the Beta function. Substituting I into (3.3) and choosing

C(p, t) = \max\left\{ r^p,\ \frac{2pM e^r r^{p - p/t}}{t}\, B\left(\frac{p}{t}, r - \frac{p}{t}\right) \right\},

we can obtain the desired result (3.2) immediately. The proof is completed.
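The Beta-function evaluation of I above is easy to sanity-check numerically. The following minimal sketch (not part of the paper) compares a direct quadrature of I with the closed form; the values of p, t, r, and M_{t,n} are arbitrary choices satisfying 0 < t \le 1 and r > p/t.

import numpy as np
from scipy.integrate import quad
from scipy.special import beta

p, t, r, M_tn = 3.0, 0.8, 6.0, 2.5      # arbitrary; note r = 6 > p/t = 3.75
a = r**(t - 1) * M_tn

# Direct quadrature of I = int_0^infty x^{p-1} (1 + x^t/(r^{t-1} M_{t,n}))^{-r} dx
I_num, _ = quad(lambda x: x**(p - 1) * (1.0 + x**t / a)**(-r), 0, np.inf)

# Closed form from the substitution y = x^t / (r^{t-1} M_{t,n} + x^t)
I_closed = r**(p - p / t) * M_tn**(p / t) / t * beta(p / t, r - p / t)

print(I_num, I_closed)                  # the two values agree up to quadrature error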

Similar to the proofs of Theorem 3.1 and Corollary 3.1, we can get the following Theorem 3.2 and Corollary 3.2 using Theorem 2.3. The details are omitted.

Theorem 3.2. Let EX_i = 0 for each i \ge 1, and assume that the conditions of Theorem 3.1 are satisfied. Then for every r > 0, there exists a positive constant M such that

E g(S_n) \le \sum_{i=1}^{n} E g(r X_i) + 2M e^r \int_{0}^{\infty} \left(1 + \frac{x^2}{r M_{2,n}}\right)^{-r} dg(x).
(3.4)

Corollary 3.2 (Rosenthal-type inequality). Let p \ge 2, EX_i = 0, and E|X_i|^p < \infty for each i \ge 1. Then there exists a positive constant C_p depending only on p such that

E|S_n|^p \le C_p \left[ \sum_{i=1}^{n} E|X_i|^p + \left( \sum_{i=1}^{n} E|X_i|^2 \right)^{p/2} \right].
(3.5)

4 Asymptotic approximation of inverse moment for nonnegative END random variables

Recently, Wu et al. [8] studied the asymptotic approximation of inverse moments for nonnegative independent random variables by means of the truncation method and Bernstein's inequality, and obtained the following result:

Theorem A. Let \{Z_n, n \ge 1\} be a sequence of independent, nonnegative, and non-degenerate random variables. Suppose that

(i) E Z_n^2 < \infty, n \ge 1;

(ii) E X_n \to \infty as n \to \infty, where

X_n = \sum_{i=1}^{n} Z_i / B_n, \qquad B_n^2 = \sum_{i=1}^{n} \operatorname{Var} Z_i;

(iii) there exists a finite positive constant C_1 not depending on n such that \sup_{1 \le i \le n} E Z_i / B_n \le C_1;

(iv) for some η > 0,

B_n^{-2} \sum_{i=1}^{n} E Z_i^2 I(Z_i > \eta B_n) \to 0, \qquad n \to \infty.
(4.1)

Then, for all real numbers a > 0 and α > 0,

E(a + X_n)^{-\alpha} \sim (a + E X_n)^{-\alpha}, \qquad n \to \infty.
(4.2)

Wang et al. [9] pointed out that condition (iii) in Theorem A can be removed, and extended the result for independent random variables to the case of NOD random variables. Shi et al. [10] obtained (4.2) for B_n = 1 and pointed out that the existence of finite second moments is not required. Sung [11] studied the asymptotic approximation of inverse moments for nonnegative random variables satisfying a Rosenthal-type inequality. For more details about the asymptotic approximation of inverse moments, one can refer to Garcia and Palacios [12], Kaluszka and Okolewski [13], Hu et al. [14], and so on.

The main purpose of this section is to show that (4.2) holds under very mild conditions. Our results will extend and improve the results of Wu et al. [8], Wang et al. [9], and Sung [11].

Now, we state and prove the results of asymptotic approximation of inverse moments for nonnegative END random variables.

Theorem 4.1. Let \{Z_n, n \ge 1\} be a sequence of nonnegative END random variables and \{B_n, n \ge 1\} a sequence of positive constants. Suppose that

(i) E Z_n < \infty, n \ge 1;

(ii) \mu_n \equiv E X_n \to \infty as n \to \infty, where X_n = B_n^{-1} \sum_{k=1}^{n} Z_k;

(iii) there exists some b > 0 such that

\frac{\sum_{k=1}^{n} E Z_k I(Z_k > b B_n)}{\sum_{k=1}^{n} E Z_k} \to 0, \qquad n \to \infty.
(4.3)

Then, for all real numbers a > 0 and α > 0, (4.2) holds.

Proof. It is easily seen that f(x) = (a + x)^{-\alpha} is a convex function of x on [0, \infty); hence, by Jensen's inequality, we have

E(a + X_n)^{-\alpha} \ge (a + E X_n)^{-\alpha},
(4.4)

which implies that

\liminf_{n \to \infty}\, (a + E X_n)^{\alpha} E(a + X_n)^{-\alpha} \ge 1.
(4.5)

To prove (4.2), it is enough to show that

\limsup_{n \to \infty}\, (a + E X_n)^{\alpha} E(a + X_n)^{-\alpha} \le 1.
(4.6)

In order to prove (4.6), we need only show that for all \delta \in (0, 1),

\limsup_{n \to \infty}\, (a + E X_n)^{\alpha} E(a + X_n)^{-\alpha} \le (1 - \delta)^{-\alpha}.
(4.7)

By (iii), we can see that for all \delta \in (0, 1), there exists n(\delta) > 0 such that

\sum_{k=1}^{n} E Z_k I(Z_k > b B_n) \le \frac{\delta}{4} \sum_{k=1}^{n} E Z_k, \qquad n \ge n(\delta).
(4.8)

Let

U_n = B_n^{-1} \sum_{k=1}^{n} \left[ Z_k I(Z_k \le b B_n) + b B_n I(Z_k > b B_n) \right]

and write

E(a + X_n)^{-\alpha} = E(a + X_n)^{-\alpha} I(U_n \ge \mu_n - \delta \mu_n) + E(a + X_n)^{-\alpha} I(U_n < \mu_n - \delta \mu_n) \equiv Q_1 + Q_2.
(4.9)

For Q_1, since X_n \ge U_n, we have

Q_1 \le E(a + X_n)^{-\alpha} I(X_n \ge \mu_n - \delta \mu_n) \le (a + \mu_n - \delta \mu_n)^{-\alpha}.
(4.10)

By (4.8), we have for n \ge n(\delta) that

\mu_n - E U_n = B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k > b B_n) - B_n^{-1} \sum_{k=1}^{n} b B_n E I(Z_k > b B_n)
\le B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k > b B_n) + B_n^{-1} \sum_{k=1}^{n} b B_n E I(Z_k > b B_n)
\le B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k > b B_n) + B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k > b B_n)
= 2 B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k > b B_n) \le \frac{\delta \mu_n}{2}.
(4.11)

For each n \ge 1, it is easy to see that \{Z_k I(Z_k \le b B_n) + b B_n I(Z_k > b B_n), 1 \le k \le n\} are END random variables by Lemma 1.1. Therefore, by (4.11), Markov's inequality, Corollary 3.2, and the C_r inequality, for any p > 2 and n \ge n(\delta),

Q_2 \le a^{-\alpha} P(U_n < \mu_n - \delta \mu_n) = a^{-\alpha} P\big(E U_n - U_n > \delta \mu_n - (\mu_n - E U_n)\big)
\le a^{-\alpha} P\left(E U_n - U_n > \frac{\delta \mu_n}{2}\right) \le a^{-\alpha} P\left(|U_n - E U_n| > \frac{\delta \mu_n}{2}\right)
\le C \mu_n^{-p} E|U_n - E U_n|^p
\le C \mu_n^{-p} \left[ B_n^{-2} \sum_{k=1}^{n} E Z_k^2 I(Z_k \le b B_n) + B_n^{-2} \sum_{k=1}^{n} b^2 B_n^2 E I(Z_k > b B_n) \right]^{p/2}
\quad + C \mu_n^{-p} \left[ B_n^{-p} \sum_{k=1}^{n} E Z_k^p I(Z_k \le b B_n) + B_n^{-p} \sum_{k=1}^{n} b^p B_n^p E I(Z_k > b B_n) \right]
\le C \mu_n^{-p} \left[ B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k \le b B_n) + B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k > b B_n) \right]^{p/2}
\quad + C \mu_n^{-p} \left[ B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k \le b B_n) + B_n^{-1} \sum_{k=1}^{n} E Z_k I(Z_k > b B_n) \right]
= C \mu_n^{-p} \mu_n^{p/2} + C \mu_n^{-p} \mu_n = C \mu_n^{-p/2} + C \mu_n^{1-p}.
(4.12)

Taking p > \max\{2, 2\alpha, \alpha + 1\}, we have by (4.9), (4.10), and (4.12) that

\limsup_{n \to \infty}\, (a + \mu_n)^{\alpha} E(a + X_n)^{-\alpha} \le \limsup_{n \to \infty} \frac{(a + \mu_n)^{\alpha}}{(a + \mu_n - \delta \mu_n)^{\alpha}} + \limsup_{n \to \infty}\, (a + \mu_n)^{\alpha} \left( C \mu_n^{-p/2} + C \mu_n^{1-p} \right) = (1 - \delta)^{-\alpha},

which implies (4.7). This completes the proof of the theorem.

Remark 4.1. Theorem 4.1 generalizes and improves the corresponding results of Wu et al. [8], Wang et al. [9], and Sung [11]. First, Theorem 4.1 only requires E Z_n < \infty, n \ge 1, which is weaker than the condition E Z_n^2 < \infty, n \ge 1, in the cited references. Second, \{B_n, n \ge 1\} is an arbitrary sequence of positive constants in Theorem 4.1, while B_n^2 = \sum_{i=1}^{n} \operatorname{Var} Z_i in the cited references; taking B_n \equiv 1 yields the asymptotic approximation of inverse moments for the partial sums of nonnegative END random variables. Third, (4.3) is weaker than (4.1). Indeed, by condition (4.1), we can see that

B_n^{-1} \sum_{i=1}^{n} E Z_i I(Z_i > \eta B_n) \le \eta^{-1} B_n^{-2} \sum_{i=1}^{n} E Z_i^2 I(Z_i > \eta B_n) \to 0, \qquad n \to \infty,

which implies that

\frac{\sum_{i=1}^{n} E Z_i I(Z_i > \eta B_n)}{\sum_{i=1}^{n} E Z_i} = \frac{B_n^{-1} \sum_{i=1}^{n} E Z_i I(Z_i > \eta B_n)}{\mu_n} \to 0, \qquad n \to \infty,

since \mu_n \to \infty; that is, (4.3) holds.
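To close the section, the approximation (4.2) is easy to visualize by simulation. The following minimal sketch (not part of the paper) takes iid Bernoulli summands, which are independent and hence END with M = 1, and B_n \equiv 1, as Theorem 4.1 permits; condition (iii) then holds trivially with b = 1 because Z_k \le 1, and condition (ii) holds since \mu_n = nq \to \infty. The constants a, \alpha, and q are arbitrary choices. The printed ratio E(a + X_n)^{-\alpha} / (a + E X_n)^{-\alpha} should approach 1.

import numpy as np

rng = np.random.default_rng(2)
a, alpha, q = 1.0, 2.0, 0.3                    # arbitrary: a > 0, alpha > 0, Bernoulli mean
n_rep = 200000                                 # Monte Carlo replications

for n in [5, 20, 100, 500]:
    # With B_n = 1 and Z_i iid Bernoulli(q), X_n = Z_1 + ... + Z_n ~ Binomial(n, q)
    X_n = rng.binomial(n, q, size=n_rep)
    inv_moment = np.mean((a + X_n)**(-alpha))  # Monte Carlo E(a + X_n)^{-alpha}
    approx = (a + n * q)**(-alpha)             # (a + E X_n)^{-alpha}
    print(f"n={n:4d}  ratio={inv_moment / approx:.4f}")   # tends to 1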

References

  1. Fakoor V, Azarnoosh HA: Probability inequalities for sums of negatively dependent random variables. Pak J Stat 2005, 21(3):257-264.

  2. Asadian N, Fakoor V, Bozorgnia A: Rosenthal's type inequalities for negatively orthant dependent random variables. JIRSS 2006, 5(1-2):69-75.

  3. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann Stat 1983, 11(1):286-295. doi:10.1214/aos/1176346079

  4. Liu L: Precise large deviations for dependent random variables with heavy tails. Stat Probab Lett 2009, 79:1290-1298. doi:10.1016/j.spl.2009.02.001

  5. Liu L: Necessary and sufficient conditions for moderate deviations of dependent random variables with heavy tails. Sci China Ser A Math 2010, 53(6):1421-1434. doi:10.1007/s11425-010-4012-9

  6. Shao QM: A comparison theorem on moment inequalities between negatively associated and independent random variables. J Theoret Probab 2000, 13:343-356. doi:10.1023/A:1007849609234

  7. Petrov VV: Limit Theorems of Probability Theory: Sequences of Independent Random Variables. Clarendon Press, Oxford; 1995.

  8. Wu TJ, Shi XP, Miao BQ: Asymptotic approximation of inverse moments of nonnegative random variables. Stat Probab Lett 2009, 79:1366-1371. doi:10.1016/j.spl.2009.02.010

  9. Wang XJ, Hu SH, Yang WZ, Ling NX: Exponential inequalities and inverse moment for NOD sequence. Stat Probab Lett 2010, 80:452-461. doi:10.1016/j.spl.2009.11.023

  10. Shi XP, Wu YH, Liu Y: A note on asymptotic approximations of inverse moments of nonnegative random variables. Stat Probab Lett 2010, 80:1260-1264. doi:10.1016/j.spl.2010.04.004

  11. Sung SH: On inverse moments for a class of nonnegative random variables. J Inequal Appl 2010, 2010:13. Article ID 823767

  12. Garcia NL, Palacios JL: On inverse moments of nonnegative random variables. Stat Probab Lett 2001, 53:235-239. doi:10.1016/S0167-7152(01)00008-6

  13. Kaluszka M, Okolewski A: On Fatou-type lemma for monotone moments of weakly convergent random variables. Stat Probab Lett 2004, 66:45-50. doi:10.1016/j.spl.2003.10.009

  14. Hu SH, Chen GJ, Wang XJ, Chen EB: On inverse moments of nonnegative weakly convergent random variables. Acta Math Appl Sin 2007, 30:361-367. (in Chinese)


Acknowledgements

The author is most grateful to the Editor Andrei Volodin and an anonymous referee for the careful reading of the manuscript and valuable suggestions which helped in significantly improving an earlier version of this article.

The research was supported by the National Natural Science Foundation of China (11171001, 71071002), the Academic Innovation Team of Anhui University (KJTD001B), Provincial Natural Science Research Project of Anhui Colleges (KJ2010A005), Talents youth Fund of Anhui Province Universities (2010SQRL016ZD) and Youth Science Research Fund of Anhui University (2009QN011A).

Author information


Correspondence to Aiting Shen.

Additional information

Competing interests

The author declares that they have no competing interests.

Authors' contributions

Some probability inequalities and moment inequalities for extended negatively dependent (END) sequences are provided. The asymptotic approximation of inverse moments for nonnegative END sequences with finite first moments is obtained. The author read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Shen, A. Probability inequalities for END sequence and their applications. J Inequal Appl 2011, 98 (2011). https://doi.org/10.1186/1029-242X-2011-98

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/1029-242X-2011-98

Keywords