
Some probability inequalities for a class of random variables and their applications

Abstract

Some probability inequalities for a class of random variables are presented. As applications, we study complete convergence for such sequences. Our main results generalize the corresponding ones for negatively associated random variables and negatively orthant dependent random variables.

MSC:60E15, 60F15.

1 Introduction

Let $\{X_n, n \ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$. Exponential inequalities for the partial sums $\sum_{i=1}^{n} (X_i - EX_i)$ play an important role in various proofs of limit theorems. In particular, they provide a measure of the convergence rate for the strong law of large numbers. The main purpose of this paper is to present some probability inequalities for a class of random variables. As applications, we will give some complete convergence results for this class.

Firstly, we will recall the definitions of negatively orthant dependent random variables and acceptable random variables.

Definition 1.1 A finite collection of random variables $X_1, X_2, \ldots, X_n$ is said to be negatively orthant dependent (NOD, in short) if the two inequalities

$$P(X_1 > x_1, X_2 > x_2, \ldots, X_n > x_n) \le \prod_{i=1}^{n} P(X_i > x_i)$$
(1.1)

and

$$P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n) \le \prod_{i=1}^{n} P(X_i \le x_i)$$
(1.2)

hold for all real numbers $x_1, x_2, \ldots, x_n$. An infinite sequence $\{X_n, n \ge 1\}$ is said to be NOD if every finite subcollection is NOD.

The notion of NOD random variables was introduced by Lehmann [1] and developed in Joag-Dev and Proschan [2]. Obviously, independent random variables are NOD. Joag-Dev and Proschan [2] pointed out that negatively associated (NA, in short) random variables are NOD, but that random variables satisfying (1.1) alone (negatively upper orthant dependent, NUOD) or (1.2) alone (negatively lower orthant dependent, NLOD) need not be NA. They also presented an example in which $X = (X_1, X_2, X_3, X_4)$ is NOD but not NA. Hence NOD is strictly weaker than NA.

Recently, Giuliano Antonini et al. [3] introduced the following notion of acceptability.

Definition 1.2 We say that a finite collection of random variables $X_1, X_2, \ldots, X_n$ is acceptable if, for any real $\lambda$,

$$E \exp\Big( \lambda \sum_{i=1}^{n} X_i \Big) \le \prod_{i=1}^{n} E \exp(\lambda X_i).$$
(1.3)

An infinite sequence of random variables $\{X_n, n \ge 1\}$ is acceptable if every finite subcollection is acceptable.
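Independent random variables satisfy (1.3) with equality, since the moment generating function of a sum of independent variables factorizes. The following Python sketch (an illustration, not part of the paper's argument) checks this for independent normal variables, whose MGF $E e^{\lambda X} = e^{\mu \lambda + \sigma^2 \lambda^2 / 2}$ is available in closed form:

```python
import math

def normal_mgf(lam, mu=0.0, sigma=1.0):
    # Moment generating function of N(mu, sigma^2): E exp(lam * X)
    return math.exp(mu * lam + 0.5 * sigma**2 * lam**2)

# Three independent normals; the MGF of their sum factorizes, so
# (1.3) holds with equality -- independent sequences are acceptable.
params = [(0.0, 1.0), (1.0, 2.0), (-0.5, 0.5)]   # (mu_i, sigma_i)
mu_sum = sum(m for m, _ in params)
sd_sum = math.sqrt(sum(s**2 for _, s in params))  # sum of independent normals
for lam in (-2.0, -0.3, 0.7, 3.0):
    lhs = normal_mgf(lam, mu_sum, sd_sum)         # E exp(lam * sum X_i)
    rhs = math.prod(normal_mgf(lam, m, s) for m, s in params)
    assert math.isclose(lhs, rhs), (lam, lhs, rhs)
```

The same check applies to any family with closed-form MGFs; for dependent sequences, (1.3) is only an inequality.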

As mentioned in Giuliano Antonini et al. [3], a sequence of NOD random variables with a finite Laplace transform or finite moment generating function near zero (and hence also a sequence of negatively associated random variables with a finite Laplace transform) provides an example of acceptable random variables. For example, Xing et al. [4] considered a strictly stationary negatively associated sequence of random variables; by the remark above, such a sequence is acceptable. Hence, the model of acceptable random variables is more general than the models considered in the previous literature, and studying the limiting behavior of acceptable random variables is of interest.

The main purpose of this paper is to present some exponential probability inequalities for a sequence of acceptable random variables and to give some applications of these inequalities. For more details about exponential probability inequalities, one can refer to Wang et al. [5–7], Sung et al. [8], Sung [9], Xing et al. [4, 10], and so forth.

The paper is organized as follows. Exponential probability inequalities for a sequence of acceptable random variables are presented in Section 2, and complete convergence results are obtained in Section 3. Our results require only moment conditions, while the main results of Sung [9] require both moment conditions and identically distributed random variables.

Throughout the paper, let $\{X_n, n \ge 1\}$ be a sequence of acceptable random variables and denote $S_n = \sum_{i=1}^{n} X_i$ for each $n \ge 1$.

2 Probability inequalities for acceptable random variables

Theorem 2.1 Let $\{X_n, n \ge 1\}$ be a sequence of acceptable random variables and $\{g_n, n \ge 1\}$ a sequence of positive numbers with $G_n = \sum_{i=1}^{n} g_i$ for each $n \ge 1$. For fixed $n \ge 1$, if there exists a positive number $T$ such that

$$E e^{t X_k} \le e^{\frac{1}{2} g_k t^2}, \quad 0 \le t \le T,\ k = 1, 2, \ldots, n,$$
(2.1)

then

$$P(S_n \ge x) \le \begin{cases} e^{-\frac{x^2}{2 G_n}}, & 0 \le x \le G_n T, \\ e^{-\frac{T x}{2}}, & x \ge G_n T. \end{cases}$$
(2.2)

Proof For each $x$, by Markov's inequality, Definition 1.2 and (2.1), we see that

$$P(S_n \ge x) \le e^{-t x} E e^{t S_n} \le e^{-t x} \prod_{i=1}^{n} E e^{t X_i} \le e^{\frac{G_n t^2}{2} - t x}, \quad 0 \le t \le T,$$
(2.3)

which implies that

$$P(S_n \ge x) \le \inf_{0 < t \le T} e^{\frac{G_n t^2}{2} - t x} = e^{\inf_{0 < t \le T} \left( \frac{G_n t^2}{2} - t x \right)}.$$
(2.4)

For fixed $x \ge 0$, if $0 \le \frac{x}{G_n} \le T$, then

$$e^{\inf_{0 < t \le T} \left( \frac{G_n t^2}{2} - t x \right)} = e^{-\frac{x^2}{2 G_n}};$$
(2.5)

if $\frac{x}{G_n} \ge T$, then

$$e^{\inf_{0 < t \le T} \left( \frac{G_n t^2}{2} - t x \right)} = e^{\frac{G_n T^2}{2} - T x} \le e^{-\frac{T x}{2}}.$$
(2.6)

The desired result (2.2) follows from (2.4)–(2.6) immediately. □
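As a numerical sanity check (not part of the proof), take $X_1, \ldots, X_n$ i.i.d. standard normal: then $E e^{t X_k} = e^{t^2/2}$, so (2.1) holds with $g_k = 1$ for every $T > 0$, $G_n = n$, and $S_n \sim N(0, n)$ has an exact tail probability. The sketch below verifies (2.2) in both regimes:

```python
import math

def gauss_tail(x, var):
    # P(N(0, var) >= x), exact via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2 * var))

n, T = 10, 1.0            # g_k = 1 for N(0,1); any T > 0 works, and G_n = n
for x in [0.5, 3.0, n * T, 15.0, 25.0]:
    p = gauss_tail(x, n)  # S_n ~ N(0, n) for iid standard normals
    if x <= n * T:
        assert p <= math.exp(-x * x / (2 * n))   # first branch of (2.2)
    else:
        assert p <= math.exp(-T * x / 2)         # second branch of (2.2)
```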

Corollary 2.1 Let $\{X_n, n \ge 1\}$ be a sequence of acceptable random variables and $\{g_n, n \ge 1\}$ a sequence of positive numbers with $G_n = \sum_{i=1}^{n} g_i$ for each $n \ge 1$. For fixed $n \ge 1$, if there exists a positive number $T$ such that

$$E e^{t X_k} \le e^{\frac{1}{2} g_k t^2}, \quad |t| \le T,\ k = 1, 2, \ldots, n,$$
(2.7)

then

$$P(S_n \le -x) \le \begin{cases} e^{-\frac{x^2}{2 G_n}}, & 0 \le x \le G_n T, \\ e^{-\frac{T x}{2}}, & x \ge G_n T, \end{cases}$$
(2.8)

and

$$P(|S_n| \ge x) \le \begin{cases} 2 e^{-\frac{x^2}{2 G_n}}, & 0 \le x \le G_n T, \\ 2 e^{-\frac{T x}{2}}, & x \ge G_n T. \end{cases}$$
(2.9)

Proof It is easily seen that $\{-X_n, n \ge 1\}$ is still a sequence of acceptable random variables. By Theorem 2.1 applied to $\{-X_n, n \ge 1\}$, we see that

$$P(-S_n \ge x) \le \begin{cases} e^{-\frac{x^2}{2 G_n}}, & 0 \le x \le G_n T, \\ e^{-\frac{T x}{2}}, & x \ge G_n T, \end{cases}$$
(2.10)

which implies that (2.8) is valid. Finally, (2.9) follows from (2.2) and (2.8) immediately. □

Corollary 2.2 Let $\{X_n, n \ge 1\}$ be a sequence of acceptable random variables with $E X_k = 0$ and $E X_k^2 = \sigma_k^2 < \infty$ for each $k \ge 1$. Denote $B_n^2 = \sum_{k=1}^{n} \sigma_k^2$ for each $n \ge 1$. For fixed $n \ge 1$, if there exists a positive number $H$ such that

$$|E X_k^m| \le \frac{m!}{2} \sigma_k^2 H^{m-2}, \quad k = 1, 2, \ldots, n$$
(2.11)

for every integer $m \ge 2$, then

$$P(|S_n| \ge x) \le \begin{cases} 2 e^{-\frac{x^2}{4 B_n^2}}, & 0 \le x \le \frac{B_n^2}{H}, \\ 2 e^{-\frac{x}{4 H}}, & x \ge \frac{B_n^2}{H}. \end{cases}$$
(2.12)

Proof By (2.11) and $E X_k = 0$, we see that

$$E e^{t X_k} = 1 + \frac{t^2}{2} \sigma_k^2 + \frac{t^3}{6} E X_k^3 + \cdots \le 1 + \frac{t^2}{2} \sigma_k^2 \left( 1 + H |t| + H^2 t^2 + \cdots \right)$$

for $k = 1, 2, \ldots, n$. When $|t| \le \frac{1}{2H}$, it follows that

$$E e^{t X_k} \le 1 + \frac{t^2 \sigma_k^2}{2} \cdot \frac{1}{1 - H |t|} \le 1 + t^2 \sigma_k^2 \le e^{t^2 \sigma_k^2} = e^{\frac{1}{2} g_k t^2}, \quad k = 1, 2, \ldots, n,$$
(2.13)

where $g_k = 2 \sigma_k^2$. Take $G_n = \sum_{k=1}^{n} g_k = 2 B_n^2$ and $T = \frac{1}{2H}$. The conditions of Corollary 2.1 are then satisfied, and (2.12) follows from Corollary 2.1 immediately. □
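The two elementary estimates behind (2.13), namely $\frac{1}{1 - H|t|} \le 2$ for $|t| \le \frac{1}{2H}$ and $1 + x \le e^x$, can be checked on a grid. The following sketch does so for a few illustrative values of $H$ and $\sigma_k^2$ (the choice of values is ours, not the paper's):

```python
import math

# Grid check of the elementary chain in (2.13): for |t| <= 1/(2H),
#   1 + (t^2 s2 / 2) / (1 - H|t|)  <=  1 + t^2 s2  <=  exp(t^2 s2).
for H in (0.5, 1.0, 4.0):
    for s2 in (0.1, 1.0, 9.0):            # s2 plays the role of sigma_k^2
        for k in range(-100, 101):
            t = k / 100 * (1 / (2 * H))   # sweep the range |t| <= 1/(2H)
            lhs = 1 + (t * t * s2 / 2) / (1 - H * abs(t))
            mid = 1 + t * t * s2
            assert lhs <= mid + 1e-12 <= math.exp(t * t * s2) + 1e-12
```

At $|t| = \frac{1}{2H}$ the first inequality is an equality, which is why a small tolerance is used.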

3 Complete convergence for acceptable random variables

In this section, we present some complete convergence results for a sequence of acceptable random variables by using the probability inequalities of Section 2.

Theorem 3.1 Let $\{X_n, n \ge 1\}$ be a sequence of acceptable random variables with $E X_k = 0$ and $E X_k^2 \le \sigma_k^2 < \infty$ for each $k \ge 1$. Denote $B_n^2 = \sum_{i=1}^{n} \sigma_i^2$, $n \ge 1$. For each fixed $n \ge 1$, suppose that there exists a positive number $H$ such that (2.11) holds. If for any $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} \exp\left\{ -\frac{b_n^2 \varepsilon^2}{4 B_n^2} \right\} < \infty$$
(3.1)

and

$$\sum_{n=1}^{\infty} \exp\left\{ -\frac{b_n \varepsilon}{4 H} \right\} < \infty,$$
(3.2)

where $\{b_n, n \ge 1\}$ is a sequence of positive numbers, then $\frac{1}{b_n} \sum_{i=1}^{n} X_i \to 0$ completely as $n \to \infty$.

Proof By Corollary 2.2, we have for any $x \ge 0$,

$$P\left( \Big| \sum_{i=1}^{n} X_i \Big| \ge x \right) \le 2 \exp\left\{ -\frac{x^2}{4 B_n^2} \right\} + 2 \exp\left\{ -\frac{x}{4 H} \right\},$$

which implies that

$$\sum_{n=1}^{\infty} P\left( \Big| \frac{1}{b_n} \sum_{i=1}^{n} X_i \Big| \ge \varepsilon \right) \le 2 \sum_{n=1}^{\infty} \exp\left\{ -\frac{b_n^2 \varepsilon^2}{4 B_n^2} \right\} + 2 \sum_{n=1}^{\infty} \exp\left\{ -\frac{b_n \varepsilon}{4 H} \right\} < \infty.$$

This completes the proof of the theorem. □

It is easily seen that (3.2) holds if $b_n = n$. So we have the following corollary.

Corollary 3.1 Let $\{X_n, n \ge 1\}$ be a sequence of acceptable random variables with $E X_i = 0$ and $E X_i^2 \le \sigma_i^2 < \infty$ for each $i \ge 1$. Denote $B_n^2 = \sum_{i=1}^{n} \sigma_i^2$, $n \ge 1$. Suppose that conditions (2.11) and (3.1) hold with $b_n = n$. Then $\frac{1}{n} \sum_{i=1}^{n} X_i \to 0$ completely as $n \to \infty$.
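The remark preceding Corollary 3.1 rests on the fact that, with $b_n = n$, the series in (3.2) is geometric and therefore converges for every $\varepsilon > 0$. A short check with illustrative values of $\varepsilon$ and $H$ (any positive values work):

```python
import math

# With b_n = n, the series (3.2) is sum_n exp(-n*eps/(4H)) = sum_n r^n,
# a geometric series with ratio r = exp(-eps/(4H)) < 1.
eps, H = 0.5, 1.0                         # illustrative values
r = math.exp(-eps / (4 * H))
partial = sum(r**n for n in range(1, 200))
closed = r / (1 - r)                      # closed form of the geometric sum
assert math.isclose(partial, closed, rel_tol=1e-9)
```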

Theorem 3.2 Let $\{X_n, n \ge 1\}$ be a sequence of acceptable random variables with $E X_i = 0$ and $E X_i^2 \le \sigma_i^2 < \infty$ for each $i \ge 1$. Denote $B_n^2 = \sum_{i=1}^{n} \sigma_i^2$ and

$$c_n = \max\left\{ \frac{\operatorname{ess\,sup} |X_i|}{B_n^2},\ 1 \le i \le n \right\}, \quad n \ge 1.$$

If for any $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} \exp\left\{ -\frac{n \varepsilon^2}{2 c_n^2 B_n^2} \right\} < \infty,$$
(3.3)

then $\frac{1}{n} \sum_{i=1}^{n} X_i \to 0$ completely as $n \to \infty$.

Proof By Markov's inequality and Definition 1.2, for any $\varepsilon > 0$ and $t > 0$,

$$\begin{aligned} P\left( \Big| \sum_{i=1}^{n} X_i \Big| \ge \varepsilon \right) &= P\left( \sum_{i=1}^{n} X_i \ge \varepsilon \right) + P\left( \sum_{i=1}^{n} (-X_i) \ge \varepsilon \right) \\ &\le e^{-\frac{t \varepsilon}{B_n^2}} E \exp\left\{ \frac{t}{B_n^2} \sum_{i=1}^{n} X_i \right\} + e^{-\frac{t \varepsilon}{B_n^2}} E \exp\left\{ -\frac{t}{B_n^2} \sum_{i=1}^{n} X_i \right\} \\ &\le e^{-\frac{t \varepsilon}{B_n^2}} \left( \prod_{i=1}^{n} E e^{\frac{t X_i}{B_n^2}} + \prod_{i=1}^{n} E e^{-\frac{t X_i}{B_n^2}} \right) \le 2 \exp\left\{ -\frac{t \varepsilon}{B_n^2} + \frac{n t^2 c_n^2}{2} \right\}. \end{aligned}$$

Taking $t = \frac{\varepsilon}{n c_n^2 B_n^2}$ in the inequality above, we get

$$P\left( \Big| \sum_{i=1}^{n} X_i \Big| \ge \varepsilon \right) \le 2 \exp\left\{ -\frac{\varepsilon^2}{2 n c_n^2 B_n^2} \right\}.$$

It follows from the inequality above and (3.3) that

$$\sum_{n=1}^{\infty} P\left( \Big| \frac{1}{n} \sum_{i=1}^{n} X_i \Big| \ge \varepsilon \right) \le 2 \sum_{n=1}^{\infty} \exp\left\{ -\frac{n \varepsilon^2}{2 c_n^2 B_n^2} \right\} < \infty,$$

which completes the proof of the theorem. □

Hanson and Wright [12] obtained a bound on tail probabilities for quadratic forms in independent random variables using the following condition: for all $n \ge 1$ and all $x \ge 0$, there exist positive constants $M$ and $\gamma$ such that

$$P(|X_n| \ge x) \le M \int_x^{\infty} e^{-\gamma t^2} \, dt.$$
(3.4)
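For instance, a standard normal variable satisfies condition (3.4) with $M = \sqrt{2/\pi}$ and $\gamma = 1/2$, in fact with equality. The sketch below verifies this numerically (the simple trapezoidal integrator is only for illustration):

```python
import math

# A standard normal X satisfies (3.4) with M = sqrt(2/pi), gamma = 1/2:
#   P(|X| >= x) = sqrt(2/pi) * int_x^inf exp(-t^2/2) dt = erfc(x/sqrt(2)).
def tail_integral(x, gamma, upper=12.0, steps=120_000):
    # trapezoidal approximation of int_x^upper exp(-gamma t^2) dt;
    # the tail beyond `upper` is negligible at these parameter values
    h = (upper - x) / steps
    s = 0.5 * (math.exp(-gamma * x * x) + math.exp(-gamma * upper * upper))
    s += sum(math.exp(-gamma * (x + i * h) ** 2) for i in range(1, steps))
    return s * h

M, gamma = math.sqrt(2 / math.pi), 0.5
for x in (0.0, 0.5, 1.0, 2.5):
    exact_tail = math.erfc(x / math.sqrt(2))   # P(|X| >= x) for X ~ N(0,1)
    assert math.isclose(exact_tail, M * tail_integral(x, gamma), rel_tol=1e-4)
```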

Wright [11] proved that the bound established by Hanson and Wright [12] for independent symmetric random variables also holds when the random variables are not symmetric but condition (3.4) is valid. We will study complete convergence for a sequence of acceptable random variables under condition (3.4). The main result is as follows.

Theorem 3.3 Let $\{X_n, n \ge 1\}$ be a sequence of acceptable random variables satisfying condition (3.4) for all $n \ge 1$ and all $x \ge 0$, where $M$ and $\gamma$ are positive constants. Suppose that there exists a positive constant $C$, not depending on $n$, such that

$$E\left( \sum_{i=1}^{n} X_i \right)^2 \le C \sum_{i=1}^{n} E X_i^2.$$
(3.5)

Then for all $\beta > 1$, $\frac{1}{n^{\beta}} \sum_{i=1}^{n} X_i \to 0$ completely as $n \to \infty$.

Proof By Markov's inequality and assumption (3.5), we have for any $\varepsilon > 0$,

$$\sum_{n=1}^{\infty} P\left( \Big| \frac{1}{n^{\beta}} \sum_{i=1}^{n} X_i \Big| > \varepsilon \right) \le \sum_{n=1}^{\infty} \frac{1}{n^{2\beta} \varepsilon^2} E\left( \sum_{i=1}^{n} X_i \right)^2 \le C \sum_{n=1}^{\infty} \frac{1}{n^{2\beta} \varepsilon^2} \sum_{i=1}^{n} E X_i^2.$$

Next we estimate $E X_i^2$. By (3.4), we see that

$$E X_i^2 = \int_0^{\infty} 2 x P(|X_i| \ge x) \, dx \le \int_0^{\infty} 2 x \left( M \int_x^{\infty} e^{-\gamma t^2} \, dt \right) dx = M \int_0^{\infty} e^{-\gamma t^2} \left( \int_0^{t} 2 x \, dx \right) dt = M \int_0^{\infty} t^2 e^{-\gamma t^2} \, dt = \frac{M \sqrt{\pi}}{4 \gamma^{3/2}}.$$

Hence,

$$\sum_{n=1}^{\infty} P\left( \Big| \frac{1}{n^{\beta}} \sum_{i=1}^{n} X_i \Big| > \varepsilon \right) \le \frac{C M \sqrt{\pi}}{4 \gamma^{3/2} \varepsilon^2} \sum_{n=1}^{\infty} \frac{1}{n^{2\beta - 1}} < \infty.$$

This completes the proof of the theorem. □
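The closed-form value $\int_0^{\infty} t^2 e^{-\gamma t^2} \, dt = \frac{\sqrt{\pi}}{4 \gamma^{3/2}}$ used in the bound for $E X_i^2$ can be confirmed numerically; a quick sketch (the quadrature parameters are ours):

```python
import math

# Numerical check of the identity used to bound E X_i^2 above:
#   int_0^inf t^2 exp(-gamma t^2) dt = sqrt(pi) / (4 gamma^{3/2}).
def integrand(t, gamma):
    return t * t * math.exp(-gamma * t * t)

for gamma in (0.5, 1.0, 2.0):
    # trapezoidal rule on [0, 20/sqrt(gamma)]; the remaining tail is negligible
    upper, steps = 20 / math.sqrt(gamma), 200_000
    h = upper / steps
    approx = h * (sum(integrand(i * h, gamma) for i in range(1, steps))
                  + 0.5 * integrand(upper, gamma))   # integrand(0) = 0
    exact = math.sqrt(math.pi) / (4 * gamma ** 1.5)
    assert math.isclose(approx, exact, rel_tol=1e-6)
```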

References

  1. Lehmann E: Some concepts of dependence. Ann. Math. Stat. 1966, 37: 1137–1153.


  2. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann. Stat. 1983, 11(1):286–295. 10.1214/aos/1176346079


  3. Giuliano Antonini R, Kozachenko Y, Volodin A: Convergence of series of dependent φ-sub-Gaussian random variables. J. Math. Anal. Appl. 2008, 338: 1188–1203. 10.1016/j.jmaa.2007.05.073


  4. Xing GD, Yang SC, Liu AL, Wang X: A remark on the exponential inequality for negatively associated random variables. J. Korean Stat. Soc. 2009, 38: 53–57. 10.1016/j.jkss.2008.06.005


  5. Wang XJ, Hu SH, Yang WZ, Li XQ: Exponential inequalities and complete convergence for a LNQD sequence. J. Korean Stat. Soc. 2010, 39(4):555–564. 10.1016/j.jkss.2010.01.002


  6. Wang XJ, Hu SH, Yang WZ, Ling NX: Exponential inequalities and inverse moment for NOD sequence. Stat. Probab. Lett. 2010, 80(5–6):452–461. 10.1016/j.spl.2009.11.023


  7. Wang XJ, Hu SH, Shen AT, Yang WZ: An exponential inequality for a NOD sequence and a strong law of large numbers. Appl. Math. Lett. 2011, 24: 219–223. 10.1016/j.aml.2010.09.007


  8. Sung SH, Srisuradetchai P, Volodin A: A note on the exponential inequality for a class of dependent random variables. J. Korean Stat. Soc. 2011, 40: 109–144. 10.1016/j.jkss.2010.08.002


  9. Sung SH: On the exponential inequalities for negatively dependent random variables. J. Math. Anal. Appl. 2011, 381: 538–545. 10.1016/j.jmaa.2011.02.058


  10. Xing GD, Yang SC, Liu AL: Exponential inequalities for positively associated random variables and applications. J. Inequal. Appl. 2008., 2008: Article ID 385362


  11. Wright FT: A bound on tail probabilities for quadratic forms in independent random variables whose distributions are not necessarily symmetric. Ann. Probab. 1973, 1(6):1068–1070. 10.1214/aop/1176996815


  12. Hanson DL, Wright FT: A bound on tail probabilities for quadratic forms in independent random variables. Ann. Math. Stat. 1971, 42: 1079–1083. 10.1214/aoms/1177693335



Acknowledgements

The authors are most grateful to the editor and the anonymous referee for careful reading of the manuscript and valuable suggestions which helped in improving an earlier version of this paper. This work was supported by the National Natural Science Foundation of China (11201001, 11171001, 11126176 and 11226207), the Specialized Research Fund for the Doctoral Program of Higher Education of China (20093401120001), the Natural Science Foundation of Anhui Province (1308085QA03, 11040606M12, 1208085QA03), the 211 project of Anhui University and the Students Science Research Training Program of Anhui University (KYXL2012007).

Author information


Correspondence to Aiting Shen.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Shen, A., Wu, R. Some probability inequalities for a class of random variables and their applications. J Inequal Appl 2013, 57 (2013). https://doi.org/10.1186/1029-242X-2013-57
