
  • Research
  • Open Access

A note on the strong limit theorem for weighted sums of sequences of negatively dependent random variables

Journal of Inequalities and Applications 2012, 2012:233

https://doi.org/10.1186/1029-242X-2012-233

  • Received: 15 February 2012
  • Accepted: 10 August 2012
  • Published:

Abstract

In this paper, the strong limit theorem for weighted sums of sequences of negatively dependent random variables is further studied. As an application, the complete convergence theorem for sequences of negatively dependent random variables is obtained. Our results partly generalize and improve the corresponding results of Cai (Metrika 68:323-331, 2008) and Wang et al. (Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat., 2011, doi:10.1007/s13398-011-0048-0) to negatively dependent random variables under mild moment conditions.

MSC: 60F15.

Keywords

  • negatively dependent random variables
  • complete convergence
  • weighted sums

1 Introduction

Definition 1.1 The random variables $X_1, X_2, \ldots, X_n$ are said to be negatively dependent (ND) if
$$P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n) \le \prod_{j=1}^{n} P(X_j \le x_j)$$
and
$$P(X_1 > x_1, X_2 > x_2, \ldots, X_n > x_n) \le \prod_{j=1}^{n} P(X_j > x_j),$$

for all $x_1, x_2, \ldots, x_n \in \mathbb{R}$. An infinite sequence of random variables $\{X_i; i \ge 1\}$ is said to be ND if every finite subcollection $X_1, X_2, \ldots, X_n$ is ND.

An array of random variables $\{X_{ni}; i \ge 1, n \ge 1\}$ is called an array of rowwise ND random variables if, for every $n \ge 1$, $\{X_{ni}; i \ge 1\}$ is a sequence of ND random variables.

Definition 1.2 The random variables $X_1, X_2, \ldots, X_n$, $n \ge 2$, are said to be negatively associated (NA) if for every pair of disjoint subsets $A_1$ and $A_2$ of $\{1, 2, \ldots, n\}$,
$$\operatorname{cov}\bigl(f_1(X_i; i \in A_1),\, f_2(X_j; j \in A_2)\bigr) \le 0,$$

whenever $f_1$ and $f_2$ are increasing in every variable (or decreasing in every variable) and the covariance exists. An infinite family $\{X_i; i \ge 1\}$ of random variables is said to be NA if every finite subfamily is NA.

The concept of ND random variables was introduced by Ebrahimi and Ghosh [1], and the concept of NA random variables was introduced by Joag-Dev and Proschan [2]. Obviously, independent random variables are ND. Joag-Dev and Proschan [2] pointed out that NA random variables are ND. They also presented an example of a random vector $X = (X_1, X_2, X_3, X_4)$ that is ND but not NA, so ND is strictly weaker than NA. Because of their wide applications, ND random variables have received increasing attention, and a large number of limit theorems for them have been established by many authors; we refer to [2-12], among others. Hence, extending the limit properties of independent or NA random variables to the case of ND random variables is highly desirable and of considerable significance in theory and applications.

As Bai and Cheng [13] remarked, many useful linear statistics based on a random sample are weighted sums of independent identically distributed (i.i.d.) random variables. Examples include least-squares estimators, nonparametric regression function estimators, jackknife estimates and so on. In this respect, strong laws for such weighted sums have been studied by many authors and have found applications in mathematical statistics. In the case of independence, Bai and Cheng [13] proved the following strong laws of large numbers for weighted sums.

Theorem 1.1 (Bai and Cheng [13])

Let $\{X, X_n; n \ge 1\}$ be a sequence of i.i.d. random variables with $EX_n = 0$. Suppose that $0 < \alpha, \beta < \infty$, $1 \le p < 2$ and $1/p = 1/\alpha + 1/\beta$, and let $\{a_{ni}; 1 \le i \le n, n \ge 1\}$ be an array of real constants such that
$$A_{\alpha} = \limsup_{n \to \infty} A_{\alpha, n} < \infty, \qquad A_{\alpha, n}^{\alpha} = n^{-1}\sum_{i=1}^{n} |a_{ni}|^{\alpha}. \tag{1.1}$$
If $E|X_n|^{\beta} < \infty$, then
$$\lim_{n \to \infty} n^{-1/p}\sum_{i=1}^{n} a_{ni} X_i = 0 \quad \text{a.s.} \tag{1.2}$$
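
For instance (a special case noted here only for orientation), taking $a_{ni} \equiv 1$ gives $A_{\alpha} = 1$ in (1.1) for every $\alpha > 0$; hence for any $\beta > p$ one may choose $\alpha$ from $1/\alpha = 1/p - 1/\beta$, and Theorem 1.1 yields a Marcinkiewicz-Zygmund-type law:
$$n^{-1/p}\sum_{i=1}^{n} X_i \to 0 \quad \text{a.s. whenever } EX_n = 0,\ E|X_n|^{\beta} < \infty,\ 1 \le p < 2.$$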

Theorem 1.2 (Bai and Cheng [13])

Let $\{X, X_n; n \ge 1\}$ be a sequence of i.i.d. random variables with
$$E\{\exp(h|X|^{\gamma})\} < \infty \quad \text{for some } h > 0 \text{ and some } \gamma > 0, \tag{1.3}$$
and let $\{a_{ni}; 1 \le i \le n, n \ge 1\}$ be an array of real constants such that (1.1) holds for some $\alpha \in (0, 2)$. Then for $0 < \alpha \le 1$ and $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$,
$$\sum_{i=1}^{n} a_{ni} X_i / b_n \to 0 \quad \text{a.s.} \tag{1.4}$$
Moreover, for $1 < \alpha < 2$, $b_n = n^{1/\alpha}(\log n)^{1/\gamma + \delta}$ and $EX_n = 0$,
$$\sum_{i=1}^{n} a_{ni} X_i / b_n \to 0 \quad \text{a.s.}, \tag{1.5}$$

where $\delta = \gamma(\alpha - 1)/[\alpha(1 + \gamma)]$.

Wu [10] generalized and improved Theorem 1.1 above to ND random variables and removed the identically distributed assumption, as follows.

Theorem 1.3 (Wu [10])

Let $\{X_n; n \ge 1\}$ be a sequence of ND random variables which is stochastically dominated by a random variable $X$. Suppose that $0 < \alpha, \beta < \infty$, $0 < p < 2$ and $1/p = 1/\alpha + 1/\beta$. If $\beta > 1$, further assume that $EX_n = 0$. Let $\{a_{ni}; 1 \le i \le n, n \ge 1\}$ be an array of real constants such that
$$\sum_{i=1}^{n} |a_{ni}|^{\alpha} \le n. \tag{1.6}$$
If $E|X|^{\beta} < \infty$, then
$$\lim_{n \to \infty} n^{-1/p}\sum_{i=1}^{n} a_{ni} X_i = 0 \quad \text{a.s.} \tag{1.7}$$

Recently, Cai [14] and Wang et al. [12] have studied the complete convergence for NA random variables and for arrays of rowwise ND random variables, respectively, under exponential moment conditions. Their results generalize and improve Theorem 1.2 above to NA and ND random variables. Inspired by Cai [14], Wang et al. [12] and the other papers mentioned above, we investigate the limit behavior of ND random variables and obtain some complete convergence results, using methods different from those of Cai [14] and Wang et al. [12].

The main purpose of this paper is to further study the complete convergence for ND random variables and arrays of rowwise ND random variables under weaker moment conditions. As applications, the complete convergence for linear statistics of ND random variables and of arrays of rowwise ND random variables is obtained without the assumption of identical distribution. The results obtained not only generalize Theorem 1.2 above to the case of ND sequences and arrays of rowwise ND random variables, but also partly improve the corresponding results of Cai [14] and Wang et al. [12].

Throughout this paper, $C$ will denote a positive constant whose value may change from one appearance to the next, and $a_n = O(b_n)$ will mean $a_n \le C b_n$.

We will use the following concept in this paper. Let $\{X_n; n \ge 1\}$ be a sequence of random variables, and let $X$ be a nonnegative random variable. If there exists a constant $C$ ($0 < C < \infty$) such that
$$P(|X_n| \ge t) \le C P(X \ge t) \tag{1.8}$$

for all $t \ge 0$ and $n \ge 1$, then $\{X_n; n \ge 1\}$ is said to be stochastically dominated by $X$.
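
For instance, if the $X_n$ are identically distributed (in particular, i.i.d.), then (1.8) holds with $X = |X_1|$ and $C = 1$, since
$$P(|X_n| \ge t) = P(|X_1| \ge t) = P(X \ge t), \quad t \ge 0.$$
Stochastic domination requires neither independence nor identical distributions; it only asks that all the tails $P(|X_n| \ge t)$ be controlled by a single tail $P(X \ge t)$ up to the constant $C$.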

2 Main results and proofs

We now state and prove the main results of this paper.

Theorem 2.1 Let $\{X_n; n \ge 1\}$ be a sequence of ND random variables which is stochastically dominated by a random variable $X$. Let $T_n = \sum_{i=1}^{n} a_{ni} X_i$, $n \ge 1$, where the weights $\{a_{ni}; 1 \le i \le n, n \ge 1\}$ form a triangular array of real constants with $a_{ni} = 0$ for $i > n$. Let
$$A_{\beta} = \limsup_{n \to \infty} A_{\beta, n} < \infty, \qquad A_{\beta, n}^{\beta} = n^{-1}\sum_{i=1}^{n} |a_{ni}|^{\beta}, \tag{2.1}$$
where $\beta = \max(\alpha, \gamma)$ for some $0 < \alpha \le 2$ and $\gamma > 0$ with $\alpha \ne \gamma$. Assume that $EX_n = 0$ if $1 < \alpha \le 2$, and that $E|X|^{\beta} < \infty$. Then
$$\sum_{n=1}^{\infty} n^{-1} P(|T_n| > \varepsilon b_n) < \infty \quad \text{for all } \varepsilon > 0, \tag{2.2}$$

where $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$.
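
For instance (a special case of Theorem 2.1 recorded here only as an illustration), take $\alpha = 1$, $\gamma = 2$ and $a_{ni} \equiv 1$. Then $\beta = 2$, (2.1) holds with $A_{\beta} = 1$, no centering is required since $\alpha = 1$, and $b_n = n(\log n)^{1/2}$, so (2.2) reads: for every ND sequence $\{X_n; n \ge 1\}$ stochastically dominated by $X$ with $E|X|^{2} < \infty$,
$$\sum_{n=1}^{\infty} n^{-1} P\Bigl(\Bigl|\sum_{i=1}^{n} X_i\Bigr| > \varepsilon\, n(\log n)^{1/2}\Bigr) < \infty \quad \text{for all } \varepsilon > 0.$$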

In order to prove our results, we need the following lemmas.

Lemma 2.1 (Bozorgnia et al. [15])

Let $X_1, X_2, \ldots, X_n$ be ND random variables and let $f_1, f_2, \ldots, f_n$ all be nondecreasing (or all nonincreasing) functions. Then the random variables $f_1(X_1), f_2(X_2), \ldots, f_n(X_n)$ are ND.

Lemma 2.2 (Asadian et al. [4])

Let $\{X_n; n \ge 1\}$ be a sequence of ND random variables with $EX_n = 0$ and $E|X_n|^{p} < \infty$ for some $p \ge 2$ and all $n \ge 1$. Then there exists a positive constant $C = C(p)$ depending only on $p$ such that for all $n \ge 1$,
$$E\Biggl(\Biggl|\sum_{i=1}^{n} X_i\Biggr|^{p}\Biggr) \le C\Biggl[\sum_{i=1}^{n} E|X_i|^{p} + \Bigl(\sum_{i=1}^{n} E X_i^{2}\Bigr)^{p/2}\Biggr]. \tag{2.3}$$
Lemma 2.3 Let $X$ be a random variable and let $\{a_{ni}; 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying (2.1), and set $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$. Then
$$\sum_{n=1}^{\infty} n^{-1}\sum_{i=1}^{n} P(|a_{ni} X| > b_n) \le C E|X|^{\beta}. \tag{2.4}$$

The proof is similar to that of Lemma 2.3 in Sung [16], so we omit it.
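
As a quick plausibility check of (2.4) (a rough sketch, not Sung's argument), Markov's inequality already gives the stated order. Assuming $\sum_{i=1}^{n} |a_{ni}|^{\beta} \le C n$ for all $n \ge 1$ (which follows from (2.1) after enlarging $C$) and interpreting $\log n$ as $\ln(\max(n, \mathrm{e}))$,
$$\sum_{n=1}^{\infty} n^{-1}\sum_{i=1}^{n} P(|a_{ni} X| > b_n) \le \sum_{n=1}^{\infty} n^{-1} b_n^{-\beta}\sum_{i=1}^{n} |a_{ni}|^{\beta} E|X|^{\beta} \le C E|X|^{\beta}\sum_{n=1}^{\infty} n^{-\beta/\alpha}(\log n)^{-\beta/\gamma},$$
and the last series converges: for $\beta = \gamma > \alpha$ it is $\sum_{n} n^{-\gamma/\alpha}(\log n)^{-1}$ with $\gamma/\alpha > 1$, while for $\beta = \alpha > \gamma$ it is $\sum_{n} n^{-1}(\log n)^{-\alpha/\gamma}$ with $\alpha/\gamma > 1$.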

Lemma 2.4 Let $\{X_n; n \ge 1\}$ be a sequence of random variables which is stochastically dominated by a random variable $X$. Then for any $u > 0$, $t > 0$ and $n \ge 1$, the following two statements hold:
$$E|X_n|^{u} I(|X_n| \le t) \le C\bigl[E|X|^{u} I(|X| \le t) + t^{u} P(|X| > t)\bigr], \tag{2.5}$$
$$E|X_n|^{u} I(|X_n| > t) \le C E|X|^{u} I(|X| > t). \tag{2.6}$$
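
Relations (2.5) and (2.6) are the standard consequences of stochastic domination commonly used in this line of work; for the reader's convenience, here is a brief sketch of one way to obtain (2.5), the argument for (2.6) being analogous. For $u, t > 0$,
$$\begin{aligned} E|X_n|^{u} I(|X_n| \le t) &= \int_{0}^{t^{u}} P\bigl(|X_n|^{u} I(|X_n| \le t) \ge s\bigr)\, ds \le \int_{0}^{t^{u}} P\bigl(|X_n| \ge s^{1/u}\bigr)\, ds \\ &\le C\int_{0}^{t^{u}} P\bigl(|X| \ge s^{1/u}\bigr)\, ds = C\, E\bigl[\min(|X|, t)\bigr]^{u} = C\bigl[E|X|^{u} I(|X| \le t) + t^{u} P(|X| > t)\bigr]. \end{aligned}$$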
Proof of Theorem 2.1 Without loss of generality, assume that $a_{ni} \ge 0$ for all $1 \le i \le n$, $n \ge 1$. Define
$$X_i^{(n)} = a_{ni} X_i I(|a_{ni} X_i| \le b_n) + b_n I(a_{ni} X_i > b_n) - b_n I(a_{ni} X_i < -b_n), \qquad T_j^{(n)} = \sum_{i=1}^{j} \bigl(X_i^{(n)} - E X_i^{(n)}\bigr), \quad j \ge 1.$$
By Lemma 2.1, we can see that for each fixed $n \ge 1$, $\{X_i^{(n)}; i \ge 1\}$ is still a sequence of ND random variables. It is easy to check that for any $\varepsilon > 0$,
$$\Bigl(\Bigl|\sum_{i=1}^{n} a_{ni} X_i\Bigr| > \varepsilon b_n\Bigr) \subset \Bigl(\max_{1 \le i \le n} |a_{ni} X_i| > b_n\Bigr) \cup \Bigl(\Bigl|\sum_{i=1}^{n} X_i^{(n)}\Bigr| > \varepsilon b_n\Bigr),$$
which implies that
$$\begin{aligned} P(|T_n| > \varepsilon b_n) &\le P\Bigl(\max_{1 \le i \le n} |a_{ni} X_i| > b_n\Bigr) + P\Bigl(\Bigl|T_n^{(n)} + \sum_{i=1}^{n} E X_i^{(n)}\Bigr| > \varepsilon b_n\Bigr) \\ &\le \sum_{i=1}^{n} P(|a_{ni} X_i| > b_n) + P\Bigl(\bigl|T_n^{(n)}\bigr| > \varepsilon b_n - \Bigl|\sum_{i=1}^{n} E X_i^{(n)}\Bigr|\Bigr). \end{aligned} \tag{2.7}$$
Firstly, we will show that
$$b_n^{-1}\Bigl|\sum_{i=1}^{n} E X_i^{(n)}\Bigr| \to 0, \quad \text{as } n \to \infty. \tag{2.8}$$
If $\gamma > \alpha$, then by $\sum_{i=1}^{n} |a_{ni}|^{\beta} = \sum_{i=1}^{n} |a_{ni}|^{\gamma} \le C n$ and Lyapunov's inequality,
$$\frac{1}{n}\sum_{i=1}^{n} |a_{ni}|^{\alpha} \le \Bigl(\frac{1}{n}\sum_{i=1}^{n} |a_{ni}|^{\gamma}\Bigr)^{\alpha/\gamma} \le C,$$

which implies that $\max_{1 \le i \le n} |a_{ni}|^{\alpha} \le \sum_{i=1}^{n} |a_{ni}|^{\alpha} \le C n$ for $n \ge 1$.

If $\gamma < \alpha$, then $\beta = \alpha$ and it follows directly from (2.1) that $\max_{1 \le i \le n} |a_{ni}|^{\alpha} \le \sum_{i=1}^{n} |a_{ni}|^{\alpha} \le C n$ for $n \ge 1$.

For $0 < \alpha \le 1$, it follows from Lemma 2.4 and $E|X|^{\beta} < \infty$ that
(2.9)
For $1 < \alpha \le 2$, it follows from Lemma 2.4, $EX_n = 0$ and $E|X|^{\beta} < \infty$ that
(2.10)
From (2.9) and (2.10), we obtain (2.8) immediately. Hence, for $n$ large enough,
$$P(|T_n| > \varepsilon b_n) \le \sum_{i=1}^{n} P(|a_{ni} X_i| > b_n) + P\Bigl(\bigl|T_n^{(n)}\bigr| > \frac{\varepsilon b_n}{2}\Bigr). \tag{2.11}$$
Secondly, we need only to prove that
$$I \triangleq \sum_{n=1}^{\infty} n^{-1}\sum_{i=1}^{n} P(|a_{ni} X_i| > b_n) < \infty \tag{2.12}$$
and
$$II \triangleq \sum_{n=1}^{\infty} n^{-1} P\Bigl(\bigl|T_n^{(n)}\bigr| > \frac{\varepsilon b_n}{2}\Bigr) < \infty. \tag{2.13}$$
It follows from Lemma 2.4 and Lemma 2.3 that
$$I = \sum_{n=1}^{\infty} n^{-1}\sum_{i=1}^{n} P(|a_{ni} X_i| > b_n) \le C\sum_{n=1}^{\infty} n^{-1}\sum_{i=1}^{n} P(|a_{ni} X| > b_n) \le C E|X|^{\beta} < \infty, \tag{2.14}$$

where $\beta = \max(\alpha, \gamma)$ for some $0 < \alpha \le 2$ and $\gamma > 0$.

It follows from Lemma 2.2 and the Markov inequality that
$$\begin{aligned} II &= \sum_{n=1}^{\infty} n^{-1} P\Bigl(\bigl|T_n^{(n)}\bigr| > \frac{\varepsilon b_n}{2}\Bigr) \le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p} E\bigl(\bigl|T_n^{(n)}\bigr|^{p}\bigr) \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Biggl[\sum_{i=1}^{n} E\bigl|X_i^{(n)}\bigr|^{p} + \Bigl(\sum_{i=1}^{n} E\bigl|X_i^{(n)}\bigr|^{2}\Bigr)^{p/2}\Biggr] \triangleq II_1 + II_2. \end{aligned} \tag{2.15}$$
Take $p > \max\{2, \gamma\}$. If $\alpha < \gamma$, note that $E|X|^{\beta} = E|X|^{\gamma} < \infty$. It follows from Lemma 2.4, Lemma 2.3 and the Markov inequality that
$$\begin{aligned} II_1 &= C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\sum_{i=1}^{n} E\bigl|X_i^{(n)}\bigr|^{p} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Biggl\{\sum_{i=1}^{n} |a_{ni}|^{p} E|X_i|^{p} I(|a_{ni} X_i| \le b_n) + \sum_{i=1}^{n} b_n^{p} P(|a_{ni} X_i| > b_n)\Biggr\} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Biggl\{\sum_{i=1}^{n} |a_{ni}|^{p} E|X|^{p} I(|a_{ni} X| \le b_n) + \sum_{i=1}^{n} b_n^{p} P(|a_{ni} X| > b_n)\Biggr\} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-\gamma}\sum_{i=1}^{n} |a_{ni}|^{\gamma} E|X|^{\gamma} + C\sum_{n=1}^{\infty} n^{-1}\sum_{i=1}^{n} P(|a_{ni} X| > b_n) \\ &\le C\sum_{n=1}^{\infty} n^{-\gamma/\alpha}(\log n)^{-1} + C E|X|^{\gamma} < \infty. \end{aligned} \tag{2.16}$$
If $\alpha > \gamma$, note that $E|X|^{\beta} = E|X|^{\alpha} < \infty$. We can get that
$$\begin{aligned} II_1 &= C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\sum_{i=1}^{n} E\bigl|X_i^{(n)}\bigr|^{p} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Biggl\{\sum_{i=1}^{n} |a_{ni}|^{p} E|X_i|^{p} I(|a_{ni} X_i| \le b_n) + \sum_{i=1}^{n} b_n^{p} P(|a_{ni} X_i| > b_n)\Biggr\} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-\alpha}\sum_{i=1}^{n} |a_{ni}|^{\alpha} E|X|^{\alpha} + C\sum_{n=1}^{\infty} n^{-1}\sum_{i=1}^{n} P(|a_{ni} X| > b_n) \\ &\le C\sum_{n=1}^{\infty} n^{-1}(\log n)^{-\alpha/\gamma} + C E|X|^{\alpha} < \infty. \end{aligned} \tag{2.17}$$
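
For completeness, here is an elementary check (added for the reader's convenience) that the two series appearing at the end of (2.16) and (2.17) are finite. Both are Bertrand-type series: for $s > 1$,
$$\sum_{n=2}^{\infty} n^{-s}(\log n)^{-1} \le (\log 2)^{-1}\sum_{n=2}^{\infty} n^{-s} < \infty, \qquad \sum_{n=2}^{\infty} n^{-1}(\log n)^{-s} \le \frac{1}{2(\log 2)^{s}} + \int_{2}^{\infty}\frac{dx}{x(\log x)^{s}} = \frac{1}{2(\log 2)^{s}} + \frac{(\log 2)^{1-s}}{s-1} < \infty.$$
In (2.16) one takes $s = \gamma/\alpha > 1$ (the case $\gamma > \alpha$), and in (2.17) one takes $s = \alpha/\gamma > 1$ (the case $\alpha > \gamma$).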

Next, we will prove that $II_2 = C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\bigl(\sum_{i=1}^{n} E|X_i^{(n)}|^{2}\bigr)^{p/2} < \infty$ in the following two cases.

If $\alpha < \gamma \le 2$ or $\gamma < \alpha \le 2$, let $p > \max\{2, 2\gamma/\alpha\}$. Noting that $E|X|^{\alpha} < \infty$, we can get that
$$\begin{aligned} II_2 &= C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Bigl(\sum_{i=1}^{n} E\bigl|X_i^{(n)}\bigr|^{2}\Bigr)^{p/2} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Bigl(\sum_{i=1}^{n} E|a_{ni} X_i|^{2} I(|a_{ni} X_i| \le b_n)\Bigr)^{p/2} + C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Bigl(\sum_{i=1}^{n} b_n^{2} P(|a_{ni} X_i| > b_n)\Bigr)^{p/2} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Bigl(\sum_{i=1}^{n}\bigl[E|a_{ni} X|^{2} I(|a_{ni} X| \le b_n) + b_n^{2} P(|a_{ni} X| > b_n)\bigr]\Bigr)^{p/2} + C\sum_{n=1}^{\infty} n^{-1}\Bigl(\sum_{i=1}^{n} P(|a_{ni} X| > b_n)\Bigr)^{p/2} \\ &\le C\sum_{n=1}^{\infty} n^{-1}\Bigl(\sum_{i=1}^{n} E|a_{ni} X|^{2} b_n^{-2} I(|a_{ni} X| \le b_n)\Bigr)^{p/2} + C\sum_{n=1}^{\infty} n^{-1}\Bigl(\sum_{i=1}^{n} P(|a_{ni} X| > b_n)\Bigr)^{p/2} \\ &\le C\sum_{n=1}^{\infty} n^{-1}\Bigl(\sum_{i=1}^{n} E|a_{ni} X|^{\alpha} b_n^{-\alpha} I(|a_{ni} X| \le b_n)\Bigr)^{p/2} + C\sum_{n=1}^{\infty} n^{-1}\Bigl(\sum_{i=1}^{n} E|a_{ni} X|^{\alpha} b_n^{-\alpha}\Bigr)^{p/2} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-\alpha p/2}\Bigl(\sum_{i=1}^{n} |a_{ni}|^{\alpha} E|X|^{\alpha}\Bigr)^{p/2} \\ &\le C\sum_{n=1}^{\infty} n^{-1}(\log n)^{-\alpha p/(2\gamma)} < \infty. \end{aligned} \tag{2.18}$$
If $\gamma > 2 \ge \alpha$ or $\gamma \ge 2 > \alpha$, then by $\sum_{i=1}^{n} |a_{ni}|^{\beta} = \sum_{i=1}^{n} |a_{ni}|^{\gamma} \le C n$ and Lyapunov's inequality,
$$\bigl(E|Z|^{\alpha}\bigr)^{1/\alpha} \le \bigl(E|Z|^{\gamma}\bigr)^{1/\gamma} \quad \text{for } 0 < \alpha \le \gamma,$$
we can obtain that
$$\frac{1}{n}\sum_{i=1}^{n} |a_{ni}|^{\alpha} \le \Bigl(\frac{1}{n}\sum_{i=1}^{n} |a_{ni}|^{\gamma}\Bigr)^{\alpha/\gamma} \le C,$$
which implies that $\sum_{i=1}^{n} |a_{ni}|^{\alpha} = O(n)$. Hence, from $\max_{1 \le i \le n} |a_{ni}|^{\alpha} \le C n$ and the Hölder inequality, for any $k \ge \alpha$,
$$\sum_{i=1}^{n} |a_{ni}|^{k} = \sum_{i=1}^{n} |a_{ni}|^{\alpha} |a_{ni}|^{k - \alpha} \le C n \cdot n^{\frac{k - \alpha}{\alpha}} \le C n^{k/\alpha}.$$
So,
$$\sum_{i=1}^{n} |a_{ni}|^{2} = O\bigl(n^{2/\alpha}\bigr).$$
Let $p > \gamma$. Then
$$\begin{aligned} II_2 &= C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Bigl(\sum_{i=1}^{n} E\bigl|X_i^{(n)}\bigr|^{2}\Bigr)^{p/2} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Bigl(\sum_{i=1}^{n} |a_{ni}|^{2}\Bigr)^{p/2} + C\sum_{n=1}^{\infty} n^{-1} b_n^{-p}\Bigl(\sum_{i=1}^{n} |a_{ni}|^{\alpha}\Bigr)^{p/2} \\ &\le C\sum_{n=1}^{\infty} n^{-1} b_n^{-p} n^{p/\alpha} \le C\sum_{n=1}^{\infty} n^{-1}(\log n)^{-p/\gamma} < \infty. \end{aligned} \tag{2.19}$$
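
A brief remark on the choice of $p$ in the above estimates (an observation about the proof, not an extra assumption of the theorem): since $|X_i^{(n)}| \le b_n$ by construction, $E|X_i^{(n)}|^{p} < \infty$ for every $p$, so $p$ may be taken as large as needed in Lemma 2.2. Because $\alpha \le 2$ implies $2\gamma/\alpha \ge \gamma$, a single choice such as
$$p > \max\{2,\ 2\gamma/\alpha\}$$
works simultaneously in all of the estimates (2.15)-(2.19).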

Hence, the desired result (2.2) follows from (2.11) and (2.14)-(2.19) immediately. The proof of Theorem 2.1 is complete. □

By an argument similar to the proof of Theorem 2.1, we can obtain the following result for arrays of rowwise ND random variables.

Theorem 2.2 Let $\{X_{ni}; i \ge 1, n \ge 1\}$ be an array of rowwise ND random variables which is stochastically dominated by a random variable $X$. Let $T_n = \sum_{i=1}^{n} a_{ni} X_{ni}$, $n \ge 1$, where the weights $\{a_{ni}; 1 \le i \le n, n \ge 1\}$ form a triangular array of real constants satisfying (2.1). Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$. If $EX_{ni} = 0$ for $1 < \alpha \le 2$ and $E|X|^{\beta} < \infty$, then (2.2) holds.

Remark 2.3 Note that the results of Cai [14] and Wang et al. [12] provide a stronger conclusion, namely complete convergence for the maxima of partial sums under an exponential moment condition, than the conclusions of Theorem 2.1 and Theorem 2.2 above for $\alpha \ne \gamma$; that is, they obtained results of the form
$$\sum_{n=1}^{\infty} n^{-1} P\Bigl(\max_{1 \le j \le n} |T_j| > \varepsilon b_n\Bigr) < \infty \quad \text{for all } \varepsilon > 0.$$

It is still an open problem to obtain results of this type for ND random variables, both when $\alpha \ne \gamma$ and when $\alpha = \gamma$. We suggest that such a result could be obtained if a moment inequality sharper than that of Lemma 2.2 were established.

Declarations

Acknowledgements

The authors are very grateful to the anonymous referees and the editor, Prof. Andrei Volodin, for their valuable comments and helpful suggestions that improved the clarity and readability of the article. This work was supported by the Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning ([2011]47), the National Natural Science Foundation of China (Nos. 71271042, 11061012), the Plan of Jiangsu Specially-Appointed Professors, and the Major Program of the Key Research Center in Financial Risk Management of Jiangsu Universities of Philosophy and Social Sciences (No. 2012JDXM009).

Authors’ Affiliations

(1)
School of Mathematics Science, University of Electronic Science and Technology of China, Chengdu, 610054, P.R. China
(2)
College of Science, Guilin University of Technology, Guilin, 541004, P.R. China
(3)
Institute of Financial Engineering, School of Finance, School of Applied Mathematics, Nanjing Audit University, Nanjing, 211815, P.R. China

References

  1. Ebrahimi N, Ghosh M: Multivariate negative dependence. Commun. Stat., Theory Methods 1981, 10(4):307–337. doi:10.1080/03610928108828041
  2. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann. Stat. 1983, 11(1):286–295. doi:10.1214/aos/1176346079
  3. Volodin A: On the Kolmogorov exponential inequality for negatively dependent random variables. Pak. J. Stat., Ser. A 2002, 18:249–254.
  4. Asadian N, Fakoor V, Bozorgnia A: Rosental's type inequalities for negatively orthant dependent random variables. J. Iran. Stat. Soc. 2006, 5(1–2):66–75.
  5. Sung SH: A note on the complete convergence for arrays of dependent random variables. J. Inequal. Appl. 2011, 2011: Article ID 76. doi:10.1186/1029-242X-2011-76
  6. Kuczmaszewska A: On some conditions for complete convergence for arrays of rowwise negatively dependent random variables. Stoch. Anal. Appl. 2006, 24:1083–1095. doi:10.1080/07362990600958754
  7. Amini M, Zarei H, Bozorgnia A: Some strong limit theorems of weighted sums for negatively dependent generalized Gaussian random variables. Stat. Probab. Lett. 2007, 77:1106–1110. doi:10.1016/j.spl.2007.01.015
  8. Amini M, Azarnoosh HA, Bozorgnia A: The strong law of large numbers for negatively dependent generalized Gaussian random variables. Stoch. Anal. Appl. 2004, 22:893–901.
  9. Amini M, Bozorgnia A: Complete convergence for negatively dependent random variables. J. Appl. Math. Stoch. Anal. 2003, 16:121–126. doi:10.1155/S104895330300008X
  10. Wu QY: A strong limit theorem for weighted sums of sequences of negatively dependent random variables. J. Inequal. Appl. 2010, 2010: Article ID 383805. doi:10.1155/2010/383805
  11. Wu QY: Complete convergence for negatively dependent sequences of random variables. J. Inequal. Appl. 2010, 2010: Article ID 507293. doi:10.1155/2010/507293
  12. Wang XJ, Hu SH, Yang WZ: Complete convergence for arrays of rowwise negatively orthant dependent random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 2011. doi:10.1007/s13398-011-0048-0
  13. Bai ZD, Cheng PE: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 2000, 46:105–112. doi:10.1016/S0167-7152(99)00093-0
  14. Cai GH: Strong laws for weighted sums of NA random variables. Metrika 2008, 68:323–331. doi:10.1007/s00184-007-0160-5
  15. Bozorgnia A, Patterson RF, Taylor RL: Limit theorems for dependent random variables. In: World Congress of Nonlinear Analysts '92. 1996, 1639–1650.
  16. Sung SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 2011, 52:447–454. doi:10.1007/s00362-009-0241-9

Copyright

© Huang and Wang; licensee Springer 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
