Open Access

Strong representation results of the Kaplan-Meier estimator for censored negatively associated data

Journal of Inequalities and Applications 2013, 2013:340

DOI: 10.1186/1029-242X-2013-340

Received: 4 February 2013

Accepted: 10 July 2013

Published: 25 July 2013

Abstract

In this paper, we discuss the strong convergence rates and strong representations of the Kaplan-Meier estimator and the hazard estimator based on censored data when the survival and the censoring times form negatively associated (NA) sequences. Under certain regularity conditions, strong convergence rates are established for the Kaplan-Meier estimator and the hazard estimator, and both estimators are represented as means of random variables with a remainder of order $O(n^{-1/2}\ln^{1/2}n)$ a.s.

MSC:60F15, 60F05.

Keywords

NA sequence; random censorship model; Kaplan-Meier estimator; strong representation; strong convergence rate

1 Introduction and main results

Let $\{T_i; i\ge 1\}$ be a sequence of true survival times. The random variables (r.v.s) are not assumed to be mutually independent; it is assumed, however, that they have a common unknown continuous marginal distribution function (d.f.) $F(x)=P(T_i\le x)$ such that $F(0)=0$. Let the r.v.s $T_i$ be censored on the right by the censoring r.v.s $Y_i$, so that one observes only $(Z_i,\delta_i)$, where
$$Z_i=\min(T_i,Y_i):=T_i\wedge Y_i \quad\text{and}\quad \delta_i=I(T_i\le Y_i),\qquad i=1,\ldots,n.$$
Here and in the sequel, $I(A)$ is the indicator random variable of the event $A$. In this random censorship model, the censoring times $Y_i$, $i=1,\ldots,n$, are assumed to have the common distribution function $G(y)=P(Y_i\le y)$ such that $G(0)=0$; they are also assumed to be independent of the r.v.s $T_i$. The problem at hand is that of drawing nonparametric inference about $F$ based on the censored observations $(Z_i,\delta_i)$, $i=1,\ldots,n$. For this purpose, define two stochastic processes on $[0,\infty)$ as follows:
$$N_n(t)=\sum_{k=1}^n I(Z_k\le t,\delta_k=1)=\sum_{k=1}^n I(T_k\le t\wedge Y_k),$$
the number of uncensored observations less than or equal to t, and
$$Y_n(t)=\sum_{k=1}^n I(Z_k\ge t),$$
the number of censored or uncensored observations greater than or equal to $t$. The following nonparametric estimator $\hat F_n$ of $F$, due to Kaplan and Meier [1], is widely used to estimate $F$ on the basis of the data $(Z_i,\delta_i)$:
$$\hat F_n(x)=1-\prod_{s\le x}\left(1-\frac{dN_n(s)}{Y_n(s)}\right),$$

where $dN_n(s)=N_n(s)-N_n(s-)$.
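As a concrete illustration of the product-limit formula, the following sketch is not part of the paper; it is a minimal NumPy implementation under the no-ties assumption, and all names are hypothetical.

```python
import numpy as np

def kaplan_meier(z, d, x):
    """Evaluate the product-limit estimate F̂_n(x) = 1 - prod_{s<=x} (1 - dN_n(s)/Y_n(s)).

    z : observed times Z_i = min(T_i, Y_i)
    d : censoring indicators delta_i = I(T_i <= Y_i)
    x : evaluation point
    Assumes the Z_i are distinct (continuous distributions).
    """
    z = np.asarray(z, dtype=float)
    d = np.asarray(d, dtype=int)
    order = np.argsort(z)                   # Z_(1) <= ... <= Z_(n)
    z_sorted, d_sorted = z[order], d[order]
    n = len(z_sorted)
    surv = 1.0                              # running value of 1 - F̂_n(x)
    for k in range(n):
        if z_sorted[k] > x:                 # the product only runs over s <= x
            break
        if d_sorted[k] == 1:                # dN_n jumps by 1 at an uncensored Z_(k)
            surv *= 1.0 - 1.0 / (n - k)     # Y_n(Z_(k)) = n - k observations at risk
    return 1.0 - surv
```

For example, `kaplan_meier([2.0, 1.0, 3.0], [1, 0, 1], 2.5)` returns $1-\tfrac12=0.5$: the censored observation at $1.0$ contributes no factor, and the uncensored observation at $2.0$ has two subjects at risk.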

Let $L$ be the distribution function of the $Z_i$'s and $\bar L:=1-L$. Since the sequences $\{T_n; n\ge1\}$ and $\{Y_n; n\ge1\}$ are independent, it follows that $L=1-\bar F\bar G=1-(1-F)(1-G)$. The empirical d.f. $L_n(t)$ of $L$ is defined by
$$L_n(t):=\frac1n\sum_{k=1}^n I(Z_k<t)=1-\frac{Y_n(t)}{n}:=\frac{\bar Y_n(t)}{n},$$

where $\bar Y_n(t)=\sum_{k=1}^n I(Z_k<t)$.

Define the (possibly infinite) times $\tau_F$, $\tau_G$ and $\tau_L$ by
$$\tau_F=\inf\{y; F(y)=1\},\qquad \tau_G=\inf\{y; G(y)=1\},\qquad \tau_L=\inf\{y; L(y)=1\}.$$
Then $\tau_L=\tau_F\wedge\tau_G$. Set
$$F^*(t)=P(Z_1\le t,\delta_1=1)=P(T_1\le t\wedge Y_1),$$
and define the corresponding empirical (sub-)distribution function by
$$F_n^*(t):=\frac1n\sum_{k=1}^n I(Z_k\le t,\delta_k=1)=\frac{N_n(t)}{n}.$$
We then have
$$F^*(t)=\int_0^\infty F(t\wedge z)\,dG(z)=\int_0^t\bar G(z)\,dF(z),$$
and
$$dF^*(t)=\bar G(t)\,dF(t).$$
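As a quick numerical sanity check of these identities (not from the paper; a Monte Carlo sketch with independent exponential survival and censoring times and hypothetical rates `lam`, `mu`), one can compare the empirical and closed-form values of $L(t)$ and $F^*(t)$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, t = 1.0, 0.5, 1.2                 # hypothetical rates and a time point
T = rng.exponential(1.0 / lam, 200_000)    # survival times, F(x) = 1 - exp(-lam * x)
Y = rng.exponential(1.0 / mu, 200_000)     # censoring times, G(y) = 1 - exp(-mu * y)
Z, delta = np.minimum(T, Y), (T <= Y)

# L = 1 - (1 - F)(1 - G) and F*(t) = ∫_0^t (1 - G) dF have closed forms in this model.
L_emp, L_thy = np.mean(Z <= t), 1.0 - np.exp(-(lam + mu) * t)
Fs_emp = np.mean((Z <= t) & delta)
Fs_thy = lam / (lam + mu) * (1.0 - np.exp(-(lam + mu) * t))
print(L_emp, L_thy)    # approximately equal, up to Monte Carlo error
print(Fs_emp, Fs_thy)  # approximately equal, up to Monte Carlo error
```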
Another question of interest in survival analysis is the estimation of the hazard function h defined as follows when it is further assumed that F has a density f:
$$h(x):=-\frac{d}{dx}\bigl(\log\bar F(x)\bigr)=f(x)/\bar F(x)\quad\text{for } F(x)<1,$$
with $\bar F=1-F$. The quantity
$$H(x)=-\log\bar F(x)=\int_0^x\frac{dF(s)}{\bar F(s)}=\int_0^x\frac{dF^*(s)}{\bar L(s)}$$
(1.1)
is called the cumulative hazard function. The empirical cumulative hazard function $\hat H_n(x)$ is given by
$$\hat H_n(x):=\int_0^x\frac{dN_n(s)}{Y_n(s)}=\int_0^x\frac{dF_n^*(s)}{\bar L_n(s)},$$
(1.2)

where $\bar L_n(s)=1-L_n(s)$.

Since $N_n(t)$ is a step function with $dN_n(Z_{(k)})=\delta_{(k)}$, $k=1,2,\ldots,n$, it is easily seen that
$$\hat H_n(x)=\sum_{k=1}^n\frac{I(Z_{(k)}\le x,\delta_{(k)}=1)}{n-k+1},$$
(1.3)
and
$$\hat F_n(x)=1-\prod_{k=1}^n\left(1-\frac{\delta_{(k)}}{n-k+1}\right)^{I(Z_{(k)}\le x)}=1-\prod_{k=1}^n\left(\frac{n-k}{n-k+1}\right)^{I(\delta_{(k)}=1,\,Z_{(k)}\le x)},$$
(1.4)

where $Z_{(1)}\le Z_{(2)}\le\cdots\le Z_{(n)}$ denote the order statistics of $Z_1,Z_2,\ldots,Z_n$, and $\delta_{(i)}$ is the concomitant of $Z_{(i)}$.
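The order-statistic formulas (1.3) and (1.4) translate directly into code. The sketch below is not part of the paper; it uses hypothetical names and the no-ties assumption, and evaluates both $\hat H_n(x)$ and $\hat F_n(x)$.

```python
import numpy as np

def km_and_hazard(z, d, x):
    """Evaluate (Ĥ_n(x), F̂_n(x)) via the order-statistic formulas (1.3)-(1.4).

    Assumes the Z_i are distinct (continuous distributions)."""
    z = np.asarray(z, dtype=float)
    d = np.asarray(d, dtype=bool)
    n = len(z)
    order = np.argsort(z)
    z_s, d_s = z[order], d[order]              # Z_(k) and concomitants delta_(k)
    k = np.arange(1, n + 1)                    # ranks, so n - k + 1 subjects are at risk at Z_(k)
    uncens_le_x = (z_s <= x) & d_s             # I(Z_(k) <= x, delta_(k) = 1)
    H_hat = np.sum(uncens_le_x / (n - k + 1.0))                       # formula (1.3)
    F_hat = 1.0 - np.prod(np.where(uncens_le_x,
                                   (n - k) / (n - k + 1.0), 1.0))     # formula (1.4)
    return H_hat, F_hat
```

The value `F_hat` coincides with the product-limit value computed from the definition of $\hat F_n$ given after (1.2).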

There is an extensive literature on the Kaplan-Meier estimator $\hat F_n(x)$ and the hazard estimator $\hat H_n(x)$ for censored independent observations. We refer to the papers by Breslow and Crowley [2], Földes and Rejtő [3] and Gu and Lai [4]. Martingale methods for analyzing properties of $\hat F_n(x)$ are described in the monograph by Gill [5]. However, censored dependent data appear in a number of applications. For example, repeated measurements in survival analysis follow this pattern; see Kang and Koehler [6] or Wei et al. [7]. In the context of censored time series analysis, Shumway et al. [8] considered (hourly or daily) measurements of the concentration of a given substance subject to some detection limits, thus being potentially censored from the right. Ying and Wei [9], Lecoutre and Ould-Saïd [10], Cai [11] and Liang and Uña-Álvarez [12] studied the convergence of $\hat F_n(x)$ for stationary α-mixing data.

The main purpose of this paper is to study the strong convergence rates and strong representations of the Kaplan-Meier estimator and the hazard estimator based on censored data when the survival and the censoring times form NA sequences (see the definition below). Under certain regularity conditions, we obtain strong convergence rates for the Kaplan-Meier and hazard estimators, and we represent both estimators as means of random variables with a remainder of order $O(n^{-1/2}\ln^{1/2}n)$ a.s.

Definition Random variables $X_1,X_2,\ldots,X_n$, $n\ge2$, are said to be negatively associated (NA) if for every pair of disjoint subsets $A_1$ and $A_2$ of $\{1,2,\ldots,n\}$,
$$\operatorname{cov}\bigl(f_1(X_i; i\in A_1),\,f_2(X_j; j\in A_2)\bigr)\le 0,$$

whenever $f_1$ and $f_2$ are coordinatewise increasing (or coordinatewise decreasing) functions such that this covariance exists. A sequence of random variables $\{X_i; i\ge1\}$ is said to be NA if every finite subfamily is NA.

Obviously, if $\{X_i; i\ge1\}$ is a sequence of NA random variables and $\{f_i; i\ge1\}$ is a sequence of nondecreasing (or nonincreasing) functions, then $\{f_i(X_i); i\ge1\}$ is also a sequence of NA random variables.

This definition was introduced by Joag-Dev and Proschan [13]. Statistical inference depends greatly on the sampling scheme, and random sampling without replacement from a finite population is NA but not independent. NA samples have wide applications, for example in multivariate statistical analysis and reliability theory, and the limit behavior of NA random variables has therefore received more and more attention recently. One can refer to Joag-Dev and Proschan [13] for fundamental properties, Matula [14] for the three-series theorem, and Wu and Jiang [15, 16] for strong convergence results.
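The without-replacement example lends itself to a direct check of the defining covariance inequality. The following sketch is not part of the paper; the population, sample size and test functions are all hypothetical. It estimates $\operatorname{cov}(f_1(X_1),f_2(X_2,X_3))$ for two coordinatewise increasing functions on disjoint index sets.

```python
import numpy as np

rng = np.random.default_rng(1)
population = np.arange(10.0)                  # finite population {0, 1, ..., 9}
draws = np.array([rng.choice(population, size=3, replace=False)
                  for _ in range(100_000)])   # (X_1, X_2, X_3): sampling without replacement

# f_1(X_1) = X_1 and f_2(X_2, X_3) = X_2 + X_3 are coordinatewise increasing
# functions of the disjoint index sets {1} and {2, 3}; under NA, cov(f_1, f_2) <= 0.
f1 = draws[:, 0]
f2 = draws[:, 1] + draws[:, 2]
cov = np.mean(f1 * f2) - np.mean(f1) * np.mean(f2)
print(cov)   # negative, close to -2 * Var(population) / (N - 1) ≈ -1.83
```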

We give two lemmas, which are helpful in proving our theorems.

Lemma 1.1 (Yang [17], Lemma 1)

Let $\{X_i; i\ge1\}$ be a sequence of negatively associated random variables with zero means and $|X_i|\le b_i$ a.s. ($i=1,2,\ldots$). Let $t>0$ be such that $t\max_{1\le i\le n}b_i\le 1$. Then, for all $\varepsilon>0$,
$$P\left(\Bigl|\sum_{i=1}^n X_i\Bigr|\ge\varepsilon\right)\le 2\exp\left(-t\varepsilon+t^2\sum_{i=1}^n EX_i^2\right).$$
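To see the inequality at work numerically, the sketch below (not part of the paper; all parameters hypothetical) draws NA samples by sampling $n$ values without replacement from a population containing equal numbers of $+1/2$ and $-1/2$, and compares the Monte Carlo tail probability of $|\sum_{i=1}^n X_i|$ with the bound of Lemma 1.1.

```python
import numpy as np

rng = np.random.default_rng(2)
# Draw n = 100 values without replacement from a population of 200 values,
# half equal to +1/2 and half to -1/2.  These X_i are NA, with |X_i| <= b_i = 1/2,
# E X_i = 0 and E X_i^2 = 1/4, and sum X_i = (# of +1/2 drawn) - 50.
n, reps, eps, t = 100, 200_000, 15.0, 0.3      # t > 0 with t * max b_i = 0.15 <= 1
plus_drawn = rng.hypergeometric(100, 100, n, size=reps)
sums = plus_drawn - 50.0

emp_tail = np.mean(np.abs(sums) >= eps)                # Monte Carlo estimate of the tail
bound = 2.0 * np.exp(-t * eps + t * t * n * 0.25)      # 2 exp(-t*eps + t^2 * sum E X_i^2)
print(emp_tail, bound)                                 # the bound holds (it is conservative)
```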
Lemma 1.2 Let $\{X_i; i\ge1\}$ be a sequence of NA r.v.s with a common continuous d.f. $F$, and let $F_n(x):=\frac1n\sum_{i=1}^n I(X_i<x)$ be the empirical d.f. based on the sample $X_1,\ldots,X_n$. Then
$$\sup_{x\in\mathbb R}|F_n(x)-F(x)|=O(n^{-1/2}\ln^{1/2}n)\quad\text{a.s.}$$

Proof Lemma 1.2 can be proved in the same way as Lemma 4 of Yang [17]. □

Theorem 1.3 Let $\{T_n; n\ge1\}$ and $\{Y_n; n\ge1\}$ be two sequences of NA random variables. Suppose that the sequences $\{T_n; n\ge1\}$ and $\{Y_n; n\ge1\}$ are independent. Then, for any $0<\tau<\tau_L$,
$$\sup_{0\le t\le\tau}|\hat H_n(t)-H(t)|=O(a_n)\quad\text{a.s.}$$
(1.5)
and
$$\sup_{0\le t\le\tau}|\hat F_n(t)-F(t)|=O(a_n)\quad\text{a.s.},$$
(1.6)

here and in the sequel, $a_n=n^{-1/2}(\ln n)^{1/2}$.
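Theorem 1.3 can be illustrated by simulation. The sketch below is not part of the paper; it uses independent exponential survival and censoring times, which is a special case of the NA setting, and tracks the ratio $\sup_{0\le t\le\tau}|\hat F_n(t)-F(t)|/a_n$ over increasing sample sizes. The rates and the truncation point $\tau$ are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, mu, tau = 1.0, 0.5, 1.5                     # hypothetical rates; tau < tau_L (= infinity here)

def km_curve(z, d):
    """Return the sorted times Z_(k) and F̂_n(Z_(k)) (no ties assumed)."""
    order = np.argsort(z)
    z_s, d_s = z[order], d[order].astype(float)
    n = len(z_s)
    at_risk = n - np.arange(n)                   # Y_n(Z_(k)) = n - k + 1 for 1-based rank k
    surv = np.cumprod(1.0 - d_s / at_risk)       # 1 - F̂_n(Z_(k))
    return z_s, 1.0 - surv

for n in (200, 800, 3200, 12800):
    T = rng.exponential(1.0 / lam, n)
    Y = rng.exponential(1.0 / mu, n)
    Z, delta = np.minimum(T, Y), (T <= Y)
    z_s, F_hat = km_curve(Z, delta)
    F_prev = np.concatenate(([0.0], F_hat[:-1]))  # value just before each jump
    keep = z_s <= tau
    F_true = 1.0 - np.exp(-lam * z_s[keep])
    sup_err = np.max(np.maximum(np.abs(F_hat[keep] - F_true),
                                np.abs(F_prev[keep] - F_true)))   # approximates the sup over [0, tau]
    a_n = n ** -0.5 * np.log(n) ** 0.5
    print(n, round(sup_err, 4), round(sup_err / a_n, 2))          # the ratio stays bounded
```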

For positive reals $z$ and $t$, and $\delta$ taking the value 0 or 1, let
$$\xi(z,\delta,t)=g(z\wedge t)-\frac{I(z\le t,\delta=1)}{\bar L(z)},$$
(1.7)

where $g(x)=\int_0^x\frac{dF^*(s)}{\bar L^2(s)}$.

Theorem 1.4 Assume that the conditions of Theorem 1.3 hold. Then
$$\hat H_n(t)-H(t)=-\frac1n\sum_{i=1}^n\xi(Z_i,\delta_i,t)+r_{1n}(t)$$
(1.8)
and
$$\hat F_n(t)-F(t)=-\frac{\bar F(t)}{n}\sum_{i=1}^n\xi(Z_i,\delta_i,t)+r_{2n}(t),$$
(1.9)

where $\sup_{0\le t\le\tau}|r_{in}(t)|=O(a_n)$ a.s., $i=1,2$, $0<\tau<\tau_L$.
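A numerical illustration of the representation (1.8) is given below. It is not from the paper; it is a sketch under the independent-exponential model used earlier, where $H(t)=\lambda t$, $\bar L(z)=e^{-(\lambda+\mu)z}$ and $g$ has the closed form $g(x)=\frac{\lambda}{\lambda+\mu}(e^{(\lambda+\mu)x}-1)$; the rates, sample size and time point are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, mu, n, t = 1.0, 0.5, 5000, 1.0          # hypothetical rates, sample size, time point
T = rng.exponential(1.0 / lam, n)
Y = rng.exponential(1.0 / mu, n)
Z, delta = np.minimum(T, Y), (T <= Y)

# Empirical cumulative hazard Ĥ_n(t) from formula (1.3) (no ties assumed).
order = np.argsort(Z)
z_s, d_s = Z[order], delta[order]
ranks = np.arange(1, n + 1)
H_hat = np.sum(((z_s <= t) & d_s) / (n - ranks + 1.0))
H_true = lam * t                              # the exponential hazard is constant

# Leading term of (1.8): -(1/n) * sum xi(Z_i, delta_i, t), with xi as in (1.7).
rate = lam + mu
g_term = lam / rate * (np.exp(rate * np.minimum(Z, t)) - 1.0)      # g(Z_i ∧ t)
ind_term = ((Z <= t) & delta) * np.exp(rate * Z)                   # I(Z_i <= t, delta_i = 1) / L̄(Z_i)
xi = g_term - ind_term
print(H_hat - H_true, -np.mean(xi))           # the difference between the two is r_1n(t)
```

For this sample size the two printed quantities agree to a few decimal places; their difference is the remainder $r_{1n}(t)$, which Theorem 1.4 bounds by $O(a_n)$ uniformly in $t\le\tau$.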

2 Proofs

Proof of Theorem 1.3 It is easy to see from Property P7 of Joag-Dev and Proschan [13] that $\{Z_n; n\ge1\}$ and $\{(Z_n,\delta_n); n\ge1\}$ are also two sequences of NA r.v.s. Therefore
$$\sup_{t\ge0}|L_n(t)-L(t)|=O(a_n)\quad\text{a.s.}$$
(2.1)
and
$$\sup_{t\ge0}|F_n^*(t)-F^*(t)|=O(a_n)\quad\text{a.s.}$$
(2.2)

follow from Lemma 1.2 and the fact that $L_n$ and $F_n^*$ are the empirical distribution functions of $L$ and $F^*$, respectively.

Now, by (1.1) and (1.2), let us write
$$\begin{aligned}\hat H_n(t)-H(t)&=\int_0^t\frac{dF_n^*(s)}{\bar L_n(s)}-\int_0^t\frac{dF^*(s)}{\bar L(s)}=\int_0^t\left(\frac{1}{\bar L_n(s)}-\frac{1}{\bar L(s)}\right)dF^*(s)+\int_0^t\frac{d\bigl(F_n^*(s)-F^*(s)\bigr)}{\bar L_n(s)}\\ &=\int_0^t\frac{\bar L(s)-\bar L_n(s)}{\bar L_n(s)\bar L(s)}\,dF^*(s)+\frac{F_n^*(t)-F^*(t)}{\bar L_n(t)}-\int_0^t\bigl(F_n^*(s)-F^*(s)\bigr)\,d\bigl(\bar L_n(s)\bigr)^{-1}.\end{aligned}$$
(2.3)
Therefore, by combining (2.1) and (2.2) with the fact that $\bar L_n(\tau)\to\bar L(\tau)>0$ a.s. for $0<\tau<\tau_L$, we obtain
$$\begin{aligned}\sup_{0\le t\le\tau}|\hat H_n(t)-H(t)|&\le\frac{\sup_{t\ge0}|\bar L_n(t)-\bar L(t)|}{\bar L_n(\tau)\bar L(\tau)}\bigl(F^*(\tau)-F^*(0)\bigr)+\frac{\sup_{t\ge0}|F_n^*(t)-F^*(t)|}{\bar L_n(\tau)}+\sup_{t\ge0}|F_n^*(t)-F^*(t)|\left(\frac{1}{\bar L_n(\tau)}-\frac{1}{\bar L_n(0)}\right)\\ &\le\frac{\sup_{t\ge0}|\bar L_n(t)-\bar L(t)|}{\bar L_n(\tau)\bar L(\tau)}+\frac{2\sup_{t\ge0}|F_n^*(t)-F^*(t)|}{\bar L_n(\tau)}=O(a_n)\quad\text{a.s.}\end{aligned}$$

Thus, (1.5) holds.

Now we prove (1.6). By (1.3) and (1.4),
$$\begin{aligned}-\hat H_n(t)-\ln\bigl(1-\hat F_n(t)\bigr)&=-\sum_{i=1}^n\frac{I(\delta_{(i)}=1,Z_{(i)}\le t)}{n-i+1}-\sum_{i=1}^n I(\delta_{(i)}=1,Z_{(i)}\le t)\ln\frac{n-i}{n-i+1}\\ &=\sum_{i=1}^n I(\delta_{(i)}=1,Z_{(i)}\le t)\left(\ln\frac{n-i+1}{n-i}-\frac{1}{n-i+1}\right).\end{aligned}$$
Therefore, by combining the inequality $0<\ln\frac{x+1}{x}-\frac{1}{x+1}<\frac{1}{x(x+1)}$, $x>0$, with (2.1), for $0<\tau<\tau_L$ and $0\le t\le\tau$ we get that
$$\begin{aligned}0\le-\hat H_n(t)-\ln\bigl(1-\hat F_n(t)\bigr)&\le\sum_{i=1}^n I(\delta_{(i)}=1,Z_{(i)}\le t)\,\frac{1}{(n-i)(n-i+1)}\le\sum_{i;\,Z_{(i)}\le t}\frac{1}{(n-i)(n-i+1)}\\ &=\sum_{i=1}^{n-Y_n(t)}\left(\frac{1}{n-i}-\frac{1}{n-i+1}\right)=\frac{1}{Y_n(t)}-\frac1n\le\frac1n\cdot\frac{1}{Y_n(t)/n}=\frac1n\cdot\frac{1}{\bar L_n(t)}=O\!\left(\frac1n\right).\end{aligned}$$
(2.4)
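For completeness, the elementary inequality invoked above follows from the standard bounds $\frac{u}{1+u}<\ln(1+u)<u$ for $u>0$ applied with $u=1/x$; this one-line verification is not spelled out in the paper:
$$\frac{1}{x+1}<\ln\frac{x+1}{x}<\frac1x\ \ (x>0)\quad\Longrightarrow\quad 0<\ln\frac{x+1}{x}-\frac{1}{x+1}<\frac1x-\frac{1}{x+1}=\frac{1}{x(x+1)}.$$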
By (1.1), (1.5) and (2.4), using the Taylor expansion $e^x=1+x+o(x)$, we obtain
$$\begin{aligned}\hat F_n(t)-F(t)&=\bigl(1-F(t)\bigr)-\bigl(1-\hat F_n(t)\bigr)=\bigl(e^{-H(t)}-e^{-\hat H_n(t)}\bigr)+\bigl(e^{-\hat H_n(t)}-e^{\ln(1-\hat F_n(t))}\bigr)\\ &=e^{-H(t)}\bigl(1-e^{-\hat H_n(t)+H(t)}\bigr)+e^{\ln(1-\hat F_n(t))}\bigl(e^{-\hat H_n(t)-\ln(1-\hat F_n(t))}-1\bigr)\\ &=e^{-H(t)}\Bigl(\hat H_n(t)-H(t)+o\bigl(\hat H_n(t)-H(t)\bigr)\Bigr)+\bigl(1-\hat F_n(t)\bigr)\Bigl(-\hat H_n(t)-\ln\bigl(1-\hat F_n(t)\bigr)+o\bigl(-\hat H_n(t)-\ln(1-\hat F_n(t))\bigr)\Bigr)\\ &=e^{-H(t)}\bigl(\hat H_n(t)-H(t)\bigr)+o(a_n)+O\!\left(\frac1n\right)=\bar F(t)\bigl(\hat H_n(t)-H(t)\bigr)+o(a_n).\end{aligned}$$
(2.5)

Hence, combining (2.5) with (1.5), we see that (1.6) holds. This completes the proof of Theorem 1.3. □

Proof of Theorem 1.4 By (2.1),
$$\begin{aligned}\frac{1}{\bar L_n(s)}-\frac{1}{\bar L(s)}&=\frac{\bar L(s)-\bar L_n(s)}{\bar L^2(s)}-\frac{2}{\bar L(s)}+\frac{\bar L_n(s)}{\bar L^2(s)}+\frac{1}{\bar L_n(s)}\\ &=\frac{\bar L(s)-\bar L_n(s)}{\bar L^2(s)}+\frac{\bigl(\bar L_n(s)-\bar L(s)\bigr)^2}{\bar L^2(s)\bar L_n(s)}=\frac{1}{\bar L(s)}-\frac{\bar L_n(s)}{\bar L^2(s)}+O(a_n^2)\end{aligned}$$
uniformly in $0\le s\le\tau$. Thus, combining this with (2.3), we have
$$\begin{aligned}\hat H_n(t)-H(t)&=\int_0^t\left(\frac{1}{\bar L_n(s)}-\frac{1}{\bar L(s)}\right)dF^*(s)+\int_0^t\frac{d\bigl(F_n^*(s)-F^*(s)\bigr)}{\bar L(s)}+\int_0^t\left(\frac{1}{\bar L_n(s)}-\frac{1}{\bar L(s)}\right)d\bigl(F_n^*(s)-F^*(s)\bigr)\\ &=\left(\int_0^t\frac{dF_n^*(s)}{\bar L(s)}-\int_0^t\frac{\bar L_n(s)}{\bar L^2(s)}\,dF^*(s)\right)+\int_0^t\left(\frac{1}{\bar L_n(s)}-\frac{1}{\bar L(s)}\right)d\bigl(F_n^*(s)-F^*(s)\bigr)+O(a_n^2)\\ &:=I_1(t)+I_2(t)+O(a_n^2).\end{aligned}$$
(2.6)
Noting that $F_n^*(s)=\frac{N_n(s)}{n}$ and $N_n(s)$ is a step function, we get
$$\begin{aligned}I_1(t)&=\frac1n\sum_{i;\,Z_{(i)}\le t}\frac{N_n(Z_{(i)})-N_n(Z_{(i)}-)}{\bar L(Z_{(i)})}-\frac1n\int_0^t\frac{\sum_{i=1}^n I(Z_i\ge s)}{\bar L^2(s)}\,dF^*(s)\\ &=\frac1n\sum_{i;\,Z_{(i)}\le t}\frac{\delta_{(i)}}{\bar L(Z_{(i)})}-\frac1n\sum_{i=1}^n\int_0^{t\wedge Z_i}\frac{dF^*(s)}{\bar L^2(s)}\\ &=\frac1n\sum_{i=1}^n\frac{I(Z_{(i)}\le t,\delta_{(i)}=1)}{\bar L(Z_{(i)})}-\frac1n\sum_{i=1}^n g(t\wedge Z_i)\\ &=\frac1n\sum_{i=1}^n\frac{I(Z_i\le t,\delta_i=1)}{\bar L(Z_i)}-\frac1n\sum_{i=1}^n g(t\wedge Z_i)=-\frac1n\sum_{i=1}^n\xi(Z_i,\delta_i,t).\end{aligned}$$
(2.7)
Therefore, to prove (1.8), it suffices to prove that $\sup_{0\le t\le\tau}|I_2(t)|=O(a_n)$ a.s. for $\tau<\tau_L$. Let us divide the interval $[0,\tau]$ into subintervals $[x_i,x_{i+1}]$, $i=1,\ldots,k_n$, where $k_n=O(a_n^{-1})$ and $0=x_1<x_2<\cdots<x_{k_n+1}=\tau$ are such that $H(x_{i+1})-H(x_i)=O(a_n)$. For $0\le t\le\tau$, it is easy to check that
$$\begin{aligned}|I_2(t)|&=\left|\int_0^t\left(\frac{1}{\bar L_n(s)}-\frac{1}{\bar L(s)}\right)d\bigl(F_n^*(s)-F^*(s)\bigr)\right|\\ &\le 2\max_{1\le i\le k_n}\sup_{y\in[x_i,x_{i+1}]}\bigl|\bar L_n^{-1}(y)-\bar L_n^{-1}(x_i)-\bar L^{-1}(y)+\bar L^{-1}(x_i)\bigr|+\sup_{0\le x\le\tau}\bigl|\bar L_n^{-1}(x)-\bar L^{-1}(x)\bigr|\max_{1\le i\le k_n}\bigl|F_n^*(x_{i+1})-F_n^*(x_i)-F^*(x_{i+1})+F^*(x_i)\bigr|\\ &\le c\max_{1\le i\le k_n}\sup_{y\in[x_i,x_{i+1}]}\bigl|\bar L_n(y)-\bar L_n(x_i)-\bar L(y)+\bar L(x_i)\bigr|+c\max_{1\le i\le k_n}\bigl|F_n^*(x_{i+1})-F_n^*(x_i)-F^*(x_{i+1})+F^*(x_i)\bigr|+O(a_n^2)\\ &:=I_{21}+I_{22}+O(a_n^2).\end{aligned}$$
(2.8)
To estimate $I_{21}$, we further subdivide each $[x_i,x_{i+1}]$ into subintervals $[x_{ij},x_{i(j+1)}]$, $j=1,\ldots,b_n$, where $b_n=O(k_n^{1/2})=O(a_n^{-1/2})$, such that $|\bar L(x_{i(j+1)})-\bar L(x_{ij})|=O(a_n^{3/2})$ uniformly in $i$, $j$. Now, by (2.1) and $|\bar L_n(y)-\bar L_n(x_{ij})|\le\frac1n+O(a_n^{3/2})$ for $y\in[x_{ij},x_{i(j+1)}]$, it follows that
$$\begin{aligned}I_{21}&=\max_{1\le i\le k_n}\sup_{y\in[x_i,x_{i+1}]}\bigl|\bar L_n(y)-\bar L_n(x_i)-\bar L(y)+\bar L(x_i)\bigr|\\ &\le\max_{1\le i\le k_n}\max_{1\le j\le b_n}\sup_{y\in[x_{ij},x_{i(j+1)}]}\bigl|\bar L_n(x_{ij})-\bar L_n(x_i)-\bar L(x_{ij})+\bar L(x_i)\bigr|+\max_{1\le i\le k_n}\max_{1\le j\le b_n}\sup_{y\in[x_{ij},x_{i(j+1)}]}\Bigl(\bigl|\bar L_n(y)-\bar L_n(x_{ij})\bigr|+\bigl|\bar L(y)-\bar L(x_{ij})\bigr|\Bigr)\\ &\le\max_{1\le i\le k_n}\max_{1\le j\le b_n}\bigl|\bar L_n(x_{ij})-\bar L_n(x_i)-\bar L(x_{ij})+\bar L(x_i)\bigr|+O(a_n^{3/2}).\end{aligned}$$
(2.9)

For $1\le i\le k_n$, $1\le j\le b_n$, $1\le k\le n$, let $\eta_k=EI(Z_k\ge x_i)-I(Z_k\ge x_i)$ and $\zeta_k=I(Z_k\ge x_{ij})-EI(Z_k\ge x_{ij})$. Then $\bar L_n(x_{ij})-\bar L_n(x_i)-\bar L(x_{ij})+\bar L(x_i)=\frac1n\sum_{k=1}^n(\eta_k+\zeta_k)$, where $\{\eta_k\}$ and $\{\zeta_k\}$ are NA sequences with $|\eta_k|\le1$, $|\zeta_k|\le1$, $E\eta_k=E\zeta_k=0$, $E\eta_k^2\le1$, $E\zeta_k^2\le1$.

Taking $t=a_n$ in Lemma 1.1 yields, for all sufficiently large $n$,
$$\begin{aligned}&P\Bigl(\max_{1\le i\le k_n}\max_{1\le j\le b_n}\bigl|\bar L_n(x_{ij})-\bar L_n(x_i)-\bar L(x_{ij})+\bar L(x_i)\bigr|\ge 8a_n\Bigr)\\ &\quad\le\sum_{i=1}^{k_n}\sum_{j=1}^{b_n}P\left(\Bigl|\sum_{k=1}^n(\eta_k+\zeta_k)\Bigr|\ge 8na_n\right)\le\sum_{i=1}^{k_n}\sum_{j=1}^{b_n}P\left(\Bigl|\sum_{k=1}^n\eta_k\Bigr|\ge 4na_n\right)+\sum_{i=1}^{k_n}\sum_{j=1}^{b_n}P\left(\Bigl|\sum_{k=1}^n\zeta_k\Bigr|\ge 4na_n\right)\\ &\quad\le\sum_{i=1}^{k_n}\sum_{j=1}^{b_n}4\exp\bigl(-4na_n^2+na_n^2\bigr)=4k_nb_ne^{-3\ln n}\le\frac{1}{n^2}.\end{aligned}$$

By this bound and the Borel-Cantelli lemma, $\max_{i,j}|\bar L_n(x_{ij})-\bar L_n(x_i)-\bar L(x_{ij})+\bar L(x_i)|=O(a_n)$ a.s., and hence, by (2.9), $I_{21}=O(a_n)$ a.s. The estimate $I_{22}=O(a_n)$ a.s. is obtained similarly, noting that $|F^*(x)-F^*(y)|\le|\bar L(x)-\bar L(y)|$ for all $x$ and $y$. Therefore, by (2.6)-(2.9), (1.8) holds, and (1.9) follows from (2.5) and (1.8). □

Authors’ information

Qunying Wu, Professor, Doctor, working in the field of probability and statistics.

Declarations

Acknowledgements

Supported by the National Natural Science Foundation of China (11061012), project supported by Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning ([2011] 47), and the Support Program of the Guangxi China Science Foundation (2012GXNSFAA053010, 2013GXNSFDA019001).

Authors’ Affiliations

(1)
College of Science, Guilin University of Technology
(2)
Department of Mathematics, Ji’nan University

References

  1. Kaplan EL, Meier P: Nonparametric estimation from incomplete observations. J. Am. Stat. Assoc. 1958, 53: 457–481. 10.1080/01621459.1958.10501452
  2. Breslow N, Crowley J: A large sample study of the life table and product limit estimates under random censorship. Ann. Stat. 1974, 2: 437–453. 10.1214/aos/1176342705
  3. Földes A, Rejtő L: A LIL type result for the product limit estimator. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1981, 56: 75–84. 10.1007/BF00531975
  4. Gu MG, Lai TL: Functional laws of the iterated logarithm for the product-limit estimator of a distribution function under random censorship or truncation. Ann. Probab. 1990, 18: 160–189. 10.1214/aop/1176990943
  5. Gill R: Censoring and Stochastic Integrals. Mathematical Centre Tracts 124. Math. Centrum, Amsterdam; 1980.
  6. Kang SS, Koehler KJ: Modification of the Greenwood formula for correlated failure times. Biometrics 1997, 53: 885–899. 10.2307/2533550
  7. Wei LJ, Lin DY, Weissfeld L: Regression analysis of multivariate incomplete failure times data by modelling marginal distributions. J. Am. Stat. Assoc. 1989, 84: 1064–1073.
  8. Shumway RH, Azari AS, Johnson P: Estimating mean concentrations under transformation for environmental data with detection limits. Technometrics 1988, 31: 347–356.
  9. Ying Z, Wei LJ: The Kaplan-Meier estimate for dependent failure time observations. J. Multivar. Anal. 1994, 50: 17–29. 10.1006/jmva.1994.1031
  10. Lecoutre JP, Ould-Saïd E: Convergence of the conditional Kaplan-Meier estimate under strong mixing. J. Stat. Plan. Inference 1995, 44: 359–369. 10.1016/0378-3758(94)00084-9
  11. Cai ZW: Estimating a distribution function for censored time series data. J. Multivar. Anal. 2001, 78: 299–318. 10.1006/jmva.2000.1953
  12. Liang HY, Uña-Álvarez J: A Berry-Esseen type bound in kernel density estimation for strong mixing censored samples. J. Multivar. Anal. 2009, 100: 1219–1231. 10.1016/j.jmva.2008.11.001
  13. Joag-Dev K, Proschan F: Negative association of random variables with applications. Ann. Stat. 1983, 11(1): 286–295. 10.1214/aos/1176346079
  14. Matula PA: A note on the almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 1992, 15: 209–213. 10.1016/0167-7152(92)90191-7
  15. Wu QY, Jiang YY: A law of the iterated logarithm of partial sums for NA random variables. J. Korean Stat. Soc. 2010, 39(2): 199–206. 10.1016/j.jkss.2009.06.001
  16. Wu QY, Jiang YY: Chover's law of the iterated logarithm for NA sequences. J. Syst. Sci. Complex. 2010, 23(2): 293–302. 10.1007/s11424-010-7258-y
  17. Yang SC: Consistency of nearest neighbor estimator of density function for negative associated samples. Acta Math. Appl. Sin. 2003, 26(3): 385–394.

Copyright

© Wu and Chen; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.