
An almost sure central limit theorem of products of partial sums for ρ⁻-mixing sequences

Abstract

Let $\{X_n, n \ge 1\}$ be a strictly stationary ρ⁻-mixing sequence of positive random variables with $EX_1 = \mu > 0$ and $\mathrm{Var}(X_1) = \sigma^2 < \infty$. Denote $S_n = \sum_{i=1}^{n} X_i$ and $\gamma = \sigma/\mu$ the coefficient of variation. Under suitable conditions, using a central limit theorem for weighted sums and a moment inequality, we show that

$$\forall x,\quad \lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\, I\left\{\left(\prod_{i=1}^{k}\frac{S_i}{i\mu}\right)^{1/(\gamma\sigma_k)} \le x\right\} = F(x)\quad \text{a.s.},$$

where $\sigma_k^2 = \mathrm{Var}(S_{k,k})$, $S_{k,k} = \sum_{i=1}^{k} b_{i,k} Y_i$, $b_{i,k} = \sum_{j=i}^{k}\frac{1}{j}$ for $i \le k$ (with $b_{i,k} = 0$ for $i > k$), $Y_i = \frac{X_i - \mu}{\sigma}$, $F(x)$ is the distribution function of the random variable $e^{\sqrt{2}N}$, and $N$ is a standard normal random variable.

MR(2000) Subject Classification: 60F15.

1 Introduction and main results

For a random variable $X$, define $\|X\|_p = (E|X|^p)^{1/p}$. For two nonempty disjoint sets $S, T \subset \mathbb{N}$, define $\mathrm{dist}(S,T) = \min\{|j-k|:\ j \in S,\ k \in T\}$. Let $\sigma(S)$ be the σ-field generated by $\{X_k, k \in S\}$, and define $\sigma(T)$ similarly. Let $\mathcal{C}$ denote the class of coordinatewise increasing functions. For any real number $x$, $x^+$ and $x^-$ denote its positive and negative parts, respectively (except in certain special notations such as $\rho^-(s)$, $\rho^-(S,T)$, etc.). For random variables $X$, $Y$, define

$$\rho^-(X,Y) = 0 \vee \sup\left\{\frac{\mathrm{Cov}(f(X), g(Y))}{\left(\mathrm{Var}\, f(X)\right)^{1/2}\left(\mathrm{Var}\, g(Y)\right)^{1/2}}\right\},$$

where the supremum is taken over all $f, g \in \mathcal{C}$ such that $E(f(X))^2 < \infty$ and $E(g(Y))^2 < \infty$.

A sequence $\{X_n, n \ge 1\}$ is called negatively associated (NA) if for every pair of disjoint subsets $S$, $T$ of $\mathbb{N}$,

$$\mathrm{Cov}\left(f(X_i, i \in S),\, g(X_j, j \in T)\right) \le 0,$$

whenever $f, g \in \mathcal{C}$.

A sequence $\{X_n, n \ge 1\}$ is called ρ*-mixing if

$$\rho^*(s) = \sup\left\{\rho(S,T):\ S, T \subset \mathbb{N},\ \mathrm{dist}(S,T) \ge s\right\} \to 0 \quad \text{as } s \to \infty,$$

where

$$\rho(S,T) = \sup\left\{\frac{\left|E(f - Ef)(g - Eg)\right|}{\|f - Ef\|_2\,\|g - Eg\|_2}:\ f \in L_2(\sigma(S)),\ g \in L_2(\sigma(T))\right\}.$$

A sequence $\{X_n, n \ge 1\}$ is called ρ⁻-mixing if

$$\rho^-(s) = \sup\left\{\rho^-(S,T):\ S, T \subset \mathbb{N},\ \mathrm{dist}(S,T) \ge s\right\} \to 0 \quad \text{as } s \to \infty,$$

where

$$\rho^-(S,T) = 0 \vee \sup\left\{\frac{\mathrm{Cov}\left(f(X_i, i \in S),\, g(X_j, j \in T)\right)}{\left(\mathrm{Var}\, f(X_i, i \in S)\right)^{1/2}\left(\mathrm{Var}\, g(X_j, j \in T)\right)^{1/2}}:\ f, g \in \mathcal{C}\right\}.$$

The concept of ρ⁻-mixing random variables was introduced in 1999 (see [1]). Clearly, ρ⁻-mixing random variables include NA and ρ*-mixing random variables, which have many applications. Their limit properties have attracted wide interest recently, and many results have been obtained, such as weak convergence theorems, central limit theorems for random fields, and Rosenthal-type moment inequalities; see [1–4]. Zhou [5] studied the almost sure central limit theorem for ρ⁻-mixing sequences under the conditions provided by Shao, namely the conditions of the central limit theorem together with the requirement that, for some $\varepsilon_0 > 0$, $\mathrm{Var}\left(\sum_{i=1}^{n}\frac{1}{i}f\left(\frac{S_i}{\sigma_i}\right)\right) = O\left(\log^{2-\varepsilon_0} n\right)$, where $f$ is a Lipschitz function. In this article, we study the almost sure central limit theorem of products of partial sums for ρ⁻-mixing sequences via a central limit theorem for weighted sums and a moment inequality.

Here and in the sequel, let $b_{k,n} = \sum_{i=k}^{n}\frac{1}{i}$ for $k \le n$, with $b_{k,n} = 0$ for $k > n$. Suppose $\{X_n, n \ge 1\}$ is a strictly stationary ρ⁻-mixing sequence of positive random variables with $EX_1 = \mu > 0$ and $\mathrm{Var}(X_1) = \sigma^2 < \infty$. Let $\tilde{S}_n = \sum_{k=1}^{n} Y_k$ and $S_{n,n} = \sum_{k=1}^{n} b_{k,n} Y_k$, where $Y_k = \frac{X_k - \mu}{\sigma}$, $k \ge 1$. Let $\sigma_n^2 = \mathrm{Var}(S_{n,n})$, and let $C$ denote a positive constant, which may take different values at different appearances. The following are our main results.
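
As a concrete illustration of this notation (an illustrative sketch only, not part of the paper's argument), the following Python snippet computes the weights $b_{k,n}$, the sums $\tilde{S}_n$ and $S_{n,n}$, and a Monte Carlo estimate of $\sigma_n^2 = \mathrm{Var}(S_{n,n})$ for an assumed i.i.d. exponential sample, a simple special case of a strictly stationary ρ⁻-mixing sequence of positive random variables.

```python
import numpy as np

def b_weights(n):
    # b_{k,n} = 1/k + 1/(k+1) + ... + 1/n, for k = 1, ..., n
    inv = 1.0 / np.arange(1, n + 1)
    return np.cumsum(inv[::-1])[::-1]

def s_tilde_and_s_nn(x, mu, sigma):
    # Y_k = (X_k - mu)/sigma,  S~_n = sum_k Y_k,  S_{n,n} = sum_k b_{k,n} Y_k
    y = (x - mu) / sigma
    return y.sum(), np.dot(b_weights(len(x)), y)

rng = np.random.default_rng(0)
n, reps, mu, sigma = 500, 2000, 2.0, 2.0            # Exp(scale=2) has mean 2 and sd 2
samples = rng.exponential(scale=2.0, size=(reps, n))
s_nn = np.array([s_tilde_and_s_nn(row, mu, sigma)[1] for row in samples])
print(s_nn.var(), (b_weights(n) ** 2).sum())        # Monte Carlo Var(S_{n,n}) vs exact i.i.d. value
```

In this independent special case the two printed numbers should nearly agree; the exact value $\sum_{k=1}^{n} b_{k,n}^2$ equals $2n - b_{1,n}$ by Lemma 2.3 below.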

Theorem 1.1 Let $\{X_n, n \ge 1\}$ be defined as above with $0 < E|X_1|^r < \infty$ for some $r > 2$, denote $S_n = \sum_{i=1}^{n} X_i$ and $\gamma = \sigma/\mu$ the coefficient of variation. Assume that

(a1) $\sigma_1^2 = EX_1^2 + 2\sum_{n=2}^{\infty}\mathrm{Cov}(X_1, X_n) > 0$,

(a2) $\sum_{n=2}^{\infty}\left|\mathrm{Cov}(X_1, X_n)\right| < \infty$,

(a3) $\rho^-(n) = O(\log^{-\delta} n)$ for some $\delta > 1$,

(a4) $\inf_{n \in \mathbb{N}} \frac{\sigma_n^2}{n} > 0$.

Then

$$\forall x,\quad \lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\, I\left\{\frac{S_{k,k}}{\sigma_k} \le x\right\} = \Phi(x) \quad \text{a.s.}$$
(1.1)

Here and in the sequel, $I\{\cdot\}$ denotes the indicator function and $\Phi(\cdot)$ is the distribution function of a standard normal random variable $N$.
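
To make (1.1) tangible, the following rough Monte Carlo sketch evaluates the logarithmic average on the left-hand side of (1.1) for a single simulated path and compares it with $\Phi(x)$; it assumes an i.i.d. exponential sequence (a simple special case of the theorem's hypotheses) and uses $\sigma_k^2 = \sum_{i=1}^{k} b_{i,k}^2$, which is the exact variance of $S_{k,k}$ when the $Y_i$ are uncorrelated. Since the averaging is logarithmic, agreement at moderate $n$ is only approximate.

```python
import numpy as np
from math import erf, log, sqrt

def b_weights(n):
    # b_{i,n} = sum_{j=i}^{n} 1/j
    inv = 1.0 / np.arange(1, n + 1)
    return np.cumsum(inv[::-1])[::-1]

def log_average(y, x0):
    """(1/log n) * sum_{k<=n} (1/k) * I{ S_{k,k}/sigma_k <= x0 } for one standardized path y."""
    n = len(y)
    total = 0.0
    for k in range(1, n + 1):
        b = b_weights(k)
        s_kk = np.dot(b, y[:k])
        sigma_k = sqrt((b ** 2).sum())   # Var(S_{k,k}) for uncorrelated standardized Y_i
        total += (s_kk / sigma_k <= x0) / k
    return total / log(n)

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=3000)   # i.i.d. positive sequence with mean 2, sd 2
y = (x - 2.0) / 2.0
x0 = 0.5
print(log_average(y, x0), 0.5 * (1 + erf(x0 / sqrt(2))))   # left side of (1.1) vs Phi(0.5)
```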

Theorem 1.2 Under the conditions of Theorem 1.1, we have

$$\forall x,\quad \lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\, I\left\{\left(\prod_{i=1}^{k}\frac{S_i}{i\mu}\right)^{1/(\gamma\sigma_k)} \le x\right\} = F(x) \quad \text{a.s.}$$
(1.2)

Here and in the sequel, $F(\cdot)$ is the distribution function of the random variable $e^{\sqrt{2}N}$.

2 Some lemmas

To prove our main results, we need the following lemmas.

Lemma 2.1 [3] Let $\{X_n, n \ge 1\}$ be a weakly stationary ρ⁻-mixing sequence with $EX_n = 0$, $0 < EX_1^2 < \infty$, and

(i) $\sigma_1^2 = EX_1^2 + 2\sum_{n=2}^{\infty}\mathrm{Cov}(X_1, X_n) > 0$,

(ii) $\sum_{n=2}^{\infty}\left|\mathrm{Cov}(X_1, X_n)\right| < \infty$,

then

$$\frac{ES_n^2}{n} \to \sigma_1^2, \qquad \frac{S_n}{\sigma_1\sqrt{n}} \xrightarrow{d} N(0,1) \quad \text{as } n \to \infty.$$

Lemma 2.2 [4] For a positive real number $q \ge 2$, if $\{X_n, n \ge 1\}$ is a sequence of ρ⁻-mixing random variables with $EX_i = 0$ and $E|X_i|^q < \infty$ for every $i \ge 1$, then for all $n \ge 1$ there is a positive constant $C = C(q, \rho^-(\cdot))$ such that

$$E\left(\max_{1\le j\le n}\left|S_j\right|^q\right) \le C\left(\sum_{i=1}^{n}E|X_i|^q + \left(\sum_{i=1}^{n}EX_i^2\right)^{q/2}\right).$$

Lemma 2.3 [6] $\sum_{i=1}^{n} b_{i,n}^2 = 2n - b_{1,n}$.
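
For the reader's convenience, this identity can be checked directly from the definition $b_{i,n} = \sum_{j=i}^{n}\frac{1}{j}$ (a short calculation, not reproduced from [6]): interchanging the order of summation,

$$\sum_{i=1}^{n} b_{i,n}^2 = \sum_{i=1}^{n}\sum_{j=i}^{n}\sum_{l=i}^{n}\frac{1}{jl} = \sum_{j=1}^{n}\sum_{l=1}^{n}\frac{\min(j,l)}{jl} = \sum_{j=1}^{n}\frac{1}{j} + 2\sum_{l=2}^{n}\frac{l-1}{l} = b_{1,n} + 2(n-1) - 2\left(b_{1,n}-1\right) = 2n - b_{1,n}.$$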

Lemma 2.4 [[3], Theorem 3.2] Let $\{X_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of centered random variables with $EX_{ni}^2 < \infty$ for each $i = 1, 2, \ldots, n$. Assume that they are ρ⁻-mixing. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of real numbers with $a_{ni} = \pm 1$ for $i = 1, 2, \ldots, n$. Denote $\sigma_n^2 = \mathrm{Var}\left(\sum_{i=1}^{n} a_{ni}X_{ni}\right)$ and suppose that

$$\sup_n \frac{1}{\sigma_n^2}\sum_{i=1}^{n} EX_{ni}^2 < \infty,$$

and

$$\limsup_{n\to\infty}\frac{1}{\sigma_n^2}\sum_{\substack{i,j:\,|i-j|\ge A\\ 1\le i,j\le n}}\left|\mathrm{Cov}(X_{ni}, X_{nj})\right| \to 0 \quad \text{as } A \to \infty,$$

and the following Lindeberg condition is satisfied:

$$\frac{1}{\sigma_n^2}\sum_{i=1}^{n} EX_{ni}^2\, I\left\{\left|X_{ni}\right| \ge \varepsilon\sigma_n\right\} \to 0 \quad \text{as } n \to \infty$$

for every ε > 0. Then

$$\frac{1}{\sigma_n}\sum_{i=1}^{n} a_{ni}X_{ni} \xrightarrow{d} N(0,1) \quad \text{as } n \to \infty.$$

Lemma 2.5 Let $\{X_n, n \ge 1\}$ be a strictly stationary sequence of ρ⁻-mixing random variables with $EX_n = 0$ and $\sum_{n=2}^{\infty}\left|\mathrm{Cov}(X_1, X_n)\right| < \infty$, and let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of real numbers such that $\sup_n \sum_{i=1}^{n} a_{ni}^2 < \infty$ and $\max_{1\le i\le n}|a_{ni}| \to 0$ as $n \to \infty$. If $\mathrm{Var}\left(\sum_{i=1}^{n} a_{ni}X_i\right) = 1$ and $\{X_n^2\}$ is a uniformly integrable family, then

$$\sum_{i=1}^{n} a_{ni}X_i \xrightarrow{d} N(0,1) \quad \text{as } n \to \infty.$$

Proof Notice that

$$\sum_{i=1}^{n} a_{ni}X_i = \sum_{i=1}^{n}\frac{a_{ni}}{|a_{ni}|}\,|a_{ni}|X_i =: \sum_{i=1}^{n} b_{ni}Y_{ni},$$

where $b_{ni} = \frac{a_{ni}}{|a_{ni}|}$ and $Y_{ni} = |a_{ni}|X_i$. Then $\{Y_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of ρ⁻-mixing centered random variables with $EY_{ni}^2 = a_{ni}^2 EX_i^2 < \infty$ and $b_{ni} = \pm 1$ for $i = 1, 2, \ldots, n$, and $\sigma_n^2 = \mathrm{Var}\left(\sum_{i=1}^{n} b_{ni}Y_{ni}\right) = 1$. Since $\{X_n^2\}$ is a uniformly integrable family, we have

$$\sup_n \frac{1}{\sigma_n^2}\sum_{i=1}^{n} EY_{ni}^2 = \sup_n \sum_{i=1}^{n} a_{ni}^2 EX_i^2 \le \sup_n \sum_{i=1}^{n} a_{ni}^2 \cdot \sup_i EX_i^2 < \infty,$$

and

$$\begin{aligned} \limsup_{n\to\infty}\frac{1}{\sigma_n^2}\sum_{\substack{i,j:\,|i-j|\ge A\\ 1\le i,j\le n}}\left|\mathrm{Cov}(Y_{ni}, Y_{nj})\right| &= \limsup_{n\to\infty}\sum_{\substack{i,j:\,|i-j|\ge A\\ 1\le i,j\le n}}\left|\mathrm{Cov}(|a_{ni}|X_i, |a_{nj}|X_j)\right| \le \limsup_{n\to\infty}\sum_{\substack{i,j:\,|i-j|\ge A\\ 1\le i,j\le n}}|a_{ni}||a_{nj}|\left|\mathrm{Cov}(X_i, X_j)\right| \\ &\le C\limsup_{n\to\infty}\left(\sum_{\substack{i,j:\,|i-j|\ge A\\ 1\le i,j\le n}}|a_{ni}|^2\left|\mathrm{Cov}(X_i, X_j)\right| + \sum_{\substack{i,j:\,|i-j|\ge A\\ 1\le i,j\le n}}|a_{nj}|^2\left|\mathrm{Cov}(X_i, X_j)\right|\right) \\ &\le C\sup_n\sum_{i=1}^{n}|a_{ni}|^2\sum_{i>A}\left|\mathrm{Cov}(X_1, X_i)\right| \to 0 \quad \text{as } A \to \infty, \end{aligned}$$

and, for every $\varepsilon > 0$, we get

$$\begin{aligned} \frac{1}{\sigma_n^2}\sum_{i=1}^{n} EY_{ni}^2\, I\left\{\left|Y_{ni}\right| \ge \varepsilon\sigma_n\right\} &= \sum_{i=1}^{n} a_{ni}^2\, EX_i^2\, I\left\{\left|a_{ni}X_i\right| \ge \varepsilon\right\} = \sum_{i=1}^{n} a_{ni}^2\, EX_1^2\, I\left\{\left|a_{ni}X_1\right| \ge \varepsilon\right\} \\ &\le \left(\sup_n\sum_{i=1}^{n} a_{ni}^2\right) EX_1^2\, I\left\{\max_{1\le i\le n}|a_{ni}|\,|X_1| \ge \varepsilon\right\} \to 0 \quad \text{as } n \to \infty, \end{aligned}$$

thus the conclusion is proved by Lemma 2.4.

Lemma 2.6 [2] Suppose that $f_1(x)$ and $f_2(y)$ are real, bounded, absolutely continuous functions on $\mathbb{R}$ with $|f_1'(x)| \le C_1$ and $|f_2'(y)| \le C_2$. Then for any random variables $X$ and $Y$,

$$\left|\mathrm{Cov}\left(f_1(X), f_2(Y)\right)\right| \le C_1C_2\left[-\mathrm{Cov}(X,Y) + 8\rho^-(X,Y)\,\|X\|_{2,1}\,\|Y\|_{2,1}\right],$$

where $\|X\|_{2,1} = \int_0^{\infty} P^{1/2}\left(|X| > x\right)\,dx$.

Lemma 2.7 Let $\{X_n, n \ge 1\}$ be a strictly stationary sequence of ρ⁻-mixing random variables with $EX_1 = 0$, $0 < EX_1^2 < \infty$ and

$$0 < \sigma_1^2 = EX_1^2 + 2\sum_{n=2}^{\infty}\mathrm{Cov}(X_1, X_n) < \infty, \qquad \sum_{n=2}^{\infty}\left|\mathrm{Cov}(X_1, X_n)\right| < \infty,$$

then for 0 < p < 2, we have

$$n^{-1/p}S_n \to 0 \quad \text{a.s. as } n \to \infty.$$

Proof By Lemma 2.1, we have

$$\lim_{n\to\infty}\frac{ES_n^2}{n} = \sigma_1^2.$$
(2.1)

Let $n_k = k^{\alpha}$, where $\alpha > \max\left\{1, \frac{p}{2-p}\right\}$. By (2.1), we get

$$\sum_{k=1}^{\infty} P\left(\left|S_{n_k}\right| \ge \varepsilon n_k^{1/p}\right) \le \sum_{k=1}^{\infty}\frac{ES_{n_k}^2}{\varepsilon^2 n_k^{2/p}} \le \sum_{k=1}^{\infty}\frac{C}{\varepsilon^2 k^{\alpha\left(\frac{2}{p}-1\right)}} < \infty.$$
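
(For completeness, the exponent in the last series indeed exceeds one: since $0 < p < 2$ and $\alpha > \frac{p}{2-p}$,

$$\alpha\left(\frac{2}{p}-1\right) = \alpha\cdot\frac{2-p}{p} > \frac{p}{2-p}\cdot\frac{2-p}{p} = 1,$$

so the series converges.)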

From the Borel–Cantelli lemma, it follows that

$$n_k^{-1/p}S_{n_k} \to 0 \quad \text{a.s. as } k \to \infty.$$
(2.2)

And by Lemma 2.2, it follows that

$$\begin{aligned} \sum_{k=1}^{\infty} P\left(\max_{n_k\le n<n_{k+1}}\left|S_n - S_{n_k}\right| \ge n_k^{1/p}\varepsilon\right) &\le \sum_{k=1}^{\infty}\frac{E\left(\max_{n_k\le n<n_{k+1}}\left|S_n - S_{n_k}\right|\right)^2}{\varepsilon^2 n_k^{2/p}} = \sum_{k=1}^{\infty}\frac{E\left(\max_{n_k\le n<n_{k+1}}\left|\sum_{i=n_k+1}^{n} X_i\right|\right)^2}{\varepsilon^2 n_k^{2/p}} \\ &\le C\sum_{k=1}^{\infty}\frac{n_{k+1}-n_k}{\varepsilon^2 n_k^{2/p}} \le C\sum_{k=1}^{\infty}\frac{1}{k^{\alpha\left(\frac{2}{p}-1\right)}} < \infty. \end{aligned}$$

By the Borel–Cantelli lemma, we conclude that

$$\max_{n_k\le n<n_{k+1}}\frac{\left|S_n - S_{n_k}\right|}{n^{1/p}} \to 0 \quad \text{a.s. as } k \to \infty.$$
(2.3)

For every $n$, there exist $n_k$ and $n_{k+1}$ such that $n_k \le n < n_{k+1}$; by (2.2) and (2.3), we have

$$\frac{|S_n|}{n^{1/p}} = \frac{\left|S_n - S_{n_k} + S_{n_k}\right|}{n^{1/p}} \le \frac{\left|S_{n_k}\right|}{n_k^{1/p}} + \max_{n_k\le n<n_{k+1}}\frac{\left|S_n - S_{n_k}\right|}{n^{1/p}} \to 0 \quad \text{a.s. as } n \to \infty.$$

The proof is now complete.

3 Proof of the theorems

Proof of Theorem 1.1 By the properties of ρ⁻-mixing sequences, it is easy to see that $\{Y_n\}$ is a strictly stationary ρ⁻-mixing sequence with $EY_1 = 0$ and $EY_1^2 = 1$. We first prove

$$\frac{S_{n,n}}{\sigma_n} \xrightarrow{d} N(0,1) \quad \text{as } n \to \infty.$$
(3.1)

Let $a_{ni} = \frac{b_{i,n}}{\sigma_n}$, $1 \le i \le n$, $n \ge 1$. Obviously,

$$\mathrm{Var}\left(\sum_{i=1}^{n} a_{ni}Y_i\right) = 1.$$

From condition (a4) in Theorem 1.1 and Lemma 2.3, we have

$$\sup_n \sum_{i=1}^{n} a_{ni}^2 = \sup_n \frac{\sum_{i=1}^{n} b_{i,n}^2}{\sigma_n^2} = \sup_n \frac{2n - b_{1,n}}{\sigma_n^2} \le C\sup_n \frac{2n - b_{1,n}}{n} < \infty,$$

and

$$\max_{1\le i\le n}|a_{ni}| = \max_{1\le i\le n}\frac{b_{i,n}}{\sigma_n} = \frac{b_{1,n}}{\sigma_n} \le C\frac{\log n}{\sqrt{n}} \to 0 \quad \text{as } n \to \infty.$$

By the stationarity of $\{Y_n, n \ge 1\}$ and $E|X_1|^2 < \infty$, we know that $\{Y_n^2\}$ is uniformly integrable, and from condition (a2) in Theorem 1.1 we get $\sum_{n=2}^{\infty}\left|\mathrm{Cov}(Y_1, Y_n)\right| < \infty$; so, applying Lemma 2.5, we have

$$\sum_{i=1}^{n} a_{ni}Y_i \xrightarrow{d} N(0,1).$$

Notice that

$$\sum_{i=1}^{n} a_{ni}Y_i = \frac{\sum_{i=1}^{n} b_{i,n}Y_i}{\sigma_n} = \frac{S_{n,n}}{\sigma_n},$$

so (3.1) is valid. Let $f(x)$ be a bounded Lipschitz function with a Radon–Nikodym derivative $h(x)$ bounded by $\Gamma$. From (3.1), we have

$$Ef\left(\frac{S_{n,n}}{\sigma_n}\right) \to Ef(N(0,1)) \quad \text{as } n \to \infty,$$

thus

$$\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\left[Ef\left(\frac{S_{k,k}}{\sigma_k}\right) - Ef(N(0,1))\right] \to 0 \quad \text{as } n \to \infty.$$
(3.2)

On the other hand, note that (1.1) is equivalent to

$$\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,f\left(\frac{S_{k,k}}{\sigma_k}\right) = \int_{-\infty}^{\infty} f(x)\,d\Phi(x) = Ef(N(0,1)) \quad \text{a.s.}$$
(3.3)

by Section 2 of Peligrad and Shao [7] and Theorem 7.1 on p. 42 of Billingsley [8]. Hence, to prove (3.3), it suffices to show that

$$T_n = \frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\left[f\left(\frac{S_{k,k}}{\sigma_k}\right) - Ef\left(\frac{S_{k,k}}{\sigma_k}\right)\right] \to 0 \quad \text{a.s. as } n \to \infty$$
(3.4)

by (3.2). Let $\xi_k = f\left(\frac{S_{k,k}}{\sigma_k}\right) - Ef\left(\frac{S_{k,k}}{\sigma_k}\right)$, $1 \le k \le n$. We have

$$ET_n^2 = \frac{1}{\log^2 n}E\left(\sum_{k=1}^{n}\frac{\xi_k}{k}\right)^2 \le \frac{2}{\log^2 n}\sum_{\substack{1\le k\le l\le n\\ l\le 2k}}\frac{\left|E\xi_k\xi_l\right|}{kl} + \frac{2}{\log^2 n}\sum_{\substack{1\le k\le l\le n\\ l>2k}}\frac{\left|E\xi_k\xi_l\right|}{kl} := I_1 + I_2.$$
(3.5)

By the fact that f is bounded, we have

$$I_1 \le \frac{C}{\log^2 n}\sum_{k=1}^{n}\sum_{l=k}^{2k}\frac{1}{kl} = \frac{C}{\log^2 n}\sum_{k=1}^{n}\frac{1}{k}\sum_{l=k}^{2k}\frac{1}{l} \le \frac{C}{\log n}.$$
(3.6)

Now we estimate $I_2$. If $l > 2k$, we have

$$S_{l,l} - S_{2k,2k} = \left(b_{1,l}Y_1 + b_{2,l}Y_2 + \cdots + b_{l,l}Y_l\right) - \left(b_{1,2k}Y_1 + b_{2,2k}Y_2 + \cdots + b_{2k,2k}Y_{2k}\right) = b_{2k+1,l}Y_{2k+1} + \cdots + b_{l,l}Y_l + b_{2k+1,l}\tilde{S}_{2k},$$

and

$$\left|E\xi_k\xi_l\right| = \left|\mathrm{Cov}\left(f\left(\frac{S_{k,k}}{\sigma_k}\right), f\left(\frac{S_{l,l}}{\sigma_l}\right)\right)\right| \le \left|\mathrm{Cov}\left(f\left(\frac{S_{k,k}}{\sigma_k}\right), f\left(\frac{S_{l,l}}{\sigma_l}\right) - f\left(\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right)\right)\right| + \left|\mathrm{Cov}\left(f\left(\frac{S_{k,k}}{\sigma_k}\right), f\left(\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right)\right)\right|.$$

By Lemma 2.3 and condition (a2) in Theorem 1.1, we have

$$\mathrm{Var}(S_{k,k}) = \sum_{i=1}^{k} b_{i,k}^2\, EY_i^2 + 2\sum_{j=1}^{k-1}\sum_{i=j+1}^{k} b_{i,k}b_{j,k}\,\mathrm{Cov}(Y_i, Y_j) \le \sum_{i=1}^{k} b_{i,k}^2 + 2\sum_{j=1}^{k} b_{j,k}^2\sum_{i=j+1}^{k}\left|\mathrm{Cov}(Y_i, Y_j)\right| \le Ck,$$
(3.7)

and

$$\mathrm{Var}\left(S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}\right) = \sum_{i=2k+1}^{l} b_{i,l}^2\, EY_i^2 + 2\sum_{j=2k+1}^{l-1}\sum_{i=j+1}^{l} b_{i,l}b_{j,l}\,\mathrm{Cov}(Y_i, Y_j) \le \sum_{i=2k+1}^{l} b_{i,l}^2 + 2\sum_{j=1}^{l} b_{j,l}^2\sum_{i=j+1}^{l}\left|\mathrm{Cov}(Y_i, Y_j)\right| \le Cl.$$

By Lemma 2.6, the definition of ρ⁻-mixing sequences and condition (a4), we have

$$\begin{aligned} &\left|\mathrm{Cov}\left(f\left(\frac{S_{k,k}}{\sigma_k}\right), f\left(\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right)\right)\right| \\ &\quad \le C\left[-\mathrm{Cov}\left(\frac{S_{k,k}}{\sigma_k}, \frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right) + 8\rho^-\left(\frac{S_{k,k}}{\sigma_k}, \frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right)\left\|\frac{S_{k,k}}{\sigma_k}\right\|_{2,1}\left\|\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right\|_{2,1}\right] \\ &\quad \le C\rho^-(k)\left(\mathrm{Var}\,\frac{S_{k,k}}{\sigma_k}\right)^{1/2}\left(\mathrm{Var}\,\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right)^{1/2} + C\rho^-(k)\left\|\frac{S_{k,k}}{\sigma_k}\right\|_{2,1}\left\|\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right\|_{2,1} \\ &\quad \le C\rho^-(k) + C\rho^-(k)\left\|\frac{S_{k,k}}{\sigma_k}\right\|_{2,1}\left\|\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right\|_{2,1}. \end{aligned}$$

By the inequality $\|X\|_{2,1} \le \frac{r}{r-2}\|X\|_r$ ($r > 2$) (cf. Zhang [[2], p. 254] or Ledoux and Talagrand [[9], p. 251]), we get

$$\left\|\frac{S_{k,k}}{\sigma_k}\right\|_{2,1} \le \frac{r}{r-2}\left\|\frac{S_{k,k}}{\sigma_k}\right\|_r = \frac{r}{r-2}\cdot\frac{1}{\sigma_k}\left(E\left|S_{k,k}\right|^r\right)^{1/r},$$

and

$$E\left|S_{k,k}\right|^r = E\left|\sum_{j=1}^{k} b_{j,k}Y_j\right|^r \le C\left(\sum_{j=1}^{k} b_{j,k}^r\, E|Y_j|^r + \left(\sum_{j=1}^{k} b_{j,k}^2\, EY_j^2\right)^{r/2}\right) \le C\left(k\log^r k + k^{r/2}\right),$$

thus

$$\left\|\frac{S_{k,k}}{\sigma_k}\right\|_{2,1} \le C\,\frac{r}{r-2}\,\frac{\log k}{k^{\frac{1}{2}-\frac{1}{r}}} + \frac{r}{r-2} \le C,$$

similarly,

$$\left\|\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right\|_{2,1} \le C,$$

hence

$$\left|\mathrm{Cov}\left(f\left(\frac{S_{k,k}}{\sigma_k}\right), f\left(\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right)\right)\right| \le C\rho^-(k).$$

Similarly to (3.7), we have

$$\mathrm{Var}\left(S_{2k,2k}\right) = \sum_{i=1}^{2k} b_{i,2k}^2\, EY_i^2 + 2\sum_{j=1}^{2k-1}\sum_{i=j+1}^{2k} b_{i,2k}b_{j,2k}\,\mathrm{Cov}(Y_i, Y_j) \le \sum_{i=1}^{2k} b_{i,2k}^2 + 2\sum_{j=1}^{2k-1} b_{j,2k}^2\sum_{i=j+1}^{2k}\left|\mathrm{Cov}(Y_i, Y_j)\right| \le Ck,$$

and

$$\mathrm{Var}\left(\tilde{S}_{2k}\right) = \mathrm{Var}\left(\sum_{i=1}^{2k} Y_i\right) = \sum_{i=1}^{2k} EY_i^2 + 2\sum_{i=1}^{2k-1}\sum_{j=i+1}^{2k}\mathrm{Cov}(Y_i, Y_j) = 2k + 2\sum_{i=1}^{2k-1}\sum_{j=2}^{2k-i+1}\mathrm{Cov}(Y_1, Y_j) \le Ck.$$

Since f is a bounded Lipschitz function, we have

$$\left|\mathrm{Cov}\left(f\left(\frac{S_{k,k}}{\sigma_k}\right), f\left(\frac{S_{l,l}}{\sigma_l}\right) - f\left(\frac{S_{l,l} - S_{2k,2k} - b_{2k+1,l}\tilde{S}_{2k}}{\sigma_l}\right)\right)\right| \le C\,\frac{E\left|S_{2k,2k} + b_{2k+1,l}\tilde{S}_{2k}\right|}{\sigma_l} \le C\,\frac{\left(\mathrm{Var}\,S_{2k,2k}\right)^{1/2}}{\sigma_l} + C\,\frac{b_{2k+1,l}\left(\mathrm{Var}\,\tilde{S}_{2k}\right)^{1/2}}{\sigma_l} \le C\left(\frac{k}{l}\right)^{1/2} + C\left(\frac{k}{l}\right)^{1/2}\log\frac{l}{2k} \le C\left(\frac{k}{l}\right)^{\varepsilon},$$

where $0 < \varepsilon < \frac{1}{2}$. Hence, if $l > 2k$, we have

$$\left|E\xi_k\xi_l\right| \le C\left(\rho^-(k) + \left(\frac{k}{l}\right)^{\varepsilon}\right).$$

Thus

$$\begin{aligned} I_2 &\le \frac{C}{\log^2 n}\sum_{l=2}^{n}\sum_{k=1}^{l-1}\frac{1}{k^{1-\varepsilon}l^{1+\varepsilon}} + \frac{C}{\log^2 n}\sum_{l=2}^{n}\frac{1}{l}\sum_{k=1}^{l-1}\frac{\rho^-(k)}{k} \le \frac{C}{\log^2 n}\sum_{l=2}^{n}\frac{1}{l^{1+\varepsilon}}\cdot\frac{(l-1)^{\varepsilon}}{\varepsilon} + \frac{C}{\log^2 n}\sum_{l=2}^{n}\frac{1}{l}\sum_{k=2}^{n}\frac{\log^{-\delta}k}{k} \\ &\le \frac{C}{\log^2 n}\sum_{l=2}^{n}\frac{1}{l} + \frac{C}{\log^2 n}\sum_{l=2}^{n}\frac{1}{l}\sum_{k=2}^{n}\frac{\log^{-\delta}k}{k} \le \frac{C}{\log n}. \end{aligned}$$
(3.8)

Combining (3.5), (3.6), and (3.8), we have

$$ET_n^2 \le C\log^{-1} n.$$
(3.9)

To prove (3.4), let $n_k = e^{k^{\tau}}$, where $\tau > 1$. From (3.9), we have

$$\sum_{k=1}^{\infty} ET_{n_k}^2 \le C\sum_{k=1}^{\infty}\log^{-1} n_k = C\sum_{k=1}^{\infty}\frac{1}{k^{\tau}} < \infty.$$

Thus, for every $\varepsilon > 0$, we have

$$\sum_{k=1}^{\infty} P\left(\left|T_{n_k}\right| \ge \varepsilon\right) \le \sum_{k=1}^{\infty}\frac{ET_{n_k}^2}{\varepsilon^2} < \infty.$$

By the Borel–Cantelli lemma, we have

$$T_{n_k} \to 0 \quad \text{a.s. as } k \to \infty.$$

Note that

$$\frac{\log n_{k+1}}{\log n_k} = \frac{(k+1)^{\tau}}{k^{\tau}} \to 1 \quad \text{as } k \to \infty.$$

For every $n$, there exist $n_k$ and $n_{k+1}$ satisfying $n_k < n \le n_{k+1}$; then we have

$$\left|T_n\right| \le \frac{1}{\log n_k}\left|\sum_{i=1}^{n_k}\frac{\xi_i}{i}\right| + \frac{1}{\log n_k}\sum_{i=n_k}^{n_{k+1}}\frac{|\xi_i|}{i} \le \left|T_{n_k}\right| + C\left(\frac{\log n_{k+1}}{\log n_k} - 1\right) \to 0 \quad \text{a.s. as } n \to \infty,$$

so (3.4) holds, and the proof of Theorem 1.1 is complete.

Proof of Theorem 1.2 Let $C_i = \frac{S_i}{\mu i}$; we have

$$\frac{1}{\gamma\sigma_k}\sum_{i=1}^{k}\left(C_i - 1\right) = \frac{1}{\gamma\sigma_k}\sum_{i=1}^{k}\left(\frac{S_i}{\mu i} - 1\right) = \frac{1}{\sigma_k}\sum_{i=1}^{k} b_{i,k}Y_i = \frac{S_{k,k}}{\sigma_k}.$$
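
For the reader's convenience, the second equality (passing from $\sum_{i=1}^{k}\left(\frac{S_i}{\mu i}-1\right)$ to $\sum_{i=1}^{k} b_{i,k}Y_i$) follows by interchanging the order of summation:

$$\sum_{i=1}^{k}\left(\frac{S_i}{\mu i} - 1\right) = \sum_{i=1}^{k}\frac{1}{\mu i}\sum_{j=1}^{i}\left(X_j - \mu\right) = \sum_{j=1}^{k}\left(X_j - \mu\right)\sum_{i=j}^{k}\frac{1}{\mu i} = \frac{\sigma}{\mu}\sum_{j=1}^{k} b_{j,k}Y_j = \gamma\, S_{k,k}.$$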

Hence (1.1) is equivalent to

$$\forall x,\quad \lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\, I\left\{\frac{1}{\gamma\sigma_k}\sum_{i=1}^{k}\left(C_i - 1\right) \le x\right\} = \Phi(x) \quad \text{a.s.}$$
(3.10)

On the other hand, to prove (1.2), it suffices to show that

$$\forall x,\quad \lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\, I\left\{\frac{1}{\gamma\sigma_k}\sum_{i=1}^{k}\log C_i \le x\right\} = \Phi(x) \quad \text{a.s.}$$
(3.11)

By Lemma 2.7, for sufficiently large $i$ and some $\frac{4}{3} < p < 2$, we have

$$\left|C_i - 1\right| = \left|\frac{S_i}{\mu i} - 1\right| \le Ci^{\frac{1}{p}-1} \quad \text{a.s.}$$

It is well known that $\log(1+x) = x + O(x^2)$ for $|x| < \frac{1}{2}$; thus

$$\left|\sum_{k=1}^{n}\log C_k - \sum_{k=1}^{n}\left(C_k - 1\right)\right| \le C\sum_{k=1}^{n}\left(C_k - 1\right)^2 \le C\sum_{k=1}^{n} k^{\frac{2}{p}-2} \le Cn^{\frac{2}{p}-1} \quad \text{a.s.},$$

and

$$\sum_{k=1}^{n}\left(C_k - 1\right) - Cn^{\frac{2}{p}-1} \le \sum_{k=1}^{n}\log C_k \le \sum_{k=1}^{n}\left(C_k - 1\right) + Cn^{\frac{2}{p}-1} \quad \text{a.s.}$$

Hence, for arbitrarily small $\varepsilon > 0$, there is $n_0 = n_0(\omega, \varepsilon)$ such that for every $k > n_0$ and arbitrary $x$,

$$I\left\{\frac{1}{\gamma\sigma_k}\sum_{i=1}^{k}\left(C_i - 1\right) \le x - \varepsilon\right\} \le I\left\{\frac{1}{\gamma\sigma_k}\sum_{i=1}^{k}\log C_i \le x\right\} \le I\left\{\frac{1}{\gamma\sigma_k}\sum_{i=1}^{k}\left(C_i - 1\right) \le x + \varepsilon\right\},$$

so by (3.10) we know that (3.11) is true, and since (3.11) is equivalent to (1.2), the proof of Theorem 1.2 is complete.

References

  1. Zhang LX, Wang XY: Convergence rates in the strong laws of asymptotically negatively associated random fields. Appl Math J Chinese Univ Ser B 1999, 14(4):406–416. 10.1007/s11766-999-0070-6


  2. Zhang LX: A functional central limit theorem for asymptotically negatively dependent random fields. Acta Math Hungar 2000, 86(3):237–259. 10.1023/A:1006720512467


  3. Zhang LX: Central limit theorems for asymptotically negatively associated random fields. Acta Math Sinica 2000, 6(4):691–710.


  4. Wang JF, Lu FB: Inequalities of maximum of partial sums and weak convergence for a class of weak dependent random variables. Acta Math Sinica 2006, 22(3):693–700. 10.1007/s10114-005-0601-x


  5. Zhou H: Note on the almost sure central limit theorem for ρ--mixing sequences. J Zhejiang Univ Sci Ed 2005, 32(5):503–505.


  6. Khurelbaatar G, Rempala G: A note on the almost sure central limit theorem for the product of partial sums. Appl Math Lett 2006, 19(2):191–196. 10.1016/j.aml.2005.06.002


  7. Peligrad M, Shao QM: A note on the almost sure central limit theorem. Statist Probab Lett 1995, 22: 131–136. 10.1016/0167-7152(94)00059-H


  8. Billingsley P: Convergence of Probability Measures. Wiley, New York; 1968.


  9. Ledoux M, Talagrand M: Probability in Banach Spaces. Springer-Verlag, New York; 1991.



Acknowledgements

The authors are very grateful to the editor and anonymous referees for their careful reading of the manuscript and valuable suggestions, which helped in significantly improving an earlier version of this article. The study was supported by the National Natural Science Foundation of China (10926169, 11171003, 11101180), Key Project of Chinese Ministry of Education (211039), Foundation of Jilin Educational Committee of China (2012-158), and Basic Research Foundation of Jilin University (201001002, 201103204).

Author information


Correspondence to Xili Tan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

XT and YZ carried out the design of the study and performed the analysis. YZ participated in its design and coordination. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
