
Strong convergence properties for ψ-mixing random variables

Abstract

In this paper, by using the Rosenthal-type maximal inequality for ψ-mixing random variables, we obtain the Khintchine-Kolmogorov-type convergence theorem, which can be applied to establish the three series theorem and the Chung-type strong law of large numbers for ψ-mixing random variables. In addition, the strong stability of weighted sums of ψ-mixing random variables is studied, which generalizes the corresponding result for independent random variables.

MSC:60F15.

1 Introduction

Let $(\Omega, \mathcal{F}, P)$ be a fixed probability space. The random variables we deal with are all defined on $(\Omega, \mathcal{F}, P)$. Throughout the paper, let $I(A)$ be the indicator function of the set $A$. For a random variable $X$, denote $X^{(c)} = XI(|X| \le c)$ for some $c > 0$. Denote $\log^+ x = \ln\max(e, x)$. $C$ and $c$ denote positive constants, which may be different in various places.

Let $\{X_n, n \ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$, and let $S_n = \sum_{i=1}^n X_i$ for each $n \ge 1$. Let $n$ and $m$ be positive integers. Write $\mathcal{F}_n^m = \sigma(X_i, n \le i \le m)$. Given σ-algebras $\mathcal{B}$ and $\mathcal{R}$ in $\mathcal{F}$, let

$$\psi(\mathcal{B}, \mathcal{R}) = \sup_{A \in \mathcal{B},\, B \in \mathcal{R},\, P(A)P(B) > 0} \frac{|P(AB) - P(A)P(B)|}{P(A)P(B)}. \tag{1.1}$$

Define the mixing coefficients by

$$\psi(n) = \sup_{k \ge 1} \psi\bigl(\mathcal{F}_1^k, \mathcal{F}_{k+n}^\infty\bigr), \quad n \ge 0.$$

Definition 1.1 A sequence $\{X_n, n \ge 1\}$ of random variables is said to be a sequence of ψ-mixing random variables if $\psi(n) \to 0$ as $n \to \infty$.

The concept of ψ-mixing random variables was introduced by Blum et al. [1], and some applications have been found. See, for example, Blum et al. [1] for the strong law of large numbers, Yang [2] for almost sure convergence of weighted sums, Wu [3] for strong consistency of the M estimator in a linear model, Wang et al. [4] for a maximal inequality and a Hájek-Rényi-type inequality, the strong growth rate and the integrability of the supremum, Zhu et al. [5] for strong convergence properties, Pan et al. [6] for strong convergence of weighted sums, and so on. Compared with the corresponding results for sequences of independent random variables, much still remains to be done. The main purpose of this paper is to establish the Khintchine-Kolmogorov-type convergence theorem, which can be applied to obtain the three series theorem and the Chung-type strong law of large numbers for ψ-mixing random variables. In addition, we will study the strong stability of weighted sums of ψ-mixing random variables, which generalizes the corresponding result for independent random variables.

For independent and identically distributed random variable sequences, Jamison et al. [7] proved the following theorem.

Theorem A Let $\{X, X_n, n \ge 1\}$ be a sequence of independent and identically distributed random variables with common distribution function $F(x)$, and let $\{\omega_n, n \ge 1\}$ be a sequence of positive numbers. Write $W_n = \sum_{i=1}^n \omega_i$ and $N(x) = \operatorname{Card}\{n: W_n/\omega_n \le x\}$, $x > 0$. If

(i) $W_n \to \infty$ and $\omega_n W_n^{-1} \to 0$ as $n \to \infty$,

(ii) $E|X| < \infty$ and $EN(|X|) < \infty$,

(iii) $\int x^2 \Bigl(\int_{y \ge |x|} N(y)/y^3\,dy\Bigr)\,dF(x) < \infty$,

then

$$W_n^{-1} \sum_{i=1}^n \omega_i X_i \to c \quad \text{a.s.}, \tag{1.2}$$

where c is a constant.
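As a quick numerical illustration of Theorem A (this sketch is ours, not part of the paper), the snippet below takes i.i.d. exponential variables with $EX = 1$ and the hypothetical weights $\omega_n = \sqrt{n}$, for which conditions (i)-(iii) are easy to verify; the weighted averages $W_n^{-1}\sum_{i \le n} \omega_i X_i$ should then settle near $EX$.

```python
# Illustrative sketch only: i.i.d. exponential(1) data with weights w_n = sqrt(n),
# both chosen by us so that conditions (i)-(iii) of Theorem A are easy to check.
import numpy as np

rng = np.random.default_rng(2013)
n = 100_000
X = rng.exponential(scale=1.0, size=n)      # i.i.d. with E X = 1
w = np.sqrt(np.arange(1, n + 1))            # weights w_n = sqrt(n)
W = np.cumsum(w)                            # W_n = w_1 + ... + w_n
weighted_avg = np.cumsum(w * X) / W         # W_n^{-1} * sum_{i<=n} w_i X_i
for k in (10**2, 10**3, 10**4, 10**5):
    print(f"n = {k:>6d}   weighted average = {weighted_avg[k - 1]:.4f}")
# The printed values approach 1, in line with (1.2) with c = E X.
```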

The result of Theorem A for independent and identically distributed sequences has been generalized to some dependent sequences, such as negatively associated sequences, negatively superadditive dependent sequences, $\tilde{\rho}$-mixing sequences, $\tilde{\varphi}$-mixing sequences, and so forth. We will further study the strong stability of weighted sums of ψ-mixing random variables, which generalizes the corresponding result for independent sequences. The main results of the paper depend on the following important lemma, a Rosenthal-type maximal inequality for ψ-mixing random variables.

Lemma 1.1 (cf. Wang et al. [4])

Let $\{X_n, n \ge 1\}$ be a sequence of ψ-mixing random variables satisfying $\sum_{n=1}^\infty \psi(n) < \infty$, and let $q \ge 2$. Assume that $EX_n = 0$ and $E|X_n|^q < \infty$ for each $n \ge 1$. Then there exists a constant $C$ depending only on $q$ and $\psi(\cdot)$ such that

$$E\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=a+1}^{a+j} X_i\Bigr|^q\Bigr) \le C\Bigl[\sum_{i=a+1}^{a+n} E|X_i|^q + \Bigl(\sum_{i=a+1}^{a+n} EX_i^2\Bigr)^{q/2}\Bigr] \tag{1.3}$$

for every $a \ge 0$ and $n \ge 1$. In particular, we have

$$E\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_i\Bigr|^q\Bigr) \le C\Bigl[\sum_{i=1}^{n} E|X_i|^q + \Bigl(\sum_{i=1}^{n} EX_i^2\Bigr)^{q/2}\Bigr] \tag{1.4}$$

for every $n \ge 1$.

The following concept of stochastic domination will be used frequently throughout the paper.

Definition 1.2 A sequence $\{X_n, n \ge 1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a constant $C$ such that

$$P(|X_n| > x) \le CP(|X| > x) \tag{1.5}$$

for all $x \ge 0$ and $n \ge 1$.

By the definition of stochastic domination and integration by parts, we can get the following basic property for stochastic domination. For the proof, one can refer to Wang et al. [8], Tang [9] or Shen and Wu [10].

Lemma 1.2 Let $\{X_n, n \ge 1\}$ be a sequence of random variables which is stochastically dominated by a random variable $X$. For any $\alpha > 0$ and $b > 0$, the following statement holds:

$$E|X_n|^\alpha I(|X_n| \le b) \le C\bigl\{E|X|^\alpha I(|X| \le b) + b^\alpha P(|X| > b)\bigr\},$$

where C is a positive constant.

2 Khintchine-Kolmogorov-type convergence theorem

In this section, we will prove the Khintchine-Kolmogorov-type convergence theorem for ψ-mixing random variables. By using the Khintchine-Kolmogorov-type convergence theorem, we can get the three series theorem and the Chung-type strong law of large numbers for ψ-mixing random variables.

Theorem 2.1 (Khintchine-Kolmogorov-type convergence theorem)

Let $\{X_n, n \ge 1\}$ be a sequence of ψ-mixing random variables satisfying $\sum_{n=1}^\infty \psi(n) < \infty$. Assume that

$$\sum_{n=1}^\infty \operatorname{Var}(X_n) < \infty, \tag{2.1}$$

then $\sum_{n=1}^\infty (X_n - EX_n)$ converges a.s.

Proof Without loss of generality, we assume that $EX_n = 0$ for all $n \ge 1$. For any $\varepsilon > 0$, it can be checked that

$$\begin{aligned} P\Bigl(\sup_{k,m \ge n} |S_k - S_m| > \varepsilon\Bigr) &\le P\Bigl(\sup_{k \ge n} |S_k - S_n| > \frac{\varepsilon}{2}\Bigr) + P\Bigl(\sup_{m \ge n} |S_m - S_n| > \frac{\varepsilon}{2}\Bigr) \\ &\le 2\lim_{N \to \infty} P\Bigl(\max_{n \le k \le N} |S_k - S_n| > \frac{\varepsilon}{2}\Bigr) \\ &\le 2\lim_{N \to \infty} \frac{2}{(\varepsilon/2)^2} \sum_{i=n+1}^{N} \operatorname{Var}(X_i) \\ &= \frac{16}{\varepsilon^2} \sum_{i=n+1}^{\infty} \operatorname{Var}(X_i) \to 0, \quad n \to \infty, \end{aligned}$$

where the last inequality follows from Lemma 1.1. Thus, the sequence $\{S_n, n \ge 1\}$ is a.s. Cauchy, and, therefore, we can obtain the desired result immediately. This completes the proof of the theorem. □
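The following small simulation (ours, not from the paper) illustrates Theorem 2.1 in the simplest ψ-mixing case, namely independent variables, for which $\psi(n) = 0$: with $X_n \sim N(0, 1/n^2)$ the variances are summable, so the centered partial sums should stabilize along a sample path.

```python
# Illustrative sketch only: independent X_n ~ N(0, 1/n^2), a trivial ψ-mixing case,
# so that sum Var(X_n) < infinity as required by Theorem 2.1.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sigma = 1.0 / np.arange(1, n + 1)           # sd of X_n, so Var(X_n) = 1/n^2
X = rng.normal(0.0, sigma)                  # mean zero, hence already centered
S = np.cumsum(X)                            # partial sums of the series
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {k:>8d}   partial sum = {S[k - 1]:+.6f}")
# Along this path the partial sums settle down, i.e. the series converges.
```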

With the Khintchine-Kolmogorov-type convergence theorem in hand, we can get the three series theorem and the Chung-type strong law of large numbers for ψ-mixing random variables.

Theorem 2.2 (Three series theorem)

Let $\{X_n, n \ge 1\}$ be a sequence of ψ-mixing random variables satisfying $\sum_{n=1}^\infty \psi(n) < \infty$. For some $c > 0$, if

$$\sum_{n=1}^\infty P(|X_n| > c) < \infty, \tag{2.2}$$

$$\sum_{n=1}^\infty EX_n^{(c)} \text{ converges}, \tag{2.3}$$

$$\sum_{n=1}^\infty \operatorname{Var}\bigl(X_n^{(c)}\bigr) < \infty, \tag{2.4}$$

then $\sum_{n=1}^\infty X_n$ converges almost surely.

Proof According to (2.4) and Theorem 2.1, we have

$$\sum_{n=1}^\infty \bigl(X_n^{(c)} - EX_n^{(c)}\bigr) \text{ converges a.s.} \tag{2.5}$$

It follows by (2.3) and (2.5) that

$$\sum_{n=1}^\infty X_n^{(c)} \text{ converges a.s.} \tag{2.6}$$

Obviously, (2.2) implies that

$$\sum_{n=1}^\infty P\bigl(X_n \ne X_n^{(c)}\bigr) = \sum_{n=1}^\infty P(|X_n| > c) < \infty. \tag{2.7}$$

It follows by (2.7) and the Borel-Cantelli lemma that

$$P\bigl(X_n \ne X_n^{(c)} \text{ i.o.}\bigr) = 0. \tag{2.8}$$

Finally, combining (2.6) with (2.8), we can get that $\sum_{n=1}^\infty X_n$ converges a.s. The proof is completed. □
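A classical example covered by Theorem 2.2 (again a sketch of ours, in the independent special case) is the random harmonic series $X_n = \varepsilon_n/n$ with independent signs $\varepsilon_n = \pm 1$: taking $c = 1$, the series in (2.2) vanishes, (2.3) is identically zero and (2.4) equals $\sum 1/n^2 < \infty$, so $\sum X_n$ converges a.s.

```python
# Illustrative sketch only: the random harmonic series sum ε_n / n with independent
# random signs, which satisfies (2.2)-(2.4) of Theorem 2.2 with c = 1.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
signs = rng.choice([-1.0, 1.0], size=n)     # ε_n = ±1 with probability 1/2 each
X = signs / np.arange(1, n + 1)             # X_n = ε_n / n
S = np.cumsum(X)
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {k:>8d}   partial sum = {S[k - 1]:+.6f}")
# The partial sums stabilize, as the three series theorem predicts.
```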

Theorem 2.3 (Chung-type strong law of large numbers)

Let $\{X_n, n \ge 1\}$ be a sequence of mean-zero ψ-mixing random variables satisfying $\sum_{n=1}^\infty \psi(n) < \infty$, and let $\{a_n, n \ge 1\}$ be a sequence of positive numbers satisfying $0 < a_n \uparrow \infty$. If there exists some $p \in [1,2]$ such that

$$\sum_{n=1}^\infty \frac{E|X_n|^p}{a_n^p} < \infty, \tag{2.9}$$

then

$$\lim_{n \to \infty} \frac{1}{a_n} \sum_{i=1}^n X_i = 0 \quad \text{a.s.} \tag{2.10}$$

Proof It follows by (2.9) that

$$\sum_{n=1}^\infty \frac{\operatorname{Var}\bigl(X_n^{(a_n)}\bigr)}{a_n^2} \le \sum_{n=1}^\infty \frac{E\bigl(X_n^{(a_n)}\bigr)^2}{a_n^2} = \sum_{n=1}^\infty \frac{EX_n^2 I(|X_n| \le a_n)}{a_n^2} \le \sum_{n=1}^\infty \frac{E|X_n|^p}{a_n^p} < \infty.$$

Therefore, we have by Theorem 2.1 that

$$\sum_{n=1}^\infty \frac{X_n^{(a_n)} - EX_n^{(a_n)}}{a_n} \text{ converges a.s.} \tag{2.11}$$

Since $p \in [1,2]$, it follows by $EX_n = 0$ that

$$\sum_{n=1}^\infty \frac{|EX_n^{(a_n)}|}{a_n} = \sum_{n=1}^\infty \frac{|EX_n I(|X_n| \le a_n)|}{a_n} = \sum_{n=1}^\infty \frac{|EX_n I(|X_n| > a_n)|}{a_n} \le \sum_{n=1}^\infty \frac{E|X_n|^p}{a_n^p} < \infty,$$

which implies that

$$\sum_{n=1}^\infty \frac{EX_n^{(a_n)}}{a_n} \text{ converges}. \tag{2.12}$$

Together with (2.11) and (2.12), we can see that

$$\sum_{n=1}^\infty \frac{X_n^{(a_n)}}{a_n} \text{ converges a.s.} \tag{2.13}$$

By Markov’s inequality and (2.9), we have

$$\sum_{n=1}^\infty P\bigl(X_n \ne X_n^{(a_n)}\bigr) = \sum_{n=1}^\infty P(|X_n| > a_n) \le \sum_{n=1}^\infty \frac{E|X_n|^p}{a_n^p} < \infty. \tag{2.14}$$

Hence, the desired result (2.10) follows from (2.13), (2.14), the Borel-Cantelli lemma and Kronecker's lemma immediately. □
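To see Theorem 2.3 at work numerically, the sketch below (our choice of parameters, independent case) takes $a_n = n$, $p = 2$ and independent mean-zero $X_n$ with $EX_n^2 = \sqrt{n}$, so that $\sum E|X_n|^p/a_n^p = \sum n^{-3/2} < \infty$ and $a_n^{-1}\sum_{i\le n} X_i$ should tend to 0.

```python
# Illustrative sketch only: independent mean-zero X_n with E X_n^2 = sqrt(n),
# a_n = n and p = 2, so the series (2.9) is sum n^{1/2 - 2} < infinity.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
idx = np.arange(1, n + 1)
X = rng.normal(0.0, idx ** 0.25)            # sd = n^{1/4}, hence variance sqrt(n)
ratio = np.cumsum(X) / idx                  # a_n^{-1} * S_n with a_n = n
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {k:>8d}   S_n / a_n = {ratio[k - 1]:+.6f}")
# The ratios shrink (roughly like n^{-1/4}) toward 0, matching (2.10).
```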

3 Strong stability for weighted sums of ψ-mixing random variables

In the previous section, we were able to get the Khintchine-Kolmogorov-type convergence theorem for ψ-mixing random variables. In this section, we will study the strong stability for weighted sums of ψ-mixing random variables by using the Khintchine-Kolmogorov-type convergence theorem.

The concept of strong stability is as follows.

Definition 3.1 A sequence $\{Y_n, n \ge 1\}$ is said to be strongly stable if there exist two constant sequences $\{b_n, n \ge 1\}$ and $\{d_n, n \ge 1\}$ with $0 < b_n \uparrow \infty$ such that

$$b_n^{-1} Y_n - d_n \to 0 \quad \text{a.s.}$$

For the definition of strong stability, one can refer to Chow and Teicher [11]. Many authors have extended the strong law of large numbers for sequences of random variables to triangular arrays and arrays of rowwise random variables. See, for example, Hu and Taylor [12], Bai and Cheng [13], Gan and Chen [14], Kuczmaszewska [15], Wu [16–18], Sung [19], Wang et al. [20–24], Zhou [25], Shen [26], Shen et al. [27], and so on.

Our main results are as follows.

Theorem 3.1 Let $\{a_n, n \ge 1\}$ and $\{b_n, n \ge 1\}$ be two sequences of positive numbers with $c_n = b_n/a_n$ and $b_n \uparrow \infty$. Let $\{X_n, n \ge 1\}$ be a sequence of ψ-mixing random variables which is stochastically dominated by a random variable $X$. Assume that $\sum_{n=1}^\infty \psi(n) < \infty$. Denote $N(x) = \operatorname{Card}\{n: c_n \le x\}$, $x > 0$, and let $1 \le p \le 2$. If the following conditions are satisfied:

(i) $EN(|X|) < \infty$,

(ii) $\int_0^\infty t^{p-1} P(|X| > t)\Bigl(\int_t^\infty N(y)/y^{p+1}\,dy\Bigr)dt < \infty$,

then there exist $d_n \in \mathbb{R}$, $n = 1, 2, \ldots$, such that

$$b_n^{-1} \sum_{i=1}^n a_i X_i - d_n \to 0 \quad \text{a.s.} \tag{3.1}$$

Proof Let $S_n = \sum_{i=1}^n a_i X_i$ and $T_n = \sum_{i=1}^n a_i X_i^{(c_i)}$. By Definition 1.2 and (i), we can see that

$$\sum_{i=1}^\infty P\bigl(X_i \ne X_i^{(c_i)}\bigr) = \sum_{i=1}^\infty P(|X_i| > c_i) \le C\sum_{i=1}^\infty P(|X| > c_i) \le CEN(|X|) < \infty. \tag{3.2}$$

By the Borel-Cantelli lemma, for any sequence $\{d_n, n \ge 1\} \subset \mathbb{R}$, the sequences $\{b_n^{-1} T_n - d_n\}$ and $\{b_n^{-1} S_n - d_n\}$ converge on the same set and to the same limit. We will show that $b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i^{(c_i)} - EX_i^{(c_i)}\bigr) \to 0$ a.s., which gives the theorem with $d_n = b_n^{-1}\sum_{i=1}^n a_i EX_i^{(c_i)}$. Note that $\{a_i(X_i^{(c_i)} - EX_i^{(c_i)}), i \ge 1\}$ is a sequence of mean-zero ψ-mixing random variables. It follows from the $C_r$ inequality, Jensen's inequality and Lemma 1.2 that

$$\begin{aligned} \sum_{n=1}^\infty \frac{E\bigl|a_n\bigl(X_n^{(c_n)} - EX_n^{(c_n)}\bigr)\bigr|^p}{b_n^p} &\le C\sum_{n=1}^\infty c_n^{-p} E\bigl(|X_n|^p I(|X_n| \le c_n)\bigr) \\ &\le C\sum_{n=1}^\infty c_n^{-p}\bigl[c_n^p P(|X| > c_n) + E|X|^p I(|X| \le c_n)\bigr] \\ &\le C\sum_{n=1}^\infty P(|X| > c_n) + C\sum_{n=1}^\infty c_n^{-p}\int_0^{c_n} t^{p-1} P(|X| > t)\,dt \end{aligned} \tag{3.3}$$

and

$$\begin{aligned} \sum_{n=1}^\infty c_n^{-p}\int_0^{c_n} t^{p-1} P(|X| > t)\,dt &\le \int_0^\infty t^{p-1} P(|X| > t)\sum_{n: c_n \ge t} c_n^{-p}\,dt \\ &\le C\int_0^\infty t^{p-1} P(|X| > t)\Bigl(\int_t^\infty N(y)/y^{p+1}\,dy\Bigr)dt. \end{aligned} \tag{3.4}$$

The last inequality above follows from the fact that

$$\sum_{n: c_n \ge t} c_n^{-p} = \lim_{u \to \infty}\sum_{n: t \le c_n \le u} c_n^{-p} = \lim_{u \to \infty}\int_t^u y^{-p}\,dN(y) = \lim_{u \to \infty}\Bigl(u^{-p} N(u) - t^{-p} N(t) + p\int_t^u y^{-(p+1)} N(y)\,dy\Bigr)$$

and

$$u^{-p} N(u) \le p\int_u^\infty y^{-(p+1)} N(y)\,dy \to 0 \quad \text{as } u \to \infty.$$

Obviously,

$$\sum_{n=1}^\infty P(|X| > c_n) \le EN(|X|) < \infty. \tag{3.5}$$

Thus, by (3.3)-(3.5) and condition (ii), we can see that

$$\sum_{n=1}^\infty \frac{E\bigl|a_n\bigl(X_n^{(c_n)} - EX_n^{(c_n)}\bigr)\bigr|^p}{b_n^p} < \infty. \tag{3.6}$$

Therefore,

$$b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i^{(c_i)} - EX_i^{(c_i)}\bigr) \to 0 \quad \text{a.s.},$$

following from (3.6), Theorem 2.3 and Kronecker’s lemma immediately. The desired result is obtained. □

Corollary 3.1 Let the conditions of Theorem 3.1 be satisfied, and let $EX_n = 0$ for $n \ge 1$. Assume that $\int_1^\infty EN(|X|/s)\,ds < \infty$. Then $b_n^{-1}\sum_{i=1}^n a_i X_i \to 0$ a.s.

Proof By Theorem 3.1, we only need to prove that

$$b_n^{-1}\sum_{i=1}^n a_i EX_i^{(c_i)} \to 0 \quad \text{a.s.} \tag{3.7}$$

In fact,

$$\begin{aligned} \sum_{i=1}^\infty \frac{a_i\bigl|EX_i^{(c_i)}\bigr|}{b_i} &= \sum_{i=1}^\infty c_i^{-1}\bigl|EX_i I(|X_i| \le c_i)\bigr| \le \sum_{i=1}^\infty c_i^{-1} E|X_i| I(|X_i| > c_i) \\ &\le \sum_{i=1}^\infty c_i^{-1}\Bigl(c_i P(|X_i| > c_i) + \int_{c_i}^\infty P(|X_i| > t)\,dt\Bigr) \\ &\le CEN(|X|) + C\int_1^\infty EN(|X|/s)\,ds < \infty, \end{aligned}$$

which implies (3.7) by Kronecker’s lemma. We complete the proof of the corollary. □
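A concrete instance of Corollary 3.1 (our own choice of weights and distribution, in the independent case) is $a_n = n$, $b_n = n^2$, so that $c_n = n$ and $N(x) = \lfloor x\rfloor$; with i.i.d. Student-$t(3)$ observations, conditions (i)-(ii) of Theorem 3.1 (with $p = 2$) and $\int_1^\infty EN(|X|/s)\,ds < \infty$ can be checked directly, so $b_n^{-1}\sum_{i\le n} a_i X_i \to 0$ a.s.

```python
# Illustrative sketch only: a_n = n, b_n = n^2 (so c_n = n) and i.i.d. Student-t(3)
# observations; these choices are ours and satisfy the hypotheses of Corollary 3.1.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
idx = np.arange(1, n + 1, dtype=float)
X = rng.standard_t(df=3, size=n)            # heavy-tailed, mean zero, E|X| < infinity
weighted = np.cumsum(idx * X) / idx**2      # b_n^{-1} * sum_{i<=n} a_i X_i
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {k:>8d}   weighted sum / b_n = {weighted[k - 1]:+.6f}")
# The normalized weighted sums drift toward 0, i.e. strong stability with d_n = 0.
```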

Theorem 3.2 Let $\{a_n, n \ge 1\}$ and $\{b_n, n \ge 1\}$ be two sequences of positive numbers with $c_n = b_n/a_n$ and $b_n \uparrow \infty$. Let $\{X_n, n \ge 1\}$ be a sequence of mean-zero ψ-mixing random variables which is stochastically dominated by a random variable $X$. Assume that $\sum_{n=1}^\infty \psi(n) < \infty$. Denote $N(x) = \operatorname{Card}\{n: c_n \le x\}$, $x > 0$, and let $1 \le p \le 2$. If the following conditions are satisfied:

(i) $EN(|X|) < \infty$,

(ii) $\int_1^\infty EN(|X|/s)\,ds < \infty$,

(iii) $\max_{1 \le j \le n} c_j^p \sum_{i=n}^\infty c_i^{-p} = O(n)$,

then

$$b_n^{-1}\sum_{i=1}^n a_i X_i \to 0 \quad \text{a.s.} \tag{3.8}$$

Proof By condition (i) and (3.2), we only need to prove that $b_n^{-1}\sum_{i=1}^n a_i X_i^{(c_i)} \to 0$ a.s. For this purpose, it suffices to show that

$$b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i^{(c_i)} - EX_i^{(c_i)}\bigr) \to 0 \quad \text{a.s.} \tag{3.9}$$

and

$$b_n^{-1}\sum_{i=1}^n a_i EX_i^{(c_i)} \to 0 \quad \text{as } n \to \infty. \tag{3.10}$$

Equation (3.10) follows from the proof of Corollary 3.1 immediately.

To prove (3.9), we set $\varepsilon_0 = 0$ and $\varepsilon_n = \max_{1 \le j \le n} c_j$ for $n \ge 1$. It follows from the $C_r$ inequality, Jensen's inequality and Lemma 1.2 that

$$\sum_{n=1}^\infty \frac{E\bigl|a_n\bigl(X_n^{(c_n)} - EX_n^{(c_n)}\bigr)\bigr|^p}{b_n^p} \le C\sum_{n=1}^\infty c_n^{-p} E\bigl(|X_n|^p I(|X_n| \le c_n)\bigr) \le C\sum_{n=1}^\infty P(|X| > c_n) + C\sum_{n=1}^\infty c_n^{-p} E|X|^p I(|X| \le c_n).$$

Obviously,

$$\sum_{n=1}^\infty P(|X| > c_n) \le EN(|X|) < \infty \tag{3.11}$$

and

$$\begin{aligned} \sum_{n=1}^\infty c_n^{-p} E|X|^p I(|X| \le c_n) &\le \sum_{n=1}^\infty c_n^{-p} E|X|^p I(|X| \le \varepsilon_n) \le \sum_{j=1}^\infty \varepsilon_j^p P(\varepsilon_{j-1} < |X| \le \varepsilon_j)\sum_{n=j}^\infty c_n^{-p} \\ &\le C\sum_{j=1}^\infty P(|X| > \varepsilon_{j-1}) \le C\Bigl(1 + \sum_{n=1}^\infty P(|X| > c_n)\Bigr) \le C\bigl(1 + EN(|X|)\bigr) < \infty. \end{aligned}$$

Therefore,

$$\sum_{n=1}^\infty \frac{E\bigl|a_n\bigl(X_n^{(c_n)} - EX_n^{(c_n)}\bigr)\bigr|^p}{b_n^p} < \infty, \tag{3.12}$$

following from the statements above. By Theorem 2.3 and Kronecker’s lemma, we can obtain (3.9) immediately. The proof is completed. □

Theorem 3.3 Let $\{a_n, n \ge 1\}$ and $\{b_n, n \ge 1\}$ be two sequences of positive numbers with $c_n = b_n/a_n$ and $b_n \uparrow \infty$. Let $\{X_n, n \ge 1\}$ be a sequence of ψ-mixing random variables which is stochastically dominated by a random variable $X$. Assume that $\sum_{n=1}^\infty \psi(n) < \infty$. Define $N(x) = \operatorname{Card}\{n: c_n \le x\}$ and $R(x) = \int_x^\infty N(y) y^{-3}\,dy$, $x > 0$. If the following conditions are satisfied:

(i) $N(x) < \infty$ for any $x > 0$,

(ii) $R(1) = \int_1^\infty N(y) y^{-3}\,dy < \infty$,

(iii) $EX^2 R(|X|) < \infty$,

then there exist $d_n \in \mathbb{R}$, $n = 1, 2, \ldots$, such that

$$b_n^{-1}\sum_{i=1}^n a_i X_i - d_n \to 0 \quad \text{a.s.} \tag{3.13}$$

Proof Since $N(x)$ is nondecreasing, for any $x > 0$ we have

$$R(x) \ge N(x)\int_x^\infty y^{-3}\,dy = \tfrac{1}{2} x^{-2} N(x), \tag{3.14}$$

which implies that $EN(|X|) \le 2EX^2 R(|X|) < \infty$. Therefore,

$$\sum_{i=1}^\infty P\bigl(X_i \ne X_i^{(c_i)}\bigr) = \sum_{i=1}^\infty P(|X_i| > c_i) \le C\sum_{i=1}^\infty P(|X| > c_i) \le CEN(|X|) < \infty. \tag{3.15}$$

By the Borel-Cantelli lemma, for any sequence $\{d_n, n \ge 1\} \subset \mathbb{R}$, the sequences $\{b_n^{-1} S_n - d_n\}$ and $\{b_n^{-1} T_n - d_n\}$ (with $S_n$ and $T_n$ as in the proof of Theorem 3.1) converge on the same set and to the same limit. We will show that $b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i^{(c_i)} - EX_i^{(c_i)}\bigr) \to 0$ a.s., which gives the theorem with $d_n = b_n^{-1}\sum_{i=1}^n a_i EX_i^{(c_i)}$. It follows from Lemma 1.2 that

$$\sum_{n=1}^\infty \frac{\operatorname{Var}\bigl(a_n X_n^{(c_n)}\bigr)}{b_n^2} \le \sum_{n=1}^\infty c_n^{-2} E\bigl(X_n^{(c_n)}\bigr)^2 = \sum_{n=1}^\infty c_n^{-2} EX_n^2 I(|X_n| \le c_n) \le CEN(|X|) + C\sum_{n=1}^\infty c_n^{-2} EX^2 I(|X| \le c_n) \tag{3.16}$$

and

$$\sum_{n=1}^\infty c_n^{-2} EX^2 I(|X| \le c_n) = \sum_{n: c_n \le 1} c_n^{-2} EX^2 I(|X| \le c_n) + \sum_{n: c_n > 1} c_n^{-2} EX^2 I(|X| \le c_n) =: I_1 + I_2. \tag{3.17}$$

Since $N(1) = \operatorname{Card}\{n: c_n \le 1\} \le 2R(1) < \infty$ from (3.14) and (ii), it follows that $I_1 < \infty$. For $I_2$, we have

$$\begin{aligned} I_2 &= \sum_{n: c_n > 1} c_n^{-2} EX^2 I(|X| \le c_n) = \sum_{k=2}^\infty \sum_{n: k-1 < c_n \le k} c_n^{-2} EX^2 I(|X| \le c_n) \\ &\le \sum_{k=2}^\infty\bigl(N(k) - N(k-1)\bigr)(k-1)^{-2} EX^2 I(|X| \le 1) + \sum_{k=2}^\infty\bigl(N(k) - N(k-1)\bigr)(k-1)^{-2} EX^2 I(1 < |X| \le k) \\ &=: I_{21} + I_{22}, \end{aligned}$$

$$\begin{aligned} I_{21} &\le C\sum_{k=2}^\infty\bigl(N(k) - N(k-1)\bigr)\sum_{j=k-1}^\infty j^{-3} = C\sum_{j=1}^\infty j^{-3}\sum_{k=2}^{j+1}\bigl(N(k) - N(k-1)\bigr) \\ &\le C\sum_{j=1}^\infty (j+1)^{-3} N(j+1) \le C\int_1^\infty y^{-3} N(y)\,dy < \infty. \end{aligned}$$

Since N(x) is nondecreasing and R(x) is nonincreasing, we have

$$\begin{aligned} I_{22} &\le \sum_{m=2}^\infty EX^2 I(m-1 < |X| \le m)\sum_{k=m}^\infty N(k)\bigl((k-1)^{-2} - k^{-2}\bigr) \\ &\le C\sum_{m=2}^\infty EX^2 I(m-1 < |X| \le m)\sum_{k=m}^\infty \int_k^{k+1} N(x) x^{-3}\,dx \\ &\le C\sum_{m=2}^\infty EX^2 R(|X|) I(m-1 < |X| \le m) \le CEX^2 R(|X|) < \infty. \end{aligned}$$

Therefore,

$$\sum_{n=1}^\infty \frac{\operatorname{Var}\bigl(a_n X_n^{(c_n)}\bigr)}{b_n^2} < \infty, \tag{3.18}$$

following from the above statements. By Theorem 2.1 and Kronecker’s lemma, we have

$$b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i^{(c_i)} - EX_i^{(c_i)}\bigr) \to 0 \quad \text{a.s.} \tag{3.19}$$

Taking $d_n = b_n^{-1}\sum_{i=1}^n a_i EX_i^{(c_i)}$, we have $b_n^{-1}\sum_{i=1}^n a_i X_i^{(c_i)} - d_n \to 0$ a.s. The proof is completed. □
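To illustrate the scope of Theorem 3.3, consider the simple special case (not stated in the paper) $a_n = 1$, $b_n = n$, so that $c_n = n$, $N(x) = \lfloor x\rfloor$ and $R(x) \le 1/\max(x, 1)$. Condition (i) is then automatic, (ii) follows from $R(1) \le \int_1^\infty y^{-2}\,dy = 1$, and $EX^2 R(|X|) \le EX^2 I(|X| \le 1) + E|X|$, so (iii) holds whenever $E|X| < \infty$. Theorem 3.3 then provides constants $d_n$ with $n^{-1}\sum_{i=1}^n X_i - d_n \to 0$ a.s., the unweighted counterpart of Theorem A.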

Corollary 3.2 Let the conditions of Theorem 3.3 be satisfied. If $EX_n = 0$, $n \ge 1$, and $\int_1^\infty EN(|X|/s)\,ds < \infty$, then $b_n^{-1}\sum_{i=1}^n a_i X_i \to 0$ a.s.

In the following, let $\alpha(x): \mathbb{R}^+ \to \mathbb{R}^+$ be a positive and nonincreasing function, and set $a_n = \alpha(n)$, $b_n = \sum_{i=1}^n a_i$, $c_n = b_n/a_n$, $n \ge 1$, where

$$0 < b_n \uparrow \infty, \tag{3.20}$$

$$0 < \liminf_{n \to \infty} n^{-1} c_n \alpha(\log c_n) \le \limsup_{n \to \infty} n^{-1} c_n \alpha(\log c_n) < \infty, \tag{3.21}$$
$$x\alpha(\log^+ x) \text{ is nondecreasing for } x > 0. \tag{3.22}$$

Theorem 3.4 Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed ψ-mixing random variables with $\sum_{n=1}^\infty \psi(n) < \infty$. If $E|X_1|\alpha(\log^+|X_1|) < \infty$, then there exist $d_n \in \mathbb{R}$, $n = 1, 2, \ldots$, such that $b_n^{-1}\sum_{i=1}^n a_i X_i - d_n \to 0$ a.s.

Proof Since $\alpha(x)$ is positive and nonincreasing for $x > 0$ and $0 < b_n \uparrow \infty$, it follows that $c_n \uparrow \infty$. By (3.21), we can choose constants $m \in \mathbb{N}$, $C_1 > 0$, $C_2 > 0$ such that for $n \ge m$,

$$C_1 n \le c_n \alpha(\log c_n) \le C_2 n. \tag{3.23}$$

Therefore, for $n \ge m$, we have $\frac{1}{c_n} \le \frac{\alpha(\log c_m)}{C_1 n}$, which implies that

$$\sum_{j=m}^\infty c_j^{-2} \le \sum_{j=m}^\infty \frac{\alpha^2(\log c_m)}{C_1^2 j^2} \le \frac{\alpha^2(\log c_m)}{C_1^2 m}. \tag{3.24}$$

By (3.22)-(3.24), it follows that

$$\begin{aligned} \sum_{j=m}^\infty \frac{E\bigl(a_j X_j^{(c_j)}\bigr)^2}{b_j^2} &\le \sum_{j=m}^\infty c_j^{-2}\Bigl(\int_{\{|X_1| \le c_{m-1}\}} X_1^2\,dP + \sum_{i=m}^j\int_{\{c_{i-1} < |X_1| \le c_i\}} X_1^2\,dP\Bigr) \\ &\le C + \sum_{j=m}^\infty c_j^{-2}\sum_{i=m}^j\int_{\{c_{i-1} < |X_1| \le c_i\}} X_1^2\,dP \\ &\le C + C\sum_{i=m}^\infty i^{-1}\alpha^2(\log c_i)\int_{\{c_{i-1} < |X_1| \le c_i\}} X_1^2\,dP \\ &\le C + C\sum_{i=m}^\infty \alpha(\log c_i)\int_{\{c_{i-1} < |X_1| \le c_i\}} |X_1|\,dP \\ &\le C + C\sum_{i=m}^\infty\int_{\{c_{i-1} < |X_1| \le c_i\}} |X_1|\alpha(\log^+|X_1|)\,dP < \infty. \end{aligned}$$

Therefore,

$$\sum_{j=1}^\infty \frac{\operatorname{Var}\bigl(a_j X_j^{(c_j)}\bigr)}{b_j^2} \le \sum_{j=1}^\infty \frac{E\bigl(a_j X_j^{(c_j)}\bigr)^2}{b_j^2} < \infty, \tag{3.25}$$

which implies that

$$b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i^{(c_i)} - EX_i^{(c_i)}\bigr) \to 0 \quad \text{a.s.} \tag{3.26}$$

from Theorem 2.1 and Kronecker’s lemma. By (3.22) and (3.23) again, we have

$$\sum_{j=m}^\infty P(|X_j| > c_j) \le \sum_{j=m}^\infty P\bigl(|X_j|\alpha(\log^+|X_j|) \ge c_j\alpha(\log c_j)\bigr) \le \sum_{j=m}^\infty P\bigl(|X_1|\alpha(\log^+|X_1|) \ge C_1 j\bigr) < \infty,$$

and hence

$$\sum_{j=1}^\infty P\bigl(X_j \ne X_j^{(c_j)}\bigr) = \sum_{j=1}^\infty P(|X_j| > c_j) = \sum_{j=1}^{m-1} P(|X_j| > c_j) + \sum_{j=m}^\infty P(|X_j| > c_j) < \infty.$$

By the Borel-Cantelli lemma, we have $P\bigl(X_j \ne X_j^{(c_j)} \text{ i.o.}\bigr) = 0$. Together with (3.26), we can see that

$$b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i - EX_i^{(c_i)}\bigr) \to 0 \quad \text{a.s.} \tag{3.27}$$

Taking $d_n = b_n^{-1}\sum_{i=1}^n a_i EX_i^{(c_i)}$ for $n \ge 1$, we get the desired result. □
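As a quick sanity check of Theorem 3.4 (a special case, not stated in the paper), take $\alpha(x) \equiv 1$, so that $a_n = 1$, $b_n = n$ and $c_n = n$. Then (3.20) and (3.22) are immediate, (3.21) holds with both limits equal to 1, and the moment assumption reduces to $E|X_1| < \infty$; the theorem thus yields constants $d_n$ with $n^{-1}\sum_{i=1}^n X_i - d_n \to 0$ a.s., a Kolmogorov-type strong law of large numbers for identically distributed ψ-mixing sequences.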

Theorem 3.5 Let $\{X_n, n \ge 1\}$ be a sequence of ψ-mixing random variables with $\sum_{n=1}^\infty \psi(n) < \infty$. If for some $1 \le p \le 2$,

$$\sum_{n=1}^\infty n^{-p} E\bigl|X_n\alpha(\log^+|X_n|)\bigr|^p < \infty,$$

then there exist $d_n \in \mathbb{R}$, $n = 1, 2, \ldots$, such that $b_n^{-1}\sum_{i=1}^n a_i X_i - d_n \to 0$ a.s.

Proof Similar to the proof of Theorem 3.4, it is easily seen that

$$\sum_{j=1}^\infty P\bigl(X_j \ne X_j^{(c_j)}\bigr) \le m - 1 + \sum_{j=m}^\infty P\bigl(|X_j|\alpha(\log^+|X_j|) \ge c_j\alpha(\log c_j)\bigr) \le m - 1 + \sum_{j=m}^\infty P\bigl(|X_j|\alpha(\log^+|X_j|) \ge C_1 j\bigr) < \infty.$$

By the Borel-Cantelli lemma, for any sequence $\{d_n, n \ge 1\} \subset \mathbb{R}$, the sequences $\{b_n^{-1}\sum_{i=1}^n a_i X_i - d_n\}$ and $\{b_n^{-1}\sum_{i=1}^n a_i X_i^{(c_i)} - d_n\}$ converge on the same set and to the same limit. We will show that $b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i^{(c_i)} - EX_i^{(c_i)}\bigr) \to 0$ a.s., which gives the theorem with $d_n = b_n^{-1}\sum_{i=1}^n a_i EX_i^{(c_i)}$. Note that $\{a_i(X_i^{(c_i)} - EX_i^{(c_i)})/b_i, i \ge 1\}$ is a sequence of mean-zero ψ-mixing random variables. By the $C_r$ inequality and Jensen's inequality, we can see that

$$\begin{aligned} \sum_{j=1}^\infty \frac{E\bigl|a_j\bigl(X_j^{(c_j)} - EX_j^{(c_j)}\bigr)\bigr|^p}{b_j^p} &\le C(m-1) + C\sum_{j=m}^\infty c_j^{-p} E|X_j|^p I(|X_j| \le c_j) \\ &\le C(m-1) + C\sum_{j=m}^\infty j^{-p}\bigl(\alpha(\log c_j)\bigr)^p E|X_j|^p I(|X_j| \le c_j) \\ &\le C(m-1) + C\sum_{j=1}^\infty j^{-p} E\bigl|X_j\alpha(\log^+|X_j|)\bigr|^p < \infty. \end{aligned}$$

It follows by Theorem 2.3 that $b_n^{-1}\sum_{i=1}^n a_i\bigl(X_i^{(c_i)} - EX_i^{(c_i)}\bigr) \to 0$ a.s. The proof is completed. □

Corollary 3.3 Let the conditions of Theorem 3.5 be satisfied. Furthermore, suppose that $EX_n = 0$ and $\sum_{n=1}^\infty \int_1^\infty P(|X_n| > s c_n)\,ds < \infty$. Then $b_n^{-1}\sum_{i=1}^n a_i X_i \to 0$ a.s.

References

  1. Blum JR, Hanson DL, Koopmans L: On the strong law of large numbers for a class of stochastic processes. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1963, 2: 1–11. 10.1007/BF00535293

  2. Yang SC: Almost sure convergence of weighted sums of mixing sequences. J. Syst. Sci. Math. Sci. 1995, 15(3): 254–265.

  3. Wu QY: Strong consistency of M estimator in linear model for ρ-mixing, φ-mixing, ψ-mixing samples. Math. Appl. 2004, 17(3): 393–397.

  4. Wang XJ, Hu SH, Shen Y, Yang WZ: Maximal inequality for ψ-mixing sequences and its applications. Appl. Math. Lett. 2010, 23: 1156–1161. 10.1016/j.aml.2010.04.010

  5. Zhu YC, Deng X, Pan J, Ling JM, Wang XJ: Strong law of large numbers for sequences of ψ-mixing random variables. J. Math. Study 2012, 45(4): 404–410.

  6. Pan J, Zhu YC, Zou WY, Wang XJ: Some convergence results for sequences of ψ-mixing random variables. Chin. Q. J. Math. 2013, 28(1): 111–117.

  7. Jamison B, Orey S, Pruitt W: Convergence of weighted averages of independent random variables. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1965, 4: 40–44. 10.1007/BF00535481

  8. Wang XJ, Hu SH, Yang WZ, Wang XH: On complete convergence of weighted sums for arrays of rowwise asymptotically almost negatively associated random variables. Abstr. Appl. Anal. 2012, 2012: Article ID 315138

  9. Tang XF: Some strong laws of large numbers for weighted sums of asymptotically almost negatively associated random variables. J. Inequal. Appl. 2013, 2013

  10. Shen AT, Wu RC: Strong and weak convergence for asymptotically almost negatively associated random variables. Discrete Dyn. Nat. Soc. 2013, 2013

  11. Chow YS, Teicher H: Probability Theory: Independence, Interchangeability, Martingales. 2nd edition. Springer, New York; 1988: 124.

  12. Hu TC, Taylor RL: On the strong law for arrays and for the bootstrap mean and variance. Int. J. Math. Math. Sci. 1997, 20(2): 375–382. 10.1155/S0161171297000483

  13. Bai ZD, Cheng PE: Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 2000, 46: 105–112. 10.1016/S0167-7152(99)00093-0

  14. Gan SX, Chen PY: On the limiting behavior of the maximum partial sums for arrays of rowwise NA random variables. Acta Math. Sci., Ser. B 2007, 27(2): 283–290.

  15. Kuczmaszewska A: On Chung-Teicher type strong law of large numbers for ρ-mixing random variables. Discrete Dyn. Nat. Soc. 2008, 2008

  16. Wu QY: Complete convergence for negatively dependent sequences of random variables. J. Inequal. Appl. 2010, 2010: Article ID 507293

  17. Wu QY: A strong limit theorem for weighted sums of sequences of negatively dependent random variables. J. Inequal. Appl. 2010, 2010

  18. Wu QY: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J. Inequal. Appl. 2012, 2012

  19. Sung SH: On the strong convergence for weighted sums of random variables. Stat. Pap. 2011, 52: 447–454. 10.1007/s00362-009-0241-9

  20. Wang XJ, Li XQ, Hu SH, Yang WZ: Strong limit theorems for weighted sums of negatively associated random variables. Stoch. Anal. Appl. 2011, 29: 1–14.

  21. Wang XJ, Hu SH, Yang WZ: Complete convergence for arrays of rowwise asymptotically almost negatively associated random variables. Discrete Dyn. Nat. Soc. 2011, 2011

  22. Wang XJ, Hu SH, Volodin AI: Strong limit theorems for weighted sums of NOD sequence and exponential inequalities. Bull. Korean Math. Soc. 2011, 48(5): 923–938. 10.4134/BKMS.2011.48.5.923

  23. Wang XJ, Li XQ, Yang WZ, Hu SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 2012, 25: 1916–1920. 10.1016/j.aml.2012.02.069

  24. Wang XJ, Hu SH, Yang WZ: Complete convergence for arrays of rowwise negatively orthant dependent random variables. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 2012, 106: 235–245. 10.1007/s13398-011-0048-0

  25. Zhou XC, Tan CC, Lin JG: On the strong laws for weighted sums of ρ-mixing random variables. J. Inequal. Appl. 2011, 2011

  26. Shen AT: Some strong limit theorems for arrays of rowwise negatively orthant-dependent random variables. J. Inequal. Appl. 2011, 2011

  27. Shen AT, Wu RC, Chen Y, Zhou Y: Complete convergence of the maximum partial sums for arrays of rowwise AANA random variables. Discrete Dyn. Nat. Soc. 2013, 2013


Acknowledgements

The authors are most grateful to the editor Andrei Volodin and an anonymous referee for careful reading of the manuscript and valuable suggestions, which helped in improving an earlier version of this paper. This work was supported by the Natural Science Project of Department of Education of Anhui Province (KJ2011z056) and the National Natural Science Foundation of China (11201001).

Author information

Correspondence to Ling Tang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Xu, H., Tang, L. Strong convergence properties for ψ-mixing random variables. J Inequal Appl 2013, 360 (2013). https://doi.org/10.1186/1029-242X-2013-360
