
Complete convergence for weighted sums of arrays of rowwise $\tilde{\rho}$-mixing random variables

Abstract

Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise $\tilde{\rho}$-mixing random variables. Some sufficient conditions for complete convergence for weighted sums of arrays of rowwise $\tilde{\rho}$-mixing random variables are presented without the assumption of identical distribution. As applications, the Baum and Katz type result and the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of $\tilde{\rho}$-mixing random variables are obtained.

MSC: 60F15.

1 Introduction

The concept of complete convergence was introduced by Hsu and Robbins [1] as follows. A sequence of random variables $\{U_n, n \ge 1\}$ is said to converge completely to a constant $C$ if $\sum_{n=1}^{\infty} P(|U_n - C| > \varepsilon) < \infty$ for all $\varepsilon > 0$. In view of the Borel-Cantelli lemma, this implies that $U_n \to C$ almost surely (a.s.). The converse is true if the $\{U_n, n \ge 1\}$ are independent. Hsu and Robbins [1] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Erdös [2] proved the converse. The result of Hsu-Robbins-Erdös is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. See, for example, Spitzer [3], Baum and Katz [4], Gut [5], Zarei and Jabbari [6], and so forth. The main purpose of this paper is to provide complete convergence for weighted sums of arrays of rowwise $\tilde{\rho}$-mixing random variables.
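The definition can be made concrete in the simplest i.i.d. setting. The following sketch (ours, not from the paper) evaluates the Hsu-Robbins series $\sum_n P(|S_n/n| > \varepsilon)$ in closed form for standard normal summands, where $S_n/n \sim N(0, 1/n)$; the partial sums of the series stabilize, which is exactly complete convergence of $S_n/n$ to $0$:

```python
import math

# For i.i.d. N(0,1) summands, S_n/n ~ N(0, 1/n), so
# P(|S_n/n| > eps) = 2 * (1 - Phi(eps * sqrt(n))).
# Complete convergence means the series of these tails over n is finite.

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tail(n, eps):
    return 2.0 * (1.0 - phi(eps * math.sqrt(n)))

eps = 0.5
partials = [sum(tail(n, eps) for n in range(1, N + 1)) for N in (10, 100, 1000)]
print(partials)  # the partial sums stabilize quickly
```

The rapid (exponential) decay of the Gaussian tail is what makes the series summable; for heavier-tailed summands with infinite variance the same series diverges.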

Firstly, let us recall the definitions of a sequence of $\tilde{\rho}$-mixing random variables and an array of rowwise $\tilde{\rho}$-mixing random variables.

Let $\{X_n, n \ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$. Write $\mathcal{F}_S = \sigma(X_i, i \in S \subset \mathbb{N})$. Given two $\sigma$-algebras $\mathcal{B}$, $\mathcal{R}$ in $\mathcal{F}$, let

$\rho(\mathcal{B}, \mathcal{R}) = \sup_{X \in L_2(\mathcal{B}),\, Y \in L_2(\mathcal{R})} \dfrac{|EXY - EX\,EY|}{(\operatorname{Var} X \cdot \operatorname{Var} Y)^{1/2}}.$

Define the $\tilde{\rho}$-mixing coefficients by

$\tilde{\rho}(k) = \sup\{\rho(\mathcal{F}_S, \mathcal{F}_T) : \text{finite subsets } S, T \subset \mathbb{N} \text{ such that } \operatorname{dist}(S, T) \ge k\}, \quad k \ge 0.$

Obviously, $0 \le \tilde{\rho}(k+1) \le \tilde{\rho}(k) \le 1$ and $\tilde{\rho}(0) = 1$.

Definition 1.1 A sequence $\{X_n, n \ge 1\}$ of random variables is said to be a $\tilde{\rho}$-mixing sequence if there exists $k \in \mathbb{N}$ such that $\tilde{\rho}(k) < 1$.

An array $\{X_{ni}, i \ge 1, n \ge 1\}$ of random variables is called an array of rowwise $\tilde{\rho}$-mixing random variables if, for every $n \ge 1$, $\{X_{ni}, i \ge 1\}$ is a sequence of $\tilde{\rho}$-mixing random variables.

The notion of $\tilde{\rho}$-mixing random variables was introduced by Bradley [7], and many applications have been found. $\tilde{\rho}$-mixing is similar to $\rho$-mixing, but the two are quite different. Many authors have studied this concept and provided interesting results and applications. See, for example, Bryc and Smolenski [8], Peligrad [9, 10], Peligrad and Gut [11], Utev and Peligrad [12], Gan [13], Cai [14], Zhu [15], Wu and Jiang [16, 17], An and Yuan [18], Kuczmaszewska [19], Sung [20], Wang et al. [21–23], and so on.

Recently, An and Yuan [18] obtained the following complete convergence result for weighted sums of identically distributed $\tilde{\rho}$-mixing random variables.

Theorem 1.1 Let $p > 1/\alpha$ and $1/2 < \alpha \le 2$. Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed $\tilde{\rho}$-mixing random variables with $EX_1 = 0$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying

$\sum_{i=1}^{n} |a_{ni}|^p = O(n^{\delta}) \quad \text{for some } 0 < \delta < 1,$
(1.1)
$A_{nk} = \sharp\{1 \le i \le n : |a_{ni}|^p > (k+1)^{-1}\} \ge n e^{-1/k}, \quad k \ge 1, n \ge 1.$
(1.2)

Then the following statements are equivalent:

(i) $E|X_1|^p < \infty$;

(ii) $\sum_{n=1}^{\infty} n^{p\alpha - 2} P\big(\max_{1 \le j \le n} \big|\sum_{i=1}^{j} a_{ni} X_i\big| > \varepsilon n^{\alpha}\big) < \infty$ for all $\varepsilon > 0$.

Sung [20] pointed out that an array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying both (1.1) and (1.2) does not exist, and obtained the following new complete convergence result for weighted sums of identically distributed $\tilde{\rho}$-mixing random variables.

Theorem 1.2 Let $p > 1/\alpha$ and $1/2 < \alpha \le 2$. Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed $\tilde{\rho}$-mixing random variables with $EX_1 = 0$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying

$\sum_{i=1}^{n} |a_{ni}|^q = O(n) \quad \text{for some } q > p.$
(1.3)

If $E|X_1|^p < \infty$, then

$\sum_{n=1}^{\infty} n^{p\alpha - 2} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} a_{ni} X_i\Big| > \varepsilon n^{\alpha}\Big) < \infty, \quad \forall \varepsilon > 0.$
(1.4)

Conversely, if (1.4) holds for any array $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ satisfying (1.3), then $E|X_1|^p < \infty$.

For more details on complete convergence results for weighted sums of dependent sequences, one can refer to Wu [24, 25], Wang et al. [26, 27], and so forth. The main purpose of this paper is to further study complete convergence for weighted sums of arrays of rowwise $\tilde{\rho}$-mixing random variables under mild conditions. The main idea is inspired by Baek et al. [28] and Wu [25]. As applications, we extend the results of Baum and Katz [4] from the i.i.d. case to the setting of arrays of rowwise $\tilde{\rho}$-mixing random variables, and we provide the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of $\tilde{\rho}$-mixing random variables. We give some sufficient conditions for complete convergence for weighted sums of arrays of rowwise $\tilde{\rho}$-mixing random variables without the assumption of identical distribution. The techniques used in the paper are the Rosenthal type inequality and the truncation method.

Throughout this paper, the symbol $C$ denotes a positive constant which is not necessarily the same at each appearance, and $\lfloor x \rfloor$ denotes the integer part of $x$. For a finite set $A$, the symbol $\sharp A$ denotes the number of elements of $A$, and $I(A)$ denotes the indicator function of the event $A$. Denote $\log x = \ln \max(x, e)$, $X^{+} = \max(X, 0)$ and $X^{-} = \max(-X, 0)$.
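These conventions can be written out as plain functions; the following is a small sketch (the function names are ours, not the paper's):

```python
import math

# Notation used throughout the paper, as functions (our naming):
# log x = ln(max(x, e)),  x_plus = max(x, 0),  x_minus = max(-x, 0).

def log_(x):
    return math.log(max(x, math.e))

def pos(x):
    return max(x, 0.0)

def neg(x):
    return max(-x, 0.0)

# x = x_plus - x_minus and |x| = x_plus + x_minus for any real x
assert pos(-3.0) - neg(-3.0) == -3.0
assert pos(-3.0) + neg(-3.0) == 3.0
# for arguments below e, the clipped logarithm equals ln(e) = 1
assert abs(log_(1.0) - 1.0) < 1e-12
```

The clipping in $\log x = \ln \max(x, e)$ keeps the logarithm bounded away from $-\infty$ and nonnegative, which is convenient in moment conditions such as $E|X|\log|X| < \infty$.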

The paper is organized as follows. Two important lemmas are provided in Section 2. The main results and their proofs are presented in Section 3, where we obtain complete convergence for arrays of rowwise $\tilde{\rho}$-mixing random variables that are stochastically dominated by a random variable $X$.

2 Preliminaries

Firstly, we give the definition of stochastic domination.

Definition 2.1 A sequence $\{X_n, n \ge 1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that

$P(|X_n| > x) \le C P(|X| > x)$
(2.1)

for all $x \ge 0$ and $n \ge 1$.

An array $\{X_{ni}, i \ge 1, n \ge 1\}$ of rowwise random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that

$P(|X_{ni}| > x) \le C P(|X| > x)$
(2.2)

for all $x \ge 0$, $i \ge 1$ and $n \ge 1$.

The proofs of the main results of the paper are based on the following two lemmas. One is the classical Rosenthal type inequality for $\tilde{\rho}$-mixing random variables obtained by Utev and Peligrad [12]; the other consists of fundamental inequalities for stochastic domination.

Lemma 2.1 (cf. Utev and Peligrad [12], Theorem 2.1)

Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde{\rho}$-mixing random variables with $EX_i = 0$ and $E|X_i|^p < \infty$ for some $p \ge 2$ and every $i \ge 1$. Then there exists a positive constant $C$ depending only on $p$ such that

$E\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} X_i\Big|^p\Big) \le C \Big\{\sum_{i=1}^{n} E|X_i|^p + \Big(\sum_{i=1}^{n} E X_i^2\Big)^{p/2}\Big\}.$

Lemma 2.2 Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise random variables which is stochastically dominated by a random variable $X$. For any $\alpha > 0$ and $b > 0$, the following two statements hold:

$E|X_{ni}|^{\alpha} I(|X_{ni}| \le b) \le C_1 \big[E|X|^{\alpha} I(|X| \le b) + b^{\alpha} P(|X| > b)\big],$
(2.3)
$E|X_{ni}|^{\alpha} I(|X_{ni}| > b) \le C_2 E|X|^{\alpha} I(|X| > b),$
(2.4)

where $C_1$ and $C_2$ are positive constants.

Proof The proof of this lemma can be found in Wu [29] or Wang et al. [30]. □
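The Rosenthal type bound of Lemma 2.1 can be probed numerically in the simplest $\tilde{\rho}$-mixing case, namely i.i.d. summands (for which $\tilde{\rho}(k) = 0$ for $k \ge 1$). The following Monte Carlo sketch is ours: the lemma's constant $C$ is unspecified, so the code only checks that the ratio of the left-hand side to the bracket on the right stays bounded as $n$ grows. With $p = 4$ and $X_i$ uniform on $[-1, 1]$, the bracket is $n/5 + (n/3)^2$ since $E|X_i|^4 = 1/5$ and $EX_i^2 = 1/3$:

```python
import random

random.seed(0)

def max_partial_sum_p(n, p, reps=2000):
    """Monte Carlo estimate of E( max_{j<=n} |S_j|^p ) for uniform summands."""
    total = 0.0
    for _ in range(reps):
        s, m = 0.0, 0.0
        for _ in range(n):
            s += random.uniform(-1.0, 1.0)
            m = max(m, abs(s))
        total += m ** p
    return total / reps

p = 4
for n in (10, 40, 160):
    lhs = max_partial_sum_p(n, p)
    rhs = n / 5 + (n / 3) ** 2  # bracket on the right-hand side of Lemma 2.1
    # the ratio lhs/rhs stays bounded in n, as the lemma predicts
    print(n, round(lhs, 2), round(rhs, 2), round(lhs / rhs, 2))
```

This is only a sanity check of the inequality's shape, not a proof; the dominant term $(n/3)^{p/2}$ reflects the Gaussian fluctuation scale of the partial sums.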

3 Main results and their applications

In this section, we provide complete convergence for weighted sums of arrays of rowwise $\tilde{\rho}$-mixing random variables. As applications, the Baum and Katz type result and the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of $\tilde{\rho}$-mixing random variables are obtained. Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise $\tilde{\rho}$-mixing random variables. We assume that the mixing coefficients $\tilde{\rho}(\cdot)$ are the same in each row.

Theorem 3.1 Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise $\tilde{\rho}$-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_{ni} = 0$ for all $i \ge 1$, $n \ge 1$, and let $\beta$ be a real number. Let $\{a_{ni}, i \ge 1, n \ge 1\}$ be an array of constants such that

$\sup_{i \ge 1} |a_{ni}| = O(n^{-r}) \quad \text{for some } r > 0$
(3.1)

and

$\sum_{i=1}^{\infty} |a_{ni}| = O(n^{\alpha}) \quad \text{for some } \alpha \in [0, r).$
(3.2)

Assume further that $1 + \alpha + \beta > 0$ and that there exists some $\delta > 0$ such that $\alpha/r + 1 < \delta \le 2$; set $s = \max\big(1 + \frac{1+\alpha+\beta}{r}, \delta\big)$. If $E|X|^s < \infty$, then for all $\varepsilon > 0$,

$\sum_{n=1}^{\infty} n^{\beta} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} a_{ni} X_{ni}\Big| > \varepsilon\Big) < \infty.$
(3.3)

If $1 + \alpha + \beta < 0$ and $E|X| < \infty$, then (3.3) still holds for all $\varepsilon > 0$.

Proof Without loss of generality, we assume that $a_{ni} > 0$ for all $i \ge 1$ and $n \ge 1$ (otherwise, we use $a_{ni}^{+}$ and $a_{ni}^{-}$ instead of $a_{ni}$, respectively, and note that $a_{ni} = a_{ni}^{+} - a_{ni}^{-}$). In view of conditions (3.1) and (3.2), we may assume that

$\sup_{i \ge 1} a_{ni} = n^{-r}, \qquad \sum_{i=1}^{\infty} a_{ni} = n^{\alpha}, \quad n \ge 1.$
(3.4)

If $1 + \alpha + \beta < 0$ and $E|X| < \infty$, then the result follows easily from

$\sum_{n=1}^{\infty} n^{\beta} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} a_{ni} X_{ni}\Big| > \varepsilon\Big) \le C \sum_{n=1}^{\infty} n^{\beta} E\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} a_{ni} X_{ni}\Big|\Big) \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|a_{ni} X_{ni}| \le C \sum_{n=1}^{\infty} n^{\alpha + \beta} E|X| < \infty.$

In the following, we consider the case $1 + \alpha + \beta > 0$. Denote

$X_{ni}' = a_{ni} X_{ni} I(|a_{ni} X_{ni}| \le 1), \quad i \ge 1, n \ge 1.$

It is easy to check that for any $\varepsilon > 0$,

$\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} a_{ni} X_{ni}\Big| > \varepsilon\Big) \subset \Big(\max_{1 \le i \le n} |a_{ni} X_{ni}| > 1\Big) \cup \Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} X_{ni}'\Big| > \varepsilon\Big),$

which implies that

$P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} a_{ni} X_{ni}\Big| > \varepsilon\Big) \le \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) + P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} (X_{ni}' - E X_{ni}')\Big| > \varepsilon - \max_{1 \le j \le n} \Big|\sum_{i=1}^{j} E X_{ni}'\Big|\Big).$
(3.5)

Firstly, we show that

$\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} E X_{ni}'\Big| \to 0 \quad \text{as } n \to \infty.$
(3.6)

Actually, by the condition $EX_{ni} = 0$, Lemma 2.2, (3.4) and $E|X|^{1+\alpha/r} < \infty$ (which follows from $E|X|^s < \infty$, since $1 + \alpha/r < \delta \le s$), we have that

$\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} E X_{ni}'\Big| = \max_{1 \le j \le n} \Big|\sum_{i=1}^{j} E a_{ni} X_{ni} I(|a_{ni} X_{ni}| \le 1)\Big| = \max_{1 \le j \le n} \Big|\sum_{i=1}^{j} E a_{ni} X_{ni} I(|a_{ni} X_{ni}| > 1)\Big|$
$\le \sum_{i=1}^{n} E|a_{ni} X_{ni}|^{1+\alpha/r} I(|a_{ni} X_{ni}| > 1)$
$\le C \sum_{i=1}^{n} a_{ni}^{1+\alpha/r} E|X|^{1+\alpha/r} I\Big(|X| > \frac{1}{a_{ni}}\Big)$
$\le C \Big(\sup_{i \ge 1} a_{ni}\Big)^{\alpha/r} \sum_{i=1}^{\infty} a_{ni} \cdot E|X|^{1+\alpha/r} I(|X| > n^{r})$
$\le C (n^{-r})^{\alpha/r} n^{\alpha} E|X|^{1+\alpha/r} I(|X| > n^{r}) = C E|X|^{1+\alpha/r} I(|X| > n^{r}) \to 0 \quad \text{as } n \to \infty,$

which implies (3.6). It follows from (3.5) and (3.6) that, for $n$ large enough,

$P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} a_{ni} X_{ni}\Big| > \varepsilon\Big) \le \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) + P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} (X_{ni}' - E X_{ni}')\Big| > \frac{\varepsilon}{2}\Big).$

Hence, to prove (3.3), we only need to show that

$I \doteq \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) < \infty$
(3.7)

and

$J \doteq \sum_{n=1}^{\infty} n^{\beta} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} (X_{ni}' - E X_{ni}')\Big| > \frac{\varepsilon}{2}\Big) < \infty.$
(3.8)

By (3.4) and $E|X|^s < \infty$, we can get that

$\sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} P(|a_{ni} X| > 1)$
$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} a_{ni} E|X| I\Big(|X| > \frac{1}{a_{ni}}\Big)$
$\le C \sum_{n=1}^{\infty} n^{\alpha + \beta} E|X| I(|X| > n^{r})$
$\le C \sum_{n=1}^{\infty} n^{\alpha + \beta} \sum_{k=n}^{\infty} E|X| I\big(k^{r} \le |X| < (k+1)^{r}\big)$
$= C \sum_{k=1}^{\infty} \sum_{n=1}^{k} n^{\alpha + \beta} E|X| I\big(k^{r} \le |X| < (k+1)^{r}\big)$
$\le C \sum_{k=1}^{\infty} k^{1 + \alpha + \beta} E|X| I\big(k^{r} \le |X| < (k+1)^{r}\big)$
$\le C \sum_{k=1}^{\infty} E|X|^{1 + (1+\alpha+\beta)/r} I\big(k^{r} \le |X| < (k+1)^{r}\big) \le C E|X|^{1 + (1+\alpha+\beta)/r} < \infty,$

which implies (3.7).

By Markov’s inequality, Lemma 2.1, the $C_r$ inequality and Jensen’s inequality, we have for $M \ge 2$ that

$\sum_{n=1}^{\infty} n^{\beta} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} (X_{ni}' - E X_{ni}')\Big| > \frac{\varepsilon}{2}\Big) \le C \sum_{n=1}^{\infty} n^{\beta} E\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} (X_{ni}' - E X_{ni}')\Big|^{M}\Big)$
$\le C \sum_{n=1}^{\infty} n^{\beta} \Big[\Big(\sum_{i=1}^{n} E|X_{ni}'|^{2}\Big)^{M/2} + \sum_{i=1}^{n} E|X_{ni}'|^{M}\Big] \doteq J_1 + J_2.$
(3.9)

Take

$M > \max\Big(2, \ \frac{2(1+\beta)}{r[\delta - (1+\alpha/r)]}, \ 1 + \frac{1+\alpha+\beta}{r}\Big),$

which implies that $\beta - r[\delta - (1+\alpha/r)]M/2 < -1$ and $\alpha + \beta - r(M-1) < -1$. Since $E|X|^{\delta} < \infty$, we have by Lemma 2.2, Markov’s inequality and (3.4) that

$J_1 \le C \sum_{n=1}^{\infty} n^{\beta} \Big(\sum_{i=1}^{n} E|X_{ni}'|^{2}\Big)^{M/2} = C \sum_{n=1}^{\infty} n^{\beta} \Big[\sum_{i=1}^{n} E|a_{ni} X_{ni}|^{2} I(|a_{ni} X_{ni}| \le 1)\Big]^{M/2}$
$\le C \sum_{n=1}^{\infty} n^{\beta} \Big[\sum_{i=1}^{n} P(|a_{ni} X| > 1) + \sum_{i=1}^{n} E|a_{ni} X|^{2} I(|a_{ni} X| \le 1)\Big]^{M/2}$
$\le C \sum_{n=1}^{\infty} n^{\beta} \Big(\sum_{i=1}^{\infty} a_{ni}^{\delta} E|X|^{\delta}\Big)^{M/2} \quad (\text{since } \delta \le 2)$
$\le C \sum_{n=1}^{\infty} n^{\beta} \Big[\Big(\sup_{i \ge 1} a_{ni}\Big)^{\delta - 1} \sum_{i=1}^{\infty} a_{ni}\Big]^{M/2}$
$\le C \sum_{n=1}^{\infty} n^{\beta} \big[n^{-r(\delta - 1)} n^{\alpha}\big]^{M/2} = C \sum_{n=1}^{\infty} n^{\beta - r[\delta - (1+\alpha/r)]M/2} < \infty.$
(3.10)

By Lemma 2.2 again, we can see that

$J_2 \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|X_{ni}'|^{M} = C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} E|a_{ni} X_{ni}|^{M} I(|a_{ni} X_{ni}| \le 1)$
$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{n} P(|a_{ni} X| > 1) + C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} E|a_{ni} X|^{M} I(|a_{ni} X| \le 1) \doteq J_3 + J_4.$
(3.11)

$J_3 < \infty$ has been proved in (3.7). In the following, we show that $J_4 < \infty$. Denote

$I_{nj} = \Big\{ i : (nj)^{r} \le \frac{1}{a_{ni}} < [n(j+1)]^{r} \Big\}, \quad n \ge 1, j \ge 1.$
(3.12)

It is easily seen that $I_{nk} \cap I_{nj} = \emptyset$ for $k \ne j$ and $\bigcup_{j=1}^{\infty} I_{nj} = \mathbb{N}$ for all $n \ge 1$. Hence,

$J_4 = C \sum_{n=1}^{\infty} n^{\beta} \sum_{i=1}^{\infty} E|a_{ni} X|^{M} I(|a_{ni} X| \le 1) = C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} \sum_{i \in I_{nj}} E|a_{ni} X|^{M} I(|a_{ni} X| \le 1)$
$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\sharp I_{nj}) (nj)^{-rM} E|X|^{M} I\big(|X| \le [n(j+1)]^{r}\big)$
$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\sharp I_{nj}) (nj)^{-rM} \sum_{k=0}^{n(j+1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$= C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\sharp I_{nj}) (nj)^{-rM} \sum_{k=0}^{2n} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\quad + C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\sharp I_{nj}) (nj)^{-rM} \sum_{k=2n+1}^{n(j+1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big) \doteq J_5 + J_6.$
(3.13)

It is easily seen that, for all $m \ge 1$,

$n^{\alpha} = \sum_{i=1}^{\infty} a_{ni} = \sum_{j=1}^{\infty} \sum_{i \in I_{nj}} a_{ni} \ge \sum_{j=1}^{\infty} (\sharp I_{nj}) [n(j+1)]^{-r} \ge \sum_{j=m}^{\infty} (\sharp I_{nj}) [n(j+1)]^{-r}$
$\ge \sum_{j=m}^{\infty} (\sharp I_{nj}) [n(j+1)]^{-r} \Big[\frac{n(m+1)}{n(j+1)}\Big]^{r(M-1)} = \sum_{j=m}^{\infty} (\sharp I_{nj}) [n(j+1)]^{-rM} [n(m+1)]^{r(M-1)},$

which implies that, for all $m \ge 1$,

$\sum_{j=m}^{\infty} (\sharp I_{nj}) (nj)^{-rM} \le C n^{\alpha} n^{-r(M-1)} m^{-r(M-1)} = C n^{\alpha - r(M-1)} m^{-r(M-1)}.$
(3.14)

Therefore,

$J_5 \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\sharp I_{nj}) (nj)^{-rM} \sum_{k=0}^{2n} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C \sum_{n=1}^{\infty} n^{\beta} n^{\alpha - r(M-1)} \sum_{k=0}^{2n} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C \sum_{k=0}^{2} \sum_{n=1}^{\infty} n^{\alpha + \beta - r(M-1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big) + C \sum_{k=2}^{\infty} \sum_{n=\lfloor k/2 \rfloor}^{\infty} n^{\alpha + \beta - r(M-1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C + C \sum_{k=2}^{\infty} k^{1 + \alpha + \beta - r(M-1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C + C \sum_{k=2}^{\infty} E|X|^{M + \frac{1+\alpha+\beta}{r} - (M-1)} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C + C E|X|^{1 + \frac{1+\alpha+\beta}{r}} < \infty \quad (\text{since } E|X|^{s} < \infty)$
(3.15)

and

$J_6 \le C \sum_{n=1}^{\infty} n^{\beta} \sum_{j=1}^{\infty} (\sharp I_{nj}) (nj)^{-rM} \sum_{k=2n+1}^{n(j+1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{k=2n+1}^{\infty} \sum_{j \ge k/n - 1} (\sharp I_{nj}) (nj)^{-rM} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C \sum_{n=1}^{\infty} n^{\beta} \sum_{k=2n+1}^{\infty} n^{\alpha - r(M-1)} \Big(\frac{k}{n}\Big)^{-r(M-1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C \sum_{k=2}^{\infty} \sum_{n=1}^{\lfloor k/2 \rfloor} n^{\alpha + \beta} k^{-r(M-1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C \sum_{k=2}^{\infty} k^{1 + \alpha + \beta - r(M-1)} E|X|^{M} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C \sum_{k=2}^{\infty} E|X|^{M + \frac{1+\alpha+\beta}{r} - (M-1)} I\big(k \le |X|^{1/r} < k+1\big) \le C E|X|^{1 + \frac{1+\alpha+\beta}{r}} < \infty \quad (\text{since } E|X|^{s} < \infty).$
(3.16)

Thus, the inequality (3.8) follows from (3.9)-(3.11), (3.13), (3.15) and (3.16). This completes the proof of the theorem. □
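The index blocks $I_{nj}$ of (3.12) group the indices by the size of $1/a_{ni}$; the following small sketch (with hypothetical weights of our choosing and $r = 1$, so that the boundaries are integers) verifies the partition property used in the estimate of $J_4$:

```python
# Index blocks from (3.12) with r = 1 and hypothetical weights:
#   I_nj = { i : n*j <= 1/a_ni < n*(j+1) },  j >= 1.
# Since 1/a_ni >= n for every i here (i.e. a_ni <= n^{-1}), every index
# falls into exactly one block: the I_nj partition the index set.

n = 6
inv = {i: n * i + 3 for i in range(1, 101)}  # values of 1/a_ni, all >= n

def block(j):
    return {i for i, v in inv.items() if n * j <= v < n * (j + 1)}

blocks = [block(j) for j in range(1, 102)]
covered = set().union(*blocks)

# disjoint blocks whose union is the whole index set: a partition
assert sum(len(b) for b in blocks) == len(covered) == len(inv)
print(len(covered), block(1), block(2))  # → 100 {1} {2}
```

In the proof, the point of the partition is that within $I_{nj}$ the weight $a_{ni}$ is comparable to $(nj)^{-r}$, which converts the sum over $i$ into a sum over $j$ weighted by the block sizes $\sharp I_{nj}$, controlled via (3.14).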

Theorem 3.2 Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise $\tilde{\rho}$-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_{ni} = 0$ for all $i \ge 1$, $n \ge 1$. Let $\{a_{ni}, i \ge 1, n \ge 1\}$ be an array of constants such that (3.1) holds and

$\sum_{i=1}^{\infty} |a_{ni}| = O(1).$
(3.17)

If $E|X|\log|X| < \infty$, then for all $\varepsilon > 0$,

$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} a_{ni} X_{ni}\Big| > \varepsilon\Big) < \infty.$
(3.18)

Proof We use the same notation as in the proof of Theorem 3.1. Following that proof, we only need to show that (3.7) and (3.8) hold with $\beta = -1$ and $\alpha = 0$.

The fact that $E|X|\log|X| < \infty$ yields

$I \doteq \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} P(|a_{ni} X_{ni}| > 1) \le C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} P(|a_{ni} X| > 1)$
$\le C \sum_{k=1}^{\infty} \sum_{n=1}^{k} n^{-1} E|X| I\big(k^{r} \le |X| < (k+1)^{r}\big)$
$\le C \sum_{k=1}^{\infty} \log k \, E|X| I\big(k^{r} \le |X| < (k+1)^{r}\big)$
$\le C \sum_{k=1}^{\infty} E|X| \log|X| \, I\big(k^{r} \le |X| < (k+1)^{r}\big) \le C E|X| \log|X| < \infty,$

which implies (3.7) with $\beta = -1$.

By Markov’s inequality and Lemmas 2.1 and 2.2, we can get that

$J \doteq \sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} (X_{ni}' - E X_{ni}')\Big| > \frac{\varepsilon}{2}\Big) \le C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} E|X_{ni}'|^{2}$
$= C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} E|a_{ni} X_{ni}|^{2} I(|a_{ni} X_{ni}| \le 1)$
$\le C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} P(|a_{ni} X| > 1) + C \sum_{n=1}^{\infty} n^{-1} \sum_{i=1}^{n} E|a_{ni} X|^{2} I(|a_{ni} X| \le 1)$
$\le C + J_5' + J_6'.$
(3.19)

Here, $J_5'$ and $J_6'$ denote $J_5$ and $J_6$ with $M = 2$ (and $\beta = -1$, $\alpha = 0$), respectively. Similarly to the proof for $J_5$, we can get that

$J_5' \le C + C E|X| < \infty.$
(3.20)

Similarly to the proof for $J_6$, we have

$J_6' \le C \sum_{k=2}^{\infty} \sum_{n=1}^{\lfloor k/2 \rfloor} n^{-1} k^{-r} E|X|^{2} I\big(k \le |X|^{1/r} < k+1\big)$
$\le C \sum_{k=2}^{\infty} \log k \cdot k^{-r} \cdot k^{r} E|X| I\big(k \le |X|^{1/r} < k+1\big) \le C E|X| \log|X| < \infty.$
(3.21)

Combining the statements above completes the proof of the theorem. □

By Theorems 3.1 and 3.2, we can extend the results of Baum and Katz [4] for independent and identically distributed random variables to the case of arrays of rowwise $\tilde{\rho}$-mixing random variables as follows.

Corollary 3.1 Let $\{X_{ni}, i \ge 1, n \ge 1\}$ be an array of rowwise $\tilde{\rho}$-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_{ni} = 0$ for all $i \ge 1$, $n \ge 1$.

(i) Let $p > 1$ and $1 \le t < 2$. If $E|X|^{pt} < \infty$, then for all $\varepsilon > 0$,

$\sum_{n=1}^{\infty} n^{p-2} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} X_{ni}\Big| > \varepsilon n^{1/t}\Big) < \infty.$
(3.22)

(ii) If $E|X|\log|X| < \infty$, then for all $\varepsilon > 0$,

$\sum_{n=1}^{\infty} \frac{1}{n} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} X_{ni}\Big| > \varepsilon n\Big) < \infty.$
(3.23)

Proof (i) Let $a_{ni} = n^{-1/t}$ if $i \le n$ and $a_{ni} = 0$ if $i > n$. Then conditions (3.1) and (3.2) hold with $r = 1/t$ and $\alpha = 1 - 1/t < r$, and we take $\beta = p - 2 > -1$. It is easy to check that

$1 + \alpha + \beta = p - \frac{1}{t} > 0, \qquad 1 + \frac{1+\alpha+\beta}{r} = pt \doteq s, \qquad \frac{\alpha}{r} + 1 = t < pt = s.$

Therefore, the desired result (3.22) follows from Theorem 3.1 immediately.

(ii) Let $a_{ni} = n^{-1}$ if $i \le n$ and $a_{ni} = 0$ if $i > n$. Then conditions (3.1) and (3.17) hold with $r = 1$. Therefore, the desired result (3.23) follows from Theorem 3.2 immediately. This completes the proof of the corollary. □
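The parameter bookkeeping in part (i) can be checked mechanically; the following sketch (ours) verifies, for a couple of sample values of $p$ and $t$, that $r = 1/t$, $\alpha = 1 - 1/t$, $\beta = p - 2$ and any $\delta \in (t, \min(2, pt)]$ satisfy the hypotheses of Theorem 3.1 and yield the moment exponent $s = pt$:

```python
import math

# Bookkeeping check (ours) for the proof of Corollary 3.1(i): with the
# weights a_ni = n^{-1/t} (i <= n), Theorem 3.1 applies with r = 1/t,
# alpha = 1 - 1/t, beta = p - 2, delta in (t, min(2, p*t)], giving s = p*t.

def check(p, t):
    assert p > 1 and 1 <= t < 2
    r, alpha, beta = 1 / t, 1 - 1 / t, p - 2
    assert 0 <= alpha < r                                 # condition (3.2)
    assert math.isclose(1 + alpha + beta, p - 1 / t) and 1 + alpha + beta > 0
    delta = min(2.0, p * t)                               # admissible choice
    assert alpha / r + 1 < delta <= 2                     # alpha/r + 1 = t
    s = max(1 + (1 + alpha + beta) / r, delta)
    assert math.isclose(s, p * t)                         # E|X|^{pt} suffices
    return s

print(check(2.0, 1.0), check(1.5, 1.2))
```

Note that $\delta$ must be taken no larger than $pt$; otherwise $s = \max(1 + \frac{1+\alpha+\beta}{r}, \delta)$ would exceed $pt$ and the corollary would demand a stronger moment than stated.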

Similarly to the proofs of Theorems 3.1-3.2 and Corollary 3.1, we can get the following Baum and Katz type result for sequences of $\tilde{\rho}$-mixing random variables.

Theorem 3.3 Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde{\rho}$-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_n = 0$ for all $n \ge 1$.

(i) Let $p > 1$ and $1 \le t < 2$. If $E|X|^{pt} < \infty$, then for all $\varepsilon > 0$,

$\sum_{n=1}^{\infty} n^{p-2} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} X_{i}\Big| > \varepsilon n^{1/t}\Big) < \infty.$
(3.24)

(ii) If $E|X|\log|X| < \infty$, then for all $\varepsilon > 0$,

$\sum_{n=1}^{\infty} \frac{1}{n} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} X_{i}\Big| > \varepsilon n\Big) < \infty.$
(3.25)

By Theorem 3.3, we can get the Marcinkiewicz-Zygmund type strong law of large numbers for $\tilde{\rho}$-mixing random variables as follows.

Corollary 3.2 Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde{\rho}$-mixing random variables which is stochastically dominated by a random variable $X$, with $EX_n = 0$ for all $n \ge 1$.

(i) Let $p > 1$ and $1 \le t < 2$. If $E|X|^{pt} < \infty$, then

$n^{-1/t} \sum_{i=1}^{n} X_i \to 0 \quad \text{a.s., } n \to \infty.$
(3.26)

(ii) If $E|X|\log|X| < \infty$, then

$\frac{1}{n} \sum_{i=1}^{n} X_i \to 0 \quad \text{a.s., } n \to \infty.$
(3.27)

Proof (i) By (3.24), we can get that for all $\varepsilon > 0$,

$\infty > \sum_{n=1}^{\infty} n^{p-2} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} X_i\Big| > \varepsilon n^{1/t}\Big) = \sum_{k=0}^{\infty} \sum_{n=2^{k}}^{2^{k+1}-1} n^{p-2} P\Big(\max_{1 \le j \le n} \Big|\sum_{i=1}^{j} X_i\Big| > \varepsilon n^{1/t}\Big)$
$\ge \begin{cases} \sum_{k=0}^{\infty} (2^{k})^{p-2} \, 2^{k} P\big(\max_{1 \le j \le 2^{k}} \big|\sum_{i=1}^{j} X_i\big| > \varepsilon 2^{(k+1)/t}\big) & \text{if } p \ge 2, \\ \sum_{k=0}^{\infty} (2^{k+1})^{p-2} \, 2^{k} P\big(\max_{1 \le j \le 2^{k}} \big|\sum_{i=1}^{j} X_i\big| > \varepsilon 2^{(k+1)/t}\big) & \text{if } 1 < p < 2, \end{cases}$
$\ge \begin{cases} \sum_{k=0}^{\infty} P\big(\max_{1 \le j \le 2^{k}} \big|\sum_{i=1}^{j} X_i\big| > \varepsilon 2^{(k+1)/t}\big) & \text{if } p \ge 2, \\ \frac{1}{2} \sum_{k=0}^{\infty} P\big(\max_{1 \le j \le 2^{k}} \big|\sum_{i=1}^{j} X_i\big| > \varepsilon 2^{(k+1)/t}\big) & \text{if } 1 < p < 2. \end{cases}$

By the Borel-Cantelli lemma, we obtain

$\frac{\max_{1 \le j \le 2^{k}} \big|\sum_{i=1}^{j} X_i\big|}{2^{(k+1)/t}} \to 0 \quad \text{a.s., } k \to \infty.$
(3.28)

For every positive integer $n$, there exists a positive integer $k_0$ such that $2^{k_0 - 1} \le n < 2^{k_0}$. We have by (3.28) that

$\frac{\big|\sum_{i=1}^{n} X_i\big|}{n^{1/t}} \le \max_{2^{k_0 - 1} \le n < 2^{k_0}} \frac{\big|\sum_{i=1}^{n} X_i\big|}{n^{1/t}} \le 2^{2/t} \cdot \frac{\max_{1 \le j \le 2^{k_0}} \big|\sum_{i=1}^{j} X_i\big|}{2^{(k_0 + 1)/t}} \to 0 \quad \text{a.s., } k_0 \to \infty,$

which implies (3.26).

(ii) Similarly to the proof of (i), we can get (ii) immediately; the details are omitted. This completes the proof of the corollary. □
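The conclusion (3.26) can be observed numerically in the i.i.d. special case; the following Monte Carlo sketch is ours and only illustrates the rate, with parameter values of our choosing:

```python
import random

# Illustration (ours, i.i.d. special case) of the Marcinkiewicz-Zygmund
# law (3.26): for mean-zero uniform summands (all moments finite) and
# t = 1.5, averages over repetitions of |S_n| / n^{1/t} shrink with n.

random.seed(1)
t = 1.5

def avg_normalized(n, reps=20):
    total = 0.0
    for _ in range(reps):
        s = sum(random.uniform(-1.0, 1.0) for _ in range(n))
        total += abs(s) / n ** (1 / t)
    return total / reps

vals = [avg_normalized(n) for n in (100, 10_000, 100_000)]
print(vals)  # decreasing toward 0, at rate about n^{1/2 - 1/t}
```

Since the partial sums fluctuate on the scale $n^{1/2}$, the normalized quantity decays like $n^{1/2 - 1/t}$, which vanishes precisely because $t < 2$.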

Remark 3.1 We point out that the cases $1 + \alpha + \beta > 0$ and $1 + \alpha + \beta < 0$ are considered in Theorem 3.1, while the case $1 + \alpha + \beta = 0$ is considered in Theorem 3.2. Theorems 3.1 and 3.2 treat complete convergence for weighted sums of arrays of rowwise $\tilde{\rho}$-mixing random variables, while Theorem 3.3 treats complete convergence for partial sums of sequences of $\tilde{\rho}$-mixing random variables. In addition, Theorems 3.1 and 3.2 can be applied to obtain the Baum and Katz type result for arrays of rowwise $\tilde{\rho}$-mixing random variables, while Theorem 3.3 can be applied to establish the Marcinkiewicz-Zygmund type strong law of large numbers for sequences of $\tilde{\rho}$-mixing random variables.

References

1. Hsu PL, Robbins H: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33(2):25–31. 10.1073/pnas.33.2.25
2. Erdös P: On a theorem of Hsu and Robbins. Ann. Math. Stat. 1949, 20(2):286–291. 10.1214/aoms/1177730037
3. Spitzer FL: A combinatorial lemma and its application to probability theory. Trans. Am. Math. Soc. 1956, 82(2):323–339. 10.1090/S0002-9947-1956-0079851-X
4. Baum LE, Katz M: Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 1965, 120(1):108–123. 10.1090/S0002-9947-1965-0198524-1
5. Gut A: Complete convergence for arrays. Period. Math. Hung. 1992, 25(1):51–75. 10.1007/BF02454383
6. Zarei H, Jabbari H: Complete convergence of weighted sums under negative dependence. Stat. Pap. 2011, 52(2):413–418. 10.1007/s00362-009-0238-4
7. Bradley RC: On the spectral density and asymptotic normality of weakly dependent random fields. J. Theor. Probab. 1992, 5:355–374. 10.1007/BF01046741
8. Bryc W, Smolenski W: Moment conditions for almost sure convergence of weakly correlated random variables. Proc. Am. Math. Soc. 1993, 119(2):629–635. 10.1090/S0002-9939-1993-1149969-7
9. Peligrad M: On the asymptotic normality of sequences of weak dependent random variables. J. Theor. Probab. 1996, 9(3):703–715. 10.1007/BF02214083
10. Peligrad M: Maximum of partial sums and an invariance principle for a class of weak dependent random variables. Proc. Am. Math. Soc. 1998, 126(4):1181–1189. 10.1090/S0002-9939-98-04177-X
11. Peligrad M, Gut A: Almost sure results for a class of dependent random variables. J. Theor. Probab. 1999, 12:87–104. 10.1023/A:1021744626773
12. Utev S, Peligrad M: Maximal inequalities and an invariance principle for a class of weakly dependent random variables. J. Theor. Probab. 2003, 16(1):101–115. 10.1023/A:1022278404634
13. Gan SX: Almost sure convergence for $\tilde{\rho}$-mixing random variable sequences. Stat. Probab. Lett. 2004, 67:289–298. 10.1016/j.spl.2003.12.011
14. Cai GH: Strong law of large numbers for $\tilde{\rho}$-mixing sequences with different distributions. Discrete Dyn. Nat. Soc. 2006, 2006: Article ID 27648.
15. Zhu MH: Strong laws of large numbers for arrays of rowwise $\tilde{\rho}$-mixing random variables. Discrete Dyn. Nat. Soc. 2007, 2007: Article ID 74296.
16. Wu QY, Jiang YY: Some strong limit theorems for $\tilde{\rho}$-mixing sequences of random variables. Stat. Probab. Lett. 2008, 78(8):1017–1023. 10.1016/j.spl.2007.09.061
17. Wu QY, Jiang YY: Strong limit theorems for weighted product sums of $\tilde{\rho}$-mixing sequences of random variables. J. Inequal. Appl. 2009, 2009: Article ID 174768.
18. An J, Yuan DM: Complete convergence of weighted sums for $\tilde{\rho}$-mixing sequence of random variables. Stat. Probab. Lett. 2008, 78(12):1466–1472. 10.1016/j.spl.2007.12.020
19. Kuczmaszewska A: On Chung-Teicher type strong law of large numbers for $\tilde{\rho}$-mixing random variables. Discrete Dyn. Nat. Soc. 2008, 2008: Article ID 140548.
20. Sung SH: Complete convergence for weighted sums of $\tilde{\rho}$-mixing random variables. Discrete Dyn. Nat. Soc. 2010, 2010: Article ID 630608.
21. Wang XJ, Hu SH, Shen Y, Yang WZ: Some new results for weakly dependent random variable sequences. Chinese J. Appl. Probab. Statist. 2010, 26(6):637–648.
22. Wang XJ, Xia FX, Ge MM, Hu SH, Yang WZ: Complete consistency of the estimator of nonparametric regression models based on $\tilde{\rho}$-mixing sequences. Abstr. Appl. Anal. 2012, 2012: Article ID 907286.
23. Wang XJ, Li XQ, Yang WZ, Hu SH: On complete convergence for arrays of rowwise weakly dependent random variables. Appl. Math. Lett. 2012, 25:1916–1920. 10.1016/j.aml.2012.02.069
24. Wu QY: Sufficient and necessary conditions of complete convergence for weighted sums of PNQD random variables. J. Appl. Math. 2012, 2012: Article ID 104390.
25. Wu QY: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables. J. Inequal. Appl. 2012, 2012: Article ID 50. 10.1186/1029-242X-2012-50
26. Wang XJ, Hu SH, Yang WZ: Convergence properties for asymptotically almost negatively associated sequence. Discrete Dyn. Nat. Soc. 2010, 2010: Article ID 218380.
27. Wang XJ, Hu SH, Yang WZ: Complete convergence for arrays of rowwise asymptotically almost negatively associated random variables. Discrete Dyn. Nat. Soc. 2011, 2011: Article ID 717126.
28. Baek JI, Choi IB, Niu SL: On the complete convergence of weighted sums for arrays of negatively associated variables. J. Korean Stat. Soc. 2008, 37:73–80. 10.1016/j.jkss.2007.08.001
29. Wu QY: Probability Limit Theory for Mixing Sequences. Science Press of China, Beijing; 2006.
30. Wang XJ, Hu SH, Yang WZ, Wang XH: On complete convergence of weighted sums for arrays of rowwise asymptotically almost negatively associated random variables. Abstr. Appl. Anal. 2012, 2012: Article ID 315138.


Acknowledgements

The authors are most grateful to the editor Jewgeni Dshalalow and anonymous referees for careful reading of the manuscript and valuable suggestions which helped in improving an earlier version of this paper. This work was supported by the National Natural Science Foundation of China (11201001, 11171001), the Natural Science Foundation of Anhui Province (1308085QA03, 11040606M12, 1208085QA03), the 211 project of Anhui University, the Youth Science Research Fund of Anhui University, Applied Teaching Model Curriculum of Anhui University (XJYYXKC04) and the Students Science Research Training Program of Anhui University (KYXL2012007, kyxl2013003).

Author information


Correspondence to Yan Shen.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Shen, A., Wu, R., Wang, X. et al. Complete convergence for weighted sums of arrays of rowwise $\tilde{\rho}$-mixing random variables. J. Inequal. Appl. 2013, 356 (2013). https://doi.org/10.1186/1029-242X-2013-356
